DPDK-CH3


Chapter 3 ENVIRONMENT ABSTRACTION LAYER

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space. It provides a generic interface that hides the environment specifics from the applications and libraries. It is the responsibility of the initialization routine to decide how to allocate these resources (that is, memory space, PCI devices, timers, consoles, and so on).

Typical services expected from the EAL are:

  • DPDK Loading and Launching: The DPDK and its applications are linked as a single application and must be loaded by some means.
  • Core Affinity/Assignment Procedures: The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances on them.
  • System Memory Reservation: The EAL facilitates the reservation of different memory zones, for example, physical memory areas for device interactions.
  • PCI Address Abstraction: The EAL provides an interface to access PCI address space.
  • Trace and Debug Functions: Logs, dump_stack, panic and so on.
  • Utility Functions: Spinlocks and atomic counters that are not provided in libc.
  • CPU Feature Identification: Determine at runtime if a particular hardware feature, for example, Intel AVX, is supported. Determine if the current CPU supports the feature set that the binary was compiled for.
  • Interrupt Handling: Interfaces to register/unregister callbacks for specific interrupt sources.
  • Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

EAL in a Linux-userland Execution Environment

In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library. PCI information about devices and address space is discovered through the /sys kernel interface and through kernel modules such as uio_pci_generic or igb_uio. Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory is mmap'd in the application.

The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance). This memory is exposed to DPDK service layers such as the Mempool Library.
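As an illustration of the mechanism described above (not the actual EAL implementation), a hugepage-backed reservation boils down to an mmap() call with MAP_HUGETLB; the fallback path and the function name below are assumptions for the sketch:

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>

/* Illustrative sketch: reserve a buffer the way the EAL backs its memory,
 * preferring hugepages and falling back to normal 4 KB pages when the
 * system has no hugepages configured. Name is hypothetical. */
void *reserve_backed_memory(size_t len)
{
    int flags = MAP_PRIVATE | MAP_ANONYMOUS;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   flags | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)        /* no hugepages available: fall back */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
```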

At this point, the DPDK services layer will be initialized; then, through pthread setaffinity calls, each execution unit will be assigned to a specific logical core (lcore) to run as a user-level thread.

The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.

Initialization and Core Launching

Part of the initialization is done by the start function of glibc. A check is also performed at initialization time to ensure that the micro-architecture type chosen in the config file is supported by the CPU. Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation). It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
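Under the hood, launching a worker on a specific core amounts to the pthread calls named above. The following self-contained sketch mimics what rte_eal_remote_launch() achieves; the function names are hypothetical stand-ins, not DPDK API:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical lcore worker: in DPDK this would be the function passed
 * to rte_eal_remote_launch(). */
static void *lcore_main(void *arg)
{
    int lcore_id = *(int *)arg;
    printf("worker running as lcore %d\n", lcore_id);
    return NULL;
}

/* Launch one worker pinned to `cpu`, as the EAL does per lcore.
 * Returns 0 on success. */
int launch_on_core(int cpu)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t cpuset;
    int id = cpu;

    CPU_ZERO(&cpuset);
    CPU_SET(cpu, &cpuset);

    pthread_attr_init(&attr);
    /* Pin the worker before it starts, so it never runs off-core. */
    if (pthread_attr_setaffinity_np(&attr, sizeof(cpuset), &cpuset) != 0)
        return -1;
    if (pthread_create(&tid, &attr, lcore_main, &id) != 0)
        return -1;
    pthread_attr_destroy(&attr);
    return pthread_join(tid, NULL);
}

/* Convenience wrapper: pin to whatever CPU the caller is on. */
int launch_on_current_core(void)
{
    return launch_on_core(sched_getcpu());
}
```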

![Figure 2](./programming_guid/2015-05-29 19:36:22屏幕截图.png)

*Figure 2. EAL Initialization in a Linux Application Environment*


Note: Initialization of objects, such as memory zones, rings, memory pools, LPM tables and hash tables, should be done as part of the overall application initialization on the master lcore. The creation and initialization functions for these objects are not multi-thread safe. However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.

Multi-process Support

The Linuxapp EAL allows a multi-process as well as a multi-threaded (pthread) deployment model. See Chapter 2.20 Multi-process Support for more details.

Memory Mapping Discovery and Memory Reservation

The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesystem. The EAL provides an API to reserve named memory zones in this contiguous memory. The physical address of the reserved memory for that memory zone is also returned to the user by the memory zone reservation API.


Note : Memory reservations done using the APIs provided by the rte_malloc library are also backed by pages from the hugetlbfs filesystem. However, physical address information is not available for the blocks of memory allocated in this way.

Xen Dom0 Support Without Hugetlbfs

The existing memory management implementation is based on the Linux kernel hugepage mechanism. However, Xen Dom0 does not support hugepages, so a new Linux kernel module rte_dom0_mm is added to work around this limitation.

The EAL uses an IOCTL interface to notify the Linux kernel module rte_dom0_mm to allocate memory of the specified size and to get all memory segment information from the module, and it uses an MMAP interface to map the allocated memory. For each memory segment, the physical addresses within it are contiguous, but the actual hardware addresses are contiguous only within each 2 MB chunk.

PCI Access

The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI bus. To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX device file and resource files in /sys that can be mmap'd to obtain access to PCI address space from the application. The DPDK-specific igb_uio module can also be used for this. Both drivers use the uio kernel feature (userland driver).

Per-lcore and Shared Variables

Note: lcore refers to a logical execution unit of the processor, sometimes called a hardware thread.

Shared variables are the default behavior. Per-lcore variables are implemented using Thread Local Storage (TLS) to provide per-thread local storage.

Logs

A logging API is provided by the EAL. By default, in a Linux application, logs are sent to syslog and also to the console. However, the log function can be overridden by the user to use a different logging mechanism.

Trace and Debug Functions

There are some debug functions to dump the stack in glibc. The rte_panic() function can voluntarily provoke a SIGABRT, which can trigger the generation of a core file, readable by gdb.

CPU Feature Identification

The EAL can query the CPU at runtime (using the rte_cpu_get_feature() function) to determine
which CPU features are available.
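As an illustration of the kind of probing rte_cpu_get_feature() performs, the GCC/Clang builtin below queries the running x86 CPU directly; the wrapper name is hypothetical and this is not the DPDK API itself:

```c
/* Runtime CPU feature probing, roughly what the EAL's feature
 * identification wraps. __builtin_cpu_supports() is a GCC/Clang
 * builtin available on x86 targets. Returns 1 if AVX is available. */
int cpu_has_avx(void)
{
    __builtin_cpu_init();   /* populate the feature cache */
    return __builtin_cpu_supports("avx") ? 1 : 0;
}
```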

User Space Interrupt and Alarm Handling

The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts. Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event and are called in the host thread asynchronously. The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.

Note: The only interrupts supported by the DPDK Poll-Mode Drivers are those for link status change, i.e. link up and link down notification.

Blacklisting

The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted, so they are ignored by the DPDK. The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).

Misc Functions

Locks and atomic operations are per-architecture (i686 and x86_64).

3.2 Memory Segments and Memory Zones (memzone)

The mapping of physical memory is provided by this feature in the EAL. As physical memory can have gaps, the memory is described in a table of descriptors, and each descriptor (called rte_memseg ) describes a contiguous portion of memory.

On top of this, the memzone allocator’s role is to reserve contiguous portions of physical memory. These zones are identified by a unique name when the memory is reserved.

The rte_memzone descriptors are also located in the configuration structure. This structure is accessed using rte_eal_get_configuration(). The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.

Memory zones can be reserved with specific start address alignment by supplying the align parameter (by default, they are aligned to cache line size). The alignment value should be a power of two and not less than the cache line size (64 bytes). Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.
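The alignment rules above (a power of two, no smaller than a cache line) and the resulting round-up of a start address can be checked with a few lines of integer arithmetic; the helper names are illustrative, not DPDK's:

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64  /* common x86 cache line size, as in the text */

/* Valid memzone alignment: a power of two, at least one cache line. */
int align_is_valid(size_t align)
{
    return align >= CACHE_LINE_SIZE && (align & (align - 1)) == 0;
}

/* Round `addr` up to the next multiple of `align` (align is a power of two),
 * the usual way a reserved zone's start address gets aligned. */
uintptr_t align_up(uintptr_t addr, size_t align)
{
    return (addr + align - 1) & ~((uintptr_t)align - 1);
}
```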

Multiple pthread

DPDK usually pins one pthread per core to avoid the overhead of task switching. This allows for significant performance gains, but lacks flexibility and is not always efficient.

Power management helps to improve CPU efficiency by limiting the CPU runtime frequency. However, an alternative is to utilize the idle cycles available to take advantage of the full capability of the CPU.

By taking advantage of cgroup, the CPU utilization quota can be simply assigned. This gives another way to improve CPU efficiency; however, there is a prerequisite: DPDK must handle the context switching between multiple pthreads per core.

For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity

The term “lcore” refers to an EAL thread, which is really a Linux/FreeBSD pthread. “EAL pthreads” are created and managed by the EAL and execute the tasks issued by remote launch. In each EAL pthread, there is a TLS (Thread Local Storage) variable called _lcore_id for unique identification. As EAL pthreads usually bind 1:1 to a physical CPU, the _lcore_id is typically equal to the CPU ID.

When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specific physical CPU. The EAL pthread may have affinity to a CPU set, and as such the _lcore_id will not always be the same as the CPU ID. For this reason, there is an EAL long option ‘--lcores’ defined to assign the CPU affinity of lcores. For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.

The format pattern: --lcores=’&lt;lcore_set&gt;[@cpu_set][,&lt;lcore_set&gt;[@cpu_set],…]’

‘lcore_set’ and ‘cpu_set’ can be a single number, a range, or a group.

A number is a “digit([0-9]+)”; a range is “&lt;number&gt;-&lt;number&gt;”; a group is “(&lt;number|range&gt;[,&lt;number|range&gt;,…])”.
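A hypothetical invocation following that grammar might look like this (the application name and CPU numbers are made up for illustration):

```shell
# lcore 0 pinned to cpu 8, lcore 1 pinned to cpu 9,
# and lcores 2-3 sharing affinity to the cpu set {10,11}
./dpdk_app --lcores='0@8,1@9,(2-3)@(10-11)'
```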

non-EAL pthread support

It is possible to use the DPDK execution context with any user pthread (non-EAL pthreads). In a non-EAL pthread, the _lcore_id is always LCORE_ID_ANY, which identifies it as not being an EAL thread with a valid, unique _lcore_id. Some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. timer and mempool libraries).

All these impacts are mentioned in Known Issues section.

Public Thread API

There are two public APIs, rte_thread_set_affinity() and rte_thread_get_affinity(), introduced for threads. When they are used in any pthread context, the Thread Local Storage (TLS) will be set/get.

Those TLS include _cpuset and _socket_id:

  • _cpuset stores the CPUs bitmap to which the pthread is affinitized.
  • _socket_id stores the NUMA node of the CPU set. If the CPUs in the CPU set belong to
    different NUMA nodes, the _socket_id will be set to SOCKET_ID_ANY.
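What the affinity getter reads back is the calling thread's CPU bitmap; a self-contained sketch using plain pthread calls (not the DPDK API itself, and the function name is illustrative):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Sketch of the information behind _cpuset: the CPU bitmap the calling
 * thread is affinitized to. Returns the number of CPUs in that set,
 * or -1 on error. */
int count_affinity_cpus(void)
{
    cpu_set_t set;
    if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return -1;
    return CPU_COUNT(&set);
}
```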

Known Issues

  • rte_mempool
    The rte_mempool uses a per-lcore cache inside the mempool. For non-EAL pthreads, rte_lcore_id() will not return a valid number. So for now, when rte_mempool is used with a non-EAL pthread, the put/get operations will bypass the mempool cache and there is a performance penalty because of this bypass. Support for a non-EAL mempool cache is currently being enabled.

  • rte_ring
    rte_ring supports multi-producer enqueue and multi-consumer dequeue. However, it is non-preemptive; this has the knock-on effect of making rte_mempool non-preemptable.
    Note: The “non-preemptive” constraint means:

    • a pthread doing multi-producer enqueues on a given ring must not be preempted by another pthread doing a multi-producer enqueue on the same ring.
    • a pthread doing multi-consumer dequeues on a given ring must not be preempted by another pthread doing a multi-consumer dequeue on the same ring.

    Bypassing this constraint may cause the second pthread to spin until the first one is scheduled again. Moreover, if the first pthread is preempted by a context that has a higher priority, it may even cause a deadlock.

    This does not mean it cannot be used, simply, there is a need to narrow down the situation when it is used by multi-pthread on the same core.

    1. It CAN be used for any single-producer or single-consumer situation.
    2. It MAY be used by multi-producer/consumer pthreads whose scheduling policies are all SCHED_OTHER (cfs). Users SHOULD be aware of the performance penalty before using it.
    3. It MUST not be used by multi-producer/consumer pthreads whose scheduling policies are SCHED_FIFO or SCHED_RR.

    RTE_RING_PAUSE_REP_COUNT is defined for rte_ring to reduce contention. It is mainly for case 2: a yield is issued after a number of pause repetitions.

    It adds a sched_yield() syscall if the thread spins for too long while waiting on the other thread to finish its operations on the ring. This gives the pre-empted thread a chance to proceed and finish with the ring enqueue/dequeue operation.

  • rte_timer
    Running rte_timer_manager() on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.

  • rte_log
    In a non-EAL pthread, there is no per-thread loglevel or logtype; global loglevels are used.
  • misc
    The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in a non-EAL pthread.
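The pause-then-yield pattern described for rte_ring above can be sketched in a few lines; the threshold constant and function name below are illustrative, not the actual rte_ring code:

```c
#define _GNU_SOURCE
#include <sched.h>

/* Illustrative spin-wait with a yield threshold, the idea behind
 * RTE_RING_PAUSE_REP_COUNT: after PAUSE_REP rounds of busy-waiting,
 * give up the CPU with sched_yield() so a preempted peer holding the
 * ring's head/tail update can finish its enqueue/dequeue. */
#define PAUSE_REP 64

void spin_until(volatile int *done)
{
    unsigned rep = 0;
    while (!*done) {
        if (++rep == PAUSE_REP) {
            sched_yield();  /* let the other pthread run */
            rep = 0;
        }
    }
}
```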

cgroup control

The following is a simple example of cgroup control usage. There are two pthreads (t0 and t1) doing packet I/O on the same core ($cpu). We expect only 50% of CPU to be spent on packet I/O.

```shell
mkdir /sys/fs/cgroup/cpu/pkt_io
mkdir /sys/fs/cgroup/cpuset/pkt_io
echo $cpu > /sys/fs/cgroup/cpuset/cpuset.cpus
echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks
echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks
cd /sys/fs/cgroup/cpu/pkt_io
echo 100000 > pkt_io/cpu.cfs_period_us
echo 50000 > pkt_io/cpu.cfs_quota_us
```