http://jason.digitalinertia.net/dockered-dpdk-packaging-open-vswitch/

Dockered DPDK: packaging Open vSwitch


I recently attended the NFV World Congress in San Jose, and had a great time talking to vendors about their solutions and current trends toward widespread NFV adoption. Intel’s hot new(ish) multicore programming framework – the Data Plane Development Kit, or DPDK – was part of the marketing spiel of almost everyone even remotely invested in the NFVI.  The main interest is in the poll mode driver, which dedicates a CPU core to polling devices rather than waiting for interrupts to signal when a packet has arrived.  This has resulted in some amazing packet processing rates, such as a DPDK-accelerated Open vSwitch switching at 14.88 Mpps (10 GbE line rate for 64-byte packets).

Since I’ve been working with Docker lately, I naturally started imagining what could be done by combining crazy-fast DPDK applications with the lightweight virtualization and deployment flexibility of Docker.  Many DPDK applications – such as Open vSwitch – have requirements in the DPDK build that could break other applications relying on the same libraries.  That makes such an application a great candidate for containerization, since we can give it its very own tested build and run environment.

I was not, of course, the first to think of this – some Googling will turn up quite a few bits and pieces that have been helpful in writing this post.  My goal here is to bring that information into a consolidated tutorial and to explain the containerized DPDK framework that I have published to Dockerhub.

DPDK Framework in a Container

DPDK applications need to access a set of headers and libraries for compilation, so I decided to create a base container (Github, Dockerhub) with those resources.  Here’s the Dockerfile:
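In outline it does something like the following (a sketch: the base image, package list, and DPDK version shown here are illustrative, not the exact contents of the repository):

# Illustrative sketch of the base image; versions and package names are assumptions.
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y build-essential git

# DPDK applications look for the SDK here.
ENV RTE_SDK /usr/src/dpdk

# Fetch the DPDK source into RTE_SDK (repository URL and tag are assumptions).
RUN git clone http://dpdk.org/git/dpdk $RTE_SDK && \
    cd $RTE_SDK && git checkout v2.0.0

# Don't build kernel modules inside the image, so no kernel headers are needed.
RUN sed -i 's/CONFIG_RTE_EAL_IGB_UIO=y/CONFIG_RTE_EAL_IGB_UIO=n/' $RTE_SDK/config/common_linuxapp && \
    sed -i 's/CONFIG_RTE_KNI_KMOD=y/CONFIG_RTE_KNI_KMOD=n/' $RTE_SDK/config/common_linuxapp

# Defer compilation to the application image, which must supply these two scripts.
ONBUILD COPY dpdk_env.sh dpdk_config.sh /usr/src/
ONBUILD RUN . /usr/src/dpdk_env.sh && . /usr/src/dpdk_config.sh && \
    cd $RTE_SDK && make install T=$RTE_TARGET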

Pretty basic stuff at first – get some packages, set the all-important RTE_SDK environment variable, grab the source.  One important point is not to rely on kernel headers; doing so would be seriously non-portable.  The uio and igb_uio kernel modules have to be built and installed by the host that will run the DPDK container, so we configure the SDK not to compile kernel modules, which means the build system doesn’t need kernel headers installed.

The key part of this build script is the deferment of compilation to when the application is built, so that the application can specify its requirements. This is done by requiring that the DPDK application provide dpdk_env.sh and dpdk_config.sh, which supply environment variables (such as RTE_TARGET) and a set of commands to run before compilation occurs. For example, Open vSwitch requires that DPDK be compiled with CONFIG_RTE_BUILD_COMBINE_LIBS=y in its configuration, which would be inserted in dpdk_config.sh.

DPDK Application in a Container

Now that the framework is there, it’s time to use it in an application.  In this post I will demonstrate Open vSwitch in a container (Github, Dockerhub), which could be plenty useful.  To begin, here are the dpdk_env.sh and dpdk_config.sh files:
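Roughly, they boil down to picking the build target and flipping the combined-libs switch (a sketch of the two files, not their exact contents):

# dpdk_env.sh: environment for the deferred DPDK build (illustrative)
export RTE_TARGET=x86_64-native-linuxapp-gcc

# dpdk_config.sh: configuration tweaks OVS needs before DPDK compiles (illustrative)
sed -i 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' \
    $RTE_SDK/config/common_linuxapp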

OVS has some special requirements for DPDK, which is kind of the point of putting it in a container, right? Here’s the Dockerfile to build it:
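In outline (a sketch assuming an OVS source build against the DPDK target above; package names, paths, and the OVS branch are assumptions):

# Illustrative sketch of the OVS image.
FROM rakurai/dpdk

# Extra build dependencies for Open vSwitch.
RUN apt-get update && apt-get install -y autoconf automake libtool

# Fetch the OVS source and build it against the DPDK that the base image's
# ONBUILD steps have already compiled.
RUN git clone https://github.com/openvswitch/ovs.git /usr/src/ovs
WORKDIR /usr/src/ovs
RUN . /usr/src/dpdk_env.sh && \
    ./boot.sh && \
    ./configure --with-dpdk=$RTE_SDK/$RTE_TARGET && \
    make && make install

# Startup script that brings up the database server and the DPDK-enabled vswitchd.
COPY run_ovs.sh /usr/local/bin/run_ovs.sh
CMD ["/usr/local/bin/run_ovs.sh"]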

The ONBUILD instructions in the DPDK Dockerfile will be executed first, of course, which will compile the DPDK framework. Then we install more packages for OVS, get the source, and compile with DPDK options. In the last few lines, we copy the startup script into the container; it runs everything OVS needs:
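A sketch of that script, following the usual paths for an OVS built from source (the version in the repository may differ in details):

#!/bin/sh
# Illustrative startup script; paths follow OVS "make install" defaults.
DB_SOCK=/usr/local/var/run/openvswitch/db.sock

mkdir -p /usr/local/etc/openvswitch /usr/local/var/run/openvswitch

# Create the configuration database on first run.
[ -e /usr/local/etc/openvswitch/conf.db ] || \
    ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
        /usr/local/share/openvswitch/vswitch.ovsschema

# Start the database server and initialize the database.
ovsdb-server --remote=punix:$DB_SOCK \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach
ovs-vsctl --no-wait init

# Finally, the DPDK-enabled vswitchd; EAL arguments go before the "--".
ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile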

Now, here you could go a bit differently, and the repository I linked to may change somewhat. It could be said that it is more Dockerish to put the ovsdb-server in its own container and then link them. However, this is a self-contained example, so we’ll just go with this.

Running Open vSwitch

Before we start it up, we need to fulfill some prerequisites. I won’t go into details on the how and why, but please see the DPDK Getting Started Guide and the OVS-DPDK installation guide.  OVS requires 1GB huge pages, so you need your /etc/default/grub to have at least these options:
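For 1GB pages, that means adding kernel boot parameters like these to GRUB_CMDLINE_LINUX (the page count is a placeholder; size it for your system):

GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8"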

followed by an update-grub and reboot. You also need to mount them with this or the /etc/fstab equivalent:
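For example (the mount point is a common convention, not a requirement; it just has to be somewhere the container can share):

mkdir -p /dev/hugepages
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages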

Compile the kernel module on the host system and insert it. Download DPDK, extract, and run the dpdk/tools/setup.sh script. Choose to build to the x86_64-native-linuxapp-gcc target, currently option 9, and then insert the UIO module, currently option 12. Finally, bind one of your interfaces with option 18, though you’ll have to bring that interface down first.
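If you prefer to skip the interactive menu, the equivalent steps look roughly like this (tool and path names vary across DPDK releases, and eth1 is a placeholder for the interface you want to hand to DPDK):

cd dpdk
make install T=x86_64-native-linuxapp-gcc   # builds the SDK plus the uio/igb_uio modules
sudo modprobe uio
sudo insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
sudo ip link set eth1 down
sudo ./tools/dpdk_nic_bind.py --bind=igb_uio eth1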

Now you can start the container. Here’s what I used:
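Roughly (a sketch; the image name comes from the links below, and the paths match the host setup above):

docker run -it --privileged \
    -v /dev/hugepages:/dev/hugepages \
    --device /dev/uio0:/dev/uio0 \
    rakurai/ovs-dpdk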

This gives the container access to the huge page mount, and the uio0 device that you just bound to the UIO driver. I found that I needed to run the container with --privileged to access parts of the /dev/uio0 filesystem, though it appears that some people are able to get around this. I will update this post if I find out how to run the container without --privileged.

If all goes well, you now have DPDK-accelerated OVS running in a container, and you can go about adding interfaces to the container, adding them to OVS, and setting up rules for forwarding packets at ludicrous speeds. Good luck, and please let me know how it works out for you!

Links

DPDK base Docker container – rakurai/dpdk – Github, Dockerhub
Open vSwitch Docker container – rakurai/ovs-dpdk – Github, Dockerhub
DPDK Getting Started Guide
OVS-DPDK installation guide






3 comments

  1. Murad Kablan

    Hey Jason,
    Thanks for this great article.
    I am having a problem running multiple containers with DPDK on the same host at the same time. It seems that they block each other when it comes to the use of hugepages.
    This is the error I get when I try to run a DPDK application in the second container.
    EAL: Detected lcore 0 as core 0 on socket 0
    EAL: Detected lcore 1 as core 0 on socket 1
    EAL: Detected lcore 2 as core 1 on socket 0
    EAL: Detected lcore 3 as core 1 on socket 1
    EAL: Detected lcore 4 as core 2 on socket 0
    EAL: Detected lcore 5 as core 2 on socket 1
    EAL: Detected lcore 6 as core 3 on socket 0
    EAL: Detected lcore 7 as core 3 on socket 1
    EAL: Detected lcore 8 as core 4 on socket 0
    EAL: Detected lcore 9 as core 4 on socket 1
    EAL: Detected lcore 10 as core 5 on socket 0
    EAL: Detected lcore 11 as core 5 on socket 1
    EAL: Support maximum 128 logical core(s) by configuration.
    EAL: Detected 12 lcore(s)
    EAL: No free hugepages reported in hugepages-1048576kB
    PANIC in rte_eal_init():

    Any advice on how to solve this issue?

    Thanks,
    Murad

    1. admin

      Hi Murad, thanks for reading. I haven’t attempted running more than one application at once, but if you’re using something like the Dockerfile I posted, the last line of run_ovs.sh:

      ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile

      might be the issue. Arguments before the "--" are DPDK args, such as "-c 0x1" being the core mask to tell it to only run on core 0. As noted in the OVS documentation for DPDK installation, you can specify memory restrictions if you have more huge pages allocated with an argument like

      --socket-mem 1024,0

      to allocate 1GB on NUMA node 0. So changing the last line of the script to

      ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 -- unix:$DB_SOCK --pidfile

      may solve your problem if one core and a single 1GB huge page suit your use case. Of course, doing something fancy with the command line args for the script and Dockerfile would be more elegant, but you get the idea.

      Let me know if this helps!

      1. Murad Kablan

        Thanks for the feedback.
        So I found out what was going on, and the problem falls into two issues.
        The first issue is the memory. As you said, I had to limit each container’s memory usage. But I also had to specify that each container works independently from the other containers, using the option --file-prefix

        The second issue is that DPDK applications block all PCI devices by default. So when I ran the second container, the first container’s port was blocked. I had to specify, or “whitelist”, the port used by each container
        by using the option
        --pci-whitelist PCI-ID
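
        Concretely, the EAL arguments for each container’s vswitchd end up looking something like this (illustrative; the file prefix and PCI address are placeholders):

        ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 --file-prefix ovs1 --pci-whitelist 0000:04:00.0 -- unix:$DB_SOCK --pidfile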

        I’m about to run as many containers as possible on a single host to see whether DPDK performance is affected or not.

        Thanks again and please keep posting these cool articles!

