Testing the docker overlay network


Starting with version 1.9, Docker introduced the overlay network driver (this article does not analyze the underlying technology), which mainly addresses the earlier shortcomings of Docker networking for cross-host communication. This article records the process of setting up and testing an overlay network, following the official documentation.
The OS used here is CentOS 7 with kernel 3.10. Docker 1.9 requires kernel 3.19 or later for overlay networking, but since Docker 1.10 the overlay driver also supports the 3.10 kernel, so Docker 1.10.3 is used in this article.
The test environment consists of three VirtualBox VMs: one runs the key-value store (etcd in this test), and the other two, net1 and net2, are used to test cross-host communication.
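For reference, the key-value store can simply be a single-node etcd listening on port 4001 on the KV host. The command below is only a sketch: the --name and --data-dir values are illustrative, and only the 172.28.0.2:4001 endpoint comes from this setup.

# start a single-node etcd serving the cluster store on port 4001
etcd --name kvstore --data-dir /var/lib/etcd \
     --listen-client-urls http://0.0.0.0:4001 \
     --advertise-client-urls http://172.28.0.2:4001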
docker info:

[root@net1 vagrant]# docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: docker-253:0-469034-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/vg-docker/data
 Metadata file: /dev/vg-docker/metadata
 Data Space Used: 41.16 MB
 Data Space Total: 10.74 GB
 Data Space Available: 10.7 GB
 Metadata Space Used: 761.9 kB
 Metadata Space Total: 10.63 GB
 Metadata Space Available: 10.63 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: null host overlay bridge
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 993.6 MiB
Name: net1
ID: TU6M:E6WM:PZDN:ULJX:EWKS:UPLQ:Z54D:XP52:64C7:Z4XN:TJ76:VG7O
WARNING: bridge-nf-call-ip6tables is disabled
Cluster store: etcd://172.28.0.2:4001
Cluster advertise: 172.28.0.3:0

Note: with the default loop (loopback devicemapper) storage configuration, the overlay network test ran into problems and the network could not be created successfully.
According to the official documentation, the following docker daemon parameters need to be configured:

--cluster-store=PROVIDER://URL
    Describes the location of the KV service.
--cluster-advertise=HOST_IP|HOST_IFACE:PORT
    The IP address or interface of the HOST used for clustering.
--cluster-store-opt=KEY=VALUE OPTIONS
    Options such as TLS certificate or tuning discovery Timers

docker daemon parameters:

/usr/bin/docker daemon -H fd:// --storage-driver=devicemapper --storage-opt dm.datadev=/dev/vg-docker/data --storage-opt dm.metadatadev=/dev/vg-docker/metadata  --cluster-store=etcd://172.28.0.2:4001 --cluster-advertise=eth1:0
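On a systemd-based CentOS 7 host, one way to persist these flags (an assumption, not part of the original setup) is a systemd drop-in file, for example:

# /etc/systemd/system/docker.service.d/cluster.conf  (example path)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// \
    --storage-driver=devicemapper \
    --storage-opt dm.datadev=/dev/vg-docker/data \
    --storage-opt dm.metadatadev=/dev/vg-docker/metadata \
    --cluster-store=etcd://172.28.0.2:4001 \
    --cluster-advertise=eth1:0

followed by systemctl daemon-reload and systemctl restart docker.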

Ports: 7946 is used for the control plane and 4789 (VXLAN) for the data plane; open them in the firewall:

firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
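Note that rules added with --permanent only take effect after the firewall is reloaded:

firewall-cmd --reload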

After configuring the parameters above, the overlay network can be created.

docker network create -d overlay multihost

[root@net1 vagrant]# docker network ls
NETWORK ID          NAME                DRIVER
15bb57daf277        multihost           overlay
3cd7ab7018e9        docker_gwbridge     bridge
a874aa0d9e0b        bridge              bridge
9fe04ff37f6f        none                null
010a53c2bf04        host                host

[root@net1 vagrant]# docker network inspect multihost
[
    {
        "Name": "multihost",
        "Id": "15bb57daf27731da102c8a5c5bf903e574daa33f5286e938009734a8cd5ce93c",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

[root@net2 vagrant]# docker network inspect multihost
[
    {
        "Name": "multihost",
        "Id": "15bb57daf27731da102c8a5c5bf903e574daa33f5286e938009734a8cd5ce93c",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Containers": {
            "37162168dca4ad715d12f6bc78d1bf0678ff9128fe5d55178e39ed08e847f80a": {
                "Name": "tender_kirch",
                "EndpointID": "765451d1201d570c626470d16c92515de55f4ea1df2a58a03bef8e1767873897",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]

[root@net2 vagrant]# docker run -it --rm=true --net=multihost centos /bin/bash
[root@37162168dca4 /]# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.619 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.632 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.660 ms
64 bytes from 10.0.0.3: icmp_seq=5 ttl=64 time=0.663 ms
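The ping target 10.0.0.3 is a container attached to the same multihost network on net1. Its startup is not shown in the transcript above, but it can be launched with something like the following (the container name is illustrative):

[root@net1 vagrant]# docker run -itd --name=c1 --net=multihost centos /bin/bash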

Looking at the links inside the container:

[root@37162168dca4 /]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff
13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[root@37162168dca4 /]# ethtool -S eth1
NIC statistics:
     peer_ifindex: 14
[root@37162168dca4 /]# ethtool -S eth0
NIC statistics:
     peer_ifindex: 12
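The peer_ifindex values reported by ethtool (12 and 14) are the interface indexes of the host-side ends of the two veth pairs; they can be matched against the host's interface list, for example:

# list only the host interfaces with index 12 and 14
[root@net2 vagrant]# ip -o link | grep -E '^(12|14): '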

Bridges on the host:

[root@net2 vagrant]# brctl show
bridge name         bridge id           STP enabled     interfaces
docker0             8000.024297afd372   no
docker_gwbridge     8000.0242117ceeda   no              veth2cef6db
ov-000100-15bb5     8000.96f96b0c7379   no              vetha6b50db
                                                        vx-000100-15bb5
[root@net2 vagrant]# ip -d link
12: vetha6b50db: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ov-000100-15bb5 state UP mode DEFAULT
    link/ether be:c0:b4:39:e5:fc brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth addrgenmode eui64
14: veth2cef6db: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT
    link/ether ca:d6:b0:d9:e9:5c brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth addrgenmode eui64
[root@net2 vagrant]# ip -d link show vx-000100-15bb5
10: vx-000100-15bb5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ov-000100-15bb5 state UNKNOWN mode DEFAULT
    link/ether 96:f9:6b:0c:73:79 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300 addrgenmode eui64
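Because the vxlan device is created with proxy, l2miss and l3miss, the forwarding and ARP entries for remote containers are programmed by the docker daemon rather than learned from traffic. They can be examined with the standard iproute2 tools, e.g.:

bridge fdb show dev vx-000100-15bb5     # MAC address -> remote VTEP entries
ip neigh show dev vx-000100-15bb5       # proxied ARP entries for overlay IPs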

From what we see inside the container and on the host, attaching to the overlay network creates two bridges. The ov-000100-15bb5 bridge has two devices attached: one end of a veth pair that connects the container (eth0) to the bridge, and a vxlan device whose VXLAN ID, according to the output above, is 256.
In addition, there is a docker_gwbridge bridge, to which the container is also connected through a veth pair (eth1). This network mainly exists to give containers external connectivity, so they can conveniently provide services to the outside.
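To confirm this split between the two bridges, the routing table inside the container can be checked; the default route should go out eth1 (the docker_gwbridge side) while the 10.0.0.0/24 overlay subnet stays on eth0:

[root@37162168dca4 /]# ip route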
