Docker Networking (Part 13): Calico for Docker


13 Calico for Docker

Introduction to Calico

Calico is a pure Layer 3 solution that provides multi-host communication for OpenStack VMs and Docker containers. Unlike overlay networks such as flannel or the libnetwork overlay driver, Calico does not use an overlay at all: it takes a pure Layer 3 approach, replacing virtual switching with virtual routing, and each virtual router propagates reachability information (routes) to the rest of the data center via BGP.
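Once the two nodes below are up, this model is easy to observe directly on the hosts: every remote workload appears as an ordinary entry in the kernel routing table, and the BGP sessions can be checked with calicoctl. This is only a rough sketch; the exact calicoctl status output depends on the pre-1.0 calicoctl build used in this article.

$ ip route | grep -E 'cali|tunl'   # per-container routes pointing at the local veth or at the peer node
$ sudo calicoctl status            # the BGP session to the other node should show as Established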

Environment Preparation

  • Two Linux nodes (Node1: 192.168.56.10 and Node2: 192.168.56.20), OS: Ubuntu 14.04
  • Docker installed on both nodes
  • An etcd cluster configured

Configuration and Downloads

Configure the Docker bridge (docker0 by default) on both nodes so that the two bridges are on different networks. The network details are as follows:
Node1
* IP: 192.168.56.10
* Docker bridge network: 192.168.1.0/24

Node2
* IP: 192.168.56.20
* Docker bridge network: 172.17.0.0/16
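How the bridge subnet is changed is not shown here; one common way on Ubuntu 14.04 (assuming the stock packaging with /etc/default/docker) is the --bip daemon option. A sketch for Node1, which moves docker0 onto 192.168.1.0/24 while Node2 keeps the default 172.17.0.0/16:

$ echo 'DOCKER_OPTS="--bip=192.168.1.1/24"' | sudo tee -a /etc/default/docker
$ sudo service docker restart
$ ip addr show docker0        # should now report 192.168.1.1/24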
Download and configure etcd
Download etcd

$ curl -L https://github.com/coreos/etcd/releases/download/v2.3.6/etcd-v2.3.6-linux-amd64.tar.gz -o etcd-v2.3.6-linux-amd64.tar.gz
$ tar zxvf etcd-v2.3.6-linux-amd64.tar.gz
$ cd etcd-v2.3.6-linux-amd64/

NODE1

./etcd -name node1 -initial-advertise-peer-urls http://192.168.56.10:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -listen-client-urls http://0.0.0.0:2379,http://127.0.0.1:4001 \
  -advertise-client-urls http://0.0.0.0:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster node1=http://192.168.56.10:2380,node2=http://192.168.56.20:2380 \
  -initial-cluster-state new

NODE2

./etcd -name node2 -initial-advertise-peer-urls http://192.168.56.20:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -listen-client-urls http://0.0.0.0:2379,http://127.0.0.1:4001 \
  -advertise-client-urls http://0.0.0.0:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster node1=http://192.168.56.10:2380,node2=http://192.168.56.20:2380 \
  -initial-cluster-state new
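Before continuing, it is worth confirming that the two members actually formed a cluster; etcdctl ships in the same tarball:

$ ./etcdctl cluster-health        # both members should be reported healthy
$ ./etcdctl member list           # should list node1 and node2
$ curl -s http://127.0.0.1:2379/version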

Download calicoctl

$ wget http://www.projectcalico.org/builds/calicoctl
$ chmod +x calicoctl
$ sudo mv calicoctl /usr/local/bin/

Start the Calico Service

In a Docker environment the Calico service runs as a container and uses the host's network configuration. All containers configured to use Calico communicate with each other through these calico nodes.
Run the following command on node 1 and node 2 to start calico-node.
Node1
sudo calicoctl node --ip=192.168.56.10

Node2

sudo calicoctl node --ip=192.168.56.20
After it starts, docker ps shows the calico-node container running:

ubuntu@node1:~$ docker ps
CONTAINER ID        IMAGE                COMMAND               CREATED             STATUS              PORTS               NAMES
18b33d6365d6        calico/node:latest   "/sbin/start_runit"   12 minutes ago      Up 12 minutes                           calico-node
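calicoctl also provides a status command that reports whether Felix and BIRD are running and whether the BGP session to the other node is up; the exact output format varies between these early calicoctl builds, so treat this as a sketch:

$ sudo calicoctl status
# expect the peer (192.168.56.20 when run on Node1) to be listed with state Established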

Before starting any other containers, we need to configure an IP pool with the ipip and nat-outgoing options, so that containers with a valid profile can reach the internet. Run the following command on each node:

$ calicoctl pool add 10.0.10.0/24 --ipip --nat-outgoing
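To double-check that the pool was registered with both options (subcommand per the calicoctl version used here; later versions replaced this CLI entirely):

$ calicoctl pool show
# the 10.0.10.0/24 entry should list the ipip and nat-outgoing options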

Container Network Configuration

Start the containers
First, start a few containers on each host.
Node1

ubuntu@node1:~$ docker run -itd --net=none --name=worker-1 ubuntu:14
ubuntu@node1:~$ docker run -itd --net=none --name=worker-2 ubuntu:14

Node 2

$ docker run -itd --net=none --name=worker-3 ubuntu:14

Configure the Calico network

All the containers are now running but have no network devices. Use Calico to add a network device to each container; the IP assigned to a container should fall within the IP pool.
Node1

$ sudo calicoctl container add worker-1 10.0.10.1
$ sudo calicoctl container add worker-2 10.0.10.2

Node 2

$ sudo calicoctl container add worker-3 10.0.10.3

Once a container is on the Calico network it gets a network device with the corresponding IP, but at this point the containers still cannot reach each other or the internet, because no profiles have been created and assigned to them.
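The assigned address is already visible from inside a container even though traffic is still blocked; the interface name and prefix length depend on calicoctl's defaults, so this is only a rough check:

$ docker exec worker-1 ip addr show          # an extra Calico-created interface carrying 10.0.10.1
$ docker exec worker-1 ping -c2 10.0.10.3    # still fails until profiles are attached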

Create some profiles on any one of the nodes.
Node1

ubuntu@node1:~$ calicoctl profile add PROF_1
Created profile PROF_1
ubuntu@node1:~$ calicoctl profile add PROF_2
Created profile PROF_2

Assign the profiles to the containers; containers in the same profile can reach each other.
Node1

ubuntu@node1:~$ calicoctl container worker-1 profile append PROF_1
ubuntu@node1:~$ calicoctl container worker-2 profile append PROF_2

Node2

ubuntu@node2:~$ calicoctl container worker-3 profile append PROF_1
With that, all the configuration is done; next we test network connectivity between these containers.
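For reference, a profile created this way comes with a default rule set that allows traffic between its own members. With this calicoctl version the rules can be inspected roughly as follows (the exact subcommand may differ between releases):

$ calicoctl profile PROF_1 rule show
# typically: inbound  allow from tag PROF_1
#            outbound allow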

Testing

Every container should be able to reach the internet:

ubuntu@node1:~$ docker exec worker-1 ping -c2 www.baidu.com
PING www.a.shifen.com (220.181.111.188) 56(84) bytes of data.
64 bytes from 220.181.111.188: icmp_seq=1 ttl=46 time=3.81 ms
64 bytes from 220.181.111.188: icmp_seq=2 ttl=46 time=3.67 ms
ubuntu@node1:~$ docker exec worker-2 ping -c2 www.baidu.com
PING www.a.shifen.com (220.181.112.244) 56(84) bytes of data.
64 bytes from 220.181.112.244: icmp_seq=1 ttl=46 time=3.43 ms
64 bytes from 220.181.112.244: icmp_seq=2 ttl=46 time=3.40 ms

Containers in the same profile:

ubuntu@node1:~$ docker exec worker-1 ping -c2 10.0.10.3
PING 10.0.10.3 (10.0.10.3) 56(84) bytes of data.
64 bytes from 10.0.10.3: icmp_seq=1 ttl=62 time=0.918 ms
64 bytes from 10.0.10.3: icmp_seq=2 ttl=62 time=0.793 ms

Containers in different profiles:

ubuntu@node1:~$ docker exec worker-1 ping -c2 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.

--- 10.0.10.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1001ms

If we add worker-2 to profile PROF_1, worker-2 can then communicate with the other two containers.

ubuntu@node1:~$ calicoctl container worker-2 profile append PROF_1
Profile(s) PROF_1 appended.
ubuntu@node1:~$ docker exec worker-1 ping -c2 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=63 time=0.143 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=63 time=0.078 ms

Simple performance test
Run iperf -s on worker-1 and iperf -c 10.0.10.1 on worker-3 (iperf needs to be installed in the containers first, as shown below).
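The ubuntu:14 image does not ship with iperf, so it has to be installed inside each container first, for example:

$ docker exec worker-1 bash -c 'apt-get update && apt-get install -y iperf'
$ docker exec worker-2 bash -c 'apt-get update && apt-get install -y iperf'
# and on Node2:
$ docker exec worker-3 bash -c 'apt-get update && apt-get install -y iperf'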

root@d3fe08905044:/# iperf -c 10.0.10.1
------------------------------------------------------------
Client connecting to 10.0.10.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.10.3 port 40212 connected with 10.0.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   923 MBytes   774 Mbits/sec

Run iperf -c 10.0.10.1 on worker-2:

ubuntu@node1:~$ docker exec -it worker-2 bash
root@34b9c9de3ae9:/# iperf -c 10.0.10.1
------------------------------------------------------------
Client connecting to 10.0.10.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.10.2 port 54677 connected with 10.0.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  14.6 GBytes  12.5 Gbits/sec

Run the same test on the host itself:

ubuntu@node1:~$ iperf -c 192.168.56.10
------------------------------------------------------------
Client connecting to 192.168.56.10, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  4] local 192.168.56.10 port 5001 connected with 192.168.56.10 port 54611
[  3] local 192.168.56.10 port 54611 connected with 192.168.56.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  17.1 GBytes  14.7 Gbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  17.1 GBytes  14.7 Gbits/sec

As the numbers show, performance within a single host is reasonable, but there is a large gap for cross-host communication.

Integrating Calico with Docker Networking

Starting with Docker 1.9, Calico can be integrated with Docker networking: Calico runs an additional container that acts as a Docker network (libnetwork) plugin, so Calico networks can be managed with the docker network command.

This integration requires Docker to run in cluster mode. Stop the Docker daemon running on Node1/2 and restart it with the following cluster options:
Node1

$ sudo service docker stop
$ sudo /usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.56.10:2379 --cluster-advertise=192.168.56.10:2375

Node2

$ sudo service docker stop
$ sudo /usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.56.20:2379 --cluster-advertise=192.168.56.20:2375
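Running docker daemon in the foreground like this is fine for experimenting; to make the change permanent on Ubuntu 14.04 (a sketch, assuming the stock upstart packaging), the same flags can go into DOCKER_OPTS in /etc/default/docker, e.g. on Node1:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.56.10:2379 --cluster-advertise=192.168.56.10:2375"

followed by sudo service docker restart.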

Run Calico with the --libnetwork option:
Node1

ubuntu@node1:~$ sudo calicoctl node --libnetwork --ip=192.168.56.10

Node2

ubuntu@node2:~$ sudo calicoctl node --libnetwork --ip=192.168.56.20

ubuntu@node1:~$ docker ps
CONTAINER ID        IMAGE                           COMMAND               CREATED              STATUS              PORTS               NAMES
5e54cb9688d3        calico/node-libnetwork:latest   "./start.sh"          54 seconds ago       Up 54 seconds                           calico-libnetwork
6b2986e5f0f2        calico/node:latest              "/sbin/start_runit"   About a minute ago   Up About a minute                       calico-node

Create a Calico network with the docker network command:

ubuntu@node1:~$ docker network create --driver=calico --subnet=10.0.10.0/24 net1
47e2882efe513c93c16082c55f87a3daea79bd37a1bbb429b1bf252b3c8270d3
ubuntu@node1:~$ docker network ls
NETWORK ID          NAME                DRIVER
70b7184a47ab        bridge              bridge
0901295002bf        host                host
7898d5978de9        myapp               overlay
47e2882efe51        net1                calico
1c3fbfa13991        none                null
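Because the network definition is stored in etcd (the cluster store configured above), the same net1 network should also be visible from the other node, and docker network inspect shows its driver and subnet:

ubuntu@node2:~$ docker network inspect net1
# the output should show the calico driver and the 10.0.10.0/24 subnet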

Now simply run containers with --net=net1:

ubuntu@node1:~$ docker run -it --net=net1 ubuntu:14 bash
root@b64ec256274c:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default
    link/ipip 0.0.0.0 brd 0.0.0.0
12: cali0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 10.0.10.2/24 scope global cali0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link tentative
       valid_lft forever preferred_lft forever
15: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:2/64 scope link
       valid_lft forever preferred_lft forever
root@b64ec256274c:/# ping www.baidu.com
PING www.a.shifen.com (220.181.111.188) 56(84) bytes of data.
64 bytes from 220.181.111.188: icmp_seq=1 ttl=46 time=3.52 ms
64 bytes from 220.181.111.188: icmp_seq=2 ttl=46 time=4.31 ms
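A quick cross-host check of the libnetwork integration (the container name web1 is hypothetical, and the actual 10.0.10.x address is assigned by Calico's IPAM, so read it from ip a rather than assuming it):

ubuntu@node2:~$ docker run -itd --net=net1 --name=web1 ubuntu:14 bash
ubuntu@node2:~$ docker exec web1 ip a        # note the cali0 address, e.g. 10.0.10.x
# then ping that address from the container started on Node1 above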