k8s: an experiment on communication across multiple nodes, multiple pods, and multiple services
I. This problem bothered me for several days: communication between pods on a single node worked fine, but communication across multiple nodes did not. The problem was finally traced to flannel, which is responsible for connecting the networks of the different nodes.
II. Experiment environment
1. ubuntu-14.04, k8s-1.4, flannel-0.5.5, etcd-2.3, docker-1.12.3
2. The k8s cluster has three nodes in total: one master (192.168.110.151, which also serves as the private registry), minion1 (192.168.110.152), and minion2 (192.168.110.154).
III. Experiment procedure
1. Configure the cluster software
(1) etcd on the master (located in /home/docker/xu/etcd)
root@master:/home/docker/xu/etcd# tree
.
├── etcd
├── etcd0.etcd
│   └── member
│       ├── snap
│       │   ├── 000000000000005c-0000000000013889.snap
│       │   ├── 0000000000000085-0000000000015f9a.snap
│       │   ├── 00000000000000bc-00000000000186ab.snap
│       │   ├── 00000000000000bf-000000000001adbc.snap
│       │   └── 00000000000000bf-000000000001d4cd.snap
│       └── wal
│           ├── 0000000000000000-0000000000000000.wal
│           └── 0000000000000001-0000000000012017.wal
├── etcdctl
└── run.sh
run.sh is the startup script; its contents are as follows:
killall -9 etcd
./etcd \
  -name etcd0 \
  -data-dir etcd0.etcd \
  -initial-advertise-peer-urls http://master:2380 \
  -listen-peer-urls http://master:2380 \
  -listen-client-urls http://master:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://master:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster etcd0=http://master:2380,etcd1=http://dockertest4:2380,etcd2=http://dockertest5:2380 \
  -initial-cluster-state new
(2) etcd on minion1 (located in /home/docker/xu/etcd)
root@dockertest4:/home/docker/xu/etcd# tree
.
├── etcd
├── etcd1.etcd
│   └── member
│       ├── snap
│       │   ├── 000000000000005c-0000000000013888.snap
│       │   ├── 0000000000000085-0000000000015f99.snap
│       │   ├── 00000000000000bc-00000000000186aa.snap
│       │   ├── 00000000000000bf-000000000001adbb.snap
│       │   └── 00000000000000bf-000000000001d4cd.snap
│       └── wal
│           ├── 0000000000000000-0000000000000000.wal
│           └── 0000000000000001-0000000000012025.wal
├── etcdctl
└── run.sh
run.sh is the startup script; its contents are as follows:
killall -9 etcd
./etcd \
  -name etcd1 \
  -data-dir etcd1.etcd \
  -initial-advertise-peer-urls http://dockertest4:2380 \
  -listen-peer-urls http://dockertest4:2380 \
  -listen-client-urls http://dockertest4:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://dockertest4:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster etcd0=http://master:2380,etcd1=http://dockertest4:2380,etcd2=http://dockertest5:2380 \
  -initial-cluster-state new
(3) etcd on minion2 (located in /home/docker/xu/etcd)
root@dockertest5:/home/docker/xu/etcd# tree
.
├── etcd
├── etcd2.etcd
│   └── member
│       ├── snap
│       │   ├── 000000000000005c-0000000000013889.snap
│       │   ├── 0000000000000085-0000000000015f9a.snap
│       │   ├── 00000000000000bc-00000000000186ab.snap
│       │   ├── 00000000000000bf-000000000001adbc.snap
│       │   └── 00000000000000bf-000000000001d4cd.snap
│       └── wal
│           ├── 0000000000000000-0000000000000000.wal
│           └── 0000000000000001-0000000000012006.wal
├── etcdctl
└── run.sh
run.sh is the startup script; its contents are as follows:
killall -9 etcd
./etcd \
  -name etcd2 \
  -data-dir etcd2.etcd \
  -initial-advertise-peer-urls http://dockertest5:2380 \
  -listen-peer-urls http://dockertest5:2380 \
  -listen-client-urls http://dockertest5:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://dockertest5:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster etcd0=http://master:2380,etcd1=http://dockertest4:2380,etcd2=http://dockertest5:2380 \
  -initial-cluster-state new
(4) k8s configuration on the master (kube-apiserver, kube-controller-manager, and kube-scheduler; they live in /home/docker/xu/kubernetes/server/bin)
root@master:/home/docker/xu/kubernetes/server/bin# tree
.
├── hyperkube
├── kube-apiserver
├── kube-apiserver.docker_tag
├── kube-controller-manager
├── kube-controller-manager.docker_tag
├── kubectl
├── kube-dns
├── kubelet
├── kubemark
├── kube-proxy
├── kube-proxy.docker_tag
├── kube-scheduler
├── kube-scheduler.docker_tag
├── run-apiserver.sh
├── run-controller-manager.sh
└── run-scheduler.sh
run-apiserver.sh, run-controller-manager.sh, and run-scheduler.sh are the startup scripts; their contents are, respectively:
./kube-apiserver --address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range='192.168.110.0/24' --kubelet_port=10250 --v=0 --logtostderr=true --etcd_servers=http://192.168.110.151:2379 --allow_privileged=false >> /opt/k8s/kube-apiserver.log 2>&1 &
./kube-controller-manager --v=0 --logtostderr=false --log_dir=/opt/k8s/kube --master=192.168.110.151:8080 >> /opt/k8s/kube-controller-manager.log 2>&1 &
./kube-scheduler --master='192.168.110.151:8080' --v=0 --log_dir=/opt/k8s/kube >> /opt/k8s/kube-scheduler.log 2>&1 &
(5) k8s configuration on minion1 (kube-proxy and kubelet; they live in /home/docker/xu/kubernetes/server/bin)
root@dockertest4:/home/docker/xu/kubernetes/server/bin# tree
.
├── hyperkube
├── kube-apiserver
├── kube-apiserver.docker_tag
├── kube-controller-manager
├── kube-controller-manager.docker_tag
├── kubectl
├── kube-dns
├── kubelet
├── kubemark
├── kube-proxy
├── kube-proxy.docker_tag
├── kube-scheduler
├── kube-scheduler.docker_tag
├── run-let.sh
└── run-proxy.sh
run-proxy.sh and run-let.sh are the startup scripts; their contents are, respectively:
./kube-proxy --logtostderr=true --v=0 --master=http://192.168.110.151:8080 --hostname_override=192.168.110.152 >> /opt/k8s/kube-proxy.log
./kubelet --logtostderr=true --v=0 --allow-privileged=false --log_dir=/opt/k8s/kube --address=0.0.0.0 --port=10250 --hostname_override=192.168.110.152 --api_servers=http://192.168.110.151:8080 >> /opt/k8s/kube-kubelet.log
(6) k8s configuration on minion2 (kube-proxy and kubelet; they live in /home/docker/xu/k8s/server/bin)
root@dockertest5:/home/docker/xu/k8s/server/bin# tree
.
├── hyperkube
├── kube-apiserver
├── kube-apiserver.docker_tag
├── kube-controller-manager
├── kube-controller-manager.docker_tag
├── kubectl
├── kube-dns
├── kubelet
├── kubemark
├── kube-proxy
├── kube-proxy.docker_tag
├── kube-scheduler
├── kube-scheduler.docker_tag
├── run-let.sh
└── run-proxy.sh
./kube-proxy --logtostderr=true --v=0 --master=http://192.168.110.151:8080 --hostname_override=192.168.110.154 >> /opt/k8s/kube-proxy.log
./kubelet --logtostderr=true --v=0 --allow-privileged=false --log_dir=/opt/k8s/kube --address=0.0.0.0 --port=10250 --hostname_override=192.168.110.154 --api_servers=http://192.168.110.151:8080 >> /opt/k8s/kube-kubelet.log
2. Start the software
1. etcd on the master node: in the etcd directory on the master, run ./run.sh
2. etcd on minion1: in the etcd directory on minion1, run ./run.sh
3. etcd on minion2: in the etcd directory on minion2, run ./run.sh
4. Verify that etcd started successfully
In the etcd directory on the master node, run ./etcdctl member list. Output like the following means the etcd cluster started successfully:
root@master:/home/docker/xu/etcd# ./etcdctl member list
35e013635b05ca4f: name=etcd1 peerURLs=http://dockertest4:2380 clientURLs=http://dockertest4:2379 isLeader=true
70192b54fb86c1a5: name=etcd0 peerURLs=http://master:2380 clientURLs=http://master:2379 isLeader=false
835aada27b736086: name=etcd2 peerURLs=http://dockertest5:2380 clientURLs=http://dockertest5:2379 isLeader=false
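etcd 2.x also ships a cluster-health subcommand; as an additional, hedged check (not one of the original steps) it can be run from the same directory:

# Optional: probe every etcd member and report its health.
./etcdctl cluster-health
# Each member should be reported as healthy, and the final line should read "cluster is healthy".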
5. Flannel on the master node: in the flannel directory, run the following command:
./flanneld -etcd-endpoints http://192.168.110.151:2379
Here http://192.168.110.151:2379 is the address of etcd.
6. Flannel on minion1: in the flannel directory, run the following command:
./flanneld -etcd-endpoints http://192.168.110.151:2379
7. Flannel on minion2: in the flannel directory, run the following command:
./flanneld -etcd-endpoints http://192.168.110.151:2379
8. In the etcd directory on the master node, run the following command:
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
9. In the etcd directory on the master node, run the following command to confirm the previous step succeeded:
root@master:/home/docker/xu/etcd# ./etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.44.0-24
/coreos.com/network/subnets/10.1.54.0-24
/coreos.com/network/subnets/10.1.60.0-24
10. Then run the following commands on each node:
(1) mk-docker-opts.sh -i
(2) source /run/flannel/subnet.env
(3) ifconfig docker0 ${FLANNEL_SUBNET}
(4) sudo service docker stop
(5) dockerd --dns 8.8.8.8 --dns 8.8.4.4 --insecure-registry 192.168.110.151:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
11. Check the routing table on each node to confirm that flannel has connected the networks:
root@master:/home/docker/xu/flannel# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.110.2   0.0.0.0         UG    0      0        0 eth0
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel0
10.1.60.0       0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.110.0   0.0.0.0         255.255.255.0   U     1      0        0 eth0
Note: if the network has been connected, the destination addresses of flannel0 and docker0 should fall within the same larger network (here, 10.1.0.0/16).
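A hedged way to confirm the overlay actually carries traffic before starting Kubernetes (not one of the original steps): from one node, ping the docker0 address that flannel assigned to another node. The 10.1.44.1 address below is only an example derived from the subnet listing in step 9; read the real value from /run/flannel/subnet.env on the target node.

# On the target minion, print the subnet flannel handed to docker:
cat /run/flannel/subnet.env        # e.g. FLANNEL_SUBNET=10.1.44.1/24
# From the master (or any other node), ping that docker0 address:
ping -c 3 10.1.44.1
# Replies indicate that flannel is routing packets between the nodes.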
12. k8s on the master node: in the k8s binaries directory, run the following commands (a quick health-check sketch follows the list):
./run-apiserver.sh
./run-controller-manager.sh
./run-scheduler.sh
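As an optional, hedged sanity check (not one of the original steps), the apiserver can be probed on the insecure port configured in run-apiserver.sh:

# The apiserver exposes a plain-text health endpoint on its insecure port.
curl http://192.168.110.151:8080/healthz
# Expected response: ok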
13. On minion1 and minion2, run the following commands (a log-check sketch follows the list):
./run-proxy.sh
./run-let.sh
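Because run-proxy.sh and run-let.sh redirect their output to /opt/k8s/, the logs there are the first place to look if a node never registers; a hedged example on a minion:

# Show the most recent kubelet and kube-proxy output written by the startup scripts.
tail -n 20 /opt/k8s/kube-kubelet.log /opt/k8s/kube-proxy.log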
14. Run the following commands; output like the following means k8s started successfully:
root@master:/home/docker/xu/flannel# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
scheduler            Healthy   ok
root@master:/home/docker/xu/flannel# kubectl get nodes
NAME              STATUS    AGE
192.168.110.152   Ready     1d
192.168.110.154   Ready     1d
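If a node shows NotReady instead, a hedged first step is to ask the apiserver what it recorded about that node (the node name is taken from the listing above):

# Show the node's conditions, addresses and recent events.
kubectl describe node 192.168.110.152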
3. Verify the environment
(1) In this experiment a Java web application connects to a MySQL database: one mysql pod runs on minion2, and one tomcat pod (the image contains the web application test code) runs on each of minion1 and minion2.
(2) If the tomcat on minion1 can also reach the mysql database, the networks of the two nodes are connected.
(3) The contents of mysql.yaml and tomcat.yaml are as follows:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql_pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql_pod
    spec:
      containers:
        - name: mysql
          image: 192.168.110.151:5000/mysql
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
apiVersion: v1
kind: Service
metadata:
  name: hpe-java-web
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31002
  selector:
    app: hpe_java_web_pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hpe-java-web-deployement
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hpe_java_web_pod
    spec:
      containers:
        - name: myweb
          image: 192.168.110.151:5000/tomact8
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
(4) Start the mysql and tomcat containers:
kubectl create -f mysql.yaml
kubectl create -f tomcat.yaml
(5) Verify the startup status:
root@master:/home/docker/xu/test# kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
hpe-java-web-deployement-4oeax   1/1       Running   0          53m
hpe-java-web-deployement-kqkv8   1/1       Running   0          53m
mysql-deployment-bk5v1           1/1       Running   0          55m
root@master:/home/docker/xu/test# kubectl get service
NAME           CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
hpe-java-web   192.168.110.63    <nodes>       8080/TCP   53m
kubernetes     192.168.110.1     <none>        443/TCP    1d
mysql          192.168.110.220   <none>        3306/TCP   55m
(6) The Java web test code is as follows:
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="java.sql.*" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
<table border="1" align="center">
  <tr>
    <td>卡号</td>
  </tr>
  <%
    String driverClass = "com.mysql.jdbc.Driver";
    // MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT are injected by Kubernetes for the mysql service.
    String ip = System.getenv("MYSQL_SERVICE_HOST");
    String port = System.getenv("MYSQL_SERVICE_PORT");
    System.out.println(port + "asasdfasdfasdfasdfasd");
    //String ip = "localhost";
    //String port = "3306";
    Connection conn;
    try {
      Class.forName(driverClass);
      conn = java.sql.DriverManager.getConnection("jdbc:mysql://" + ip + ":" + port + "/bms", "root", "123456");
      Statement stmt = conn.createStatement();
      String sql = "select * from bms_appuser";
      ResultSet rs = stmt.executeQuery(sql);
      while (rs.next()) {
  %>
  <tr>
    <td><%=rs.getString(3) %></td>
  </tr>
  <%
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    }
  %>
</table>
</body>
</html>
(7) The Java web project is named K8S; to keep things simple it contains only the single JSP page above, index.jsp. The database and its tables can be created however is convenient.
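The JSP above relies on the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT environment variables that the kubelet injects into pods for services that already exist when the pod starts (which is why mysql.yaml is created before tomcat.yaml). A hedged way to confirm they are present, using one of the pod names from the listing above:

# Print the mysql-related service environment variables inside a tomcat pod.
kubectl exec hpe-java-web-deployement-4oeax -- env | grep MYSQL_SERVICE
# Expected (values come from the kubectl get service output above):
# MYSQL_SERVICE_HOST=192.168.110.220
# MYSQL_SERVICE_PORT=3306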
(8) From minion1, minion2, and a machine outside the cluster, open http://192.168.110.151:31002/K8S/index.jsp. If the data can be retrieved from all of them, the experiment has succeeded.
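A scriptable version of this final check (a hedged sketch using the URL from the step above; the minion IPs can be probed on the same NodePort in the same way):

# Request the test page and print only the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.110.151:31002/K8S/index.jsp
# 200 means the page rendered; dropping the -o/-w flags prints the HTML,
# which should contain the rows read from the bms_appuser table.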