Setting up a multi-pod experiment environment on a two-node k8s cluster (one master and one minion)

Source: Internet | Editor: 程序博客网 | Date: 2024/05/16 14:07

I. Environment Description

(1) Two nodes, one master and one minion. The master node's IP is 192.168.110.151; the minion's IP is 192.168.110.152.

(2) A private registry runs on the 151 machine and serves the images the k8s cluster needs.

(3) The master node runs kube-apiserver, kube-controller-manager, kube-scheduler, and etcd; the minion node runs kube-proxy, kubelet, and etcd.

(4) The hostname of the 151 machine is master; the hostname of the 152 machine is dockertest4.
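The etcd and k8s flags below refer to the two nodes by hostname, so master and dockertest4 must resolve on both machines. A minimal sketch, assuming plain /etc/hosts entries are acceptable in this lab setup (the mapping is taken from the IPs above):

```shell
# Hostname resolution sketch: these two entries are assumed to exist in
# /etc/hosts on BOTH nodes (mapping taken from the environment description).
hosts_entries='192.168.110.151 master
192.168.110.152 dockertest4'
# To apply on a node (not run here): echo "$hosts_entries" | sudo tee -a /etc/hosts
printf '%s\n' "$hosts_entries"
```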

II. Environment Setup


1. Setting up etcd

(1) On the master node, copy the etcd and etcdctl binaries into /home/docker/xu/etcd, then create a run.sh in that directory with the following content:

killall -9 etcd
./etcd \
  -name etcd0 \
  -data-dir etcd0.etcd \
  -initial-advertise-peer-urls http://master:2380 \
  -listen-peer-urls http://master:2380 \
  -listen-client-urls http://master:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://master:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster etcd0=http://master:2380,etcd1=http://dockertest4:2380 \
  -initial-cluster-state new
(2) On the minion node, copy the etcd and etcdctl binaries into /home/docker/xu/etcd, then create a run.sh in that directory with the following content:

killall -9 etcd
./etcd \
  -name etcd1 \
  -data-dir etcd1.etcd \
  -initial-advertise-peer-urls http://dockertest4:2380 \
  -listen-peer-urls http://dockertest4:2380 \
  -listen-client-urls http://dockertest4:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://dockertest4:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster etcd0=http://master:2380,etcd1=http://dockertest4:2380 \
  -initial-cluster-state new

(3) Run the two run.sh scripts on their respective nodes (./run.sh).

(4) In /home/docker/xu/etcd on the master node, run ./etcdctl member list. If the cluster came up correctly you will see output like this:

root@master:/home/docker/xu/etcd# ./etcdctl member list
a8393743a0bdfe3: name=etcd1 peerURLs=http://dockertest4:2380 clientURLs=http://dockertest4:2379 isLeader=true
c93427c50eaf2937: name=etcd0 peerURLs=http://master:2380 clientURLs=http://master:2379 isLeader=false
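As a quick sanity check, a healthy two-node cluster shows exactly two members and a single leader. A small sketch that validates the sample output above (the member list is pasted in as a string here, so the check runs without a live cluster):

```shell
# Validate the sample `etcdctl member list` output shown above:
# a healthy two-node cluster has 2 members and exactly 1 leader.
member_list='a8393743a0bdfe3: name=etcd1 peerURLs=http://dockertest4:2380 clientURLs=http://dockertest4:2379 isLeader=true
c93427c50eaf2937: name=etcd0 peerURLs=http://master:2380 clientURLs=http://master:2379 isLeader=false'
members=$(printf '%s\n' "$member_list" | grep -c 'name=etcd')
leaders=$(printf '%s\n' "$member_list" | grep -c 'isLeader=true')
echo "members=$members leaders=$leaders"
```

On a live cluster you would pipe `./etcdctl member list` itself through the same two grep commands.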

2. Setting up k8s

(1) Copy all files from kubernetes/server/bin into /home/docker/xu/k8s on both the master and the minion.

(2) In /home/docker/xu/k8s on the master node, create run-apiserver.sh, run-controller-manager.sh, and run-scheduler.sh with the following contents:

./kube-apiserver --address=0.0.0.0  --insecure-port=8080 --service-cluster-ip-range='192.168.110.0/24' --kubelet_port=10250 --v=0 --logtostderr=true --etcd_servers=http://192.168.110.151:2379 --allow_privileged=false  >> /opt/k8s/kube-apiserver.log 2>&1 &

./kube-controller-manager  --v=0 --logtostderr=false --log_dir=/opt/k8s/kube --master=192.168.110.151:8080 >> /opt/k8s/kube-controller-manager.log 2>&1 &


./kube-scheduler  --master='192.168.110.151:8080' --v=0  --log_dir=/opt/k8s/kube  >> /opt/k8s/kube-scheduler.log 2>&1 &

Note: create the k8s directory under /opt, and create the corresponding log files inside it.

Give all three scripts execute permission: chmod +x <filename>.
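The two notes above can be sketched as a short preparation script. The /opt/k8s path comes from the start scripts; the fallback to a temp directory is only so the sketch also runs where /opt is not writable:

```shell
# Prepare the log directory and empty log files the three start scripts
# redirect into, then make the scripts executable.
logdir=/opt/k8s
mkdir -p "$logdir/kube" 2>/dev/null || { logdir=$(mktemp -d); mkdir -p "$logdir/kube"; }
for f in kube-apiserver kube-controller-manager kube-scheduler; do
  touch "$logdir/$f.log"
done
# Make the start scripts executable (skipped silently if absent).
chmod +x run-apiserver.sh run-controller-manager.sh run-scheduler.sh 2>/dev/null || true
ls "$logdir"
```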

(3) In /home/docker/xu/k8s on the minion node, create run-proxy.sh and run-let.sh with the following contents:

./kube-proxy  --logtostderr=true --v=0 --master=http://192.168.110.151:8080 --hostname_override=192.168.110.152   >> /opt/k8s/kube-proxy.log

./kubelet  --logtostderr=true --v=0 --allow-privileged=false  --log_dir=/opt/k8s/logs/kube  --address=0.0.0.0  --port=10250  --hostname_override=192.168.110.152  --api_servers=http://192.168.110.151:8080   >> /opt/k8s/kube-kubelet.log

Note: create the k8s directory under /opt, and create the corresponding log files inside it.

Give both scripts execute permission: chmod +x <filename>.

(4) In /home/docker/xu/k8s on the master node, run:

./run-apiserver.sh

./run-controller-manager.sh

./run-scheduler.sh

(5) In /home/docker/xu/k8s on the minion node, run:

./run-proxy.sh

./run-let.sh

(6) Run the following commands on the master node. Output like this means the cluster is up:

root@master:/home/docker/xu/kubernetes/server/bin# kubectl get nodes
NAME              STATUS    AGE
192.168.110.152   Ready     1d

root@master:/home/docker/xu/kubernetes/server/bin# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

root@master:/home/docker/xu/kubernetes/server/bin# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3. Setting up the experiment

(1) Experiment description

The experiment creates two pods; one pod (a Tomcat web application) connects to the other (MySQL) and communicates with it.

(2) In /home/docker/xu/test on the master node, create mysql.yaml and tomcat.yaml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql_pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql_pod
    spec:
      containers:
        - name: mysql
          image: 192.168.110.151:5000/mysql
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
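The Service and the ReplicationController in mysql.yaml are linked only by labels: the Service's spec.selector must match the pod template's metadata.labels, or the Service will have no endpoints. A tiny sketch of that matching rule, with the values copied from the YAML:

```shell
# A Service routes to any pod whose labels match its selector.
service_selector="app=mysql_pod"   # Service spec.selector in mysql.yaml
pod_labels="app=mysql_pod"         # RC pod template metadata.labels in mysql.yaml
if [ "$service_selector" = "$pod_labels" ]; then
  match=yes
else
  match=no
fi
echo "selector matches pod labels: $match"
```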


apiVersion: v1
kind: Service
metadata:
  name: hpe-java-web
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31002
  selector:
    app: hpe_java_web_pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hpe-java-web-deployement
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hpe_java_web_pod
    spec:
      containers:
        - name: myweb
          image: 192.168.110.151:5000/tomcat8
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
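Because the hpe-java-web Service is type NodePort, it is reachable two ways: inside the cluster at its cluster IP on port 8080, and from outside at any node's IP on nodePort 31002. A sketch deriving the external URL from the values above (the cluster IP itself is only assigned when the Service is created):

```shell
# External access URL for the NodePort Service defined above.
node_ip=192.168.110.152   # the minion's IP, from the environment description
node_port=31002           # nodePort from the YAML above
external_url="http://${node_ip}:${node_port}"
echo "$external_url"
```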


(3) Build the Tomcat image

The Dockerfile:

FROM tomcat
MAINTAINER xuguokun <921586520@qq.com>
ADD K8S.war /usr/local/tomcat/webapps/

K8S.war is a Java web project. It is very simple, just a single JSP file, whose content is:

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="java.sql.*" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
<table border="1" align="center">
  <tr>
    <td>Card No.</td>
  </tr>
  <%
    String driverClass = "com.mysql.jdbc.Driver";
    // Service address and port injected by Kubernetes as environment variables
    String ip = System.getenv("MYSQL_SERVICE_HOST");
    String port = System.getenv("MYSQL_SERVICE_PORT");
    Connection conn;
    try {
      Class.forName(driverClass);
      conn = DriverManager.getConnection("jdbc:mysql://" + ip + ":" + port + "/bms", "root", "123456");
      Statement stmt = conn.createStatement();
      String sql = "select * from bms_appuser";
      ResultSet rs = stmt.executeQuery(sql);
      while (rs.next()) {
  %>
  <tr>
    <td><%=rs.getString(3) %></td>
  </tr>
  <%
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    }
  %>
</table>
</body>
</html>

Build the Tomcat image (run in the directory containing the Dockerfile and K8S.war), then push it so the minion can pull it from the private registry:

docker build -t 192.168.110.151:5000/tomcat8 .
docker push 192.168.110.151:5000/tomcat8


(4) Create the pods: in the test directory, run:

kubectl create -f mysql.yaml
kubectl create -f tomcat.yaml

(5) Run the following commands in the test directory on the master node. Output like this means the environment is working:

root@master:/home/docker/xu/test# kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
hpe-java-web-deployement-w8kts   1/1       Running   0          12h
mysql-deployment-ovz6y           1/1       Running   0          12h


root@master:/home/docker/xu/test# kubectl get service
NAME           CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
hpe-java-web   192.168.110.101   <nodes>       8080/TCP   12h
kubernetes     192.168.110.1     <none>        443/TCP    1d
mysql          192.168.110.158   <none>        3306/TCP   12h

root@master:/home/docker/xu/test# kubectl exec hpe-java-web-deployement-w8kts -- printenv | grep SERVICE
MYSQL_SERVICE_HOST=192.168.110.158
KUBERNETES_SERVICE_PORT_HTTPS=443
HPE_JAVA_WEB_SERVICE_PORT=8080
MYSQL_SERVICE_PORT=3306
HPE_JAVA_WEB_SERVICE_HOST=192.168.110.101
KUBERNETES_SERVICE_HOST=192.168.110.1
KUBERNETES_SERVICE_PORT=443
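This printenv output is exactly what the JSP consumes: kubelet injects MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT for the mysql Service, and the JSP concatenates them into a JDBC URL. A sketch reconstructing that URL the same way, using the values shown above:

```shell
# Rebuild the JDBC URL the way the JSP does, from the injected
# service-discovery env vars (values taken from the printenv output above).
MYSQL_SERVICE_HOST=192.168.110.158
MYSQL_SERVICE_PORT=3306
jdbc_url="jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/bms"
echo "$jdbc_url"
```

Note that these variables are only injected for Services that already exist when the pod starts, which is one reason to create mysql.yaml before tomcat.yaml.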

(6) Create the data: enter the MySQL container with docker exec, create the bms database, and create the bms_appuser table in it with three columns: id, name, and value.
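Step (6) can be sketched as the SQL below. The column types and the sample row are assumptions (the article only names the three columns); the JSP displays the third column (value), so the card numbers belong there:

```shell
# SQL for step (6); run it inside the MySQL container, e.g.:
#   docker exec -i <mysql-container> mysql -uroot -p123456
# Column types and sample data are assumptions, not from the article.
sql='CREATE DATABASE IF NOT EXISTS bms;
USE bms;
CREATE TABLE IF NOT EXISTS bms_appuser (
  id    INT PRIMARY KEY AUTO_INCREMENT,
  name  VARCHAR(64),
  value VARCHAR(64)   -- the JSP reads this third column
);
INSERT INTO bms_appuser (name, value) VALUES ("card1", "00901016");'
printf '%s\n' "$sql"
```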

(7) From a browser on the 152 node, open http://192.168.110.101:8080/K8S/index.jsp.

The page displays:

Card No.
00901016
0090051F
00900E33

4. End of experiment
