Introduction to Kubernetes


1. Kubernetes Core Concepts

  For the core concepts of k8s, there is a concise and well-organized article at http://www.dockone.io/article/932; they will not be repeated here.

2. Installing Kubernetes

2.1. Environment

  Operating system: CentOS-7-x86_64-Minimal-1611. Node details:

    Node             hostname  IP            Components
    K8S Master       node-0    10.211.55.4   etcd, flannel, k8s
    K8S Slave-1      node-1    10.211.55.19  flannel, k8s
    K8S Slave-2      node-2    10.211.55.20  flannel, k8s
    Docker registry  node-3    10.211.55.15  docker

Deploying and using the Docker registry is out of scope for this article.

2.2. Software Versions

  • ETCD: 3.1.9
  • Flannel: 0.7.1
  • Kubernetes: 1.5.2

2.3. Installation Steps

2.3.1. Installing ETCD

  1. Configure the firewall to open etcd's default port, 2379

    [chenlei@node-0 ~]$ sudo firewall-cmd --zone=public --add-port=2379/tcp --permanent
    [chenlei@node-0 ~]$ sudo firewall-cmd --reload
    [chenlei@node-0 ~]$ sudo firewall-cmd --list-all
  2. Install etcd with yum

    [chenlei@node-0 ~]$ sudo yum install -y etcd
  3. Configure etcd: modify ETCD_LISTEN_CLIENT_URLS and ETCD_ADVERTISE_CLIENT_URLS in /etc/etcd/etcd.conf

    [chenlei@node-0 ~]$ sudo vi /etc/etcd/etcd.conf
    # [member]
    ETCD_NAME=default
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_SNAPSHOT_COUNT="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
    #ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
    #ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    #ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
    ETCD_ADVERTISE_CLIENT_URLS="http://node-0:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #
    #[proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[security]
    #ETCD_CERT_FILE=""
    #ETCD_KEY_FILE=""
    #ETCD_CLIENT_CERT_AUTH="false"
    #ETCD_TRUSTED_CA_FILE=""
    #ETCD_AUTO_TLS="false"
    #ETCD_PEER_CERT_FILE=""
    #ETCD_PEER_KEY_FILE=""
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    #ETCD_PEER_TRUSTED_CA_FILE=""
    #ETCD_PEER_AUTO_TLS="false"
    #
    #[logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    #
    #[profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
  4. Start the etcd service

    [chenlei@node-0 ~]$ systemctl start etcd
    [chenlei@node-0 ~]$ systemctl status etcd
  5. Verify that etcd is healthy

    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://node-0:2379
    cluster is healthy
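
    As an optional sanity check (not part of the original steps), you can write and read back a throwaway key with etcdctl's v2 API, which this etcd packaging uses by default; the key name /test is arbitrary:

    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' set /test 'hello'   # write a test key
    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' get /test           # should print: hello
    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' rm /test            # clean up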

2.3.2. Installing Flannel

  1. Install flanneld on node-0, node-1, and node-2

    [chenlei@node-0 ~]$ sudo yum install flannel -y
    [chenlei@node-1 ~]$ sudo yum install flannel -y
    [chenlei@node-2 ~]$ sudo yum install flannel -y
  2. On node-0, node-1, and node-2, edit /etc/sysconfig/flanneld so that FLANNEL_ETCD_ENDPOINTS points to the etcd service installed above

    [chenlei@node-0 ~]$ sudo vi /etc/sysconfig/flanneld
    # Flanneld configuration options
    # etcd url location.  Point this to the server where etcd runs
    #FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
    FLANNEL_ETCD_ENDPOINTS="http://node-0:2379"
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS="eth0"

    Make the identical change on node-1 and node-2.
  3. Register flannel's network configuration in etcd. The key prefix is the value of FLANNEL_ETCD_PREFIX from the file edited in the previous step, which defaults to "/atomic.io/network"

    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' mk /atomic.io/network/config '{ "Network": "10.2.0.0/16" }'

    Note: only a class-B network seems to work here. With the subnet mask set to 24, flannel failed to start, reporting a subnet registration failure; the host's networking then broke and the SSH connection dropped.
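
    To confirm the registration (an optional check), read the key back:

    [chenlei@node-0 ~]$ etcdctl --endpoints 'http://node-0:2379' get /atomic.io/network/config
    { "Network": "10.2.0.0/16" }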

  4. Start the flannel service on node-0, node-1, and node-2

    [chenlei@node-0 ~]$ systemctl start flanneld
    [chenlei@node-0 ~]$ systemctl status flanneld
    [chenlei@node-0 ~]$ ip addr
    [chenlei@node-1 ~]$ systemctl start flanneld
    [chenlei@node-1 ~]$ systemctl status flanneld
    [chenlei@node-1 ~]$ ip addr
    [chenlei@node-2 ~]$ systemctl start flanneld
    [chenlei@node-2 ~]$ systemctl status flanneld
    [chenlei@node-2 ~]$ ip addr

    Once flannel has started successfully, the ip addr command shows a new network device: flannel0.
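
    You can also check the subnet flannel leased to each node: flanneld writes it to /run/flannel/subnet.env. The values below are illustrative, not captured from these hosts:

    [chenlei@node-1 ~]$ cat /run/flannel/subnet.env
    FLANNEL_NETWORK=10.2.0.0/16
    FLANNEL_SUBNET=10.2.58.1/24
    FLANNEL_MTU=1472
    FLANNEL_IPMASQ=false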

2.3.3. Installing K8S

To install from the official repo, create the corresponding repo file locally. On my first attempt I used the default CentOS sources directly, and the installed version turned out to be the same as the one from the official repo.

  1. Create the official repo file (run on node-0, node-1, and node-2)

    [chenlei@node-0 ~]$ sudo vi /etc/yum.repos.d/virt7-docker-common-release.repo
    [virt7-docker-common-release]
    name=virt7-docker-common-release
    baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
    gpgcheck=0
  2. Install kubernetes (run on node-0, node-1, and node-2; the default is not the latest version. If Docker is already installed, uninstall it first: k8s installs its own Docker)

    [chenlei@node-0 ~]$ sudo yum -y install --enablerepo=virt7-docker-common-release kubernetes
  3. Configure /etc/kubernetes/apiserver (run on node-0; modify KUBE_API_ADDRESS, KUBE_API_PORT, KUBE_ETCD_SERVERS, and KUBE_ADMISSION_CONTROL)

    [chenlei@node-0 ~]$ sudo vi /etc/kubernetes/apiserver
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    # The address on the local server to listen to.
    #KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    # The port on the local server to listen on.
    # KUBE_API_PORT="--port=8080"
    KUBE_API_PORT="--port=8080"
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    # Comma separated list of nodes in the etcd cluster
    #KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
    KUBE_ETCD_SERVERS="--etcd-servers=http://node-0:2379"
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    # default admission control policies
    #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    # Add your own!
    KUBE_API_ARGS=""
  4. Configure /etc/kubernetes/config (run on node-0, node-1, and node-2; modify KUBE_MASTER so that all three nodes point to the same master)

    [chenlei@node-0 ~]$ sudo vi /etc/kubernetes/config
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    # How the controller-manager, scheduler, and proxy find the apiserver
    #KUBE_MASTER="--master=http://127.0.0.1:8080"
    KUBE_MASTER="--master=http://node-0:8080"
  5. Configure the firewall to open port 8080 on node-0, the master (run on node-0)

    [chenlei@node-0 ~]$ sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
    [chenlei@node-0 ~]$ sudo firewall-cmd --zone=public --add-port=2379/tcp --permanent
    [chenlei@node-0 ~]$ sudo firewall-cmd --reload
    [chenlei@node-0 ~]$ sudo firewall-cmd --list-all
  6. Start the kube-apiserver, kube-controller-manager, and kube-scheduler services (run on node-0)

    [chenlei@node-0 ~]$ systemctl start kube-apiserver
    [chenlei@node-0 ~]$ systemctl status kube-apiserver
    [chenlei@node-0 ~]$ systemctl start kube-controller-manager
    [chenlei@node-0 ~]$ systemctl status kube-controller-manager
    [chenlei@node-0 ~]$ systemctl start kube-scheduler
    [chenlei@node-0 ~]$ systemctl status kube-scheduler
  7. Configure /etc/kubernetes/kubelet (run on the two slave nodes node-1 and node-2; modify KUBELET_ADDRESS, KUBELET_HOSTNAME, KUBELET_API_SERVER, and KUBELET_POD_INFRA_CONTAINER)

    [chenlei@node-1 ~]$ sudo vi /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (minion) config
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    #KUBELET_ADDRESS="--address=127.0.0.1"
    KUBELET_ADDRESS="--address=0.0.0.0"
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    # You may leave this blank to use the actual hostname
    #KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
    KUBELET_HOSTNAME="--hostname-override=node-1"
    # location of the api-server
    #KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
    KUBELET_API_SERVER="--api-servers=http://node-0:8080"
    # pod infrastructure container
    #KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.211.55.15:5000/rhel7/pod-infrastructure:latest"
    # Add your own!
    KUBELET_ARGS=""

    On node-2 the file is identical except for the hostname override:

    KUBELET_HOSTNAME="--hostname-override=node-2"

    Note on KUBELET_POD_INFRA_CONTAINER: every container that k8s creates is accompanied by a pod-infrastructure container, and the image for that companion container is the one specified by KUBELET_POD_INFRA_CONTAINER. The default, registry.access.redhat.com/rhel7/pod-infrastructure:latest, may be slow or impossible to download; once you obtain it, push it to a private registry for later use. Using a private registry with k8s is covered below, and network-drive links for these images are at the end of the article.
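
    If a node can reach registry.access.redhat.com, one way to mirror the image into the private registry at 10.211.55.15:5000 is a pull/tag/push sequence (a sketch; otherwise use the tarball import method in section 8.2):

    [chenlei@node-1 ~]$ sudo docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
    [chenlei@node-1 ~]$ sudo docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 10.211.55.15:5000/rhel7/pod-infrastructure:latest
    [chenlei@node-1 ~]$ sudo docker push 10.211.55.15:5000/rhel7/pod-infrastructure:latest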

  8. Start the kubelet and kube-proxy services (run on the two slave nodes node-1 and node-2)

    [chenlei@node-1 ~]$ systemctl start kubelet
    [chenlei@node-1 ~]$ systemctl status kubelet
    [chenlei@node-1 ~]$ systemctl start kube-proxy
    [chenlei@node-1 ~]$ systemctl status kube-proxy

    [chenlei@node-2 ~]$ systemctl start kubelet
    [chenlei@node-2 ~]$ systemctl status kubelet
    [chenlei@node-2 ~]$ systemctl start kube-proxy
    [chenlei@node-2 ~]$ systemctl status kube-proxy
  9. Check the cluster node status (run on node-0; subsequent operations are mostly performed on this master)

    [chenlei@node-0 ~]$ kubectl get nodes
    NAME      STATUS    AGE       VERSION
    node-1    Ready     21d       v1.5.2
    node-2    Ready     21d       v1.5.2

    At this point the k8s cluster is up. The following sections cover application deployment.

3. Deploying Applications on Kubernetes

3.1. Deploying a Pod

  • Write the YAML file: single-config-file.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-containers
    spec:
      restartPolicy: Never
      containers:
      - name: tomcat
        image: 10.211.55.15:5000/tomcat:9.0.0.M26
        ports:
        - containerPort: 8080
          hostPort: 8080
        command: ["/bin/sh"]
        args: ["-c", "/usr/local/tomcat/bin/startup.sh && tail -f /usr/local/tomcat/logs/catalina.out"]
  • Create/deploy the pod

    [chenlei@node-0 ~]$ kubectl create -f single-config-file.yaml 
  • View pod deployment info

    [chenlei@node-0 ~]$ kubectl get pods
    [chenlei@node-0 ~]$ kubectl get pod pod-containers -o wide
  • Access the service in the pod; make sure the firewall allows the hostPort (a curl check is sketched at the end of this item)

    Open http://node-1:8080 in a browser (http://node-ip:hostPort).

    If the page loads extremely slowly, search the web for "Tomcat slow startup".
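
    A quick check from the command line (optional; any host that can reach node-1 works):

    [chenlei@node-0 ~]$ curl -I http://node-1:8080    # expect an HTTP status line from Tomcat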

  • Delete the pod

    [chenlei@node-0 ~]$ kubectl delete pod pod-containers

3.2. Deploying an RC (ReplicationController)

  • Write the YAML file (replication-controller-config-file.yaml)

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-replication
      labels:
        name: my-replication
    spec:
      replicas: 4
      selector:
        name: my-replication-pod
      template:
        metadata:
          labels:
            name: my-replication-pod
        spec:
          containers:
          - name: my-containers
            image: 10.211.55.15:5000/tomcat:9.0.0.M26
            ports:
            - containerPort: 8080
            command: ["/bin/sh"]
            args: ["-c", "/usr/local/tomcat/bin/startup.sh && tail -f /usr/local/tomcat/logs/catalina.out"]
  • Create/deploy the RC

    [chenlei@node-0 ~]$ kubectl create -f replication-controller-config-file.yaml
    replicationcontroller "my-replication" created
  • View RC deployment info

    [chenlei@node-0 ~]$ kubectl get rc
  • View the pods

    [chenlei@node-0 ~]$ kubectl get pods
    NAME                   READY     STATUS    RESTARTS   AGE
    my-replication-3n1qs   1/1       Running   0          3m
    my-replication-cv26z   1/1       Running   0          3m
    my-replication-g5w5q   1/1       Running   0          3m
    my-replication-n6zmn   1/1       Running   0          3m
  • View RC details

    [chenlei@node-0 ~]$ kubectl describe rc my-replication
    Name:         my-replication
    Namespace:    default
    Selector:     name=my-replication-pod
    Labels:       name=my-replication
    Annotations:  <none>
    Replicas:     4 current / 4 desired
    Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels: name=my-replication-pod
      Containers:
       my-containers:
        Image:    10.211.55.15:5000/tomcat:9.0.0.M26
        Port:     8080/TCP
        Command:
          /bin/sh
        Args:
          -c
          /usr/local/tomcat/bin/startup.sh && tail -f /usr/local/tomcat/logs/catalina.out
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Events:
      FirstSeen  LastSeen  Count  From                    SubObjectPath  Type    Reason            Message
      ---------  --------  -----  ----                    -------------  ------  ------            -------
      3m         3m        1      replication-controller                 Normal  SuccessfulCreate  Created pod: my-replication-3n1qs
      3m         3m        1      replication-controller                 Normal  SuccessfulCreate  Created pod: my-replication-n6zmn
      3m         3m        1      replication-controller                 Normal  SuccessfulCreate  Created pod: my-replication-cv26z
      3m         3m        1      replication-controller                 Normal  SuccessfulCreate  Created pod: my-replication-g5w5q
  • Scale the RC

    [chenlei@node-0 ~]$ kubectl scale rc my-replication --replicas=3
    replicationcontroller "my-replication" scaled
    [chenlei@node-0 Kubernetes-test]$ kubectl get pods
    NAME                   READY     STATUS        RESTARTS   AGE
    my-replication-3n1qs   1/1       Running       0          55m
    my-replication-cv26z   1/1       Running       0          55m
    my-replication-g5w5q   1/1       Terminating   0          55m
    my-replication-n6zmn   1/1       Running       0          55m
  • Access the application in the RC

    Applications managed by an RC are accessed through a Service, described below.

  • Delete the RC

    [chenlei@node-0 ~]$ kubectl delete -f replication-controller-config-file.yaml 

3.3. Deploying a Service

  • The Service is created on top of the running RC from the previous section, so first repeat "Create/deploy the RC" (it was deleted in the previous step)

    [chenlei@node-0 ~]$ kubectl create -f replication-controller-config-file.yaml
    replicationcontroller "my-replication" created
  • Write the YAML file (service-config-file.yaml; this example exposes the service via NodePort)

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      labels:
        name: my-service
    spec:
      type: NodePort
      ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
        nodePort: 30002
      selector:
        name: my-replication-pod
  • Create/deploy the Service

    [chenlei@node-0 ~]$ kubectl create -f service-config-file.yaml
    service "my-service" created
  • View the service

    [chenlei@node-0 ~]$ kubectl get services
    NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    kubernetes   10.254.0.1       <none>        443/TCP        21d
    my-service   10.254.160.164   <nodes>       80:30002/TCP   11m
  • View the service's endpoints

    [chenlei@node-0 ~]$ kubectl get endpoints
    NAME         ENDPOINTS                                                 AGE
    kubernetes   10.211.55.4:6443                                          21d
    my-service   10.2.58.4:8080,10.2.58.5:8080,10.2.9.2:8080 + 1 more...   47s
  • View service details

    [chenlei@node-0 ~]$ kubectl describe svc my-service
    Name:             my-service
    Namespace:        default
    Labels:           name=my-service
    Annotations:      <none>
    Selector:         name=my-replication-pod
    Type:             NodePort
    IP:               10.254.110.8
    Port:             <unset> 80/TCP
    NodePort:         <unset> 30002/TCP
    Endpoints:        10.2.58.4:8080,10.2.58.5:8080,10.2.9.2:8080 + 1 more...
    Session Affinity: None
    Events:           <none>
  • Check whether node-1 or node-2 is listening on port 30002

    [chenlei@node-1 ~]$ ss -ant | grep 30002
    LISTEN     0      128         :::30002                   :::*
  • Access the service; make sure the firewall allows port 30002 (see the example command at the end of this item)

    Open http://node-1:30002 in a browser (http://node-ip:nodePort).

    If your browser shows something like this:

    (screenshot)

    Try turning off the firewall and reloading the page. This happens because iptables rejects all forwarded traffic; the pitfalls section below explains it further and gives a workaround.

    The correct page looks like this:

    (screenshot)
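
    Opening the nodePort follows the same pattern as the earlier firewall commands (run on each slave node):

    [chenlei@node-1 ~]$ sudo firewall-cmd --zone=public --add-port=30002/tcp --permanent
    [chenlei@node-1 ~]$ sudo firewall-cmd --reload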

  • Delete the service

    [chenlei@node-0 ~]$ kubectl delete service my-service
    service "my-service" deleted

    Deleting the service does not delete the corresponding RC or the pods it manages.
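
    To tear everything down, delete the RC as well; deleting the RC also removes the pods it manages:

    [chenlei@node-0 ~]$ kubectl delete rc my-replication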

4. Deploying the Dashboard

  Dashboard is a web UI for k8s that makes common operations more intuitive.

4.1. Preparing the Image

  Deploying the Dashboard requires the kubernetes-dashboard-amd64 image. Since my k8s version is 1.5.x, the image version must also be 1.5.x. The YAML on GitHub references the repository gcr.io/google_containers/kubernetes-dashboard-amd64, so the full image is gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1. For well-known reasons you may not be able to download this image, so a network-drive link is provided at the end of this article; download it, import it, and push it to your own private registry.

4.2. Creating the YAML Files

  • kubernetes-dashboard-deployment.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      # Keep the name in sync with image version and
      # gce/coreos/kube-manifests/addons/dashboard counterparts
      name: kubernetes-dashboard-latest
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
            version: latest
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kubernetes-dashboard
            image: 10.211.55.15:5000/google_containers/kubernetes-dashboard-amd64:v1.5.1
            ports:
            - containerPort: 9090
            args:
            - --apiserver-host=http://10.211.55.4:8080
            livenessProbe:
              httpGet:
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30
  • kubernetes-dashboard-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
    spec:
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 80
        targetPort: 9090
        nodePort: 30001

4.3. Creating the Services

  • Create the Deployment

    [chenlei@node-0 ~]$ kubectl create -f kubernetes-dashboard-deployment.yaml
  • Create the Service

    [chenlei@node-0 ~]$ kubectl create -f kubernetes-dashboard-service.yaml
  • Check that the nodes are listening on port 30001

    [chenlei@node-1 ~]$ ss -ant | grep 30001
    LISTEN     0      128         :::30001                   :::*

    [chenlei@node-2 ~]$ ss -ant | grep 30001
    LISTEN     0      128         :::30001                   :::*

4.4. Accessing the Service via nodePort

(screenshot)

  • If you have trouble accessing it, scroll back up!
  • The Dashboard opens in the default namespace, but our YAML files specified "namespace: kube-system", so remember to switch namespaces.

5. Exposing Services in Kubernetes

  Kubernetes offers three ways to expose services externally: NodePort Services, LoadBalancer Services, and Ingress.

  NodePort was covered by the Service above. NodePort opens the same port on every slave node, and the service is reached via nodeIP:NodePort. Once you run many services, the set of NodePorts opened on every node becomes huge and hard to maintain, so this approach is not recommended when deploying a large number of services.

  LoadBalancer relies on an external load balancer. I have not deployed one yet; to be added later.

  Ingress can be understood simply as a proxy service plus forwarding rules: the proxy accepts all incoming requests and forwards them to Services inside the cluster according to the rules. Only the proxy's own ports need to be exposed externally; the proxy runs inside the cluster and can freely reach any Service there. So in practice an Ingress setup consists of two parts: the rule definitions and the proxy service (that is, the Ingress and the Ingress Controller).
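
  As a minimal sketch of the rule half, here is a hypothetical Ingress (same API version as the manifests below) that would route requests for the made-up host my-app.example.com to the my-service Service from section 3.3:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: my-app.example.com        # hypothetical hostname
        http:
          paths:
          - path: /
            backend:
              serviceName: my-service   # the Service from section 3.3
              servicePort: 80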

5.1. Deploying Ingress

5.1.1. Preparing the Images

  The following images are required:

  • gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
  • gcr.io/google_containers/defaultbackend:1.0

defaultbackend handles requests that match no proxy rule (in practice, it just returns 404).
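
After the controller is deployed (section 5.1.3) and before any Ingress rules exist, every request should fall through to the default backend; a sketch, assuming the controller runs with hostNetwork on the slave nodes as in the DaemonSet below:

    [chenlei@node-1 ~]$ curl http://node-1:10254/healthz    # controller health endpoint (port taken from the DaemonSet probes)
    [chenlei@node-1 ~]$ curl http://node-1/                 # no rule matches, so expect the default backend's 404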

5.1.2. Creating the YAML Files

  • default-backend.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        k8s-app: default-http-backend
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: default-http-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            # Any image is permissable as long as:
            # 1. It serves a 404 page at /
            # 2. It serves 200 on a /healthz endpoint
            #image: gcr.io/google_containers/defaultbackend:1.0
            image: 10.211.55.15:5000/google_containers/defaultbackend:1.0
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            ports:
            - containerPort: 8080
            resources:
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: kube-system
      labels:
        k8s-app: default-http-backend
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        k8s-app: default-http-backend
  • nginx-ingress-daemonset.yaml

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-lb
      labels:
        name: nginx-ingress-lb
      namespace: kube-system
    spec:
      template:
        metadata:
          labels:
            name: nginx-ingress-lb
          annotations:
            prometheus.io/port: '10254'
            prometheus.io/scrape: 'true'
        spec:
          hostNetwork: true
          terminationGracePeriodSeconds: 60
          containers:
          #- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
          - image: 10.211.55.15:5000/google_containers/nginx-ingress-controller:0.9.0-beta.13
            name: nginx-ingress-lb
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              timeoutSeconds: 1
            ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --apiserver-host=http://node-0:8080
  • dashboard-ingress.yaml

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: dashboard-ingress
      annotations:
        ingress.kubernetes.io/force-ssl-redirect: "false"
        ingress.kubernetes.io/ssl-redirect: "false"
    #    ingress.kubernetes.io/rewrite-target: /
      namespace: kube-system
    spec:
      rules:
    #  - host: chenlei.com
      - http:
          paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80

    Pay close attention to the indentation in dashboard-ingress.yaml.

5.1.3. Creating the Services

Make sure kubernetes-dashboard-service.yaml from the previous section has been deployed correctly first.

  • Deploy default-backend

    [chenlei@node-0 ~]$ kubectl create -f default-backend.yaml
  • Deploy nginx-ingress-daemonset

    [chenlei@node-0 ~]$ kubectl create -f nginx-ingress-daemonset.yaml
  • Deploy dashboard-ingress

    [chenlei@node-0 ~]$ kubectl create -f dashboard-ingress.yaml

5.1.4. Viewing the Deployment

  • View the services

    [chenlei@node-0 ~]$ kubectl -n kube-system get all
    NAME                                             READY     STATUS    RESTARTS   AGE
    po/default-http-backend-3495647973-x838r         1/1       Running   2          1d
    po/kubernetes-dashboard-latest-217549839-flpgp   1/1       Running   3          1d
    po/nginx-ingress-lb-vrvrv                        1/1       Running   1          1d
    po/nginx-ingress-lb-wnm1s                        1/1       Running   1          1d

    NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    svc/default-http-backend   10.254.155.251   <none>        80/TCP         1d
    svc/kubernetes-dashboard   10.254.175.156   <nodes>       80:30001/TCP   1d

    NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deploy/default-http-backend          1         1         1            1           1d
    deploy/kubernetes-dashboard-latest   1         1         1            1           1d

    NAME                                       DESIRED   CURRENT   READY     AGE
    rs/default-http-backend-3495647973         1         1         1         1d
    rs/kubernetes-dashboard-latest-217549839   1         1         1         1d
  • View the ingress

    [chenlei@node-0 ~]$ kubectl -n kube-system get ing
    NAME                HOSTS     ADDRESS            PORTS     AGE
    dashboard-ingress   *         10.211.55.19,...   80        1d
    [chenlei@node-0 ~]$ kubectl -n kube-system describe ing dashboard-ingress
    Name:             dashboard-ingress
    Namespace:        kube-system
    Address:          10.211.55.19,10.211.55.20
    Default backend:  default-http-backend:80 (10.2.87.3:8080)
    Rules:
      Host    Path    Backends
      ----    ----    --------
      *       /       kubernetes-dashboard:80 (10.2.87.2:9090)
    Annotations:
      force-ssl-redirect: false
      ssl-redirect:       false
    Events:
      FirstSeen  LastSeen  Count  From                SubObjectPath  Type    Reason  Message
      ---------  --------  -----  ----                -------------  ------  ------  -------
      12m        12m       1      ingress-controller                 Normal  CREATE  Ingress kube-system/dashboard-ingress
      11m        11m       1      ingress-controller                 Normal  CREATE  Ingress kube-system/dashboard-ingress

5.1.5. Accessing the Service

(screenshot)

Note: open ports 80 and 443 in the firewall.

Note: when browsing, select the right namespace; all of the YAML files above set namespace: kube-system.

6. Common Kubernetes Commands

  • Deploy an application in the simplest way

    [chenlei@node-0 ~]$ kubectl run my-alpine --image=alpine --replicas=6 ping www.baidu.com

    This creates a Deployment; simply deleting a pod causes a replacement to start immediately, so you must delete the Deployment itself.

  • Create a pod from a YAML file (single-config-file.yaml)

    [chenlei@node-0 ~]$ kubectl create -f single-config-file.yaml

    This approach can create any kind of k8s object, e.g. pod, deployment, service, and so on.

  • List all pods in the default namespace

    [chenlei@node-0 ~]$ kubectl get pods
  • Show which node each pod is on

    [chenlei@node-0 ~]$ kubectl -o wide get pods
  • List all pods in a given namespace (kube-system)

    [chenlei@node-0 ~]$ kubectl -n kube-system get pods
  • Get a specific pod by name (pod-containers)

    [chenlei@node-0 ~]$ kubectl get pod pod-containers
  • Show details of a specific pod (pod-containers)

    [chenlei@node-0 ~]$ kubectl describe pod pod-containers
  • Delete a specific pod (pod-containers)

    [chenlei@node-0 ~]$ kubectl delete pod pod-containers
  • Delete using the YAML file that created the pod

    [chenlei@node-0 ~]$ kubectl delete -f single-config-file.yaml 

    This style of deletion works for any kind of k8s object, e.g. pod, deployment, service, and so on.

  • Delete all pods in the default namespace

    [chenlei@node-0 ~]$ kubectl delete pods --all
  • List deployments

    [chenlei@node-0 ~]$ kubectl get deployment
  • Delete a deployment by name (my-alpine)

    [chenlei@node-0 ~]$ kubectl delete deployment my-alpine
  • List events

    [chenlei@node-0 ~]$ kubectl get events
  • Delete all events

    [chenlei@node-0 ~]$ kubectl delete events --all

    Events can also be deleted individually by name.

  • List all ReplicationControllers (rc)

    [chenlei@node-0 ~]$ kubectl get rc
  • Change an rc's replica count

    [chenlei@node-0 ~]$ kubectl scale rc my-replication --replicas=3
  • Show the k8s version

    [chenlei@node-0 ~]$ kubectl version
  • Show cluster info

    [chenlei@node-0 ~]$ kubectl cluster-info
  • List services

    [chenlei@node-0 ~]$ kubectl get services
  • List endpoints

    [chenlei@node-0 ~]$ kubectl get endpoints

  The commands above are only a small sample; k8s commands follow the same pattern for every object type. Without -n to specify a namespace, they operate on the default namespace, default. To see every namespace at once, use --all-namespaces, e.g.: [chenlei@node-0 ~]$ kubectl get pods --all-namespaces

7. Pitfalls

  • After creating a pod through k8s, running docker ps on a slave node reports: Cannot connect to the Docker daemon. Is the docker daemon running on this host?

    Cause: services started by systemctl run as root, and the Docker daemon on the slave nodes is started along with kubelet, so docker commands must be run as root. The correct command is sudo docker ps.

  • When a Service is exposed via nodePort, the browser reports "connection refused" even though the firewall port is open; with the firewall completely disabled, access works

    Cause (using port 30001 from above as an example):

    Check the firewall ports:

    [chenlei@node-1 ~]$ sudo firewall-cmd --list-all
    [sudo] password for chenlei:
    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: eth0
      sources:
      services: dhcpv6-client ssh
      ports: 2379/tcp 443/tcp 30001/tcp 80/tcp 8080/tcp 30002/tcp
      protocols:
      masquerade: no
      forward-ports:
      sourceports:
      icmp-blocks:
      rich rules:

    Check the iptables rules (k8s configures its forwarding through iptables):

    [chenlei@node-1 ~]$ sudo iptables -L -n --line-numbers
    Chain INPUT (policy ACCEPT)
    num  target     prot opt source               destination
    1    KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0
    2    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
    4    INPUT_direct  all  --  0.0.0.0/0            0.0.0.0/0
    5    INPUT_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0
    6    INPUT_ZONES  all  --  0.0.0.0/0            0.0.0.0/0
    7    DROP       all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    8    REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

    Chain FORWARD (policy ACCEPT)
    num  target     prot opt source               destination
    1    DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0
    2    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
    3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    4    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
    5    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
    6    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    7    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
    8    FORWARD_direct  all  --  0.0.0.0/0            0.0.0.0/0
    9    FORWARD_IN_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0
    10   FORWARD_IN_ZONES  all  --  0.0.0.0/0            0.0.0.0/0
    11   FORWARD_OUT_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0
    12   FORWARD_OUT_ZONES  all  --  0.0.0.0/0            0.0.0.0/0
    13   DROP       all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    14   REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

    Chain OUTPUT (policy ACCEPT)
    num  target     prot opt source               destination
    1    KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    2    KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0
    3    OUTPUT_direct  all  --  0.0.0.0/0            0.0.0.0/0

    Chain DOCKER (1 references)
    num  target     prot opt source               destination

    Chain DOCKER-ISOLATION (1 references)
    num  target     prot opt source               destination
    1    RETURN     all  --  0.0.0.0/0            0.0.0.0/0

    Chain FORWARD_IN_ZONES (1 references)
    num  target     prot opt source               destination
    1    FWDI_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]
    2    FWDI_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]

    Chain FORWARD_IN_ZONES_SOURCE (1 references)
    num  target     prot opt source               destination

    Chain FORWARD_OUT_ZONES (1 references)
    num  target     prot opt source               destination
    1    FWDO_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]
    2    FWDO_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]

    Chain FORWARD_OUT_ZONES_SOURCE (1 references)
    num  target     prot opt source               destination

    Chain FORWARD_direct (1 references)
    num  target     prot opt source               destination

    Chain FWDI_public (2 references)
    num  target     prot opt source               destination
    1    FWDI_public_log  all  --  0.0.0.0/0            0.0.0.0/0
    2    FWDI_public_deny  all  --  0.0.0.0/0            0.0.0.0/0
    3    FWDI_public_allow  all  --  0.0.0.0/0            0.0.0.0/0
    4    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0

    Chain FWDI_public_allow (1 references)
    num  target     prot opt source               destination

    Chain FWDI_public_deny (1 references)
    num  target     prot opt source               destination

    Chain FWDI_public_log (1 references)
    num  target     prot opt source               destination

    Chain FWDO_public (2 references)
    num  target     prot opt source               destination
    1    FWDO_public_log  all  --  0.0.0.0/0            0.0.0.0/0
    2    FWDO_public_deny  all  --  0.0.0.0/0            0.0.0.0/0
    3    FWDO_public_allow  all  --  0.0.0.0/0            0.0.0.0/0

    Chain FWDO_public_allow (1 references)
    num  target     prot opt source               destination

    Chain FWDO_public_deny (1 references)
    num  target     prot opt source               destination

    Chain FWDO_public_log (1 references)
    num  target     prot opt source               destination

    Chain INPUT_ZONES (1 references)
    num  target     prot opt source               destination
    1    IN_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]
    2    IN_public  all  --  0.0.0.0/0            0.0.0.0/0           [goto]

    Chain INPUT_ZONES_SOURCE (1 references)
    num  target     prot opt source               destination

    Chain INPUT_direct (1 references)
    num  target     prot opt source               destination

    Chain IN_public (2 references)
    num  target     prot opt source               destination
    1    IN_public_log  all  --  0.0.0.0/0            0.0.0.0/0
    2    IN_public_deny  all  --  0.0.0.0/0            0.0.0.0/0
    3    IN_public_allow  all  --  0.0.0.0/0            0.0.0.0/0
    4    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0

    Chain IN_public_allow (1 references)
    num  target     prot opt source               destination
    1    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22 ctstate NEW
    2    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:2379 ctstate NEW
    3    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 ctstate NEW
    4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:30001 ctstate NEW
    5    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 ctstate NEW
    6    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 ctstate NEW
    7    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:30002 ctstate NEW

    Chain IN_public_deny (1 references)
    num  target     prot opt source               destination

    Chain IN_public_log (1 references)
    num  target     prot opt source               destination

    Chain KUBE-FIREWALL (2 references)
    num  target     prot opt source               destination
    1    DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

    Chain KUBE-SERVICES (1 references)
    num  target     prot opt source               destination

    Chain OUTPUT_direct (1 references)
    num  target     prot opt source               destination

    Rule 14 in the FORWARD chain REJECTs all requests. Try deleting this rule and refreshing the page:

    [chenlei@node-1 ~]$ sudo iptables -D FORWARD 14
    [chenlei@node-2 ~]$ sudo iptables -D FORWARD 14

    After the refresh, the page opens normally, but restarting the firewalld service restores the iptables rules, so this is only a temporary fix. Since I don't plan to use nodePort in later work, I didn't dig further.
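
    One possibly persistent alternative (untested here) is a firewalld direct rule, which survives reloads; note that it accepts all forwarded traffic, so weigh the security implications first:

    [chenlei@node-1 ~]$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -j ACCEPT
    [chenlei@node-1 ~]$ sudo firewall-cmd --reload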

  • From inside a container, pinging another k8s service's CLUSTER-IP (VIP) fails

    Cause: this is normal. The path from the VIP into the containers is configured through iptables DNAT rules, which match specific ports, as the IN_public_allow chain above shows. To test whether a service responds, use curl, as sketched below.

    For how k8s forwards traffic through iptables, see: http://www.jianshu.com/p/bbb673e79c3e
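
    For example, from inside a container (or on any node running kube-proxy), curl the my-service CLUSTER-IP from section 3.3; substitute your own VIP and port:

    [chenlei@node-1 ~]$ curl -I http://10.254.160.164:80    # works even though ping to the VIP fails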

8. Appendix

8.1. Using Alibaba's Registry Mirror and a Self-Hosted Private Registry

  Edit /etc/sysconfig/docker on node-1 and node-2, modifying the OPTIONS and INSECURE_REGISTRY values. The full file is shown below (use your own mirror address for registry-mirror and your local image registry for insecure-registry):

    # /etc/sysconfig/docker
    # Modify these options if you want to change the way the docker daemon runs
    #OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror https://xxxxxxxx.mirror.aliyuncs.com'
    if [ -z "${DOCKER_CERT_PATH}" ]; then
        DOCKER_CERT_PATH=/etc/docker
    fi
    # If you want to add your own registry to be used for docker search and docker
    # pull use the ADD_REGISTRY option to list a set of registries, each prepended
    # with --add-registry flag. The first registry added will be the first registry
    # searched.
    #ADD_REGISTRY='--add-registry registry.access.redhat.com'
    # If you want to block registries from being used, uncomment the BLOCK_REGISTRY
    # option and give it a set of registries, each prepended with --block-registry
    # flag. For example adding docker.io will stop users from downloading images
    # from docker.io
    # BLOCK_REGISTRY='--block-registry'
    # If you have a registry secured with https but do not have proper certs
    # distributed, you can tell docker to not look for full authorization by
    # adding the registry to the INSECURE_REGISTRY line and uncommenting it.
    # INSECURE_REGISTRY='--insecure-registry'
    INSECURE_REGISTRY='--insecure-registry 10.211.55.15:5000'
    # On an SELinux system, if you remove the --selinux-enabled option, you
    # also need to turn on the docker_transition_unconfined boolean.
    # setsebool -P docker_transition_unconfined 1
    # Location used for temporary files, such as those created by
    # docker load and build operations. Default is /var/lib/docker/tmp
    # Can be overriden by setting the following environment variable.
    # DOCKER_TMPDIR=/var/tmp
    # Controls the /etc/cron.daily/docker-logrotate cron job status.
    # To disable, uncomment the line below.
    # LOGROTATE=false
    #
    # docker-latest daemon can be used by starting the docker-latest unitfile.
    # To use docker-latest client, uncomment below lines
    #DOCKERBINARY=/usr/bin/docker-latest
    #DOCKERDBINARY=/usr/bin/dockerd-latest
    #DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
    #DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest
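
  Restart Docker afterwards for the changes to take effect (note that this restarts the containers on the node):

    [chenlei@node-1 ~]$ sudo systemctl restart docker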

8.2. Image Downloads (Network Drive)

  • Download links

    pod-infrastructure.tar.gz:http://pan.baidu.com/s/1bWWMWA

    nginx-ingress-controller-0.9.0-beta.13.tar.gz:http://pan.baidu.com/s/1nvn5rLn

    kubernetes-dashboard-amd64-v1.5.1.tar.gz:http://pan.baidu.com/s/1bBYgIq

    defaultbackend-1.0.tar.gz:http://pan.baidu.com/s/1jHDdFuu

  • Import procedure (using kubernetes-dashboard-amd64-v1.5.1.tar.gz as the example)

    --- unpack the tar with gzip ---
    [chenlei@node-1 ~]$ gzip -d kubernetes-dashboard-amd64-v1.5.1.tar.gz
    --- docker load ---
    [chenlei@node-1 ~]$ sudo docker load -i ./kubernetes-dashboard-amd64-v1.5.1.tar
    --- list local images ---
    [chenlei@node-1 ~]$ sudo docker images
    --- tag ---
    [chenlei@node-1 ~]$ sudo docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 10.211.55.15:5000/google_containers/kubernetes-dashboard-amd64:v1.5.1
    --- push ---
    [chenlei@node-1 ~]$ sudo docker push 10.211.55.15:5000/google_containers/kubernetes-dashboard-amd64:v1.5.1
