Installing the Dashboard and Heapster on Kubernetes 1.5




Installing Kubernetes 1.5 on all nodes

The formatting of this blog post got scrambled and is tiring to read, and I don't know how to fix it, so the article has been split into three parts. The links are:

Part 1, cluster installation:   http://blog.csdn.net/wenwst/article/details/54409205
Part 2, dashboard installation: http://blog.csdn.net/wenwst/article/details/54410012
Part 3, heapster installation:  http://blog.csdn.net/wenwst/article/details/54601110



System configuration:

Linux  3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Pre-configuration steps (disable the firewall):

Last login: Mon Dec 26 22:26:56 2016
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@localhost ~]# systemctl stop firewalld

Set the hostname:

hostnamectl --static set-hostname centos-master


Make sure SELinux is disabled:

/etc/selinux/config:
SELINUX=disabled
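To make that change in one step, a minimal sketch (demonstrated here against a temporary copy of the config file, so the snippet is safe to run anywhere; on a real host you would point `sed` at /etc/selinux/config and also run `setenforce 0` to disable SELinux immediately):

```shell
# Sketch: flip SELINUX= to disabled. Works on a temp copy of the config here;
# on the real host use /etc/selinux/config and also run `setenforce 0`.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
mode=$(grep '^SELINUX=' "$cfg")
echo "$mode"
rm -f "$cfg"
```

The change in /etc/selinux/config only takes effect at the next boot, which is why `setenforce 0` is needed as well.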


-----------------------------------------optional--------------------------

Add the following two lines to /etc/hosts:

61.91.161.217 gcr.io
61.91.161.217 www.gcr.io

--------------------------------------------------------------------------




Install the following on every node:

Kubernetes 1.5


Method 1:

From the official documentation.

Add the Kubernetes repository to yum on CentOS:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF


yum install -y socat kubelet kubeadm kubectl kubernetes-cni


--- Note: whether you need this depends on your situation. If pulling images in the next step is too slow, add the following ---



Add the following to Docker's systemd unit file /lib/systemd/system/docker.service:

--registry-mirror="http://b438f72b.m.daocloud.io"


The full file looks like this:

[root@localhost ~]# vi /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --registry-mirror="http://b438f72b.m.daocloud.io"
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
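Editing the unit file works, but it is overwritten when the docker package is upgraded. An alternative sketch, assuming your Docker version supports /etc/docker/daemon.json (the mirror URL is the one used above; the file is written to a temp directory here so the snippet is safe to run):

```shell
# Sketch: set the registry mirror via daemon.json instead of the unit file.
# Written to a temp dir here; on a real host the path is /etc/docker/daemon.json,
# followed by `systemctl restart docker`.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "registry-mirrors": ["http://b438f72b.m.daocloud.io"]
}
EOF
cat "$dir/daemon.json"
```

With daemon.json, `ExecStart=/usr/bin/dockerd` in the unit file can stay untouched.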


Enable Docker at boot:

systemctl enable docker

Start Docker:

systemctl start docker

Enable kubelet at boot:

systemctl enable kubelet

Start kubelet:

systemctl start kubelet

----------------------

systemctl enable docker;systemctl start docker;systemctl enable kubelet;systemctl start kubelet

---------------------------------------------

Pull the images:

images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done



--------------------------------------------------

Note:

Although we pull weaveworks/weave-kube:1.8.2 here, make sure the version matches the one referenced in the weaveworks YAML file you actually deploy.

This matters especially for DNS: kubeadm installs it automatically, so there is no YAML file on disk. Inspect (and edit) it with the following command:

 kubectl --namespace=kube-system edit deployment kube-dns

--------------------------------------------------


These two are for networking:

docker pull weaveworks/weave-kube:1.8.2
docker pull weaveworks/weave-npc:1.8.2

These are for monitoring:

 docker pull kubernetes/heapster:canary

docker pull kubernetes/heapster_influxdb:v0.6

docker pull gcr.io/google_containers/heapster_grafana:v3.1.1



All of the steps above must be performed on every node, and the images must be pulled on every node as well.



Next, configure the cluster.

On your master server, run:

kubeadm init --api-advertise-addresses=192.168.7.206 --pod-network-cidr 10.245.0.0/16

192.168.7.206 above is my master's address.

This command can only be run once. To run it again, you must first execute kubeadm reset.

The output looks like this:

[root@centos-master ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "60a95a.93c425347a1695ab"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 81.803134 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 2.002437 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 22.002704 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206

Copy that last line; it will be needed later.


The master has now been initialized. The last line contains the token that will be used to add nodes.
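If you are scripting the install, the token can be extracted from the saved init output; a small sketch, shown against the final line of the output above:

```shell
# Sketch: pull the token out of the join command printed by `kubeadm init`.
join_cmd='kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206'
token=$(echo "$join_cmd" | sed -n 's/.*--token=\([^ ]*\).*/\1/p')
echo "$token"
```

The same `sed` expression works on the full captured log, since the `--token=` flag appears only on the join line.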

Next, set up the nodes.


Run the following command on every node:

kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206

The output:

[root@centos-minion-1 kubelet]# kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.7.206:9898/cluster-info/v1/?token-id=60a95a"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.7.206:6443]
[bootstrap] Trying to connect to endpoint https://192.168.7.206:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.7.206:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:centos-minion-1 | CA: false
Not before: 2016-12-23 07:06:00 +0000 UTC Not After: 2017-12-23 07:06:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.





On the master, run kubectl get nodes:

[root@centos-master ~]# kubectl get nodes
NAME              STATUS         AGE
centos-master     Ready,master   14m
centos-minion-1   Ready          5m
centos-minion-2   Ready          45s

There is a command circulating online:

kubectl taint nodes --all dedicated-

[root@centos-master ~]# kubectl taint nodes --all dedicated-
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.



This command is supposed to allow pods to be scheduled on the master as well, but here it had no effect (the dedicated taint was not present).

kubectl get nodes


[root@centos-master ~]# kubectl get nodes
NAME              STATUS         AGE
centos-master     Ready,master   21m
centos-minion-1   Ready          13m
centos-minion-2   Ready          8m

Still the same.


Then:

[root@centos-master new]# kubectl --namespace=kube-system get pod
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2088944543-9zfjl                  1/1       Running   0          2d
etcd-centos-master                      1/1       Running   0          2d
kube-apiserver-centos-master            1/1       Running   0          2d
kube-controller-manager-centos-master   1/1       Running   0          2d
kube-discovery-1769846148-6ldk1         1/1       Running   0          2d
kube-proxy-34q7p                        1/1       Running   0          2d
kube-proxy-hqkkg                        1/1       Running   1          2d
kube-proxy-nbgn3                        1/1       Running   0          2d
kube-scheduler-centos-master            1/1       Running   0          2d
weave-net-kkdh9                         2/2       Running   0          42m
weave-net-mtd83                         2/2       Running   0          2m
weave-net-q91sr                         2/2       Running   2          42m




Now the pod network needs to be installed.

Run the following command:

kubectl apply -f https://git.io/weave-kube

The images referenced by that command may be impossible to download; if so, proceed as follows.
[root@centos-master new]# vi weave-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: weave-net
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: weave-net
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          image: weaveworks/weave-kube:1.8.2
          command:
            - /home/weave/launch.sh
          livenessProbe:
            initialDelaySeconds: 30
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
          securityContext:
            privileged: true
          volumeMounts:
            - name: weavedb
              mountPath: /weavedb
            - name: cni-bin
              mountPath: /opt
            - name: cni-bin2
              mountPath: /host_home
            - name: cni-conf
              mountPath: /etc
          resources:
            requests:
              cpu: 10m
        - name: weave-npc
          image: weaveworks/weave-npc:1.8.2
          resources:
            requests:
              cpu: 10m
          securityContext:
            privileged: true
      restartPolicy: Always
      volumes:
        - name: weavedb
          emptyDir: {}
        - name: cni-bin
          hostPath:
            path: /opt
        - name: cni-bin2
          hostPath:
            path: /home
        - name: cni-conf
          hostPath:
            path: /etc


Pay attention to the image references in the file above; if the images you pulled earlier do not match, find images of the corresponding version first.


kubectl apply -f weave-daemonset.yaml

After applying it, list the pods:

[root@localhost ~]# kubectl --namespace=kube-system get pod
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2088944543-xjj21                  1/1       Running   0          55m
etcd-centos-master                      1/1       Running   0          55m
kube-apiserver-centos-master            1/1       Running   0          55m
kube-controller-manager-centos-master   1/1       Running   0          55m
kube-discovery-1769846148-c45gd         1/1       Running   0          55m
kube-dns-2924299975-96xms               4/4       Running   0          55m
kube-proxy-33lsn                        1/1       Running   0          55m
kube-proxy-jnz6q                        1/1       Running   0          55m
kube-proxy-vfql2                        1/1       Running   0          20m
kube-scheduler-centos-master            1/1       Running   0          55m
weave-net-k5tlz                         2/2       Running   0          19m
weave-net-q3n89                         2/2       Running   0          19m
weave-net-x57k7                         2/2       Running   0          19m









Installing the dashboard:



Download the installation YAML file:

wget  https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

[root@centos-master new]# cat kubernetes-dashboard.yaml
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

The dashboard image referenced above (gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0) can be pulled onto the master server in advance.


 kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Check that the dashboard pod, kubernetes-dashboard-3095304083-w3fjd, is running:

[root@centos-master new]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-9zfjl                  1/1       Running   0          2d
kube-system   etcd-centos-master                      1/1       Running   0          2d
kube-system   kube-apiserver-centos-master            1/1       Running   0          2d
kube-system   kube-controller-manager-centos-master   1/1       Running   0          2d
kube-system   kube-discovery-1769846148-6ldk1         1/1       Running   0          2d
kube-system   kube-proxy-34q7p                        1/1       Running   0          2d
kube-system   kube-proxy-hqkkg                        1/1       Running   1          2d
kube-system   kube-proxy-nbgn3                        1/1       Running   0          2d
kube-system   kube-scheduler-centos-master            1/1       Running   0          2d
kube-system   kubernetes-dashboard-3095304083-w3fjd   1/1       Running   0          31m
kube-system   weave-net-kkdh9                         2/2       Running   0          1h
kube-system   weave-net-mtd83                         2/2       Running   0          48m
kube-system   weave-net-q91sr                         2/2       Running   2          1h


The NodePort is 31551:

[root@centos-master new]# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
Labels:                 app=kubernetes-dashboard
Selector:               app=kubernetes-dashboard
Type:                   NodePort
IP:                     10.96.35.20
Port:                   <unset> 80/TCP
NodePort:               <unset> 31551/TCP
Endpoints:              10.40.0.1:9090
Session Affinity:       None
No events.
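To obtain that port programmatically (e.g. to build the dashboard URL in a script), kubectl's jsonpath output works on a live cluster; since this snippet has no cluster to talk to, the sketch below instead parses the NodePort line from the describe output shown above:

```shell
# Sketch: extract the NodePort from the `kubectl describe svc` output above.
# On a live cluster you could instead run:
#   kubectl get svc kubernetes-dashboard --namespace=kube-system \
#     -o jsonpath='{.spec.ports[0].nodePort}'
line='NodePort:               <unset> 31551/TCP'
port=$(echo "$line" | awk '{print $3}' | cut -d/ -f1)
echo "http://192.168.7.206:$port"   # any node IP works for a NodePort service
```

The dashboard is then reachable in a browser at that URL on any node of the cluster.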







Installing Heapster

The YAML files come from GitHub.

[root@localhost heapster]# cat grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      volumes:
      - name: grafana-storage
        emptyDir: {}
      containers:
      - name: grafana
        image: gcr.io/google_containers/heapster_grafana:v3.1.1
        ports:
          - containerPort: 3000
            protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /


The image in grafana-deployment.yaml was already pulled above:

docker pull gcr.io/google_containers/heapster_grafana:v3.1.1


[root@localhost heapster]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

The image in influxdb-deployment.yaml was pulled earlier as well.

[root@localhost heapster]# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      containers:
      - name: influxdb
        image: kubernetes/heapster_influxdb:v0.6
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage

[root@localhost heapster]# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  # type: NodePort
  ports:
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb


Contents of heapster-deployment.yaml:

[root@localhost heapster]# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
        version: v6
    spec:
      containers:
      - name: heapster
        image: kubernetes/heapster:canary
        imagePullPolicy: Always
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086


This image was pulled earlier too.

[root@localhost heapster]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster


Once all six files are ready, simply run:

kubectl create -f grafana-deployment.yaml -f grafana-service.yaml -f influxdb-deployment.yaml -f  influxdb-service.yaml -f heapster-deployment.yaml -f  heapster-service.yaml  
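The same six manifests can also be applied in a loop. A sketch (the files are assumed to be in the current directory, as in the listings above; the kubectl command is only echoed here so the snippet runs without a cluster):

```shell
# Sketch: apply the six heapster/grafana/influxdb manifests one by one.
# `echo` is used so this is safe to run anywhere; drop it on a real cluster.
files="grafana-deployment.yaml grafana-service.yaml influxdb-deployment.yaml influxdb-service.yaml heapster-deployment.yaml heapster-service.yaml"
for f in $files; do
  echo "kubectl create -f $f"
done
count=$(echo "$files" | wc -w)
```

Applying them one at a time makes it easier to see which manifest fails if `kubectl create` rejects one.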

Check the pods:

[root@localhost heapster]# kubectl get pod --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2088944543-xjj21                  1/1       Running   0          2h
etcd-centos-master                      1/1       Running   0          2h
heapster-2193675300-j1jxn               1/1       Running   0          1h
kube-apiserver-centos-master            1/1       Running   0          2h
kube-controller-manager-centos-master   1/1       Running   0          2h
kube-discovery-1769846148-c45gd         1/1       Running   0          1h
kube-dns-2924299975-96xms               4/4       Running   0          1h
kube-proxy-33lsn                        1/1       Running   0          1h
kube-proxy-jnz6q                        1/1       Running   0          1h
kube-proxy-vfql2                        1/1       Running   0          1h
kube-scheduler-centos-master            1/1       Running   0          2h
kubernetes-dashboard-3000605155-8mxgz   1/1       Running   0          1h
monitoring-grafana-810108360-h92v7      1/1       Running   0          1h
monitoring-influxdb-3065341217-q2445    1/1       Running   0          1h
weave-net-k5tlz                         2/2       Running   0          1h
weave-net-q3n89                         2/2       Running   0          1h
weave-net-x57k7                         2/2       Running   0          1h

Check the services:

[root@localhost heapster]# kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               10.98.45.1      <none>        80/TCP          1h
kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   10.108.45.66    <nodes>       80:32155/TCP    1h
monitoring-grafana     10.97.110.225   <nodes>       80:30687/TCP    1h
monitoring-influxdb    10.96.175.67    <none>        8086/TCP        1h


Check the Grafana service details:

[root@localhost heapster]# kubectl --namespace=kube-system describe svc monitoring-grafana
Name:                   monitoring-grafana
Namespace:              kube-system
Labels:                 kubernetes.io/cluster-service=true
                        kubernetes.io/name=monitoring-grafana
Selector:               k8s-app=grafana
Type:                   NodePort
IP:                     10.97.110.225
Port:                   <unset> 80/TCP
NodePort:               <unset> 30687/TCP
Endpoints:              10.32.0.2:3000
Session Affinity:       None
No events.


The open NodePort is 30687.

Access Grafana via any node's IP plus this port:
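A trivial sketch for building the browsable URL from that port (192.168.7.206 is the master IP used throughout this article; for a NodePort service any node's IP works):

```shell
# Sketch: compose the Grafana URL from a node IP and the NodePort found above.
nodeport=30687
node_ip=192.168.7.206   # master IP from this article; any node IP works
url="http://$node_ip:$nodeport"
echo "$url"
```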





(The screenshots from the original post are omitted here.) In the Grafana UI: click the icon, select the default option, confirm the "k8s" data source, and the graphs appear.





                                             