Deploying Kubernetes 1.7.5 and Installing kubernetes-dashboard 1.7
- Deploying Kubernetes 1.7.5
- Environment
- Preparation
- System settings
- Configuring yum repositories
- docker
- kubernetes
- Installing docker and kubernetes
- Installing docker
- Installing kubernetes
- Downloading images
- Installing kubeadm and related packages
- Master node installation
- Installing nodes and joining the cluster
- Verification
- Installing kubernetes-dashboard 1.7.1
- Image preparation
- Certificate preparation
- dashboard YAML file
- Starting the dashboard
- Afterword
- Summary
- Life lies in tinkering
- Never stop living, never stop tinkering
Deploying Kubernetes 1.7.5
Environment
- OS: CentOS 7.2
- CPU: 8 cores
- Memory: 16 GB / 64 GB / 64 GB
- Three hosts, with hostnames hd-22, hd-26 and hd-28; hd-22 is the master node.
Preparation
System settings
- Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
- Disable swap
swapoff -a
- Disable SELinux
setenforce 0
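The three commands above only last until the next reboot. To make the swap and SELinux settings survive a reboot, the usual CentOS 7 configuration files can be edited as well. This is a sketch: to be safe to demo it works on local copies, so point `FSTAB`/`SECONF` at the real `/etc/fstab` and `/etc/selinux/config` on the actual hosts.

```shell
# Persist the swap/SELinux settings across reboots (sketch on demo copies;
# use /etc/fstab and /etc/selinux/config on real hosts).
FSTAB=./fstab.demo
SECONF=./selinux.demo
printf 'UUID=abcd / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > "$FSTAB"
printf 'SELINUX=enforcing\n' > "$SECONF"
sed -i '/ swap / s/^[^#]/#&/' "$FSTAB"                   # comment out swap mounts
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$SECONF"
```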
Configuring yum repositories
docker
# cat > /etc/yum.repos.d/docker-main.repo <<EOF
[docker-main]
name=Docker main Repository
baseurl=https://get.daocloud.io/docker/yum-repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://get.daocloud.io/docker/yum/gpg
EOF
kubernetes
# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
Installing docker and kubernetes
Installing docker
# yum -y install docker-engine
# mkdir -p /etc/docker
# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl enable docker && systemctl start docker
Installing kubernetes
A k8s cluster consists of a master node and worker nodes; the master is responsible for managing the cluster. There are generally two ways to install k8s: with the official kubeadm tool, or from binary files. This guide covers the first.
Downloading images
Note: kubeadm pulls several Google-hosted images during installation, but gcr.io is not reachable from mainland China, so the images have to be pulled manually from a mirror and re-tagged with docker tag. The k8s images used here are v1.7.5.
# docker pull alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.4
# docker pull alleyj/k8s-dns-kube-dns-amd64:1.14.4
# docker pull alleyj/k8s-dns-sidecar-amd64:1.14.4
# docker pull alleyj/controller-manager-amd64:v1.7.5
# docker pull alleyj/kube-apiserver-amd64:v1.7.5
# docker pull alleyj/kube-scheduler-amd64:v1.7.5
# docker pull alleyj/kube-proxy-amd64:v1.7.5
# docker pull alleyj/kube-discovery-amd64:1.0
# docker pull alleyj/dnsmasq-metrics-amd64:1.0
# docker pull alleyj/etcd-amd64:3.0.17
# docker pull alleyj/exechealthz-amd64:1.2
# docker pull alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.1
# docker pull alleyj/k8s-dns-kube-dns-amd64:1.14.1
# docker pull alleyj/k8s-dns-sidecar-amd64:1.14.1
# docker pull alleyj/kube-apiserver-amd64:v1.6.0
# docker pull alleyj/kube-controller-manager-amd64:v1.6.0
# docker pull alleyj/kube-proxy-amd64:v1.6.0
# docker pull alleyj/kube-scheduler-amd64:v1.6.0
# docker pull alleyj/pause-amd64:3.0
# After downloading, the images can be pushed to a private registry for the other nodes to use; omitted here.
# docker tag alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
# docker tag alleyj/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
# docker tag alleyj/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
# docker tag alleyj/controller-manager-amd64:v1.7.5 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5
# docker tag alleyj/kube-apiserver-amd64:v1.7.5 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5
# docker tag alleyj/kube-scheduler-amd64:v1.7.5 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5
# docker tag alleyj/kube-proxy-amd64:v1.7.5 gcr.io/google_containers/kube-proxy-amd64:v1.7.5
# docker tag alleyj/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
# docker tag alleyj/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
# docker tag alleyj/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
# docker tag alleyj/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2
# docker tag alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
# docker tag alleyj/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
# docker tag alleyj/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
# docker tag alleyj/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
# docker tag alleyj/kube-controller-manager-amd64:v1.6.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
# docker tag alleyj/kube-proxy-amd64:v1.6.0 gcr.io/google_containers/kube-proxy-amd64:v1.6.0
# docker tag alleyj/kube-scheduler-amd64:v1.6.0 gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
# docker tag alleyj/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
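Rather than typing each pull/tag pair by hand, the mirroring can be scripted. The sketch below prints the docker commands as a dry run (the image list is abbreviated to a few entries); pipe the output to sh, or drop the echo, to actually execute it.

```shell
#!/bin/sh
# Dry run: print the pull/re-tag commands for a (shortened) image list.
MIRROR=alleyj
TARGET=gcr.io/google_containers
for img in kube-apiserver-amd64:v1.7.5 kube-scheduler-amd64:v1.7.5 \
           kube-proxy-amd64:v1.7.5 etcd-amd64:3.0.17 pause-amd64:3.0; do
  echo "docker pull $MIRROR/$img"
  echo "docker tag $MIRROR/$img $TARGET/$img"
done
```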
Installing kubeadm and related packages
# yum -y install kubectl kubeadm kubelet kubernetes-cni
# systemctl enable kubelet && systemctl start kubelet
These packages must be installed on every node. After installation, starting kubelet leaves the unit in the loaded state; it will keep failing and restarting until kubeadm has generated its configuration, which is expected at this stage.
Master node installation
# kubeadm init --kubernetes-version=v1.7.5
Running this command performs the Kubernetes installation. If it hangs on one step, check the detailed error with journalctl -xeu kubelet. When it finally prints something like kubeadm join --token fa1219.d7b8db5b25685776 10.8.177.22:6443, the installation is complete. Record that command: it is what later hands each node over to the master.
Then run:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# Note: repeat these steps as a regular user if that user should be able to run kubectl
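Copying admin.conf into $HOME/.kube is the standard route; for a quick one-off root session, pointing kubectl at the file via the KUBECONFIG environment variable works as well:

```shell
# Alternative for a one-off root session: point kubectl directly at admin.conf.
export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl now reads this file instead of $HOME/.kube/config
```

This lasts only for the current shell, so the copy-to-$HOME approach above is still preferable for day-to-day use.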
Now run kubectl get nodes:
# kubectl get nodes
NAME      STATUS     AGE       VERSION
hd-22     NotReady   1d        v1.7.5
The node shows NotReady because no network plug-in has been installed yet. Weave is used here; on the master, run:
# curl -L https://git.io/weave-kube-1.6 -o weave-daemonset-k8s-1.6.yaml
# kubectl apply -f weave-daemonset-k8s-1.6.yaml
Installing nodes and joining the cluster
# yum -y install kubectl kubeadm kubelet kubernetes-cni
# systemctl enable kubelet && systemctl start kubelet
# kubeadm join --token fa1219.d7b8db5b25685776 10.8.177.22:6443
That completes the installation.
Verification
Check the node information:
# kubectl get nodes
NAME      STATUS    AGE       VERSION
hd-22     Ready     1d        v1.7.5
hd-26     Ready     23h       v1.7.5
hd-28     Ready     23h       v1.7.5
Check the pod information:
# kubectl get po -n kube-system
NAME                            READY     STATUS    RESTARTS   AGE
etcd-hd-22                      1/1       Running   0          1d
kube-apiserver-hd-22            1/1       Running   0          1d
kube-controller-manager-hd-22   1/1       Running   0          1d
kube-dns-2425271678-nsfts       3/3       Running   0          1d
kube-proxy-0f4nd                1/1       Running   0          23h
kube-proxy-1q518                1/1       Running   0          23h
kube-proxy-943l5                1/1       Running   0          1d
kube-scheduler-hd-22            1/1       Running   0          1d
weave-net-f0nrb                 2/2       Running   0          1d
weave-net-fxzl0                 2/2       Running   0          23h
weave-net-lp448                 2/2       Running   0          23h
If any pod's status is not Running, run kubectl describe pod <podName> -n kube-system to see the detailed error.
Installing kubernetes-dashboard 1.7.1
kubernetes-dashboard changed substantially between 1.6.x and 1.7.x, mainly by adding HTTPS authentication.
Image preparation
# docker pull alleyj/kubernetes-dashboard-init-amd64:v1.0.1
# docker pull alleyj/kubernetes-dashboard-amd64:v1.7.1
# docker pull alleyj/heapster-influxdb-amd64:v1.3.3
# docker pull alleyj/heapster-grafana-amd64:v4.4.3
# docker pull alleyj/heapster-amd64:v1.4.0
# After downloading, the images can be pushed to a private registry for the other nodes to use; omitted here.
# docker tag alleyj/kubernetes-dashboard-init-amd64:v1.0.1 gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
# docker tag alleyj/kubernetes-dashboard-amd64:v1.7.1 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
# docker tag alleyj/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
# docker tag alleyj/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
# docker tag alleyj/heapster-amd64:v1.4.0 gcr.io/google_containers/heapster-amd64:v1.4.0
Certificate preparation
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout dashboard.key -x509 -days 365 -out dashboard.crt
Answer the prompts as they come (the last one, Common Name, must be the master's hostname):
Generating a 4096 bit RSA private key
...++
...++
writing new private key to 'dashboard.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:86
State or Province Name (full name) []:Beijing
Locality Name (eg, city) [Default City]:Beijing
Organization Name (eg, company) [Default Company Ltd]:hollycrm
Organizational Unit Name (eg, section) []:td
Common Name (eg, your name or your server's hostname) []:hd-22
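The prompts can also be skipped entirely by passing the subject on the command line with -subj. The field values below mirror the interactive answers, with the country corrected to the proper two-letter code CN; the CN must still be the master's hostname.

```shell
# Non-interactive variant of the certificate generation above.
# Subject fields mirror the interactive answers; CN is the master hostname.
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout dashboard.key -x509 -days 365 -out dashboard.crt \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=hollycrm/OU=td/CN=hd-22"
```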
This produces two files:
# ll
-rw-r--r--. 1 root root 2086 Nov 14 09:59 dashboard.crt
-rw-r--r--. 1 root root 3272 Nov 14 09:59 dashboard.key
# pwd
/root/k8s/dash-certs
#
# This step is mandatory:
# kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/k8s/dash-certs -n kube-system
dashboard YAML file
# curl -O https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Starting the dashboard
# kubectl apply -f kubernetes-dashboard.yaml
The dashboard cannot be reached directly after starting. If a browser can be opened on the master itself, run:
# kubectl proxy
Starting to serve on 127.0.0.1:8001
It can then be reached locally at http://127.0.0.1:8001. To access it from other machines, run the following command:
# kubectl -n kube-system edit service kubernetes-dashboard
and change ClusterIP in type: ClusterIP to NodePort, then check the exposed port:
NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   10.96.144.60   <nodes>       443:30686/TCP   8h
The mapped port here is 30686, so the dashboard is now reachable at https://10.8.177.22:30686/.
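If editing the service interactively is inconvenient, the same change can be made non-interactively with kubectl patch, and the allocated node port read back with a jsonpath query. The sketch below only prints the commands as a dry run, since it assumes a live cluster; drop the echo to execute them.

```shell
# Dry run: switch the dashboard service to NodePort, then read back the port.
PATCH='{"spec": {"type": "NodePort"}}'
echo "kubectl -n kube-system patch service kubernetes-dashboard -p '$PATCH'"
echo "kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'"
```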
Afterword
The dashboard login screen asks for a kubeconfig file or a token, which requires a tedious round of SSL authentication and authorization. In a test environment you can instead run the following:
# cat > dashboard-admin.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
# kubectl create -f dashboard-admin.yaml
Then simply choose Skip on the login screen and all cluster information will be visible.
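If skipping authentication feels too permissive, the token of the kubernetes-dashboard service account (which now holds cluster-admin via the binding) can be used for a proper token login. This sketch prints the commands as a dry run, since it assumes a live cluster and the kubeadm-style token-secret naming; drop the echos to run them for real, substituting the secret name found by the first command.

```shell
# Dry run: print the commands that locate and display the dashboard token.
NS=kube-system
SA=kubernetes-dashboard
echo "kubectl -n $NS get secret | grep ${SA}-token"
echo "kubectl -n $NS describe secret <the-${SA}-token-secret-from-above>"
```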
Summary
All kinds of errors crop up during cluster installation, so check the logs promptly and fix as you go. Useful commands:
- journalctl -xeu kubelet — work through installation problems from its output
- kubectl describe pod <podName> -n kube-system — see why a pod failed to start
- kubectl logs -f <podName> -n kube-system — follow the logs inside a container
Life lies in tinkering
Never stop living, never stop tinkering