Manually Building a Kubernetes 1.8 High-Availability Cluster (5): Node
Source: Internet · Editor: 程序博客网 · Published: 2024/05/16 09:06
I. Preparation
1. A running etcd cluster.
2. The Node components are set up on Node2 and Node3; all of the following operations are performed on Node3. Node2 only needs the kubelet configuration adjusted accordingly.
3. Create the directories and distribute the certificates:

- /etc/kubernetes/manifests (owner kube, group kube-cert, mode 0700)
- /etc/kubernetes/ssl
- /etc/nginx
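The directory layout above can be created with a short script. A sketch: it assumes the `kube` user and `kube-cert` group from the earlier parts already exist, and the `ROOT` prefix is only there so the script can be dry-run outside `/`.

```shell
# Sketch: create the node's config directories. ROOT is for dry-runs only;
# set ROOT="" on a real host.
ROOT="${ROOT:-/tmp/k8s-node-dirs}"
mkdir -p "$ROOT/etc/kubernetes/manifests" "$ROOT/etc/kubernetes/ssl" "$ROOT/etc/nginx"
chmod 0700 "$ROOT/etc/kubernetes/manifests"
# On the real host, also set the stated ownership (assumes kube/kube-cert exist):
# chown kube:kube-cert /etc/kubernetes/manifests
```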
II. Installing kubelet
1. Copy the binary (extract kubelet from the hyperkube image):

```shell
docker run --rm -v /usr/local/bin:/systembindir \
  quay.io/coreos/hyperkube:v1.8.3_coreos.0 \
  /bin/cp /hyperkube /systembindir/kubelet
```
III. Preparing the kubelet configuration files
1. /etc/systemd/system/kubelet.service
```
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Wants=docker.socket

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=/usr/local/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_ARGS \
        $DOCKER_SOCKET \
        $KUBELET_NETWORK_PLUGIN \
        $KUBELET_CLOUDPROVIDER
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```
2. /etc/kubernetes/kubelet.env
```
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.1.126 --node-ip=192.168.1.126"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node3"
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests \
--cadvisor-port=0 \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--node-status-update-frequency=10s \
--docker-disable-shared-pid=True \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--tls-cert-file=/etc/kubernetes/ssl/node-node3.pem \
--tls-private-key-file=/etc/kubernetes/ssl/node-node3-key.pem \
--anonymous-auth=false \
--cgroup-driver=cgroupfs \
--cgroups-per-qos=True \
--fail-swap-on=False \
--enforce-node-allocatable="" \
--cluster-dns=10.233.0.3 \
--cluster-domain=cluster.local \
--resolv-conf=/etc/resolv.conf \
--kubeconfig=/etc/kubernetes/node-kubeconfig.yaml \
--require-kubeconfig \
--kube-reserved cpu=100m,memory=256M \
--node-labels=node-role.kubernetes.io/node=true \
--feature-gates=Initializers=true,PersistentLocalVolumes=False"
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_CLOUDPROVIDER=""
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```
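Node2 uses the same file with its node-specific values swapped: the hostname, the node IP, and (since the certificate file names contain the hostname) the cert paths. A sketch of a substitution helper; the Node2 address below is a placeholder you must replace with the host's real IP.

```shell
# Sketch: rewrite a kubelet.env for another node by substituting the
# node-specific IP and hostname. The hostname substitution also covers the
# cert paths (node-node3.pem -> node-node2.pem).
node_env() {  # usage: node_env <src-file> <old-ip> <new-ip> <old-name> <new-name>
  sed -e "s/${2//./\\.}/${3}/g" -e "s/${4}/${5}/g" "$1"
}
# On Node2 (NODE2_IP is a placeholder for the real address):
# node_env /etc/kubernetes/kubelet.env 192.168.1.126 NODE2_IP node3 node2 \
#   > /etc/kubernetes/kubelet.env
```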
3. /etc/kubernetes/node-kubeconfig.yaml
```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://localhost:6443
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node-node3.pem
    client-key: /etc/kubernetes/ssl/node-node3-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-cluster.local
current-context: kubelet-cluster.local
```
4. Start kubelet (reload systemd first so the new unit file is picked up):

```shell
systemctl daemon-reload
systemctl start kubelet && systemctl enable kubelet
```
```
[root@node1 ~]# ss -tnl
State   Recv-Q  Send-Q  Local Address:Port    Peer Address:Port
LISTEN  0       128     192.168.1.123:10250   *:*
LISTEN  0       128     192.168.1.123:2379    *:*
LISTEN  0       128     127.0.0.1:2379        *:*
LISTEN  0       128     192.168.1.123:2380    *:*
LISTEN  0       128     192.168.1.123:10255   *:*
LISTEN  0       128     *:22                  *:*
LISTEN  0       100     127.0.0.1:25          *:*
LISTEN  0       128     127.0.0.1:10248       *:*
LISTEN  0       128     :::22                 :::*
LISTEN  0       100     ::1:25                :::*
```
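kubelet can take a moment to come up, so before checking ports it helps to poll its local healthz endpoint (127.0.0.1:10248, visible in the listing above). A small retry helper, as a sketch:

```shell
# Sketch: retry a command until it succeeds or the attempts run out.
wait_for() {  # usage: wait_for <tries> <delay-seconds> <command...>
  local tries="$1" delay="$2" i
  shift 2
  for ((i = 0; i < tries; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}
# e.g. wait_for 30 1 curl -fsS http://127.0.0.1:10248/healthz
```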
IV. Configuring kube-proxy and the local proxy to apiserver/scheduler/controller-manager
1. /etc/kubernetes/kube-proxy-kubeconfig.yaml
```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://localhost:6443
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/ssl/kube-proxy-node3.pem
    client-key: /etc/kubernetes/ssl/kube-proxy-node3-key.pem
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: kube-proxy-cluster.local
current-context: kube-proxy-cluster.local
```
2. /etc/kubernetes/manifests/kube-proxy.manifest
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-proxy
  annotations:
    kubespray.kube-proxy-cert/serial: "DBA85609D00B0FAE"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirst
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.8.3_coreos.0
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 500m
        memory: 2000M
      requests:
        cpu: 150m
        memory: 64M
    command:
    - /hyperkube
    - proxy
    - --v=2
    - --kubeconfig=/etc/kubernetes/kube-proxy-kubeconfig.yaml
    - --bind-address=192.168.1.126
    - --cluster-cidr=10.233.64.0/18
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: "/etc/kubernetes/ssl"
      name: etc-kube-ssl
      readOnly: true
    - mountPath: "/etc/kubernetes/kube-proxy-kubeconfig.yaml"
      name: kubeconfig
      readOnly: true
    - mountPath: /var/run/dbus
      name: var-run-dbus
      readOnly: false
  volumes:
  - name: ssl-certs-host
    hostPath:
      path: /etc/pki/tls
  - name: etc-kube-ssl
    hostPath:
      path: "/etc/kubernetes/ssl"
  - name: kubeconfig
    hostPath:
      path: "/etc/kubernetes/kube-proxy-kubeconfig.yaml"
  - name: var-run-dbus
    hostPath:
      path: /var/run/dbus
```
3. /etc/nginx/nginx.conf
```
error_log stderr notice;
worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}
stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.1.121:6443;
    server 192.168.1.122:6443;
  }
  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}
```
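The upstream block hard-codes the two masters (192.168.1.121 and 192.168.1.122). If the master list changes, the `server` lines can be regenerated from a variable rather than edited by hand; a sketch (MASTERS holds your actual addresses):

```shell
# Sketch: emit nginx upstream "server" lines for a list of master IPs.
MASTERS="${MASTERS:-192.168.1.121 192.168.1.122}"
upstream_servers() {
  local m
  for m in $MASTERS; do
    printf '        server %s:6443;\n' "$m"
  done
}
upstream_servers
```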
4. /etc/kubernetes/manifests/nginx-proxy.yml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx-proxy
    image: nginx:1.11.4-alpine
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 300m
        memory: 512M
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
```
V. Verification
1. docker ps
```
[root@node3 ~]# docker ps
CONTAINER ID  IMAGE                                     COMMAND                 CREATED            STATUS            PORTS  NAMES
6597dd5a1ce1  00bc1e841a8f                              "nginx -g 'daemon ..."  9 seconds ago      Up 9 seconds             k8s_nginx-proxy_nginx-proxy-node3_kube-system_768ecc5f8a5c2500c7b1d97c4351756d_0
fc03ac3c0887  gcr.io/google_containers/pause-amd64:3.0  "/pause"                10 seconds ago     Up 9 seconds             k8s_POD_nginx-proxy-node3_kube-system_768ecc5f8a5c2500c7b1d97c4351756d_0
6d7ae2b0e831  bd322856b660                              "/hyperkube proxy ..."  5 minutes ago      Up 5 minutes             k8s_kube-proxy_kube-proxy-node3_kube-system_6e62ad1c50c542344a458bc75eef02f7_0
2a8382b3d714  gcr.io/google_containers/pause-amd64:3.0  "/pause"                5 minutes ago      Up 5 minutes             k8s_POD_kube-proxy-node3_kube-system_6e62ad1c50c542344a458bc75eef02f7_0
c3befa316f36  quay.io/coreos/etcd:v3.2.4                "/usr/local/bin/etcd"   About an hour ago  Up About an hour         etcd3
```
2. ss -tnl
```
[root@node3 ~]# ss -tnl
State   Recv-Q  Send-Q  Local Address:Port    Peer Address:Port
LISTEN  0       128     192.168.1.125:10250   *:*
LISTEN  0       128     127.0.0.1:6443        *:*
LISTEN  0       128     192.168.1.125:2379    *:*
LISTEN  0       128     127.0.0.1:2379        *:*
LISTEN  0       128     192.168.1.125:2380    *:*
LISTEN  0       128     192.168.1.125:10255   *:*
LISTEN  0       128     *:22                  *:*
LISTEN  0       100     127.0.0.1:25          *:*
LISTEN  0       128     127.0.0.1:10248       *:*
LISTEN  0       128     127.0.0.1:10249       *:*
LISTEN  0       128     :::10256              :::*
LISTEN  0       128     :::22                 :::*
LISTEN  0       100     ::1:25                :::*
```
3. kubectl get node (run on a Master, unless you have configured kubectl on the node). All nodes report NotReady because no CNI network plugin is installed yet; that is addressed in part (6) with Calico.
```
[root@node1 ~]# kubectl get node
NAME    STATUS     ROLES         AGE  VERSION
node1   NotReady   master        1h   v1.8.3+coreos.0
node2   NotReady   master,node   1h   v1.8.3+coreos.0
node3   NotReady   node          1m   v1.8.3+coreos.0
```