Installing Kubernetes 1.4.12 on Ubuntu 14.04


1. System Environment

This deployment sets up a single-node Kubernetes environment, with every step executed as root. The installation was performed on a Vultr VPS; inside mainland China some images may be unreachable, so a VPN or proxy may be needed.
Ubuntu 14.04 KVM virtual machine, Kubernetes 1.4.12,
Docker version 17.05.0-ce, API version 1.29 (minimum version 1.12)

2. Installation

2.1 Install Docker

curl https://get.docker.com | sh
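The script installs the latest Docker release, while Kubernetes 1.4 requires Docker API version 1.12 or newer (the environment above reports 1.29). A minimal sketch of that comparison, assuming GNU `sort -V` (standard on Ubuntu); the `api_have` value is hard-coded to what this install reported, and on a real host you would substitute the output of `docker version --format '{{.Client.APIVersion}}'`:

```shell
# Compare the Docker API version against Kubernetes 1.4's stated minimum (1.12).
api_min="1.12"
api_have="1.29"   # substitute: docker version --format '{{.Client.APIVersion}}'
# sort -V orders version strings; if the minimum sorts first, we are new enough.
lowest=$(printf '%s\n%s\n' "$api_min" "$api_have" | sort -V | head -n1)
if [ "$lowest" = "$api_min" ]; then
    echo "Docker API $api_have satisfies the $api_min minimum"
fi
```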

2.2 Install other required packages

apt install vim bridge-utils

2.3 Configure passwordless SSH to the local machine and slave machines

This article installs single-node Kubernetes only, so we configure passwordless login to the local machine alone; if you need passwordless login to other machines, see other articles.

root@vultr:/# mkdir ~/.ssh
root@vultr:/# cd ~/.ssh
root@vultr:~/.ssh# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
89:a4:b1:f8:1d:28:81:ea:84:fe:86:c9:9c:49:38:86 root@vultr.guest
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
| .               |
|. . . .          |
|o  o * . .       |
|=.o + o S        |
|Eo o . .         |
|=+= . .          |
| B..             |
|  ..             |
+-----------------+
root@vultr:~/.ssh# cat id_rsa.pub >> authorized_keys
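The interactive session above can also be scripted. A minimal non-interactive sketch, assuming OpenSSH's `ssh-keygen` is installed; it writes to a scratch directory here, so point `KEYDIR` at `~/.ssh` for real use:

```shell
# Generate a 2048-bit RSA key pair with an empty passphrase and authorize it.
KEYDIR="$(mktemp -d)"          # scratch dir for illustration; use ~/.ssh on a real host
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 700 "$KEYDIR" && chmod 600 "$KEYDIR/authorized_keys"
```

With `KEYDIR` set to `~/.ssh`, `ssh root@localhost true` should then succeed without a password prompt.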

2.4 Install Kubernetes

2.4.1 Download the Kubernetes source package

root@vultr:~# wget https://github.com/kubernetes/kubernetes/archive/v1.4.12.tar.gz
root@vultr:~# tar -xvzf v1.4.12.tar.gz
root@vultr:~# mv kubernetes-1.4.12/ kubernetes

2.4.2 Edit the configuration file

root@vultr:~# vi kubernetes/cluster/ubuntu/config-default.sh 

In that file, find the following block:

# Node list. The default config describes three physical nodes; the first is both master and minion, the other two are minions only.
export nodes=${nodes:-"vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"}
# "a" stands for master, "i" for minion
roles=${roles:-"ai i i"}
# number of minion nodes
export NUM_NODES=${NUM_NODES:-3}

Adjust it to your environment: export nodes lists the machine addresses where Kubernetes will be installed, together with the user that can log in without a password. This article uses a single node that is both master and minion.

# 45.63.60.83 is the IP address of the physical NIC
export nodes=${nodes:-"root@45.63.60.83"}
roles=${roles:-"ai"}
export NUM_NODES=${NUM_NODES:-1}
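For comparison, a hypothetical two-machine layout (addresses invented purely for illustration) would follow the same pattern, with the first node acting as both master and minion and the second as minion only:

```shell
# Hypothetical two-node variant of config-default.sh; the addresses are
# examples, not part of this install.
export nodes=${nodes:-"root@192.168.1.10 root@192.168.1.11"}
roles=${roles:-"ai i"}
export NUM_NODES=${NUM_NODES:-2}
```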

2.4.3 Start the cluster

Run ./kube-up.sh to bring up Kubernetes:

root@vultr:~# export KUBERNETES_PROVIDER=ubuntu
root@vultr:~# export KUBE_VERSION=1.4.12
root@vultr:~# export ETCD_VERSION=2.3.4
root@vultr:~# export FLANNEL_VERSION=0.5.4
root@vultr:~/kubernetes/cluster# ./kube-up.sh
... Starting cluster using provider: ubuntu
... calling verify-prereqs
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
... calling kube-up
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster
Prepare flannel 0.5.4 release ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   608    0   608    0     0   2572      0 --:--:-- --:--:-- --:--:--  2576
100 3399k  100 3399k    0     0  1045k      0  0:00:03  0:00:03 --:--:-- 1303k
Prepare etcd 2.3.4 release ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   606    0   606    0     0   2554      0 --:--:-- --:--:-- --:--:--  2546
100 8343k  100 8343k    0     0  2436k      0  0:00:03  0:00:03 --:--:-- 3521k
Prepare kubernetes 1.4.12 release ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   593    0   593    0     0   2533      0 --:--:-- --:--:-- --:--:--  2545
100 1033M  100 1033M    0     0  38.8M      0  0:00:26  0:00:26 --:--:-- 43.5M
~/kubernetes/cluster/ubuntu/kubernetes/server ~/kubernetes/cluster/ubuntu ~/kubernetes/cluster
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster
Done!
All your binaries locate in kubernetes/cluster/ubuntu/binaries directory
~/kubernetes/cluster
Deploying master and node on machine 45.63.60.83
make-ca-cert.sh                100% 4136     4.0KB/s   00:00
easy-rsa.tar.gz                100%   42KB  42.4KB/s   00:00
config-default.sh              100% 5548     5.4KB/s   00:00
util.sh                        100%   29KB  28.9KB/s   00:00
kube-proxy.conf                100%  688     0.7KB/s   00:00
kubelet.conf                   100%  645     0.6KB/s   00:00
kube-proxy                     100% 2233     2.2KB/s   00:00
kubelet                        100% 2158     2.1KB/s   00:00
etcd.conf                      100%  707     0.7KB/s   00:00
kube-scheduler.conf            100%  682     0.7KB/s   00:00
kube-apiserver.conf            100%  682     0.7KB/s   00:00
kube-controller-manager.conf   100%  761     0.7KB/s   00:00
kube-controller-manager        100% 2672     2.6KB/s   00:00
kube-scheduler                 100% 2360     2.3KB/s   00:00
etcd                           100% 2073     2.0KB/s   00:00
kube-apiserver                 100% 2358     2.3KB/s   00:00
reconfDocker.sh                100% 2068     2.0KB/s   00:00
kube-controller-manager        100%  135MB  67.3MB/s   00:02
kube-scheduler                 100%   77MB  76.5MB/s   00:01
etcd                           100%   17MB  16.6MB/s   00:01
flanneld                       100%   16MB  15.8MB/s   00:00
kube-apiserver                 100%  144MB  72.0MB/s   00:02
etcdctl                        100%   14MB  14.2MB/s   00:00
kube-proxy                     100%   69MB  69.0MB/s   00:01
flanneld                       100%   16MB  15.8MB/s   00:01
kubelet                        100%  123MB 123.2MB/s   00:01
flanneld.conf                  100%  579     0.6KB/s   00:00
flanneld                       100% 2121     2.1KB/s   00:00
flanneld.conf                  100%  570     0.6KB/s   00:00
flanneld                       100% 2131     2.1KB/s   00:00
etcd start/running, process 13149
etcd cluster has no published client endpoints.
Try '--no-sync' if you want to access non-published client endpoints(http://127.0.0.1:2379,http://127.0.0.1:4001).
Error:  client: no endpoints available
etcd cluster has no published client endpoints.
Try '--no-sync' if you want to access non-published client endpoints(http://127.0.0.1:2379,http://127.0.0.1:4001).
Error:  client: no endpoints available
Error:  100: Key not found (/coreos.com) [12]
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
docker stop/waiting
docker start/running, process 13652
Connection to 45.63.60.83 closed.
Validating master
Validating root@45.63.60.83
Using master 45.63.60.83
cluster "ubuntu" set.
user "ubuntu" set.
context "ubuntu" set.
switched to context "ubuntu".
Wrote config for ubuntu to /root/.kube/config
... calling validate-cluster
Found 1 node(s).
NAME          STATUS    AGE
45.63.60.83   Ready     18s
Validate output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at http://45.63.60.83:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@vultr:~/kubernetes/cluster# cd ubuntu/
root@vultr:~/kubernetes/cluster/ubuntu# ./deployAddons.sh
Creating kube-system namespace...
The namespace 'kube-system' is already there. Skipping.
Deploying DNS on Kubernetes
replicationcontroller "kube-dns-v20" created
service "kube-dns" created
Kube-dns rc and service is successfully deployed.
Creating Kubernetes Dashboard replicationController
replicationcontroller "kubernetes-dashboard-v1.4.0" created
Creating Kubernetes Dashboard service
service "kubernetes-dashboard" created
root@vultr:~/kubernetes/cluster# cp ./ubuntu/binaries/kubectl /usr/bin/
root@vultr:~/kubernetes/cluster# kubectl cluster-info
Kubernetes master is running at http://45.63.60.83:8080
KubeDNS is running at http://45.63.60.83:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://45.63.60.83:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
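After the addons are deployed, it is worth confirming that every control-plane component reports Healthy. A small sketch that counts non-Healthy rows in `kubectl get componentstatuses`-style output; the `check_components` helper is an invention of this article, not part of Kubernetes, and the here-doc sample mirrors the transcript above:

```shell
# Count component-status rows whose STATUS column is not "Healthy".
check_components() {
    # Skip the header row; $2 is the STATUS column.
    awk 'NR > 1 && $2 != "Healthy" { bad++ } END { print bad + 0 }'
}

# Sample input copied from the validate-cluster output above.
unhealthy=$(check_components <<'EOF'
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
EOF
)
echo "unhealthy components: $unhealthy"
```

On a live cluster you would pipe real output instead: kubectl get componentstatuses | check_components.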

2.4.4 Stop the cluster

root@vultr:~/kubernetes/cluster# ./kube-down.sh 