Cluster Architecture

Node IP | Role | Installed components
172.16.18.150 | master (management node) | docker, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, flannel
172.16.18.151 | node (worker node) | docker, etcd, kubelet, kube-proxy, flannel
172.16.18.152 | node (worker node) | docker, etcd, kubelet, kube-proxy, flannel
Software Versions

OS: CentOS 7.4
Kernel: 4.17
Kubernetes: 1.11.0
etcd: v3.3.8
flannel: v0.10.0
cfssl: 1.2
cni: v0.7.1
System Initialization

1. SSH key trust

[root@k8s-master ~]# ssh-keygen
[root@k8s-master ~]# ssh-copy-id root@172.16.18.151
[root@k8s-master ~]# ssh-copy-id root@172.16.18.152
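Before moving on, it is worth confirming that passwordless login actually works; a quick check from the master (each command should print the remote hostname without prompting for a password):

[root@k8s-master ~]# for ip in 172.16.18.151 172.16.18.152; do ssh root@$ip hostname; done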
2. Upgrade the kernel

Step 1: Add the USTC-mirrored ELRepo yum repository

[root@k8s-master ~]# vim /etc/yum.repos.d/elrepo.repo
[elrepo]
name=ELRepo.org Community Enterprise Linux Repository - el7
baseurl=https://mirrors.ustc.edu.cn/elrepo/elrepo/el7/$basearch/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0

[elrepo-testing]
name=ELRepo.org Community Enterprise Linux Testing Repository - el7
baseurl=https://mirrors.ustc.edu.cn/elrepo/testing/el7/$basearch/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0

[elrepo-kernel]
name=ELRepo.org Community Enterprise Linux Kernel Repository - el7
baseurl=https://mirrors.ustc.edu.cn/elrepo/kernel/el7/$basearch/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0

[elrepo-extras]
name=ELRepo.org Community Enterprise Linux Extras Repository - el7
baseurl=https://mirrors.ustc.edu.cn/elrepo/extras/el7/$basearch/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0
Step 2: Install the latest mainline kernel:

[root@k8s-master ~]# yum --enablerepo=elrepo-kernel install kernel-ml -y
Step 3: Make the new kernel the GRUB default

[root@k8s-master ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0          # change this value to 0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
Run grub2-mkconfig to regenerate the GRUB configuration:

[root@k8s-master ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
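GRUB_DEFAULT=0 selects the first menu entry, so it is worth confirming that entry 0 really is the newly installed ELRepo kernel. A quick sanity check (not part of the original procedure) that lists the generated entries with their indexes:

[root@k8s-master ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg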
Step 4: Reboot and verify that the new kernel is running

[root@k8s-master ~]# reboot
[root@k8s-master ~]# uname -r
4.17.6-1.el7.elrepo.x86_64
Note: remember to upgrade the kernel on the other two nodes as well.
3. Install Docker

Step 1: Add the Docker yum repository

[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Step 2: Install Docker:

[root@k8s-master ~]# yum install -y docker-ce
Step 3: Enable and start the Docker daemon:

[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker
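A quick check that the daemon came up; if both the client and server versions print, Docker is running:

[root@k8s-master ~]# docker version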
4. Create deployment directories

Create the deployment directories on all three nodes:

[root@k8s-master ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
[root@k8s-node1 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
[root@k8s-node2 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
Configure the environment variables:

[root@k8s-master ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/opt/kubernetes/bin

export PATH
[root@k8s-master ~]# source .bash_profile
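The nodes need /opt/kubernetes/bin on their PATH too, since they will run kubelet, kube-proxy, etcd and the cfssl tools from there. One way to do this (a sketch; it assumes the node profiles have not been customized) is to push the same profile out and then source it on each node:

[root@k8s-master ~]# scp ~/.bash_profile k8s-node1:~/
[root@k8s-master ~]# scp ~/.bash_profile k8s-node2:~/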
5. Prepare the packages

Download locations for each package:

kubernetes: https://github.com/kubernetes/kubernetes/releases
etcd: https://github.com/coreos/etcd/releases
cfssl: https://pkg.cfssl.org
flannel: https://github.com/coreos/flannel/releases
cni: https://github.com/containernetworking/plugins/releases
Create a directory on the master node to hold the packages:

[root@k8s-master ~]# mkdir -p /usr/local/src/k8s-1.11.0/ssl

Note: download all of the required packages into the k8s-1.11.0 directory; the ssl subdirectory will hold the certificates created later.
Download the Kubernetes cluster packages into that directory:

[root@k8s-master k8s-1.11.0]# ll -h
total 525M
-rw-r--r-- 1 root root  14M Jul 12 10:09 kubernetes-client-linux-amd64.tar.gz
-rw-r--r-- 1 root root  95M Jul 12 10:11 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root root 416M Jul 12 10:13 kubernetes-server-linux-amd64.tar.gz
-rw-r--r-- 1 root root 1.8M Jul 12 10:19 kubernetes.tar.gz
drwxr-xr-x 2 root root    6 Jul 17 23:36 ssl
Unpack the packages:

[root@k8s-master k8s-1.11.0]# tar zxf kubernetes.tar.gz
[root@k8s-master k8s-1.11.0]# tar zxf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# tar zxf kubernetes-client-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# tar zxf kubernetes-node-linux-amd64.tar.gz
Creating the CA Certificates

1. Install cfssl

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/
[root@k8s-master k8s-1.11.0]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master k8s-1.11.0]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master k8s-1.11.0]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master k8s-1.11.0]# chmod +x cfssl*
[root@k8s-master k8s-1.11.0]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
[root@k8s-master k8s-1.11.0]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
[root@k8s-master k8s-1.11.0]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl

Copy the cfssl binaries to k8s-node1 and k8s-node2. With more nodes, copy them to every node the same way.

[root@k8s-master k8s-1.11.0]# scp /opt/kubernetes/bin/cfssl* k8s-node1:/opt/kubernetes/bin
[root@k8s-master k8s-1.11.0]# scp /opt/kubernetes/bin/cfssl* k8s-node2:/opt/kubernetes/bin
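A quick sanity check that the binaries are executable and on the PATH configured earlier:

[root@k8s-master k8s-1.11.0]# cfssl version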
2. Initialize cfssl

[root@k8s-master k8s-1.11.0]# cd ssl
[root@k8s-master ssl]# cfssl print-defaults config > config.json
[root@k8s-master ssl]# cfssl print-defaults csr > csr.json
3. Create the JSON config used to generate the CA file

[root@k8s-master ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
4. Create the JSON config for the CA certificate signing request (CSR)

[root@k8s-master ssl]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
5. Generate the CA certificate (ca.pem) and key (ca-key.pem)

[root@k8s-master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master ssl]# ls -l ca*
-rw-r--r-- 1 root root  292 Jul 12 22:19 ca-config.json
-rw-r--r-- 1 root root 1005 Jul 12 22:20 ca.csr
-rw-r--r-- 1 root root  210 Jul 12 22:20 ca-csr.json
-rw------- 1 root root 1675 Jul 12 22:20 ca-key.pem
-rw-r--r-- 1 root root 1363 Jul 12 22:20 ca.pem
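Optionally, inspect the generated CA before distributing it; either command shows the subject and the ten-year validity configured above:

[root@k8s-master ssl]# cfssl-certinfo -cert ca.pem
[root@k8s-master ssl]# openssl x509 -in ca.pem -noout -subject -dates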
6. Distribute the certificates

[root@k8s-master ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl

scp the certificates to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json k8s-node1:/opt/kubernetes/ssl
[root@k8s-master ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json k8s-node2:/opt/kubernetes/ssl
Deploying the etcd Cluster

1. Prepare the etcd package

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/
[root@k8s-master k8s-1.11.0]# wget https://github.com/coreos/etcd/releases/download/v3.3.8/etcd-v3.3.8-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# tar zxf etcd-v3.3.8-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# cd etcd-v3.3.8-linux-amd64
[root@k8s-master etcd-v3.3.8-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/

scp the binaries to k8s-node1 and k8s-node2:

[root@k8s-master etcd-v3.3.8-linux-amd64]# scp etcd etcdctl k8s-node1:/opt/kubernetes/bin/
[root@k8s-master etcd-v3.3.8-linux-amd64]# scp etcd etcdctl k8s-node2:/opt/kubernetes/bin/
2. Create the etcd certificate signing request

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.16.18.150",
    "172.16.18.151",
    "172.16.18.152"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3. Generate the etcd certificate and private key

[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master ssl]# ls -l etcd*
-rw-r--r-- 1 root root 1066 Jul 15 02:35 etcd.csr
-rw-r--r-- 1 root root  289 Jul 15 02:34 etcd-csr.json
-rw------- 1 root root 1679 Jul 15 02:35 etcd-key.pem
-rw-r--r-- 1 root root 1440 Jul 15 02:35 etcd.pem
4. Copy the certificates to /opt/kubernetes/ssl

[root@k8s-master ssl]# cp etcd*.pem /opt/kubernetes/ssl

scp the certificates to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp etcd*.pem k8s-node1:/opt/kubernetes/ssl
[root@k8s-master ssl]# scp etcd*.pem k8s-node2:/opt/kubernetes/ssl
5. Create the etcd configuration file

[root@k8s-master ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.18.150:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.18.150:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.18.150:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node1=https://172.16.18.150:2380,etcd-node2=https://172.16.18.151:2380,etcd-node3=https://172.16.18.152:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.18.150:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

scp the configuration file to k8s-node1 and k8s-node2:

[root@k8s-master ~]# scp /opt/kubernetes/cfg/etcd.conf k8s-node1:/opt/kubernetes/cfg/
[root@k8s-master ~]# scp /opt/kubernetes/cfg/etcd.conf k8s-node2:/opt/kubernetes/cfg/

Note: the copied file must then be edited on each node: ETCD_NAME becomes etcd-node2 / etcd-node3, and every listen/advertise URL must use that node's own IP. Only ETCD_INITIAL_CLUSTER, ETCD_INITIAL_CLUSTER_STATE and ETCD_INITIAL_CLUSTER_TOKEN stay identical on all members.
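A sketch of that per-node edit for k8s-node1 (etcd-node2); the address substitution deliberately skips the ETCD_INITIAL_CLUSTER line, which must keep all three members. Repeat on k8s-node2 with etcd-node3 and 172.16.18.152:

# Rename the member and point the listen/advertise URLs at this node's IP,
# leaving the ETCD_INITIAL_CLUSTER member list untouched.
[root@k8s-node1 ~]# sed -i \
    -e 's/^ETCD_NAME="etcd-node1"/ETCD_NAME="etcd-node2"/' \
    -e '/ETCD_INITIAL_CLUSTER=/!s/172.16.18.150/172.16.18.151/g' \
    /opt/kubernetes/cfg/etcd.conf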
6. Create the etcd systemd service

[root@k8s-master ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target

scp the unit file to k8s-node1 and k8s-node2:

[root@k8s-master ~]# scp /etc/systemd/system/etcd.service k8s-node1:/etc/systemd/system/
[root@k8s-master ~]# scp /etc/systemd/system/etcd.service k8s-node2:/etc/systemd/system/
7. Reload systemd and start etcd

Create the etcd data directory and start etcd on all three nodes. Start the members close together: a cluster in the "new" state waits for its peers before becoming healthy.

[root@k8s-master ~]# mkdir -p /var/lib/etcd
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# systemctl status etcd
8. Verify the cluster

[root@k8s-master ~]# etcdctl --endpoints=https://172.16.18.150:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
member 2918635b94c3bb6d is healthy: got healthy result from https://172.16.18.150:2379
member 47b5e156149dcdfe is healthy: got healthy result from https://172.16.18.151:2379
member ce472f4e0f65dfd0 is healthy: got healthy result from https://172.16.18.152:2379
cluster is healthy
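Beyond cluster-health, member list shows each member's peer/client URLs and which member is the current leader:

[root@k8s-master ~]# etcdctl --endpoints=https://172.16.18.150:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem member list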
Master Node Deployment

Deploying the Kubernetes API service

1. Copy the binaries to the target directory

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/kubernetes/
[root@k8s-master kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
[root@k8s-master kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
[root@k8s-master kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/
2. Create the JSON config used to generate the CSR

[root@k8s-master kubernetes]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.18.150",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3. Generate the kubernetes certificate and private key

[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@k8s-master ssl]# ls -l kubernetes*.pem
-rw------- 1 root root 1679 Jul 12 22:36 kubernetes-key.pem
-rw-r--r-- 1 root root 1619 Jul 12 22:36 kubernetes.pem

scp the certificates to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp kubernetes*.pem k8s-node1:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kubernetes*.pem k8s-node2:/opt/kubernetes/ssl/
4. Create the client token file used by kube-apiserver

[root@k8s-master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c0191fdd1c3e476ad885755ec18f5c64
[root@k8s-master ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
c0191fdd1c3e476ad885755ec18f5c64,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
5. Create the basic username/password authentication config

[root@k8s-master ~]# vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2
6. Deploy the Kubernetes API Server service

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=172.16.18.150 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://172.16.18.150:2379,https://172.16.18.151:2379,https://172.16.18.152:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
7. Start the API Server service

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-apiserver
[root@k8s-master ~]# systemctl start kube-apiserver
[root@k8s-master ~]# systemctl status kube-apiserver
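Because --insecure-bind-address=127.0.0.1 leaves the default insecure port 8080 open locally, a quick health probe from the master should print ok:

[root@k8s-master ~]# curl http://127.0.0.1:8080/healthz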
8. Deploy the Controller Manager service

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
9. Start the Controller Manager service

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-controller-manager
[root@k8s-master ~]# systemctl start kube-controller-manager
[root@k8s-master ~]# systemctl status kube-controller-manager
10. Deploy the Kubernetes Scheduler service

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
11. Start the Kubernetes Scheduler service

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-scheduler
[root@k8s-master ~]# systemctl start kube-scheduler
[root@k8s-master ~]# systemctl status kube-scheduler
12. Deploy the kubectl command-line tool

Step 1: Copy the binary to the target directory

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/kubernetes/client/bin/
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/
Step 2: Create the admin certificate signing request

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
Step 3: Generate the admin certificate and private key

[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master ssl]# ls -l admin*
-rw-r--r-- 1 root root 1013 Jul 12 22:45 admin.csr
-rw-r--r-- 1 root root  231 Jul 12 22:45 admin-csr.json
-rw------- 1 root root 1675 Jul 18 04:49 admin-key.pem
-rw-r--r-- 1 root root 1407 Jul 18 04:49 admin.pem

Copy the certificates to the k8s directory:

[root@k8s-master ssl]# cp admin*.pem /opt/kubernetes/ssl/
Step 4: Set the cluster parameters

[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.16.18.150:6443
Cluster "kubernetes" set.
Step 5: Set the client authentication parameters

[root@k8s-master ssl]# kubectl config set-credentials admin \
  --client-certificate=/opt/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.
Step 6: Set the context parameters

[root@k8s-master ssl]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
Context "kubernetes" created.
Step 7: Set the default context

[root@k8s-master ssl]# kubectl config use-context kubernetes
Switched to context "kubernetes".
Step 8: Verify the kubectl tool

[root@k8s-master ssl]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Node Deployment

Deploying the kubelet service

1. Copy the binaries to the nodes

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/kubernetes/server/bin/
[root@k8s-master bin]# cp kubelet kube-proxy /opt/kubernetes/bin/

scp the binaries to k8s-node1 and k8s-node2:

[root@k8s-master bin]# scp kubelet kube-proxy k8s-node1:/opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy k8s-node2:/opt/kubernetes/bin/
2. Create the role binding

[root@k8s-master ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
3. Create the kubelet bootstrapping kubeconfig: set the cluster parameters

[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.16.18.150:6443 \
  --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
4. Set the client authentication parameters

[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap \
  --token=c0191fdd1c3e476ad885755ec18f5c64 \
  --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
Note: this token is the one we generated earlier for bootstrap-token.csv.
5. Set the context parameters

[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
Context "default" created.
6. Select the default context

[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
[root@k8s-master ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg

scp the kubeconfig to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp bootstrap.kubeconfig k8s-node1:/opt/kubernetes/cfg
[root@k8s-master ssl]# scp bootstrap.kubeconfig k8s-node2:/opt/kubernetes/cfg
7. Set up CNI support

Create the cni config directory on all three nodes:

[root@k8s-master ~]# mkdir -p /etc/cni/net.d
[root@k8s-master ~]# vim /etc/cni/net.d/10-default.conf
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}

scp the config file to k8s-node1 and k8s-node2:

[root@k8s-master ~]# scp /etc/cni/net.d/10-default.conf k8s-node1:/etc/cni/net.d/
[root@k8s-master ~]# scp /etc/cni/net.d/10-default.conf k8s-node2:/etc/cni/net.d/
8. Deploy the kubelet service on the nodes

Create the kubelet working directory on both nodes:

[root@k8s-node1 ~]# mkdir /var/lib/kubelet
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=172.16.18.151 \
  --hostname-override=172.16.18.151 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

scp the unit file to k8s-node2:

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/kubelet.service k8s-node2:/usr/lib/systemd/system/

Note: on k8s-node2, change --address and --hostname-override to 172.16.18.152.
9. Start the kubelet service

Start kubelet on both nodes:

[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kubelet
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet
10. Check the CSR requests (run this on the master node)

[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU   1m        kubelet-bootstrap   Pending
11. Approve the kubelet TLS certificate requests

[root@k8s-master ~]# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve

Check that the nodes reach Ready status:

[root@k8s-master ~]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
172.16.18.151   Ready     <none>    5d        v1.11.0
172.16.18.152   Ready     <none>    5d        v1.11.0
Deploying the Kubernetes Proxy service

1. Configure kube-proxy to use LVS (IPVS)

Install the LVS tooling on all three nodes:

[root@k8s-node1 ~]# yum install -y ipvsadm ipset conntrack
2. Create the kube-proxy certificate request

On the master node:

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3. Generate the certificate

[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

scp the certificates to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp kube-proxy*.pem k8s-node1:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kube-proxy*.pem k8s-node2:/opt/kubernetes/ssl/
4. Create the kube-proxy kubeconfig

Create the kube-proxy kubeconfig on the master node:

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.16.18.150:6443 \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
[root@k8s-master ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

scp the kubeconfig to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp kube-proxy.kubeconfig k8s-node1:/opt/kubernetes/cfg/
[root@k8s-master ssl]# scp kube-proxy.kubeconfig k8s-node2:/opt/kubernetes/cfg/
5. Create the kube-proxy service configuration

Create the working directory on all three nodes:

[root@k8s-master ssl]# mkdir /var/lib/kube-proxy
[root@k8s-master ssl]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=172.16.18.150 \
  --hostname-override=172.16.18.150 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

scp the unit file to k8s-node1 and k8s-node2, then change --bind-address and --hostname-override to each node's own IP:

[root@k8s-master ~]# scp /usr/lib/systemd/system/kube-proxy.service k8s-node1:/usr/lib/systemd/system/
[root@k8s-master ~]# scp /usr/lib/systemd/system/kube-proxy.service k8s-node2:/usr/lib/systemd/system/
6. Start the kube-proxy service

Start the service on all three nodes:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-proxy
[root@k8s-master ~]# systemctl start kube-proxy
[root@k8s-master ~]# systemctl status kube-proxy
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.18.150:24178 rr
  -> 10.2.29.3:8443               Masq    1      0          0
Flannel Network Deployment

1. Generate certificates for Flannel

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/ssl/
[root@k8s-master ssl]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@k8s-master ssl]# cp flanneld*.pem /opt/kubernetes/ssl/

scp the certificates to k8s-node1 and k8s-node2:

[root@k8s-master ssl]# scp flanneld*.pem k8s-node1:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp flanneld*.pem k8s-node2:/opt/kubernetes/ssl/
2. Download the Flannel package

[root@k8s-master k8s-1.11.0]# cd /usr/local/src/k8s-1.11.0/
[root@k8s-master k8s-1.11.0]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-master k8s-1.11.0]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/

scp the binaries to k8s-node1 and k8s-node2:

[root@k8s-master k8s-1.11.0]# scp flanneld mk-docker-opts.sh k8s-node1:/opt/kubernetes/bin/
[root@k8s-master k8s-1.11.0]# scp flanneld mk-docker-opts.sh k8s-node2:/opt/kubernetes/bin/

Copy the helper script into /opt/kubernetes/bin:

[root@k8s-master k8s-1.11.0]# cd /usr/local/src/k8s-1.11.0/kubernetes/cluster/centos/node/bin
[root@k8s-master bin]# cp remove-docker0.sh /opt/kubernetes/bin/

scp the script to k8s-node1 and k8s-node2:

[root@k8s-master bin]# scp remove-docker0.sh k8s-node1:/opt/kubernetes/bin/
[root@k8s-master bin]# scp remove-docker0.sh k8s-node2:/opt/kubernetes/bin/
3. Configure Flannel

[root@k8s-master bin]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://172.16.18.150:2379,https://172.16.18.151:2379,https://172.16.18.152:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

scp the config file to k8s-node1 and k8s-node2:

[root@k8s-master bin]# scp /opt/kubernetes/cfg/flannel k8s-node1:/opt/kubernetes/cfg/
[root@k8s-master bin]# scp /opt/kubernetes/cfg/flannel k8s-node2:/opt/kubernetes/cfg/
4. Configure the Flannel service

[root@k8s-master ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

scp the unit file to k8s-node1 and k8s-node2:

[root@k8s-master ~]# scp /usr/lib/systemd/system/flannel.service k8s-node1:/usr/lib/systemd/system/
[root@k8s-master ~]# scp /usr/lib/systemd/system/flannel.service k8s-node2:/usr/lib/systemd/system/
Flannel CNI Integration

1. Download the CNI plugins

[root@k8s-master ~]# cd /usr/local/src/k8s-1.11.0/
[root@k8s-master k8s-1.11.0]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz

Create the cni directory on all three nodes:

[root@k8s-master k8s-1.11.0]# mkdir /opt/kubernetes/bin/cni
[root@k8s-master k8s-1.11.0]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni

scp the plugins to k8s-node1 and k8s-node2:

[root@k8s-master k8s-1.11.0]# scp -r /opt/kubernetes/bin/cni/* k8s-node1:/opt/kubernetes/bin/cni/
[root@k8s-master k8s-1.11.0]# scp -r /opt/kubernetes/bin/cni/* k8s-node2:/opt/kubernetes/bin/cni/
2. Create the etcd key

[root@k8s-master ~]# etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem \
  --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://172.16.18.150:2379,https://172.16.18.151:2379,https://172.16.18.152:2379 \
  mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
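Since the mk command above discards its output, it is worth reading the key back to confirm it was written (using the same flanneld client certificate):

[root@k8s-master ~]# etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem \
  --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://172.16.18.150:2379 \
  get /kubernetes/network/config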
3. Start the Flannel service

Start the Flannel service on all three nodes:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable flannel
[root@k8s-master ~]# systemctl start flannel
[root@k8s-master ~]# systemctl status flannel
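Once flanneld is up, each node should have a flannel.1 vxlan interface with an address from 10.2.0.0/16, and mk-docker-opts.sh should have written the Docker options file used in the next step. A quick check:

[root@k8s-master ~]# ip addr show flannel.1
[root@k8s-master ~]# cat /run/flannel/docker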
4. Configure Docker to use Flannel

[root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
# In the [Unit] section, extend After= and add Requires=flannel.service
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
# In the [Service] section, add EnvironmentFile=-/run/flannel/docker
# and pass $DOCKER_OPTS to dockerd
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

scp the unit file to k8s-node1 and k8s-node2:

[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service k8s-node1:/usr/lib/systemd/system/
[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service k8s-node2:/usr/lib/systemd/system/
5. Restart the Docker service

Restart Docker on all three nodes:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
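To confirm Docker picked up the flannel subnet, check docker0 on each node (each node should land in a different /24 of 10.2.0.0/16) and ping another node's docker0 address across the overlay; the address below is illustrative only:

[root@k8s-master ~]# ip addr show docker0
[root@k8s-master ~]# ping -c 2 10.2.29.1   # example only: substitute the docker0 address reported on another node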