What? Running out of resources? Throw more hardware at the server (scale up), or throw more servers at the problem (scale out)!

(Note: this node is added on top of the k8s cluster built in the earlier setup.)

1. System initialization

a. Configure the hostname

node4:
echo "linux-node4.example.com" > /etc/hostname

b. Add an /etc/hosts entry so the hostname resolves

node4:
echo "192.168.56.14 linux-node4 linux-node4.example.com" >> /etc/hosts

c. Disable SELinux and the firewall

node4:
systemctl disable firewalld; systemctl stop firewalld 
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

d. Configure environment variables (all subsequent k8s commands and binaries live under /opt/kubernetes/bin)

echo "PATH=$PATH:$HOME/bin:/opt/kubernetes/bin" >>  ~/.bash_profile
source ~/.bash_profile
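
A quick sanity check that the new directory is now on the search path (it is still empty at this point; the binaries arrive in later steps):

echo $PATH | tr ':' '\n' | grep kubernetes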

2. Install Docker

a. Use a domestic (Aliyun) Docker mirror

[root@linux-node4 ~]# cd /etc/yum.repos.d/
[root@linux-node4 yum.repos.d]# wget \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

b. Install Docker:

[root@linux-node4 ~]# yum install -y docker-ce

c. Enable and start the daemon:

[root@linux-node4 ~]# systemctl enable docker
[root@linux-node4 ~]# systemctl start docker
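
To confirm the daemon came up cleanly (a quick check, not part of the original steps):

[root@linux-node4 ~]# systemctl is-active docker
[root@linux-node4 ~]# docker version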

3. Prepare the deployment directories

[root@linux-node4 ~]#  mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
# Directory layout; all files live under /opt/kubernetes:
[root@linux-node4 ~]# tree -L 1 /opt/kubernetes/
/opt/kubernetes/
├── bin   # binaries
├── cfg   # config files
├── log   # log files
└── ssl   # certificates

4. Set up passwordless SSH from the master to the new node; it makes the remaining steps much easier

[root@linux-node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.56.14
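
Verify the key landed (this should print the hostname without asking for a password):

[root@linux-node1 ~]# ssh root@192.168.56.14 hostname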

5. Copy configuration from the master

1. Copy the CFSSL binaries

[root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.14:/opt/kubernetes/bin

2. Distribute the CA certificates

[root@linux-node1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.56.14:/opt/kubernetes/ssl
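
You can sanity-check the copied CA from node1 (openssl ships with CentOS by default):

[root@linux-node1 ssl]# ssh linux-node4 "openssl x509 -in /opt/kubernetes/ssl/ca.pem -noout -subject -dates"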

3. Copy the kubernetes certificate and private key

[root@linux-node1 ssl]# scp /opt/kubernetes/ssl/kubernetes*.pem 192.168.56.14:/opt/kubernetes/ssl/

4. Copy the kubelet and kube-proxy binaries

[root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.14:/opt/kubernetes/bin/
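
Confirm the binary arrived intact (scp preserves the executable bit of the source file):

[root@linux-node1 bin]# ssh linux-node4 "/opt/kubernetes/bin/kubelet --version"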

5. Copy the bootstrap kubeconfig (used for TLS bootstrapping and its role binding)

[root@linux-node1 ~]# scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.56.14:/opt/kubernetes/cfg

6. Copy the CNI config

[root@linux-node1 ~]# ssh linux-node4 "mkdir /etc/cni/net.d -p"
[root@linux-node1 ~]# scp /etc/cni/net.d/10-default.conf linux-node4:/etc/cni/net.d/
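
For reference, a flannel CNI delegate config typically looks like the snippet below; the contents here are illustrative only, and the file copied from node1 is authoritative:

[root@linux-node1 ~]# cat /etc/cni/net.d/10-default.conf
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}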

7. Create the kubelet working directory

[root@linux-node1 ~]# ssh linux-node4 "mkdir /var/lib/kubelet"

8. Copy the Flannel certificates

[root@linux-node1 ~]# scp  /opt/kubernetes/ssl/flanneld*.pem 192.168.56.14:/opt/kubernetes/ssl/

9. Copy the Flannel binaries and related files

[root@linux-node1 ~]# ssh linux-node4 "mkdir /opt/kubernetes/bin/cni"
[root@linux-node1 ~]# scp /opt/kubernetes/bin/cni/* 192.168.56.14:/opt/kubernetes/bin/cni/
[root@linux-node1 ~]# scp /usr/lib/systemd/system/flannel.service 192.168.56.14:/usr/lib/systemd/system/
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.14:/opt/kubernetes/cfg/
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.14:/opt/kubernetes/bin/
[root@linux-node1 src]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.14:/opt/kubernetes/bin/

# docker service unit
[root@linux-node1 bin]# scp /usr/lib/systemd/system/docker.service 192.168.56.14:/usr/lib/systemd/system/

10. Copy the kubelet systemd unit and start the service

[root@linux-node1 ~]# scp /usr/lib/systemd/system/kubelet.service 192.168.56.14:/usr/lib/systemd/system/
[root@linux-node1 ~]# ssh linux-node4 "systemctl daemon-reload"
[root@linux-node1 ~]# ssh linux-node4 "systemctl enable kubelet"
[root@linux-node1 ~]# ssh linux-node4 "systemctl start kubelet"
[root@linux-node1 ~]# ssh linux-node4 "systemctl status kubelet"
# Then check the TLS certificate signing requests on node1 and approve them with the commands below. If the list comes back empty, go read the kubelet logs -- I fell straight into a pit of my own digging here o(╥﹏╥)o
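
A quick way to pull those logs from the master (the tail length is arbitrary):

[root@linux-node1 ~]# ssh linux-node4 "journalctl -u kubelet --no-pager -n 30"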

[root@linux-node1 ~]# kubectl get csr
[root@linux-node1 ~]# kubectl get csr | grep 'Pending' | awk '{print $1}' | xargs kubectl certificate approve
[root@linux-node1 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.56.11   Ready     master    1d        v1.10.1
192.168.56.12   Ready     node      31d       v1.10.1
192.168.56.13   Ready     node      31d       v1.10.1
192.168.56.14   Ready     node      1d        v1.10.1

# With that, the new worker node 192.168.56.14 has been added successfully

11. Configure kube-proxy to use LVS

[root@linux-node4 ~]# yum install -y ipvsadm ipset conntrack
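
On some kernels the IPVS modules are not loaded automatically; loading the common ones by hand (module names can vary slightly across kernel versions) avoids kube-proxy silently falling back to iptables:

[root@linux-node4 ~]# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
[root@linux-node4 ~]# lsmod | grep ip_vs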

12. Distribute the kube-proxy certificates

[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.14:/opt/kubernetes/ssl/

13. Distribute the kube-proxy kubeconfig

[root@linux-node1 ssl]# scp ../cfg/kube-proxy.kubeconfig 192.168.56.14:/opt/kubernetes/cfg/

14. Install NFS client tools (node machines also act as NFS clients, so they need these as well)

[root@linux-node1 ssl]# ssh linux-node4 "yum -y install nfs-utils rpcbind"

15. Create the kube-proxy working directory

[root@linux-node1 ~]# ssh linux-node4 "mkdir /var/lib/kube-proxy"

16. Copy the service unit file

[root@linux-node1 ~]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.14:/usr/lib/systemd/system/kube-proxy.service

# Note: update the IPs in the copied config files to match node4 before starting
[root@linux-node1 ~]# ssh linux-node4 "systemctl daemon-reload"
[root@linux-node1 ~]# ssh linux-node4 "systemctl enable kube-proxy"
[root@linux-node1 ~]# ssh linux-node4 "systemctl start kube-proxy"
[root@linux-node1 ~]# ssh linux-node4 "systemctl status kube-proxy"

Verification

Let's scale the flask-app pods up to 20 replicas and see whether the Kubernetes Scheduler places some of them on the new node.

[root@linux-node1 ~]# kubectl scale --replicas=20 deployment.extensions/flask-app -n flask-app-extions-stage
deployment.extensions "flask-app" scaled
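
To see where the pods landed, list them with their node assignments and filter on the new node's IP:

[root@linux-node1 ~]# kubectl get pods -n flask-app-extions-stage -o wide | grep 192.168.56.14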


From the output you can see that the scaled pods have been scheduled by the Kubernetes Scheduler across the nodes, including the new one, and are all in the Running state.

That's it: the newly added node has joined the cluster and everything behaves as expected. The one drawback is that this manual procedure is not very maintainable for day-to-day work, but doing it by hand shows you exactly what a node needs and how to configure it, which gives a deeper feel for how k8s actually works.
When I find the time, I'll automate this deployment with SaltStack~🍺🍺🍺