1. Cluster Planning
Prepare three hosts: one Master and two Nodes.
- OS: CentOS 7
- Specs: 2 cores, 4 GB RAM
- Docker version: 18.06.3
- Kubernetes version: 1.15.3
If the hosts are cloud instances, open the following ports in the security group:
```
# Master
TCP     6443        Kubernetes API server
TCP     2379-2380   etcd server client API
TCP     10250       Kubelet API
TCP     10251       kube-scheduler
TCP     10252       kube-controller-manager
TCP     10255       Read-only Kubelet API
# Nodes
TCP     10250       Kubelet API
TCP     10255       Read-only Kubelet API
TCP     30000-32767 NodePort Services
```
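Once the rules are in place, a port's reachability can be spot-checked from another host with bash's built-in /dev/tcp. A minimal sketch, assuming the example Master IP from this guide (adjust to your own hosts):

```shell
# Check whether the Kubernetes API server port on the Master answers.
# 192.168.10.2 is the example Master IP used throughout this guide.
MASTER=192.168.10.2
if timeout 2 bash -c "cat </dev/null >/dev/tcp/$MASTER/6443" 2>/dev/null; then
  echo "6443 reachable"
else
  echo "6443 unreachable"
fi
```

The `timeout 2` guard keeps the check from hanging when the security group silently drops packets instead of rejecting them.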
2. Steps for Both Master and Node
Before installing Kubernetes with kubeadm, every node needs some basic configuration and packages.
2.1 hosts Configuration (optional)
Configuring /etc/hosts lets each host reach the others by hostname. First check each host's name:
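A minimal check, using the standard hostname utility:

```shell
# Print this host's current name.
# (hostnamectl set-hostname <name> would change it, if needed.)
hostname
```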
Assume the hostnames are i-6fns0nua (192.168.10.2), i-m69skuyd (192.168.10.3), and i-h29fw205 (192.168.10.4). Add the mappings to /etc/hosts on every host:
```
cat /etc/hosts
192.168.10.2 i-6fns0nua master
192.168.10.3 i-m69skuyd node1
192.168.10.4 i-h29fw205 node2
```
2.2 System Configuration
Stop and disable the firewall:
```
systemctl stop firewalld
systemctl disable firewalld
```
Disable SELinux, both immediately and across reboots (the sed matches whatever the current mode is, so it also works on a default `enforcing` install):
```
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
```
Turn off swap, immediately and in /etc/fstab:
```
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
```
Current Kubernetes versions do not support running with swap enabled; by default the kubelet refuses to start if swap is on.
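The sed expression above comments out every fstab line that mentions swap; `&` in the replacement re-inserts the whole matched line. Its effect can be seen on a typical fstab entry:

```shell
# Demonstrate the substitution on a typical fstab swap entry.
echo '/dev/mapper/centos-swap swap swap defaults 0 0' \
  | sed 's/.*swap.*/#&/'
# -> #/dev/mapper/centos-swap swap swap defaults 0 0
```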
Adjust the kernel parameters required by Kubernetes networking:
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
```
Load the IPVS kernel modules (used when kube-proxy runs in IPVS mode) and install the matching userland tools:
```
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install -y ipset ipvsadm
```
2.3 Install Docker
On every host, install Docker 18.06.3 (the version pinned for this guide):
```
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.3.ce-3.el7
systemctl start docker.service && systemctl enable docker.service
```
Running docker info shows that Docker's default Cgroup Driver is cgroupfs, while the kubelet uses systemd. To keep the two consistent, switch Docker's cgroup driver to systemd:
```
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```
Since version 1.13, Docker has changed its default firewall rules and disables the FORWARD chain in the iptables filter table, which breaks Pod-to-Pod traffic across Nodes. Check whether the FORWARD chain's default policy is ACCEPT:
```
iptables -nvL
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
14105 3771K KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
   43  2656 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
   43  2656 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
```
If the policy is not ACCEPT, run the following (note that Docker resets the policy when it restarts, so the command may need to be made persistent, e.g. via a boot script):
```
iptables -P FORWARD ACCEPT
```
2.4 Install kubeadm, kubelet, and kubectl
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.15.3-0.x86_64 kubeadm-1.15.3-0.x86_64 kubectl-1.15.3-0.x86_64
systemctl start kubelet && systemctl enable kubelet
```
If the hosts cannot reach Google's package servers, use a mirror yum repository instead.
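For example, a repo file pointing at the Aliyun mirror might look like this (the mirror URLs below are an assumption based on the mirror's published layout; verify them before use):

```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```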
3. Master Node Setup
3.1 Initialize the Cluster with kubeadm init
On the Master node, run:
```
kubeadm init \
  --kubernetes-version=v1.15.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.10.2
```
- kubernetes-version: the exact version to install
- pod-network-cidr: the CIDR of the Pod network (must match the network plugin installed later)
- apiserver-advertise-address: the address the API server advertises, here the Master's IP
When initialization finishes, the console prints a hint like:
```
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.2:6443 --token 7deqem.n42r8n2rnmpzfuq7 \
    --discovery-token-ca-cert-hash sha256:7a86632f54de1004bb3f38124b663f837399d6ba9aa803d58c6707a76c02a6cb
```
This kubeadm join command is what will be run later to add the Node machines. (The token expires after 24 hours by default; running kubeadm token create --print-join-command on the Master prints a fresh join command.)
3.2 Configure kubectl
Copy the admin credentials into the login user's home directory:
```
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the cluster status:
```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
3.3 Install a Pod Network Plugin
Install exactly one of the following two plugins. Option 1, Flannel:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
Option 2, Calico. The manifest's default Pod CIDR is 192.168.0.0/16; replace it with the pod-network-cidr value passed to kubeadm init:
```
curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
sed -i -e "s?192.168.0.0/16?10.244.0.0/16?g" calico.yaml
kubectl apply -f calico.yaml
```
3.4 Allow Pods on the Master
By default the Master carries a taint that keeps ordinary Pods off it; remove the taint so Pods can be scheduled there as well:
```
kubectl taint nodes --all node-role.kubernetes.io/master-
node/i-6fns0nua untainted
```
3.5 Install the Dashboard
- Download the kubernetes-dashboard.yaml manifest (v1.10.1):
```
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
```
- Edit kubernetes-dashboard.yaml to change the Service type and port
To make the Dashboard reachable from outside the cluster, change its Service to type NodePort and, optionally, pin the node port:
```
...
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002 # optional; a random port is assigned if omitted
...
```
Apply the manifest:
```
kubectl create -f kubernetes-dashboard.yaml
```
- Create an admin ServiceAccount and bind it to the cluster-admin role:
```
cat dashboard-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
Apply it:
```
kubectl create -f dashboard-user.yaml
```
Look up the admin user's login token:
```
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}') | grep token: | awk -F : '{print $2}' | xargs echo
```
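The pipeline greps the `token:` line out of the secret description, splits on the colon, and lets `xargs echo` trim the surrounding whitespace. Its text processing can be seen on sample input (the token value below is made up):

```shell
# Simulate two lines of `kubectl describe secret` output and extract the token.
printf 'type:  kubernetes.io/service-account-token\ntoken:      eyJhbGciOiJSUzI1NiJ9.sample\n' \
  | grep token: | awk -F : '{print $2}' | xargs echo
# -> eyJhbGciOiJSUzI1NiJ9.sample
```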
Check the service's node port:
```
kubectl -n kube-system get svc kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.110.76.188   <none>        443:30002/TCP   100m
```
Open https://<host_ip>:30002/, choose token authentication, and paste the token printed in the previous step. Alternatively, instead of changing the Service type, the Dashboard can be exposed with kubectl proxy, in which case it is served at http://<host_ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/:
```
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
```
3.6 Test Cluster DNS
Start a Pod with curl available and attach to its terminal:
```
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
```
From the Pod's shell, test DNS resolution and networking:
```
nslookup kubernetes.default
```
4. Add the Nodes
On each Node, as root, run the join command printed by kubeadm init:
```
kubeadm join 192.168.10.2:6443 --token 7deqem.n42r8n2rnmpzfuq7 \
    --discovery-token-ca-cert-hash sha256:7a86632f54de1004bb3f38124b663f837399d6ba9aa803d58c6707a76c02a6cb
```
Back on the Master, kubectl get nodes should now list all three machines; a newly joined node reaches the Ready state once the network plugin's Pods are running on it.
5. Configure Remote kubectl Access
To control the cluster from another machine, copy the Master's /etc/kubernetes/admin.conf to ~/.kube/config on that machine. Its contents look like:
```
cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRU...0tLQo=
    server: https://host_ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS....tLS0tCg==
```
In the copied config, replace host_ip with one of the names present in the API server's certificate: kubernetes, kubernetes.default, kubernetes.default.svc, or kubernetes.default.svc.cluster.local. Taking kubernetes.default.svc.cluster.local as the example, add a hosts entry that points the name at the Master:
```
cat /etc/hosts
<host_ip> kubernetes.default.svc.cluster.local
```
Accessing the cluster without this hosts mapping produces a certificate-mismatch error. Test from the remote machine:
```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
6. Command Summary
All the commands for setting up the kubelet environment on a fresh CentOS 7 host:
```
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install -y ipset ipvsadm
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.3.ce-3.el7
systemctl start docker.service && systemctl enable docker.service
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.15.3-0.x86_64 kubeadm-1.15.3-0.x86_64 kubectl-1.15.3-0.x86_64
systemctl start kubelet && systemctl enable kubelet
```