Kubernetes 1.11 Cluster Binary Installation

This article explains how to install and configure a Kubernetes 1.11 cluster from the official binary packages; download links are provided for all the images and yaml files.
Kubernetes 1.11 binary installation
k8s
June 13, 2019
Excerpted from https://k.i4t.com
For more articles, follow https://i4t.com

What is Kubernetes?

Kubernetes is a complete platform for supporting distributed systems. It provides full cluster management capabilities, including multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and fine-grained resource quota management. Kubernetes also ships with a complete set of management tools covering development, testing, deployment, and operations monitoring. In short, Kubernetes is a new container-based distributed architecture solution and a one-stop platform for building and supporting distributed systems.

Kubernetes basics in brief

Here we only briefly introduce the basic Kubernetes components; later articles will cover them in detail.

Kubernetes Service

A Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:
(1) It has a unique, assigned name (for example mysql-server)
(2) It has a virtual IP (Cluster IP, Service IP or VIP) and a port number
(3) It provides some kind of remote service capability
(4) It is mapped onto the group of container applications that provide that capability
Service processes currently serve requests over sockets, for example redis, memcache, MySQL, a web server, or a specific TCP server implementing some business logic. Although a Service is usually backed by several related service processes, each with its own Endpoint (IP + Port), Kubernetes lets us reach a given Service through its virtual Cluster IP + Service Port. With Kubernetes' built-in transparent load balancing and failure recovery, no matter how many backend processes there are, or whether one of them is redeployed to another machine after a failure, our calls to the Service are unaffected. More importantly, the Service itself does not change once created, which means we no longer have to worry about service IP addresses changing in a Kubernetes cluster.

Kubernetes Pod

The Pod concept
A Pod runs in an environment we call a Node, which can be a virtual machine or a physical machine in a private or public cloud; typically several hundred Pods run on one Node. Each Pod runs a special container called Pause, while the remaining containers are business containers. The business containers share the Pause container's network stack and Volume mounts, so communication and data exchange between them is very efficient. When designing applications we can take advantage of this by placing a group of closely related service processes into the same Pod.
Not every Pod and the containers inside it map to a Service; only a group of Pods that provides a service (whether internal or external) is mapped to one.

How Services and Pods are associated

Containers provide strong isolation, so it makes sense to isolate the group of processes serving a Service inside containers. Kubernetes designed the Pod object for this: each service process is wrapped into a Pod and becomes a Container running inside it. To establish the association between a Service and its Pods, Kubernetes first attaches a Label to each Pod, e.g. name=mysql for the Pods running MySQL and name=php for those running PHP, and then defines a Label Selector on the corresponding Service. For example, the MySQL Service's label selector is name=mysql, meaning that Service applies to every Pod carrying the name=mysql label. This neatly solves the Service-to-Pod association problem, as the sketch below illustrates.
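
A minimal, self-contained sketch of that Label/Selector relationship (the app=nginx label, the object names and the image are illustrative examples, not part of this installation):

# A Pod carrying a Label, plus a Service whose selector matches it
cat > label-demo.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: nginx            # the Label the Service selects on
spec:
  containers:
  - name: nginx
    image: nginx:1.13.0-alpine
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: nginx            # Label Selector: match every Pod carrying app=nginx
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl create -f label-demo.yaml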

Kubernetes RC (Replication Controller)

About RC
In a Kubernetes cluster, you only need to create an RC (Replication Controller) for the Pods behind a Service that needs to scale; scaling that Service, and later upgrading it, then stops being a headache.
Defining an RC file involves three key points (see the sketch after this list):

  • (1) The definition of the target Pod
  • (2) The number of replicas of the target Pod to run (Replicas)
  • (3) The Label of the target Pod to monitor
  • Once the RC is created and the system has created the Pods, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their status and count. If the number of instances falls below the defined replica count, a new Pod is created from the Pod template in the RC and scheduled onto a suitable Node, until the number of Pod instances reaches the target. The whole process is automatic and needs no human intervention; to scale, simply change the replica count in the RC.
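
A minimal RC sketch covering the three points above (the object names, label and image are illustrative, not from this installation):

cat > rc-demo.yaml <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3                 # (2) desired number of Pod replicas
  selector:
    app: nginx                # (3) Label of the target Pods to monitor
  template:                   # (1) definition of the target Pod
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.0-alpine
        ports:
        - containerPort: 80
EOF
kubectl create -f rc-demo.yaml
# scaling is then just: kubectl scale rc nginx-rc --replicas=5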

    Kubernetes Master

    About the Master
    In Kubernetes, the Master is the cluster control node. Every Kubernetes cluster needs a Master node responsible for managing and controlling the whole cluster; essentially all Kubernetes control commands are sent to it and it carries out the actual work. Almost every command we run later in this article is executed on the Master node. If the Master goes down or becomes unavailable, management of the containers in the cluster is lost.
    The following key processes run on the Master node:

  • Kubernetes API Server (kube-apiserver): the process exposing the HTTP REST interface; it is the single entry point for create/delete/update/get operations on all Kubernetes resources, and the entry point for cluster control
  • Kubernetes Controller Manager (kube-controller-manager): the automation control center for all resource objects in Kubernetes
  • Kubernetes Scheduler (kube-scheduler): the process responsible for resource scheduling (Pod scheduling)
  • In addition, an etcd service must run on the Master node, because the data of all Kubernetes resource objects is stored in etcd

    Kubernetes Node

    About Nodes
    Apart from the Master, the other machines in the cluster are called Nodes. The Master assigns each Node some workload (Docker containers); when a Node goes down, its workload is automatically moved to other nodes by the Master.
    The following key processes run on every Node:

  • kubelet: responsible for creating and stopping the containers of Pods, and works closely with the Master to provide the basic cluster management functions
  • kube-proxy: the key component implementing communication and load balancing for Kubernetes Services.
  • Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine.

    What the Master and Nodes do

    For cluster management, Kubernetes divides the machines into one Master node and a group of worker nodes (Nodes). The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager and kube-scheduler. Together they implement resource management, Pod scheduling, elastic scaling, security control, monitoring and error correction for the whole cluster, fully automatically. Nodes are the worker machines that run the real applications; the smallest unit Kubernetes manages on a Node is the Pod. Nodes run the kubelet and kube-proxy service processes, which create, start, monitor, restart and destroy Pods, and implement software-mode load balancing.
    For a detailed introduction to k8s, see https://k.i4t.com
    Tip: throughout this environment you only need to change the IP addresses; do not delete anything else.

    1. Environment preparation

    This installation of Kubernetes does not use a multi-master (HA) setup; it is a single-master installation.
    The environment preparation steps must be performed on both the master and the node.
    The environment is as follows:

  • IP / hostname / role / services:
    192.168.60.24  master  master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler (plus docker, kubelet, kube-proxy, calico only if the master also runs as a Node)
    192.168.60.25  node    node    docker, kubelet, kube-proxy, nginx (the node role on the master can skip installing nginx)
  • k8s component version: v1.11
  • docker version: v17.03
  • etcd version: v3.2.22
  • calico version: v3.1.3
  • dns version: 1.14.7
  • To avoid differences between your system and mine, a CentOS 7.4 download link is provided; please keep your version consistent with mine
    Baidu Cloud (password: q2xj)
    Kubernetes version
    This installation uses v1.11
    Check the OS and kernel version

    ➜ cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)
    
    ➜ uname -a
    3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    
    #We will upgrade the kernel next
    

    Tip: the following steps must be executed on both servers
    Set the hostname

    ➜ hostnamectl set-hostname [master|node]
    ➜ bash
    

    Set up SSH trust from the master

    ➜ yum install expect wget -y
    ➜ for i in 192.168.60.25;do
    ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.60.25
            expect {
                    \"*yes/no*\" {send \"yes\r\"; exp_continue}
                    \"*password*\" {send \"123456\r\"; exp_continue}
                    \"*Password*\" {send \"123456\r\";}
            } "
    done 
    

    Set up /etc/hosts

    ➜ echo "192.168.60.25 node" >>/etc/hosts
    ➜ echo "192.168.60.24 master" >>/etc/hosts
    

    Set up time synchronization

     yum -y install ntp
     systemctl enable ntpd
     systemctl start ntpd
     ntpdate -u cn.pool.ntp.org
     hwclock --systohc
     timedatectl set-timezone Asia/Shanghai
    

    Disable the swap partition

    ➜ swapoff -a     #disable swap temporarily
    ➜ vim /etc/fstab  #disable swap permanently
    swap was on /dev/sda11 during installation
    UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none  swap    sw      0       0
    #Comment out the swap entry and you are done
    #If you skip this, kubelet will fail to start and you will have to troubleshoot it yourself
    

    Configure the Yum repositories

     curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
     wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
     yum makecache
     yum install wget vim lsof net-tools lrzsz -y
    

    Disable the firewall and SELinux

     systemctl stop firewalld
     systemctl disable firewalld
     setenforce 0
     sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
    

    Upgrade the kernel

    Don't ask me why
    yum update 
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml -y&&
    sed -i s/saved/0/g /etc/default/grub&&
    grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
    
    #The change does not take effect without a reboot!
    

    If the kernel upgrade fails, see the separate article on that topic.
    Check the kernel

    ➜ uname -a
    Linux master 4.17.6-1.el7.elrepo.x86_64 #1 SMP Wed Jul 11 17:24:30 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
    

    Set kernel parameters

    echo "* soft nofile 190000" >> /etc/security/limits.conf
    echo "* hard nofile 200000" >> /etc/security/limits.conf
    echo "* soft nproc 252144" >> /etc/security/limits.conf
    echo "* hard nproc 262144" >> /etc/security/limits.conf
    tee /etc/sysctl.conf <<-'EOF'
    # System default settings live in /usr/lib/sysctl.d/00-system.conf.
    # To override those settings, enter new settings here, or in an /etc/sysctl.d/.conf file
    #
    # For more information, see sysctl.conf(5) and sysctl.d(5).
    
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.ip_local_port_range = 10000 61000
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.ip_forward = 1
    net.core.netdev_max_backlog = 2000
    net.ipv4.tcp_mem = 131072  262144  524288
    net.ipv4.tcp_keepalive_intvl = 30
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 2048
    net.ipv4.tcp_low_latency = 0
    net.core.rmem_default = 256960
    net.core.rmem_max = 513920
    net.core.wmem_default = 256960
    net.core.wmem_max = 513920
    net.core.somaxconn = 2048
    net.core.optmem_max = 81920
    net.ipv4.tcp_mem = 131072  262144  524288
    net.ipv4.tcp_rmem = 8760  256960  4088000
    net.ipv4.tcp_wmem = 8760  256960  4088000
    net.ipv4.tcp_keepalive_time = 1800
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_fack = 1
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_syn_retries = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-arptables = 1
    EOF
    echo "options nf_conntrack hashsize=819200" >> /etc/modprobe.d/mlx4.conf 
    modprobe br_netfilter
    sysctl -p
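
    After reloading, a quick sanity check confirms the settings took effect (a small addition of mine; the expected values follow from the file written above):

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # expected:
    # net.bridge.bridge-nf-call-iptables = 1
    # net.ipv4.ip_forward = 1
    ulimit -n    # in a new login shell this should show the raised nofile soft limit (190000)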
    

    2. Kubernetes Install

    Master configuration

    2.1 Install the CFSSL tools

    About the certificate types:
    client certificate: used by a server to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
    server certificate: used by a server; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
    peer certificate: a two-way certificate, used for communication between etcd cluster members
    Install the CFSSL tools

    ➜ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    chmod +x cfssl_linux-amd64
    mv cfssl_linux-amd64 /usr/bin/cfssl
    
    ➜ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    chmod +x cfssljson_linux-amd64
    mv cfssljson_linux-amd64 /usr/bin/cfssljson
    
    ➜ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x cfssl-certinfo_linux-amd64
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    

    2.2 Generate the etcd certificates

    etcd is the primary datastore of the Kubernetes cluster, so it must be installed and started before the Kubernetes services.
    Create the CA certificate

    #Create the etcd directory used for generating the etcd certificates; please keep the steps consistent with mine
    ➜ mkdir /root/etcd_ssl && cd /root/etcd_ssl
    
    cat > etcd-root-ca-csr.json << EOF
    {
      "key": {
        "algo": "rsa",
        "size": 4096
      },
      "names": [
        {
          "O": "etcd",
          "OU": "etcd Security",
          "L": "beijing",
          "ST": "beijing",
          "C": "CN"
        }
      ],
      "CN": "etcd-root-ca"
    }
    EOF
    

    etcd cluster signing configuration

    cat >  etcd-gencert.json << EOF  
    {                                 
      "signing": {                    
        "default": {                  
          "expiry": "87600h"           
        },                            
        "profiles": {                 
          "etcd": {             
            "usages": [               
                "signing",            
                "key encipherment",   
                "server auth", 
                "client auth"  
            ],  
            "expiry": "87600h"  
          }  
        }  
      }  
    }  
    EOF
    

    # The expiry is set to 87600h
    ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenario and other parameters; a specific profile is referenced later when signing certificates;
    signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
    server auth: a client may use this CA to verify the certificate presented by a server;
    client auth: a server may use this CA to verify the certificate presented by a client;
    etcd certificate signing request

    cat > etcd-csr.json << EOF
    {
      "key": {
        "algo": "rsa",
        "size": 4096
      },
      "names": [
        {
          "O": "etcd",
          "OU": "etcd Security",
          "L": "beijing",
          "ST": "beijing",
          "C": "CN"
        }
      ],
      "CN": "etcd",
      "hosts": [
        "127.0.0.1",
        "localhost",
        "192.168.60.24"
      ]
    }
    EOF
    
    $ hosts should contain the master's address
    

    Generate the root CA

    cfssl gencert --initca=true etcd-root-ca-csr.json \
    | cfssljson --bare etcd-root-ca
    

    Sign the etcd certificate with the root CA

    cfssl gencert --ca etcd-root-ca.pem \
    --ca-key etcd-root-ca-key.pem \
    --config etcd-gencert.json \
    -profile=etcd etcd-csr.json | cfssljson --bare etcd
    

    The certificates required by etcd are as follows

    ➜ ll
    total 36
    -rw-r--r-- 1 root root 1765 Jul 12 10:48 etcd.csr
    -rw-r--r-- 1 root root  282 Jul 12 10:48 etcd-csr.json
    -rw-r--r-- 1 root root  471 Jul 12 10:48 etcd-gencert.json
    -rw------- 1 root root 3243 Jul 12 10:48 etcd-key.pem
    -rw-r--r-- 1 root root 2151 Jul 12 10:48 etcd.pem
    -rw-r--r-- 1 root root 1708 Jul 12 10:48 etcd-root-ca.csr
    -rw-r--r-- 1 root root  218 Jul 12 10:48 etcd-root-ca-csr.json
    -rw------- 1 root root 3243 Jul 12 10:48 etcd-root-ca-key.pem
    -rw-r--r-- 1 root root 2078 Jul 12 10:48 etcd-root-ca.pem
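
    Optionally, inspect the certificate that was just signed before moving on (a quick check with the cfssl-certinfo tool installed earlier; the hosts and expiry should match the CSR and profile above):

    cfssl-certinfo -cert etcd.pem                     # prints subject, hosts (SANs) and expiry as JSON
    openssl x509 -in etcd.pem -noout -subject -dates  # the same information via openssl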
    

    2.3 Install and start etcd

    Only the apiserver and the Controller Manager need to connect to etcd
    yum install etcd -y, or upload the rpm package and install it with rpm -ivh

    Distribute the etcd certificates
     ➜ mkdir -p /etc/etcd/ssl && cd /root/etcd_ssl
    
    Check the etcd certificates
    ➜ ll /root/etcd_ssl/
    total 36
    -rw-r--r--. 1 root root 1765 Jul 20 10:46 etcd.csr
    -rw-r--r--. 1 root root  282 Jul 20 10:42 etcd-csr.json
    -rw-r--r--. 1 root root  471 Jul 20 10:40 etcd-gencert.json
    -rw-------. 1 root root 3243 Jul 20 10:46 etcd-key.pem
    -rw-r--r--. 1 root root 2151 Jul 20 10:46 etcd.pem
    -rw-r--r--. 1 root root 1708 Jul 20 10:46 etcd-root-ca.csr
    -rw-r--r--. 1 root root  218 Jul 20 10:40 etcd-root-ca-csr.json
    -rw-------. 1 root root 3243 Jul 20 10:46 etcd-root-ca-key.pem
    -rw-r--r--. 1 root root 2078 Jul 20 10:46 etcd-root-ca.pem
    
    
    Copy the certificates into place
    mkdir /etc/etcd/ssl
    cp *.pem /etc/etcd/ssl/
    chown -R etcd:etcd /etc/etcd/ssl
    chown -R etcd:etcd /var/lib/etcd
    chmod -R 644 /etc/etcd/ssl/
    chmod 755 /etc/etcd/ssl/
    
    
    Modify the etcd configuration on the master
    ➜ cp /etc/etcd/etcd.conf{,.bak} && >/etc/etcd/etcd.conf
    
    cat >/etc/etcd/etcd.conf <<EOF
    # [member]
    ETCD_NAME=etcd
    ETCD_DATA_DIR="/var/lib/etcd/etcd.etcd"
    ETCD_WAL_DIR="/var/lib/etcd/wal"
    ETCD_SNAPSHOT_COUNT="100"
    ETCD_HEARTBEAT_INTERVAL="100"
    ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://192.168.60.24:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.60.24:2379,http://127.0.0.1:2379"
    ETCD_MAX_SNAPSHOTS="5"
    ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    
    # [cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.24:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    ETCD_INITIAL_CLUSTER="etcd=https://192.168.60.24:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.24:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    
    # [proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    
    # [security]
    ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    
    # [logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    EOF
    
    ###Replace 192.168.60.24 with your master's address
    

    Start etcd

    systemctl daemon-reload
    systemctl restart etcd
    systemctl enable etcd
    

    Test that etcd is usable

    export ETCDCTL_API=3
    etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 endpoint health
    
    ##When testing, just replace the IP with your master's IP; separate multiple IPs with commas
    

    A healthy state looks like this:

    [root@master ~]# export ETCDCTL_API=3
    [root@master ~]# etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 endpoint health
    https://192.168.60.24:2379 is healthy: successfully committed proposal: took = 643.432µs
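
    A read/write round trip is another useful check of the TLS client setup (my own addition; the key name test_key is just an example):

    etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem \
      --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 put test_key "hello"
    etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem \
      --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 get test_key
    # should print OK, then test_key and hello (ETCDCTL_API=3 is still exported from the step above)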
    

    Check the etcd ports 2379/2380

    ➜ netstat -lntup
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 192.168.60.24:2379      0.0.0.0:*               LISTEN      2016/etcd           
    tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2016/etcd           
    tcp        0      0 192.168.60.24:2380      0.0.0.0:*               LISTEN      2016/etcd           
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      965/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1081/master         
    tcp6       0      0 :::22                   :::*                    LISTEN      965/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1081/master         
    udp        0      0 127.0.0.1:323           0.0.0.0:*                           721/chronyd         
    udp6       0      0 ::1:323                 :::*                                721/chronyd 
    

    ########### etcd installation and configuration complete ###############

    2.4 Install Docker

    Download the Docker packages

    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
    

    Because downloads from the network often time out, the packages have also been uploaded; you can simply use the packages I provide
    Docker and K8S package download (password: 1zov)
    Install and adjust the configuration

    ➜ yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
    ➜ yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y
    
    Enable docker on boot and start it
    systemctl enable docker 
    systemctl start docker 
    
    Adjust the docker unit file
    sed -i '/ExecStart=\/usr\/bin\/dockerd/iExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
    sed -i '/dockerd/s/$/ --storage-driver=overlay2/g' /usr/lib/systemd/system/docker.service
    
    Restart docker
    systemctl daemon-reload 
    systemctl restart docker
    

    If an older version was installed previously, you can remove it and install the new one

    yum remove docker \
                          docker-common \
                          docker-selinux \
                          docker-engine
    

    2.5 Install Kubernetes

    How to download Kubernetes
    The kubernetes.tar.gz archive contains the Kubernetes service binaries, documentation and examples; kubernetes-src.tar.gz contains the full source code. You can also download just kubernetes-server-linux-amd64.tar.gz from Server Binaries, which contains all the service binaries Kubernetes needs to run
    Kubernetes download: https://github.com/kubernetes/kubernetes/releases
    GitHub download
    Docker and K8S package download (password: 1zov)
    Kubernetes setup

    tar xf kubernetes-server-linux-amd64.tar.gz
    for i in hyperkube kube-apiserver kube-scheduler kubelet kube-controller-manager kubectl kube-proxy;do
    cp ./kubernetes/server/bin/$i /usr/bin/
    chmod 755 /usr/bin/$i
    done
    

    2.6 Generate and distribute the Kubernetes certificates

    Create the certificate directory

    mkdir /root/kubernets_ssl && cd /root/kubernets_ssl 
    

    k8s-root-ca-csr.json (root CA CSR)

    cat > k8s-root-ca-csr.json << EOF
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 4096
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    

    k8s-gencert.json (signing configuration)

    cat >  k8s-gencert.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ],
            "expiry": "87600h"
          }
        }
      }
    }
    EOF
    

    kubernetes-csr.json (certificate signing request)
    $ Fill the hosts field with the IPs of all the nodes you will use (the master); create the kubernetes certificate signing request file kubernetes-csr.json:

    cat >kubernetes-csr.json << EOF
    {
        "CN": "kubernetes",
        "hosts": [
            "127.0.0.1",
            "10.254.0.1",
            "192.168.60.24",
            "localhost",
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "BeiJing",
                "L": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    

    kube-proxy-csr.json (certificate signing request)

    cat > kube-proxy-csr.json << EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    

    admin-csr.json (certificate signing request)

    cat > admin-csr.json << EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    

    Generate the Kubernetes certificates

    ➜ cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
    
    ➜ for targetName in kubernetes admin kube-proxy; do
        cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
    done
    

    #Generate the bootstrap configuration

    export KUBE_APISERVER="https://127.0.0.1:6443"
    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    echo "Token: ${BOOTSTRAP_TOKEN}"
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    

    Configure the kubeconfig files
    # On the Master this address should be https://MasterIP:6443
    Go to the Kubernetes certificate directory /root/kubernets_ssl

    export KUBE_APISERVER="https://127.0.0.1:6443"
    

    # Set the cluster parameters

    kubectl config set-cluster kubernetes \
      --certificate-authority=k8s-root-ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    

    # Set the client authentication parameters

    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    

    # Set the context parameters

    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    

    # Set the default context

    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    

    # echo "Create kube-proxy kubeconfig..."

    kubectl config set-cluster kubernetes \
      --certificate-authority=k8s-root-ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    

    # kube-proxy

    kubectl config set-credentials kube-proxy \
      --client-certificate=kube-proxy.pem \
      --client-key=kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    

    # kube-proxy_config

    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
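
    You can sanity-check the two kubeconfig files that were just generated (a read-only inspection; both should show the cluster "kubernetes", the server address set above, and current-context "default"):

    kubectl config view --kubeconfig=bootstrap.kubeconfig
    kubectl config view --kubeconfig=kube-proxy.kubeconfig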
    

    # Generate the advanced audit policy

    cat >> audit-policy.yaml <<EOF
    # Log all requests at the Metadata level.
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
    - level: Metadata
    EOF
    

    #Distribute the Kubernetes certificates#####

    cd /root/kubernets_ssl
    mkdir -p /etc/kubernetes/ssl
    cp *.pem /etc/kubernetes/ssl
    cp *.kubeconfig token.csv audit-policy.yaml /etc/kubernetes
    useradd -s /sbin/nologin -M kube
    chown -R kube:kube /etc/kubernetes/ssl
    

    # Generate the kubectl configuration

    cd /root/kubernets_ssl
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
      --embed-certs=true \
      --server=https://127.0.0.1:6443
    
    
    
    kubectl config set-credentials admin \
      --client-certificate=/etc/kubernetes/ssl/admin.pem \
      --embed-certs=true \
      --client-key=/etc/kubernetes/ssl/admin-key.pem
    
    
    kubectl config set-context kubernetes \
      --cluster=kubernetes \
      --user=admin
    
    
    kubectl config use-context kubernetes
    

    # Set the log directory permissions

    mkdir -p /var/log/kube-audit /usr/libexec/kubernetes
    chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes
    chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
    

    2.7 Component configuration

    On the Master
    Once the certificates and packages are installed, you only need to edit the configuration files (located in /etc/kubernetes) and then start the relevant components

    cd /etc/kubernetes
    

    config (shared configuration)
    Leave the defaults unless noted; the parts that need changing are commented

    cat > /etc/kubernetes/config <<EOF
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=2"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=true"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://127.0.0.1:8080"
    EOF
    

    apiserver configuration

    cat > /etc/kubernetes/apiserver <<EOF
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"
    
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS=--etcd-servers=https://192.168.60.24:2379
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction"
    
    # Add your own!
    KUBE_API_ARGS="--authorization-mode=RBAC,Node \
                   --endpoint-reconciler-type=lease \
                   --runtime-config=batch/v2alpha1=true \
                   --anonymous-auth=false \
                   --kubelet-https=true \
                   --enable-bootstrap-token-auth \
                   --token-auth-file=/etc/kubernetes/token.csv \
                   --service-node-port-range=30000-50000 \
                   --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
                   --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
                   --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                   --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                   --etcd-quorum-read=true \
                   --storage-backend=etcd3 \
                   --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
                   --etcd-certfile=/etc/etcd/ssl/etcd.pem \
                   --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
                   --enable-swagger-ui=true \
                   --apiserver-count=3 \
                   --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
                   --audit-log-maxage=30 \
                   --audit-log-maxbackup=3 \
                   --audit-log-maxsize=100 \
                   --audit-log-path=/var/log/kube-audit/audit.log \
                   --event-ttl=1h "
    EOF
    
    #The address that needs changing is etcd's; for a cluster, list the endpoints separated by commas
    Change 192.168.60.24:2379 to your master's IP
    

    controller-manager configuration

    cat > /etc/kubernetes/controller-manager <<EOF
    ###
    # The following values are used to configure the kubernetes controller-manager
    
    # defaults from config and apiserver should be adequate
    
    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                                  --service-cluster-ip-range=10.254.0.0/16 \
                                  --cluster-name=kubernetes \
                                  --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                                  --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                                  --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                                  --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                                  --leader-elect=true \
                                  --node-monitor-grace-period=40s \
                                  --node-monitor-period=5s \
                                  --pod-eviction-timeout=60s"
    EOF
    

    scheduler configuration

    cat > /etc/kubernetes/scheduler <<EOF
    ###
    # kubernetes scheduler config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
    EOF
    

    Set up the systemd unit files
    The component configuration files have been generated; next we create the unit files that start the components
    ###kube-apiserver.service unit###

    vim /usr/lib/systemd/system/kube-apiserver.service
    
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    User=root
    ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    ###kube-controller-manager.service unit###

    vim /usr/lib/systemd/system/kube-controller-manager.service
    
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    User=root
    ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    ###kube-scheduler.service unit###

    vim /usr/lib/systemd/system/kube-scheduler.service
    
    [Unit]
    Description=Kubernetes Scheduler Plugin
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/scheduler
    User=root
    ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start kube-apiserver, kube-controller-manager and kube-scheduler

    systemctl daemon-reload
    systemctl start kube-apiserver
    systemctl start kube-controller-manager
    systemctl start kube-scheduler
    
    
    Enable on boot
    systemctl enable kube-apiserver
    systemctl enable kube-controller-manager
    systemctl enable kube-scheduler
    

    Note: kube-apiserver is the primary service; if the apiserver fails to start, the others will fail as well
    Verify that everything is healthy

    [root@master system]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok                   
    scheduler            Healthy   ok                   
    etcd-0               Healthy   {"health": "true"} 
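
    Two more quick checks worth running here (my own additions; the exact output depends on your patch release):

    kubectl version --short    # client and server should both report v1.11.x
    kubectl cluster-info       # should print the apiserver URL from the kubectl context (https://127.0.0.1:6443 here)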
    

    #Create the ClusterRoleBinding
    Because kubelet uses TLS Bootstrapping, under the RBAC policy the kubelet-bootstrap user that kubelet authenticates as has no API access at all
    So we must create a ClusterRoleBinding in the cluster beforehand, granting it the system:node-bootstrapper role

    kubectl create clusterrolebinding kubelet-bootstrap \
      --clusterrole=system:node-bootstrapper \
      --user=kubelet-bootstrap


    The delete command ------ do NOT run it!
    kubectl delete clusterrolebinding kubelet-bootstrap
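
    To confirm the binding that was just created (a read-only check):

    kubectl describe clusterrolebinding kubelet-bootstrap
    # Role should be ClusterRole/system:node-bootstrapper and Subjects should include User kubelet-bootstrap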
    

    2.8 Installing the node components on the Master

    The master can also run the node components
    Install kube-proxy and kubelet for the node role on the master

    ######kubelet configuration
    
    cat >/etc/kubernetes/kubelet <<EOF
    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=192.168.60.24"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=master"
    
    # location of the api-server
    # KUBELET_API_SERVER=""
    
    # Add your own!
    KUBELET_ARGS="--cgroup-driver=cgroupfs \
                  --cluster-dns=10.254.0.2 \
                  --resolv-conf=/etc/resolv.conf \
                  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
                  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
                  --cert-dir=/etc/kubernetes/ssl \
                  --cluster-domain=cluster.local. \
                  --hairpin-mode promiscuous-bridge \
                  --serialize-image-pulls=false \
                  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
    EOF
    
    
    Change the IP address and hostname to the master's; nothing else needs modifying
    

    Create the unit file
    ###kubelet.service unit###
    File name: kubelet.service

    vim /usr/lib/systemd/system/kubelet.service
    
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    Create the working directory

    /var/lib/kubelet    if this directory does not exist, create it manually
    mkdir /var/lib/kubelet -p
    

    #kube-proxy configuration

    cat >/etc/kubernetes/proxy <<EOF
    ###
    # kubernetes proxy config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_PROXY_ARGS="--bind-address=192.168.60.24 \
                     --hostname-override=master \
                     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                     --cluster-cidr=10.254.0.0/16"
    EOF
    
    #master IP and hostname
    

    kube-proxy unit file

    vim /usr/lib/systemd/system/kube-proxy.service
    
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start kubelet and kube-proxy

    systemctl daemon-reload
    systemctl restart kube-proxy
    systemctl restart kubelet
    

    Once everything is started, the kubelet log contains messages like the one below, telling us the certificate request has been created but still needs to be approved.
    Check it with kubectl get csr

    3. Kubernetes Node Install

    Node configuration

    3.1 Install Docker

    Same as on the master, no need to repeat the explanation
    Docker and K8S package download (password: 1zov)

    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
    
    yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
    yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y
    
    systemctl enable docker 
    systemctl start docker 
    
    sed -i '/ExecStart=\/usr\/bin\/dockerd/iExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
    sed -i '/dockerd/s/$/ --storage-driver=overlay2/g' /usr/lib/systemd/system/docker.service
    
    systemctl daemon-reload 
    systemctl restart docker
    

    3.2 Distribute the certificates

    We need to distribute the kubernetes and etcd certificates from the Master to the Node
    Although etcd does not run on the Node, network components such as calico or flannel need to reach etcd and therefore need the etcd certificates.
    Copy hyperkube, kubelet, kubectl and kube-proxy from the Master to the node. These copy steps are all performed on the master

    for i in hyperkube kubelet kubectl kube-proxy;do
    scp ./kubernetes/server/bin/$i 192.168.60.25:/usr/bin/
    ssh 192.168.60.25 chmod 755 /usr/bin/$i
    done
    
    ##This IP is the node's IP
    Run the for loop from the directory where the kubernetes/ server binaries were extracted
    

    Distribute the K8s certificates
    cd into the K8s certificate directory

    cd /root/kubernets_ssl/
    for IP in 192.168.60.25;do
        ssh $IP mkdir -p /etc/kubernetes/ssl
    	scp *.pem $IP:/etc/kubernetes/ssl
        scp *.kubeconfig token.csv audit-policy.yaml $IP:/etc/kubernetes
    	ssh $IP useradd -s /sbin/nologin -M kube
        ssh $IP chown -R kube:kube /etc/kubernetes/ssl
    done
    
    #Run on the master
    

    Distribute the etcd certificates

    for IP in 192.168.60.25;do
        cd /root/etcd_ssl
        ssh $IP mkdir -p /etc/etcd/ssl
        scp *.pem $IP:/etc/etcd/ssl
        ssh $IP chmod -R 644 /etc/etcd/ssl/*
        ssh $IP chmod 755 /etc/etcd/ssl
    done
    
    #Run on the master
    

    Set file permissions on the Node

    ssh root@192.168.60.25 mkdir -p /var/log/kube-audit /usr/libexec/kubernetes &&
    ssh root@192.168.60.25 chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes &&
    ssh root@192.168.60.25 chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
    
    #Run on the master
    

    3.3 Node configuration

    On the node the configuration files are likewise located in /etc/kubernetes
    Only three files need changing on the node: config, kubelet and proxy, as follows
    #config (shared configuration)
    Note: none of the config files (including kubelet and proxy below) define the API Server address, because kubelet and kube-proxy start with the --require-kubeconfig option, which makes them read the API Server address from the *.kubeconfig files and ignore whatever is set in the config files;
    so any address set in these config files has no effect

    cat > /etc/kubernetes/config <<EOF
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=2"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=true"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    # KUBE_MASTER="--master=http://127.0.0.1:8080"
    EOF
    

    # kubelet configuration

    cat >/etc/kubernetes/kubelet <<EOF
    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=192.168.60.25"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=node"
    
    # location of the api-server
    # KUBELET_API_SERVER=""
    
    # Add your own!
    KUBELET_ARGS="--cgroup-driver=cgroupfs \
                  --cluster-dns=10.254.0.2 \
                  --resolv-conf=/etc/resolv.conf \
                  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
                  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
                  --cert-dir=/etc/kubernetes/ssl \
                  --cluster-domain=cluster.local. \
                  --hairpin-mode promiscuous-bridge \
                  --serialize-image-pulls=false \
                  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
    EOF
    
    #The IP address and hostname here are the node's
    

    Copy the unit file

    vim /usr/lib/systemd/system/kubelet.service
    
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    mkdir /var/lib/kubelet -p
    The working directory is /var/lib/kubelet and must be created manually
    Start kubelet

    sed -i 's#127.0.0.1#192.168.60.24#g' /etc/kubernetes/bootstrap.kubeconfig
    #This address is the master's
    #This tests whether kubelet can reach the master directly; nginx is started later to provide high availability for the master
    
    
    systemctl daemon-reload
    systemctl restart kubelet
    systemctl enable kubelet
    

    #Edit the kube-proxy configuration

    cat >/etc/kubernetes/proxy <<EOF
    ###
    # kubernetes proxy config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_PROXY_ARGS="--bind-address=192.168.60.25 \
                     --hostname-override=node \
                     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                     --cluster-cidr=10.254.0.0/16"
    EOF
    
    #Replace with the node's values
    --bind-address= the node's IP address
    --hostname-override= the node's hostname
    

    kube-proxy unit file

    vim /usr/lib/systemd/system/kube-proxy.service
    
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    3.4 Create the nginx proxy

    Every node should connect to a local nginx proxy, and nginx load-balances across all the api servers; the nginx proxy configuration follows
    You can also skip the nginx proxy; in that case just change the API Server address in bootstrap.kubeconfig and kube-proxy.kubeconfig
    Note: for the kubelet running on the master node, nginx load balancing is not needed; you can skip this step and set the apiserver address in kubelet.kubeconfig and kube-proxy.kubeconfig to the current master IP on port 6443
    # Create the configuration directory

    mkdir -p /etc/nginx
    

    # Write the proxy configuration

    cat > /etc/nginx/nginx.conf <<EOF
    error_log stderr notice;
    
    worker_processes auto;
    events {
      multi_accept on;
      use epoll;
      worker_connections 1024;
    }
    
    stream {
        upstream kube_apiserver {
            least_conn;
            server 192.168.60.24:6443 weight=20 max_fails=1 fail_timeout=10s;
            #the server line proxies to the master's IP
        }
    
        server {
            listen        0.0.0.0:6443;
            proxy_pass    kube_apiserver;
            proxy_timeout 10m;
            proxy_connect_timeout 1s;
        }
    }
    EOF
    
    ##the upstream server should point at the master's apiserver address and port
    

    # Update permissions

    chmod +r /etc/nginx/nginx.conf
    

    #Start the nginx docker container to do the forwarding

    docker run -it -d -p 127.0.0.1:6443:6443 -v /etc/nginx:/etc/nginx  --name nginx-proxy --net=host --restart=on-failure:5 --memory=512M  nginx:1.13.5-alpine
    
    
    Tip: you can pull the nginx image in advance
    docker pull daocloud.io/library/nginx:1.13.5-alpine
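
    Once the container is up, you can check that the local proxy really reaches the apiserver. Since the apiserver runs with --anonymous-auth=false, an unauthenticated request is rejected, so an "Unauthorized" reply here still proves the path through nginx works (this check is my addition, not one of the original steps):

    curl -k https://127.0.0.1:6443/healthz
    # expect an Unauthorized (401) response rather than a connection error;
    # "connection refused" or a timeout would mean the proxy or the apiserver is unreachable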
    

    To keep nginx reliable while staying convenient, the nginx on the node is started with docker but supervised by systemd; the systemd configuration is as follows

    cat >/etc/systemd/system/nginx-proxy.service <<EOF 
    [Unit]
    Description=kubernetes apiserver docker wrapper
    Wants=docker.socket
    After=docker.service
    
    [Service]
    User=root
    PermissionsStartOnly=true
    ExecStart=/usr/bin/docker start nginx-proxy
    Restart=always
    RestartSec=15s
    TimeoutStartSec=30s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
    ➜ systemctl daemon-reload
    ➜ systemctl start nginx-proxy
    ➜ systemctl enable nginx-proxy
    

    Make sure port 6443 is listening before starting kubelet

    sed -i 's#192.168.60.24#127.0.0.1#g' /etc/kubernetes/bootstrap.kubeconfig
    

    Check port 6443

    [root@node kubernetes]# netstat -lntup
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2042/kube-proxy     
    tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1925/nginx: master  
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      966/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1050/master         
    tcp6       0      0 :::10256                :::*                    LISTEN      2042/kube-proxy     
    tcp6       0      0 :::22                   :::*                    LISTEN      966/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1050/master         
    udp        0      0 127.0.0.1:323           0.0.0.0:*                           717/chronyd         
    udp6       0      0 ::1:323                 :::*                                717/chronyd  
    
    [root@node kubernetes]# lsof -i:6443
    lsof: no pwd entry for UID 100
    lsof: no pwd entry for UID 100
    COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
    kubelet 1765     root    3u  IPv4  27573      0t0  TCP node1:39246->master:sun-sr-https (ESTABLISHED)
    nginx   1925     root    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
    lsof: no pwd entry for UID 100
    nginx   1934      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
    lsof: no pwd entry for UID 100
    nginx   1935      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
    

    Start kubelet and kube-proxy
    It is best to restart kube-proxy before starting kubelet

    systemctl restart kube-proxy
    systemctl enable kubelet
    
    systemctl daemon-reload
    systemctl restart kubelet
    systemctl enable kubelet
    

    Remember to check the kubelet status!

    3.5 Certificate approval

    Because TLS Bootstrapping is used, kubelet does not join the cluster immediately after starting; it first submits a certificate request, and the log shows output like the following

    7月 24 13:55:50 master kubelet[1671]: I0724 13:55:50.877027    1671 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
    

    At this point you only need to approve the certificate request on the master
    # Check the csr

    ➜  kubectl get csr
    NAME        AGE       REQUESTOR           CONDITION
    csr-l9d25   2m        kubelet-bootstrap   Pending
    
    If kubelet has been configured and started on both machines, two requests are shown here: one for the master and one for the node
    

    # Approve the certificate

    kubectl certificate approve csr-l9d25  
    #csr-l9d25 is the name of the certificate request
    
    Or run: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    

    # Check the nodes
    After the certificates have been approved

    [root@master ~]# kubectl get nodes
    NAME      STATUS    ROLES     AGE       VERSION
    master    Ready         40m       v1.11.0
    node      Ready         39m       v1.11.0
    

    After approval, the kubelet kubeconfig file and the key pair are generated automatically:

    $ ls -l /etc/kubernetes/kubelet.kubeconfig
    -rw------- 1 root root 2280 Nov  7 10:26 /etc/kubernetes/kubelet.kubeconfig
    $ ls -l /etc/kubernetes/ssl/kubelet*
    -rw-r--r-- 1 root root 1046 Nov  7 10:26 /etc/kubernetes/ssl/kubelet-client.crt
    -rw------- 1 root root  227 Nov  7 10:22 /etc/kubernetes/ssl/kubelet-client.key
    -rw-r--r-- 1 root root 1115 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.crt
    -rw------- 1 root root 1675 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.key
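
    If you want to see what the approved client certificate contains (an optional check; with TLS bootstrapping the subject should carry the system:nodes group):

    openssl x509 -in /etc/kubernetes/ssl/kubelet-client.crt -noout -subject -dates
    # expect a subject of the form O=system:nodes, CN=system:node:<hostname>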
    

    #Note:
    If the apiserver is not running, none of the later steps will work
    The IP addresses configured in kubelet are always the local machine's (the master configures its own node role)
    On the Node, start nginx-proxy before kube-proxy. The address configured in kube-proxy, 127.0.0.1:6443 on the local machine, is effectively master:6443

    4. K8s component installation

    4.1 About Calico

    Calico is an interesting virtual networking solution: it builds the network dynamically purely with routing rules, advertising routes over the BGP protocol.
    Its advantage is that the network formed by the endpoints is a pure layer-3 network; packet flow is controlled entirely by routing rules, with no overlay or other extra overhead.
    Calico endpoints can move between hosts, and ACLs are implemented.
    Its drawback is that the number of routes equals the number of containers, which can easily exceed the capacity of routers, layer-3 switches or even the nodes themselves, limiting how far the network can grow.
    Each node ends up with a very large number of iptables rules and routes, which makes operations and troubleshooting harder.
    By design Calico cannot support VPCs; containers can only obtain IPs from the ranges Calico manages.
    The current implementation has no traffic control, so a few containers can grab most of a node's bandwidth.
    The scale of a Calico network is limited by the scale of the BGP network.
    Terminology
    endpoint: a network interface attached to the Calico network
    AS: an autonomous system, which exchanges routing information with other AS networks via BGP
    ibgp: a BGP speaker inside an AS, exchanging routes with other ibgp and ebgp speakers in the same AS.
    ebgp: a BGP speaker at the border of an AS, exchanging routes with ibgp speakers in the same AS and with the ebgp speakers of other ASes.
    workloadEndpoint: an endpoint used by a virtual machine or container
    hostEndpoint: the address of a physical machine (node)
    How the network is built
    The core of Calico networking is IP routing; every container or virtual machine is assigned a workload-endpoint (wl).
    When container A on nodeA accesses container B on nodeB:
    The key question is: how does nodeA learn the next-hop address? The answer is that the nodes exchange routing information with each other over BGP.
    Each node runs the software router bird, configured as a BGP Speaker, and exchanges routes with the other nodes via the BGP protocol.
    You can think of it as every node announcing to the others:
    "I am X.X.X.X; this IP or subnet lives on me, and I am its next hop."
    In this way every node learns the next-hop address of every workload-endpoint; the sketch below shows roughly how that looks in a node's routing table.
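
    As an illustration of what those BGP-learned routes look like on a node (the addresses below are made-up examples consistent with the 172.16.0.0/16 pool configured later, not real output from this cluster):

    ip route | grep -E 'bird|cali'
    # typical entries:
    # 172.16.104.0/26 via 192.168.60.25 dev ens33 proto bird    # pods on the other node, next hop = that node
    # 172.16.241.5 dev cali1a2b3c4d5e6 scope link               # a local pod, reached through its veth (caliXXXX)
    # blackhole 172.16.241.0/26 proto bird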
    Calico components:
    Felix: the Calico agent, running on every node; it sets up networking for containers: IPs, routes, iptables rules and so on
    etcd: Calico's backend store
    BIRD (BGP client): broadcasts the routes that Felix configures on each node to the rest of the Calico network (via BGP).
    BGP Route Reflector: hierarchical route distribution for large clusters.
    calico: the Calico command-line management tool
    calico-node: the Calico service program; it sets up Pod network resources and ensures Pod networking is reachable from every Node. It must run in HostNetwork mode, using the host's network directly.
    install-cni: installs the CNI binaries into /opt/cni/bin on each Node and the corresponding network configuration files into /etc/cni/net.d.
    As a virtual networking tool aimed at data centers, Calico uses BGP, routing tables and iptables to build a layer-3 network with no packet encapsulation or decapsulation, which also makes it easy to debug. It still has some minor shortcomings, for example the stable release does not yet support private networks, but hopefully later versions will be even stronger.
    References:
    https://blog.csdn.net/ptmozhu/article/details/70159919
    http://www.lijiaocn.com/%E9%A1%B9%E7%9B%AE/2017/04/11/calico-usage.html

    4.2 Install and configure Calico

    Deploying Calico is fairly straightforward these days; you only need to create a couple of yml files
    # Fetch the relevant calico.yaml; we use version 3.1, since lower versions have bugs

    wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
    wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
    
    If you have network problems, scroll down for my Baidu Cloud link
    

    #Replace the etcd address; the IP here is the etcd (master) address

    sed -i 's@.*etcd_endpoints:.*@  etcd_endpoints: "https://192.168.60.24:2379"@gi' calico.yaml
    

    # Replace the etcd certificates
    Modify the etcd-related settings; the main changes are listed below (the etcd certificate contents must be base64-encoded)

    export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
    export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
    export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`
    
    
    sed -i "s@.*etcd-cert:.*@  etcd-cert: ${ETCD_CERT}@gi" calico.yaml
    sed -i "s@.*etcd-key:.*@  etcd-key: ${ETCD_KEY}@gi" calico.yaml
    sed -i "s@.*etcd-ca:.*@  etcd-ca: ${ETCD_CA}@gi" calico.yaml
    
    sed -i 's@.*etcd_ca:.*@  etcd_ca: "/calico-secrets/etcd-ca"@gi' calico.yaml
    sed -i 's@.*etcd_cert:.*@  etcd_cert: "/calico-secrets/etcd-cert"@gi' calico.yaml
    sed -i 's@.*etcd_key:.*@  etcd_key: "/calico-secrets/etcd-key"@gi' calico.yaml
    

    # Set Calico's IP pool; make sure it does not overlap with the service cluster IP range or the host network

    sed -i s/192.168.0.0/172.16.0.0/g calico.yaml
    

    Modify the kubelet configuration
    The Calico documentation requires kubelet to start with the cni network plugin (--network-plugin=cni), and kube-proxy
    must not be started with --masquerade-all (it conflicts with Calico policy), so all kubelet and proxy configuration files need to be modified
    #Modify the kubelet configuration everywhere (both master & node) and add the following to the startup arguments

    vim /etc/kubernetes/kubelet
                  --network-plugin=cni 
    
    #At this step it is best to restart both kubelet and docker, to avoid errors caused by stale configuration
    systemctl daemon-reload
    systemctl restart docker
    systemctl restart kubelet
    
    systemctl start kube-proxy.service
    systemctl enable kube-proxy.service
    

    Run the deployment. Note that with RBAC enabled, the ClusterRole and ClusterRoleBinding must be created separately
    https://www.kubernetes.org.cn/1879.html
    RoleBinding and ClusterRoleBinding
    https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
    ##Note that some images have to be pulled from external registries; here we can import the images instead
    Image download link (password: ibyt)
    Import the images (required on both master and node)
    pause.tar
    Pulls time out if the image is not imported first

    Events:
      Type     Reason                  Age               From               Message
      ----     ------                  ----              ----               -------
      Normal   Scheduled               51s               default-scheduler  Successfully assigned default/nginx-deployment-7c5b578d88-lckk2 to node
      Warning  FailedCreatePodSandBox  5s (x3 over 43s)  kubelet, node      Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection timed out
    

    Note: because the Calico images are hosted abroad, I have exported them; just import them with docker load -i (as shown below)
    Calico images and yaml files bundle (password: wxi1)
    It is recommended that the master and the node use the same Calico images

    [root@node ~]# docker images
    REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
    nginx                       1.13.2-alpine       2d92198f40ec        12 months ago       15.5 MB
    daocloud.io/library/nginx   1.13.2-alpine       2d92198f40ec        12 months ago       15.5 MB
    [root@node ~]# 
    [root@node ~]# 
    [root@node ~]# docker load < calico-node.tar
    cd7100a72410: Loading layer [==================================================>] 4.403 MB/4.403 MB
    ddc4cb8dae60: Loading layer [==================================================>]  7.84 MB/7.84 MB
    77087b8943a2: Loading layer [==================================================>] 249.3 kB/249.3 kB
    c7227c83afaf: Loading layer [==================================================>] 4.801 MB/4.801 MB
    2e0e333a66b6: Loading layer [==================================================>] 231.8 MB/231.8 MB
    Loaded image: quay.io/calico/node:v3.1.3
    
    The master has the following images
    [root@master ~]# docker images
    REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
    quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
    quay.io/calico/kube-controllers        v3.1.3              240a82836573        7 weeks ago         55 MB
    quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
    gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB
    [root@master ~]#
    
    @@@@@@@@@@@@@@@@@@@@@@@@@
    
    The node has the following images
    [root@node ~]# docker images
    REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
    quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
    quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
    nginx                                  1.13.5-alpine       ea7bef82810a        9 months ago        15.5 MB
    gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB
    

    Create the RBAC objects and the Calico pods

    kubectl apply -f rbac.yaml 
    kubectl create -f calico.yaml
    

    After it starts, check the pods

    [root@master ~]# kubectl get pod -o wide --namespace=kube-system
    NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
    calico-node-8977h                           2/2       Running   0          2m        192.168.60.25   node
    calico-node-bl9mf                           2/2       Running   0          2m        192.168.60.24   master
    calico-policy-controller-79bc74b848-7l6zb   1/1       Running   0          2m        192.168.60.24   master
    

    Pod yaml reference: https://mritd.me/2017/07/31/calico-yml-bug/
    calicoctl
    Since calicoctl 1.0, everything calicoctl manages is a resource; the ip pools, profiles, policies and so on from earlier versions are all resources. Resources are defined in yaml or json, created and applied with calicoctl create or apply, and viewed with calicoctl get
    Download calicoctl

    wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
    chmod +x calicoctl 
    mv calicoctl /usr/bin/
    
    #If the download fails, scroll up; it has also been uploaded to Baidu Cloud
    

    Check that calicoctl installed successfully

    [root@master yaml]# calicoctl version
    Version:      v1.3.0
    Build date:   
    Git commit:   d2babb6
    

    Configure the calicoctl datastore

    [root@master ~]# mkdir -p /etc/calico/
    

    #Edit the calicoctl configuration file
    The version downloaded above is 3.1 by default; change the version in the URL if you want 2.6
    The configuration for version 2.6 is as follows

    cat > /etc/calico/calicoctl.cfg<<EOF
    apiVersion: v1
    kind: calicoApiConfig
    metadata:
    spec:
      datastoreType: "etcdv2"
      etcdEndpoints: "https://192.168.60.24:2379"
      etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
      etcdCertFile: "/etc/etcd/ssl/etcd.pem"
      etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
    EOF
    #calicoctl needs to connect to etcd; the address here is etcd's (on the Master)
    

    For 3.1 you only need the corresponding changes:

    apiVersion: projectcalico.org/v3
    kind: CalicoAPIConfig
    metadata:
    spec:
      datastoreType: "etcdv3"
      etcdEndpoints: "https://192.168.60.24:2379"
      etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
      etcdCertFile: "/etc/etcd/ssl/etcd.pem"
      etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
    

    Official documentation: https://docs.projectcalico.org/v3.1/usage/calicoctl/configure/
    Each version has its own configuration format, so it is best to consult the official documentation.
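    
    With the datastore configured, the resource workflow described earlier can be exercised directly (a minimal sketch; the exact pools shown will depend on your calico.yaml):
    
    # List the IP pool resource(s) created by calico.yaml
    calicoctl get ippool -o wide
    # Dump a resource as YAML; an edited copy can be fed back with `calicoctl apply -f <file>`
    calicoctl get ippool -o yaml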
    #Check the Calico status

    [root@master calico]# calicoctl node status
    Calico process is running.
    
    IPv4 BGP status
    +---------------+-------------------+-------+----------+-------------+
    | PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
    +---------------+-------------------+-------+----------+-------------+
    | 192.168.60.25 | node-to-node mesh | up    | 06:13:41 | Established |
    +---------------+-------------------+-------+----------+-------------+
    
    IPv6 BGP status
    No IPv6 peers found.
    

    Check the Deployments

    [root@master ~]# kubectl get deployment  --namespace=kube-system 
    NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    calico-kube-controllers    1         1         1            1           4h
    calico-policy-controller   0         0         0            0           4h
    
    [root@master ~]# kubectl get pods --namespace=kube-system  -o wide
    NAME                                      READY     STATUS    RESTARTS   AGE       IP              NODE
    calico-kube-controllers-b785696ff-b7kjv   1/1       Running   0          4h        192.168.60.25   node
    calico-node-szl6m                         2/2       Running   0          4h        192.168.60.25   node
    calico-node-tl4xc                         2/2       Running   0          4h        192.168.60.24   master
    

    Check the resources after creation

    [root@master ~]# kubectl get pod,svc -n kube-system
    NAME                                          READY     STATUS    RESTARTS   AGE
    pod/calico-kube-controllers-b785696ff-b7kjv   1/1       Running   0          4h
    pod/calico-node-szl6m                         2/2       Running   0          4h
    pod/calico-node-tl4xc                         2/2       Running   0          4h
    pod/kube-dns-66544b5b44-vg8lw                 2/3       Running   5          4m
    

    Testing
    Now that Calico is deployed, we need to test that networking works properly.

    cat > test.service.yaml << EOF
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 31000
      type: NodePort
    EOF
    
    
      ##The exposed NodePort is 31000
    

    Edit the Deployment file

    cat > test.deploy.yaml << EOF
    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.13.0-alpine
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
    EOF
    

    Create the resources from the YAML files

    [root@master k8s_yaml]# kubectl create -f test.service.yaml
    service/nginx-service created
    [root@master k8s_yaml]# kubectl create -f test.deploy.yaml
    deployment.apps/nginx-deployment created
    
    Once the Pods are running normally we can check them:
    [root@master k8s_yaml]# kubectl get pod
    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-5ffbbc5c94-9zvh9   1/1       Running   0          45s
    nginx-deployment-5ffbbc5c94-jc8zw   1/1       Running   0          45s
    nginx-deployment-5ffbbc5c94-lcrlt   1/1       Running   0          45s
    

    Now nginx can be reached on port 31000 of any node IP:
    9.jpg-269kB
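    
    The same check from the command line (a minimal sketch; 192.168.60.25 is the node IP used elsewhere in this article, and 31000 is the nodePort from test.service.yaml):
    
    # kube-proxy exposes the NodePort on every node, so any node IP works
    curl -I http://192.168.60.25:31000
    # Confirm the Service and its NodePort assignment
    kubectl get svc nginx-service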

    4.3 DNS

    Kubernetes provides the Service concept so that the workload behind a group of Pods can be reached through a VIP, but one question remains: how does an application learn the VIP of another application's Service? Suppose we have two applications, app and db, each managed by an RC and exposed through a Service. app needs to connect to db, but it only knows the db application's name, not its VIP.
    The simplest option is to query the Kubernetes API. That is a poor approach: every application would have to implement lookup logic for its dependencies at startup, which is repetitive and adds complexity, and it also makes the application depend on Kubernetes, so it can no longer be deployed and run on its own (this can be worked around with extra configuration options, but that again adds complexity).
    Initially Kubernetes took the approach Docker had used: environment variables. When a Pod starts, the IP and port of every Service are injected as environment variables, so the application inside the Pod can read them to discover its dependencies. The naming of these variables follows a fixed convention and they are simple to use, but there is a serious limitation: a dependency must already exist before the Pod starts, otherwise its variables will not be present.
    A better solution is for applications to use the Service name directly, without caring about the actual IP address, with the name-to-IP translation happening automatically. Translating names into IPs is exactly what DNS does, so Kubernetes also provides a DNS-based mechanism to solve this problem.
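    
    Once kube-dns (installed below) is running, this name-based discovery is easy to verify with a throwaway Pod (a minimal sketch; busybox:1.28 is chosen only because its nslookup copes well with the cluster search domains):
    
    kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
      nslookup nginx-service.default.svc.cluster.local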
    Installing Kube-DNS
    DNS YAML file download (extraction code: 8nzg)
    kube-dns download:
    https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in

    Download it manually and rename it (the sed commands below expect kube-dns.yaml)
    
    
    ##Newer versions add a lot of new content; if you are afraid of editing it incorrectly, just download my package. The file contains settings that must line up with the kubelet configuration, e.g. 10.254.0.2 and cluster.local
    
    ##Using the YAML I provide is recommended
    
    
    sed -i 's/$DNS_DOMAIN/cluster.local/gi' kube-dns.yaml
    sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kube-dns.yaml
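    
    These two values are not arbitrary: cluster.local must equal the --cluster-domain kubelet was started with, and 10.254.0.2 its --cluster-dns. A quick sanity check (a minimal sketch; adjust the paths to wherever your kubelet unit/config actually lives):
    
    grep -rE "cluster-dns|cluster-domain" /etc/systemd/system/ /etc/kubernetes/ 2>/dev/null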
    

    Import the images

    docker load -i kube-dns.tar
    
    ##Importing the images is optional; by default they will be pulled from wherever the YAML points. If you do use the imported images, make sure the YAML references exactly the same ones!
    

    Create the Pod

    kubectl create -f kube-dns.yaml
    
    #You need to modify the image addresses in the YAML so they match the locally imported images
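    
    A quick way to check that they line up (a minimal sketch; the grep pattern is just a guess at the usual kube-dns image names):
    
    # Images the manifest will pull vs. images that docker load imported
    grep "image:" kube-dns.yaml
    docker images | grep -E "kube-dns|dnsmasq|sidecar"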
    

    Check the Pods

    [root@master ~]# kubectl get pods --namespace=kube-system 
    NAME                                      READY     STATUS    RESTARTS   AGE
    calico-kube-controllers-b49d9b875-8bwz4   1/1       Running   0          3h
    calico-node-5vnsh                         2/2       Running   0          3h
    calico-node-d8gqr                         2/2       Running   0          3h
    kube-dns-864b8bdc77-swfw5                 3/3       Running   0          2h
    

    Verification

    #Create a group of Pods and a Service, then check that Pod networking inside the cluster works properly
    
    [root@master test]# cat demo.deploy.yml
    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: demo-deployment
    spec:
      replicas: 5
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: demo
            image: daocloud.io/library/tomcat:6.0-jre7
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080   # the Tomcat image listens on 8080 by default, not 80
    

    While we are at it, verify in-cluster and external connectivity
    11.jpg-963kB
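    
    The same verification from the command line (a minimal sketch; the Pod name is hypothetical, take a real one from the kubectl get pod output, and if the Tomcat image lacks curl/ping simply reuse a busybox Pod as in the DNS example above):
    
    kubectl get pod -o wide -l app=demo
    # In-cluster: reach the nginx Service from inside a demo Pod by name
    kubectl exec -it <demo-pod-name> -- curl -sI http://nginx-service
    # External: outbound connectivity from the Pod
    kubectl exec -it <demo-pod-name> -- ping -c 3 114.114.114.114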

    4.4 Deploying DNS Horizontal Autoscaling

    Download from GitHub
    GitHub: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns-horizontal-autoscaler
    What dns-horizontal-autoscaler-rbac.yaml does:
    It simply creates three resources (ServiceAccount, ClusterRole, ClusterRoleBinding): create the account, create the role with its permissions, and bind the account to the role.
    Import the image locally, otherwise pulling it is too slow

    ### Needed on both the node and the master
    
    [root@node ~]# docker load -i gcr.io_google_containers_cluster-proportional-autoscaler-amd64_1.1.2-r2.tar 
    3fb66f713c9f: Loading layer 4.221 MB/4.221 MB
    a6851b15f08c: Loading layer 45.68 MB/45.68 MB
    Loaded image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
    
    
    Check the image
    [root@master ~]# docker images|grep cluster
    gcr.io/google_containers/cluster-proportional-autoscaler-amd64   1.1.2-r2            7d892ca550df        13 months ago       49.6 MB
    

    Make sure the YAML references this image

    wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
    
    You also need to download the RBAC file, dns-horizontal-autoscaler-rbac.yaml; it is in the same release-1.8 dns-horizontal-autoscaler directory linked above.
    
     kubectl create -f dns-horizontal-autoscaler-rbac.yaml  
     kubectl create -f dns-horizontal-autoscaler.yaml 
    ## If you download them directly you will need to adjust the configuration
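    
    After applying, confirm that the autoscaler is up and has created its ConfigMap (a minimal sketch):
    
    kubectl get deployment,pod -n kube-system | grep autoscaler
    kubectl get configmap kube-dns-autoscaler -n kube-system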
    

    The autoscaler YAML file

    [root@master calico]# cat dns-horizontal-autoscaler.yaml
    # Copyright 2016 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:kube-dns-autoscaler
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["list"]
      - apiGroups: [""]
        resources: ["replicationcontrollers/scale"]
        verbs: ["get", "update"]
      - apiGroups: ["extensions"]
        resources: ["deployments/scale", "replicasets/scale"]
        verbs: ["get", "update"]
    # Remove the configmaps rule once below issue is fixed:
    # kubernetes-incubator/cluster-proportional-autoscaler#16
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get", "create"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:kube-dns-autoscaler
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
    subjects:
      - kind: ServiceAccount
        name: kube-dns-autoscaler
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: system:kube-dns-autoscaler
      apiGroup: rbac.authorization.k8s.io
    
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
      labels:
        k8s-app: kube-dns-autoscaler
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      template:
        metadata:
          labels:
            k8s-app: kube-dns-autoscaler
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ''
        spec:
          containers:
          - name: autoscaler
            image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
            resources:
                requests:
                    cpu: "20m"
                    memory: "10Mi"
            command:
              - /cluster-proportional-autoscaler
              - --namespace=kube-system
              - --configmap=kube-dns-autoscaler
              # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base
              - --target=Deployment/kube-dns
              # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
              # If using small nodes, "nodesPerReplica" should dominate.
              - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
              - --logtostderr=true
              - --v=2
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          serviceAccountName: kube-dns-autoscaler
    [root@master calico]#
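    
    The --default-params above drive the replica count: in "linear" mode, replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)), and preventSinglePointFailure keeps it at 2 or more once the cluster has more than one node. A tiny worked example (the 40-core / 10-node figures are hypothetical, purely for illustration):
    
    # linear formula only; the autoscaler additionally enforces >=2 replicas
    # when preventSinglePointFailure is true and the cluster has >1 node
    cores=40; nodes=10                      # hypothetical cluster size
    coresPerReplica=256; nodesPerReplica=16
    by_cores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))   # ceil(40/256) = 1
    by_nodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))   # ceil(10/16)  = 1
    echo "linear replicas: $(( by_cores > by_nodes ? by_cores : by_nodes ))"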
    

    Demo
    For details, see Autoscale the DNS Service in a Cluster in the Kubernetes documentation.
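    
    A quick way to watch the autoscaler react (a minimal sketch; it edits the ConfigMap the autoscaler manages, so put the original values back afterwards):
    
    # Lower nodesPerReplica so even this small cluster wants more kube-dns replicas
    kubectl edit configmap kube-dns-autoscaler -n kube-system
    #   linear: '{"nodesPerReplica":1,"preventSinglePointFailure":true}'
    kubectl get deployment kube-dns -n kube-system -w   # watch the replica count change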
