Kubernetes High-Availability Cluster (Multi-Master, Official v1.15)


1. Introduction
Kubernetes has been running in our production environment for nearly a year and is now stable. From building the system to migrating our projects onto it, we ran into quite a few problems along the way. Production uses multiple master nodes for high availability, with haproxy + keepalived load-balancing the masters. This post summarizes the build process so you can stand up your own k8s cluster quickly. Kubernetes iterates very quickly: when I built our production cluster, the latest official release was v1.11; the latest is now v1.15, so this article walks through the newest version.
2. What Is Kubernetes

Kubernetes is a container orchestration and scheduling engine open-sourced by Google based on its internal Borg system: an open platform for automated deployment, scaling, and operation of container clusters. It provides comprehensive cluster-management capabilities, including multi-layered security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in load balancer, fault detection and self-healing, rolling upgrades and online scaling, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with management tooling that covers development, deployment, testing, and operations monitoring. As one of the most important projects of the CNCF (Cloud Native Computing Foundation), its goal is not just orchestration: it provides a specification that lets you describe your cluster's architecture and define the desired end state of your services, then drives the system to reach and stay in that state automatically.

3. Kubernetes Architecture
In the overall architecture, services divide into those running on worker nodes and those forming the cluster-level control plane. Each Kubernetes node runs the services required to host application containers, all under the master's control. Every node runs Docker, which handles image downloads and the actual running of containers. Kubernetes is built from the following core components:

- etcd stores the state of the entire cluster;
- apiserver is the single entry point for resource operations and provides authentication, authorization, access control, and API registration and discovery;
- controller manager maintains cluster state: failure detection, auto-scaling, rolling updates, and so on;
- scheduler handles resource scheduling, placing Pods onto machines according to the configured scheduling policies;
- kubelet maintains container lifecycles and manages volumes (CVI) and networking (CNI);
- the container runtime manages images and actually runs Pods and containers (CRI);
- kube-proxy provides in-cluster service discovery and load balancing for Services.

Beyond the core components, several add-ons are recommended: kube-dns provides DNS for the cluster; an Ingress Controller exposes services externally; Heapster provides resource monitoring; Dashboard provides a GUI; Federation provides clusters spanning availability zones; Fluentd-elasticsearch provides cluster log collection, storage, and querying.
4. Setup Process

Now for the practical part: building the cluster.

4.1 Hardware

Each machine: 8 CPU cores and 16 GB of RAM (8C16G).
4.2 Environment

This article uses three masters and three worker nodes for the Kubernetes cluster, plus two machines running haproxy + keepalived to load-balance the masters. Keeping the masters highly available keeps the whole cluster highly available. The official requirements are at least 2 CPUs and 2 GB of RAM per machine, running Ubuntu 16.04 or later.
4.3 Setup

4.3.1 Basic settings

Edit the hosts file on all 8 machines:

```shell
root@haproxy1:~# cat /etc/hosts
192.168.10.1 haproxy1
192.168.10.2 haproxy2
192.168.10.3 master1
192.168.10.4 master2
192.168.10.5 master3
192.168.10.6 node1
192.168.10.7 node2
192.168.10.8 node3
192.168.10.10 kubernetes.haproxy.com
```

4.3.2 Setting up haproxy + keepalived

Build and install haproxy:

```shell
root@haproxy1:/data# wget https://github.com/haproxy/haproxy/archive/v2.0.0.tar.gz
root@haproxy1:/data# tar -xf v2.0.0.tar.gz
root@haproxy1:/data# cd haproxy-2.0.0/
root@haproxy1:/data/haproxy-2.0.0# make TARGET=linux-glibc
root@haproxy1:/data/haproxy-2.0.0# make install PREFIX=/data/haproxy
root@haproxy1:/data/haproxy# mkdir conf
root@haproxy1:/data/haproxy# vim conf/haproxy.cfg
```

conf/haproxy.cfg:

```
global
    log 127.0.0.1 local0 err
    maxconn 50000
    user haproxy
    group haproxy
    daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode tcp
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy Statistics
    stats auth will:will
    stats hide-version
    stats admin if TRUE

frontend k8s
    bind 0.0.0.0:8443
    mode tcp
    default_backend k8s

backend k8s
    mode tcp
    balance roundrobin
    server master1 192.168.10.3:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master2 192.168.10.4:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master3 192.168.10.5:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
```

Create the runtime user and the documentation file the systemd unit references:

```shell
root@haproxy1:/data/haproxy# id -u haproxy &> /dev/null || useradd -s /usr/sbin/nologin -r haproxy
root@haproxy1:/data/haproxy# mkdir /usr/share/doc/haproxy
root@haproxy1:/data/haproxy# wget -qO - https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt | gzip -c > /usr/share/doc/haproxy/configuration.txt.gz
root@haproxy1:/data/haproxy# vim /etc/default/haproxy
```

/etc/default/haproxy:

```
# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"
```
```shell
root@haproxy1:/data# vim /lib/systemd/system/haproxy.service
```

/lib/systemd/system/haproxy.service:

```
[Unit]
Description=HAProxy Load Balancer
Documentation=man:haproxy(1)
Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
After=network.target syslog.service
Wants=syslog.service

[Service]
Environment=CONFIG=/data/haproxy/conf/haproxy.cfg
EnvironmentFile=-/etc/default/haproxy
ExecStartPre=/data/haproxy/sbin/haproxy -f ${CONFIG} -c -q
ExecStart=/data/haproxy/sbin/haproxy -W -f ${CONFIG} -p /data/haproxy/conf/haproxy.pid $EXTRAOPTS
ExecReload=/data/haproxy/sbin/haproxy -c -f ${CONFIG}
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
Type=forking

[Install]
WantedBy=multi-user.target
```
```shell
root@haproxy1:/data/haproxy# systemctl daemon-reload
root@haproxy1:/data/haproxy# systemctl start haproxy
root@haproxy1:/data/haproxy# systemctl status haproxy
```
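The `check inter 2000 rise 2 fall 3` options in the k8s backend make HAProxy probe each apiserver by opening a TCP connection every 2 seconds, marking a backend up after 2 consecutive successes and down after 3 consecutive failures. A rough sketch of that probe in plain bash (the `tcp_check` helper is hypothetical, shown only to illustrate the mechanism):

```shell
#!/bin/bash
# Succeed only if a TCP connection to host:port opens within 2 seconds --
# essentially what HAProxy's "check" option does for the k8s backend.
tcp_check() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Probe master1's apiserver port the way HAProxy would:
if tcp_check 192.168.10.3 6443; then echo "master1 up"; else echo "master1 down"; fi
```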

Build and install keepalived:

```shell
root@haproxy1:/data# wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
root@haproxy1:/data# tar -xf keepalived-2.0.16.tar.gz
root@haproxy1:/data# cd keepalived-2.0.16/
root@haproxy1:/data/keepalived-2.0.16# ./configure --prefix=/data/keepalived
root@haproxy1:/data/keepalived-2.0.16# make && make install
root@haproxy1:/data/keepalived# mkdir conf
root@haproxy1:/data/keepalived# vim conf/keepalived.conf
```

conf/keepalived.conf:

```
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id haproxy1
}

vrrp_script chk_haproxy {
    # HAProxy health-check script
    script "/data/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.10.10/24
    }
}
```
```shell
root@haproxy1:/data/keepalived# vim /etc/default/keepalived
```

/etc/default/keepalived:

```
# Options to pass to keepalived

# DAEMON_ARGS are appended to the keepalived command-line
DAEMON_ARGS=""
```
```shell
root@haproxy1:/data/keepalived# vim /lib/systemd/system/keepalived.service
```

/lib/systemd/system/keepalived.service:

```
[Unit]
Description=Keepalive Daemon (LVS and VRRP)
After=network-online.target
Wants=network-online.target
# Only start if there is a configuration file
ConditionFileNotEmpty=/data/keepalived/conf/keepalived.conf

[Service]
Type=forking
KillMode=process
Environment=CONFIG=/data/keepalived/conf/keepalived.conf
# Read configuration variable file if it is present
EnvironmentFile=-/etc/default/keepalived
ExecStart=/data/keepalived/sbin/keepalived -f ${CONFIG} -p /data/keepalived/conf/keepalived.pid $DAEMON_ARGS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```
```shell
root@haproxy1:/data/keepalived# systemctl daemon-reload
root@haproxy1:/data/keepalived# systemctl start keepalived.service
root@haproxy1:/data/keepalived# vim /data/keepalived/check_haproxy.sh
```

/data/keepalived/check_haproxy.sh (make it executable with `chmod +x` so vrrp_script can run it):

```shell
#!/bin/bash
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ];then
    systemctl start haproxy.service
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        systemctl stop keepalived.service
    fi
fi
```

Install haproxy and keepalived on haproxy2 in the same way; in its keepalived.conf you would typically use `router_id haproxy2`, `state BACKUP`, and a lower `priority` (e.g. 90).
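The restart-then-fail-over logic of check_haproxy.sh can be summarized as a small decision function. This is an illustrative rewrite of the script's control flow only (the `decide` helper is not part of keepalived):

```shell
#!/bin/bash
# Decision taken by check_haproxy.sh, expressed as a pure function of the
# haproxy process count before and after the restart attempt.
decide() {
  local before=$1 after=$2
  if [ "$before" -ne 0 ]; then
    echo "noop"        # haproxy is healthy; keepalived keeps the VIP
  elif [ "$after" -ne 0 ]; then
    echo "recovered"   # restarting haproxy brought it back
  else
    echo "failover"    # stop keepalived so the VIP moves away
  fi
}

decide 1 1   # -> noop
decide 0 2   # -> recovered
decide 0 0   # -> failover
```

Stopping keepalived (the `failover` branch) releases the VIP 192.168.10.10, and the backup instance on haproxy2 takes it over.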
4.3.3 Building the Kubernetes cluster

Basic settings: disable swap on all 6 cluster machines.

```shell
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15727           8         190       15638
Swap:           979           0         979
root@master1:~# swapoff -a
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15726           8         191       15638
Swap:             0           0           0
```

Install Docker, also on all 6 machines:

```shell
# Allow apt to use repositories over HTTPS
root@master1:~# apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
root@master1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master1:~# apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid   Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22
```
```shell
# Add the Docker apt repository
root@master1:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
root@master1:~# apt-get update
root@master1:~# apt-get install -y docker-ce docker-ce-cli containerd.io
root@master1:~# docker --version
Docker version 18.09.6, build 481bc77
```
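A note on the swap step above: `swapoff -a` only lasts until the next reboot, and kubelet will refuse to start if swap comes back. To make the change permanent, the swap entry in /etc/fstab should be commented out as well. A dry-run sketch on a sample file (the /tmp path and sample contents are illustrative; apply the same sed to /etc/fstab on the real machines):

```shell
#!/bin/bash
# Sample fstab with one swap entry (illustrative contents).
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any uncommented line whose fields include "swap".
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```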

Install the Kubernetes components: kubeadm, kubelet, and kubectl on all 6 machines.

```shell
root@master1:~# apt-get update
root@master1:~# apt-get install -y apt-transport-https curl
root@master1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
root@master1:~# apt-get update
root@master1:~# apt-get install -y kubelet kubeadm kubectl
root@master1:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
```

Create the cluster. On the first control-plane node:

```shell
root@master1:~# vim kubeadm-config.yaml
```

kubeadm-config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "kubernetes.haproxy.com:8443"
networking:
  podSubnet: "10.244.0.0/16"
```
```shell
root@master1:~# kubeadm init --config=kubeadm-config.yaml --upload-certs
```
When init completes, it prints the `kubeadm join` commands (including the token, CA cert hash, and certificate key) used in the steps below. Then configure kubectl and install the network add-on:

```shell
root@master1:~# mkdir -p $HOME/.kube
root@master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
# Install the network add-on; flannel is used here
root@master1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
```

Watch the result:

```shell
root@master1:~# kubectl get pod -n kube-system -w
```
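One easy mistake at this step: the flannel manifest ships with `Network: 10.244.0.0/16` in its ConfigMap, and `podSubnet` in kubeadm-config.yaml must match it (or the manifest must be edited). A quick illustrative check that an address falls inside that /16 (the `in_slash16` helper is hypothetical, for illustration only):

```shell
#!/bin/bash
# True when the first two octets of ip match those of the /16 network.
in_slash16() {
  local net=$1 ip=$2
  [ "${net%.*.*}" = "${ip%.*.*}" ]
}

in_slash16 10.244.0.0 10.244.3.17 && echo "inside pod CIDR" || echo "outside"
```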
Join the other control-plane nodes. Back on v1.11 we had to write the full master configuration on each control-plane node and run a series of steps on every one of them to join the cluster; v1.15 supports joining directly with `kubeadm join`, which simplifies things considerably.

Control-plane node 2:

```shell
root@master2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5 --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master2:~# mkdir -p $HOME/.kube
root@master2:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the result:

```shell
root@master2:~# kubectl get nodes
```

Control-plane node 3:

```shell
root@master3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5 --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master3:~# mkdir -p $HOME/.kube
root@master3:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the result:

```shell
root@master3:~# kubectl get nodes
```

Add the worker nodes:

```shell
root@node1:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
```
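The join commands above embed a bootstrap token and a certificate key, both of which expire (24 hours and 2 hours by default, respectively). If you add nodes later, fresh values can be generated on any existing master; a sketch, to be run against the live cluster:

```shell
# Print a fresh worker join command (a new token is created as part of it):
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new --certificate-key
# for joining additional masters:
kubeadm init phase upload-certs --upload-certs
```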
The whole cluster is now built. Check the result from any master:

```shell
root@master1:~# kubectl get pods --all-namespaces
root@master1:~# kubectl get nodes
```
With that, the highly available cluster is complete.
5. References

- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin
- https://www.kubernetes.org.cn/docs
