Building a Highly Available Kubernetes Cluster with kubeadm: Core Components and Deploying v1.16

This article introduces the core Kubernetes components and walks through installing and deploying version 1.16. It should give you a first look at how the core components work and what Kubernetes offers; if you are interested, follow along and deploy it yourself.


What is Kubernetes

  1. Kubernetes is a container cluster management system open-sourced by Google: a large-scale container orchestration engine that adds one more abstraction layer on top of the containers themselves
  2. It supports automated deployment, large-scale scalability, and containerized application management
  3. Deploying applications on Kubernetes is straightforward: it offers elastic scaling and horizontal scale-out, provides load balancing, and heals itself well (automatic deployment, automatic restarts, automatic replication, automatic scaling, and so on)

Its main features include:

  • Container-based application deployment, maintenance, and rolling upgrades
  • Load balancing and service discovery
  • Cluster scheduling across machines and regions
  • Auto-scaling
  • Stateless and stateful services
  • A plugin mechanism that keeps it extensible

Kubernetes characteristics:

  • Portable: supports public, private, hybrid, and multi-cloud
  • Extensible: modular, pluggable, mountable, composable
  • Automated: automatic deployment, automatic restarts, automatic replication, automatic scaling

Kubernetes core components:

1. Master components

  • kube-apiserver: the single entry point for all resource operations; every resource request/call goes through it, and it provides authentication, authorization, access control, and API registration/discovery
  • kube-controller-manager: the cluster controller; maintains the cluster state, e.g. failure detection, auto-scaling, rolling updates
  • kube-scheduler: schedules resources; places pods onto machines according to the configured scheduling policy, i.e. picks a node for each pod
  • etcd: a distributed key/value (k/v) store that holds the entire cluster state
  • CoreDNS: an add-on that provides cluster DNS, implementing service registration and discovery by serving DNS records for services

2. Node components

  • kubelet: manages the container lifecycle on its node, as well as volumes (CSI) and networking (CNI)
  • kube-proxy: provides in-cluster service discovery and load balancing for services (it materializes backend pod access rules as iptables/ipvs rules on the node)
  • container runtime (e.g. docker): pulls images and actually runs pods and containers (CRI)

1. Deployment environment

This article builds a highly available Kubernetes cluster with kubeadm, which takes most of the pain out of bootstrapping. High availability here covers the master-node components and the etcd store. The server IPs and roles used throughout are listed in the hosts file in section 2.2.

Version: v1.16.3

2. Cluster architecture and preparation

2.1 Cluster architecture

High availability centers on the master components and etcd. The apiserver is the cluster entry point, so three masters sit behind a single VIP managed by keepalived, with haproxy in front of the apiservers as a reverse proxy: every request arriving at haproxy is round-robined across the backend masters. With keepalived alone, all traffic would still land on whichever master currently holds the VIP; adding haproxy lets every master share the load and makes the cluster more robust. The corresponding architecture diagram (not reproduced here) shows this layout.

2.2 Modify hostname and hosts

On all nodes, set the hostname (a sketch follows the list) and put the following entries in the hosts file:

172.30.66.222    master.k8s.io   k8s-vip
172.30.66.190    master01.k8s.io k8s-master-01
172.30.66.191    master02.k8s.io k8s-master-02
172.30.66.192    master03.k8s.io k8s-master-03
172.30.66.193    node01.k8s.io   k8s-node-01
172.30.66.194    node02.k8s.io   k8s-node-02
172.30.66.195    node03.k8s.io   k8s-node-03
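A minimal sketch of the matching hostname step, using the names from the hosts file above:

# e.g. on the first master
hostnamectl set-hostname k8s-master-01
# repeat with the matching name on each of the other nodes,
# then copy the seven mappings above into /etc/hosts on every node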

2.3 Other preparation

Run on all nodes.

· Time synchronization: use chrony or ntp; not covered in detail here (see the sketch after the swap commands below)

· Disable the firewall: stop CentOS 7's built-in firewalld service (see the same sketch)

· Disable selinux (see the same sketch)

· Disable swap: kubeadm checks whether swap is enabled on the host and the installation fails if it is, so all swap must be turned off.

# turn swap off for the running system
# swapoff -a && sysctl -w vm.swappiness=0
# turn it off permanently by commenting out the swap line
# vim /etc/fstab
...
UUID=7bf41652-e6e9-415c-8dd9-e112641b220e /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
# or do it in one shot with sed
# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
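For reference, a hedged sketch of the time-sync, firewall, and selinux steps listed above, assuming a stock CentOS 7 host:

# time sync via chrony
yum install -y chrony
systemctl start chronyd && systemctl enable chronyd
# stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld
# disable selinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config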

· Set other system parameters

Enable IP forwarding:

vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Raise the resource limits:

# echo "* soft nofile 65536" >> /etc/security/limits.conf
# echo "* hard nofile 65536" >> /etc/security/limits.conf
# echo "* soft nproc 65536"  >> /etc/security/limits.conf
# echo "* hard nproc 65536"  >> /etc/security/limits.conf
# echo "* soft memlock unlimited"  >> /etc/security/limits.conf
# echo "* hard memlock unlimited"  >> /etc/security/limits.conf

· Install required packages

# yum install -y conntrack-tools libseccomp libtool-ltdl

3. Deploy keepalived

Run on all three masters.

3.1 Install

# yum install -y keepalived

3.2 Configure

The default keepalived config is fairly involved; the configuration below is a more concise version. The other two masters are configured almost identically: only the state (BACKUP instead of MASTER) and the priority value differ. The remaining fields are not explained here.
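One hedged prerequisite for the config below: the vrrp_script health check calls killall, which on a minimal CentOS 7 install comes from the psmisc package:

yum install -y psmisc   # provides the killall used by check_haproxy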

Configuration on k8s-master-01:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        172.30.66.222
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on k8s-master-02:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        172.30.66.222
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on k8s-master-03:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        172.30.66.222
    }
    track_script {
        check_haproxy
    }
}
EOF

3.3 Start and verify

Start the service on all three masters:

# enable at boot
# systemctl enable keepalived.service
# start keepalived
# systemctl start keepalived.service
# check the status
# systemctl status keepalived.service

After starting, check the NIC on k8s-master-01:

[root@k8s-master-01 ~]# ip a s ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:b7:2c:71 brd ff:ff:ff:ff:ff:ff
inet 172.30.66.190/24 brd 172.30.66.255 scope global ens160
valid_lft forever preferred_lft forever
inet 172.30.66.222/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::923a:1078:ee79:b965/64 scope link 
valid_lft forever preferred_lft forever

Try stopping keepalived on k8s-master-01 and check that the VIP fails over to another master; then start keepalived on k8s-master-01 again and check that the VIP floats back. If both work, the configuration is correct.
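A sketch of that failover test, using the interface and VIP configured above:

# on k8s-master-01: stop keepalived and confirm the VIP is released
systemctl stop keepalived.service
ip a s ens160 | grep 172.30.66.222
# on k8s-master-02 (next-highest priority): the VIP should now be present
ip a s ens160 | grep 172.30.66.222
# back on k8s-master-01: restart keepalived; with priority 250 it should reclaim the VIP
systemctl start keepalived.service
ip a s ens160 | grep 172.30.66.222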

4. Deploy haproxy

Run on all three masters.

4.1 Install

# yum install -y haproxy

4.2 Configure

All three masters use the identical configuration. It declares the three backend master servers and binds haproxy to port 16443, so port 16443 becomes the cluster entry point. The rest of the file is standard and not discussed further.

cat > /etc/haproxy/haproxy.cfg <<EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master01.k8s.io   172.30.66.190:6443 check
    server      master02.k8s.io   172.30.66.191:6443 check
    server      master03.k8s.io   172.30.66.192:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
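Before starting the service, the file can be syntax-checked with haproxy's standard check flag:

haproxy -c -f /etc/haproxy/haproxy.cfg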

4.3 Start and verify

Start the service on all three masters:

# enable at boot
# systemctl enable haproxy
# start haproxy
# systemctl start haproxy
# check the status
# systemctl status haproxy

Check the listening ports:

[root@k8s-master-01 ~]# netstat -lntup|grep haproxy
tcp        0      0 0.0.0.0:1080            0.0.0.0:*               LISTEN      7067/haproxy        
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      7067/haproxy        
udp        0      0 0.0.0.0:47041           0.0.0.0:*                           7066/haproxy 

5. Install docker

Run on all nodes; install via yum.

5.1 Install

# Step 1: install required system tools
# yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repo
# sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: list the available Docker CE versions
# yum list docker-ce.x86_64 --showduplicates | sort -r
# Step 4: install a specific Docker CE version
# yum makecache fast
# yum install -y docker-ce-18.09.9

5.2 Configure

Edit docker's config file. Kubernetes currently recommends the systemd cgroup driver for docker; the official Kubernetes docs describe this setting:

# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Edit the docker service unit to point docker's data directory at the mounted disk via --graph /data/docker:

# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --graph /data/docker

5.3 Start

Start the docker service:

# systemctl daemon-reload
# systemctl start docker.service
# systemctl enable docker.service
# systemctl status docker.service

Check the docker info:

# docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.39 (downgraded from 1.40)
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
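Since daemon.json switched the cgroup driver, a quick hedged verification that the setting took effect:

# docker info | grep -i "cgroup driver"
Cgroup Driver: systemd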

6. Install kubeadm, kubelet, and kubectl

Run on all nodes.

6.1 Add the Aliyun Kubernetes yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

6.2 Install

# yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
# systemctl enable kubelet

Configure kubectl auto-completion:

[root@k8s-master-01 ~]# source <(kubectl completion bash)
[root@k8s-master-01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

7. Install the first master

Run on the master that currently holds the VIP, here k8s-master-01.

7.1 Create the kubeadm config file

[root@k8s-master-01 ~]# mkdir -p /usr/local/kubernetes/manifests
[root@k8s-master-01 ~]# cd /usr/local/kubernetes/manifests/
[root@k8s-master-01 manifests]# vim kubeadm-config.yaml
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - k8s-master-03
    - master.k8s.io
    - 172.30.66.222
    - 172.30.66.190
    - 172.30.66.191
    - 172.30.66.192
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
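Optionally, the images can be pre-pulled with the same config file before running init (the init output below also suggests this):

[root@k8s-master-01 manifests]# kubeadm config images pull --config kubeadm-config.yaml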

7.2 Initialize the master node

[root@k8s-master-01 manifests]# kubeadm init --config kubeadm-config.yaml 
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.k8s.io k8s-master-01 k8s-master-02 k8s-master-03 master.k8s.io] and IPs [10.1.0.1 172.30.66.190 172.30.66.222 172.30.66.190 172.30.66.191 172.30.66.192 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [172.30.66.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [172.30.66.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.505682 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jv5z7n.3y1zi95p952y9p65
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join master.k8s.io:16443 --token jv5z7n.3y1zi95p952y9p65 \
    --discovery-token-ca-cert-hash sha256:403bca185c2f3a4791685013499e7ce58f9848e2213e27194b75a2e3293d8812 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master.k8s.io:16443 --token jv5z7n.3y1zi95p952y9p65 \
    --discovery-token-ca-cert-hash sha256:403bca185c2f3a4791685013499e7ce58f9848e2213e27194b75a2e3293d8812

7.3 Configure the environment as the init output instructs

[root@k8s-master-01 manifests]# mkdir -p $HOME/.kube
[root@k8s-master-01 manifests]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 manifests]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

7.4 Check the cluster status

[root@k8s-master-01 manifests]# kubectl get cs
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>
[root@k8s-master-01 manifests]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-56n7g                0/1     Pending   0          87s
coredns-58cc8c89f4-zclz7                0/1     Pending   0          87s
etcd-k8s-master-01                      1/1     Running   0          18s
kube-apiserver-k8s-master-01            1/1     Running   0          21s
kube-controller-manager-k8s-master-01   1/1     Running   0          33s
kube-proxy-ptjjn                        1/1     Running   0          87s
kube-scheduler-k8s-master-01            1/1     Running   0          25s

kubectl get cs printing <unknown> is a known bug in the 1.16 table output that should be fixed in a later release. The coredns pods are Pending simply because no network add-on has been installed yet.
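As a hedged workaround, requesting the raw objects bypasses the broken table printer and still shows the real health conditions:

[root@k8s-master-01 manifests]# kubectl get cs -o yaml | grep -B2 'type: Healthy'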

8. Install the cluster network

Run on a master node.

8.1 Fetch the yaml

Download the flannel yaml from the official repo:

[root@k8s-master-01 manifests]# mkdir flannel
[root@k8s-master-01 manifests]# cd flannel
[root@k8s-master-01 flannel]# wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure the pod subnet in the yaml matches the one given to kubeadm init earlier. If the image in the yaml cannot be pulled, the Azure China mirror can stand in for it, e.g.:

quay.io/coreos/flannel:v0.11.0-amd64        # original
quay.azk8s.cn/coreos/flannel:v0.11.0-amd64  # replacement
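A hedged one-liner to swap in that mirror across the manifest:

[root@k8s-master-01 flannel]# sed -i 's#quay.io/coreos/flannel#quay.azk8s.cn/coreos/flannel#g' kube-flannel.yml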

8.2 Install

[root@k8s-master-01 flannel]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

8.3 Check

[root@k8s-master-01 flannel]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-56n7g                1/1     Running   0          20m
coredns-58cc8c89f4-zclz7                1/1     Running   0          20m
etcd-k8s-master-01                      1/1     Running   0          19m
kube-apiserver-k8s-master-01            1/1     Running   0          19m
kube-controller-manager-k8s-master-01   1/1     Running   0          19m
kube-flannel-ds-amd64-8d8bc             1/1     Running   0          51s
kube-proxy-ptjjn                        1/1     Running   0          20m
kube-scheduler-k8s-master-01            1/1     Running   0          19m

9. Join the remaining nodes

9.1 Masters join the cluster

9.1.1 Copy certificates and related files

On the machine where init was first run (k8s-master-01 here), copy the files to k8s-master-02:

[root@k8s-master-01 ~]# ssh root@172.30.66.191 mkdir -p /etc/kubernetes/pki/etcd
[root@k8s-master-01 ~]# scp /etc/kubernetes/admin.conf root@172.30.66.191:/etc/kubernetes
admin.conf                                                100% 5454   465.7KB/s   00:00    
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@172.30.66.191:/etc/kubernetes/pki
ca.crt                                                    100% 1025    89.2KB/s   00:00    
ca.key                                                    100% 1675   212.1KB/s   00:00    
sa.key                                                    100% 1679   210.1KB/s   00:00    
sa.pub                                                    100%  451    56.5KB/s   00:00    
front-proxy-ca.crt                                        100% 1038   131.9KB/s   00:00    
front-proxy-ca.key                                        100% 1679   208.3KB/s   00:00    
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@172.30.66.191:/etc/kubernetes/pki/etcd
ca.crt                                                    100% 1017   138.8KB/s   00:00    
ca.key

Copy the files to k8s-master-03:

[root@k8s-master-01 ~]# ssh root@172.30.66.192 mkdir -p /etc/kubernetes/pki/etcd
[root@k8s-master-01 ~]# scp /etc/kubernetes/admin.conf root@172.30.66.192:/etc/kubernetes
admin.conf                                                100% 5454   824.2KB/s   00:00    
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@172.30.66.192:/etc/kubernetes/pki
ca.crt                                                    100% 1025   144.6KB/s   00:00    
ca.key                                                    100% 1675   218.0KB/s   00:00    
sa.key                                                    100% 1679   245.7KB/s   00:00    
sa.pub                                                    100%  451    57.3KB/s   00:00    
front-proxy-ca.crt                                        100% 1038   132.6KB/s   00:00    
front-proxy-ca.key                                        100% 1679   213.4KB/s   00:00    
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@172.30.66.192:/etc/kubernetes/pki/etcd
ca.crt                                                    100% 1017    55.0KB/s   00:00    
ca.key

9.1.2 Join the masters

On each of the other two masters, run the join command printed by init on k8s-master-01. If it has been lost, it can be regenerated on master01:

[root@k8s-master-01 ~]# kubeadm token create --print-join-command
kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba

On k8s-master-02, run the join command with the extra --control-plane flag, which joins the node as a control-plane member:

[root@k8s-master-02 ~]# kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.k8s.io k8s-master-01 k8s-master-02 k8s-master-03 master.k8s.io] and IPs [10.1.0.1 172.30.66.191 172.30.66.222 172.30.66.190 172.30.66.191 172.30.66.192 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [172.30.66.191 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [172.30.66.191 127.0.0.1 ::1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2022-08-27T13:33:59.913+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.30.66.191:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master-02 ~]# mkdir -p $HOME/.kube
[root@k8s-master-02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Likewise, run the join command on k8s-master-03; the output and the follow-up steps are the same as above:

[root@k8s-master-03 ~]# kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
[root@k8s-master-03 ~]# mkdir -p $HOME/.kube
[root@k8s-master-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

9.1.3 Check

On any one of the masters, check the cluster and pod status:

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   36m     v1.16.3
k8s-master-02   Ready    master   3m20s   v1.16.3
k8s-master-03   Ready    master   21s     v1.16.3
[root@k8s-master-01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-56n7g                1/1     Running   0          36m
kube-system   coredns-58cc8c89f4-zclz7                1/1     Running   0          36m
kube-system   etcd-k8s-master-01                      1/1     Running   0          35m
kube-system   etcd-k8s-master-02                      1/1     Running   0          3m55s
kube-system   etcd-k8s-master-03                      1/1     Running   0          56s
kube-system   kube-apiserver-k8s-master-01            1/1     Running   0          35m
kube-system   kube-apiserver-k8s-master-02            1/1     Running   0          3m55s
kube-system   kube-apiserver-k8s-master-03            1/1     Running   0          57s
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   1          35m
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   0          3m55s
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   0          57s
kube-system   kube-flannel-ds-amd64-7hnhl             1/1     Running   1          3m56s
kube-system   kube-flannel-ds-amd64-8d8bc             1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-fp2rb             1/1     Running   0          57s
kube-system   kube-proxy-gzswt                        1/1     Running   0          3m56s
kube-system   kube-proxy-hdrq7                        1/1     Running   0          57s
kube-system   kube-proxy-ptjjn                        1/1     Running   0          36m
kube-system   kube-scheduler-k8s-master-01            1/1     Running   1          35m
kube-system   kube-scheduler-k8s-master-02            1/1     Running   0          3m55s
kube-system   kube-scheduler-k8s-master-03            1/1     Running   0          57s

9.2 Nodes join the cluster

9.2.1 Join the worker nodes

Run the join command on each of the three worker nodes. On k8s-node-01:

[root@k8s-node-01 ~]# kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Likewise, on the other two nodes:

[root@k8s-node-02 ~]# kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba
[root@k8s-node-03 ~]# kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b     --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba

9.2.2 Check

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES    AGE    VERSION
k8s-master-01   Ready    master   42m    v1.16.3
k8s-master-02   Ready    master   9m3s   v1.16.3
k8s-master-03   Ready    master   6m4s   v1.16.3
k8s-node-01     Ready    <none>   31s    v1.16.3
k8s-node-02     Ready    <none>   28s    v1.16.3
k8s-node-03     Ready    <none>   38s    v1.16.3
[root@k8s-master-01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-56n7g                1/1     Running   0          41m
kube-system   coredns-58cc8c89f4-zclz7                1/1     Running   0          41m
kube-system   etcd-k8s-master-01                      1/1     Running   0          40m
kube-system   etcd-k8s-master-02                      1/1     Running   0          9m4s
kube-system   etcd-k8s-master-03                      1/1     Running   0          6m5s
kube-system   kube-apiserver-k8s-master-01            1/1     Running   0          40m
kube-system   kube-apiserver-k8s-master-02            1/1     Running   0          9m4s
kube-system   kube-apiserver-k8s-master-03            1/1     Running   0          6m6s
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   1          40m
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   0          9m4s
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   0          6m6s
kube-system   kube-flannel-ds-amd64-7hnhl             1/1     Running   1          9m5s
kube-system   kube-flannel-ds-amd64-8d8bc             1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-bwwlx             1/1     Running   0          33s
kube-system   kube-flannel-ds-amd64-fp2rb             1/1     Running   0          6m6s
kube-system   kube-flannel-ds-amd64-g9vdj             1/1     Running   0          40s
kube-system   kube-flannel-ds-amd64-xcbfr             1/1     Running   0          30s
kube-system   kube-proxy-485dl                        1/1     Running   0          30s
kube-system   kube-proxy-8p688                        1/1     Running   0          40s
kube-system   kube-proxy-fdq7c                        1/1     Running   0          33s
kube-system   kube-proxy-gzswt                        1/1     Running   0          9m5s
kube-system   kube-proxy-hdrq7                        1/1     Running   0          6m6s
kube-system   kube-proxy-ptjjn                        1/1     Running   0          41m
kube-system   kube-scheduler-k8s-master-01            1/1     Running   1          40m
kube-system   kube-scheduler-k8s-master-02            1/1     Running   0          9m4s
kube-system   kube-scheduler-k8s-master-03            1/1     Running   0          6m6s

9.3 Growing the cluster later

By default the join token expires after 24 hours. If a new node needs to join after that, generate a new token:

# list the existing tokens
# kubeadm token list
# generate a new token
# kubeadm token create

Besides the token, the join command needs the sha256 hash of the CA certificate, computed as follows:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Assemble the join command from the token and hash output above, or simply use kubeadm token create --print-join-command.
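Put together from those two values, the command has the same shape as the one printed by init (the token and hash below are placeholders):

kubeadm join master.k8s.io:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
# append --control-plane when the joining node is a master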

10. Scaling the cluster down

On a master node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

On the node being removed:

kubeadm reset
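kubeadm reset does not undo everything; a commonly used hedged cleanup sketch for the node afterwards (assumes the default CNI config path, and ipvsadm only if kube-proxy ran in ipvs mode):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
rm -rf /etc/cni/net.d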

11. Install the dashboard

11.1 Deploy the dashboard

Project: https://github.com/kubernetes/dashboard  Docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/  Deploy the latest version, v2.0.0-beta6. Download the yaml:

[root@k8s-master-01 manifests]# cd /usr/local/kubernetes/manifests/
[root@k8s-master-01 manifests]# mkdir dashboard
[root@k8s-master-01 manifests]# cd dashboard/
[root@k8s-master-01 dashboard]# wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
# change the service type to NodePort
[root@k8s-master-01 dashboard]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
...
[root@k8s-master-01 dashboard]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master-01 dashboard]# kubectl get pods -n kubernetes-dashboard 
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-62vp9   1/1     Running   0          6m47s
kubernetes-dashboard-b65488c4-5t57x          1/1     Running   0          6m48s
[root@k8s-master-01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.207.27    <none>        8000/TCP        7m6s
kubernetes-dashboard        NodePort    10.1.207.168   <none>        443:30001/TCP   7m7s
# from a node, verify that https://<nodeip>:30001 is reachable

11.2 Create a service account and bind the built-in cluster-admin role

[root@k8s-master-01 dashboard]# vim dashboard-adminuser.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
[root@k8s-master-01 dashboard]# kubectl apply -f dashboard-adminuser.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the token:

[root@k8s-master-01 dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-hb5vs
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d699cd10-82cb-48ac-af7e-e8eea540b46e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ing5T2gwbFR2Wk56SG9rR2xVck5BOFhVRnRWVE0wdHhSdndyOXZ3Uk5vYkUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWhiNXZzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNjk5Y2QxMC04MmNiLTQ4YWMtYWY3ZS1lOGVlYTU0MGI0NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.OkhaAJ5wLhQA2oR8wNIvEW9UYYtwEOuGQIMa281f42SD5UrJzHBxk1_YeNbTQFKMJHcgeRpLxCy7PyZotLq7S_x_lhrVtg82MPbagu3ofDjlXLKc3pU9R9DqCHyid1rGXA94muNJRRWuI4Vq4DaPEnZ0xjfkep4AVPiOjFTlHXuBa68qRc-XK4dhs95BozVIHwir1W2CWhlNdfgTEY2QYJX0N1WqBQu_UWi3ay3NDLQR6pn1OcsG4xCemHjjsMmrKElZthAAc3r1aUQdCV7YNpSBajCPSSyfbMiU3mOjy1xLipEijFditif3HGXpKyYLkbuOY4dYtZHocWK7bfgGDQ

11.3 Log in to the dashboard with the token
