一. Deployment
The rc and rs controllers both manage the number of pod replicas, but they share a drawback: to ship a new pod version or roll back, you have to apply the manifest and then delete the existing pods so that the controller re-pulls new ones; that is the only way they can perform an upgrade or a rollback.
The Deployment resource is the controller designed specifically for deploying application code; in practice it is what enterprises use to upgrade and roll back their services.
A Deployment does not control pods directly: it controls rs (ReplicaSet) controllers. If rs controls the number of pod replicas, then Deployment is the resource that controls the rs controllers themselves.
In short: with a Deployment you never delete pods by hand, whereas with rc and rs you do.
1. A simple Deployment example
·Deployment manifest
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      demoo0: demoo0
  template:
    metadata:
      name: pod001
      labels:
        demoo0: demoo0
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v1
        ports:
        - containerPort: 80
·Create and inspect the resources
[root@master 0721]# kubectl apply -f dp.yaml
deployment.apps/dp-demo created
[root@master 0721]# kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
dp-demo   3/3     3            3           18s
[root@master 0721]# kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
dp-demo-988687d45   3         3         3       25s
[root@master 0721]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
dp-demo-988687d45-2b9d2   1/1     Running   0          38s   10.100.2.80   worker2   <none>           <none>
dp-demo-988687d45-vp7xn   1/1     Running   0          38s   10.100.2.79   worker2   <none>           <none>
dp-demo-988687d45-xg4fl   1/1     Running   0          38s   10.100.1.58   worker1   <none>           <none>
·When listing pods with their labels, note that each pod has gained an extra label
[root@master 0721]# kubectl get pods --show-labels
NAME                      READY   STATUS    RESTARTS   AGE   LABELS
dp-demo-988687d45-2b9d2   1/1     Running   0          61s   demoo0=demoo0,pod-template-hash=988687d45
dp-demo-988687d45-vp7xn   1/1     Running   0          61s   demoo0=demoo0,pod-template-hash=988687d45
dp-demo-988687d45-xg4fl   1/1     Running   0          61s   demoo0=demoo0,pod-template-hash=988687d45
Notes:
Deployment: a resource used to deploy services; it is the workload controller most commonly used in enterprises.
Features:
1. Manages rs resources, and manages pods through those rs resources.
2. Supports rollout, replica scaling, rolling updates, and rollbacks.
3. Provides declarative updates: changes such as a new image version are rolled out simply by running kubectl apply again.
Use case: iterating on applications deployed in production.
How it works:
Everything is driven by labels. The Deployment controls its rs resources by automatically generating a dedicated label (pod-template-hash, owned by the Deployment) on every rs it creates. When an updated manifest is applied, the Deployment uses that label to decide whether to reuse a historical rs or create a new one. The command below shows where this label can be seen.
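A quick way to see this label yourself (a hedged sketch, not part of the lab output above; it assumes the dp-demo Deployment created above):
kubectl get rs --show-labels    # every rs carries pod-template-hash=<hash>, matching the suffix in its pods' names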
2. Deployment upgrade and rollback
·Version v1
1. Write the Deployment manifest
ps: the manifest created above can be reused directly
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      demoo0: demoo0
  template:
    metadata:
      name: pod001
      labels:
        demoo0: demoo0
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v1
        ports:
        - containerPort: 80
2. Create and inspect
[root@master 0721]# kubectl apply -f dp.yaml
deployment.apps/dp-demo created
[root@master 0721]# kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
dp-demo   3/3     3            3           18s
[root@master 0721]# kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
dp-demo-988687d45   3         3         3       25s
[root@master 0721]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
dp-demo-988687d45-2b9d2   1/1     Running   0          38s   10.100.2.80   worker2   <none>           <none>
dp-demo-988687d45-vp7xn   1/1     Running   0          38s   10.100.2.79   worker2   <none>           <none>
dp-demo-988687d45-xg4fl   1/1     Running   0          38s   10.100.1.58   worker1   <none>           <none>
3. Create a Service resource for access
·Write the Service manifest
[root@master 0721]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc001
spec:
  type: NodePort
  selector:
    demoo0: demoo0
  clusterIP: 10.200.200.200
  ports:
  - port: 99
    targetPort: 80
    nodePort: 30002
·Create and inspect
[root@master 0721]# kubectl apply -f svc.yaml
service/svc001 created
[root@master 0721]# kubectl describe svc svc001
Name:                     svc001
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 demoo0=demoo0
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.200.200.200
IPs:                      10.200.200.200
Port:                     <unset>  99/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30002/TCP
Endpoints:                10.100.1.58:80,10.100.2.79:80,10.100.2.80:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@master 0721]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
dp-demo-988687d45-2b9d2   1/1     Running   0          22m   10.100.2.80   worker2   <none>           <none>
dp-demo-988687d45-vp7xn   1/1     Running   0          22m   10.100.2.79   worker2   <none>           <none>
dp-demo-988687d45-xg4fl   1/1     Running   0          22m   10.100.1.58   worker1   <none>           <none>
4. Test access from a browser
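The NodePort can also be checked from the command line instead of a browser (a sketch; replace <node-ip> with the IP of any worker node):
curl http://<node-ip>:30002    # should return the page served by the nginx:v1 pods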
·Version v2
1. Change the pod image in the Deployment manifest to v2
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      demoo0: demoo0
  template:
    metadata:
      name: pod001
      labels:
        demoo0: demoo0
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v2
        ports:
        - containerPort: 80
2. Re-apply the Deployment resource
[root@master 0721]# kubectl apply -f dp.yaml
deployment.apps/dp-demo configured
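Optionally, the progress of the update can be followed while it happens (a sketch, assuming the dp-demo Deployment above):
kubectl rollout status deployment/dp-demo    # blocks until the new ReplicaSet is fully rolled out
kubectl get rs                               # shows the old and the new rs side by side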
3. Test access from a browser
Note:
With a Deployment there is no need to delete the existing pods: re-applying the updated manifest is enough to roll out a new release, which is a clear advantage over the rc and rs resources.
After the apply, the Deployment has created a new rs resource; in other words, during the upgrade two rs resources exist side by side (the old one is kept, scaled down to 0, so that a rollback is possible).
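Rolling back relies on exactly those retained rs resources. A minimal sketch (the revision numbers depend on your own rollout history):
kubectl rollout history deployment/dp-demo                # list the recorded revisions
kubectl rollout undo deployment/dp-demo                   # go back to the previous revision
kubectl rollout undo deployment/dp-demo --to-revision=1   # or jump to a specific revision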
3. Configuring the upgrade strategy
Controlling how the update proceeds
Which upgrade strategy Kubernetes (k8s) uses depends on how you want to manage the update. Some common release strategies are:
Rolling update: pods are replaced a few at a time, by rolling-updating the Deployment.
Blue/green deployment: deploy a full set of instances of the new version, then switch traffic over to it.
Canary deployment: deploy a small portion of the new version, watch the feedback, then gradually increase the number of new-version instances.
Strategy types on a Deployment:
Option 1, Recreate: stop all existing pods first, then create the new pods in one batch; not recommended in production, because users cannot reach the service during the switch. (A minimal sketch follows.)
Option 2, RollingUpdate: replace the old pods gradually, part by part; this is the default strategy, and the full manifest below tunes its parameters.
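For reference, this is roughly what declaring the Recreate strategy looks like inside a Deployment spec (a sketch; it is not used in this lab):
spec:
  strategy:
    type: Recreate    # all old pods are terminated before any new pod is created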
·Manifest with the upgrade strategy
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo
spec:
  #declare the upgrade strategy
  strategy:
    #set the strategy type
    type: RollingUpdate
    #when the type is RollingUpdate, its update parameters can also be set
    rollingUpdate:
      #how many extra pods may be started on top of the desired replica count
      #(i.e. during the update up to replicas + 2 pods, old and new mixed, may exist at the same time)
      maxSurge: 2
      #maximum number of pods that may be unavailable during the upgrade
      #(i.e. at least replicas - 1 pods remain reachable)
      maxUnavailable: 1
  replicas: 5
  selector:
    matchLabels:
      demoo0: demoo0
  template:
    metadata:
      name: pod001
      labels:
        demoo0: demoo0
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v1
        ports:
        - containerPort: 80
·Apply the resource with the new strategy
[root@master 0721]# kubectl apply -f dp.yaml
deployment.apps/dp-demo configured
[root@master 0721]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
dp-demo-6875bfb8b8   1         1         1       42m
dp-demo-988687d45    5         5         3       69m
[root@master 0721]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
dp-demo-6875bfb8b8   0         0         0       42m
dp-demo-988687d45    5         5         4       69m
[root@master 0721]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
dp-demo-6875bfb8b8   0         0         0       42m
dp-demo-988687d45    5         5         5       69m
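To watch the maxSurge / maxUnavailable behaviour while the apply is running, the following can be used in a second terminal (a sketch):
kubectl get pods -w                                   # watch new pods appear before old ones are removed
kubectl describe deploy dp-demo | grep -i strategy    # confirm the RollingUpdate parameters took effect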
4. Blue/green release
A blue/green release keeps two complete environments. The old version is never stopped (users of the current version are not affected); instead, the new version is deployed into a second environment and tested there, and once testing passes, user traffic is switched over to the new version. The service sees no interruption, so the upgrade risk is comparatively small.
How it is done:
1. Deploy the current version of the code.
2. Deploy the svc resource.
3. Deploy the new version under a new Deployment name with new labels.
4. Switch the svc selector to the new pods' labels to cut the traffic over.
·Blue environment - v1
1. Write the manifests
deployment
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      demoo0: demoo0
  template:
    metadata:
      name: pod001
      labels:
        demoo0: demoo0
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v1
        ports:
        - containerPort: 80
svc
apiVersion: v1
kind: Service
metadata:
  name: svc001
spec:
  type: NodePort
  selector:
    demoo0: demoo0
  clusterIP: 10.200.200.200
  ports:
  - port: 99
    targetPort: 80
    nodePort: 30002
2. Create the resources
[root@master 0721]# kubectl apply -f dp.yaml
[root@master 0721]# kubectl apply -f svc.yaml
3. Test access from a browser
·Green environment - v2
1. Write the manifest
[root@master 0721]# cat dp-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo02
spec:
  replicas: 5
  selector:
    matchLabels:
      demoo02: demoo02
  template:
    metadata:
      name: pod001
      labels:
        demoo02: demoo02
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v2
        ports:
        - containerPort: 80
2. Create and inspect the resources
[root@master 0721]# kubectl apply -f dp-green.yaml
deployment.apps/dp-demo02 created
[root@master 0721]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
dp-demo01-7dbc8d76b9-4gl65   1/1     Running   0          5m26s   10.100.1.95    worker1   <none>           <none>
dp-demo01-7dbc8d76b9-67bpg   1/1     Running   0          5m26s   10.100.2.126   worker2   <none>           <none>
dp-demo01-7dbc8d76b9-8mh2c   1/1     Running   0          5m26s   10.100.2.124   worker2   <none>           <none>
dp-demo01-7dbc8d76b9-cnc6k   1/1     Running   0          5m26s   10.100.1.96    worker1   <none>           <none>
dp-demo01-7dbc8d76b9-wwsp6   1/1     Running   0          5m26s   10.100.2.125   worker2   <none>           <none>
dp-demo02-6f444d7988-ddbrs   1/1     Running   0          4m39s   10.100.1.97    worker1   <none>           <none>
dp-demo02-6f444d7988-fhjhm   1/1     Running   0          4m39s   10.100.2.128   worker2   <none>           <none>
dp-demo02-6f444d7988-hcljc   1/1     Running   0          4m39s   10.100.1.99    worker1   <none>           <none>
dp-demo02-6f444d7988-m5z9r   1/1     Running   0          4m39s   10.100.2.127   worker2   <none>           <none>
dp-demo02-6f444d7988-wpj47   1/1     Running   0          4m39s   10.100.1.98    worker1   <none>           <none>
3. Switch the svc selector so that it points at the new version
[root@master 0721]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc001
spec:
  type: NodePort
  selector:
    demoo02: demoo02
  clusterIP: 10.200.200.200
  ports:
  - port: 99
    targetPort: 80
    nodePort: 30002
4. Re-apply the manifest (svc)
If apply alone does not take effect, delete the Service first and then apply the manifest again to recreate it (an in-place patch alternative is sketched after the output below)
[root@master 0721]# kubectl delete svc svc001
service "svc001" deleted
[root@master 0721]# kubectl apply -f svc.yaml
service/svc001 created
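As an alternative to recreating the Service, the selector can be switched in place with kubectl patch; a sketch, where setting the old label key to null removes it from the selector:
kubectl patch svc svc001 -p '{"spec":{"selector":{"demoo0":null,"demoo02":"demoo02"}}}'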
5. Test access from a browser
5. Gray release (canary release)
How it works:
1. Deploy the old version with multiple replicas (simulating the production environment).
2. Deploy the svc, matching the labels.
3. Deploy the new version with the same labels as the old version (so the svc can reach it), starting with 0 replicas.
4. Once the canary version has been tested and found healthy, gradually raise its replica count towards the production number.
5. Gradually scale the old version down to 0; at that point all traffic runs on the new version.
·Deploy the old version
[root@master 0721]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo01
spec:
  replicas: 5
  selector:
    matchLabels:
      demoo01: demoo01
  template:
    metadata:
      name: pod001
      labels:
        demoo01: demoo01
    spec:
      containers:
      - name: dd
        image: harbor.test.com/test/nginx:v1
        ports:
        - containerPort: 80
[root@master 0721]# kubectl apply -f dp.yaml
deployment.apps/dp-demo01 created
·Deploy the new version
[root@master 0721]# cat dp-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-demo02
spec:
  replicas: 0
  selector:
    matchLabels:
      demoo01: demoo01
  template:
    metadata:
      name: pod001
      labels:
        demoo01: demoo01
    spec:
      containers:
      - name: dd
        image: harbor.xinjizhiwa.com/test/nginx:v2
        ports:
        - containerPort: 80
·Deploy the svc
[root@master 0721]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc001
spec:
  type: NodePort
  selector:
    demoo01: demoo01
  clusterIP: 10.200.200.200
  ports:
  - port: 99
    targetPort: 80
    nodePort: 30002
[root@master 0721]# kubectl apply -f svc.yaml
service/svc001 created
[root@master 0721]# kubectl describe svc svc001
Name:                     svc001
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 demoo01=demoo01
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.200.200.200
IPs:                      10.200.200.200
Port:                     <unset>  99/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30002/TCP
Endpoints:                10.100.1.100:80,10.100.1.101:80,10.100.2.129:80 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
·Gradually raise the replica count of the new version
That is, gradually scale the old version's replica count down while scaling the new version's up (by changing the replica count in the manifests and re-applying, or with kubectl scale as sketched below).
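A sketch of the same gradual shift done with kubectl scale instead of editing the manifests (assuming the two Deployments above):
kubectl scale deployment dp-demo02 --replicas=2    # bring some canary pods into service
kubectl scale deployment dp-demo01 --replicas=3    # take the same number of old pods out
# repeat in small steps until dp-demo02 reaches 5 replicas and dp-demo01 reaches 0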
6. Case study
Step overview:
1. Prepare the NFS environment.
2. [WordPress pods] Create a Deployment for the wordpress containers.
3. [User-facing svc] Create the svc resource that users access.
4. [Database pod] Create a Deployment for the database pod.
5. [Database svc] Create the svc resource through which the wordpress pods reach the database.
·Prepare the NFS environment
NFS was installed earlier and the export configuration on the storage node was also done before, so only the storage directories need to be created (the export entry is shown again below for reference).
1. Create the storage directories
mkdir -p /k8s/data/{mysql,wordpress}
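For reference only, since this was already configured earlier: the export on the storage node would look roughly like the following (the allowed network range here is an assumption):
# /etc/exports on the storage node (10.0.0.230)
/k8s/data 10.0.0.0/24(rw,sync,no_root_squash)
# reload the export table
exportfs -r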
·Write the WordPress Deployment manifest
[root@master demowordpress]# cat dm-wordperss.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-wp
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s: wp
  template:
    metadata:
      name: pod01
      labels:
        k8s: wp
    spec:
      volumes:
      - name: vol-wp
        nfs:
          server: 10.0.0.230
          path: /k8s/data/wordpress
      containers:
      - name: c-wp
        image: wordpress:latest
        ports:
        - name: wp-c-port
          containerPort: 80
        volumeMounts:
        - name: vol-wp
          mountPath: /var/www/html/wp-content/uploads
        env:
        - name: WORDPRESS_DB_HOST
          value: 10.200.200.200:3306
        - name: WORDPRESS_DB_USER
          value: admin
        - name: WORDPRESS_DB_PASSWORD
          value: demoo
        - name: WORDPRESS_DB_NAME
          value: wordpress
·Write the WordPress svc manifest
[root@master demowordpress]# cat svc-wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-wp
spec:
  type: NodePort
  selector:
    k8s: wp
  clusterIP: 10.200.200.100
  ports:
  - port: 99
    targetPort: 80
    nodePort: 31000
·Write the database Deployment manifest
[root@master demowordpress]# cat dm-mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-sql
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s: sql
  template:
    metadata:
      name: pod02
      labels:
        k8s: sql
    spec:
      volumes:
      - name: vol-sql
        nfs:
          server: 10.0.0.230
          path: /k8s/data/mysql
      containers:
      - name: c-db
        image: mysql:8.0
        ports:
        - name: db-port
          containerPort: 3306
        volumeMounts:
        - name: vol-sql
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: admin
        - name: MYSQL_PASSWORD
          value: demoo
        - name: MYSQL_ROOT_PASSWORD
          value: demoo
·Write the database svc manifest
[root@master demowordpress]# cat svc-mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-sql
spec:
  type: NodePort
  selector:
    k8s: sql
  clusterIP: 10.200.200.200
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 32000
·Create and inspect the resources
[root@master demowordpress]# kubectl apply -f .
deployment.apps/dm-sql created
deployment.apps/dm-wp created
service/svc-sql unchanged
service/svc-wp unchanged
[root@master demowordpress]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.200.0.1       <none>        443/TCP          14d
svc-sql      NodePort    10.200.200.200   <none>        3306:32000/TCP   14m
svc-wp       NodePort    10.200.200.100   <none>        99:31000/TCP     11m
[root@master demowordpress]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
dm-sql-86b77b85c9-cqtd6   1/1     Running   0          2m28s   10.100.1.106   worker1   <none>           <none>
dm-wp-75f457464f-2zn79    1/1     Running   0          2m28s   10.100.1.104   worker1   <none>           <none>
dm-wp-75f457464f-94tm5    1/1     Running   0          2m28s   10.100.2.2     worker2   <none>           <none>
dm-wp-75f457464f-jb7zx    1/1     Running   0          2m28s   10.100.1.105   worker1   <none>           <none>
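Before opening WordPress in a browser, the database can optionally be checked from inside the cluster (a sketch; the pod name comes from the kubectl get pods output above and the credentials are the ones set via the env variables):
kubectl exec -it dm-sql-86b77b85c9-cqtd6 -- mysql -uadmin -pdemoo -e 'show databases;'
# the wordpress database created through MYSQL_DATABASE should appear in the list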
·Check whether data has appeared under the NFS storage path
[root@harbor data]# ll mysql/
总用量 198056
-rw-r----- 1 polkitd input       56 7月  28 16:28 auto.cnf
-rw-r----- 1 polkitd input  3117698 7月  28 16:28 binlog.000001
-rw-r----- 1 polkitd input      156 7月  28 16:28 binlog.000002
-rw-r----- 1 polkitd input       32 7月  28 16:28 binlog.index
-rw------- 1 polkitd input     1680 7月  28 16:28 ca-key.pem
-rw-r--r-- 1 polkitd input     1112 7月  28 16:28 ca.pem
-rw-r--r-- 1 polkitd input     1112 7月  28 16:28 client-cert.pem
-rw------- 1 polkitd input     1680 7月  28 16:28 client-key.pem
-rw-r----- 1 polkitd input   196608 7月  28 16:28 #ib_16384_0.dblwr
-rw-r----- 1 polkitd input  8585216 7月  28 16:28 #ib_16384_1.dblwr
-rw-r----- 1 polkitd input     5698 7月  28 16:28 ib_buffer_pool
-rw-r----- 1 polkitd input 12582912 7月  28 16:28 ibdata1
-rw-r----- 1 polkitd input 50331648 7月  28 16:28 ib_logfile0
-rw-r----- 1 polkitd input 50331648 7月  28 16:28 ib_logfile1
-rw-r----- 1 polkitd input 12582912 7月  28 16:29 ibtmp1
drwxr-x--- 2 polkitd input      187 7月  28 16:28 #innodb_temp
drwxr-x--- 2 polkitd input      143 7月  28 16:28 mysql
-rw-r----- 1 polkitd input 31457280 7月  28 16:28 mysql.ibd
drwxr-x--- 2 polkitd input     8192 7月  28 16:28 performance_schema
-rw------- 1 polkitd input     1676 7月  28 16:28 private_key.pem
-rw-r--r-- 1 polkitd input      452 7月  28 16:28 public_key.pem
-rw-r--r-- 1 polkitd input     1112 7月  28 16:28 server-cert.pem
-rw------- 1 polkitd input     1680 7月  28 16:28 server-key.pem
drwxr-x--- 2 polkitd input       28 7月  28 16:28 sys
-rw-r----- 1 polkitd input 16777216 7月  28 16:28 undo_001
-rw-r----- 1 polkitd input 16777216 7月  28 16:28 undo_002
drwxr-x--- 2 polkitd input        6 7月  28 16:28 wordpress