Outline
- 1. Container Orchestration Overview
- 2. Kubernetes Orchestration
- 3. Docker Swarm Orchestration
- 4. OpenShift Orchestration
- 5. Nomad Orchestration
- 6. Application Deployment Management
- 7. Autoscaling
- 8. Service Discovery and Load Balancing
- 9. Storage Management
- 10. Best Practices
1. Container Orchestration Overview
Container orchestration is the automated deployment, management, and scaling of containers. Orchestration tools help manage containerized applications at scale, providing service discovery, load balancing, autoscaling, rolling updates, and more.
The core capabilities of container orchestration include:
- Container scheduling and management
- Service discovery and load balancing
- Autoscaling
- Rolling updates and rollbacks
- Storage management
- Network management
- Health checks
- Configuration management
2. Kubernetes Orchestration
2.1 Kubernetes Cluster Management
# View cluster information
$ kubectl cluster-info
# Output
Kubernetes control plane is running at https://kubernetes.default.svc
CoreDNS is running at https://kubernetes.default.svc/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# View node status
$ kubectl get nodes -o wide
# Output
NAME        STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
master-01   Ready    control-plane,master   30d   v1.21.0   192.168.1.10
worker-01   Ready
worker-02   Ready
# View node details
$ kubectl describe node worker-01
# View cluster resource usage
$ kubectl top nodes
# Output
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master-01   250m         12%    1024Mi          13%
worker-01   450m         22%    2048Mi          26%
worker-02   380m         19%    1800Mi          23%
# List namespaces
$ kubectl get namespaces
# Output
NAME              STATUS   AGE
default           Active   30d
kube-node-lease   Active   30d
kube-public       Active   30d
kube-system       Active   30d
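Namespaces can also be created declaratively rather than with `kubectl create namespace`. A minimal manifest is sketched below; the name `dev` and the label are placeholders:

```yaml
# namespace.yaml -- apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev            # placeholder name
  labels:
    environment: dev   # optional label, handy for selectors and policies
```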
2.2 Deployment Management
# Create a Deployment
$ cat > nginx-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
$ kubectl apply -f nginx-deployment.yaml
# View Deployments
$ kubectl get deployments
# Output
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           1m
# View Deployment details
$ kubectl describe deployment nginx-deployment
# Scale the Deployment
$ kubectl scale deployment nginx-deployment --replicas=5
# Update the Deployment image
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.20.0
# Watch rollout status
$ kubectl rollout status deployment/nginx-deployment
# Output
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 4 out of 5 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
# View rollout history
$ kubectl rollout history deployment/nginx-deployment
# Roll back to the previous revision
$ kubectl rollout undo deployment/nginx-deployment
# Roll back to a specific revision
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
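The CHANGE-CAUSE column in `kubectl rollout history` stays `<none>` unless a cause is recorded. It is populated from the `kubernetes.io/change-cause` annotation; a sketch of setting it in the manifest (the message text is illustrative):

```yaml
# Excerpt of a Deployment manifest; this annotation is what
# `kubectl rollout history` reports as CHANGE-CAUSE for the revision.
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "update image to nginx:1.20.0"
```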
Tip: Kubernetes is currently the most popular container orchestration tool, providing a rich feature set for managing containerized applications.
3. Docker Swarm Orchestration
3.1 Swarm Cluster Management
# Initialize a Swarm cluster
$ docker swarm init --advertise-addr 192.168.1.10
# Output
Swarm initialized: current node (abc123def456) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0abc123def456-xyz789 192.168.1.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# Add a worker node
$ docker swarm join --token SWMTKN-1-0abc123def456-xyz789 192.168.1.10:2377
# List nodes
$ docker node ls
# Output
ID               HOSTNAME     STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
abc123def456 *   manager-01   Ready    Active         Leader           20.10.7
def456ghi789     worker-01    Ready    Active                          20.10.7
ghi789jkl012     worker-02    Ready    Active                          20.10.7
# Add a node label
$ docker node update --label-add zone=us-west worker-01
# View node details
$ docker node inspect worker-01
# Remove a node
$ docker node rm --force worker-02
# Leave the Swarm cluster
$ docker swarm leave --force
3.2 Service Management
# Create a service
$ docker service create \
  --name nginx \
  --replicas 3 \
  --publish 80:80 \
  nginx:1.19.10
# Output
image nginx:1.19.10 could not be accessed on a registry to record
its digest. Each node will access nginx:1.19.10 independently,
possibly leading to different nodes running different
versions of the image.
nginx
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
# List services
$ docker service ls
# Output
ID             NAME    MODE         REPLICAS   IMAGE           PORTS
abc123def456   nginx   replicated   3/3        nginx:1.19.10   *:80->80/tcp
# View service tasks
$ docker service ps nginx
# Output
ID             NAME      IMAGE           NODE         DESIRED STATE   CURRENT STATE           ERROR   PORTS
def456ghi789   nginx.1   nginx:1.19.10   worker-01    Running         Running 2 minutes ago
ghi789jkl012   nginx.2   nginx:1.19.10   worker-02    Running         Running 2 minutes ago
jkl012mno345   nginx.3   nginx:1.19.10   manager-01   Running         Running 2 minutes ago
# Scale the service
$ docker service scale nginx=5
# Update the service image
$ docker service update --image nginx:1.20.0 nginx
# Configure rolling updates
$ docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action pause \
  nginx
# View service logs
$ docker service logs nginx
# Remove the service
$ docker service rm nginx
4. OpenShift Orchestration
4.1 OpenShift Project Management
# Log in to OpenShift
$ oc login -u admin -p password https://openshift.fgedu.net.cn:8443
# Output
Login successful.
You have access to 10 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
# Create a project
$ oc new-project myproject --description="My Project" --display-name="My Project"
# List projects
$ oc get projects
# Output
NAME        DISPLAY NAME   STATUS
default     Default        Active
myproject   My Project     Active
# Switch projects
$ oc project myproject
# View project status
$ oc status
# Output
In project My Project (myproject) on server https://openshift.fgedu.net.cn:8443
http://nginx-myproject.openshift.fgedu.net.cn (svc/nginx)
dc/nginx deploys docker.io/nginx:1.19.10
deployment #1 deployed 2 minutes ago - 3 pods
# Create an application
$ oc new-app nginx:1.19.10 --name=nginx
# View application resources
$ oc get all
# Output
NAME            READY   STATUS    RESTARTS   AGE
pod/nginx-1-abc12   1/1   Running   0        2m
NAME         DESIRED   CURRENT   READY   AGE
rc/nginx-1   1         1         1       2m
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
svc/nginx   ClusterIP   172.30.100.100
# Create a route
$ oc expose svc nginx --hostname=nginx-myproject.openshift.fgedu.net.cn
# View routes
$ oc get routes
# Output
NAME    HOST/PORT                                PATH   SERVICES   PORT   TERMINATION   WILDCARD
nginx   nginx-myproject.openshift.fgedu.net.cn          nginx      8080                 None
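`oc expose` creates a plain HTTP route. TLS can be added with a Route manifest using edge termination; a hedged sketch reusing the hostname from the example (the route name is ours):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nginx-tls
spec:
  host: nginx-myproject.openshift.fgedu.net.cn
  to:
    kind: Service
    name: nginx
  tls:
    termination: edge                        # TLS terminates at the router
    insecureEdgeTerminationPolicy: Redirect  # redirect HTTP to HTTPS
```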
4.2 DeploymentConfig Management
# Create a DeploymentConfig
$ cat > nginx-dc.yaml << EOF
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - nginx
      from:
        kind: ImageStreamTag
        name: nginx:1.19.10
    type: ImageChange
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
EOF
$ oc apply -f nginx-dc.yaml
# View DeploymentConfigs
$ oc get dc
# Output
NAME    REVISION   DESIRED   CURRENT   TRIGGERED BY
nginx   1          3         3         config,image(nginx:1.19.10)
# Scale the DeploymentConfig
$ oc scale dc nginx --replicas=5
# Trigger a rolling deployment
$ oc rollout latest nginx
# Watch rollout status
$ oc rollout status dc/nginx
# Roll back to the previous version
$ oc rollback nginx
# View rollout history
$ oc rollout history dc/nginx
author:www.itpux.com
5. Nomad Orchestration
5.1 Nomad Cluster Management
# Start a Nomad agent in development mode (runs server and client in one process)
$ nomad agent -dev
# Output
==> Starting Nomad agent...
==> Nomad agent configuration:
       Client: true
    Log Level: DEBUG
       Region: global (DC: dc1)
       Server: true
      Version: 1.2.6
==> Nomad agent started! Log data will stream in below:
2026/04/03 10:00:00 [INFO] nomad: starting server (version: 1.2.6)
2026/04/03 10:00:00 [INFO] nomad: server: cluster leadership acquired
2026/04/03 10:00:00 [INFO] nomad: server: established cluster leadership
# View cluster members
$ nomad server members
# Output
Name                 Address    Port  Status  Leader  Protocol  Build  Datacenter  Region
nomad-server.global  127.0.0.1  4648  alive   true    2         1.2.6  dc1         global
# View node status
$ nomad node status
# Output
ID      DC   Name             Class  Drain  Eligibility  Status
abc123  dc1  nomad-client-01
def456  dc1  nomad-client-02
# View node details
$ nomad node status abc123
5.2 Job Management
# Create a job file
$ cat > nginx-job.nomad << EOF
job "nginx" {
  datacenters = ["dc1"]
  group "web" {
    count = 3
    network {
      port "http" {
        static = 80
      }
    }
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:1.19.10"
        ports = ["http"]
      }
      resources {
        cpu    = 100
        memory = 128
      }
      service {
        name = "nginx"
        port = "http"
        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
EOF
# Run the job
$ nomad job run nginx-job.nomad
# Output
==> Monitoring evaluation "abc123"
    Evaluation triggered by job "nginx"
    Allocation "def456" created: node "abc123", group "web"
    Allocation "ghi789" created: node "def456", group "web"
    Allocation "jkl012" created: node "ghi789", group "web"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "abc123" finished with status "complete"
# List jobs
$ nomad job status
# Output
ID     Type     Priority  Status   Submit Date
nginx  service  50        running  2026-04-03T10:00:00Z
# View job details
$ nomad job status nginx
# Scale the job
$ nomad job scale nginx web 5
# Stop the job
$ nomad job stop nginx
# Stop and purge the job
$ nomad job stop -purge nginx
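Nomad's rolling and canary behavior is controlled by an `update` stanza inside the job group; the job above uses the defaults. A sketch with illustrative values:

```hcl
group "web" {
  count = 3
  update {
    max_parallel     = 1      # replace one allocation at a time
    canary           = 1      # start one canary alloc before promoting
    min_healthy_time = "30s"  # how long an alloc must stay healthy
    auto_revert      = true   # roll back automatically on failure
  }
}
```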
6. Application Deployment Management
6.1 Deployment Strategies
- Rolling update: gradually replace the old version
- Blue-green deployment: run two versions side by side and switch traffic
- Canary release: gradually shift traffic to the new version
- A/B testing: run multiple versions simultaneously for comparison
6.2 Kubernetes Deployment Strategies
# Rolling update configuration
$ cat > rolling-update-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80
EOF
# Blue-green deployment configuration
$ cat > blue-green-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
      version: blue
  template:
    metadata:
      labels:
        app: nginx
        version: blue
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
      version: green
  template:
    metadata:
      labels:
        app: nginx
        version: green
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    version: blue
  ports:
  - port: 80
    targetPort: 80
EOF
# Canary release configuration
$ cat > canary-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
      version: stable
  template:
    metadata:
      labels:
        app: nginx
        version: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: canary
  template:
    metadata:
      labels:
        app: nginx
        version: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
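During a rolling update the controller keeps the pod count between `replicas - maxUnavailable` and `replicas + maxSurge`; percentage values round up for surge and down for unavailable. A small sketch of that arithmetic (the helper name is ours; the rounding rules follow the Kubernetes documentation):

```python
import math

def rolling_update_bounds(replicas, max_surge, max_unavailable):
    """Return (min_available, max_total) pods during a rolling update.

    max_surge / max_unavailable may be absolute ints or percentage
    strings like "25%"; surge rounds up, unavailable rounds down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas - unavailable, replicas + surge

# replicas: 10, maxSurge: 1, maxUnavailable: 1 (the values used above)
print(rolling_update_bounds(10, 1, 1))          # (9, 11)
# Percentage form: 25% of 10 -> surge ceil(2.5)=3, unavailable floor(2.5)=2
print(rolling_update_bounds(10, "25%", "25%"))  # (8, 13)
```

With 10 replicas and both settings at 1, the cluster never drops below 9 ready pods and never runs more than 11 total.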
Tip: Choosing the right deployment strategy reduces release risk and improves application availability.
7. Autoscaling
7.1 Kubernetes HPA
# Create an HPA
$ kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=3 --max=10
# View HPA status
$ kubectl get hpa
# Output
NAME               REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deployment   Deployment/nginx-deployment   45%/50%   3         10        5          1m
# Create an HPA based on custom metrics
$ cat > custom-hpa.yaml << EOF
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 1000
EOF
$ kubectl apply -f custom-hpa.yaml
# View HPA details
$ kubectl describe hpa nginx-hpa
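The HPA's core scaling rule is `desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)`, clamped to the min/max range (with multiple metrics, the largest result wins). A sketch of that formula (the helper name is ours):

```python
import math

def hpa_desired_replicas(current, metric_value, metric_target,
                         min_replicas, max_replicas):
    """Core HPA scaling rule: scale proportionally to metric pressure,
    round up, then clamp to the [min, max] replica range."""
    desired = math.ceil(current * metric_value / metric_target)
    return max(min_replicas, min(max_replicas, desired))

# 5 replicas at 45% CPU against a 50% target: no change needed
print(hpa_desired_replicas(5, 45, 50, 3, 10))   # 5
# Load rises to 90%: ceil(5 * 90/50) = 9
print(hpa_desired_replicas(5, 90, 50, 3, 10))   # 9
# Load drops to 10%: ceil(5 * 10/50) = 1, clamped to the minimum of 3
print(hpa_desired_replicas(5, 10, 50, 3, 10))   # 3
```

This explains the `45%/50%` TARGETS column above: the ratio is just under 1, so the replica count holds steady.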
7.2 Kubernetes VPA
# Create a VPA
$ cat > nginx-vpa.yaml << EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: nginx
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 500m
        memory: 512Mi
      controlledResources: ["cpu", "memory"]
EOF
$ kubectl apply -f nginx-vpa.yaml
# View VPA status
$ kubectl get vpa
# Output
NAME        MODE   CPU    MEM     PROVIDED   AGE
nginx-vpa   Auto   250m   256Mi   True       1m
# View VPA recommendations
$ kubectl describe vpa nginx-vpa
8. Service Discovery and Load Balancing
8.1 Kubernetes Services
# ClusterIP Service
$ cat > clusterip-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
# NodePort Service
$ cat > nodeport-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
# LoadBalancer Service
$ cat > loadbalancer-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
# Create the Service
$ kubectl apply -f clusterip-service.yaml
# View Services
$ kubectl get svc
# Output
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service        ClusterIP      172.30.100.100
nginx-nodeport       NodePort       172.30.100.101
nginx-loadbalancer   LoadBalancer   172.30.100.102   10.0.0.100    80:30080/TCP   1m
8.2 Ingress Configuration
$ cat > nginx-ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.fgedu.net.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
EOF
$ kubectl apply -f nginx-ingress.yaml
# View Ingresses
$ kubectl get ingress
# Output
NAME            CLASS   HOSTS   ADDRESS   PORTS   AGE
nginx-ingress
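TLS for an Ingress is configured with a `tls` block referencing a Secret of type `kubernetes.io/tls`; a sketch using the host from the example (the Secret name is a placeholder):

```yaml
# Excerpt of the Ingress spec; the Secret must exist in the same namespace.
spec:
  tls:
  - hosts:
    - nginx.fgedu.net.cn
    secretName: nginx-tls-cert   # placeholder: a kubernetes.io/tls Secret
```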
9. Storage Management
9.1 Kubernetes Storage
# Create a PersistentVolume
$ cat > pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/nginx
EOF
# Create a PersistentVolumeClaim
$ cat > pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
EOF
# Create a Pod that uses the PVC
$ cat > nginx-pod-pvc.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.19.10
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginx-data
  volumes:
  - name: nginx-data
    persistentVolumeClaim:
      claimName: nginx-pvc
EOF
$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f nginx-pod-pvc.yaml
# View PVs and PVCs
$ kubectl get pv
$ kubectl get pvc
# Output
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nginx-pv   10Gi       RWO            Retain           Bound    default/nginx-pvc   standard                1m
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    nginx-pv   10Gi       RWO            standard       1m
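The example above binds a manually created PV. In most clusters a StorageClass provisions volumes on demand instead, so only the PVC is needed. A hedged sketch; the provisioner value is a placeholder and depends on your environment's CSI driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner  # placeholder; use your CSI driver
volumeBindingMode: WaitForFirstConsumer    # bind when a consuming pod is scheduled
reclaimPolicy: Delete
```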
10. Best Practices
10.1 Container Orchestration Best Practices
- Use declarative configuration management
- Implement health checks
- Configure resource limits
- Use autoscaling
- Implement a rolling update strategy
- Configure storage and networking
- Enforce security policies
- Build out monitoring and logging
10.2 Deployment Best Practices
- Manage configuration under version control
- Implement a CI/CD pipeline
- Use image tags to manage versions
- Configure health checks
- Implement a rolling update strategy
10.3 Scaling Best Practices
- Set reasonable resource requests and limits
- Configure autoscaling policies
- Monitor the effects of scaling
- Review scaling policies regularly
- Balance cost against performance
- Choose the orchestration tool that fits your needs
- Use declarative configuration management
- Implement health checks and autoscaling
- Configure sensible resource limits
- Build a complete monitoring and logging stack
- Enforce security policies
- Back up regularly and rehearse recovery
- Continuously refine your orchestration strategy
This article was compiled and published by 风哥教程 for learning and testing purposes only. Please credit the source when reposting: http://www.fgedu.net.cn/10327.html
