
IT Tutorial FG385 - Container Orchestration Management

Contents

1. Overview of Container Orchestration

Container orchestration is the automated deployment, management, monitoring, and scaling of containerized applications using dedicated tools and platforms. As container technology has seen wide adoption, orchestration has become a core part of modern application delivery.

Core capabilities of a container orchestration platform include:

  • Container deployment and scheduling
  • Autoscaling
  • Service discovery and load balancing
  • Configuration management
  • Storage management
  • Network management
  • Health checks and self-healing
  • Rolling updates and rollbacks


2. Kubernetes Basics

2.1 Deploying a Kubernetes Cluster

# Install kubeadm, kubelet, and kubectl
$ apt-get update
$ apt-get install -y apt-transport-https ca-certificates curl
$ curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl

# Initialize the cluster
$ kubeadm init --pod-network-cidr=192.168.0.0/16

# Configure kubectl
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a network plugin
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Join worker nodes
$ kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
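The long sha256 value passed to --discovery-token-ca-cert-hash is not arbitrary: it is the SHA-256 digest of the cluster CA's public key in DER form. A sketch of that derivation (the helper name is illustrative; the path assumes a default kubeadm install):

```shell
# Derive a kubeadm discovery-token-ca-cert-hash from a CA certificate:
# sha256 over the DER encoding of the certificate's public key.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}'
}

# On a kubeadm control plane node:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Workers can then join with --discovery-token-ca-cert-hash sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt).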

# Check cluster status
$ kubectl cluster-info
$ kubectl get nodes
$ kubectl get pods --all-namespaces

2.2 Kubernetes Resource Management

# Create a Deployment
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80

$ kubectl apply -f deployment.yaml

# View the Deployment
$ kubectl get deployments
$ kubectl describe deployment nginx-deployment

# Create a Service
$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

$ kubectl apply -f service.yaml

# View the Service
$ kubectl get services
$ kubectl describe service nginx-service

# Create a ConfigMap
$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80;
        server_name fgedudb;
        location / {
          root /usr/share/nginx/html;
          index index.html;
        }
      }
    }

$ kubectl apply -f configmap.yaml

# Create a Secret
$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginx-secret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=

$ kubectl apply -f secret.yaml
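The data values in secret.yaml are plain base64, not encryption; they decode straight back to the original strings. A quick sketch (the helper names are just for illustration):

```shell
# Encode/decode Secret values the way the data: field expects
# (base64 of the raw bytes, with no trailing newline).
encode_secret() { printf '%s' "$1" | base64; }
decode_secret() { printf '%s' "$1" | base64 -d; }

encode_secret admin      # YWRtaW4=
encode_secret password   # cGFzc3dvcmQ=
decode_secret YWRtaW4=   # admin
```

Using printf rather than echo avoids encoding a trailing newline, a common source of broken credentials.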

2.3 Common Kubernetes Operations

# List Pods
$ kubectl get pods
$ kubectl describe pod nginx-deployment-1234567890-abcde

# View logs
$ kubectl logs nginx-deployment-1234567890-abcde

# Exec into a Pod
$ kubectl exec -it nginx-deployment-1234567890-abcde -- /bin/bash

# Scale the Deployment
$ kubectl scale deployment nginx-deployment --replicas=5

# Update the Deployment
$ kubectl set image deployment nginx-deployment nginx=nginx:1.20.0

# Roll back the Deployment
$ kubectl rollout undo deployment nginx-deployment

# View rollout history
$ kubectl rollout history deployment nginx-deployment

# Roll back to a specific revision
$ kubectl rollout undo deployment nginx-deployment --to-revision=2

# Delete resources
$ kubectl delete deployment nginx-deployment
$ kubectl delete service nginx-service
$ kubectl delete configmap nginx-config
$ kubectl delete secret nginx-secret

Fenge's tip: Kubernetes is currently the most popular container orchestration platform; it provides powerful container management features and suits application deployments of every scale.

3. Docker Swarm

3.1 Deploying a Docker Swarm Cluster

# Initialize the Swarm cluster
$ docker swarm init --advertise-addr 192.168.1.100

# Print the join commands
$ docker swarm join-token worker
$ docker swarm join-token manager

# Join a node
$ docker swarm join --token SWMTKN-1-1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef 192.168.1.100:2377

# Check cluster status
$ docker node ls

# Promote a node
$ docker node promote worker1

# Demote a node
$ docker node demote manager1

# Remove a node
$ docker node rm worker1

3.2 Docker Swarm Service Management

# Create a service
$ docker service create --name nginx --replicas 3 --publish 80:80 nginx:1.19.10

# List services
$ docker service ls
$ docker service ps nginx

# Scale the service
$ docker service scale nginx=5

# Update the service
$ docker service update --image nginx:1.20.0 nginx

# Roll back the service
$ docker service rollback nginx

# Remove the service
$ docker service rm nginx

# Create a service with a config
$ docker config create nginx-config nginx.conf
$ docker service create \
  --name nginx \
  --replicas 3 \
  --publish 80:80 \
  --config source=nginx-config,target=/etc/nginx/nginx.conf \
  nginx:1.19.10

# Create a service with a secret
$ echo "password" | docker secret create db-password -
$ docker service create \
  --name db \
  --replicas 1 \
  --secret source=db-password,target=db-password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password \
  mysql:5.7

3.3 Docker Swarm Network Management

# Create a network
$ docker network create --driver overlay my-network

# List networks
$ docker network ls

# Create a service on the network
$ docker service create \
  --name nginx \
  --replicas 3 \
  --publish 80:80 \
  --network my-network \
  nginx:1.19.10

# Create another service on the same network
$ docker service create \
  --name app \
  --replicas 3 \
  --network my-network \
  myapp:latest

# Inspect the network
$ docker network inspect my-network


4. OpenShift

4.1 Deploying an OpenShift Cluster

# Download the OpenShift installer
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz
$ tar xvf openshift-install-linux.tar.gz

# Configure installation parameters
$ mkdir -p ~/openshift-install
$ cd ~/openshift-install
$ cat install-config.yaml
apiVersion: v1
baseDomain: fgedu.net.cn
metadata:
  name: test
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m5.xlarge
      zones:
      - us-east-1a
      - us-east-1b
      - us-east-1c
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m5.xlarge
      zones:
      - us-east-1a
      - us-east-1b
      - us-east-1c
platform:
  aws:
    region: us-east-1
pullSecret: '{}'
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC…

# Generate manifests
$ ./openshift-install create manifests

# Install the cluster
$ ./openshift-install create cluster

# Wait for installation to complete
$ ./openshift-install wait-for install-complete

# Configure kubectl
$ export KUBECONFIG=~/openshift-install/auth/kubeconfig
$ kubectl cluster-info
$ kubectl get nodes

4.2 Deploying Applications on OpenShift

# Log in to OpenShift
$ oc login https://api.test.fgedu.net.cn:6443

# Create a project
$ oc new-project my-project

# Deploy an application
$ oc new-app nginx:1.19.10 --name=nginx

# Inspect the application
$ oc get pods
$ oc get services

# Expose the service
$ oc expose service nginx

# View routes
$ oc get routes

# Scale the application
$ oc scale deployment nginx --replicas=5

# Update the application
$ oc set image deployment nginx nginx=nginx:1.20.0

# Roll back the application
$ oc rollout undo deployment nginx

# View rollout history
$ oc rollout history deployment nginx

# Roll back to a specific revision
$ oc rollout undo deployment nginx --to-revision=2

# Delete the application
$ oc delete deployment nginx
$ oc delete service nginx
$ oc delete route nginx

4.3 OpenShift Builds

# Build an application from source
$ oc new-app https://github.com/fgedu/myapp --name=myapp

# Check build status
$ oc get builds
$ oc logs build/myapp-1

# Build an application from a Dockerfile
$ cat Dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

$ oc new-build --strategy=docker --dockerfile=Dockerfile --name=myapp

# View build configs
$ oc get buildconfigs

# Trigger a build
$ oc start-build myapp

# Create a deployment from the build
$ oc new-app myapp:latest


5. HashiCorp Nomad

5.1 Deploying a Nomad Cluster

# Install Nomad
$ wget https://releases.hashicorp.com/nomad/1.2.6/nomad_1.2.6_linux_amd64.zip
$ unzip nomad_1.2.6_linux_amd64.zip
$ sudo mv nomad /usr/local/bin/

# Configure Nomad
$ sudo mkdir -p /etc/nomad.d
$ cat /etc/nomad.d/server.hcl
server {
  enabled          = true
  bootstrap_expect = 3
}

datacenter = "dc1"
data_dir   = "/opt/nomad/data"

# Start the Nomad server
$ sudo systemctl enable nomad
$ sudo systemctl start nomad

# Check Nomad status
$ nomad status
$ nomad server members

# Start a Nomad client
$ cat /etc/nomad.d/client.hcl
client {
  enabled = true
  servers = ["192.168.1.100:4647"]
}

datacenter = "dc1"
data_dir   = "/opt/nomad/data"

$ sudo systemctl enable nomad
$ sudo systemctl start nomad

# Check client status
$ nomad node status

5.2 Nomad Job Management

# Create a job file
$ cat nginx.nomad
job "nginx" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3

    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.19.10"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

# Submit the job
$ nomad job run nginx.nomad
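Each resources block is reserved per task instance, so a group multiplies it by count when the scheduler looks for capacity. A small arithmetic sketch for the job above (the function name is illustrative):

```shell
# Total capacity a Nomad group reserves: count * per-task cpu (MHz) and memory (MB)
group_total() { echo "cpu=$(( $1 * $2 ))MHz mem=$(( $1 * $3 ))MB"; }

# The "web" group above: count = 3, cpu = 500, memory = 256
group_total 3 500 256   # cpu=1500MHz mem=768MB
```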

# Check job status
$ nomad job status nginx
$ nomad job status -verbose nginx

# List the job's allocations
$ nomad job allocs nginx

# Check an allocation's status
$ nomad alloc status <alloc-id>

# View an allocation's logs
$ nomad alloc logs <alloc-id>

# Scale the job
$ nomad job scale nginx web 5

# Update the job
$ nomad job run -detach nginx.nomad

# Stop the job
$ nomad job stop nginx

# Stop and purge the job
$ nomad job stop -purge nginx

5.3 Nomad Service Discovery

# Set up Consul
$ wget https://releases.hashicorp.com/consul/1.11.4/consul_1.11.4_linux_amd64.zip
$ unzip consul_1.11.4_linux_amd64.zip
$ sudo mv consul /usr/local/bin/

# Start the Consul server
$ consul agent -server -bootstrap-expect=3 -data-dir=/opt/consul/data -node=server1 -bind=192.168.1.100 -datacenter=dc1

# Start a Consul client
$ consul agent -data-dir=/opt/consul/data -node=client1 -bind=192.168.1.101 -datacenter=dc1 -join=192.168.1.100

# Configure Nomad to use Consul
$ cat /etc/nomad.d/server.hcl
server {
  enabled          = true
  bootstrap_expect = 3
}

consul {
  address = "127.0.0.1:8500"
}

datacenter = "dc1"
data_dir   = "/opt/nomad/data"

# Create a job with service discovery
$ cat nginx.nomad
job "nginx" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3

    network {
      port "http" {
        to = 80
      }
    }

    service {
      name = "nginx"
      port = "http"

      check {
        name     = "nginx health check"
        type     = "http"
        path     = "/"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.19.10"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

# List services registered in Consul
$ consul catalog services
$ curl http://127.0.0.1:8500/v1/health/service/nginx


6. Application Deployment

6.1 Deployment Strategies

# Kubernetes rolling update
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80

$ kubectl apply -f deployment.yaml
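With replicas: 3, maxSurge: 1, and maxUnavailable: 0, the rollout may briefly run four pods but never drops below three ready pods, which is what makes the update zero-downtime. The bounds can be sketched as (function names are illustrative):

```shell
# Rolling-update bounds: args are replicas, maxSurge, maxUnavailable
max_pods()  { echo $(( $1 + $2 )); }  # most pods that may exist during the rollout
min_ready() { echo $(( $1 - $3 )); }  # pods that must stay available throughout

max_pods 3 1 0    # 4
min_ready 3 1 0   # 3
```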

# Kubernetes blue-green deployment
$ cat blue-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
  labels:
    app: nginx
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: blue
  template:
    metadata:
      labels:
        app: nginx
        version: blue
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80

$ kubectl apply -f blue-deployment.yaml

$ cat green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green
  labels:
    app: nginx
    version: green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: green
  template:
    metadata:
      labels:
        app: nginx
        version: green
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80

$ kubectl apply -f green-deployment.yaml

$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    version: blue
  ports:
  - port: 80
    targetPort: 80

$ kubectl apply -f service.yaml

# Switch traffic to the green version
$ kubectl patch service nginx-service -p '{"spec":{"selector":{"app":"nginx","version":"green"}}}'

# Kubernetes canary deployment
$ cat canary-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
    version: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: canary
  template:
    metadata:
      labels:
        app: nginx
        version: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80

$ kubectl apply -f canary-deployment.yaml

$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

$ kubectl apply -f service.yaml
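Because this Service selects only app: nginx, it balances across both the stable and canary Deployments, so the canary's traffic share is roughly its fraction of the total pods. A sketch of that arithmetic (the function name is illustrative):

```shell
# Approximate percentage of requests hitting the canary:
# canary_replicas * 100 / (stable_replicas + canary_replicas)
canary_share() { echo $(( $2 * 100 / ($1 + $2) )); }

canary_share 3 1   # 3 stable + 1 canary pod -> 25 (percent)
```

Increasing the canary Deployment's replicas shifts more traffic to the new version without touching the Service.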

6.2 Configuration Management

# Kubernetes ConfigMap
$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.conf: |
    server {
      port: 8080
      host: 0.0.0.0
    }
    database {
      url: "postgres://user:password@postgres:5432/db"
    }

$ kubectl apply -f configmap.yaml

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:latest
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
      volumes:
      - name: config-volume
        configMap:
          name: app-config

$ kubectl apply -f deployment.yaml

# Kubernetes Secret
$ kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=secret123

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:latest
        env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password

$ kubectl apply -f deployment.yaml

7. Autoscaling

7.1 Kubernetes Horizontal Pod Autoscaler

# Create an HPA
$ kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=3 --max=10

# View the HPA
$ kubectl get hpa
$ kubectl describe hpa nginx-deployment
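The HPA picks a replica count using desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). An integer-arithmetic sketch of that formula (the function name is illustrative):

```shell
# ceil(current_replicas * current_metric / target_metric) using integer math
hpa_desired() { echo $(( ($1 * $2 + $3 - 1) / $3 )); }

# 3 replicas running at 80% CPU against a 50% target:
hpa_desired 3 80 50   # ceil(4.8) -> 5
# Already at target: no change
hpa_desired 3 50 50   # 3
```

The result is then clamped to the --min and --max bounds given above.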

# HPA based on multiple resource metrics
$ cat hpa-custom.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60

$ kubectl apply -f hpa-custom.yaml

# HPA based on an external metric
$ cat hpa-external.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_length
        selector:
          matchLabels:
            app: app
      target:
        type: AverageValue
        averageValue: 10

$ kubectl apply -f hpa-external.yaml

7.2 Kubernetes Cluster Autoscaler

# Install the Cluster Autoscaler
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# Configure the Cluster Autoscaler
$ kubectl patch deployment cluster-autoscaler \
  -n kube-system \
  --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/command","value":["/cluster-autoscaler","--v=4","--logtostderr=true","--cloud-provider=aws","--skip-nodes-with-local-storage=false","--expander=least-waste","--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster"]]'

# Check Cluster Autoscaler status
$ kubectl logs -n kube-system deployment/cluster-autoscaler

# Test the Cluster Autoscaler
$ kubectl create deployment stress --image=polinux/stress
$ kubectl scale deployment stress --replicas=100

# Check node status
$ kubectl get nodes

# Clean up
$ kubectl delete deployment stress


8. Network Management

8.1 Kubernetes Networking

# Check the network plugin pods
$ kubectl get pods --namespace kube-system

# List network policies
$ kubectl get networkpolicies

# Create a default-deny network policy
$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

$ kubectl apply -f networkpolicy.yaml

# Create a policy that allows specific traffic
$ cat allow-web.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 8080

$ kubectl apply -f allow-web.yaml

# Inspect the network policy
$ kubectl describe networkpolicy allow-web

8.2 Service Discovery and Load Balancing

# Create a ClusterIP Service
$ cat service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

$ kubectl apply -f service-clusterip.yaml

# Create a NodePort Service
$ cat service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
  type: NodePort

$ kubectl apply -f service-nodeport.yaml

# Create a LoadBalancer Service
$ cat service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

$ kubectl apply -f service-loadbalancer.yaml

# Create an Ingress
$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.fgedu.net.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80

$ kubectl apply -f ingress.yaml

# View the Ingress
$ kubectl get ingress
$ kubectl describe ingress app-ingress

9. Storage Management

9.1 Kubernetes Storage

# Create a PersistentVolume
$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data

$ kubectl apply -f pv.yaml

# Create a PersistentVolumeClaim
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

$ kubectl apply -f pvc.yaml
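A claim binds only to a volume whose capacity is at least the requested size, so the 5Gi claim above can bind to the 10Gi volume (the surplus stays with that claim and is not shared). A sketch of the size check for whole-Gi values (the function name is illustrative):

```shell
# Can a PVC request bind to a PV? Capacity must be >= request (whole-Gi sizes only).
pv_can_bind() { [ "${1%Gi}" -ge "${2%Gi}" ] && echo yes || echo no; }

pv_can_bind 10Gi 5Gi    # yes: pv-1 (10Gi) satisfies pvc-1 (5Gi)
pv_can_bind 10Gi 20Gi   # no
```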

# Use the PVC in a Pod
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: app:latest
    volumeMounts:
    - name: data
      mountPath: /app/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-1

$ kubectl apply -f pod.yaml

# View PVs and PVCs
$ kubectl get pv
$ kubectl get pvc

# Create a StorageClass
$ cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate

$ kubectl apply -f storageclass.yaml

# Create a PVC that uses the StorageClass
$ cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

$ kubectl apply -f pvc-sc.yaml

10. Best Practices

10.1 Container Orchestration Best Practices

  • Use declarative configuration
  • Implement health checks
  • Set resource limits
  • Enable autoscaling
  • Use configuration management
  • Enforce network policies
  • Use persistent storage
  • Use rolling updates
  • Isolate workloads with namespaces
  • Apply security measures

10.2 Kubernetes Best Practices

# Use namespaces
$ kubectl create namespace production
$ kubectl create namespace staging
$ kubectl create namespace development

# Set resource limits
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:latest
        resources:
          requests:
            cpu: "100m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"

# Implement health checks
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:latest
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

# Use labels and annotations
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: app
    environment: production
    version: v1.0.0
  annotations:
    description: "Production deployment of app"
    maintainer: "dev@fgedu.net.cn"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        environment: production
        version: v1.0.0
    spec:
      containers:
      - name: app
        image: app:1.0.0

# Deploy applications with Helm
$ helm create myapp
$ helm install myapp ./myapp
$ helm upgrade myapp ./myapp
$ helm delete myapp

10.3 Monitoring and Logging

# Install Prometheus and Grafana
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/kube-prometheus-stack

# Install the ELK Stack
$ helm repo add elastic https://helm.elastic.co
$ helm repo update
$ helm install elk elastic/elasticsearch
$ helm install kibana elastic/kibana
$ helm install logstash elastic/logstash

# Configure log collection
$ cat filebeat.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.14.0
        args:
        - -c
        - /etc/filebeat/filebeat.yml
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: data
        emptyDir: {}
      - name: varlog
        hostPath:
          path: /var/log

# View monitoring dashboards
$ kubectl port-forward svc/prometheus-grafana 3000:80
# Open http://localhost:3000 in a browser (port-forward binds to localhost)

# View logs
$ kubectl port-forward svc/kibana 5601:5601
# Open http://localhost:5601 in a browser

Production Recommendations

  • Choose an orchestration platform that fits your needs
  • Deploy highly available clusters
  • Use declarative configuration
  • Implement health checks
  • Set resource limits
  • Enable autoscaling
  • Use configuration management
  • Enforce network policies
  • Use persistent storage
  • Use rolling updates
  • Isolate workloads with namespaces
  • Apply security measures
  • Build monitoring and alerting systems
  • Back up data regularly
  • Train the team in container orchestration skills

This article was compiled and published by Fenge Tutorials for learning and testing purposes only. Credit the source when republishing: http://www.fgedu.net.cn/10327.html
