
IT Tutorial FG426 - Cloud-Native Architecture in Practice

1. Overview of Cloud-Native Architecture

Cloud-native architecture is an approach to designing and building applications that makes full use of cloud services to achieve high elasticity, high availability, and scalability.

# Characteristics of cloud-native architecture
1. Containerization: isolate and standardize applications using container technology
2. Microservices: split the application into independent service components
3. Dynamic orchestration: manage containers with tools such as Kubernetes
4. Declarative APIs: define the application's desired state through configuration files
5. Automation: continuous integration and continuous deployment
6. Elastic scaling: adjust resources automatically based on load
7. Service mesh: intelligent communication between services
8. Observability: monitor and analyze system state in real time
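Characteristics 4 and 6 above (declarative APIs and elastic scaling) come together in a HorizontalPodAutoscaler: you declare the desired scaling behavior and the control plane reconciles the replica count toward it. A minimal sketch; the target Deployment name `web-app` and the thresholds are assumptions for illustration:

```yaml
# Hypothetical example: declaratively describe scaling behavior;
# the control plane adjusts replicas toward this desired state.
apiVersion: autoscaling/v2beta2   # "autoscaling/v2" on Kubernetes >= 1.23
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa               # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                 # assumed target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out above 70% average CPU
```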

2. Cloud-Native Architecture Principles

Cloud-native architecture follows a set of core principles that ensure the system's reliability, scalability, and maintainability.

# Cloud-native architecture principles
1. Service orientation: split the application into independent services
2. Containerization: package the application and its dependencies in containers
3. Elasticity: the system adapts automatically to load changes
4. Automation: reduce manual intervention and improve efficiency
5. Observability: monitor system state comprehensively
6. Declarative configuration: define system state through configuration files
7. Immutable infrastructure: once deployed, infrastructure is replaced rather than modified
8. Continuous delivery: fast, reliable software delivery

3. The Cloud-Native Technology Stack

3.1 Core Technology Components

# Cloud-native technology stack
1. Container runtimes: Docker, containerd
2. Orchestration: Kubernetes, OpenShift
3. Service mesh: Istio, Linkerd
4. Storage: Rook, Longhorn
5. Networking: Calico, Flannel
6. Monitoring: Prometheus, Grafana
7. Logging: ELK Stack, Loki
8. CI/CD: Jenkins, GitLab CI
9. Configuration management: Helm, Kustomize
10. Security: Falco, Trivy

4. Kubernetes in Practice

4.1 Deploying a Kubernetes Cluster

# Deploy a Kubernetes cluster with kubeadm
# Install kubeadm, kubelet, and kubectl
$ apt-get update && apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl

# Initialize the control-plane node
$ kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the network plugin (Flannel)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Join worker nodes (substitute the token and hash printed by kubeadm init)
$ kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Output:
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[apiclient] All control plane components are healthy after 39.508055 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master="
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml"

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef

4.2 Kubernetes Resource Configuration

# Create a Deployment
$ cat > nginx-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
EOF
$ kubectl apply -f nginx-deployment.yaml

# Create a Service
$ cat > nginx-service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
EOF
$ kubectl apply -f nginx-service.yaml

# Check resource status
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
Output:
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           5m

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1       <none>        443/TCP        1h
nginx-service   LoadBalancer   10.102.192.81   <pending>     80:30080/TCP   2m

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5d6f7f8b69-7q4k2   1/1     Running   0          5m
nginx-deployment-5d6f7f8b69-8q7k3   1/1     Running   0          5m
nginx-deployment-5d6f7f8b69-9q8k4   1/1     Running   0          5m
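With the Deployment and Service above in place, HTTP traffic can also be routed by host name through an Ingress instead of a LoadBalancer. A minimal sketch; the Ingress class `nginx` and the host `demo.example.com` are assumptions, and an ingress controller must already be installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx        # assumed ingress controller class
  rules:
  - host: demo.example.com       # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service  # the Service created above
            port:
              number: 80
```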

5. Service Mesh in Practice

5.1 Deploying the Istio Service Mesh

# Download and install Istio
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.10.0
$ export PATH=$PWD/bin:$PATH

# Install Istio
$ istioctl install --set profile=demo -y

# Enable automatic sidecar injection for the default namespace
$ kubectl label namespace default istio-injection=enabled

# Deploy the sample application
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Check deployment status
$ kubectl get pods
$ kubectl get services

# Access the application
$ kubectl port-forward svc/productpage 9080:9080

Output:
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-25c4b       2/2     Running   0          2m
productpage-v1-6987489c74-7w7bg   2/2     Running   0          2m
ratings-v1-7dc98c7588-6j7qw       2/2     Running   0          2m
reviews-v1-545db77b95-7j7zs       2/2     Running   0          2m
reviews-v2-7bf8c9648f-4x74q       2/2     Running   0          2m
reviews-v3-84779c7bbc-6752q       2/2     Running   0          2m

$ kubectl get services
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.105.241.19   <none>        9080/TCP   2m
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    2h
productpage   ClusterIP   10.103.90.91    <none>        9080/TCP   2m
ratings       ClusterIP   10.105.183.6    <none>        9080/TCP   2m
reviews       ClusterIP   10.111.195.68   <none>        9080/TCP   2m
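The three `reviews` versions above are where Istio's traffic management becomes useful: a VirtualService plus DestinationRule can split traffic between versions by weight. A sketch (the subsets follow the Bookinfo sample's `version` labels; the 90/10 weights are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90            # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10            # 10% canary traffic to v2
```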

6. Serverless in Practice

6.1 Deploying and Using Knative

# Install Knative
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.23.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.23.0/serving-core.yaml

# Install the networking layer (Contour)
$ kubectl apply -f https://github.com/knative/net-contour/releases/download/v0.23.0/contour.yaml
$ kubectl apply -f https://github.com/knative/net-contour/releases/download/v0.23.0/net-contour.yaml

# Deploy a serverless application
$ cat > hello-world.yaml << 'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "World"
EOF
$ kubectl apply -f hello-world.yaml

# Check service status
$ kubectl get ksvc
$ kubectl get pods

Output:
$ kubectl get ksvc
NAME          URL                                       LATESTCREATED       LATESTREADY         READY   REASON
hello-world   http://hello-world.default.fgedu.net.cn   hello-world-00001   hello-world-00001   True

$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
hello-world-00001-deployment-7c7c8f4f5c   2/2     Running   0          1m
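Knative creates a new immutable revision each time the Service spec changes, and the `traffic` block can then split requests between revisions. A sketch of a canary rollout; the second revision name `hello-world-00002` is an assumption (Knative numbers revisions sequentially, but verify with `kubectl get revisions`):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Knative"              # changed value triggers a new revision
  traffic:
  - revisionName: hello-world-00001
    percent: 80                         # keep 80% on the old revision
  - revisionName: hello-world-00002     # assumed new revision name
    percent: 20                         # 20% canary traffic
```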


7. Cloud-Native CI/CD

7.1 GitLab CI/CD Configuration

# Create the .gitlab-ci.yml file
$ cat > .gitlab-ci.yml << 'EOF'
stages:
  - build
  - test
  - deploy

variables:
  DOCKER_REGISTRY: "registry.fgedu.net.cn"
  APP_NAME: "fgedu-web"

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD $DOCKER_REGISTRY
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_COMMIT_SHA .
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_COMMIT_SHA
  only:
    - main

test:
  stage: test
  image: python:3.9
  script:
    - pip install -r requirements.txt
    - pytest -v
  only:
    - main

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context production
    - kubectl set image deployment/$APP_NAME $APP_NAME=$DOCKER_REGISTRY/$APP_NAME:$CI_COMMIT_SHA
    - kubectl rollout status deployment/$APP_NAME
  only:
    - main
  environment:
    name: production
EOF
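The `deploy` job above assumes a Deployment named `fgedu-web` already exists in the cluster, with a container of the same name for `kubectl set image` to patch. A minimal sketch of such a manifest; the initial image tag and container port are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fgedu-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fgedu-web
  template:
    metadata:
      labels:
        app: fgedu-web
    spec:
      containers:
      - name: fgedu-web      # must match the name used in `kubectl set image`
        image: registry.fgedu.net.cn/fgedu-web:initial   # placeholder tag
        ports:
        - containerPort: 8080                            # assumed app port
```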

8. Observability in Practice

8.1 Prometheus + Grafana Monitoring

# Install Prometheus
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm install prometheus prometheus-community/prometheus

# Install Grafana
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm install grafana grafana/grafana

# Configure the Prometheus data source via the Grafana UI
$ kubectl port-forward svc/grafana 3000:80

# Access Grafana
# http://localhost:3000
# Username: admin
# Password: kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode

# Import a Kubernetes dashboard
# Dashboard ID: 1860
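Dashboards cover visualization; alerting closes the loop. A sketch of a Prometheus alerting rule on node CPU (the group name, threshold, and labels are illustrative, and how the rule file is loaded depends on your chart values):

```yaml
groups:
- name: node-alerts                # hypothetical rule group
  rules:
  - alert: HighNodeCPU
    # average CPU usage across all cores above 80% for 5 minutes
    expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "High CPU on {{ $labels.instance }}"
```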

Output:
$ helm install prometheus prometheus-community/prometheus
NAME: prometheus
LAST DEPLOYED: Fri Apr 3 10:00:00 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local

$ helm install grafana grafana/grafana
NAME: grafana
LAST DEPLOYED: Fri Apr 3 10:05:00 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get your 'admin' user password by running:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
grafana.default.svc.cluster.local
3. Get the Grafana URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000

9. Cloud-Native Security

9.1 Falco Security Monitoring

# Install Falco
$ helm repo add falcosecurity https://falcosecurity.github.io/charts
$ helm install falco falcosecurity/falco

# Check Falco status
$ kubectl get pods

# View Falco logs
$ kubectl logs -f deployment/falco

# Trigger a test alert from inside any running pod
$ kubectl exec -it <pod-name> -- /bin/bash
$ touch /etc/shadow

Output:
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
falco-5d6f7f8b69-7q4k2   1/1     Running   0          5m

$ kubectl logs -f deployment/falco
Mon Apr 3 10:10:00 2026: Falco version 0.30.0 (driver version 5.0.1)
Mon Apr 3 10:10:00 2026: Falco initialized with configuration file /etc/falco/falco.yaml
Mon Apr 3 10:10:00 2026: Loading rules from file /etc/falco/falco_rules.yaml
Mon Apr 3 10:10:00 2026: Loading rules from file /etc/falco/falco_rules.local.yaml
Mon Apr 3 10:15:00 2026: Notice A shell was spawned in a container with an attached terminal (user=root container=falco-5d6f7f8b69-7q4k2 shell=bash parent=runc cmdline=bash)
Mon Apr 3 10:15:30 2026: Critical File below /etc opened for writing (user=root container=falco-5d6f7f8b69-7q4k2 file=/etc/shadow command=touch /etc/shadow)
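Alerts like the two above come from Falco's rule files, and custom rules use the same YAML grammar. A sketch of a rule that fires when a file under /root is opened for writing (the rule name and tags are hypothetical; the condition uses Falco's standard field names, but tune it before production use):

```yaml
- rule: Write below root home        # hypothetical custom rule
  desc: Detect any file opened for writing under /root
  condition: >
    evt.type in (open, openat)
    and evt.is_open_write=true
    and fd.name startswith /root
  output: >
    File below /root opened for writing
    (user=%user.name command=%proc.cmdline file=%fd.name)
  priority: WARNING
  tags: [filesystem]
```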

10. Best Practices

Production environment recommendations:
- Use multi-region deployment for higher availability
- Apply the principle of least privilege
- Run security scans regularly
- Build a complete monitoring and alerting system
- Adopt immutable infrastructure
- Automate the CI/CD pipeline
- Rehearse disaster recovery regularly
- Continuously optimize resource utilization

10.1 Cloud-Native Architecture Checklist

# Cloud-native architecture checklist
1. Containerization
- [ ] Application containerization
- [ ] Image management
- [ ] Container security

2. Orchestration
- [ ] Kubernetes cluster deployment
- [ ] Resource configuration
- [ ] Autoscaling

3. Service architecture
- [ ] Microservice decomposition
- [ ] Service discovery
- [ ] Load balancing

4. Observability
- [ ] Metrics monitoring
- [ ] Log management
- [ ] Distributed tracing

5. Security
- [ ] Network security
- [ ] Application security
- [ ] Data security

6. Automation
- [ ] CI/CD pipeline
- [ ] Configuration management
- [ ] Infrastructure as code

Tip: cloud-native architecture is a continuously evolving process; it needs ongoing optimization and adjustment as business requirements and technology develop.


author:www.itpux.com

Compiled and published by 风哥教程 for learning and testing purposes only. When reposting, please credit the source: http://www.fgedu.net.cn/10327.html
