Contents
- 1. Service Mesh Overview
- 2. Service Mesh Architecture
- 3. The Istio Service Mesh
- 4. The Linkerd Service Mesh
- 5. Consul Connect
- 6. Traffic Management
- 7. Security
- 8. Observability
- 9. Deployment and Operations
- 10. Best Practices
1. Service Mesh Overview
A service mesh is an infrastructure layer for handling service-to-service communication. In a microservice architecture it provides service discovery, load balancing, encryption, authentication, authorization, circuit breaking, rate limiting, and related capabilities. A service mesh implements these features by deploying a proxy (sidecar) alongside each application instance, decoupling business logic from network communication.
Core service mesh capabilities include:
- Service discovery and load balancing
- Traffic management and routing
- Secure communication and authentication
- Circuit breaking, rate limiting, and retries
- Observability (metrics, logs, tracing)
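Because these capabilities live in the sidecar, they are enabled declaratively rather than coded into each service. As a minimal sketch (Istio syntax; the `reviews` service and its `v1`/`v2` subsets are hypothetical examples, and section 6 covers this mechanism in detail), shifting 10% of traffic to a new version requires no application change at all:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews          # mesh-internal host, resolved by the sidecar, not the app
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # subsets would be defined in a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The application keeps calling `reviews` as before; the sidecar applies the weights.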
2. Service Mesh Architecture
2.1 Data Plane
The data plane consists of a set of intelligent proxies (sidecars) deployed next to each service instance; they handle all network traffic between services. These are typically high-performance proxies such as Envoy (used by Istio) or Linkerd's own proxy.
2.2 Control Plane
The control plane manages and configures the proxies, providing service discovery, traffic management, security policy, and related functions. It typically includes components for configuration management, certificate management, and policy enforcement.
2.3 Service Mesh Architecture Diagram
┌─────────────────────────────────────────────────────────┐
│                      Control Plane                      │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │ Config mgmt  │  │  Cert mgmt   │  │ Policy enf.  │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
└─────────────────────────────────────────────────────────┘
                            │
                            │ configuration and policy
                            ▼
┌─────────────────────────────────────────────────────────┐
│                       Data Plane                        │
│  ┌──────────────────┐        ┌──────────────────┐       │
│  │    Service A     │        │    Service B     │       │
│  │ ┌──────────────┐ │        │ ┌──────────────┐ │       │
│  │ │     App      │ │        │ │     App      │ │       │
│  │ └──────────────┘ │        │ └──────────────┘ │       │
│  │ ┌──────────────┐ │◄──────►│ ┌──────────────┐ │       │
│  │ │Sidecar proxy │ │        │ │Sidecar proxy │ │       │
│  │ └──────────────┘ │        │ └──────────────┘ │       │
│  └──────────────────┘        └──────────────────┘       │
└─────────────────────────────────────────────────────────┘
Tip: by moving network communication logic out of application code, a service mesh decouples business logic from infrastructure, which improves maintainability and scalability.
3. The Istio Service Mesh
3.1 Installing Istio
# Download Istio
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.12.0
$ export PATH=$PWD/bin:$PATH
# Install Istio
$ istioctl install --set profile=demo -y
# Output
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete
# Verify the installation
$ kubectl get pods -n istio-system
# Output
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-6b9f8c7d4d-abc12    1/1     Running   0          1m
istio-ingressgateway-7d8f9c6e5d-def34   1/1     Running   0          1m
istiod-8c9f7d6e5f-ghi56                 1/1     Running   0          1m
# Enable automatic sidecar injection
$ kubectl label namespace default istio-injection=enabled
# Deploy the sample application
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
# Verify the deployment
$ kubectl get pods
# Output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6f8b6c7d4d-abc12       2/2     Running   0          1m
productpage-v1-7d8f9c6e5d-def34   2/2     Running   0          1m
ratings-v1-8c9f7d6e5f-ghi56       2/2     Running   0          1m
reviews-v1-9d0f8e7f6g-jkl78       2/2     Running   0          1m
reviews-v2-0e1g9f8g7h-mno90       2/2     Running   0          1m
reviews-v3-1f2h0g9h8i-pqr12       2/2     Running   0          1m
3.2 Istio Traffic Management
# Create the Gateway
$ cat > gateway.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF

# Create the VirtualService
$ cat > virtualservice.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF

$ kubectl apply -f gateway.yaml
$ kubectl apply -f virtualservice.yaml

# Configure traffic splitting
$ cat > destination-rule.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
EOF

$ cat > traffic-split.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50
EOF

$ kubectl apply -f destination-rule.yaml
$ kubectl apply -f traffic-split.yaml

# View the configuration
$ kubectl get gateway
$ kubectl get virtualservice
$ kubectl get destinationrule
4. The Linkerd Service Mesh
4.1 Installing Linkerd
# Download the Linkerd CLI
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
# Verify the installation
$ linkerd version
# Output
Client version: stable-2.11.1
Server version: unavailable
# Install the Linkerd control plane
$ linkerd install | kubectl apply -f -
# Output
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
...
# Verify the installation
$ linkerd check
# Output
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
Status check results are √
# View the Linkerd components
$ kubectl get pods -n linkerd
# Output
NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-destination-6b9f8c7d4d-abc12      2/2     Running   0          1m
linkerd-identity-7d8f9c6e5d-def34         2/2     Running   0          1m
linkerd-proxy-injector-8c9f7d6e5f-ghi56   2/2     Running   0          1m
# Deploy the sample application
$ kubectl apply -f https://run.linkerd.io/emojivoto.yml
# Inject the Linkerd proxy
$ kubectl get -n emojivoto deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
# Output
deployment "emoji" injected
deployment "vote-bot" injected
deployment "voting" injected
deployment "web" injected
deployment.apps/emoji configured
deployment.apps/vote-bot configured
deployment.apps/voting configured
deployment.apps/web configured
4.2 Linkerd Traffic Management
# Create a service profile
$ cat > service-profile.yaml << EOF
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web.emojivoto.svc.cluster.local
  namespace: emojivoto
spec:
  routes:
  - name: GET /api/vote
    condition:
      method: GET
      pathRegex: /api/vote
    timeout: 100ms
    isRetryable: true
  - name: POST /api/vote
    condition:
      method: POST
      pathRegex: /api/vote
    timeout: 200ms
    isRetryable: false
EOF
$ kubectl apply -f service-profile.yaml

# Configure traffic splitting
$ cat > traffic-split.yaml << EOF
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
  namespace: emojivoto
spec:
  service: web
  backends:
  - service: web
    weight: 50
  - service: web-canary
    weight: 50
EOF
$ kubectl apply -f traffic-split.yaml

# Check the traffic split status
$ kubectl get trafficsplit -n emojivoto
# Output
NAME        SERVICE
web-split   web
5. Consul Connect
5.1 Installing Consul Connect
# Add the HashiCorp Helm repository
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
# Install Consul
$ helm install consul hashicorp/consul --set global.name=consul
# Output
NAME: consul
LAST DEPLOYED: Fri Apr 3 10:00:00 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!
# Verify the installation
$ kubectl get pods
# Output
NAME                  READY   STATUS    RESTARTS   AGE
consul-consul-abc12   1/1     Running   0          1m
consul-consul-def34   1/1     Running   0          1m
consul-consul-ghi56   1/1     Running   0          1m
# Access the Consul UI
$ kubectl port-forward service/consul-ui 8500:80
# Open in a browser:
# http://localhost:8500
5.2 Consul Connect Service Configuration
# Define the service
$ cat > web-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    "consul.hashicorp.com/connect-service": "web"
    "consul.hashicorp.com/connect-service-port": "8080"
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service": "web"
        "consul.hashicorp.com/connect-service-port": "8080"
    spec:
      containers:
      - name: web
        image: nginx:latest
        ports:
        - containerPort: 8080
EOF
$ kubectl apply -f web-service.yaml

# Define service intentions
$ cat > service-intentions.yaml << EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: web
spec:
  destination:
    name: web
  sources:
  - name: api
    action: allow
  - name: frontend
    action: allow
EOF
$ kubectl apply -f service-intentions.yaml

# Define service defaults
$ cat > service-defaults.yaml << EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
spec:
  protocol: http
  meshGateway:
    mode: local
  expose:
    checks: true
EOF
$ kubectl apply -f service-defaults.yaml
6. Traffic Management
6.1 Traffic Routing
# Content-based routing
$ cat > content-based-routing.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
EOF

# Weight-based routing
$ cat > weight-based-routing.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF

# Fault injection
$ cat > fault-injection.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - fault:
      delay:
        percentage:
          value: 100
        fixedDelay: 7s
      abort:
        percentage:
          value: 100
        httpStatus: 500
    route:
    - destination:
        host: reviews
        subset: v1
EOF

# Note: all three files define a VirtualService named "reviews", so each
# apply overwrites the previous one; apply whichever rule you want to test.
$ kubectl apply -f content-based-routing.yaml
$ kubectl apply -f weight-based-routing.yaml
$ kubectl apply -f fault-injection.yaml
6.2 Traffic Management Best Practices
- Use progressive delivery strategies
- Configure sensible timeouts and retries
- Implement circuit breaking and rate limiting
- Test resilience with fault injection
- Monitor traffic distribution
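The circuit breaking and rate limiting bullet above maps to Envoy's connection pool and outlier detection settings, configured through a DestinationRule. A minimal sketch in Istio syntax (the `reviews` host and all threshold values are illustrative examples, not tuned recommendations):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:            # caps concurrency (a coarse form of rate limiting)
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
    outlierDetection:          # circuit breaking: eject endpoints that keep failing
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50
```

Requests beyond the pool limits fail fast instead of queueing, and hosts that return five consecutive 5xx responses are removed from the load-balancing pool for the ejection period.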
Tip: traffic management is the core function of a service mesh; routing rules, timeouts, retries, and circuit-breaking policies all need to be configured deliberately.
7. Security
7.1 mTLS Configuration
# Enable strict mTLS mode mesh-wide
$ cat > strict-mtls.yaml << EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
$ kubectl apply -f strict-mtls.yaml

# Check the mTLS status
$ kubectl get peerauthentication -n istio-system
# Output
NAME      AGE
default   1m

# Configure service-level mTLS
$ cat > service-mtls.yaml << EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: reviews
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  mtls:
    mode: STRICT
  portLevelMtls:
    9080:
      mode: PERMISSIVE
EOF
$ kubectl apply -f service-mtls.yaml

# Verify the mTLS configuration (the old `istioctl authn tls-check` command
# was removed before Istio 1.12; `istioctl x describe` reports the effective
# mTLS mode for a pod)
$ istioctl x describe pod reviews-v1-6f8b6c7d4d-abc12
7.2 Authorization Policies
# Create an authorization policy
$ cat > authorization-policy.yaml << EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-policy
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-reviews"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/reviews/*"]
    when:
    - key: request.headers[x-token]
      values: ["valid-token"]
EOF
$ kubectl apply -f authorization-policy.yaml

# Create a deny policy
$ cat > deny-policy.yaml << EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-policy
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["test"]
    to:
    - operation:
        methods: ["POST", "PUT", "DELETE"]
EOF
$ kubectl apply -f deny-policy.yaml

# View the authorization policies
$ kubectl get authorizationpolicy -n default
# Output
NAME             AGE
reviews-policy   1m
deny-policy      1m
8. Observability
8.1 Metrics
# Port-forward Prometheus
$ kubectl port-forward -n istio-system service/prometheus 9090:9090
# Open the Prometheus UI in a browser:
# http://localhost:9090
# Useful queries:
# Request success rate
sum(rate(istio_requests_total{response_code="200"}[5m])) / sum(rate(istio_requests_total[5m]))
# P99 request latency
histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le))
# Request volume
sum(rate(istio_requests_total[5m])) by (destination_service)
# Error rate
sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_service)
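Queries like these are also the natural basis for alerting. As a sketch, the error-rate query above can be turned into a Prometheus alerting rule (the group name, the 5% threshold, and the labels here are arbitrary examples to adapt):

```yaml
groups:
- name: mesh-alerts        # hypothetical rule group name
  rules:
  - alert: HighErrorRate
    # ratio of 5xx responses per destination service over the last 5 minutes
    expr: |
      sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_service)
        / sum(rate(istio_requests_total[5m])) by (destination_service) > 0.05
    for: 5m                # require the condition to hold before firing
    labels:
      severity: warning
    annotations:
      summary: "5xx rate above 5% for {{ $labels.destination_service }}"
```

This file would be loaded via the `rule_files` section of the Prometheus configuration.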
# Install Grafana
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/addons/grafana.yaml
# Access Grafana
$ kubectl port-forward -n istio-system service/grafana 3000:3000
# Open in a browser:
# http://localhost:3000
8.2 Distributed Tracing
# Install Jaeger
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/addons/jaeger.yaml
# Access the Jaeger UI
$ kubectl port-forward -n istio-system service/tracing 16686:80
# Open in a browser:
# http://localhost:16686
# Configure the trace sampling rate
$ cat > telemetry.yaml << EOF
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: default-tracing
  namespace: istio-system
spec:
  tracing:
  - randomSamplingPercentage: 100.00
EOF
$ kubectl apply -f telemetry.yaml
# Inspect tracing stats via the Envoy admin endpoint inside a pod
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl http://localhost:15000/stats | grep tracing
# Output
cluster.inbound|9080||ratings.default.svc.cluster.local.tracing.client_enabled: 1
cluster.inbound|9080||ratings.default.svc.cluster.local.tracing.service_enabled: 1
8.3 Access Logs
# Enable access logging
$ cat > access-logging.yaml << EOF
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: default-logging
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
EOF
$ kubectl apply -f access-logging.yaml

# View the access logs
$ kubectl logs -l app=reviews -c istio-proxy
# Output
[2026-04-03T10:00:00.000Z] "GET /reviews/0 HTTP/1.1" 200 - via_upstream - "-" 0 295 4 4 "-" "Mozilla/5.0" "abc123-def456-ghi789" "reviews:9080" "10.0.0.1:9080" inbound|9080|| 127.0.0.1:54321 10.0.0.1:9080 10.0.0.2:12345 outbound|9080||reviews.default.svc.cluster.local default
9. Deployment and Operations
9.1 Deployment Strategies
- Single-cluster deployment: suitable for small applications
- Multi-cluster deployment: suitable for cross-region applications
- Hybrid-cloud deployment: suitable for mixed on-premises and cloud environments
9.2 Management Tools
# View proxy status
$ istioctl proxy-status
# Output
NAME                                      CLUSTER      CDS      LDS      EDS      RDS      ISTIOD                    VERSION
details-v1-6f8b6c7d4d-abc12.default       Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-8c9f7d6e5f-ghi56   1.12.0
productpage-v1-7d8f9c6e5d-def34.default   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-8c9f7d6e5f-ghi56   1.12.0
# View a proxy's cluster configuration
$ istioctl proxy-config cluster details-v1-6f8b6c7d4d-abc12
# View a proxy's listeners
$ istioctl proxy-config listener details-v1-6f8b6c7d4d-abc12
# View a proxy's routes
$ istioctl proxy-config route details-v1-6f8b6c7d4d-abc12
# Analyze the configuration
$ istioctl analyze
# Output
✔ No validation issues found when analyzing namespace: default.
# Upgrade Istio
$ istioctl upgrade
# Uninstall Istio
$ istioctl x uninstall --purge
10. Best Practices
10.1 General Service Mesh Best Practices
- Choose a service mesh implementation that fits your needs
- Plan resource quotas appropriately
- Configure security policies
- Build a comprehensive monitoring system
- Back up configuration regularly
- Roll out changes progressively
- Train team members
- Maintain documentation and a knowledge base
10.2 Traffic Management Best Practices
- Use progressive delivery strategies
- Configure sensible timeouts and retries
- Implement circuit breaking and rate limiting
- Test resilience with fault injection
- Monitor traffic distribution
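The timeout-and-retry bullet above can be expressed directly in routing configuration. A minimal Istio sketch (the `reviews` host and the specific values are illustrative; tune them to each service's latency profile):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-timeouts
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 10s            # overall deadline for the whole request
    retries:
      attempts: 3
      perTryTimeout: 2s     # each attempt must fit inside the overall timeout
      retryOn: 5xx,connect-failure
```

Keeping `attempts × perTryTimeout` below the overall `timeout` ensures retries are actually attempted rather than cut off by the outer deadline.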
10.3 Security Best Practices
- Enable mTLS encryption
- Configure authorization policies
- Rotate certificates regularly
- Monitor security events
- Audit security configuration regularly
10.4 Observability Best Practices
- Build a comprehensive monitoring system
- Configure distributed tracing
- Collect access logs
- Set up alerting rules
- Analyze the data regularly
This article was compiled and published by Fengge Tutorials (风哥教程) for learning and testing purposes only. When reposting, credit the source: http://www.fgedu.net.cn/10327.html
