
Service Mesh Downloads – Linkerd Download Links and Installation Methods

1. Introduction to Linkerd and Version Notes

Linkerd is an open-source, lightweight service mesh designed for Kubernetes. It is known for simplicity, low overhead, and high performance, and provides zero-configuration mTLS, observability, and reliability features.

Latest version information:

Linkerd 2.15.x – latest stable release (edge-24.3.x)

Linkerd 2.14.x – long-term support release (stable-2.14.x)

Linkerd 2.13.x – maintenance release

Linkerd 2.12.x – legacy support

Production recommendation: choose Linkerd 2.14.x or 2.15.x for production deployments; these versions are well tested and have long support cycles. Linkerd is lighter-weight than Istio and well suited to performance-sensitive scenarios.
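Release artifacts follow a predictable naming pattern, which the download commands later in this article rely on. A hypothetical helper that builds the GitHub release URL for a given version string and platform (the naming scheme is taken from the wget command shown below; the function name is illustrative):

```shell
# Build the GitHub release URL for a Linkerd CLI version string
# such as "stable-2.14.10" (platform defaults to linux-amd64).
linkerd_cli_url() {
  local version="$1" platform="${2:-linux-amd64}"
  echo "https://github.com/linkerd/linkerd2/releases/download/${version}/linkerd2-cli-${version}-${platform}.tar.gz"
}

linkerd_cli_url stable-2.14.10
```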

2. Linkerd Download Methods

Linkerd can be obtained in several ways, including the official install script, binary packages, and Helm charts.

Method 1: Official Install Script

# Use the official install script
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

Example output:
Downloading linkerd2-cli-stable-2.14.10-linux-amd64.tar.gz…
Linkerd 2.14.10 was successfully downloaded 🎉

Add the linkerd CLI to your path with:
export PATH=$PATH:/root/.linkerd2/bin

Now run:
linkerd check --pre

to validate that your Kubernetes cluster is ready for Linkerd.

# Add to PATH
$ export PATH=$PATH:$HOME/.linkerd2/bin

# Or add permanently
$ echo 'export PATH=$PATH:$HOME/.linkerd2/bin' >> ~/.bashrc
$ source ~/.bashrc
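Re-running the installer appends the export line again each time. A small sketch that keeps the rc file clean by appending only when the line is missing (the helper name is illustrative):

```shell
# Append the Linkerd PATH export only if the rc file does not
# already contain it, so re-running the install stays idempotent.
add_linkerd_path() {
  local rcfile="$1"
  local line='export PATH=$PATH:$HOME/.linkerd2/bin'
  grep -qxF "$line" "$rcfile" 2>/dev/null || echo "$line" >> "$rcfile"
}

rc=$(mktemp)
add_linkerd_path "$rc"
add_linkerd_path "$rc"            # second call is a no-op
grep -c 'linkerd2/bin' "$rc"      # -> 1
```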

# Verify the installation
$ linkerd version

Example output:
Client version: stable-2.14.10
Server version: unavailable

Method 2: Binary Package Download

# Visit the Linkerd GitHub releases page
# https://github.com/linkerd/linkerd2/releases

# Download the Linkerd 2.14.10 CLI
$ wget https://github.com/linkerd/linkerd2/releases/download/stable-2.14.10/linkerd2-cli-stable-2.14.10-linux-amd64.tar.gz

# Extract the archive
$ tar -xzf linkerd2-cli-stable-2.14.10-linux-amd64.tar.gz

# Move the binary to a system directory
$ sudo mv linkerd2-cli-stable-2.14.10-linux-amd64/linkerd /usr/local/bin/

# Verify the installation
$ linkerd version

Example output:
Client version: stable-2.14.10
Server version: unavailable
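When downloading a binary manually, it is good practice to verify it against a published SHA-256 digest before installing (the release page publishes checksum files alongside each asset; the digest below is computed locally just to demonstrate the check, and the helper name is illustrative):

```shell
# Compare a file's SHA-256 digest against an expected value.
verify_sha256() {
  local file="$1" expected="$2"
  local actual
  actual=$(sha256sum "$file" | awk '{ print $1 }')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH"
  fi
}

printf 'demo' > /tmp/demo.tar.gz
verify_sha256 /tmp/demo.tar.gz "$(sha256sum /tmp/demo.tar.gz | awk '{ print $1 }')"   # -> checksum OK
```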

# Show help information
$ linkerd --help

Example output:
linkerd manages the Linkerd service mesh.

Usage:
  linkerd [command]

Available Commands:
  completion    Generate the autocompletion script for the specified shell
  diagnostics   Commands used to diagnose Linkerd components
  help          Help about any command
  inject        Add the Linkerd proxy to a Kubernetes config
  install       Output Kubernetes configs to install Linkerd
  install-cni   Output Kubernetes configs to install the Linkerd CNI plugin
  jaeger        jaeger manages the jaeger extension of Linkerd service mesh
  multicluster  Manages the multicluster setup for Linkerd
  profile       Output service profile config for Kubernetes
  repair        Output the secret/linkerd-config-overrides resource if it exists
  uninject      Remove the Linkerd proxy from a Kubernetes config
  uninstall     Output Kubernetes resources to uninstall Linkerd control plane
  upgrade       Output Kubernetes configs to upgrade an existing Linkerd control plane
  version       Print the client and server versions

Flags:
      --api-addr string            Override kubeconfig and communicate directly with the control plane at host:port (mostly for testing)
      --as string                  Username to impersonate for Kubernetes operations
      --as-group stringArray       Group to impersonate for Kubernetes operations
      --as-uid string              UID to impersonate for Kubernetes operations
      --cni-namespace string       Namespace in which the Linkerd CNI plugin is installed (default "linkerd-cni")
      --context string             Name of the kubeconfig context to use
  -h, --help                       help for linkerd
      --kubeconfig string          Path to the kubeconfig file to use for CLI requests
  -L, --linkerd-namespace string   Namespace in which Linkerd is installed (default "linkerd")
      --verbose                    Turn on debug logging

Method 3: Helm Chart Deployment

# Add the Linkerd Helm repository
$ helm repo add linkerd https://helm.linkerd.io/stable

Example output:
"linkerd" has been added to your repositories

# Add the Linkerd edge repository
$ helm repo add linkerd-edge https://helm.linkerd.io/edge

# Update the repositories
$ helm repo update

Example output:
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "linkerd" chart repository
Update Complete. ⎈Happy Helming!⎈

# Search the Linkerd charts
$ helm search repo linkerd

Example output:
NAME                           CHART VERSION   APP VERSION   DESCRIPTION
linkerd/linkerd-control-plane  1.16.0          2.14.10       Install the Linkerd control-plane
linkerd/linkerd-crds           1.8.0           2.14.10       Install the Linkerd CRDs
linkerd/linkerd-viz            30.12.0         2.14.10       Install the Linkerd-viz extension
linkerd/linkerd2-cni           30.7.0          2.14.10       Install the Linkerd CNI plugin

# Pull the charts
$ helm pull linkerd/linkerd-crds --version 1.8.0
$ helm pull linkerd/linkerd-control-plane --version 1.16.0
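For production Helm installs, the control plane is usually deployed in high-availability mode. A minimal illustrative values fragment (field names follow the linkerd-control-plane chart's bundled values-ha.yaml at the time of writing; verify them against your chart version):

```yaml
# ha-values.yaml -- illustrative HA overrides, check your chart version
controllerReplicas: 3          # run three control-plane replicas
enablePodAntiAffinity: true    # spread replicas across nodes
proxy:
  resources:
    cpu:
      request: 100m
    memory:
      request: 20Mi
```

This would be passed with `-f ha-values.yaml` when installing the linkerd-control-plane chart (which also requires the CRDs chart and identity certificates).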

3. Installing Linkerd

Linkerd installation starts with pre-checks to confirm that the Kubernetes cluster meets the requirements.

Step 1: Pre-checks

# Check the Kubernetes version
$ kubectl version --short

Example output:
Client Version: v1.29.0
Server Version: v1.29.0

# Check node status
$ kubectl get nodes

Example output:
NAME           STATUS   ROLES           AGE   VERSION
fgedu-node01   Ready    control-plane   10d   v1.29.0
fgedu-node02   Ready    <none>          10d   v1.29.0
fgedu-node03   Ready    <none>          10d   v1.29.0

# Run the pre-checks
$ linkerd check --pre

Example output:
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ can read kube-system extension-apiserver-authentication configmap
√ can create CustomResourceDefinitions

pre-kubernetes-capability
-------------------------
√ has NET_ADMIN capability
√ has NET_RAW capability

pre-linkerd-global-resources
----------------------------
√ no ClusterRoles exist
√ no ClusterRoleBindings exist
√ no CustomResourceDefinitions exist
√ no MutatingWebhookConfigurations exist
√ no ValidatingWebhookConfigurations exist
√ no PodSecurityPolicies exist

Status check results are √

Step 2: Install the Linkerd Control Plane

# Generate the install manifest
$ linkerd install > linkerd-install.yaml

# Inspect the manifest
$ head -50 linkerd-install.yaml

Example output:

---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    linkerd.io/inject: disabled
  labels:
    config.linkerd.io/admission-webhooks: disabled
    linkerd.io/is-control-plane: "true"
    pod-security.kubernetes.io/enforce: privileged
  name: linkerd
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.11.3
  labels:
    linkerd.io/extension: linkerd-identity
  name: serviceprofiles.linkerd.io

# Install Linkerd
$ linkerd install | kubectl apply -f -

Example output:
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
serviceaccount/linkerd-sp-validator created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy-validator created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-policy-validator created
serviceaccount/linkerd-destination created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
configmap/linkerd-identity-trust-roots created
secret/linkerd-identity-issuer created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-proxy-injector created
service/linkerd-sp-validator created
deployment.apps/linkerd-proxy-injector created
deployment.apps/linkerd-sp-validator created
service/linkerd-destination created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created

Step 3: Verify the Installation

# Wait for the components to become ready
$ linkerd check

Example output:
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are matching
√ trust anchors are valid for at least 60 days
√ trust anchors are valid for at least 365 days
√ identity cert is valid for at least 60 days
√ identity cert is valid for at least 365 days
√ identity cert is trusted by trust anchors

linkerd-webhooks-and-apisvc
---------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

# List the installed components
$ kubectl get pods -n linkerd

Example output:
NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-destination-6b9d7c8d5d-abc12      4/4     Running   0          2m
linkerd-identity-7c8d9e6f5g-hij34         2/2     Running   0          2m
linkerd-proxy-injector-8d9e0f1g6h-klm56   2/2     Running   0          2m

# List the services
$ kubectl get svc -n linkerd

Example output:
NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
linkerd-destination         ClusterIP   10.96.0.100   <none>        8086/TCP,8090/TCP   2m
linkerd-identity            ClusterIP   10.96.0.101   <none>        8080/TCP,9090/TCP   2m
linkerd-identity-headless   ClusterIP   None          <none>        8080/TCP            2m
linkerd-proxy-injector      ClusterIP   10.96.0.102   <none>        443/TCP             2m

4. Linkerd Configuration

Linkerd can be customized through configuration files and command-line flags.

Configuration File Example

# Create a custom configuration file
$ cat > linkerd-config.yaml << EOF
proxy:
  resources:
    cpu:
      request: 100m
      limit: 1000m
    memory:
      request: 128Mi
      limit: 512Mi
proxyInit:
  resources:
    cpu:
      request: 10m
      limit: 100m
    memory:
      request: 10Mi
      limit: 50Mi
identity:
  issuer:
    scheme: linkerd.io/tls
installNamespace: false
namespace: linkerd
controlPlaneReplicas: 3
policyController:
  resources:
    cpu:
      request: 100m
      limit: 1000m
    memory:
      request: 128Mi
      limit: 512Mi
destination:
  resources:
    cpu:
      request: 100m
      limit: 1000m
    memory:
      request: 128Mi
      limit: 512Mi
identityProxyResources: &idpResources
  cpu:
    request: 100m
    limit: 1000m
  memory:
    request: 128Mi
    limit: 512Mi
proxyInjectorResources: *idpResources
spValidatorResources: *idpResources
debugContainer:
  image: cr.l5d.io/linkerd/debug:stable-2.14.10
EOF

# Install with the custom configuration
$ linkerd install -f linkerd-config.yaml | kubectl apply -f -

Example output:
namespace/linkerd unchanged
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity configured
...
Production recommendations: set resource limits so sidecars do not consume excessive resources; enable high-availability mode with multiple control-plane replicas; use an external certificate manager for certificate rotation; configure log levels to ease troubleshooting.

5. Linkerd Sidecar Injection

Linkerd adds mesh functionality by injecting a sidecar proxy; both automatic and manual injection are supported. Tip: automatic injection is recommended for production.

Method 1: Automatic Injection per Namespace

# Enable automatic injection for a namespace
$ kubectl annotate namespace default linkerd.io/inject=enabled

Example output:
namespace/default annotated
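The same annotation can also be managed declaratively instead of via kubectl annotate; a minimal namespace manifest sketch equivalent to the command above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    linkerd.io/inject: enabled   # proxy-injector webhook adds sidecars to new pods here
```

Note that injection only affects pods created after the annotation is set; existing pods must be restarted (for example with kubectl rollout restart) to receive the sidecar.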

# Verify the annotation
$ kubectl get namespace default -o yaml | grep -A2 annotations

Example output:
  annotations:
    linkerd.io/inject: enabled
    kubectl.kubernetes.io/last-applied-configuration: |
  creationTimestamp: "2026-03-15T10:00:00Z"

# Deploy a test application
$ cat > nginx-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF

$ kubectl apply -f nginx-deploy.yaml

Example output:
deployment.apps/nginx-test created

# List the pods (each now includes the sidecar)
$ kubectl get pods

Example output:
NAME                      READY   STATUS    RESTARTS   AGE
nginx-test-abc123-def45   2/2     Running   0          30s
nginx-test-ghi789-jkl01   2/2     Running   0          30s

# Inspect a pod
$ kubectl describe pod nginx-test-abc123-def45 | grep -A5 Containers

Example output:
Containers:
  nginx:
    Container ID:  containerd://abc123
    Image:         nginx:alpine
    Image ID:      docker.io/library/nginx@sha256:def456
    Port:          80/TCP
    Host Port:     0/TCP
  linkerd-proxy:
    Container ID:  containerd://ghi789
    Image:         cr.l5d.io/linkerd/proxy:stable-2.14.10
    Image ID:      cr.l5d.io/linkerd/proxy@sha256:jkl012

Method 2: Manual Injection

# Inject the sidecar manually
$ linkerd inject nginx-deploy.yaml | kubectl apply -f -

Example output:
Deployment "nginx-test" injected
deployment.apps/nginx-test configured

# Inspect the injected configuration
$ linkerd inject --manual nginx-deploy.yaml

Example output:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      annotations:
        linkerd.io/created-by: linkerd/cli stable-2.14.10
        linkerd.io/identity-mode: default
        linkerd.io/proxy-version: stable-2.14.10
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
      - name: linkerd-proxy
        image: cr.l5d.io/linkerd/proxy:stable-2.14.10
        args:
        - --control-plane-namespace
        - linkerd
        - --proxy-log-level
        - warn,linkerd=info
        - --proxy-version
        - stable-2.14.10
        env:
        - name: LINKERD2_PROXY_LOG
          value: warn,linkerd=info
        - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
          value: linkerd-destination.linkerd.svc.cluster.local:8086
        - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
          value: 0.0.0.0:4190
        - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
          value: 0.0.0.0:4191
        - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
          value: 127.0.0.1:4140
        - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
          value: 0.0.0.0:4143
        - name: LINKERD2_PROXY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LINKERD2_PROXY_IDENTITY_DIR
          value: /var/run/linkerd/identity/end-entity
        - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
          value: |
            -----BEGIN CERTIFICATE-----

            -----END CERTIFICATE-----
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /live
            port: 4191
          initialDelaySeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 4191
          initialDelaySeconds: 2
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 2102
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /var/run/linkerd/identity/end-entity
          name: linkerd-identity-end-entity
      initContainers:
      - name: linkerd-init
        image: cr.l5d.io/linkerd/proxy-init:v2.3.0
        args:
        - --incoming-proxy-port
        - "4143"
        - --outgoing-proxy-port
        - "4140"
        - --proxy-uid
        - "2102"
        - --inbound-ports-to-ignore
        - 4190,4191,4567,4568
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
        terminationMessagePolicy: FallbackToLogsOnError
      volumes:
      - name: linkerd-identity-end-entity
        emptyDir:
          medium: Memory
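The proxy-init arguments above determine which traffic passes through the sidecar: all inbound traffic is steered to the proxy's inbound port (4143) except ports on the ignore list. A small illustrative shell sketch of that decision (the port list mirrors the manifest above; the function name is hypothetical):

```shell
# Decide whether an inbound port is redirected to the proxy (port 4143)
# or bypasses it; the list matches --inbound-ports-to-ignore above.
is_redirected() {
  local port="$1" ignore="4190,4191,4567,4568"
  case ",${ignore}," in
    *,"${port}",*) echo "no"  ;;  # control/admin ports bypass the proxy
    *)             echo "yes" ;;  # everything else is redirected
  esac
}

is_redirected 80     # application traffic -> yes
is_redirected 4191   # proxy admin port    -> no
```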

6. Linkerd Traffic Management

Linkerd implements traffic management through ServiceProfiles, including timeouts, retries, and traffic splitting.

Step 1: Create a ServiceProfile

# Create a ServiceProfile
$ cat > serviceprofile.yaml << EOF
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: nginx-test.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /
    timeout: 10s
    isRetryable: true
  - name: GET /api
    condition:
      method: GET
      pathRegex: /api/.*
    timeout: 5s
    isRetryable: false
EOF

# Apply the ServiceProfile
$ kubectl apply -f serviceprofile.yaml

Example output:
serviceprofile.linkerd.io/nginx-test.default.svc.cluster.local created

# List ServiceProfiles
$ kubectl get serviceprofile

Example output:
NAME                                   AGE
nginx-test.default.svc.cluster.local   10s

Step 2: Configure Traffic Splitting

# Create a TrafficSplit
$ cat > trafficsplit.yaml << EOF
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: nginx-split
  namespace: default
spec:
  service: nginx-test
  backends:
  - service: nginx-test-v1
    weight: 90
  - service: nginx-test-v2
    weight: 10
EOF

# Apply the configuration
$ kubectl apply -f trafficsplit.yaml

Example output:
trafficsplit.split.smi-spec.io/nginx-split created

# View the TrafficSplit
$ kubectl get trafficsplit

Example output:
NAME          SERVICE
nginx-split   nginx-test

7. Linkerd Security Configuration

Linkerd enables mTLS by default, providing automatic mutual TLS encryption between meshed pods.

Step 1: Verify mTLS Status

# Check mTLS status
$ linkerd identity -n default

Example output:
POD IDENTITY TLS
nginx-test-abc123-def45 nginx-test.default.serviceaccount.identity.linkerd.management.cluster.local true
nginx-test-ghi789-jkl01 nginx-test.default.serviceaccount.identity.linkerd.management.cluster.local true

# View connection status
$ linkerd edges -n default

Example output:
SRC DST SRC_NS DST_NS SECURED
nginx-test-abc12 nginx-test default default √
nginx-test-ghi78 nginx-test default default √

# Check TLS between service proxies
$ linkerd -n default check --proxy

Example output:
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are matching
√ trust anchors are valid for at least 60 days
√ identity cert is valid for at least 60 days
√ identity cert is trusted by trust anchors

linkerd-webhooks-and-apisvc
---------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days

Step 2: Install the Viz Extension

# Install the Linkerd Viz extension
$ linkerd viz install | kubectl apply -f -

Example output:
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
serviceaccount/grafana created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
serviceaccount/tap created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
serviceaccount/web created
service/metrics-api created
service/grafana created
service/prometheus created
service/tap created
service/tap-injector created
service/web created
deployment.apps/grafana created
deployment.apps/prometheus created
deployment.apps/tap created
deployment.apps/tap-injector created
deployment.apps/web created
job.batch/linkerd-viz-checks created

# Check the Viz extension status
$ linkerd viz check

Example output:
linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ linkerd-viz ServiceAccounts exist
√ linkerd-viz Services exist
√ linkerd-viz Deployments exist
√ linkerd-viz ReplicaSets are ready
√ linkerd-viz Pods are ready
√ can initialize the client
√ can query the control plane API
√ tap API is up
√ prometheus is installed and configured correctly
√ grafana is installed and configured correctly

Status check results are √

# Open the dashboard
$ linkerd viz dashboard &

Example output:
Linkerd dashboard available at:
http://127.0.0.1:50750

8. Linkerd Verification and Testing

After installation, you can use Linkerd's built-in tooling to verify the service mesh.

Step 1: Check Service Status

# View service statistics
$ linkerd viz stat deployments -n default

Example output:
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TCP_CONN
nginx-test 2/2 100.00% 5.0rps 2ms 10ms 20ms 4
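Stat output is plain text and can be post-processed with standard tools; an illustrative sketch extracting the name and SUCCESS columns from saved output (the sample text is abbreviated from the example above, and the column positions are assumed stable):

```shell
# Extract name and success rate from captured `linkerd viz stat` output.
stat_output='NAME MESHED SUCCESS RPS
nginx-test 2/2 100.00% 5.0rps'

echo "$stat_output" | awk 'NR > 1 { print $1, $3 }'   # -> nginx-test 100.00%
```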

# View pod statistics
$ linkerd viz stat pods -n default

Example output:
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TCP_CONN
nginx-test-abc123-def45 1/1 100.00% 2.5rps 2ms 10ms 20ms 2
nginx-test-ghi789-jkl01 1/1 100.00% 2.5rps 2ms 10ms 20ms 2

# View the service topology
$ linkerd viz edges deploy -n default

Example output:
SRC DST SRC_NS DST_NS SECURED
nginx-test-abc12 nginx-test default default √
nginx-test-ghi78 nginx-test default default √

# Watch live traffic
$ linkerd viz tap deploy/nginx-test -n default

Example output:
req id=0:0 proxy=in src=10.0.0.1:54321 dst=10.0.0.2:80 tls=true :method=GET :path=/ :authority=nginx-test.default.svc.cluster.local
rsp id=0:0 proxy=in src=10.0.0.1:54321 dst=10.0.0.2:80 tls=true :status=200
end id=0:0 proxy=in src=10.0.0.1:54321 dst=10.0.0.2:80 tls=true duration=5ms response-length=615B

Step 2: Performance Testing

# Start a simple load generator
$ kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://nginx-test; done"

# Watch live metrics
$ linkerd viz stat deploy -n default --watch

Example output:
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TCP_CONN
nginx-test 2/2 100.00% 100.0rps 2ms 10ms 20ms 4

# View resource usage
$ linkerd viz stat deploy -n default -o wide

Example output:
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TCP_CONN READ_BYTES/SEC WRITE_BYTES/SEC
nginx-test 2/2 100.00% 100.0rps 2ms 10ms 20ms 4 1.0MB/s 2.0MB/s

# Check proxy metrics
$ linkerd diagnostics proxy-metrics -n default deploy/nginx-test

Example output:
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 123456789

# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 12345678

# TYPE process_start_time_seconds gauge
process_start_time_seconds 1710489600

# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 123.45

Production recommendations: set resource limits so sidecars do not consume excessive resources; deploy multiple control-plane replicas in high-availability mode; use the Viz extension to monitor the mesh; configure ServiceProfiles to optimize service-to-service communication.

This article was compiled and published by Fenge Tutorials for learning and testing purposes only. Please credit the source when reposting: http://www.fgedu.net.cn/10327.html
