
IT Tutorial FG392: Kubernetes Security

Contents

1. Kubernetes Security Overview

Kubernetes security is the set of practices and measures that protect a Kubernetes cluster and the applications running on it. As a container orchestration platform, Kubernetes provides powerful capabilities, but it also introduces new security challenges.

The core areas of Kubernetes security include:

  • Cluster security: protecting the control plane and nodes
  • Pod security: ensuring Pods are configured and run securely
  • Network security: protecting traffic within and between clusters
  • Identity and access management: controlling access to cluster resources
  • Secrets management: storing and using sensitive data safely
  • Runtime security: monitoring and protecting running containers
  • Compliance and auditing: keeping the cluster aligned with security standards
  • Security monitoring: watching for security events and vulnerabilities in real time


2. Cluster Security

2.1 Control Plane Security

# Control-plane component hardening
# Configure the API server (the deprecated --basic-auth-file flag was
# removed in Kubernetes v1.19 and should not be used)
$ kube-apiserver \
    --tls-cert-file=/path/to/cert \
    --tls-private-key-file=/path/to/key \
    --client-ca-file=/path/to/ca \
    --kubelet-certificate-authority=/path/to/ca \
    --kubelet-client-certificate=/path/to/cert \
    --kubelet-client-key=/path/to/key \
    --authorization-mode=RBAC \
    --enable-admission-plugins=NodeRestriction,PodSecurityPolicy \
    --anonymous-auth=false

# Configure etcd
$ etcd \
    --client-cert-auth \
    --trusted-ca-file=/path/to/ca \
    --cert-file=/path/to/cert \
    --key-file=/path/to/key \
    --peer-client-cert-auth \
    --peer-trusted-ca-file=/path/to/ca \
    --peer-cert-file=/path/to/peer-cert \
    --peer-key-file=/path/to/peer-key

# Configure the kubelet
$ kubelet \
    --client-ca-file=/path/to/ca \
    --tls-cert-file=/path/to/cert \
    --tls-private-key-file=/path/to/key \
    --authorization-mode=Webhook \
    --read-only-port=0 \
    --protect-kernel-defaults=true \
    --make-iptables-util-chains=true

# Configure kube-proxy (it has no TLS serving flags of its own; it
# authenticates to the API server via its kubeconfig)
$ kube-proxy \
    --kubeconfig=/path/to/kubeconfig

2.2 Node Security

# Node hardening
# Install security patches
$ apt-get update && apt-get upgrade -y

# Configure the firewall
$ ufw enable
$ ufw allow 22/tcp
$ ufw allow 6443/tcp
$ ufw allow 2379/tcp
$ ufw allow 2380/tcp
$ ufw allow 10250/tcp
$ ufw allow 10251/tcp
$ ufw allow 10252/tcp

# Disable unnecessary services
$ systemctl disable rpcbind
$ systemctl disable nfs-server
$ systemctl disable postfix

# Configure SELinux (RHEL-family nodes)
$ setenforce 1
$ sed -i 's/SELINUX=permissive/SELINUX=enforcing/g' /etc/selinux/config

# Configure AppArmor (Debian/Ubuntu nodes)
$ apt-get install apparmor-profiles apparmor-utils
$ systemctl enable apparmor
$ systemctl start apparmor

# Configure kernel parameters
$ cat /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

$ sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

2.3 Cluster Configuration Security

# Cluster configuration hardening
# Create a cluster with kubeadm
$ kubeadm init \
    --control-plane-endpoint="api.fgedu.net.cn" \
    --upload-certs \
    --pod-network-cidr=192.168.0.0/16

# Configure cluster RBAC
$ kubectl create clusterrolebinding admin-binding --clusterrole=admin --user=admin

# Configure a Pod Security Policy (deprecated since v1.21, removed in v1.25)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: true

# Apply the Pod Security Policy
$ kubectl apply -f pod-security-policy.yaml

# Bind the Pod Security Policy
$ kubectl create clusterrole psp:restricted --verb=use --resource=podsecuritypolicies --resource-name=restricted
$ kubectl create clusterrolebinding default:restricted --clusterrole=psp:restricted --group=system:authenticated
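Because PodSecurityPolicy was removed in Kubernetes v1.25, current clusters enforce the same class of restrictions with the built-in Pod Security Admission controller, driven by namespace labels. A minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-ns   # illustrative namespace name
  labels:
    # Enforce the "restricted" Pod Security Standard, and also
    # warn and audit at the same level
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Pods that violate the restricted profile are rejected at admission time in this namespace, with no extra RBAC binding required.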

风哥 tip: Cluster security is the foundation of Kubernetes security; it must be addressed at several layers at once: the control plane, the nodes, and the cluster configuration.

3. Pod Security

3.1 Pod Security Context

# Pod security context configuration
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    supplementalGroups: [1001, 1002]
    seLinuxOptions:
      level: "s0:c123,c456"
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE
        seccompProfile:
          type: RuntimeDefault
        seLinuxOptions:
          level: "s0:c123,c456"

# Verify the Pod's security context
$ kubectl get pod secure-pod -o yaml

# Inspect the Pod's security status
$ kubectl describe pod secure-pod

3.2 Container Security

# Container hardening
# Use an official base image
FROM alpine:3.15

# Keep the image minimal (--no-cache leaves no apk index behind)
RUN apk add --no-cache nginx

# Run as a non-root user (the alpine nginx package already creates an
# nginx user, so add a dedicated account only for your own binaries)
RUN adduser -D -u 1000 appuser
USER appuser

# Prefer COPY over ADD
COPY index.html /var/www/html/

# Scan the container image
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image myapp:latest

# Sign the container image
$ docker trust sign myapp:latest

# Verify the image signature
$ docker trust inspect --pretty myapp:latest

3.3 Pod Security Policies

The restricted PodSecurityPolicy manifest and its RBAC binding are identical to the ones shown in section 2.3 and are not repeated here. Remember that PodSecurityPolicy was deprecated in v1.21 and removed in v1.25; on current clusters use Pod Security Admission instead.

# Test the Pod Security Policy
$ kubectl run test --image=busybox --command -- sh -c "sleep 3600"
$ kubectl describe pod test


4. Network Security

4.1 Network Policies

# Network policy configuration
# Deny all traffic by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

# Apply the default-deny network policy
$ kubectl apply -f default-deny.yaml

# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 8080

# Apply the allow-web network policy
$ kubectl apply -f allow-web.yaml

# List network policies
$ kubectl get networkpolicies
$ kubectl describe networkpolicy allow-web

# Test the network policies (the wget commands below assume Services
# named web and backend exist in front of these Pods)
$ kubectl run frontend --image=busybox --labels=app=frontend --command -- sh -c "sleep 3600"
$ kubectl run web --image=nginx --labels=app=web
$ kubectl run backend --image=busybox --labels=app=backend --command -- sh -c "nc -l -p 8080"
$ kubectl exec -it frontend -- wget -qO- http://web
$ kubectl exec -it frontend -- wget -qO- http://backend:8080
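Note that once the default-deny egress policy is in place, Pods also lose DNS resolution, which breaks the name-based lookups used in the tests above. A common companion policy allows egress to the cluster DNS on port 53; a sketch (the namespace selector assumes the standard kube-system metadata label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}          # applies to all Pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```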

4.2 Network Encryption

# Configure TLS (certificate and key go in the data fields, base64-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: # base64-encoded certificate
  tls.key: # base64-encoded private key

# Apply the TLS Secret
$ kubectl apply -f tls-secret.yaml

# Configure an Ingress to use TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
spec:
  tls:
    - hosts:
        - app.fgedu.net.cn
      secretName: tls-secret
  rules:
    - host: app.fgedu.net.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80

# Apply the Ingress
$ kubectl apply -f ingress.yaml

# Verify TLS
$ curl -k https://app.fgedu.net.cn

# Enforce mTLS with a service mesh (Istio)
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT

# Apply the mTLS configuration
$ kubectl apply -f peer-authentication.yaml

# Verify mTLS (the authn tls-check subcommand exists only in older istioctl releases)
$ istioctl authn tls-check app.default.svc.cluster.local

4.3 Network Monitoring

# Install the monitoring stack
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm install prometheus prometheus-community/kube-prometheus-stack

# Scrape network metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: network-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: kube-proxy
  endpoints:
    - port: metrics

# Apply the ServiceMonitor
$ kubectl apply -f service-monitor.yaml

# Browse the network metrics
$ kubectl port-forward svc/prometheus-grafana 3000:80
# Open http://localhost:3000 in a browser

# Configure network alerts
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: network-alerts
  namespace: monitoring
spec:
  groups:
    - name: network
      rules:
        - alert: HighNetworkTraffic
          expr: rate(node_network_receive_bytes_total[5m]) > 10000000
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "High network traffic detected"
            description: "Network traffic exceeds threshold on {{ $labels.instance }}"

# Apply the network alerts
$ kubectl apply -f network-alerts.yaml


5. Identity and Access Management

5.1 RBAC Configuration

# RBAC configuration
# Create a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: default

# Apply the ServiceAccount
$ kubectl apply -f service-account.yaml

# Create a Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]

# Apply the Role
$ kubectl apply -f role.yaml

# Create a RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: app-service-account
    namespace: default
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io

# Apply the RoleBinding
$ kubectl apply -f role-binding.yaml

# Verify permissions
$ kubectl auth can-i get pods --as=system:serviceaccount:default:app-service-account
$ kubectl auth can-i create pods --as=system:serviceaccount:default:app-service-account

# List the current user's permissions
$ kubectl auth can-i --list
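The same Role can also be granted to user groups coming from the identity provider rather than to a ServiceAccount; a sketch (the group name "app-team" is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-team-binding
  namespace: default
subjects:
  - kind: Group
    name: "app-team"          # illustrative group name from your IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```

Group-based bindings keep RBAC manageable: membership changes happen in the identity provider, not in cluster manifests.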

5.2 OIDC Integration

# Configure Kubernetes to use OIDC (kubeadm ClusterConfiguration;
# the OIDC flags are API-server arguments)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://accounts.google.com"
    oidc-client-id: "my-client-id"
    oidc-username-claim: "email"
    oidc-groups-claim: "groups"
    oidc-ca-file: "/path/to/ca.crt"

# Annotate a ServiceAccount for OIDC federation (EKS IRSA example)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oidc-service-account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/oidc-role"

# Apply the ServiceAccount
$ kubectl apply -f oidc-service-account.yaml

# Test OIDC authentication (impersonating the user)
$ kubectl get pods --as=oidc-user

# Configure RBAC for OIDC users
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-user-binding
subjects:
  - kind: User
    name: "user@fgedu.net.cn"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

# Apply the ClusterRoleBinding
$ kubectl apply -f oidc-user-binding.yaml

5.3 Service Account Management

# List ServiceAccounts
$ kubectl get serviceaccounts --all-namespaces

# Create a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: default

# Apply the ServiceAccount
$ kubectl apply -f service-account.yaml

# Assign the ServiceAccount to a Pod
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: default
spec:
  serviceAccountName: app-service-account
  containers:
    - name: app
      image: myapp:latest

# Apply the Pod
$ kubectl apply -f pod.yaml

# Check the Pod's ServiceAccount
$ kubectl get pod app-pod -o yaml | grep serviceAccountName

# Restrict ServiceAccount permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: restricted-service-account
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

# Apply the ClusterRole
$ kubectl apply -f cluster-role.yaml

# Bind the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restricted-service-account-binding
subjects:
  - kind: ServiceAccount
    name: app-service-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: restricted-service-account
  apiGroup: rbac.authorization.k8s.io

# Apply the ClusterRoleBinding
$ kubectl apply -f cluster-role-binding.yaml
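A ServiceAccount token is mounted into every Pod by default even when the application never talks to the API server; disabling the automount shrinks the attack surface. It can be switched off on the ServiceAccount itself (or per Pod via spec.automountServiceAccountToken):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access
  namespace: default
# No token is mounted into Pods using this ServiceAccount
automountServiceAccountToken: false
```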


6. Secrets Management

6.1 Kubernetes Secrets

# Create a Secret
$ kubectl create secret generic db-secret \
    --from-literal=username=admin \
    --from-literal=password=secret123

# Inspect Secrets
$ kubectl get secrets
$ kubectl describe secret db-secret
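By default, Secrets sit in etcd base64-encoded but unencrypted. The API server supports encryption at rest through an EncryptionConfiguration file passed with --encryption-provider-config; a minimal sketch (the key value is a placeholder, generate one with `head -c 32 /dev/urandom | base64`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so pre-existing plaintext data stays readable
```

After enabling it, run `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` to rewrite existing Secrets in encrypted form.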

# Use the Secret in a Pod via environment variables
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password

# Apply the Pod
$ kubectl apply -f pod.yaml

# Verify the Secret is injected
$ kubectl exec -it app-pod -- env | grep DB_

# Mount the Secret as a volume
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: db-secret

# Apply the Pod
$ kubectl apply -f pod-with-volume.yaml

# Verify the Secret volume
$ kubectl exec -it app-pod -- ls -la /etc/secrets/
$ kubectl exec -it app-pod -- cat /etc/secrets/username
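Keep in mind that the values stored in a Secret are only base64-encoded, not encrypted; anyone allowed to read the object can decode them locally. For example, the password created above:

```shell
# base64 is an encoding, not encryption: decoding requires no key
echo 'c2VjcmV0MTIz' | base64 -d
# prints "secret123"
```

This is why RBAC rules for the secrets resource, and encryption at rest in etcd, matter more than the encoding itself.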

6.2 External Secrets Management

# Install HashiCorp Vault
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault

# Access the Vault UI
$ kubectl port-forward svc/vault 8200:8200
# Open http://localhost:8200 in a browser

# Enable Kubernetes authentication
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    kubernetes_host=https://kubernetes.default.svc \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token

# Create a Vault role
$ vault write auth/kubernetes/role/my-role \
    bound_service_account_names=app-service-account \
    bound_service_account_namespaces=default \
    policies=my-policy \
    ttl=1h

# Create a Vault policy
$ vault policy write my-policy - << EOF
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
EOF

# Use Vault from an application
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app-service-account
  containers:
    - name: app
      image: myapp:latest
      env:
        - name: VAULT_ADDR
          value: "http://vault.default.svc:8200"

# Apply the Pod
$ kubectl apply -f pod.yaml

# Test the Vault integration: log in with the Pod's ServiceAccount token
# (the ServiceAccount token is not itself a Vault token; it must be
# exchanged via the kubernetes auth login endpoint, from inside the Pod)
$ kubectl exec -it app-pod -- sh -c 'curl -s --request POST \
    --data "{\"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\", \"role\": \"my-role\"}" \
    http://vault.default.svc:8200/v1/auth/kubernetes/login'

6.3 Secret Rotation

# Rotate a Secret in place
$ kubectl create secret generic db-secret --from-literal=username=admin --from-literal=password=new-secret123 --dry-run=client -o yaml | kubectl apply -f -

# Inspect the updated Secret
$ kubectl get secret db-secret -o yaml

# Configure automatic rotation (External Secrets Operator)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-secret
  data:
    - secretKey: username
      remoteRef:
        key: secret/data/myapp/db
        property: username
    - secretKey: password
      remoteRef:
        key: secret/data/myapp/db
        property: password

# Apply the ExternalSecret
$ kubectl apply -f external-secret.yaml

# Check the ExternalSecret
$ kubectl get externalsecrets

# Configure the SecretStore
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://vault.default.svc:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"   # name of the Vault auth mount, not a URL path
          role: my-role

# Apply the SecretStore
$ kubectl apply -f secret-store.yaml


7. Runtime Security

7.1 Runtime Monitoring

# Install Falco
$ helm repo add falcosecurity https://falcosecurity.github.io/charts
$ helm install falco falcosecurity/falco

# Tail the Falco logs (the Helm chart deploys Falco as a DaemonSet)
$ kubectl logs -f daemonset/falco

# Custom Falco rules (the FalcoRule CRD requires the Falco operator;
# with the plain Helm chart, custom rules are supplied via chart values)
apiVersion: falco.org/v1alpha1
kind: FalcoRule
metadata:
  name: custom-rules
spec:
  rules:
    - rule: Detect shell in container
      desc: Detect a shell being spawned in a container
      condition: spawned_process and container and shell_procs
      output: "Shell spawned in container (user=%user.name container_id=%container.id container_name=%container.name shell=%proc.name parent=%proc.pname command=%proc.cmdline)"
      priority: WARNING

# Apply the Falco rules
$ kubectl apply -f falco-rule.yaml

# Trigger the rule
$ kubectl run test --image=busybox --command -- sh -c "sleep 3600"
$ kubectl exec -it test -- sh

# Install Sysdig Secure
$ helm repo add sysdig https://charts.sysdig.com
$ helm install sysdig sysdig/sysdig-deploy \
    --set global.sysdig.accessKey=YOUR_ACCESS_KEY

# Check the Sysdig agent status
$ kubectl get pods -n sysdig-agent

7.2 Runtime Protection

# Run kube-bench
$ docker run --rm -v /var/lib/kubelet:/var/lib/kubelet -v /etc/kubernetes:/etc/kubernetes --net=host aquasec/kube-bench

# Run the scan as a one-off Pod
$ kubectl run kube-bench --rm --image=aquasec/kube-bench:latest --restart=Never

# Install the Trivy Operator
$ helm repo add aquasecurity https://aquasecurity.github.io/helm-charts/
$ helm install trivy-operator aquasecurity/trivy-operator

# Review scan results
$ kubectl get vulnerabilityreports
$ kubectl get configauditreports

# Install Aqua Security
$ helm repo add aqua https://aquasecurity.github.io/helm-charts/
$ helm install aqua aqua/aqua

# Check the security components
$ kubectl get pods -n aqua

The restricted PodSecurityPolicy shown in section 2.3 is also part of runtime protection; it is not repeated here.

7.3 Runtime Response

# Configure a security alert (the FalcoAlert CRD requires the Falco operator)
apiVersion: falco.org/v1alpha1
kind: FalcoAlert
metadata:
  name: security-alert
spec:
  severity: CRITICAL
  output: "Security event detected"
  priority: CRITICAL
  source: syscall
  tags: ["security"]

# Apply the FalcoAlert
$ kubectl apply -f falco-alert.yaml

# Configure a Prometheus alert
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: security-alerts
  namespace: monitoring
spec:
  groups:
    - name: security
      rules:
        - alert: ContainerSecurityViolation
          expr: falco_events{rule="Detect shell in container"} > 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "Container security violation detected"
            description: "A shell was spawned in container {{ $labels.container_name }}"

# Apply the PrometheusRule
$ kubectl apply -f prometheus-rule.yaml

# Configure an alert receiver (the AlertmanagerConfig CRD is v1alpha1,
# uses camelCase field names, and apiURL must reference a Secret that
# holds the webhook URL)
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: security-alerts
  namespace: monitoring
spec:
  receivers:
    - name: slack
      slackConfigs:
        - apiURL:
            name: slack-webhook      # Secret containing the webhook URL
            key: url
          channel: "#security-alerts"
          sendResolved: true
  route:
    groupBy: ["alertname"]
    groupInterval: 5m
    groupWait: 30s
    repeatInterval: 1h
    receiver: slack

# Apply the AlertmanagerConfig
$ kubectl apply -f alertmanager-config.yaml

8. Compliance and Auditing

8.1 Compliance Checks

# Run the CIS benchmark check
$ docker run --rm -v /var/lib/kubelet:/var/lib/kubelet -v /etc/kubernetes:/etc/kubernetes --net=host aquasec/kube-bench --benchmark cis-1.6

# Show only failed checks
$ docker run --rm -v /var/lib/kubelet:/var/lib/kubelet -v /etc/kubernetes:/etc/kubernetes --net=host aquasec/kube-bench --benchmark cis-1.6 | grep FAIL

# NSA- and PCI-oriented profiles are not built-in kube-bench benchmark IDs;
# consult the kube-bench documentation for the benchmarks your release supports

# Install Sonobuoy
$ curl -L https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.14/sonobuoy_0.56.14_linux_amd64.tar.gz | tar -xz
$ sudo mv sonobuoy /usr/local/bin/

# Run a Sonobuoy conformance scan
$ sonobuoy run --mode=certified-conformance

# Check Sonobuoy status
$ sonobuoy status

# Retrieve the report
$ sonobuoy retrieve
$ tar -xf *.tar.gz
$ cat plugins/e2e/results/global.json

8.2 Audit Logging

# Configure Kubernetes audit logging (kubeadm ClusterConfiguration;
# the audit flags are API-server arguments)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log
    audit-log-maxage: "30"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-policy-file: /etc/kubernetes/audit-policy.yaml

# The audit policy
$ cat audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods", "services", "configmaps"]
  - level: Metadata          # never use RequestResponse for Secrets:
    resources:               # it would record secret data in the log
      - group: ""
        resources: ["secrets"]
  - level: Metadata
    resources:
      - group: "apps"
        resources: ["deployments", "replicasets"]

# The policy file is read by the API server via --audit-policy-file;
# it is not applied with kubectl. Place it on the control-plane node:
$ cp audit-policy.yaml /etc/kubernetes/audit-policy.yaml

# Watch the API server logs
$ kubectl logs -f kube-apiserver-<node-name> -n kube-system
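Audit events land in the log one JSON object per line, so quick triage works with standard text tools. A sketch that pulls out events touching Secrets (the sample event below is fabricated and heavily trimmed for illustration; real events carry many more fields):

```shell
# One sample audit event (fabricated, trimmed)
echo '{"verb":"get","user":{"username":"admin"},"objectRef":{"resource":"secrets","name":"db-secret"}}' > audit-sample.log

# Find every event that touched Secrets
grep '"resource":"secrets"' audit-sample.log
```

For anything beyond ad-hoc triage, ship the log into the collection pipeline described below.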

# Collect the audit logs
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: audit-log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: audit-log-collector
  template:
    metadata:
      labels:
        app: audit-log-collector
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.14.0
          volumeMounts:
            - name: audit-logs
              mountPath: /var/log/kubernetes
            - name: config
              mountPath: /etc/fluentd
      volumes:
        - name: audit-logs
          hostPath:
            path: /var/log/kubernetes
        - name: config
          configMap:
            name: fluentd-config

# Apply the DaemonSet
$ kubectl apply -f audit-log-collector.yaml

8.3 Compliance Reporting

# Generate a machine-readable report
$ docker run --rm -v /var/lib/kubelet:/var/lib/kubelet -v /etc/kubernetes:/etc/kubernetes --net=host aquasec/kube-bench --benchmark cis-1.6 --json > audit.json

# List failed checks (the JSON layout varies between kube-bench releases,
# so walk the whole tree rather than assuming a fixed path)
$ jq '.. | objects | select(.status? == "FAIL")' audit.json

# kube-bench has no built-in HTML output; feed audit.json into your own
# reporting tooling if an HTML report is required

# Schedule a daily compliance check
apiVersion: batch/v1
kind: CronJob
metadata:
  name: compliance-check
  namespace: kube-system
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: kube-bench
              image: aquasec/kube-bench:latest
              command:
                - sh
                - -c
                - kube-bench --benchmark cis-1.6 --json > /output/audit-$(date +%Y%m%d).json
              volumeMounts:
                - name: output
                  mountPath: /output
                - name: kubelet
                  mountPath: /var/lib/kubelet
                - name: kubernetes
                  mountPath: /etc/kubernetes
          restartPolicy: OnFailure
          volumes:
            - name: output
              persistentVolumeClaim:
                claimName: compliance-reports
            - name: kubelet
              hostPath:
                path: /var/lib/kubelet
            - name: kubernetes
              hostPath:
                path: /etc/kubernetes

# Apply the CronJob
$ kubectl apply -f compliance-check.yaml

9. Security Monitoring

9.1 Security Metrics

# Install Prometheus and Grafana
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm install prometheus prometheus-community/kube-prometheus-stack

# Scrape security metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: security-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: falco
  endpoints:
    - port: metrics

# Apply the ServiceMonitor
$ kubectl apply -f service-monitor.yaml

# Browse the security metrics
$ kubectl port-forward svc/prometheus-grafana 3000:80
# Open http://localhost:3000 in a browser

# Create a security dashboard
$ cat security-dashboard.json
{
  "annotations": { "list": [] },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "panels": [
    {
      "aliasColors": {},
      "bars": false,
      "dashLength": 10,
      "dashes": false,
      "datasource": "Prometheus",
      "fieldConfig": { "defaults": {}, "overrides": [] },
      "fill": 1,
      "fillGradient": 0,
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 },
      "hiddenSeries": false,
      "id": 1,
      "legend": {
        "avg": false,
        "current": false,
        "max": false,
        "min": false,
        "show": true,
        "total": false,
        "values": false
      },
      "lines": true,
      "linewidth": 1,
      "nullPointMode": "null",
      "options": { "alertThreshold": true },
      "percentage": false,
      "pluginVersion": "7.5.1",
      "pointradius": 2,
      "points": false,
      "renderer": "flot",
      "seriesOverrides": [],
      "spaceLength": 10,
      "stack": false,
      "steppedLine": false,
      "targets": [
        {
          "expr": "falco_events_total",
          "interval": "",
          "legendFormat": "{{rule}}",
          "refId": "A"
        }
      ],
      "thresholds": [],
      "timeFrom": null,
      "timeRegions": [],
      "timeShift": null,
      "title": "Falco Events",
      "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
      "type": "graph",
      "xaxis": {
        "buckets": null,
        "mode": "time",
        "name": null,
        "show": true,
        "values": []
      },
      "yaxes": [
        { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
        { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
      ],
      "yaxis": { "align": false, "alignLevel": null }
    }
  ],
  "schemaVersion": 26,
  "style": "dark",
  "tags": [],
  "templating": { "list": [] },
  "time": { "from": "now-6h", "to": "now" },
  "timepicker": {},
  "timezone": "",
  "title": "Security Dashboard",
  "uid": "security-dashboard",
  "version": 1
}

# Import the security dashboard
# In Grafana, import security-dashboard.json

9.2 Security Event Response

The FalcoAlert, PrometheusRule and AlertmanagerConfig resources from section 7.3 form the alerting backbone here as well and are not repeated. On top of them, an automated response service can consume the alerts:

# A response service that receives alerts and runs playbooks
# (security-response:latest stands in for your own image; keep webhook
# URLs in Secrets rather than plain env values in real deployments)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security-response
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: security-response
  template:
    metadata:
      labels:
        app: security-response
    spec:
      containers:
        - name: security-response
          image: security-response:latest
          env:
            - name: SLACK_WEBHOOK
              value: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
            - name: PAGERDUTY_API_KEY
              valueFrom:
                secretKeyRef:
                  name: pagerduty-secret
                  key: api-key

# Apply the Deployment
$ kubectl apply -f security-response.yaml

10. Best Practices

10.1 Kubernetes Security Best Practices

  • Use official base images: make sure image provenance is trusted
  • Update base images regularly: pick up security patches
  • Scan container images: run security scans before deployment
  • Limit container capabilities: apply the principle of least privilege
  • Use read-only root filesystems: prevent files inside containers from being modified
  • Configure network policies: restrict traffic between containers
  • Use RBAC: control access permissions at a fine granularity
  • Integrate OIDC: plug into the enterprise identity provider
  • Use external secrets management: store sensitive data safely
  • Set up security monitoring: watch security events in real time
  • Run regular compliance checks: stay aligned with security standards
  • Establish an incident response process: handle security events promptly
  • Train the team: raise security awareness and skills
  • Document security configuration: record policies and settings
  • Audit security regularly: assess the overall security posture

10.2 Production Environment Recommendations

The checklist in section 10.1 applies directly to production and is not repeated here.

# Example production security configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
      annotations:
        container.apparmor.security.beta.kubernetes.io/app: runtime/default
    spec:
      serviceAccountName: app-service-account
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: app
          image: myapp:latest
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"
            requests:
              memory: "256Mi"
              cpu: "200m"
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          ports:
            - containerPort: 8080

# Network policy for the application
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: secure-app-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: secure-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53

10.3 Security Toolchain

  • Container image scanning: Trivy, Clair, Docker Scan
  • Runtime security: Falco, Aqua Security, Sysdig Secure
  • Network security: Cilium, Calico, Istio
  • Identity management: Keycloak, Dex, Azure AD
  • Secrets management: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault
  • Compliance checks: kube-bench, Sonobuoy, CIS-CAT
  • Monitoring and alerting: Prometheus, Grafana, Alertmanager
  • Log management: ELK Stack, Loki, Fluentd
  • Security auditing: Auditbeat, OpenSearch
  • CI/CD security: Snyk, OWASP ZAP, GitGuardian

Production Environment Recommendations

  • Establish a security policy: define a comprehensive Kubernetes security strategy
  • Shift security left: integrate security into the early stages of development and deployment
  • Automate security: automate security checks with tools and pipelines
  • Monitor continuously: track security events and vulnerabilities in real time
  • Audit regularly: assess the security posture and uncover hidden weaknesses
  • Train the team: raise security awareness and skills
  • Document security configuration: record policies and settings
  • Establish an incident response process: handle security events promptly
  • Keep security tools up to date: pick up new features and security patches
  • Engage with the security community: stay informed about new threats and defenses

Compiled and published by 风哥教程 for learning and testing purposes only. When reposting, credit the source: http://www.fgedu.net.cn/10327.html
