This article introduces edge computing in practice, covering fundamental concepts, architecture, platforms, applications, and future directions. By the end, you should have a grasp of the core ideas of edge computing and the practical techniques for deploying it.
Outline
Part01 - Fundamental Concepts and Theory
Part02 - Production Environment Planning and Recommendations
Part03 - Production Implementation Plan
Part04 - Production Cases and Hands-On Walkthrough
Part05 - Lessons Learned and Takeaways
Edge Computing Fundamentals
Edge computing is a computing paradigm that places computation and data storage at the network edge, close to where data is produced. Its core concepts include:
- Edge node: a computing device at the network edge, such as a router, gateway, or IoT device
- Edge network: the network connecting edge nodes to the cloud data center
- Edge service: a service running on an edge node
- Edge intelligence: intelligent computation performed on edge nodes
- Edge security: protection of the edge computing environment
Edge Computing Architecture
A typical edge computing architecture has four layers:
- Device layer: IoT devices, sensors, etc.
- Edge layer: edge nodes, edge gateways, etc.
- Fog layer: regional edge nodes, fog computing nodes, etc.
- Cloud layer: cloud data centers, cloud services, etc.
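The flow through these layers can be sketched in Python. The class and field names below are hypothetical illustrations (the fog layer is omitted for brevity); real deployments would use device SDKs and cloud APIs instead:

```python
class Device:
    """Device layer: produces raw sensor readings."""
    def read(self):
        return [21.0, 22.5, 80.0, 23.1]  # includes one anomalous spike

class EdgeNode:
    """Edge layer: filters and aggregates close to the data source."""
    def __init__(self, threshold=50.0):
        self.threshold = threshold

    def process(self, readings):
        # Drop outliers locally instead of shipping them upstream
        valid = [r for r in readings if r < self.threshold]
        return {"count": len(valid), "mean": sum(valid) / len(valid)}

class Cloud:
    """Cloud layer: receives only the aggregated summary."""
    def __init__(self):
        self.received = []

    def ingest(self, summary):
        self.received.append(summary)

device, edge, cloud = Device(), EdgeNode(), Cloud()
cloud.ingest(edge.process(device.read()))
# One small summary crosses the network instead of four raw values
print(cloud.received[0])
```

The point of the sketch is the shape of the data flow: raw readings stay at the edge, and only a compact summary traverses the edge network to the cloud layer.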
Edge Computing Platforms
An edge computing platform is an environment for deploying and managing edge applications. Major options include:
- K3s: a lightweight Kubernetes distribution
- EdgeX Foundry: an open-source edge computing framework
- Azure IoT Edge: Microsoft's edge computing platform
- AWS IoT Greengrass: Amazon's edge computing platform
- Google Cloud IoT Edge: Google's edge computing platform
(The OpenFog Consortium, often mentioned alongside these, was a fog computing standards body rather than a platform; it merged into the Industrial Internet Consortium in 2019.)
Environment Planning
Before deploying an edge computing environment, plan it in detail:
Hardware planning
- Edge nodes: routers, gateways, IoT devices, etc.
- Network equipment: to ensure connectivity between edge nodes and the cloud
- Storage devices: for holding edge data
- Security appliances: to protect the edge environment
Software planning
- Edge computing platform: e.g. K3s, EdgeX Foundry
- Container runtime: e.g. Docker, containerd
- Edge applications: e.g. edge AI, edge analytics
- Monitoring tools: e.g. Prometheus, Grafana
- Security tooling: e.g. TLS, firewalls
Best Practices
Best practices for edge computing include:
- Edge node selection: choose nodes that match application requirements
- Network design: ensure reliable connectivity between edge nodes and the cloud
- Data management: manage the storage and transfer of edge data deliberately
- Security: implement edge-specific protective measures
- Monitoring: track the runtime state of edge nodes in real time
- Application deployment: streamline how edge applications are deployed and managed
Performance Optimization
Key measures for edge computing performance:
- Resource optimization: allocate edge node resources sensibly
- Network optimization: reduce latency between edge nodes and the cloud
- Compute optimization: optimize edge algorithms and models
- Storage optimization: manage edge data storage efficiently
- Energy optimization: reduce edge node power consumption
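The payoff of network optimization can be made concrete with a back-of-envelope estimate of the bandwidth saved when an edge node uploads windowed summaries instead of raw samples. All numbers below are hypothetical assumptions, not measurements:

```python
# Back-of-envelope sketch: bandwidth saved by aggregating at the edge.
sample_rate_hz = 10    # sensor samples per second (assumed)
sample_bytes = 100     # payload per raw sample (assumed)
window_s = 10          # aggregation window length in seconds
summary_bytes = 200    # one mean/max/min/std summary per window (assumed)

# Uploading every raw sample vs. one summary per window
raw_bps = sample_rate_hz * sample_bytes      # bytes/s with no edge processing
aggregated_bps = summary_bytes / window_s    # bytes/s with edge aggregation
reduction = 1 - aggregated_bps / raw_bps

print(f"raw: {raw_bps} B/s, aggregated: {aggregated_bps} B/s, saved: {reduction:.0%}")
```

Under these assumptions the edge node cuts upstream traffic from 1000 B/s to 20 B/s, which is the kind of reduction that motivates processing data at the edge rather than in the cloud.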
Edge Computing Deployment
Deploy edge computing in the following steps:
1. Deploy the edge computing platform
# Deploy K3s
$ curl -sfL https://get.k3s.io | sh -
# Check K3s status
$ sudo systemctl status k3s
# Deploy EdgeX Foundry
$ git clone https://github.com/edgexfoundry/edgex-compose.git
$ cd edgex-compose
$ docker-compose -f docker-compose-no-secty.yml up -d
# Check EdgeX Foundry status
$ docker-compose -f docker-compose-no-secty.yml ps
# Deploy Azure IoT Edge
# 1. Install the IoT Edge runtime
$ curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > ./microsoft-prod.list
$ sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
$ sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
$ sudo apt-get update
$ sudo apt-get install moby-engine moby-cli
$ sudo apt-get install iotedge
# 2. Configure IoT Edge
$ sudo nano /etc/iotedge/config.yaml
# Add the device connection string
# 3. Restart IoT Edge
$ sudo systemctl restart iotedge
2. Deploy the edge applications
# Deploy the edge AI application
$ cat > edge-ai-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
EOF
# Apply the deployment
$ kubectl apply -f edge-ai-app.yaml
# Deploy the edge analytics application
$ cat > edge-analytics-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-analytics-app
  template:
    metadata:
      labels:
        app: edge-analytics-app
    spec:
      containers:
      - name: edge-analytics-app
        image: fgedu/edge-analytics-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8081
EOF
# Apply the deployment
$ kubectl apply -f edge-analytics-app.yaml
3. Configure the edge network
# Configure the edge services
$ cat > edge-network.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: edge-ai-service
spec:
  selector:
    app: edge-ai-app
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: edge-analytics-service
spec:
  selector:
    app: edge-analytics-app
  ports:
  - port: 80
    targetPort: 8081
  type: NodePort
EOF
# Apply the network configuration
$ kubectl apply -f edge-network.yaml
# Check service status
$ kubectl get services
# Configure a network policy
$ cat > edge-network-policy.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: edge-network-policy
spec:
  podSelector:
    matchLabels:
      app: edge-ai-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: edge-analytics-app
    ports:
    - protocol: TCP
      port: 8080
EOF
# Apply the network policy
$ kubectl apply -f edge-network-policy.yaml
Tip: in production, prefer a lightweight edge platform such as K3s to fit the resource constraints of edge nodes.
Edge Computing Configuration
Configure edge computing in the following steps:
1. Configure the edge nodes
# Configure the K3s node
$ sudo nano /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/default/k3s
ExecStart=/usr/local/bin/k3s server --disable traefik --disable servicelb
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
# Restart K3s
$ sudo systemctl daemon-reload
$ sudo systemctl restart k3s
# Configure EdgeX Foundry
$ cat > docker-compose.override.yml << 'EOF'
services:
  core-data:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  core-metadata:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  core-command:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  support-notifications:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  support-scheduler:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  app-service-configurable:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
  device-virtual:
    environment:
      EDGEX_SECURITY_SECRET_STORE: "false"
EOF
# Restart EdgeX Foundry
$ docker-compose -f docker-compose-no-secty.yml -f docker-compose.override.yml up -d
2. Configure the edge applications
# Configure the edge AI application
$ cat > config.yaml << 'EOF'
model:
  name: resnet50
  version: 1.0
  path: /models/resnet50.onnx
inference:
  batch_size: 1
  num_threads: 2
input:
  source: camera
  width: 224
  height: 224
output:
  destination: local
  path: /data/results
EOF
# Mount the configuration via a ConfigMap
$ kubectl create configmap edge-ai-config --from-file=config.yaml
# Update the deployment
$ cat > edge-ai-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
        - name: model-volume
          mountPath: /models
        - name: data-volume
          mountPath: /data
      volumes:
      - name: config-volume
        configMap:
          name: edge-ai-config
      - name: model-volume
        hostPath:
          path: /opt/models
      - name: data-volume
        hostPath:
          path: /opt/data
EOF
# Apply the deployment
$ kubectl apply -f edge-ai-app.yaml
# Configure the edge analytics application
$ cat > analytics-config.yaml << 'EOF'
input:
  sources:
  - type: mqtt
    broker: tcp://fgedudb:1883
    topic: sensors/data
processing:
  window:
    size: 10s
    slide: 5s
  functions:
  - name: mean
  - name: max
  - name: min
output:
  destinations:
  - type: http
    url: http://edge-ai-service:80/api/results
  - type: local
    path: /data/analytics
EOF
# Mount the configuration via a ConfigMap
$ kubectl create configmap edge-analytics-config --from-file=analytics-config.yaml
# Update the deployment
$ cat > edge-analytics-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-analytics-app
  template:
    metadata:
      labels:
        app: edge-analytics-app
    spec:
      containers:
      - name: edge-analytics-app
        image: fgedu/edge-analytics-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
        - name: data-volume
          mountPath: /data
      volumes:
      - name: config-volume
        configMap:
          name: edge-analytics-config
      - name: data-volume
        hostPath:
          path: /opt/data
EOF
# Apply the deployment
$ kubectl apply -f edge-analytics-app.yaml
3. Configure edge security
# Configure TLS (self-signed certificate, suitable for testing)
$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
$ sudo cp cert.pem /etc/ssl/certs/
$ sudo cp key.pem /etc/ssl/private/
# Configure the firewall
$ sudo ufw allow 8080/tcp
$ sudo ufw allow 8081/tcp
$ sudo ufw allow 1883/tcp
$ sudo ufw enable
# Configure access control
$ cat > auth.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: edge-auth
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
EOF
# Apply the authentication configuration
$ kubectl apply -f auth.yaml
# Configure edge node security
$ cat > security-context.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
EOF
# Apply the security configuration
$ kubectl apply -f security-context.yaml
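Note that the values in the `edge-auth` Secret above are base64-encoded, not encrypted: anyone with read access to the Secret can recover them. The snippet below shows how those values are produced and decoded (a sketch; use stronger credentials in production):

```python
import base64

# Kubernetes Secret `data` fields hold base64-encoded strings.
# These two calls reproduce the values used in the edge-auth Secret above.
username = base64.b64encode(b"admin").decode()     # YWRtaW4=
password = base64.b64encode(b"password").decode()  # cGFzc3dvcmQ=
print(username, password)

# Decoding is trivial, which is why a Secret is obfuscation, not encryption
assert base64.b64decode(username) == b"admin"
```

For real protection, restrict Secret access with RBAC and consider encryption at rest on the cluster datastore.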
Testing and Validation
After deployment, validate the edge environment end to end:
1. Functional testing
# Test the edge computing platform
$ kubectl get nodes
$ kubectl get pods
# Test the edge applications
$ kubectl port-forward deployment/edge-ai-app 8080:8080 &
$ curl http://fgedudb:8080/health
$ kubectl port-forward deployment/edge-analytics-app 8081:8081 &
$ curl http://fgedudb:8081/health
# Test the edge network
$ kubectl get services
$ curl http://$(kubectl get node -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get service edge-ai-service -o jsonpath='{.spec.ports[0].nodePort}')/health
# Test edge data processing
$ curl -X POST http://fgedudb:8080/api/inference -H "Content-Type: application/json" -d '{
  "image": "base64_encoded_image"
}'
# Test edge analytics (port 1883 speaks MQTT, not HTTP, so publish with an MQTT client)
$ mosquitto_pub -h fgedudb -p 1883 -t sensors/data -m '{"temperature": 25.5, "humidity": 60}'
2. Performance testing
# Benchmark the edge application
$ ab -n 1000 -c 100 http://fgedudb:8080/health
# Time edge AI inference
$ time curl -X POST http://fgedudb:8080/api/inference -H "Content-Type: application/json" -d '{
  "image": "base64_encoded_image"
}'
# Measure edge analytics throughput
$ python -c "
import time
import requests
start_time = time.time()
for i in range(100):
    response = requests.post('http://fgedudb:8081/api/analyze', json={
        'temperature': 25.5 + i * 0.1,
        'humidity': 60 + i * 0.5
    })
    print(f'Request {i+1}: {response.status_code}')
end_time = time.time()
print(f'Total time: {end_time - start_time:.4f} seconds')
print(f'Average time per request: {(end_time - start_time)/100:.6f} seconds')
"
# Measure edge network latency
$ ping -c 10 $(kubectl get node -o jsonpath='{.items[0].status.addresses[0].address}')
# Check edge node resource usage
$ kubectl top node
$ kubectl top pod
Case Study
The following is a real-world edge computing case:
Background
A manufacturer needed an edge computing system on the factory floor for real-time equipment monitoring, fault prediction, and production process optimization. With a large fleet of sensors and machines, data had to be processed and analyzed on edge nodes to cut the latency and cost of shipping everything to the cloud.
Implementation plan
- Deploy K3s as the edge computing platform
- Deploy EdgeX Foundry for device management and data collection
- Deploy an edge AI application for equipment fault prediction
- Deploy an edge analytics application for production process optimization
- Configure the edge network to connect devices with the cloud
- Implement edge security measures to protect the system
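As a toy illustration of the fault-prediction idea, the sketch below flags sensor readings that deviate sharply from recent history using a rolling z-score. The threshold and data are hypothetical; the production system described above would use a trained model instead:

```python
import statistics

def detect_anomaly(history, reading, z_threshold=3.0):
    """Flag a reading whose z-score against recent history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(reading - mean) / stdev
    return z > z_threshold

# Hypothetical temperature readings from a healthy machine
history = [25.0, 25.2, 24.8, 25.1, 24.9, 25.0, 25.3, 24.7]
print(detect_anomaly(history, 25.2))  # False: within normal variation
print(detect_anomaly(history, 31.0))  # True: likely fault precursor
```

Running this kind of check on the edge node means an alert can be raised within milliseconds of the anomalous reading, without a round trip to the cloud.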
Results
After the rollout, the company reported:
- 95% accuracy in equipment fault prediction
- 20% improvement in production process efficiency
- 60% reduction in data transfer costs
- 80% reduction in system response time
- 70% reduction in equipment downtime
Troubleshooting
Common edge computing failures and how to handle them:
1. Edge node failures
# Check edge node status
$ kubectl get nodes
# Inspect a node (replace <node-name> with the actual node)
$ kubectl describe node <node-name>
# Restart the edge node's K3s service
$ sudo systemctl restart k3s
# Check network connectivity (replace <host> with the target address)
$ ping -c 4 <host>
# Check resource usage
$ kubectl top node
# Reset the edge node
$ k3s-killall.sh
$ curl -sfL https://get.k3s.io | sh -
2. Edge application failures
# Check application status
$ kubectl get pods
# View application logs (replace <pod-name> with the actual pod)
$ kubectl logs <pod-name>
# Restart the application by deleting its pod
$ kubectl delete pod <pod-name>
# Inspect the application configuration
$ kubectl describe pod <pod-name>
# Check resource limits
$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'
# Adjust resource limits
$ kubectl edit deployment <deployment-name>
3. Edge network failures
# Check network services
$ kubectl get services
# Check network policies
$ kubectl get networkpolicies
# Test network connectivity
$ kubectl run -it --rm --image=busybox busybox -- ping -c 4 edge-ai-service
# Inspect the network configuration
$ kubectl describe service edge-ai-service
# Restart the host network service
$ sudo systemctl restart NetworkManager
# Check the firewall
$ sudo ufw status
$ sudo ufw allow 8080/tcp
Performance Tuning
Concrete measures for edge computing performance tuning:
1. Resource optimization
# Configure resource limits
$ cat > resource-limits.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
EOF
# Apply the resource configuration
$ kubectl apply -f resource-limits.yaml
# Reserve node resources for system components
$ sudo nano /etc/systemd/system/k3s.service
[Service]
ExecStart=/usr/local/bin/k3s server --disable traefik --disable servicelb --kubelet-arg="system-reserved=cpu=100m,memory=100Mi" --kubelet-arg="kube-reserved=cpu=100m,memory=100Mi"
# Restart K3s
$ sudo systemctl daemon-reload
$ sudo systemctl restart k3s
# Configure pod priority
$ cat > priority-class.yaml << 'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "High priority for edge AI applications"
EOF
# Apply the priority class
$ kubectl apply -f priority-class.yaml
# Update the application's priority
$ cat > edge-ai-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      priorityClassName: high-priority
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
EOF
# Apply the configuration
$ kubectl apply -f edge-ai-app.yaml
2. Network optimization
# Note: a Kubernetes PriorityClass affects pod scheduling priority, not network
# traffic itself; true network QoS requires CNI- or OS-level traffic shaping.
$ cat > network-qos.yaml << 'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: network-high-priority
value: 1000000
globalDefault: false
description: "High scheduling priority for network-sensitive pods"
EOF
# Apply the priority class
$ kubectl apply -f network-qos.yaml
# Assign the priority class to the pod
$ cat > edge-ai-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-app
  template:
    metadata:
      labels:
        app: edge-ai-app
    spec:
      priorityClassName: network-high-priority
      containers:
      - name: edge-ai-app
        image: fgedu/edge-ai-app:latest
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
EOF
# Apply the configuration
$ kubectl apply -f edge-ai-app.yaml
# Tighten the network policy
$ cat > optimized-network-policy.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: optimized-edge-network-policy
spec:
  podSelector:
    matchLabels:
      app: edge-ai-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: edge-analytics-app
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: edge-analytics-app
    ports:
    - protocol: TCP
      port: 8081
EOF
# Apply the network policy
$ kubectl apply -f optimized-network-policy.yaml
# Tune the edge node's network stack
$ sudo nano /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
# Apply the settings
$ sudo sysctl -p
3. Compute optimization
# Optimize the edge AI model (freeze the backbone and quantize for edge inference)
$ python -c "
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
# Load a pre-trained backbone
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Add a classification head
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
# Build the model
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the backbone layers
for layer in base_model.layers:
    layer.trainable = False
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Save the full-precision model
model.save('resnet50_frozen.h5')
# Quantize the model with TFLite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
# Save the quantized model
with open('resnet50_quantized.tflite', 'wb') as f:
    f.write(tflite_model)
print('Model optimized and quantized')
"
# Point the edge application at the optimized model
$ cat > config.yaml << 'EOF'
model:
  name: resnet50
  version: 1.0
  path: /models/resnet50_quantized.tflite
  type: tflite
inference:
  batch_size: 1
  num_threads: 4
  use_npu: true
input:
  source: camera
  width: 224
  height: 224
output:
  destination: local
  path: /data/results
EOF
# Replace the existing ConfigMap (plain `create` fails if it already exists)
$ kubectl create configmap edge-ai-config --from-file=config.yaml --dry-run=client -o yaml | kubectl apply -f -
# Update the deployment
$ kubectl apply -f edge-ai-app.yaml
# Optimize the edge analytics algorithm
$ cat > optimized-analytics.py << 'EOF'
import numpy as np
import time

class OptimizedAnalytics:
    def __init__(self, window_size=10, slide_size=5):
        self.window_size = window_size
        self.slide_size = slide_size
        self.data = []

    def add_data(self, value):
        self.data.append(value)
        if len(self.data) > self.window_size:
            self.data = self.data[-self.window_size:]

    def compute_statistics(self):
        if len(self.data) < self.window_size:
            return None
        # Use numpy for efficient vectorized computation
        data_array = np.array(self.data)
        stats = {
            'mean': np.mean(data_array),
            'max': np.max(data_array),
            'min': np.min(data_array),
            'std': np.std(data_array)
        }
        return stats

# Benchmark the optimized analytics
analytics = OptimizedAnalytics()
start_time = time.time()
for i in range(1000):
    analytics.add_data(np.random.normal(25, 5))
    if i % 10 == 0:
        stats = analytics.compute_statistics()
        if stats:
            print(f'Step {i}: {stats}')
end_time = time.time()
print(f'Total time: {end_time - start_time:.4f} seconds')
EOF
# Run the optimized analytics
$ python optimized-analytics.py
Lessons Learned
From our edge computing practice, we draw the following lessons:
- Edge computing is an important paradigm for scenarios demanding low latency and high bandwidth
- Choose the edge platform according to the resource constraints of the edge nodes
- Edge application deployment and management must account for resource limits and network conditions
- Security is an integral part of edge computing
- Combining edge and cloud computing plays to the strengths of both
- Continuous monitoring and maintenance are key to keeping an edge system stable
Learning Advice
For those who want to learn edge computing, we suggest:
- Master the basic concepts and principles of edge computing
- Learn at least one edge platform, such as K3s or EdgeX Foundry
- Study edge computing application scenarios and case studies
- Build experience through real projects
- Follow the latest developments and research in edge computing
- Attend relevant training and certification programs
Future Trends
Future trends in edge computing include:
- Edge AI: more AI models running on edge nodes
- Standardization: establishment of edge computing standards
- Convergence with 5G: exploiting 5G's low-latency characteristics
- Commercialization: more commercial applications
- Stronger security: more robust edge security measures
- Ecosystem growth: a more complete edge computing ecosystem
This article was compiled and published by Fenguge Tutorials for learning and testing use only; please credit the source when reposting: http://www.fgedu.net.cn/10327.html
