1. ELK Platform Overview
ELK is short for Elasticsearch, Logstash, and Kibana, a complete platform for log collection, analysis, and visualization. ELK helps enterprises centralize log management, locate problems quickly, and analyze system health, which makes it a core tool in modern IT operations.
In FGedu's operations environment, we deployed an ELK cluster to centralize log management and analysis across the whole company.
1.1 ELK Architecture
ELK components:
1. Elasticsearch
- Distributed search and analytics engine
- Stores and indexes log data
- Provides a RESTful API
- Scales horizontally
2. Logstash
- Log collection and processing tool
- Supports many input sources
- Provides a rich set of filters
- Supports many output targets
3. Kibana
- Log visualization platform
- Rich charts and dashboards
- Log search and analysis
- Report export
4. Beats
- Lightweight data shippers
- Filebeat: log file collection
- Metricbeat: system metrics collection
- Packetbeat: network packet capture
- Winlogbeat: Windows event log collection
# FGedu ELK architecture
Topology:
Client servers
│
▼
Filebeat
│
▼
Logstash
│
▼
Elasticsearch cluster
│
▼
Kibana
│
▼
Monitoring system
# Deployment modes
Mode             Characteristics                Use case
--------------   ----------------------------   -----------------
Single node      Simple, low resource usage     Test environments
Cluster          Highly available, performant   Production
Containers       Flexible, easy to manage       Cloud environments
# Recommended hardware
Component       CPU        Memory   Disk
-------------   --------   ------   ------
Elasticsearch   8 cores+   16GB+    500GB+
Logstash        4 cores+   8GB+     100GB+
Kibana          4 cores+   8GB+     100GB+
Filebeat        1 core+    1GB+     50GB+
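The memory column maps directly onto the JVM heap: a common rule of thumb (an assumption of this sketch, not stated in the table) is to give Elasticsearch half of the host's RAM as heap, capped below 32GB so the JVM keeps using compressed object pointers. A small helper with the hypothetical `heap_gb` function:

```shell
#!/bin/bash
# heap_gb: suggest an Elasticsearch heap size (GB) for a host with the
# given amount of RAM (GB): half of RAM, capped at 31GB so the JVM can
# keep using compressed ordinary object pointers (oops).
heap_gb() {
  local ram_gb=$1
  local heap=$(( ram_gb / 2 ))
  if [ "$heap" -gt 31 ]; then
    heap=31
  fi
  echo "$heap"
}

heap_gb 16   # a 16GB node -> 8GB heap (-Xms8g/-Xmx8g)
heap_gb 128  # a 128GB node -> capped at 31GB
```

The result is what goes into `-Xms`/`-Xmx` in jvm.options later in this guide.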
# Version selection
ELK Stack versions:
- 7.x is recommended
- All components must run the same version
- Prefer a long-term maintained release
# Network requirements
- All components must be able to reach each other over the network
- Elasticsearch cluster nodes need a fast interconnect
- 10GbE networking is recommended
- Make sure firewall rules allow the required ports
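Before installing anything, it is worth verifying that the key ports are actually reachable between hosts. A minimal sketch using bash's built-in /dev/tcp (the hosts and ports in the comments are the examples used throughout this guide):

```shell
#!/bin/bash
# check_port: test whether a TCP port on a remote host accepts
# connections, using bash's /dev/tcp pseudo-device with a 2s timeout.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable"
  fi
}

# Example checks for this guide's topology:
# check_port 192.168.1.101 9200   # Elasticsearch HTTP
# check_port 192.168.1.101 9300   # Elasticsearch transport
# check_port 192.168.1.100 5044   # Logstash beats input
# check_port 192.168.1.100 5601   # Kibana
```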
2. Elasticsearch Installation and Configuration
2.1 Installing and Deploying Elasticsearch
# 1. Prepare the system
# Check the OS release
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6
# Check the Java version (Elasticsearch 7.x bundles its own JDK, so a system JDK is optional)
$ java -version
openjdk version "11.0.12"
OpenJDK Runtime Environment (build 11.0.12+7)
OpenJDK 64-Bit Server VM (build 11.0.12+7, mixed mode)
# 2. Install Elasticsearch
# Add the Elasticsearch repository
$ cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
# Install Elasticsearch
$ yum install -y elasticsearch-7.17.0
# 3. Configure Elasticsearch
$ cat /etc/elasticsearch/elasticsearch.yml
# Cluster settings
cluster.name: fgedu-es-cluster
node.name: es-node-1
# Network settings
network.host: 0.0.0.0
http.port: 9200
# Cluster discovery
discovery.seed_hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
# Data and log paths
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
# Memory settings
bootstrap.memory_lock: true
# Security settings
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
# 4. Configure heap size
$ cat /etc/elasticsearch/jvm.options
-Xms8g
-Xmx8g
# 5. Create the data directory
$ mkdir -p /data/elasticsearch
$ chown -R elasticsearch:elasticsearch /data/elasticsearch
# 6. Configure kernel parameters
$ cat > /etc/sysctl.d/elasticsearch.conf << EOF
vm.max_map_count=262144
EOF
$ sysctl -p /etc/sysctl.d/elasticsearch.conf
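One more prerequisite: `bootstrap.memory_lock: true` only takes effect if systemd allows the elasticsearch user to lock unlimited memory; otherwise the node logs a memory-lock failure at startup. A sketch that writes the usual systemd drop-in override (the function wrapper and its directory parameter are illustrative, so the same logic can be tried outside /etc):

```shell
#!/bin/bash
# write_memlock_override: create a systemd drop-in that lifts the
# memory-lock limit for the elasticsearch service.
write_memlock_override() {
  local dir=$1
  mkdir -p "$dir"
  cat > "$dir/override.conf" << 'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
}

# Real usage (as root):
# write_memlock_override /etc/systemd/system/elasticsearch.service.d
# systemctl daemon-reload && systemctl restart elasticsearch
# Afterwards, confirm the lock succeeded:
# curl -s "http://fgedudb:9200/_nodes?filter_path=**.mlockall"
```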
# 7. Start Elasticsearch
$ systemctl enable elasticsearch
$ systemctl start elasticsearch
# 8. Verify the installation
$ curl -X GET "http://fgedudb:9200/"
{
"name" : "es-node-1",
"cluster_name" : "fgedu-es-cluster",
"cluster_uuid" : "QqX9s7x0QyG7z1X5e7l3Uw",
"version" : {
"number" : "7.17.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
"build_date" : "2026-01-01T00:00:00.000Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
# 9. Configure security
# Set the built-in user passwords
$ /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [elastic]
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
# Verify the security setup
$ curl -u elastic:password -X GET "http://fgedudb:9200/"
{
"name" : "es-node-1",
"cluster_name" : "fgedu-es-cluster",
"cluster_uuid" : "QqX9s7x0QyG7z1X5e7l3Uw",
"version" : {
"number" : "7.17.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
"build_date" : "2026-01-01T00:00:00.000Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
# 10. Cluster setup
# Install Elasticsearch on the remaining nodes
# Adjust elasticsearch.yml on each node, for example:
node.name: es-node-2
network.host: 192.168.1.102
# Start every node
$ systemctl start elasticsearch
# Check cluster health
$ curl -u elastic:password -X GET "http://fgedudb:9200/_cluster/health"
{
"cluster_name" : "fgedu-es-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
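During rolling restarts it is handy to wait for the cluster to return to green instead of checking manually. A small polling sketch (the `wait_for_green` helper and the hard-coded elastic:password credentials are illustrative; it assumes `jq` is installed):

```shell
#!/bin/bash
# wait_for_green: poll _cluster/health until the cluster reports green,
# or give up after a number of tries (default 30, 2s apart).
wait_for_green() {
  local url=$1 tries=${2:-30}
  local i status
  for (( i = 0; i < tries; i++ )); do
    status=$(curl -s -u elastic:password "$url/_cluster/health" | jq -r '.status')
    if [ "$status" = "green" ]; then
      echo "cluster is green"
      return 0
    fi
    sleep 2
  done
  echo "cluster did not reach green (last status: $status)"
  return 1
}

# wait_for_green "http://fgedudb:9200"
```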
3. Logstash Installation and Configuration
3.1 Installing and Configuring Logstash
# 1. Install Logstash
$ yum install -y logstash-7.17.0
# 2. Configure Logstash
# Create a pipeline configuration
$ cat /etc/logstash/conf.d/filebeat-input.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.101:9200", "http://192.168.1.102:9200", "http://192.168.1.103:9200"]
    # note: the built-in logstash_system user is meant for monitoring only;
    # in production, create a dedicated user with write privileges
    user => "logstash_system"
    password => "Fgedu@Logstash123"
    index => "apache-logs-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
# 3. Test the configuration
$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-input.conf --config.test_and_exit
Configuration OK
# 4. Start Logstash
$ systemctl enable logstash
$ systemctl start logstash
# 5. Verify Logstash (the monitoring API on port 9600 is queried with GET)
$ curl -X GET "http://fgedudb:9600/"
{
  "host" : "fgedu-logstash",
  "version" : "7.17.0",
  "http_address" : "127.0.0.1:9600",
  "id" : "12345678-1234-1234-1234-123456789012",
  "name" : "fgedu-logstash",
  "build_date" : "2026-01-01T00:00:00Z",
  "build_sha" : "1234567890abcdef1234567890abcdef12345678",
  "build_snapshot" : false
}
# 6. Configure multiple pipelines
# Create the pipelines file
$ cat /etc/logstash/pipelines.yml
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/apache.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
- pipeline.id: application
  path.config: "/etc/logstash/conf.d/application.conf"
# 7. Configure a syslog input (binding port 514 requires root privileges or a port redirect)
$ cat /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    port => 514
    type => "syslog"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.101:9200"]
    user => "logstash_system"
    password => "Fgedu@Logstash123"
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
# 8. Configure an application log input
$ cat /etc/logstash/conf.d/application.conf
input {
  beats {
    port => 5045
  }
}
filter {
  if [fields][log_type] == "application" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} \[%{DATA:thread}\] %{DATA:class} - %{GREEDYDATA:log_message}" }
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.101:9200"]
    user => "logstash_system"
    password => "Fgedu@Logstash123"
    index => "application-%{+YYYY.MM.dd}"
  }
}
# 9. Restart Logstash
$ systemctl restart logstash
# 10. Tail the Logstash log
$ tail -f /var/log/logstash/logstash-plain.log
[2026-04-03T10:00:00,000][INFO ][logstash.javapipeline ][apache] Pipeline started successfully
[2026-04-03T10:00:00,000][INFO ][logstash.javapipeline ][syslog] Pipeline started successfully
[2026-04-03T10:00:00,000][INFO ][logstash.javapipeline ][application] Pipeline started successfully
[2026-04-03T10:00:00,000][INFO ][logstash.agent ] Pipelines running {:count=>3, :running_pipelines=>[:apache, :syslog, :application], :non_running_pipelines=>[]}
4. Kibana Installation and Configuration
4.1 Installing and Configuring Kibana
# 1. Install Kibana
$ yum install -y kibana-7.17.0
# 2. Configure Kibana
$ cat /etc/kibana/kibana.yml
# Server settings
server.host: "0.0.0.0"
server.port: 5601
# Elasticsearch connection
elasticsearch.hosts: ["http://192.168.1.101:9200", "http://192.168.1.102:9200", "http://192.168.1.103:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "Fgedu@Kibana123"
# Security (enabled by default when Elasticsearch security is on)
xpack.security.enabled: true
# Localization
i18n.locale: "zh-CN"
# 3. Start Kibana
$ systemctl enable kibana
$ systemctl start kibana
# 4. Verify Kibana
$ curl -X GET "http://fgedudb:5601/api/status"
{
  "status": {
    "overall": {
      "state": "green",
      "title": "Green",
      "nickname": "Looking good",
      "icon": "success",
      "uiColor": "Kibana::Palette::Green"
    },
    "statuses": [
      {
        "id": "kibana",
        "state": "green",
        "title": "Kibana status",
        "nickname": "All good",
        "icon": "success",
        "uiColor": "Kibana::Palette::Green"
      },
      {
        "id": "elasticsearch",
        "state": "green",
        "title": "Elasticsearch status",
        "nickname": "All good",
        "icon": "success",
        "uiColor": "Kibana::Palette::Green"
      }
    ]
  }
}
# 5. Open Kibana
# Point a browser at: http://192.168.1.100:5601
# 6. Create an index pattern
# Log in to Kibana
- Username: elastic
- Password: Fgedu@Elastic123
# Create the index pattern
1. Go to "Management" -> "Stack Management"
2. Go to "Kibana" -> "Index Patterns"
3. Click "Create index pattern"
4. Enter the index pattern: apache-logs-*
5. Click "Next step"
6. Select the time field: @timestamp
7. Click "Create index pattern"
# 7. Build a dashboard
# Create a visualization
1. Click "Visualize Library"
2. Click "Create visualization"
3. Choose a visualization type: Bar horizontal
4. Choose the index pattern: apache-logs-*
5. Configure the X axis: Terms aggregation on clientip
6. Configure the Y axis: Count
7. Click "Save"
# Create the dashboard
1. Click "Dashboard"
2. Click "Create dashboard"
3. Click "Add"
4. Select the visualization created above
5. Click "Save"
6. Enter a dashboard name: Apache access log analysis
7. Click "Save"
# 8. Configure alerting
# Create an alert
1. Go to "Management" -> "Stack Management"
2. Click "Alerting"
3. Click "Create alert"
4. Choose the alert type: Index threshold
5. Configure the alert parameters
6. Click "Save"
# 9. Configure monitoring
# Enable monitoring
1. Go to "Management" -> "Stack Management"
2. Click "Monitoring"
3. Click "Enable monitoring"
# View monitoring data
1. Click "Monitoring"
2. Review cluster status
3. Review node status
4. Review index status
# 10. Configure backups
# Register a snapshot repository
1. Go to "Management" -> "Stack Management"
2. Click "Snapshot and Restore"
3. Click "Repositories"
4. Click "Register a repository"
5. Choose the repository type: Shared file system
6. Configure the repository path
7. Click "Register"
# Take a snapshot
1. Click "Snapshots"
2. Click "Take snapshot"
3. Enter a snapshot name
4. Click "Take snapshot"
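A "Shared file system" repository only registers successfully if every data node lists the location under `path.repo` in elasticsearch.yml and mounts the same shared filesystem (e.g. NFS). A sketch of the API-side registration; the repository path `/backup/es_snapshots` and the `repo_body` helper are illustrative assumptions, not taken from the Kibana steps above:

```shell
#!/bin/bash
# repo_body: build the JSON body for registering a shared-filesystem
# snapshot repository at the given location.
repo_body() {
  local location=$1
  printf '{ "type": "fs", "settings": { "location": "%s", "compress": true } }' "$location"
}

# 1) On every Elasticsearch node, add to elasticsearch.yml and restart:
#    path.repo: ["/backup/es_snapshots"]
# 2) Register the repository used in the backup section of this guide:
# curl -u elastic:password -X PUT "http://fgedudb:9200/_snapshot/fgedu_backup" \
#   -H 'Content-Type: application/json' -d "$(repo_body /backup/es_snapshots)"
repo_body /backup/es_snapshots
```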
5. Filebeat Installation and Configuration
5.1 Installing and Configuring Filebeat
# 1. Install Filebeat
$ yum install -y filebeat-7.17.0
# 2. Configure Filebeat
$ cat /etc/filebeat/filebeat.yml
# Inputs
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  tags: ["apache"]
  fields:
    log_type: "apache"
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
  tags: ["syslog"]
  fields:
    log_type: "syslog"
- type: log
  enabled: true
  paths:
    - /opt/app/logs/*.log
  tags: ["application"]
  fields:
    log_type: "application"
# Output
output.logstash:
  hosts: ["192.168.1.100:5044"]
# Index template settings
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
# 3. Configure modules
# Enable the Apache module
$ filebeat modules enable apache
# Configure the Apache module
$ cat /etc/filebeat/modules.d/apache.yml
- module: apache
  access:
    enabled: true
    var.paths: ["/var/log/httpd/access_log"]
  error:
    enabled: true
    var.paths: ["/var/log/httpd/error_log"]
# 4. Start Filebeat
$ systemctl enable filebeat
$ systemctl start filebeat
# 5. Verify Filebeat
$ filebeat test config
Config OK
$ filebeat test output
logstash: 192.168.1.100:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.1.100
  dial up... OK
  TLS...
    security: No TLS configured for this output
  talk to server... OK
# 6. Configure Metricbeat
# Install Metricbeat
$ yum install -y metricbeat-7.17.0
# Configure Metricbeat
$ cat /etc/metricbeat/metricbeat.yml
# Modules (note: the system module has no "disk" metricset; diskio is used here)
metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - memory
    - diskio
    - filesystem
    - network
  period: 10s
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
# Output (note: the built-in beats_system user is for monitoring only;
# in production, create a user with write privileges)
output.elasticsearch:
  hosts: ["http://192.168.1.101:9200", "http://192.168.1.102:9200", "http://192.168.1.103:9200"]
  username: "beats_system"
  password: "Fgedu@Beats123"
# 7. Start Metricbeat
$ systemctl enable metricbeat
$ systemctl start metricbeat
# 8. Configure Winlogbeat (Windows servers)
# Download Winlogbeat
# https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-7.17.0-windows-x86_64.zip
# Configure Winlogbeat
$ cat winlogbeat.yml
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System
output.logstash:
  hosts: ["192.168.1.100:5044"]
# Install the Winlogbeat service
PS C:\winlogbeat> .\install-service-winlogbeat.ps1
# Start the Winlogbeat service
PS C:\winlogbeat> Start-Service winlogbeat
# 9. Configure Packetbeat
# Install Packetbeat
$ yum install -y packetbeat-7.17.0
# Configure Packetbeat (note: Packetbeat has no generic "tcp" analyzer;
# database ports use the mysql and pgsql protocol analyzers)
$ cat /etc/packetbeat/packetbeat.yml
packetbeat.interfaces.device: any
packetbeat.protocols:
- type: http
  ports: [80, 8080, 8443]
- type: mysql
  ports: [3306]
- type: pgsql
  ports: [5432]
- type: dns
  ports: [53]
output.elasticsearch:
  hosts: ["http://192.168.1.101:9200"]
  username: "beats_system"
  password: "Fgedu@Beats123"
# Start Packetbeat
$ systemctl enable packetbeat
$ systemctl start packetbeat
# 10. Check Beats status (the Beats binaries have no "status" subcommand; use systemd)
$ systemctl is-active filebeat metricbeat
active
active
$ filebeat version
filebeat version 7.17.0 (amd64)
6. ELK Monitoring and Management
6.1 ELK Monitoring Configuration
# 1. Monitor Elasticsearch
# X-Pack is bundled with Elasticsearch 7.x; no plugin installation is required
# Check cluster health
$ curl -u elastic:password -X GET "http://fgedudb:9200/_cluster/health"
{
  "cluster_name" : "fgedu-es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 10,
  "active_shards" : 30,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
# Check node status
$ curl -u elastic:password -X GET "http://fgedudb:9200/_cat/nodes?v"
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.101           50          80   2     0.5     0.3      0.2 dilmrt    *      es-node-1
192.168.1.102           45          75   1     0.3     0.2      0.1 dilmrt    -      es-node-2
192.168.1.103           48          78   1     0.4     0.3      0.2 dilmrt    -      es-node-3
# Check index status
$ curl -u elastic:password -X GET "http://fgedudb:9200/_cat/indices?v"
health status index                  uuid             pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_7.17.0_001     abcdef1234567890   1   2          5            0      2.3mb          780kb
green  open   apache-logs-2026.04.03 1234567890abcdef   3   2      12500            0     15.2mb          5.1mb
green  open   syslog-2026.04.03      fedcba0987654321   3   2       8500            0      8.7mb          2.9mb
# 2. Monitor Logstash
# Check Logstash node stats
$ curl -X GET "http://fgedudb:9600/_node/stats"
{
  "host" : "fgedu-logstash",
  "version" : "7.17.0",
  "http_address" : "127.0.0.1:9600",
  "id" : "12345678-1234-1234-1234-123456789012",
  "name" : "fgedu-logstash",
  "pipeline" : {
    "events" : {
      "in" : 12500,
      "filtered" : 12500,
      "out" : 12500,
      "queue_push_duration_in_millis" : 1234,
      "duration_in_millis" : 5678
    }
  }
}
# 3. Monitor Kibana
# Check Kibana status
$ curl -X GET "http://fgedudb:5601/api/status"
{
  "status": {
    "overall": {
      "state": "green",
      "title": "Green",
      "nickname": "Looking good",
      "icon": "success",
      "uiColor": "Kibana::Palette::Green"
    }
  }
}
# 4. Alerting configuration
# Alerting rules are created and managed in Kibana; on the Elasticsearch side,
# the Watcher feature only needs to be enabled (it is on by default with a suitable license)
$ cat /etc/elasticsearch/elasticsearch.yml
xpack.watcher.enabled: true
# Create an alert in Kibana
1. Go to "Management" -> "Stack Management"
2. Click "Alerting"
3. Click "Create alert"
4. Configure the alert rule
5. Configure the notification channel
# 5. Backup and restore
# Take a snapshot (the fgedu_backup repository must already be registered)
$ curl -u elastic:password -X PUT "http://fgedudb:9200/_snapshot/fgedu_backup/snapshot_20260403?wait_for_completion=true"
{
  "snapshot" : {
    "snapshot" : "snapshot_20260403",
    "uuid" : "abcdef1234567890",
    "version_id" : 7170099,
    "version" : "7.17.0",
    "indices" : [".kibana_7.17.0_001", "apache-logs-2026.04.03", "syslog-2026.04.03"],
    "include_global_state" : true,
    "state" : "SUCCESS",
    "start_time" : "2026-04-03T10:00:00.000Z",
    "end_time" : "2026-04-03T10:05:00.000Z",
    "duration_in_millis" : 300000,
    "failures" : [ ],
    "shards" : {
      "total" : 30,
      "failed" : 0,
      "successful" : 30
    }
  }
}
# Restore a snapshot
$ curl -u elastic:password -X POST "http://fgedudb:9200/_snapshot/fgedu_backup/snapshot_20260403/_restore" -H 'Content-Type: application/json' -d'
{
  "indices" : "apache-logs-2026.04.03",
  "rename_pattern" : "(.*)",
  "rename_replacement" : "restored_$1"
}'
# 6. Performance tuning
# Elasticsearch tuning
# 1. Heap tuning (at most half of RAM, and below 32GB)
$ cat /etc/elasticsearch/jvm.options
-Xms16g
-Xmx16g
# 2. Index tuning (number_of_shards cannot be changed on an existing index,
# so only dynamic settings are updated here)
$ curl -u elastic:password -X PUT "http://fgedudb:9200/apache-logs-*/_settings" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "number_of_replicas" : 2,
    "refresh_interval" : "30s",
    "translog" : {
      "flush_threshold_size" : "1gb"
    }
  }
}'
# 3. Cache tuning (these are static node settings and belong in elasticsearch.yml,
# not in the cluster settings API)
$ cat /etc/elasticsearch/elasticsearch.yml
indices.fielddata.cache.size: 20%
indices.memory.index_buffer_size: 30%
# Logstash tuning
# 1. Heap tuning
$ cat /etc/logstash/jvm.options
-Xms4g
-Xmx4g
# 2. Pipeline tuning
$ cat /etc/logstash/pipelines.yml
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/apache.conf"
  pipeline.workers: 4
  pipeline.batch.size: 1000
  pipeline.batch.delay: 50
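`pipeline.workers` defaults to the host's CPU core count; on a box that also runs other services, a common approach is to leave a core or two free. The heuristic below is an illustrative assumption, not an Elastic recommendation:

```shell
#!/bin/bash
# suggest_workers: suggest a pipeline.workers value, leaving one core
# free when the host has more than two cores (hypothetical heuristic).
suggest_workers() {
  local cores=${1:-$(nproc)}
  local workers=$(( cores > 2 ? cores - 1 : 1 ))
  echo "$workers"
}

suggest_workers      # based on this host's core count
suggest_workers 8    # an 8-core Logstash host -> 7 workers
```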
# Kibana tuning
# 1. Payload limit
$ cat /etc/kibana/kibana.yml
server.maxPayloadBytes: 104857600
# 2. Caching: Kibana 7.x manages its asset bundle cache automatically;
# the legacy optimize.* settings no longer exist
# 7. Security configuration
# 1. Access control
# Create a role
$ curl -u elastic:password -X POST "http://fgedudb:9200/_security/role/logs_viewer" -H 'Content-Type: application/json' -d'
{
  "cluster" : [ "monitor" ],
  "indices" : [
    {
      "names" : [ "apache-logs-*", "syslog-*" ],
      "privileges" : [ "read", "view_index_metadata" ]
    }
  ]
}'
# Create a user
$ curl -u elastic:password -X POST "http://fgedudb:9200/_security/user/logs_user" -H 'Content-Type: application/json' -d'
{
  "password" : "Fgedu@Logs123",
  "roles" : [ "logs_viewer" ],
  "full_name" : "Logs Viewer"
}'
# 2. TLS configuration
# Generate certificates
$ /usr/share/elasticsearch/bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""
# Configure transport TLS
$ cat /etc/elasticsearch/elasticsearch.yml
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
# 3. Audit logging
$ cat /etc/elasticsearch/elasticsearch.yml
xpack.security.audit.enabled: true
xpack.security.audit.logfile.events.emit_request_body: true
# 8. Routine maintenance
# 1. Index management
# Create an index template
$ curl -u elastic:password -X PUT "http://fgedudb:9200/_index_template/apache_logs" -H 'Content-Type: application/json' -d'
{
  "index_patterns" : ["apache-logs-*"],
  "template" : {
    "settings" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    },
    "mappings" : {
      "properties" : {
        "@timestamp" : { "type" : "date" },
        "clientip" : { "type" : "ip" },
        "request" : { "type" : "text" },
        "status" : { "type" : "integer" },
        "bytes" : { "type" : "integer" }
      }
    }
  }
}'
# Configure an index lifecycle policy
$ curl -u elastic:password -X PUT "http://fgedudb:9200/_ilm/policy/apache_logs_policy" -H 'Content-Type: application/json' -d'
{
  "policy" : {
    "phases" : {
      "hot" : {
        "actions" : {
          "rollover" : {
            "max_size" : "30gb",
            "max_age" : "7d"
          }
        }
      },
      "delete" : {
        "min_age" : "30d",
        "actions" : {
          "delete" : {}
        }
      }
    }
  }
}'
# 2. Log cleanup
# Delete old indices
$ curl -u elastic:password -X DELETE "http://fgedudb:9200/apache-logs-2026.03.*"
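For scripted cleanup of daily indices like apache-logs-YYYY.MM.dd, the date embedded in the index name can be compared against a cutoff; zero-padded dates sort correctly as strings. A sketch with the hypothetical `should_delete` helper (ILM, configured above, remains the preferred mechanism):

```shell
#!/bin/bash
# should_delete: decide whether a daily index named like
# prefix-YYYY.MM.dd falls before a cutoff date (format YYYY.MM.dd).
should_delete() {
  local index=$1 cutoff=$2
  local suffix=${index##*-}  # keep only the date after the last dash
  if [[ "$suffix" < "$cutoff" ]]; then
    echo "delete $index"
  else
    echo "keep $index"
  fi
}

cutoff=$(date -d "30 days ago" '+%Y.%m.%d')  # GNU date, as on RHEL
should_delete "apache-logs-2026.03.01" "2026.03.04"
should_delete "apache-logs-2026.03.10" "2026.03.04"
# Scripted cleanup against the cluster (hosts/credentials as in this guide):
# for idx in $(curl -s -u elastic:password "http://fgedudb:9200/_cat/indices/apache-logs-*?h=index"); do
#   [ "$(should_delete "$idx" "$cutoff")" = "delete $idx" ] && \
#     curl -u elastic:password -X DELETE "http://fgedudb:9200/$idx"
# done
```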
# 3. Cluster health check
#!/bin/bash
# File: elk_health_check.sh
ELASTICSEARCH_URL="http://fgedudb:9200"
USERNAME="elastic"
PASSWORD="Fgedu@Elastic123"
check_elasticsearch() {
  echo "Checking Elasticsearch..."
  status=$(curl -s -u $USERNAME:$PASSWORD "$ELASTICSEARCH_URL/_cluster/health" | jq -r '.status')
  if [ "$status" == "green" ]; then
    echo "Elasticsearch status: OK ($status)"
  elif [ "$status" == "yellow" ]; then
    echo "Warning: Elasticsearch status is yellow"
  else
    echo "Error: Elasticsearch status is $status"
  fi
}
check_logstash() {
  echo "Checking Logstash..."
  if curl -s "http://fgedudb:9600/" > /dev/null; then
    echo "Logstash status: OK"
  else
    echo "Error: Logstash is not running"
  fi
}
check_kibana() {
  echo "Checking Kibana..."
  if curl -s "http://fgedudb:5601/api/status" > /dev/null; then
    echo "Kibana status: OK"
  else
    echo "Error: Kibana is not running"
  fi
}
echo "=== ELK health check ==="
echo "Time: $(date '+%Y-%m-%d %H:%M:%S')"
check_elasticsearch
check_logstash
check_kibana
$ chmod +x elk_health_check.sh
$ ./elk_health_check.sh
=== ELK health check ===
Time: 2026-04-03 10:00:00
Checking Elasticsearch...
Elasticsearch status: OK (green)
Checking Logstash...
Logstash status: OK
Checking Kibana...
Kibana status: OK
Summary
The ELK log-analysis platform is a key tool in modern IT operations; knowing how to install, configure, and manage it matters for enterprise log management and troubleshooting. This tutorial covered the ELK platform overview, and the installation, configuration, and monitoring of Elasticsearch, Logstash, Kibana, and Filebeat.
In practice, plan the ELK architecture around your business needs and maintain and tune it regularly to keep the platform stable and performant. More tutorials are available at www.fgedu.net.cn.
Compiled and published by Fengge Tutorials (风哥教程) for learning and testing purposes only; when reposting, credit the source: http://www.fgedu.net.cn/10327.html
