Logstash is a core component of the ELK Stack (Elasticsearch, Logstash, Kibana), used for collecting, transforming, and shipping data. The following are detailed steps for doing log analysis with Logstash.
Install on Debian/Ubuntu:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install logstash
Install on RHEL/CentOS:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo tee /etc/yum.repos.d/logstash.repo <<EOF
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
sudo yum install logstash
Logstash configuration files normally live in the /etc/logstash/conf.d/ directory and contain three main sections: input, filter, and output. An example, /etc/logstash/conf.d/syslog.conf:
input {
  file {
    path => "/var/log/syslog"
    start_position => "beginning"   # read the file from the start on first run
    sincedb_path => "/dev/null"     # do not persist the read position (handy for testing)
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      # parse the extracted timestamp into @timestamp
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"  # one index per day
  }
  stdout { codec => rubydebug }       # echo events to the console for debugging
}
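To see what the grok pattern in the filter above extracts, it can be approximated with plain regular expressions. The following is a rough Python sketch; the simplified patterns and the sample log line are illustrative, and Logstash's real grok definitions are more permissive:

```python
import re

# Rough regex equivalents of the grok patterns in the filter above.
pattern = re.compile(
    r"(?P<syslog_timestamp>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) "  # SYSLOGTIMESTAMP
    r"(?P<syslog_hostname>\S+) "                                # SYSLOGHOST
    r"(?P<syslog_program>.*?)"                                  # DATA
    r"(?:\[(?P<syslog_pid>\d+)\])?: "                           # optional [POSINT]
    r"(?P<syslog_message>.*)"                                   # GREEDYDATA
)

# Hypothetical syslog line for illustration
line = "Mar  7 14:02:11 web01 sshd[1234]: Accepted publickey for deploy"
m = pattern.match(line)
print(m.groupdict())  # fields: syslog_timestamp, syslog_hostname, syslog_program, syslog_pid, syslog_message
```

The named groups correspond to the fields that grok adds to each event, which is what makes the later date filter and Kibana queries possible.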
Common input plugins:
- file: read logs from files
- syslog: receive logs over the syslog protocol
- beats: receive logs shipped by Filebeat
- kafka: consume logs from Kafka topics

Common filter plugins:
- grok: parse unstructured log data
- mutate: modify fields (rename, remove, replace, etc.)
- date: parse timestamps
- geoip: add geographic location information
- dissect: an alternative way to parse logs

Common output plugins:
- elasticsearch: send events to Elasticsearch
- file: write to a file
- kafka: send to Kafka
- stdout: print to the console (useful for debugging)

Validate the configuration syntax before starting:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/syslog.conf
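As a sketch of how some of these plugins combine, here is a hypothetical Beats pipeline; the port, index name, and field names are assumptions for illustration, not part of the syslog setup above:

```
input {
  beats {
    port => 5044            # default Filebeat output port
  }
}

filter {
  mutate {
    rename => { "host" => "source_host" }  # hypothetical field rename
    remove_field => [ "@version" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "beats-%{+YYYY.MM.dd}"
  }
}
```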
Run Logstash in the foreground:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/syslog.conf

Or run it as a system service:

sudo systemctl start logstash
sudo systemctl enable logstash
To handle several log sources, define multiple pipelines in /etc/logstash/pipelines.yml, one pipeline per source.
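A sketch of what such a pipelines.yml might look like; the pipeline IDs and the second config path are hypothetical:

```
# /etc/logstash/pipelines.yml
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
- pipeline.id: nginx                               # hypothetical second source
  path.config: "/etc/logstash/conf.d/nginx.conf"
  pipeline.workers: 2                              # per-pipeline override
```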
Performance tuning and monitoring:
- Adjust pipeline.workers and pipeline.batch.size in /etc/logstash/logstash.yml to match your CPU count and desired throughput.
- Query runtime statistics through the monitoring API:

curl -XGET 'localhost:9600/_node/stats/?pretty'

- For simple, consistently delimited log formats, the dissect plugin is a cheaper alternative to grok.
- Tune the JVM heap size in /etc/logstash/jvm.options.
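The main pipeline settings live in /etc/logstash/logstash.yml; an illustrative snippet (the values here are assumptions to adjust for your hardware and load):

```
# /etc/logstash/logstash.yml — illustrative values
pipeline.workers: 4        # typically the number of CPU cores
pipeline.batch.size: 250   # events per worker batch
pipeline.batch.delay: 50   # ms to wait before flushing an underfilled batch
```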
With the configuration and tuning above, you can use Logstash efficiently on Linux for log analysis and ship the data to Elasticsearch for further visualization and analysis.