ELK Log Collection
Preface
I recently set up an ELK platform to collect logs. The solutions I found online didn't meet my needs, so after going through the official documentation I put together my own configuration, recorded here.
lidop log collection
I. Deployment Plan
1. Plan:
- Components: elasticsearch + filebeat + kibana
- Version: 7.12.1
2. ES cluster deployment:

Machine | Node Name | HTTP Port | TCP Transport Port | Server Address | Install Method
---|---|---|---|---|---
b91 | yhow-node-1 | 9200 | 9301 | 192.168.2.91 | zypper
b92 | yhow-node-2 | 9200 | 9302 | 192.168.2.92 | zypper
b93 | yhow-node-3 | 9200 | 9303 | 192.168.2.93 | zypper

3. Kibana deployment:

Machine | HTTP Port | Address | Install Method
---|---|---|---
b91 | 5601 | 192.168.2.91 | zypper

4. Filebeat deployment:

Machine | Address | Install Method
---|---|---
lidop1-bj | 172.20.103.100 | yum
lidop2-bj | 172.20.103.99 | yum
lidop1-tky | 172.31.19.198 | yum
lidop2-tky | 172.31.25.158 | yum
lidop1-tw | 172.30.64.8 | yum
lidop2-tw | 172.30.64.9 | yum
II. ES Cluster Configuration
For the xpack settings in these files, see the ES authentication section below.
- b91

/etc/elasticsearch/elasticsearch.yml

```yaml
cluster.name: yhow
node.name: yhow-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.2.91
http.port: 9200
transport.tcp.port: 9301
discovery.seed_hosts: ["192.168.2.91:9301", "192.168.2.92:9302", "192.168.2.93:9303"]
cluster.initial_master_nodes: ["yhow-node-1", "yhow-node-2", "yhow-node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
- b92

/etc/elasticsearch/elasticsearch.yml

```yaml
cluster.name: yhow
node.name: yhow-node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.2.92
http.port: 9200
transport.tcp.port: 9302
discovery.seed_hosts: ["192.168.2.91:9301", "192.168.2.92:9302", "192.168.2.93:9303"]
cluster.initial_master_nodes: ["yhow-node-1", "yhow-node-2", "yhow-node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
- b93

/etc/elasticsearch/elasticsearch.yml

```yaml
cluster.name: yhow
node.name: yhow-node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.2.93
http.port: 9200
transport.tcp.port: 9303
discovery.seed_hosts: ["192.168.2.91:9301", "192.168.2.92:9302", "192.168.2.93:9303"]
cluster.initial_master_nodes: ["yhow-node-1", "yhow-node-2", "yhow-node-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
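The port scheme above is easy to mistype: all three nodes share HTTP port 9200 (they sit on different machines), while each node gets its own transport port, and `discovery.seed_hosts` must pair every node's `network.host` with its transport port, not the HTTP port. A small sanity-check sketch of that mapping (an illustration, not part of the ES setup):

```python
# Each node's (network.host, transport.tcp.port) pair, taken from the configs above.
nodes = {
    "yhow-node-1": ("192.168.2.91", 9301),
    "yhow-node-2": ("192.168.2.92", 9302),
    "yhow-node-3": ("192.168.2.93", 9303),
}

# discovery.seed_hosts is simply host:transport_port for every node;
# the same list goes into every node's elasticsearch.yml.
seed_hosts = [f"{host}:{port}" for host, port in nodes.values()]
print(seed_hosts)  # → ['192.168.2.91:9301', '192.168.2.92:9302', '192.168.2.93:9303']
```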
1. ES cluster authentication
Since ES 7.0 the xpack plugin has changed from a commercial add-on to free to use, so xpack is used here for authentication and authorization.
- bin directory: /usr/share/elasticsearch/bin
- In the ES bin directory, generate a certificate:

```shell
./elasticsearch-certutil cert -out elastic-certificates.p12 -pass ""
```

- Copy the generated certificate to ES_CONF (/etc/elasticsearch) on every node
- Add to every node's configuration:

```yaml
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```

Generate passwords for the ES built-in users (ES syncs user information to all nodes). In the bin directory, run:

```shell
./elasticsearch-setup-passwords auto
```

```
Changed password for user apm_system
PASSWORD apm_system = ***************
Changed password for user kibana_system
PASSWORD kibana_system = ***************
Changed password for user kibana
PASSWORD kibana = ***************
Changed password for user logstash_system
PASSWORD logstash_system = ***************
Changed password for user beats_system
PASSWORD beats_system = ***************
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = ***************
Changed password for user elastic
PASSWORD elastic = ***************
```
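These generated passwords appear only once, so it is worth capturing them somewhere safe. A small sketch that parses the tool's output (format as shown above; the sample values here are placeholders):

```python
import re

# Sample of `elasticsearch-setup-passwords auto` output; values are made up.
sample = """Changed password for user kibana_system
PASSWORD kibana_system = abc123
Changed password for user elastic
PASSWORD elastic = xyz789"""

# Each credential line has the form "PASSWORD <user> = <password>".
passwords = dict(re.findall(r"^PASSWORD (\S+) = (\S+)$", sample, re.M))
print(passwords["elastic"])  # → xyz789
```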
III. Kibana Configuration
- b91

/etc/kibana/kibana.yml

```yaml
server.port: 5601
server.host: "192.168.2.91"
server.name: "yhow"
elasticsearch.hosts: ["http://192.168.2.91:9200", "http://192.168.2.92:9200", "http://192.168.2.93:9200"]
elasticsearch.username: "kibana_system"  # user Kibana uses to reach ES; provided by ES by default
elasticsearch.password: "***************"  # password for that user, from the setup step above
i18n.locale: "en"  # UI language; the Chinese translation is incomplete in this version
```
IV. Filebeat Configuration
- all lidop machines

```yaml
filebeat.config.inputs:
  enabled: true
  path: /etc/filebeat/inputs.d/*.yml  # an nginx-style conf.d layout, easier to manage
setup.ilm.enabled: false
setup.template.name: "whyhow"
setup.template.pattern: "whyhow-*"
output.elasticsearch:
  hosts: ["192.168.2.91:9200", "192.168.2.92:9200", "192.168.2.93:9200"]
  index: "%{[fields.index]:other}-%{+yyyy.MM.dd}"
  username: "filebeat_client"  # user Filebeat uses to reach ES; NOT provided by ES by default
  password: "***************"  # password for that user; you must create it yourself
```
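The `index` setting above uses Beats format strings: `%{[fields.index]:other}` resolves to the event's `fields.index` value (falling back to `other`), and `%{+yyyy.MM.dd}` appends the event date. A rough Python reimplementation of how the name resolves (an illustration of the behavior, not Beats code):

```python
from datetime import date

def resolve_index(event_fields: dict, day: date) -> str:
    # %{[fields.index]:other} -> value of fields.index, or "other" if missing
    name = event_fields.get("index", "other")
    # %{+yyyy.MM.dd} -> the event's date, dot-separated
    return f"{name}-{day.strftime('%Y.%m.%d')}"

print(resolve_index({"index": "whyhow-sdk"}, date(2019, 6, 22)))  # → whyhow-sdk-2019.06.22
print(resolve_index({}, date(2019, 6, 22)))                       # → other-2019.06.22
```

This is why each input config below sets `fields.index`: it becomes the index name prefix, and events without it land in the daily `other-*` index.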
Privileges required for filebeat to access ES:
- Cluster privileges: monitor, manage (or all)
- Index privileges: auto_configure, create_index, manage (or all)
- Indices: create as needed
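Since the `filebeat_client` user is not built in, one way to create a matching role and user is through the ES security API (a sketch; the `filebeat_writer` role name is hypothetical, `create_doc` is added so Filebeat can actually write events, and the privileges can be tightened to taste):

```
POST /_security/role/filebeat_writer
{
  "cluster": ["monitor", "manage"],
  "indices": [
    {
      "names": ["whyhow-*", "other-*"],
      "privileges": ["auto_configure", "create_index", "manage", "create_doc"]
    }
  ]
}

POST /_security/user/filebeat_client
{
  "password": "***************",
  "roles": ["filebeat_writer"]
}
```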
- Filebeat input configuration

/etc/filebeat/inputs.d/*.yml

```yaml
- type: log
  enabled: true
  paths:
    - /var/log2/lidop/sdk/sdk.log  # path of the log file to collect
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
    index: 'whyhow-sdk'
  processors:
    - add_host_metadata: ~
    - script:
        lang: javascript
        id: creat_time_filter
        source: >  # convert the log's own time field for the timestamp processor below
          function process(event) {
              var createTime = event.Get("createTime");
              var d = new Date(createTime);
              event.Put("tmp_time", d);
          }
    - decode_json_fields:
        fields: ["message"]
        target: ""
    - timestamp:
        field: tmp_time
        timezone: Asia/Shanghai
        layouts:
          - '2006-01-02 15:04:05'
        test:
          - '2019-06-22 16:33:51'
    - drop_fields:
        fields: ["log", "input", "agent", "ecs", "cloud", "tmp_time"]
        ignore_missing: false
```
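The `script` + `timestamp` processor pair above replaces `@timestamp` with the log's own `createTime`, interpreted in Asia/Shanghai; the Go-style layout `2006-01-02 15:04:05` simply means "YYYY-MM-DD HH:MM:SS". The intended conversion can be sketched in Python (an illustration of the behavior, not Beats internals):

```python
from datetime import datetime, timezone, timedelta

# Asia/Shanghai is UTC+8, matching `timezone: Asia/Shanghai` in the processor.
CST = timezone(timedelta(hours=8))

def to_timestamp(create_time: str) -> datetime:
    # Layout '2006-01-02 15:04:05' corresponds to this strptime format;
    # the naive log time is interpreted in Asia/Shanghai.
    return datetime.strptime(create_time, "%Y-%m-%d %H:%M:%S").replace(tzinfo=CST)

# The `test` value from the config above:
print(to_timestamp("2019-06-22 16:33:51").isoformat())  # → 2019-06-22T16:33:51+08:00
```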
This post is published under the CC license; reposts must credit the author and link back to this article.