Preface
As the title says, this is a record of a log collection system I set up and deployed a while ago. Everything runs via docker-compose; as always, the point is mainly the approach.
Approach: on a single server, run three ES nodes plus Kibana with docker-compose and enable SSL, then use Filebeat to ship the server's application logs into ES, and finally use Kibana for visualization.
Configuration
Directory creation
mkdir -p /data/{es/node-1/{data,certs,logs,config},plugins}
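The command above only covers node-1; a quick sketch of preparing all three node directories plus the Kibana config directory (paths as used later in this post):
mkdir -p /data/es/node-{1,2,3}/{data,certs,logs,config} /data/plugins
mkdir -p /data/kibana/config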
Deployment file
version: "3"
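Only the first line of the deployment file is shown above; below is a minimal sketch of what the full docker-compose.yml could look like, assuming the 7.17.5 images, a user-defined bridge network on 172.20.0.0/16 with node-1/2/3 at 172.20.0.3-5 and Kibana at 172.20.0.6 (matching the IPs used later), and the /data directory layout created above. Treat it as a starting point rather than the exact file used here:
version: "3"
services:
  node-1:
    image: elasticsearch:7.17.5
    container_name: node-1
    hostname: node-1
    environment:
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    ports:
      - "9200:9200"   # exposed so processes outside this compose network (e.g. Filebeat) can reach ES via the host IP
    volumes:
      - /data/es/node-1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/es/node-1/data:/usr/share/elasticsearch/data
      - /data/es/node-1/logs:/usr/share/elasticsearch/logs
      - /data/es/node-1/certs:/usr/share/elasticsearch/config/certs
      - /data/plugins:/usr/share/elasticsearch/plugins
    networks:
      es-net:
        ipv4_address: 172.20.0.3
  node-2:
    image: elasticsearch:7.17.5
    container_name: node-2
    hostname: node-2
    environment:
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    volumes:
      - /data/es/node-2/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/es/node-2/data:/usr/share/elasticsearch/data
      - /data/es/node-2/logs:/usr/share/elasticsearch/logs
      - /data/es/node-2/certs:/usr/share/elasticsearch/config/certs
      - /data/plugins:/usr/share/elasticsearch/plugins
    networks:
      es-net:
        ipv4_address: 172.20.0.4
  node-3:
    image: elasticsearch:7.17.5
    container_name: node-3
    hostname: node-3
    environment:
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    volumes:
      - /data/es/node-3/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/es/node-3/data:/usr/share/elasticsearch/data
      - /data/es/node-3/logs:/usr/share/elasticsearch/logs
      - /data/es/node-3/certs:/usr/share/elasticsearch/config/certs
      - /data/plugins:/usr/share/elasticsearch/plugins
    networks:
      es-net:
        ipv4_address: 172.20.0.5
  kibana:
    image: kibana:7.17.5
    container_name: kibana
    hostname: kibana
    ports:
      - "5601:5601"   # reverse-proxied by nginx in this setup
    volumes:
      - /data/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      es-net:
        ipv4_address: 172.20.0.6
networks:
  es-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
On the host you also need to raise vm.max_map_count for Elasticsearch, e.g. sysctl -w vm.max_map_count=262144.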
Non-SSL
elasticsearch.yml configuration file
Using node-1 as the example here; the only thing that differs on the other nodes is node.name.
vim /data/es/node-1/config/elasticsearch.yml
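The file contents aren't reproduced above; a sketch of what node-1's non-SSL elasticsearch.yml might contain, assuming the cluster name elastic (as seen in the curl output later) and the fixed container IPs from the compose network:
cluster.name: elastic
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
discovery.seed_hosts: ["172.20.0.3", "172.20.0.4", "172.20.0.5"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]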
kibana.yml
vim /data/kibana/config/kibana.yml
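Again only the path is shown; a minimal non-SSL kibana.yml sketch under the same assumptions:
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://172.20.0.3:9200", "http://172.20.0.4:9200", "http://172.20.0.5:9200"]
i18n.locale: "en"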
Start up
root@ip-10-10-10-29:/data# docker-compose up -d
Everything above is now started. If anything errors out, just check the container logs; when I deployed this, the error messages were quite clear.
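For example, you can follow a node's logs with something like:
docker logs -f node-1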
Go inside the container and check:
root@node-1:/usr/share/elasticsearch# curl localhost:9200
{
"name" : "node-1",
"cluster_name" : "elastic",
"cluster_uuid" : "tcKUxyWVQb-7LV4zmomsgg",
"version" : {
"number" : "7.17.5",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8d61b4f7ddf931f219e3745f295ed2bbc50c8e84",
"build_date" : "2022-06-23T21:57:28.736740635Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
root@node-1:/usr/share/elasticsearch# curl 172.20.0.3:9200
{
"name" : "node-1",
"cluster_name" : "elastic",
"cluster_uuid" : "tcKUxyWVQb-7LV4zmomsgg",
"version" : {
"number" : "7.17.5",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8d61b4f7ddf931f219e3745f295ed2bbc50c8e84",
"build_date" : "2022-06-23T21:57:28.736740635Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
root@node-1:/usr/share/elasticsearch# curl 172.20.0.4:9200
{
"name" : "node-2",
"cluster_name" : "elastic",
"cluster_uuid" : "tcKUxyWVQb-7LV4zmomsgg",
"version" : {
"number" : "7.17.5",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8d61b4f7ddf931f219e3745f295ed2bbc50c8e84",
"build_date" : "2022-06-23T21:57:28.736740635Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
root@node-1:/usr/share/elasticsearch# curl 172.20.0.5:9200
{
"name" : "node-3",
"cluster_name" : "elastic",
"cluster_uuid" : "tcKUxyWVQb-7LV4zmomsgg",
"version" : {
"number" : "7.17.5",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8d61b4f7ddf931f219e3745f295ed2bbc50c8e84",
"build_date" : "2022-06-23T21:57:28.736740635Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
root@ip-10-10-10-29:/data# curl -s 172.20.0.3:9200/_cluster/health | python3 -m json.tool
{
"cluster_name": "elastic",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 3,
"active_shards": 6,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100.0
}
This shows the ES cluster is working properly. Now take a look at Kibana, reached through the nginx reverse proxy.
Kibana also works fine, but at this point you land straight in Kibana when you open it. Without any authentication this is basically running naked; even on an internal network that is extremely unsafe, so authentication needs to be added.
SSL setup
Use the bundled tool to generate the certificates. You can also generate your own, but make sure to restrict the domains and IPs, otherwise verification will fail during HTTPS communication.
Enter the node-1 container and run the following:
root@ip-10-10-10-29:~# docker exec -it node-1 bash
root@node-1:/usr/share/elasticsearch#
# Build the cert-generation yaml according to your actual environment
cat <<EOF > /usr/share/elasticsearch/node.yml
instances:
  - name: "node"
    ip:
      - "172.20.0.3"
      - "172.20.0.4"
      - "172.20.0.5"
      - "127.0.0.1"
      - "172.20.0.6"
      - "10.10.10.29" # the server's own IP; since the ES cluster runs in docker here, this must be included
    dns:
      - "node-1"
      - "node-2"
      - "node-3"
      - "localhost"
      - "kibana"
      - "others"
EOF
# Generate the CA certificate; the default file name is elastic-stack-ca.p12, a password can optionally be set
bin/elasticsearch-certutil ca # just press Enter through the prompts
# Generate the node certificate; the default file name is certificate-bundle.zip
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --silent --in node.yml # just press Enter through the prompts
# Copy the certificates into node-1's config directory
root@node-1:/usr/share/elasticsearch# cp certificate-bundle.zip elastic-stack-ca.p12 config/certs/
root@node-1:/usr/share/elasticsearch# cd config/certs/
root@node-1:/usr/share/elasticsearch/config/certs# ls
certificate-bundle.zip elastic-stack-ca.p12
# Unzip the bundle; the layout is <instance name>/<instance name>.p12, in this case node/node.p12
root@node-1:/usr/share/elasticsearch/config/certs# unzip certificate-bundle.zip
Archive: certificate-bundle.zip
creating: node/
inflating: node/node.p12
root@node-1:/usr/share/elasticsearch/config/certs# ls
certificate-bundle.zip elastic-stack-ca.p12 node
root@node-1:/usr/share/elasticsearch/config/certs# ls node/
node.p12
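# note: node/node.p12 presumably needs to be copied into the current certs/ directory first (e.g. cp node/node.p12 .) before the extracted folder is removed below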
root@node-1:/usr/share/elasticsearch/config/certs# rm -rf node certificate-bundle.zip
root@node-1:/usr/share/elasticsearch/config/certs# ls
certificate-bundle.zip elastic-stack-ca.p12 node.p12
# Fix certificate ownership and permissions
root@node-1:/usr/share/elasticsearch/config# chown -R root:elasticsearch certs/
root@node-1:/usr/share/elasticsearch/config# chmod -R a+rx certs/
# The certificates we need are now in place; copy them to node-2 and node-3 in the same way, then fix permissions and ownership there as well
# Exit the container and go to node-1's mounted directory on the host:
root@ip-10-10-10-29:/data/es/node-1/certs# pwd
/data/es/node-1/certs
root@ip-10-10-10-29:/data/es/node-1/certs# cd ..
root@ip-10-10-10-29:/data/es/node-1# ls
certs config data log
root@ip-10-10-10-29:/data/es/node-1# cp -rf certs ../node-2/
root@ip-10-10-10-29:/data/es/node-1# cp -rf certs ../node-3/
root@ip-10-10-10-29:/data/es/node-1#
# The certificates are now ready
es/
├── node-1
│ ├── certs
│ │ ├── elastic-stack-ca.p12
│ │ └── node.p12
│ ├── config
│ │ ├── elasticsearch.yml
│ ├── data
│ │ └── nodes
│ └── log
├── node-2
│ ├── certs
│ │ ├── elastic-stack-ca.p12
│ │ └── node.p12
│ ├── config
│ │ ├── elasticsearch.yml
│ ├── data
│ │ └── nodes
│ └── log
├── node-3
│ ├── certs
│ │ ├── elastic-stack-ca.p12
│ │ └── node.p12
│ ├── config
│ │ ├── elasticsearch.yml
│ ├── data
│ │ └── nodes
│ └── log
└── plugins
Edit the configuration file /data/es/node-1/config/elasticsearch.yml, again using node-1 as the example:
# The content here is the same as in the non-SSL deployment
.......
.......
# Enable the xpack security features
xpack.security.enabled: true
xpack.security.authc.api_key.enabled: true
# Enable HTTPS and configure the certificate
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/node.p12
xpack.security.http.ssl.truststore.path: certs/node.p12
# Enable SSL for inter-node transport and configure the certificate
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/node.p12
xpack.security.transport.ssl.truststore.path: certs/node.p12
After every node has been updated, restart the ES cluster.
Once restarted, check the status and the logs for anything abnormal.
If everything looks fine, you can set the passwords. Enter the node-1 container:
root@ip-10-10-10-29:/data# docker exec -it node-1 bash
root@node-1:/usr/share/elasticsearch# ./bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
# If the cluster is healthy, passwords for the built-in users are generated automatically here:
.......
.......
.......
.......
# Alternatively, set the passwords manually:
./bin/elasticsearch-setup-passwords interactive
Use the ES API to check that every node is healthy:
curl -k --user elastic:<password> -X GET "https://172.20.0.3:9200" --cert-type P12 --cert /usr/share/elasticsearch/config/certs/node.p12
curl -k --user elastic:<password> -X GET "https://172.20.0.4:9200" --cert-type P12 --cert /usr/share/elasticsearch/config/certs/node.p12
curl -k --user elastic:<password> -X GET "https://172.20.0.5:9200" --cert-type P12 --cert /usr/share/elasticsearch/config/certs/node.p12
SSL on the ES side is now configured. Next, set up Kibana:
# Copy the certificates previously placed under node-1 into kibana/cert:
# Extract the public certificates
openssl pkcs12 -in elastic-stack-ca.p12 -out elastic-stack-ca.pem -nokeys -clcerts
openssl pkcs12 -in node.p12 -out node.pem -nokeys -clcerts
# Extract the private key
openssl pkcs12 -in node.p12 -out node.key -nocerts -nodes
Edit the configuration file kibana/config/kibana.yml
server.host: 0.0.0.0
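Only the first line of the file is quoted above; a sketch of the remaining HTTPS-related settings, assuming the pem/key files extracted above end up mounted at /usr/share/kibana/config/certs inside the container, and that the kibana_system password comes from the setup step earlier (placeholder shown):
server.port: 5601
# talk to ES over https, trusting the CA extracted from elastic-stack-ca.p12
elasticsearch.hosts: ["https://172.20.0.3:9200", "https://172.20.0.4:9200", "https://172.20.0.5:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxxxxxxxxx"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/kibana/config/certs/elastic-stack-ca.pem"]
elasticsearch.ssl.verificationMode: certificate
# optionally serve Kibana itself over https with the extracted node cert/key
server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/config/certs/node.pem
server.ssl.key: /usr/share/kibana/config/certs/node.key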
After the changes, just redeploy the Kibana service.
From this point on you have to log in before you can use it.
Filebeat log collection
Here I run Filebeat as a docker service as well, and mount the gateway server's nginx logs into the Filebeat container:
vim filebeat/docker-compose.yaml
version: "3"
services:
  filebeat:
    container_name: filebeat_test
    image: elastic/filebeat:7.9.0
    user: root
    volumes:
      - /wwwlogs/:/var/log/nginx:ro
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./ca.crt:/usr/share/filebeat/certs/ca.crt
    command: filebeat -e
The Filebeat configuration file lives at /opt/filebeat/filebeat.yml
vim /opt/filebeat/filebeat.yml
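The contents aren't shown here; a sketch of what this filebeat.yml could look like for the setup above, assuming the nginx logs are mounted at /var/log/nginx, ca.crt is a PEM CA extracted from elastic-stack-ca.p12, ES is reached via the host IP 10.10.10.29, and the index name nginx-log-* is just a placeholder:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/*.log
    fields:
      service: nginx
    fields_under_root: true

output.elasticsearch:
  hosts: ["https://10.10.10.29:9200"]
  username: "elastic"
  password: "xxxxxxxxxxxx"
  index: "nginx-log-%{+yyyy.MM.dd}"
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]

setup.ilm.enabled: false
setup.template.name: "nginx-log"
setup.template.pattern: "nginx-log-*"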
Once configured, check the Filebeat logs for errors. If everything is fine, query the ES API on node-1 to confirm that an index has been created:
root@node-1:/usr/share/elasticsearch# curl -k --user elastic:xxxxxxxxxxxx -X GET "https://172.20.0.3:9200/_cat/indices" --cert-type P12 --cert /usr/share/elasticsearch/config/certs/node.p12
Kibana configuration
The output above shows that Filebeat is writing data into ES normally. Now configure it in Kibana:
Stack Management
- Index Patterns
- Create index pattern
After creating it, open Discover from the menu bar and select the index pattern you just created.
The collected logs show up right away, and Kibana has already parsed them into fields, so you only need to pick the fields you want or match on specific ones.
For example, to see responses with a status code of 500 or above: status >= 500. Filters like this can all be defined in the search box at the top.
You can of course also turn these logs into visualizations; when you select one metric, the other panels update along with it, for example:
Finally, combine this with my earlier post on using ElastAlert2 to create alerts from logs in ES, and you can easily monitor and alert on these logs.
Final words
The above is just a record of the process; the main point is the approach.
k8s version
---
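The manifest itself was cut off above; as a rough sketch of how the ES part could be expressed on Kubernetes (image, replica count, storage size and resource names are placeholders, and the xpack/SSL settings from earlier are omitted):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None              # headless service gives each pod a stable DNS name
  selector:
    app: elasticsearch
  ports:
    - name: http
      port: 9200
    - name: transport
      port: 9300
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.17.5
          env:
            - name: cluster.name
              value: "elastic"
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"
          ports:
            - containerPort: 9200
            - containerPort: 9300
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
In practice you would also add an initContainer or node-level sysctl for vm.max_map_count, resource requests/limits, and re-apply the xpack security and certificate settings shown earlier.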