ELK Logging

I. Common Elastic Stack Architectures in the Enterprise

1. Daily ops "pain points" without a log collection system

As shown in the figure above (a rough sketch of a typical Internet technology stack), how would you collect, analyze, store, and display the logs of each component? Do you regularly face the following operational pain points?
After a production failure, ops has to dig through many different logs to analyze the problem, often with no clear starting point.
When a release goes wrong, how do you locate the problem quickly? What if there are many backend nodes and the logs are scattered across them?
Developers need to view logs in real time, but you don't want to grant them server login access. Do you fetch logs for them every day?
How do you quickly extract the data you want from massive logs, e.g. PV, UV, or the top 10 URLs? The larger the log volume, the slower the queries and the harder the analysis, until you can no longer obtain the metrics you need in time.
A CDN company must constantly analyze logs. Analyze what? Mainly the cache hit ratio: if the promised hit ratio of 90%+ is not met, you need to analyze why requests missed the cache and why content was not cached.
A video company recently saw frequent hotlinking on Friday afternoons, causing an abnormal traffic spike of more than 2 Gb and real losses. How do you analyze that abnormal traffic?
How do you aggregate log queries and analysis across hundreds of MySQL instances?
How do you collect and analyze logs from Docker and Kubernetes platforms? ...
All of the above pain points can be solved with a log analysis system, the "Elastic Stack": collect all server and business system logs onto one platform, extract the content you care about (error messages, warnings, etc.), and raise an alert as soon as such a message is filtered out. Once alerted, ops can immediately pinpoint which machine and which business system has a problem, and what the problem is.

2. Overview of the Elastic Stack distributed logging system

ELK (Elasticsearch, Logstash, Kibana)
Elasticsearch handles data storage and retrieval (queries) — it is the dedicated data store.
Logstash does log transformation: collecting and processing logs, deleting fields that should be deleted and splitting fields that need splitting. It can do simple processing and offers a rich plugin ecosystem — it is the dedicated data collector.
Kibana is a graphical front end that visualizes the data — it is the dedicated display layer.
Beats only collects logs; with Beats, log collection is decoupled from log processing.
"The Good Old" refers to ELK.
"The Brand New" refers to the Elastic Stack.
The Elastic Stack includes Elasticsearch, Kibana, Beats, and Logstash (also known as the ELK Stack).
Elasticsearch:
Often abbreviated ES, Elasticsearch is an open-source, highly scalable, distributed full-text search engine and the core of the whole Elastic Stack.
It stores and retrieves data in near real time, scales out well (to hundreds of servers), and can handle petabytes of data.

Kibana:
A free and open user interface that lets you visualize Elasticsearch data and navigate the Elastic Stack. From tracking query load to understanding how requests flow through your applications, it makes everything easy.

Beats:
A free and open platform of single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.

Logstash:
A free and open server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to your favorite "stash".

The main advantages of the Elastic Stack:

(1) Flexible processing:
Elasticsearch is a real-time full-text index with powerful search capabilities.
(2) Relatively simple configuration:
Elasticsearch exposes an all-JSON API, Logstash uses modular configuration, and the Kibana configuration file is simpler still.
(3) Efficient retrieval:
Thanks to its design, queries are real-time yet can answer over tens of billions of documents within seconds.
(4) Linear cluster scaling:
Both Elasticsearch and Logstash scale out flexibly and linearly.
(5) Polished front end:
Kibana's front end is visually appealing and simple to operate.
Logs the Elastic Stack can collect:
Container runtime: Docker
Container orchestration: Docker Swarm, Kubernetes
Load balancers: LVS, HAProxy, nginx
Web servers: httpd, nginx, Tomcat
Databases: MySQL, Redis, MongoDB, HBase, Kudu, ClickHouse, PostgreSQL
Storage: NFS, GlusterFS, FastDFS, HDFS, Ceph
System logs: messages, secure
Business apps: applications written in C, C++, Java, PHP, Go, Python, Shell, and other languages

3,Elastic Stack 企业级 "EFK" 架构图

数据流走向:源数据层(nginx,tomcat)-->数据采集层(filebeat)-->数据存储层(Elasticsearch).
4,Elastic Stack 企业级 "ELK" 架构图

数据流走向:源数据层(nginx,tomcat)-->数据采集/转换层(Logstash)-->数据存储层(Elasticsearch).
5,Elastic Stack 企业级 "ELFK"架构图解

数据流走向:源数据层(nginx,tomcat)-->数据采集(filebeat)-->转换层(Logstash)-->数据存储层(Elasticsearch).

6,Elastic Stack 企业级 "ELFK"+"kafka" 架构图解

数据流走向:源数据层(nginx,tomcat)-->数据采集(filebeat)-->数据缓存层(kafka)-->转换层(Logstash)-->数据存储层(ElasticSearch).
7,Elastic Stack 企业级 "ELFK"+"kafka" 架构演变

如上图所示,在实际工作中,如果有大数据部门的存在。也有可能kafka的数据要被多个公司使用的。
8. How to study this course
(1) Practice what you learn; listening without practicing is not the same as mastering it.
(2) Explain what you learned in your own words; you will need to do exactly that face to face with interviewers when job hunting.
(3) Be hands-on: draw architecture diagrams and take notes; a dull pencil beats a sharp memory.
(4) When you hit a problem with the course material, try to troubleshoot it yourself first, but if you are stuck for more than 30 minutes, ask the instructor or classmates.
(5) Do the homework carefully; it consolidates the material and may even extend it.

II. Choosing Between Elasticsearch and Solr

Solr is an open-source enterprise search platform based on Apache Lucene, used to build high-performance, scalable search applications. It provides full-text search, highlighting, paginated search, near-real-time indexing, and more.
Lucene is a full-text search engine library: search engine, full-text retrieval, index store.
Pros and cons of Lucene:
Pros:
Arguably the most advanced, best-performing, most feature-complete search engine library (framework) to date.
Cons:
(1) It can only be used inside Java projects, integrated directly as a jar;
(2) It is complex to use; you need deep knowledge of information retrieval to write indexing and search code;
(3) It does not support clustering, and index data is not synchronized (unsuitable for large projects);
(4) It scales poorly: the index lives on the same server as the application, and efficiency degrades as the index grows.
Notably, Elasticsearch solves every one of these Lucene shortcomings.
Elasticsearch is a real-time distributed search and analytics engine. It lets you work with data at scale, at a speed previously impossible.
ES can be used for full-text search, structured search, and analytics, and you can of course combine all three.
Which companies use Elasticsearch? Almost every large Internet company in the world has embraced this open-source project:
https://www.elastic.co/cn/customers/success-stories
2. How to choose between Elasticsearch and Solr

Solr is the open-source enterprise search platform of the Apache Lucene project. Its main features include full-text retrieval, hit highlighting, paginated search, dynamic clustering, database integration, and rich-document handling (e.g. Word, PDF).
Solr is highly scalable and provides distributed search and index replication. It is the most popular enterprise search engine, and Solr 4 added NoSQL support.
Comparing Elasticsearch (ES below) with Solr:
(1) Solr relies on ZooKeeper for distributed coordination, while ES ships with built-in coordination;
(2) Solr supports more input formats (JSON, XML, CSV), while ES only accepts JSON;
(3) Solr offers more features out of the box, while ES focuses on the core and leaves advanced features to third-party plugins;
(4) Solr outperforms ES for "traditional search" over existing data, but is clearly slower than ES for "real-time search" workloads that index continuously;
(5) Solr is a strong solution for traditional search applications, while Elasticsearch is better suited to emerging real-time search applications.
One user reported that after migrating a production search service from Solr to Elasticsearch, average query speed improved by a factor of nearly 50.

III. Initializing the Cluster Base Environment

1. Prepare the virtual machines

IP address    Hostname  CPU      Memory  Disk    Role
10.0.0.101    elk101    2 cores  4 GB    20 GB+  ES node
10.0.0.102    elk102    2 cores  4 GB    20 GB+  ES node
10.0.0.103    elk103    2 cores  4 GB    20 GB   ES node
2. Switch the package mirror

sed -e "s|^mirrorlist=|#mirrorlist=|g" \
    -e "s|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g" \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo

Reference:
https://mirrors.tuna.tsinghua.edu.cn/help/centos/

Official snippets:
CentOS 6.10:
sed -e "s|^mirrorlist=|#mirrorlist=|g" \
    -e "s|^#baseurl=http://mirror.centos.org/centos/\$releasever|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/6.10|g" \
    -e "s|^#baseurl=http://mirror.centos.org/\$contentdir/\$releasever|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/6.10|g" \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo

CentOS 7.9:
sed -e "s|^mirrorlist=|#mirrorlist=|g" \
    -e "s|^#baseurl=http://mirror.centos.org/centos/\$releasever|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.9|g" \
    -e "s|^#baseurl=http://mirror.centos.org/\$contentdir/\$releasever|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.9|g" \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo
3. Customize the terminal prompt colors
# slightly flawed version
cat <<EOF>> ~/.bashrc
PS1='[\[\e[34;1m]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\]\W\[\e[0m\]]# '
EOF
source ~/.bashrc

[root@elk189 ~]# cat <<EOF>> ~/.bashrc
> PS1='[\[\e[34;1m]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
> EOF
[root@elk189 ~]# source ~/.bashrc
==========================
# polished version
cat <<EOF>> ~/.bashrc
PS1="\[\e[32;40m\][\[\e[31;40m\]\u\[\e[33;40m\]@\h \[\e[32;40m\]\w\[\e[1m\]]\\$ "
EOF
source ~/.bashrc

4. Optimize the sshd service
sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
grep ^UseDNS /etc/ssh/sshd_config
grep ^GSSAPIAuthentication /etc/ssh/sshd_config

[root@elk188 ~]# sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
[root@elk188 ~]# sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
[root@elk188 ~]# grep ^UseDNS /etc/ssh/sshd_config
UseDNS no
[root@elk188 ~]# grep ^GSSAPIAuthentication /etc/ssh/sshd_config
GSSAPIAuthentication no
5. Disable the firewall
systemctl disable --now firewalld && systemctl is-enabled firewalld
systemctl status firewalld
6. Disable SELinux
sed -ri 's#(SELINUX=)enforcing#\1disabled#' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config
setenforce 0
getenforce
7. Configure cluster-wide passwordless login and a sync script
(1) Update the host list
cat >> /etc/hosts << 'EOF'
192.168.222.187 elk187.longchi.xyz
192.168.222.188 elk188.longchi.xyz
192.168.222.189 elk189.longchi.xyz
EOF

(2) Generate a key pair on the elk101 node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
ll ~/.ssh/id_rsa

(3) From elk101, set up passwordless login to every cluster node
for ((host_id=187;host_id<=189;host_id++));do ssh-copy-id elk${host_id}.longchi.xyz;done

(4) Test the connections
ssh 'elk187.longchi.xyz'
ssh 'elk188.longchi.xyz'
ssh 'elk189.longchi.xyz'
logout  # log out of the remote session

(5) Install the rsync data synchronization tool on all nodes
yum -y install rsync
yum -y install tree

(6) Write the sync script
vim /usr/local/sbin/data_rsync.sh  # paste the content below into this file
[root@elk187.longchi.xyz ~]# cat /usr/local/sbin/data_rsync.sh
#!/bin/bash
#Author: zengguoqing
#encoding: utf-8

if [ $# -ne 1 ];then
    echo "Usage: $0 /path/to/file"
    exit
fi

# Check that the target exists
if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find"
    exit
fi

# Parent directory of the target
fullpath=`dirname $1`
# Base name of the target
basename=`basename $1`
# Work from the parent directory
cd $fullpath

for ((host_id=188;host_id<=189;host_id++))
do
    # Switch terminal output to green
    tput setaf 2
    echo ===== rsyncing elk${host_id}.longchi.xyz: $basename =====
    # Restore the default terminal color
    tput setaf 7
    # Sync the data to the other two nodes
    rsync -az $basename `whoami`@elk${host_id}.longchi.xyz:$fullpath
    if [ $? -eq 0 ];then
        echo "Command executed successfully"
    fi
done

(7) Make the script executable
chmod +x /usr/local/sbin/data_rsync.sh

(8) Test: the sync works for files, directories, and symlinks
[root@elk187.longchi.xyz ~]# mkdir /tmp/opt
[root@elk187.longchi.xyz ~]# echo 1111 > /tmp/opt/1.txt
[root@elk187.longchi.xyz ~]# ll /tmp/opt/
total 4
-rw-r--r-- 1 root root 5 Oct  2 23:28 1.txt
[root@elk187.longchi.xyz ~]# cat /tmp/opt/1.txt
1111
[root@elk187.longchi.xyz ~]# vim /usr/local/sbin/data_rsync.sh
[root@elk187.longchi.xyz ~]# chmod +x /usr/local/sbin/data_rsync.sh
[root@elk187.longchi.xyz ~]# data_rsync.sh /tmp/test/
===== rsyncing elk188.longchi.xyz: test =====
Command executed successfully
===== rsyncing elk189.longchi.xyz: test =====
Command executed successfully

# Verify on the other nodes: the sync succeeded
[root@elk189.longchi.xyz ~]# ll /tmp/opt
total 4
-rw-r--r-- 1 root root 5 Oct  2 23:28 1.txt
[root@elk189.longchi.xyz ~]# cat /tmp/opt/1.txt
1111
[root@elk188.longchi.xyz ~]# ll /tmp/opt/
total 4
-rw-r--r-- 1 root root 5 Oct  2 23:28 1.txt
[root@elk188.longchi.xyz ~]# cat /tmp/opt/1.txt
1111

# Symlinks can be synced as well
[root@elk187.longchi.xyz tmp]# ln -sv test t
't' -> 'test'
[root@elk187.longchi.xyz tmp]# cd
[root@elk187.longchi.xyz ~]# data_rsync.sh /tmp/t
===== rsyncing elk188.longchi.xyz: t =====
Command executed successfully
===== rsyncing elk189.longchi.xyz: t =====
Command executed successfully

============== Script walkthrough start =============
Script explanation:
[root@elk187.longchi.xyz ~]# cat /usr/local/sbin/data_rsync.sh
#!/bin/bash
#Author: zengguoqing
#encoding: utf-8

if [ $# -ne 1 ];then                 # check that exactly one argument was passed; if not, return immediately
    echo "Usage: $0 /path/to/file"   # $0 is the script name; the argument is the absolute path to the target
    exit
fi

# Check that the target exists
if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find"
    exit
fi

# Parent directory of the target
fullpath=`dirname $1`
# Base name of the target
basename=`basename $1`
# Work from the parent directory
cd $fullpath

for ((host_id=188;host_id<=189;host_id++))
do
    # Switch terminal output to green
    tput setaf 2
    echo ===== rsyncing elk${host_id}.longchi.xyz: $basename =====
    # Restore the default terminal color
    tput setaf 7
    # Sync the data to the other two nodes
    rsync -az $basename `whoami`@elk${host_id}.longchi.xyz:$fullpath
    if [ $? -eq 0 ];then
        echo "Command executed successfully"
    fi
done
============== Script walkthrough end =============
8. Cluster time synchronization
(1) Install common Linux tools (customize as you like)
yum -y install vim net-tools

(2) Install the chrony service
yum -y install ntpdate chrony

(3) Edit the chrony configuration file
vim /etc/chrony.conf
...
# Comment out the stock time servers and use domestic ones instead
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
server ntp5.aliyun.com iburst
...

(4) Enable chronyd at boot and start it now
systemctl enable --now chronyd

(5) Check the chronyd service status
systemctl status chronyd

(6) Restart the chronyd service
systemctl restart chronyd
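Once chronyd is running, the standard chronyc subcommands confirm that the nodes are actually syncing against the Aliyun servers:

# List the time sources; the selected one is marked with ^*
chronyc sources -v
# Show offset and drift details for the selected source
chronyc tracking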
Set the hostnames
[root@elk187 ~]# hostnamectl set-hostname elk187.longchi.xyz
[root@elk188 ~]# hostnamectl set-hostname elk188.longchi.xyz
[root@elk189 ~]# hostnamectl set-hostname elk189.longchi.xyz

[root@elk187 ~]# cat /etc/hostname
elk187.longchi.xyz
[root@elk188 ~]# cat /etc/hostname
elk188.longchi.xyz
[root@elk189 ~]# cat /etc/hostname
elk189.longchi.xyz

Append the hostnames via vim /etc/hosts, or:
cat >> /etc/hosts << 'EOF'
192.168.222.187 elk187.longchi.xyz
192.168.222.188 elk188.longchi.xyz
192.168.222.189 elk189.longchi.xyz
EOF

[root@elk187 ~]# cat >> /etc/hosts << 'EOF'
> 192.168.222.187 elk187.longchi.xyz
> 192.168.222.188 elk188.longchi.xyz
> 192.168.222.189 elk189.longchi.xyz
> EOF
[root@elk187 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.222.187 elk187.longchi.xyz
192.168.222.188 elk188.longchi.xyz
192.168.222.189 elk189.longchi.xyz
Back up the yum repo files
[root@elk188 ~]#  ls /etc/yum.repos.d/
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repo
[root@elk188 ~]# cd /etc/yum.repos.d/
[root@elk188 yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repo
[root@elk188 yum.repos.d]# mkdir bak
[root@elk188 yum.repos.d]# mv C* bak
[root@elk188 yum.repos.d]# ll
total 0
drwxr-xr-x. 2 root root 220 Oct  2 13:55 bak
Configure the yum repos
[root@elk189 ~]# cat /etc/yum.repos.d/yumtest.sh
#!/bin/bash

# Check whether wget is installed
if ! command -v wget &> /dev/null; then
    echo "wget is not installed; run: yum -y install wget"
    exit 1
fi

# Check network connectivity
ping -c 1 www.baidu.com > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "No network connection; exiting."
    exit 1
fi

# Create the backup directory
mkdir -p /etc/yum.repos.d/backup

# Back up any existing CentOS/epel repo files
cd /etc/yum.repos.d/
if ls /etc/yum.repos.d/*.repo > /dev/null 2>&1; then
    echo "repo files found, backing up..."
    rm -f /etc/yum.repos.d/backup/*repo   # drop stale backups
    mv /etc/yum.repos.d/CentOS*.repo /etc/yum.repos.d/backup/
    mv /etc/yum.repos.d/epel*.repo /etc/yum.repos.d/backup/
    echo "Backup complete!"
else
    echo "No repo files found; creating CentOS 7 yum repos"
fi

# Read the distro type and version ID
distro=$(cat /etc/os-release | grep '^ID=' | cut -d '=' -f 2 | tr -d '"')
osversion=$(cat /etc/os-release | grep '^VERSION_ID' | cut -d '=' -f 2 | tr -d '"')

# Configure the yum repos according to distro and version
if [ "$distro" = "centos" ]; then
    if [ "$osversion" = "7" ]; then
        # CentOS 7 repo
        wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo >/dev/null 2>&1
        # epel 7 repo
        wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo >/dev/null 2>&1
        if [ $? -eq 0 ]; then  # check whether wget succeeded
            echo "CentOS 7 yum repos configured."
            # Rebuild the yum metadata cache
            yum clean all
            yum makecache
            yum repolist
        else
            echo "Could not fetch the CentOS 7 repos; check the network or mirror URL."
        fi
    else
        echo "Not CentOS 7; please add yum repos manually."
    fi
else
    echo "Not a CentOS system; please add yum repos manually."
fi

[root@elk189 ~]# cd /etc/yum.repos.d/
[root@elk189 yum.repos.d]# ll
total 4
drwxr-xr-x. 2 root root  220 Oct  2 13:56 bak
-rw-r--r--. 1 root root 1843 Oct  2 15:32 yumtest.sh
[root@elk189 yum.repos.d]# bash yumtest.sh
No repo files found; creating CentOS 7 yum repos
CentOS 7 yum repos configured.
Loaded plugins: fastestmirror, langpacks
Cleaning repos: base epel extras updates
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
.....

IV. Single-Node Elasticsearch Deployment

1. Download the desired ES version
See the video for detailed steps.
Reference links:
Elastic website:
https://www.elastic.co/downloads/elasticsearch
https://www.elastic.co/
Elasticsearch page:
https://www.elastic.co/cn/elasticsearch
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-x86_64.rpm
2. Deploy a JDK environment (optional step)
Official links:
https://www.oracle.com/java/technologies/javase/javase8u211-later-archive-downloads.html
https://www.oracle.com/java/technologies/downloads/#java8
https://www.oracle.com/java/technologies/downloads/
https://download.oracle.com/otn/java/jdk/8u321-b07/df5ad55fdd604472a86a45a217032c7d/jdk-8u321-linux-x64.rpm
https://download.oracle.com/otn/java/jdk/8u321-b07/df5ad55fdd604472a86a45a217032c7d/jdk-8u321-linux-x64.tar.gz

Steps to deploy the Oracle JDK on the elk187 node:
(1) Create the directory
mkdir -pv /oldboyedu/softwares

(2) Extract the JDK into that directory
tar xf jdk-8u321-linux-x64.tar.gz -C /oldboyedu/softwares/

(3) Create a symlink
cd /oldboyedu/softwares/ && ln -sv jdk1.8.0_321 jdk

(4) Set the environment variables
cat > /etc/profile.d/elk.sh << 'EOF'
#!/bin/bash
export JAVA_HOME=/oldboyedu/softwares/jdk
export PATH=$PATH:$JAVA_HOME/bin
EOF
source /etc/profile.d/elk.sh
vim /etc/profile.d/elk.sh
cat /etc/profile.d/elk.sh

(5) Check the JDK version
java -version

(6) Sync the JDK environment to the other nodes
data_rsync.sh /oldboyedu/
data_rsync.sh /etc/profile.d/elk.sh

(7) Test on the other nodes
source /etc/profile.d/elk.sh
java -version

3. Deploy a single-node elasticsearch

(1) Install the service
yum -y localinstall elasticsearch-7.17.3-x86_64.rpm
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
systemctl status elasticsearch.service
systemctl cat elasticsearch

(2) Edit the configuration file
egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: oldboyedu-elk
node.name: oldboyedu-elk103
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.0.0.103
discovery.seed_hosts: ["10.0.0.103"]

Parameter notes:
cluster.name: oldboyedu-elk
    The cluster name. If unset it defaults to "elasticsearch", and the log file prefix is also the cluster name.
node.name: oldboyedu-elk103
    The node name. It can be customized (the current hostname is recommended) and must be unique within the cluster.
path.data: /var/lib/elasticsearch
    The data path.
path.logs: /var/log/elasticsearch
    The log path.
network.host: 10.0.0.103   # option 1
network.host: 0.0.0.0      # option 2: listen on every IP address of this node
    The IP address the ES service listens on.
discovery.seed_hosts: ["10.0.0.103"]            # option 1
discovery.seed_hosts: ["elk187.longchi.xyz"]    # option 2
    The host list for discovery; for a single-node deployment, just keep it consistent with the "network.host" field.

(3) Start the service
systemctl start elasticsearch.service
systemctl enable elasticsearch.service
systemctl status elasticsearch.service

(4) Check the listening sockets with ss -ntl
[root@elk187.longchi.xyz ~]# ss -ntl
State Recv-Q Send-Q   Local Address:Port       Peer Address:Port
LISTEN 0     128      *:111                             *:*
LISTEN 0      5     192.168.122.1:53                    *:*
LISTEN 0     128      *:22                              *:*
LISTEN 0     128    127.0.0.1:631                       *:*
LISTEN 0     100    127.0.0.1:25                        *:*
LISTEN 0     128    127.0.0.1:6010                      *:*
LISTEN 0     128    [::]:111                           [::]:*
LISTEN 0     128    [::ffff:192.168.222.187]:9200      [::]:*
LISTEN 0     128    [::ffff:192.168.222.187]:9300      [::]:*
LISTEN 0     128    [::]:22                            [::]:*
LISTEN 0     128    [::1]:631                          [::]:*
LISTEN 0     100    [::1]:25                           [::]:*
LISTEN 0     128    [::1]:6010                         [::]:*

# Port notes:
'9200': the port the cluster exposes to the outside. It speaks HTTP and handles data exchange with clients inside and outside the cluster, so the cluster can be reached from a browser on 9200.
'9300': the port the cluster nodes use to exchange data with each other; traffic on 9300 uses the TCP protocol.

Watch the main cluster log:
tail -100f /var/log/elasticsearch/longchi-elk.log
[root@elk187.longchi.xyz ~]# ll /var/log/elasticsearch/longchi-elk.log
-rw-r--r-- 1 elasticsearch elasticsearch 239751 Oct  4 01:36 /var/log/elasticsearch/longchi-elk.log
[root@elk187.longchi.xyz ~]# tail -100f /var/log/elasticsearch/longchi-elk.log
Query port 9200:
curl 127.0.0.1:9200
{
  "name": "elk101.oldboyedu.com",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "kzvSLczpRNCeXJbUJcCJoQ",
  "version": {
    "number": "7.17.3",
    "build_flavor": "default",
    "build_type": "rpm",
    "build_hash": "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
    "build_date": "2022-04-19T08:11:19.070913226Z",
    "build_snapshot": false,
    "lucene_version": "8.11.1",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}
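Besides the banner above, the standard _cat health endpoint gives a one-line cluster summary; a fresh single node typically reports green or yellow depending on its replica settings:

curl '127.0.0.1:9200/_cat/health?v'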

4. Switch from OpenJDK to the Oracle JDK and adjust the heap size
(1) Edit the ES environment variable file
vim /etc/sysconfig/elasticsearch
...
ES_JAVA_HOME=/longchi/softwares/jdk

(2) Adjust the heap size
vim /etc/elasticsearch/jvm.options
...
-Xms256m
-Xmx256m

(3) Verify the heap size
jmap -heap `ps -ef | grep java | grep -v grep | awk '{print $2}'`

(4) Sync the configuration files to the other nodes
data_rsync.sh /etc/sysconfig/elasticsearch
data_rsync.sh /etc/elasticsearch/jvm.options
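If jmap is unavailable (it ships with the JDK, not a bare JRE), the configured heap can also be read back through the standard ES nodes-info API:

curl -s '127.0.0.1:9200/_nodes/jvm?pretty' | grep heap_max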

V. Elasticsearch Distributed Cluster Deployment

1. Edit the configuration file on elk101
vim /etc/elasticsearch/elasticsearch.yml
...
cluster.name: oldboyedu-elk
node.name: elk101
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["elk101","elk102","elk103"]
cluster.initial_master_nodes: ["elk101","elk102","elk103"]
2. Sync the configuration file to the other cluster nodes
(1) From elk101, sync the configuration file to the other cluster nodes
data_rsync.sh /etc/elasticsearch/elasticsearch.yml

(2) Configure the elk102 node
vim /etc/elasticsearch/elasticsearch.yml
...
node.name: elk102

(3) Configure the elk103 node
vim /etc/elasticsearch/elasticsearch.yml
...
node.name: elk103
----------------
Before starting the services, run: rm -rf /var/{lib,log}/elasticsearch/*

Start all three nodes at the same time:
systemctl daemon-reload
systemctl start elasticsearch.service
systemctl status elasticsearch.service
systemctl restart elasticsearch.service

# Test on 187, 188 and 189 at the same time; a response like the following means the cluster is up
[root@elk188 ~]$ curl 192.168.222.188:9200/_cat/nodes
192.168.222.189 17 97 0 0.16 0.07 0.11 cdfhilmrstw * elk189.longchi.xyz
192.168.222.187 51 41 0 0.03 0.04 0.05 cdfhilmrstw - elk187.longchi.xyz
192.168.222.188 15 97 0 0.02 0.04 0.09 cdfhilmrstw - elk188.longchi.xyz

[root@elk189 ~]$ curl 192.168.222.189:9200/_cat/nodes
192.168.222.188 15 97 0 0.01 0.04 0.09 cdfhilmrstw - elk188.longchi.xyz
192.168.222.187 54 41 0 0.08 0.06 0.05 cdfhilmrstw - elk187.longchi.xyz
192.168.222.189 17 97 0 0.09 0.07 0.10 cdfhilmrstw * elk189.longchi.xyz

[root@elk187 ~]$ curl 192.168.222.187:9200/_cat/nodes
192.168.222.188 15 97 0 0.08 0.05 0.10 cdfhilmrstw - elk188.longchi.xyz
192.168.222.187 46 41 0 0.08 0.06 0.05 cdfhilmrstw - elk187.longchi.xyz
192.168.222.189 16 97 0 0.00 0.03 0.10 cdfhilmrstw * elk189.longchi.xyz

# Cluster configuration on 187, 188 and 189:
[root@elk187 ~]$ egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: longchi-elk
node.name: elk187.longchi.xyz
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
cluster.initial_master_nodes: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
[root@elk187 ~]$

[root@elk188 ~]$ egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: longchi-elk
node.name: elk188.longchi.xyz
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
cluster.initial_master_nodes: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
[root@elk188 ~]$

[root@elk189 ~]$ egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: longchi-elk
node.name: elk189.longchi.xyz
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
cluster.initial_master_nodes: ["elk187.longchi.xyz","elk188.longchi.xyz","elk189.longchi.xyz"]
[root@elk189 ~]$

# A response like the following means the cluster has been built successfully
[root@elk189 ~]$ curl 192.168.222.189:9200
{
  "name" : "elk189.longchi.xyz",
  "cluster_name" : "longchi-elk",
  "cluster_uuid" : "DrM-yzthQDSJdL0Jz2JjnA",
  "version" : {
    "number" : "7.17.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
    "build_date" : "2022-04-19T08:11:19.070913226Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@elk189 ~]$

# The identical "cluster_uuid" on all three hosts shows they belong to the same cluster
[root@elk187 ~]$ "cluster_uuid" : "DrM-yzthQDSJdL0Jz2JjnA",
[root@elk188 ~]$ "cluster_uuid" : "DrM-yzthQDSJdL0Jz2JjnA",
[root@elk189 ~]$ "cluster_uuid" : "DrM-yzthQDSJdL0Jz2JjnA",

3. Delete the previous temporary data on all nodes

pkill java
rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
ll /var/{lib,log}/elasticsearch/ /tmp/

4. Start the elasticsearch service on all nodes

(1) Start the service on all nodes
systemctl daemon-reload
systemctl start elasticsearch
curl 127.0.0.1:9200/_cat/nodes

(2) Watch the log during startup
tail -100f /var/log/elasticsearch/longchi-elk.log

5. Verify that the cluster is healthy

curl elk187.longchi.xyz:9200/_cat/nodes?v
curl 192.168.222.187:9200/_cat/nodes

# Output like the following means the deployment succeeded
[root@elk188 ~]$ curl elk188.longchi.xyz:9200/_cat/nodes
192.168.222.189 34 88 0 0.03 0.02 0.05 cdfhilmrstw * elk189.longchi.xyz
192.168.222.188 24 86 0 0.12 0.04 0.05 cdfhilmrstw - elk188.longchi.xyz
192.168.222.187 37 44 0 0.06 0.04 0.05 cdfhilmrstw - elk187.longchi.xyz
[root@elk188 ~]$ curl 192.168.222.188:9200/_cat/nodes
192.168.222.189 34 88 0 0.01 0.02 0.05 cdfhilmrstw * elk189.longchi.xyz
192.168.222.188 24 86 0 0.10 0.05 0.06 cdfhilmrstw - elk188.longchi.xyz
192.168.222.187 40 44 0 0.03 0.04 0.05 cdfhilmrstw - elk187.longchi.xyz
[root@elk188 ~]$

VI. Deploying the Kibana Service

Download the package: wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.3-x86_64.rpm
1. Install the kibana service locally
yum -y localinstall kibana-7.17.3-x86_64.rpm
2. Edit the kibana configuration file
vim /etc/kibana/kibana.yml
...
server.host: "10.0.0.101"
server.name: "longchi-kibana-server"
elasticsearch.hosts: ["http://10.0.0.101:9200","http://10.0.0.102:9200","http://10.0.0.103:9200"]
i18n.locale: "zh-CN"
-------------------
k8s --> kubernetes
i18n --> internationalization
(internationalization and localization)

In practice, '0.0.0.0' binds every network interface of the current host:
[root@elk187 ~]$ egrep -v "^#|^$" /etc/kibana/kibana.yml
server.host: "0.0.0.0"
server.name: "longchi-elk"
elasticsearch.hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
i18n.locale: "zh-CN"

3. Start the kibana service

systemctl daemon-reload
systemctl enable --now kibana
systemctl status kibana

# Inspect the kibana unit file
systemctl cat kibana

4. Access the Kibana web UI

http://192.168.222.187:5601/app/home#/
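Before opening a browser, Kibana's status endpoint makes a quick smoke test from the shell (a standard Kibana API; on 7.17 the overall status should report as available):

curl -s http://192.168.222.187:5601/api/status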

VII. Deploying the Filebeat Environment

About Filebeat
Filebeat is a lightweight shipper for forwarding and centralizing log data. It monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing. How Filebeat works: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output configured for Filebeat.
Filebeat documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
1. Deploy the filebeat environment
yum -y localinstall filebeat-7.17.3-x86_64.rpm
Note: run this on the elk102 node. Check the filebeat version:
[root@elk188 ~]$ filebeat version
filebeat version 7.17.3 (amd64), libbeat 7.17.3 [1993ee88a11cb34f61a1fb45c7c3cf50533682cb built 2022-04-19 09:27:20 +0000 UTC]

# Reading data from local log files; reference:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html
Inspect the filebeat unit file:
[root@elk188 ~]$ systemctl cat filebeat
# /usr/lib/systemd/system/filebeat.service
[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
Environment="GODEBUG='madvdontneed=1'"
Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat/filebeat.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat"
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target
View the filebeat configuration with comments and blank lines filtered out:
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Rename the stock filebeat configuration file:
mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml-`date +%F`

[root@elk188 ~]$ ll /etc/filebeat/
total 3880
drwxr-xr-x 2 root root       6 Oct 12 16:23 config
-rw-r--r-- 1 root root 3780088 Apr 19  2022 fields.yml
-rw-r--r-- 1 root root  170239 Apr 19  2022 filebeat.reference.yml
-rw------- 1 root root    8273 Oct 12 17:12 filebeat.yml
drwxr-xr-x 2 root root    4096 Oct 12 16:20 modules.d

# Rename
[root@elk188 ~]$ mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml-`date +%F`
[root@elk188 ~]$ ll /etc/filebeat/
total 3880
drwxr-xr-x 2 root root       6 Oct 12 16:23 config
-rw-r--r-- 1 root root 3780088 Apr 19  2022 fields.yml
-rw-r--r-- 1 root root  170239 Apr 19  2022 filebeat.reference.yml
-rw------- 1 root root    8273 Oct 12 17:12 filebeat.yml-2024-10-12
drwxr-xr-x 2 root root    4096 Oct 12 16:20 modules.d
[root@elk188 ~]$
2. Edit the filebeat configuration file
(1) Write a test configuration file
mkdir /etc/filebeat/config -pv
cat > /etc/filebeat/config/01-stdin-to-console.yml << 'EOF'
# Define the inputs
filebeat.inputs:
# The "stdin" type means standard input
- type: stdin

# Define the output
output.console:
  # pretty-print the output
  pretty: true
EOF
(2) Run a filebeat instance
filebeat -e -c /etc/filebeat/config/01-stdin-to-console.yml

(3) Test
[root@elk188 ~]$ vim /etc/filebeat/filebeat.yml
[root@elk188 ~]$ cat /etc/filebeat/filebeat.yml
# Define the inputs
filebeat.inputs:
# The "stdin" type means standard input
- type: stdin

# Define the output
output.console:
  # pretty-print the output
  pretty: true

[root@elk188 /tmp]$ echo 1111 > test.log
[root@elk188 /tmp]$ echo 2222 >> test.log
[root@elk188 /tmp]$ more test.log
1111
2222
[root@elk188 /tmp]$ echo -n 3333 >> test.log   # '-n' suppresses the trailing newline
[root@elk188 /tmp]$ ll /tmp/test.log
-rw-r--r-- 1 root root 19 Oct  6 22:14 /tmp/test.log

[root@elk188 ~]$ ll /var/lib/filebeat/registry/filebeat
total 8
-rw------- 1 root root 2025 Oct  6 18:59 log.json
-rw------- 1 root root   15 Oct  6 14:23 meta.json

[root@elk188 ~]$ cat /var/lib/filebeat/registry/filebeat/log.json
{"op":"set","id":1}
{"k":"filebeat::logs::","v":{"FileStateOS":{"inode":0,"device":0},"prev_id":"","offset":12,"timestamp":[279671284005235,1728249848],"ttl":0,"type":"","identifier_name":"","id":"","source":""}}
{"op":"set","id":2}
{"k":"filebeat::logs::","v":{"identifier_name":"","id":"","prev_id":"","source":"","offset":5,"ttl":0,"type":"","FileStateOS":{"inode":0,"device":0},"timestamp":[279671418128480,1728249976]}}
{"op":"set","id":3}
{"k":"filebeat::logs::","v":{"timestamp":[279671418128480,1728249976],"ttl":0,"identifier_name":"","id":"","prev_id":"","source":"","offset":5,"type":"","FileStateOS":{"inode":0,"device":0}}}
{"op":"set","id":4}
{"k":"filebeat::logs::native::0-0","v":{"prev_id":"","type":"","FileStateOS":{"inode":0,"device":0},"id":"native::0-0","offset":5,"timestamp":[279671418128480,1728249976],"ttl":-2,"identifier_name":"native","source":""}}
{"op":"set","id":5}
{"k":"filebeat::logs::","v":{"id":"","timestamp":[279671256823595,1728252729],"ttl":0,"identifier_name":"","prev_id":"","source":"","offset":5,"type":"","FileStateOS":{"inode":0,"device":0}}}
{"op":"set","id":6}
{"k":"filebeat::logs::native::0-0","v":{"type":"","FileStateOS":{"inode":0,"device":0},"offset":5,"ttl":-2,"source":"","timestamp":[279671418128480,1728249976],"identifier_name":"native","id":"native::0-0","prev_id":""}}
{"op":"set","id":7}
{"k":"filebeat::logs::","v":{"id":"","timestamp":[279671256823595,1728252729],"type":"","FileStateOS":{"inode":0,"device":0},"identifier_name":"","prev_id":"","source":"","offset":5,"ttl":0}}
{"op":"set","id":8}
{"k":"filebeat::logs::native::0-0","v":{"id":"native::0-0","FileStateOS":{"inode":0,"device":0},"identifier_name":"native","type":"","prev_id":"","source":"","offset":5,"timestamp":[279671256823595,1728252729],"ttl":-2}}
{"op":"set","id":9}
{"k":"filebeat::logs::native::0-0","v":{"ttl":-2,"type":"","FileStateOS":{"inode":0,"device":0},"identifier_name":"native","id":"native::0-0","prev_id":"","source":"","offset":5,"timestamp":[279671256823595,1728252729]}}[root@elk188 ~]$ cat  /var/lib/filebeat/registry/filebeat/meta.json
{"version":"1"}[root@elk188 ~]$ filebeat -e -c /etc/filebeat/filebeat.yml8888
{"@timestamp": "2024-10-06T21:26:15.325Z",	#时间戳,当前的时间"@metadata": {		# 元数据信息"beat": "filebeat",	# beat使用的是什么beat"type": "_doc",		# 文档类型"version": "7.17.3"	# filebeat 的版本号},"log": {		# 读取了哪些文件"offset": 0,	# 偏移量没有"file": {		#"path": ""	#}},"message": "8888",	# 实际输入的信息"input": {		# 输入信息"type": "stdin"		# 输入类型,标准输入},"ecs": {	# 云服务端的信息"version": "1.12.0"	# },"host": {	# 主机"name": "elk188.longchi.xyz"	# 当前的主机名},"agent": {	# 当前客户端信息"ephemeral_id": "c4bff02c-7f42-40bb-aa0c-b0299958c725",# "id": "5d9ba997-37da-4163-81e2-ac5f2182d98a",# "name": "elk188.longchi.xyz",# "type": "filebeat",# "version": "7.17.3",# "hostname": "elk188.longchi.xyz"# }
}2024-10-06T14:26:16.327-0700    ERROR   file/states.go:125      State for  should have been dropped, but couldn't as state is not finished.2024-10-06T14:26:19.421-0700    INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":360,"time":{"ms":8}},"total":{"ticks":420,"time":{"ms":8},"value":420},"user":{"ticks":60}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"c4bff02c-7f42-40bb-aa0c-b0299958c725","uptime":{"ms":180672},"version":"7.17.3"},"memstats":{"gc_next":19958064,"memory_alloc":11684128,"memory_sys":262144,"memory_total":60022392,"rss":104947712},"runtime":{"goroutines":28}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":0,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":1,"active":0,"batches":1,"total":1},"write":{"bytes":604}},"pipeline":{"clients":1,"events":{"active":0,"published":1,"total":1},"queue":{"acked":1}}},"registrar":{"states":{"current":1,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.03,"15":0.61,"5":0.7,"norm":{"1":0.015,"15":0.305,"5":0.35}}}}}}[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
3. Send filebeat log output to the terminal
[root@elk188 ~]$ cat /etc/filebeat/config/02.log_to_console.yml
filebeat.inputs:
- type: log
  paths:
    - /tmp/test.log

output.console:
  pretty: true

Note: log.json records the "offset" of each file.

# How filebeat knows what data to read: it reads line by line
[root@elk188 ~]$ ll /tmp/test.log
-rw-r--r-- 1 root root 24 Oct 13 22:42 /tmp/test.log
[root@elk188 ~]$ ll /var/lib/filebeat/registry/filebeat/
total 16
-rw------- 1 root root 12032 Oct 13 23:47 log.json
-rw------- 1 root root    16 Oct 13 21:56 meta.json
[root@elk188 ~]$ vim /var/lib/filebeat/registry/filebeat/log.json
{"op":"set","id":31}
{"k":"filebeat::logs::native::20101024-2051","v":{"prev_id":"","source":"/tmp/test.log","ttl":-1,"FileStateOS":{"inode":20101024,"device":2051},"identifier_name":"native","id":"native::20101024-2051","offset":24,"timestamp":[279671438170877,1728884555],"type":"log"}}

By editing the 'offset' value in the last line of 'vim /var/lib/filebeat/registry/filebeat/log.json', you can make filebeat read a log from any chosen position. "source":"/tmp/test.log" is the source file filebeat collects from.

# 'meta.json' records the registry version
[root@elk188 ~]$ cat /var/lib/filebeat/registry/filebeat/meta.json
{"version":"1"}

Filebeat resumes collection based on the source file (e.g. '/tmp/test.log') and the recorded offset ('offset":24').

# You can simply delete everything under '/var/lib/filebeat/registry/filebeat/' and read all data from the beginning, i.e. 'rm -rf /var/lib/filebeat/*'
Note: in production, do not casually delete the log.json data like this.
If a filebeat instance is running, it also creates a 'filebeat.lock' lock file. The lock file is a placeholder marking the data directory as in use by the current instance, so multiple filebeat instances cannot share one data directory.
[root@elk188 ~]$ cat /var/lib/filebeat/registry/filebeat/log.json
{"op":"set","id":1}
{"k":"filebeat::logs::native::20101024-2051","v":{"FileStateOS":{"inode":20101024,"device":2051},"source":"/tmp/test.log","offset":0,"timestamp":[279671657789998,1728890428],"ttl":-1,"id":"native::20101024-2051","prev_id":"","type":"log","identifier_name":"native"}}[root@elk188 ~]$ cat config/02-log-to-console.yml
filebeat.inputs:
- type: logpaths:- /tmp/test.logoutput.console:pretty: true
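A quick way to watch the recorded offset advance as data is appended (jq is assumed to be installed; the final line of log.json is normally the most recent state entry):

# Show the source file and the offset filebeat has recorded for it
tail -1 /var/lib/filebeat/registry/filebeat/log.json | jq '.v.source, .v.offset'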
4. Collect nginx logs with the filebeat module mechanism
filebeat.config.modules:
  # Path to the module configs. With a yum install of 7.17.3, the default
  # value below does not work:
  # path: ${path.config}/modules.d/*.yml
  # In testing, the absolute path is the reliable choice; a binary install
  # does not need this change.
  path: /etc/filebeat/modules.d/*.yml
  # Enable config reloading
  reload.enabled: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "oldboyedu-linux-nginx-access-%{+yyyy.MM.dd}"

# Disable index lifecycle management
setup.ilm.enabled: false
# Index template name
setup.template.name: "oldboyedu-linux"
# Index template match pattern
setup.template.pattern: "oldboyedu-linux*"
# Overwrite the existing index template: true overwrites it outright, false leaves it alone
setup.template.overwrite: true
# Index template settings:
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas; must be smaller than the number of cluster nodes
  index.number_of_replicas: 0
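For the module path above to take effect, the nginx module itself must be enabled. The stock filebeat CLI does this by renaming nginx.yml.disabled to nginx.yml under modules.d:

filebeat modules enable nginx
# Confirm which modules are now enabled
filebeat modules list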

5. Assorted filebeat configuration examples
Apart from custom fields, all other fields come from the source data.

# Disable index lifecycle management; if it is enabled, the index setting above is ignored
setup.ilm.enabled: false
# Index template name
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"
# Index template settings
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas; must be smaller than the number of cluster nodes
  index.number_of_replicas: 0

# Examples
1) Standard input to console output (terminal debugging)
[root@elk188 ~]$ cat config/01-stdin-to-console.yml
# Define the inputs
filebeat.inputs:
# The "stdin" type means standard input
- type: stdin

# Define the output
output.console:
  # pretty-print the output
  pretty: true

2) Log file to terminal output
[root@elk188 ~]$ cat config/02-log-to-console.yml
filebeat.inputs:
- type: log
  paths:
    - /tmp/test.log

output.console:
  pretty: true

3) Collect both '/tmp/test.log' and '/tmp/*.txt'
[root@elk188 ~]$ cat config/03-log-to-console.yml
filebeat.inputs:
- type: log
  paths:
    - /tmp/test.log
    - /tmp/*.txt

output.console:
  pretty: true

rm -rf /var/lib/filebeat/*   # wipe the registry data
filebeat -e -c /etc/filebeat/config/03-log-to-console.yml  # start the instance

Starting a filebeat instance creates a 'filebeat.lock' lock file:
[root@elk188 ~]$ ll /var/lib/filebeat/
total 4
-rw------- 1 root root   0 Oct 14 22:43 filebeat.lock
-rw------- 1 root root 100 Oct 14 22:43 meta.json
drwxr-x--- 3 root root  22 Oct 14 22:43 registry

# Create a file that the config does not match; filebeat will not collect it
[root@elk188 ~]$ echo kafka >> /tmp/kafka.log
# Create a file matching '/tmp/*.txt'; filebeat collects it with an 'offset' of 0
[root@elk188 ~]$ echo kafka >> /tmp/kafka.txt
# Append again: filebeat now reports an 'offset' of 6, i.e. it resumes from the previously recorded offset
[root@elk188 ~]$ echo zookeeper >> /tmp/kafka.txt
[root@elk188 ~]$ ll /tmp/kafka.txt
-rw-r--r-- 1 root root 16 Oct 14 23:01 /tmp/kafka.txt

# Multiple path globs are supported; a typical use case is '/var/lib/docker/.../xdxx'
[root@elk188 ~]$ mkdir -p /tmp/test/elk
[root@elk188 ~]$ echo 2222 >> /tmp/test/elk/1.log
[root@elk188 ~]$ mkdir -p /tmp/test/kafka
[root@elk188 ~]$ echo 1111 >> /tmp/test/kafka/1.log
[root@elk188 ~]$ cat config/04-log-to-console.yml
filebeat.inputs:
- type: log
  paths:
    - /tmp/test.log
    - /tmp/*.txt
- type: log
  paths:
    - /tmp/test/*/*.log

output.console:
  pretty: true

# 'enabled' turns collection for an input on (true) or off (false)
[root@elk188 ~]$ cat config/04-log-to-console.yml
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /tmp/test.log
    - /tmp/*.txt
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log

output.console:
  pretty: true

Run the following to see the effect of enabling and disabling inputs:
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c config/04-log-to-console.yml

# Common per-input options: enabled, tags, fields, fields_under_root, processors, pipeline, keep_null, index, publisher_pipeline.disable_host
[root@elk188 ~]$ cat config/04-log-to-console.yml
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux"]
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python"]

output.console:
  pretty: true

Run the following to see the custom tags (a tag is just a bare value):
rm -rf /var/lib/filebeat/*
filebeat -e -c config/04-log-to-console.yml
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c config/04-log-to-console.yml

# Add custom fields
[root@elk188 ~]$ cat config/04-log-to-console.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux","容器运维","DBA运维","SRE运维工程师"]
  fields:
    school: "北京昌平区沙河镇"
    class: "linux80"
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生运维"]

output.console:
  pretty: true

Run the following to see the custom fields, shown as key-value pairs whose values can be extracted later:
rm -rf /var/lib/filebeat/*
filebeat -e -c config/04-log-to-console.yml
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c config/04-log-to-console.yml
{
  "@timestamp": "2024-10-15T21:43:45.288Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.3"
  },
  "host": {
    "name": "elk188.longchi.xyz"
  },
  "agent": {
    "name": "elk188.longchi.xyz",
    "type": "filebeat",
    "version": "7.17.3",
    "hostname": "elk188.longchi.xyz",
    "ephemeral_id": "0e0e2d4c-8e0c-4afd-a6cf-1c6d16c2f761",
    "id": "132e1803-5bd6-417c-8dd6-251836669c7c"
  },
  "log": {
    "offset": 0,
    "file": {
      "path": "/tmp/test.log"
    }
  },
  "message": "1111",
  "tags": [            # bare values; tags are usually only used for matching — to read one, index into the array (0, 1, ...)
    "longchi-linux",
    "容器运维",
    "DBA运维",
    "SRE运维工程师"
  ],
  "input": {
    "type": "log"
  },
  "fields": {          # shown as key-value pairs
    "class": "linux80",
    "school": "北京昌平区沙河镇"
  },
  "ecs": {
    "version": "1.12.0"
  }
}
Apart from custom fields, all other fields come from the source data.

4) Adding fields
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ vim /etc/filebeat/config/04.log_to_console.yml
[root@elk188 ~]$ cat /etc/filebeat/config/04.log_to_console.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  # Data paths
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  # Tags for this input
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  # Custom fields
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields' key-value pairs to top-level fields.
  # The default is false, which nests the data under a "fields" key.
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.console:
  pretty: true

5) Output to the ES cluster
[root@elk188 ~]$ cat /etc/filebeat/config/05-log-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  # Data paths
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  # Tags for this input
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  # Custom fields
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.elasticsearch:
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]

[root@elk188 ~]$ ll /var/lib/filebeat/
total 4
-rw------- 1 root root   0 Oct  7 23:29 filebeat.lock
-rw------- 1 root root 100 Oct  6 14:23 meta.json
drwxr-x--- 3 root root  22 Oct  7 23:14 registry

[root@elk188 ~]$ rm -rf /var/lib/filebeat/registry
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/04.log_to_console.yml

[root@elk188 ~]$ filebeat -e -c config/03.log_to_console.yml
2024-10-07T03:17:56.350-0700    INFO    instance/beat.go:685
Home path: [/usr/share/filebeat]
Config path: [/etc/filebeat]
Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
Hostfs Path: [/]

# Available outputs: Elasticsearch Service, Elasticsearch, Logstash, Kafka, Redis, File, Console, Change the output codec
All the output above went to the console; the next example writes to the elasticsearch cluster.
[root@elk188 ~]$ cp config/05.log_to_es.yml config/06.log_to_es.yml
[root@elk188 ~]$ cp /etc/filebeat/config/05.log_to_es.yml /etc/filebeat/config/06.log_to_es.yml
[root@elk188 ~]$ vim config/06.log_to_es.yml

6) Configure the index, the index pattern, and disable index lifecycle management
[root@elk188 ~]$ vim /etc/filebeat/config/06.log_to_es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/06.log_to_es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-elk-%{+yyyy.MM.dd}"

# Disable index lifecycle management; if it is enabled, the index setting above is ignored
setup.ilm.enabled: false
# Index template name
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"

[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/06.log_to_es.yml
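Once the instance has shipped some events, the daily index should be visible in ES; the standard _cat API lists the indices matching the pattern configured above:

curl '192.168.222.187:9200/_cat/indices/longchi-linux-*?v'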
7) Configure multiple indices (indices)
# With the multi-index mode (indices), one configuration file can write to different indices
[root@elk188 ~]$ cp config/06.log_to_es.yml config/07.log_to_es.yml
[root@elk188 ~]$ cp /etc/filebeat/config/06.log_to_es.yml /etc/filebeat/config/07.log_to_es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/07.log_to_es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/07.log_to_es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.elasticsearch:
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-linux-elk-%{+yyyy.MM.dd}"
      # Match events whose field contains the given value
      when.contains:
        tags: "longchi-linux80"
    - index: "longchi-linux-python-%{+yyyy.MM.dd}"
      when.contains:
        tags: "longchi-python"

# Disable index lifecycle management; if it is enabled, the index settings above are ignored
setup.ilm.enabled: false
# Index template name
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"

[root@elk188 ~]$ vim config/07.log_to_es.yml
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c config/07.log_to_es.yml
2024-10-16T19:47:59.500-0700    INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/tmp/test/*/*.log]        {"input_id": "5e168c0d-861e-4a69-977b-cf3743a4904c", "source": "/tmp/test/kafka/hosts.log", "state_id": "native::3534969-2051", "finished": false, "os_id": "3534969-2051", "harvester_id": "d9a9c1fd-9e4f-4acf-a3cc-a6caee5045ad"}

8) Define shard and replica counts in the configuration file
# Production typically uses something like 10 shards and 2 replicas
# Note: the replica count must not exceed the number of cluster nodes
[root@elk188 ~]$ cp /etc/filebeat/config/07-log-to-es.yml /etc/filebeat/config/08-log-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/08-log-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/08-log-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-linux-elk-%{+yyyy.MM.dd}"
      # Match events whose field contains the given value
      when.contains:
        tags: "longchi-linux80"
    - index: "longchi-linux-python-%{+yyyy.MM.dd}"
      when.contains:
        tags: "longchi-python"

# Disable index lifecycle management; if it is enabled, the index settings above are ignored
setup.ilm.enabled: false
# Index template name (an index template defines how indices are created)
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"
# Overwrite the existing index template
setup.template.overwrite: false
# Index template settings
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas. Production never uses 0; for this cluster size, 1-2 is typical, and the value must stay below the node count
  index.number_of_replicas: 1

Restart the filebeat instance:
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/08-log-to-es.yml
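To confirm the template settings actually took effect, the shard layout of the new indices can be listed with the standard _cat/shards API (the prirep column shows p = primary, r = replica):

curl '192.168.222.187:9200/_cat/shards/longchi-linux-*?v'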
# Create the template
(in the Kibana console, first delete Index Management -> indices and index templates, then start the filebeat instance to recreate them)
2024-10-17T02:25:53.757-0700    INFO    template/load.go:131    Try loading template longchi-linux to Elasticsearch
2024-10-17T02:25:53.971-0700    INFO    template/load.go:123    Template with name "longchi-linux" loaded.

Notes:
1. Cluster colors
Red: some primary shards are unreachable
Yellow: some replica shards are unreachable
Green: all primary and replica shards are reachable

2. Differences between primary and replica shards
(1) A primary shard serves both reads and writes (rw)
(2) A replica shard is read-only (ro)

3. Filebeat's two main components are input and output
Filebeat's job is data collection and data shipping.
Filebeat works by reading line by line.

4. Elasticsearch (ES for short) stores, queries, and analyzes data via a REST API
Index: the logical name under which data is stored.
Shard: an index has one or more shards; shards are where the index data really lives; each shard is responsible for actual storage.
Replica: a backup of a shard; a shard has zero or more replicas. With no replicas, if a node dies its data cannot be found on any other node.

5. Kibana (data display)
Create an index pattern ---> it points at indices in ES.
An index pattern must match at least one index when it is created.
systemctl enable elasticsearch --now
systemctl enable kibana --now

9) nginx example
[root@elk188 ~]$ cp /etc/filebeat/config/08-log-to-es.yml /etc/filebeat/config/09-nginx-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/09-nginx-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/09-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  # Data paths
  paths:
    - /var/log/nginx/access.log*
  # Tags for this input
  tags: ["access"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-%{+yyyy.MM.dd}"

# Disable index lifecycle management; if it is enabled, the index setting above is ignored
setup.ilm.enabled: false
# Index template name (an index template defines how indices are created)
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"
# Overwrite the existing index template: true overwrites it outright
setup.template.overwrite: true
# Index template settings
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas. Production never uses 0; 1-3 is typical, and the value must stay below the node count
  index.number_of_replicas: 1

Start the filebeat nginx instance:
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/09-nginx-to-es.yml

10) Collect nginx JSON logs with the log input type
Edit the nginx configuration file:
[root@elk188 ~]$ cat /etc/nginx/nginx.conf
...
    log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                                  '"host": "$server_addr",'
                                  '"clientip": "$remote_addr",'
                                  '"size": "$body_bytes_sent",'
                                  '"responsetime": "$request_time",'
                                  '"upstreamtime": "$upstream_response_time",'
                                  '"upstreamhost": "$upstream_addr",'
                                  '"http_host": "$host",'
                                  '"uri": "$uri",'
                                  '"domain": "$host",'
                                  '"xff": "$http_x_forwarded_for",'
                                  '"referer": "$http_referer",'
                                  '"tcp_xff": "$proxy_protocol_addr",'
                                  '"http_user_agent": "$http_user_agent",'
                                  '"status": "$status"}';

    access_log /var/log/nginx/access.log longchi_nginx_json;
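With the JSON log format in place, a quick sanity check is to hit nginx once and confirm that the newest access-log line parses as JSON (python3 is assumed to be available):

curl -s http://127.0.0.1/ > /dev/null
tail -1 /var/log/nginx/access.log | python3 -m json.tool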
Edit the filebeat configuration:
[root@elk188 ~]$ cat /etc/filebeat/config/10-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  # Data paths
  paths:
    - /var/log/nginx/access.log*
  # Tags for this input
  tags: ["access"]
  parsers:
    - ndjson:
        json.keys_under_root: true
        json.overwrite_keys: true
        json.add_error_key: true
        json.message_key: true

# For JSON-formatted lines, the options mean the following:
# put all JSON keys at the top level:
#   json.keys_under_root: true
# overwrite keys that already exist outside the JSON:
#   json.overwrite_keys: true
# add an error key with the parse error message when decoding fails:
#   json.add_error_key: true
# the key that holds the message:
#   json.message_key: log

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"

# Disable index lifecycle management; if it is enabled, the index setting above is ignored
setup.ilm.enabled: false
# Index template name (an index template defines how indices are created)
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"
# Overwrite the existing index template: true overwrites it outright, false leaves it alone
setup.template.overwrite: true
# Index template settings
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas. Production never uses 0; 1-3 is typical, and the value must stay below the node count
  index.number_of_replicas: 1
11) Supplementary notes: indices and shards
# Shards are where the storage really happens: a shard holds documents, and documents are where the data actually lives. Each shard corresponds to one Lucene index (Lucene is the search framework/library ES builds on underneath; on its own it is limited to single-node access).
# How do you make full use of an ES cluster? Configure multiple shards. Large companies (e.g. Tencent) often size roughly 20 shards per GB of memory: a 32 GB server would then carry around 640 shards, which is about the practical limit per node.
# How is a shard chosen for a document? Via routing: the document ID (its unique identifier) is hashed and taken modulo the number of primary shards; the remainder identifies the shard (and thus the node) that stores the document.
"_id": "RAdbmJIBJmKm3sgs4J3G",
Routing: hash(_id) % primary_shard_number = shard number
Drawback: a shard on a single node is a single point of failure; when that node dies, the data must come from a replica on another node.
Solution: configure replicas, e.g. 3 shards with 1 replica each (one shard per node on a 3-node cluster, each shard backed by one replica), as shown in the figure below.
Note: the number of primary shards cannot be changed once the index is created (no resharding); the number of replicas can be changed later.

# Port 9200 speaks http/https (external traffic); port 9300 speaks TCP (internal cluster traffic)
# Filter the kibana configuration file. Kibana is itself a client of the ES cluster, and users can also call the ES API directly.
[root@elk188 ~]$ egrep -v "^#|^$" /etc/kibana/kibana.yml
server.host: "0.0.0.0"
server.name: "longchi-elk"
elasticsearch.hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
i18n.locale: "zh-CN"
[root@elk188 ~]$ cat /etc/filebeat/config/07-log-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # Whether this input is enabled; defaults to true
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-linux-elk-%{+yyyy.MM.dd}"
      # Match events whose field contains the given value
      when.contains:
        tags: "longchi-linux80"
    - index: "longchi-linux-python-%{+yyyy.MM.dd}"
      when.contains:
        tags: "longchi-python"

# Disable index lifecycle management; if it is enabled, the index settings above are ignored
setup.ilm.enabled: false
# Index template name (an index template defines how indices are created)
setup.template.name: "longchi-linux"
# Index template match pattern
setup.template.pattern: "longchi-linux*"

As the diagram shows, with 2 replicas configured, the two replicas of a shard must sit on 2 different nodes (a replica is never placed on the same host as its primary), so the data remains available even if that node goes down.

When one rack loses power, the data can still be fetched from the second rack.

EFK architecture data flow

6. Run the following to see the custom fields
rm -rf /var/lib/filebeat/*
filebeat -e -c config/04-log-to-console.yml

[root@elk188 ~]$ cat config/04-log-to-console.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  tags: ["longchi-linux","容器运维","DBA运维","SRE运维工程师"]
  fields:
    school: "北京昌平区沙河镇"
    class: "linux80"
- type: log
  enabled: true
  paths:
    - /tmp/test/*/*.log
  tags: ["longchi-python","云原生运维"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # Promote the custom fields to top level (the default false nests them under "fields")
  fields_under_root: true

output.console:
  pretty: true

Run the following to see the custom fields:
rm -rf /var/lib/filebeat/*
filebeat -e -c config/04-log-to-console.yml
{
  "@timestamp": "2024-10-15T22:09:33.259Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.3"
  },
  "agent": {
    "type": "filebeat",
    "version": "7.17.3",
    "hostname": "elk188.longchi.xyz",
    "ephemeral_id": "37eb1a3a-f9cd-4edd-9da6-3ab34f5c7819",
    "id": "e66573c4-284e-46cd-8875-94f280168bcf",
    "name": "elk188.longchi.xyz"
  },
  "log": {
    "offset": 0,
    "file": {
      "path": "/tmp/test.log"
    }
  },
  "message": "1111",
  "tags": ["longchi-linux","容器运维","DBA运维","SRE运维工程师"],
  "input": {
    "type": "log"
  },
  "fields": {
    "class": "linux80",
    "school": "北京昌平区沙河镇"
  },
  "ecs": {
    "version": "1.12.0"
  },
  "host": {
    "name": "elk188.longchi.xyz"
  }
}
{
  "@timestamp": "2024-10-15T22:09:33.259Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.3"
  },
  "log": {
    "offset": 0,
    "file": {
      "path": "/tmp/test/elk/1.log"
    }
  },
  "message": "2222",
  "tags": ["longchi-python","云原生运维"],
  "input": {
    "type": "log"
  },
  "corporation": "上海隆迟实业有限公司",
  "address": "上海市奉贤区奉城镇新奉公路2011号1幢1243室",
  "agent": {
    "hostname": "elk188.longchi.xyz",
    "ephemeral_id": "37eb1a3a-f9cd-4edd-9da6-3ab34f5c7819",
    "id": "e66573c4-284e-46cd-8875-94f280168bcf",
    "name": "elk188.longchi.xyz",
    "type": "filebeat",
    "version": "7.17.3"
  },
  "ecs": {
    "version": "1.12.0"
  },
  "host": {
    "name": "elk188.longchi.xyz"
  }
}
7. Start the filebeat instance
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/06-log-to-es.yml

ES documentation links

# inputs configuration docs
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html
# console output configuration docs
https://www.elastic.co/guide/en/beats/filebeat/current/console-output.html
# elasticsearch output configuration docs
https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
# index template configuration docs
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
# index lifecycle management docs
https://www.elastic.co/guide/en/beats/filebeat/7.17/ilm.html
# filebeat overview
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html
# nginx module docs
https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-nginx.html
# tomcat module docs
https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-tomcat.html
# multiline matching for the filestream input type
https://www.elastic.co/guide/en/beats/filebeat/7.17/multiline-examples.html
# filebeat tcp input docs
https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-input-tcp.html
# filebeat file output docs
https://www.elastic.co/guide/en/beats/filebeat/7.17/file-output.html
# logstash elasticsearch output docs
https://www.elastic.co/guide/en/logstash/7.17/plugins-outputs-elasticsearch.html

VIII. Common Enterprise Filebeat Cases (EFK Architecture)

1. Ship nginx logs to the ES cluster with filebeat

(1) Install and start the nginx service
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF

yum -y install nginx
systemctl start nginx
systemctl cat nginx
systemctl enable nginx
systemctl status nginx

(2) Configure filebeat to collect the nginx logs and write them to ES
cat > /etc/filebeat/config/02-filestream-to-console.yml << 'EOF'
filebeat.inputs:
- type: filestream
  # Path of the access log to collect
  paths:
    - /var/log/nginx/access.log

output.elasticsearch:
  # List of ES cluster hosts
  hosts:
    - "http://192.168.222.187:9200"
    - "http://192.168.222.188:9200"
    - "http://192.168.222.189:9200"
EOF

(3) Start the filebeat instance
filebeat -e -c /etc/filebeat/config/02-filestream-to-console.yml

(4) View the logs in Kibana

2. Collect native nginx logs with the log input type

[root@elk188 ~]$ vim /etc/filebeat/config/09-nginx-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/09-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /var/log/nginx/access.log*
  # tag this input
  tags: ["access"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-%{+yyyy.MM.dd}"

# disable index lifecycle management; if ILM stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name; a template defines how new indices are created
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 1

Start the filebeat nginx instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/09-nginx-to-es.yml
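A quick smoke test, assuming nginx answers on 127.0.0.1 on this node: generate a few requests, then confirm that the dated index from the config above was created:

for i in $(seq 1 10); do curl -s -o /dev/null 127.0.0.1; done
curl -s http://192.168.222.187:9200/_cat/indices?v | grep longchi-linux-nginx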

3, Collect nginx JSON logs with the log input type

(1) Change the nginx log format
vim /etc/nginx/nginx.conf
...
log_format longchi_nginx_json '{"@timestamp":"$time_iso8601",'
                              '"@source":"$server_addr",'
                              '"idc":"huzhou",'
                              '"http_cookie":"$http_cookie",'
                              '"hostname":"$hostname",'
                              '"ip":"$http_x_forwarded_for",'
                              '"client":"$remote_addr",'
                              '"request_method":"$request_method",'
                              '"scheme":"$scheme",'
                              '"domain":"$server_name",'
                              '"referer":"$http_referer",'
                              '"request":"$request_uri",'
                              '"args":"$args",'
                              '"size":$body_bytes_sent,'
                              '"request_body":"$request_body",'
                              '"status": $status,'
                              '"responsetime":$request_time,'
                              '"upstreamtime":"$upstream_response_time",'
                              '"upstreamaddr":"$upstream_addr",'
                              '"http_user_agent":"$http_user_agent",'
                              '"https":"$https"'
                              '}';
access_log /var/log/nginx/access.log longchi_nginx_json;

(2) Check the nginx config syntax and restart the nginx service
nginx -t
systemctl restart nginx

(3) The resulting nginx config file
[root@elk188 ~]$ cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

#    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
#                      '$status $body_bytes_sent "$http_referer" '
#                      '"$http_user_agent" "$http_x_forwarded_for"';
#    access_log  /var/log/nginx/access.log  main;

    log_format longchi_nginx_json '{"@timestamp":"$time_iso8601",'
                                  '"@source":"$server_addr",'
                                  '"idc":"huzhou",'
                                  '"http_cookie":"$http_cookie",'
                                  '"hostname":"$hostname",'
                                  '"ip":"$http_x_forwarded_for",'
                                  '"client":"$remote_addr",'
                                  '"request_method":"$request_method",'
                                  '"scheme":"$scheme",'
                                  '"domain":"$server_name",'
                                  '"referer":"$http_referer",'
                                  '"request":"$request_uri",'
                                  '"args":"$args",'
                                  '"size":$body_bytes_sent,'
                                  '"request_body":"$request_body",'
                                  '"status": $status,'
                                  '"responsetime":$request_time,'
                                  '"upstreamtime":"$upstream_response_time",'
                                  '"upstreamaddr":"$upstream_addr",'
                                  '"http_user_agent":"$http_user_agent",'
                                  '"https":"$https"'
                                  '}';
    access_log /var/log/nginx/access.log longchi_nginx_json;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}
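Since every access-log line is now a JSON object, it can be sanity-checked before filebeat picks it up. A quick check, assuming the jq package is installed (e.g. from EPEL):

curl -s -o /dev/null 127.0.0.1
tail -1 /var/log/nginx/access.log | jq .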
4, Collect nginx logs via the filebeat module
[root@elk188 ~]$ cat /etc/filebeat/config/11-nginx-to-es.yml
# encoding: utf-8
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  # enable hot reload of the module configs
  reload.enabled: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name; a template defines how new indices are created
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template; when false, keep the existing one
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0
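For this module-based case the nginx module itself also has to be enabled and pointed at the log files. A minimal sketch of /etc/filebeat/modules.d/nginx.yml; the paths are this lab's defaults and may need adjusting:

filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules enable nginx

- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]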

5, Collect tomcat logs via the filebeat module
(1) Deploy the tomcat service
1) Unpack the tomcat tarball
tar xf apache-tomcat-10.0.20.tar.gz -C /longchi/softwares/

2) Create a symbolic link
cd /longchi/softwares/ && ln -sv apache-tomcat-10.0.20 tomcat

3) Configure environment variables
vim /etc/profile.d/elk.sh
...
# export JAVA_HOME=/usr/share/elasticsearch/jdk
export JAVA_HOME=/longchi/softwares/jdk
export TOMCAT_HOME=/longchi/softwares/tomcat
export PATH=$PATH:$TOMCAT_HOME/bin:$JAVA_HOME/bin

4) Apply the environment variables
source /etc/profile.d/elk.sh

5) Start the tomcat service
catalina.sh start

6) Stop the tomcat service
catalina.sh stop

(2) Enable module management for tomcat and configure log collection
filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules disable nginx
filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules enable tomcat
1) Either of the following two commands enables the tomcat module
[root@elk188 ~]$ mv /etc/filebeat/modules.d/tomcat.yml.disabled /etc/filebeat/modules.d/tomcat.yml
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules enable tomcat

# list the enabled modules
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list | head
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules disable nginx
Disabled nginx
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list | head
Enabled:
tomcat

Disabled:
activemq
apache
[root@elk188 ~]$ cp /etc/filebeat/config/11-nginx-to-es.yml /etc/filebeat/config/12-tomcat-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/12-tomcat-to-es.yml
# encoding: utf-8
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  # enable hot reload of the module configs
  reload.enabled: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-tomcat-access-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name; a template defines how new indices are created
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template; when false, keep the existing one
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

[root@elk188 ~]$ ll /etc/filebeat/modules.d/*.yml
-rw-r--r-- 1 root root 623 Apr 19  2022 /etc/filebeat/modules.d/tomcat.yml

Edit the tomcat module configuration
[root@elk188 ~]$ vim /etc/filebeat/modules.d/tomcat.yml
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/modules.d/tomcat.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - "/longchi/softwares/apache-tomcat-10.0.20/logs/localhost_access_log.2024-10-19.txt"
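The start command for this case is not shown above; following the same pattern as the other cases it would presumably be:

rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/12-tomcat-to-es.yml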
6, Collect raw tomcat logs with the log input type
[root@elk188 ~]$ cp /etc/filebeat/config/12-tomcat-to-es.yml /etc/filebeat/config/13-tomcat-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/13-tomcat-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/13-tomcat-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  # hot reload (note: reload options normally belong under filebeat.config.modules)
  reload.enabled: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-tomcat-access-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name; a template defines how new indices are created
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template; when false, keep the existing one
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

Start the filebeat instance for the raw tomcat logs
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/13-tomcat-to-es.yml
7, Collect tomcat JSON logs with the log input type
# back up the config file
[root@elk188 ~]$ cp /longchi/softwares/apache-tomcat-10.0.20/conf/{server.xml,server.xml-`date +%F`}
[root@elk188 ~]$ ll /longchi/softwares/apache-tomcat-10.0.20/conf/server.xml*
-rw------- 1 root root 6757 Oct 20 01:28 /longchi/softwares/apache-tomcat-10.0.20/conf/server.xml
-rw------- 1 root root 6757 Oct 20 01:29 /longchi/softwares/apache-tomcat-10.0.20/conf/server.xml-2024-10-20

1) Edit the configuration file
vim /longchi/softwares/apache-tomcat-10.0.20/conf/server.xml
Replace the content of <Host></Host> with the following:

      <Host name="tomcat.longchi.xyz"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="tomcat.longchi.xyz_access_log" suffix=".txt"
               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>

      </Host>

2) Edit the filebeat configuration as follows
[root@elk188 ~]$ cp /etc/filebeat/config/13-tomcat-to-es.yml /etc/filebeat/config/14-tomcat-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/14-tomcat-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  # parse the JSON in the message field and promote its keys to top level
  json.keys_under_root: true
  # hot reload (note: reload options normally belong under filebeat.config.modules)
  reload.enabled: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-tomcat-access-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

3) Update the hostname mapping
[root@elk187 ~]$ vim /etc/hosts
[root@elk187 ~]$ data_rsync.sh /etc/hosts
===== rsyncing elk188.longchi.xyz: hosts =====
command executed successfully
===== rsyncing elk189.longchi.xyz: hosts =====
command executed successfully
[root@elk187 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.222.187 elk187.longchi.xyz
192.168.222.188 elk188.longchi.xyz tomcat.longchi.xyz
192.168.222.189 elk189.longchi.xyz

4) Access the tomcat service
[root@elk187 ~]$ curl -I tomcat.longchi.xyz:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Mon, 21 Oct 2024 10:46:17 GMT

[root@elk187 ~]$ curl tomcat.longchi.xyz:8080
[root@elk188 /longchi/softwares/apache-tomcat-10.0.20/logs]$ ll
total 20
-rw-r----- 1 root root 6976 Oct 21 03:43 catalina.2024-10-21.log
-rw-r----- 1 root root 6976 Oct 21 03:43 catalina.out
-rw-r----- 1 root root  696 Oct 21 03:49 tomcat.longchi.xyz_access_log.2024-10-21.txt

# the sync script
[root@elk187 ~]$ cat data_rsync.sh
#!/bin/bash
#Author: zengguoqing
#encoding: utf-8

if [ $# -ne 1 ];then
    echo "Usage: $0 /path/to/file"
    exit
fi

# check that the file or directory exists
if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find"
    exit
fi

# parent path
fullpath=`dirname $1`

# file name
basename=`basename $1`

# enter the parent path
cd $fullpath

for ((host_id=188;host_id<=189;host_id++))
do
    # switch the terminal output to green
    tput setaf 2
    echo ===== rsyncing elk${host_id}.longchi.xyz: $basename =====
    # restore the terminal color
    tput setaf 7
    # sync the data to the other two nodes
    rsync -az $basename `whoami`@elk${host_id}.longchi.xyz:$fullpath
    if [ $? -eq 0 ];then
        echo "command executed successfully"
    fi
done

Start the filebeat tomcat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/14-tomcat-to-es.yml

KQL (Kibana Query Language)
status : "200" and clientip : "192.168.222.189"   (click Update in Kibana to see the matching results)
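A few more KQL queries against the same index; the field names (status, clientip, method, AgentVersion) are the ones defined by the AccessLogValve pattern above:

status : 404
clientip : 192.168.222.* and method : *GET*
AgentVersion : *Firefox*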
8, Multiline matching: collect tomcat error logs
[root@elk188 ~]$ cp /etc/filebeat/config/14-tomcat-to-es.yml /etc/filebeat/config/15-tomcat-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/15-tomcat-to-es.yml
[root@elk188 /longchi/softwares/apache-tomcat-10.0.20/logs]$ vim /etc/filebeat/config/15-tomcat-to-es.yml
[root@elk188 /longchi/softwares/apache-tomcat-10.0.20/logs]$ cat /etc/filebeat/config/15-tomcat-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: false
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  json.keys_under_root: true
  # hot reload (note: reload options normally belong under filebeat.config.modules)
  reload.enabled: true

- type: log
  enabled: true
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.out
  # multiline type; valid values are "pattern" and "count"
  multiline.type: pattern
  # the match pattern
  multiline.pattern: '^\d{2}'
  # see the official docs for the parameters below
  multiline.negate: true
  multiline.match: after

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-tomcat-error-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/15-tomcat-to-es.yml
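To see what the multiline settings actually do: the first line of a catalina.out event starts with a two-digit date field, so it matches '^\d{2}'; with negate: true and match: after, every line that does not start with two digits (the indented stack-trace frames) is appended to the preceding event. An illustration (the log content here is made up):

21-Oct-2024 22:38:00.621 SEVERE [main] ...       <- matches '^\d{2}', starts a new event
	at org.apache.catalina.core...                 <- no match, appended to the event above
	at org.apache.catalina.startup...              <- no match, appended as well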
9, Multiline matching: collect elasticsearch error logs
1) The elasticsearch log
[root@elk188 ~]$ ll /var/log/elasticsearch/longchi-elk.log
-rw-r--r-- 1 elasticsearch elasticsearch 3872898 Oct 21 17:00 /var/log/elasticsearch/longchi-elk.log

2) View the log
[root@elk188 ~]$ tail -100f /var/log/elasticsearch/longchi-elk.log

3) Inspect the elasticsearch systemd unit
[root@elk188 ~]$ systemctl cat elasticsearch
# /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=https://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/usr/share/elasticsearch
Environment=ES_PATH_CONF=/etc/elasticsearch
Environment=PID_DIR=/var/run/elasticsearch
Environment=ES_SD_NOTIFY=true
EnvironmentFile=-/etc/sysconfig/elasticsearch

WorkingDirectory=/usr/share/elasticsearch

User=elasticsearch
Group=elasticsearch

ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535

# Specifies the maximum number of processes
LimitNPROC=4096

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

# Allow a slow startup before the systemd notifier module kicks in to extend the timeout
TimeoutStartSec=75

[Install]
WantedBy=multi-user.target

# Built for packages-7.17.3 (packages)

4) The startup entrypoint is a plain text file and can be opened directly
[root@elk188 ~]$ file /usr/share/elasticsearch/bin/systemd-entrypoint
/usr/share/elasticsearch/bin/systemd-entrypoint: POSIX shell script, ASCII text executable
[root@elk188 ~]$ cat /usr/share/elasticsearch/bin/systemd-entrypoint
#!/bin/sh

# This wrapper script allows SystemD to feed a file containing a passphrase into
# the main Elasticsearch startup script

if [ -n "$ES_KEYSTORE_PASSPHRASE_FILE" ] ; then
  exec /usr/share/elasticsearch/bin/elasticsearch "$@" < "$ES_KEYSTORE_PASSPHRASE_FILE"
else
  exec /usr/share/elasticsearch/bin/elasticsearch "$@"
fi

5) Produce an error log entry (running ES as root fails and logs an error)
[root@elk188 ~]$ /usr/share/elasticsearch/bin/elasticsearch
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
Future versions of Elasticsearch will require Java 11; your Java version from [/longchi/softwares/jdk1.8.0_321/jre] does not meet this requirement. Consider switching to a distribution of Elasticsearch with a bundled JDK. If you are already using a distribution with a bundled JDK, ensure the JAVA_HOME environment variable is not set.
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
Future versions of Elasticsearch will require Java 11; your Java version from [/longchi/softwares/jdk1.8.0_321/jre] does not meet this requirement. Consider switching to a distribution of Elasticsearch with a bundled JDK. If you are already using a distribution with a bundled JDK, ensure the JAVA_HOME environment variable is not set.
[2024-10-21T22:38:00,621][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elk188.longchi.xyz] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.3.jar:7.17.3]
	at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.3.jar:7.17.3]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.3.jar:7.17.3]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166) ~[elasticsearch-7.17.3.jar:7.17.3]
	... 6 more
uncaught exception in thread [main]
java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107)
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183)
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166)
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157)
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
	at org.elasticsearch.cli.Command.main(Command.java:77)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /var/log/elasticsearch/longchi-elk.log
2024-10-22 05:38:01,058610 UTC [7634] INFO  Main.cc@111 Parent process died - ML controller exiting

6) Edit the filebeat config file
[root@elk188 ~]$ cp /etc/filebeat/config/15-tomcat-to-es.yml /etc/filebeat/config/16-tomcat-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/16-tomcat-to-es.yml
[root@elk188 ~]$ mv /etc/filebeat/config/16-tomcat-to-es.yml /etc/filebeat/config/16-eslog-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/16-eslog-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/16-eslog-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: false
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  json.keys_under_root: true
  # hot reload (note: reload options normally belong under filebeat.config.modules)
  reload.enabled: true

- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/longchi-elk.log*
  # multiline type; valid values are "pattern" and "count"
  multiline.type: pattern
  # the match pattern
  multiline.pattern: '^\['
  # see the official docs for the parameters below
  multiline.negate: true
  multiline.match: after

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-es-error-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

7) Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/16-eslog-to-es.yml
10, Log filtering: blacklists and whitelists can be specified, both supporting wildcards
[root@elk188 ~]$ cp /etc/filebeat/config/04-log-to-console.yml /etc/filebeat/config/17-log-to-console.yml
[root@elk188 ~]$ vim /etc/filebeat/config/17-log-to-console.yml
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/17-log-to-console.yml

[root@elk188 /tmp/test]$ echo 1111 > test.log
[root@elk188 /tmp/test]$ vim /etc/filebeat/config/17-log-to-console.yml
[root@elk188 /tmp/test]$ echo 2222 >> test.log

[root@elk188 ~]$ ll /tmp/test/test.log
-rw-r--r-- 1 root root 10 Oct 22 00:59 /tmp/test/test.log
[root@elk188 ~]$ echo 3333 >> /tmp/test/test.log
[root@elk188 ~]$ ll /tmp/test/test.log
-rw-r--r-- 1 root root 15 Oct 22 01:10 /tmp/test/test.log
[root@elk188 ~]$ cat /var/lib/filebeat/registry/filebeat/log.json
{"op":"set","id":1}[root@elk188 /tmp/test]$ cat /etc/filebeat/config/17-log-to-console.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: false
  # data paths to collect
  paths:
    - /tmp/test.log
    - /tmp/*.txt
  # tag this input
  tags: ["longchi-linux80","容器运维","DBA运维","SRE运维工程师"]
  # custom fields
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
  # promote the custom key-value pairs to top-level fields;
  # defaults to false, which nests them under a "fields" key
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /tmp/test/*.log
  # whitelist: only matching lines are collected; case sensitive, wildcards supported
  # include_lines: ['^ERR', '^WARN',"longchi"]
  # blacklist: matching lines are dropped
  exclude_lines: ['^DBG',"linux"]
  tags: ["longchi-python","云原生开发"]
  fields:
    corporation: "上海隆迟实业有限公司"
    address: "上海市奉贤区奉城镇新奉公路2011号1幢1243室"
    name: "longchi"
    domain_name: "longchi.xyz"

output.console:
  pretty: true

Start the filebeat instance
filebeat -e -c /etc/filebeat/config/17-log-to-console.yml
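With the second input above, the blacklist can be exercised directly; the file name a.log is arbitrary, anything matching /tmp/test/*.log works:

echo "DBG debug line"  >> /tmp/test/a.log    # dropped: matches '^DBG'
echo "I love linux"    >> /tmp/test/a.log    # dropped: contains "linux"
echo "hello world"     >> /tmp/test/a.log    # collected and printed to the console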
# Filtering logs: grep for the keyword 'when'
[root@elk188 ~]$ grep when /etc/filebeat/config/*.yml
/etc/filebeat/config/07-log-to-es.yml:      when.contains:
/etc/filebeat/config/07-log-to-es.yml:      when.contains:
/etc/filebeat/config/08-log-to-es.yml:      when.contains:
/etc/filebeat/config/08-log-to-es.yml:      when.contains:

[root@elk188 ~]$ cp /etc/filebeat/config/08-log-to-es.yml /etc/filebeat/config/18-nginx-to-es.yml
[root@elk188 /var/log/nginx]$ vim /etc/filebeat/config/18-nginx-to-es.yml
[root@elk188 /var/log/nginx]$ cat /etc/filebeat/config/18-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /var/log/nginx/access.log*
  # tag this input
  tags: ["access"]
  # parse the JSON in the message field and promote its keys to top level
  json.keys_under_root: true

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  tags: ["error"]
  include_lines: ["error"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-linux-web-nginx-access-%{+yyyy.MM.dd}"
      # match on the contents of the given field
      when.contains:
        tags: "access"
    - index: "longchi-linux-web-nginx-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "error"

# disable ILM; if it stays enabled, the indices settings above are ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# overwrite any existing index template
setup.template.overwrite: false
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

Modify the nginx config to the JSON log format
[root@elk188 /var/log/nginx]$ vim /etc/nginx/nginx.conf
[root@elk188 /var/log/nginx]$ cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

#    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
#                      '$status $body_bytes_sent "$http_referer" '
#                      '"$http_user_agent" "$http_x_forwarded_for"';
#
#    access_log  /var/log/nginx/access.log  main;

    log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                                  '"host": "$server_addr",'
                                  '"clientip": "$remote_addr",'
                                  '"size": "$body_bytes_sent",'
                                  '"responsetime": "$request_time",'
                                  '"upstreamtime": "$upstream_response_time",'
                                  '"upstreamhost": "$upstream_addr",'
                                  '"http_host": "$host",'
                                  '"uri": "$uri",'
                                  '"domain": "$host",'
                                  '"xff": "$http_x_forwarded_for",'
                                  '"referer": "$http_referer",'
                                  '"tcp_xff": "$proxy_protocol_addr",'
                                  '"http_user_agent": "$http_user_agent",'
                                  '"status": "$status"}';
    access_log /var/log/nginx/access.log longchi_nginx_json;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}
[root@elk188 /var/log/nginx]$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk188 /var/log/nginx]$ systemctl restart nginx

[root@elk188 /var/log/nginx]$ cat access.log
{"@timestamp": "2024-10-22T04:40:51-07:00","host": "127.0.0.1","clientip": "127.0.0.1","size": "615","responsetime": "0.000","upstreamtime": "-","upstreamhost": "-","http_host": "127.0.0.1","uri": "/index.html","domain": "127.0.0.1","xff": "-","referer": "-","tcp_xff": "-","http_user_agent": "curl/7.29.0","status": "200"}

Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/18-nginx-to-es.yml
11, Collect nginx and tomcat at the same time
1) Write the filebeat web config file
[root@elk188 ~]$ cp /etc/filebeat/config/18-nginx-to-es.yml /etc/filebeat/config/19-web-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/19-web-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/19-web-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /var/log/nginx/access.log*
  # tag this input
  tags: ["nginx-access"]
  # parse the JSON in the message field and promote its keys to top level
  json.keys_under_root: true

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  tags: ["nginx-error"]
  include_lines: ["error"]

- type: log
  enabled: true
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  json.keys_under_root: true
  # hot reload
  # reload.enabled: true
  tags: ["tomcat-access"]

- type: log
  enabled: true
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.out
  # multiline type; valid values are "pattern" and "count"
  multiline.type: pattern
  multiline.pattern: '^\d{2}'
  # see the official docs for the parameters below
  multiline.negate: true
  multiline.match: after
  tags: ["tomcat-error"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-linux-web-nginx-access-%{+yyyy.MM.dd}"
      # match on the contents of the given field
      when.contains:
        tags: "nginx-access"
    - index: "longchi-linux-web-nginx-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "nginx-error"
    - index: "longchi-linux-web-tomcat-access-%{+yyyy.MM.dd}"
      when.contains:
        tags: "tomcat-access"
    - index: "longchi-linux-web-tomcat-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "tomcat-error"

# disable ILM; if it stays enabled, the indices settings above are ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# overwrite any existing index template
setup.template.overwrite: false
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

2) Modify the nginx config so that access.log is written in the JSON format below
vim /etc/nginx/nginx.conf
...
log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                              '"host": "$server_addr",'
                              '"clientip": "$remote_addr",'
                              '"size": "$body_bytes_sent",'
                              '"responsetime": "$request_time",'
                              '"upstreamtime": "$upstream_response_time",'
                              '"upstreamhost": "$upstream_addr",'
                              '"http_host": "$host",'
                              '"uri": "$uri",'
                              '"domain": "$host",'
                              '"xff": "$http_x_forwarded_for",'
                              '"referer": "$http_referer",'
                              '"tcp_xff": "$proxy_protocol_addr",'
                              '"http_user_agent": "$http_user_agent",'
                              '"status": "$status"}';
access_log /var/log/nginx/access.log longchi_nginx_json;

3) Modify the tomcat config, replacing the original <Host></Host> with the following:

      <Host name="tomcat.longchi.xyz"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="tomcat.longchi.xyz_access_log" suffix=".txt"
               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>

      </Host>

4) Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/19-web-to-es.yml
12, The filestream input type
[root@elk188 ~]$ cp /etc/filebeat/config/09-nginx-to-es.yml /etc/filebeat/config/20-nginx-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/20-nginx-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/20-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: filestream
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /var/log/nginx/access.log*
  # tag this input
  tags: ["access"]
  # the filestream type cannot use 'json.keys_under_root'; it needs a parser instead
  # json.keys_under_root: true
  # so we configure the following parser to get JSON parsing
  parsers:
    - ndjson:
        keys_under_root: true

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-%{+yyyy.MM.dd}"

# disable ILM; if it stays enabled, the index setting above is ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

2) Modify the nginx config so that access.log is written in the JSON format below
vim /etc/nginx/nginx.conf
...
log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                              '"host": "$server_addr",'
                              '"clientip": "$remote_addr",'
                              '"size": "$body_bytes_sent",'
                              '"responsetime": "$request_time",'
                              '"upstreamtime": "$upstream_response_time",'
                              '"upstreamhost": "$upstream_addr",'
                              '"http_host": "$host",'
                              '"uri": "$uri",'
                              '"domain": "$host",'
                              '"xff": "$http_x_forwarded_for",'
                              '"referer": "$http_referer",'
                              '"tcp_xff": "$proxy_protocol_addr",'
                              '"http_user_agent": "$http_user_agent",'
                              '"status": "$status"}';
access_log /var/log/nginx/access.log longchi_nginx_json;

3) Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/20-nginx-to-es.yml
13, Multiline matching with the filestream type
(1) Write the filebeat config file
cat > /etc/filebeat/config/08-tomcat-error-to-es.yml << 'EOF'
filebeat.inputs:
- type: filestream
  paths:
    - /longchi/softwares/tomcat/logs/catalina.out
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{2}'
        negate: true
        match: after

output.elasticsearch:
  hosts:
    - "http://10.0.0.101:9200"
    - "http://10.0.0.102:9200"
    - "http://10.0.0.103:9200"
  index: "longchi-tomcat-error-%{+yyyy.MM.dd}"

setup.ilm.enabled: false
setup.template.name: "longchi-tomcat-error"
setup.template.pattern: "longchi-tomcat-error*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
EOF

(2) Start the filebeat instance
filebeat test config -c /etc/filebeat/config/08-tomcat-error-to-es.yml
filebeat -e -c /etc/filebeat/config/08-tomcat-error-to-es.yml

--------------------
[root@elk188 ~]$ cp /etc/filebeat/config/20-nginx-to-es.yml /etc/filebeat/config/21-tomcat-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/21-tomcat-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/21-tomcat-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: filestream
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  # tag this input
  tags: ["tomcat-access"]
  # the filestream type cannot use 'json.keys_under_root'; it needs a parser instead
  # json.keys_under_root: true
  # so we configure the following parser to get JSON parsing
  parsers:
    - ndjson:
        keys_under_root: true

- type: filestream
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.out
  # tag this input
  tags: ["tomcat-error"]
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{2}'
        negate: true
        match: after
  #multiline.type: pattern
  #multiline.pattern: '^\d{2}'
  #multiline.negate: true
  #multiline.match: after

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  indices:
    - index: "longchi-linux-web-tomcat-access-%{+yyyy.MM.dd}"
      # match on the contents of the given field
      when.contains:
        tags: "tomcat-access"
    - index: "longchi-linux-web-tomcat-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "tomcat-error"

# disable ILM; if it stays enabled, the indices settings above are ignored
setup.ilm.enabled: false
# index template name
setup.template.name: "longchi-linux"
# index template match pattern
setup.template.pattern: "longchi-linux*"
# when true, overwrite any existing index template
setup.template.overwrite: true
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 3
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/21-tomcat-to-es.yml
14, Collect system logs
# Collect the spooler, maillog, secure, boot.log, yum.log, firewalld, messages and cron logs, with these requirements:
(1) Everything goes into a single filebeat config file
(2) The 8 kinds of logs are written to separate indices, with the prefix "longchi-elk-system-log-{xxx}-%{+yyyy.MM.dd}"
(3) Replica count 0 and shard count 10

1, Write the filebeat config file
[root@elk188 ~]$ cp /etc/filebeat/config/19-web-to-es.yml /etc/filebeat/config/22-system-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/22-system-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/22-system-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # whether to enable this input; defaults to true
  enabled: true
  paths:
    - /var/log/spooler
  # tag this input
  tags: ["spooler"]
  # parse the JSON in the message field and put it at top level
  # json.keys_under_root: true

- type: log
  enabled: true
  paths:
    - /var/log/maillog
  tags: ["maillog"]
  # include_lines: ["error"]

- type: log
  enabled: true
  paths:
    - /var/log/secure
  # json.keys_under_root: true
  # reload.enabled: true
  tags: ["secure"]

- type: log
  enabled: true
  paths:
    - /var/log/boot.log
  # multiline type; valid values are "pattern" and "count"
  # multiline.type: pattern
  # multiline.pattern: '^\d{2}'
  # multiline.negate: true
  # multiline.match: after
  tags: ["boot"]

- type: log
  enabled: true
  paths:
    - /var/log/yum.log
  tags: ["yum"]

- type: log
  enabled: true
  paths:
    - /var/log/firewalld
  tags: ["firewalld"]

- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["messages"]

- type: log
  enabled: true
  paths:
    - /var/log/cron
  tags: ["cron"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-linux-elk-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-elk-system-log-spooler-%{+yyyy.MM.dd}"
      # match on the contents of the given field
      when.contains:
        tags: "spooler"
    - index: "longchi-elk-system-log-maillog-%{+yyyy.MM.dd}"
      when.contains:
        tags: "maillog"
    - index: "longchi-elk-system-log-secure-%{+yyyy.MM.dd}"
      when.contains:
        tags: "secure"
    - index: "longchi-elk-system-log-boot-%{+yyyy.MM.dd}"
      when.contains:
        tags: "boot"
    - index: "longchi-elk-system-log-yum-%{+yyyy.MM.dd}"
      when.contains:
        tags: "yum"
    - index: "longchi-elk-system-log-firewalld-%{+yyyy.MM.dd}"
      when.contains:
        tags: "firewalld"
    - index: "longchi-elk-system-log-messages-%{+yyyy.MM.dd}"
      when.contains:
        tags: "messages"
    - index: "longchi-elk-system-log-cron-%{+yyyy.MM.dd}"
      when.contains:
        tags: "cron"

# disable ILM; if it stays enabled, the indices settings above are ignored
setup.ilm.enabled: false
# index template name
# setup.template.name: "longchi-linux"
setup.template.name: "longchi-elk-system"
# index template match pattern
# setup.template.pattern: "longchi-linux*"
setup.template.pattern: "longchi-elk-system*"
# overwrite any existing index template
setup.template.overwrite: false
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 10
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

2, Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/22-system-to-es.yml

Note: when the system-log collection is split across multiple filebeat config files, each instance must be started with its own data directory, appended as '--path.data /var/lib/filebeat/'. For example:
filebeat -e -c /etc/filebeat/config/22-system-to-es.yml --path.data /var/lib/filebeat/

Problems you may run into on version 7.17.3:
(1) Once a single config has more than 4 input sources, some data may fail to be written to ES.
There are two workarounds.

Option 1: split the inputs across multiple filebeat instances
filebeat -e -c /etc/filebeat/config/22-system-to-es.yml --path.data /tmp/filebeat
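A sketch of option 1 with two concurrent instances; the second config file name here is illustrative, each instance just needs its own registry directory:

filebeat -e -c /etc/filebeat/config/22-system-to-es.yml  --path.data /var/lib/filebeat &
filebeat -e -c /etc/filebeat/config/22b-system-to-es.yml --path.data /tmp/filebeat &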
Option 2: solve it by aggregating the logs first

1. Install rsyslog
yum -y install rsyslog
[root@elk188 ~]$ yum -y install rsyslog

2. Edit the config file to aggregate the logs
[root@elk188 ~]$ vim /etc/rsyslog.conf

1) Open the TCP port
$ModLoad imtcp
$InputTCPServerRun 514

2) Redirect all logs to one place (turn on log aggregation)
(1) Redirect to a remote host (server), e.g. (note: a single @ forwards over UDP, use @@ for TCP):
*.*             @192.168.222.188:514

(2) Redirect to a single local file, e.g.:
*.*             /var/log/longchi.log

(3) Review the modified config
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/rsyslog.conf
$ModLoad imtcp
$InputTCPServerRun 514
$WorkDirectory /var/lib/rsyslog
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$IncludeConfig /etc/rsyslog.d/*.conf
$OmitLocalLogging on
$IMJournalStateFile imjournal.state
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                   /var/log/secure
mail.*                                       -/var/log/maillog
cron.*                                       /var/log/cron
*.emerg                                                 :omusrmsg:*
uucp,news.crit                               /var/log/spooler
local7.*                                     /var/log/boot.log
*.*                                          /var/log/longchi.log

3. Restart the rsyslog service
[root@elk188 ~]$ systemctl restart rsyslog

4. Check
[root@elk188 ~]$ ll /var/log/longchi.log
-rw------- 1 root root 795 Oct 24 04:21 /var/log/longchi.log

View the log
[root@elk188 ~]$ cat /var/log/longchi.log
[root@elk188 ~]$ tail -10f /var/log/longchi.log
Oct 24 04:21:18 elk188 systemd: Stopping System Logging Service...
Oct 24 04:21:18 elk188 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0-57.el7_9.3" x-pid="5658" x-info="http://www.rsyslog.com"] exiting on signal 15.
Oct 24 04:21:18 elk188 systemd: Stopped System Logging Service.
Oct 24 04:21:18 elk188 systemd: Starting System Logging Service...
...

5. Test
[root@elk188 ~]$ logger "My name is Jason Yim"
15, Aggregated-log collection config
(1) Edit the rsyslog config file
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/rsyslog.conf
# open the tcp port
$ModLoad imtcp
$InputTCPServerRun 514
$WorkDirectory /var/lib/rsyslog
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$IncludeConfig /etc/rsyslog.d/*.conf
$OmitLocalLogging on
$IMJournalStateFile imjournal.state
*.info;mail.none;authpriv.none;cron.none         /var/log/messages
authpriv.*                                       /var/log/secure
mail.*                                          -/var/log/maillog
cron.*                                           /var/log/cron
*.emerg                                          :omusrmsg:*
uucp,news.crit                                   /var/log/spooler
local7.*                                         /var/log/boot.log
*.*                                              /var/log/longchi.log
# all logs are redirected to '/var/log/longchi.log'

(2) Restart the rsyslog service and test
systemctl restart rsyslog
logger "My name is Janon Yim"(3) 编写 filebeat 配置文件
[root@elk188 ~]$ cp /etc/filebeat/config/22-system-to-es.yml /etc/filebeat/config/23-system-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/23-system-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/23-system-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: filestream
  # whether to enable this input; defaults to true
  enabled: true
  # data paths to collect
  paths:
    - /var/log/longchi.log
  # tag this input
  tags: ["rsyslog"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  indices:
    - index: "longchi-elk-system-rsyslog-%{+yyyy.MM.dd}"
      # match on the contents of the given field
      when.contains:
        tags: "rsyslog"

# disable ILM; if it stays enabled, the indices settings above are ignored
setup.ilm.enabled: false
# index template name
# setup.template.name: "longchi-linux"
setup.template.name: "longchi-elk-system-rsyslog"
# index template match pattern
# setup.template.pattern: "longchi-linux*"
setup.template.pattern: "longchi-elk-system-rsyslog*"
# overwrite any existing index template
setup.template.overwrite: false
# index template settings
setup.template.settings:
  # number of primary shards
  index.number_of_shards: 10
  # number of replicas; never 0 in production, typically 1-3, and fewer than the node count
  index.number_of_replicas: 0

(4) Restart the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/23-system-to-es.yml
16, How to use filebeat blacklists and whitelists
Given the lines: aaaa  bbbb  cccc  dddd
# To keep only lines containing c:
with a whitelist ---> include_lines: ['c']
with a blacklist ---> exclude_lines: ['a','b','d']

# To drop lines containing c:
with a whitelist ---> include_lines: ['a','b','d']
with a blacklist ---> exclude_lines: ['c']

# Config file for the aggregated system log
vim /etc/filebeat/config/24-systemlog-to-es.yml
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/longchi.log
  tags: ["rsyslog"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  indices:
    - index: "longchi-elk-system-rsyslog-%{+yyyy.MM.dd}"
      when.contains:
        tags: "rsyslog"

setup.ilm.enabled: false
setup.template.name: "longchi-elk-system-rsyslog"
setup.template.pattern: "longchi-elk-system-rsyslog*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 10
  index.number_of_replicas: 0

Restart the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/24-systemlog-to-es.yml

Note: if the terminal keeps streaming output when filebeat restarts, look for the 'ERROR' field, stop the instance immediately, and resolve the problem before the data can show up in Kibana. For example, an index template that already exists can keep the data from displaying; delete the existing index template, clean out the old data, and restart, after which the data appears in Kibana.

[root@elk188 ~]$ wc -l /var/log/longchi.log
4984 /var/log/longchi.log

# Test
[root@elk188 ~]$ logger "2222"
[root@elk188 ~]$ tail -10f /var/log/longchi.log
17, Collect logs from routers, switches and similar devices with filebeat
The filebeat TCP input case
Two commands can send TCP data: telnet and nc (nmap-ncat)
[root@elk188 ~]$ cp /etc/filebeat/config/24-systemlog-to-es.yml /etc/filebeat/config/25-tcp-to-es.yml

# nc homepage
https://nmap.org/ncat 
https://nmap.org/download.html

# Install the telnet and nc commands
[root@elk188 ~]$ yum -y install telnet nc
[root@elk187 ~]$ yum -y install telnet nc
[root@elk189 ~]$ yum -y install telnet nc

Write the filebeat config file
[root@elk188 ~]$ vim /etc/filebeat/config/25-tcp-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/25-tcp-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-elk-tcp-%{+yyyy.MM.dd}"
#  indices:
#    - index: "longchi-elk-tcp-%{+yyyy.MM.dd}"
#      when.contains:
#        tags: "tcp"

setup.ilm.enabled: false
setup.template.name: "longchi-elk-tcp"
setup.template.pattern: "longchi-elk-tcp*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0

Start the filebeat instance
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/25-tcp-to-es.yml

Test whether Kibana displays the input below (clients 189 and 187 send data to the server 188)
[root@elk189 ~]$ nc 192.168.222.188 9000
我是中国人
我爱你祖国母亲
...
[root@elk187 ~]$ nc 192.168.222.188 9000
我们是东方巨人
北京奥运会

Suppose our devices cannot have Linux programs installed on them, e.g. routers and switches: they cannot host Linux services (you can connect to the device, but that has its own drawbacks), and their CPU architectures do not support many of the needed instructions. Yet we still want to push router and switch logs to the ES cluster or to other machines. These devices do support sending data over TCP, so they can send to port 9000 of the server, and we can collect their logs that way.

When you see the following, the data filebeat received over TCP has been successfully displayed in Kibana.

The architecture of this setup is shown below:

18, Collect logs from routers, switches and similar devices on multiple ports
[root@elk188 ~]$ cp /etc/filebeat/config/25-tcp-to-es.yml /etc/filebeat/config/26-tcp-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/26-tcp-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/26-tcp-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"
  tags: ["黑名单"]

- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:8000"
  tags: ["用户上报"]

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  # index: "longchi-elk-system-rsyslog-%{+yyyy.MM.dd}"
  indices:
    - index: "longchi-elk-tcp-black-%{+yyyy.MM.dd}"
      when.contains:
        tags: "黑名单"
    - index: "longchi-elk-tcp-users-%{+yyyy.MM.dd}"
      when.contains:
        tags: "用户上报"

setup.ilm.enabled: false
setup.template.name: "longchi-elk-tcp"
setup.template.pattern: "longchi-elk-tcp*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0

Start the filebeat instance
filebeat -e -c /etc/filebeat/config/26-tcp-to-es.yml
19, The filebeat file output: log aggregation to files
[root@elk188 ~]$ mkdir -pv /tmp/filebeat
mkdir: created directory ‘/tmp/filebeat’
[root@elk188 ~]$ cp /etc/filebeat/config/25-tcp-to-es.yml /etc/filebeat/config/27-tcp-to-file.yml
[root@elk188 ~]$ vim /etc/filebeat/config/27-tcp-to-file.yml
[root@elk188 ~]$ cat /etc/filebeat/config/27-tcp-to-file.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"

output.file:
  path: "/tmp/filebeat"
  filename: longchi-linux
  # rotate the output file once it exceeds this many KB
  rotate_every_kb: 10000
  # number of files to keep, range 2-1024; the default is 7
  number_of_files: 1023
  # file permissions; the default is 0600
  permissions: 0600

Start the filebeat instance
filebeat -e -c /etc/filebeat/config/27-tcp-to-file.yml
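To verify the file output, send a line over TCP and read the aggregated file back; the exact file name under /tmp/filebeat comes from the filename option above:

echo "hello filebeat" | nc 192.168.222.188 9000
ls -l /tmp/filebeat/
cat /tmp/filebeat/longchi-linux*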
20, Ship filebeat-collected logs to a redis service
(1) Deploy the redis service
yum -y install epel-release
yum -y install redis
yum -y install redis epel-release

# inspect the redis systemd unit
[root@elk187 ~]$ systemctl cat redis
# /usr/lib/systemd/system/redis.service

(2) Edit the config file
[root@elk187 ~]$ vim /etc/redis.conf
[root@elk187 ~]$ egrep -v "^*#|^$" /etc/redis.conf
Only the following two lines of the stock config need to change:
bind 0.0.0.0
requirepass longchi

(3) Start the redis service
systemctl start redis
systemctl enable redis
systemctl status redis
ss -ntl

(4) Connect from another node to verify that redis works
Test: '--raw' prints replies in raw (unescaped) form, '-n 5' selects database number 5
redis-cli -a longchi -h 192.168.222.187 -p 6379 --raw -n 5

[root@elk189 ~]$ redis-cli -a longchi -h 192.168.222.187 -p 6379 --raw
192.168.222.187:6379> KEYS *

192.168.222.187:6379> KEYS *
school
192.168.222.187:6379> get school
longchi
192.168.222.187:6379> FLUSHALL
OK
# switch to database 5
[root@elk189  ~]$ redis-cli -a longchi -h 192.168.222.187 -p 6379 --raw -n 5
192.168.222.187:6379[5]>

(5) Write filebeat data into the redis environment
Write the filebeat config file
[root@elk188 ~]$ vim /etc/filebeat/config/28-tcp-to-redis.yml
[root@elk188 ~]$ cat /etc/filebeat/config/28-tcp-to-redis.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"

output.redis:
  # the redis hosts to write to
  hosts: ["192.168.222.187:6379"]
  # the redis auth password
  password: "longchi"
  # the key to write to (a redis list)
  key: "longchi-linux-filebeat"
  # the database number to connect to
  db: 5
  # connection timeout
  timeout: 3
filebeat -e -c /etc/filebeat/config/28-tcp-to-redis.yml

(7) There are two ways to write data
1) Pipe it in directly from the terminal
[root@elk187 ~]$ echo 33333333333333333333333333333333333333333 | nc 192.168.222.188 9000
[root@elk187 ~]$ cat /etc/hosts | nc 192.168.222.188 9000

2) Open an interactive session with nc 192.168.222.188 9000 and type into it
[root@elk187 ~]$ nc 192.168.222.188 9000
my name is zengguoqing
111111111111111111111111111111111111111111111111111
222222222222222222222222222222222222222222222222222222

(8) Test: log in from a client and inspect the written data
redis-cli -a longchi -h 192.168.222.187 -p 6379 -n 5
# log in to the redis database
[root@elk189 ~]$ redis-cli -a longchi -h 192.168.222.187 -p 6379 -n 5
# list the keys
192.168.222.187:6379[5]> KEYS *
1) "longchi-linux-filebeat"
# check the key's type
192.168.222.187:6379[5]> TYPE longchi-linux-filebeat
list
# view the stored entries
192.168.222.187:6379[5]> LRANGE longchi-linux-filebeat 0 -1
1) "{\"@timestamp\":\"2024-10-25T12:17:02.405Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.17.3\"},\"message\":\"AAAAAAA\",\"input\":{\"type\":\"tcp\"},\"ecs\":{\"version\":\"1.12.0\"},\"host\":{\"name\":\"elk188.longchi.xyz\"},\"agent\":{\"name\":\"elk188.longchi.xyz\",\"type\":\"filebeat\",\"version\":\"7.17.3\",\"hostname\":\"elk188.longchi.xyz\",\"ephemeral_id\":\"d7c7fbc0-ab15-4085-b610-d004fad9654f\",\"id\":\"28d4655d-de6a-4d4b-9aec-3ce92bcbcf11\"},\"log\":{\"source\":{\"address\":\"192.168.222.187:46194\"}}}"
2) "{\"@timestamp\":\"2024-10-25T12:17:07.237Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.17.3\"},\"message\":\"BBBBBBB\",\"log\":{\"source\":{\"address\":\"192.168.222.187:46194\"}},\"input\":{\"type\":\"tcp\"},\"host\":{\"name\":\"elk188.longchi.xyz\"},\"agent\":{\"type\":\"filebeat\",\"version\":\"7.17.3\",\"hostname\":\"elk188.longchi.xyz\",\"ephemeral_id\":\"d7c7fbc0-ab15-4085-b610-d004fad9654f\",\"id\":\"28d4655d-de6a-4d4b-9aec-3ce92bcbcf11\",\"name\":\"elk188.longchi.xyz\"},\"ecs\":{\"version\":\"1.12.0\"}}"
3) "{\"@timestamp\":\"2024-10-25T12:17:10.977Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.17.3\"},\"agent\":{\"version\":\"7.17.3\",\"hostname\":\"elk188.longchi.xyz\",\"ephemeral_id\":\"d7c7fbc0-ab15-4085-b610-d004fad9654f\",\"id\":\"28d4655d-de6a-4d4b-9aec-3ce92bcbcf11\",\"name\":\"elk188.longchi.xyz\",\"type\":\"filebeat\"},\"message\":\"CCCCCCC\",\"log\":{\"source\":{\"address\":\"192.168.222.187:46194\"}},\"input\":{\"type\":\"tcp\"},\"ecs\":{\"version\":\"1.12.0\"},\"host\":{\"name\":\"elk188.longchi.xyz\"}}"
192.168.222.187:6379[5]>

# Official reference
https://www.elastic.co/guide/en/beats/filebeat/7.17/redis-output.html
# Example from the official docs:
output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "filebeat"
  db: 5
  timeout: 3
21, Read data with the redis input type [reads the slow log; currently experimental]
(1) Start redis
cat > /oldboyedu/softwares/redis/redis16379.conf << EOF
port 16379
daemonize yes
bind 10.0.0.108
requirepass "oldboyedu_linux77"
slowlog-max-len 1
slowlog-log-slower-than 1000
EOF

redis-server /oldboyedu/softwares/redis/redis16379.conf

(2) Connect to redis and test
redis-cli -h 10.0.0.108 -p 16379 -a oldboyedu_linux77

(3) Write the filebeat config file
filebeat.inputs:
- type: redis
  hosts: ["10.0.0.108:16379"]
  network: tcp4
  password: "oldboyedu_linux77"
  timeout: 3

output.console:
  pretty: true
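To give the redis input something to read, a slow command can be forced; DEBUG SLEEP blocks the server for the given number of seconds, and 0.01s comfortably exceeds the 1000-microsecond slowlog threshold configured above:

redis-cli -h 10.0.0.108 -p 16379 -a oldboyedu_linux77 DEBUG SLEEP 0.01
redis-cli -h 10.0.0.108 -p 16379 -a oldboyedu_linux77 SLOWLOG GET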

Supplementary notes: rsyslog

Install rsyslog

[root@elk188 ~]$ yum -y install rsyslog
[root@elk188 ~]$ systemctl cat rsyslog
# /usr/lib/systemd/system/rsyslog.service
[Unit]
Description=System Logging Service
;Requires=syslog.socket
Wants=network.target network-online.target
After=network.target network-online.target
Documentation=man:rsyslogd(8)
Documentation=http://www.rsyslog.com/doc/

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/rsyslog
ExecStart=/usr/sbin/rsyslogd -n $SYSLOGD_OPTIONS
Restart=on-failure
UMask=0066
StandardOutput=null
Restart=on-failure

[Install]
WantedBy=multi-user.target
;Alias=syslog.service

一、Introduction to rsyslog

Rsyslog stands for "rocket-fast system for log processing". It accepts input from a wide variety of sources, transforms it, and outputs the results to different destinations, offering high performance, strong security features and a modular design. Although rsyslog started as a regular syslog daemon, it has grown into a Swiss-army-knife of logging: with limited processing applied, rsyslog can deliver over a million messages per second to local destinations, and even with remote destinations and more elaborate processing its performance is usually described as "stunning".

rsyslog is an open-source tool widely used on Linux to forward or receive log messages over TCP/UDP. The rsyslogd daemon can be configured in two roles. As a log collection server, it gathers log data from other hosts on the network, which are configured to send their logs to the remote server. As a client, it filters and sends internal log messages to local files (e.g. under /var/log) or to a reachable remote rsyslog server.

Official site: https://www.rsyslog.com/

Since CentOS 6.x the logging service has been rsyslogd, which replaced the original syslogd; Red Hat considered syslogd no longer adequate. Compared with syslogd, rsyslogd adds several features: log transport over TCP, more secure network transmission, a framework for on-the-fly log analysis, database backends, simple conditional logic in the config file, and compatibility with syslog config files. rsyslogd is more advanced and more capable, but both its usage and its log file format remain compatible with syslogd, so learning it is essentially the same as learning syslogd.

The vast majority of log files on the system are managed centrally by rsyslogd: as long as each process hands its messages to the service, it automatically records them into the appropriate log files in a consistent format, so logs managed by rsyslogd share a uniform layout. Some logs on Linux are not managed by rsyslogd; the apache service, for example, produces and records its own logs without calling rsyslogd, but for readability apache's log format is kept consistent with the system default.
How do we know whether the rsyslogd service is running on Linux, and how do we check whether it starts at boot? The commands are as follows:

ps aux | grep "rsyslog" | grep -v "grep"

[root@elk188 ~]$ ps aux | grep "rsyslog" | grep -v "grep"
root       1030  0.0  0.0 216400  5012 ?        Ssl  00:04   0:00 /usr/sbin/rsyslogd -n
# an rsyslogd process exists, so the service is running

# CentOS 6.x: check whether the service starts at boot
[root@localhost ~]# chkconfig --list | grep rsyslog
rsyslog 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# rsyslog starts automatically in runlevels 2, 3, 4 and 5

# CentOS 7.x: check whether the service starts at boot
systemctl  list-unit-files rsyslog.service

[root@elk188 ~]$ systemctl  list-unit-files rsyslog.service
UNIT FILE       STATE
rsyslog.service enabled
1 unit files listed.

二、Common log files and their formats

Log files are important system information files. They record many critical system events: user logins, system startup, security events, mail activity, and messages from the various services. Some of this information is sensitive, which is why on Linux these log files are readable only by root. So where are the system log files kept? Recall the /var/ directory, which holds dynamic system data: /var/log/ is where the system log files live. Table 1 describes the important log files on the system.

Log file            Description
/var/log/cron       Logs related to system scheduled tasks
/var/log/cups/      Printing logs
/var/log/dmesg      Kernel self-test messages from boot; also viewable directly with the dmesg command
/var/log/btmp       Failed login attempts. A binary file that cannot be read with vim; use the lastb command instead, e.g.: [root@elk188 ~]$ lastb -> root tty1 ... btmp begins Wed Oct 2 17:44:27 2024
/var/log/lastlog    The last login time of every user on the system. Also a binary file; view it with the lastlog command
/var/log/maillog    Mail logs
/var/log/messages   The core system log file: boot messages plus status messages from the running system. I/O errors, network errors and other system errors are recorded here, along with items such as identity switches to root and logs from user-installed software
/var/log/secure     Authentication and authorization messages: anything involving accounts and passwords is recorded here, e.g. system logins, ssh logins, su switches, sudo authorization, even adding users and changing passwords
/var/log/wtmp       A permanent record of all user logins and logouts plus system boots, reboots and shutdowns. Also a binary file; view it with the last command
/var/run/utmp       Information on currently logged-in users; it changes as users log in and out and only records current sessions. Also not viewable with vim; use w, who, users and similar commands

Besides the default system logs, services installed from RPM packages also write their logs under /var/log/ by default (services built from source log to whatever directory the source package specifies). These logs are not recorded or managed by rsyslogd; each service records its own logs with its own log management. The log directories below may not exist on your Linux box; they only appear once the corresponding service is installed. Service logs are shown in Table 2:

Log file           Description
/var/log/httpd/    Default log directory of an RPM-installed apache service
/var/log/mail/     Extra log directory of an RPM-installed mail service
/var/log/samba/    Log directory of an RPM-installed Samba service
/var/log/sssd/     Directory of the security services daemon

三、Analyzing the format of Linux log files

Every log file recorded by the rsyslogd service uses the same format, so once you understand the format you can easily read any of these log files.

A log entry consists of the following 4 columns:

the time the event occurred;
the hostname of the server that produced the event;
the name of the service or program that produced the event;
the details of the event.

Let's look at the /var/log/secure log, which mainly records authentication and authorization information and is quite easy to understand. The command:

[root@localhost ~]# vim /var/log/secure

Common nginx log-format variables explained

log_format json '{"@timestamp":"$time_iso8601",'
                '"scheme":"$scheme",'
                '"http_referer":"$http_referer",'
                '"args":"$args",'
                '"http_user_agent":"$http_user_agent",'
                '"remote_addr":"$remote_addr",'
                '"hosts":"$host",'
                '"server_name":"$server_name",'
                '"server_protocol":"$server_protocol",'
                '"request_method":"$request_method",'
                '"request_uri":"$request_uri",'
                '"uri":"$uri",'
                '"request_length":"$request_length",'
                '"body_byte_sent": "$body_bytes_sent",'
                '"request_time":"$request_time",'
                '"server_addr":"$server_addr",'
                '"status": $status,'
                '"bytes_sent":"$bytes_sent",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_status":"$upstream_status",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_response_time":"$upstream_response_time",'
                '"request_id":"$request_id"'
                '}';

You can also add these:
$request_filename: the file path of the current request, generated from the root or alias directive plus the request URI.
$http_cookie: client cookie information.
$http_host: the requested address, i.e. what you typed in the browser (IP or domain).
$server_port: the port the request arrived on.
$connection_requests: the number of requests made over the current connection.
$remote_addr: the client IP address that accessed the site.
$remote_user: the remote client user name (as verified by the Auth Basic Module).
$remote_port: the client's port.
$time_local: the access time and timezone.
$request: the request line of the request header.
$status: the HTTP status code returned, e.g. 200, 404, 301.
$body_bytes_sent: bytes of response body sent to the client, e.g. 899; summing this value over all records gives a rough estimate of server throughput.
$http_referer: the link this request came from; can be used for anti-hotlinking based on the referer.
$http_user_agent: client information, e.g. browser or mobile client.
$http_x_forwarded_for: the client's real IP. When the web server sits behind a reverse proxy, $remote_addr returns the proxy's IP; the reverse proxy can add an x_forwarded_for header to the forwarded request, recording the original client IP and the server address the client originally requested.
$ssl_protocol: the SSL protocol version.
$ssl_cipher: the cipher used in the exchange.
$upstream_status: the upstream status, e.g. 200 on success.
$upstream_addr: the backend device that actually served the request when nginx does load balancing.
$upstream_response_time: the upstream response time during the request.
$request_time: the total time of the whole request in seconds, with millisecond precision; measured from reading the client's first byte until the log is written after the last byte is sent to the client.
$args: the parameters in the request line, same as $query_string.
$content_length: the Content-Length request header.
$content_type: the Content-Type request header.
$document_root: the value of the root directive for the current request.
$host: the request Host header, otherwise the server name.
$limit_rate: the variable that can limit the connection rate.
$request_method: the client's request method, usually GET or POST.
$scheme: the scheme (http or https).
$server_protocol: the protocol of the request, usually HTTP/1.0 or HTTP/1.1.
$server_addr: the server address, determined after a system call completes.
$server_name: the server name.
$request_uri: the original URI including request parameters, without the hostname, e.g. "/foo/bar.php?arg=baz".
$uri: the current URI without request parameters or hostname, e.g. "/foo/bar.html".
$document_uri: same as $uri.
如何将nginx日志滚动一次,清除旧日志
[root@elk188 ~]$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.26.1
Date: Fri, 18 Oct 2024 07:02:08 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 29 May 2024 19:07:19 GMT
Connection: keep-alive
ETag: "66577ce7-267"
Accept-Ranges: bytes

[root@elk188 ~]$ cat /var/log/nginx/access.log
127.0.0.1 - - [17/Oct/2024:16:59:40 -0700] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
192.168.222.1 - - [17/Oct/2024:17:46:34 -0700] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0" "-"
192.168.222.1 - - [17/Oct/2024:17:46:34 -0700] "GET /favicon.ico HTTP/1.1" 404 153 "http://192.168.222.188/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0" "-"
192.168.222.1 - - [17/Oct/2024:17:47:54 -0700] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36" "-"
192.168.222.1 - - [17/Oct/2024:17:47:54 -0700] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.222.188/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36" "-"
192.168.222.1 - - [17/Oct/2024:17:50:15 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36" "-"
192.168.222.1 - - [17/Oct/2024:17:52:32 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36" "-"
192.168.222.1 - - [17/Oct/2024:17:53:13 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0" "-"
{"@timestamp":"2024-10-18T00:02:08-07:00","@source":"127.0.0.1","idc":"huzhou","http_cookie":"-","hostname":"elk188.longchi.xyz","ip":"-","client":"127.0.0.1","request_method":"HEAD","scheme":"http","domain":"localhost","referer":"-","request":"/","args":"-","size":0,"request_body":"-","status": 200,"responsetime":0.000,"upstreamtime":"-","upstreamaddr":"-","http_user_agent":"curl/7.29.0","https":""}

# 日志滚动一次,清除旧日志
[root@elk188 ~]$ >/var/log/nginx/access.log
[root@elk188 ~]$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.26.1
Date: Fri, 18 Oct 2024 07:08:01 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 29 May 2024 19:07:19 GMT
Connection: keep-alive
ETag: "66577ce7-267"
Accept-Ranges: bytes

[root@elk188 ~]$ cat /var/log/nginx/access.log
{"@timestamp":"2024-10-18T00:08:01-07:00","@source":"127.0.0.1","idc":"huzhou","http_cookie":"-","hostname":"elk188.longchi.xyz","ip":"-","client":"127.0.0.1","request_method":"HEAD","scheme":"http","domain":"localhost","referer":"-","request":"/","args":"-","size":0,"request_body":"-","status": 200,"responsetime":0.000,"upstreamtime":"-","upstreamaddr":"-","http_user_agent":"curl/7.29.0","https":""}

九,filebeat企业常见案例(EFK架构)

1,使用 filebeat 收集 nginx 日志到 es 集群
vim /etc/filebeat/config/02-nginx-to-es.yml
filebeat.inputs:
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log*
  json.keys_under_root: true
output.elasticsearch:
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-elk-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-linux-elk"
setup.template.pattern: "longchi-linux-elk*"
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0

启动 filebeat 实例
filebeat -e -c /etc/filebeat/config/02-nginx-to-es.yml
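filebeat 启动后,可以用 ES 的 _cat API 粗略验证索引是否已生成(示例命令,IP 按你的集群调整):
curl -s http://192.168.222.187:9200/_cat/indices?v | grep longchi-linux-elk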
2,自定义 nginx 日志格式及ES索引名称
(1) 修改 nginx 的源日志格式
vim /etc/nginx/nginx.conf
...
log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                              '"host": "$server_addr",'
                              '"clientip": "$remote_addr",'
                              '"size": "$body_bytes_sent",'
                              '"responsetime": "$request_time",'
                              '"upstreamtime": "$upstream_response_time",'
                              '"upstreamhost": "$upstream_addr",'
                              '"http_host": "$host",'
                              '"uri": "$uri",'
                              '"domain": "$host",'
                              '"xff": "$http_x_forwarded_for",'
                              '"referer": "$http_referer",'
                              '"tcp_xff": "$proxy_protocol_addr",'
                              '"http_user_agent": "$http_user_agent",'
                              '"status": "$status"}';
access_log /var/log/nginx/access.log longchi_nginx_json;

(2) 检查 nginx 的配置文件语法并重启 nginx 服务
nginx -t
systemctl restart nginx

(3) 编写 filebeat 配置文件
cat > /etc/filebeat/config/03-nginx-to-es.yml << 'EOF'
filebeat.inputs:
- type: filestream
  # 指定收集访问日志的路径
  paths:
    - /var/log/nginx/access.log
  # 配置解析器
  parsers:
    # 使 filebeat 能够解码结构化为 JSON 消息的日志。
    # filebeat 逐行处理日志,因此 JSON 解码仅在每条消息有一个JSON对象时才有效。
    - ndjson:
        # 对 message 字段进行JSON格式解析,并将 key 放在根字段。
        keys_under_root: true
output.elasticsearch:
  # 指定ES集群的列表
  hosts:
    - "http://192.168.222.187:9200"
    - "http://192.168.222.188:9200"
    - "http://192.168.222.189:9200"
  index: "oldboyedu-linux-elk-%{+yyyy.MM.dd}"
# 关闭索引的生命周期,若开启则上面的 index 配置会被无视;
setup.ilm.enabled: false
# 指定索引模板的名称,所谓索引模板就是创建索引的方式
setup.template.name: "oldboyedu-linux-elk"
# 指定索引目标的匹配模板
setup.template.pattern: "oldboyedu-linux-elk*"
EOF

(4) 启动filebeat实例
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/03-nginx-to-es.yml

(5) 在 kibana 查看数据
3,自定义 tomcat 日志格式并指定索引分片
"/longchi/softwares/apache-tomcat-10.0.20/logs/localhost_access_log.2024-10-19.txt"tomcat官网地址:
https://tomcat.apache.org/下载地址
https://archive.apache.org/dist/tomcat/tomcat-10/v10.0.20/bin/apache-tomcat-10.0.20.tar.gz(1) 部署 tomcat 服务1)解压 tomcat 软件包tar xf apache-tomcat-10.0.20.tar.gz -C /oldboyedu/softwares/(2) 创建符号链接
cd /oldboyedu/softwares/ && ln -sv apache-tomcat-10.0.20 tomcat(3) 配置环境变量
vim /etc/profile.d/elk.sh
...
export JAVA_HOME=/usr/share/elasticsearch/jdk
export TOMCAT_HOME=/longchi/softwares/tomcat
export PATH=$PATH:$TOMCAT_HOME/bin:$JAVA_HOME/bin

# 实操
[root@elk188 ~]$ cat /etc/profile.d/elk.sh
#!/bin/bash
export JAVA_HOME=/usr/share/elasticsearch/jdk
# export JAVA_HOME=/longchi/softwares/jdk
export TOMCAT_HOME=/longchi/softwares/tomcat
export PATH=$PATH:$TOMCAT_HOME/bin:$JAVA_HOME/bin

(4) 使得环境变量生效
source /etc/profile.d/elk.sh

(5) 备份配置文件
cp /oldboyedu/softwares/tomcat/conf/{server.xml,server.xml-`date +%F`}

(6) 修改配置文件
vim /oldboyedu/softwares/tomcat/conf/server.xml
...(切换到行尾修改,大概是在133-149之间)
修改 tomcat 配置文件,将原 <Host></Host> 替换为如下内容:
<Host name="tomcat.longchi.xyz"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">
  <!-- SingleSignOn valve, share authentication between web applications
       Documentation at: /docs/config/valve.html -->
  <!--
  <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
  -->
  <!-- Access log processes all example.
       Documentation at: /docs/config/valve.html
       Note: The pattern used is equivalent to using pattern="common" -->
  <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
         prefix="tomcat.longchi.xyz_access_log" suffix=".txt"
         pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
</Host>

7) 启动 tomcat 服务
catalina.sh start

8) 访问测试,并查看日志格式
tail -100f /longchi/softwares/tomcat/logs/tomcat.longchi.com_access_log.2022-04-19.txt

(2) 配置 filebeat 收集 tomcat 日志
cat > /etc/filebeat/config/04-tomcat-to-es.yml << 'EOF'
filebeat.inputs:
- type: filestream
  paths:
    - /longchi/softwares/tomcat/logs/tomcat.longchi.com_access_log*.txt
  json.keys_under_root: true
  parsers:
    - ndjson:
        keys_under_root: true
output.elasticsearch:
  hosts:
    - "http://10.0.0.101:9200"
    - "http://10.0.0.102:9200"
    - "http://10.0.0.103:9200"
  index: "longchi-tomcat-%{+yyyy.MM.dd}"
# 禁用索引生命周期管理
setup.ilm.enabled: false
# 设置索引模板的名称
setup.template.name: "longchi-linux"
# 设置索引模板的匹配模式
setup.template.pattern: "longchi-linux"
# 覆盖已有的索引模板,如果为true,则会直接覆盖现有的索引模板,如果为false则不覆盖!
setup.template.overwrite: true
# 配置索引模板
setup.template.settings:
  # 设置分片数量
  index.number_of_shards: 3
  # 设置副本数量,要求小于集群的数量
  index.number_of_replicas: 0
EOF
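写入数据后,可以通过 _settings 接口确认分片与副本数是否按模板生效(示例命令,索引名按实际日期替换):
curl -s "http://10.0.0.101:9200/longchi-tomcat-*/_settings?pretty" | egrep "number_of_(shards|replicas)"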
4, 多个input日志收集案例及filestream和log类型对比
多端发送数据logstash配置文件 vim config-logstash/10-many-to-es.conf
input {
  beats {
    port => 8888
  }
  redis {
    data_type => "list"
    db => 3
    host => "192.168.222.187"
    port => 6379
    password => "longchi"
    key => "longchi-linux-filebeat"
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}
5,使用 log 输入类型对tomcat错误日志进行多行匹配
(1)编写 filebeat 配置文件
cat > config/07-tomcat-error-to-es.yml <<'EOF'
filebeat.inputs:
- type: log
  paths:
    - /longchi/softwares/tomcat/logs/catalina.out
  multiline.type: pattern
  # 以两位数字开头的行视为新事件的起始
  multiline.pattern: '^\d{2}'
  multiline.negate: true
  multiline.match: after
output.elasticsearch:
  hosts:
    - "http://10.0.0.101:9200"
    - "http://10.0.0.102:9200"
    - "http://10.0.0.103:9200"
  index: "longchi-tomcat-error-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-tomcat-error"
setup.template.pattern: "longchi-tomcat-error*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
EOF

(2) 启动 filebeat 实例
filebeat test config -c config/07-tomcat-error-to-es.yml
filebeat -e -c config/07-tomcat-error-to-es.yml

(3) kibana 查看
6,使用 filestream 输入类型对tomcat错误日志进行多行匹配
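本节笔记原文为空,这里给出一个基于 filestream 输入的多行匹配配置草案(filebeat 7.17 的 parsers 语法,仅供参考,路径与 ES 地址沿用上例):
filebeat.inputs:
- type: filestream
  paths:
    - /longchi/softwares/tomcat/logs/catalina.out
  parsers:
    - multiline:
        type: pattern
        # 以两位数字开头的行视为新事件的起始,其余行合并到上一行之后
        pattern: '^\d{2}'
        negate: true
        match: after
output.elasticsearch:
  hosts:
    - "http://10.0.0.101:9200"
  index: "longchi-tomcat-error-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-tomcat-error"
setup.template.pattern: "longchi-tomcat-error*"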

7,模块使用案例
[root@elk188 ~]$ cat /etc/filebeat/config/11-nginx-to-es.yml
# encoding: utf-8
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  # 开启热加载功能
  reload.enabled: true
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"
# 关闭索引的生命周期,若开启则上面的 index 配置会被无视
setup.ilm.enabled: false
# 设置索引模板的名称,所谓索引模板就是创建索引的方式
setup.template.name: "longchi-linux"
# 设置索引模板的匹配模式
setup.template.pattern: "longchi-linux*"
# 覆盖已有的索引模板,如果为true,则会直接覆盖现有的索引模板,如果为false,则不会覆盖
setup.template.overwrite: true
# 配置索引模板
setup.template.settings:
  # 设置索引的分片数
  index.number_of_shards: 3
  # 设置索引的副本数;生产环境不允许将副本数设置为0,一般设置为1-3,副本数量要求小于集群数量
  index.number_of_replicas: 1
8,二进制安装 filebeat
(1) 准备安装包 filebeat-7.17.3-linux-x86_64.tar.gz
[root@elk188 ~]$ ll
total 637016
drwxr-xr-x 2 root root       310 Oct 11 04:08 config
-rw-r--r-- 1 root root 311873551 Oct  4 17:03 elasticsearch-7.17.3-x86_64.rpm
-rw-r--r-- 1 root root  36062198 Oct 11 17:12 filebeat-7.17.3-linux-x86_64.tar.gz
-rw-r--r-- 1 root root  36010044 Oct  5 17:32 filebeat-7.17.3-x86_64.rpm
-rw-r--r-- 1 root root 268348045 Oct  5 04:57 kibana-7.17.3-x86_64.rpm

(2) 解压到指定目录 /longchi/softwares/
tar xf filebeat-7.17.3-linux-x86_64.tar.gz -C /longchi/softwares/

[root@elk188 ~]$ tar xf filebeat-7.17.3-linux-x86_64.tar.gz -C /longchi/softwares/
[root@elk188 ~]$ ls /longchi/softwares/
filebeat-7.17.3-linux-x86_64  jdk  jdk1.8.0_321

(3) 创建软链接
[root@elk188 ~]$ cd /longchi/softwares/
[root@elk188 /longchi/softwares]$ ll
total 0
drwxr-xr-x 5 root  root  212 Oct 11 17:34 filebeat-7.17.3-linux-x86_64
lrwxrwxrwx 1 root  root   12 Oct  3 17:17 jdk -> jdk1.8.0_321
drwxr-xr-x 8 10143 10143 273 Dec 15  2021 jdk1.8.0_321

[root@elk188 /longchi/softwares]$ ln -sv filebeat-7.17.3-linux-x86_64 filebeat
‘filebeat’ -> ‘filebeat-7.17.3-linux-x86_64’
[root@elk188 /longchi/softwares]$ ll
total 0
lrwxrwxrwx 1 root  root   28 Oct 11 17:39 filebeat -> filebeat-7.17.3-linux-x86_64
drwxr-xr-x 5 root  root  212 Oct 11 17:34 filebeat-7.17.3-linux-x86_64
lrwxrwxrwx 1 root  root   12 Oct  3 17:17 jdk -> jdk1.8.0_321
drwxr-xr-x 8 10143 10143 273 Dec 15  2021 jdk1.8.0_321
[root@elk188 /longchi/softwares]$ cd filebeat
[root@elk188 /longchi/softwares/filebeat]$ ll
total 128552
-rw-r--r--  1 root root   3780088 Apr 19  2022 fields.yml
-rwxr-xr-x  1 root root 125650688 Apr 19  2022 filebeat
-rw-r--r--  1 root root    170239 Apr 19  2022 filebeat.reference.yml
-rw-------  1 root root      8273 Apr 19  2022 filebeat.yml
drwxr-xr-x  3 root root        15 Apr 19  2022 kibana
-rw-r--r--  1 root root     13675 Apr 19  2022 LICENSE.txt
drwxr-xr-x 76 root root      4096 Apr 19  2022 module
drwxr-xr-x  2 root root      4096 Apr 19  2022 modules.d
-rw-r--r--  1 root root   1987715 Apr 19  2022 NOTICE.txt
-rw-r--r--  1 root root       814 Apr 19  2022 README.md

[root@elk188 /longchi/softwares/filebeat]$ egrep -v "^*#|^$" filebeat.yml
filebeat.inputs:
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# 查看有哪些模块
[root@elk188 ~]$ mv config/ /longchi/softwares/filebeat/
[root@elk188 /longchi/softwares/filebeat]$ ./filebeat -c config/11.nginx_log_to_es.yml modules list
Enabled:

Disabled:
activemq
apache
auditd
aws
awsfargate
azure
barracuda
bluecoat
cef
checkpoint
cisco
coredns
crowdstrike
cyberark
cyberarkpas
cylance
elasticsearch
envoyproxy
f5
fortinet
gcp
google_workspace
googlecloud
gsuite
haproxy
ibmmq
icinga
iis
imperva
infoblox
iptables
juniper
kafka
kibana
logstash
microsoft
misp
mongodb
mssql
mysql
mysqlenterprise
nats
netflow
netscout
nginx
o365
okta
oracle
osquery
panw
pensando
postgresql
proofpoint
rabbitmq
radware
redis
santa
snort
snyk
sonicwall
sophos
squid
suricata
system
threatintel
tomcat
traefik
zeek
zookeeper
zoom
zscaler
9,查看filebeat有哪些模块服务
(1)查看 filebeat 有哪些模块服务
filebeat -c /etc/filebeat/config/11.nginx_log_to_es.yml modules list

(2) 启用模块命令,以 nginx tomcat 为例
filebeat -c /etc/filebeat/config/11.nginx_log_to_es.yml modules enable nginx tomcat

(3) 禁用模块命令,以 nginx tomcat 为例
filebeat -c /etc/filebeat/config/11.nginx_log_to_es.yml modules disable nginx tomcat

[root@elk188 ~]$ ls /usr/share/filebeat/module/
activemq    barracuda   crowdstrike    fortinet          icinga    kibana     mysqlenterprise  oracle      radware    squid        zeek
apache      bluecoat    cyberark       gcp               iis       logstash   nats             osquery     redis      suricata     zookeeper
apache2     cef         cyberarkpas    googlecloud       imperva   microsoft  netflow          panw        santa      symantec     zoom
auditd      checkpoint  cylance        google_workspace  infoblox  misp       netscout         pensando    snort      system       zscaler
aws         cisco       elasticsearch  gsuite            iptables  mongodb    nginx            postgresql  snyk       threatintel
awsfargate  citrix      envoyproxy     haproxy           juniper   mssql      o365             proofpoint  sonicwall  tomcat
azure       coredns     f5             ibmmq             kafka     mysql      okta             rabbitmq    sophos     traefik

[root@elk188 ~]$ ls /etc/filebeat/modules.d/
activemq.yml.disabled    crowdstrike.yml.disabled       haproxy.yml.disabled    misp.yml.disabled             osquery.yml.disabled     sophos.yml.disabled
apache.yml.disabled      cyberarkpas.yml.disabled       ibmmq.yml.disabled      mongodb.yml.disabled          panw.yml.disabled        squid.yml.disabled
auditd.yml.disabled      cyberark.yml.disabled          icinga.yml.disabled     mssql.yml.disabled            pensando.yml.disabled    suricata.yml.disabled
awsfargate.yml.disabled  cylance.yml.disabled           iis.yml.disabled        mysqlenterprise.yml.disabled  postgresql.yml.disabled  system.yml.disabled
aws.yml.disabled         elasticsearch.yml.disabled     imperva.yml.disabled    mysql.yml.disabled            proofpoint.yml.disabled  threatintel.yml.disabled
azure.yml.disabled       envoyproxy.yml.disabled        infoblox.yml.disabled   nats.yml.disabled             rabbitmq.yml.disabled    tomcat.yml.disabled
barracuda.yml.disabled   f5.yml.disabled                iptables.yml.disabled   netflow.yml.disabled          radware.yml.disabled     traefik.yml.disabled
bluecoat.yml.disabled    fortinet.yml.disabled          juniper.yml.disabled    netscout.yml.disabled         redis.yml.disabled       zeek.yml.disabled
cef.yml.disabled         gcp.yml.disabled               kafka.yml.disabled      nginx.yml.disabled            santa.yml.disabled       zookeeper.yml.disabled
checkpoint.yml.disabled  googlecloud.yml.disabled       kibana.yml.disabled     o365.yml.disabled             snort.yml.disabled       zoom.yml.disabled
cisco.yml.disabled       google_workspace.yml.disabled  logstash.yml.disabled   okta.yml.disabled             snyk.yml.disabled        zscaler.yml.disabled
coredns.yml.disabled     gsuite.yml.disabled            microsoft.yml.disabled  oracle.yml.disabled           sonicwall.yml.disabled

[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11.nginx_log_to_es.yml modules list
Enabled:

Disabled:
activemq
apache
...(其余模块列表与上文 modules list 的输出完全相同,略)
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/config/11.nginx_log_to_es.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.elasticsearch:
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-linux"
setup.template.pattern: "longchi-linux*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 2
10,filebeat.config.modules配置
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

[root@elk188 ~]$ ls /etc/filebeat/modules.d/
(输出与上文 ls /etc/filebeat/modules.d/ 的列表相同,全部为 .disabled 状态,略)

[root@elk188 ~]$ systemctl cat filebeat
# /usr/lib/systemd/system/filebeat.service
[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/beats/filebeat
Wants=network-online.target
After=network-online.target[Service]Environment="GODEBUG='madvdontneed=1'"
Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat/filebeat.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat"
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target
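如果希望 systemd 托管的 filebeat 使用自定义配置文件,可以用 drop-in 的方式覆盖上面的 BEAT_CONFIG_OPTS 变量(示例路径为假设,按实际调整):
systemctl edit filebeat
# 在弹出的 override.conf 中写入:
# [Service]
# Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat/config/11-nginx-to-es.yml"
systemctl daemon-reload
systemctl restart filebeat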
11,filebeat 收集 nginx 的 json 格式日志:三种解决方案(如图所示)
1)在数据源端解决
(1) 修改nginx配置文件
[root@elk188 ~]$ vim /etc/nginx/nginx.conf
[root@elk188 ~]$ cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

#    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
#                      '$status $body_bytes_sent "$http_referer" '
#                      '"$http_user_agent" "$http_x_forwarded_for"';
#    access_log  /var/log/nginx/access.log  main;

    log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
                                  '"host": "$server_addr",'
                                  '"clientip": "$remote_addr",'
                                  '"size": "$body_bytes_sent",'
                                  '"responsetime": "$request_time",'
                                  '"upstreamtime": "$upstream_response_time",'
                                  '"upstreamhost": "$upstream_addr",'
                                  '"http_host": "$host",'
                                  '"uri": "$uri",'
                                  '"domain": "$host",'
                                  '"xff": "$http_x_forwarded_for",'
                                  '"referer": "$http_referer",'
                                  '"tcp_xff": "$proxy_protocol_addr",'
                                  '"http_user_agent": "$http_user_agent",'
                                  '"status": "$status"}';
    access_log /var/log/nginx/access.log longchi_nginx_json;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}

[root@elk188 ~]$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk188 ~]$ systemctl restart nginx
[root@elk188 ~]$ systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2024-10-17 19:11:41 PDT; 13s ago
     Docs: http://nginx.org/en/docs/
  Process: 6113 ExecStop=/bin/sh -c /bin/kill -s TERM $(/bin/cat /var/run/nginx.pid) (code=exited, status=0/SUCCESS)
  Process: 6119 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 6120 (nginx)
    Tasks: 3
   CGroup: /system.slice/nginx.service
           ├─6120 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           ├─6121 nginx: worker process
           └─6122 nginx: worker process

Oct 17 19:11:41 elk188.longchi.xyz systemd[1]: Stopped nginx - high performance web server.
Oct 17 19:11:41 elk188.longchi.xyz systemd[1]: Starting nginx - high performance web server...
Oct 17 19:11:41 elk188.longchi.xyz systemd[1]: Started nginx - high performance web server.

[root@elk188 ~]$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.26.1
Date: Fri, 18 Oct 2024 02:16:01 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 29 May 2024 19:07:19 GMT
Connection: keep-alive
ETag: "66577ce7-267"
Accept-Ranges: bytes

(2) 修改 filebeat 配置文件
[root@elk188 ~]$ cat /etc/filebeat/config/10-nginx-to-es.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  # 是否启动当前的输入类型,默认值为 true
  enabled: true
  # 指定数据路径
  paths:
    - /var/log/nginx/access.log*
  # 给当前的输入类型打上标签
  tags: ["access"]
  parsers:
    - ndjson:
        json.keys_under_root: true
        json.overwrite_keys: true
        json.add_error_key: true
        json.message_key: true
# 字符行是json格式,如下配置
# json 所有的key 是否在顶级key(json)下
#  json.keys_under_root: true
# 如果外部存在key,是否覆盖
#  json.overwrite_keys: true
# 是否添加错误key,如解析出错,会添加解析错误信息
#  json.add_error_key: true
# 添加message 的key
#  json.message_key: log
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"
# 关闭索引的生命周期,若开启则上面的 index 配置会被无视
setup.ilm.enabled: false
# 设置索引模板的名称,所谓索引模板就是创建索引的方式
setup.template.name: "longchi-linux"
# 设置索引模板的匹配模式
setup.template.pattern: "longchi-linux*"
# 覆盖已有的索引模板,如果为true,则会直接覆盖现有的索引模板,如果为false,则不会覆盖
setup.template.overwrite: true
# 配置索引模板
setup.template.settings:
  # 设置索引的分片数
  index.number_of_shards: 3
  # 设置索引的副本数;生产环境不允许将副本数设置为0,一般设置为1-3,副本数量要求小于集群数量
  index.number_of_replicas: 1

(3) 启动 filebeat 实例
rm -rf /var/lib/filebeat/*
filebeat -e -c /etc/filebeat/config/10-nginx-to-es.yml
2)借助filebeat的module模块解决
(1)修改 nginx 配置文件 恢复源文件配置
[root@elk188 ~]$ vim /etc/nginx/nginx.conf
[root@elk188 ~]$ cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;

#    log_format longchi_nginx_json '{"@timestamp": "$time_iso8601",'
#                          '"host": "$server_addr",'
#                          '"clientip": "$remote_addr",'
#                          '"size": "$body_bytes_sent",'
#                          '"responsetime": "$request_time",'
#                          '"upstreamtime": "$upstream_response_time",'
#                          '"upstreamhost": "$upstream_addr",'
#                          '"http_host": "$host",'
#                          '"uri": "$uri",'
#                          '"domain": "$host",'
#                          '"xff": "$http_x_forwarded_for",'
#                          '"referer": "$http_referer",'
#                          '"tcp_xff": "$proxy_protocol_addr",'
#                          '"http_user_agent": "$http_user_agent",'
#                          '"status": "$status"}';
#
#    access_log /var/log/nginx/access.log longchi_nginx_json;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}

[root@elk188 ~]$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk188 ~]$ systemctl restart nginx
[root@elk188 ~]$ ss -ntl
[root@elk188 ~]$ ll /var/log/nginx/
total 16
-rw-r----- 1 nginx adm 1255 Oct 18 18:48 access.log
-rw-r----- 1 nginx adm 8416 Oct 18 19:18 error.log
[root@elk188 ~]$ cat /var/log/nginx/access.log
[root@elk188 ~]$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.26.1
Date: Sat, 19 Oct 2024 02:20:53 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 29 May 2024 19:07:19 GMT
Connection: keep-alive
ETag: "66577ce7-267"
Accept-Ranges: bytes

# 滚动 '> /var/log/nginx/access.log' nginx 访问日志
[root@elk188 ~]$ > /var/log/nginx/access.log
[root@elk188 ~]$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.26.1
Date: Sat, 19 Oct 2024 02:43:33 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 29 May 2024 19:07:19 GMT
Connection: keep-alive
ETag: "66577ce7-267"
Accept-Ranges: bytes

[root@elk188 ~]$ cat /var/log/nginx/access.log
127.0.0.1 - - [18/Oct/2024:19:43:33 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/filebeat.yml-2024-10-15
filebeat.inputs:
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

(2) 配置filebeat的模块输入
[root@elk188 ~]$ cp /etc/filebeat/config/10-nginx-to-es.yml /etc/filebeat/config/11-nginx-to-es.yml
[root@elk188 ~]$ vim /etc/filebeat/config/11-nginx-to-es.yml
[root@elk188 ~]$ cat /etc/filebeat/config/11-nginx-to-es.yml
# encoding: utf-8
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"
# 关闭索引的生命周期,若开启则上面的 index 配置会被无视
setup.ilm.enabled: false
# 设置索引模板的名称,所谓索引模板就是创建索引的方式
setup.template.name: "longchi-linux"
# 设置索引模板的匹配模式
setup.template.pattern: "longchi-linux*"
# 覆盖已有的索引模板,如果为true,则会直接覆盖现有的索引模板,如果为false,则不会覆盖
setup.template.overwrite: true
# 配置索引模板
setup.template.settings:
  # 设置索引的分片数
  index.number_of_shards: 3
  # 设置索引的副本数;生产环境不允许将副本数设置为0,一般设置为1-3,副本数量要求小于集群数量
  index.number_of_replicas: 1

[root@elk188 ~]$ filebeat modules list
Error in modules manager: modules management requires 'filebeat.config.modules.path' setting

[root@elk188 ~]$ filebeat modules list -c /etc/filebeat/config/11-nginx-to-es.yml
Enabled:

Disabled:
activemq
apache
...

[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list
Enabled:

Disabled:
activemq
apache
...
解压到指定目录 /longchi/softwares/
[root@elk188 ~]$ tar xf filebeat-7.17.3-linux-x86_64.tar.gz -C /longchi/softwares/

[root@elk188 /longchi/softwares/filebeat]$ egrep -v "^*#|^$" /longchi/softwares/filebeat/filebeat.yml
filebeat.inputs:
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
[root@elk188 /longchi/softwares/filebeat]$ ./filebeat modules list
Enabled:

Disabled:
activemq
apache
...

[root@elk188 /longchi/softwares/filebeat]$ find / -name modules.d
/etc/filebeat/modules.d
/usr/lib/dracut/modules.d
/longchi/softwares/filebeat-7.17.3-linux-x86_64/modules.d

[root@elk188 ~]$ cat /etc/filebeat/config/11-nginx-to-es.yml
# encoding: utf-8
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  # 开启热加载功能
  reload.enabled: true
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.222.187:9200","http://192.168.222.188:9200","http://192.168.222.189:9200"]
  index: "longchi-linux-nginx-access-%{+yyyy.MM.dd}"
# 关闭索引的生命周期,若开启则上面的 index 配置会被无视
setup.ilm.enabled: false
# 设置索引模板的名称,所谓索引模板就是创建索引的方式
setup.template.name: "longchi-linux"
# 设置索引模板的匹配模式
setup.template.pattern: "longchi-linux*"
# 覆盖已有的索引模板,如果为true,则会直接覆盖现有的索引模板,如果为false,则不会覆盖
setup.template.overwrite: true
# 配置索引模板
setup.template.settings:
  # 设置索引的分片数
  index.number_of_shards: 3
  # 设置索引的副本数;生产环境不允许将副本数设置为0,一般设置为1-3,副本数量要求小于集群数量
  index.number_of_replicas: 1

[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list
Enabled:

Disabled:
activemq
apache
...

# 启用 nginx tomcat 模块
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules enable nginx tomcat
Enabled nginx
Enabled tomcat

# 查看模块的启用与禁用
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list
Enabled:
nginx
tomcat

Disabled:
activemq
apache
...# 禁用 nginx
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules disable nginx
Disabled nginx

# 查看禁用情况
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list
Enabled:
tomcat

Disabled:
activemq
apache
...

# 另一种启用或禁用 nginx 的方式,如下:
[root@elk188 ~]$ mv /etc/filebeat/modules.d/nginx.yml.disabled /etc/filebeat/modules.d/nginx.yml
[root@elk188 ~]$ filebeat -c /etc/filebeat/config/11-nginx-to-es.yml modules list | head
Enabled:
nginx
tomcat

Disabled:
activemq
apache
auditd
aws
awsfargate
[root@elk188 ~]$ vim /etc/filebeat/modules.d/nginx.yml
[root@elk188 ~]$ egrep -v "^*#|^$" /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: false
    var.paths: ["/var/log/nginx/error.log*"]
  ingress_controller:
    enabled: false

(3) 启动 filebeat 的 nginx 模块实例
[root@elk188 ~]$ rm -rf /var/lib/filebeat/*
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/11-nginx-to-es.yml

3)引入组件 logstash 解决(第三种解决日志方案,如上图所示)
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.3-linux-x86_64.tar.gz

yum -y localinstall logstash-7.17.3-x86_64.rpm
ln -sv /usr/share/logstash/bin/logstash /usr/local/bin/

1,复原nginx的日志配置
vim /etc/nginx/nginx.conf
...
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
access_log  /var/log/nginx/access.log  main;
...

nginx -t  验证配置文件正确与否
[root@elk188 ~]$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk188 ~]$ systemctl restart nginx
[root@elk188 ~]$ ss -ntl
State       Recv-Q Send-Q   Local Address:Port      Peer Address:Port
LISTEN      0      128       *:111                         *:*
LISTEN      0      128       *:80

清空访问日志 '>/var/log/nginx/access.log'
[root@elk188 ~]$ cat /var/log/nginx/access.log
192.168.222.188 - - [29/Oct/2024:18:23:31 -0700] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
192.168.222.187 - - [29/Oct/2024:18:24:21 -0700] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
192.168.222.189 - - [29/Oct/2024:18:24:28 -0700] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"

grok 官网地址
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
Logstash 附带大约 120 种模式
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
构建模式以匹配您的日志
http://grokdebug.herokuapp.com/
http://grokconstructor.appspot.com/
grok 支持正则表达式规则网站
https://github.com/kkos/oniguruma/blob/master/doc/RE

1,logstash 配置:
[root@elk187 ~]$ vim config-logstash/14-beats-grok-es.conf
[root@elk187 ~]$ cat config-logstash/14-beats-grok-es.conf
# encoding: utf-8
input {
  beats {
    port => 8888
  }
}
filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}

启动 logstash 实例
logstash -rf config-logstash/14-beats-grok-es.conf

2,filebeat配置
[root@elk188 ~]$ vim /etc/filebeat/config/34-nginx-to-logatash.yml
[root@elk188 ~]$ cat /etc/filebeat/config/34-nginx-to-logatash.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["192.168.222.187:8888"]

启动 filebeat 实例
filebeat -e -c /etc/filebeat/config/34-nginx-to-logatash.yml
logstash中的filter插件中的grok官方地址
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

logstash 企业级插件案例(ELFK架构)

1,常见的插件概述
Grok 过滤器插件
解析任意文本并将其结构化。
Grok 是将非结构化日志数据解析为结构化且可查询的数据的好方法
该工具非常适合 syslog 日志、apache 和其他 Web 服务器日志、mysql 日志,以及通常为人类而不是计算机使用编写的任何日志格式。
默认情况下,Logstash 附带大约 120 种模式。您可以在此处找到它们:https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns。你可以简单地添加你自己的。
如果您需要帮助构建模式以匹配您的日志,您会发现 http://grokdebug.herokuapp.com 和 http://grokconstructor.appspot.com/ 应用程序非常有用!

dissect 过滤器插件是使用分隔符将非结构化事件数据提取到字段中的另一种方法。
Dissect 与 Grok 的不同之处在于它不使用正则表达式并且速度更快。当数据可靠重复时,Dissect 效果很好。当文本的结构因行而异时,Grok 是更好的选择。
对于混合使用案例,您可以同时使用 Dissect 和 Grok:当行的某一部分可靠地重复,但整行不是时,Dissect 过滤器可以解构重复的行部分,Grok 过滤器可以凭借更强的正则表达式能力处理剩余的字段值。

Grok 的工作原理是将文本模式组合成与您的日志匹配的内容。
grok 模式的语法是 %{SYNTAX:SEMANTIC}
SYNTAX 是将与文本匹配的模式的名称。例如,3.44 将与 NUMBER 模式匹配,而 55.3.244.1 将与 IP 模式匹配。语法就是您的匹配方式。
SEMANTIC 是您为要匹配的文本段提供的标识符。例如,3.44 可以是事件的持续时间,因此您可以简单地将其称为持续时间。此外,字符串 55.3.244.1 可能会标识发出请求的客户端。
对于上面的示例,您的 grok 过滤器将如下所示:
%{NUMBER:duration} %{IP:client}
(可选)可以将数据类型转换添加到 grok 模式。默认情况下,所有语义都保存为字符串。如果您希望转换语义的数据类型,例如,将字符串更改为整数,然后使用目标数据类型为其添加后缀。例如,%{NUMBER:num:int} 将 num 语义从字符串转换为整数。目前唯一支持的转换是 int 和 float。
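按照上面的类型转换语法,可以写一个最小的验证片段(stdin 输入,stdout 输出,仅为演示):
input {
  stdin {}
}
filter {
  grok {
    # bytes 转换为整数,duration 转换为浮点数
    match => { "message" => "%{NUMBER:bytes:int} %{NUMBER:duration:float}" }
  }
}
output {
  stdout {}
}
输入 "15824 0.043" 后,输出事件中的 bytes 为整数 15824,duration 为浮点数 0.043,而不再是字符串。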
例子:有了语法和语义的这个想法,我们可以从示例日志中提取有用的字段,例如这条虚构的 http 请求日志:
55.3.244.1 GET /index.html 15824 0.043

其模式可以是:
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}

一个更实际的例子,让我们从一个文件中读取这些日志:
input {
  file {
    path => "/var/log/http.log"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

老师的案例:
filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
  }
}
2,使用grok内置的正则案例
logstash 配置
[root@elk187 ~]$ vim config-logstash/14-beats-grok-es.conf
[root@elk187 ~]$ cat config-logstash/14-beats-grok-es.conf
# encoding: utf-8
input {
  beats {
    port => 8888
  }
}
filter {
  grok {
    match => {
      # "message" => "%{COMBINEDAPACHELOG}"
      # 上面的变量官方github上已经废弃,建议使用下面的匹配模式
      # https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/legacy/  到这里来找匹配模式
      "message" => "%{HTTPD_COMMONLOG}"
    }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}

启动 logstash 实例
logstash -rf config-logstash/14-beats-grok-es.conf
3,使用grok内置正则并自定义字段名案例
在187机器配置logstash
[root@elk187 ~]$ cp config-logstash/14-beats-grok-es.conf config-logstash/15-stdin-grok-stdout.conf
[root@elk187 ~]$ vim config-logstash/15-stdin-grok-stdout.conf
[root@elk187 ~]$ vim config-logstash/15-stdin-grok-stdout.conf
[root@elk187 ~]$ cat config-logstash/15-stdin-grok-stdout.conf
# encoding: utf-8
input {
  stdin {}
}
filter {
  grok {
    match => {
      "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
}
output {
  stdout {}
}

启动 logstash 实例
logstash -rf config-logstash/15-stdin-grok-stdout.conf
The stdin plugin is now waiting for input:
[INFO ] 2024-10-30 17:56:09.980 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
输入:55.3.244.1 GET /index.html 15824 0.043 
打印如下内容:
{"@timestamp" => 2024-10-31T00:57:00.559Z,"host" => "elk187.longchi.xyz","message" => "55.3.244.1 GET /index.html 15824 0.043","bytes" => "15824","@version" => "1","method" => "GET","request" => "/index.html","duration" => "0.043","client" => "55.3.244.1"
}-----------
配置内容修改如下:
[root@elk187 ~]$ cat config-logstash/15-stdin-grok-stdout.conf
# encoding: utf-8
input {
  stdin {}
}
filter {
  grok {
    match => {
      "message" => "%{IP:longchi-client} %{WORD:longchi-method} %{URIPATHPARAM:longchi-request} %{NUMBER:longchi-bytes} %{NUMBER:longchi-duration}"
    }
  }
}
output {
  stdout {}
}

启动 logstash 实例
[root@elk187 ~]$ logstash -rf config-logstash/15-stdin-grok-stdout.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
...
The stdin plugin is now waiting for input:
[INFO ] 2024-10-30 18:17:07.655 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
输入:55.3.244.1 GET /index.html 15824 0.043 
打印如下内容:
{"longchi-bytes" => "15824","longchi-method" => "GET","longchi-duration" => "0.043","@timestamp" => 2024-10-31T01:17:56.433Z,"longchi-client" => "55.3.244.1","@version" => "1","message" => "55.3.244.1 GET /index.html 15824 0.043","host" => "elk187.longchi.xyz","longchi-request" => "/index.html"
}
注意:我们做一个约定:凡是提到 'longchi' 字段的地方,都是可以自行修改的。

输入:192.168.222.189 POST /longchi.html 1000 0.02
返回如下内容:
{"longchi-bytes" => "1000","longchi-method" => "POST","longchi-duration" => "0.02","@timestamp" => 2024-10-31T01:30:09.487Z,"longchi-client" => "192.168.222.189","@version" => "1","message" => "192.168.222.189 POST /longchi.html 1000 0.02","host" => "elk187.longchi.xyz","longchi-request" => "/longchi.html"
}官方参考地址:https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/legacy/
4, 使用grok自定义的正则案例
官方参考地址:
https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/legacy/
# oniguruma 库
https://github.com/kkos/oniguruma/blob/master/doc/RE

[root@elk187 ~]$ cp config-logstash/15-stdin-grok-stdout.conf config-logstash/16-stdin-grok-custom-patterns-stdout.conf
[root@elk187 ~]$ vim config-logstash/16-stdin-grok-custom-patterns-stdout.conf
[root@elk187 ~]$ cat config-logstash/16-stdin-grok-custom-patterns-stdout.conf
# encoding: utf-8
input {
  stdin {}
}
filter {
  grok {
    match => {
      "message" => "%{IP:longchi-client} %{WORD:longchi-method} %{URIPATHPARAM:longchi-request} %{NUMBER:longchi-bytes} %{NUMBER:longchi-duration}"
    }
  }
  grok {
    # 指定匹配模式的目录,可以使用绝对路径
    patterns_dir => ["./patterns"]
    # 匹配模式
    match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
  }
}
output {
  stdout {}
}

[root@elk187 ~]$ mkdir patterns
[root@elk187 ~]$ echo "POSTFIX_QUEUEID [0-9A-F]{10,11}" >> patterns/postfix
[root@elk187 ~]$ cat patterns/postfix
POSTFIX_QUEUEID [0-9A-F]{10,11}

[root@elk187 ~]$ logstash -rf config-logstash/16-stdin-grok-custom-patterns-stdout.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
...
The stdin plugin is now waiting for input:
[INFO ] 2024-10-30 23:37:59.642 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

输入:Jan  1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
{"@version" => "1","timestamp" => "Jan  1 06:25:43","message" => "    Jan  1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>","syslog_message" => "message-id=<20130101142543.5828399CCAF@mailserver14.example.com>","tags" => [[0] "_grokparsefailure"],"pid" => "21403","logsource" => "mailserver14","@timestamp" => 2024-10-31T06:38:11.094Z,"program" => "postfix/cleanup","queue_id" => "BEF25A72965","host" => "elk187.longchi.xyz"}

追加2:
[root@elk187 ~]$ cat config-logstash/16-stdin-grok-custom-patterns-stdout.conf
# encoding: utf-8
input {
  stdin {}
}
filter {
  grok {
    # 指定匹配模式的目录,可以使用绝对路径
    patterns_dir => ["./patterns"]
    # 匹配模式
    # 测试数据:    Jan  1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
    # match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
    # 测试数据:12345678910 ---> 333  或者这个测试数据: ABCDE12345678910 ---> 333EHGH 都可以
    match => { "message" => "%{POSTFIX_QUEUEID:longchi_queue_id} ---> %{LONGCHI_LINUX:longchi-linux-elk}" }
  }
}
output {
  stdout {}
}

启动实例
[root@elk187 ~]$ logstash -rf config-logstash/16-stdin-grok-custom-patterns-stdout.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
...
The stdin plugin is now waiting for input:
[INFO ] 2024-10-31 00:28:52.555 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
输入:12345678910 ---> 333  # 匹配成功
输出如下数据:
{"longchi_queue_id" => "12345678910","@timestamp" => 2024-10-31T07:31:50.854Z,"longchi-linux-elk" => "333","message" => "12345678910 ---> 333","@version" => "1","host" => "elk187.longchi.xyz"
}
12345678990:323		# 匹配失败
{"host" => "elk187.longchi.xyz","@timestamp" => 2024-10-31T07:42:33.107Z,"@version" => "1","tags" => [[0] "_grokparsefailure"],"message" => "12345678990:323"
}
12345678990 ---> 323  # 匹配成功
{"longchi_queue_id" => "12345678990","@timestamp" => 2024-10-31T07:43:53.335Z,"longchi-linux-elk" => "323","message" => "12345678990 ---> 323","@version" => "1","host" => "elk187.longchi.xyz"
}
11234567890 ---> 44444444  # 匹配成功
{"longchi_queue_id" => "11234567890","@timestamp" => 2024-10-31T07:46:49.412Z,"longchi-linux-elk" => "444","message" => "11234567890 ---> 44444444","@version" => "1","host" => "elk187.longchi.xyz"
}AAAAAAAA12345678910 ---> 333BBBBBBBB  # 匹配成功
{"longchi_queue_id" => "12345678910","@timestamp" => 2024-10-31T07:50:28.011Z,"longchi-linux-elk" => "333","message" => "AAAAAAAA12345678910 ---> 333BBBBBBBB","@version" => "1","host" => "elk187.longchi.xyz"
}
ABCDE12345678910 ---> 333EHGH # 匹配成功
{"longchi_queue_id" => "12345678910","@timestamp" => 2024-10-31T07:53:02.415Z,"longchi-linux-elk" => "333","message" => "ABCDE12345678910 ---> 333EHGH","@version" => "1","host" => "elk187.longchi.xyz"
}注意: '[0] "_grokparsefailure"' 表示匹配失败# 最终版
[root@elk187 ~]$ cat config-logstash/16-stdin-grok-custom-patterns-stdout.conf
# encoding: utf-8
input {
  stdin {}
}
filter {
  grok {
    # 指定匹配模式的目录,可以使用绝对路径
    # 在 ./patterns 目录下随便创建一个文件,并写入以下匹配模式:
    # POSTFIX_QUEUEID [0-9A-F]{10,11}
    # LONGCHI_LINUX [\d]{3}
    patterns_dir => ["./patterns"]
    # 匹配模式
    # 测试数据:    Jan  1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
    # match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
    # 测试数据:12345678910 ---> 333  或者这个测试数据: ABCDE12345678910 ---> 333EHGH 都可以
    match => { "message" => "%{POSTFIX_QUEUEID:longchi_queue_id} ---> %{LONGCHI_LINUX:longchi-linux-elk}" }
  }
}
output {
  stdout {}
}
5,移除日志某些字段,可以节省大量的磁盘空间
1,logstash配置
[root@elk187 ~]$ cat config-logstash/17-beats-grok-es.conf
# encoding: utf-8
input {
  beats {
    port => 8888
  }
}
filter {
  grok {
    match => {
      # "message" => "%{COMBINEDAPACHELOG}"
      # 上面的变量官方github上已经废弃,建议使用下面的匹配模式
      # https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/legacy/  到这里来找匹配模式
      "message" => "%{HTTPD_COMMONLOG}"
    }
    remove_field => [ "@version","ecs","tags","agent","input","log","host" ]
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}

启动 logstash 实例
logstash -rf config-logstash/17-beats-grok-es.conf

2,filebeat 配置
[root@elk188 ~]$ cat /etc/filebeat/config/34-nginx-to-logatash.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["192.168.222.187:8888"]

启动 filebeat 实例
filebeat -e -c /etc/filebeat/config/34-nginx-to-logatash.yml
6,grok通用字段的添加与移除案例 如下图所示

logstash配置
[root@elk187 ~]$ vim config-logstash/17-beats-grok-es.conf
[root@elk187 ~]$ cat config-logstash/17-beats-grok-es.conf
# encoding: utf-8
input {
  beats {
    port => 8888
  }
}
filter {
  grok {
    match => {
      # "message" => "%{COMBINEDAPACHELOG}"
      # 上面的变量官方github上已经废弃,建议使用下面的匹配模式
      # https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/legacy/  到这里来找匹配模式
      "message" => "%{HTTPD_COMMONLOG}"
    }
    # 添加指定的字段
    add_field => {
      "school" => "北京市昌平区沙河镇老男孩IT教育"
      "longchi-clientip" => "clientip ---> %{clientip}"
    }
    # 移除指定的字段
    remove_field => [ "@version","ecs","tags","agent","input","log","host" ]
    # 添加 tag
    add_tag => [ "linux80","zookeeper","kafka","elk" ]
    # 移除 tag
    remove_tag => [ "zookeeper","kafka" ]
    # 创建插件的唯一ID,如果不创建则系统默认生成
    id => "nginx"
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}

启动 logstash 实例
logstash -rf config-logstash/17-beats-grok-es.conf

2,filebeat 配置
[root@elk188 ~]$ cat /etc/filebeat/config/34-nginx-to-logatash.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["192.168.222.187:8888"]

启动 filebeat 实例
filebeat -e -c /etc/filebeat/config/34-nginx-to-logatash.yml

十,部署 logstash 环境

Logstash 是一个开源数据收集引擎,具有实时流水线功能。Logstash 可以动态统一来自不同来源的数据,并将数据规范化到您选择的目标中,对所有数据进行清洗和标准化处理,以支持各种高级下游分析和可视化使用场景。
虽然 Logstash 最初推动了日志收集方面的创新,但其功能远远超出了该用例。任何类型的事件都可以通过广泛的输入、筛选和输出插件进行丰富和转换,许多本机编解码器进一步简化了摄取过程。Logstash 通过利用更大数量和种类的数据来加速您的洞察。
1,部署 logstash 环境
官网下载地址
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.3-linux-x86_64.tar.gz

官网文档地址
https://www.elastic.co/docs
https://www.elastic.co/guide/en/logstash/7.17/introduction.html
# redis 链接
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html
# filter官方地址
https://www.elastic.co/guide/en/logstash/7.17/filter-plugins.html
# grok 官方地址
https://www.elastic.co/guide/en/logstash/7.17/plugins-filters-grok.html

# 在 187 机器上安装 logstash
yum -y localinstall logstash-7.17.3-x86_64.rpm
ln -sv /usr/share/logstash/bin/logstash /usr/local/bin/

1,在188机器配置filebeat配置文件
[root@elk188 ~]$ cp /etc/filebeat/config/33-tomcat-to-logatash.yml /etc/filebeat/config/34-nginx-to-logatash.yml
[root@elk188 ~]$ vim /etc/filebeat/config/34-nginx-to-logatash.yml
[root@elk188 ~]$ cat /etc/filebeat/config/34-nginx-to-logatash.yml
# encoding: utf-8
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["192.168.222.187:8888"]

启动 filebeat 实例
filebeat -e -c /etc/filebeat/config/34-nginx-to-logatash.yml

2,在187机器配置logstash配置文件
[root@elk187 ~]$ vim config-logstash/14-beats-grok-es.conf
[root@elk187 ~]$ cat config-logstash/14-beats-grok-es.conf
# encoding: utf-8
input {
  beats {
    port => 8888
  }
}
filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-logstash-%{+yyyy.MM.dd}"
  }
}

启动 logstash 实例。'-r' 表示 reload 热加载,'-f' 表示指定文件
logstash -rf config-logstash/14-beats-grok-es.conf
2,修改 logstash 的配置文件
(1) 编写配置文件
cat > conf.d/01-stdin-to-stdout.conf << 'EOF'
input {
  stdin {}
}
output {
  stdout {}
}
EOF

(2) 检查配置文件语法
logstash -tf conf.d/01-stdin-to-stdout.conf

参数解释:
'-t' 就是 test 测试,'logstash -t' 跟 'nginx -t' 一样,测试语法是否正确
'-f' 表示指定文件

(3) 启动 logstash 实例
logstash -f conf.d/01-stdin-to-stdout.conf

十一,logstash 企业级常见案例(ELFK架构)

1,logstash 收集本地文件
(1) 编写 logstash 的配置文件
cat > conf.d/02-file-to-stdout << 'EOF'
input {
  file {
    # 指定收集的路径
    path => ["/tmp/test/*.txt"]
    # 指定文件的读取位置,仅在 '.sincedb*' 文件中没有记录的情况下生效
    start_position => "beginning"
  }
}
output {
  stdout {}
  elasticsearch {}
}
EOF

(2) 启动 logstash 实例
logstash -rf conf.d/02-file-to-stdout

(3) 查看生成的 '.sincedb_*' 文件
Sincedb 文件是文本文件,具有 4 列(v5.0.0 之前)、5 列或 6 列:
1) 查看文件的inode
[root@elk187 /tmp/test]$ ll -i
total 8
19554780 -rw-r--r-- 1 root root 15 Oct 27 00:55 1.txt
19554756 -rw-r--r-- 1 root root  5 Oct 27 00:59 2.txt
[root@elk187 /tmp/test]$ cat /usr/share/logstash/data/plugins/inputs/file/.sincedb_3cd99a80ca58225ec14dc0ac340abb80
19554780 0 2051 15 1730015918.691061 /tmp/test/1.txt
19554756 0 2051 5 1730015994.244561 /tmp/test/2.txt

'.sincedb_*' 文件参数解释:
'19554780': inode 编号(或等效值)
'0': 文件系统的主要设备号(或等效设备号)
'2051': 文件系统的次要设备号(或等效设备号)。
'15': 文件中的当前字节偏移量。
'1730015918.691061': 最后一个活动时间戳(浮点数)
'/tmp/test/1.txt': 此记录匹配的最后一个已知路径(对于转换为新格式的旧 sincedb 记录,此路径为空)

# 以下说明 logstash 读取文件是从文件尾部读取
[root@elk187 /tmp/test]$ echo 4444 >> 2.txt
[root@elk187 /tmp/test]$ cat 2.txt
3333
4444
[root@elk187 /tmp/test]$ echo AAAA >> 1.txt
[root@elk187 /tmp/test]$ cat /usr/share/logstash/data/plugins/inputs/file/.sincedb_3cd99a80ca58225ec14dc0ac340abb80
19554780 0 2051 20 1730017636.8801842 /tmp/test/1.txt
19554756 0 2051 10 1730017579.632001 /tmp/test/2.txt
[root@elk187 /tmp/test]$ ll -i
total 8
19554780 -rw-r--r-- 1 root root 20 Oct 27 01:27 1.txt
19554756 -rw-r--r-- 1 root root 10 Oct 27 01:26 2.txt
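如果希望 logstash 重新从头读取这些文件,可以在停止实例后删除对应的 sincedb 记录(注意:这会导致数据被重复采集,仅适合测试环境):
# 停止 logstash 后执行
rm -f /usr/share/logstash/data/plugins/inputs/file/.sincedb_*
# 重新启动实例,即可按 start_position => "beginning" 从头重读
logstash -rf conf.d/02-file-to-stdout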
2,logstash 实现日志聚合
[root@elk187 ~]$ cp config-logstash/02-file-to-stdout.conf config-logstash/03-tcp-to-stdout.conf
[root@elk187 ~]$ vim config-logstash/03-tcp-to-stdout.conf
[root@elk187 ~]$ cat config-logstash/03-tcp-to-stdout.conf
# encoding: utf-8
input {
  tcp {
    port => 8888
  }
}
output {
  stdout {}
}

# 启动 logstash 实例
logstash -rf config-logstash/03-tcp-to-stdout.conf

# 在 189 客户端机器登录
[root@elk189 ~]$ nc 192.168.222.187 8888
aaaaaaaaaaaaa
bbbbbbbbbbbbb
ccccccccccccc# 在 187 机器上实时出现{"@timestamp" => 2024-10-27T11:09:49.127Z,"port" => 35312,"message" => "aaaaaaaaaaaaa","@version" => "1","host" => "elk189.longchi.xyz"
}
{"@timestamp" => 2024-10-27T11:10:04.581Z,"port" => 35312,"message" => "bbbbbbbbbbbbb","@version" => "1","host" => "elk189.longchi.xyz"
}
{"@timestamp" => 2024-10-27T11:10:18.970Z,"port" => 35312,"message" => "ccccccccccccc","@version" => "1","host" => "elk189.longchi.xyz"
}

# 在 188 客户端机器登录
[root@elk188 ~]$ nc 192.168.222.187 8888
1111111111111111111
2222222222222222222
3333333333333333333

# 在 187 机器上实时出现
[WARN ] 2024-10-27 04:16:16.972 [nioEventLoopGroup-2-2] line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{"@timestamp" => 2024-10-27T11:16:32.227Z,"port" => 43414,"message" => "1111111111111111111","@version" => "1","host" => "elk188.longchi.xyz"
}
{"@timestamp" => 2024-10-27T11:16:46.169Z,"port" => 43414,"message" => "2222222222222222222","@version" => "1","host" => "elk188.longchi.xyz"
}
{"@timestamp" => 2024-10-27T11:16:59.714Z,"port" => 43414,"message" => "3333333333333333333","@version" => "1","host" => "elk188.longchi.xyz"
}
应用场景如下图所示:交换机(路由器)-->logstash-->ES集群<--kibana<--用户
设备不支持安装操作系统,就可以采用tcp来收集日志
日志聚合: 
将所有数据发送到logstash(nc 192.168.222.187 8888)的端口就可以了

3,logstash 可以实现多端口发送数据
[root@elk187 ~]$ cat config-logstash/03-tcp-to-stdout.conf
# encoding: utf-8
input {
  tcp {
    port => 8888
  }
  tcp {
    port => 9999
  }
}
output {
  stdout {}
}

启动 logstash 实例
logstash -rf config-logstash/03-tcp-to-stdout.conf

[root@elk189 ~]$ nc 192.168.222.187 8888
xxxxxxxxxxxxxxxxxxxxxxx
yyyyyyyyyyyyyyyyyyyyyyy

[root@elk188 ~]$ nc 192.168.222.187 9999
44444444444444444444
55555555555555555555

在 187 机器上实时出现如下数据:
[WARN ] ...(时间戳略) line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{"message" => "xxxxxxxxxxxxxxxxxxxxxxx","host" => "elk189.longchi.xyz","@version" => "1","port" => 35314,"@timestamp" => 2024-10-27T11:33:09.824Z
}
{"message" => "yyyyyyyyyyyyyyyyyyyyyyy","host" => "elk189.longchi.xyz","@version" => "1","port" => 35314,"@timestamp" => 2024-10-27T11:33:16.095Z
}
[WARN ] 2024-10-27 04:33:28.357 [nioEventLoopGroup-4-1] line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{"message" => "44444444444444444444","host" => "elk188.longchi.xyz","@version" => "1","port" => 36374,"@timestamp" => 2024-10-27T11:33:37.550Z
}
{"message" => "55555555555555555555","host" => "elk188.longchi.xyz","@version" => "1","port" => 36374,"@timestamp" => 2024-10-27T11:33:44.573Z
}
如下图所示做一个日志聚合功能

4,基于http案例
[ 使用场景:客户端发送http请求,比如webhdfs-->hdfs(分布式文件系统)--> http ]
文档地址
https://www.elastic.co/guide/en/logstash/7.17/plugins-inputs-http.html

[root@elk187 ~]$ cp config-logstash/03-tcp-to-stdout.conf config-logstash/04-http-to-stdout.conf
[root@elk187 ~]$ vim config-logstash/04-http-to-stdout.conf
[root@elk187 ~]$ cat config-logstash/04-http-to-stdout.conf
# encoding: utf-8
input {
  http {
    port => 8888
  }
  http {
    port => 9999
  }
}
output {
  stdout {}
}

启动 logstash 实例
[root@elk187 ~]$ logstash -rf config-logstash/04-http-to-stdout.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
......
访问浏览器 http://192.168.222.187:8888
返回 OK 表示成功。也可以利用 postman 发送数据。
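除了浏览器和 postman,也可以直接用 curl 向 http 输入插件发送一条测试数据(示例):
curl -XPOST http://192.168.222.187:8888 -H 'Content-Type: application/json' -d '{"user":"test","action":"login"}'
发送成功后,187 的 logstash 实例会在 stdout 打印出对应的事件。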
5,基于 redis 案例
1,在 188 主机 启动 关于redis的 filebeat 的实例
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/28-tcp-to-redis.yml

2,在 189 主机连接 Redis 服务器
[root@elk189 ~]$ redis-cli -h 192.168.222.187 -p 6379 -a longchi -n 5
192.168.222.187:6379[5]> keys *
1) "longchi-linux-filebeat"[root@elk189 ~]$ redis-cli -h 192.168.222.187 -p 6379 -a longchi
192.168.222.187:6379> keys *
(empty list or set)
# 切换到5号数据库
192.168.222.187:6379> SELECT 5
OK
192.168.222.187:6379[5]> KEYS *
1) "longchi-linux-filebeat"
192.168.222.187:6379[5]> TYPE "longchi-linux-filebeat"
list

3,在 187 主机上配置有关 redis 的 logstash 实例
[root@elk187 ~]$ cp config-logstash/04-http-to-stdout.conf config-logstash/05-redis-to-stdout.conf
[root@elk187 ~]$ vim config-logstash/05-redis-to-stdout.conf
[root@elk187 ~]$ cat config-logstash/05-redis-to-stdout.conf
# encoding: utf-8
input {
  redis {
    # 指定的是 REDIS 的键(key)的类型
    data_type => "list"
    # 指定数据库的编号,默认值是0号数据库
    db => 5
    # 指定数据库的ip地址,默认值是localhost
    host => "192.168.222.187"
    # 指定数据库的端口号,默认值是6379
    port => 6379
    # 指定 redis 的认证密码
    password => "longchi"
    # 指定从 redis 的哪个 key 取数据
    key => "longchi-linux-filebeat"
  }
}
output {
  stdout {}
}

启动 logstash 实例
[root@elk187 ~]$ logstash -rf config-logstash/05-redis-to-stdout.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
DEPRECATION: The use of JAVA_HOME is now deprecated and will be removed starting from 8.0. Please configure LS_JAVA_HOME instead.
...

# 此时再去 189 查看数据
# 说明 logstash 实例去 redis 去拿数据,拿一个删一个
192.168.222.187:6379[5]> lrange longchi-linux-filebeat 0 -1
(empty list or set)

# 分别在 187 189 主机上发送如下数据
[root@elk187 ~]$ nc 192.168.222.188 9000
77777777777777777777
8888888888888888888
9999999999999999999

[root@elk189 ~]$ nc 192.168.222.188 9000
FFFFFFFFFFFFFFFFFFF
GGGGGGGGGGGGGGGGGGG
HHHHHHHHHHHHHHHHHHH

# 可以发现在 187 的 logstash 实例中可以直接拿到数据
{"@version" => "1","ecs" => {"version" => "1.12.0"},"input" => {"type" => "tcp"},"agent" => {"id" => "eeaa54c9-bd45-4403-9cf3-84e1df972b4d","hostname" => "elk188.longchi.xyz","ephemeral_id" => "bd3dbd6d-0554-431e-a9d5-6c2b6fff0c44","name" => "elk188.longchi.xyz","type" => "filebeat","version" => "7.17.3"},"log" => {"source" => {"address" => "192.168.222.187:60118"}},"host" => {"name" => "elk188.longchi.xyz"},"message" => "9999999999999999999","@timestamp" => 2024-10-28T01:10:49.638Z
}{"@version" => "1","input" => {"type" => "tcp"},"ecs" => {"version" => "1.12.0"},"log" => {"source" => {"address" => "192.168.222.189:37058"}},"host" => {"name" => "elk188.longchi.xyz"},"agent" => {"hostname" => "elk188.longchi.xyz","id" => "eeaa54c9-bd45-4403-9cf3-84e1df972b4d","ephemeral_id" => "bd3dbd6d-0554-431e-a9d5-6c2b6fff0c44","name" => "elk188.longchi.xyz","type" => "filebeat","version" => "7.17.3"},"message" => "HHHHHHHHHHHHHHHHHHH","@timestamp" => 2024-10-28T01:12:37.048Z
}
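Besides filebeat, you can also feed the same key by hand, which is handy for testing the redis input in isolation (a sketch; any plain string or JSON document works as the list element):
# Push a test message onto the list that the logstash redis input consumes
redis-cli -h 192.168.222.187 -p 6379 -a longchi -n 5 LPUSH longchi-linux-filebeat '{"message":"manual-test"}'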
6. The redis input plugin (full configuration for reference)
input {
  redis {
    # Type of the redis key to read from
    data_type => "list"
    # Database number; the default is database 0
    db => 5
    # Redis server address; the default is localhost
    host => "192.168.222.187"
    # Redis port; the default is 6379
    port => 6379
    # Redis authentication password
    password => "longchi"
    # The redis key to fetch data from
    key => "longchi-linux-filebeat"
  }
}
output {
  stdout {}
}

ELFK architecture diagram

7. beats input example (architecture as shown above)
Wiring filebeat to logstash.
filebeat configuration:
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"
output.logstash:
  hosts: ["192.168.222.187:5044"]
logstash configuration:
input {
  beats { port => 5044 }
}
output {
  stdout {}
}
The logstash output for this example is shown in the figure above.

filebeat configuration:
1. On host 188, edit the filebeat config file so that its output is logstash:
[root@elk188 ~]$ cp /etc/filebeat/config/28-tcp-to-redis.yml /etc/filebeat/config/29-tcp-to-logstash.yml
[root@elk188 ~]$ vim /etc/filebeat/config/29-tcp-to-logstash.yml
[root@elk188 ~]$ vim /etc/filebeat/config/29-tcp-to-logstash.yml
[root@elk188 ~]$ cat /etc/filebeat/config/29-tcp-to-logstash.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"
output.logstash:
  hosts: ["192.168.222.187:5044"]
# Start the filebeat instance
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/29-tcp-to-logstash.yml
logstash configuration:
2,在 187 主机 配置文件
[root@elk187 ~]$ cp config-logstash/05-redis-to-stdout.conf config-logstash/06-beats-to-stdout.conf
[root@elk187 ~]$ vim config-logstash/06-beats-to-stdout.conf
[root@elk187 ~]$ cat config-logstash/06-beats-to-stdout.conf
# encoding: utf-8
input {
  beats { port => 5044 }
}
output {
  stdout {}
}
# Start the logstash instance
logstash -rf config-logstash/06-beats-to-stdout.conf
'-r' means reload, i.e. hot reloading: after changing the config file you do not need to restart the instance.
[root@elk187 ~]$ logstash -rf config-logstash/06-beats-to-stdout.conf
Test:
Send the following messages from host 189:
[root@elk189 ~]$ nc 192.168.222.188 9000
11111111111111111111
22222222222222222222
On host 187, the following data is produced:
{
    "@timestamp" => 2024-10-28T03:41:31.333Z,
           "log" => { "source" => { "address" => "192.168.222.189:37064" } },
          "tags" => [ [0] "beats_input_codec_plain_applied" ],
           "ecs" => { "version" => "1.12.0" },
      "@version" => "1",
       "message" => "11111111111111111111",
         "agent" => {
        "hostname" => "elk188.longchi.xyz",
        "id" => "eeaa54c9-bd45-4403-9cf3-84e1df972b4d",
        "type" => "filebeat",
        "version" => "7.17.3",
        "ephemeral_id" => "d20b8d42-1cae-4be9-af43-09d52e1bb792",
        "name" => "elk188.longchi.xyz"
    },
          "host" => { "name" => "elk188.longchi.xyz" },
         "input" => { "type" => "tcp" }
}
{
    "@timestamp" => 2024-10-28T03:41:38.654Z,
           "log" => { "source" => { "address" => "192.168.222.189:37064" } },
          "tags" => [ [0] "beats_input_codec_plain_applied" ],
           "ecs" => { "version" => "1.12.0" },
      "@version" => "1",
       "message" => "22222222222222222222",
         "agent" => {
        "hostname" => "elk188.longchi.xyz",
        "id" => "eeaa54c9-bd45-4403-9cf3-84e1df972b4d",
        "type" => "filebeat",
        "name" => "elk188.longchi.xyz",
        "ephemeral_id" => "d20b8d42-1cae-4be9-af43-09d52e1bb792",
        "version" => "7.17.3"
    },
          "host" => { "name" => "elk188.longchi.xyz" },
         "input" => { "type" => "tcp" }
}
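Before sending test traffic you can sanity-check both ends (a sketch using built-in commands: 'filebeat test output' verifies connectivity to the logstash endpoint named in the config, and 'ss' confirms logstash is listening):
# On 188: check that filebeat can reach logstash at 192.168.222.187:5044
filebeat test output -c /etc/filebeat/config/29-tcp-to-logstash.yml
# On 187: confirm the beats input is listening on port 5044
ss -ntl | grep 5044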

Commonly used logstash output plugins

8. logstash output to redis
logstash configuration:
1. On host 187:
[root@elk187 ~]$ cp config-logstash/06-beats-to-stdout.conf config-logstash/07-tcp-to-redis.conf
[root@elk187 ~]$ vim config-logstash/07-tcp-to-redis.conf
[root@elk187 ~]$ cat config-logstash/07-tcp-to-redis.conf
# encoding: utf-8
input {
  tcp { port => 9999 }
}
output {
  stdout {}
  redis {
    # Redis host
    host => "192.168.222.187"
    # Redis port
    port => "6379"
    # Redis database number
    db => 10
    # Redis password
    password => "longchi"
    # Type of the key to write
    data_type => "list"
    # Name of the key to write
    key => "longchi-linux-logstash"
  }
}
Start the logstash instance:
[root@elk187 ~]$ logstash -rf config-logstash/07-tcp-to-redis.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
......
On host 189:
# Switch the database number and inspect it
192.168.222.187:6379[5]> select 10
OK
192.168.222.187:6379[10]> keys *
(empty list or set)
# After data has been sent, database 10 contains the following:
192.168.222.187:6379[10]> keys *
1) "longchi-linux-logstash"
192.168.222.187:6379[10]> type  "longchi-linux-logstash"
list
# View the data
192.168.222.187:6379[10]> lrange longchi-linux-logstash 0 -1
1) "{\"@timestamp\":\"2024-10-28T05:12:44.383Z\",\"port\":33466,\"@version\":\"1\",\"message\":\"1111111111111111\",\"host\":\"elk189.longchi.xyz\"}"
2) "{\"@timestamp\":\"2024-10-28T05:15:15.712Z\",\"port\":58120,\"@version\":\"1\",\"message\":\"222222222\",\"host\":\"elk188.longchi.xyz\"}"
3) "{\"@timestamp\":\"2024-10-28T05:24:55.548Z\",\"port\":33468,\"@version\":\"1\",\"message\":\"\xe9\xbb\x84\xe5\x9c\x9f\xe9\xab\x98\xe5\x9d\xa1\",\"host\":\"elk189.longchi.xyz\"}"# 以 '--raw'模式登录数据库,就可以直接输出中文
[root@elk189 ~]$ redis-cli -h 192.168.222.187 -p 6379 -a longchi --raw
192.168.222.187:6379> select 10
OK
192.168.222.187:6379[10]> lrange longchi-linux-logstash 0 -1
{"@timestamp":"2024-10-28T05:12:44.383Z","port":33466,"@version":"1","message":"1111111111111111","host":"elk189.longchi.xyz"}
{"@timestamp":"2024-10-28T05:15:15.712Z","port":58120,"@version":"1","message":"222222222","host":"elk188.longchi.xyz"}
{"@timestamp":"2024-10-28T05:24:55.548Z","port":33468,"@version":"1","message":"黄土高坡","host":"elk189.longchi.xyz"}# 发送数据
[root@elk188 ~]$ echo 222222222 | nc 192.168.222.187 9999
[root@elk189 ~]$ echo 1111111111111111 | nc 192.168.222.187 9999
On 187, the following real-time output appears in the terminal (the recurring `pipeline.ecs_compatibility` WARN lines appear interleaved with the events):
{"@timestamp" => 2024-10-28T05:12:44.383Z,"port" => 33466,"@version" => "1","message" => "1111111111111111","host" => "elk189.longchi.xyz"
}
[WARN ] 2024-10-27 22:15:15.695 [nioEventLoopGroup-2-2] line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{"@timestamp" => 2024-10-28T05:15:15.712Z,"port" => 58120,"@version" => "1","message" => "222222222","host" => "elk188.longchi.xyz"
}
[WARN ] 2024-10-27 22:24:55.544 [nioEventLoopGroup-2-3] line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{"@timestamp" => 2024-10-28T05:24:55.548Z,"port" => 33468,"@version" => "1","message" => "黄土高坡","host" => "elk189.longchi.xyz"
}
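To see how many events the redis output has queued without dumping them all, LLEN works too (a sketch; key name as defined in the config above):
# Length of the list that 07-tcp-to-redis.conf writes to
redis-cli -h 192.168.222.187 -p 6379 -a longchi -n 10 LLEN longchi-linux-logstash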
9. logstash output to a file: persists data to disk and can be used for data synchronization
[root@elk187 ~]$ cp config-logstash/07-tcp-to-redis.conf config-logstash/08-tcp-to-file.conf
[root@elk187 ~]$ vim config-logstash/08-tcp-to-file.conf
[root@elk187 ~]$ cat config-logstash/08-tcp-to-file.conf
# encoding: utf-8
input {
  tcp { port => 9999 }
}
output {
  file {
    # Where to write the data on disk
    path => "/tmp/longchi-linux-logstash.log"
    # codec => line { format => "custom format: %{message}" }
  }
}
Send test data from host 189:
[root@elk189 ~]$ echo 黄土高坡 | nc 192.168.222.187 9999
[root@elk189 ~]$ echo 黄土高坡 | nc 192.168.222.187 9999
[root@elk189 ~]$ echo 1111111111111111 | nc 192.168.222.187 9999
On host 187, inspect '/tmp/longchi-linux-logstash.log':
[root@elk187 ~]$ cat /tmp/longchi-linux-logstash.log
{"message":"黄土高坡","port":33472,"host":"elk189.longchi.xyz","@timestamp":"2024-10-28T05:57:57.717Z","@version":"1"}
{"message":"黄土高坡","port":33474,"host":"elk189.longchi.xyz","@timestamp":"2024-10-28T05:58:21.756Z","@version":"1"}
{"message":"1111111111111111","port":33476,"host":"elk189.longchi.xyz","@timestamp":"2024-10-28T06:00:01.420Z","@version":"1"}

10. logstash output to elasticsearch
[root@elk187 ~]$ cp config-logstash/08-tcp-to-file.conf config-logstash/09-tcp-to-es.conf
[root@elk187 ~]$ vim config-logstash/09-tcp-to-es.conf
[root@elk187 ~]$ cat config-logstash/09-tcp-to-es.conf
# encoding: utf-8
input {
  tcp { port => 9999 }
}
output {
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "lonchi-logstash-%{+yyyy.MM.dd}"
  }
}
# '-t' means test; '-r' means reload (hot reloading)
[root@elk187 ~]$ logstash -tf config-logstash/09-tcp-to-es.conf
Using JAVA_HOME defined java: /longchi/softwares/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
[root@elk187 ~]$ logstash -rf config-logstash/09-tcp-to-es.conf
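After sending a few events through port 9999, you can confirm the daily index exists (a quick sketch; the index name matches the pattern in the config above):
# List indices on the cluster and filter for the one this pipeline writes
curl -s '192.168.222.187:9200/_cat/indices?v' | grep lonchi-logstash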
11. Combined example; the architecture is shown in the figure below

1. logstash configuration:
Run on host 187:
[root@elk187 ~]$ cp config-logstash/09-tcp-to-es.conf config-logstash/10-many-to-es.conf
[root@elk187 ~]$ vim config-logstash/10-many-to-es.conf
[root@elk187 ~]$ vim config-logstash/10-many-to-es.conf
[root@elk187 ~]$ cat config-logstash/10-many-to-es.conf
# encoding: utf-8
input {tcp {type => "longchi-tcp"port => 6666}beats {type => "longchi-beats"port => 7777}redis {type => "longchi-redis"# 指定的是 REDIS 的键(key)的类型data_type => "list"# 指定数据库的编号,默认值是0号数据库db => 5# 指定数据的ip地址,默认值是localhosthost => "192.168.222.187"# 指定数据库的端口号,默认值是6379port => 6379# 指定 redis 的认证密码password => "longchi"# 指定从 redis 的哪个 key 取数据key => "longchi-linux-filebeat"}
}output {stdout {}if [type] == "longchi-tcp" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-tcp-%{+yyyy.MM.dd}"}} else if [type] == "longchi-beat" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-beat-%{+yyyy.MM.dd}"}} else if [type] == "longchi-redis" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-redis-%{+yyyy.MM.dd}"}} else {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-other-%{+yyyy.MM.dd}"}}
}启动 logstash 实例
logstash -tf config-logstash/10-many-to-es.conf
logstash -rf config-logstash/10-many-to-es.confoutput {stdout {}if [type] == "longchi-tcp" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-tcp-%{+yyyy.MM.dd}"}} else if [type] == "longchi-beat" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-beat-%{+yyyy.MM.dd}"}} else if [type] == "longchi-redis" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-redis-%{+yyyy.MM.dd}"}} else {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-logstash-other-%{+yyyy.MM.dd}"}}
} 2,filebeat 配置在 188 主机上执行
[root@elk188 ~]$ cp /etc/filebeat/config/29-tcp-to-logstash.yml /etc/filebeat/config/30-tcp-to-logstash.yml
[root@elk188 ~]$ vim /etc/filebeat/config/30-tcp-to-logstash.yml
[root@elk188 ~]$ cat /etc/filebeat/config/30-tcp-to-logstash.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9999"
output.logstash:
  hosts: ["192.168.222.187:7777"]
Start the filebeat instance:
filebeat -e -c /etc/filebeat/config/30-tcp-to-logstash.yml
3. Configure redis:
[root@elk188 ~]$ cp /etc/filebeat/config/28-tcp-to-redis.yml /etc/filebeat/config/31-tcp-to-redis.yml
[root@elk188 ~]$ vim /etc/filebeat/config/31-tcp-to-redis.yml
[root@elk188 ~]$ cat /etc/filebeat/config/31-tcp-to-redis.yml
# encoding: utf-8
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:8888"
output.redis:
  # Redis hosts to write to
  hosts: ["192.168.222.187:6379"]
  # Redis authentication password
  password: "longchi"
  # Key to write to
  key: "longchi-linux-filebeat"
  # Database number to connect to
  db: 5
  # Timeout
  timeout: 3
Start the instances:
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/31-tcp-to-redis.yml
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/30-tcp-to-logstash.yml --path.data /tmp/filebeat/
Note: when starting multiple filebeat instances on the same server, every additional instance must be given its own data path, e.g. '--path.data /tmp/filebeat/'; otherwise filebeat will fail with an error.
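A quick way to exercise all three inputs of 10-many-to-es.conf (a sketch; ports as configured above, the nc commands can run from any host):
# tcp input: straight into logstash on 187
echo tcp-test | nc 192.168.222.187 6666
# beats path: filebeat's tcp listener on 188:9999 forwards to the logstash beats input on 7777
echo beat-test | nc 192.168.222.188 9999
# redis path: filebeat's tcp listener on 188:8888 writes to redis db 5, which logstash consumes
echo redis-test | nc 192.168.222.188 8888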

As shown in the figure above, the implementation is as follows:
I. Chain 1: nginx -> filebeat -> redis -> logstash[189] -> es -> kibana
1. Write the filebeat config file
[root@elk188 ~]$ vim /etc/filebeat/config/32-nginx-to-redis.yml
[root@elk188 ~]$ cat /etc/filebeat/config/32-nginx-to-redis.yml
# encoding: utf-8
filebeat.inputs:
- type: filestream
  # Whether this input is enabled; the default is true
  enabled: true
  # Data paths to read
  paths:
    - /var/log/nginx/access.log*
  # Tag events from this input
  tags: ["access"]
  # The filestream type cannot use 'json.keys_under_root'; a parser must be configured instead
  # json.keys_under_root: true
  # Hence the following parser is configured to do the JSON parsing
  parsers:
    - ndjson:
        keys_under_root: true
output.redis:
  # Redis hosts to write to
  hosts: ["192.168.222.187:6379"]
  # Redis authentication password
  password: "longchi"
  # Key to write to
  key: "longchi-linux-filebeat"
  # Database number to connect to
  db: 5
  # Timeout
  timeout: 3
# Disable index lifecycle management; if enabled, the index settings above would be ignored
setup.ilm.enabled: false
# Name of the index template (an index template defines how indices are created)
setup.template.name: "longchi-linux"
# Match pattern of the index template
setup.template.pattern: "longchi-linux*"
# Overwrite an existing index template: true overwrites it, false leaves it in place
setup.template.overwrite: true
# Index template settings
setup.template.settings:
  # Number of primary shards
  index.number_of_shards: 3
  # Number of replicas; production must not use 0 - typically 1-3, and fewer than the number of nodes
  index.number_of_replicas: 0
Start the instance:
filebeat -e -c /etc/filebeat/config/32-nginx-to-redis.yml
2. Write the logstash config file
[root@elk189 ~]$ cat config-logstash/02-beats-to-es.conf
# encoding: utf-8
input {tcp {type => "longchi-tcp"port => 9999}beats {type => "longchi-beat"port => 5044}redis {type => "longchi-redis"# 指定的是 REDIS 的键(key)的类型data_type => "list"# 指定数据库的编号,默认值是0号数据库db => 5# 指定数据的ip地址,默认值是localhosthost => "192.168.222.187"# 指定数据库的端口号,默认值是6379port => 6379# 指定 redis 的认证密码password => "longchi"# 指定从 redis 的哪个 key 取数据key => "longchi-linux-filebeat"}
}output {stdout {}if [type] == "longchi-tcp" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-tcp-%{+yyyy.MM.dd}"}} else if [type] == "longchi-beat" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-beat-%{+yyyy.MM.dd}"}} else if [type] == "longchi-redis" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-redis-%{+yyyy.MM.dd}"}} else {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-other-%{+yyyy.MM.dd}"}}
}启动 logstash 实例
logstash -rf config-logstash/02-beats-to-es.conf二,第二条链路  tomcat-filebeat-[187]logstash(8888)-es-kibana
filebeat configuration:
vim /etc/filebeat/config/33-tomcat-to-logstash.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /longchi/softwares/apache-tomcat-10.0.20/logs/*.txt
  json.keys_under_root: true
output.logstash:
  hosts: ["192.168.222.187:8888"]
logstash configuration:
[root@elk187 ~]$ cat config-logstash/12-beats-to-es.conf
# encoding: utf-8
input {
  beats { port => 8888 }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]
    index => "longchi-linux-tcp-%{+yyyy.MM.dd}"
  }
}
Start the instances:
[root@elk188 ~]$ filebeat -e -c /etc/filebeat/config/33-tomcat-to-logstash.yml --path.data /tmp/filebeat/
[root@elk187 ~]$ logstash -f config-logstash/12-beats-to-es.conf --path.data /tmp/logstash
Note: if a filebeat or logstash service is already running on a server, any further instance started from a different config file must be given its own data directory via '--path.data /tmp/filebeat/' or '--path.data /tmp/logstash/'. Each config file needs its own, distinct data path; if two instances share a path, the second one fails with an error.
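The json.keys_under_root option above assumes each line of the Tomcat access log is a single JSON object. A hypothetical sample line (the field names depend entirely on the access-log valve pattern configured in Tomcat's server.xml):
{"clientip":"192.168.222.1","method":"GET /index.jsp HTTP/1.1","status":"200","SendBytes":"11250"}
With keys_under_root, filebeat lifts clientip, method, status, and SendBytes to top-level fields of the event.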
1. Build the logging system shown in the figure below
1. Install logstash on machine 189:
yum -y localinstall logstash-7.17.3-x86_64.rpm
ln -sv /usr/share/logstash/bin/logstash /usr/local/bin/
2. Install logstash on machine 187:
yum -y localinstall logstash-7.17.3-x86_64.rpm
ln -sv /usr/share/logstash/bin/logstash /usr/local/bin/
(1) Write the config file
cat > conf.d/01-stdin-to-stdout.conf << 'EOF'
input {
  stdin {}
}
output {
  stdout {}
}
EOF
(2) Check the config file syntax
logstash -tf conf.d/01-stdin-to-stdout.conf
Parameter notes:
'-t' means test; 'logstash -t', like 'nginx -t', checks whether the syntax is correct
'-f' specifies the config file
(3) Start the logstash instance
logstash -f conf.d/01-stdin-to-stdout.conf
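A sample interactive run (illustrative: whatever you type on stdin comes back as an event on stdout; the exact field order and timestamp will differ):
[root@elk187 ~]$ logstash -f conf.d/01-stdin-to-stdout.conf
......
hello
{
       "message" => "hello",
      "@version" => "1",
          "host" => "elk187.longchi.xyz",
    "@timestamp" => 2024-10-28T12:00:00.000Z
}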
I. On the machine where logstash is installed (187), write the config file (the logstash-to-es pipeline config)
vim 11-many-to-es.conf
# encoding: utf-8
input {tcp {type => "longchi-tcp"port => 6666}beats {type => "longchi-beat"port => 7777}redis {type => "longchi-redis"data_type => "list"db => 5host => "192.168.222.187"port => 6379password => "longchi"key => "longchi-linux-filebeat"}
}output {stdout {}if [type] == "longchi-tcp" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-tcp-%{+yyyy.MM.dd}"}} else if [type] == "longchi-beat" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-beat-%{+yyyy.MM.dd}"}} else if [type] == "longchi-redis" {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-redis-%{+yyyy.MM.dd}"}} else {elasticsearch {hosts => ["192.168.222.187:9200","192.168.222.188:9200","192.168.222.189:9200"]index => "longchi-linux-other-%{+yyyy.MM.dd}"}}
}

Testing

redis-cli -h 192.168.222.187 -p 6379 -a longchi --raw -n 5
lrange longchi-linux-filebeat 0 -1
1. tcp: test from machine 189 (any other machine works too)
echo AAAAA | nc 192.168.222.187 6666
2. beats: run the following on machine 188, where filebeat is installed
vim 30-tcp-to-logstash.yml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9999"
output.logstash:
  hosts: ["192.168.222.187:7777"]
Start the instance: filebeat -e -c 30-tcp-to-logstash.yml
Test from machine 189 (any other machine works too):
echo BBBBBBBB | nc 192.168.222.188 9999
3. Wire up the redis link: run the following on machine 188, where filebeat is installed
vim 31-tcp-to-redis.yml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:8888"
output.redis:
  hosts: ["192.168.222.187:6379"]
  password: "longchi"
  db: 5
  key: "longchi-linux-filebeat"
  timeout: 3
Start the instance:
filebeat -e -c 31-tcp-to-redis.yml --path.data /tmp/filebeat/
Test from machine 189 (any other machine works too):
echo CCCCCCC | nc 192.168.222.188 8888

XII. Notes on switching from the log input type to the filestream type

1. JSON parsing configuration for the filestream type
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["access"]
  # For the filestream type, JSON parsing cannot be configured directly; it must go through a parser
  # json.keys_under_root: true
  # Hence the following parser configuration is needed
  parsers:
    # Lets Filebeat decode logs structured as JSON messages.
    # Filebeat processes logs line by line, so JSON decoding only works
    # when there is one JSON object per message.
    - ndjson:
        # Parse the message field as JSON and put the keys at the top level.
        keys_under_root: true
output.elasticsearch:
  enabled: true
  hosts: ["http://10.0.0.101:9200","http://10.0.0.102:9200","http://10.0.0.103:9200"]
  index: "longchi-linux-nginx-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-linux"
setup.template.pattern: "longchi-linux*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
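For illustration, assuming nginx's log_format has been switched to JSON (a hypothetical sample; the field names are whatever the log_format defines), one access.log line might look like this:
{"@timestamp":"2024-10-28T06:00:00+08:00","clientip":"10.0.0.1","status":200,"request":"GET / HTTP/1.1"}
The ndjson parser decodes it and, with keys_under_root, places clientip, status, and request as top-level fields of the event instead of leaving the raw string in message.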

2. Multiline matching with the filestream type

(1) Write the filebeat config file
cat > config/08-tomcat-error-to-es.yml << 'EOF'
filebeat.inputs:
- type: filestream
  paths:
    - /longchi/softwares/tomcat/logs/catalina.out
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{2}'
        negate: true
        match: after
output.elasticsearch:
  hosts:
    - "http://10.0.0.101:9200"
    - "http://10.0.0.102:9200"
    - "http://10.0.0.103:9200"
  index: "longchi-tomcat-error-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "longchi-tomcat-error"
setup.template.pattern: "longchi-tomcat-error*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
EOF
(2) Start the filebeat instance
filebeat test config -c config/08-tomcat-error-to-es.yml
filebeat -e -c config/08-tomcat-error-to-es.yml
(3) View the result in kibana
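Why this groups Java stack traces into one event: with negate: true and match: after, any line that does NOT match '^\d{2}' (i.e. does not begin with two digits, the way a timestamped log line does) is appended to the preceding matching line. An illustrative catalina.out fragment, which becomes a single event:
28-Oct-2024 10:15:30.123 SEVERE [main] org.apache.catalina.core.StandardWrapperValve.invoke Servlet threw exception
java.lang.NullPointerException
        at com.example.Demo.run(Demo.java:42)
        at java.base/java.lang.Thread.run(Thread.java:833)
Only the first line starts with two digits; the exception line and the "at ..." lines are folded into it.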

Lucene overview

What is Lucene
Lucene is an open-source library for full-text indexing and search, supported and provided by the Apache Software Foundation. It offers a simple yet powerful application programming interface (API) for building full-text indexes and searching them. In the Java ecosystem, Lucene is a mature, free, open-source tool. Lucene is not a finished search-engine product; rather, it is used to build search-engine products.
Official site: http://lucene.apache.org/
Older releases: http://archive.apache.org/dist/lucene/java/
The relationship between Lucene, Solr, and Elasticsearch
Lucene: the low-level API and toolkit
Solr: an enterprise search-engine product built on Lucene
Elasticsearch: an enterprise search-engine product built on Lucene
Basic use of Lucene
Use the Lucene API to create indexes, delete indexes, update indexes, and search the indexed data.

Supplementary notes

1. What is JRuby

JRuby is an interpreter for the Ruby language that runs on the Java Virtual Machine (JVM). It combines the simplicity of Ruby with the powerful execution machinery of the JVM, including full integration with Java libraries. Rails dramatically speeds up and simplifies web-application development, but it has been perceived as immature, especially where high-end enterprise features are concerned. The Java platform, meanwhile, with its virtual machine, libraries, and application servers, has kept improving in speed, stability, and functionality, and is now widely recognized as a platform for developing high-end server applications.

JRuby is a pure-Java implementation of a Ruby interpreter. With JRuby you can run Ruby programs directly on the JVM and call Java class libraries; many Ruby IDEs written in Java use JRuby to interpret the syntax. In 2006, Sun hired two core members of the JRuby team, Charles Nutter and Thomas Enebo, to work on JRuby full-time; later, ThoughtWorks also hired a core JRuby contributor full-time.

JRuby is thus an open-source Ruby interpreter on the JVM that also makes Ruby libraries usable from Java. Just as with the standard Ruby interpreter, Ruby code executes correctly in JRuby, apart from Ruby code that calls native methods (C code) or Java class libraries.

Why JRuby

Besides being a good fit for Internet-facing web applications, many companies, ThoughtWorks among them, see JRuby as the key technology for bringing Rails into enterprise applications. JRuby lets Rails applications be deployed on popular Java application servers; many enterprises already run such environments but, for various reasons, cannot stand up a brand-new runtime just for Rails. Behind the popularity of any technology there are, on close inspection, economic reasons. Rails can reach five times the development productivity of SSH (Struts + Spring + Hibernate), which makes it an almost irresistible choice. Rails entering enterprise applications is only a matter of time, and its prospects are bright.

Downloading and installing JRuby

JRuby official website: Home — JRuby.org

2. Postman official site

https://www.postman.com/downloads/
Start the logstash instance:
[root@elk187 ~]$ logstash -rf config-logstash/04-http-to-stdout.conf
Open http://192.168.222.187:8888/ in a browser; it returns 'ok'. On server 187 the following is printed:
{
       "message" => "",
    "@timestamp" => 2024-10-27T12:42:17.235Z,
      "@version" => "1",
          "host" => "192.168.222.1",
       "headers" => {
        "connection" => "keep-alive",
        "content_length" => "0",
        "request_method" => "GET",
        "http_version" => "HTTP/1.1",
        "http_accept" => "*/*",
        "http_host" => "192.168.222.187:8888",
        "http_user_agent" => "PostmanRuntime/7.42.0",
        "accept_encoding" => "gzip, deflate, br",
        "postman_token" => "230900a2-da74-422e-b4a0-3b4342c79fc0",
        "request_path" => "/"
    }
}
Submit a GET request from postman to send the data shown below.

Install the calculator utility bc
[root@elk188 ~]$ yum -y install bc
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                           | 3.6 kB  00:00:00
epel                                           | 4.3 kB  00:00:00
extras                                         | 2.9 kB  00:00:00
nginx-stable                                   | 2.9 kB  00:00:00
updates                                        | 2.9 kB  00:00:00
Package bc-1.06.95-13.el7.x86_64 already installed and latest version
Nothing to do
[root@elk188 ~]$ echo "50*10000000000/1024/1024" | bc
476837
As the result shows, about 476,837 MB (roughly 0.45 TB) of space can be saved.
