Elastic Stack (commonly known as the ELK Stack) is an open-source suite of tools for searching, analyzing, and visualizing data. It consists of Elasticsearch, Logstash, and Kibana, joined more recently by Beats, a set of lightweight data shippers. Below is an overview of the ELK Stack's components and how to integrate them.

 

Component Overview

  1. Elasticsearch: a distributed search and analytics engine for storing, searching, and analyzing data at scale.
  2. Logstash: a server-side data processing pipeline that ingests data from multiple sources, transforms it, and sends it to a destination such as Elasticsearch.
  3. Kibana: a data visualization tool for exploring and visualizing data stored in Elasticsearch.
  4. Beats: a family of lightweight data shippers that collect and send various types of data to Logstash or Elasticsearch.

     

ELK Stack Integration Steps

1. Install Elasticsearch

Install Elasticsearch and start it. You can download Elasticsearch from the official website and follow the installation guide.

# For Debian/Ubuntu
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.4-amd64.deb
sudo dpkg -i elasticsearch-7.13.4-amd64.deb
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
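
Once the service is up, a quick sanity check is to query Elasticsearch over HTTP (this assumes the default port 9200 and a stock 7.x install, where security is not yet enabled):

# Should return a JSON document with the cluster name and version
curl http://localhost:9200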

 

2. Install Logstash

Install Logstash and configure a pipeline to process and forward data.

# For Debian/Ubuntu
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.13.4-amd64.deb
sudo dpkg -i logstash-7.13.4-amd64.deb
sudo systemctl start logstash
sudo systemctl enable logstash

 

Create a simple Logstash configuration file, logstash.conf:

input {
  beats {
    port => 5044
  }
}
filter {
  # Add any filters you need here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
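
The empty filter block is where parsing logic would go. As a minimal sketch, a grok filter for syslog-style lines could look like the following (the pattern and target field names are illustrative, not part of the setup above):

filter {
  grok {
    # Split a classic syslog line into timestamp, host, and message fields
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{GREEDYDATA:syslog_message}" }
  }
}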

 

Start Logstash with this configuration file:

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
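
Logstash can also validate a configuration file without starting the pipeline, which is useful after editing it:

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit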

 

3. Install Kibana

Install Kibana and configure it to connect to Elasticsearch.

# For Debian/Ubuntu
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.13.4-amd64.deb
sudo dpkg -i kibana-7.13.4-amd64.deb
sudo systemctl start kibana
sudo systemctl enable kibana

 

Edit the Kibana configuration file /etc/kibana/kibana.yml to make sure Kibana connects to the correct Elasticsearch instance:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]  
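
After restarting Kibana so the changes take effect, you can confirm it is listening (assuming the default port 5601):

sudo systemctl restart kibana
# Kibana answers with an HTTP redirect once it is ready
curl -I http://localhost:5601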

 

4. Install Beats

Install Filebeat, one of the Beats, to collect log data and send it to Logstash.

# For Debian/Ubuntu
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.13.4-amd64.deb
sudo dpkg -i filebeat-7.13.4-amd64.deb

 

Edit the Filebeat configuration file /etc/filebeat/filebeat.yml to configure the input and output:

filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
output.logstash:
  hosts: ["localhost:5044"]
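
Filebeat ships with built-in checks, so before starting the service you can validate both the configuration file and the connection to the Logstash output:

# Validate /etc/filebeat/filebeat.yml
sudo filebeat test config
# Verify that the Logstash output at localhost:5044 is reachable
sudo filebeat test output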

 

Start Filebeat and enable it to start automatically at boot:

sudo systemctl start filebeat
sudo systemctl enable filebeat

 

Installing Elasticsearch via docker-compose

As an alternative to the package-based installation above, the following docker-compose.yml (closely modeled on Elastic's official example) brings up a three-node Elasticsearch 8.x cluster with TLS and security enabled, plus Kibana.

version: "2.2"
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    platform: linux/amd64
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    platform: linux/amd64
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: always
  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    platform: linux/amd64
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: always
  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    platform: linux/amd64
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: always
  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    platform: linux/amd64
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      # If this setting does not take effect, exec into the kibana container and add the line i18n.locale: "zh-CN" to config/kibana.yml
      - I18N_LOCALE=zh-CN
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: always
    
volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

 

The corresponding .env file

# Create a .env file in the same directory as the docker-compose file and put the following in it
# Password for the elastic account (at least six characters)
ELASTIC_PASSWORD=123456
# Password for the kibana_system account (at least six characters, must not be purely numeric); this account is only used for some of Kibana's internal settings and cannot be used to query ES
KIBANA_PASSWORD=ab123.
# Version of ES and Kibana
STACK_VERSION=8.11.3-amd64
# Cluster name
CLUSTER_NAME=docker-cluster
# X-Pack license setting; basic is chosen here. If you choose trial instead, it expires after 30 days
LICENSE=basic
#LICENSE=trial
# Host port that ES is mapped to
ES_PORT=9200
# Host port that Kibana is mapped to
KIBANA_PORT=5601
# Memory limit for the ES containers; adjust to your hardware (1g also works)
MEM_LIMIT=1073741824
# Project namespace, used as a prefix on container names
COMPOSE_PROJECT_NAME=dev
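
With docker-compose.yml and .env in place, the stack can be brought up and verified roughly as follows (the container name dev-es01-1 assumes COMPOSE_PROJECT_NAME=dev and Docker Compose v2 naming; adjust to your environment):

docker compose up -d
# Copy the generated CA out of the certs volume so the host can verify TLS
docker cp dev-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt .
# Should return cluster information as JSON
curl --cacert ca.crt -u elastic:123456 https://localhost:9200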

 

The corresponding Logstash conf file. In this setup, logs are first produced to Kafka; Logstash then consumes them from the kafkalog topic and writes them to Elasticsearch.

input {
    kafka {
      topics => ["kafkalog"]
      bootstrap_servers => "172.16.1.180:19092,172.16.1.180:29092,172.16.1.180:39092"
      codec => "json"
    }
}
output {
    elasticsearch {
       hosts => ["https://172.16.1.180:9200"]
       # The cluster is served over HTTPS, so TLS must stay enabled
       ssl_enabled => true
       # The auto-generated node certificate only lists es01/localhost/127.0.0.1,
       # so verification is disabled here; with certs issued for this IP, set
       # "full" and point ssl_certificate_authorities at the CA instead
       ssl_verification_mode => "none"
       # ssl_certificate_authorities => "/usr/share/logstash/conf/es01.crt"
       # ca_trusted_fingerprint => "AE:34:A1:FC:0E:89:0C:8D:15:BD:67:23:AB:E2:F4:7B:81:D8:D5:39:24:C9:C5:B7:A7:AC:82:75:3D:20:81:16"
       user => "elastic"
       password => "123456"
       index => "kafkalog-index"
    }
}
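
To smoke-test this pipeline, publish one JSON message to the topic and then search for it in the index (the kafka-console-producer.sh invocation is illustrative and assumes the Kafka CLI tools are available; -k skips TLS verification, matching the pipeline above):

# Publish a JSON-encoded test event to the kafkalog topic
echo '{"message":"hello from kafka"}' | \
  kafka-console-producer.sh --bootstrap-server 172.16.1.180:19092 --topic kafkalog

# A moment later the document should be searchable in Elasticsearch
curl -k -u elastic:123456 "https://172.16.1.180:9200/kafkalog-index/_search?q=message:hello&pretty"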

 

Connecting and Visualizing

  1. Data collection: Filebeat collects the log files under /var/log and ships the log data to Logstash.
  2. Data processing: Logstash receives the data, optionally applies filters, and forwards it to Elasticsearch.
  3. Data storage and indexing: Elasticsearch receives and stores the data from Logstash, indexing it by date or another indexing strategy.
  4. Data visualization: Kibana connects to Elasticsearch and can be used to explore, analyze, and visualize the stored data. A quick end-to-end check of this chain is sketched below.
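
A minimal end-to-end check, assuming the package-based 7.x setup from the steps above (no authentication):

# Write a recognizable line into a file Filebeat is watching
echo "elk-pipeline-test $(date)" | sudo tee -a /var/log/elk-test.log

# After a few seconds, search for it in the current Logstash index
curl "http://localhost:9200/logstash-*/_search?q=elk-pipeline-test&pretty"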

     

Example: Viewing Log Data

  1. Open a browser and go to Kibana (the default address is http://localhost:5601).
  2. In Kibana, configure an index pattern (for example logstash-*).
  3. Use Kibana's Discover, Visualize, and Dashboard features to explore and visualize the log data.
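
The same indices can also be queried directly through the Elasticsearch REST API, which is convenient for scripting (same unauthenticated 7.x setup as above):

# Fetch the five most recent documents from the logstash-* indices
curl "http://localhost:9200/logstash-*/_search?size=5&sort=@timestamp:desc&pretty"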

     

Summary

Through these steps, you can integrate the ELK Stack and use it to collect, process, and analyze your data. From here, natural extensions include richer Logstash filters, additional Beats, and the secured multi-node deployment shown in the docker-compose example above.