Running Logstash as a daemon inside a Docker container

To be fair, all I am trying to do is send metrics to Elasticsearch and view them in Kibana.

I have been reading through the Elasticsearch documentation trying to find clues. I based my image on python because my actual application is written in Python, and my end goal is to ship everything to Elastic: system stats via metricbeat and application logs via filebeat.

I can't seem to find a way to run Logstash as a service inside the container.

My Dockerfile:

    FROM python:2.7

    WORKDIR /var/local/myapp
    COPY . /var/local/myapp

    # logstash
    RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
    RUN apt-get update && apt-get install apt-transport-https dnsutils default-jre apt-utils -y
    RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
    RUN apt-get update && apt-get install logstash -y

    # metricbeat
    #RUN wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.6.0-amd64.deb
    RUN dpkg -i metricbeat-5.6.0-amd64.deb

    RUN pip install --no-cache-dir -r requirements.txt
    RUN apt-get autoremove -y

    CMD bash strap_and_run.sh

And the additional script strap_and_run.sh:

    python finalize_config.py

    # start
    echo "starting logstash..."
    systemctl start logstash.service

    #todo: get my_ip
    echo "starting metric beat..."
    /etc/init.d/metricbeat start

finalize_config.py

    import os

    import requests

    LOGSTASH_PIPELINE_FILE = 'logstash_pipeline.conf'
    LOGSTASH_TARGET_PATH = '/etc/logstash/conf.d'
    METRICBEAT_FILE = 'metricbeat.yml'
    METRICBEAT_TARGET_PATH = os.path.join(os.getcwd(), '/metricbeat-5.6.0-amd64.deb')

    my_ip = requests.get("https://api.ipify.org/").content

    ELASTIC_HOST = os.environ.get('ELASTIC_HOST')
    ELASTIC_USER = os.environ.get('ELASTIC_USER')
    ELASTIC_PASSWORD = os.environ.get('ELASTIC_PASSWORD')

    if not os.path.exists(LOGSTASH_TARGET_PATH):
        os.makedirs(LOGSTASH_TARGET_PATH)

    # read logstash template file
    with open(LOGSTASH_PIPELINE_FILE, 'r') as logstash_f:
        lines = logstash_f.readlines()

    new_lines = []
    for line in lines:
        new_lines.append(line
                         .replace("<elastic_host>", ELASTIC_HOST)
                         .replace("<elastic_user>", ELASTIC_USER)
                         .replace("<elastic_password>", ELASTIC_PASSWORD))

    # write current file
    with open(os.path.join(LOGSTASH_TARGET_PATH, LOGSTASH_PIPELINE_FILE), 'w+') as new_logstash_f:
        new_logstash_f.writelines(new_lines)

    if not os.path.exists(METRICBEAT_TARGET_PATH):
        os.makedirs(METRICBEAT_TARGET_PATH)

    # read metricbeat template file
    with open(METRICBEAT_FILE, 'r') as metric_f:
        lines = metric_f.readlines()

    new_lines = []
    for line in lines:
        new_lines.append(line
                         .replace("<ip-field>", my_ip)
                         .replace("<type-field>", "test"))

    # write current file
    with open(os.path.join(METRICBEAT_TARGET_PATH, METRICBEAT_FILE), 'w+') as new_metric_f:
        new_metric_f.writelines(new_lines)

The reason is that there is no init system inside the container, so you should not use service or systemctl. Instead, start the processes in the background yourself. Your updated script would look like this:

    python finalize_config.py

    # start
    echo "starting logstash..."
    /usr/bin/logstash &

    #todo: get my_ip
    echo "starting metric beat..."
    /usr/bin/metricbeat &

    wait

You also need to add handling for TERM and other signals and kill the child processes. If you don't do this, docker stop will run into problems.
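For illustration, here is a minimal sketch of such a wrapper in bash. It reuses the /usr/bin/logstash and /usr/bin/metricbeat paths from the script above (verify where your packages actually install the binaries), and the terminate function name is just for the example:

    #!/bin/bash
    python finalize_config.py

    # start both daemons in the background and remember their PIDs
    /usr/bin/logstash &
    LOGSTASH_PID=$!
    /usr/bin/metricbeat &
    METRICBEAT_PID=$!

    # forward TERM/INT to the children so `docker stop` can shut them down cleanly
    terminate() {
        echo "caught signal, stopping children..."
        kill -TERM "$LOGSTASH_PID" "$METRICBEAT_PID" 2>/dev/null
    }
    trap terminate TERM INT

    # block until the children exit; the trap fires while we are waiting here
    wait "$LOGSTASH_PID" "$METRICBEAT_PID"
    # a signal interrupts the first wait, so wait once more to reap the children
    wait

Note that for the signal to reach this script at all, it has to run as PID 1, which means using the exec form of CMD in the Dockerfile (e.g. CMD ["bash", "strap_and_run.sh"]); with the shell form, Docker sends TERM to the wrapping /bin/sh instead.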

In a case like this I prefer to use a process manager such as supervisord and run supervisord as the main PID 1 process.
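As a rough sketch, a supervisord configuration for this setup could look something like the following. The command paths and flags are assumptions based on where the 5.x deb packages normally put things (/usr/share/logstash/bin/logstash, /usr/share/metricbeat/bin/metricbeat, /etc/metricbeat/metricbeat.yml); check them against your image before relying on this:

    [supervisord]
    ; stay in the foreground so supervisord can run as PID 1
    nodaemon=true

    [program:logstash]
    command=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
    autorestart=true
    stdout_logfile=/dev/stdout
    stdout_logfile_maxbytes=0
    stderr_logfile=/dev/stderr
    stderr_logfile_maxbytes=0

    [program:metricbeat]
    command=/usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -e
    autorestart=true
    stdout_logfile=/dev/stdout
    stdout_logfile_maxbytes=0
    stderr_logfile=/dev/stderr
    stderr_logfile_maxbytes=0

You would then install supervisor in the image (for example via pip install supervisor or apt-get install supervisor), copy this file into the container, and point the Dockerfile's CMD at it, e.g. CMD ["supervisord", "-n", "-c", "/path/to/supervisord.conf"]. That way supervisord stays in the foreground as PID 1, handles signals, and restarts logstash or metricbeat if they die.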