Elasticsearch unreachable: connection refused

I have an ELK stack running in Docker containers inside a VM.
I can curl data into ES and it shows up fine in Kibana.
I can use Logstash to read files and print them to stdout.
But Logstash cannot send the data to ES.

docker-compose.yml

    version: '2'

    services:
      elasticsearch:
        image: elasticsearch
        ports:
          - "9200:9200"
          - "9300:9300"
        environment:
          ES_JAVA_OPTS: "-Xmx256m -Xms256m"
          xpack.security.enabled: "false"
          xpack.monitoring.enabled: "false"
          xpack.graph.enabled: "false"
          xpack.watcher.enabled: "false"
        networks:
          - elk

      logstash:
        image: docker.elastic.co/logstash/logstash:5.3.2
        volumes:
          - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
          - ./logstash/pipeline:/usr/share/logstash/pipeline
          - ./data:/usr/share/data
        ports:
          - "5000:5000"
          - "9600:9600"
        networks:
          - elk
        depends_on:
          - elasticsearch

      kibana:
        # kibana config

    networks:
      elk:
        driver: bridge

logstash.yml (enabling or disabling xpack seems to make no difference)

    #log.level: debug
    xpack.monitoring.elasticsearch.password: changeme
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.url: http://127.0.0.1:9200
    xpack.monitoring.enabled: false

pipeline.conf

    input {
      # some files
    }
    filter {
      # csv filter
    }
    output {
      if ([type] == "GPS") {
        stdout { }
        elasticsearch {
          hosts => [ "127.0.0.1:9200" ]
          index => "GPS"
          template_overwrite => true
          user => logstash_system
          password => changeme
        }
      }
    }

Output of docker-compose up

    elasticsearch_1  [2017-05-03T05:06:29,821][INFO ][o.e.n.Node] initialized
    elasticsearch_1  [2017-05-03T05:06:29,821][INFO ][o.e.n.Node] [SzDlw_j] starting ...
    elasticsearch_1  [2017-05-03T05:06:30,352][WARN ][i.n.u.i.MacAddressUtil] Failed to find a usable hardware address from the network interfaces; using random bytes: 74:70:60:88:e2:f8:41:36
    elasticsearch_1  [2017-05-03T05:06:30,722][INFO ][o.e.t.TransportService] [SzDlw_j] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
    elasticsearch_1  [2017-05-03T05:06:30,744][WARN ][o.e.b.BootstrapChecks] [SzDlw_j] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    elasticsearch_1  [2017-05-03T05:06:34,027][INFO ][o.e.c.s.ClusterService] [SzDlw_j] new_master {SzDlw_j}{SzDlw_jiQsWcXN00BZfZHQ}{ELB1oDR4SlW6atQIM88hfg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
    elasticsearch_1  [2017-05-03T05:06:34,176][INFO ][o.e.h.n.Netty4HttpServerTransport] [SzDlw_j] publish_address {172.18.0.2:9200}, bound_addresses {[::]:9200}
    elasticsearch_1  [2017-05-03T05:06:34,179][INFO ][o.e.n.Node] [SzDlw_j] started
    elasticsearch_1  [2017-05-03T05:06:34,832][INFO ][o.e.g.GatewayService] [SzDlw_j] recovered [1] indices into cluster_state
    elasticsearch_1  [2017-05-03T05:06:36,519][INFO ][o.e.c.r.a.AllocationService] [SzDlw_j] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
    logstash_1  Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    logstash_1  [2017-05-03T05:06:55,865][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
    logstash_1  [2017-05-03T05:06:55,887][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
    logstash_1  [2017-05-03T05:06:56,131][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x7b0aac02 URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash_1  [2017-05-03T05:06:56,140][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
    logstash_1  [2017-05-03T05:06:56,147][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    logstash_1  [2017-05-03T05:06:56,148][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:287:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:273:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:363:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:272:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:280:in `get'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:83:in `get_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:16:in `get_es_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:20:in `get_es_major_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:54:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:21:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:41:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:257:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:277:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:207:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:389:in `start_pipeline'"]}
    logstash_1  [2017-05-03T05:06:56,149][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x6d11738 URL://127.0.0.1:9200>]}
    logstash_1  [2017-05-03T05:06:56,163][INFO ][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
    logstash_1  [2017-05-03T05:06:56,703][INFO ][logstash.pipeline] Pipeline main started
    logstash_1  2017-05-03T05:06:56.776Z b68ff5316e02 time,lat,lon,elevation,accuracy,bearing,speed
    logstash_1  2017-05-03T05:06:56.778Z b68ff5316e02 2014-09-14T00:26:23Z,98.404222,99.999021,9.599976,44.000000,0.000000,0.000000
    logstash_1  2017-05-03T05:06:56.779Z b68ff5316e02 2014-09-14T00:48:45Z,98.404297,99.999338,9.000000,35.000000,102.300003,0.500000
    logstash_1  [2017-05-03T05:06:57,035][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
    logstash_1  [2017-05-03T05:06:57,058][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
    logstash_1  [2017-05-03T05:06:57,069][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
    logstash_1  [2017-05-03T05:06:59,085][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
    logstash_1  [2017-05-03T05:06:59,086][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
    logstash_1  [2017-05-03T05:07:01,146][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
    logstash_1  [2017-05-03T05:07:01,158][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x7b0aac02 URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

After that it keeps cycling between "Attempted to send a bulk request", "Running health check", "Attempted to resurrect" and "UNEXPECTED POOL ERROR".
With log.level: debug I also occasionally see Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/system.slice/docker-<containerId>.scope/cpuacct.usage"}

One of the points of Docker is isolation: from a container's perspective, 127.0.0.1 refers only to the container itself.

Here you have 3 containers:

  • elasticsearch
  • logstash
  • kibana

You should modify your Logstash configuration to point at elasticsearch instead of 127.0.0.1, since that is the name of the Elasticsearch container on the elk bridge network you have already defined, and Compose makes each service reachable by its service name on that network.
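As a minimal sketch of the change, only the hosts value in your pipeline.conf output needs to differ; everything else stays as you posted it:

    output {
      if ([type] == "GPS") {
        stdout { }
        elasticsearch {
          # service name on the elk network instead of 127.0.0.1
          hosts => [ "elasticsearch:9200" ]
          index => "GPS"
          template_overwrite => true
          user => logstash_system
          password => changeme
        }
      }
    }

The same substitution applies to xpack.monitoring.elasticsearch.url in logstash.yml (http://elasticsearch:9200) if you re-enable monitoring. If you want to verify name resolution first, running curl http://elasticsearch:9200 from inside the logstash container (e.g. via docker-compose exec, assuming curl is present in the image) should return the usual ES banner JSON.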