Root user in an Elasticsearch 2.4.0 Docker container

I run the ELK stack with Docker for log management; the current configuration is ES 1.7, Logstash 1.5.4, and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0, using the tar.gz file from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz with Docker. Since ES 2.x does not allow running as the root user, I used the

 -Des.insecure.allow.root=true 

option when running the elasticsearch service, but my container does not start. The logs do not mention any problem.

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   874  100   874    0     0   874k      0 --:--:-- --:--:-- --:--:--  853k
    //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found
    Scheduler@0.0.0 start /opt/log-management/Scheduler
    node scheduler-app.js
    ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
    node app.js
    Jobs are registered
    [2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
    [2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
    Native thread-sleep not available.
    This will result in much slower performance, but it will still work.
    You should re-install spawn-sync or upgrade to the lastest version of node if possible.
    Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
    [2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
    [2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
    Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
    Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
    Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
    [2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
    [2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
    [2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true]
    [2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
    [2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
    [2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
    [2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
    [2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA

Any leads would be much appreciated.

Edit 1: Since //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found is an error and the Docker image does not ship the hostname utility, I tried using the uname -n command to get the HOSTNAME in ES. It no longer throws the hostname error, but the problem is still the same: it does not start. Is that the right replacement to use?
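If patching the script is not ideal, another option would be to shim the missing utility in the image itself. A minimal sketch, assuming a POSIX shell in the base image and /usr/local/bin on the PATH (both assumptions, not from the original setup):

    # Hypothetical shim: a `hostname` that delegates to `uname -n`,
    # so bin/elasticsearch (line 134) finds the command it expects.
    # Note: it ignores any flags such as -s.
    printf '#!/bin/sh\nuname -n\n' > /usr/local/bin/hostname
    chmod +x /usr/local/bin/hostname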

One more thing puzzles me: with the currently running ES 1.7, the hostname utility is not there either, yet it runs without any problem. Very confusing. Logs after switching to uname -n:

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k
    > ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
    > node app.js
    > Scheduler@0.0.0 start /opt/log-management/Scheduler
    > node scheduler-app.js
    Jobs are registered
    [2016-09-30 10:10:37,785][INFO ][bootstrap ] max_open_files [1048576]
    [2016-09-30 10:10:37,822][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
    Native thread-sleep not available.
    This will result in much slower performance, but it will still work.
    You should re-install spawn-sync or upgrade to the lastest version of node if possible.
    Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
    [2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
    [2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] initializing ...
    Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
    Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
    Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
    [2016-09-30 10:10:38,435][INFO ][plugins ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
    [2016-09-30 10:10:38,455][INFO ][env ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
    [2016-09-30 10:10:38,456][INFO ][env ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true]
    [2016-09-30 10:10:38,483][WARN ][threadpool ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
    [2016-09-30 10:10:40,151][INFO ][node ] [Helleyes] initialized
    [2016-09-30 10:10:40,152][INFO ][node ] [Helleyes] starting ...
    [2016-09-30 10:10:40,278][INFO ][transport ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
    [2016-09-30 10:10:40,283][INFO ][discovery ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
    [2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:41,798][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:41,800][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:43,302][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:43,303][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
    java.lang.NullPointerException
        at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [2016-09-30 10:10:44,807][INFO ][cluster.service ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
    [2016-09-30 10:10:44,852][INFO ][http ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
    [2016-09-30 10:10:44,852][INFO ][node ] [Helleyes] started
    [2016-09-30 10:10:44,984][INFO ][gateway ] [Helleyes] recovered [32] indices into cluster_state

Error after the failed deployment:

 failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"} 

Edit 2: Even with the hostname utility installed and working, the container does not start. The logs are the same as in Edit 1.

Edit 3: The container does start, but the address http://nodeip:9200 is unreachable. Of the 3 nodes, only 1 has 2.4 and the other 2 have 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl to localhost:9200 returns the response of the running elasticsearch, but it cannot be reached from outside.

Edit 4: I tried running a basic installation of ES 2.4 on the cluster; the same setup with ES 1.7 works fine. I ran the ES migration plugin to check whether the cluster can run ES 2.4, and it gave me green. The basic installation details are below.

Dockerfile

    #Pulling SLES12 thin base image
    FROM private-registry-1

    #Author
    MAINTAINER XYZ

    # Pre-requisite - Adding repositories
    RUN zypper ar private-registry-2
    RUN zypper --no-gpg-checks -n refresh

    #Install required packages and dependencies
    RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1

    #Downloading elasticsearch executable
    ENV ES_VERSION=2.4.0
    ENV ES_DIR="//opt//log-management//elasticsearch"
    ENV ES_CONFIG_PATH="${ES_DIR}//config"
    ENV ES_REST_PORT=9200
    ENV ES_INTERNAL_COM_PORT=9300
    WORKDIR /opt/log-management
    RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
    RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
     && rm ${ES_DIR}-${ES_VERSION}.tar.gz \
     && mv ${ES_DIR}-${ES_VERSION} ${ES_DIR}

    #Exposing elasticsearch server container port to the HOST
    EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}

    #Removing binary files which are not needed
    RUN zypper -n rm wget

    # Removing zypper repos
    RUN zypper rr caspiancs_common

    #Running elasticsearch executable
    WORKDIR ${ES_DIR}
    ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

Built with

 docker build -t es-test . 

1) When run with docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as mentioned in one of the comments, and with curl localhost:9200 executed inside the container or on the node running the container, I get the correct response. I still cannot reach the other cluster nodes on port 9200.

2) When run with docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test, curl localhost:9200 works fine inside the container, but not on the node, where it gives me the error

 curl: (56) Recv failure: Connection reset by peer 

and I still cannot reach the other cluster nodes on port 9200.
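One way to narrow this down might be to check which address ES is actually bound to, since a loopback-only binding would explain exactly this symptom. A diagnostic sketch (the IP is the node address from the logs above; netstat should be available via the net-tools package installed in the Dockerfile):

    # Inside the container: which addresses are 9200/9300 bound to?
    netstat -nlt | grep -E ':(9200|9300)'
    # 127.0.0.1:9200 or [::1]:9200 -> loopback only, unreachable from outside
    # 0.0.0.0:9200 or :::9200      -> all interfaces

    # From another node: is the REST port reachable at all?
    curl -sS http://10.240.118.68:9200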

Edit 5: Using this answer to this question, I got all three of the three containers running ES 2.4. But ES could not form a cluster out of these three containers. The network configuration is network.host: 0.0.0.0 plus:

    #configure elasticsearch.yml for clustering
    echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
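For reference, once ELASTICSEARCH_IPS is substituted, the appended line in elasticsearch.yml would look something like this (the three IPs are illustrative, taken from the node addresses in the logs above):

    discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]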

The output of docker logs elasticsearch is as follows:

    [2016-10-06 12:31:28,887][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
    [2016-10-06 12:31:29,080][INFO ][node ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
    [2016-10-06 12:31:29,081][INFO ][node ] [Screech] initializing ...
    [2016-10-06 12:31:29,652][INFO ][plugins ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
    [2016-10-06 12:31:29,684][INFO ][env ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
    [2016-10-06 12:31:29,684][INFO ][env ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true]
    [2016-10-06 12:31:29,720][WARN ][threadpool ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
    [2016-10-06 12:31:31,387][INFO ][node ] [Screech] initialized
    [2016-10-06 12:31:31,387][INFO ][node ] [Screech] starting ...
    [2016-10-06 12:31:31,456][INFO ][transport ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
    [2016-10-06 12:31:31,465][INFO ][discovery ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
    [2016-10-06 12:31:34,500][WARN ][discovery.zen ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
    ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
        at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
        at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
        at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
        at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
        at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
        at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
        at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
        at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Whenever I set network.host to the IP address of the host the container runs on, I end up back in the old situation, i.e. only one container running ES 2.4 and the other two running 1.7.

I just saw that the docker proxy is listening on 9300, or at least "I think" it is listening:

    elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
    tcp        0      0 :::9300                 :::*                    LISTEN      6656/docker-proxy

Any clues?

I was able to form the cluster using the following settings:

network.publish_host=CONTAINER_HOST_ADDRESS, i.e. the address of the node the container is running on.
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS
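Put together as a sketch in the style of the configuration step above (CONTAINER_HOST_ADDRESS and ELASTICSEARCH_IPS are the same placeholders used earlier in this post, here assumed to be available as environment variables):

    #configure elasticsearch.yml for cross-host clustering
    {
      echo "network.bind_host: 0.0.0.0"
      echo "network.publish_host: ${CONTAINER_HOST_ADDRESS}"
      echo "transport.publish_port: 9300"
      echo "transport.publish_host: ${CONTAINER_HOST_ADDRESS}"
      echo "discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS]"
    } >> ${ES_CONFIG_PATH}/elasticsearch.yml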

transport.publish_port is very important if you run ES behind a proxy or load balancer such as nginx or haproxy.

According to the elasticsearch 2.x documentation, network.host binds to localhost by default.

You need to explicitly set network.host: 0.0.0.0, as specified in this answer.

Example:

 ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true -Des.network.host=0.0.0.0 
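If the flag takes effect, the REST port should then respond from outside the container as well, which is what the failed deployment check above was probing:

    # From another node (the same URL the deployment check used)
    curl http://10.240.118.68:9200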

Try mapping the ports with the -p flag when starting the container.

Neither EXPOSE nor --expose depends on the host in any way; these rules do not make ports accessible from the host by default. Given the limitations of the EXPOSE instruction, as a Dockerfile author you should often include an EXPOSE rule only as a hint about which ports the service listens on. It is then up to the operator of the container to specify further networking rules.

Try mapping your ports when doing docker run, e.g. docker run -p 9200:9200 -p 9300:9300 <image>:<tag>