Docker container communication without exposing ports

I have the following docker-compose file:

    version: '2'
    services:
      # Define a Telegraf service
      telegraf:
        build: Services/Telegraf
        image: jit-systems/telegraf
        environment:
          HOST_PROC: /rootfs/proc
          HOST_SYS: /rootfs/sys
          HOST_ETC: /rootfs/etc
        volumes:
          #- ./etc/telegraf.conf:/etc/telegraf/telegraf.conf:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - /sys:/rootfs/sys:ro
          - /proc:/rootfs/proc:ro
          - /etc:/rootfs/etc:ro
          - /var/log/telegraf:/var/log/telegraf
        links:
          - influxdb
        logging:
          driver: json-file
          options:
            max-size: "100m"
            max-file: "3"
        networks:
          - influx
          - default
        depends_on:
          - influxdb
        restart: always

      # Define an InfluxDB service
      influxdb:
        image: influxdb:1.2.0
        volumes:
          #- ./data/influxdb:/var/lib/influxdb
          - influxdb:/var/lib/influxdb
        networks:
          - influx
          - default
        # this port should not be exposed
        ports:
          - "8086:8086"
        logging:
          driver: json-file
          options:
            max-size: "100m"
            max-file: "3"
        restart: always

      # Define a Kapacitor service
      kapacitor:
        image: kapacitor:1.2.0
        environment:
          KAPACITOR_HOSTNAME: kapacitor
          KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
        volumes:
          - influxdb:/home/docker_containers/kapacitor/volume
          - influxdb:/var/lib/kapacitor
          - /var/log/kapacitor:/var/log/kapacitor
        links:
          - influxdb
        logging:
          driver: json-file
          options:
            max-size: "100m"
            max-file: "3"
        networks:
          - influx
          - default
        depends_on:
          - influxdb
        restart: always

      grafana:
        image: grafana/grafana
        ports:
          - 3000:3000
        volumes:
          - grafana:/var/lib/grafana
        env_file:
          - config.monitoring
        links:
          - influxdb
        logging:
          driver: json-file
          options:
            max-size: "100m"
            max-file: "3"
        restart: always

    volumes:
      influxdb:
      portainer:
      grafana:

    networks:
      influx:

All containers build successfully. Telegraf inserts data into InfluxDB, and no errors are thrown. However, this only works while port 8086 is exposed. If port 8086 is closed, no data is inserted, although the database is still visible in Grafana's data source panel, and when I save the connection a message reports that the connection succeeded. Is there a way to get data from the InfluxDB container without publishing port 8086?

I am not sure whether this is available in docker-compose file version 2, but:

You can use networks so that all containers on the same network can reach each other's ports, without publishing those ports to the public.

One service reaches another via its service name and port. Here is an example:

    version: "3.1"

    ## To ensure optimal performance and data persistence, the ELK stack will only
    ## run on a node with a label added in the following way:
    ## docker node update --label-add app_role=elasticsearch nodeID

    networks:
      logging:

    volumes:
      logging_data:

    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.3.1
        logging:
          driver: "json-file"
        networks:
          - logging
        volumes:
          - logging_data:/usr/share/elasticsearch/data
        environment:
          xpack.security.enabled: "false"
        deploy:
          placement:
            constraints: [node.labels.app_role == elasticsearch]

      logstash:
        image: docker.elastic.co/logstash/logstash:5.3.1
        logging:
          driver: "json-file"
        networks:
          - logging
        ports:
          - "127.0.0.1:12201:12201/udp"
        entrypoint: logstash -e 'input { gelf { } } output { stdout{ } elasticsearch { hosts => ["http://elasticsearch:9200"] } }'
        # Add add_field => { "ElkDebug" => "timestamp matched and was overwritten" }
        # to date{} when in doubt about the time filter

The logstash output reaches Elasticsearch by its service name, without Elasticsearch publishing any port.
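Applied to the compose file from the question, this means the `ports` mapping on the `influxdb` service can simply be dropped: Telegraf, Kapacitor, and Grafana already share the `influx` network with it, so they can reach it at `http://influxdb:8086` over the container network. A minimal sketch of the changed service (names taken from the question's file):

```yaml
  # InfluxDB reachable only by containers on the shared networks;
  # no "ports:" section, so nothing is published on the host.
  influxdb:
    image: influxdb:1.2.0
    volumes:
      - influxdb:/var/lib/influxdb
    networks:
      - influx
      - default
    restart: always
```

One caveat: in Grafana's data source panel, the URL must then be `http://influxdb:8086` with the access mode set to proxy (server-side), since a `localhost:8086` URL resolved in the browser only works while the port is published on the host.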