Kafka setup with docker-compose

Hello, I am currently setting up Kafka with Docker. I have successfully set up Zookeeper and Kafka using the published Confluent images; see the following docker-compose file:

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:3.2.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    restart: always
  kafka:
    image: confluentinc/cp-kafka:3.2.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.99.100:9092
      LISTENERS: PLAINTEXT://0.0.0.0:9092
    restart: always
  kafka-rest:
    image: confluentinc/cp-kafka-rest:3.2.0
    container_name: kafka-rest
    depends_on:
      - kafka
    ports:
      - '8082:8082'
    environment:
      KAFKA_REST_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_REST_LISTENERS: http://kafka-rest:8082
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      KAFKA_REST_HOST_NAME: kafka-rest
    restart: always
  schema-registry:
    image: confluentinc/cp-schema-registry:3.2.0
    container_name: schema-registry
    depends_on:
      - kafka
    ports:
      - '8081'
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
    restart: always
  connect:
    image: confluentinc/cp-kafka-connect:3.2.0
    container_name: kafka-connect
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8083:8083"
    restart: always
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: "zookeeper:2181"

Now, I have managed to expose the Kafka container to my non-dockerized applications by setting the advertised.listeners property to PLAINTEXT://{DOCKER_MACHINE_IP}:9092. But as you can see, I have also added other Confluent applications that extend my Kafka setup (Kafka REST, Schema Registry). Because of the advertised.listeners property, these can no longer connect to my Kafka instance.

I could change it to the correct container hostname, i.e. PLAINTEXT://kafka:9092, but then my applications outside Docker can no longer reach the Kafka instance. Is there an easy way to solve this?
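One common way out of this internal-vs-external dilemma is to declare two named listeners on the broker, one advertised to containers on the Docker network and one advertised to the host. This is a sketch only: it assumes a broker/image version that supports listener name maps (the listener names INTERNAL/EXTERNAL and port 29092 are my own choices, not from the original post):

```yaml
# Sketch: dual listeners so both Docker-internal clients and host clients can connect.
# Assumes an image new enough to support KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
kafka:
  image: confluentinc/cp-kafka
  ports:
    - '9092:9092'
  environment:
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
    # Containers (kafka-rest, schema-registry, connect) bootstrap via kafka:29092;
    # applications on the host use localhost:9092 (or the docker-machine IP).
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

With this, the other Confluent services would point their bootstrap/connect settings at kafka:29092 while external clients keep using the published 9092 port.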

Omar, maybe you have already solved your problem, but for future reference, Hans Jespersen's comment did the trick for me, even on Windows.

Open C:\Windows\System32\drivers\etc\hosts as administrator and add the following line to expose the Kafka broker as localhost: 127.0.0.1 broker

And my docker-compose.yml file looks like this:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    hostname: zookeeper
    extra_hosts:
      - "moby:127.0.0.1"
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka
    hostname: broker
    extra_hosts:
      - "moby:127.0.0.1"
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:9092'
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
  schema_registry:
    image: confluentinc/cp-schema-registry
    hostname: schema_registry
    # extra_hosts:
    #   - "moby:127.0.0.1"
    depends_on:
      - zookeeper
      - broker
    ports:
      - '8081:8081'
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema_registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  kafka-rest:
    image: confluentinc/cp-kafka-rest
    container_name: kafka-rest
    extra_hosts:
      - "moby:127.0.0.1"
    depends_on:
      - zookeeper
      - broker
    ports:
      - '8082:8082'
    environment:
      KAFKA_REST_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_REST_LISTENERS: http://kafka-rest:8082
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      KAFKA_REST_HOST_NAME: kafka-rest

Alternatively, advertising my laptop's current IP address (in docker-compose.yml) also works, but this has the drawback that I have to edit the docker-compose.yml file every time my network changes.
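If you do go with the host-IP approach, one way to avoid editing the file on every network change is Compose's environment variable substitution. This is a sketch; the HOST_IP variable name is my own, and you would set it in the shell before starting Compose (e.g. from `hostname -I` on Linux or `docker-machine ip` on Windows/macOS):

```yaml
# Sketch: advertise whatever IP the HOST_IP environment variable holds,
# e.g. started with: HOST_IP=192.168.99.100 docker-compose up -d
broker:
  image: confluentinc/cp-kafka
  ports:
    - '9092:9092'
  environment:
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://${HOST_IP}:9092'
```

The compose file then stays unchanged; only the variable in your shell (or an `.env` file next to docker-compose.yml) needs updating when the network changes.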