Cannot access the Kafka service from outside the Docker container

I have created a Kafka Docker image based on CentOS. ZooKeeper and the Kafka server both run in this same image.

I can see the services running inside the Docker container. I tested Kafka with the kafka-console-producer.sh and kafka-console-consumer.sh scripts that ship with Kafka. The required ports are also exposed:

 PORTS 0.0.0.0:2182->2182/tcp, 22/tcp, 0.0.0.0:9093->9093/tcp

The following configuration is set in Kafka's server.properties:

 listeners=PLAINTEXT://0.0.0.0:9093
 zookeeper.connect=localhost:2182
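For comparison, a sketch of what a listener section for remote clients typically also contains. The advertised.listeners entry below does NOT appear in my posted configuration, and the host name is a placeholder, not a value from this setup:

```properties
# Bind on all interfaces inside the container (as posted above).
listeners=PLAINTEXT://0.0.0.0:9093
# Hypothetical entry: the externally reachable address that clients are told
# to connect to; <docker-host> is a placeholder for the Docker host's name or IP.
advertised.listeners=PLAINTEXT://<docker-host>:9093
zookeeper.connect=localhost:2182
```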

I have created a topic inside the Docker container.

From an external machine (on the same network), I can reach the Kafka service using the telnet command against the host that runs the Docker image.

 telnet 9093
 Trying …
 Connected to .
 Escape character is '^]'.
 telnet 2182
 Trying …
 Connected to .
 Escape character is '^]'.
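The telnet checks above can also be reproduced programmatically. Below is a minimal sketch using only the Python standard library; "docker-host" is a placeholder, since the actual host name is elided in the output above:

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Check the Kafka and ZooKeeper ports on the Docker host.
# "docker-host" is a placeholder for the real host name or IP.
for port in (9093, 2182):
    print(port, is_port_reachable("docker-host", port))
```

Note that, like telnet, this only proves the port is open; as the producer log below shows, a successful TCP connect to the bootstrap port does not guarantee that the broker address returned in the metadata is reachable.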

However, writing data to the Kafka topic fails with TimeoutExceptions:

 2017-12-17 21:30:51 DEBUG NetworkClient:195 - [Producer clientId=KafkaExampleProducer] Using older server API v0 to send API_VERSIONS {} with correlation id 1 to node -1
 2017-12-17 21:30:51 DEBUG NetworkClient:189 - [Producer clientId=KafkaExampleProducer] Recorded API versions for node -1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 2 [usable: 2], ListOffsets(2): 0 [usable: 0], Metadata(3): 0 to 1 [usable: 1], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 2 [usable: 2], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 1 [usable: 1], FindCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 [usable: 0], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): UNSUPPORTED, DeleteTopics(20): UNSUPPORTED, DeleteRecords(21): UNSUPPORTED, InitProducerId(22): UNSUPPORTED, OffsetForLeaderEpoch(23): UNSUPPORTED, AddPartitionsToTxn(24): UNSUPPORTED, AddOffsetsToTxn(25): UNSUPPORTED, EndTxn(26): UNSUPPORTED, WriteTxnMarkers(27): UNSUPPORTED, TxnOffsetCommit(28): UNSUPPORTED, DescribeAcls(29): UNSUPPORTED, CreateAcls(30): UNSUPPORTED, DeleteAcls(31): UNSUPPORTED, DescribeConfigs(32): UNSUPPORTED, AlterConfigs(33): UNSUPPORTED, AlterReplicaLogDirs(34): UNSUPPORTED, DescribeLogDirs(35): UNSUPPORTED, SaslAuthenticate(36): UNSUPPORTED, CreatePartitions(37): UNSUPPORTED)
 2017-12-17 21:30:51 DEBUG NetworkClient:189 - [Producer clientId=KafkaExampleProducer] Sending metadata request (type=MetadataRequest, topics=sifs.email.in) to node <IP>:9093 (id: -1 rack: null)
 2017-12-17 21:30:51 DEBUG NetworkClient:195 - [Producer clientId=KafkaExampleProducer] Using older server API v1 to send METADATA {topics=[sifs.email.in]} with correlation id 2 to node -1
 2017-12-17 21:30:52 DEBUG Metadata:270 - Updated cluster metadata version 2 to Cluster(id = null, nodes = [0.0.0.0:9093 (id: 0 rack: null)], partitions = [Partition(topic = sifs.email.in, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])])
 2017-12-17 21:30:52 DEBUG NetworkClient:183 - [Producer clientId=KafkaExampleProducer] Initiating connection to node 0.0.0.0:9093 (id: 0 rack: null)
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
 2017-12-17 21:31:47 INFO KafkaProducer:341 - [Producer clientId=KafkaExampleProducer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
 2017-12-17 21:31:47 DEBUG Sender:177 - [Producer clientId=KafkaExampleProducer] Beginning shutdown of Kafka producer I/O thread, sending remaining records.
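One detail visible in the log: the metadata response lists the broker as 0.0.0.0:9093, and the producer then initiates its connection to 0.0.0.0:9093, an address a remote client cannot route to. A throwaway helper (hypothetical, purely illustrative, not part of any Kafka API) that flags such non-routable advertised broker addresses:

```python
import ipaddress

def is_routable_advertised_address(address: str) -> bool:
    """Return False for "host:port" broker addresses that a remote client
    cannot connect to, e.g. 0.0.0.0 (unspecified), 127.0.0.1, or localhost."""
    host, _, _port = address.rpartition(":")
    if host.lower() == "localhost":
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True  # a host name rather than an IP; assume it is routable
    return not (ip.is_unspecified or ip.is_loopback)

# The broker address taken from the metadata line in the log above:
print(is_routable_advertised_address("0.0.0.0:9093"))  # False
```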

Please let me know how I can write data to the Kafka topic from an external machine.