Filebeat multiline parsing of Java exceptions from a Docker container not working

I am running Filebeat to ship logs from a Java service that runs in a container. Many other services run in containers on the same host, and a single Filebeat daemon collects the logs of all containers running there. Filebeat forwards the logs to Logstash, which dumps them into Elasticsearch.

I am trying to use the Filebeat multiline feature to combine the log lines of a Java exception into a single log entry, using the following Filebeat configuration:

    filebeat:
      prospectors:
        # container logs
        - paths:
            - "/log/containers/*/*.log"
          document_type: containerlog
          multiline:
            pattern: "^\t|^[[:space:]]+(at|...)|^Caused by:"
            match: after

    output:
      logstash:
        hosts: ["{{getv "/logstash/host"}}:{{getv "/logstash/port"}}"]
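
A quick way to sanity-check the pattern itself is to run it through Go's regexp package; Filebeat is written in Go, so this should be a close proxy for how the pattern is interpreted (an assumption on my part, with sample lines taken from the stack trace below):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // The exact pattern from the config above. In a Go raw string the
        // backslash survives, so `\t` is the regexp escape for a tab.
        // Note: the unescaped "..." alternative matches ANY three characters;
        // three literal dots would be written \.{3}.
        pattern := regexp.MustCompile(`^\t|^[[:space:]]+(at|...)|^Caused by:`)

        lines := []string{
            "[2016-05-25 12:39:04,744][DEBUG][action.bulk ] failed to execute bulk item (index)",
            "MapperParsingException[Field name [events.created] cannot contain '.']",
            "\tat org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)",
            "Caused by: ...",
        }
        for _, line := range lines {
            // With match: after, every line the pattern matches is appended
            // to the preceding event; non-matching lines start a new event.
            fmt.Printf("match=%-5v line=%q\n", pattern.MatchString(line), line)
        }
    }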

Here is an example of a Java stack trace that should be aggregated into a single event:

This Java stack trace is a copy of a docker log entry (taken after running docker logs java_service):

    [2016-05-25 12:39:04,744][DEBUG][action.bulk ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}
    MapperParsingException[Field name [events.created] cannot contain '.']
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
        at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

With the Filebeat configuration shown above, however, each line of the stack trace still appears in Elasticsearch as a separate event.

Any idea what I am doing wrong? Please also note that the multiline aggregation cannot be done on the Logstash side: Filebeat ships logs from many files at once, so by the time the lines reach Logstash they are interleaved across sources.
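
To take the Logstash leg of the pipeline out of the equation, the multiline behaviour can also be tested in isolation. This is only a sketch: it assumes the console output available in Filebeat 1.x, and /tmp/multiline-test.log is a placeholder for a file containing the stack trace pasted above:

    filebeat:
      prospectors:
        # a single test file containing the stack trace pasted above
        - paths:
            - "/tmp/multiline-test.log"   # placeholder path
          multiline:
            pattern: "^\t|^[[:space:]]+(at|...)|^Caused by:"
            match: after

    # print events to stdout instead of shipping them to Logstash
    output:
      console:
        pretty: true

Running filebeat -e -c multiline-test.yml against that file shows directly whether consecutive stack-trace lines are merged into one event, with Logstash and Elasticsearch out of the picture.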

Versions

FILEBEAT_VERSION 1.1.0