4/17/2023

Filebeats info send fail

Yesterday I updated the pipeline settings and everything looked good: all the pending files were consumed by filebeat/elasticsearch. But after only one hour, the same issue happened again:

```
failed to execute pipeline for document
```

So I think this error may be caused by filebeat itself, not by my log file or my configuration.

While filebeat is blocked, its log shows 'bulk send failure' endlessly but does nothing with new files: it never closes the existing harvesters and never opens harvesters for the new files. This is a screenshot of the filebeat monitor; in the meanwhile, the Kibana filebeat monitoring metric "retry in pipeline" shows a very stable line.

[screenshot: filebeat monitor, with the Kibana "retry in pipeline" metric flat]

You can see nothing is sent to Elasticsearch while filebeat is busy retrying events; only after I stopped filebeat, changed the configuration, and restarted it did it consume some new files again.

This raises a few questions. What if there are some messages which cannot be consumed by Elasticsearch: does filebeat just retry them endlessly and never release the harvester? Is there a situation where the output queue holds only bad messages and the queue is blocked? In which situations will filebeat skip those messages and continue reading the rest of the file until EOF is reached, and in which will it read everything up to the end of the file while the bad messages stay in the output pipeline and keep failing to send?

To find out where the failure happens, I changed the ingest pipeline configuration to catch and handle the error:

```
PUT _ingest/pipeline/slt-test
{
  "description": "to parse the filebeat message",
  ...
}
```
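The pipeline was roughly of this shape. This is only a minimal sketch: the three processors shown here (grok, date, remove) and the grok pattern are illustrative assumptions, not the real ones; only the description and the on_failure fail message template are taken from the pipeline fragment above and the exception shown later.

```
PUT _ingest/pipeline/slt-test
{
  "description": "to parse the filebeat message",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:FileTime} %{GREEDYDATA:content}"]
      }
    },
    {
      "date": {
        "field": "FileTime",
        "formats": ["ISO8601"]
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ],
  "on_failure": [
    {
      "fail": {
        "message": "an error message={{message}}, source ={{source}}, offset ={{offset}}, FileTime ={{FileTime}}"
      }
    }
  ]
}
```

The point of the on_failure fail processor is that a bad document no longer disappears silently: Elasticsearch rejects it with a message that echoes the document's own fields, which is what made the empty documents visible later.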
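A generic way to see which of the three processors a document dies in is Elasticsearch's simulate API with verbose output: it runs a sample document through the pipeline and reports each processor's result in order. The sample document below is made up:

```
POST _ingest/pipeline/slt-test/_simulate?verbose=true
{
  "docs": [
    {
      "_source": {
        "message": "2023-04-17T10:00:00 some sample log line",
        "source": "/home/tdni/hygon_apps/fileReader/log/sample.log",
        "offset": 0
      }
    }
  ]
}
```

Simulating with an empty "_source": {} also reproduces the failure for an empty document, which matches the all-empty fields in the fail message below.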
After the change, what I got from the elasticsearch log is this error:

```
failed to execute pipeline for document
org.elasticsearch.ingest.common.FailProcessorException: an error message=, source =, offset =, FileTime =
	at org.elasticsearch.ingest.CompoundProcessor.newCompoundProcessorException(CompoundProcessor.java:156)
	at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:107)
	at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:58)
	at org.elasticsearch.ingest.PipelineExecutionService.innerExecute(PipelineExecutionService.java:155)
	at org.elasticsearch.ingest.PipelineExecutionService.access$100(PipelineExecutionService.java:43)
	at org.elasticsearch.ingest.PipelineExecutionService$1.doRun(PipelineExecutionService.java:78)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.elasticsearch.ingest.common.FailProcessorException: an error message=, source =, offset =, FileTime =
	at org.elasticsearch.ingest.common.FailProcessor.execute(FailProcessor.java:52)
	at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:100)
```

But I cannot understand this exception. Earlier, when I simply copied the ingest pipeline and renamed it to a new one, the log file could be consumed by Elasticsearch, so I'm really confused: there are 3 processors in this pipeline, the document passes the first one, and I don't know where it failed.

After checking the details I found this error is about a filebeat document, not about my log file: every field in the fail message (message, source, offset, FileTime) rendered empty, so the failing document is an empty filebeat monitor message. Why is there an empty filebeat monitor message at all, and why was it sent to my slt-test-* index?

In my case, I have a script that reads the filebeat log and removes the consumed log files by grepping for the "End of file reached" event (a sketch of it follows the config below). I also set two options on this input to close the file harvester and to remove files from the registry (see the sketch right after the config). My filebeat.yml is something like:

```yaml
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched.
  paths:
    - /home/tdni/hygon_apps/fileReader/log/*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that
  # are matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
```
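Which two options these are is an assumption on my part: given the behavior described (close the harvester once the file is finished, forget deleted files in the registry), plausible candidates are close_eof together with clean_removed (close_removed is another candidate for the first). A minimal sketch under that assumption, placed at the same input level:

```yaml
  # Assumed option names, chosen to match the described behavior:
  # close the harvester as soon as the end of the file is reached,
  # so finished files do not hold harvesters open.
  close_eof: true
  # Forget a file's state in the registry after it is removed from disk,
  # so deleted files do not pile up in the registry.
  clean_removed: true
```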
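And here is a minimal sketch of the cleanup script described above, assuming filebeat's own log lives at /var/log/filebeat/filebeat and prints the harvested file's path on the same line as the "End of file reached" event (both are assumptions):

```sh
#!/bin/sh
# Hypothetical sketch: delete source files once filebeat reports it has
# read them to the end, so close_eof/clean_removed can release them.
FILEBEAT_LOG=/var/log/filebeat/filebeat   # assumed filebeat log location

grep "End of file reached" "$FILEBEAT_LOG" |
  grep -o '/home/tdni/hygon_apps/fileReader/log/[^ ]*\.log' |
  sort -u |
  while read -r f; do
    rm -f "$f"
  done
```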
I'm still not sure why filebeat gets blocked, but the above 3 ways do make sense to help with this issue: catching and handling the error in the ingest pipeline, closing the harvester and cleaning the registry with the input options plus the cleanup script, and stopping and restarting filebeat after changing the configuration.