
Conversation

@masseyke (Member) commented Nov 9, 2022

If an exception is thrown from inside the handler given to `IngestDocument::executePipeline` in `IngestService::innerExecute`, or inside the handler given to `Pipeline::execute` in `IngestDocument::executePipeline`, then the error listener will be called twice, resulting in log messages like these:

[2022-10-11T12:00:54,244][WARN ][o.e.i.CompoundProcessor  ] [XPCS-S1] Preventing postIngest from being called more than once
java.lang.RuntimeException: null
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:231) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.lambda$innerExecute$2(CompoundProcessor.java:221) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.ConditionalProcessor.execute(ConditionalProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:209) ~[elasticsearch-8.4.3.jar:?] <-------- first different frame
        at org.elasticsearch.ingest.CompoundProcessor.lambda$innerExecute$2(CompoundProcessor.java:221) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.ConditionalProcessor.execute(ConditionalProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:209) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:129) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestDocument.executePipeline(IngestDocument.java:823) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService.innerExecute(IngestService.java:898) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService.executePipelines(IngestService.java:748) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService$1.doRun(IngestService.java:710) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.4.3.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
        at java.lang.Thread.run(Thread.java:833) ~[?:?]
[2022-10-11T12:00:54,244][WARN ][o.e.i.Pipeline           ] [XPCS-S1] A listener was unexpectedly called more than once
java.lang.RuntimeException: null
        at org.elasticsearch.ingest.Pipeline.lambda$execute$0(Pipeline.java:131) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.executeOnFailure(CompoundProcessor.java:274) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.executeOnFailure(CompoundProcessor.java:308) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.executeOnFailureOuter(CompoundProcessor.java:257) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:236) ~[elasticsearch-8.4.3.jar:?] <-------- first different frame
        at org.elasticsearch.ingest.CompoundProcessor.lambda$innerExecute$2(CompoundProcessor.java:221) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.ConditionalProcessor.execute(ConditionalProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:209) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.lambda$innerExecute$2(CompoundProcessor.java:221) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.ConditionalProcessor.execute(ConditionalProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.innerExecute(CompoundProcessor.java:209) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:152) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:129) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestDocument.executePipeline(IngestDocument.java:823) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService.innerExecute(IngestService.java:898) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService.executePipelines(IngestService.java:748) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.ingest.IngestService$1.doRun(IngestService.java:710) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.4.3.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
        at java.lang.Thread.run(Thread.java:833) ~[?:?]

Those log messages aren't harmful in themselves (they come from #90319, which prevents the counters from going negative and causing serialization exceptions like #77973). But they do show that we're hitting an unexpected exception while processing another exception, and we're not handling it correctly. This change catches that unexpected exception and deals with it.
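
For readers unfamiliar with the shape of the bug, here is a minimal, self-contained sketch (all names hypothetical; this is not the actual Elasticsearch code) of how a completion handler can be notified twice when the handler itself throws, together with one way to catch the handler's exception so it is notified exactly once, in the spirit of this change:

```java
import java.util.function.BiConsumer;

// Minimal sketch (hypothetical; not the actual Elasticsearch code) of the
// double-notification bug: if the completion handler itself throws, the
// surrounding try/catch routes that exception back into the failure path,
// invoking the same handler a second time.
class ListenerSketch {

    // Buggy variant, similar in shape to the handler wiring around
    // Pipeline::execute / IngestDocument::executePipeline described above.
    static void execute(String doc, BiConsumer<String, Exception> handler) {
        try {
            process(doc);
            handler.accept(doc, null); // first notification (success)
        } catch (Exception e) {
            // BUG: this branch is also reached when handler.accept above
            // threw, so the same handler is notified a second time (failure).
            handler.accept(null, e);
        }
    }

    // Fixed variant: the handler's own exception is caught separately,
    // so the handler is invoked exactly once.
    static void executeFixed(String doc, BiConsumer<String, Exception> handler) {
        String result = null;
        Exception failure = null;
        try {
            result = process(doc);
        } catch (Exception e) {
            failure = e;
        }
        try {
            handler.accept(result, failure); // exactly one notification
        } catch (Exception handlerException) {
            // The handler itself blew up; don't re-notify it, just log.
            System.err.println("handler threw: " + handlerException);
        }
    }

    private static String process(String doc) {
        return doc; // stand-in for the pipeline's real work
    }

    public static void main(String[] args) {
        // A handler that throws on the success path, which in the buggy
        // variant triggers a second, spurious failure notification.
        BiConsumer<String, Exception> handler = (result, e) -> {
            if (e != null) {
                System.err.println("failure notification: " + e);
            } else {
                throw new RuntimeException("handler failed");
            }
        };
        executeFixed("doc-1", handler); // notified once; exception logged
        execute("doc-1", handler);      // notified twice: success, then failure
    }
}
```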

@masseyke added >bug, :Data Management/Ingest Node, v8.5.1, v8.6.0 labels Nov 9, 2022
@masseyke requested a review from joegallo November 9, 2022 19:53
@elasticsearchmachine (Collaborator)

Hi @masseyke, I've created a changelog YAML for you.

@tmgordeeva added v8.5.2 and removed v8.5.1 labels Nov 15, 2022
@kingherc added v8.7.0 and removed v8.6.0 labels Nov 16, 2022
@bpintea added v8.5.3 and removed v8.5.2 labels Nov 22, 2022
@pxsalehi added v8.5.4 and removed v8.5.3 labels Dec 1, 2022
@CyberTaoFlow

Yes!!!
Been getting this from Suricata ingest pipelines used by Elastic Agent integrations when the output is Logstash. If the agent output is Elasticsearch, it does not happen.

@masseyke (Member, Author) commented Dec 6, 2022

@CyberTaoFlow we found and fixed the root cause of this in the Suricata integration (elastic/integrations#4731). If you pick up the latest Suricata integration, the error will go away for you.

@masseyke closed this Dec 6, 2022