Commit 0c0f9cb

Flush traces when spark application is finished (#6670)
Flush spans when the Spark application finishes. The default timeout for the remoteWriter is 1 second, so the flush will not block the main thread for more than 1 second.

Motivation: customers were losing spans because the JVM shut down too quickly after the application finished.
1 parent e3eefca commit 0c0f9cb


1 file changed: +4 additions, -0 deletions


dd-java-agent/instrumentation/spark/src/main/java/datadog/trace/instrumentation/spark/AbstractDatadogSparkListener.java

Lines changed: 4 additions & 0 deletions
```diff
@@ -213,6 +213,10 @@ public synchronized void finishApplication(
         "spark.available_executor_time", computeCurrentAvailableExecutorTime(time));

     applicationSpan.finish(time * 1000);
+
+    // write traces synchronously:
+    // as soon as the application finishes, the JVM starts to shut down
+    tracer.flush();
   }

   private AgentSpan getOrCreateStreamingBatchSpan(
```
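The idea behind the change can be sketched in isolation: if a writer exports spans asynchronously, spans still sitting in its buffer are lost when the JVM exits, so the listener must flush synchronously as soon as the application span finishes. The sketch below uses a hypothetical in-memory `Tracer` with a buffering writer (not the actual dd-trace-java API) to show why the explicit `flush()` call matters.

```java
import java.util.ArrayList;
import java.util.List;

public class FlushOnFinishSketch {

    // Hypothetical tracer: finished spans are buffered and only handed to
    // the exporter when flush() is called (mimicking an async remote writer
    // that never gets a chance to run before JVM shutdown).
    public static class Tracer {
        private final List<String> buffer = new ArrayList<>();
        private final List<String> exported = new ArrayList<>();

        public void finishSpan(String name) {
            buffer.add(name);
        }

        // Synchronous flush: blocks until all buffered spans are exported.
        public void flush() {
            exported.addAll(buffer);
            buffer.clear();
        }

        public List<String> exported() {
            return exported;
        }
    }

    public static void finishApplication(Tracer tracer) {
        tracer.finishSpan("spark.application");
        // Write traces synchronously: as soon as the application finishes,
        // the JVM starts to shut down, so without this call the buffered
        // span would never reach the exporter.
        tracer.flush();
    }

    public static void main(String[] args) {
        Tracer tracer = new Tracer();
        finishApplication(tracer);
        if (!tracer.exported().contains("spark.application")) {
            throw new AssertionError("application span was lost");
        }
        System.out.println("exported " + tracer.exported().size() + " span(s)");
    }
}
```

A bounded flush timeout (1 second here, per the commit message) is the usual compromise: long enough to drain the buffer in the common case, short enough that a slow or unreachable backend cannot hang application shutdown.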
