spring-kafka-docs/src/main/antora/modules/ROOT/pages/appendix/native-images.adoc (1 addition, 1 deletion)
@@ -1,7 +1,7 @@
[[native-images]]
= Native Images

-https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#aot[Spring AOT] native hints are provided to assist in developing native images for Spring applications that use Spring for Apache Kafka, including hints for AVRO generated classes used in `@KafkaListener` s.
+https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#aot[Spring AOT] native hints are provided to assist in developing native images for Spring applications that use Spring for Apache Kafka, including hints for AVRO generated classes used in `@KafkaListener`+++s+++.

IMPORTANT: `spring-kafka-test` (and, specifically, its `EmbeddedKafkaBroker`) is not supported in native images.
This will exclude all headers beginning with `abc` and include all others.
-By default, the `DefaultKafkaHeaderMapper` is used in the `MessagingMessageConverter` and `BatchMessagingMessageConverter`, as long as Jackson is on the class path.
+By default, the `DefaultKafkaHeaderMapper` is used in the `MessagingMessageConverter` and `BatchMessagingMessageConverter`, as long as Jackson is on the classpath.
With the batch converter, the converted headers are available in the `KafkaHeaders.BATCH_CONVERTED_HEADERS` as a `List<Map<String, Object>>` where the map in a position of the list corresponds to the data position in the payload.
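For illustration, a minimal sketch of a batch listener that reads the per-record converted headers from `KafkaHeaders.BATCH_CONVERTED_HEADERS` (the `Thing` payload type, the `my-trace-id` header name, and the `process` method are hypothetical, not taken from this diff):

[source, java]
----
@KafkaListener(id = "batchWithHeaders", topics = "things")
public void listen(List<Thing> things,
        @Header(KafkaHeaders.BATCH_CONVERTED_HEADERS) List<Map<String, Object>> headers) {

    for (int i = 0; i < things.size(); i++) {
        // the map at index i holds the converted headers for the payload at index i
        Object traceId = headers.get(i).get("my-trace-id");
        process(things.get(i), traceId); // hypothetical processing method
    }
}
----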
spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc (2 additions, 2 deletions)
@@ -4,7 +4,7 @@
[[monitoring-listener-performance]]
== Monitoring Listener Performance

-Starting with version 2.3, the listener container will automatically create and update Micrometer `Timer`+++s+++ for the listener, if `Micrometer` is detected on the class path, and a single `MeterRegistry` is present in the application context.
+Starting with version 2.3, the listener container will automatically create and update Micrometer `Timer`+++s+++ for the listener, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
The timers can be disabled by setting the `ContainerProperty`+++'+++s `micrometerEnabled` to `false`.

Two timers are maintained - one for successful calls to the listener and one for failures.
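As a minimal sketch (assuming a `ConcurrentKafkaListenerContainerFactory` bean definition), the listener timers can be switched off through the factory's container properties:

[source, java]
----
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // disable the per-listener Micrometer timers
    factory.getContainerProperties().setMicrometerEnabled(false);
    return factory;
}
----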
@@ -24,7 +24,7 @@ NOTE: With the concurrent container, timers are created for each thread and the
[[monitoring-kafkatemplate-performance]]
== Monitoring KafkaTemplate Performance

-Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s for send operations, if `Micrometer` is detected on the class path, and a single `MeterRegistry` is present in the application context.
+Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
The timers can be disabled by setting the template's `micrometerEnabled` property to `false`.

Two timers are maintained - one for successful calls to the listener and one for failures.
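Similarly, a minimal sketch of disabling the send-operation timers on the template itself (assuming a `KafkaTemplate` bean definition):

[source, java]
----
@Bean
KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
    KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory);
    // disable the Micrometer timers for send operations
    template.setMicrometerEnabled(false);
    return template;
}
----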
spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/serdes.adoc (2 additions, 2 deletions)
@@ -185,7 +185,7 @@ public ProducerFactory<String, Thing> kafkaProducerFactory(JsonSerializer custom
Setters are also provided, as an alternative to using these constructors.
====

-Starting with version 2.2, you can explicitly configure the deserializer to use the supplied target type and ignore type information in headers by using one of the overloaded constructors that have a boolean `useHeadersIfPresent` parameter (which is `true` by default).
+Starting with version 2.2, you can explicitly configure the deserializer to use the supplied target type and ignore type information in headers by using one of the overloaded constructors that have a boolean `useHeadersIfPresent` argument (which is `true` by default).
The following example shows how to do so:

[source, java]
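A hedged sketch of such a configuration (the `Thing` type and `consumerProps` map are hypothetical; the two-argument `JsonDeserializer(Class, boolean)` overload is assumed), where passing `false` makes the deserializer always use the supplied target type and ignore any type headers:

[source, java]
----
DefaultKafkaConsumerFactory<Integer, Thing> cf = new DefaultKafkaConsumerFactory<>(consumerProps,
        new IntegerDeserializer(),
        // false = ignore type information headers; always deserialize to Thing
        new JsonDeserializer<>(Thing.class, false));
----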
@@ -509,7 +509,7 @@ Accessor methods will be used to lookup the property name as field in the receiv
The `@JsonPath` expression allows customization of the value lookup, and even to define multiple JSON Path expressions, to look up values from multiple places until an expression returns an actual value.

To enable this feature, use a `ProjectingMessageConverter` configured with an appropriate delegate converter (used for outbound conversion and converting non-projection interfaces).
-You must also add `spring-data:spring-data-commons` and `com.jayway.jsonpath:json-path` to the class path.
+You must also add `spring-data:spring-data-commons` and `com.jayway.jsonpath:json-path` to the classpath.

When used as the parameter to a `@KafkaListener` method, the interface type is automatically passed to the converter as normal.
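A minimal sketch of such a projection interface (assuming Spring Data's `@JsonPath` annotation from `spring-data-commons` and a hypothetical payload shape):

[source, java]
----
interface OrderProjection {

    // resolved from the "id" field of the received JSON via the accessor name
    String getId();

    // several JSON Path expressions are tried until one returns a value
    @JsonPath({ "$.customer.name", "$.legacyCustomerName" })
    String getCustomerName();
}
----

Such an interface can then be used directly as the `@KafkaListener` method parameter, as described above.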
-Blocking delivery attempts are only provided if you set `ContainerProperties` <<deliveryAttemptHeader>> to `true`.
+Blocking delivery attempts are only provided if you set `ContainerProperties`+++'+++s xref:kafka/container-props.adoc#deliveryAttemptHeader[deliveryAttemptHeader] to `true`.
Note that the non blocking attempts will be `null` for the initial delivery.
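A minimal sketch of switching the property on (assuming a container created directly from `ContainerProperties`; the topic name is hypothetical). The attempt count is then carried in the `KafkaHeaders.DELIVERY_ATTEMPT` header:

[source, java]
----
ContainerProperties containerProps = new ContainerProperties("my-topic");
// have the container populate the KafkaHeaders.DELIVERY_ATTEMPT header on each delivered record
containerProps.setDeliveryAttemptHeader(true);
KafkaMessageListenerContainer<String, String> container =
        new KafkaMessageListenerContainer<>(consumerFactory, containerProps);
----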
spring-kafka-docs/src/main/antora/modules/ROOT/pages/retrytopic/dlt-strategies.adoc (10 additions, 10 deletions)
@@ -1,12 +1,12 @@
[[dlt-strategies]]
-= Dlt Strategies
+= DLT Strategies

The framework provides a few strategies for working with DLTs.
You can provide a method for DLT processing, use the default logging method, or have no DLT at all.
Also you can choose what happens if DLT processing fails.

[[dlt-processing-method]]
-== Dlt Processing Method
+== DLT Processing Method

You can specify the method used to process the DLT for the topic, as well as the behavior if that processing fails.
@@ -18,16 +18,16 @@ Note that the same method will be used for all the `@RetryableTopic` annotated m
@RetryableTopic
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
-    // ... message processing
+    // ... message processing
}

@DltHandler
public void processMessage(MyPojo message) {
-    // ... message processing, persistence, etc
+    // ... message processing, persistence, etc
}
----

-The DLT handler method can also be provided through the RetryTopicConfigurationBuilder.dltHandlerMethod(String, String) method, passing as arguments the bean name and method name that should process the DLT's messages.
+The DLT handler method can also be provided through the `RetryTopicConfigurationBuilder.dltHandlerMethod(String, String)` method, passing as arguments the bean name and method name that should process the DLT's messages.

[source, java]
----
@@ -49,12 +49,12 @@ public class MyCustomDltProcessor {
}

public void processDltMessage(MyPojo message) {
-    // ... message processing, persistence, etc
+    // ... message processing, persistence, etc
}
}
----

-NOTE: If no DLT handler is provided, the default RetryTopicConfigurer.LoggingDltListenerHandlerMethod is used.
+NOTE: If no DLT handler is provided, the default `RetryTopicConfigurer.LoggingDltListenerHandlerMethod` is used.

Starting with version 2.8, if you don't want to consume from the DLT in this application at all, including by the default handler (or you wish to defer consumption), you can control whether or not the DLT container starts, independent of the container factory's `autoStartup` property.
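A hedged sketch of deferring DLT consumption; the `autoStartDltHandler` attribute name is an assumption here, not confirmed by this diff:

[source, java]
----
@RetryableTopic(autoStartDltHandler = "false")
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
    // ... message processing
}
----

The DLT container can then be started later, when you are ready to consume from it.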
@@ -77,7 +77,7 @@ In the latter the consumer ends the execution without forwarding the message.
        DltStrategy.FAIL_ON_ERROR)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
-    // ... message processing
+    // ... message processing
}
----
@@ -96,7 +96,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate<Integer, MyPojo> templ
NOTE: The default behavior is to `ALWAYS_RETRY_ON_ERROR`.

IMPORTANT: Starting with version 2.8.3, `ALWAYS_RETRY_ON_ERROR` will NOT route a record back to the DLT if the record causes a fatal exception to be thrown,
-such as a `DeserializationException` because, generally, such exceptions will always be thrown.
+such as a `DeserializationException`, because, generally, such exceptions will always be thrown.

Exceptions that are considered fatal are:
@@ -125,7 +125,7 @@ In this case after retrials are exhausted the processing simply ends.
@@ -33,7 +33,7 @@ public void processMessage(MyPojo message) {
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
-            .fixedBackoff(3000)
+            .fixedBackoff(3_000)
            .maxAttempts(4)
            .create(template);
}
@@ -53,25 +53,25 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo> templa
}
----

-NOTE: The default backoff policy is `FixedBackOffPolicy` with a maximum of 3 attempts and 1000ms intervals.
+NOTE: The default back off policy is `FixedBackOffPolicy` with a maximum of 3 attempts and 1000ms intervals.

NOTE: There is a 30-second default maximum delay for the `ExponentialBackOffPolicy`.
-If your back off policy requires delays with values bigger than that, adjust the maxDelay property accordingly.
+If your back off policy requires delays with values bigger than that, adjust the `maxDelay` property accordingly.

IMPORTANT: The first attempt counts against `maxAttempts`, so if you provide a `maxAttempts` value of 4 there'll be the original attempt plus 3 retries.

[[global-timeout]]
-== Global timeout
+== Global Timeout

You can set the global timeout for the retrying process.
If that time is reached, the next time the consumer throws an exception the message goes straight to the DLT, or just ends the processing if no DLT is available.
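A sketch of setting this through the builder; the `timeoutAfter` method name is an assumption and should be checked against the builder's API:

[source, java]
----
@Bean
public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .fixedBackoff(2_000)
            // assumed setting: stop retrying once this much time has elapsed overall
            .timeoutAfter(5_000)
            .create(template);
}
----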
-Starting with version 2.8.4, if you wish to add custom headers (in addition to the retry information headers added by the factory, you can add a `headersFunction` to the factory - `factory.setHeadersFunction((rec, ex) -> { ... })`
+Starting with version 2.8.4, if you wish to add custom headers (in addition to the retry information headers added by the factory, you can add a `headersFunction` to the factory - `factory.setHeadersFunction((rec, ex) +++->+++ { +++...+++ })`.
By default, any headers added will be cumulative - Kafka headers can contain multiple values.
Starting with version 2.9.5, if the `Headers` returned by the function contains a header of type `DeadLetterPublishingRecoverer.SingleRecordHeader`, then any existing values for that header will be removed and only the new single value will remain.
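A minimal sketch of such a `headersFunction` (assuming the `(ConsumerRecord, Exception) -> Headers` shape implied by the lambda above; the header names are hypothetical):

[source, java]
----
factory.setHeadersFunction((rec, ex) -> {
    // add custom headers alongside the retry information headers
    RecordHeaders headers = new RecordHeaders();
    headers.add(new RecordHeader("x-original-topic",
            rec.topic().getBytes(StandardCharsets.UTF_8)));
    headers.add(new RecordHeader("x-exception-class",
            ex.getClass().getName().getBytes(StandardCharsets.UTF_8)));
    return headers;
});
----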