Setup
- Using Confluent Cloud, their Schema Registry, and Avro for serialization
- Confluent Cloud sets all topics to `compression.type=producer` (producers must compress before sending; the Kafka cluster does not recompress)
Goal
After my Azure Functions code has processed events and created a new list of `ISpecificRecord` Avro POCOs, I want to forward those new events to another Kafka topic, but I want to send them as a Zstd-compressed microbatch.
Problem
I didn't see an argument I could pass to the `KafkaAttribute` output binding that lets me set a compression type (in the Confluent .NET client this is `ProducerConfig.CompressionType = CompressionType.Zstd`, i.e. `compression.codec`), nor could I find a linger setting to "fill up" the internal librdkafka buffer (`ProducerConfig.LingerMs`, i.e. `linger.ms`).
See #57; in #11 the idea of exposing all config options was shot down as too messy - maybe the `CompressionType` setting could just be added in `KafkaProducerFactory.cs` > `GetProducerConfig()`?
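On the producer-config side, what I'm imagining is just passing the two settings through to the Confluent client - a sketch of the config fragment (property names on the binding side would be whatever the maintainers choose; these are only the standard `Confluent.Kafka` `ProducerConfig` properties):

```csharp
using Confluent.Kafka;

// Sketch only: the binding would populate these from new attribute/host.json
// settings. CompressionType and LingerMs are real ProducerConfig properties;
// how they get surfaced through the extension is hypothetical.
var config = new ProducerConfig
{
    BootstrapServers = brokerList,              // already configurable today
    CompressionType = CompressionType.Zstd,     // compression.codec
    LingerMs = 50                               // linger.ms - let batches fill
};
```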
Regarding the microbatching, is that kind of what you get by using an `out` array parameter for the output binding?

```csharp
[Kafka("LocalBroker", "stringTopic")] out KafkaEventData<string>[] kafkaEventData
```

as opposed to using `IAsyncCollector` and calling `.AddAsync()`?

```csharp
[Kafka("LocalBroker", "stringTopic")] IAsyncCollector<KafkaEventData<string>> events,
...
await events.AddAsync(forwardEvent);
```
In the meantime, I assume I need to manually use Confluent's `AvroSerializer`, take the resulting `byte[]`, and run it through a Zstandard library (like ZstdNet) to compress it?
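Something like this, assuming that workaround is the right direction (`MyAvroRecord` and the topic name are placeholders for my `ISpecificRecord` type and destination topic):

```csharp
using System.Threading.Tasks;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;
using ZstdNet;

public static class ForwardHelper
{
    // Serialize with Confluent's Avro serializer, then Zstd-compress the
    // payload bytes myself before handing them to the output binding.
    public static async Task<byte[]> SerializeAndCompressAsync(
        ISchemaRegistryClient registry, MyAvroRecord record)
    {
        var serializer = new AvroSerializer<MyAvroRecord>(registry);
        byte[] avroBytes = await serializer.SerializeAsync(
            record,
            new SerializationContext(MessageComponentType.Value, "forward-topic"));

        using var compressor = new Compressor();
        return compressor.Wrap(avroBytes); // Zstd-framed payload
    }
}
```

Of course this isn't Kafka-protocol compression, so consumers would have to `Unwrap()` the payload manually too - which is why a real `CompressionType` setting on the binding would be much nicer.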
I also attached a Confluent best-practices PDF; page 25 outlines which settings to tweak to optimize for different scenarios (latency, throughput, durability, and availability). You may consider exposing these settings.
confluent cloud-Best_Practices_for_Developing_Apache_Kafka_Applications_on_Confluent_Cloud.pdf
Thanks