Conversation


@solsson solsson commented Nov 9, 2017

Experimenting with different ways to aggregate logs, here and in #88. Supersedes #40.

- Hopefully still no log collection recursion.
- Actually getting amazingly close to the result from filebeat or fluentd.

The `==> [filename] <==` lines give the filename for the messages that follow,
up to the next such line (thanks to `tail`), and the filenames all follow the pattern
`pod-name_namespace_container-name_id.log`.
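The parsing described above could be sketched roughly like this (a Python sketch with hypothetical filenames; the underscore-separated field layout is assumed from the pattern quoted above and may differ from what your kubelet actually writes):

```python
import re

# tail -F prefixes each file's output with a "==> <filename> <==" header.
HEADER = re.compile(r"^==> (?P<path>.+) <==$")

def parse_log_name(path):
    """Split a container log filename into its metadata fields.

    Assumes the pod-name_namespace_container-name_id.log layout
    described above; returns None for anything else.
    """
    name = path.rsplit("/", 1)[-1]
    if not name.endswith(".log"):
        return None
    parts = name[: -len(".log")].split("_", 3)
    if len(parts) != 4:
        return None
    pod, namespace, container, container_id = parts
    return {"pod": pod, "namespace": namespace,
            "container": container, "id": container_id}

def annotate(lines):
    """Yield (metadata, message) pairs from interleaved tail output.

    Tracks the most recent header so every log line is paired with
    the metadata parsed from the file it came from.
    """
    current = None
    for line in lines:
        m = HEADER.match(line)
        if m:
            current = parse_log_name(m.group("path"))
        elif line:
            yield current, line
```

The key point is that the header applies to all following lines, so the parser only needs one piece of state: the metadata of the most recently announced file.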

A streams application could process this and pass messages to another
topic, after adding labels and annotations from the kube API, either as very long keys
or wrapped in JSON like fluentd or filebeat do.
It will never be fully reliable in clusters with a lot of scheduling going on,
given that a pod's own log file might not exist yet at startup.
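The "wrap as JSON" option might look like the following (a minimal Python sketch; the labels dict stands in for metadata a real processor would fetch from the kube API, and all names here are illustrative):

```python
import json

def wrap(metadata, message, labels=None):
    """Combine filename-derived metadata, the raw log line, and
    kube API labels into one JSON record, similar in spirit to
    what fluentd or filebeat produce."""
    record = dict(metadata or {})
    record["message"] = message
    record["kubernetes_labels"] = labels or {}
    return json.dumps(record)
```

A record per line keeps the downstream topic self-describing, at the cost of repeating the metadata on every message.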

Now I actually get one `logs-kafka-raw` message per container deletion.
(I know you're close.)
And though it looked like unhealthy nodes, I see no reason
for a longer grace period with either tail or kafkacat.
Addresses #88 (comment).

I deemed it safe to assume that operational daemonset pods never co-exist on a node.

Tests edenhill/kcat#123,
as does 53f355a.
solsson changed the title from "Evaluate different logs aggregation/streaming approaches" to "Evaluate different log aggregation/streaming approaches" on Nov 9, 2017

solsson commented Jan 22, 2018

And the winner is ... #131 :)

solsson closed this Jan 22, 2018