Conversation

Techcable
Member

Sends a message to the logging thread, then blocks until the logger acknowledges it.
Contrast this with the busy-loop approach in #36.

Pinned to the branch used in PR slog-rs/slog#349, so it cannot be released until that PR is accepted.

Fixes issue #35

@Techcable changed the title from "Implement the Drain::flush method added in" to "Implement the Drain::flush method" on Aug 9, 2025
Techcable added a commit to Techcable/slog-json that referenced this pull request Aug 9, 2025
Pinned to the branch used in slog-rs/slog#349, so that must be accepted first.
The 'pretty' example is pinned to use the branch in slog-rs/async#39,
so that must also be accepted first.
@vorner
Collaborator

vorner commented Aug 25, 2025

As I'm reading through it, I wonder if there might be a less code-intensive and more readable approach. How about putting some kind of backchannel into the Flush message (I'd say a semaphore with 0 tokens in it, the worker thread provides one after flushing, the caller waits for it; except there don't seem to be any semaphores in Rust 😇). That way it would be clear whose flush got satisfied, there would be less work around poisoned mutexes, etc.
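
A minimal sketch of this per-message backchannel idea, using a one-slot std channel in place of the missing semaphore; the AsyncMsg type and handler below are illustrative stand-ins, not the crate's actual internals:

```rust
use std::sync::mpsc;

enum AsyncMsg {
    Record(String),              // stand-in for the real log record payload
    Flush(mpsc::SyncSender<()>), // backchannel the worker answers on
}

// Caller side: every flush request carries its own acknowledgement channel,
// so it is unambiguous whose flush got satisfied.
fn flush(to_worker: &mpsc::Sender<AsyncMsg>) {
    let (ack_tx, ack_rx) = mpsc::sync_channel::<()>(1);
    to_worker.send(AsyncMsg::Flush(ack_tx)).expect("worker thread gone");
    ack_rx.recv().expect("worker dropped the flush ack");
}

// Worker side: flush the wrapped drain, then provide the single "token".
fn handle(msg: AsyncMsg) {
    match msg {
        AsyncMsg::Record(line) => println!("{line}"),
        AsyncMsg::Flush(ack) => {
            // ... flush the underlying drain here ...
            let _ = ack.send(());
        }
    }
}
```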

@dpc
Contributor

dpc commented Aug 25, 2025

Increasing message size might negatively affect perf. of the main codepath.

@Techcable
Member Author

Techcable commented Aug 25, 2025

> As I'm reading through it, I wonder if there might be a less code-intensive and more readable approach. How about putting some kind of backchannel into the Flush message (I'd say a semaphore with 0 tokens in it, the worker thread provides one after flushing, the caller waits for it; except there don't seem to be any semaphores in Rust 😇). That way it would be clear whose flush got satisfied, there would be less work around poisoned mutexes, etc.

Thank you for the suggestion! I will look for a simpler approach.

> Increasing message size might negatively affect perf. of the main codepath.

You may be right about this, but an AsyncMsg is already a pretty large struct.

@Techcable
Member Author

Techcable commented Aug 25, 2025

Idea for a simpler implementation: use a dedicated two-way channel or queue for flush requests and their acknowledgments.

EDIT: A zero-capacity channel may be what we want. According to crossbeam-channel:

> A special case is zero-capacity channel, which cannot hold any messages. Instead, send and receive operations must appear at the same time in order to pair up and pass the message over:
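
Along those lines, a minimal sketch of the dedicated-channel variant, assuming crossbeam-channel's bounded(0) rendezvous channel; the AsyncMsg and worker names are illustrative, not the crate's actual types:

```rust
use crossbeam_channel::{bounded, unbounded};
use std::thread;

enum AsyncMsg {
    Record(String), // stand-in for the real log record payload
    Flush,          // ask the worker to flush and acknowledge
}

fn main() {
    let (msg_tx, msg_rx) = unbounded::<AsyncMsg>();
    // Zero capacity: a send only completes while a receive is happening at
    // the same moment, so the acknowledgement cannot be buffered or lost.
    let (ack_tx, ack_rx) = bounded::<()>(0);

    let worker = thread::spawn(move || {
        for msg in msg_rx.iter() {
            match msg {
                AsyncMsg::Record(line) => println!("{line}"),
                AsyncMsg::Flush => {
                    // ... flush the underlying drain here ...
                    let _ = ack_tx.send(()); // pairs up with the caller's recv()
                }
            }
        }
    });

    // Caller side: request a flush, then block until the worker pairs up.
    msg_tx.send(AsyncMsg::Record("hello".into())).unwrap();
    msg_tx.send(AsyncMsg::Flush).unwrap();
    ack_rx.recv().unwrap();

    drop(msg_tx); // close the channel so the worker's iterator ends
    worker.join().unwrap();
}
```

One caveat with a single shared ack channel: if several threads flush concurrently, an acknowledgement may pair up with the wrong waiter, which is the ambiguity the per-message backchannel and the counter scheme discussed below avoid.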

@dpc
Contributor

dpc commented Aug 26, 2025

Probably a proper way to do flush acks using only the stdlib is a Condvar paired with a Mutex<u64>. The u64 acts as a counter. Before flushing, the sender reads the current counter value, releases the lock, sends the flush notification, then waits on the condvar for the counter to change. The backend thread increments the counter and calls notify_all after every flush. This should be correct, and Condvar is a very bare-bones sync primitive, so it's light and fast.
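
A minimal sketch of that counter scheme using only std; the FlushSync name is illustrative, and sending the actual flush message is left to a caller-supplied closure:

```rust
use std::sync::{Condvar, Mutex};

// Shared between the flush callers and the worker thread (e.g. behind an Arc).
#[derive(Default)]
struct FlushSync {
    counter: Mutex<u64>,
    cond: Condvar,
}

impl FlushSync {
    /// Caller: note the current counter, send the flush request through the
    /// normal message channel, then wait until the worker has moved past it.
    fn wait_for_flush(&self, send_flush_request: impl FnOnce()) {
        let observed = *self.counter.lock().unwrap(); // lock released here
        send_flush_request();
        let mut guard = self.counter.lock().unwrap();
        while *guard <= observed {
            guard = self.cond.wait(guard).unwrap();
        }
    }

    /// Worker: after flushing the underlying drain, bump the counter and wake
    /// every waiter; each one re-checks against its own observed value.
    fn flush_done(&self) {
        *self.counter.lock().unwrap() += 1;
        self.cond.notify_all();
    }
}
```

Because each waiter keeps its own stamp, concurrent flush callers can all wait at once instead of serializing behind a single pending-flush state.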

@Techcable
Member Author

> Probably a proper way to do flush acks using only the stdlib is a Condvar paired with a Mutex<u64>. The u64 acts as a counter. Before flushing, the sender reads the current counter value, releases the lock, sends the flush notification, then waits on the condvar for the counter to change. The backend thread increments the counter and calls notify_all after every flush. This should be correct, and Condvar is a very bare-bones sync primitive, so it's light and fast.

This is similar to what I am doing right now, which uses an enum for the state. Your version is better: the enum currently has no stamp, and it blocks if there is an existing flush rather than queuing the new one. I also suspect your idea would be simpler to implement.
