Conversation

@NobodyXu
Collaborator

@NobodyXu NobodyXu commented Oct 2, 2025

Ref #384

@NobodyXu NobodyXu requested a review from robjtede October 2, 2025 15:28
@NobodyXu NobodyXu force-pushed the refactor/dedup-async-runtime-impl branch from dfcae1f to dd5602e Compare October 2, 2025 15:33
@NobodyXu

This comment was marked as outdated.

@NobodyXu NobodyXu force-pushed the refactor/dedup-async-runtime-impl branch from dd5602e to bf4e415 Compare October 2, 2025 15:49
Collaborator Author

@NobodyXu NobodyXu left a comment

Some explanation on my design/API choice

&mut self,
cx: &mut Context<'_>,
output: &mut PartialBuffer<&mut [u8]>,
reader: &mut dyn AsyncBufRead,
Collaborator Author

cc @robjtede I made this design choice since

  • for async I/O, inlining usually doesn't help performance: the I/O itself is typically the bottleneck, and inlining can't eliminate that
  • it avoids monomorphization (the function is compiled once instead of once per concrete reader type)

Collaborator Author

@NobodyXu NobodyXu Oct 4, 2025

I would also say that even for in-memory objects, or an AsyncBufRead layered over (de)compression, it wouldn't matter: AsyncBufRead::poll_fill_buf returns a &[u8], which AFAIK inlining cannot optimize away since it goes through a buffer, and our en/decoder impls often involve fairly complex logic.

The only case where it could become the bottleneck is when poll_fill_buf returns a very small slice, or the (de)compression consumes only a tiny part of it (literally a few bytes), but that would usually mean the user is using a very poor BufReader implementation.

Collaborator Author

@NobodyXu NobodyXu Oct 5, 2025

I've thought of an alternative design that doesn't need the reader to be passed in at all; it also removes the first variable from within do_poll_read, making it truly stateless:

impl Decoder {
    fn do_poll_read(
        &mut self,
        output: &mut PartialBuffer<&mut [u8]>,
        decoder: &mut impl Decode,
        input: &mut PartialBuffer<&[u8]>,
    ) -> ControlFlow<Result<NextAction>> {
        loop {
            match self.state {
                State::Decode => {
                    // logic unchanged ...

                    if res? {
                        self.state = State::Flushing;
                    } else {
                        // Ask caller to call `reader.consume` and `reader.poll_fill_buf`
                        return ControlFlow::Continue(());
                    }
                }
                // other states unchanged ...
            }
        }
    }
}

The caller of this function will then do:

let mut input = PartialBuffer(&[][..]);
let mut first = true;
loop {
    match this.inner.do_poll_read(output, this.decoder, &mut input) {
        Continue(_) => {
            if !first {
                this.reader.consume(input.written.len());
            }
            first = false;
            input = PartialBuffer(ready!(this.reader.poll_fill_buf(cx))?);
        }
        Break(res) => break res,
    }
}

This design is much cleaner.

Collaborator Author

@robjtede I've updated the code with this new design, and used macro_rules! to further reduce duplicated code.

I quite like the current iteration; I think it'll make maintenance much, much simpler.

Comment on lines +22 to +26
#[derive(Debug)]
pub struct Decoder {
state: State,
multiple_members: bool,
}
Collaborator Author

cc @robjtede I decided to keep the state as simple as possible; anything else can be passed by parameter.

@NobodyXu
Collaborator Author

If there's no objection, I'll merge it this evening

@NobodyXu NobodyXu added this pull request to the merge queue Oct 13, 2025
Merged via the queue into main with commit 1fa960f Oct 13, 2025
20 checks passed
@NobodyXu NobodyXu deleted the refactor/dedup-async-runtime-impl branch October 13, 2025 11:06
@codecov

codecov bot commented Oct 13, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 0.00%. Comparing base (fcf91fd) to head (2c24af2).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@     Coverage Diff     @@
##   main   #391   +/-   ##
===========================

☔ View full report in Codecov by Sentry.

@github-actions github-actions bot mentioned this pull request Oct 6, 2025
@NobodyXu
Collaborator Author

I will ping you again when releasing this change, @robjtede; if you think of something you'd like me to change, I can open a separate PR.
