Conversation


@Keno Keno commented Jul 7, 2023

When we have an inference loop with different interpreters, the current code was trying to cache everything with the top-level interpreter of the loop, yielding some unexpected behavior. I don't think it's necessarily well defined what should happen here, since it depends on the interpreters in question, but I think it's better to try to cache each frame with the interpreter that created it, since the interpreters may have different lattices, etc. Doing this fixes an error I saw downstream that had just such a situation.
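To make the change concrete, here is a minimal sketch (in Python, with purely illustrative names, not the actual Julia `Core.Compiler` internals) of the behavior described above: frames record the interpreter that created them, and the loop-finishing step caches each result with that interpreter instead of the loop's top-level one.

```python
# Illustrative model only: `Interpreter`, `Frame`, and `finish_loop`
# are hypothetical stand-ins for the Julia compiler's abstract
# interpreter, inference frame, and typeinf finish loop.

class Interpreter:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # each interpreter owns its own cache

class Frame:
    def __init__(self, method, interp):
        self.method = method
        self.interp = interp     # the interpreter that created this frame
        self.result = None

def finish_loop(frames, top_interp):
    for frame in frames:
        frame.result = f"inferred:{frame.method}"
        # Old (problematic) behavior: cache everything with top_interp,
        #   top_interp.cache[frame.method] = frame.result
        # New behavior: cache with the frame's own interpreter, since
        # different interpreters may use different lattices/caches.
        frame.interp.cache[frame.method] = frame.result

native = Interpreter("native")
custom = Interpreter("custom")
frames = [Frame("f", native), Frame("g", custom)]
finish_loop(frames, top_interp=native)
print(sorted(native.cache), sorted(custom.cache))  # prints ['f'] ['g']
```

Under the old scheme, both `f` and `g` would have landed in `native`'s cache even though `g` was inferred by `custom`; after the change each result lands in the cache of the interpreter that produced it.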


Keno commented Jul 7, 2023

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier

Your benchmark job has completed - no performance regressions were detected. A full report can be found here.


@aviatesk aviatesk left a comment


SGTM.
`caller.interp::AbstractInterpreter` isn't type-stable, so I was worried about the performance cost, but the benchmark looks just fine.

Co-authored-by: Shuhei Kadowaki <[email protected]>
@aviatesk aviatesk merged commit d60f9b3 into master Jul 8, 2023
@aviatesk aviatesk deleted the kf/typeinffinishloop branch July 8, 2023 06:52