Small fixes for diagnostics with prepared overloads enabled #84821
Conversation
…ocator didn't change

For a Bind constraint generated by a same-type requirement, we must preserve the locator, so we record it like any other kind of constraint. This fixes a diagnostic regression with -solver-enable-prepared-overloads.
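The fix described above is about recording a constraint without dropping the locator that ties it back to its originating requirement, since diagnostics use that locator to point at the right source. A minimal sketch of the idea (all names here are illustrative stand-ins, not the actual Swift solver API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for a constraint locator: the breadcrumb that lets
// a diagnostic point back at the requirement the constraint came from.
struct Locator {
    std::string anchor; // e.g. the source path of the same-type requirement
};

struct Constraint {
    std::string kind;       // e.g. "Bind"
    const Locator *locator; // preserved when the constraint is recorded
};

struct ConstraintSystem {
    std::vector<Constraint> recorded;

    // Record the constraint like any other kind: the locator travels with it
    // instead of being discarded, so later diagnostics can still anchor to it.
    void addConstraint(const std::string &kind, const Locator *loc) {
        recorded.push_back(Constraint{kind, loc});
    }
};
```

The point is only that the recording path must carry the locator through unchanged; the regression arose when a Bind constraint took a path that lost it.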
…ge()

We want to re-prepare our overloads, since they will now contain fixes.
@swift-ci Please smoke test

@swift-ci Please test source compatibility
// FIXME: Perhaps we should store the Constraint -> PreparedOverload mapping
// in a SolverStep or something? Mutating Constraint feels wrong.

void Constraint::setPreparedOverload(PreparedOverload *preparedOverload) {
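The FIXME above suggests keeping the Constraint -> PreparedOverload association in solver-scoped state rather than mutating the Constraint itself. A rough sketch of that alternative, with hypothetical names (the real SolverStep and PreparedOverload types differ):

```cpp
#include <unordered_map>

// Illustrative stand-ins for the real solver types.
struct Constraint {};
struct PreparedOverload {};

// Hypothetical solver-scoped state: the Constraint stays immutable, and the
// mapping is discarded together with the rest of the solver state when the
// scope ends, instead of lingering on the Constraint object.
struct SolverScopedState {
    std::unordered_map<const Constraint *, PreparedOverload *> prepared;

    void setPrepared(const Constraint *c, PreparedOverload *p) {
        prepared[c] = p;
    }

    PreparedOverload *getPrepared(const Constraint *c) const {
        auto it = prepared.find(c);
        return it == prepared.end() ? nullptr : it->second;
    }
};
```

The trade-off is an extra hash lookup per access versus keeping Constraint free of solver-lifetime mutable state.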
Sorry for the drive-by, but out of curiosity: what is a "prepared overload"?
Generating type variables and constraints for a disjunction choice once only instead of every time the disjunction choice is attempted, to reduce instances of exponential space usage. It speeds up the “worst case” behavior of the solver by 10-20%.
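The caching described above can be sketched as memoizing the expensive per-choice setup: generate the type variables and constraints for a disjunction choice the first time it is attempted, then reuse them on every re-attempt. All names below are illustrative, not the actual solver types:

```cpp
#include <optional>
#include <vector>

// Stand-in for the data prepared for one disjunction choice: in the real
// solver this would be freshly generated type variables and constraints.
struct PreparedChoice {
    std::vector<int> typeVariables;
    std::vector<int> constraints;
};

struct DisjunctionChoice {
    std::optional<PreparedChoice> prepared;
    int generationCount = 0; // counts how often the expensive step ran

    // Without caching, this generation work would be redone on every
    // attempt of the choice; with caching it runs exactly once.
    const PreparedChoice &prepare() {
        if (!prepared) {
            ++generationCount;
            prepared = PreparedChoice{{1, 2}, {3}};
        }
        return *prepared;
    }
};
```

Since the solver may attempt the same disjunction choice many times while backtracking, doing the generation once instead of per attempt saves both the repeated allocation (the space blow-up) and the repeated construction time.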
(And feel free to ask questions always!)
Thanks! I had noticed some earlier PRs float by using this terminology; this first one has more about the motivation. So, IIUC, this approach caches some intermediary data structures used by the solver that previously were not reused. How do the space savings lead to such a performance speedup in the "worst case"? Does the high memory use spill over into wasted time (I don't know much about how the memory "arena" for the solver is set up)? Or is it just the wasted time of recreating the same data structures repeatedly that is eliminated by caching them?
No description provided.