
Conversation

@hamishknight
Contributor

Rather than replacing the code completion file on the `CompilerInstance` whenever we do a cached top-level completion, let's set a new main module instead.

This allows us to properly update the `LoadedModules` map, and allows the retrieval of the code completion file to be turned into a request.

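To make the shape of the change concrete, here is a small self-contained toy model of the pattern. None of these types are the Swift compiler's (the real change uses `ModuleDecl::createMainModule`, shown in the diff below); `Context`, `Module`, and the `createMainModule` helper here are invented purely to illustrate why creating a fresh main module keeps the loaded-modules bookkeeping consistent and ties its lifetime to the owning context:

```cpp
// Illustrative toy only, not the compiler's types. A fresh "main module" is
// created for each cached completion; the map of loaded modules always points
// at the current one, and everything is freed when the context goes away.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Module {
  std::string name;
  std::vector<std::string> files;
};

struct Context {
  // Loosely analogous to the LoadedModules map mentioned in the description.
  std::map<std::string, Module *> loadedModules;
  std::vector<std::unique_ptr<Module>> ownedModules; // freed with the Context

  Module *createMainModule(const std::string &name) {
    ownedModules.push_back(std::make_unique<Module>());
    Module *m = ownedModules.back().get();
    m->name = name;
    loadedModules[name] = m; // the map always tracks the current main module
    return m;
  }
};

int main() {
  Context ctx;
  Module *first = ctx.createMainModule("Completion");
  first->files.push_back("completion-buffer-1");

  // A second cached completion gets a brand new module rather than having
  // its completion file swapped out underneath the existing one.
  Module *second = ctx.createMainModule("Completion");
  second->files.push_back("completion-buffer-2");

  std::cout << (ctx.loadedModules["Completion"] == second) << "\n"; // prints 1
}
```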
@hamishknight
Contributor Author

@swift-ci please test

@hamishknight hamishknight requested a review from rintaro June 2, 2020 21:35
Member

@rintaro rintaro left a comment


Nice. Thank you!

@swiftlang swiftlang deleted a comment from swift-ci Jun 2, 2020
@swiftlang swiftlang deleted a comment from swift-ci Jun 2, 2020
@swiftlang swiftlang deleted a comment from swift-ci Jun 3, 2020
@swiftlang swiftlang deleted a comment from swift-ci Jun 3, 2020
@hamishknight
Contributor Author

@swift-ci please smoke test macOS

@hamishknight hamishknight merged commit 8c20bcd into swiftlang:master Jun 3, 2020
@hamishknight hamishknight deleted the modulo-module branch June 3, 2020 04:38
```diff
-    auto *newM =
-        ModuleDecl::create(oldM->getName(), Ctx, oldM->getImplicitImportInfo());
+    auto *newM = ModuleDecl::createMainModule(Ctx, oldM->getName(),
+                                              oldM->getImplicitImportInfo());
```
Collaborator

@AnthonyLatsis AnthonyLatsis Jun 12, 2020


How often does this happen in practice? We are adding to the cache each time we retrieve the completion file for a new main module, even though the old ones are thrown away.

Contributor Author


We only perform a limited number of cached completions before throwing away the `CompilerInstance` (and therefore both the `ASTContext` and evaluator cache), so I don't think this should be an issue in practice. Note this doesn't just apply to the evaluator's cache; there are also a bunch of other allocations that we make per module and file (including the AST nodes themselves) that won't be deallocated until the `ASTContext` is torn down.

Eventually we'd like code completion to take full advantage of the request evaluator by having it invalidate certain parsing requests, which would invalidate any dependent requests and ensure we only need to re-compute the minimal amount depending on what was changed. Once we get to this point, we'll hopefully be able to tie memory lifetimes such that request invalidation causes the deallocation of any state previously computed.
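As a rough, self-contained sketch of that invalidation idea (none of these names are from the actual request evaluator; `ToyEvaluator`, `record`, and `invalidate` are invented for illustration), invalidating a parsing request transitively drops every cached result that depended on it, so only the affected work has to be recomputed:

```cpp
// Toy dependency-tracking cache illustrating request invalidation.
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

class ToyEvaluator {
  std::unordered_map<std::string, std::string> cache; // request -> cached result
  // request -> requests whose cached results were computed from it
  std::unordered_map<std::string, std::unordered_set<std::string>> dependents;

public:
  void record(const std::string &request, const std::string &result,
              const std::vector<std::string> &dependencies) {
    cache[request] = result;
    for (const auto &dep : dependencies)
      dependents[dep].insert(request);
  }

  // Invalidating a request (e.g. the parse of an edited file) transitively
  // drops every cached result that depended on it.
  void invalidate(const std::string &request) {
    cache.erase(request);
    auto it = dependents.find(request);
    if (it == dependents.end())
      return;
    auto users = it->second; // copy before erasing, then recurse
    dependents.erase(it);
    for (const auto &user : users)
      invalidate(user);
  }

  bool isCached(const std::string &request) const {
    return cache.count(request) != 0;
  }
};

int main() {
  ToyEvaluator eval;
  eval.record("ParseSourceFile(main.swift)", "AST", {});
  eval.record("TypeCheckFunction(foo)", "checked", {"ParseSourceFile(main.swift)"});

  // Editing main.swift invalidates the parse and everything built on it.
  eval.invalidate("ParseSourceFile(main.swift)");
  std::cout << eval.isCached("TypeCheckFunction(foo)") << "\n"; // prints 0
}
```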

Collaborator


Very informative, thank you.
