I've been looking closely at the performance of building an ItemKeyStore, which is used to dramatically lower the memory usage of storing symbol identity information for find-all-references. We build an ItemKeyStore every time a background type-check happens. A background type-check occurs on a file when another file that depends on it is being type-checked, so it is not necessarily the document being edited.
Some results on FSharp.Compiler.Private:
| File name | Type-check time | ItemKeyStore construction time |
| --- | --- | --- |
| TypeChecker.fs | 2325ms | 202ms |
| Optimizer.fs | 426ms | 31ms |
| InnerLambdasToTopLevelFuncs.fs | 142ms | 13ms |
| IlxGen.fs | 1014ms | 82ms |
| SymbolPatterns.fs | 29ms | 7ms |
So there is a clear performance cost. I don't know how noticeable it is in practice, but I suspect it is. The trade-off is the memory reduction we gained and the performance increase when doing find-all-references.
There is some GC cost, and for semantic classification storage we sometimes allocate on the LOH (Large Object Heap). This isn't a regression per se, since we were already allocating on the LOH for in-memory symbol storage before the ItemKeyStore changes.
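For context on why these allocations end up on the LOH: in .NET, any array of roughly 85,000 bytes or more is allocated on the Large Object Heap, which is only collected with gen 2 and is not compacted by default. A minimal sketch of that threshold arithmetic (the element sizes and counts below are illustrative, not actual compiler figures):

```python
# .NET allocates arrays of ~85,000 bytes or more on the Large Object Heap.
LOH_THRESHOLD_BYTES = 85_000

def lands_on_loh(element_count: int, element_size_bytes: int) -> bool:
    """Rough check: would an array of this shape be an LOH allocation?"""
    return element_count * element_size_bytes >= LOH_THRESHOLD_BYTES

# e.g. a classification entry stored as three 4-byte ints (range + type tag):
print(lands_on_loh(10_000, 12))  # 120,000 bytes -> True
print(lands_on_loh(100, 12))     # 1,200 bytes   -> False
```

This is why a per-file buffer of classification entries can cross the threshold on large files like TypeChecker.fs even though each entry is small.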
We can address this in a number of ways:

- Lazy evaluation of background semantic classification and the item key store.
  - This is the most preferable option, but the way the incremental builder is designed makes it challenging. It is possible, though.
- Make construction faster and allocate less (remove LOH allocations).
  - We should do this anyway, and it might be possible to reduce the construction time enough that we don't have to rely on lazy evaluation - not sure if that's possible yet.
- Store semantic classification in a memory-mapped file, like the item keys.
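The lazy-evaluation option amounts to a compute-once thunk: the expensive store is not built at type-check time, only on the first find-all-references query. A minimal language-agnostic sketch of the idea (in .NET this would be `Lazy<'T>`; the `build_item_key_store` stand-in below is hypothetical):

```python
class Lazy:
    """Compute-once wrapper: the factory runs only on first access."""
    def __init__(self, factory):
        self._factory = factory
        self._value = None
        self._computed = False

    def force(self):
        if not self._computed:
            self._value = self._factory()
            self._computed = True
        return self._value

build_count = 0

def build_item_key_store():
    # Stand-in for the expensive ItemKeyStore construction.
    global build_count
    build_count += 1
    return {"symbol": "keys"}

store = Lazy(build_item_key_store)
# No cost at background type-check time...
assert build_count == 0
# ...and only one build no matter how many queries force it.
store.force()
store.force()
assert build_count == 1
```

The challenge noted above is not the thunk itself but threading it through the incremental builder, which currently produces these artifacts eagerly as part of each type-check step.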
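For the memory-mapped option, the idea is the same as for the item keys: serialize fixed-size classification records into a file and read them back through a memory map, so the data lives in file-backed pages rather than on the managed heap. A generic sketch using Python's `mmap` (record layout is hypothetical):

```python
import mmap
import struct
import tempfile

# Fixed-size record: start offset, end offset, classification tag.
RECORD = struct.Struct("<iii")

records = [(0, 5, 1), (6, 14, 2), (15, 20, 1)]

with tempfile.TemporaryFile() as f:
    # Serialize all records to the backing file.
    for r in records:
        f.write(RECORD.pack(*r))
    f.flush()
    # Map the whole file; records are addressable by index without
    # materializing an in-memory array of them.
    mm = mmap.mmap(f.fileno(), 0)
    start, end, tag = RECORD.unpack_from(mm, 1 * RECORD.size)
    print(start, end, tag)  # -> 6 14 2
    mm.close()
```

Fixed-size records keep lookup O(1) by index; in the real compiler the analogous serialization would go through the same machinery ItemKeyStore already uses for item keys.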