Explore TinyLFU cache #16802

@ben-manes

Description

When removing Guava as a dependency, @jasontedor wrote a 256-way LRU cache with a Guava-like API. The design uses a read/write lock per table segment and a single lock around the global linked list that maintains LRU ordering.
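The design described above can be sketched roughly as follows. This is an illustrative toy, not the actual Elasticsearch code: the class and field names are invented, and a `LinkedHashMap` stands in for the global linked list. It shows the two lock layers in question: a read/write lock per segment and one global lock for LRU ordering.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a 256-way segmented LRU cache: each segment
// guards its hash table with a read/write lock, while a single global
// lock protects the structure recording LRU order (oldest first).
class SegmentedLruCache<K, V> {
    static final int SEGMENTS = 256;

    @SuppressWarnings("unchecked")
    final Segment<K, V>[] segments = new Segment[SEGMENTS];
    final ReentrantLock lruLock = new ReentrantLock();               // global ordering lock
    final LinkedHashMap<K, Boolean> lruOrder = new LinkedHashMap<>(); // stand-in for the linked list

    SegmentedLruCache() {
        for (int i = 0; i < SEGMENTS; i++) segments[i] = new Segment<>();
    }

    Segment<K, V> segmentFor(K key) {
        return segments[(key.hashCode() & 0x7fffffff) % SEGMENTS];
    }

    V get(K key) {
        Segment<K, V> s = segmentFor(key);
        s.lock.readLock().lock();                 // even a read acquires a lock
        V value;
        try {
            value = s.table.get(key);
        } finally {
            s.lock.readLock().unlock();
        }
        if (value != null) {
            lruLock.lock();                       // every hit contends on this one lock
            try {
                lruOrder.remove(key);
                lruOrder.put(key, Boolean.TRUE);  // move to most-recently-used position
            } finally {
                lruLock.unlock();
            }
        }
        return value;
    }

    void put(K key, V value) {
        Segment<K, V> s = segmentFor(key);
        s.lock.writeLock().lock();
        try {
            s.table.put(key, value);
        } finally {
            s.lock.writeLock().unlock();
        }
        lruLock.lock();
        try {
            lruOrder.remove(key);
            lruOrder.put(key, Boolean.TRUE);
        } finally {
            lruLock.unlock();
        }
    }

    static class Segment<K, V> {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        final Map<K, V> table = new HashMap<>();
    }
}
```

Note how the `lruLock` is taken on every hit, not just on writes, which is why it becomes the throttling point discussed below.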

There are a few concerns that might be worth addressing. First, a read/write lock is expensive, often much more so than an exclusive lock, and its use on every read is probably best replaced with a ConcurrentHashMap. Second, the LRU lock is a throttling point, as all accesses contend on it. Third, computeIfAbsent is racy, allowing redundant computations with last-insert-wins semantics. Fourth, LRU is sub-optimal for search workloads, where frequency is a more insightful metric.
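The third concern can be demonstrated in isolation. The sketch below (names and key/value choices are illustrative, not from the Elasticsearch code) contrasts a check-then-act pattern, where several threads may each run the loader and the last insert wins, with `ConcurrentHashMap.computeIfAbsent`, whose mapping function is guaranteed to run at most once per key:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

class RedundantComputeDemo {
    // Releases `threads` threads simultaneously against `task` and
    // returns how many times the loader was invoked in total.
    static int race(int threads, Consumer<AtomicInteger> task) {
        AtomicInteger computes = new AtomicInteger();
        CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                try { start.await(); } catch (InterruptedException ignored) {}
                task.accept(computes);
            });
            workers[i].start();
        }
        start.countDown();
        try { for (Thread t : workers) t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return computes.get();
    }

    static int racyComputations(int threads) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        return race(threads, computes -> {
            // Check-then-act: multiple threads can pass the containsKey
            // check, each compute the value, and whichever put() runs
            // last wins.
            if (!map.containsKey("key")) {
                computes.incrementAndGet();  // counts redundant loader runs
                map.put("key", 42);
            }
        });
    }

    static int atomicComputations(int threads) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        return race(threads, computes -> {
            // The mapping function runs atomically for the key's bin,
            // so the loader executes at most once.
            map.computeIfAbsent("key", k -> {
                computes.incrementAndGet();
                return 42;
            });
        });
    }
}
```

The racy variant may report more than one computation under contention; the atomic variant always reports exactly one.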

The easiest (and biased) solution is to adopt Caffeine. Alternatively, a big win would come from integrating TinyLFU into the existing cache. This would increase the hit rate, thereby reducing I/O and GC allocations and improving latencies.
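To make the TinyLFU suggestion concrete, here is a toy version of the core idea: a count-min sketch estimates each key's access frequency, and on eviction the incoming candidate is admitted only if its estimated frequency exceeds the would-be victim's. This is a simplified illustration under assumed parameters (no 4-bit counters, no doorkeeper, no periodic aging reset), not Caffeine's actual implementation:

```java
import java.util.Random;

// Toy TinyLFU-style admission filter backed by a count-min sketch.
// `width` must be a power of two so masking works as a modulus.
class TinyLfuSketch {
    final int[][] counters;
    final int[] seeds;
    final int width;

    TinyLfuSketch(int depth, int width) {
        this.width = width;
        counters = new int[depth][width];
        seeds = new int[depth];
        Random r = new Random(1);
        for (int i = 0; i < depth; i++) seeds[i] = r.nextInt() | 1; // odd multipliers
    }

    int index(int row, Object key) {
        int h = key.hashCode() * seeds[row];
        return (h ^ (h >>> 16)) & (width - 1);
    }

    // Record one access to `key` in every row.
    void increment(Object key) {
        for (int row = 0; row < counters.length; row++)
            counters[row][index(row, key)]++;
    }

    // Count-min estimate: the minimum over rows bounds the true count
    // from above (collisions only inflate counters).
    int estimate(Object key) {
        int min = Integer.MAX_VALUE;
        for (int row = 0; row < counters.length; row++)
            min = Math.min(min, counters[row][index(row, key)]);
        return min;
    }

    // TinyLFU admission: on eviction, keep whichever entry is
    // estimated to be accessed more frequently.
    boolean admit(Object candidate, Object victim) {
        return estimate(candidate) > estimate(victim);
    }
}
```

The payoff for search workloads is that a frequently read entry cannot be pushed out by a burst of one-hit-wonder keys, since those fail the admission check.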
