diff --git a/.github/workflows/gate.yml b/.github/workflows/gate.yml
index 2acb6275..2b3f1a56 100644
--- a/.github/workflows/gate.yml
+++ b/.github/workflows/gate.yml
@@ -2,10 +2,8 @@ name: Build
 
 on:
   push:
-    paths-ignore: [ '**.md' ]
    branches: [ main ]
  pull_request:
-    paths-ignore: [ '**.md' ]
    branches: [ main ]
 
 jobs:
diff --git a/BitFaster.Caching/ReadMe.md b/BitFaster.Caching/ReadMe.md
index d34db786..f4f3acba 100644
--- a/BitFaster.Caching/ReadMe.md
+++ b/BitFaster.Caching/ReadMe.md
@@ -4,12 +4,12 @@ High performance, thread-safe in-memory caching primitives for .NET.
 
 ## ConcurrentLru
 
-`ConcurrentLru` is a light weight drop in replacement for `ConcurrentDictionary`, but with bounded size enforced by the TU-Q eviction policy (similar to [2Q](https://www.vldb.org/conf/1994/P439.PDF)). There are no background threads, no global locks, concurrent throughput is high, lookups are fast and hit rate outperforms a pure LRU in all tested scenarios.
+`ConcurrentLru` is a light weight drop in replacement for `ConcurrentDictionary`, but with bounded size enforced by the TU-Q eviction policy (based on [2Q](https://www.vldb.org/conf/1994/P439.PDF)). There are no background threads, no global locks, concurrent throughput is high, lookups are fast and hit rate outperforms a pure LRU in all tested scenarios.
 
 Choose a capacity and use just like `ConcurrentDictionary`, but with bounded size:
 
 ```csharp
-int capacity = 666;
+int capacity = 128;
 var lru = new ConcurrentLru(capacity);
 
 var value = lru.GetOrAdd("key", (key) => new SomeItem(key));
@@ -17,12 +17,12 @@ var value = lru.GetOrAdd("key", (key) => new SomeItem(key));
 
 ## ConcurrentLfu
 
-`ConcurrentLfu` is a drop in replacement for `ConcurrentDictionary`, but with bounded size enforced by the [W-TinyLFU eviction policy](https://arxiv.org/pdf/1512.00727.pdf). `ConcurrentLfu` has near optimal hit rate. Reads and writes are buffered then replayed asynchronously to mitigate lock contention.
+`ConcurrentLfu` is a drop in replacement for `ConcurrentDictionary`, but with bounded size enforced by the [W-TinyLFU admission policy](https://arxiv.org/pdf/1512.00727.pdf). `ConcurrentLfu` has near optimal hit rate and high scalability. Reads and writes are buffered then replayed asynchronously to mitigate lock contention.
 
 Choose a capacity and use just like `ConcurrentDictionary`, but with bounded size:
 
 ```csharp
-int capacity = 666;
+int capacity = 128;
 var lfu = new ConcurrentLfu(capacity);
 
 var value = lfu.GetOrAdd("key", (key) => new SomeItem(key));