
Quantization: reduce memory footprint without compromising speed and quality #1350

@antonfirsov

Description


All our current quantizers rely on a very heavy cache in order to produce acceptable quality:

```csharp
private readonly ConcurrentDictionary<TPixel, int> distanceCache;
```

With large images, this cache can consume several megabytes of memory every time a large GIF (or palettized PNG) is saved. We need to reduce memory usage without compromising speed or quality.
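To illustrate where the memory goes, here is a minimal sketch of the memoized nearest-palette-color lookup pattern that such a cache implements. The type and method names (`CachingQuantizer`, `GetPaletteIndex`, `FindNearest`) are hypothetical, not ImageSharp's actual API:

```csharp
using System.Collections.Concurrent;

// Hypothetical sketch of the per-pixel distance cache pattern described above.
public class CachingQuantizer
{
    private readonly ConcurrentDictionary<uint, int> distanceCache = new();
    private readonly (byte R, byte G, byte B)[] palette;

    public CachingQuantizer((byte R, byte G, byte B)[] palette) => this.palette = palette;

    public int GetPaletteIndex(byte r, byte g, byte b)
    {
        uint key = (uint)(r << 16 | g << 8 | b);
        // Every distinct color ever seen is cached forever; on images with
        // many unique colors this unbounded growth is the memory problem.
        return distanceCache.GetOrAdd(key, _ => FindNearest(r, g, b));
    }

    private int FindNearest(byte r, byte g, byte b)
    {
        int best = 0, bestDist = int.MaxValue;
        for (int i = 0; i < palette.Length; i++)
        {
            int dr = r - palette[i].R, dg = g - palette[i].G, db = b - palette[i].B;
            int dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }
}
```

The uncached `FindNearest` is a linear scan over the palette, which is why dropping the cache outright costs speed.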

We ran a couple of experiments and exchanged many ideas a couple of months ago, but unfortunately all those discussions got lost in the noise of the Gitter chatroom. I suggest we collect and discuss those ideas again.

Improvements we may consider (from what I remember of the Gitter chat):

  1. Improve the RGB octree implementation so it is fast enough by default that we hopefully don't need a cache at all. Most important: flatten the tree nodes into array(s) of structs instead of heap objects.
  2. Extend the octree to RGBA.
  3. Replace the dictionary with this cache. It produced promising results in my experiments.
  4. New idea from @JimBobSquarePants: replace the dictionary with an LRU cache implementation like the one in BitFaster.Caching.
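For option 1, a minimal sketch of what a flattened octree could look like: nodes live in one contiguous struct array and reference their children by integer index rather than by heap object reference, avoiding per-node allocations and GC pressure. All names here (`OctreeNode`, `FlatOctree`) are illustrative, not ImageSharp's actual types:

```csharp
using System;

// Hypothetical sketch of idea 1: octree nodes stored as structs in a flat array.
public struct OctreeNode
{
    public long RedSum, GreenSum, BlueSum; // accumulated channel sums
    public int PixelCount;                 // pixels that landed in this node
    public int FirstChild;                 // index of first of 8 children, or -1 for a leaf
}

public class FlatOctree
{
    private OctreeNode[] nodes;
    private int count;

    public FlatOctree(int capacity = 1024)
    {
        nodes = new OctreeNode[capacity];
        nodes[0].FirstChild = -1; // node 0 is the root
        count = 1;
    }

    public int NodeCount => count;

    public void Add(byte r, byte g, byte b, int maxDepth = 5)
    {
        int node = 0;
        for (int depth = 0; depth < maxDepth; depth++)
        {
            if (nodes[node].FirstChild < 0)
            {
                // Allocate all 8 children contiguously so a single index reaches them.
                if (count + 8 > nodes.Length) Array.Resize(ref nodes, nodes.Length * 2);
                nodes[node].FirstChild = count;
                for (int i = 0; i < 8; i++) nodes[count + i].FirstChild = -1;
                count += 8;
            }
            // Octant from one bit of each channel at this depth.
            int shift = 7 - depth;
            int octant = (((r >> shift) & 1) << 2) | (((g >> shift) & 1) << 1) | ((b >> shift) & 1);
            node = nodes[node].FirstChild + octant;
        }
        nodes[node].RedSum += r;
        nodes[node].GreenSum += g;
        nodes[node].BlueSum += b;
        nodes[node].PixelCount++;
    }
}
```

Because child nodes of one parent are contiguous, descending a level is one index addition, which should also play well with the CPU cache compared to chasing object references.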

@saucecontrol since I remember you had really valuable ideas here, I would really appreciate your feedback on all four options, and apologies if I'm making you repeat yourself because of my terrible memory 😄 If I remember correctly, you also had some promising results in your repos.

Bravely assigning milestone 1.1 for now.
