kvcache-ai (kvcache.ai)

KVCache.AI is a joint research project between MADSys and top industry collaborators, focusing on efficient LLM serving.

Pinned repositories

  1. Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4.3k stars · 433 forks

  2. ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15.9k stars · 1.2k forks

  3. TrEnv-X (Public)

    Go · 68 stars · 2 forks

Repositories

All 9 repositories
  • Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4,300 stars · Apache-2.0 license · 433 forks · 188 open issues (8 need help) · 60 open PRs · Updated Nov 24, 2025
  • ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15,898 stars · Apache-2.0 license · 1,154 forks · 652 open issues (1 needs help) · 5 open PRs · Updated Nov 24, 2025
  • sglang (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 3 stars · Apache-2.0 license · 3,500 forks · 0 open issues · 1 open PR · Updated Nov 21, 2025
  • sglang_awq (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 1 star · Apache-2.0 license · 3,506 forks · 0 open issues · 0 open PRs · Updated Nov 19, 2025
  • TrEnv-X (Public)
    Go · 68 stars · Apache-2.0 license · 2 forks · 0 open issues · 0 open PRs · Updated Sep 15, 2025
  • sglang-npu (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · Apache-2.0 license · 3,500 forks · 0 open issues · 0 open PRs · Updated Aug 12, 2025
  • DeepEP_fault_tolerance (Public, forked from deepseek-ai/DeepEP)

    DeepEP: an efficient expert-parallel communication library that supports fault tolerance

    CUDA · 3 stars · MIT license · 1,005 forks · 0 open issues · 0 open PRs · Updated Jul 31, 2025
  • custom_flashinfer (Public, forked from flashinfer-ai/flashinfer)

    FlashInfer: Kernel Library for LLM Serving

    CUDA · 5 stars · Apache-2.0 license · 582 forks · 0 open issues · 0 open PRs · Updated Jul 24, 2025
  • vllm (Public, forked from vllm-project/vllm)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 14 stars · Apache-2.0 license · 11,597 forks · 0 open issues · 0 open PRs · Updated Mar 27, 2025
