📚A curated list of Awesome LLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.
Updated Jun 20, 2025 - Python
⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim; 1.8x~3x speedup over SDPA. A minimal illustration of the Split-D idea is sketched below.
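The core idea behind Split-D can be illustrated in plain PyTorch: split the head dimension into fixed-size chunks so the working set per chunk stays constant as headdim grows. This is only a conceptual sketch, not FFPA's CUDA kernel; the function name `attention_split_d` and the chunk size are assumptions for illustration.

```python
# Conceptual sketch of splitting attention along the head dimension (Split-D).
# Not FFPA's actual kernel; names and chunk size are illustrative assumptions.
import torch

def attention_split_d(q, k, v, d_chunk=64):
    # q, k, v: (batch, heads, seq, headdim)
    scale = q.shape[-1] ** -0.5
    # Accumulate the score matrix over head-dim chunks: S = sum_c Qc @ Kc^T
    scores = torch.zeros(q.shape[:-1] + (k.shape[-2],), dtype=q.dtype, device=q.device)
    for start in range(0, q.shape[-1], d_chunk):
        qc = q[..., start:start + d_chunk]
        kc = k[..., start:start + d_chunk]
        scores += qc @ kc.transpose(-2, -1)
    p = torch.softmax(scores * scale, dim=-1)
    # The output can likewise be produced one head-dim chunk at a time: Oc = P @ Vc
    out = torch.empty_like(q)
    for start in range(0, v.shape[-1], d_chunk):
        out[..., start:start + d_chunk] = p @ v[..., start:start + d_chunk]
    return out

# Matches standard scaled dot-product attention on a small example
q = torch.randn(1, 2, 8, 256)
k = torch.randn(1, 2, 8, 256)
v = torch.randn(1, 2, 8, 256)
ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
assert torch.allclose(attention_split_d(q, k, v), ref, atol=1e-4)
```

In an actual kernel, each chunk maps to a shared-memory tile, which is why per-stage SRAM usage does not grow with headdim.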