diff --git a/_posts/2024-09-26-pytorch-native-architecture-optimization.md b/_posts/2024-09-26-pytorch-native-architecture-optimization.md
index fcf5122e970e..dd901a5a2517 100644
--- a/_posts/2024-09-26-pytorch-native-architecture-optimization.md
+++ b/_posts/2024-09-26-pytorch-native-architecture-optimization.md
@@ -4,6 +4,7 @@ title: "PyTorch Native Architecture Optimization: torchao"
author: Team PyTorch
---

+
We’re happy to officially launch torchao, a PyTorch native library that makes models faster and smaller by leveraging low-bit dtypes, quantization and sparsity. [torchao](https://github.com/pytorch/ao) is an accessible toolkit of techniques written (mostly) in easy-to-read PyTorch code, spanning both inference and training. This blog will help you pick which techniques matter for your workloads.

We benchmarked our techniques on popular GenAI models like Llama 3 and diffusion models and saw minimal drops in accuracy. Unless otherwise noted, the baselines are bf16 run on an A100 80GB GPU.

@@ -31,32 +32,37 @@ Below we'll walk through some of the techniques available in torchao you can app
[Our inference quantization algorithms](https://github.com/pytorch/ao/tree/main/torchao/quantization) work over arbitrary PyTorch models that contain nn.Linear layers. Weight-only and dynamic activation quantization for various dtypes and sparse layouts can be chosen using our top-level quantize\_ API:

+```py
from torchao.quantization import (
    quantize_,
    int4_weight_only,
)
quantize_(model, int4_weight_only())
+```

Sometimes quantizing a layer can make it slower because of overhead, so if you’d rather we just pick how to quantize each layer in a model for you, you can instead run:

+```py
model = torchao.autoquant(torch.compile(model, mode='max-autotune'))
+```

The quantize\_ API has a few different options depending on whether your model is compute bound or memory bound:

+```py
from torchao.quantization import (
-    \# Memory bound models
+    # Memory bound models
    int4_weight_only,
    int8_weight_only,

-    \# Compute bound models
+    # Compute bound models
    int8_dynamic_activation_int8_semi_sparse_weight,
    int8_dynamic_activation_int8_weight,

-    \# Device capability 8.9+
+    # Device capability 8.9+
    float8_weight_only,
    float8_dynamic_activation_float8_weight,
)
-
+```

We also have extensive benchmarks on diffusion models in collaboration with the HuggingFace diffusers team in [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao), where we demonstrated a 53.88% speedup on Flux.1-Dev and a 27.33% speedup on CogVideoX-5b.

@@ -72,7 +78,7 @@ But also can do things like quantize weights to int4 and the kv cache to int8 to
Post-training quantization, especially at less than 4 bits, can suffer from serious accuracy degradation. Using [Quantization Aware Training](https://pytorch.org/blog/quantization-aware-training/) (QAT), we’ve managed to recover up to 96% of the accuracy degradation on HellaSwag. We’ve integrated this as an end-to-end recipe in torchtune with a minimal [tutorial](https://github.com/pytorch/ao/tree/main/torchao/quantization/prototype/qat).

-![](/assets/images/Figure_3.png){:style="width:100%"}
+![](/assets/images/Figure_3.jpg){:style="width:100%"}

# Training

@@ -115,8 +121,6 @@ We’ve been actively working on making sure torchao works well in some of the m
5. In [torchchat](https://github.com/pytorch/torchchat) for post training quantization
6. In SGLang for [int4 and int8 post training quantization](https://github.com/sgl-project/sglang/pull/1341)

-#
-
## Conclusion

If you’re interested in making your models faster and smaller for training or inference, we hope you’ll find torchao useful and easy to integrate.
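
If you want to try the inference flow above end to end, a minimal sketch might look like the following. This is an illustration rather than an official recipe from the post: it assumes torchao and a recent PyTorch are installed and a CUDA GPU is available, and it uses a toy model in place of a real network; quantize\_, int4\_weight\_only, autoquant and torch.compile are the APIs shown in the post.

```py
import torch
from torchao.quantization import quantize_, int4_weight_only

# A toy stand-in for a real model; quantize_ rewrites nn.Linear layers in place.
# Assumption: a CUDA GPU and bfloat16 weights, matching the bf16 baselines above.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).to(device="cuda", dtype=torch.bfloat16)

# Memory bound case from the post: weight-only int4 quantization.
quantize_(model, int4_weight_only())

# Alternative from the post: let torchao pick how to quantize each layer
# of a compiled model.
# import torchao
# model = torchao.autoquant(torch.compile(model, mode="max-autotune"))

x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16)
with torch.no_grad():
    print(model(x).shape)
```

Swapping int4\_weight\_only() for any of the other configs listed earlier (int8\_weight\_only, int8\_dynamic\_activation\_int8\_weight, or the float8 variants on devices with capability 8.9+) goes through the same quantize\_ call.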