Added quantization for the visual projector LLAVA, Qwen2VL #11644
Merged
# Quantizing CLIP Visual Projector
This tool quantizes the CLIP visual projector model. Quantization reduces the precision of the model's weights, which can significantly decrease model size and improve inference speed, often with minimal impact on output quality.
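To make "reducing precision" concrete, here is a minimal, self-contained sketch of the kind of per-block round trip that ggml's `q8_0` format performs: pick one scale for a block of 32 weights, store each weight as a signed 8-bit integer, and reconstruct approximate values on load. This is a simplified illustration, not ggml's actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
    // One block of 32 float weights (toy data standing in for a tensor row).
    float w[32];
    for (int i = 0; i < 32; ++i) {
        w[i] = std::sin(0.37f * i);
    }

    // Per-block scale: map the largest magnitude onto the int8 range.
    float amax = 0.0f;
    for (float v : w) {
        amax = std::max(amax, std::fabs(v));
    }
    const float d  = amax / 127.0f;               // stored once per block
    const float id = d != 0.0f ? 1.0f / d : 0.0f; // guard against all-zero blocks

    // Quantize: each weight becomes one signed byte.
    int8_t q[32];
    for (int i = 0; i < 32; ++i) {
        q[i] = (int8_t) std::lround(w[i] * id);
    }

    // Dequantize and measure the worst-case rounding error.
    float max_err = 0.0f;
    for (int i = 0; i < 32; ++i) {
        max_err = std::max(max_err, std::fabs(w[i] - q[i] * d));
    }
    printf("scale = %.6f, max abs error = %.6f\n", d, max_err);
    return 0;
}
```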
## Usage
To quantize a CLIP visual projector model, use the following command:
```sh
./bin/llama-llava-clip-quantize-cli /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf <type>
```
After quantization, the visual projector can be used as before with the existing LLaVA-family CLIs (LLaVA, Qwen2VL, etc.), e.g. by passing it to `llama-llava-cli` via the `--mmproj` flag.
### Arguments
- `/path/to/ggml-model-f32.gguf`: the path to the input model file in FP32 or FP16 format.
- `/path/to/ggml-model-quantized.gguf`: the path where the quantized model will be saved.
- `<type>`: the quantization type to apply, given as an integer corresponding to one of the quantization types defined in `enum ggml_type`.
### Quantization Types

The following quantization types are supported, based on the `enum ggml_type` definition (excerpted below):
- `2` - `q4_0`: 4-bit quantization with a single scale per block.
- `3` - `q4_1`: 4-bit quantization with a scale and a minimum per block.
- `6` - `q5_0`: 5-bit quantization with a single scale per block.
- `7` - `q5_1`: 5-bit quantization with a scale and a minimum per block.
- `8` - `q8_0`: 8-bit quantization with a single scale per block.
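For reference, the integer values above come straight from `enum ggml_type` in `ggml.h`; the relevant entries look like this (abridged excerpt, comments added):

```cpp
// Abridged from ggml.h: the <type> argument is the numeric value of one
// of these enumerators.
enum ggml_type {
    GGML_TYPE_F32  = 0,
    GGML_TYPE_F16  = 1,
    GGML_TYPE_Q4_0 = 2,
    GGML_TYPE_Q4_1 = 3,
    // 4 and 5 were Q4_2/Q4_3, since removed upstream
    GGML_TYPE_Q5_0 = 6,
    GGML_TYPE_Q5_1 = 7,
    GGML_TYPE_Q8_0 = 8,
    // ... many more entries follow in the real header
};
```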
### Example

To quantize a model using the `q4_0` quantization type, run:
```sh
./bin/llama-llava-clip-quantize-cli /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf 2
```
This command generates a quantized model at `/path/to/ggml-model-quantized.gguf` using the `q4_0` quantization method.
## Notes

- Quantization can reduce model accuracy, depending on the chosen quantization type. It is recommended to evaluate the quantized model on your specific task to ensure it meets your requirements.
- The quantized model is typically smaller and faster to run, making it better suited to deployment in resource-constrained environments.
The new CLI itself is a thin wrapper around `clip_model_quantize`:

```cpp
#include "arg.h"
#include "base64.hpp"
#include "log.h"
#include "common.h"
#include "sampling.h"
#include "clip.h"
#include "llava.h"
#include "llama.h"
#include "ggml.h"

#include <cstdio>
#include <cstdlib>
#include <string>

static void print_usage(int argc, char ** argv) {
    (void) argc;

    fprintf(stderr, "usage: %s /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf type\n", argv[0]);
    fprintf(stderr, "  type = 2 - q4_0\n");
    fprintf(stderr, "  type = 3 - q4_1\n");
    fprintf(stderr, "  type = 6 - q5_0\n");
    fprintf(stderr, "  type = 7 - q5_1\n");
    fprintf(stderr, "  type = 8 - q8_0\n");
}

int main(int argc, char ** argv) {
    if (argc != 4) {
        print_usage(argc, argv);
        return 1;
    }

    const std::string fname_inp = argv[1];
    const std::string fname_out = argv[2];

    // quantization type: the numeric value of an enum ggml_type entry
    const int itype = atoi(argv[3]);

    const int64_t t_main_start_us = ggml_time_us();

    int64_t t_quantize_us = 0;

    // quantize the model
    {
        const int64_t t_start_us = ggml_time_us();

        if (!clip_model_quantize(fname_inp.c_str(), fname_out.c_str(), itype)) {
            fprintf(stderr, "%s: failed to quantize model from '%s'\n", __func__, fname_inp.c_str());
            return 1;
        }

        t_quantize_us = ggml_time_us() - t_start_us;
    }

    // report timing
    {
        const int64_t t_main_end_us = ggml_time_us();

        printf("\n");
        printf("%s: quantize time = %8.2f ms\n", __func__, t_quantize_us / 1000.0f);
        printf("%s: total time    = %8.2f ms\n", __func__, (t_main_end_us - t_main_start_us) / 1000.0f);
    }

    return 0;
}
```
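As a rough guide to the size savings, each supported format packs 32 weights per block plus a little per-block metadata. The sketch below computes bits per weight and the compression ratio versus FP32; the block sizes are hardcoded from ggml's `block_q4_0` through `block_q8_0` struct layouts, so treat them as assumptions to verify against your ggml version:

```cpp
#include <cstdio>

// Bytes per 32-weight block, taken from ggml's block struct layouts
// (block_q4_0, block_q4_1, block_q5_0, block_q5_1, block_q8_0).
struct QuantInfo {
    const char * name;
    int          bytes_per_block;
};

int main() {
    const int weights_per_block = 32; // QK4_0 == QK5_0 == QK8_0 == 32

    const QuantInfo types[] = {
        { "q4_0", 18 }, // fp16 scale + 16 bytes of packed 4-bit values
        { "q4_1", 20 }, // fp16 scale + fp16 min + 16 bytes
        { "q5_0", 22 }, // fp16 scale + 4 bytes of high bits + 16 bytes
        { "q5_1", 24 }, // fp16 scale + fp16 min + 4 + 16 bytes
        { "q8_0", 34 }, // fp16 scale + 32 bytes of 8-bit values
    };

    for (const QuantInfo & t : types) {
        const double bpw   = 8.0 * t.bytes_per_block / weights_per_block;
        const double ratio = 32.0 / bpw; // relative to 32-bit floats
        printf("%s: %4.1f bits/weight, ~%.1fx smaller than f32\n", t.name, bpw, ratio);
    }
    return 0;
}
```

For example, `q4_0` works out to 4.5 bits per weight, roughly a 7x reduction over FP32, while `q8_0` at 8.5 bits per weight trades less compression for lower quantization error.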