add Qwen Image support #851
Conversation
|
Where/how does it crash with Vulkan? |
Testing it here, I get a crash; gdb shows just this: I'll try on a debug build. @jeffbolznv, anything more specific I could check? |
|
@SeanTater @wbruna This is likely because GGML Vulkan doesn’t support im2col_3d. I’ve updated GGML, so you can pull the latest code and try again. |
|
@leejet, unfortunately a3a2b2d (with ggml 553c44706c) crashes too: the last output lines and GDB backtrace are attached. |
|
What are the src and dst types for the GET_ROWS that crashes? |
Interesting... the model files are qwen-image-Q4_0.gguf and Qwen2.5-VL-7B-Instruct-IQ4_XS.gguf. |
|
Thanks. We're missing the K quants but I don't think there's any reason for this. I'll add it. |
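For context, the embedding lookup in the text encoder is where a GET_ROWS node with a quantized src0 comes from: the weight keeps its GGUF quant type, src1 holds the token ids, and the output is F32, so the backend running the node needs a get_rows kernel for that quant format. A minimal sketch, with illustrative names and shapes:

```cpp
#include "ggml.h"

// Hypothetical helper showing why the text-encoder embedding lookup becomes a
// GET_ROWS op with a quantized src0: the weight keeps its GGUF quant type
// (a k-quant in the failing case), src1 holds the token ids, and dst is F32.
static struct ggml_tensor * embed_tokens(struct ggml_context * ctx,
                                         struct ggml_tensor * token_embd,   // [n_embd, n_vocab], e.g. GGML_TYPE_Q4_K
                                         struct ggml_tensor * token_ids) {  // [n_tokens], GGML_TYPE_I32
    return ggml_get_rows(ctx, token_embd, token_ids);
}
```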
|
Please try ggml-org/llama.cpp#16235. |
|
After applying the change from Jeff's PR in llama.cpp to the ggml submodule in stable-diffusion.cpp, it does run, no crash. But I get garbled output, and even though it does recognize the devices and swears it places them in VRAM, rocm-smi disagrees. It does finish in 17 seconds per step, as opposed to about 70 for successful CPU sampling, but I think that may be a red herring, since the output is garbage and the GPU is idle. |
|
After applying ggml-org/llama.cpp@9073a73 and ggml-org/llama.cpp#16235, I got a broken image too:
VAE tiling also crashes with a |
|
I'm seeing similar corruption. I'll try to debug it. |
|
I updated ggml to the latest commit and optimized the handling of embedding weights, so there’s no need to use get_rows with k-quant weights. I’m not sure if this will fix the Vulkan issue. |
|
I don't think it's related to get_rows. Setting GGML_VK_DISABLE_FUSION=1 seems to fix it. I'll continue to narrow it down. |
|
Oops, I think I mixed up my experiments. I think it's forcing GGML_PREC_F32 for matrix-matrix multiplies that's fixing it. I don't know which multiplies; I just forced it for all of them. |
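For reference, this is roughly what the "force GGML_PREC_F32" experiment looks like at graph-building time; a sketch with illustrative names, not the actual patch:

```cpp
#include "ggml.h"

// Rough sketch of forcing fp32 accumulation for one mat-mul node; the
// experiment above applied this to every matrix-matrix multiply in the graph.
static struct ggml_tensor * mul_mat_prec_f32(struct ggml_context * ctx,
                                             struct ggml_tensor * w,   // weight, possibly quantized
                                             struct ggml_tensor * x) { // activations
    struct ggml_tensor * cur = ggml_mul_mat(ctx, w, x);
    ggml_mul_mat_set_prec(cur, GGML_PREC_F32); // ask the backend to accumulate in fp32
    return cur;
}
```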
|
@leejet, testing b769da2 with the Lightning distill:
The same command with ROCm, just replacing the binary, renders a black image. The ROCm build command is:
full ROCm output (Vulkan is identical except for the hardware info and the timings) |
|
Hello @leejet, sorry for the delay. My params are very simple and now fully reproducible every time. Here is the terminal log: And here is the output, which is a black square: Only CUDA is broken. CPU and Vulkan work fine. |
|
With ROCm I also get a black image, BUT the first step does produce an image, so it's something with sampling, I guess. Maybe on CUDA too, since ROCm is similar to CUDA, or rather partly derived from it. So maybe try to generate a 1-step image with CUDA and see if it's not black. |
|
I tried the q4_K_S quantization and reproduced the black image issue. It’s likely due to a precision problem in the CUDA computations related to q4_K_S. |
|
The full q8_0 model won't fit on my card, but the Pruning 13B q8_0 works on ROCm, too. |
|
I wonder what ops it could be besides mat mul, since it works on Vulkan with q4_K_S. |
|
I had to implement get_rows for this; I think CUDA may also be missing it. BTW, it's a shame that sd is still using the ggml path that doesn't automatically fall back on unsupported ops. |
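As a rough illustration of what an automatic fallback buys (illustrative names, not sd.cpp's code): a scheduler-style path can ask each backend whether it supports a node and send unsupported ops to the CPU instead of aborting.

```cpp
#include "ggml-backend.h"

// Illustrative sketch of the check an op-fallback path is built on: ask a
// backend whether it can run a node, and route it to the CPU backend otherwise.
static ggml_backend_t pick_backend_for(ggml_backend_t gpu,
                                       ggml_backend_t cpu,
                                       const struct ggml_tensor * node) {
    // e.g. a GET_ROWS node whose src0 is a k-quant the GPU backend has no
    // kernel for would land on the CPU here instead of crashing.
    return ggml_backend_supports_op(gpu, node) ? gpu : cpu;
}
```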
|
In sd.cpp, if a type isn’t one of {GGML_TYPE_F16, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0}, it’ll first get converted to GGML_TYPE_F32 on the CPU before calling get_rows. |
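A minimal sketch of that whitelist as described above (not the actual sd.cpp code):

```cpp
#include "ggml.h"

// Types handed to get_rows directly; anything else (k-quants, i-quants, ...)
// is assumed to be dequantized to GGML_TYPE_F32 on the CPU first.
static bool get_rows_type_supported(enum ggml_type t) {
    switch (t) {
        case GGML_TYPE_F16:
        case GGML_TYPE_Q8_0:
        case GGML_TYPE_Q5_1:
        case GGML_TYPE_Q5_0:
        case GGML_TYPE_Q4_1:
        case GGML_TYPE_Q4_0:
            return true;
        default: // e.g. GGML_TYPE_Q4_K, GGML_TYPE_IQ4_XS
            return false;
    }
}
```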
|
@leejet @jeffbolznv I tried using q4_0 instead of q4_K_S and it worked on CUDA!!!
I tried using q2_k as well, and it also generated a black image. Maybe all the k-quants have this issue. |
|
If we can narrow it down to a specific offending tensor, maybe we can force-convert that to a different type. edit2: it fails when I load a "q4_0" model, but it works when I manually set the wtype to convert to q4. |
|
I’ve located and fixed the issue. It’s working fine on my side now; you can test it again on your end, @LostRuins @wbruna. q2_k with CUDA:
|
common.hpp
Outdated
// The purpose of the scale here is to prevent NaN issues in certain situations.
// For example, when using Vulkan without enabling force_prec_f32,
// or when using CUDA but the weights are k-quants.
float scale = 1.f / 128.f;
x = ggml_scale(ctx, x, scale);
x = net_2->forward(ctx, x);  // [ne3, ne2, ne1, dim_out]
x = ggml_scale(ctx, x, 1.f / scale);
Curious which part in the CUDA backend causes the issue here? I assume you are working around some FP overflow?
It’s likely that ggml_mul_mat has a precision issue when the weights are k-quants.
I wonder why Jeff's ggml_mul_mat_set_prec fix worked for Vulkan but not CUDA; could CUDA be ignoring it?
The CUDA approach to matmul is pretty different (see #851 (comment)). Anecdotally it seems to be less prone to precision issues, but I guess it can still run into problems. |
|
Alright, I'm building all your latest changes on CUDA and will let you know how it goes. |
@LostRuins Are there still any issues on your side during testing? |
|
My previous build failed due to an unrelated issue, so I am rebuilding it again. It takes me 1 hour for each CUDA build lol. |
|
Yeah… building ggml with CUDA always takes a lot of time. |
|
@leejet seems to be working well now, thanks!
|
|
It looks like this PR can be merged now. Thanks, everyone! |
|
Just a shoutout to anyone wanting to try this out on low-powered hardware: I am successfully running this on a 2019 Ryzen 5 3400G with an iGPU (Vega RX 11), with 16 GB of RAM allocated as VRAM and 24 GB of GTT memory, under Ubuntu (on a total of 64 GB of system RAM). On the Q4_0 quant I am generating the same image of a cat as above at 512 px in 504 s (as opposed to the 16.85 s achieved with the 4090 above). Using the Q8_0 quant at 1024 px, things become very slow: all 16 GB of VRAM used + 8 GB of GTT and another 21 GB offloaded to system RAM, so a total of ~45 GB of memory used and around 330 s/it. Thank you @leejet! edit: it's running on Vulkan |
* docs: add sd.cpp-webui as an available frontend (leejet#738)
* fix: correct head dim check and L_k padding of flash attention (leejet#736)
* fix: convert f64 to f32 and i64 to i32 when loading weights
* docs: add LocalAI to README's UIs (leejet#741)
* sync: update ggml
* sync: update ggml
* feat: upgrade musa sdk to rc4.2.0 (leejet#732)
* feat: change image dimensions requirement for DiT models (leejet#742)
* feat: add missing models and parameters to image metadata (leejet#743)
* feat: add new scheduler types, clip skip and vae to image embedded params
  - If a non default scheduler is set, include it in the 'Sampler' tag in the data embedded into the final image.
  - If a custom VAE path is set, include the vae name (without path and extension) in embedded image params under a `VAE:` tag.
  - If a custom Clip skip is set, include that Clip skip value in embedded image params under a `Clip skip:` tag.
  * feat: add separate diffusion and text models to metadata
  ---------
  Co-authored-by: one-lithe-rune <[email protected]>
* refector: optimize the usage of tensor_types
* feat: support build against system installed GGML library (leejet#749)
* chore: avoid setting GGML_MAX_NAME when building against external ggml (leejet#751)
  An external ggml will most likely have been built with the default GGML_MAX_NAME value (64), which would be inconsistent with the value set by our build (128). That would be an ODR violation, and it could easily cause memory corruption issues due to the different sizeof(struct ggml_tensor) values. For now, when linking against an external ggml, we demand it has been patched with a bigger GGML_MAX_NAME, since we can't check against a value defined only at build time.
* Conv2D direct support (leejet#744)
  * Conv2DDirect for VAE stage
  * Enable only for Vulkan, reduced duplicated code
  * Cmake option to use conv2d direct
  * conv2d direct always on for opencl
  * conv direct as a flag
  * fix merge typo
  * Align conv2d behavior to flash attention's
  * fix readme
  * add conv2d direct for controlnet
  * add conv2d direct for esrgan
  * clean code, use enable_conv2d_direct/get_all_blocks
  * format code
  ---------
  Co-authored-by: leejet <[email protected]>
* sync: update ggml, make cuda im2col a little faster
* chore: add Nvidia 30 series (cuda arch 86) to build
* feat: throttle model loading progress updates (leejet#782)
  Some terminals have slow display latency, so frequent output during model loading can actually slow down the process. Also, since tensor loading times can vary a lot, the progress display now shows the average across past iterations instead of just the last one.
* docs: add missing dash to docs/chroma.md (leejet#771)
* docs: add compile option needed by Ninja (leejet#770)
* feat: show usage on unknown arg (leejet#767)
* fix: typo in the verbose long flag (leejet#783)
* feat: add wan2.1/2.2 support (leejet#778)
  * add wan vae suppport
  * add wan model support
  * add umt5 support
  * add wan2.1 t2i support
  * make flash attn work with wan
  * make wan a little faster
  * add wan2.1 t2v support
  * add wan gguf support
  * add offload params to cpu support
  * add wan2.1 i2v support
  * crop image before resize
  * set default fps to 16
  * add diff lora support
  * fix wan2.1 i2v
  * introduce sd_sample_params_t
  * add wan2.2 t2v support
  * add wan2.2 14B i2v support
  * add wan2.2 ti2v support
  * add high noise lora support
  * sync: update ggml submodule url
  * avoid build failure on linux
  * avoid build failure
  * update ggml
  * update ggml
  * fix sd_version_is_wan
  * update ggml, fix cpu im2col_3d
  * fix ggml_nn_attention_ext mask
  * add cache support to ggml runner
  * fix the issue of illegal memory access
  * unify image loading processing
  * add wan2.1/2.2 FLF2V support
  * fix end_image mask
  * update to latest ggml
  * add GGUFReader
  * update docs
* feat: add support for timestep boundary based automatic expert routing in Wan MoE (leejet#779)
  * Wan MoE: Automatic expert routing based on timestep boundary
  * unify code style and fix some issues
  ---------
  Co-authored-by: leejet <[email protected]>
* feat: add flow shift parameter (for SD3 and Wan) (leejet#780)
  * Add flow shift parameter (for SD3 and Wan)
  * unify code style and fix some issues
  ---------
  Co-authored-by: leejet <[email protected]>
* docs: update docs and help message
* chore: update to c++17
* docs: update docs/wan.md
* fix: add flash attn support check (leejet#803)
* feat: support incrementing ref image index (omni-kontext) (leejet#755)
  * kontext: support ref images indices
  * lora: support x_embedder
  * update help message
  * Support for negative indices
  * support for OmniControl (offsets at index 0)
  * c++11 compat
  * add --increase-ref-index option
  * simplify the logic and fix some issues
  * update README.md
  * remove unused variable
  ---------
  Co-authored-by: leejet <[email protected]>
* feat: add detailed tensor loading time stat (leejet#793)
* fix: clarify lora quant support and small fixes (leejet#792)
* fix: accept NULL in sd_img_gen_params_t::input_id_images_path (leejet#809)
* chore: update flash attention warnings (leejet#805)
* fix: use {} for params init instead of memset (leejet#781)
* chore: remove sd3 flash attention warn (leejet#812)
* feat: use log_printf to print ggml logs (leejet#545)
* chore: add install() support in CMakeLists.txt (leejet#540)
* feat: add SmoothStep Scheduler (leejet#813)
* feat: add sd3 flash attn support (leejet#815)
* fix: make tiled VAE reuse the compute buffer (leejet#821)
* feat: reduce CLIP memory usage with no embeddings (leejet#768)
* fix: make weight override more robust against ggml changes (leejet#760)
* fix: do not force VAE type to f32 on SDXL (leejet#716)
  This seems to be a leftover from the initial SDXL support: it's not enough to avoid NaN issues, and it's not needed for the fixed sdxl-vae-fp16-fix.
* feat: use Euler sampling by default for SD3 and Flux (leejet#753)
  Thank you for your contribution.
* fix: harden for large files (leejet#643)
* feat: Add SYCL Dockerfile (leejet#651)
* feat: increase work_ctx memory buffer size (leejet#814)
* docs: update docs
* feat: add VAE encoding tiling support and adaptive overlap (leejet#484)
  * implement tiling vae encode support
  * Tiling (vae/upscale): adaptative overlap
  * Tiling: fix edge case
  * Tiling: fix crash when less than 2 tiles per dim
  * remove extra dot
  * Tiling: fix edge cases for adaptative overlap
  * tiling: fix edge case
  * set vae tile size via env var
  * vae tiling: refactor again, base on smaller buffer for alignment
  * Use bigger tiles for encode (to match compute buffer size)
  * Fix edge case when tile is bigger than latent
  * non-square VAE tiling (#3)
  * refactor tile number calculation
  * support non-square tiles
  * add env var to change tile overlap
  * add safeguards and better error messages for SD_TILE_OVERLAP
  * add safeguards and include overlapping factor for SD_TILE_SIZE
  * avoid rounding issues when specifying SD_TILE_SIZE as a factor
  * lower SD_TILE_OVERLAP limit
  * zero-init empty output buffer
  * Fix decode latent size
  * fix encode
  * tile size params instead of env
  * Tiled vae parameter validation (#6)
  * avoid crash with invalid tile sizes, use 0 for default
  * refactor default tile size, limit overlap factor
  * remove explicit parameter for relative tile size
  * limit encoding tile to latent size
  * unify code style and format code
  * update docs
  * fix get_tile_sizes in decode_first_stage
  ---------
  Co-authored-by: Wagner Bruna <[email protected]>
  Co-authored-by: leejet <[email protected]>
* feat: add vace support (leejet#819)
  * add wan vace t2v support
  * add --vace-strength option
  * add vace i2v support
  * fix the processing of vace_context
  * add vace v2v support
  * update docs
* feat: optimize tensor loading time (leejet#790)
  * opt tensor loading
  * fix build failure
  * revert the changes
  * allow the use of n_threads
  * fix lora loading
  * optimize lora loading
  * add mutex
  * use atomic
  * fix build
  * fix potential duplicate issue
  * avoid duplicate lookup of lora tensor
  * fix progeress bar
  * remove unused remove_duplicates
  ---------
  Co-authored-by: leejet <[email protected]>
* refactor: simplify the logic of pm id image loading (leejet#827)
* feat: add sgm_uniform scheduler, simple scheduler, and support for NitroFusion (leejet#675)
  * feat: Add timestep shift and two new schedulers
  * update readme
  * fix spaces
  * format code
  * simplify SGMUniformSchedule
  * simplify shifted_timestep logic
  * avoid conflict
  ---------
  Co-authored-by: leejet <[email protected]>
* refactor: move tiling cacl and debug print into the tiling code branch (leejet#833)
* refactor: simplify DPM++ (2S) Ancestral (leejet#667)
* chore: set release tag by commit count
* chore: fix workflow (leejet#836)
* fix: avoid multithreading issues in the model loader
* fix: avoid segfault for pix2pix models without reference images (leejet#766)
  * fix: avoid segfault for pix2pix models with no reference images
  * fix: default to empty reference on pix2pix models to avoid segfault
  * use resize instead of reserve
  * format code
  ---------
  Co-authored-by: leejet <[email protected]>
* refactor: remove unused --normalize-input parameter (leejet#835)
* fix: correct tensor deduplication logic (leejet#844)
* docs: include Vulkan compatibility for LoRA quants (leejet#845)
* docs: HipBLAS / ROCm build instruction fix (leejet#843)
* fix: tensor loading thread count (leejet#854)
* fix: optimize the handling of CLIP embedding weight (leejet#840)
* sync: update ggml
* sync: update ggml
* fix: optimize the handling of embedding weight (leejet#859)
* feat: add support for Flux Controls and Flex.2 (leejet#692)
* docs: update README.md (leejet#866)
* chore: fix dockerfile libgomp1 dependency + improvements (leejet#852)
* fix: ensure directory iteration results are sorted by filename (leejet#858)
* chore: fix vulkan ci (leejet#878)
* feat: add support for more esrgan models & x2 & x1 models (leejet#855)
* feat: add a stand-alone upscale mode (leejet#865)
  * feat: add a stand-alone upscale mode
  * fix prompt option check
  * format code
  * update README.md
  ---------
  Co-authored-by: leejet <[email protected]>
* refactor: deal with default img-cfg-scale at the library level (leejet#869)
* feat: add Qwen Image support (leejet#851)
  * add qwen tokenizer
  * add qwen2.5 vl support
  * mv qwen.hpp -> qwenvl.hpp
  * add qwen image model
  * add qwen image t2i pipeline
  * fix qwen image flash attn
  * add qwen image i2i pipline
  * change encoding of vocab_qwen.hpp to utf8
  * fix get_first_stage_encoding
  * apply jeffbolz f32 patch leejet#851 (comment)
  * fix the issue that occurs when using CUDA with k-quants weights
  * optimize the handling of the FeedForward precision fix
  * to_add_out precision fix
  * update docs
* fix: resolve VAE tiling problem in Qwen Image (leejet#873)
* fix: avoid generating black images when running T5 on the GPU (leejet#882)
* fix: correct canny preprocessor (leejet#861)
* fix: better progress display for second-order samplers (leejet#834)
* feat: add Qwen Image Edit support (leejet#877)
  * add ref latent support for qwen image
  * optimize clip_preprocess and fix get_first_stage_encoding
  * add qwen2vl vit support
  * add qwen image edit support
  * fix qwen image edit pipeline
  * add mmproj file support
  * support dynamic number of Qwen image transformer blocks
  * set prompt_template_encode_start_idx every time
  * to_add_out precision fix
  * to_out.0 precision fix
  * update docs
---------
Co-authored-by: Daniele <[email protected]>
Co-authored-by: Erik Scholz <[email protected]>
Co-authored-by: leejet <[email protected]>
Co-authored-by: Ettore Di Giacinto <[email protected]>
Co-authored-by: R0CKSTAR <[email protected]>
Co-authored-by: stduhpf <[email protected]>
Co-authored-by: one-lithe-rune <[email protected]>
Co-authored-by: Seas0 <[email protected]>
Co-authored-by: NekopenDev <[email protected]>
Co-authored-by: SmallAndSoft <[email protected]>
Co-authored-by: Markus Hartung <[email protected]>
Co-authored-by: clibdev <[email protected]>
Co-authored-by: Richard Palethorpe <[email protected]>
Co-authored-by: rmatif <[email protected]>
Co-authored-by: vmobilis <[email protected]>
Co-authored-by: Stefan-Olt <[email protected]>
Co-authored-by: Sharuzzaman Ahmat Raslan <[email protected]>
Co-authored-by: Serkan Sahin <[email protected]>
Co-authored-by: Pedrito <[email protected]>









txt2img
img2img
Qwen Image Edit
#877