Conversation

@rmatif (Contributor) commented Sep 6, 2025

Following the need expressed in #772 and the discussion in #789.

It achieves up to 3x faster loading on SDXL models.

This PR introduces parallelization across the entire tensor processing and loading pipeline. Tensor preprocessing and deduplication are now distributed across a thread pool, using thread-local maps followed by a final merge to minimize contention. The core loading loop uses an atomic counter to dispatch tensors to worker threads, each with its own file handle, enabling true concurrent I/O on non-zip archives (concurrent reads are not thread-safe on zip files). This overlaps I/O with CPU work.
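For illustration, the dispatch described above looks roughly like the following minimal sketch (not the actual model.cpp code; `TensorInfo` and `load_one_tensor` are placeholder stand-ins):

```cpp
#include <atomic>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Placeholder stand-ins for the real structures in model.cpp.
struct TensorInfo { std::string name; long offset; size_t nbytes; };

static void load_one_tensor(std::FILE* fp, const TensorInfo& ti) {
    std::vector<char> buf(ti.nbytes);
    std::fseek(fp, ti.offset, SEEK_SET);
    std::fread(buf.data(), 1, ti.nbytes, fp);
    // ... convert and copy into the backend buffer here ...
}

void load_all_tensors(const std::vector<TensorInfo>& tensors,
                      const char* path, int n_threads) {
    std::atomic<size_t> next{0};                     // shared work index
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t) {
        workers.emplace_back([&] {
            std::FILE* fp = std::fopen(path, "rb");  // one file handle per thread
            if (!fp) return;
            for (size_t i = next.fetch_add(1); i < tensors.size();
                 i = next.fetch_add(1)) {
                load_one_tensor(fp, tensors[i]);     // I/O and CPU work overlap
            }
            std::fclose(fp);
        });
    }
    for (auto& w : workers) w.join();
}
```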

cc @wbruna

@rmatif force-pushed the ref-tensor-loading branch from e2c6c10 to 55b7707 on September 6, 2025 17:16
@rmatif (Contributor, Author) commented Sep 6, 2025

The build is failing because I'm using std::unordered_map::merge and structured bindings, which are C++17 features not available on the CI compiler. I need to figure out a workaround without sacrificing performance.
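For context, the C++17 feature in question, next to a pre-C++17 fallback, looks roughly like this (illustrative only; the value type is a stand-in, not the PR's actual map type):

```cpp
#include <string>
#include <unordered_map>

using TensorMap = std::unordered_map<std::string, int>;  // int stands in for the real value type

// C++17: splice nodes from a thread-local map into the global one without copying.
void merge_cpp17(TensorMap& global, TensorMap& local) {
    global.merge(local);  // std::unordered_map::merge (C++17)
}

// Pre-C++17 fallback: insert element by element (extra copies/allocations).
void merge_cpp11(TensorMap& global, TensorMap& local) {
    for (TensorMap::const_iterator it = local.begin(); it != local.end(); ++it) {
        global.insert(*it);  // no structured bindings, no node splicing
    }
    local.clear();
}
```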

@Green-Sky (Contributor) commented:

ggml moved to c++17, so it would be reasonable to move sd.cpp too @leejet

@rmatif (Contributor, Author) commented Sep 6, 2025

> ggml moved to c++17, so it would be reasonable to move sd.cpp too @leejet

I was just about to press enter to say the same :)

@hartmark (Contributor) commented Sep 7, 2025

Cool, I was about to implement the same thing myself, as I was also thinking the loading is quite slow.

@leejet (Owner) commented Sep 7, 2025

> ggml moved to c++17, so it would be reasonable to move sd.cpp too @leejet

Sure, there's no problem with this.

@leejet (Owner) commented Sep 7, 2025

I have updated sd.cpp to C++17.

@leejet (Owner) commented Sep 7, 2025

I have added tensor loading time statistics in #793.

> .\bin\Release\sd.exe -m ..\..\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors --vae ..\..\stable-diffusion-webui\models\VAE\sdxl_vae-fp16-fix.safetensors -p "a lovely cat" -v   -H 1024 -W 1024 --diffusion-fa

loading tensors completed, taking 6.39s (process: 0.03s, read: 4.68s, memcpy: 0.00s, convert: 0.30s, copy_to_backend: 1.16s)

It seems that the process time is not significant and is not a bottleneck. Therefore, I think there is no need to multi-thread this processing section, which keeps the code simpler.

@rmatif (Contributor, Author) commented Sep 7, 2025

> I have updated sd.cpp to C++17.

Thanks! I've reverted the workaround

> It seems that the process time is not significant and is not a bottleneck. Therefore, I think there is no need to multi-thread this processing section, which keeps the code simpler.

Can you try mounting a ramdisk and loading the model from there? I'll share some numbers soon.

Master:

[INFO ] stable-diffusion.cpp:641  - total params memory size = 4145.07MB (VRAM 0.00MB, RAM 4145.07MB): text_encoders 1118.92MB(RAM), diffusion_model 2931.68MB(RAM), vae 94.47MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:660  - loading model from '/mnt/ramdisk/Diff-InstructStar_q8_0.gguf' completed, taking 2.34s

PR:

[INFO ] stable-diffusion.cpp:641  - total params memory size = 4145.07MB (VRAM 0.00MB, RAM 4145.07MB): text_encoders 1118.92MB(RAM), diffusion_model 2931.68MB(RAM), vae 94.47MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:660  - loading model from '/mnt/ramdisk/Diff-InstructStar_q8_0.gguf' completed, taking 1.01s

@leejet (Owner) commented Sep 7, 2025

Ramdisk result

loading tensors completed, taking 5.85s (process: 0.01s, read: 4.28s, memcpy: 0.00s, convert: 0.19s, copy_to_backend: 1.15s)

@leejet (Owner) commented Sep 7, 2025

Using a ramdisk didn't make much of a difference in speed. I'm not sure if it's because of the limitation of my memory bandwidth.

@rmatif (Contributor, Author) commented Sep 7, 2025

> Using a ramdisk didn't make much of a difference in speed. I'm not sure if it's because of the limitation of my memory bandwidth.

You can measure your memory bandwidth with mbw. Here's mine:

AVG     Method: MEMCPY  Elapsed: 0.07953        MiB: 1024.00000 Copy: 12875.013 MiB/s

If you're already capped by your bandwidth, then this PR won't make much difference.
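As a rough back-of-envelope using the numbers above: at the measured ~12,875 MiB/s memcpy bandwidth, a single pass over the ~6.7 GB fp16 SDXL checkpoint already costs about 6752 / 12875 ≈ 0.5 s, and loading touches the data more than once (reading it plus converting/copying it into the weight buffers), so a handful of threads is enough to saturate memory rather than CPU.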

@leejet (Owner) commented Sep 7, 2025

It doesn't seem to be a problem with the memory bandwidth. Perhaps it's a problem with my ramdisk software.

AVG     Method: MEMCPY  Elapsed: 1.09857        MiB: 10000.00000        Copy: 9102.753 MiB/s

@wbruna (Contributor) commented Sep 7, 2025

On my Ryzen 5 3400G, RX 7600 XT, SSD storage, Linux 6.12:

Vulkan, cold cache

[INFO ] model.cpp:2216 - loading tensors completed, taking 16.97s (process: 0.12s, read: 11.60s, memcpy: 0.00s, convert: 0.30s, copy_to_backend: 4.74s)
[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 16.97s

Vulkan, hot cache

[INFO ] model.cpp:2216 - loading tensors completed, taking 6.04s (process: 0.12s, read: 0.95s, memcpy: 0.00s, convert: 0.31s, copy_to_backend: 4.45s)
[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 6.04s

PR, Vulkan, cold cache

[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 16.08s

PR, Vulkan, hot cache

[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 2.08s

For comparison:

ROCm, cold cache

[INFO ] model.cpp:2216 - loading tensors completed, taking 15.55s (process: 0.11s, read: 11.90s, memcpy: 0.00s, convert: 0.32s, copy_to_backend: 2.98s)
[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 15.55s

ROCm, hot cache

[INFO ] model.cpp:2216 - loading tensors completed, taking 4.54s (process: 0.11s, read: 1.36s, memcpy: 0.00s, convert: 0.33s, copy_to_backend: 2.53s)
[INFO ] stable-diffusion.cpp:641 - total params memory size = 6751.89MB (VRAM 6751.89MB, RAM 0.00MB): text_encoders 1757.36MB(VRAM), diffusion_model 4900.07MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:660 - loading model from './cyberrealisticXL_v60.safetensors' completed, taking 4.54s

(I can also test the PR on ROCm, but it takes a long time to build here 😅 )

$ mbw -t0 -q 1024
AVG Method: MEMCPY Elapsed: 0.13855 MiB: 1024.00000 Copy: 7390.958 MiB/s

@wbruna (Contributor) left a comment

Looking good so far. Could we also get a clock reading between the preparation and the loading phase?

@rmatif (Contributor, Author) commented Sep 7, 2025

More numbers:

Master:

[INFO ] stable-diffusion.cpp:641  - total params memory size = 6751.89MB (VRAM 0.00MB, RAM 6751.89MB): text_encoders 1757.36MB(RAM), diffusion_model 4900.07MB(RAM), vae 94.47MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:660  - loading model from '/ramdisk/RealVisXL_V5.0_fp16.safetensors' completed, taking 4.61s

PR:

[INFO ] stable-diffusion.cpp:641  - total params memory size = 6751.89MB (VRAM 0.00MB, RAM 6751.89MB): text_encoders 1757.36MB(RAM), diffusion_model 4900.07MB(RAM), vae 94.47MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:660  - loading model from '/ramdisk/RealVisXL_V5.0_fp16.safetensors' completed, taking 1.34s

@leejet (Owner) commented Sep 7, 2025

I have merged the changes from the master branch into your branch: https://github.com/leejet/stable-diffusion.cpp/commits/ref-tensor-loading/. If you don't mind, I can push it directly to your branch. Here is some of my test data.

Master:

loading tensors completed, taking 6.39s (process: 0.03s, read: 4.68s, memcpy: 0.00s, convert: 0.30s, copy_to_backend: 1.16s)

PR:

loading tensors completed, taking 2.71s (process: 0.02s, read: 1.04s, memcpy: 0.00s, convert: 0.04s, copy_to_backend: 1.50s)

@rmatif (Contributor, Author) commented Sep 7, 2025

> If you don't mind, I can push it directly to your branch. Here is some of my test data.

Cool! Sure, you can go ahead and push it.

@leejet (Owner) commented Sep 7, 2025

> I have merged the changes from the master branch into your branch: https://github.com/leejet/stable-diffusion.cpp/commits/ref-tensor-loading/. If you don't mind, I can push it directly to your branch. Here is some of my test data.
>
> Master:
>
> loading tensors completed, taking 6.39s (process: 0.03s, read: 4.68s, memcpy: 0.00s, convert: 0.30s, copy_to_backend: 1.16s)
>
> PR:
>
> loading tensors completed, taking 2.71s (process: 0.02s, read: 1.04s, memcpy: 0.00s, convert: 0.04s, copy_to_backend: 1.50s)

Based on the above test results, I think that multithreading has very little effect on the speed optimization of preprocess_tensor and dedup. I prefer to keep this part using the original single-threaded processing method, and only use multithreading in other parts.

@Green-Sky (Contributor) commented:

> Based on the above test results, I think that multithreading has very little effect on the speed optimization of preprocess_tensor and dedup. I prefer to keep this part using the original single-threaded processing method, and only use multithreading in other parts.

Have you compared dedicated model conversion numbers, or just supplying --type with e.g. q5_k?

@leejet (Owner) commented Sep 7, 2025

> Have you compared dedicated model conversion numbers, or just supplying --type with e.g. q5_k?

preprocess_tensor/dedup do not involve type conversion; they only handle tensor names and deduplication. Here is data that includes type conversion:

loading tensors completed, taking 212.47s (process: 0.02s, read: 8.64s, memcpy: 0.00s, convert: 201.98s, copy_to_backend: 1.63s)

@rmatif (Contributor, Author) commented Sep 7, 2025

Added n_threads override

Number of threads   Load time (SDXL model)
t = 1               4.82s
t = 2               2.62s
t = 3               1.81s
t = 4               1.49s
t = 5               1.30s

Beyond 5 threads we are hitting the memory bandwidth limit.

@rmatif (Contributor, Author) commented Sep 7, 2025

@leejet the tensor loading stats display seems completely broken when it comes to LoRA:

[INFO ] model.cpp:2281 - loading tensors completed, taking 0.61s (process: 0.41s, read: 0.00s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.00s)
[DEBUG] ggml_extend.hpp:1597 - lora params backend buffer size =  375.37 MB(VRAM) (2364 tensors)
[DEBUG] model.cpp:2042 - loading tensors from /ramdisk/dmd2_sdxl_4step_lora_fp16.safetensors
  |==================================================| 2364/2364 - 17.24it/s

[INFO ] model.cpp:2281 - loading tensors completed, taking 0.09s (process: 0.04s, read: 0.02s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.05s)

@wbruna (Contributor) commented Sep 8, 2025

Cold cache:
[INFO ] model.cpp:2281 - loading tensors completed, taking 16.65s (process: 0.04s, read: 12.23s, memcpy: 0.00s, convert: 0.14s, copy_to_backend: 3.85s)

Hot cache:
[INFO ] model.cpp:2281 - loading tensors completed, taking 2.06s (process: 0.04s, read: 0.71s, memcpy: 0.00s, convert: 0.04s, copy_to_backend: 0.94s)

Tmpfs:
[INFO ] model.cpp:2281 - loading tensors completed, taking 2.04s (process: 0.04s, read: 0.68s, memcpy: 0.00s, convert: 0.04s, copy_to_backend: 0.91s)

As a baseline, this is 1 thread, cold/hot cache:
[INFO ] model.cpp:2281 - loading tensors completed, taking 20.34s (process: 0.12s, read: 14.45s, memcpy: 0.00s, convert: 0.33s, copy_to_backend: 5.00s)
[INFO ] model.cpp:2281 - loading tensors completed, taking 6.73s (process: 0.12s, read: 1.30s, memcpy: 0.00s, convert: 0.26s, copy_to_backend: 4.52s)

Speed peaks at 4 threads here (4-core CPU).

So, it looks like the parallel reads are in fact helping a bit (edit: see below). And depending on the system, a ramdisk could be pointless.

model.cpp (outdated)
bool ModelLoader::load_tensors(on_new_tensor_cb_t on_new_tensor_cb, int n_threads_p) {
int64_t process_time_ms = 0;
int64_t read_time_ms = 0;
int64_t memcpy_time_ms = 0;
@wbruna (Contributor) commented Sep 8, 2025

These are being incremented by each thread, so... should at least be atomic?

And thinking about it, it may make more sense to have the time counters per-thread, and average the results.
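For reference, a minimal sketch of the atomic variant being suggested (illustrative names; not the PR's code):

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>

// Sketch: accumulate a per-phase time from multiple worker threads safely.
// read_time_ms mirrors the counter discussed above; the rest is illustrative.
static std::atomic<int64_t> read_time_ms{0};

void add_read_time(std::chrono::steady_clock::time_point t0,
                   std::chrono::steady_clock::time_point t1) {
    const int64_t ms =
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    read_time_ms.fetch_add(ms, std::memory_order_relaxed);
}
```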

@rmatif (Contributor, Author) replied:

> These are being incremented by each thread, so... should at least be atomic?
>
> And thinking about it, it may make more sense to have the time counters per-thread, and average the results.

On paper it seems like a good idea, but in practice it can be misleading: if you have 36 threads, dividing by that will bias the average. I'll push it anyway, and if it's not desired I'll revert.

@wbruna (Contributor) replied:

The problem with a simple sum is the total would end up bigger than the loading time. Correct, but very confusing :-)

A more meaningful measure could be averaging by time, regardless of the number of threads: something like (total read time on all threads) / (total time on all threads), multiplied by the time measured by the main thread, to get the "total read time".
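A sketch of that normalization (illustrative names; not actual model.cpp code):

```cpp
#include <cstdint>

// Fraction of all worker time spent reading, scaled by the wall-clock
// loading time measured on the main thread.
int64_t scaled_read_time_ms(int64_t read_ms_all_threads,
                            int64_t total_ms_all_threads,
                            int64_t wall_clock_ms) {
    if (total_ms_all_threads <= 0) return 0;
    const double fraction =
        static_cast<double>(read_ms_all_threads) / total_ms_all_threads;
    return static_cast<int64_t>(fraction * wall_clock_ms);
}
```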

@wbruna (Contributor) commented Sep 8, 2025

I tested the condition variable + mutex approach: f9a2adb

The reading speed got a little bit slower:

[INFO ] model.cpp:2324 - loading tensors completed, taking 2.13s (process: 0.05s, read: 0.74s, memcpy: 0.00s, convert: 0.12s, copy_to_backend: 1.04s)

But using serialized reads made it a little bit faster:

[INFO ] model.cpp:2324 - loading tensors completed, taking 1.94s (process: 0.04s, read: 0.69s, memcpy: 0.00s, convert: 0.03s, copy_to_backend: 1.03s)

I didn't fix the time counters for the multi-thread updates, though, so I wouldn't put too much trust in those values.
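For reference, a minimal sketch of the serialized-reads idea (illustrative; not the actual f9a2adb change): only the file access is guarded by a mutex, while conversion and the backend copy still run in parallel per thread.

```cpp
#include <cstdio>
#include <mutex>
#include <vector>

static std::mutex read_mutex;

void process_tensor_serialized_read(std::FILE* fp, long offset, size_t nbytes) {
    std::vector<char> buf(nbytes);
    {
        std::lock_guard<std::mutex> lock(read_mutex);  // one reader at a time
        std::fseek(fp, offset, SEEK_SET);
        std::fread(buf.data(), 1, nbytes, fp);
    }
    // Conversion and copy_to_backend would run here, outside the lock,
    // so those phases remain parallel across threads.
}
```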

@rmatif (Contributor, Author) commented Sep 8, 2025

> Based on the above test results, I think that multithreading has very little effect on the speed optimization of preprocess_tensor and dedup. I prefer to keep this part using the original single-threaded processing method, and only use multithreading in other parts.

For LoRA it does:

Before:

[INFO ] stable-diffusion.cpp:848  - attempting to apply 1 LoRAs
[INFO ] model.cpp:1043 - load /ramdisk/NatsukiAoi ag4o.safetensors using safetensors format
[DEBUG] model.cpp:1150 - init from '/ramdisk/NatsukiAoi ag4o.safetensors', prefix = ''
[INFO ] lora.hpp:119  - loading LoRA from '/ramdisk/NatsukiAoi ag4o.safetensors'
[DEBUG] model.cpp:2042 - loading tensors from /ramdisk/NatsukiAoi ag4o.safetensors
  |==================================================| 2166/2166 - 5.00it/s

[INFO ] model.cpp:2281 - loading tensors completed, taking 1.38s (process: 1.17s, read: 0.00s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.00s)
[DEBUG] ggml_extend.hpp:1597 - lora params backend buffer size =  324.78 MB(VRAM) (2166 tensors)
[DEBUG] model.cpp:2042 - loading tensors from /ramdisk/NatsukiAoi ag4o.safetensors
  |==================================================| 2166/2166 - 4.15it/s

[INFO ] model.cpp:2281 - loading tensors completed, taking 0.30s (process: 0.06s, read: 0.01s, memcpy: 0.00s, convert: 0.01s, copy_to_backend: 0.02s)
[DEBUG] lora.hpp:161  - lora type: ".lora_down"/".lora_up"
[DEBUG] lora.hpp:163  - finished loaded lora
[DEBUG] lora.hpp:860  - (2166 / 2166) LoRA tensors will be applied
[DEBUG] ggml_extend.hpp:1425 - lora compute buffer size: 101.56 MB(VRAM)
[DEBUG] lora.hpp:860  - (2166 / 2166) LoRA tensors will be applied
[INFO ] stable-diffusion.cpp:825  - lora 'NatsukiAoi ag4o' applied, taking 3.16s
[INFO ] stable-diffusion.cpp:868  - apply_loras completed, taking 3.16s

After:

[INFO ] stable-diffusion.cpp:848  - attempting to apply 1 LoRAs
[INFO ] model.cpp:1043 - load /ramdisk/NatsukiAoi ag4o.safetensors using safetensors format
[DEBUG] model.cpp:1150 - init from '/ramdisk/NatsukiAoi ag4o.safetensors', prefix = ''
[INFO ] lora.hpp:120  - loading LoRA from '/ramdisk/NatsukiAoi ag4o.safetensors'
[DEBUG] model.cpp:2042 - loading tensors from /ramdisk/NatsukiAoi ag4o.safetensors
  |==================================================| 2166/2166 - 17.86it/s

[INFO ] model.cpp:2281 - loading tensors completed, taking 0.10s (process: 0.05s, read: 0.00s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.00s)
[DEBUG] ggml_extend.hpp:1597 - lora params backend buffer size =  324.78 MB(VRAM) (2166 tensors)
[DEBUG] model.cpp:2042 - loading tensors from /ramdisk/NatsukiAoi ag4o.safetensors
  |==================================================| 2166/2166 - 16.13it/s

[INFO ] model.cpp:2281 - loading tensors completed, taking 0.10s (process: 0.04s, read: 0.01s, memcpy: 0.00s, convert: 0.01s, copy_to_backend: 0.08s)
[DEBUG] lora.hpp:174  - lora type: ".lora_down"/".lora_up"
[DEBUG] lora.hpp:176  - finished loaded lora
[DEBUG] lora.hpp:873  - (2166 / 2166) LoRA tensors will be applied
[DEBUG] ggml_extend.hpp:1425 - lora compute buffer size: 101.56 MB(VRAM)
[DEBUG] lora.hpp:873  - (2166 / 2166) LoRA tensors will be applied
[INFO ] stable-diffusion.cpp:825  - lora 'NatsukiAoi ag4o' applied, taking 1.58s
[INFO ] stable-diffusion.cpp:868  - apply_loras completed, taking 1.58s

@leejet (Owner) commented Sep 14, 2025

I have fixed the potential issue in the deduplication logic and also corrected the progress bar display problem. I believe this PR is now ready to be merged.

@leejet merged commit 55c2e05 into leejet:master on Sep 14, 2025 (8 checks passed).
@leejet (Owner) commented Sep 14, 2025

Thank you all for your contributions!

@pedroCabrera (Contributor) commented:

Hi, for me this update broke SDXL generation: pure black output. It was working before this PR. Am I missing something that I need to update to make it work? Thanks!

@wbruna (Contributor) commented Sep 16, 2025

@pedroCabrera , you mean master-52a97b3 works for you, but master-55c2e05 fails, with the same models and parameters?

If so, please open an issue, and tell us the command line and models you used (ideally a full log with -v).

@pedroCabrera (Contributor) commented:

Yeah, exactly those. OK, I'll create a new issue, thanks @wbruna!

stduhpf added a commit to stduhpf/stable-diffusion.cpp that referenced this pull request Oct 23, 2025
* docs: add sd.cpp-webui as an available frontend (leejet#738)

* fix: correct head dim check and L_k padding of flash attention (leejet#736)

* fix: convert f64 to f32 and i64 to i32 when loading weights

* docs: add LocalAI to README's UIs (leejet#741)

* sync: update ggml

* sync: update ggml

* feat: upgrade musa sdk to rc4.2.0 (leejet#732)

* feat: change image dimensions requirement for DiT models (leejet#742)

* feat: add missing models and parameters to image metadata (leejet#743)

* feat: add new scheduler types, clip skip and vae to image embedded params

- If a non default scheduler is set, include it in the 'Sampler' tag in the data
embedded into the final image.
- If a custom VAE path is set, include the vae name (without path and extension)
in embedded image params under a `VAE:` tag.
- If a custom Clip skip is set, include that Clip skip value in embedded image
params under a `Clip skip:` tag.

* feat: add separate diffusion and text models to metadata

---------

Co-authored-by: one-lithe-rune <[email protected]>

* refector: optimize the usage of tensor_types

* feat: support build against system installed GGML library (leejet#749)

* chore: avoid setting GGML_MAX_NAME when building against external ggml (leejet#751)

An external ggml will most likely have been built with the default
GGML_MAX_NAME value (64), which would be inconsistent with the value
set by our build (128). That would be an ODR violation, and it could
easily cause memory corruption issues due to the different
sizeof(struct ggml_tensor) values.

For now, when linking against an external ggml, we demand it has been
patched with a bigger GGML_MAX_NAME, since we can't check against a
value defined only at build time.

* Conv2D direct support (leejet#744)

* Conv2DDirect for VAE stage

* Enable only for Vulkan, reduced duplicated code

* Cmake option to use conv2d direct

* conv2d direct always on for opencl

* conv direct as a flag

* fix merge typo

* Align conv2d behavior to flash attention's

* fix readme

* add conv2d direct for controlnet

* add conv2d direct for esrgan

* clean code, use enable_conv2d_direct/get_all_blocks

* format code

---------

Co-authored-by: leejet <[email protected]>

* sync: update ggml, make cuda im2col a little faster

* chore: add Nvidia 30 series (cuda arch 86) to build

* feat: throttle model loading progress updates (leejet#782)

Some terminals have slow display latency, so frequent output
during model loading can actually slow down the process.

Also, since tensor loading times can vary a lot, the progress
display now shows the average across past iterations instead
of just the last one.

* docs: add missing dash to docs/chroma.md (leejet#771)

* docs: add compile option needed by Ninja (leejet#770)

* feat: show usage on unknown arg (leejet#767)

* fix: typo in the verbose long flag (leejet#783)

* feat: add wan2.1/2.2 support (leejet#778)

* add wan vae suppport

* add wan model support

* add umt5 support

* add wan2.1 t2i support

* make flash attn work with wan

* make wan a little faster

* add wan2.1 t2v support

* add wan gguf support

* add offload params to cpu support

* add wan2.1 i2v support

* crop image before resize

* set default fps to 16

* add diff lora support

* fix wan2.1 i2v

* introduce sd_sample_params_t

* add wan2.2 t2v support

* add wan2.2 14B i2v support

* add wan2.2 ti2v support

* add high noise lora support

* sync: update ggml submodule url

* avoid build failure on linux

* avoid build failure

* update ggml

* update ggml

* fix sd_version_is_wan

* update ggml, fix cpu im2col_3d

* fix ggml_nn_attention_ext mask

* add cache support to ggml runner

* fix the issue of illegal memory access

* unify image loading processing

* add wan2.1/2.2 FLF2V support

* fix end_image mask

* update to latest ggml

* add GGUFReader

* update docs

* feat: add support for timestep boundary based automatic expert routing in Wan MoE (leejet#779)

* Wan MoE: Automatic expert routing based on timestep boundary

* unify code style and fix some issues

---------

Co-authored-by: leejet <[email protected]>

* feat: add flow shift parameter (for SD3 and Wan) (leejet#780)

* Add flow shift parameter (for SD3 and Wan)

* unify code style and fix some issues

---------

Co-authored-by: leejet <[email protected]>

* docs: update docs and help message

* chore: update to c++17

* docs: update docs/wan.md

* fix: add flash attn support check (leejet#803)

* feat: support incrementing ref image index (omni-kontext) (leejet#755)

* kontext: support  ref images indices

* lora: support x_embedder

* update help message

* Support for negative indices

* support for OmniControl (offsets at index 0)

* c++11 compat

* add --increase-ref-index option

* simplify the logic and fix some issues

* update README.md

* remove unused variable

---------

Co-authored-by: leejet <[email protected]>

* feat: add detailed tensor loading time stat (leejet#793)

* fix: clarify lora quant support and small fixes (leejet#792)

* fix: accept NULL in sd_img_gen_params_t::input_id_images_path (leejet#809)

* chore: update flash attention warnings (leejet#805)

* fix: use {} for params init instead of memset (leejet#781)

* chore: remove sd3 flash attention warn (leejet#812)

* feat: use log_printf to print ggml logs (leejet#545)

* chore: add install() support in CMakeLists.txt (leejet#540)

* feat: add SmoothStep Scheduler (leejet#813)

* feat: add sd3 flash attn support (leejet#815)

* fix: make tiled VAE reuse the compute buffer (leejet#821)

* feat: reduce CLIP memory usage with no embeddings (leejet#768)

* fix: make weight override more robust against ggml changes (leejet#760)

* fix: do not force VAE type to f32 on SDXL (leejet#716)

This seems to be a leftover from the initial SDXL support: it's
not enough to avoid NaN issues, and it's not not needed for the
fixed sdxl-vae-fp16-fix .

* feat: use Euler sampling by default for SD3 and Flux (leejet#753)

Thank you for your contribution.

* fix: harden for large files (leejet#643)

* feat: Add SYCL Dockerfile (leejet#651)

* feat: increase work_ctx memory buffer size (leejet#814)

* docs: update docs

* feat: add VAE encoding tiling support and adaptive overlap  (leejet#484)

* implement  tiling vae encode support

* Tiling (vae/upscale): adaptative overlap

* Tiling: fix edge case

* Tiling: fix crash when less than 2 tiles per dim

* remove extra dot

* Tiling: fix edge cases for adaptative overlap

* tiling: fix edge case

* set vae tile size via env var

* vae tiling: refactor again, base on smaller buffer for alignment

* Use bigger tiles for encode (to match compute buffer size)

* Fix edge case when tile is bigger than latent

* non-square VAE tiling (#3)

* refactor tile number calculation

* support non-square tiles

* add env var to change tile overlap

* add safeguards and better error messages for SD_TILE_OVERLAP

* add safeguards and include overlapping factor for SD_TILE_SIZE

* avoid rounding issues when specifying SD_TILE_SIZE as a factor

* lower SD_TILE_OVERLAP limit

* zero-init empty output buffer

* Fix decode latent size

* fix encode

* tile size params instead of env

* Tiled vae parameter validation (#6)

* avoid crash with invalid tile sizes, use 0 for default

* refactor default tile size, limit overlap factor

* remove explicit parameter for relative tile size

* limit encoding tile to latent size

* unify code style and format code

* update docs

* fix get_tile_sizes in decode_first_stage

---------

Co-authored-by: Wagner Bruna <[email protected]>
Co-authored-by: leejet <[email protected]>

* feat: add vace support (leejet#819)

* add wan vace t2v support

* add --vace-strength option

* add vace i2v support

* fix the processing of vace_context

* add vace v2v support

* update docs

* feat: optimize tensor loading time (leejet#790)

* opt tensor loading

* fix build failure

* revert the changes

* allow the use of n_threads

* fix lora loading

* optimize lora loading

* add mutex

* use atomic

* fix build

* fix potential duplicate issue

* avoid duplicate lookup of lora tensor

* fix progeress bar

* remove unused remove_duplicates

---------

Co-authored-by: leejet <[email protected]>

* refactor: simplify the logic of pm id image loading (leejet#827)

* feat: add sgm_uniform scheduler, simple scheduler, and support for NitroFusion (leejet#675)

* feat: Add timestep shift and two new schedulers

* update readme

* fix spaces

* format code

* simplify SGMUniformSchedule

* simplify shifted_timestep logic

* avoid conflict

---------

Co-authored-by: leejet <[email protected]>

* refactor: move tiling cacl and debug print into the tiling code branch (leejet#833)

* refactor: simplify DPM++ (2S) Ancestral (leejet#667)

* chore: set release tag by commit count

* chore: fix workflow (leejet#836)

* fix: avoid multithreading issues in the model loader

* fix: avoid segfault for pix2pix models without reference images (leejet#766)

* fix: avoid segfault for pix2pix models with no reference images

* fix: default to empty reference on pix2pix models to avoid segfault

* use resize instead of reserve

* format code

---------

Co-authored-by: leejet <[email protected]>

* refactor: remove unused --normalize-input parameter (leejet#835)

* fix: correct tensor deduplication logic (leejet#844)

* docs: include Vulkan compatibility for LoRA quants (leejet#845)

* docs: HipBLAS / ROCm build instruction fix (leejet#843)

* fix: tensor loading thread count (leejet#854)

* fix: optimize the handling of CLIP embedding weight (leejet#840)

* sync: update ggml

* sync: update ggml

* fix: optimize the handling of embedding weight (leejet#859)

* feat: add support for Flux Controls and Flex.2 (leejet#692)

* docs: update README.md (leejet#866)

* chore: fix dockerfile libgomp1 dependency + improvements (leejet#852)

* fix: ensure directory iteration results are sorted by filename (leejet#858)

* chore: fix vulkan ci (leejet#878)

* feat: add support for more esrgan models & x2 & x1 models (leejet#855)

* feat: add a stand-alone upscale mode (leejet#865)

* feat: add a stand-alone upscale mode

* fix prompt option check

* format code

* update README.md

---------

Co-authored-by: leejet <[email protected]>

* refactor: deal with default img-cfg-scale at the library level (leejet#869)

* feat: add Qwen Image support (leejet#851)

* add qwen tokenizer

* add qwen2.5 vl support

* mv qwen.hpp -> qwenvl.hpp

* add qwen image model

* add qwen image t2i pipeline

* fix qwen image flash attn

* add qwen image i2i pipline

* change encoding of vocab_qwen.hpp to utf8

* fix get_first_stage_encoding

* apply jeffbolz f32 patch

leejet#851 (comment)

* fix the issue that occurs when using CUDA with k-quants weights

* optimize the handling of the FeedForward precision fix

* to_add_out precision fix

* update docs

* fix: resolve VAE tiling problem in Qwen Image (leejet#873)

* fix: avoid generating black images when running T5 on the GPU (leejet#882)

* fix: correct canny preprocessor (leejet#861)

* fix: better progress display for second-order samplers (leejet#834)

* feat: add Qwen Image Edit support (leejet#877)

* add ref latent support for qwen image

* optimize clip_preprocess and fix get_first_stage_encoding

* add qwen2vl vit support

* add qwen image edit support

* fix qwen image edit pipeline

* add mmproj file support

* support dynamic number of Qwen image transformer blocks

* set prompt_template_encode_start_idx every time

* to_add_out precision fix

* to_out.0 precision fix

* update docs

---------

Co-authored-by: Daniele <[email protected]>
Co-authored-by: Erik Scholz <[email protected]>
Co-authored-by: leejet <[email protected]>
Co-authored-by: Ettore Di Giacinto <[email protected]>
Co-authored-by: R0CKSTAR <[email protected]>
Co-authored-by: stduhpf <[email protected]>
Co-authored-by: one-lithe-rune <[email protected]>
Co-authored-by: Seas0 <[email protected]>
Co-authored-by: NekopenDev <[email protected]>
Co-authored-by: SmallAndSoft <[email protected]>
Co-authored-by: Markus Hartung <[email protected]>
Co-authored-by: clibdev <[email protected]>
Co-authored-by: Richard Palethorpe <[email protected]>
Co-authored-by: rmatif <[email protected]>
Co-authored-by: vmobilis <[email protected]>
Co-authored-by: Stefan-Olt <[email protected]>
Co-authored-by: Sharuzzaman Ahmat Raslan <[email protected]>
Co-authored-by: Serkan Sahin <[email protected]>
Co-authored-by: Pedrito <[email protected]>