Description
Is there an existing issue for this?
- I have searched the existing issues
OS
macOS
GPU
mps
VRAM
No response
What happened?
If a model fails to load on !switch, the original model is restored, but subsequent attempts to !switch to the new model have no effect.
Note the no-op invoke> !switch sd2-768 in the middle of the logs below, issued right after the first attempt fails.
invoke> !switch sd2-768
>> Current VRAM usage: 0.00G
>> Offloading stable-diffusion-1.5 to CPU
>> Scanning Model: sd2-768
>> Model Scanned. OK!!
>> Loading sd2-768 from /Users/damian/Documents/invokeai/models/ldm/stable-diffusion-v2/768-v-ema.ckpt
>> Calculating sha256 hash of weights file
>> sha256 = bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f (3.30s)
| LatentDiffusion: Running in v-prediction mode
| DiffusionWrapper has 865.91 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
Downloading: 10%|███████████████▏ | 384M/3.94G [01:38<02:59, 19.8MB/s]
** model sd2-768 could not be loaded: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
Traceback (most recent call last):
... [stacktrace skipped] ...
** restoring stable-diffusion-1.5
>> Retrieving model stable-diffusion-1.5 from system RAM cache
>> Setting Sampler to k_lms
invoke> !switch sd2-768
invoke> !switch stable-diffusion-1.5
>> Current VRAM usage: 0.00G
>> Retrieving model stable-diffusion-1.5 from system RAM cache
>> Setting Sampler to k_lms
invoke> !switch sd2-768
>> Current VRAM usage: 0.00G
>> Offloading stable-diffusion-1.5 to CPU
>> Scanning Model: sd2-768
>> Model Scanned. OK!!
...
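For illustration only, here is a minimal sketch of how this kind of silent no-op can arise if the requested model name is recorded as "current" before the load actually succeeds. This is a hypothetical reconstruction, not the InvokeAI model-manager code; the ModelCache class, flaky_loader function, and all names in it are invented.

class ModelCache:
    def __init__(self, current_name: str):
        self.current_name = current_name          # model the cache believes is loaded
        self.loaded = {current_name: "weights"}   # name -> weights (placeholder)

    def switch(self, name: str, loader):
        if name == self.current_name:
            # No-op: the cache believes this model is already active.
            return self.loaded.get(name)
        previous = self.current_name
        self.current_name = name                   # hypothetical bug: set before the load succeeds
        try:
            self.loaded[name] = loader(name)       # may raise (e.g. a read timeout)
        except Exception as err:
            print(f"** model {name} could not be loaded: {err}")
            print(f"** restoring {previous}")
            return self.loaded[previous]           # weights restored, but current_name is not
        return self.loaded[name]


def flaky_loader(name: str) -> str:
    raise TimeoutError("Read timed out.")          # simulate the HTTPS timeout seen above


cache = ModelCache("stable-diffusion-1.5")
cache.switch("sd2-768", flaky_loader)   # fails, falls back to stable-diffusion-1.5
cache.switch("sd2-768", flaky_loader)   # silent no-op: the loader is never called again

In this sketch, restoring the previous weights without also restoring current_name is enough to reproduce the observed behavior: the second switch to sd2-768 returns early and never retries the download, while an explicit switch back to stable-diffusion-1.5 resets the state and makes a later switch to sd2-768 work again, matching the log above.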
Screenshots
No response
Additional context
No response
Contact Details
No response