I've been searching all over for a tutorial/guide on how to load a civitai model (https://civitai.com/models/4823/deliberate) into PyTorch and then use it for inference.
Most of my research leads to something like the code below.
However, the models on civitai only provide the .ckpt file and nothing more, so I can't complete the first step of that approach. I do know it's possible, because the AUTOMATIC1111 GUI is able to load these checkpoints directly.
PS: I know the same Deliberate model is also available on huggingface.co and can be downloaded like a standard Stable Diffusion model, but I'm interested in working with the .ckpt file alone and doing it the way AUTO1111 does.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "stabilityai/stable-diffusion-2-1"
model = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
model.scheduler = DPMSolverMultistepScheduler.from_config(model.scheduler.config)

# Load the checkpoint file
ckpt_path = '/Users/XXXX/XXXX/model.ckpt'
checkpoint = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(checkpoint['state_dict'])
model.eval()

image = model(prompt='xxxxxx').images[0]
You need diffusers and peft installed (pip install peft diffusers).
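In a Colab notebook that would be a cell such as:

```
!pip install peft diffusers
```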
If you have a LoRA .safetensors file, you can use the Hugging Face Stable Diffusion pipeline's `load_lora_weights` method.
For this demo, I am downloading this LoRA weight: Styles for Pony Diffusion V6 XL.
!wget "https://civitai.com/api/download/models/396157?type=Model&format=SafeTensor" -O lora.safetensors
Load the pipeline (be careful to load the same base model version the LoRA was trained on; here that is SDXL 1.0), then load the LoRA weights:
import torch
from diffusers import DiffusionPipeline

# Load the base model the LoRA was trained on (SDXL 1.0 here)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA weights downloaded above
pipeline.load_lora_weights("lora.safetensors")

prompt = "your prompt here"
image, *_ = pipeline(prompt).images
image.save("output.png")
(example output image)
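As a side note that is not part of the original snippet: if I am reading the diffusers LoRA docs correctly, you can also dial the LoRA influence up or down at inference time by passing a scale through `cross_attention_kwargs`. The 0.7 below is only an illustrative value, not something from the model page:

```python
# Assumes `pipeline` already has the LoRA applied via load_lora_weights (see above).
# "scale" controls how strongly the LoRA weights affect the result; 1.0 is full
# strength and 0.7 here is just an example value.
image = pipeline(
    "your prompt here",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
image.save("output_scaled.png")
```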
If you are looking to load a full checkpoint instead (not a LoRA) and you only have a .safetensors file, then you can't rely on `from_pretrained`. According to the diffusers "Load safetensors" documentation, you need to rely on `from_single_file` to initialize and load the pipeline:
Here I am downloading the checkpoint from: Vapor - A Futuristic Retro Experience.
The following code was tested on Google Colab:
!wget "https://civitai.com/api/download/models/157346" -O checkpoint.safetensors
Then you can load the checkpoint with a single line:
from diffusers import StableDiffusionPipeline

# from_single_file builds the whole pipeline from a single checkpoint file
pipeline = StableDiffusionPipeline.from_single_file("checkpoint.safetensors")

prompt = "your prompt here"
image, *_ = pipeline(prompt).images
image.save("output.png")
(example output image)
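Back to the original question about the bare Deliberate .ckpt file: as far as I know, `from_single_file` also accepts original-format .ckpt checkpoints, so you do not need the diffusers-format repo from Hugging Face. A sketch along the following lines should work; the file name deliberate.ckpt is a placeholder for whatever you downloaded from civitai:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: from_single_file should also handle original-format .ckpt checkpoints.
# "deliberate.ckpt" is a placeholder for the checkpoint downloaded from civitai.
pipeline = StableDiffusionPipeline.from_single_file(
    "deliberate.ckpt", torch_dtype=torch.float16
).to("cuda")

image = pipeline("your prompt here").images[0]
image.save("deliberate_output.png")
```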