| title | tag | id |
|---|---|---|
| emilianJR/AnyLORA (base checkpoint suited to LoRAs) | lora | |
| pipe.load_lora_weights(".", weight_name="lora.safetensors") (LoRA sketch below) | lora | |
| pipe.safety_checker = None disables the safety checker | pipe | |
| Pokémon lora | lora | |
| refiner (SDXL refiner stage) | sd | |
| CUDA 11.4, torch==2.0.0, xformers==0.0.19 | cuda | |
| with torch==1.11.0, build xformers from source: git clone --recursive https://github.com/facebookresearch/xformers.git | err | |
| ImportError: cannot import name 'CLIPImageProcessor' from 'transformers' -> upgrade to transformers>=4.29 | err | |
| Python>=3.8, torch>=1.7.0 | ver | |
| load_lora_weights() patches the UNet and text encoder; load_attn_procs() patches the UNet only | lora | |
| python launch.py --ckpt my.ckpt | ckpt | |
| guoyww/AnimateDiff | adif | |
| guidance_scale trades image quality against prompt adherence (higher = follows the prompt more closely) | pipe | |
| strength=0.1 with num_inference_steps=50 -> 0.1*50 = 5 denoising steps (img2img sketch below) | pipe | |
| sd-concepts-library/gta5-artwork | mod | |
| pipe.load_textual_inversion("sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative") (textual-inversion sketch below) | mod | |
| images = pipeline.generate(prompts, uncond_prompts, seed=42) | gen | |
| https://hoshikat.hatenablog.com/ | tut | |
| ControlNet + CharTurner | ctr | |
| CharTurner LoRA | mod | |
| from diffusers.utils import load_image | utils | |
| https://runrunsketch.net/ | tut | |
| StableDiffusionPipeline.from_pretrained(mid, safety_checker=None, torch_dtype=torch.float16).to("cuda") | pipe | |
| IP-Adapter face model applies a specific face to generated images (IP-Adapter sketch below) | ada | |
| DDIMScheduler / EulerDiscreteScheduler for the face model | sch | |
| adapters | ada | |
| https://note.com/npaka | tut | |
| pin down a consistent character with a LoRA | det | |
| repeatable seeds (seed sketch below) | seed | |
| reusing_seeds | seed | |
| Yntec/3Danimation | mod | |
| …, BREAK white t-shirt, … (A1111 BREAK keyword: starts a new prompt chunk) | pro | |
| num_inference_steps=50 with strength=0.8 -> 50*0.8 = 40 steps | i2i | |
| pipeline(prompt=prompt, image=init_image, strength=0.6, num_inference_steps=30).images[0] | i2i | |
| Meina/MeinaMix_V10 (Japanese anime style) | mod | |
| SDXL and Kandinsky also have img2img pipelines | i2i | |
| 80-90% likeness | ctr | |
| consistent-ai-character - CharTurner | det | |
| CharTurner | mod | |
| pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) (inversion sketch below) | inv | |
| unidiffuser | uni | |
| diffusers.StableDiffusionControlNetImg2ImgPipeline | ctr | |
| from controlnet_aux.processor import Processor | ctr | |
| pip install controlnet-aux==0.0.7 (ControlNet sketch below) | ctr | |
| torch==2.1.2, xformers==0.0.23.post1 | tor | |
| with torch>=2.0, skip xformers (PyTorch ships its own SDPA attention) | tor | |
| IP-Adapter | ipa | |
| quantization | opt | |
| from diffusers import StableVideoDiffusionPipeline | svd | |
| mid="stabilityai/stable-video-diffusion-img2vid-xt" | svd | |
| openmmlab/PIA-condition-adapter | pia | |
| i2vgen-xl | i2v | |
| diffusers AnimateDiff | adif | |
| KeyError: 'sample' -> use img = pipe(prompt).images[0] | wd | |
| diffusers + .ckpt checkpoints | ckpt | |
| inpaint | ctrl | |
| Waifu Diffusion prompting | wd | |
| ControlNet models | ctrl | |
| Torch not compiled with CUDA enabled -> install a CUDA-enabled PyTorch build | err | |
| ImportError from pytorch_lightning -> pip install pytorch_lightning==1.7.7 | err | |
| 3Dillustration-stable-diffusion - not good | 3d | |
| model_id = "aidystark/3Dillustration-stable-diffusion"; mind the space | 3di | |
| hakurei/waifu-diffusion | wd | |
| wd1.4a == sd1.2 | wd | |
| optimizing Stable Diffusion | opt | |
| source .venv/bin/activate | webui | |
| "no space left on device" -> pip install --cache-dir=/home/user/tmp ... | webui | |
| img2img | webui | |
| $p scripts/txt2img.py --prompt $tex --plms --ckpt $pt --skip_grid --n_samples 1 | sd | |
| AttributeError: module 'cv2.dnn' has no attribute 'DictValue' -> pip install opencv-python==4.8.0.74 | cv2 | |
| import pipe | i2i | |
| Python 3.8.5, torch==1.11.0, torchvision==0.12.0 | sd | |
| pip install taming-transformers-rom1504 | sd | |
| the imwatermark import is provided by the invisible-watermark package | sd | |
| ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt | mod | |
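LoRA sketch, for the `load_lora_weights` rows above. A minimal example assuming a local `lora.safetensors` next to the script and the SD 1.5 base model; the prompt is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights() patches the UNet and the text encoder;
# the older load_attn_procs() only swapped the UNet attention processors.
pipe.load_lora_weights(".", weight_name="lora.safetensors")

image = pipe("anime character, white t-shirt").images[0]
```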
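Textual-inversion sketch for the EasyNegative row; the repo, file, and token come from the row itself, while the base model is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_textual_inversion(
    "sayakpaul/EasyNegative-test",
    weight_name="EasyNegative.safetensors",
    token="EasyNegative",
)
# The registered token is used like any other word in the negative prompt.
image = pipe("portrait photo", negative_prompt="EasyNegative").images[0]
```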
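Seed sketch for the `repeatable seeds` / `reusing_seeds` rows: a pinned `torch.Generator` makes a run reproducible, so prompt tweaks can be compared on identical noise. Model and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Same seed + same settings -> the same image every run.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a red fox in the snow", generator=generator).images[0]
```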
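Img2img sketch for the strength rows; `sketch.png` is a hypothetical input file. Only `strength * num_inference_steps` denoising steps actually run, which is the arithmetic those rows record.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("sketch.png")  # hypothetical input image

# strength scales how much of the schedule runs: 0.6 * 30 = 18 steps,
# so the image keeps ~40% of its original structure.
out = pipe(prompt="oil painting of a castle", image=init_image,
           strength=0.6, num_inference_steps=30).images[0]
```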
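IP-Adapter face sketch for the `ada` rows. The weight file name and scale are assumptions (check the `h94/IP-Adapter` repo for the current face weights); the DDIM scheduler choice follows the scheduler note above.

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Weight name is an assumption; look it up in the h94/IP-Adapter repo.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter-full-face_sd15.bin")
pipe.set_ip_adapter_scale(0.7)  # how strongly the face image steers the result

face = load_image("face.png")  # hypothetical reference face
out = pipe(prompt="portrait, studio lighting", ip_adapter_image=face).images[0]
```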
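Inversion sketch for the `inverse_scheduler` row. The attribute is only consumed by pipelines that expose an `invert()` method; DiffEdit is used here as one assumed example, with a hypothetical input image.

```python
import torch
from diffusers import (DDIMInverseScheduler, DDIMScheduler,
                       StableDiffusionDiffEditPipeline)
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

init_image = load_image("photo.png")  # hypothetical input
# invert() walks the DDIM schedule backwards: image -> noise latents.
inv_latents = pipe.invert(prompt="a photo of a cat", image=init_image).latents
```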
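ControlNet img2img sketch tying together the `StableDiffusionControlNetImg2ImgPipeline`, `controlnet_aux`, and `pip install controlnet-aux` rows; the openpose ControlNet and the input file are assumptions.

```python
import torch
from controlnet_aux.processor import Processor
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("person.png")                  # hypothetical source image
pose = Processor("openpose")(image, to_pil=True)  # detect the pose map

# image supplies composition and colors, control_image pins the pose.
out = pipe(prompt="3d render of a character", image=image,
           control_image=pose, strength=0.8).images[0]
```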
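SVD sketch for the `stable-video-diffusion-img2vid-xt` rows; the input file, resize, chunk size, and fps are assumptions.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")

image = load_image("still.png").resize((1024, 576))  # conditioning frame
frames = pipe(image, decode_chunk_size=4).frames[0]  # smaller chunk = less VRAM
export_to_video(frames, "out.mp4", fps=7)
```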