Textual Inversion (TI, also called an Embedding):
- TIs are quite difficult to train properly
- The output file is tiny, around 20-40 KB
- When using a TI, results can be unpredictable (either bad or very good)
LoRA:
- Easier and quicker to train properly
- The output file is around 144 MB
- Results are usually reliably good
A LoRA adds a tag to the prompt, such as <lora:something:1>,
while a TI is just a keyword added to the prompt that triggers the embedding file.
LoRAs and TIs can be intermixed in the same prompt and can give interesting results, as in the example below.
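For example, a prompt could look like this (the LoRA and embedding names here are only placeholders):
masterpiece, portrait of a knight, <lora:myLoraName:0.8>, myEmbeddingKeyword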
Why would you prefer an embedding over a LoRA?
The main advantages of TI embeddings are their flexibility and small size.
The results are also affected by the selected checkpoint (ckpt) model, chosen at the top left of the Stable Diffusion WebUI.
A ckpt model file is 2 GB or more in size.
The big ckpt file is like a general universe: only one can be selected at a time.
Checkpoint models take a lot of VRAM to train.
Some examples of checkpoint models are:
- AnythingV4
- AbyssOrangeMix
- Deliberate
- RPGV4
- Protogen
- Myne Factory
- Dreamlike
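If you prefer scripting outside the WebUI, the same pieces can be combined with the Hugging Face diffusers library. The sketch below is only an illustration under that assumption: the checkpoint ID, file names, and trigger token are placeholders, and in diffusers a LoRA is applied by loading it rather than with the <lora:...> prompt tag used in the WebUI.

import torch
from diffusers import StableDiffusionPipeline

# Load the base checkpoint (the "universe") - only one is active at a time.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a small TI embedding (~20-40 KB) and bind it to a trigger keyword.
pipe.load_textual_inversion("embeddings/my_style.pt", token="my_style")

# Load a LoRA file (~144 MB) on top of the same base model.
pipe.load_lora_weights("loras", weight_name="my_character.safetensors")

# The TI keyword and the loaded LoRA can then be combined for one image.
prompt = "masterpiece, portrait of a knight, my_style"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("knight.png")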