Fix LoRA loading and conditioning to match training notebook #23

Merged
llabeyrie merged 1 commit from fix/lora-loading-and-conditioning into main 2026-03-19 23:31:31 +00:00
Owner

Summary

The adapter had two mismatches with the training notebook (pokemon_card_training_2.ipynb), causing the Streamlit app to generate vanilla SD 1.5 images instead of LoRA-finetuned Pokemon cards:

  1. LoRA loading: Used pipe.load_lora_weights() (diffusers format) but the adapter was saved with PEFT's save_pretrained() — keys didn't match, producing the warnings No LoRA keys associated to UNet2DConditionModel found. Now uses PeftModel.from_pretrained() + merge_and_unload(), matching the notebook.

  2. Conditioning format: Built a natural language prompt ("Pokemon trading card of X, Fire-type...") but the LoRA was trained on json.dumps(meta) serialization. Now uses JSON serialization to match.
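Fix 1 can be sketched as follows — a minimal loading helper assuming an adapter directory in the layout PEFT's `save_pretrained()` writes (the directory and model names here are illustrative, not the adapter's actual paths):

```python
def load_finetuned_pipeline(adapter_dir="lora_adapter",
                            base_model="runwayml/stable-diffusion-v1-5"):
    """Load SD 1.5 and merge a PEFT-format LoRA adapter into the UNet."""
    # Imports kept local so the sketch can be read without the heavy deps.
    import torch
    from diffusers import StableDiffusionPipeline
    from peft import PeftModel

    pipe = StableDiffusionPipeline.from_pretrained(
        base_model, torch_dtype=torch.float16
    )
    # PeftModel.from_pretrained reads adapter_config.json plus the adapter
    # weights — the format save_pretrained() produces, whose keys
    # pipe.load_lora_weights() (diffusers-format) cannot map onto the UNet.
    pipe.unet = PeftModel.from_pretrained(pipe.unet, adapter_dir)
    # Fold the LoRA deltas into the base weights and drop the PEFT wrappers,
    # so inference runs on a plain UNet with no adapter bookkeeping.
    pipe.unet = pipe.unet.merge_and_unload()
    return pipe
```

`merge_and_unload()` returns the underlying model with the low-rank updates baked in, which is why the "No LoRA keys" warnings disappear: the weights are applied before generation rather than silently skipped.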

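Fix 2 amounts to serializing the card metadata the same way the notebook did. A minimal sketch, assuming a metadata dict shaped like the training data (the field names below are hypothetical):

```python
import json

def build_prompt(meta):
    """Serialize card metadata exactly as the LoRA saw it during training."""
    # The LoRA was conditioned on json.dumps(meta), not natural language, so
    # the inference prompt must use the identical serialization — otherwise
    # the adapter's learned conditioning never matches and output reverts
    # toward vanilla SD 1.5.
    return json.dumps(meta)

# Hypothetical card metadata for illustration.
meta = {"name": "Charmander", "type": "Fire", "hp": 50}
prompt = build_prompt(meta)  # '{"name": "Charmander", "type": "Fire", "hp": 50}'
```

Any drift between the training and inference serializations (key order, spacing, extra prose) weakens the conditioning, so reusing `json.dumps` verbatim is the safest match.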
Test plan

  • Syntax check passes
  • Run streamlit run app.py on the server — generated cards should now match notebook quality
  • No more "No LoRA keys" warnings in stderr
llabeyrie added 1 commit 2026-03-19 23:29:56 +00:00
card_generator_adapter.py had two mismatches with the training notebook:

1. LoRA loading: used pipe.load_lora_weights() (diffusers format) but the
   adapter was saved with PEFT's save_pretrained() — keys didn't match,
   so no LoRA weights were actually applied. Now uses
   PeftModel.from_pretrained() + merge_and_unload().

2. Conditioning: built a natural language prompt, but the LoRA was trained
   on json.dumps(meta) serialization. Now uses JSON serialization to match.
llabeyrie merged commit 5e49efd7cb into main 2026-03-19 23:31:31 +00:00
llabeyrie deleted branch fix/lora-loading-and-conditioning 2026-03-19 23:31:31 +00:00