Fix LoRA loading and conditioning to match training notebook #23
Summary
The adapter had two mismatches with the training notebook (`pokemon_card_training_2.ipynb`), causing the Streamlit app to generate vanilla SD 1.5 images instead of LoRA-finetuned Pokemon cards:

1. **LoRA loading:** Used `pipe.load_lora_weights()` (diffusers format), but the adapter was saved with PEFT's `save_pretrained()`. The keys didn't match, producing the warning `No LoRA keys associated to UNet2DConditionModel found`. Now uses `PeftModel.from_pretrained()` + `merge_and_unload()`, matching the notebook.
2. **Conditioning format:** Built a natural-language prompt (`"Pokemon trading card of X, Fire-type..."`), but the LoRA was trained on the `json.dumps(meta)` serialization. Now uses JSON serialization to match.

Test plan
Run `streamlit run app.py` on the server; generated cards should now match notebook quality.
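For reference, a minimal sketch of the corrected loading path. The adapter directory name and base model ID here are placeholders, not the actual values from the repo; the PEFT calls (`PeftModel.from_pretrained()`, `merge_and_unload()`) are the ones the notebook uses.

```python
def load_finetuned_pipe(adapter_dir: str = "lora_out/"):
    """Load SD 1.5 and merge a PEFT-saved LoRA adapter into the UNet.

    `adapter_dir` and the base model ID are illustrative placeholders.
    """
    import torch
    from diffusers import StableDiffusionPipeline
    from peft import PeftModel

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    # The adapter was written by PEFT's save_pretrained(), so it must be
    # loaded with PeftModel.from_pretrained() -- pipe.load_lora_weights()
    # expects diffusers-format keys and silently finds none.
    pipe.unet = PeftModel.from_pretrained(pipe.unet, adapter_dir)
    # Fold the LoRA deltas into the base weights for plain inference.
    pipe.unet = pipe.unet.merge_and_unload()
    return pipe
```

The function is defined but not called here, since actually loading the pipeline requires GPU weights on disk.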
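The conditioning fix amounts to building the prompt exactly as the training loop did. The metadata keys below are hypothetical examples, not the repo's actual schema; the point is that the prompt is the raw JSON string, not a natural-language sentence.

```python
import json

# Hypothetical card metadata -- the real keys come from the training data.
meta = {"name": "Charmander", "type": "Fire", "hp": 50}

# The LoRA was trained on json.dumps(meta) as the text condition, so
# inference must serialize the same way instead of composing a sentence
# like "Pokemon trading card of Charmander, Fire-type...".
prompt = json.dumps(meta)
```

Any divergence in serialization (key order, separators) shifts the prompt away from the training distribution, so reusing `json.dumps` with its defaults is the safest match.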