Fixing Kohya loras loading: Flux.1-dev loras with TE ("lora_te1_" prefix) + Flux.2-dev loras #13118
Conversation
Force-pushed from 9fef245 to 7bfe11f
Thanks for your PR. Could you provide a reproducer?
```python
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/big-head-kontext-lora", weight_name="flux_kontext_lora.safetensors",
)
```
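A quick way to see what this reproducer exercises is to inspect the raw checkpoint keys. The sketch below is only a diagnostic, not part of the PR; the repo and filename come from the snippet above, and it assumes `huggingface_hub` and `safetensors` are installed.

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Download the raw Kohya-style LoRA file referenced in the reproducer above.
path = hf_hub_download(
    repo_id="scenario-labs/big-head-kontext-lora",
    filename="flux_kontext_lora.safetensors",
)

with safe_open(path, framework="pt") as f:
    keys = list(f.keys())

# Kohya/sd-scripts checkpoints use "lora_unet_" for the transformer and
# "lora_te_" / "lora_te1_" for the text encoder; the "lora_te1_" keys are
# the ones that were being filtered out before this PR.
te1_keys = [k for k in keys if k.startswith("lora_te1_")]
print(f"{len(te1_keys)} 'lora_te1_' keys, e.g. {te1_keys[:2]}")
```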
Pull request overview
This PR fixes Flux/Kohya LoRA conversion so text-encoder LoRA weights using the lora_te1_ prefix are no longer filtered out during conversion, addressing dropped TE layers and the resulting “No LoRA keys associated to CLIPTextModel…” warning.
Changes:
- Extend Flux LoRA conversion filtering to retain `lora_te1_`-prefixed text encoder keys.
- Include `lora_te1_` in the `.diff_b` unsupported-keys detection/filtering logic.
- Add an additional ComfyUI key renaming step intended to produce `lora_te1_` keys (currently ineffective as written).
```diff
 state_dict = {
     _custom_replace(k, limit_substrings): v
     for k, v in state_dict.items()
-    if k.startswith(("lora_unet_", "lora_te_"))
+    if k.startswith(("lora_unet_", "lora_te_", "lora_te1_"))
 }
```
Copilot AI (Feb 10, 2026)
This change adds support for keeping lora_te1_ keys in the Flux/Kohya conversion path, but there doesn't appear to be a regression test covering loading a non-Diffusers LoRA state dict with lora_te1_-prefixed text-encoder weights (existing LoRA tests don’t mention lora_te1_). Adding a small unit/integration test would help prevent future regressions where TE1 keys get filtered out again.
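As an illustration of the kind of regression test Copilot suggests, here is a minimal integration-style sketch. It reuses the repo, filename, and pipeline from the reproducer above; the test name and assertion are assumptions, not the PR's actual tests, and the check simply verifies that the CLIP text encoder ends up with PEFT LoRA parameters.

```python
import torch
from diffusers import FluxKontextPipeline


def test_kohya_lora_te1_keys_reach_text_encoder():
    # Sketch of a regression test: a Kohya LoRA whose text-encoder weights use
    # the "lora_te1_" prefix should attach LoRA layers to pipe.text_encoder
    # instead of being silently dropped during conversion.
    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(
        "scenario-labs/big-head-kontext-lora",
        weight_name="flux_kontext_lora.safetensors",
    )
    lora_param_names = [n for n, _ in pipe.text_encoder.named_parameters() if "lora" in n]
    assert lora_param_names, "expected lora_te1_ keys to produce text encoder LoRA params"
```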
Here is another reproducer, with Flux.1-dev instead of Flux.1 Kontext-dev:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/kohya-sd-scripts-loras", weight_name="flux_lora.safetensors",
)
```

The LoRA has been trained with the Kohya/sd-scripts framework.
Force-pushed from 0a9e3ea to 144704f
I added support for Kohya's Flux.2-dev loras to the PR as well. Reproducer code is:

```python
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-dev_lora.safetensors",
)
```

Without the fix, I previously got:

With this PR, on-the-fly conversion now works for Flux.2-dev loras as well. In this implementation, I preserved the logic of …
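To sanity-check that the on-the-fly conversion actually attached the adapter, a short follow-up to the Flux.2-dev reproducer above can be used. This is only a sketch, and it assumes the pipeline exposes its denoiser as `pipe.transformer`, as other diffusers pipelines do.

```python
# Continuing from the Flux.2-dev reproducer above: after load_lora_weights,
# the converted weights should show up as PEFT LoRA parameters (lora_A / lora_B).
lora_params = [n for n, _ in pipe.transformer.named_parameters() if "lora" in n]
print(f"{len(lora_params)} LoRA parameters attached, e.g. {lora_params[:3]}")
```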
Force-pushed from 144704f to 5518d07
This also works for Flux.2-klein; here is the reproducer:

```python
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4b", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-klein-4b_lora.safetensors",
)
```
I just saw that you have an algorithm in … I also got an error for Z-Image loras:

```python
import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="zimage-turbo_lora.safetensors",
)
```

but I used Kohya's conversion script for Z-Image and that worked with the converted lora:

```python
import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="zimage-turbo_lora_converted.safetensors",
)
```

As far as we are concerned @scenario-labs, we are all set with this PR.
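For context, here is a small sketch of how one could compare the key layouts of the unconverted and converted Z-Image LoRA files mentioned above. The repo and filenames come from the snippets above; the helper is hypothetical, and `huggingface_hub` / `safetensors` are assumed to be available.

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open


def sample_keys(repo_id: str, filename: str, n: int = 5) -> list[str]:
    """Return a few sorted keys from a safetensors file, to eyeball its naming scheme."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with safe_open(path, framework="pt") as f:
        return sorted(f.keys())[:n]


# The raw musubi-tuner export vs. the file produced by Kohya's conversion script:
# the key prefixes show which naming convention each checkpoint uses.
print(sample_keys("scenario-labs/musubi-tuner-loras", "zimage-turbo_lora.safetensors"))
print(sample_keys("scenario-labs/musubi-tuner-loras", "zimage-turbo_lora_converted.safetensors"))
```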
Text encoder LoRA layers are dropped for some LoRAs, such as this one.
A log message confirms it:
```
No LoRA keys associated to CLIPTextModel found with the prefix='text_encoder'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
```

At least, this PR brings more consistency: wherever there is `lora_te_`, there should also be `lora_te1_`.

Closes #12053
@sayakpaul
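To illustrate the consistency point above, here is a minimal sketch of the intended prefix handling. The constant and function names are made up for illustration; the real logic lives in diffusers' LoRA conversion utilities.

```python
# Hypothetical names, for illustration only: every place that filters or renames
# Kohya text-encoder keys should accept both "lora_te_" and "lora_te1_".
KOHYA_TE_PREFIXES = ("lora_te_", "lora_te1_")
KOHYA_PREFIXES = ("lora_unet_",) + KOHYA_TE_PREFIXES


def keep_kohya_keys(state_dict: dict) -> dict:
    """Keep only keys that use a recognized Kohya prefix, including lora_te1_."""
    return {k: v for k, v in state_dict.items() if k.startswith(KOHYA_PREFIXES)}
```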