Conversation

christopher5106 (Author) commented Feb 10, 2026

Text encoder LoRA layers are dropped for some LoRAs, such as this one. A log message confirms it:

No LoRA keys associated to CLIPTextModel found with the prefix='text_encoder'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new

At least, this PR brings more consistency: wherever lora_te_ is handled, lora_te1_ should be handled as well.
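
For illustration, here is a tiny, self-contained Python check of why the existing filter misses these keys (the key name is a made-up Kohya-style example, not taken from the actual checkpoint):

# "lora_te1_..." does not start with "lora_te_" (the underscore comes right
# after "te", not after "te1"), so a filter on ("lora_unet_", "lora_te_")
# silently drops every text-encoder key carrying the "lora_te1_" prefix.
key = "lora_te1_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight"
print(key.startswith(("lora_unet_", "lora_te_")))               # False -> dropped
print(key.startswith(("lora_unet_", "lora_te_", "lora_te1_")))  # True  -> kept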

Closes #12053

@sayakpaul

@christopher5106 christopher5106 changed the title feat(image-ml): fixing text encoder lora loading Fixing text encoder lora loading on some loras Feb 10, 2026
@christopher5106 christopher5106 changed the title Fixing text encoder lora loading on some loras Fixing text encoder lora loading when prefix is "lora_te1_" Feb 10, 2026
sayakpaul (Member) commented:

Thanks for your PR. Could you provide a reproducer?

christopher5106 (Author) commented Feb 10, 2026

import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)

pipe.load_lora_weights(
    "scenario-labs/big-head-kontext-lora", weight_name="flux_kontext_lora.safetensors",
)
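
For reference, here is a minimal sketch of how one could confirm that the checkpoint above ships text-encoder weights under the lora_te1_ prefix; the repo id and file name are taken from the reproducer, while the prefix check itself is an assumption about the Kohya-style naming under discussion:

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download only the LoRA file and inspect its key prefixes.
path = hf_hub_download("scenario-labs/big-head-kontext-lora", "flux_kontext_lora.safetensors")
state_dict = load_file(path)
te_keys = [k for k in state_dict if k.startswith(("lora_te_", "lora_te1_"))]
print(len(te_keys))  # non-zero means the checkpoint contains text-encoder LoRA weights
print(te_keys[:3])   # a few example keys for inspection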

Copilot AI (Contributor) left a comment:

Pull request overview

This PR fixes Flux/Kohya LoRA conversion so text-encoder LoRA weights using the lora_te1_ prefix are no longer filtered out during conversion, addressing dropped TE layers and the resulting “No LoRA keys associated to CLIPTextModel…” warning.

Changes:

  • Extend Flux LoRA conversion filtering to retain lora_te1_-prefixed text encoder keys.
  • Include lora_te1_ in the .diff_b unsupported-keys detection/filtering logic.
  • Add an additional ComfyUI key renaming step intended to produce lora_te1_ keys (currently ineffective as written).


Comment on lines 897 to 900

  state_dict = {
      _custom_replace(k, limit_substrings): v
      for k, v in state_dict.items()
-     if k.startswith(("lora_unet_", "lora_te_"))
+     if k.startswith(("lora_unet_", "lora_te_", "lora_te1_"))
  }

Copilot AI commented Feb 10, 2026:

This change adds support for keeping lora_te1_ keys in the Flux/Kohya conversion path, but there doesn't appear to be a regression test covering loading a non-Diffusers LoRA state dict with lora_te1_-prefixed text-encoder weights (existing LoRA tests don’t mention lora_te1_). Adding a small unit/integration test would help prevent future regressions where TE1 keys get filtered out again.
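
A minimal sketch of what such a regression check could look like, exercising only the prefix filter in isolation; the key names are hypothetical Kohya-style examples, and this is not the exact test the repository would use:

def test_lora_te1_keys_survive_prefix_filter():
    # Hypothetical Kohya-style keys; only the prefixes matter for this check.
    keys = [
        "lora_unet_double_blocks_0_img_attn_qkv.lora_down.weight",
        "lora_te1_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight",
    ]
    old_prefixes = ("lora_unet_", "lora_te_")
    new_prefixes = ("lora_unet_", "lora_te_", "lora_te1_")
    # Before the fix the TE1 key was filtered out; after the fix both keys survive.
    assert len([k for k in keys if k.startswith(old_prefixes)]) == 1
    assert len([k for k in keys if k.startswith(new_prefixes)]) == 2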

christopher5106 (Author) commented Feb 11, 2026

Here is another reproducer, with Flux.1 dev instead of Flux.1 Kontext dev:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

pipe.load_lora_weights(
    "scenario-labs/kohya-sd-scripts-loras", weight_name="flux_lora.safetensors",
)

The LoRA was trained with the Kohya sd-scripts framework.
With the fix in this PR, the following warning disappears and the text encoder layers are no longer dropped:

No LoRA keys associated to CLIPTextModel found with the prefix='text_encoder'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new

christopher5106 force-pushed the fix_lora branch 3 times, most recently from 0a9e3ea to 144704f on February 11, 2026 at 15:54
christopher5106 (Author) commented Feb 11, 2026

I added support for Kohya's Flux.2-dev LoRAs to this PR as well.

The reproducer is:

import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-dev_lora.safetensors",
)

Without the fix, I previously got:
No LoRA keys associated to Flux2Transformer2DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any Flux2Transformer2DModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new

With this PR, on-the-fly conversion now works for Flux.2-dev LoRAs as well.

In this implementation, I preserved the logic of _convert_kohya_flux_lora_to_diffusers(), with two small differences: first, I infer the maximum number of blocks from the state dict keys; second, I had to remap the LoRA keys for the new Flux2FeedForward and Flux2ParallelSelfAttnProcessor modules.
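
For the block-count inference, here is a minimal sketch of the idea; the key pattern and regex below are illustrative Kohya/Musubi-Tuner-style assumptions, not the exact code in the PR:

import re

def infer_num_blocks(state_dict, pattern=r"lora_unet_double_blocks_(\d+)_"):
    # Collect every block index that appears in the keys and derive the
    # block count from the largest one (indices are 0-based).
    indices = [int(m.group(1)) for k in state_dict for m in [re.match(pattern, k)] if m]
    return max(indices) + 1 if indices else 0

# Example with hypothetical keys:
sd = {
    "lora_unet_double_blocks_0_img_attn_qkv.lora_down.weight": None,
    "lora_unet_double_blocks_18_img_attn_qkv.lora_down.weight": None,
}
print(infer_num_blocks(sd))  # 19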

@christopher5106 christopher5106 changed the title Fixing text encoder lora loading when prefix is "lora_te1_" Fixing Kohya loras loading: Flux.1-dev loras with TE ("lora_te1_" prefix) + Flux.2-dev loras Feb 11, 2026
christopher5106 (Author) commented:

It also works for Flux.2-klein; here is the reproducer:

import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4b", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-klein-4b_lora.safetensors",
)

christopher5106 (Author) commented Feb 11, 2026

I just saw that _convert_non_diffusers_qwen_lora_to_diffusers() for Qwen-Image contains an algorithm that could also have worked for Flux 2, but _convert_non_diffusers_flux2_lora_to_diffusers() does not handle the "lora_unet_" prefix at all. Anyway, what I did works, but at some point it should be possible to unify the handling of everything coming from Musubi-Tuner (I also saw a function for Wan).

I also got an error for Z-Image LoRAs:

import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained("Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16)

pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="zimage-turbo_lora.safetensors",
)

but I used Kohya's conversion script for Z-Image, and loading the converted LoRA worked:

import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained("Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16)

pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="zimage-turbo_lora_converted.safetensors",
)

As far as we at @scenario-labs are concerned, we are all set with this PR.


Development

Successfully merging this pull request may close these issues:

Flux1.Dev Kohya Loras text encoder layers no more supported