
Conversation

@asomoza (Member) commented Feb 11, 2026

Adds support for the distilled LoRAs for the base Z-Image model.

Code:

import torch

from diffusers import ZImagePipeline


repo_id = "Tongyi-MAI/Z-Image"

pipe = ZImagePipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

pipe.load_lora_weights(
    "alibaba-pai/Z-Image-Fun-Lora-Distill",
    weight_name="Z-Image-Fun-Lora-Distill-8-Steps.safetensors",
    # weight_name="Z-Image-Fun-Lora-Distill-8-Steps-2602.safetensors",
    # weight_name="Z-Image-Fun-Lora-Distill-4-Steps-2602.safetensors",
    adapter_name="distill",
)
pipe.enable_model_cpu_offload()

pipe.set_adapters("distill", 0.8)  # the LoRA authors recommend a scale between 0.7 and 0.8

prompt = """stock photo of a fox wearing a beret and smock, standing in front of an easel, painting an abstract canvas. The canvas has splashes of vibrant colors, and near the fox's paw, inside the canvas there is also the text 'Diffusers' in wild, expressive strokes. The photo is taken from the side, with the fox in sharp focus while the colorful background of the studio fades softly, adding an artsy, playful tone."""
image = pipe(
    prompt,
    height=1024,
    width=1024,
    num_inference_steps=9,
    # num_inference_steps=5, # for the 4-step lora
    guidance_scale=1.0,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]

image.save("zimage-distilled.png")
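
Optionally (not shown in the snippet above), the adapter can also be fused into the transformer weights instead of being kept active via set_adapters, or removed again afterwards. A minimal sketch, assuming ZImagePipeline exposes the standard diffusers LoRA helpers (fuse_lora, unfuse_lora, unload_lora_weights):

# Sketch only; assumes the standard diffusers LoRA mixin methods are available on ZImagePipeline.
pipe.fuse_lora(lora_scale=0.8)   # bake the adapter into the base weights at the recommended scale
# ... run inference as usual ...
pipe.unfuse_lora()               # restore the original weights
pipe.unload_lora_weights()       # or drop the adapter entirely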

Images:

Results (one image per LoRA): 8-Steps → zimage-8steps, 8-Steps 2602 → zimage-8steps-2602, 4-Steps 2602 → zimage-4steps-2602

@sayakpaul

@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul (Member) left a comment

Thanks. If you could address my comment on the support for existing LoRAs, that would be great!

state_dict = {k.removeprefix("diffusion_model."): v for k, v in state_dict.items()}

-has_lora_unet = any(k.startswith("lora_unet_") for k in state_dict)
+has_lora_unet = any(k.startswith("lora_unet_") or k.startswith("lora_unet__") for k in state_dict)

Good lord. Double _ 🥲

Comment on lines -2466 to +2473
if "." in key:
base, suffix = key.rsplit(".", 1)
else:
base, suffix = key, ""
suffix = ""
for sfx in (".lora_down.weight", ".lora_up.weight", ".alpha"):
if key.endswith(sfx):
base = key[: -len(sfx)]
suffix = sfx
break
else:
base = key
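
For context, a standalone comparison of how the two versions split a LoRA key (the key below is hypothetical, for illustration only); the new logic keeps the compound LoRA suffix intact, so the lora_down / lora_up / alpha entries of one module resolve to the same base:

key = "blocks.0.attn.to_q.lora_down.weight"  # hypothetical key, not taken from the checkpoint

# Old behaviour: split at the last dot.
base, suffix = key.rsplit(".", 1)
# -> base == "blocks.0.attn.to_q.lora_down", suffix == "weight"

# New behaviour: match the known LoRA suffixes explicitly.
suffix = ""
for sfx in (".lora_down.weight", ".lora_up.weight", ".alpha"):
    if key.endswith(sfx):
        base = key[: -len(sfx)]
        suffix = sfx
        break
else:
    base = key
# -> base == "blocks.0.attn.to_q", suffix == ".lora_down.weight"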

Hope this does not break compatibility with existing LoRAs?
