feat(flux2): add FLUX.2 klein model support #8768
Conversation
Add new invocation nodes for FLUX 2:
- flux2_denoise: denoising invocation for FLUX 2
- flux2_klein_model_loader: model loader for the Klein architecture
- flux2_klein_text_encoder: text encoder for Qwen3-based encoding
- flux2_vae_decode: VAE decoder for FLUX 2

Add backend support:
- New flux2 module with denoise and sampling utilities
- Extended model manager configs for FLUX 2 models
- Updated model loaders for the Klein architecture

Update frontend:
- Extended graph builder for FLUX 2 support
- Added FLUX 2 model types and configurations
- Updated readiness checks and UI components
The FLUX.2 VAE uses batch normalization in the patchified latent space (128 channels). The decode must:
1. Patchify latents from (B, 32, H, W) to (B, 128, H/2, W/2)
2. Apply BN denormalization using running_mean/running_var
3. Unpatchify back to (B, 32, H, W) for VAE decode

Also fixed image normalization from [-1, 1] to [0, 255]. This fixes washed-out colors in generated FLUX.2 Klein images.
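The three decode steps above can be sketched in isolation as follows. This is a minimal sketch, not the PR's actual code: the function name, the explicit running_mean/running_var arguments, and the eps default are illustrative assumptions.

```python
import torch


def denormalize_flux2_latents(
    latents: torch.Tensor,       # (B, 32, H, W)
    running_mean: torch.Tensor,  # (128,), from the VAE's BN layer
    running_var: torch.Tensor,   # (128,)
    eps: float = 1e-5,           # assumed BN epsilon, illustrative
) -> torch.Tensor:
    b, c, h, w = latents.shape

    # 1. Patchify: fold each 2x2 spatial patch into channels,
    #    (B, 32, H, W) -> (B, 128, H/2, W/2).
    x = latents.reshape(b, c, h // 2, 2, w // 2, 2)
    x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * 4, h // 2, w // 2)

    # 2. Invert the batch normalization using the stored running statistics:
    #    x_denorm = x * sqrt(var + eps) + mean, applied per channel.
    mean = running_mean.view(1, -1, 1, 1)
    var = running_var.view(1, -1, 1, 1)
    x = x * torch.sqrt(var + eps) + mean

    # 3. Unpatchify back to (B, 32, H, W) for the VAE decoder.
    x = x.reshape(b, c, 2, 2, h // 2, w // 2)
    x = x.permute(0, 1, 4, 2, 5, 3).reshape(b, c, h, w)
    return x
```

With zero mean, unit variance, and eps=0 the function is an identity, which makes the patchify/unpatchify pair easy to sanity-check.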
…nkuchensack/InvokeAI into feature/flux2-klein-support
…ompatibility
- Add FLUX.2 transformer loader with BFL-to-diffusers weight conversion
- Fix AdaLayerNorm scale-shift swap for final_layer.adaLN_modulation weights
- Add VAE batch normalization handling for FLUX.2 latent normalization
- Add Qwen3 text encoder loader with ComfyUI FP8 quantization support
- Add frontend components for FLUX.2 Klein model selection
- Update configs and schema for FLUX.2 model types
- Add isFlux2Klein9BMainModelConfig and isNonCommercialMainModelConfig functions
- Update MainModelPicker and InitialStateMainModelPicker to show the license icon
- Update license tooltip text to include FLUX.2 Klein 9B
I tested with two models, including the diffusers model. Initial observations:
Here's a trivial fix for the error.
Backend:
- Add klein_4b/klein_9b variants for FLUX.2 Klein models
- Add qwen3_4b/qwen3_8b variants for Qwen3 encoder models
- Validate that the encoder variant matches the Klein model (4B↔4B, 9B↔8B)
- Auto-detect the Qwen3 variant from hidden_size during probing

Frontend:
- Show the variant field for all model types in ModelView
- Filter the Qwen3 encoder dropdown to only show compatible variants
- Update variant type definitions (zFlux2VariantType, zQwen3VariantType)
- Remove unused exports (isFluxDevMainModelConfig, isFlux2Klein9BMainModelConfig)
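The 4B↔4B / 9B↔8B pairing above amounts to a small lookup-and-validate step. The sketch below is hypothetical: the mapping table and function name are assumptions built from the variant identifiers named in this PR, not code taken from it.

```python
# Assumed pairing of Klein model variants to compatible Qwen3 encoder
# variants, per the 4B<->4B, 9B<->8B rule described in the PR.
KLEIN_TO_QWEN3 = {
    "klein_4b": "qwen3_4b",
    "klein_9b": "qwen3_8b",
}


def validate_encoder_variant(klein_variant: str, qwen3_variant: str) -> None:
    """Raise ValueError if the Qwen3 encoder does not match the Klein model."""
    expected = KLEIN_TO_QWEN3.get(klein_variant)
    if expected is None:
        raise ValueError(f"Unknown FLUX.2 Klein variant: {klein_variant}")
    if qwen3_variant != expected:
        raise ValueError(
            f"{klein_variant} requires the {expected} text encoder, "
            f"got {qwen3_variant}"
        )
```

A table-driven check like this keeps the rule in one place, so the same mapping can drive both the backend validation and the frontend dropdown filtering.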
Distinguish between FLUX.2 Klein 9B (distilled) and Klein 9B Base (undistilled) models by checking guidance_embeds in diffusers config or guidance_in keys in safetensors. Klein 9B Base requires more steps but offers higher quality.
Backend changes:
- Update text encoder layers from [9, 18, 27] to (10, 20, 30) to match diffusers
- Use apply_chat_template with a system message instead of manual formatting
- Change position IDs from ones to zeros to match the diffusers implementation
- Add get_schedule_flux2() with empirical mu computation for proper schedule shifting
- Add a txt_embed_scale parameter for Qwen3 embedding magnitude control
- Add a shift_schedule toggle for base (28+ steps) vs. distilled (4 steps) models
- Zero out guidance_embedder weights for Klein models without guidance_embeds

UI changes:
- Clear the Klein VAE and Qwen3 encoder when switching away from the flux2 base
- Clear the Qwen3 encoder when switching between different Klein model variants
- Add a toast notification informing the user to select a compatible encoder
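One of the backend items above, zeroing the guidance_embedder weights, amounts to an in-place parameter reset. The sketch assumes the embedder is an ordinary nn.Module attribute on the transformer; the real module layout in the PR may differ.

```python
import torch
from torch import nn


def zero_guidance_embedder(embedder: nn.Module) -> None:
    """Zero all parameters of a guidance embedder in place, so it
    contributes nothing for Klein models without guidance_embeds."""
    with torch.no_grad():
        for p in embedder.parameters():
            p.zero_()
```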
The recent commit is causing the previously-installed ZImage Qwen3 encoders to fail validation with warnings like this one:
Another issue I just noticed. When I try to generate with Flux.1 (schnell or krea), I am getting "Node with id pos_cond_collect is not found." |
- Configure the scheduler with FLUX.2 Klein parameters from scheduler_config.json (use_dynamic_shifting=True, shift=3.0, time_shift_type="exponential")
- Pass the mu parameter to scheduler.set_timesteps() for resolution-aware shifting
- Remove the manual shift_schedule parameter (the scheduler handles this automatically)
- Simplify get_schedule_flux2() to return linear sigmas only
- Remove the txt_embed_scale parameter (no longer needed)

This matches the diffusers Flux2KleinPipeline behavior, where the FlowMatchEulerDiscreteScheduler applies dynamic timestep shifting based on image resolution via the mu parameter. Fixes quality issues with the 4-step distilled Klein 9B model.
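For reference, the resolution-aware shifting works roughly as follows: a mu value is interpolated from the latent sequence length and passed to the scheduler, which applies an exponential time shift to each sigma. This sketch follows the general diffusers Flux convention; the interpolation constants shown are the FLUX.1 defaults and are purely illustrative, since FLUX.2 Klein's scheduler_config.json may use different values.

```python
import math


def compute_mu(
    image_seq_len: int,
    base_seq_len: int = 256,    # illustrative FLUX.1 defaults,
    max_seq_len: int = 4096,    # not necessarily Klein's values
    base_shift: float = 0.5,
    max_shift: float = 1.15,
) -> float:
    # Linearly interpolate mu as a function of the latent sequence length,
    # so larger images get a larger shift.
    m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
    b = base_shift - m * base_seq_len
    return image_seq_len * m + b


def time_shift_exponential(mu: float, sigma: float) -> float:
    # The "exponential" time shift applied per sigma when dynamic
    # shifting is enabled: mu=0 leaves sigma unchanged, larger mu
    # pushes more of the schedule toward high noise levels.
    return math.exp(mu) / (math.exp(mu) + (1.0 / sigma - 1.0))
```

Because the scheduler now performs this shift internally once mu is passed to set_timesteps(), the manual shift_schedule toggle becomes redundant, which is why the PR removes it.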
The posCondCollect node was created with getPrefixedId(), which generates a random suffix (e.g. 'pos_cond_collect:abc123'), but g.getNode() was called with the plain string 'pos_cond_collect', causing the node lookup to fail. Fix by declaring posCondCollect as a module-scoped variable and referencing it directly instead of using g.getNode().
Summary
Add initial support for the FLUX.2 Klein model architecture.
New Invocation Nodes
Backend Support
flux2 module with denoise and sampling utilities

Frontend Updates
Related Issues / Discussions
QA Instructions
Merge Plan
Standard merge after review.
Checklist
What's New copy (if doing a release after this PR)