Describe the bug
Hey,
I am trying to use AutoencoderRAE, but loading it fails with the following error:
File ".venv/lib/python3.13/site-packages/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py", line 529, in _init_weights
).to(module.weight.dtype)
^^
AttributeError: 'NoneType' object has no attribute 'to'

I think it's because of older tokenizers, transformers, and huggingface-hub versions.
I am using the following versions:
transformers==4.55.4
tokenizers==0.21.2
huggingface-hub==0.36.2
diffusers==0.37.0

Updating them to the following fixes the problem:
transformers==5.3.0
tokenizers==0.22.2
huggingface-hub==1.7.1
diffusers==0.37.0

However, updating breaks my metric script for calculating ImageReward! Is there a solution for this without updating packages?
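One workaround I could imagine without upgrading (just a sketch, not an official fix — the real patch target would be the `_init_weights` method in `transformers.models.dinov2_with_registers.modeling_dinov2_with_registers`, and the guard condition is my assumption) is to wrap `_init_weights` so it skips modules whose `weight` is `None`. The pattern, demonstrated on stand-in classes so it runs without transformers installed:

```python
# Sketch of the monkey-patch idea. FakeModule / FakePretrainedModel stand in
# for nn.Module and the transformers model class; in the real case you would
# patch something like Dinov2WithRegistersPreTrainedModel._init_weights
# (hypothetical target, verify against your installed transformers version).

class FakeModule:
    """Stand-in for an nn.Module; weight may be None (the failing case)."""
    def __init__(self, weight):
        self.weight = weight

class FakePretrainedModel:
    """Stand-in for the model class whose _init_weights is buggy."""
    def _init_weights(self, module):
        # Mimics the failing code path: unconditionally touches module.weight,
        # which raises AttributeError when weight is None.
        return module.weight * 2

# Keep a reference to the original, then install a guarded wrapper.
_orig_init_weights = FakePretrainedModel._init_weights

def _safe_init_weights(self, module):
    # Guard: skip modules without a materialized weight tensor.
    if getattr(module, "weight", None) is None:
        return None
    return _orig_init_weights(self, module)

FakePretrainedModel._init_weights = _safe_init_weights

model = FakePretrainedModel()
print(model._init_weights(FakeModule(3)))     # original behavior preserved: 6
print(model._init_weights(FakeModule(None)))  # no AttributeError, returns None
```

The same wrapping would have to be applied before calling `from_pretrained`, since that is when weight initialization runs.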
Reproduction
The package versions must be:
transformers==4.55.4
tokenizers==0.21.2
huggingface-hub==0.36.2
diffusers==0.37.0

Then the simple loading fails:
from diffusers import AutoencoderRAE
model = AutoencoderRAE.from_pretrained(
    "nyu-visionx/RAE-dinov2-wReg-base-ViTXL-n08"
).to("cuda").eval()

Logs
File ".venv/lib/python3.13/site-packages/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py", line 529, in _init_weights
).to(module.weight.dtype)
^^
AttributeError: 'NoneType' object has no attribute 'to'

System Info
- 🤗 Diffusers version: 0.37.0
- Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.13.5
- PyTorch version (GPU?): 2.8.0+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.36.2
- Transformers version: 4.55.4
- Accelerate version: 1.13.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.7.0
- xFormers version: not installed
- Accelerator: NVIDIA RTX A6000, 49140 MiB
  NVIDIA RTX A6000, 49140 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Who can help?