
Hallo4: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization

Yun Zhan¹  Zilong Dong⁵  Yao Yao⁴  Jingdong Wang²  Siyu Zhu¹,³ ✉️
¹Fudan University  ²Baidu Inc  ³Shanghai Innovative Institute
⁴Nanjing University  ⁵Alibaba Group


📸 Showcase

Demo videos: rap1.mp4 · rap2.mp4 · rap3.mp4 · hb1_.mp4 · hb2_.mp4 · hb3_.mp4

⚙️ Installation

  • System requirement: Ubuntu 20.04/Ubuntu 22.04, CUDA 12.1
  • Tested GPUs: H100

Download the code:

  git clone https://github.com/fudan-generative-vision/hallo4
  cd hallo4

Create conda environment:

  conda create -n hallo python=3.10
  conda activate hallo

Install packages with pip:

  pip install -r requirements.txt

In addition, ffmpeg is required:

  apt-get install ffmpeg

📥 Download Pretrained Models

You can easily get all pretrained models required for inference from our HuggingFace repo.

Use huggingface-cli to download the models:

cd $ProjectRootDir
pip install "huggingface_hub[cli]"
huggingface-cli download fudan-generative-ai/hallo4 --local-dir ./pretrained_models
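
Alternatively, the models can be fetched from Python with huggingface_hub's snapshot_download (a minimal sketch, equivalent to the CLI command above):

  # Sketch: download all Hallo4 pretrained weights into ./pretrained_models,
  # same effect as the huggingface-cli command above.
  from huggingface_hub import snapshot_download

  snapshot_download(
      repo_id="fudan-generative-ai/hallo4",
      local_dir="./pretrained_models",
  )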

Finally, these pretrained models should be organized as follows:

./pretrained_models/
|-- hallo4/
|   `-- model_weight.pth
|-- Wan2.1_Encoders/
|   |-- Wan2.1_VAE.pth
|   `-- models_t5_umt5-xxl-enc-bf16.pth
|-- audio_separator/
|   |-- download_checks.json
|   |-- mdx_model_data.json
|   |-- vr_model_data.json
|   `-- Kim_Vocal_2.onnx
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json
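
A quick way to verify the download is complete (a minimal sketch; the expected paths are taken from the layout above):

  # Sketch: check that the key pretrained files listed above are present.
  from pathlib import Path

  root = Path("./pretrained_models")
  expected = [
      "hallo4/model_weight.pth",
      "Wan2.1_Encoders/Wan2.1_VAE.pth",
      "Wan2.1_Encoders/models_t5_umt5-xxl-enc-bf16.pth",
      "audio_separator/Kim_Vocal_2.onnx",
      "wav2vec/wav2vec2-base-960h/model.safetensors",
  ]
  missing = [p for p in expected if not (root / p).exists()]
  print("All key files present." if not missing else f"Missing: {missing}")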

🛠️ Prepare Inference Data

Hallo4 has some special requirements on inference data due to limitations of our training:

  1. The reference image should have an aspect ratio between 1:1 and 480:832.
  2. The driving audio must be in WAV format (a conversion sketch follows this list).
  3. The audio must be in English, since our training datasets contain only English speech.
  4. Ensure the vocals in the audio are clear; background music is acceptable.
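
For reference, a minimal preparation sketch: it checks the image aspect ratio with Pillow and converts the audio to WAV by calling ffmpeg via subprocess. The 16 kHz mono target matches what wav2vec2-base-960h expects; the input file names are placeholders:

  # Sketch: check the reference image and convert driving audio to WAV.
  # Assumes Pillow is installed and ffmpeg is on PATH; file names are placeholders.
  import subprocess
  from PIL import Image

  img = Image.open("reference.jpg")
  ratio = img.width / img.height
  # Interpreting the allowed range as width:height between 480:832 and 1:1.
  assert 480 / 832 <= ratio <= 1.0, f"aspect ratio {ratio:.3f} out of range"

  # wav2vec2-base-960h was trained on 16 kHz mono audio.
  subprocess.run(
      ["ffmpeg", "-i", "audio.mp3", "-ar", "16000", "-ac", "1", "audio.wav"],
      check=True,
  )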

🎮 Run Inference

To run a simple demo, just use the provided shell script:

  bash inf.sh

⚠️ Social Risks and Mitigations

The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these involves transparent data usage policies, informed consent, and safeguarding privacy rights. By addressing these risks and implementing mitigations, the research aims to ensure the responsible and ethical development of this technology.

🤗 Acknowledgements

This model is a fine-tuned derivative of the WAN2.1-1.3B model. WAN is an open-source video generation model developed by the WAN team. Its original code and model parameters are governed by the WAN LICENSE.

As a derivative work of WAN, the use, distribution, and modification of this model must comply with the license terms of WAN.
