[CVPR 2025] DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters
Mingze Sun¹*, Junhao Chen¹*, Junting Dong²†, Yurun Chen¹, Xinyu Jiang¹, Shiwei Mao¹,
Puhua Jiang¹,³, Jingbo Wang², Bo Dai², Ruqi Huang¹†
¹ Tsinghua University  ² Shanghai AI Laboratory  ³ PengCheng Laboratory
✅ 1. Our fine-tuned SDXL pipeline for Text2Anime and Image2Anime.
✅ 2. Our fine-tuned LGM with SV3D.
✅ 3. Inference code for generating anime avatar 3DGS from text or image inputs.
✅ 4. The code for generating skeleton bindings and skinning weights for 3DGS (an illustrative skinning sketch follows this list).
⚪️ 5. The AnimeRig dataset, which contains nearly 10,000 3D meshes and 3DGS, along with their corresponding skeleton riggings and skinning weights.
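For context on item 4, below is a minimal sketch of how per-Gaussian skinning weights are typically combined with a skeleton via linear blend skinning to deform 3DGS centers. This is a generic illustration under assumed names and shapes (`centers`, `weights`, `bone_transforms`), not the repository's implementation.

```python
import numpy as np

def lbs_deform_gaussians(centers, weights, bone_transforms):
    """Deform Gaussian centers with linear blend skinning (LBS).

    centers:         (N, 3)    Gaussian means in the rest pose
    weights:         (N, J)    per-Gaussian skinning weights, each row sums to 1
    bone_transforms: (J, 4, 4) rest-to-posed transform for each joint
    returns:         (N, 3)    deformed Gaussian means
    """
    # Homogeneous coordinates: (N, 4)
    homo = np.concatenate([centers, np.ones((centers.shape[0], 1))], axis=1)
    # Blend the per-joint transforms with the skinning weights: (N, 4, 4)
    blended = np.einsum('nj,jab->nab', weights, bone_transforms)
    # Apply each Gaussian's blended transform to its own center: (N, 4)
    posed = np.einsum('nab,nb->na', blended, homo)
    return posed[:, :3]

# Example usage with placeholder data (5 Gaussians, 2 joints).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.normal(size=(5, 3))
    weights = rng.random((5, 2))
    weights /= weights.sum(axis=1, keepdims=True)
    bone_transforms = np.tile(np.eye(4), (2, 1, 1))  # identity = no deformation
    print(lbs_deform_gaussians(centers, weights, bone_transforms))
```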
For installation instructions, see install.md.
To run the full generation pipeline:

```bash
bash ./scripts/runpipe2.sh
```
If you find our work useful, please cite:

```bibtex
@InProceedings{Sun_2025_CVPR,
author = {Sun, Mingze and Chen, Junhao and Dong, Junting and Chen, Yurun and Jiang, Xinyu and Mao, Shiwei and Jiang, Puhua and Wang, Jingbo and Dai, Bo and Huang, Ruqi},
title = {DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {21170-21180}
}
```

This work builds on many amazing research works and open-source projects. Many thanks to all the authors for sharing!
