[CVPR 2026] VAD-GS: Visibility-Aware Densification for 3D Gaussian Splatting in Dynamic Urban Scenes
Project page | Paper | Youtube | Bilibili
Clone this repository and check out the dev branch
git clone https://github.com/YikangZhang1641/VAD-GS.git
cd VAD-GS
git checkout -b dev origin/dev
Build tools
Set up the environment
# Create and sync the environment with uv
uv venv
# Install project dependencies
# On Linux, uv resolves torch/torchvision from the PyTorch cu124 index.
uv sync
data/
├── nuscenes/
│   ├── raw/
│   ├── processed_10Hz/
│   │   ├── mini/
│   │   │   ├── 000/
│   │   │   │   ├── images/
│   │   │   │   ├── ego_pose/
│   │   │   │   ├── lidar_depth/
│   │   │   │   └── ...
│   │   │   ├── 001/
│   │   │   ├── ...
└── waymo/
    └── ...
- We provide a nuScenes example here. Download and extract it so that it matches the directory layout above.
For training:
python train.py --config configs/example/nuscenes_train_000.yaml
To generate visual outputs:
python render.py --config configs/example/nuscenes_train_000.yaml mode evaluate
For evaluation:
python metrics.py --config configs/example/nuscenes_train_000.yaml
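Reconstruction quality in this setting is commonly reported with image metrics such as PSNR (we have not confirmed exactly which metrics `metrics.py` emits). As a reference point, a minimal pure-Python PSNR over flat pixel lists looks like this:

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-size flat pixel
    sequences with values in [0, max_val]. Higher is better; identical
    images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return math.inf if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

In practice the rendered and ground-truth frames would be flattened arrays of normalized pixel values; the function names here are illustrative, not part of the repository's API.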
