- Create the environment and install all dependencies:

  ```bash
  # First, create and activate a Python 3.12 virtual environment
  uv venv --python 3.12
  source .venv/bin/activate

  # After activating the environment, install all dependencies in one step
  # (Modify the PyTorch-related lines according to your system/CUDA version)
  uv pip install \
      torch==2.5.1 \
      torchvision==0.20.1 \
      torchaudio==2.5.1 \
      -r requirements.txt
  ```

  Note: On Windows, the activation command is `.venv\Scripts\activate`.
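  Optionally, you can sanity-check that PyTorch matches your CUDA setup before continuing. This one-liner is a generic check, not part of the project's scripts:

  ```bash
  # Prints the installed torch version and whether CUDA is usable
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
  ```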
- Install GroundedSAM:

  ```bash
  mkdir -p modules
  cd modules
  git clone https://github.com/SoluteToNight/GroundedSam4FACT.git
  cd GroundedSam4FACT
  uv pip install -e .
  uv pip install --no-build-isolation -e grounding_dino
  ```

  For more details, see the Grounded-SAM-2 repository.
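  To verify the install, you can try importing the packages the two editable installs provide. The module names below (`sam2`, `groundingdino`) are assumed from the upstream Grounded-SAM-2 layout; adjust them if this fork differs:

  ```bash
  # Module names assumed from upstream Grounded-SAM-2; adjust if the fork differs
  python -c "import sam2, groundingdino; print('GroundedSAM imports OK')"
  ```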
The main script for running the processing workflow is `demo.py`.
- Place your 3D model assets (e.g., `.obj`, `.mtl`, and texture images) into a subdirectory inside the `obj` folder. See `obj/example1` for reference, and the layout sketch after this list.

- Run the script from the command line. You can specify the input directory for your model and where to save the output:
  ```bash
  python demo.py --input ./obj/example1 --output ./outputs/example1
  ```
  - `--input`: Path to the directory containing the input model and textures.
  - `--output`: Directory where processed textures will be saved.
  - `--device`: Computation device, `cuda` or `cpu` (default: `cuda`).
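A minimal input layout, for illustration (the filenames are hypothetical; any `.obj` with its `.mtl` and the texture images it references should work):

```text
obj/
└── example1/
    ├── model.obj       # geometry, references model.mtl
    ├── model.mtl       # material definitions, reference the texture images
    └── texture_0.png   # texture image(s) used by the materials
```

To run on a machine without a GPU, pass `--device cpu`.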