This is a reimplementation of the paper "Detecting Deepfakes Without Seeing Any." It implements deepfake detectors for audio-visual and face-swapping scenarios.
Create a conda virtual environment, activate it, and install the packages listed in the requirements file. You must also install faiss-cpu or faiss-gpu, as well as sentencepiece and dlib, both of which require cmake to be installed first.
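The setup steps above might look like the following. This is a sketch, not a verbatim script: the environment name `deepfake-detect`, the Python version, and the `requirements.txt` filename are assumptions, and you should pick faiss-cpu or faiss-gpu to match your hardware.

```shell
# Create and activate a conda environment (name and Python version are assumptions)
conda create -n deepfake-detect python=3.9 -y
conda activate deepfake-detect

# cmake must be available before building sentencepiece and dlib
conda install -c conda-forge cmake -y

# Install the pinned requirements (filename assumed to be requirements.txt)
pip install requirements.txt

# Install faiss (choose ONE of the two, matching your hardware)
pip install faiss-cpu        # CPU-only
# pip install faiss-gpu      # CUDA-enabled systems

# sentencepiece and dlib build against cmake installed above
pip install sentencepiece dlib
```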
This implementation is based on FACTOR, AV-HuBERT, and FaceX-Zoo; please refer to the original AV-HuBERT, FaceX-Zoo, and FACTOR repositories for details.
For instructions on the individual implementations, please refer to the README.md files in their respective folders.