Official Implementation of the Paper "Controllable Localized Face Anonymization via Diffusion Inpainting"


Controllable Localized Face Anonymization via Diffusion Inpainting [ArXiv]

Ali Salar, Qing Liu, Guoying Zhao

Abstract

The growing use of portrait images in computer vision highlights the need to protect personal identities. At the same time, anonymized images must remain useful for downstream computer vision tasks. In this work, we propose a unified framework that leverages the inpainting ability of latent diffusion models to generate realistic anonymized images. Unlike prior approaches, we have complete control over the anonymization process by designing an adaptive attribute-guidance module that applies gradient correction during the reverse denoising process, aligning the facial attributes of the generated image with those of the synthesized target image. Our framework also supports localized anonymization, allowing users to specify which facial regions are left unchanged. Extensive experiments conducted on the public CelebA-HQ and FFHQ datasets show that our method outperforms state-of-the-art approaches while requiring no additional model training.
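As a rough illustration of the gradient-correction idea described above, here is a minimal NumPy sketch. It is not the paper's implementation: the denoiser, the attribute loss, and the guidance scale are all toy stand-ins, whereas the actual framework differentiates attribute encoders through the latent diffusion reverse process.

```python
import numpy as np

def attribute_loss_grad(x, target_attr):
    """Toy attribute loss: squared distance between the image's mean
    intensity and a scalar target attribute. The gradient is analytic
    here; the paper instead backpropagates through attribute networks."""
    return 2.0 * (x.mean() - target_attr) / x.size * np.ones_like(x)

def guided_reverse_step(x_t, denoise_fn, target_attr, guidance_scale):
    """One reverse denoising step with gradient correction: the
    intermediate estimate is nudged down the attribute-loss gradient,
    pulling the generated attributes toward those of the target image."""
    x_prev = denoise_fn(x_t)                        # plain reverse step
    grad = attribute_loss_grad(x_prev, target_attr)
    return x_prev - guidance_scale * grad           # gradient correction

# Demo with a trivial "denoiser" that just shrinks the signal slightly.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
for _ in range(50):
    x = guided_reverse_step(x, lambda z: 0.98 * z,
                            target_attr=1.0, guidance_scale=5.0)
# After enough guided steps, the mean intensity is pulled toward the target.
```

The key design point is that guidance is applied at every reverse step, so the correction is gradual and the sample stays on the model's learned manifold rather than being edited once at the end.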

Setup

  • Get the code:

```shell
git clone https://github.com/parham1998/Face-Anonymization.git
```

  • Build the environment:

```shell
cd Face-Anonymization
# use anaconda to build the environment
conda create -n Face-Anonymization python=3.11.7
conda activate Face-Anonymization
# install packages
pip install -r requirements.txt
```
  • Download assets and place them in the assets folder

    • Download datasets from Datasets
    • Download pre-trained face parsing model from Face_Parsing
    • Download pre-trained facenet model from AMT-GAN
    • Download pre-trained FaRL from FaRL
    • Download pre-trained LDM from LDM
  • The final assets folder should look like this:

```
assets
├── datasets
│   ├── CelebA-HQ
│   └── FFHQ
├── face_parsing
│   └── 38_G.pth
├── face_recognition_models
│   ├── facenet.pth
│   └── facenet.py
├── farl
│   ├── FaRL-Base-Patch16-LAIONFace20M-ep16.pth
│   └── FaRL-Base-Patch16-LAIONFace20M-ep64.pth
├── ldm
│   └── 512-inpainting-ema.ckpt
└── target_images
```
  • The provided datasets are already aligned. For new data, however, the images should be aligned before starting the anonymization process:

```shell
python align.py
```

  • For anonymization, set the following options:
    • source_dir: path to the folder of source images
    • target_path: path to the desired synthesized target image
    • MTCNN_cropping: True
    • excluded_masks: choose numbers from {2: nose, 3: eye_glasses, 4: l_eye, 5: r_eye, 6: l_brow, 7: r_brow, 10: mouth, 11: u_lip, 12: l_lip}
  • Then run:

```shell
python main.py
```
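The excluded_masks option drives the localized anonymization: regions whose parsing labels are excluded stay untouched while the rest of the face is repainted. The following is a hedged sketch of how such an inpainting mask could be derived from a face-parsing label map; the label ids are the ones listed above, but the overall face-label range (1–13) and the function itself are illustrative assumptions, not the repository's code.

```python
import numpy as np

# Label ids from the README's excluded_masks option (CelebAMask-HQ style).
REGIONS = {2: "nose", 3: "eye_glasses", 4: "l_eye", 5: "r_eye",
           6: "l_brow", 7: "r_brow", 10: "mouth", 11: "u_lip", 12: "l_lip"}

def inpainting_mask(parsing, excluded, face_labels=range(1, 14)):
    """Build a binary inpainting mask from a face-parsing label map.

    parsing  : (H, W) integer array of per-pixel region labels
    excluded : iterable of label ids the user wants left unchanged
    Returns a mask that is 1 where the diffusion model may repaint
    (face pixels) and 0 elsewhere, with excluded regions forced to 0.
    NOTE: the face_labels range is an assumption for illustration.
    """
    mask = np.isin(parsing, list(face_labels)).astype(np.uint8)
    mask[np.isin(parsing, list(excluded))] = 0   # preserve these regions
    return mask

# Tiny demo: 0 = background, 1 = skin, 2 = nose, 10 = mouth.
parsing = np.array([[0, 1, 1],
                    [1, 2, 1],
                    [1, 10, 1]])
mask = inpainting_mask(parsing, excluded={10})   # keep the mouth unchanged
```

In the demo, the background pixel stays 0, the skin and nose pixels become 1 (repaintable), and the mouth pixel is forced to 0 because its label was excluded.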

Citation

Acknowledgments

Our code structure is based on stablediffusion.
