Before You Report a Bug, Please Confirm You Have Done The Following...
DeepFace's version
v0.0.99
Python version
3.13.1
Operating System
Windows WSL
Dependencies
absl-py==2.4.0
astroid==4.0.4
astunparse==1.6.3
beautifulsoup4==4.14.3
blinker==1.9.0
certifi==2026.2.25
charset-normalizer==3.4.6
click==8.3.1
colorama==0.4.6
-e git+https://github.com/JayNightmare/Deepface-Mirror.git@960985e0f1b02877306508d28e867373b1a67c91#egg=deepface
dill==0.4.1
filelock==3.25.2
fire==0.7.1
Flask==2.0.2
flask-cors==6.0.2
flatbuffers==25.12.19
fsspec==2026.2.0
gast==0.7.0
gdown==5.2.1
google-pasta==0.2.0
grpcio==1.78.0
gunicorn==25.3.0
h5py==3.14.0
idna==3.11
iniconfig==2.3.0
isort==8.0.1
itsdangerous==2.2.0
Jinja2==3.1.6
joblib==1.5.3
keras==3.13.2
libclang==18.1.1
librt==0.8.1
lightdsa==0.0.3
lightecc==0.0.5
lightphe==0.0.21
lz4==4.4.5
markdown-it-py==4.0.0
MarkupSafe==3.0.3
mccabe==0.7.0
mdurl==0.1.2
ml_dtypes==0.5.4
mpmath==1.3.0
mtcnn==1.0.0
mypy==1.19.1
mypy_extensions==1.1.0
namex==0.1.0
networkx==3.6.1
numpy==2.4.3
opencv-python==4.13.0.92
opt_einsum==3.4.0
optree==0.19.0
packaging==26.0
pandas==3.0.1
pathspec==1.0.4
pillow==12.1.1
platformdirs==4.9.4
pluggy==1.6.0
protobuf==7.34.1
pyenchant==3.3.0
Pygments==2.19.2
pylint==4.0.5
PySocks==1.7.1
pytest==9.0.2
python-dateutil==2.9.0.post0
python-dotenv==1.2.2
requests==2.33.0
retina-face==0.0.17
rich==14.3.3
setuptools==81.0.0
six==1.17.0
soupsieve==2.8.3
sympy==1.14.0
tensorflow==2.21.0
termcolor==3.3.0
tf_keras==2.21.0
tomlkit==0.14.0
torch==2.11.0
tqdm==4.67.3
types-click==7.1.8
types-Flask==1.1.6
types-Jinja2==2.11.9
types-MarkupSafe==1.1.10
types-Werkzeug==1.0.9
typing_extensions==4.15.0
tzdata==2025.3
urllib3==2.6.3
Werkzeug==2.0.2
wheel==0.46.3
wrapt==2.1.2
Reproducible example
cd tests/unit
python -m pytest . -s --disable-warnings
Relevant Log Output
== 78 failed, 38 passed, 7337 warnings in 117.14s (0:01:57) ==
Expected Result
Pass 100% of all tests
What happened instead?
- Corrupted / partially-downloaded DeepFace model weight files in CI
- Werkzeug/Flask incompatibility in tests (Werkzeug no longer exposes __version__)
Additional Info
The job is failing for two independent reasons:
- Corrupted / partially-downloaded DeepFace model weight files in CI
Your tests attempt to download and load multiple .h5 weight files into ~/.deepface/weights/…, but in this run they end up corrupted or incomplete, so Keras/h5py raise errors like:
ValueError: ... loading the pre-trained weights from /home/runner/.deepface/weights/vgg_face_weights.h5 ... interruption during the download
OSError: Unable to synchronously open file (file signature not found)
This cascades into many failures across analyze, represent, verify, etc. because the models can’t load.
- Werkzeug/Flask incompatibility in tests (Werkzeug no longer exposes __version__)
Several API tests fail with:
AttributeError: module 'werkzeug' has no attribute '__version__'
Even though your workflow pins Werkzeug==2.0.2 flask==2.0.2, the error indicates something in the environment still ends up with an incompatible Werkzeug version or code that relies on werkzeug.__version__ (which can be missing in newer Werkzeug versions).
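To confirm what actually got resolved, a quick diagnostic step can print the installed distribution versions right before pytest runs. This is a sketch (the helper name installed_version is mine, not DeepFace's); it uses importlib.metadata so it works even when the module itself no longer exposes __version__:

```python
# Sketch: print the Werkzeug/Flask versions pip actually resolved,
# reading distribution metadata rather than module attributes.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Return the installed distribution version of pkg, or 'not installed'."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

for pkg in ("werkzeug", "flask"):
    print(pkg, "->", installed_version(pkg))
```

If this prints a Werkzeug 3.x version despite your pins, you have direct evidence that a later install step overrode them.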
Fix 1 (recommended): Make CI deterministic by caching + validating weights, and re-downloading if corrupted
A) Add a cache for ~/.deepface/weights
In .github/workflows/tests.yml (ref 960985e0f1b02877306508d28e867373b1a67c91), add an actions cache step before running pytest:
- name: Cache DeepFace weights
  uses: actions/cache@v4
  with:
    path: ~/.deepface/weights
    key: deepface-weights-${{ runner.os }}-py${{ matrix.python-version }}
    restore-keys: |
      deepface-weights-${{ runner.os }}-
This reduces repeated downloads and makes runs much less flaky.
B) Proactively delete corrupted .h5 files before tests
Add a small “sanity cleanup” step before pytest to remove broken weight files (common when downloads get interrupted). Example:
- name: Remove corrupted DeepFace weights (if any)
  run: |
    python - <<'PY'
    import os, glob

    weights_dir = os.path.expanduser("~/.deepface/weights")
    if not os.path.isdir(weights_dir):
        raise SystemExit(0)

    # Remove suspiciously small files (often indicates truncated downloads)
    for p in glob.glob(os.path.join(weights_dir, "*.h5")):
        try:
            if os.path.getsize(p) < 1024 * 100:  # <100 KB is definitely broken for these models
                print("Removing tiny weight file:", p)
                os.remove(p)
        except OSError:
            pass
    PY
If you want stricter validation, you can check for a valid HDF5 signature (first 8 bytes) and delete if not present.
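A minimal sketch of that stricter check (the constant below is the standard 8-byte HDF5 magic number; the helper names are mine):

```python
# Sketch: detect truncated/corrupted .h5 downloads by checking the
# standard 8-byte HDF5 file signature instead of guessing by size.
import os

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def is_valid_hdf5(path: str) -> bool:
    """True if the file starts with the HDF5 magic bytes."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == HDF5_SIGNATURE
    except OSError:
        return False

def remove_invalid_weights(weights_dir: str) -> None:
    """Delete any .h5 file in weights_dir that fails the signature check."""
    for name in os.listdir(weights_dir):
        path = os.path.join(weights_dir, name)
        if name.endswith(".h5") and not is_valid_hdf5(path):
            print("Removing corrupted weight file:", path)
            os.remove(path)
```

Note this only checks the signature at offset 0, which is where h5py-written files place it; it is a cheap sanity check, not a full integrity verification.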
C) Consider pinning TensorFlow/Keras stack for CI stability
Your requirements.txt pins very new packages (tensorflow>=2.21.0, keras>=3.13.2, numpy>=2.4.3, Python 3.13.1 in CI). That’s a high-risk combo for model-loading edge cases.
For CI, strongly consider pinning to a known-good combination (example; adjust to what DeepFace upstream supports best):
- Python 3.10/3.11
- tensorflow==2.15.* (or a version you know works with these weights)
- compatible keras/tf-keras
This is the single biggest lever to avoid “weights load” surprises.
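As a sketch, a CI-only pin file along those lines might look like this (the file name and exact versions are illustrative assumptions; verify them against what DeepFace upstream actually tests):

```
# requirements-ci.txt (hypothetical; pick versions you have verified yourself)
tensorflow==2.15.*
tf-keras==2.15.*
numpy<2
```

TensorFlow releases before 2.16 are not compatible with NumPy 2, hence the numpy<2 cap alongside the 2.15 pins.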
Concretely, in .github/workflows/tests.yml, change the matrix:
strategy:
  matrix:
    python-version: ["3.11"]
(You can still keep a separate “latest” job, but don’t block merges on it.)
Fix 2: Resolve Werkzeug __version__ failures by pinning correctly and/or removing the dependency on werkzeug.__version__
A) Pin Werkzeug/Flask in a way that actually wins dependency resolution
Right now the workflow does:
pip install pytest
pip install Werkzeug==2.0.2 flask==2.0.2
pip install .
But pip install . then installs Flask>=3.1.3 from your requirements.txt (ref 960985e0f1b02877306508d28e867373b1a67c91/requirements.txt), which will override your earlier pin and drag in newer Werkzeug. That explains why you still see Werkzeug-related breakage.
Solution: move the Flask/Werkzeug constraints into install requirements (or use constraints files), so installing your package doesn’t undo the pins.
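One way to sketch the constraints-file variant (using pip's standard -c mechanism; the file name constraints.txt is an assumption): pass the same constraints to every install step, so no later step can resolve past the pins.

```yaml
- name: Install dependencies (pinned via constraints)
  run: |
    printf 'Flask==2.0.2\nWerkzeug==2.0.2\n' > constraints.txt
    pip install -c constraints.txt pytest
    pip install -c constraints.txt .
```

Unlike a plain pre-install, constraints apply during the resolution triggered by pip install . itself, which is exactly where the current workflow loses its pins.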
Minimal change: in requirements.txt, replace the Flask>=3.1.3 pin with something compatible with your tests, e.g.:
Flask==2.0.2
Werkzeug==2.0.2
(or at least cap them: Flask<3, Werkzeug<3).
Then remove the special-case install lines from the workflow (optional, but cleaner).
B) If your code/tests read werkzeug.__version__, make it robust
If you have code that does werkzeug.__version__, change it to:
from importlib.metadata import version, PackageNotFoundError

try:
    werkzeug_version = version("werkzeug")
except PackageNotFoundError:
    werkzeug_version = "unknown"
This works regardless of whether the module exposes __version__.
Why this should make the job pass
- Caching + deleting corrupted weights prevents the widespread *.h5 load failures that currently account for most of the 78 failures.
- Fixing the dependency pinning stops pip install . from upgrading Flask/Werkzeug behind your back, eliminating the werkzeug.__version__ API-test crashes and the resulting 400 != 200 assertions.
If you apply only one change first, do the dependency pinning fix (because the current workflow pins Flask/Werkzeug but your package install immediately overrides it via requirements.txt).