Add error handling fix #26
Conversation
| Cohort / File(s) | Summary |
|---|---|
| Startup, config & model preloading<br>`model-lab/app/main.py` | Load `.env` and configure logging; fix `onxx_models` → `onnx_models` typo; validate model file exists and CORS origins at startup; add CORSMiddleware; initialize public `emotion_models` with two preloaded `Model` instances. |
| Predict endpoint & validation<br>`model-lab/app/main.py` | Validate uploaded file type and image integrity; validate `model_option` against preloaded `emotion_models`; use cached model for inference; wrap prediction in try/except, returning 500 for unexpected errors while preserving HTTPExceptions; return `{"prediction": ...}`. |
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Startup as Application Startup
    participant Env as Env & Logging
    participant Validator as Startup Validator
    participant Models as emotion_models (cache)
    participant Client as Client
    participant Endpoint as POST /predict
    participant Inference as Model Inference

    Startup->>Env: load .env, configure logging
    Startup->>Validator: check model file path (onnx_models)
    Validator-->>Startup: ok / raise on missing
    Startup->>Validator: validate CORS origins
    Validator-->>Startup: ok / raise on none
    Startup->>Models: preload Model(option=1) & Model(option=2)
    Models-->>Startup: ready

    Client->>Endpoint: POST /predict (file, model_option)
    Endpoint->>Endpoint: validate file type & image integrity
    Endpoint->>Endpoint: validate model_option in emotion_models
    alt valid
        Endpoint->>Models: get model instance
        Models-->>Endpoint: model
        Endpoint->>Inference: run inference (try)
        Inference-->>Endpoint: result
        Endpoint-->>Client: 200 { "prediction": result }
    else invalid or error
        Endpoint-->>Client: 400 or 500 (HTTPException preserved)
    end
```
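To make the walkthrough above concrete, here is a minimal sketch of the described startup checks and `/predict` flow. It is an illustration, not the PR's actual code: the import path, env-var names, model filename, and the `Model(model_option=..., model_path=...)` signature are assumptions inferred from this review.

```python
import io
import logging
import os
from pathlib import Path

from dotenv import load_dotenv
from fastapi import FastAPI, File, Form, HTTPException, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from PIL import Image

from app.model import Model  # assumed import path

# --- Startup: env, logging, model path, CORS ---
load_dotenv()
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Corrected directory name: onnx_models (was onxx_models); the filename is illustrative
MODEL_PATH = Path("onnx_models") / "emotion.onnx"
if not MODEL_PATH.exists():
    raise FileNotFoundError(f"Model file not found: {MODEL_PATH}")

ALLOWED_ORIGINS = [
    o.strip()
    for o in os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000").split(",")
    if o.strip()
]
if not ALLOWED_ORIGINS:
    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=ALLOWED_ORIGINS,
    allow_methods=["POST"],
    allow_headers=["*"],
)

# Preload both model options once, instead of constructing a Model per request
emotion_models = {
    1: Model(model_option=1, model_path=str(MODEL_PATH)),  # ViT (Hugging Face)
    2: Model(model_option=2, model_path=str(MODEL_PATH)),  # ONNX
}


# --- /predict: validate, look up cached model, run inference ---
@app.post("/predict")
async def predict(file: UploadFile = File(...), model_option: int = Form(1)):
    # File-type check
    if file.content_type not in {"image/jpeg", "image/png"}:
        raise HTTPException(status_code=400, detail="Unsupported file type")

    # Image-integrity check (verify() invalidates the object, so reopen afterwards)
    data = await file.read()
    try:
        Image.open(io.BytesIO(data)).verify()
        image = Image.open(io.BytesIO(data))
    except Exception:
        raise HTTPException(status_code=400, detail="Invalid image file") from None

    # Model-option check against the preloaded cache
    model = emotion_models.get(model_option)
    if not model:
        raise HTTPException(status_code=400, detail="Invalid model option")

    try:
        prediction = model.predict(image)
        return {"prediction": prediction}
    except HTTPException:
        raise  # preserve explicit HTTP errors
    except Exception:
        logger.exception("Unexpected error during prediction")
        raise HTTPException(status_code=500, detail="Internal server error") from None
```

The 400/500 split here mirrors the `alt valid / else` branch in the diagram: validation failures surface as 400s, unexpected inference errors as a generic 500.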
Estimated code review effort
🎯 2 (Simple) | ⏱️ ~10 minutes
- Pay attention to:
  - `model-lab/app/main.py` (startup checks, CORSMiddleware, `emotion_models` initialization, and `/predict` validation/error handling).
  - Verify the model path correction, startup behavior on missing files, and that preloaded models are thread-safe/immutable for the app runtime (a concurrency smoke-test sketch follows below).
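One way to sanity-check the thread-safety point is a small concurrency smoke test. This is a hypothetical pytest sketch; the import path, form field names, and synthetic image are assumptions, not taken from the PR.

```python
# Hypothetical concurrency smoke test for the preloaded models.
import io
from concurrent.futures import ThreadPoolExecutor

from fastapi.testclient import TestClient
from PIL import Image

from app.main import app  # assumed import path


def _post_prediction(_: int) -> int:
    buf = io.BytesIO()
    Image.new("RGB", (48, 48)).save(buf, format="PNG")  # tiny synthetic image
    buf.seek(0)
    # One client per call keeps the test free of client-side threading concerns
    client = TestClient(app)
    resp = client.post(
        "/predict",
        files={"file": ("test.png", buf, "image/png")},
        data={"model_option": "2"},
    )
    return resp.status_code


def test_predict_survives_concurrent_requests():
    with ThreadPoolExecutor(max_workers=8) as pool:
        codes = list(pool.map(_post_prediction, range(32)))
    # If the shared Model instances are safe to reuse, every request succeeds
    assert all(code == 200 for code in codes)
```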
Poem
🐰 Models stacked and ready to play,
I checked the path and hopped all day,
CORS is set, the server hums,
Two models wait for incoming crumbs,
Hop, predict, and then hooray! 🥕
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | You can run `@coderabbitai generate docstrings` to improve docstring coverage. |
| Title check | ❓ Inconclusive | The title is vague and generic, using non-descriptive terms that don't convey the actual scope of changes (model path fix, CORS validation, model preloading, comprehensive error handling). | Provide a more specific title that reflects the main changes, such as "Fix model initialization and add comprehensive error handling" or "Preload ONNX models and add CORS validation". |
✅ Passed checks (1 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
✨ Finishing touches
- 📝 Generate docstrings
- 🧪 Generate unit tests (beta)
  - Create PR with unit tests
  - Post copyable unit tests in a comment
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Comment @coderabbitai help to get the list of available commands and usage tips.
Actionable comments posted: 1
🧹 Nitpick comments (1)
model-lab/app/main.py (1)
14-18: Startup checks and model preloading are good; normalize CORS origins to avoid subtle mismatches

The fixed ONNX path, explicit `FileNotFoundError` for the model file (Lines 14-18), and early CORS validation plus preloading both `Model` options (Lines 20-23, 32-36) are all solid improvements that fail fast on misconfiguration and avoid per-request model construction.

One small robustness tweak: if `CORS_ALLOWED_ORIGINS` contains comma-separated values with spaces (e.g. `"http://a.com, http://b.com"`), the leading/trailing whitespace will remain and may not match browser origins exactly. You can strip and filter empties when building `ALLOWED_ORIGINS`:

```diff
-ALLOWED_ORIGINS = os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000").split(",")
-if not ALLOWED_ORIGINS or ALLOWED_ORIGINS == [""]:
-    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")
+raw_origins = os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000")
+ALLOWED_ORIGINS = [origin.strip() for origin in raw_origins.split(",") if origin.strip()]
+if not ALLOWED_ORIGINS:
+    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")
```

Also applies to: 20-23, 32-36
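For context on why the strip matters: CORSMiddleware compares the request's `Origin` header against the configured list essentially as exact strings (unless a regex or wildcard is used), so a single leading space is enough to reject a legitimate origin. A tiny standalone illustration:

```python
# Origin matching is an exact string comparison.
allowed = "http://a.com, http://b.com".split(",")
print("http://b.com" in allowed)                        # False -- the list holds " http://b.com"
print("http://b.com" in [o.strip() for o in allowed])   # True
```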
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`model-lab/app/main.py` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
`Model` (11-84), `predict` (58-84)
🪛 Ruff (0.14.5)
model-lab/app/main.py
18-18: Avoid specifying long messages outside the exception class
(TRY003)
23-23: Avoid specifying long messages outside the exception class
(TRY003)
39-39: Do not perform function call `File` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
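If the TRY003 hints are worth acting on (they are advisory), the usual pattern is to move the long message into a small exception class so the raise site stays terse. A sketch, not code from this PR:

```python
class ModelFileNotFoundError(FileNotFoundError):
    """Raised at startup when the configured ONNX model file is missing."""

    def __init__(self, path: str) -> None:
        super().__init__(f"Model file not found: {path}")


# The startup check then becomes:
# if not MODEL_PATH.exists():
#     raise ModelFileNotFoundError(str(MODEL_PATH))
```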
… validation, and global model loading
Actionable comments posted: 0
🧹 Nitpick comments (2)
model-lab/app/main.py (2)
40-44: LGTM! Preloading both models at startup eliminates per-request instantiation overhead and avoids potential race conditions.

Optional: Clarify model_path parameter usage.

Based on the Model class implementation, `model_path` is only used by `model_option=2` (ONNX), while `model_option=1` (ViT) loads from HuggingFace and ignores this parameter. Consider documenting this or making the parameter optional in the Model constructor to clarify intent.
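A possible shape for that constructor change, sketched only from what is visible in this review (the real `Model.__init__` may differ):

```python
class Model:
    def __init__(self, model_option: int, model_path: str | None = None) -> None:
        self.model_option = model_option
        if model_option == 2:
            # ONNX backend requires a local model file
            if model_path is None:
                raise ValueError("model_path is required when model_option=2 (ONNX)")
            self.model_path = model_path
        else:
            # ViT backend pulls weights from Hugging Face; model_path is ignored
            self.model_path = None
```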
60-77: Refactor error handling for cleaner flow.

The error handling logic can be streamlined:

1. Move model validation outside the try block (lines 62-64): since this raises an HTTPException intentionally, it shouldn't be wrapped in the try/except that's meant to catch prediction errors.
2. Remove unused exception binding (line 73): the variable `e` is never used.
3. Explicitly suppress exception context (line 77): use `from None` to indicate intentional context suppression.

Apply this diff:

```diff
+    # Validate model option
+    model = emotion_models.get(model_option)
+    if not model:
+        raise HTTPException(status_code=400, detail="Invalid model option")
+
     try:
-        # Validate model option
-        model = emotion_models.get(model_option)
-        if not model:
-            raise HTTPException(status_code=400, detail="Invalid model option")
-
         # Run prediction
         prediction = model.predict(image)
         return {"prediction": prediction}
     except HTTPException:
         # Preserve explicit HTTP errors
         raise
-    except Exception as e:
+    except Exception:
         # Log internal error for debugging
         logger.exception("Unexpected error during prediction")
         # Return generic message to client
-        raise HTTPException(status_code=500, detail="Internal server error")
+        raise HTTPException(status_code=500, detail="Internal server error") from None
```
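For context on the `from None` part: without it, the 500 raised inside the `except` block is implicitly chained to the original error ("During handling of the above exception, another exception occurred"), which is redundant once `logger.exception` has already captured it. A standalone illustration:

```python
def chained():
    try:
        1 / 0
    except Exception:
        raise RuntimeError("boom")  # traceback also shows the ZeroDivisionError as context


def suppressed():
    try:
        1 / 0
    except Exception:
        raise RuntimeError("boom") from None  # traceback shows only the RuntimeError
```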
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`model-lab/app/main.py` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
`Model` (11-84), `predict` (58-84)
🪛 Ruff (0.14.5)
model-lab/app/main.py
26-26: Avoid specifying long messages outside the exception class
(TRY003)
31-31: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Do not perform function call `File` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
64-64: Abstract raise to an inner function
(TRY301)
68-68: Consider moving this statement to an else block
(TRY300)
73-73: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
77-77: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
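The B008 finding is the usual false positive for FastAPI's parameter defaults: `File(...)` and `Form(...)` are declarative markers, not calls that should move into the function body. If silencing it is preferred over restructuring, a per-line `# noqa` works (Ruff also offers the flake8-bugbear `extend-immutable-calls` setting, configured wherever the project's Ruff config lives). The signature below is illustrative, not the PR's exact code:

```python
from fastapi import FastAPI, File, Form, UploadFile

app = FastAPI()


@app.post("/predict")
async def predict(
    file: UploadFile = File(...),   # noqa: B008 -- FastAPI dependency marker, not a call-in-default bug
    model_option: int = Form(1),    # noqa: B008
):
    ...
```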
🔇 Additional comments (7)
model-lab/app/main.py (7)
1-8: LGTM! The added imports are appropriate and used throughout the file.
12-17: LGTM! Environment variable loading and logging configuration are properly set up.
22-26: LGTM! The typo fix and existence check are good defensive programming practices that prevent runtime errors.
28-31: LGTM! The CORS origin validation ensures the application fails fast at startup if misconfigured.
33-38: LGTM! The CORS configuration follows the principle of least privilege with appropriate restrictions.
46-58: LGTM! The file type and image integrity validations are solid defensive checks that prevent invalid data from reaching the model.
60-77: Past review feedback has been successfully addressed! The error handling now correctly:
- Re-raises HTTPException to preserve explicit error semantics (lines 70-72)
- Returns generic error messages to clients instead of leaking internals (line 77)
- Logs full exception details server-side for debugging (line 75)
This is a solid improvement.
Summary by CodeRabbit
- Bug Fixes
- New Features
- Performance
- Reliability