
Conversation


@shivansh31414 shivansh31414 commented Nov 18, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Fixed model file path issue that prevented startup.
  • New Features
    • Load environment settings and configure logging at startup.
  • Performance
    • Models are preloaded at startup, improving prediction response times.
  • Reliability
    • Startup now verifies model file and CORS configuration.
    • Prediction endpoint validates uploads and model selection, returns standardized prediction responses, and reports errors with appropriate HTTP status codes.


coderabbitai bot commented Nov 18, 2025

Note

.coderabbit.yaml has unrecognized properties

CodeRabbit is using all valid settings from your configuration. Unrecognized properties (listed below) have been ignored and may indicate typos or deprecated fields that can be removed.

⚠️ Parsing warnings (1)
Validation error: Unrecognized key(s) in object: 'version'
⚙️ Configuration instructions
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Walkthrough

The PR fixes a model path typo (onxx_models → onnx_models) and, at startup, loads environment variables, configures logging and CORS, checks that the model file exists, and preloads two model instances into a public emotion_models dictionary. The /predict endpoint now validates uploads and the selected model option, uses the preloaded models for inference, and wraps prediction in error handling.
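
For orientation, here is a minimal sketch of what that startup sequence could look like in model-lab/app/main.py. The CORS_ALLOWED_ORIGINS variable, its default, and the RuntimeError message are taken from the review diff further down; the model file name, the import path, and the Model constructor keywords are assumptions for illustration, not the PR's actual code.

import logging
import os

from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.model import Model  # import path assumed from model-lab/app/model.py

load_dotenv()
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Corrected directory name (onxx_models -> onnx_models); the file name is a placeholder
MODEL_PATH = os.path.join("onnx_models", "emotion_model.onnx")
if not os.path.exists(MODEL_PATH):
    raise FileNotFoundError(f"Model file not found: {MODEL_PATH}")

ALLOWED_ORIGINS = os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000").split(",")
if not ALLOWED_ORIGINS or ALLOWED_ORIGINS == [""]:
    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")

app = FastAPI()
app.add_middleware(CORSMiddleware, allow_origins=ALLOWED_ORIGINS)  # further restrictions discussed in the review comments below

# Preload both model variants once so /predict never constructs a Model per request
emotion_models = {
    1: Model(model_option=1, model_path=MODEL_PATH),
    2: Model(model_option=2, model_path=MODEL_PATH),
}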

Changes

Cohort / File(s): Startup, config & model preloading (model-lab/app/main.py)
Summary: Load .env and configure logging; fix the onxx_models → onnx_models typo; validate at startup that the model file exists and that CORS origins are configured; add CORSMiddleware; initialize the public emotion_models dict with two preloaded Model instances.

Cohort / File(s): Predict endpoint & validation (model-lab/app/main.py)
Summary: Validate the uploaded file type and image integrity; validate model_option against the preloaded emotion_models; use the cached model for inference; wrap prediction in try/except, returning 500 for unexpected errors while preserving HTTPExceptions; return {"prediction": ...}.
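
Continuing the sketch above, the /predict flow could look like the following. FastAPI's UploadFile/File, the PIL-based integrity check, the model_option parameter type and default, and the validation messages for bad uploads are assumptions; only the 400/500 pattern, the emotion_models lookup, and the {"prediction": ...} response shape come from the summaries and diffs in this review.

from io import BytesIO

from fastapi import File, HTTPException, UploadFile
from PIL import Image

@app.post("/predict")
async def predict(file: UploadFile = File(...), model_option: int = 1):
    # Reject non-image uploads before reading the body
    if not file.content_type or not file.content_type.startswith("image/"):
        raise HTTPException(status_code=400, detail="Uploaded file must be an image")

    # Confirm the bytes actually decode as an image
    data = await file.read()
    try:
        Image.open(BytesIO(data)).verify()
        image = Image.open(BytesIO(data))  # reopen, since verify() exhausts the parser
    except Exception:
        raise HTTPException(status_code=400, detail="Invalid image file")

    # Use the preloaded instance; unknown options are a client error
    model = emotion_models.get(model_option)
    if not model:
        raise HTTPException(status_code=400, detail="Invalid model option")

    try:
        prediction = model.predict(image)
        return {"prediction": prediction}
    except HTTPException:
        raise  # preserve explicit HTTP errors
    except Exception:
        logger.exception("Unexpected error during prediction")
        raise HTTPException(status_code=500, detail="Internal server error")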

Sequence Diagram(s)

sequenceDiagram
    participant Startup as Application Startup
    participant Env as Env & Logging
    participant Validator as Startup Validator
    participant Models as emotion_models (cache)
    participant Client as Client
    participant Endpoint as POST /predict
    participant Inference as Model Inference

    Startup->>Env: load .env, configure logging
    Startup->>Validator: check model file path (onnx_models)
    Validator-->>Startup: ok / raise on missing
    Startup->>Validator: validate CORS origins
    Validator-->>Startup: ok / raise on none
    Startup->>Models: preload Model(option=1) & Model(option=2)
    Models-->>Startup: ready

    Client->>Endpoint: POST /predict (file, model_option)
    Endpoint->>Endpoint: validate file type & image integrity
    Endpoint->>Endpoint: validate model_option in emotion_models
    alt valid
        Endpoint->>Models: get model instance
        Models-->>Endpoint: model
        Endpoint->>Inference: run inference (try)
        Inference-->>Endpoint: result
        Endpoint-->>Client: 200 { "prediction": result }
    else invalid or error
        Endpoint-->>Client: 400 or 500 (HTTPException preserved)
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Pay attention to: model-lab/app/main.py (startup checks, CORSMiddleware, emotion_models initialization, and /predict validation/error handling).
  • Verify model path correction, startup behavior on missing files, and that preloaded models are thread-safe/immutable for the app runtime.

Poem

🐰 Models stacked and ready to play,
I checked the path and hopped all day,
CORS is set, the server hums,
Two models wait for incoming crumbs,
Hop, predict, and then hooray! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Title Check (❓ Inconclusive): The title is vague and generic, using non-descriptive terms that don't convey the actual scope of changes (model path fix, CORS validation, model preloading, comprehensive error handling). Resolution: provide a more specific title, such as "Fix model initialization and add comprehensive error handling" or "Preload ONNX models and add CORS validation".
✅ Passed checks (1 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
model-lab/app/main.py (1)

14-18: Startup checks and model preloading are good; normalize CORS origins to avoid subtle mismatches

The fixed ONNX path, explicit FileNotFoundError for the model file (Lines 14-18), and early CORS validation plus preloading both Model options (Lines 20-23, 32-36) are all solid improvements that fail fast on misconfiguration and avoid per-request model construction.

One small robustness tweak: if CORS_ALLOWED_ORIGINS contains comma‑separated values with spaces (e.g. "http://a.com, http://b.com"), the leading/trailing whitespace will remain and may not match browser origins exactly. You can strip and filter empties when building ALLOWED_ORIGINS:

-ALLOWED_ORIGINS = os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000").split(",")
-if not ALLOWED_ORIGINS or ALLOWED_ORIGINS == [""]:
-    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")
+raw_origins = os.getenv("CORS_ALLOWED_ORIGINS", "http://localhost:3000")
+ALLOWED_ORIGINS = [origin.strip() for origin in raw_origins.split(",") if origin.strip()]
+if not ALLOWED_ORIGINS:
+    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")

Also applies to: 20-23, 32-36
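
A quick illustration of why the stripping matters (the origin values are illustrative):

>>> "http://a.com, http://b.com".split(",")
['http://a.com', ' http://b.com']   # the second entry keeps a leading space and will not match the browser's Origin header
>>> [origin.strip() for origin in "http://a.com, http://b.com".split(",") if origin.strip()]
['http://a.com', 'http://b.com']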

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc82449 and f04dd67.

📒 Files selected for processing (1)
  • model-lab/app/main.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
  • Model (11-84)
  • predict (58-84)
🪛 Ruff (0.14.5)
model-lab/app/main.py

18-18: Avoid specifying long messages outside the exception class

(TRY003)


23-23: Avoid specifying long messages outside the exception class

(TRY003)


39-39: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
model-lab/app/main.py (2)

40-44: LGTM!

Preloading both models at startup eliminates per-request instantiation overhead and avoids potential race conditions.

Optional: Clarify model_path parameter usage.

Based on the Model class implementation, model_path is only used by model_option=2 (ONNX), while model_option=1 (ViT) loads from HuggingFace and ignores this parameter. Consider documenting this or making the parameter optional in the Model constructor to clarify intent.
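
One way the constructor could make that explicit; this is a hedged sketch, since the actual Model signature in model-lab/app/model.py is only summarized here:

from typing import Optional

class Model:
    def __init__(self, model_option: int, model_path: Optional[str] = None):
        # option 1: ViT weights pulled from HuggingFace, model_path is ignored
        # option 2: ONNX session built from model_path, which is therefore required
        if model_option == 2 and model_path is None:
            raise ValueError("model_path is required when model_option=2 (ONNX)")
        self.model_option = model_option
        self.model_path = model_path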


60-77: Refactor error handling for cleaner flow.

The error handling logic can be streamlined:

  1. Move model validation outside the try block (lines 62-64): Since this raises an HTTPException intentionally, it shouldn't be wrapped in the try/except that's meant to catch prediction errors.

  2. Remove unused exception binding (line 73): The variable e is never used.

  3. Explicitly suppress exception context (line 77): Use from None to indicate intentional context suppression.

Apply this diff:

+    # Validate model option
+    model = emotion_models.get(model_option)
+    if not model:
+        raise HTTPException(status_code=400, detail="Invalid model option")
+
     try:
-        # Validate model option
-        model = emotion_models.get(model_option)
-        if not model:
-            raise HTTPException(status_code=400, detail="Invalid model option")
-
         # Run prediction
         prediction = model.predict(image)
         return {"prediction": prediction}
 
     except HTTPException:
         # Preserve explicit HTTP errors
         raise
-    except Exception as e:
+    except Exception:
         # Log internal error for debugging
         logger.exception("Unexpected error during prediction")
         # Return generic message to client
-        raise HTTPException(status_code=500, detail="Internal server error")
+        raise HTTPException(status_code=500, detail="Internal server error") from None
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f04dd67 and e341371.

📒 Files selected for processing (1)
  • model-lab/app/main.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
  • Model (11-84)
  • predict (58-84)
🪛 Ruff (0.14.5)
model-lab/app/main.py

26-26: Avoid specifying long messages outside the exception class

(TRY003)


31-31: Avoid specifying long messages outside the exception class

(TRY003)


48-48: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


64-64: Abstract raise to an inner function

(TRY301)


68-68: Consider moving this statement to an else block

(TRY300)


73-73: Local variable e is assigned to but never used

Remove assignment to unused variable e

(F841)


77-77: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (7)
model-lab/app/main.py (7)

1-8: LGTM!

The added imports are appropriate and used throughout the file.


12-17: LGTM!

Environment variable loading and logging configuration are properly set up.


22-26: LGTM!

The typo fix and existence check are good defensive programming practices that prevent runtime errors.


28-31: LGTM!

The CORS origin validation ensures the application fails fast at startup if misconfigured.


33-38: LGTM!

The CORS configuration follows the principle of least privilege with appropriate restrictions.
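
For reference, a least-privilege setup along these lines might look like the snippet below; the exact methods and headers allowed in the PR aren't shown in this review, so treat these kwargs as placeholders:

app.add_middleware(
    CORSMiddleware,
    allow_origins=ALLOWED_ORIGINS,    # validated at startup, never "*"
    allow_credentials=False,          # placeholder: enable only if the frontend sends cookies or auth headers
    allow_methods=["POST"],           # placeholder: /predict is the only write endpoint described here
    allow_headers=["Content-Type"],   # placeholder
)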


46-58: LGTM!

The file type and image integrity validations are solid defensive checks that prevent invalid data from reaching the model.


60-77: Past review feedback has been successfully addressed!

The error handling now correctly:

  1. Re-raises HTTPException to preserve explicit error semantics (lines 70-72)
  2. Returns generic error messages to clients instead of leaking internals (line 77)
  3. Logs full exception details server-side for debugging (line 75)

This is a solid improvement.
