
Conversation


@shivansh31414 shivansh31414 commented Nov 17, 2025

πŸ“Œ Pull Request: Add Error Handling to FastAPI App

πŸ”§ Summary

This PR introduces robust error handling to the FastAPI application and enforces consistent LF line endings across the repository.

βœ… Changes Implemented

  • Model validation
    • Added check to ensure the ONNX model file exists at startup.
    • Returns clear HTTP 500 error if the model file is missing.
  • CORS configuration
    • Validates allowed origins from environment variables.
    • Raises error if no origins are configured.
  • File upload validation
    • Ensures uploaded files are images (content_type check).
    • Returns HTTP 400 error for non‑image files.
  • Corrupted image handling
    • Wraps Image.open() in try/except.
    • Returns HTTP 400 error for invalid or corrupted image files.
  • Model inference safety
    • Wraps inference in try/except.
    • Returns HTTP 500 error with clear message if inference fails.
  • Line ending normalization
    • Added .gitattributes to enforce LF endings.
    • Prevents CRLF/LF diff noise in future commits.
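
As a rough illustration of the checks listed above, a minimal sketch of the startup validation might look like the following. The model path, the `CORS_ALLOWED_ORIGINS` variable name, and the other identifiers are assumptions for illustration, not the PR's exact code:

```python
# Hypothetical sketch of the startup checks described above; paths and names are assumptions.
import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

MODEL_PATH = "onnx_models/emotion_model.onnx"  # assumed location of the ONNX model

# Fail fast at startup if the model file is missing.
if not os.path.exists(MODEL_PATH):
    raise FileNotFoundError(f"ONNX model file not found at {MODEL_PATH}")

# Validate allowed origins from the environment; note that "".split(",") yields [""].
raw_origins = os.getenv("CORS_ALLOWED_ORIGINS", "")
ALLOWED_ORIGINS = [origin.strip() for origin in raw_origins.split(",") if origin.strip()]
if not ALLOWED_ORIGINS:
    raise RuntimeError("No allowed origins configured")

app = FastAPI()
app.add_middleware(CORSMiddleware, allow_origins=ALLOWED_ORIGINS)
```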

πŸ“‚ Files Updated

  • app/main.py (FastAPI error handling logic)
  • .gitattributes (line ending enforcement)

πŸ§ͺ Testing

  • Verified /predict endpoint rejects non‑image files.
  • Verified corrupted images return HTTP 400.
  • Verified inference errors return HTTP 500 with descriptive message.
  • Confirmed LF normalization prevents large β€œinsertions/deletions” diffs.
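
As a sketch of how these manual checks could be captured as automated tests (the /predict path is from the PR; the upload field name and app import path are assumptions):

```python
# Hypothetical test sketch for the cases above; field name and import path are assumptions.
from io import BytesIO

from fastapi.testclient import TestClient

from app.main import app  # assumed module path for the FastAPI app

client = TestClient(app)


def test_rejects_non_image_file():
    response = client.post(
        "/predict",
        files={"file": ("notes.txt", BytesIO(b"plain text"), "text/plain")},
    )
    assert response.status_code == 400


def test_rejects_corrupted_image():
    response = client.post(
        "/predict",
        files={"file": ("broken.png", BytesIO(b"\x89PNG not really"), "image/png")},
    )
    assert response.status_code == 400
```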

πŸš€ Impact

  • Improves API reliability with clear error responses.
  • Simplifies future code reviews by eliminating line‑ending noise.
  • Provides a safer, reviewer‑friendly workflow for contributors.

Summary by CodeRabbit

  • Improvements

    • App validates required model files and CORS origins at startup to surface configuration issues early.
    • Both model options are preloaded at startup and reused for requests, improving consistency and performance.
    • Predict endpoint validates selected model option and preserves image input validation.
  • Bug Fixes

    • Added robust inference error handling that returns informative server errors on failure.


coderabbitai bot commented Nov 17, 2025

Note

.coderabbit.yaml has unrecognized properties

CodeRabbit is using all valid settings from your configuration. Unrecognized properties (listed below) have been ignored and may indicate typos or deprecated fields that can be removed.

⚠️ Parsing warnings (1)
Validation error: Unrecognized key(s) in object: 'version'
βš™οΈ Configuration instructions
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Walkthrough

Preloads ONNX model options into a global emotion_models dict at startup, validates ONNX model file and CORS origins on startup, removes per-request model instantiation in /predict, validates model_option, and wraps inference in try/except to return HTTP 500 on failure.

Changes

  • App startup & globals (model-lab/app/main.py)
    Fixed model path typo (onxx_models β†’ onnx_models); added startup checks that raise FileNotFoundError if the model file is missing and RuntimeError if CORS origins are empty; preloads both model options into a global emotion_models dict mapping option β†’ Model instance.
  • Predict endpoint (model-lab/app/main.py)
    /predict no longer creates models per-request; validates model_option and returns HTTP 400 for invalid options; uses emotion_models[option]; wraps inference in try/except and returns HTTP 500 with a descriptive error on failure; retains image validation and returns emotion + probabilities on success.
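
A hedged sketch of the pattern described above: both model options preloaded into a global dict at import time and looked up per request. The Model constructor arguments, option names, and predict return shape are assumptions for illustration, not the PR's exact code:

```python
# Illustrative sketch of preloading models and using them in /predict; names are assumptions.
from fastapi import FastAPI, File, HTTPException, UploadFile

from app.model import Model  # the Model class referenced in model-lab/app/model.py

app = FastAPI()

# Preload every supported option once at startup instead of instantiating per request.
MODEL_OPTIONS = ("option_a", "option_b")  # assumed option names
emotion_models = {opt: Model(f"onnx_models/{opt}.onnx") for opt in MODEL_OPTIONS}


@app.post("/predict")
async def predict(model_option: str = "option_a", file: UploadFile = File(...)):
    if model_option not in emotion_models:
        raise HTTPException(status_code=400, detail=f"Invalid model_option: {model_option}")

    image_bytes = await file.read()
    try:
        emotion, probabilities = emotion_models[model_option].predict(image_bytes)  # assumed return shape
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Inference failed: {e}") from e

    return {"emotion": emotion, "probabilities": probabilities}
```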

Sequence Diagram(s)

sequenceDiagram
    participant App as FastAPI App
    participant FS as File System
    participant Config as Config

    rect rgb(230,245,255)
    note over App: Application startup
    App->>FS: Check `onnx_models/...` files for each option
    alt Missing any model file
        FS-->>App: Not found
        App->>App: Raise FileNotFoundError
    else All present
        FS-->>App: Exists
        App->>App: Load models -> global emotion_models{option: Model}
    end
    App->>Config: Validate CORS origins
    alt No origins
        Config-->>App: Empty
        App->>App: Raise RuntimeError
    else Origins present
        Config-->>App: Origins list
    end
    end
sequenceDiagram
    participant Client
    participant Endpoint as /predict
    participant Models as global emotion_models
    participant Inference as Inference Engine

    Client->>Endpoint: POST /predict (image, optional model_option)
    rect rgb(255,250,230)
    note over Endpoint: Validate inputs & model_option
    Endpoint->>Models: Lookup emotion_models[option]
    alt option invalid
        Models-->>Endpoint: KeyError / not found
        Endpoint->>Client: HTTP 400 (invalid model_option)
    else option valid
        Endpoint->>Inference: Run inference with selected Model
        alt Inference raises error
            Inference-->>Endpoint: Exception
            Endpoint->>Client: HTTP 500 + descriptive error
        else Success
            Inference-->>Endpoint: emotion + probabilities
            Endpoint-->>Client: HTTP 200 + results
        end
    end
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • Areas to focus:
    • Correctness of model path and existence checks for all options.
    • Behavior and message clarity of raised FileNotFoundError and RuntimeError.
    • Thread-safety and lifecycle of emotion_models (concurrent requests).
    • Validation and error responses for invalid model_option.
    • Exception handling in inference to avoid leaking internal traces.

Poem

🐰 I hopped through files to find the right way,
Loaded models once at the break of day.
Origins checked, no more per-call race,
Predictions faster, steady in place.
A carrot cheer for code that now stays.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title check (❓ Inconclusive): The title 'Added error handling' is only partially related to the changeset; it refers to a real aspect of the changes (error handling was indeed added) but misses the main improvements, including the model preloading optimization, startup validation, and line-ending normalization, which are equally significant. Resolution: consider a more comprehensive title that captures the primary optimization and validation changes, such as 'Optimize model loading and add comprehensive error handling' or 'Add startup validation and error handling for model predictions'.
βœ… Passed checks (1 passed)
  • Description Check (βœ… Passed): Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • πŸ“ Generate docstrings

πŸ§ͺ Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

πŸ“œ Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 0e155d4 and f04dd67.

πŸ“’ Files selected for processing (1)
  • model-lab/app/main.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
  • Model (11-84)
  • predict (58-84)
πŸͺ› Ruff (0.14.5)
model-lab/app/main.py

18-18: Avoid specifying long messages outside the exception class

(TRY003)


23-23: Avoid specifying long messages outside the exception class

(TRY003)


39-39: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

πŸ”‡ Additional comments (6)
model-lab/app/main.py (6)

14-18: LGTM! Model path validation is correct.

The startup validation correctly checks for the ONNX model file and raises FileNotFoundError with a clear message if missing. This ensures the application fails fast on startup rather than during the first request.

The static analysis hint (TRY003) suggests using custom exception classes for long messages, but this is an optional style preferenceβ€”using standard exceptions with descriptive messages is perfectly acceptable for startup validation.
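
For reference, the custom-exception style TRY003 points toward would look roughly like this (class name and path are illustrative):

```python
# Illustrative only: moving the long message into the exception class satisfies TRY003.
class ModelFileMissingError(FileNotFoundError):
    def __init__(self, path: str):
        super().__init__(f"ONNX model file not found at {path}")

# Usage: raise ModelFileMissingError(MODEL_PATH) instead of FileNotFoundError("long message ...")
```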


20-23: LGTM! CORS validation handles edge cases correctly.

The validation properly checks for empty origins by handling the edge case where "".split(",") returns [""]. Raising RuntimeError for missing configuration at startup is the correct approach.
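
The edge case mentioned above is easy to verify directly (pure illustration):

```python
# Splitting an empty string yields [""], a truthy, non-empty list, so a plain
# `if not origins:` check would not fire for an unset environment variable.
origins = "".split(",")
assert origins == [""]
assert bool(origins) is True

# Stripping and filtering out empty entries restores the intended behaviour.
cleaned = [o.strip() for o in origins if o.strip()]
assert cleaned == []
```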


32-36: Excellent fix for the race condition issue!

Pre-loading both model options at startup is the correct solution. This approach:

  • Eliminates race conditions from concurrent requests mutating shared model_option
  • Avoids expensive per-request model instantiation
  • Ensures both models are ready before handling requests

The memory trade-off of loading both models upfront is justified for correctness and performance.


39-39: Static analysis false positive for FastAPI.

The Ruff hint (B008) about File() in default arguments is a false positive. In FastAPI, File(), Form(), and similar functions are dependency markers for FastAPI's dependency injection systemβ€”they're not evaluated at function definition time like normal Python defaults. This pattern is correct and idiomatic for FastAPI.

Based on static analysis hints.
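
For context, the idiomatic FastAPI signature the review refers to looks like this; File(...) is resolved by FastAPI's dependency injection at request time, not at function definition time (the endpoint name is illustrative):

```python
# Idiomatic FastAPI upload signature; the File(...) default is a dependency marker,
# so Ruff's B008 warning is a false positive here.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()


@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    return {"filename": file.filename, "content_type": file.content_type}
```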


49-51: LGTM! Proper validation before model selection.

The validation correctly checks if model_option exists in the preloaded models dictionary and returns HTTP 400 (client error) with a clear message for invalid options. This prevents KeyError and provides helpful feedback to API consumers.


53-57: LGTM! Inference logic is correct and safe.

The inference correctly:

  • Selects the appropriate preloaded model without mutating shared state
  • Wraps the prediction in try/except for safety
  • Chains exceptions with from e to preserve the original stack trace
  • Returns HTTP 500 with a descriptive error message on failure
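
A minimal, self-contained illustration of the `from e` chaining behaviour described above (exception names are placeholders, not the PR's code):

```python
# Chaining with `from e` stores the original error on __cause__, preserving the
# underlying stack trace for server-side logging while a clean message is surfaced.
class InferenceError(RuntimeError):
    pass


try:
    try:
        raise ValueError("bad tensor shape")  # stand-in for a failing model call
    except Exception as e:
        raise InferenceError("Inference failed") from e
except InferenceError as wrapped:
    assert isinstance(wrapped.__cause__, ValueError)
```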



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

πŸ“œ Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between bc82449 and 55f0ad9.

πŸ“’ Files selected for processing (1)
  • model-lab/app/main.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
  • Model (11-84)
  • predict (58-84)
πŸͺ› Ruff (0.14.4)
model-lab/app/main.py

45-45: Do not catch blind exception: Exception

(BLE001)


46-46: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

πŸ”‡ Additional comments (1)
model-lab/app/main.py (1)

37-40: Good error handling for image validation.

The try/except block correctly catches OSError and ValueError, which are the appropriate exceptions raised by PIL when opening invalid or corrupted images. Exception chaining with from err is properly implemented.
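
A hedged sketch of that validation pattern (the helper name and forced-decode step are illustrative, not the PR's exact code):

```python
# Illustrative: catch the exceptions PIL raises for unreadable or truncated images
# and translate them into an HTTP 400 with exception chaining.
from io import BytesIO

from PIL import Image
from fastapi import HTTPException


def load_image(data: bytes) -> Image.Image:
    try:
        image = Image.open(BytesIO(data))
        image.load()  # force decoding so corrupt data fails here, not later
    except (OSError, ValueError) as err:
        raise HTTPException(status_code=400, detail="Invalid or corrupted image file") from err
    return image
```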

Comment on lines 21 to 22
if not ALLOWED_ORIGINS:
    raise HTTPException(status_code=500, detail="No allowed origins configured")

⚠️ Potential issue | πŸ”΄ Critical

HTTPException cannot be raised at module level, and validation logic may not work as intended.

Two issues:

  1. Raising HTTPException at module level will crash the application on startup rather than return an HTTP 500 response. Use RuntimeError instead.
  2. The validation logic may not work correctly because "".split(",") returns [""], not an empty list. Check for empty strings after splitting.

Apply this diff:

-if not ALLOWED_ORIGINS:
-    raise HTTPException(status_code=500, detail="No allowed origins configured")
+if not ALLOWED_ORIGINS or ALLOWED_ORIGINS == [""]:
+    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
if not ALLOWED_ORIGINS:
    raise HTTPException(status_code=500, detail="No allowed origins configured")

After:
if not ALLOWED_ORIGINS or ALLOWED_ORIGINS == [""]:
    raise RuntimeError("No allowed origins configured in CORS_ALLOWED_ORIGINS")
πŸ€– Prompt for AI Agents
In model-lab/app/main.py around lines 21 to 22, replace the module-level raise
of HTTPException with a RuntimeError so the app fails at startup without trying
to use FastAPI-specific exceptions, and change the ALLOWED_ORIGINS validation to
split the string and filter out empty/whitespace-only entries (e.g., split on
',' then strip and keep only truthy values); if the filtered list is empty,
raise RuntimeError with a clear message.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

πŸ“œ Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 55f0ad9 and 0e155d4.

πŸ“’ Files selected for processing (1)
  • model-lab/app/main.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
model-lab/app/main.py (1)
model-lab/app/model.py (2)
  • Model (11-84)
  • predict (58-84)
πŸͺ› Ruff (0.14.5)
model-lab/app/main.py

18-18: Avoid specifying long messages outside the exception class

(TRY003)


23-23: Avoid specifying long messages outside the exception class

(TRY003)


36-36: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

πŸ”‡ Additional comments (2)
model-lab/app/main.py (2)

14-18: LGTM! Model path validation is correct.

The directory name has been fixed and the startup validation properly uses FileNotFoundError instead of HTTPException, which is the correct approach for module-level checks.


20-23: LGTM! CORS validation is correct.

The validation properly handles the edge case where an empty string splits to [""], and correctly uses RuntimeError for startup validation instead of HTTPException.
