Merged
11 changes: 8 additions & 3 deletions .github/workflows/ci_python.yml
@@ -1,4 +1,4 @@
name: Python Linting CI 🐍
name: Python Linting, Type Checking, and Testing CI 🐍

on:
push:
@@ -21,8 +21,13 @@ jobs:
with:
python-version: "3.10"

- name: Install ruff 🦀
run: pip install ruff
- name: Install dependencies
run: |
# Install ruff for linting and the development requirements (including pytest)
pip install -r backend/requirements-dev.txt

- name: Lint Python with ruff 🚀
run: ruff check backend/

- name: Run Pytest 🧪
run: pytest -q backend/tests/ -v

Here is a brief code review for the provided patch:

  1. Improvements:

    • The change in the workflow name from "Python Linting CI 🐍" to "Python Linting, Type Checking, and Testing CI 🐍" reflects a more accurate description of what the workflow does.
    • The addition of a step to run Pytest after linting is a good practice to ensure that unit tests are also executed as part of the continuous integration process.
  2. Suggestions:

    • It's recommended to continue evolving this by adding further steps, such as type checking with MyPy or code formatting with Black.
    • You could consider including specific and separate jobs for linting, type checking, and testing to better isolate possible issues and speed up the CI process by running them concurrently.
  3. Bug Risk:

    • There doesn't seem to be any obvious bug risk in the provided patch. However, it's always good practice to thoroughly test the changes by running the CI job in various scenarios (e.g., on branches with different Python versions) to ensure stability across environments.

Overall, the change looks good, covers the essential aspects, and strengthens the CI process by adding dependency installation and a testing step alongside linting.
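The job split suggested above could look roughly like the following sketch. This is an illustrative layout, not the repository's actual workflow; action versions and job names are assumptions:

```yaml
# Hypothetical split of lint and test into parallel jobs
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install -r backend/requirements-dev.txt
      - run: ruff check backend/
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install -r backend/requirements-dev.txt
      - run: pytest -q backend/tests/
```

Because the jobs share no state, GitHub Actions runs them concurrently, so a lint failure and a test failure surface independently.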

7 changes: 7 additions & 0 deletions .vscode/launch.json
@@ -1,6 +1,13 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "Attach to Python Functions",
"type": "python",

Code Review:

  • The patch adds a new debugger configuration for "Python Debugger: Current File" using debugpy to launch the debugger in an integrated terminal.
  • Main configuration "Attach to Python Functions" seems to be missing some details in the code snippet provided.
  • The patch looks fine for adding a simple debugger configuration for debugging the currently opened file in an integrated terminal.
  • Ensure that 'debugpy' is installed in your Python environment before using this configuration.

Improvements:

  • Add a detailed description of each configuration to make it easier for users to understand and choose the appropriate one.
  • Consider adding more flexibility by including additional options like setting breakpoints, configuring remote debugging, etc., based on the project's requirements.
  • Make sure the main configuration ("Attach to Python Functions") is correctly defined and aligned with the project's debug needs.
  • Utilize linters or static analysis tools to catch any potential issues in the configuration file.
  • Consider adding error handling and reporting mechanisms to provide better user feedback in case of configuration errors.

7 changes: 6 additions & 1 deletion .vscode/settings.json
@@ -5,5 +5,10 @@
// "azureFunctions.pythonVenv": "${workspaceFolder}/backend_azure_function/.venv_azure_func",
"azureFunctions.projectLanguage": "Python",
"azureFunctions.projectRuntime": "~4",
"debug.internalConsoleOptions": "neverOpen"
"debug.internalConsoleOptions": "neverOpen",
"python.testing.pytestArgs": [
"backend"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true
}

Code Review Summary:

  1. Ensure a newline at the end of the file.
  2. Verify if Python unit tests should be enabled.
  3. Check that the specified pytestArgs are appropriate.
  4. Confirm the proper functioning of debugging configuration.
  5. Consider consistency in naming and usage regarding backend_azure_function.

Improvement Suggestions:

  • Add a newline at the end of the file for better readability and to comply with coding standards.
  • Ensure clarity on whether unittest should be disabled intentionally, as no clear explanation is provided.
  • Review the pytestArgs "backend" parameter to confirm it appropriately targets the desired testing suite or directory.
  • Double-check the "debug.internalConsoleOptions" setting to make sure the console behavior aligns with expectations.
  • Evaluate consistency in naming conventions like using the same naming style for workspaceFolder ("backend_azure_function") throughout the codebase if applicable.

40 changes: 35 additions & 5 deletions README.md
@@ -38,11 +38,41 @@ GPTeasers is a webapp that generates quiz-style questions based on the topic you
3. Azure Container Apps: Once triggered, the FastAPI container communicates with the OpenAI API, sending requests and receiving responses.
4. OpenAI API: Processes the request and sends back a response.

## Contribute 🤲
## Docker Compose Setup for Local Testing

Love **GPTeasers**? Want to make it even better? We welcome contributions!
This project uses Docker Compose to run both the FastAPI backend and the frontend services locally.

1. **Fork** this repo 🍴.
2. Make your changes 🛠️.
3. Submit a **pull request** 👥.
### Services


Code Review and Suggestions:

Bugs/Risks:

  1. No apparent bugs or risks in the provided code snippet.

Improvements/Suggestions:

  1. Add a brief introduction to the "Running Tests" section outlining the importance of testing.
  2. Consider adding more details on how to contribute as it seems the content has been removed. Developers often appreciate clear guidelines.
  3. Format the commands uniformly; either use single backticks for inline code (`command`) or fenced code blocks, instead of mixing both within the same section.
  4. Provide additional context like what the requirements-dev.txt file is for if not otherwise clear from the project structure.
  5. Mention the importance of maintaining existing test coverage and possibly considering expanding the test suite if necessary.
  6. Encourage contributors to write new tests for any new features or bug fixes they introduce to ensure code stability.
  7. Consider pointing out specific areas where contributions are currently needed to make it easier for potential contributors to get involved.

Enhancements could improve readability and ensure that contributors have all necessary details to run tests effectively and understand the contribution process better.


  1. The code patch removes a section related to contribution guidelines. If contributing to the project is encouraged, consider revising or relocating these guidelines to a different part of the documentation or codebase.

  2. Ensure that contribution instructions, guidelines, and licensing information are well-defined and easily accessible for potential contributors.

  3. Consider adding more detailed comments in the code to improve readability and maintainability.

  4. It's important to include comprehensive testing to ensure the functionality remains intact after making changes.

  5. Verify that the removed section of contribution guidelines was not crucial for community involvement and mentorship, which are essential for open-source projects' success.

- **fastapi_generate_quiz**:
The FastAPI backend that serves the GPTeasers API. This container is responsible for handling requests from the frontend and interacting with the OpenAI API to generate quizzes.

- **frontend**:
A static frontend application. Although the site is hosted on GitHub Pages, this container allows you to test it locally.

### Running Locally

1. **Set Environment Variables**
Ensure that the `OPENAI_API_KEY` is set in your environment or in a `.env` file at the project root:
```sh
export OPENAI_API_KEY=your_openai_api_key_here
```
or create a `.env` file with:
```
OPENAI_API_KEY=your_openai_api_key_here
```

2. **Build and Run the Containers**
From the project root, run:
```sh
docker-compose up --build
```
This command builds both the backend and frontend images and starts the containers.

3. **Access the Services**
- **Backend API (FastAPI)**:
Access via [http://localhost:8000](http://localhost:8000)
- **Frontend**:
Access via [http://localhost:8080](http://localhost:8080)

By following these steps, you can easily test both your backend API and your static frontend locally using Docker Compose.
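The compose file implied by the services above would have roughly this shape. This is a sketch; the actual build contexts, image names, and internal ports in the repository's `docker-compose.yml` may differ:

```yaml
# Hypothetical docker-compose.yml matching the services described above
services:
  fastapi_generate_quiz:
    build: ./backend
    ports:
      - "8000:8000"          # FastAPI served on localhost:8000
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}   # read from the shell or .env file
  frontend:
    build: ./frontend
    ports:
      - "8080:80"            # static site served on localhost:8080
```

Compose substitutes `${OPENAI_API_KEY}` from the environment or a `.env` file at the project root, which is why step 1 above works either way.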

Code Review:

  1. Improvements:

    • Good documentation for setting up Docker Compose and running the services locally.
    • Clear separation of sections (Services, Running Locally) for easy understanding.
    • Explicit instructions regarding setting up environment variables and building/running containers.
  2. Bug Risks:

    • No obvious bugs in the provided code patch.
  3. Suggestions:

    • Consider adding a brief note on Docker prerequisites or version compatibility for readers unfamiliar with Docker.
    • Mention any specific steps required after making changes to the codebase that may affect the Docker setup.

Overall, the code patch provides a clear and concise guide for setting up and running the project locally using Docker Compose.

39 changes: 38 additions & 1 deletion backend/README.md
@@ -66,4 +66,41 @@ To debug locally, follow these steps:
docker push ghcr.io/djsaunders1997/fastapi_generate_quiz:latest
```


## Running Tests

Our test suite is divided into **unit tests** and **integration tests**.

- **Unit Tests:**
These tests use mocks to simulate API responses. They run quickly and do not require real API calls.

- **Integration Tests:**
These tests make real API calls (e.g., to the OpenAI API) and require a valid API key. They are intended to be run manually or in a staging environment.
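The mocking pattern for the unit tests can be sketched as follows. This is an illustrative example, not a test from the suite; `fetch_image_url` is a hypothetical stand-in for the image-generation call, shown with the client injected so `unittest.mock` can replace it:

```python
from unittest.mock import MagicMock

def fetch_image_url(client, prompt, n=1, size="256x256"):
    """Stand-in for the image-generation call: uses an injected client."""
    response = client.images.generate(prompt=prompt, n=n, size=size)
    return response.data[0].url

# Build a mock client so no real OpenAI call (and no API key) is needed.
mock_client = MagicMock()
mock_client.images.generate.return_value.data = [
    MagicMock(url="https://example.com/fake.png")
]

result = fetch_image_url(mock_client, "A Juicy Burger")
print(result)  # https://example.com/fake.png
```

Injecting the client (or patching `openai.OpenAI`) keeps unit tests fast and deterministic, which is what lets them run on every CI push.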

### Default Behavior

By default, integration tests are **excluded** from the test run. This is achieved by configuring `pytest` in our `pytest.ini` file (located in the `backend` directory):

```ini
[pytest]
markers =
integration: mark test as an integration test.
addopts = -m "not integration"
```

This configuration tells `pytest` to skip any test marked with `@pytest.mark.integration` when you run:

```bash
pytest -v
```

### Running Integration Tests

To run the integration tests, override the default marker filter by using the `-m` option:

```bash
pytest -m integration
```

> **Note:** Integration tests make real API calls and require the `OPENAI_API_KEY` environment variable to be set. Make sure you have this environment variable configured before running these tests.
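A test opted in to the integration marker looks like the following sketch. The test name and body are hypothetical; the marker and skip guard are the relevant parts:

```python
import os
import pytest

@pytest.mark.integration  # excluded by default via addopts = -m "not integration"
def test_generate_quiz_real_api():
    # Skip rather than fail when the key is absent.
    if not os.getenv("OPENAI_API_KEY"):
        pytest.skip("OPENAI_API_KEY not set")
    # ... the real API call (e.g. via QuizGenerator) would go here ...
```

Running `pytest -m integration` inverts the default filter and selects exactly these marked tests.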

---

The provided code change adds documentation regarding testing procedures in a project. Here are some observations and suggestions for improvement:

  1. Clarity and Detail:

    • The documentation added provides a clear overview of unit tests and integration tests, explaining their differences and requirements.
    • Including specifics about how to run tests with examples is beneficial for users.
  2. Configuration Management:

    • Clear explanation of how integration tests are excluded by default using pytest markers is useful.
    • Ensure consistency in the naming of markers and commands across the codebase.
    • Suggest enforcing a consistent format for all test cases to maintain readability.
  3. Environment Dependency:

    • Emphasize the importance of having the OPENAI_API_KEY environment variable set before running integration tests.
    • Provide information on where users should set up this variable or link to relevant documentation.
  4. Formatting and Consistency:

    • Check for consistency in spacing, indentation, and formatting throughout the documentation.
    • Consider using a standardized format or template for documenting tests to improve readability and maintainability.
  5. Error Handling:

    • Add information about what to do if the OPENAI_API_KEY environment variable is missing during test execution for better error handling.
  6. Cross-Referencing and Linking:

    • Depending on the complexity of the project, consider cross-referencing other related documents or providing links for more detailed information.

Overall, the code change appears to be well-documented and provides valuable information for developers working on the project, covering the basics of testing procedures effectively.

Empty file added backend/__init__.py
Empty file.
26 changes: 16 additions & 10 deletions backend/fastapi_generate_quiz.py
@@ -1,10 +1,10 @@
# Example of openai streaming
# Example of openai streaming
# https://platform.openai.com/docs/api-reference/streaming
import logging
from generate_quiz import QuizGenerator
from generate_image import generate_image
from generate_image import ImageGenerator
from fastapi import FastAPI, Request
from fastapi.responses import (StreamingResponse, JSONResponse)
from fastapi.responses import StreamingResponse, JSONResponse
from fastapi.middleware.cors import CORSMiddleware

# Copy Azure Docs Example
@@ -20,9 +20,10 @@
allow_headers=["*"], # Allows all headers
)


@app.get("/GenerateQuiz")
async def generate_quiz_endpoint(request: Request) -> JSONResponse:
"""
"""
FastAPI App to generate an image based on a provided prompt.

The function expects a 'prompt' parameter in the HTTP request query
@@ -42,9 +43,11 @@ async def generate_quiz_endpoint(request: Request) -> JSONResponse:
difficulty = request.query_params.get("difficulty")
n_questions = request.query_params.get("n_questions")

logging.info(f"Python HTTP trigger function processed a request with {topic=} {difficulty=}, {n_questions=}.")
logging.info(
f"Python HTTP trigger function processed a request with {topic=} {difficulty=}, {n_questions=}."
)

# If either 'topic' or 'difficulty' is not provided in the request,
# If either 'topic' or 'difficulty' is not provided in the request,
# the function will return an error message and a 400 status code.
# n_questions is optional
if not topic or not difficulty:
@@ -58,7 +61,7 @@ async def generate_quiz_endpoint(request: Request) -> JSONResponse:
# Set default value if not set
if not n_questions:
n_questions = 10

logging.info(
f"Generating quiz for topic: {topic} with difficulty: {difficulty} with number of questions: {n_questions}"
)
@@ -72,9 +75,10 @@ async def generate_quiz_endpoint(request: Request) -> JSONResponse:

return StreamingResponse(generator, media_type="text/event-stream")


@app.get("/GenerateImage")
async def generate_image_endpoint(request: Request) -> JSONResponse:
"""
"""
FastAPI App to generate an image based on a provided prompt.

The function expects a 'prompt' parameter in the HTTP request query
@@ -100,7 +104,8 @@ async def generate_image_endpoint(request: Request) -> JSONResponse:
return JSONResponse(content={"error": error_message}, status_code=400)

logging.info(f"Received prompt: {prompt}")
image_url = generate_image(prompt)
image_generator = ImageGenerator()
image_url = image_generator.generate_image(prompt)

if image_url is None:
error_message = "Error - Image generation failed."
@@ -111,7 +116,8 @@ async def generate_image_endpoint(request: Request) -> JSONResponse:
logging.info(f"Generated image for prompt {prompt}: {image_url}")
return JSONResponse(content={"image_url": image_url}, status_code=200)


# Run with uvicorn fastapi_generate_quiz:app --reload --host 0.0.0.0 --port 8000 --log-level debug
# Access with curl "http://localhost:8000/GenerateQuiz?topic=UK%20History&difficulty=easy&n_questions=3"
# Access with curl "http://localhost:8000/GenerateImage?prompt=A%20Juicy%20Burger"
# This simple example works!
# This simple example works!

Code Review:

  1. Import Order & Naming:

    • Update the import statements for uniformity. e.g., from generate_image import ImageGenerator for clarity.
    • Ensure consistent naming conventions for classes and methods. (QuizGenerator, ImageGenerator)
  2. Function Docstrings:

    • Add more descriptive docstrings for functions to explain their purpose, parameters, and expected behavior.
  3. Logging:

    • Consider improving log formatting for readability.
    • Add logging to handle potential errors during image generation.
  4. Query Parameters Checking:

    • Improve query params handling by validating and sanitizing inputs.
    • Check if n_questions parameter is a valid integer.
  5. Error Handling:

    • Implement robust error handling mechanisms. Ensure clear error messages and appropriate HTTP status codes are returned in case of issues.
  6. Code Formatting:

    • Maintain consistent code formatting. For instance, ensure proper indentation throughout the code.
  7. HTTP Responses:

    • Use more specific HTTP response types where necessary to provide meaningful responses to the client.
  8. Resource Cleanup:

    • Ensure that resources like ImageGenerator instances are properly handled and cleaned up after use.
  9. Testing:

    • Comprehensive testing should be conducted to verify the functionality of both endpoints.
  10. End-of-File Newline:

    • Consider adding a newline at the end of the file for compliance with some style guides.

Overall, the code appears to be on the right track but could benefit from improvements in error handling, logging, and input validation. Ensure that the application is robust and can handle unexpected scenarios gracefully.

115 changes: 77 additions & 38 deletions backend/generate_image.py
@@ -1,53 +1,92 @@
from openai import OpenAI

import logging
import os
import logging
from typing import Optional
from openai import OpenAI

# Set up logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')

# Set up OpenAI API key from environment variables
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
raise ValueError(
"Environment variable OPENAI_API_KEY is not set. "
"Please ensure it's set and try again."
)
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)


class ImageGenerator:
@classmethod
def get_api_key_from_env(cls) -> str:
"""Retrieves the OpenAI API key from environment variables.

Returns:
str: The API key from the environment variable OPENAI_API_KEY.

Raises:
ValueError: If the environment variable is not set or empty.
"""
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
raise ValueError(
"Environment variable OPENAI_API_KEY is not set. "
"Please ensure it's set and try again."
)
return api_key

def __init__(self, api_key: Optional[str] = None):
"""Initialises the ImageGenerator.

If `api_key` is not provided, it is retrieved from the environment
using `get_api_key_from_env`.

client = OpenAI(api_key=OPENAI_API_KEY)
Args:
api_key (str, optional): The OpenAI API key to use. Defaults to None.
"""
if api_key is None:
api_key = self.get_api_key_from_env()

self.client = OpenAI(api_key=api_key)

def generate_image(prompt: str, n: int = 1, size: str = "256x256") -> str:
"""
Generates an image using OpenAI's Image API based on a given prompt.
def generate_image(
self, prompt: str, n: int = 1, size: str = "256x256"
) -> Optional[str]:
"""Generates an image based on the provided prompt.

Parameters:
- prompt (str): The textual description for the image to be generated.
- n (int): The number of images to generate. Default is 1.
- size (str): The size of the generated image. Default is "256x256".
Args:
prompt (str): The textual description for the image to be generated.
n (int, optional): The number of images to generate. Defaults to 1.
size (str, optional): The size of the generated image. Defaults to "256x256".

Returns:
- str: URL of generated image, in JSON dict with key URL
Returns:
Optional[str]: The URL of the generated image if successful,
or `None` if an error occurred.
"""
logger.info(f"Generating image with prompt: {prompt=}")
image_url = self._get_image_url(prompt, n, size)
logger.info(f"Generated image URL: {image_url}")
return image_url

Raises:
- openai.error.OpenAIError: If there's an error in the request.
"""
def _get_image_url(self, prompt: str, n: int, size: str) -> Optional[str]:
"""Makes the API call to generate images using OpenAI and returns the URL.

logging.info(f"{prompt=}")
Args:
prompt (str): The textual description for the image to be generated.
n (int): The number of images to generate.
size (str): The size of the generated image (e.g., "256x256").

try:
response = client.images.generate(prompt=prompt, n=n, size=size)
return response.data[0].url
except Exception as e:
logger.error(f"Non-OpenAI Error when calling OpenAI api: {e}")
return None
Returns:
Optional[str]: The URL of the first generated image,
or `None` if an error occurred.
"""
try:
response = self.client.images.generate(prompt=prompt, n=n, size=size)
return response.data[0].url
except Exception as e:
logger.error(f"Error when calling OpenAI API: {e}")
return None


if __name__ == "__main__":
image_description = (
"Crested Gecko showcasing its distinct crests and coloration. Pixel Art"
# Example usage:
image_generator = (
ImageGenerator()
) # Uses environment variable if no API key is provided
prompt_text = (
"Crested Gecko showcasing its distinct crests and colouration. Pixel Art"
)
image_url = generate_image(image_description)
if image_url:
print(f"Generated Image URL: {image_url}")
image_url = image_generator.generate_image(prompt_text)