- Docker + Compose
- Gunicorn
- Whitenoise
- PostgreSQL
- Docs (Open API 3.0)
- Debug Toolbar
- Docker
- Docker Compose
- uv (`pip install uv`)
- Task
- pre-commit (`pip install pre-commit`)
- Install the pre-commit hooks: `pre-commit install`
- Create a `.env` file just like `.env.example` with your custom data. If you add something to your `.env` file, also keep `.env.example` updated with dummy values for key reference.
- Start the development server: `task compose-up -- -d`
- (Optionally) run the `django-tasks` worker for all queues: `task manage-db_worker -- --queue-name="*" --interval=2`
We follow specific conventions to organize our Django apps for clarity and maintainability:
- `urls.py` for URL Routing: We use `urls.py` to define the URL patterns for the app's API.
- `models.py` for Database Models: We use `models.py` to define database models.
- `serializers.py` for Serialization: We use `serializers.py` to define serialization logic.
- `apis.py` instead of `views.py`: For API endpoint definitions, we use `apis.py`. This helps differentiate API logic from traditional Django views that might render templates.
- `services.py` for Business Logic: All core business logic should reside in `services.py`. This file acts as a central hub for the application's primary functionality.
- `constants.py` for Constants: We use `constants.py` to store constant values used throughout the application. This helps keep values consistent and easy to maintain.
- `data_models.py` for Data Models: We use `data_models.py` to define data structures for our application objects.
- `tasks.py` for Background Tasks: We use `tasks.py` to define background tasks.
- Specific Internal Logic: For highly specific or internal app logic (e.g., image transformations), you can create custom files within the app's directory (e.g., `images/transformations.py`).
- Shared Components: `constants`, `services`, `data_models`, and `tasks` are meant to be accessible by other apps. Place widely used constants and reusable service functions here.
We can use folders to group related files/components within an app, splitting the codebase into more manageable and organized sections as the project grows and complexity increases:
```
# `services` in the `places` app may start like:
places/
    __init__.py
    services.py

# and become:
places/
    __init__.py
    services/
        __init__.py
        place.py
        place_image.py
```

Once this pattern is applied to any app component, we highly recommend switching the other app components to it as well, to follow a domain-like structure.
- Method Naming: We use an `object_action[_context]` naming convention for methods (e.g., `place_retrieve_by_user()`, `user_update()`). This helps keep the codebase organized and easy to navigate.
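A minimal sketch of the convention (plain Python with hypothetical in-memory data, no ORM involved):

```python
# `object_action[_context]` examples (hypothetical data for illustration)
USERS = {1: {"id": 1, "name": "Alice"}}
PLACES = [{"id": 10, "name": "Cafe", "user_id": 1}]

def user_update(user_id: int, **fields) -> dict:
    # object: user, action: update
    USERS[user_id].update(fields)
    return USERS[user_id]

def place_retrieve_by_user(user_id: int) -> list[dict]:
    # object: place, action: retrieve, context: by_user
    return [p for p in PLACES if p["user_id"] == user_id]
```

Leading with the object name keeps related service functions grouped together alphabetically and makes call sites read naturally.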
- `image_processing`: The `image_processing` app handles general image-related functionality, such as processing transformations and object detection.
  - The `image_processing` app is meant to be extracted into a separate app in the future as it grows and becomes more complex; inner apps should use the `image_processing_api` services, which support built-in types, instead of accessing the `image_processing` app's functionality directly.
- `places`: The `places` app allows creating and handling `Places`.
  - A `Place` is a basic virtual representation of a real place with a real-world counterpart.
- `users`: The `users` app handles user-related functionality.
- `api`: The `api` app centralizes API routing. It typically contains API versions that include URL patterns from other apps, providing a single entry point for all API requests and making versioning or global API changes more manageable.
- `common`: The `common` app houses highly generic and reusable code as public services that are neither specific nor related to any single application but are used across the project. This promotes DRY principles and keeps app-specific logic clean.
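For instance, a generic helper in `common` might look like the following (a hypothetical function for illustration; a real Django project would likely reach for `django.utils.text.slugify` instead):

```python
# common/services.py -- a generic, app-agnostic public service (hypothetical)
def text_slugify(value: str) -> str:
    # Replace non-alphanumeric characters with spaces, then join the
    # remaining words with hyphens in lowercase.
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in value).split()
    return "-".join(w.lower() for w in words)
```

Because nothing here knows about `places` or `users`, any app can import it without creating coupling between domain apps.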
The tasks defined in the Taskfile are executed within a Docker container, which has a volume mounted to the host PC. This volume specifically includes the application's codebase, allowing for a seamless integration between the development environment on the host and the containerized tasks.
Here's how it works:
- Code Synchronization: The volume mounted in `docker-compose.yaml` > `backend-humanify` ensures that the code inside the container is the same as on the host machine. Any changes made to the code on the host are immediately reflected within the container. This is crucial for development workflows, where frequent changes to the codebase are tested and iterated upon.
- Docker Compose and Django Operations: The tasks typically involve operations such as starting, stopping, or managing services using Docker Compose, as well as running Django-related commands. Since these tasks rely on the codebase, the volume ensures they operate on the latest version of the code, regardless of where the task is run.
- Host and Container Interaction: While the tasks are executed in an isolated container environment, the mounted volume enables them to access and manipulate the code on the host machine. This setup is particularly useful for tasks that need to interact closely with the host's file system or leverage host-specific configurations.
Run `task --list` to see a full list of available tasks with their descriptions.
- Common Docker Compose commands:

  ```shell
  # build the containers without cache
  task compose-build -- --no-cache

  # start the containers in detached mode
  task compose-up -- -d

  # stop the containers
  task compose-stop

  # down the containers
  task compose-down
  ```
- Common `manage.py` commands:

  ```shell
  # run django-tasks worker for a queue
  task manage-db_worker -- --queue-name="queue_name" --interval=2

  # create a super user
  task manage-createsuperuser

  # make migrations for a specific app
  task manage-makemigrations -- <app_name>

  # migrate a specific db
  task manage-migrate -- --database=<db_name>

  # start a new app
  task manage-startapp -- <app_name>
  ```
DISCLAIMER: Even with this volume approach, some tasks might NOT reflect changes on the host machine. For example, running `task uv-add -- requests` will install the `requests` dependency inside the Docker container only; you would need to install it locally via `uv add requests` if you want editor completions or linting for the lib. This is the behaviour we want: we strongly encourage developing in a containerized environment rather than on the host machine, since some dependencies may need custom OS packages that we might forget to add to the Dockerfile while working in a non-containerized environment.
The best approach to install a dependency both on the host and in a running container is to install it locally with `uv add <dependency_name>` and then run `task uv-sync`.
If you add a new task that does not work with the volume approach, please add a `[CONTAINER_ONLY]` tag to the task description.
We are currently using `django-tasks` for background tasks.
To run the background task worker:
```shell
# Run the worker for specific queues
task manage-db_worker -- --queue-name="place_images,users"

# Run the worker for all queues
task manage-db_worker -- --queue-name="*"

# Run the worker with a specific interval in seconds (default is 1)
task manage-db_worker -- --queue-name="*" --interval=20
```

- Queue Names: It's good practice to use descriptive queue names to organize tasks, for example `image_processing`, `notifications`, `data_cleanup`.
- Consider the priority and resource consumption of tasks when assigning them to queues.
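Conceptually, the `--queue-name` flag selects which queues a worker consumes: a comma-separated list names specific queues, while `"*"` matches all of them. A toy model of that selection (an illustration of the flag's semantics, not django-tasks internals):

```python
def queue_matches(queue_name: str, pattern: str) -> bool:
    """Toy model of worker queue selection: `pattern` is either "*"
    (consume every queue) or a comma-separated list of queue names."""
    if pattern == "*":
        return True
    return queue_name in {p.strip() for p in pattern.split(",")}

# Hypothetical pending tasks tagged with their queue names
pending = [
    ("place_images", "resize #1"),
    ("users", "welcome email"),
    ("data_cleanup", "purge old rows"),
]
picked = [task for queue, task in pending if queue_matches(queue, "place_images,users")]
```

With this model, a worker started with `--queue-name="place_images,users"` would pick up the first two tasks and leave `data_cleanup` for a separate, lower-priority worker.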
We currently use `pytest-django` for testing our code, with `faker` and `factory_boy` to generate data for our test models.
See the complete list of the default providers available in `faker`.
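A pytest-style test might look like the following (hypothetical function and field names; with `factory_boy` the `place` dict would come from a model factory rather than a literal):

```python
# tests/test_places.py (hypothetical names and logic)
def place_display_name(place: dict) -> str:
    # Toy service under test
    return f'{place["name"]} ({place["city"]})'

def test_place_display_name():
    # factory_boy + faker would normally build this test data
    place = {"name": "Eiffel Tower", "city": "Paris"}
    assert place_display_name(place) == "Eiffel Tower (Paris)"
```

`pytest` discovers any `test_*` function in `test_*.py` files automatically, so `task test` picks this up without extra registration.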
```shell
# Run all tests
task test

# Run a specific test file
task test -- path/to/test_file_example.py

# Run a specific test
task test -- path/to/test_file_example.py::test_example
```

The API docs are generated from the code using `drf-spectacular`.