This project demonstrates a complete DevOps workflow for a Python web application, containerized with Docker, deployed on AWS using Terraform, and automated with GitHub Actions. The solution follows 12-factor app and SOLID principles, with a focus on security, automation, and documentation.
- Flask App (Dockerized, venv for local dev)
- PostgreSQL (AWS RDS)
- ECS Fargate (App hosting)
- ECR (Docker image registry)
- VPC, Subnets, Security Groups (Terraform-managed)
- CloudWatch (Logging & Alarms)
- GitHub Actions (CI/CD)
```sh
git clone git@github.com:alex-504/devops-test-master.git
cd devops-test-master/app/beer_catalog
python3 -m venv venv
source venv/bin/activate
pip install poetry
poetry install
export DATABASE_URL="sqlite:///beers.db"
poetry run python -m flask --app beer_catalog/app run --debug
```
- The app will be available at http://127.0.0.1:5000.
Prerequisites:
- PostgreSQL installed (e.g., `brew install postgresql@14` on macOS)
- Start PostgreSQL (macOS/Homebrew): `brew services start postgresql@14` (or, for other versions, `brew services start postgresql`)
- Check status: `brew services list`
- Create the database (if not already created): `createdb beer_catalog`
- If you see an error like `database "beer_catalog" already exists`, you can skip this step.
- If you see an error like …
Set the DATABASE_URL environment variable:
```sh
export DATABASE_URL="postgresql://<user>@localhost:5432/beer_catalog"
```
- The default user is usually your macOS username (e.g., `alexandrevieira`).
- If you use a password, add it after the username: `postgresql://<user>:<password>@localhost:5432/beer_catalog`

Run the app:
```sh
poetry run python -m flask --app beer_catalog/app run --debug
```
- The app will be available at http://127.0.0.1:5000.
- Health check: `curl http://127.0.0.1:5000/health`
- Get all beers: `curl http://127.0.0.1:5000/beers`
- Add a beer:

```sh
curl -X POST http://127.0.0.1:5000/beers \
  -H "Content-Type: application/json" \
  -d '{"name": "Heineken", "style": "Lager", "abv": 5.0}'
```
---
## Running the App with Docker
You can run the app locally using Docker, just like in production (ECS). This ensures consistency and lets you test the container before deploying.
### 1. Build the Docker image
```sh
docker build -t beer-catalog-app .
```

### 2. Run the container
```sh
docker run -p 5000:5000 beer-catalog-app
```
- The app will be available at http://localhost:5000.
If you want to use your local PostgreSQL database with Docker:
```sh
docker run -p 5000:5000 \
  -e DATABASE_URL="postgresql://<user>@host.docker.internal:5432/beer_catalog" \
  beer-catalog-app
```
- Replace `<user>` with your Postgres username.
- If you use a password, add it after the username.
- The app will be available at http://localhost:5000.
Note: This is the same Docker image that is pushed to ECR and used by ECS in production, ensuring consistency between local and cloud environments.
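If you ever need to push that image to ECR by hand, this is a minimal sketch; the account ID and repository name are placeholders, and the CI pipeline normally does this for you:

```sh
# Authenticate Docker against the ECR registry in ap-southeast-2
aws ecr get-login-password --region ap-southeast-2 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.ap-southeast-2.amazonaws.com

# Tag the locally built image with the ECR repository URI and push it
docker tag beer-catalog-app:latest <account-id>.dkr.ecr.ap-southeast-2.amazonaws.com/beer-catalog-app:latest
docker push <account-id>.dkr.ecr.ap-southeast-2.amazonaws.com/beer-catalog-app:latest
```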
Note: An example `terraform.tfvars.example` file is provided in the `terraform/` directory. Copy it to a new file called `terraform.tfvars` and update the values with your own secrets and settings before running `terraform apply`.
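For example, from the repository root:

```sh
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
# edit terraform/terraform.tfvars with your own values before running terraform apply
```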
- Automated via GitHub Actions on push to `main`, `master`, and `feature/*` branches.
```sh
cd terraform
terraform init
terraform plan
terraform apply
```
- All AWS resources (ECR, ECS, RDS, VPC, etc.) are created from scratch (no prebuilt modules).
- The app will be available at the ECS public IP (see the ECS task details in the AWS Console, or the CLI sketch below).
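If you prefer the CLI to the Console, this is roughly how that public IP can be looked up; the cluster and service names below are placeholders, so substitute the ones created by Terraform:

```sh
# List the running task of the service
aws ecs list-tasks --cluster beer-catalog-cluster --service-name beer-catalog-service

# Find the task's network interface (ENI)
aws ecs describe-tasks --cluster beer-catalog-cluster --tasks <task-arn> \
  --query "tasks[0].attachments[0].details"

# Resolve the ENI to its public IP
aws ec2 describe-network-interfaces --network-interface-ids <eni-id> \
  --query "NetworkInterfaces[0].Association.PublicIp" --output text
```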
- Pull Request Checks: Linting and (optional) tests on every PR.
- Docker Build & Push: On merge to `main`, `master`, or `feature/*` branches, the image is built and pushed to ECR.
- ECS Deployment: The ECS service is updated with the new image.
- Push image to ECR: On push, the new image is built and pushed to ECR.
- Deploy app to ECS: ECS service is updated with the new image.
The workflows are defined in the `.github/workflows/` folder; a typical ECS deploy command is sketched below.
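For reference, the ECS refresh step in a pipeline like this usually boils down to a single AWS CLI call (the cluster and service names here are illustrative, not necessarily what the workflow uses):

```sh
# Force ECS to pull the freshly pushed image and roll the running tasks
aws ecs update-service \
  --cluster beer-catalog-cluster \
  --service beer-catalog-service \
  --force-new-deployment
```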
- ECR repository for Docker images
- ECS cluster & service for app hosting
- RDS PostgreSQL instance
- VPC, subnets, security groups (built from scratch)
- CloudWatch log group & alarms
- No prebuilt modules used for ECS or networking
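A quick way to verify what was actually provisioned is to list the Terraform state from the `terraform/` directory:

```sh
cd terraform
terraform state list   # prints every resource under Terraform management
```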
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Health check |
| GET | /beers | List all beers |
| POST | /beers | Add a new beer |
| POST | /seed | Seed database |
Note: There is no `/beers/<id>` endpoint, as per the original app.
- Intentional Issues: Documented and fixed in `ISSUES_FOUND.md`.
- 12-factor & SOLID: Environment variables, logging, error handling, and code structure.
- Security: IAM roles, least privilege, no hardcoded secrets.
- Naming & Structure: Consistent resource names, clear separation of concerns.
- CloudWatch Logs: ECS task logs
- CloudWatch Alarms: RDS high CPU, ECS task failures
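To follow the ECS task logs from a terminal instead of the Console (the log group name is an assumption; check the Terraform configuration for the real one):

```sh
aws logs tail /ecs/beer-catalog --follow --region ap-southeast-2
```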
(All resources were created from scratch and provisioned using Terraform)
- ECS cluster
- ECS Logs
- RDS PostgreSQL DB
- RDS Monitoring
- RDS User Permission Setup
- VPC
- CloudWatch log events
- CloudWatch Alarms
- Health check: `curl http://3.27.247.99:5000/health` -> screenshot
- Get all beers: `curl http://3.27.247.99:5000/beers` -> screenshot
- Add a beer: `curl -X POST http://3.27.247.99:5000/beers -H "Content-Type: application/json" -d '{"name": "Heineken", "style": "Lager", "abv": 5.0}'` -> screenshot
- ❌ Seed the database: `curl -X POST http://3.27.247.99:5000/seed` -> screenshot
- Region Mismatch: Ensure the AWS CLI and Console are set to `ap-southeast-2`.
- Resource Already Exists: Delete or import orphaned resources.
- App Not Responding: Check the ECS task status, security groups, and logs.
- Terraform State Issues: Use `terraform import` or clean up resources as needed (see the sketch below).
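Two quick commands for the region and state issues above (the Terraform resource address and repository name are illustrative, not the project's actual identifiers):

```sh
# Confirm the CLI region matches the Terraform region
aws configure get region                  # should print ap-southeast-2
export AWS_DEFAULT_REGION=ap-southeast-2

# Adopt an orphaned resource into Terraform state instead of recreating it
terraform import aws_ecr_repository.beer_catalog beer-catalog-app
```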
- Infrastructure design: Built for scalability and team collaboration.
- Naming & documentation: Clear, standardized, and recruiter-friendly.
- CI/CD pipeline: Automated, reliable, and secure.
- Resilience & security: Follows AWS and DevOps best practices.
- Troubleshooting: All intentional issues found and documented.
- [✓] RDS user/permission automation. Refer to `aws_db_instance`, `aws_db_user`, `aws_db_parameter_group`, or `aws_db_role`.
- [✓] Secret management. Refer to `terraform.tfvars`.
- ECS Service Auto Scaling. I did not have time to implement this, since it requires quite specific AWS and load balancer configuration.
- Add a `/beers/<id>` endpoint
- Add authentication/authorization
- Use Terraform modules for larger projects
- Add automated integration tests
- ECS Service Auto Scaling
- Add an ECR lifecycle policy to restrict the number of images kept in the repository and keep storage growth in check (a CLI sketch follows below).
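As an illustration of the lifecycle policy idea (the repository name and retention count are assumptions), an equivalent one-off AWS CLI call would be:

```sh
# Expire everything beyond the 10 most recent images in the repository
aws ecr put-lifecycle-policy \
  --repository-name beer-catalog-app \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }]
  }'
```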
- Time investment: ~2h to complete the full AWS deployment.
- I would like to have tested the CI/CD pipeline more.
- Cost optimization: I created a budget in the AWS Console to avoid unexpected costs.
- I would like to have added deletion protection on the RDS instance (`deletion_protection = var.environment == "prod" ? true : false`), maybe next time.
Alexandre Vieira
https://www.linkedin.com/in/alexandre-dev/
