Commit 6181679

fix: merge conflict

2 parents: 6bc0d25 + 15a05eb
File tree: 16 files changed (+557 / -273 lines)

.mega-linter.yml (3 additions, 0 deletions)
````diff
@@ -38,6 +38,9 @@ DISABLE_LINTERS:
   # Disable due to poor configuration options
   - ACTION_ACTIONLINT
 
+  # Disable due to Prettier conflict
+  - MARKDOWN_MARKDOWN_TABLE_FORMATTER
+
 SHOW_ELAPSED_TIME: true
 
 FILEIO_REPORTER: false
````

README.md (119 additions, 87 deletions)
````diff
@@ -1,10 +1,10 @@
 # GitHub Copilot Usage Lambda
 
-This repository contains the AWS Lambda Function for updating the GitHub Copilot dashboard's historic information, stored within an S3 bucket.
+This repository contains the AWS Lambda Function for updating the GitHub Copilot dashboard's organisation-wide historic data, Copilot teams, and teams history.
 
-The Copilot dashboard can be found on the Copilot tab within the Digital Landscape.
+The Copilot dashboard can be found on the GitHub Copilot tab within the Digital Landscape.
 
-[View the Digital Landscape's repository](https://github.com/ONS-Innovation/keh-digital-landscape).
+[View the Digital Landscape's repository](https://github.com/ONSdigital/keh-digital-landscape).
 
 ---
 
````
````diff
@@ -14,10 +14,20 @@ The Copilot dashboard can be found on the Copilot tab within the Digital Landscape.
 - [Table of Contents](#table-of-contents)
 - [Prerequisites](#prerequisites)
 - [Makefile](#makefile)
-- [AWS Lambda Scripts](#aws-lambda-scripts)
-  - [Setup - Running in a container](#setup---running-in-a-container)
-  - [Setup - running outside of a Container (Development only)](#setup---running-outside-of-a-container-development-only)
-  - [Storing the container on AWS Elastic Container Registry (ECR)](#storing-the-container-on-aws-elastic-container-registry-ecr)
+- [AWS Lambda Script](#aws-lambda-script)
+  - [Running the Project](#running-the-project)
+    - [Outside of a Container (Recommended) (Development Only)](#outside-of-a-container-recommended-development-only)
+    - [Running in a container](#running-in-a-container)
+- [Deployment](#deployment)
+  - [Deployments with Concourse](#deployments-with-concourse)
+    - [Allowlisting your IP](#allowlisting-your-ip)
+    - [Setting up a pipeline](#setting-up-a-pipeline)
+    - [Prod deployment](#prod-deployment)
+    - [Triggering a pipeline](#triggering-a-pipeline)
+    - [Destroying a pipeline](#destroying-a-pipeline)
+  - [Manual Deployment](#manual-deployment)
+    - [Deployment Overview](#deployment-overview)
+    - [Storing the container on AWS Elastic Container Registry (ECR)](#storing-the-container-on-aws-elastic-container-registry-ecr)
 - [Deployment to AWS](#deployment-to-aws)
   - [Deployment Prerequisites](#deployment-prerequisites)
   - [Underlying AWS Infrastructure](#underlying-aws-infrastructure)
````
````diff
@@ -26,10 +36,6 @@ The Copilot dashboard can be found on the Copilot tab within the Digital Landscape.
   - [Running the Terraform](#running-the-terraform)
   - [Updating the running service using Terraform](#updating-the-running-service-using-terraform)
   - [Destroy the Main Service Resources](#destroy-the-main-service-resources)
-- [Deployments with Concourse](#deployments-with-concourse)
-  - [Allowlisting your IP](#allowlisting-your-ip)
-  - [Setting up a pipeline](#setting-up-a-pipeline)
-  - [Triggering a pipeline](#triggering-a-pipeline)
 - [Documentation](#documentation)
 - [Testing](#testing)
 - [Linting](#linting)
````
````diff
@@ -62,7 +68,7 @@ This repository has a Makefile for executing common commands. To view all commands
 make all
 ```
 
-## AWS Lambda Scripts
+## AWS Lambda Script
 
 This script:
 
````
````diff
@@ -72,7 +78,42 @@ This script:
 
 Further information can be found in [this project's documentation](/docs/index.md).
 
-### Setup - Running in a container
+### Running the Project
+
+### Outside of a Container (Recommended) (Development Only)
+
+To run the Lambda function outside of a container, we need to execute the `handler()` function.
+
+1. Uncomment the following at the bottom of `main.py`.
+
+   ```python
+   ...
+   # if __name__ == "__main__":
+   #     handler(None, None)
+   ...
+   ```
+
+   **Please Note:** If uncommenting the above in `main.py`, make sure you re-comment the code _before_ pushing back to GitHub.
+
+2. Export the required environment variables:
+
+   ```bash
+   export AWS_ACCESS_KEY_ID=<aws_access_key_id>
+   export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
+   export AWS_DEFAULT_REGION=eu-west-2
+   export AWS_SECRET_NAME=<aws_secret_name>
+   export GITHUB_ORG=ONSDigital
+   export GITHUB_APP_CLIENT_ID=<github_app_client_id>
+   export AWS_ACCOUNT_NAME=<sdp-dev/sdp-prod>
+   ```
+
+3. Run the script.
+
+   ```bash
+   python3 src/main.py
+   ```
+
+### Running in a container
 
 1. Build a Docker Image
 
````
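The uncomment step above works because of Python's module guard: AWS Lambda imports the module and calls `handler()` itself, while the guarded call only fires when the file is executed directly with `python3 src/main.py`. A minimal sketch of the pattern (the handler body here is a placeholder for illustration, not the real logic in `src/main.py`):

```python
def handler(event, context):
    # Placeholder body: the real handler gathers Copilot usage data
    # and writes it to S3. Here we just return a status for illustration.
    return {"statusCode": 200}


# Runs only when the file is executed directly (local development),
# never when the Lambda runtime imports the module.
if __name__ == "__main__":
    print(handler(None, None))
```

This is why the README warns you to re-comment the guard before pushing: leaving a stray top-level invocation in place would make the module do work at import time.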
````diff
@@ -99,7 +140,7 @@ Further information can be found in [this project's documentation](/docs/index.md).
 ```bash
 docker run --platform linux/amd64 -p 9000:8080 \
 -e AWS_ACCESS_KEY_ID=<aws_access_key_id> \
--e AWS_SECRET_ACCESS_KEY=<aws_secret_access_key_id> \
+-e AWS_SECRET_ACCESS_KEY=<aws_secret_access_key> \
 -e AWS_DEFAULT_REGION=eu-west-2 \
 -e AWS_SECRET_NAME=<aws_secret_name> \
 -e GITHUB_ORG=ONSDigital \
````
````diff
@@ -138,40 +179,79 @@ Further information can be found in [this project's documentation](/docs/index.md).
 docker stop 3f7d64676b1a
 ```
 
-### Setup - running outside of a Container (Development only)
+## Deployment
 
-To run the Lambda function outside of a container, we need to execute the `handler()` function.
+### Deployments with Concourse
 
-1. Uncomment the following at the bottom of `main.py`.
+#### Allowlisting your IP
 
-```python
-...
-# if __name__ == "__main__":
-#     handler(None, None)
-...
-```
+To set up the deployment pipeline with Concourse, you must first allowlist your IP address on the Concourse
+server. IP addresses are flushed every day at 00:00, so this must be done at the beginning of every working day
+whenever the deployment pipeline needs to be used. Follow the instructions on the Confluence page (SDP Homepage > SDP Concourse > Concourse Login) to
+log in. All our pipelines run on sdp-pipeline-prod, whereas sdp-pipeline-dev is the account used for
+changes to the Concourse instance itself. Make sure to export all necessary environment variables from sdp-pipeline-prod (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN).
 
-**Please Note:** If uncommenting the above in `main.py`, make sure you re-comment the code _before_ pushing back to GitHub.
+#### Setting up a pipeline
 
-2. Export the required environment variables:
+When setting up our pipelines, we use ecs-infra-user on sdp-dev to be able to interact with our infrastructure on AWS. The credentials for this are stored in
+AWS Secrets Manager, so you do not need to set up anything yourself.
 
-```bash
-export AWS_ACCESS_KEY_ID=<aws_access_key_id>
-export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
-export AWS_DEFAULT_REGION=eu-west-2
-export AWS_SECRET_NAME=<aws_secret_name>
-export GITHUB_ORG=ONSDigital
-export GITHUB_APP_CLIENT_ID=<github_app_client_id>
-export AWS_ACCOUNT_NAME=<sdp-dev/sdp-prod>
-```
+To set the pipeline, run the following script:
 
-3. Run the script.
+```bash
+chmod u+x ./concourse/scripts/set_pipeline.sh
+./concourse/scripts/set_pipeline.sh
+```
 
-```bash
-python3 src/main.py
-```
+Note that you only have to run chmod the first time you run the script, in order to grant execute permissions.
+This script will set the branch and pipeline name to whatever branch you are currently on. It will also set the image tag on ECR to 7 characters of the current branch name if running on a branch other than main. For main, the ECR tag will be the latest release tag on the repository that has semantic versioning (vX.Y.Z).
+
+The pipeline name itself will usually follow a pattern as follows: `github-copilot-usage-lambda-<branch-name>` for any non-main branch and `github-copilot-usage-lambda` for the main/master branch.
+
+#### Prod deployment
+
+To deploy to prod, a GitHub Release must be made on GitHub. The release is required to follow semantic versioning of vX.Y.Z.
+
+A manual trigger is to be made on the `github-copilot-usage-lambda > deploy-after-github-release` job through the Concourse CI UI. This will create a github-create-tag resource that is required by the `github-copilot-usage-lambda > build-and-push-prod` job. The prod deployment job is then also run through a manual trigger, ensuring that prod is only deployed using the latest GitHub release tag in the form of vX.Y.Z and is manually controlled.
+
+#### Triggering a pipeline
+
+Once the pipeline has been set, you can manually trigger a dev build on the Concourse UI, or run the following command for non-main branch deployment:
+
+```bash
+fly -t aws-sdp trigger-job -j copilot-usage-lambda-<branch-name>/build-and-push-dev
+```
+
+and for main branch deployment:
+
+```bash
+fly -t aws-sdp trigger-job -j copilot-usage-lambda/build-and-push-dev
+```
+
+#### Destroying a pipeline
+
+To destroy the pipeline, run the following command:
+
+```bash
+fly -t aws-sdp destroy-pipeline -p copilot-usage-lambda-<branch-name>
+```
+
+**It is unlikely that you will need to destroy a pipeline, but the command is here if needed.**
+
+**Note:** This will not destroy any resources created by Terraform. You must manually destroy these resources using Terraform.
+
+### Manual Deployment
+
+#### Deployment Overview
+
+This repository is designed to be hosted on AWS Lambda using a container image as the Lambda's definition.
+
+There are 2 parts to deployment:
+
+1. Updating the ECR Image.
+2. Updating the Lambda.
 
-### Storing the container on AWS Elastic Container Registry (ECR)
+#### Storing the container on AWS Elastic Container Registry (ECR)
 
 When you make changes to the Lambda Script, a new container image must be pushed to ECR.
 
````
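The tag convention these pipeline notes rely on (prod deploys only from release tags of the form vX.Y.Z) can be sanity-checked with a short sketch. The regex here is illustrative and is not taken from `set_pipeline.sh`:

```python
import re

# Matches tags such as v1.2.3; anything else is not a prod release tag.
RELEASE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")


def is_release_tag(tag: str) -> bool:
    """Return True only for semantic-version release tags (vX.Y.Z)."""
    return bool(RELEASE_TAG.match(tag))


print(is_release_tag("v1.4.0"))       # valid prod release tag
print(is_release_tag("feature-123"))  # branch-style name, not a release
```

A check like this is useful because the `build-and-push-prod` job silently depends on the latest release tag being well-formed.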
````diff
@@ -295,12 +375,6 @@ If the application has been modified, the following can be performed to update t
 The reconfigure options ensures that the backend state is reconfigured to point to the appropriate S3 bucket.
 
 **_Please Note:_** This step requires an **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** to be loaded into the environment if not already in place.
-This can be done using:
-
-```bash
-export AWS_ACCESS_KEY_ID="<aws_access_key_id>"
-export AWS_SECRET_ACCESS_KEY="<aws_secret_access_key>"
-```
 
 - Refresh the local state to ensure it is in sync with the backend
 
````
````diff
@@ -340,48 +414,6 @@ terraform refresh -var-file=env/dev/dev.tfvars
 terraform destroy -var-file=env/dev/dev.tfvars
 ```
 
-## Deployments with Concourse
-
-### Allowlisting your IP
-
-To setup the deployment pipeline with concourse, you must first allowlist your IP address on the Concourse
-server. IP addresses are flushed everyday at 00:00 so this must be done at the beginning of every working day whenever the deployment pipeline needs to be used.
-
-Follow the instructions on the Confluence page (SDP Homepage > SDP Concourse > Concourse Login) to
-login. All our pipelines run on `sdp-pipeline-prod`, whereas `sdp-pipeline-dev` is the account used for
-changes to Concourse instance itself. Make sure to export all necessary environment variables from `sdp-pipeline-prod` (**AWS_ACCESS_KEY_ID**, **AWS_SECRET_ACCESS_KEY**, **AWS_SESSION_TOKEN**).
-
-### Setting up a pipeline
-
-When setting up our pipelines, we use `ecs-infra-user` on `sdp-dev` to be able to interact with our infrastructure on AWS. The credentials for this are stored on AWS Secrets Manager so you do not need to set up anything yourself.
-
-To set the pipeline, run the following script:
-
-```bash
-chmod u+x ./concourse/scripts/set_pipeline.sh
-./concourse/scripts/set_pipeline.sh github-copilot-usage-lambda
-```
-
-Note that you only have to run chmod the first time running the script in order to give permissions.
-This script will set the branch and pipeline name to whatever branch you are currently on. It will also set the image tag on ECR to the current commit hash at the time of setting the pipeline.
-
-The pipeline name itself will usually follow a pattern as follows: `<repo-name>-<branch-name>`
-If you wish to set a pipeline for another branch without checking out, you can run the following:
-
-```bash
-./concourse/scripts/set_pipeline.sh github-copilot-usage-lambda <branch_name>
-```
-
-If the branch you are deploying is `main`, it will trigger a deployment to the `sdp-prod` environment. To set the ECR image tag, you must draft a GitHub release pointing to the latest release of the `main` branch that has a tag in the form of `vX.Y.Z.` Drafting up a release will automatically deploy the latest version of the `main` branch with the associated release tag, but you can also manually trigger a build through the Concourse UI or the terminal prompt.
-
-### Triggering a pipeline
-
-Once the pipeline has been set, you can manually trigger a build on the Concourse UI, or run the following command:
-
-```bash
-fly -t aws-sdp trigger-job -j github-copilot-usage-lambda-<branch-name>/build-and-push
-```
-
 ## Documentation
 
 This project uses MkDocs for documentation which gets deployed to GitHub Pages at a repository level.
````
