Historically, documentation of this repo's API server was kept in https://github.com/PermanentOrg/stela/blob/main/API.md. We are now adopting the OpenAPI specification format for API documentation; new endpoints are documented in this fashion. To generate an HTML copy of the OpenAPI documentation, run
```bash
redocly build-docs packages/api/docs/src/api.yaml
```

Then open `redoc-static.html` to view the docs in a browser. Alternatively, see the hosted version.
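If you don't have the Redocly CLI available, one way to get it is a global npm install (an `npx @redocly/cli` invocation also works if you'd rather not install globally):

```bash
npm install -g @redocly/cli
redocly build-docs packages/api/docs/src/api.yaml
# writes redoc-static.html to the current directory
```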
- Create a `.env` file

  ```bash
  cp .env.template .env
  ```

  Update values as needed (see Environment Variables).

- Install the Node.js version specified in `.node-version` (doing this using `nvm` is recommended).

- Install dependencies

  ```bash
  npm install
  npm install -ws
  ```

  `psql` needs to be installed for running tests.
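Put together, a first-time setup might look like the following. This is only a sketch; it assumes `nvm` is available and a Debian-based system for installing `psql`, so substitute your platform's package manager as needed:

```bash
cp .env.template .env                    # then fill in values (see Environment Variables)
nvm install "$(cat .node-version)"       # install the pinned Node.js version
npm install
npm install -ws
sudo apt-get install postgresql-client   # provides psql, needed by the test suite
```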
Depending on the work being done, some environment variables will not be required for the service to run. For these, simply fill in any fake value to prevent `require-env-variable` from throwing errors.
| Variable | Default | Notes |
|---|---|---|
| ENV | local | Tells stela what environment it's running in |
| DATABASE_URL | postgres://postgres:permanent@database:5432/permanent | Run tests to generate default database |
| PORT | 8080 | Tells stela what port to run on |
| FUSIONAUTH_HOST | <none. needs to be set> | Fusionauth's host URL. Should be different between prod and other envs. |
| FUSIONAUTH_API_KEY | <none. needs to be set> | Find it in Fusionauth admin panel -> settings -> API keys -> the one called "back-end (local)" |
| FUSIONAUTH_TENANT | <none. needs to be set> | Find it in Fusionauth admin panel -> Tenants -> the one called "Local" |
| FUSIONAUTH_BACKEND_APPLICATION_ID | <none. needs to be set> | Find it in Fusionauth admin panel -> Applications -> the one called "back-end (local)" |
| FUSIONAUTH_ADMIN_APPLICATION_ID | <none. needs to be set> | Find it in Fusionauth admin panel -> Applications -> the one called "admin-local" |
| FUSIONAUTH_SFTP_APPLICATION_ID | <none. needs to be set> | Find it in Fusionauth admin panel -> Applications -> the one called "sftp (local)" |
| LEGACY_BACKEND_HOST_URL | http://load_balancer:80/api | |
| LEGACY_BACKEND_SHARED_SECRET | none | Can be found in back-end's library/base/constants/base.constants.php |
| MAILCHIMP_API_KEY | none | Can be found in back-end's library/base/constants/base.constants.php |
| MAILCHIMP_TRANSACTIONAL_API_KEY | none | Can be found in back-end's library/base/constants/base.constants.php, where it is called MANDRILL_API_KEY |
| MAILCHIMP_DATACENTER | us12 | |
| MAILCHIMP_COMMUNITY_LIST_ID | 2736f796db | The default value corresponds to the dev list |
| SENTRY_DSN | none | Can be found in Sentry under Projects > stela > Settings > Client Keys (DSN) |
| DEV_NAME | none | Only set in local environments. Should be your given name, all lowercase. Used to create Sentry envs for developers |
| AWS_REGION | us-west-2 | |
| AWS_ACCESS_KEY_ID | none | The same one you use in devenv |
| AWS_SECRET_ACCESS_KEY | none | The same one you use in devenv |
| LOW_PRIORITY_TOPIC_ARN | test | Doesn't need to be set to a real ARN unless your work touches it specifically |
| EVENT_TOPIC_ARN | test | Doesn't need to be set to a real ARN unless you're working with events specifically |
| MIXPANEL_TOKEN | none | Found in Mixpanel at Settings > Project Settings > Project Token |
| ARCHIVEMATICA_BASE_URL | none | The URL of the EC2 instance on which Archivematica is running |
| ARCHIVEMATICA_API_KEY | none | Found in Bitwarden, not needed unless you're running the cleanup cron |
| CLOUDFRONT_URL | none | Can be found as CDN_URL in back-end's library/base/constants/base.constants.php. Not required for API server |
| CLOUDFRONT_KEY_PAIR_ID | none | Can be found as CLOUDFRONT_KEYPAIR in back-end's library/base/constants/base.constants.php. Not required for API server |
| CLOUDFRONT_PRIVATE_KEY | none | Can be found in back-end's library/static/certs/pk-APKAJP2D34UGZ6IG443Q.pem. Not required for API server |
| SITE_URL | local.permanent.org | |
| ARCHIVEMATICA_HOST_URL | none | Only needed for the archivematica_triggerer lambda |
| ARCHIVEMATICA_API_KEY | none | Only needed for the archivematica_triggerer lambda |
| ARCHIVEMATICA_ORIGINAL_LOCATION_ID | none | Only needed for the archivematica_triggerer lambda |
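For illustration, a bare-bones local `.env` might look something like this. Every value below is a placeholder, not a real credential; the Fusionauth values in particular must come from the admin panel as described above, and anything your work doesn't touch can be a fake value:

```bash
ENV=local
PORT=8080
DATABASE_URL=postgres://postgres:permanent@database:5432/permanent
FUSIONAUTH_HOST=https://fusionauth.example.com
FUSIONAUTH_API_KEY=replace-me
FUSIONAUTH_TENANT=replace-me
FUSIONAUTH_BACKEND_APPLICATION_ID=replace-me
LOW_PRIORITY_TOPIC_ARN=test
EVENT_TOPIC_ARN=test
SITE_URL=local.permanent.org
```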
To lint all workspaces, run

```bash
npm run lint -ws
```

Make sure the local database from devenv's docker compose is running

```bash
docker compose up -d
```

then run tests with

```bash
npm run test -ws
```

or, for a single project, specify the workspace

```bash
npm run test -w @stela/api
```

Note that the database tests run against is dropped and recreated at the beginning of each test run.
For running tests on a single file in stela, you can adapt the existing test command in `packages/api/package.json` on your machine. See the example below (replace `src/middleware/authentication.test.ts` with a pattern to match your file):

```json
"test:file": "npm run start-containers && npm run clear-database && npm run create-database && npm run set-up-database && (cd ../..; docker compose run stela node --experimental-vm-modules ../../node_modules/jest/bin/jest.js -i --silent=false -- src/middleware/authentication.test.ts)",
```
Preferred method: from the devenv repo, run

```bash
docker compose up -d
```

or, if you've added or updated dependencies, run

```bash
docker compose up -d --build stela
```

Outside a container: run

```bash
npm run start -w @stela/api
```

This project contains an API server and some cron jobs, which run on AWS EKS, as well as some serverless functions that run on AWS Lambda. All of these are configured and deployed through Terraform. Creating new API servers in this project is unlikely, at least in the near future, but it is not uncommon to add a lambda or cron job. To do so:
1. Create a new workspace to represent this service. It should have its own directory under `packages`.
2. Add your workspace to the `workspaces` array in the top-level `package.json`.
3. Implement your service within the new workspace.
4. Create a Dockerfile called `Dockerfile.<service_name>` defining a docker image from which a container running your service can be built. Use existing Dockerfiles for lambdas (such as `Dockerfile.trigger_archivematica`) or crons (such as `Dockerfile.thumbnail_refresh`) as a guide; a rough sketch follows this list.
5. In `terraform/test_cluster`, add a terraform file defining your new cron job or lambda and its dependencies (crons don't tend to have dependencies, but lambdas will usually have at least an SQS queue that triggers the lambda). Use an existing cron or lambda definition as a guide. Be sure to define a `data` block corresponding to your service so Terraform can see what image your service is running on future deploys.
6. If your service uses new environment variables, add them to `variables.tf`, and add the correct values for each environment in our Terraform Cloud. If you're adding a cron and any of your new environment variables are secrets, add them also to `secrets.tf`.
7. Add the name of your service's image to `required_dev_images` and `required_staging_images` in `locals.tf`. Also update the `current_kubernetes_images` and `current_lambda_images` definitions to include your service's image.
8. Repeat steps 5 and 6 in `terraform/prod_cluster`, but don't add `data` blocks (they are not necessary there).
9. In the Generate Image Tags Github workflow, add a step to generate an appropriate tag for the image.
10. In the deploy Github workflow for each environment, add your service's image tag to the `env` of the deploy job. In dev and staging, add it also to the `image_overrides` variable passed to `terraform plan` and `terraform apply`. In prod, add it to the variables passed directly to `terraform plan` and `terraform apply`.
11. Add a step to the unit tests Github workflow to run the tests for your new workspace (it would be ideal if we could just run `npm run test -ws`, but in the past this caused silent failures).
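For reference, a lambda Dockerfile might look roughly like the sketch below. This is only illustrative, not the repo's actual convention: the base image, copy paths, build steps, and handler name are all assumptions, so mirror an existing file like `Dockerfile.trigger_archivematica` rather than this one.

```dockerfile
# Hypothetical Dockerfile.<service_name>; paths and handler are assumptions
FROM public.ecr.aws/lambda/nodejs:20

# Bring in the root manifests and the service's workspace
COPY package.json package-lock.json ./
COPY packages/<service_name> ./packages/<service_name>

# Install dependencies for this workspace only
RUN npm install -w @stela/<service_name>

# Hand the Lambda runtime the handler exported by the built service
CMD ["packages/<service_name>/dist/index.handler"]
```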
- Build the lambda image

  ```bash
  docker build --platform linux/amd64 -t <LAMBDA NAME>:test -f Dockerfile.<LAMBDA NAME> .
  ```

- Run the lambda container. Add additional `--env` arguments as needed. Note that the `CLOUDFRONT_PRIVATE_KEY` required by the `record_thumbnail_attacher` must include literal newlines; docker does not appear to interpret `\n` correctly.

  ```bash
  docker run --platform linux/amd64 -p 9001:8080 --env DATABASE_URL=postgres://postgres:permanent@database:5432/permanent <LAMBDA NAME>:test
  ```

- Find the container name

  ```bash
  docker ps
  ```

- Connect to the local env docker network

  ```bash
  docker network connect devenv_default <YOUR CONTAINER NAME>
  ```

- Trigger the lambda

  ```bash
  curl "http://localhost:9001/2015-03-31/functions/function/invocations" -d '<YOUR PAYLOAD>'
  ```
stela deploys to dev automatically upon any merge to main.

To deploy to staging and prod, create a Release with a tag of the form vX.X.X (this should conform to semantic versioning). This will trigger a deploy to staging. If the staging deploy succeeds, the workflow will pause and wait for manual approval; once approved, it will deploy to prod.
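If you prefer the command line to the Github UI, such a release can also be created with the Github CLI (assuming `gh` is installed and authenticated; the version number is an example):

```bash
gh release create v1.2.3 --generate-notes
```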
To deploy to staging without deploying to prod, run the "Deploy to staging" workflow from the Actions tab.
Because dev and staging run on the same EKS cluster, deploys to each of these environments just target the stela
deployments and won't update the underlying infrastructure they run on. To update that infrastructure, manually trigger
the "Deploy to all test envs" workflow from the Actions tab in Github. This will deploy the current main branch to
dev and staging in addition to updating the test cluster infrastructure.