Summary
This issue tracks the migration, modernization, and repair of the apt.envoyproxy.io package repository backend and the related build/deploy processes.
Background
- The current repo is served from Netlify's CDN and built via Bazel; it has been stalled/broken since v1.32 due to Netlify/CI/time/disk constraints.
- The static content (repo + debs) currently lives in Netlify's storage. There is no official GCS backend yet.
- We need to migrate the backend to use GCS for storage and overhaul the workflows for incremental, reliable, and maintainable builds/publishing.
Migration Plan
Phase 1: Migration/Bootstrap
- Scrape all existing data from Netlify (apt.envoyproxy.io) using a script (wget, etc.).
- Sync the downloaded apt repo files (dists, pool, index.html, keys, etc.) to a new GCS bucket (e.g. gs://envoy-apt-repo/).
- Prepare scripts/workflows to enable this migration (manually triggered, as a temporary step); a rough sketch of the scrape/seed is below.
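
A minimal sketch of the scrape-and-seed step, assuming wget and gsutil are available and using the example bucket name above (flags and paths are illustrative, not a final workflow):

```bash
#!/usr/bin/env bash
set -euo pipefail

# One-off mirror of the existing static repo from Netlify.
# This relies on the repo's index/metadata pages linking to everything
# we need; the flag set here is a starting point, not a tested recipe.
wget --mirror --no-parent --no-host-directories \
     --directory-prefix=apt-mirror \
     https://apt.envoyproxy.io/

# Seed the new GCS bucket with the mirrored content.
# gs://envoy-apt-repo/ is the example bucket name above, not a confirmed one.
gsutil -m rsync -r apt-mirror/ gs://envoy-apt-repo/
```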
Phase 2: Frontend/Proxy Switch
- Update the Netlify frontend to become a proxy-only edge/CDN for the GCS bucket (no more in-place builds); an example proxy rule is sketched after this list.
- Confirm that apt.envoyproxy.io now serves from GCS via Netlify proxying.
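
One possible shape for the proxy-only frontend is a Netlify rewrite rule (a `_redirects` entry with a 200 status proxies rather than redirects), written here as a small script that generates the file; the bucket name is again the example from this issue:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Emit a _redirects file that proxies every request through Netlify's edge
# to the public GCS bucket. The trailing "200!" makes it a forced rewrite
# (proxy) rather than an HTTP redirect. Bucket name is illustrative.
cat > _redirects <<'EOF'
/*  https://storage.googleapis.com/envoy-apt-repo/:splat  200!
EOF
```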
Phase 3: Add PR Build/Test Jobs
- Add isolated PR test workflows that:
  - Build a stub/fake repo using committed or CI-generated test .debs.
  - Validate that the publish/build machinery works without ever touching real data/resources (see the sketch below).
- No secret access for PRs.
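
A minimal sketch of what the PR-only stub build could exercise, assuming dpkg-deb and aptly are on the CI image; package, repo, and distribution names are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build a throwaway test .deb (placeholder name/version/maintainer).
mkdir -p stub-pkg/DEBIAN
cat > stub-pkg/DEBIAN/control <<'EOF'
Package: envoy-stub
Version: 0.0.1
Architecture: amd64
Maintainer: ci@example.invalid
Description: Stub package for PR repo-build tests
EOF
dpkg-deb --build stub-pkg stub_0.0.1_amd64.deb

# Run the same publishing machinery against a local-only aptly repo.
# Nothing here touches the real bucket, and no secrets are needed
# (signing is skipped for the test publish).
aptly repo create -distribution=test -component=main pr-test
aptly repo add pr-test stub_0.0.1_amd64.deb
aptly publish repo -skip-signing pr-test
```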
Phase 4: Publishing CI/CD
- On merge to main:
  - Sync the aptly db (and any required state) from a private GCS state bucket.
  - Check for new Envoy releases; download and add only the new debs.
  - Run aptly publish/update to refresh metadata and publish incremental changes.
  - Sync public/ to the public GCS bucket.
- Secrets are required only for this step (a rough sketch follows).
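
Roughly, the merge-to-main job could reduce to the following, assuming aptly keeps its state under ~/.aptly (its default) and that state lives in a private bucket; bucket names, repo name, and distribution are all illustrative, and the new-release download step is elided:

```bash
#!/usr/bin/env bash
set -euo pipefail

STATE_BUCKET=gs://envoy-apt-state   # private state bucket (illustrative name)
PUBLIC_BUCKET=gs://envoy-apt-repo   # public bucket (example name from this issue)

# 1. Restore aptly's db and pool from the private state bucket.
gsutil -m rsync -r "${STATE_BUCKET}/db/"   ~/.aptly/db/
gsutil -m rsync -r "${STATE_BUCKET}/pool/" ~/.aptly/pool/

# 2. Add only the new debs (the check/download against GitHub releases is
#    elided; "new-debs/" stands in for whatever that step produces).
aptly repo add envoy new-debs/

# 3. Refresh the published metadata incrementally. The signing key and the
#    real distribution name would come from CI secrets/config.
aptly publish update stable

# 4. Push the published tree to the public bucket and the state back
#    to the private bucket.
gsutil -m rsync -r ~/.aptly/public/ "${PUBLIC_BUCKET}/"
gsutil -m rsync -r ~/.aptly/db/     "${STATE_BUCKET}/db/"
gsutil -m rsync -r ~/.aptly/pool/   "${STATE_BUCKET}/pool/"
```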
Phase 5: Repo Catch-Up
- Add a one-time/manual workflow to fill in missing releases by downloading all missing .debs from GitHub releases and updating the repo as needed (sketched below).
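
The catch-up workflow might look like the following, assuming the GitHub CLI is available and the .debs are attached as release assets on envoyproxy/envoy; the tag list is a placeholder and would really be computed by diffing releases against the repo's contents:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder tag list; in practice this would be every release whose
# debs are missing from the published repo.
MISSING_TAGS="v1.32.0 v1.33.0"

for tag in ${MISSING_TAGS}; do
  gh release download "${tag}" \
     --repo envoyproxy/envoy \
     --pattern '*.deb' \
     --dir "debs/${tag}"
  aptly repo add envoy "debs/${tag}"
done

# Republish once everything missing has been added.
aptly publish update stable
```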
Open Questions
- Is there a way to reconstruct the aptly db from the scraped static repo, or is a full import from scratch required? (A from-scratch sketch follows.)
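
If reconstructing the db turns out not to be feasible, the from-scratch path presumably amounts to pointing aptly at the scraped pool/ tree; a rough sketch, with repo and distribution names as placeholders and signing omitted:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Rebuild the aptly db from the scraped static repo: create a fresh local
# repo and bulk-import every .deb found under the mirrored pool/ directory.
aptly repo create -distribution=stable -component=main envoy
aptly repo add envoy apt-mirror/pool/
aptly publish repo -skip-signing envoy
```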
Estimated Risk/Resources
- Initial scrape and GCS seed will be write-heavy and will require significant bandwidth and disk (one-off).
- Ongoing publishing after migration should be incremental and low-overhead.
- Will need patience and possibly access to beefy hardware, but only for the initial bootstrap.