[#699] Added support for downloading DB from S3. #2256
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit settings.
Walkthrough
Adds S3-based database upload/download: new environment variables, installer support for an S3 source, two AWS SigV4 Bash scripts (GET/PUT), BATS unit tests and manual scripts, and dispatch in download-db.sh to the S3 download script when configured.
Sequence Diagram(s)
sequenceDiagram
participant Caller as "try-s3-download.sh / CI"
participant Script as "download-db-s3.sh"
participant OpenSSL as "OpenSSL"
participant S3 as "Amazon S3"
participant FS as "Local FS"
Caller->>Script: invoke with VORTEX_DB_DOWNLOAD_S3_* env
Script->>Script: load .env(.local), validate inputs
Script->>OpenSSL: derive signing key & signature (HMAC steps)
OpenSSL-->>Script: return signature
Script->>S3: GET object with Authorization header
S3-->>Script: return DB object
Script->>FS: write to VORTEX_DB_DIR/VORTEX_DB_FILE
Script->>Caller: exit code / log success or failure
sequenceDiagram
participant Caller as "try-s3-upload.sh / CI"
participant Script as "upload-db-s3.sh"
participant FS as "Local FS"
participant OpenSSL as "OpenSSL"
participant S3 as "Amazon S3"
Caller->>Script: invoke with VORTEX_DB_UPLOAD_S3_* env
Script->>Script: load .env(.local), validate inputs
Script->>FS: read local DB file, compute payload SHA256
Script->>OpenSSL: compute AWS4 signing key & signature
OpenSSL-->>Script: return signature
Script->>S3: PUT object with Authorization + x-amz-* headers
S3-->>Script: confirm upload (HTTP 200/201)
Script->>Caller: exit code / log success or failure
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In @.env.local.example:
- Around line 75-78: The environment keys are out of alphabetical order and
trigger the dotenv-linter UnorderedKey rule; swap the two variables so
VORTEX_CONTAINER_REGISTRY_PASS appears before VORTEX_CONTAINER_REGISTRY_USER
(i.e., reorder the two lines in the .env example), ensuring the key order is
alphabetical and CI warnings go away.
In @.vortex/tests/bats/unit/download-db-s3.bats:
- Around line 122-208: The tests for download-db-s3 rely on
VORTEX_DB_DOWNLOAD_S3_* env vars but can be polluted by parent S3_* variables;
update each failing test (the four `@test` blocks) to explicitly unset or override
any S3_ACCESS_KEY/S3_SECRET_KEY/S3_BUCKET/S3_REGION before exporting the
VORTEX_* vars (and/or export them as empty) so inherited S3_* values cannot
satisfy the checks in scripts/vortex/download-db-s3.sh; reference the
VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY, VORTEX_DB_DOWNLOAD_S3_SECRET_KEY,
VORTEX_DB_DOWNLOAD_S3_BUCKET, VORTEX_DB_DOWNLOAD_S3_REGION variables and ensure
setup (run_steps "setup") runs with the sanitized environment.
In @.vortex/tests/bats/unit/upload-db-s3.bats:
- Around line 154-267: The failure tests for upload-db-s3 (tests named like
"upload-db-s3: Fail when VORTEX_DB_UPLOAD_S3_ACCESS_KEY is missing",
"...SECRET_KEY...", "...BUCKET...", "...REGION...", and the missing dump file
test) can be made flaky if a developer has alternative S3_* env vars exported;
before running scripts/vortex/upload-db-s3.sh in those tests, explicitly unset
any S3_* shortcut env vars (e.g. S3_ACCESS_KEY, S3_SECRET_KEY, S3_BUCKET,
S3_REGION or any other S3_* you may have in the environment) so the script only
sees the VORTEX_ variables being tested; update each test’s setup block to unset
these variables (or clear them with export S3_ACCESS_KEY="" etc.) before calling
run scripts/vortex/upload-db-s3.sh.
In `@scripts/vortex/download-db-s3.sh`:
- Around line 87-91: The canonical request string assigned to the variable
request is using literal backslash-n sequences which aren't converted to
newlines; update the construction of request (which concatenates request_type,
uri, headers, signed_headers, payload_hash) to produce real newlines—either
build it using printf (e.g., printf/printf -v with format placeholders to insert
\n between request_type, uri, headers, signed_headers, payload_hash) or use
shell $'...' quoting so the \n are interpreted; ensure you reference the same
variables (request_type, uri, headers, signed_headers, payload_hash) and produce
the exact newline-separated canonical request expected by the signature routine.
- Around line 94-103: The string_to_sign in create_signature contains
backslash-escaped newlines but is being passed as a format string to printf, so
escapes aren't interpreted and ShellCheck was disabled; change the printing to
use a literal format string that interprets escapes—replace the unsafe printf
"${string_to_sign}" with printf '%b' "$string_to_sign" (and apply the same
change where printf is used on the string_to_sign) so newlines are expanded
correctly and avoid SC2059.
- Around line 74-77: The functions use printf with the variable as the format
string which is unsafe; update hash_sha256 and hmac_sha256 to pass the variable
as an argument with an explicit format specifier (e.g., printf '%s' "$1" in
hash_sha256 and printf '%s' "$2" in hmac_sha256) so input containing % sequences
can't be interpreted as format directives, leaving the openssl and -macopt usage
(parameter "${1}" in hmac_sha256) unchanged.
- Around line 66-67: date_short and date_long are computed from two separate
date calls which can drift across a day boundary; capture a single UTC timestamp
once (e.g., into a variable like now_ts or now_iso) and derive both date_short
and date_long from that single value so they always match (reference the
existing variables date_short and date_long to replace their separate date calls
with formatting of the single captured timestamp).
In `@scripts/vortex/upload-db-s3.sh`:
- Around line 69-70: date_short and date_long are computed with separate date
calls which can race across a UTC midnight boundary; instead capture a single
timestamp first (e.g., store epoch or an ISO datetime) and then derive both
date_short and date_long from that single timestamp so both use the same moment;
update the code that sets date_short and date_long (referencing the existing
date_short and date_long variables) to format from that one captured timestamp.
- Around line 121-131: The curl invocation for uploading to S3 suppresses errors
and never checks the result: modify the curl call that uses "${request_type}",
"${local_file}", and "${object_url}" to capture both curl's exit code and the
HTTP response code (e.g., via -w "%{http_code}" to a temp output), remove or
replace --silent with --silent --show-error (or drop --silent) so errors are
printed, redirect response body to a temporary file and parse/check the HTTP
status and/or S3 XML error before calling pass "Finished database dump upload to
S3.", and if curl exits non‑zero or the HTTP status is not 2xx, emit an error
(use your existing logging or exit non‑zero) including the response body and
signature details for debugging.
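The environment sanitisation suggested for the BATS tests above can be sketched as a small helper that a `setup()` could call before exporting the `VORTEX_*` variables. This is a bash-only sketch (it relies on `compgen`); the helper name and the "leaked" values are illustrative:

```shell
# Unset every inherited S3_* shortcut variable so tests only see VORTEX_* vars.
sanitize_s3_env() {
  local v
  for v in $(compgen -A variable | grep '^S3_' || true); do
    unset "${v}"
  done
}

# Simulate a developer environment with leaked shortcut variables.
export S3_ACCESS_KEY="leaked" S3_BUCKET="leaked-bucket"
sanitize_s3_env
echo "S3_ACCESS_KEY='${S3_ACCESS_KEY:-}' S3_BUCKET='${S3_BUCKET:-}'"
```

After the helper runs, any check in the scripts that falls back to `S3_*` values can no longer be satisfied by the parent shell's environment.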
  # The username to log into the container registry.
- VORTEX_CONTAINER_REGISTRY_USER=$DOCKER_USER
+ VORTEX_CONTAINER_REGISTRY_USER=
  # The password (token) to log into the container registry.
- VORTEX_CONTAINER_REGISTRY_PASS=$DOCKER_PASS
+ VORTEX_CONTAINER_REGISTRY_PASS=
Fix dotenv-linter key order to avoid CI warnings.
Reorder the container registry variables to satisfy the UnorderedKey rule.
♻️ Suggested reorder
-# The username to log into the container registry.
-VORTEX_CONTAINER_REGISTRY_USER=
-# The password (token) to log into the container registry.
-VORTEX_CONTAINER_REGISTRY_PASS=
+# The password (token) to log into the container registry.
+VORTEX_CONTAINER_REGISTRY_PASS=
+# The username to log into the container registry.
+VORTEX_CONTAINER_REGISTRY_USER=

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# The password (token) to log into the container registry.
VORTEX_CONTAINER_REGISTRY_PASS=
# The username to log into the container registry.
VORTEX_CONTAINER_REGISTRY_USER=
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 78-78: [UnorderedKey] The VORTEX_CONTAINER_REGISTRY_PASS key should go before the VORTEX_CONTAINER_REGISTRY_USER key (UnorderedKey)
date_short="$(date -u '+%Y%m%d')"
date_long="${date_short}T$(date -u '+%H%M%S')Z"
Potential time drift between date captures.
Same issue as the upload script: date_short and date_long are captured separately, risking mismatched dates at day boundaries.
🛡️ Proposed fix to derive both from single timestamp
-date_short="$(date -u '+%Y%m%d')"
-date_long="${date_short}T$(date -u '+%H%M%S')Z"
+date_long="$(date -u '+%Y%m%dT%H%M%SZ')"
+date_short="${date_long:0:8}"
# shellcheck disable=SC2059
hash_sha256() { printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
# shellcheck disable=SC2059
hmac_sha256() { printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }
Unsafe use of printf with variable content.
Using printf "${1}" is unsafe when the input contains format specifiers (e.g., %s, %d). If the file path or data contains such characters, it could cause unexpected behavior or errors.
🛡️ Proposed fix to use format specifier
# shellcheck disable=SC2059
-hash_sha256() { printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
+hash_sha256() { printf '%s' "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
# shellcheck disable=SC2059
-hmac_sha256() { printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }
+hmac_sha256() { printf '%s' "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }
request="${request_type}
${uri}\n
${headers}\n
${signed_headers}
${payload_hash}"
Canonical request newlines are malformed.
The canonical request uses literal \n in strings but doesn't use printf with -e or $'...' syntax, so the backslash-n sequences won't be interpreted as newlines. This will produce an invalid canonical request and cause signature verification failures.
🐛 Proposed fix for canonical request construction
-request="${request_type}
-${uri}\n
-${headers}\n
+request="${request_type}
+${uri}
+
+${headers}
+
${signed_headers}
${payload_hash}"

Or use $'...' syntax with explicit newlines:
-request="${request_type}
-${uri}\n
-${headers}\n
-${signed_headers}
-${payload_hash}"
+request="${request_type}"$'\n'"${uri}"$'\n'$'\n'"${headers}"$'\n'$'\n'"${signed_headers}"$'\n'"${payload_hash}"
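The `$'\n'` approach can be sketched standalone. The header and hash values below are placeholders, not real SigV4 inputs; the point is only that the joined string contains real newline characters rather than literal backslash-n:

```shell
request_type="GET"
uri="/db.sql"
headers="host:bucket.example"   # placeholder canonical header
signed_headers="host"
payload_hash="PLACEHOLDER-HASH" # placeholder, not a real SHA-256

# $'\n' is interpreted by the shell, embedding real newlines.
request="${request_type}"$'\n'"${uri}"$'\n'"${headers}"$'\n'"${signed_headers}"$'\n'"${payload_hash}"

# Five fields joined by four newlines -> five lines when printed.
line_count="$(printf '%s\n' "${request}" | wc -l | tr -d '[:space:]')"
echo "${line_count}"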
# shellcheck disable=SC2059
create_signature() {
  string_to_sign="${auth_type}\n${date_long}\n${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request\n$(hash_sha256 "${request}")"
  date_key=$(hmac_sha256 key:"AWS4${VORTEX_DB_DOWNLOAD_S3_SECRET_KEY}" "${date_short}")
  region_key=$(hmac_sha256 hexkey:"${date_key}" "${VORTEX_DB_DOWNLOAD_S3_REGION}")
  service_key=$(hmac_sha256 hexkey:"${region_key}" "${service}")
  signing_key=$(hmac_sha256 hexkey:"${service_key}" "aws4_request")

  printf "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
}
Same printf format issue in create_signature function.
Line 96 uses printf with escape sequences (\n) that require -e flag or $'...' syntax to be interpreted correctly. The shellcheck disable=SC2059 suggests awareness of the issue, but the current implementation may not produce correct signatures.
🐛 Proposed fix for string_to_sign construction
# shellcheck disable=SC2059
create_signature() {
- string_to_sign="${auth_type}\n${date_long}\n${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request\n$(hash_sha256 "${request}")"
+ string_to_sign="${auth_type}"$'\n'"${date_long}"$'\n'"${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request"$'\n'"$(hash_sha256 "${request}")"
  date_key=$(hmac_sha256 key:"AWS4${VORTEX_DB_DOWNLOAD_S3_SECRET_KEY}" "${date_short}")

And update line 102 similarly:
- printf "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
+ printf '%s' "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
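The key-derivation chain that `create_signature` performs can be sketched standalone with `openssl`, mirroring the script's `hmac_sha256` helper. The credential, date, region, and service values below are fake, chosen only to exercise the chain:

```shell
# Same helper shape as the script: HMAC-SHA256 via openssl, keep only the hex.
hmac_sha256() { printf '%s' "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }

secret_key="EXAMPLEKEY"    # fake credential for illustration
date_short="20240101"
region="ap-southeast-2"
service="s3"

# AWS SigV4 derives the signing key in four chained HMAC steps; the first
# uses the raw key (key:), subsequent steps use hex output (hexkey:).
date_key=$(hmac_sha256 key:"AWS4${secret_key}" "${date_short}")
region_key=$(hmac_sha256 hexkey:"${date_key}" "${region}")
service_key=$(hmac_sha256 hexkey:"${region_key}" "${service}")
signing_key=$(hmac_sha256 hexkey:"${service_key}" "aws4_request")

echo "${signing_key}"
```

Each step produces a 64-character hex digest, and the final `signing_key` is what signs the string-to-sign via `-macopt hexkey:`.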
date_short=$(date -u +'%Y%m%d')
date_long=$(date -u +'%Y%m%dT%H%M%SZ')
Potential time drift between date captures.
date_short and date_long are captured in separate date calls. If the script runs near a day boundary (e.g., 23:59:59 UTC), these could end up with mismatched dates, causing signature validation failures.
🛡️ Proposed fix to capture both from single timestamp
-date_short=$(date -u +'%Y%m%d')
-date_long=$(date -u +'%Y%m%dT%H%M%SZ')
+date_long=$(date -u +'%Y%m%dT%H%M%SZ')
+date_short="${date_long:0:8}"
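A minimal sketch of deriving both values from a single timestamp, so they can never straddle a UTC midnight boundary:

```shell
# One date call; the short form is the first 8 characters (YYYYMMDD).
date_long="$(date -u '+%Y%m%dT%H%M%SZ')"
date_short="${date_long:0:8}"

echo "${date_short} ${date_long}"
```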
curl --silent --location --proto-redir =https --request "${request_type}" --upload-file "${local_file}" \
  --header "Content-Type: ${content_type}" \
  --header "Host: ${VORTEX_DB_UPLOAD_S3_BUCKET}${base_url}" \
  --header "X-Amz-Content-SHA256: ${payload_hash}" \
  --header "X-Amz-Date: ${date_long}" \
  --header "X-Amz-Server-Side-Encryption: AES256" \
  --header "X-Amz-Storage-Class: ${VORTEX_DB_UPLOAD_S3_STORAGE_CLASS}" \
  --header "Authorization: ${auth_type} Credential=${VORTEX_DB_UPLOAD_S3_ACCESS_KEY}/${date_short}/${VORTEX_DB_UPLOAD_S3_REGION}/${service}/aws4_request, SignedHeaders=${header_list}, Signature=${signature}" \
  "${object_url}"

pass "Finished database dump upload to S3."
Missing error handling for curl upload response.
The script uses --silent which suppresses error output, and there's no check for curl's exit code or HTTP response status before declaring success. S3 PUT operations can fail silently (e.g., permission denied, bucket not found) and return XML error responses.
🔧 Proposed fix to handle curl errors
-curl --silent --location --proto-redir =https --request "${request_type}" --upload-file "${local_file}" \
+response=$(curl --silent --location --proto-redir =https --fail --request "${request_type}" --upload-file "${local_file}" \
--header "Content-Type: ${content_type}" \
--header "Host: ${VORTEX_DB_UPLOAD_S3_BUCKET}${base_url}" \
--header "X-Amz-Content-SHA256: ${payload_hash}" \
--header "X-Amz-Date: ${date_long}" \
--header "X-Amz-Server-Side-Encryption: AES256" \
--header "X-Amz-Storage-Class: ${VORTEX_DB_UPLOAD_S3_STORAGE_CLASS}" \
--header "Authorization: ${auth_type} Credential=${VORTEX_DB_UPLOAD_S3_ACCESS_KEY}/${date_short}/${VORTEX_DB_UPLOAD_S3_REGION}/${service}/aws4_request, SignedHeaders=${header_list}, Signature=${signature}" \
- "${object_url}"
+ --write-out "%{http_code}" \
+ "${object_url}") || { fail "Upload failed with curl error."; exit 1; }
+
+http_code="${response: -3}"
+if [ "${http_code}" != "200" ]; then
+ fail "Upload failed with HTTP status ${http_code}."
+ exit 1
+fi
pass "Finished database dump upload to S3."
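One way to implement the suggested check, sketched as a hypothetical wrapper function rather than the PR's actual code. The curl flags (`--write-out '%{http_code}'`, `--output`) are real curl options; the function name and argument layout are assumptions:

```shell
# Return 0 only for a 2xx HTTP status string.
is_2xx() { case "${1}" in 2[0-9][0-9]) return 0 ;; *) return 1 ;; esac; }

# Hypothetical wrapper: upload a file and verify both the curl exit code
# and the HTTP status, keeping the response body for diagnostics.
upload_db_to_s3() {
  local local_file="${1}" object_url="${2}" body http_code
  body="$(mktemp)"
  if ! http_code="$(curl --silent --show-error --location --request PUT \
      --upload-file "${local_file}" --write-out '%{http_code}' \
      --output "${body}" "${object_url}")"; then
    rm -f "${body}"
    echo "Upload failed with a curl error." >&2
    return 1
  fi
  if ! is_2xx "${http_code}"; then
    echo "Upload failed with HTTP ${http_code}; response body follows:" >&2
    cat "${body}" >&2
    rm -f "${body}"
    return 1
  fi
  rm -f "${body}"
}
```

The status check is separated into `is_2xx` so that S3's occasional 200-with-error-XML cases could also be handled there if needed.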
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
##             main    #2256      +/-   ##
==========================================
- Coverage   76.95%   76.44%   -0.51%
==========================================
  Files         112      107       -5
  Lines        5870     5829      -41
  Branches       44        0      -44
==========================================
- Hits         4517     4456      -61
- Misses       1353     1373      +20

☔ View full report in Codecov by Sentry.
Force-pushed 9bed58c to c508f81.

Force-pushed c508f81 to 31dd014.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@scripts/vortex/download-db-s3.sh`:
- Around line 111-116: The curl invocation using object_url and writing to
"${VORTEX_DB_DIR}/${VORTEX_DB_FILE}" can fail silently despite -f and rely on
set -e; add explicit error handling immediately after the curl command: capture
its exit status, on non-zero print a clear diagnostic including object_url,
auth_header/date_long/payload_hash context (but not secret values), and curl
stderr, remove any partial "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}" file, and exit
with the curl exit code; ensure the logic references the same variables
(object_url, auth_header, content_type, payload_hash, date_long, VORTEX_DB_DIR,
VORTEX_DB_FILE) so failures are logged and cleaned up deterministically.
In `@scripts/vortex/download-db.sh`:
- Around line 78-80: Revert the S3 routing change from
scripts/vortex/download-db.sh so core script remains untouched; remove the
conditional that checks VORTEX_DB_DOWNLOAD_SOURCE and calling
./scripts/vortex/download-db-s3.sh. Instead create a custom wrapper in
scripts/custom/ (e.g., scripts/custom/download-db.sh) that inspects the
environment variable VORTEX_DB_DOWNLOAD_SOURCE and routes to either
./scripts/vortex/download-db.sh or ./scripts/vortex/download-db-s3.sh
accordingly, leaving the original scripts/vortex/* files unchanged.
In `@scripts/vortex/upload-db-s3.sh`:
- Around line 1-11: Move the custom upload-db-s3.sh script out of
scripts/vortex/ into scripts/custom/ and update any callers to reference the new
path; preserve the shebang (#!/usr/bin/env bash) and the environment-loading
line (the mktemp/export/set -a/.env/.env.local sequence) exactly as-is to keep
behavior identical, and ensure any CI/job configs, Makefiles, or other scripts
that invoked scripts/vortex/upload-db-s3.sh now invoke
scripts/custom/upload-db-s3.sh instead.
curl "${object_url}" \
  -H "Authorization: ${auth_header}" \
  -H "content-type: ${content_type}" \
  -H "x-amz-content-sha256: ${payload_hash}" \
  -H "x-amz-date: ${date_long}" \
  -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"
🧹 Nitpick | 🔵 Trivial
Missing error handling for curl failures.
The script exits with the success message even if curl fails silently (without -f triggering an exit). While -f is used, if curl returns a non-zero exit code, the script will terminate due to set -e, but the error won't be caught gracefully.
Consider adding explicit error handling to provide better diagnostics:
🛡️ Suggested improvement
-curl "${object_url}" \
+if ! curl "${object_url}" \
-H "Authorization: ${auth_header}" \
-H "content-type: ${content_type}" \
-H "x-amz-content-sha256: ${payload_hash}" \
-H "x-amz-date: ${date_long}" \
- -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"
+ -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"; then
+ fail "Failed to download database dump from S3."
+ exit 1
+fi
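The cleanup-on-failure idea can be sketched generically. `download_with_cleanup` is a hypothetical helper, exercised here with `true`/`false` stand-ins instead of a real `curl` call:

```shell
# Run a download command; if it fails, remove the (possibly partial) target.
download_with_cleanup() {
  local target="${1}"; shift
  if ! "$@"; then
    rm -f "${target}"
    echo "Download failed; removed partial file ${target}" >&2
    return 1
  fi
}

tmp="$(mktemp)"

# Success case: the target file is left in place.
download_with_cleanup "${tmp}" true
kept=$([ -f "${tmp}" ] && echo yes || echo no)

# Failure case: the partial target is removed.
download_with_cleanup "${tmp}" false || true
removed=$([ -f "${tmp}" ] && echo no || echo yes)

echo "${kept} ${removed}"
```

In the real script, the `curl ... -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"` invocation would be passed in place of `true`/`false`.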
if [ "${VORTEX_DB_DOWNLOAD_SOURCE}" = "s3" ]; then
  ./scripts/vortex/download-db-s3.sh
fi
Move this routing change to scripts/custom/.
Core scripts in scripts/vortex/ should remain untouched; add the S3 routing in a custom wrapper instead.
As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.
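The suggested `scripts/custom/` wrapper can be sketched with its routing reduced to a function so the decision logic is testable on its own. The function name is illustrative; the two script paths come from the review above:

```shell
# Decide which downloader to run without touching scripts/vortex/ core files.
resolve_downloader() {
  if [ "${VORTEX_DB_DOWNLOAD_SOURCE:-}" = "s3" ]; then
    echo "./scripts/custom/download-db-s3.sh"
  else
    echo "./scripts/vortex/download-db.sh"
  fi
}

VORTEX_DB_DOWNLOAD_SOURCE="s3"
s3_path="$(resolve_downloader)"

VORTEX_DB_DOWNLOAD_SOURCE="curl"
default_path="$(resolve_downloader)"

echo "${s3_path} ${default_path}"
```

A real wrapper would `exec "$(resolve_downloader)"` so callers keep invoking a single entry point regardless of the configured source.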
#!/usr/bin/env bash
##
# Upload DB dump to S3.
#
# Uses AWS Signature Version 4 with curl (no AWS CLI required).
#
# IMPORTANT! This script runs outside the container on the host system.
#
# shellcheck disable=SC1090,SC1091

t=$(mktemp) && export -p >"${t}" && set -a && . ./.env && if [ -f ./.env.local ]; then . ./.env.local; fi && set +a && . "${t}" && rm "${t}" && unset t
Move this script out of scripts/vortex/ into scripts/custom/.
Custom additions must live under scripts/custom/; update callers accordingly.
As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.
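The environment-loading one-liner that must be preserved has a subtle property worth noting: pre-existing exported variables win over `.env` values, because the saved environment is re-sourced last. A sketch with throwaway files (the variable names are illustrative):

```shell
# Work in a scratch directory so ./.env resolves as the script expects.
workdir="$(mktemp -d)"
cd "${workdir}" || exit 1
printf 'FOO=from_env_file\nBAR=from_env_file\n' > .env

export FOO="from_environment"  # pre-existing export; should survive .env

# The pattern from the script, verbatim: snapshot exports, source .env
# (and .env.local if present) with auto-export, then restore the snapshot.
t=$(mktemp) && export -p >"${t}" && set -a && . ./.env && if [ -f ./.env.local ]; then . ./.env.local; fi && set +a && . "${t}" && rm "${t}" && unset t

echo "${FOO} ${BAR}"
```

`FOO` keeps its exported value while `BAR`, absent from the snapshot, takes the `.env` value; this is why the line must be kept intact when the script moves.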
Force-pushed 31dd014 to 4cb6b4b.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/vortex/download-db-s3.sh`:
- Around line 1-11: This script (scripts/vortex/download-db-s3.sh) must be
relocated to scripts/custom/download-db-s3.sh and all callers updated to
reference the new path; move the file preserving its shebang and executable bit,
then search the repo for any invocation of scripts/vortex/download-db-s3.sh (CI
jobs, other scripts, docs) and update those to scripts/custom/download-db-s3.sh,
ensuring the .env sourcing line and relative path semantics (./.env and
./.env.local) remain unchanged after the move.
#!/usr/bin/env bash
##
# Download DB dump from S3.
#
# Uses AWS Signature Version 4 with curl (no AWS CLI required).
#
# IMPORTANT! This script runs outside the container on the host system.
#
# shellcheck disable=SC1090,SC1091

t=$(mktemp) && export -p >"${t}" && set -a && . ./.env && if [ -f ./.env.local ]; then . ./.env.local; fi && set +a && . "${t}" && rm "${t}" && unset t
Move this custom script to scripts/custom/.
New custom scripts should not live under scripts/vortex/. Update callers to point to the custom location.
As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.
Force-pushed 4cb6b4b to 5ac9960.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.vortex/tests/manual/try-s3-download.sh:
- Around line 65-71: The script invocation currently calls
./scripts/vortex/download-db-s3.sh but the downloader has been moved to
scripts/custom/ per guidelines; update the call in the try-s3-download.sh
snippet to invoke ./scripts/custom/download-db-s3.sh and ensure the exported env
vars (VORTEX_DB_DOWNLOAD_S3_*) remain the same so the custom script receives
them; verify there are no other references to scripts/vortex/download-db-s3.sh
in this file and adjust them to scripts/custom/download-db-s3.sh.
In @.vortex/tests/manual/try-s3-upload.sh:
- Around line 64-72: The test currently invokes the uploader at
./scripts/vortex/upload-db-s3.sh; update the invocation to point to the custom
script location (e.g., ./scripts/custom/upload-db-s3.sh) so it uses the approved
custom-script path, and ensure any environment exports (VORTEX_DB_UPLOAD_S3_*)
remain unchanged; modify the call in .vortex/tests/manual/try-s3-upload.sh to
reference the new script path (upload-db-s3.sh) in scripts/custom/ instead of
scripts/vortex/.
export VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY="${S3_ACCESS_KEY}"
export VORTEX_DB_DOWNLOAD_S3_SECRET_KEY="${S3_SECRET_KEY}"
export VORTEX_DB_DOWNLOAD_S3_BUCKET="${S3_BUCKET}"
export VORTEX_DB_DOWNLOAD_S3_REGION="${S3_REGION}"
export VORTEX_DB_DOWNLOAD_S3_PREFIX="${S3_PREFIX}"

./scripts/vortex/download-db-s3.sh
🛠️ Refactor suggestion | 🟠 Major
Align invocation with the required custom-script location.
If the downloader is moved to scripts/custom/, update this path to match.
Suggested update
-./scripts/vortex/download-db-s3.sh
+./scripts/custom/download-db-s3.sh
cd "${PROJECT_ROOT}" || exit 1

export VORTEX_DB_UPLOAD_S3_ACCESS_KEY="${S3_ACCESS_KEY}"
export VORTEX_DB_UPLOAD_S3_SECRET_KEY="${S3_SECRET_KEY}"
export VORTEX_DB_UPLOAD_S3_BUCKET="${S3_BUCKET}"
export VORTEX_DB_UPLOAD_S3_REGION="${S3_REGION}"
export VORTEX_DB_UPLOAD_S3_PREFIX="${S3_PREFIX}"

./scripts/vortex/upload-db-s3.sh
🛠️ Refactor suggestion | 🟠 Major
Align invocation with the required custom-script location.
If the uploader is moved to scripts/custom/, update this path accordingly.
Suggested update
-./scripts/vortex/upload-db-s3.sh
+./scripts/custom/upload-db-s3.sh
Closes #699
Summary by CodeRabbit
New Features
Tests
Chores