
Conversation


@AlexSkrypnyk AlexSkrypnyk commented Feb 8, 2026

Closes #699

Summary by CodeRabbit

  • New Features

    • S3 support for database upload/download and an installer option to select S3 as the DB source.
  • Tests

    • Added unit, functional, and manual tests covering S3 upload/download flows and installer discovery/processing.
  • Chores

    • Updated environment examples with S3 credential, bucket, and region variables, and cleared the default container registry credentials.


coderabbitai bot commented Feb 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds S3-based database upload/download: new environment variables, installer support for an S3 source, two AWS Signature Version 4 Bash scripts (GET/PUT), BATS unit tests, and manual test scripts; download-db.sh now dispatches to the S3 download script when configured.

Changes

Cohort / File(s) Summary
Environment
/.env, /.env.local.example
Added S3-related env vars (VORTEX_DB_DOWNLOAD_S3_BUCKET, VORTEX_DB_DOWNLOAD_S3_REGION, VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY, VORTEX_DB_DOWNLOAD_S3_SECRET_KEY); cleared the default container registry credentials in .env.local.example.
Installer & Tests
.vortex/installer/src/Prompts/Handlers/DatabaseDownloadSource.php, .vortex/installer/tests/.../DatabaseDownloadSourceHandlerProcessTest.php, .vortex/installer/tests/.../DatabaseDownloadSourceHandlerDiscoveryTest.php
Registered s3 as a database download source constant/option and added corresponding discovery/process test cases.
S3 Scripts
scripts/vortex/download-db-s3.sh, scripts/vortex/upload-db-s3.sh
Added standalone upload/download scripts implementing AWS Signature Version 4 (openssl + curl), with env loading, validation, signing, upload/download via HTTP, and logging.
Integration
scripts/vortex/download-db.sh
Added branch to invoke download-db-s3.sh when VORTEX_DB_DOWNLOAD_SOURCE is s3, preserving existing flow.
BATS Tests
.vortex/tests/bats/unit/download-db-s3.bats, .vortex/tests/bats/unit/upload-db-s3.bats
New unit tests covering success paths, defaults/fallbacks, prefix normalization, and missing-credential failures for S3 upload/download scripts.
Manual scripts
.vortex/tests/manual/try-s3-download.sh, .vortex/tests/manual/try-s3-upload.sh
Added manual helper scripts to exercise S3 download/upload flows with environment validation and exported VORTEX_* variables.
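As a concrete illustration of the Environment cohort, the new variables could be populated like this in a local .env.local (values below are placeholders, not real credentials; the committed examples ship them empty):

```shell
# Hypothetical local values for the new S3 download variables.
# The committed .env / .env.local.example keep these empty.
export VORTEX_DB_DOWNLOAD_S3_BUCKET="example-db-dumps"
export VORTEX_DB_DOWNLOAD_S3_REGION="ap-southeast-2"
export VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY="AKIAEXAMPLEKEY"
export VORTEX_DB_DOWNLOAD_S3_SECRET_KEY="example-secret"
```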

Sequence Diagram(s)

sequenceDiagram
    participant Caller as "try-s3-download.sh / CI"
    participant Script as "download-db-s3.sh"
    participant OpenSSL as "OpenSSL"
    participant S3 as "Amazon S3"
    participant FS as "Local FS"

    Caller->>Script: invoke with VORTEX_DB_DOWNLOAD_S3_* env
    Script->>Script: load .env(.local), validate inputs
    Script->>OpenSSL: derive signing key & signature (HMAC steps)
    OpenSSL-->>Script: return signature
    Script->>S3: GET object with Authorization header
    S3-->>Script: return DB object
    Script->>FS: write to VORTEX_DB_DIR/VORTEX_DB_FILE
    Script->>Caller: exit code / log success or failure
sequenceDiagram
    participant Caller as "try-s3-upload.sh / CI"
    participant Script as "upload-db-s3.sh"
    participant FS as "Local FS"
    participant OpenSSL as "OpenSSL"
    participant S3 as "Amazon S3"

    Caller->>Script: invoke with VORTEX_DB_UPLOAD_S3_* env
    Script->>Script: load .env(.local), validate inputs
    Script->>FS: read local DB file, compute payload SHA256
    Script->>OpenSSL: compute AWS4 signing key & signature
    OpenSSL-->>Script: return signature
    Script->>S3: PUT object with Authorization + x-amz-* headers
    S3-->>Script: confirm upload (HTTP 200/201)
    Script->>Caller: exit code / log success or failure
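The "derive signing key & signature" step shown in both diagrams is the standard AWS SigV4 HMAC chain. A minimal sketch using only openssl (all values are illustrative placeholders, not real credentials):

```shell
# AWS SigV4 signing-key derivation with openssl only (no AWS CLI).
# All values below are illustrative placeholders.
hmac_sha256() {
  # $1: openssl key spec (key:... or hexkey:...), $2: data to sign.
  printf '%s' "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'
}

secret_key="examplesecretkey"
region="ap-southeast-2"
service="s3"
date_short="20260208"

# Each step keys the next HMAC with the previous hex digest.
date_key=$(hmac_sha256 key:"AWS4${secret_key}" "${date_short}")
region_key=$(hmac_sha256 hexkey:"${date_key}" "${region}")
service_key=$(hmac_sha256 hexkey:"${region_key}" "${service}")
signing_key=$(hmac_sha256 hexkey:"${service_key}" "aws4_request")
```

The resulting signing_key then keys the final HMAC over the string to sign, which is what the scripts place in the Authorization header.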

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped to the bucket with a tiny shell sigh,

OpenSSL hummed keys as the bytes fluttered by,
Curl fetched the dump and the folder grew bright,
Uploads and downloads danced through the night,
The rabbit twitched whiskers — data tucked in tight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 38.89%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title directly and clearly describes the main change: adding support for downloading databases from S3.
Linked Issues check ✅ Passed The PR implements the core requirement from issue #699 to add a new database download method using AWS S3 as the source.
Out of Scope Changes check ✅ Passed All changes are directly related to S3 download functionality, including environment variables, handlers, tests, and scripts.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In @.env.local.example:
- Around line 75-78: The environment keys are out of alphabetical order and
trigger the dotenv-linter UnorderedKey rule; swap the two variables so
VORTEX_CONTAINER_REGISTRY_PASS appears before VORTEX_CONTAINER_REGISTRY_USER
(i.e., reorder the two lines in the .env example), ensuring the key order is
alphabetical and CI warnings go away.

In @.vortex/tests/bats/unit/download-db-s3.bats:
- Around line 122-208: The tests for download-db-s3 rely on
VORTEX_DB_DOWNLOAD_S3_* env vars but can be polluted by parent S3_* variables;
update each failing test (the four `@test` blocks) to explicitly unset or override
any S3_ACCESS_KEY/S3_SECRET_KEY/S3_BUCKET/S3_REGION before exporting the
VORTEX_* vars (and/or export them as empty) so inherited S3_* values cannot
satisfy the checks in scripts/vortex/download-db-s3.sh; reference the
VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY, VORTEX_DB_DOWNLOAD_S3_SECRET_KEY,
VORTEX_DB_DOWNLOAD_S3_BUCKET, VORTEX_DB_DOWNLOAD_S3_REGION variables and ensure
setup (run_steps "setup") runs with the sanitized environment.

In @.vortex/tests/bats/unit/upload-db-s3.bats:
- Around line 154-267: The failure tests for upload-db-s3 (tests named like
"upload-db-s3: Fail when VORTEX_DB_UPLOAD_S3_ACCESS_KEY is missing",
"...SECRET_KEY...", "...BUCKET...", "...REGION...", and the missing dump file
test) can be made flaky if a developer has alternative S3_* env vars exported;
before running scripts/vortex/upload-db-s3.sh in those tests, explicitly unset
any S3_* shortcut env vars (e.g. S3_ACCESS_KEY, S3_SECRET_KEY, S3_BUCKET,
S3_REGION or any other S3_* you may have in the environment) so the script only
sees the VORTEX_ variables being tested; update each test’s setup block to unset
these variables (or clear them with export S3_ACCESS_KEY="" etc.) before calling
run scripts/vortex/upload-db-s3.sh.

In `@scripts/vortex/download-db-s3.sh`:
- Around line 87-91: The canonical request string assigned to the variable
request is using literal backslash-n sequences which aren't converted to
newlines; update the construction of request (which concatenates request_type,
uri, headers, signed_headers, payload_hash) to produce real newlines—either
build it using printf (e.g., printf/printf -v with format placeholders to insert
\n between request_type, uri, headers, signed_headers, payload_hash) or use
shell $'...' quoting so the \n are interpreted; ensure you reference the same
variables (request_type, uri, headers, signed_headers, payload_hash) and produce
the exact newline-separated canonical request expected by the signature routine.
- Around line 94-103: The string_to_sign in create_signature contains
backslash-escaped newlines but is being passed as a format string to printf, so
escapes aren't interpreted and ShellCheck was disabled; change the printing to
use a literal format string that interprets escapes—replace the unsafe printf
"${string_to_sign}" with printf '%b' "$string_to_sign" (and apply the same
change where printf is used on the string_to_sign) so newlines are expanded
correctly and avoid SC2059.
- Around line 74-77: The functions use printf with the variable as the format
string which is unsafe; update hash_sha256 and hmac_sha256 to pass the variable
as an argument with an explicit format specifier (e.g., printf '%s' "$1" in
hash_sha256 and printf '%s' "$2" in hmac_sha256) so input containing % sequences
can't be interpreted as format directives, leaving the openssl and -macopt usage
(parameter "${1}" in hmac_sha256) unchanged.
- Around line 66-67: date_short and date_long are computed from two separate
date calls which can drift across a day boundary; capture a single UTC timestamp
once (e.g., into a variable like now_ts or now_iso) and derive both date_short
and date_long from that single value so they always match (reference the
existing variables date_short and date_long to replace their separate date calls
with formatting of the single captured timestamp).

In `@scripts/vortex/upload-db-s3.sh`:
- Around line 69-70: date_short and date_long are computed with separate date
calls which can race across a UTC midnight boundary; instead capture a single
timestamp first (e.g., store epoch or an ISO datetime) and then derive both
date_short and date_long from that single timestamp so both use the same moment;
update the code that sets date_short and date_long (referencing the existing
date_short and date_long variables) to format from that one captured timestamp.
- Around line 121-131: The curl invocation for uploading to S3 suppresses errors
and never checks the result: modify the curl call that uses "${request_type}",
"${local_file}", and "${object_url}" to capture both curl's exit code and the
HTTP response code (e.g., via -w "%{http_code}" to a temp output), remove or
replace --silent with --silent --show-error (or drop --silent) so errors are
printed, redirect response body to a temporary file and parse/check the HTTP
status and/or S3 XML error before calling pass "Finished database dump upload to
S3.", and if curl exits non‑zero or the HTTP status is not 2xx, emit an error
(use your existing logging or exit non‑zero) including the response body and
signature details for debugging.
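For the two BATS environment-pollution items above, the sanitized setup could look like this (the VORTEX_ values are hypothetical test fixtures):

```shell
# Clear any inherited shortcut variables so only the VORTEX_ variables
# under test are visible to the script being exercised.
for var in S3_ACCESS_KEY S3_SECRET_KEY S3_BUCKET S3_REGION; do
  unset "${var}"
done

export VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY="test-key"     # hypothetical
export VORTEX_DB_DOWNLOAD_S3_SECRET_KEY="test-secret"  # hypothetical
export VORTEX_DB_DOWNLOAD_S3_BUCKET="test-bucket"      # hypothetical
export VORTEX_DB_DOWNLOAD_S3_REGION="ap-southeast-2"   # hypothetical
```

Run inside each failing @test block (or a shared setup) before invoking the script, so inherited S3_* values cannot satisfy the credential checks.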

Comment on lines 75 to +78
# The username to log into the container registry.
VORTEX_CONTAINER_REGISTRY_USER=$DOCKER_USER
VORTEX_CONTAINER_REGISTRY_USER=
# The password (token) to log into the container registry.
VORTEX_CONTAINER_REGISTRY_PASS=$DOCKER_PASS
VORTEX_CONTAINER_REGISTRY_PASS=

⚠️ Potential issue | 🟡 Minor

Fix dotenv-linter key order to avoid CI warnings.
Reorder the container registry variables to satisfy the UnorderedKey rule.

♻️ Suggested reorder
-# The username to log into the container registry.
-VORTEX_CONTAINER_REGISTRY_USER=
-# The password (token) to log into the container registry.
-VORTEX_CONTAINER_REGISTRY_PASS=
+# The password (token) to log into the container registry.
+VORTEX_CONTAINER_REGISTRY_PASS=
+# The username to log into the container registry.
+VORTEX_CONTAINER_REGISTRY_USER=
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 78-78: [UnorderedKey] The VORTEX_CONTAINER_REGISTRY_PASS key should go before the VORTEX_CONTAINER_REGISTRY_USER key

(UnorderedKey)


Comment on lines +66 to +67
date_short="$(date -u '+%Y%m%d')"
date_long="${date_short}T$(date -u '+%H%M%S')Z"

⚠️ Potential issue | 🟡 Minor

Potential time drift between date captures.

Same issue as the upload script: date_short and date_long are captured separately, risking mismatched dates at day boundaries.

🛡️ Proposed fix to derive both from single timestamp
-date_short="$(date -u '+%Y%m%d')"
-date_long="${date_short}T$(date -u '+%H%M%S')Z"
+date_long="$(date -u '+%Y%m%dT%H%M%SZ')"
+date_short="${date_long:0:8}"
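The proposed single-timestamp fix can be demonstrated in isolation: one date call, with the short form sliced from the long form, so the two values can never disagree across a day boundary.

```shell
# Capture one UTC timestamp, then derive both formats from it.
date_long="$(date -u '+%Y%m%dT%H%M%SZ')"
date_short="${date_long:0:8}"
```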

Comment on lines +74 to +77
# shellcheck disable=SC2059
hash_sha256() { printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
# shellcheck disable=SC2059
hmac_sha256() { printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }

⚠️ Potential issue | 🟡 Minor

Unsafe use of printf with variable content.

Using printf "${1}" is unsafe when the input contains format specifiers (e.g., %s, %d). If the file path or data contains such characters, it could cause unexpected behavior or errors.

🛡️ Proposed fix to use format specifier
 # shellcheck disable=SC2059
-hash_sha256() { printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
+hash_sha256() { printf '%s' "${1}" | openssl dgst -sha256 | sed 's/^.* //'; }
 # shellcheck disable=SC2059
-hmac_sha256() { printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }
+hmac_sha256() { printf '%s' "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'; }
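The hazard is easy to reproduce: when the data contains a % sequence and is used as the printf format string, printf consumes it as a format directive, while an explicit '%s' format emits the data verbatim.

```shell
# Demonstration of why `printf "${var}"` is unsafe for arbitrary data.
data='100%s_complete'

unsafe=$(printf "${data}")    # '%s' is consumed as a format directive
safe=$(printf '%s' "${data}") # data is passed as an argument, emitted verbatim
```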

Comment on lines +87 to +91
request="${request_type}
${uri}\n
${headers}\n
${signed_headers}
${payload_hash}"

⚠️ Potential issue | 🔴 Critical

Canonical request newlines are malformed.

The canonical request embeds literal \n sequences inside a plain double-quoted string, where they are not interpreted as newlines (that would require $'...' quoting or explicit printf expansion). This produces an invalid canonical request and causes signature verification failures.

🐛 Proposed fix for canonical request construction
-request="${request_type}
-${uri}\n
-${headers}\n
+request="${request_type}
+${uri}
+
+${headers}
+
 ${signed_headers}
 ${payload_hash}"

Or use $'...' syntax with explicit newlines:

-request="${request_type}
-${uri}\n
-${headers}\n
-${signed_headers}
-${payload_hash}"
+request="${request_type}"$'\n'"${uri}"$'\n'$'\n'"${headers}"$'\n'$'\n'"${signed_headers}"$'\n'"${payload_hash}"

Comment on lines +94 to +103
# shellcheck disable=SC2059
create_signature() {
string_to_sign="${auth_type}\n${date_long}\n${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request\n$(hash_sha256 "${request}")"
date_key=$(hmac_sha256 key:"AWS4${VORTEX_DB_DOWNLOAD_S3_SECRET_KEY}" "${date_short}")
region_key=$(hmac_sha256 hexkey:"${date_key}" "${VORTEX_DB_DOWNLOAD_S3_REGION}")
service_key=$(hmac_sha256 hexkey:"${region_key}" "${service}")
signing_key=$(hmac_sha256 hexkey:"${service_key}" "aws4_request")

printf "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
}

⚠️ Potential issue | 🔴 Critical

Same printf format issue in create_signature function.

Line 96 builds string_to_sign with backslash-escaped newlines and then passes it to printf as the format string; relying on format-string expansion of data is exactly what SC2059 warns about. The shellcheck disable=SC2059 suggests awareness of the issue, but the current implementation may not produce correct signatures.

🐛 Proposed fix for string_to_sign construction
 # shellcheck disable=SC2059
 create_signature() {
-  string_to_sign="${auth_type}\n${date_long}\n${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request\n$(hash_sha256 "${request}")"
+  string_to_sign="${auth_type}"$'\n'"${date_long}"$'\n'"${date_short}/${VORTEX_DB_DOWNLOAD_S3_REGION}/${service}/aws4_request"$'\n'"$(hash_sha256 "${request}")"
   date_key=$(hmac_sha256 key:"AWS4${VORTEX_DB_DOWNLOAD_S3_SECRET_KEY}" "${date_short}")

And update line 102 similarly:

-  printf "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
+  printf '%s' "${string_to_sign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signing_key}" | sed 's/(stdin)= //' | sed 's/SHA2-256//'
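A small check of the $'...' form suggested above (placeholder values throughout): the \n become real newline characters at assignment time, so a later printf '%s' needs no escape expansion.

```shell
auth_type="AWS4-HMAC-SHA256"
date_long="20260208T044752Z"                     # placeholder
scope="20260208/ap-southeast-2/s3/aws4_request"  # placeholder
request_hash="deadbeef"                          # placeholder for hash_sha256 output

# $'\n' inserts literal newline characters between the components.
string_to_sign="${auth_type}"$'\n'"${date_long}"$'\n'"${scope}"$'\n'"${request_hash}"

# Three embedded newlines -> four lines when printed.
line_count=$(printf '%s\n' "${string_to_sign}" | wc -l)
```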

Comment on lines +69 to +70
date_short=$(date -u +'%Y%m%d')
date_long=$(date -u +'%Y%m%dT%H%M%SZ')

⚠️ Potential issue | 🟡 Minor

Potential time drift between date captures.

date_short and date_long are captured in separate date calls. If the script runs near a day boundary (e.g., 23:59:59 UTC), these could end up with mismatched dates, causing signature validation failures.

🛡️ Proposed fix to capture both from single timestamp
-date_short=$(date -u +'%Y%m%d')
-date_long=$(date -u +'%Y%m%dT%H%M%SZ')
+date_long=$(date -u +'%Y%m%dT%H%M%SZ')
+date_short="${date_long:0:8}"

Comment on lines +121 to +131
curl --silent --location --proto-redir =https --request "${request_type}" --upload-file "${local_file}" \
--header "Content-Type: ${content_type}" \
--header "Host: ${VORTEX_DB_UPLOAD_S3_BUCKET}${base_url}" \
--header "X-Amz-Content-SHA256: ${payload_hash}" \
--header "X-Amz-Date: ${date_long}" \
--header "X-Amz-Server-Side-Encryption: AES256" \
--header "X-Amz-Storage-Class: ${VORTEX_DB_UPLOAD_S3_STORAGE_CLASS}" \
--header "Authorization: ${auth_type} Credential=${VORTEX_DB_UPLOAD_S3_ACCESS_KEY}/${date_short}/${VORTEX_DB_UPLOAD_S3_REGION}/${service}/aws4_request, SignedHeaders=${header_list}, Signature=${signature}" \
"${object_url}"

pass "Finished database dump upload to S3."

⚠️ Potential issue | 🟠 Major

Missing error handling for curl upload response.

The script uses --silent which suppresses error output, and there's no check for curl's exit code or HTTP response status before declaring success. S3 PUT operations can fail silently (e.g., permission denied, bucket not found) and return XML error responses.

🔧 Proposed fix to handle curl errors
-curl --silent --location --proto-redir =https --request "${request_type}" --upload-file "${local_file}" \
+response=$(curl --silent --location --proto-redir =https --fail --request "${request_type}" --upload-file "${local_file}" \
   --header "Content-Type: ${content_type}" \
   --header "Host: ${VORTEX_DB_UPLOAD_S3_BUCKET}${base_url}" \
   --header "X-Amz-Content-SHA256: ${payload_hash}" \
   --header "X-Amz-Date: ${date_long}" \
   --header "X-Amz-Server-Side-Encryption: AES256" \
   --header "X-Amz-Storage-Class: ${VORTEX_DB_UPLOAD_S3_STORAGE_CLASS}" \
   --header "Authorization: ${auth_type} Credential=${VORTEX_DB_UPLOAD_S3_ACCESS_KEY}/${date_short}/${VORTEX_DB_UPLOAD_S3_REGION}/${service}/aws4_request, SignedHeaders=${header_list}, Signature=${signature}" \
-  "${object_url}"
+  --write-out "%{http_code}" \
+  "${object_url}") || { fail "Upload failed with curl error."; exit 1; }
+
+http_code="${response: -3}"
+if [ "${http_code}" != "200" ]; then
+  fail "Upload failed with HTTP status ${http_code}."
+  exit 1
+fi

 pass "Finished database dump upload to S3."
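The status-check half of this suggestion can be factored into a small helper; the curl wiring shown in comments is a sketch that assumes the script's existing request_type, local_file, object_url, and fail helpers:

```shell
# Return 0 for a 2xx status; otherwise print the captured response body
# (S3 returns an XML error document) to stderr and return 1.
check_http_status() {
  # $1: status from curl --write-out '%{http_code}'
  # $2: file holding the body from curl --output
  case "${1}" in
    2??) return 0 ;;
    *)
      echo "Upload failed with HTTP ${1}:" >&2
      cat "${2}" >&2
      return 1
      ;;
  esac
}

# Sketch of the wiring (hypothetical; mirrors the suggested fix above).
# --fail is deliberately omitted so the S3 XML error body is preserved:
#   body_file="$(mktemp)"
#   http_code=$(curl --silent --show-error --request "${request_type}" \
#     --upload-file "${local_file}" \
#     --write-out '%{http_code}' --output "${body_file}" \
#     "${object_url}") || fail "curl transport error"
#   check_http_status "${http_code}" "${body_file}" || exit 1
```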


@AlexSkrypnyk
Member Author

Code Coverage Report:
  2026-02-08 04:47:52

 Summary:
  Classes:  0.00% (0/1)
  Methods:  0.00% (0/2)
  Lines:   93.92% (170/181)



codecov bot commented Feb 8, 2026

Codecov Report

❌ Patch coverage is 82.20339% with 21 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.44%. Comparing base (78811df) to head (5ac9960).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
scripts/vortex/upload-db-s3.sh 75.38% 16 Missing ⚠️
scripts/vortex/download-db-s3.sh 91.83% 4 Missing ⚠️
scripts/vortex/download-db.sh 50.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2256      +/-   ##
==========================================
- Coverage   76.95%   76.44%   -0.51%     
==========================================
  Files         112      107       -5     
  Lines        5870     5829      -41     
  Branches       44        0      -44     
==========================================
- Hits         4517     4456      -61     
- Misses       1353     1373      +20     


@AlexSkrypnyk force-pushed the feature/699-download-db-s3 branch from 9bed58c to c508f81 (February 9, 2026 01:43)

@AlexSkrypnyk
Member Author

Code Coverage Report:
  2026-02-09 01:49:57

 Summary:
  Classes:  0.00% (0/1)
  Methods:  0.00% (0/2)
  Lines:   93.92% (170/181)


@AlexSkrypnyk force-pushed the feature/699-download-db-s3 branch from c508f81 to 31dd014 (February 9, 2026 02:08)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@scripts/vortex/download-db-s3.sh`:
- Around line 111-116: The curl invocation using object_url and writing to
"${VORTEX_DB_DIR}/${VORTEX_DB_FILE}" can fail silently despite -f and rely on
set -e; add explicit error handling immediately after the curl command: capture
its exit status, on non-zero print a clear diagnostic including object_url,
auth_header/date_long/payload_hash context (but not secret values), and curl
stderr, remove any partial "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}" file, and exit
with the curl exit code; ensure the logic references the same variables
(object_url, auth_header, content_type, payload_hash, date_long, VORTEX_DB_DIR,
VORTEX_DB_FILE) so failures are logged and cleaned up deterministically.

In `@scripts/vortex/download-db.sh`:
- Around line 78-80: Revert the S3 routing change from
scripts/vortex/download-db.sh so core script remains untouched; remove the
conditional that checks VORTEX_DB_DOWNLOAD_SOURCE and calling
./scripts/vortex/download-db-s3.sh. Instead create a custom wrapper in
scripts/custom/ (e.g., scripts/custom/download-db.sh) that inspects the
environment variable VORTEX_DB_DOWNLOAD_SOURCE and routes to either
./scripts/vortex/download-db.sh or ./scripts/vortex/download-db-s3.sh
accordingly, leaving the original scripts/vortex/* files unchanged.

In `@scripts/vortex/upload-db-s3.sh`:
- Around line 1-11: Move the custom upload-db-s3.sh script out of
scripts/vortex/ into scripts/custom/ and update any callers to reference the new
path; preserve the shebang (#!/usr/bin/env bash) and the environment-loading
line (the mktemp/export/set -a/.env/.env.local sequence) exactly as-is to keep
behavior identical, and ensure any CI/job configs, Makefiles, or other scripts
that invoked scripts/vortex/upload-db-s3.sh now invoke
scripts/custom/upload-db-s3.sh instead.

Comment on lines +111 to +116
curl "${object_url}" \
-H "Authorization: ${auth_header}" \
-H "content-type: ${content_type}" \
-H "x-amz-content-sha256: ${payload_hash}" \
-H "x-amz-date: ${date_long}" \
-f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"

🧹 Nitpick | 🔵 Trivial

Missing error handling for curl failures.

The script exits with the success message even if curl fails silently (without -f triggering an exit). While -f is used, if curl returns a non-zero exit code, the script will terminate due to set -e, but the error won't be caught gracefully.

Consider adding explicit error handling to provide better diagnostics:

🛡️ Suggested improvement
-curl "${object_url}" \
+if ! curl "${object_url}" \
   -H "Authorization: ${auth_header}" \
   -H "content-type: ${content_type}" \
   -H "x-amz-content-sha256: ${payload_hash}" \
   -H "x-amz-date: ${date_long}" \
-  -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"
+  -f -S -o "${VORTEX_DB_DIR}/${VORTEX_DB_FILE}"; then
+  fail "Failed to download database dump from S3."
+  exit 1
+fi

Comment on lines +78 to +80
if [ "${VORTEX_DB_DOWNLOAD_SOURCE}" = "s3" ]; then
./scripts/vortex/download-db-s3.sh
fi

⚠️ Potential issue | 🟠 Major

Move this routing change to scripts/custom/.
Core scripts in scripts/vortex/ should remain untouched; add the S3 routing in a custom wrapper instead.

As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.

🤖 Prompt for AI Agents
In `@scripts/vortex/download-db.sh` around lines 78 - 80, Revert the S3 routing
change from scripts/vortex/download-db.sh so core script remains untouched;
remove the conditional that checks VORTEX_DB_DOWNLOAD_SOURCE and calling
./scripts/vortex/download-db-s3.sh. Instead create a custom wrapper in
scripts/custom/ (e.g., scripts/custom/download-db.sh) that inspects the
environment variable VORTEX_DB_DOWNLOAD_SOURCE and routes to either
./scripts/vortex/download-db.sh or ./scripts/vortex/download-db-s3.sh
accordingly, leaving the original scripts/vortex/* files unchanged.
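The wrapper this prompt asks for could look like the sketch below. The routing is expressed as a function so the decision is easy to test; the script paths are taken from the suggestion above and the wrapper filename is hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical scripts/custom/download-db.sh wrapper: routes to the S3
# downloader without modifying core scripts under scripts/vortex/.

route_db_download() {
  # Prints the downloader that would be invoked for the configured source.
  if [ "${VORTEX_DB_DOWNLOAD_SOURCE:-}" = "s3" ]; then
    echo "./scripts/vortex/download-db-s3.sh"
  else
    echo "./scripts/vortex/download-db.sh"
  fi
}

# In the real wrapper this would be: exec "$(route_db_download)"
VORTEX_DB_DOWNLOAD_SOURCE=s3 route_db_download  # → ./scripts/vortex/download-db-s3.sh
```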

Comment on lines +1 to +11
#!/usr/bin/env bash
##
# Upload DB dump to S3.
#
# Uses AWS Signature Version 4 with curl (no AWS CLI required).
#
# IMPORTANT! This script runs outside the container on the host system.
#
# shellcheck disable=SC1090,SC1091

t=$(mktemp) && export -p >"${t}" && set -a && . ./.env && if [ -f ./.env.local ]; then . ./.env.local; fi && set +a && . "${t}" && rm "${t}" && unset t

⚠️ Potential issue | 🟠 Major

Move this script out of scripts/vortex/ into scripts/custom/.
Custom additions must live under scripts/custom/; update callers accordingly.

As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.

🤖 Prompt for AI Agents
In `@scripts/vortex/upload-db-s3.sh` around lines 1 - 11, Move the custom
upload-db-s3.sh script out of scripts/vortex/ into scripts/custom/ and update
any callers to reference the new path; preserve the shebang (#!/usr/bin/env
bash) and the environment-loading line (the mktemp/export/set -a/.env/.env.local
sequence) exactly as-is to keep behavior identical, and ensure any CI/job
configs, Makefiles, or other scripts that invoked scripts/vortex/upload-db-s3.sh
now invoke scripts/custom/upload-db-s3.sh instead.
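The env-loading one-liner at the top of the script is worth unpacking: it snapshots already-exported variables, sources .env (and .env.local if present) with auto-export, then re-sources the snapshot so values already set in the environment win over file values. A minimal demonstration of that precedence, with made-up variable names:

```shell
#!/usr/bin/env bash
# Demonstrates the snapshot/restore precedence trick: variables already
# exported in the shell override values from .env, while new variables
# from .env are kept. GREETING and ONLY_IN_DOTENV are demo names.

workdir=$(mktemp -d)
cd "${workdir}"
printf 'GREETING=from-dotenv\nONLY_IN_DOTENV=yes\n' >.env

export GREETING="from-host"  # pre-set value that must survive

t=$(mktemp)
export -p >"${t}"  # snapshot currently exported variables
set -a             # auto-export everything sourced next
. ./.env
set +a
. "${t}"           # restore the snapshot: pre-set values win
rm "${t}"

echo "${GREETING}"        # → from-host
echo "${ONLY_IN_DOTENV}"  # → yes
```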


@AlexSkrypnyk
Member Author

Code Coverage Report:
  2026-02-09 02:15:23

 Summary:
  Classes:  0.00% (0/1)
  Methods:  0.00% (0/2)
  Lines:   93.92% (170/181)

@AlexSkrypnyk AlexSkrypnyk force-pushed the feature/699-download-db-s3 branch from 31dd014 to 4cb6b4b Compare February 9, 2026 03:32
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@scripts/vortex/download-db-s3.sh`:
- Around line 1-11: This script (scripts/vortex/download-db-s3.sh) must be
relocated to scripts/custom/download-db-s3.sh and all callers updated to
reference the new path; move the file preserving its shebang and executable bit,
then search the repo for any invocation of scripts/vortex/download-db-s3.sh (CI
jobs, other scripts, docs) and update those to scripts/custom/download-db-s3.sh,
ensuring the .env sourcing line and relative path semantics (./.env and
./.env.local) remain unchanged after the move.

Comment on lines +1 to +11
#!/usr/bin/env bash
##
# Download DB dump from S3.
#
# Uses AWS Signature Version 4 with curl (no AWS CLI required).
#
# IMPORTANT! This script runs outside the container on the host system.
#
# shellcheck disable=SC1090,SC1091

t=$(mktemp) && export -p >"${t}" && set -a && . ./.env && if [ -f ./.env.local ]; then . ./.env.local; fi && set +a && . "${t}" && rm "${t}" && unset t

⚠️ Potential issue | 🟠 Major

Move this custom script to scripts/custom/.

New custom scripts should not live under scripts/vortex/. Update callers to point to the custom location.

As per coding guidelines, Never modify scripts/vortex/ - use scripts/custom/ for custom scripts.

🤖 Prompt for AI Agents
In `@scripts/vortex/download-db-s3.sh` around lines 1 - 11, This script
(scripts/vortex/download-db-s3.sh) must be relocated to
scripts/custom/download-db-s3.sh and all callers updated to reference the new
path; move the file preserving its shebang and executable bit, then search the
repo for any invocation of scripts/vortex/download-db-s3.sh (CI jobs, other
scripts, docs) and update those to scripts/custom/download-db-s3.sh, ensuring
the .env sourcing line and relative path semantics (./.env and ./.env.local)
remain unchanged after the move.


@AlexSkrypnyk AlexSkrypnyk force-pushed the feature/699-download-db-s3 branch from 4cb6b4b to 5ac9960 Compare February 9, 2026 03:50
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @.vortex/tests/manual/try-s3-download.sh:
- Around line 65-71: The script invocation currently calls
./scripts/vortex/download-db-s3.sh but the downloader has been moved to
scripts/custom/ per guidelines; update the call in the try-s3-download.sh
snippet to invoke ./scripts/custom/download-db-s3.sh and ensure the exported env
vars (VORTEX_DB_DOWNLOAD_S3_*) remain the same so the custom script receives
them; verify there are no other references to scripts/vortex/download-db-s3.sh
in this file and adjust them to scripts/custom/download-db-s3.sh.

In @.vortex/tests/manual/try-s3-upload.sh:
- Around line 64-72: The test currently invokes the uploader at
./scripts/vortex/upload-db-s3.sh; update the invocation to point to the custom
script location (e.g., ./scripts/custom/upload-db-s3.sh) so it uses the approved
custom-script path, and ensure any environment exports (VORTEX_DB_UPLOAD_S3_*)
remain unchanged; modify the call in .vortex/tests/manual/try-s3-upload.sh to
reference the new script path (upload-db-s3.sh) in scripts/custom/ instead of
scripts/vortex/.

Comment on lines +65 to +71
export VORTEX_DB_DOWNLOAD_S3_ACCESS_KEY="${S3_ACCESS_KEY}"
export VORTEX_DB_DOWNLOAD_S3_SECRET_KEY="${S3_SECRET_KEY}"
export VORTEX_DB_DOWNLOAD_S3_BUCKET="${S3_BUCKET}"
export VORTEX_DB_DOWNLOAD_S3_REGION="${S3_REGION}"
export VORTEX_DB_DOWNLOAD_S3_PREFIX="${S3_PREFIX}"

./scripts/vortex/download-db-s3.sh

🛠️ Refactor suggestion | 🟠 Major

Align invocation with the required custom-script location.
If the downloader is moved to scripts/custom/, update this path to match.

Suggested update
-./scripts/vortex/download-db-s3.sh
+./scripts/custom/download-db-s3.sh
As per coding guidelines, "Never modify `scripts/vortex/` - use `scripts/custom/` for custom scripts."
🤖 Prompt for AI Agents
In @.vortex/tests/manual/try-s3-download.sh around lines 65 - 71, The script
invocation currently calls ./scripts/vortex/download-db-s3.sh but the downloader
has been moved to scripts/custom/ per guidelines; update the call in the
try-s3-download.sh snippet to invoke ./scripts/custom/download-db-s3.sh and
ensure the exported env vars (VORTEX_DB_DOWNLOAD_S3_*) remain the same so the
custom script receives them; verify there are no other references to
scripts/vortex/download-db-s3.sh in this file and adjust them to
scripts/custom/download-db-s3.sh.

Comment on lines +64 to +72
cd "${PROJECT_ROOT}" || exit 1

export VORTEX_DB_UPLOAD_S3_ACCESS_KEY="${S3_ACCESS_KEY}"
export VORTEX_DB_UPLOAD_S3_SECRET_KEY="${S3_SECRET_KEY}"
export VORTEX_DB_UPLOAD_S3_BUCKET="${S3_BUCKET}"
export VORTEX_DB_UPLOAD_S3_REGION="${S3_REGION}"
export VORTEX_DB_UPLOAD_S3_PREFIX="${S3_PREFIX}"

./scripts/vortex/upload-db-s3.sh

🛠️ Refactor suggestion | 🟠 Major

Align invocation with the required custom-script location.
If the uploader is moved to scripts/custom/, update this path accordingly.

Suggested update
-./scripts/vortex/upload-db-s3.sh
+./scripts/custom/upload-db-s3.sh
As per coding guidelines, "Never modify `scripts/vortex/` - use `scripts/custom/` for custom scripts."
🤖 Prompt for AI Agents
In @.vortex/tests/manual/try-s3-upload.sh around lines 64 - 72, The test
currently invokes the uploader at ./scripts/vortex/upload-db-s3.sh; update the
invocation to point to the custom script location (e.g.,
./scripts/custom/upload-db-s3.sh) so it uses the approved custom-script path,
and ensure any environment exports (VORTEX_DB_UPLOAD_S3_*) remain unchanged;
modify the call in .vortex/tests/manual/try-s3-upload.sh to reference the new
script path (upload-db-s3.sh) in scripts/custom/ instead of scripts/vortex/.

@AlexSkrypnyk AlexSkrypnyk moved this from BACKLOG to Release queue in Vortex Feb 9, 2026
@AlexSkrypnyk AlexSkrypnyk moved this from Release queue to In progress in Vortex Feb 9, 2026
@AlexSkrypnyk AlexSkrypnyk added this to the 1.36.0 milestone Feb 9, 2026
@AlexSkrypnyk AlexSkrypnyk merged commit f88b404 into main Feb 9, 2026
28 checks passed
@AlexSkrypnyk AlexSkrypnyk deleted the feature/699-download-db-s3 branch February 9, 2026 04:15
@github-project-automation github-project-automation bot moved this from In progress to Release queue in Vortex Feb 9, 2026
Successfully merging this pull request may close these issues: Download DB from S3