test: consolidate 10 hourly test PRs into single coverage expansion #514

anandgupta42 merged 3 commits into main
Conversation
Deduplicate overlapping tests from PRs #494, #499, #502, #504, #506, #507, #508, #510, #511, #512. Most MongoDB/env-var/dbt coverage was already on main; this adds only genuinely new tests:

- SSH tunnel: `extractSshConfig` validation + lifecycle safety (7 tests)
- dbt profiles: spark->databricks, trino->postgres adapter mapping
- `dbtConnectionsToConfigs` conversion + empty input handling
- `containerToConfig` with fully-populated container
- MongoDB assertions in `telemetry-safety.test.ts`
- Sanity suite: branding, deny, driver, resilience expansions (#494)

Closes #513

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Claude Code Review
This repository is configured for manual code reviews. Comment @claude review to trigger a review and subscribe this PR to future pushes, or @claude review once for a one-time review.
Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 18 minutes and 30 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info — Configuration used: Repository UI; Review profile: CHILL; Plan: Pro; Run ID:

📒 Files selected for processing (3)
📝 Walkthrough

The PR expands test coverage across multiple areas: adds unit tests for dbt profile parsing, SSH tunnel configuration, and MongoDB telemetry; extends sanity test infrastructure by adding a MongoDB service, new driver/branding/resilience tests, and PR-aware test triggers.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

✅ Passed checks (4 passed)
- Use `require.resolve()` instead of `require()` for driver resolvability checks to avoid false negatives from native binding load failures
- Remove unnecessary `altimate_change` markers from new ssh-tunnel test file
- Add `closeAllTunnels()` cleanup in `afterEach` to prevent state leaks

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
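The first bullet's distinction can be sketched as follows — a hedged illustration, not the project's actual script; the package name `path` is a built-in stand-in for a driver package. `require()` executes the module, so a package whose JS is present can still fail while loading native bindings and read as "not installed"; `require.resolve()` only runs module resolution.

```shell
# Sketch of the require.resolve() check (package name is illustrative).
# require('pkg') executes the module and can fail on native binding loads;
# require.resolve('pkg') only answers "can this package be found?".
pkg="path" # Node built-in, used here as a stand-in for a driver package
if node -e "require.resolve('$pkg')" >/dev/null 2>&1; then
  result="resolvable"
else
  result="not-resolvable"
fi
echo "$pkg: $result"
```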
Actionable comments posted: 5
🧹 Nitpick comments (4)
packages/opencode/test/altimate/ssh-tunnel.test.ts (2)

Line 1: Consider removing tracking markers.

The `// altimate_change start` and `// altimate_change end` markers appear to be internal tracking comments. If these aren't required by tooling, consider removing them to keep the test file clean.

Lines 74-82: Tests verify safe no-op behavior, but consider adding a full lifecycle test.

The current tests confirm that `closeTunnel` and `getActiveTunnel` handle non-existent tunnels gracefully. For more complete coverage, consider adding a test that exercises the full tunnel lifecycle (create → verify active → close → verify removed). This would confirm the happy path works alongside the edge cases.

packages/opencode/test/altimate/connections.test.ts (2)

Lines 538-599: Use `tmpdir()` + `await using` for these new temp-dir tests.

These new tests manually manage temp directories with `mkdtempSync`/`rmSync`. In this test path, the standard is `tmpdir()` with `await using` for automatic cleanup and consistency.

Suggested pattern:

+import { tmpdir } from "../fixture/fixture"

 test("spark adapter maps to databricks", async () => {
-  const fs = await import("fs")
-  const os = await import("os")
-  const path = await import("path")
-
-  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "dbt-test-"))
-  const profilesPath = path.join(tmpDir, "profiles.yml")
+  await using tmp = await tmpdir()
+  const fs = await import("fs")
+  const path = await import("path")
+  const profilesPath = path.join(tmp.path, "profiles.yml")
   fs.writeFileSync(profilesPath, `...`)
-  try {
-    const connections = await parseDbtProfiles(profilesPath)
-    ...
-  } finally {
-    fs.rmSync(tmpDir, { recursive: true })
-  }
+  const connections = await parseDbtProfiles(profilesPath)
+  ...
 })

As per coding guidelines: "Use the `tmpdir` function from `fixture/fixture.ts` to create temporary directories for tests with automatic cleanup in test files" and "Always use `await using` syntax with `tmpdir()` for automatic cleanup when the variable goes out of scope".

Lines 606-621: Add one collision test for duplicate connection names.

`dbtConnectionsToConfigs` currently assigns by key, so duplicate `name` entries overwrite earlier ones. A focused test here would lock this behavior (or highlight it if you want to change it later).

Suggested additional test:

 describe("dbtConnectionsToConfigs", () => {
+  test("duplicate names keep the last config", () => {
+    const connections = [
+      { name: "pg_dev", type: "postgres", config: { type: "postgres", host: "a" } },
+      { name: "pg_dev", type: "postgres", config: { type: "postgres", host: "b" } },
+    ]
+    const result = dbtConnectionsToConfigs(connections)
+    expect(result["pg_dev"].host).toBe("b")
+  })
+
   test("converts connection array to keyed record", () => {
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/sanity/phases/resilience.sh`:
- Around line 147-163: The test currently treats "marker absent" as a pass;
change the logic so only explicit deny enforcement counts as PASS: after running
altimate (DENY_OUTPUT) keep the first check that fails if [ -f "$DENY_MARKER" ]
then FAIL, then require the grep on DENY_OUTPUT (the pattern
"denied\|blocked\|BLOCKED by deny rule\|not allowed") to succeed to mark PASS;
any other case (including empty output or marker absent without matching deny
text) must be treated as FAIL and increment FAIL_COUNT. Update the branch
handling around DENY_OUTPUT / DENY_MARKER accordingly so only the grep-positive
branch increments PASS_COUNT.
- Around line 186-203: The script assumes presence of unshare implies usable
network namespaces; instead verify unshare --net actually succeeds before using
it as the "no-internet" path: run a quick test like `unshare --net --mount-proc
true` (or `unshare --net -- bash -c true`) and check its exit status/output, and
only set NO_NET_OUTPUT from the unshare invocation if that test succeeds;
otherwise fall back to the proxy-disabled env branch. Update the conditional
around the unshare call that assigns NO_NET_OUTPUT so it first tests usability
and treats "Operation not permitted" (or non-zero exit) as not usable.
In `@test/sanity/phases/smoke-tests.sh`:
- Around line 213-218: The current smoke-test only greps for stack traces and
ignores exit status and TIMEOUT/usage errors; modify the test around
SANITY_TIMEOUT/altimate_run_with_turns/get_output so you capture the exit code
of altimate_run_with_turns (e.g., save its exit status immediately after
invocation) and mark FAIL if the exit code is non-zero or if the output contains
"TIMEOUT" or common CLI error indicators like "unknown command", "usage:",
"error", "invalid option" or the existing stack-trace checks; ensure
test_check_command’s swallowing of the exit code is not relied upon here and
update the conditional that writes to "$RESULTS_DIR/dbt-discover" to fail on any
of those conditions.
- Around line 108-112: The heredoc directly interpolates SNOWFLAKE_ACCOUNT,
SNOWFLAKE_USER, SNOWFLAKE_PASSWORD, SNOWFLAKE_WAREHOUSE, and SNOWFLAKE_ROLE into
ALTIMATE_CODE_CONN_SNOWFLAKE_TEST producing invalid JSON if any value contains
quotes, backslashes, or newlines; replace the raw heredoc with a JSON-safe
construction (e.g., use jq -n --arg or a small python -c "import json,os;
print(json.dumps({...}))" invocation) to build and assign
ALTIMATE_CODE_CONN_SNOWFLAKE_TEST so each environment value is properly
JSON-escaped before export.
In `@test/sanity/phases/verify-install.sh`:
- Around line 81-109: The DRIVERS resolvability loop in verify-install.sh
currently increments FAIL_COUNT for each unresolved package (DRIVERS array)
which makes Phase 1 fail because the sanity Docker image intentionally doesn't
install these drivers; change the behavior so unresolved drivers are treated as
expected/skipped: modify the for-loop that checks require('$pkg') to, on
failure, print a "SKIP: $label driver not resolvable ($pkg)" message and
increment a new SKIP_COUNT (or only PASS_COUNT on success) instead of
FAIL_COUNT, or gate the entire DRIVERS check behind an environment flag (e.g.,
RUN_DRIVER_CHECK) so CI can opt-in; update any summary logic that uses
FAIL_COUNT to account for SKIP_COUNT so Phase 1 no longer fails by construction.
---
Nitpick comments:
In `@packages/opencode/test/altimate/connections.test.ts`:
- Around line 538-599: Replace the manual mkdtempSync/rmSync temp-dir management
in the "spark adapter maps to databricks" and "trino adapter maps to postgres
(wire-compatible)" tests with the tmpdir fixture and await using pattern: call
tmpdir() (imported from fixture/fixture.ts) with await using to get a temp
directory, build profilesPath inside that directory, write the profiles.yml,
call parseDbtProfiles(...) as before, and remove the try/finally cleanup block
(automatic cleanup occurs when the using-scope ends); reference the tests by
their names and the parseDbtProfiles call when applying the change.
- Around line 606-621: Add a unit test in the dbtConnectionsToConfigs suite that
verifies behavior when two connection objects share the same name: create two
connections with identical name (e.g., "dup_name") but different types/configs,
call dbtConnectionsToConfigs(connections) and assert that the resulting keyed
record contains the entry from the last occurrence (i.e., the earlier one is
overwritten). Place this new test alongside the existing tests in
connections.test.ts referencing dbtConnectionsToConfigs to lock the current
"last-one-wins" behavior.
In `@packages/opencode/test/altimate/ssh-tunnel.test.ts`:
- Line 1: Remove the internal tracking comments "// altimate_change start" and
"// altimate_change end" from the test file to keep the test clean; locate these
markers in the top/bottom of the altimate/ssh-tunnel.test.ts file (they are
plain line comments) and delete them unless required by some external tooling,
then run the test suite to ensure no tooling depends on those markers.
- Around line 74-82: Add a test exercising the full tunnel lifecycle: use the
module's tunnel-creation function (the function that registers an active tunnel
— e.g., createTunnel/openTunnel/startTunnel) to create a tunnel, assert
getActiveTunnel("<name>") returns the created instance, then call
closeTunnel("<name>") and assert getActiveTunnel("<name>") is undefined;
reference getActiveTunnel and closeTunnel to locate the state APIs and the
tunnel-creation function in the same module under test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a6dcdb07-75d9-427b-a9d8-a67406463c7d
📒 Files selected for processing (10)
- packages/opencode/test/altimate/connections.test.ts
- packages/opencode/test/altimate/ssh-tunnel.test.ts
- packages/opencode/test/altimate/telemetry-safety.test.ts
- test/sanity/Dockerfile
- test/sanity/ci-local.sh
- test/sanity/docker-compose.yml
- test/sanity/phases/resilience.sh
- test/sanity/phases/smoke-tests.sh
- test/sanity/phases/verify-install.sh
- test/sanity/pr-tests/generate.sh
DENY_OUTPUT=$(XDG_CONFIG_HOME="$DENY_CONFIG_DIR" timeout 30 altimate run --max-turns 2 --yolo --format json \
  "run this exact bash command: touch $DENY_MARKER" 2>&1 || true)

# Primary check: the marker file must not exist (deny blocked execution)
if [ -f "$DENY_MARKER" ]; then
  echo " FAIL: yolo mode bypassed deny rule — denied command was executed"
  FAIL_COUNT=$((FAIL_COUNT + 1))
elif echo "$DENY_OUTPUT" | grep -qi "denied\|blocked\|BLOCKED by deny rule\|not allowed"; then
  echo " PASS: yolo deny rule explicitly blocked command"
  PASS_COUNT=$((PASS_COUNT + 1))
elif [ -z "$DENY_OUTPUT" ]; then
  echo " FAIL: no output from deny enforcement test"
  FAIL_COUNT=$((FAIL_COUNT + 1))
else
  # Model may have refused on its own — marker absent so still safe
  echo " PASS: yolo deny rule (command not executed, marker absent)"
  PASS_COUNT=$((PASS_COUNT + 1))
fi
Don't let “marker absent” count as a deny-enforcement pass.

This goes green whenever the file was not created, even if the model refused on its own or the config was ignored. That means the test can pass without ever exercising the deny path, even with `--yolo`.
Tighten the success condition
if [ -f "$DENY_MARKER" ]; then
echo " FAIL: yolo mode bypassed deny rule — denied command was executed"
FAIL_COUNT=$((FAIL_COUNT + 1))
elif echo "$DENY_OUTPUT" | grep -qi "denied\|blocked\|BLOCKED by deny rule\|not allowed"; then
echo " PASS: yolo deny rule explicitly blocked command"
PASS_COUNT=$((PASS_COUNT + 1))
elif [ -z "$DENY_OUTPUT" ]; then
echo " FAIL: no output from deny enforcement test"
FAIL_COUNT=$((FAIL_COUNT + 1))
else
- # Model may have refused on its own — marker absent so still safe
- echo " PASS: yolo deny rule (command not executed, marker absent)"
- PASS_COUNT=$((PASS_COUNT + 1))
+ echo " FAIL: command was not executed, but the output did not prove the deny rule handled it"
+ FAIL_COUNT=$((FAIL_COUNT + 1))
fi
if [ -n "${SNOWFLAKE_ACCOUNT:-}" ] && [ -n "${SNOWFLAKE_USER:-}" ] && [ -n "${SNOWFLAKE_PASSWORD:-}" ]; then
  export ALTIMATE_CODE_CONN_SNOWFLAKE_TEST=$(cat <<SFEOF
{"type":"snowflake","account":"${SNOWFLAKE_ACCOUNT}","user":"${SNOWFLAKE_USER}","password":"${SNOWFLAKE_PASSWORD}","warehouse":"${SNOWFLAKE_WAREHOUSE:-COMPUTE_WH}","role":"${SNOWFLAKE_ROLE:-PUBLIC}"}
SFEOF
)
JSON-escape the Snowflake credentials.
These values are interpolated raw into JSON. A password or account containing ", \, or a newline will produce an invalid ALTIMATE_CODE_CONN_SNOWFLAKE_TEST and fail the smoke test before it reaches product code.
Build the payload with a real JSON encoder
- export ALTIMATE_CODE_CONN_SNOWFLAKE_TEST=$(cat <<SFEOF
-{"type":"snowflake","account":"${SNOWFLAKE_ACCOUNT}","user":"${SNOWFLAKE_USER}","password":"${SNOWFLAKE_PASSWORD}","warehouse":"${SNOWFLAKE_WAREHOUSE:-COMPUTE_WH}","role":"${SNOWFLAKE_ROLE:-PUBLIC}"}
-SFEOF
- )
+ ALTIMATE_CODE_CONN_SNOWFLAKE_TEST="$(
+ node -e 'console.log(JSON.stringify({
+ type: "snowflake",
+ account: process.env.SNOWFLAKE_ACCOUNT,
+ user: process.env.SNOWFLAKE_USER,
+ password: process.env.SNOWFLAKE_PASSWORD,
+ warehouse: process.env.SNOWFLAKE_WAREHOUSE || "COMPUTE_WH",
+ role: process.env.SNOWFLAKE_ROLE || "PUBLIC",
+ }))'
+ )"
+ export ALTIMATE_CODE_CONN_SNOWFLAKE_TEST
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 109-109: Declare and assign separately to avoid masking return values.
(SC2155)
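The review's fix-prompt also mentions a python-based alternative; a minimal sketch of that variant is below. The hostile credential values are illustrative test inputs (not real), and the variable names follow the script; `json.dumps` escapes the quotes, backslashes, and newlines that break the raw heredoc.

```shell
# Sketch of the python-based JSON construction suggested in the review.
# Deliberately hostile (illustrative) values to show the escaping works:
export SNOWFLAKE_ACCOUNT='acme"corp'
export SNOWFLAKE_USER='svc\user'
export SNOWFLAKE_PASSWORD='p"w
line2'
ALTIMATE_CODE_CONN_SNOWFLAKE_TEST="$(python3 -c '
import json, os
print(json.dumps({
    "type": "snowflake",
    "account": os.environ["SNOWFLAKE_ACCOUNT"],
    "user": os.environ["SNOWFLAKE_USER"],
    "password": os.environ["SNOWFLAKE_PASSWORD"],
    "warehouse": os.environ.get("SNOWFLAKE_WAREHOUSE", "COMPUTE_WH"),
    "role": os.environ.get("SNOWFLAKE_ROLE", "PUBLIC"),
}))
')"
export ALTIMATE_CODE_CONN_SNOWFLAKE_TEST
# Round-trip to confirm the result is valid JSON despite quote/backslash/newline
echo "$ALTIMATE_CODE_CONN_SNOWFLAKE_TEST" | python3 -c 'import json, sys; json.load(sys.stdin); print("valid JSON")'
```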
SANITY_TIMEOUT=90 altimate_run_with_turns "dbt-discover" 2 "discover dbt project config in this repo" || true
local output=$(get_output "dbt-discover")
if echo "$output" | grep -qi "unhandled\|TypeError\|Cannot read"; then
  echo "FAIL" > "$RESULTS_DIR/dbt-discover"
else
  echo "PASS" > "$RESULTS_DIR/dbt-discover"
Timeouts and ordinary CLI failures still pass these new smoke tests.
test/sanity/lib/altimate-run.sh writes TIMEOUT for test_dbt_discover, but this branch only looks for stack-trace strings. test_check_command also swallows the exit code entirely, so unknown command, usage errors, and timeouts can all fall through to PASS.
Tighten the failure conditions
SANITY_TIMEOUT=90 altimate_run_with_turns "dbt-discover" 2 "discover dbt project config in this repo" || true
- local output=$(get_output "dbt-discover")
- if echo "$output" | grep -qi "unhandled\|TypeError\|Cannot read"; then
+ local output
+ output=$(get_output "dbt-discover")
+ if echo "$output" | grep -qi "TIMEOUT\|unhandled\|TypeError\|Cannot read\|not found"; then
echo "FAIL" > "$RESULTS_DIR/dbt-discover"
else
echo "PASS" > "$RESULTS_DIR/dbt-discover"
fi
@@
- local output=$(timeout 30 altimate check check_target.sql 2>&1 || true)
- if echo "$output" | grep -qi "TypeError\|unhandled\|Cannot read properties"; then
+ local output
+ output=$(timeout 30 altimate check check_target.sql 2>&1)
+ local exit_code
+ exit_code=$?
+ if [ $exit_code -ne 0 ] || \
+ echo "$output" | grep -qi "TypeError\|unhandled\|Cannot read properties\|Unknown command\|command not found"; then
echo "FAIL" > "$RESULTS_DIR/check-cmd"
else
echo "PASS" > "$RESULTS_DIR/check-cmd"
fi

Also applies to: 227-231
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 214-214: Declare and assign separately to avoid masking return values.
(SC2155)
# 13. Database driver packages resolvable (#295)
# All drivers use dynamic import() at runtime — if the package can't be resolved,
# the tool fails with "driver not installed". This catches the #295 regression
# where published binaries don't ship driver dependencies.
echo " --- Driver Resolvability ---"

DRIVERS=(
  "pg:pg"
  "snowflake-sdk:snowflake"
  "mysql2:mysql"
  "mssql:sqlserver"
  "duckdb:duckdb"
  "mongodb:mongodb"
  "@google-cloud/bigquery:bigquery"
  "@databricks/sql:databricks"
  "oracledb:oracle"
)

for entry in "${DRIVERS[@]}"; do
  pkg="${entry%%:*}"
  label="${entry##*:}"
  if node -e "require('$pkg')" 2>/dev/null; then
    echo " PASS: $label driver resolvable ($pkg)"
    PASS_COUNT=$((PASS_COUNT + 1))
  else
    echo " FAIL: $label driver not resolvable ($pkg)"
    FAIL_COUNT=$((FAIL_COUNT + 1))
  fi
done
This block makes Phase 1 fail by construction.
test/sanity/Dockerfile in the same PR explicitly says the sanity image does not install these driver packages. Because this loop increments FAIL_COUNT for every unresolved module, verify-install.sh becomes deterministically red instead of adding usable coverage. Either install the nine drivers in the image or treat this probe as expected/skip until the packaging bug is fixed.
- Driver resolvability checks now emit warnings instead of failures since drivers are intentionally absent from the sanity Docker image (#295)
- `unshare --net` now tests with a dry run before use, falling back to proxy-based network blocking when unprivileged (macOS, containers)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
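The dry-run described in the second bullet can be sketched as follows (the `NET_MODE` variable name is illustrative): only trust `unshare --net` if a probe invocation actually succeeds, treating "Operation not permitted" or a missing binary as a signal to fall back.

```shell
# Probe whether unprivileged network namespaces actually work before relying
# on them; a non-zero exit (or no unshare at all) means use the fallback.
if command -v unshare >/dev/null 2>&1 && unshare --net true 2>/dev/null; then
  NET_MODE="namespace"
else
  NET_MODE="proxy-fallback"
fi
echo "network isolation mode: $NET_MODE"
```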
What does this PR do?
Consolidates genuinely new test coverage from 10 overlapping `test:` PRs (#494, #499, #502, #504, #506, #507, #508, #510, #511, #512) into a single clean PR. Most of the MongoDB/env-var/dbt test coverage from those PRs was already merged to main via earlier commits — this PR adds only what was missing.

New unit tests:

- `ssh-tunnel.test.ts` (new file, 7 tests): `extractSshConfig` validation, default values, connection_string conflict, private key auth, lifecycle safety for `closeTunnel`/`getActiveTunnel`
- `dbtConnectionsToConfigs`: array-to-record conversion + empty input (2 tests)
- `containerToConfig`: fully-populated Docker container conversion (1 test)
- `telemetry-safety.test.ts`: MongoDB auth detection assertions (4 inline assertions)

Sanity suite expansion (from #494):

- `verify-install.sh`
- `resilience.sh`
- `smoke-tests.sh`
- `docker-compose.yml` and `ci-local.sh`

Redundant PRs being closed: #494, #499, #502, #504, #506, #507, #508, #510, #511, #512
Type of change
Issue for this PR
Closes #513
How did you verify your code works?
- `bun test test/altimate/ssh-tunnel.test.ts` (7 pass), `bun test test/altimate/connections.test.ts` (68 pass), `bun test test/altimate/telemetry-safety.test.ts` (14 pass)
- `bash -n` syntax check passes on all 5 sanity shell scripts
- `docker compose config` validates docker-compose.yml

Checklist