
build(cli): expose adbc feature flag#443

Closed
dataders wants to merge 13 commits into posit-dev:main from dataders:adbc-driver-manager-spike

Conversation


@dataders dataders commented May 8, 2026

Summary

Stacked on top of #422. This follow-up only forwards the CLI crate's optional adbc feature to ggsql/adbc, so ggsql-cli can be built against the ADBC reader introduced by #422.

Intentionally left out of this PR:

  • dbt profile loading / ~/.dbt/profiles.yml parsing
  • Snowflake-specific URI plumbing
  • dynamic driver discovery
  • any duplicate ADBC reader implementation

Those pieces belong in a later integration PR after the base ADBC reader lands.
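The forwarding itself is typically a one-line feature alias in the CLI crate's manifest. A hedged sketch (the manifest path and the exact feature/dependency names are assumed from the description, not taken from the diff):

```toml
# ggsql-cli/Cargo.toml (sketch; names assumed)
[features]
# Off by default; turning it on enables the ADBC reader in the ggsql dependency.
adbc = ["ggsql/adbc"]
```

With this alias in place, `cargo build -p ggsql-cli --features adbc` activates the reader without the CLI crate carrying any ADBC code of its own.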

Validation

  • cargo check -p ggsql-cli --no-default-features --features "adbc vegalite"
  • git diff --check

Notes

The branch was force-pushed from the old clean-room spike snapshot onto #422. A local backup of the pre-rebase branch exists at backup/adbc-driver-manager-spike-pre-422-rebase-20260508.

jimhester and others added 13 commits May 1, 2026 12:33
New off-by-default 'adbc' feature. Implements ggsql's Reader trait over
any adbc_core::sync::Driver: execute_sql, register, unregister, dialect.
Bridges ADBC's &mut Statement API to Reader's &self via RefCell on the
Connection (same pattern as OdbcReader).

Tests are added in subsequent commits.
adbc_datafusion is added as a dev-dep so unit tests can construct an
in-process AdbcReader without dynamically loading any native driver.

Also enables the workspace `arrow` crate's `ipc` feature when `adbc`
is on, since the reader's IPC roundtrip uses both arrow-56 (workspace)
and arrow-58 (direct) IPC paths.
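The &mut-to-&self bridge described above can be shown in a std-only sketch; `AdbcConn`, its `execute` method, and the `Reader` trait here are hypothetical stand-ins for the real ADBC and ggsql types:

```rust
use std::cell::RefCell;

// Hypothetical stand-in for an ADBC connection whose API takes &mut self.
struct AdbcConn {
    executed: Vec<String>,
}

impl AdbcConn {
    fn execute(&mut self, sql: &str) -> usize {
        self.executed.push(sql.to_string());
        self.executed.len()
    }
}

// Reader-style trait that only hands out &self.
trait Reader {
    fn execute_sql(&self, sql: &str) -> usize;
}

struct AdbcReader {
    // Interior mutability bridges Reader's &self to AdbcConn's &mut self.
    conn: RefCell<AdbcConn>,
}

impl Reader for AdbcReader {
    fn execute_sql(&self, sql: &str) -> usize {
        // borrow_mut panics on re-entrant use, which is fine for a
        // single-threaded reader; a Mutex would be the threaded variant.
        self.conn.borrow_mut().execute(sql)
    }
}
```

This is the same shape the description attributes to `OdbcReader`: the wrapper owns the connection and each `&self` trait call takes a short-lived mutable borrow.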

ggsql's execute pipeline emits CREATE OR REPLACE TEMP TABLE for
layer/stat materialization. adbc_datafusion 0.23 rejects that with
NotImplemented("Temporary tables not supported") and unwraps the
result inside Statement::execute, so the unit-test process panics.
The full pipeline works against drivers that support TEMP TABLE
(DuckDB, Trino, etc.) — see the equivalence-test path for that.

The spike-result note and PR-description draft live in the Netflix
fork's records and in this branch's git history; they don't need to
land in the upstream PR tree. Cleaner diff for review.

Loads the official Apache Arrow SQLite ADBC driver via
`ManagedDriver::load_from_name("sqlite", ...)` (installed by `dbc install
sqlite` to the platform manifest path) and asserts that
`AdbcReader<ManagedDriver>` produces output identical to ggsql's
`SqliteReader` on the same query against the same tempfile-backed SQLite
database.

Three tests cover:

- `equiv_simple_select`: scalar projection (int, text, float)
- `equiv_register_and_query`: register-then-query through the standard
  ADBC bulk-ingest path against a real C driver
- `equiv_nulls`: mixed null + typed values

All `#[ignore]`'d so the default `cargo test` is unchanged. Run with
`cargo test --features "adbc sqlite" -- --ignored equivalence` after
installing the driver.
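The equivalence check itself is a simple pattern: run the identical query through both readers and assert the results match. A std-only sketch (`SqliteLike`, `AdbcLike`, and the row tuple are hypothetical stand-ins for the real readers and Arrow record batches):

```rust
// Reader-style trait returning rows; the real code compares Arrow batches.
trait Reader {
    fn execute_sql(&self, sql: &str) -> Vec<(i64, String)>;
}

// Stand-ins for SqliteReader and AdbcReader<ManagedDriver>.
struct SqliteLike;
struct AdbcLike;

impl Reader for SqliteLike {
    fn execute_sql(&self, _sql: &str) -> Vec<(i64, String)> {
        vec![(1, "a".into()), (2, "b".into())]
    }
}

impl Reader for AdbcLike {
    fn execute_sql(&self, _sql: &str) -> Vec<(i64, String)> {
        vec![(1, "a".into()), (2, "b".into())]
    }
}

// Core of the equivalence test: same query, two backends, equal output.
fn assert_equivalent(a: &dyn Reader, b: &dyn Reader, sql: &str) {
    assert_eq!(a.execute_sql(sql), b.execute_sql(sql));
}
```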

`register()` now sets `IngestMode::Append` on each batch statement so the
SQLite ADBC driver doesn't try to re-CREATE the table our SQL DDL
already created. DataFusion 0.23 returns `Status::NotFound` from
`set_option(IngestMode, ...)`; that's tolerated and the driver's default
append behavior takes over.

Surfaces one pre-existing divergence: `SqliteReader` types an
exclusively-NULL column as `Utf8` (no values to infer from), while
`AdbcReader` correctly carries through the declared projection type. The
nulls test mixes NULLs with typed values to steer around this.
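The tolerate-NotFound behavior described above can be sketched with std-only stand-ins (the `Status` enum, `Stmt`, and the option key strings here are hypothetical, not adbc_core's actual API):

```rust
// Hypothetical stand-ins for ADBC status codes and the statement option API.
#[derive(Debug)]
enum Status {
    Ok,
    NotFound,
    Internal,
}

struct Stmt {
    supports_ingest_mode: bool,
}

impl Stmt {
    fn set_option(&mut self, _key: &str, _value: &str) -> Status {
        if self.supports_ingest_mode { Status::Ok } else { Status::NotFound }
    }
}

// NotFound means the driver doesn't implement the option; its default
// append behavior takes over, so that case is tolerated. Anything else
// is a real error.
fn set_append_mode(stmt: &mut Stmt) -> Result<(), String> {
    match stmt.set_option("adbc.ingest.mode", "adbc.ingest.mode.append") {
        Status::Ok | Status::NotFound => Ok(()),
        other => Err(format!("set_option failed: {:?}", other)),
    }
}
```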

Adds two cargo test runs to the existing build job:
- `cargo test --features "adbc sqlite" --lib` — exercises the
  adbc_datafusion-backed unit tests
- `cargo test --features "adbc sqlite" --lib -- --ignored equivalence`
  — exercises the SQLite ADBC equivalence suite against the official
  Apache Arrow SQLite ADBC driver, installed via `dbc install sqlite`

The default `cargo test --lib --bins` step is unchanged, so the
existing test surface is untouched. dbc is installed via the official
curl-pipe-sh from dbc.columnar.tech.
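A hedged reconstruction of the added CI steps (the YAML shape, step names, and exact installer URL are assumptions; only the commands come from the description):

```yaml
# Sketch of the added workflow steps; names and installer URL assumed.
- name: Install dbc (official curl-pipe-sh installer)
  run: curl -fsSL https://dbc.columnar.tech | sh
- name: Install the SQLite ADBC driver
  run: dbc install sqlite
- name: Run adbc unit tests
  run: cargo test --features "adbc sqlite" --lib
- name: Run SQLite ADBC equivalence suite
  run: cargo test --features "adbc sqlite" --lib -- --ignored equivalence
```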

Forward the CLI crate's optional adbc feature to ggsql/adbc so the CLI can be built against the ADBC reader from PR posit-dev#422 without adding URI, Snowflake, or dbt profile plumbing here.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@dataders dataders force-pushed the adbc-driver-manager-spike branch from 3d0824d to a44a697 Compare May 8, 2026 21:40
@dataders dataders changed the title feat: clean-room ADBC reader (warehouse connectivity via upstream arrow-adbc) build(cli): expose adbc feature flag May 8, 2026

dataders commented May 9, 2026

Closing in favor of jimhester#1, as #422 already covers much of this work.

@dataders dataders closed this May 9, 2026
