
Fix/checks create container id support #33

Merged

pelegolas9 merged 11 commits into main from fix/checks-create-container-id-support on Mar 11, 2026

Conversation

@pelegolas9 (Contributor)

Summary

Datastores

`datastores update` failing with partial payload: the PUT endpoint requires a full payload, but the CLI was only sending user-provided fields (e.g., just `--name`). The command now fetches the current datastore first and merges the user's changes on top. It also flattens nested objects (teams, tags, connection) from the API response into the format the PUT endpoint expects.
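The fetch-then-merge pattern can be sketched as below. This is an illustrative sketch, not the actual CLI code: the function names (`flatten_datastore`, `build_update_payload`) and field names (`global_tags`, `connection_id`) are assumptions based on the description above.

```python
def flatten_datastore(ds):
    """Flatten nested API objects into the flat shape a full PUT expects."""
    flat = {k: v for k, v in ds.items()
            if k not in ("teams", "global_tags", "connection")}
    # teams/global_tags come back as objects; the PUT wants plain names.
    if "teams" in ds:
        flat["teams"] = [t["name"] for t in ds["teams"]]
    if "global_tags" in ds:
        flat["global_tags"] = [t["name"] for t in ds["global_tags"]]
    # The PUT references the connection by id, not as a nested object.
    if "connection" in ds:
        flat["connection_id"] = ds["connection"].get("id")
    return flat

def build_update_payload(current, changes):
    """Merge only the user-provided (non-None) fields on top of current state."""
    payload = flatten_datastore(current)
    payload.update({k: v for k, v in changes.items() if v is not None})
    return payload
```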

Containers

`containers create` not applying description and tags. Two issues fixed:

  1. `build_create_container_payload` accepted `description` and `tags` parameters but never included them in the payload.
  2. The container create API endpoint ignores these fields anyway, so a follow-up PUT is now performed after creation to apply them.
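The create-then-update flow might look like the following sketch. The client interface and endpoint paths here are illustrative placeholders, not the real API surface.

```python
def create_container(client, datastore_id, payload, description=None, tags=None):
    created = client.post(f"/datastores/{datastore_id}/containers", payload)
    # The create endpoint ignores description/tags, so apply them via a
    # follow-up PUT against the newly created container.
    if description is not None or tags is not None:
        update = dict(created)
        if description is not None:
            update["description"] = description
        if tags is not None:
            update["global_tags"] = tags
        created = client.put(f"/containers/{created['id']}", update)
    return created
```

When neither flag is given, no second request is made, so plain creates keep their single round-trip.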

`containers import` not applying description. Same issue as above: the `_create_computed_table` function in the import flow now performs a follow-up PUT to apply the description after creating each computed table.

Config Export/Import

Export included read-only fields that broke import: the exported YAML contained connection-level fields (`jdbc_url`, `host`, `port`, `username`, `parameters`, `group`, `store_type`, etc.) that the PUT endpoint rejects. These are now stripped during export. Also fixed `teams` and `global_tags` being exported as objects instead of string names.
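A minimal sketch of the export-sanitizing step, assuming the read-only field names listed above; the helper name is an invention for illustration.

```python
READ_ONLY_CONNECTION_FIELDS = {
    "jdbc_url", "host", "port", "username", "parameters", "group", "store_type",
}

def sanitize_for_export(ds):
    # Drop connection-level fields the PUT endpoint rejects on re-import.
    clean = {k: v for k, v in ds.items() if k not in READ_ONLY_CONNECTION_FIELDS}
    # Export teams/global_tags as plain string names, not API objects.
    for key in ("teams", "global_tags"):
        if isinstance(clean.get(key), list):
            clean[key] = [t["name"] if isinstance(t, dict) else t
                          for t in clean[key]]
    return clean
```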

Export missing `connection_name`: added fallback logic to extract the connection reference even when the API doesn't return a connection object with a name. Falls back to `connection_id`.
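An illustrative version of that fallback: prefer the connection name when the API returns one, otherwise fall back to `connection_id`. The function name and return shape are assumptions.

```python
def extract_connection_ref(ds):
    conn = ds.get("connection") or {}
    # Preferred: a portable reference by connection name.
    if conn.get("name"):
        return {"connection_name": conn["name"]}
    # Fallback: the numeric id, from either the top level or the nested object.
    cid = ds.get("connection_id") or conn.get("id")
    return {"connection_id": cid} if cid is not None else {}
```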

Import failing with wrong input path: The import now auto-detects whether the user pointed to the root export dir, the datastores/ folder, or a single datastore directory, and adjusts accordingly.
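One way to sketch the auto-detection, assuming an export layout of `<root>/datastores/<name>/` with a per-datastore marker file; the `datastore.yaml` file name is a guess for illustration.

```python
import tempfile
from pathlib import Path

def resolve_input_dirs(path):
    p = Path(path)
    if (p / "datastores").is_dir():        # user pointed at the root export dir
        p = p / "datastores"
    if (p / "datastore.yaml").is_file():   # user pointed at a single datastore dir
        return [p]
    return sorted(d for d in p.iterdir() if d.is_dir())

# demo: build a fake export tree to resolve against
root = Path(tempfile.mkdtemp())
ds_dir = root / "datastores" / "my_ds"
ds_dir.mkdir(parents=True)
(ds_dir / "datastore.yaml").touch()
```

All three starting points (root, `datastores/` folder, single datastore dir) resolve to the same list of datastore directories.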

Import failing on datastore update. Three fixes: fetches the full datastore by ID (not the lightweight list result) for proper merging, strips read-only fields from older exports, and defers connection resolution errors for updates (falls back to the existing connection).

Import not discovering tables before importing checks. The import now automatically runs a sync (catalog) operation after creating/updating each datastore. This discovers all tables and views so that checks can be imported in a single pass. Previously required running import twice.

MCP Server

Operation tools sending wrong payload key: `run_catalog`, `run_profile`, `run_scan`, and `run_materialize` were sending `datastore_ids` (a list) but the API expects `datastore_id` (a singular int). The tools now loop over each ID and send individual requests.
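The fan-out fix amounts to one request per datastore, each carrying the singular key. The client and endpoint path below are illustrative, not the actual MCP server code.

```python
def run_operation(client, op_type, datastore_ids):
    results = []
    for ds_id in datastore_ids:
        # API expects a single datastore_id (int) per request, not a list.
        results.append(client.post("/operations/run",
                                   {"type": op_type, "datastore_id": ds_id}))
    return results
```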

pelegolas9 and others added 11 commits March 5, 2026 16:46
…tainer name

Support both portable format (container name) and API format (container_id)
in checks create and import commands. Also accept "rule" as alias for "rule_type".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Supports --id for single and --ids for bulk activation. Fetches the
existing check payload before updating status to avoid missing required
fields on the PUT endpoint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… help

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ally exclusive container filters

Strip None values from operation payloads before sending to the API, and
only include container_names/container_tags when explicitly provided since
they are mutually exclusive.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
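The payload-building rule in this commit can be sketched as below; the function and key names are illustrative assumptions, not the actual code.

```python
def build_operation_payload(op_type, datastore_id,
                            container_names=None, container_tags=None):
    # The two container filters are mutually exclusive.
    if container_names and container_tags:
        raise ValueError("container_names and container_tags are mutually exclusive")
    payload = {
        "type": op_type,
        "datastore_id": datastore_id,
        "container_names": container_names,
        "container_tags": container_tags,
    }
    # Strip None values so unset options never reach the API.
    return {k: v for k, v in payload.items() if v is not None}
```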
…t state first

The PUT endpoint requires a full payload but the CLI was only sending
user-provided fields. Now fetches the current datastore and merges
changes on top, flattening nested objects (teams, tags, connection)
into the format the API expects.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The build_create_container_payload function accepted description and
tags parameters but never included them in the returned payload dict.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The container create API endpoint ignores description and tags fields.
Now performs a follow-up update call to apply them after creation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The container create API endpoint ignores the description field, so
_create_computed_table now performs a follow-up update to apply it
after creation, matching the containers create command behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ath, and run sync

Export:
- Strip read-only connection-level fields (jdbc_url, host, port, etc.)
- Add fallback connection_id extraction when connection object lacks name
- Flatten teams and global_tags to string names

Import:
- Auto-detect input path (root dir, datastores/ folder, or single datastore dir)
- Strip read-only fields from older exports before sending to API
- Fetch full datastore for proper merge on update (not lightweight list result)
- Defer connection resolution failure for updates (fall back to existing)
- Run sync (catalog) automatically after datastore create/update to discover
  tables/views before importing containers and checks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The API expects datastore_id (singular) per request but run_catalog,
run_profile, run_scan, and run_materialize were sending datastore_ids
(list). Now loops over each ID and sends individual requests.

Also adds .mcp.json to .gitignore (contains local paths).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@pelegolas9 pelegolas9 requested a review from shindiogawa March 11, 2026 15:20
@pelegolas9 pelegolas9 marked this pull request as ready for review March 11, 2026 15:21

@shindiogawa shindiogawa left a comment


LGTM

@pelegolas9 pelegolas9 merged commit 4ec832e into main Mar 11, 2026
7 checks passed
