Fix/checks create container id support #33
Merged
pelegolas9 merged 11 commits into main on Mar 11, 2026
…tainer name
Support both portable format (container name) and API format (container_id) in checks create and import commands. Also accept "rule" as alias for "rule_type". Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Supports --id for single and --ids for bulk activation. Fetches the existing check payload before updating status to avoid missing required fields on the PUT endpoint. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… help Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ally exclusive container filters
Strip None values from operation payloads before sending to the API, and only include container_names/container_tags when explicitly provided, since they are mutually exclusive. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…t state first
The PUT endpoint requires a full payload, but the CLI was only sending user-provided fields. Now fetches the current datastore and merges changes on top, flattening nested objects (teams, tags, connection) into the format the API expects. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The build_create_container_payload function accepted description and tags parameters but never included them in the returned payload dict. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The container create API endpoint ignores description and tags fields. Now performs a follow-up update call to apply them after creation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The container create API endpoint ignores the description field, so _create_computed_table now performs a follow-up update to apply it after creation, matching the containers create command behavior. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ath, and run sync
Export:
- Strip read-only connection-level fields (jdbc_url, host, port, etc.)
- Add fallback connection_id extraction when the connection object lacks a name
- Flatten teams and global_tags to string names

Import:
- Auto-detect input path (root dir, datastores/ folder, or single datastore dir)
- Strip read-only fields from older exports before sending to the API
- Fetch the full datastore for a proper merge on update (not the lightweight list result)
- Defer connection resolution failure for updates (fall back to the existing connection)
- Run sync (catalog) automatically after datastore create/update to discover tables/views before importing containers and checks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The API expects datastore_id (singular) per request but run_catalog, run_profile, run_scan, and run_materialize were sending datastore_ids (list). Now loops over each ID and sends individual requests. Also adds .mcp.json to .gitignore (contains local paths). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Datastores
datastores update failing with partial payload — The PUT endpoint requires a full payload, but the CLI was only sending user-provided fields (e.g., just --name). Now fetches the current datastore first and merges changes on top. Also flattens nested objects (teams, tags, connection) from the API response into the format the PUT endpoint expects.
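The fetch-and-merge step can be sketched roughly as follows. Function and field names here are illustrative assumptions, not the actual CLI code:

```python
def merge_datastore_update(current: dict, changes: dict) -> dict:
    """Merge user-provided changes onto the current datastore payload,
    flattening nested objects from the GET response into the flat shape
    the PUT endpoint expects (hypothetical field names)."""
    payload = dict(current)
    # Flatten team/tag objects returned by the API into plain string names.
    for key in ("teams", "tags"):
        if isinstance(payload.get(key), list):
            payload[key] = [v["name"] if isinstance(v, dict) else v for v in payload[key]]
    # Replace the nested connection object with a flat connection_id.
    connection = payload.pop("connection", None)
    if isinstance(connection, dict):
        payload["connection_id"] = connection.get("id")
    # Apply only the fields the user actually provided (None means "not set").
    payload.update({k: v for k, v in changes.items() if v is not None})
    return payload
```

The None-stripping mirrors the same rule applied to operation payloads elsewhere in this PR: an omitted flag must never overwrite an existing value.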
Containers
containers create not applying description and tags. Two issues fixed: build_create_container_payload accepted description and tags parameters but never included them in the returned payload dict, and the container create API endpoint ignores those fields anyway, so the CLI now performs a follow-up update call to apply them after creation.
containers import not applying description. Same issue as above. The _create_computed_table function in the import flow now performs a follow-up PUT to apply the description after creating each computed table.
Config Export/Import
Export included read-only fields that broke import: The exported YAML contained connection-level fields (jdbc_url, host, port, username, parameters, group, store_type, etc.) that the PUT endpoint rejects. These are now stripped during export. Also fixed teams and global_tags being exported as objects instead of string names.
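A minimal sketch of the export-side cleanup, assuming the read-only field list and flattening rules described above (the exact field set in the CLI may differ):

```python
# Connection-level fields the PUT endpoint rejects (per the fix description;
# the real list may contain more entries).
READ_ONLY_CONNECTION_FIELDS = {
    "jdbc_url", "host", "port", "username", "parameters", "group", "store_type",
}

def strip_for_export(datastore: dict) -> dict:
    """Drop read-only connection fields and flatten teams/global_tags
    objects to plain string names before writing the export YAML."""
    cleaned = {k: v for k, v in datastore.items()
               if k not in READ_ONLY_CONNECTION_FIELDS}
    for key in ("teams", "global_tags"):
        if isinstance(cleaned.get(key), list):
            cleaned[key] = [v["name"] if isinstance(v, dict) else v
                            for v in cleaned[key]]
    return cleaned
```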
Export missing connection_name: Added fallback logic to extract the connection reference even when the API doesn't return a connection object with a name. Falls back to connection_id.
Import failing with wrong input path: The import now auto-detects whether the user pointed to the root export dir, the datastores/ folder, or a single datastore directory, and adjusts accordingly.
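The path detection can be sketched like this. The `datastore.yaml` marker file is a hypothetical convention used here to recognize a single datastore directory; the CLI's actual layout detection may differ:

```python
from pathlib import Path

def resolve_datastore_dirs(input_path: str) -> list[Path]:
    """Accept the export root, its datastores/ folder, or a single
    datastore directory, and return the datastore dirs to import.
    (datastore.yaml is an assumed marker file, for illustration.)"""
    path = Path(input_path)
    if (path / "datastore.yaml").is_file():
        return [path]                      # single datastore dir
    if (path / "datastores").is_dir():
        path = path / "datastores"         # export root: descend one level
    # datastores/ folder: one subdirectory per exported datastore
    return sorted(p for p in path.iterdir()
                  if (p / "datastore.yaml").is_file())
```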
Import failing on datastore update. Three fixes: fetches the full datastore by ID (not the lightweight list result) for proper merging, strips read-only fields from older exports, and defers connection resolution errors for updates (falls back to the existing connection).
Import not discovering tables before importing checks. The import now automatically runs a sync (catalog) operation after creating/updating each datastore. This discovers all tables and views so that checks can be imported in a single pass. Previously required running import twice.
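The single-pass flow can be sketched as below. The `client` wrapper and endpoint paths are illustrative assumptions, not the CLI's real API surface:

```python
def import_datastore(client, spec: dict) -> dict:
    """Create or update a datastore, then run a catalog sync so that
    tables/views exist before containers and checks are imported
    (client and paths are hypothetical)."""
    if spec.get("id"):
        datastore = client.put(f"/datastores/{spec['id']}", json=spec)
    else:
        datastore = client.post("/datastores", json=spec)
    # Sync (catalog) discovers tables and views up front, so checks can be
    # imported in the same pass instead of requiring a second import run.
    client.post("/run/catalog", json={"datastore_id": datastore["id"]})
    return datastore
</antml>```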
MCP Server
Operation tools sending wrong payload key: run_catalog, run_profile, run_scan, and run_materialize were sending datastore_ids (list) but the API expects datastore_id (singular int). Now loops over each ID and sends individual requests.
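The fan-out looks roughly like this (shown for run_catalog; run_profile, run_scan, and run_materialize follow the same pattern, and the `client` wrapper and path are illustrative):

```python
def run_catalog(client, datastore_ids: list[int]) -> list[dict]:
    """The operation endpoint accepts a single datastore_id per request,
    so issue one request per ID instead of sending a list."""
    results = []
    for datastore_id in datastore_ids:
        results.append(client.post("/run/catalog",
                                   json={"datastore_id": datastore_id}))
    return results
```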