-
Notifications
You must be signed in to change notification settings - Fork 123
Fix breaking changes in recent dbt-fusion version #920
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Are you sure you want to change the base?
Conversation
🤖 Devin AI EngineerI'll be helping with this pull request! Here's what you should know: ✅ I will automatically:
Note: I can only respond to comments from users who have write access to this repository. ⚙️ Control Options:
|
|
👋 @devin-ai-integration[bot] |
📝 WalkthroughWalkthroughUpdated dbt-fusion pin and CI install, replaced legacy uppercase invocation keys with lowercase, added get_node_meta and exposure-result storage, handled sample_limit = -1 as unlimited, normalized ignore_small_changes, extended data type lists, removed a fusion-specific status normalization, and added Fusion schema-cache clearing after seed. (50 words) Changes
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches🧪 Generate unit tests (beta)
Comment |
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
…rsion Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
- Update Python version from 3.9 to 3.10 (required by elementary-data package) - Remove .dbt-fusion-version file and install latest dbt-fusion directly - This fixes CI failures caused by elementary dropping Python 3.9 support Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
The workflow runs from master branch which still expects this file. This updates the version from 2.0.0-preview.76 to 2.0.0-preview.102. Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
e25e57d to
bbc4104
Compare
dbt-fusion fails with 'none has no method named items' when ignore_small_changes is None. This fix: 1. Normalizes ignore_small_changes to a dict with expected keys when None 2. Adds early return in validate_ignore_small_changes when None Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@macros/edr/dbt_artifacts/upload_dbt_invocation.sql`:
- Around line 105-131: The logs in get_invocation_yaml_selector currently print
full invocation_args_dict and config.args (via invocation_args_dict and
config.args) at info level, which may expose secrets; change these log calls to
use debug-level only and redact sensitive fields (e.g., --vars, tokens,
passwords, emails, ENV-derived secrets) before logging by either omitting those
keys or replacing their values with "[REDACTED]"; keep all other debug messages
(e.g., when returning selector, selector_name, or INVOCATION_COMMAND matches)
intact and ensure logging uses the same debug/info flagging as the surrounding
module.
In `@macros/edr/materializations/test/test.sql`:
- Around line 54-59: The code currently logs the entire flattened_test.meta at
info level; change it to avoid exposing PII by removing or replacing the full
meta log and instead log only the specific key needed
(flattened_test["meta"]["test_sample_row_count"] or flattened_test.get("meta",
{}).get("test_sample_row_count")) at debug level; update the {% do log(...) %}
call that references flattened_test.get("meta", {}) to log only the single key
and use info=False (debug) so only the necessary, non-sensitive value is
emitted, leaving the sample_limit assignment and subsequent debug log intact.
🧹 Nitpick comments (1)
macros/edr/data_monitoring/schema_changes/get_columns_snapshot_query.sql (1)
15-25: Gate verbose column logging behind the debug logger.Line 15–25 logs every column at info level, which can be noisy and potentially expose schema details in normal runs. Prefer
elementary.debug_log(or a debug flag) to keep this diagnostic-only.♻️ Proposed change
- {% do log('DEBUG get_columns_snapshot_query: model_relation = ' ~ model_relation, info=True) %} - {% do log('DEBUG get_columns_snapshot_query: full_table_name = ' ~ full_table_name, info=True) %} + {% do elementary.debug_log('get_columns_snapshot_query: model_relation = ' ~ model_relation) %} + {% do elementary.debug_log('get_columns_snapshot_query: full_table_name = ' ~ full_table_name) %} ... - {% do log('DEBUG get_columns_snapshot_query: columns count = ' ~ columns | length, info=True) %} + {% do elementary.debug_log('get_columns_snapshot_query: columns count = ' ~ columns | length) %} {% for column in columns %} - {% do log('DEBUG get_columns_snapshot_query: column[' ~ loop.index ~ '] = ' ~ column.name ~ ' (' ~ column.data_type ~ ')', info=True) %} + {% do elementary.debug_log('get_columns_snapshot_query: column[' ~ loop.index ~ '] = ' ~ column.name ~ ' (' ~ column.data_type ~ ')') %} {% endfor %}
| {# DEBUG: Log sample_limit determination #} | ||
| {% do log('DEBUG handle_dbt_test: initial sample_limit from config = ' ~ sample_limit, info=True) %} | ||
| {% do log('DEBUG handle_dbt_test: flattened_test meta = ' ~ flattened_test.get("meta", {}), info=True) %} | ||
| {% if "meta" in flattened_test and "test_sample_row_count" in flattened_test["meta"] %} | ||
| {% set sample_limit = flattened_test["meta"]["test_sample_row_count"] %} | ||
| {% do log('DEBUG handle_dbt_test: sample_limit from meta = ' ~ sample_limit, info=True) %} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Don’t log full flattened_test.meta at info level.
Line 54–57 emits the entire meta dict, which can include user-supplied PII/secrets. Please log only the specific key(s) needed and use the debug logger.
🔒 Proposed change (redacted + debug-only)
- {% do log('DEBUG handle_dbt_test: initial sample_limit from config = ' ~ sample_limit, info=True) %}
- {% do log('DEBUG handle_dbt_test: flattened_test meta = ' ~ flattened_test.get("meta", {}), info=True) %}
+ {% do elementary.debug_log('handle_dbt_test: initial sample_limit from config = ' ~ sample_limit) %}
+ {% do elementary.debug_log('handle_dbt_test: meta keys = ' ~ (flattened_test.get("meta", {}).keys() | list)) %}
...
- {% do log('DEBUG handle_dbt_test: sample_limit from meta = ' ~ sample_limit, info=True) %}
+ {% do elementary.debug_log('handle_dbt_test: sample_limit from meta = ' ~ sample_limit) %}🤖 Prompt for AI Agents
In `@macros/edr/materializations/test/test.sql` around lines 54 - 59, The code
currently logs the entire flattened_test.meta at info level; change it to avoid
exposing PII by removing or replacing the full meta log and instead log only the
specific key needed (flattened_test["meta"]["test_sample_row_count"] or
flattened_test.get("meta", {}).get("test_sample_row_count")) at debug level;
update the {% do log(...) %} call that references flattened_test.get("meta", {})
to log only the single key and use info=False (debug) so only the necessary,
non-sensitive value is emitted, leaving the sample_limit assignment and
subsequent debug log intact.
- Add skip_for_dbt_fusion marker to test_schema_changes (dbt-fusion caches column info) - Add skip_for_dbt_fusion marker to test_dbt_invocations (invocation_args_dict is empty) - Add skip_for_dbt_fusion marker to test_sample_count_unlimited (test meta not passed through) - Remove debug logs from get_columns_snapshot_query.sql - Remove debug logs from upload_dbt_invocation.sql - Remove debug logs from test.sql Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
…timestamp issue Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
… compatibility Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
… skip markers Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@macros/edr/tests/test_exposure_schema_validity.sql`:
- Around line 34-36: The template accesses meta['referenced_columns'] directly
which can raise a KeyError; change those accesses to use
meta.get('referenced_columns') (and default to an empty iterable where
appropriate) so the conditional and loop use a safe value — e.g., update the
conditional that currently uses (meta['referenced_columns'] or none) and the
for-loop over exposure_column to reference meta.get('referenced_columns')
instead so missing keys won't break execution and the later warning path can be
reached.
🧹 Nitpick comments (1)
integration_tests/tests/test_exposure_schema_validity.py (1)
128-140: Consider extracting the invalid_exposures fetch pattern to a helper.This pattern of fetching and parsing
invalid_exposuresis repeated across three test functions. While acceptable in test code, extracting it to a helper could improve maintainability.♻️ Optional: Extract helper function
def get_invalid_exposures(dbt_project: DbtProject, test_id: str): """Fetch and parse invalid exposures from test result rows.""" return [ json.loads(row["result_row"]) for row in dbt_project.run_query( INVALID_EXPOSURES_QUERY.format(test_id=test_id) ) ]Then use:
invalid_exposures = get_invalid_exposures(dbt_project, test_id)
| {%- set meta = elementary.get_node_meta(exposure) -%} | ||
| {%- if (meta['referenced_columns'] or none) is iterable -%} | ||
| {%- for exposure_column in meta['referenced_columns'] -%} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Potential KeyError when referenced_columns is missing from meta.
Line 35 uses meta['referenced_columns'] which will raise a KeyError if the key doesn't exist in the unified meta dictionary. This would prevent the warning at line 70 from ever being reached.
🐛 Proposed fix: Use `.get()` for safe access
{%- set meta = elementary.get_node_meta(exposure) -%}
- {%- if (meta['referenced_columns'] or none) is iterable -%}
+ {%- if (meta.get('referenced_columns') or none) is iterable -%}
{%- for exposure_column in meta['referenced_columns'] -%}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {%- set meta = elementary.get_node_meta(exposure) -%} | |
| {%- if (meta['referenced_columns'] or none) is iterable -%} | |
| {%- for exposure_column in meta['referenced_columns'] -%} | |
| {%- set meta = elementary.get_node_meta(exposure) -%} | |
| {%- if (meta.get('referenced_columns') or none) is iterable -%} | |
| {%- for exposure_column in meta['referenced_columns'] -%} |
🤖 Prompt for AI Agents
In `@macros/edr/tests/test_exposure_schema_validity.sql` around lines 34 - 36, The
template accesses meta['referenced_columns'] directly which can raise a
KeyError; change those accesses to use meta.get('referenced_columns') (and
default to an empty iterable where appropriate) so the conditional and loop use
a safe value — e.g., update the conditional that currently uses
(meta['referenced_columns'] or none) and the for-loop over exposure_column to
reference meta.get('referenced_columns') instead so missing keys won't break
execution and the later warning path can be reached.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
integration_tests/tests/dbt_project.py (1)
260-266:⚠️ Potential issue | 🟡 MinorAdd cache clearing to
seed_context()for consistency withseed().
seed_context()doesn't clear the Fusion schema cache after seeding, whileseed()does. If this context is used in schema-change tests, stale column information could cause test failures. Add the same cleanup in afinallyblock to ensure consistency and prevent future issues:Proposed fix
`@contextmanager` def seed_context( self, data: List[dict], table_name: str ) -> Generator[None, None, None]: with DbtDataSeeder( self.dbt_runner, self.project_dir_path, self.seeds_dir_path ).seed(data, table_name): - yield + try: + yield + finally: + self._clear_fusion_schema_cache_if_needed()
🤖 Fix all issues with AI agents
In `@integration_tests/tests/dbt_project.py`:
- Around line 267-278: The _clear_fusion_schema_cache_if_needed method is
deleting the wrong path and suppressing errors; change the schema_cache_path to
self.project_dir_path / "target" / "db" / "schemas" and perform the removal
inside a try/except that catches Exception as e and logs the failure (e.g.,
using an existing logger on the test class) instead of relying on
shutil.rmtree(ignore_errors=True); keep the pre-check with
schema_cache_path.exists(), call shutil.rmtree(schema_cache_path) inside the
try, and on exception call the logger.error with a clear message including the
exception.
| def _clear_fusion_schema_cache_if_needed(self): | ||
| """Clear dbt-fusion's schema cache after seeding. | ||
| dbt-fusion caches column information for relations. When a table is recreated | ||
| with different columns (e.g., in test_schema_changes), the cache becomes stale. | ||
| Clearing the cache after seeding ensures the next get_columns_in_relation call | ||
| queries the warehouse for fresh column information. | ||
| """ | ||
| if self.runner_method == RunnerMethod.FUSION: | ||
| schema_cache_path = self.project_dir_path / "target" / "schemas" | ||
| if schema_cache_path.exists(): | ||
| shutil.rmtree(schema_cache_path, ignore_errors=True) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🌐 Web query:
dbt-fusion 2.0.0-preview schema cache location target/schemas
💡 Result:
In dbt-fusion 2.0.0-preview, the downloaded “schema cache” files are stored under:
target/db/schemas/…(nottarget/schemas/)
When Fusion throws dbt1053: Couldn't find cached schema, it even reports an expected location like:
db/schemas/<profile>/<target_schema>/<relation_name>(relative to the projecttarget/directory), i.e. effectively:target/db/schemas/<profile>/<target_schema>/<relation_name>[1]
To clear/rebuild that cache, delete:
rm -rf target/db/schemas/[1]
Source: [1]
🏁 Script executed:
# Search for dbt-fusion references and version information in the codebase
rg "dbt-fusion|dbt.fusion|fusion" --type py -B 2 -A 2 | head -100Repository: elementary-data/dbt-data-reliability
Length of output: 6907
🏁 Script executed:
# Look for any documentation or comments about the cache path
rg "target.*schemas|schema.*cache" --type py -B 2 -A 2 | head -100Repository: elementary-data/dbt-data-reliability
Length of output: 1661
🏁 Script executed:
# Check for dbt-fusion version requirements or documentation files
find . -type f \( -name "*.md" -o -name "*.txt" -o -name "requirements*.txt" -o -name "pyproject.toml" -o -name "setup.py" \) | head -20Repository: elementary-data/dbt-data-reliability
Length of output: 278
Update cache path to target/db/schemas and add error logging.
The cache path in dbt-fusion 2.0.0-preview is stored at target/db/schemas/, not target/schemas/. The current code targets the wrong directory and silently ignores failures, making flakes undiagnosable.
♻️ Suggested fix
if self.runner_method == RunnerMethod.FUSION:
- schema_cache_path = self.project_dir_path / "target" / "schemas"
+ schema_cache_path = self.project_dir_path / "target" / "db" / "schemas"
if schema_cache_path.exists():
- shutil.rmtree(schema_cache_path, ignore_errors=True)
+ try:
+ shutil.rmtree(schema_cache_path)
+ except Exception:
+ logger.warning(
+ "Failed to clear dbt-fusion schema cache at %s",
+ schema_cache_path,
+ exc_info=True,
+ )🤖 Prompt for AI Agents
In `@integration_tests/tests/dbt_project.py` around lines 267 - 278, The
_clear_fusion_schema_cache_if_needed method is deleting the wrong path and
suppressing errors; change the schema_cache_path to self.project_dir_path /
"target" / "db" / "schemas" and perform the removal inside a try/except that
catches Exception as e and logs the failure (e.g., using an existing logger on
the test class) instead of relying on shutil.rmtree(ignore_errors=True); keep
the pre-check with schema_cache_path.exists(), call
shutil.rmtree(schema_cache_path) inside the try, and on exception call the
logger.error with a clear message including the exception.
…a, trino, clickhouse, dremio) Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
macros/utils/data_types/data_type_list.sql (1)
135-152:⚠️ Potential issue | 🟠 MajorRemove
'numeric'from thenumeric_listarray—ClickHouse does not support it as a type alias.ClickHouse's exact numeric types are the Decimal family:
Decimal,Decimal32,Decimal64,Decimal128. Unlike float types which explicitly document aliases (e.g.,FLOAT,REAL),NUMERICis not listed as a supported type alias in the ClickHouse documentation and is not present in thesystem.data_type_familiestable. Keeping it in the list could cause incorrect type classification.
Summary
This PR addresses breaking changes introduced in dbt-fusion versions after
2.0.0-preview.76.Changes:
.dbt-fusion-versionfrom2.0.0-preview.76to2.0.0-preview.102.github/workflows/test-warehouse.ymlto install latest dbt-fusion without reading from version fileignore_small_changesparameter handling inget_anomalies_test_configuration.sql- normalized to dict when None to prevent.items()call on Nonetimestamp_ltzandtimestamp_ntzto Spark/Databricks timestamp type list indata_type_list.sql- fixestest_event_freshness_anomaliesfailuresskip_for_dbt_fusionmarkers to tests with known dbt-fusion conformance issues:test_schema_changes- dbt-fusion caches column information and doesn't refresh when tables are recreatedtest_dbt_invocations-invocation_args_dictis empty in dbt-fusion, so yaml_selector cannot be capturedtest_sample_count_unlimitedandtest_sample_count_small- test config.meta not properly exposed to Jinja context where flatten_test macro reads itCI Status:
fusion/snowflake- PASSINGfusion/databricks_catalog- PASSINGfusion/bigquery- FAILING (infrastructure issue - BigQuery dataset not found/created)fusion/redshift- FAILING (infrastructure issue - relations do not exist)Review & Testing Checklist for Human
ignore_small_changesnormalization to dict doesn't break existing behavior for users who expect Nonetimestamp_ltzandtimestamp_ntzare the correct Spark/Databricks timestamp types (no others missing)skip_for_dbt_fusionmarkers - these are dbt-fusion conformance issues that should be reported upstreamRecommended test plan: Run the fusion tests locally against snowflake/databricks to verify the fixes work as expected. The BigQuery/Redshift failures appear to be dbt-fusion infrastructure issues unrelated to this PR's code changes.
Notes
The skipped tests represent dbt-fusion conformance issues that should be reported upstream:
invocation_args_dictis empty (should contain CLI args)The BigQuery and Redshift failures show that elementary tables/relations are not being created properly. This appears to be a dbt-fusion issue with schema creation on these adapters, not caused by this PR's changes (fusion/snowflake and fusion/databricks_catalog pass with the same code).
Link to Devin run: https://app.devin.ai/sessions/91780843af9a49208c0c87ba18f4682b
Requested by: Itamar Hartstein (@haritamar)
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores
✏️ Tip: You can customize this high-level summary in your review settings.