Add specs for artifact composition #836
Conversation
Walkthrough: This pull request introduces specification documentation for a new feature to compose artifact content inside Jinja2 templates. The changes add a new directory structure under […]

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Deploying infrahub-sdk-python

- Latest commit: e3e8af5
- Status: ✅ Deploy successful!
- Preview URL: https://f09cb4d9.infrahub-sdk-python.pages.dev
- Branch Preview URL: https://pog-artifact-composition-spe.infrahub-sdk-python.pages.dev
Force-pushed 13cefaa to f9ccc45
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dev/specs/001-artifact-composition/checklists/requirements.md`:
- Around line 9-11: The checklist wrongly marks items as complete even though
the spec contains implementation details and is written for SDK implementers;
update the checklist in requirements.md by unchecking the four items at lines
shown (the items about "No implementation details", "Written for non-technical
stakeholders", "Success criteria are technology-agnostic", and "No
implementation details leak") and add a short note stating this is a technical
spec (not a pure PRD) calling out examples such as FR-001/006/010 naming
InfrahubClient/InfrahubClientSync, FR-012 file path
infrahub_sdk/template/exceptions.py, FR-013
client.object_store.get(identifier=storage_id), FR-002/007 naming
FilterDefinition/InfrahubFilters/Jinja2Template, and SC-005 referencing uv run
pytest tests/unit/ so reviewers understand the intentional deviation.
- Around line 34-36: The Markdown file ends with a Notes list but lacks a
trailing blank line; update
dev/specs/001-artifact-composition/checklists/requirements.md by adding a single
blank line after the final list item (the lines starting with "One open question
remains..." / "Ordering guarantee..." / "`from_json`/`from_yaml` existence...")
so there is an empty line at EOF, conforming to the guideline to add blank lines
before and after lists.
In `@dev/specs/001-artifact-composition/spec.md`:
- Around line 157-161: The Clarifications list at the end of the file lacks a
trailing blank line; update the final list in the "Clarifications" section (the
bullet list ending at the end of file) to include a blank line after the list so
the Markdown convention "Add blank lines before and after lists" is satisfied —
ensure there's exactly one empty line following the final list item and that any
preceding list start also has a blank line above it if missing.
- Around line 100-102: FR-007 is inconsistent with the three-context validation
model: registering artifact_content and file_object_content as trusted=False
will equate them with existing filters like safe and break the intended
Prefect-worker allowance; update the spec wording to mark FR-007 as provisional
and add a note that FilterDefinition must be extended (e.g., a new enum or flag)
to distinguish "worker-allowed" filters from general trusted/untrusted, and
reference artifact_content, file_object_content, FilterDefinition,
validate(restricted=True), safe, and JinjaTemplateOperationViolationError so
implementers know to avoid simply copying safe's trusted=False registration and
instead await the enum migration described in Open Questions.
> - [x] No implementation details (languages, frameworks, APIs)
> - [x] Focused on user value and business needs
> - [x] Written for non-technical stakeholders
Four checklist items are marked complete but contradict the spec's content.
| Line | Item | Counter-evidence in spec.md |
|---|---|---|
| 9 | [x] No implementation details (languages, frameworks, APIs) | FR-001/006/010 name `InfrahubClient`/`InfrahubClientSync`; FR-012 names file path `infrahub_sdk/template/exceptions.py`; FR-013 names `client.object_store.get(identifier=storage_id)`; FR-002/007 name `FilterDefinition`, `InfrahubFilters`, `Jinja2Template`. |
| 11 | [x] Written for non-technical stakeholders | The spec is addressed entirely to SDK implementers, with Python class names, method signatures, and SDK module references throughout. |
| 19 | [x] Success criteria are technology-agnostic (no implementation details) | SC-005 names `uv run pytest tests/unit/`; SC-004 names `InfrahubClient`; SC-002 names `validate(restricted=True)`. |
| 30 | [x] No implementation details leak into specification | Same evidence as line 9. |
If this spec intentionally blends product requirements with technical design (a "technical spec" rather than a pure PRD), update the checklist criteria to reflect that, or uncheck these items and add a note explaining the intentional deviation.
Also applies to: 19-19, 29-30
> - One open question remains intentionally: whether to add a Python transform convenience SDK method (FR scope question flagged in Open Questions section, documented for planning phase).
> - Ordering guarantee is explicitly out of scope and documented as a known limitation.
> - `from_json`/`from_yaml` existence in the current filter set is flagged as an assumption to verify during planning.
Missing blank line after the Notes list.
The Notes list ends at line 36 (end of file) with no trailing blank line, violating the Markdown convention.
📝 Proposed fix

```diff
 - `from_json`/`from_yaml` existence in the current filter set is flagged as an assumption to verify during planning.
+
```
> - **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure).
> - **FR-006**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when invoked and no `InfrahubClient` was supplied to `Jinja2Template` at construction time. The error message MUST name the filter and explain that an `InfrahubClient` is required.
> - **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available.
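A minimal sketch of the FR-005/FR-006 error contract quoted above. Only the names taken from the spec (`JinjaFilterError`, `JinjaTemplateError`, `artifact_content`, `InfrahubClient`) are real; the class bodies and message wording are illustrative assumptions.

```python
# Hypothetical sketch of FR-005/FR-006; not the SDK's actual implementation.

class JinjaTemplateError(Exception):
    """Existing base class named in the spec."""

class JinjaFilterError(JinjaTemplateError):
    """New child class the spec introduces for filter-level errors (FR-012)."""

class InfrahubFilters:
    def __init__(self, client=None):
        self._client = client  # an InfrahubClient, or None

    def artifact_content(self, storage_id):
        if self._client is None:
            # FR-006: the error must name the filter and explain that a
            # client is required
            raise JinjaFilterError(
                "artifact_content requires an InfrahubClient; "
                "pass one to Jinja2Template at construction time"
            )
        if not storage_id:
            # FR-005: null or empty storage_id is an error
            raise JinjaFilterError("artifact_content: storage_id is null or empty")
        # FR-005 also covers retrieval failures; the actual
        # client.object_store.get(identifier=storage_id) call is omitted here
        raise NotImplementedError
```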
FR-007 is internally inconsistent with the three-context validation model — mark as provisional.
FR-007 registers both new filters as `trusted=False`, but this puts them in the same bucket as `safe` (and similar currently-untrusted filters). User Story 4, Acceptance Scenario 3 explicitly requires that other `trusted=False` filters still raise `JinjaTemplateOperationViolationError` in the Prefect-worker context. With the current binary `validate(restricted: bool)`, there is no way to simultaneously:

- block `artifact_content` in the computed-attributes context ✓
- allow `artifact_content` in the Prefect-worker context ✓
- continue blocking `safe` in the Prefect-worker context ✓

The Open Questions section (line 151) correctly identifies the enum migration needed, but FR-007 reads as a final requirement rather than a provisional one. An implementer reading FR-007 in isolation could register the new filters identically to `safe` and still satisfy the wording, which would break US4/SC-002.
📝 Suggested wording to make FR-007 provisional

```diff
-- **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available.
+- **FR-007**: Both `artifact_content` and `file_object_content` MUST be blocked in the computed attributes execution context (Infrahub API server) and permitted in the Prefect-worker execution context. *Provisional implementation*: register with `trusted=False` so that the existing `validate(restricted=True)` blocks them initially. **This requirement depends on resolution of the "Validation level model" open question**, which must introduce a dedicated worker-context tag to distinguish these filters from currently-untrusted filters (e.g., `safe`) that must remain blocked even in the Prefect-worker context.
```
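The enum migration this comment describes could look roughly like the following. All names here (`FilterContext`, `allowed_contexts`, `validate_for`) are assumptions for illustration, not the SDK's actual API; the point is only that a per-context set can express the three-way distinction a single `trusted` boolean cannot.

```python
# Sketch of a context-aware FilterDefinition registry (hypothetical names).
from dataclasses import dataclass
from enum import Enum, auto

class FilterContext(Enum):
    COMPUTED_ATTRIBUTES = auto()  # Infrahub API server: most restricted
    PREFECT_WORKER = auto()       # an InfrahubClient is available here
    UNRESTRICTED = auto()         # local/trusted rendering

@dataclass(frozen=True)
class FilterDefinition:
    name: str
    allowed_contexts: frozenset  # replaces the single trusted: bool flag

def validate_for(registry, used_filters, context):
    """Context-aware replacement for the binary validate(restricted=True)."""
    for name in used_filters:
        if context not in registry[name].allowed_contexts:
            raise ValueError(f"filter {name!r} is not permitted in {context.name}")

registry = {
    # 'safe' stays blocked in both the server and the worker contexts
    "safe": FilterDefinition("safe", frozenset({FilterContext.UNRESTRICTED})),
    # the new filters are additionally worker-allowed
    "artifact_content": FilterDefinition(
        "artifact_content",
        frozenset({FilterContext.PREFECT_WORKER, FilterContext.UNRESTRICTED}),
    ),
}
```

With this shape, `validate_for(..., FilterContext.PREFECT_WORKER)` permits `artifact_content` while still rejecting `safe`, which is exactly the combination US4/SC-002 require and `trusted=False` alone cannot express.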
> - Q: Are `artifact_content` and `file_object_content` identical at the storage API level, or do they use different API paths / metadata handling? → A: Different implementations — `file_object_content` uses a different API path or carries different metadata handling than `artifact_content`.
> - Q: Where are these filters permitted to execute, and what mechanism enforces the boundary? → A: Blocked in computed attributes (executed locally in the Infrahub API server, which uses `validate(restricted=True)`); permitted on Prefect workers, which have access to an `InfrahubClient`. The `trusted=False` registration enforces this boundary via the existing restricted-mode validation.
> - Q: What exception class should filter-level errors (no client, retrieval failure) raise? → A: A new `JinjaFilterError` class that is a child of the existing `JinjaTemplateError` base class.
> - Q: Should the SDK expose a convenience method for artifact content retrieval in Python transforms? → A: No new method — document `client.object_store.get(identifier=storage_id)` directly.
> - Q: What should `from_json`/`from_yaml` do on malformed input? → A: Raise `JinjaFilterError` on malformed JSON or YAML input.
Missing blank line after the final list.
The Clarifications list ends at line 161 (end of file) without a trailing blank line, violating the project's Markdown convention.
📝 Proposed fix

```diff
 - Q: What should `from_json`/`from_yaml` do on malformed input? → A: Raise `JinjaFilterError` on malformed JSON or YAML input.
+
```
Force-pushed f9ccc45 to 43e1401
Actionable comments posted: 2
♻️ Duplicate comments (4)
dev/specs/infp-504-artifact-composition/spec.md (2)
**100-102: FR-007 is inconsistent with the three-context validation model (previously flagged).** FR-007 registers `artifact_content` and `file_object_content` as `trusted=False`, equating them with `safe` and other currently-untrusted filters. However, User Story 4 / SC-002 require that the Prefect-worker context permits these two new filters while still blocking `safe`. The existing binary `validate(restricted: bool)` cannot express this distinction, and FR-007 does not acknowledge that dependency: an implementer can satisfy the literal wording of FR-007 by copying `safe`'s registration and still break US4/SC-002.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` around lines 100 - 102, FR-007 is too coarse: registering artifact_content and file_object_content as trusted=False conflates them with filters like safe and cannot express the three-context model required by US4/SC-002; update the design so FilterDefinition supports an execution-scope flag (e.g. allowed_contexts or trusted_for = {"prefect-worker"}) instead of a single boolean, register artifact_content and file_object_content with that worker-only scope, and change validate(restricted: bool) (or add a new validate_for(context)) to check the filter's allowed_contexts (so restricted=True still blocks safe but permits worker-scoped filters when running in the Prefect-worker context).
**157-161: Missing blank line after the Clarifications list (previously flagged).** The list ends at line 161 (end of file) with no trailing blank line, violating the project's Markdown convention.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` around lines 157 - 161, Add a trailing blank line after the Clarifications list that ends with "Q: What should `from_json`/`from_yaml` do on malformed input? → A: Raise `JinjaFilterError` on malformed JSON or YAML input." so the file follows the project's Markdown convention; simply insert one empty newline at the end of the document (after that final answer) in dev/specs/infp-504-artifact-composition/spec.md.

dev/specs/infp-504-artifact-composition/checklists/requirements.md (2)
**9-11: Four checklist items are incorrectly marked complete (previously flagged).** Lines 9, 11, 19, and 30 are checked but directly contradict `spec.md`: FR-001/002/007/010/012/013 contain class names, file paths, method signatures, and CLI commands throughout. If this is intentionally a technical spec rather than a pure PRD, uncheck these items and add a note acknowledging the deviation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/checklists/requirements.md` around lines 9 - 11, Uncheck the four checklist boxes that were incorrectly marked complete and add a brief note acknowledging this spec deviates into technical detail: specifically uncheck the items "No implementation details (languages, frameworks, APIs)", "Focused on user value and business needs", "Written for non-technical stakeholders" (the ones flagged in the review) and append a short note stating that the spec contains technical artifacts (see FR-001, FR-002, FR-007, FR-010, FR-012, FR-013 which include class names, file paths, method signatures, and CLI commands) and that the checklist was intentionally left unchecked to reflect this deviation.
**34-36: Missing blank line after the Notes list (previously flagged).** The file ends at line 36 without a trailing blank line, violating the Markdown convention.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/checklists/requirements.md` around lines 34 - 36, Add a trailing blank line at the end of the Notes list so the file ends with a single newline (ensure the final character is a newline). Locate the end of the Notes list where the last bullet mentions `from_json`/`from_yaml` and append one blank line (or configure the editor/formatter to ensure files end with a newline) to satisfy Markdown conventions.
🧹 Nitpick comments (1)
dev/specs/infp-504-artifact-composition/spec.md (1)
**136-136: The async-to-sync bridge is a non-trivial implementation detail that should be an explicit open question.**

The assumption that async I/O is "handled consistently with the SDK's existing pattern" is too vague for a specification. `InfrahubFilters` exposes synchronous callables to Jinja2 while its underlying `InfrahubClient` is async; bridging async onto a sync call-site (e.g., `asyncio.run()` vs `loop.run_until_complete()` vs thread-based dispatching) behaves differently depending on whether an event loop is already running. The "Sandboxed environment injection" open question only covers client-passing, not execution strategy. Consider promoting this to an explicit open question, e.g.:

> Async-to-sync execution bridge: `InfrahubFilters` exposes sync callables to Jinja2 while `InfrahubClient` is async. The bridging strategy (e.g., `asyncio.run()`, `anyio.from_thread`, or a dedicated thread) must be specified up front, as the choice differs between contexts where an event loop is or isn't already running.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` at line 136, Promote the async-to-sync bridge to an explicit open question in the spec: state that InfrahubFilters exposes synchronous callables while the underlying InfrahubClient is async and require the spec to mandate a bridging strategy (examples: asyncio.run(), loop.run_until_complete() with loop detection, anyio.from_thread, or a dedicated worker thread/executor) and enumerate the behavioral differences when an event loop is already running versus not; also update the "Sandboxed environment injection" section to cross-reference this new "Async-to-sync execution bridge" question so implementers know to choose and document the exact execution strategy for InfrahubFilters/InfrahubClient.
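One of the bridging strategies the nitpick lists, sketched for illustration: dispatch to a dedicated worker thread when a loop is already running, and fall back to `asyncio.run()` otherwise. `run_sync` and `fetch` are hypothetical names, not SDK API.

```python
# Illustrative async-to-sync bridge for a synchronous Jinja2 filter call-site.
import asyncio
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=1)

def run_sync(coro):
    """Execute an async coroutine from synchronous code, loop-running or not."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: asyncio.run is safe.
        return asyncio.run(coro)
    # A loop is already running here; asyncio.run would raise RuntimeError,
    # so run the coroutine on a worker thread with its own event loop.
    return _executor.submit(asyncio.run, coro).result()

async def fetch(storage_id):
    # Stand-in for an async client call such as object-store retrieval.
    return f"content for {storage_id}"
```

This shows why the spec needs to commit to one strategy: the no-loop and running-loop paths have different blocking and error behavior, and only one of them is exercised depending on the execution context.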
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dev/specs/infp-504-artifact-composition/spec.md`:
- Line 115: Change the spec to remove or reword the "ObjectStore /
ObjectStoreSync" line so it doesn't imply a sync path; specifically, delete the
reference to ObjectStoreSync (or explicitly state it is out of scope) and
clarify that InfrahubFilters holds an async InfrahubClient which uses
ObjectStore (not ObjectStoreSync), referencing InfrahubFilters, InfrahubClient,
InfrahubClientSync, ObjectStore, and ObjectStoreSync so readers see that
InfrahubClientSync/ObjectStoreSync are unsupported and only the async
ObjectStore path should be implemented.
- Line 61: US3 AC3 currently allows "an empty dict or appropriate empty value"
for from_json/from_yaml when given an empty string, which conflicts with FR-008
that mandates an empty dict; update the spec and implementations to normalize
empty-string inputs to return an empty dict (not None) so tests in SC-005 align.
Concretely, change the AC3 wording to require "{}" for empty input and ensure
the from_yaml and from_json functions (identify by name: from_yaml, from_json)
special-case empty-string input and return {} instead of relying on
yaml.safe_load("") or json.loads("") behavior; add a short unit test asserting
that from_yaml("") and from_json("") yield {}.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- `dev/specs/infp-504-artifact-composition/checklists/requirements.md`
- `dev/specs/infp-504-artifact-composition/spec.md`
- `specs`
🚧 Files skipped from review as they are similar to previous changes (1)
- specs
> 1. **Given** a template using `storage_id | artifact_content | from_json`, **When** rendered with a storage_id pointing to valid JSON content, **Then** the template can access keys of the parsed object.
> 2. **Given** `storage_id | artifact_content | from_yaml`, **When** rendered with YAML content, **Then** the template can access keys of the parsed mapping.
> 3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict or appropriate empty value without raising.
**US3 AC3 and FR-008 conflict on `from_yaml` empty-string return value.**

Line 61 (US3 AC3) allows "an empty dict or appropriate empty value", which could permit `None`. FR-008 (line 103) mandates "empty dict" unconditionally. In practice, `yaml.safe_load("")` returns `None`, not `{}`, because an empty YAML document is valid and maps to Python `None`. `json.loads("")` raises `json.JSONDecodeError`. Both filters therefore require explicit special-casing; the spec needs to align on exactly one contract (`{}` or `None`) so the test in SC-005 and the implementation agree.
📝 Suggested fix — align AC3 with FR-008

```diff
-3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict or appropriate empty value without raising.
+3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict (`{}`) without raising. *(Note: both filters must explicitly handle the empty-string edge case, as `yaml.safe_load("")` returns `None` and `json.loads("")` raises an error by default.)*
```
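The special-casing this fix calls for could be sketched as follows; the function bodies are illustrative, not the SDK's implementation, and the YAML half assumes PyYAML is available.

```python
# Sketch of from_json/from_yaml with the empty-string contract aligned on {}.
import json

import yaml  # PyYAML

class JinjaFilterError(Exception):
    """Stand-in for the spec's JinjaFilterError (child of JinjaTemplateError)."""

def from_json(value):
    if not value or not value.strip():
        return {}  # FR-008: empty input normalizes to an empty dict
    try:
        return json.loads(value)
    except json.JSONDecodeError as exc:
        raise JinjaFilterError(f"from_json: malformed JSON input: {exc}") from exc

def from_yaml(value):
    if not value or not value.strip():
        return {}  # yaml.safe_load("") would otherwise return None
    try:
        return yaml.safe_load(value)
    except yaml.YAMLError as exc:
        raise JinjaFilterError(f"from_yaml: malformed YAML input: {exc}") from exc
```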
🧰 Tools
🪛 LanguageTool
[style] ~61-~61: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...access keys of the parsed mapping. 3. Given from_json or from_yaml applied to...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dev/specs/infp-504-artifact-composition/spec.md` at line 61, US3 AC3
currently allows "an empty dict or appropriate empty value" for
from_json/from_yaml when given an empty string, which conflicts with FR-008 that
mandates an empty dict; update the spec and implementations to normalize
empty-string inputs to return an empty dict (not None) so tests in SC-005 align.
Concretely, change the AC3 wording to require "{}" for empty input and ensure
the from_yaml and from_json functions (identify by name: from_yaml, from_json)
special-case empty-string input and return {} instead of relying on
yaml.safe_load("") or json.loads("") behavior; add a short unit test asserting
that from_yaml("") and from_json("") yield {}.
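To make the empty-string contract concrete, here is a minimal sketch of what the two filters could look like — illustrative only: the function names match the spec, but the real SDK implementation may differ, and `yaml` below refers to PyYAML:

```python
import json


def from_json(value: str) -> object:
    """Parse a JSON string; empty or whitespace-only input yields {}."""
    if not value or not value.strip():
        return {}  # json.loads("") raises ValueError, so short-circuit
    return json.loads(value)


def from_yaml(value: str) -> object:
    """Parse a YAML string; empty or whitespace-only input yields {}."""
    if not value or not value.strip():
        return {}  # yaml.safe_load("") returns None, so short-circuit
    import yaml  # PyYAML; imported lazily here for illustration

    result = yaml.safe_load(value)
    return {} if result is None else result
```

Normalizing both filters to `{}` keeps the contract deterministic for the unit tests referenced in SC-005.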
| - **`Jinja2Template`**: Gains an optional `client` constructor parameter; delegates client-bound filter registration to `InfrahubFilters`. | ||
| - **`InfrahubFilters`**: New class that holds an `InfrahubClient` reference and exposes `artifact_content`, `file_object_content`, and any other client-dependent filter methods. Registered into the Jinja2 filter map when a client is provided. | ||
| - **`FilterDefinition`**: Existing dataclass used to declare filter `name`, `trusted` flag, and `source`. New entries are added here for all new filters. | ||
| - **`ObjectStore` / `ObjectStoreSync`**: Existing async/sync storage clients used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls. |
ObjectStoreSync is listed as a key entity despite InfrahubClientSync being explicitly unsupported.
FR-001 and FR-010 both rule out InfrahubClientSync, so InfrahubFilters will only ever hold an async InfrahubClient, which internally uses ObjectStore. Listing ObjectStore / ObjectStoreSync with the phrasing "used by InfrahubFilters" implies both are in scope and could mislead implementers into adding a sync code path.
📝 Suggested fix
-- **`ObjectStore` / `ObjectStoreSync`**: Existing async/sync storage clients used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls.
+- **`ObjectStore`**: Existing async storage client used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls. (`ObjectStoreSync` is not used; `InfrahubClientSync` is explicitly out of scope — see FR-001, FR-010.)📝 Committable suggestion
Before: - **`ObjectStore` / `ObjectStoreSync`**: Existing async/sync storage clients used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls.

After: - **`ObjectStore`**: Existing async storage client used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls. (`ObjectStoreSync` is not used; `InfrahubClientSync` is explicitly out of scope — see FR-001, FR-010.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dev/specs/infp-504-artifact-composition/spec.md` at line 115, Change the spec
to remove or reword the "ObjectStore / ObjectStoreSync" line so it doesn't imply
a sync path; specifically, delete the reference to ObjectStoreSync (or
explicitly state it is out of scope) and clarify that InfrahubFilters holds an
async InfrahubClient which uses ObjectStore (not ObjectStoreSync), referencing
InfrahubFilters, InfrahubClient, InfrahubClientSync, ObjectStore, and
ObjectStoreSync so readers see that InfrahubClientSync/ObjectStoreSync are
unsupported and only the async ObjectStore path should be implemented.
| 1. **Given** a `Jinja2Template` constructed with a valid `InfrahubClient` and a template calling `storage_id | artifact_content`, **When** the template is rendered with a data dict containing a valid storage_id string, **Then** the output contains the raw string content fetched from the object store. | ||
| 2. **Given** the same setup but the storage_id is null or the object store cannot retrieve the content, **When** rendered, **Then** the filter raises a descriptive error indicating the retrieval failure. | ||
| 3. **Given** a `Jinja2Template` constructed *without* an `InfrahubClient` and a template calling `artifact_content`, **When** rendered, **Then** an error is raised with a message clearly stating that an `InfrahubClient` is required for this filter. | ||
| 4. **Given** a template using `artifact_content` and `validate(restricted=True)` is called, **Then** a `JinjaTemplateOperationViolationError` is raised, confirming the filter is blocked in local restricted mode. |
Not sure that I understand where this one is coming from
This part might be a bit weird within this context and goes into the technical aspects but it's really targeting what happens within Infrahub. The idea really is that if anyone uses one of these filters as a computed attribute we'd raise an error during schema import to indicate that it's not allowed.
| ### Functional requirements | ||
|
|
| - **FR-001**: `Jinja2Template.__init__` MUST accept an optional `client` parameter of type `InfrahubClient | None` (default `None`). `InfrahubClientSync` is not supported. | ||
| - **FR-002**: A dedicated class (for example, `InfrahubFilters`) MUST be introduced to hold the client reference and expose the Infrahub-specific filter callable methods. `Jinja2Template` instantiates this class when a client is provided and registers its filters into the Jinja2 environment. |
Not clear to me why we need that
The idea is that these new filters would sit in a class that has access to the SDK itself. I.e. when we use something like {{ storage_id | artifact_content }} then artifact_content will be a method of a class which has the SDK initialized so that we can use the SDK to collect the information.
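As a rough sketch of that design — the `InfrahubFilters` name comes from the spec, but the client, error type, and fetch call below are stand-ins rather than the real SDK API:

```python
from typing import Any, Callable


class JinjaFilterError(Exception):
    """Stand-in for the SDK's Jinja filter error type."""


class InfrahubFilters:
    """Holds a client reference and exposes client-bound filter callables."""

    def __init__(self, client: Any) -> None:
        self.client = client

    def artifact_content(self, storage_id: str) -> str:
        if not storage_id:
            raise JinjaFilterError("artifact_content: storage_id is empty")
        # The real implementation would go through the object store,
        # e.g. client.object_store.get(identifier=storage_id).
        return self.client.fetch(storage_id)

    def filters(self) -> dict[str, Callable[..., Any]]:
        """Filter map that the template class merges into its environment."""
        return {"artifact_content": self.artifact_content}
```

Under this shape, `Jinja2Template` would only instantiate the class and register `filters()` when a client was supplied at construction time.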
| - **FR-001**: `Jinja2Template.__init__` MUST accept an optional `client` parameter of type `InfrahubClient | None` (default `None`). `InfrahubClientSync` is not supported. | ||
| - **FR-002**: A dedicated class (for example, `InfrahubFilters`) MUST be introduced to hold the client reference and expose the Infrahub-specific filter callable methods. `Jinja2Template` instantiates this class when a client is provided and registers its filters into the Jinja2 environment. | ||
| - **FR-003**: The system MUST provide an `artifact_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced artifact, using the artifact-specific API path. | ||
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. |
is the storage id enough to pull the content of a file object ?
Yes, although there are other ways that could be applicable to download these files. I'll add a comment around that. I.e. get by HFID and node id
| - **FR-001**: `Jinja2Template.__init__` MUST accept an optional `client` parameter of type `InfrahubClient | None` (default `None`). `InfrahubClientSync` is not supported. | ||
| - **FR-002**: A dedicated class (for example, `InfrahubFilters`) MUST be introduced to hold the client reference and expose the Infrahub-specific filter callable methods. `Jinja2Template` instantiates this class when a client is provided and registers its filters into the Jinja2 environment. | ||
| - **FR-003**: The system MUST provide an `artifact_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced artifact, using the artifact-specific API path. | ||
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. |
Also, there is no mention of permission in this doc but the object file API will check the permission before returning the content, it feels like it should be mentioned in the spec, event if we just bypass it for now
Yes, good point. I've added a requirement that the filters should catch these errors from the API and surface the correct error back.
| - **FR-001**: `Jinja2Template.__init__` MUST accept an optional `client` parameter of type `InfrahubClient | None` (default `None`). `InfrahubClientSync` is not supported. | ||
| - **FR-002**: A dedicated class (for example, `InfrahubFilters`) MUST be introduced to hold the client reference and expose the Infrahub-specific filter callable methods. `Jinja2Template` instantiates this class when a client is provided and registers its filters into the Jinja2 environment. | ||
| - **FR-003**: The system MUST provide an `artifact_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced artifact, using the artifact-specific API path. | ||
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. |
What if the object return is in binary format and not text based ?
I think for now the best approach might be to raise an error, or do you have any other thoughts?
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. | ||
| - **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure). | ||
| - **FR-006**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when invoked and no `InfrahubClient` was supplied to `Jinja2Template` at construction time. The error message MUST name the filter and explain that an `InfrahubClient` is required. | ||
| - **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. |
Could instead of checking at runtime if once of these filters is used in a computed attribute, could we do a sanity check when the user add / update a computed attribute instead ?
Yes for sure, this would be on the side of the Infrahub project but the idea is to keep the current approach of blocking the schema from loading when this happens. But I agree it's vague within the SDK context here.
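A simplified illustration of how a `trusted` flag on `FilterDefinition` can drive restricted-mode validation, as described in FR-007 — the registry contents and `validate` signature here are assumptions for the sketch, not the SDK's actual API:

```python
from dataclasses import dataclass


class JinjaTemplateOperationViolationError(Exception):
    """Raised when a restricted template uses an untrusted filter."""


@dataclass(frozen=True)
class FilterDefinition:
    name: str
    trusted: bool


FILTER_REGISTRY = {
    "upper": FilterDefinition(name="upper", trusted=True),
    "artifact_content": FilterDefinition(name="artifact_content", trusted=False),
    "file_object_content": FilterDefinition(name="file_object_content", trusted=False),
}


def validate(used_filters: list[str], restricted: bool = False) -> None:
    """Reject untrusted filters when validating in restricted mode."""
    if not restricted:
        return
    for name in used_filters:
        definition = FILTER_REGISTRY.get(name)
        if definition is not None and not definition.trusted:
            raise JinjaTemplateOperationViolationError(
                f"filter '{name}' is not permitted in restricted mode"
            )
```

On the Infrahub side, the same check at schema-load time is what would turn a computed attribute using these filters into a schema violation.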
7642b5d to e3e8af5 Compare

@dgarros, I've pushed a new commit with some clarifications along with the comments above.
Actionable comments posted: 2
♻️ Duplicate comments (5)
dev/specs/infp-504-artifact-composition/spec.md (5)
61-61: **`from_yaml` empty-string return value still conflicts between AC3 and FR-008 — align on `{}`**

AC3 still permits "an empty dict or appropriate empty value" (which could be `None`), while FR-008 mandates "empty dict" unconditionally. `yaml.safe_load("")` returns `None` and `json.loads("")` raises by default — both filters need explicit special-casing, and the spec must settle on a single contract for tests to be deterministic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` at line 61, AC3 and FR-008 conflict on how from_yaml/from_json handle empty strings; update the spec and implementations so both filters (from_yaml and from_json) explicitly special-case an empty string input and return an empty dict ({}) consistently; change the spec text in the clause that currently reads "an empty dict or appropriate empty value" to mandate "empty dict" and ensure the from_yaml and from_json code paths check for === "" (or equivalent) before calling yaml.safe_load/json.loads and return {} instead of None or raising.
116-116: **`ObjectStoreSync` entry implies a sync path that is explicitly out of scope**

FR-001 and FR-010 both rule out `InfrahubClientSync`. Listing `ObjectStore / ObjectStoreSync` as entities "used by `InfrahubFilters`" implies both are in scope and may mislead implementers into adding a sync code path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` at line 116, The spec line listing "ObjectStore / ObjectStoreSync" is misleading given FR-001 and FR-010 rule out any sync client (e.g., InfrahubClientSync); update the sentence around InfrahubFilters to remove or mark ObjectStoreSync as out of scope and explicitly state only the async ObjectStore interface is supported by InfrahubFilters, referencing ObjectStore, ObjectStoreSync, InfrahubFilters, FR-001 and FR-010 so implementers do not add a sync path.
100-102: **FR-007 still reads as a final requirement despite being irreconcilable with the three-context model — mark as provisional**

US4 (line 69) correctly states the boolean `validate(restricted: bool)` is insufficient. FR-007 still unconditionally registers both filters as `trusted=False`, equating them with `safe` and making the Prefect-worker allowance (US4/SC-002) unachievable without breaking restriction of `safe` in that same context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` around lines 100 - 102, FR-007 currently mandates registering artifact_content and file_object_content as trusted=False which conflicts with the three-context model and US4/SC-002; update the spec text for FR-007 to mark it provisional and clarify the implementation constraint: remove the unconditional "trusted=False" requirement and instead state that the FilterDefinition registration must support a three-context policy (e.g., separate flags or a registry that distinguishes server-restricted vs worker-allowed execution), reference the existing Jinja2Template and validate(restricted=True) behavior, and note that InfrahubClient-dependent filters must be blocked in the API server context but permitted on Prefect workers (and that final trust semantics will be decided once the three-context registry API is designed).
158-162: **Missing blank line after the final list in the Clarifications section**

The file ends at line 162 without a trailing blank line after the list, violating the project's Markdown convention.
As per coding guidelines: "Add blank lines before and after lists in Markdown files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` around lines 158 - 162, The Clarifications section's final bulleted list lacks the required trailing blank line; open the Clarifications section in spec.md, locate the final list (the last set of bullets in that section) and add a single blank line after the list (i.e., ensure there is an empty newline at the end of the file) to comply with the project's Markdown rule to have blank lines before and after lists.
137-137: ⚠️ Potential issue | 🟠 Major

**Spec leaves the async-from-sync execution bridge unspecified — a critical implementation gap**

The Assumptions section states that `InfrahubFilters` provides synchronous callables to Jinja2's filter map while the underlying client is `InfrahubClient` (async), with the bridge deferred to "the SDK's existing pattern." Jinja2 filters are invoked synchronously inside the render loop, so driving an async coroutine from that call site is non-trivial (e.g., `asyncio.get_event_loop().run_until_complete()`, `anyio.from_thread.run_sync()`, `nest_asyncio`, or a dedicated thread executor). The choice materially affects correctness under event-loop nesting, thread safety, and reentrancy.
✏️ Suggested clarification in Assumptions
-- The `InfrahubFilters` class provides synchronous callables to Jinja2's filter map; the underlying client is always `InfrahubClient` (async). Async I/O calls are handled consistently with the SDK's existing pattern.
+- The `InfrahubFilters` class provides synchronous callables to Jinja2's filter map; the underlying client is always `InfrahubClient` (async). The mechanism for driving async coroutines from the synchronous Jinja2 filter call site (for example: `asyncio.run()`, an existing SDK helper, or a thread-pool executor pattern) MUST be specified in the implementation plan. See Open Questions.

Add a corresponding Open Question entry:
+- **Async-from-sync bridge**: `InfrahubFilters` filter callables are synchronous (required by Jinja2) but must invoke `InfrahubClient` async methods. The exact pattern — `asyncio.run()`, `anyio.from_thread`, `nest_asyncio`, or other — must be decided before implementation to avoid event-loop nesting issues.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dev/specs/infp-504-artifact-composition/spec.md` at line 137, The spec currently leaves the async-from-sync bridge for InfrahubFilters unspecified; update the Assumptions to explicitly pick or reference the mechanism for driving InfrahubClient (async) from Jinja2's synchronous filter calls by either (a) naming the exact pattern/library to use (e.g., anyio/from_thread, a dedicated thread executor, nest_asyncio, or asyncio.run_coroutine_threadsafe), (b) referencing the existing SDK helper function that performs this bridge (refer to the helper by name) so implementers know which utility to call, or (c) add an Open Question entry that records this decision must be resolved before implementation; include references to the InfrahubFilters class and InfrahubClient in the text so the linkage is explicit.
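One possible shape of the bridge under discussion — purely illustrative, assuming no event loop is already running in the rendering thread; the helper names below are invented for the sketch:

```python
import asyncio
from typing import Any


async def _fetch_content(client: Any, storage_id: str) -> str:
    """Stand-in for an async client call such as object_store.get(...)."""
    return await client.get(identifier=storage_id)


def artifact_content_bridge(client: Any, storage_id: str) -> str:
    """Synchronous Jinja2-facing wrapper that drives the async client call."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: safe to drive the coroutine directly.
        return asyncio.run(_fetch_content(client, storage_id))
    # A loop is already running (e.g. rendering inside an async app);
    # asyncio.run() would raise here, so a different bridge (thread
    # executor, anyio.from_thread, ...) is required in that context.
    raise RuntimeError("cannot bridge async call from inside a running event loop")
```

Whichever mechanism the spec settles on, the running-loop case is the one that needs an explicit decision, since it is exactly the situation inside an async application.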
| - **FR-003**: The system MUST provide an `artifact_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced artifact, using the artifact-specific API path. | ||
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. | ||
| - **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure). |
Missing requirement: binary content response from the object store is unhandled
FR-003, FR-004, and FR-005 cover null/empty storage_id, not-found, network, and auth failures, but none of them addresses the case where the object store returns binary (non-text) content. This was explicitly raised in review discussion (dgarros asked; ogenstad agreed to raise an error), yet no FR captures it. An implementer reading the spec alone would have no guidance on this path.
Add a requirement to FR-005 (or a new FR) specifying that artifact_content and file_object_content MUST raise JinjaFilterError with a descriptive message when the retrieved content is binary rather than a decodable UTF-8 string.
✏️ Proposed addition to FR-005
-- **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure).
+- **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure). If the retrieved content cannot be decoded as a UTF-8 string (i.e., it is binary), the filter MUST also raise `JinjaFilterError` with a message identifying the filter and the binary-content condition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dev/specs/infp-504-artifact-composition/spec.md` around lines 98 - 100,
FR-005 is missing guidance for binary (non-text) responses from the object
store: update FR-005 (or add a new FR) to require that both artifact_content and
file_object_content detect binary/non-decodable UTF-8 payloads and raise
JinjaFilterError with a clear, descriptive message (e.g., indicating the
storage_id and that content is binary/non-UTF-8) whenever the retrieved content
cannot be decoded as UTF-8; reference the artifact_content and
file_object_content filters and the JinjaFilterError class in the requirement so
implementers handle this path explicitly.
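A minimal sketch of the proposed binary-content guard — the error type and helper name are stand-ins, not the SDK's actual API:

```python
class JinjaFilterError(Exception):
    """Stand-in for the SDK's Jinja filter error type."""


def decode_object_content(storage_id: str, raw: bytes) -> str:
    """Return text content, rejecting binary payloads as proposed for FR-005."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError as exc:
        raise JinjaFilterError(
            f"content for storage_id '{storage_id}' is binary (not valid UTF-8)"
        ) from exc
```

Both filters could run retrieved bytes through a check like this before returning the string into the template.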
| - **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`. | ||
| - **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure). | ||
| - **FR-006**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when invoked and no `InfrahubClient` was supplied to `Jinja2Template` at construction time. The error message MUST name the filter and explain that an `InfrahubClient` is required. | ||
| - **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. Within Infrahub any Jinja2 based computed attributes that use these new filters should cause a schema violation when loading the schema. |
Grammar: "Jinja2-based" requires a hyphen
"Jinja2 based computed attributes" should be "Jinja2-based computed attributes" (compound modifier before a noun).
✏️ Proposed fix
-- **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. Within Infrahub any Jinja2 based computed attributes that use these new filters should cause a schema violation when loading the schema.
+- **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. Within Infrahub any Jinja2-based computed attributes that use these new filters should cause a schema violation when loading the schema.

📝 Committable suggestion
Before: - **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. Within Infrahub any Jinja2 based computed attributes that use these new filters should cause a schema violation when loading the schema.

After: - **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available. Within Infrahub any Jinja2-based computed attributes that use these new filters should cause a schema violation when loading the schema.
🧰 Tools
🪛 LanguageTool
[grammar] ~102-~102: Use a hyphen to join words.
Context: ...is available. Within Infrahub any Jinja2 based computed attributes that use these...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dev/specs/infp-504-artifact-composition/spec.md` at line 102, Update the
grammar in the FR-007 sentence: change "Jinja2 based computed attributes" to
"Jinja2-based computed attributes" so the compound modifier is hyphenated;
ensure the surrounding sentence referencing FR-007 and symbols like
artifact_content, file_object_content, FilterDefinition, and
validate(restricted=True) remains unchanged except for this hyphenation fix.
Why
Provide specifications for INFP-504
What changed
Add Spec Kit specifications
Summary by CodeRabbit