UN-3056 [FEAT] Filter notifications by run outcome #1936

Draft

kirtimanmishrazipstack wants to merge 12 commits into main from UN-3056-Notify-on-API-deployment-failures

Conversation


@kirtimanmishrazipstack kirtimanmishrazipstack commented Apr 29, 2026

What

  • Add notify_on field to Notification with three modes: ALL, FAILURES_ONLY, SUCCESS_ONLY (defaults to ALL).
  • Filter dispatch in APINotification.send and PipelineNotification.send so each subscription's notify_on preference is honored against the run's terminal status.
  • Surface the new option as a "Notify on" select in the notification create/edit form.

Why

Notifications previously fired on every workflow completion, so users who wanted failure-only alerts (the UN-3056 ask) or success-only acknowledgements got noise from the other outcome. The new field lets each subscription opt in to the run outcomes it cares about.

How

The mode is enforced in the dispatch layer: on terminal failure (ERROR for API deployments, FAILURE for ETL pipelines) the queryset excludes SUCCESS_ONLY rows; on terminal success it excludes FAILURES_ONLY rows; for non-terminal / STOPPED statuses only ALL rows fire. Defaulting to ALL keeps existing rows on their previous "every completion" behavior.
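The dispatch rule above can be sketched as a pure function. This is a simplified model: the real change filters a Django queryset in the dispatch layer, and the names `NotifyOn` and `eligible` here are illustrative, not the codebase's actual API.

```python
from enum import Enum


class NotifyOn(str, Enum):
    ALL = "ALL"
    FAILURES_ONLY = "FAILURES_ONLY"
    SUCCESS_ONLY = "SUCCESS_ONLY"


def eligible(subscriptions, status, failure_statuses=("ERROR", "FAILURE")):
    """Return the notify_on modes that should fire for a run's terminal status."""
    if status in failure_statuses:
        # Terminal failure (ERROR for API deployments, FAILURE for ETL
        # pipelines): exclude SUCCESS_ONLY rows.
        excluded = NotifyOn.SUCCESS_ONLY
    elif status == "SUCCESS":
        # Terminal success: exclude FAILURES_ONLY rows.
        excluded = NotifyOn.FAILURES_ONLY
    else:
        # Non-terminal / STOPPED: only ALL rows fire.
        return [s for s in subscriptions if s is NotifyOn.ALL]
    return [s for s in subscriptions if s is not excluded]
```

Because existing rows default to ALL, they pass every branch and keep the old fire-on-every-completion behavior.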

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins, do not merge the PR without this section filled.)

No. The new field defaults to ALL for existing rows, preserving the historical "fire on every completion" behavior. The dispatch change only narrows the active queryset based on notify_on; no existing caller signatures change.

Database Migrations

  • backend/notification_v2/migrations/0002_notification_notify_on.py — adds the notify_on column to notification with default ALL.

Env Config

  • None

Relevant Docs

Related Issues or PRs

  • UN-3056

Dependencies Versions

  • None

Notes on Testing

  • pytest backend/notification_v2/tests/test_notification_filter.py covers APINotification.send and PipelineNotification.send across ALL / FAILURES_ONLY / SUCCESS_ONLY × each terminal status, plus a mixed-partition case per dispatch path.
  • Manual: in the notification modal for an API deployment and an ETL pipeline, set "Notify on" to "On failures only" and "On success only" and confirm webhooks fire only on the matching run outcome.

Screenshots

Checklist

I have read and understood the Contribution Guidelines.

kirtimanmishrazipstack and others added 6 commits April 24, 2026 16:25
…down rendering (#1927)

* [FIX] Make tool-run logs visible in workflow execution UI

Two stacked gaps were keeping tool-level log lines (Processing prompt,
Running LLM completion, lookup calls, etc.) out of the workflow
execution logs UI and the execution_log DB table for API / workflow
runs:

1. Empty log_events_id.  structure_tool_task seeded LOG_EVENTS_ID in
   StateStore but never threaded it into pipeline_ctx / agentic_ctx.
   ExecutorToolShim.stream_log gated publishing on
   self.log_events_id, so every tool-level log was dropped before it
   ever reached the broker.

2. Wrong payload shape.  Even with the channel threaded,
   stream_log used LogPublisher.log_progress(...) whose payload omits
   execution_id / organization_id / file_execution_id.
   get_validated_log_data (log_utils.py) requires those IDs and
   LogType == LOG to persist to execution_log, so tool-level messages
   were silently filtered at the Redis->DB drain step — orchestration
   logs persisted, tool logs did not.

Fixes:
- ExecutionContext gains execution_id + file_execution_id, populated
  in structure_tool_task for both the legacy pipeline and agentic
  contexts.
- LegacyExecutor caches the three IDs on self during execute() and
  passes them into every ExecutorToolShim construction
  (~7 callsites).
- ExecutorToolShim.stream_log now dual-emits: PROGRESS (unchanged,
  drives the IDE prompt-card live progress pane) plus LOG carrying
  the workflow IDs (feeds the workflow execution logs UI and persists
  to execution_log via the existing drain). LOG emission is gated on
  execution_id + organization_id being present, so bare IDE test
  runs without a workflow still behave as before.

Rendering polish
- The LogModal and pipeline LogsModal now pipe log text through the
  existing CustomMarkdown renderer, so backticked identifiers render
  as inline-code pills and embedded newlines break lines. This lets
  multi-line structured events (e.g. the lookup pre-call trio)
  surface as a single row with readable inner formatting.
- Prompt-key mentions inside legacy_executor tool logs are wrapped
  in backticks for consistency with the rest of the log surface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
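The dual PROGRESS/LOG emission this commit describes can be sketched roughly as follows. The function shape, payload keys, and the injectable `publish` hook are assumptions for illustration, not the shim's actual signature:

```python
def stream_log(message, *, log_events_id=None, execution_id=None,
               organization_id=None, file_execution_id=None, publish=print):
    """Sketch of the dual-emit gating: PROGRESS for the IDE pane, LOG for
    the workflow logs UI / execution_log drain."""
    emitted = []
    if log_events_id:
        # PROGRESS (unchanged): drives the IDE prompt-card live progress pane.
        publish(log_events_id, {"type": "PROGRESS", "message": message})
        emitted.append("PROGRESS")
    if execution_id and organization_id:
        # LOG is gated on the workflow IDs the Redis->DB drain requires;
        # bare IDE test runs without a workflow skip this branch.
        publish(log_events_id, {
            "type": "LOG",
            "message": message,
            "execution_id": execution_id,
            "organization_id": organization_id,
            "file_execution_id": file_execution_id,
        })
        emitted.append("LOG")
    return emitted
```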

* [FIX] Wrap prompt_name in backticks in remaining stream_log calls

Completes the consistency pass on tool-run log formatting: the table-
and line-item-extraction success and error paths still emitted prompt
names without backticks, so the markdown-rendered logs UI showed them
as bare text instead of inline-code pills. Matches the pattern already
applied to the other 9 stream_log calls in this file.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [FIX] Validate URL schemes in CustomMarkdown link renderer

Workflow logs rendered via CustomMarkdown can contain tool-generated or
user-derived content, so an untrusted `[text](url)` sequence could
inject a `javascript:` or `data:` scheme and get clickable through
antd `Typography.Link`. Allow-list the safe external schemes (http,
https, mailto, tel) before rendering as a link; everything else falls
back to plain text while still honouring the existing internal-path
branch used for in-app navigation.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
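The actual guard is frontend JavaScript, but the same allow-list logic can be sketched in Python for illustration (the scheme set comes from the commit message above; the function name mirrors the later `isSafeExternalUrl` helper):

```python
from urllib.parse import urlparse

SAFE_SCHEMES = {"http", "https", "mailto", "tel"}


def is_safe_external_url(url: str) -> bool:
    """Allow-list check: only well-known external schemes render as links."""
    # No base URL is supplied, so bare strings ("javascript", "../foo")
    # and protocol-relative URLs ("//host/...") parse without a scheme
    # and fail closed instead of resolving against the page origin.
    parsed = urlparse(url)
    if not parsed.scheme:
        return False
    return parsed.scheme.lower() in SAFE_SCHEMES
```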

* [FIX] Thread workflow IDs into remaining shim/context callsites

Addresses CodeRabbit review gaps so the log-plumbing fix is consistent
across every pre-dispatch and plugin-dispatch path:

- `table_ctx` / `line_item_ctx` in `legacy_executor.py` now carry
  `log_events_id`, `execution_id`, `file_execution_id` from context so
  downstream table/line-item plugins that build their own
  `ExecutorToolShim` pass the `execution_id + organization_id` gate
  and emit workflow LOG payloads.
- `structure_tool_task.py` threads the same IDs into the bare
  pre-dispatch shim, so `X2Text.process()` calls during agentic
  extraction reach the workflow logs UI.
- `LogsModal.jsx` stores the raw log string in row data and lets the
  column renderer wrap it in `CustomMarkdown` — the previous map
  stored a `<CustomMarkdown />` element that was then passed back into
  `CustomMarkdown.text`, producing `[object Object]` for multi-row
  lookups.
- Dropped `getattr(context, ...)` on `execution_id` /
  `file_execution_id` now that they are dataclass fields — matches the
  direct access used for `organization_id`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [REFACTOR] Trim overly specific comments in log-plumbing changes

Pass through the new comments added across this PR and either remove or
tighten the ones that restate what the code already shows. Keep only
the WHY lines that protect future readers from missing a non-obvious
constraint (XSS guard in CustomMarkdown, dual PROGRESS/LOG emission in
the shim, pre-dispatch shim needing workflow IDs so X2Text logs are
not silently dropped).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [REFACTOR] Extract isSafeExternalUrl into shared helpers module

Moves the URL scheme allow-list check out of CustomMarkdown into
helpers/urlSafety.js so any future component that renders links from
user- or tool-derived content can reuse the same guard instead of
re-implementing it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [FIX] Tighten URL guard, split publish try/excepts, and extract shim builder

Addresses the must-fix and worth-doing comments from the PR review:

Security
- CustomMarkdown: treat protocol-relative URLs (`//host/...`) as external,
  not internal, so they can no longer skip the scheme guard via the
  `startsWith("/")` branch.
- `isSafeExternalUrl`: drop the `window.location.origin` base so bare
  strings ("javascript", "../foo") fail to parse instead of silently
  resolving to `https://<origin>/...` and passing the scheme check.

Silent failure + comment accuracy
- ExecutorToolShim.stream_log: split the PROGRESS and LOG publish paths
  into separate try/except blocks so a LogDataDTO validation failure on
  the LOG payload is no longer mis-attributed to "progress publish
  failed". Corrected the inline comments — the DB drop is driven by
  LogPublisher's `payload.type == 'LOG'` check, and only
  `execution_id` + `organization_id` are strictly required.

Refactor
- New `LegacyExecutor._build_shim()` helper — all seven
  ExecutorToolShim callsites now share one construction path so the
  workflow-ID plumbing can't drift out of sync across sites again.
- Thread `execution_id` / `file_execution_id` into the seven
  self-dispatched sub-`ExecutionContext`s alongside `log_events_id`,
  matching the table/line-item sites and keeping the context
  consistent for any downstream consumer that reads the IDs from the
  context rather than from the executor instance.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [FIX] Address remaining type-design and silent-failure comments

- ExecutionContext: drop the BE-coupled inline comment, document the
  new IDs in the Attributes block, and enforce the invariant that
  execution_id implies organization_id via __post_init__.
- ExecutorToolShim: typed the three new IDs as str | None instead of
  str = "" so the signature matches the Optional semantics already
  enforced by the runtime guards.
- LegacyExecutor: move per-request state to __init__ so _log_component
  is no longer a class-level mutable default shared across instances;
  stop silently coercing None IDs to ""; add a one-shot warning when a
  tool-sourced run lands without workflow IDs so the silent-no-persist
  case is visible in GKE logs.
- structure_tool_task: emit the same warning when LOG_EVENTS_ID is
  absent from StateStore.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [FIX] Surface first publish failure per shim at WARN

Both PROGRESS and LOG publish paths previously swallowed every broker
failure at DEBUG, so a misconfigured or down Redis broker meant every
tool-level log silently vanished with no operator-visible signal.

Track a per-shim _progress_publish_failed / _log_publish_failed flag
and log the first failure at WARNING (with traceback), then downgrade
subsequent failures on the same shim back to DEBUG. Preserves the
non-fatal semantics of the publish path while making broker outages
visible in GKE logs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
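A minimal sketch of the first-failure-at-WARNING pattern described here. The class and method names are hypothetical; only the escalate-once-then-downgrade behavior is taken from the commit:

```python
import logging

logger = logging.getLogger(__name__)


class PublishPath:
    """Per-shim publish path that surfaces the first broker failure at
    WARNING (with traceback) and downgrades repeats to DEBUG."""

    def __init__(self, name: str):
        self.name = name
        self._failed_once = False

    def report_failure(self, exc: Exception) -> int:
        # First failure is operator-visible; later ones stay quiet so a
        # flapping broker does not flood the logs.
        level = logging.DEBUG if self._failed_once else logging.WARNING
        self._failed_once = True
        logger.log(level, "%s publish failed: %s", self.name, exc, exc_info=True)
        return level
```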

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* [FIX] Auto-bump modified_at on QuerySet.update() and bulk_update()

Django's auto_now=True only fires on Model.save(); QuerySet.update() and
bulk_update() bypass save(), so BaseModel.modified_at silently stayed at
the creation time for every bulk-path write. Audit trail drifted.

Introduce BaseModelQuerySet that injects modified_at=timezone.now() into
both paths, and expose it via BaseModelManager. Migrate all custom
managers on BaseModel subclasses to compose BaseModelManager so their
querysets inherit the overrides. Drop the ad-hoc modified_at=now() kwarg
in FileHistoryHelper now that the queryset handles it.

* [FIX] Materialize objs in BaseModelQuerySet.bulk_update to support generators

Addresses PR review: if callers pass a non-rewindable iterable (generator,
queryset iterator), the modified_at stamping loop would exhaust it before
super().bulk_update() saw it, silently updating zero rows. list(objs) up
front keeps generator callers working.

Also drop the mock-based unit test — it needed django.setup() at module
import which isn't viable without pytest-django, and proper DB-backed
coverage is tracked separately.

* [FIX] Auto-inject modified_at into BaseModel.save(update_fields=...)

Django only runs auto_now for fields listed in update_fields, so every
save(update_fields=["foo"]) on a BaseModel subclass silently drops the
modified_at bump — same family of bug as QuerySet.update/bulk_update.

Override BaseModel.save() to add modified_at to update_fields whenever
the caller supplies a restricted list without it. Also drop two dead
manual-assignment lines (execution.modified_at = timezone.now() before
save()) that were redundant with auto_now on a full save().

* [FIX] Auto-bump modified_at on upsert bulk_create and drop workarounds

QuerySet.bulk_create(update_conflicts=True, update_fields=[...]) runs an
UPDATE on conflict with only the listed fields — same auto_now-bypass as
save(update_fields=...) and QuerySet.update(). Patch BaseModelQuerySet's
bulk_create to inject modified_at into update_fields on upsert.

With that in place, the explicit "modified_at" entries in dashboard_metrics
upsert callers are redundant. Drop them.

* [REFACTOR] Tighten BaseModel auto-bump helpers and edge cases

- Extract `_with_modified_at` helper; single source of truth for the "inject
  modified_at into a partial field list" rule across `bulk_update`,
  `bulk_create` and `BaseModel.save`.
- Preserve Django's documented `save(update_fields=[])` no-op (signals-only
  save, no column writes) instead of rewriting it to `["modified_at"]`.
  Apply the same guard to `bulk_create(update_conflicts=True, update_fields=[])`.
- Match Django's positional `save()` signature (`force_insert`, `force_update`,
  `using`, `update_fields`) so callers passing flags positionally still hit
  the auto-bump override.
- Skip the per-obj `modified_at` stamp + `objs` materialization in
  `bulk_update` when the caller already listed `modified_at` — lets the
  opt-in path stay O(1) before the `super()` delegation.
- Docstring corrections: "previous save() timestamp" (not just creation
  time); manager-level convention note; precise `auto_now` semantics
  (attribute still updates in-memory, just isn't persisted without
  `update_fields` inclusion).
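The `_with_modified_at` injection rule described in this commit can be modeled as a small pure function. This is a sketch of the stated semantics, not the actual helper:

```python
def with_modified_at(update_fields):
    """Inject "modified_at" into a restricted field list so auto_now-style
    bumps survive partial saves, preserving Django's documented no-op for
    an empty list."""
    if update_fields is None:
        # Full save: auto_now already fires, nothing to inject.
        return None
    fields = list(update_fields)
    if not fields:
        # save(update_fields=[]) stays a signals-only no-op.
        return fields
    if "modified_at" not in fields:
        fields.append("modified_at")
    return fields
```

Callers that already list `modified_at` pass through unchanged, which is what keeps the opt-in `bulk_update` path O(1) before delegation.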
…wered table extraction (#1914)

* Execution backend - revamp

* async flow

* Streaming progress to FE

* Removing multi hop in Prompt studio ide and structure tool

* UN-3234 [FIX] Add beta tag to agentic prompt studio navigation item

* Added executors for agentic prompt studio

* Added executors for agentic prompt studio

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Removed redundant envs

* adding worker for callbacks

* adding worker for callbacks

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* fix: write output files in agentic extraction pipeline

Agentic extraction returned early without writing INFILE (JSON) or
METADATA.json, causing destination connectors to read the original PDF
and fail with "Expected tool output type: TXT, got: application/pdf".

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests (#1850)

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests

Replace hardcoded /tmp/ paths (SonarCloud S5443 security hotspots) with
pytest's tmp_path fixture or module-level tempfile.mkdtemp() constants
in all affected test files to avoid world-writable directory vulnerabilities.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update docs

* UN-3266 fix: remove dead code with undefined names in fetch_response

Remove unreachable code block after the async callback return in
fetch_response that still referenced output_count_before and response
from the old synchronous implementation, causing ruff F821 errors.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Un 3266 fix security hotspot tmp paths (#1851)

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests

Replace hardcoded /tmp/ paths (SonarCloud S5443 security hotspots) with
pytest's tmp_path fixture or module-level tempfile.mkdtemp() constants
in all affected test files to avoid world-writable directory vulnerabilities.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve ruff linting failures across multiple files

- B026: pass url positionally in worker_celery.py to avoid star-arg after keyword
- N803: rename MockAsyncResult to mock_async_result in test_tasks.py
- E501/I001: fix long line and import sort in llm_whisperer helper
- ANN401: replace Any with object|None in dispatcher.py; add noqa in test helpers
- F841: remove unused workflow_id and result assignments

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* UN-3266 fix: resolve SonarCloud bugs S2259 and S1244 in PR #1849

- S2259: guard against None after _discover_plugins() in loader.py
  to satisfy static analysis on the dict[str,type]|None field type
- S1244: replace float equality checks with pytest.approx() in
  test_answer_prompt.py and test_phase2h.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve SonarCloud code smells in PR #1849

- S5799: Merge all implicit string concatenations in log messages
  (legacy_executor.py, tasks.py, dispatcher.py, orchestrator.py,
   registry.py, variable_replacement.py, structure_tool_task.py)
- S1192: Extract duplicate literal to _NO_CELERY_APP_MSG constant in
  dispatcher.py
- S1871: Merge identical elif/else branches in tasks.py and
  test_sanity_phase6j.py
- S1186: Add comment to empty stub method in test_sanity_phase6a.py
- S1481: Remove unused local variables in test_sanity_phase6d/e/f/g/h/j
  and test_phase5d.py
- S117: Rename PascalCase local variables to snake_case in
  test_sanity_phase3/5/6i.py
- S5655: Broaden tool type annotation to StreamMixin in
  IndexingUtils.generate_index_key and PlatformHelper.get_adapter_config
- docker:S7031: Merge consecutive RUN instructions in
  worker-unified.Dockerfile
- javascript:S1128: Remove unused pollForCompletion import in
  usePromptRun.js

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: wrap long log message in dispatcher.py to fix E501

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve remaining SonarCloud S117 naming violations

Rename PascalCase local variables to snake_case to comply with S117:

- legacy_executor.py: rename tuple-unpacked _get_prompt_deps() results
  (AnswerPromptService→answer_prompt_svc, RetrievalService→retrieval_svc,
  VariableReplacementService→variable_replacement_svc, LLM→llm_cls,
  EmbeddingCompat→embedding_compat_cls, VectorDB→vector_db_cls) and
  update all downstream usages including _apply_type_conversion and
  _handle_summarize
- test_phase1_log_streaming.py: rename Mock* local variables to
  mock_* snake_case equivalents
- test_sanity_phase3.py: rename MockDispatcher→mock_dispatcher_cls
  and MockShim→mock_shim_cls across all 10 test methods
- test_sanity_phase5.py: rename MockShim→mock_shim, MockX2Text→mock_x2text
  in 6 test methods; MockDispatcher→mock_dispatcher_cls in dispatch test;
  fix LLM_cls→llm_cls, EmbeddingCompat→embedding_compat_cls,
  VectorDB→vector_db_cls in _mock_prompt_deps helper

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: resolve remaining SonarCloud code smells in PR #1849

- test_sanity_phase2/4.py, test_answer_prompt.py: rename PascalCase
  local variables in _mock_prompt_deps/_mock_deps to snake_case
  (RetrievalService→retrieval_svc, VariableReplacementService→
  variable_replacement_svc, Index→index_cls, LLM_cls→llm_cls,
  EmbeddingCompat→embedding_compat_cls, VectorDB→vector_db_cls,
  AnswerPromptService→answer_prompt_svc_cls) — fixes S117
- test_sanity_phase3.py: remove unused local variable "result" — fixes S1481
- structure_tool_task.py: remove redundant json.JSONDecodeError from
  except clause (subclass of ValueError) — fixes S5713
- shared/workflow/execution/service.py: replace generic Exception with
  RuntimeError for structure tool failure — fixes S112
- run-worker-docker.sh: define EXECUTOR_WORKER_TYPE constant and
  replace 10 literal "executor" occurrences — fixes S1192

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve SonarCloud cognitive complexity and code smell violations

- Reduce cognitive complexity in answer_prompt.py:
  - Extract _build_grammar_notes, _run_webhook_postprocess helpers
  - _is_safe_public_url: extracted _resolve_host_addresses helper
  - handle_json: early-return pattern eliminates nesting
  - construct_prompt: delegates grammar loop to _build_grammar_notes
- Reduce cognitive complexity in legacy_executor.py:
  - Extract _execute_single_prompt, _run_table_extraction helpers
  - Extract _run_challenge_if_enabled, _run_evaluation_if_enabled
  - Extract _inject_table_settings, _finalize_pipeline_result
  - Extract _convert_number_answer, _convert_scalar_answer
  - Extract _sanitize_dict_values helper
  - _handle_answer_prompt CC reduced from 50 to ~7
- Reduce CC in structure_tool_task.py: guard-clause refactor
- Reduce CC in backend: dto.py, deployment_helper.py,
  api_deployment_views.py, prompt_studio_helper.py
- Fix S117: rename PascalCase local vars in test_answer_prompt.py
- Fix S1192: extract EXECUTOR_WORKER_TYPE constant in run-worker.sh
- Fix S1172: remove unused params from structure_tool_task.py
- Fix S5713: remove redundant JSONDecodeError in json_repair_helper.py
- Fix S112/S5727 in test_execution.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: remove unused RetrievalStrategy import from _handle_answer_prompt

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: rename UsageHelper params to lowercase (N803)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve remaining SonarCloud issues from check run 66691002192

- Add @staticmethod to _sanitize_null_values (fixes S2325 missing self)
- Reduce _execute_single_prompt params from 25 to 11 (S107)
  by grouping services as deps tuple and extracting exec params
  from context.executor_params
- Add NOSONAR suppression for raise exc in test helper (S112)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: remove unused locals in _handle_answer_prompt (F841)

execution_id, file_hash, log_events_id, custom_data are now extracted
inside _execute_single_prompt from context.executor_params.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: resolve Biome linting errors in frontend source files

Auto-fixed 48 lint errors across 56 files: import ordering, block
statements, unused variable prefixing, and formatting issues.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: replace dynamic import of SharePermission with static import in Workflows

Resolves vite build warning about SharePermission.jsx being both
dynamically and statically imported across the codebase.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve SonarCloud warnings in frontend components

- Remove unnecessary try-catch around PostHog event calls
- Flip negated condition in PromptOutput.handleTable for clarity

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Address PR #1849 review comments: fix null guards, dead code, and test drift

- Remove redundant inline `import uuid as _uuid` in views.py (use module-level uuid)
- URL-encode DB_USER in worker_celery.py result backend connection string
- Remove misleading task_queues=[Queue("executor")] from dispatch-only Celery app
- Remove dead `if not tool:` guards after objects.get() (already raises DoesNotExist)
- Move profile_manager/default_profile null checks before first dereference
- Reorder ProfileManager.objects.get before mark_document_indexed in tasks.py
- Handle ProfileManager.DoesNotExist as warning, not hard failure
- Wrap PostHog analytics in try/catch so failures don't block prompt execution
- Handle pending-indexing 200 response in usePromptRun.js (clear RUNNING status)
- Reset formData when metadata is missing in ConfigureDs.jsx
- Fix test_should_skip_extraction tests: function now takes 1 arg (outputs only)
- Fix agentic routing tests: mock X2Text.process, remove stale platform_helper kwarg

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix missing llm_usage_reason for summarize LLM usage tracking

Add PSKeys.LLM_USAGE_REASON to usage_kwargs in _handle_summarize() so
summarization costs appear under summarize_llm in API response metadata.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Fix single-pass extraction routing in LegacyExecutor

- Route _handle_structure_pipeline to _handle_single_pass_extraction when
  is_single_pass=True (was always calling _handle_answer_prompt)
- Delegate _handle_single_pass_extraction to cloud plugin via ExecutorRegistry,
  falling back to _handle_answer_prompt if plugin not installed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fixing API deployment response mismatches

* Add complete_vision() method to SDK1 LLM for multimodal completions

Adds a new complete_vision() method alongside existing complete() that
accepts pre-built multimodal messages (text + image_url) in OpenAI-style
format. LiteLLM auto-translates for Anthropic/Bedrock/Vertex providers.
This enables the agentic table extractor plugin to send page images
alongside text prompts for VLM-based table detection and extraction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
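An OpenAI-style multimodal message of the kind `complete_vision()` is said to accept might look like the following; the exact payload shape the SDK expects is an assumption here:

```python
def build_vision_message(prompt_text: str, image_data_url: str) -> dict:
    """Sketch of a pre-built text + image_url message (OpenAI-style);
    LiteLLM is described as auto-translating this for Anthropic/Bedrock/
    Vertex providers."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt_text},
            {"type": "image_url", "image_url": {"url": image_data_url}},
        ],
    }
```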

* UN-3266 [FIX] Gate Run button by agentic table readiness checklist

- PromptCardItems loads AgenticTableChecklist plugin and owns the
  isAgenticTableReady state, rendering the checklist above the prompt
  text area and delegating the settings gear visibility to the plugin.
- Header and PromptOutput disable their Run buttons when
  isAgenticTableReady is false (default true for non-agentic types).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [FIX] Use correct primary key field in prompt count subquery (#1905)

ToolStudioPrompt uses prompt_id as its primary key, not id.
Count("id") causes FieldError on the list endpoint (500).

Co-authored-by: Chandrasekharan M <chandrasekharan@zipstack.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [FIX] Add agentic_table as valid enforce_type choice

The cloud build adds "agentic_table" to the prompt enforce_type
dropdown, but the OSS ToolStudioPrompt model rejected it as an
invalid choice. Add AGENTIC_TABLE to EnforceType and ship a
matching migration so the value can be persisted.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Wire agentic_table enforce_type to executor dispatch

The single-prompt run flow had no branch for prompts with
enforce_type=agentic_table, so clicking Run silently fell through to
the legacy prompt-service path and never invoked the agentic_table
executor. Adds an AGENTIC_TABLE constant to TSPKeys, includes it in
the OperationNotSupported guard, and dispatches to
PayloadModifier.execute_agentic_table when the plugin is available
so the result still flows through _handle_response.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Add agentic_table queue to executor worker defaults

The ExecutionDispatcher derives the queue name from the executor name
(celery_executor_{name}), so dispatches to the agentic_table executor
land on celery_executor_agentic_table. The local docker-compose default
only listed celery_executor_legacy and celery_executor_agentic, so no
worker consumed the new queue and dispatch hung for the full 1-hour
result timeout. Adds the missing queue to the docker-compose default.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
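The queue-name derivation this commit relies on is simply (illustrative sketch of the stated rule):

```python
def executor_queue(executor_name: str) -> str:
    # ExecutionDispatcher derives the queue from the executor name,
    # so each executor needs a worker consuming celery_executor_{name}.
    return f"celery_executor_{executor_name}"
```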

* UN-3266 [FIX] Dispatch agentic_table prompts to executor on IDE Run

The IDE Run button was building a legacy answer_prompt payload for
agentic_table prompts, so the agentic table executor was never
invoked. Branch fetch_response on enforce_type so agentic_table
prompts are built via the cloud payload_modifier plugin and
dispatched directly to celery_executor_agentic_table. Add the
enforce_type to the OSS dropdown choices and the JSON-dump set in
OutputManagerHelper so the persisted output is parseable by the FE
table renderer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Reshape agentic_table executor output in IDE callback

The agentic_table executor returns {"output": {"tables": [...],
"page_count": ..., "headers": [...], ...}}, but
OutputManagerHelper.handle_prompt_output_update reads
outputs[prompt.prompt_key] when persisting prompt output. Without a
reshape the table list never lands under the prompt key and the FE
sees an empty result.

When cb_kwargs carries is_agentic_table=True and prompt_key (set by
the cloud build_agentic_table_payload), reshape outputs to
{prompt_key: tables} before calling update_prompt_output. The
executor itself also shapes its envelope, so this is a defensive
double-keying that keeps the legacy answer_prompt path untouched.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
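The reshape described above can be sketched as a small pure function. This is a hypothetical reconstruction from the commit message, not the actual `OutputManagerHelper` code; the key names (`is_agentic_table`, `prompt_key`, the `{"output": {"tables": ...}}` envelope) are taken from the text above.

```python
def reshape_agentic_table_outputs(outputs: dict, cb_kwargs: dict) -> dict:
    """Re-key the agentic_table executor envelope under the prompt key.

    The executor returns {"output": {"tables": [...], ...}}, but the
    persistence path reads outputs[prompt_key], so without this reshape
    the table list never lands under the prompt key.
    """
    if not (cb_kwargs.get("is_agentic_table") and cb_kwargs.get("prompt_key")):
        return outputs  # legacy answer_prompt path untouched
    tables = outputs.get("output", {}).get("tables", [])
    return {cb_kwargs["prompt_key"]: tables}
```

Returning the input unchanged when the agentic-table markers are absent is what keeps the legacy path untouched, per the commit's "defensive double-keying" note.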

* Fixing timeout issues

* API deployment fixes for Agentic table extractor

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fixing syntax issues

* Fix agentic_table executor reading INFILE after JSON overwrite

Read from SOURCE instead of INFILE when dispatching to the
agentic_table executor. INFILE gets overwritten with JSON output
by the regular pipeline, causing PDFium parse errors when the
agentic_table executor tries to process it as a PDF.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Signed-off-by: harini-venkataraman <115449948+harini-venkataraman@users.noreply.github.com>
Co-authored-by: Ghost Jake <89829542+Deepak-Kesavan@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Ritwik G <100672805+ritwik-g@users.noreply.github.com>
Co-authored-by: Chandrasekharan M <chandrasekharan@zipstack.com>

coderabbitai Bot commented Apr 29, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7dd126c5-8ab4-48b7-babd-130a10f57219



@kirtimanmishrazipstack kirtimanmishrazipstack changed the title Un 3056 notify on api deployment failures UN-3056 [FEAT] Filter notifications by run outcome Apr 29, 2026
kirtimanmishrazipstack and others added 6 commits May 5, 2026 20:49
* UN-3439 [FIX] Accept wildcard subdomain origins in SocketIO and Django CORS (#1938)

* UN-3439 [FIX] Accept wildcard subdomain origins in SocketIO and Django CORS

Production socket connections were failing for `*.env.us-central.unstract.com`
because python-socketio does exact-string comparison on `cors_allowed_origins`,
so a literal `*` pattern silently rejected every real subdomain.

- Add `CORS_ALLOWED_ORIGIN_REGEXES` derived from `WEB_APP_ORIGIN_URL_WITH_WILD_CARD`.
- Wire SocketIO via `_RegexOrigin` whose `__eq__` does the regex match — single
  list entry covers all wildcard subdomains, no library subclass needed.
- Normalize `WEB_APP_ORIGIN_URL` through `urlparse` so trailing slashes / paths
  in env are stripped (also fixes the `…com//oauth-status/` double-slash).
- Add startup guard for malformed env values.

Resolves item #1 of UN-3439. Items #2/#3 (decoupling indexing from Socket.io,
fallback) are owned separately.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
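The `_RegexOrigin` trick relies on python-socketio testing `origin in cors_allowed_origins`, which falls back to per-entry `==`. A minimal sketch of the idea (a reconstruction from this commit message, not the code in `log_events.py`; the wildcard pattern is illustrative):

```python
import re

class _RegexOrigin:
    """An allow-list entry whose equality test is a regex match.

    `"https://app.example.com" in allowed` calls str.__eq__ first, which
    returns NotImplemented for a non-str operand, so Python reflects to
    this __eq__ and the regex runs. One list entry covers all subdomains.
    """

    def __init__(self, pattern: str) -> None:
        self._pattern = re.compile(pattern)

    def __eq__(self, other: object):
        if not isinstance(other, str):
            return NotImplemented
        return self._pattern.fullmatch(other) is not None

# Illustrative pattern for *.env.us-central.unstract.com over https only.
allowed = [_RegexOrigin(r"https://[a-z0-9-]+\.env\.us-central\.unstract\.com")]
```

Because the match happens inside `__eq__`, no python-socketio subclass is needed; the regex entry drops into the ordinary `cors_allowed_origins` list.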

* UN-3439 [FIX] Address PR review: canonical origin, fullmatch, unhashable RegexOrigin, tests

Addresses five review comments on #1938:

1. coderabbitai (Major) — RFC 6454 canonicalization. Browsers serialize
   `Origin` headers with a lowercase host and no explicit default ports;
   `parsed_url.netloc` preserved both, so `https://APP.EXAMPLE.COM:443`
   would silently fail to match the browser's `https://app.example.com`.
   Switch to `parsed_url.hostname` + drop default ports, and reject
   non-http(s) schemes at startup.

2. greptile (P2) — `re.fullmatch` instead of `re.match`. With `re.match`
   plus `$`, a candidate ending in `\n` matches because `$` is allowed
   before an optional trailing newline. `fullmatch` removes the ambiguity.

3. self — `_RegexOrigin.__hash__` violated `a == b ⇒ hash(a) == hash(b)`
   (one fixed pattern hash vs. many matching strings). Today this is
   masked because python-socketio uses linear `__eq__` on a list, but if
   the allow-list is ever wrapped in a set, every legitimate subdomain
   would silently be rejected — exactly the failure mode UN-3439 closes.
   Make instances unhashable so the contract can't be broken.

4. self — No regression tests. Add `backend/utils/tests/test_cors_origin.py`
   (33 cases) covering: regex match/no-match, lookalike spoofing, scheme
   mismatch, trailing-newline rejection, non-string equality protocol,
   unhashability, ReDoS bounds, URL normalization (case, default ports,
   trailing slash, paths, queries), startup-guard rejections (empty,
   no-scheme, non-browser-scheme, no-host), and end-to-end via the same
   `RegexOrigin` path SocketIO uses.

5. self — Over-clever wildcard-to-regex builder. The
   `split('*').join(re.escape, ...)` construction generalised to N
   wildcards but the input has exactly one; replace with a direct rf-string
   that's self-evident on review.

Refactor for testability: extract `RegexOrigin` and `normalize_web_app_origin`
into `backend/utils/cors_origin.py` (Django-free, importable from settings
and tests). Settings now delegates to one helper call; `log_events.py`
imports `RegexOrigin`. No behavioural change beyond what each comment fixes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
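Point 2 above (fullmatch vs. match) is easy to demonstrate in isolation. The pattern below is illustrative, but the newline behaviour is standard `re` semantics: `$` is allowed to match just before a trailing newline, so `re.match` plus `$` accepts a candidate ending in `\n`, while `re.fullmatch` requires the pattern to consume the entire string.

```python
import re

pat = re.compile(r"https://app\.example\.com$")

# re.match anchors only the start; `$` tolerates one trailing newline,
# so this sneaks through:
match_hit = pat.match("https://app.example.com\n") is not None

# re.fullmatch must consume the whole string; the unmatched "\n" fails it:
full_hit = pat.fullmatch("https://app.example.com\n") is not None
```

This is the ambiguity the review comment removes by switching the origin check to `fullmatch`.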

* UN-3439 [FIX] Address SonarCloud quality gate

The Sonar quality gate failed with C reliability + 5 security hotspots, all
on the new test file:

- S905 (Bug, Major) — `{ro}` flagged as no-side-effect statement (Sonar
  doesn't see the implicit `__hash__` call). Drove the C reliability rating.
  Fix: use `len({ro})` so the side effect is via an explicit function call;
  test still asserts the same `TypeError`.
- S5727 (Code Smell, Critical) — `assert ro != None` is tautological and
  doesn't exercise `__eq__`. Switch to `(ro == None) is False` which directly
  tests that `NotImplemented` falls back to identity-equality.
- S5332 × 5 (Hotspots) — `http://` and `ftp://` literals in test data.
  These are intentional inputs proving the rejection logic. Annotate with
  `# NOSONAR` and an explanatory comment so the hotspots can be marked
  reviewed.

No production code changed; tests still 33/33 passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* UN-3439 [FIX] Remove last S5727 code smell — test __eq__ via dunder

Sonar S5727 correctly inferred that ``ro == None`` is statically always
False (NotImplemented falls back to identity), making the assertion look
tautological. The intent is to lock the protocol contract: ``__eq__`` must
return the ``NotImplemented`` sentinel for non-strings. Test that directly
via ``ro.__eq__(None) is NotImplemented`` instead of going through ``==``.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* UN-3439 [FIX] Address remaining CodeRabbit nits — port validation, ReDoS bound

Two minor follow-ups from the second CodeRabbit pass:

- `parsed.port` is a property that raises ValueError on malformed/out-of-range
  inputs (e.g. `:abc`, `:99999`). That bypassed our normalized config-error
  message and surfaced as a stack trace. Wrap the access and re-raise with
  the same actionable text. Adds two test cases (`https://example.com:abc`,
  `https://example.com:99999`) to lock the new behaviour.

- The 50ms ReDoS timing bound is too tight for noisy CI runners. Loosen to
  500ms — still orders of magnitude below what catastrophic backtracking
  would produce.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ReverseMerge: V0.161.4 hotfix (#1943)

* Change CSP to report-only

* [HOTFIX] Bool-parse ENABLE_HIGHLIGHT_API_DEPLOYMENT env var (v0.161.4) (#1939)

[HOTFIX] Bool-parse ENABLE_HIGHLIGHT_API_DEPLOYMENT env var (#1937)

[FIX] Bool-parse ENABLE_HIGHLIGHT_API_DEPLOYMENT env var

os.environ.get returns the raw string when the variable is set, so
ENABLE_HIGHLIGHT_API_DEPLOYMENT="False" was truthy in Python (any
non-empty string is truthy). Wrap in CommonUtils.str_to_bool so
"False" / "false" / "0" actually evaluate to False.

The setting is consumed by the cloud configuration plugin's spec
default (ConfigSpec.default in plugins/configuration/cloud_config.py)
on cloud and on-prem builds. With this fix, an admin who explicitly
sets the env var to a falsy string sees highlight data stripped as
expected.

Co-authored-by: vishnuszipstack <117254672+vishnuszipstack@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
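The truthiness bug described above is worth seeing concretely. `CommonUtils.str_to_bool` is the project's helper; the function below is a minimal stand-in sketching the same idea, not the actual implementation.

```python
import os

def str_to_bool(value: str) -> bool:
    """Minimal stand-in for CommonUtils.str_to_bool: parse the string
    instead of trusting its truthiness."""
    return value.strip().lower() in ("true", "1", "yes")

# The bug: os.environ.get returns the raw string, and any non-empty
# string — including "False" — is truthy in Python.
os.environ["ENABLE_HIGHLIGHT_API_DEPLOYMENT"] = "False"
raw = os.environ.get("ENABLE_HIGHLIGHT_API_DEPLOYMENT", "True")

buggy_flag = bool(raw)        # True — "False" is a non-empty string
fixed_flag = str_to_bool(raw)  # False — the string is actually parsed
```

With the fix, an admin setting the env var to "False", "false", or "0" gets the falsy behaviour they asked for.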

---------

Co-authored-by: Deepak K <89829542+Deepak-Kesavan@users.noreply.github.com>
Co-authored-by: vishnuszipstack <117254672+vishnuszipstack@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* UN-3448 [FIX] Remove vestigial `uv pip install` line in uv-lock-automation workflow (#1941)

* UN-3448 [FIX] Add --system flag to uv pip install in uv-lock-automation workflow

Modern uv requires uv pip install to run inside a virtual environment OR
with the explicit --system flag. The workflow currently has neither, so
it errors out:

  error: No virtual environment found for Python 3.12.9; run `uv venv`
  to create an environment, or pass `--system` to install into a
  non-virtual environment

This breaks every PR that touches a pyproject.toml (the workflow's
paths filter triggers on those). Last successful run was 2026-04-01,
before a behaviour change in uv or astral-sh/setup-uv@v7.

The --system flag is exactly what the error message suggests and is
correct here — we install pip into the runner's system Python; the
downstream uv-lock.sh script creates its own venvs as needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* UN-3448 [FIX] Remove vestigial `uv pip install` line per review

Per @jaseemjaskp's review: the pre-step `uv pip install ... pip` does
nothing useful for this workflow. The downstream uv-lock.sh script
uses uv sync at line 74, which manages its own venvs internally and
never invokes pip directly:

  $ grep -rn 'pip' docker/scripts/uv-lock-gen/
  docker/scripts/uv-lock-gen/uv-lock.sh:2:set -o pipefail

Only match is pipefail (shell option), no real pip references.

Removing the line entirely is cleaner than papering over with --system.
The line was likely copy-pasted from a sibling workflow that legitimately
needed pip in the system Python.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ReverseMerge: V0.163.2 hotfix (#1946)

* [HOTFIX] Use importlib.util.find_spec for pluggable worker discovery (#1918)

* [FIX] Use importlib.util.find_spec for pluggable worker discovery

_verify_pluggable_worker_exists() previously checked for the literal file
`pluggable_worker/<name>/worker.py` on disk, which breaks when the plugin
has been compiled to a .so (Nuitka, Cython, or any C extension) — the
module is perfectly importable but the pre-check rejects it because only
the .py extension is considered.

Replace the filesystem check with importlib.util.find_spec(), which is
Python's standard way to ask "is this module resolvable by the import
system?". It honors every registered finder — source .py, compiled .so,
bytecode .pyc, namespace packages, zipimports — so the function now
matches what its docstring claims: verifying the module can be loaded,
not that a specific file extension is present.

Behavior is preserved for existing deployments:
- Images with no `pluggable_worker/<name>/` subpackage → find_spec
  raises ModuleNotFoundError (ImportError subclass) → returns False.
- Images with source .py → find_spec resolves the .py → returns True.
- Images with compiled .so → find_spec resolves the .so → returns True.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [FIX] Handle ValueError from find_spec in pluggable worker verification

Greptile-flagged edge case: importlib.util.find_spec() can raise
ValueError (not just ImportError) when sys.modules has a partially
initialised module entry with __spec__ = None from a prior failed import.
Broaden the except to catch both.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
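The resolvability check from these two commits can be sketched generically. The function name and the `json` / missing-package examples are illustrative; the real code builds the dotted path from the plugin name, but the `find_spec` + `(ImportError, ValueError)` shape is the substance of the fix.

```python
import importlib.util

def module_resolvable(dotted_path: str) -> bool:
    """Ask the import system whether a module can be loaded.

    find_spec honors every registered finder — source .py, compiled .so,
    bytecode .pyc, namespace packages, zipimports. A missing package
    raises ModuleNotFoundError (an ImportError subclass); a partially
    initialised sys.modules entry with __spec__ = None raises ValueError.
    """
    try:
        return importlib.util.find_spec(dotted_path) is not None
    except (ImportError, ValueError):
        return False
```

Unlike an `os.path.exists("…/worker.py")` check, this returns True for a plugin compiled to a `.so` as long as the import system can resolve it.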

* [FIX] Resolve api-deployment worker directory from enum import path

worker.py:452 did worker_type.value.replace("-", "_") to derive the
on-disk dir name. All WorkerType enum values already use underscores,
so the replace was a no-op; for API_DEPLOYMENT whose dir is
"api-deployment" (hyphen), it resolved to "api_deployment" and the
os.path.exists() check failed. Boot then logged a spurious
"❌ Worker directory not found: /app/api_deployment" at ERROR level.

The task registration path (builder + celery autodiscover via
to_import_path) is unaffected, so this was purely log noise — but
noise at ERROR level that masks real failures in log scans.

Fix: derive the directory from the authoritative to_import_path()
which already handles the hyphen case (api_deployment -> api-deployment).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [HOTFIX] Add IAM Role / Instance Profile auth mode to AWS Bedrock adapter (#1944)

* [FEAT] Allow Bedrock to fall through to boto3's default credential chain

Match the S3/MinIO connector pattern: when AWS access keys are left blank
on the Bedrock LLM and embedding adapter forms, drop them from the kwargs
dict so boto3's default credential chain handles authentication. This
unlocks IAM role / instance profile / IRSA / AWS Profile scenarios on
hosts that already have ambient AWS credentials (e.g. EKS workers with
IRSA, EC2 with an instance profile).

- llm1/static/bedrock.json: clarify access-key descriptions to mention
  IRSA and instance profile (already non-required at v0.163.2 base).
- embedding1/static/bedrock.json: drop aws_access_key_id and
  aws_secret_access_key from top-level required; same description fix;
  expose aws_profile_name for parity with the LLM form.
- base1.py: AWSBedrockLLMParameters and AWSBedrockEmbeddingParameters
  now strip empty access-key values from the validated kwargs before
  returning, so empty strings don't override boto3's default chain.
  AWSBedrockEmbeddingParameters fields gain explicit None defaults
  and an aws_profile_name field.

Backward-compatible: existing adapters with access keys filled in
continue to work unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
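The strip-empty-keys behaviour can be sketched as a pure function over the adapter kwargs. Field names come from the commit message; this is an illustrative reconstruction, not the `base1.py` code, and the resulting dict would then be handed to the boto3/LiteLLM client.

```python
def strip_empty_aws_credentials(kwargs: dict) -> dict:
    """Drop blank access-key fields so they don't override boto3's
    default credential chain (IRSA, instance profile, AWS profile).

    Non-credential fields are kept even when empty; credential fields
    survive only when they carry a real value.
    """
    cred_fields = ("aws_access_key_id", "aws_secret_access_key")
    return {k: v for k, v in kwargs.items() if v or k not in cred_fields}
```

Passing an explicit empty string to boto3 would be treated as a (bad) credential; removing the key entirely is what lets the default chain take over.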

* [FEAT] Add Authentication Type selector to Bedrock adapter form

Add an explicit `auth_type` selector with two options, making the auth
choice clear to users:

- "Access Keys" (default): existing flow, keys required
- "IAM Role / Instance Profile (on-prem AWS only)": no fields; relies on
  boto3's default credential chain (IRSA on EKS, task role on ECS,
  instance profile on EC2). Description on the selector explicitly notes
  this option is only for AWS-hosted Unstract deployments.

The form-only auth_type field is stripped before LiteLLM validation in
both AWSBedrockLLMParameters.validate() and AWSBedrockEmbeddingParameters.
validate(). Empty access keys continue to be stripped so boto3 falls
through to the default chain even when the access_keys arm is selected
without values (matches the S3/MinIO connector pattern).

Backward-compatible: legacy adapters without auth_type behave as
"Access Keys" mode (the default), and existing keys are forwarded
unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [REVIEW] Address Bedrock auth_type review feedback

Fixes the P0/P1 issues raised by greptile-apps and jaseemjaskp on
PR #1944.

Behaviour fixes:
- Stale-key leak in IAM Role mode: switching an existing adapter from
  Access Keys to IAM Role would carry truthy stored access keys through
  the strip-empty-only loop, so boto3 silently authenticated with the
  old long-lived credentials instead of falling through to the host's
  IRSA / instance-profile identity. Both LLM and embedding paths were
  affected.
- Silent acceptance of unknown auth_type: a typo (e.g. "access_key") or
  a malformed payload from a non-UI client passed through the dict
  comprehension untouched, with no enum guard.
- Cross-field validation gap: explicit Access Keys mode with blank or
  whitespace-only values silently fell through to the default
  credential chain instead of surfacing the misconfiguration.

Implementation:
- Add a module-level _resolve_bedrock_aws_credentials helper used by
  both AWSBedrockLLMParameters.validate() and AWSBedrock
  EmbeddingParameters.validate(), so the auth-type contract is
  expressed once.
  - Validates auth_type against an allowlist (None | "access_keys" |
    "iam_role"); raises ValueError on anything else.
  - iam_role: unconditionally drops aws_access_key_id and
    aws_secret_access_key.
  - access_keys (explicit): requires non-blank values; raises ValueError
    if either is empty or whitespace-only.
  - Legacy (auth_type absent): retains the lenient strip behaviour so
    pre-PR adapter configurations continue to deserialise unchanged.
- Restore aws_region_name as required (no `= None` default) on
  AWSBedrockEmbeddingParameters; only credentials may legitimately be
  absent.
- Drop the orphan aws_profile_name field from
  embedding1/static/bedrock.json: it was added for parity with the LLM
  form but lives outside the auth_type oneOf and contradicts the
  selector's "no further input" semantics. The LLM form already had
  aws_profile_name pre-PR and is left alone for backwards compatibility.

Tests:
- New tests/test_bedrock_adapter.py covers 15 cases across LLM and
  embedding adapters: legacy-no-auth-type, explicit access_keys with
  valid/blank/whitespace keys, iam_role with stale/no keys, unknown
  auth_type rejection, cross-field validation, and preservation of
  unrelated params (model_id, aws_profile_name, region, thinking).

Skipped (P2 nice-to-have):
- Comment-scope clarification, MinIO reference rewording,
  validate-mutates-caller's-dict, and the LLM form description nit
  about aws_profile_name visibility. These don't change behaviour
  and can be addressed in a follow-up.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
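The auth-type contract above can be sketched as one helper. This is a hypothetical reconstruction of `_resolve_bedrock_aws_credentials` from the commit message — the allowlist values, error wording, and field names are assumptions drawn from the text, not the shipped code.

```python
def resolve_bedrock_aws_credentials(kwargs: dict) -> dict:
    """Express the auth_type contract once, for both LLM and embedding.

    - iam_role: unconditionally drop access keys (fixes the stale-key leak)
    - access_keys: require non-blank values (fixes the silent fall-through)
    - absent: legacy lenient strip, so pre-PR configs deserialise unchanged
    - anything else: reject (fixes silent acceptance of typos)
    """
    kwargs = dict(kwargs)  # don't mutate the caller's dict
    auth_type = kwargs.pop("auth_type", None)
    cred_fields = ("aws_access_key_id", "aws_secret_access_key")
    if auth_type == "iam_role":
        for field in cred_fields:
            kwargs.pop(field, None)
    elif auth_type == "access_keys":
        for field in cred_fields:
            if not str(kwargs.get(field) or "").strip():
                raise ValueError(f"{field} is required for access_keys auth")
    elif auth_type is None:
        for field in cred_fields:
            if not kwargs.get(field):
                kwargs.pop(field, None)
    else:
        raise ValueError(f"Unknown auth_type: {auth_type!r}")
    return kwargs
```

The unconditional drop in the `iam_role` arm is the important branch: stored keys from a previous "Access Keys" configuration are truthy, so a strip-empty-only loop would carry them through and boto3 would silently keep using them.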

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

---------

Co-authored-by: Chandrasekharan M <117059509+chandrasekharan-zipstack@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Athul <89829560+athul-rs@users.noreply.github.com>

* batch notification

---------

Co-authored-by: ali <117142933+muhammad-ali-e@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Ritwik G <100672805+ritwik-g@users.noreply.github.com>
Co-authored-by: Deepak K <89829542+Deepak-Kesavan@users.noreply.github.com>
Co-authored-by: vishnuszipstack <117254672+vishnuszipstack@users.noreply.github.com>
Co-authored-by: Praveen Kumar <praveen@zipstack.com>
Co-authored-by: Chandrasekharan M <117059509+chandrasekharan-zipstack@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Athul <89829560+athul-rs@users.noreply.github.com>
…m:Zipstack/unstract into UN-3056-Notify-on-API-deployment-failures

sonarqubecloud Bot commented May 7, 2026

Quality Gate failed

Failed conditions
2 Security Hotspots

See analysis details on SonarQube Cloud
