Bug: PromptNormalizer.send_prompt_async returns None on skip criteria, crashing downstream callers #1518

@pr0b3r7

Description

Bug Report

Describe the bug

PromptNormalizer.send_prompt_async() returns None when skip criteria match (lines 89-90 of prompt_normalizer.py), but callers assume a non-None PromptRequestResponse is always returned. This causes AttributeError: 'NoneType' object has no attribute 'get_value' in downstream code like LLMGenericTextConverter.convert_async().

Steps to reproduce

  1. Create a PromptNormalizer instance
  2. Call set_skip_criteria() with criteria that match a prompt
  3. Call send_prompt_async() — returns None
  4. Any caller that accesses response.get_value() or response.request_pieces crashes
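The failure mode can be sketched without PyRIT installed; `FakeResponse` and `FakeNormalizer` below are illustrative stand-ins for `PromptRequestResponse` and `PromptNormalizer`, not real PyRIT classes:

```python
class FakeResponse:
    """Stand-in for PromptRequestResponse."""

    def get_value(self) -> str:
        return "response text"


class FakeNormalizer:
    """Stand-in for PromptNormalizer: returns None when skip criteria match."""

    def __init__(self, skip_matches: bool):
        self._skip_matches = skip_matches

    def send_prompt(self, prompt: str):
        if self._skip_matches:  # mirrors the skip-criteria branch at lines 89-90
            return None
        return FakeResponse()


# A caller that assumes a non-None response, like LLMGenericTextConverter:
response = FakeNormalizer(skip_matches=True).send_prompt("hello")
# response.get_value()  # AttributeError: 'NoneType' object has no attribute 'get_value'
```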

Root cause analysis

File: pyrit/prompt_normalizer/prompt_normalizer.py

# Lines 89-90: Returns None when skip criteria match
if self._should_skip_based_on_skip_criteria(request):
    return None  # <-- Callers assume non-None return

# Lines 124-125: Defensive None return (unreachable in practice)
if response is None:
    return None

Crash site — File: pyrit/prompt_converter/llm_generic_text_converter.py

# Lines 99-100: No None check before accessing .get_value()
response = await self._converter_target.send_prompt_async(prompt_request=request)
return ConverterResult(output_text=response.get_value(), output_type="text")
# ^^^^ AttributeError when response is None

Partial mitigation already exists: send_prompt_batch_to_target_async() at line 185 correctly filters out None returns:

return [response for response in responses if response is not None]

But single-prompt callers (converters, orchestrators) have no such protection.
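Until a fix lands, single-prompt callers could apply the same None filter manually. A minimal caller-side guard, assuming only that the response object exposes `get_value()` (the helper name and default are illustrative):

```python
def get_value_or_default(response, default: str = "") -> str:
    """Return response.get_value(), or a default when the prompt was skipped.

    Mirrors on the single-prompt path the None filtering that
    send_prompt_batch_to_target_async() already performs for batches.
    """
    if response is None:
        return default
    return response.get_value()
```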

Downstream impact

When azure-ai-evaluation's OrchestratorManager uses PyRIT converters that hit this path:

  1. The AttributeError propagates up through PyRIT's exception handler
  2. Gets wrapped as "Error sending prompt with conversation ID: ..." (line 122)
  3. The Azure SDK's retry decorator misclassifies this as a network error
  4. Retries 5 times with exponential backoff (~47 seconds wasted, all retries fail identically)
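The wasted time follows from the retry schedule. With illustrative numbers (the actual azure-ai-evaluation delays may differ), five attempts whose waits roughly double from a few seconds add up to the observed tens of seconds:

```python
# Illustrative retry arithmetic only -- not the SDK's actual parameters.
initial_delay = 3.0  # seconds before the first retry (assumed)
delays = [initial_delay * 2**i for i in range(4)]  # waits between 5 attempts
total_wait = sum(delays)  # 3 + 6 + 12 + 24 = 45 seconds of guaranteed-futile waiting
```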

A companion issue has been filed on Azure/azure-sdk-for-python for the retry-misclassification side.

Proposed fix

Option A (Preferred): Return a sentinel empty PromptRequestResponse instead of None:

if self._should_skip_based_on_skip_criteria(request):
    skipped = construct_response_from_request(
        request=request.request_pieces[0],
        response_text_pieces=[""],
        response_type="text",
        error="skipped",
    )
    return skipped

Option B: Change return type to Optional[PromptRequestResponse] and update all callers to handle None.

Option C: Raise a specific PromptSkippedException that callers can catch.
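A sketch of what Option C could look like; the exception name, constructor, and attribute are hypothetical, not existing PyRIT API:

```python
from typing import Optional


class PromptSkippedException(Exception):
    """Hypothetical exception raised when skip criteria match (Option C)."""

    def __init__(self, conversation_id: Optional[str] = None):
        self.conversation_id = conversation_id
        super().__init__(
            f"Prompt skipped by skip criteria (conversation {conversation_id})"
        )


# Callers opt in to handling skips explicitly instead of receiving None:
try:
    raise PromptSkippedException(conversation_id="abc-123")
except PromptSkippedException as exc:
    skipped_conversation = exc.conversation_id
```

This makes the skip path impossible to ignore silently, at the cost of forcing every caller to add a try/except.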

Environment

  • PyRIT version: 0.8.1 (also verified present in 0.11.0 via GitHub source)
  • Python: 3.12.12
  • OS: macOS 15.5 (Darwin 24.6.0)
  • azure-ai-evaluation: 1.15.0

Additional context

Discovered during red teaming framework development when using LLMGenericTextConverter with skip criteria enabled via azure-ai-evaluation's RedTeam class. The bug is silent in batch operations (filtered by list comprehension) but crashes single-prompt converter paths.
