Bug Report
Describe the bug
`PromptNormalizer.send_prompt_async()` returns `None` when skip criteria match (lines 89-90 of `prompt_normalizer.py`), but callers assume a non-None `PromptRequestResponse` is always returned. This causes `AttributeError: 'NoneType' object has no attribute 'get_value'` in downstream code such as `LLMGenericTextConverter.convert_async()`.
Steps to reproduce
- Create a `PromptNormalizer` instance
- Call `set_skip_criteria()` with criteria that match a prompt
- Call `send_prompt_async()` — it returns `None`
- Any caller that accesses `response.get_value()` or `response.request_pieces` crashes
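The failure mode can be reproduced without PyRIT installed. `FakeNormalizer`, `FakeResponse`, and `convert` below are hypothetical stand-ins for `PromptNormalizer`, `PromptRequestResponse`, and the converter path — they only mirror the reported control flow, they are not PyRIT classes:

```python
import asyncio


class FakeResponse:
    """Hypothetical stand-in for PromptRequestResponse."""

    def __init__(self, text: str):
        self._text = text

    def get_value(self) -> str:
        return self._text


class FakeNormalizer:
    """Hypothetical stand-in for PromptNormalizer."""

    def __init__(self):
        self._skip = False

    def set_skip_criteria(self, skip: bool) -> None:
        self._skip = skip

    async def send_prompt_async(self, prompt: str):
        if self._skip:        # mirrors prompt_normalizer.py lines 89-90
            return None       # <-- callers assume a non-None response
        return FakeResponse(prompt)


async def convert(normalizer: FakeNormalizer, prompt: str) -> str:
    # Mirrors the converter path: no None check before .get_value()
    response = await normalizer.send_prompt_async(prompt)
    return response.get_value()  # AttributeError when response is None


if __name__ == "__main__":
    normalizer = FakeNormalizer()
    normalizer.set_skip_criteria(True)
    try:
        asyncio.run(convert(normalizer, "hello"))
    except AttributeError as exc:
        print(f"crash: {exc}")
```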
Root cause analysis
File: `pyrit/prompt_normalizer/prompt_normalizer.py`

```python
# Lines 89-90: Returns None when skip criteria match
if self._should_skip_based_on_skip_criteria(request):
    return None  # <-- Callers assume non-None return

# Lines 124-125: Defensive None return (unreachable in practice)
if response is None:
    return None
```

Crash site — File: `pyrit/prompt_converter/llm_generic_text_converter.py`

```python
# Lines 99-100: No None check before accessing .get_value()
response = await self._converter_target.send_prompt_async(prompt_request=request)
return ConverterResult(output_text=response.get_value(), output_type="text")
# ^^^^ AttributeError when response is None
```

A partial mitigation already exists: `send_prompt_batch_to_target_async()` at line 185 correctly filters out `None` returns:

```python
return [response for response in responses if response is not None]
```

But single-prompt callers (converters, orchestrators) have no such protection.
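The asymmetry between the two paths comes down to that one list comprehension. A self-contained illustration (the string entries are illustrative values, not PyRIT objects):

```python
# The batch path silently drops None entries from skipped prompts,
# so batch callers never observe the None; single-prompt callers do.
def filter_skipped(responses):
    return [response for response in responses if response is not None]


batch = ["resp-a", None, "resp-b"]   # None stands in for a skipped prompt
surviving = filter_skipped(batch)
print(surviving)  # → ['resp-a', 'resp-b']
```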
Downstream impact
When azure-ai-evaluation's `OrchestratorManager` uses PyRIT converters that hit this path:

- The `AttributeError` propagates up through PyRIT's exception handler
- It gets wrapped as `"Error sending prompt with conversation ID: ..."` (line 122)
- The Azure SDK's retry decorator misclassifies this as a network error
- It retries 5 times with exponential backoff (~47 seconds wasted; all retries fail identically)
A companion issue has been filed on Azure/azure-sdk-for-python for the retry-misclassification side.
Proposed fix
Option A (Preferred): Return a sentinel empty `PromptRequestResponse` instead of `None`:

```python
if self._should_skip_based_on_skip_criteria(request):
    skipped = construct_response_from_request(
        request=request.request_pieces[0],
        response_text_pieces=[""],
        response_type="text",
        error="skipped",
    )
    return skipped
```

Option B: Change the return type to `Optional[PromptRequestResponse]` and update all callers to handle `None`.
Option C: Raise a specific `PromptSkippedException` that callers can catch.
Environment
- PyRIT version: 0.8.1 (also verified present in 0.11.0 via GitHub source)
- Python: 3.12.12
- OS: macOS 15.5 (Darwin 24.6.0)
- azure-ai-evaluation: 1.15.0
Additional context
Discovered during red teaming framework development when using `LLMGenericTextConverter` with skip criteria enabled via azure-ai-evaluation's `RedTeam` class. The bug is silent in batch operations (the `None` is filtered by a list comprehension) but crashes single-prompt converter paths.