[bot] OpenAI Responses API instructions and string-form input not captured in span input #101

@braintrust-bot

Summary

The OpenAI Responses API request tagger in InstrumentationSemConv.tagOpenAIRequest() drops two important pieces of request input:

  1. instructions (system prompt) silently dropped: The Responses API uses instructions as the system-level prompt (equivalent to Chat Completions' system message or Anthropic's system field). This field is not included in braintrust.input_json, so the system prompt is invisible when viewing the trace.

  2. String-form input silently dropped: The Responses API accepts input as either a plain string ("Hello!") or an array of input items. The code checks requestJson.get("input").isArray(), which means plain-string inputs fail the guard and input_json is never set.

For comparison, the Anthropic request tagger in the same file correctly captures the system field as a synthetic {role: "system"} message appended to the input array (lines 224–232).

What is missing

In InstrumentationSemConv.tagOpenAIRequest() (lines 127–138):

if (requestBody != null) {
    JsonNode requestJson = BraintrustJsonMapper.get().readTree(requestBody);
    if (requestJson.has("model")) {
        metadata.put("model", requestJson.get("model").asText());
    }
    // Chat completions API uses "messages"; Responses API uses "input"
    if (requestJson.has("messages")) {
        span.setAttribute("braintrust.input_json", toJson(requestJson.get("messages")));
    } else if (requestJson.has("input") && requestJson.get("input").isArray()) {
        span.setAttribute("braintrust.input_json", toJson(requestJson.get("input")));
    }
}

Problem 1: instructions not captured

A typical Responses API request:

{
    "model": "gpt-4o-mini",
    "instructions": "You are a helpful assistant",
    "input": [{"role": "user", "content": "What is 2+2?"}]
}

The input array is captured, but instructions is silently dropped. Users viewing this trace in Braintrust see the user message but not the system instructions that shaped the model's behavior.

The Anthropic handler (tagAnthropicRequest, lines 224–232) solves the equivalent problem by appending a synthetic {role: "system", content: ...} entry:

if (requestJson.has("system") && ...) {
    var systemNode = BraintrustJsonMapper.get().createObjectNode();
    systemNode.put("role", "system");
    systemNode.set("content", requestJson.get("system"));
    inputArray.add(systemNode);
}
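One possible fix, mirroring the Anthropic handler's pattern: build a fresh array, prepend instructions as a synthetic {role: "system"} entry, then append the request's input items. This is a minimal sketch, not the SDK's implementation; the class and method names here are hypothetical, and it assumes Jackson (which the quoted code already uses).

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class ResponsesInstructionsSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /** Builds a candidate braintrust.input_json value from a Responses API body. */
    static String buildInputJson(String requestBody) throws Exception {
        JsonNode requestJson = MAPPER.readTree(requestBody);
        ArrayNode inputArray = MAPPER.createArrayNode();
        // Prepend "instructions" as a synthetic system message, the same way
        // tagAnthropicRequest handles the Anthropic "system" field.
        if (requestJson.hasNonNull("instructions")) {
            ObjectNode systemNode = MAPPER.createObjectNode();
            systemNode.put("role", "system");
            systemNode.set("content", requestJson.get("instructions"));
            inputArray.add(systemNode);
        }
        if (requestJson.has("input") && requestJson.get("input").isArray()) {
            inputArray.addAll((ArrayNode) requestJson.get("input"));
        }
        return MAPPER.writeValueAsString(inputArray);
    }

    public static void main(String[] args) throws Exception {
        String body = "{\"model\":\"gpt-4o-mini\","
                + "\"instructions\":\"You are a helpful assistant\","
                + "\"input\":[{\"role\":\"user\",\"content\":\"What is 2+2?\"}]}";
        System.out.println(buildInputJson(body));
    }
}
```

With the example request above, this yields an array whose first element is the system message and whose second is the user message, so both appear in the trace.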

Problem 2: String-form input dropped

A valid Responses API request:

{
    "model": "gpt-4o-mini",
    "input": "What is the capital of France?"
}

The requestJson.get("input").isArray() check fails for string inputs, so input_json is never set. The span has no input at all.

The OpenAI Java SDK supports both forms via ResponseCreateParams.builder().inputOfResponse(String) and .inputOfResponse(List<ResponseInputItem>).
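One way to handle both forms is to normalize a string-valued input into a single-element message array, so downstream consumers always see an array. Again a hedged sketch, not the SDK's code; names are illustrative, and wrapping the string as a {role: "user"} message is one design choice (recording the raw string is another).

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class ResponsesStringInputSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /** Returns the JSON to record as braintrust.input_json, or null if absent. */
    static String inputJsonFor(String requestBody) throws Exception {
        JsonNode requestJson = MAPPER.readTree(requestBody);
        JsonNode input = requestJson.get("input");
        if (input == null) {
            return null;
        }
        if (input.isArray()) {
            // Array form: record as-is, matching the current behavior.
            return MAPPER.writeValueAsString(input);
        }
        if (input.isTextual()) {
            // String form: wrap the plain string as one user message instead
            // of failing the isArray() guard and dropping it.
            ArrayNode array = MAPPER.createArrayNode();
            ObjectNode message = MAPPER.createObjectNode();
            message.put("role", "user");
            message.put("content", input.asText());
            array.add(message);
            return MAPPER.writeValueAsString(array);
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(inputJsonFor(
                "{\"model\":\"gpt-4o-mini\",\"input\":\"What is the capital of France?\"}"));
        // → [{"role":"user","content":"What is the capital of France?"}]
    }
}
```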

Local files inspected

  • braintrust-sdk/src/main/java/dev/braintrust/instrumentation/InstrumentationSemConv.java — lines 127–138 (tagOpenAIRequest: instructions not extracted, string input dropped by isArray() guard); lines 224–232 (tagAnthropicRequest: system correctly captured as synthetic system message for comparison)
  • braintrust-sdk/instrumentation/openai_2_15_0/src/test/java/dev/braintrust/instrumentation/openai/v2_15_0/BraintrustOpenAITest.java — lines 229–253, 275–302 (Responses API tests use .instructions("You are a helpful assistant") but assertValidOpenAISpan only checks input_json is non-null, not that it contains the instructions)
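Once instructions are captured, the Responses API tests could assert on the content of input_json rather than mere non-nullness. A hypothetical tightened check, assuming the span's braintrust.input_json attribute is available as a String (the accessor is not shown here):

```java
public class InputJsonAssertionSketch {
    /** Fails unless the recorded input_json actually contains the instructions text. */
    static void assertCapturesInstructions(String inputJson) {
        if (inputJson == null || !inputJson.contains("You are a helpful assistant")) {
            throw new AssertionError(
                    "braintrust.input_json missing instructions: " + inputJson);
        }
    }

    public static void main(String[] args) {
        // Passes: the synthetic system message carries the instructions.
        assertCapturesInstructions(
                "[{\"role\":\"system\",\"content\":\"You are a helpful assistant\"}]");
    }
}
```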
