
Bug: Invalid schema error when trying to use Context7 MCP: Overly strict schema validation? #4416

@jwm4

Description

System Info

Running the latest Llama Stack on main right now

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I am trying to use the Context7 MCP server as shown in the snippet below. It fails with "Invalid schema for function 'search-library-docs': In context=('properties', 'folders'), array schema missing items". My guess is that the error is technically correct and the schema really is invalid. However, this is a very popular MCP server, and other MCP clients such as Cursor handle it fine, so I think the real issue is that we should be less strict in enforcing schema requirements.

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.responses.create(
    model="openai/gpt-4o",
    input="Can I get any info about https://github.com/llamastack/llama-stack from Context7?",
    tools=[{"type": "mcp", "server_url": "https://context7.liam.sh/mcp", "require_approval": "never", "server_label": "Context7"}]
)
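
For reference, the failure appears to come from an array-typed parameter in the tool's JSON Schema that has no items definition. I don't have the exact schema Context7 serves for search-library-docs, so the snippet below is only a hypothetical reconstruction of the shape the error message points to, written as Python dicts; the real parameter names and item types may differ.

# Hypothetical reconstruction of the offending parameter schema;
# the real schema served by Context7 may differ.
rejected_parameters = {
    "type": "object",
    "properties": {
        "folders": {"type": "array"},  # array schema with no "items" -> rejected by OpenAI
    },
}

# The same shape with an "items" entry added is the form OpenAI accepts.
accepted_parameters = {
    "type": "object",
    "properties": {
        "folders": {"type": "array", "items": {"type": "string"}},  # item type is a guess
    },
}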

Error logs

Client:

---------------------------------------------------------------------------
InternalServerError                       Traceback (most recent call last)
Cell In[14], line 5
      1 from llama_stack_client import LlamaStackClient
      3 client = LlamaStackClient(base_url="http://localhost:8321/")
----> 5 response = client.responses.create(
      6     model="openai/gpt-4o",
      7     input="Can I get any info about https://github.com/llamastack/llama-stack from Context7?",
      8     tools=[{"type": "mcp", "server_url": "https://context7.liam.sh/mcp", "requires_approval": False, "server_label": "Context7"}]
      9 )

File ~/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack_client/_utils/_utils.py:288, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    286             msg = f"Missing required argument: {quote(missing[0])}"
    287     raise TypeError(msg)
--> 288 return func(*args, **kwargs)

File ~/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack_client/resources/responses/responses.py:248, in ResponsesResource.create(self, input, model, conversation, include, instructions, max_infer_iters, max_tool_calls, parallel_tool_calls, previous_response_id, prompt, store, stream, temperature, text, tools, extra_headers, extra_query, extra_body, timeout)
    217 @required_args(["input", "model"], ["input", "model", "stream"])
    218 def create(
    219     self,
   (...)    246     timeout: float | httpx.Timeout | None | NotGiven = not_given,
    247 ) -> ResponseObject | Stream[ResponseObjectStream]:
--> 248     return self._post(
    249         "/v1/responses",
    250         body=maybe_transform(
...
-> 1054         raise self._make_status_error_from_response(err.response) from None
   1056     break
   1058 assert response is not None, "could not resolve response (should never happen)"

InternalServerError: Error code: 500 - {'detail': 'Internal server error: An unexpected error occurred.'}

Server:

INFO     2025-12-19 13:22:07,312 uvicorn.access:473 uncategorized: ::1:59762 - "POST /v1/responses HTTP/1.1" 200
ERROR    2025-12-19 13:27:50,200 llama_stack.core.server.server:281 core::server: Error executing endpoint
         route='/v1/responses' method='post'
         ╭───────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────╮
         │ /Users/bmurdock/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack/core/server/server.py:271 │
         │ in route_handler                                                                                            │
         │                                                                                                             │
         │   268 │   │   │   │   │   return StreamingResponse(gen, media_type="text/event-stream")                     │
         │   269 │   │   │   │   else:                                                                                 │
         │   270 │   │   │   │   │   value = func(**kwargs)                                                            │
         │ ❱ 271 │   │   │   │   │   result = await maybe_await(value)                                                 │
         │   272 │   │   │   │   │   if isinstance(result, PaginatedResponse) and result.url is None:                  │
         │   273 │   │   │   │   │   │   result.url = route                                                            │
         │   274                                                                                                       │
         │                                                                                                             │
         │ /Users/bmurdock/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack/core/server/server.py:193 │
         │ in maybe_await                                                                                              │
         │                                                                                                             │
         │   190                                                                                                       │
         │   191 async def maybe_await(value):                                                                         │
         │   192 │   if inspect.iscoroutine(value):                                                                    │
         │ ❱ 193 │   │   return await value                                                                            │
         │   194 │   return value                                                                                      │
         │   195                                                                                                       │
         │   196                                                                                                       │
         │                                                                                                             │
         │ /Users/bmurdock/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack/providers/inline/agents/m │
         │ eta_reference/agents.py:115 in create_openai_response                                                       │
         │                                                                                                             │
         │   112 │   │   metadata: dict[str, str] | None = None,                                                       │
         │   113 │   ) -> OpenAIResponseObject:                                                                        │
         │   114 │   │   assert self.openai_responses_impl is not None, "OpenAI responses not                          │
         │       initialized"                                                                                          │
         │ ❱ 115 │   │   result = await self.openai_responses_impl.create_openai_response(                             │
         │   116 │   │   │   input,                                                                                    │
         │   117 │   │   │   model,                                                                                    │
         │   118 │   │   │   prompt,                                                                                   │
         │                                                                                                             │
         │ /Users/bmurdock/sample-agent/venv-3_12_9/lib/python3.12/site-packages/llama_stack/providers/inline/agents/m │
         │ eta_reference/responses/openai_responses.py:425 in create_openai_response                                   │
         │                                                                                                             │
         │   422 │   │   │   │   │   if failed_response and failed_response.error                                      │
         │   423 │   │   │   │   │   else "Response stream failed without error details"                               │
         │   424 │   │   │   │   )                                                                                     │
         │ ❱ 425 │   │   │   │   raise RuntimeError(f"OpenAI response failed: {error_message}")                        │
         │   426 │   │   │                                                                                             │
         │   427 │   │   │   if final_response is None:                                                                │
         │   428 │   │   │   │   raise ValueError("The response stream never reached a terminal state")                │
         ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
         RuntimeError: OpenAI response failed: Error code: 400 - {'error': {'message': "Invalid schema for function
         'search-library-docs': In context=('properties', 'folders'), array schema missing items.", 'type':
         'invalid_request_error', 'param': 'tools[1].function.parameters', 'code': 'invalid_function_parameters'}}

Expected behavior

I would like the MCP tool to just work, as it does with Cursor. That said, OpenAI's own implementation of the Responses API also fails on this schema with a similar error:

BadRequestError: Error code: 400 - {'error': {'message': 'Invalid tool schema for: Context7.search-library-docs', 'type': 'invalid_request_error', 'param': 'tools', 'code': None}}

So I guess it's debatable whether we want to fix this. FWIW, I'd prefer to be more robust than OpenAI in this case.
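
One way to be more robust would be for Llama Stack to patch up tool schemas before forwarding them to the model provider, e.g. by adding a permissive items entry to any array schema that lacks one. A minimal sketch, assuming we can rewrite the parameters dict at the point where MCP tool definitions are converted into provider tool definitions (the helper name and hook point are hypothetical, not existing Llama Stack code):

def patch_array_items(schema: dict) -> dict:
    """Hypothetical helper: add a permissive "items" to array schemas that lack one."""
    if not isinstance(schema, dict):
        return schema
    patched = dict(schema)
    if patched.get("type") == "array" and "items" not in patched:
        patched["items"] = {}  # accept items of any type
    # Recurse into nested object and array schemas.
    for key in ("properties", "$defs", "definitions"):
        if isinstance(patched.get(key), dict):
            patched[key] = {name: patch_array_items(sub) for name, sub in patched[key].items()}
    if isinstance(patched.get("items"), dict):
        patched["items"] = patch_array_items(patched["items"])
    return patched

# Hypothetical usage at the point where an MCP tool is turned into a provider tool:
# tool["function"]["parameters"] = patch_array_items(tool["function"]["parameters"])

Whether an empty items schema ({}) is enough to satisfy OpenAI's validator would need to be verified; if not, we could fall back to something like {"type": "string"}.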
