
Correct env config for mistral #1525

@aclifton314

Description

OS: WSL Ubuntu 24.04.3 LTS
python: 3.11
browser: Firefox 143.0.3
gpt-researcher: commit 906e94f19ab69387d6c4c28fa7aa5646068bf65c, v3.3.5

I'm having trouble setting up my env file to use a Mistral model I have access to on a server I own. For reference, I verified that these parameters work in LangGraph:

from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(
    model="/models/mistral-small",
    base_url="http://url.to.my.model:1234/v1",
    api_key="my-api-key",
)

The Mistral model is deployed with vLLM. Here is what I have in my env file:

OPENAI_API_KEY=my-api-key
MISTRAL_BASE_URL=http://url.to.my.model:1234/v1
FAST_LLM=mistralai:/models/mistral-small
SMART_LLM=mistralai:/models/mistral-small
STRATEGIC_LLM=mistralai:/models/mistral-small
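
For clarity, here is how I'm assuming the provider:model strings above are parsed — split on the first colon, so the leading slash in the model path stays part of the model name. This sketch is my assumption, not taken from gpt-researcher's code:

```python
def split_llm_spec(spec: str) -> tuple[str, str]:
    """Split a '<provider>:<model>' spec on the first colon only
    (assumed parsing; not verified against gpt-researcher internals)."""
    provider, _, model = spec.partition(":")
    return provider, model

print(split_llm_spec("mistralai:/models/mistral-small"))
# ('mistralai', '/models/mistral-small')
```

If the actual parsing differs (e.g. splits on the last colon), the model path "/models/mistral-small" would get mangled, so please correct me if this format is wrong.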

With this setup, I get the following error: TypeError: expected string or bytes-like object, got 'NoneType'. Here is the full stack trace:

INFO:     Will watch for changes in these directories: ['/home/aclifton/gpt-researcher']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [2482] using StatReload
INFO:     Started server process [2484]
INFO:     Waiting for application startup.
2025-10-02 10:02:54,522 - backend.server.app - INFO - Frontend mounted from: /home/aclifton/gpt-researcher/frontend
2025-10-02 10:02:54,522 - backend.server.app - INFO - Static assets mounted from: /home/aclifton/gpt-researcher/frontend/static
2025-10-02 10:02:54,522 - backend.server.app - INFO - Research API started - no database required
INFO:     Application startup complete.
INFO:     127.0.0.1:51731 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:51732 - "GET /site/styles.css HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:51731 - "GET /site/scripts.js HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:51732 - "GET /static/gptr-logo.png HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:51733 - "GET /site/static/gptr-logo.png HTTP/1.1" 304 Not Modified
DEBUG:    = connection is CONNECTING
DEBUG:    < GET /ws HTTP/1.1
DEBUG:    < host: 127.0.0.1:8000
DEBUG:    < user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:143.0) Gecko/20100101 Firefox/143.0
DEBUG:    < accept: */*
DEBUG:    < accept-language: en-US,en;q=0.5
DEBUG:    < accept-encoding: gzip, deflate, br, zstd
DEBUG:    < sec-websocket-version: 13
DEBUG:    < origin: http://127.0.0.1:8000
DEBUG:    < sec-websocket-extensions: permessage-deflate
DEBUG:    < sec-websocket-key: 05dI+C2BgK3usvQaXvi5dA==
DEBUG:    < connection: keep-alive, Upgrade
DEBUG:    < cookie: conversationHistory=%5B%7B%22prompt%22%3A%22What%20are%20the%20main%20algorithms%20used%20for%20the%20scheduling%20problem%3F%22%2C%22links%22%3A%7B%22pdf%22%3A%22%22%2C%22docx%22%3A%22outputs%2Ftask_1759355264_What%2520are%2520the%2520main%2520algorithms%2520used%2520for%2520the%2520sc.docx%22%2C%22md%22%3A%22outputs%2Ftask_1759355264_What%2520are%2520the%2520main%2520algorithms%2520used%2520for%2520the%2520sc.md%22%2C%22json%22%3A%22outputs%2Ftask_1759355264_What%20are%20the%20main%20algorithms%20used%20for%20the%20scheduling%20problem.json%22%7D%2C%22timestamp%22%3A%222025-10-01T21%3A51%3A09.021Z%22%7D%2C%7B%22prompt%22%3A%22what%20is%20the%20history%20of%20the%20neutrino%3F%22%2C%22links%22%3A%7B%22pdf%22%3A%22%22%2C%22docx%22%3A%22outputs%2Ftask_1759351808_what%2520is%2520the%2520history%2520of%2520the%2520neutrino.docx%22%2C%22md%22%3A%22outputs%2Ftask_1759351808_what%2520is%2520the%2520history%2520of%2520the%2520neutrino.md%22%2C%22json%22%3A%22outputs%2Ftask_1759351808_what%20is%20the%20history%20of%20the%20neutrino.json%22%7D%2C%22timestamp%22%3A%222025-10-01T20%3A53%3A34.040Z%22%7D%5D
DEBUG:    < sec-fetch-dest: empty
DEBUG:    < sec-fetch-mode: websocket
DEBUG:    < sec-fetch-site: same-origin
DEBUG:    < pragma: no-cache
DEBUG:    < cache-control: no-cache
DEBUG:    < upgrade: websocket
INFO:     127.0.0.1:51738 - "WebSocket /ws" [accepted]
DEBUG:    > HTTP/1.1 101 Switching Protocols
DEBUG:    > Upgrade: websocket
DEBUG:    > Connection: Upgrade
DEBUG:    > Sec-WebSocket-Accept: a9gp0DX6D0y6eSurk7dYbxrQK+8=
DEBUG:    > Sec-WebSocket-Extensions: permessage-deflate
DEBUG:    > date: Thu, 02 Oct 2025 16:03:29 GMT
DEBUG:    > server: uvicorn
INFO:     connection open
DEBUG:    = connection is OPEN
DEBUG:    < TEXT 'start {"task":"What are the main algorithms use...nt","query_domains":[]}' [208 bytes]
2025-10-02 10:03:23,703 - server.server_utils - INFO - Received WebSocket message: start {"task":"What are the main algorithms used f...
2025-10-02 10:03:23,703 - server.server_utils - INFO - Processing start command
DEBUG:    > TEXT '{"query":"What are the main algorithms used for...ontext":[],"report":""}' [111 bytes]
2025-10-02 10:03:24,950 - httpx - INFO - HTTP Request: POST https://api.mistral.ai/v1/chat/completions "HTTP/1.1 401 Unauthorized"
⚠️ Error in reading JSON and failed to repair with json_repair: the JSON object must be str, bytes or bytearray, not NoneType
⚠️ LLM Response: `None`
2025-10-02 10:03:24,964 - server.server_utils - ERROR - Error running task: expected string or bytes-like object, got 'NoneType'
Traceback (most recent call last):
  File "/home/aclifton/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 34, in choose_agent
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/gpt_researcher/utils/llm.py", line 84, in create_chat_completion
    response = await provider.get_chat_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/gpt_researcher/llm_provider/generic/base.py", line 259, in get_chat_response
    output = await self.llm.ainvoke(messages, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 417, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1034, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 992, in agenerate
    raise exceptions[0]
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1162, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_mistralai/chat_models.py", line 689, in _agenerate
    response = await acompletion_with_retry(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_mistralai/chat_models.py", line 231, in acompletion_with_retry
    return await _completion_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 189, in async_wrapped
    return await copy(fn, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 111, in __call__
    do = await self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 400, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 114, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_mistralai/chat_models.py", line 228, in _completion_with_retry
    await _araise_on_error(response)
  File "/home/aclifton/gpt-researcher/.venv/lib/python3.12/site-packages/langchain_mistralai/chat_models.py", line 190, in _araise_on_error
    raise httpx.HTTPStatusError(
httpx.HTTPStatusError: Error response 401 while fetching https://api.mistral.ai/v1/chat/completions: {"detail":"Unauthorized"}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aclifton/gpt-researcher/backend/server/server_utils.py", line 254, in safe_run
    await awaitable
  File "/home/aclifton/gpt-researcher/backend/server/server_utils.py", line 151, in handle_start_command
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/backend/server/websocket_manager.py", line 105, in start_streaming
    report = await run_agent(
             ^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/backend/server/websocket_manager.py", line 161, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 68, in run
    await self._initial_research()
  File "/home/aclifton/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 77, in _initial_research
    await self.gpt_researcher.conduct_research()
  File "/home/aclifton/gpt-researcher/gpt_researcher/agent.py", line 306, in conduct_research
    self.agent, self.role = await choose_agent(
                            ^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 51, in choose_agent
    return await handle_json_error(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 63, in handle_json_error
    json_string = extract_json_with_regex(response)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aclifton/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 79, in extract_json_with_regex
    json_match = re.search(r"{.*?}", response, re.DOTALL)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/re/__init__.py", line 177, in search
    return _compile(pattern, flags).search(string)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'

DEBUG:    > TEXT '{"type":"logs","content":"error","output":"Erro...ect, got \'NoneType\'"}' [104 bytes]

Note that, per the stack trace, the request goes to https://api.mistral.ai/v1/chat/completions (which returns 401 Unauthorized) rather than to my MISTRAL_BASE_URL, so it looks like the base URL is not being picked up. Thanks in advance for your help!
