1 change: 1 addition & 0 deletions en/SUMMARY.md
@@ -124,6 +124,7 @@
* [IBM Watsonx](integrations/langchain/chat-models/ibm-watsonx.md)
* [ChatOllama](integrations/langchain/chat-models/chatollama.md)
* [ChatOpenAI](integrations/langchain/chat-models/azure-chatopenai.md)
* [ChatOpenAI Custom](integrations/langchain/chat-models/chatopenai-custom.md)
* [ChatTogetherAI](integrations/langchain/chat-models/chattogetherai.md)
* [GroqChat](integrations/langchain/chat-models/groqchat.md)
* [Document Loaders](integrations/langchain/document-loaders/README.md)
2 changes: 1 addition & 1 deletion en/integrations/langchain/chat-models/README.md
@@ -25,7 +25,7 @@ Chat models take a list of messages as input and return a model-generated message.
* [ChatOllama](chatollama.md)
* [ChatOllama Function](broken-reference)
* [ChatOpenAI](azure-chatopenai.md)
* [ChatOpenAI Custom](broken-reference)
* [ChatOpenAI Custom](chatopenai-custom.md)
* [ChatTogetherAI](chattogetherai.md)
* [GroqChat](groqchat.md)
* [ChatSambanova](chat-sambanova.md)
36 changes: 36 additions & 0 deletions en/integrations/langchain/chat-models/chatopenai-custom.md
@@ -0,0 +1,36 @@
# ChatOpenAI Custom

Use the **ChatOpenAI Custom** node when you want to connect Flowise to an OpenAI-compatible chat completion API, including custom deployments, proxy gateways, and third-party providers.

The node lets you enter the model name directly and override the default API base path, which is useful when a provider accepts the same request shape as OpenAI but uses its own model IDs or endpoint.

## Setup

1. In the Flowise canvas, open **Chat Models** and drag the **ChatOpenAI Custom** node into your flow.
2. Click **Connect Credential** and create a new OpenAI credential.
3. Paste the API key from your OpenAI-compatible provider.
4. Enter the provider's model ID in **Model Name**.
5. Open **Additional Parameters**.
6. Set **Base Path** to the provider's OpenAI-compatible `/v1` endpoint.
7. Connect the node to a chain, agent, or other component that accepts a chat model.
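With these settings, the node issues standard OpenAI-style chat completion requests against the configured Base Path. The sketch below shows the request shape it effectively sends; the base path, model ID, and API key are illustrative placeholders, not real values:

```python
# Illustrative sketch of the OpenAI-compatible request the node builds.
# Replace the placeholders with your provider's actual values.
base_path = "https://example-provider.com/v1"  # Additional Parameters > Base Path
model_name = "vendor/model-id"                 # Model Name field

url = f"{base_path}/chat/completions"
payload = {
    "model": model_name,
    "messages": [{"role": "user", "content": "Hello"}],
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY",    # the connected credential's API key
    "Content-Type": "application/json",
}

print(url)  # https://example-provider.com/v1/chat/completions
```

Any provider that accepts a POST of this payload at `<Base Path>/chat/completions` should work with the node.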

## WaveSpeed LLM

[WaveSpeed LLM](https://wavespeed.ai/llm) provides access to models from multiple vendors through an OpenAI-compatible API.

Use these values in **ChatOpenAI Custom**:

| Field | Value |
| --- | --- |
| Credential API key | Your WaveSpeed API key |
| Model Name | A WaveSpeed model ID such as `deepseek/deepseek-v4-flash` |
| Base Path | `https://llm.wavespeed.ai/v1` |

Model IDs use a `vendor/model` format. Copy the model ID from the [WaveSpeed LLM catalog](https://wavespeed.ai/llm), because model names are case-sensitive and provider-prefixed.
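As a quick sanity check before pasting an ID into **Model Name**, the `vendor/model` shape can be verified with a small helper (a sketch, not part of Flowise):

```python
def looks_provider_prefixed(model_id: str) -> bool:
    """Heuristic: WaveSpeed model IDs take the form vendor/model."""
    vendor, sep, name = model_id.partition("/")
    return sep == "/" and bool(vendor) and bool(name)

print(looks_provider_prefixed("deepseek/deepseek-v4-flash"))  # True
print(looks_provider_prefixed("deepseek-v4-flash"))           # False: vendor prefix missing
```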

## Troubleshooting

* If requests return `401 Unauthorized`, confirm that the credential contains a valid API key for your provider.
* If requests return `404 Not Found` or `model not found`, use the full model ID required by your provider.
* If requests fail to connect, confirm that **Base Path** is correctly configured for your provider's endpoint.
* If tool calling or image input does not work as expected, verify that the selected model supports that capability before enabling it in your flow.
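To narrow these cases down outside Flowise, a small stand-alone probe can map common HTTP failures to the bullets above. This is a hedged sketch using only the Python standard library; the endpoint, key, and model values are placeholders you would replace with your own:

```python
import json
import urllib.error
import urllib.request

def probe(base_path: str, api_key: str, model: str) -> str:
    """Send one minimal chat completion request and report the likely failure cause."""
    req = urllib.request.Request(
        f"{base_path.rstrip('/')}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": "ping"}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=30):
            return "ok"
    except urllib.error.HTTPError as e:
        if e.code == 401:
            return "401: check the credential's API key"
        if e.code == 404:
            return "404: check the Base Path and the full model ID"
        return f"HTTP {e.code}: see the provider's error body"
    except urllib.error.URLError:
        return "connection failed: check the Base Path host"

# Example with placeholder values; the host is unreachable, so the probe
# reports a connection failure rather than a credential or model problem.
print(probe("https://nonexistent.invalid/v1", "KEY", "vendor/model"))
```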