
feat(chat model): add Volcengine chat model node #5928

Open
yuyol wants to merge 4 commits into FlowiseAI:main from yuyol:feature/volcengine-chat-model

Conversation


@yuyol yuyol commented Mar 7, 2026

Add Volcengine Chat Model Node

Overview

This PR adds support for Volcengine Chat Models as a new chat model node in Flowise.

Users can now connect to Volcengine LLM services and use them directly inside Flowise workflows, similar to existing providers such as OpenAI, Anthropic, and Google.

Why

Volcengine provides powerful LLM services (e.g. Doubao models) that are widely used in the Chinese developer ecosystem.
Adding native support for Volcengine enables Flowise users to easily integrate these models into their AI pipelines.

What’s included

  • New Volcengine Chat Model node
  • Configurable parameters in the node UI
  • Support for API Key authentication
  • Compatibility with existing Flowise chat workflows

How to test

  1. Start Flowise locally
  2. Add the Volcengine Chat Model node to a flow
  3. Configure:
    • API Key
    • Model name (e.g. a Doubao model)
  4. Connect it to a Prompt node or Chatflow
  5. Run the flow and verify responses from Volcengine

Screenshots

(screenshot of the configured node attached in the PR)

Notes

  • No breaking changes
  • This is an additive feature that does not affect existing nodes

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces native support for Volcengine's large language models within Flowise. By adding a dedicated chat model node, users can now seamlessly connect to and leverage Volcengine's powerful LLM services, such as the Doubao models, directly within their AI pipelines. This expansion broadens the range of available LLM providers, particularly benefiting users within the Chinese developer ecosystem.

Highlights

  • New Volcengine Chat Model Node: A new 'Volcengine Chat Model' node has been added, allowing users to integrate Volcengine LLM services into Flowise workflows.
  • Configurable Parameters: The new node supports configurable parameters such as model name, temperature, max tokens, streaming, and various penalties, providing flexibility for users.
  • API Key Authentication: The node includes support for API Key authentication, utilizing a newly defined Volcengine API credential.
  • Compatibility: This additive feature is compatible with existing Flowise chat workflows and does not introduce any breaking changes.
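The API Key credential highlighted above maps to a small credential class. Below is a hedged sketch of what `VolcengineApi.credential.ts` likely looks like; the `INodeParams`/`INodeCredential` shapes here are simplified stand-ins for the real interfaces in `packages/components/src/Interface.ts`, and the field names are assumptions based on Flowise's existing credential files.

```typescript
// Simplified stand-ins for Flowise's credential interfaces (assumption:
// the real ones live in packages/components/src/Interface.ts).
interface INodeParams {
    label: string
    name: string
    type: string
}

interface INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
}

// Sketch of the new credential class: a single password-type input
// so the API key is stored encrypted and rendered as a masked field.
class VolcengineApi implements INodeCredential {
    label = 'Volcengine API'
    name = 'volcengineApi'
    version = 1.0
    inputs: INodeParams[] = [
        {
            label: 'Volcengine API Key',
            name: 'volcengineApiKey',
            type: 'password'
        }
    ]
}
```

Declaring the key as `type: 'password'` follows the pattern of the other provider credentials, which is what lets the existing credential UI and encryption layer handle it without node-specific code.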
Changelog
  • packages/components/credentials/VolcengineApi.credential.ts
    • Created a new credential class VolcengineApi to securely store Volcengine API keys.
    • Defined an input for the Volcengine API Key as a password type.
  • packages/components/models.json
    • Included new Volcengine models, doubao-1-5-lite-32k-250115 and doubao-1-5-pro-32k-250115, along with their respective input and output costs.
    • Expanded the models.json file to support Volcengine as a new model provider.
  • packages/components/nodes/chatmodels/ChatVolcengine/ChatVolcengine.ts
    • Implemented the ChatVolcengine_ChatModels node class to integrate Volcengine's chat models.
    • Configured the node with various parameters such as model name, temperature, streaming, max tokens, and penalties.
    • Implemented logic to load Volcengine models dynamically and handle API key authentication.
    • Ensured proper initialization of the Volcengine chat model client, including base URL configuration and handling of additional options.
  • packages/components/nodes/chatmodels/ChatVolcengine/volcengine-color.svg
    • Included the volcengine-color.svg icon for visual representation of the Volcengine node in the UI.
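To make the changelog concrete, here is a hedged sketch of how `ChatVolcengine.ts` plausibly maps node inputs onto an OpenAI-compatible client config. The function and type names are illustrative, not the actual Flowise interfaces, and the Ark base URL is an assumption about Volcengine's OpenAI-compatible endpoint.

```typescript
// Hypothetical input shape: Flowise UI fields arrive as strings/booleans.
interface VolcengineNodeInputs {
    modelName: string
    temperature?: string
    maxTokens?: string
    streaming?: boolean
}

// Hypothetical client config consumed by the ChatOpenAI-based client.
interface VolcengineClientConfig {
    model: string
    temperature: number
    maxTokens?: number
    streaming: boolean
    apiKey: string
    baseURL: string
}

function buildVolcengineConfig(
    inputs: VolcengineNodeInputs,
    apiKey: string
): VolcengineClientConfig {
    const config: VolcengineClientConfig = {
        model: inputs.modelName,
        // Parse the UI string explicitly so '0' is honored, not dropped.
        temperature:
            inputs.temperature === undefined || inputs.temperature === ''
                ? 0.7
                : parseFloat(inputs.temperature),
        streaming: inputs.streaming ?? true,
        apiKey,
        // Assumption: Volcengine's Ark service exposes an OpenAI-compatible
        // endpoint at this base URL.
        baseURL: 'https://ark.cn-beijing.volces.com/api/v3'
    }
    if (inputs.maxTokens) config.maxTokens = parseInt(inputs.maxTokens, 10)
    return config
}
```

Because the endpoint is OpenAI-compatible, a config like this can be passed straight into `@langchain/openai`'s `ChatOpenAI` (via its `configuration.baseURL` option), which matches the reuse pattern the review comments describe.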
Activity
  • No human activity has been recorded on this pull request yet.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new chat model node for Volcengine. The review identified a security issue whose severity was raised to 'high' because the API key could leak in a server-side context; a fix was suggested. It also flagged unsafe handling of the temperature parameter, for which a robust code suggestion was provided. Aside from these issues, the implementation correctly uses the @langchain/openai package, sets up credentials, and adds new models, generally adhering to existing patterns.
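The "unsafe handling of the temperature parameter" likely refers to passing the raw UI string through unchecked. A minimal sketch of the kind of defensive parsing the reviewer may have had in mind (the helper name and accepted range are assumptions, not the reviewer's actual suggestion):

```typescript
// Hypothetical helper: Flowise node inputs arrive as strings, so parse
// numeric parameters defensively instead of forwarding them unchecked.
function parseTemperature(raw: string | undefined, fallback = 0.7): number {
    if (raw === undefined || raw.trim() === '') return fallback
    const value = Number(raw)
    // Reject NaN/Infinity and out-of-range values (OpenAI-compatible APIs
    // typically accept 0–2) rather than sending them upstream.
    if (!Number.isFinite(value) || value < 0 || value > 2) return fallback
    return value
}
```

A guard like this keeps a typo in the UI from producing a request the upstream API rejects, and keeps a deliberate `temperature: 0` from being silently replaced by the default.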

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for Volcengine chat models by adding a new ChatVolcengine node. The implementation correctly leverages the existing ChatOpenAI component for OpenAI-compatible APIs. The changes include a new credential type, the node implementation, and model definitions in models.json. My review identified a couple of minor issues: the use of a deprecated property and an inconsistent default model name. I've provided suggestions to address these points to improve the code's correctness, maintainability, and cost-effectiveness. Overall, this is a well-implemented feature addition.


yuyol commented Mar 8, 2026

Hi maintainers, this PR currently has workflows awaiting approval.
Please let me know if anything else is needed from my side. Thanks!
