feat(dotAI): Dot AI LangChain4J integration (WIP - First provider)#35150
ihoffmann-dot wants to merge 14 commits into main
Conversation
❌ Issue Linking Required

This PR could not be linked to an issue. All PRs must be linked to an issue for tracking purposes.

How to fix this: Option 1: Add a keyword to the PR body (recommended; auto-removes this comment)

Why is this required? Issue linking ensures proper tracking and documentation, and helps maintain project history. It connects your code changes to the problem they solve.

--- This comment was automatically generated by the issue linking workflow
Claude finished @ihoffmann-dot's task in 5m 5s: Code Review
Findings

1. **Security: API keys exposed in UI.**

2. **Security: Incomplete credential redaction.**

   ```java
   providerConfig.replaceAll("\"apiKey\"\\s*:\\s*\"[^\"]+\"", "\"apiKey\":\"*****\"")
   ```

   This only redacts the `apiKey` field; any other credential material in the `providerConfig` JSON is logged in the clear.

3. **NPE crash.**

   ```java
   // In LangChain4jModelFactory.buildChatModel():
   if (config == null || config.getProvider() == null) {
       return null; // ← ConcurrentHashMap will NPE on this
   }
   ```

   The factory should throw a descriptive exception instead of returning `null`, since the caller stores the result in a `ConcurrentHashMap`, which rejects `null` values.

4. **Unbounded memory leak in model caches.** The three model caches are never evicted, so they grow without bound.

5. **Hardcoded provider.**

   ```java
   public AIProvider getProvider() {
       return AIProvider.OPEN_AI;
   }
   ```

   The class is intended to be the single multi-provider client. Returning `AIProvider.OPEN_AI` unconditionally misreports every non-OpenAI configuration.

6. **`isEnabled()` contradicts its Javadoc.**

   ```java
   /**
    * Returns true when either a {@code providerConfig} JSON is present (new path)
    * or all legacy URL + API key fields are populated (old path). ← Javadoc
    */
   public boolean isEnabled() {
       return StringUtils.isNotBlank(providerConfig); // ← only new path
   }
   ```

   The Javadoc describes the safe dual-check that would protect rollback, but the implementation only checks the new `providerConfig` field, so a rollback to the legacy secrets silently disables AI.

7. **No-op validation.**

   ```java
   public void validateAIConfig(final AppConfig appConfig, final String userId) {
       appConfig.debugLogger(getClass(), () -> "AI configuration validation delegated to LangChain4J");
   }
   ```

   LangChain4J validates credentials at request time, not eagerly. This means a misconfigured app is only surfaced on the first live AI request, not when the configuration is saved.

8. **Repeated JSON parsing.** The `providerConfig` JSON is parsed three separate times (once for each model type).

9. **Missing section handling.** If a site only configures a subset of the `providerConfig` sections, the behavior of the remaining sections is undefined.

Minor
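Finding 2 could be addressed by widening the redaction pattern to cover every credential-bearing field rather than `apiKey` alone. A minimal stdlib sketch; the extra field names (`secretKey`, `accessKey`, `token`) are assumptions for illustration, not fields confirmed by this PR:

```java
import java.util.regex.Pattern;

public class RedactConfig {

    // Redacts any listed credential field, not just "apiKey". The extra
    // field names are hypothetical examples of credential material.
    private static final Pattern SECRET_FIELDS = Pattern.compile(
            "\"(apiKey|secretKey|accessKey|token)\"\\s*:\\s*\"[^\"]+\"");

    static String redact(final String providerConfig) {
        // $1 keeps the matched field name; only the value is masked.
        return SECRET_FIELDS.matcher(providerConfig)
                .replaceAll("\"$1\":\"*****\"");
    }

    public static void main(final String[] args) {
        final String json =
                "{\"chat\":{\"apiKey\":\"sk-abc\",\"token\":\"t-9\",\"model\":\"gpt-4o\"}}";
        final String out = redact(json);
        assert !out.contains("sk-abc") : out;
        assert out.contains("\"model\":\"gpt-4o\"") : out;
        System.out.println(out);
    }
}
```

A JSON-aware pass (parsing the tree and masking known secret keys) would be more robust than a regex against escaped quotes, but the pattern above at least stops non-`apiKey` secrets from leaking.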
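Findings 3 and 4 could be fixed together: fail fast instead of returning `null`, and bound the cache. A sketch using an access-ordered `LinkedHashMap` for LRU eviction in place of the PR's `ConcurrentHashMap`; the class name, the `MAX_ENTRIES` bound, and the builder callback are all hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class ModelCache<K, V> {

    private static final int MAX_ENTRIES = 64; // hypothetical bound

    // Access-ordered LinkedHashMap: evicts the least-recently-used entry
    // once the bound is exceeded, addressing the unbounded-growth finding.
    private final Map<K, V> cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

    public synchronized V get(final K key, final Function<K, V> builder) {
        return cache.computeIfAbsent(key, k -> {
            final V model = builder.apply(k);
            if (model == null) {
                // Fail fast with a descriptive error instead of handing a
                // null back to a cache (ConcurrentHashMap rejects nulls).
                throw new IllegalStateException("No model could be built for: " + k);
            }
            return model;
        });
    }

    public static void main(final String[] args) {
        final ModelCache<String, String> cache = new ModelCache<>();
        assert "chat-model".equals(cache.get("chat", k -> "chat-model"));
        try {
            cache.get("broken", k -> null);
            throw new AssertionError("expected IllegalStateException");
        } catch (final IllegalStateException expected) {
            System.out.println("ok");
        }
    }
}
```

The single `synchronized` guard trades some concurrency for simplicity; a production version might keep `ConcurrentHashMap` and add eviction via a purpose-built cache library instead.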
Pull Request Unsafe to Rollback!!!
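The rollback warning traces back to finding 6: `isEnabled()` drops the legacy check its Javadoc promises. A sketch of the dual-path check, with a plain-Java blank check standing in for `StringUtils` and hypothetical legacy field names:

```java
public class DualCheckEnabled {

    // Enabled when the new providerConfig JSON is present (new path) OR the
    // legacy URL + API key pair is still populated (old path), so rolling
    // back to the previous secrets keeps AI working.
    static boolean isEnabled(final String providerConfig,
                             final String legacyUrl,
                             final String legacyApiKey) {
        final boolean newPath = isNotBlank(providerConfig);
        final boolean legacyPath = isNotBlank(legacyUrl) && isNotBlank(legacyApiKey);
        return newPath || legacyPath;
    }

    // Stdlib stand-in for StringUtils.isNotBlank.
    private static boolean isNotBlank(final String s) {
        return s != null && !s.trim().isEmpty();
    }

    public static void main(final String[] args) {
        assert isEnabled("{\"chat\":{}}", null, null);         // new path
        assert isEnabled(null, "https://api.example", "sk-1"); // legacy path
        assert !isEnabled("  ", null, "sk-1");                 // neither complete
        System.out.println("ok");
    }
}
```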
Summary
Replaces the direct OpenAI HTTP client (`OpenAIClient`) with a LangChain4J abstraction layer, enabling multi-provider AI support in dotCMS. This PR covers Phase 1: OpenAI via LangChain4J.

Changes

- `LangChain4jAIClient.java`: new `AIProxiedClient` implementation that delegates chat, embeddings, and image requests to LangChain4J models.
- `LangChain4jModelFactory.java`: factory that builds `ChatModel`, `EmbeddingModel`, and `ImageModel` instances from a `ProviderConfig`; the only place with provider-specific builder logic.
- `ProviderConfig.java`: deserializable POJO for the `providerConfig` JSON secret (per provider section: model, apiKey, endpoint, maxTokens, maxCompletionTokens, etc.).
- `AppConfig.java`: replaced the legacy individual-field secrets (apiKey, model, etc.) with a single `providerConfig` JSON string. `isEnabled()` now only checks this field.
- `AIAppValidator.java`: removed the OpenAI `/v1/models` validation call, which is incompatible with the multi-provider architecture.
- `CompletionsResource.java`: updated `/api/v1/ai/completions/config` to derive model names and config values from `AppConfig` getters instead of iterating raw `AppKeys`.
- `dotAI.yml`: removed the legacy hidden fields; added `providerConfig` as the single configuration entry point.
- Tests: unit tests for `ProviderConfig`, `LangChain4jModelFactory`, and `LangChain4jAIClient`; updated the `AIProxyClientTest` integration test to use `providerConfig`-based setup.

Motivation
The previous implementation was tightly coupled to OpenAI's API contract (hardcoded HTTP calls, OpenAI-specific parameters, model validation via `/v1/models`). LangChain4J provides a provider-agnostic model interface, allowing future phases to add Azure OpenAI, AWS Bedrock, and Vertex AI without touching the core request/response flow.

The `providerConfig` JSON secret replaces multiple individual secrets with a single structured configuration, supporting per-section (chat/embeddings/image) provider and model settings.

Related Issue
EPIC: dotAI Multi-Provider Support #33970
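For illustration, a per-section `providerConfig` secret as described above might look like the following. The key names beyond those listed in the Changes section (provider, model, apiKey, endpoint, maxTokens) and all values are assumptions, not taken from this PR:

```json
{
  "chat": {
    "provider": "OPEN_AI",
    "model": "gpt-4o-mini",
    "apiKey": "sk-...",
    "maxTokens": 4096
  },
  "embeddings": {
    "provider": "OPEN_AI",
    "model": "text-embedding-3-small",
    "apiKey": "sk-..."
  },
  "image": {
    "provider": "OPEN_AI",
    "model": "dall-e-3",
    "apiKey": "sk-...",
    "endpoint": "https://api.openai.com/v1"
  }
}
```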