
fix: resolve the series of runtime API Errors in the dist build output #269

Open
xiaoFjun-eng wants to merge 1 commit into claude-code-best:main from xiaoFjun-eng:my-fix-branch

Conversation


@xiaoFjun-eng xiaoFjun-eng commented Apr 14, 2026

  1. Fix API Error: undefined is not an object (evaluating '_OpenAI_instances.add')

    • build.ts: add external: ['openai'] so the Bun bundler no longer packs the openai SDK into a chunk, preventing the lazy initialization of its private class fields (WeakSet/WeakMap) from running out of order
  2. Fix API Error: Cannot access 'OpenAI' before initialization

    • openai/client.ts, grok/client.ts: replace the static import OpenAI with import type (type-only) plus a dynamic await import('openai') inside the function body, fully sidestepping the TDZ (Temporal Dead Zone) problem created by Bun code splitting
    • openai/index.ts, grok/index.ts: add await at the corresponding call sites
  3. Fix API Error: undefined is not an object (evaluating 'DEFAULT_MODEL_MAP[cleanModel]')

    • openai/modelMapping.ts, grok/modelMapping.ts: turn the module-level constants DEFAULT_MODEL_MAP / DEFAULT_FAMILY_MAP into the functions getDefaultModelMap() / getDefaultFamilyMap(), so the values can no longer be undefined when Bun's lazy loading reorders module initialization
    • Covers both the first-login (/login) path and subsequent re-entries into the UI
  4. Reset the model-string cache after login (login.tsx)

    • After a successful /login, call resetSettingsCache(), applyConfigEnvironmentVariables(), and resetModelStrings() in order, so the model mapping takes effect immediately after switching providers and no stale cache lingers
  5. Add a resetModelStrings() utility (modelStrings.ts)

    • Exposes a cache-reset hook for the login flow to call
  6. Improve error logging (openai/index.ts, grok/index.ts)

    • catch blocks now also log error.stack, making runtime errors in the build output easier to pin down
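The type-only import plus dynamic import pattern from item 2 can be sketched as follows. This is a minimal, self-contained sketch: `loadSdk` and the client shape are stand-ins for the real `await import('openai')` and OpenAI client, not the project's actual code.

```typescript
// Sketch of the item-2 pattern: no top-level value import of the SDK, so
// the bundler emits no cross-chunk class reference that could land in a
// temporal dead zone. `loadSdk` stands in for `await import('openai')`.
type SdkClient = { baseURL: string }

async function loadSdk(): Promise<{ createClient: (url: string) => SdkClient }> {
  // Real code: const { default: OpenAI } = await import('openai')
  return { createClient: (url: string) => ({ baseURL: url }) }
}

let cachedClient: SdkClient | null = null

// Exported in the real module; plain function here to keep the sketch standalone.
async function getClient(): Promise<SdkClient> {
  if (cachedClient) return cachedClient
  const sdk = await loadSdk() // SDK load is deferred until first use
  cachedClient = sdk.createClient('https://api.example.com/v1')
  return cachedClient
}
```

Because the module-level reference to the SDK class is type-only, it is erased at compile time and Bun's code splitting has nothing to hoist across chunks.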

Summary by CodeRabbit

  • Improvements
    • Enhanced login flow to immediately refresh settings and model configuration after authentication.
    • Improved error logging with stack trace details for better troubleshooting.


coderabbitai bot commented Apr 14, 2026

📝 Walkthrough

The PR converts OpenAI and Grok client initialization to asynchronous functions using dynamic imports, marks the OpenAI package as external in the build configuration, refactors model mapping constants into factory functions, and introduces a post-login cache reset mechanism for model strings.

Changes

Cohort / File(s): Summary

• Build Configuration (build.ts): Marks openai as an external dependency to prevent bundling into dist.
• Login Flow (src/commands/login/login.tsx): Adds cache reset and environment-variable application steps (resetSettingsCache(), applyConfigEnvironmentVariables(), resetModelStrings()) to the post-login initialization sequence.
• Grok API Client (src/services/api/grok/client.ts, src/services/api/grok/index.ts): Converts getGrokClient to async with a dynamic import; updates the call site to await the async function and enhances error logging with stack traces.
• Grok Model Mapping (src/services/api/grok/modelMapping.ts): Replaces the DEFAULT_MODEL_MAP and DEFAULT_FAMILY_MAP constants with getDefaultModelMap() and getDefaultFamilyMap() factory functions.
• OpenAI API Client (src/services/api/openai/client.ts, src/services/api/openai/index.ts): Converts getOpenAIClient to async with a dynamic import; updates the call site to await the async function and enhances error logging with stack traces.
• OpenAI Model Mapping (src/services/api/openai/modelMapping.ts): Replaces the DEFAULT_MODEL_MAP constant with a getDefaultModelMap() factory function.
• Model Utilities (src/utils/model/modelStrings.ts): Adds a new exported resetModelStrings() function to clear cached model-strings state.
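The resetModelStrings() addition can be sketched as a module-level cache with a reset hook. Only the function name comes from the PR; the cache internals below are assumed for illustration.

```typescript
// Hypothetical sketch of modelStrings.ts: a lazily built cache plus the
// reset hook the login flow calls after switching providers.
let cachedStrings: Record<string, string> | null = null

function buildModelStrings(): Record<string, string> {
  // The real module derives these from the active provider's settings.
  return { main: 'default-model' }
}

// Both functions are exported in the real module.
function getModelStrings(): Record<string, string> {
  if (!cachedStrings) cachedStrings = buildModelStrings()
  return cachedStrings
}

function resetModelStrings(): void {
  cachedStrings = null // next getModelStrings() rebuilds from fresh settings
}
```

Calling resetModelStrings() after /login is what keeps a provider switch from serving model names cached under the previous provider.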

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Dynamic imports and caches we clear,
Grok and OpenAI clients brought near,
Factory functions replace static stone,
Async flows make the settings known! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage: ⚠️ Warning. Docstring coverage is 54.55%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

• Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
• Title Check: ✅ Passed. The title describes fixing runtime API errors in dist build artifacts, which directly corresponds to the main objective of addressing initialization and caching issues in the bundled application.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
src/services/api/grok/modelMapping.ts (1)

100-107: Avoid redundant factory function calls.

The factory functions are called twice each—once for the existence check and once for value retrieval. This creates unnecessary object allocations on every resolveGrokModel() call.

♻️ Proposed optimization using optional chaining
   // 5. Exact model name lookup
-  if (getDefaultModelMap()[cleanModel]) {
-    return getDefaultModelMap()[cleanModel]
+  const mappedModel = getDefaultModelMap()[cleanModel]
+  if (mappedModel) {
+    return mappedModel
   }

   // 6. Family-level default
-  if (family && getDefaultFamilyMap()[family]) {
-    return getDefaultFamilyMap()[family]
+  const familyDefault = family && getDefaultFamilyMap()[family]
+  if (familyDefault) {
+    return familyDefault
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/grok/modelMapping.ts` around lines 100 - 107, The code calls
getDefaultModelMap() and getDefaultFamilyMap() twice each which causes
unnecessary allocations; in resolveGrokModel(), call each factory once and cache
the returned map in a local const (e.g., const modelMap = getDefaultModelMap())
then use modelMap[cleanModel] (or optional chaining) for the existence check and
return, and do the same for family with const familyMap = getDefaultFamilyMap()
to avoid duplicate function calls and object creation.
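A runnable sketch of the deduplicated lookup follows; the map contents are placeholders, not the project's real model mappings.

```typescript
// Each factory is invoked once per resolution; the local const serves both
// the existence check and the returned value.
function getDefaultModelMap(): Record<string, string> {
  return { 'grok-beta': 'grok-beta' } // placeholder entries
}
function getDefaultFamilyMap(): Record<string, string> {
  return { grok: 'grok-beta' } // placeholder entries
}

function resolveModel(cleanModel: string, family?: string): string | undefined {
  // 5. Exact model name lookup
  const mappedModel = getDefaultModelMap()[cleanModel]
  if (mappedModel) return mappedModel

  // 6. Family-level default
  const familyDefault = family ? getDefaultFamilyMap()[family] : undefined
  if (familyDefault) return familyDefault

  return undefined
}
```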

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6c544ea2-10ce-460d-a0bb-b0eaf1e2e30b

📥 Commits

Reviewing files that changed from the base of the PR and between 2273a0b and 6c8e6e0.

📒 Files selected for processing (9)
  • build.ts
  • src/commands/login/login.tsx
  • src/services/api/grok/client.ts
  • src/services/api/grok/index.ts
  • src/services/api/grok/modelMapping.ts
  • src/services/api/openai/client.ts
  • src/services/api/openai/index.ts
  • src/services/api/openai/modelMapping.ts
  • src/utils/model/modelStrings.ts

splitting: true,
define: getMacroDefines(),
features,
external: ['openai'],

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check if 'openai' is in dependencies or devDependencies in package.json

echo "=== Checking openai package location ==="
jq '.dependencies // {} | has("openai")' package.json
jq '.devDependencies // {} | has("openai")' package.json

echo "=== Full dependencies section ==="
jq '.dependencies' package.json

echo "=== Check for openai in devDependencies ==="
jq '.devDependencies.openai // "not found"' package.json

Repository: claude-code-best/claude-code

Length of output: 265


🏁 Script executed:

#!/bin/bash
# Check if this is a CLI tool or library, and how openai is actually used

echo "=== Check package.json bin and main fields ==="
jq '{name, type, main, bin, exports}' package.json

echo "=== Search for openai imports/requires in source code ==="
rg "openai" --type ts --type tsx -l src/

echo "=== Check specific openai usage patterns ==="
rg "import.*openai|require.*openai" --type ts --type tsx -A 2 src/

Repository: claude-code-best/claude-code

Length of output: 506


🏁 Script executed:

#!/bin/bash
# Search for openai usage without problematic file type flags
echo "=== Files containing 'openai' ==="
rg "openai" src/ -l

echo "=== Openai usage patterns ==="
rg "openai" src/ -A 2

echo "=== Check build.ts context around line 62 ==="
sed -n '55,70p' build.ts

Repository: claude-code-best/claude-code

Length of output: 26206


Move openai from devDependencies to dependencies.

Marking openai as external correctly prevents Bun from bundling it, which resolves the private class field initialization issue. However, openai is currently in devDependencies and is required at runtime by the bundled CLI.

When users install this package via npm, openai will not be installed (devDependencies are excluded), causing runtime errors when they use the OpenAI or Grok provider:

  • src/services/api/openai/client.ts dynamically imports openai
  • src/services/api/grok/client.ts uses openai for the Grok provider
  • Both are invoked at runtime when the user selects these providers

Move openai to dependencies to ensure it's installed alongside the CLI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.ts` at line 62, Move the openai package from devDependencies to
dependencies in package.json so it is installed at runtime; update
package.json's "dependencies" to include the same openai version currently in
"devDependencies" and remove it from "devDependencies" so dynamic imports in
src/services/api/openai/client.ts and usage in src/services/api/grok/client.ts
resolve at runtime and the bundled CLI can require/import openai without missing-module
errors.
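The recommended manifest change, sketched as a fragment. The version specifier is a placeholder: carry over whatever version currently sits in devDependencies.

```json
{
  "dependencies": {
    "openai": "^4.0.0"
  }
}
```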

Comment on lines +15 to +19
export async function getGrokClient(options?: {
  maxRetries?: number
  fetchOverride?: typeof fetch
  source?: string
-}): OpenAI {
+}): Promise<OpenAI> {
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Find all usages of getGrokClient to verify they're awaited

echo "=== All getGrokClient calls in test files ==="
rg -n "getGrokClient\(\)" --type ts -g '*test*' -g '*spec*'

echo "=== All getGrokClient calls in source files ==="
rg -n "getGrokClient\(" --type ts -g '!*test*' -g '!*spec*'

Repository: claude-code-best/claude-code

Length of output: 837


🏁 Script executed:

cat -n src/services/api/grok/__tests__/client.test.ts | head -60

Repository: claude-code-best/claude-code

Length of output: 1659


🏁 Script executed:

cat -n src/services/api/grok/client.ts | head -40

Repository: claude-code-best/claude-code

Length of output: 1526


Tests need to be updated to handle the async signature.

The async conversion of getGrokClient() is correct for resolving the TDZ issue, but the test file calls it synchronously without await. All 6 calls in src/services/api/grok/__tests__/client.test.ts (lines 19, 27, 32, 33, 38, 41) return Promise<OpenAI> instead of the client instance, causing assertions like client.baseURL to fail.

Update the tests to be async functions and await all getGrokClient() calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/grok/client.ts` around lines 15 - 19, Update the tests that
call getGrokClient so they await its async result: make the test functions async
and add await before each getGrokClient() invocation in the client.test.ts tests
(all places where client is assigned from getGrokClient), then use the resolved
client instance for assertions like client.baseURL; ensure all six calls that
currently return Promise<OpenAI> are awaited so assertions operate on the actual
OpenAI instance.
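The required test change can be sketched like this. A stub stands in for the real getGrokClient, and the base URL is illustrative, not taken from the repository.

```typescript
// Stub of the now-async factory, as the tests would see it.
async function getGrokClient(): Promise<{ baseURL: string }> {
  return { baseURL: 'https://api.x.ai/v1' } // illustrative URL
}

// Each test body becomes async and awaits the factory, e.g. in bun:test:
//   test('uses the Grok base URL', async () => {
//     const client = await getGrokClient()
//     expect(client.baseURL).toBe('https://api.x.ai/v1')
//   })
async function exampleAssertion(): Promise<string> {
  const client = await getGrokClient() // without await, this is a Promise
  return client.baseURL
}
```

Forgetting the await leaves `client` as a Promise, so property reads like `client.baseURL` come back undefined, which is exactly how the six failing assertions would manifest.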

Comment on lines +16 to +23
export async function getOpenAIClient(options?: {
  maxRetries?: number
  fetchOverride?: typeof fetch
  source?: string
-}): OpenAI {
+}): Promise<OpenAI> {
+  if (cachedClient) return cachedClient
+
+  const { default: OpenAIClass } = await import('openai')

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify current guard state and call-path usage.
rg -nP --type=ts -C3 '\bcachedClient\b|\bcachedClientPromise\b|getOpenAIClient\s*\(' \
  src/services/api/openai/client.ts src/services/api/openai/index.ts

Repository: claude-code-best/claude-code

Length of output: 2165


🏁 Script executed:

#!/bin/bash
# Get the full getOpenAIClient function and surrounding context
cat -n src/services/api/openai/client.ts | head -50

Repository: claude-code-best/claude-code

Length of output: 1994


🏁 Script executed:

#!/bin/bash
# Check if there are any tests for the client
find . -type f -name "*test*" -o -name "*spec*" | grep -i "openai.*client" | head -20

Repository: claude-code-best/claude-code

Length of output: 54


🏁 Script executed:

#!/bin/bash
# Check how getOpenAIClient is used across the codebase
rg -l "getOpenAIClient" --type=ts

Repository: claude-code-best/claude-code

Length of output: 197


🏁 Script executed:

#!/bin/bash
# Check the test file for concurrent/race condition tests
cat -n src/services/api/openai/__tests__/queryModelOpenAI.test.ts | head -100

Repository: claude-code-best/claude-code

Length of output: 4866


🏁 Script executed:

#!/bin/bash
# Check context around line 293 in index.ts to understand fetchOverride usage
sed -n '280,310p' src/services/api/openai/index.ts

Repository: claude-code-best/claude-code

Length of output: 1422


🏁 Script executed:

#!/bin/bash
# Check if there are other concurrent calls to getOpenAIClient
rg -B2 -A2 "getOpenAIClient" src/services/api/openai/index.ts | head -50

Repository: claude-code-best/claude-code

Length of output: 450


🏁 Script executed:

#!/bin/bash
# Verify the Promise type is correctly used and check for any TypeScript compatibility
rg -A5 "Promise\<OpenAI\>" src/services/api/openai/client.ts

Repository: claude-code-best/claude-code

Length of output: 162


🏁 Script executed:

#!/bin/bash
# Check if there are any concurrency tests or Promise-based tests in the test file
grep -n "Promise\|concurrent\|race\|parallel" src/services/api/openai/__tests__/queryModelOpenAI.test.ts

Repository: claude-code-best/claude-code

Length of output: 54


🏁 Script executed:

#!/bin/bash
# Check for any Promise types in the file
grep -n "Promise" src/services/api/openai/client.ts

Repository: claude-code-best/claude-code

Length of output: 96


🏁 Script executed:

#!/bin/bash
# Verify the fix would work by checking the buildClient arrow function pattern is valid TypeScript
# Look for similar async arrow function patterns in the codebase
rg "const .+ = async \(\)" --type=ts | head -5

Repository: claude-code-best/claude-code

Length of output: 645


Add in-flight promise guard to prevent duplicate client construction during concurrent async initialization.

The current implementation checks cachedClient at line 21, but the await import('openai') at line 23 creates a window where concurrent calls bypass the cache check and each constructs a separate OpenAI instance. This violates the caching guarantee.

Implement in-flight promise deduplication: track a cachedClientPromise and return it if a build is already in progress, ensuring concurrent calls share the same initialization rather than duplicating work.

🔧 Suggested fix
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null

 export async function getOpenAIClient(options?: {
   maxRetries?: number
   fetchOverride?: typeof fetch
   source?: string
 }): Promise<OpenAI> {
-  if (cachedClient) return cachedClient
-
-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.OPENAI_API_KEY || ''
-  const baseURL = process.env.OPENAI_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    ...(baseURL && { baseURL }),
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
-    ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
-
-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
-
-  return client
+  if (!options?.fetchOverride) {
+    if (cachedClient) return cachedClient
+    if (cachedClientPromise) return cachedClientPromise
+  }
+
+  const buildClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.OPENAI_API_KEY || ''
+    const baseURL = process.env.OPENAI_BASE_URL
+
+    return new OpenAIClass({
+      apiKey,
+      ...(baseURL && { baseURL }),
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
+      ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }
+
+  if (options?.fetchOverride) return buildClient()
+
+  cachedClientPromise = buildClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/client.ts` around lines 16 - 23, The getOpenAIClient
function can race on concurrent calls because it checks cachedClient before an
await and may construct multiple OpenAI instances; add an in-flight promise
guard by introducing a module-scoped cachedClientPromise: when getOpenAIClient
is called, if cachedClient exists return it, else if cachedClientPromise exists
return await cachedClientPromise; otherwise set cachedClientPromise to an async
initializer that imports OpenAI (the OpenAIClass), constructs the OpenAI client,
assigns it to cachedClient, then clears cachedClientPromise on success/failure,
and finally return the created client; update getOpenAIClient to use
cachedClientPromise so concurrent calls share the same initialization and
reference the same OpenAI instance.
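The guard from the suggested fix can be exercised in isolation. This stripped-down sketch (no real SDK import; the client is a trivial object) shows that concurrent callers construct exactly one client.

```typescript
type Client = { id: number }

let constructed = 0
let cachedClient: Client | null = null
let cachedClientPromise: Promise<Client> | null = null

async function getClient(): Promise<Client> {
  if (cachedClient) return cachedClient
  // A build is already in flight: join it instead of starting another.
  if (cachedClientPromise) return cachedClientPromise

  cachedClientPromise = (async () => {
    await Promise.resolve() // stands in for `await import('openai')`
    return { id: ++constructed }
  })()
  try {
    cachedClient = await cachedClientPromise
    return cachedClient
  } finally {
    cachedClientPromise = null // clear on success or failure
  }
}
```

The key detail is that `cachedClientPromise` is assigned synchronously, before the first `await`, so any caller arriving during the import window sees the in-flight promise rather than starting its own construction.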

@claude-code-best (Owner) commented:

If you want to exclude openai from the bundle, you also need to change package.json so that it becomes a runtime dependency (dependencies).
