Add a dedicated OpenAI-compatible LLM adapter #1895
jimmyzhuu wants to merge 10 commits into Zipstack:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough
This PR adds support for OpenAI-compatible LLM providers (e.g., via LiteLLM) by introducing a new adapter type, parameter validation with model normalization, improved usage-token handling, and corresponding tests and configuration.
Changes: OpenAI-Compatible LLM Adapter Feature
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
| Filename | Overview |
|---|---|
| unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py | New adapter class; cleanly inherits OpenAICompatibleLLMParameters and BaseAdapter, overrides SCHEMA_PATH, and implements all required abstract methods. |
| unstract/sdk1/src/unstract/sdk1/adapters/base1.py | Adds OpenAICompatibleLLMParameters and a SCHEMA_PATH override mechanism; validate() mutates the caller's dict (flagged in prior thread) unlike OpenAILLMParameters which copies first. |
| unstract/sdk1/src/unstract/sdk1/llm.py | _record_usage now prefers provider-reported prompt_tokens and falls back to token_counter estimation with exception handling; type annotation on the usage parameter is slightly inaccurate for the None-value case. |
| unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/openai_compatible.json | JSON schema for the new adapter; `model` is not in `required` (flagged in prior thread); `api_key` correctly typed as string. |
| unstract/sdk1/tests/test_openai_compatible_adapter.py | Comprehensive test coverage for registration, model normalization, schema loading, and _record_usage behavior; magic module stub addressed with patch.dict context manager. |
| unstract/sdk1/src/unstract/sdk1/adapters/llm1/__init__.py | Correctly registers OpenAICompatibleLLMAdapter alongside existing adapters and adds it to `__all__`. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant UI
    participant OpenAICompatibleLLMAdapter
    participant OpenAICompatibleLLMParameters
    participant LiteLLM
    participant LLM._record_usage
    participant Audit
    UI->>OpenAICompatibleLLMAdapter: validate(adapter_metadata)
    OpenAICompatibleLLMAdapter->>OpenAICompatibleLLMParameters: validate_model(metadata)
    Note over OpenAICompatibleLLMParameters: prefix with "custom_openai/"
    OpenAICompatibleLLMParameters-->>OpenAICompatibleLLMAdapter: "custom_openai/<model>"
    OpenAICompatibleLLMAdapter->>OpenAICompatibleLLMParameters: Pydantic model_dump()
    OpenAICompatibleLLMParameters-->>OpenAICompatibleLLMAdapter: validated dict
    UI->>LiteLLM: litellm.completion(model="custom_openai/...", api_base=...)
    LiteLLM-->>LLM._record_usage: response with usage (may include prompt_tokens)
    alt usage has prompt_tokens
        LLM._record_usage->>LLM._record_usage: use reported value
    else usage missing prompt_tokens
        LLM._record_usage->>LiteLLM: token_counter(model, messages)
        alt token_counter succeeds
            LiteLLM-->>LLM._record_usage: estimated count
        else token_counter fails
            LLM._record_usage->>LLM._record_usage: warn + use 0
        end
    end
    LLM._record_usage->>Audit: push_usage_data(prompt_tokens, ...)
```
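Concretely, the happy path in the diagram can be sketched against LiteLLM's public API. The `custom_openai/` prefixing mirrors the `validate_model` step shown above; the endpoint, key, and model values below are made up:

```python
import litellm

def normalize_model(model: str) -> str:
    # Mirrors the adapter's validate_model step: route through LiteLLM's
    # custom_openai provider unless the prefix is already present.
    if model.startswith("custom_openai/"):
        return model
    return f"custom_openai/{model}"

# Hypothetical metadata as it might arrive from the UI form.
metadata = {
    "model": "my-gateway-model",
    "api_base": "https://llm.internal.example.com/v1",
    "api_key": "sk-example",
}

response = litellm.completion(
    model=normalize_model(metadata["model"]),
    messages=[{"role": "user", "content": "ping"}],
    api_base=metadata["api_base"],
    api_key=metadata["api_key"],
)
# Some gateways omit usage.prompt_tokens, which is what the
# _record_usage fallback discussed below is about.
usage = getattr(response, "usage", None)
```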
Prompt To Fix All With AI
Fix the following 1 code review issue. Work through them one at a time, proposing concise fixes.
---
### Issue 1 of 1
unstract/sdk1/src/unstract/sdk1/llm.py:648-654
The `usage` parameter is annotated as `Mapping[str, int] | None`, but the new code explicitly handles `None` values within the mapping (e.g. `{"prompt_tokens": None, ...}`). The test `test_record_usage_uses_estimated_prompt_tokens_when_usage_has_none` demonstrates this real-world case. A stricter type checker would reject callers passing a `Mapping[str, int | None]` under this annotation.
```suggestion
def _record_usage(
self,
model: str,
messages: list[dict[str, str]],
usage: Mapping[str, int | None] | None,
llm_api: str,
) -> None:
```
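To see why the wider value type matters to a strict checker, here is a minimal, self-contained illustration; the function names are hypothetical, and only the annotations mirror the issue above:

```python
from collections.abc import Mapping

def takes_strict(usage: Mapping[str, int] | None) -> None: ...
def takes_wide(usage: Mapping[str, int | None] | None) -> None: ...

# litellm-style payloads can carry None for individual token fields.
payload: dict[str, int | None] = {"prompt_tokens": None, "completion_tokens": 12}

takes_wide(payload)    # accepted: value type matches exactly
takes_strict(payload)  # a strict checker (e.g. mypy) rejects this call,
                       # because int | None is not assignable to int
```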
Reviews (6): Last reviewed commit: "Refine OpenAI compatible adapter schema ..."
Actionable comments posted: 1
🧹 Nitpick comments (1)
unstract/sdk1/src/unstract/sdk1/llm.py (1)
542-557: Avoid unconditional token estimation when usage already includes prompt tokens.

This currently computes `token_counter()` even when provider usage already has prompt tokens, which can create repeated warnings/noise for unmapped models without improving recorded usage.

♻️ Proposed refinement

```diff
-    try:
-        prompt_tokens = token_counter(model=model, messages=messages)
-    except Exception as e:
-        prompt_tokens = 0
-        logger.warning(
-            "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens: %s",
-            model,
-            llm_api,
-            e,
-        )
     usage_data: Mapping[str, int] = usage or {}
+    prompt_tokens = usage_data.get("prompt_tokens")
+    if prompt_tokens is None:
+        try:
+            prompt_tokens = token_counter(model=model, messages=messages)
+        except Exception as e:
+            prompt_tokens = 0
+            logger.warning(
+                "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens: %s",
+                model,
+                llm_api,
+                e,
+            )
     all_tokens = TokenCounterCompat(
-        prompt_tokens=usage_data.get("prompt_tokens", 0),
+        prompt_tokens=usage_data.get("prompt_tokens", prompt_tokens or 0),
         completion_tokens=usage_data.get("completion_tokens", 0),
         total_tokens=usage_data.get("total_tokens", 0),
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 542 - 557, The code unconditionally calls token_counter(model, messages) even when usage already contains prompt token counts; change the logic in the block around token_counter and TokenCounterCompat so you first check usage (usage_data = usage or {}) and if usage_data.get("prompt_tokens") is present use that value for prompt_tokens instead of calling token_counter; only call token_counter(model, messages) inside the try/except when usage_data lacks prompt_tokens, preserving the existing exception handling and the logger.warning path, and then construct TokenCounterCompat using the values from usage_data (falling back to the estimated prompt_tokens when used).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json`:
- Around line 15-20: The schema for the "api_key" property currently only allows
a string which fails when runtime metadata contains null; update the "api_key"
entry in the JSON schema (the "api_key" property in custom_openai.json) to
permit null values by changing its type to accept both string and null (or add a
nullable:true equivalent) so stored configs with null pass validation and
editing flows.
---
Nitpick comments:
In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Around line 542-557: The code unconditionally calls token_counter(model,
messages) even when usage already contains prompt token counts; change the logic
in the block around token_counter and TokenCounterCompat so you first check
usage (usage_data = usage or {}) and if usage_data.get("prompt_tokens") is
present use that value for prompt_tokens instead of calling token_counter; only
call token_counter(model, messages) inside the try/except when usage_data lacks
prompt_tokens, preserving the existing exception handling and the logger.warning
path, and then construct TokenCounterCompat using the values from usage_data
(falling back to the estimated prompt_tokens when used).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bf841637-54b7-4802-9156-7f56e899ca54
📒 Files selected for processing (7)
- README.md
- unstract/sdk1/src/unstract/sdk1/adapters/base1.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/__init__.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json
- unstract/sdk1/src/unstract/sdk1/llm.py
- unstract/sdk1/tests/test_openai_compatible_adapter.py
Addressed the review follow-ups.
Validation re-run:
Gentle follow-up on this PR in case it slipped through the queue. When someone has bandwidth, I would really appreciate a review. Happy to make any follow-up changes quickly. Thanks!
```python
prompt_tokens = usage_data.get("prompt_tokens")
if prompt_tokens is None:
    try:
        prompt_tokens = token_counter(model=model, messages=messages)
    except Exception as e:
        prompt_tokens = 0
        logger.warning(
            "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens: %s",
            model,
            llm_api,
            e,
        )
```
@pk-zipstack @johnyrahul is this a safe change?
Kept this scoped to usage accounting only. It still uses provider-reported prompt tokens when available, only estimates when they are missing, and the fallback paths are covered by tests now.
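For reference, a minimal sketch of what one of those fallback tests can look like. The `_load_llm_module` helper and the `__new__` trick mirror this PR's test file per the review comments, but the exact patch targets and assertions here are assumptions:

```python
from unittest.mock import patch

def test_estimation_failure_records_zero_prompt_tokens():
    llm_module = _load_llm_module()  # helper defined in this PR's test file
    llm = llm_module.LLM.__new__(llm_module.LLM)  # bypass __init__, as the tests do
    with (
        patch.object(llm_module, "token_counter", side_effect=ValueError("unmapped")),
        patch.object(llm_module, "Audit") as mock_audit,
        patch.object(llm_module.logger, "warning") as mock_warning,
    ):
        llm._record_usage(
            model="custom_openai/gateway-model",
            messages=[{"role": "user", "content": "hi"}],
            usage=None,
            llm_api="complete",
        )
    mock_warning.assert_called_once()
    mock_audit.return_value.push_usage_data.assert_called_once()
```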
jaseemjaskp left a comment
PR Review Toolkit — consolidated findings
Automated review aggregating six specialist agents (code-reviewer, code-simplifier, silent-failure-hunter, type-design-analyzer, pr-test-analyzer, comment-analyzer). No blocking defects; the adapter follows existing sibling-adapter conventions and the scope is appropriately narrow.
High-signal items worth addressing before merge
- `llm.py:556` — redundant/confusing fallback: `prompt_tokens` is already resolved at line 543 with an explicit estimation branch, so the `usage_data.get("prompt_tokens", prompt_tokens or 0)` expression double-handles the default and, worse, silently coerces an explicit `None` from `usage_data` to 0 without logging.
- `llm.py:547` — broad `except Exception` combined with a warning that says "failed to estimate" but not "recording 0 tokens" means billing/audit can silently under-report. Narrow the exception and either use `logger.exception` or rewrite the message to name the consequence.
- `base1.py:232` / schema `api_base` — the Pydantic type is plain `str`; URL shape lives only in the JSON schema, so direct construction accepts garbage. Consider `HttpUrl` / a `field_validator` (see the Pydantic sketch after this list).
- `base1.py:239` — prefix logic is only invoked by the `validate` classmethod. `AzureOpenAILLMParameters` uses `@model_validator(mode="before")` which cannot be bypassed; mirroring that tightens the invariant.
- `custom_openai.json` — `api_base` is listed as required yet ships with a placeholder default URL, which lets users save an unchanged form and hit 404s at request time instead of validation errors. Vendor-specific examples (`ERNIE-4.0-8K` from Baidu Qianfan, `qianfan.baidubce.com`) are prone to rot and should be generic.
- `test_openai_compatible_adapter.py` — the "tolerates unmapped models" test only asserts `push_usage_data.assert_called_once()`; it does not verify `prompt_tokens=0` was actually pushed, so a regression silently pushing `None` or crashing inside `TokenCounterCompat` would still pass.
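A sketch of the two Pydantic tightenings suggested above, using `HttpUrl` plus a before-validator in the `AzureOpenAILLMParameters` style; the field set follows this PR's schema, but the exact class body is an assumption:

```python
from typing import Any

from pydantic import BaseModel, HttpUrl, model_validator

class OpenAICompatibleLLMParameters(BaseModel):
    api_base: HttpUrl  # rejects non-URL strings at construction, not only in the JSON schema
    model: str
    api_key: str = ""

    @model_validator(mode="before")
    @classmethod
    def _prefix_model(cls, data: Any) -> Any:
        # Applied on every construction path, so the custom_openai/ prefix
        # cannot be bypassed by skipping the validate() classmethod.
        if isinstance(data, dict):
            model = data.get("model", "")
            if model and not model.startswith("custom_openai/"):
                data = {**data, "model": f"custom_openai/{model}"}
        return data
```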
Suggested follow-ups (non-blocking)
- Add tests for: `api_base` missing (Pydantic ValidationError), `usage=None`, `usage["prompt_tokens"]=None`, and the success branch of `token_counter`.
- Drop the unused `@lru_cache` on `_load_llm_module` and the dead `_load_llm_class`.
- Replace the tautological `get_description()` / metadata description with user-facing copy that distinguishes this adapter from `OpenAILLMAdapter`.
Inline comments below flag each item at its exact line with a concrete fix suggestion.
Actionable comments posted: 1
🧹 Nitpick comments (5)
unstract/sdk1/tests/test_openai_compatible_adapter.py (3)
102-192: LGTM — good coverage of the three `_record_usage` branches.

Tests exercise: (a) provider-supplied `prompt_tokens` bypasses `token_counter`, (b) `token_counter` raising falls back to 0 with a warning, and (c) `prompt_tokens=None` triggers estimation. The use of `__new__` to bypass `__init__` and the targeted `patch.object(llm_module, ...)` are appropriate here. Nice to see the warning-message assertion on line 162 pinning the audit-visible text.

One small suggestion: assert `mock_warning` is called exactly once and with `model`/`llm_api` substituted into the format string, to catch regressions that accidentally change the log signature.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/tests/test_openai_compatible_adapter.py` around lines 102 - 192, Update the test_record_usage_tolerates_unmapped_models_without_prompt_tokens test to assert the warning logger was called exactly once and that the warning message includes the model ("custom_openai/gateway-model") and llm_api ("complete") values; locate the test function and the mock_warning (patched via patch.object(llm_module.logger, "warning")) and after calling llm._record_usage add assertions that mock_warning.assert_called_once() and that the call_args contains both the model and llm_api strings in the formatted warning message to catch signature regressions.
18-31: Minor: `lru_cache` around `import_module` is largely redundant.

`sys.modules` already caches modules after first import, so the `lru_cache(maxsize=1)` only saves the `patch.dict` context-manager overhead. Leaving it is harmless, but on second and subsequent calls the `magic` stub will not be re-installed (because the cached branch returns early), so any test that newly triggers `import magic` after the first call would see the real module. Not a problem today, but worth documenting with a short comment (sketched after the AI prompt below) to prevent surprise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/tests/test_openai_compatible_adapter.py` around lines 18 - 31, The `@lru_cache` on _load_llm_module() prevents the patch.dict stub for "magic" from being re-applied on subsequent calls, which can lead to surprising behavior if tests later import the real magic module; either remove the `@lru_cache` decorator or (preferred) keep it but add a brief comment inside _load_llm_module explaining that sys.modules already caches imports and that the cached result means the "magic" stub will not be re-installed on later calls so tests should call this once or manage stubbing themselves — reference the _load_llm_module function and the patch.dict usage when adding the comment.
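A sketch of the helper with the suggested comment in place; the module path and stub shape follow what the reviews describe, and are assumptions here:

```python
import sys
from functools import lru_cache
from importlib import import_module
from unittest.mock import MagicMock, patch

@lru_cache(maxsize=1)
def _load_llm_module():
    # sys.modules caches imports, and lru_cache short-circuits this body after
    # the first call, so the "magic" stub below is installed exactly once.
    # Any test that newly triggers `import magic` later sees the real module.
    with patch.dict(sys.modules, {"magic": MagicMock()}):
        return import_module("unstract.sdk1.llm")
```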
34-35: Dead helper: `_load_llm_class` is never called.

Every test uses `_load_llm_module().LLM` directly (e.g., lines 104, 135, 167). Consider removing `_load_llm_class` or using it in place of the inline `llm_module.LLM` lookups for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/tests/test_openai_compatible_adapter.py` around lines 34 - 35, The helper function _load_llm_class is unused (dead code); either remove _load_llm_class entirely or replace direct usages of _load_llm_module().LLM in tests (e.g., the inline lookups at places that call llm_module.LLM) with calls to _load_llm_class() for consistency. Locate the definition of _load_llm_class and the test files referencing _load_llm_module().LLM and either delete the unused _load_llm_class function or update those tests to call _load_llm_class() instead, ensuring imports and type annotations still match.

unstract/sdk1/src/unstract/sdk1/adapters/base1.py (1)
234-242: Minor: `validate()` mutates the caller's dict.

Lines 236 and 241 write back into `adapter_metadata` (same pattern as `OpenAILLMParameters`, but unlike `VertexAILLMParameters` which copies first via `{**adapter_metadata}`). Given `LLM.complete()` calls `self.adapter.validate({**self.kwargs, **kwargs})` (a fresh dict), there's no current bug; but if a future caller passes a long-lived dict, the `api_key` would be mutated in place and the model prefix would double-rewrite on a second call (the `startswith("custom_openai/")` guard in `validate_model` mitigates the latter).

Optional: copy first for defensive hygiene, matching the `VertexAILLMParameters.validate()` style, as in the sketch below.
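A minimal sketch of that copy-first pattern; the exact body of `validate()` here is an assumption, not the PR's code:

```python
@classmethod
def validate(cls, adapter_metadata: dict) -> dict:
    # Shallow-copy first so the caller's dict is never mutated in place,
    # matching the VertexAILLMParameters.validate() style.
    metadata = {**adapter_metadata}
    metadata["model"] = cls.validate_model(metadata)
    metadata["api_key"] = metadata.get("api_key") or ""
    return cls(**metadata).model_dump()
```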
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py` around lines 234 - 242, The validate() method currently mutates the incoming adapter_metadata dict (it writes to adapter_metadata["model"] and adapter_metadata["api_key"]); to avoid in-place side effects make a shallow copy first (e.g., metadata = {**adapter_metadata}) and perform all modifications against that copy before passing it to OpenAICompatibleLLMParameters.validate_model and constructing OpenAICompatibleLLMParameters(**metadata).model_dump(); keep references to the same symbols (validate, OpenAICompatibleLLMParameters.validate_model, OpenAICompatibleLLMParameters, adapter_metadata) so the change is local and preserves existing behavior while preventing caller dict mutation.

unstract/sdk1/src/unstract/sdk1/llm.py (1)
557-557: Minor: `prompt_tokens or 0` also zeroes out legitimate `0` from provider.

If a provider ever reports `usage.prompt_tokens == 0` (unusual, but possible for zero-content requests or certain gateways), the truthiness check collapses it the same as `None`. Given the `prompt_tokens is None` branch already assigns an int (or 0 on exception), this `or 0` is only needed to satisfy the type checker. A more precise form:

Proposed tweak

```diff
-    all_tokens = TokenCounterCompat(
-        prompt_tokens=prompt_tokens or 0,
+    all_tokens = TokenCounterCompat(
+        prompt_tokens=prompt_tokens if prompt_tokens is not None else 0,
         completion_tokens=usage_data.get("completion_tokens", 0),
         total_tokens=usage_data.get("total_tokens", 0),
     )
```

Low-impact since 0 vs None ends up the same in the audit row, but semantically cleaner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/src/unstract/sdk1/llm.py` at line 557, Replace the truthiness fallback that zeroes out legitimate zero values: instead of using "prompt_tokens=prompt_tokens or 0" keep the explicit None-check so only None becomes 0 (e.g., use a conditional expression that assigns prompt_tokens if prompt_tokens is not None else 0). Locate the occurrence of the "prompt_tokens=prompt_tokens or 0" assignment and change it to an explicit None-check for the variable prompt_tokens so a reported 0 remains 0.
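A tiny, self-contained illustration of the `or 0` versus `is not None` distinction discussed in the nitpick above (the helper names are hypothetical):

```python
def coerce_truthy(v: int | None) -> int:
    return v or 0  # collapses both None and a legitimate 0

def coerce_none_only(v: int | None) -> int:
    return v if v is not None else 0  # only None becomes 0

# Today the results coincide, so the difference is purely semantic:
assert coerce_truthy(0) == coerce_none_only(0) == 0
assert coerce_truthy(None) == coerce_none_only(None) == 0
# They diverge only if a later change treats "reported 0" and "missing"
# differently; the explicit None-check keeps that distinction available.
```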
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Around line 543-560: The current catch in the prompt token estimation around
token_counter (used when building TokenCounterCompat) silently sets
prompt_tokens=0; update this to (1) narrow the except to only expected errors
from the estimator (e.g., KeyError/ValueError and litellm-specific exceptions
raised by token_counter) so unexpected errors still propagate, and (2) add a
sentinel field to the usage payload (e.g., prompt_tokens_source or
estimation_failed) before calling Audit().push_usage_data to mark that prompt
tokens were estimated/failed, and/or increment an ops metric/counter when the
fallback path occurs; reference the token_counter call, TokenCounterCompat
construction, Audit().push_usage_data, and the existing logger to emit a clear
warning and metric.
---
Nitpick comments:
In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py`:
- Around line 234-242: The validate() method currently mutates the incoming
adapter_metadata dict (it writes to adapter_metadata["model"] and
adapter_metadata["api_key"]); to avoid in-place side effects make a shallow copy
first (e.g., metadata = {**adapter_metadata}) and perform all modifications
against that copy before passing it to
OpenAICompatibleLLMParameters.validate_model and constructing
OpenAICompatibleLLMParameters(**metadata).model_dump(); keep references to the
same symbols (validate, OpenAICompatibleLLMParameters.validate_model,
OpenAICompatibleLLMParameters, adapter_metadata) so the change is local and
preserves existing behavior while preventing caller dict mutation.
In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Line 557: Replace the truthiness fallback that zeroes out legitimate zero
values: instead of using "prompt_tokens=prompt_tokens or 0" keep the explicit
None-check so only None becomes 0 (e.g., use a conditional expression that
assigns prompt_tokens if prompt_tokens is not None else 0). Locate the
occurrence of the "prompt_tokens=prompt_tokens or 0" assignment and change it to
an explicit None-check for the variable prompt_tokens so a reported 0 remains 0.
In `@unstract/sdk1/tests/test_openai_compatible_adapter.py`:
- Around line 102-192: Update the
test_record_usage_tolerates_unmapped_models_without_prompt_tokens test to assert
the warning logger was called exactly once and that the warning message includes
the model ("custom_openai/gateway-model") and llm_api ("complete") values;
locate the test function and the mock_warning (patched via
patch.object(llm_module.logger, "warning")) and after calling llm._record_usage
add assertions that mock_warning.assert_called_once() and that the call_args
contains both the model and llm_api strings in the formatted warning message to
catch signature regressions.
- Around line 18-31: The `@lru_cache` on _load_llm_module() prevents the
patch.dict stub for "magic" from being re-applied on subsequent calls, which can
lead to surprising behavior if tests later import the real magic module; either
remove the `@lru_cache` decorator or (preferred) keep it but add a brief comment
inside _load_llm_module explaining that sys.modules already caches imports and
that the cached result means the "magic" stub will not be re-installed on later
calls so tests should call this once or manage stubbing themselves — reference
the _load_llm_module function and the patch.dict usage when adding the comment.
- Around line 34-35: The helper function _load_llm_class is unused (dead code);
either remove _load_llm_class entirely or replace direct usages of
_load_llm_module().LLM in tests (e.g., the inline lookups at places that call
llm_module.LLM) with calls to _load_llm_class() for consistency. Locate the
definition of _load_llm_class and the test files referencing
_load_llm_module().LLM and either delete the unused _load_llm_class function or
update those tests to call _load_llm_class() instead, ensuring imports and type
annotations still match.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bb92694d-b745-40e9-8bd9-0bc1fa3628b1
⛔ Files ignored due to path filters (1)
frontend/public/icons/adapter-icons/OpenAICompatible.png is excluded by `!**/*.png`
📒 Files selected for processing (5)
- unstract/sdk1/src/unstract/sdk1/adapters/base1.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json
- unstract/sdk1/src/unstract/sdk1/llm.py
- unstract/sdk1/tests/test_openai_compatible_adapter.py
✅ Files skipped from review due to trivial changes (2)
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json
```diff
         prompt_tokens = usage_data.get("prompt_tokens")
         if prompt_tokens is None:
             try:
                 prompt_tokens = token_counter(model=model, messages=messages)
             except Exception as e:
                 prompt_tokens = 0
                 logger.warning(
                     "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens; "
                     "recording 0 prompt tokens for usage audit: %s",
                     model,
                     llm_api,
                     e,
                 )
         all_tokens = TokenCounterCompat(
-            prompt_tokens=usage_data.get("prompt_tokens", 0),
+            prompt_tokens=prompt_tokens or 0,
             completion_tokens=usage_data.get("completion_tokens", 0),
             total_tokens=usage_data.get("total_tokens", 0),
         )
```
Silent zero-token recording risks corrupting billing/usage audit data.
When token_counter raises (e.g., unmapped custom models in LiteLLM's metadata), the code records prompt_tokens=0 into Audit().push_usage_data. Per unstract/sdk1/src/unstract/sdk1/utils/common.py:114-145 and unstract/sdk1/src/unstract/sdk1/audit.py:85-98, that zero flows directly to the platform's usage record with no sentinel/flag distinguishing "unknown" from "actually zero." For long-running workloads against an OpenAI-compatible endpoint that doesn't return usage.prompt_tokens, this could silently understate prompt-token consumption in cost attribution and analytics.
Consider one of:
- Tagging the audit payload with an `estimation_failed`/`prompt_tokens_source` flag so downstream consumers can distinguish missing data from genuinely zero usage (a sketch follows below).
- Narrowing the `except` (e.g., `except (KeyError, ValueError, litellm.exceptions.*)`) so truly unexpected errors still propagate instead of being swallowed.
- Emitting a metric/counter when this fallback triggers so ops can detect silent drift.
A warning log alone is easy to miss in aggregated usage reports. This answers the question raised in the prior review thread on this range.
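As a sketch of the first option: `token_counter` is LiteLLM's public estimator and `Audit().push_usage_data` is this PR's sink, but the source-tagging helper, its return shape, and the extra audit field are hypothetical:

```python
from typing import Any

def resolve_prompt_tokens(
    usage: dict[str, Any] | None,
    model: str,
    messages: list[dict[str, str]],
) -> tuple[int, str]:
    """Return (prompt_tokens, source) so the audit row can distinguish
    'provider', 'estimated', and 'estimation_failed' values."""
    reported = (usage or {}).get("prompt_tokens")
    if reported is not None:
        return reported, "provider"
    try:
        from litellm import token_counter
        return token_counter(model=model, messages=messages), "estimated"
    except Exception:
        return 0, "estimation_failed"
```

The returned source string could then travel with the payload handed to `Audit().push_usage_data`, assuming the platform's usage schema can carry one extra field.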
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 543 - 560, The current
catch in the prompt token estimation around token_counter (used when building
TokenCounterCompat) silently sets prompt_tokens=0; update this to (1) narrow the
except to only expected errors from the estimator (e.g., KeyError/ValueError and
litellm-specific exceptions raised by token_counter) so unexpected errors still
propagate, and (2) add a sentinel field to the usage payload (e.g.,
prompt_tokens_source or estimation_failed) before calling
Audit().push_usage_data to mark that prompt tokens were estimated/failed, and/or
increment an ops metric/counter when the fallback path occurs; reference the
token_counter call, TokenCounterCompat construction, Audit().push_usage_data,
and the existing logger to emit a clear warning and metric.
Hi @jimmyzhuu - sorry we let this sit so long without a review, that's on us. The change looks reasonable in scope and the validation looks solid. If you're still interested, we'd be happy to have you reopen it and we'll get a maintainer on it this week. Either way, thanks for the careful work and the patience.
Absolutely — I’d be happy to work on this. Thanks for the suggestion!
chandrasekharan-zipstack left a comment
LGTM for the most part, @pk-zipstack please help take a look as well
| "model": { | ||
| "type": "string", | ||
| "title": "Model", | ||
| "default": "gpt-4o-mini", |
There was a problem hiding this comment.
| "default": "gpt-4o-mini", |
NIT: Consider removing this default since it might eventually get deprecated
```diff
@@ -0,0 +1,61 @@
+{
+  "title": "OpenAI Compatible LLM",
```
@jimmyzhuu This could be just OpenAI Compatible.
@jimmyzhuu Might be better to rename the file also to openai_compatible.json.
@hari-kuriakose fixed. Renamed the schema file to openai_compatible.json and updated the adapter schema loading path accordingly.
jaseemjaskp left a comment
LGTM. After running the PR review toolkit (code-reviewer, comment-analyzer, pr-test-analyzer, silent-failure-hunter, type-design-analyzer, code-simplifier), all clearly important findings (correctness, billing/audit data integrity, broken contracts) were already raised by prior reviewers (greptile, coderabbitai, jaseemjaskp) and addressed by the author in the latest commits. Remaining items from agent passes are minor or NIT-level (e.g. tightening `api_base`/`model` pydantic validators, schema title/filename naming, end-to-end `LLM` init coverage), so I'm not posting them as inline comments. Resolved my three previously-posted threads that the current code addresses (DESCRIPTION constant, dedicated icon, blank `api_key` coercion).
@jimmyzhuu The one thing I'd note is that the exception-swallowing path now applies to every adapter, not just custom_openai. I think that's the right tradeoff (a successful LLM call shouldn't fail at the billing step), but flagging it for the merge commit.
@athul-rs Updated the PR description.



Summary

This PR adds a dedicated `OpenAI Compatible` LLM adapter for OpenAI-style chat completion endpoints that are not the official OpenAI service.

The implementation is intentionally small in scope:
- New `OpenAI Compatible` LLM adapter backed by LiteLLM's `custom_openai` path
- Existing `OpenAI` adapter unchanged

Why

Users may already have access to OpenAI-compatible endpoints behind a private gateway or third-party provider, but the current `OpenAI` adapter is specifically shaped around official OpenAI semantics. Using a separate adapter keeps those semantics explicit and avoids broadening the meaning of the existing `OpenAI` adapter.

Refs #1894
Refs #856
Refs #1443

Scope

This PR is limited to the new adapter; existing `OpenAI` adapter behavior is untouched.

Notes

`LLM._record_usage` now prefers provider-reported `prompt_tokens` when they are present in the usage payload.

If `prompt_tokens` are missing, `_record_usage` still falls back to LiteLLM token estimation. If that estimation raises, it now logs a warning and records `0` prompt tokens for usage audit instead of bubbling the exception up after a successful LLM call.

This behavior change is in the shared `_record_usage` path, so it applies to every SDK1 LLM adapter that uses it, not just `custom_openai`. This keeps successful LLM calls from failing at the usage-audit / billing step while preserving the current pricing semantics in this PR.

Validation
- `UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run pytest tests/test_openai_compatible_adapter.py`
- `UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run ruff check src/unstract/sdk1/adapters/base1.py src/unstract/sdk1/adapters/llm1/__init__.py src/unstract/sdk1/adapters/llm1/openai_compatible.py src/unstract/sdk1/llm.py tests/test_openai_compatible_adapter.py`