
Conversation

iceljc (Collaborator) commented Dec 19, 2025

PR Type

Enhancement


Description

  • Refine reasoning model settings with optional temperature and parameters support

  • Inject conversation state service into ChatCompletionProvider for cleaner dependency management

  • Extract reasoning parsing logic into dedicated method for better maintainability

  • Expand reasoning effort level options to include "minimal", "none", and "xhigh"

  • Add gpt-5 model variants with configurable reasoning parameters to settings


Diagram Walkthrough

flowchart LR
  A["ReasoningSetting"] -->|"Add Temperature nullable<br/>and Parameters dict"| B["Enhanced Settings"]
  C["ChatCompletionProvider"] -->|"Inject IConversationStateService"| D["Cleaner Dependencies"]
  E["ParseReasoningEffortLevel"] -->|"Extract to ParseReasoning method"| F["Better Code Organization"]
  G["Effort Levels"] -->|"Add minimal, none, xhigh"| H["Extended Options"]
  I["appsettings.json"] -->|"Add gpt-5 variants"| J["Model Configuration"]

File Walkthrough

Relevant files
Enhancement
LlmModelSetting.cs
Make reasoning temperature optional and add parameters support

src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs

  • Changed Temperature property from required float with default 1.0f to
    nullable float?
  • Added Parameters dictionary property to support flexible model
    parameter configuration
  • Enables more flexible reasoning model settings without hardcoded
    defaults
+2/-1     
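
For orientation, the reshaped setting looks roughly like this (reconstructed from the diff quoted in the review below; ModelParamSetting is a simplified stand-in showing only the Default value the provider reads):

public class ReasoningSetting
{
    // Previously a required float defaulting to 1.0f; now optional
    public float? Temperature { get; set; }

    // Existing string-based default effort level (e.g. "low", "medium", "high")
    public string? EffortLevel { get; set; }

    // New: flexible per-model parameters, e.g. an "EffortLevel" entry with its own default
    public Dictionary<string, ModelParamSetting>? Parameters { get; set; }
}

// Simplified stand-in; the real class may carry additional fields
public class ModelParamSetting
{
    public string? Default { get; set; }
}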
ChatCompletionProvider.cs
Refactor reasoning parsing and inject conversation state service

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs

  • Injected IConversationStateService as constructor dependency instead
    of resolving it inline
  • Extracted reasoning parsing logic into new ParseReasoning method that
    handles temperature and effort level
  • Updated ParseReasoningEffortLevel to support additional effort levels:
    "minimal", "none", "xhigh"
  • Replaced inline state resolution with injected _state field throughout
    the class
  • Improved code organization and testability through dependency
    injection
+61/-17 
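
A minimal sketch of the effort-level resolution order that ParseReasoning introduces, using the ReasoningSetting shape sketched above and plain strings in place of the state service and agent config (the actual method in ChatCompletionProvider has a different signature and also returns the reasoning temperature):

// Precedence: conversation state > agent LlmConfig > Parameters["EffortLevel"].Default > settings.EffortLevel
static string? ResolveEffortLevel(string? stateLevel, string? agentLevel, ReasoningSetting? settings)
{
    var defaultLevel = settings?.EffortLevel;

    if (settings?.Parameters != null
        && settings.Parameters.TryGetValue("EffortLevel", out var param)
        && !string.IsNullOrEmpty(param?.Default))
    {
        defaultLevel = param.Default;
    }

    return !string.IsNullOrEmpty(stateLevel) ? stateLevel
         : !string.IsNullOrEmpty(agentLevel) ? agentLevel
         : defaultLevel;
}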
Configuration changes
appsettings.json
Add gpt-5 model variants with reasoning configurations     

src/WebStarter/appsettings.json

  • Fixed whitespace formatting in ImageInputCost property
  • Added three new gpt-5 model configurations: gpt-5, gpt-5.1, and
    gpt-5.2
  • Each gpt-5 variant includes reasoning settings with temperature and
    configurable effort level parameters
  • gpt-5 supports effort levels: minimal, low, medium, high
  • gpt-5.1 supports effort levels: none, low, medium, high
  • gpt-5.2 supports effort levels: none, low, medium, high, xhigh
+88/-1   
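
As a rough illustration of what such an entry binds to at runtime (a hypothetical in-code equivalent built on the ReasoningSetting/ModelParamSetting sketch above; the actual values in the PR's appsettings.json are not reproduced here):

// Hypothetical equivalent of a gpt-5.2 reasoning block after configuration binding
var gpt52Reasoning = new ReasoningSetting
{
    Temperature = 1.0f, // example value only
    Parameters = new Dictionary<string, ModelParamSetting>
    {
        // Per the walkthrough, gpt-5.2 allows none, low, medium, high, xhigh; "none" is an illustrative default
        ["EffortLevel"] = new ModelParamSetting { Default = "none" }
    }
};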

qodo-code-review bot commented Dec 19, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Hardcoded API key

Description: A non-empty ApiKey value ("o") is committed in appsettings.json, which may represent an
accidentally committed credential or encourage storing secrets in source control; API keys
should be sourced from a secret manager or environment variables instead.
appsettings.json [510-516]

Referred Code
{
  "Id": "gpt-5",
  "Name": "gpt-5.1",
  "Version": "gpt-5.1",
  "ApiKey": "o",
  "Type": "chat",
  "MultiModal": true,
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢 Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed


🔴 Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status: Unsafe parsing: The new code uses float.Parse(_state.GetState("temperature", "0.0")), which can throw on invalid state values and has no try-parse fallback or error handling.

Referred Code
float? temperature = float.Parse(_state.GetState("temperature", "0.0"));
var (reasoningTemp, reasoningEffortLevel) = ParseReasoning(settings?.Reasoning, agent);
if (reasoningTemp.HasValue)
{
    temperature = reasoningTemp.Value;
}


Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Hardcoded API key: A non-empty ApiKey value is committed in configuration, which is unsafe secret handling and risks credential exposure.

Referred Code
"Id": "gpt-5",
"Name": "gpt-5.1",
"Version": "gpt-5.1",
"ApiKey": "o",
"Type": "chat",
"MultiModal": true,


Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

qodo-code-review bot commented Dec 19, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Ensure model IDs are unique

Ensure model Id values are unique in appsettings.json by updating the Id for
gpt-5.1 and gpt-5.2 to prevent configuration conflicts.

src/WebStarter/appsettings.json [481-558]

 ...
     {
       "Id": "gpt-5",
       "Name": "gpt-5",
       "Version": "gpt-5",
       "ApiKey": "",
 ...
     },
     {
-      "Id": "gpt-5",
+      "Id": "gpt-5.1",
       "Name": "gpt-5.1",
       "Version": "gpt-5.1",
       "ApiKey": "o",
 ...
     },
     {
-      "Id": "gpt-5",
+      "Id": "gpt-5.2",
       "Name": "gpt-5.2",
       "Version": "gpt-5.2",
       "ApiKey": "",
 ...

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 9

Why: This suggestion correctly identifies a critical issue where multiple model configurations share the same Id. This would lead to incorrect model selection at runtime, making it a significant bug.

High
Use TryParse for safer float conversion

Replace float.Parse with the safer float.TryParse to prevent runtime exceptions
when converting the temperature value from the conversation state.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [558]

-float? temperature = float.Parse(_state.GetState("temperature", "0.0"));
+float? temperature = null;
+if (float.TryParse(_state.GetState("temperature", "0.0"), out var tempValue))
+{
+    temperature = tempValue;
+}
Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies that using float.Parse on user-provided state is unsafe and could lead to a runtime crash. Using float.TryParse is a robust improvement for error handling.

Medium
High-level
Simplify the reasoning setting configuration
Suggestion Impact: The commit did not remove the EffortLevel property, but it addressed the redundancy by marking EffortLevel as [Obsolete("Set EffortLevel in Parameters")], steering configuration toward Parameters. The parsing logic still falls back to settings.EffortLevel but continues to prefer Parameters when present (with minor refactoring and renaming).

code diff:

# File: src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs
@@ -86,6 +86,8 @@
 public class ReasoningSetting
 {
     public float? Temperature { get; set; }
+
+    [Obsolete("Set EffortLevel in Parameters")]
     public string? EffortLevel { get; set; }
     public Dictionary<string, ModelParamSetting>? Parameters { get; set; }
 }

# File: src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs
@@ -606,17 +606,20 @@
             temperature = settings.Temperature;
         }
 
+
         var defaultLevel = settings?.EffortLevel;
+        
         if (settings?.Parameters != null
-            && settings.Parameters.TryGetValue("EffortLevel", out var value)
-            && !string.IsNullOrEmpty(value?.Default))
-        {
-            defaultLevel = value.Default;
+            && settings.Parameters.TryGetValue("EffortLevel", out var settingValue)
+            && !string.IsNullOrEmpty(settingValue?.Default))
+        {
+            defaultLevel = settingValue.Default;
         }
 
         var level = _state.GetState("reasoning_effort_level")
                      .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel)
                      .IfNullOrEmptyAs(defaultLevel);
+
         reasoningEffortLevel = ParseReasoningEffortLevel(level);
 

To avoid ambiguity, remove the redundant EffortLevel property from the
ReasoningSetting class. The Parameters dictionary should be the single source
for configuring the effort level.

Examples:

src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs [87-91]
{
    public float? Temperature { get; set; }
    public string? EffortLevel { get; set; }
    public Dictionary<string, ModelParamSetting>? Parameters { get; set; }
}
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [609-615]
        var defaultLevel = settings?.EffortLevel;
        if (settings?.Parameters != null
            && settings.Parameters.TryGetValue("EffortLevel", out var value)
            && !string.IsNullOrEmpty(value?.Default))
        {
            defaultLevel = value.Default;
        }

Solution Walkthrough:

Before:

// In LlmModelSetting.cs
public class ReasoningSetting
{
    public float? Temperature { get; set; }
    public string? EffortLevel { get; set; } // Legacy property
    public Dictionary<string, ModelParamSetting>? Parameters { get; set; } // New, flexible property
}

// In ChatCompletionProvider.cs
private (float?, ChatReasoningEffortLevel?) ParseReasoning(...)
{
    // ...
    var defaultLevel = settings?.EffortLevel; // First, check legacy property
    if (settings?.Parameters != null && settings.Parameters.TryGetValue("EffortLevel", ...))
    {
        defaultLevel = value.Default; // Then, override with new property
    }
    // ...
}

After:

// In LlmModelSetting.cs
public class ReasoningSetting
{
    public float? Temperature { get; set; }
    // public string? EffortLevel { get; set; } // Removed
    public Dictionary<string, ModelParamSetting>? Parameters { get; set; }
}

// In ChatCompletionProvider.cs
private (float?, ChatReasoningEffortLevel?) ParseReasoning(...)
{
    // ...
    string? defaultLevel = null;
    if (settings?.Parameters != null && settings.Parameters.TryGetValue("EffortLevel", ...))
    {
        defaultLevel = value.Default; // Single source for default level
    }
    // ...
}
Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies a design redundancy where EffortLevel and Parameters provide two ways to configure the same setting, leading to ambiguity. Removing the legacy EffortLevel property would simplify the configuration model and improve clarity.

Medium
Learned best practice
Use case-insensitive parameter lookup

Make the "EffortLevel" lookup case-insensitive (or normalize keys) so
configuration JSON with different casing still works reliably.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [610-615]

-if (settings?.Parameters != null
-    && settings.Parameters.TryGetValue("EffortLevel", out var value)
-    && !string.IsNullOrEmpty(value?.Default))
+if (settings?.Parameters != null)
 {
-    defaultLevel = value.Default;
+    var effortParam = settings.Parameters
+        .FirstOrDefault(p => string.Equals(p.Key, "EffortLevel", StringComparison.OrdinalIgnoreCase))
+        .Value;
+
+    if (!string.IsNullOrEmpty(effortParam?.Default))
+    {
+        defaultLevel = effortParam.Default;
+    }
 }
Suggestion importance[1-10]: 6

Why: Relevant best practice - Improve defensive coding with case-insensitive comparisons/lookups to prevent incorrect behavior from casing differences in configuration keys.

Low
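
Where the Parameters dictionary is constructed in code rather than bound from configuration, an alternative is to choose a case-insensitive comparer once at construction time (a sketch using the ModelParamSetting stand-in from earlier):

// Every TryGetValue is then case-insensitive, with no per-lookup scan
var parameters = new Dictionary<string, ModelParamSetting>(StringComparer.OrdinalIgnoreCase)
{
    ["EffortLevel"] = new ModelParamSetting { Default = "medium" }
};

parameters.TryGetValue("effortlevel", out var effort); // matches despite the different casing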

iceljc merged commit 10a6f8d into SciSharp:master on Dec 19, 2025
0 of 4 checks passed