docs/how-tos/vs-code/datacoves-copilot/README.md (4 changes: 2 additions & 2 deletions)
@@ -3,10 +3,10 @@ title: Datacoves Copilot
sidebar_position: 85
---
# AI LLMs for Datacoves Copilot
Datacoves can integrate seamlessly with your existing ChatGPT or Azure OpenAI LLMs. These how-tos cover configuring and using AI within Datacoves.

### Prereqs
- Have an existing LLM such as ChatGPT or [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/assistants-quickstart?tabs=command-line%2Ckeyless%2Ctypescript-keyless&pivots=ai-foundry-portal)
- Have access to the API key and endpoint URL credentials.
- Have `Admin` access to configure credentials in Datacoves

docs/how-tos/vs-code/datacoves-copilot/v2.md (113 changes: 110 additions & 3 deletions)
@@ -11,7 +11,8 @@ This section describes how to configure and use Datacoves Copilot v2, which comes with support for:
- DeepSeek
- Google Gemini
- OpenAI
- Azure OpenAI
- OpenAI Compatible
- OpenRouter
- xAI (Grok)

Expand All @@ -24,6 +25,7 @@ import TabItem from '@theme/TabItem';
{label: 'Config', value: 'config'},
{label: 'Anthropic', value: 'anthropic'},
{label: 'OpenAI', value: 'openai'},
{label: 'Azure OpenAI', value: 'azure'},
{label: 'OpenAI Compatible', value: 'openaicompatible'},
{label: 'Google Gemini', value: 'gemini'},
{label: 'More Providers', value: 'additional'},
@@ -219,6 +221,109 @@ Optimized GPT-4 models:

Refer to the [OpenAI Models documentation](https://platform.openai.com/docs/models) for the most up-to-date list of models and capabilities.

</TabItem>
<TabItem value="azure">

## Azure OpenAI LLM Provider

Datacoves Copilot supports Azure OpenAI models through its OpenAI-compatible API interface.

Website: https://azure.microsoft.com/en-us/products/ai-services/openai-service

### Secret value format

```json
{
  "default": {
    "apiProvider": "openai",
    "openAiApiKey": "<YOUR AZURE API KEY>",
    "openAiBaseUrl": "https://<your-resource>.cognitiveservices.azure.com/openai/deployments/<deployment-name>/chat/completions?api-version=<API VERSION>",
    "openAiModelId": "<deployment-name>",
    "openAiUseAzure": true,
    "id": "default"
  }
}
```
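Because the secret is plain JSON, it can be sanity-checked before being saved in Datacoves. A hedged sketch (the required keys mirror the example above; the checks are illustrative, not Datacoves' own validation logic):

```python
import json

# Keys used by the Azure OpenAI secret example above.
REQUIRED_KEYS = {"apiProvider", "openAiApiKey", "openAiBaseUrl", "openAiModelId", "id"}

def validate_azure_secret(raw: str) -> dict:
    """Parse the secret JSON and check the fields the Azure example uses."""
    default = json.loads(raw)["default"]
    missing = REQUIRED_KEYS - default.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if default.get("openAiUseAzure") is not True:
        raise ValueError('"openAiUseAzure" must be true for Azure OpenAI')
    if "chat/completions" not in default["openAiBaseUrl"]:
        raise ValueError("openAiBaseUrl must point at the Chat Completions path")
    return default

# Illustrative values only; substitute your own resource and deployment.
example = """{
  "default": {
    "apiProvider": "openai",
    "openAiApiKey": "my-key",
    "openAiBaseUrl": "https://my-res.cognitiveservices.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-10-21",
    "openAiModelId": "gpt-4o",
    "openAiUseAzure": true,
    "id": "default"
  }
}"""
print(validate_azure_secret(example)["openAiModelId"])
```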

### Getting Azure OpenAI Credentials

1. **Create Azure OpenAI Resource**: Go to Azure Portal and create an Azure OpenAI service resource
2. **Deploy a Model**: In Azure AI Foundry, deploy a model (e.g., gpt-4o, gpt-4.1, gpt-5)
3. **Get Endpoint**: Copy your endpoint URL from the resource overview
4. **Get API Key**: Navigate to "Keys and Endpoint" section and copy one of the API keys
5. **Get Deployment Name**: Use the deployment name you created (not the model name)

### Endpoint URL Format

Datacoves Copilot uses the **Chat Completions API**, so your `openAiBaseUrl` must use the chat completions path, including the deployment name and `api-version` parameter:

```
https://<your-resource>.cognitiveservices.azure.com/openai/deployments/<deployment-name>/chat/completions?api-version=<API VERSION>
```
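To reduce copy-paste mistakes, the URL can be assembled from its three variable parts. A minimal sketch; the resource name, deployment name, and API version below are placeholders, and the `cognitiveservices.azure.com` host follows the format above:

```python
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Assemble the Chat Completions endpoint from its three moving parts."""
    return (
        f"https://{resource}.cognitiveservices.azure.com"
        f"/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

# Placeholder values for illustration.
print(azure_chat_url("my-resource", "gpt-4o", "2024-10-21"))
```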

For models that support **both** the Responses API and the Chat Completions API (for example GPT-4.1, GPT-5.x, o3, o4-mini), **always** use the Chat Completions URL above in Datacoves Copilot and **do not** use the default `/openai/v1/responses` URL shown in some Azure examples.

### Supported Azure OpenAI Models (`openAiModelId`)

Use your **deployment name** from Azure AI Foundry as the `openAiModelId`. This must match the deployment name exactly (for example `gpt-5.1` if your deployment is named `gpt-5.1`), not just the base model family name.

#### GPT-5 Series (Latest)

**Models with Chat Completions API support:**
- gpt-5.2 (2025-12-11) - Flagship model, 400K context
- gpt-5.2-chat (2025-12-11) - Chat optimized
- gpt-5.1 (2025-11-13) - Advanced reasoning, 400K context
- gpt-5.1-chat (2025-11-13) - Chat optimized reasoning
- gpt-5 (2025-08-07) - Advanced reasoning, 400K context
- gpt-5-mini (2025-08-07) - Cost-efficient, 400K context
- gpt-5-nano (2025-08-07) - Fast, cost-efficient, 400K context
- gpt-5-chat (2025-08-07, 2025-10-03) - Conversational, 128K context
- gpt-oss-120b - Open-weight reasoning model
- gpt-oss-20b - Open-weight reasoning model

**Note:** The following GPT-5 models use Responses API only and are **not supported** by Datacoves Copilot:
- gpt-5-codex, gpt-5-pro, gpt-5.1-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max
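Since Azure deployments are referenced by user-chosen names, a client cannot always tell from the name alone which API a deployment speaks. A hedged helper that assumes, as is common, that deployments are named after their base model; the `RESPONSES_ONLY` set comes from the note above:

```python
# Models from the note above that speak only the Responses API and will
# fail against the Chat Completions endpoint Datacoves Copilot uses.
RESPONSES_ONLY = {
    "gpt-5-codex",
    "gpt-5-pro",
    "gpt-5.1-codex",
    "gpt-5.1-codex-mini",
    "gpt-5.1-codex-max",
}

def supports_chat_completions(deployment_name: str) -> bool:
    """Best-effort check: assumes the deployment is named after its base model."""
    return deployment_name.lower() not in RESPONSES_ONLY

print(supports_chat_completions("gpt-5"))        # a Chat Completions model
print(supports_chat_completions("gpt-5-codex"))  # Responses API only
```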

#### GPT-4.1 Series

- gpt-4.1 (2025-04-14) - Advanced multimodal, 1M context
- gpt-4.1-mini (2025-04-14) - Balanced performance, 1M context
- gpt-4.1-nano (2025-04-14) - Lightweight, 1M context

#### GPT-4o Series

- gpt-4o (2024-11-20) - Optimized GPT-4, 128K context
- gpt-4o (2024-08-06) - Optimized GPT-4, 128K context
- gpt-4o (2024-05-13) - Original GPT-4o, 128K context
- gpt-4o-mini (2024-07-18) - Fast, cost-efficient, 128K context

#### GPT-4 Series

- gpt-4 (turbo-2024-04-09) - GPT-4 Turbo with Vision, 128K context

#### o-Series Reasoning Models

- o3 (2025-04-16) - Reasoning model, 200K context
- o4-mini (2025-04-16) - Mini reasoning, 200K context
- o3-mini (2025-01-31) - Compact reasoning, 200K context
- o1 (2024-12-17) - Reasoning model, 200K context
- o1-mini (2024-09-12) - Smaller reasoning, 128K context
- codex-mini (2025-05-16) - Coding specialized, 200K context

#### GPT-3.5 Series

- gpt-35-turbo (0125) - Chat optimized, 16K context
- gpt-35-turbo (1106) - Chat optimized, 16K context
- gpt-35-turbo-instruct (0914) - Completions API only

**Important Notes:**
- Datacoves Copilot uses the Chat Completions API endpoint
- Responses API is not currently supported for Azure OpenAI
- Use your **deployment name** in Azure as both the URL path segment and `openAiModelId`
- The `api-version` parameter is required in the endpoint URL
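The last two notes can be enforced mechanically: the deployment name embedded in `openAiBaseUrl` should match `openAiModelId`, and the query string must carry `api-version`. A hedged sketch of such a pre-flight check:

```python
from urllib.parse import parse_qs, urlsplit

def check_azure_config(base_url: str, model_id: str) -> None:
    """Fail fast if the URL's deployment segment and openAiModelId disagree."""
    parts = urlsplit(base_url)
    segments = parts.path.strip("/").split("/")
    # Expected path: openai/deployments/<deployment-name>/chat/completions
    deployment = segments[segments.index("deployments") + 1]
    if deployment != model_id:
        raise ValueError(f"URL deployment {deployment!r} != openAiModelId {model_id!r}")
    if "api-version" not in parse_qs(parts.query):
        raise ValueError("the api-version query parameter is required")
```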

Refer to [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/models) for the most current model availability and regional deployment options.

</TabItem>
<TabItem value="openaicompatible">

@@ -229,9 +334,11 @@ Refer to the [OpenAI Models documentation](https://platform.openai.com/docs/models) for the most up-to-date list of models and capabilities.
Datacoves Copilot supports a wide range of AI model providers that offer APIs compatible with the OpenAI API standard. This means you can use models from providers other than OpenAI, while still using a familiar API interface. This includes providers like:

- Local models running through tools like Ollama and LM Studio (covered in separate sections).
- Cloud providers like Perplexity, Together AI, Anyscale, and others.
- Any other provider offering an OpenAI-compatible API endpoint.

**Note:** For Azure OpenAI, see the dedicated [Azure OpenAI tab](#azure-openai) for specific setup instructions.

### Secret value format

```json
@@ -256,7 +363,7 @@ Where:
2. `openAiApiKey`: This is the secret key you obtain from the provider.
3. `openAiModelId`: The name of the specific model; each provider exposes a different set of models, so check the provider's documentation.

If you're using Azure, set `"openAiUseAzure": true`. Optionally, you can pin a specific API version with `"azureApiVersion": "<VERSION>"`.

#### Fine-tune model usage
