2 changes: 1 addition & 1 deletion app/app.config.ts
@@ -89,7 +89,7 @@ export default defineAppConfig({
},
{
label: 'AI',
to: '/guides/ai/mcp',
to: '/guides/ai/',
icon: 'directus-ai',
},
{
4 changes: 2 additions & 2 deletions app/pages/[...slug].vue
@@ -34,9 +34,9 @@ const showComingSoonBadge = computed(() => {
const { data: surround } = await useAsyncData(`${route.path}-surround`, () => queryCollectionItemSurroundings('content',
route.path,
{
fields: ['title', 'description', 'navigation', 'path'],
fields: ['title', 'description', 'path'],
},
));
).where('path', 'NOT LIKE', '%.navigation'));
</script>

<template>
30 changes: 30 additions & 0 deletions content/guides/10.ai/0.index.md
@@ -0,0 +1,30 @@
---
title: AI + Directus
description: Use AI to interact with Directus - either through the built-in AI Chat or by connecting external AI tools via MCP.
---

Directus offers two ways to integrate AI into your workflow, depending on where you prefer to work.

## Inside Directus: AI Chat

The built-in AI Chat provides a conversational assistant directly within the Directus Data Studio.

- Works directly in the Directus UI
- No additional client setup required
- Best for content editors, quick tasks, and users who live in Directus

::card{icon="material-symbols:chat" title="AI Chat" to="/guides/ai/chat"}
Learn how to configure and use the built-in AI assistant.
::

## External AI Tools: MCP Server

The Model Context Protocol (MCP) server lets you connect external AI tools to Directus.

- Use Claude Desktop, ChatGPT, Cursor, or other MCP-compatible clients
- Bring Directus capabilities to where you already work
- Best for developers, power users, and complex workflows

::card{icon="material-symbols:hub" title="MCP Server" to="/guides/ai/mcp"}
Connect your preferred AI tools to Directus.
::
27 changes: 24 additions & 3 deletions content/guides/10.ai/1.chat/0.index.md
@@ -14,7 +14,7 @@ Directus AI Chat is an embedded conversational assistant that helps you interact
::

::callout{icon="material-symbols:info" color="info"}
**AI Chat requires an OpenAI or Anthropic API key.** Administrators [configure API keys](/guides/ai/chat/setup) in Settings → AI.
**AI Chat requires an API key from a supported provider.** Administrators [configure API keys](/guides/ai/chat/setup) for OpenAI, Anthropic, Google, or an OpenAI-compatible endpoint in Settings → AI.
::

## What can AI Chat do?
@@ -30,18 +30,39 @@ Directus AI Chat is an embedded conversational assistant that helps you interact

| Model | Model ID | Context Window Size |
|-------|----------|---------|
| GPT-4o Mini | `gpt-4o-mini` | 128k tokens |
| GPT-4.1 Nano | `gpt-4.1-nano` | 1M tokens |
| GPT-4.1 Mini | `gpt-4.1-mini` | 1M tokens |
| GPT-4.1 | `gpt-4.1` | 1M tokens |
| GPT-5 Nano | `gpt-5-nano` | 400k tokens |
| GPT-5 Mini | `gpt-5-mini` | 400k tokens |
| GPT-5 | `gpt-5` | 400k tokens |
| GPT-5.2 | `gpt-5.2` | 400k tokens |
| GPT-5.2 Chat | `gpt-5.2-chat-latest` | 128k tokens |
| GPT-5.2 Pro | `gpt-5.2-pro` | 400k tokens |

### :icon{name="i-simple-icons-claude" class="align-middle mr-1 size-5"} Anthropic
### :icon{name="i-simple-icons-anthropic" class="align-middle mr-1 size-5"} Anthropic

| Model | Model ID | Context Window Size |
|-------|----------|---------|
| Claude Haiku 4.5 | `claude-haiku-4-5` | 200k tokens |
| Claude Sonnet 4.5 | `claude-sonnet-4-5` | 200k tokens |
| Claude Opus 4.5 | `claude-opus-4-5` | 200k tokens |

Want additional models or features? [Submit a request on our roadmap](https://roadmap.directus.io/).
### :icon{name="i-simple-icons-googlegemini" class="align-middle mr-1 size-5"} Google

| Model | Model ID | Context Window Size |
|-------|----------|---------|
| Gemini 2.5 Flash | `gemini-2.5-flash` | 1M tokens |
| Gemini 2.5 Pro | `gemini-2.5-pro` | 1M tokens |
| Gemini 3 Flash Preview | `gemini-3-flash-preview` | 1M tokens |
| Gemini 3 Pro Preview | `gemini-3-pro-preview` | 1M tokens |

### :icon{name="material-symbols:cloud" class="align-middle mr-1 size-5"} OpenAI-Compatible

Use any OpenAI-compatible API endpoint, including self-hosted models (Ollama, LM Studio), Azure OpenAI, DeepSeek, Mistral, and more. Configure custom models in [Settings → AI](/guides/ai/chat/setup#openai-compatible-providers).

Have feedback or feature requests? [Submit on our roadmap](https://roadmap.directus.io/).

## Key Features

161 changes: 152 additions & 9 deletions content/guides/10.ai/1.chat/1.setup.md
@@ -9,10 +9,12 @@ AI Chat requires an API key from a supported AI provider. This page covers the a
## Requirements

- Administrator access to your Directus instance
- API key from OpenAI or Anthropic (see below)
- API key from at least one supported provider: OpenAI, Anthropic, or Google (see below)
- Users must have **App access** - Public (non-authenticated) or API-only users cannot use AI Chat
- AI Chat can be [disabled at the infrastructure level](/configuration/ai) using the `AI_ENABLED` environment variable

Alternatively, you can use an [OpenAI-compatible provider](#openai-compatible-providers) like Ollama or LM Studio for self-hosted models.
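
Conversely, to switch the feature off for an entire instance, set the `AI_ENABLED` variable mentioned in the requirements above in your deployment's environment. A minimal sketch (where this lives depends on how you host Directus):

```
# Disable AI Chat instance-wide; remove or set to "true" to re-enable.
AI_ENABLED=false
```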

::callout{icon="material-symbols:info" color="info"}
Note that all users of AI Chat share the single API key configured for each provider, so usage limits and costs are shared across all users. See your provider's dashboard to monitor usage and costs.
::
@@ -41,7 +43,7 @@ OpenAI requires a payment method and has usage-based pricing. Set spending limit

:::accordion-item{label="Anthropic" icon="i-simple-icons-anthropic"}

Anthropic provides Claude models (Haiku 4.5, Sonnet 4.5).
Anthropic provides Claude models (Haiku 4.5, Sonnet 4.5, Opus 4.5).

1. Go to [console.anthropic.com](https://console.anthropic.com/) and sign in or create an account
2. Navigate to **API Keys** in the settings
@@ -55,35 +57,171 @@ Anthropic requires a payment method and has usage-based pricing. Monitor usage i

:::

:::accordion-item{label="Google AI" icon="i-simple-icons-googlegemini"}

Google provides Gemini models (2.5 Flash, 2.5 Pro, 3 Flash Preview, 3 Pro Preview).

1. Go to [aistudio.google.com](https://aistudio.google.com/) and sign in with your Google account
2. Click **Get API Key** in the left sidebar
3. Click **Create API Key**
4. Select or create a Google Cloud project
5. Copy the generated API key

::callout{icon="material-symbols:info" color="info"}
Google AI Studio offers a free tier with rate limits. For production use, consider enabling billing in Google Cloud Console to increase quotas.
::

:::

::

## Configure API Keys in Directus
## Configure Providers in Directus

::steps{level="3"}

### Navigate to AI Settings

Go to **Settings → AI** in the Directus admin panel.

![AI Settings page in Directus](/img/ai-chat-settings-disabled.png)
![AI Settings page in Directus](/img/ai-settings-ai-chat.png)

### Enter Your API Key
### Enter Your API Keys

Add your API key for one or both providers:
Add your API key for one or more providers:

- **OpenAI API Key** - Enables GPT-5 models
- **OpenAI API Key** - Enables GPT-4 and GPT-5 models
- **Anthropic API Key** - Enables Claude models
- **Google API Key** - Enables Gemini models

::callout{icon="material-symbols:security" color="info"}
API keys are encrypted at rest in the database and masked in the UI.
::

### Configure Allowed Models

For each provider, you can restrict which models are available to users. Use the **Allowed Models** dropdown next to each API key field to select the models users can choose from.

- If no models are selected, no models from that provider will be available
- You can add custom model IDs by typing them and pressing Enter (useful when new models are released)

This is useful for:
- Controlling costs by limiting access to expensive models
- Ensuring compliance by only allowing approved models
- Simplifying the user experience by reducing model choices

### Save Settings

Click **Save** to apply your changes. AI Chat is now available to all users with [App access](/guides/auth/access-control#studio-users).

::

## OpenAI-Compatible Providers

In addition to the built-in providers, Directus supports any OpenAI-compatible API endpoint. This allows you to use self-hosted models, alternative providers, or private deployments.

::callout{icon="material-symbols:warning" color="warning"}
**For best results, use built-in cloud providers.** Local models vary significantly in their tool-calling capabilities and may produce inconsistent results. If using OpenAI-compatible providers, we recommend cloud-hosted frontier models over locally run models on personal hardware.
::

![OpenAI Compatible Provider Settings](/img/ai-settings-openai-compat.png)

### Configuration

In **Settings → AI**, scroll to the **OpenAI-Compatible** section and configure:

| Field | Description |
| ----- | ----------- |
| **Provider Name** | Display name shown in the model selector (e.g., "Ollama", "LM Studio") |
| **Base URL** | The API endpoint URL (required). Must be OpenAI-compatible. |
| **API Key** | Authentication key if required by your provider |
| **Custom Headers** | Additional HTTP headers for authentication or configuration |
| **Models** | List of models available from this provider |
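
A quick way to confirm that a **Base URL** really speaks the OpenAI-compatible wire format is to list its models before saving the settings. The snippet below is a standalone sanity check, not part of Directus, and the URL and key are placeholders for your own values:

```ts
// Sanity-check an OpenAI-compatible endpoint by listing its models.
// Both values are placeholders - swap in your own Base URL and API key.
const baseUrl = 'http://localhost:11434/v1'; // e.g. a local Ollama instance
const apiKey = 'ollama'; // some providers ignore the key, but send the header anyway

const response = await fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${apiKey}` },
});

if (!response.ok) {
  throw new Error(`Endpoint did not answer as expected: ${response.status} ${response.statusText}`);
}

// OpenAI-compatible servers respond with { object: 'list', data: [{ id: '...' }, ...] }.
const { data } = (await response.json()) as { data: { id: string }[] };
console.log(data.map((model) => model.id));
```

If the request fails or the response has a different shape, the endpoint is unlikely to work with this integration.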

### Model Configuration

For each model, you can specify:

| Field | Description |
| ----- | ----------- |
| **Model ID** | The model identifier used in API requests |
| **Display Name** | Human-readable name shown in the UI |
| **Context Window** | Maximum input tokens (default: 128,000) |
| **Max Output** | Maximum output tokens (default: 16,000) |
| **Supports Attachments** | Whether the model can process images/files |
| **Supports Reasoning** | Whether the model has chain-of-thought capabilities |
| **Provider Options** | JSON object for model-specific parameters |

::callout{icon="material-symbols:info" color="info"}
The **Provider Options** field allows you to pass provider-specific parameters to the AI SDK. This is useful for enabling features like extended thinking or custom sampling parameters. See the [Vercel AI SDK documentation](https://sdk.vercel.ai/providers/openai-compatible) for details.
::
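
For instance, a **Provider Options** value is a plain JSON object of extra parameters. The keys below are illustrative placeholders rather than documented option names; check the AI SDK and your provider's documentation for the parameters they actually accept:

```json
{
  "reasoningEffort": "low",
  "temperature": 0.2
}
```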

### Example Configurations

::accordion{type="single"}

:::accordion-item{label="Ollama" icon="i-simple-icons-ollama"}

[Ollama](https://ollama.ai/) lets you run open-source models locally.

1. Install Ollama and pull a model: `ollama pull gpt-oss:20b`
2. Ollama runs on `http://localhost:11434` by default

**Directus Configuration:**
- **Provider Name**: `Ollama`
- **Base URL**: `http://localhost:11434/v1`
- **API Key**: `ollama` (required by the OpenAI SDK but ignored by Ollama)
- **Models**: Add your pulled models (e.g., `gpt-oss:20b`, `gpt-oss:120b`, `qwen3:8b`)

::callout{icon="material-symbols:info" color="info"}
You can copy an existing model to an OpenAI-compatible name if needed: `ollama cp gpt-oss:20b gpt-4`
::

See [Ollama OpenAI compatibility docs](https://docs.ollama.com/api/openai-compatibility) for supported endpoints and features.

:::


:::accordion-item{label="Azure OpenAI" icon="i-simple-icons-microsoftazure"}

[Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) provides OpenAI models through Microsoft Azure.

1. Create an Azure OpenAI resource in the Azure portal
2. Deploy a model (e.g., GPT-4, GPT-4o)
3. Get your endpoint and API key from the Develop tab in your resource

**Directus Configuration:**
- **Provider Name**: `Azure OpenAI`
- **Base URL**: `https://YOUR-RESOURCE.openai.azure.com/openai/v1`
- **API Key**: Your Azure OpenAI API key
- **Models**: Add your deployed model names

::callout{icon="material-symbols:info" color="info"}
The v1 API (August 2025+) no longer requires an `api-version` header. If using an older API version, add `api-version` as a custom header (e.g., `2024-10-21`).
::

See [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/) for setup details.

:::

:::accordion-item{label="Mistral AI" icon="simple-icons:mistralai"}

[Mistral AI](https://mistral.ai/) provides high-performance open and commercial models.

1. Create an account at [console.mistral.ai](https://console.mistral.ai/)
2. Generate an API key

**Directus Configuration:**
- **Provider Name**: `Mistral`
- **Base URL**: `https://api.mistral.ai/v1`
- **API Key**: Your Mistral API key
- **Models**: Add models like `mistral-large-latest`, `mistral-small-latest`, `codestral-latest`

See [Mistral AI documentation](https://docs.mistral.ai/) for available models and pricing.

:::

::

## Custom System Prompt

Optionally customize how the AI assistant behaves in **Settings → AI → Custom System Prompt**.
@@ -161,9 +299,14 @@ Leave blank to use the default behavior.
::

**Tips for controlling costs:**
- Use faster, cheaper models (GPT-5 Nano, Claude Haiku 4.5) for simple tasks
- Use faster, cheaper models (GPT-5 Nano, Claude Haiku 4.5, Gemini 2.5 Flash) for simple tasks
- Use [Allowed Models](#configure-allowed-models) to restrict access to expensive models
- Disable unused tools - disabled tools are not loaded into context, reducing token usage
- Set spending limits in your [OpenAI](https://platform.openai.com/settings/organization/limits) or [Anthropic](https://console.anthropic.com/) dashboard
- Set spending limits in your provider dashboard:
- [OpenAI](https://platform.openai.com/settings/organization/limits)
- [Anthropic](https://console.anthropic.com/)
- [Google AI](https://aistudio.google.com/)
- Consider self-hosted models via [OpenAI-compatible providers](#openai-compatible-providers) for cost control

## Next Steps

4 changes: 2 additions & 2 deletions content/guides/10.ai/1.chat/2.usage.md
@@ -20,9 +20,9 @@ Click the model dropdown in the chat header to choose which AI model to use:

| Model | Best For |
|-------|----------|
| :icon{name="logos:openai-icon"} GPT-5 Nano / :icon{name="logos:claude-icon"} Claude Haiku 4.5 | Quick tasks, simple queries |
| :icon{name="logos:openai-icon"} GPT-5 Nano / :icon{name="logos:claude-icon"} Claude Haiku 4.5 / :icon{name="i-simple-icons-googlegemini"} Gemini 3 Flash Preview | Quick tasks, simple queries |
| :icon{name="logos:openai-icon"} GPT-5 Mini | Balanced speed and capability |
| :icon{name="logos:openai-icon"} GPT-5 / :icon{name="logos:claude-icon"} Claude Sonnet 4.5 | Complex operations, detailed analysis |
| :icon{name="logos:openai-icon"} GPT-5 / :icon{name="logos:claude-icon"} Claude Sonnet 4.5 / :icon{name="i-simple-icons-googlegemini"} Gemini 3 Pro Preview | Complex operations, detailed analysis |

Only models for [configured providers](/guides/ai/chat/setup) appear in the dropdown.

Binary file added public/img/ai-settings-ai-chat.png
Binary file added public/img/ai-settings-openai-compat.png