6 changes: 6 additions & 0 deletions build/agents/build-your-agent/evals.mdx
@@ -1,18 +1,24 @@
---
title: 'Evals'
sidebarTitle: 'Evals'

description: 'Test and evaluate your AI Agents with scenario-based evaluations and automated Evaluators'
---

<Info>
**Rollout Status**: Evals is currently being rolled out progressively, starting with Enterprise customers. If you're an Enterprise customer and don't see this feature in your account yet, reach out to your account manager to discuss access.

</Info>

The Evals section is your command center for testing and evaluating AI Agent performance. Located in the **Monitor** tab (next to the Run tab) in the Agent builder, Evals enables you to create Test Suites, define evaluation criteria (Evaluators), run automated evaluations, and monitor ongoing performance—all without manual testing.

![Evals section showing Test Suites, Evaluators, Runs, and Performance](/images/agent/agent-evals.png)

## Programmatic access

You can manage the full evaluation lifecycle programmatically using the Relevance AI MCP server. This covers creating test sets and test cases, configuring evaluator rules and tool simulations, triggering runs, and retrieving results — enabling CI/CD integration and automated testing workflows. See [Programmatic evals via MCP](/build/agents/build-your-agent/evals/programmatic-evals) for details.

---

## What you can do with Evals

<CardGroup cols={3}>
<Card title="Conduct Tests" icon="flask-vial">
@@ -28,11 +34,11 @@

---

## Evals sections

The Evals section contains five main areas, accessible from the left sidebar:

- **Test Suites** — Create and manage groups of Test scenarios for your Agent. Each Test Suite can contain multiple scenarios with different prompts and evaluation criteria.
- **Evaluators** — Configure global evaluation criteria that can be applied across any Test Suite or scenario without needing to set them up each time.
- **Runs** — View your evaluation run history and results. See average scores, number of conversations evaluated, progress status, credit spend, and creation dates for all past runs.
- **Publish Checks** — Configure which Test Suites must pass before your Agent can be published. Set a pass threshold and optionally block publishing if evaluations fail.
@@ -106,7 +112,7 @@
To create a global Evaluator:

<div style={{ width:"100%",position:"relative",paddingTop:"56.25%" }}>
<iframe src="https://app.supademo.com/embed/cmmmtwq7z1lsj9cvj5kwwifwi" frameBorder="0" title="Creating a global Evaluator" allow="clipboard-write; fullscreen" webkitAllowFullscreen="true" mozAllowFullscreen="true" allowFullscreen style={{ position:"absolute",top:0,left:0,width:"100%",height:"100%",border:"3px solid #5E43CE",borderRadius:"10px" }} />
</div>

1. Go to the **Monitor** tab and select **Evals**, then select **Evaluators**
@@ -125,7 +131,7 @@
## Creating a Test Suite with a scenario

<div style={{ width:"100%",position:"relative",paddingTop:"56.25%" }}>
<iframe src="https://app.supademo.com/embed/cmmmvldns1nlq9cvjzy4nkpe0" frameBorder="0" title="Creating a Test Suite" allow="clipboard-write; fullscreen" webkitAllowFullscreen="true" mozAllowFullscreen="true" allowFullscreen style={{ position:"absolute",top:0,left:0,width:"100%",height:"100%",border:"3px solid #5E43CE",borderRadius:"10px" }} />
</div>

Follow these steps to create your first evaluation Test Suite:
@@ -282,7 +288,7 @@

The Performance tab also includes:

- **Data points** for the overall score over time
- **Evaluator breakdown** showing individual scoring per Evaluator
- **Graphs** visualizing Evaluator performance trends
- **List of evaluation runs** with score, name, and the ability to view the full conversation
234 changes: 234 additions & 0 deletions build/agents/build-your-agent/evals/programmatic-evals.mdx
@@ -0,0 +1,234 @@
---
title: "Programmatic evals via MCP"
sidebarTitle: "Programmatic evals"
description: "Manage the full evaluation lifecycle programmatically using MCP tools from your AI coding assistant."
---

The Relevance AI MCP server includes 19 tools for managing evaluations programmatically. This covers the complete evaluation lifecycle: creating test sets and test cases, configuring evaluator rules and tool simulations, running evaluations, and monitoring batch results.

This enables CI/CD integration, automated testing frameworks, and bulk operations that would be impractical to do through the UI.

<Info>
This page covers the MCP tools for programmatic eval management. For the UI-based workflow, see [Evals](/build/agents/build-your-agent/evals).
</Info>

<Info>
**Rollout Status**: Evals is currently being rolled out progressively, starting with Enterprise customers. If you're an Enterprise customer and don't see this feature in your account yet, reach out to your account manager to discuss access.
</Info>

---

## Prerequisites

You need the Relevance AI MCP server connected to your AI coding assistant before using these tools. See the [MCP Server](/integrations/mcp/programmatic-gtm/mcp-server) page for setup instructions.

For better results, also clone the [agent skills](/integrations/mcp/programmatic-gtm/agent-skills) repository — it gives your assistant the knowledge to use MCP tools correctly.

---

## Managing test sets

Test sets (also called Test Suites in the UI) are containers for test cases that you run together as a group.

### What you can do

- Create a new test set for an agent
- List all test sets for an agent
- Get the details of a specific test set
- Update a test set's name or configuration
- Delete a test set

### Example prompts

```
Create a test set called "Customer Support Regression" for agent [agent-id]
```

```
List all test sets for my support agent
```

```
Delete the test set named "Draft Tests" from agent [agent-id]
```

---

## Managing test cases

Test cases are individual scenarios within a test set. Each test case defines a simulated user persona, an opening message, conversation limits, and its own evaluator rules.

### What you can do

- Create a test case within a test set
- List all test cases in a test set
- Get the details of a specific test case
- Update a test case's scenario, persona, or configuration
- Delete a test case

### Example prompts

```
Add a test case to the "Customer Support Regression" test set:
- Scenario name: Billing Dispute
- Persona: An upset customer who was charged twice for the same order
- First message: "I've been double charged and no one is helping me"
- Max turns: 8
```

```
List all test cases in test set [test-set-id]
```

```
Update the "Billing Dispute" test case to increase max turns to 12
```

---

## Configuring evaluator rules

Evaluator rules define the criteria used to assess whether an agent's response passes or fails a test case. You can add, update, and remove evaluator rules on individual test cases.

### Evaluator rule types

| Type | What it checks |
|------|---------------|
| LLM Judge | Evaluates the conversation against a prompt you write, using an LLM to score the result |
| String Contains | Checks whether the agent's response includes specific text |
| String Equals | Checks whether the agent's response exactly matches an expected value |
| Tool Usage | Checks whether a specific tool was used, and how many times or in what position |
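
The two string rule types and the Tool Usage rule are deterministic checks. As an illustration only — this is a local sketch, not Relevance AI's implementation, and the transcript shape shown is hypothetical — they amount to something like:

```python
# Illustrative sketch of the deterministic evaluator rule types.
# Not the platform's actual implementation; the transcript shape is invented.

def string_contains(response: str, expected: str) -> bool:
    """String Contains: does the agent's response include specific text?"""
    return expected in response

def string_equals(response: str, expected: str) -> bool:
    """String Equals: does the response exactly match an expected value?"""
    return response == expected

def tool_usage(tool_calls: list[str], tool: str, min_times: int = 1) -> bool:
    """Tool Usage: was a specific tool used at least `min_times` times?"""
    return sum(1 for t in tool_calls if t == tool) >= min_times

# Hypothetical transcript data for illustration
response = "I've escalated this to a human agent."
tool_calls = ["get_customer_account", "escalate_to_human"]

print(string_contains(response, "escalated"))         # True
print(tool_usage(tool_calls, "escalate_to_human"))    # True
```

The LLM Judge rule is different in kind: it sends the conversation plus your judging prompt to a model and scores the result, so it has no equivalent deterministic sketch.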

### What you can do

- Add an evaluator rule to a test case
- Update an existing evaluator rule
- Remove an evaluator rule from a test case
- List all evaluator rules on a test case

### Example prompts

```
Add an LLM Judge evaluator to test case [test-case-id]:
- Name: Empathy Check
- Prompt: Did the agent acknowledge the customer's frustration before offering a solution?
```

```
Add a Tool Usage evaluator to test case [test-case-id]:
- Name: Escalation Tool Used
- Tool: escalate_to_human
- Check that it was used at least once
```

```
Remove the "String Contains" evaluator from test case [test-case-id]
```

---

## Configuring tool simulation

Tool simulation lets you emulate tool responses during evaluations without actually calling the real tools. This is useful for testing how your agent handles specific tool outputs without incurring real API calls or side effects.

Tool simulations are configured at the test case level. You specify the tool to simulate and a prompt describing the fake response the tool should return.

### Example prompts

```
Add a tool simulation to test case [test-case-id]:
- Tool: get_customer_account
- Simulation prompt: Return a customer account showing two identical charges of $49.99 on the same date
```

```
Update the tool simulation for "get_order_status" in test case [test-case-id] to return a delayed shipment scenario
```

```
Remove the tool simulation for "send_email" from test case [test-case-id]
```

---

## Running evaluations

You can trigger evaluation runs programmatically against a test set. This is the same operation as clicking **Run** in the UI, but callable from scripts, CI pipelines, and automated workflows.

### What you can do

- Run a test set (runs all test cases in the set)
- Run an individual test case
- Include or exclude global evaluators from a run

### Example prompts

```
Run the "Customer Support Regression" test set for agent [agent-id]
```

```
Run test case [test-case-id] and include the "Professional Tone" global evaluator
```

```
Trigger an evaluation run on test set [test-set-id] and name it "v2.3 release check"
```

---

## Monitoring batch results

After triggering a run, you can retrieve the results programmatically — including per-test-case scores, evaluator verdicts, and conversation logs.

### What you can do

- List all evaluation runs for a test set
- Get the detailed results for a specific run, including scores and evaluator verdicts
- Check whether a run is still in progress or complete
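
Once your assistant has retrieved a run, the results can be aggregated however your workflow needs. The sketch below assumes a hypothetical result shape — per-case evaluator verdicts with a `passed` flag — which may differ from the actual payload:

```python
# Sketch: summarising retrieved run results. The `run` dict below is an
# assumed example shape, not the exact payload returned by the MCP tools.

def summarize_run(run: dict) -> dict:
    """Count test cases whose evaluator verdicts all passed."""
    passed = [c for c in run["cases"]
              if all(v["passed"] for v in c["verdicts"])]
    return {
        "total": len(run["cases"]),
        "passed": len(passed),
        "pass_rate": len(passed) / len(run["cases"]),
        "failing": [c["name"] for c in run["cases"] if c not in passed],
    }

run = {
    "cases": [
        {"name": "Billing Dispute",
         "verdicts": [{"name": "Empathy Check", "passed": True}]},
        {"name": "Refund Flow",
         "verdicts": [{"name": "Empathy Check", "passed": False}]},
    ]
}
print(summarize_run(run))
# {'total': 2, 'passed': 1, 'pass_rate': 0.5, 'failing': ['Refund Flow']}
```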

### Example prompts

```
List all evaluation runs for test set [test-set-id]
```

```
Get the results for evaluation run [run-id] — show me which test cases passed and which failed
```

```
Check if the latest evaluation run for the "Customer Support Regression" test set has completed
```

---

## CI/CD integration

Because evaluation runs are fully programmable via MCP, you can integrate them into automated pipelines:

- Trigger a test set run as part of a pre-deployment check
- Poll for completion and parse pass/fail status
- Block deployment if scores fall below a threshold
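
The trigger-poll-gate loop can be sketched as follows. The `trigger_run` and `get_run` stubs stand in for the MCP tool calls your assistant would make — the real tool names, payloads, and result shapes may differ, so treat this as a pattern rather than a working client:

```python
import time

# Hypothetical stand-ins for the MCP tool calls; stubbed with canned data
# so the sketch is self-contained. Real names and payloads may differ.
def trigger_run(test_set_id: str) -> str:
    return "run-123"

def get_run(run_id: str) -> dict:
    return {"status": "complete",
            "cases": [{"name": "Billing Dispute", "score": 0.92},
                      {"name": "Refund Flow", "score": 0.74}]}

def gate_deployment(test_set_id: str, threshold: float = 0.80,
                    poll_seconds: int = 10) -> tuple[bool, list[str]]:
    """Trigger a run, poll until complete, and gate on a score threshold."""
    run_id = trigger_run(test_set_id)
    result = get_run(run_id)
    while result["status"] != "complete":
        time.sleep(poll_seconds)
        result = get_run(run_id)
    failing = [c["name"] for c in result["cases"] if c["score"] < threshold]
    return (not failing, failing)

ok, failing = gate_deployment("test-set-id")
print("deploy" if ok else f"blocked by: {', '.join(failing)}")
# blocked by: Refund Flow
```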

<Accordion title="Example CI/CD workflow using an AI coding assistant">
Ask your AI coding assistant:

```
1. Trigger an evaluation run for test set [test-set-id] on agent [agent-id]
2. Poll every 10 seconds until the run is complete
3. Check whether all test cases passed
4. If any test case scored below 80%, list the failing cases with their evaluator verdicts
5. Return a summary with overall pass rate
```

Your assistant will use the MCP eval tools to carry out each step and return a structured report you can act on.
</Accordion>

---

## Learn more

- [Evals (UI workflow)](/build/agents/build-your-agent/evals) — create and manage evaluations through the Relevance AI interface
- [MCP Server](/integrations/mcp/programmatic-gtm/mcp-server) — connect your AI coding assistant to Relevance AI
- [Agent Skills](/integrations/mcp/programmatic-gtm/agent-skills) — give your assistant built-in knowledge of Relevance AI tools
11 changes: 9 additions & 2 deletions docs.json
@@ -98,7 +98,13 @@
"build/agents/build-your-agent/alerts",
"build/agents/build-your-agent/memory",
"build/agents/build-your-agent/variables",
"build/agents/build-your-agent/evals",
{
"group": "Evals",
"pages": [
"build/agents/build-your-agent/evals",
"build/agents/build-your-agent/evals/programmatic-evals"
]
},
{
"group": "Trigger Types",
"pages": [
@@ -384,7 +390,8 @@
"integrations/mcp/programmatic-gtm/claude-code",
"integrations/mcp/programmatic-gtm/codex",
"integrations/mcp/programmatic-gtm/mcp-server",
"integrations/mcp/programmatic-gtm/agent-skills"
"integrations/mcp/programmatic-gtm/agent-skills",
"integrations/mcp/programmatic-gtm/slide-builder"
]
},
"integrations/mcp/mcp-client"
10 changes: 10 additions & 0 deletions get-started/chat/chat-agents/slide-builder.mdx
@@ -9,7 +9,7 @@
frameBorder="0"
webkitAllowFullscreen
mozAllowFullscreen
allowFullscreen
style={{position: "absolute", top: 0, left: 0, width: "100%", height: "100%", borderRadius: "10px"}}
></iframe>
</div>
@@ -17,7 +17,7 @@
Slide Builder is an inbuilt AI agent in Chat that generates slides from your prompt. It first creates a presentation outline showing all planned slides, so you can review and refine the structure before building. You can give it a simple prompt, use inbuilt agents like the OpenAI Deep Researcher to research a topic and generate slides from the findings, or use your Agents and Workforces to produce output that Slide Builder then turns into slides.

## Key Features
- **AI-Generated**: Tell Slide Builder what you want, and it will create a slide deck based on your prompts or instructions.
- **Outline Preview**: Review a complete presentation outline with slide titles and descriptions before any slides are built.
- **Dynamic Updates**: Ask Slide Builder to modify or update your slides.
- **Content Customization**: Share your pre-existing text, slides, notes and Slide Builder will use your information in the slides.
@@ -28,7 +28,7 @@
## Getting Started

<div style={{ width:"100%",position:"relative",paddingTop:"56.25%"}}>
<iframe src="https://app.supademo.com/embed/cmha5hvas0mzc6kif0twz83bs" frameBorder="0" title="Invite a user to a project in Relevance AI" allow="clipboard-write; fullscreen" webkitAllowFullscreen="true" mozAllowFullscreen="true" allowFullscreen style={{ position:"absolute",top:0,left:0,width:"100%",height:"100%",border:"3px solid #5E43CE",borderRadius:"10px" }} />
</div>

1. **Navigate to Slides**: In [Relevance Chat](https://chat.relevanceai.com/chat/start), select the Slides feature.
@@ -55,7 +55,7 @@

#### What's Included in a BrandKit

- **Moodboard images**: Visual references that set the overall aesthetic
- **Colour palette**: The colours that represent your brand
- **Font styles**: Typography for headers and body text
- **Logos**: Your brand logos to include in slides
@@ -73,11 +73,11 @@
#### How to Create a BrandKit

<div style={{ width:"100%",position:"relative",paddingTop:"56.25%"}}>
<iframe src="https://app.supademo.com/embed/cmkw0hqjf3lh712hh9tkkmw95" frameBorder="0" title="How to create a BrandKit" allow="clipboard-write; fullscreen" webkitAllowFullscreen="true" mozAllowFullscreen="true" allowFullscreen style={{ position:"absolute",top:0,left:0,width:"100%",height:"100%",border:"3px solid #5E43CE",borderRadius:"10px" }} />
</div>

1. Open Slide Builder and click **New BrandKit**, then select **From scratch** to manually build your BrandKit.
2. Upload moodboard images that represent your brand's visual style using the upload icon.
3. Add brand colours using the plus icon under **Colors**, select your desired colours, and click **Done** to confirm.
4. Select font styles for your slide headers and body content.
5. Add your logo by clicking the logo area or dragging and dropping your company image file.
@@ -97,7 +97,7 @@
#### How to Create a Template

<div style={{ width:"100%",position:"relative",paddingTop:"56.25%"}}>
<iframe src="https://app.supademo.com/embed/cmkw1pk1w3msy12hhlmfgdpe1" frameBorder="0" title="How to create a Template" allow="clipboard-write; fullscreen" webkitAllowFullscreen="true" mozAllowFullscreen="true" allowFullscreen style={{ position:"absolute",top:0,left:0,width:"100%",height:"100%",border:"3px solid #5E43CE",borderRadius:"10px" }} />
</div>

1. Click the **Convert to template** button on a previously created slide deck.
@@ -192,6 +192,16 @@

---

## Programmatic access via MCP

AI assistants connected to Relevance AI via MCP (Claude, Cursor, VS Code, ChatGPT, and others) can access Slide Builder directly — creating presentations, managing BrandKits and Templates, exporting slides, and working with version history through natural language prompts.

<Card title="Slide Builder via MCP" icon="plug" href="/integrations/mcp/programmatic-gtm/slide-builder">
Learn how to create and manage presentations programmatically using the Relevance AI MCP server.
</Card>

---

## What's next?

<CardGroup cols={2}>
31 changes: 31 additions & 0 deletions get-started/core-concepts/programmatic-gtm.mdx
@@ -23,6 +23,9 @@ Connect your AI coding environment to Relevance AI to start building programmati
<Card title="Agent Skills" icon="graduation-cap" href="/integrations/mcp/programmatic-gtm/agent-skills">
Clone the agent skills repository to give your AI coding assistant built-in knowledge of Relevance AI.
</Card>
<Card title="Slide Builder" icon="presentation-screen" href="/integrations/mcp/programmatic-gtm/slide-builder">
Create presentations, manage BrandKits and Templates, and export slides via MCP.
</Card>
</CardGroup>

---
@@ -59,6 +62,9 @@ Once connected, your AI client gets full access to your Relevance AI project. Th
<Card title="Update configurations" icon="gear">
Modify agent instructions, tool settings, and workflow logic.
</Card>
<Card title="Create presentations" icon="presentation-screen">
Build AI-generated slide decks, manage BrandKits and Templates, and export in multiple formats — all from your AI client.
</Card>
</CardGroup>

---
@@ -129,6 +135,31 @@
</AccordionGroup>
</Tab>

<Tab title="Slide Builder">
Create presentations, manage BrandKits and Templates, and export slides — all from your AI client without opening the Chat interface.

**Example prompts:**

<AccordionGroup>
<Accordion title="Create a pitch deck">
*"Create a 10-slide investor pitch deck for our Series A round. Include slides on the problem, solution, market size, business model, traction, team, and ask."*
</Accordion>
<Accordion title="Build branded presentations">
*"Create a BrandKit called 'Acme Corp' using our primary colour #2563EB and Inter as the body font. Then use it to build a 6-slide product overview deck."*
</Accordion>
<Accordion title="Reuse a template">
*"Use the 'Quarterly Business Review' template to create a presentation for Q1 2026. Pull the metrics from this spreadsheet."*
</Accordion>
<Accordion title="Export slides">
*"Export the current presentation as a PPTX file and also as individual PNG images."*
</Accordion>
</AccordionGroup>

<Card title="Slide Builder via MCP" icon="presentation-screen" href="/integrations/mcp/programmatic-gtm/slide-builder">
See the full list of capabilities and example prompts for Slide Builder via MCP.
</Card>
</Tab>

<Tab title="Troubleshoot">
When something isn't working right, use Programmatic GTM to dig into agent behaviour, tool failures, and configuration problems.
