New Server: Screenpipe - Screen & audio context for AI #3247

@louis030195

Description

Server Information

Name: Screenpipe MCP Server
Repository: https://github.com/mediar-ai/screenpipe
Description: Provides AI with context from 24/7 screen recordings and audio transcriptions

What it does

The Screenpipe MCP server enables AI assistants to:

  1. Search screen history - Find text from anything you've seen on screen via OCR
  2. Search audio transcriptions - Query meetings, calls, and spoken content
  3. Get recent context - Retrieve what's been on screen recently
  4. Semantic search - Natural language queries over your digital activity

Why it's useful

This fills a unique gap in the MCP ecosystem - giving AI assistants awareness of the user's visual and audio context. Use cases:

  • "What was that error message I saw?"
  • "Summarize my last meeting"
  • "Find the documentation I was reading about X"
  • "What was discussed about feature Y?"

Technical Details

  • Language: TypeScript/Rust
  • Platform: macOS, Windows, Linux
  • Privacy: 100% local, no cloud required
  • Stars: 16,500+ on GitHub
  • License: MIT

MCP Features

  • search_screen - Full-text + semantic search over OCR content
  • search_audio - Search audio transcriptions
  • get_recent_context - Get recent screen/audio activity
  • get_frame - Retrieve a specific screenshot
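
These tools follow the standard MCP tool-call interface, so any MCP-capable client can use them. Below is a minimal sketch using the official TypeScript MCP SDK (@modelcontextprotocol/sdk), assuming the server can be launched over stdio via the npm package shown in the Installation section; the search_screen argument names (query, limit) are illustrative assumptions, not documented here.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the npm package can be launched directly as a stdio MCP server.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@screenpipe/mcp"],
});

const client = new Client({ name: "screenpipe-demo", version: "0.1.0" });
await client.connect(transport);

// Full-text / semantic search over OCR'd screen content.
// The argument names below are assumptions for illustration only.
const result = await client.callTool({
  name: "search_screen",
  arguments: { query: "connection refused", limit: 5 },
});
console.log(result);

await client.close();
```

In practice most users would not call these tools by hand; an MCP-capable assistant would invoke them on the user's behalf during a conversation.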

Installation

npm install @screenpipe/mcp

Or use the Screenpipe desktop app, which includes the MCP server.
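
As a quick smoke test after installing, a client can connect and list the advertised tools. This is a sketch under the same assumption as above, namely that the npm package is runnable as a stdio MCP server via npx; if you use the desktop app instead, point the transport at however it exposes the server.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the installed npm package starts the MCP server over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@screenpipe/mcp"],
});
const client = new Client({ name: "screenpipe-smoke-test", version: "0.1.0" });

await client.connect(transport);

// Should include the four tools listed under "MCP Features".
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```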
