Your AI agent forgets everything between sessions.

ContextPulse remembers. It runs quietly on your machine, captures what you see, say, and type, and gives that context back to your AI agent the moment it asks. Local-only. Sensitive data redacted before it ever touches disk.

Works with Claude Code, Claude Desktop, Cursor, Windsurf, Cline, and any MCP client.

35 MCP tools · <1% CPU
How it works

One daemon. Five sense modules. 35 MCP tools.

Runs in your system tray. Reports to MCP. Stays on your machine.

  • Sight: Captures your screen with full OCR. Tracks clipboard and active windows. Per-monitor adaptive capture. (11 free / 2 Pro)
  • Voice: Hold-to-dictate hotkey with local Whisper transcription. Vocabulary that learns your project names and jargon. (3 free)
  • Touch: Detects typing bursts, mouse activity, and corrections. Feeds dictation errors back to Voice for self-improvement. (3 free)
  • Memory: Three-tier hot/warm/cold storage with FTS5 keyword search. Quota-capped to stay light. (5 free / 2 Pro)
  • Project: Routes context to the right project automatically based on working directory, window title, and content. (5 free)
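The Memory module's FTS5 keyword search can be pictured with a minimal sketch. The table and column names below are illustrative, not ContextPulse's actual schema; it only shows how SQLite's FTS5 MATCH query ranks captured snippets:

```python
import sqlite3

# In-memory database standing in for the local store.
con = sqlite3.connect(":memory:")

# FTS5 virtual table for keyword search over captured snippets.
con.execute("CREATE VIRTUAL TABLE snippets USING fts5(source, text)")
con.executemany(
    "INSERT INTO snippets VALUES (?, ?)",
    [
        ("sight", "OCR: deploy pipeline failed on staging"),
        ("voice", "remind me to rebase the auth branch"),
        ("touch", "git rebase -i main"),
    ],
)

# MATCH runs the FTS5 keyword query; bm25() ranks results by relevance.
rows = con.execute(
    "SELECT source, text FROM snippets WHERE snippets MATCH ? "
    "ORDER BY bm25(snippets)",
    ("rebase",),
).fetchall()
print(rows)
```

Because FTS5 ships with SQLite itself, this style of keyword search needs no external search service, which is part of how the daemon stays light.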
What's different

Five things you won't find together anywhere else.

  1. MCP-native infrastructure, not an app

    Other tools store context for you to query manually. ContextPulse exposes 35 tools through the Model Context Protocol so any AI agent can ask for context directly, the same way it calls any other MCP server.

  2. Pre-storage redaction

    API keys, JWTs, PEM keys, credit card numbers, SSNs, and connection strings are detected and masked before anything is written to disk. Most context-capture tools store raw text. We never do.

  3. Cross-modal continuous learning

    Voice transcription gets corrected by what you actually type. Typing vocabulary biases speech recognition. Screen OCR validates voice accuracy. The longer you use it, the more accurate it gets for you specifically.

  4. Quad-modal capture in a single daemon

    Screen, voice, keyboard, and pointer captured by one process with millisecond temporal alignment. Not four separate apps stitched together.

  5. Open-source under AGPL-3.0

    Auditable end-to-end. Self-hostable. No vendor lock-in. Modifications must stay open if you redistribute the code or host it as a network service.
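The pre-storage redaction in point 2 can be sketched in a few lines. The patterns below are illustrative stand-ins, not ContextPulse's actual detector set; the point is that masking happens on the text itself before any write:

```python
import re

# Illustrative detectors only; a real detector set is broader and stricter.
PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "jwt": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "pem": re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?"
        r"-----END [A-Z ]*PRIVATE KEY-----"
    ),
}

def redact(text: str) -> str:
    """Mask sensitive spans so only the redacted form ever reaches disk."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

sample = "token sk-abcdefghijklmnopqrstuv and SSN 123-45-6789"
clean = redact(sample)
print(clean)  # → token [REDACTED:api_key] and SSN [REDACTED:ssn]
```

The ordering matters: redaction runs on the capture path, so the storage layer only ever sees the masked string.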

Pricing

Free forever. Paid when you need more. Cloud for what's coming next.

Community
$0 forever

Open source, local-only, AGPL-3.0. Everything you need to give your AI agent real context.

  • All capture (Sight, Voice, Touch)
  • Local SQLite + FTS5 search
  • 27 free MCP tools
  • Project detection + routing
  • Memory CRUD (store, recall, list, forget)
  • Pre-storage redaction
Pro
$49/yr or $249 lifetime
30-day free trial. No credit card.

Everything in Community, plus four advanced search tools. Still 100% local. Your data never touches the cloud.

  • memory_search: hybrid keyword + semantic
  • memory_semantic_search: pure vector search
  • search_all_events: cross-modal full-text
  • get_event_timeline: temporal reconstruction
  • Priority support

Get notified when Gumroad goes live. The trial works today: install Community, then call any Pro tool.
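On the wire, calling a Pro tool is an ordinary MCP tools/call request (JSON-RPC 2.0, per the Model Context Protocol spec). The tool name comes from the list above; the "query" and "limit" argument names are assumptions for illustration:

```python
import json

# JSON-RPC 2.0 request an MCP client would send to invoke memory_search.
# Argument names ("query", "limit") are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "auth refactor decisions", "limit": 5},
    },
}
print(json.dumps(request, indent=2))
```

Any MCP client that can call a tool can make this request, which is why no ContextPulse-specific SDK is needed.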

Cloud
Coming

Optional cloud sync for cross-device context and AI-native recall. Encrypted in transit and at rest. Your local capture stays the source of truth.

  • Cross-device memory (laptop and desktop share context)
  • Knowledge graph (entities, facts, attributions)
  • Structured recall via recall_context
  • Sunday review digest

No spam. We'll email when Cloud is ready to test.