What MCP actually is (no buzzwords)
Anthropic introduced the Model Context Protocol in late 2024 as an open standard. It solves a boring-sounding problem with large consequences: AI assistants need to talk to external tools and data, and without a standard, every AI app has to rebuild every integration from scratch. ChatGPT plugins, custom GPTs, Cursor's @-mentions, Claude's tool use — every one of these is a separate wheel, reinvented.
MCP is the standard. Conceptually it's the same kind of move HTTP made for the web: instead of every browser inventing its own way to fetch a page, every browser speaks one protocol and any server that speaks the same protocol can join. With MCP:
- Servers are small programs that expose data or actions — a database server, a filesystem server, a "search my vault" server. They speak MCP.
- Clients are AI applications — Claude Desktop, Cursor, Windsurf, Zed, Claude Code. They consume MCP.
- Any client can talk to any server. Wire your vault server into Claude Desktop today, swap in Cursor tomorrow, and the same vault tools just work.
That's it. The whole concept is genuinely simple — what's interesting is what it lets you compose. The ecosystem is young (mid-2026), but the list of tools that ship MCP support is growing fast, and the implication for personal data is much bigger than the protocol itself sounds.
Why notes-as-context is a fundamental shift
Think about how you use Claude or ChatGPT with your notes today. The workflow looks something like this: you have a question. You remember you wrote about it. You open Obsidian, search for the relevant note, copy the text, paste it into the chat, then ask. If the answer needs another note, you go fetch that one too. You are the indexer. The AI is blind to anything you don't manually feed it, and your job is to figure out what to feed.
With MCP wired in, the loop changes. You ask the AI a question. The AI calls into your vault through the MCP server, retrieves the notes that look relevant, and uses them to answer — without you ever touching the file picker. If a follow-up needs more context, the AI fetches more. The AI does the routing; you just have a conversation.
This is qualitatively different from older patterns:
- It's not RAG over a cloud-uploaded vault. Your notes don't get ingested into someone's vector database. The MCP server runs on your machine; the AI client only retrieves notes when it actually needs them, in response to a query.
- It's not a chat plugin glued to one app. Once your vault speaks MCP, every MCP-compatible client gets access. Claude Desktop today, a different AI tool next year, same vault interface.
- It's not "the AI memorizes your notes." The AI doesn't preload your vault into context. It searches on demand, retrieves the slice it needs, and answers. Your vault can be 10,000 notes and the model never sees most of them.
The result is something the PKM world has wanted for years and never quite had: notes that are present when you need them and out of the way when you don't, accessible to whatever tool you're using, without you doing the schlep.
What's possible today, with examples
Once your vault is wired via MCP, the queries that suddenly become natural look like this:
- "What did I decide about the Acme deal last month?"
- "Show me everything tagged #meeting from the last two weeks and pull out the action items."
- "What was my reasoning when I picked Postgres over MongoDB? Find the note where I wrote it down."
- "Summarize my notes on the rewrite proposal and flag anything I never followed up on."
- "What have I written about retrieval-augmented generation? Group it by month."
What's new about each of these isn't the question — it's that you can ask without first becoming the indexer. The AI handles the search, the retrieval, and the synthesis in one move, and your vault never leaves your machine to make it happen.
Under the hood, the MCP server exposes a small set of tools that the AI can call. The common surface for a vault server looks like:
- vault_search — text search across all notes, optionally filtered by folder or tag
- vault_semantic_search — embeddings-based search for conceptually related notes (not just keyword matches)
- vault_get_note — fetch the full content of a specific note by path
- vault_list_recent — list notes modified in the last N days
- vault_list_by_tag — list notes carrying a specific tag
When you ask "what did I decide about Acme last month," the AI doesn't see that intent directly — it sees the question, decides search is the right move, calls vault_search with sensible parameters, reads what comes back, and writes its answer using the retrieved notes as context. You watch all this happen in the chat (most clients show tool calls inline), which is honestly part of what makes it feel different from black-box AI: you can see what your vault gave the model.
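On the wire, that tool call is an ordinary JSON-RPC 2.0 message, which is what the MCP spec defines for tool invocation. A sketch of what the client might send for that query (the `id` and the argument values are illustrative; the tool name matches the surface above):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "vault_search",
    "arguments": { "query": "Acme decision" }
  }
}
```

The server replies with a JSON-RPC result whose content is a list of text blocks — the retrieved note excerpts — which the client then feeds to the model as context for its answer.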
The current options for MCP-enabled vaults
As of mid-2026, you have three real paths for getting your vault on MCP. They trade off effort against polish in the way you'd expect.
Roll your own MCP server
Free · developer-only
Maximum flexibility. Anthropic's reference SDKs make the server scaffolding straightforward; the work is in the vault adapter.
- Setup
- Write or fork a small Python / TypeScript program using the official MCP SDK; expose vault search and read tools; run it as a local process
- You get
- A vault adapter shaped exactly the way you want it — your own search heuristics, your own tagging conventions, your own write rules
- You give up
- Your evenings. And ongoing maintenance whenever the spec or your vault structure shifts.
- Best for
- Developers who want to learn the protocol and care about exactly how their vault is exposed
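To make "the work is in the vault adapter" concrete, here is a minimal sketch of the adapter half in plain stdlib Python: a naive full-text search over a folder of markdown notes. The function name, result shape, and scoring are illustrative choices, not part of any spec; the comment at the bottom shows how it would plug into the official Python MCP SDK (an assumption: the `mcp` package with its FastMCP helper), which is not required to run the search itself.

```python
from pathlib import Path


def vault_search(vault: Path, query: str, limit: int = 10) -> list[dict]:
    """Naive full-text search: return notes whose body contains the query.

    A real adapter would add ranking, tag/folder filters, and previews
    centered on the match - this is only the skeleton.
    """
    query_lower = query.lower()
    hits = []
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8", errors="ignore")
        if query_lower in text.lower():
            hits.append({
                "path": str(note.relative_to(vault)),
                "preview": text[:200],
            })
        if len(hits) >= limit:
            break
    return hits


# Exposing this over MCP is roughly one decorator with the official
# Python SDK (assumption: `pip install mcp`; server and tool names are
# illustrative):
#
#   from mcp.server.fastmcp import FastMCP
#   server = FastMCP("obsidian-vault")
#
#   @server.tool()
#   def search(query: str) -> list[dict]:
#       return vault_search(Path("/Users/you/Documents/MyVault"), query)
#
#   server.run()  # speaks MCP over stdio; register the command in your client
```

The search logic is deliberately dumb; the point is that once any function like this exists, the SDK handles the protocol side, and everything interesting lives in how you index and rank your own vault.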
Community MCP servers for Obsidian
Free · CLI-comfortable
A handful of open-source vault servers exist on GitHub and npm. Quality and maintenance vary.
- Setup
- Find a maintained server (search GitHub for "obsidian mcp"), npm install or clone it, point it at your vault path, and register it in your AI client's config
- You get
- A working vault server without writing one yourself; community-driven feature set
- You give up
- Predictable maintenance — some projects are abandoned, some are actively developed, some are one-developer experiments. Vet before you trust.
- Best for
- Comfortable-with-the-terminal users who want to stay free and don't mind doing the integration work
ThoughtMic
Free up to 2k words/wk · $8/mo Pro
Bundled local MCP server. Capture by voice, write to vault, expose the vault to Claude Desktop and Cursor — same tool does all three.
- Setup
- Install ThoughtMic, point it at your vault folder. The MCP server is bundled and registered automatically; restart Claude Desktop and your vault tools appear.
- You get
- Voice-to-vault capture (local Whisper, auto-titles, auto-tags, auto-backlinks) plus a maintained MCP server with the standard tool surface (vault_search, vault_semantic_search, vault_get_note, vault_list_recent)
- You give up
- Free tier caps at 2,000 words/week; Pro is $8/mo or $72/yr. Lifetime is $99 (200-slot cap; first 50 are $49).
- Best for
- PKM users who want both halves of the loop — fast capture and AI-queryable retrieval — from the same tool, with no terminal time. See thoughtmic.com.
How to set it up — Obsidian + Claude Desktop
Whether you go with a community server or ThoughtMic's bundled one, the wiring on the Claude Desktop side is the same. Here's the practical walkthrough.
1. Install your MCP server
For a community server, follow that project's README — usually a clone, an npm install, and a build step. For ThoughtMic, install the app and point it at your vault; it handles the next two steps for you.
2. Register the server in Claude Desktop's config
Claude Desktop reads MCP server definitions from a JSON config file. On macOS it lives at:
~/Library/Application Support/Claude/claude_desktop_config.json
On Windows it's at %APPDATA%\Claude\claude_desktop_config.json. Open the file in a text editor (create it if it doesn't exist) and add an entry that looks like this:
```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "/path/to/your/mcp-server",
      "args": ["--vault", "/Users/you/Documents/MyVault"]
    }
  }
}
```
The exact command and args values depend on which server you picked — community READMEs spell it out. ThoughtMic writes this entry for you and fills in the right paths.
3. Restart Claude Desktop
MCP servers are loaded at startup. Quit Claude Desktop fully (not just close the window) and reopen it. If the config has a syntax error, Claude Desktop will surface it in its developer console — worth checking if anything looks off.
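A quick way to catch the syntax-error case before restarting is to run the config through a JSON parser. One option is Python's stdlib json.tool, which reports the line and column of any parse error (macOS path shown; substitute the Windows path from step 2 as needed):

```shell
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
# json.tool parses the file; on a syntax error it prints the line/column
python3 -m json.tool "$CONFIG" > /dev/null && echo "config OK" || echo "config invalid or missing"
```

If this prints "config OK" but the tools still don't appear after a full restart, the problem is usually the server command path rather than the JSON.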
4. Verify the wiring
Open a new chat in Claude Desktop and ask:
"What MCP tools do you have access to right now?"
You should see your vault tools listed by name — vault_search, vault_get_note, and so on. If they show up, you're done. Try a real query: "search my vault for anything I've written about Postgres." Watch Claude make the tool call and answer with the results.
5. Same wiring for Cursor
Cursor uses the same MCP server format with a slightly different config location (~/.cursor/mcp.json). The server itself is unchanged — that's the whole point of the protocol. Once you have the server running, every MCP-aware client can read from it.
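The portability claim is concrete: the Cursor entry can reuse the same command and arguments as the Claude Desktop config, just in the other file. A sketch of ~/.cursor/mcp.json, assuming Cursor's mcpServers key and the same illustrative paths as step 2:

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "/path/to/your/mcp-server",
      "args": ["--vault", "/Users/you/Documents/MyVault"]
    }
  }
}
```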
What MCP doesn't do (yet)
MCP is genuinely promising, and it's also early. A few things to be honest about before you over-invest:
It doesn't write to your vault by default. Most community vault servers ship read-only — the AI can search and retrieve, not silently rewrite. This is the right default for trust, but it means "Claude, fix the typo in my note from yesterday" still requires you to do the fix (or to opt into a server that exposes write tools, which a few do). Watch this space.
It doesn't replace actually reading your notes. The AI surfaces what it finds, but the depth of any answer is bounded by the depth of what's in the notes. If your vault is half-finished thoughts and #inbox dumps, the AI gets half-finished thoughts back. MCP rewards a curated vault more than a sprawling one — the same way good search has always rewarded good metadata.
It doesn't make AI use of your notes private. This is the critical caveat. The MCP server runs locally and reads your vault locally — true. But once Claude Desktop pulls a note via that server, the note's content is included in the prompt the client sends to Anthropic's model. So your vault at rest stays on your machine, but the specific notes the AI retrieves during a query do reach the model provider as part of that conversation. The privacy claim is "your vault stays on your machine," not "your notes never reach a model." If you need the second, you'd need a local model on the client side too — possible but a separate decision from MCP.
The mobile story is weak. MCP servers are local processes. Mobile OSes don't run arbitrary background processes the way macOS does, and the major mobile AI clients haven't shipped MCP support. For now, MCP is a desktop pattern. Cloud-hosted MCP servers (where the server runs in your cloud account, not on your laptop) are starting to emerge and will probably be the mobile path eventually — but that's a different privacy posture again.
None of this means MCP isn't worth it. It means you should adopt it eyes-open: it's powerful infrastructure with real tradeoffs, not a magic-words feature that solves PKM by itself.