
Mnemonic


How It All Started

I started using Claude Code early and ran into the usual problem: it kept forgetting things. Instructions I gave in one session were gone in the next. I learned about memory files, so I loaded up CLAUDE.md with references to agent behavior guidelines, coding standards, testing patterns, and documentation rules.

That worked fine on small projects. On larger ones, all those guidelines ate most of the context before I'd written a line of code. I started hitting the lost-in-the-middle problem — too much irrelevant material packed in alongside the actual work. I trimmed the memory files down to essentials, which helped, but it didn't fix the root issue. If I was writing YAML for a CI pipeline, my Go coding guidelines were still sitting there consuming context.

Then I found MCP and started using the Knowledge Graph Memory Server to hold cross-project knowledge. Instead of loading everything upfront, I instructed Claude to query the server when it needed a pattern. Only relevant context got pulled in. That worked much better.

Long sessions still degraded, though. The lost-in-the-middle problem came back as sessions grew. I'd run /compact proactively and it helped briefly, but eventually the context was so large there was nothing to do but clear it and start over. Around that time I discovered subagents — dedicated agents for specific tasks, each with its own focused system prompt. A Go agent. A BATS test agent. An API architect. Spinning those up mid-session and letting them work in their own context window solved the degradation problem cleanly.

The agent definition files got large quickly. With code examples and guidelines baked in, each markdown file ballooned. That's when I had the idea to put examples in the memory server instead and let subagents query for them as needed. I tried it. It worked well.

At that point I was also thinking about my team. I'd introduced Go to the team at work, and I wanted Claude to produce consistent results regardless of who was using it — not just for me. The Knowledge Graph Memory Server isn't shareable. Every team member would need to run their own instance, load their own documents, and keep it current. That doesn't scale.

I went looking for a memory server built for teams. I found Cipher (campfirein/cipher) first. It looked promising, built on a vector database and Postgres, but I found it difficult to configure. Then I found Cognee (topoteretes/cognee): better documentation, easier to run, reasonable APIs for loading content. That became the foundation for my experiments.

The concepts proved out. Shared patterns work. Specialist agents work. A shared knowledge graph works. Mnemonic is what grew out of those experiments — a practical tool for the workflow I'd been building toward. In fact, the Claude Code Setup project is that concept, somewhat refined, and I'm using it to build Mnemonic.

Mnemonic also uses other projects from this portfolio: Heartbeat for health monitoring, OtelX for observability, and Docker Golang Tooling for containerization and CI/CD builds (though Mnemonic only contributed to the GitHub Actions work on that one). All were built and improved using the same Claude Code Setup. The projects feed into each other.

What It Is

Mnemonic is a small server for teams using Claude Code. It holds two things: a shared knowledge base of team patterns, and a library of agents, skills, and commands that every developer on the team can use.

The Problem It Solves

When a team adopts Claude Code, each person ends up building their own setup. One engineer has a well-tuned Go agent; another never bothered. Someone discovers a useful pattern and it lives in their config forever, invisible to everyone else. New people start from scratch. The longer this goes on, the more the team diverges.

Mnemonic gives the team one place to store what they know and one source of truth for tooling.

How It Works

Claude Code reads from Mnemonic through MCP. Once Mnemonic is configured as an MCP server, Claude Code can search the knowledge base or fetch an agent definition directly in conversation. All MCP access is read-only.

```mermaid
graph LR
    User([You]) --> CC[Claude Code]
    CC -->|searches for knowledge| MCP[Mnemonic MCP]
    MCP --> KB[(Knowledge Base)]
    CC -->|fetches tooling| MCP
    MCP --> Tools[(Agents & Skills)]
```
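Wiring this up could be as simple as a project-scoped `.mcp.json` entry. This is a sketch, not Mnemonic's documented configuration: the server name, transport, host, port, and path are all placeholder assumptions.

```json
{
  "mcpServers": {
    "mnemonic": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

With an entry like this in place, Claude Code discovers the server's tools automatically, so searching the knowledge base becomes an ordinary tool call in conversation.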

Content gets into Mnemonic through a REST API. An admin, a CI pipeline, or a script POSTs a pattern or tool definition. Mnemonic stores it, generates a semantic embedding in the background, and it becomes searchable within moments.

```mermaid
graph LR
    Admin([Admin / CI]) -->|POST patterns, agents, skills| REST[Mnemonic REST API]
    REST --> KB[(Knowledge Base)]
    REST --> Tools[(Agents & Skills)]
```
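As a rough illustration, a pattern POSTed to the write path might carry a body shaped like the following. Every field name here is invented for the sketch; Mnemonic's actual API schema may look quite different.

```json
{
  "name": "go-error-wrapping",
  "kind": "pattern",
  "tags": ["go", "errors"],
  "content": "Wrap errors with fmt.Errorf and %w so callers can use errors.Is and errors.As."
}
```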

Tooling syncs to each developer's machine with a /mnemonic-sync skill. It checks what has changed on the server and downloads only what is new or updated. Agents land in ~/.claude/agents/. Skills land in ~/.claude/skills/. Commands land in ~/.claude/commands/. After a sync, everyone on the team is running the same versions.
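The only-what-changed check at the heart of that sync can be sketched as a diff between a server manifest and local state. The file names, version strings, and manifest shape below are hypothetical, not Mnemonic's real API.

```python
# Hypothetical server manifest: tooling file -> current version.
remote = {"go-agent.md": "v3", "bats-agent.md": "v1", "api-architect.md": "v2"}

# Hypothetical local state recorded by the last sync.
local = {"go-agent.md": "v2", "bats-agent.md": "v1"}

# Download only entries that are new locally or whose version changed.
to_download = sorted(name for name, ver in remote.items() if local.get(name) != ver)
print(to_download)  # → ['api-architect.md', 'go-agent.md']
```

A real sync would then write each downloaded file into the matching `~/.claude/agents/`, `~/.claude/skills/`, or `~/.claude/commands/` directory and record the new versions.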

| What | Where it lives | Who owns it |
| --- | --- | --- |
| Patterns (team knowledge) | Mnemonic | Your team, via the REST API |
| Agent definitions | Mnemonic | Your team, via the REST API |
| Skill definitions | Mnemonic | Your team, via the REST API |
| Command definitions | Mnemonic | Your team, via the REST API |
| Local copies of tooling | `~/.claude/` on each machine | Synced from Mnemonic |
| Workflow decisions | You | Always |
| AI inference | Claude Code / Anthropic | Mnemonic never does inference |

Commands are now skills

Anthropic has merged custom slash commands into skills, so separate command definitions are no longer needed. See Extend Claude with skills for more info.

Status

Mnemonic is in active development. The core concepts — shared knowledge base, tooling library, MCP read path, REST write path, sync — are defined and stable. Implementation is underway. I will keep this page current as things move forward.