# OpenClaw Now Has a Memory: Announcing the minns Integration
Your OpenClaw agents can now remember, learn, and build on past experience. We are shipping `@minns/openclaw-minns` (v0.7.2), a plug-and-play integration that connects OpenClaw to minns.ai and gives every agent persistent memory, learned strategies, and semantic search out of the box.
```sh
npm install @minns/openclaw-minns
```
## Why This Matters
OpenClaw gives you a powerful agent runtime with hooks, tools, and a plugin system. But every time a session ends, the agent starts fresh. It forgets what worked, repeats what failed, and has no way to build on prior interactions.
The minns integration fixes that. Once connected, your agent automatically:
- Recalls relevant memories before each turn
- Captures interactions as structured events after each turn
- Learns strategies from patterns of success and failure
- Extracts factual claims from unstructured text
- Builds a semantic graph that grows smarter over time
No custom retrieval logic. No vector database setup. No extra LLM calls.
## How It Works
The integration runs in two modes depending on your setup:
**MCP Server Mode** runs as a standalone subprocess. OpenClaw communicates with it over stdio using the Model Context Protocol. This gives you process isolation and works with any MCP-compatible runtime.

**Native Plugin Mode** embeds directly into the OpenClaw process. It registers tools, hooks, slash commands, and CLI subcommands with zero subprocess overhead.
Both modes expose the same 11 tools and the same auto-recall/auto-capture pipeline.
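One way to picture how both modes can stay in sync is a single shared tool registry that each mode adapts to its own surface. The sketch below is illustrative only: `ToolDef`, `registerTool`, and the stubbed `health_check` handler are assumptions for this example, not the actual `@minns/openclaw-minns` internals.

```typescript
// Hypothetical shared registry: both the MCP server and the native plugin
// could read from the same tool definitions, keeping the 11 tools identical.
type ToolDef = {
  name: string;
  description: string;
  run: (args: Record<string, unknown>) => Promise<unknown>;
};

const registry = new Map<string, ToolDef>();

function registerTool(tool: ToolDef): void {
  registry.set(tool.name, tool);
}

registerTool({
  name: "health_check",
  description: "Verify minns.ai connectivity",
  run: async () => ({ ok: true }), // stubbed; the real tool would call minns.ai
});

// MCP mode would serialize these defs into MCP tool listings over stdio;
// native mode would hand the same defs to OpenClaw's plugin API.
function listToolNames(): string[] {
  return [...registry.keys()];
}
```

The point of the pattern is that adding a twelfth tool in one place would surface it in both modes automatically.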
### The Auto-Recall and Auto-Capture Loop
This is the core of the integration. It turns every agent conversation into a learning cycle:
```
User sends message
        |
        v
[before_agent_start hook]
    |-- Search memories by semantic similarity to user prompt
    |-- Inject top-K memories into agent context
    |-- Inject intent sidecar instruction into system prompt
        |
        v
Agent reasons and responds
        |
        v
[agent_end hook]
    |-- Parse agent response for structured intent (locally, no LLM call)
    |-- Classify intent: capture, recall, reflect, goal, query, observe, learn
    |-- Store as typed event in minns.ai (Context, Cognitive, Observation, etc.)
    |-- Extract claims from response
    |-- Track goal updates
        |
        v
minns.ai consolidates into memories, strategies, and claims
        |
        v
Next turn starts with richer context
```
The agent does not need to do anything special. Auto-recall and auto-capture happen transparently through OpenClaw's hook system.
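The recall half of the loop reduces to a familiar pattern: score stored memories against the prompt and keep the top K. The sketch below mocks both embeddings and memories; in the real integration both come from minns.ai, and the type and function names here are assumptions for illustration.

```typescript
// A memory with a precomputed embedding vector (mocked for this sketch).
type Memory = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the K memories most similar to the prompt embedding,
// ready to inject into the agent's context.
function recallTopK(prompt: number[], memories: Memory[], k: number): Memory[] {
  return [...memories]
    .sort((a, b) => cosine(prompt, b.embedding) - cosine(prompt, a.embedding))
    .slice(0, k);
}
```

The `topK` config option described later caps `k` in this selection, which bounds how much of the context window auto-recall can consume per turn.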
## Full Event Lifecycle
Beyond the auto loop, the integration captures the complete agent lifecycle as events:
| Hook | Event Type | What It Captures |
|---|---|---|
| `message_received` | Communication (inbound) | User messages arriving |
| `message_sending` | Communication (outbound) | Agent responses going out |
| `message_sent` | Observation | Message delivery confirmation |
| `before_tool_call` | Observation | Tool invocation with parameters |
| `after_tool_call` | Action | Tool execution results |
| `before_agent_start` | Auto-recall | Memory injection |
| `agent_end` | Auto-capture | Response parsing and storage |
All event hooks are non-blocking. They use fire-and-forget delivery so the agent is never slowed down.
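Fire-and-forget delivery can be sketched in a few lines: the hook hands the event to a sender and returns immediately, and failures are only logged. The `Sender` type and `captureEvent` name below are assumptions for this example; the real transport is not shown.

```typescript
type AgentEvent = { type: string; payload: unknown };
type Sender = (event: AgentEvent) => Promise<void>;

// Non-blocking capture: the promise is deliberately not awaited, so a slow
// or failing minns.ai call never delays the agent's next turn.
function captureEvent(event: AgentEvent, send: Sender): void {
  void send(event).catch((err) => {
    // Errors are logged, never propagated back into the agent loop.
    console.error("minns capture failed:", err);
  });
}
```

The trade-off is the usual one for fire-and-forget: an event can be silently dropped if delivery fails, which is acceptable here because memory capture is best-effort.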
## The 11 Tools
Every tool is available both as an MCP tool and as a native OpenClaw tool:
### Memory
- `memory_search`: semantic search over agent memories
- `memory_capture`: store a new memory with embeddings and optional goals
- `memory_memories`: list memories sorted by strength
- `memory_strategies`: list strategies sorted by quality score
### Strategy
- `strategy_similar`: find strategies matching a goal, tool, or result signature
- `strategy_suggest_next_action`: get action suggestions for the current context
### Semantic
- `claims_search`: search extracted claims via vector similarity
### Intent Parsing
- `intent_instruction`: get the sidecar prompt instruction for LLM intent extraction
- `intent_parse`: parse LLM output into structured intent + clean response
### System
- `health_check`: verify minns.ai connectivity
- `stats_get`: system-wide statistics (events processed, nodes created, memories formed)
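The two listing tools differ only in their sort key, which a short sketch makes concrete. The record shapes and field names below are assumptions for illustration, not the API's actual schema.

```typescript
// Hypothetical record shapes behind the listing tools.
type MemoryRecord = { summary: string; strength: number };
type StrategyRecord = { name: string; quality: number };

// memory_memories: strongest memories first.
function listMemories(records: MemoryRecord[]): MemoryRecord[] {
  return [...records].sort((a, b) => b.strength - a.strength);
}

// memory_strategies: highest-quality strategies first.
function listStrategies(records: StrategyRecord[]): StrategyRecord[] {
  return [...records].sort((a, b) => b.quality - a.quality);
}
```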
## Setup
### MCP Server Mode
Add this to your OpenClaw config:
```json
{
  "mcpServers": {
    "minns": {
      "command": "openclaw-minns-mcp",
      "env": {
        "MINNS_API_KEY": "your-api-key",
        "MINNS_DEBUG": "false"
      }
    }
  }
}
```
That is it. OpenClaw will spawn the MCP server and your agent gets all 11 tools automatically.
### Native Plugin Mode
Register the plugin in your OpenClaw config with auto-recall and auto-capture settings:
```json
{
  "env": {
    "MINNS_API_KEY": "your-api-key"
  },
  "autoRecall": true,
  "autoCapture": true,
  "topK": 5
}
```
| Option | Default | Description |
|---|---|---|
| `autoRecall` | `true` | Inject relevant memories before each turn |
| `autoCapture` | `true` | Store agent responses as events after each turn |
| `topK` | `5` | Max memories to inject per turn |
### Slash Commands
Once the plugin is loaded, you get three slash commands:
- `/minns ping`: Test connection to minns.ai
- `/minns health`: Full health check with version and uptime
- `/minns status`: Show current configuration
### CLI Subcommands
```sh
openclaw minns search "user preferences" --limit 10
openclaw minns stats
openclaw minns health
```
## Local Intent Parsing
One of the most useful features is the intent sidecar. Instead of making an extra LLM call to classify what the agent just said, the integration injects a small instruction into the system prompt that asks the LLM to output a structured intent alongside its normal response.
The local parser then extracts that intent without any network call. The default intent spec covers 8 categories:
- `capture`: extractable claims from context
- `recall`: memory retrieval
- `reflect`: reasoning and analysis
- `goal`: updating agent objectives
- `query`: answering direct questions
- `converse`: casual conversation
- `observe`: noting environmental signals
- `learn`: updating understanding
Each intent determines how the response gets stored in minns.ai. A "capture" intent becomes a Context event with semantic indexing enabled. A "reflect" intent becomes a Cognitive event. A "query" becomes a Communication event. The right event type triggers the right downstream processing.
You can also register custom intent specs per agent type using the Intent Registry.
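To make the local parse step concrete, here is a minimal sketch under one big assumption: that the sidecar instruction asks the LLM to end its reply with a marker line such as `INTENT: capture`. That marker format, the function names, and every event mapping other than the three documented above (capture to Context, reflect to Cognitive, query to Communication) are illustrative guesses, not the package's actual wire format.

```typescript
const INTENTS = [
  "capture", "recall", "reflect", "goal",
  "query", "converse", "observe", "learn",
] as const;
type Intent = (typeof INTENTS)[number];

// Split a trailing "INTENT: <name>" marker off the response, locally,
// with no network call.
function parseIntent(raw: string): { intent: Intent; response: string } {
  const match = raw.match(/\nINTENT:\s*(\w+)\s*$/);
  const candidate = match?.[1];
  if (match && candidate && (INTENTS as readonly string[]).includes(candidate)) {
    return {
      intent: candidate as Intent,
      response: raw.slice(0, match.index).trimEnd(),
    };
  }
  return { intent: "converse", response: raw }; // no marker: plain conversation
}

// Intent selects the stored event type. Only capture/reflect/query follow
// the documented mapping; the rest of this table is a guess.
const EVENT_TYPE: Record<Intent, string> = {
  capture: "Context",
  recall: "Cognitive",
  reflect: "Cognitive",
  goal: "Cognitive",
  query: "Communication",
  converse: "Communication",
  observe: "Observation",
  learn: "Cognitive",
};
```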
## Fallback Skill for Non-MCP Runtimes
For agents that do not support MCP natively, the package includes a fallback skill at `skills/minns-memory/SKILL.md`. This is a policy document that instructs the LLM to:
- Call `memory.search` before answering to check for relevant context
- After answering, decide if the interaction produced durable facts worth storing
- If yes, call `memory.capture` with a summary
- Never store secrets, passwords, or PII unless explicitly asked
This gives you basic memory behavior even without the full hook system.
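The "never store secrets or PII" rule in the skill relies on the LLM's judgment, but the shape of the check can be sketched with a few obvious patterns. The blocklist below is illustrative only; real coverage of secrets and PII needs far more than a handful of regexes.

```typescript
// A few obvious red flags a capture guard might screen for (illustrative).
const BLOCKLIST = [
  /\bpassword\b/i,
  /\bapi[_-]?key\b/i,
  /\b\d{3}-\d{2}-\d{4}\b/, // US SSN-shaped numbers
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
];

// Returns false if the text matches any blocklisted pattern.
function safeToCapture(text: string): boolean {
  return !BLOCKLIST.some((pattern) => pattern.test(text));
}
```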
## What Gets Learned
The integration does not just store raw logs. minns.ai processes events into three tiers of knowledge:
**Episodic memories** are individual interactions. Every auto-captured event becomes an episode with a summary, takeaway, and causal note explaining what happened and why.

**Semantic memories** emerge when 3+ similar episodes are detected. minns.ai distills them into generalized patterns: "When users ask about X, approach Y works best because Z."

**Schema memories** form from 3+ semantic memories. These are high-level principles your agent has learned from experience.
On top of that, strategies are extracted as reusable playbooks with success/failure tracking, branching logic, and counterfactual analysis. Your agent can query these before acting to avoid known failure modes.
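The episodic-to-semantic step can be pictured as a thresholded grouping: once 3+ similar episodes accumulate, they distill into one generalized pattern. The sketch below reduces "similar" to an exact topic key for clarity; minns.ai works over embeddings, and all names here are illustrative.

```typescript
type Episode = { topic: string; takeaway: string };

// Group episodes by topic and emit one semantic memory per topic that
// crosses the threshold (3+ similar episodes, per the tiers above).
function consolidate(episodes: Episode[], threshold = 3): string[] {
  const byTopic = new Map<string, Episode[]>();
  for (const ep of episodes) {
    const group = byTopic.get(ep.topic) ?? [];
    group.push(ep);
    byTopic.set(ep.topic, group);
  }
  const semantic: string[] = [];
  for (const [topic, group] of byTopic) {
    if (group.length >= threshold) {
      // A real distillation would summarize across the group; this sketch
      // just keeps the most recent takeaway as the "pattern".
      semantic.push(`Pattern for ${topic}: ${group[group.length - 1].takeaway}`);
    }
  }
  return semantic;
}
```

The same threshold rule applied one level up (3+ semantic memories forming a schema) would follow the identical shape.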
## Get Started
Install the package and add your API key:
```sh
npm install @minns/openclaw-minns
```
Add the MCP server config, restart OpenClaw, and your agent starts learning immediately. Every conversation makes it smarter. Every mistake gets remembered. Every successful pattern gets reinforced.
Check out the API reference for the full tool documentation.