An embodied AI agent for the Reachy Mini desktop robot, powered by Claude Agent SDK and MCP.
"Your ghost, my shell." - Inspired by Ghost in the Shell
Status: Phase 1 Complete | Architecture Docs | Getting Started Tutorial
Claude in the Shell transforms your Reachy Mini into an autonomous AI assistant that can:
- Respond to voice commands with "Hey Reachy" wake word
- Control its head, body, and antennas expressively
- See through its camera and respond to visual cues
- Remember context across conversations
- Connect to external services via MCP (Home Assistant, Calendar, GitHub, etc.)
```mermaid
flowchart TB
    subgraph Cloud["Cloud"]
        CLAUDE[("Claude API")]
    end
    subgraph Agent["Claude in the Shell"]
        subgraph SDK["Claude Agent SDK"]
            SESSION["Session Manager"]
            MCPCLIENT["MCP Client"]
            HOOKS["PreToolUse Hooks"]
        end
        subgraph Core["Agent Core"]
            LOOP["ReachyAgentLoop"]
            PERM["4-Tier Permissions"]
        end
        subgraph Memory["Memory System"]
            MGR["MemoryManager"]
            CHROMA[("ChromaDB<br/>Vector Store")]
            SQLITE[("SQLite<br/>Profiles")]
            EMBED["Embeddings<br/>(MiniLM)"]
        end
        subgraph Interface["Interfaces"]
            CLI["CLI REPL"]
            WEB["Web Dashboard"]
        end
        subgraph MCP["MCP Servers (stdio)"]
            REACHY["Reachy MCP (23)"]
            MEMMCP["Memory MCP (4)"]
            GITHUB["GitHub MCP (50+)"]
        end
    end
    subgraph Hardware["Reachy Mini"]
        DAEMON["Daemon API"]
        ROBOT["Head • Body • Antennas"]
    end
    CLAUDE <-->|HTTPS| SESSION
    SESSION --> LOOP
    LOOP --> HOOKS
    HOOKS --> PERM
    PERM --> MCPCLIENT
    MCPCLIENT -->|stdio| MCP
    LOOP <-->|context| MGR
    MGR --> CHROMA
    MGR --> SQLITE
    MGR --> EMBED
    MEMMCP --> MGR
    CLI --> LOOP
    WEB --> LOOP
    REACHY -->|HTTP| DAEMON
    DAEMON --> ROBOT
    style SDK fill:#7c4dff,color:#fff
    style Memory fill:#e1bee7
    style CLAUDE fill:#f9a825
    style DAEMON fill:#4caf50
```
See Architecture Documentation for detailed diagrams, and the ReachyAgentLoop Deep Dive to understand how the Perceive → Think → Act cycle works.
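As a rough mental model, the Perceive → Think → Act cycle can be sketched as a plain Python loop. This is an illustrative stand-in, not the real `ReachyAgentLoop` API: all method names here are assumptions, and the "think" step stands in for the Claude API call made through the Agent SDK.

```python
from dataclasses import dataclass, field


@dataclass
class SimpleAgentLoop:
    """Illustrative Perceive -> Think -> Act cycle (not the real ReachyAgentLoop)."""
    memory: list = field(default_factory=list)

    def perceive(self, event: str) -> dict:
        # Gather input (wake word, CLI text, camera frame) plus recent context.
        return {"event": event, "context": self.memory[-3:]}

    def think(self, observation: dict) -> str:
        # In the real agent this is a Claude API call via the Agent SDK;
        # here we just produce a canned plan.
        return f"respond to {observation['event']!r}"

    def act(self, plan: str) -> str:
        # MCP tool calls would run here, gated by the permission hooks.
        self.memory.append(plan)
        return f"executed: {plan}"

    def step(self, event: str) -> str:
        return self.act(self.think(self.perceive(event)))


loop = SimpleAgentLoop()
print(loop.step("hey reachy"))
```

Each `step` closes one full cycle, with the plan appended to memory so later perceptions carry context.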
- Python 3.10+
- uv (recommended) or pip
- Reachy Mini with Raspberry Pi 4, or MuJoCo for simulation
```bash
# Clone the repository
git clone https://github.com/jawhnycooke/claude-in-the-shell.git
cd claude-in-the-shell

# Create virtual environment
uv venv && source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Copy environment template
cp .env.example .env

# Edit .env with your ANTHROPIC_API_KEY
```

Test the full stack with MuJoCo physics simulation:
```bash
# Terminal 1: Start simulation (macOS)
/opt/homebrew/bin/mjpython -m reachy_mini.daemon.app.main --sim --scene minimal --fastapi-port 8765

# Terminal 2: Run validation
python scripts/validate_mcp_e2e.py
```

See the Getting Started Tutorial for complete setup instructions.
On the Raspberry Pi with Reachy daemon running:
```bash
# Interactive agent (defaults to :8000 for production)
python -m reachy_agent run

# Rich terminal REPL (defaults to :8765 for simulation)
python -m reachy_agent repl

# Web dashboard (defaults to :8765 for simulation)
python -m reachy_agent web

# For production hardware, specify :8000
python -m reachy_agent repl --daemon-url http://localhost:8000

# Health check
python -m reachy_agent check

# Run with mock daemon (no hardware or MuJoCo needed)
python -m reachy_agent run --mock
```

```
claude-in-the-shell/
├── src/reachy_agent/
│   ├── agent/            # Agent loop with Claude SDK client
│   ├── behaviors/        # Idle behavior controller
│   ├── cli/              # CLI REPL interface
│   ├── errors/           # Error codes and responses
│   ├── expressions/      # Antenna/emotion sequences
│   ├── mcp_servers/      # MCP server implementations
│   │   ├── reachy/       # 23 robot control tools
│   │   ├── memory/       # 4 memory system tools
│   │   └── integrations/ # External service MCP servers
│   ├── memory/           # Memory manager (ChromaDB + SQLite)
│   │   └── storage/      # ChromaDB and SQLite backends
│   ├── perception/       # Wake word, audio, vision (Phase 2)
│   ├── permissions/      # 4-tier permission system
│   │   ├── handlers/     # CLI and WebSocket handlers
│   │   └── storage/      # SQLite audit logging
│   ├── privacy/          # Privacy indicators (Phase 2)
│   ├── resilience/       # Error recovery (Phase 2)
│   ├── simulation/       # MuJoCo simulation bridge
│   ├── web/              # FastAPI web dashboard
│   │   ├── routes/       # API and WebSocket routes
│   │   └── static/       # CSS and JavaScript assets
│   └── utils/            # Config, logging utilities
├── ai_docs/              # AI agent reference materials
├── config/               # Configuration files (YAML)
├── docs/                 # MkDocs documentation
│   ├── api/              # Auto-generated API reference
│   ├── architecture/     # System design diagrams
│   ├── diagrams/         # Mermaid diagram source files
│   ├── guides/           # How-to guides
│   ├── planning/         # PRD, TRD, implementation docs
│   └── tutorials/        # Getting started guides
├── prompts/              # System prompt templates
│   ├── context/          # Context injection templates
│   ├── expressions/      # Expression prompt templates
│   ├── integrations/     # Integration prompts
│   └── system/           # Core system prompts
├── scripts/              # Validation & demo scripts
└── tests/                # Test suite
    ├── integration/      # Integration tests
    ├── simulation/       # MuJoCo simulation tests
    └── unit/             # Unit tests
```
The agent exposes 27 tools to Claude via two MCP servers, discovered dynamically through the MCP protocol:
| Category | Tools | Description |
|---|---|---|
| Movement (5) | `move_head`, `look_at`, `look_at_world`, `look_at_pixel`, `rotate` | Head/body positioning, IK |
| Expression (6) | `play_emotion`, `play_recorded_move`, `set_antenna_state`, `nod`, `shake`, `dance` | Emotions from HuggingFace SDK |
| Audio (2) | `speak`, `listen` | Speech I/O |
| Perception (3) | `capture_image`, `get_sensor_data`, `look_at_sound` | Sensors and camera |
| Lifecycle (3) | `wake_up`, `sleep`, `rest` | Power management |
| Status (2) | `get_status`, `get_pose` | Robot state feedback |
| Control (2) | `set_motor_mode`, `cancel_action` | Motor control, action cancellation |
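The per-category counts above sum to the Reachy server's 23 tools. A tiny registry keeps that arithmetic honest; this is an illustrative sketch, not the MCP server's actual data structure (the real server registers each tool with the MCP protocol and Claude discovers them at runtime):

```python
# Hypothetical registry mirroring the table above; names match the tool
# names Claude sees, but the grouping here is purely for illustration.
REACHY_TOOLS: dict[str, list[str]] = {
    "movement": ["move_head", "look_at", "look_at_world", "look_at_pixel", "rotate"],
    "expression": ["play_emotion", "play_recorded_move", "set_antenna_state",
                   "nod", "shake", "dance"],
    "audio": ["speak", "listen"],
    "perception": ["capture_image", "get_sensor_data", "look_at_sound"],
    "lifecycle": ["wake_up", "sleep", "rest"],
    "status": ["get_status", "get_pose"],
    "control": ["set_motor_mode", "cancel_action"],
}


def total_tools(registry: dict[str, list[str]]) -> int:
    # Sum the category counts: 5 + 6 + 2 + 3 + 3 + 2 + 2 = 23.
    return sum(len(tools) for tools in registry.values())


assert total_tools(REACHY_TOOLS) == 23  # matches the Reachy MCP server count
```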
| Tool | Description |
|---|---|
| `search_memories` | Semantic search over stored memories (ChromaDB) |
| `store_memory` | Save a new memory with type classification |
| `get_user_profile` | Retrieve user preferences and info (SQLite) |
| `update_user_profile` | Update user preferences |
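A minimal in-memory stand-in shows the shape of the manager behind these four tools. This is a toy: the real MemoryManager does semantic search with MiniLM embeddings in ChromaDB and persists profiles to SQLite, and the method names here are assumptions chosen to mirror the tool names.

```python
from dataclasses import dataclass, field


@dataclass
class ToyMemoryManager:
    """Toy stand-in: substring matching instead of vector search,
    a dict instead of SQLite. Mirrors the four MCP memory tools."""
    memories: list[dict] = field(default_factory=list)
    profile: dict = field(default_factory=dict)

    def store_memory(self, text: str, memory_type: str = "fact") -> None:
        # Real version: embed with MiniLM, upsert into ChromaDB.
        self.memories.append({"text": text, "type": memory_type})

    def search_memories(self, query: str) -> list[str]:
        # Real version: nearest-neighbor search over embeddings.
        return [m["text"] for m in self.memories if query.lower() in m["text"].lower()]

    def update_user_profile(self, **prefs) -> None:
        self.profile.update(prefs)

    def get_user_profile(self) -> dict:
        return dict(self.profile)


mgr = ToyMemoryManager()
mgr.store_memory("User prefers short answers", memory_type="preference")
mgr.update_user_profile(name="Ada")
print(mgr.search_memories("short"))
```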
Optional integration with the official GitHub MCP server:
```bash
# Install binary (recommended for Raspberry Pi)
mkdir -p ~/.reachy/bin
curl -sL https://github.com/github/github-mcp-server/releases/latest/download/github-mcp-server_Linux_arm64.tar.gz | tar xzf - -C ~/.reachy/bin

# Set token
export GITHUB_TOKEN=ghp_...
```

Enable in agent:
```python
agent = ReachyAgentLoop(
    enable_github=True,
    github_toolsets=["repos", "issues", "pull_requests", "actions"],
)
```

| Toolset | Tier | Description |
|---|---|---|
| `repos` | 1-3 | Read repos (T1), create/push (T3) |
| `issues` | 1-3 | List/search (T1), comment (T2), create (T3) |
| `pull_requests` | 1-4 | Read (T1), create (T3), merge (T4, forbidden) |
| `actions` | 1-3 | List workflows (T1), trigger/cancel (T3) |
See MCP Tools Quick Reference for full parameter details.
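One way to encode the tier bands from the toolset table is a nested mapping with a forbidden-by-default lookup. This is a sketch under stated assumptions: the real permission configuration lives in the project's YAML config, and the action names here are illustrative.

```python
# Hypothetical mapping of GitHub toolset actions to permission tiers,
# mirroring the table above. Tier 4 means never execute.
GITHUB_TOOLSET_TIERS: dict[str, dict[str, int]] = {
    "repos": {"read": 1, "create": 3, "push": 3},
    "issues": {"list": 1, "search": 1, "comment": 2, "create": 3},
    "pull_requests": {"read": 1, "create": 3, "merge": 4},
    "actions": {"list_workflows": 1, "trigger": 3, "cancel": 3},
}


def tier_for(toolset: str, action: str) -> int:
    # Unknown toolsets or actions default to tier 4 (forbidden),
    # failing closed rather than open.
    return GITHUB_TOOLSET_TIERS.get(toolset, {}).get(action, 4)


assert tier_for("pull_requests", "merge") == 4  # PRs are never auto-merged
```

Failing closed on unknown actions is the safer default for an agent that can touch real repositories.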
```mermaid
flowchart LR
    REQ["Tool Request"] --> EVAL{"Evaluate"}
    EVAL -->|Tier 1| AUTO["Execute"]
    EVAL -->|Tier 2| NOTIFY["Execute + Notify"]
    EVAL -->|Tier 3| CONFIRM["Ask User"]
    EVAL -->|Tier 4| BLOCK["Block"]
```
| Tier | Behavior | Examples |
|---|---|---|
| 1. Autonomous | Execute immediately | Body control, reading data |
| 2. Notify | Execute and inform | Smart home control |
| 3. Confirm | Ask before executing | Creating events, PRs |
| 4. Forbidden | Never execute | Security-critical ops |
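The flow above maps naturally onto a PreToolUse-style hook. The sketch below is illustrative only: the actual hook signature comes from the Claude Agent SDK, the tool-to-tier assignments are configured per tool in the real system, and `confirm` stands in for the CLI/WebSocket permission handlers.

```python
from enum import IntEnum


class Tier(IntEnum):
    AUTONOMOUS = 1  # execute immediately
    NOTIFY = 2      # execute and inform the user
    CONFIRM = 3     # ask before executing
    FORBIDDEN = 4   # never execute


# Hypothetical tier assignments matching the examples in the table above.
TOOL_TIERS = {
    "get_status": Tier.AUTONOMOUS,
    "play_emotion": Tier.AUTONOMOUS,
    "set_light_state": Tier.NOTIFY,
    "create_event": Tier.CONFIRM,
    "merge_pull_request": Tier.FORBIDDEN,
}


def pre_tool_use(tool_name: str, confirm=lambda name: False) -> tuple[bool, str]:
    """Return (allowed, note). `confirm` stands in for asking the user."""
    # Unknown tools require confirmation rather than running silently.
    tier = TOOL_TIERS.get(tool_name, Tier.CONFIRM)
    if tier is Tier.AUTONOMOUS:
        return True, "executed"
    if tier is Tier.NOTIFY:
        return True, "executed, user notified"
    if tier is Tier.CONFIRM:
        if confirm(tool_name):
            return True, "executed after confirmation"
        return False, "denied"
    return False, "blocked: forbidden"


print(pre_tool_use("merge_pull_request"))  # -> (False, 'blocked: forbidden')
```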
```bash
# Install dev dependencies
uv pip install -r requirements-dev.txt

# Run tests
pytest -v

# Format code
black . && isort .

# Type check
mypy src/

# Lint
ruff check .
```

| Document | Description |
|---|---|
| Getting Started Tutorial | Complete setup from scratch |
| Architecture Overview | System design with Mermaid diagrams |
| ReachyAgentLoop Deep Dive | How the Perceive → Think → Act cycle works |
| MCP Tools Reference | All 27 robot control + memory tools |
| Memory System | Long-term memory with 4 MCP tools |
| Development Commands | Command cheat sheet |
| Agent Behavior Guide | Personality and expression patterns |
- Project scaffolding and configuration
- Reachy MCP server (23 tools with native SDK emotions)
- Official Claude Agent SDK integration (`ClaudeSDKClient`)
- 4-tier permission system with SDK PreToolUse hooks
- MuJoCo simulation testing (238 tests)
- Web dashboard with real-time control
- CLI REPL interface
- Idle behavior controller
- End-to-end validation
- Raspberry Pi environment setup
- Wake word detection (OpenWakeWord)
- Attention state machine (Passive/Alert/Engaged)
- Privacy indicators via antennas
- ChromaDB vector memory
- SQLite structured storage (profiles, sessions)
- Memory MCP tools (4 tools)
- Auto-injection of user profile + last session
- Expression system
- Personality configuration
- Home Assistant MCP
- Google Calendar MCP
- GitHub MCP
- Offline fallback (Ollama + Piper)
MIT
- Reachy Mini SDK - Official Pollen Robotics SDK
- Claude Agent SDK - Anthropic Agent SDK
- MCP Python SDK - Model Context Protocol
- MuJoCo - Physics simulation