MCP Server Integration
Connect CenterOS to Claude Code, Claude Desktop, Cursor, and Windsurf. All 31 platform actions become AI tools automatically.
What is MCP
Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external data sources and tools. Instead of manually copying data between your platform and your AI tool, MCP lets the assistant call platform actions directly -- listing datasets, creating training jobs, deploying models -- all through natural language.
The centeros-mcp server acts as a bridge: it discovers all available actions from the CenterOS backend and registers each one as an MCP tool. When you ask Claude "list my datasets" or "create a training job with datasets 1 and 2," the MCP server translates that into the appropriate API call.
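The bridging idea can be sketched roughly as follows. This is an illustrative model, not the actual centeros-mcp internals: the action-catalogue shape, the `/actions/{name}` dispatch route, and the helper names are all assumptions.

```python
import json
import urllib.request

API_URL = "https://fearless-backend-533466225971.us-central1.run.app"

def build_tool_registry(actions, token):
    """Turn each backend action into a callable MCP-style tool.

    `actions` is assumed to look like [{"name": "list_datasets"}, ...];
    the real catalogue schema may carry more fields (input schemas, docs).
    """
    registry = {}
    for action in actions:
        name = action["name"]

        def invoke(params, _name=name):
            # Hypothetical dispatch endpoint -- the real route may differ.
            req = urllib.request.Request(
                f"{API_URL}/actions/{_name}",
                data=json.dumps(params).encode(),
                headers={"Authorization": f"Bearer {token}",
                         "Content-Type": "application/json"},
            )
            return json.load(urllib.request.urlopen(req))

        registry[name] = invoke
    return registry

# Registering three of the documented actions:
tools = build_tool_registry(
    [{"name": "list_datasets"}, {"name": "get_dataset"},
     {"name": "create_training_job"}],
    token="frl_example",
)
print(sorted(tools))  # ['create_training_job', 'get_dataset', 'list_datasets']
```

When Claude asks to "list my datasets," the MCP layer looks up `tools["list_datasets"]` and invokes it with the parameters extracted from the conversation.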
Installation
```bash
# From PyPI
$ pip install centeros-mcp

# Or run directly with uvx (no install needed)
$ uvx centeros-mcp

# From source
$ cd fearless-platform/centeros-mcp
$ pip install -e .
```
Prerequisites
You need a Personal Access Token (PAT) from the platform. Create one with the CenterOS CLI or from the platform dashboard:
```bash
$ centeros pat create --name "mcp-server" --scopes "*"
# PAT created: frl_abc123...
```
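Before pasting the token into an editor config, you can sanity-check it with the two facts stated here: PATs start with `frl_`, and both token kinds are sent as plain Bearer tokens. The helper names below are illustrative, not part of centeros-mcp.

```python
def looks_like_pat(token: str) -> bool:
    """Format check only: platform PATs start with 'frl_' (JWTs do not).

    Actual validity is decided by the backend, not by this check.
    """
    return token.startswith("frl_")

def auth_header(token: str) -> dict:
    """Build the HTTP Authorization header sent to the backend.

    JWTs and PATs are both passed as plain Bearer tokens.
    """
    if not token:
        raise ValueError("a CenterOS token is required")
    return {"Authorization": f"Bearer {token}"}

print(looks_like_pat("frl_abc123"))  # True
print(auth_header("frl_abc123"))     # {'Authorization': 'Bearer frl_abc123'}
```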
Claude Desktop
Add the following to your Claude Desktop configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "centeros": {
      "command": "uvx",
      "args": ["--from", "centeros-mcp", "centeros-mcp"],
      "env": {
        "CENTEROS_TOKEN": "frl_your_pat_here"
      }
    }
  }
}
```
Restart Claude Desktop. The CenterOS tools will appear in the tools menu.
Claude Code
Add to `.claude/settings.json` in your project, or `~/.claude/settings.json` globally:
```json
{
  "mcpServers": {
    "centeros": {
      "command": "uvx",
      "args": ["--from", "centeros-mcp", "centeros-mcp"],
      "env": {
        "CENTEROS_TOKEN": "frl_your_pat_here"
      }
    }
  }
}
```
Cursor / Windsurf
Both editors support MCP servers through their settings. Add the same configuration to your editor's MCP settings panel: the server name, command, args, and environment variables are identical to the Claude configurations above.
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `CENTEROS_TOKEN` | Yes | Bearer token -- JWT or PAT (starts with `frl_`). |
| `CENTEROS_PAT` | No | Alias for `CENTEROS_TOKEN`. |
| `CENTEROS_API_URL` | No | Backend URL. Default: `https://fearless-backend-533466225971.us-central1.run.app` |
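The resolution logic implied by the table can be sketched as below. The precedence of `CENTEROS_TOKEN` over its `CENTEROS_PAT` alias when both are set, and the `load_config` name itself, are assumptions for illustration.

```python
import os

DEFAULT_API_URL = "https://fearless-backend-533466225971.us-central1.run.app"

def load_config(env=os.environ):
    """Resolve server settings from the environment variables above."""
    # CENTEROS_PAT is documented as an alias for CENTEROS_TOKEN.
    token = env.get("CENTEROS_TOKEN") or env.get("CENTEROS_PAT")
    if not token:
        raise RuntimeError("Set CENTEROS_TOKEN (or its alias CENTEROS_PAT)")
    return {
        "token": token,
        "api_url": env.get("CENTEROS_API_URL", DEFAULT_API_URL),
    }

cfg = load_config({"CENTEROS_PAT": "frl_abc123"})
print(cfg["api_url"])  # falls back to the default backend URL
```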
Available Tools
The server dynamically registers all actions from the backend. These tools are organized by category:
Datasets
`list_datasets`, `get_dataset`, `delete_dataset`, `upload_dataset`, `get_upload_signed_url`, `confirm_upload`
Training
`list_training_jobs`, `create_training_job`, `get_training_status`, `launch_training`, `get_finetune_catalog`, `estimate_cost`
Models
`list_models`, `register_model`, `promote_model`, `get_model_lineage`
Fleet
`get_fleet_summary`, `list_robots`, `get_robot_state`, `register_robot`, `deploy_model`, `emergency_stop`
Annotations
`list_annotation_tasks`, `create_annotation_task`, `get_annotation_metrics`, `submit_task`
Organizations
`list_orgs`, `list_org_members`, `invite_member`
Stats
`get_platform_stats`, `get_system_health`
Utility Tools
- `centeros_action` -- Execute any action by name (fallback if dynamic registration missed one)
- `list_available_actions` -- Returns the full action catalogue
See the Platform API Reference for full input schemas and descriptions of each action.
MCP Resources
The server also exposes read-only resources that AI tools can use for context:
| URI | Description |
|---|---|
| `platform://health` | Backend health check status. |
| `platform://stats` | Platform-wide statistics (datasets, training, robots). |
| `platform://actions` | Full action catalogue with schemas. |
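Conceptually, each `platform://` URI maps to a read-only backend fetch. The route mapping below is a hypothetical sketch; the actual paths served behind these resources may differ.

```python
# Hypothetical mapping from MCP resource URIs to backend routes.
RESOURCE_ROUTES = {
    "platform://health": "/health",
    "platform://stats": "/stats",
    "platform://actions": "/actions",
}

def resolve_resource(uri: str, api_url: str) -> str:
    """Translate a platform:// resource URI into a backend URL."""
    try:
        return api_url.rstrip("/") + RESOURCE_ROUTES[uri]
    except KeyError:
        raise ValueError(f"unknown resource URI: {uri}") from None

print(resolve_resource(
    "platform://health",
    "https://fearless-backend-533466225971.us-central1.run.app",
))  # https://fearless-backend-533466225971.us-central1.run.app/health
```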
Example Workflows
Once the MCP server is configured, you can interact with the platform using natural language. Here are some example conversations:
Check platform status
```text
You: What's the current state of our robotics platform?

Claude: Let me check the platform stats and fleet summary...
[calls get_platform_stats, get_fleet_summary]

Your platform has 247 datasets (18.3 GB), 15 training jobs (3 running),
and 12 robots in the fleet (8 online, 4 offline). There are 2 open alerts.
```
End-to-end training workflow
```text
You: Find all HDF5 datasets related to grasping, then create a diffusion
policy training job using those datasets.

Claude: Let me search for the datasets first...
[calls list_datasets with search="grasp", file_type="hdf5"]

Found 3 matching datasets: grasp-ep-001 (ID 42), grasp-ep-002 (ID 43),
grasp-ep-003 (ID 44). Creating a training job now...
[calls create_training_job with dataset_ids=[42,43,44]]

Training job "dp-grasp-0424" created (job_key: abc123). Launch it?
```
Model deployment
```text
You: Deploy our latest validated model to arm-01.

Claude: Let me find the latest validated model...
[calls list_models with status="validated"]

Found model "dp-v1" (ID 5, tag v-a1b2c3). Deploying to arm-01...
[calls deploy_model with robot_id="arm-01", model_version_id=5]

Deployment initiated: dep-key=dep-xyz789. The model will be pushed to
arm-01 using the immediate strategy.
```
Development
To run the MCP server locally for development or debugging:
```bash
# Run locally
$ cd fearless-platform/centeros-mcp
$ pip install -e .
$ CENTEROS_TOKEN=frl_xxx centeros-mcp

# Debug with the MCP Inspector
$ npx @modelcontextprotocol/inspector centeros-mcp
```