SIM-ONE

SIM-ONE Cognitive Control Protocol (mCP) Server

⚠️ Important Naming Note: The mcp_server directory predates the industry-standard “Model Context Protocol” (MCP). In this codebase, “mcp_server” refers to SIM-ONE’s “Multi-Protocol Cognitive Platform” or “Modular Cognitive Platform” - the core orchestrator and agent system. This is NOT an MCP tool registry in the modern sense. See ../MIGRATION_PLAN.md for future renaming strategy.


Project Overview

Welcome to the SIM-ONE mCP Server, a sophisticated, multi-protocol cognitive architecture designed to simulate advanced reasoning, emotional intelligence, and metacognitive governance. This server is the backbone of the SIM-ONE framework, providing a powerful platform for developing and orchestrating autonomous AI agents that can perform complex cognitive tasks.

The server is built on a modular, protocol-based architecture, allowing for dynamic loading and execution of various cognitive functions. From deep reasoning and emotional analysis to advanced entity extraction and self-governance, the mCP Server provides the tools to build truly intelligent systems.

Key Features

Quick Start Guide

Get the server up and running in 5 minutes.

1. Clone the repository:

git clone [repository-url]
cd SIM-ONE

2. Set up a Python virtual environment:

python3 -m venv venv
source venv/bin/activate

3. Install dependencies:

pip install -r requirements.txt

4. Configure environment variables: Create a .env file in the root directory and add the required variables. See the Configuration Guide for details. A minimal example:

MCP_API_KEY="your-secret-api-key"

5. Run the server:

uvicorn mcp_server.main:app --host 0.0.0.0 --port 8000

The server is now running and accessible at http://localhost:8000.


Using SIM-ONE Protocols as Standalone Tools

Each protocol is available as a CLI tool in /tools/ for integration with autonomous agents:

Individual Protocol Tools

Governance Tools

Example Usage

# Validate AI response against Five Laws
python tools/run_five_laws_validator.py --text "response to check"

# Use REP for reasoning
python tools/run_rep_tool.py --reasoning-type deductive \
  --facts "Socrates is a man" "All men are mortal" \
  --rules '[["Socrates is a man", "All men are mortal"], "Socrates is mortal"]'

# Chain protocols for governed workflow
python tools/run_rep_tool.py --json '{...}' | \
  python tools/run_vvp_tool.py | \
  python tools/run_five_laws_validator.py
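The `--rules` value in the REP example above is a JSON document pairing a list of premises with a conclusion. As a sketch, that payload can be built programmatically; note the `[[premise, ...], conclusion]` shape is inferred from the example and may not cover every reasoning type:

```python
import json

# Premise/conclusion pair matching the deductive example above.
premises = ["Socrates is a man", "All men are mortal"]
conclusion = "Socrates is mortal"

# Shape inferred from the CLI example: [[premise, ...], conclusion]
rules_json = json.dumps([premises, conclusion])
print(rules_json)
```

The resulting string can be passed directly as the `--rules` argument.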

📖 Full Tool Documentation: tools/README.md
🔧 Integration Guide: ../PAPER2AGENT_INTEGRATION.md
📋 Tool Manifest: tools/tools_manifest.json


Architecture Overview

The mCP Server consists of several key components that work in tandem.

For a more detailed breakdown, please see the full Architecture Documentation.

Installation

For detailed, step-by-step installation instructions for various platforms and production environments, please refer to our comprehensive Installation Guide.

Configuration

The server is configured entirely through environment variables. For a full list of all required and optional variables, their purpose, and example values, please see the Configuration Guide.
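As a sketch of this pattern, a startup check can fail fast when a required variable is missing. Only MCP_API_KEY appears in the quick-start example above, so the sketch sticks to that one variable; consult the Configuration Guide for the full list:

```python
import os

def load_config() -> dict:
    """Read required settings from the environment, failing fast when absent.

    MCP_API_KEY is the only variable shown in the quick-start example;
    other variables are documented in the Configuration Guide.
    """
    api_key = os.environ.get("MCP_API_KEY")
    if not api_key:
        raise RuntimeError("MCP_API_KEY is not set; add it to your .env file")
    return {"api_key": api_key}
```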

Security & Governance

The mCP Server includes a governance layer and structured audit logging. Both can be enabled and tuned with environment flags.

Audit logs are emitted in JSON format by a dedicated audit logger.
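As an illustration of the JSON-lines style such a logger typically produces (the server's actual record fields are not shown here, so `timestamp`, `level`, and `event` are assumptions):

```python
import json
import logging

class JsonAuditFormatter(logging.Formatter):
    """Emit one JSON object per log line (field names are illustrative)."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
        })

# Wire the formatter onto a dedicated "audit" logger.
audit = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(JsonAuditFormatter())
audit.addHandler(handler)
audit.setLevel(logging.INFO)
audit.info("workflow_executed")
```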

Tip: Responses from /execute include a governance_summary field with quality_scores and is_coherent for quick inspection.
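A client-side sketch of that quick inspection, assuming `quality_scores` maps metric names to floats (the 0.8 threshold is an illustrative choice, not a documented default):

```python
def passes_governance(response: dict, threshold: float = 0.8) -> bool:
    """Gate an /execute response on its governance_summary.

    Field names come from the tip above; the threshold is an assumption.
    """
    summary = response.get("governance_summary", {})
    if not summary.get("is_coherent", False):
        return False
    scores = summary.get("quality_scores", {})
    return all(score >= threshold for score in scores.values())
```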

Admin model management (MVLM local)

Execution Controls

Local Monitoring

Run a local Prometheus + Grafana stack to visualize governance/recovery metrics.

Model Test CLI

Quickly test a local GPT‑2–style MVLM without running the server.

Notes

API Documentation

The server exposes a powerful API for executing cognitive workflows. For detailed information on all available endpoints, request/response formats, authentication, and usage examples, please refer to our full API Documentation.

Basic API Usage Example

Here is a quick example of how to execute a workflow using curl:

curl -X POST "http://localhost:8000/v1/execute" \
  -H "Authorization: Bearer your-secret-api-key" \
  -H "Content-Type: application/json" \
  -d '{
      "workflow": "StandardReasoningWorkflow",
      "data": {
          "user_input": "John works at Microsoft and lives in Seattle."
      }
  }'
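The same call can be made from Python with only the standard library. The endpoint path and payload shape below are taken from the curl example; a server must be running locally before the request is actually sent:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/execute"

def build_execute_request(api_key: str, user_input: str) -> urllib.request.Request:
    """Build the POST request shown in the curl example above."""
    payload = {
        "workflow": "StandardReasoningWorkflow",
        "data": {"user_input": user_input},
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending requires a running server:
# with urllib.request.urlopen(build_execute_request("your-secret-api-key", "...")) as resp:
#     result = json.loads(resp.read())
```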

Contributing

We welcome contributions from the community! If you’d like to contribute, please read our Contributing Guidelines to get started.

License

This project is dual-licensed: AGPL v3, with a commercial license available.