Installation

Prerequisites

  • uv: a fast Python package and project manager (see the uv installation guide)
  • Node.js (if using Node.js-based MCP servers)

Install Dependencies

git clone https://github.com/BenItBuhner/Agent-Chassis.git
cd Agent-Chassis
uv sync

Configuration

  1. Copy environment template:
    cp .env.example .env
    
  2. Set required variables:
    OPENAI_API_KEY=sk-...
    OPENAI_MODEL=kimi-k2-thinking
    
  3. Optional - Enable persistence:
    ENABLE_PERSISTENCE=true
    REDIS_URL=redis://localhost:6379/0
    DATABASE_URL=postgresql+asyncpg://user:pass@localhost/agent_chassis
    
  4. Optional - Enable user authentication:
    ENABLE_USER_AUTH=true
    JWT_SECRET_KEY=your-secret-key-here
    GOOGLE_CLIENT_ID=your-google-client-id
    GOOGLE_CLIENT_SECRET=your-google-client-secret
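
    The variables above can be summarized in a small sketch (a hypothetical helper, not part of the project: it only encodes which keys the template treats as required versus optional):

    ```python
    # Required keys must be set for the server to talk to the model provider;
    # the other groups are opt-in feature flags from the .env template above.
    REQUIRED = ("OPENAI_API_KEY", "OPENAI_MODEL")
    PERSISTENCE = ("ENABLE_PERSISTENCE", "REDIS_URL", "DATABASE_URL")
    USER_AUTH = ("ENABLE_USER_AUTH", "JWT_SECRET_KEY",
                 "GOOGLE_CLIENT_ID", "GOOGLE_CLIENT_SECRET")

    def missing_required(env):
        """Return the required variables that are unset or empty."""
        return [k for k in REQUIRED if not env.get(k)]
    ```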
    

Running the Server

uv run uvicorn app.main:app --reload
The API will be available at http://localhost:8000.

Your First Request

Every example in this guide targets the same endpoint:
POST /api/v1/agent/completion

Client-Side Mode (No Persistence)

In client-side mode you send the full messages array with every request and keep the conversation history yourself:

curl -X POST http://localhost:8000/api/v1/agent/completion \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "messages": [
      {"role": "user", "content": "What time is it?"}
    ],
    "model": "kimi-k2-thinking",
    "allowed_tools": ["get_server_time"]
  }'
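
The shape of that request body can be sketched in Python (a minimal helper of my own, following the field names in the curl example; sending it over HTTP is left to your client of choice):

```python
# Client-side (stateless) mode: the caller owns the history list and resends
# it in full each turn; no session_id is involved.
def make_stateless_body(history, user_text,
                        model="kimi-k2-thinking", tools=None):
    """Append the new user turn to the caller-held history and build the
    request body for POST /api/v1/agent/completion."""
    new_history = history + [{"role": "user", "content": user_text}]
    body = {"messages": new_history, "model": model}
    if tools:
        body["allowed_tools"] = tools
    return new_history, body
```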

Server-Side Mode (With Persistence)

With persistence enabled (ENABLE_PERSISTENCE=true), send a single message string instead of a messages array; the server stores the history and returns a session_id:

curl -X POST http://localhost:8000/api/v1/agent/completion \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "message": "What time is it?",
    "model": "kimi-k2-thinking",
    "allowed_tools": ["get_server_time"]
  }'
The response includes a session_id for continuing the conversation:
{
  "role": "assistant",
  "content": "The current server time is...",
  "session_id": "abc-123-def-456"
}
Continue the conversation:
curl -X POST http://localhost:8000/api/v1/agent/completion \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "session_id": "abc-123-def-456",
    "message": "What about tomorrow?",
    "model": "kimi-k2-thinking"
  }'
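
The two-turn flow above can be wrapped in a small session helper (an illustrative sketch, not a project API; it assumes the message, session_id, and content fields shown in the examples, and leaves the HTTP call to you):

```python
# Tracks the session_id returned by the first completion so that later
# turns continue the same server-side conversation.
class AgentSession:
    def __init__(self, model="kimi-k2-thinking", allowed_tools=None):
        self.model = model
        self.allowed_tools = allowed_tools or []
        self.session_id = None

    def next_request(self, text):
        """Build the body for the next turn; includes session_id once known."""
        body = {"message": text, "model": self.model}
        if self.allowed_tools:
            body["allowed_tools"] = self.allowed_tools
        if self.session_id:
            body["session_id"] = self.session_id
        return body

    def ingest(self, response):
        """Record the session_id from a response and return its content."""
        self.session_id = response.get("session_id", self.session_id)
        return response.get("content")
```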

Streaming Responses

Enable streaming for real-time feedback:
curl -X POST http://localhost:8000/api/v1/agent/completion \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "messages": [
      {"role": "user", "content": "Calculate 123 + 456"}
    ],
    "model": "kimi-k2-thinking",
    "stream": true,
    "allowed_tools": ["calculate"]
  }'
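
If the stream is delivered as server-sent events (an assumption here; inspect a live response to confirm the framing and chunk schema before relying on it), each chunk arrives on a data: line. A minimal parser sketch:

```python
import json

def iter_sse_events(lines):
    """Yield JSON payloads from SSE-style 'data: ...' lines, stopping at a
    '[DONE]' sentinel. The chunk field names are NOT confirmed by the docs
    above; treat this as a parsing skeleton only."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield json.loads(data)
```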

Next Steps