OpenClaw API Complete Documentation

Welcome to the comprehensive OpenClaw API documentation. This guide covers everything you need to integrate OpenClaw's powerful AI capabilities into your applications using REST APIs and WebSocket connections.

API Overview

OpenClaw provides a robust API ecosystem built on REST endpoints for request/response workloads and WebSocket connections for real-time communication.

Getting Started

Base URLs

Production

```
REST API:  https://api.openclaw.ai/v1
WebSocket: wss://api.openclaw.ai/v1/ws
```

Development

```
REST API:  https://dev-api.openclaw.ai/v1
WebSocket: wss://dev-api.openclaw.ai/v1/ws
```

Self-Hosted

```
REST API:  http://localhost:3000/api/v1
WebSocket: ws://localhost:3000/v1/ws
```

Authentication

API Key Authentication

OpenClaw uses API key authentication for secure access to the API.

Getting Your API Key

1. Log in to your OpenClaw dashboard
2. Go to Settings > API Keys
3. Click "Generate New Key"
4. Copy your API key securely
5. Store it in environment variables

Using API Keys

```
# In HTTP headers
Authorization: Bearer YOUR_API_KEY

# Or as a query parameter (less secure)
?api_key=YOUR_API_KEY
```

Environment Setup

```bash
# Set the environment variable
export OPENCLAW_API_KEY="your_api_key_here"
```

```javascript
// In your application
const apiKey = process.env.OPENCLAW_API_KEY;
```

JWT Authentication (Advanced)

JWT Token Generation

```
# Generate a JWT token
POST /auth/token
{
  "api_key": "your_api_key",
  "expires_in": 3600
}

# Response
{
  "token": "eyJhbGciOiJIUzI1NiIs...",
  "expires_at": "2026-02-28T12:00:00Z"
}
```

Using JWT Tokens

```
# In HTTP headers
Authorization: Bearer YOUR_JWT_TOKEN

# Tokens are validated automatically on each request
```
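Because tokens expire, clients should refresh them shortly before the `expires_at` time returned by `POST /auth/token`. A minimal sketch of that check, assuming the ISO-8601 timestamp format shown above (the helper name and leeway value are illustrative, not part of the API):

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expires_at: str, leeway_seconds: int = 60) -> bool:
    """Return True when a JWT should be refreshed, based on the
    `expires_at` field from the /auth/token response. A leeway window
    avoids using a token that expires mid-request."""
    # Parse the ISO-8601 timestamp ("Z" suffix means UTC)
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) >= expiry - timedelta(seconds=leeway_seconds)
```

Call `needs_refresh(token_response["expires_at"])` before each request and re-issue the token when it returns `True`.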

REST API Endpoints

AI Chat Completions

POST /chat/completions

Create chat completions with OpenClaw AI models

```
POST /chat/completions
{
  "model": "gpt-4",
  "messages": [
    {"role": "user", "content": "Hello, OpenClaw!"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
```

POST /chat/completions/stream

Stream chat completions in real-time

```
POST /chat/completions/stream
{
  "model": "gpt-4",
  "messages": [
    {"role": "user", "content": "Tell me a story"}
  ],
  "stream": true
}
```

Models

GET /models

List all available AI models

```
GET /models

# Response
{
  "data": [
    {
      "id": "gpt-4",
      "name": "GPT-4",
      "provider": "openai",
      "context_length": 8192,
      "pricing": {
        "prompt": 0.03,
        "completion": 0.06
      }
    }
  ]
}
```

GET /models/{model_id}

Get detailed information about a specific model

```
GET /models/gpt-4

# Response
{
  "id": "gpt-4",
  "name": "GPT-4",
  "provider": "openai",
  "context_length": 8192,
  "capabilities": ["chat", "completion", "function_calling"]
}
```

Skills Management

GET /skills

List all installed skills

```
GET /skills

# Response
{
  "data": [
    {
      "id": "weather-skill",
      "name": "Weather Skill",
      "version": "1.0.0",
      "status": "active"
    }
  ]
}
```

POST /skills/{skill_id}/execute

Execute a specific skill

```
POST /skills/weather-skill/execute
{
  "command": "get",
  "parameters": {
    "location": "New York"
  }
}
```

Workflows

GET /workflows

List all workflows

```
GET /workflows

# Response
{
  "data": [
    {
      "id": "daily-brief",
      "name": "Daily Brief",
      "status": "active"
    }
  ]
}
```

POST /workflows

Create a new workflow

```
POST /workflows
{
  "name": "Custom Workflow",
  "description": "My custom workflow",
  "steps": [
    {
      "type": "ai_completion",
      "model": "gpt-4",
      "prompt": "Generate summary"
    }
  ]
}
```

Files & Media

POST /files/upload

Upload files for processing

```
POST /files/upload
Content-Type: multipart/form-data

file: [binary_data]
purpose: "analysis"
```

GET /files/{file_id}

Get file information

```
GET /files/file_123

# Response
{
  "id": "file_123",
  "filename": "document.pdf",
  "size": 1024000,
  "purpose": "analysis",
  "status": "processed"
}
```

Analytics & Monitoring

GET /analytics/usage

Get usage statistics

```
GET /analytics/usage?period=7d

# Response
{
  "period": "7d",
  "total_requests": 1000,
  "total_tokens": 50000,
  "cost": 15.50
}
```
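If you track token counts client-side, you can estimate per-request cost from a model's `pricing` block (as returned by `GET /models`). A sketch under one stated assumption: the pricing unit is per 1,000 tokens, which the documentation does not specify, so verify against your dashboard before relying on it.

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  pricing: dict, per: int = 1000) -> float:
    """Estimate request cost from token counts and a model's `pricing`
    block. ASSUMPTION: prices are per `per` tokens (default 1,000);
    the actual unit is not stated in the model listing above."""
    return (prompt_tokens / per) * pricing["prompt"] + \
           (completion_tokens / per) * pricing["completion"]
```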

GET /health

Check API health status

```
GET /health

# Response
{
  "status": "healthy",
  "version": "2.1.0",
  "uptime": 86400,
  "models": ["gpt-4", "claude-3"]
}
```

WebSocket API

WebSocket Connection

The WebSocket API provides real-time, bidirectional communication with OpenClaw.

Establishing Connection

```javascript
// JavaScript WebSocket connection
const ws = new WebSocket('wss://api.openclaw.ai/v1/ws');

// Authenticate once the connection opens
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'auth',
    token: 'your_api_key'
  }));
};
```

Python WebSocket Connection

```python
import websocket
import json

def on_message(ws, message):
    data = json.loads(message)
    print(f"Received: {data}")

def on_open(ws):
    # Authenticate once the connection is open
    ws.send(json.dumps({
        "type": "auth",
        "token": "your_api_key"
    }))

ws = websocket.WebSocketApp(
    "wss://api.openclaw.ai/v1/ws",
    on_open=on_open,
    on_message=on_message
)
ws.run_forever()
```

WebSocket Message Types

Chat Completion

```
// Send chat message
{
  "type": "chat",
  "id": "msg_123",
  "model": "gpt-4",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "stream": true
}

// Response (streaming)
{
  "type": "chat_chunk",
  "id": "msg_123",
  "content": "Hello! ",
  "finished": false
}

// Final response
{
  "type": "chat_complete",
  "id": "msg_123",
  "content": "Hello! How can I help you?",
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 8
  }
}
```
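A client consuming this stream accumulates `chat_chunk` frames and stops at `chat_complete`. A minimal Python sketch of that loop (the helper name is illustrative; the frame shapes follow the examples above, where `chat_complete` carries the full final text):

```python
def assemble_chat(frames):
    """Consume a sequence of streaming frames for one request id.
    Returns (final_content, usage). Falls back to joining the chunks
    if the stream ends without a chat_complete frame."""
    parts = []
    for frame in frames:
        if frame["type"] == "chat_chunk":
            parts.append(frame["content"])
        elif frame["type"] == "chat_complete":
            # The final frame carries the complete text and token usage
            return frame["content"], frame.get("usage")
    return "".join(parts), None
```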

Skill Execution

```
// Execute skill
{
  "type": "skill_execute",
  "id": "skill_123",
  "skill": "weather-skill",
  "command": "get",
  "parameters": {
    "location": "London"
  }
}

// Response
{
  "type": "skill_result",
  "id": "skill_123",
  "result": {
    "location": "London",
    "temperature": 15,
    "description": "Cloudy"
  }
}
```

Workflow Execution

```
// Start workflow
{
  "type": "workflow_start",
  "id": "workflow_123",
  "workflow": "daily-brief",
  "parameters": {
    "date": "2026-02-28"
  }
}

// Progress updates
{
  "type": "workflow_progress",
  "id": "workflow_123",
  "step": 1,
  "total_steps": 3,
  "status": "running"
}

// Completion
{
  "type": "workflow_complete",
  "id": "workflow_123",
  "result": {
    "summary": "Daily brief generated"
  }
}
```

Error Handling

WebSocket Errors

```
// Error message format
{
  "type": "error",
  "id": "msg_123",
  "error": {
    "code": "invalid_model",
    "message": "Model 'invalid-model' not found",
    "details": {
      "available_models": ["gpt-4", "claude-3"]
    }
  }
}
```

Connection Management

```
// Heartbeat
{ "type": "ping" }

// Response
{
  "type": "pong",
  "timestamp": "2026-02-28T12:00:00Z"
}

// Reconnect
{
  "type": "reconnect",
  "reason": "connection_lost"
}
```

Integration Examples

JavaScript/Node.js

REST API Example

```javascript
const axios = require('axios');

class OpenClawClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseURL = 'https://api.openclaw.ai/v1';
  }

  async chatCompletion(messages, options = {}) {
    try {
      const response = await axios.post(
        `${this.baseURL}/chat/completions`,
        {
          model: options.model || 'gpt-4',
          messages,
          max_tokens: options.maxTokens || 1000,
          temperature: options.temperature || 0.7
        },
        {
          headers: {
            'Authorization': `Bearer ${this.apiKey}`,
            'Content-Type': 'application/json'
          }
        }
      );
      return response.data;
    } catch (error) {
      console.error('OpenClaw API Error:', error.response?.data || error.message);
      throw error;
    }
  }

  // Async generator: yields parsed SSE chunks as they arrive
  async *streamChatCompletion(messages, options = {}) {
    const response = await fetch(`${this.baseURL}/chat/completions/stream`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: options.model || 'gpt-4',
        messages,
        stream: true
      })
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value);
      const lines = chunk.split('\n').filter(line => line.trim());

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          yield JSON.parse(line.slice(6));
        }
      }
    }
  }
}

// Usage (inside an async function)
const client = new OpenClawClient('your_api_key');

// Simple chat
const response = await client.chatCompletion([
  { role: 'user', content: 'Hello, OpenClaw!' }
]);

// Streaming chat
for await (const chunk of client.streamChatCompletion([
  { role: 'user', content: 'Tell me a story' }
])) {
  process.stdout.write(chunk.choices[0].delta.content || '');
}
```

WebSocket Example

```javascript
class OpenClawWebSocket {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.ws = null;
    this.messageId = 0;
    this.pendingRequests = new Map();
  }

  connect() {
    return new Promise((resolve, reject) => {
      this.ws = new WebSocket('wss://api.openclaw.ai/v1/ws');

      this.ws.onopen = () => {
        // Authenticate
        this.send({ type: 'auth', token: this.apiKey });
        resolve();
      };

      this.ws.onmessage = (event) => {
        const data = JSON.parse(event.data);
        this.handleMessage(data);
      };

      this.ws.onerror = (error) => {
        reject(error);
      };

      this.ws.onclose = () => {
        console.log('WebSocket connection closed');
      };
    });
  }

  send(message) {
    if (!this.ws || this.ws.readyState !== WebSocket.OPEN) {
      throw new Error('WebSocket not connected');
    }

    if (message.type !== 'auth' && message.type !== 'ping') {
      message.id = `msg_${++this.messageId}`;
    }

    this.ws.send(JSON.stringify(message));
    return message.id;
  }

  async chatCompletion(messages, options = {}) {
    const id = this.send({
      type: 'chat',
      model: options.model || 'gpt-4',
      messages,
      stream: options.stream || false
    });

    return new Promise((resolve, reject) => {
      this.pendingRequests.set(id, { resolve, reject });
    });
  }

  handleMessage(data) {
    if (data.id && this.pendingRequests.has(data.id)) {
      const { resolve, reject } = this.pendingRequests.get(data.id);
      this.pendingRequests.delete(data.id);

      if (data.type === 'error') {
        reject(new Error(data.error.message));
      } else {
        resolve(data);
      }
    } else {
      // Handle other message types (streaming, notifications, etc.)
      console.log('Received message:', data);
    }
  }
}

// Usage (inside an async function)
const wsClient = new OpenClawWebSocket('your_api_key');
await wsClient.connect();

const response = await wsClient.chatCompletion([
  { role: 'user', content: 'Hello!' }
]);
console.log(response.content);
```

Python

REST API Example

```python
import requests
import json

class OpenClawClient:
    def __init__(self, api_key, base_url="https://api.openclaw.ai/v1"):
        self.api_key = api_key
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def chat_completion(self, messages, model="gpt-4", max_tokens=1000, temperature=0.7):
        """Create a chat completion."""
        payload = {
            "model": model,
            "messages": messages,
            "max_tokens": max_tokens,
            "temperature": temperature
        }
        response = requests.post(
            f"{self.base_url}/chat/completions",
            headers=self.headers,
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def stream_chat_completion(self, messages, model="gpt-4"):
        """Stream a chat completion."""
        payload = {
            "model": model,
            "messages": messages,
            "stream": True
        }
        response = requests.post(
            f"{self.base_url}/chat/completions/stream",
            headers=self.headers,
            json=payload,
            stream=True
        )
        response.raise_for_status()
        for line in response.iter_lines():
            if line:
                line = line.decode("utf-8")
                if line.startswith("data: "):
                    yield json.loads(line[6:])

    def get_models(self):
        """List available models."""
        response = requests.get(
            f"{self.base_url}/models",
            headers=self.headers
        )
        response.raise_for_status()
        return response.json()

    def execute_skill(self, skill_id, command, parameters=None):
        """Execute a skill."""
        payload = {
            "command": command,
            "parameters": parameters or {}
        }
        response = requests.post(
            f"{self.base_url}/skills/{skill_id}/execute",
            headers=self.headers,
            json=payload
        )
        response.raise_for_status()
        return response.json()

# Usage
client = OpenClawClient("your_api_key")

# Simple chat
messages = [{"role": "user", "content": "Hello, OpenClaw!"}]
response = client.chat_completion(messages)
print(response["choices"][0]["message"]["content"])

# Streaming chat
for chunk in client.stream_chat_completion(messages):
    if chunk.get("choices"):
        content = chunk["choices"][0].get("delta", {}).get("content", "")
        print(content, end="", flush=True)
```

WebSocket Example

```python
import websocket
import json
import threading
import time

class OpenClawWebSocket:
    def __init__(self, api_key):
        self.api_key = api_key
        self.ws = None
        self.message_id = 0
        self.pending_requests = {}  # message_id -> list of received messages
        self.lock = threading.Lock()

    def connect(self):
        """Connect to the WebSocket."""
        def on_message(ws, message):
            self.handle_message(json.loads(message))

        def on_error(ws, error):
            print(f"WebSocket error: {error}")

        def on_close(ws, close_status_code, close_msg):
            print("WebSocket connection closed")

        def on_open(ws):
            # Authenticate as soon as the connection opens
            self.send({"type": "auth", "token": self.api_key})

        self.ws = websocket.WebSocketApp(
            "wss://api.openclaw.ai/v1/ws",
            on_message=on_message,
            on_error=on_error,
            on_close=on_close,
            on_open=on_open
        )

        # Run the WebSocket loop in a background thread
        self.ws_thread = threading.Thread(target=self.ws.run_forever)
        self.ws_thread.daemon = True
        self.ws_thread.start()

    def send(self, message):
        """Send a message over the WebSocket."""
        if not self.ws or not self.ws.sock:
            raise ConnectionError("WebSocket not connected")

        if message["type"] not in ("auth", "ping"):
            with self.lock:
                message["id"] = f"msg_{self.message_id}"
                self.message_id += 1

        self.ws.send(json.dumps(message))
        return message.get("id")

    def chat_completion(self, messages, model="gpt-4", stream=False):
        """Send a chat completion request."""
        message_id = self.send({
            "type": "chat",
            "model": model,
            "messages": messages,
            "stream": stream
        })
        if stream:
            return self.stream_response(message_id)
        return self.wait_for_response(message_id)

    def stream_response(self, message_id):
        """Yield streamed chunks until the completion message arrives."""
        while True:
            data = None
            with self.lock:
                chunks = self.pending_requests.get(message_id)
                if chunks:
                    data = chunks.pop(0)
            if data is None:
                time.sleep(0.1)
                continue
            yield data
            if data["type"] == "chat_complete":
                break

    def wait_for_response(self, message_id, timeout=30):
        """Wait for a single response, with a timeout."""
        start_time = time.time()
        while time.time() - start_time < timeout:
            with self.lock:
                chunks = self.pending_requests.get(message_id)
                if chunks:
                    return chunks.pop(0)
            time.sleep(0.1)
        raise TimeoutError("Request timed out")

    def handle_message(self, data):
        """Queue incoming messages by request id."""
        if "id" in data:
            with self.lock:
                self.pending_requests.setdefault(data["id"], []).append(data)
        else:
            # Handle unsolicited message types (pongs, notifications, etc.)
            print(f"Received: {data}")

# Usage
ws_client = OpenClawWebSocket("your_api_key")
ws_client.connect()

# Simple chat
messages = [{"role": "user", "content": "Hello!"}]
response = ws_client.chat_completion(messages)
print(response["content"])

# Streaming chat
for chunk in ws_client.chat_completion(messages, stream=True):
    if chunk.get("content"):
        print(chunk["content"], end="", flush=True)
```

cURL Examples

Basic Chat Completion

```bash
curl -X POST "https://api.openclaw.ai/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, OpenClaw!"}
    ],
    "max_tokens": 100
  }'
```

Streaming Chat

```bash
curl -X POST "https://api.openclaw.ai/v1/chat/completions/stream" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Tell me a story"}
    ],
    "stream": true
  }'
```

List Models

```bash
curl -X GET "https://api.openclaw.ai/v1/models" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Execute Skill

```bash
curl -X POST "https://api.openclaw.ai/v1/skills/weather-skill/execute" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": "get",
    "parameters": {
      "location": "New York"
    }
  }'
```

Error Handling

HTTP Error Codes

Common Error Responses

```
// 400 Bad Request
{
  "error": {
    "code": "invalid_request",
    "message": "Invalid request format",
    "details": {
      "field": "messages",
      "issue": "must be an array"
    }
  }
}

// 401 Unauthorized
{
  "error": {
    "code": "unauthorized",
    "message": "Invalid API key"
  }
}

// 429 Rate Limited
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded",
    "details": {
      "limit": 100,
      "reset_time": "2026-02-28T12:01:00Z"
    }
  }
}

// 500 Server Error
{
  "error": {
    "code": "internal_error",
    "message": "Internal server error"
  }
}
```
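Because every error payload shares the same `error.code` / `error.message` shape, a client can map codes to typed exceptions so callers can handle rate limits and auth failures differently from generic errors. A sketch assuming only the payload shape above (the class names are illustrative):

```python
class OpenClawError(Exception):
    """Base exception carrying the structured error payload."""
    def __init__(self, code, message, details=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.details = details or {}

class AuthError(OpenClawError):
    pass

class RateLimitError(OpenClawError):
    pass

# Map known error codes to specific exception classes
_ERROR_CLASSES = {
    "unauthorized": AuthError,
    "rate_limit_exceeded": RateLimitError,
}

def raise_for_error(payload):
    """Turn an error payload (shape shown above) into a typed exception."""
    err = payload["error"]
    cls = _ERROR_CLASSES.get(err["code"], OpenClawError)
    raise cls(err["code"], err["message"], err.get("details"))
```

Callers can then `except RateLimitError` to back off, while unknown codes still surface as the base `OpenClawError`.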

Rate Limiting

Rate Limits

  • Free Tier: 100 requests/minute
  • Pro Tier: 1000 requests/minute
  • Enterprise: Custom limits

Rate Limit Headers

```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1677628800
```
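A client can read these headers to pause before the rate window resets rather than burning requests on 429 responses. A minimal sketch, assuming `X-RateLimit-Reset` is a Unix timestamp in seconds (consistent with the example value above; the helper name is illustrative):

```python
import time

def seconds_until_reset(headers: dict, now: float = None) -> float:
    """Return how long to wait before the next request: 0 while quota
    remains, otherwise the seconds until X-RateLimit-Reset."""
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0.0
    now = time.time() if now is None else now
    # Reset header is a Unix timestamp; never return a negative wait
    return max(0.0, int(headers["X-RateLimit-Reset"]) - now)
```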

Best Practices

Security

API Key Security

  • Never expose API keys in client-side code
  • Use environment variables for API keys
  • Implement key rotation policies
  • Use HTTPS for all API requests
  • Monitor API key usage

Input Validation

  • Validate all user inputs
  • Sanitize content before sending to API
  • Implement content length limits
  • Filter malicious content
  • Use parameterized queries
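The validation points above can be sketched as a small pre-flight check on the `messages` array before it is sent to `/chat/completions`. The roles and length limit here are illustrative assumptions, not documented API constraints; adjust them to your deployment:

```python
MAX_CONTENT_CHARS = 8000  # illustrative client-side limit, not an API constant

def validate_messages(messages):
    """Basic client-side checks before sending a chat request."""
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty array")
    for msg in messages:
        # Assumed role set; the API docs above only show "user"
        if msg.get("role") not in ("system", "user", "assistant"):
            raise ValueError(f"invalid role: {msg.get('role')!r}")
        content = msg.get("content")
        if not isinstance(content, str) or not content.strip():
            raise ValueError("content must be a non-empty string")
        if len(content) > MAX_CONTENT_CHARS:
            raise ValueError("content exceeds length limit")
    return messages
```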

Performance

Optimization Tips

  • Use streaming for long responses
  • Implement request batching
  • Cache frequently used responses
  • Use appropriate model for task
  • Implement retry logic with exponential backoff
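The last tip, retry with exponential backoff, can be sketched as a generic wrapper: each failed attempt doubles the delay and adds jitter so many clients do not retry in lockstep. The function name and defaults are illustrative:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5, retry_on=(Exception,)):
    """Invoke `call` with exponential backoff and jitter.
    Re-raises the last exception once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

In practice `retry_on` should be narrowed to transient failures (timeouts, 429s, 5xx) so client-side bugs are not retried pointlessly.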

Connection Management

  • Reuse HTTP connections with keep-alive
  • Implement connection pooling
  • Use WebSocket for real-time features
  • Handle connection failures gracefully
  • Implement health checks

SDKs and Libraries

Official SDKs

JavaScript/TypeScript

```bash
npm install @openclaw/client
```

```javascript
import { OpenClawClient } from '@openclaw/client';

const client = new OpenClawClient({
  apiKey: process.env.OPENCLAW_API_KEY
});
```

Python

```bash
pip install openclaw-python
```

```python
import openclaw

client = openclaw.Client(api_key="your_api_key")
```

Go

```bash
go get github.com/openclaw/go-client
```

```go
import "github.com/openclaw/go-client"

client := openclaw.NewClient("your_api_key")
```

Ready to Integrate?

With the OpenClaw API, you can bring powerful AI capabilities to any application. Start with the REST API for simple integrations, or use WebSocket for real-time features.
