Learn how to integrate SmarterRouter with various AI applications and frameworks.
- Python OpenAI SDK
- curl
- OpenWebUI
- VS Code Extensions (Continue, Cursor)
- SillyTavern & Other Chat UIs
- Anthropic SDK
SmarterRouter is compatible with any OpenAI-compatible client. See the API Reference for full details.
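In practice, that means the standard OpenAI chat-completions JSON body is accepted as-is. A minimal sketch of a typical request payload (field names follow the OpenAI API; smarterrouter/main is the router's single exposed model name):

```python
import json

# A typical minimal OpenAI-compatible chat request body.
# SmarterRouter routes internally, so the model name never changes.
payload = {
    "model": "smarterrouter/main",
    "messages": [{"role": "user", "content": "Hello!"}],
}

print(json.dumps(payload, indent=2))
```

Any client that can POST this JSON to /v1/chat/completions should work.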
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11436/v1",
    api_key="dummy-key",  # SmarterRouter doesn't require a key, but some clients need it
)

response = client.chat.completions.create(
    model="smarterrouter/main",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python function to calculate Fibonacci numbers"},
    ],
    temperature=0.7,
    max_tokens=500,
)

print(response.choices[0].message.content)
print(f"Model used: {response.model}")  # Always 'smarterrouter/main'
```
To see which model actually served a request, check the response signature or use /admin/explain.

Basic chat completion:

```bash
curl -X POST http://localhost:11436/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "max_tokens": 100
  }'
```

Streaming:

```bash
curl -X POST http://localhost:11436/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": true
  }'
```

Embeddings:

```bash
curl -X POST http://localhost:11436/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nomic-embed-text",
    "input": "The quick brown fox jumps over the lazy dog"
  }'
```

See the detailed guide: OpenWebUI Integration
Quick setup:
- OpenWebUI → Settings → Connections → Add Connection
- Name: SmarterRouter
- Base URL: http://localhost:11436/v1
- API Key: (leave empty)
- Model: smarterrouter/main
- Save
Configure ~/.continue/config.json:

```json
{
  "models": [
    {
      "title": "SmarterRouter",
      "model": "smarterrouter/main",
      "apiBase": "http://localhost:11436/v1",
      "apiKey": "dummy"
    }
  ]
}
```

In Cursor settings (settings.json):

```json
{
  "cursor.router": {
    "baseUrl": "http://localhost:11436/v1",
    "apiKey": "dummy"
  }
}
```

Most chat UIs (SillyTavern, JanitorAI, etc.) support custom OpenAI-compatible endpoints.
- Open the API settings in your chat UI
- Select "Custom" or "OpenAI" API mode
- Configure:
  - API URL: http://localhost:11436/v1
  - API Key: (leave empty or any string)
  - Model: smarterrouter/main
- Save and start chatting
SmarterRouter provides OpenAI-compatible endpoints. For Anthropic SDK compatibility, use the OpenAI base URL:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11436/v1",
    api_key="dummy",
)

# Note: SmarterRouter expects OpenAI-format messages, even from Anthropic-first apps
response = client.chat.completions.create(
    model="smarterrouter/main",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

For true Anthropic API format support, you may need a compatibility layer. See the Backends Documentation for details.
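Most of what such a compatibility layer does is translate message formats. A minimal sketch of that translation, assuming Anthropic-style input with a top-level system prompt and content blocks; the helper name is illustrative, not part of SmarterRouter:

```python
def anthropic_to_openai_messages(system, messages):
    """Translate Anthropic-style messages (top-level system prompt,
    content blocks) into a flat OpenAI-style messages list."""
    out = []
    if system:
        out.append({"role": "system", "content": system})
    for m in messages:
        content = m["content"]
        # Anthropic content may be a list of blocks; join the text blocks
        if isinstance(content, list):
            content = "".join(
                block.get("text", "") for block in content if block.get("type") == "text"
            )
        out.append({"role": m["role"], "content": content})
    return out

msgs = anthropic_to_openai_messages(
    "You are a helpful assistant.",
    [{"role": "user", "content": [{"type": "text", "text": "Hello!"}]}],
)
print(msgs)
# → [{'role': 'system', 'content': 'You are a helpful assistant.'},
#    {'role': 'user', 'content': 'Hello!'}]
```

The resulting list can be passed straight to the client.chat.completions.create call above. Tool use, images, and streaming deltas would need additional translation.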
Using the official OpenAI SDK:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:11436/v1',
  apiKey: 'dummy-key' // Not required, but some clients need it
});

const completion = await openai.chat.completions.create({
  model: 'smarterrouter/main',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);
```

```go
package main

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point the OpenAI client at SmarterRouter's OpenAI-compatible endpoint.
	config := openai.DefaultConfig("dummy")
	config.BaseURL = "http://localhost:11436/v1"
	client := openai.NewClientWithConfig(config)

	r := gin.Default()
	r.POST("/chat", func(c *gin.Context) {
		resp, err := client.CreateChatCompletion(
			context.Background(),
			openai.ChatCompletionRequest{
				Model: "smarterrouter/main",
				Messages: []openai.ChatCompletionMessage{
					{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
				},
			},
		)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, resp)
	})
	r.Run(":8080")
}
```

- Verify SmarterRouter is running:
  curl http://localhost:11436/health
- Check firewall rules and network connectivity
- Ensure correct port (default: 11436)
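The first check can be scripted; a minimal sketch using only the Python standard library (the helper name is illustrative, not part of SmarterRouter):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe SmarterRouter's default port before debugging at the HTTP layer
print(is_reachable("localhost", 11436))
```

If the port is reachable but requests still fail, the problem is at the HTTP layer (URL path, headers, payload) rather than the network.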
- SmarterRouter exposes only smarterrouter/main as the model name
- Don't request other model names; the router selects a backend model internally
- SmarterRouter doesn't require an API key for chat endpoints
- Some clients require a non-empty key; use any string (e.g., "dummy", "sk-...")
- First request to a model may be slow (cold start)
- Check Performance Tuning for optimization tips
- Consider pinning a small model: ROUTER_PINNED_MODEL=phi3:mini
- API Reference - Full endpoint documentation
- Configuration - Customize SmarterRouter behavior
- Troubleshooting - Common issues and solutions
- Performance Tuning - Optimize for your workload