Integrate ZeroLimitAI into your applications with our OpenAI-compatible REST API. Available on Optimized, Premium, and Lifetime plans.
| | |
|---|---|
| **Base URL** | `https://zerolimitai.com/api/v1` |
| **Authentication** | Bearer token (`Authorization` header) |
| **Content type** | JSON (`application/json`) |
| **Clients** | OpenAI SDK & REST clients |

## Authentication

All requests must include your API key as a Bearer token in the `Authorization` header. Generate your key from the Developer settings page.
```
Authorization: Bearer zlai_xxxxxxxxxxxxxxxxxxxxxxxx
```
## Endpoints

- `POST /api/v1/chat`
- `GET /api/v1/models`
- `GET /api/v1/usage`

### POST /api/v1/chat

Send a conversation and receive a completion. Supports streaming via Server-Sent Events and optional agent context.
| Parameter | Type | Description |
|---|---|---|
| `messages` (required) | array | Array of `{ role, content }` objects. `role` is `"user"`, `"assistant"`, or `"system"`. |
| `agentId` | string | Optional. ID of a saved agent. Its system prompt and configured model are used automatically. |
| `stream` | boolean | Optional (default `false`). If `true`, returns a `text/event-stream` SSE response compatible with the OpenAI streaming format. |
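With `stream: true`, the response arrives as Server-Sent Events. As a sketch using only the standard library — and assuming the chunks follow the OpenAI-style `data: {...}` / `data: [DONE]` framing mentioned above — the stream can be consumed like this:

```python
import json
import urllib.request

def extract_delta(line: str) -> str:
    """Pull the content delta out of one SSE line; '' if none.

    Assumes OpenAI-style framing: 'data: {json}' per chunk,
    terminated by a final 'data: [DONE]' line.
    """
    if not line.startswith("data: "):
        return ""
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content") or ""

def stream_chat(api_key: str, messages: list) -> str:
    """POST to /api/v1/chat with stream=true and accumulate the reply."""
    req = urllib.request.Request(
        "https://zerolimitai.com/api/v1/chat",
        data=json.dumps({"messages": messages, "stream": True}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    text = []
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # iterate the response line by line
            line = raw.decode("utf-8").strip()
            if line:
                text.append(extract_delta(line))
    return "".join(text)
```

If you are already using the OpenAI SDK, the streaming example further down does the same thing with less code; this version is for clients without the SDK.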
**Example request**

```shell
curl https://zerolimitai.com/api/v1/chat \
  -H "Authorization: Bearer zlai_xxxxxxxxxxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

**Example response**

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "mistralai/Mistral-7B-Instruct-v0.3",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "The capital of France is Paris." },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 9,
    "total_tokens": 23
  }
}
```

**Streaming with the Python SDK**

```python
import openai

client = openai.OpenAI(
    base_url="https://zerolimitai.com/api/v1",
    api_key="zlai_xxxxxxxxxxxxxxxxxxxxxxxx",
)

stream = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    messages=[{"role": "user", "content": "Tell me a joke."}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```

### GET /api/v1/models

Returns the list of models available to your account based on your plan tier. Response format is compatible with the OpenAI /v1/models endpoint.
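Because the payload matches the OpenAI /v1/models format, the SDK's `models.list()` should work against this endpoint as well — a sketch, not verified against the live API:

```python
def model_ids(models_response: dict) -> list:
    """Extract the ids from an OpenAI-style model list payload."""
    return [m["id"] for m in models_response["data"]]

def list_models(api_key: str) -> list:
    """List model ids via the OpenAI SDK (assumes the documented
    OpenAI compatibility extends to the models endpoint)."""
    import openai  # imported here so model_ids() stays dependency-free
    client = openai.OpenAI(
        base_url="https://zerolimitai.com/api/v1",
        api_key=api_key,
    )
    return [m.id for m in client.models.list()]
```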
**Example request**

```shell
curl https://zerolimitai.com/api/v1/models \
  -H "Authorization: Bearer zlai_xxxxxxxxxxxxxxxxxxxxxxxx"
```

**Example response**

```json
{
  "object": "list",
  "data": [
    {
      "id": "anthropic/claude-3.5-sonnet",
      "object": "model",
      "created": 1711929600,
      "owned_by": "anthropic"
    },
    {
      "id": "openai/gpt-4o",
      "object": "model",
      "created": 1711929600,
      "owned_by": "openai"
    }
  ]
}
```

### GET /api/v1/usage

Returns usage statistics for your account: daily message count and monthly token/cost aggregates.
**Example request**

```shell
curl https://zerolimitai.com/api/v1/usage \
  -H "Authorization: Bearer zlai_xxxxxxxxxxxxxxxxxxxxxxxx"
```

**Example response**

```json
{
  "today": {
    "used": 42,
    "limit": 500
  },
  "month": {
    "requests": 1204,
    "tokensIn": 892400,
    "tokensOut": 412100,
    "cost": 0.0187
  },
  "plan": "LIFETIMEPRO"
}
```

## Error codes

| Status | Code | Description |
|---|---|---|
| 400 | — | Invalid request body or missing required fields. |
| 401 | — | Missing or invalid API key. |
| 402 | `trial_expired` | Account trial has expired. Upgrade to continue. |
| 403 | `upgrade_required` | Your plan does not include API access. |
| 404 | `agent_not_found` | The specified `agentId` does not exist. |
| 429 | — | Rate limit exceeded. See the `Retry-After` header. |
| 429 | `daily_limit` | Daily message limit reached for your plan. |
| 502 | — | Upstream model error. |
## Rate limits

Rate limits apply per API key. When a limit is exceeded, you receive a 429 response with a `Retry-After` header indicating the number of seconds until the limit resets.
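A client can honor `Retry-After` with a simple retry loop. A stdlib-only sketch, assuming the header value is an integer number of seconds as described above (note that a `daily_limit` 429 will not clear quickly, so keep `max_tries` small):

```python
import json
import time
import urllib.error
import urllib.request

def retry_after_seconds(headers, default: float = 1.0) -> float:
    """Parse a Retry-After header (seconds); fall back to a default."""
    value = headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

def post_with_retry(url: str, api_key: str, body: dict, max_tries: int = 3):
    """POST JSON, sleeping on 429 responses until the limit resets."""
    data = json.dumps(body).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    for attempt in range(max_tries):
        req = urllib.request.Request(url, data=data, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())
        except urllib.error.HTTPError as err:
            # Re-raise anything that is not a rate limit, or the final try.
            if err.code != 429 or attempt == max_tries - 1:
                raise
            time.sleep(retry_after_seconds(err.headers))
```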
## SDK compatibility

The API is compatible with the OpenAI client libraries. Just change the `base_url` to `https://zerolimitai.com/api/v1` and use your ZeroLimitAI API key.

```shell
# Python
pip install openai

# Node.js
npm install openai
```
```javascript
// Node.js
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://zerolimitai.com/api/v1",
  apiKey: "zlai_xxxxxxxxxxxxxxxxxxxxxxxx",
});

const response = await client.chat.completions.create({
  model: "mistralai/Mistral-7B-Instruct-v0.3",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

## Ready to build?
Generate your API key from the Developer settings page.