
API Documentation

Quick Start

Get your first AI response in 3 steps.

1. Create an account

Sign up at /register. You get 500 free credits instantly.

2. Create an API key

Go to your API Keys dashboard and create a new key. Copy it — you won't see it again.

3. Make your first request

import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
console.log('Credits used:', response.tchavi.credits_used);

Authentication

All API requests require a valid API key sent in the Authorization header:

Authorization: Bearer YOUR_API_KEY

API keys can be created and managed from your dashboard. Each key is tied to your account's credit balance. You can create multiple keys for different projects.

Keep your API keys secret. Do not share them in client-side code, public repositories, or URLs.

Tip: Store your key in an environment variable and read it at runtime — never hardcode it in source code.
# .env
TCHAVI_API_KEY="sk-tch_..."

// Node.js
const client = new Tchavi({ apiKey: process.env.TCHAVI_API_KEY });

# Python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["TCHAVI_API_KEY"], base_url="...")

Base URL

https://tchavi.com/api/api/v1

Tchavi is 100% compatible with the OpenAI API format. If you already use the OpenAI SDK or any OpenAI-compatible library, just change the base URL and API key — the rest of your code stays the same.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://tchavi.com/api/api/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

Chat Completions

POST /v1/chat/completions

Generate a chat completion from a list of messages. This is the primary endpoint for text generation with all supported models.

Note: The parameter tables below list the common fields shared across models. Each model may support additional model-specific parameters (e.g. vision input, tool calling, JSON mode). For the exhaustive list of parameters a given model accepts, open its details page at /models and switch to the API tab — the parameter reference there is generated from the model's declared capabilities.

Request body

Parameter | Type | Required | Description
model | string | Yes | Model ID (e.g. "gpt-4o-mini", "claude-sonnet-4-6")
messages | array | Yes | Array of message objects. Each has a role (system, user, or assistant) and content. system sets the AI's behavior; user is your message; assistant is a prior AI reply.
temperature | number | No | Controls randomness. 0 = deterministic/focused, 1 = balanced (default), 2 = highly creative/random.
max_tokens | integer | No | Maximum tokens to generate
stream | boolean | No | Stream response as SSE. Default: false
top_p | number | No | Nucleus sampling parameter (0–1)
stop | string or string[] | No | Up to 4 stop sequences. The model stops generating when it hits one.
frequency_penalty | number | No | -2.0 to 2.0. Positive values penalize repeated tokens. Default: 0
presence_penalty | number | No | -2.0 to 2.0. Positive values push the model toward new topics. Default: 0
seed | integer | No | Reproducibility seed. Same seed + params returns similar output (best-effort).
response_format | object | No | { type: "json_object" } or { type: "json_schema", json_schema: ... } for structured output. Model support varies — see the model's API tab.
tools | array | No | Function definitions the model can call. Paired with tool_choice. Available on tool-capable models only.
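
Assembled as JSON, a request using several of the optional fields above might look like this (a sketch; the prompt and parameter values are purely illustrative):

```python
import json

# Illustrative request body using the common fields documented above.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three colors as JSON."},
    ],
    "temperature": 0.7,
    "max_tokens": 256,
    "stop": ["\n\n"],                            # up to 4 stop sequences
    "response_format": {"type": "json_object"},  # JSON mode (model support varies)
}

# Serialize for the POST body.
body = json.dumps(payload)
print(body[:50])
```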

Example request

import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of Benin?' },
  ],
  temperature: 0.7,
});

console.log(response.choices[0].message.content);
console.log('Credits used:', response.tchavi.credits_used);

Example response

JSON
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1711234567,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of Benin is Porto-Novo."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 24,
    "completion_tokens": 12,
    "total_tokens": 36
  },
  "tchavi": {
    "credits_used": 2,
    "credits_remaining": 498,
    "model_tier": "budget"
  }
}
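
Parsed as JSON, these fields are plain dictionary lookups. A quick sketch reading the example response above:

```python
import json

# The example response from above, verbatim (compact formatting).
raw = '''{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1711234567,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "The capital of Benin is Porto-Novo."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 24, "completion_tokens": 12, "total_tokens": 36},
  "tchavi": {"credits_used": 2, "credits_remaining": 498, "model_tier": "budget"}
}'''

resp = json.loads(raw)
print(resp["choices"][0]["message"]["content"])  # the assistant reply
print(resp["usage"]["total_tokens"])             # 36
print(resp["tchavi"]["credits_remaining"])       # 498
```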

Streaming

Set stream: true to receive the response token-by-token as Server-Sent Events (SSE). This lets you display text as it arrives rather than waiting for the full response.

import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: process.env.TCHAVI_API_KEY });

const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Tell me a short story.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
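
Under the hood the stream is plain SSE: each event is a `data: {...}` line carrying a content delta, terminated by a `data: [DONE]` sentinel. A minimal parser, sketched here independently of any SDK with a simulated event stream:

```python
import json

def parse_sse_content(lines):
    """Concatenate delta content from chat-completion SSE lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines / keep-alive comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        text.append(delta.get("content") or "")
    return "".join(text)

# Simulated event stream (shapes mirror the chunk format used above):
events = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
print(parse_sse_content(events))  # Hello!
```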

Image Generation

POST /v1/images/generations

Generate images from text prompts across all supported image models — Nano Banana, Imagen, GPT Image, DALL·E, and more. The same endpoint also handles image editing when you pass reference images (aliased as POST /v1/images/edits).

Common request body

The fields below are shared by every image model. Model-specific options — size, aspect_ratio, resolution, quality, output_format, negative_prompt, seed, background, etc. — depend on the family. Open the model on /models and switch to the API tab for the full parameter reference.

Parameter | Type | Required | Description
model | string | Yes | Any image model ID — e.g. nano-banana-pro, imagen-4, gpt-image-1, dall-e-3.
prompt | string | Yes | Text description of the image to generate.
n | integer | No | Number of images (1–4). Default: 1
response_format | string | No | b64_json (default — base64 in response) or url (hosted URL). Model support varies.
images | string[] | No | Base64-encoded reference images for editing. The max number accepted depends on the model (e.g. 14 for Nano Banana, 16 for GPT Image).
user | string | No | Optional end-user identifier for abuse monitoring.

Example

Swap model for any image model ID — parameters beyond those shown below must match that model's API tab.

import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const result = await client.images.generations.create({
  model: 'YOUR_MODEL_ID',
  prompt: 'A colorful parrot on a branch, digital art',
});

console.log(result.data[0].b64_json);
console.log('Credits used:', result.tchavi.credits_used);

The response contains base64-encoded image data in data[0].b64_json. Here's how to use it:

// Display the image in a browser
const img = document.createElement('img');
img.src = `data:image/png;base64,${result.data[0].b64_json}`;
document.body.appendChild(img);
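
Outside the browser, the same base64 payload can be decoded and written straight to disk. A sketch, where `fake` stands in for a real `result.data[0].b64_json` value:

```python
import base64

def save_b64_image(b64: str, path: str) -> int:
    """Decode a b64_json payload and write it as a binary image file."""
    raw = base64.b64decode(b64)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Round-trip demo with fake PNG-like bytes in place of a real API payload:
fake = base64.b64encode(b"\x89PNG\r\n\x1a\nhello").decode()
n = save_b64_image(fake, "out.png")
print(n, "bytes written")  # 13 bytes written
```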

Audio

Tchavi supports two audio endpoints: text-to-speech (TTS) for generating audio from text, and transcription (Whisper) for converting audio files to text.

Text-to-Speech

POST /v1/audio/speech

Converts text to spoken audio. Returns raw audio bytes.

Parameter | Type | Required | Description
model | string | Yes | "tts-1" (faster) or "tts-1-hd" (higher quality)
input | string | Yes | The text to convert to speech (max 4096 characters)
voice | string | Yes | alloy, ash, ballad, cedar, coral, echo, fable, marin, nova, onyx, sage, shimmer
response_format | string | No | mp3, opus, aac, flac, wav, pcm. Default: mp3
speed | number | No | Playback speed 0.25–4.0. Default: 1.0

import Tchavi from '@tchavi/sdk';
import { writeFileSync } from 'fs';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const response = await client.audio.speech.create({
  model: 'tts-1',
  input: 'Tchavi is the best AI API gateway in Africa.',
  voice: 'nova',
  response_format: 'mp3',
});

const buffer = Buffer.from(await response.arrayBuffer());
writeFileSync('speech.mp3', buffer);

Transcription (Whisper)

POST /v1/audio/transcriptions

Transcribes audio files to text. Send as multipart/form-data.

Parameter | Type | Required | Description
model | string | Yes | "whisper-1"
file | file | Yes | Audio file (mp3, wav, m4a, webm, ogg…). Max 25MB
language | string | No | ISO-639-1 code (e.g. "fr", "en"). Auto-detected if omitted
response_format | string | No | json, text, srt, vtt, verbose_json. Default: json
prompt | string | No | Optional text to guide the model's style or continue a previous segment. Must match the audio language.
temperature | number | No | Sampling temperature 0–1. Higher values yield more varied transcriptions. Default: 0

import Tchavi from '@tchavi/sdk';
import { createReadStream } from 'fs';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const result = await client.audio.transcriptions.create({
  model: 'whisper-1',
  file: createReadStream('audio.mp3'),
  language: 'fr',
});

console.log(result.text);
console.log('Duration:', result.tchavi.duration_minutes, 'min');
console.log('Credits used:', result.tchavi.credits_used);

Embeddings

POST /v1/embeddings

Embeddings convert text into a numeric vector that captures its semantic meaning. Use them for semantic search (find content by meaning, not keywords), clustering similar documents, recommendations, and RAG (retrieval-augmented generation) pipelines.

import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const response = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Tchavi is the best AI API gateway in Africa.',
});

console.log(response.data[0].embedding);
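
A typical next step is ranking texts by comparing their vectors with cosine similarity, the usual metric for semantic search. A sketch using plain Python lists, with toy vectors standing in for real `response.data[i].embedding` values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings:
query = [0.1, 0.3, 0.5]
doc_a = [0.1, 0.3, 0.5]   # same direction as the query -> similarity ~1.0
doc_b = [0.5, -0.3, 0.1]  # mostly unrelated -> similarity near 0
print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```

Documents are then sorted by descending similarity to the query vector.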

Models

Tchavi gives you access to 40+ AI models from OpenAI, Anthropic, Google, and more — all through a single API. Models are organized into budget groups:

Model | Provider | Type | Credits
GPT-5 Nano | OpenAI | chat | 2 cr/req
GPT-5 Mini | OpenAI | chat | 8 cr/req
GPT-5 | OpenAI | chat | 37 cr/req
Claude Opus 4.7 | Anthropic | chat | 98 cr/req
Claude Sonnet 4.6 | Anthropic | chat | 59 cr/req
Whisper | OpenAI | audio | 20 cr/min
DeepSeek Chat | DeepSeek | chat | 2 cr/req
DeepSeek Reasoner | DeepSeek | chat | 2 cr/req
Mistral Small 4 | Mistral | chat | 3 cr/req
Devstral 2 | Mistral | chat | 9 cr/req
o3 | OpenAI | chat | 33 cr/req
TTS HD | OpenAI | audio | 100 cr/1K chars
GPT Image 1.5 | OpenAI | image | 33 cr/image
DALL-E 3 | OpenAI | image | 132 cr/image
Imagen 4 | Google | image | 132 cr/image
Nano Banana Pro | Google | image | 223 cr/image

See all available models on the Models page. New models are added within 48 hours of release.

Credits & Billing

Tchavi uses a credit-based billing system. Each API request consumes credits based on the model used and the number of tokens processed.

How credits are calculated

  • Chat models: Credits = (input_tokens × input_rate + output_tokens × output_rate) / 1,000, rounded up to a whole credit
  • Image models: Flat credit cost per image based on resolution
  • TTS (text-to-speech): Credits per 1K characters of input text
  • Transcription (Whisper): Credits per minute of audio
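
The chat formula, run with this section's worked numbers (500 input + 200 output tokens at 1 cr/1K input and 2 cr/1K output), can be sketched as follows; the per-1K rates passed in are illustrative:

```python
import math

def chat_credits(input_tokens, output_tokens, in_rate_per_1k, out_rate_per_1k):
    """Credit cost for a chat request: per-1K-token rates, rounded up."""
    raw = (input_tokens / 1000) * in_rate_per_1k + (output_tokens / 1000) * out_rate_per_1k
    return math.ceil(raw)

# 500 input + 200 output at 1 cr/1K input, 2 cr/1K output:
# raw = 0.5 + 0.4 = 0.9, rounded up to 1 credit
print(chat_credits(500, 200, 1, 2))  # 1
```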

Response headers

Every API response includes metadata headers:

Header | Description
X-Credits-Used | Credits consumed by this request
X-Credits-Remaining | Your current credit balance
X-RateLimit-RPM-Limit | Your requests-per-minute limit
X-RateLimit-RPM-Remaining | Requests remaining in the current minute
X-RateLimit-TPM-Limit | Your tokens-per-minute limit
X-RateLimit-TPM-Remaining | Token budget remaining in the current minute
X-Request-Id | Unique request ID for support/debugging
Retry-After | Seconds to wait before retrying (on 429 responses)

Example: A gpt-4o-mini request with 500 input tokens + 200 output tokens at the Budget tier (e.g. 1 cr/1K input, 2 cr/1K output) costs (500/1000 × 1) + (200/1000 × 2) = 0.9 credits, rounded up to 1 credit. The exact rate for each model is shown in the Models table.

Recharging

Buy credit packs from your billing dashboard using Wave, Orange Money, MTN MoMo, and 30+ mobile money operators. Credits are added instantly after payment.

Rate Limits

Tchavi is pay-as-you-go: every user can call every model, and credits are the natural gate. Rate limits depend on your account level, which is unlocked automatically based on your lifetime spend on the platform — there is no subscription. Two independent limits apply per user per minute:

  • RPM — maximum number of requests per minute.
  • TPM — maximum tokens processed per minute (input + output). For TTS, each character counts as 1 token. For Whisper, each billed minute counts as 1,000 tokens.

Account level | Unlocked at | RPM | TPM | Max API keys
Free | Default | 10 req/min | 100,000 | 1
Builder | First top-up | 30 req/min | 500,000 | 3
Growth | 10,000 FCFA lifetime | 120 req/min | 1,000,000 | 10
Pro | 50,000 FCFA lifetime | 300 req/min | 3,000,000 | Unlimited
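
For budgeting against the TPM limit, the token-equivalent rules stated above can be sketched as a small helper (the function and label names are illustrative; the conversions are the ones given for TTS and Whisper):

```python
def tpm_tokens(kind, amount):
    """Token-equivalents a request counts against the TPM budget."""
    if kind == "chat":
        return amount         # input + output tokens, counted as-is
    if kind == "tts":
        return amount         # 1 character = 1 token
    if kind == "whisper":
        return amount * 1000  # 1 billed minute = 1,000 tokens
    raise ValueError(f"unknown kind: {kind}")

print(tpm_tokens("tts", 4096))   # 4096
print(tpm_tokens("whisper", 5))  # 5000
```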

A global IP-based limit of 500 req/min also applies across all users sharing the same IP address.

When a limit is exceeded you receive a 429 response with a Retry-After header indicating how many seconds to wait before retrying. Your current level, RPM budget, and TPM budget are always visible in the X-RateLimit-* response headers (see Credits & Billing).
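
A common client-side pattern is to honor Retry-After with a simple retry loop. A sketch, where `RateLimited` and `flaky` are stand-ins for a 429 response and any request function:

```python
import time

class RateLimited(Exception):
    """Stand-in for a 429 response; retry_after mirrors the Retry-After header."""
    def __init__(self, retry_after):
        super().__init__(f"429, retry after {retry_after}s")
        self.retry_after = retry_after

def with_retry(call, max_attempts=3, sleep=time.sleep):
    """Call `call`, waiting out Retry-After on 429s, up to max_attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimited as exc:
            if attempt == max_attempts - 1:
                raise  # budget exhausted, surface the error
            sleep(exc.retry_after)

# Demo: fail twice with a 429, then succeed (sleep stubbed out for speed).
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited(retry_after=1)
    return "ok"

print(with_retry(flaky, sleep=lambda s: None))  # ok
```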

Error Handling

Tchavi returns standard HTTP status codes. Errors include a JSON body:

JSON
{
  "error": {
    "code": "insufficient_credits",
    "message": "You don't have enough credits for this request.",
    "status": 402
  }
}

Common error codes

Status | Code | Description
401 | invalid_api_key | Missing or invalid API key
402 | insufficient_credits | Not enough credits — recharge to continue
403 | model_not_allowed | Your account doesn't have access to this model tier
429 | rate_limit_exceeded | RPM limit reached — check the Retry-After header
429 | user_rate_limit_exceeded | Per-user RPM limit reached — raise your account level to increase
429 | tpm_rate_limit_exceeded | Tokens-per-minute limit reached — wait before retrying
500 | internal_error | Server error — retry or contact support
502 | upstream_error | AI provider is temporarily unavailable
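
Because every error body has the same shape, it can be parsed uniformly before deciding how to react. A sketch using the insufficient_credits body shown above:

```python
import json

def parse_error(body: str):
    """Extract (status, code, message) from a Tchavi error body."""
    err = json.loads(body)["error"]
    return err["status"], err["code"], err["message"]

# The example error body from above:
body = '{"error": {"code": "insufficient_credits", "message": "You don\'t have enough credits for this request.", "status": 402}}'
status, code, message = parse_error(body)
print(status, code)  # 402 insufficient_credits
```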

SDKs

@tchavi/sdk (recommended)

Our official SDK wraps the API with type-safe methods and credit tracking.

Bash
npm install @tchavi/sdk
TypeScript
import Tchavi from '@tchavi/sdk';

const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
console.log('Credits used:', response.tchavi.credits_used);

OpenAI SDK (drop-in)

Already using the OpenAI Python or Node.js SDK? Just change the base URL:

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://tchavi.com/api/api/v1",
)

# Use it exactly like OpenAI
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://tchavi.com/api/api/v1',
});

const response = await client.chat.completions.create({
  model: 'claude-sonnet-4-6',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

cURL

No SDK needed — use standard HTTP requests:

Bash
curl -X POST https://tchavi.com/api/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'