API key in the form Bearer <api_key>; you can get it from here.
Request
Array of message objects containing the conversation history.
The role of the message author. One of system, user, assistant, or tool.
The content of the message. Can be a string or array of content parts for multimodal inputs.
An optional name for the participant, which helps the model differentiate between participants of the same role.
For tool messages, the ID of the tool call this message is responding to.
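Putting these fields together, a conversation history that includes a tool round-trip might look like the following (an illustrative sketch; the get_weather function and the call_abc123 ID are hypothetical):

```json
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "What is the weather in Paris?"},
  {
    "role": "assistant",
    "content": null,
    "tool_calls": [
      {
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
      }
    ]
  },
  {"role": "tool", "tool_call_id": "call_abc123", "content": "{\"temp_c\": 18}"}
]
```

Note that the tool message's tool_call_id must match the id of the assistant's earlier tool call.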
Alternate list of models for routing overrides.
Preferences for provider routing.
List of prompt transforms (OpenRouter-only).
Enable streaming of results via Server-Sent Events.
Maximum number of tokens to generate (range: [1, context_length)).
Sampling temperature (range: [0, 2]).
Seed for deterministic outputs.
Top-p sampling value (range: (0, 1]).
Top-k sampling value (range: [1, Infinity)).
Frequency penalty (range: [-2, 2]).
Presence penalty (range: [-2, 2]).
Repetition penalty (range: (0, 2]).
Mapping of token IDs to bias values.
Number of top log probabilities to return.
Minimum probability threshold (range: [0, 1]).
Alternate top sampling parameter (range: [0, 1]).
Stop sequences - generation will stop if any of these strings are encountered.
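For instance, a request body combining several of the sampling controls above might look like this (the values are illustrative, not recommendations):

```json
{
  "model": "blackboxai/openai/gpt-4",
  "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
  "temperature": 0.8,
  "top_p": 0.95,
  "max_tokens": 128,
  "frequency_penalty": 0.5,
  "stop": ["\n\n"]
}
```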
Tool definitions following OpenAI’s tool calling format.
Type of tool, typically function.
Function definition.
Description of what the function does.
JSON Schema object defining the function parameters.
Controls which (if any) tool is called by the model:
- none - the model will not call any tool
- auto - the model can pick between generating a message or calling tools
- required - the model must call one or more tools
- To force a particular tool, pass {"type": "function", "function": {"name": "function_name"}}
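Combining the tool fields above, a request could define a single function and force the model to call it like this (the get_weather function and its schema are a hypothetical example):

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name"}
          },
          "required": ["city"]
        }
      }
    }
  ],
  "tool_choice": {"type": "function", "function": {"name": "get_weather"}}
}
```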
Enforce structured output format.
A stable identifier for your end-users. Used to help detect and prevent abuse.
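Assuming these fields follow OpenAI's conventions (an assumption, since this document does not spell out their shapes), a request enabling structured output and tagging the end-user might include:

```json
{
  "response_format": {"type": "json_object"},
  "user": "user-1234"
}
```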
Response
Unique identifier for the chat completion.
Unix timestamp when the completion was created.
The model used for the completion.
Object type, always chat.completion, or chat.completion.chunk for streaming.
System fingerprint for the model configuration.
Array of completion choices.
Reason the generation stopped. One of stop, length, content_filter, tool_calls, or error.
Index of the choice in the list.
The generated message.
Role of the message author, typically assistant.
Tool calls made by the assistant.
Unique identifier for the tool call.
Type of tool, typically function.
The function call details.
Name of the function being called.
JSON string of arguments for the function.
Deprecated function call field (use tool_calls instead).
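As a sketch, an assistant message containing a tool call might be returned as follows (the ID and function name are illustrative; note that arguments is a JSON-encoded string, not a nested object):

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"city\": \"Paris\"}"
      }
    }
  ]
}
```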
Provider-specific response fields.
Raw finish reason from the underlying provider.
Token usage information.
Number of tokens in the completion.
Number of tokens in the prompt.
Total number of tokens used (prompt + completion).
completion_tokens_details
Detailed breakdown of completion tokens.
accepted_prediction_tokens
Number of accepted prediction tokens.
Number of audio tokens in the completion.
Number of reasoning/thinking tokens used.
rejected_prediction_tokens
Number of rejected prediction tokens.
Detailed breakdown of prompt tokens.
Number of audio tokens in the prompt.
Number of cached tokens used from previous requests.
The provider that served the request.
curl -X POST https://api.blackbox.ai/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "blackboxai/openai/gpt-4",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
],
"temperature": 0.7,
"max_tokens": 256,
"stream": false
}'
{
"id":"gen-...",
"created":1757140020,
"model":"openai/gpt-4",
"object":"chat.completion",
  "system_fingerprint":null,
"choices":[
{
"finish_reason":"stop",
"index":0,
"message":{
"content":"The capital of France is Paris.",
"role":"assistant",
  "tool_calls":null,
  "function_call":null
},
"provider_specific_fields":{
"native_finish_reason":"stop"
}
}
],
"usage":{
"completion_tokens":7,
"prompt_tokens":14,
"total_tokens":21,
"completion_tokens_details":{
  "accepted_prediction_tokens":null,
  "audio_tokens":null,
"reasoning_tokens":0,
  "rejected_prediction_tokens":null
},
"prompt_tokens_details":{
"audio_tokens":0,
"cached_tokens":0
}
},
"provider":"OpenAI"
}
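A minimal sketch of reading the fields described above from a decoded response body. It uses an abridged copy of the sample response (the id value is a placeholder):

```python
import json

# Abridged version of the sample response body shown above.
raw = '''
{
  "id": "gen-123",
  "created": 1757140020,
  "model": "openai/gpt-4",
  "object": "chat.completion",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {"role": "assistant", "content": "The capital of France is Paris."}
    }
  ],
  "usage": {"completion_tokens": 7, "prompt_tokens": 14, "total_tokens": 21}
}
'''

data = json.loads(raw)

# The generated text lives in the first choice's message.
reply = data["choices"][0]["message"]["content"]
finish = data["choices"][0]["finish_reason"]
total = data["usage"]["total_tokens"]

print(reply)   # The capital of France is Paris.
print(finish)  # stop
print(total)   # 21
```

Always check finish_reason before using the message: a value of length means the output was truncated by max_tokens, and tool_calls means the model wants a tool invoked instead of producing a final answer.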