These examples use the Enterprise plan endpoint https://enterprise.blackbox.ai. The API is also available on standard plans at https://api.blackbox.ai, where it is currently experimental.
When sending multi-turn conversations that include tool calls, every tool_call ID and its matching tool_call_id may contain only letters, numbers, underscores, and hyphens ([a-zA-Z0-9_-]).
What causes the error
Using IDs with dots, colons, or other special characters. This commonly happens when you construct IDs yourself or copy them from provider-internal formats:
// These IDs will be REJECTED — they contain dots
{
  "role": "assistant",
  "tool_calls": [
    {
      "id": "toolu_vrtx_01QhtesphwJp7uBdvuFWVhMd.nested.id",
      "type": "function",
      "function": { "name": "get_weather", "arguments": "{}" }
    }
  ]
}
The correct way
Always use the id from the model’s response exactly as returned. If you are generating your own IDs (for example, when replaying a conversation), follow the pattern call_<alphanumeric>:
from openai import OpenAI

client = OpenAI(
    # The SDK appends /chat/completions to base_url, so pass only the base
    base_url="https://enterprise.blackbox.ai",
    api_key="<BLACKBOX_API_KEY>",
)

# Step 1: Send the initial request with tools
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)

message = response.choices[0].message

# Step 2: Use the tool_call ID exactly as the model returned it
# The ID will look like "call_abc123" — never modify it
tool_call_id = message.tool_calls[0].id

# Step 3: Send the tool result back, matching the exact ID
messages = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {
                "id": tool_call_id,  # Use the exact ID from the response
                "type": "function",
                "function": message.tool_calls[0].function
            }
        ]
    },
    {
        "role": "tool",
        "tool_call_id": tool_call_id,  # Must match the tool_call ID above
        "content": '{"temperature": 22, "condition": "sunny"}'
    }
]

follow_up = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)
Never construct tool call IDs manually with characters like . or :. If you need to generate your own IDs (for testing or replaying conversations), use a format like call_abc123 — only alphanumeric characters, hyphens, and underscores.
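When replaying or synthesizing conversations, it can help to validate and generate IDs programmatically. A minimal sketch (these helper names are ours, not part of any SDK):

```python
import re
import uuid

# Allowed characters for tool call IDs, per the constraint above
TOOL_CALL_ID_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

def is_valid_tool_call_id(tool_call_id: str) -> bool:
    """Return True if the ID contains only letters, digits, underscores, and hyphens."""
    return bool(TOOL_CALL_ID_PATTERN.match(tool_call_id))

def generate_tool_call_id() -> str:
    """Generate a replay-safe ID in the call_<alphanumeric> style."""
    return f"call_{uuid.uuid4().hex[:24]}"
```

Checking IDs at the point where you rebuild a conversation catches the rejected-character case long before the API returns a 400.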
When the model responds with one or more tool_calls, every tool call must have a corresponding tool message with a matching tool_call_id immediately after the assistant message.
What causes the error
Sending an assistant message that contains tool_calls without providing a tool result for each one in the next turn. This commonly happens when you only handle the first tool call and ignore the rest, or when you skip the tool result entirely and send another user message instead:
// The model returned TWO tool calls, but only ONE tool result is provided
{
  "messages": [
    { "role": "user", "content": "Weather and time in London?" },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        { "id": "call_abc", "type": "function", "function": { "name": "get_weather", "arguments": "{}" } },
        { "id": "call_def", "type": "function", "function": { "name": "get_time", "arguments": "{}" } }
      ]
    },
    { "role": "tool", "tool_call_id": "call_abc", "content": "{\"temp\": 15}" }
    // MISSING: tool result for "call_def" — this will cause a 400 error
  ]
}
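A quick pre-flight check over the messages list can surface this mismatch before the API returns a 400. A sketch (the helper is ours; it assumes messages are plain dicts in the chat-completions shape):

```python
def find_missing_tool_results(messages):
    """Return the set of tool_call IDs that have no matching tool message."""
    expected = set()
    answered = set()
    for msg in messages:
        if msg.get("role") == "assistant":
            # Collect every ID the assistant message promises a result for
            for tc in msg.get("tool_calls") or []:
                expected.add(tc["id"])
        elif msg.get("role") == "tool":
            answered.add(msg.get("tool_call_id"))
    return expected - answered
```

Run it just before each request; a non-empty result means at least one tool message is missing.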
The correct way
Loop through all tool_calls and send a tool message for each one:
from openai import OpenAI

client = OpenAI(
    # The SDK appends /chat/completions to base_url, so pass only the base
    base_url="https://enterprise.blackbox.ai",
    api_key="<BLACKBOX_API_KEY>",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get current time for a timezone",
            "parameters": {
                "type": "object",
                "properties": {"timezone": {"type": "string"}},
                "required": ["timezone"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What's the weather and time in London?"}
    ],
    tools=tools
)

message = response.choices[0].message

# The model may return multiple tool_calls — handle ALL of them
messages = [
    {"role": "user", "content": "What's the weather and time in London?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {"id": tc.id, "type": "function", "function": tc.function}
            for tc in message.tool_calls
        ]
    }
]

# Add a tool result for EVERY tool call
for tool_call in message.tool_calls:
    if tool_call.function.name == "get_weather":
        result = '{"temperature": 15, "condition": "cloudy"}'
    elif tool_call.function.name == "get_time":
        result = '{"time": "14:30 GMT"}'
    else:
        result = '{"error": "Unknown tool"}'
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result
    })

# Now send all tool results back
follow_up = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=tools
)
If a tool call fails on your side, still send back a tool message for it — set the content to a JSON error like {"error": "service unavailable"}. The model can use this to adjust its response. Never skip a tool result.
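The execute-or-report pattern above can be wrapped in a small dispatcher so a failure can never silently drop a result. A sketch under our own naming (run_tool and the handlers dict are illustrative, not SDK API):

```python
import json

def run_tool(tool_call, handlers):
    """Execute one tool call and always return a tool message, even on failure."""
    try:
        handler = handlers[tool_call.function.name]
        args = json.loads(tool_call.function.arguments or "{}")
        content = json.dumps(handler(**args))
    except Exception as exc:
        # Report the failure to the model instead of skipping the tool result
        content = json.dumps({"error": str(exc)})
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": content,
    }
```

Because the except branch still emits a tool message with the matching tool_call_id, the conversation stays valid even when a handler is missing or raises.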
Preserving Reasoning Blocks in Multi-Turn Requests
When using reasoning models with tool calling, the model returns reasoning_details alongside its response. If you send those reasoning blocks back in a follow-up request, they must be exactly as the model returned them — including any signature fields.
What causes the error
Modifying, reordering, or stripping fields from the reasoning_details before sending them back. This commonly happens when you serialize the response to a database and a field gets dropped, or when you manually rebuild the assistant message and forget the signature:
// The signature has been altered or removed — this will cause a 400 error
{
  "role": "assistant",
  "content": null,
  "tool_calls": [{ "id": "call_abc", "type": "function", "function": { "name": "get_weather", "arguments": "{}" } }],
  "reasoning_details": [
    {
      "type": "reasoning.text",
      "text": "Let me think about what clothes to recommend...",
      "signature": null
    }
  ]
}
The signature was originally a cryptographic string like "erUBMkiJvNVMxLa..." but was set to null during serialization. The provider rejects the entire request because the signature no longer matches.
The correct way
Pass the entire reasoning_details array back without touching it:
from openai import OpenAI

client = OpenAI(
    # The SDK appends /chat/completions to base_url, so pass only the base
    base_url="https://enterprise.blackbox.ai",
    api_key="<BLACKBOX_API_KEY>",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"]
        }
    }
}]

# First call — model reasons and requests a tool
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What should I wear in Boston today?"}
    ],
    tools=tools,
    extra_body={"reasoning": {"max_tokens": 2000}}
)

message = response.choices[0].message

# Pass reasoning_details back EXACTLY as received — do not modify
messages = [
    {"role": "user", "content": "What should I wear in Boston today?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {"id": tc.id, "type": "function", "function": tc.function}
            for tc in message.tool_calls
        ],
        # Keep reasoning_details intact — do not edit, reorder, or
        # strip the signature field
        "reasoning_details": message.reasoning_details
    },
    {
        "role": "tool",
        "tool_call_id": message.tool_calls[0].id,
        "content": '{"temperature": 45, "condition": "rainy"}'
    }
]

# Second call — model continues from where it left off
response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=tools,
    extra_body={"reasoning": {"max_tokens": 2000}}
)
The reasoning_details array contains signature fields that are cryptographically verified by the provider. If you serialize and deserialize these blocks (for example, storing them in a database), make sure no fields are dropped or altered. The entire sequence must match the original output exactly.
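One way to avoid dropped fields is to persist the assistant message as a verbatim JSON round-trip rather than rebuilding it field by field. A sketch (the message shape below is illustrative, and the signature value is a placeholder, not a real signature):

```python
import json

# Illustrative assistant message as returned by the API
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{"id": "call_abc", "type": "function",
                    "function": {"name": "get_weather", "arguments": "{}"}}],
    "reasoning_details": [{
        "type": "reasoning.text",
        "text": "Let me think about what clothes to recommend...",
        "signature": "erUBMkiJvNVMxLa...",  # placeholder value
    }],
}

# A plain JSON round-trip (e.g. to and from a database TEXT column)
# preserves every field, including the signature
stored = json.dumps(assistant_msg)
restored = json.loads(stored)

assert restored == assistant_msg
assert restored["reasoning_details"][0]["signature"] is not None
```

Storing the whole serialized message and replaying it unchanged sidesteps the class of bugs where an ORM schema or a hand-built dict silently omits a field.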
If you don’t need to preserve reasoning continuity across turns, you can simply omit the reasoning_details field from the assistant message. The model will start fresh reasoning for the next turn, which avoids signature validation entirely.