The Messages API supports tool calling to give Claude models access to external functions. Define tools using the Anthropic input_schema format. When the model wants to use a tool, it returns a tool_use content block with stop_reason: "tool_use".
The Messages API (/v1/messages) is fully supported on the Enterprise plan using https://enterprise.blackbox.ai. On standard plans (https://api.blackbox.ai), this endpoint may not work as expected. For the best experience, use an Enterprise API key.
Important — Tool Format: The Messages API uses input_schema for tool parameter definitions. This is different from the Responses API which uses parameters. Using the wrong format will result in errors.
See API Best Practices for how to correctly structure tool call IDs and pair every tool call with a tool result in multi-turn conversations.
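For contrast, here is the same tool sketched in both shapes. Only the first is valid for /v1/messages; the second (Responses-style) form is shown purely to illustrate the difference and will be rejected:

```json
// Messages API (/v1/messages) — correct: parameters live under "input_schema"
{
  "name": "get_weather",
  "description": "Get current weather for a city",
  "input_schema": {
    "type": "object",
    "properties": { "city": { "type": "string" } },
    "required": ["city"]
  }
}

// Responses API style — uses "parameters"; sending this to /v1/messages errors
{
  "type": "function",
  "name": "get_weather",
  "parameters": {
    "type": "object",
    "properties": { "city": { "type": "string" } },
    "required": ["city"]
  }
}
```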

Basic Tool Calling

import os
import requests

response = requests.post(
    'https://enterprise.blackbox.ai/v1/messages',
    headers={
        'Content-Type': 'application/json',
        'Authorization': f"Bearer {os.environ['BLACKBOX_API_KEY']}",
        'anthropic-version': '2023-06-01',
    },
    json={
        'model': 'blackboxai/anthropic/claude-sonnet-4.5',
        'max_tokens': 1024,
        'tools': [
            {
                'name': 'get_weather',
                'description': 'Get current weather for a city',
                'input_schema': {
                    'type': 'object',
                    'properties': {
                        'city': {'type': 'string', 'description': 'City name'}
                    },
                    'required': ['city'],
                },
            }
        ],
        'messages': [
            {'role': 'user', 'content': "What's the weather in Paris?"}
        ],
    },
)

result = response.json()
print(result['stop_reason'])  # "tool_use"
for block in result['content']:
    if block['type'] == 'tool_use':
        print(f"Tool: {block['name']}, Input: {block['input']}")
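In practice it helps to pull all tool_use blocks out of the content list before dispatching them. A minimal helper sketch (the `sample` dict below just mirrors the response shape shown in this guide; `extract_tool_calls` is our name, not part of the API):

```python
def extract_tool_calls(message):
    """Return (name, input, id) tuples for every tool_use block in a response."""
    return [
        (block["name"], block["input"], block["id"])
        for block in message.get("content", [])
        if block.get("type") == "tool_use"
    ]

# Illustrative response shape, matching the example above
sample = {
    "stop_reason": "tool_use",
    "content": [
        {"type": "tool_use", "id": "toolu_01DFdL9a3hM7jjbaTRHYSYoy",
         "name": "get_weather", "input": {"city": "Paris"}}
    ],
}

if sample["stop_reason"] == "tool_use":
    for name, tool_input, tool_use_id in extract_tool_calls(sample):
        print(f"{name} -> {tool_input}")  # get_weather -> {'city': 'Paris'}
```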

Tool Call Response

When the model wants to call a tool, the response has stop_reason: "tool_use" and includes tool_use content blocks:
{
  "id": "gen_01KJRNF3KKH18317Z4441HVH1V",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_01DFdL9a3hM7jjbaTRHYSYoy",
      "name": "get_weather",
      "input": {
        "city": "Paris"
      }
    }
  ],
  "model": "blackboxai/anthropic/claude-sonnet-4.5",
  "stop_reason": "tool_use",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 591,
    "output_tokens": 53
  }
}

Handling Tool Results

After receiving a tool_use response, execute the tool locally and send the result back. The assistant’s full response becomes the next assistant message, and your tool result goes in a user message:
# After getting the tool_use response above...
tool_use_block = next(b for b in result["content"] if b["type"] == "tool_use")

# Execute the tool locally (get_weather here is your own implementation)
weather_data = get_weather(tool_use_block["input"]["city"])

# Send the result back to continue the conversation
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": result["content"]},
    {"role": "user", "content": [
        {
            "type": "tool_result",
            "tool_use_id": tool_use_block["id"],
            "content": weather_data
        }
    ]}
]

body = {
    "model": "blackboxai/anthropic/claude-sonnet-4.5",
    "max_tokens": 1024,
    "tools": tools,  # the same tools list sent with the first request
    "messages": messages
}

# POST `body` to /v1/messages with the same headers as the first request;
# the model now uses the tool result to generate a final answer
The model then uses the tool result to generate a natural language response:
Final Answer Response
{
  "id": "gen_01KJRNF6ABJA4J76NWMMRVYMFT",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "The weather in Paris is currently sunny with a temperature of 22°C. It's a beautiful day!"
    }
  ],
  "model": "blackboxai/anthropic/claude-sonnet-4.5",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 637,
    "output_tokens": 31
  }
}

Multiple Tools

You can define multiple tools and the model will call whichever ones it needs — including calling multiple tools in a single response (parallel tool calls):
import os
import requests

response = requests.post(
    'https://enterprise.blackbox.ai/v1/messages',
    headers={
        'Content-Type': 'application/json',
        'Authorization': f"Bearer {os.environ['BLACKBOX_API_KEY']}",
        'anthropic-version': '2023-06-01',
    },
    json={
        'model': 'blackboxai/anthropic/claude-sonnet-4.5',
        'max_tokens': 1024,
        'tools': [
            {
                'name': 'get_weather',
                'description': 'Get current weather for a city',
                'input_schema': {
                    'type': 'object',
                    'properties': {
                        'city': {'type': 'string', 'description': 'City name'}
                    },
                    'required': ['city'],
                },
            },
            {
                'name': 'get_time',
                'description': 'Get current local time in a city',
                'input_schema': {
                    'type': 'object',
                    'properties': {
                        'city': {'type': 'string', 'description': 'City name'}
                    },
                    'required': ['city'],
                },
            },
        ],
        'messages': [
            {'role': 'user', 'content': "What's the weather and time in Tokyo?"}
        ],
    },
)

result = response.json()

# The model may call multiple tools at once
for block in result['content']:
    if block['type'] == 'tool_use':
        print(f"Tool: {block['name']}, Input: {block['input']}")
    elif block['type'] == 'text':
        print(f"Text: {block['text']}")
The model calls both tools in a single response:
Response — Parallel Tool Calls
{
  "id": "gen_01KJRNF8P97RJAYQPX6J9ASV8R",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "I'll get the current weather and time in Tokyo for you."
    },
    {
      "type": "tool_use",
      "id": "toolu_01WMNa2FQJw199bNLS2rJiK4",
      "name": "get_weather",
      "input": { "city": "Tokyo" }
    },
    {
      "type": "tool_use",
      "id": "toolu_01BUTXWBVKJqrumPzi2zTLvL",
      "name": "get_time",
      "input": { "city": "Tokyo" }
    }
  ],
  "model": "blackboxai/anthropic/claude-sonnet-4.5",
  "stop_reason": "tool_use",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 617,
    "output_tokens": 103
  }
}
When the model makes parallel tool calls, you must provide a tool_result for every tool_use block in the response. Send all results in a single user message.
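Continuing the Tokyo example, that means collecting one tool_result per tool_use block and sending them together. A sketch (the `run_tool` dispatcher and its stubbed return values are illustrative, standing in for your real implementations):

```python
def run_tool(name, tool_input):
    # Hypothetical local implementations, stubbed for illustration
    if name == "get_weather":
        return f"Sunny, 18°C in {tool_input['city']}"
    if name == "get_time":
        return f"14:32 local time in {tool_input['city']}"
    return f"Unknown tool: {name}"

# `result` mirrors the parallel tool call response shown above
result = {
    "content": [
        {"type": "text", "text": "I'll get the current weather and time in Tokyo for you."},
        {"type": "tool_use", "id": "toolu_01WMNa2FQJw199bNLS2rJiK4",
         "name": "get_weather", "input": {"city": "Tokyo"}},
        {"type": "tool_use", "id": "toolu_01BUTXWBVKJqrumPzi2zTLvL",
         "name": "get_time", "input": {"city": "Tokyo"}},
    ],
}

# Build one tool_result per tool_use block, preserving the ids
tool_results = [
    {"type": "tool_result", "tool_use_id": b["id"], "content": run_tool(b["name"], b["input"])}
    for b in result["content"] if b["type"] == "tool_use"
]

# All results travel back in a single user message
messages_followup = {"role": "user", "content": tool_results}
```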

Multi-Turn Tool Calling

For multi-turn conversations with tools, pass the assistant’s response (including tool_use blocks) back as an assistant message, followed by a user message containing tool_result blocks. Continue the loop until stop_reason is "end_turn".
import os
import requests

url = 'https://enterprise.blackbox.ai/v1/messages'
headers = {
    'Content-Type': 'application/json',
    'Authorization': f"Bearer {os.environ['BLACKBOX_API_KEY']}",
    'anthropic-version': '2023-06-01',
}

tools = [{
    'name': 'calculator',
    'description': 'Perform arithmetic operations',
    'input_schema': {
        'type': 'object',
        'properties': {
            'operation': {'type': 'string', 'description': 'add, subtract, multiply, divide'},
            'a': {'type': 'number'},
            'b': {'type': 'number'},
        },
        'required': ['operation', 'a', 'b'],
    },
}]

def calculate(operation, a, b):
    # Lambdas keep evaluation lazy: the eager dict form would raise
    # ZeroDivisionError on b == 0 even when 'divide' wasn't requested
    ops = {'add': lambda: a + b, 'subtract': lambda: a - b,
           'multiply': lambda: a * b, 'divide': lambda: a / b}
    return str(ops[operation]()) if operation in ops else 'Unknown operation'

messages = [
    {'role': 'user', 'content': 'Calculate (15 + 27) * 3. Do addition first, then multiply.'}
]

# Agentic loop
for turn in range(10):
    result = requests.post(url, headers=headers, json={
        'model': 'blackboxai/anthropic/claude-sonnet-4.5',
        'max_tokens': 4000,
        'system': 'Use the calculator tool for each step.',
        'tools': tools,
        'messages': messages,
    }).json()

    # Done — model returned final text
    if result['stop_reason'] == 'end_turn':
        for block in result['content']:
            if block['type'] == 'text':
                print(f"Final answer: {block['text']}")
        break

    # Model wants to use tools — execute them
    if result['stop_reason'] == 'tool_use':
        # Add assistant response to conversation
        messages.append({'role': 'assistant', 'content': result['content']})

        # Execute each tool call and collect results
        tool_results = []
        for block in result['content']:
            if block['type'] == 'tool_use':
                answer = calculate(**block['input'])
                tool_results.append({
                    'type': 'tool_result',
                    'tool_use_id': block['id'],
                    'content': answer,
                })

        # Send tool results back
        messages.append({'role': 'user', 'content': tool_results})
    else:
        # Unexpected stop_reason (e.g. 'max_tokens') — stop rather than re-post
        break

Multi-Turn Conversation Flow

Here’s how the message array builds up across turns:
[
  // Turn 1: User asks a question
  {"role": "user", "content": "Calculate (15 + 27) * 3"},

  // Turn 1: Assistant calls a tool
  {"role": "assistant", "content": [
    {"type": "text", "text": "Let me start with the addition."},
    {"type": "tool_use", "id": "toolu_01ABC...", "name": "calculator", "input": {"operation": "add", "a": 15, "b": 27}}
  ]},

  // Turn 2: You provide the tool result
  {"role": "user", "content": [
    {"type": "tool_result", "tool_use_id": "toolu_01ABC...", "content": "42"}
  ]},

  // Turn 2: Assistant calls another tool
  {"role": "assistant", "content": [
    {"type": "text", "text": "Now let me multiply by 3."},
    {"type": "tool_use", "id": "toolu_02DEF...", "name": "calculator", "input": {"operation": "multiply", "a": 42, "b": 3}}
  ]},

  // Turn 3: You provide the tool result
  {"role": "user", "content": [
    {"type": "tool_result", "tool_use_id": "toolu_02DEF...", "content": "126"}
  ]}

  // Turn 3: Assistant returns final answer with stop_reason: "end_turn"
]
Each tool_result must reference the tool_use_id from the corresponding tool_use block. The API will return an error if the IDs don’t match.
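A quick sanity check before sending the next request is to verify that every tool_use id in the assistant message has a matching tool_result in the following user message. A sketch (the helper name is ours, not part of the API):

```python
def tool_results_complete(assistant_content, user_content):
    """True if every tool_use block is answered by a tool_result with a matching id."""
    needed = {b["id"] for b in assistant_content if b.get("type") == "tool_use"}
    provided = {b["tool_use_id"] for b in user_content if b.get("type") == "tool_result"}
    return needed == provided

assistant = [{"type": "tool_use", "id": "toolu_01ABC", "name": "calculator",
              "input": {"operation": "add", "a": 15, "b": 27}}]
ok = [{"type": "tool_result", "tool_use_id": "toolu_01ABC", "content": "42"}]
bad = [{"type": "tool_result", "tool_use_id": "toolu_WRONG", "content": "42"}]

print(tool_results_complete(assistant, ok))   # True
print(tool_results_complete(assistant, bad))  # False
```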

Tool Definition Format

Tools use the Anthropic input_schema format:
{
  "name": "tool_name",
  "description": "What this tool does",
  "input_schema": {
    "type": "object",
    "properties": {
      "param1": {"type": "string", "description": "Parameter description"},
      "param2": {"type": "number"}
    },
    "required": ["param1"]
  }
}
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Unique tool name |
| description | string | No (strongly recommended) | What the tool does — helps the model decide when to use it |
| input_schema | object | Yes | JSON Schema for the tool's parameters |
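Before sending a request, you can catch malformed tool definitions locally. A minimal structural check mirroring the format above (this helper is illustrative, not an official validator, and it does not fully validate JSON Schema):

```python
def check_tool_definition(tool):
    """Return a list of problems with a tool definition (empty list = looks valid)."""
    problems = []
    if not isinstance(tool.get("name"), str) or not tool["name"]:
        problems.append("name must be a non-empty string")
    schema = tool.get("input_schema")
    if not isinstance(schema, dict):
        problems.append("input_schema must be a JSON Schema object")
    elif schema.get("type") != "object":
        problems.append('input_schema "type" should be "object"')
    return problems

good = {"name": "get_weather", "description": "Get current weather for a city",
        "input_schema": {"type": "object",
                         "properties": {"city": {"type": "string"}},
                         "required": ["city"]}}
print(check_tool_definition(good))  # []
print(check_tool_definition({"name": "bad_tool", "input_schema": {"type": "array"}}))
```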