Tool calls (also known as function calls) give an LLM access to external tools. The LLM does not call the tools directly. Instead, it suggests a tool to call. Your application then calls the tool separately and provides the results back to the LLM. Finally, the LLM formats the response into an answer to the user’s original question.
See API Best Practices for how to correctly structure tool call IDs, pair every tool call with a tool result, preserve reasoning signatures, and avoid thinking + forced tool_choice conflicts.

Request Body Examples

Tool calling with BLACKBOX AI involves three key steps. Here are the essential request body formats for each step:
Step 1: Inference Request with Tools
{
    "model": "gemini-2.0-flash-001",
    "messages": [
        {
            "role": "user",
            "content": "What are the titles of some James Joyce books?"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_gutenberg_books",
                "description": "Search for books in the Project Gutenberg library",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "search_terms": {
                            "type": "array",
                            "items": { "type": "string" },
                            "description": "List of search terms to find books"
                        }
                    },
                    "required": ["search_terms"]
                }
            }
        }
    ]
}
Step 2: Tool Execution (Client-Side)
After receiving the model’s response with tool_calls, execute the requested tool locally and prepare the result:
// Model responds with tool_calls, you execute the tool locally
const toolResult = await searchGutenbergBooks(["James", "Joyce"]);
Step 3: Inference Request with Tool Results
{
    "model": "gemini-2.0-flash-001",
    "messages": [
        {
            "role": "user",
            "content": "What are the titles of some James Joyce books?"
        },
        {
            "role": "assistant",
            "content": null,
            "tool_calls": [
                {
                    "id": "call_abc123",
                    "type": "function",
                    "function": {
                        "name": "search_gutenberg_books",
                        "arguments": "{\"search_terms\": [\"James\", \"Joyce\"]}"
                    }
                }
            ]
        },
        {
            "role": "tool",
            "tool_call_id": "call_abc123",
            "content": "[{\"id\": 4300, \"title\": \"Ulysses\", \"authors\": [{\"name\": \"Joyce, James\"}]}]"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_gutenberg_books",
                "description": "Search for books in the Project Gutenberg library",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "search_terms": {
                            "type": "array",
                            "items": { "type": "string" },
                            "description": "List of search terms to find books"
                        }
                    },
                    "required": ["search_terms"]
                }
            }
        }
    ]
}
Note: The tools parameter must be included in every request (Steps 1 and 3) to validate the tool schema on each call.
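The three-step exchange above can be captured as two request builders. This is a minimal Python sketch; the helper names (`build_step1_body`, `build_step3_body`) are illustrative, not part of any SDK, and the tool spec is copied from the examples above. POST each body to the chat completions endpoint shown later in this guide.

```python
import json

# The tool spec from Step 1, reused verbatim in Step 3.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_gutenberg_books",
        "description": "Search for books in the Project Gutenberg library",
        "parameters": {
            "type": "object",
            "properties": {
                "search_terms": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "List of search terms to find books",
                }
            },
            "required": ["search_terms"],
        },
    },
}]

def build_step1_body(question: str) -> dict:
    """Step 1: the initial request with the user question and tool schema."""
    return {
        "model": "gemini-2.0-flash-001",
        "messages": [{"role": "user", "content": question}],
        "tools": TOOLS,
    }

def build_step3_body(step1_body: dict, tool_call: dict, tool_result) -> dict:
    """Step 3: the same request plus the assistant tool call and its result."""
    messages = list(step1_body["messages"])
    messages.append({"role": "assistant", "content": None, "tool_calls": [tool_call]})
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call["id"],     # must match the call's id exactly
        "content": json.dumps(tool_result),  # tool content must be a string
    })
    # tools is included again so the schema is validated on every request
    return {"model": step1_body["model"], "messages": messages, "tools": TOOLS}
```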
Tool Calling Example
Here is TypeScript code that gives LLMs the ability to call an external API — in this case Project Gutenberg, to search for books. First, let’s do some basic setup:
const API_KEY = "YOUR_API_KEY";
const API_URL = "https://api.blackbox.ai/chat/completions";

const messages = [
    { role: "system", content: "You are a helpful assistant." },
    {
        role: "user",
        content: "What are the titles of some James Joyce books?",
    },
];

const response = await fetch(API_URL, {
    method: "POST",
    headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
    },
    body: JSON.stringify({
        model: "gemini-2.0-flash-001",
        messages,
    }),
});
Define the Tool
Next, we define the tool that we want to call. Remember, the tool is going to get requested by the LLM, but the code we are writing here is ultimately responsible for executing the call and returning the results to the LLM.
interface Book {
    id: number;
    title: string;
    authors: { name: string }[];
}

async function searchGutenbergBooks(searchTerms: string[]): Promise<Book[]> {
    const searchQuery = encodeURIComponent(searchTerms.join(" "));
    const url = "https://gutendex.com/books";
    const response = await fetch(`${url}?search=${searchQuery}`);
    const data = await response.json();

    return data.results.map((book: any) => ({
        id: book.id,
        title: book.title,
        authors: book.authors,
    }));
}

const tools = [
    {
        type: "function",
        function: {
            name: "searchGutenbergBooks",
            description:
                "Search for books in the Project Gutenberg library based on specified search terms",
            parameters: {
                type: "object",
                properties: {
                    search_terms: {
                        type: "array",
                        items: {
                            type: "string",
                        },
                        description:
                            "List of search terms to find books in the Gutenberg library (e.g. ['dickens', 'great'] to search for books by Dickens with 'great' in the title)",
                    },
                },
                required: ["search_terms"],
            },
        },
    },
];

const TOOL_MAPPING = {
    searchGutenbergBooks,
};
Note that the “tool” is just a normal function. We then write a JSON “spec” compatible with the OpenAI function calling parameter. We’ll pass that spec to the LLM so that it knows this tool is available and how to use it. It will request the tool when needed, along with any arguments. We’ll then marshal the tool call locally, make the function call, and return the results to the LLM.
Tool use and tool results
Let’s make the first BLACKBOX AI API call to the model:
const API_KEY = "YOUR_API_KEY";
const API_URL = "https://api.blackbox.ai/chat/completions";

const request_1 = await fetch(API_URL, {
    method: "POST",
    headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
    },
    body: JSON.stringify({
        model: "gemini-2.0-flash-001",
        tools,
        messages,
    }),
});

const data = await request_1.json();
const response_1 = data.choices[0].message;
The LLM responds with a finish_reason of tool_calls and a tool_calls array. In a generic LLM response handler, you would check finish_reason before processing tool calls; here we will assume a tool call was requested. Let’s keep going by processing the tool call:
// Append the response to the messages array so the LLM has the full context
// It's easy to forget this step!
messages.push(response_1);

// Now we process the requested tool calls, and use our book lookup tool
for (const toolCall of response_1.tool_calls) {
    const toolName = toolCall.function.name;
    const { search_terms } = JSON.parse(toolCall.function.arguments);
    const toolResponse = await TOOL_MAPPING[toolName](search_terms);
    messages.push({
        role: "tool",
        tool_call_id: toolCall.id,
        name: toolName,
        content: JSON.stringify(toolResponse),
    });
}
The messages array now has:
  1. Our original request
  2. The LLM’s response (containing a tool call request)
  3. The result of the tool call (a JSON object returned from the Project Gutenberg API)
Now, we can make a second BLACKBOX AI API call, and hopefully get our result!
const API_KEY = "YOUR_API_KEY";
const API_URL = "https://api.blackbox.ai/chat/completions";

const response = await fetch(API_URL, {
    method: "POST",
    headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
    },
    body: JSON.stringify({
        model: "gemini-2.0-flash-001",
        messages,
        tools,
    }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
The output will be something like:
Here are some books by James Joyce:

* Ulysses
* Dubliners
* A Portrait of the Artist as a Young Man
* Chamber Music
* Exiles: A Play in Three Acts
We did it! We’ve successfully used a tool in a prompt.

Interleaved Thinking

Interleaved thinking allows models to reason between tool calls, enabling more sophisticated decision-making after receiving tool results. This feature helps models chain multiple tool calls with reasoning steps in between and make nuanced decisions based on intermediate results.
Important: Interleaved thinking increases token usage and response latency. Consider your budget and performance requirements when enabling this feature.
For comprehensive information about reasoning tokens and configuration, see the Reasoning and Interleaved Thinking documentation.

How Interleaved Thinking Works

With interleaved thinking, the model can:
  • Reason about the results of a tool call before deciding what to do next
  • Chain multiple tool calls with reasoning steps in between
  • Make more nuanced decisions based on intermediate results
  • Provide transparent reasoning for its tool selection process

Enabling Reasoning with Tool Calls

To enable reasoning with tool calls, include the reasoning parameter in your request:
{
    "model": "claude-sonnet-4.5",
    "messages": [
        {
            "role": "user",
            "content": "Research the environmental impact of electric vehicles and provide a comprehensive analysis."
        }
    ],
    "tools": [...],
    "reasoning": {
        "effort": "high"
    }
}

Example: Multi-Step Research with Reasoning

Here’s an example showing how a model might use interleaved thinking to research a topic across multiple sources:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.blackbox.ai/chat/completions",
    api_key="<BLACKBOX_API_KEY>",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_academic_papers",
            "description": "Search for academic papers on a given topic",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "field": {"type": "string"}
                },
                "required": ["query"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_latest_statistics",
            "description": "Get latest statistics on a topic",
            "parameters": {
                "type": "object",
                "properties": {
                    "topic": {"type": "string"},
                    "year": {"type": "integer"}
                },
                "required": ["topic"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {
            "role": "user",
            "content": "Research the environmental impact of electric vehicles and provide a comprehensive analysis."
        }
    ],
    tools=tools,
    extra_body={
        "reasoning": {
            "effort": "high"
        }
    }
)

# The model's reasoning will show step-by-step thinking between tool calls
print("Reasoning:", getattr(response.choices[0].message, "reasoning", None))
print("Tool calls:", response.choices[0].message.tool_calls)
Model’s Reasoning and Tool Calls:
  1. Initial Thinking: “I need to research electric vehicle environmental impact. Let me start with academic papers to get peer-reviewed research.”
  2. First Tool Call: search_academic_papers({"query": "electric vehicle lifecycle environmental impact", "field": "environmental science"})
  3. After First Tool Result: “The papers show mixed results on manufacturing impact. I need current statistics to complement this academic research.”
  4. Second Tool Call: get_latest_statistics({"topic": "electric vehicle carbon footprint", "year": 2024})
  5. After Second Tool Result: “Now I have both academic research and current data. Let me search for manufacturing-specific studies to address the gaps I found.”
  6. Third Tool Call: search_academic_papers({"query": "electric vehicle battery manufacturing environmental cost", "field": "materials science"})
  7. Final Analysis: Synthesizes all gathered information into a comprehensive response.

Preserving Reasoning Context

When using tools with reasoning models, you can preserve reasoning context across multiple API calls. This is particularly useful for complex workflows where the model needs to maintain its reasoning chain.
# First call with reasoning
response1 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What's the weather in Boston? Then recommend what to wear."}
    ],
    tools=weather_tools,
    extra_body={"reasoning": {"max_tokens": 2000}}
)

# Preserve reasoning_details for context continuity
messages = [
    {"role": "user", "content": "What's the weather in Boston? Then recommend what to wear."},
    {
        "role": "assistant",
        "content": response1.choices[0].message.content,
        "tool_calls": response1.choices[0].message.tool_calls,
        "reasoning_details": response1.choices[0].message.reasoning_details
    },
    {
        "role": "tool",
        "tool_call_id": response1.choices[0].message.tool_calls[0].id,
        "content": '{"temperature": 45, "condition": "rainy", "humidity": 85}'
    }
]

# Second call continues reasoning from where it left off
response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=weather_tools
)

Best Practices for Interleaved Thinking

  • Clear Tool Descriptions: Provide detailed descriptions so the model can reason about when to use each tool
  • Structured Parameters: Use well-defined parameter schemas to help the model make precise tool calls
  • Context Preservation: Maintain conversation context across multiple tool interactions using reasoning_details
  • Error Handling: Design tools to provide meaningful error messages that help the model adjust its approach
  • Reasoning Budget: Consider setting appropriate max_tokens or effort levels based on task complexity
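The error-handling bullet is worth making concrete: a tool that raises client-side gives the model nothing to reason about, while a structured error string lets it adjust its next call. A minimal sketch, with an illustrative wrapper (`run_tool`) and stub tool; the field names `ok`, `error`, and `detail` are assumptions, not a required format:

```python
import json

def run_tool(fn, args: dict) -> str:
    """Execute a tool and always return a string the model can read.

    On failure, return a structured error message instead of raising,
    so the model can reason about what went wrong and retry.
    """
    try:
        return json.dumps({"ok": True, "result": fn(**args)})
    except Exception as exc:
        # A descriptive error lets the model correct its arguments.
        return json.dumps({"ok": False, "error": type(exc).__name__, "detail": str(exc)})

def get_latest_statistics(topic: str, year: int = 2024):
    # Stub implementation purely for illustration.
    if not topic:
        raise ValueError("topic must be a non-empty string")
    return {"topic": topic, "year": year}
```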

Implementation Considerations

When implementing interleaved thinking:
  • Models may take longer to respond due to additional reasoning steps
  • Token usage will be higher due to the reasoning process
  • The quality of reasoning depends on the model’s capabilities
  • Some models may be better suited for this approach than others
  • Reasoning tokens are charged as output tokens
For more detailed examples and provider-specific implementations, see the Reasoning and Interleaved Thinking documentation.

A Simple Agentic Loop

In the example above, the calls are made explicitly and sequentially. To handle a wide variety of user inputs and tool calls, you can use an agentic loop. Here’s an example of a simple agentic loop (using the same tools and initial messages as above):
const API_KEY = "YOUR_API_KEY";
const API_URL = "https://api.blackbox.ai/chat/completions";

async function callLLM(messages: any[]): Promise<any> {
    const response = await fetch(API_URL, {
        method: "POST",
        headers: {
            Authorization: `Bearer ${API_KEY}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            model: "gpt-5.3-codex",
            tools,
            tool_choice: "auto",
            messages,
        }),
    });

    const data = await response.json();
    const message = data.choices[0].message;
    messages.push(message);
    return message;
}

async function handleToolCalls(message: any, messages: any[]): Promise<void> {
    for (const toolCall of message.tool_calls) {
        const toolName = toolCall.function.name;
        const toolArgs = JSON.parse(toolCall.function.arguments);

        // Look up the correct tool locally, and call it with the provided arguments
        // Other tools can be added without changing the agentic loop
        const toolResult = await TOOL_MAPPING[toolName](toolArgs);

        messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(toolResult),
        });
    }
}

const maxIterations = 10;
let iterationCount = 0;
let completed = false;

while (iterationCount < maxIterations) {
    iterationCount++;
    const message = await callLLM(messages);

    if (message.tool_calls) {
        await handleToolCalls(message, messages);
    } else {
        // Model responded with text — task complete
        completed = true;
        break;
    }
}

if (!completed) {
    console.warn("Warning: Maximum iterations reached");
}

console.log(messages[messages.length - 1].content);

Best Practices and Advanced Patterns

Function Definition Guidelines
When defining tools for LLMs, follow these best practices.
Clear and Descriptive Names: Use descriptive function names that clearly indicate the tool’s purpose.
// Good: Clear and specific
{ "name": "get_weather_forecast" }
// Avoid: Too vague
{ "name": "weather" }
Comprehensive Descriptions: Provide detailed descriptions that help the model understand when and how to use the tool.
{
    "description": "Get current weather conditions and 5-day forecast for a specific location. Supports cities, zip codes, and coordinates.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City name, zip code, or coordinates (lat,lng). Examples: 'New York', '10001', '40.7128,-74.0060'"
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit preference",
                "default": "celsius"
            }
        },
        "required": ["location"]
    }
}
Streaming with Tool Calls
When using streaming responses with tool calls, handle the different content types appropriately:
const API_KEY = "YOUR_API_KEY";
const API_URL = "https://api.blackbox.ai/chat/completions";

const stream = await fetch(API_URL, {
    method: "POST",
    headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
    },
    body: JSON.stringify({
        model: "claude-3.5-sonnet",
        messages: messages,
        tools: tools,
        stream: true,
    }),
});

const reader = stream.body.getReader();
const toolCalls = [];

while (true) {
    const { done, value } = await reader.read();
    if (done) {
        break;
    }

    const chunk = new TextDecoder().decode(value);
    const lines = chunk.split("\n").filter((line) => line.trim());

    for (const line of lines) {
        if (!line.startsWith("data: ")) continue;

        const payload = line.slice(6);
        if (payload === "[DONE]") continue; // end-of-stream sentinel, not JSON

        const data = JSON.parse(payload);
        const choice = data.choices[0];

        // Tool call deltas arrive as fragments; merge them by index,
        // concatenating the argument strings as they stream in.
        for (const delta of choice.delta?.tool_calls ?? []) {
            if (!toolCalls[delta.index]) {
                toolCalls[delta.index] = delta;
            } else {
                toolCalls[delta.index].function.arguments +=
                    delta.function?.arguments ?? "";
            }
        }

        // finish_reason is set on the choice, not on the delta
        if (choice.finish_reason === "tool_calls") {
            await handleToolCalls(toolCalls);
        }
    }
}
Tool Choice Configuration
Control tool usage with the tool_choice parameter:
// Let model decide (default)
{ "tool_choice": "auto" }
// Disable tool usage
{ "tool_choice": "none" }
// Force specific tool
{
    "tool_choice": {
        "type": "function",
        "function": { "name": "search_database" }
    }
}
Parallel Tool Calls
Control whether multiple tools can be called simultaneously with the parallel_tool_calls parameter (default is true for most models):
// Disable parallel tool calls - tools will be called sequentially
{ "parallel_tool_calls": false }
When parallel_tool_calls is false, the model will only request one tool call at a time instead of potentially multiple calls in parallel.
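When parallel tool calls are left enabled, a single assistant message may carry several independent tool_calls, and nothing requires executing them one at a time. A sketch of concurrent execution with asyncio; `fetch_weather` is a stub standing in for real I/O, and each result is still appended with its own matching tool_call_id:

```python
import asyncio
import json

async def fetch_weather(city: str) -> dict:
    await asyncio.sleep(0)  # stand-in for a real network call
    return {"city": city, "temp_c": 20}

TOOL_MAPPING = {"fetch_weather": fetch_weather}

async def run_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Execute all requested tools concurrently, one result message each."""
    async def one(tc: dict) -> dict:
        args = json.loads(tc["function"]["arguments"])
        result = await TOOL_MAPPING[tc["function"]["name"]](**args)
        return {
            "role": "tool",
            "tool_call_id": tc["id"],       # pair each result with its call
            "content": json.dumps(result),  # content must be a string
        }
    # gather preserves input order, so results line up with the calls
    return await asyncio.gather(*(one(tc) for tc in tool_calls))
```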
Multi-Tool Workflows
Design tools that work well together:
{
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_products",
                "description": "Search for products in the catalog"
            }
        },
        {
            "type": "function",
            "function": {
                "name": "get_product_details",
                "description": "Get detailed information about a specific product"
            }
        },
        {
            "type": "function",
            "function": {
                "name": "check_inventory",
                "description": "Check current inventory levels for a product"
            }
        }
    ]
}
This allows the model to naturally chain operations: search → get details → check inventory.
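The chain works because each tool’s output carries the identifier the next tool’s parameters expect (here, product_id). A sketch with stub implementations; the catalog data is invented purely for illustration:

```python
# Stub tools whose outputs feed the next tool's inputs via product_id.
def search_products(query: str) -> list[dict]:
    return [{"product_id": "p-100", "name": "Electric Kettle"}]

def get_product_details(product_id: str) -> dict:
    return {"product_id": product_id, "name": "Electric Kettle", "price": 29.99}

def check_inventory(product_id: str) -> dict:
    return {"product_id": product_id, "in_stock": 12}

TOOL_MAPPING = {
    "search_products": search_products,
    "get_product_details": get_product_details,
    "check_inventory": check_inventory,
}

def run_chain(query: str) -> dict:
    """Simulate the search -> details -> inventory chain the model drives."""
    hits = TOOL_MAPPING["search_products"](query=query)
    pid = hits[0]["product_id"]  # output key matches the next input name
    details = TOOL_MAPPING["get_product_details"](product_id=pid)
    stock = TOOL_MAPPING["check_inventory"](product_id=pid)
    return {**details, **stock}
```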

Example Use Case: Tool Calling with gpt-5.3-codex

GPT-5.3-Codex is OpenAI’s most capable agentic coding model, designed specifically for multi-turn tool calling workflows. This section covers how to use it correctly with both the Chat Completions API and the Responses API to avoid common pitfalls.
Important: gpt-5.3-codex uses a different tool format depending on the API endpoint. Using the wrong format will cause the model to ignore tools and respond with plain text instead.
  • Chat Completions (/chat/completions): Tools are nested under function
  • Responses API (/v1/responses): Tools use a flat structure
See the Format Comparison below and the Responses API documentation for Responses API-specific examples.

Chat Completions: Complete Multi-Turn Example

Here is a complete, working example of a multi-turn tool calling loop with gpt-5.3-codex using the Chat Completions API. This pattern is what coding agents (like Codex CLI) use internally.
import json

from openai import OpenAI

client = OpenAI(
    base_url="https://api.blackbox.ai/chat/completions",
    # Enterprise users: use "https://enterprise.blackbox.ai/chat/completions"
    api_key="YOUR_BLACKBOX_API_KEY",
)

# Step 1: Define your tools using the Chat Completions nested format
tools = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file at the given path",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The file path to read"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "edit_file",
            "description": "Edit a file by replacing old text with new text",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "The file path to edit"},
                    "old": {"type": "string", "description": "The text to replace"},
                    "new": {"type": "string", "description": "The replacement text"}
                },
                "required": ["path", "old", "new"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "execute_command",
            "description": "Run a shell command and return the output",
            "parameters": {
                "type": "object",
                "properties": {
                    "command": {
                        "type": "string",
                        "description": "The shell command to execute"
                    }
                },
                "required": ["command"]
            }
        }
    }
]

# Step 2: Build your messages array
messages = [
    {"role": "system", "content": "You are a coding agent. Use the provided tools to complete tasks."},
    {"role": "user", "content": "Read src/auth.py, find the bug, fix it, and run the tests."}
]

# Step 3: Agentic loop — keep calling the API until the model responds with text
max_turns = 10

for turn in range(max_turns):
    response = client.chat.completions.create(
        model="blackboxai/openai/gpt-5.3-codex",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

    message = response.choices[0].message

    # Always append the assistant message to maintain conversation history
    messages.append(message)

    if message.tool_calls:
        # Model wants to call tools — execute them and feed results back
        for tool_call in message.tool_calls:
            name = tool_call.function.name
            args = json.loads(tool_call.function.arguments)

            # Execute the tool (your implementation)
            result = execute_tool(name, args)

            # Append the tool result — tool_call_id MUST match
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result
            })
    else:
        # Model responded with text — task is complete
        print(message.content)
        break

Step-by-Step Turn Walkthrough

The agentic loop above handles everything automatically. This section breaks down what happens at each turn so you can see exactly how messages flow.

Turn 1 — Initial request

Send the user’s message and tool definitions:
const response1 = await fetch('https://api.blackbox.ai/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    messages: [
      { role: 'system', content: 'You are a coding agent. Use the provided tools to complete tasks.' },
      { role: 'user', content: 'Read config.json and tell me what environment it is configured for.' },
    ],
    tools: [
      {
        type: 'function',
        function: {
          name: 'read_file',
          description: 'Read the contents of a file at the given path',
          parameters: {
            type: 'object',
            properties: { path: { type: 'string', description: 'The file path to read' } },
            required: ['path'],
          },
        },
      },
    ],
    tool_choice: 'auto',
  }),
});

const turn1 = await response1.json();
const message1 = turn1.choices[0].message;
// message1.tool_calls[0] → { id: 'call_abc', type: 'function', function: { name: 'read_file', arguments: '{"path":"config.json"}' } }
// turn1.choices[0].finish_reason → 'tool_calls'
The model responds with finish_reason: "tool_calls" and a tool_calls array containing the function name, arguments (as a JSON string), and a unique id.

Turn 1 — Execute the tool and send the result

Append the assistant message (with tool_calls) to the messages array, execute the tool locally, then append the tool result and send the next request:
const tc = message1.tool_calls[0];
const args = JSON.parse(tc.function.arguments);
const fileContents = readFile(args.path); // your implementation

// Build the messages array for turn 2
const messages = [
  { role: 'system', content: 'You are a coding agent. Use the provided tools to complete tasks.' },
  { role: 'user', content: 'Read config.json and tell me what environment it is configured for.' },
  message1, // assistant message with tool_calls — MUST be included
  {
    role: 'tool',
    tool_call_id: tc.id, // MUST match the id from the tool call
    content: fileContents, // must be a string
  },
];

const response2 = await fetch('https://api.blackbox.ai/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    messages,
    tools, // tools must be included in every request
    tool_choice: 'auto',
  }),
});

const turn2 = await response2.json();
const message2 = turn2.choices[0].message;

if (message2.tool_calls) {
  // Model wants to call another tool — repeat the pattern
} else {
  // Model responded with text — task is complete
  console.log(message2.content);
  // → "The file is configured for the **production** environment with debug mode disabled."
}
Three critical rules:
  1. The assistant message with tool_calls must be appended before the tool result
  2. tool_call_id in the tool result must exactly match the id from the tool call
  3. Tool result content must be a string — use JSON.stringify() for objects
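These rules can be checked mechanically before each request. A small validator sketch (illustrative, not part of any SDK) that catches the two most common mistakes, unmatched tool_call_ids and non-string tool content:

```python
def validate_history(messages: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the history is well-formed."""
    problems = []
    pending_ids = set()
    for i, msg in enumerate(messages):
        if msg.get("role") == "assistant":
            # Every tool call the assistant makes must later get a result.
            for tc in msg.get("tool_calls") or []:
                pending_ids.add(tc["id"])
        elif msg.get("role") == "tool":
            tid = msg.get("tool_call_id")
            if tid not in pending_ids:
                problems.append(f"message {i}: tool result with unknown id {tid!r}")
            else:
                pending_ids.discard(tid)
            if not isinstance(msg.get("content"), str):
                problems.append(f"message {i}: tool content must be a string")
    for tid in pending_ids:
        problems.append(f"tool call {tid!r} has no tool result")
    return problems
```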

Conversation History Shape

After two tool calls (e.g., read → edit), the messages array looks like this:
[
  { "role": "system", "content": "You are a coding agent. Use tools to complete tasks." },
  { "role": "user", "content": "Read config.json and tell me the environment." },
  {
    "role": "assistant",
    "content": null,
    "tool_calls": [
      { "id": "call_abc", "type": "function", "function": { "name": "read_file", "arguments": "{\"path\":\"config.json\"}" } }
    ]
  },
  { "role": "tool", "tool_call_id": "call_abc", "content": "{\"env\":\"production\",\"debug\":false}" },
  {
    "role": "assistant",
    "content": null,
    "tool_calls": [
      { "id": "call_def", "type": "function", "function": { "name": "search_file", "arguments": "{\"path\":\"config.json\",\"pattern\":\"env\"}" } }
    ]
  },
  { "role": "tool", "tool_call_id": "call_def", "content": "1: {\"env\":\"production\"}" },
  { "role": "assistant", "content": "The file is configured for the production environment with debug mode disabled." }
]
Every message pair follows this pattern: assistant (with tool_calls) → tool (with matching tool_call_id). Dropping any message in the chain will cause the model to fall back to text responses.

Multi-Turn with User Follow-up Messages

After the model completes a task and responds with text, you can continue the conversation by appending a new user message. This lets users ask follow-up questions based on tool results without starting over.
// Phase 1: Initial task — model reads a file and responds
const messages: any[] = [
  { role: 'system', content: 'You are a coding agent. Use tools to complete tasks.' },
  { role: 'user', content: 'Read src/auth.py' },
];

// ... run agentic loop until model responds with text ...
// messages now contains: system → user → assistant(tool_calls) → tool → assistant(text)

// Phase 2: Follow-up — user asks a related question
messages.push({
  role: 'user',
  content: 'Now run pytest to check if the tests pass.',
});

// Continue the agentic loop with the same messages array
const response = await client.chat.completions.create({
  model: 'blackboxai/openai/gpt-5.3-codex',
  messages,
  tools,
  tool_choice: 'auto',
});

const message = response.choices[0].message;
if (message.tool_calls) {
  // Model calls execute_command to run pytest
  console.log('Tool:', message.tool_calls[0].function.name);
  // → 'execute_command'
}
The conversation history after the follow-up looks like this:
[
  { "role": "system", "content": "You are a coding agent. Use tools to complete tasks." },
  { "role": "user", "content": "Read src/auth.py" },
  { "role": "assistant", "content": null, "tool_calls": [{ "id": "call_1", "type": "function", "function": { "name": "read_file", "arguments": "{\"path\":\"src/auth.py\"}" } }] },
  { "role": "tool", "tool_call_id": "call_1", "content": "import flask\n..." },
  { "role": "assistant", "content": "Here's what's in src/auth.py: ..." },
  { "role": "user", "content": "Now run pytest to check if the tests pass." },
  { "role": "assistant", "content": null, "tool_calls": [{ "id": "call_2", "type": "function", "function": { "name": "execute_command", "arguments": "{\"command\":\"pytest\"}" } }] },
  { "role": "tool", "tool_call_id": "call_2", "content": "2 passed in 0.34s" },
  { "role": "assistant", "content": "All tests pass — 2 passed in 0.34s." }
]
The follow-up user message goes after the assistant’s text response, not in the middle of a tool call sequence. The model sees the full conversation context including previous tool results.

Tool Choice Options

Control how the model uses tools with the tool_choice parameter:
| Value | Behavior |
| --- | --- |
| "auto" | The model decides whether to call a tool (recommended) |
| "required" | The model must call at least one tool |
| "none" | The model cannot call any tools |
// Let model decide (recommended for agentic loops)
{ "tool_choice": "auto" }

// Force a tool call on every turn
{ "tool_choice": "required" }

// Disable tools for this turn
{ "tool_choice": "none" }

Use Case: Coding Agent

A coding agent gives the model a set of file system and terminal tools and runs an agentic loop — calling the API, executing whatever tools the model requests, and feeding the results back — until the model returns a plain text response with no further tool calls. Define seven SWE tools using the Chat Completions nested format:
Python
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the full contents of a file at the given path.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "File path to read"},
                },
                "required": ["path"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file, creating it if it doesn't exist.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "File path to write to"},
                    "content": {"type": "string", "description": "Full content to write"},
                },
                "required": ["path", "content"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "edit_file",
            "description": "Replace the first occurrence of old_string with new_string in a file.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "old_string": {"type": "string", "description": "Exact string to find"},
                    "new_string": {"type": "string", "description": "Replacement string"},
                },
                "required": ["path", "old_string", "new_string"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "search_file",
            "description": "Search for a regex pattern in a file and return matching lines with line numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "pattern": {"type": "string", "description": "Regex pattern to search for"},
                },
                "required": ["path", "pattern"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "execute_command",
            "description": (
                "Run a shell command and return its output. Use this to execute scripts, "
                "run tests, install packages, compile code, or inspect the environment."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "command": {"type": "string", "description": "Shell command to execute"},
                    "working_directory": {
                        "type": "string",
                        "description": "Directory to run the command in (default: current directory)",
                    },
                },
                "required": ["command"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "list_directory",
            "description": "List the files and subdirectories in a directory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Directory path to list (default: current directory)"},
                },
                "required": [],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "glob_files",
            "description": "Find files matching a glob pattern, e.g. '**/*.py' or 'src/**/*.ts'.",
            "parameters": {
                "type": "object",
                "properties": {
                    "pattern": {"type": "string", "description": "Glob pattern to match files against"},
                    "directory": {"type": "string", "description": "Root directory for the search (default: current directory)"},
                },
                "required": ["pattern"],
            },
        },
    },
]
Then run the agentic loop:
Python
import os, json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.blackbox.ai",  # the OpenAI SDK appends /chat/completions itself
    # Enterprise users: use "https://enterprise.blackbox.ai"
    api_key=os.environ["BLACKBOX_API_KEY"],
)

MODEL = "blackboxai/openai/gpt-5.3-codex"

def run_agent(task: str, max_turns: int = 10) -> str:
    messages = [
        {"role": "system", "content": "You are a coding assistant with access to file system and terminal tools. "
         "Use the tools to read, write, edit, search files, run terminal commands, "
         "list directories, and find files to complete the task. When done, summarize what you did."},
        {"role": "user", "content": task},
    ]

    for _ in range(max_turns):
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
        )

        message = response.choices[0].message
        messages.append(message)

        if not message.tool_calls:
            # No more tool calls — agent is done
            return message.content

        # Execute each tool and append results
        for tc in message.tool_calls:
            args = json.loads(tc.function.arguments)
            result = execute_tool(tc.function.name, args)  # your dispatch function
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": result,
            })

    return "Max turns reached."
The agent loop continues until the model returns a response with no tool_calls. Always set a max_turns guard to prevent runaway loops.
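The loop above delegates actual execution to execute_tool, which is left to you. A minimal sketch of that dispatcher covering four of the seven tools (read_file, write_file, list_directory, execute_command), with the remaining tools and finer-grained error handling omitted; the names and argument shapes mirror the TOOLS definitions above, and a real agent should sandbox execute_command:

```python
import os
import subprocess

def execute_tool(name: str, args: dict) -> str:
    """Dispatch a tool call by name; always return a string for the tool result."""
    try:
        if name == "read_file":
            with open(args["path"]) as f:
                return f.read()
        if name == "write_file":
            with open(args["path"], "w") as f:
                f.write(args["content"])
            return f"Wrote {len(args['content'])} bytes to {args['path']}"
        if name == "list_directory":
            return "\n".join(sorted(os.listdir(args.get("path", "."))))
        if name == "execute_command":
            proc = subprocess.run(
                args["command"],
                shell=True,
                cwd=args.get("working_directory"),
                capture_output=True,
                text=True,
                timeout=60,
            )
            return proc.stdout + proc.stderr
        return f"Unknown tool: {name}"
    except Exception as e:
        # Surface errors as text so the model can see them and recover
        return f"Error: {e}"
```

Returning errors as strings (rather than raising) matters: the model can read "Error: No such file" in the tool result and try a different path, whereas an unhandled exception kills the loop.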
Example tasks this agent handles:
  • "Read main.py and tell me what the entry point function does."
  • "Write a file /tmp/utils.py with a helper function for parsing JSON, then read it back to confirm."
  • "Search app.py for all lines containing 'TODO' and list their line numbers."
  • "Edit config.py: replace DEBUG = False with DEBUG = True, then verify the change."
  • "Run python3 tests/test_api.py and report any failures."
  • "List the project root and find all TypeScript files under src/."

Key Requirements for gpt-5.3-codex Tool Calling

Follow these requirements to ensure reliable tool calling:
Common mistake: If the model responds with text instead of calling tools, check these items first:
  1. Tools array is present in every request (not just the first one)
  2. tool_call_id in tool result messages matches the id from the tool call
  3. Assistant messages with tool_calls are appended to the messages array before the tool results
  4. Tool result content is a string (use JSON.stringify() for objects)
1. Always include tools in every request

The tools array must be sent with every API call in the loop, not just the first one. The model needs to see the available tools on each turn.

2. Preserve the full message chain

Every assistant message (including those with tool_calls) and every tool result must be appended to the messages array. Dropping any message breaks the conversation chain and causes the model to fall back to text.
// ✅ Correct: full chain preserved
[
  {"role": "user", "content": "Fix the bug in auth.py"},
  {"role": "assistant", "content": null, "tool_calls": [{"id": "call_abc", ...}]},
  {"role": "tool", "tool_call_id": "call_abc", "content": "file contents..."},
  {"role": "assistant", "content": "I found the bug. Let me fix it.", "tool_calls": [{"id": "call_def", ...}]},
  {"role": "tool", "tool_call_id": "call_def", "content": "Edit successful"}
]
// ❌ Wrong: missing assistant message before tool result
[
  {"role": "user", "content": "Fix the bug in auth.py"},
  {"role": "tool", "tool_call_id": "call_abc", "content": "file contents..."}
]
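Chain breaks like the one above are easy to catch before sending. A small hypothetical helper (not part of any SDK) that lints a messages array, checking that every tool result is preceded by an assistant message declaring a matching tool_call_id and that its content is a string:

```python
def validate_message_chain(messages: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the chain is well-formed."""
    problems = []
    pending_ids: set[str] = set()  # tool_call ids still awaiting a tool result
    for i, msg in enumerate(messages):
        role = msg.get("role")
        if role == "assistant" and msg.get("tool_calls"):
            if pending_ids:
                problems.append(f"message {i}: new tool_calls before results for {sorted(pending_ids)}")
            pending_ids = {tc["id"] for tc in msg["tool_calls"]}
        elif role == "tool":
            tc_id = msg.get("tool_call_id")
            if tc_id in pending_ids:
                pending_ids.discard(tc_id)
            else:
                problems.append(f"message {i}: tool result with unmatched tool_call_id {tc_id!r}")
            if not isinstance(msg.get("content"), str):
                problems.append(f"message {i}: tool result content is not a string")
    if pending_ids:
        problems.append(f"unanswered tool_calls: {sorted(pending_ids)}")
    return problems
```

Running it on the two examples above would return an empty list for the correct chain and flag the unmatched call_abc in the broken one.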
3. Match tool_call_id exactly

Each tool result must reference the exact id from the corresponding tool call. A mismatch causes the model to ignore the result.

4. Tool result content must be a string

The content field in tool result messages must be a string. If your tool returns an object, serialize it with JSON.stringify().

5. Use tool_choice: "auto" (recommended)

For most use cases, tool_choice: "auto" gives the best results. The model will call tools when appropriate and respond with text when the task is complete. Use tool_choice: "required" only when you want to force a tool call on every turn.
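In Python the equivalent of JSON.stringify() is json.dumps(). A tiny hypothetical wrapper that coerces any tool return value into the string the API requires, so dicts and lists never reach the content field raw:

```python
import json

def to_tool_content(result) -> str:
    """Coerce a tool's return value into the string content the API requires."""
    if isinstance(result, str):
        return result
    return json.dumps(result)  # handles dicts, lists, numbers, booleans, None
```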

Using Reasoning with Tool Calling

For complex multi-step tasks, enabling reasoning improves tool calling reliability. The model will think through its approach before deciding which tool to call.
{
    "model": "blackboxai/openai/gpt-5.3-codex",
    "messages": [...],
    "tools": [...],
    "tool_choice": "auto",
    "reasoning": {
        "effort": "medium"
    }
}
| Reasoning Effort | Best For |
| --- | --- |
| "low" | Simple, single-tool tasks |
| "medium" | General coding tasks (recommended default) |
| "high" | Complex multi-file refactors, debugging |
See Reasoning and Interleaved Thinking for details on preserving reasoning context across tool call turns.

Format Comparison

Using the wrong tool format for your API endpoint is the most common cause of “model responds with text instead of calling tools.” Make sure you use the correct format.
| Aspect | Chat Completions (/chat/completions) | Responses API (/v1/responses) |
| --- | --- | --- |
| Tool definition | Nested: tools[].function.name | Flat: tools[].name |
| Tool result role | "role": "tool" | "type": "function_call_output" |
| Tool result ID field | "tool_call_id" | "call_id" |
| Message format | {"role": "user", "content": "..."} | {"type": "message", "role": "user", "content": [...]} |
| System prompt | {"role": "system", "content": "..."} | "instructions": "..." |

Chat Completions Tool Format

{
    "tools": [{
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"}
                },
                "required": ["path"]
            }
        }
    }]
}

Responses API Tool Format

{
    "tools": [{
        "type": "function",
        "name": "read_file",
        "description": "Read a file",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"}
            },
            "required": ["path"]
        }
    }]
}
For complete Responses API tool calling examples, see the Responses API documentation.
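If you maintain one set of tool definitions for both endpoints, the nested format can be flattened mechanically. A minimal sketch, assuming every entry is a "type": "function" tool as in the examples above:

```python
def to_responses_format(chat_tools: list[dict]) -> list[dict]:
    """Flatten nested Chat Completions tool definitions into the Responses API shape."""
    flat = []
    for tool in chat_tools:
        fn = tool["function"]
        flat.append({
            "type": "function",
            "name": fn["name"],
            "description": fn.get("description", ""),
            "parameters": fn["parameters"],  # JSON Schema is identical in both formats
        })
    return flat
```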

Troubleshooting: Model Returns Text Instead of Tool Calls

If gpt-5.3-codex responds with plain text instead of calling tools:
| Issue | Solution |
| --- | --- |
| Wrong tool format for endpoint | Use nested format for /chat/completions, flat format for /v1/responses |
| tools array missing from request | Include tools in every request, not just the first one |
| Missing tool_call_id in tool results | Ensure each tool result has tool_call_id matching the call's id |
| Tool result content is not a string | Use JSON.stringify() to convert objects to strings |
| Broken message chain | Append every assistant message (including tool_calls) before tool results |
| Vague user message | Be specific: "Read src/auth.py" instead of "Look at the code" |
| No system prompt | Include a system prompt like "You are a coding agent. Use tools to complete tasks." |