Use tools in your prompts
When the response includes `tool_calls`, execute the requested tool locally and prepare the result:
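A minimal sketch of this step, assuming an OpenAI-style tool-call shape (the stub implementation, the `call_abc123` id, and the sample data are illustrative, not from this tutorial):

```python
import json

# Hypothetical stub standing in for the real tool implementation.
def search_academic_papers(query, field):
    return [{"title": "EV Lifecycle Assessment Review", "year": 2023}]

# A simulated tool call, shaped like an entry in an assistant response.
tool_call = {
    "id": "call_abc123",
    "function": {
        "name": "search_academic_papers",
        "arguments": '{"query": "electric vehicle lifecycle environmental impact", '
                     '"field": "environmental science"}',
    },
}

# Execute the tool locally with the model-supplied arguments...
args = json.loads(tool_call["function"]["arguments"])
result = search_academic_papers(**args)

# ...and package the result as a "tool" message for the follow-up request.
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps(result),
}
```

The `tool_call_id` links the result back to the specific call the model made, which matters once several calls arrive in one turn.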
Note that the `tools` parameter must be included in every request (Steps 1 and 3) so the tool schema can be validated on each call.
The response contains a `finish_reason` of `tool_calls` and a `tool_calls` array. In a generic LLM response handler, you would check the `finish_reason` before processing tool calls; here we will assume it is `tool_calls`. Let's keep going by processing the tool call:
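The check described above can be sketched like this, using a simulated response in an OpenAI-style chat-completions shape (an assumption; the ids and arguments are illustrative):

```python
import json

# A simulated Step 2 response from the model.
response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "id": "call_1",
                "function": {
                    "name": "get_latest_statistics",
                    "arguments": '{"topic": "electric vehicle carbon footprint", "year": 2024}',
                },
            }],
        },
    }],
}

choice = response["choices"][0]
parsed_calls = []
# Branch on finish_reason before touching tool_calls, as a generic handler should.
if choice["finish_reason"] == "tool_calls":
    for call in choice["message"]["tool_calls"]:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        parsed_calls.append((name, args))
```

Other `finish_reason` values (such as `stop`) mean the model answered directly, so the handler would skip tool dispatch entirely.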
```
search_academic_papers({"query": "electric vehicle lifecycle environmental impact", "field": "environmental science"})
get_latest_statistics({"topic": "electric vehicle carbon footprint", "year": 2024})
search_academic_papers({"query": "electric vehicle battery manufacturing environmental cost", "field": "materials science"})
```
Now send the follow-up request with the tool result appended (using the same `tools` and initial `messages` as above):
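A sketch of what that follow-up payload can look like, assuming an OpenAI-style request shape (`example-model`, the schema, and the sample content are placeholders, not from this tutorial):

```python
import json

# Hypothetical tool schema; the name mirrors the tutorial's example tool.
tools = [{
    "type": "function",
    "function": {
        "name": "search_academic_papers",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}, "field": {"type": "string"}},
        },
    },
}]

messages = [
    {"role": "user", "content": "How green are electric vehicles really?"},
    # The assistant turn that requested the tool call.
    {"role": "assistant", "tool_calls": [{
        "id": "call_abc123",
        "function": {
            "name": "search_academic_papers",
            "arguments": '{"query": "electric vehicle lifecycle environmental impact", '
                         '"field": "environmental science"}',
        },
    }]},
    # The locally produced tool result, keyed by the same call id.
    {"role": "tool", "tool_call_id": "call_abc123",
     "content": json.dumps([{"title": "EV Lifecycle Assessment Review"}])},
]

# The follow-up payload: tools is resent alongside the grown messages list.
followup_request = {"model": "example-model", "tools": tools, "messages": messages}
```

The model then reads the `tool` message and produces a final natural-language answer.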
You can control which tool the model calls with the `tool_choice` parameter:
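For instance, forcing a specific function looks like this in an OpenAI-style payload (an assumption; the model name and schema below are placeholders):

```python
# Hypothetical tool schema reusing the tutorial's example tool name.
tools = [{
    "type": "function",
    "function": {
        "name": "search_academic_papers",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}, "field": {"type": "string"}},
        },
    },
}]

request = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Find papers on EV battery recycling."}],
    "tools": tools,
    # Require this specific function, rather than letting the model decide
    # ("auto") or forbidding tool use ("none").
    "tool_choice": {"type": "function", "function": {"name": "search_academic_papers"}},
}
```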
You can disable parallel tool calling with the `parallel_tool_calls` parameter (the default is true for most models). When `parallel_tool_calls` is false, the model will request only one tool call at a time instead of potentially multiple calls in parallel.
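Setting the flag can be sketched as follows, again assuming an OpenAI-style payload shape (the model name and schema are placeholders):

```python
request = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Compare EV and petrol car emissions."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_latest_statistics",
            "parameters": {
                "type": "object",
                "properties": {"topic": {"type": "string"}, "year": {"type": "integer"}},
            },
        },
    }],
    # With this set to False, the model emits at most one tool call per turn,
    # so the handler loop processes calls strictly sequentially.
    "parallel_tool_calls": False,
}
```

Sequential calls simplify the handler at the cost of extra round trips when several independent lookups are needed.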