Authentication
To use this API, you need a BLACKBOX API key. Follow these steps to get your API key:
- Click on your Profile Image in the top right corner at cloud.blackbox.ai
- Click on “BLACKBOX API Token” from the dropdown menu
- Copy the existing token or click “Generate” if you don’t have one yet
API keys have the form:
bb_xxxxxxxxxxxxxxxxxxxxxx
Headers
Authorization (string, required): API key of the form Bearer <api_key>. Example: Bearer bb_b41b647ffbfed27f61656049d3eaeef3d903cc503345d9eb80080d98bc0
Content-Type (string, required): Must be set to application/json.
Request Parameters
The task description or instruction for the AI agents to execute. All agents will work on the same prompt. Examples:
- “Add Stripe Payment Gateway”
- “Implement user authentication with JWT”
- “Refactor the payment processing module”
- “Add comprehensive unit tests”
An array of agent configurations. Each agent will independently execute the task in parallel.
Structure: Each object must contain:
- agent (string, required): The agent type
- model (string, required): The specific model to use
Supported agent types:
- claude - Anthropic’s Claude models
- blackbox - BLACKBOX AI models
- codex - OpenAI’s GPT models
- gemini - Google’s Gemini models
Minimum: 2 agents
Maximum: 5 agents
GitHub repository URL to work on. If provided, all agents will work on this repository.
Example: https://github.com/username/repository.git
The branch to work on in the repository. Defaults to main if not specified.
Examples: main, master, develop, feature/new-feature
Available Agent Models
Claude Agent
- blackboxai/anthropic/claude-sonnet-4.5 - Latest Sonnet (Recommended)
- blackboxai/anthropic/claude-sonnet-4 - Sonnet 4
- blackboxai/anthropic/claude-opus-4 - Most capable Claude model
BLACKBOX Agent
- blackboxai/blackbox-pro - BLACKBOX PRO (Recommended)
- blackboxai/anthropic/claude-sonnet-4.5 - Claude via BLACKBOX
- blackboxai/openai/gpt-5-codex - GPT-5 Codex via BLACKBOX
- blackboxai/anthropic/claude-opus-4 - Claude Opus via BLACKBOX
- blackboxai/x-ai/grok-code-fast-1:free - Grok Code (Free)
- blackboxai/google/gemini-2.5-pro - Gemini via BLACKBOX
Codex Agent
- gpt-5-codex - GPT-5 Codex (Recommended)
- openai/gpt-5 - GPT-5
- openai/gpt-5-mini - GPT-5 Mini
- openai/gpt-5-nano - GPT-5 Nano
- openai/gpt-4.1 - GPT-4.1
Gemini Agent
- gemini-2.0-flash-exp - Gemini 2.0 Flash (Recommended)
- gemini-2.5-pro - Gemini 2.5 Pro
- gemini-2.5-flash - Gemini 2.5 Flash
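Putting the parameters together, a request body can be sketched as follows. This is a minimal sketch: the field names `prompt`, `githubUrl`, and `branch` are assumptions inferred from the parameter descriptions above (only `agents`, `agent`, and `model` are confirmed by this page), and the endpoint URL is not shown here — check the API reference before use.

```python
import json

API_KEY = "bb_xxxxxxxxxxxxxxxxxxxxxx"  # placeholder key from the Authentication section

# Headers described above: Bearer auth plus JSON content type.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Top-level field names other than "agents" are assumptions; the models
# are the recommended ones from the agent families listed above.
payload = {
    "prompt": "Implement user authentication with JWT",
    "agents": [
        {"agent": "claude", "model": "blackboxai/anthropic/claude-sonnet-4.5"},
        {"agent": "blackbox", "model": "blackboxai/blackbox-pro"},
        {"agent": "codex", "model": "gpt-5-codex"},
    ],
    "githubUrl": "https://github.com/username/repository.git",
    "branch": "main",
}

# The API accepts between 2 and 5 agent configurations.
assert 2 <= len(payload["agents"]) <= 5

body = json.dumps(payload)  # serialized request body, ready to POST
```

Choosing agents from different families here (Claude, BLACKBOX, Codex) follows the "Diverse Models" guidance in Best Practices below.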
How Multi-Agent Tasks Work
- Task Submission: You submit a single task with multiple agent configurations
- Parallel Execution: Each agent independently processes the task simultaneously
- Independent Analysis: Each agent analyzes the codebase and creates its own solution
- Separate Commits: Each agent creates its own commits (if working on a repository)
- Result Aggregation: All results are tracked in the agentExecutions array
- Comparison: You can compare approaches, code quality, and results across all agents
Response Structure
The response includes several fields specific to multi-agent tasks:
- selectedAgents: Array of agent configurations used (always present when a multi-agent task is created)
- multiLaunch: Initially false, becomes true when agents start executing
- agentExecutions: Initially null, populated when agents start executing. Once populated, contains:
  - agent: The agent type
  - model: The model used
  - status: Current status (pending, in_progress, completed, failed)
  - executionId: Unique identifier for this agent’s execution
  - result: The agent’s output (when completed)
  - commits: Array of commits made by this agent
  - error: Error message (if failed)
When a multi-agent task is first created, agentExecutions will be null and multiLaunch will be false. Poll the task status to see when agents start executing and agentExecutions is populated.
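These two fields are enough to decide whether a task's agents have launched. A small helper, assuming the response has already been parsed into a dict:

```python
def agents_started(task: dict) -> bool:
    """True once agents have begun executing, per the response structure
    above: multiLaunch flips to true and agentExecutions is populated."""
    return bool(task.get("multiLaunch")) and task.get("agentExecutions") is not None

# Freshly created task: keep polling.
assert not agents_started({"multiLaunch": False, "agentExecutions": None})
# Agents launched: executions are now visible.
assert agents_started({
    "multiLaunch": True,
    "agentExecutions": [{"agent": "claude", "status": "in_progress"}],
})
```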
Best Practices
Agent Selection
- Diverse Models: Choose agents with different strengths
Use Cases
- Complex Refactoring: Get multiple approaches to restructuring code
Performance Tips
- Optimal Agent Count: Use 2-3 agents for most tasks, up to 5 for critical comparisons
- Task Complexity: More complex tasks benefit more from a multi-agent approach
- Resource Awareness: Multi-agent tasks consume more credits and take longer
- Result Analysis: Focus on comparing approaches and quality, not just output
- Polling Frequency: Check status every 5-10 seconds to avoid rate limits
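The polling guidance above can be sketched as a loop. Here `fetch_task` stands in for a GET against the task-status endpoint (not documented on this page) and is injected as a callable so the sketch stays transport-agnostic:

```python
import time

def poll_until_started(fetch_task, interval=5.0, timeout=300.0):
    """Poll until multiLaunch is true and agentExecutions is populated.

    interval defaults to 5 seconds, the low end of the 5-10 second
    guidance above; fetch_task must return the parsed task dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_task()
        if task.get("multiLaunch") and task.get("agentExecutions") is not None:
            return task  # agents have started executing
        time.sleep(interval)
    raise TimeoutError("agents did not start executing before the timeout")
```

Keeping the interval at 5 seconds or more avoids hitting rate limits while still noticing the multiLaunch transition promptly.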
Comparison Metrics
When comparing agent results, consider:
- Code Quality: Readability, maintainability, best practices
- Completeness: Does it fully address the prompt?
- Efficiency: Performance and resource usage
- Error Handling: Robustness and edge case coverage
- Documentation: Code comments and explanations
- Testing: Test coverage and quality
- Consistency: Adherence to project conventions
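Most of these metrics are judgment calls, but the raw material for a comparison can be collected mechanically from agentExecutions. A sketch using the fields described in Response Structure (the row layout itself is just an illustration):

```python
def comparison_rows(agent_executions):
    """Collect completed executions into rows for side-by-side review,
    using the agent, model, result, and commits fields from
    agentExecutions; failed or pending entries are skipped."""
    rows = []
    for e in agent_executions:
        if e.get("status") == "completed":
            rows.append({
                "agent": e["agent"],
                "model": e["model"],
                "commits": len(e.get("commits") or []),
                "result": e.get("result"),
            })
    return rows
```

Commit counts and results give a starting point; the qualitative metrics above (readability, error handling, consistency) still need a human review of each agent's diff.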
Limitations
- Minimum Agents: At least 2 agents required
- Maximum Agents: Up to 5 agents recommended
- Execution Time: Longer than single-agent tasks
- Credit Usage: Consumes credits for each agent execution
- Conflicts: Agents may create conflicting changes (manual merge may be needed)
Related Endpoints
- Create Single Agent Task - Standard single-agent task execution