| AutoGen | Swarms API |
|---|---|
| AssistantAgent(name, system_message, llm_config) | Agent with agent_name, system_prompt, model_name |
| UserProxyAgent | Not needed; task input is the top-level task field |
| GroupChat(agents, messages, max_round) | GroupChat swarm type |
| GroupChatManager(groupchat, llm_config) | Managed by the API |
| agent.initiate_chat(recipient, message) | POST request with task field |
| ConversableAgent | Any agent with a system_prompt |
| llm_config = {"model": "gpt-4o", "api_key": ...} | "model_name": "gpt-4o" on the agent spec |
| max_consecutive_auto_reply | max_loops on the agent spec |
| is_termination_msg | Handled by workflow max_loops |
| code_execution_config | "tools": ["code_interpreter"] on the agent spec |
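Putting the table to work: a single AutoGen assistant maps to one entry in the request's agents array. A minimal sketch of the translated spec; the top-level name field and the overall request shape are assumptions to verify against the current Swarms API reference.

```python
import json

# AutoGen: AssistantAgent(name="coder", system_message=..., llm_config={"model": "gpt-4o"})
# Swarms API: the same agent, expressed as a spec in the "agents" array.
agent_spec = {
    "agent_name": "coder",                       # was: name
    "system_prompt": "You write clean Python.",  # was: system_message
    "model_name": "gpt-4o",                      # was: llm_config["model"]
    "max_loops": 1,                              # was: max_consecutive_auto_reply
}

payload = {
    "name": "single-agent-run",                  # assumed top-level field
    "agents": [agent_spec],
    "task": "Write a function that reverses a string.",  # replaces UserProxyAgent
    "max_loops": 1,                              # workflow bound (replaces is_termination_msg)
}

print(json.dumps(payload, indent=2))
```

Note that no api_key appears anywhere in the spec: authentication happens at the HTTP layer, not per agent.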
## Side-by-Side: Basic Two-Agent Chat
When the UserProxy merely relays a message and the Assistant responds, a single agent completion is all you need. The UserProxyAgent is not an AI agent; it simply passes the task along.
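The whole initiate_chat() call therefore collapses into one POST. A sketch of building that request; the endpoint URL and the x-api-key header name are assumptions, so check them against the current Swarms API docs before use.

```python
import json
import urllib.request

# Assumed endpoint; verify against the Swarms API reference.
SWARMS_URL = "https://api.swarms.world/v1/swarm/completions"

payload = {
    "name": "two-agent-chat",
    "agents": [{
        "agent_name": "assistant",
        "system_prompt": "You are a helpful assistant.",
        "model_name": "gpt-4o",
        "max_loops": 1,
    }],
    # The message the UserProxyAgent would have relayed becomes the task.
    "task": "Summarize the plot of Moby-Dick in two sentences.",
    "max_loops": 1,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Assemble the POST that replaces agent.initiate_chat(recipient, message)."""
    return urllib.request.Request(
        SWARMS_URL,
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it with urllib.request.urlopen(build_request(key)) returns the completion; there is no persistent chat object to manage afterwards.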
## Side-by-Side: GroupChat
- UserProxyAgent → removed; the task field replaces it
- GroupChatManager → managed by the API
- max_round=5 → "max_loops": 5 at the top level
- cache_seed → not needed; the API is stateless
- llm_config per agent → "model_name" per agent
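Those substitutions fit into a single request body. A sketch, assuming a swarm_type key that takes the GroupChat swarm type from the mapping table and an assumed top-level name field:

```python
# AutoGen's GroupChat + GroupChatManager, expressed as one request body.
payload = {
    "name": "planning-groupchat",
    "swarm_type": "GroupChat",   # replaces GroupChat + GroupChatManager
    "max_loops": 5,              # was max_round=5
    "task": "Draft a launch plan for a new CLI tool.",  # replaces UserProxyAgent
    "agents": [
        {"agent_name": "planner",
         "system_prompt": "Break the task into concrete steps.",
         "model_name": "gpt-4o"},   # was a per-agent llm_config
        {"agent_name": "critic",
         "system_prompt": "Critique the plan and suggest fixes.",
         "model_name": "gpt-4o"},
    ],
    # No cache_seed: the API is stateless, so there is nothing to cache locally.
}
```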
## Side-by-Side: Code Execution Agent
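A sketch of the code-executing agent: AutoGen's code_execution_config (Docker or a local subprocess) becomes the built-in cloud-sandbox tool from the mapping table. Field names follow that table; the overall request shape is an assumption.

```python
# AutoGen: UserProxyAgent(code_execution_config={"work_dir": "coding", "use_docker": True})
# Swarms API: declare the built-in tool on the agent spec instead.
payload = {
    "name": "code-runner",
    "agents": [{
        "agent_name": "executor",
        "system_prompt": "Write and run Python code, then report the results.",
        "model_name": "gpt-4o",
        "tools": ["code_interpreter"],   # replaces code_execution_config
        "max_loops": 1,
    }],
    "task": "Compute the 20th Fibonacci number by running code.",
    "max_loops": 1,
}
```

No Docker setup is required on your side; execution happens in the provider's sandbox.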
## Side-by-Side: Multi-Agent Debate Pattern

AutoGen is often used for debate-style workflows where agents argue positions. The Swarms API has a dedicated DebateWithJudge architecture for this.
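Instead of wiring up a GroupChat with a judge agent and a termination callback, you declare the roles and pick the architecture. A sketch; the exact swarm_type string for the DebateWithJudge architecture is an assumption to confirm in the Swarms docs.

```python
# Two debaters plus a judge, orchestrated by the dedicated architecture.
payload = {
    "name": "policy-debate",
    "swarm_type": "DebateWithJudge",   # value assumed; see Swarms docs
    "agents": [
        {"agent_name": "pro",
         "system_prompt": "Argue in favor of the motion.",
         "model_name": "gpt-4o"},
        {"agent_name": "con",
         "system_prompt": "Argue against the motion.",
         "model_name": "gpt-4o"},
        {"agent_name": "judge",
         "system_prompt": "Weigh both sides and declare a winner with reasons.",
         "model_name": "gpt-4o"},
    ],
    "task": "Motion: remote work should be the default for software teams.",
    "max_loops": 3,   # bounds the number of debate rounds
}
```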
## LLM Configuration Migration

AutoGen requires per-agent llm_config dicts with API keys and model lists. The Swarms API handles all authentication — you just specify the model name.
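Side by side in one snippet: the AutoGen config carries credentials and caching, while the Swarms agent spec carries only the model name (the key moves to the request's auth header). The config_list shape matches AutoGen's convention; the Swarms fields follow the mapping table above.

```python
# AutoGen: credentials and cache settings travel with every agent.
autogen_llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": "sk-..."}],  # placeholder key
    "cache_seed": 42,
}

# Swarms API: the agent spec only names the model; auth is a request header.
swarms_agent = {
    "agent_name": "assistant",
    "system_prompt": "You are a helpful assistant.",
    "model_name": "gpt-4o",   # the only model-related field needed
}
```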
## Termination Conditions

AutoGen uses is_termination_msg callbacks to stop conversations. The Swarms API uses max_loops to bound execution.
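The contrast in one sketch: an AutoGen-style content callback versus a structural bound. There is no callback hook in the stateless API, so the loop counts are the termination condition; the request shape is assumed as in the earlier examples.

```python
def is_termination_msg(msg: dict) -> bool:
    """AutoGen-style check: stop when the model emits TERMINATE."""
    return "TERMINATE" in (msg.get("content") or "")

# Swarms API: no callback; bound the run structurally instead.
payload = {
    "name": "bounded-run",
    "agents": [{
        "agent_name": "worker",
        "system_prompt": "Answer the task, then stop.",
        "model_name": "gpt-4o",
        "max_loops": 1,   # per-agent bound (was max_consecutive_auto_reply)
    }],
    "task": "List three unusual uses for a paperclip.",
    "max_loops": 1,       # workflow-level bound (replaces is_termination_msg)
}
```

If your prompts relied on a TERMINATE sentinel, fold that instruction into the system_prompt and keep max_loops small.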
## Key Differences to Keep in Mind
| Concern | AutoGen | Swarms API |
|---|---|---|
| Conversation history | Managed in-memory per session | Stateless; each request is independent |
| Human input | human_input_mode="ALWAYS" | Not supported in API mode |
| Code execution | Docker or local subprocess | Cloud sandbox via "tools": ["code_interpreter"] |
| Function calling | register_function() | "tools" array with built-in tool names |
| Nested chats | register_nested_chats() | Use HierarchicalSwarm or GraphWorkflow |
| Cost / token tracking | Via OpenAI usage logs | usage.token_cost in every response |
| Caching | cache_seed on llm_config | Not applicable; no local cache |
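Because every response reports usage.token_cost, cost tracking needs no external usage logs. A small sketch of reading it, assuming a JSON body with a nested usage object shaped like the table row above:

```python
def extract_cost(response_body: dict) -> float:
    """Pull usage.token_cost from a parsed Swarms API response (0.0 if absent)."""
    return response_body.get("usage", {}).get("token_cost", 0.0)

# Hypothetical response body for illustration.
sample = {"output": "done", "usage": {"token_cost": 0.0042}}
print(extract_cost(sample))  # → 0.0042
```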