LangChain gives you a toolkit of local abstractions — LLMChain, AgentExecutor, LCEL pipes — for constructing AI workflows locally. The Swarms API replaces this entire stack with a single REST endpoint: you describe your agents and workflow in JSON, and the API handles orchestration, model routing, retries, and billing.
| LangChain | Swarms API |
|---|---|
| `LLMChain(llm, prompt)` | Single agent completion via `/v1/agent/completions` |
| `SequentialChain([chain_a, chain_b])` | `SequentialWorkflow` via `/v1/swarms/completions` |
| `RunnableParallel({a: chain_a, b: chain_b})` | `ConcurrentWorkflow` via `/v1/swarms/completions` |
| `AgentExecutor(agent, tools)` | Agent with `"tools"` array |
| `ChatPromptTemplate.from_messages([...])` | `system_prompt` + `task` fields |
| `chain_a \| chain_b` (LCEL pipe) | `SequentialWorkflow` with agents in order |
| `chain.invoke({"input": "..."})` | POST request with `"task": "..."` |
| `chain.stream({"input": "..."})` | Streaming endpoint (see Streaming) |
| `chain.batch([input1, input2])` | `ConcurrentWorkflow` or batch endpoint |
| `ConversationBufferMemory` | Stateless; manage conversation history externally |
| `Tool(name, func, description)` | `"tools"` array with built-in tool names |
| `ChatOpenAI(model="gpt-4o")` | `"model_name": "gpt-4o"` on agent spec |
## Side-by-Side: Simple LLMChain
**LangChain**
**Swarms API**
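A sketch of the equivalent single-agent call. The endpoint path comes from the table above, but the base URL, the `x-api-key` header, and the `agent_config` wrapper are assumptions to verify against the API reference:

```python
import requests

AGENT_URL = "https://api.swarms.world/v1/agent/completions"  # base URL is an assumption

# Field names follow this guide; the "agent_config" wrapper is an assumption.
payload = {
    "agent_config": {
        "agent_name": "summarizer",
        "system_prompt": "You are a concise research assistant.",
        "model_name": "gpt-4o",
    },
    "task": "Summarize the transformer architecture in three sentences.",
}

def run(api_key: str) -> dict:
    # One POST replaces prompt templating, LLM wiring, and invocation.
    resp = requests.post(AGENT_URL, json=payload, headers={"x-api-key": api_key})
    resp.raise_for_status()
    return resp.json()
```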
## Side-by-Side: SequentialChain (LCEL Pipe)
**LangChain**
**Swarms API**
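The same two-step pipeline as a `SequentialWorkflow` request. The `name`/`swarm_type`/`agents`/`task` field names follow this guide's concepts, but the exact schema and base URL are assumptions:

```python
import requests

SWARMS_URL = "https://api.swarms.world/v1/swarms/completions"  # base URL is an assumption

# Agents run in list order; each receives the previous agent's output.
payload = {
    "name": "outline-then-draft",
    "swarm_type": "SequentialWorkflow",
    "task": "Produce an article about vector databases.",
    "agents": [
        {
            "agent_name": "outliner",
            "system_prompt": "Write a brief outline for the requested article.",
            "model_name": "gpt-4o",
        },
        {
            "agent_name": "writer",
            "system_prompt": "Expand the outline you are given into an article.",
            "model_name": "gpt-4o",
        },
    ],
}

def run(api_key: str) -> dict:
    resp = requests.post(SWARMS_URL, json=payload, headers={"x-api-key": api_key})
    resp.raise_for_status()
    return resp.json()
```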
## Side-by-Side: RunnableParallel
**LangChain**
**Swarms API**
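The equivalent `ConcurrentWorkflow` request; as above, the payload shape and base URL are assumptions to check against the API reference:

```python
import requests

SWARMS_URL = "https://api.swarms.world/v1/swarms/completions"  # base URL is an assumption

# ConcurrentWorkflow runs every agent on the same task in parallel.
payload = {
    "name": "parallel-review",
    "swarm_type": "ConcurrentWorkflow",
    "task": "Review the attached draft.",
    "agents": [
        {
            "agent_name": "summarizer",
            "system_prompt": "Summarize the draft.",
            "model_name": "gpt-4o",
        },
        {
            "agent_name": "critic",
            "system_prompt": "Critique the draft.",
            "model_name": "gpt-4o",
        },
    ],
}

def run(api_key: str) -> dict:
    resp = requests.post(SWARMS_URL, json=payload, headers={"x-api-key": api_key})
    resp.raise_for_status()
    return resp.json()
```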
## Side-by-Side: AgentExecutor with Tools
**LangChain**
**Swarms API**
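In the API, tool wiring becomes declarative: you list built-in tool names instead of binding Python callables. `"web_search"` below is a purely illustrative name, and the payload shape and base URL are assumptions; consult the API's list of built-in tools for real identifiers:

```python
import requests

AGENT_URL = "https://api.swarms.world/v1/agent/completions"  # base URL is an assumption

payload = {
    "agent_config": {
        "agent_name": "researcher",
        "system_prompt": "Answer questions, using tools when helpful.",
        "model_name": "gpt-4o",
        # Hypothetical built-in tool name, for illustration only
        "tools": ["web_search"],
    },
    "task": "What changed in the latest Python release?",
}

def run(api_key: str) -> dict:
    resp = requests.post(AGENT_URL, json=payload, headers={"x-api-key": api_key})
    resp.raise_for_status()
    return resp.json()
```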
## Side-by-Side: Streaming
**LangChain**
**Swarms API**
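With the API you consume the streaming endpoint over HTTP. The `/stream` path suffix and line-delimited framing shown here are assumptions; see the Streaming documentation for the exact protocol:

```python
import requests

# Assumed path; the guide only says a dedicated streaming endpoint exists.
STREAM_URL = "https://api.swarms.world/v1/agent/completions/stream"

payload = {
    "agent_config": {
        "agent_name": "storyteller",
        "system_prompt": "Tell vivid stories.",
        "model_name": "gpt-4o",
    },
    "task": "Tell a story about deep-sea exploration.",
}

def stream(api_key: str) -> None:
    # stream=True keeps the connection open and yields data as it arrives
    with requests.post(
        STREAM_URL, json=payload, headers={"x-api-key": api_key}, stream=True
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                print(line.decode("utf-8"))
```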
## Prompt Templates → System Prompts
LangChain’s `ChatPromptTemplate` separates system messages from human messages. In the Swarms API, system instructions go in `system_prompt` and the user’s request goes in `task`.
**LangChain**
**Swarms API**
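On the API side, templating is plain string interpolation before the request is built (the `agent_config` wrapper is an assumption, as above):

```python
# Template variables are interpolated in ordinary Python
role, style = "historian", "concise"

payload = {
    "agent_config": {
        "agent_name": "qa",
        "system_prompt": f"You are a {role}. Answer in a {style} style.",
        "model_name": "gpt-4o",
    },
    "task": "Who commissioned the Pantheon?",
}
```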
Any template variables from `ChatPromptTemplate` are simply inlined into the `system_prompt` string before the request is sent.
## Structured Output
**LangChain**
**Swarms API**
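With the API, you request JSON output via `response_format` and validate it yourself after the response comes back. Its placement on the agent spec (and the wrapper field name) is an assumption to confirm against the API reference:

```python
import requests

AGENT_URL = "https://api.swarms.world/v1/agent/completions"  # base URL is an assumption

payload = {
    "agent_config": {
        "agent_name": "extractor",
        "system_prompt": "Extract the person as JSON with keys 'name' and 'age'.",
        "model_name": "gpt-4o",
        "response_format": {"type": "json_object"},
    },
    "task": "Ada Lovelace was 36.",
}

def run(api_key: str) -> dict:
    resp = requests.post(AGENT_URL, json=payload, headers={"x-api-key": api_key})
    resp.raise_for_status()
    return resp.json()  # validate the returned JSON client-side, e.g. with pydantic
```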
## Memory and Conversation History
LangChain’s `ConversationBufferMemory` persists chat history between chain calls. The Swarms API is stateless — maintain history externally and pass it in the `task` field.
**LangChain**
**Swarms API**
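With a stateless API, you keep the transcript yourself and prepend it to each `task`. The transcript format below is one reasonable convention, not a prescribed one:

```python
# Maintain the conversation client-side between requests
history: list[tuple[str, str]] = [
    ("user", "Hi, I'm Ada."),
    ("assistant", "Hello Ada!"),
]
transcript = "\n".join(f"{role}: {text}" for role, text in history)

payload = {
    "agent_config": {
        "agent_name": "chat",
        "system_prompt": "Continue the conversation naturally.",
        "model_name": "gpt-4o",
    },
    # Prior turns travel inside the task itself
    "task": f"Conversation so far:\n{transcript}\n\nuser: What's my name?",
}
```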
## Key Differences to Keep in Mind
| Concern | LangChain | Swarms API |
|---|---|---|
| LCEL composition | `chain_a \| chain_b` pipe syntax | `SequentialWorkflow` with agents in order |
| Memory | `ConversationBufferMemory`, `VectorStoreRetriever` | Stateless; manage externally |
| Streaming | `.stream()` / `.astream()` | Dedicated `/stream` endpoint |
| Callbacks | `callbacks=[...]` on chain/agent | Not needed; all outputs returned in response |
| Retry logic | `with_retry()` | Handled server-side |
| Fallbacks | `with_fallbacks([...])` | `model_name` can be swapped per agent |
| Output parsers | `StrOutputParser`, `PydanticOutputParser` | `response_format: {type: "json_object"}` |
| Vector stores / RAG | `VectorStoreRetriever` | Pass retrieved context directly in `task` |
| Embeddings | `OpenAIEmbeddings`, etc. | Not needed in the API; use external embedding service |