Build, deploy, and manage AI agent workflows on your own infrastructure. No cloud dependencies, full control.
from localagents import Orchestrator, Agent

# Create an orchestrator
orchestrator = Orchestrator()

# Define your agents
research_agent = Agent("researcher", model="llama3")
writer_agent = Agent("writer", model="mistral")

# Orchestrate the workflow
result = orchestrator.run([
    research_agent.task("Research AI trends"),
    writer_agent.task("Write summary"),
])
Run everything on your infrastructure. No API keys, no cloud costs, complete data privacy.
Chain agents intelligently with conditional logic, parallel execution, and error handling.
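The chaining pattern above can be sketched in plain Python. This is a conceptual illustration of parallel execution, per-task error handling, and conditional chaining — not the localagents implementation; the `Task` class and `run_parallel` helper here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    fn: Callable[[], str]

def run_parallel(tasks: list[Task]) -> dict[str, str]:
    """Run independent tasks concurrently, isolating failures per task."""
    results: dict[str, str] = {}
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(t.fn): t.name for t in tasks}
        for future in as_completed(futures):
            name = futures[future]
            try:
                results[name] = future.result()
            except Exception as exc:
                # One failing task does not abort the whole workflow.
                results[name] = f"error: {exc}"
    return results

# Conditional logic: only run the writer step if research succeeded.
results = run_parallel([Task("research", lambda: "AI trends data")])
if not results["research"].startswith("error:"):
    results["write"] = f"Summary of {results['research']}"
```

A real orchestrator would add retries, timeouts, and dependency graphs on top of this skeleton, but the control flow is the same: fan out independent tasks, collect results, branch on outcomes.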
Simple Python SDK with intuitive APIs, full type hints, and comprehensive documentation.
Bring your own models, tools, and integrations. Works with any local LLM.
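One way to think about the "bring your own model" claim is as an adapter interface: anything that can turn a prompt into text can back an agent. The sketch below is an assumption about the shape of such an interface — the `LocalModel` protocol, `EchoModel` stand-in, and this minimal `Agent` are illustrative, not the library's actual API.

```python
from typing import Protocol

class LocalModel(Protocol):
    """Anything with a generate() method can serve as a model backend."""
    def generate(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in for a local LLM backend (e.g. one served by llama.cpp or Ollama)."""
    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"

class Agent:
    """Minimal agent that delegates text generation to any LocalModel."""
    def __init__(self, name: str, model: LocalModel) -> None:
        self.name = name
        self.model = model

    def task(self, instruction: str) -> str:
        return self.model.generate(instruction)

agent = Agent("researcher", model=EchoModel())
output = agent.task("Research AI trends")
```

Because the protocol is structural, swapping in a different local model means writing one small wrapper class — no changes to the agents that use it.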
pip install localagents
Then create your first agent in just a few lines of code.
Read the Docs