# Quickstart
A working locus agent in five minutes.
## 1. Install
This installs locus plus the OCI Generative AI provider. For other providers add the corresponding extra:
```shell
pip install "locus-sdk[openai]"     # OpenAI directly
pip install "locus-sdk[anthropic]"  # Anthropic directly
pip install "locus-sdk[ollama]"     # local models
pip install "locus-sdk[sdk]"        # everything
```
## 2. Configure your provider
For Oracle Generative AI — the day-0 path — set one environment variable:
If your profile uses a session token (e.g. SSO), make sure it's fresh:
For OpenAI / Anthropic / Ollama, set the relevant `*_API_KEY` or `OLLAMA_HOST`. See Models for the per-provider matrix.
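For reference, these are the standard vendor environment variables, sketched with placeholder values; the exact names locus reads are listed in Models:

```shell
export OPENAI_API_KEY="sk-..."               # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."        # Anthropic
export OLLAMA_HOST="http://localhost:11434"  # Ollama (default local endpoint)
```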
## 3. Your first agent
Save this as `hello_agent.py`:

```python
from locus import Agent
from locus.tools.decorator import tool


@tool
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b


@tool
def search_books(topic: str) -> list[str]:
    """Search the catalogue for books on a topic."""
    return [f"{topic} for Beginners", f"Advanced {topic}"]


agent = Agent(
    model="oci:openai.gpt-5.5",
    tools=[add, search_books],
    system_prompt="You are a helpful assistant.",
)

result = agent.run_sync("What's 17 + 25, and recommend two books on Rust.")
print(result.message)
```
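A decorator like `@tool` typically derives a model-facing schema from the function's signature and docstring. The locus internals aren't shown here, but a minimal self-contained sketch of the idea looks like this; the `ToolSpec` name and its fields are illustrative, not locus API:

```python
import inspect
from dataclasses import dataclass
from typing import Any, Callable, get_type_hints


@dataclass
class ToolSpec:
    """Illustrative container for what a @tool decorator might record."""
    name: str
    description: str
    parameters: dict[str, Any]
    fn: Callable


def tool_spec(fn: Callable) -> ToolSpec:
    """Derive a tool schema from a function's signature and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return type isn't a call parameter
    params = {name: t.__name__ for name, t in hints.items()}
    return ToolSpec(
        name=fn.__name__,
        description=inspect.getdoc(fn) or "",
        parameters=params,
        fn=fn,
    )


def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b


spec = tool_spec(add)
print(spec.name, spec.parameters)  # add {'a': 'int', 'b': 'int'}
```

This is why the docstrings on `add` and `search_books` matter: they become the description the model sees when deciding which tool to call.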
Run it with `python hello_agent.py`. You should see the sum (42) together with two book recommendations.
## 4. Stream the events
For UIs and real-time logging, switch to async and consume the typed event stream:
```python
import asyncio

from locus.core.events import (
    ThinkEvent, ToolStartEvent, ToolCompleteEvent, TerminateEvent,
)


async def main():
    async for event in agent.run("What's 17 + 25?"):
        match event:
            case ThinkEvent(reasoning=r) if r:
                print(f"💭 {r}")
            case ToolStartEvent(tool_name=n, arguments=a):
                print(f"🔧 {n}({a})")
            case ToolCompleteEvent(result=r):
                print(f"  ↳ {r}")
            case TerminateEvent(final_message=m):
                print(f"\n✅ {m}")


asyncio.run(main())
```
See Streaming for the full event taxonomy.
## 5. Persist conversations across restarts
For real applications you'll want state to survive a restart. Wire a checkpointer and a `thread_id`:
```python
from locus.memory.backends.file import FileCheckpointer

agent = Agent(
    model="oci:openai.gpt-5.5",
    tools=[...],
    system_prompt="...",
    checkpointer=FileCheckpointer(directory="./threads"),
)

# Day 1
agent.run_sync("I'm planning a trip to Tokyo.", thread_id="user-c42")

# Day 2 — same thread_id, conversation continues
agent.run_sync("What were we talking about?", thread_id="user-c42")
```
For OCI-native durability, swap to `OCIBucketBackend(bucket=..., namespace=...)`. See Conversation Management.
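The contract a file checkpointer satisfies is simple: serialize a thread's messages under its `thread_id`, and load them back on the next run. A minimal sketch of that contract, independent of locus (the `FileStore` class and its layout are illustrative):

```python
import json
from pathlib import Path


class FileStore:
    """Illustrative per-thread persistence: one JSON file per thread_id."""

    def __init__(self, directory: str):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)

    def save(self, thread_id: str, messages: list[dict]) -> None:
        (self.dir / f"{thread_id}.json").write_text(json.dumps(messages))

    def load(self, thread_id: str) -> list[dict]:
        path = self.dir / f"{thread_id}.json"
        return json.loads(path.read_text()) if path.exists() else []


store = FileStore("./threads-demo")
store.save("user-c42", [{"role": "user", "content": "I'm planning a trip to Tokyo."}])

# A later process restores the conversation by thread_id:
print(store.load("user-c42")[0]["content"])  # I'm planning a trip to Tokyo.
```

Because the key is the `thread_id`, any process that can reach the same directory (or, for `OCIBucketBackend`, the same bucket) can resume the conversation.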
## 6. Make it production-grade
Add idempotency to side-effecting tools, Reflexion to catch wrong premises, and termination algebra to stop when the work is done:
```python
from locus.memory.backends import OCIBucketBackend
from locus.core.termination import (
    MaxIterations, ToolCalled, ConfidenceMet,
)


@tool(idempotent=True)
def submit_order(item_id: str, qty: int) -> dict:
    return shop.submit(item_id, qty)


agent = Agent(
    model="oci:openai.gpt-5.5",
    tools=[search_catalog, submit_order],
    system_prompt="...",
    reflexion=True,
    checkpointer=OCIBucketBackend(bucket="locus-threads", namespace="..."),
    termination=(
        ToolCalled("submit_order") & ConfidenceMet(0.9)
    ) | MaxIterations(8),
)
```
Each piece in detail:
- `@tool(idempotent=True)` → Idempotency
- `reflexion=True` → Reasoning
- `checkpointer=...` → Checkpointers
- `termination=...` → Termination
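The `&` and `|` on termination conditions form a small boolean algebra over "should we stop?" predicates evaluated against the loop state. A self-contained sketch of how such combinators compose; the class names mirror, but are not, the locus classes, and the `state` dict shape is an assumption:

```python
from dataclasses import dataclass


@dataclass
class Cond:
    """A stop predicate over loop state, composable with & and |."""

    def met(self, state: dict) -> bool:
        raise NotImplementedError

    def __and__(self, other):  # stop only when both conditions hold
        return BoolOp(lambda s: self.met(s) and other.met(s))

    def __or__(self, other):   # stop when either condition holds
        return BoolOp(lambda s: self.met(s) or other.met(s))


@dataclass
class BoolOp(Cond):
    fn: callable

    def met(self, state):
        return self.fn(state)


@dataclass
class MaxIters(Cond):
    n: int

    def met(self, state):
        return state["iterations"] >= self.n


@dataclass
class ToolWasCalled(Cond):
    name: str

    def met(self, state):
        return self.name in state["tools_called"]


@dataclass
class ConfidenceAtLeast(Cond):
    threshold: float

    def met(self, state):
        return state["confidence"] >= self.threshold


# Same shape as the termination= expression above:
stop = (ToolWasCalled("submit_order") & ConfidenceAtLeast(0.9)) | MaxIters(8)

print(stop.met({"iterations": 2, "tools_called": ["submit_order"], "confidence": 0.95}))  # True
print(stop.met({"iterations": 3, "tools_called": [], "confidence": 0.5}))                 # False
```

The `| MaxIterations(8)` branch acts as a safety valve: even if the happy-path conjunction never becomes true, the loop still terminates.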
## 7. Multi-agent
When one agent isn't enough — pick the coordination shape that fits the problem:
| Shape | When |
|---|---|
| Composition | linear chain, fan-out + merge |
| Orchestrator + Specialists | one router, parallel experts |
| Swarm | open-ended research, peer-to-peer |
| Handoff | escalation desks |
| StateGraph | review-loops, retry-until-confidence |
| Functional API | map/reduce over agents |
| A2A | cross-process meshes |
## 8. Deploy
`AgentServer` is a drop-in FastAPI app:

```python
from locus.server import AgentServer

server = AgentServer(agent=agent)
server.run(host="0.0.0.0", port=8080)
```
`POST /invoke`, `POST /stream`, `GET /threads/{id}`. Deploys anywhere FastAPI runs — see Deploy.
## Where to next
- Read deeper. Agent Loop is the architectural reference for how all of this fits together.
- Browse examples. Forty progressive tutorials at `examples/`. Each is a single runnable file that adds one idea on top of the previous.
- Steer it. Hooks give you logging, telemetry, retry, guardrails, and steering as one-line additions.