Reasoning¶
Reflexion¶
Reflector ¶
Reflector(loop_threshold: int = 3, success_weight: float = 0.15, error_penalty: float = 0.2, diminishing_returns: bool = True, min_progress_delta: float = 0.05, completion_bonus: float = 0.05)
Evaluates agent progress after each iteration.
The Reflector analyzes tool execution patterns, results, and state to determine if the agent is making progress toward its goal.
Attributes:

| Name | Type | Description |
|---|---|---|
| loop_threshold | int | Number of repeated tool calls to consider a loop. |
| success_weight | float | Weight for successful tool executions in confidence. |
| error_penalty | float | Penalty for failed tool executions. |
| diminishing_returns | bool | Whether to apply diminishing returns to confidence. |
| min_progress_delta | float | Minimum confidence delta for "on_track" assessment. |
Initialize the Reflector.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| loop_threshold | int | Number of repeated tool calls to consider a loop. | 3 |
| success_weight | float | Base confidence increase per successful tool call. | 0.15 |
| error_penalty | float | Confidence decrease per failed tool call. | 0.2 |
| diminishing_returns | bool | Apply diminishing returns to positive deltas. | True |
| min_progress_delta | float | Minimum delta to consider making progress. | 0.05 |
| completion_bonus | float | Small confidence bump applied when an iteration produces an assistant turn but no tools fired (i.e. a successful chat reply). Without this, tool-less chat agents would never raise their confidence above | 0.05 |
Source code in src/locus/reasoning/reflexion.py
reflect ¶
reflect(state: AgentState, iteration_executions: list[ToolExecution] | None = None) -> ReflectionResult
Evaluate agent progress and produce reflection result.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state | AgentState | Current agent state with history. | required |
| iteration_executions | list[ToolExecution] \| None | Tool executions from the current iteration. If None, uses the most recent executions from state. | None |

Returns:

| Type | Description |
|---|---|
| ReflectionResult | ReflectionResult with assessment and guidance. |
Source code in src/locus/reasoning/reflexion.py
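The loop_threshold behavior described above can be illustrated with a minimal, self-contained sketch. Note that `is_looping` is a hypothetical helper written for this page, not the Reflector's actual internals:

```python
def is_looping(tool_calls: list[str], loop_threshold: int = 3) -> bool:
    """Return True when the most recent calls repeat the same tool call."""
    if len(tool_calls) < loop_threshold:
        return False
    # A "loop" here means the last loop_threshold calls are all identical.
    recent = tool_calls[-loop_threshold:]
    return len(set(recent)) == 1

# With the default threshold of 3, three identical calls in a row flag a loop.
print(is_looping(["search('x')", "search('x')", "search('x')"]))   # True
print(is_looping(["search('x')", "read_file('a')", "search('x')"]))  # False
```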
adjust_state_confidence ¶
Apply reflection result to update agent state confidence.
Uses the AgentState.adjust_confidence pattern for consistency.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| state | AgentState | Current agent state. | required |
| reflection | ReflectionResult | Reflection result with confidence delta. | required |

Returns:

| Type | Description |
|---|---|
| AgentState | New state with updated confidence. |
Source code in src/locus/reasoning/reflexion.py
create_guidance_message ¶
Create a guidance message to inject into the next iteration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reflection | ReflectionResult | Reflection result with assessment and guidance. | required |

Returns:

| Type | Description |
|---|---|
| str \| None | Formatted guidance message or None if no guidance needed. |
Source code in src/locus/reasoning/reflexion.py
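One plausible reading of how success_weight, error_penalty, and diminishing_returns combine into a confidence delta is sketched below. The actual formula in reflexion.py may differ; this only follows the parameter semantics stated in the tables above:

```python
def confidence_delta(
    successes: int,
    failures: int,
    current_confidence: float,
    success_weight: float = 0.15,
    error_penalty: float = 0.2,
    diminishing_returns: bool = True,
) -> float:
    """Combine per-call rewards and penalties into a single confidence delta."""
    delta = successes * success_weight - failures * error_penalty
    if diminishing_returns and delta > 0:
        # Scale positive gains down as confidence approaches 1.0.
        delta *= 1.0 - current_confidence
    return delta

# Two successes at low confidence yield a larger gain than at high confidence.
print(confidence_delta(2, 0, current_confidence=0.2))
print(confidence_delta(2, 0, current_confidence=0.8))
```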
Grounding¶
GroundingEvaluator ¶
GroundingEvaluator(replan_threshold: float = 0.65, claim_threshold: float = 0.5, require_evidence: bool = True)
Evaluates if claims are grounded in evidence using LLM-as-judge pattern.
The GroundingEvaluator analyzes claims against evidence gathered during tool execution to determine if the claims are factually supported.
Attributes:

| Name | Type | Description |
|---|---|---|
| replan_threshold | float | Score below which replanning is triggered. |
| claim_threshold | float | Minimum score for a claim to be considered grounded. |
| require_evidence | bool | Whether claims without explicit evidence are penalized. |
Initialize the GroundingEvaluator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| replan_threshold | float | Score below which replanning is triggered. | 0.65 |
| claim_threshold | float | Minimum score for individual claim grounding. | 0.5 |
| require_evidence | bool | Penalize claims without explicit evidence. | True |
Source code in src/locus/reasoning/grounding.py
evaluate ¶
evaluate(claims: Sequence[str], evidence: Sequence[str], context: str | None = None) -> GroundingResult
Evaluate claims against evidence.
This is a rule-based evaluation. For LLM-based evaluation, use evaluate_with_llm which integrates with a model provider.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| claims | Sequence[str] | List of claims to evaluate. | required |
| evidence | Sequence[str] | List of evidence strings from tool executions. | required |
| context | str \| None | Optional context for evaluation. | None |

Returns:

| Type | Description |
|---|---|
| GroundingResult | GroundingResult with scores and ungrounded claims. |
Source code in src/locus/reasoning/grounding.py
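The flavor of rule-based evaluation can be shown with a crude word-overlap proxy. `grounding_scores` is a hypothetical stand-in written for this page; the library's actual scoring in grounding.py is not shown here and is likely more sophisticated:

```python
def grounding_scores(
    claims: list[str],
    evidence: list[str],
    claim_threshold: float = 0.5,
    replan_threshold: float = 0.65,
) -> dict:
    """Score each claim by word overlap with evidence (a naive grounding proxy)."""
    evidence_words: set[str] = set()
    for item in evidence:
        evidence_words.update(item.lower().split())

    scores, ungrounded = [], []
    for claim in claims:
        words = set(claim.lower().split())
        score = len(words & evidence_words) / len(words) if words else 0.0
        scores.append(score)
        if score < claim_threshold:
            ungrounded.append(claim)

    overall = sum(scores) / len(scores) if scores else 0.0
    return {
        "overall_score": overall,
        "ungrounded_claims": ungrounded,
        "should_replan": overall < replan_threshold,
    }

result = grounding_scores(
    claims=["debug mode is enabled", "the server crashed"],
    evidence=["log output: debug mode is enabled at startup"],
)
print(result["should_replan"])      # True: overall score 0.5 < 0.65
print(result["ungrounded_claims"])  # ['the server crashed']
```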
evaluate_with_llm async ¶
evaluate_with_llm(claims: Sequence[str], evidence: Sequence[str], model: Any, context: str | None = None) -> GroundingResult
Evaluate claims using an LLM as judge.
This method uses a language model to evaluate whether claims are grounded in the provided evidence.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| claims | Sequence[str] | List of claims to evaluate. | required |
| evidence | Sequence[str] | List of evidence strings. | required |
| model | Any | Model instance implementing ModelProtocol. | required |
| context | str \| None | Optional context for evaluation. | None |

Returns:

| Type | Description |
|---|---|
| GroundingResult | GroundingResult with LLM-based evaluations. |
Source code in src/locus/reasoning/grounding.py
should_replan ¶
Check if replanning is recommended based on grounding result.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| result | GroundingResult | GroundingResult from evaluation. | required |

Returns:

| Type | Description |
|---|---|
| bool | True if replanning is recommended. |
Source code in src/locus/reasoning/grounding.py
get_replan_guidance ¶
Generate guidance for replanning based on grounding failures.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| result | GroundingResult | GroundingResult with ungrounded claims. | required |

Returns:

| Type | Description |
|---|---|
| str | Guidance string for the agent. |
Source code in src/locus/reasoning/grounding.py
Causal chains¶
CausalChain ¶
Builder for causal inference chains.
CausalChain allows agents to construct and analyze causal graphs, identifying root causes, symptoms, and potential conflicts.
Attributes:

| Name | Type | Description |
|---|---|---|
| nodes | dict[str, CausalNode] | Dictionary of nodes by ID. |
| edges | list[CausalEdge] | List of causal edges. |
Initialize an empty causal chain.
Source code in src/locus/reasoning/causal.py
add_node ¶
Add a node to the causal graph.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| node | CausalNode | The node to add. | required |

Returns:

| Type | Description |
|---|---|
| CausalNode | The added node. |

Raises:

| Type | Description |
|---|---|
| ValueError | If a node with this ID already exists. |
Source code in src/locus/reasoning/causal.py
create_node ¶
create_node(label: str, node_type: NodeType = NodeType.UNKNOWN, evidence: list[str] | None = None, confidence: float = 0.5, **metadata: Any) -> CausalNode
Create and add a new node to the graph.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| label | str | Human-readable description. | required |
| node_type | NodeType | Classification of this node. | UNKNOWN |
| evidence | list[str] \| None | Supporting evidence. | None |
| confidence | float | Confidence in classification. | 0.5 |
| **metadata | Any | Additional metadata. | {} |

Returns:

| Type | Description |
|---|---|
| CausalNode | The created and added node. |
Source code in src/locus/reasoning/causal.py
add_edge ¶
Add an edge to the causal graph.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| edge | CausalEdge | The edge to add. | required |

Returns:

| Type | Description |
|---|---|
| CausalEdge | The added edge. |

Raises:

| Type | Description |
|---|---|
| ValueError | If source or target node doesn't exist. |
Source code in src/locus/reasoning/causal.py
link ¶
link(source_id: str, target_id: str, relationship: RelationshipType = RelationshipType.CAUSES, confidence: float = 0.5, evidence: list[str] | None = None, reasoning: str | None = None) -> CausalEdge
Create and add an edge between existing nodes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| source_id | str | ID of the source node. | required |
| target_id | str | ID of the target node. | required |
| relationship | RelationshipType | Type of relationship. | CAUSES |
| confidence | float | Confidence in the relationship. | 0.5 |
| evidence | list[str] \| None | Supporting evidence. | None |
| reasoning | str \| None | Explanation of the link. | None |

Returns:

| Type | Description |
|---|---|
| CausalEdge | The created edge. |
Source code in src/locus/reasoning/causal.py
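The create-then-link call flow above can be mimicked with throwaway dataclasses. `Node`, `Edge`, and `Chain` below are illustrative stand-ins for this page, not the real CausalNode/CausalEdge/CausalChain models:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    id: str
    label: str


@dataclass
class Edge:
    source_id: str
    target_id: str
    relationship: str = "causes"


@dataclass
class Chain:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def create_node(self, node_id: str, label: str) -> Node:
        # Mirrors add_node's duplicate-ID check.
        if node_id in self.nodes:
            raise ValueError(f"node {node_id!r} already exists")
        node = Node(node_id, label)
        self.nodes[node_id] = node
        return node

    def link(self, source_id: str, target_id: str,
             relationship: str = "causes") -> Edge:
        # Mirrors add_edge's check that both endpoints exist.
        for nid in (source_id, target_id):
            if nid not in self.nodes:
                raise ValueError(f"unknown node {nid!r}")
        edge = Edge(source_id, target_id, relationship)
        self.edges.append(edge)
        return edge


chain = Chain()
chain.create_node("leak", "connection pool leak")
chain.create_node("oom", "out-of-memory errors")
chain.link("leak", "oom")
print(len(chain.edges))  # 1
```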
get_node ¶
Get a node by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| node_id | str | The node ID to look up. | required |

Returns:

| Type | Description |
|---|---|
| CausalNode \| None | The node or None if not found. |
get_edges_from ¶
Get all edges originating from a node.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| node_id | str | The source node ID. | required |

Returns:

| Type | Description |
|---|---|
| list[CausalEdge] | List of edges from this node. |
Source code in src/locus/reasoning/causal.py
get_edges_to ¶
Get all edges pointing to a node.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| node_id | str | The target node ID. | required |

Returns:

| Type | Description |
|---|---|
| list[CausalEdge] | List of edges to this node. |
Source code in src/locus/reasoning/causal.py
identify_root_causes ¶
Identify nodes that are root causes.
Root causes are nodes with outgoing causal edges but no incoming causal edges, or nodes explicitly marked as root_cause.
Returns:

| Type | Description |
|---|---|
| list[CausalNode] | List of root cause nodes. |
Source code in src/locus/reasoning/causal.py
identify_symptoms ¶
Identify nodes that are symptoms.
Symptoms are nodes with incoming causal edges but no outgoing causal edges, or nodes explicitly marked as symptom.
Returns:

| Type | Description |
|---|---|
| list[CausalNode] | List of symptom nodes. |
Source code in src/locus/reasoning/causal.py
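The degree-based rule behind identify_root_causes and identify_symptoms reduces to a small function over edge endpoints. This sketch ignores the explicit root_cause/symptom markers the real methods also honor:

```python
def classify(nodes: set[str], edges: list[tuple[str, str]]) -> dict[str, str]:
    """Classify nodes by causal in/out degree: roots emit, symptoms absorb."""
    sources = {s for s, _ in edges}   # nodes with outgoing causal edges
    targets = {t for _, t in edges}   # nodes with incoming causal edges
    types = {}
    for node in nodes:
        if node in sources and node not in targets:
            types[node] = "root_cause"
        elif node in targets and node not in sources:
            types[node] = "symptom"
        else:
            types[node] = "intermediate" if node in sources else "unknown"
    return types


edges = [("leak", "pressure"), ("pressure", "oom")]
types = classify({"leak", "pressure", "oom"}, edges)
print(types["leak"], types["oom"])  # root_cause symptom
```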
get_causal_path ¶
Find a causal path between two nodes.
Uses BFS to find the shortest path through causal edges.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| source_id | str | Starting node ID. | required |
| target_id | str | Ending node ID. | required |

Returns:

| Type | Description |
|---|---|
| list[CausalNode] \| None | List of nodes in the path, or None if no path exists. |
Source code in src/locus/reasoning/causal.py
detect_conflicts ¶
Detect conflicts in the causal graph.
Checks for:

- Cycles (A causes B causes A)
- Bidirectional causation (A causes B and B causes A)
- Contradictory relationships (A causes B and A inhibits B)

Returns:

| Type | Description |
|---|---|
| list[CausalConflict] | List of detected conflicts. |
Source code in src/locus/reasoning/causal.py
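Two of the checks above, bidirectional causation and contradictory relationships, can be sketched over tuple edges. General cycle detection is omitted for brevity, and the real method returns CausalConflict objects rather than strings:

```python
def detect_conflicts(edges: list[tuple[str, str, str]]) -> list[str]:
    """Flag bidirectional causation and contradictory (causes vs inhibits) pairs."""
    conflicts = []
    seen = {(s, t, rel) for s, t, rel in edges}
    for s, t, rel in edges:
        # A causes B and B causes A (report each pair once via s < t).
        if rel == "causes" and (t, s, "causes") in seen and s < t:
            conflicts.append(f"bidirectional: {s} <-> {t}")
        # A causes B and A inhibits B.
        if rel == "causes" and (s, t, "inhibits") in seen:
            conflicts.append(f"contradictory: {s} -> {t}")
    return conflicts


edges = [("a", "b", "causes"), ("b", "a", "causes"),
         ("a", "c", "causes"), ("a", "c", "inhibits")]
print(detect_conflicts(edges))  # ['bidirectional: a <-> b', 'contradictory: a -> c']
```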
classify_nodes ¶
Automatically classify all nodes based on graph structure.
Returns:

| Type | Description |
|---|---|
| dict[str, NodeType] | Dictionary mapping node IDs to their inferred types. |
Source code in src/locus/reasoning/causal.py
update_node_types ¶
Update node types based on graph structure (in place).
Source code in src/locus/reasoning/causal.py
get_chain_summary ¶
Get a summary of the causal chain.
Returns:

| Type | Description |
|---|---|
| dict[str, Any] | Dictionary with chain statistics and structure. |
Source code in src/locus/reasoning/causal.py
to_dict ¶
Serialize the causal chain to a dictionary.
Returns:

| Type | Description |
|---|---|
| dict[str, Any] | Dictionary representation of the chain. |
Source code in src/locus/reasoning/causal.py
from_dict classmethod ¶
Deserialize a causal chain from a dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data | dict[str, Any] | Dictionary with nodes and edges. | required |

Returns:

| Type | Description |
|---|---|
| CausalChain | CausalChain instance. |
Source code in src/locus/reasoning/causal.py
GSAR (typed grounding)¶
GSARThresholds ¶
Bases: BaseModel
Decision thresholds τ_regenerate < τ_proceed (§5.1 + Eq. 3).
GSARResult ¶
Bases: BaseModel
Final result of running the Algorithm-1 outer loop.