Reasoning

Reflexion

Reflector

Reflector(loop_threshold: int = 3, success_weight: float = 0.15, error_penalty: float = 0.2, diminishing_returns: bool = True, min_progress_delta: float = 0.05, completion_bonus: float = 0.05)

Evaluates agent progress after each iteration.

The Reflector analyzes tool execution patterns, results, and state to determine if the agent is making progress toward its goal.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `loop_threshold` | `int` | Number of repeated tool calls to consider a loop. |
| `success_weight` | `float` | Weight for successful tool executions in confidence. |
| `error_penalty` | `float` | Penalty for failed tool executions. |
| `diminishing_returns` | `bool` | Whether to apply diminishing returns to confidence. |
| `min_progress_delta` | `float` | Minimum confidence delta for "on_track" assessment. |
| `completion_bonus` | `float` | Confidence bump applied when an iteration produces an assistant turn but no tools fired. Set to 0.0 to opt out. |

Initialize the Reflector.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `loop_threshold` | `int` | Number of repeated tool calls to consider a loop. | `3` |
| `success_weight` | `float` | Base confidence increase per successful tool call. | `0.15` |
| `error_penalty` | `float` | Confidence decrease per failed tool call. | `0.2` |
| `diminishing_returns` | `bool` | Apply diminishing returns to positive deltas. | `True` |
| `min_progress_delta` | `float` | Minimum delta to consider making progress. | `0.05` |
| `completion_bonus` | `float` | Small confidence bump applied when an iteration produces an assistant turn but no tools fired (i.e., a successful chat reply). Without this, tool-less chat agents would never raise their confidence above 0.0. Set to 0.0 to opt out. | `0.05` |

Source code in src/locus/reasoning/reflexion.py
def __init__(
    self,
    loop_threshold: int = 3,
    success_weight: float = 0.15,
    error_penalty: float = 0.2,
    diminishing_returns: bool = True,
    min_progress_delta: float = 0.05,
    completion_bonus: float = 0.05,
) -> None:
    """Initialize the Reflector.

    Args:
        loop_threshold: Number of repeated tool calls to consider a loop.
        success_weight: Base confidence increase per successful tool call.
        error_penalty: Confidence decrease per failed tool call.
        diminishing_returns: Apply diminishing returns to positive deltas.
        min_progress_delta: Minimum delta to consider making progress.
        completion_bonus: Small confidence bump applied when an
            iteration produces an assistant turn but no tools fired
            (i.e. a successful chat reply). Without this, tool-less
            chat agents would never raise their confidence above
            ``0.0``. Set to ``0.0`` to opt out.
    """
    self.loop_threshold = loop_threshold
    self.success_weight = success_weight
    self.error_penalty = error_penalty
    self.diminishing_returns = diminishing_returns
    self.min_progress_delta = min_progress_delta
    self.completion_bonus = completion_bonus
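
A minimal construction sketch: the defaults suit mixed tool/chat agents, while a tool-heavy agent might opt out of the completion bonus. The values below are illustrative, not recommendations:

```python
from locus.reasoning.reflexion import Reflector

# Default configuration.
reflector = Reflector()

# Stricter variant for tool-heavy agents: flag loops sooner and skip the
# chat-only completion bonus.
strict_reflector = Reflector(loop_threshold=2, completion_bonus=0.0)
```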

reflect

reflect(state: AgentState, iteration_executions: list[ToolExecution] | None = None) -> ReflectionResult

Evaluate agent progress and produce reflection result.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `AgentState` | Current agent state with history. | *required* |
| `iteration_executions` | `list[ToolExecution] \| None` | Tool executions from the current iteration. If None, uses the most recent executions from state. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `ReflectionResult` | ReflectionResult with assessment and guidance. |

Source code in src/locus/reasoning/reflexion.py
def reflect(
    self,
    state: AgentState,
    iteration_executions: list[ToolExecution] | None = None,
) -> ReflectionResult:
    """Evaluate agent progress and produce reflection result.

    Args:
        state: Current agent state with history.
        iteration_executions: Tool executions from the current iteration.
            If None, uses the most recent executions from state.

    Returns:
        ReflectionResult with assessment and guidance.
    """
    # Get executions for this iteration if not provided
    if iteration_executions is None:
        iteration_executions = self._get_recent_executions(state)

    # Check for loops first (highest priority)
    loop_result = self._detect_loop(state)
    if loop_result is not None:
        return loop_result

    # Analyze tool execution results
    success_count, error_count, results_content = self._analyze_executions(iteration_executions)

    # Calculate base confidence delta
    confidence_delta = self._calculate_confidence_delta(
        success_count,
        error_count,
        state.confidence,
    )

    # Determine assessment category and guidance
    assessment, guidance, findings = self._assess_progress(
        confidence_delta,
        success_count,
        error_count,
        results_content,
        state,
    )

    return ReflectionResult(
        confidence_delta=confidence_delta,
        assessment=assessment,
        guidance=guidance,
        findings_summary=findings,
    )

adjust_state_confidence

adjust_state_confidence(state: AgentState, reflection: ReflectionResult) -> AgentState

Apply reflection result to update agent state confidence.

Uses the AgentState.adjust_confidence pattern for consistency.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `AgentState` | Current agent state. | *required* |
| `reflection` | `ReflectionResult` | Reflection result with confidence delta. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `AgentState` | New state with updated confidence. |

Source code in src/locus/reasoning/reflexion.py
def adjust_state_confidence(
    self,
    state: AgentState,
    reflection: ReflectionResult,
) -> AgentState:
    """Apply reflection result to update agent state confidence.

    Uses the AgentState.adjust_confidence pattern for consistency.

    Args:
        state: Current agent state.
        reflection: Reflection result with confidence delta.

    Returns:
        New state with updated confidence.
    """
    return state.adjust_confidence(
        reflection.confidence_delta,
        diminishing=self.diminishing_returns,
    )

create_guidance_message

create_guidance_message(reflection: ReflectionResult) -> str | None

Create a guidance message to inject into the next iteration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `reflection` | `ReflectionResult` | Reflection result with assessment and guidance. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str \| None` | Formatted guidance message or None if no guidance needed. |

Source code in src/locus/reasoning/reflexion.py
def create_guidance_message(
    self,
    reflection: ReflectionResult,
) -> str | None:
    """Create a guidance message to inject into the next iteration.

    Args:
        reflection: Reflection result with assessment and guidance.

    Returns:
        Formatted guidance message or None if no guidance needed.
    """
    if reflection.guidance is None:
        return None

    parts = [f"[Reflection - {reflection.assessment.value}]"]
    parts.append(reflection.guidance)

    if reflection.loop_pattern:
        parts.append(f"Pattern detected: {reflection.loop_pattern}")

    if reflection.findings_summary:
        parts.append(f"Key findings: {reflection.findings_summary}")

    return "\n".join(parts)

Grounding

GroundingEvaluator

GroundingEvaluator(replan_threshold: float = 0.65, claim_threshold: float = 0.5, require_evidence: bool = True)

Evaluates if claims are grounded in evidence using LLM-as-judge pattern.

The GroundingEvaluator analyzes claims against evidence gathered during tool execution to determine if the claims are factually supported.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `replan_threshold` | `float` | Score below which replanning is triggered. |
| `claim_threshold` | `float` | Minimum score for a claim to be considered grounded. |
| `require_evidence` | `bool` | Whether claims without explicit evidence are penalized. |

Initialize the GroundingEvaluator.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `replan_threshold` | `float` | Score below which replanning is triggered. | `0.65` |
| `claim_threshold` | `float` | Minimum score for individual claim grounding. | `0.5` |
| `require_evidence` | `bool` | Penalize claims without explicit evidence. | `True` |

Source code in src/locus/reasoning/grounding.py
def __init__(
    self,
    replan_threshold: float = 0.65,
    claim_threshold: float = 0.5,
    require_evidence: bool = True,
) -> None:
    """Initialize the GroundingEvaluator.

    Args:
        replan_threshold: Score below which replanning is triggered.
        claim_threshold: Minimum score for individual claim grounding.
        require_evidence: Penalize claims without explicit evidence.
    """
    self.replan_threshold = replan_threshold
    self.claim_threshold = claim_threshold
    self.require_evidence = require_evidence

evaluate

evaluate(claims: Sequence[str], evidence: Sequence[str], context: str | None = None) -> GroundingResult

Evaluate claims against evidence.

This is a rule-based evaluation. For LLM-based evaluation, use evaluate_with_llm, which integrates with a model provider.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `claims` | `Sequence[str]` | List of claims to evaluate. | *required* |
| `evidence` | `Sequence[str]` | List of evidence strings from tool executions. | *required* |
| `context` | `str \| None` | Optional context for evaluation. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `GroundingResult` | GroundingResult with scores and ungrounded claims. |

Source code in src/locus/reasoning/grounding.py
def evaluate(
    self,
    claims: Sequence[str],
    evidence: Sequence[str],
    context: str | None = None,
) -> GroundingResult:
    """Evaluate claims against evidence.

    This is a rule-based evaluation. For LLM-based evaluation, use
    evaluate_with_llm which integrates with a model provider.

    Args:
        claims: List of claims to evaluate.
        evidence: List of evidence strings from tool executions.
        context: Optional context for evaluation.

    Returns:
        GroundingResult with scores and ungrounded claims.
    """
    if not claims:
        return GroundingResult(
            score=1.0,
            claims=[],
            ungrounded_claims=[],
            requires_replan=False,
            evaluation_details={"reason": "no_claims_to_evaluate"},
        )

    evaluated_claims: list[ClaimEvaluation] = []
    evidence_set = set(evidence)
    evidence_text = " ".join(evidence).lower()

    for claim in claims:
        evaluation = self._evaluate_single_claim(
            claim,
            evidence_set,
            evidence_text,
        )
        evaluated_claims.append(evaluation)

    # Calculate overall score
    if evaluated_claims:
        overall_score = sum(c.score for c in evaluated_claims) / len(evaluated_claims)
    else:
        overall_score = 1.0

    # Identify ungrounded claims
    ungrounded = [c.claim for c in evaluated_claims if c.score < self.claim_threshold]

    # Determine if replan is needed
    requires_replan = overall_score < self.replan_threshold

    return GroundingResult(
        score=overall_score,
        claims=evaluated_claims,
        ungrounded_claims=ungrounded,
        requires_replan=requires_replan,
        evaluation_details={
            "evidence_count": len(evidence),
            "claim_count": len(claims),
            "grounded_count": len(claims) - len(ungrounded),
        },
    )
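
A short usage sketch of the rule-based path (the claim and evidence strings are invented for illustration):

```python
from locus.reasoning.grounding import GroundingEvaluator

evaluator = GroundingEvaluator(replan_threshold=0.65, claim_threshold=0.5)
result = evaluator.evaluate(
    claims=["The timeout is set to 30 seconds.", "The cache is disabled."],
    evidence=["config.yaml contents: timeout: 30"],
)
print(f"score={result.score:.2f}")
print("ungrounded:", result.ungrounded_claims)  # claims scoring below 0.5
```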

evaluate_with_llm async

evaluate_with_llm(claims: Sequence[str], evidence: Sequence[str], model: Any, context: str | None = None) -> GroundingResult

Evaluate claims using an LLM as judge.

This method uses a language model to evaluate whether claims are grounded in the provided evidence.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `claims` | `Sequence[str]` | List of claims to evaluate. | *required* |
| `evidence` | `Sequence[str]` | List of evidence strings. | *required* |
| `model` | `Any` | Model instance implementing ModelProtocol. | *required* |
| `context` | `str \| None` | Optional context for evaluation. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `GroundingResult` | GroundingResult with LLM-based evaluations. |

Source code in src/locus/reasoning/grounding.py
async def evaluate_with_llm(
    self,
    claims: Sequence[str],
    evidence: Sequence[str],
    model: Any,
    context: str | None = None,
) -> GroundingResult:
    """Evaluate claims using an LLM as judge.

    This method uses a language model to evaluate whether claims
    are grounded in the provided evidence.

    Args:
        claims: List of claims to evaluate.
        evidence: List of evidence strings.
        model: Model instance implementing ModelProtocol.
        context: Optional context for evaluation.

    Returns:
        GroundingResult with LLM-based evaluations.
    """
    from locus.core.messages import Message  # noqa: PLC0415

    if not claims:
        return GroundingResult(
            score=1.0,
            claims=[],
            ungrounded_claims=[],
            requires_replan=False,
            evaluation_details={"reason": "no_claims_to_evaluate", "method": "llm"},
        )

    # Build the evaluation prompt
    prompt = self._build_evaluation_prompt(claims, evidence, context)

    # Call the model
    messages = [Message.user(prompt)]
    response = await model.complete(messages)

    # Parse the response
    evaluated_claims = self._parse_llm_response(
        claims,
        response.message.content or "",
    )

    # Calculate overall score
    if evaluated_claims:
        overall_score = sum(c.score for c in evaluated_claims) / len(evaluated_claims)
    else:
        overall_score = 1.0

    # Identify ungrounded claims
    ungrounded = [c.claim for c in evaluated_claims if c.score < self.claim_threshold]

    return GroundingResult(
        score=overall_score,
        claims=evaluated_claims,
        ungrounded_claims=ungrounded,
        requires_replan=overall_score < self.replan_threshold,
        evaluation_details={
            "evidence_count": len(evidence),
            "claim_count": len(claims),
            "method": "llm",
        },
    )
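
Because `model` is typed as `Any`, any object exposing the expected interface works. A hedged wiring sketch — the async `complete(messages)` call and `response.message.content` shape are inferred from the source above; everything else is an assumption:

```python
from collections.abc import Sequence

from locus.reasoning.grounding import GroundingEvaluator


async def check_grounding(model, claims: Sequence[str], evidence: Sequence[str]):
    """`model` is any ModelProtocol implementation: an async
    `complete(messages)` returning a response whose `message.content`
    holds the judge's verdict text.
    """
    evaluator = GroundingEvaluator()
    return await evaluator.evaluate_with_llm(claims, evidence, model=model)
```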

should_replan

should_replan(result: GroundingResult) -> bool

Check if replanning is recommended based on grounding result.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `result` | `GroundingResult` | GroundingResult from evaluation. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `bool` | True if replanning is recommended. |

Source code in src/locus/reasoning/grounding.py
def should_replan(self, result: GroundingResult) -> bool:
    """Check if replanning is recommended based on grounding result.

    Args:
        result: GroundingResult from evaluation.

    Returns:
        True if replanning is recommended.
    """
    return result.requires_replan

get_replan_guidance

get_replan_guidance(result: GroundingResult) -> str

Generate guidance for replanning based on grounding failures.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `result` | `GroundingResult` | GroundingResult with ungrounded claims. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | Guidance string for the agent. |

Source code in src/locus/reasoning/grounding.py
def get_replan_guidance(self, result: GroundingResult) -> str:
    """Generate guidance for replanning based on grounding failures.

    Args:
        result: GroundingResult with ungrounded claims.

    Returns:
        Guidance string for the agent.
    """
    if not result.ungrounded_claims:
        return "All claims are grounded. No replanning needed."

    parts = [
        f"Grounding score ({result.score:.0%}) is below threshold ({self.replan_threshold:.0%}).",
        "",
        "Ungrounded claims that need evidence:",
    ]

    for claim in result.ungrounded_claims[:5]:  # Limit to first 5
        parts.append(f"- {claim}")

    parts.extend(
        [
            "",
            "Recommendations:",
            "1. Gather additional evidence for ungrounded claims",
            "2. Revise claims that cannot be substantiated",
            "3. Focus on verifiable facts from tool results",
        ]
    )

    return "\n".join(parts)
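
Combining `should_replan` and `get_replan_guidance` into a single check, as a sketch:

```python
from collections.abc import Sequence

from locus.reasoning.grounding import GroundingEvaluator


def replan_guidance_or_none(
    evaluator: GroundingEvaluator,
    claims: Sequence[str],
    evidence: Sequence[str],
) -> str | None:
    """Return guidance text only when the grounding score demands a replan."""
    result = evaluator.evaluate(claims, evidence)
    if evaluator.should_replan(result):
        return evaluator.get_replan_guidance(result)
    return None
```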

Causal chains

CausalChain

CausalChain()

Builder for causal inference chains.

CausalChain allows agents to construct and analyze causal graphs, identifying root causes, symptoms, and potential conflicts.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `nodes` | `dict[str, CausalNode]` | Dictionary of nodes by ID. |
| `edges` | `list[CausalEdge]` | List of causal edges. |

Initialize an empty causal chain.

Source code in src/locus/reasoning/causal.py
def __init__(self) -> None:
    """Initialize an empty causal chain."""
    self._nodes: dict[str, CausalNode] = {}
    self._edges: list[CausalEdge] = []
    self._adjacency: dict[str, list[str]] = {}  # source -> [targets]
    self._reverse_adjacency: dict[str, list[str]] = {}  # target -> [sources]

nodes property

nodes: dict[str, CausalNode]

Get all nodes in the graph.

edges property

edges: list[CausalEdge]

Get all edges in the graph.

add_node

add_node(node: CausalNode) -> CausalNode

Add a node to the causal graph.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `node` | `CausalNode` | The node to add. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `CausalNode` | The added node. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If a node with this ID already exists. |

Source code in src/locus/reasoning/causal.py
def add_node(self, node: CausalNode) -> CausalNode:
    """Add a node to the causal graph.

    Args:
        node: The node to add.

    Returns:
        The added node.

    Raises:
        ValueError: If a node with this ID already exists.
    """
    if node.id in self._nodes:
        msg = f"Node with ID '{node.id}' already exists"
        raise ValueError(msg)

    self._nodes[node.id] = node
    self._adjacency[node.id] = []
    self._reverse_adjacency[node.id] = []
    return node

create_node

create_node(label: str, node_type: NodeType = NodeType.UNKNOWN, evidence: list[str] | None = None, confidence: float = 0.5, **metadata: Any) -> CausalNode

Create and add a new node to the graph.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `label` | `str` | Human-readable description. | *required* |
| `node_type` | `NodeType` | Classification of this node. | `UNKNOWN` |
| `evidence` | `list[str] \| None` | Supporting evidence. | `None` |
| `confidence` | `float` | Confidence in classification. | `0.5` |
| `**metadata` | `Any` | Additional metadata. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `CausalNode` | The created and added node. |

Source code in src/locus/reasoning/causal.py
def create_node(
    self,
    label: str,
    node_type: NodeType = NodeType.UNKNOWN,
    evidence: list[str] | None = None,
    confidence: float = 0.5,
    **metadata: Any,
) -> CausalNode:
    """Create and add a new node to the graph.

    Args:
        label: Human-readable description.
        node_type: Classification of this node.
        evidence: Supporting evidence.
        confidence: Confidence in classification.
        **metadata: Additional metadata.

    Returns:
        The created and added node.
    """
    node = CausalNode(
        label=label,
        node_type=node_type,
        evidence=evidence or [],
        confidence=confidence,
        metadata=metadata,
    )
    return self.add_node(node)

add_edge

add_edge(edge: CausalEdge) -> CausalEdge

Add an edge to the causal graph.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `edge` | `CausalEdge` | The edge to add. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `CausalEdge` | The added edge. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If source or target node doesn't exist. |

Source code in src/locus/reasoning/causal.py
def add_edge(self, edge: CausalEdge) -> CausalEdge:
    """Add an edge to the causal graph.

    Args:
        edge: The edge to add.

    Returns:
        The added edge.

    Raises:
        ValueError: If source or target node doesn't exist.
    """
    if edge.source_id not in self._nodes:
        msg = f"Source node '{edge.source_id}' not found"
        raise ValueError(msg)
    if edge.target_id not in self._nodes:
        msg = f"Target node '{edge.target_id}' not found"
        raise ValueError(msg)

    self._edges.append(edge)
    self._adjacency[edge.source_id].append(edge.target_id)
    self._reverse_adjacency[edge.target_id].append(edge.source_id)
    return edge

link

link(source_id: str, target_id: str, relationship: RelationshipType = RelationshipType.CAUSES, confidence: float = 0.5, evidence: list[str] | None = None, reasoning: str | None = None) -> CausalEdge

Create and add an edge between existing nodes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source_id` | `str` | ID of the source node. | *required* |
| `target_id` | `str` | ID of the target node. | *required* |
| `relationship` | `RelationshipType` | Type of relationship. | `CAUSES` |
| `confidence` | `float` | Confidence in the relationship. | `0.5` |
| `evidence` | `list[str] \| None` | Supporting evidence. | `None` |
| `reasoning` | `str \| None` | Explanation of the link. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `CausalEdge` | The created edge. |

Source code in src/locus/reasoning/causal.py
def link(
    self,
    source_id: str,
    target_id: str,
    relationship: RelationshipType = RelationshipType.CAUSES,
    confidence: float = 0.5,
    evidence: list[str] | None = None,
    reasoning: str | None = None,
) -> CausalEdge:
    """Create and add an edge between existing nodes.

    Args:
        source_id: ID of the source node.
        target_id: ID of the target node.
        relationship: Type of relationship.
        confidence: Confidence in the relationship.
        evidence: Supporting evidence.
        reasoning: Explanation of the link.

    Returns:
        The created edge.
    """
    edge = CausalEdge(
        source_id=source_id,
        target_id=target_id,
        relationship=relationship,
        confidence=confidence,
        evidence=evidence or [],
        reasoning=reasoning,
    )
    return self.add_edge(edge)
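
With `create_node` and `link`, a small diagnosis graph takes a few lines; a sketch with invented labels:

```python
from locus.reasoning.causal import CausalChain, NodeType, RelationshipType

chain = CausalChain()
leak = chain.create_node("Unclosed DB connections", confidence=0.8)
pool = chain.create_node("Connection pool exhaustion")
errors = chain.create_node("HTTP 500s on checkout", node_type=NodeType.SYMPTOM)

# Wire up the causal story: leak -> pool exhaustion -> user-facing errors.
chain.link(leak.id, pool.id, RelationshipType.CAUSES, confidence=0.7)
chain.link(pool.id, errors.id, RelationshipType.CAUSES, confidence=0.9)
```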

get_node

get_node(node_id: str) -> CausalNode | None

Get a node by ID.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `node_id` | `str` | The node ID to look up. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `CausalNode \| None` | The node or None if not found. |

Source code in src/locus/reasoning/causal.py
def get_node(self, node_id: str) -> CausalNode | None:
    """Get a node by ID.

    Args:
        node_id: The node ID to look up.

    Returns:
        The node or None if not found.
    """
    return self._nodes.get(node_id)

get_edges_from

get_edges_from(node_id: str) -> list[CausalEdge]

Get all edges originating from a node.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `node_id` | `str` | The source node ID. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `list[CausalEdge]` | List of edges from this node. |

Source code in src/locus/reasoning/causal.py
def get_edges_from(self, node_id: str) -> list[CausalEdge]:
    """Get all edges originating from a node.

    Args:
        node_id: The source node ID.

    Returns:
        List of edges from this node.
    """
    target_ids = self._adjacency.get(node_id, [])
    return [e for e in self._edges if e.source_id == node_id and e.target_id in target_ids]

get_edges_to

get_edges_to(node_id: str) -> list[CausalEdge]

Get all edges pointing to a node.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `node_id` | `str` | The target node ID. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `list[CausalEdge]` | List of edges to this node. |

Source code in src/locus/reasoning/causal.py
def get_edges_to(self, node_id: str) -> list[CausalEdge]:
    """Get all edges pointing to a node.

    Args:
        node_id: The target node ID.

    Returns:
        List of edges to this node.
    """
    source_ids = self._reverse_adjacency.get(node_id, [])
    return [e for e in self._edges if e.target_id == node_id and e.source_id in source_ids]

identify_root_causes

identify_root_causes() -> list[CausalNode]

Identify nodes that are root causes.

Root causes are nodes with outgoing causal edges but no incoming causal edges, or nodes explicitly marked as root_cause.

Returns:

| Type | Description |
| --- | --- |
| `list[CausalNode]` | List of root cause nodes. |

Source code in src/locus/reasoning/causal.py
def identify_root_causes(self) -> list[CausalNode]:
    """Identify nodes that are root causes.

    Root causes are nodes with outgoing causal edges but no incoming
    causal edges, or nodes explicitly marked as root_cause.

    Returns:
        List of root cause nodes.
    """
    root_causes: list[CausalNode] = []

    for node_id, node in self._nodes.items():
        # Check explicit marking
        if node.node_type == NodeType.ROOT_CAUSE:
            root_causes.append(node)
            continue

        # Check graph structure
        incoming = self._reverse_adjacency.get(node_id, [])
        outgoing = self._adjacency.get(node_id, [])

        # Has outgoing but no incoming = likely root cause
        if outgoing and not incoming:
            root_causes.append(node)

    return root_causes

identify_symptoms

identify_symptoms() -> list[CausalNode]

Identify nodes that are symptoms.

Symptoms are nodes with incoming causal edges but no outgoing causal edges, or nodes explicitly marked as symptom.

Returns:

| Type | Description |
| --- | --- |
| `list[CausalNode]` | List of symptom nodes. |

Source code in src/locus/reasoning/causal.py
def identify_symptoms(self) -> list[CausalNode]:
    """Identify nodes that are symptoms.

    Symptoms are nodes with incoming causal edges but no outgoing
    causal edges, or nodes explicitly marked as symptom.

    Returns:
        List of symptom nodes.
    """
    symptoms: list[CausalNode] = []

    for node_id, node in self._nodes.items():
        # Check explicit marking
        if node.node_type == NodeType.SYMPTOM:
            symptoms.append(node)
            continue

        # Check graph structure
        incoming = self._reverse_adjacency.get(node_id, [])
        outgoing = self._adjacency.get(node_id, [])

        # Has incoming but no outgoing = likely symptom
        if incoming and not outgoing:
            symptoms.append(node)

    return symptoms

get_causal_path

get_causal_path(source_id: str, target_id: str) -> list[CausalNode] | None

Find a causal path between two nodes.

Uses BFS to find the shortest path through causal edges.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source_id` | `str` | Starting node ID. | *required* |
| `target_id` | `str` | Ending node ID. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `list[CausalNode] \| None` | List of nodes in the path, or None if no path exists. |

Source code in src/locus/reasoning/causal.py
def get_causal_path(
    self,
    source_id: str,
    target_id: str,
) -> list[CausalNode] | None:
    """Find a causal path between two nodes.

    Uses BFS to find the shortest path through causal edges.

    Args:
        source_id: Starting node ID.
        target_id: Ending node ID.

    Returns:
        List of nodes in the path, or None if no path exists.
    """
    if source_id not in self._nodes or target_id not in self._nodes:
        return None

    if source_id == target_id:
        return [self._nodes[source_id]]

    # BFS
    visited: set[str] = set()
    queue: list[list[str]] = [[source_id]]

    while queue:
        path = queue.pop(0)
        current = path[-1]

        if current in visited:
            continue
        visited.add(current)

        for neighbor in self._adjacency.get(current, []):
            new_path = [*path, neighbor]
            if neighbor == target_id:
                return [self._nodes[n] for n in new_path]
            queue.append(new_path)

    return None
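
Continuing the sketch graph built under `link` above, the structural queries read naturally:

```python
# Nodes with outgoing-but-no-incoming causal edges (or marked ROOT_CAUSE).
roots = chain.identify_root_causes()
print([n.label for n in roots])  # ["Unclosed DB connections"]

# Shortest causal route from root to symptom, found via BFS.
path = chain.get_causal_path(leak.id, errors.id)
if path is not None:
    print(" -> ".join(n.label for n in path))
```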

detect_conflicts

detect_conflicts() -> list[CausalConflict]

Detect conflicts in the causal graph.

Checks for:

- Cycles (A causes B causes A)
- Bidirectional causation (A causes B and B causes A)
- Contradictory relationships (A causes B and A inhibits B)

Returns:

| Type | Description |
| --- | --- |
| `list[CausalConflict]` | List of detected conflicts. |

Source code in src/locus/reasoning/causal.py
def detect_conflicts(self) -> list[CausalConflict]:
    """Detect conflicts in the causal graph.

    Checks for:
    - Cycles (A causes B causes A)
    - Bidirectional causation (A causes B and B causes A)
    - Contradictory relationships (A causes B and A inhibits B)

    Returns:
        List of detected conflicts.
    """
    conflicts: list[CausalConflict] = []

    # Check for cycles
    cycle_conflicts = self._detect_cycles()
    conflicts.extend(cycle_conflicts)

    # Check for bidirectional causation
    bidirectional_conflicts = self._detect_bidirectional()
    conflicts.extend(bidirectional_conflicts)

    # Check for contradictory relationships
    contradictory_conflicts = self._detect_contradictory()
    conflicts.extend(contradictory_conflicts)

    return conflicts
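
A minimal sketch of how a feedback loop surfaces as a conflict (labels invented):

```python
from locus.reasoning.causal import CausalChain

cycle = CausalChain()
latency = cycle.create_node("High request latency")
retries = cycle.create_node("Client retry storms")

cycle.link(latency.id, retries.id)  # latency triggers retries
cycle.link(retries.id, latency.id)  # retries add load, raising latency

# Reported as a cycle and as bidirectional causation.
print(len(cycle.detect_conflicts()) > 0)  # True
```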

classify_nodes

classify_nodes() -> dict[str, NodeType]

Automatically classify all nodes based on graph structure.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, NodeType]` | Dictionary mapping node IDs to their inferred types. |

Source code in src/locus/reasoning/causal.py
def classify_nodes(self) -> dict[str, NodeType]:
    """Automatically classify all nodes based on graph structure.

    Returns:
        Dictionary mapping node IDs to their inferred types.
    """
    classifications: dict[str, NodeType] = {}

    for node_id in self._nodes:
        incoming = self._reverse_adjacency.get(node_id, [])
        outgoing = self._adjacency.get(node_id, [])

        # Preserve explicit classifications
        if self._nodes[node_id].node_type != NodeType.UNKNOWN:
            classifications[node_id] = self._nodes[node_id].node_type
        elif outgoing and not incoming:
            classifications[node_id] = NodeType.ROOT_CAUSE
        elif incoming and not outgoing:
            classifications[node_id] = NodeType.SYMPTOM
        elif incoming and outgoing:
            classifications[node_id] = NodeType.INTERMEDIATE
        else:
            classifications[node_id] = NodeType.UNKNOWN

    return classifications

update_node_types

update_node_types() -> None

Update node types based on graph structure (in place).

Source code in src/locus/reasoning/causal.py
def update_node_types(self) -> None:
    """Update node types based on graph structure (in place)."""
    classifications = self.classify_nodes()

    for node_id, node_type in classifications.items():
        if self._nodes[node_id].node_type == NodeType.UNKNOWN:
            self._nodes[node_id] = self._nodes[node_id].with_type(node_type)

get_chain_summary

get_chain_summary() -> dict[str, Any]

Get a summary of the causal chain.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary with chain statistics and structure. |

Source code in src/locus/reasoning/causal.py
def get_chain_summary(self) -> dict[str, Any]:
    """Get a summary of the causal chain.

    Returns:
        Dictionary with chain statistics and structure.
    """
    classifications = self.classify_nodes()

    return {
        "total_nodes": len(self._nodes),
        "total_edges": len(self._edges),
        "root_causes": [
            self._nodes[n].label for n, t in classifications.items() if t == NodeType.ROOT_CAUSE
        ],
        "symptoms": [
            self._nodes[n].label for n, t in classifications.items() if t == NodeType.SYMPTOM
        ],
        "intermediates": [
            self._nodes[n].label
            for n, t in classifications.items()
            if t == NodeType.INTERMEDIATE
        ],
        "conflicts": len(self.detect_conflicts()),
        "avg_confidence": (
            sum(n.confidence for n in self._nodes.values()) / len(self._nodes)
            if self._nodes
            else 0.0
        ),
    }

to_dict

to_dict() -> dict[str, Any]

Serialize the causal chain to a dictionary.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary representation of the chain. |

Source code in src/locus/reasoning/causal.py
def to_dict(self) -> dict[str, Any]:
    """Serialize the causal chain to a dictionary.

    Returns:
        Dictionary representation of the chain.
    """
    return {
        "nodes": [n.model_dump() for n in self._nodes.values()],
        "edges": [e.model_dump() for e in self._edges],
    }

from_dict classmethod

from_dict(data: dict[str, Any]) -> CausalChain

Deserialize a causal chain from a dictionary.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `dict[str, Any]` | Dictionary with nodes and edges. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `CausalChain` | CausalChain instance. |

Source code in src/locus/reasoning/causal.py
@classmethod
def from_dict(cls, data: dict[str, Any]) -> CausalChain:
    """Deserialize a causal chain from a dictionary.

    Args:
        data: Dictionary with nodes and edges.

    Returns:
        CausalChain instance.
    """
    chain = cls()

    for node_data in data.get("nodes", []):
        node = CausalNode.model_validate(node_data)
        chain.add_node(node)

    for edge_data in data.get("edges", []):
        edge = CausalEdge.model_validate(edge_data)
        chain.add_edge(edge)

    return chain
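
Round-tripping through `to_dict`/`from_dict` preserves the graph, which makes persisting a chain between agent sessions straightforward; a sketch reusing the `chain` built earlier:

```python
data = chain.to_dict()                 # {"nodes": [...], "edges": [...]}
restored = CausalChain.from_dict(data)

assert restored.get_chain_summary()["total_nodes"] == len(chain.nodes)
```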

GSAR (typed grounding)

GSARThresholds

Bases: BaseModel

Decision thresholds τ_regenerate < τ_proceed (§5.1 + Eq. 3).

GSARResult

Bases: BaseModel

Final result of running the Algorithm-1 outer loop.