What it is
OpenHands v1, re-architected from v0 with a cleaner controller, a pluggable memory condenser, and better tool dispatch. It is the cleanest case in the corpus for event-sourced agent state.
What’s worth studying
Three things make OpenHands v1 worth your time:
- Event-sourced architecture. Every action and observation is an immutable event in an append-only log. The “context for the next LLM call” is a derived view, not the source of truth. Replay debugging is free; audit logging is free; microagent triggers attach as event subscribers.
- Pluggable memory condenser. Condensation strategies (LLM summarization, no-op, etc.) implement a common interface, so you can swap them per workload. The summarizer prompt is tuned to preserve task IDs, file paths, and exact error messages — the preservation list is the design.
- Mock function-calling for non-native models. A `fn_call_converter` injects tool-call examples into the prompt for models without native tool calling and post-processes their output into structured calls. Every model gets the same structured tool API, which is powerful for cross-provider portability.
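The first two patterns can be sketched together: an append-only log whose "context" is a derived view computed by a swappable condenser, with subscribers (e.g. microagent triggers) attached to the append path. All names here (`EventLog`, `Condenser`, `NoOpCondenser`) are hypothetical illustrations, not the actual OpenHands API.

```python
from dataclasses import dataclass
from typing import Callable, Protocol

@dataclass(frozen=True)
class Event:
    """Immutable action or observation appended to the log."""
    kind: str      # e.g. "action" or "observation"
    content: str

class Condenser(Protocol):
    """Pluggable strategy that derives the LLM context from the log."""
    def condense(self, events: list[Event]) -> list[Event]: ...

class NoOpCondenser:
    """Simplest strategy: pass the full history through unchanged."""
    def condense(self, events: list[Event]) -> list[Event]:
        return events

class EventLog:
    """Append-only log; views and subscribers are derived, never mutate it."""
    def __init__(self) -> None:
        self._events: list[Event] = []
        self._subscribers: list[Callable[[Event], None]] = []

    def subscribe(self, fn: Callable[[Event], None]) -> None:
        # e.g. a microagent trigger watching for keyword events
        self._subscribers.append(fn)

    def append(self, event: Event) -> None:
        self._events.append(event)
        for fn in self._subscribers:
            fn(event)

    def view(self, condenser: Condenser) -> list[Event]:
        # The "context for the next LLM call" is computed, not stored.
        return condenser.condense(list(self._events))

log = EventLog()
seen: list[str] = []
log.subscribe(lambda e: seen.append(e.kind))
log.append(Event("action", "run ls"))
log.append(Event("observation", "file.txt"))
context = log.view(NoOpCondenser())
```

Replay debugging falls out of this shape: re-run `view` over any prefix of the log with any condenser and you reconstruct exactly what the LLM saw at that step.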
A fourth pattern, temperature perturbation recovery for empty responses, lives in the LLM client wrapper: when a model returns an empty completion, the wrapper retries with a slightly perturbed temperature. A simple, cheap fix for a specific failure mode.
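A minimal sketch of that recovery loop, assuming a provider client with a `(prompt, temperature)` call signature; `call_llm`, the retry count, and the perturbation schedule are all illustrative, not the actual wrapper.

```python
def completion_with_retry(call_llm, prompt: str,
                          temperature: float = 0.0,
                          max_retries: int = 3) -> str:
    """Retry empty completions with a slightly perturbed temperature.

    `call_llm` is a hypothetical stand-in for the provider client.
    """
    temp = temperature
    for attempt in range(max_retries):
        text = call_llm(prompt, temperature=temp)
        if text.strip():
            return text
        # Nudge temperature upward to escape the degenerate empty-output mode.
        temp = min(1.0, temperature + 0.1 * (attempt + 1))
    raise RuntimeError("empty response after retries")

# Demo with a fake client that fails once, then succeeds.
calls: list[float] = []
def fake_llm(prompt: str, temperature: float) -> str:
    calls.append(temperature)
    return "" if len(calls) < 2 else "ok"

result = completion_with_retry(fake_llm, "hello")
```

The appeal is that the fix is entirely local to the client wrapper: callers never see the empty response or the retry.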
Drill-down
The full per-doc analyses live below — the original numbered analyses, rendered as styled HTML. Pick a section to study in more depth.