
Hermes Agent

Multi-platform agent with 40+ tools. Per-provider LLM adapters, trajectory compression that preserves first and last turns, and a side-channel auxiliary client.

Category
agent-framework
Language
Python
Runtime
Python 3.11+
Providers
Anthropic, OpenAI, Bedrock
Agent loop
while-loop
Tool format
openai-fn / anthropic-tool_use
Memory
llm-summarize
Caching
cache_control-explicit
Sandbox
process
Pedagogical
★★★★
Flags
multi-agent · thinking · streaming
Concepts
agent-loop · tool-calling-formats · memory-compression · prompt-caching · multi-agent-coordination · provider-abstraction · extended-thinking
Verified
cloned
Upstream
github.com/NousResearch/hermes-agent

What it is

A broad-surface agent (40+ tools, 26+ external platforms) with the most mature per-provider abstraction in the corpus. Read it for the patterns of a production multi-provider system.

What’s worth studying

Three things stand out:

  1. Per-provider adapter classes. Anthropic, Bedrock, and OpenAI each get their own adapter that converts the unified internal API to the provider’s wire format. The trade-off vs. LiteLLM: more code to maintain, but you opt into new provider features the day they ship instead of waiting for a wrapper to catch up.
  2. The auxiliary client. A separate LLM client (often a cheaper model on a different provider) handles side-channel work — summarization, dedupe, tool-arg validation. This decouples cost from capability: smart-and-expensive for the main loop, cheap-and-fast for the support tasks. See provider-abstraction.
  3. Trajectory compression that preserves first and last turns verbatim. The first turn is the agent’s charter; the last few are the freshest context. Everything in between gets summarized with a domain-specific preservation list.
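The adapter pattern in item 1 can be sketched roughly as follows. All class and field names here are illustrative, not Hermes Agent’s actual API; the point is that one unified message history fans out to per-provider wire formats (e.g. tool results are `role="tool"` entries in the OpenAI chat format but `tool_result` content blocks inside a user turn in the Anthropic format):

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str                    # "user" | "assistant" | "tool"
    content: str
    tool_name: str | None = None

class ProviderAdapter:
    def to_wire(self, messages: list[Message]) -> dict:
        raise NotImplementedError

class OpenAIAdapter(ProviderAdapter):
    def to_wire(self, messages):
        # OpenAI chat format: tool results are top-level role="tool" messages
        out = []
        for m in messages:
            if m.role == "tool":
                out.append({"role": "tool", "name": m.tool_name,
                            "content": m.content})
            else:
                out.append({"role": m.role, "content": m.content})
        return {"messages": out}

class AnthropicAdapter(ProviderAdapter):
    def to_wire(self, messages):
        # Anthropic format: tool results are content blocks in a user turn
        out = []
        for m in messages:
            if m.role == "tool":
                out.append({"role": "user",
                            "content": [{"type": "tool_result",
                                         "content": m.content}]})
            else:
                out.append({"role": m.role, "content": m.content})
        return {"messages": out}

history = [Message("user", "list files"),
           Message("assistant", "calling ls"),
           Message("tool", "a.txt b.txt", tool_name="ls")]
print(OpenAIAdapter().to_wire(history)["messages"][2]["role"])     # tool
print(AnthropicAdapter().to_wire(history)["messages"][2]["role"])  # user
```

The cost of this design is one adapter class per provider; the payoff is that a new provider-specific field is one line in one adapter, not a wrapper-library upgrade.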
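The main/auxiliary split in item 2 is a routing decision, sketched below with stubbed clients (names and the cost field are hypothetical, added for illustration). Reasoning turns go to the expensive model; side-channel tasks like summarization go to the cheap one:

```python
class LLMClient:
    """Stub client: records calls instead of hitting a real API."""
    def __init__(self, model: str, cost_per_1k_tokens: float):
        self.model = model
        self.cost_per_1k_tokens = cost_per_1k_tokens
        self.calls: list[str] = []

    def complete(self, prompt: str) -> str:
        self.calls.append(prompt)
        return f"[{self.model}] " + prompt[:20]   # canned response

class Agent:
    def __init__(self, main: LLMClient, aux: LLMClient):
        self.main, self.aux = main, aux

    def step(self, user_msg: str) -> str:
        # Agent-loop reasoning: route to the capable, expensive model
        return self.main.complete(user_msg)

    def summarize(self, text: str) -> str:
        # Side-channel work: route to the cheap auxiliary model
        return self.aux.complete("Summarize: " + text)

agent = Agent(LLMClient("big-model", 15.0), LLMClient("small-model", 0.25))
agent.step("plan the refactor")
agent.summarize("long transcript of earlier turns ...")
print(len(agent.main.calls), len(agent.aux.calls))  # 1 1
```

Because the two clients are independent, the auxiliary model can even live on a different provider — which is exactly what makes the per-provider adapters above pay off twice.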
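The compression shape in item 3 is simple to state in code. A minimal sketch, with the summarizer stubbed out (in Hermes it is an LLM call guided by a domain-specific preservation list; the function and parameter names here are illustrative):

```python
def compress(turns: list[str], keep_last: int = 3,
             summarize=lambda xs: [f"[summary of {len(xs)} turns]"]) -> list[str]:
    """Keep the first turn (the agent's charter) and the last few turns
    (the freshest context) verbatim; summarize everything in between."""
    if len(turns) <= 1 + keep_last:
        return turns                      # nothing in the middle to compress
    first, middle, last = turns[0], turns[1:-keep_last], turns[-keep_last:]
    return [first, *summarize(middle), *last]

turns = [f"turn-{i}" for i in range(10)]
compressed = compress(turns)
print(compressed[0], compressed[1], compressed[-1])
# turn-0 [summary of 6 turns] turn-9
```

The invariant worth noticing: the charter turn and the last `keep_last` turns pass through byte-for-byte, so no amount of summarization drift can corrupt the instructions or the immediate working context.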

Drill-down

The full per-doc analysis lives below — these are the original numbered analyses, rendered as styled HTML. Pick a section to study in more depth.