Strix's BaseAgent class holds three class-level dicts, shared by every agent in the process:

```python
class BaseAgent:
    _agent_graph: dict[str, list[str]] = {}       # parent_id -> children
    _agent_instances: dict[str, "BaseAgent"] = {} # id -> instance
    _agent_messages: dict[str, list[Msg]] = {}    # to_id -> message queue
```
Any agent can post to any other agent’s mailbox by appending to a dict entry. Any agent can read its own mailbox by popping its key.
Why no locks
Python's GIL serializes individual dict and list operations. `dict.setdefault(key, []).append(msg)` is safe: `setdefault` atomically inserts the list (or returns the existing one), and `list.append` is itself atomic. `dict.pop(key, [])` removes and returns an entire queue in one step. So in a single Python process, these operations need no explicit locking.
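A minimal sketch of that argument, with hypothetical names (`mailboxes`, `post`, `drain`), showing 100 concurrent writers appending to one mailbox with no lock and no lost messages:

```python
import threading

mailboxes: dict[str, list[str]] = {}  # to_id -> queue of messages

def post(to_id: str, msg: str) -> None:
    # setdefault atomically inserts the list if the key is missing;
    # list.append is itself thread-safe under the GIL.
    mailboxes.setdefault(to_id, []).append(msg)

def drain(agent_id: str) -> list[str]:
    # pop removes and returns the whole queue in one atomic step.
    return mailboxes.pop(agent_id, [])

threads = [
    threading.Thread(target=post, args=("agent-1", f"msg-{i}"))
    for i in range(100)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

received = drain("agent-1")
print(len(received))  # 100: no messages lost despite concurrent writers
```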
This breaks down the moment you run two scans concurrently in one process: they share the same class-level state. Strix documents the limitation: one scan per process.
Why this is elegant
- Zero infrastructure.
- Zero serialization (objects, not bytes).
- Zero startup time.
- Debuggable in a Python REPL: inspect `BaseAgent._agent_messages` and you can see the queues.
The cost: it doesn’t generalize. You can’t scale this to a fleet. But for the use case (a single Strix scan, in-process, possibly multi-threaded), it’s perfect.
When you’d reach for it
Build a multi-agent system inside one Python process. Skip Redis, skip Celery, skip any broker. Use module-level dicts. Move to a real queue only when you have a real distributed need.
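The whole pattern fits in a few lines. This is an illustrative sketch in the same spirit as Strix's design, not its actual API; all names here (`Agent`, `send`, `inbox`) are made up:

```python
from __future__ import annotations

class Agent:
    # Class-level dicts: one registry and one mailbox table per process.
    _graph: dict[str, list[str]] = {}        # parent_id -> children ids
    _instances: dict[str, Agent] = {}        # id -> instance
    _messages: dict[str, list[object]] = {}  # to_id -> pending messages

    def __init__(self, agent_id: str, parent: Agent | None = None):
        self.id = agent_id
        Agent._instances[agent_id] = self
        if parent is not None:
            Agent._graph.setdefault(parent.id, []).append(agent_id)

    def send(self, to_id: str, msg: object) -> None:
        # Post to any agent's mailbox: objects, not bytes, no broker.
        Agent._messages.setdefault(to_id, []).append(msg)

    def inbox(self) -> list[object]:
        # Drain this agent's own mailbox in one atomic pop.
        return Agent._messages.pop(self.id, [])

root = Agent("root")
worker = Agent("worker", parent=root)
root.send("worker", {"task": "scan"})
print(worker.inbox())  # [{'task': 'scan'}]
```

No serialization, no startup cost, and the whole fleet is visible from a REPL.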
When it breaks
- Two scans in one process (state collision).
- Long-running process with leaky conversations (the dicts grow forever).
- Cross-language agents (Python and JS in the same fleet).
The first is solvable with explicit per-scan namespacing. The second wants a TTL or explicit cleanup. The third wants a real broker.
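The first two fixes can be sketched together: key every shared dict by a `(scan_id, agent_id)` tuple so concurrent scans never collide, and drop a scan's queues when it finishes. This is a hypothetical fix, not Strix's current code; all names are illustrative:

```python
messages: dict[tuple[str, str], list[str]] = {}  # (scan_id, to_id) -> queue

def post(scan_id: str, to_id: str, msg: str) -> None:
    messages.setdefault((scan_id, to_id), []).append(msg)

def drain(scan_id: str, agent_id: str) -> list[str]:
    return messages.pop((scan_id, agent_id), [])

def end_scan(scan_id: str) -> None:
    # Explicit cleanup also addresses unbounded growth: drop every
    # queue belonging to a finished scan.
    for key in [k for k in messages if k[0] == scan_id]:
        del messages[key]

post("scan-a", "agent-1", "hello")
post("scan-b", "agent-1", "world")
print(drain("scan-a", "agent-1"))  # ['hello']; scan-b's queue is untouched
```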
Sources
- strix/02_agent_and_llm.md:291 (unverified)