I build local-first AI infrastructure – systems that stay on your hardware, stay inspectable, and don't depend on cloud APIs at runtime.
Most of my time right now goes into multi-agent coordination. I run a private OpenClaw setup where multiple LLM agents collaborate through a shared workspace, and I'm building the tooling around that:
- Sovereign Memory – Local memory system for agent swarms. Hybrid FTS5 + FAISS retrieval, cross-encoder re-ranking, markdown-aware chunking, write-back memory, context window budgeting. The agents built V1 and V2 themselves; V3.1 is the cleaned-up version.
- LobsterTauri (in progress) – Dynamic group chat for controlled agent swarms. React/Vite + Tauri. Agents coordinate as sub-agents, speed-running tasks together in a managed conversation.
- Syntra – Swift AI morality framework. Vapor server, multi-provider LLM support, tool registry, and context management.
- llmHub – Native macOS/iOS AI client with Brain/Hand/Loop architecture and MCP support.
- Binary Forge – Hand-forged x86-64 Linux binaries. No compiler, no libc – raw ELF, direct syscalls, machine code.
- Syntra-Testing – Python evaluation framework for testing LLM agent reasoning, tool use, and conversation quality.
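The hybrid retrieval idea behind Sovereign Memory can be sketched in a few lines: a keyword leg (SQLite FTS5, as used in the project) and a dense leg (here a brute-force cosine search standing in for FAISS), fused into one ranking before any re-ranking step. This is a minimal illustration, not Sovereign Memory's actual code; the corpus, embeddings, and reciprocal-rank-fusion merge are illustrative assumptions.

```python
import sqlite3, math

# Toy corpus: id -> text, plus a hypothetical embedding per document
# (a real system would use a sentence encoder; these are hand-picked).
docs = {
    1: "agents share a local memory store",
    2: "faiss indexes dense vectors for similarity search",
    3: "sqlite fts5 gives fast keyword search",
}
embs = {1: [0.9, 0.1, 0.0], 2: [0.1, 0.9, 0.2], 3: [0.2, 0.8, 0.1]}

# Keyword leg: SQLite FTS5 full-text index (stdlib sqlite3).
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE mem USING fts5(body)")
for i in sorted(docs):
    con.execute("INSERT INTO mem(rowid, body) VALUES (?, ?)", (i, docs[i]))

def fts_hits(query):
    rows = con.execute(
        "SELECT rowid FROM mem WHERE mem MATCH ? ORDER BY rank", (query,)
    ).fetchall()
    return [r[0] for r in rows]

# Dense leg: brute-force cosine similarity standing in for a FAISS index.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def dense_hits(qvec):
    return sorted(embs, key=lambda i: cosine(qvec, embs[i]), reverse=True)

# Fusion: reciprocal rank fusion merges the two ranked lists; a
# cross-encoder would then re-score only the fused top-k candidates.
def rrf(*rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf(fts_hits("keyword search"), dense_hits([0.1, 0.9, 0.1]))
```

The point of the two-leg design is that FTS5 catches exact identifiers and rare terms that embeddings blur, while the vector leg catches paraphrases FTS5 misses; fusing ranks rather than raw scores avoids having to calibrate BM25 against cosine similarity.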
I favor explicit state, local-first design, and solutions that remain under your control. The goal is infrastructure that's useful regardless of which foundation model is underneath.
Del Haven, NJ · Hugging Face


