MERPIAN Limited (Modular Ecosystems for Research into Personal Intelligent Assistive Nodes), founded in 2019, is a research and development company building sovereign, offline-first AI systems. Our work bridges deep research in memory, indexing, and reasoning engines with practical tools for individuals, devices, and federated networks. We believe AI should serve people first — sovereign, verifiable, and under your control.
Our work explores knowledge engines, adaptive indexing, and resilient distributed architectures that are:
- Offline-first, mobile-first — designed to work everywhere.
- Federated & sovereign — privacy, control, and resilience by default.
- Deduplicated & adaptive — atomising data, removing duplication, and enabling intelligent querying at scale.
LMouthful™ is the core language engine inside Hǫllr™/Hqllr™ — a tiny, cache-friendly model designed to run entirely on local hardware, from mobile phones, tablets, and desktops up to full Hǫllr™ nodes.
A small mouthful of model. A large appetite for your data.
Why it exists
LMouthful™ is built to answer a simple question:
“How far can we get with a small, structured, local model over your data — without GPUs, giant checkpoints, or cloud APIs?”
Instead of a single giant black-box network, LMouthful™ sits on top of a layered memory system:
- GLTM – Global Long-Term Memory: a deduplicated, domain-aware data store — the “truth layer” ingested from books, documents, notes, logs, etc.
- Personas: configurable “ways of thinking” and explaining (e.g. student, junior assistant, expert, professor), controlling tone, depth, vocabulary, and which parts of GLTM are visible.
- PSTM – Persona Short-Term Memory: per-persona working memory (recent queries, answers, retrieved snippets, and feedback) used to keep responses coherent and context-aware.
LMouthful™ then uses a compact language model (and planned tiny vector space) to generate short, grounded responses over that memory.
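The layered memory design above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: all class and field names here (`Atom`, `GLTM`, `Persona`, `PSTM`, `visible_domains`) are hypothetical, chosen only to mirror the three layers described.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Atom:
    """One deduplicated unit of knowledge in GLTM (hypothetical shape)."""
    atom_id: str
    domain: str
    text: str

@dataclass
class Persona:
    """A configurable 'way of thinking': tone, depth, and GLTM visibility."""
    name: str
    depth: str                                  # e.g. "simple" or "deep"
    visible_domains: set = field(default_factory=set)

@dataclass
class PSTM:
    """Per-persona working memory: recent queries, answers, and snippets."""
    persona: Persona
    recent: list = field(default_factory=list)

    def remember(self, query, answer, snippets):
        self.recent.append({"query": query, "answer": answer, "snippets": snippets})

class GLTM:
    """Global long-term memory: a deduplicated, domain-aware store."""
    def __init__(self):
        self._atoms = {}

    def ingest(self, atom):
        # Duplicate atom_ids collapse to a single stored entry.
        self._atoms.setdefault(atom.atom_id, atom)

    def visible_to(self, persona):
        # A persona only sees the GLTM domains it is configured for.
        return [a for a in self._atoms.values() if a.domain in persona.visible_domains]
```

The point of the sketch is the separation of concerns: GLTM holds deduplicated truth, a Persona filters and styles it, and PSTM keeps each persona's conversation state independent.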
Key properties
- Small, local, sovereign: designed to fit in single-digit MiB for the core model and run fully offline — no external LLM calls, no hidden cloud dependency.
- Fast by design: built around compact structures and CPU-friendly algorithms; ingest and token generation are tuned from first principles for small devices.
- Grounded in your data: works over your own GLTM store (documents, books, Obsidian vaults, etc.), using IR/RAG-style retrieval plus a compact model to produce answers with traceable supporting snippets.
- Persona-aware: responses are shaped by the active persona’s capabilities and style (from “keep it simple” to “deep technical dive”), rather than one generic voice.
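The grounded, persona-aware retrieval described above can be illustrated with a toy RAG-style loop. This is a sketch under loose assumptions — a naive token-overlap ranker stands in for real IR, and the `persona` switch stands in for the actual persona system; none of these function names come from LMouthful™ itself.

```python
def retrieve(query, snippets, k=2):
    """Rank stored snippets by naive token overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(snippets,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, snippets, persona="student"):
    """Compose a short answer that always carries its supporting snippets."""
    support = retrieve(query, snippets)
    prefix = "In short:" if persona == "student" else "In detail:"
    text = f"{prefix} {support[0]}" if support else f"{prefix} (no match found)"
    # Returning sources alongside the answer is what makes it traceable.
    return {"answer": text, "sources": support}
```

The key property mirrored here is traceability: every answer is returned together with the retrieved snippets that support it, so a claim can always be checked against the underlying store.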
Intended use cases
- Local Q&A over personal or organisational knowledge bases
- Summarisation and explanation using only ingested data
- On-device assistants that must remain offline and under user control
- Embedded “micro-LM” behaviour in Hǫllr™ nodes and tools
Status
LMouthful™ is currently under active R&D and used internally as the core language engine within the Hǫllr™/Hqllr™ stack. Public demos, benchmarks and further technical details will be released as the project matures.
A hall of knowledge. A guardian of truth. Built for businesses, communities and individuals.
Hqllr™ is more than software — it’s a knowledge engine and AI ecosystem that puts people before platforms.
We believe knowledge should be:
- Local-first — your data stays with you, always in your control.
- Verifiable — every answer is traceable, every claim accountable.
- Adaptive — responses shaped to your persona, context, and needs.
By combining sovereign design with human values, Hqllr becomes a trustworthy companion in the age of AI.
- HqllrMobile — Local AI In Your Pocket: ingest, index, and query knowledge directly on your phone or tablet. Always offline, private, and verifiable.
- Hqllr — Document Indexer: atomises documents, deduplicates content, and prepares data for instant, provenance-rich search.
- HqllrAI — AI ReInvented: an adaptive query engine built on the Hqllr Indexer, delivering fact-first, persona-aware, provenance-rich answers.
- HqllrOS — RAM-Based Operating System: a lightweight, RAM-orchestrated runtime for Hqllr clusters, making replication, deployment, and audit-ready federation simple.
- HqllrEnterprise — Sovereign AI for Business: from personal knowledge engines to enterprise-scale verifiable clusters, designed for compliance, scale, and security.
- HqllrBenchmarks — Performance Metrics: transparent performance data, throughput milestones, and scaling benchmarks.
📄 Coming soon — the Hqllr Vision Paper
Exploring today’s AI challenges and how Hqllr delivers a sovereign-first, verifiable solution for individuals, small businesses, and organisations.
At MERPIAN, we combine research-driven design with practical engineering to create systems that empower individuals, enterprises, and communities to control their own AI and data futures.
This repository is part of the MERPIAN showcase. Some materials are published here for reference and transparency, but the core systems remain proprietary to MERPIAN Limited.
- 🔒 Closed source by default
- 📄 Selective whitepapers, benchmarks, and demos may be published
- 💼 Full access is available only under commercial or research agreements
🌐 merpian.ltd (placeholder - not live yet)