I built my first computer at seven. My first website at eight. I've been writing software for most of my life, and I've learned to be sceptical of the impulse to build everything yourself. Most of the time, you should use what already exists.
We benchmarked every major graph database before deciding to build our own: Neo4j, Amazon Neptune, TigerGraph, and Memgraph. We ran them all against the same workload: multi-hop traversal across a 10-million-node graph representing a realistic enterprise dataset. The results were not ambiguous.
What the benchmarks showed
At 3-hop depth, the commercial databases performed acceptably. Latency in the 50 to 200ms range. Fine for a human querying a dashboard. Fatal for an AI agent running 20 reasoning loops per second.
At 10-hop depth, several of them started timing out. At 20 hops, two of the databases simply stopped returning results. They either ran out of memory or deadlocked under the traversal pattern our agent workload generated.
The problem isn't the query planner. It's the underlying data model. Standard graph databases model edges as pairwise connections between two nodes. When your enterprise data has join tables connecting 6 or 8 entities simultaneously, those databases are forced to decompose each relationship into a fan of binary edges. The traversal graph explodes: the number of candidate paths grows combinatorially with depth, and no index fixes a combinatorial blowup.
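To make the blowup concrete, here is a back-of-the-envelope sketch in Rust. The numbers and branching factor are illustrative assumptions, not measurements from our benchmark:

```rust
// Sketch: why decomposing n-ary relationships into pairwise edges
// explodes the traversal graph. Numbers are illustrative, not benchmarks.

/// Binary edges needed to fully connect one n-ary relationship
/// when a graph database decomposes it into pairwise links.
fn pairwise_edges(arity: usize) -> usize {
    arity * (arity - 1) / 2 // n choose 2
}

fn main() {
    // One join-table row linking 6 entities becomes 15 binary edges...
    assert_eq!(pairwise_edges(6), 15);
    // ...and the fan-out compounds per hop: with b relationships per node,
    // a depth-d traversal explores on the order of (b * 15)^d binary edges
    // versus b^d hyperedges.
    let b = 4usize; // hypothetical branching factor
    let d = 5u32;   // hypothetical depth
    println!("binary-edge frontier ~{}", (b * 15).pow(d));
    println!("hyperedge frontier   ~{}", b.pow(d));
}
```

The gap between those two frontiers is the whole story: it widens exponentially with depth, which is why no amount of indexing closes it.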
The hypergraph insight
The key insight was that enterprise data is already a hypergraph. SQL join tables are hyperedges. A table with 6 foreign keys is a relationship connecting 6 entities simultaneously. When you model that natively, as one hyperedge instead of 15 pairwise edges, the traversal graph stays tractable at depth.
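A minimal sketch of that modeling, in Rust. The type names and the `project_assignment` example are hypothetical, not the engine's actual API:

```rust
// Sketch: a SQL join-table row with 6 foreign keys becomes ONE hyperedge
// over 6 entities, not 15 pairwise edges. Names are illustrative only.

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct EntityId(u64);

/// A hyperedge: one relationship connecting any number of entities at once.
struct HyperEdge {
    label: &'static str,
    members: Vec<EntityId>,
}

fn main() {
    // One row of a 6-foreign-key join table -> one hyperedge.
    let row = HyperEdge {
        label: "project_assignment",
        members: vec![
            EntityId(1), // employee
            EntityId(2), // project
            EntityId(3), // role
            EntityId(4), // department
            EntityId(5), // manager
            EntityId(6), // cost_center
        ],
    };
    // A traversal reaches every co-member in a single hop.
    assert_eq!(row.members.len(), 6);
    println!("{} connects {} entities in one hop", row.label, row.members.len());
}
```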
And we went one step further. In our model, relationships can connect to other relationships. Tom works at Acme. Tom was hired by Jane. That hiring event is itself a node. It connects Tom, Jane, Acme, the role, the department, and the date. You can traverse across the hiring event from any direction. This is what we mean by a metagraph. It's not a marketing term. It's the actual data structure, and it makes certain enterprise queries answerable that standard graph models can't express directly.
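The hiring-event example above can be sketched as a data structure where a member of a relationship is allowed to be another relationship. Everything here, ids, labels, helper names, is hypothetical, a sketch of the shape rather than the engine's implementation:

```rust
// Metagraph sketch: relationship members may themselves be relationships.
// The hiring event is a first-class node connecting Tom, Jane, Acme,
// a role, a department, and a date. All ids and names are illustrative.

use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum NodeRef {
    Entity(u64),
    Relation(u64), // this variant is what makes it a metagraph
}

struct Relation {
    label: &'static str,
    members: Vec<NodeRef>,
}

/// Build the toy graph: 1=Tom, 2=Jane, 3=Acme, 4=role, 5=dept, 6=date.
fn build_example() -> HashMap<u64, Relation> {
    let mut graph = HashMap::new();
    graph.insert(100, Relation {
        label: "hiring_event",
        members: vec![
            NodeRef::Entity(1), NodeRef::Entity(2), NodeRef::Entity(3),
            NodeRef::Entity(4), NodeRef::Entity(5), NodeRef::Entity(6),
        ],
    });
    // "Tom works at Acme" references the hiring event that created it.
    graph.insert(200, Relation {
        label: "works_at",
        members: vec![NodeRef::Entity(1), NodeRef::Entity(3), NodeRef::Relation(100)],
    });
    graph
}

/// Follow an edge's first relationship-valued member, if any.
fn linked_relation(graph: &HashMap<u64, Relation>, edge_id: u64) -> Option<u64> {
    graph.get(&edge_id)?.members.iter().find_map(|m| match m {
        NodeRef::Relation(id) => Some(*id),
        _ => None,
    })
}

fn main() {
    let graph = build_example();
    // Traverse across the hiring event starting from the employment edge.
    assert_eq!(linked_relation(&graph, 200), Some(100));
    println!("works_at reaches hiring_event {:?}", linked_relation(&graph, 200));
}
```

The `NodeRef::Relation` variant is the entire trick: once edges are addressable like nodes, a traversal can pivot through an event from any of its participants.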
Why Rust
The engine is written in Rust. Not because Rust is fashionable, but because we are doing RAM-resident graph traversal where any garbage collection pause breaks our latency guarantees. We needed a language that gives us full control over memory layout without sacrificing safety. Rust is the only serious option for this class of problem.
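What "full control over memory layout" buys you can be sketched with a flattened, CSR-style adjacency structure: two contiguous arrays, no per-node heap objects, nothing for a collector to scan. This is a generic illustration of the layout style, not our actual storage format:

```rust
// Sketch: flattened (CSR-style) adjacency, the kind of contiguous layout
// a GC-free language lets you commit to. Not the engine's actual format.

/// Adjacency packed into two flat arrays. Neighbors of node n live at
/// targets[offsets[n]..offsets[n+1]]: one bounds lookup, one slice,
/// no pointer chasing, no per-node allocations.
struct FlatAdjacency {
    offsets: Vec<u32>, // len = node_count + 1
    targets: Vec<u32>, // neighbor ids, contiguous per node
}

impl FlatAdjacency {
    fn neighbors(&self, node: u32) -> &[u32] {
        let lo = self.offsets[node as usize] as usize;
        let hi = self.offsets[node as usize + 1] as usize;
        &self.targets[lo..hi]
    }
}

fn main() {
    // 3 nodes: 0 -> {1, 2}, 1 -> {2}, 2 -> {}
    let g = FlatAdjacency {
        offsets: vec![0, 2, 3, 3],
        targets: vec![1, 2, 2],
    };
    assert_eq!(g.neighbors(0), &[1, 2][..]);
    assert!(g.neighbors(2).is_empty());
    println!("node 0 neighbors: {:?}", g.neighbors(0));
}
```

Because the hot path is a pair of array reads over memory you laid out yourself, there is no allocator or collector that can pause mid-traversal.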
The current benchmark: 20-hop traversal across a realistic enterprise graph in 0.00019 seconds. That's 190 microseconds. Agent loops don't collapse at this speed. The reasoning layer stops being the bottleneck.
What this means for you
If you're building AI agents on top of enterprise data, you're going to hit this wall. The first time you try to give an agent real structural context, not just keyword retrieval but actual relationship traversal across your company's entity graph, you'll discover that none of the existing databases were built for this workload.
We built the one that was. Early access is open. Point us at your Postgres schema and we'll show you what 190 microseconds feels like.
