The Foundation

What We Are

The Grove Foundation exists to protect, preserve, and enhance human flourishing in the age of artificial intelligence.

We publish open architectural standards that ensure individuals and institutions retain sovereignty over their cognitive tools. Our core thesis: the architecture of AI systems determines whether those systems expand or collapse human possibility. Centralized architectures accumulate cognitive substrate at the platform. Distributed architectures keep cognitive substrate at the operator’s node. Both polarities are legitimate engineering choices. We publish the standards for the distributed path — the architectural posture durable institutions need.

The Grove Foundation does not choose the future. We protect the conditions under which futures can be chosen — freely, by the people doing the thinking. Not the people with the biggest checkbooks.

We study and promote the world-changing patterns that tip the balance toward human flourishing.

We are a standards body, not a software company. We publish the patterns, score the implementations, and fund the research. Our standards are released under CC BY 4.0 because the thesis requires it. Distributed cognition that depends on a single vendor’s permission is not distributed.

The Lineage

What a Standards Body Does.

Standards bodies do not build products. They publish the shared frameworks markets need to price categories that no single participant can price alone. IEEE published the protocols that made networked computing possible. ICANN published the namespace governance that made the internet addressable. W3C published the open standards that made the web a substrate rather than a vendor product.

We sit in this institutional lineage. The Grove Foundation publishes open architectural standards for AI under CC BY 4.0. We diagnose what we call the Telemetry Trap — the structural condition where default AI consumption patterns extract operator judgment back to the model layer — and name its component mechanisms: cognitive platforming, judgment extraction, the lien on thinking. We operate the Λ measurement framework for scoring AI deployment patterns. We do not compete with AI providers. We publish the architecture so capital, institutions, and engineers can build against it.

We are organized as a not-for-profit business league under Section 501(c)(6) of the Internal Revenue Code, headquartered in Indianapolis. We operate on the Linux Foundation model — open standards, member firms, and a mission-aligned research function.

The Roadmap

Three Acts

The same architectural pattern, applied at three scales. Each act builds on the one before it.

Act I
The Autonomaton
Individual cognitive sovereignty. A governance architecture for AI workflows that ensures humans retain structural control over their tools. Five-stage pipeline, zone model, skill flywheel.
Act II
The Trellis
Domain-scale knowledge architecture. Declarative exploration infrastructure where domain experts compose cognitive workflows through configuration, not code.
In development
Act III
The Knowledge Commons
Distributed cognitive economy. A network protocol where sovereign nodes exchange knowledge, expertise compounds, and no single entity controls the questions or the answers.
On the horizon
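The Act II idea, composing cognitive workflows "through configuration, not code", can be sketched minimally. Everything below is a hypothetical illustration: the step names, the registry, and the workflow-as-list schema are invented for this sketch; The Trellis is still in development and publishes no such schema.

```python
# Hypothetical sketch: a workflow declared as configuration (an ordered
# list of step names) rather than code. The step registry and context
# dict are illustrative, not part of any published Grove standard.

STEPS = {
    "fetch": lambda ctx: {**ctx, "doc": f"contents of {ctx['source']}"},
    "summarize": lambda ctx: {**ctx, "summary": ctx["doc"][:20]},
}

def run(workflow: list[str], ctx: dict) -> dict:
    """Execute a declaratively configured workflow, step by step.

    A domain expert edits the `workflow` list; no cognitive code changes.
    Unknown steps fail honestly rather than being silently skipped.
    """
    for name in workflow:
        if name not in STEPS:
            raise KeyError(f"unknown workflow step {name!r}")
        ctx = STEPS[name](ctx)
    return ctx
```

The design point is that the unit a user touches is data (the list), while the executable steps stay behind a fixed, auditable registry.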
The Bigger Picture

Models are seeds.
Architecture is soil.

The current AI buildout is mainframe thinking. $650 billion. Four companies. One year. Whoever controls the frontier controls AI.

What if that assumption is wrong — and the money gets spent anyway?

Epoch AI research shows that frontier AI capabilities reach consumer hardware on a 6–12 month lag. The UK AI Safety Institute puts the gap at 4–8 months depending on the benchmark. METR data shows the capability doubling time has compressed to 4.3 months. What requires a $200/month API subscription today runs on hardware you already own within a year.

Two billion personal computers exist worldwide. Hundreds of millions can run local AI today. This distributed compute already dwarfs any planned data center buildout — deployed, powered, and owned by the people who benefit from what it produces.

The internet didn’t beat mainframes by being more powerful. It beat them by being architecturally uncontrollable.

The Grove Foundation’s Autonomaton Pattern is the internet play for AI.
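The capability lag above is what makes "downward migration" routable. A minimal sketch of the idea, assuming a hypothetical capability table and quality bar — none of these names, scores, or thresholds come from a published Grove specification:

```python
# Hypothetical sketch of a Cognitive Router that migrates tasks downward
# from a frontier API to the operator's local hardware as local capability
# catches up. Task classes and benchmark scores are illustrative only.

LOCAL_CAPABILITY = {
    # benchmark score the local model currently achieves per task class
    "summarize": 0.92,
    "extract": 0.88,
    "long_horizon_plan": 0.55,
}

def route(task_class: str, required_score: float) -> str:
    """Prefer the operator's node; escalate to the frontier API only when
    the local model cannot meet the declared quality bar for the task."""
    local_score = LOCAL_CAPABILITY.get(task_class, 0.0)
    return "local" if local_score >= required_score else "frontier_api"
```

As the lag closes, updating the capability table moves traffic down to the node with no change to the routing logic itself.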

The Cultural Pattern

What Bauhaus Knew.

Bauhaus did not reject industrial production. It embraced machine-made goods and reframed the question: what should industrial production produce, who should it serve, what principles should constrain it? It published the curriculum. It named the constraint — form follows function — and the entire design world had to react.

We follow the same playbook in the AI era.

We do not reject vendor AI. We embrace apex compute as critical infrastructure and reframe the architectural question: where does the substrate accumulate? Vendor implementations of the Autonomatonic loop accumulate routing tables, validated patterns, and decision context inside the vendor’s infrastructure. The same loop, run through our standards, accumulates the substrate at the operator’s node. Both polarities are legitimate engineering choices. We publish the standards for the operator-substrate polarity — and name the conditions that make the choice visible: cognitive platforming, judgment extraction, the lien on thinking. Read more in our white paper on the Telemetry Trap. Once the architectural question is named and measured, the industry has to engage with it.

Naming a condition is what makes it measurable. Measurement is what creates the market reaction.

The pattern is older than Bauhaus. The same institutional move recurs across categories: zero trust (Kindervag, 2010), technical debt (Cunningham, 1992), shadow IT (Gartner, mid-2000s). Each began as a name for a structural condition that markets could not yet price. The naming made the condition measurable. The measurement made the architecture investable. We sit in this lineage.

The Test

Seven questions.

Behavior governance in declarative config?
Cognitive Router enables downward migration?
Every action explicitly zone-classified?
New surface without writing cognitive code?
System learns from telemetry and proposes?
System fails honestly with diagnostic context?
Auditor can reconstruct any decision from telemetry?

All yes → Your deployment conforms to GRV-001.
Any no → You know exactly what to fix.
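Two of the questions above — behavior governance in declarative config, and explicit zone classification of every action — can be sketched together. This is a minimal illustration under invented assumptions: the zone names, policy schema, and field names are hypothetical, not the GRV-001 schema.

```python
# Hypothetical sketch: every action carries an explicit zone, and the
# behavior rules live in declarative config rather than in code.
# Zone names and the policy schema are illustrative only.

POLICY = {
    "zones": {
        "green": {"requires_approval": False},
        "amber": {"requires_approval": True},
        "red":   {"requires_approval": True, "blocked": True},
    }
}

def classify(action: dict) -> str:
    """Reject unclassified actions with diagnostic context (honest failure)."""
    zone = action.get("zone")
    if zone not in POLICY["zones"]:
        raise ValueError(
            f"unclassified action {action.get('name')!r}: "
            f"zone={zone!r} not declared in policy"
        )
    return zone

def authorize(action: dict) -> bool:
    """Apply the declarative rules for the action's zone."""
    rules = POLICY["zones"][classify(action)]
    if rules.get("blocked"):
        return False
    return not rules["requires_approval"] or action.get("approved", False)
```

Changing what the system may do means editing POLICY, not the code — which is what makes the behavior auditable from config plus telemetry.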

Reference Implementation

Autonomaton Primitive · GitHub