Proof Points

Systems that remember.

We don't just consult on Semantic Memory Systems. We build them. The methodology we teach is the methodology we use.

Start a Conversation →

Explore

The Recursive Proof

Three systems. All live. All built on canonical claims.

This site is built this way. The claims on this site aren't scattered across independent pages. They exist as canonical assertions in a structured knowledge base. The pages you see are derived from those claims. When we update a canonical claim, every page that references it can update.

  • TerpTune — AI cannabis concierge with product semantic memory
  • Book of Fire — A thesis about semantic memory, built with semantic memory
  • This Website — The page you're reading, generated from canonical claims

No prototypes. No demos. Production systems handling real queries.


TerpTune

An AI cannabis concierge that actually knows its products.

What It Is

TerpTune helps dispensary customers find the right products through conversational AI. Users describe what they want—relaxation, creativity, pain relief—and TerpTune recommends specific products with reasoning.

The Semantic Memory

TerpTune maintains canonical product knowledge: terpene profiles, effects, contraindications, inventory status. The AI doesn't search documents. It queries verified claims about each product.

How It Demonstrates

Canonical claims work. Each product has structured assertions: "Blue Dream contains myrcene as dominant terpene." "Myrcene correlates with relaxation effects." The AI reasons from these claims, not from scraped descriptions.
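A minimal sketch of how such an assertion might be represented, assuming a simple subject-predicate-object record. The field names, IDs, source strings, and confidence values here are illustrative, not TerpTune's actual schema:

// Hypothetical shape for a canonical product claim (illustrative only).
interface ProductClaim {
  id: string;         // stable identifier for citation
  subject: string;    // product or compound the claim is about
  predicate: string;  // the relationship being asserted
  object: string;     // the value of the assertion
  source: string;     // where the claim was verified
  confidence: number; // 0..1, how strongly the claim is established
}

// The two example assertions from the text, expressed as claims.
const claims: ProductClaim[] = [
  {
    id: "tt-bluedream-001",
    subject: "Blue Dream",
    predicate: "dominant_terpene",
    object: "myrcene",
    source: "lab certificate of analysis",
    confidence: 0.95,
  },
  {
    id: "tt-myrcene-001",
    subject: "myrcene",
    predicate: "correlates_with_effect",
    object: "relaxation",
    source: "published terpene research",
    confidence: 0.8,
  },
];

// The reasoning engine queries claims rather than searching documents.
function claimsAbout(subject: string): ProductClaim[] {
  return claims.filter((c) => c.subject === subject);
}

console.log(claimsAbout("Blue Dream")); // -> the dominant-terpene assertion

The point of a shape like this is that each assertion stays atomic, attributable to a source, and queryable by product, rather than buried in prose descriptions.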

Strategic forgetting works. TerpTune doesn't remember every customer conversation. It remembers product truth. Session context is ephemeral. Product knowledge is canonical.

Discrimination works. When TerpTune lacks verified claims about a product, it says so. No confident hallucination about effects that haven't been established.
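One way to sketch that discrimination step, using the same hypothetical claim shape as above; the confidence threshold and the fallback wording are assumptions for illustration:

// Same claim shape as the sketch above, repeated so this stands alone.
interface ProductClaim {
  subject: string;
  predicate: string;
  object: string;
  confidence: number; // 0..1
}

// Answer only from verified claims, and say so when none exist.
const CONFIDENCE_FLOOR = 0.7;

function describeEffects(product: string, claims: ProductClaim[]): string {
  const verified = claims.filter(
    (c) => c.subject === product && c.confidence >= CONFIDENCE_FLOOR
  );
  if (verified.length === 0) {
    // No confident hallucination: admit the gap instead.
    return `We don't have verified effect claims for ${product} yet.`;
  }
  return verified
    .map((c) => `${c.subject} ${c.predicate.replace(/_/g, " ")}: ${c.object}`)
    .join(". ");
}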

Technical Stack

┌─────────────────────────────────────────┐
│           Conversational Layer          │
│         (Natural language I/O)          │
└───────────────────┬─────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│           Reasoning Engine              │
│    (Queries claims, builds responses)   │
└───────────────────┬─────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│         Canonical Product Claims        │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐   │
│  │Terpenes │ │ Effects │ │Inventory│   │
│  └─────────┘ └─────────┘ └─────────┘   │
└─────────────────────────────────────────┘

Book of Fire

A thesis about semantic memory systems—built as a semantic memory system.

What It Is

Book of Fire is a 50,000-word exploration of how AI systems should handle organizational knowledge. The thesis argues for canonical claims, strategic forgetting, and derivation-based content.

The Recursive Proof

The thesis practices what it preaches. Every claim in Book of Fire exists in a canonical claims file. Every chapter derives from those claims. Update a claim, regenerate the chapter.

How It Demonstrates

Canonical claims work. The book contains 127 canonical assertions. Each has an ID, a source, and a confidence level. Chapters don't contain original claims—they arrange and explain canonical ones.

Derivation works. Chapter 7 on "Strategic Forgetting" pulls from 12 canonical claims. Rewrite the chapter without changing the claims? The argument stays consistent. Change a claim? Every chapter referencing it can update.
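A hedged sketch of that wiring: a claim record carrying an ID, source, and confidence level, and a chapter manifest listing the claim IDs it derives from. The shapes, field names, and the specific IDs assigned to Chapter 7 are illustrative; only the S3KAI prefix comes from the diagram below.

// Illustrative shapes only: a canonical thesis claim and a chapter manifest.
interface ThesisClaim {
  id: string;                            // e.g. "S3KAI-001"
  text: string;                          // the atomic assertion
  source: string;                        // where it was verified
  confidence: "high" | "medium" | "low"; // how strongly it is established
}

interface ChapterManifest {
  chapter: number;
  title: string;
  claimIds: string[]; // the canonical claims this chapter arranges and explains
}

// Chapter 7 pulls from 12 claims; which ones is illustrative here.
const chapter7: ChapterManifest = {
  chapter: 7,
  title: "Strategic Forgetting",
  claimIds: ["S3KAI-001", "S3KAI-003" /* ...12 IDs in total */],
};

// A chapter is stale, and should be regenerated, when any claim it
// references has changed since the chapter was last generated.
function isStale(
  manifest: ChapterManifest,
  lastGenerated: Date,
  claimUpdatedAt: Map<string, Date>
): boolean {
  return manifest.claimIds.some((id) => {
    const updated = claimUpdatedAt.get(id);
    return updated !== undefined && updated > lastGenerated;
  });
}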

Self-explanation works. The thesis explains semantic memory while demonstrating semantic memory. Readers see the methodology and experience its output simultaneously.

Architecture

┌─────────────────────────────────────────┐
│            Book of Fire                 │
│         (Generated Output)              │
│                                         │
│  Chapter 1    Chapter 2    Chapter 3    │
│     ↑            ↑            ↑         │
│     │            │            │         │
└─────┼────────────┼────────────┼─────────┘
      │            │            │
      └────────────┼────────────┘
                   │
                   ▼
┌─────────────────────────────────────────┐
│         Canonical Claims (127)          │
│  ┌──────────────────────────────────┐   │
│  │ S3KAI-001: "Semantic memory..."  │   │
│  │ S3KAI-002: "Claims are atomic.." │   │
│  │ S3KAI-003: "Verify upstream..."  │   │
│  │ ...                              │   │
│  └──────────────────────────────────┘   │
└─────────────────────────────────────────┘

This Website

The page you're reading is itself a proof point.

What It Is

SemanticMemorySystems.com isn't a traditional website with independently authored pages. Every page derives from canonical claims stored in structured files. This proof-points page pulls from the same claims file as the methodology page, the healthcare page, and every other page on the site.
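As a rough illustration of what "structured files" means here, entries in SMS-CLAIMS.json might be shaped like the ones shown in the architecture diagram further down this page. The exact fields aren't published, so this is an assumption:

// Illustrative only: how entries like those in the diagram below might be
// typed if loaded into the site generator.
interface SiteClaim {
  id: string;   // e.g. "sms-method-02"
  text: string; // the canonical assertion
}

const smsClaims: SiteClaim[] = [
  { id: "sms-method-01", text: "Verify upstream." },
  { id: "sms-method-02", text: "Documents are outputs, not sources." },
  { id: "sms-recursive-01", text: "This site derives from canonical claims." }, // wording illustrative
];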

See exactly how it works →

The Recursive Proof

We can't credibly teach semantic memory methodology while building our site the old way. So we don't. This site demonstrates the architecture we consult on.

How It Demonstrates

Canonical claims work. The claim "Documents are outputs, not sources" appears on multiple pages. Each page pulls from the same canonical assertion. Update the claim once, every page reflects the change.

Strategic forgetting works. Page-specific phrasing is ephemeral. Canonical claims are permanent. We forget the presentation; we remember the truth.

Derivation works. This page didn't exist as a Word document that someone wrote. A manifest defined its structure. A generator pulled the relevant claims. The page was assembled from the result.
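A minimal sketch of that assembly step, with hypothetical manifest fields, file paths, and rendering; only SMS-CLAIMS.json is a real file name from this page:

// Sketch of the derivation step: manifest + canonical claims -> page.
import { readFileSync, writeFileSync } from "node:fs";

interface Claim { id: string; text: string; }
interface Manifest { page: string; title: string; claimIds: string[]; }

function generatePage(manifestPath: string, claimsPath: string): void {
  const manifest: Manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
  const claims: Claim[] = JSON.parse(readFileSync(claimsPath, "utf8"));
  const byId = new Map(claims.map((c) => [c.id, c] as const));

  // Pull exactly the claims the manifest asks for; fail loudly on a missing one.
  const body = manifest.claimIds
    .map((id) => {
      const claim = byId.get(id);
      if (!claim) throw new Error(`Unknown claim id: ${id}`);
      return `<p data-claim-id="${claim.id}">${claim.text}</p>`;
    })
    .join("\n");

  writeFileSync(`${manifest.page}.html`, `<h1>${manifest.title}</h1>\n${body}\n`);
}

// e.g. generatePage("manifests/proof-points.json", "SMS-CLAIMS.json");

In a sketch like this, failing loudly on an unknown claim ID is what keeps a generated page from quietly drifting away from the canonical claims file.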

Architecture

┌─────────────────────────────────────────┐
│         SemanticMemorySystems.com       │
│            (Generated Pages)            │
│                                         │
│  index.html   methodology   proof-points│
│      ↑            ↑             ↑       │
│      │            │             │       │
└──────┼────────────┼─────────────┼───────┘
       │            │             │
       ▼            ▼             ▼
┌─────────────────────────────────────────┐
│            Page Manifests               │
│  (Structure, claim requirements, voice) │
└───────────────────┬─────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│         SMS-CLAIMS.json (Canonical)     │
│  ┌──────────────────────────────────┐   │
│  │ sms-method-01: "Verify upstream" │   │
│  │ sms-method-02: "Documents are.." │   │
│  │ sms-recursive-01: "This site..." │   │
│  └──────────────────────────────────┘   │
└─────────────────────────────────────────┘

What the Proof Points Prove

Five principles. Three proof points. Each principle demonstrated in production.

The common thread: documents are outputs, not sources. Pages, chapters, and responses are generated from canonical claims, not authored independently.

1. Canonical Claims Work

TerpTune's product knowledge, Book of Fire's thesis assertions, this site's methodology claims—all stored as structured, verified, atomic units. Not scattered across documents. Queryable. Citable. Updateable.

2. Strategic Forgetting Works

TerpTune forgets conversations but remembers products. Book of Fire forgets chapter drafts but remembers claims. This site forgets page layouts but remembers truth. Forgetting the right things is as important as remembering.

3. Derivation Works

Every proof point generates outputs from sources. TerpTune generates responses from product claims. Book of Fire generates chapters from thesis claims. This site generates pages from methodology claims.

One truth, multiple presentations.

4. Self-Explanation Works

Book of Fire explains semantic memory while demonstrating it. This site teaches the methodology while embodying it. The best documentation of a system is a system that documents itself.

5. This Isn't Theory

Three production systems. Real users. Real queries. Real updates propagating through real derivation pipelines.

The methodology works because we use it.


Your Turn

What would semantic memory look like for your organization?

Healthcare systems — Clinical protocols that update once and propagate everywhere. AI that admits uncertainty.

Software companies — Documentation that derives from code, not the other way around. Support bots that know what they know.

Retail operations — Product knowledge that stays consistent across channels. Recommendations based on verified claims.

Educational institutions — Curriculum that traces to learning objectives. Assessment aligned by design.

The architecture exists. The methodology is proven. The proof points are live.

Let's talk about your knowledge infrastructure →