TerpTune Case Study

How a semantic memory system cut through market confusion.

We didn't solve the recommendation problem. We invalidated the entire class of problems. This is how semantic memory systems turn fragmented domain knowledge into personalized intelligence.


The Problem

35,000+ cannabis products on the market. Consumers can't predict how any of them will affect them personally.

The traditional classification—"indica vs sativa"—is genetically meaningless. Certificate of Analysis data exists but isn't actionable. Budtenders recommend based on brand recognition, not neurochemistry. Consumers gamble $30-60 on every purchase.

The real problem: The same product affects different people differently. Without knowing individual neurochemistry, recommendations are just noise.


Why Traditional Approaches Fail

The traditional approaches:

  • **Rating-based systems** — "Users who liked X also liked Y." Works for movies. Fails for neurochemistry.
  • **Strain databases** — Aggregate ratings that don't account for individual variation.
  • **AI chatbots** — Search the internet for strain descriptions. Parrot marketing copy.

The core problem: They treat this domain like e-commerce. They optimize for engagement, not outcomes. They can't answer the only question that matters: **"Will this specific product work for MY neurochemistry?"**

The Semantic Memory Architecture

Semantic memory systems transform fragmented domain knowledge into predictive intelligence by establishing canonical claims, tracking episodes, and discovering personal thresholds.


TerpTune implements the full canonical-episodic-threshold architecture. Four layers, each building on the last:

Layer 1: Canonical Knowledge

What it stores:

71 research-backed canonical claims about terpene mechanisms. Evidence-graded (A/B/C/D) based on study quality.

Examples:

"Linalool is a GABA-A receptor positive allosteric modulator" (Grade A)

"Caryophyllene ≥1.5% correlates with anchoring sensation" (Grade B-C)

These aren't opinions. They're verified assertions about neurochemistry, stored as structured claims with provenance.
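A minimal sketch of what a claim like this could look like as a structured record. The field names and dataclass shape are illustrative, not TerpTune's actual schema; the point is that the assertion, its evidence grade, and its provenance travel together:

```python
# Illustrative claim record; field names are assumptions, not TerpTune's schema.
from dataclasses import dataclass, field

@dataclass
class CanonicalClaim:
    claim_id: str          # stable identifier for downstream references
    compound: str          # terpene the claim is about
    statement: str         # the verified assertion, in plain language
    mechanism: str         # e.g. receptor target or observed correlation
    evidence_grade: str    # "A" (strong) through "D" (weak)
    sources: list[str] = field(default_factory=list)  # provenance: study citations

linalool_gaba = CanonicalClaim(
    claim_id="linalool-gabaa-pam",
    compound="linalool",
    statement="Linalool is a GABA-A receptor positive allosteric modulator",
    mechanism="GABA-A positive allosteric modulation",
    evidence_grade="A",
    sources=["doi:..."],   # placeholder; real claims carry study citations
)
```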

Layer 2: Episode Tracking

What it captures:

Each session logged with product, time, context, and phenomenology. The user's exact language preserved.

The difference:

"Background noise stops" not "felt calm." Actual tracked experiences, not surveys or ratings. Links between episodes and products build the individual map.

Layer 3: Threshold Discovery

What emerges:

After 10+ sessions, personal thresholds emerge from the data.

The canonical-episodic-threshold architecture enables personalized predictions from universal science—the same compound affects different people differently, but the mechanism is consistent.


Examples:

"User's myrcene fog onset: 0.9%"

"User's caryophyllene anchor minimum: 0.5%"

Universal science + personal calibration = predictions that work.
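One way threshold discovery could work, sketched under a simplifying assumption: a threshold is the lowest concentration at which a tagged effect showed up in the user's episodes. The effect tagging and the minimum-session rule are simplified here:

```python
# Simplified threshold discovery; effect tagging via substring match is an
# assumption for the sketch, not how TerpTune actually labels episodes.
def personal_threshold(episodes, compound, effect_tag, min_sessions=10):
    """Lowest % of `compound` across episodes where `effect_tag` was reported."""
    if len(episodes) < min_sessions:
        return None  # not enough data to calibrate yet
    hits = [
        ep.terpene_profile.get(compound, 0.0)
        for ep in episodes
        if effect_tag in ep.phenomenology.lower()
        and ep.terpene_profile.get(compound, 0.0) > 0.0
    ]
    return min(hits) if hits else None

# Usage, given Episode records like the one in Layer 2:
#   fog_onset = personal_threshold(user_episodes, "myrcene", "fog")
#   -> e.g. 0.9, so anything at or above 0.9% myrcene gets gated out later
```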

Layer 4: Context-Aware Scoring

What it calculates:

NV/D: Neurochemical Value per Dollar. The same product scores differently for "evening wind-down" vs "functional work."

The output:

Patterns detected across sessions. Specific terpene combinations that produce reliable effects. Rank-ordered recommendations by desired outcome.
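A rough sketch of what context-aware NV/D scoring could look like. The context weights, the gating rule, and the linear scoring formula are all assumptions made for illustration; the production scoring model isn't described at this level of detail:

```python
# Illustrative NV/D scoring; weights and formula are assumptions for the sketch.
CONTEXT_WEIGHTS = {
    "evening_wind_down": {"linalool": 3.0, "nerolidol": 2.0, "caryophyllene": 1.5},
    "functional_work":   {"limonene": 2.5, "pinene": 2.0, "caryophyllene": 1.0},
}

def score_product(terpenes, price, context, thresholds):
    """Return NV/D for one product, or None if a personal threshold gates it out."""
    for compound, limit in thresholds.items():
        if terpenes.get(compound, 0.0) >= limit:
            return None  # excluded before it can be recommended
    weights = CONTEXT_WEIGHTS[context]
    value = sum(weights.get(c, 0.0) * pct for c, pct in terpenes.items())
    return value / price  # neurochemical value per dollar

nvd = score_product(
    {"linalool": 0.8, "nerolidol": 0.3, "caryophyllene": 2.1},
    price=45.0,
    context="evening_wind_down",
    thresholds={"myrcene": 0.9},  # this user's fog onset from Layer 3
)
```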


Karl: The AI That Calculates

AI interfaces to semantic memory systems calculate answers from structured knowledge rather than searching for approximate matches in unstructured content.


User prompt: "It's 6pm. I need to wind down. What should I use?"

A normal chatbot:

  • Searches for "relaxing strains"
  • Returns marketing copy
  • Recommends whatever has good reviews

Karl:

  1. Loads the user's terpene thresholds
  2. Scores every product for EVENING_WIND_DOWN
  3. Applies gates (excludes anything above the fog threshold)
  4. Detects patterns
  5. Returns rank-ordered recommendations

Karl's actual output:

#1 PRODUCT A — Score: 11.5 ⭐
   ✓ Triple GABA stack (linalool + nerolidol + bisabolol)
   ✓ Anchor territory (2.1% caryophyllene)
   ✓ Reliable pattern detected

   "Background noise stops. Textured sedation with body presence."

⚠️ EXCLUDED: Product B — Myrcene 0.94% exceeds your fog threshold

The difference: Karl doesn't search the internet. He queries a semantic memory system built from canonical knowledge, personal episodes, and validated thresholds.
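Wired together, Karl's query path could look something like the sketch below, reusing score_product from the Layer 4 sketch. The product dictionaries and the top_n cutoff are illustrative; the point is that every step is a deterministic pass over the layers above, with no web search anywhere:

```python
# Hypothetical query path for Karl; structure is illustrative, not the real code.
def recommend(context, products, thresholds, top_n=3):
    """Rank products by NV/D for this context; gate out anything over a threshold."""
    ranked, excluded = [], []
    for product in products:
        nvd = score_product(product["terpenes"], product["price"], context, thresholds)
        if nvd is None:
            excluded.append(product["name"])        # gated out, with a reason to report
        else:
            ranked.append((nvd, product["name"]))
    ranked.sort(reverse=True)
    return ranked[:top_n], excluded

products = [
    {"name": "Product A", "price": 45.0,
     "terpenes": {"linalool": 0.8, "nerolidol": 0.3, "caryophyllene": 2.1}},
    {"name": "Product B", "price": 50.0,
     "terpenes": {"myrcene": 0.94, "limonene": 1.2}},
]
top, excluded = recommend("evening_wind_down", products, thresholds={"myrcene": 0.9})
# top      -> [(score, "Product A")]
# excluded -> ["Product B"]  (0.94% myrcene is over this user's 0.9% fog threshold)
```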


What This Delivers

Products scored using semantic memory architecture show strong correlation with positive session outcomes, validating the canonical-episodic-threshold approach.


Recommendation accuracy — High-scoring products correlate strongly with positive session outcomes. The architecture validates itself through use.

Waste reduction — Users avoid products that cause unwanted effects. The system gates out products above personal thresholds before recommending.

Decision time — From "browsing for 20 minutes" to "ranked recommendations in seconds."

Value optimization — NV/D scoring finds the moderately-priced product that outperforms the premium for this specific user.

We didn't build a better recommendation engine. We built a system that makes the recommendation problem trivial.


The Architecture

Verify upstream, generate downstream. Human verification happens once, at the source. Everything downstream is generated, not manually maintained. This eliminates drift by making it architecturally impossible.

┌─────────────────────────────────────────────────────────────┐
│                    CANONICAL LAYER                          │
│  71 research claims, evidence-graded, vocabulary bridges    │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                    EPISODIC LAYER                           │
│  Sessions, products, phenomenology, context                 │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                   THRESHOLD LAYER                           │
│  Personal calibration derived from episode patterns         │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                    SCORING LAYER                            │
│  Context-aware NV/D calculation, pattern detection          │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                       KARL                                  │
│  AI interface that queries the system, not the internet     │
└─────────────────────────────────────────────────────────────┘

Key insight: Karl is not a smart chatbot. Karl is an interface to a semantic memory system. The intelligence is in the structure, not the model.
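The same principle carries past recommendations. Because verification happens once at the canonical layer, downstream artifacts, including user-facing copy, can be generated rather than authored. A hypothetical sketch of that generation step (the GRADE_LABELS mapping and render_consumer_copy function are illustrative, not TerpTune's actual pipeline):

```python
# Hypothetical "generate downstream" step: user-facing text is rendered from
# a verified claim record, so correcting the claim updates every presentation.
GRADE_LABELS = {"A": "strong evidence", "B": "moderate evidence",
                "C": "preliminary evidence", "D": "weak evidence"}

def render_consumer_copy(claim):
    """One presentation of a canonical claim, generated rather than hand-written."""
    return (f"{claim['compound'].capitalize()}: {claim['summary']} "
            f"({GRADE_LABELS[claim['evidence_grade']]}).")

claim = {
    "compound": "linalool",
    "summary": "supports calm via GABA-A positive allosteric modulation",
    "evidence_grade": "A",
}
print(render_consumer_copy(claim))
# Linalool: supports calm via GABA-A positive allosteric modulation (strong evidence).
```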


Your Organization Has the Same Problem

Your knowledge is fragmented across documents, systems, people. Your AI retrieves documents but doesn't understand relationships. Your experts hold knowledge that isn't captured. When they leave, the knowledge leaves with them.

In a semantic memory architecture, documents become outputs, not sources. They're generated from canonical claims, not authored independently. One truth, multiple presentations.

TerpTune proves:

  • Canonical knowledge can be extracted and structured
  • Episodes can be tracked and linked to outcomes
  • Thresholds can be discovered from patterns
  • AI can query semantic memory instead of guessing

The question: What would it look like if your AI could calculate the right answer instead of searching for it?

Your domain has canonical knowledge. Your users generate episodes. Patterns exist. Thresholds can be discovered. The architecture transfers. The methodology is proven.


Request Your Free Test

Find out if your AI is hallucinating.