About

Decades of watching organizations build systems that don't remember correctly.

Mark Ulett

The Short Version

I'm Mark Ulett. I build systems that help organizations remember what matters.

PhD in History and Philosophy of Science. Former product manager on Fortune 1000 SaaS products in retail and content management, powering hundreds of websites. Now independent, helping organizations fix the architecture that makes their AI hallucinate.

Based in Montana's Flathead Valley with my partner Beck and an Australian Cattle Dog named Sinopah.


The Longer Version

My dissertation traced a forgotten branch of evolutionary theory — scientists who asked "what if the production of variation matters as much as its selection?" That question turned out to be about more than biology.

It's about any system where what gets generated shapes what survives.

In my years in tech, I kept seeing the same pattern: organizations drowning in documentation that contradicted itself. Knowledge bases where nobody knew which version was current. AI systems that confidently delivered wrong answers because the source material was wrong.

The root cause was always the same: generation is cheap, verification is expensive, and nobody builds the verification infrastructure.

AI made this crisis visible. When generation cost drops to near-zero, the verification bottleneck becomes the whole problem. Your AI isn't hallucinating because it's stupid. It's hallucinating because your documentation is wrong and it can't tell the difference.

That's what I fix.

I also run BeargrassAI, a managed AI services company in Montana's Flathead Valley. BeargrassAI runs every client site from a single semantic memory system — the same architecture described on this site, applied to small business web presence. It's one more proof point that the methodology scales: from enterprise knowledge systems to local business visibility, the principle is the same — verify upstream, generate downstream.


The Methodology

I didn't invent the distinction between episodic and semantic memory — Endel Tulving did, in 1972. I'm applying it to organizational knowledge systems.

Most "knowledge bases" are episodic: timestamped documents that capture what was written, not what's true. Semantic memory systems store verified claims with ownership, evidence, and review cycles. Documents derive from claims, not the other way around.
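The claim-first structure above can be sketched in a few lines. This is a hypothetical illustration, not the actual schema behind these systems: the field names (`owner`, `evidence`, `review_interval`) and the `render_doc` helper are assumptions chosen to show the principle that documents derive from verified claims, not the other way around.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Claim:
    """A verified claim, the unit of semantic memory (illustrative sketch)."""
    statement: str                  # the canonical assertion
    owner: str                      # who is accountable for its truth
    evidence: list[str]             # sources backing the claim
    verified_on: date               # when it was last verified
    review_interval: timedelta = timedelta(days=90)

    def is_current(self, today: date) -> bool:
        # A claim is only trustworthy inside its review cycle.
        return today <= self.verified_on + self.review_interval

def render_doc(claims: list[Claim], today: date) -> str:
    """Documents derive from claims: only current claims are emitted."""
    return "\n".join(c.statement for c in claims if c.is_current(today))
```

The point of the sketch: a stale claim silently drops out of every derived document, instead of lingering in a timestamped file nobody rereads.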

We hold no truths to be self-evident. Every claim an AI reasons from must be explicit, verified, and canonical — not inferred, not assumed, not 'obvious.' The ability to declare what is canonically true is what makes you independent.

SMS Anti-Motto, 2026

This isn't theoretical. I build these systems. TerpTune demonstrates the full architecture. This website is built on the methodology it describes. The AI Hallucination Detector finds the gaps in yours.


Get In Touch

Email: mark@semanticmemorysystems.com

LinkedIn: linkedin.com/in/markulett

Location: Flathead Valley, Montana (Mountain Time)

I read every message. Response time is typically within 24 hours.

Is Your AI Hallucinating? →