Product & Vision

From AI Detection to Proof: Why We Built TypeTrace

Over the last two years, AI writing tools have become cheap, fast, and nearly indistinguishable from human prose. The default response has been a wave of “AI detectors” — classifiers that look at a block of text and guess whether a model probably wrote it.

The problem is that guessing is a terrible foundation for high‑stakes decisions. When grades, jobs, or reputations are on the line, “it looks like AI” is not good enough.

TypeTrace exists because we believe integrity questions should be answered with proof of how something was written, not predictions about how it might have been written.

Detection is a prediction problem

Most AI detectors work the same way: they take a finished piece of text and try to estimate how “likely” it is that a large language model produced it. Some look at token entropy, others learn statistical fingerprints from model outputs.
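As a toy illustration of the entropy approach (the probability table and scores here are invented, not any real detector’s algorithm), a detector can score text by the average surprisal of its tokens under a language model: highly predictable text scores low, and low scores are treated as evidence of model output.

```python
import math

# Made-up token probabilities, standing in for a real language model.
TOKEN_PROBS = {"the": 0.05, "model": 0.01, "wrote": 0.008, "this": 0.03}
DEFAULT_PROB = 0.0001  # assumed probability for tokens the "model" hasn't seen

def mean_surprisal(tokens):
    """Average -log2 p(token) over the text.

    Entropy-based detectors treat low surprisal (very predictable text)
    as a signal that a model, not a human, produced it.
    """
    return sum(-math.log2(TOKEN_PROBS.get(t, DEFAULT_PROB))
               for t in tokens) / len(tokens)

predictable = mean_surprisal(["the", "model", "wrote", "this"])
unusual = mean_surprisal(["zyx", "qqq"])
# A detector of this kind would flag the lower-surprisal text as
# more likely machine-written -- a guess about style, not a fact
# about how the text was produced.
```

Note that nothing in this computation touches the document’s history; it is a statistical judgment about the finished surface.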

In low‑stakes environments, that’s fine. But in real classrooms and workplaces, detectors are already causing damage: writers flagged on the strength of a score, with no evidence they can examine or appeal.

These tools are probabilistic by design. They can be calibrated, tuned, and improved — but they can never tell you how a particular document came into existence. They only look at the static end result.

Proof is a history problem

If you ask a different question — not “does this text look like AI?” but “how was this text actually produced?” — you end up with a different kind of system.

Instead of scanning finished prose, you track the process that created it: every edit, in sequence, with the time it happened.

With that history, you no longer have to infer authorship from surface style. You can replay the writing session and see the document unfold as it was written.
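The replay idea can be sketched in a few lines. The event format below is hypothetical, not TypeTrace’s actual one: applying recorded insert and delete events in order reconstructs the document as it stood at any moment of the session.

```python
# Illustrative replay of a hypothetical writing-event stream.
# Field names ("type", "pos", "text", "len") are assumptions for this sketch.
def replay(events):
    """Apply insert/delete events in order to rebuild the document."""
    doc = ""
    for e in events:
        if e["type"] == "insert":
            doc = doc[:e["pos"]] + e["text"] + doc[e["pos"]:]
        elif e["type"] == "delete":
            doc = doc[:e["pos"]] + doc[e["pos"] + e["len"]:]
    return doc

events = [
    {"type": "insert", "pos": 0, "text": "Helo"},
    {"type": "delete", "pos": 2, "len": 1},      # notice the typo...
    {"type": "insert", "pos": 2, "text": "ll"},  # ...and fix it, as a human would
]
# replay(events) -> "Hello"
```

Truncating the replay at any earlier event shows the document mid-draft, typos and all, which is exactly the kind of messy intermediate state a paste-from-a-model session never has.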

What TypeTrace actually records

TypeTrace runs inside the tools people already use — Google Docs and Microsoft Word on the web — and quietly records a stream of events while you write: each edit to a tracked document, in sequence, with its timestamp.

From that stream, we build a provenance report: a tamper‑resistant record that shows this person typed this document, in this sequence, at these times.
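One common way to make such a record tamper‑resistant is to chain each event to the previous one with a cryptographic hash, so that rewriting any part of the history breaks every link after it. The sketch below uses invented field names and is illustrative only, not TypeTrace’s actual format.

```python
import hashlib
import json

def append_event(log, event):
    """Append an event, linking it to the previous record by SHA-256."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log):
    """Recompute every link; True only if no record was altered."""
    prev = "0" * 64
    for rec in log:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"type": "insert", "pos": 0, "text": "H", "t": 0.0})
append_event(log, {"type": "insert", "pos": 1, "text": "i", "t": 0.4})
# The intact chain verifies; editing history after the fact does not.
```

Because each hash covers the previous hash, an after-the-fact edit to any event invalidates the chain from that point on, which is what lets a reviewer trust that the replayed session is the session that actually happened.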

Why this matters for students

For students, the stakes are stark. A single false AI‑cheating accusation can sit on an academic record for years. AI detectors often provide no appealable evidence beyond a screenshot of a score.

With TypeTrace, a student can attach a provenance report to their essay. An instructor reviewing a case can replay the writing session and watch the essay take shape, edit by edit.

The question stops being “does this look like AI?” and becomes “does this process look like an honest student writing their work?” — a much more grounded standard.

Why this matters for professionals

Agencies, consultants, ghostwriters, analysts — anyone who sells writing or thinking — is running into a new kind of client concern: “How do I know you didn’t just paste this from ChatGPT?”

With TypeTrace, a client doesn’t have to trust a promise. They can review a provenance report that shows the work unfolding, edit by edit, over the time it actually took.

It turns an uncomfortable conversation about tools into a simple question of process: “Does this look like human work done over time?”

Privacy as a hard constraint

Recording keystrokes is powerful — and extremely sensitive. From day one, we treated privacy as a hard constraint, not a marketing bullet.

In practice, that means writers stay in control: tracking is opt‑in, and the record belongs to the writer who produced it.

Proof without privacy would be another form of surveillance. We built TypeTrace to be the opposite: a tool writers can wield in their own defence.

From adversarial to collaborative integrity

The AI detector model is fundamentally adversarial. It treats every piece of text as suspicious until proven otherwise, and then guesses at that proof.

A provenance model is collaborative. Writers opt in to generating evidence as they work. Educators and clients get concrete timelines instead of scores. Disputes can be resolved by looking at the same underlying history.

That shift — from prediction to proof, from surface to process — is why we built TypeTrace.

As AI systems keep improving, it will only get harder to look at finished text and know where it came from. But if you can replay how it was written, you don’t have to guess.