Over the last two years, AI writing tools have become cheap, fast, and nearly indistinguishable from human prose. The default response has been a wave of “AI detectors” — classifiers that look at a block of text and guess whether a model probably wrote it.
The problem is that guessing is a terrible foundation for high‑stakes decisions. When grades, jobs, or reputations are on the line, “it looks like AI” is not good enough.
Detection is a prediction problem
Most AI detectors work the same way: they take a finished piece of text and try to estimate how “likely” it is that a large language model produced it. Some look at token entropy; others learn statistical fingerprints from model outputs.
In low‑stakes environments, that’s fine. But in real classrooms and workplaces, detectors are already causing damage:
- Honest students being flagged as cheaters based solely on detector scores.
- Professionals having client work questioned because an automated system “isn’t sure”.
- Institutions drowning in appeals, each one hinging on a black‑box probability.
These tools are probabilistic by design. They can be calibrated, tuned, and improved — but they can never tell you how a particular document came into existence. They only look at the static end result.
Proof is a history problem
If you ask a different question — not “does this text look like AI?” but “how was this text actually produced?” — you end up with a different kind of system.
Instead of scanning finished prose, you track the process that created it:
- Keystrokes — every character typed, with timing.
- Edits — deletions, rewrites, and revisions over time.
- Pauses — where someone stopped to think, not just to paste.
- Paste events — moments where large chunks of text appear at once.
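Concretely, that kind of history can be modelled as an ordered stream of timestamped events. Here is a minimal sketch in TypeScript — the names and shapes are illustrative assumptions, not TypeTrace’s actual schema:

```typescript
// Hypothetical event model for a keystroke-level writing history.
// All names here are illustrative, not TypeTrace's real schema.
type WritingEvent =
  | { kind: "insert"; at: number; offset: number; text: string }   // characters typed
  | { kind: "delete"; at: number; offset: number; length: number } // deletions and rewrites
  | { kind: "paste";  at: number; offset: number; text: string }   // bulk text appearing at once
  | { kind: "pause";  at: number; durationMs: number };            // idle gap between edits

// A session is simply the ordered stream of events between start and end.
interface Session {
  startedAt: number; // epoch milliseconds
  endedAt: number;
  events: WritingEvent[];
}
```

The discriminated `kind` field lets later stages (stats, replay, reporting) process one stream uniformly while still distinguishing typing from pasting.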
With that history, you no longer have to infer authorship from surface style. You can replay the writing session and see the document unfold as it was written.
What TypeTrace actually records
TypeTrace runs inside the tools people already use — Google Docs and Microsoft Word on the web — and quietly records a stream of events while you write. For each tracked document, we capture:
- Session start and end times.
- Keystroke‑level changes to the document.
- Indicators that a paste occurred, and how much text was inserted.
- High‑level stats like total words typed, average speed, and number of sessions.
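Those high-level stats fall straight out of the event stream. A sketch of that aggregation, assuming a simplified two-variant event shape (the real pipeline may differ):

```typescript
// Simplified event shape for illustration only.
type RecordedEvent =
  | { kind: "insert"; at: number; text: string } // typed characters
  | { kind: "paste"; at: number; text: string }; // pasted text

interface Summary {
  typedWords: number;
  pastedWords: number;
  wordsPerMinute: number;
}

// Derive summary stats by folding over the ordered event stream.
function summarize(events: RecordedEvent[], sessionMinutes: number): Summary {
  let typed = "";
  let pasted = "";
  for (const e of events) {
    if (e.kind === "insert") typed += e.text;
    else pasted += " " + e.text;
  }
  const words = (t: string) => (t.trim().match(/\S+/g) ?? []).length;
  return {
    typedWords: words(typed),
    pastedWords: words(pasted),
    wordsPerMinute: words(typed) / sessionMinutes,
  };
}
```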
From that stream, we build a provenance report: a tamper‑resistant record that shows this person typed this document, in this sequence, at these times.
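One common way to make such a record tamper-resistant is a hash chain, where each event’s hash commits to everything before it. The sketch below shows the idea; it is an assumed mechanism for illustration, not necessarily how TypeTrace implements its proofs:

```typescript
import { createHash } from "node:crypto";

// Hash chain over serialized events: each hash commits to the
// previous hash plus the current event, so altering, dropping, or
// reordering any past event changes every hash after it.
// Illustrative only — not TypeTrace's documented proof format.
function chainEvents(events: string[]): string[] {
  const hashes: string[] = [];
  let prev = "genesis";
  for (const e of events) {
    prev = createHash("sha256").update(prev).update(e).digest("hex");
    hashes.push(prev);
  }
  return hashes;
}
```

A verifier holding the final hash can recompute the chain and detect any retroactive edit to the history.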
Why this matters for students
For students, the stakes are stark. A single false AI‑cheating accusation can sit on an academic record for years. AI detectors often provide no appealable evidence beyond a screenshot of a score.
With TypeTrace, a student can attach a provenance report to their essay. An instructor reviewing a case can:
- See that the essay was written over multiple sessions leading up to the deadline.
- Verify that the bulk of the words were typed live, not pasted from elsewhere.
- Replay the writing timeline and watch the argument evolve.
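Replay itself is mechanically simple: apply the recorded edits in order and stop at any point in the timeline. A minimal sketch, with hypothetical event shapes:

```typescript
// Hypothetical replay over positional edit events.
type ReplayEvent =
  | { kind: "insert"; offset: number; text: string }
  | { kind: "delete"; offset: number; length: number };

// Reconstruct the document after the first `upTo` events,
// so any moment in the writing session can be inspected.
function replay(events: ReplayEvent[], upTo: number = events.length): string {
  let doc = "";
  for (const e of events.slice(0, upTo)) {
    if (e.kind === "insert") {
      doc = doc.slice(0, e.offset) + e.text + doc.slice(e.offset);
    } else {
      doc = doc.slice(0, e.offset) + doc.slice(e.offset + e.length);
    }
  }
  return doc;
}
```

Because every intermediate state is reproducible, a reviewer can scrub through the session like a video rather than judging only the final text.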
The question stops being “does this look like AI?” and becomes “does this process look like an honest student writing their work?” — a much more grounded standard.
Why this matters for professionals
Agencies, consultants, ghostwriters, analysts — anyone who sells writing or thinking — are running into a new kind of client concern: “How do I know you didn’t just paste this from ChatGPT?”
With TypeTrace, a client doesn’t have to trust a promise. They can review a provenance report that shows:
- Exactly how many words were typed versus pasted.
- How the document was assembled over days or weeks.
- That the work existed before a critical meeting or deliverable.
It turns an uncomfortable conversation about tools into a simple question of process: “Does this look like human work done over time?”
Privacy as a hard constraint
Recording keystrokes is powerful — and extremely sensitive. From day one, we treated privacy as a hard constraint, not a marketing bullet.
In practice, that means:
- We scope recording to a narrow set of environments and active documents.
- We encrypt document content before it ever leaves the browser.
- We store proofs with zero‑knowledge encryption on our side: we cannot read your prose.
- You control what is exported, who sees it, and for how long.
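To make “encrypted before it leaves the browser” concrete, here is a sketch of authenticated client-side encryption with AES-256-GCM. In a real browser this would use the Web Crypto API (`crypto.subtle`); Node’s `crypto` module stands in here for brevity, and key management is deliberately simplified:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

interface EncryptedBox {
  iv: Buffer;         // unique per message
  ciphertext: Buffer;
  tag: Buffer;        // GCM authentication tag
}

// Encrypt document content locally; only ciphertext would be uploaded.
// Illustrative sketch — not TypeTrace's actual key-handling scheme.
function encrypt(plaintext: string, key: Buffer): EncryptedBox {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decryption succeeds only with the key the writer holds.
function decrypt(box: EncryptedBox, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString("utf8");
}
```

Because the key never leaves the writer’s device in this scheme, the server stores only ciphertext it cannot read.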
Proof without privacy would be another form of surveillance. We built TypeTrace to be the opposite: a tool writers can wield in their own defence.
From adversarial to collaborative integrity
The AI detector model is fundamentally adversarial. It treats every piece of text as suspicious until proven otherwise, and then guesses at that proof.
A provenance model is collaborative. Writers opt in to generating evidence as they work. Educators and clients get concrete timelines instead of scores. Disputes can be resolved by looking at the same underlying history.
That shift — from prediction to proof, from surface to process — is why we built TypeTrace.
As AI systems keep improving, it will only get harder to look at finished text and know where it came from. But if you can replay how it was written, you don’t have to guess.