When people talk about “AI detection”, they usually mean scanning a finished document and asking a classifier whether the style looks machine‑generated. At TypeTrace, we care about a narrower, more actionable question: did AI‑generated text get pasted into this otherwise human‑written document?
We call that problem AI substitution — swapping part of a document with model output — and it shows up everywhere from student essays to client deliverables.
The substitution pattern in real documents
In practice, AI substitution rarely looks like “an AI wrote the whole thing.” It looks like:
- A student writing their own introduction, then pasting an AI‑generated middle section.
- A consultant pasting a model’s first draft of a section, then editing lightly on top.
- A team using AI to rewrite specific paragraphs for tone or clarity.
If you only look at the final text, those substitutions are hard to see. If you look at the timeline of how the document was assembled, they become much clearer.
What we observe instead of “AI vs human text”
TypeTrace does not label text blocks as “AI” or “human.” Instead, we observe a few concrete signals:
- Continuous typing — text that appears character‑by‑character at human speeds.
- Paste events — chunks of text that appear all at once.
- Session structure — when work happens and how it’s split across sessions.
- Editing behaviour — whether a block was substantially reworked after insertion.
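Internally, a signal stream like this can be thought of as an event log. Here is a minimal sketch in Python (the record shape and field names are illustrative assumptions, not TypeTrace's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One observed editing event; fields are illustrative, not TypeTrace's schema."""
    kind: str                        # "typed" or "pasted"
    words: int                       # words this event contributed
    session: int                     # which writing session it belongs to
    minutes_before_submission: float
    words_edited_after: int = 0      # how much of the block was reworked later

# A tiny example log: live typing plus one heavily reworked paste.
log = [
    Event("typed", 850, session=1, minutes_before_submission=300),
    Event("pasted", 200, session=1, minutes_before_submission=280,
          words_edited_after=150),
]
```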
By combining those signals, we can answer questions like:
- “Was this section typed or pasted?”
- “If it was pasted, how much of the document does it represent?”
- “Did the writer meaningfully edit that pasted text afterward?”
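As a sketch of how those questions could be answered mechanically, here are two toy helpers over a list of event dicts (the field names and the 100-word / 10% thresholds are assumptions for illustration, not TypeTrace's actual tuning):

```python
def paste_share(events):
    """Fraction of the document's words that arrived via paste events."""
    typed = sum(e["words"] for e in events if e["kind"] == "typed")
    pasted = sum(e["words"] for e in events if e["kind"] == "pasted")
    total = typed + pasted
    return pasted / total if total else 0.0

def lightly_edited_pastes(events, min_words=100, max_edit_ratio=0.10):
    """Large pastes whose later edits touched little of the block.

    Thresholds are illustrative defaults.
    """
    return [
        e for e in events
        if e["kind"] == "pasted"
        and e["words"] >= min_words
        and e.get("edited_after", 0) / e["words"] <= max_edit_ratio
    ]

events = [
    {"kind": "typed", "words": 900},
    {"kind": "pasted", "words": 300, "edited_after": 10},
]
print(paste_share(events))            # 0.25
print(lightly_edited_pastes(events))  # the 300-word paste
```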
Spotting substitution in a session timeline
Imagine a student writing a 3,000‑word essay. In TypeTrace, their provenance report might show:
- 2,300 words typed live across three sessions.
- Two paste events: one 500‑word block, one 200‑word block.
- The large paste occurring 15 minutes before submission, with minimal edits.
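The arithmetic behind that report is simple. A quick check of the numbers above (the timing and edit counts for the 200-word paste, and all of the flagging thresholds, are assumptions added for illustration):

```python
typed_words = 2300
pastes = [
    # 500-word block, 15 minutes before submission, minimal edits (per the report)
    {"words": 500, "minutes_before_submission": 15, "edited_after": 10},
    # 200-word block; timing and edits here are assumed values
    {"words": 200, "minutes_before_submission": 90, "edited_after": 60},
]

total = typed_words + sum(p["words"] for p in pastes)      # 3000 words
share_pasted = sum(p["words"] for p in pastes) / total     # 700/3000, about 23%

# Surface large, late, lightly edited pastes. Thresholds are illustrative.
flagged = [
    p for p in pastes
    if p["words"] / total > 0.10
    and p["minutes_before_submission"] < 30
    and p["edited_after"] / p["words"] < 0.10
]
```

Only the 500-word block meets all three conditions here; the 200-word paste is both smaller and, under these assumed values, substantially reworked.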
That pattern looks very different from a student who:
- Types almost everything live.
- Uses paste only for quotes, references, or small snippets.
- Makes substantial edits after pasting anything longer.
Either final document could pass or fail a style‑based AI detector; the finished text alone doesn't reliably distinguish the two. The provenance timelines, however, tell two very different stories.
How this shows up in the TypeTrace report
In a typical TypeTrace report, you’ll see:
- Words typed live — the total word count produced through real keystrokes.
- Paste events — count and relative size of each paste.
- Timeline view — when each session started and ended.
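In spirit, that report is just a compact summary over the event log. Here is a mock rendering (the layout, field names, and session times below are invented for illustration, not TypeTrace's actual output format):

```python
def render_report(typed_words, pastes, sessions):
    """Render a minimal plain-text provenance summary.

    `pastes` is a list of paste word counts; `sessions` is a list of
    (start, end) label pairs. Purely illustrative.
    """
    total = typed_words + sum(pastes)
    lines = [
        f"Words typed live: {typed_words} ({typed_words / total:.0%} of document)",
        f"Paste events: {len(pastes)} "
        f"({', '.join(f'{w} words' for w in pastes)})",
        "Sessions:",
    ]
    lines += [f"  {start} to {end}" for start, end in sessions]
    return "\n".join(lines)

print(render_report(2300, [500, 200], [("Mon 19:00", "Mon 20:30"),
                                       ("Tue 18:00", "Tue 19:15"),
                                       ("Wed 21:00", "Wed 22:00")]))
```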
Reviewers can quickly scan this to answer core integrity questions:
- “Was most of this document typed or pasted?”
- “Did large pasted sections arrive right before submission?”
- “Do the patterns match what we’d expect for this assignment or deliverable?”
Why we avoid absolute labels
It might be tempting to jump from “large paste event here” to “this is AI.” We deliberately stop short of that. There are legitimate reasons to paste:
- Moving text between drafts.
- Inserting boilerplate sections or templates.
- Quoting source material with proper citation.
Instead of declaring “AI use” or “no AI use”, TypeTrace shows what actually happened and leaves room for human judgment and context.
Substitution in professional workflows
For professionals, the concern is often client trust. A client may be perfectly comfortable with light AI assistance, but not with wholesale replacement of sections.
A provenance report lets a writer:
- Demonstrate that the core analysis or argument was typed live.
- Mark any sections where templates or boilerplate were pasted in.
- Be transparent about where tools helped and where human work drove the result.
Again, the key is that we show the process, not just the product.
Building better norms around AI assistance
AI writing tools are not going away. The challenge is to build norms and policies that distinguish between acceptable assistance and unacceptable substitution.
Our view is simple:
- Institutions and clients should define what counts as acceptable use.
- Writers should have tools to document how they worked.
- Disputes should be resolved with reference to concrete timelines, not opaque classifier scores.
TypeTrace’s substitution signals are one piece of that infrastructure. They don’t replace policy or human judgment — they give both something solid to stand on.