AI DETECTION + HUMANISATION
"This text was written by an AI."
0 AI score

You definitely don't want your texts flagged like this. We know how to fix that.

SEE HOW IT WORKS
SIGNAL ANALYSIS

Where the AI fingerprints are

Each highlighted segment triggers a detection signal. Hover to see which one.

The implementation of advanced machine learning algorithms has significantly transformed the landscape of modern data analytics, enabling organisations to derive actionable insights from previously unstructured datasets.

  • "The implementation of" · Formulaic opener
  • "advanced machine learning algorithms" · AI marker jargon
  • "has significantly transformed" · Connector phrase
  • "the landscape of modern data analytics" · Generic framing
  • "enabling organisations to derive" · Passive construction
  • "actionable insights" · AI cliché
  • "from previously" · Hedge word
  • "unstructured datasets" · Technical jargon
DETECTION ENGINE

20-Signal Detection Matrix

20 analysis parameters in two layers. Mathematical: rhythm, sentence length, repetition. Linguistic: formulaic phrasing and the artificial balance typical of trained models.

COMPUTATIONAL METRICS · 12 · weight 35%
M-01 Burstiness CV · Sentence length variance
M-02 Connector Density · Transitional phrase frequency
M-03 Formulaic Patterns · Template phrase detection
M-04 AI Marker Density · Model-specific vocabulary
M-05 Sentence Variance · Structural diversity score
M-06 Repetition Rate · Redundant phrase clusters
M-07 Hedge Phrase Density · Uncertainty expressions
M-08 Punctuation Variance · Punctuation pattern entropy
M-09 Topic Diversification · Subject coherence spread
M-10 Passive Voice Ratio · Passive construction frequency
M-11 Readability Delta · Flesch-Kincaid deviation
M-12 Vocabulary Richness · Type-token ratio analysis
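As an illustration, here is a minimal sketch of how a computational metric like M-01 Burstiness CV might be calculated. The function, the naive sentence splitting, and the sample texts are illustrative, not our production pipeline:

```python
import re
import statistics

def burstiness_cv(text: str) -> float:
    """Coefficient of variation (stdev / mean) of sentence lengths.

    Low CV means uniform sentence lengths, an AI-typical signal;
    high CV means 'bursty', human-typical variation.
    """
    # Naive split on terminal punctuation; good enough for a sketch.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s) for s in sentences]
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

uniform = "This is a sentence. Here is another one. And one more now."
bursty = ("Short. But then a much longer sentence rambles on for a while, "
          "the way people actually write. Right?")
assert burstiness_cv(bursty) > burstiness_cv(uniform)
```

The other computational metrics follow the same shape: a cheap, deterministic statistic over the raw text, no model required.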
LLM-EVALUATED CHARACTERISTICS · 8 · weight 65%
L-01 Register Mixing · Formal/informal style shifts
L-02 Selective Argumentation · Bias in evidence selection
L-03 Emotional Texture · Authentic emotional markers
L-04 Structural Improvisation · Paragraph deviation patterns
L-05 Implicit Knowledge · Assumed context density
L-06 Cultural Specificity · Cultural reference markers
L-07 Logical Breaks · Reasoning discontinuities
L-08 Authentic Voice · Personal style fingerprint
WHY AI TEXT IS DETECTABLE

Every model has a writing fingerprint

Language models don't choose words — they sample from a probability distribution frozen at training time. That distribution is the fingerprint. It's always there, whether the model knows it or not.

1
Training

The model reads billions of documents. Every phrase it sees repeatedly becomes a high-probability path. "In today's rapidly evolving landscape" — seen millions of times — becomes the path of least resistance.

2
RLHF shaping

Human raters reward structured, balanced, polished answers. The model learns: use formal connectors, acknowledge all sides, hedge uncertain claims, avoid controversy. This preference encoding is shared across all major models.

3
Generation

At every token, the model samples the next word from its probability distribution, weighted toward the most likely continuations. It isn't reasoning; it's sampling. The output is statistically shaped by that distribution, and the fingerprint lives in the statistics.
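The generation step can be sketched in a few lines. The tiny vocabulary, the raw scores, and the phrase they follow are all invented for illustration:

```python
import math
import random

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=None):
    """Sample the next token; lower temperature sharpens the
    distribution, pushing output toward high-probability paths."""
    rng = rng or random.Random()
    probs = softmax([l / temperature for l in logits])
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy next-token distribution after "In today's rapidly evolving":
vocab = ["landscape", "world", "market", "banana"]
logits = [4.0, 2.0, 1.5, -3.0]  # invented scores; "landscape" dominates
rng = random.Random(0)
draws = [sample_next_token(vocab, logits, rng=rng) for _ in range(1000)]
assert draws.count("landscape") > draws.count("world") > draws.count("banana")
```

Sample long enough and the token frequencies converge on the distribution itself, which is exactly what detection statistics measure.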

AI text sits at probability maxima — local peaks of the model's distribution. Perturb the text slightly and it becomes less likely. Human text has no such property. This asymmetry is measurable.

GPT
OpenAI

Encyclopedic coverage, RLHF-polished positivity, heavy em-dash use

"delve" 50× more common than in human prose — a training data artifact
"tapestry" Used as metaphor for complexity at 10× human rate
"testament to" Formulaic conclusion device: "a testament to its success"
Em-dash overuse 7× higher than human writers — used as rhetorical separator
Ascending tricolon "Not only A, not only B, but C" — learned from persuasive training data
Gemini
Google

Rigid hierarchical structure, SEO-optimized phrasing, trailing participials

", enabling…" · Trailing participial clauses reflect Google documentation style
", allowing for…" · Another trailing construct from technical-writing patterns
Bullet-heavy structure · Fragments prose into lists even when unnecessary; an SEO training artifact
"showcasing" · Overused verb from the product/app documentation corpus
Header → bullets → summary · Rigid document structure even for short answers
ALL MODELS
Universal patterns

These signals appear across every RLHF-trained model without exception

"Furthermore," "Moreover," Formal connectors at 4× human rate — RLHF prefers structured transitions
"It's worth noting that…" Hedge phrase density 3–5× human baseline — RLHF-instilled politeness
"In today's rapidly evolving…" Near-zero frequency in pre-AI writing — appears in all models
Low sentence burstiness AI sentences cluster around 80–120 chars; human text varies wildly
Zero contraction rate AI defaults to "do not", "it is" — formal register from training data
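Two of these universal signals are simple enough to measure directly. A hedged sketch, with an invented (and far from exhaustive) connector list and naive tokenisation:

```python
import re

# Illustrative subset of the connectors RLHF-trained models overuse.
CONNECTORS = {"furthermore", "moreover", "additionally", "consequently"}

def _words(text: str):
    return re.findall(r"[a-z']+", text.lower())

def connector_density(text: str) -> float:
    """Share of words that are formal connectors."""
    words = _words(text)
    return sum(w in CONNECTORS for w in words) / len(words) if words else 0.0

def contraction_rate(text: str) -> float:
    """Contractions per word; near zero is an AI-typical signal."""
    words = _words(text)
    if not words:
        return 0.0
    return sum(1 for w in words if "'" in w and len(w) > 2) / len(words)

ai_like = "Furthermore, the system is robust. Moreover, it does not fail."
human_like = "Honestly? It's solid, and it doesn't really break."
assert connector_density(ai_like) > connector_density(human_like)
assert contraction_rate(human_like) > contraction_rate(ai_like)
```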
HUMANISATION

The same idea. A different voice.

We change the rhythm, sentence structure, and word choice so the text reads as if a person wrote it. Your idea stays intact; everything else finds a living voice.

91 AI score
Human (<30) · Mixed (30–59) · AI (60+)
BEFORE · AI-generated

The implementation of advanced machine learning algorithms has significantly transformed the landscape of modern data analytics, enabling organisations to derive actionable insights from previously unstructured datasets.

Score: 91
AFTER · Humanised

Three years ago I watched a junior analyst spend a week manually tagging survey responses. Last month, the same task took forty minutes — and the results were sharper. That's the shift we're living through.

Score: 14
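Mapping a 0–100 score onto the bands shown above is mechanical; this small helper mirrors the thresholds (<30, 30–59, 60+) and nothing more, and is purely illustrative:

```python
def score_band(score: int) -> str:
    """Map an AI-likeness score to the band labels used above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 30:
        return "Human"
    if score < 60:
        return "Mixed"
    return "AI"

assert score_band(91) == "AI"     # the BEFORE text
assert score_band(14) == "Human"  # the AFTER text
assert score_band(45) == "Mixed"
```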
WHAT MAKES WRITING HUMAN

8 signals that separate a writer from a machine

These are what the LLM layer evaluates. Each characteristic scores 0–100 based on presence in the text. Low scores are the target.

Register Mixing

Humans shift between formal and casual mid-sentence. AI never does.

"...technically speaking, yeah, it's a mess."
Selective Argumentation

Real writers choose sides. AI considers all perspectives equally.

"The data supports it, but I still don't buy it."
Emotional Texture

Feeling bleeds into language. Frustration, excitement — all leave marks.

"It took forever, but honestly? Worth it."
Structural Improvisation

Human paragraphs wander, backtrack, and contradict themselves.

"Actually — wait, that's not quite right."
Implicit Knowledge

We assume shared context. Humans write for a specific reader, not everyone.

"You know how that meeting always goes."
Cultural Specificity

References, idioms, and local knowledge anchor text to a real person.

"Like that situation in Moscow, but worse."
Logical Breaks

Humans leap. Conclusions precede evidence. Ideas interrupt each other.

"Anyway. The point is: don't do it this way."
Authentic Voice

Style is a fingerprint. Word choices and rhythms that repeat across everything you write.

"The kind of thing that makes you want to quit."
UNDERSTANDING YOUR SCORE

What the score actually measures

Every AI detection service, ours included, produces a probability estimate, not a verdict. Here's what that means in practice.

It's a spectrum, not a binary

Text doesn't exist in two categories — AI or Human. Every piece sits somewhere on a continuum. Formal academic writing, technical documentation, and text by non-native speakers naturally share many statistical patterns with AI output.

The score measures AI-likeness, not AI origin

A high score means: this text exhibits a high density of patterns statistically associated with AI generation. It does not mean the text was generated by AI. Think of it like a medical screening — a signal for closer attention, not a diagnosis.

Known false-positive scenarios

ESL writers, technical authors, legal and academic writers, and anyone trained to write in a formal structured style will score higher than a casual native blogger — regardless of whether they used AI at all.

Detection degrades on edited AI text

If AI output has been substantially rewritten by a human — or if a human has been specifically instructed to write "less formally" — the statistical signal weakens. Our score reflects what is detectable, not what is true.

  • 0–30 · Clearly human: casual journal entry, personal anecdote, unedited voice-note transcript
  • 30–60 · Middle ground: ESL formal writing, edited AI text, academic prose
  • 60–100 · Clearly AI: raw unedited GPT output, formulaic corporate copy, template-generated text

A high score is a signal for review — not proof of AI authorship. A low score does not guarantee human authorship. Use the result as one input among several, not as a final verdict. Our goal is to help you understand your text's statistical profile and improve it — not to judge authorship.

THE SERVICE

Detect the fingerprint. Remove it.

Two tools that work together — one to show you exactly where AI patterns live in your text, one to replace them with something that reads like a person actually wrote it.

01

Detect

We analyse your text across 20 signals in two layers — 12 computational metrics and 8 LLM-evaluated characteristics. The result is a precise 0–100 score with a full breakdown: which phrases triggered which signals, and why.

  • 20 signals · 2 analysis layers
  • Sentence-level annotation
  • Weighted hybrid scoring
  • 15 tokens
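How the two layers might combine into one score, using the published weights (35% computational, 65% LLM-evaluated). The flat per-layer averaging here is a simplification of the actual weighted hybrid scoring:

```python
def hybrid_score(metric_scores, llm_scores,
                 metric_weight=0.35, llm_weight=0.65):
    """Blend the 12 computational metrics and 8 LLM-evaluated
    characteristics (each on a 0-100 scale) into one 0-100 score."""
    if not metric_scores or not llm_scores:
        raise ValueError("both layers need at least one signal")
    metric_avg = sum(metric_scores) / len(metric_scores)
    llm_avg = sum(llm_scores) / len(llm_scores)
    return metric_weight * metric_avg + llm_weight * llm_avg

# When every signal agrees, the blend collapses to the shared value:
assert abs(hybrid_score([80] * 12, [80] * 8) - 80.0) < 1e-9
```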
02

Humanise

Our AI agent rewrites the flagged segments — not by swapping synonyms, but by reconstructing sentence rhythm, varying structure, and replacing model-specific vocabulary with authentic patterns. The meaning stays. The fingerprint doesn't.

  • Structural rewrite, not synonym swap
  • Meaning and intent preserved
  • Measurable before/after score
  • 30 tokens
Two tools. One goal — text that reads like a person wrote it.

"By the time you reach the bottom of this page, your text could already be different."

Transform your text

Paste it. We detect across 20 signals. You choose depth. Our AI agent humanises. 60 seconds.

Begin the transformation