Dubious AI detectors drive 'pay-to-humanize' scam

BSS
Published On: 30 Mar 2026, 09:11

WASHINGTON, United States, March 30, 2026 (BSS/AFP) - Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to "humanize" the writing, a pattern experts say bears the hallmarks of a scam.

As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponized to discredit authentic content and tarnish reputations.

AFP's fact-checkers identified three such text detectors that claim to estimate what percentage of a text is AI-generated. The tools -- prompted in four languages -- not only misidentified authentic text as AI-generated but also attempted to monetize those errors.

One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained "88% AI content." It then offered to scrub any trace of AI for a fee.

"Your AI text is humanizing," the site claimed, leading to a page where "100% unique text" was locked behind a paywall charging up to $9.99.

Two other tools -- TextGuard and Refinely -- produced similar false positives and sought to monetize them.

- 'Scams' -

AFP presented its findings to all three detectors.

"Our system operates using modern AI models, and the results it provides are considered accurate within our technology," TextGuard's support team told AFP.

"At the same time, we cannot guarantee or compare results with other systems."

JustDone also reiterated that "no AI detector can guarantee 100 percent accuracy."

It acknowledged the free version of its AI detector "may provide less precise results" due to "high demand and the use of a lighter model designed for quick access."

Echoing AFP's findings, one user on a review platform complained that "even with 100% human-written material, JustDone still flags it as AI."

AFP fed the tools multiple human-written samples -- in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input -- even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.

"These are not AI detectors but scams to sell a 'humanizing' tool that will often return what we call 'tortured phrases'" -- unrelated jargon or nonsensical alternatives -- Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP.

- 'Liar's dividend' -

Illustrating how such tools can be used to discredit individuals, pro-government influencers in Hungary claimed earlier this year that a document outlining the opposition's election campaign had been entirely created by AI.

To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University told AFP it "does not have any established relations with AI detector companies."

"Generative AI does provide an increased risk that students may use it to submit work that is not their own," the university said.

"Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content."

Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly -- and feed a disinformation tactic researchers have dubbed the "liar's dividend": dismissing authentic content as AI fabrications.

"We often report on misinformers and other hoaxsters using AI to fabricate false images and videos," said Waqar Rizvi from the misinformation tracker NewsGuard.

"Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it's authentic."
