The Dark Side of Originality.ai’s False Positives: Are Content Creators at Risk?

The Innocent Content Creator: A Victim of False Positives

Imagine being a hardworking content creator, always ensuring that your work is original and well-researched. You’ve built a solid reputation over the years and have loyal clients. But one day, out of the blue, a client falsely accuses you of using AI-written content on the strength of a detection service like Originality.ai. As a result, you lose the client and face emotional turmoil. This scenario is becoming all too common as Originality.ai’s false positive issue raises questions about the tool’s accuracy and fairness.

AI detection false positives can damage content creators like you, who rely on their skills and expertise to create high-quality content. When a tool like Originality.ai incorrectly labels your work as AI-generated, it’s not only a slap in the face but also a threat to your livelihood. Your reputation and your income are at stake, and the emotional impact can be devastating.

And these are the AI detection companies that consider themselves the good guys, defending content writers against the flood of AI-written spam!

What’s even more troubling is that Originality.ai continues to market itself as a reliable solution for content detection. The company touts a low false positive rate, but the experiences of content creators and recent research findings tell a different story. This disconnect between Originality.ai’s claims and the reality faced by content creators raises serious concerns.

Originality.ai’s Provided Definitions and Statistics

Originality.ai claims that false positives occur only 2% of the time. This claim is based on the company’s own study, which was carried out on 1,200 pieces of text, half human-written and half AI-written. The definitions of “human text” and “AI text” were set by Originality.ai and break down into the following classifications:

      • AI-Generated and Not Edited = AI-Generated Text.

      • AI-Generated and Human Edited = AI-Generated Text.

      • AI Outline, Human Written and Heavily AI Edited = AI-Generated Text.

      • AI Research and Human Written = Original Human-Generated.

      • Human Written and Edited with Grammarly = Original Human-Generated.

      • Human Written and Edited = Original Human-Generated.

    These classifications appear to be completely arbitrary. Moreover, the “human-written text” definitions are very strict, whilst the “AI text” definitions are more vague. For example, Grammarly is mentioned by name as a tool that can be used while a text remains “human-generated,” despite its rephrasing features themselves using AI. No definition is provided for what counts as “heavily AI edited” or “human edited.”

    Although the reason for these classifications is unclear, pairing a strict definition of “human text” with a vague definition of “AI text” would, by itself, reduce the reported false positive rate.
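To see why, here is a minimal sketch with entirely made-up numbers (none of these figures come from Originality.ai’s study): relabelling AI-edited human text as “AI-generated” mechanically lowers the measured false positive rate without the detector changing at all.

```python
# Hypothetical benchmark: 600 human-authored texts, 60 of which were
# lightly polished with an AI-assisted editor. Suppose the detector
# flags all 60 polished texts plus 12 untouched ones.
total_human = 600
flagged_polished = 60
flagged_untouched = 12

# Strict definition: AI-edited human text still counts as "human",
# so every flag on these 600 texts is a false positive.
fpr_strict = (flagged_polished + flagged_untouched) / total_human

# Relabelled definition: AI-edited human text is reclassified as "AI",
# so flagging it no longer counts as a false positive at all.
fpr_relabelled = flagged_untouched / (total_human - flagged_polished)

print(f"strict: {fpr_strict:.1%}, relabelled: {fpr_relabelled:.1%}")
# → strict: 12.0%, relabelled: 2.2%
```

The detector behaves identically in both cases; only the labelling of the benchmark changes, yet the headline rate drops from 12% to roughly 2%.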

    These findings suggest that Originality.ai’s detection system is not as foolproof as the company would like us to believe. Originality.ai has built its reputation on the promise of accurate and reliable AI-content detection, but the reality doesn’t live up to the claims. By continuing to promote its service as reliable, the company is causing harm to innocent content creators who fall victim to Originality.ai’s false positives.

    Originality.ai’s misleading statistics and definitions only serve to further muddy the waters. The company’s insistence on promoting its low false positive rate can lull businesses into a false sense of security. This is dangerous, as it could lead companies to make decisions that unfairly penalise content creators based on inaccurate information.

    The Freelance Writer’s Struggle: The Stark Reality of AI Detection

    A freelance writer, let’s call her Jane (her name has been changed for privacy), found that some of her 100% human-written articles failed Originality.ai’s test. The system seemed to penalise well-written content over content containing typos, suggesting that, to the algorithm, error-free text was a sign of AI generation.

    The emotional toll on Jane was immense. She lost clients who were only willing to pay for 100% human-written content, and her confidence was shaken. This is just one example of the impact Originality.ai’s false positives can have on innocent content creators. Unfortunately, Jane’s story is not an isolated incident. Many other content creators are facing similar challenges, which raises questions about the ethics and accuracy of Originality.ai’s detection methods.

    The false positive issue is not just about the numbers. It’s about the real people, like Jane, whose livelihoods and careers are being jeopardised by a system that’s supposed to protect them. Originality.ai regularly points to its 2% false positive rate, but even if that figure is accurate, it would amount to eighty-eight thousand false positives a day if every blog post published were checked on Originality.ai. Eighty-eight thousand people who feel just like Jane.
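That figure follows from simple arithmetic. The sketch below assumes the commonly cited industry estimate of roughly 4.4 million blog posts published per day; the daily post count is an assumption on my part, not a number from Originality.ai.

```python
# Back-of-envelope scale of a 2% false positive rate.
posts_per_day = 4_400_000   # assumed industry estimate of daily blog posts
false_positive_rate = 0.02  # Originality.ai's own claimed figure

false_positives_per_day = round(posts_per_day * false_positive_rate)
print(false_positives_per_day)  # → 88000
```

Even a detector that is right 98% of the time produces wrongful accusations at industrial scale once it is applied to everything.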

    The Injustice Exposed: Unreliable Detection and the Bible Test

    A recent study revealed that many text detection tools, including Originality.ai, struggle to accurately differentiate between AI-generated and human-generated text. The researchers demonstrated that the accuracy of these detectors dropped significantly when AI-generated content was paraphrased, bringing their performance down to that of a random predictor.

    Many Twitter users have also noted that passages from the Bible and essays predating ChatGPT were flagged as AI-written by these tools. This further undermines the credibility of Originality.ai and its claims about low false positive rates. It becomes increasingly clear that the technology is not as reliable as it’s made out to be, and content creators are paying the price.

    Fear Marketing: Creating a Problem and Offering the “Solution”

    The growing concerns surrounding Originality.ai’s false positives raise questions about the company’s marketing tactics. By creating a problem (the risk of AI-written content) and then offering its detection service as the “solution,” Originality.ai taps into the fears of businesses and content creators, convincing them they need the service to protect themselves.

    Originality.ai’s recent email marketing, which claims a correlation between 70% human-written content and higher Google rankings, is an example of this fear-based approach. It goes against Google’s own recent advice, which states that no method of content writing will be penalised. Although the accompanying blog post admits that the research doesn’t confirm Google is penalising AI content, the email still advises website owners to target a 70% human detection score with the help of Originality.ai.

    An Originality.ai blog post published on the 5th of May claimed it was worth continuing to use detection tools to stay on the “right side of Google’s guidance.” This is despite Google’s artificial intelligence guidance clearly stating that Google will continue to “focus on the quality of content, rather than how content is produced.” Ironically, using Originality.ai’s new SEO tool to manipulate search rankings could place you on the wrong side of Google’s guidance. The penalties for using it will be just as severe as the penalties you’ll get for using AI content. You’ll get…

    No penalties at all.

    There’s no reason to fear Google penalties if the content you post is high quality, whether you wrote it yourself, generated it with AI, or hired a team of writers. High-quality content will rank well; low-quality spam won’t. Just remember the ancient proverb: “Generate a great blog post today, and on the search engines you will stay” (OK, I made that one up, but the point still stands).

    Originality.ai’s unreliable detection methods are causing more harm than good. As more innocent content creators are falsely accused, the emotional and financial consequences can be devastating. The fear marketing tactics employed by Originality.ai not only perpetuate this problem but also undermine the trust between businesses and content creators.

    Demanding Fairness and Accuracy

    It’s time to hold Originality.ai accountable for its misleading claims and the harm it causes to content creators. We must demand fairness, transparency and accuracy in content detection tools. By raising awareness about this issue, we can support the content creators who are being wrongfully accused and ensure that their hard work and dedication are recognised.

    As content creators, businesses, and consumers, we all have a responsibility to push for better, more accurate tools and to challenge the fear marketing tactics that perpetuate this problem. Together, we can help protect the livelihoods of content creators and foster a more transparent, trustworthy environment for everyone involved.

    In conclusion, Originality.ai’s false positives present a genuine threat to content creators, casting doubt on the tool’s accuracy and fairness. While the company continues to promote its low false positive rate, recent research and real-life experiences show that the detection tool is far from reliable. As a result, the emotional and financial consequences for innocent content creators are severe. It’s time to demand better, more accurate content detection tools and to hold Originality.ai accountable for the harm it causes.
