
With the rise of generative AI over the past couple of years, it’s only natural that AI detection has become essential. AI-generated content is creating huge problems in academia, and it is also spreading across the web at large. There are even reports that a significant share of recent content on Wikipedia is written by AI.

Understanding False Positives:

False positives occur when an AI detection tool incorrectly identifies a text written entirely by a human as AI-generated. Given how these tools work, we’ll explore a few reasons why these unfortunate events occur.

Causes of False Positives in AI Detectors:

We’ve explored how AI detection works in a previous article; the root cause of false positives is that AI detection works by analyzing a text and returning a probability that it was written by a human or a machine. The more training an AI detector has, the more accurately it can discern human text from AI. If a text lacks burstiness and perplexity, and is extremely predictable, it may look suspicious to an AI detection tool. However, the best AI detectors, like Winston AI, have extensive training on these cases and have refined their models to avoid these unfortunate events.
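To make “burstiness” concrete, here is a toy sketch in Python that measures variation in sentence length, one crude proxy for how varied a piece of writing is. This is purely illustrative and not Winston AI’s actual model; the function name and the sentence-splitting heuristic are our own assumptions.

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words).

    Human writing tends to mix short and long sentences, giving a
    higher score; very uniform, predictable text scores near zero.
    Illustrative only -- real detectors use far richer signals.
    """
    # Crude sentence splitter: treat ., ! and ? as sentence boundaries.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform, repetitive text scores low; varied text scores higher.
flat = "The cat sat here. The dog sat here. The bird sat here."
varied = "Yes. The quick brown fox jumped over the extremely lazy dog today. Short one here."
print(burstiness(flat) < burstiness(varied))  # True
```

A text whose sentences all have nearly identical length and structure would score close to zero here, which is one small piece of why formulaic writing can look machine-generated.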

A key characteristic of every AI detection tool lies at the root of any false positive: the assessments are based on probabilities. In other words, AI detectors scan your text and return a probability that it is AI-generated or human-written. Unlike plagiarism detection tools, where there is concrete evidence in the form of a matching source, AI detection tools provide a probabilistic assessment.
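The probabilistic nature of these assessments can be sketched as follows. The thresholds and labels below are hypothetical, chosen only to illustrate why a score should be read cautiously rather than treated as proof:

```python
def interpret_score(p_ai: float) -> str:
    """Map a detector's AI probability to a cautious, human-readable label.

    The thresholds here are illustrative assumptions, not any
    vendor's actual values.
    """
    if not 0.0 <= p_ai <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p_ai >= 0.95:
        return "likely AI-generated: review manually before acting"
    if p_ai <= 0.05:
        return "likely human-written"
    return "inconclusive: not evidence of wrongdoing"

print(interpret_score(0.99))  # likely AI-generated: review manually before acting
print(interpret_score(0.50))  # inconclusive: not evidence of wrongdoing
```

The point of the wide “inconclusive” band is that a mid-range score is exactly that: a probability, not a verdict.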

For the aforementioned reasons, educators using AI detectors must exercise caution and avoid treating an assessment as absolute evidence of any wrongdoing.

Consequences of False Positives:

One of the reasons it has taken a while for AI detection tools to be adopted in schools is the quantity of false positives reported from Turnitin’s new AI detection feature. As a legacy plagiarism-detection platform, Turnitin had to launch an AI detection feature to help schools identify ChatGPT-generated content. However, there have been numerous reports of students being falsely accused of cheating. For a student who spent countless hours on their work, this is extremely frustrating and unacceptable.

For content publishers who spend a lot of time researching and writing, it’s extremely frustrating to have anyone flag your content as AI-generated.

Strategies to Minimize False Positives:

The obvious “tip” to avoid triggering AI detectors is to avoid using any generative AI tools in your writing.

In many reported cases of false positives, AI was in fact used to assist the writer in the first place. If you use a tool such as Grammarly to revise sentences or restructure paragraphs, know that these tools are AI-powered and may trigger AI detectors.

Keep your content as interesting and insightful as possible, and most importantly, avoid “fluffing” your text with words that do not contribute to its objective.

Conclusion

Generative AI models and AI detection tools will likely play a game of cat and mouse for years to come. A powerful AI detector like Winston AI must be as good at flagging AI content as it is at avoiding false positives.

Keep your writing original and insightful, and avoid non-essential words and fluff; you’ll greatly reduce your chances of being unfairly flagged by AI detection tools.

Thierry Lavergne

Co-Founder and Chief Technology Officer of Winston AI. With a career spanning over 15 years in software development, I specialize in Artificial Intelligence and deep learning. At Winston AI, I lead the technological vision, focusing on developing innovative AI detection solutions. My prior experience includes building software solutions for businesses of all sizes, and I am passionate about pushing the boundaries of AI technology. I love to write about everything related to AI and technology.