
Generative AI is gradually affecting the content on Wikipedia.

If you’ve ever used generative AI tools like OpenAI’s ChatGPT to generate information, you’ll have noticed that they write believable, human-like text. The problem is that they are prone to include erroneous information.

Now, Wikipedia, the world’s encyclopedia that provides reliably sourced information to hundreds of millions of people, is using these same generative AI tools to create, summarize, and update articles.

In this article, we explain how generative AI is affecting Wikipedia.


What is Generative AI?

Generative artificial intelligence is a class of AI capable of creating new content, designs, or ideas using machine learning algorithms. The process begins when you input a prompt, which could be text, an image, a video, a design, or any other input the AI can analyze. The AI then generates new content in response to the prompt.
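To make the prompt-in, text-out loop concrete, here is a deliberately tiny toy sketch: a bigram Markov chain that “learns” word-to-word statistics from a corpus and then extends a prompt. Real generative AI models are vastly larger neural networks trained on far more data, but the overall shape is the same. All names and the corpus below are invented for illustration only.

```python
import random

def train_bigrams(corpus: str) -> dict:
    """Build a table mapping each word to the words seen following it."""
    words = corpus.split()
    table: dict = {}
    for current, nxt in zip(words, words[1:]):
        table.setdefault(current, []).append(nxt)
    return table

def generate(table: dict, prompt: str, length: int = 8, seed: int = 0) -> str:
    """Continue the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # seeded so the toy output is reproducible
    out = prompt.split()
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:  # dead end: no known continuation for this word
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
table = train_bigrams(corpus)
print(generate(table, "the cat"))
```

Note that the toy model can only recombine words it has already seen; large language models generalize far beyond their training text, which is precisely what makes their fluent but unverified output risky.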

Many Wikipedia writers use generative AI tools like OpenAI’s ChatGPT for their articles. Unfortunately, these AI tools tend to “hallucinate” and produce fake citations, leading to misinformation.

Jimmy “Jimbo” Wales, founder of the collaborative encyclopedia and the Wikimedia non-profit organization, also agreed that information provided by generative AI cannot be entirely relied on. He gave an example from a conversation he had with ChatGPT.

Jimmy asked the bot whether an airplane had ever crashed into the Empire State Building. “No, an airplane did not crash into the Empire State Building,” the bot replied. ChatGPT then went on to explain how a B-25 bomber crashed into the Empire State Building, directly contradicting its earlier reply.

Generative AI and Wikipedia

For over 20 years, Wikipedia has relied on content created and edited by volunteers worldwide. Today, the site is available in 334 languages and provides information on almost every subject.

But recently, there have been growing concerns over the spread of AI-generated articles and summaries on the site. These text summaries often look accurate but, on closer inspection, can turn out to be completely false.

Beyond concerns about inaccuracy, Wikipedians have also found that generative AI cites sources and academic papers that don’t exist.

The risk for Wikipedia is that contributors gradually lower the site’s quality every time they publish content that hasn’t been fact-checked.

Effects of Generative AI on Wikipedia

  1. Misinformation and Disinformation

Millions of people view Wikipedia daily, seeking reliable information on topics that affect their lives and shape their decisions. However, AI-generated content published on the platform makes it harder to tell whether realistic-sounding text has been fact-checked. The implication is that Wikipedia loses credibility with readers once they discover that its content misleads.

  2. Fake Citations

Generative AI tools like OpenAI’s ChatGPT often scrape data from many sources but fail to cite them. This can promote new kinds of plagiarism that ignore the rights of the original authors. And since citations have always been crucial for researchers, the result would be academic work built on wrong citations.

  3. Lack of Empathy

Generative AI is simply a machine. It is not capable of human feelings such as empathy, and this shows in the content it writes, which tends to be bland and emotionless. This gives editors double work, as they must revise extensively to make AI-written articles and summaries suit the site’s tone.

  4. Problems for Future Models

Many AI companies use Wikipedia’s openly licensed data as a training source for their data-hungry AI models. If the content published on Wikipedia is itself AI-generated, future models will have no option but to rely on it, misinformation and inaccuracies included.

Reports show that the Wikimedia Foundation, host of the free encyclopedia, is looking into building tools to help volunteers easily detect bot-generated content, though even with such tools editors may still miss some of it.


Though some speculate that generative AI might be the end of Wikipedia, that assumption seems exaggerated.

However, if AI-generated content keeps being published on Wikipedia at the current rate, the site could slowly lose credibility among its users globally.

Thierry Lavergne

Co-Founder and Chief Technology Officer of Winston AI. With a career spanning over 15 years in software development, I specialize in Artificial Intelligence and deep learning. At Winston AI, I lead the technological vision, focusing on developing innovative AI detection solutions. My prior experience includes building software solutions for businesses of all sizes, and I am passionate about pushing the boundaries of AI technology. I love to write about everything related to AI and technology.