
A.I tools like ChatGPT seem to be everywhere, but what happens when you use them to do your work for you and don't communicate that to your boss?

Let’s find out.

What happened at Sports Illustrated?

According to an investigation by the tech news website Futurism, Sports Illustrated used A.I tools to create content for its website.

They discovered this by reading through the content and finding odd phrasing like "people with strong financial status are revered and given special advantages everywhere around the world."

This is typical of A.I content: generic phrases that don't say anything insightful or noteworthy.

It turns out the site was also publishing under fake A.I author profiles, complete with headshots taken from image generator websites. Sports Illustrated initially denied that it was using A.I to help with its publishing.

They have since fired their CEO Ross Levinsohn.

They are also conducting an internal investigation into these A.I generated articles, which have since been deleted.

Is A.I Generated Content taking over Online Publishing?

A.I is certainly being used by online publishers big and small since the launch of ChatGPT in November 2022.

Big online publishers like CNET are using A.I as part of their workflow:

"If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources."

CNET A.I. Policy

They also add that all reviews will feature hands-on product testing by their writers.

From my experience, small publishers are using A.I and many have let go of their freelance writers.

They do this to compete with bigger media companies. Here is a case study of a small publisher getting huge traction using an A.I tool.

Why it’s important to distinguish between A.I and human-written content

The fact that this caused a scandal is quite interesting to me, as many people can't really tell whether what they are reading was created by a tool like ChatGPT or written by a real writer.

A.I written content is not at the point where it can replace expert journalists. One of the main problems with A.I content is that the tools often hallucinate.

What are A.I Hallucinations?

A.I hallucinations occur when an A.I tool generates something that is simply untrue. The tools do this because they were designed primarily to create fluent, cohesive text by predicting the next plausible word.

They fundamentally do not “understand” what they are writing about and can therefore create sentences that don’t make sense and can be factually inaccurate.   
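To make the "predicting the next plausible word" idea concrete, here is a toy sketch. It is deliberately simplified (real LLMs use neural networks trained on billions of words, not a hand-written lookup table), and the word table is invented for illustration, but it shows the core point: a model that only picks statistically likely continuations produces fluent text with no regard for whether the result is true.

```python
import random

# Toy next-word table (invented for illustration; NOT how production
# LLMs are built). Each word maps to plausible continuations.
bigrams = {
    "the":     ["stock", "company"],
    "stock":   ["price"],
    "price":   ["rose"],
    "rose":    ["sharply"],
    "company": ["reported"],
}

def generate(start, max_words=5):
    """Greedily chain plausible next words until none are known."""
    words = [start]
    while len(words) < max_words and words[-1] in bigrams:
        # Plausibility is the only criterion -- truth never enters in.
        words.append(random.choice(bigrams[words[-1]]))
    return " ".join(words)

print(generate("the"))  # e.g. "the stock price rose sharply"
```

The output reads like a real sentence, yet nothing checked whether any stock price actually rose, which is exactly why fluent A.I text can still be factually wrong.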

Do we need A.I disclosures?

In my opinion, websites that use A.I should disclose that they are using it. One of the main reasons is the tendency to hallucinate discussed above.

There are certain situations where A.I can be used to help writers but we need to be vigilant in providing accurate content online.

How do you tell if what you are reading online was written with A.I?

The fact that many writers and students are using A.I to produce their work has led to the need for A.I content detectors.

A.I detectors analyze statistical patterns in text, such as how predictable and uniform the wording is, to estimate whether it was written by A.I or by a human.

Our A.I content detector Winston AI has a 99.98% accuracy rate at detecting A.I content and you can test it out for yourself here.   
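To give a feel for the kind of statistical signal a detector might look at, here is a toy heuristic. This is not Winston AI's actual method (which is proprietary) and the example sentences are invented; it illustrates one commonly discussed signal, sometimes called "burstiness": human writing tends to vary sentence length more than machine-generated text does.

```python
import statistics

def burstiness(text):
    """Standard deviation of sentence length, in words.

    Low values suggest machine-like uniformity; higher values suggest
    the variation typical of human writing. Real detectors combine
    many such signals with trained models rather than one heuristic.
    """
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The committee deliberated for hours before reaching "
          "any decision at all. Then silence.")
print(burstiness(uniform) < burstiness(varied))  # → True
```

One signal alone is easy to fool, which is why a single heuristic like this is nowhere near the accuracy of a full detector.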

The wider implications of an internet with mostly A.I content

A.I is undoubtedly having a huge effect on the online world. 

First, let’s start with the downsides and finish on a positive note.

What’s wrong with a little help from my A.I tool?

Individually, there is nothing wrong with using a tool to help you in your daily life. The problem is when bad actors get their hands on these tools.

Online scams are becoming more prevalent, especially with the rise of voice cloning and deep fakes. 

A.I content is also starting to affect elections as more and more fake content is posted on social media.

How A.I can help almost anybody

Generative A.I has so many use cases that it would be impossible to list them all here. According to one survey, 57% of workers have tried using ChatGPT to help with their work.

Learning how to use A.I at work is very important. Here are some tasks that A.I can help you with:

  • Writing emails
  • Brainstorming 
  • Design
  • Research
  • Content Creation

As you can see, using A.I can be very beneficial, but you might want to tell your boss first that you are using it, unless you want to end up in a story like the one at Sports Illustrated.

Conor Monaghan

Conor is an AI expert and English teacher. He spends his time researching and writing about AI tools to help educators and publishers become more productive.