Is Generative AI Telling The Truth?
10 March 2025

Vodacom


Most of the time, yes. But not always. Here’s how to tell whether it’s giving you facts or fiction.

Generative AI is everywhere. A recent EY survey found that 75% of people now use it in the workplace (up from 22% in 2023). When Japanese author Rie Kudan won Japan’s most prestigious literary award last year, she admitted that 5% of her novel (about GenAI, ironically) was actually written by GenAI. In 2023, Chinese professor Shen Yang won a national award for a novel written by GenAI. (It took the AI three hours. It took Shen Yang 66 prompts.) 

You literally can’t make this stuff up… except, you can. AI is revolutionary and fast, but it’s not very trustworthy.  

Can You Trust GenAI? 

Don’t get us wrong: GenAI doesn’t lie. But it does sometimes talk like the older kid on the school playground who cites shady sources and embellishes their stories to the point where you’re not sure whether you can believe them or not.  

Techies call this “hallucination”. We asked Microsoft’s Copilot AI what that was, and it said that: “In the context of AI, a ‘hallucination’ refers to instances where an artificial intelligence model generates outputs that are incorrect, nonsensical, or unrelated to the input it received. These outputs can sometimes seem convincing or realistic but are actually based on no factual data or logical reasoning.” 

So… yeah. A lot like the kid on the playground. And it’s a lot more common than you’d think. In 2024, a French study found that ChatGPT-4 had a hallucination rate of 28.6% on a set of basic tests. That’s on par with other large language models, and it’s scarily high. After all, would you trust someone who made up just over a quarter of everything they said?

How AI Hallucination Happens 

To understand why this happens, you have to understand how GenAI works. We asked ChatGPT what makes AI hallucinate, and it listed seven reasons. Given that it talks rubbish 28.6% of the time, we picked the first one: “AI models like ChatGPT generate text based on probabilities learned from vast datasets. They predict the most likely next word but don’t truly ‘understand’ facts. If the model lacks enough accurate data on a topic, it may fill in the gaps with plausible-sounding but incorrect information.” 

While humans have an innate sense of scepticism, AI doesn’t. It doesn’t think like we do. It thinks like a machine. It can’t grasp context or check facts like humans can; all it does is generate responses based on patterns it picked up in the data it was trained on. 
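
If you’re curious what “predicting the most likely next word” looks like in practice, here’s a deliberately tiny sketch in Python. It’s nothing like a real model (real ones learn billions of patterns, not three), but it shows the key point: when the model runs out of data, it still has to say something, and that something merely sounds plausible.

import random

# Toy "language model": for each word, the words that followed it in our
# tiny training text, and how often they did. Real models learn billions
# of patterns like these; this is purely an illustration.
training_patterns = {
    "the":   {"cat": 5, "dog": 3, "award": 1},
    "cat":   {"sat": 4, "slept": 2},
    "award": {"went": 1},
}

def next_word(word):
    """Pick the next word based on how often each option followed 'word'."""
    options = training_patterns.get(word)
    if not options:
        # No data for this word, but the model still has to answer, so it
        # picks something that merely sounds plausible: a hallucination.
        return random.choice(["probably", "famously", "definitely"])
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))

Run it a few times and it will happily finish sentences it has no real data for. That, in miniature, is what a hallucination is.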

How To Fact-Check AI Content 

That’s why it’s so important to get a human fact-checker to review anything GenAI generates for you. You can do this in a few ways. Start by checking against credible sources like academic databases, and if you see a statistic mentioned (like that 28.6% number), find the original report. Always ask your AI tool to include sources or references when it generates content.  

Read the generated text carefully. Check for inconsistencies and contradictions, and remember that some of the data sets your AI was trained on may be outdated. Double-check facts against up-to-date sources.  
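
If you churn out a lot of AI content, you can even automate the first pass of that homework. Below is a rough Python sketch (our own toy helper, not any real fact-checking tool) that simply flags every sentence containing a number, so a human knows exactly which claims to trace back to a primary source.

import re

# A rough "flag it for a human" pass over AI-generated text. It verifies
# nothing itself; it just pulls out the sentences that most need checking
# (figures, dates, percentages) so a person can trace each one back to an
# original, up-to-date source.
def flag_claims_for_review(text):
    claims = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if re.search(r"\d", sentence):  # contains a figure, date or stat
            claims.append(sentence.strip())
    return claims

ai_text = ("GenAI adoption at work rose to 75% in 2024. "
           "It writes flawless novels. "
           "One study found a 28.6% hallucination rate.")

for claim in flag_claims_for_review(ai_text):
    print("Check against a primary source:", claim)

It won’t catch made-up quotes or invented references, but it’s a start. The real check is still a person with the original sources open.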

And finally… Do your homework. If you copy the notes of that loudmouth playground kid, you shouldn’t be surprised if you get 28.6% for your assignment.  

For more stories about scams and ways to avoid them, visit our fraud overview. 

