Verifying Truth in the Age of AI
Finding the truth can be tricky these days.
“Trust, but verify” is a Russian proverb, made famous by Ronald Reagan when talking about nuclear disarmament. I find that an interesting way to raise the stakes on the conversation about truth in content, especially for those of us in the content industry.
Why would you trust me when I make a claim about what a past U.S. President did or didn’t say? By default, you shouldn’t. However, take a look at the bottom of this post to see all the source references. Go ahead, fact-check me!
Once upon a time, folks didn’t trust what was printed in a book made on a press, even if it was the Bible. If it wasn’t written by a scribe or a monk, how were we to know it wasn’t full of lies meant to incite the masses to revolt? Which, incidentally, they did.
Gutenberg was the fifteenth-century version of our Sam Altman from OpenAI. Or maybe he was the Elon Musk, depending on your perspective. The folks over at BigThink.com drew some interesting parallels between printed books and AI today. Their key takeaways:
- New technologies have always scared people. When the printing press was invented, Scribes’ Guilds destroyed the machines and chased book merchants out of town.
- Today, it’s hard to view the printing press as a disastrous blot on history, but that’s exactly how many people perceived it at the time.
- Just like today with AI, people worried about job security and the spread of disinformation. Everything ended up alright. Society changes, but it doesn’t break down.
The fears of those 15th-century workers sound an awful lot like our own. We aren’t sure where we’ll end up in terms of the scope of AI’s disruption, but there appears to be precedent; this has happened before.
Let’s Get Back to Gutenberg for a Minute, Though
Considering the effort required to write, edit, publish, and distribute a physical book, we of the modern era hold the printed medium in a different regard. A textbook, a classic novel, a treatise: these things are near-sacred in terms of their intrinsic value. We might consider them a source of truth simply due to their medium and lineage. Furthermore, we probably assume that someone has verified the contents and, as such, that the text within the book can itself be used as a source of truth.
The first versions of OpenAI’s ChatGPT were, quite publicly, supposedly trained on BOOKS. Why do you think they did that? Turns out many of those books were sci-fi and fantasy, not necessarily just the classic tomes and textbooks we may have imagined to be fountains of knowledge and intelligence.
I majored in English at the University of Guelph (go, Gryphons!) here in Ontario, Canada. One of the courses I took introduced me to the concept of discourse, especially as it relates to institutions, the media, and whose ideas get to be promoted as the definitive truth.
My school years were back in the late 90s. Newspapers were still very much a thing. I would never have considered that the Washington Post, the Wall Street Journal, or the Los Angeles Times would ever prefer to print the opinions of their owners rather than the unbiased “truth.” Even the New York Times has fallen from grace, in my opinion.
Part of our discourse with the media is that we presume a level of verification has gone into the declaration of something as fact. Our trust erodes when we find evidence to the contrary, or when our own beliefs run counter to what is being sold to us.
Enter the internet age, when anyone could publish something and change it later. How could we possibly be expected to discern truth from something online?
Maybe Google would save us! After all, they promised to never do evil, right? As it turns out, Google has become the most sought-after and respected pathway to truth that we have. While not the source of truth itself, search is what we’ve leaned on heavily to distill meaning from what we ask it to find. Our goal, though, has most often been to get to the source of truth: the magical link at the end of the rainbow.
The source of truth and the bearer of that truth can easily become confused. As my colleague Mauro Flammini reminded me:
Google can lead people to wrong information. When people say “I Googled it,” they are explaining that Google told them something when, in reality, it’s the website Google showed them that’s presenting the info.
Over time, Google has evolved to do more and more of the work for us, with inline descriptions and answers that we presume are derived from what we were actually looking for in the first place.
Are those answers and inline snippets reliable? Almost before we’ve been able to answer that for ourselves, we have a whole new paradigm shift to contend with.
Now we have generative AI to answer our questions. How much trust can we put into these tools? As an early adopter of lots of technology, including the internet itself back in the 90s, and as the CTO of a company whose sole purpose is to provide content for our customers’ digital properties, I have questions.
It’s Easy to be a Naysayer; One is Often Rewarded for Being So
Generative AI often provides wrong answers. Elizabeth Lopatto of The Verge, an online publication that often epitomizes the early adopter/tech enthusiast mindset, writes of a now-infamous example where asking about presidential pardons of relatives or in-laws returned incorrect results. Referred to as “hallucinations,” this is where generative AI and the Large Language Models (LLMs) that power it start to fall down.
I often rely on the cogent analysis of John Gruber, writing on his own blog, Daring Fireball:
But all of the arguments being made today against using generative AI to answer questions sound exactly like the arguments against citing web pages as sources in the 1990s. The argument then was basically ‘Anyone can publish anything on the web, and even if a web page is accurate today, it can be changed at any time’ — which was true then and remains true today.
Gruber helps return us to the understanding that everything is an iteration on a timeline. Just as folks back in the 1500s distrusted printed books, so too did many of us distrust the web pages of our fledgling internet. He reminds us that the revolutionary technology shifts that disrupt our patterns of information gathering are followed directly by evolutions of that technology that make us more comfortable with it.
Has technology earned our trust? Not even a little bit. Rather, I believe we have accepted each generation of info-tech as its convenience has lured us into its use. Certainly, when we chat with a bot nowadays, we are much more likely to expect meaningful responses, if not ones that are 100% accurate. We have adapted to expect near-truthiness.
When you perform a Google search nowadays, you may be presented with an AI-generated response. Here’s what I learned when I asked “does Google use AI to answer search queries”:
If Google can show you an AI-generated answer, it will attempt to do so. Keep in mind that Google has more than 90% of the search market. It’s a good thing we all trust Google to tell us the truth; otherwise, we ought to be concerned.
When Going to Google Takes Too Long
The thing is, though, many people don’t even make it to a Google search. Short Instagram, TikTok, and YouTube videos from influencers are probably the greatest source of knowledge for Millennials and Gen Z. With very little fact checking, and rarely a link to source material, the “social media” platforms and their algorithms now control much of how knowledge is circulated in the modern world.
I would argue further that the creators of the social media content economy are no more in control of the effects of the knowledge they produce than those of us who consume it. In a 1997 interview, David Bowie proclaimed:
I think it’s terribly dangerous for an artist to fulfill other people’s expectations. They generally produce their worst work when they do that.
I would argue that many content creators today are breaking Bowie’s rule. The content we consume on social media appears designed specifically to play to our expectations, our desires and, more insidiously, our fears.
Whatever their intent, today’s creators are not in control of their content. Awareness of their content, the order and frequency with which it appears in our feeds, is controlled by the platform. It’s no coincidence that less and less of that algorithm is maintained by humans, but rather by… you guessed it: AI.
What I find fascinating about our enthrallment with social media is that we are aware of our addiction to it. According to market intelligence firm S&P Global in 2023:
- 72% of Gen Z felt that they consume too much social media.
- For millennials, the figure was 54%.
- For Gen X, it was 41%.
All significant signs that doomscrolling is pushing us beyond our comfort zones.
I believe our trust in social media videos is linked to our ancient oral traditions. We seem to have some genetic predisposition when it comes to trusting what people say. Our well-documented dopamine response to short videos hooks into our willingness to believe the spoken word.
In a sense, social media has circumvented all our skepticism about the written word. In one fell swoop, folks who were scoffing at WebMD are forwarding videos from an influencer to their friends on how to cure a rash. People who call the New York Times liars will choose whom to vote for based on a random person from who-knows-where. No fact checking necessary.
Sources
- Wikipedia – Trust, but verify
- Maria Popova – The Marginalian – David Bowie on Creativity and His Advice to Artists
- Elizabeth Lopatto – The Verge – Stop Using Generative AI as a Search Engine
- John Gruber – Daring Fireball – Don’t Throw the Baby Out With the Generative AI Bathwater
- Adam Rogers – Business Insider – ChatGPT’s secret reading list
- Keith Nissen – S&P Global – For US Gen Z adults and millennials, social media is a way of life
About the Author
Joel is CTO at Agility. His first job, though, is as a father to two amazing humans.
Joining Agility in 2005, he has over 20 years of experience in software development and product management. He embraced cloud technology as a groundbreaking concept over a decade ago, and he continues to help customers adopt new technology with hybrid frameworks and the Jamstack. He holds a degree from the University of Guelph in English and Computer Science. He's led Agility CMS to many awards and accolades during his tenure, such as being named the Best Cloud CMS by CMS Critic, a leader for Headless CMS on G2.com, and a leader in Customer Experience on Gartner Peer Insights.
As CTO, Joel oversees the Product team and works closely with the Growth and Customer Success teams. When he's not kicking butt with Agility, Joel coaches high-school football and directs musical theatre.