On Generative AI: Denying the Necessary Limits of Knowledge
To save the world, AI must solve mankind’s central challenge: the inability to distinguish fact from fiction. But today’s most prominent applications - from Google Search to ChatGPT - make the distinction between truth and falsehood even harder to pinpoint. Are we ready for the consequences?
The Limits of Knowledge
In the opening of the Abrahamic scriptures, the creation myth of Adam and Eve is a warning: not all can be known. Eve, lured by the serpent to eat the forbidden fruit from the Tree of Knowledge, is punished by God, who banishes humanity from the Garden of Eden into a world of perpetual suffering.
In the philosophies of Socrates, Plato, and Aristotle is a warning: knowledge means accepting the unknown. In the psychological theses of Wundt, Jung, and Freud is a warning: harmony means befriending uncertainty. In the quantum theories of Heisenberg, Schrödinger, and Hardy is a warning: knowledge has fundamental limits.
From folklore to formula, one warning has been blaring for the past three millennia: there will always be limits to knowledge. Only now we’re actively overriding the world’s wisdom. From ChatGPT to Google Search to Siri, the AI industry is built on the premise that, this time, we can finally know everything.
What we can’t see is that we’re merely reinventing the serpent.
Rejecting Discernment
Humanity’s greatest dilemma is the inability to distinguish fact from fiction. The Armenian genocide; the Holocaust; the Great Famine; the Nigerian civil war; September 11th: all derived from the propagation of falsehoods.
Our minds can be made to believe anything through repetition. Psychologists call this the illusory truth effect: when overloaded, the brain confuses repetition with truth. Bombarded by over 11 million pieces of information at any one moment, the brain can consciously process only around 40. To cope, we believe the messages that are repeated most.
That’s why, despite the monumental advances made over the past century, 50% of the US population still cannot distinguish truth from falsehood. We see this play out in the US criminal justice system, where almost 70% of all victims incorrectly identify their perpetrator in a lineup.
It’s also why the respected scientific journals Science and Nature repeatedly make big mistakes. Whether it was claiming that HIV could not be contracted by women, or accepting the errors of the Newtonian paradigm, almost two-thirds of their research papers are later disproven.
And yet, we still cling to ideas that confidently claim certainty. Dig deeper, and the root of our pain is simple: it’s not that we can’t know everything. It’s that we still think we can.
Wanting to Know Everything
AI’s popularity grew when futurists claimed that the technology would outsmart humanity. The path was simple: process unimaginable amounts of data, and understanding would increase. To this end, AI applications were designed to be exceptionally data-hungry. Of the world’s 64 zettabytes of data, 90% was created in the past two years alone and fed straight into AI systems.
Only it didn’t work.
Remember, AI is aggregate, not accurate.
Applications optimize for statistical likelihood - the next probable word, the next likely click - not for what’s right or wrong. Take ChatGPT, a text AI platform that generates human-like text from patterns in online data. It filters on repetition, aggregating the most frequently repeated content into answers.
Systems codify the illusory truth effect.
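To make that concrete, here’s a deliberately naive sketch of frequency-driven completion in Python. It is an illustration only, not how ChatGPT actually works - real language models learn probability distributions over tokens rather than counting strings - but the optimization target is the same: likelihood, not truth. The corpus and the most_likely_completion function are hypothetical.

```python
from collections import Counter

# Toy corpus in which a falsehood is repeated more often than the truth.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]

def most_likely_completion(prefix: str, lines: list[str]) -> str:
    """Return the most frequent continuation of `prefix` in the corpus."""
    continuations = Counter(
        line[len(prefix):].strip()
        for line in lines
        if line.startswith(prefix)
    )
    completion, _count = continuations.most_common(1)[0]
    return completion

# Repetition wins: the answer reflects whatever is said most often.
print(most_likely_completion("the earth is", corpus))  # -> flat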
Humans might be notoriously bad at recognizing objective reality, but technology is worse. Whether the platform is TikTok, LinkedIn, YouTube, or Snapchat - it doesn’t matter.
AI doesn’t know truth.
Standing for Nothing
Rather than publicly admit this limitation, tech evangelists found a workaround: classify all content as equal.
On Facebook, statements such as “the Holocaust is a hoax” are treated as equal to “Covid-19 can kill.” On Twitter, statements such as “Mexicans are rapists” are weighted the same as “Shakespeare was a playwright.” Both platforms might claim to endorse free speech, but they exploit the concept of freedom to excuse subpar services.
Unknowingly, AI development has become attached to the propagation of moral relativism - the philosophy that all opinions, whether true or false, are equal. This is a catastrophic combination. It’s this unsaid dependency that rapidly ushers in the post-truth world which, when stretched to its logical conclusion, ends with the destruction of democracy.
Already, in the midst of online misinformation chaos, public media reports that 33% of millennial Americans believe the earth is flat, and 25% believe the Holocaust was a hoax or exaggerated. Similarly, 11% of millennial Brits believe the planet is controlled by reptilian space aliens. By hiding a technical limitation behind a logical fallacy, the pursuit of AI erases the persuasive powers of science itself.
Slowly, reason dissolves into the void. And with it, our protections.
Falling for Everything
Combine huge amounts of unverifiable data with the abolition of truth, and you create risk.
Knowing that platforms have no truth filters, hostile groups hire thousands of agents to exploit the oversight. These agents create and spread propaganda online, paying to target the most vulnerable. Invisible to the naked eye, bad actors scan for targets and, using cookies, adapt their strategies across multiple platforms to holistically manipulate and confuse.
By now, almost a third of all online activity is conducted by bad bots, pushing disproportionate amounts of content. On major platforms the figure is worse - just 10% of profiles create 80% of Twitter’s overall content. Rather than address these technical challenges, we disarm ourselves by getting lost in public debates over the definition of freedom.
But, this is by design. This is, in itself, the war strategy. Nuanced enough to fly under the radar of international war definitions and versatile enough to outsmart each platform company’s terms of use, the chasm AI has created is already being filled.
Power is shifting hands.
Puppeteering Warfare
In the era of direct-to-civilian omnichannel warfare, the goal is to attach enough digital strings to take control of an individual’s thoughts and behaviors.
The softest souls are pushed to kill themselves, and the more resilient to kill others.
This is what happened to British teenager Molly Russell. After she searched for the term “depression,” Instagram’s and Pinterest’s algorithms recommended increasingly extreme pro-suicide content to maximize clicks. Caught in a vicious cycle of algorithmic amplification, by the time of her death she was being served mostly content instructing her to kill herself.
They caught her.
This is what happened to the 15+ pre-teens who died attempting TikTok’s viral “choking challenge,” which encouraged children to choke themselves until they blacked out; to the 10+ children who died from YouTube’s “Tide Pod challenge”; and to the 800+ adults who died from Covid-19-related fake warnings to boycott vaccines.
They caught them.
This is what happened to the Indian villagers who were manipulated into lynching 20+ innocent people over fears of child kidnapping; to the Kanye West fans who conducted antisemitic attacks after being directed to go “death con 3 on Jewish people”; and to the young boys who started discriminating against girls after following Andrew Tate’s endorsement of rape.
They became them.
Generative AI is the final push needed to declare a new world order. It’s checkmate. While the markets marvel and encourage the public to trust ChatGPT, GAUDI, or Bard, advertising models and cookies turn these products into mass-scale lethal autonomous weapons, offering hostile groups the power to kill millions.
We innovated for the wrong team.
Settling on Something
There’s still time to choose a different path.
All we have to do is hold our hands up and accept uncertainty.
Because what we do know is that truth - even scientific truth - is contextual. It’s never final. Take subatomic physics: from molecules to atoms to atomic nuclei and electrons, what we believe to be the smallest particle constantly evolves, as each discovery opens the door to something even more bizarre.
Progress is built block by block. We identify something as true, allow for open-minded discussion, then build on those foundations. Truth is important; it’s a stepping stone. When we humble ourselves to the limitations of knowledge, a simple but revolutionary method for processing information and closing the void is revealed.
Be honest.
It’s time we classify data as “known,” “not yet known,” or “cannot be known.”
To do this, we must build AI a backbone: an agreed list of truths, norms, beliefs, and assumptions to be used as a control. For example: humans are inherently equal, the earth is round, voter decisions are private, and direct marketing to children is prohibited.
Thankfully, civil society has already compiled this open-source list in the form of established laws, policies, treaties, and scientific principles. It requires no action from technologists other than aggregation and autonomous labeling. A sketch of what that labeling could look like follows.
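As a thought experiment, here is a minimal sketch of such labeling in Python. Everything in it is hypothetical - the EpistemicStatus enum, the BACKBONE excerpt, and the label function are illustrative names, and a real system would need semantic matching against full legal and scientific corpora rather than exact string lookup.

```python
from enum import Enum

class EpistemicStatus(Enum):
    KNOWN = "known"                      # grounded in the agreed backbone
    NOT_YET_KNOWN = "not yet known"      # open question; evidence may settle it
    CANNOT_BE_KNOWN = "cannot be known"  # provably beyond the limits of knowledge

# Hypothetical backbone excerpt: truths drawn from established laws,
# treaties, and scientific principles, plus questions physics tells us
# are unanswerable.
BACKBONE = {
    "humans are inherently equal": EpistemicStatus.KNOWN,
    "the earth is round": EpistemicStatus.KNOWN,
    "voter decisions are private": EpistemicStatus.KNOWN,
    "a particle's exact position and momentum": EpistemicStatus.CANNOT_BE_KNOWN,
}

def label(claim: str) -> EpistemicStatus:
    """Label a claim against the backbone; default to 'not yet known'."""
    return BACKBONE.get(claim.strip().lower(), EpistemicStatus.NOT_YET_KNOWN)

print(label("The earth is round").value)       # known
print(label("There is life on Europa").value)  # not yet known
```

The point is not the lookup itself but the default: anything the backbone cannot ground is surfaced as uncertain instead of being repeated as fact.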
Acknowledging limitation solves more than we might realize. It enables regulators to test outputs and see which algorithms were hostile or relying on lazy proxies, without opening the black box or accessing trade secrets. And it would mitigate algorithmic bias, because it’s impossible to teach an algorithm not to discriminate by race or gender without telling it that humans have inherent worth.
It’s a simple solution, but the pursuit of intelligence is only difficult if you want to sustain a lie.
It’s time to let go of the fantasy.