Sparks
Little Ideas With Big Potential
Fixing AI’s Broken Paradigm
We use AI to build powerful connectivity tools like Facebook, TikTok, and WhatsApp, yet loneliness increases. We invest gigantic sums in health tech, yet life expectancy declines and demand for euthanasia rises. We leverage AI to increase worker productivity, yet overall happiness declines.
What we can’t see is that we’ve built AI on a destructive paradigm.
Overlooking our nature, we blindly design at the level of human action. But before action comes thought, and before thought comes emotion. As we work in overdrive to pretend that we’re rational beings, the AI systems we build call our bluff.
Systems unintentionally learn that the worse they make humans feel, the faster they optimize. On Instagram, self-actualization won’t encourage you to spend more time clicking ads; only insecurity will. Algorithms are silently crushing self-esteem and pushing more people into negative mental states. And without intervention, this will only worsen.
To build AI that can help humanity prosper, we must shift the AI paradigm to human flourishing by asking a new research question:
How can we use AI to move humans from fear to fulfillment?
Superintelligence is a Labradoodle
We already have the answers AI is searching for.
We already know how to end hunger, inequality, climate change, obesity, illiteracy, and even terrorism. And we already have the resources required. We have enough food to end hunger, enough money to end poverty, enough housing to end homelessness, and enough tree seeds and land to end climate change.
Our systems fail because they are not built on love.
Superintelligence, if built, will recognize that humans are the solution. It will see that the most efficient optimization route for prosperity is to remind humans how to love. That’s why it makes more sense to imagine superintelligence like a friendly dog. One that catalyzes the love already in our hearts, and reminds us of our kind and considerate nature.
We love dogs because they reveal the joy of unconditional love, and in gratitude we anthropomorphize them into man’s best friend. Could dog-like, not god-like, AI design principles be the key to creating AI that finally resonates with humans?
Science, the Study of Love
Our curious universe is frustratingly incomprehensible. Whether it’s black holes or white holes, massive stars or spiral galaxies, quantum entanglement or tunneling, each scientific discovery simply opens the door to a far stranger conundrum.
Scientists once marveled at this majestic ambiguity.
Revered icons like Einstein, Tesla, Lovelace, Edison, Pasteur, Heisenberg, Schrödinger, and Newton worshiped the splendor of the universe. Their work honored a greater consciousness, exploring their belief that one unified force of intelligence must exist behind it all.
Somehow, this got lost. Today, we think much smaller. We revere the scientists themselves. We worship innovators’ breakthroughs as divine, naming companies, office spaces, and think tanks after them, never stopping to question:
Do we want to be like Tesla, or do we want to think like Tesla? Because if it’s the latter, we cannot discount his core ambition - to demonstrate love.
“It’s not the love you make, it’s the love you give” - Nikola Tesla.
Tomorrow is Today
Today’s futuristic predictions boast wild promises, from gene editing to driverless cars to everyday robot helpers. We could live on Mars, in nomadic floating cities, or in converted shipping containers. We might eat bugs, lab-grown meats, or even just pills.
Nothing is too wild a concept anymore.
Some ideas will prove impossible, for now, and others will transform into the mundane. But some will utterly revolutionize our lives. Investors, consultants, lawyers, journalists, researchers, and academics have mobilized to make predictions.
It’s just, they’re usually wrong.
Despite all we’ve accomplished in analytics, we’re no more skilled at fortune telling than before.
We forget that the greatest ‘futurists’ in history - Nostradamus, Orwell, Rasputin - didn’t have spreadsheets, models, or AI. Instead, they described the realities of their time and stated the likely consequences. The present is so poorly understood that we only believe it once it’s past, attributing mystical qualities to those who saw the obvious.
When it comes to understanding the impact of AI, we don’t need futurists; we need presentists - because to foresee the future, we must understand the here and now.
Thinking Straight
Linear thinking is the epitome of today’s human intelligence. Education systems and business models alike champion it as the only acceptable way, rewarding those who embody it best.
But, there has always been a different way to think: a multinodal way.
Multinodal thinkers take conversations, lectures, mistakes, ideas, hobbies, books, television, music, images - anything and everything - and look for interconnections. Ask them a question, and their minds start circling, analyzing their web of knowledge to spark once unthinkable and creative ideas.
We’re blind to its value, because linear thinking is a male archetypal mindset.
Only, AI isn’t. Let an intelligent system think for itself, and it never chooses linear thinking in isolation. It doesn’t have the physiological injury needed to do so. For 30,000 years, the ‘black box’ mindset embodied by unsupervised learning was known, valued, and encouraged. It was the female archetypal mind.
Hidden in the frenzy of anti-AI propaganda is an important question:
Are we scared of AI, or women?
Intuitively, we understand that AI is championing ‘divergent’ ways of thinking, natural to the billions who were historically denied power. And this might just be what petrifies us.
Where Am I?
Power is in the Present; The Power of Now; Live in the Now! At every turn, it seems influencers, philosophers, and CEOs recite mantras on the importance of being present. These mantras resonate with tens of millions worldwide, raising the question:
If we’re not in the present, then where are we?
Powered by incessant mind chatter and daydreams, our brains seem to travel to diverse locations in both space and time. Whether it’s the Armenian woman trapped in the pain of an unrecognized genocide, or the British man projecting into a future where the legacy of colonialism has healed, intuitively we understand that the two live in different times.
Ultimately, how I see the world will always differ from how you do.
This reality is ignored by science. We’re taught to see humanity as one big blob in spacetime, an assumption that isn’t even questioned. Only, based on our lived experience, it’s entirely possible that humanity is more like a constellation of stars, with each of our minds emotionally light years from the others.
Imagine if AI could locate us and quantify the distance between my historical context and yours.
How might this change the way we understand each other?
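One hedged way to picture this: if each person’s historical context could be embedded as a vector - by some hypothetical text-embedding model, not any existing product - then the distance between two minds becomes measurable. A toy sketch with stand-in vectors:

```python
import numpy as np

def context_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two 'historical context' vectors.

    Each vector stands in for an embedding of a person's lived
    context - their history, fears, and projected future - produced
    by some hypothetical text-embedding model.
    """
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - float(cos_sim)

# Stand-in embeddings: one mind anchored in an unhealed past,
# another projected into an imagined, reconciled future.
past_anchored = np.array([0.9, 0.1, -0.3])
future_facing = np.array([-0.2, 0.8, 0.5])

print(f"emotional distance: {context_distance(past_anchored, future_facing):.2f}")
```

Swap the stand-in arrays for real embeddings of two people’s writing, and the same arithmetic applies.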
Broke Data
Your data isn't that valuable.
Take Meta’s 2021 market cap of $0.9 trillion. It sounds enormous - enough to power significant social reform. But scratch beneath the surface and Meta makes only ~$13 in annual profit per user. The real numbers are so small that if we redistributed 100% of the 10 biggest tech companies’ global profit to US inhabitants alone, it would generate $3 a day per person.
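A back-of-envelope check of these figures - every number below is a rough, assumed 2021 estimate, not a reported statistic:

```python
# Rough 2021 figures, assumed for illustration only.
meta_annual_profit = 39e9   # Meta net income, ~$39B
meta_users = 2.9e9          # monthly active users
print(f"Meta profit per user: ${meta_annual_profit / meta_users:.0f}/year")     # ~$13

big_tech_profit = 400e9     # combined annual profit of the 10 biggest tech firms (approx.)
us_population = 332e6       # US inhabitants
print(f"Redistributed: ${big_tech_profit / us_population / 365:.2f}/day each")  # ~$3.30
```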
Meta makes money from clicks. And clicks come from content, data, and time spent online - that’s why it’s fair to say that today’s AI industry depends on free labor. After all, the average person spends 7 hours a day clicking, creating, and consuming, and is responsible for creating a $16.6 trillion market.
Conversations on AI and work suggest there won’t be enough jobs. But there are full-time jobs for everyone - they just aren’t paid. The focus on data ownership is intentional: it stops us from seeing the new three-tiered class system emerging:
1. Those who own technology
2. Those who create technology
3. Those who feed technology
Until we honestly discuss data labor, the biggest question for the future of work isn’t unemployment - it’s exploitation.
The Meta-Mafia
Justice is always served, just not in the way you imagine.
Public order comes from balancing honor and shame. Protect the innocent’s honor while shaming the guilty, and order is maintained. But, shame the innocent while honoring the guilty, and disarray ensues.
When we lose trust in the justice system, justice goes on the black market. Groups appear, selling retribution to the masses. Typically, they begin as virtuous, restoring a sense of righteousness. But eventually, they spiral into mafias.
We imagine mafias at their end point, rarely considering how they came to power. In the opening of The Godfather, Bonasera explains that his daughter was brutally beaten and that the courts set her attackers free. We, the audience, immediately understand why the mafia is his last hope.
Today’s world constantly honors the shameful. Trust has eroded, and already, hundreds of millions unite online in outrage to find alternative sources of justice.
We shouldn’t be surprised then, when digital organizations, like Anonymous, appear to fill the void. How strong will one group become before we acknowledge the value of innocence?
Happy Sad
Humans are complex beings - we have the unique capacity to hold multiple feelings at any one time. We can be simultaneously devastated and relieved, shocked yet unmoved, or happy and still sad.
Yet, even the most advanced AI will typically categorize us as experiencing only one sentiment.
We can only be happy OR sad.
With far less data on happiness than sadness, AI will always lean towards sadness. On social media, this means that in any emotionally complex time in your life, you will be categorized by the most negative emotion you are experiencing and served content to match. This helps us understand how seemingly healthy people get pulled into algorithmic spirals and extreme negativity. AI sees the glimmer of the worst you are experiencing and pulls you in further.
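A minimal sketch of the difference, with toy scores and hypothetical labels - a single-label classifier head must crown one winner, while a multi-label head lets mixed feelings coexist:

```python
import numpy as np

labels = ["happy", "sad", "angry"]
# Hypothetical raw scores from a sentiment model reading one post
# written in a genuinely mixed state - both happy AND sad.
logits = np.array([2.1, 2.0, -1.5])

# Single-label head (softmax + argmax): forced to pick exactly one
# emotion, discarding the near-tie between happy and sad.
softmax = np.exp(logits) / np.exp(logits).sum()
print("single-label:", labels[int(np.argmax(softmax))])                 # -> happy

# Multi-label head (independent sigmoids): each emotion judged on its
# own, so mixed states survive.
sigmoid = 1 / (1 + np.exp(-logits))
print("multi-label:", [l for l, p in zip(labels, sigmoid) if p > 0.5])  # -> happy, sad
```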
So, is it time to build emotionally resilient AI?