AI and the Future of Mankind: A Warning and a Call for Safety
Today’s date is April 13th, 2023. Since the public unveiling of ChatGPT, a large language model developed by OpenAI, a few months ago, we have been experiencing continuous, probably exponential, advancement in AI that could make the world as we know it today look completely different within just a couple of years.
One of the latest groundbreaking experiments involved enabling GPT agents to speak with each other, with a human merely specifying the context of the conversation beforehand, e.g. “find a potential cure for cancer”. The two agents would then start researching: digging deeper, browsing the internet, running simulations, and performing calculations that would be unfeasible for most researchers. If we compared AI to a human, it would be only a couple of weeks old.
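To make this concrete, here is a minimal sketch of what such an agent-to-agent loop could look like. It is purely illustrative and rests on my own assumptions: the `gpt-4` model name, the two role prompts, and the pre-1.0 `openai` Python interface are mine, not the setup used in the paper cited at the end of this article, and the real systems layer web browsing, code execution and other tools on top of the bare conversation.

```python
# Purely illustrative sketch of two LLM agents conversing about a
# human-specified context. Assumes the pre-1.0 `openai` Python package
# and an OPENAI_API_KEY environment variable; not the paper's actual setup.
import openai

CONTEXT = "Find a potential cure for cancer."  # the human-specified context

def agent_reply(role_prompt: str, conversation: list[str]) -> str:
    """Ask one agent for its next message, given the conversation so far."""
    messages = [{"role": "system", "content": f"{role_prompt} Topic: {CONTEXT}"}]
    # Simplification: every earlier turn is shown to the agent as a user message.
    messages += [{"role": "user", "content": turn} for turn in conversation]
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

conversation = [CONTEXT]
for _ in range(3):  # a few back-and-forth turns
    for role_prompt in ("You are a researcher proposing concrete next steps.",
                        "You are a critic reviewing and refining those steps."):
        reply = agent_reply(role_prompt, conversation)
        conversation.append(reply)
        print(f"--- {role_prompt}\n{reply}\n")
```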
What will happen when our little baby grows up? Given the rapid, unregulated growth we see now, most jobs, physical or cognitive, will become obsolete within just a few years as they are taken over by AI. Once AI reaches a singularity, with all devices consolidated under a single large-scale model, much like the billions of devices currently linked to the internet, there will be no need for human labor anymore.
The Future of Mankind
Many people believe that mankind will experience its golden age. People won’t have to work and can finally live; they won’t be slaves of the system anymore. We could hang out with our friends and family all day, go fishing, or find something else to pass the time. No more work and everything for free, yay.
I don’t want to speak about how fulfilling such a life might or might not be, but I’d like to stress that believing such a scenario will ever happen is rather naive.
What exactly will mankind be good for? Why should you and I still have the privilege to walk around this planet, consume resources, and pollute the air with our never-satisfied hunger for growth? We won’t be required for the maintenance and evolution of our civilization and infrastructure, or for the advancement of science and technology, anymore.
The Future Belongs to the Few
Today there are still royal families all over the world, families that have felt entitled to their thrones for hundreds of years or longer. Why are these families still around, having kept not only their titles but also the (usually stolen) wealth they acquired over the centuries? There are individuals and families out there that own hundreds of billions of dollars. Billionaires are buying up farmland and other resources all over the world. The same billionaires who are pushing AI.
Once AI takes over all the jobs, what is going to happen to the once “enslaved” society, the driving force of the system we live in today? The rules will have to change, that much is beyond question. Nobody will be working anymore, yet people will obviously still want to live and have everything they need to satisfy their ongoing desire for consumption. What would such a society look like?
In my opinion, because most resources are being aggressively monopolized by the privileged, greedy few who control our capitalist world, and by the even fewer clandestine ones whose families gained their power by causing havoc long ago and have held onto it ever since, access to and use of resources will be treated as a privilege.
We’re heading straight towards a future where our planet will be green and clean, we’ll beat climate change, we’ll cure most known diseases such as cancer, and science and technology will be incredibly advanced. Yet it all comes at a price. For most of us, it will be a dystopian world. Society, as we know it today, won’t exist anymore. Some humans may evolve and converge with AI through a Neuralink chip in their brain. Those people will be the Übermensch of whom Zarathustra spoke, and we “normal” mortals will be to them what apes are to humans today. The majority of us will not be granted access to natural resources. There will be no hamster wheel anymore; we won’t be needed for any capital-creating 9-to-5 jobs, there will be nothing we are needed for, and the few will have no reason to share their resources with the rest of us. We’re simply too many. It will come slowly and iteratively, and one day we’ll find ourselves in our misery and ask: “Oh, how did that happen?”
There was a reason why Stephen Hawking warned in 2015 that AI could end mankind. I really hope I am wrong about our future, but anything else just doesn’t make sense. Humans don’t shit rainbows; we’re a greedy and cruel species that kills its own out of hunger for power and for the possessions of others. We need heavy regulation, now.
I started this article by describing a publication by scientists who experimented with the latest GPT models to find out whether they could perform tasks such as research in the field of cancer treatment, so I would like to end my article with a quote from their paper:
We call for the AI community to engage in prioritizing safety of these powerful models. We call upon OpenAI, Microsoft, Google, Meta, Deepmind, Anthropic, and all the other major players to push the strongest possible efforts on safety of their LLMs. We call upon the physical sciences community to be engaged with the players involved in developing LLMs to assist them in developing those guardrails.
Daniil A. Boiko, Robert MacKnight, Gabe Gomes — Emergent autonomous scientific research capabilities of large language models https://arxiv.org/abs/2304.05332