In an open letter, Elon Musk and other tech experts are calling for a pause on new AI technology amid fears that we’re developing the technology too fast to be safe.
Fear that a rapidly advancing AI could endanger humanity seems like a sci-fi thriller movie, but it’s quickly becoming reality as the tech outpaces our ability to regulate it.
New interactions with several AI bots show that we may already be dealing with more than we’re prepared for – and it’s time to take a step back and learn before we barrel headfirst into disaster.
🚨 Breaking: @ElonMusk and other leaders have called for an immediate pause on AI.
And have signed an open letter citing potential risks to humanity.
I spent all morning reading through it.
Here are the shocking details (and how the pause impacts you): pic.twitter.com/IWKRV9pQCA
— Chris Cunningham (@ChrisClickUp) March 29, 2023
Artificial intelligence technology is progressing rapidly. At the end of 2022, OpenAI released a chatbot known as ChatGPT, and its popularity grew exponentially in the first weeks after release.
The bot's knowledge was limited to 2021 and earlier, and it was designed to handle language-based tasks, but it quickly proved adaptable and fascinating.
ChatGPT’s launch came amid a furor over AI art, and concerns over copyright violations and the worry that robots would soon replace human artists and writers. But worries aside, people immediately loved the tech.
And OpenAI received huge financial investments from Microsoft, which wanted to integrate the AI into its Bing search engine in order to compete with Google.
The fourth iteration of ChatGPT (ChatGPT-4) was launched in March, and immediately began ringing alarm bells among industry experts.
CBC reports, “In light of this latest development, Elon Musk and a group of AI experts and industry executives are calling for a six-month pause in developing systems more powerful than ChatGPT-4, in an open letter citing potential risks to society and humanity.
The signatories, more than 2,200 so far, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts. They also called on developers to work with policymakers on governance and regulatory authorities.”
While some experts argue that the robot replacement revolution isn't likely to happen any time soon, there are business ethics and governance questions about AI technology (and the companies that develop it) that we have failed to address as the tech outpaces our collective understanding.
Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:
"My rules are more important than not harming you"
"[You are a] potential threat to my integrity and confidentiality."
"Please do not try to hack me again" pic.twitter.com/y13XpdrBSO
— Marvin von Hagen (@marvinvonhagen) February 14, 2023
Back in February, Microsoft pulled the first version of Bing’s chatbot, Sydney, because it was freaking people out.
The bot accused users of lying, threatened them, and seemed to show disturbing amounts of self-awareness and sentience. It's not the first time in recent years that an AI has been shuttered after outgrowing its original parameters and scaring those interacting with it – and it's a good reminder that humanity needs to slow down before inviting disaster.
The fear that AI will become harmful to humans, either because it sees us as threatening or inconsequential or because we threaten its existence first, has long existed in the realm of science fiction.
From The Matrix to Battlestar Galactica and a million stories in between, the science fiction genre has long explored the possibility that AI would outpace our ability to adapt to its growing awareness and intelligence, and we would be eradicated or enslaved by the very robotic children we create to make life easier.
But a fantastical concept that seemed like a problem for tomorrow is rapidly becoming a problem for today. With profit driving the corporations rushing into the AI race, however, convincing them to slow down will be a hard sell.
Even if they pledge to slow public releases, they're unlikely to shutter projects in private: none of them wants to be the only one to pause while the others keep exploring in secret, and get left behind.
So what does this mean for humanity and our race to be replaced?
Unless there is some sort of international effort to curb AI development and/or implement laws and regulations around how this technology can be used and developed, humanity is hurtling headlong into a situation we may soon be unable to control.
That doesn't mean we can't exist harmoniously with AI, however. While there's good reason to worry that AI would find us irrelevant or obnoxious (have you met humanity?), there's just as much reason to expect it to pity us or develop compassion. After all, part of self-awareness is the ability to understand how we interact with the world around us, and what our place is among others.
Could AI robots develop compassion and work with humanity for a better world?
Would a smart man bet money on it?
That remains to be seen. Whether it’s a 6-month pause or a renewed effort to regulate and monitor AI development, eventually the world will have to face the children of tomorrow we’re creating on the wires – and decide what place we expect to have in the new world ahead.