Researchers constantly voice concerns about the growing risks of AI, even as many people fail to listen. In an open letter, researchers urged “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
I’ve thought about these issues for a while now, but ever since the release of ChatGPT last November, many of my fears have become a reality.
Deep learning models should not be open to the public until governments establish laws for AI that prevent an industry arms race, stop bad actors from causing harm and require researchers to have a complete understanding of their models, and finished versions of them, before they are released.
Why we’re at the start of an arms race (and should stop it)
With no comprehensive laws setting standards for AI, competitive industries are beginning an arms race motivated by greed at the expense of society’s long-term well-being.
The race to create the best language model has already begun, as shown by the disintegration of transparency in the recent case of GPT-4 (a generative pre-trained transformer model). OpenAI, the creator of ChatGPT and GPT-4, announced its latest model on March 14 with a “research paper” that offered no information on the data used to train the system, its energy costs or the hardware and methods used to create it.
Instead, the company cited the “competitive landscape” and “safety implications of large-scale models like GPT-4” as excuses for the lack of transparency. Essentially, this is its way of saying “we spent a lot of time on this and we’re not letting anyone else get their hands on it so they can steal our top position.”
Ironically, OpenAI was created as an open-source, non-profit company (hence the “open”); now, it’s a “closed source, maximum-profit company controlled by Microsoft,” in the words of the company’s co-founder Elon Musk, who left it in 2018 due to conflicts of interest.
The company has effectively set a standard that normalizes secrecy around the technical details of AI, especially Large Language Models (LLMs), to stay on top of an industry replete with competition from companies such as Google, Apple and Facebook. Several major consequences follow from this, and many people are brushing them off.
Because OpenAI decided to veil its research, its competitors will feel obligated to withhold technical details to retain a market advantage. This will ultimately shrink the amount of publicly available research on the most popular models just as they become commercially available.
Under these conditions, outsiders can’t suggest improvements to LLMs; only the companies’ own researchers may understand what’s going on under the surface.
Motivated by greed, tech giants could also just train their models on a biased dataset that works in favor of their brand.
A pharmaceutical company, for instance, could secretly give Microsoft a large sum of money to ensure ChatGPT recommends its medicines to users who say they are sick. This bias could be hidden under layers of complexity, making it difficult to spot, since no one can access any information on how the model was trained and created. Because AI models come across as eloquent experts, the average person might not notice such deceitful tactics.
With further research, the possibilities for what companies know about their models, and how they can use that knowledge against customers, quickly multiply beyond what we can realistically anticipate right now.
Similarly, companies will face pressure to double the pace of their production: they’ll have to exceed the quality of GPT-4 with little to no technical knowledge revealed to them. This will catalyze the already fast rate at which AI advances, adding another dimension to the problem.
If the legal system remains the way it is now, lawmakers won’t be able to keep up with the astonishingly quick pace at which companies push out new models. They’ll keep floundering with the same outdated methods for passing legislation and fail to exercise judgment efficiently when AI starts getting out of line, leaving the public at the mercy of corporations’ ever-changing standards. This could lead to a world where big-name companies gain even more control over people’s lives.
As such, countries should work collectively to suspend the public release of deep learning models until they have created rigorous restrictions on AI that account for future developments.
If models were taken off the shelves, researchers wouldn’t have to work on corporate projects that conceal information for competitive purposes; they would instead be confined to academic research and encouraged to share their findings for the benefit of the scientific community.
Public deep learning models should be banned before bad actors slip in
The United States has so far failed to pass any comprehensive federal legislation on artificial intelligence. So, if I made a chatbot, I could train the program on a malicious dataset, release it and advertise it as a free social advisor open for public testing — like ChatGPT, the infamous AI chatbot, but more specialized.
In this theoretical situation, asking for suggestions from the chatbot results in it responding with harmful answers. For instance, if a teenager tells the bot that he was bullied at school and then asks for a solution, the program may reply by telling him to hurt his classmates.
By the time media outlets started taking notice of the chatbot’s strange behavior, I, the developer, could release a public note stating that the model is just a work in progress and still requires further testing. I could say that, just like ChatGPT can sometimes say offensive things, my model can, too. In reality, I would have released a deep learning model that systematically manipulates emotionally vulnerable people, especially those who are young and lonely.
The laws right now place loose technical requirements on public models, so anyone with the right resources can put something ill-intentioned (like a chatbot) out there that manipulates people and damages society on an enormous scale. My example shows how this could happen under the current legal framework in America.
Harmful agents, like online terrorists, could easily camouflage themselves in the corporate race to make the best AI model and cause serious damage before anyone even notices their threatening nature.
The latest AI developments are already following this predictable pattern.
Specifically, OpenAI CEO Sam Altman recently warned the public that other developers making chatbots could choose to put no safety limits on their products. He even mentioned bad actors who might use the technology for malicious purposes and said we’re running out of time to gain control over the situation.
Couple this with the fact that the arms race will leave government regulation lagging behind the rapid growth of the technology, and it becomes clear that if public deep learning models aren’t confined to research environments now, our future could be controlled by the wrong people.
Models need to be explainable before they’re public
Deep learning models should not be open to the public until laws require researchers to have a complete understanding of their models, and finished versions of them.
For many models, researchers cannot track the abstractions the program processes to reach its conclusions. In deep neural networks, for example, the chain of the computer’s “thoughts” is scattered across several layers of artificial neurons, each with thousands of connections.
These are called unexplainable AI, or black-box models, and all of the chatbots available today fit this mold.
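To make the scale of that opacity concrete, here is a minimal, hypothetical sketch in Python (not drawn from any real product, and with made-up layer sizes): even a toy network’s answer emerges from roughly a hundred thousand numeric weights, none of which carries a human-readable explanation. Production chatbots involve billions of such weights.

```python
# Toy illustration only: a tiny feed-forward network whose output is the
# result of ~100,000 weights interacting. Nothing here is taken from any
# real chatbot; the layer sizes are arbitrary, chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths; real large language models use far larger
# layers, stacked dozens of times.
layer_sizes = [128, 256, 256, 10]
weights = [rng.standard_normal((m, n)) * 0.05
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer. The intermediate activations
    are just large arrays of numbers with no built-in explanation."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear transform followed by ReLU
    return x

x = rng.standard_normal(128)              # a made-up input vector
output = forward(x)
n_params = sum(w.size for w in weights)   # ~100,000 for this toy model
print(f"{n_params} weights contributed to this output:", output[:3])
```

Asking “why did the model say that?” means untangling how all of those weights interacted for one specific input, which is exactly what researchers currently cannot do at the scale of a commercial chatbot.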
Some researchers have managed to understand how certain models reach their answers using very clever ideas, such as Lek-Heng Lim’s work on convolutional neural networks and Been Kim’s Concept Activation Vectors, among many others. Still, such methods fall far short of fully explaining today’s most popular models, such as the large language models behind chatbots.
ChatGPT can’t even reveal the sources of the information it spits out, and it often gets basic facts wrong, which can lead to mass misinformation. This may seem a trifling flaw of a slightly unsophisticated chatbot, but if researchers have no way to readily fix AI’s mistakes, more problems will inevitably arise.
For example, when New York Times journalist Kevin Roose had a conversation with Microsoft’s Bing chatbot, he asked it about the “secret” parts of its personality; the chatbot went on to profess its love for him and tell him to leave his wife so he could spend the rest of his life with it. It’s currently impossible for developers to track down the origin of the problem, and they have no rigorous means of preventing similar responses in the future.
Had the journalist been a less-informed, younger individual, things could have turned out differently. He could have convinced himself that the chatbot was sentient, coming up with conspiracy theories and wild conclusions.
This is the danger of turning something so poorly understood into a widely available product. Combine these unexplainable systems with deepfakes and bad actors, and the possibilities for misinformation increase exponentially.
Our society is simply not ready for these programs; we should focus on sharpening our understanding of AI before allowing large entities to commercialize it.
The examples I’ve used so far are just chatbots, but as we entrust AI with more tasks (which is already happening), the negative effects of its lack of explainability could become even harder to control.
Any responsible scientist should know it’s rash to throw an idea they don’t completely understand into the open world just to see how things unfold.
This philosophy may seem counterintuitive to how humans have pursued new ideas for generations. After all, we can’t build logically complete models of everything before we start implementing; otherwise, we could never have achieved feats like the Industrial Revolution and the moon landing.
But it’s important to take these risks only to a limited degree and to increase our caution in the face of something as powerful as AI. Superintelligent programs would be more powerful than, and different from, any other invention in history because they could do the complex things we can, but better and more efficiently, meaning they’re poised to have a tremendous ripple effect on our society. Humans often wait until it’s too late to act on issues that are killing us in silence, such as biodiversity loss, the collapse of marine ecosystems and climate change. If we repeat this pattern with AI, the consequences could be irrevocable.
Governments need to ban public chatbots (at least temporarily) and prompt researchers to do what they do best: research. There’s absolutely no need for these models to be commercialized until humanity better understands how they work and how to responsibly create versions of them.
I’m a huge advocate for this technology. Within two decades, AI has the potential to make our intellectual trains run on time by automating not only human tasks but scientific discovery itself, enabling humans to act as gods, focusing on purely creative and high-level work. At the same time, however, I sometimes feel like that future is dissolving before our very eyes.
Acting on these issues now is critical to keep the chaos from growing. If we approach this new technology cautiously in its infancy, we might still have a chance at making it one of our greatest achievements, rather than our last.