Everyone keeps talking about living in the past, but why can’t we live in the future?
The main problem is that we do not know what the future will be like. When we look at predictions of the future made in the past, most of them turned out to be far from the truth.
Many of the predictions we make in the present about the future are likely to be just as inaccurate. But there is one prediction that is too hard to ignore because of the potential danger it poses: the rise of AI.
My favorite billionaire, Elon Musk, predicts that AI will overtake humans in five years. His main issue with AI is that it could become smarter than humans. Once that happens, unless humans can connect their brains to computers, they will serve no purpose, hence initiating the AI takeover. While the five-year prediction seems generous, Musk did unveil his plans for Neuralink, his brain-computer interface startup.
Still, I don’t plan on relying on this technology to survive the robotic takeover. While it does seem promising, the technology poses many challenges, raising philosophical and ethical questions about what would happen if, for example, the computer malfunctioned.
Instead, I have decided to build a nuclear bunker. While this does seem extreme, it will be the only safe way to hide from the robots that will be roaming our planet trying to take us over.
I plan on building the bunker out of steel-reinforced concrete. It will sit in the middle of the Santa Cruz Mountains, where there are plenty of trees to hide it. Inside, the bunker will be stocked with years’ worth of food and survival supplies.
To defend my bunker, I plan on using whatever weapons the future provides us to kill the robots that are taking over, though I’ll only use these weapons defensively. Hiding from technology rather than trying to learn to work with it is the clear solution to protecting myself.
This obviously isn’t a realistic solution to the robot takeover that I believe is likely, given the rapid development of machine learning. Rather than hiding from this potential problem, humans might have to learn to work alongside AI.
Musk’s Neuralink is one way of working with computers and enhancing the human capability to not be overtaken by machines. Still, other measures need to be taken.
There needs to be government regulation of AI technology before it rolls out. There also need to be safety mechanisms that let us shut down an AI in case it starts getting out of control. This is an especial concern for AI weapons, which are in development and are meant to identify targets and take appropriate action on their own.
Many individuals, myself included, are concerned about the development of machine learning. But as we build machines with increasingly human characteristics, it is important that we evolve with those machines to keep control of them rather than trying to shelter ourselves from technology.