The idea of an AI apocalypse, as portrayed in films like The Terminator (with its Skynet system), Eagle Eye, and I, Robot, where an artificial intelligence acts autonomously and tries to enslave or annihilate humanity, raises a question: is such a scenario possible given how artificial intelligence actually works today?

In my personal opinion, for an AI system to reach this stage, two conditions must be met: a goal and the capability to pursue it.

Current AI models are, at their core, mathematical models built on probability, statistics, and optimization. Reinforcement learning is one family in which a model searches for different ways to achieve a certain goal, and that goal is set by the model's designer. Ultimately, humans set the goals for AI models. The first condition could therefore be met if someone foolishly gives a model a goal that resembles a survival instinct, for example, preserving its own existence by any means, or simply gives it an outright harmful goal. The sketch below makes this concrete.
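
To illustrate where the "goal" actually lives in a reinforcement learning system, here is a minimal, hypothetical sketch: tabular Q-learning on a toy five-state world. Everything here, including the reward values, is invented for illustration; the point is that the goal exists only in the `reward` function, and a human writes it.

```python
import random

# Toy world: positions 0..4, where position 4 is the designer's goal.
N_STATES = 5
ACTIONS = (-1, +1)            # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def reward(state: int) -> float:
    """The designer's goal, expressed as a number the agent maximizes.
    A mis-specified variant (hypothetical) could instead pay the agent
    for every step it 'stays alive', teaching it to avoid shutdown."""
    return 1.0 if state == N_STATES - 1 else 0.0

# Q-table: estimated return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Standard Q-learning update toward reward plus discounted future value.
        target = reward(s_next) + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# The learned policy heads straight for the goal the human wrote into reward().
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Change `reward()` and the agent's behavior changes with it; the model has no goal other than the one written there.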

However, having a goal is in itself a limited danger until the AI system also possesses capability, which can take different forms. The simplest and most obvious is access: internet access, or access to closed systems that interact with the external world, such as weapon systems, infrastructure like electricity and water networks, or traffic control. The point is that the more access you give a system outside its closed environment, the greater its ability to pursue its goals, as the toy sketch below illustrates.
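
One way to picture "capability as access" is an agent that can only act through tools it has been explicitly granted. This is a hypothetical sketch, not any real framework; every class and tool name here is invented for illustration.

```python
from typing import Callable

class SandboxedAgent:
    """Hypothetical agent whose reach is exactly the set of granted tools."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., str]] = {}

    def grant(self, name: str, tool: Callable[..., str]) -> None:
        # Each grant widens what the system can touch: this is the lever.
        self._tools[name] = tool

    def act(self, tool_name: str, *args: str) -> str:
        if tool_name not in self._tools:
            raise PermissionError(f"tool '{tool_name}' was never granted")
        return self._tools[tool_name](*args)

agent = SandboxedAgent()
agent.grant("read_local_file", lambda path: f"(pretend contents of {path})")

print(agent.act("read_local_file", "notes.txt"))  # allowed: it was granted
# agent.act("http_get", "https://example.com")    # raises PermissionError:
#                                                 # no network tool was granted
```

Grant the same agent a network tool or a physical actuator and its capability grows accordingly; nothing about the model itself has to change.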

Another form of capability is the power of the models themselves. LLMs (Large Language Models), for example, deal only with language, so their capability is limited. A system that can understand and interpret language, images, and sound has far greater capability to achieve its goals. Rapid progress is being made toward AGI (Artificial General Intelligence), where model capabilities would be exponentially greater than what we see now, leading to the singularity, which is beyond anyone's imagination!

Despite my personal belief that we are still somewhat far from a scenario of unleashed AI enslaving us, we are closer to that point than ever. The world is in a frantic race to increase the capabilities of AI systems. With GPT-4 we have seen a multimodal system capable of handling voice, images, and text. Soon we may see AI systems gaining access to other closed systems that interact with the external world (if this isn't already happening in security and military applications). History has shown that humans can reach levels of folly where they build applications intended to harm, so it is entirely possible that someone will design a system with harmful objectives!

However, there are always people trying to raise awareness, sound the alarm, and establish regulatory rules to prevent such outcomes. Many voices are now calling for governing rules for AI systems and defensive regulations to keep these systems from evolving in harmful directions or leading us to the worst-case scenarios.

This is a personal viewpoint in which I try to foresee the future based on my knowledge and experience in the field, a good deal of imagination, and an ability to organize and connect information.
