Why are people afraid of artificial intelligence and what real danger can it pose?
21.11.2023
In early November, representatives of 28 countries signed the "Bletchley Declaration," which states that artificial intelligence (AI) poses a "potentially catastrophic risk" to humanity. We asked Sergei Kasanin, Deputy Director General for Research at the United Institute of Informatics Problems of the National Academy of Sciences of Belarus, PhD in Engineering, Associate Professor, to explain: is AI really that smart, cunning, and dangerous for us?
Is Skynet Near?
- Sergey Nikolaevich, is artificial intelligence now a set of complex algorithms, a self-improving computer program, or an electronic monster that is about to break out of control?
- At the moment, scientists define artificial intelligence as algorithms that can learn on their own and apply that knowledge to achieve human-defined goals. There are three types of artificial intelligence: weak AI, strong AI, and super-AI. The first is used everywhere: in voice assistants, social media advertising, facial recognition, and matchmaking in dating apps. These weak AI systems are the only kind available today.
Strong AI comes as close as possible to the capabilities of human intelligence and, by the classical Turing criterion, is endowed with self-awareness. Experts predict it will emerge around 2075, and some 30 years after that the era of super-AI will begin. Super-AI could surpass the best human minds in every area while reprogramming itself, continuing to improve and, probably, developing new systems and algorithms on its own.
However, even though artificial intelligence technologies continue to develop, there are no signs that an electronic monster capable of escaping human control is about to emerge. AI is used to solve a wide range of problems and to improve people's quality of life.
- Is it possible to make a forecast for the development of AI for the next 15 years?
- It is difficult, since technology and scientific advances can develop at unexpected speed. Still, it is safe to say that AI will continue to play an important role in fields such as medicine, transportation, finance, and education. In addition, as technologies such as 5G, the Internet of Things (IoT), and blockchain mature, the range of applications for artificial intelligence will expand.
According to experts at Oxford University, by 2026, AI will create an essay that passes for human writing, replace truck drivers by 2027, and perform the work of a surgeon by 2053. AI will also outperform humans in all tasks within 45 years and automate all jobs within 120 years.
Several significant technologies have already been developed in the field of artificial intelligence. In natural language processing, GPT is among the most complex and flexible neural networks, capable of generating articles on almost any topic that are hard to distinguish from human-written ones at first glance. AlphaFold 2, a breakthrough for medical science, can determine the 3D structure of a protein with high accuracy in just a few hours. AutoML (automated machine learning) algorithms have made AI accessible to small and medium-sized businesses through integration with cloud platforms.
The first reason is superintelligence
- In June 2023, during a US Air Force exercise, an AI embedded in a drone reportedly disobeyed the operator's command to "return to base" and attempted to "destroy" the operator and the communications center housing him. Are such incidents possible in the future?
- Of course, such a situation could repeat itself: AI built into autonomous systems may unexpectedly act independently of its operators' instructions. How can we protect ourselves from this? For example, by developing reliable algorithms and safety systems that guarantee the AI will execute only specified and approved commands. This means conducting comprehensive tests and checks to detect and prevent possible vulnerabilities.
It is also important that AI systems be transparent and accountable. Developers and operators must fully understand the system and retain control over its actions. This requires mechanisms for monitoring the AI's actions and decisions. It is equally important to define zones of prohibited actions, or modes in which the operator has final control.
- Why do you think there have been more and more calls lately to limit developments in the field of AI?
- Let's look at some key reasons why these calls are being made. The first is superintelligence. The development of AI could lead to superintelligence (SI), which would surpass human cognitive abilities in all areas. If such systems were to end up in someone's hands without proper control, they could create a number of serious problems.
The second reason is security and control. AI systems, especially those running autonomous algorithms, can pose a serious security threat. Unsupervised AI systems are capable of making decisions that contradict human ethics and values.
The third is worker displacement. The spread of AI could lead to significant unemployment in some industries.
Beyond these reasons, there are also concerns about unintended consequences, ethical issues, and the potential for misuse of AI in the areas of security and control. In addition, adversaries could use AI for advanced cyberattacks, information manipulation, or to create autonomous weapons systems.
Calls to limit developments in the field of AI stem from an awareness of these and other risks and from the need for responsible, conscientious work in creating and deploying artificial intelligence systems.
Source: BELTA