The first person to define three laws of robotics was the world-famous science fiction writer Isaac Asimov. According to Asimov’s laws,
- a robot may not injure a human being or, through inaction, allow a human being to come to harm;
- a robot must obey human orders, except where such orders would conflict with the first law;
- a robot must protect its own existence, as long as doing so does not conflict with the first two laws.
Today we observe the rapid development and hasty integration of machines powered by artificial intelligence (AI). Asimov’s laws may be theoretically consistent, but they are of little use in practice, as modern AI has moved far beyond what Asimov had in mind.
It is impossible to imagine our lives without the advantages that machine intelligence offers. With the advent of artificial intelligence and neural networks, societies across the globe are wrestling with how to legally regulate the advancing technology.
Possible AI threats
There is a controversial question of whether machines should take over the human privilege of making decisions. On the one hand, machines reason free from emotion; on the other, only human beings can make a conscious and genuinely deliberate decision. A machine’s capacity for learning may also lead to outcomes different from what was expected. Most of the potential threats of AI, and the need for its legal regulation, stem from this point. Let’s have a look at some of them.
- The constant arms race between leading countries drives active research into autonomous weapon systems. As a result, armed forces are being equipped with drones driven by artificial intelligence. A malfunction in such intelligent weapon systems may lead to fatal consequences. Debates over a legal framework for integrating machines into weapons continue, as the leading military powers are not ready to give up the power of artificial intelligence and the supremacy that advanced technology brings.
- The purpose of AI integration is to make people’s lives easier and “smarter”. With the help of various algorithms, machines can learn from initial data sets and previous experience. But the broad introduction of machine intelligence into different fields may lead to dangerous situations in which nobody knows whom to blame when an error in a robot’s behavior costs human lives, as in a driverless car accident. It is therefore important to define boundaries and set a legal framework for cooperation between humans and machines as innovation develops.
- Security has always been an urgent issue in the unstoppable process of AI development. However well an intelligent system is secured, there is always a human being capable of hacking it. One of the hottest issues, for example, is the growing popularity of children’s smartwatches, which let parents track a child’s location. Without a reliable security system, these smartwatches can themselves become a threat: a skilled hacker can track a child and gain access to the device. That is why it is vital to guarantee user security by law and minimize cyber threats.
- A jobless future is considered another unfavorable outcome of the widespread use of robotics. Many analysts believe that before long the majority of jobs will be taken over by robots, displacing human labor.
It is no secret that Japan is a leading country in technological development and innovation. There are already several hotels in Japan where around 80% of the service staff are intelligent robots. The country offers a glimpse of the coming robotics future and of machines’ entry into people’s daily lives.
Healthcare is another field of rapid AI and robotics integration. Many hospitals are already equipped with intelligent systems that help diagnose cases and assist patients. It is expected that robots will soon take on nursing roles, reducing the need for human nurses.
Such interactions between robots and humans spark debates over whether robots can observe ethical principles. How should we judge a wrong cancer diagnosis predicted by a machine trained on datasets from an open-source platform? Who is to blame for the resulting mistreatment: the programmers, the physicians, or the machine? It is a complicated question that has to be settled in law.
Legal status for robots: myth or reality?
Legally recognizing the autonomous status of robots would inevitably lead to treating them as a particular form of personhood with its own rights. The legal status of an intelligent system depends on the set of AI functions it performs, the form in which it is realized, and its degree of autonomy.
Many scholars do not believe in robot personhood and deny the need to give robots legal status. Robots and machines will never become human-like creatures: AI has no soul, consciousness, emotions, or feelings like a human being. For example, the Fourteenth Amendment to the US Constitution states that “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.” What does this mean? Only a natural person can be a citizen and hold civil rights.
Machines will always be human property. They can’t be anything more.
Conclusion
The enormous amount of data involved, and machines’ ability to learn and make decisions through their algorithms, make it increasingly difficult to predict machine behavior and the threats it may pose. For many countries, including the leading ones, it is a real challenge to establish a competent national and international legal basis for AI and robotics. The complexity of such a framework is rooted in two competing concerns: on the one hand, the need to stimulate further technological development and secure leading positions in local and global markets; on the other, the open question of how to guarantee the security and safety of people and businesses.
The rapid integration of AI has to be handled smoothly and wisely. Many countries are attempting to set up legal frameworks that address fast technological development and its challenges. Nevertheless, every piece of national and international legislation has gaps, so the problem of legal regulation of AI is global. That is why such rules must be standardized at the global level, taking into account successful practices from all over the world.
The legal groundwork has to develop alongside an evaluation of possible risks. It is also necessary to keep a balance between the interests of society and those of individuals, weighing both security and the importance of technological innovation.