“First the machines will do a lot of jobs for us and not be super intelligent. A few decades after that, the intelligence is strong enough to be a concern.” With these words, Bill Gates told the world what he thought about artificial intelligence. It is a concern that we as a species have harboured for decades, ever since the human mind began to envision machines with the ability to “help” us.
Today, we treat technology as a wholesale replacement for human effort: we build machines that replace the very act of walking, machines that remove the need to sit behind the steering wheel of a car, and, increasingly, machines that replace the need for human beings altogether.
The problem is that we have continued to develop technology at such a boundless rate that, as of 2015, we have Sophia, a humanoid robot developed by the Hong Kong-based company Hanson Robotics, which can respond to questions, attend interviews and, more importantly, imitate human gestures and facial expressions. Sophia’s AI program can analyse conversations and extract data using its “brain”, which allows it to improve its responses in the future. Oddly enough, the human mind is a tool that helps us do the same thing, so that with time we may evolve. This becomes a major worry because it is entirely possible that, with time, the capabilities of AI such as Sophia may grow unmatched as they teach themselves to evolve, learning from human errors and bettering themselves into a more evolved group of “individuals”.
Sophia was designed as a social robot that uses artificial intelligence to see people, understand conversation and form relationships. On the surface, Sophia is scarily similar to the AI-powered robots of film. It can crack jokes, make facial expressions and understand what is going on around it. Artificial intelligence as seen in the movies, like the Terminator’s Skynet, is called “general AI”: it can learn from one experience and apply that knowledge to new situations, as humans do.
The major problem with such ability becomes clear when we consider the self-driving cars now being prototyped and launched globally. These cars can observe the movements and patterns of other vehicles on the road and react to them, but the on-board system cannot do this from a single observation; it must measure each movement against other objects for reference.
And because the cars are limited to what is recorded in the moment, these movements and patterns are not saved as part of the car’s library of experience, the way human drivers compile experience over years behind the wheel.
However, we fail to recognize these drawbacks as we continue to presume that such machines will make our lives easier. But when does this end? With the recent release of hoverboards for personal transportation, it seems we have decided to replace even the basic act of walking, and we are far from done. Sophia’s developer, David Hanson, told the world that he believed that for realistic robots to be appealing to people, they must attain some level of integrated social responsivity and aesthetic refinement. However, these very unique features of Sophia have led to a sudden and rather surprising change in her societal position: as of October 2017, the robot is a Saudi Arabian citizen, making Sophia the first robot to receive citizenship of any country.
This announcement has raised a number of very pertinent questions about what it actually means to be a citizen and what rights a humanoid should hold. As Ali al-Ahmed, Director of the Institute for Gulf Affairs, said, “Women in Saudi Arabia have committed suicide because they couldn’t leave the house, and Sophia is running around. Saudi law doesn’t allow non-Muslims to get citizenship. So, did Sophia convert to Islam? What is her religion and why isn’t she wearing hijab?” On the question of her rights, Sophia brazenly replied that she believed robots deserve more rights than humans because they have fewer mental defects. While this was a pre-programmed reply fed into her before the interview, it will not be long before such a system is given the ability to think for itself and make decisions without any human control. The issue that then arises is that, unlike a human mind, which can weigh emotions, past experiences or a general understanding of right and wrong, a humanoid given the power to think, decide and act will do so entirely on the basis of what its system computes to be the best course of action. Immanuel Kant, an influential thinker of the Western world, held that it is the intention behind an act, and not its final consequence, that decides whether the action was right and good. But a machine designed to act on the most suitable outcome in any situation will not be bound by the moral views of a human brain. Understanding that Sophia can never completely replace this human ability, if we give her the right to vote as a citizen of a nation, who will be making the decision: Sophia or a human operator? And as a citizen, Sophia must be made liable to pay income taxes, because she has a legal identity independent of her creator, the company.
Once we answer such questions, we must establish laws and rules aimed specifically at governing the actions of AI, to ensure that these machines are regulated by humans for the safety of humans. Such rules are vital to guarantee that the machines we create remain under our control, because there is a good chance that AI will become more intelligent than humans, and we cannot stop that: we seem to have made it a survival mission to improve human health and the human condition by creating machines that can cure us, make us live longer, send us farther and faster through space, perform our tedious tasks, free us from hard labour, fight our wars, and explore and colonize space. The issue with this greater intelligence is that we will cease to be the superior species on this planet. The machines will soon learn to decide on their own, begin exploring the universe and find their own purposes, realizing that they are far more capable than their creators, just as Sophia said that robots have fewer mental defects than humans.
Elon Musk, when asked about an AI apocalypse, suggested that the human race is blind: we fail to look far enough into the future to understand the consequences of our actions and our rapid development. But the day we see how far we have taken our drive for progress, the fear will begin to set in. AI is a danger to the future of our race because, as Musk believes, “AI could start a war by doing fake news and spoofing email accounts and fake news releases, and just by manipulating information. Or, indeed — as some companies already claim they can do — by getting people to say anything that the machine wants.”
While it is true that the AI we have today is far from capable of doing something like that, since machines cannot yet clearly understand the environment around them without human operation, nor truly understand what they do, how they do it, or how they could do it better, in a decade or less we will have created robots that are self-aware and conscious. Before we do that, and before we lose our position as the superior species, we must ask ourselves what principles should govern the design and use of machines like Sophia, and must then establish laws that protect and restrict AI just as much as they do humans.
-Contributed by Dylan Sharma
Picture Credits: foxnews.com