Artificial Intelligence: A Risky Path?

Artificial Intelligence (AI) refers to machines that can work and act like humans. Though it began as a humble idea in 1956, its scope has expanded exponentially in recent years.

Artificial intelligence was developed with the hope of amplifying human intelligence, but it has recently become a double-edged sword. Stephen Hawking warned that AI could spell the end of the human race; Elon Musk and Bill Gates have voiced the same concern, with Musk going so far as to say that AI could reduce people to the status of cats.

More and more of our decisions are being made by robots; who is to say that one day they won't decide that we are obsolete and deserve retirement? Researchers plan to replace the weak AI we use today with strong AI by 2070, which might be a hint of a Frankenstein's monster in the making. Even Mark Zuckerberg, one of AI's strongest supporters, had to bite his tongue when rogue chatbots plagued Facebook.

AI advocates, however, tell a different story. As AI becomes more widely applicable, resource-sensitive and high-risk situations, such as major surgeries where every human error has a ripple effect, could save lives and billions of dollars by deploying AI instead. What one needs to keep in mind, however, is that machines are not immune to flaws or mistakes either: they can be fooled or hacked. Without stretching the imagination far, many hazards of AI dependence come to mind; a misaligned intelligence, for example, may keep pursuing a goal we never intended and become difficult to control.

Moreover, AI and robots are already competing with the present human workforce, causing a decline in median incomes. Even though AI and machines make our work faster, giving us more time to spend with family, they are also making jobs overly specialised. As a result, it will become harder and harder for people to switch jobs (entry-level roles are already being filled by AI) and to come out of unemployment.

Algorithms can perform calculations, process data, and solve problems. However, they are built on their creators' opinions coded in maths, and they often carry a profit motive. AI creators and programmers seem to give little consideration to the chaotic social transformation and unconstitutional discrimination these systems are likely to cause. Predictive models used in AI are based on what exists today, with no consideration for the world of tomorrow.
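
To make the point concrete, here is a minimal, purely illustrative sketch in Python. The data and the scoring rule are invented for this example; it simply shows how a predictive model trained only on historical decisions will replay whatever bias those decisions already contain.

```python
# Hypothetical past loan decisions: (neighbourhood, approved)
# The data below is made up for illustration only.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Estimate the historical approval rate for each neighbourhood."""
    counts, approvals = {}, {}
    for group, approved in records:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / counts[g] for g in counts}

def predict(model, group, threshold=0.5):
    """Approve a new applicant only if their group was usually approved before."""
    return model.get(group, 0.0) >= threshold

model = train(history)
print(model)                # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))  # True  - past approvals become future approvals
print(predict(model, "B"))  # False - past rejections become future rejections
```

The model never asks whether the old decisions were fair; it only learns to repeat them, which is precisely the worry about systems built on "what exists today".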

It looks like the superpowers are already racing toward this new age of technology. Last week, the US Secretary of Defence voiced his concerns about China's plans for AI development, while the US military has committed to a five-year AI research plan worth 2 billion US dollars. Moreover, tech companies in the US have signed contracts with the government; Project Maven, the joint venture between tech magnate Google and the Pentagon, is one such example.

China, too, hopes to become the world leader in AI by 2030 and to create a domestic industry worth 150 billion dollars.

Nobel laureate Joseph Stiglitz delivered a lecture in the Royal Society's 'You and AI' series on 11 September, giving statesmen an estimate of where, when, and to what extent artificial intelligence should be introduced.

Machines should not be allowed to take decisions that have critical implications for humans, or that endanger society as we know it. Data scientist Cathy O'Neil writes in her book 'Weapons of Math Destruction' that AI evaluates people and their work in an opaque manner while threatening democracy.

If left unopposed, AI will dictate our lives as we move into the future, in its own mechanical and inhumane way.
