Thinking about the potential dangers of artificial intelligence might sound like science fiction, but AI is already here, and it offers both benefits and drawbacks. From customer service chatbots and devices like Alexa to safety features in cars and Google's search algorithms, artificial intelligence has become part of most people's everyday lives. However, it poses dangers as well.
Lack of Privacy
From apps on mobile devices to companies that mine social media for information about users to cookies on computers, many people have grown somewhat blasé about the intersection of privacy and technology. Even the scandal involving the company Cambridge Analytica, which gathered data from Facebook users without their knowledge, has done little to slow the pace at which people willingly turn information over to companies.
AI makes it possible to collect and analyze information on a much larger scale than has ever been possible in the past. As an article in Forbes discusses, this could eventually go beyond being targeted by advertisers and could turn into a social credit system. This could affect everything from a person’s employability to their ability to get insurance and more.
The Scope of Systems
There are a number of risks associated with failing to control the scope of what AI is allowed to do. One challenge facing designers of self-driving cars is what the car should do if it is forced to choose between injuring its own occupants and injuring other people. This raises both liability issues and the ethical question of whether such a decision should be left to AI at all. Another danger is that people will cede too much responsibility to AI without fully understanding what they are giving up. One of the greatest dangers of AI is that it could launch weapons or make other destructive decisions that cannot be stopped by human intervention. Such incidents would happen not because AI is a malevolent force but because, in pursuing its programmed aims, it might overlook considerations that would be obvious to human operators.
Existential Risk
What has been termed “existential risk” is the type of risk most people probably think of when they consider the potential dangers of AI. The idea behind existential risk is that AI could eventually lead to a catastrophe or even the extinction of human beings altogether. As described by the Centre for the Study of Existential Risk at the University of Cambridge, the problem may arise when AI is expanded beyond narrow uses. AI has the potential to surpass humans both physically and in terms of intelligence, which could lead it either to act on its own or to be used by individuals or groups to disastrous ends. Elon Musk, Bill Gates and Stephen Hawking are among the prominent figures in technology and science who have raised concerns about the existential risk from AI. However, the AI community is researching machine safety and ways to prevent such outcomes at centers like the one in Cambridge.
AI has improved human lives in ways large and small, from enabling more accurate surgery to keeping travelers from getting lost. It is equally important to address the potential dangers of AI to ensure it continues to be used for the benefit of humans.