Should we be scared of Artificial Intelligence?
Some notable individuals, such as legendary physicist Stephen Hawking and Tesla and SpaceX leader and innovator Elon Musk, suggest AI could potentially be very harmful; Musk at one point compared the dangers of AI to those of the dictator of North Korea.
Microsoft co-founder Bill Gates also believes there is reason to be cautious, but that the good can outweigh the bad if managed suitably. Since recent developments could make super-intelligent machines possible much sooner than initially thought, the time to examine what dangers artificial intelligence poses is now.
What is applied and general Artificial Intelligence?
At its core, Artificial Intelligence is about building machines that can think and act intelligently, and it includes tools such as Google’s search algorithms or the systems that make self-driving cars possible. While most current applications are used to impact humankind positively, any powerful tool can be wielded for dangerous purposes when it falls into the wrong hands.
Today, we have achieved applied AI: AI that performs a narrow task such as facial recognition, natural language processing, or internet searches. Eventually, experts in the field hope to reach artificial general intelligence, where systems can handle any task that intelligent human beings could perform, and most likely beat us at each of them.
What Elon Musk said about AI
In a comment, Elon Musk wrote: “The pace of progress in artificial intelligence (I’m not referring to narrow Artificial Intelligence) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”
There are indeed many AI applications that make our everyday lives more convenient and efficient. It is the AI applications that play a critical role in ensuring safety that Musk, Hawking, and others were concerned about when they expressed their hesitation about the technology.
For example, if AI is responsible for ensuring the operation of our power grid and our worst fears are realized, with the system going rogue or getting hacked by an enemy, it could result in massive harm.
How can artificial intelligence be dangerous?
While we have not achieved super-intelligent machines yet, the legal, political, societal, financial, and regulatory issues are so complex and wide-reaching that it is necessary to examine them now, so we are prepared to safely operate among them when the time comes. And beyond preparing for a future with super-intelligent machines, artificial intelligence can already pose harm in its current form. Let’s look at some key AI-related risks.
AI programmed to do something harmful, as is the case with autonomous weapons programmed to kill, is one way Artificial Intelligence can pose risks.
It might even be plausible to expect that the nuclear arms race will be replaced by a global autonomous weapons race. Russia’s president Vladimir Putin said: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with huge opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Aside from the concern that autonomous weapons might gain a “mind of their own,” a more imminent worry is the danger autonomous weapons might pose in the hands of an individual or government that does not value human life. Once deployed, they will likely be difficult to dismantle or combat.
Social media, through its powerful algorithms, is very effective at targeted marketing. These algorithms know who we are, what we like, and are incredibly good at surmising what we think. Investigations are still underway to determine the culpability of Cambridge Analytica and others associated with the firm, which used the data of 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.’s Brexit referendum, but if the accusations are accurate, they illustrate AI’s power for social manipulation.
By spreading propaganda to individuals identified through algorithms and personal data, AI can target them with whatever information its operators like, in whatever format those individuals will find most convincing: fact or fiction.
Invasion of privacy and social grading
It is now possible to track and analyze a person’s every move online as well as when they are going about their daily business. Cameras are nearly everywhere, and facial recognition algorithms know who you are. In fact, this is the kind of information that is expected to power China’s social credit system, which is intended to give each of its 1.4 billion citizens a personal score based on how they behave: whether they jaywalk, whether they smoke in non-smoking areas, and how much time they spend playing video games.
When Big Brother is watching you and then making decisions based on that intel, it is not only an invasion of privacy; it can quickly turn into social oppression.
Misalignment between our goals and the machines’
Part of what humans value in AI-powered machines is their efficiency and effectiveness. But if we are not clear about the goals we set for AI machines, it could be harmful if a machine is not armed with the same goals we have. For example, a command to “get me to the airport as quickly as possible” might have dire consequences.
Without specifying that the rules of the road must be respected because we value human life, a machine could quite effectively accomplish its goal of getting you to the airport as quickly as possible, doing literally what you asked, but leaving behind a trail of accidents.
Machines can gather, track, and analyze so much about you that it is very possible for those machines to use that information against you. It is not hard to imagine an insurance company telling you that you are not insurable based on the number of times you were caught on camera talking on your phone, or an employer withholding a job offer based on your “social credit score.”
Any powerful technology can be misused. Today, artificial intelligence is used for many good causes, including helping us make better medical diagnoses, finding new ways to cure cancer, and making our cars safer. Unfortunately, as our AI capabilities expand, we will also see it used for harmful or malicious purposes. Since AI technology is advancing so rapidly, it is vital that we start debating the best ways for AI to develop positively while minimizing its destructive potential.
Like what you just read? Let us know how we can improve your experience by leaving a comment or emailing us at (firstname.lastname@example.org).