Top 10 Scary Facts About Artificial Intelligence

We are in the fourth industrial revolution, characterized by advances in robotics and self-driving cars, the proliferation of smart home appliances, and more. At the forefront of all this is artificial intelligence (AI): the development of automated computer systems that could match or even surpass humans in intelligence.

AI is regarded as the next big thing—so big that future technologies will be dependent on it. But then, do we really know what we are getting ourselves into? Here are ten scary facts about artificial intelligence.


10. Your Self-Driving Car Might Be Programmed To Kill You

Let’s assume you’re driving down a road. Then, a group of children suddenly appear in front of your car. You hit the brakes, but they don’t work. Now, you have two options: The first is to run over the children and save your life. The second is to swerve into a nearby wall or bollard, thus saving the children but killing yourself. Which would you pick?

Most people agree they would swerve into the bollard, sacrificing themselves.

Now imagine that the car is self-driving and you're the passenger. Would you still want it to swerve into the bollard and kill you? Most people who said they would swerve as the driver also said they would not want their self-driving car to make that choice for them. In fact, they wouldn't even buy such a car if they knew it would deliberately put them at risk in an accident.

This takes us to another question: What would the cars do?

The cars will do whatever they are programmed to do, and as things stand, the makers of self-driving cars aren't talking. Most, like Apple, Ford, and Mercedes-Benz, dodge the question at every turn. An executive of Daimler AG (the parent company of Mercedes-Benz) once stated that their self-driving cars would “protect [the] passenger at all costs.” However, Mercedes-Benz walked this back, stating that its vehicles are built to ensure that such a dilemma never arises. That answer is evasive, because such situations will inevitably happen.

Google has been more forthcoming, saying its self-driving cars would avoid hitting unprotected road users and moving objects. In our scenario, that means the car would hit the bollard and kill its occupant. Google further clarified that in the event of an impending crash, its self-driving cars would hit the smaller of any two vehicles. In fact, Google's self-driving cars may actively seek to stay closer to smaller objects at all times: Google holds a patent on technology that makes its self-driving cars move away from bigger vehicles and toward smaller ones while on the road.[1]

9. Robots Might Demand Rights Just Like Humans

With current trends in AI, it's possible that robots will reach a stage of self-awareness. When that happens, they may demand rights as if they were human: housing and health care benefits, the right to vote, to serve in the military, and to be granted citizenship. In return, governments would make them pay taxes.

This is according to a joint study by the UK Office of Science and Innovation's Horizon Scanning Center. The research, reported by the BBC in 2006, when AI was far less advanced, speculated about the technological advances we might see over the following 50 years. Does this mean machines will start demanding citizenship in about 40 years? Only time will tell.[2]


8. Automatic Killer Robots Are In Use

When we say “automatic killer robots,” we mean robots that can kill without human intervention. Drones don't count, because they are controlled by people. One such robot is the SGR-A1, a sentry gun jointly developed by Samsung Techwin (now Hanwha Techwin) and Korea University. The SGR-A1 resembles a huge surveillance camera, except that it carries a high-powered machine gun that can automatically lock onto and kill any target of interest.

The SGR-A1 is already in use in Israel and South Korea; the latter has installed several units along its Demilitarized Zone (DMZ) with North Korea. South Korea denies activating the fully automatic mode that lets the machine decide whom to kill. Instead, the machine runs in a semi-automatic mode, in which it detects targets but requires the approval of a human operator to execute a kill.[3]
