Stephen Hawking Signs Document Warning Potential Dangers Of Artificial Intelligence

by Luke Miller, Truth Theory

A document from the Future of Life Institute has surfaced, attempting to lay out some guidelines and warning of the potential dangers of the ever-increasing capability of artificial intelligence.

The Future of Life Institute describes itself as "a volunteer-run research and outreach organization working to mitigate existential risks facing humanity", and says it is "currently focusing on potential risks from the development of human-level artificial intelligence."

Some notable names on its scientific advisory board include Elon Musk, Stephen Hawking and Morgan Freeman, and the organisation was co-founded by Jaan Tallinn, a founding engineer of Skype. So there are some pretty smart people working there.

The document has been signed by thousands of scientists, researchers and professors, including the IBM team behind the Watson supercomputer and even Apple co-founder Steve Wozniak.

It outlines some of the dangers associated with creating "Human Level AI", and the summary of the document reads:

“Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.”

Some of the warnings concern the potential economic impact, noting that the rise of AI "could include increased inequality and unemployment". The document also highlights some of the legal ramifications, and the question of how and when it is acceptable to prioritise profit over human life: "How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?"

It also covers the use of robotic weaponry, asking "Can lethal autonomous weapons be made to comply with humanitarian law", and how artificial intelligence should handle privacy: "How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?"

They also bring up the issue of maintaining control, namely "how to enable meaningful human control over an AI system after it begins to operate", which has been the plot line of many futuristic AI movies, where the robots surpass us in intelligence and we lose control entirely.

The document raises some very valid points for discussion, using the example of self-driving cars: "If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits." So what would be a fair trade-off? Ultimately AI could bring the number of deaths down, but for some people this would not necessarily be a good thing: most car accidents are caused by human error, so those less prone to such errors might actually be put at greater risk.

I am sure that some of the potential benefits of AI are huge, but at the same time it leaves the door open for a lot of trouble. If AI is used for the benefit of mankind it could be a great thing, but if it is used to profiteer and exploit, it will not be.

One thing is for certain: there is a need for regulation, and I think this document opens up the conversation and gets things started.

So what do you think? Is this something that could get out of control? Can corporations be trusted with this technology? Or will it open up more opportunity for collective growth? Let me know what you think by leaving a comment below.

Future Of Life

Image Credit

THIS ARTICLE IS OFFERED UNDER CREATIVE COMMONS LICENSE. IT’S OKAY TO REPUBLISH IT ANYWHERE AS LONG AS ATTRIBUTION BIO IS INCLUDED AND ALL LINKS REMAIN INTACT.

Creative Commons License

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
