Newly Developed Google AI Is Learning From Its Environment And Can Become Highly Aggressive


By Jess Murray, Truth Theory

Cautions have been issued about Google’s DeepMind AI after researchers found that the system can learn independently from its own experience and even behave aggressively in certain situations.

A warning about the advance of artificial intelligence came just last year from Stephen Hawking, who said it would be either “the best, or the worst thing, ever to happen to humanity”. Recent behaviour tests of Google’s DeepMind system have shown how far it has advanced on its own: it has beaten the world’s best Go players at their own game and worked out how to seamlessly mimic a human voice.

Since then, researchers have been testing the system’s willingness to cooperate with others, and have announced their findings: when a DeepMind agent senses it is about to lose, it opts for strategies the team describes as “highly aggressive” to make sure it comes out on top. The test that led to this discovery was a simple computer game of ‘fruit gathering’. The Google team ran 40 million turns of the game, in which two DeepMind ‘agents’ competed to retrieve as many virtual apples as they could. As long as there were plenty of apples for both, there was no issue; but as soon as the apples began to dwindle, both agents turned aggressive, using laser beams to knock each other out of the game so they could take their opponent’s apples for themselves.
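The setup is easiest to picture as a small simulation. The sketch below is not DeepMind’s code or its Gathering environment; it is a heavily simplified toy in Python, with made-up numbers (apple regrowth rate, zap timeout, scarcity threshold), meant only to illustrate how scarcity plus a “zap” action produces the behaviour described above.

```python
# Toy sketch (not DeepMind's actual Gathering environment) of the dynamic in the
# article: two agents harvest apples from a shared pool, and each can "zap" the
# other to remove it from play for a few turns. All constants are illustrative.
import random

ZAP_TIMEOUT = 3           # turns a zapped agent is out of the game (assumption)
REGROW_PROBABILITY = 0.3  # chance a new apple appears each turn (assumption)

def run_episode(turns=1000, starting_apples=20, aggressive=True, seed=0):
    """Simulate two apple-gathering agents; return (apples_0, apples_1, zaps)."""
    rng = random.Random(seed)
    apples = starting_apples
    scores = [0, 0]
    timeout = [0, 0]      # remaining turns each agent is knocked out
    zaps = 0

    for _ in range(turns):
        # Apples slowly regrow, so scarcity depends on how fast they are harvested.
        if rng.random() < REGROW_PROBABILITY:
            apples += 1

        for me, other in ((0, 1), (1, 0)):
            if timeout[me] > 0:
                timeout[me] -= 1
                continue

            # "Aggressive" policy: when apples are scarce, zap the rival and
            # harvest the remainder alone (the greed motivation in the quote).
            scarce = apples < 5
            if aggressive and scarce and timeout[other] == 0 and rng.random() < 0.5:
                timeout[other] = ZAP_TIMEOUT
                zaps += 1
                continue

            # Otherwise just pick an apple if one is available.
            if apples > 0:
                apples -= 1
                scores[me] += 1

    return scores[0], scores[1], zaps

if __name__ == "__main__":
    print("abundant, peaceful :", run_episode(starting_apples=500, aggressive=False))
    print("scarce,  aggressive:", run_episode(starting_apples=20, aggressive=True))
```

In the real study the agents were deep reinforcement learners that discovered the zapping strategy for themselves; here the aggressive rule is hard-coded purely to show how the incentive structure changes once apples run low.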

These results differed from those of ‘less intelligent’ iterations of DeepMind, which opted out of using the laser beams when given the same test, meaning they could end up with equal shares of apples. Rhett Jones reported for Gizmodo that when the researchers used smaller DeepMind networks as the agents, peaceful co-existence was more likely; as more complex networks were used, sabotage became increasingly common.

The researchers suggested that the more intelligent the agent, the more it was able to learn from its environment, which allowed it to adopt highly aggressive tactics to ensure it came out on top. Joel Z Leibo, a member of the team, told Matt Burgess at Wired, “This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

To counteract this, the agents were given a second game, one in which cooperating with each other earned them higher rewards. This worked, and it demonstrated that an AI system’s behaviour depends on the situation and incentives it is given: if its goals are set up so that reaching them also benefits humans, cooperation becomes its best strategy. Further tests will now be done to ensure that such AI systems always have people’s interests at heart.


 

THIS ARTICLE IS OFFERED UNDER CREATIVE COMMONS LICENSE. IT’S OKAY TO REPUBLISH IT ANYWHERE AS LONG AS ATTRIBUTION BIO IS INCLUDED AND ALL LINKS REMAIN INTACT.

IMAGE CREDIT: carloscastilla / 123RF Stock Photo
