Visual Foresight: Researchers Develop New Robot That Can Predict The Future Outcome Of Its Actions


By Fattima Mahdi, Truth Theory

News of technological advancements in robotics is common these days – from A.I. intelligent enough to teach itself to walk, to robots developing their own language among themselves. So, it’s likely to come as little to no surprise that yet another advancement has taken place, one that allows robots to imagine the outcome of their actions before they’ve made them. It’s called visual foresight, and the new learning system allows a robot to predict what its cameras will see after it performs a series of movements.

The new robotic learning system was developed by a research team at UC Berkeley. It allows the robot to go through a ‘play phase’, in which it learns about its environment and the surrounding objects by touching, moving and trying to grab them – as opposed to having information about the objects and environment written into its code. The robot goes through this phase completely unsupervised. After a week of learning, the researchers then give the robot a task, something simple, like moving an object from one place to another. The robot draws on what it learned during the ‘play phase’ and completes the task based on that knowledge alone. The stored knowledge forms something of a web of imagination that the robot can use to predict the outcome of its actions, as sketched below.
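In rough terms, that ‘imagine, then act’ loop amounts to sampling candidate actions, predicting their visual outcomes with the learned model, and picking the action whose imagined result looks closest to the goal. The sketch below is illustrative only – `predict_frames` is a toy stand-in for the learned video-prediction network, and the action format, horizon and scoring are assumptions, not the team’s actual code.

```python
import numpy as np

def predict_frames(model, frame, actions):
    """Toy stand-in for the learned video-prediction network: a real model
    would return the camera images it expects to see after each candidate
    action sequence. Here we just perturb the current frame so the example
    runs end to end."""
    rng = np.random.default_rng(0)
    return frame + 0.01 * rng.standard_normal((len(actions),) + frame.shape)

def plan_action(model, current_frame, goal_frame, horizon=5, n_candidates=100):
    """Sampling-based planning in the spirit of visual foresight: propose
    random action sequences, 'imagine' their outcomes with the learned
    model, and execute the first step of the best-looking sequence."""
    # Candidate action sequences, e.g. 2-D pusher motions over the horizon.
    candidates = np.random.uniform(-1, 1, size=(n_candidates, horizon, 2))
    predicted = predict_frames(model, current_frame, candidates)
    # Score each imagined outcome by how close it gets to the goal image.
    costs = np.mean((predicted - goal_frame) ** 2, axis=(1, 2))
    best = int(np.argmin(costs))
    return candidates[best, 0]  # execute only the first action, then replan

current = np.zeros((64, 64))  # placeholder camera image
goal = np.ones((64, 64))      # placeholder goal image
print("first planned action:", plan_action(None, current, goal))
```

Executing only the first action and then replanning from a fresh camera image keeps the robot’s ‘imagination’ anchored to what actually happened, rather than letting prediction errors pile up over a long sequence.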

Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, explained, “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it. This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The system, at its core, is a deep learning algorithm based on dynamic neural advection (DNA). The neural network decodes the information provided by the robot’s cameras by reading the pixels of the images they produce. From that pixel information it builds up a record it can learn from, analysing how the pixels change after certain actions are performed. “In the past, robots have learned skills with a human supervisor helping and providing feedback,” said doctoral student Chelsea Finn, inventor of the original DNA model. “What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own.” Frederik Ebert, a graduate student who worked on the project, said, “We have shown that it is possible to build a robotic system that leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills.”
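The distinctive idea in DNA is that the next camera image is predicted by moving pixels from the current one, rather than generating new pixels from scratch. The toy sketch below shows that ‘advection’ step in isolation, assuming hand-made per-pixel weights; in the real model those weights would be predicted by the network from the image and the robot’s planned action.

```python
import numpy as np

def advect_frame(frame, kernels):
    """Pixel 'advection': each output pixel is a weighted average of the
    pixels around it in the previous frame. In DNA the weights are
    predicted by the network from the image and the planned action; here
    they are supplied by hand so the mechanics are visible."""
    h, w = frame.shape
    k = kernels.shape[-1]            # kernel width, e.g. 5
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]   # neighbourhood in old frame
            out[i, j] = np.sum(patch * kernels[i, j])
    return out

# Toy usage: uniform weights simply blur the image; a trained network
# would instead predict weights that shift pixels the way objects move.
frame = np.random.rand(32, 32)
kernels = np.full((32, 32, 5, 5), 1.0 / 25)  # normalized per-pixel weights
next_frame = advect_frame(frame, kernels)
print(next_frame.shape)
```

Because the predicted image is assembled from pixels that already exist, the model doesn’t need to be told what any object is – it only needs to learn how the scene tends to move when the robot acts, which is exactly what the unsupervised ‘play phase’ provides.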

The evidence that humanity is about to give birth to a new form of intelligence continues to pile up. Soon we may have a new form of life on our hands, effectively making us gods. What will we do with that power once we hold it?

Read More: Google’s Artificial Intelligence Built An AI That Outperforms Any Made By Humans

Image Credit: Copyright: phonlamaiphoto / 123RF Stock Photo
