A new generation of swarming robots that can independently learn and evolve new behaviours in real-world applications is one step closer, thanks to research by the University of Bristol (www.bristol.ac.uk) and the University of the West of England (UWE) (www.uwe.ac.uk).
They used artificial evolution to enable the robots to automatically learn swarm behaviours that are understandable to humans.
This new advance — published in Advanced Intelligent Systems — could create new robotic possibilities for environmental monitoring, disaster recovery, infrastructure maintenance, logistics and agriculture.
Until now, artificial evolution has typically been run on a computer that is external to the swarm, with the best strategy then copied to the robots.
However, this approach is limiting, as it requires external infrastructure and a laboratory setting.
By using a custom-made swarm of robots with high processing power embedded within the swarm, the Bristol team was able to discover which rules give rise to desired swarm behaviours.
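The embodied-evolution idea described above can be sketched as a simple loop: the swarm holds a population of candidate controllers, evaluates them on the robots themselves, and repeatedly keeps and mutates the best. The sketch below is a minimal, hypothetical illustration; the parameter meanings, fitness function, and population sizes are placeholders, not the design process used in the Bristol study.

```python
import random

POP_SIZE = 8      # candidate controllers held within the swarm
GENERATIONS = 20
N_PARAMS = 3      # e.g. attraction, repulsion, alignment weights (illustrative)

def fitness(params):
    # Toy stand-in for evaluating a controller on the robots:
    # reward parameters close to an arbitrary target behaviour.
    target = [0.5, 0.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, sigma=0.1):
    # Small Gaussian perturbation, clamped to the valid range [0, 1].
    return [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in params]

def evolve():
    population = [[random.random() for _ in range(N_PARAMS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: POP_SIZE // 2]   # keep the best half
        # Refill the population with mutated copies of the elite.
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=fitness)

best = evolve()
```

In a real embodied setting, the evaluation step would run on the physical robots and candidates would be exchanged over the swarm's own network rather than via a central computer; the loop structure, however, stays the same.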
This could lead to robotic swarms that continuously and independently adapt ‘in the wild’ to their environment and the tasks at hand.
By making the evolved controllers understandable to humans, they can be queried, explained and improved. The lead author — Simon Jones from the University of Bristol’s Robotics Lab — said: “Human-understandable controllers allow us to analyse and verify automatic designs, to ensure safety for deployment in real-world applications.”
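To illustrate what "human-understandable" means here: a controller written as explicit, named rules can be printed, queried, and audited, unlike an opaque weight matrix. The rule set below is a made-up sketch, not a controller from the published study.

```python
def controller(sensors):
    """Map sensor readings to an action via readable, explainable rules."""
    if sensors["obstacle_distance"] < 0.2:
        return "turn_away"
    if sensors["neighbour_count"] < 2:
        return "move_toward_neighbours"
    return "move_forward"

# Because the rules are explicit, each decision can be traced to the rule
# that fired. Here obstacle_distance < 0.2, so the first rule applies:
action = controller({"obstacle_distance": 0.1, "neighbour_count": 5})
# action == "turn_away"
```

A human inspector can read such a controller line by line and verify it against safety requirements, which is the kind of analysis the quoted researchers describe.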
The engineers took advantage of the recent advances in high-performance mobile computing to build a swarm of robots inspired by those in nature.
Their ‘Teraflop Swarm’ has the ability to run the computationally intensive automatic design process entirely within the swarm, freeing it from the constraint of off-line resources.
The swarm reaches a high level of performance within 15 minutes — much faster than previous embodied evolution methods — and with no reliance on external infrastructure.
Alan Winfield at UWE said: “In many modern AI systems — especially those that use Deep Learning — it is almost impossible to understand why the system made a particular decision.
“This lack of transparency can be a real problem, if the system makes a bad decision and causes harm.
“An important advantage of the system described in this paper is that it is transparent; its decision-making process is understandable by humans.”