If the theory of evolution has worked well for us -- even if this is arguable these days -- why not apply it to mobile robots? asks Technology Research News. Several U.S. researchers have done just that, training neural networks to play the game of capture the flag. Once the neural networks were good enough at the game, they transferred them to the robots' onboard computers. These teams of mobile robots, named EvBots (for Evolution Robots), were then also able to play the game successfully. Within 3 to 6 years, this method could be used to build environment-aware autonomous robots able to clear a minefield or find heat sources in a collapsed building. But the researchers' longer-term goal is to build controllers for robots that adapt to completely unknown environments -- and this will not happen for another 10, or maybe even 50, years.
Evolutionary computing has previously been used to produce coherent robot behavior in simulation, and real robots have been used to evolve simple behaviors such as moving toward light sources and avoiding objects.
Researchers from North Carolina State University and the University of Utah have advanced the field by combining artificial neural networks and teams of real mobile robots to demonstrate that the behavior necessary to play Capture the Flag can be evolved in a simulation.
"The original idea... came from the desire to find a way to automatically program robots to perform tasks that humans don't know how to do, or tasks which humans don't know how to do well," said Andrew Nelson, now a visiting researcher at the University of South Florida.
After this introduction, let's look at how they developed and trained the neural networks -- and the robots.
The capture-the-flag learning behavior evolved in a computer simulation. The researchers randomly generated a large population of neural networks, then organized individual neural networks into teams of simulated robots that played tournaments of games against each other, said Nelson.
After each tournament, the losing networks were deleted from the population, and the winning neural networks were duplicated, altered slightly, and returned to the population.
"When they first start learning, [the networks] are unable to drive the robots correctly or even avoid objects or one another," said Nelson. "However, some of the networks are bound to be slightly better than others and this [is] enough to get the artificial evolution process started," he said. "After that, competition will drive the process to evolve better and better networks."
After several hundred generations, the neural networks had evolved well enough to play the game competently and were transferred into real robots for testing in a real environment. "The trained neural networks were copied directly onto the real robots' onboard computers," said Nelson.
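The loop Nelson describes -- random networks, tournaments between teams, deleting the losers, duplicating and mutating the winners -- is a classic evolutionary algorithm. Here is a minimal sketch in Python; the flat-weight-vector encoding, the toy scoring function, and all of the parameters are illustrative assumptions, not the authors' actual setup (which used a full capture-the-flag simulation):

```python
import random

POP_SIZE = 20        # number of candidate controllers in the population
GENOME_LEN = 8       # illustrative: weights of a tiny neural controller
MUTATION_STD = 0.1   # std-dev of the Gaussian noise applied when mutating
GENERATIONS = 100    # the real experiments ran for several hundred

def random_network():
    """A controller encoded as a flat weight vector (illustrative)."""
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def play_game(team_a, team_b):
    """Stand-in for a simulated capture-the-flag match.

    A toy score (sum of weights plus noise) decides the winner here;
    in the actual research this was a full robot-team simulation."""
    score = lambda team: sum(sum(net) for net in team) + random.gauss(0, 1)
    return score(team_a) > score(team_b)

def mutate(net):
    """Return a slightly altered copy of a winning network."""
    return [w + random.gauss(0, MUTATION_STD) for w in net]

population = [random_network() for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    random.shuffle(population)
    survivors = []
    # Group networks into two-robot "teams" and play a tournament.
    for i in range(0, POP_SIZE, 4):
        team_a, team_b = population[i:i+2], population[i+2:i+4]
        winners = team_a if play_game(team_a, team_b) else team_b
        # Losing networks are deleted; winners are kept and also
        # returned to the population as mutated duplicates.
        survivors.extend(winners)
        survivors.extend(mutate(net) for net in winners)
    population = survivors
```

Note that because each match removes two networks and adds back two mutated copies of the winners, the population size stays constant from generation to generation, while competition steadily biases it toward better players.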
Here are two EvBots trying to find their way (Credit: Center for Robotics and Intelligent Machines (CRIM), North Carolina State University).
What can we expect from these robots trained by evolution?
The method could be used to automatically tune well-defined components of robot control systems, said Nelson. "For example, a robot might retune its object avoidance mechanisms upon entering a new environment -- outdoors vs. inside," he said. This could be used practically in 3 to 6 years, he said.
The long-term benefit of evolutionary robotics research is that it may lead to controllers for robots that can automatically adapt to unknown environments, said Nelson. This ability is many years off, however -- more than 10, and perhaps as many as 50 years, he said.
The research work was published by the Robotics and Autonomous Systems journal in its March 31, 2004 issue (Volume 46, Issue 3, Pages 135-150) under the title "Evolution of neural controllers for competitive game playing with teams of mobile robots." Here are the links to the abstract and to the full report (PDF format, 16 pages, 443 KB). While the paper is quite technical, it also contains dozens of diagrams and illustrations showing the training process.
Finally, here is another approach to team building for robots, described in this previous story, "Robots Developing Team Building Skills."
Sources: Kimberly Patch, Technology Research News, May 19/26, 2004; and various websites