Roland Piquepaille's Technology Trends
How new technologies are modifying our way of life


Friday, May 21, 2004
 

If the theory of evolution has worked well for us -- even if this is arguable these days -- why not apply it to mobile robots? asks Technology Research News. Several U.S. researchers just did that and trained neural networks to play the capture-the-flag game. Once the neural networks were good enough at the game, they transferred them to the robots' onboard computers. These teams of mobile robots, named EvBots (for Evolution Robots), were then also able to play the game successfully. This method could be used within 3 to 6 years to build environment-aware autonomous robots able to clear a minefield or find heat sources in a collapsed building. But the researchers ultimately want to build controllers for robots that adapt to completely unknown environments, and this will not happen for at least 10, and perhaps as many as 50, years.
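
To give a rough idea of what copying a neural network onto a robot's onboard computer means in practice, here is a minimal, hypothetical Python sketch of a feedforward controller mapping sensor readings to wheel speeds. The layer sizes, sensor set and names are illustrative assumptions, not details taken from the researchers' work.

import math
import random

def tanh_layer(inputs, weights, biases):
    # One fully connected layer with a tanh activation.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

class RobotController:
    # A tiny two-layer feedforward network: sensor readings in, wheel speeds out.
    def __init__(self, num_sensors=5, num_hidden=6):
        self.hidden_w = [[random.uniform(-1, 1) for _ in range(num_sensors)]
                         for _ in range(num_hidden)]
        self.hidden_b = [random.uniform(-1, 1) for _ in range(num_hidden)]
        self.output_w = [[random.uniform(-1, 1) for _ in range(num_hidden)]
                         for _ in range(2)]
        self.output_b = [random.uniform(-1, 1) for _ in range(2)]

    def step(self, sensor_readings):
        # sensor_readings: e.g. a few range sensors plus a flag/goal signal.
        hidden = tanh_layer(sensor_readings, self.hidden_w, self.hidden_b)
        left_speed, right_speed = tanh_layer(hidden, self.output_w, self.output_b)
        return left_speed, right_speed

controller = RobotController()
print(controller.step([0.2, 0.9, 0.1, 0.0, 0.5]))  # one example frame of sensor data

Roughly speaking, it is numerical parameters like these weights and biases that the evolutionary process described below tunes, rather than a human programmer.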

Evolutionary computing has been tapped to produce coherent robot behavior in simulation, and real robots have been used to evolve simple behavior like moving toward light sources and avoiding objects.
Researchers from North Carolina State University and the University of Utah have advanced the field by combining artificial neural networks and teams of real mobile robots to demonstrate that the behavior necessary to play Capture the Flag can be evolved in a simulation.
"The original idea... came from the desire to find a way to automatically program robots to perform tasks that humans don't know how to do, or tasks which humans don't know how to do well," said Andrew Nelson, now a visiting researcher at the University of South Florida.

After this introduction, let's look at how they developed and trained the neural networks -- and the robots.

The capture-the-flag learning behavior evolved in a computer simulation. The researchers randomly generated a large population of neural networks, then organized individual neural networks into teams of simulated robots that played tournaments of games against each other, said Nelson.
After each tournament, the losing networks were deleted from the population, and the winning neural networks were duplicated, altered slightly, and returned to the population.
"When they first start learning, [the networks] are unable to drive the robots correctly or even avoid objects or one another," said Nelson. "However, some of the networks are bound to be slightly better than others and this [is] enough to get the artificial evolution process started," he said. "After that, competition will drive the process to evolve better and better networks."
After several hundred generations, the neural networks had evolved well enough to play the game competently and were transferred into real robots for testing in a real environment. "The trained neural networks were copied directly onto the real robots' onboard computers," said Nelson.
Here are two EvBots trying to find their way (Credit: Center for Robotics and Intelligent Machines (CRIM), North Carolina State University).
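
To make the evolutionary process more concrete, here is a minimal Python sketch of the kind of tournament-based loop described above. It is not the researchers' code: the population size, team size, mutation scheme and the play_game() placeholder are assumptions for illustration only.

import random

POPULATION_SIZE = 40   # assumed value
TEAM_SIZE = 2          # assumed value
GENERATIONS = 500      # "several hundred generations"
MUTATION_SCALE = 0.1   # assumed mutation strength
NUM_WEIGHTS = 200      # assumed network size (a flat vector of weights)

def random_network():
    return [random.uniform(-1.0, 1.0) for _ in range(NUM_WEIGHTS)]

def mutate(network):
    # Duplicate a winning network and alter it slightly.
    return [w + random.gauss(0.0, MUTATION_SCALE) for w in network]

def play_game(team_a, team_b):
    # Placeholder for the capture-the-flag simulation; returns the winning team.
    return team_a if random.random() < 0.5 else team_b

population = [random_network() for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    random.shuffle(population)
    next_population = []
    # Pair up teams and play one game per pairing.
    for i in range(0, POPULATION_SIZE, 2 * TEAM_SIZE):
        team_a = population[i:i + TEAM_SIZE]
        team_b = population[i + TEAM_SIZE:i + 2 * TEAM_SIZE]
        winners = play_game(team_a, team_b)
        # Losing networks are deleted; winners stay and also seed mutated copies.
        next_population.extend(winners)
        next_population.extend(mutate(net) for net in winners)
    population = next_population

In the real system, play_game() would be the physics and game-rules simulation, and each evolved weight vector would parameterize a controller like the sketch shown earlier.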

What can we expect from these robots trained by evolution?

The method could be used to automatically tune well-defined components of robot control systems, said Nelson. "For example, a robot might retune its object avoidance mechanisms upon entering a new environment -- outdoors vs. inside," he said. This could be used practically in 3 to 6 years, he said.
The long-term benefit of evolutionary robotics research is that it may lead to controllers for robots that can automatically adapt to unknown environments, said Nelson. This ability is many years off, however -- more than 10, and perhaps as many as 50 years, he said.

The research work was published in the Robotics and Autonomous Systems journal in its March 31, 2004 issue (Volume 46, Issue 3, Pages 135-150) under the title "Evolution of neural controllers for competitive game playing with teams of mobile robots." Here are the links to the abstract and to the full report (PDF format, 16 pages, 443 KB). While the paper is quite technical, it also contains dozens of diagrams and illustrations showing the training process.

Finally, here is another approach to team building for robots, described in this previous story, "Robots Developing Team Building Skills."

Sources: Kimberly Patch, Technology Research News, May 19/26, 2004; and various websites



