No Directions Required--Software Smartens Mobile Robots

A DARPA initiative to develop self-navigating robots opens a world of potential for autonomous vehicles, but will the government take advantage of its research or let it wither on the vine?

 
SMART ROBOT: DARPA's LAGR initiative awarded each of eight teams of scientists $2 million to $3 million to develop software that would give unmanned vehicles the ability to autonomously learn and navigate irregular off-road terrain.

 
LOOK OUT!: Classifying natural obstacles was one of several challenges researchers faced as they programmed their robots to evaluate their surroundings and avoid impassable terrain.

Computer experts recently gathered in San Antonio, Tex., to test one last time how well their software programs enabled a mobile robot vehicle to think for—and steer—itself. The event wrapped up the Defense Advanced Research Projects Agency's (DARPA) three-year
Learning Applied to Ground Robots (LAGR) initiative, which awarded each of eight teams of scientists $2 million to $3 million to develop software that would give unmanned vehicles the ability to autonomously learn and navigate irregular off-road terrain.

Autonomous maneuvering may not seem terribly difficult for a reasonably smart robot on wheels. But although some vegetation, such as short grass on a prairie, is easily traversable, obstacles such as dense bushes and tree trunks are not. To expediently reach point B, the robot must be able to quickly sort through a range of flora and decide which ones it can travel over—or through—and which are rigid, impenetrable barriers.

Researchers initially believed that visual learning—making basic sense of one's surroundings based on changes in light—would be easy to implement in computer systems. But Eero Simoncelli, a principal investigator at New York University's (N.Y.U.) Laboratory for Computational Vision, pointed out that humans take vision for granted and overlook its complexity. "For you to avoid an object in your path is trivial," he says. "What's visual input [to a computer]? It's a bunch of pixels. It's a bunch of numbers that tell you how much light fell on each part of the sensor. That's a long way from a description of a cup sitting on a table." Extracting symbolic definitions from a large set of numeric values, he adds, is much harder than anyone realized.
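Simoncelli's point can be made concrete with a toy sketch. To a program, a camera frame is only an array of light measurements; the tiny array, the brightness threshold, and the labels below are invented for illustration and have nothing to do with the LAGR teams' actual systems:

```python
# A minimal sketch of "visual input is just numbers" (all values hypothetical).
import numpy as np

# A tiny 4x4 "camera frame": each entry is how much light fell on that sensor cell.
frame = np.array([
    [0.9, 0.8, 0.2, 0.1],
    [0.9, 0.7, 0.2, 0.1],
    [0.8, 0.8, 0.3, 0.2],
    [0.9, 0.9, 0.2, 0.1],
])

# A naive numeric rule: call bright cells "grass" and dark cells "obstacle".
labels = np.where(frame > 0.5, "grass", "obstacle")
print(labels)

# The hard part Simoncelli describes is everything this rule ignores: nothing
# in the raw numbers says "tall grass you can drive through" versus "tree
# trunk you cannot" -- that symbolic judgment is what learning must supply.
```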

Classifying natural obstacles was but one of myriad factors that DARPA researchers had to predict and implement in a software program to expand the capacity of a mobile robot to quickly analyze and travel through an environment. "Of course, no one [knew] how to design this," says Yann LeCun, a professor of computer science at N.Y.U.'s Courant Institute of Mathematical Sciences, who led the university's team. "So DARPA [was] interested in funding projects that advance the science of [robot] learning and vision."

LeCun, who has a knack for designing computer systems that pick out the key visual features in an environment, was an ideal candidate for the LAGR project. DARPA provided the funding and a standard test vehicle so LeCun and Urs Muller, CEO of software maker Net-Scale Technologies in Morganville, N.J., could focus on writing the software. They set out to push the realm of vision-based navigation forward—or at least to bring it up to speed.

A 2002 study by the Washington, D.C.–based National Research Council found that, once a vehicle's physical capability and the complexity of its course are accounted for, gains in the speed of unmanned ground vehicles from 1990 to 2000 were far outpaced by the rapid improvement in computer processing over the same period. Muller points out that the past decade has brought a 100-fold increase in computing power and a 1,000-fold gain in memory capacity, yet developments in unmanned navigational systems have lagged far behind these advances and will continue to do so without new approaches to visual learning. "The limiting factor in software [design] is the human imagination," he says.

Until LAGR, most self-navigating mobile robots could only scan their immediate surroundings and plot a course over short distances. This made it difficult for robots to figure out an optimum route to any place farther than their own shortsighted universe of about 25 feet (7.6 meters), limiting them to a feel-as-you-go approach that often resulted in time-wasting, circuitous paths to a destination.

This visual (computational) restriction, which LAGR founder Larry Jackel likened to a person driving in a dense fog or blinding blizzard, motivated the program managers to challenge the depth perception of the contestant programs. In San Antonio, this was done by placing a goal (a set global positioning system, or GPS, point) directly behind a cul-de-sac formed by four-foot- (1.2-meter-) high plastic barriers. With a starting point several feet from the entrance, a program with short-range vision would drive straight to the goal—and toward the dead end—only to encounter a barrier, forcing the clueless robot to aimlessly search for a way out by navigating along the wall. A smarter robot with greater depth perception would have seen the dead end from afar and instantly adjusted its course to go around the barrier to reach the goal sooner.

Many teams failed to equip the standard-issue LAGR robot with long-range vision sharp enough to handle the cul-de-sac perfectly, but the participants still took advantage of a mapping system that stored acquired information about the barrier. This way, a robot adapted and modified its behavior to avoid repeating the same mistake. After two runs, the robots usually had mapped a complete picture of the continuous wall and figured out that they had to go around the obstacle to reach the goal.
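The behavior is easy to mimic in miniature. The following sketch, a deliberately simplified stand-in for the real LAGR software, drives a robot across a grid toward a GPS-style goal hidden behind a cul-de-sac: the robot senses a barrier only when it runs into one, records it in a map that persists between runs, and replans. The grid, the breadth-first planner, and the bump-only sensing model are all hypothetical:

```python
# Hypothetical sketch: short-sighted navigation plus a persistent obstacle map.
from collections import deque

# '#' = plastic barrier forming a cul-de-sac that opens toward the start S;
# the goal G sits directly behind the closed end, as in the San Antonio test.
GRID = [
    "..........",
    "....G.....",
    "..######..",
    "..#....#..",
    "..#....#..",
    "..#....#..",
    "....S.....",
]
H, W = len(GRID), len(GRID[0])
START = next((x, y) for y in range(H) for x in range(W) if GRID[y][x] == "S")
GOAL = next((x, y) for y in range(H) for x in range(W) if GRID[y][x] == "G")

def plan(start, goal, known_walls):
    """Breadth-first search over the world as the robot currently knows it."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # walk back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < W and 0 <= nxt[1] < H
                    and nxt not in prev and nxt not in known_walls):
                prev[nxt] = cell
                frontier.append(nxt)

def run(known_walls):
    """Drive to the goal, recording every barrier bumped into; count moves."""
    pos, moves = START, 0
    while pos != GOAL:
        step = plan(pos, GOAL, known_walls)[1]   # next cell of the current plan
        if GRID[step[1]][step[0]] == "#":        # short-range "bump" sensing
            known_walls.add(step)                # remember the wall cell...
        else:                                    # ...and replan next iteration
            pos, moves = step, moves + 1
    return moves

memory = set()                                   # the map persists across runs
print("run 1:", run(memory), "moves")            # probes the dead end first
print("run 2:", run(memory), "moves")            # reuses the remembered walls
```

The first run wastes moves probing the pocket; the second starts with the remembered wall cells and reaches the goal more directly, which is essentially the two-run improvement described above.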

In addition to the obstacles, a portion of the final LAGR challenge, called the "petting zoo," allowed contestants to demonstrate the specific strengths of their robot algorithms. LeCun exhibited his program's quick response to obstacles that suddenly popped up. This trait reflects a design akin to a human reflex: a faster (but less analytical) system searches six times per second for any obstacles within 15 feet (4.6 meters), while a slower system analyzes long-range data in more detail once every second. "We ran the robot through the crowd," he says, referring to spectators and LAGR teams who attended the event. "People weren't afraid of it since they saw it was driving really well and didn't bump anyone. It drives itself better than we can."
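That reflex-plus-deliberation split can be sketched as a dual-rate loop: a cheap short-range check runs roughly six times a second, while a costlier long-range analysis runs about once a second. Every function below is a hypothetical stand-in, not the team's actual code:

```python
# Hypothetical dual-rate control loop: fast reflex path, slow deliberative path.
import random
import time

FAST_PERIOD = 1.0 / 6   # reflex check: ~6 Hz, obstacles within ~15 ft (4.6 m)
SLOW_PERIOD = 1.0       # long-range analysis: ~1 Hz

def nearby_obstacle():          # stand-in for a cheap short-range stereo check
    return random.random() < 0.1

def swerve():                   # stand-in for an evasive maneuver
    print("reflex: obstacle close, swerving")

def replan_long_range():        # stand-in for the detailed long-range analysis
    print("deliberation: long-range plan updated")

next_slow = time.monotonic()
for _ in range(30):             # run the loop for roughly five seconds
    if nearby_obstacle():       # fast path fires on every ~167 ms cycle
        swerve()
    if time.monotonic() >= next_slow:   # slow path fires about once a second
        replan_long_range()
        next_slow += SLOW_PERIOD
    time.sleep(FAST_PERIOD)
```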

The LAGR competition is different from the sportier and better-publicized DARPA Urban Challenge, which features a course that resembles city streets, or the agency's Grand Challenge in which autonomous vehicles race through the desert. Both competitions allow vehicles to use cameras, sensors, GPS, radar and lasers, whereas LAGR vehicles essentially use stereo cameras, GPS and onboard computers.

The goal of autonomous vehicle research is to make unmanned transport an option during dangerous situations, such as war, to avoid putting a person's life at risk. Great strides are being made in visual navigation, thanks to projects like LAGR, but ever more sophisticated systems will eventually have to be developed to deal with increasingly complex problem-solving demands.

Now that LAGR has wrapped up, researchers are unsure whether DARPA will pony up cash for more such research. "It's hard to tell whether [LAGR] will be perceived as a great success or failure because the devil is in the details," says LeCun, who points out that the best systems ran 2.5 times faster than the baseline ones already built into the robot. "I think there is a huge potential in some of the techniques that were developed during this program. It would be a shame if people disappeared into the woods and nothing came of it."
