ScientificAmerican.com  

December 26, 2005
Innovations from a Robot Rally
This year's Grand Challenge competition spurred advances in laser sensing, computer vision and autonomous navigation--not to mention a thrilling race for the $2-million prize
By W. Wayt Gibbs
The most valuable and complex component in a modern vehicle typically is also the most unreliable part of the system. Driving accidents usually have both a human cause and a human victim. To certain engineers--especially those who build robots--that is a problem with an obvious solution: replace the easily distracted, readily fatigued driver with an ever attentive, never tiring machine.

The U.S. military, which has been losing soldiers to roadside bombs in Iraq for several years, is particularly keen on this idea. But by 2002 more than a decade of military-funded research on autonomous ground vehicles had produced only a few slow and clumsy prototypes.

So that year the Pentagon authorized its Defense Advanced Research Projects Agency (DARPA) to take an unconventional approach: a public competition with a $1-million prize. The next February DARPA director Anthony J. Tether announced that the Grand Challenge--the first long-distance race for driverless vehicles--would be held in the Mojave Desert in March 2004. When no robot completed that course, DARPA doubled the prize and scheduled a second running, through a different part of the desert, for October 2005.

The point of the Grand Challenge was not to produce a robot that the military could move directly to mass production, Tether says. The aim was to energize the engineering community to tackle the many problems that must be solved before vehicles can pilot themselves safely at high speed over unfamiliar terrain. "Our job is to take the technical excuse off the table, so people can no longer say it can't be done," Tether explained at the qualifying event held 10 days before the October 8 race.

Clearly, it can be done--and done in more than one way. This time five autonomous vehicles crossed the finish line, four of them navigating the 132-mile course in well under the 10 hours required to be eligible for the cash prize.

More important than the race itself are the innovations that have been developed by Grand Challenge teams, including some whose robots failed to finish or even to qualify for the race. These inventions provide building blocks for a qualitatively new class of ground vehicles that can carry goods, plow fields, dig mines, haul dirt, explore distant worlds--and, yes, fight battles--with little or no human intervention.

"The potential here is enormous," insists Sebastian Thrun, director of Stanford University's Artificial Intelligence Laboratory and also head of its robot racing team. "Autonomous vehicles will be as important as the Internet."

From Here to There
If robotics is ever to fulfill Thrun's bold prediction, it will have to leap technical hurdles somewhat taller than those posed by DARPA's competition. The Grand Challenge did define many of the right problems, however. To succeed in such a race, vehicles first have to plot a fast and feasible route for the long journey ahead. Next, the robots need to track their location precisely and find the road (if there is one) as well as any obstacles in their way. Finally, the machines must plan and maneuver over a path that avoids obstructions yet stays on the trail, especially at high speed and on slippery terrain.

Two hours before the event began, DARPA officials unveiled the course by handing out a computer file listing 2,935 GPS waypoints--a virtual trail of bread crumbs, one placed every 237 feet on average, for the robots to follow--plus speed limits and corridor widths. Many teams simply copied this file to their robots unchanged. But some used custom-built software to try to rapidly tailor a route within the allowed corridor that could win the race.
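
In practice, a team's first job was simply to read that file. A minimal Python sketch of a loader for a waypoint file of this kind--assuming one comma-separated record of ID, latitude, longitude, corridor width and speed limit per line, which may not match the real file's layout--might look like this:

```python
import csv
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    corridor_ft: float  # allowed lateral corridor width, feet
    limit_mph: float    # DARPA-imposed speed limit, mph

def load_route(path):
    # Assumed record layout: id, lat, lon, corridor, limit.
    route = []
    with open(path) as f:
        for row in csv.reader(f):
            _, lat, lon, corridor, limit = row[:5]
            route.append(Waypoint(float(lat), float(lon),
                                  float(corridor), float(limit)))
    return route
```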

The Red Team, based at Carnegie Mellon University, raised this mission-planning task to a military level of sophistication. In a mobile office set up near the starting chutes, 13 route editors, three speed setters, three managers, a statistician and a strategist waited for the DARPA CD. Within minutes of its arrival, a "preplanning" system that the team had built with help from Science Applications International Corporation, a major defense contractor, began overlaying the race area with imagery drawn from a 1.8-terabyte database containing three-foot-resolution satellite and aerial photographs, digital-elevation models and laser-scanned road profiles gathered during nearly 3,000 miles of reconnaissance driving in the Mojave.

The system automatically created initial routes for Sandstorm and H1ghlander, the team's two racers, by converting every vertex to a curve, calculating a safe speed around each curve, and knocking the highest allowable speeds down to limits derived from months of desert trials at the Nevada Automotive Testing Center. The software then divided the course and the initial route into segments, and a manager assigned one segment to each route editor.
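
The "safe speed around each curve" step is, at bottom, a lateral-acceleration budget: cornering force grows with the square of speed, so a curve of radius r imposes a ceiling of v = sqrt(a_max * r). A minimal sketch, using an illustrative 3 m/s^2 budget rather than any figure from the Red Team's trials:

```python
import math

def safe_curve_speed(radius_m, lat_accel_limit=3.0, darpa_limit=None):
    # a_lat = v^2 / r  =>  v = sqrt(a_lat_max * r); never exceed the
    # DARPA-imposed limit for the segment when one applies.
    v = math.sqrt(lat_accel_limit * radius_m)
    return min(v, darpa_limit) if darpa_limit is not None else v
```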

Flipping among imagery, topographic maps and reconnaissance scans, the editors tweaked the route to take tight turns the way a race driver would and to shy away from cliff edges. They marked "slow" any sections near gates, washouts and underpasses; segments on paved roads and dry lake beds were assigned "warp speed."

The managers repeatedly reassigned segments so that at least four pairs of eyes reviewed each part of the route. Meanwhile, in a back room, team leaders pored over histograms of projected speeds and estimates of elapsed time. Team leader William "Red" Whittaker ordered completion times of 6.3 hours for H1ghlander and 7.0 hours for Sandstorm, and the system adjusted the commanded speeds to make it so.
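
Hitting an ordered completion time amounts to rescaling segment speeds until the projected elapsed time matches the target while respecting each segment's cap. A simplified sketch of that adjustment; the data layout and the fixed-point iteration are assumptions, not the team's planner:

```python
def retime_route(segments, target_hours, iters=20):
    # segments: (length_m, commanded_mps, cap_mps) triples.
    target_s = target_hours * 3600.0
    speeds = [min(v, cap) for _, v, cap in segments]
    for _ in range(iters):
        eta = sum(L / v for (L, _, _), v in zip(segments, speeds))
        scale = eta / target_s  # >1 means we must speed up
        speeds = [min(v * scale, cap)
                  for (_, _, cap), v in zip(segments, speeds)]
    return speeds  # e.g., retime_route(segments, 6.3) for H1ghlander
```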

Hitting the Road
Roads change--desert roads more than most--so no map is ever entirely up-to-date. And even the perfect route is of no value unless the robot always knows where it is and where it needs to go next. Every vehicle in the Grand Challenge was equipped with differential GPS receivers. They are generally accurate to better than three feet, but overpasses and canyons block the GPS signal, and it sometimes shifts unpredictably.

Most teams thus added other tracking systems to their robots, typically inertial navigation systems that contain microelectromechanical accelerometers or fiber-optic gyroscopes. But two of the competitors created technologies that promise to be more accurate or less expensive, or both.

A team of high school students from Palos Verdes, Calif., found inspiration in the optical mouse used with desktop computers. They installed a bright lamp in their Doom Buggy robot and directed the white light onto the ground through optical tubing. A camera aimed at the bright spot picks up motion in any horizontal direction, acting as a two-dimensional odometer accurate to one millimeter. "We call it the GroundMouse," says team member Ashton Larson.

The Intelligent Vehicle Safety Technologies (IVST) team, staffed by professional engineers from Ford, Honeywell, Delphi and Perceptek, used a similar technique on its autonomous pickup truck. A radar aimed at the ground senses Doppler shifts in the frequency of the reflected beam, from which the robot then calculates relative motion with high precision. Whenever the vehicle loses the GPS fix on its position, it can fall back on dead-reckoning navigation from its radar odometer.
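
Whether the odometer is optical or Doppler radar, the fallback logic is the same: when the GPS fix drops, integrate body-relative distance along the inertially measured heading. A minimal sketch, assuming a flat local ground frame and a heading supplied by the vehicle's inertial unit:

```python
import math

class DeadReckoner:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y  # meters, local ground frame

    def update(self, gps_fix, odom_dist_m, heading_rad):
        if gps_fix is not None:
            self.x, self.y = gps_fix      # trust GPS when available
        else:
            # dead-reckon from the ground-relative odometer
            self.x += odom_dist_m * math.cos(heading_rad)
            self.y += odom_dist_m * math.sin(heading_rad)
        return self.x, self.y
```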

In the desert, even human drivers sometimes have difficulty picking out a dirt trail. It takes very clever software indeed to discriminate terrain that is probably road from terrain that is probably not. Such software, Tether says, "is a big part of what I call the 'secret sauce' that makes this technology work."

The experience of the Grand Challenge suggests that for robots, laser scanners provide the best view for this task. By rapidly sweeping an infrared laser beam across a swath of the world in front of the machine, a scanner creates a three-dimensional "point cloud" of the environment. A single laser beam cannot cover both distant objects and nearby road with sufficient fidelity, however, so a robot typically uses several in concert.
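
One common way to turn such a point cloud into obstacles is to rasterize it into ground cells and flag any cell whose vertical spread is too large to be road. The sketch below assumes that simple height-spread test; fielded systems layer far more filtering on top, and the thresholds here are illustrative:

```python
from collections import defaultdict

def find_obstacles(points, cell_m=0.5, height_gap_m=0.3):
    # points: (x, y, z) laser returns in meters, vehicle ground frame.
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell_m), int(y // cell_m))].append(z)
    # A flat road cell has little vertical spread; a rock or post does.
    return {c for c, zs in cells.items() if max(zs) - min(zs) > height_gap_m}
```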

More lasers are not necessarily better. IRV, the Indy Robot Racing Team's autonomous Jeep, sported 11. But when the vehicle's sensors were knocked out of alignment, it ran over hay bales, caught fire and was eliminated during the qualification round. Without accurate calibration, laser scanners place obstacles in the wrong spot on the robot's internal map, drawing the vehicle into the very objects it is trying to avoid.
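
The arithmetic behind that failure mode is unforgiving: a boresight error of a couple of degrees displaces a distant return by meters, which is plenty to steer a planner into the very thing it is avoiding. A quick illustration:

```python
import math

def placement_error_m(range_m, misalignment_deg):
    # Lateral error of a mapped obstacle: roughly range * sin(angle).
    return range_m * math.sin(math.radians(misalignment_deg))

# placement_error_m(60, 2.0) -> about 2.1 m of error at 60 m range
```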

David Hall of Team DAD, a two-man operation from Morgan Hill, Calif., created a novel laser sensor that addresses the calibration problem by fixing 64 lasers inside a motorized circular platform that whirls 10 times a second. A bank of fast digital signal processors, programmed in low-level assembly language, handles the flood of data. In prerace trials, the sensor was able to pick out obstacles the size of a person from up to 500 feet away.

The Red Team took a different but equally innovative approach with its two robots. Each carries a single long-range laser that can do the job of many, because it swivels, rolls and nods on top of an articulated arm called a gimbal. Protected by a dome and windshield that look like a giant eyeball on top of the robot, the laser can tilt up or down when the vehicle climbs or descends. As the robot approaches a turn, the gimbal swivels left or right, keeping its eye trained on the road.
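
The pointing geometry itself is simple: aim at a look-ahead point on the planned route and command the pan angle between that bearing and the vehicle's current heading. A minimal sketch of the idea, not Red Team code:

```python
import math

def gimbal_pan_command(vehicle_xy, heading_rad, lookahead_xy):
    dx = lookahead_xy[0] - vehicle_xy[0]
    dy = lookahead_xy[1] - vehicle_xy[1]
    pan = math.atan2(dy, dx) - heading_rad
    # wrap to [-pi, pi] so the gimbal takes the short way around
    return math.atan2(math.sin(pan), math.cos(pan))
```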

Red Team engineers also mounted fiber-optic gyroscopes to each of the gimbal's three axes and linked them via a feedback system to actuators that stabilize the laser so that it holds steady even as the vehicle jumps underneath it. The team failed to integrate that stabilization capability with the robots' other systems in time to use it for the race. But both Motion Zero, a company just launched by the Blue Team in Berkeley, Calif., and HD Systems in Hauppauge, N.Y., are miniaturizing the technology and planning to market it for use in satellites, weapons systems and camera platforms.
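
In control terms, each gimbal axis runs a feedback loop that servos out pointing error while feeding back the gyro's measured rate to cancel the vehicle's motion underneath the sensor. A one-axis sketch with placeholder gains; a real loop runs at high rate with filtering and actuator limits:

```python
def stabilize_axis(pointing_error_rad, gyro_rate_rad_s, kp=8.0, kd=0.5):
    # Proportional correction toward the target, minus rate feedback
    # that cancels the disturbance the gyroscope measures.
    return kp * pointing_error_rad - kd * gyro_rate_rad_s
```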

A Path to the Future
Indispensable as lasers seem to be, they have their drawbacks. At $25,000 to more than $100,000 each, the price of long-range laser scanners is formidable. Other kinds of sensors, such as video cameras and radars, can see farther and cost less. Yet these have their own weaknesses, and they produce torrents of data that are infamously hard to interpret.

Many teams equipped their robots with a combination of sensors. But only a few succeeded in building systems that could integrate the disparate perspectives to deduce a safe and fast path ahead--and do so many times a second.

Team Terramax's 15-ton robotic Oshkosh truck completed the course thanks in part to a novel "trinocular" vision system designed by Alberto Broggi's group at the University of Parma in Italy. The program selects from among three possible pairs of cameras to get an accurate stereo view of the near, medium or distant terrain. The higher its speed, the farther out the robot peers.
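
Conceptually the selection is a lookup from speed to stereo baseline, because a wider camera pair resolves depth at greater range. The camera pairings and speed thresholds below are illustrative, not the Parma group's values:

```python
def pick_stereo_pair(speed_mps):
    if speed_mps < 5:
        return ("cam_left", "cam_center")   # short baseline: near field
    elif speed_mps < 12:
        return ("cam_center", "cam_right")  # medium baseline
    else:
        return ("cam_left", "cam_right")    # widest baseline: far field
```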

After the competition, Thrun reflected that one of the key advantages of his Stanford team's Stanley robot, which won the race and the $2 million, was its vision-based speed switch. Stanley uses a simple but powerful form of machine learning to hit the gas whenever it spots a smooth road extending into the distance.
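
Stripped to its essentials, such a speed switch maps how far the smooth road visibly extends to a commanded speed. The linear ramp and every number below are assumptions for illustration, not Stanley's actual tuning:

```python
def speed_command(drivable_range_m, v_min=7.0, v_max=17.0, horizon_m=70.0):
    # The farther the smooth road extends, the faster we drive.
    frac = max(0.0, min(1.0, drivable_range_m / horizon_m))
    return v_min + frac * (v_max - v_min)
```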

Some of the innovations with the greatest reach, however, appeared on robots that never reached the finish line. The IVST team, for example, devoted desert trials to discovering the optimum sensor configurations for its Desert Tortoise in a variety of "contexts"--such as washboard trail, paved highway or interstate underpass. As the robot drives, explains team leader William Klarquist, "the vehicle chooses an appropriate context that switches off some sensors, switches on others, and reassigns the confidence that it places in each one." This technique should allow a robot to move from desert to, say, farmland and still perform well by loading a new set of contexts.
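
A context, in this scheme, is little more than a table of per-sensor confidence weights that the vehicle swaps in as conditions change. A minimal sketch with hypothetical contexts and weights:

```python
# Hypothetical contexts and weights, after IVST's idea.
CONTEXTS = {
    "washboard_trail": {"laser": 1.0, "radar": 0.6, "vision": 0.3},
    "paved_highway":   {"laser": 0.8, "radar": 0.9, "vision": 1.0},
    "underpass":       {"laser": 1.0, "radar": 0.9, "vision": 0.2},
}

def fuse_detections(detections, context):
    # detections: (sensor_name, obstacle, confidence) triples.
    # A weight of zero effectively switches that sensor off.
    weights = CONTEXTS[context]
    return [(obstacle, conf * weights.get(sensor, 0.0))
            for sensor, obstacle, conf in detections]
```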

In IRV, the Indy Robot Racing Team demonstrated a "plug and play" system for sensors, a feature that is probably a prerequisite for the creation of an autonomous vehicle industry. The far-flung team of more than 100 engineers needed a way to swap sensors and software modules in and out of the robot easily as the group tested and refined the system. So they invented a network protocol (analogous to the hypertext transfer protocol on which the Web runs) for autonomous driving.

Each sensor on IRV plugs into a dedicated computer, which boils the raw data down to a set of obstacle coordinates and sizes, and then translates that into the network protocol. Every sensor computer broadcasts its obstacle list to all other sensors and to the robot's central path-planning computer. The standard makes removing a malfunctioning radar or upgrading a buggy vision algorithm as simple as changing a tire.
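
The article gives only the outline of IRV's protocol--obstacle coordinates and sizes, broadcast to every node--so the message schema below is an assumption. A minimal sketch using JSON over a UDP broadcast:

```python
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class Obstacle:
    x_m: float       # position in the vehicle's ground frame
    y_m: float
    radius_m: float  # bounding size
    conf: float      # detection confidence, 0 to 1

def broadcast_obstacles(sensor_id, obstacles, port=9000):
    # One datagram per scan: any planner or peer sensor can listen.
    msg = json.dumps({"sensor": sensor_id,
                      "obstacles": [asdict(o) for o in obstacles]}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(msg, ("255.255.255.255", port))
    sock.close()
```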

With the dust hardly settled from the race, the next milestone for autonomous ground vehicles is not yet clear. DARPA's Tether points to military interest in convoys that use a human lead driver to send coordinates to a pack of robots behind. Whittaker aimed to have H1ghlander tending fences on his farm by the end of 2005, and by November he was already drafting proposals for a lunar mission. Both he and Thrun said they received lucrative offers from commercial investors in the days before the race. So whatever else happens, these robots will keep moving.
