DARPA LAGR Program
The Learning Applied to Ground Vehicles (LAGR) program, which ran from 2004 until 2008, had the goal of accelerating progress in autonomous, perception-based, off-road navigation by robotic unmanned ground vehicles (UGVs). LAGR was funded by DARPA, a research agency of the United States Department of Defense.
History and Background
While mobile robots had been in existence since the 1960s (e.g., Shakey), progress in creating robots that could navigate on their own, outdoors, off road, on irregular, obstacle-rich terrain had been slow. In fact, no clear metrics were in place to measure progress.[1] A baseline understanding of off-road capabilities began to emerge with the DARPA PerceptOR program,[2] in which independent research teams fielded robotic vehicles in unrehearsed Government tests that measured average speed and the number of required operator interventions over a fixed course through widely spaced waypoints. These tests exposed the extreme challenges of off-road navigation. While the PerceptOR vehicles were equipped with sensors and algorithms that were state-of-the-art for the beginning of the 21st century, the limited range of their perception technology caused them to become trapped in natural cul-de-sacs. Furthermore, their reliance on pre-scripted behaviors did not allow them to adapt to unexpected circumstances. The overall result was that, except on essentially open terrain with minimal obstacles or along dirt roads, the PerceptOR vehicles were unable to navigate without numerous, repeated operator interventions.
The LAGR program was designed to build on the methodology introduced in PerceptOR while seeking to overcome the technical challenges that the PerceptOR tests had exposed.
LAGR Goals
The principal goal of LAGR was to accelerate progress in off-road navigation of UGVs. Additional, synergistic goals included (1) establishing a benchmarking methodology for measuring the progress of autonomous robots operating in unstructured environments, (2) advancing machine vision and thus enabling long-range perception, and (3) increasing the number of institutions and individuals able to contribute to forefront UGV research.
Structure and Rationale of the LAGR Program
The LAGR program was designed[3] to focus on developing new science for robot perception and control rather than on new hardware. Thus, it was decided to create a fleet of identical, relatively simple robots that would be supplied to the LAGR researchers, who were members of competing teams, freeing them to concentrate on algorithm development. Each team was given two robots of the standard design. The teams developed new software on these robots and then sent the code to a Government test team, which ran that code on Government robots at various test courses. These courses were located throughout the US and were not known to the teams in advance. In this way, the code from all teams could be tested in essentially identical circumstances. After an initial startup period, the code development/test cycle was repeated about once a month.
The standard robot was designed and built by the Carnegie Mellon University National Robotics Engineering Center (CMU NREC). The vehicles’ computers were preloaded with a modular “Baseline” perception and navigation system, essentially the same system that CMU NREC had created for the PerceptOR program, which was considered to represent the state of the art at the inception of LAGR. The modular nature of the Baseline system allowed the researchers to replace parts of the Baseline code with their own modules and still have a complete working system, without having to create an entire navigation system from scratch. Thus, for example, they were able to compare the performance of their own obstacle detection module with that of the Baseline code while holding everything else fixed. The Baseline code also served as a fixed reference: in any environment, and at any time in the program, a team’s code could be compared to the Baseline code. The rapid monthly cycle gave the Government team and the performer teams quick feedback, and it allowed the Government team to design test courses that challenged the performers in specific perception tasks and whose difficulty was likely to challenge, but not overwhelm, the performers’ current capabilities. Teams were not required to submit new code for every test, but usually did. Despite this leeway, some teams found the rapid test cycle distracting from their long-term progress and would have preferred a longer interval between tests.
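This modular structure can be pictured as a pipeline of interchangeable stages behind fixed interfaces. The following Python sketch is purely illustrative of that architecture; the stage names and interfaces are assumptions, not the actual Baseline API.

```python
from dataclasses import dataclass
from typing import Callable

# A sketch of the kind of modular pipeline the Baseline design implies:
# each stage is a replaceable component behind a fixed interface.
# Stage names and signatures here are illustrative assumptions.

@dataclass
class NavigationPipeline:
    detect_obstacles: Callable  # e.g., stereo-based obstacle detection
    build_cost_map: Callable    # fuse detections into a traversability map
    plan_path: Callable         # choose motion commands from the map

    def step(self, sensor_frame):
        obstacles = self.detect_obstacles(sensor_frame)
        cost_map = self.build_cost_map(obstacles)
        return self.plan_path(cost_map)

# A team could swap in its own detector while keeping the rest of the
# stack fixed, isolating the effect of that one module:
# pipeline = NavigationPipeline(my_detector, baseline_cost_map, baseline_planner)
```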
To advance to Phase II, each team had to modify the Baseline code so that, on the final three Phase I Government tests, robots running the team’s code averaged at least 10% faster than a vehicle running the original Baseline code. This rather modest “Go/No Go” metric was chosen to allow teams to pursue risky but promising approaches that might not be fully developed in the first 18 months of the program. All eight teams achieved this metric, with some already scoring more than twice the speed of the Baseline on the later tests, which was the objective for Phase II. Note that the Phase I Go/No Go metric meant that teams were not in competition with each other for a limited number of Phase II slots: any number of teams, from eight down to zero, could make the grade. DARPA designed this strategy to encourage cooperation, and even code sharing, among teams that were not in a zero-sum battle with each other. DARPA expected that professional pride would spur the teams to excel, and this expectation was confirmed when many teams far exceeded the Phase I and later the Phase II metrics.
The LAGR Teams
Eight teams were selected as performers in Phase I, the first 18 months of LAGR. The teams were from Applied Perception (Principal Investigator [PI] Mark Ollis), Georgia Tech (PI Tucker Balch), the Jet Propulsion Laboratory (PI Larry Matthies), Net-Scale Technologies (PI Urs Muller), NIST (PI James Albus), Stanford University (PI Sebastian Thrun), SRI International (PI Robert Bolles), and the University of Pennsylvania (PI Daniel Lee).
The Stanford team resigned at the end of Phase I to focus its efforts on the DARPA Grand Challenge; it was replaced by a team from the University of Colorado, Boulder (PI Greg Grudic). Also in Phase II, the NIST team suspended its participation in the competition and instead concentrated on assembling the best software elements from each team into a single system. Roger Bostelman became PI of that effort.
The LAGR Vehicle
The LAGR vehicle, which was about the size of a supermarket shopping cart, was designed to be extremely simple to control. (A companion DARPA program, Learning Locomotion,[4] addressed complex motor control.) It was battery powered and had two independently driven wheelchair motors in the front and two caster wheels in the rear. When the front wheels were rotated in the same direction, the robot drove forward or in reverse; when they were driven in opposite directions, the robot turned.
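This drive arrangement is standard differential-drive kinematics. The sketch below illustrates the forward/turn mapping; the wheel radius and track width are hypothetical values, not the LAGR vehicle’s actual specifications.

```python
# Illustrative differential-drive kinematics for a LAGR-style robot.
# The wheel radius and track width below are hypothetical, not the
# vehicle's published specifications.
WHEEL_RADIUS = 0.2  # meters (assumed)
TRACK_WIDTH = 0.6   # distance between the two drive wheels, meters (assumed)

def body_velocity(omega_left, omega_right):
    """Map left/right wheel angular rates (rad/s) to forward speed (m/s)
    and turn rate (rad/s)."""
    v_left = WHEEL_RADIUS * omega_left
    v_right = WHEEL_RADIUS * omega_right
    forward = (v_left + v_right) / 2.0       # equal rates: straight motion
    turn = (v_right - v_left) / TRACK_WIDTH  # opposite rates: turn in place
    return forward, turn

print(body_velocity(5.0, 5.0))   # (1.0, 0.0): drives straight
print(body_velocity(-5.0, 5.0))  # (0.0, ~3.33): spins in place
```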
The approximately $30,000 cost of the LAGR vehicle meant that a fleet could be built and distributed to a number of teams, expanding the field of researchers beyond those who had traditionally participated in DARPA robotics programs. The vehicle’s top speed of about 3 miles per hour and relatively modest weight of about 100 kg meant that it posed a much smaller safety hazard than the vehicles used in previous unmanned ground vehicle programs, further reducing the budget each team needed to manage its robot.
Nevertheless, the LAGR vehicles were sophisticated machines. Their sensor suite included two pairs of stereo cameras, an accelerometer, a bumper sensor, wheel encoders, and a GPS receiver. The vehicle also had three user-programmable computers.
Scientific Results
A cornerstone of the program was the incorporation of learned behaviors in the robots. In addition, the program used passive optical systems to accomplish long-range scene analysis. By combining long-range perception with learned behavior, LAGR made a qualitative break with the myopic, brittle behavior that characterized most UGV autonomous navigation in unstructured environments when the program was formulated in 2003.
The difficulty of testing UGV navigation in unstructured, off-road environments made accurate, objective measurement of progress a challenging task. While no absolute measure of performance was defined in LAGR, the relative comparison of a team’s code to the Baseline code on a given course demonstrated whether progress was being made in that environment. By the conclusion of the program, testing showed that many of the performers had attained leaps in performance that significantly advanced the state of the art. In particular, average autonomous speeds were increased by a factor of three, and useful visual perception was extended to ranges as far as 100 meters.[5]
While LAGR did succeed in extending the useful range of visual perception, this was done primarily by pixel- or patch-based color or texture analysis; object recognition was not directly addressed.
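As a rough illustration of patch-based color analysis, the following sketch labels image patches by comparing their color histograms to reference histograms for traversable and non-traversable terrain. It is a minimal example in the general spirit of such methods, not any team’s actual algorithm; the patch size and distance measure are assumptions.

```python
import numpy as np

# Minimal patch-based terrain labeling by color histogram distance.
# Patch size and the use of Euclidean histogram distance are assumptions.
PATCH = 16  # patch edge length in pixels (assumed)

def color_histogram(patch, bins=8):
    """Concatenated per-channel color histogram, normalized to sum to 1."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def classify_patches(image, traversable_hist, obstacle_hist):
    """Label each patch True (traversable) or False (obstacle) by which
    reference histogram its own histogram is closer to."""
    rows, cols = image.shape[0] // PATCH, image.shape[1] // PATCH
    labels = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            patch = image[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH]
            h = color_histogram(patch)
            labels[i, j] = (np.linalg.norm(h - traversable_hist)
                            < np.linalg.norm(h - obstacle_hist))
    return labels
```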
Even though the LAGR vehicle had a WAAS-enabled GPS, its position estimate was never accurate to within the width of the vehicle, so it was hard for the systems to re-use obstacle maps of areas the robots had previously traversed, since the GPS reading continually drifted. The drift was especially severe under forest canopy. A few teams developed visual odometry algorithms that essentially eliminated this drift.
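As an illustration of the idea, the sketch below estimates frame-to-frame camera motion from tracked image features using OpenCV. It is a minimal monocular example under assumed camera intrinsics; the LAGR teams’ visual odometry (typically stereo-based) was considerably more robust.

```python
import cv2
import numpy as np

# Minimal monocular visual-odometry step: track features between two
# frames and recover the relative camera motion. The intrinsics matrix
# K below is an assumed placeholder, not a calibrated LAGR camera.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def frame_to_frame_motion(prev_gray, curr_gray):
    """Estimate rotation R and unit-scale translation t between two frames."""
    # Detect corners in the previous frame and track them into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    # Recover relative camera motion from the tracked correspondences.
    E, mask = cv2.findEssentialMat(good_curr, good_prev, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K, mask=mask)
    return R, t  # translation is up to scale for a single camera
```

Chaining these incremental motions yields a drift-resistant pose estimate that can anchor previously built obstacle maps even when GPS wanders.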
LAGR also had the goal of expanding the number of performers and removing the need for large-scale system integration, so that valuable technology nuggets created by small teams could be recognized and then adopted by the larger community.
Some teams developed extremely rapid methods for learning from a human teacher: a human could operate the robot by radio control (RC) and give signals specifying “safe” and “unsafe” areas, and the robot could quickly adapt and then navigate with the same policy. This was clearly demonstrated when the robot was taught to be aggressive, driving over dead weeds while avoiding bushes, or alternatively taught to be timid, driving only on mowed paths.
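A minimal sketch of such teacher-driven adaptation follows, assuming a simple online linear classifier over hand-chosen patch features; the feature vectors and helper names are illustrative, not any team’s actual code.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Teacher-driven online learning: each "safe"/"unsafe" signal given during
# an RC demonstration immediately updates the terrain classifier, so the
# robot can navigate with the taught policy right away.
model = SGDClassifier(loss="log_loss")  # online logistic regression
classes = np.array([0, 1])              # 0 = unsafe, 1 = safe

def teacher_update(patch_features, label):
    """Incorporate one labeled terrain patch as an incremental update."""
    model.partial_fit(patch_features.reshape(1, -1), [label], classes=classes)

def is_safe(patch_features):
    """Classify a new terrain patch with the policy learned so far."""
    return bool(model.predict(patch_features.reshape(1, -1))[0])

# During teleoperation, each teacher signal becomes a training example
# (random vectors stand in for real patch features here):
rng = np.random.default_rng(0)
teacher_update(rng.random(32), 1)  # operator marks a patch as safe
teacher_update(rng.random(32), 0)  # operator marks a patch as unsafe
print(is_safe(rng.random(32)))
```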
LAGR was managed in tandem with the DARPA Unmanned Ground Combat Vehicle – PerceptOR Integration (UPI) program. UPI combined state-of-the-art perception with a vehicle of extreme mobility. The best stereo algorithms and the visual odometry from LAGR were ported to UPI. In addition, the ongoing interactions between the LAGR PIs and the UPI team resulted in the incorporation of adaptive technology into the UPI codebase, with a resulting improvement in the performance of the UPI "Crusher" robots.
Program Management
LAGR was administered under the DARPA Information Processing Technology Office. Larry Jackel conceived the program and was the Program Manager from 2004 to 2007. Eric Krotkov, Michael Perschbacher, and James Pippine contributed to LAGR's conception and management. Charles Sullivan played a major role in LAGR testing. Tom Wagner was the Program Manager from mid-2007 to the program's conclusion in early 2008.
References
[edit]- ^ See especially appendix C, National Research Council of the National Academies, “Technology Development for Army Unmanned Ground Vehicles,” National Academies Press, Washington DC , 2002.
- ^ E. Krotkov, S. Fish, L. Jackel, M. Perschbacher, and J. Pippine, “The DARPA PerceptOR evaluation experiments. Autonomous Robots, 22(1):pages 19-35, 2007.
- ^ L.D. Jackel, Douglass Hackett, Eric Krotkov, Michael Perschbacher, James Pippine, and Charles Sullivan. “How DARPA structures its robotics programs to improve locomotion and navigation. Communications” of the ACM, 50(11):pages 55-59, 2007.
- ^ James Pippine, Douglas Hackett, Adam Watson, “ An overview of the Defense Advanced Research Projects Agency’s Learning Locomotion program,” International Journal of Robotic Research, Vol 30, Num 2, pages 141-144, 2011
- ^ For detailed discussion of LAGR results see the Special Issues of Journal of Field Robotics, Vol 23 issues 11/12 2006 and Vol 26 issue 1/2 2009 .