Preparing for a Robotics Field Test

Most robotics programs inevitably reach a phase of development in which the system must be tested in the field: where the environment and usage scenarios closely reflect the conditions in which the robot will ultimately be used. This migration from the "lab" environment can incur substantial logistics overhead: additional support personnel, establishing remote offices/labs, transporting equipment and conducting post-test analysis. For this reason, it is critical that field test activities avoid wasted time, redundant operations and lost data.

In my experience, the time spent preparing for a field test almost always ends up saving time and effort in the long run, regardless of whether you are running out to the parking lot or setting up a remote test station. Of course, as with any planning activity, the resource investment necessary for the test will ultimately dictate the level of preparation effort that makes sense. In most cases, however, there are a few key practices that can be adapted to a project of any size and that help avoid common pitfalls.

Develop a test plan. Rough environmental conditions, unfamiliar surroundings and unexpected robotic behavior can all contribute to a lack of focus during field testing. Outlining a plan for discrete tests in terms of the overall purpose, necessary setup/configuration, test procedure and expected results can help to reduce mistakes and limit unnecessary activities. It can be especially useful to design the final presentation of results (including expected analysis, graphs and figures) as an exercise to determine what tests will be necessary. As part of the test plan, include a set of "unit tests" which can be used to verify the function of key subsystems prior to full system-level testing. In many cases, the process of developing a test plan uncovers unforeseen procedural details or required system modifications which can be handled more efficiently prior to field testing. However, it's important to reiterate that the effort involved in test planning should scale with the resources necessary to run the test. Also, be prepared to deviate from the plan if needed, or even to call off testing prematurely, if the data collected will not be useful or other activities take precedence.
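A discrete test can be captured in a simple structure so the purpose, setup, procedure and expected result are written down before anyone drives to the test site. The sketch below is one minimal way to do this; all names, field values and the "unit test" entry are illustrative, not part of any particular framework.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One discrete test from the field test plan (fields are illustrative)."""
    name: str
    purpose: str
    setup: list       # configuration steps required before the run
    procedure: list   # ordered steps to execute during the run
    expected: str     # expected result, stated before the test is run

# A unit test of a key subsystem, to run before full system-level testing.
plan = [
    TestCase(
        name="unit-gps-fix",
        purpose="Verify the GPS subsystem reports a stable fix",
        setup=["power on GPS", "wait 60 s for satellite lock"],
        procedure=["log 100 position messages", "check fix-quality field"],
        expected="100/100 messages report a 3D fix",
    ),
]

# Printing the plan doubles as a quick review checklist before departure.
for case in plan:
    print(f"{case.name}: {case.purpose} -> expecting: {case.expected}")
```

Writing expected results down first, as the plan suggests, makes deviations obvious in the field rather than during post-test analysis.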

Plan to record as much information as possible, automatically. It's time consuming and expensive to re-run tests, so set the system up to collect as much meaningful data as possible the first time. Automate the data recording process using scripts or by building the logging facilities into the software itself, rather than relying on a test procedure that is subject to human error. Recording external video (with audio) of the robot is always a good idea. Video provides an intuitive, external representation of the robotic system which can be used to identify and diagnose problems or convey details of the testing within presentations. Similarly, an overview still image (wide angle/panoramic) of the test site is extremely useful for providing context for the experiment. Don't forget to record configuration variables that can be altered between test runs, as well as the specific data required to run algorithms and reproduce failure modes offline.
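One way to build this into the software is a logger that writes the run configuration as the first record, then stamps every sample automatically, so nothing depends on a human remembering to note settings. This is a minimal sketch under assumed names; the line-delimited JSON format and the configuration fields are illustrative.

```python
import json
import time

def start_log(path, config):
    """Open a log and record the run configuration first, so settings
    that change between runs are never lost (format is illustrative)."""
    f = open(path, "w")
    f.write(json.dumps({"type": "config", "config": config}) + "\n")
    return f

def log_sample(f, name, value):
    # Each sample carries its own timestamp so runs can be replayed offline.
    f.write(json.dumps({"type": "data", "t": time.time(),
                        "name": name, "value": value}) + "\n")

cfg = {"max_speed_mps": 1.5, "planner": "rrt"}  # hypothetical settings
log = start_log("run.log", cfg)
log_sample(log, "battery_v", 24.1)
log.close()
```

Because the configuration rides along in the log itself, post-test analysis never has to guess which settings produced a given run.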

Protect the integrity of logged data. It is important to understand and prevent the conditions under which digital data logging can fail. When possible, set up data logging on the processor where the data is captured and processed, rather than relying on wireless communication, which can result in latency or dropped packets. Create a naming convention (often including time/date stamps) for log files which prevents the overwriting of data from a previous test run. Measure the rate of logging and calculate the size of storage necessary to avoid running out of disk space during a test.

Verify the utility of logged data. Do not immediately trust the accuracy of the data provided by sensors and subsystems — independently verify the accuracy of recorded data in the lab before field testing. In addition, it must be possible to correlate the data accurately in time for it to be useful in post-test analysis. Time stamp all recorded data (video, data files, messages) and ensure that system clocks (including cameras) are synchronized before testing. Within the software, time stamps should be associated with collected data as soon after the measurement as possible. For precision systems, it may be necessary to model the latency between the time of the actual measurement and when the time stamp can be applied to the data.
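Stamping close to the measurement, and correcting for known latency, looks roughly like the following. The latency constant and the sensor reader are placeholders for values you would measure and drivers you would actually call.

```python
import time

SENSOR_LATENCY_S = 0.012  # measured transport delay (hypothetical value)

def read_sensor():
    # Placeholder for a real driver call.
    return 42.0

def stamped_measurement():
    """Stamp as soon after the measurement as possible, then subtract the
    modeled latency so the stamp reflects when the measurement occurred."""
    value = read_sensor()
    stamp = time.time() - SENSOR_LATENCY_S
    return stamp, value
```

For most systems a constant offset like this is enough; precision systems may need a fuller latency model, as the paragraph above notes.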

Conduct a dry run. Run through a complete field experiment and conduct a sample post-test analysis in the lab. Follow your test plan. Record the data and video. Back up the logged data and put together an analysis indicative of the desired final field test result. This exercise can uncover problems with data accuracy and data correlation, and highlight specific needs for post-test analysis tools. Every problem that you uncover and fix at this stage while in the lab will save considerable time and effort in the field. As part of this process, develop a checklist of key items to verify (e.g., disk space, battery levels, configuration settings, logging) prior to leaving the lab and while testing.
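Such a checklist can be turned into a small script that runs the same way before leaving the lab and at the test site. This is a sketch under stated assumptions: `read_battery_v` is a hypothetical stand-in for a real battery monitor, and the thresholds are illustrative.

```python
import shutil

def check_disk(path=".", min_free_gb=10.0):
    """True if the log volume has at least min_free_gb free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= min_free_gb

def read_battery_v():
    return 24.1  # stand-in for the real battery monitor

# Each entry maps a checklist label to a pass/fail check.
checklist = {
    "disk space": lambda: check_disk(".", 10.0),
    "battery level": lambda: read_battery_v() > 22.0,  # hypothetical threshold
    "logging enabled": lambda: True,  # replace with a real check of the logger
}

def run_checklist(checks):
    """Return the labels of failed checks; an empty list means go."""
    return [name for name, ok in checks.items() if not ok()]
```

Running the script, rather than reading a paper list, keeps the verification itself free of the human error the earlier sections warn about.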

Of course, these broad practices cannot prevent every field test hiccup. Unforeseen mistakes and robot anomalies will inevitably occur. In these situations, the key is to maintain focus and make incremental progress toward the overall program goals. At the end of the day, the effort you put into preparing for a robotic field test should make you more flexible and provide a framework for making dynamic, program-focused decisions.

About the Author


Brian has spent the last decade developing autonomous capability for a wide range of unmanned ground vehicles, from man-transportable systems to large multiple-ton platforms. His technical focus is on integrated robotic systems, including real-time sensor processing, mobility planning and control. Prior to joining Neya, Brian served as a project manager at QinetiQ North America, where he led the development of automated capability for several of the military's fielded robotic EOD and squad support platforms, including QinetiQ's DragonRunner, TALON, and MAARS unmanned systems. He has led research programs for a wide range of agencies in the Department of Defense, including DARPA, TARDEC, TATRC, SPAWAR, NAVEODTECHDIV and U.S. SOCOM. Brian holds an MS in Electrical and Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Virginia Tech.