
RoboCupRescue Robot League Rules for 2013


Change history:

  • 2012/12/03, J. Pellenz: Initial version
  • 2013/01/18, J. Pellenz: Yellow Arena: walls might have only half height (idea from Amir H. Soltanzadeh and Jafar Chegini)
  • 2013/06/27, J. Pellenz: Added z-position for QR code



The RoboCupRescue Robot League has three main objectives:

  • Increase awareness of the challenges involved in deploying robots for emergency response applications such as urban search and rescue and bomb disposal,
  • Provide objective performance evaluations of mobile robots operating in complex yet repeatable environments, and
  • Promote collaboration between researchers.

Robot teams demonstrate their capabilities in mobility, perception, localization and mapping, mobile manipulation, practical operator interfaces, and assistive autonomous behaviors to improve operator performance and/or robot survivability. All missions in the arenas are conducted via remote teleoperation as the robots search for simulated victims in a maze of terrains and challenges based on emerging standard test methods for response robots. Winning teams must reliably perform 7-10 missions of 20-30 minutes each from various start points to find the most victims. As robots continue to demonstrate successes against the obstacles posed in the arenas, the level of difficulty will continually increase so the arenas always serve as a stepping-stone from the laboratory to the real world. Meanwhile, the annual competitions provide direct comparison of robotic approaches, objective performance evaluations, and a public proving ground for capable robotic systems that will ultimately be used to save lives.

Competition Vision

When disaster happens, minimize risk to search and rescue personnel while increasing victim survival rates by fielding teams of collaborative mobile robots which enable human rescuers to quickly locate and extract victims. Specific robotic capabilities encouraged in the competition include the following:

  • Negotiate compromised and collapsed structures
  • Locate victims and ascertain their conditions
  • Produce practical sensor maps of the environment
  • Establish communications with victims
  • Deliver fluids, nourishment, medicines
  • Emplace sensors to identify/monitor hazards
  • Mark or identify best paths to victims
  • Provide structural shoring for responders

These tasks are encouraged through challenges posed in the arena, specific mission tasks, and/or the performance metric. Demonstrations of other enabling robotic capabilities are always welcome.

Search Scenario

A building has partially collapsed due to an earthquake. The Incident Commander in charge of rescue operations at the disaster site, fearing secondary collapses from aftershocks, has asked for teams of robots to immediately search the interior of the building for victims.

The mission for the robots and their operators is to find victims, determine their situation, state, and location, and then report back their findings in a map of the building with associated victim data. The section near the building entrance appears relatively intact while the interior of the structure exhibits increasing degrees of collapse. Robots must negotiate and map the lightly damaged areas prior to encountering more challenging obstacles and rubble. The robots are considered expendable in case of difficulty.

This is a league of teams competing not against one another, but rather collaborating against the application domain itself to implement viable robotic capabilities.

Innovation is the goal, so the design space is wide open – just make it work!

A maze of walls, doors, elevated floors and complex terrains provide various tests for robot mobility, manipulation, and mapping capabilities. Sensory obstacles, intended to confuse specific robot sensors and perception algorithms, provide additional challenges while searching for simulated victims. Intuitive operator interfaces and robust sensory fusion algorithms are highly encouraged to reliably negotiate the arenas, locate victims, and map the results.

Repeatable test method apparatuses

The RoboCupRescue arenas constructed to host the competitions consist of emerging standard test methods for emergency response robots developed by the U.S. National Institute of Standards and Technology through the ASTM International Committee on Homeland Security Operations; Operational Equipment; Robots (E54.08.01). The color coded arenas form a continuum of challenges with increasing levels of difficulty for robots and operators. See for more information on the standard test methods.

The test method apparatuses are easy to build, so anybody can practice the tasks in advance!

Simulated Victims

The objective for each robot in the competition, and the incentive to traverse every corner of the arena, is to find simulated victims. Each simulated victim is either a mannequin or part of a baby doll, emitting body heat and other signs of life, including motion (shifting, waving), sound (crying or audible numbers to identify), and/or carbon dioxide to simulate breathing. Particular combinations of these sensor signatures imply the victim’s state: unconscious, semi-conscious, or aware.

Each victim is placed in a particular rescue situation: surface, trapped, void, or entombed based on the number of access holes and the direction they face, side or upward. Each victim also displays identification tags such as hazmat labels and eye charts that are usually placed in hard to see areas around the victim, requiring advanced robot mobility or directed perception to identify. Once a victim is found, the robot must identify all signs of life, read the tags, determine the victim’s location, and then report their findings on a human readable GeoTiff map with a pre-determined format (see mapping section).


  • 4 Yellow Arena (autonomous only)
  • 4 Orange Arena (auto or teleop)
  • 4 Red Arena (auto or teleop)
  • 2 Radio Drop-Out Zone (auto nav.)

Signs of Life to Identify

Victims are located in directed perception boxes with limited access holes.

Every victim box contains visual acuity challenges and other signs of life:

  • Form (doll or mannequin parts)
  • Visual (eye charts and hazardous materials labels)
  • Thermal (heating pad)
  • Motion (waving cloth)
  • Sound (random numbers)
  • CO2 (bicycle tire cartridges)

Victim Situations

Victims can have 1, 2, or 3 access holes (15 cm diameter) through which robots must identify the signs of life. Triple access holes will be found in arenas with more difficult tasks to perform, such as full autonomy in the Yellow arena and advanced mobility in the Red arena. Orange arena victims will mostly be viewed through single access holes. Situations are represented by viewing directions into the victim boxes, which will also vary. Victim boxes can be located below the robot on the elevated floors.

  • “Trapped” victims are in boxes open on top
  • “Void” victims are in boxes open to side
  • “Entombed” victims are in boxes with single access holes in any direction.

Victim Elevations

Victims can be placed at any of four elevation levels and will be distributed equally over the course of multiple missions:

  • 0-40 cm
  • 40-80 cm
  • 80-120 cm
  • 120-160 cm


Types of arena


General remarks about the arena:

  • Hallways have a width of 120 cm. Other elements, such as staircases and stepfields, likewise use the standard footprint of 120 cm x 120 cm.
  • In some sections, floor elements (ramps) are placed on the ground, and there may be a roof above the path. The height of the robot should therefore be significantly lower than 1.2 m; a maximum height of around 0.7 m is desirable, but not enforced.
  • As a benefit for small robots, one or more triangular openings (60 cm x 60 cm x 60 cm) or square holes (60 cm x 60 cm), each with one side flat on the ground, are added to the arena. Smaller robots can use them as shortcuts to reach victims faster.

Yellow Arena

The purpose of the Yellow arena is to encourage fully autonomous robot navigation and sensor fusion capabilities. It consists of random mazes of 1.2 m wide hallways and larger rooms with continuously rolling and pitching 15° ramps throughout to challenge localization and mapping implementations. Paper and debris cover the ramps to thwart odometry sensors and encourage reliable feature-based approaches that can transition to realistic environments. Victims in the Yellow arena can only be scored by fully autonomous robots; all other robots must map the Yellow arena and then navigate into the other arenas to score victim points. Robots are required to autonomously search the environment until they recognize more than one co-located sign of life associated with a victim. The robot should use the multiple sensor signatures to guide its approach onto the pallet directly in front of the victim, saturating all its available sensors for confidence and displaying them on the interface before calling the operator to verify. If the robot is more than one pallet away from the victim, or if it has identified a single false positive indication (e.g. a lone heating pad in the environment), the robot is penalized 1 minute. If the robot is correctly placed on the pallet in front of the victim, the judge uses all the sensor signatures displayed on the interface to score the victim. The operator is then allowed to map the victim (one keystroke) and resume the search.
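
The alert rule described above (call the operator only for more than one co-located sign of life, since a lone false positive costs a 1-minute penalty) can be sketched as follows. The function and label names are illustrative, not part of the rules:

```python
# Sketch of the Yellow arena operator-alert rule. The sign-of-life labels
# mirror the list in the rules; the function name is a hypothetical choice.

SIGNS_OF_LIFE = {"form", "visual", "thermal", "motion", "sound", "co2"}

def should_alert_operator(detections):
    """detections: set of sign-of-life labels sensed at one location.

    Returns True only for two or more co-located signs; a lone signature
    (e.g. a stray heating pad) must not trigger a call to the operator.
    """
    valid = detections & SIGNS_OF_LIFE
    return len(valid) >= 2
```

For example, a heating pad plus CO2 at the same pallet justifies an alert, while a heating pad alone does not.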

Challenges for robots with autonomous navigation and victim identification:

  • Random mazes with continuous 15° ramps flooring to challenge localization/mapping
  • Openings to harder terrains to encourage terrain classification
  • Walls might have only half height (60 cm), or they might start 60 cm above the ground (see image below). This makes it harder for autonomous robots to simply perform wall following; a single, fixed-mounted 2D laser scanner might also not be enough to find a safe way through the arena.

Orange Arena

The purpose of the Orange arena is to encourage robots to negotiate more difficult structured terrains and obstacles. The random maze continues with 1.2 m wide hallways and rooms, but now contains crossing 15° ramp flooring to increase complexity. A 45° inclined plane provides challenges for motor torque and configuration management to maximize friction. Stairs with 20 cm risers, a 40° incline, and rounded wood tread edges provide access to elevated flooring platforms. So-called pipe steps, which use 10 cm diameter plastic pipes stacked to form 20 cm and 30 cm high steps, encourage variable-geometry robots and good operator interfaces to ascend reliably. Confined spaces under the elevated flooring platforms constrict clearance while on ramp flooring to a minimum of 50 cm vertically, with complex ceiling features like stalactites made from 10 cm square posts. Mobile manipulation challenges include negotiating the complex terrains and working on side slopes to reach victim locations that range from 0-40 cm, 40-80 cm, and 80-120 cm elevation. Mobile manipulators must also stow well to ascend the stairs and the inclined plane, and deploy while under the confined spaces to identify victims.

Challenges for robots with modest mobility

  • Crossing ramps (15°)
  • Inclined plane (45°)
  • Stairs (40-45°)
  • Rolling pipe steps
  • Confined spaces
  • Manipulator challenges

Red Arena

The purpose of the Red arena is to encourage innovative mobility approaches that can reliably maneuver and deploy sensors and/or manipulators in complex terrain. The stepfield terrains provide a describable, repeatable, reproducible terrain for the Red arena. Stepfields are made from 10 cm square posts cut to lengths of 10, 20, 30, 40, and 50 cm. Initially, all the stepfield terrains in RoboCupRescue were arranged in random topographies which tended toward flat, hill, and diagonal hill pallets. As intended, this produced a very difficult challenge for robot developers, both for mobility and for operator awareness during remote teleoperation. However, these random stepfield pallets were not producing repeatable results over many trials, which is a necessity to become a standard test method. So the latest configurations of stepfield terrains are more symmetric rather than random: they appear as similar mobility challenges to the robot either forwards or backwards over the terrain. They are very easy to describe and build, and so should make a good test method apparatus. RoboCupRescue has been, and will continue to be, instrumental in capturing performance data for robots demonstrating advanced mobility, especially in confined spaces. These new stepfield terrains will be set up as an open room (2.4 m wide x 6 m long) as shown below, and as hallways with forced turns in the maze configuration.
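
The symmetry requirement above (the terrain reads the same forwards and backwards) can be sketched as a generator for a single 120 cm pallet. The 12x12 layout follows from 10 cm posts on a 120 cm pallet; the random fill is an illustrative choice, not an official configuration:

```python
import random

POST_HEIGHTS_CM = (10, 20, 30, 40, 50)  # post lengths named in the rules
PALLET_POSTS = 12                       # 120 cm pallet / 10 cm square posts

def symmetric_stepfield(seed=None):
    """Generate a 12x12 grid of post heights (cm) with mirror symmetry
    along the travel axis, so the pallet poses the same challenge in
    either direction. The random topography is illustrative only."""
    rng = random.Random(seed)
    grid = [[0] * PALLET_POSTS for _ in range(PALLET_POSTS)]
    half = (PALLET_POSTS + 1) // 2
    for r in range(half):
        for c in range(PALLET_POSTS):
            h = rng.choice(POST_HEIGHTS_CM)
            grid[r][c] = h
            grid[PALLET_POSTS - 1 - r][c] = h  # mirror the row front-to-back
    return grid
```

Reversing the row order of a generated grid yields the identical grid, which is the "same forwards or backwards" property.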

Generally, flying robots can score victims placed in the Red arena. However, the victims there might be covered by a roof, and the flying robot must enter this artificial cave to score the points. (Flying robots cannot participate in the best-in-class mobility competition.)

Blue Arena

The purpose of introducing pick and place tasks in the arena is to encourage development of Cartesian-controlled mobile manipulators with inverse kinematics that can perform grasping and precision placement of items at different levels (0, 50, 100 cm) and reaches (30, 60 cm) while working in complex terrains (initially 15° ramps). Wood blocks (10 cm cubes covered in duct tape) provide a relatively lightweight object to manipulate. Addition of a single eye-bolt screwed into one side allows simplified grasping, hooking, or carrying and delivering on a rod. The eye-bolt is roughly 2 cm in diameter. This can be considered a sensor, communications repeater, or other useful object that is adapted for easy handling by a robot. There are also full water bottles (500 ml) and small radios that can be grasped and manipulated to encourage the more generalized approaches needed to retrieve samples in the field. The goal is to place one of these three objects inside a found victim box to score an additional 20 points - the same incentive as the mapping capability. Three items can be carried in as a payload on the robot from the initial mission start point, but additional items must be retrieved from the Blue arena shelves. Teams may choose which items to have available and in which orientation they will be placed on the shelving targets.

Challenges for mobile manipulators with coordinated controlled arms, automatic tool changing, object grasping, and/or payload carrying and precision placement capabilities.

Radio Drop Out Zone

The purpose of the radio drop-out zone (clearly marked with black/yellow hazard tape) is to encourage autonomous behaviors on reasonably mobile robots. Bounded autonomous navigation behaviors such as wall following or centering between obstacles should suffice for this challenge. The operator can remotely teleoperate the robot through the Orange arena to the beginning pallet of the radio drop-out zone. Once the robot is in position on the initial pallet of the zone, the operator can initiate an autonomous behavior to try to navigate the marked hallway. It will initially consist of a few turns but will get more complex toward the final missions. The floor will be continuous 15° ramps, just like the Yellow arena. A bonus victim will be placed on the far side of the radio drop-out zone so that once the robot is through, and radio communications can resume, the victim can be found via remote teleoperation. The same victim will count again if the robot can return through the radio drop-out zone to the initial start point at the beginning of the zone. So there are 2 extra victims worth of points to find, depending on the robot’s onboard autonomous behaviors and mobility. Resets go back to the radio drop-out zone entrance.

Challenges for robots with modest mobility and autonomous navigation capabilities (e.g. wall following behaviors)
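
The kind of bounded behavior the rules suggest (e.g. wall following) can be sketched on a grid world. This is a right-hand-rule follower on a cell map (1 = wall, 0 = free); the grid representation, the step budget, and all names are illustrative assumptions, not part of the rules:

```python
# Right-hand wall following on a grid map, in (row, col) coordinates with
# rows growing downward. Headings cycle clockwise: E -> S -> W -> N.

RIGHT_OF = {(0, 1): (1, 0), (1, 0): (0, -1), (0, -1): (-1, 0), (-1, 0): (0, 1)}
LEFT_OF = {v: k for k, v in RIGHT_OF.items()}

def wall_follow(grid, start, heading, goal, max_steps=200):
    """Keep the wall on the robot's right until the goal cell is reached.
    Returns the visited path, or None if the step budget runs out."""
    def free(cell):
        r, c = cell
        return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0

    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        right = RIGHT_OF[heading]
        if free((pos[0] + right[0], pos[1] + right[1])):
            heading = right              # right is open: turn into it
        elif not free((pos[0] + heading[0], pos[1] + heading[1])):
            heading = LEFT_OF[heading]   # blocked ahead too: turn left
            continue                     # re-evaluate before moving
        pos = (pos[0] + heading[0], pos[1] + heading[1])
        path.append(pos)
    return path if pos == goal else None
```

On a simple L-shaped hallway this traces the wall around the corner to the far side, which is all the drop-out zone initially demands.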

Black Arena

The purpose of introducing a mini emergency response scenario into the arenas is to begin to correlate performance seen in the arena features with tasks in more unstructured environments. One or more vehicles will be placed within the arena and called the Black arena. The vehicles will have assorted rubble surrounding them, as if they were involved in an earthquake collapse of an overpass, tunnel, or parking garage. Victims will likely be inside, since at the time of the accident the vehicle was being driven on a roadway. One side of the accident will have more complex terrain associated with a possible collapse; this side will be accessible from the Red arena stepfield terrain. The other side of the vehicles may be relatively clear, as if it were a hazardous materials event rather than a collapse; this side will be accessible from the Yellow, Orange, or Radio Drop-Out Zone. Directed perception tasks to look into the windows to identify victims, and into the trunk and other areas for potential hazards, will be required. Finding victims and hazards will score points just like in the rest of the arenas.

Aerial Arena

The purpose of the Aerial arena is to encourage autonomous behaviors on small unmanned aerial systems (sUAS). Specifically, this event is focused on vertical take-off and landing aerials, typically quad-rotor aerials, that are under 2 kg total weight. The tasks to be demonstrated will encourage vertical and horizontal station-keeping with automatic standoff and centering on windows. There will also be victim boxes with targets hanging in free-space over the ground arenas. The aerial victim boxes may be hanging nearby each other to add complexity. Line following tasks may also be included. Wind effects generated by fans will add environmental complexity in later missions. This will extend the same basic tasks from the ground arenas into the aerial domain. So all the aerial targets will contain similar signs of life such as form, heat, and sound to challenge the aerial’s ability to deploy sensors both for station-keeping and for target identification. Best-In-Class aerials in the finals missions will provide overflight imagery, maps, and other helpful information to all ground robots in the finals missions (not in real time) to encourage collaboration opportunities for ground/aerial operations.

Challenges for micro/mini aerial vehicles with autonomous stand-off and station-keeping capabilities.

If you plan to bring an aerial vehicle, please contact the local organizers, since the aerial arena is only set up if there is a request for one.


Mapping

Your robot should map the arena with range finder sensors (such as laser range finders, 3D depth cameras, or a Kinect sensor). The 2D map, which represents a roughly 50 cm slice through the arena, has to be handed in as a GeoTIFF file at the end of the mission. Maps of several robots can be combined, as long as the merging happens automatically. Only one map per mission can be turned in; if your robot has produced multiple maps, pick the best one for the grading.

Introduction of a mandatory GeoTiff map format solves two problems:

  1. It provides for a standard map format that can be used to compare maps both to ground truth arena designs and to other maps across missions and teams.
  2. It also allows us to develop algorithmic scoring metrics to automatically evaluate the location accuracy of victims initially, and ultimately map quality more generally.

The map color scheme should highlight the important features without distraction. The goal is to have certain key information be immediately identifiable, making the map useful as a printed page, while beginning to explore the potential of a graphical viewer that allows interacting with the map (printing in black and white is also a secondary consideration). Below you find a description of the GeoTIFF map format for the RoboCupRescue Robot League competitions. The colors for each element are unique so that algorithmic scoring methods can be used without confusion. The gray values deliberately do not have exactly the same value for R, G, and B; this marks the pixel as modified and makes it easy to reconstruct the original value when needed.

FILENAME: DARK BLUE (RGB: 0, 44, 207) TEXT For example, “RoboCup2009-TeamName-Prelim1.tiff” displayed in the upper left corner to identify the map, make it sort properly in a directory, and findable on a computer.

MAP SCALE: DARK BLUE (RGB: 0, 50, 140) TEXT AND EXACTLY 1 METER LONG LINE Display this in the upper right corner to indicate the scale of the map.

MAP ORIENTATION: DARK BLUE (RGB: 0, 50, 140) TEXT (“X” AND “Y”) AND ABOUT 50 cm LONG ARROWS Display this next to the map scale. It gives the orientation for the victim location in the victim file. Must be a right-handed coordinate system: X points upwards, Y to the left.
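
The orientation convention above (right-handed, X up, Y left) has to be mapped onto image pixels, whose rows grow downward and columns grow rightward. A minimal conversion sketch, assuming an illustrative resolution and origin (neither is fixed by the rules):

```python
# Convert a world position (meters, right-handed: X up, Y left) into
# (row, col) pixel coordinates of the map image. Resolution and origin
# are hypothetical parameters for illustration.

def world_to_pixel(x_m, y_m, origin_px, resolution_px_per_m=100):
    """X up  -> smaller row index (image rows grow downward).
    Y left -> smaller col index (image cols grow rightward)."""
    origin_row, origin_col = origin_px
    row = origin_row - round(x_m * resolution_px_per_m)
    col = origin_col - round(y_m * resolution_px_per_m)
    return row, col
```

At 100 px/m, a point 1 m "up" (X = 1) from the origin lands 100 rows above the origin pixel, and a point 1 m to the "left" (Y = 1) lands 100 columns to its left.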

UNEXPLORED AREA GRID: LIGHT/DARK GREY (RGB: 226, 226, 227 / RGB: 237, 237, 238) CHECKERBOARD WITH 100 cm SQUARES This solid checkerboard pattern should show the unexplored area and provide scale on all sides of the mapped area. It should also print in black and white without ambiguity with other areas potentially turned grey in the process.

EXPLORED AREA GRID: BLACK (RGB: 190, 190, 191) GRID WITH 50 cm SPACING AND ABOUT 1 cm THICK LINES (use a one pixel line in the map) This grid should only appear in the explored area, behind any walls, victim locations, or other information. The grid should be aligned with the checkerboard pattern of the unexplored area, but twice as fine to allow visual inspection of wall alignments.

INITIAL ROBOT POSITION: YELLOW (RGB: 255, 200, 0) ARROW This should mark the initial pose of the robot and always be pointed toward the top of the map.

WALLS AND OBSTACLES: DARK BLUE (RGB: 0, 40, 120) FEATURES This should indicate the walls and other obstacles in the environment. The color should make the walls stand out from everything else.

SEARCHED AREA: WHITE CONFIDENCE GRADIENT (RGB: 128, 128, 128 to RGB: 255, 255, 255) This should be based on the confidence that the area is really free. It should produce a clean white when seen as free by all measurements and nearly untouched when undecided, that is, nearly equally seen as occupied as free, to produce a dither effect.

CLEARED AREA: LIGHT GREEN CONFIDENCE GRADIENT (RGB: 180, 230, 180 to RGB: 130, 230, 130) This should be based on a history of 1-50 scans to show the area cleared of victims with confidence. This should also factor in the actual field of view and range of onboard victim sensors – noting that victim sensors don’t typically see through walls!

VICTIM LOCATION: SOLID RED (RGB: 240, 10, 10) CIRCLE WITH ABOUT 35CM DIAM CONTAINING WHITE (RGB) TEXT “#” This should show the locations of victims with a victim identification number such as “1” in the order they were found. Additional information about this victim should be in the victim file noted below.

HAZARD LOCATION: SOLID ORANGE (RGB: 255, 100, 30) DIAMOND WITH ABOUT 30CM SIDES CONTAINING WHITE (RGB) TEXT “#” This should show the locations of hazards with an identification number such as “1” in the order they were found. Additional information about this hazard should be in the hazard file noted below.

QR CODE LOCATION: SOLID BLUE (RGB: 10, 10, 240) CIRCLE WITH ABOUT 35CM DIAM CONTAINING WHITE (RGB) TEXT “#” This should show the locations of QR codes with the QR code number such as “1” in the order they were found. The details about the QR code (such as the text coded in the pattern) must also be listed in the CSV-file.

ROBOT PATH: MAGENTA (RGB: 120, 0, 140) LINE ABOUT 2 CM THICK This should show the robot path.

Please make sure that all text is readable when the map is printed out on letter size or A4 paper!
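
For reference when rendering the GeoTIFF, the color scheme above can be collected as constants. The RGB triples are transcribed from the spec text; the dictionary keys are illustrative names, not part of the format:

```python
# RGB constants transcribed from the RoboCupRescue GeoTIFF map spec.
# Gradient entries hold (start, end) pairs; checkerboard holds both greys.

MAP_COLORS = {
    "filename_text": (0, 44, 207),
    "map_scale_text": (0, 50, 140),          # also used for map orientation
    "unexplored_grid": ((226, 226, 227), (237, 237, 238)),
    "explored_grid": (190, 190, 191),
    "initial_robot_position": (255, 200, 0),
    "walls_and_obstacles": (0, 40, 120),
    "searched_area": ((128, 128, 128), (255, 255, 255)),
    "cleared_area": ((180, 230, 180), (130, 230, 130)),
    "victim_location": (240, 10, 10),
    "hazard_location": (255, 100, 30),
    "qr_code_location": (10, 10, 240),
    "robot_path": (120, 0, 140),
}
```

Note how the "grey" entries have slightly unequal R, G, B components, per the spec, so a scorer can tell modified pixels apart from true greys produced elsewhere.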

Example of a map produced by the team Hector Darmstadt (Mexico City, 2012).

QR Codes

RoboCupRescue is about searching. This includes victims, of course, but also other things: tables or closets might give hints where people may have taken refuge during an earthquake. Recognizing and mapping such objects is a long-term goal; for now we simplify the task by placing tags (QR codes) that represent these objects.

These autonomously recognizable QR codes are distributed around the arena. If the robot is able to identify the QR codes, it earns extra points. If the robot maps a QR code correctly (uniquely, and within 1 m of the true location), it earns additional points. Example source code is provided to help teams quickly integrate this into their code (see below).

These QR codes play several roles.

  1. They bring back the incentive to perform a complete search of the arena, something that disappeared when we removed the arena secrecy component back in 2007.
  2. They allow us to measure your coverage of the arena area more accurately and with higher resolution.
  3. They allow us to properly measure the amount of the arena that has been observed with visual camera (as a stand-in for your victim finding sensor).

These QR codes come in two forms:

  1. In victim boxes: A high resolution version (small print), alongside and worth a similar number of points to the existing hazmat and eye chart labels.
  2. At the wall in the arena: On an A4 sheet, 5 QR codes are placed that name an object, e.g. “chair” or “clock”. Next to the QR codes, there will be a picture of this particular object. If the named object is “hazmat”, then the picture shows a particular hazmat label. If the robot is able to find out autonomously which hazmat label it is, it scores extra. The text on the QR codes is not unique within the arena; however, identical QR codes are placed at least 1.2 m apart. If the same QR code is placed in different locations, both occurrences of the QR code should be listed. Around 20 of these “objects” will appear in the orange and red arenas, up to 20 in the yellow arena and - if there is an extra maze - another 20 codes in this area. They may be placed anywhere - on the ground, low or high on the walls, on stairs or stepfields, on the ceiling when underneath the raised floor, and so on. The scoring is discussed in the section Scoring below.

All QR codes must be detected and read autonomously, regardless of whether your robot is teleoperated or autonomous. That is, apart from perhaps pointing the robot's camera towards the QR code, the operator must not do anything to tell the system that the QR code exists. Autonomous robots should be constantly looking for these codes in the camera stream as they search the arena for victims; these codes will contribute towards the Best-in-Class Autonomy competition. Autonomous robots can score QR codes everywhere; teleoperated robots cannot score QR codes in the yellow arena or in the maze (similar to the rule applied for victims).

The outcome of the detection must be presented in different ways:

  1. Your interface should have a window visible to the referee (the bottom of your program's console window is fine) in which the observed QR codes scroll through, and a verbal announcement (either text-to-speech from the OCU or an announcement by the operator) should be made to get the referee's attention.
  2. The codes should also be saved to a CSV-file (format see below).
  3. The detected codes should also be placed, autonomously and properly localized, in your map (including the name of the tag).

Each single code should appear in the CSV-file and in the map only once, unless the code was detected at two different locations (1.2 m apart). It is NOT necessary for your robot to be co-located with the landmark codes: if you can see one from across the arena with a zoom lens and can autonomously place it accurately in the map, that is fine. It is also not necessary to stop and wait for the referee at each landmark code. Codes co-located with victims share the same requirements as the hazmat and eye chart labels.
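
The once-per-placement rule above (repeat sightings of the same code count once unless the locations are at least 1.2 m apart) can be sketched as a small filter over raw detections. The tuple layout and function name are illustrative assumptions:

```python
import math

MIN_SEPARATION_M = 1.2  # identical codes this far apart count separately

def dedupe_qr_detections(detections):
    """detections: iterable of (text, x, y) tuples in meters.

    Keeps one entry per code text per distinct placement; repeated
    sightings of the same text closer than 1.2 m to an already kept
    entry are treated as the same placement and dropped."""
    kept = []
    for text, x, y in detections:
        duplicate = any(
            t == text and math.hypot(x - kx, y - ky) < MIN_SEPARATION_M
            for t, kx, ky in kept
        )
        if not duplicate:
            kept.append((text, x, y))
    return kept
```

A "chair" code re-detected 0.3 m from a kept "chair" is merged, while a second "chair" 2 m away is reported as a separate placement.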

QR code examples

QR codes in the “Maze section” at the RoboCup 2012 in Mexico City (only one QR code per sheet).

Multiple QR codes as used in RoboCup 2013 (2013-qr-code-example1a.pdf).

Please note that the percentages indicated are of equivalent average human vision (6/6 or 20/20). Scoring associated with each level of vision is to be determined.

File format

QR Code Text-/CSV-File

Naming convention for the file: RC[Year]_[Teamname]_[Mission]_qr.csv where Mission is Prelim1, Prelim2, Semi1, Final, BC_Autonomy and so on.

Format for the file header:

[Date]; [Time]
[blank line]

Format for the file body:

[Number];[Time found];[QR code text];[x-Position in m];[y-Position in m];[z-Position in m]

Example for a QR code file named RC2013_ReskoKoblenz_Semi2_qr.csv. Here, three QR codes were found on the first sheet at the same location:

Resko Koblenz, Germany
2013-06-23; 14:37:03
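
A minimal writer for this file can be sketched with Python's csv module. The team line in the header mirrors the example above, and the exact separator spacing (no space after the ";") is an assumption, since the spec shows "[Date]; [Time]":

```python
import csv

def write_qr_csv(path, team, date, time, codes):
    """Write a QR code CSV following the convention above.

    codes: list of (number, time_found, text, x_m, y_m, z_m) tuples.
    The team line and semicolon delimiter follow the example file;
    the spacing details are an assumption, not an official reference."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerow([team])
        writer.writerow([date, time])
        writer.writerow([])  # blank line between header and body
        for row in codes:
            writer.writerow(row)
```

Usage would be, for example, `write_qr_csv("RC2013_ReskoKoblenz_Semi2_qr.csv", "Resko Koblenz, Germany", "2013-06-23", "14:37:03", found_codes)` at the end of the mission.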


Development of new Robot Performance Metrics

The RoboCupRescue Robot League supports research agendas beyond the robotic capabilities on display. The robotics community is already benefiting from common test methods, but it also needs agreed-upon measures of performance to guide its research. Robotic mapping is one area where the league is implementing innovative measurement techniques to provide developers with clear, reproducible metrics they can use to measure their progress and compare with others. For example, this competition features new map fiducials as markers in the laser scanner data sets typically used to generate maps. They provide easy measures for general coverage of the environment and consistency of fiducials across multiple scans, along with local and global accuracy of fiducial locations. There is somewhat more to the idea than described here, but it is essentially a low-cost way to populate any environment in which mapping is to take place, enabling a ground-truth assessment of mapping system performance.

Images: freestanding barrel; hanging barrel; barrels in the maze (two views).

The images show: A) freestanding barrels work well as mapping fiducials to analyze the coverage of rooms in the maze. B) Two half barrels make up one single mapping fiducial on either side of maze walls, adding occlusions to prevent single scans from capturing the complete fiducial. Some barrels span adjacent hallways, requiring extensive mobility between scans to completely map. C) Barrels are placed throughout the maze independent of terrain type to encourage mapping on more complex terrains. D) The barrels are relatively inexpensive to purchase and easy to cut so can be placed throughout the environment.

In the resulting map, shown to the right, when the barrel fiducials are well formed they are extremely easy to locate and score, which essentially provides the “coverage” metric, giving one point for each half barrel mapped. The difficulty factor can be set and maintained for different random maze configurations by counting how many flooring pallets are traversed between contact with both halves of a mapping fiducial. When the mapped fiducials visually separate in the map due to errors in the robot’s pose estimate, the “consistency” metric quantifies how much the fiducials degraded, in terms of barrel diameters, to provide a coarse but obvious measure of performance.
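
The two metrics above can be sketched as follows, assuming each fiducial is reduced to the mapped center estimates of its two halves. The data layout, function name, and barrel diameter parameter are illustrative assumptions:

```python
import math

def fiducial_metrics(pairs, barrel_diameter_m):
    """Coverage and consistency from mapped half-barrel fiducials.

    pairs: list of ((x1, y1), (x2, y2)) mapped center estimates for the
    two halves of each fiducial, in meters; an unmapped half is None.

    Coverage: one point per half barrel mapped (as in the text above).
    Consistency: for fiducials with both halves mapped, their apparent
    separation expressed in barrel diameters - a coarse measure of how
    much pose-estimate error degraded the fiducial in the map."""
    coverage = sum(h is not None for pair in pairs for h in pair)
    drifts = [
        math.hypot(a[0] - b[0], a[1] - b[1]) / barrel_diameter_m
        for a, b in pairs
        if a is not None and b is not None
    ]
    return coverage, drifts
```

A fiducial whose halves land 0.3 m apart in a map of 0.6 m barrels degrades by half a barrel diameter; an unmapped half still contributes nothing to coverage.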


Scoring

The purpose of the scoring metric is to encourage development of complete and reliable robotic systems while emphasizing certain best-in-class capabilities and integration of critical sub-components. The scoring approach centers on finding simulated victims distributed roughly uniformly throughout the arenas. The same number of points is available for every victim. The league does not assign specific points for autonomy or remote teleoperation; rather, robots with more capabilities can find more victims across the arenas and score more points. For example, robots with advanced mobility capable of negotiating the stepfields can access more victims in the Red arena, just as robots with autonomous navigation and victim identification capabilities can access more victims in the Yellow arena. Robots with manipulators that can reach and search victim boxes at the highest elevation have access to modestly more victims if they can also deploy their manipulator under the elevated flooring platforms to find the confined-space victims.

The total victims available for different robotic implementations will be well known by teams prior to each mission, as will the situations in which they will be located. Victims will be moved around the arena and placed at different elevations each round to ensure that the winning robots have reliably negotiated the entirety of the arenas across multiple missions. Victim placements may also become harder, search times shorter, and arena sizes larger as the competition progresses toward the Finals, to ensure that the best robots are appropriately challenged. But in the end, this is a league of teams not competing against one another, but rather collaborating against the application domain. As such, we are interested in robots demonstrating world-class capabilities in the arenas so that the rest of the league may appreciate, learn, and ultimately leverage new and exciting approaches.

Competition Schedule

Rough schedule for the competition:

  • Day 1: Setup, testing and registration
  • Day 2: Setup, testing and registration
  • Day 3: Morning: Prelims (to qualify for the Semi-Finals), Afternoon: Standard test methods (to qualify for the best-in-class competitions)
  • Day 4: Morning: Prelims (to qualify for the Semi-Finals), Afternoon: Standard test methods (to qualify for the best-in-class competitions)
  • Day 5: Semi-Finals
  • Day 6: Morning: Final; Afternoon: best-in-class competitions for qualified teams.

The standard test methods to qualify for the best-in-class competitions (and to gather test data) are:

  • For Autonomy: Mapping & QR code detection
  • For Mobility: stepfields traversal & stair climbing
  • For Manipulation: Pipestar inspection & barrel inspection

Mission Procedure


  • Each mission lasts for 15 to 30 minutes, depending on the number of teams. The mission time is determined by the technical committee.
  • Mission time includes robot placement at the start point and operator station setup.


  • Victim placements will be known to the operators and audience prior to missions, and changed each round to ensure complete arena coverage over multiple missions.
  • Each team is responsible for making sure victims in the arenas are functional (heat, movements, sound, tags) prior to mission start.


  • Teams should queue at the assigned preparation table with their robot(s) and operator interface(s) 5 min prior to their scheduled start time.
  • Once the mission time has started, the team is allowed to bring the operator station into the operator booth and to place the robots in the starting area.
  • The operator station will be limited to a 120 cm wide x 60 cm deep desk with walls.
  • All robot start points will be in or around the Yellow arena and facing the same direction (marked as “north” on your map), which is indicated by an arrow on the floor.
  • The initial direction may be facing a wall.
  • Multiple robots must be co-located at the start point (as near as possible) and facing the same direction.


  • All teams must pass the Yellow arena, but robots must perform autonomous navigation and victim identification to score Yellow arena victims.
  • Operators may remotely teleoperate a formerly autonomous robot at any time to navigate into the Orange and Red arenas to score Orange and Red Arena victims. To find Yellow arena victims after that, the operator must return the robot to the start point to resume autonomous searches.
  • Teams are allowed only one operator in the operator station at any time during missions.
  • Teams may switch operators whenever necessary.
  • Only one extra team member is allowed in the arena to watch the robots (and rescue the robots in case they flip over etc.).
  • Teleoperated robots can only score Orange or Red arena victims, which are likely placed on both sides of the Yellow arena to encourage complete mapping of all arenas.


  • An operator (or the team leader) may request a RESET to fix a robot during a mission, but suffers the loss of accumulated victim points, maps, and elapsed time.
  • Touching a robot that is part of a mission (e.g. to prevent it from falling) also causes a reset.
  • A reset starts a new so-called “mini-mission” within the mission: the time continues running, all points are reset to zero, and the map has to be deleted.
  • After a reset, all robots have to be brought back to the initial mission start point and continue working for the remaining time available.
  • In the break between two mini-missions, robots can be repaired or exchanged.
  • Only one teleoperated robot is allowed in the arena during each mini-mission.


  • The weaker form of a reset is a RESTART, which only applies to autonomous robots: if the operator detects a critical situation for the robot (e.g. the robot is about to drive into a stepfield), the operator can switch that robot from autonomous to remote control mode.
  • After switching to teleop mode, the operator has to drive the robot back to the original start position. The operator may use an automatic mode that drives the robot back to the start position. The robot cannot score Yellow arena victims while in teleop mode.
  • Once back at the start position, the operator can switch the robot back to autonomous mode.
  • A restart does not affect the points gained so far, and the robot can keep its map.

After the mission

  • After the end of the mission, the arena and the operator station have to be cleared within 2 min.
  • GeoTIFF maps are required and will be compared to ground truth for accuracy. Map quality will be based on Technical Committee review. The maps have to follow the specifications (see GeoTIFF section) and have to be turned in within 5 min after the end of the mission.

Radio/Frequency Management

  • Teams have to provide their own communication equipment; for the missions, teams must also install their own wireless access points.
  • All team SSIDs in the league must be “RRL-<team_name>”, e.g. “RRL-RESKO”.
  • Only 802.11a and 802.11n are allowed.
  • For 802.11a, only certain frequencies are allowed; these will be announced during the competition. The organizers will ensure that these frequencies are available exclusively to the team during the competition.
  • If you use 802.11b, we cannot guarantee any free channels. Remember that there are literally hundreds of WLAN devices on site during RoboCup, so using standard 802.11b always causes problems and frustration.
  • Any other wireless communication has to be announced to the TC four weeks prior to the competition. The TC will decide whether the alternative form of wireless communication is permitted. Keep in mind that analog transmitters in the 2.4 and 5.0 GHz bands will not be permitted, because they block the 802.11 channels.

Mission Scoring

Robots must be within 1 meter directly in front of found victims to score points. Several key capabilities are specifically rewarded in the scoring metric. Since victims are distributed across all arenas, more capable robots have access to more victims. For teleoperated robots, the body of the robot (except its flippers) must be entirely on the pallet (or staircase or ramp) in front of the victim.

VICTIMS PER ARENA (may vary depending on the size of the arena)

  • 4 Yellow
  • 4 Orange
  • 4 Red
  • 2 Radio Drop-Out Zone



  • (5 pts) Hazmat labels
  • (5 pts) Eye charts


  • (5 pts) Motion sensors
  • (5 pts) Thermal sensors
  • (5 pts) CO2 sensors
  • (5 pts) Audio: victim → operator
  • (5 pts) Audio: operator → victim


  • (0 - 10 pts) Quality of geotiff map
  • (0 - 10 pts) Accuracy of victims


  • (20 pts) Placing payload blocks or bottles into found victim boxes.


  • (1 pt) Identification of the largest QR code (the “anchor”) on the sheet. Identification of this code is required to get any of the following points for this sheet.
  • (1 pt each) Identification of each of the four small QR codes on the same sheet (in sum: up to 4 points).
  • (1 pt) If the text of the anchor says “hazmat” and the correct hazmat label next to the QR code is identified, an extra point is given.
  • (1 pt) If the location of the anchor QR code is mapped globally correct in the map (within 1 m), an extra point is given.

Altogether, each QR sheet can yield 6 points (7 if it shows a hazmat sign).
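The QR-sheet tally above can be sketched as a small calculation (illustrative only; the function name and argument layout are not part of the rules):

```python
def score_qr_sheet(anchor_found, small_codes_found, hazmat_correct, anchor_mapped):
    """Illustrative point tally for one QR sheet under the 2013 rules.

    anchor_found      -- the large "anchor" QR code was identified
    small_codes_found -- number of the four small QR codes identified (0-4)
    hazmat_correct    -- anchor says "hazmat" and the adjacent label was identified
    anchor_mapped     -- anchor location mapped globally correct within 1 m
    """
    if not anchor_found:
        return 0                          # no anchor, no points for this sheet
    points = 1                            # anchor code
    points += min(small_codes_found, 4)   # 1 point per small code, up to 4
    if hazmat_correct:
        points += 1                       # hazmat bonus
    if anchor_mapped:
        points += 1                       # mapping bonus
    return points                         # maximum 6, or 7 with hazmat
```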


  • (-10 pts) per event, assessed when arena elements need to be replaced or when a victim is harmed by the robot.
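The point values listed above can be aggregated into a mission total as follows (a sketch; the data layout and function name are assumptions, not part of the rules):

```python
# Points per identification channel for a single victim (from the lists above).
VICTIM_SIGNS = {"hazmat_label": 5, "eye_chart": 5, "motion": 5, "thermal": 5,
                "co2": 5, "audio_victim_to_operator": 5,
                "audio_operator_to_victim": 5}

def mission_score(victims, map_quality, victim_accuracy,
                  payloads_placed, penalty_events):
    """Illustrative mission tally.

    victims         -- list of sets, each set holding the signs identified
                       on one found victim
    map_quality     -- judged 0-10 for the GeoTIFF map
    victim_accuracy -- judged 0-10 for victim positions in the map
    payloads_placed -- payload objects placed into found victim boxes (20 pts each)
    penalty_events  -- arena damage / victim harm events (-10 pts each)
    """
    sign_points = sum(VICTIM_SIGNS[s] for found in victims for s in found)
    return (sign_points + map_quality + victim_accuracy
            + 20 * payloads_placed - 10 * penalty_events)
```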

Best-In-Class Missions

All teams are eligible to participate in the Best-In-Class missions if they scored at least one victim in the associated color-coded arena elements during the preliminary missions, in which all teams participated. The number of victims found or tasks achieved in each color-coded arena is summed, one point each, and counts as 50% of a given robot’s Best-In-Class score. The other 50% of the Best-In-Class score is determined in one final mission with an equal number of points available as in the combined preliminary missions (3 victims per color-coded arena over 4 preliminary missions = 12 possible points). The Best-In-Class mission tasks are as follows:
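The 50/50 combination can be illustrated as follows, assuming both halves are normalized to the same 12-point maximum as stated above (the function is a sketch, not an official formula):

```python
def best_in_class_score(prelim_points, final_points, max_points=12):
    """Combine preliminary and final results, each half worth 50%.

    Assumes both halves share the same maximum: 3 victims per arena
    over 4 preliminary missions = 12 points, matched by 12 possible
    points in the final Best-In-Class mission.
    """
    return (50.0 * prelim_points / max_points
            + 50.0 * final_points / max_points)
```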

Autonomy (Yellow & Radio Drop-Out Zone):

The number of Yellow arena victims found autonomously combined with the number of Radio Drop Zone victims navigated to autonomously in the preliminary round are added to a final autonomous mapping mission throughout the entire arena. The mapping mission has 12 possible points using 6 mapping fiducials (barrels) placed throughout the arena providing 1 point for coverage and 1 point for accuracy. The start point is the same for all teams. All doors to more difficult arenas are left open to require terrain classification to circumnavigate the entire arena. The mission duration is 30 minutes. Reset of the robot back to the start point is allowed in case of failure of any kind but accumulated points are lost and time keeps running.

Mobility (Orange & Red Arena):

The number of Red arena victims found in the preliminary round is added to one final mobility mission totaling 12 possible points, with no attention to victims. The final mission awards 1 point per obstacle traversed in each direction on each of the following:

  • up/down the 45° stairs with steering occlusions
  • up/down the 45° inclined plane on a diagonal path
  • up/down the 30 cm pipe step
  • 3 figure-8 laps in the Red Arena stepfields, 2 points per lap

If 10 figure-8 laps can be achieved, an ASTM standard test method form will be filled out and provided to the team to show reliable performance of advanced mobility. Resets of the robot back to the start point are allowed in case of failure of any kind, but accumulated points are lost and time keeps running.
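The mobility tally above (3 obstacles × 2 directions + 3 laps × 2 points = 12 possible points) can be sketched as (illustrative only; names are not part of the rules):

```python
def mobility_final_points(stair_dirs, incline_dirs, pipe_step_dirs, figure8_laps):
    """Illustrative tally for the Best-In-Class mobility final mission.

    stair_dirs / incline_dirs / pipe_step_dirs -- directions completed
    on each obstacle (0-2: up and/or down); figure8_laps -- completed
    figure-8 laps in the stepfields, 2 points each, 3 laps scored.
    """
    obstacle_points = (min(stair_dirs, 2) + min(incline_dirs, 2)
                       + min(pipe_step_dirs, 2))
    lap_points = 2 * min(figure8_laps, 3)
    return obstacle_points + lap_points  # 12 possible points
```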

Mobile Manipulation (Blue Arena):

The number of objects successfully placed into victim boxes during the preliminary missions is added to one final manipulation mission requiring pick, carry, and place tasks. Blocks, half-blocks, and/or full water bottles are placed on the four marked locations of a Blue Arena shelf at 50 cm elevation for teams to pick. The teams choose which objects they prefer in each of the four marked locations. Blocks and full water bottles are worth 1 point per task to pick, carry, and place. Half-blocks are worth 0.5 points per task because of their reduced weight. The robot has to pick the object from the shelf, carry the object to a nearby victim location across complex terrain, and place the object into the associated victim access hole at the four different elevations used throughout the competition. Only one object can be placed in any given hole. The holes nearest the pick task are three complex terrain pallets away, but are single access holes requiring more precise placement. An additional set of victim holes is twice as far away over complex terrain but has double access holes at each elevation, making for a longer carry but a relatively easier placement task. The total number of pick, carry, and placement points is equal to the number of objects allowed to be carried in from the start of the preliminary missions, which was three objects in each of four missions for a total of 12 possible points. Reset of the robot back to the start point is allowed in case of failure of any kind, but accumulated points are lost and time keeps running.

Shoring task

For the Best-In-Class manipulation mission, a new shoring task has to be fulfilled: light wooden bars (built out of wooden boards, styrofoam, or balsa wood) are provided next to the robot. The dimensions of each bar are 10 cm x 10 cm x 60 cm. The task is to build a supporting structure, as depicted in the image, around a given pole, up to 10 levels high, within 10 min. Each correctly placed bar gives 1 point.

(Shoring describes the task of supporting a partly collapsed structure with temporary elements, e.g. wooden beams. It is an important technique for USAR responders to protect the victims and their own lives.)

Championship Awards

1st, 2nd, 3rd place awards will be given to teams with the highest cumulative scores from 7-10 missions.

Best-In-Class Awards

There must be at least 3 teams in a Best-In-Class competition (2 for regionals) to give out a certificate (or a trophy).

Best-In-Class awards will be given to individual robots that do the following:

Best-In-Class Autonomy

  • 50%: Find autonomously (1 pt) and map autonomously (2 pts) victims and other landmarks (such as QR codes) during the preliminaries.
  • 50%: Produce the best map during a Best-In-Class mission to map the entire arena. Do not pay attention to the victims, but still search for and map the QR codes.
  • 3D map: If you map the arena in 3D, please provide a 2D slice at 2 m height. Detected landmarks (such as high walls, with or without barrels) give 1 bonus point; if mapped correctly, another 2 bonus points per landmark.
  • Mission time is about 20 min (to be determined by the judge); no “restarts” (driving back to the starting point under teleoperation) are allowed.
  • If more than one mini-mission is done, the best mini-mission counts.

Best-In-Class Mobile Manipulation

  • 50%: Place the most payload items into victim boxes during regular missions.
  • 50%: Pick the most items from the Blue arena shelves during the Best-In-Class mission to quantify manipulator reach in two shelf conditions: open and covered by a shelf above.

Best-In-Class Small Unmanned Aerial System

  • 50%: Compulsory tasks such as station-keeping in front of window targets, line following, and lost-link behaviors.
  • 50%: Search task over ground arenas


rrl-rules-2013.txt · Last modified: 2017/07/13 07:42 (external edit)