Mentoring Two FIRST Robotics Teams across Upstate New York while attending College at the Rochester Institute of Technology

On Scouting: Match Scouting Methods

Every fall I see a new thread about some team wanting to adopt a fancy electronic scouting system. Most of the time I reply with a similar post: these fancy systems aren't necessary, and you should make sure the rest of your scouting is working for you before attempting a more difficult technological challenge like tablet scouting.
I want to break down the methodology of scouting to really demonstrate where I’m coming from.

This post is going to be about the systems we use to scout matches.

Let’s start with the basics: Why do we scout?

Scouting is an incredibly important part of any team’s toolset, and offers unique challenges that have helped me in both school and internships I’ve worked at.
We scout to gather more information about our opponents and our allies. This matters more than in most high school sports because in FRC the teams you play against in one match are often your alliance partners in the next.

That information can be used to picklist, generate match and competition strategy, generate camaraderie within your team and between teams, and figure out how to win both matches and the whole competition.

So what information do we want?
This depends on your team, your robot, and your position within a competition, as well as your resources (both human and technological).

A logical place to start is with six scouts, one for each robot on the field. If you don't have the personnel to spare, or have only just enough people for this, partner with another team at your event! Veteran teams with established scouting systems are usually happy to help, and less experienced teams may appreciate the help themselves. These six scouts will gather quantitative information about what their assigned robot is doing during that match.

Here I’ll make a note that some people will insist that qualitative information can be used just as effectively as quantitative information when picklisting or engaging in strategy discussions. They aren’t necessarily wrong, but individuals who can make these judgments reliably are few and far between, and trusting your scouting to one individual (who often leaves the team within a year or two) is a risky proposition. Quantitative information doesn’t lie to you. Data is data is data regardless of who possesses it, and it can be used to prove to other teams that you are correct if they doubt you.

What quantitative data should you gather and how?
Figuring out what to scout can be difficult, but usually the simple options are what we care about. Generally game pieces scored, scores attempted, endgame and/or autonomous successes and failures are easy. Sometimes certain metrics may be useful that are non-obvious.
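To make this concrete, here is a minimal sketch of what one robot's record for one match might look like as a data structure. The field names are hypothetical illustrations of the categories above, not taken from any particular team's sheet:

```python
from dataclasses import dataclass

@dataclass
class MatchRecord:
    """One scout's tally for one robot in one match (hypothetical fields)."""
    team: int              # FRC team number
    match: int             # qualification match number
    auto_scores: int       # game pieces scored in autonomous
    teleop_scores: int     # game pieces scored in teleop
    endgame_success: bool  # did the endgame task succeed?

# One filled-in record, as a scout might produce it
record = MatchRecord(team=20, match=12, auto_scores=2,
                     teleop_scores=5, endgame_success=True)
```

Keeping each record small and explicit like this makes the later aggregation step (averages, totals, summary pages) straightforward, whether you do it on paper, in a spreadsheet, or in code.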

Here is an example: Team 20’s 2013 scouting sheet. Each robot in each match got a scouting sheet dedicated to that team’s performance in that match. The first thing we tallied was autonomous scores, then teleoperated scores. A few non-obvious metrics were “Discs over Auto line”, which was later reworded to “Full court misses”. This metric was useful for Team 20 because we had a robot capable of picking up discs off the floor, and numbers in that category meant discs that were easier for us to score. “Shots blocked” was a category to tell us whether that robot managed to play defense on a full court shooter that match by blocking shots. We didn’t care how many they blocked, just whether they did.

An example of a poor metric on our sheet is the “Speed” category. This was an entirely subjective rating based on the scout’s opinion. The numbers were not useful in any context, and since then we’ve tried to avoid subjective metrics like that, which are confusing and useless.

Another example of a scouting sheet for even lower resource teams is Team 180’s Poor Man’s Scouting System. This is their 2013 version as well. For their system, all of one team’s data ends up on one sheet of paper for aggregation purposes, eliminating electronic databases from the equation if necessary. They also provide a comprehensive database to use if desired, which is absolutely fantastic. I can’t recommend this enough to teams with low resources. It’s published on Chief Delphi every year in February by the same person from SPAM.

180’s Scouting System is also an example of how to aggregate data in multiple ways.

Their one-team-per-sheet system aggregates the data on paper, which isn’t ideal, but it works. They also, however, have a fantastic Excel database with summary pages for each team and for the whole competition. On top of that, they usually collect intelligent quantitative data. Being able to use the data you collect is incredibly important; scattered, unusable data is little to no help with anything at competition.
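The per-team summary idea can be sketched in a few lines of code. This is a toy illustration, not SPAM's actual database: each row is one match record for one team, and we roll the rows up into per-team averages the way a summary page would.

```python
from collections import defaultdict

# Each row: (team, auto_scores, teleop_scores) -- toy data, not real match results
rows = [
    (20, 2, 5),
    (20, 1, 7),
    (180, 3, 4),
    (180, 2, 6),
]

# team -> [matches played, auto total, teleop total]
totals = defaultdict(lambda: [0, 0, 0])
for team, auto, teleop in rows:
    t = totals[team]
    t[0] += 1
    t[1] += auto
    t[2] += teleop

# One "summary page" per team: match count and per-match averages
summary = {team: {"matches": n, "avg_auto": a / n, "avg_teleop": s / n}
           for team, (n, a, s) in totals.items()}
```

With the toy rows above, `summary[20]` works out to an average of 1.5 autonomous scores and 6.0 teleop scores per match. The same roll-up is exactly what a spreadsheet's per-team summary tab computes; code just makes it repeatable as new match rows come in.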