ORBIT 1690 Off-Season Project - Power Cell Tracker

Team 1690 is proud to present our 2020 offseason project!

As our team worked on improvements to our shooting mechanism, we needed a new way to measure our shooter’s accuracy. Our old method of measuring the power cells’ hit points (filming slow-motion shots at a grid) wasn’t productive, so over the summer we developed a custom board that can tell us the hit points.

Link to video: https://youtu.be/IpdLMfcgM5w

Link to Grabcad: https://grabcad.com/library/coordinate-board-orbit-1690-1

The Problem

We wanted to know the effect of different parameters (shooting wheel speed, wheel material, etc.) on the quality of our shooting system. Until this season, we did that by shooting at a cardboard target on which we had drawn a grid of tiles, filming in slow motion, and then analyzing the video to find the points where the power cells hit the board.

This method was problematic: there was a long delay between shooting and finding the coordinates of the power cells, and the whole process was hard to repeat multiple times.

We tried to think of a new method for measuring shooting accuracy and comparing parameter changes in a more automatic and efficient manner. Many ideas came up: a laser net in two axes that would give us an accurate location of the power cells, numerous sound sensors around the target that would locate each power cell by the intensity of its impact, image processing, and more.

The Solution

An automatic goal that detects power cell hits using a conductive wire grid.

After a lot of brainstorming we decided to use the following concept:

As you can see, the system is composed of four layers.

First layer:

On a wooden frame we laid a big piece of fabric, onto which we sewed 20 parallel rows of steel thread, which is conductive. Each of these threads leads to a pin on the Arduino Mega board.

On top of this piece of fabric we laid another layer of fabric. This is the layer the power cells actually hit, and it protects the steel threads from the spin and impact of the shots.

Second layer:

Under the first layer there is an aluminium plate cut in the shape of a piano with 20 keys, which are separated from each other and are able to move separately when hit. These keys are perpendicular to the previous steel threads.

Third layer:

This layer consists of a polycarbonate plate with two rows of 20 holes each, one row in the middle and a second row at the inner end.

Fourth layer:

This layer is the base of the system, to which all the components connect, together with the electronics.

All cables go into the Arduino’s digital pins.
The second layer is connected to the Arduino’s ground, so as soon as the first layer touches it, or it touches the third layer, a circuit closes.
When a ball is fired from the robot, it hits the first layer, pushing a few rows of wire against the second layer, and at that moment a circuit closes. Each wire in the first layer is connected to a pin on the Arduino Mega, so through the software we can know which row of the first layer the ball hit.
When the first layer presses the second layer, the fingers of the second layer are pushed down where the ball hit, press on the rivets in the third layer, and thus close a circuit. The rivets are connected to cables that go to pins on the Arduino Mega, so we can see which finger the ball hit.

When we match the wire row the ball hit with the finger it pressed, we get the coordinates of the hit.

Arduino software

Each row and column is connected to one of the Arduino Mega digital pins.

The Arduino runs a loop that reads all 40 pins; when one of the pins changes to HIGH, it starts recording into a buffer.

The data is stored in two arrays, one for the x-axis and one for the y-axis. On each loop iteration, a row recording which pins are HIGH and which are LOW is stored.

At the end it looks like this:

Then this data is passed to a function that calculates the center of mass of the power cell. The calculated coordinates are sent over serial to the GUI on the computer.
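The firmware itself isn’t quoted in the post, so here is a minimal sketch of the center-of-mass step in TypeScript (the GUI’s language). The buffer layout and all names here are our assumptions, not the project’s actual code:

```typescript
// Sketch of the hit-localization step. Assumes the buffer is a list of
// "frames", each frame being one read of the 20 pins on an axis
// (true = HIGH). All names are illustrative.

type PinFrame = boolean[];

// Count how many frames saw each pin HIGH.
function pinWeights(frames: PinFrame[]): number[] {
  const weights: number[] = new Array(frames[0]?.length ?? 0).fill(0);
  for (const frame of frames) {
    frame.forEach((high, i) => { if (high) weights[i] += 1; });
  }
  return weights;
}

// Weighted average pin index: the "center of mass" along one axis.
function centerOfMass(weights: number[]): number {
  let total = 0;
  let moment = 0;
  weights.forEach((w, i) => { total += w; moment += w * i; });
  return total === 0 ? NaN : moment / total;
}

// Combine both axes into one (x, y) hit coordinate.
function hitCoordinate(xFrames: PinFrame[], yFrames: PinFrame[]): [number, number] {
  return [centerOfMass(pinWeights(xFrames)), centerOfMass(pinWeights(yFrames))];
}
```

Weighting each pin by how long it stayed HIGH means a ball that pressed several adjacent wires still resolves to a single coordinate near the middle of the contact patch.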


How was it made?

The UI is written in TypeScript. We chose to use a library called “Electron”, which lets you write native desktop apps with web technologies via the Chromium browser. On top of that, we used a UI library called “React”, whose purpose is to manipulate the DOM (the stuff you see on the screen). Communication with the Arduino is done using the “serialport” library. To bring all of this together, we used “webpack”.

Link to Github repository: Github.com/NoamPrag/Power-Cell-Tracker

The purpose of the user interface is to visualize the data as conveniently as possible, while also giving some analytic stats about it.

The coordinates of the power cell hit points are shown in a scatter plot. In addition, there is a visualization of data about each burst. The software separates individual bursts by the time at which each power cell hit was sensed: when there is a big gap between hits, we know the burst was completed.
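That gap-based separation can be sketched like this; the threshold value and all names are our assumptions, not the repository’s actual constants:

```typescript
// Group hits into bursts by time gap. A hit that arrives more than
// BURST_GAP_MS after the previous one starts a new burst.
// The threshold is an illustrative guess.

interface Hit { x: number; y: number; timeMs: number; }

const BURST_GAP_MS = 1000;

function splitIntoBursts(hits: Hit[]): Hit[][] {
  const bursts: Hit[][] = [];
  for (const hit of hits) {
    const last = bursts[bursts.length - 1];
    if (last && hit.timeMs - last[last.length - 1].timeMs <= BURST_GAP_MS) {
      last.push(hit); // still part of the current burst
    } else {
      bursts.push([hit]); // big gap (or first hit): start a new burst
    }
  }
  return bursts;
}
```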

Bursts are colored randomly (with colors that are significantly different from each other). When there is a need to inspect a specific burst (or bursts), there is an option to turn off all bursts’ colors except the desired ones.

Each burst has its own accuracy and precision stats, and a list of power cells, each colored in green or red, indicating which power cells went into the inner port.

You could clear the data and start recording again if you wanted to:

The export button lets you save all the data to a JSON file. The saved data includes the total accuracy and precision, the coordinates of each burst, the accuracy and precision of each burst, and which power cells went into the inner port.
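The exact export format isn’t shown in the post; purely as an illustration, the saved JSON could be shaped along these lines. Every field name here is our guess, not the repository’s actual schema:

```typescript
// Hypothetical shape of the exported JSON file. All field names are
// illustrative assumptions; the real schema lives in the repository.

interface BurstExport {
  coordinates: [number, number][]; // hit points of this burst
  accuracy: number;
  precision: number;
  innerPort: boolean[];            // per power cell: went into the inner port?
}

interface TrackerExport {
  totalAccuracy: number;
  totalPrecision: number;
  bursts: BurstExport[];
}

// Building and serializing a dummy export object:
const data: TrackerExport = {
  totalAccuracy: 3.2,
  totalPrecision: 1.1,
  bursts: [{
    coordinates: [[2, 3], [-2, -3]],
    accuracy: 0,
    precision: 0,
    innerPort: [true, false],
  }],
};
const json = JSON.stringify(data);
```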

“Easter egg” - confetti when a whole burst goes into the inner port :slight_smile:


The average of the distances between the power cell hit coordinates and the origin (the center of the target) along the x and y axes (where n is the number of hits) is:

x_avg = (x_1 + x_2 + ... + x_n) / n
y_avg = (y_1 + y_2 + ... + y_n) / n
Using the averaged distances, the total accuracy is defined as:

accuracy = sqrt(x_avg^2 + y_avg^2)
Note that simply calculating all the distances from the coordinates to the origin and taking their average does not reflect the real accuracy.

For example, let’s say we have two coordinates: (2, 3), (-2, -3). They both have the same distance to the origin so the average distance will be sqrt(2^2 + 3^2). However, the accuracy should be zero, because the two coordinates are on opposite sides of the origin.

What is actually done for accuracy calculation, is finding the average coordinate of all the hit points and taking its distance to the origin.

In order to calculate the precision (which measures how consistent our shooter is), we calculated the standard deviation of the hit distances d_i = sqrt(x_i^2 + y_i^2):

precision = sqrt( ((d_1 - d_avg)^2 + ... + (d_n - d_avg)^2) / n )
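Both statistics are small enough to sketch in TypeScript (the GUI’s language); function names are ours, not necessarily the repository’s:

```typescript
// Accuracy and precision stats as described above. Names are illustrative.

type Point = [number, number];

const mean = (xs: number[]): number =>
  xs.reduce((a, b) => a + b, 0) / xs.length;

// Accuracy: distance from the mean hit coordinate to the origin.
// Symmetric misses cancel out, as in the (2, 3) / (-2, -3) example.
function accuracy(points: Point[]): number {
  const xBar = mean(points.map(p => p[0]));
  const yBar = mean(points.map(p => p[1]));
  return Math.hypot(xBar, yBar);
}

// Precision: standard deviation of the hit distances to the origin.
function precision(points: Point[]): number {
  const dists = points.map(([x, y]) => Math.hypot(x, y));
  const dBar = mean(dists);
  return Math.sqrt(mean(dists.map(d => (d - dBar) ** 2)));
}
```

For the two opposite points from the example, `accuracy` comes out to zero even though both hits are far from the origin, which is exactly the behavior the averaging argument above calls for.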
Absolutely love the concept of an area-sensitive goal tracker to measure accuracy! Now I’m curious how a similar mechanism might be used on a real field to not only show that a team shot the goal successfully, but where they did so.

If you had the opportunity to start over with what you know now, what would you do differently?


This plus RFID.

Kudos to 1690 on a sleek and effective design to help take performance to the next level!

