I was looking around for a way to plot the position of a robot in Shuffleboard and couldn’t find any widgets that allow direct plotting. Is there a widget that can do this?
Numbers can be represented as line graphs, but I don’t know of a built-in type that will plot two-dimensional values (e.g. x, y coordinates).
I am currently working on a pull request to wpilib that enables transmitting Pose2d objects to LiveWindow/SmartDashboard. I expect it to be merged soon; the next step would be creating a shuffleboard widget for it. It would probably be ready for the 2021 season.
I’m developing a library of components for the web that lets you build dashboards similar to shuffleboard. It requires some HTML knowledge to add the elements to your page and CSS to position and resize them, but otherwise it’s pretty straightforward and has most of the components shuffleboard currently has:
It’s currently possible to implement a component like the one you describe, but it would require adding NetworkTable values in your robot code to provide the robot’s pose information. For example, if you set the following keys in NetworkTables:
- /pose/x
- /pose/y
- /pose/angle
You could add this element or something like this to your HTML dashboard:
<frc-robot-pose source-key="/pose"></frc-robot-pose>
And it will show the robot’s current pose as long as you keep updating the pose data in NetworkTables.
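For what it’s worth, the dashboard side of such a component would presumably just read those three keys and treat them as one pose. A minimal sketch of that idea (the helper name and the null-until-complete behavior are my own assumptions, not any released API):

```javascript
// Sketch: assemble /pose/x, /pose/y, /pose/angle values into one pose object.
// The key layout follows the keys suggested above; everything else is hypothetical.
function assemblePose(ntValues, prefix = '/pose') {
  const x = ntValues[`${prefix}/x`];
  const y = ntValues[`${prefix}/y`];
  const angle = ntValues[`${prefix}/angle`];
  // Treat the pose as incomplete until all three keys have been published.
  if ([x, y, angle].some(v => typeof v !== 'number')) return null;
  return { x, y, angle };
}
```

The component could then re-render whenever any of the three keys updates.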
Creating something for this wouldn’t be too tricky, and I’m interested in your feedback on what kinds of features you’d want a component like this to have. I’m imagining an area you can add an image to, representing the playing field for the current year, with a small drawing of a robot that is rotated and translated based on the robot’s pose data.
I’d like to be able to post multiple poses on the same map, to show target and actual position for example. Also, and maybe this is feature creep or impossible, I imagine graphics-library compatibility to show things like shooter trajectory, camera FOV and target, etc. Just a pipe dream though.
So perhaps an interface like this:
<frc-field unit="ft" width="27" height="54" image="/path/to/field/image.jpg">
  <frc-field-object width="2" height="3" source-key="/robotPose" image="/path/to/robot/image.png"></frc-field-object>
  <frc-field-object unit="in" width="7" height="7" source-key="/powerCellPose" image="/path/to/power/cell/image.png"></frc-field-object>
  <frc-field-object width="1" height="1" source-key="/shooterTrajectoryPose">
    <!-- An svg, html canvas or some arbitrary element can go here which will be shown inside the field object -->
    <svg>...</svg>
  </frc-field-object>
</frc-field>
This will allow you to add as many objects as you’d like. Each object can be an image or an HTML element (an svg or html canvas, for example) positioned and oriented on top of the field. I think this would allow a trajectory to be drawn and positioned on the field. Does that sound reasonable?
As far as implementation goes, that generally looks usable. It would also provide an example for others to add field objects. Of course, these objects would need x, y, theta coordinates.
Making lots of progress. Here’s what I have so far:
Here’s the HTML for this particular dashboard:
<style>
  frc-field {
    width: 900px;
    margin-bottom: 15px;
    --frc-grid-line-width: 2;
  }
  frc-number-field {
    margin-right: 10px;
  }
  frc-networktable-tree {
    height: 400px;
    vertical-align: top;
  }
</style>
<nt-number key="/pose/x" value="10"></nt-number>
<nt-number key="/pose/y" value="10"></nt-number>
<nt-number key="/pose/angle" value="180"></nt-number>
<frc-field grid-size="1" unit="ft" width="52.4375" height="26.9375" image="./2020-field.png">
<frc-field-object
source-key="/pose"
width="3"
height="3.6"
image="./robot.png"
></frc-field-object>
</frc-field>
<frc-networktable-tree></frc-networktable-tree>
<frc-number-field source-key="/pose/x" label="X Position" has-controls min="0" max="26.9375" theme="align-right">
<span slot="suffix">ft</span>
</frc-number-field>
<frc-number-field source-key="/pose/y" label="Y Position" has-controls min="0" max="52.4375" theme="align-right">
<span slot="suffix">ft</span>
</frc-number-field>
<frc-number-field source-key="/pose/angle" label="Angle" has-controls step="10" theme="align-right">
<span slot="suffix">°</span>
</frc-number-field>
The important piece is this:
<frc-field grid-size="1" unit="ft" width="52.4375" height="26.9375" image="./2020-field.png">
<frc-field-object
source-key="/pose"
width="3"
height="3.6"
image="./robot.png"
></frc-field-object>
</frc-field>
Here’s a video of me messing around with the pose through NetworkTables:
You can add any number of frc-field-object elements and give each a pose. You’ll also be able to add any arbitrary element to the frc-field-object element instead of an image (like an svg or canvas) if you’d like. I should have this done, with examples you can mess around with, in a day or so. A couple of things to note:
- Changing the /pose/x value makes the robot move up and down
- Changing the /pose/y value makes the robot move left and right
- When /pose/angle is 0 the robot faces right.
- The robot’s front center is the origin of the pose.
I thought this would make sense since I believe these are the conventions used when discussing robot pose in FRC. Let me know if this doesn’t make sense, if I’m wrong, or if this should be configurable.
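To make the conventions above concrete, here is a sketch of how they might map onto canvas pixel coordinates. The helper is mine, and it assumes increasing x moves the robot toward the top of the screen and that field rotation is counter-clockwise-positive; the component’s actual math may differ:

```javascript
// Sketch of the stated conventions: /pose/x moves the robot vertically,
// /pose/y moves it horizontally, and angle 0 faces right.
// Assumes +x is "up" on screen and a top-left canvas origin (my assumptions).
function poseToPixels(pose, fieldHeightFt, pxPerFt) {
  return {
    px: pose.y * pxPerFt,                   // y drives horizontal position
    py: (fieldHeightFt - pose.x) * pxPerFt, // x drives vertical position, flipped for canvas
    // Canvas rotation is clockwise-positive, so negate if the field
    // convention is counter-clockwise-positive (an assumption here).
    rotationRad: -pose.angle * Math.PI / 180,
  };
}
```

With a 26.9375&nbsp;ft-tall field at 10&nbsp;px/ft, a pose of (0, 0) would land at the bottom-left corner of the canvas under these assumptions.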
@Starlight220 What NetworkTable keys will you be adding to LiveWindow for Pose2d? Will they be:
- x
- y
- theta
Or something different? The above are the current assumptions I’m making in my code.
@Amicus1 I was thinking about adding components specifically for drawing on the field. These components would inherit from a base class called FieldDrawing (the html element will be called frc-field-drawing). Inside the <frc-field></frc-field> element there will be an html canvas, and all of its children that inherit from FieldDrawing will be drawn on that canvas. For example:
<frc-field unit="ft" width="27" height="54" image="/path/to/field/image.jpg">
  <frc-field-trajectory source-key="/trajectory"></frc-field-trajectory>
  <frc-field-trajectory-state source-key="/trajectoryState"></frc-field-trajectory-state>
</frc-field>
Both frc-field-trajectory and frc-field-trajectory-state will inherit from frc-field-drawing so they will both be drawn on the canvas overlaid on top of the field.
frc-field-trajectory will pull its data from the trajectory class and frc-field-trajectory-state will pull its data from the trajectory state class.
I’m not quite sure what the best way is to represent each of these things (how they are drawn on the canvas), or what the NetworkTable keys/values added should be. Any ideas?
Something like that.
This is the PR; you are welcome to review or follow it.
My team(s) have had a shuffleboard widget for odometry for a while. It can draw the robot position, which fades away over time, as well as fancier things: plotting debug information for your motion profile/trajectory/ramsete points, and overlaying a line for where your camera says a vision target is.
(The gif is super downsampled because my computer wasn’t happy running the simulator, shuffleboard, and a screen recorder)
We publish the widget (and one for visualizing our superstructure) and added a task to our gradle script to automatically download it and put it in the right spot. That way, whenever we update it, we can just run gradlew copyWidgets.
Very cool! I’m curious about the paths that are being drawn on the field. Are those drawn using trajectories? I was thinking about adding trajectories to the web-based component I’m building using wpilib’s Trajectory class, but I’m not sure what the best way to display them would be. Would I just get all the trajectory states and draw each state’s pose on the field? Perhaps in NetworkTables it would look like:
/trajectory/xs
/trajectory/ys
Where xs and ys would be arrays of doubles representing the poses in the trajectory.
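Under that scheme, the widget side could rebuild the point list from the two arrays. A rough sketch (the helper name is mine, and I’m assuming the arrays are index-aligned):

```javascript
// Sketch: turn parallel /trajectory/xs and /trajectory/ys arrays into a
// point list a canvas could stroke, plus the total path length.
// The key layout matches the proposal above; the helper itself is hypothetical.
function trajectoryPoints(xs, ys) {
  if (xs.length !== ys.length) throw new Error('xs and ys must be the same length');
  const points = xs.map((x, i) => ({ x, y: ys[i] }));
  let length = 0;
  for (let i = 1; i < points.length; i++) {
    // Sum the straight-line distances between consecutive points
    length += Math.hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y);
  }
  return { points, length };
}
```

Drawing would then just be a moveTo on the first point and lineTo on the rest.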
So the green line represents where the vision target is? It’s a line drawn from your robot in a direction over a certain distance? I’m trying to think of the best interface for this…
<frc-field unit="ft" width="27" height="54" image="/path/to/field/image.jpg">
  <frc-field-object width="2" height="3" source-key="/robotPose" image="/path/to/robot/image.png">
    <frc-field-camera fov="60" angle="0" x="1" y="0" range="10" sees-target></frc-field-camera>
  </frc-field-object>
  <frc-field-trajectory xs="[0,1,2,3]" ys="[0,0,0,0]"></frc-field-trajectory>
</frc-field>
So the frc-field-trajectory element will draw the trajectory using the arrays of points passed in. frc-field-camera will display the camera’s field of view, where the fov attribute is the angle of the FOV in degrees, angle is the direction the camera points (the center of the FOV) relative to the robot, x and y are the position of the camera relative to the robot, and range is the distance it can see.
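Geometrically, the FOV wedge can be derived from those attributes. A sketch assuming angles are in degrees with 0° along the robot’s forward axis (the helper is illustrative, not part of the proposed API):

```javascript
// Sketch: compute the endpoints of the two edge rays of the camera's FOV
// wedge, relative to the camera. fov and center are in degrees, with 0°
// pointing along +x (assumptions based on the attribute descriptions above).
function fovEdgeRays(fovDeg, centerDeg, range) {
  const toRad = d => d * Math.PI / 180;
  // One ray on each side of the camera's pointing direction
  const edges = [centerDeg - fovDeg / 2, centerDeg + fovDeg / 2];
  return edges.map(deg => ({
    x: range * Math.cos(toRad(deg)),
    y: range * Math.sin(toRad(deg)),
  }));
}
```

The wedge itself could then be filled as a triangle (or arc segment) between the camera position and the two endpoints.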
For the pure pursuit / old-school 254 trajectory / ramsete information, we use the same data structure for the plots as for drawing on the field, and it actually holds a ton of information. For the field drawing we just use the [x, y] components, but for the debug plots we use the time and velocity as well.
The “ideal” points are sourced from the Trajectory class on the robot side, with the list of points serialized into a single CSV string for NetworkTables. This is used both for drawing on the field and for creating the plots. Each loop, a single “actual/measured” point is serialized into a string and sent over NT; that one is used only for the plotting.
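A sketch of what that single-string serialization could look like on the dashboard side. The exact field set and delimiters aren’t specified in the post, so these are illustrative:

```javascript
// Sketch: serialize trajectory points to one string for a NetworkTables
// entry, and parse it back. Fields (t, x, y, v) and the ',' / ';' delimiters
// are my own choices, not the actual format the widget uses.
function serializeTrajectory(points) {
  return points.map(p => [p.t, p.x, p.y, p.v].join(',')).join(';');
}

function parseTrajectory(str) {
  if (str === '') return [];
  return str.split(';').map(entry => {
    const [t, x, y, v] = entry.split(',').map(Number);
    return { t, x, y, v };
  });
}
```

The field drawing would use only x and y from each parsed point, while the debug plots would use t and v as well.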
The camera ray layer was added back in the days before the limelight rose to power, when we were dealing with 150+ms of latency from shutter to the robot receiving the frame. Because we were doing latency compensation, we decided to calculate the target’s [x, y] location and published [robotX, robotY, cameraX, cameraY] rather than [azimuth, range], but that should work in practice.
Some of the API could be improved, since it is built around shuffleboard’s requirements for how data is organized in the tables, and around the fact that it is data-driven and can trigger updates when partial portions of the data have changed (like drawing a phantom dot because it might see [x0, y0], *[x1, y0]*, [x1, y1]).
Getting closer… you can now nest objects and draw on the field without a lot of code:
Here’s a video of the car, turret, and laser being manipulated on the field with NetworkTables:
Here’s the code for the field, car, and turret:
<!-- Code to draw field, car, turret, and laser -->
<script>
  function drawLaser(info) {
    const { ctx, scalingFactor, source, parentHeight } = info;
    if (source.firing) {
      // Start a fresh path so repeated draws don't accumulate
      ctx.beginPath();
      // Divide by the scaling factor so the laser renders 1.5px wide on screen
      ctx.lineWidth = 1.5 / scalingFactor;
      // Draw a laser 3ft in length in front of the turret
      ctx.moveTo(0, parentHeight / 2);
      ctx.lineTo(0, parentHeight / 2 + 3);
      ctx.strokeStyle = "red";
      ctx.stroke();
    }
  }
</script>
<frc-field grid-size="1" unit="ft" width="52.4375" height="26.9375" image="./2020-field.png">
  <frc-field-object source-key="/pose" width="2" height="4" image="./car.png">
    <frc-field-object source-key="/turret" width="1.3" height="1.4" y="-.4" image="./turret.png">
      <frc-field-drawing source-key="/turret" draw="drawLaser(this)"></frc-field-drawing>
    </frc-field-object>
  </frc-field-object>
</frc-field>
Sorry, I don’t see how this works, even in polar coordinates. Can you explain how that parents to the turret? I see that you pass in the object, but I don’t see you transferring the angle.
The transformations are done automatically, so you don’t have to do them yourself. The origin has been translated to the (x, y) center of the turret and rotated to match the direction/theta the turret is facing.
ctx.moveTo(0, parentHeight / 2);
Since the origin is currently in the center of the turret, we should move it forward half the length of the turret so we’re now drawing in front of it.
ctx.lineTo(0, parentHeight / 2 + 3);
We create a line from the current point to 3 feet forward.
Does that make sense? The transformations are applied when you nest elements; if you add the drawing as a direct child of the frc-field component instead, none of the transformations are applied and you have full control.
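For anyone following along, the nesting behavior described here amounts to composing each child’s pose with its parent’s. A standalone sketch of that math (my own, not the component’s actual code):

```javascript
// Sketch: compose a child's pose (expressed in its parent's frame) with the
// parent's pose to get the child's pose in field coordinates. Angles are in
// degrees; positive angles rotate counter-clockwise (my assumptions).
function composePose(parent, child) {
  const rad = parent.angle * Math.PI / 180;
  return {
    // Rotate the child's offset into the parent's frame, then translate
    x: parent.x + child.x * Math.cos(rad) - child.y * Math.sin(rad),
    y: parent.y + child.x * Math.sin(rad) + child.y * Math.cos(rad),
    // Headings simply add
    angle: parent.angle + child.angle,
  };
}
```

This is why the laser drawing only needs local coordinates: the car-to-turret and turret-to-drawing composition has already happened by the time the draw callback runs.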
Makes sense, just unexpected. Thanks.
Let me know if you think there’s a better way; I’m open to suggestions. I thought this way made the most sense in terms of convenience. I’ll have all of this carefully documented.
@Amicus1 I’ve added a demo you can mess around with if you want to try it out. As you suggested, I’ve added the ability to show trajectories, camera FOV, and other drawings. The demo currently only works in Chrome:
https://frc-web-components.github.io/examples/vanilla/field/field.html
Each input changes a different NetworkTables value, which is assigned to an attribute on the corresponding html element.
Let me know what you think or if you have any other ideas.
Have you considered support for the snobotsim pose2d shuffleboard widget? Edit: never mind, I don’t think that’s a thing.