More complicated code help

I can do most of the simpler programming in WPILib, and some of the more complicated stuff too. But when I try to get into more complicated and useful code topics like vision and PathWeaver/PathPlanner, I am given code and told that it will work, instead of being told what the code does and why I need it. Are there any good resources for this?


There are some good examples of simple code for various tasks in the WPILib Example Projects section of the FIRST Robotics Competition documentation.

If there’s particular code you cannot understand, post a link here, explain your specific issue, and we’ll do our best to help.

Hopefully this experience will give you a lifetime conviction of the value of writing good code comments and other documentation. 🙂


What I suggest you do is check past years' or other teams' code and try to understand what is going on; after that, try to write that same code by yourself.
When writing the code, use the WPILib documentation and other sources, and learn from them how to write it. I learned to do vision entirely from the WPILib docs, and it was easier than I thought.

My problem with the WPILib docs is that they tell you what to do to make something work, but not why the code you wrote works or how it works. I can copy that, but I'd end up having to do it over and over until I figured out for myself how the code works.

For a general resource, you might find this video series useful: https://www.youtube.com/@0ToAuto

I second the recommendation to look at other teams’ code, but you won’t find that to be uniformly well-written and well-documented.

I suggest you pick out three specific questions you have about specific code, and ask them here.

For vision, you can use the Limelight docs; they have great explanations (though I don't think they cover AprilTags). For other stuff, just check Chief Delphi and the unofficial FIRST Discord, and ask, or look for people who have already asked questions, made guides, and more.


This is most of what the computer science world is. It's called abstraction, and it's done so you don't have to understand the code to use it. That way, you can use building blocks someone else has created.
However, this can be frustrating if you want to understand how something works. One method with WPILib is to dive into the source code itself. If you are interested in the control theory, this textbook is very comprehensive.


With AprilTags on the horizon, my training material on vision thresholding, contours, and finding green blobs is obsolete. The Limelight documentation, as suggested, should do for the retroreflective-tape experience.

I also put together training on swerve drive calculations, Convolutional Neural Networks (CNN), and Finite State Machines (FSM). I've hesitated to "publish" them since there isn't an original word in them; they're edited copy/paste from the Internet.

Ether's work is the basis of the swerve document. And after all that reading, there is still a line of code in the classes included with the WPILib SwerveBot example whose purpose I have to guess at, and I'm not a rookie: over 60 years of experience.

My favorite resources are the Behind the Bumpers videos, since they go over high-level system structure and flow. I find that most other technical subtleties can be figured out through a combination of Chief Delphi searches and trial and error.


I can tell you, as someone who specializes in vision, it's not actually that difficult to learn. Getting a Limelight (or whatever) working on the robot is pretty simple, and writing basic vision targeting is pretty easy once you understand what the values the Limelight is feeding you mean. The cool thing about it is that there are myriad super-advanced things you can do that build off the basics. If you can't find what you need in WPILib's docs, you can find more Limelight-specific things in CTRE's docs. They've written plenty of zero-to-aimbot tutorials that actually delve into the why, not just the how. If you are using PhotonVision (it's better, in my opinion), they have built up their own documentation as well, with similar niceties to the Limelight's.
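To make "basic vision targeting" concrete, here is a minimal sketch of reading the Limelight's published NetworkTables values and turning toward a target. The kP gain is a made-up starting point and drivetrain is a hypothetical subsystem; only the "limelight" table and its tv/tx entries come from the Limelight docs.

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

NetworkTable limelight = NetworkTableInstance.getDefault().getTable("limelight");
double tv = limelight.getEntry("tv").getDouble(0.0); // 1.0 when a valid target is visible
double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset to the target, in degrees

double kP = 0.03; // proportional gain; a made-up value you would tune
if (tv == 1.0) {
  double turn = kP * tx; // turn harder the further we are off target
  // drivetrain.arcadeDrive(0.0, turn); // hypothetical drivetrain subsystem
}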

Most of the advanced things you are trying to learn are 3rd-party (not made by WPILib) tools that all have their own docs. WPILib is useful for learning how to integrate the systems into your robot, but the 3rd-party docs go more into the nitty-gritty of how they work.

Well, right now a few things I'd love to know are:
What do odometry/kinematics do? (Are those even the same thing?)
Are there any good resources for using AprilTags? (As well as a cheap option for playing around with them at home?)
What's the next step in coding after I've done most of the controls and simple autonomous stuff?

They’re related, but different.

Odometry is using sensors (e.g. wheel encoders and the gyro, and possibly camera data) to estimate where your robot is on the field.

Kinematics is generally the study of how things move. In robotics it can cover how joint angles in an arm relate to the position and orientation of an end-effector, but in the WPILib context, it's a model relating movement of the robot to actions of the drivetrain. It answers the question: if I want the robot as a whole to do this, what should I make the motors do?

(Arguably this is Inverse Kinematics. Forward Kinematics answers the question: If the motors/joints do this, what will the robot as a whole do?)

You can see that this is fairly straightforward for a differential drivetrain (aka tank drive, west coast drive), but more complex for a swerve drive, mecanum drive, or the even more exotic alternatives.
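As a rough sketch of how the two fit together in WPILib Java for a differential drivetrain (the trackwidth value is a placeholder, and the gyro/encoder objects are hypothetical stand-ins for your hardware):

import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.kinematics.DifferentialDriveKinematics;
import edu.wpi.first.math.kinematics.DifferentialDriveOdometry;
import edu.wpi.first.math.kinematics.DifferentialDriveWheelSpeeds;

// Kinematics (inverse): desired chassis motion -> individual wheel speeds.
DifferentialDriveKinematics kinematics =
    new DifferentialDriveKinematics(0.6); // trackwidth in meters (placeholder)
ChassisSpeeds chassisSpeeds = new ChassisSpeeds(2.0, 0.0, Math.PI); // 2 m/s forward while spinning
DifferentialDriveWheelSpeeds wheelSpeeds = kinematics.toWheelSpeeds(chassisSpeeds);

// Odometry: accumulated sensor readings -> estimated pose on the field.
DifferentialDriveOdometry odometry =
    new DifferentialDriveOdometry(new Rotation2d(), 0.0, 0.0);
// Then, periodically, with real sensors:
// odometry.update(gyro.getRotation2d(), leftEncoder.getDistance(), rightEncoder.getDistance());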

In my team, we’re still working on a good answer to that question, and we’re probably going to do things the hard way (that is, not a way I would recommend for others), because that’s the way we roll.

I see a lot of threads here about PhotonVision, so that may be a good place to start. There are rumours that Limelight will eventually have AprilTag support.

In terms of cheap, I would normally say grab a Raspberry Pi and a camera and get started, but there are currently supply chain issues.
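If you do try PhotonVision, the basic reading loop in PhotonLib is short. A minimal sketch; the camera name "photonvision" is whatever you configured in the web UI:

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

PhotonCamera camera = new PhotonCamera("photonvision"); // must match your configured camera name
PhotonPipelineResult result = camera.getLatestResult();
if (result.hasTargets()) {
  PhotonTrackedTarget target = result.getBestTarget();
  int tagId = target.getFiducialId(); // which AprilTag we are looking at
  double yaw = target.getYaw();       // horizontal angle to it, in degrees
}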

There are many possible answers to this question, so I'll pick two: more complex autonomous stuff, and driver assist.

Does your autonomous routine always work perfectly? What would it take to increase the reliability? Can it cope with the unexpected? Can you find a way to score more points during autonomous?

Would you be more likely to be drafted if you had more autonomous routines? Hint: What sort of autonomous routine would you want your alliance partner to have that would be useful but which wouldn’t interfere with yours?

Autonomous programming doesn’t have to end with the autonomous period. Is there a way to introduce aspects of autonomy to the teleop portion? Can you provide a “turn to target” button? What about “drive to target”? Is your odometry good enough to last the whole match? What about “pick up the blue balls but not the red ones”? Can you program a magic way to drive out of a “moving T-bone”?
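In command-based code, a "turn to target" button can start as small as a binding. A sketch, where TurnToTargetCommand, drivetrain, vision, and driverController are all hypothetical names you'd supply yourself:

import edu.wpi.first.wpilibj2.command.button.JoystickButton;

// Run the (hypothetical) turn-to-target command while button 1 is held.
new JoystickButton(driverController, 1)
    .whileTrue(new TurnToTargetCommand(drivetrain, vision));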

Can you improve the data visualization on the dashboard to help the driver? What about helping them to press the right button at the right time and avoid mistakes? Can programming help to prevent the robot from incurring penalties? From suffering damage? Do you ever burn out motors? Does the robot ever fall over?

How autonomous is your climb?


Fault tolerance.


Curious, which line is it?

I’ve seen more than one person get stuck on this one (explained here).

// optimize() avoids rotating the module more than 90 degrees by flipping the
// wheel's drive direction and rotating the target angle 180 degrees when that is shorter.
SwerveModuleState state =
    SwerveModuleState.optimize(desiredState,
        new Rotation2d(m_turningEncoder.getDistance()));
The "Mr. Thomas was confused" story:

As is true of many things in life - “context matters” and “education and training are critical” along with a “good night’s sleep”.

I found where I was confused, and @bovlb pointed really close to the statement, but not the exact one I was hung up on.

A year ago, at the urging of the head coach (a really good build guy), the team voted in January to use swerve drive for the first time. The two panic-stricken software mentors (myself and the Java/math teacher) spent some long nights with the WPILib SwerveBot example. Working way past my bedtime, with no easy, instant swerve-calculation reference, I was baffled by several lines in SwerveDriveKinematics resembling this:

m_inverseKinematics.setRow(
    i * 2 + 0,
    0, /* Start Data */
    1,
    0,
    -m_modules[i].getY() + centerOfRotationMeters.getY());

Part of my problem was that I misunderstood the comment about Start Data. I thought it meant the 0 was the start of the data. I failed to recognize that the 0 was the starting column and the data started on the next line (a new line before Start Data would have helped immensely, as would my reading the notes on setRow()).
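For anyone else who trips on this: WPILib's kinematics matrices are EJML SimpleMatrix objects, and setRow takes the row index, a starting column, and then the data values. A tiny sketch of just that call:

import org.ejml.simple.SimpleMatrix;

SimpleMatrix m = new SimpleMatrix(2, 3); // a 2x3 matrix of zeros
// Arguments: row index, starting column, then the values to write.
m.setRow(0, 0, 1.0, 0.0, -0.5); // fills row 0, beginning at column 0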

This morning the code is clear and matches exactly what I later learned while putting together my Swerve Drive Calculations paper (written to organize my thoughts and all the disparate references and assumptions people were making).

The SwerveBot is an excellent example. We wanted to change three relatively small things (one I reported as an issue), and that required learning a lot about the code. Thus our deep dive into swerve calculations and profiled PID. It was a lot to bite off in an emergency for someone who took calculus 60 years ago.

Kudos to the OP @SammyPants who is being proactive to advance their team. An awesome attitude for a rookie! The kind of attitude I enjoy writing good references for and awarding scholarships.


Be sure to notice that PhotonVision, which was suggested above, does run on a Windows PC that you may already own at home. If you use a laptop, a camera is likely built in. Check the website for other supported platforms.


I would definitely have done this; however, the laptop that I have doesn't have a camera, so I get to have fun with that. I do have a webcam, but it's somewhat of a pain to go grab it to do stuff with.