A barebones-ish codebase written in C++ to quickly get a robot driving.
## How to Use?
Just clone the repo:
```sh
# For the main repo:
git clone https://github.com/ewpratten/barebonesfrc.git

# For the 5024 fork:
git clone https://github.com/frc5024/barebonesfrc.git
```
Then open it in VS Code (setup is explained below).
I (@ewpratten) have been following comments and posts on various FRC threads. They are a great place to learn new things (and sometimes you can find out about cool new FRC tools half a year before release). During the first two months of the 2018 season, I kept seeing the same complaints over and over again:

"It has been 3 weeks and the programmers haven't even gotten to see the robot to start programming it."

"Why should we have to give our programmer more than three days with the robot? How long does it take them? All they have to do is press Upload Code, right?"

There are plenty more posts of programmers complaining that they aren't given enough time, and just as many where builders complain that the programmers are taking too long to get the drivebase working. (Yes, we (@frc5024) took a while to get our robot to do basic things too.) So I came up with an idea: why not build a simple(ish) codebase that (most) teams can use to immediately get their bot moving around? (There is also an integrated network-based OpenCV server built in.) So, this is it!
Now, to address all those side notes. The reason I say simple-ish is that the code is really just a super stripped-down (and updated) version of 5024 PowerUp. The code could be a lot smaller and more efficient (we are working on the efficiency part), but I ran out of time to test it because school finished. This codebase cannot be used by all teams because: 1. it is written in C++, not Java or Python; and 2. currently, the only supported ESC is the Talon SRX.
## How do I Use This OpenCV Server?
Using the OpenCV Server is quite simple.
The "server" (more like an API) is designed to let you build whatever off-board vision code you want, using any tools you want! Just plug a webcam into the RIO, then follow these steps:
- Make sure that both `Camera_Server` settings are equal to `true` in the code
- Reboot the RIO
- Connect your off-board device to the robot's Wi-Fi or Ethernet network
- In your vision code on the device, read video (an MJPEG stream) from http://10.TE.AM.2:1181/stream.mjpg (replace TE.AM with your team number; for example, team 5024 uses 10.50.24.2)
- Use one of the many NetworkTables libraries (Like this one) to send your output motor speeds to the robot
- Send the forward speed through its NetworkTables entry
- Send the rotation through its NetworkTables entry
- Hold the B button on your controller during teleop and the robot will be fully controlled by your off-board code
- Feel free to send sensor data through NetworkTables too!
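The steps above can be sketched in a few lines of off-board code. This is a minimal Python sketch, not code from this repo: the URL helper just applies the 10.TE.AM.2 convention described above, and the proportional-steering math (the `kp` gain, the fixed forward speed, and the idea of steering from a target's pixel offset) is an illustrative placeholder for whatever your vision pipeline actually computes. The NetworkTables table and entry names in the comments are also placeholders, not this repo's real keys.

```python
# Sketch of the off-board side: build the camera URL, turn a detected
# target's pixel offset into drive speeds, and (in comments) publish them.
# Table/entry names below are placeholders, not the repo's actual keys.

def team_to_camera_url(team: int) -> str:
    """Build the RIO's MJPEG stream URL (10.TE.AM.2:1181) from a team number."""
    te, am = divmod(team, 100)
    return f"http://10.{te}.{am}.2:1181/stream.mjpg"

def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Keep motor speeds in the [-1, 1] range the robot expects."""
    return max(lo, min(hi, x))

def target_to_speeds(x_offset_px: float, frame_width: int,
                     kp: float = 1.0, forward: float = 0.3):
    """Proportional steering: rotate toward a target offset (in pixels)
    from the frame centre while creeping forward at a fixed speed."""
    rotation = clamp(kp * x_offset_px / (frame_width / 2))
    return clamp(forward), rotation

# With a NetworkTables client library (e.g. pynetworktables) you would
# then publish the two speeds, roughly like:
#   NetworkTables.initialize(server="10.50.24.2")
#   table = NetworkTables.getTable("vision")   # placeholder table name
#   table.putNumber("forward", fwd)            # placeholder entry names
#   table.putNumber("rotation", rot)
```

For example, `team_to_camera_url(5024)` yields `http://10.50.24.2:1181/stream.mjpg`, and a target 160 px right of centre in a 640 px-wide frame gives a rotation of 0.5.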
If you would like an example of some off-board vision code, or are too lazy to write it yourself, check out my RIOCV-PI repo for vision code that follows a PowerUp cube.
## Why Does the Project Look Weird?
This section will be outdated as of January 6, 2019.
This is the first public project (I think) to use the FRC 2019 toolchain, build system, and VS Code plugin.
Instructions on installing the tools can be found here: FRC 2019 Beta
You can use the `gradlew` (Linux/macOS) and `gradlew.bat` (Windows) wrapper scripts to build, deploy, and test from the command line, for example `./gradlew build` and `./gradlew deploy`.
## How Did You Set Up CI?
More info on that can be found over at my CI For FRC repo.
## Who Made This?
The code was originally designed by: @ewpratten

And was tested by: @ewpratten

The OpenCV code can be found at: https://github.com/Ewpratten/RioCV-PI

And was written by: @ewpratten

Special thanks to our two awesome programming mentors: @johnlownie