How do you GIT'er done?

My team uses a Gerrit server for code review that runs on an AWS instance the software mentors maintain. At competition, the students make test branches locally and merge to master after testing. They either push those merges to Gerrit at the end of the day or tether to a phone to push to the server during the day. Since I was at university for the last few years, I was typically not at competition, so the students occasionally pushed patchsets to Gerrit for me to review remotely and catch potential issues (I was usually watching the regional online that weekend anyway). I either leave the comments in Gerrit or ping them directly on Slack.

We use Gerrit for code review because I like the rebase workflow better than the merge workflow of GitHub (and we enforce rebases through the admin interface). I cover more of how to do that stuff in the presentation at https://csweb.frc3512.com/ci/git02/. I still need to go through those and expand on the lecture notes.

To make the students self-sufficient at competition, I teach them how Git really works. I try to emphasize how Git is basically an application of graph theory, because having that deeper understanding has saved our butts a couple times at competition when my students start doing hairy things to the branches and history (my students are all comfortable using “git reset” to move around the reflog). The HTML intro slides I use are at https://csweb.frc3512.com/ci/git01/. There’s always a set of Git commands to fix the issue at hand, so recloning should never be needed. (This is nice: https://github.com/k88hudson/git-flight-rules/blob/master/README.md)
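As a concrete example (the branch name and reflog position below are made up), a branch that got clobbered by a bad reset or rebase can usually be pulled straight back out of the reflog:

# show where HEAD has been recently
git reflog
# say the last good state shows up as HEAD@{3};
# either point a new branch at it...
git branch rescued-work HEAD@{3}
# ...or move the current branch back to it outright
git reset --hard HEAD@{3}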

As far as branch hygiene goes, we keep master as the working version and give branches descriptive names and commit messages (no "Fixes " branch sitting next to a “Fixes the issue for real, I promise” branch with no indication of which one is the real “working” branch). A clean commit history also makes code review much easier. I despise merge commits for this reason; they are just noise (last I checked, Brad was tied for most commits in wpilibc/wpilibj if you include the huge number of merge commits). Just rebase against master, since you’re doing that merging work either way, and you get cleaner history out of the rebase. Rebasing a PR on GitHub or a changeset on Gerrit is technically modifying public history, but we treat those branches as private, so the author is free to rebase them into something that is easier to review.
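On a feature branch that usually boils down to something like this (the branch name is hypothetical):

git checkout feature/shooter-tuning
git fetch origin
# replay the feature commits on top of the latest master; history stays linear
git rebase origin/master
# the branch is treated as private, so force-pushing the rewritten history is fine
git push --force-with-lease origin feature/shooter-tuning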

I was just wondering:

Why not have your robot code burp back information to the DS log to tell you what code is in the robot?

I mean, discipline with Git is valuable, but if you don’t know what you actually put on the roboRIO, how are you going to post-mortem someone breaking that flow?

This is actually a subject of much debate among developers: https://www.atlassian.com/git/articles/git-team-workflows-merge-or-rebase

I’ve got my opinions but they matter not. Do what works for you.

I’m hoping to get some time to try out an implementation of this. I was thinking maybe the Java manifest file, but to make it a bit more cross-platform I’ll do a generic properties file that gets deployed with the rest of the code, plus a Java sample implementation for reading the file. (A rough sketch of generating such a file from plain Git commands follows the field list below.)
Current fields:
–Build Date/Time
–Build PC Username
–Git uncommitted files in working tree/Index (True/False)
–Commit SHA1
–Git Branch (if available)
–Git Tags (if any available)
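For what it’s worth, every one of those fields falls out of stock Git commands, so generating the file can be a small pre-deploy script along these lines (the output path and key names are just placeholders until I settle on the real layout):

#!/bin/sh
# write build metadata to a properties file that gets deployed alongside the code
OUT=src/main/deploy/buildinfo.properties   # placeholder path
{
  echo "build.date=$(date '+%Y-%m-%d %H:%M:%S')"
  echo "build.user=$(whoami)"
  # true if anything is uncommitted in the working tree or index
  if [ -n "$(git status --porcelain)" ]; then
    echo "git.dirty=true"
  else
    echo "git.dirty=false"
  fi
  echo "git.sha1=$(git rev-parse HEAD)"
  echo "git.branch=$(git rev-parse --abbrev-ref HEAD)"
  echo "git.tags=$(git tag --points-at HEAD | paste -s -d, -)"
} > "$OUT"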

I got you!

~marshall$ cat .vimrc
set tabstop=4
set expandtab
set shiftwidth=4

I don’t use vim much, but “set expandtab” is words to live by. :smiley:

There is an objective answer.

Let’s write Whitespace for the roboRIO:

Seems like a good idea. We could just leverage this implementation. Has anyone gotten their robot programmed in Scratch yet?

I will find a place and put in one non-breaking-space. I promise.
Then spend hours watching people debug that.

Realistically, I think programming a robot in Scratch would already be worse than Whitespace…

Got a sample implementation to generate the build info file and deploy it to the RIO. Still need to implement a class to read it at runtime…

Edit: confirmed the deploy is working on an actual RIO tonight. On my 8-year-old laptop running Lubuntu, a full clean build and deploy takes 23 seconds. This is incredible! Excellent job again, WPI team; this is already an awesome user experience!

So I got GitLab 11.1.4 running in LXLE 16.04 LTS (Xenial) just fine on Oracle VirtualBox 5.2.16 r123759.

It requires:

  1. Just under 10GB of storage with nothing in the repository.
  2. 3GB of RAM, and even with that, having a copy of the SeaMonkey web browser open in the GUI with 2 tabs still leaves 500MB of RAM free.
  3. I set this up with VMDK disk files split at 2GB so it’s more transportable across systems.
  4. You have to use the 64-bit version of LXLE because there are no current GitLab Omnibus installer packages for the 32-bit version of LXLE.

It’s in bridged networking mode, with the firewall configured with UFW as it comes with LXLE out of the box. So the IP for it can be DHCP or set static from the Network Manager icon on the LXLE desktop; of course it can be done from the command line as well.
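For reference, the UFW side of that is just opening the ports GitLab’s web UI and Git-over-SSH use (adjust if you changed the defaults in the Omnibus config):

# allow SSH (Git + admin), HTTP, and HTTPS through UFW, then enable it
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose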

I left it booting into the GUI even though I could prevent that and save some RAM. The GUI boots fast on my Lenovo G50-45 AMD A8 laptop with a 256GB SSD (this laptop cost less than $550 with the upgrades to 16GB of RAM and from HDD to SSD): less than 1 minute. GitLab comes up within 2 minutes after that. Since this is a 64-bit OS in a virtual machine run by Oracle VirtualBox, it requires a system with Intel VT / AMD-V (some current low-end Intel processors have VT forced off in the CPU, while most AMD CPUs support AMD-V).

So if someone actually wanted to take all of GitLab someplace without Internet, this is perfectly doable. You’d still be able to run the VM and develop on the Windows 10 host, because at least on this laptop you’d have 13GB of RAM free, which is plenty. Even if the laptop had only 8GB of RAM, you’d be able to run many development tools and this VM at the same time.

Obviously none of this is necessary just to get Git itself working. Also, GitLab has more features available, and if those start getting added I’d expect this VM to need more RAM to operate.

If anyone is actually interested in doing this: I can provide instructions for the installation, a copy of the VM by Dropbox, or a copy of the image on DVD+R DL media.

Thanks. We’ll look at this for sure, when we get to it in a few weeks.

For a simpler and lightweight cross-platform git server solution (no VM required!), you might try https://gitea.io/ … I’ve not used it personally but I’ve heard good things.

I’ve installed Gitea once on a Raspberry Pi. I’ve never personally seen it in a production setting for anything commercial. Open to the idea; I’ve just never had the opportunity.

Looks like there are at least 4 commercial users:

Just for comparison here’s GitLab:

Not saying it won’t work; it does have a comparison page with GitLab on its website:
https://docs.gitea.io/en-us/comparison/

It does support a dump command for backups as well, like GitLab, so if someone messes up you can at least restore the state of that install. That matters because installing it without Docker or a VM means it is not contained from the other things on the machine.
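In case it helps, the backup commands for both are one-liners (defaults and paths can vary by install method):

# Gitea: writes a gitea-dump-<timestamp>.zip of the repos, database, and config
gitea dump

# GitLab Omnibus: writes a backup archive under /var/opt/gitlab/backups by default
sudo gitlab-rake gitlab:backup:create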

Raspberry Pi 3s are pretty cheap. I suppose you could literally dedicate one to hosting Gitea as well. Here is a link to a HowTo for installing Gitea on a Raspberry Pi 2/3:

Also remember that if you do this on a Raspberry Pi and power is interrupted (a laptop in functioning order at least has a battery), you’ll be cleaning that up.

There are tons of wrappers for Git frankly:
https://www.artificialworlds.net/blog/2014/07/15/what-git-server-should-i-use/

From a personal perspective: I could help someone with GitLab in a contained situation like a VM faster than Gitea. I just use GitLab and VMs more professionally.

Finally, I suppose since Gitea can run in a 32-bit environment more easily, someone could probably put it in a 32-bit LXLE Oracle VirtualBox VM, which would remove the requirement for a laptop that supports Intel VT or AMD-V. I mean, a VM is not the worst thing in the world, or no one would use cloud computing. Sure, the OS in the VM needs 256MB to 768MB of RAM and it probably takes up a few GB of disk space. However, it’s a heck of a lot easier to support a single OS custom configured for the server running in the VM than to support Debian, (X/K/L)Ubuntu Tahr/Xenial/Beaver, Red Hat 6-8, Fedora, CentOS, Mint, LXLE, Arch(Bang), Windows XP/Vista/7/8/8.1/10, and the server hosted on all of those, along with all the things users can do to mess those OSes up and all the bugs and development that go into each.

This works the other way as well. Both GitLab and Gitea stand on other open source products like MySQL and PostgreSQL. Putting these in the VM isolates them from similar or conflicting products that may already exist on the computer to which the VM is added.

Still looking for the reason a FAT32-formatted USB flash drive holding the master repository isn’t cheaper and faster to set up than (pick your flavor) Git server implementation on (pick your OS/platform) server, and quite adequate for your development needs at competition.
I get there are benefits to teaching students how to set up a server, etc. However, during competition we’re looking for the simplest solution that “just works.”
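For the record, all the Git side of that takes is a bare repository on the stick (the mount point below is just an example):

# one time: create a bare repository on the flash drive
git init --bare /media/usb/robot.git

# on each laptop: add the stick as a remote, then push and pull over the filesystem
git remote add usb /media/usb/robot.git
git push usb master
git pull usb master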

I believe you glossed over the point where I mentioned that Git is extended with additional features by these products. The value of the added software is determined by whether you need those features, especially in this case, where you probably don’t have Internet.

Obviously Git alone won’t program your robot, so you add tools around Git: Gradle, Eclipse, IntelliJ, Java. At some point you have the set of tools you need. However, the determination of need is somewhat loose; otherwise new tools wouldn’t keep getting added to solutions like Gradle.

Now maybe you don’t need issue tracking. Maybe someone else does.

Perhaps adding BugsEverywhere is enough bug tracking for someone. Maybe not.

Obviously a team that is vigilant about tracking what they deploy to the robot would rarely need a solution for determining what they deployed to the robot.

Another reason I suggested the VM install, as opposed to trying to roll your own deployment, is that it can be copied onto the system and simply work (given that some of these products have a very large set of dependencies), then moved to another compatible system and just work again. Linux and the tools wouldn’t have license requirements preventing you from making a dozen copies of that VM if you wanted. Quite frankly, as a CSA I often wish FIRST would just bundle all the development tools into a VM with an OS so that installation was less fractured. That might not work for the Driver Station, but it would certainly cut down the number of hours spent getting everything together. The enormous amount of random access to each developer’s drive as they download and install all this software easily adds up to more time than simply copying the entire multi-GB VM to your system (or to external storage attached to your system) and booting it.

We stepped up our git-based game this year, but looking at this thread it’s apparent that our workflow is not nearly as complex as some. Regardless, we use a fairly simple (stack-wise) Git workflow which works well with our team of 4-5 dedicated programmers.

During slow periods (basically kickoff through week 4 or 5), development of features takes place in dedicated branches; once tested and confirmed working, changes are merged into master via GitHub PRs. However, once the robot is complete we tend to only have a week or so to test, and we’ve found that the standard GitHub Flow method adds a lot of overhead in those situations. So we generally create a “sprint branch” before periods with high amounts of work (namely, before bag day and the week before each competition) and all code goes into that branch. It doesn’t compromise our ability to keep track of code changes, as we make sure to keep commits very granular and isolated to specific systems. During competitions, we have one master laptop (belonging to the appointed main software person in the pit) where all code changes are made; changes are committed when they have a free moment and pushed when they have internet.