Machine Learning for Game Piece Detection

I used machine learning for game piece detection instead of standard computer vision techniques. Its accuracy is incredible, and it is not nearly as easily confused as many other forms of CV. Color filtering, for example, can easily be fooled by other yellow objects; a machine learning model is much more robust. If anyone is interested in trying it out for themselves, I have documentation on my GitHub for the project.

https://github.com/cgund98/frcbox

Feel free to get ahold of me if there are any questions on how to use it.

As anyone should ask regarding a ML project: what is the accuracy on your test set (I didn’t see it in the repo)?

What are you using for the dataset? I didn’t see it in the repo. On mobile so I might be missing something.

Looks cool though.

Yay, another team is going down this route. Looking pretty good so far. How long did you train it for, and how big was your dataset?

According to the description, 216 photos taken from the team’s workspace.

OP: It sounds like your test accuracy is surprisingly acceptable given such a small training dataset. Did you separate the 216-image dataset for training and testing, or use a different method for testing? How big are the hidden layers? Are you worried about overfitting?

When I was working with the API I was being lazy and did not separate the data into training and evaluation sets. However, I combated overfitting by using very diverse images with different backgrounds and angles. I have found that this has worked without issues so far.
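For anyone curious what a held-out evaluation split would look like, here is a minimal plain-Python sketch. The 216-image count comes from earlier in the thread; the filenames and the 20% holdout fraction are hypothetical:

```python
import random

def train_eval_split(items, eval_fraction=0.2, seed=42):
    """Shuffle the dataset and hold out a fraction for evaluation."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = items[:]        # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_eval = int(len(shuffled) * eval_fraction)
    return shuffled[n_eval:], shuffled[:n_eval]  # (train, eval)

# With a 216-image dataset like the one mentioned in this thread:
images = [f"img_{i:03d}.jpg" for i in range(216)]  # hypothetical filenames
train, evaluation = train_eval_split(images)
print(len(train), len(evaluation))  # 173 43
```

Diverse images help generalization, but only a split like this lets you measure it.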

If anyone is interested in the dataset and the training process, you can find that in another repository. I trained it for an hour or two on my personal PC with a GTX 1060. I fine-tuned an existing model, MobileNet, which is why I was able to be successful with such a small dataset and such a short training time.

Thanks! That’s what I was looking for.

This is really cool. I’m excited to see how it works during the season!:eek:

Did you fine-tune the whole network? An up-and-coming trend for smaller datasets is to only update the fully connected weights and keep the spatial features untouched. The precedent for that is Harvard's skin cancer detector, which was pretrained on ImageNet.

I do not believe that I fine-tuned the whole network. Unless you are going for extreme accuracy, e.g. in a Kaggle competition, it is usually enough to fine-tune only the dense layers.
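For reference, freezing the convolutional base and training only a new dense head looks like this in Keras. This is a sketch of the general idea, not the actual project code (the project used an object detection pipeline); `weights=None` here just avoids a download, whereas real transfer learning would use `weights="imagenet"`, and the two-class head is a hypothetical cube/background example:

```python
import tensorflow as tf

# Load MobileNet without its classification head.
base = tf.keras.applications.MobileNet(
    include_top=False, weights=None, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the spatial/convolutional features

# New dense head, trained from scratch on the small game-piece dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. cube vs. background
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

With the base frozen, only the two dense layers' weights get updated, which is why a couple hundred images and an hour on a GTX 1060 can be enough.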