#76
Re: A New Way to Scout
Quote:
Not to criticize, honest question - if you're already doing VLOOKUP and INDEX type operations, why not invest a little bit of time to learn how to do this stuff with a full-blown relational database? It'd perform faster, scale better, be more reliable, and teach students useful skills. I guess the only real issue would be a simple front end, but that's not that bad to do any more. Disclaimer - my bread and butter is databases: Postgres, SQL Server, Mongo, CouchDB. When all you've got is a hammer every problem looks like a nail, so I'm genuinely curious why so many people seem to shoehorn Excel into a job for an actual DB.
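To make the VLOOKUP-vs-database point concrete, here is a minimal JavaScript sketch (the row shape and team numbers are invented for illustration): a VLOOKUP is effectively a linear scan over the rows for every query, while a relational database builds an index once and then answers each lookup in constant time.

```javascript
// Hypothetical match-score rows, as they might come out of a scouting sheet.
const rows = [
  { team: 254, match: 1, score: 78 },
  { team: 1114, match: 1, score: 65 },
  { team: 254, match: 2, score: 82 },
];

// VLOOKUP-style: scan every row for each query, O(rows) per lookup.
function vlookup(rows, team, match) {
  const hit = rows.find(r => r.team === team && r.match === match);
  return hit ? hit.score : null;
}

// Index once (roughly what a DB does with a primary key), then O(1) lookups.
function buildIndex(rows) {
  const index = new Map();
  for (const r of rows) index.set(`${r.team}:${r.match}`, r.score);
  return index;
}

const index = buildIndex(rows);
console.log(vlookup(rows, 254, 2)); // 82
console.log(index.get("254:2"));    // 82
```

With a few hundred rows the difference is invisible; with thousands of lookups over thousands of rows, the indexed version is the one that stays responsive.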
#77
Re: A New Way to Scout
Quote:
Excel/Gdocs is a useful tool for ALL engineers, so in theory every student on the team should learn it. A viable scouting system can easily be made in an hour if you're just doing averaging like most teams are - less by an experienced user - and you don't have to worry about potential errors/bugs you've introduced, etc. I don't buy the increased reliability argument. Furthermore, the real big gain is distribution. Every kid on the team has a Google account and has the scouting folder shared with them; phones, tablets, laptops, etc. can all view it with no pain or hassle. The user interface as well: the UI is so much faster and easier to edit for people with no programming experience. It has a lot of power for so little effort. It's the same reason why so many teams buy COTS shifters.
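The per-team averaging described above is also only a few lines outside a spreadsheet. A sketch in JavaScript, with made-up rows standing in for the scouting sheet:

```javascript
// Made-up scouting rows: one entry per robot per match.
const rows = [
  { team: 118, score: 40 },
  { team: 118, score: 60 },
  { team: 148, score: 55 },
];

// Average score per team - the core of most spreadsheet scouting systems.
function teamAverages(rows) {
  const sums = new Map(); // team -> { total, count }
  for (const { team, score } of rows) {
    const s = sums.get(team) ?? { total: 0, count: 0 };
    s.total += score;
    s.count += 1;
    sums.set(team, s);
  }
  const out = {};
  for (const [team, { total, count }] of sums) out[team] = total / count;
  return out;
}

console.log(teamAverages(rows)); // { '118': 50, '148': 55 }
```

Which is really the poster's point: the logic is trivial either way, so the spreadsheet's free UI and sharing are what you're actually buying.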
#78
Re: A New Way to Scout
Quote:
The way to do this would be to incorporate the data collecting into the event volunteer staff. Teams provide the volunteers, and those people take the requisite training just like they would if they wanted to become inspectors / referees / etc. Then they collect quality data in their roles as the official scorekeepers, and stats get released for public consumption. Just like they do in athletics.

The lack of this type of official scoring data is a deficiency in the FRC program. It's not like FRC doesn't have enough on its plate for the few people who actually get paid to run the operation, so that's not a criticism of them. I just think this is something that the community and FRC should address in the long term.

If we want FIRST to be loud, having scoring statistics readily available would help a bit. It gives people something to look at online to learn who's who, and that draws certain types of people in. It would make it possible to produce events with commentary and some statistics incorporated into the broadcast, adding context and history to the matches to make them more interesting to casual observers and newbies. Without that, it's harder for an observer to know or care about the difference between teams 6875 and 6587 and 6758.
#79
Re: A New Way to Scout
Quote:
Scouters would need to open an IFTTT account, give it access to receive SMS/texts (normal messaging rates apply), and provide Google account information so it can write to their Google Drive. A master scout would need to designate a spreadsheet file and share it with the other scouts. (All scouters would also automatically have access to the raw data in their own Google Drive.)

If crowd scouting: on Thursday, set up a scouting kiosk with a cellular hotspot for potential scouters to set up their accounts, or have the regional director e-mail all the teams the setup instructions ahead of time.

PS. You can opt the recipe in to send/collect the phone number of the SMS sender. This would allow the master scout to text back anybody who is doing it wrong, or to sort out all of their data and remove it from the pool. This would also release the phone numbers of the scouting participants to each other (no names attached); not sure if that is a problem.
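The post doesn't specify what the texted messages look like, so assume a hypothetical format of "team match score" (e.g. "254 12 78"). Once IFTTT forwards the raw text, turning it into a row for the sheet - and flagging malformed texts for the master scout - is a small parsing step:

```javascript
// Hypothetical SMS format for crowd scouting: "team match score".
// IFTTT would forward the raw text; this turns it into a structured row.
function parseScoutSms(body) {
  const m = body.trim().match(/^(\d{1,5})\s+(\d{1,3})\s+(\d{1,3})$/);
  if (!m) return null; // malformed texts get flagged for the master scout
  return { team: Number(m[1]), match: Number(m[2]), score: Number(m[3]) };
}

console.log(parseScoutSms("254 12 78")); // { team: 254, match: 12, score: 78 }
console.log(parseScoutSms("hello"));     // null
```

Keeping the format strict is deliberate: with anonymous crowd scouters, anything that doesn't parse exactly is safer rejected than guessed at.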
#80
Re: A New Way to Scout
Quote:
I'm not going to grab a spreadsheet at the moment, but can you put data into hidden columns? Make the spreadsheet view-only to everyone except the owner (so they can't directly open the hidden column), and make a copy for the owner to check phone numbers in the hidden columns. From there, it's up to the owner to know when data is correct or faulty and to let the users know how to improve their data input.
#81
Re: A New Way to Scout
Quote:
Even if the owner hides and protects a column, the shared users could still make a copy of the spreadsheet and then, as the owner of the copy, unhide and unprotect the information.
#82
Re: A New Way to Scout
Quote:
That is probably it. We have a TON of VLOOKUPs, and that probably messed everything up. After finding the VLOOKUPs, we never tried to look for a more efficient method. (Although I am still worried about losing the online data, after it consistently happened to us.) A few months ago I would have loved for you to take a look at the system, but I invested quite literally my entire summer creating our own proprietary system, which somehow works. Thank you for the offer!
#83
Re: A New Way to Scout
I talked with the lead programmer for CrowdScout today, and got some more details confirmed:
We'll use a database backend (MySQL or similar) to store the data. We'll have an API so that we can take data from many sources in a standardized format; we'll publish this API for anyone to use with their software, though we may need to set up API keys to prevent unintentional traffic floods from overloading the server and blocking it for others.

We should have no issues accommodating all the data from every match in every tournament, with one database per year. It turns out FRC actually creates a fairly small amount of data in a season, somewhere in the millions of data points range (relative to some of the data sets I've worked with, at least).

The API will just provide a way to upload data; every year we'll have a suggested list of metrics to score by, as well as taking other inputs (we're still rolling ideas around for the custom inputs, so that's not final).

Data will be distributed to teams through both raw data access (bulk exporting to some storage file) and data processing (running weighting algorithms and so forth). We envision teams doing their own visualizations/processing beyond that, so we're not going to have a massively complicated GUI with a million different buttons; we could never match the ingenuity of teams in how they want to do their own processing. (Someone could easily create a frontend and make it public, but we think that's better done locally, and that our time and energy is better spent on making a rock-solid reliable backend and great API.)
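None of this API has been published, so the following is only a sketch of the kind of upload endpoint described: an API key check plus basic validation of a standardized payload. The key registry, field names, and payload shape are all invented for illustration, not the project's actual design.

```javascript
// Invented key registry - in practice this would live in the database.
const API_KEYS = new Set(["team1678-demo-key"]);

// Validate one upload request; returns an HTTP-ish status object.
function handleUpload(apiKey, payload) {
  if (!API_KEYS.has(apiKey)) {
    return { status: 403, error: "unknown API key" };
  }
  const required = ["event", "match", "team", "metrics"];
  for (const field of required) {
    if (!(field in payload)) return { status: 400, error: `missing ${field}` };
  }
  return { status: 200, stored: payload };
}

const ok = handleUpload("team1678-demo-key",
  { event: "CASJ", match: 12, team: 254, metrics: { autoPoints: 15 } });
console.log(ok.status); // 200
console.log(handleUpload("bad-key", {}).status); // 403
```

Per-key rejection like this is also what makes the "traffic flood" concern manageable: a misbehaving client can be throttled or revoked without affecting anyone else.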
#84
Re: A New Way to Scout
Quote:
Might I suggest taking a look at MongoDB? As it is a schemaless database you might have better luck: if teams don't collect some metric, it won't be a complete mess for your DB, and you don't have to have columns for everything. Simple, clean JSON payloads.
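To illustrate the schemaless point (team numbers and metric names invented): two documents in the same collection can carry different metric sets, and aggregation code simply skips fields a document lacks instead of wading through NULL columns.

```javascript
// Two scouting documents with different metric sets, as MongoDB would allow.
const docs = [
  { team: 33, match: 5, metrics: { autoPoints: 12, climbs: 1 } },
  { team: 67, match: 5, metrics: { autoPoints: 9 } }, // no climb data collected
];

// Average a metric over whichever documents actually report it.
function averageMetric(docs, name) {
  const vals = docs
    .map(d => d.metrics[name])
    .filter(v => typeof v === "number");
  return vals.length ? vals.reduce((a, b) => a + b, 0) / vals.length : null;
}

console.log(averageMetric(docs, "autoPoints")); // 10.5
console.log(averageMetric(docs, "climbs"));     // 1
```

The trade-off is that the schema now lives in the application code: nothing stops a client from uploading `autopoints` instead of `autoPoints`, so some validation layer is still needed.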
#85
Re: A New Way to Scout
Quote:
Our scouting software has been rebuilt annually since we started using software for scouting, which meant that one or two people spent over 300 collective hours during the season writing it. My plan for this year prioritizes maintenance, efficiency, and software design. We are using:

node.js: Node is a tool for the modern web. It was created for numerous client-server interactions in which small amounts of data are passed (think chatrooms). It is single-threaded, but extremely efficient. In all honesty, we could probably support over a thousand FRC teams scouting concurrently with node (when used correctly). (Twitter did a study and saw that using node for certain aspects of their system increased the number of clients a server could service from a few thousand to over a million.)

express.js: This is a lightweight framework for node. It makes routing simple, among other small things.

mongodb w/ mongoose: Mongo is the best option for use with node because it is nonblocking (unlike SQL). Both mongodb and node stay extremely efficient by servicing other requests while waiting on outstanding ones. Mongoose is a nice wrapper library for mongo which makes it much easier to use, and adds schema support.

bootstrap: Bootstrap is the most popular library on GitHub. We're using it as a frontend CSS framework to keep things simple.

angular.js: A frontend JS framework made by Google. It allows two-way data binding, as well as easier maintenance of HTML. This is great, and eliminates some of the more boring code that deals with DOM manipulation.

In order to improve year on year, we have to use the most efficient system possible while keeping it maintainable. I'm writing multiple wrapper APIs to further simplify each of these frameworks. Maybe we can work together to make a better overall crowdscouting system. I'd definitely want to see the details of how you currently do it.
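The post doesn't show what those wrapper APIs look like. One plausible minimal sketch (all names invented, and a stubbed response object stands in for express so it runs standalone) is a handler wrapper that centralizes the try/catch boilerplate every route would otherwise repeat:

```javascript
// Invented wrapper: turns a plain async function into an express-style
// (req, res) handler with centralized error handling.
function wrap(handler) {
  return async (req, res) => {
    try {
      const result = await handler(req);
      res.json({ ok: true, result });
    } catch (err) {
      res.json({ ok: false, error: err.message });
    }
  };
}

// Tiny stub response object so the sketch runs without express installed.
function fakeRes() {
  return { body: null, json(x) { this.body = x; } };
}

// Usage: a route body that no longer worries about try/catch.
const getTeam = wrap(async req => ({ team: req.params.team, avg: 50 }));

const res = fakeRes();
getTeam({ params: { team: 254 } }, res).then(() => console.log(res.body));
// { ok: true, result: { team: 254, avg: 50 } }
```

With express, `wrap` would simply be applied at registration time, e.g. `app.get('/team/:team', wrap(...))`, so every route gets uniform error responses for free.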
#86
Re: A New Way to Scout
Below is from last year, before we went to worlds.
Quote:
#87
Re: A New Way to Scout
Khanh111: I've PMed you about a collaboration.
IPA is really cool, and I'd love to integrate it if 3138 is willing. Not all teams would be able to submit all the data that IPA uses (paper-based teams couldn't submit shot location, for instance), but we could definitely store that data on the platform we're developing, and the client would have access both to the specific data it needs and to the rest of the data.
#88
Re: A New Way to Scout
With HTML5 you get drag and drop! So, record the position of the robot's shoot point(s) and store them in a database.

After a lot of thought, I'm still undecided about which database I should use. My hosting platform is a Raspberry Pi, so I do not have too much power. What database should I use for such an intensive application?
Single MySQL database
Multiple MySQL databases
Single text file
Multiple text files
Something else?
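For the text-file options above, one low-overhead shape worth considering on hardware that small is an append-only file of JSON lines: appending is cheap, a crash loses at most the last line, and no database server runs at all. A sketch of the serialization side in JavaScript (the record shape is invented; on a Pi you'd feed `toJsonLines` to `fs.appendFileSync`):

```javascript
// One JSON object per line: appending is cheap and crash-tolerant,
// which suits a low-power host like a Raspberry Pi.
function toJsonLines(records) {
  return records.map(r => JSON.stringify(r)).join("\n") + "\n";
}

function fromJsonLines(text) {
  return text
    .split("\n")
    .filter(line => line.trim() !== "")
    .map(line => JSON.parse(line));
}

// Invented shot-location records like the ones the drag-and-drop UI would emit.
const records = [
  { team: 254, match: 1, shotX: 0.4, shotY: 0.7 },
  { team: 1114, match: 1, shotX: 0.1, shotY: 0.9 },
];

const text = toJsonLines(records);
console.log(fromJsonLines(text).length); // 2
```

A single MySQL instance would also run on a Pi for data volumes this small, but the flat-file route keeps RAM free, which matters more in the next few posts.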
#89
Re: A New Way to Scout
Quote:
(Warning: speculation ahead, I could be wrong about this.) Honestly, a Raspberry Pi doesn't have nearly enough power to run an application of this scale. You need a lot more RAM for DBs; the 512MB on the Raspberry Pi probably won't be enough. I'd want at least 2-4GB for a database of this size, with the amount of data we could be working with. Part of that is me wanting plenty of headroom, since the database may be able to run in 512MB, but part is the fact that I use more than 512MB while my web servers are idle on my personal server (admittedly not heavily optimized, but also not nearly as intensive as the sort of database we'll be using).

Last edited by 1306scouting : 23-11-2013 at 02:17. Reason: Clarify speculation
#90
Maaannn!!! What do you run on that poor server? My server uses just 192MB on standby, running all the crapware I keep on it!

See the screenshot. BTW, I was using SSH display forwarding, so I was hogging CPU and RAM with that; I don't keep LXDE running. Lol, I was writing this post while taking the screenshot!