Quote:
Originally Posted by wilsonmw04
This thread sounds like some of my students after they get a test back. The students who have not done well complain that it was "a bad" test for various reasons. I would then post the range of test scores, which would not be significantly different from the results of the last test. The students would still insist the test is bad or that the teacher didn't teach the material properly.
It's amazing how folks can see data right in front of them and still insist their original opinion is that of the majority when it may not be.
So much rage this year. So much misplaced rage.
Actually, this season, the teacher decided to randomly yoink the test out of a large percentage of students' hands before the normally allotted amount of time for taking the exam ran out. Each and every bad foul call, pedestal delay, missed assist, unpenalized instance of damage, etc., yoinked additional valuable time away from the students trying to pass the exam.
Some high-performing students may have been better able to still "ace the test" given the artificially compressed environment foisted upon them. Others who very well could have passed the test if given the full allotted time will never know if they actually would have achieved that goal because THE TEACHER NEVER GAVE THEM THE FULL OPPORTUNITY TO SUCCEED THEY WERE ORIGINALLY PROMISED.
Also, the survey email I received from the (very good, nice, and wonderful) Pittsburgh RD also contains the following:
"A note about surveys:
Seriously, we are working hard to NOT over-survey you guys! Both FIRST HQ and I need team data in order to write grant reports and funding proposals. Last year we had only a 27% return on student surveys at the Pittsburgh Regional,
and ~1% (yep, one %) return on HQ surveys from our event. The only way we can ensure having regional competitions is to have evaluation processes in place that work. I thank you from the bottom of my heart for your patience and care in responding to our surveys."
I don't know what a good number is for an estimate of the total students/mentors in FIRST, but let's assume (very conservatively) 20 students/mentors per team on average. 20 * 2700 = 54000 potential responses. 3600 responses out of a possible 54000 is a mere ~6.7%. And this doesn't include the fact that teams are given the chance to respond multiple times if they attend multiple events.
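For anyone who wants to sanity-check the back-of-the-envelope math above, here's a minimal sketch. The 20-per-team and 2700-team figures are my assumptions from the paragraph above, not official FIRST numbers:

```python
# Rough estimate of the HQ survey response rate.
# Assumptions (not official figures): 20 students/mentors per team, 2700 teams.
members_per_team = 20
teams = 2700
potential_responses = members_per_team * teams  # 54000
actual_responses = 3600

rate = actual_responses / potential_responses
print(f"{potential_responses} potential responses, {rate:.1%} response rate")
# → 54000 potential responses, 6.7% response rate
```

Bump the per-team estimate up or down and the rate only gets worse or marginally better; either way it stays in single digits.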
Honestly, I'd like to see more team members SPEAK UP and contribute to these surveys regardless of opinion, such that the statistics drawn from them become more relevant and worthy of reporting.