I wouldn’t necessarily conflate NCLB-mandated testing with university-entry tests like the SAT and ACT. They’re aimed at different things.
The impetus to perform well on the college tests lies first and foremost with the student: their future will almost certainly depend on it. As for the NCLB testing, depending on the jurisdiction, a passing mark may not have any effect on the student’s promotion to the next grade or their eventual admission to a university. Those tests are primarily a way to measure schools and other administrative entities, and as a result, the schools are the ones that take them most seriously.
Nevertheless, you’re right to wonder what methodological issues might be affecting the statistics, and right to question the policies that govern the education system.
Incidentally, as far as NCLB is concerned, it was bad policy, but with reasonable intentions. The key flaw was assuming that performance on standardized tests would necessarily be an accurate predictor of students’ knowledge. This is only true to the extent that you hold everything—and in particular, the curriculum—constant. As soon as you adjust your teaching style and methods to focus on the testable items, without also revising your definition of knowledge to focus on the test content, you’ve introduced a confounding effect.
One example of this might be a breadth-for-depth tradeoff: students become more proficient in the types of language skills required on their test, but spend less time being exposed to a variety of literature, or practicing different writing styles. If this increased their test scores by 2%, did that curriculum change make the students 2% smarter (in terms of language)? It seems obvious that it probably wouldn’t have, yet this is an expectation against which the schools are being measured. (Note that it isn’t inconceivable that there actually was a benefit to the new teaching method, measured against a constant standard—maybe 0.4%, to pick a number for illustrative purposes.)
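To make the arithmetic in that example concrete, here is a minimal sketch that decomposes the observed score gain into a genuine learning effect and the test-specific inflation left over once the confound is accounted for. All of the numbers are the hypothetical ones from the paragraph above (2% observed, 0.4% genuine), chosen purely for illustration.

```python
# Hypothetical decomposition of an observed test-score gain.
# observed_gain: improvement measured on the standardized test itself.
# genuine_gain:  improvement as it would appear against a constant,
#                test-independent standard of knowledge.
observed_gain = 0.020
genuine_gain = 0.004

# Whatever remains is attributable to teaching to the test rather than
# to students actually knowing more.
test_specific_inflation = observed_gain - genuine_gain

print(f"Observed gain:           {observed_gain:.1%}")
print(f"Genuine gain:            {genuine_gain:.1%}")
print(f"Test-specific inflation: {test_specific_inflation:.1%}")
```

On these made-up numbers, four-fifths of the measured improvement is an artifact of the measurement itself, which is the sense in which the test scores stop tracking what they are supposed to track.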
The other glaring flaw with NCLB is that it takes funding away from schools that score poorly. Given that change and improvement tend to cost money, this is exactly the wrong move. It comes from a misguided belief that it would be irresponsible to throw good money after bad, which is folk wisdom substituting for an actual understanding of whether fiscal irresponsibility played any role in a school’s lack of success. The architects of NCLB in effect blamed low educational outcomes on mismanagement, and therefore tried to punish the bad managers by cutting their funding. Unfortunately, they never devised a rigorous way of determining whether the management was doing a good or a bad job in the first place, and their proxy, student test scores, is a terrible approximation in a substantial number of circumstances.