By Eli Meyer
Benjamin Disraeli, Prime Minister of Britain, is credited with a famous saying about falsehood: “There are three kinds of lies: lies, damned lies, and statistics.” In other words, the author of that phrase (probably not actually Disraeli, based on the historical record) was saying that statistics are easy to misrepresent, yet dangerously convincing—the worst sort of deception.
And there are countless ways to twist statistics. These range from outright fabrications of data to subtle misunderstandings of numbers that can fool even statisticians (check out Simpson’s Paradox next time you’re on a study break for a mind-bending example).
But on the GMAT, there is one statistical error to rule them all. The instant that a GMAT Critical Reasoning problem offers you a poll, survey, or experiment, you should always ask the exact same question: is the sample that was tested representative of the larger group the conclusion is drawn about?
There are many, many things that must be true for a sample to be valid, and most Critical Reasoning stimuli will leave those conditions as unstated assumptions. That means that as soon as you see statistical data in the evidence, you can instantly spot assumptions that you can paraphrase, undermine, or support as the question stem demands!
Generally, there are three specific ways that a sample can go wrong. The first is sample size—if the sample is too small, the poll is worthless. You don’t need to over-think this one: the GMAC does not expect test takers to calculate p-values or margins of error, just to use common sense. 200 respondents? Might be valid. 5? Probably not. Second, the sample has to be unbiased. If everyone in the sample has, or lacks, some quality that distinguishes the group surveyed from the general population, then the statistics are doubtful. And finally, the sample should be random. A non-random selection process, such as a self-reporting internet poll or a pollster asking his questions only at a specific time and location, carries with it a presumption of bias. You don’t need to know exactly how or why the skew is introduced; its mere potential is enough to render the argument questionable.
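If you're curious why sample size matters so much, a quick simulation makes it concrete. This is a sketch of my own, not anything the GMAT expects you to run: it polls a population whose true "yes" rate is 50% with small versus large samples, and shows how wildly the small polls swing.

```python
import random
import statistics

# Illustrative simulation (my own example, not from the GMAT): poll a
# population where the true "yes" rate is 50%, comparing tiny samples
# against reasonably sized ones.
random.seed(42)  # fixed seed so the simulation is reproducible

TRUE_RATE = 0.5
TRIALS = 1000

def poll(sample_size):
    """Return the observed 'yes' proportion from one simulated poll."""
    yes_count = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return yes_count / sample_size

small_polls = [poll(5) for _ in range(TRIALS)]    # like asking 5 people
large_polls = [poll(200) for _ in range(TRIALS)]  # like asking 200 people

# The spread (standard deviation) of the small polls is far larger:
# a 5-person poll routinely reports 20% or 80% when the truth is 50%.
print(f"spread with n=5:   {statistics.stdev(small_polls):.3f}")
print(f"spread with n=200: {statistics.stdev(large_polls):.3f}")
```

The small samples scatter several times more widely than the large ones, which is exactly the common-sense intuition the GMAT rewards: five respondents can produce almost any result by pure chance.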
Spotting this pattern is a big time saver. Take a look at today’s question of the day—it has two statistics, and if you look closely, you’ll realize the samples from those statistics don’t quite mesh. Good luck!
Police officers in Smith County who receive Special Weapons and Tactics (SWAT) training spend considerable time in weapons instruction and practice. This time spent developing expertise in the use of guns affects the instincts of Smith County officers, making them too reliant on firearms. In the past year in Smith County, in 12 of the 14 cases in which police officers shot a suspect while attempting to make an arrest, the officer involved had received SWAT training, although only five percent of the police force as a whole in the county had received such training.
Which of the following, if true, most strengthens the argument above?
(A) In an adjacent county, all of the cases in which police shot suspects involved officers with SWAT training.
(B) SWAT training stresses the need for surprise, speed, and aggression when approaching suspects.
(C) Only 15 percent of Smith County’s SWAT training course is devoted to firearms lessons.
(D) Among officers involved in the arrest of suspects in Smith County in the past year, the proportion who had received SWAT training was similar to the proportion who had received SWAT training in the police force as a whole.
(E) Some Smith County officers without SWAT training have not been on a firing range in years.
This problem has two statistics: the 5% of the police force as a whole that has taken SWAT training, and the 12 of 14 shootings during arrests. At first glance, this looks fair—after all, both are statistics about the same police force, and the first one actually counts the entire force and doesn’t rely on a potentially biased survey sample!
However, closer inspection reveals a problem. Though 5% of the police force received SWAT training, we don’t have any information about how that 5% was distributed across assignments. The “12 of 14” statistic is drawn only from a specific subset of officers: those involved in arrests that turned violent. Given that traffic officers, animal control officers, and child welfare officers might not even carry a gun, let alone face down violent criminals, that comparison is suspect.
This question asks us to strengthen the argument. Since we’ve spotted a potential reason that this argument might be dubious, we need to patch up that hole for the argument to be compelling. The correct answer, therefore, must establish that using only arresting officers as the basis of our evidence does NOT skew the data. Answer choice (D), which establishes that SWAT-trained officers made up roughly the same 5% share of arresting officers as of the force as a whole, matches that prediction perfectly.
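To see just how much (D) strengthens the argument, here is a rough back-of-the-envelope check. The binomial model is my own simplification, not part of the GMAT question: if (D) holds and only about 5% of arresting officers had SWAT training, treating each of the 14 shootings as an independent 5% draw shows how improbable 12 trained officers would be by chance alone.

```python
from math import comb

# My own simplified model (not from the GMAT): each of the 14 shootings is
# an independent draw, with a 5% chance the officer involved had SWAT
# training. How likely is it that 12 or more of 14 involved trained
# officers purely by chance?
n, p = 14, 0.05
p_at_least_12 = sum(
    comb(n, k) * p**k * (1 - p)**(n - k)  # binomial probability of exactly k
    for k in range(12, n + 1)
)
print(f"P(12 or more of 14 by chance) = {p_at_least_12:.2e}")
```

The probability comes out vanishingly small, which is why (D) is so powerful: once the sample of arresting officers is shown to be representative, chance is all but ruled out, leaving the author's causal explanation standing.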
The correct answer is (D).