NoHalfMeasures wrote:
KarishmaB wrote:
StandardizedNerd wrote:
Okay! FWIW, another forum lists it as a GMAT Paper Tests question, so it may have been written by ETS. Or maybe not.
I will be very surprised if it turns out to be an official question. We are asked to infer the principal's beliefs about the beliefs of the list makers (i.e., what they think 'academically troubled' means). That is too much of a stretch.
All I can say is which factors the principal believes make a school 'academically troubled' (graffiti, drugs, and test scores) and that he thinks his school did better than 50 other schools on those parameters.
Well done! This indeed isn't in the 9 GMAT Paper Tests. In your view, what are the trappings of an apocryphal question such as this one?
Mods,
Bunuel: Can you please confirm and remove the tag? Thank you.
That's an interesting question. Thankfully, the test taker doesn't need to worry about it, but the test maker does!
Official questions with debatable answers have been extremely rare. Normally, the logic of the correct option is undeniable (even if it is not apparent at first), and the other four options are definitely worse. When we fail to solve such a question correctly, the reason is usually our own oversight or haste. Given enough time and attention, the answer to an official question is non-debatable for most people. Also, with experience, one starts relating to the logic used in official questions: it might be convoluted, but it is never far-fetched.
If we are forced to weigh the pros and cons of two options even after reading the full explanation of the reasoning, the question is usually not official.
Framing official questions involves not only a lot of money, time, and highly experienced effort, but also a lot of 'experimentation.' When a new question is administered as an experimental (unscored) item and most people above a certain score get it right, the question is dependable. If, instead, the performance of people with similar aptitudes shows no definite pattern, the question likely has low predictive value. So even if a less-than-optimal question does sneak in, the test maker can weed it out at the experimental stage. A 700-level CR question should be answered correctly by almost everyone at the 750 level; if half the people who scored 750 got it wrong and marked another option, we know the logic of the question is off, or that it carries regional or other biases.
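The screening described above resembles classical item analysis: an item's "discrimination" can be estimated as the point-biserial correlation between getting the item right and overall test score. This is a minimal illustrative sketch with made-up data, not ETS's actual procedure:

```python
# Illustrative sketch (hypothetical data, not ETS's actual method):
# flag experimental items whose correctness shows no clear pattern
# across ability levels, using point-biserial correlation between
# item score (0/1) and total test score.

def point_biserial(item_correct, total_scores):
    """Pearson correlation between a 0/1 item vector and total scores."""
    n = len(item_correct)
    mean_i = sum(item_correct) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_correct, total_scores))
    var_i = sum((i - mean_i) ** 2 for i in item_correct)
    var_t = sum((t - mean_t) ** 2 for t in total_scores)
    return cov / (var_i * var_t) ** 0.5

# Hypothetical cohort: total scores and whether each person answered
# the experimental item correctly (1) or not (0).
totals = [510, 550, 590, 620, 660, 700, 730, 760]

good_item = [0, 0, 0, 1, 1, 1, 1, 1]   # higher scorers get it right
noisy_item = [1, 0, 1, 0, 0, 1, 0, 1]  # no pattern across ability

for name, item in [("good", good_item), ("noisy", noisy_item)]:
    r = point_biserial(item, totals)
    verdict = "keep" if r > 0.2 else "weed out"  # arbitrary cutoff
    print(f"{name}: discrimination r = {r:.2f} -> {verdict}")
```

Here the "good" item correlates strongly with ability and survives, while the "noisy" item (like the hypothetical 700-level question that 750 scorers miss) shows no discrimination and gets weeded out.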