Veritas Prep Representative
Joined: 26 Jul 2010
Posts: 416
Given Kudos: 63
Re: Questions in the real GMAT
[#permalink]
05 Jul 2018, 13:16
I think what you're hearing about is the way validated questions are served by the adaptive algorithm: once GMAC knows how difficult a question is, it only makes sense to serve it to a candidate when that question would give the algorithm new information about that candidate. For example, if after 12 quant questions the system has a strong belief that you're a 700+ scorer, it won't show you a question that 95% of all examinees get right: at the 90th percentile, you have a 99+% chance of answering it correctly, so your response tells the algorithm almost nothing. Similarly, if someone is testing at the 300-400 level, you wouldn't show them a question that only 25% of people get right: most if not all of that 25% are likely 650+ scorers (or lucky guessers), so the 300-400 level student will almost certainly miss it or guess blindly.
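The selection rule described above can be sketched with a toy item-response model. This is a hypothetical illustration, not GMAC's actual algorithm: the logistic (Rasch-style) probability model and the numbers below are assumptions made for the example.

```python
import math

def p_correct(ability, difficulty):
    """Logistic (Rasch-style) model: probability that a candidate of the
    given ability answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def pick_item(ability_estimate, item_difficulties):
    """Serve the item whose predicted success probability is closest to
    50% for this candidate -- the item that yields the most information."""
    return min(item_difficulties,
               key=lambda d: abs(p_correct(ability_estimate, d) - 0.5))

# A strong candidate (ability 2.0) is served the hard item (difficulty 1.9),
# not the easy ones they would almost certainly get right.
items = [-2.0, 0.0, 1.9, 3.5]
print(pick_item(2.0, items))   # 1.9
```

With ability 2.0, the easy items give a 88-98% chance of success and the very hard item only about 18%, so none of them would differentiate much; the 1.9 item, at roughly 52%, is the most informative.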
So when the system serves you problems, it tends toward questions you have roughly a 50/50 shot at, because those give it the most new information about you. Imagine you have 5 yes/no questions to figure out which number from 1-20 I'm thinking of. Your first question should be a 50/50 one like "Is it greater than 10?" (or "Is it odd?") rather than "Is it 19?", because the 50/50 question narrows the options much faster.
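The guessing game above is just binary search: each 50/50 question halves the remaining candidates, so 5 questions are enough for 20 numbers (2^5 = 32 >= 20), while "Is it 19?" eliminates only one number at a time. A quick sketch:

```python
def questions_needed_halving(n):
    """Yes/no questions needed to pin down one of n numbers when each
    question splits the remaining candidates roughly in half."""
    count = 0
    while n > 1:
        n = (n + 1) // 2   # worst-case candidates remaining after the answer
        count += 1
    return count

print(questions_needed_halving(20))  # 5 questions suffice
# Asking "Is it 19?"-style questions instead needs up to 19 questions,
# since each one rules out a single number.
```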
However, in the pre-test (experimental) phase, a question is served to everyone so that GMAC can get a good read on how difficult it really is. In this phase, any question that will be valuable at differentiating between, say, 680 and 730 test takers is likely to be missed by the vast majority of people scoring below 500. To differentiate well at the upper end, a question has to be quite hard and probably out of reach for most below-average test takers; in fact, it has to be hard enough that many people in the high 600s miss it, or it won't do a good job of separating the highest-ability candidates.
So...the stats you're seeing on the Veritas Prep tests are from the pre-test, experimental phase, when the questions were served to a random sample of users. That's why you get to see the true profile of the question (how hard it is, which trap answers are most tempting, etc.): the data is "pure." Once you start serving certain questions only to smaller slices of the bell curve, that data isn't as useful. (In fact, when we worked on our scoring algorithm with the Chief Psychometrician from GMAC, he was adamant that you should only use pre-test data to build the statistical profile of a question, because once a question is live in the adaptive algorithm, the smaller, more homogeneous pool of users skews the data quite a bit.)
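Here's a minimal sketch of how a statistical profile might be built from pre-test data. The function name, the response format, and the profile fields are all hypothetical; the point is just that a random sample lets you estimate difficulty (proportion correct) and spot the most tempting trap answers.

```python
from collections import Counter

def item_profile(responses, correct_answer):
    """Build a simple statistical profile for a pre-test item from a
    random sample of answer choices: proportion correct (a difficulty
    estimate) plus the most frequently chosen wrong answers (traps)."""
    n = len(responses)
    prop_correct = sum(r == correct_answer for r in responses) / n
    traps = Counter(r for r in responses if r != correct_answer)
    return {"p_correct": prop_correct, "top_traps": traps.most_common(2)}

# Hypothetical sample of 8 pre-test responses to an item whose answer is "C".
sample = ["B", "C", "B", "D", "C", "C", "A", "C"]
profile = item_profile(sample, "C")
print(profile["p_correct"])   # 0.5 -- half the sample answered correctly
```

A live adaptive pool would break this: if the item were served only to 700+ candidates, the proportion correct would overstate how easy the question is for the full population.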
So anyway - I think that's what you're thinking of, that once a question is live, in general / in theory it should be served to users who have close to a 50/50 shot at it. But the statistical profile of a question will often show that more than half of pre-test users missed it.