M83
Joined: 11 Jul 2008
Posts: 30
otomid
Joined: 10 Jan 2006
Posts: 1
highhopes
Joined: 26 Mar 2008
Schools: Duke 2012
GMAT 1: 740 Q49 V42
Posts: 643
anonymousegmat
Joined: 14 Jun 2007
Posts: 300
No. The questions from the OG/GMATPrep are retired and are not live.

From a basic psychometric perspective: if someone is given a question to which they had access to the correct answer in advance, that question serves no purpose in estimating their 'true score' ability level in whatever construct it purports to measure (well, I suppose it would if you knew the answer the first time around, but then you'd spend less time on that question and have more time for others, etc., so let's assume you didn't). Such a question just contributes error variance to the measurement, so it really serves no role. There is no point in lengthening a test with questions that don't contribute some form of reliable variance. Since the GMAT is such a sophisticated, empirically constructed mental test, I'm guessing its designers have the basics of psychometrics down.

To the poster who said he saw repeated questions: you probably saw questions with structural similarity, since items are designed to tap very specific content areas, and procedures and requirements are quite detailed (e.g., "must be able to manipulate a linear equation with one three-digit constant and one two-digit fractional constant, presented in a word/story format containing fewer than 75 words").

If you did see questions from the OG, rest assured they were not scored.
M83
Interesting perspective. Thanks.
jallenmorris
Joined: 30 Apr 2008
Location: Oklahoma City
Schools: Hard Knocks
Posts: 1,226
Interesting that these two sentences come from the same poster within the same post, because to me they don't make sense together.

We all know GMAC puts experimental questions on the GMAT to see how people do on them and whether they should be used later. That does lengthen the test, since those questions are not scored. I don't like that they do this. Does GMAC factor in the difficulty of an unscored question and how the mere presence of that question changes the entire test?

Out of 37 questions, if only 34 count, those 3 questions still change the entire test. If GMAC factors in how those 3 unscored questions alter the variance of the test, then those questions aren't really "unscored" in a literal sense. Just my thoughts. If a question is on the test, it has an impact. I think they should include only questions that will actually be scored.

anonymousegmat
There is no point in lengthening a test with questions that don't contribute to some form of reliable variance. .... If you did see questions from the OG, rest assured they were not scored.
M83
You're right about the use of test questions in FUTURE exams. Interesting...


PS - Your avatar is hilarious! :lol:
IanStewart
GMAT Tutor
Joined: 24 Jun 2008
Posts: 4,143
There would be no reason at all to include a question from the OG or from GMATPrep in a real GMAT as an unscored question. Unscored questions are included so that they can be used as live questions on future tests, so there would be no reason to include an unscored retired question. Otherwise I agree with anonymousegmat; including 'known' questions would not be helpful in pinpointing a test-taker's ability level.

Diagnostic questions are pre-organized into pools, and each test-taker will see one pool of questions (randomly interspersed among the 'live' questions on their test). I have not seen any information about how these pools are assembled, but I'd guess that the test-creators make a preliminary guess about question difficulty, to ensure a roughly even spread of difficulty within each pool.

The GMAT scoring algorithm, like that of other computer adaptive tests, requires that questions be calibrated against the test population. The only convenient way to do this accurately is to include unscored questions on each test.
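The calibration idea can be illustrated with a toy example. GMAC's actual calibration uses item response theory models fitted to large samples, and none of the details below are GMAC's; the classical "proportion correct" difficulty index here is just the simplest possible sketch of the idea, with invented response data:

```python
# Hypothetical sketch of classical item calibration: estimate a pretest
# item's difficulty as the proportion of test-takers who answered it
# correctly. (Real GMAT calibration uses IRT models and far larger
# samples; the data and names here are invented for illustration.)

def item_difficulty(responses):
    """responses: list of 0/1 flags (1 = correct) from the pretest sample."""
    return sum(responses) / len(responses)

# Simulated pretest responses for two unscored diagnostic items.
easy_item = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]  # most people answer correctly
hard_item = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # few people answer correctly

print(item_difficulty(easy_item))  # 0.9 -> easy
print(item_difficulty(hard_item))  # 0.2 -> hard
```

Once enough unscored responses accumulate, indices like these tell the test-makers where each item sits before it ever counts toward anyone's score.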
anonymousegmat
jallenmorris
Interesting that these two sentences come from the same poster within the same post, because to me they don't make sense together.

We all know GMAC puts experimental questions on the GMAT to see how people do on them and whether they should be used later. That does lengthen the test, since those questions are not scored. I don't like that they do this. Does GMAC factor in the difficulty of an unscored question and how the mere presence of that question changes the entire test?

Out of 37 questions, if only 34 count, those 3 questions still change the entire test. If GMAC factors in how those 3 unscored questions alter the variance of the test, then those questions aren't really "unscored" in a literal sense. Just my thoughts. If a question is on the test, it has an impact. I think they should include only questions that will actually be scored.


(Any PhDs or statistics or psych majors on this board will be able to contribute to, refine, elaborate on, or correct this, as not many folks read up on psychometrics.)

I can see where you are confused, but experimental questions are not part of the test. On test day, you are actually taking part in two different activities: (1) the test and (2) a pretest of future questions. The experimental questions aren't scored as part of your exam, so they aren't lengthening (1) but (2). The unscored items are used for an item analysis later. This seems counterintuitive, I know, but a test is traditionally considered to be composed of the stimuli to which your responses are scored or measured in some meaningful way.

Also, when someone in educational measurement speaks of length, they usually aren't referring to time but to the number of items on a given instrument. Yes, more questions would mean more time, but in the literature you would see a speeded test described more specifically in units of time. As for unscored items contributing to variance, you have to approach it from the perspective of what variance is and how it is calculated. Variance is the square of the standard deviation; if something is never scored, it doesn't contribute in any way to that calculation. I know where you are coming from: what if an experimental question is really tough, and it messes with your head, causes you to waste time, and makes you screw up later?
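To make the variance point concrete: the statistic is computed only from recorded scores, so items that never enter the score cannot move it. A minimal sketch with invented numbers:

```python
# Variance is the square of the standard deviation, computed only from
# recorded scores. Unscored (experimental) items never enter anyone's
# score, so they cannot contribute to this calculation.

def variance(scores):
    """Population variance of a list of scores."""
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

# Hypothetical section scores for five test-takers, based only on the
# 34 scored items; the 3 unscored items simply never appear here.
scored_only = [42, 38, 45, 40, 35]

print(variance(scored_only))  # 11.6
```

Whatever happened on the experimental items, this calculation never sees them; any indirect effect they have shows up only through the scored responses.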

So, in a sense, it can influence variance that way... but it isn't approached like that, because so can a million other things (the color of the carpet and walls, how many other people are in the room, the room's temperature, whether your girlfriend just dumped you). In test construction, certain assumptions have to be made, and certain things have to be simplified or demonstrated to be insignificant. This is where the concept of random error is convenient. If you chalk up the effect diagnostic pretest questions have as a random error component (the mean of random errors in a population is zero), it's like saying it doesn't really matter.

Unscored items chosen randomly from a pretest pool and introduced randomly as you progress through an exam is conceptually very different from lengthening a test with scored questions to which people may or may not already have had access. Besides the obvious waste of time, the latter opens a Pandora's box: it can artificially inflate the mean, it can lower prediction (remember, your score is used to predict how well you will do in b-school), and it can introduce measurement bias (who has the OG and who doesn't). It can even shift the desired factor composition of the exam: while long-term memory plays a role in many knowledge-based tests in a general sense, more variance than desired would be accounted for by the memory factor rather than by the spatial/mathematical/reasoning factors. When you change the factor composition of a test, it becomes a different test.

All this might seem circular, since error reduces reliability: why would they put up with any potential random error introduced by pretesting? The answer is that they *have* to do item analysis on new questions, and this really is probably the best method of doing it, so they accept the trade-off as something inescapable. Adding unscored questions as a way to mess with your head would not be an efficient method of discrimination, because it throws out valuable information (arguably the most valuable information: did you get the question right or wrong?). If GMAC wanted more accurate estimates of examinees' true scores, they would achieve this with scored questions.

Ian - you make an even better point that I overlooked: there is no reason to even include a known question as an unscored diagnostic item, because they don't have to test it - it's retired! Parsimony is a wonderful thing. I was trying to be polite and not say to the OP, "hey, dude, you are either lying or crazy, because you didn't see an OG question on the test."
But I doubt that the diagnostic questions are first sorted on any a priori grounds. Empirical test construction on this scale is quite scientific, and in this case randomization is a key component of the process that can't be overlooked. It would be questionable to decide a priori that X is a 700-level question and to confirm that hypothesis by testing it on a group that displays a restricted range of scores (correlation analysis on restricted ranges presents other problems). It would be more acceptable to collect data from random samples and let the aggregate data say it is a 700-level question.

Again, this can seem illogical at first, because it means a question like 2x = 4 could possibly make its way into the difficult item pool... but hey, welcome to the world of psychological measurement :) not everything makes sense here! (Why would answering "false" to "I sometimes tease animals" show up on a scale that measures hysteria? This is true, by the way.)
jallenmorris
Anon,

Thanks for the analysis. That tells me much more about structured testing than I think I ever really cared to know. I'm coming at this from a very basic, and rather selfish, view: I don't want to have to answer a question that will affect my ability/confidence/score on the rest of the exam, the part that is actually scored. I realize that including unscored questions is probably the best way for GMAC to get data on those questions for future exams, but I don't have to like it. It can all make sense given how standardized tests are created, scored, and analyzed, but I still don't have to like it :-D.

Do I still have to take the test? Um... yeah, if I want any shot at getting into the b-schools I choose. So for me, it's also a trade-off. I can complain, whine, and generally throw a tantrum if I choose, but then I have to pick myself up, dust myself off, get back in the chair, and finish the GMAT. The things I don't like have already been decided, and my input wasn't necessary (or asked for). I accept that no test like the GMAT is going to be perfect, and I trust that its makers do the best they can; should any of them find a better way, I believe changes would be made.

Again, thanks for the analysis and great explanation.
IanStewart
anonymousegmat

But I doubt that the diagnostic questions are first sorted on any a priori grounds. Empirical test construction on this scale is quite scientific, and in this case randomization is a key component of the process that can't be overlooked. It would be questionable to decide a priori that X is a 700-level question and to confirm that hypothesis by testing it on a group that displays a restricted range of scores (correlation analysis on restricted ranges presents other problems). It would be more acceptable to collect data from random samples and let the aggregate data say it is a 700-level question.

That's not quite what I was suggesting. The population viewing a particular diagnostic question is still a random sample; the GMAT definitely does not use the ability estimate of the test-taker to determine which diagnostic questions to display. I do know (from one of the GMAC research reports) that diagnostic questions are organized, pre-test, into pools of between 7 and 14 questions each. So they may have selected 10 diagnostics for Pool A, 10 for Pool B, and so on. When you sit down for your test, one pool of diagnostic questions is randomly selected for your test (Pool B, say), and you'll see those 10 questions from Pool B randomly inserted into your test. Another test-taker who is also randomly assigned Pool B should see those same 10 questions.
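The pool mechanism described above can be sketched roughly like this. The pool contents, counts, and random-insertion scheme are all invented for illustration; GMAC's actual assembly process is not public:

```python
import random

# Hypothetical sketch: one diagnostic pool is chosen at random per
# test-taker, and its questions are inserted at random positions among
# the live (scored) questions. All names and counts are invented.

pools = {
    "A": [f"diagA{i}" for i in range(10)],
    "B": [f"diagB{i}" for i in range(10)],
}
live = [f"live{i}" for i in range(27)]  # 27 scored questions

def assemble_test(rng):
    pool = pools[rng.choice(sorted(pools))]    # random pool per test-taker
    test = list(live)
    for q in pool:                             # intersperse diagnostics randomly
        test.insert(rng.randrange(len(test) + 1), q)
    return test

test_form = assemble_test(random.Random(0))
print(len(test_form))  # 37 questions total, only 27 of which are scored
```

Two test-takers handed the same pool see the same 10 diagnostics, just woven into different live questions at different positions.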

I don't know why the questions are pre-organized into pools, but I'd guess that this pre-organization is not completely random. When the test selects live questions to display to a test-taker, it uses three criteria: the test-taker's ability estimate (the test tries to select a question of appropriate difficulty), certain content-balance specifications (so you won't see 35 geometry questions), and question exposure (the test tries to ensure that questions aren't 'overexposed', i.e. seen by too many people, for security reasons). By organizing diagnostics into pools, they can definitely control exposure. They could also, if desired, control content balance within each pool; they could ensure that no diagnostic pool contains 10 geometry questions, for example. I don't know whether they also try to ensure a rough balance of difficulty within a pool, but there are ways this could be done, using the results of the question pre-screening that is conducted before questions are used as diagnostics. In any case, here I'd be speculating.
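The three selection criteria for live questions might be sketched like this. The fields, thresholds, and candidate data are all invented; this is a toy illustration of the general shape of constrained adaptive selection, not GMAC's actual algorithm:

```python
# Hypothetical sketch of the three criteria described above: among
# candidates that satisfy content-balance and exposure constraints,
# pick the question whose difficulty is closest to the current ability
# estimate. All values and field names are invented.

questions = [
    {"id": 1, "difficulty": 0.8, "topic": "geometry", "exposure": 0.10},
    {"id": 2, "difficulty": 0.6, "topic": "algebra",  "exposure": 0.40},
    {"id": 3, "difficulty": 0.7, "topic": "algebra",  "exposure": 0.05},
]

def next_question(ability, seen_topics, max_per_topic=2, max_exposure=0.30):
    eligible = [
        q for q in questions
        if seen_topics.count(q["topic"]) < max_per_topic  # content balance
        and q["exposure"] < max_exposure                  # exposure control
    ]
    # closest difficulty match to the current ability estimate
    return min(eligible, key=lambda q: abs(q["difficulty"] - ability))

pick = next_question(ability=0.72, seen_topics=["geometry"])
print(pick["id"])  # 3: question 2 is over-exposed, 3 is the closest match
```

Grouping diagnostics into pre-built pools effectively bakes the exposure and content-balance controls in ahead of time, instead of applying them question by question as the live selection does.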

Considering how much research has been conducted by the designers of the GMAT and other computer adaptive tests, I'm sure that the influence of diagnostic questions on test scores has been extensively studied. If diagnostic questions had a serious enough influence to make test scores unreliable as ability estimates, the test would be of little value. So I don't think anyone should be too concerned about seeing diagnostics- everyone else is in the same position, and I'd bet that it's been established that they won't affect your score in any statistically significant way.
Kotu
Joined: 13 Jul 2008
Posts: 39
anonymousegmat
No. The questions from the OG/GMATPrep are retired and are not live.

To the poster who said he saw repeated questions: you probably saw questions with structural similarity, since items are designed to tap very specific content areas, and procedures and requirements are quite detailed (e.g., "must be able to manipulate a linear equation with one three-digit constant and one two-digit fractional constant, presented in a word/story format containing fewer than 75 words").

If you did see questions from the OG, rest assured they were not scored.

One of my verbal questions was repeated and it was the exact same one.

Probably wasn't scored though.