pbhat14
Joined: 24 May 2014
Posts: 22
farhanchaudhary
Current Student
Joined: 14 Jul 2013
Posts: 272
VeritasPrepBrian
Veritas Prep Representative
Joined: 26 Jul 2010
Posts: 416
farhanchaudhary
Current Student
Joined: 14 Jul 2013
Posts: 272
VeritasPrepBrian wrote:
Good questions, guys - and thanks for studying with our tests! A few thoughts on what you're seeing with your scores:

*Our tests are scored using Item Response Theory, the same data-driven philosophy behind the official GMAT. And on any IRT test, "response patterns" only tell part of the story, so you may find that analysis to be wanting a little. The reason is that there are three parameters in total, but really only two that matter for you: the "B-value," which is roughly an indication of difficulty level, and the "A-value," which is a measure of reliability. When you look at your response patterns, the metric you're mostly trying to infer is difficulty (you even mentioned the % of people who answer incorrectly, which is a big factor in B-value but not the whole story). What you're not able to assess is A-value, and that matters more than the untrained eye realizes.
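For the technically curious, those two parameters correspond to the standard two-parameter logistic (2PL) model from IRT. Here is a minimal Python sketch of that item response function - purely illustrative, with made-up a/b values on an arbitrary ability scale, and not the actual scoring code of any test:

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: probability that a test-taker of
    # ability theta answers correctly, given discrimination a (the
    # "A-value") and difficulty b (the "B-value").
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two items of identical difficulty but different A-values: the
# high-A item separates abilities near b much more sharply.
print(round(p_correct(0.5, a=2.0, b=0.0), 3))  # 0.731 (high A-value)
print(round(p_correct(0.5, a=0.5, b=0.0), 3))  # 0.562 (low A-value)
```

For a test-taker slightly above the item's difficulty, the high-A item gives a much clearer signal (73% vs. 56% chance of a correct answer), which is why two items at the "same level" can carry very different evidential weight.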

For example, think about taking investment advice. If you have three people helping you - Warren Buffett, your college roommate who works for Morgan Stanley, and your dentist - would you buy a stock if all three told you to? Probably. But what if only 2 of the 3 did? You'd want to know which 2, right? That's A-value, really - Warren Buffett would have the highest A-value by a large margin, and you'd assign lesser values to your roommate and dentist (who each might know a few things, too). You'd probably believe Buffett's "don't buy" more than you'd believe the other two telling you to buy, just as on the GMAT some questions are stronger predictors of ability than others. So some of what's hard to assess when you look at your response pattern is explained by that A-value. If you answered two 650-level questions, one correct and one incorrect, the system wouldn't just split the difference and call you a 650. If the one you got correct has a high A-value and the one you missed has a low A-value, the system has more evidence that you're above 650 than below it, so depending on the weight of those A-values it might give you a 660 or 670.
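That 650-level example can be made concrete with a maximum-likelihood ability estimate under the 2PL model - again just an illustration with hypothetical A-values on an arbitrary ability scale, not any test's real scoring engine:

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function (a = discrimination, b = difficulty).
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two items at the same difficulty (b = 0, think "650-level"):
# the high-A item was answered correctly, the low-A item was missed.
items = [
    {"a": 2.0, "b": 0.0, "correct": True},
    {"a": 0.5, "b": 0.0, "correct": False},
]

def log_likelihood(theta):
    ll = 0.0
    for item in items:
        p = p_correct(theta, item["a"], item["b"])
        ll += math.log(p if item["correct"] else 1.0 - p)
    return ll

# Grid-search the maximum-likelihood ability estimate.
thetas = [i / 100.0 for i in range(-300, 301)]
theta_hat = max(thetas, key=log_likelihood)
print(theta_hat > 0)  # True: the estimate lands above the items' difficulty
```

With equal A-values the estimate would sit exactly at the shared difficulty (splitting the difference); the asymmetric A-values pull it above, which is the "660 or 670" effect described above.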

For more about those metrics, you can check out this article: https://poetsandquants.com/2013/07/21/the-mystery-of-gmat-scoring/

*In terms of final score, the official GMAT comes with a margin of error of around +/-20 points, and ours is probably close, but maybe more like +/-30. What's a little different about our tests vs. others is that they're so data-driven that we don't put in any artificial tweaking of the scores. So that margin of error may appear on the high end (you scored 720 but really it should have been 700) or on the low end (it should have been 740). Other tests seem to knowingly "round down" (which isn't a bad thing), which is why you'll often hear that _____ tests always score harder than usual. They know there's a margin for error, and they'd rather underestimate your score than accidentally tell you that you're at your goal when you're not. We're confident enough in our item statistics and overall algorithm that we let it run itself without those kinds of hedges; the only caveat is that we're just as likely to overshoot your ability by 20-30 points as we are to underestimate it by a similar amount, as opposed to always erring on the "too hard" side.


Hi Brian,

Thanks for clearing that up.

I would highly recommend the Veritas Prep tests to anyone looking for tests that simulate actual test conditions.
kinjiGC
Joined: 03 Feb 2013
Posts: 789
Location: India
Concentration: Operations, Strategy
GMAT 1: 760 Q49 V44
GPA: 3.88
WE: Engineering (Computer Software)

Hi Brian,

How are you doing?

I agree with Farhan. The Veritas Prep questions were quite good. In fact, I took Veritas Prep Mock 2 last Sunday and got a 720 (50/39). Normally I don't get more than 4-5 questions wrong in Quant, but this time I got 9 wrong, and I must say I was really exhausted after the Quant section. Those were quite good questions.