GMAT Scoring Algorithm - My observations : General GMAT Questions and Strategies
# GMAT Scoring Algorithm - My observations

### CEO | Joined: 17 Jul 2004 | 22 Oct 2004, 09:36
Many observers have criticized the difficulty of the questions that appear in the OG (the Official Guide). The problem is not so much that the OG items are too easy as that the OG, like a traditional pencil-and-paper test, features items that range across the entire p scale (the proportion of test takers answering an item correctly). The high-probability items (the easy questions) are generally of little interest to students seeking scaled scores at the far right tail. Suppose we group items into seven difficulty strata, with the middle stratum called stratum x. The majority of items on a traditional pencil-and-paper test would come from the middle strata near x (i.e., x-1, x, x+1). On a traditional test you would encounter very few questions at the top of the range (x+3), since such questions have undesirable characteristics for most test takers: they provide little differentiation (discrimination) among low- and medium-ability test takers.

Now suppose we create an adaptive test from this body of traditional questions. The first item administered to each student would come from the middle stratum (stratum x). If it is answered correctly, we move to stratum x+1 for the second question; if it is answered incorrectly, we move to stratum x-1. If the first two questions are both answered correctly, we move to stratum x+2; if the third question is then answered incorrectly, we move back down to stratum x+1. At the end of the test we combine the information about item difficulty and the number of questions answered correctly and incorrectly to estimate the net number of items this student would have answered correctly on a traditional paper-based test (the raw score). That estimated raw score is then used to read the associated scaled score from the paper test's raw-to-scaled-score conversion table.
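
The stratum walk described above can be sketched as a small simulation. The per-stratum probabilities of a correct answer below are hypothetical stand-ins chosen for illustration, not real GMAT item parameters:

```python
import random

# Seven difficulty strata numbered -3..+3; stratum x is 0.
# P_CORRECT gives a hypothetical chance that an average student
# answers an item from that stratum correctly.
P_CORRECT = {-3: 0.95, -2: 0.85, -1: 0.75, 0: 0.60, 1: 0.45, 2: 0.30, 3: 0.15}

def run_adaptive_test(n_items=36, seed=None):
    """Simulate the up-on-correct / down-on-incorrect stratum walk."""
    rng = random.Random(seed)
    stratum = 0              # the first item comes from the middle stratum x
    history = []             # (stratum, answered_correctly) per item
    for _ in range(n_items):
        correct = rng.random() < P_CORRECT[stratum]
        history.append((stratum, correct))
        # move up one stratum after a correct answer, down after an incorrect one
        stratum = min(stratum + 1, 3) if correct else max(stratum - 1, -3)
    return history
```

A real scoring engine would then weight each (stratum, correct) pair to estimate the raw score a traditional paper test would have produced.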

Hjort

Last edited by Hjort on 15 Aug 2005, 13:47, edited 1 time in total.
### CEO | 23 Oct 2004, 09:53
Some readers might wonder what to do with test takers who do not finish a section. Suppose there are 36 questions in each section that contribute to the final score (we have eliminated any validation questions), and suppose a test taker performs perfectly on the first 18 questions but then runs out of time. Should this student get a perfect score for having missed none of the questions she attempted? No; this would benefit her and penalize students who rationed their time to answer all 36 questions. One way to correct the problem is to adjust each student's raw score by the proportion of questions she answered. Thus, we would cut this student's raw score in half, since she only finished half of the section. Of course, cutting the raw score in half would not necessarily cut the scaled score in half.
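
The proportional adjustment is simple arithmetic; here is a minimal sketch (the 36-question section size comes from the example above, and the function name is mine):

```python
def adjusted_raw_score(raw_score, answered, total=36):
    """Scale a raw score by the fraction of the section completed."""
    return raw_score * (answered / total)

# A perfect 18 out of the first 18, with 18 questions unanswered,
# is treated as half the section completed:
print(adjusted_raw_score(18, answered=18, total=36))  # 9.0
```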

Hjort
### CEO | 29 Oct 2004, 19:57
Did anyone find this overview helpful?
### CEO | 02 Nov 2004, 13:21
Does anyone else want to share their insights into test design?
### GMAT Club Legend | Joined: 15 Dec 2003 | 02 Nov 2004, 13:23
Hey Hjort, I did find your review helpful. However, I do not have any insights into it yet. Maybe soon enough...
_________________

Best Regards,

Paul

### CEO | 04 Nov 2004, 09:20
Another important issue for adaptive and pencil-and-paper examinations is the apparent variability of scores. Suppose two students each receive a 500 the first time they take the exam. Student A then receives a 520 on the second exam and sees this score as proof that her study techniques are working. Student B takes the exam again, scores 480, and thus believes his study techniques are actually causing his skills to decrease. Unfortunately, all of these scores are consistent with the two students having a true score of about 500. Since the standard error of measurement (SEM) is nearly 30 points, we would expect about two thirds of students to receive scores within 30 points of their true score on any given administration of the test. Thus, it would not be a great surprise for a student to take the GMAT on Saturday and receive a 570 and then receive a 600 the next Monday. Indeed, given a large number of test takers, we should not be surprised to see several students with observed scores 50 or more points above their true scores.

Many of the impressive claims made by "test preparation" companies do not fare well when one considers the impact of the inherent variability of observed scores.
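
The "two thirds" figure follows from assuming observed scores are roughly normal around the true score with an SEM of about 30 points. A quick check with the Python standard library:

```python
from statistics import NormalDist

SEM = 30  # standard error of measurement, in scaled-score points
errors = NormalDist(mu=0, sigma=SEM)

# Probability that an observed score lands within +/-30 of the true score:
p_within_one_sem = errors.cdf(SEM) - errors.cdf(-SEM)
print(round(p_within_one_sem, 3))  # 0.683, i.e. roughly two thirds
```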

Hjort
### CEO | 08 Nov 2004, 18:03
Hjort's Empirical Score Estimator

(V scaled score + Q scaled score) × 7.35 + 70.3, rounded to the nearest 10

This estimator is best suited to roughly the 620-720 range, which is probably of greatest interest to test takers here.

This predictor is purely empirical; it should not be construed as the product of theoretical derivation.
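
Read as a function (interpreting "(ROUND, -1)" as rounding to the nearest 10, as in a spreadsheet's ROUND(x, -1)), the estimator might look like:

```python
def estimate_total(v_scaled, q_scaled):
    """Hjort's empirical estimator: (V + Q) * 7.35 + 70.3, to the nearest 10."""
    return round(((v_scaled + q_scaled) * 7.35 + 70.3) / 10) * 10

# e.g. V 40 / Q 47:
print(estimate_total(40, 47))  # 710
```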

Hjort
### CEO | 09 Nov 2004, 11:33
Has anyone else tried this formula?
### CEO | 11 Nov 2004, 09:43
There is a common argument that the GMAT is irrelevant to MBA admissions since schools can use so many other admissions factors in making their choices.

It is crucial to remember, however, that the GMAT has far more predictive power than most other admissions criteria. For instance, the median correlation of V, Q, and AWA scores with first-year MBA grades was 0.42, while the median correlation of undergraduate grades with first-year MBA grades was only 0.25. When the GMAT and undergraduate grades are combined, the correlation increases to 0.47. Thus, assertions by schools that they weigh all admissions factors equally should be viewed skeptically (of course, the correlation varies from school to school).

(The median correlation for undergraduate GPA and first-year MBA grades is 0.25, as corrected above.)

Hjort
### CEO | 17 Nov 2004, 12:48
A similar study of Executive MBA programs revealed that the median correlation between first-year MBA GPA and GMAT score was 0.49. The correlation between undergraduate grades and first-year MBA GPA was only 0.22.

The GMAT V score alone had a correlation of 0.38, while the Q score alone had a correlation of 0.44. Even the often-marginalized AWA score had a correlation similar to that of the undergraduate grades (0.22).

Perhaps the most interesting revelation of this study is the limited predictive value of some other variables. For instance, the number of years of work experience had virtually no association with academic success (correlation of -0.02), while entering base salary was an extremely weak predictor (correlation of 0.09).

While it is important to stress that these data are for EMBA programs, it is intriguing that the other variables have even less predictive power than the AWA alone!

Hjort
### Intern | Joined: 02 Sep 2004 | 25 Nov 2004, 00:20
By chance I happened to check these posts only today. Interesting findings. I would like to know how these studies were designed. Are you referring to a simple regression of grades on, say, GMAT score? What were the partial regression coefficients? Were they significant?
What was the goodness of fit when you used all the variables: GMAT score, AWA, grades, work experience, and salary?

I also had a feeling that work experience might in fact have a negative correlation with grades....
_________________

To be or not to be....

### CEO | 30 Nov 2004, 16:10
All good questions regarding the admissions validity studies. I have only been able to read very brief summaries of these studies, so I cannot comment on them in any detail. They appear to be based on simple regressions. Further, the overall goodness of fit for a multiple regression model is probably not great (but still fairly good compared with other variables used to predict academic success).

Hjort
### CEO | 15 Jan 2005, 18:55
Some readers (not many) might find the following information about GMAT scores interesting:

Circa 1980, a score of 700 was the 99th percentile, and the average score was only 462.

Historically, the total score was calculated from the sum of the raw scores of each section. The scaled score for V or Q was not used directly in the calculation of the total scaled score.
### CEO | 17 Jan 2005, 23:40
I have been looking through the Hjort Test Library and found some interesting tidbits from the early 1980s. In this set of tests, the order of difficulty was extremely strict: most sections started with easy questions, had many medium questions in the middle, and placed some hard questions at the end. In a few sections there were very easy questions at the beginning and a few very hard ones at the end. Not surprisingly, the test with the most very difficult Quant questions had the highest Quant scaled score.
### CEO | 25 Jan 2005, 11:41
The subscores of the GMAT have exhibited some interesting trends. For instance, the mean for the Q section from 6/1994 through 3/1997 was 32 with an SD of 9, while the mean from 1/2000 through the end of 2002 was 35 with an SD of 10. Thus, a score of 41, which was once one SD above the mean, is now only about half an SD above it. Comparing the same two periods, the verbal subscore mean fell from 28 to 27, with the SD remaining at 9. Meanwhile, the mean for the AWA increased from 3.8 to 4.0 and the SD increased from 0.9 to 1.0.
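
The shift can be expressed as z-scores against each period's norms (a plain standardization, using only the means and SDs quoted above):

```python
def z_score(score, mean, sd):
    """How many SDs a subscore sits above its cohort mean."""
    return (score - mean) / sd

# Q subscore of 41 against the two periods:
print(z_score(41, mean=32, sd=9))    # 1.0 under the 6/1994-3/1997 norms
print(z_score(41, mean=35, sd=10))   # 0.6 under the 2000-2002 norms
```
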
### CEO | 04 Feb 2005, 16:16
Some interesting insights into changes in test scores: it appears that in both nominal scores and percentiles, the lower edge of the GMAT distribution has risen greatly over the past twenty years. In the late 1970s and early 1980s, some of the most selective schools still enrolled at least 10% of their students with scores near average. Not surprisingly, the difference between the scores of the 90th- and 10th-percentile matriculants has decreased considerably as well.

In the early 1980s Duke had a 10th percentile of 500, whereas in the early 2000s it was about 650. Thus, the lower edge at Duke went from about average to considerably above average in 20 years. Likewise, the center of the distribution increased about 150 points, from 550 to 700.

Columbia had a 90th-to-10th-percentile spread of 180 points in the early 1980s. Twenty years later the spread was only some 90 points.

Yale had a spread of 180 that has since fallen to about 100.
### Manager | Joined: 18 Nov 2004 | 10 Feb 2005, 08:35
I've been following along. It's amazing to me how much scores and percentiles, especially in Quant, have increased.
### CEO | 10 Feb 2005, 12:27
I extend a hearty thank you to jackson24nj for letting me know that people are still reading this interminable thread.

Hjort

Last edited by Hjort on 23 Feb 2005, 09:06, edited 1 time in total.
### Intern | Joined: 30 Jan 2005 | Location: INDIA | 22 Feb 2005, 17:42
Hi Hjort,
I read your thread and it's good. You will be happy to know it's a sticky now. Keep posting. I like the information about the test. I would like to ask: which is the most appropriate day to take the test to maximise one's score?
Regards,
