Based on a discussion going on in this thread:
https://gmatclub.com/forum/evaluate-my-profile-pls-77937.html, I thought it'd be interesting for everyone to have an open discussion on the issue of rankings, and to share my own experience with it.
PART I: WHAT DO THOSE RANKINGS MEASURE?
The conventional wisdom is that you shouldn't trust rankings blindly, but that somehow there has to be some truth to them. As anyone with research experience knows, the methodology is extremely important, and my experience is that most people looking at rankings do not make the effort to figure out what the "rankers" are actually trying to measure.
Case in point: the Financial Times doctoral program ranking. I've seen it cited so often that I cringe, since this ranking is mostly based on the number of graduates from a PhD program. Big school = good school? Don't think so. Manchester Business School is #1 in the 2008 FT PhD ranking, and Rotterdam School of Business is #4. Rochester is #57, Cornell #63, Yale #65. Very much sounds like crap.
So how should you rank PhD programs? There are plenty of research papers and surveys out there, which I guess you can categorize into two areas based on the two most important tasks someone graduating from a PhD program will have to accomplish (the large majority of PhD graduates stay in academia): teaching and research. There are teaching-oriented reports, mostly reputational surveys, such as the Public Accounting Report (mentioned in the above thread). These identify the schools whose PhD programs "produce" the best teachers. Whether that is due to the actual quality of the PhD program, or to self-selection, is unclear from those studies. Then there are countless research-oriented papers, looking at some combination of (1) research productivity/impact of past PhD graduates from that school, (2) research productivity/impact of past and/or current faculty at that school, or (3) quality of initial placement of past PhD graduates from that school, where quality of placement is itself measured by some other factor, usually (2). These rankings provide evidence as to which PhD programs "produce" the best researchers. Obviously, these two types of reports measure two very different dimensions of what it means to be a university professor, and as such they lead to very different top schools.
I would argue that self-selection plays a very important role in the "teaching quality" results of the PAR and others. Suppose you want to become a great teacher. You can be a great teacher at Wharton or at Idaho, so it doesn't really matter where you go to school; you'll apply to both Georgia (a top school for teaching, not so strong in research) and Wharton (a top school for both). Now suppose someone else wants to become a great researcher. Chances are their training and exposure to top research will be better at a top research university like Wharton than at Idaho, so they'll apply to Wharton but not Georgia. So the competition at Wharton is tougher than at Georgia (I am sure this is a fact as well -- look at, say, the average GMAT score of admitted applicants at both schools), and the people who enroll at (and graduate from) Georgia will be more interested in teaching. So to me it's no surprise that the surveys evaluating the top PhD programs for producing great teachers are topped by state schools with significant public accounting programs, where learning accounting standards and teaching them properly are highly valued.
I guess my point is that if you're mostly interested in research you should mostly consider the research rankings, and it may be counterproductive to attach too much importance to the teaching rankings.
PART II: IMPERFECT MEASURES = AVERAGE THEM OUT?
It's a fact of econometrics that if you have two observed measures (say, two rankings) that are imperfectly correlated with an unobservable characteristic (PhD program quality), you may be able to tease out the noise by averaging them. But this only makes sense if you think the observed measures are shooting for the same unobservable. Taking the example from Part I: if you average a teaching ranking and a research ranking, you may end up with two completely different schools (one that values teaching, the other research) getting the same score, and you may have a school that's a great teaching school and a terrible research school ranked ahead of another school that's good but not great in both respects.
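To illustrate the noise-averaging logic with a minimal simulation (made-up numbers, purely illustrative -- not real ranking data): averaging two noisy measures of the same latent quality gets you closer to that quality, while averaging measures of two different latents does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of schools

quality = rng.normal(size=n)   # unobservable "research quality"
teaching = rng.normal(size=n)  # a different unobservable

# Two noisy rankings of the SAME latent: averaging cancels noise.
r1 = quality + rng.normal(scale=1.0, size=n)
r2 = quality + rng.normal(scale=1.0, size=n)
avg_same = (r1 + r2) / 2

# One research ranking and one teaching ranking: averaging mixes latents.
r3 = quality + rng.normal(scale=1.0, size=n)
r4 = teaching + rng.normal(scale=1.0, size=n)
avg_mixed = (r3 + r4) / 2

print("corr(single ranking, quality) =", np.corrcoef(r1, quality)[0, 1])
print("corr(avg of same,    quality) =", np.corrcoef(avg_same, quality)[0, 1])
print("corr(avg of mixed,   quality) =", np.corrcoef(avg_mixed, quality)[0, 1])
```

The first average correlates more strongly with quality than either ranking alone; the mixed average correlates less.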
PART III: MY ACCOUNTING PHD RANKINGS
When I applied to Accounting PhD programs almost 5 years ago, I decided I was interested in both teaching and research, so I wanted to get into the best possible research school to keep my options open (e.g. I could always go to a teaching school after a PhD from a research school; in fact, that's what I'm doing right now). Everyone seemed to consider that there was a strong (but still imperfect) correlation between the general reputation of a business school and the quality of its PhD program, so I used the 3 most widely used general b-school rankings (Financial Times, Business Week, USNWR) to create a "general" ranking, along with 3 accounting research papers measuring slightly different dimensions (Robinson and Adler, and 2 Brown and Robinson papers; I didn't note the actual references but could look them up if anyone wants) to make a "research" ranking. I then aggregated the 2 rankings into a "school-wide" score that went from 0 (worst) to 100 (best). (This poses the very problem I identified in Part II, but it may still be a start.) Stanford got 100, Wharton 95, Yale 78, Florida 46, U. So. Carolina 25.
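For those curious about the mechanics, something along these lines captures the idea of aggregating several rank lists into a 0-100 scale (normalize each rank list, average, rescale); the exact weights I used may have differed, and the helper names here are made up for illustration.

```python
def rank_to_score(rank, n_schools):
    """Map a rank of 1..n to a 0..1 score, with rank 1 -> 1.0."""
    return (n_schools - rank) / (n_schools - 1)

def aggregate(rankings, n_schools):
    """rankings: list of dicts {school: rank}. Returns {school: 0-100 score}."""
    schools = set().union(*rankings)
    # Average the normalized scores over the rankings each school appears in.
    raw = {
        s: sum(rank_to_score(r[s], n_schools) for r in rankings if s in r)
           / sum(1 for r in rankings if s in r)
        for s in schools
    }
    # Rescale so the best school gets 100 and the worst gets 0.
    lo, hi = min(raw.values()), max(raw.values())
    return {s: 100 * (v - lo) / (hi - lo) for s, v in raw.items()}

# Toy usage with made-up ranks:
fake = [{"Stanford": 1, "Wharton": 2, "Yale": 3},
        {"Stanford": 2, "Wharton": 1, "Yale": 3}]
print(aggregate(fake, n_schools=3))
```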
Because I was interested in the best PhD programs, not just the best schools, here is what I did with those school-wide scores. I looked at all accounting faculty members at the top 35 schools according to my ranking and noted where each of them got their PhD. That allowed me to rank PhD programs: if a Rochester graduate is now at Stanford, that's 100 points for Rochester. I summed those points over everyone in the database, and then again counting only those who received their PhD since 1990 (a sketch of this scoring procedure follows the two lists below). Here are the results:
Best PhD programs (overall)
1. Stanford
2. Michigan
3. Chicago
4. UC-Berkeley
5. UT-Austin
6. Cornell
7. Rochester
8. Ohio St.
9. Harvard
10. CMU
11. NYU
12. Illinois (UIUC)
13. Kellogg (Northwestern)
14. Wharton (Penn)
15. Iowa
16. Minnesota
17. Michigan St.
18. Penn St.
19. UNC
20. U.Washington
Best PhD programs (since 1990)
1. Michigan
2. Stanford
3. Chicago
4. Harvard
5. Wharton
6. Cornell
7. UC-Berkeley
8. Rochester
9. NYU
10. Kellogg (Northwestern)
11. Iowa
12. UT-Austin
13. Penn St.
14. Minnesota
15. CMU
16. UNC
17. MIT
18. Columbia
19. U.Washington
20. Illinois (UIUC)
18 schools make both lists.
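For anyone who wants to replicate the scoring, here's a rough Python sketch of the procedure described above. `school_score` and `faculty` are tiny made-up stand-ins for the real database, which is much larger.

```python
# Each faculty member at a top school credits his/her PhD alma mater
# with that school's 0-100 score; sum the credits per alma mater.

school_score = {"Stanford": 100, "Wharton": 95, "Yale": 78}  # scores from Part III

# (current employer, PhD alma mater, PhD year) -- illustrative entries only
faculty = [
    ("Stanford", "Rochester", 1995),  # a Rochester grad now at Stanford
    ("Wharton",  "Michigan",  2001),
    ("Yale",     "Rochester", 1987),
]

def phd_program_points(faculty, school_score, since=None):
    """Sum employer-school scores by alma mater, optionally since a cutoff year."""
    points = {}
    for employer, alma_mater, year in faculty:
        if since is not None and year < since:
            continue
        points[alma_mater] = points.get(alma_mater, 0) + school_score[employer]
    return sorted(points.items(), key=lambda kv: -kv[1])

print(phd_program_points(faculty, school_score))              # overall list
print(phd_program_points(faculty, school_score, since=1990))  # since-1990 list
```

Running it twice, with and without the 1990 cutoff, is exactly how the two lists above differ.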
I guess I could provide the full database if anyone wanted it. Feel free to comment; I never meant this to be scientific in any way, or even to be published on this forum. I just thought it might be a helpful reference.