MBA Program Rankings

Since 1967, publications have ranked MBA programs using various methods. The Gourman Report (1967–1997) did not disclose its criteria or ranking methods, and its reports were criticized for presenting statistically impossible data, such as no ties among schools, narrow gaps in scores with no variation in gap widths, and ranks for nonexistent departments. In 1977, The Carter Report ranked MBA programs by the number of academic articles their faculty published, the Ladd & Lipset Survey ranked business schools on surveys of faculty, and MBA Magazine ranked schools by votes cast by business school deans.

Today, publications such as U.S. News & World Report, Business Week, Financial Times, The Economist, The Wall Street Journal, and Forbes publish rankings of MBA programs. A school's rank can vary across publications because each publication uses a different methodology:

  • U.S. News & World Report incorporates responses from deans, program directors, and senior faculty about the academic quality of their programs, as well as the opinions of hiring professionals. The ranking is calculated through a weighted formula of quality assessment (40%), placement success (35%), and student selectivity (25%); this kind of weighted composite is sketched after this list.
  • Business Week's rankings are based on student surveys, a survey of corporate recruiters, and an intellectual capital rating.
  • Financial Times uses survey responses from alumni who graduated three years prior to the ranking and information from business schools. Salary and employment statistics are weighted heavily.
  • The Economist Intelligence Unit, published in The Economist, surveys both business schools (80%) and students and recent graduates (20%). Ranking criteria include GMAT scores, employment and salary statistics, class options, and student body demographics.
  • The Wall Street Journal, which stopped ranking full-time MBA programs in 2007, based its rankings on skill and behavioral development that may predict career success, such as social skills, teamwork orientation, ethics, and analytic and problem-solving abilities.
  • Forbes considers only the return on investment five years after graduation. MBA alumni are asked about their salary, the tuition fees of their MBA program, other direct costs, and the opportunity costs involved. From these data, a final "5-year gain" is calculated, and this determines a school's position in the ranking (see the sketch after this list).
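
The exact inputs behind these formulas are proprietary, but the arithmetic described above is simple to illustrate. The sketch below applies the published U.S. News weights to hypothetical component scores and computes a Forbes-style "5-year gain"; every field name and figure is an assumption for illustration, not data from either publication.

```python
# Illustrative sketch only: the publications' actual inputs and data are
# proprietary. The weights follow the U.S. News breakdown quoted above;
# all field names and sample figures are hypothetical.

def usnews_style_score(quality: float, placement: float, selectivity: float) -> float:
    """Weighted composite: quality 40%, placement 35%, selectivity 25%.
    Inputs are assumed pre-normalized to a common 0-100 scale."""
    return 0.40 * quality + 0.35 * placement + 0.25 * selectivity

def forbes_style_gain(salaries: list[float], pre_mba_salary: float,
                      tuition: float, direct_costs: float) -> float:
    """Forbes-style "5-year gain": five years of reported post-MBA salary
    minus tuition, other direct costs, and forgone pre-MBA salary (the
    opportunity cost, assumed here to be two years while enrolled)."""
    opportunity_cost = 2 * pre_mba_salary
    return sum(salaries) - tuition - direct_costs - opportunity_cost

if __name__ == "__main__":
    print(usnews_style_score(quality=85.0, placement=78.0, selectivity=90.0))  # 83.8
    print(forbes_style_gain([120_000.0] * 5, pre_mba_salary=70_000.0,
                            tuition=140_000.0, direct_costs=20_000.0))          # 300000.0
```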


Other rankings use nontraditional attributes:

  • The Beyond Grey Pinstripes ranking, published by the Aspen Institute, is based on how well social and environmental stewardship are integrated into university curricula and faculty research. Rankings are calculated from the amount of sustainability coursework made available to students (20%), student exposure to relevant material (25%), coursework focused on stewardship by for-profit corporations (30%), and relevant faculty research (25%). The 2011 survey and ranking included data from 150 universities.
  • The Quacquarelli Symonds QS Global 200 Business Schools Report compiles regional rankings of business schools around the world. Ranks are calculated using a two-year moving average of points assigned by employers who hire MBA graduates (illustrated in the sketch after this list).
  • Since 2005, the UT-Dallas Top 100 Business School Research Rankings have ranked business schools on the research their faculty publish, much as The Carter Report did in the past.
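
A two-year moving average like the one QS describes simply averages the current and previous year's employer points, which dampens year-to-year swings in employer sentiment. A minimal sketch, assuming hypothetical per-year point totals:

```python
# Minimal sketch of a two-year moving average, as described for the QS
# report; the per-year "employer points" below are invented figures.

def two_year_moving_average(points_by_year: dict[int, float], year: int) -> float:
    """Average of the employer points for `year` and the year before."""
    return (points_by_year[year] + points_by_year[year - 1]) / 2.0

if __name__ == "__main__":
    points = {2010: 310.0, 2011: 295.0, 2012: 330.0}  # hypothetical totals
    print(two_year_moving_average(points, 2012))       # (330 + 295) / 2 = 312.5
```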


The ranking of MBA programs has been discussed in articles and on academic websites. Critics of ranking methodologies maintain that any published rankings should be viewed with caution for the following reasons:

  • Rankings exhibit intentional selection bias as they limit the surveyed population to a small number of MBA programs and ignore the majority of schools, many with excellent offerings.
  • Ranking methods may be subject to personal biases and statistically flawed methodologies (especially methods relying on subjective interviews of hiring managers, students, and/or faculty).
  • Rankings use no objective measures of program quality.
  • The same list of schools appears in each ranking with some variation in ranks, so a school ranked as number 1 in one list may be number 17 in another list.
  • Rankings tend to evaluate a school as a whole, yet some schools offer MBA programs of differing quality while the ranking relies only on information from the full-time program (e.g., a school may use highly reputable faculty to teach its daytime program but staff its evening program with adjunct faculty, or apply drastically lower admissions criteria to its evening program than to its daytime program).
  • A high rank in a national publication tends to become a self-fulfilling prophecy.
  • Some leading business schools, including Harvard, INSEAD, Wharton, and Sloan, offer only limited cooperation with certain ranking publications because they believe rankings are misused.

One study found that objectively ranking MBA programs by a combination of graduates' starting salaries and average student GMAT score can reasonably duplicate the top 20 list of the national publications, and concluded that a truly objective ranking would be individualized to the needs of each prospective student.
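
A minimal sketch of such an objective ranking follows, assuming min-max normalization and an equal weighting of the two measures; the study's actual method and any real program data are not reproduced here.

```python
# Sketch of ranking programs by combining average graduate starting salary
# with average student GMAT score. The normalization, equal weighting, and
# all figures are assumptions for illustration.

def normalize(values: list[float]) -> list[float]:
    """Min-max scale a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank_programs(programs: dict[str, tuple[float, float]]) -> list[str]:
    """programs maps name -> (avg starting salary, avg GMAT); returns
    names sorted best-first by the sum of the two normalized measures."""
    names = list(programs)
    salary_scores = normalize([programs[n][0] for n in names])
    gmat_scores = normalize([programs[n][1] for n in names])
    combined = {n: s + g for n, s, g in zip(names, salary_scores, gmat_scores)}
    return sorted(names, key=combined.get, reverse=True)

if __name__ == "__main__":
    # Hypothetical figures for three unnamed programs.
    print(rank_programs({"A": (130_000.0, 720.0),
                         "B": (115_000.0, 700.0),
                         "C": (125_000.0, 730.0)}))
```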

National publications have recognized the value of ranking against different criteria and now offer lists ranked in different ways: by salary, by students' GMAT scores, by selectivity, and so forth. While useful, these lists still do not answer the critiques that rankings are not tailored to individual needs, that they draw on an incomplete population of schools, that they may fail to distinguish between the different MBA program types a school offers, and that they rely on subjective interviews.
