Missing College Mark



In August, Forbes entered the college-ranking game with a list that includes some considerable surprises: Wabash College outranks MIT, and Centre College outranks Stanford.

It’s exactly this sort of cognitive shock that’s needed to upset the age-old hierarchies of the U.S. News and World Report list, which, for all of its recent drubbings, still holds an implausible sway over the public consciousness.

A robust challenger could provide real competition and revitalize the way potential students look at colleges. Unfortunately, the Forbes list doesn’t do that. Thanks to a bizarre ranking methodology, shock is about all it’s likely to deliver.

For years, the U.S. News rankings rode a formula for success, using minor recalibrations of methodology to create a frisson of movement while delivering essentially identical slots to most universities.

Schools, being clever, understood how to retain their spots and, increasingly, how to vault their way up the charts. It all worked fine until the public realized that the ranked schools had become quite good at gaming the system.

Into this void of confidence have entered the Forbes rankings, which generated considerably different results from U.S. News, and controversy all their own. A co-author, Richard Vedder, described the U.S. News ranking system as “roughly equivalent to evaluating a chef based on the ingredients he or she uses,” which is undoubtedly true. Yet the “ends-oriented” Forbes list then went on to expose all the flaws of a ranking that treats students like entrees.

Simply put, half of the weighting in the Forbes list goes to highly questionable measures. Twenty-five percent of the weight goes to a listing of alumni in Who’s Who in America, which might provide a morale boost to Central Kansas grads but seems a hopelessly fragmented measure of educational quality, or of an institution’s contributions to individual success. Is Bill Gates on that Harvard list?

The most contentious aspect was the 25% weighting given to RateMyProfessors.com ratings. Sure, anyone can learn useful information from RateMyProfessors.com, but the site’s overall point rankings are among the least trustworthy things I’ve ever seen.

I can’t count the number of times a difficult but excellent professor has suffered a low ranking, or an easy but mediocre one has enjoyed a high one.

Even professors with obvious bias don’t often suffer in the rankings, receiving high marks from like-minded students. I’m not talking only about liberals; a stridently pro-Israel professor received raves.

Most reviews of individual professors are basically accurate in themselves, but extrapolating from them to an evaluation of school quality in general deserves all the questioning it has been receiving.

Not to mention, of course, as the professor and education writer Ann Althouse pointed out, that the means of manipulating one’s way up the U.S. News list seem meager compared with the influence a few dozen students could have at RateMyProfessors.com, and on the next installment of a Forbes list, if there were to be one.

The other factors in the Forbes ranking are quite legitimate: the four-year graduation rate, the enrollment-adjusted number of students and faculty receiving national awards, and the average accumulated debt of attendees. Seeking to measure results is an idea of clear merit. These factors, however, leave the other half of the criteria looking all the more hollow.

The flimsiness of the methodology is all the more surprising because Mr. Vedder’s writing about the results of the survey at his Web log, collegeaffordability.blogspot.com, like most of his writing about the survey in the magazine, makes eminent sense.

He hails Northwestern as a laudably undergraduate-focused school that came out ahead of many institutions conventionally ranked far higher. All of this sounds very right, which makes the lack of a believable ranking system all the more frustrating.

There’s something encouraging about college lists that don’t presume to provide an ultimate classification but address themselves to narrower concerns. No one’s going to get the ranking formula “right,” but a variety of challengers could help make them all better.

The Washington Monthly, for example, developed a college list asking what “colleges are doing for the country” and the taxpayer, using simple criteria: ROTC and public-service activity, levels of student aid, and numbers of federal grants.

It succeeded in addressing a narrow but important question with easy formulas for measurement. If only Forbes could develop a list with a formula worthy of its idea.

Mr. Paletta is a senior editor of the Manhattan Institute’s MindingTheCampus.com.

