Undoubtedly a number of presidents, provosts, directors of admissions, and others at elite colleges and universities are either breathing sighs of relief or wringing their hands this morning with the release of the 30th edition of the U.S. News & World Report Best Colleges rankings. As has been the case since the 2011-2012 academic year, Princeton University holds the top spot among national universities, with Williams atop the list for national liberal arts colleges.
For all of the hype that the media (e.g., here, here, or here) give to the rankings, a little perspective is in order. The top 20 institutions on the university list collectively enroll slightly more than 150,000 students, which is less than 1.5% of the more than 10.5 million undergraduate students enrolled in four-year degree-granting institutions nationwide. And the lists published today make no mention of the more than 1,500 community colleges currently serving more than 7 million undergraduates nationwide. Indeed, these two-year institutions are educating the most racially and economically diverse cross-sections of college students.
Data from the CIRP Freshman Survey suggest that the attention given to the rankings each year (and, yes, it’s clear that perhaps even writing this blog is contributing to the hype) is largely unwarranted, as national rankings appear to matter little in students’ college choice process. When we break out the data by selectivity, we find that rankings in national magazines matter only to students at the most selective campuses. Just under a quarter (24%) of students at the most selective campuses indicated that rankings in national magazines were a “very important” factor in their decision to enroll at their current institution. By contrast, only 10% of students attending “low selectivity” institutions and 11% of those at “medium selectivity” institutions rated rankings as a “very important” factor in their decision process.
Instead, as we reported in the 2013 Freshman Survey monograph, cost and financial aid increasingly rank among the top factors in students’ college choice process.
Given how little importance students place on rankings when deciding where to attend college, and how much time institutional research officers spend filling out the annual USNWR survey, what’s the point? Perhaps the rankings help the institutions at the top attract more donors and larger gifts, which in turn help them stay atop these lists. But the rankings also encourage poor behavior (that is, cheating and lying) among some colleges and universities – what an example to set for students!
A perennial critique of the USNWR rankings is that the lists pay too little attention to outcomes, although the methodology does include measures of first-year retention and six-year graduation rates. This year the magazine added student loan default rates and campus crime, though the magazine’s editor downplayed the latter statistic and suggested that readers not place much stock in campus crime rates.
The Obama administration is proposing a rating system of its own that focuses more on outcomes, with the details of the proposal expected later this fall. HERI Director Sylvia Hurtado joined a panel, hosted by UCLA’s Civil Rights Project, on Capitol Hill last week to advocate for fairness in the measures used. She presented CIRP data highlighting the importance of input-adjusted measures, which account for differences in student preparation and campus resources when measuring outcomes like retention and degree completion. When we look at which campuses do better than expected given the students they serve and the resources at their disposal, we tend to see a very different ranked list.