A Brief Look Under the Hood of College Rankings


Ryan Deuel

Everyone loves a good “top 10” list.

Early best-of lists became available to the public in the 1930s, when Consumer Reports began ranking our radios, washing machines, and vacuum cleaners. Today, with a plethora of social media platforms in the palm of our hands, we rank our dinners on Yelp, our vacations on TripAdvisor, our drivers on Uber and Lyft, and—let’s be honest here—the photos, comments, and lifestyles of our family, friends, and colleagues on Facebook, Instagram, Snapchat, and LinkedIn.

Higher education has shown that it is not immune to the rankings frenzy. According to the website College Rank, the American Council on Education first ranked institutional reputation in 1934. The Chicago Tribune is credited with putting together the first ranking of undergraduate institutions back in 1957. Then came U.S. News & World Report’s Best Colleges Guide. First published in 1983, it kicked off consumer media’s fascination with college rankings, followed by Money (1990), The Princeton Review (1992), and Forbes (2008). And for good reason: Each time there’s a new ranking, there’s a new publication; and when there’s a new publication, there’s potential revenue for publishers.

The proliferation of rankings online, however, is nothing less than astonishing. From College Rank to Niche and Rate My Professors, today’s higher education institutions cannot escape being ranked, from the good (St. Lawrence is No. 3 on The Princeton Review’s Best Alumni Network list and No. 2 among the Peace Corps’ Top Volunteer-Producing Small Colleges) to the not-so-good (lists that we prefer not to highlight).

The public relations acrobatics that follow good and not-so-good rankings alike are fraught with mixed enthusiasm: On the one hand, a favorable acknowledgment gives us the opportunity to share more information about points of pride; on the other, we are consistently burdened with having to explain that many rankings are based on less rigorous assessment methodologies and may be motivated more by click-throughs than by science.

So, how does a college make the lists? 

It starts with data. Traditional media that publish college ranking guides often depend on data supplied directly by the institutions themselves. According to U.S. News & World Report, 92 percent of the 1,388 ranked colleges and universities it surveyed returned statistical information. At St. Lawrence, that data is housed within the Office of Institutional Research (IR), which organizes it into various reports, including the Common Data Set, or CDS, which began as a collaborative effort among university IR offices to provide consistent data to survey publishers such as the College Board, ACT, Peterson’s, and U.S. News & World Report, to name a few.

The next step is managed differently at each college and university. At St. Lawrence, requests for data are received by the director of media relations—yours truly—who fills out the various surveys generated by different ranking entities. Some are relatively brief surveys on costs, enrollment, or financial aid. Others (The Princeton Review, Peterson’s, Barron’s, ACT, College Board, Fiske, Kiplinger’s; the list goes on) can include over 200 questions, each asking for different information and drilling down much deeper than the CDS. There are even a few, such as the Campus Pride Report on climate and inclusivity, that request more qualitative review and involve more comprehensive campus narratives. In 2017, St. Lawrence answered more than 30 surveys, which required coordination not only with IR but also with multiple offices across campus to ensure the accuracy and security of the institution’s information.

Once the surveys are returned, publishers compile information on topics such as student selectivity (based on factors including high school GPAs and SAT scores), graduation and retention rates, reputation (based on peer assessments by high school guidance counselors and college presidents), resources available to faculty, the institution’s financial health, and even the alumni giving rate. Some, like The Princeton Review, gather qualitative information through voluntary student surveys, while others consider alumni career earnings.

Next comes a subjective methodology, which ultimately guides the rankings. Some divide colleges and universities based on the Carnegie Classification of Institutions of Higher Education, while others do not. Some rely on data from hundreds of survey respondents, while others may have access to data from only a handful. Most traditional media clearly spell out how they weigh the various data points in order to come up with their rankings, while the methodology used in online-only rankings may be extremely difficult to locate. 
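To make that weighting concrete, here is a minimal sketch of how a weighted-sum methodology turns a few data points into a ranked list. The schools, metrics, and weights below are all invented for illustration; no publisher’s actual formula is shown.

```python
# Hypothetical example: two rankings built from the same data,
# differing only in how the metrics are weighted.

schools = {
    # graduation rate, retention rate, peer reputation (all 0-1, invented)
    "College A": {"grad_rate": 0.90, "retention": 0.95, "reputation": 0.60},
    "College B": {"grad_rate": 0.80, "retention": 0.85, "reputation": 0.90},
    "College C": {"grad_rate": 0.85, "retention": 0.90, "reputation": 0.75},
}

def rank(weights):
    """Score each school as a weighted sum of its metrics, highest first."""
    scores = {
        name: sum(weights[metric] * value for metric, value in metrics.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Publisher 1 emphasizes outcomes; Publisher 2 emphasizes reputation.
print(rank({"grad_rate": 0.5, "retention": 0.4, "reputation": 0.1}))
# -> ['College A', 'College C', 'College B']
print(rank({"grad_rate": 0.1, "retention": 0.1, "reputation": 0.8}))
# -> ['College B', 'College C', 'College A']
```

Same data, different weights, opposite orders: that is all it takes for the same school to land near the top of one list and near the bottom of another.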

One of the most perplexing issues with college rankings is that a school may be No. 50 on one list, No. 150 on another, and No. 10 on a third. Unless readers take the time to dig into the methodology and understand where the data comes from, they have no context for the rank.

If you give the authors of these lists the benefit of the doubt, the intention of both traditional and online publications is to give prospective students and their families tools that make the college selection process better informed. For students, alumni, parents, and administrators, rankings can be hailed as points of pride that help strengthen engagement. Employers may even place greater weight on the ranking of the university attended than on a graduate’s GPA when deciding whom to hire. Yet the sheer number of rankings reflects the sheer number of publications and websites vying for sales, ad revenue, and online click-throughs, constantly creating new criteria and changing methodologies along the way. These rankings, however, usually don’t account for the quality of education at each institution or give significant consideration to student learning outcomes: what students have actually learned.

While a strong ranking may leave Laurentians feeling good about their institution, we would also do well to exercise caution whenever we assign numerical values to intellectual and scholarly pursuits. What these rankings fail to do is tell the most relevant and compelling stories of institutions like St. Lawrence: the curricular and co-curricular learning that takes place inside and outside the classroom, the people wrestling with difficult problems, the Laurentians making a difference in their communities, the connections between students and alumni that lead to internships and jobs, and the personal enrichment that is part and parcel of attending a liberal arts institution and informs the rest of a student’s life.

And finally, while rankings can help us make data-informed decisions and draw data-informed conclusions, it is the human stories—not the numbers—that make us proud of St. Lawrence and proud to be Laurentians.