Each of our three categories included several
components. We determined the Community Service score by measuring
each school's performance in three different areas: the percentage
of its students enrolled in the Army or Navy Reserve Officer
Training Corps; the percentage of its students who are currently
serving in the Peace Corps; and the percentage of its federal
work-study grants devoted to community service projects. A school's
Research score is based on two measurements: an institution's total
research spending, and the number of Ph.D.s awarded by
the university in the sciences and engineering. For both Community
Service and Research, we weighted each component equally to
determine a school's final score in the category.
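In code, that equal-weighting step within a category might look like the minimal Python sketch below. The component values are hypothetical, assume the standardization described later in this section, and are not figures from our data.

```python
# Minimal sketch: equal weighting of a category's components (hypothetical values).
# Each value is assumed to be already standardized (mean 0, standard deviation 1).
community_service_components = {
    "rotc_enrollment_z": 0.8,       # share of students in Army/Navy ROTC
    "peace_corps_service_z": -0.2,  # share of students serving in the Peace Corps
    "work_study_service_z": 1.1,    # share of work-study funds spent on service
}

# Every component counts equally toward the category score.
category_score = sum(community_service_components.values()) / len(community_service_components)
print(round(category_score, 2))  # 0.57 for these made-up numbers
```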
The Social Mobility score was a little more complicated. We had
data that told us the percentage of a school's students on Pell
Grants, which is a good measure of a school's commitment to
educating lower-income kids. But we wanted to know how many of these
students graduate, and, unfortunately, schools aren't required to
track those figures. So we devised our own method of estimating that
statistic.
Because lower-income students at any school are less likely to
graduate than wealthier ones, the percentage of Pell Grant students
is an important indicator: If a campus has a large percentage of
Pell Grant students—that is to say, if it is disproportionately
poor—it will tend to diminish the school's overall graduation rate.
Using data from all of our schools, we constructed a formula
(through a technique called regression analysis) that predicts a
school's likely graduation rate given its percentage of students on
Pell. Schools that outperform their predicted rate score better than
schools that merely match or, worse, undershoot the mark. For
instance, 37 percent of UCLA's students receive Pell Grants from the
federal government. Given that figure, our formula predicts a
graduation rate of just 48 percent; UCLA's actual rate is a rather
high 87 percent, making it the top performer in the category.
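A rough sketch of this calculation in Python appears below. The schools and figures in it are invented for illustration, and an ordinary least-squares line stands in for the regression described above; only UCLA's numbers (37 percent Pell, 87 percent graduation) come from the text.

```python
import numpy as np

# Hypothetical data: percentage of students on Pell Grants and actual graduation rate.
pell_pct = np.array([15.0, 22.0, 30.0, 37.0, 45.0, 55.0])
grad_rate = np.array([90.0, 82.0, 70.0, 87.0, 55.0, 46.0])

# Fit a simple linear regression: predicted graduation rate as a function of Pell share.
slope, intercept = np.polyfit(pell_pct, grad_rate, 1)

def predicted_grad_rate(pct_on_pell):
    return slope * pct_on_pell + intercept

# A school scores well when it beats its predicted rate.
ucla_predicted = predicted_grad_rate(37.0)
ucla_actual = 87.0
print(f"predicted {ucla_predicted:.0f}%, actual {ucla_actual:.0f}%, "
      f"outperformance {ucla_actual - ucla_predicted:+.0f} points")
```

With the real data set rather than these invented figures, the predicted rate for UCLA is the 48 percent quoted above.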
Our methodology pursued two primary goals. First, no single category
should count as more important than any other. And second, the final
rankings needed to reflect
excellence across the full breadth of our measures, rather than
rewarding an exceptionally high focus on, say, research. All
components were weighted equally when calculating the final score.
In order to ensure that each measurement contributed equally to a
school's score in any given category, we standardized the data sets
so that each had a mean of 0 and a standard deviation of 1. The data
were also adjusted to account for statistical outliers. In our
published list of universities and liberal arts colleges, the data
listed reflect a school's actual performance on each criterion.
However, for the purposes of calculating the final score, no
school's performance in any single area was allowed to exceed three
standard deviations from the mean of the data set.
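As a sketch of the standardization and outlier cap, the short Python example below converts an invented set of raw component values to z-scores and then, for scoring purposes only, caps each value at three standard deviations from the mean.

```python
import numpy as np

# Hypothetical raw values for one component across twelve schools, with one extreme outlier.
raw = np.array([10, 11, 12, 9, 13, 10, 11, 12, 10, 11, 9, 80], dtype=float)

# Standardize the data set so it has a mean of 0 and a standard deviation of 1.
z = (raw - raw.mean()) / raw.std()

# For the final score, no value may sit more than three standard deviations
# from the mean; the outlier's z-score (about 3.3) is capped at 3.0.
capped = np.clip(z, -3.0, 3.0)

print(np.round(z, 2))
print(np.round(capped, 2))
```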