StudentsReview ™ :: 2005 OFFICIAL Rankings Explained


StudentsReview OFFICIAL Rankings
New Years Day 2005

2005 Rankings Explained

StudentsReview's OFFICIAL Rankings are the first rankings to be COMPLETELY generated from student opinion, and the first to publicly publish their ranking methodology & analysis.  There are no preconceptions, university administrators, group consensus, or personal expectations governing our rankings.  We perform an analysis that is transparent, understandable, and hopefully informative, governed only by tabulated student opinion. 
The sophistication of the analysis is reflected in its full and transparent description of internal biases, and in its “apples to apples” comparisons.

This document will explain how the data was analyzed, how the StudentsReview ranking system operates, and how “apples to apples” comparison is achieved.  If you have a quick question, head on over to the 2005 NYD FAQ (Frequently Asked Questions) page.

The StudentsReview Ranking system consists primarily of filtering machinery and a public formula, which is described below.  There is a minor component to the math (and we assure you, it is only math connected with analysis of student surveys) that we are keeping private to prevent exploitation.  If made public, it could be exploited by third parties who want to control the rankings, and it would then be impossible to model the data, preventing any meaningful analysis in the future. 

StudentsReview biases its rankings heavily towards educational quality.  There are many components and aspects you can use to try to assess the “value” or “rank” of any particular school.  For many, reputation opens doors and provides opportunities right at the outset that might not otherwise ever be available, or the value surfaces and is imbued in different ways than is explicitly measurable.  StudentsReview recognizes this, and encourages the reader to be aware — when viewing any rankings — of the different ways that an institution can be valued, of how its education may surface to help them in their career later, and of how their own priorities may change over time.

For this particular set of rankings, and in general, StudentsReview takes the approach that the quality of education will surface and provide both confidence and opportunities in the workplace once the tangible skills are revealed.  While REPUTATION is fairly well known and understood, the quality of education at any particular institution is an unknown that is insightful to reveal.

Data Purity
First and foremost, StudentsReview considers itself a data analysis company, and so to the best of our ability, StudentsReview's data is clean of unknown/3rd party biases.  We do not modify or remove surveys to “craft” a particular image.  If data is not managed with integrity, then any conclusions drawn from it are meaningless and misleading. 

Several institutions have directed designated “feeder” students to submit surveys — to bias our data in favor of their institution.  Beyond the incidents we have been able to detect ourselves, students have surprised us by “self reporting” — acting as ethics police and reporting incidents of bad practice directly to us.  As a result, we dropped all of the affected surveys and performed manual verification.

A number of institutions have sought to leverage threats of legal action, or a business relationship, to influence us into removing or altering data.  In every contention, except those where the data was found to be genuinely invalid, the demands and solicitations were denied.

Data is statistically filtered prior to ranking to remove invalid, duplicate, or statistically blank surveys.  Invalidity is determined by a statistical algorithm which looks for surveys that are not self-consistent, or reflect a systematic corrupting effect on the data, either accidentally or with intent.

5,000 valid surveys were analyzed statistically, and a Gaussian matrix was created to model the survey patterns within and between surveys.  We can now identify surveys that: vary too little, vary too much, have fields that do not covary properly, or are inconsistent (e.g., rating the university an A for Friendliness, but then rating Social Life an F).  The filter was then trained on a marked set of valid and invalid data to set its thresholds for how much inconsistency, variance, and survey-survey interaction it can tolerate. 
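The variance and covariance checks described above can be sketched as follows. This is an illustrative reconstruction, not StudentsReview's actual filter: the field names, grade scale, covarying pairs, and thresholds are all assumed for the example.

```python
# Hypothetical sketch of the consistency filter; field names, the grade
# scale, and all thresholds are illustrative, not the real trained values.
GRADES = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

# Field pairs assumed (for illustration) to covary strongly.
COVARYING = [("friendliness", "social")]

def is_suspect(survey, max_gap=3.0, min_var=0.0, max_var=3.5):
    """Flag surveys that vary too little, vary too much, or whose
    covarying fields disagree beyond the threshold."""
    vals = [GRADES[g] for g in survey.values()]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if var <= min_var or var >= max_var:   # statistically blank / erratic
        return True
    for a, b in COVARYING:                 # internal consistency check
        if abs(GRADES[survey[a]] - GRADES[survey[b]]) >= max_gap:
            return True
    return False

# An A for friendliness but an F for social life trips the filter.
print(is_suspect({"friendliness": "A", "social": "F", "quality": "B"}))  # True
print(is_suspect({"friendliness": "A", "social": "B", "quality": "B"}))  # False
```

In the real system the thresholds are learned from a marked training set rather than hand-set as here.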
A rule-based system was also created to identify duplicates and model trends among surveys from the same machine, allowing us to identify whether a person is falsifying many surveys.  FFT analysis is employed to determine the “data content” of each survey as well, providing more information for modeling. 
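A minimal version of the duplicate rule can be sketched as below. This is a toy stand-in, not the actual rule base: the fingerprinting scheme and the idea of keying on a machine identifier are assumptions for illustration.

```python
import hashlib

# Toy duplicate rule (fields and machine-id scheme are illustrative):
# identical answer fingerprints from the same machine count as one survey.
def fingerprint(survey):
    """Stable hash of a survey's answers, order-independent."""
    return hashlib.sha1(repr(sorted(survey.items())).encode()).hexdigest()

seen = set()

def is_duplicate(machine_id, survey):
    key = (machine_id, fingerprint(survey))
    if key in seen:
        return True
    seen.add(key)
    return False

s = {"quality": "A", "social": "B"}
print(is_duplicate("host-1", s))   # False: first submission
print(is_duplicate("host-1", s))   # True: same machine, same answers
```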

The combination of the trained statistical filter, correlation matrix, and survey model runs autonomously, applying its decisions uniformly to all surveys from all schools.  For the OFFICIAL rankings, the thresholds for inconsistency were tightened a bit to prevent spurious or anomalous data from affecting the rankings too heavily.

Out of an early sampling of 7,500 undergraduate student surveys, 483 were flagged as invalid.  Manual inspection revealed that 24 of the 483 flagged surveys (about 5%) were actually good, and that about 32 invalid surveys slipped past the filter (about 7% of the truly invalid surveys), for an overall misclassification rate of roughly 1%.
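The error rates quoted above can be re-derived directly from the counts in the text:

```python
# Re-deriving the filter's error rates from the figures above.
flagged = 483        # surveys marked invalid by the filter
false_pos = 24       # flagged surveys that were actually good
missed = 32          # invalid surveys the filter let through
total = 7500

truly_invalid = (flagged - false_pos) + missed
print(truly_invalid)                                  # 491
print(round(false_pos / flagged * 100))               # 5  (% of flagged that were good)
print(round(missed / truly_invalid * 100))            # 7  (% of invalid that were missed)
print(round((false_pos + missed) / total * 100, 1))   # 0.7 -> "roughly 1%" overall
```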

There are ALWAYS biases in ANY data gathered by ANYONE.  It does not matter who is doing it, when, or what methods are used — there are ALWAYS biases in the data.  Anyone who claims their data is unbiased is outright lying.  The most important thing is to understand what those biases are.  Because StudentsReview's surveys are gathered only through our website, with no financial or contest “reward” (we feel it cheapens students' opinions), students who take our survey are self-selected, and like-mindedly referred.  This leads to two predominant results.  The first is a natural bias in our data that is often disproportionately criticized for being overly negative or bitter.  Many do not realize that a solid majority of our surveyors — 61% (8,875/14,538) — have said that they would choose to return to their institution given the chance, so the self-selection bias is not as negative as critics have expected. 

If anything, the self-selected bias is actually a polarizing one, in that those students who feel positively tend to “reply” in the comments to those who feel negatively, creating what appears to readers to be a strongly partisan opinion of the school on our website.  But even that belief is incomplete, because many of the students who do not write a comment after taking the survey are those without strong opinions.  So the survey data itself (not the comments) tends to have a reasonable mix of positive, negative, and middle of the road opinion.

The second result is a positive one.  The same self-selected students are also the ones with time to sit down and expound in depth about their experiences — in far greater detail than is possible with any on-campus surveying.  As such, much more meaningful causal insight is gained about the physical factors leading to any satisfaction or dissatisfaction than any short quotes could provide.

Finally, it has been brought to our attention that some critics believe surveyors are fulfilling some “ulterior” motive by taking our survey (e.g., deterring applicants to reduce competition).  Except in the cases of university admissions bad practice (described above), there is little to no reward or consequence that a student can achieve by taking our survey.  That is, deterring prospective applicants through our site as a personal tactic would not achieve visible results for several years — most students will have graduated by then — long after most people's reward horizons.  It is impossible for them to achieve any ulterior motive besides informing prospective students.

Sampling Biases/Representation 

Figure 1.  - The disparity between the survey responses (biases) at two very different institutions: Pensacola Christian College (PCC) and the Massachusetts Institute of Technology (MIT)
The most common concern raised about our rankings is one of representative sampling — that the sampling of students in the surveys is not representative of the true student opinion.  For instance, consider if more “positive” students or more “negative” minded students than the school's average take the survey at any particular school (Figure 1.).  Those surveys could bias the results and lead to a school rank that is artificially high or low. 

A common method to overcome this problem is to model the shape of biases across ALL surveys and ALL schools (Figure 1 - black line).  That shape is used to neutralize the response biases in any particular school, so that all schools have the same modelled sample.  Unfortunately, to some degree, neutralization is a bad thing — consider a school where most of the students actually ARE dissatisfied, or the educational quality really IS NOT that stellar.  Neutralization would disproportionately suppress the valid (dissatisfied) opinions, and overly amplify the one or two satisfied opinions — leading to an artificially high score.

The other method, and the preferred one, is to simply acquire more surveys.  As the numbers grow, the sampling converges on a representative sampling.
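The convergence argument can be illustrated with a small simulation. The distribution and its parameters are made up for the example; the point is only that the sample mean's error shrinks roughly as 1/√n as surveys accumulate.

```python
import random
import statistics

# Illustration (not SR data): as the number of surveys grows, the sample
# mean of a noisy opinion distribution converges on the true mean.
random.seed(0)
TRUE_MEAN = 3.0   # hypothetical "true" average grade at a school

def sample_mean(n):
    """Average of n simulated survey scores around TRUE_MEAN."""
    return statistics.fmean(random.gauss(TRUE_MEAN, 1.0) for _ in range(n))

for n in (10, 100, 10000):
    print(n, round(abs(sample_mean(n) - TRUE_MEAN), 3))
```

With 10 surveys the sample mean can easily be a few tenths of a grade off; with 10,000 it is pinned to within a few hundredths, which is why "acquire more surveys" is the preferred fix.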

Just to reiterate, 61% of our surveyors said they would return to their institution, so our data does not carry an overly negative bias.

Free Variables, Normalization, and Synthesis
( “Apples to Apples” )

Regardless of what anyone may say, schools are inherently non-comparable.  They have different student bodies, different majors, different educational offerings, different locations, weather, and ultimately different people doing the rating.  Ranking the schools in any fashion is the equivalent of me giving you an orange, and Joe over there an apple, and asking each of you, “how sweet is that fruit?”.  You might say, “This orange is super-tangy and sweet”, and Joe might say, “My apple is really tart and sweet!”.  Now which fruit is sweeter?  Who knows?  Two different people said something about two very different fruits — we don't know if Joe is more or less sensitive to sugar than you are, if the apple actually has more or less sugar content, or if the multitude of other flavors teasing your taste buds interfere in some way.  The point is, you are two different people looking at two completely different fruits.

We overcome this problem in three steps: Binding Free Variables, Synthesis, and Normalization.  But before diving in, it is useful to provide an overview of what is occurring.  Essentially what we do is break apart what we know about you and Joe into as many factors as possible — are you male, how much do you like fruits, etc.  Then we look at what we can learn from a large number of people like you and Joe.  How similar are your sensitivities to sugar, how do the similar people to Joe rate oranges, and how do the similar people to you rate apples?  How do you covary?  Using what we know about how you both are related, we synthesize a kind of “ghost” next to each of you to act as a “stand in” for the other person.  True, it is not the same as if Joe had tasted an orange himself, or if you had tasted an apple, but it provides a suggestion of how you “might have” rated it, if you did. 
Finally, we normalize, so that the same number of Joes and the same number of yous have rated the apples and the oranges.

Figure 2.  - Modeling the Free Variables
We start by identifying the “free variables” in our data set.  They are the factors that can vary independently of the variables being measured.  In our data, some of the free variables are: Gender, Intellect, Major, Survey Age, and Region.  An example of a dependent (being measured) variable is “Program Quality” (Figure 2.).

Without knowledge of the free variables, and of the dependencies upon them, it is impossible to ensure that the sampling of data is comparable from school to school.  Suppose we know (hypothetically) that “in general” women tend to rate educational quality a half-grade higher than men (i.e., women give an A-, and men a B+).  Now suppose you compare two schools, one mostly female (School A), and the other mostly male (School B).  If the two are equal in education, the mostly-female school will naturally score higher in the rankings than the mostly-male school.  Does this mean that School A has better educational quality?  Absolutely not.  To actually compare the two, we have to leverage our knowledge of how educational quality depends upon the free variable “Gender” — that knowledge tells us about the relationship Educational Quality has to gender, and allows us to conclude that Schools A & B actually have equivalent educations.  Without knowledge of the free variables, the surveys we have would be both misleading and useless for concluding anything about relative educational quality — or about anything else, for that matter.
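The gender example above can be made concrete. The offsets below are hypothetical (a half-grade split, ±0.165 around a neutral mean on a 4.0 scale), chosen only to show how removing a known free-variable dependency equalizes two schools with the same underlying education.

```python
# Hypothetical: suppose the full data set shows women rate educational
# quality ~0.33 grade points (B+ -> A-) above men, so each group sits
# ±0.165 from a gender-neutral mean. These numbers are illustrative.
GENDER_OFFSET = {"F": +0.165, "M": -0.165}

def adjusted_quality(ratings):
    """Remove the known gender dependency before comparing schools.
    ratings: list of (gender, raw quality score on a 4.0 scale)."""
    return sum(score - GENDER_OFFSET[g] for g, score in ratings) / len(ratings)

# School A: mostly women; School B: mostly men; same underlying education.
school_a = [("F", 3.835), ("F", 3.835), ("F", 3.835), ("M", 3.505)]
school_b = [("M", 3.505), ("M", 3.505), ("M", 3.505), ("F", 3.835)]
print(round(adjusted_quality(school_a), 2))   # 3.67
print(round(adjusted_quality(school_b), 2))   # 3.67
```

The raw means differ (School A looks better), but once the gender dependency is removed both schools land on the same adjusted score.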

Figure 3.  - Data Synthesis generates data to fill missing data from existing data, so no holes exist. 
What happens if there is missing information?  What if no men have posted an opinion?  Or no out-of-state students?

Well, missing data poses a difficult problem.  If there is a missing data point in the free variables, the “lack of data” is amplified and drives the rankings incorrectly.  Data synthesis creates “dummy” data to fill the hole with a consistent data point based on the existing data, and acts like a dampener on inconsistencies.  The relationships we've learned from the entire data set are used to predict the filler data (analogous to our apple-orange ghost example above).  It is there to prevent empty or small amounts of information from driving the analysis one way or another.  Suppose we want surveys from an equal number of men and women at each school, but then find at one of the schools that only one man, or none, has been surveyed.  We apply the learned relationship between women's and men's scores to the women's scores at that school to determine the average score a male would give.  This data point is completely generated from the women's scores at that institution, but it prevents a single poor male rating, or no male rating, from driving the school's score down.  In practice, there are 30 dummy entries, one for each combination of free variables: 5 intellects × 2 genders × 3 regions. 

MissingData_i = [ Σ_j N·cor(i,j)·( Avg(C_j) − D(OverallAvg_j, C_i) ) ] / [ Σ_j N·cor(i,j) ]

where i represents the missing data, and j the source data.  We take the weighted average of the correlative difference between the values predicted by the free variables. 
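One possible reading of this formula is sketched below. It is a hedged reconstruction, not the actual implementation: we interpret D(OverallAvg_j, C_i) as the global offset between stratum j and stratum i, so a school's missing stratum is filled with the correlation-weighted average of its other strata, each shifted by that global offset. The strata, correlations, and averages are all invented for the example.

```python
# Hedged reading of the synthesis formula; cor(), the global averages,
# and the strata below are illustrative, not the real learned model.
def synthesize(i, school_avg, global_avg, cor, n):
    """Predict the school's average for missing stratum i.
    school_avg / global_avg: dicts stratum -> mean score;
    cor: dict (i, j) -> learned correlation; n: survey count weight."""
    num = den = 0.0
    for j, avg_j in school_avg.items():
        w = n * cor[(i, j)]
        # Shift j's school average by how stratum j differs from i globally
        # (our interpretation of the D(OverallAvg_j, C_i) term).
        num += w * (avg_j - (global_avg[j] - global_avg[i]))
        den += w
    return num / den

global_avg = {"men": 3.0, "women": 3.3}   # women rate 0.3 higher globally
cor = {("men", "women"): 0.8}
# No male surveys at this school; the women's average here is 3.5,
# so the synthesized male entry is 3.5 shifted down by the global gap.
print(round(synthesize("men", {"women": 3.5}, global_avg, cor, n=40), 2))  # 3.2
```

This dummy entry then stands in for the missing male stratum during normalization, exactly as the "ghost" taster stood in for Joe in the fruit analogy.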

Once the data is filled, it still is not quite comparable, in that different numbers of students with different genders, intellects, and regions are rating the schools.  We overcome this problem by normalizing the distributions of intellects, genders, and regions at each school to the average distributions of the entire data set — achieving a “best fit” of the free variables from each school to the data set average.  In this way, the distributions match, artificially making all the student bodies similar.  Normalization stratifies the data set across the free variables, allowing them to be bound and reweighted for each school to achieve a common contribution, and thus an artificially common comparison. 
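The reweighting step can be sketched as follows. The group names and target shares are illustrative; the idea is simply that each survey is weighted by (data-set share of its group) / (school's observed share of that group), so every school's free-variable mix matches the global one.

```python
from collections import Counter

# Sketch of normalization (groups and target shares are illustrative):
# reweight each school's surveys so its free-variable distribution
# matches the data-set-wide distribution.
def normalized_mean(surveys, global_dist):
    """surveys: list of (group, score); global_dist: group -> target share."""
    counts = Counter(g for g, _ in surveys)
    total = len(surveys)
    # weight = target share / observed share, applied per survey
    weights = {g: global_dist[g] / (counts[g] / total) for g in counts}
    wsum = sum(weights[g] * s for g, s in surveys)
    return wsum / sum(weights[g] for g, _ in surveys)

global_dist = {"in-state": 0.5, "out-of-state": 0.5}
# This school is 75% in-state, but the data set is 50/50.
surveys = [("in-state", 3.0)] * 3 + [("out-of-state", 4.0)]
print(round(normalized_mean(surveys, global_dist), 2))   # 3.5
```

The raw mean of this school would be 3.25, dominated by the over-represented in-state group; after reweighting, each region contributes equally and the mean is 3.5.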

We did not normalize or synthesize across majors, because insufficient data exists to draw any reliable conclusions about the inter-major dependencies, and over-normalization introduces HUGE artifacts into the dependent variables, such as the single-survey over-amplification that we observed. 

Conditional Dependencies 

Figure 4.  - Conditional Dependence Model
Unfortunately, forcing the same “shape” upon all schools (done by weighting different surveys differently) has several undesirable consequences.  First, the same shape that could reflect a response bias could also be an effective approximation of actual student opinion — in which case, neutralization will defeat the surveys and introduce a more difficult-to-comprehend counter-bias.  Second, neutralization normalizes out the distinguishing features (strengths and weaknesses) of schools — including any actual horizontal (total score) deflections — making the rankings even more meaningless.

As mentioned earlier by the discussion of free variables, the dependent variables (Program Quality, Social Life, etc) are conditioned on the free variables:

Q = Σ_ijk N_ijk × Quality | Intellect_i, Gender_j, Region_k
S = Σ_ijk N_ijk × Social | Intellect_i, Gender_j, Region_k
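These sums can be read as a weighted average over the free-variable cells: each dependent score conditioned on a cell (intellect i, gender j, region k) is weighted by that cell's normalized survey count N_ijk. A minimal sketch, with made-up cell values:

```python
# Sketch of the conditional-dependency sum; cell counts and conditional
# scores below are invented for illustration.
def overall_score(cells):
    """cells: list of (N_ijk, score | intellect_i, gender_j, region_k),
    taken after normalization. Returns the weighted overall score."""
    total_n = sum(n for n, _ in cells)
    return sum(n * s for n, s in cells) / total_n

cells = [(10, 3.2),   # e.g. (intellect_1, female, in-state)
         (5,  3.8),   # (intellect_2, male, in-state)
         (5,  3.0)]   # (intellect_1, male, out-of-state)
print(round(overall_score(cells), 2))   # 3.3
```

Q and S are each computed this way from their own conditional values; because the N_ijk have already been normalized to the data-set-wide distribution, every school's cells contribute in the same proportions.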

Private Component
The private component is the only piece of information that we are keeping hidden, to prevent exploitation by malicious surveyors.  For instance, if we said that Computer Engineering was worth more as a major than other majors (which we don't), then people trying to exploit the rankings would all claim to be computer engineering majors.  It would become impossible to separate the real computer engineering majors from the artificial ones.  Now, we don't do anything like that with majors, because that would be ridiculous, but it would not be desirable to have our filtering or our normalization algorithms exploited in some way. 

This particular ranking does NOT take into account department strengths as contributed by their constituent majors.  There is not sufficient data to stratify the departments & make any conclusions about their relative values.  Synthesis is only capable of filling in small amounts of “placeholder/fillers” for missing data, and requires the majority of information to be present as a source.  In the future, we will stratify to officially show “Top Engineering Schools”, “Liberal Arts”, etc, when there is sufficient student opinion to draw from.

Our free variables carry an independence assumption — that Gender, Intelligence, Major, and Region are completely independent.  That is, that gender has no bearing on intelligence, on choice of major, or on the region the student comes from, and vice versa.  In the cases of Gender↔Major and Intelligence↔Major, there may be some correlation that would break the independence assumption.  But because this particular ranking does not stratify by Major, the assumption is not violated.  In the case of Intelligence, there is a hidden dependency upon school and ACT/SAT score, in addition to other factors (School, ACT|SAT → Intelligence), which is only trivially modelled as a function of school. 

Not included in this ranking, despite a large number of surveys, is the University of Alabama, Tuscaloosa.  An enormous proportion (the greater part) of the surveys about the University of Alabama defeat our statistical filters, but when they were manually evaluated, more than two reviewers judged them to have all been written by one or two people, due to identical writing styles.  Additionally, disproportionately few left email addresses or contact information — far below the average.

Oftentimes we are posed the rhetorical question: “What do students know?”  As both existing customers and an on-campus presence, students should be considered “auditors” of public and private colleges' quality of service.  In the case of public schools, taxpayers have no immediate connection to the schools their tax dollars fund, so students are the only presence on campus able to provide that auditing.  In private colleges, the quality of service is often hidden from prospective students, so auditing by current students is the only way for prospective students to become informed buyers.

The contention of “insufficient information” is leveraged fairly frequently, but it often overlooks the value of a single survey.  Many surveys together reveal systematic failures by a school, but individually, surveys provide causality and insight that would otherwise be invisible.  Consider a physics survey rating program quality an F, and friendliness in the department also an F.  A large number of surveys might completely cover this survey up, making it seem like an aberration, but what does one learn from it?  Someone in physics finds the department unfriendly, and perhaps because of it, their success in the department suffers!  What kind of person is sensitive to friendliness in a department?  A social, friendly person!  While this school might be great in general, a social, friendly person should not come here to study physics.

Why not rank using ACT/SAT?  Standardized tests are poor predictors of performance and have little provable correlation with actual intelligence.  The “Self Rated Intelligence” that we use is intentionally loaded — carrying the facets of ego, ambition, multivariate intelligence, understanding, and reasonable hope.

Thank you for taking the time to read our rankings and this explanation.  Hopefully you found it informative, and our rankings insightful.  Our intent is to make our ranking process as transparent as possible, so that everyone can understand what is going on.  If any part of this document is unclear, please feel free to contact us at:

Next Year's Analysis
The next analysis will include many new features — separation by major, co-prediction of missing data, locality co-prediction, Canadian schools, and law schools!

