The Ranking of Engineering Schools is made up as follows:
50% - Educational Quality
25% - Academic Success (based upon 'understanding')
12% - Creativity Encouragement
5% - Work is Useful
4% - Faculty Accessibility
4% - University Funding Use
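The weighting above can be collected into a small table in code; a minimal sketch in Python (the key names are our own illustrative labels, not part of the ranking itself):

```python
# Ranking weights, expressed as fractions of the total score.
# The category names are illustrative labels for the list above.
WEIGHTS = {
    "educational_quality":      0.50,
    "academic_success":         0.25,
    "creativity_encouragement": 0.12,
    "work_is_useful":           0.05,
    "faculty_accessibility":    0.04,
    "university_funding_use":   0.04,
}

# The six categories are meant to cover the whole score,
# so the weights sum to 100% of it.
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
```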
The rationale behind this ranking structure is as follows:
Educational Quality (in program) - an umbrella over
many issues, designed to capture the overall quality and depth
of instruction at an institution.
Academic Success is
based upon understanding - While a student may be learning
an exceptional amount from his or her program, upon graduation
his or her initial earning power (salary) and job quality
will be largely dominated by grades.
This metric is
an estimate of how well the grades a student receives
actually match their competency in the material. Some
schools stake their reputation on their ability to spread out
grades ('no grade inflation') and to make very qualified
and competent students suffer later in life (in earning power) for
the sake of the school's reputation. This does students
and families a great disservice, as they are making
significant investments in both the education and the reputation
that is meant to assist them. Instead, by spreading grades,
the school takes that investment and punishes the customer.
Creativity Encouragement - Engineering
in particular is noted for its lack of creativity; some
people have claimed that creativity has no place in engineering.
However, the exact opposite is true.
Now more than ever, practical engineering risks being reduced
to an over-engineered process. Many of the
tasks taken on by engineers, if they lack creativity, can be reduced
to simple programs, leaving the engineer as little more than
a 'translator' between the human ideas and management on one side
and the physical implementation on the other. Creativity, however,
makes the engineer more than just a tool, bringing value
(salary, job reliability) to them immediately and further into the future.
Work is Useful - Being kept busy repeating mundane application tasks
does not guarantee that a student becomes grounded in
both the techniques and the theory needed to be a good engineer.
Rather, it can lead to student burnout, causing them
to waste significant amounts of tuition money on a degree
program that goes unused.
Faculty Accessibility - Ultimately, faculty
interaction is why students attend colleges and universities. Otherwise,
a student could easily view lectures on webcast or simply
read the book. Ideally, the faculty provide access to
more information, more resources, a network, and a link to
the real world that the student can learn from and
access. If the faculty do not make themselves available,
then they reduce the deeper educational capacity of the institution
and hamstring the students: essentially making them walk up
to and dive off of a diving board blindfolded, with
only the 'theoretical knowledge' that there is water in
the pool, that there are no alligators, or even that
the diving board exists.
University Funding Use - This
is how readily the students see that the university funding
is being used for their benefit. The immediate implications
are ones of facilities and resources that the students can use
to grow beyond the classroom. The long-term implication
is that if the University continues using its funding in
a manner beneficial to students, then the value of the
students' degrees, post graduation, will continue to rise.
Student Surveys are filtered of duplicate and “invalid”
surveys prior to ranking. Invalid surveys are those
that are not self-consistent, reflecting a corrupting effect on the
data, either accidental or with intent. We have found
that certain so-inclined students survey their “competing” schools, giving
artificially bad reviews (or artificially good reviews of their own school).
While we do not wish to point any fingers, we
have been able to link up several groupings of falsified
data with admissions staff at some universities.
The surveys were analyzed statistically, and a Gaussian matrix was created
to model the survey patterns within and between surveys.
We can now identify those surveys that: vary too little,
vary too much, have fields that do not covary properly,
or are internally inconsistent (e.g. rating the university
an A for friendliness, but then complaining about either the
people or the social life). In addition, a rule-based
system was created to identify duplicates and model trends of
surveys from the same machine.
This allows us to
identify when one person is falsifying many
surveys. FFT analysis is employed to determine the “data
content” of each survey as well, providing more information for
the filtering process. The resulting filter, correlation matrix,
and survey model are applied uniformly to all surveys. Out of
7,500 undergraduate student surveys, 483 surveys were rendered invalid.
Inspection of the invalid surveys revealed a failure rate
of 5% (24 of the 483 flagged surveys were in fact valid).
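The full filter (Gaussian matrix, rule-based duplicate detection, FFT analysis) is beyond a short example, but the "vary too little / vary too much" checks can be sketched in a few lines of Python. The thresholds below are invented for illustration and are not the ones applied to the real data:

```python
import statistics

def flag_invalid(ratings, low=0.2, high=4.0):
    """Flag a survey whose 0-10 ratings vary too little (straight-line
    answers, suggesting a careless fill) or too much (effectively
    random answers).

    `ratings` is one survey's list of numeric answers; `low` and `high`
    are illustrative standard-deviation thresholds, not the real ones.
    """
    spread = statistics.stdev(ratings)
    return spread < low or spread > high

print(flag_invalid([7, 7, 7, 7, 7, 7]))     # True: varies too little
print(flag_invalid([0, 10, 0, 10, 0, 10]))  # True: varies too much
print(flag_invalid([6, 7, 5, 8, 6, 7]))     # False: plausible spread
```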
How is rank computed?
The generic quick answer is that it is the average of
student opinion ratings minus “variability of score”. The “variability
of score” is larger for low numbers of surveys, meaning
that school's ranking position, whether high or low, is less
trustworthy. Strict statistical variance is not instructive here because
'variance' is computed within a group of surveys: with
only 1 survey, there is no variance.
The 'Variability' function
decreases rapidly with the size of the sample set and is applied
equally to all institutions, making it an acceptably fair
accounting. After 5 surveys, the variability of
score drops to less than .3; after 10 surveys, it
is less than .1. After 20 surveys, there is
no significant variability in position. Essentially, each school's score
converges to a position as the number of surveys increases.
More specifically, Rank is computed by multiplying
each selected variable by its importance and summing the results.
The average of all matching surveys for a particular
school is then taken. From this, a 'variability' is
computed, based upon the number of
surveys. If there is only 1 survey, and it
ranks a school at a 10, then 1 more survey
could come in ranking a '0', which would drop the
school's average to 5 ((10+0)/(1+1) = 5). This
is the lowest the school 'could' be, given
1 more survey. So this 'variability' is subtracted from
the overall score, reducing it. In this manner,
schools that have more surveys have a more believable average
than schools with only 1 survey.
score = average(importances * preferences) - (10 * sum(importances)) / (#surveys + 1)
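Read back as code, the formula might look like the following Python sketch (the function and argument names are ours; `importances` is the fractional weight vector and each survey is a list of 0-10 preference ratings):

```python
def school_score(surveys, importances):
    """score = average(importances * preferences)
               - (10 * sum(importances)) / (#surveys + 1)

    `surveys`: list of preference vectors, one 0-10 rating per category.
    `importances`: the matching weight vector (e.g. the fractions from
    the list at the top, summing to 1).
    """
    n = len(surveys)
    # Weighted score of each survey, then averaged across surveys.
    weighted = [sum(w * p for w, p in zip(importances, s)) for s in surveys]
    avg = sum(weighted) / n
    # Variability penalty: shrinks as the number of surveys grows.
    variability = (10 * sum(importances)) / (n + 1)
    return avg - variability

# With a single all-10s survey and weights summing to 1, the average
# is 10 but the penalty is 10/2 = 5, reproducing the worked example:
# one more survey of all 0s could drag the school down to a 5.
weights = [0.50, 0.25, 0.12, 0.05, 0.04, 0.04]
print(school_score([[10, 10, 10, 10, 10, 10]], weights))  # ~5.0
```

Each additional survey shrinks the penalty (10/2, 10/3, 10/4, ...), so a school's score converges toward its plain weighted average as surveys accumulate.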