Community College of Rhode Island
Noel-Levitz Student Satisfaction Inventory
(From Noel-Levitz "About the Student Satisfaction Inventory" included with the CCRI Summary Report.)

The Student Satisfaction Inventory measures students' satisfaction with a wide range of college experiences.  Principles of consumer theory serve as the basis for the inventory's construction. Therefore, students are viewed as consumers who have a choice about whether to invest in education and where to enroll.  In addition, students are seen as individuals who have definite expectations about what they want from their campus experience.  From this perspective, satisfaction with college occurs when an expectation is met or exceeded by an institution.

Students rate each item in the inventory by the importance of the specific expectation as well as their satisfaction with how well that expectation is being met.  A performance gap is then determined by the difference between the importance rating and the satisfaction rating.  Items with large performance gaps indicate areas on campus where students perceive their expectations are not being met adequately.

Because the Student Satisfaction Inventory results in three different scores for each item, a significant amount of information is generated for institutional decision makers.  Importance scores show how strongly students feel about the expectation (the higher the score, the more important it is to the student).  Satisfaction scores show how satisfied students are that your institution has met the expectation (the higher the score, the more satisfied the student).  Performance gap scores (importance rating minus satisfaction rating) show how well you are meeting the expectation overall.  A large performance gap score for an item (e.g., 1.5) indicates that the institution is not meeting students' expectations, a small gap score (e.g., .50) indicates that an institution is meeting students' expectations, and a negative gap score (e.g., -.25) indicates that an institution is exceeding students' expectations.  (Note added by CCRI:  Importance and satisfaction are rated on a seven-point Likert scale.)
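The gap calculation described above can be sketched in a few lines. In this hypothetical illustration (all item names and ratings are invented, not taken from the SSI), the performance gap is simply the importance rating minus the satisfaction rating on the seven-point scale:

```python
# Hypothetical illustration of performance gap scores.
# Item names and ratings are invented for demonstration only.

items = {
    "Advising availability": {"importance": 6.2, "satisfaction": 4.7},
    "Registration process":  {"importance": 5.8, "satisfaction": 5.9},
    "Campus safety":         {"importance": 6.5, "satisfaction": 6.1},
}

for name, scores in items.items():
    # Performance gap = importance rating minus satisfaction rating.
    gap = round(scores["importance"] - scores["satisfaction"], 2)
    if gap >= 1.0:
        note = "expectations not being met"
    elif gap > 0:
        note = "expectations largely met"
    else:
        note = "expectations exceeded"
    print(f"{name}: gap = {gap:+.2f} ({note})")
```

Here the first item, with a gap of 1.5, would be flagged as an area where expectations are not being met, while the negative gap on the second would indicate expectations exceeded.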

In addition to the information provided by the three measurements for each item, inventory composite scales offer a "global" perspective of your students' responses.  The scales provide a good overview of your institution's strengths and areas in need of improvement.


The Scales

Community, Junior and Technical College Version and Career and Private School Version

For the community, junior and technical college and career and private school versions of the inventory, 70 expectation items and 6 items that assess the institution's commitment to specific student populations are analyzed statistically and conceptually to provide the following 12 composite scales:

Academic Advising and Counseling Effectiveness assesses the comprehensiveness of your academic advising program.  Academic advisors and counselors are evaluated on the basis of their knowledge, competence and personal concern for student success, as well as on their approachability.  (7 Items)

Academic Services assesses services students utilize to achieve their academic goals.  These services include the library, computer labs, tutoring and study areas. (7 Items)

Admissions and Financial Aid Effectiveness assesses your institution's ability to enroll students in an effective manner.  This scale covers issues such as competence and knowledge of admissions counselors, as well as the effectiveness and availability of financial aid programs.  (6 Items)

Campus Climate assesses the extent to which your institution provides experiences that promote a sense of campus pride and feeling of belonging. This scale also assesses the effectiveness of your institution's channels of communication for students.  (15 Items)

Campus Support Services assesses the quality of your support programs and services which students utilize to make their educational experience more meaningful and productive.  This scale covers career services, orientation, child care, and special programs such as Veterans' Services and support services for displaced homemakers.  (7 Items)

Concern for the Individual assesses your institution's commitment to treating each student as an individual.  Those groups who frequently deal with students on a personal level (e.g., faculty, advisors, counselors) are included in this assessment.  (5 Items)

Instructional Effectiveness assesses your students' academic experience, the curriculum, and the campus's overriding commitment to academic excellence. This comprehensive scale covers areas such as the variety of courses offered, the effectiveness of your faculty in and out of the classroom, and the effectiveness of your adjunct faculty and graduate teaching assistants.  (14 Items)

Registration Effectiveness assesses issues associated with registration and billing.  This scale also measures your institution's commitment to making this process as smooth and effective as possible.  (9 Items)

Responsiveness to Diverse Populations assesses your institution's commitment to specific groups of students enrolled at your institution, e.g., under-represented populations, students with disabilities, commuters, part-time students, and older, returning learners.  (6 Items)

Safety and Security assesses your institution's responsiveness to students' personal safety and security on your campus.  This scale measures the effectiveness of both security personnel and campus facilities.  (5 Items)

Service Excellence assesses the attitude of staff toward students, especially front-line staff.  This scale pinpoints the areas of your campus where quality, service and personal concern for students are rated most and least favorably. (9 Items)

Student Centeredness assesses your campus's efforts to convey to students that they are important.  This scale measures the institution's attitude toward students and the extent to which they feel welcome and valued. (6 Items)

Some items on the inventory contribute to more than one scale.  In addition, four items (numbers 3, 9, 53 and 68) are not included in any of the two-year scales.

Reliability and Validity

The Student Satisfaction Inventory is a very reliable instrument.  Cronbach's coefficient alpha is .97 for the set of importance scores and is .98 for the set of satisfaction scores.  It also demonstrates good score reliability over time; the three-week, test-retest reliability coefficient is .85 for importance scores and .84 for satisfaction scores.
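The coefficient alpha figures cited above can be illustrated with a minimal sketch using the standard formula for Cronbach's alpha. The response matrix below (rows are respondents, columns are survey items on the seven-point scale) is invented for demonstration and is not SSI data:

```python
# Minimal sketch of Cronbach's coefficient alpha, the internal-consistency
# statistic reported above.  The response matrix is invented for illustration.

def cronbach_alpha(rows):
    k = len(rows[0])                                  # number of items

    def var(xs):                                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])           # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [          # rows = respondents, columns = items (7-point scale)
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 6, 7],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Items that move together across respondents, as in this toy matrix, yield an alpha near 1; the .97 and .98 values reported for the SSI indicate very high internal consistency.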

There is also evidence to support the validity of the Student Satisfaction Inventory. Convergent validity was assessed by correlating satisfaction scores from the SSI with satisfaction scores from the College Student Satisfaction Questionnaire (CSSQ), another statistically reliable satisfaction instrument.  The Pearson correlation between these two instruments (r=.71; p<.00001) is high enough to indicate that the SSI's satisfaction scores measure the same satisfaction construct as the CSSQ's scores, and yet the correlation is low enough to indicate that there are distinct differences between the two instruments.
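The Pearson correlation used in this convergent-validity check can be sketched as follows. The paired satisfaction scores here are invented for illustration and do not come from either instrument:

```python
# Minimal sketch of the Pearson correlation coefficient (r) used in the
# convergent-validity comparison above.  The paired scores are invented.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ssi_scores  = [5.1, 4.3, 6.0, 3.8, 5.5, 4.9]   # hypothetical SSI satisfaction
cssq_scores = [4.8, 4.1, 5.7, 4.0, 5.2, 4.6]   # hypothetical CSSQ satisfaction
print(f"r = {pearson_r(ssi_scores, cssq_scores):.2f}")
```

A value such as the reported r = .71 falls between these extremes: high enough to show the two instruments tap the same construct, low enough to show they are not interchangeable.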

How to Interpret the Results

As you review the results, it is important to consider all of the information provided.

Three areas of measurement are especially significant:  importance, satisfaction and performance gaps (the difference between importance and satisfaction). Focusing on only one area of measurement, such as performance gaps, is likely to result in overlooking areas of the campus experience that your students value most.  A combination of scores provides the most dynamic information for institutions to consider when developing an action agenda.

Note added by CCRI:  Along with each satisfaction mean, Noel-Levitz also reports a standard deviation (SD).  This is a measure of variability: it indicates how spread out the survey responses are for each question.  The larger the standard deviation, the more spread out the response ratings.  In a normal distribution, about 68% of all scores are expected to fall within one standard deviation of the mean, and about 95% within two standard deviations.
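As a minimal sketch of the summary statistics described in this note, the following computes a mean and sample standard deviation for an invented set of seven-point satisfaction ratings:

```python
# Minimal sketch of the mean / standard deviation summary described above,
# using an invented set of 7-point satisfaction ratings.
import statistics

ratings = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)        # sample standard deviation

# Under a normal distribution, roughly 68% of ratings fall within
# mean +/- 1 SD and roughly 95% within mean +/- 2 SD.
print(f"mean = {mean:.2f}, SD = {sd:.2f}")
print(f"~68% band: {mean - sd:.2f} to {mean + sd:.2f}")
```

A larger SD on a real SSI item would widen these bands, signaling more disagreement among respondents about that item.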
