MIT Faculty Newsletter  
Vol. XXIII No. 5
May / June 2011

MIT Subject Evaluations Now Online

Diana Henderson

As of fall 2010, MIT's subject evaluation system is entirely online, and a new Web-based "Who's Teaching What" (WTW) application now improves both the quality of teaching data and the ease with which it is collected.

The applications were developed by a team from the Office of the Dean for Undergraduate Education (DUE) and from Information Services and Technology (IS&T) in a four-year project that is being completed this spring. The Office of Faculty Support (OFS) within DUE administers the new systems. As part of Digital MIT, the move eliminates nearly 40,000 paper forms each term.

As a faculty member, you will see a number of benefits:

  • Everyone involved in a subject can be evaluated – there is no longer a limit on the number of instructors who can be evaluated.
  • You can add your own questions and reuse those questions in future terms. Upon request your departmental evaluation coordinator can send you a preview of the evaluation as it would appear to students, including both the Institute set of questions and your own questions.
  • Student comments are collected for each instructor as well as the subject as a whole.
  • Evaluations can be run for half-term subjects as well as full-term subjects.
  • During evaluation periods you can monitor response rates, and your evaluation coordinator can send reminders to students who have not yet responded.
  • Reports are available immediately after the grading period at the end of the term.
  • You can search for reports by subject number or name; evaluations are retrievable 24/7.
  • Report data can be filtered by student type (credit or listener), by subject number (for joint or meets-with subjects), and by section (if section assignments have been recorded in WTW); a schematic sketch of this kind of filtering appears after this list.
  • Reports for faculty and department administrators include comments on your teaching and on the subject as a whole, frequency distributions, and each respondent's full set of answers. (Other members of the MIT community continue to see only summaries of quantitative data.)
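
The report filtering noted above amounts to simple queries over per-response records. The Python sketch below illustrates the idea; the record fields, function names, and subject number are hypothetical illustrations, not the actual WTW schema:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class EvaluationResponse:
        # One student's response to one subject (hypothetical schema).
        subject_number: str      # e.g., a joint or meets-with number
        section: Optional[str]   # None if sections were not recorded in WTW
        student_type: str        # "credit" or "listener"
        overall_rating: int      # 1-7 scale

    def filter_responses(responses: List[EvaluationResponse],
                         student_type: Optional[str] = None,
                         subject_number: Optional[str] = None,
                         section: Optional[str] = None) -> List[EvaluationResponse]:
        # Keep only responses that match every filter actually supplied.
        kept = []
        for r in responses:
            if student_type is not None and r.student_type != student_type:
                continue
            if subject_number is not None and r.subject_number != subject_number:
                continue
            if section is not None and r.section != section:
                continue
            kept.append(r)
        return kept

    # e.g., ratings from credit students in one subject of a meets-with pair:
    # credit_only = filter_responses(all_responses,
    #                                student_type="credit",
    #                                subject_number="18.100B")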

Fall 2010 End-of-term Evaluations by the Numbers

744:     Subjects flagged for evaluation in Fall 2010 end-of-term evaluations
32:      Departments participating
1,331:   Individual instructors flagged for evaluation
62%:     Average response rate per subject
5.8:     Average overall rating of subject (on a scale of 1-7)
5.9:     Average overall rating of instructor (on a scale of 1-7)
427:     Additional questions added to surveys by departments and instructors
4:       Types of questions added (rating scale, open-ended, numeric, multiple-choice)
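
A note on the response-rate figure: "average response rate per subject" weights every subject equally, which is not the same as one pooled rate over all enrolled students. A small illustration of the distinction, with invented enrollment numbers:

    # Hypothetical (enrolled, responded) counts for three subjects.
    subjects = [(150, 75), (30, 27), (12, 9)]

    # Average of per-subject rates: each subject counts equally.
    per_subject = sum(resp / enr for enr, resp in subjects) / len(subjects)

    # Pooled rate: each student counts equally, so large subjects dominate.
    pooled = sum(resp for _, resp in subjects) / sum(enr for enr, _ in subjects)

    print(f"average per-subject rate: {per_subject:.0%}")  # 72%
    print(f"pooled rate: {pooled:.0%}")                    # 58%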


Response Rates and Other Challenges

Throughout the pilot phase, feedback was solicited from many sources:

  • The Subject Evaluation Advisory Group (comprising faculty, administrators, and students, and convened by me)
  • Students from the Student Committee on Educational Policy (SCEP)
  • Department Heads
  • Undergraduate Officers
  • The Committee on the Undergraduate Program
  • Evaluation coordinators (departmental academic administrators) and faculty contacts within each department
  • Students and instructors participating in evaluations

The most frequent issue raised by faculty is how to improve response rates. MIT’s average per-subject response rate has dropped approximately 10-15% since going online. This is consistent with online systems at other universities, except for those that withhold the release of final grades until the student has completed his/her evaluations.

Some have suggested more positive incentives, such as giving extra credit or holding a raffle. For extra credit to be awarded, the professor must know which students have completed the evaluation; this would require a change of policy, since anonymity could be compromised, particularly in a small class. The central system has so far not offered raffle prizes because research has shown that external rewards can crowd out intrinsic motivations, such as helping other students select courses and providing valuable feedback to instructors.
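
For concreteness, one general design that is sometimes discussed in this context (a sketch of the idea only, not a description of MIT's system or of any planned policy) stores completion status separately from response content, so that completion could be verified without linking any student to what he or she wrote:

    import uuid

    class EvaluationStore:
        # Toy store that decouples WHO responded from WHAT was said.
        def __init__(self):
            self._completed = set()   # student IDs, for completion checks only
            self._responses = []      # anonymous records; no student ID kept

        def submit(self, student_id, rating, comment):
            # Record completion for verification purposes...
            self._completed.add(student_id)
            # ...but file the response under a random token with no link
            # back to the student.
            self._responses.append({"token": uuid.uuid4().hex,
                                    "rating": rating,
                                    "comment": comment})

        def has_completed(self, student_id):
            # All an extra-credit check would reveal: yes or no.
            return student_id in self._completed

        def all_responses(self):
            # What reports draw on: responses with no student identifiers.
            return list(self._responses)

Even with such a separation, the small-class concern remains: with only a handful of respondents, the content of a comment can identify its author, which is why this is a policy question and not merely a technical one.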

Students frequently have suggested that we reduce the number of standard questions – in their words, many questions are "abstract," "redundant," or "confusing." Doing so would free up space for more relevant questions from departments and instructors, and students are more likely to respond to questions that matter to them.

Students also would like to see the end-of-term evaluation period extend through finals week. The current faculty view is that if evaluations were extended through finals, responses from some students might be excessively colored by the exams or by the grades the students receive.

The suggestions noted above would require approval from a faculty committee. The chairs of the Committee on the Undergraduate Program (CUP) and the Faculty Policy Committee (FPC) have consulted about the possibilities and the proper constitution of such a committee, which could begin work as early as fall 2011. Thus far, the Subject Evaluation Advisory Group, representing the five Schools and including experts in surveys and statistics, has played an invaluable role in providing good counsel.

Based on the experience of the pilot phase, here is what we have found helps boost response rates:

  • Faculty reminders are essential. Students have told us that when faculty stress the importance of evaluations and communicate how past results have changed the way they structure classes, it makes a strong impact.
  • Have students complete evaluations in class. It is still possible to run the evaluations during class time, and many faculty who do so achieve excellent response rates. Students have indicated that they would be willing to bring computers to class in order to complete the survey.
  • Send carefully spaced reminder e-mails. The Office of Faculty Support sends e-mail notices to non-respondents every few days during the evaluation period. In addition, subject evaluation coordinators have the option of sending their own reminders to non-respondents in selected subjects within their department (without revealing who those students are). OFS alerts the subject evaluation coordinators to its reminder schedule to help avoid multiple e-mails being sent on the same day. Each time a reminder is sent out, there is a corresponding spike in the response rate. Departments that choose to send their own reminders tend to get higher response rates than those that don’t.
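
The reminder mechanics described above can be pictured roughly as follows; the function, field names, and three-day interval are assumptions for illustration, not the actual OFS tooling:

    from datetime import date, timedelta

    REMINDER_INTERVAL = timedelta(days=3)  # "every few days"; spacing assumed

    def students_to_remind(enrolled, completed, last_reminder, today):
        # Return non-respondents due for a reminder, or nothing if a
        # reminder went out too recently (avoids stacked same-day e-mails).
        if today - last_reminder < REMINDER_INTERVAL:
            return set()
        return set(enrolled) - set(completed)

    # The resulting set goes straight to the mailer; a coordinator can
    # trigger reminders without ever seeing who the non-respondents are.
    due = students_to_remind({"s1", "s2", "s3"}, {"s2"},
                             last_reminder=date(2010, 12, 1),
                             today=date(2010, 12, 6))
    print(due)  # {'s1', 's3'} (set ordering may vary)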

Please feel free to contact the project team or me with any suggestions, concerns or questions you might have.

For more information

Project Website: web.mit.edu/se-project
Project e-mail: se-wtw@mit.edu
