MIT Faculty Newsletter  
Vol. XX No. 3
January / February 2008

How Do We Know if Students are Learning?

Lori Breslow, John Lienhard, Barbara Masi, Warren Seering, Franz Ulm

In the past decade, the issue of how best to assess student learning has been at the forefront of the national conversation about education. Discussions have intensified about how to ascertain what students have learned as a result of their undergraduate education. The good news is that the discipline of engineering has been ahead of the curve.

In 1998, the Accreditation Board for Engineering and Technology (ABET) adopted Engineering Criteria 2000, which mandated that engineering programs present data to show achievement of learning outcomes. Having just completed the second cycle under the new rules, the School of Engineering’s departments, with the support of the School’s Director of Education Innovation and Assessment (EIA), have developed models and tools to improve the assessment of student learning that fit the unique needs of each department.

In addition to this effort, faculty who have been involved in educational innovation at MIT have collaborated with educational researchers, primarily from MIT’s Teaching and Learning Laboratory (TLL), to devise ways to assess the success of those advances. Those initiatives have also contributed to the storehouse of knowledge now available to the MIT community about how to evaluate what students are learning, what contributes to their educational success, and what can be done to strengthen teaching and learning at the Institute.

A Mini-Assessment Primer

First, a few words about assessment methods! Grades are just the beginning of the assessment of student learning. They are excellent measures of student performance; however, it could be argued that if grades are not carefully aligned with subject or program learning outcomes, they may only generally reflect student learning while disguising areas where individual or overall student performance is weak. Using a variety of assessment methods, by contrast, allows for a more nuanced portrait of student learning.

The table shows examples of other assessment methodologies, categorized, first, as either “direct” or “indirect” measures. Indirect measures, such as student attitudinal surveys or the number of students progressing to advanced degrees, allow stakeholders to infer the effectiveness of educational efforts. However, the data gathered from indirect measures cannot report with precision exactly what students have learned, or the skills they have acquired as a result of their education. Direct measures, such as a rigorous analysis of theses or capstone design projects, provide firmer evidence of students' knowledge and abilities.

Assessment methods can also be classified as internal or external. Internal measures are used before students graduate from a program, while external measures are often administered either just before or after graduation.

Assessment in SoE

The SoE departments’ process for choosing assessment methods for ABET review began with each department's faculty, in collaboration with the SoE education director, reviewing existing program goals and learning outcomes or drafting new ones. They also discussed curriculum issues of particular interest to the department. Using this information, the SoE education director suggested existing assessment tools (other than grades) or drafted new instruments tailored to the program.

One example of a new instrument was the Senior Project Score Sheet used to analyze senior design projects and theses. The score sheet lists detailed program learning outcomes. Faculty and instructors, and sometimes external engineering professionals, scored theses or design work according to each outcome.
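
As a rough illustration of how such a score sheet might be tallied, the short sketch below (in Python) averages reviewer scores for each learning outcome so that weaker outcomes stand out. The outcome names, scores, and scale are hypothetical, not drawn from an actual SoE instrument.

# Hypothetical sketch: averaging reviewer scores per learning outcome
# on a senior-project score sheet. Outcome names and scores are invented
# for illustration; an actual SoE score sheet would differ.
from statistics import mean

# Each reviewer scores every outcome on a 1 (weak) to 5 (strong) scale.
score_sheets = [
    {"design process": 4, "technical communication": 3, "economic context": 2},
    {"design process": 5, "technical communication": 2, "economic context": 3},
    {"design process": 4, "technical communication": 3, "economic context": 2},
]

# Average across reviewers for each outcome; low averages flag outcomes
# where the program may need attention.
for outcome in score_sheets[0]:
    average = mean(sheet[outcome] for sheet in score_sheets)
    print(f"{outcome}: {average:.1f}")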

Individual faculty members were asked to complete three brief tasks: first, to identify the program learning outcomes covered in their subject; second, to write subject learning outcomes using the program outcomes as a guide; and third, at the end of the term, to review student work against those subject learning outcomes.

In one example of a program plan, faculty in the Department of Civil and Environmental Engineering’s new undergraduate program hoped to discern how well its pilot subjects were working. Feedback from alumni was used to confirm program goals. For example, alumni noted the need to add material to give students a better sense of complex, large-scale problem solving as well as global and economic contexts. Custom subject-level surveys at mid-term permitted retooling before term’s end. Custom end-of-term surveys provided annual benchmarking of program goals and outcomes. Focus groups provided the needed level of detail for identifying areas for improvement.

The Department of Mechanical Engineering’s program assessment plan illustrates the value of joining program-level and subject-level assessment. At the program level, a longitudinal review of alumni and senior survey learning-outcome data from 2000 to 2006 suggested specific areas for improvement. One area in which the program was found to be weak, for instance, was technical communication, so subjects addressing communication skills were targeted for improvement.

At the subject level, instructors developed a set of learning outcomes that were aligned with the program-level outcomes. In a simple tabular format, instructors could track each term how students performed on each learning outcome (using assignments and tests), and whether any instructional changes needed to be made. If concerns about a program learning outcome arose across several subjects, a program-level change could be made. MechE also developed an online subject evaluation form; not only do subject data reach instructors more quickly, but program officers can look at valuable student comments across subjects.
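
A minimal sketch of that kind of tabulation is shown below (in Python); the subject numbers, outcome names, scores, and thresholds are invented for illustration. The idea is simply to flag a program learning outcome for review when several subjects report weak student performance on it.

# Hypothetical sketch: per-subject tracking of learning-outcome performance,
# with a flag raised when several subjects report weakness on the same
# program outcome. All names and numbers are invented.

# Average student performance (0-100) on each outcome, reported per subject.
subject_results = {
    "2.001": {"modeling": 82, "technical communication": 68},
    "2.003": {"modeling": 79, "technical communication": 71},
    "2.009": {"design": 88, "technical communication": 70},
}

THRESHOLD = 75      # below this, an outcome counts as weak in a subject
MIN_SUBJECTS = 2    # weakness in this many subjects suggests a program-level review

weak_counts = {}
for outcomes in subject_results.values():
    for outcome, score in outcomes.items():
        if score < THRESHOLD:
            weak_counts[outcome] = weak_counts.get(outcome, 0) + 1

for outcome, count in weak_counts.items():
    if count >= MIN_SUBJECTS:
        print(f"Consider a program-level change for: {outcome} (weak in {count} subjects)")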


Assessing Educational Innovation at MIT

One example of an assessment of educational innovation was the research undertaken to understand the strengths (and weaknesses) of experiments carried out by the Mechanical Engineering faculty to bring small-group pedagogies to core curriculum subjects. Two dozen Course 2 faculty, along with seven faculty members from other departments (particularly Aeronautics and Astronautics and Mathematics), participated in this effort, funded by the Cambridge-MIT Institute, from 2004 to 2006.

In general, two types of small-group teaching were tried: Students were put into small groups in their recitations, or students met in groups of four or five with a graduate student or faculty member in place of their recitation. In both cases, the overarching purpose of the small group was to help students understand concepts presented in lecture and to apply those concepts in solving complex problems. Another facet of these experiments was to ask students to present solutions to problems in their small group as a way both to master technical material and to practice oral communication.

The assessment of these experiments included four surveys, 200 student interviews, a number of focus groups, a comparison of exam grades, and the mining of alumni data.

One of the surveys, the Small Group Survey (SGS), was created specifically for this effort by a TLL educational researcher. The SGS is one of a group of surveys developed by TLL that asks students to identify the extent to which a new pedagogical practice or technology has impacted their learning.

The data produced by the variety of assessment methods used to research these pedagogical experiments provide insights into how students experience different teaching methods, and contribute to our understanding of how MIT students learn best.

The Value of Assessment

A valuable outcome of the kind of assessment undertaken by SoE for ABET accreditation is that the departments have been able to use the data gathered to identify particularly effective educational activities, as well as to pinpoint areas in their undergraduate curriculum in need of improvement. Similarly, assessment of educational innovation helps the Institute to understand how such efforts contribute to improving the overall educational enterprise. In both situations, faculty, working in collaboration with educational researchers, control the goals and learning outcomes assessed, as well as the amount of time and effort needed for each assessment initiative.

For more information on assessment at MIT

Information on SoE assessment plans and tools can be found on the SoE Education and Assessment Website, https://web.mit.edu/engineering/eia (requires MIT certificate).

General information on assessment and evaluation can be found on the TLL Website,
http://web.mit.edu/tll/assessment-evaluation/index.html.

A summary of the pedagogical experiments in Mechanical Engineering and the results of their assessment can be found at http://web.mit.edu/tll/research/studies/TutorialsMechE.rtf.
