
Teach Talk

Strategic Assessment at MIT

A Coordinated Approach to Assessment is Being Developed at the Teaching and Learning Laboratory

Lori Breslow

Assessment can be a dirty word in higher education. To some faculty, it implies someone – the department head, a school dean, a visiting committee – is looking over their shoulder to evaluate the quality of their work in the classroom, with the possibility, of course, that they will be found wanting. Assessment efforts can be equally problematic for those doing the assessing. They are often left with a bewildering pile of data that is hard to interpret and even harder to use as a springboard for educational change. And both “assessors” and “assessees” often harbor the suspicion that current methods of assessment cannot possibly judge with any degree of accuracy how well students have learned, or how successful a curriculum is at reaching its stated goals.

But there is no reason, in fact, to think of assessment as an adversarial process, a call to judgment, or a time sink. Actually, educational assessment is a first cousin to the kind of research that goes on throughout MIT, for it, too, is scholarship that results in the expansion of knowledge and innovation.

The point of assessment, write Gloria M. Rogers and Jean K. Sando in Stepping Ahead: An Assessment Plan Development Guide, is to “improve, to inform, and/or to prove. The results of an assessment process,” they continue, “should provide information which can be used to determine whether or not intended outcomes are being achieved and how the project can be improved.” [p. 1] Assessment is the process of generating hypotheses about the functioning of an educational system, strategy, technique, or tool (the thing being assessed can be, for example, the individual learner, the classroom environment, a pedagogical method, the instructor, or a department-wide curriculum); testing those hypotheses by gathering data through the use of accepted methodologies; and feeding the results back into the system in order to strengthen how it functions.


Devising a Strategic Approach

At the Teaching and Learning Laboratory (TLL), we are all too aware of both the possible hazards and the potential benefits of educational assessment. [If you are not familiar with TLL, please see our Web page at http://web.mit.edu/tll. A part of the Office of the Dean for Undergraduate Education, TLL provides a comprehensive range of services to help faculty, students, and administrators improve teaching and learning at the Institute.] TLL has been charged with overseeing, aiding, and, in some cases, implementing the assessment efforts of many of the new educational initiatives that have begun at the Institute in the last several years. These include, for example, the Cambridge-MIT Institute (CMI), the Communication Requirement, and the Residence-Based Advising Program. However, TLL is most involved in the assessment and evaluation of the projects being supported by the Microsoft/iCampus and d’Arbeloff funds.

Coordinated assessment efforts at TLL began approximately a year ago with the arrival from Northwestern University of Dr. John Newman, TLL’s associate director for Assessment and Evaluation. Other TLL staff members who are involved in assessment are Dr. Alberta Lipson, associate director for Educational Studies, and Cindy Dernay-Tervalon, staff associate for Research and Development. Each staff member is directly responsible for assessing one or more subjects or educational experiments, as well as consulting with PIs or faculty members responsible for other initiatives. TLL also collaborates with other assessment experts on campus, with assessment/evaluation consultants working with individual PIs, and with graduate students in education who are using MIT projects for their field research.

We have been given the opportunity and challenge of creating a coordinated assessment program. Throughout its history, of course, the Institute has continually assessed its educational work, but perhaps not in quite as systematic a way as it seeks to do now. Staff members at TLL, along with members of the MIT faculty, administration, and staff, have spent the last year devising an approach to assessment that is compatible with the idea of assessment as a scholarly, research-oriented activity. We have sought to create a strategic approach to assessment that combines individual projects into themes, which, in turn, feed into a research agenda. (Please see the figure.) Let me explain this approach in more detail.


Identifying Common Themes

There are over 30 projects underway at MIT that are being funded by either iCampus or d’Arbeloff grants. Together, they represent a rich array of educational experiments. Faculty members, administrators, students, and staff are working to incorporate new educational technologies into the classroom; structure new kinds of relationships between students and faculty, among students, and between students and alumni; employ a wider range of pedagogical methods; and develop new tools to evaluate the efficacy of these efforts. Some of these experiments will doubtless work better than others. But studying the strengths and weaknesses of as many as possible will provide us with a wealth of information to help guide the future direction of educational innovation.

At the urging of EECS Professor Hal Abelson, a member of the iCampus Joint Steering Committee, a team of MIT faculty and administrators has been working to group the iCampus projects into “themes” that link them conceptually according to commonalities in objectives, technology, or pedagogical method – or some combination thereof. These seven themes are: (1) teaching life-long lessons through project-based learning; (2) using multi-media to expand knowledge; (3) creating learning communities with alumni/mentor participation; (4) employing active learning alternatives in the classroom; (5) producing on-line alternatives to lectures; (6) permitting remote acquisition of real-time data; (7) developing new methodologies for assessment of educational innovation.

“My concern,” Abelson has said, “is that at the end of some period of time we know more than simply how the individual projects fared. I want us to be able to say something about how we can provide MIT students with an overall higher quality education than we are giving them now.”

Each theme encompasses at least several projects. By linking them conceptually, we can gain synergy of effort. By comparing the assessment data that comes out of one project with the data from other projects in the same group, we will get a clearer picture of which innovations are worth exporting to other courses or learning situations, and which are not. Finally, coordination of methods and measures will provide credible, replicated knowledge that can be disseminated to the wider educational research community.

This is not the place to go into a detailed description of each theme, but to impart a better understanding of what a theme entails, I will briefly describe just one.

The idea behind “teaching life-long lessons through project-based learning” is that there is a core of skills and capabilities that MIT students should be developing from the very beginning of their careers at the Institute through their four years of study. Examples of these skills include communicating effectively (using both the written and spoken word); finding credible information relevant to a particular topic or task; managing time efficiently; working well as part of a team; and solving complex, open-ended problems. The philosophy behind the two d’Arbeloff experiments that best represent this pedagogical approach – Mission 200X (the X stands for the year the subject is given) and Public Service Design – is that these skills can be targeted for development within a subject in addition to the content that is being taught.

For example, in Mission 2005, whose official subject name is “Solving Complex Problems” (12.000), teams of students this past fall tackled the problem of building an underwater research facility both on a coral reef and in a deepwater environment. Solving that problem required students to cull information from a number of different disciplines, cooperate with one another on teams that tackled smaller pieces of the problem, and coordinate their work to devise a comprehensive plan. The final design was posted on a Mission 2005 Website and presented to a panel of outside experts.

Dr. Lipson has been working alongside Professor Kip Hodges, who teaches Mission 200X, to assess both last year’s Mission 2004 and Mission 2005. She uses a variety of methodologies in that work – primarily participant observation, focus groups, and surveys. A part of the assessment plan is to follow the students who have taken the Mission 200X courses longitudinally throughout their careers at MIT and perhaps beyond. If subjects like Mission 200X meet their objectives, we hope to be able to identify the pedagogical variables that bring about that result so that those techniques can be adopted in other subjects.

In the same way, by assessing individual projects united by a common theme, we hope to learn whether online lectures are as effective as live lectures; how electronic communication helps or hinders the formation of a community of learners; and whether having students engage in hands-on activities in the classroom increases conceptual understanding. These assessment objectives are framed very broadly, I realize. Our work will entail refining them to make their answers useful to the MIT community.


Creating a Research Agenda

As if all that were not ambitious enough, our long-range goal is to do the kind of work that will allow MIT to contribute to research on how the introduction of educational technology affects teaching and learning. To that end, a team of assessment experts from both MIT and Microsoft, along with four UROP students, has spent several months exploring the state of knowledge in that area and identifying the interesting, important questions that need to be addressed. We settled on three areas for study. As the figure shows, our research agenda is to study the impact of educational technology on conceptual learning, student engagement and student interaction, and resource allocation, with a particular emphasis on faculty time and effort. Let me again briefly describe each.

The impact of educational technology on conceptual learning. One of the weaknesses often cited in science and engineering education is that students are taught a relatively narrow set of skills. Often called “algorithmic learning,” this skill set, at its worst, entails memorizing a collection of formulae/equations and trying to determine which can be used to answer questions on a problem set or exam. However, another approach is to focus educational efforts more broadly, teaching students to solve the kind of novel problems they will face in their professional work. This is often called “conceptual learning.”

More specifically, conceptual learning means students should be able to: understand and describe in concrete terms how physical objects, phenomena, systems, or processes behave and how they interact with other objects, phenomena, systems, and processes; understand how mathematical expressions can represent physical objects, phenomena, systems, or processes, their behavior, and their interactions; model various reasoning and problem-solving techniques; pose and solve paradoxes and dilemmas; and transfer material they have learned from the context in which they learned it to other contexts.

On the simplest level, then, our assessment goal is to discover whether or not the use of various educational technologies (e.g., simulations) will add to, detract from, or have no effect on conceptual learning.

The impact of educational technology on student engagement and peer interaction. Student engagement in learning is defined as the extent to which students enjoy, take responsibility for, and participate in learning. Student engagement has three components: (1) behavioral (e.g., does the student attend class regularly?); (2) cognitive (e.g., does the student engage in educational activities with the goal of developing further and deeper understanding?); and (3) affective (e.g., was the student satisfied with the subject and would he/she recommend it to others?).

As with conceptual learning, we are trying to understand the extent to which educational technology enhances or detracts from student engagement. Do educational technologies contribute to students putting forth greater effort? Do they help students to enjoy the content of the course more? Do they aid students in taking more responsibility for their own learning? We are also interested in understanding change over time. If the educational technology is one that requires students to change ingrained ways of learning, for example, it is important to know how long it takes for habits to change, and the process by which that change occurs.

Educational research has shown that college students are satisfied with their college experience when the amount of interaction they have both with their peers and with faculty is significant. (See, for example, Richard Light, Making the Most of College: Students Speak Their Minds. Cambridge, MA: Harvard University Press, 2001.) The debate over whether technology impedes or increases opportunities for communication is a hotly contested one both inside and outside of academia. Our focus is on how educational technology changes interactions, and what the benefits or drawbacks of those changes are. For example, are there aspects of face-to-face interactions that lend themselves to the development of certain skills? If so, is that development stifled by technology? Are there technologies currently not being employed, or ways of using current technologies, that could benefit interactions and, therefore, learning? These are the kinds of questions we will explore.

The impact of educational technology on resource allocation. No one argues with the fact that implementing educational technology takes time and money. But how much time? Whose time? And how much money? The first questions to tackle in this area, then, are essentially accounting ones, and it will be no easy matter to determine the costs associated with developing and implementing educational technology.

The next set of questions can be summed up in one simple question: Are the costs worth it? Exploring the impact of educational technology on conceptual learning, student engagement, and student interactions will help answer that question. But there are also questions related to faculty and institutional concerns. For example, what will be the impact of implementing educational technology on a faculty member’s scholarship, professional reputation, or place in the campus community? Does the faculty member feel more or less engaged with the topical content of the subject when using a new educational technology? Can technology create renewed interest in basic material? And, finally, do students and faculty members have different reasons for wanting or not wanting technology in the educational process? What about administration and staff?

* * *

We realize we have bitten off a lot to chew. Questions need to be sorted, refined, and prioritized. Some will fall by the wayside. Others may occupy us over a long period of time. But we are excited about the intellectual challenges associated with this work, and motivated by the contributions it can make to improving undergraduate education at MIT. We will continue to report back to you about what we discover.
