Trends in Science Education


John Belcher
Professor of Physics
This article originally appeared in the MIT Faculty Newsletter, Vol. IX, No. 1, September 1996.

I lectured 8.02, Physics II, Electromagnetism (the 750-student, on-term version) a few years back. As a result, I have become interested in the conceptual difficulties that freshmen have when they encounter MIT's core science subjects. So, after 25 years on the faculty here, I have for the first time in my professional career attended two conferences focused exclusively on education: the International Conference on Undergraduate Physics Education (ICUPE), July 31-August 3, 1996, and the meeting of the American Association of Physics Teachers (AAPT), August 5-10, 1996, both held at the University of Maryland at College Park.

Although much of what I learned at these conferences is old hat to many people at the Institute, most of it was new to me. The context is physics education, but much of it applies to science education in any of the core disciplines; these are issues we all deal with. I think it is worthwhile to give a brief summary of what I found to be of interest at these meetings. A lot of this material is on-line; to reach the on-line resources, see the version of this article at http://web.mit.edu/jbelcher/www/trends.html.

Given the extensive history of educational reform, both in this country and elsewhere, my preconception before I went to these meetings was that whatever can be done in physics education probably has already been done. But I was wrong. This is a lively field, with a theoretical underpinning based on general research in education, with new modes of teaching (many based on advanced technology), and with a variety of assessment tools used to evaluate the effectiveness of teaching methods. Both meetings had a number of workshops illustrating various teaching innovations, some of which I will mention below. The area I found most interesting is research into methods used in the general science education of engineers and scientists (i.e., what we do in our freshman core science subjects), and that is what I will focus on here.

What does research in education have to say about teaching methodology in the freshman year? Over the last decade, a number of studies seem to show that the lecture/recitation format in its traditional form is not very effective in getting conceptual material across. Although the format has some success in teaching problem solving, it leaves glaring holes in conceptual understanding. There is quantitative weight to this statement: there are a number of physics education research groups, both in the US and abroad (many with homepages), which study these issues, in part by using assessment tests given both before and after courses (in mechanics, for example). One such test is the Force Concept Inventory (FCI) (The Physics Teacher 30, 141-158, 1992). Such tests have been used in conjunction with a number of physics courses across the country, including courses at Harvard.

A problem typical of these assessment tests is the following. A ball is thrown straight upward. Disregarding any effects of the air, the force(s) acting on the ball from the moment it leaves the hand until it returns to the ground is (are):

(a) its weight vertically downward along with a steadily decreasing upward force;
(b) a steadily decreasing upward force until it reaches its highest point, after which there is a steadily increasing downward force of gravity;
(c) a constant downward force of gravity along with an upward force that steadily decreases until the ball reaches its highest point, after which there is only the constant downward force of gravity;
(d) a constant downward force of gravity only.

The answer to this question is (d); many students will give (c) as the correct answer (why do you think this is so?). The interesting result is not that a fair number of students answer this question incorrectly before they take a course like 8.01, but that a substantial number still get it wrong after taking a course like 8.01. That is, the standard course in the standard format does not change the student's basic conceptual framework about mechanics very much. This is not because the students are dumb. It is because the standard course we teach is not effective at changing preconceptions or misconceptions that the students bring with them.
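To spell out why (d) is correct (with $m$ the ball's mass, $g$ the gravitational acceleration, and $v_0$ the launch speed, standard symbols rather than anything from the FCI itself), the Newtonian argument is one step:

```latex
% Once the ball leaves the hand, the only force on it (neglecting air) is
% gravity, so Newton's second law gives, at every instant of the flight,
\[
  F_{\mathrm{net}} = -mg = \text{constant},
  \qquad
  a = \frac{F_{\mathrm{net}}}{m} = -g .
\]
% The velocity v(t) = v_0 - g t is upward at first, zero at the apex, and
% downward afterward, but the force never changes. Answer (c) mistakes the
% ball's upward velocity for a lingering upward "force of the hand": the
% classic "impetus" misconception.
```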

Why is this so? An answer to that question is contained in the article Implications of Cognitive Studies for Teaching Physics by Edward Redish (American Journal of Physics 62, 796-803, 1994) (this article can be found on-line; see the URL given above). Cognitive studies are about how people understand and learn. Constructivism in cognitive studies postulates that: (1) people tend to organize their experiences and observations into patterns or mental models; the student does not come to us as a blank slate; (2) it is reasonably easy for the student to learn something that matches or extends an existing mental model; (3) it is very difficult to change an established mental model substantially; (4) different people have different styles of learning.

There is a wealth of detail in the article by Redish that expands on these points and cites the relevant literature, and I strongly recommend it. In particular, with regard to different learning styles, there is a passage from Redish, quoted below, that we should all keep in mind. It applies to any faculty member teaching introductory courses in the sciences (not only physics), especially at a place like MIT, where the faculty have been outstandingly successful in their own disciplines from an early age.

"Our own personal experiences may be a very poor guide for telling us what to do for our students. Physics teachers are an atypical group. We selected ourselves at an early stage in our careers because we liked physics for one reason or another. This already selects a fairly small subclass of learning styles from the overall panoply of possibilities. We are then trained for approximately a dozen years before we start teaching our own classes. This training stretches us even further from the style of approach of the "typical" student. Is it any wonder why we don't understand most of our beginning students and they don't understand us?".

If we accept the fact that our introductory courses do not get basic conceptual ideas across to many of our students, what do we do about it? The pervasive answer in the community at these two meetings is the abandonment of an exclusive emphasis on problem solving, and a modification of the traditional lecture format to permit teaching of underlying concepts. "Teaching of underlying concepts" usually means some sort of active interaction between student and teacher, or student and student, frequently mediated by technology, as opposed to the passive "telling" mode of traditional lectures. There are well-documented examples of approaches along these lines which are much more successful in getting across basic conceptual material than the standard lecture format. "Successful" is again defined quantitatively in terms of the results of standardized assessment tools such as the FCI mentioned above.

For example, there is the Peer Instruction approach of Eric Mazur at Harvard University. In this approach, used in a one-year calculus-based introductory physics course for science concentrators, "...the lectures are broken in 12-minute long sections. Each section starts with about 7 minutes of lecturing on one of the fundamental concepts to be covered. This mini-lecture is then followed by a short multiple-choice question that tests the students' understanding. After one minute the students record an answer and are then asked to turn to their neighbors to try and convince them of their answers. After another minute or so, the students are asked to reconsider their answer and record it again. A poll is taken so the instructor can decide whether to move on to the next concept, or to continue on the same. This process repeats until the end of the class...". The polls are taken electronically, with the results instantaneously posted in histogram form visible to the entire class. Assessment data show a dramatic gain in student performance compared to that in the same course taught in the traditional lecture format.
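To make concrete what such a polling system has to do, here is a minimal sketch in Python (my own illustration, not code from any classroom system; the 35%/70% decision thresholds are assumptions of mine, not Mazur's published numbers):

```python
from collections import Counter

def tally(votes, choices="abcd"):
    """Count one round of multiple-choice votes and print a text histogram."""
    counts = Counter(votes)
    total = len(votes)
    for choice in choices:
        n = counts.get(choice, 0)
        print(f"{choice}: {'#' * n} ({n}/{total})")
    return counts

def decide(counts, correct, total, low=0.35, high=0.70):
    """Illustrative decision rule: move on if most students got it, re-explain
    if few did, and otherwise trigger neighbor discussion and a second vote.
    The thresholds are hypothetical, chosen only for this sketch."""
    frac = counts.get(correct, 0) / total
    if frac >= high:
        return "move on to the next concept"
    if frac <= low:
        return "re-explain the concept"
    return "peer discussion, then re-vote"

# Example: a first vote on the ball-toss question above (correct answer 'd').
votes = list("ccdcadbcdccdcdda")
counts = tally(votes)
print(decide(counts, "d", len(votes)))   # -> peer discussion, then re-vote
```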

There are other such efforts involving innovative teaching methods, which I will reference here but not detail: the CUPLE (Comprehensive Unified Physics Learning Environment) approach of Jack Wilson of Rensselaer Polytechnic Institute; the Microcomputer-Based Laboratory (MBL) approach of Ron Thornton of Tufts University; the Physics by Inquiry approach of Lillian McDermott of the University of Washington; the Workshop Physics approach of Priscilla Laws of Dickinson College; a workbook approach to teaching Electric and Magnetic Interactions using integrated desktop experiments, from Ruth Chabay and Bruce Sherwood of Carnegie Mellon University; and the RealTime Physics laboratory approach, which features the comprehensive use of microcomputers for data collection and analysis, by Sokoloff, Laws, and Thornton, among others.

Most of these approaches use assessment tools to measure in some quantitative fashion the effectiveness of the pedagogy. Many of them involve the use of technology, but it is important to note that this use is frequently to facilitate faculty-student or student-student interaction, not to do away with it. For example, the Peer Instruction approach uses interconnected small computers which provide immediate feedback to the students and to the instructor about the range of answers, which then becomes the focus of small-group discussions. Other approaches mentioned above also make use of computers, e.g., digital video processing as a means of studying realistic examples of Newtonian mechanics, motion sensors in conjunction with computers to simultaneously measure and graph such physical quantities as position, velocity, and acceleration, and so on, all in an interactive laboratory environment.
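As a rough illustration of the kind of computation behind those motion-sensor displays (a sketch under my own assumptions, not code from any of the packages named above), velocity and acceleration follow from finite differences of the sampled positions:

```python
# Given position samples x[i] taken at a fixed interval dt (as from a sonic
# motion sensor), estimate velocity and acceleration at the interior samples
# by central finite differences. Illustrative only.

def central_differences(x, dt):
    """Return (velocity, acceleration) lists for interior samples of x."""
    v = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
    a = [(x[i + 1] - 2 * x[i] + x[i - 1]) / dt ** 2
         for i in range(1, len(x) - 1)]
    return v, a

# Example: a ball tossed upward at 5 m/s, "sampled" at 20 Hz. The recovered
# acceleration is -9.8 m/s^2 throughout the flight, rising or falling, which
# is precisely the point of the FCI question discussed earlier.
dt, g, v0 = 0.05, 9.8, 5.0
x = [v0 * (i * dt) - 0.5 * g * (i * dt) ** 2 for i in range(21)]
v, a = central_differences(x, dt)
print(round(v[0], 3), round(a[0], 3))    # 4.51 -9.8
```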

The use of these approaches has been successful in a variety of venues. Rutgers University has a class, Extended Analytic Physics, which is a first-year calculus-based physics course for students who plan to become engineers, but who enter with poor preparation in physics and mathematics. The lectures in this course use an anonymous student response system similar to the Harvard Peer Instruction system. The class also has a weekly workshop that is a hands-on group activity, partially using the RealTime Physics MBL-based laboratory mentioned above. The Extended Analytic Physics students have about twice the contact hours of the mainline Analytic Physics students, with smaller classes and more diverse teaching methods.

This course and courses like it at Rutgers have been outstandingly successful. For example, the retention rate of minorities in engineering, one component of such courses, has gone from 9% in 1985, before such courses were introduced, to 50% in 1995. At the end of their first year, the students in Extended Analytic Physics (about 120 students) take the same final as the parallel Analytic Physics students (about 450 students), and on average do better on that final than the mainline students. These are remarkable results; someone at Rutgers is doing something right. In student interviews, all of the Extended Analytic Physics students felt that the hands-on, cooperative nature of the weekly workshop was important to their success, as was the anonymous student response system used in lecture, a technology-facilitated innovation. However, the students were also uniform in saying that it was very important to them that the lecturer knew their names. We live in an age of transforming technological advances. Some things do not change, though.

What are the take-home messages of all this? First, there is a lot of research and innovation going on in core science education, much of it using advanced technology to good effect. Second, there is a focus on the use of quantitative assessment tools to see whether what we intend to teach students is what they actually learn. Such tools have been used over the last decade to examine the results of both our traditional approaches and the innovative ones, and by this standard there are innovative approaches that do much better than our traditional approaches. Whether or not we agree with these approaches, or with the assessment tools by which they are judged, we should be aware of them. It is also clear that there is enormous educational potential in emerging technology. We at MIT, of all places, should be involved in, and knowledgeable about, innovations in science education that make effective use of advanced technology.