Teach Talk: MIT Students and Deep Learning
David Hestenes (Hestenes, 1987) has convincingly argued that models form the everyday mental tools that most STEM professionals use. The word model appears 12 times in Freeman et al., and I find it valuable in my professional and teaching life. Models and modeling are central to MIT and explicit in many upper-division subjects, especially in engineering: we use mathematics to make models of the structure and behavior of some well-specified system.
Yet in most GIRs, we fail to impart knowledge about models explicitly:
The result is students who, at the end of introductory subjects, can manipulate the equations, i.e., “run the model” in the sense of Freeman et al. – often without being able to name the model they are applying or knowing its limitations.
For example, in 8.01 the equations F = ma and F = μmg (where μ is the coefficient of friction) are used in the same solution without regard for the very limited applicability of the second formula (it fails for static friction and assumes that the normal force equals mg). Expert scientists and engineers are careful to check the applicability of the model as part of their solution process – the types of systems and circumstances under which the model containing these formulae applies, the model’s limitations, and whether its predictions make sense (NAS13).
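To make such an applicability check concrete, here is a minimal sketch in LaTeX (my own illustration, not an example taken from 8.01; the incline case and the symbols μk, μs, and N are assumptions introduced here):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sliding-friction model and its conditions of applicability
% (illustrative sketch; the incline case is an assumption added here, not an example from the article)
\begin{align*}
F_{\text{kinetic}} &= \mu_k N    && \text{holds only while the surfaces actually slide}\\
F_{\text{static}}  &\le \mu_s N  && \text{static friction obeys an inequality, not an equality}\\
N &= mg                          && \text{only for a horizontal surface with no other vertical forces}\\
N &= mg\cos\theta                && \text{e.g., for a block on an incline of angle } \theta
\end{align*}
\end{document}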
Hestenes and collaborators have developed a pedagogy for physics called “Modeling Instruction” that is explicitly designed to teach the ideas and procedures of modeling reality. In Modeling Instruction, students are guided to discover the basic models in the laboratory and to apply them to problems – a process called “modeling.” This pedagogy leads to very large improvements in students’ scores on both concept inventories and more problem-oriented tests; high school students start lower but finish higher than students taking traditional introductory courses at selective colleges (Hestenes, Wells, & Swackhamer, 1992). A recent review covering ~50 different introductory physics courses (Madsen, McKagan, & Sayre, 2015) documents another benefit of Modeling Instruction: it uniformly improves the expertness of students’ learning attitudes as measured by the Colorado Learning Attitudes about Science Survey (CLASS) – typically by ~11%.
Hestenes’ group started the American Modeling Teachers Association (modelinginstruction.org), which runs two-week workshops on Modeling Instruction that upwards of 10% of all U.S. high school physics teachers have attended; the pedagogy is consequently widely known among high school physics teachers. (This summer there will be a total of ~62 workshops for biology, chemistry, physical science, and physics teachers.) In contrast, only a small percentage of college and university faculty know of it – unfortunate, because modeling pedagogy would give many MIT professors a valuable perspective on reforming their subjects.
As an example, in developing MIT’s three-week Mechanics ReView, my group designed a modeling-based approach to categorizing domain knowledge and problem solving, and found that in addition to improving students’ grades on the final exam by ~1.5 grades, it improved their expert thinking by ~11% as measured by the CLASS. Importantly, these students also showed an improvement in their subsequent 8.02 performance relative to peers who either did not take the ReView or who took another full semester of traditionally taught 8.01 (Pawl, Barrantes, & Pritchard, 2009; Rayyan, Pawl, Barrantes, Teodorescu, & Pritchard, 2010).
The MIT course catalog is based on a list of topics that are taught in each subject. A complementary perspective is to specify the cognitive skills that the student is supposed to learn. Indeed, a cognitive perspective is highly germane to understanding the student difficulties mentioned above. The figure below shows a cognitive hierarchy and indicates the teacher’s relationship to questions at each cognitive level.
The figure divides cognitive knowledge into four categories, shown as overlapping ovals with examples from Newtonian mechanics at the top. These start on the left with a foundation of facts, definitions, and simple concepts, like how to define and measure acceleration. Built on this factual foundation (expressed by the overlap of the ovals) is knowledge of procedures and operations – these are the models. Confronted with an unfamiliar problem, an expert applies strategic knowledge to sort through the known procedures (models) to determine which might be relevant or helpful, then solves the problem using the relevant model(s). At the far right is adaptive expertise, the knowledge and ability to create something new. These categories closely parallel cognitive taxonomies like those of Bloom and Marzano (Marzano & Kendall, 2007; Teodorescu, Bennhold, Feldman, & Medsker, 2013).
This perspective illuminates typical tasks assigned by teachers; these are presented (below the ovals) in different colors depending on whether the problem or task has a known answer, and whether the teacher who poses it intends the student to answer it in a particular way. Let’s categorize our current instructional approach (e.g., in 8.01) through this lens: list the topics in the syllabus, teach this material (mostly concepts and procedures) in serial order (energy one week, momentum the next, angular momentum later, . . .), and give lots of practice (homework) each week, concentrating on the topic of the week. Obviously, this pedagogy is focused on the two leftmost ovals (facts and procedures). Such instruction does not improve students’ strategic knowledge, because they never need to learn how to determine whether momentum is the key to a problem when momentum is this week’s topic.
This facts-and-procedures-based instructional approach allows little opportunity to help students acquire strategic knowledge spanning the whole range of topics in a subject – the ability to organize their knowledge of different facts and procedures so that it can be fluently accessed when confronting an unfamiliar problem. Indeed, of the introductory physics textbooks on my bookshelf, not one explicitly attempts to instill strategic knowledge, for example with a chapter titled something like “How to analyze a new problem to determine which of the previous 12 chapters can help you solve it.” It is not surprising that unfamiliar final exam problems involving several of the studied procedures are considered very difficult by our students. Many of the problems in Freeman et al. bear no clear similarity to any of the weekly homework problems in the subject; hence they expose the students’ lack of strategic knowledge.
Several other perspectives that I find useful are Kahneman’s Type 1 and Type 2 thinking (quick and reactive vs. thoughtful and logical) and its relevance to short concept questions vs. traditional long-form problems; the importance of quick association among relevant domain vocabulary as a measure of knowledge interconnectedness (Gerace, 2001); and defining “understanding a concept” as fluency with, and interrelating of, the representations commonly used with that concept. (For example, motion with constant acceleration might be represented with a table of position vs. time, a formula for velocity vs. time, a strobe picture, or a graph of velocity vs. position.)
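As one minimal sketch of such interrelated representations (my own illustration; the symbols x0, v0, and a are notation assumed here, not taken from the article):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Three equivalent representations of one-dimensional motion with constant acceleration a
% (illustrative only; initial position x_0 and initial velocity v_0 are assumed notation)
\begin{align*}
v(t) &= v_0 + a t                         && \text{velocity vs.\ time: the formula behind a velocity--time graph}\\
x(t) &= x_0 + v_0 t + \tfrac{1}{2} a t^2  && \text{position vs.\ time: the formula behind a table or strobe picture}\\
v^2  &= v_0^2 + 2a\,(x - x_0)             && \text{velocity vs.\ position: the time variable eliminated}
\end{align*}
\end{document}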
Having broadened the list of troubling student deficiencies and offered some perspectives, I now turn to how individual departments can reform our subjects to help students overcome these difficulties.
Freeman et al. and I are distressed that, having done well in our subjects, our students are not reaching the learning outcomes involving skills, habits, and attitudes that many faculty strongly believe are important. I put the blame on our system (shared by other colleges) that defines a subject as a syllabus of topics and subtopics to be taught by an expert in that subject. This teacher-centered description lacks any specification of what is expected of students in terms of skills, habits, or abilities. Thus addressing these deficiencies starts with:
#1 Departments must specify subjects in terms of outcomes expected of students – both learning outcomes with respect to specific topics, and general skills.
Thus, where the current course description lists a topic, e.g., “momentum,” there would be a learning objective such as “identify when momentum is and is not conserved.” This has the advantage that a professor can write a problem that most other department members would agree assesses a particular learning objective. Importantly, learning objectives can address more general learning outcomes than topics – for example, “learn to check solutions using dimensions and limiting cases.” Specifying learning objectives would enable us to emphasize skills and habits widely considered important in the 21st century, such as the 4 C’s: collaboration, communication, critical thinking and problem solving, and creativity (p21.org; NGSS Lead States, 2013).
I also recommend that we move our instructional goals toward the Strategic and Creative cognitive levels, because smartphones and Internet search engines provide instant access to facts and to integrated collections of procedures (Wolfram Alpha, the computational package R, Mathematica, etc.).
For example, we can remove “challenging” algebra-intensive problems from students’ first exposure to a topic, and later add review problems that explicitly require students to say which previously studied topics apply in a given physical situation and why – or give multiple-choice problems whose distractors can be eliminated by dimensional analysis or special cases. Given that, by graduation, freshmen will have forgotten over half of what they learned but did not use regularly (Ebbinghaus, Ruger, & Bussenius, 1913; Pawl, Barrantes, Pritchard, & Mitchell, 2010), yet will have improved on the skills they reused, answer checking, dimensional analysis, and determining what is conserved should receive more emphasis than they do now – especially when reforming subjects like the GIRs, where most students will major in another course.
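To make the distractor-elimination idea concrete, here is a minimal sketch (the projectile-range question and its candidate answers are an invented example of mine, not one from the article):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Screening multiple-choice candidates for the range R of a projectile launched
% at speed v and angle \theta (invented example; only the first expression is correct)
\begin{align*}
R &= \frac{v^2 \sin 2\theta}{g}    && \text{dimensions of length, and } R \to 0 \text{ as } \theta \to 0 \text{ (passes both checks)}\\
R &= \frac{v^2 \sin 2\theta}{g^2}  && \text{dimensions of time squared (eliminated by dimensional analysis)}\\
R &= \frac{v^2}{g \sin 2\theta}    && \text{correct dimensions, but } R \to \infty \text{ as } \theta \to 0 \text{ (eliminated by the limiting case)}
\end{align*}
\end{document}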
Given that the department has set the learning objectives and goals, it is important to realize that assessment sets the standard for student learning (and strongly influences instruction). So,
#2 Departments must adopt assessment instruments and develop calibrated question pools that accurately assess their learning goals.
In this process, departments should strongly consider incorporating some of the research-developed instruments. Examples include instruments that evaluate students’ ability to reason from fundamental physical principles (Pawl et al., 2012), assessments of general scientific reasoning (Lawson, 1978), and widely used instruments whose typical results are known for different institutions, e.g., the CLASS, the Test of Understanding of Graphs (TUG), and discipline-based instruments like the venerable Force Concept Inventory, which has transformed teachers’ views on the importance of conceptual reasoning. We should also consider making some instruments of our own – a good place to start would be to collect questions like those in Freeman et al. This process will result in stable year-to-year assessments of student knowledge and learning. As a side benefit, these assessments can complement student evaluations of learning in assessing teacher performance.
The main thrust of this article is that the education research and cognitive science literature provides many important insights and remedies that address the serious student deficiencies identified by Freeman et al. and in this article – and that improving the outcomes of our subjects requires us to bring this research (and research-developed assessment instruments) to bear on our efforts to improve our courses. This is a tremendous challenge because of the immensity of the possibly relevant literature. To put this in perspective, a typical faculty member is well acquainted with the literature in a specialty like Atomic Physics Research, for which a Google search yields ~0.1M hits; in comparison, Physics Research yields ~2.7M and Education Research ~12.4M – a count that probably excludes much education-relevant research from fields like cognitive science and behavioral psychology. Hence it is unrealistic to expect even our most dedicated professors to know the literature relevant to education – even the dedicated teacher-authors in Freeman et al. believe that “researchers in STEM education [have not] . . . identified these problems and shown their solution.” I believe that the only realistic route to filtering through the voluminous education research – to get beyond the “educational technology fix of the day” and find what will truly help us improve our subjects – is this:
#3 We must incorporate Discipline-Based Education Researchers into processes #1 and #2 above, as suggested in the recent National Research Council study (Singer, Nielsen, & Schweingruber, 2012).
The above three recommendations – set goals, agree on how to assess them, and incorporate relevant educational research – are consistently recommended by education reformers and have been successfully implemented at two selective universities by Nobel laureate Carl Wieman (Wieman, 2017). Grant Wiggins has advocated them, calling the process “backward design” to contrast it with the topics-first/assessment-last approach that is typical in universities (McTighe & Wiggins, 2012).
It should be easy for MIT faculty to adopt backward design because it is really the “forward design” that we practice in our professional lives: select goals, determine how we will measure them, build on the relevant literature, experiment/fail/recycle until the goals are met, then publish or patent. I hope we can adapt this familiar practice to systematically and scientifically improve MIT undergraduate education.
I acknowledge helpful comments by Lori Breslow, Sanjoy Mahajan, Leigh Royden, and Gerald Sussman.
I welcome comments on this (dpritch@mit.edu) and encourage further Faculty Newsletter articles on improving MIT education.
References
Adams, W., Perkins, K., Podolefsky, N., Dubson, M., Finkelstein, N., & Wieman, C. (2006). New instrument for measuring student beliefs about physics and learning physics: The Colorado Learning Attitudes about Science Survey. Physical Review Special Topics - Physics Education Research, 2(1), 010101. http://doi.org/10.1103/PhysRevSTPER.2.010101
Barrantes, A., & Pritchard, D. (2012). Partial Credit Grading Rewards Partial Understanding. (unpublished – ask me for a copy)
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). Washington, D.C.: National Academy Press. Retrieved from http://www.eric.ed.gov/ERICWebPortal/recordDetail?accno=EJ652656
Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). How students study and use examples in learning to solve problems. Cognitive Science, 13, 145–182.
Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152. Retrieved from http://www.sciencedirect.com/science/article/pii/S0364021381800298
DiSessa, A. (1988). Knowledge in pieces. In G. Forman & P. Pufall (Eds.), Constructivism in the computer age (p. 49).
Ebbinghaus, H., Ruger, H. A., & Bussenius, C. E. (1913). Memory: A contribution to experimental psychology. http://doi.org/10.1037/10011-000
Ericsson, K. A. (2009). Discovering deliberate practice activities that overcome plateaus and limits on improvement of performance. In A. Williamon, S. Pretty, & R. Buck (Eds.), Proceedings of the International Symposium on Performance Science (pp. 11–21). Utrecht, The Netherlands: Association Européenne des Conservatoires, Académies de Musique et Musikhochschulen (AEC).
Gerace, W. J. (2001). Problem Solving and Conceptual Understanding. Proceedings of the 2001 Physics Education Research Conference, 2–5.
Hestenes, D. (1987). Toward a modeling theory of physics instruction. American Journal of Physics, 55(5), 440. Retrieved from http://scitation.aip.org/content/aapt/journal/ajp/55/5/10.1119/1.15129
Hestenes, D., Wells, M., & Swackhamer, G. (1992). Force concept inventory. The Physics Teacher, 30(3), 141. http://doi.org/10.1119/1.2343497
Lawson, A. E. (1978). The development and validation of a classroom test of formal reasoning. Journal of Research in Science Teaching, 15(1), 11–24. http://doi.org/10.1002/tea.3660150103
Madsen, A., McKagan, S. B., & Sayre, E. C. (2015). How physics instruction impacts students’ beliefs about learning physics: A meta-analysis of 24 studies. Physical Review Special Topics - Physics Education Research, 11(1), 1–19. http://doi.org/10.1103/PhysRevSTPER.11.010115
Marzano, R., & Kendall, J. (2007). The new taxonomy of educational objectives (2nd ed.). Thousand Oaks, CA: Corwin Press.
NGSS Lead States. (2013). Next Generation Science Standards: For States, By States. Washington, D.C.: The National Academies Press.
Pawl, A., Barrantes, A., Cardamone, C., Rayyan, S., Pritchard, D. E., Rebello, N. S., . . .Singh, C. (2012). Development of a mechanics reasoning inventory. In 2011 Physics Education Research Conference (Vol. 2, pp. 287–290). http://doi.org/10.1063/1.3680051
Pawl, A., Barrantes, A., & Pritchard, D. E. (2009). Modeling applied to problem solving. In AIP Conference Proceedings (Vol. 1179). http://doi.org/10.1063/1.3266752
Pawl, A., Barrantes, A., Pritchard, D. E., & Mitchell, R. (2010). What do seniors remember from freshman physics?
Pritchard, D. E., Barrantes, A., Belland, B. R., Sabella, M., Henderson, C., & Singh, C. (2009). What Else (Besides the Syllabus) Should Students Learn in Introductory Physics? In 2009 Physics Education Research Conference (pp. 43–46). http://doi.org/10.1063/1.3266749
Rayyan, S., Pawl, A., Barrantes, A., Teodorescu, R., & Pritchard, D. E. (2010). Improved student performance in electricity and magnetism following prior MAPS instruction in mechanics. In AIP Conference Proceedings (Vol. 1289). http://doi.org/10.1063/1.3515221
Singer, S. R., Nielsen, N. R., & Schweingruber, H. A. (2012). Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. (Committee on the Status, Contributions, and Future Directions of Discipline-Based Education Research; Board on Science Education; Division of Behavioral and Social Sciences and Education, Eds.). The National Academies Press. Retrieved from https://www.nap.edu/catalog/13362/discipline-based-education-research-understanding-and-improving-learning-in-undergraduate
Teodorescu, R. E., Bennhold, C., Feldman, G., & Medsker, L. (2013). New approach to analyzing physics problems: A Taxonomy of Introductory Physics Problems. Physical Review Special Topics - Physics Education Research, 9(1), 010103. http://doi.org/10.1103/PhysRevSTPER.9.010103
Tuminaro, J., & Redish, E. F. (2007). Elements of a cognitive model of physics problem solving: Epistemic games. Physical Review Special Topics - Physics Education Research, 3(2), 020101. https://journals.aps.org/prper/abstract/10.1103/PhysRevSTPER.3.020101
Wieman, C. E. (2017). Improving How Universities Teach Science. Cambridge, MA: Harvard University Press.