I recently came across a striking article: “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching.” This title is followed by an equally provocative abstract: “…Although
unguided or minimally guided instructional approaches are very popular and
intuitively appealing,…these approaches ignore…evidence from empirical studies
over the past half-century that consistently indicate that minimally guided
instruction is less effective and less efficient than instructional approaches
that place a strong emphasis on guidance of the student learning process…” And
the paper ends with more than two solid pages of supporting literature.
In the body of the paper, Kirschner et al. make many good
points, which appear to be well supported by research and theory; for example,
that IBL and similar pedagogies may be too challenging for
the weakest students, that students benefit from scaffolding and other
guidance, and that performance on tests doesn’t necessarily improve as a result
of IBL. But the paper seems to be primarily a straw man argument: It makes valid points, but these points do not support the claim that IBL is a “failure.” Those of us who teach with IBL know from experience that it has real benefits that are not measured by test scores, and that test scores improve in some cases.
Thus, I take the paper as a challenge to those of us who see value in inquiry-based learning to more clearly articulate that value and the factors that contribute to it, so research can be designed to tease out the beneficial aspects of IBL. This paper prompted me to do a quick search of the literature to see what is known about the benefits of IBL. Not surprisingly, it’s complicated.
One of the challenges with past research is articulated well
by both Norman & Schmidt (2000) and Prince (2004): The many different forms
of IBL differ along many variables, such as the amount and type of guidance,
whether work is student-directed and/or student-paced, the percentage of class
time that is student led, and even the personality of the instructor. These
variables have distinct, possibly negative, and likely interacting effects on
student learning, so lumping all of the variables together under the single
name “IBL” naturally gives muddy research results. To obtain more meaningful
results, Norman & Schmidt call for multivariate analysis that captures all
possible variables and interactions.
Fortunately, the exciting recent study by Laursen et al. has
begun to tease out some of these variables. They found, for example, that
student-reported gains (e.g., in confidence and math thinking) correlated with some
class practices, including peer interactions, student-instructor interactions, and
the extent to which the class was student-directed and student-paced.
But which of these variables contribute most to an IBL
classroom’s success? For example, is it more important for an IBL classroom to
have peer interactions or to be student directed? Prince notes that cooperative learning has
much more robust research support than does student-directed work: Cooperative
learning not only improves test scores, but also improves interpersonal skills,
student attitudes, retention in academic programs, and more. But
student-directed and student-paced work can have a slight negative effect on
test scores. Could past muddy results for test scores in IBL research be due in
part to student-directed work? Could
Laursen et al.’s student-reported gains be explained entirely by the benefits
due to cooperative learning? Is it even
possible to gather sufficient data to tease out the interactions among these
variables?
Perhaps if we can tease out which variables most contribute
to the benefits we know IBL can provide, not only can we more easily respond to
skeptics but, more importantly, we can craft our classes to even better serve
our students.
So, what’s so good about IBL, anyway? And what variables do
you hypothesize are most important to that success?
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). "Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching," Educational Psychologist, 41(2): 75-86.
Laursen, S., Hassi, M., Kogan, M., Hunter, A., & Weston, T. (2011). Evaluation of the IBL Mathematics Project: Student and Instructor Outcomes of Inquiry-Based Learning in College Mathematics: A Report Prepared for the Educational Advancement Foundation and the IBL Mathematics Centers. Assessment & Evaluation Center for Inquiry-Based Learning in Mathematics.
Norman, G. R., & Schmidt, H. G. (2000). "Effectiveness of problem-based learning curricula: theory, practice and paper darts," Medical Education, 34(9): 721-728.
Prince, M. (2004). "Does Active Learning Work? A Review of the Research," Journal of Engineering Education, 93(3): 223-231.
One aspect of IBL that's "so good" is the focus on process. By process, I mean problem solving, habits of mind similar to what mathematicians do, communication, getting better at being stuck. These are things that have not yet been fully studied, and they are opportunities for new research. Most studies have touched on things we can more easily observe (for obvious reasons). Measuring "transformative experiences" is a tougher challenge. When this sort of research does happen, however, I think it'll shed more light on the matter.
I'll also point out a "blind spot" in our profession. Many of us give two midterms and a final, so all one would see are the results on timed tests. This is a fairly narrow assessment of what students are actually capable of, and problem-solving ability, conceptual understanding, the ability to handle being stuck, and attitudes about learning math all fly under the radar. Thus, one aspect of this issue is bringing out into the open what students actually think. For instance, being blind to negative attitudes and beliefs vs. being aware of them changes perceptions of the data, the benefits of a teaching system, the goals of a course, etc.
Great post! I enjoyed reading it!