Multimodal Learning Through Media: A Synopsis
The claim that human beings remember "ten percent of what we read, twenty percent of what we hear, thirty percent of what we see, fifty percent of what we see and hear, seventy percent of what we say, and ninety percent of what we say and do" is an overconfident attempt to express the impact that different forms of stimuli (and combinations thereof) have on memory. This claim was rooted in Edgar Dale's 1954 "Cone of Experience," which was simply intended to indicate a spectrum of concreteness (or abstractness) in audiovisual learning resources. However, Dale's work was falsely quantified over the years to suggest that human beings have concrete and inflexible limits on how much information can be encoded into memory via certain forms of stimuli.
Though we cannot precisely quantify how much information a given type of stimulus allows us to store, valid research has indeed shown that different types of multimedia, when coupled with other forms, are more effective than these same types of media in isolation. That said, there are certainly general guidelines to follow when combining certain types of audiovisual stimuli that allow for greater and easier encoding into long-term memory. Though we will not provide a comprehensive list of those guidelines here, a representative example found in the text is what Richard Mayer, Roxanne Moreno, "and others" call the "Spatial Contiguity Principle," which stipulates that "[s]tudents learn better when corresponding words and pictures are presented near each other rather than far from each other on the page or screen." Another, the "Coherence Principle," states that "[s]tudents learn better when extraneous words, pictures, and sounds are excluded rather than included" (12). The latter principle is somewhat obvious, as irrelevant clutter on a screen makes it more difficult for students to focus on encoding select information on that screen.
Ultimately, this study concluded that properly designed multimodal learning has a positive effect on both interactive and non-interactive forms of learning, an effect that promotes learning in both of these forms more than traditional, single-mode learning such as text analysis (13). Yet simply because a teacher incorporates multimedia into his/her teaching does not mean that such teaching is effective. It is important to consider the goals of the lesson, and in turn which type of stimulus (or combination of them) is most effective for achieving those goals. For example, a documentary with live footage may better serve to illustrate the famous march from Selma to Montgomery during the American Civil Rights Movement than would a PowerPoint presentation with pictures alongside a narrative account. Of course, there are some occasions where specific skills need to be built around a single type of stimulus. A good example of this would be a textual analysis of a primary historical document such as a census record. In sum, there are certain times and places that certain types of multimedia should be used. It is important for teachers to have a clear rationale for using a specific combination of multimedia resources when teaching a lesson, but it is similarly important for teachers to realize that a variety of stimuli, when used properly, is naturally more effective than single-mode learning.
To the group, I would ask you (in addition to responding to that which I have written above) to provide one or two examples of a good use of multimedia in teaching a lesson in your content area. Defend your choice of media.
Indeed, multimedia can be, but isn't always, a helpful experience. PowerPoints are a form of multimedia, and we all know they can be deadly as an educational tool. However, there are many effective and interesting ways that multimedia can be utilized.
For instance, in my 7th grade English class:
1. I could show part of a movie to illustrate something like conflict or plot or any other number of things.
2. I could show part of a documentary about a writer's life before we begin a lesson on the writer. This would have the benefit of combining listening and seeing. Since it is a documentary, some of the pictures might also be accompanied by captions.
I agree with Alex and Matthew that we as teachers have to be mindful of the effectiveness of any technology we are integrating into our lessons. From the article, we learn that "informed educators understand that the optimum design depends on the content, context, and the learner" (8). This can be an excellent guideline as we begin to plan for multimodal learning in our classrooms.
Now, in a math class, I could use an online applet to illustrate how changing the characteristics of an equation relates to shifts in the graph. Students could then draw the different graphs in their notes. Another way I could integrate technology effectively is to have students create short movie clips to explain new vocabulary or a new method to their peers. These could then be used to review for a test or even as part of the test (watch this clip and identify the term described). Both of these cases apply more than one mode of learning to increase effectiveness for student learning.
This article reinforces the importance of understanding our students, or "knowing our students" as Jeff Bond said in our Behavior Management Class.
We can meet our students' needs through diverse forms of presentation. Multimodal teaching follows the design of inclusion in classrooms. Students who struggle with editing their papers and grammar could benefit from an auditory-visual learning style.
To respond to Alex's prompt, I would focus on the teaching of grammar in English classrooms. Our English Methods text reminds us continually to teach grammar as language lessons related directly to the students' writing. I would like to develop a multimodal lesson plan coupling the student writing with language lessons through voice. Ideally, each student would record their voice reading their writing; we could use the computers in the media center. Each student could read one paragraph of their writing and then listen to the recording. The goal of this lesson would be to develop the metacognitive voice students use as they edit their papers.
One great common thread I see here is knowing your students. This probably can't be overstated.
I think you made a great point, Alex, when you said, "Yet simply because a teacher incorporates multimedia into his/her teaching does not mean that such teaching is effective. It is important to consider the goals of the lesson, and in turn which type of stimulus (or combination of them) is most effective for achieving those goals." It really always comes down to these ideas. While research can inform how we make these choices, the reality is that the teacher is the ultimate arbiter in terms of matching content, pedagogy, and technology.
One thing to remember is that multimodal technologies can be interactive or non-interactive. I think it's easy to think of multimedia (instead of multimodal) and assume students are more passive. This is true for some technologies and resources, but not for all. The two approaches are exemplified by Matt's example of watching a documentary on a writer vs. Ruth's suggestion of the online applet. Choosing between the two should depend on the type of learning focus (i.e., basic skills vs. higher-order thinking).