If you had to pick a word to characterize research in educational technology in the 1990s, it probably would have been "innovation." Fueled by public and private dollars, experts were in full-bore research-and-development mode for much of that decade, exploring all kinds of classroom applications for digital technology.
The problem was that researchers paid less attention to documenting how, and in some cases whether, their innovations improved learning. And they spent even less time thinking about how to sustain and spread their use.
"Maybe 10 years ago, there was more latitude to say, 'Let's try out some crazy technology ideas and see if kids find it useful,' " says Chris Quintana, an assistant professor of learning technologies at the University of Michigan, in Ann Arbor. "Then it would be, 'OK, let's move on to the next idea.' "
Experts now agree that the climate has shifted. Nowhere is that change more apparent than at the federal level, where the pool of available money for innovative applications of educational technology has shrunk, and policymakers are putting pressure on developers to prove their products improve academic achievement.
Yet the past decade has not been without studies suggesting that digital educational technology does produce results in the classroom.
Several recent research reviews and meta-analyses published in the United States and in Britain suggest that, when measured across the board, educational technology yields "small, but significant" gains in learning and in student engagement. The problem is that those modest gains fell short of advocates' promises.
Looking back, experts say the case for educational technology could have been much stronger by now if researchers had spent more time assessing learning gains and less time innovating.
"Not measuring the gains was an absolute error on our part, and we need to go deeper and deeper with good research," says Donald G. Knezek, the chief executive officer of the Washington-based International Society for Technology in Education.
For instance, earlier studies showed that students and teachers liked using technology in the classroom, and that students knew more at the end of a particular study than they did before the intervention began, says Cheryl L. Lemke, the CEO of the Metiri Group, a Culver City, Calif.-based research organization.
But the studies failed to document those improvements in scientifically valid ways. They also didn't probe for links between learning gains and what researchers call "fidelity of treatment," that is, the extent to which students and teachers used the technology being tested.
Gains Found in Diverse Areas
Plus, says Glen Bull, a professor of instructional technology at the University of Virginia, in Charlottesville, researchers are just now understanding how much greater the payoffs can be when digital-learning programs combine specific academic content with lessons from cognitive science and developmental psychology on how children learn in those subjects. In teacher education, it is called "technological pedagogical content knowledge."
"When you're working with technology, it cannot just be dropped in school," Bull says. "That's the thing that's really emerging."
Beyond the concern about missed opportunities, some scholars are skeptical that educational technology itself can improve student achievement. Larry Cuban, a professor emeritus of education at Stanford University, suggests that studies cannot isolate any gains attributable to technology from those that might stem from teachers' methods, classroom climate, or class size.
He does note, though, that some studies are getting more sophisticated about taking into account those factors by having the same teacher give lessons with and without the technology intervention under study. Still, Cuban says, "over the past 10 years, I don't think technology has produced any gains except those that experimentalists are looking for."
Other scholars report qualified evidence, however, that computer technology can bolster achievement.
In distance learning, for instance, the Metiri Group determined in a November 2006 report that students' performance in virtual classrooms was as good as or better than their performance in face-to-face classrooms. The achievement gains were stronger in Web-based programs than in video-based ones, as well as in programs that included an e-mail component, according to that research review, which was paid for by Cisco Systems Inc., an Internet-network provider based in San Jose, Calif.
Similarly, a 2003 meta-analysis by researchers at Boston College found that students using word processors wrote more and produced better-quality work than did students in comparison groups. The caveat, though, was that for some of the younger technology-savvy students, writing quality suffered when they were asked to write on paper-and-pencil assessments.
"Earlier studies had not found any positive effects for writing with word processors," says Michael K. Russell, a co-author of the analysis and the director of the Technology and Assessment Study Collaborative at Boston College. "We wanted to see if findings had changed."
Several "intelligent" tutoring programs, likewise, have accumulated solid research track records, according to the Metiri report and various experts. A prime example is Cognitive Tutor Algebra, which was developed by researchers at Carnegie Mellon University in Pittsburgh.
A randomized trial of the software conducted over the 2000-01 school year in the Moore, Okla., school district showed that middle school students using the program outperformed their peers in other classrooms on standardized end-of-course tests.
Similarly, studies of individual computer-based programs that let students simulate frog dissection or view thermodynamics in action at the molecular level, such as those developed by researchers working on the Web-based Inquiry Science Environment initiative at the University of California, Berkeley, have shown that such approaches can be more effective than conventional instruction at generating deeper understanding.
The Metiri Group's review found few rigorous studies, though, looking at the efficacy of interactive computer whiteboards; personal digital assistants and other kinds of handheld computers; or quick-response devices, such as electronic "clickers" that give teachers an instant read on whether a class is "getting" a lesson.
Reality Checks Needed
The focus on rigorous assessments of achievement gains grew in part out of two federal laws adopted earlier this decade, the No Child Left Behind Act and the Education Sciences Reform Act, which required educators to rely on education programs and practices that have been proved effective through "scientifically based" studies.
But researchers say private foundations and federal agencies beyond the U.S. Department of Education have picked up on that trend, too, in a search for tried-and-true strategies that educators can quickly put to practical use.
Like most swings of the pendulum, though, this one has drawbacks, according to observers. One fear is that researchers adhering to the new model could miss out on opportunities to document other benefits that have been linked to digital learning. Those include improvements in writing quality and communication, heightened student engagement, deeper understanding of some abstract concepts, changes in teaching practices, and the opportunity to give students new windows opening onto previously unseen worlds.
"A lot of what we've been saying is we're not using the right metrics. We're not measuring the full impact of learning," says Lemke of the Metiri Group.
Researchers also need to be able to study how the programs they develop in the hothouse settings of university laboratories work in different, often inhospitable, classroom environments, says Christopher J. Dede, a professor of learning technologies at Harvard University. That kind of "scale-up" research, he says, could give educators a more realistic idea of how programs could work in their own classrooms, and perhaps point to new hybrid models of the same programs tailored to specific, more difficult settings.
"It's probably not hard for something that's reasonably well designed to find some site where it works," Dede says. "It's like asking, 'Is chicken Kiev a good thing to eat at a Russian restaurant?' It's a fantastic thing to eat, but if you get it in a diner, it's often not good."
Another concern among researchers is that the focus on improving test scores could altogether crowd out the kind of inventive research-and-development work that characterized so much research in educational technology 10 years ago.
That's a problem now, they say, because students' use of technology outside school is already outstripping their use of it in classrooms. Yet it is becoming harder to find funding to design educational programs to capitalize on those new uses: a digital-learning network, perhaps, that can engage students as powerfully as the online YouTube video-sharing site, or social-networking Web sites such as MySpace.
As for video games, which are particularly expensive to build, the Federation of American Scientists, a prominent Washington-based group, issued a report last year calling on the departments of Education and Labor, along with the National Science Foundation, to pay for the development of "serious" games.
"I think there's more enthusiasm around gaming for learning than almost any topic I've ever seen," says Roy D. Pea, an education professor at Stanford University. He adds, nevertheless: "This is a very big hunch. Lots of research questions need to be addressed."
Digital-Literacy Skills a Concern
Part of the problem is that experts don鈥檛 know exactly what students are doing with technology, either inside or outside school, and how it affects their thinking. The last large-scale survey of school-based educational technology practices occurred in 1998, several experts say, and less is known about how students use digital technology at home.
To fill that void, the Chicago-based John D. and Catherine T. MacArthur Foundation in September 2006 announced plans for a five-year, $50 million digital-learning initiative to research technology's effect on students, use social networking and other online tools to help students learn, design and develop online games, and create media-literacy curricula for a digital age.
"Also, the use of Google opened up a raft of questions around what learning students need to have in order to be productive researchers," says Pea, referring to the highly popular online search engine.
Rather than just learn how to use technology, students in today's Web-dominated environment need to learn how to prioritize and manage a dizzying array of information coming at them through Web sites and e-mails, how to think critically about what they find, and how to use multiple media to communicate well, among other skills. Educators, scholars, and policymakers have yet to agree on what those new skills should be, much less on how best to teach them.
"We still have a lot to learn about supporting a whole range of digital-literacy skills," says Margaret A. Honey, a vice president of the Education Development Center Inc., a Newton, Mass.-based research group, and a co-director of its Center for Children and Technology, in New York City. And, she says, new research in that area could provide a lasting payoff.
"Technologies are always changing," she says, "but skills of discernment don't change."