School leaders debating whether to buy educational technology often find themselves weighing the promised benefits against their worst fears of soaring costs, disruptive breakdowns, and befuddled teachers and students, not knowing whether they're about to make a purchase their districts will come to regret.
Now, nascent efforts in schools and districts across the country are underway to help administrators become smarter consumers of technology, so they understand the risks and rewards upfront.
In some cases, those initiatives are being led by school officials determined to approach technology purchases in a more methodical way. In others, the process is being guided by nonprofits that screen ed-tech products or help school leaders and teachers evaluate them and test them under the right conditions in classrooms.
Some school officials and organizations taking on that work see it as connected to the broader goal of bridging what many see as a prevailing divide between technology developers, who complain about not being able to access schools or break into K-12 markets, and administrators and teachers, who complain that too many of the products and services thrust at them are useless or impractical.
Despite their best efforts, many districts "fundamentally don't have the expertise to choose technology products," said Muhammed Chaudhry, the president and CEO of the Silicon Valley Education Foundation, a San Jose, Calif.-based nonprofit that seeks to improve education in that region and around the country. Districts often make decisions about ed-tech purchases based on reputation, word-of-mouth, and pitches they hear from sales staff, Mr. Chaudhry argued, leaving them susceptible to making decisions based on "relationships, rather than using a rigorous process for choosing products."
His organization is helping lead a new program called the Learning Innovation Hub, which aims to foster a freer exchange of information and feedback between ed-tech developers and classroom teachers.
Schools Evaluate Products
Efforts to evaluate educational technology in more sophisticated ways are also playing out in individual schools.
Barton A. Dassinger, the principal of the Cesar E. Chávez Multicultural Academic Center, a pre-K-8 traditional public school on Chicago's South Side, had grown used to hearing companies make rosy promises about how their products would improve student learning and school efficiency.
So he set up a process that allows his school and its teachers to test individual ed-tech products, monitor their performance, and decide whether to keep using them.
Mr. Dassinger, in consultation with his teachers, chooses supplemental classroom materials for pilot-testing and other types of trial runs throughout the year, as well as during after-school and summer programs.
He keeps a detailed Excel file on his computer that attempts to track the impact of individual products on student achievement. Down the left side of the screen, students' names, grades, and homeroom assignments are listed. There's information indicating whether they belong to English-language-learner or special-needs populations. Across the top row, there's a list of companies whose products are being tried; recently, the names included classroom tools by Compass Learning and ALEKS (Assessment and Learning in Knowledge Spaces), a McGraw-Hill Education product.
Students' scores on various tests (district, state, and subject-specific) appear in green, red, and yellow, linked to each product, showing how academic progress has changed over time. Companies often ask to be evaluated by metrics of their choosing, but Mr. Dassinger wants to be able to judge them by the same tests and metrics against which his school, which serves an overwhelmingly Hispanic and socioeconomically disadvantaged population, and its students and teachers are judged.
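As an illustration only (the school's actual file is an Excel spreadsheet, not code), here is a minimal sketch of how such a pilot-tracking layout might be structured, using hypothetical student and product names and a text stand-in for the color coding:

```python
# Hypothetical sketch of a pilot-tracking sheet like the one described above.
# Student names, products, score changes, and thresholds are all illustrative.
students = [
    {"name": "Student A", "grade": 5, "homeroom": "205", "ell": True,  "special_needs": False},
    {"name": "Student B", "grade": 6, "homeroom": "301", "ell": False, "special_needs": True},
]

# Change in a common benchmark score (e.g., a district or state test)
# over the pilot period, recorded per student and per product.
score_change = {
    "Student A": {"Product X": 4.0, "Product Y": -2.5},
    "Student B": {"Product X": 0.5, "Product Y": 3.0},
}

def flag(delta: float) -> str:
    """Stand in for the green/yellow/red cell colors in the spreadsheet."""
    if delta >= 2.0:
        return "green"
    if delta <= -2.0:
        return "red"
    return "yellow"

# Print one row per student: which products coincided with gains or declines.
for s in students:
    flags = {product: flag(delta) for product, delta in score_change[s["name"]].items()}
    print(s["name"], flags)
```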
The process is not perfect, he acknowledges. Chávez school officials can't know for sure, for instance, whether students' academic gains or declines can be attributed to any ed-tech product or to other factors.
But when the test data are combined with the feedback he gets from teachers about how they view products, and how students respond to them, Mr. Dassinger is confident the process is helping him evaluate technology in a systematic way.
"It allows us to filter out some of the field, to get to some of the best companies out there," he said. In reviewing the products, "if students are not engaged, or they don't understand a technology, that's kind of a red flag."
Competing Instincts
Part of the dilemma school leaders face in weighing the value of digital tools is that those leaders are often torn by different, competing instincts, said Steven Hodas, the executive director of Innovate NYC Schools, a program within the 1.1 million-student New York City district whose goal is partly to give vendors insights on the needs of educators and other district officials.
Superintendents and curriculum and technology officials want to get a good price and avoid costly and embarrassing mistakes, Mr. Hodas said. But they also want to be bold. And buying technology brings its own set of temptations and complications for district leaders, he said, one of which is that so many devices and programs seem impressive and cutting edge at first glance.
"In the presence of bright and shiny, people tend to make bad decisions," Mr. Hodas observed.
To avoid problems, he said, it's critical for school leaders to clearly identify the problem they hope technology will help solve and to involve principals and teachers early and often, since those educators' acceptance of ed-tech products will go a long way toward determining their success.
At the Chávez school, some products go through the review process and come out winners. One such product is ST Math, short for spatial-temporal math, game-based instructional software that seeks to build skills through visual learning. Developed by the MIND Research Institute, an Irvine, Calif.-based nonprofit, it has won good reviews from the school's classroom teachers and appeared to show positive academic results since it was introduced a few years ago.
The performance of various reading products, however, has been mixed at best.
Mr. Dassinger recalled one company's initial showing with a reading product as a "disaster," replete with black computer screens.
"The teachers just gave up because there were so many hands going up [from students needing technical help]," Mr. Dassinger said.
But even that error produced lessons. One teacher continued to experiment with the product, and the vendor that developed it asked for another chance. The company has since boosted its support for the school鈥檚 use of the program, and it has become popular among students, Mr. Dassinger said.
Chávez has also helped pilot a new, broader project being organized across Chicago, called LEAP Innovations (for Learning Exponentially, Advancing Potential), which is meant to give schools access to ed-tech products that have been screened for quality, and to give companies the opportunity to test their products in classrooms.
After an initial period of beta-testing, the program will officially launch a fuller pilot this fall, with a focus on delivering pre-K-8 literacy instruction to schools. Schools apply to participate, identifying in their applications the specific educational needs they hope to address through technology. They will receive free software for the length of the program and training on how to implement the product in classrooms. More than 50 schools applied for the fall program (traditional public, charter, and private schools in Chicago are eligible), and 15 to 20 are expected to be selected to participate. Companies apply by submitting a detailed application; if they make it through an initial screening, they appear before a "curation panel" consisting of educators, literacy specialists, and a learning scientist.
One of the chief benefits for companies is the opportunity to have technology tools evaluated in real classrooms, using "flexible but rigorous" research methodology, including internal and nationally norm-referenced tests. If they achieve positive results, that's information they can use to market themselves to schools across the city, a major market with 400,000 public school students, and beyond, said Phyllis Lockett, the CEO of LEAP Innovations.
About 30 percent of the vendors applying for the program are "mature" companies, while the rest are early- or midstage, said Ms. Lockett. Initial funding for the nonprofit came from the Bill & Melinda Gates Foundation, and funding or support has also come from numerous other sources, including the Michael & Susan Dell Foundation and the Parthenon Group, a national consulting company.
"We want the best solutions," Ms. Lockett said, and the process is "agnostic as to which companies get chosen."
Educator Feedback
A similar concept is at work with the Learning Innovation Hub, a program launched this year by the Silicon Valley Education Foundation, in cooperation with the NewSchools Venture Fund, an Oakland, Calif.-based philanthropic investment group. The iHub, as it is known, will attempt to connect ed-tech entrepreneurs with educators, who will give companies feedback on their products and what could be changed to improve them.
After an application process, eight teachers were named fellows through the program. They received stipends of $1,750 and agreed to devote 40 hours over the course of the spring 2014 semester to professional development on using the education products, to provide vendors with feedback, and to collect data.
In the first year of iHub, four companies were chosen to participate in the program, which this year focuses on middle school math instruction.
One of the foundation's hopes, said Mr. Chaudhry, whose organization is co-leading the project, is that both the process of reviewing products and the products themselves, if educators find them to be of high quality, will "permeate up."
"Ideally, it scales from the classroom to the entire district," he said.
Starting small, whether through pilot projects or other small trials of educational technology, is a good idea for schools and districts, argued Mr. Hodas of Innovate NYC Schools. Doing so gives school administrators the freedom to make mistakes and make adjustments before the stakes are too high.
"You're going to be mostly wrong a lot of the time," he said. "You learn more from being wrong than being right. You have to build that into your process."