Personalized learning: Is it an educational imperative, a marketing strategy for an ed-tech product, or both?
Too often, teachers and administrators say, companies use "personalized learning" as a mere buzzword to promote run-of-the-mill digital tools.
"In the marketing literature, this term is overused," said Devin Vodicka, the superintendent of the Vista, Calif., school district. "Many products that someone claims are personalized are actually just a series of digital worksheets."
But educators are finding ways to sort the real personalized potential from the empty promises of some ed-tech products.
For instance, Vista uses its own Personal Learning Pathway framework to evaluate products, said Vodicka.
Some of the key questions educators in his district ask:
• Is this product based on a student profile?
• Is there an integrated technology component that includes two-way communication?
• Does the product include student choice and pathways?
• How does it fit with the learning environment, and more broadly, is it connected with real-world opportunities?
• Does it use a competency-based model, in which students move at their own pace as they master academic content?
Last month, Vista hosted 42 superintendents from across the country, who discussed that and other approaches as part of an ongoing cohort studying personalized learning through AASA, The Superintendent's Association.
The most important starting point for vetting a product, says Barton Dassinger, the principal of Chavez Elementary in Chicago, is this question: Does this product improve student learning more than the alternatives to using it? And does the company generate reports in a way that allows for an analysis that will answer that question?
The starting points of "what is our need?" and "what is our goal?" are important for Théa Williams, who is both a technology teacher for pre-K-5 and the technology coordinator for Brooklyn Arbor Elementary School in New York City. As an iZone pilot-team leader through the city's department of education, she said she has learned to move beyond products' catchphrases to identify the data needed.
Red flags about the "personalized learning" label abound, according to Amy Nowell, the director of research for LEAP Innovations, a Chicago-based nonprofit that works to build educational innovations by connecting schools and businesses. From her experience testing ed-tech products with evidence-based research methods, Nowell and other experts have identified several red flags educators should watch for:
Questionable Student Agency
Student agency is a fundamental element of personalization. Can students take ownership of their own learning by setting their own goals? Can they track their own progress?
Nowell recommends that teachers "test drive" products as though they are students and make sure they understand what the data mean.
For instance, if students aim to learn a particular unit within a specified period of time, and the product provides feedback that "you have only completed one activity" in that time frame, what, she asked, does that mean to a student trying to complete and comprehend the full unit?
Inadequate Content
By definition, personalized learning allows students to move at their own pace through material.
"There's never a classroom where every student is average," said Nowell. "We've had a number of teachers who were really disappointed when they dug in, and two months into the year, their brighter kids are looking for material that wasn't there."
While ed-tech products can often be retrofitted to accommodate material for advanced or struggling students, that's often a "clunky" solution, she said. A digital resource with "only 130 lessons" is unlikely to have enough content to go up or down two grade levels.
Useless Data
Products for personalized learning generally produce a lot of data. Williams recommends educators first ask: "How do I make sense of this data?"
In theory, experts say a data dashboard should help students and teachers understand what the metrics are, how a certain metric was arrived at, and what it means for student learning. But that is not always the case.
For example, one product reported to teachers the percent of total lessons each student completed in the 4th grade curriculum. "'Johnny has completed 4 percent of the lessons' has no actual meaning to anybody," Nowell said. "It's not tied to what students are learning. It's not tied to learning standards or mastery of content. Johnny could have clicked through, 'completed' them, and gotten them all wrong."
Lacking Recommendations
"I need the data to help make instructional decisions," said Williams.
The problem is that automating the process of using data to inform educators' decisionmaking is still largely an unmet need. "I don't feel personalized ed-tech products have mastered that yet," she said. "I'd like to see more automation, providing data to teachers to make instructional interventions in the learning path."
Poorly Aligned Assessments
Generally, the embedded assessment questions in personalized-learning products have had no external validation, Nowell said.
When piloting their products, companies want to know what assessments students will take, whether it's from one of the common-core-aligned testing consortia or another well-known test. But because ed-tech companies generally don't have the resources for rigorous assessment design and validation, the assessment data generated from the trials might not align with the tests that eventually will be given to students, she said.
Classroom-Integration Problems
A company should be able to tell teachers how best to integrate its product into the classroom structure. Does the product work best in classrooms where groups of students rotate from one station to another while the teacher instructs another small group? Can students work on it together, or solo?
"We've had teachers who started with the station-rotation model but had to revert to the typical classroom model," said Nowell, "and it was really about the way the technology was set up, not about the kids not being mature enough to work that way."
Another question is, "Who assigns the next piece of content? The teacher or the software?" Teachers want both options, she said.
Little Evidence It Works
Nowell cautions teachers not to put too much stock in testimonials from other users of a product, five-star ratings on websites, or anecdotes about how a product is used. "Press companies for hard data about documented outcomes," she advised.
"Ask them for any studies done on how well their products impact student outcomes," she said. For instance, companies should be able to provide empirical data on product effectiveness as it relates to student achievement. "Even case studies, if done well and based on a similar context to your classroom, can provide a powerful indication of how a product could potentially work for you."
Measures of student learning can include nationally standardized test scores, as well as more timely measures of student engagement and motivation for learning, she added.
Lacking the Personal Perspective
Personalized learning, by definition, means taking each child's uniqueness into account.
"There are all these factors you have to consider: culture, family background, interests, learning strengths, social-emotional development. We could go on and on," said Williams, of Brooklyn Arbor in New York. "We're talking about a human being."
Teachers would like to see products that provide them with more in-depth insights into their students.
Williams said she sees educators' excitement about the prospect of personalization. But personalized-learning experts generally agree that most ed-tech products are not geared toward students' individual backgrounds and interests.
"When they do find products that really work for them, and respond to how students answer questions and then make adjustments accordingly, teachers can be so much more effective," she said. "Then they can do the things that the personalized-learning tool can't do and focus their time and energy on that."