By Pete Wheelan
Education technology providers dangle the promise of transforming classrooms and equipping previously disengaged students with the skills to become lifelong learners. Yet few can back those claims with strong evidence. So how can we expect a teacher or administrator with limited budget and time to separate what works from what doesn’t?
School leaders can better understand whether a flashy product matches their needs by asking its creators questions about what research guided the conceptualization of the tool, what data informed its specific attributes, and what quality and volume of research has been conducted regarding its effectiveness. I recommend beginning with six key questions to separate the promises from the proven solutions.
1. What is this tool designed to accomplish?
One of the biggest mistakes in any purchasing decision is to let the solution define the problem. In the case of education technology, this misguided approach rarely addresses the real needs of students and teachers. Often it results in an expensive tool that is poorly adopted because stakeholders don’t fully understand its capabilities and design. So, begin the examination with a two-part question: What issue does this tool address, and how does it do so? If a provider cannot clearly articulate a specific value proposition, it’s a good sign there’s not enough substance to continue the conversation.
2. What research informed the product development?
Look for evidence that the solution providers have done their homework and are aware of the latest peer-reviewed research in the field, or—even better—that they’re collaborating with leading researchers on the design of their solution. You want to feel confident that they’re leveraging the body of knowledge available through experts who have dedicated their careers to the types of issues you are trying to address.
3. Who is this tool designed for and how were they involved in the creation of the tool?
Ask providers whether typical users (e.g., teachers, students, support staff) participated in the development of the user experience and the content creation. Companies should be able to demonstrate that the product team effectively integrated real-world experiences into developing systems that match users’ needs and capabilities. If they created a tool for students that was designed in isolation by engineers, consider other options.
4. What types of studies have been conducted to demonstrate the effectiveness of the product?
Studies can vary from qualitative user interviews to quantitative controlled trials—and everything in between. Each study plays a role in high-quality research, but some designs are more informative and statistically rigorous than others. In many cases, a variety of study designs is most helpful in understanding both the effects of the tool on outcomes like student learning and the extent to which users enjoy and effectively engage with the tool.
Before committing to join pilot studies or investigate very new tools, ask how the study design will avoid selection bias and other common pitfalls, and be sure to agree on how findings will be reported and evaluated.
5. What were the results of these studies with my type of school or students?
Results of studies are sometimes inconclusive and often require thoughtful examination. Even when looking at the same data, researchers, practitioners, and others can draw different conclusions or see different value. Look for independent validation of data, and for relevance to your situation. For example, in evaluating an edtech solution to support the success of low-income rural students, one shouldn’t rely on results from a study involving a middle-class urban school.
6. What training and support will you offer to ensure we implement the solution properly and get the most out of it?
Otherwise great tools often fall victim to poor implementation and, as a result, poor outcomes. Joint commitment to success between providers and customers requires both sides to agree on the technology itself, the adoption process, and any cultural changes required to maximize effectiveness.
Providers should have formal procedures for supporting administrators and end users in integrating the solution into their daily lives, especially if it requires changes in attitudes and behaviors. It’s also important to understand whether that support comes with purchase or at an additional cost. Savvy leaders should also consider inquiring whether providers have data on how best to implement their solution to maximize its value—or at least realize enough value to justify selection and implementation.
When it comes to good decision-making about purchasing technology, it’s a shared responsibility between adopters and providers to make sure tools are the right fit and deliver results. To help raise the bar, I’m joining the EdTech Efficacy Research Academic Symposium, a first-of-its-kind symposium focused on EdTech efficacy research, organized and hosted by the University of Virginia, Digital Promise and the Jefferson Education Accelerator.
As school and district leaders, you can join the effort by using the questions above or developing your own. Most critically, exercise your power to set high expectations for the quality and utility of technology tools. Asking the right questions is an essential first step.
This piece originally appeared in EdSurge, on April 23, 2017