Finding the best instructional software (which includes apps and digital content) is a time-consuming and labor-intensive process. There’s no single, trusted recommendation source that educators can turn to in order to shortcut this work. Sure, with instructional software the cream often rises to the top. And some well-informed educators may vouch for these best-of-class applications in their particular content areas. But the educational software field has become increasingly crowded. And the growth of the apps market, with its many free items appealing to budget-minded teachers, has further complicated matters. All of these factors, combined with a general lack of research-based evidence on the effectiveness of most software applications, make the search harder than ever.
To help evaluate instructional software, many districts employ a piloting model in which the software is tested with students for a prescribed period of time. I ran my share of these pilots, though generally not well, mostly because we didn’t follow the procedures necessary to gain a clear understanding of the software’s classroom effectiveness and appeal. Recognizing this issue, Digital Promise, a Washington, D.C.-based nonprofit focused on ed-tech, has compiled an Edtech Pilot Framework, a set of strong resources and tools schools can use to plan and conduct a successful pilot program.
Additionally, Digital Promise recently convened a group of K-12 and higher education leaders to answer a timely question: How might we collaborate to ensure evidence of impact, not marketing or popularity, drives ed-tech adoption and implementation?
The key “change ideas” identified by this group are tall orders, but if developed and put to use, they could greatly improve how districts evaluate and select software. Among them:
• Develop a “Consumer Reports” for ed-tech purchasing.
• Train school leaders, educators and faculty in new pedagogical methods and in evidence-based decision-making.
In 2015, the U.S. Department of Education (DOE) addressed the issue of software selection by developing the Rapid Cycle Evaluation (RCE) Coach, a tool designed to assist districts’ selection processes. Though not yet widely adopted, the RCE Coach looks promising. With the current changes at the DOE, however, one must wonder whether the tool will be supported in the manner it deserves. Nonetheless, as it exists today, the RCE Coach could be a valuable resource for schools.
Finally, even if a district uses good tools like the ones mentioned here, it must also ensure its process is an inclusive one, involving teachers, principals and district curriculum staff. In my ed-tech work, we did try to be inclusive, but it was difficult to recruit the right school-based participants, ones who could commit their time and expertise to the project. We also struggled to gain the buy-in and support of our curriculum specialist colleagues.
Therein lies what I believe is another basic problem in many districts’ software selection work: having educational technology staff like me lead the process. I contend that content-area specialists from a district’s curriculum department should lead software selection, or at least serve as co-leaders. They know the curriculum and the district’s content-area standards, so they should be key players in identifying the digital resources that best support those standards. Once the software is selected and purchased, these content-area specialists should also share responsibility for ensuring their school-based colleagues are well versed and supported in using the new resources.
But that’s usually not the case. Ed-tech staff members are expected to take the lead in both the selection and the teacher-support work because, unfortunately, in many districts educational software is still seen as “technology” rather than “instructional materials.” If that perception isn’t addressed first, districts’ software selection processes will continue to be flawed.