Building Bridges in Education Research: The Future of Research and Development Is the DARPA Model

From the outside, education research may seem pretty straightforward. Researchers study teaching and learning to determine which methods are most effective for particular kinds of students. Then they pass that information on to educators, curriculum developers, and education policymakers, who use it to improve instruction. 

But in reality, that’s not how it works. There are a number of significant complications and challenges. One of the most pressing is a frequent mismatch between how education is studied and how it is practiced. As a result, it’s not always clear to researchers which questions would be most helpful to educators and policymakers, and it’s not always clear to educators how to interpret and implement the results that come out of academic research. 

For a long time, there was excitement about the research-practice partnership (RPP), an intentional, long-term relationship between a research institution (or consortium of institutions) and a school district or group of educators. In this article, I discuss some of the challenges involved in bridging the troubling gap between education research and education practice, and highlight some noteworthy developments in this area. Most notably, I argue that the next phase needs to be a DARPA for education.

Gaps Between Research and Practice

In academic research in the social sciences, the most respected and relied-upon experiments are randomized controlled trials, or RCTs. RCTs randomly place otherwise similar study participants into either a control or an experimental group. Because (ideally) the only difference between the two resulting groups is the experimental condition being studied, the results of these studies are generally far more reliable than those of descriptive studies or studies that lack a truly randomized control group.

In the study of education, researchers generally test interventions against a control group that receives no intervention. This often produces results that are statistically significant from an academic perspective. But these results are not necessarily useful from a classroom perspective. 

As Daniel Willingham and David Daniel point out in Education Next, teachers are not choosing between one intervention and nothing, but “among several possible interventions or courses of action.” They recommend studies that instead use what is considered the current “gold standard” as the control group, in order to make their results more useful to educators.
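
To make this point concrete, here is a minimal simulation sketch (my illustration, not from the article; the group sizes, score distributions, and effect sizes are all hypothetical). It shows how a new intervention can look impressively “significant” when tested against doing nothing, while showing no detectable advantage over the current gold standard, which is the comparison teachers actually face.

    # Minimal sketch: why the choice of control group matters in an RCT.
    # All numbers below are hypothetical, chosen only for illustration.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(seed=42)
    n = 200  # students per group (hypothetical)

    # Simulated test-score gains (arbitrary units)
    no_intervention  = rng.normal(loc=0.00, scale=1.0, size=n)  # business as usual
    gold_standard    = rng.normal(loc=0.45, scale=1.0, size=n)  # current best practice
    new_intervention = rng.normal(loc=0.50, scale=1.0, size=n)  # program under study

    # Comparison 1: against doing nothing -- likely a tiny p-value, the kind
    # of "statistically significant" result that is common in the literature
    _, p_vs_nothing = ttest_ind(new_intervention, no_intervention)
    print(f"new intervention vs. no intervention: p = {p_vs_nothing:.4f}")

    # Comparison 2: against the current gold standard -- the comparison
    # educators actually care about, and likely not significant here
    _, p_vs_gold = ttest_ind(new_intervention, gold_standard)
    print(f"new intervention vs. gold standard:   p = {p_vs_gold:.4f}")

Both comparisons use the same new intervention; only the control group changes, and with it the practical meaning of the result.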

The mismatch between research and practice in education also involves context. Academic research may show that a given intervention is effective, but it rarely delves into the context-specific features that must accompany the intervention to make it effective. Kathryn E. Joyce and Nancy Cartwright argue in the American Educational Research Journal for research that is more context-sensitive. Educators need more than evidence that a given program or tool is generally effective; they need help determining whether a given intervention will be effective in their particular local context, and they need information about how to implement it there most effectively.

Greater cooperation between educators and researchers can help improve both the relevance and utility of education research. These partnerships give researchers access to the priorities of educators and the current state of practice in a given school or district, allowing them to run more context-sensitive experiments against more relevant controls.

RPPs Used to Evaluate Programming

Many research-practice partnerships have emerged in the wake of specific policy changes in order to evaluate new programs as they are implemented. These partnerships allow administrators and educators to track the effects of new policies on students as they happen, so that adjustments to programming and teaching can be made as quickly as possible.

COVID-19 and attempts to recover from the pandemic have served as the impetus for many new partnerships of this kind. For example, in Connecticut, coronavirus relief funds are being used to establish the Connecticut COVID-19 Education Research Collaborative (CCERC). A partnership among universities across the state, CCERC will evaluate COVID-related projects and other important areas emerging in the aftermath of the pandemic. Programs being studied include an initiative to reduce absenteeism and disengagement through home visits, as well as an audit of remote learning models and their impact.

Similarly, in Illinois the University of Chicago Consortium on School Research has partnered with Chicago Public Schools to conduct research into the progress of new programming and policies. One recent report produced by the partnership found evidence that teacher and student surveys about school quality are good predictors of school effectiveness. Another report on the Common Core State Standards in Mathematics (CCSS-M) and the Next Generation Science Standards (NGSS) found that instruction improved when teachers were given professional development and were able to employ aligned practices.

I’ve worked on this topic for many years. At the Center for American Progress, for instance, I co-wrote a brief in 2018 that “proposes the creation of state-level education capacity centers, which would help leaders in state and local education departments use research to inform practice.”

RPPs Geared Toward Continuous Improvement

Another kind of research-practice partnership is geared less toward large-scale evaluations and more toward using research to create smaller-scale cycles of iterative, continuous improvement in teaching and learning. One approach in this vein is called Design-Based Implementation Research (DBIR), “a method of relating research and practice that is collaborative, iterative, and grounded in systematic inquiry.”

This method has been deployed, for example, by the Building a Teaching Effectiveness Network (BTEN), a “network of different institutional partners, including the Carnegie Foundation, the Institute for Healthcare Improvement, the American Federation of Teachers, New Visions for Public Schools, the Austin Independent School District, and the Baltimore City Schools.” BTEN aims to address the problems of new teachers, who make up a growing share of the overall teacher population but are also leaving the profession at alarming rates. It focuses on using rapid, small-scale testing to give new teachers at urban schools better resources to improve their teaching, avoid burnout, and build productive relationships with fellow teachers, parents, and school administrators. The organization also works with districts to improve their teacher development systems, using evidence at every stage in the process.

Similarly, the University of Washington-Bellevue Partnership, which also uses the DBIR approach, is focused on improving elementary school science education by “redesigning and testing […] units that incorporate both student choice and culturally relevant teaching strategies.” The partnership involves a long-term collaboration based on a series of feedback loops between teachers, researchers, and curriculum designers, involving revision, small-scale testing, and scaling up solutions that work.

The Future of Education Research and Practice: DARPA for Education

These new and evolving models of the relationship between education research and practice hold a lot of promise. They can help overcome some of the problems of research that is overly theoretical and insensitive to the needs of teachers and the realities of classroom practice. Sustained partnerships will allow researchers to gain a fine-grained understanding of the needs of school districts and individual student bodies, while teachers and school administrators can benefit from research products and collaborations tailored to their contexts and needs.

But in many ways, these partnerships are simply not enough. They don’t come close to bridging the gap between research and practice, and I believe that the next step needs to be a DARPA for education, one that builds on research-practice partnerships by emphasizing the development side of R&D. 

As is fairly well known in policy circles, DARPA is a successful model. As outlined in this HBR article, DARPA “has produced an unparalleled number of breakthroughs. Arguably, it has the longest-standing, most consistent track record of radical invention in history. Innovations include the internet; RISC computing; global positioning satellites.”

A similar “special forces” approach would focus on applied development, moving forward with new tools and programs. Just as important, it would help bridge the gap between research and practice by both setting important goals and bringing together new teams that include practitioners. Under the right conditions, the insights from these developments can also be scaled up to spread the benefits more widely, going beyond the handshake approach of research-practice partnerships. 

I’ve been thinking a lot about a DARPA for education, and I’m eager to hear your thoughts and feedback in the comments. Would a DARPA for education work? 

– Ulrich Boser

6 thoughts on “Building Bridges in Education Research: The Future of Research and Development Is the DARPA Model”

  1. Yes, obviously DARPA works if there is proper governance and full political backing.

  2. Federal support for R&D in education is critical to accelerate successful innovations and raise their profile, but the impact will be limited unless there is also long-term funding for scaling up (e.g., i3 Scale Up grants). Unlike a satellite that can be designed once and then built repeatedly, educational innovations need additional support and adaptation to be successful in new environments. DBIR can be helpful for informing that work, but ongoing evaluation, technical assistance, and training for school staff will be needed for successful implementation.
