The Learning Agency Lab launched the Teacher-Run Experiment Network in an effort to bridge the divide between education research and teaching practice. COVID-19 and the sudden shift to remote learning only raised further questions about what works in education, in part because very little research exists on how to make full-time K-12 remote learning effective. If there was a research-practice gap before, it’s a chasm now.

With the support of Ben Motz at Indiana University (IU), over 45 teachers will spend the next year answering their most pressing research questions by conducting randomized controlled trials (RCTs) with their students. Through this program, we hope to build an active community of educators with both the skills and the motivation to design rapid learning experiments, and to give teachers easy-to-use tools that embed these experiments in the large-scale platforms they already use, with data collection that supports broader scientific learning.


Observers of the U.S. educational system have complained for decades now of a gap between educational research and education practice. For instance:

  • The adoption of dubious education technology — decisions about whether to adopt a technology are often driven by hopes about what it might do rather than evidence about what it actually does.
  • School districts that make arbitrary decisions without regard to, or sometimes in direct contradiction to, research findings — school reform movements, for instance, do not have a particularly good track record of hewing closely to knowledge about how students learn.
  • Precious little science incorporated into teacher training — a 2016 report, for example, found that major training textbooks included little learning science and even passed anecdotes off as hard data.
  • An insular and unresponsive research community — in many cases, education research simply does not address the kinds of questions that teachers and administrators are asking themselves.

The specific complaints vary, but the effect is the same: the research and teaching communities often move on separate tracks, especially in the policy-inflected K-12 arena.

As one of our program’s teachers—Kim Kelly—notes: “In education we tend to latch onto new ideas or resources without stopping to explore their impact on student learning or other desired outcomes.”

There are many ways of reaching across this chasm. But one particularly striking way is to empower teachers to ask (and answer) research questions themselves, so that teaching communities have a stronger voice in conversations about research and evidence-based decision-making.


Teacher-driven research is not new. However, most teacher-driven research falls under the category of “action research.” In this approach, teachers observe their class, propose a new teaching approach based on their observations, and evaluate the new approach. Action research has several virtues: it orients teachers to improve their classroom practice, it can explore classroom-wide changes, and it’s feasible in most classroom contexts.

But it also has several drawbacks: teachers usually aren’t changing one thing at a time, so it’s hard to figure out which classroom change led to which outcome. It also usually involves two different cohorts of students, raising questions about whether the cohorts are truly comparable.

Depending on who you talk to, the goals of action research also diverge from the goal of generating generalizable knowledge. Action research isn’t typically used as a basis for policy-making.

In recent years, RCTs have become more common in education research and show significant potential for improving student outcomes. Providing tools and support for teachers to run these types of studies in their classrooms will only enhance their instruction and improve the learning experience for their students.


The gradual shift to computer- and online-based learning experiences has opened up space for the use of RCTs. This was true before the pandemic, and it’s even more true now.

One example comes from a study by Bill Hinkley, a math teacher who uses ASSISTments, an online math tool. He designed a study to test how solving math problems with pencil and paper affects his students. In this RCT, he divided students into two groups and assigned a problem set on ASSISTments. The control group completed the problems as usual, while the treatment group received additional video reminders to do their calculations with pencil and paper.

The results were surprising: students in the treatment group outperformed their classmates by about 13 points. Of course, more research is necessary, but the results are suggestive.

As Bill Hinkley’s study shows, RCTs offer several distinct advantages. Most importantly, they create comparable treatment groups, producing more convincing evidence linking changes in teaching practice to changes in learning outcomes. RCTs can test the effects of very minor changes, like a different explanation, visualization, or learning sequence. Study protocols can also be replicated in new contexts, revealing more about the effectiveness of a proposed intervention and the mechanism behind it.
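The core comparison behind a study like Hinkley’s is simply a difference in group means. A minimal sketch, using scores invented for this illustration (not data from any real classroom):

```python
from statistics import mean

# Invented problem-set scores for illustration only (0-100 scale).
control = [62, 70, 55, 68, 74, 60]    # completed the problems as usual
treatment = [75, 82, 68, 79, 88, 73]  # received pencil-and-paper reminders

# The estimated treatment effect is the gap between group averages.
effect = mean(treatment) - mean(control)
print(f"Estimated effect: {effect:.1f} points")
```

In practice, a study would also report uncertainty (a confidence interval or significance test), since small classroom samples make point estimates noisy.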

Of course, the RCT is no silver bullet, either. Some questions—especially questions about classroom-wide experiences—can be challenging to answer with the RCT approach. There are also simple practical obstacles that stand in the way of using more RCTs to study student learning in the classroom.

For instance: how do you cleanly give different treatments to different groups of students? How do you overcome the small sample sizes of most classrooms? How do you obtain and manage student consent to research? And how do you avoid bias in evaluating students when the teacher knows which treatment group they’re in?
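The first of those questions, clean random assignment, is the most mechanical and the easiest to sketch. Here is a minimal example (the roster names and the `assign_groups` helper are hypothetical, not part of any platform mentioned here):

```python
import random

def assign_groups(roster, seed=None):
    """Shuffle a class roster and split it into two groups.

    Shuffling gives every student the same chance of landing in either
    group; a fixed seed makes the assignment reproducible and auditable.
    """
    rng = random.Random(seed)
    students = list(roster)
    rng.shuffle(students)
    half = len(students) // 2
    return {"control": students[:half], "treatment": students[half:]}

groups = assign_groups(["Ana", "Ben", "Cam", "Dia", "Eli", "Fen"], seed=7)
print(groups["control"], groups["treatment"])
```

A tool like the one described below would handle this invisibly, but the point is that the randomization step itself is simple; the hard parts are consent, blinding, and sample size.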

These are significant hurdles. But several teachers have already begun to overcome them. A group of math teachers who use ASSISTments, including Bill Hinkley, has performed several RCTs, giving them insight into their classroom practice.

To do this at scale, however, requires a different kind of tool. Several online learning platforms, such as Carnegie Learning’s MATHia and ASSISTments’ Etrials, have begun to offer research tools to math teachers, but no general-purpose tool gives teachers in all subjects the power to collaborate with other teachers, randomize students into treatment groups, and share research designs or de-identified data with other researchers.

As part of the program, teachers in the Teacher-Run Experiment Network will serve as beta testers for Terracotta, a general-purpose tool that will provide these features while interacting smoothly with existing learning management systems.

It’s this tool that we believe will make teacher-driven research more common and more powerful.


You can find out more about our teachers and their research interests here!


If you want to join our network and perform research in your classroom, please contact Aigner Picou (aigner@the-learning-agency.com) for more information.