What does the research say on supporting ELLs when it comes to writing?
What’s more, speaking, reading, and writing skills are all intertwined. For instance, young ELLs with weak English oral skills develop English reading skills at a much slower pace than young ELLs with strong oral skills. As with native speakers, ELLs’ English reading and writing skills must develop together.
While there’s no silver bullet, the same best practices that help native speakers advance their English writing skills also appear to help ELLs develop theirs. ELLs, however, face additional—and in some cases slightly different—challenges that merit special consideration.
Challenges that ELL Students Face
The general view is that ELLs need more help than native speakers in crossing the boundary from using English socially to using it academically. They don’t necessarily have enough exposure to academic discourse to be successful writers, so providing more explicit instruction about the purpose, form, and messages of different academic genres is important.
Argumentative writing instruction illustrates some of these issues. Different cultures tend to structure arguments differently. They may have different ways of getting the reader’s attention. Or different preferences for lexical forms (e.g., preferring more or less lexical variety, more or less repetition, more or less active voice). Or differences in when (or how strongly) the main claim is made.
Teachers cannot assume that ELLs share the same values or are familiar with the same rhetorical structures as mainstream students. ELLs may come with their own “argument schemas” that conflict with teachers’ expectations.
For instance, one study illustrated that Latino students more often used a “diffuse, recursive” organizational style, indirect language, and narrative elements in their arguments. By contrast, white students of European ancestry more often used a “sequential, clustered” organizational style, direct language, and descriptive elements in their arguments. These kinds of struggles are primarily about learning to conform to a different language community.
As students acquire the English language, their performance over time can be inconsistent—a student who used a grammatical construction correctly once might use the same construction incorrectly under almost identical circumstances. This finding is consistent with more general findings in strategy acquisition. Challenges in finding the right word can lead ELLs to completely change how they express an idea—or simply not write the idea down at all.
Perhaps because of these challenges, researchers report a tendency for teachers to give ELLs less rigorous instruction because ELLs lack certain foundational skills. Empirical research, however, suggests that this approach is misplaced. The most successful programs give ELLs challenging content, guided practice, peer interaction, and exposure to a variety of writing genres (just like the best programs for native speakers).
For instance, a longitudinal 8-year quasi-experimental study in a district where 93% of the students are ELLs used an intervention that incorporated challenging texts, explicit reading and writing strategy instruction, meta-cognitive scaffolding, peer interaction, and process writing—in short, many of the strategies found to be most effective in teaching writing to native speakers. The control group was made up of classes at the same schools, taught at the same ability level as the intervention classes. The researchers also compared results on state-wide examinations to state averages.
The results were impressive. Each year, the intervention group outperformed the control group on gain scores measured through beginning-of-the-year and end-of-the-year essays. On standardized tests, the intervention group pass rate was substantially higher than the control group pass rate (74% to 54%, 91% to 75%, and 93% to 66%), the district pass rate (74% to 40%, 91% to 66%, and 93% to 62%), the ELL pass rate (74% to 28%, 91% to 42%, and 93% to 39%), and the overall state pass rate (74% to 54%, 91% to 78%, and 93% to 75%) for every year measured. This kind of project seems to exemplify best practice.
ELLs also seem to benefit from techniques that reduce cognitive load. Elementary school ELLs, for instance, report lower cognitive effort when dictating their writing, and the resulting essays have more words, fewer errors, and higher overall quality. Dictating to a speech-to-text program, however, did not produce as much benefit as dictating to a scribe. Having students write compositions (as opposed to journal entries or letters) is also associated with higher learning outcomes.
What does improve student outcomes? Lots of practice without feedback, plus “positive evidence” (examples of correct usage).
Issues in Correcting Grammar
Research in this area encompasses several different learning contexts: students learning English as a second language in foreign countries (Vietnam, China, Finland, etc.), ELLs in the U.S., and students learning English (or another language) as a foreign language. Some of the same questions about feedback come up in oral contexts as well.
Focusing on Grammar Exclusively is a Mistake
Programs and interventions that emphasize both “form and content” (that is, the grammatical form and saying what it is the student wants to say) are superior—especially in a second-language context—to those focusing on form alone. Along the same lines, artificial writing tasks just don’t seem that helpful, while focusing on communicating meaning does. This is consistent with the idea that teachers should integrate the development of both low-level and high-level skills.
Other research suggests that explanations of error patterns and training in the revision process play a larger role in student improvement than directly correcting errors.
These findings also coincide with broader findings in the writing literature, which suggest that grammar instruction doesn’t help students nearly as much as any number of other approaches that focus on the writing strategies, processes, or goals.
That said, the following is a summary of research in this area.
Corrective Grammar Feedback Improves Language Skill
The theoretical basis for this position is that students have to notice their errors before they correct them. Corrective error feedback helps people notice their errors and provides them with information about how to correct them.
Psychological research also suggests that skill acquisition roughly proceeds through three stages: as declarative knowledge (the student learns what the grammar rule is and can recall the grammar rule if prompted), then as procedural knowledge (the student consciously tries to apply the grammar rule, but this requires considerable effort), and finally as an automatic skill (the student produces the correct grammatical construction without thinking about it). Feedback (and subsequent practice) helps move skill development toward automation of the target skill. This position also coincides with research in pretty much every other area: feedback is one of the keys to skill improvement.
Although the evidence favors this position, research studies actually testing the impact of corrective feedback have been plagued by various methodological and contextual issues. Early studies assumed that since to-be-revised essays improved after feedback, long-term learning occurred. But this assumption is not necessarily correct. Several studies show overall improvements on revised essays during treatment, only for performance to remain stagnant on another piece of writing weeks or months later. Delayed post-tests are thus a necessary part of any research design in this area.
In the past fifteen years or so, numerous studies with improved research designs demonstrate that corrective feedback does improve writing skill on realistic writing tasks, in actual classroom settings, and over long periods of time. Several meta-analyses of existing studies confirm these findings.
Corrective Grammar Feedback is Unnecessary or Possibly Harmful
First, these researchers suggest that when children learn their first language, they learn to produce perfectly grammatical sentences without corrective feedback. A second language, they argue, is no different. Both of these assertions are questionable: children receive plenty of oral feedback in the form of “recasts” from parents and other adults, and they face corrective feedback in early school experiences. There are also plenty of reasons why acquiring a second language might differ from acquiring a first (e.g., the amount of exposure is different, brain development is at a different stage, first-language structures can interfere with acquiring a second language, etc.).
Second, researchers suggest that correcting students’ grammar can make students use simpler, shorter sentences to avoid making “mistakes”. Such feedback, the argument goes, simply penalizes behavior that should be rewarded—attempts to use realistic, complicated grammatical forms. Empirical research, however, sharply contradicts this idea. Studies that track the growth of grammatical complexity over time across corrective-feedback and no-feedback conditions consistently find that students do not reduce the complexity of their language to avoid errors. Furthermore, grading schemes that reward students for grammatical complexity can encourage students to keep trying complex structures even while receiving lots of feedback.
Third, researchers in this camp argue that there simply isn’t much evidence that corrective feedback benefits students in realistic transfer scenarios. According to this position, grammar correction can help students improve the essay they’re working on or answer rule-based grammar questions, but it doesn’t result in improved performance in essay writing. Truscott has been particularly vocal about this point. His 2007 meta-analysis, for instance, found that corrective feedback had absolutely no effect in realistic transfer scenarios. His findings, however, are sharply contradicted by numerous other meta-analyses and studies over the past fifteen years or so.
Fourth, there are some practical considerations. Teachers don’t always give consistent (or correct) feedback. Students don’t always interpret the feedback accurately (especially some indirect forms of feedback, such as grammar codes). And students may revise their essays based on feedback without really understanding why the change was important or necessary. All of this can simply lead to student confusion. And there’s some empirical support for all of these claims.
Assisted Writing Feedback: Can It Help?
Generally speaking, these platforms have high precision (i.e., somewhere between 45% and 80% of the errors they identify are correctly identified, the rest being false positives) but low recall (i.e., the software catches somewhere between 13% and 40% of all possible errors). ELLs make different kinds of mistakes than native speakers (and different groups of ELLs make different kinds of mistakes, too). These rates also depend on the kind of error: articles, prepositions, verb tenses, run-on sentences, and complex sentences all seem hard for the software, while capitalization, spelling, white-space, and punctuation errors can be identified pretty easily. Software tends to miss the kinds of errors that ELLs make.
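The precision and recall figures above follow the standard definitions from information retrieval. As a quick illustration—the counts below are invented, chosen only to fall inside the ranges reported above:

```python
def precision(true_positives, false_positives):
    # Of the errors the software flagged, what fraction were real errors?
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    # Of all real errors in the essay, what fraction did the software flag?
    return true_positives / (true_positives + false_negatives)

# Hypothetical essay: the software flags 20 "errors", of which 13 are real
# and 7 are false positives; the essay actually contains 50 errors,
# so 37 real errors went unflagged.
p = precision(13, 7)    # 13/20 = 0.65 -> within the 45%-80% precision range
r = recall(13, 37)      # 13/50 = 0.26 -> within the 13%-40% recall range
print(f"precision={p:.2f}, recall={r:.2f}")
```

In other words, when the software flags something, it is usually right—but it silently passes over most of what a teacher would mark.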
Judging from the student surveys in these studies, students seem to recognize the limitations of the software and make sound judgments about whether and what to revise (although middle schoolers and high schoolers might have more trouble with this), but the overall low accuracy on more meaningful mistakes suggests it’s only somewhat helpful. There are some side benefits, though—students will self-revise errors not found by the software just from looking at the sentence again. And some software suggests synonyms, which anecdotally helps broaden student vocabulary.
As far as I know, only one study compared a group given automated writing evaluation (AWE) plus teacher feedback to a group with teacher feedback alone. But the teachers in each group behaved differently, making it hard to say what was causing any difference in outcomes. More research along these lines would be interesting. But more research in the right context (on younger ELLs in the States) would be great, too.