Abstract:
Automated Writing Evaluation (AWE) has a growing presence in language education, with new models of use proposed frequently. While Grammarly makes strong claims about the capabilities of AI integration in language learning, little scholarly research supports these claims. Given Grammarly's current use in English language learning and teaching, this study was designed to evaluate the accuracy of the form-focused feedback Grammarly provides. The study investigated the reports generated by Grammarly in response to a small corpus of EFL writing compiled from essays by Armenian undergraduate students. The results were converted into quantitative form and contrasted with those of human raters.
In addition to reviewing form error detection, the study also investigated the solutions that Grammarly provided. The collected data were categorized and analyzed. The results revealed that Grammarly mostly provides accurate feedback on errors of form, with occasional inconsistencies, and offers reasonably accurate solutions for correctly detected errors. However, it also missed many errors and solutions, at times producing false positives or misleading hints.