In line with SEERNet’s objectives to foster research through digital learning platforms, E-TRIALS has introduced a feature that allows for detailed examination of feedback strategies for common mistakes in math homework. This development utilizes the established capabilities of ASSISTments to offer new avenues for conducting A/B tests to evaluate different feedback methods.
Since its introduction in 2003 within ASSISTments, the “Common Wrong Answer Feedback” feature has been instrumental in providing insights into educational interventions. The feature has now been enhanced with capabilities that help researchers conduct comprehensive studies on optimizing student feedback mechanisms. With this new functionality, researchers will be able to determine the most effective ways to deliver feedback on common wrong answers.
E-TRIALS is a free research platform that simplifies the process of determining which of two instructional approaches results in better student learning. It integrates with ASSISTments, a free online K-12 math learning platform that has been used by over one million students. Teachers use ASSISTments to assign students math problems from popular open educational resources such as Kendall Hunt Illustrative Mathematics and EngageNY/Eureka Math, and students receive hints or explanations as immediate feedback while working on those problems. Using the E-TRIALS platform, researchers can investigate different types of student supports (hints, explanations, or common wrong answer feedback) for math problems or study how different types of math problem content may impact learning. Studies that compare student supports can be delivered automatically to students who use ASSISTments, without requiring researchers to recruit participants. E-TRIALS is a key partner in the SEERNet initiative funded by the US Department of Education.
Feedback in education is crucial, and its specificity and relevance significantly enhance its effectiveness. When tailored to address specific misconceptions, feedback helps students recognize and correct their misunderstandings as they work on their homework. This approach reinforces specific math concepts and encourages deeper engagement with the material, potentially enhancing understanding and retention of mathematical principles. The Common Wrong Answer Feedback feature aims to facilitate a more interactive and responsive homework review process in which students are not merely corrected but guided through their errors so they can understand and learn from them. Feedback on wrong answers may help students identify misunderstandings and areas to work on, offering a more personalized learning experience. This focused feedback helps make homework an active learning experience rather than a simple assessment of students’ current knowledge. What we don’t know is what type of feedback is best for which students; exploring this question will allow us to support students better.
The “Common Wrong Answer Feedback” feature utilizes data collected through ASSISTments to identify common wrong answers, the most frequently occurring incorrect answers submitted by students (Gurung et al., 2023). The E-TRIALS Platform allows researchers to create multiple conditions of feedback tailored to each specific common wrong answer, in order to examine research questions about how different types of feedback work for students.
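To make the idea concrete, the minimal sketch below shows one way a common wrong answer could be surfaced from response logs: by counting the most frequent incorrect answers to a single problem. This is only an illustration under assumed data shapes; the function name, the `(answer, is_correct)` tuples, and the toy log are hypothetical, and the actual pipeline described by Gurung et al. (2023) is considerably more involved.

```python
from collections import Counter

def find_common_wrong_answers(responses, top_n=3):
    """Return the most frequent incorrect answers for one problem.

    `responses` is a hypothetical list of (answer, is_correct) tuples
    standing in for student response logs.
    """
    wrong = Counter(answer for answer, is_correct in responses if not is_correct)
    return wrong.most_common(top_n)

# Toy log: three students answered "3/4", two answered "0.34", one answered "7"
logs = [("3/4", False), ("3/4", False), ("0.75", True),
        ("0.34", False), ("3/4", False), ("0.34", False), ("7", False)]
print(find_common_wrong_answers(logs))
# -> [('3/4', 3), ('0.34', 2), ('7', 1)]
```

Once the top wrong answers for a problem are known, a researcher can attach a different feedback message to each one for each experimental condition.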
The Common Wrong Answer Feedback feature has clear potential to impact educational outcomes, but its effectiveness hinges on robust research and application. We encourage researchers in relevant fields, such as math education, learning science, educational psychology, and education technology, to leverage this feature in their studies.
One potential research area is the impact of generic versus personalized feedback. Researchers could design an experiment comparing the learning outcomes of students who receive generic feedback with those of students who receive personalized feedback based on common wrong answers. By analyzing performance improvements, researchers can quantify the added value of personalized feedback in reducing specific misconceptions.
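As a rough illustration of how such a comparison might look, the sketch below randomly assigns students to one of two hypothetical conditions and compares mean correctness on a follow-up problem. Everything here, including the condition names, helper functions, and toy outcome data, is assumed for illustration; a real E-TRIALS study would use the platform’s own random assignment and logs together with an appropriate statistical model rather than a raw difference in means.

```python
import random
from statistics import mean

def assign_condition():
    """Randomly assign an incoming student to one of two hypothetical feedback conditions."""
    return random.choice(["generic", "personalized"])

def mean_correctness_by_condition(outcomes):
    """Compare mean next-problem correctness (0 or 1) between conditions.

    `outcomes` maps a condition name to a list of binary scores.
    """
    return {condition: mean(scores) for condition, scores in outcomes.items()}

# Toy data standing in for post-feedback performance on a follow-up problem
outcomes = {
    "generic":      [1, 0, 0, 1, 0, 1, 0, 0],
    "personalized": [1, 1, 0, 1, 1, 0, 1, 1],
}
print(mean_correctness_by_condition(outcomes))
# -> {'generic': 0.375, 'personalized': 0.75}
```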
The introduction of this feature supports ongoing efforts to develop a more data-driven, responsive, personalized, and effective math homework framework. As the tool is further adopted and integrated into the E-TRIALS Platform, we anticipate more research studies examining the effects of different types of common wrong answer feedback.
The “Common Wrong Answer Feedback” research feature reflects ASSISTments’ ongoing commitment to improving educational technology through research and practical application. We look forward to seeing how this tool contributes to research on math homework. Researchers and educators are encouraged to explore this feature to enhance homework practices and student outcomes.
Gurung, A., Lee, M. P., Baral, S., Sales, A. C., Vanacore, K. P., McReynolds, A. A., Kreisberg, H., Heffernan, C., Haim, A., & Heffernan, N. T. (2023). How Common are Common Wrong Answers? Crowdsourcing Remediation at Scale. In Proceedings of the Tenth ACM Conference on Learning @ Scale (L@S ’23), July 20–22, 2023, Copenhagen, Denmark. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3573051.3593390