
Research Questions

Three different types of researchers use ASSISTments, and each asks a different kind of research question:

1. Learning Scientists:

They will ask questions that contrast specific features of interventions they are interested in exploring. Example: a learning scientist interested in the impact of media could design a series of videos and corresponding textual hints to accompany a math problem set, then analyze the effect of the medium on learning.

2. Math Educators:

Math educators will draw on their math education experience to find holistic ways of supporting students in specific math domains (like fractions); these supports may not clearly vary in a single factor, but instead synthesize their understanding of the material to provide comprehensive help. They will propose these well-crafted supports to be compared against other existing supports; in doing so, they are asking whether their ideas are better. See examples of published studies here.

3. Educational Data Scientists:

We have 6,000 problems, each with more than one support, and each has been randomized to over 300 students. These researchers will ask questions about the features of these supports (e.g., are shorter hints correlated with better student learning?). We also have datasets for secondary data analysis, and researchers can explore a variety of research questions using them. See datasets and examples of published studies here.
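As a rough illustration of this kind of query, the sketch below correlates hint length with a simple learning proxy using pandas. The file and column names are hypothetical stand-ins, not the actual export schema.

    import pandas as pd

    # Hypothetical flat export: one row per student-support pairing.
    df = pd.read_csv("support_logs.csv")

    # Crude "hint length" feature: number of characters in the hint text.
    df["hint_length"] = df["hint_text"].str.len()

    # Simple learning proxy: did the student answer the next problem correctly?
    r = df["hint_length"].corr(df["next_problem_correct"].astype(float))
    print(f"Pearson r, hint length vs. next-problem correctness: {r:.3f}")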

User Population

K-12 math students solving OER math curriculum problems assigned by their teachers in ASSISTments. As the students solve the problems they will be randomly assigned to different student supports created by the learning scientists or math educators.

Pre-Registration/Vetting

Vetting: A researcher who submits a study needs to be approved by an E-TRIALS team member, who will check that the study is aligned with the curriculum (teaching new methods that go against the textbook might cause confusion) and that it constitutes quality, normal educational content.

Pre-registration: We strongly support pre-registration. Learning scientists will have a clear study design that is pre-registered at OSF.io before the study is deployed. Math educators can submit their ideas and get them deployed; since many math educators are not familiar with pre-registration, we only require them to pre-register their hypotheses at OSF.io once they ask for the data.

We will initially share only a random 10% of the data, so researchers can make changes to their proposal before seeing all of it. This allows researchers to revise their analysis code after seeing a small throwaway sample of the results.
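One plausible way such a split could be drawn is sketched below in pandas; the file name, column name, and seed are assumptions for illustration only.

    import pandas as pd

    df = pd.read_csv("study_data.csv")  # hypothetical export

    # Sample whole students rather than rows, so each student's data
    # stays entirely inside one slice.
    pilot_ids = df["student_id"].drop_duplicates().sample(frac=0.10, random_state=0)

    pilot = df[df["student_id"].isin(pilot_ids)]      # throwaway 10% for piloting
    held_out = df[~df["student_id"].isin(pilot_ids)]  # released after pre-registration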

IRB requirements

Conducting the randomization of student supports at the student level is covered as normal educational practice under an IRB at WPI. External researchers must obtain their own IRB approval to receive the data for analysis.

Recruitment (Students)

Recruitment is already done for the researcher. All the researcher needs to do is build the content and get it approved; we then handle all recruitment and all data collection. The timing of the study depends on when teachers normally assign the content the researcher is working on.

Randomization

These studies are student-level randomized experiments. The number of conditions depends on the type of research: a single-support study compares the new support with the best-so-far support, while a support comparison study has at least three conditions, the two or more compared supports plus the best-so-far. The Best-So-Far condition is the highest-performing student support currently on ASSISTments for that problem.
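The sketch below shows one common way student-level assignment can be implemented, deterministic hash-based bucketing; it is illustrative only, and the mechanism ASSISTments actually uses may differ. The condition labels are hypothetical.

    import hashlib

    # Illustrative labels for a support comparison study: two compared
    # supports plus the Best-So-Far control.
    CONDITIONS = ["support_a", "support_b", "best_so_far"]

    def assign_condition(student_id: str) -> str:
        """Deterministically bucket a student into one condition (sketch only)."""
        digest = hashlib.sha256(student_id.encode()).hexdigest()
        return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

    print(assign_condition("student-12345"))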

Intervention

E-TRIALS allows researchers to vary the type of support (hints, common-wrong-answer feedback, and explanations) that students receive when completing math assignments using OER content in ASSISTments.

Example: a researcher is interested in the impact of video versus text feedback. They design a series of videos to accompany a math problem set and test whether video improves student mastery relative to the default text explanations.
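After the data are released, that comparison might be tested along the lines of the sketch below; the file name, column names, and mastery measure are hypothetical assumptions.

    import pandas as pd
    from scipy.stats import ttest_ind

    df = pd.read_csv("experiment_problem_logs.csv")  # hypothetical export

    video = df.loc[df["condition"] == "video", "mastery_score"]
    text = df.loc[df["condition"] == "text", "mastery_score"]

    # Welch's t-test: does not assume equal variances across conditions.
    t, p = ttest_ind(video, text, equal_var=False)
    print(f"Welch t = {t:.2f}, p = {p:.4f}")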

Prior achievement/demographic data

Prior achievement data: gives information about how students performed prior to completing the given problem set.

Experiment student logs: this file records students' actions while taking the assignment, such as when they clicked 'next question', entered an answer, or asked for a hint.

Experiment problem logs: this file shares information about the problems completed in the problem set, such as score and total time on task.

Experiment action logs: this file gives insights into how students completed the entire problem set.
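To make the relationship between these files concrete, here is a sketch of loading and joining them with pandas; every file and column name below is an assumption about the export format, not the documented schema.

    import pandas as pd

    prior = pd.read_csv("prior_achievement.csv")               # one row per student
    student_logs = pd.read_csv("experiment_student_logs.csv")  # one row per action
    problem_logs = pd.read_csv("experiment_problem_logs.csv")  # one row per problem

    # Attach per-problem score and time on task to each logged action,
    # then fold in each student's prior achievement.
    merged = (student_logs
              .merge(problem_logs, on=["student_id", "problem_id"], how="left")
              .merge(prior, on="student_id", how="left"))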

For each problem a student gets wrong, there is a possibility that the next problem they are asked to solve is a similar-but-not-the-same (SNS) problem that will act as a dependent measure for the study. This is a new feature being created for this project. There will also be some demographic/contextual data at the school and class level, such as guessed gender and whether the school is a Title I school. You can read about the structure of available data here and see example data sets from prior studies here.

Outcome Measures

The Dependent Measure (DM) infrastructure delivers a similar-but-not-the-same (SNS) problem to each student who gets a problem wrong. Performance on those SNS problems will be an outcome measure.
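A sketch of how that outcome might be summarized per condition; the flag and column names are hypothetical.

    import pandas as pd

    df = pd.read_csv("experiment_problem_logs.csv")  # hypothetical export

    # Keep only the SNS follow-up problems, then average correctness
    # (0/1) within each assigned condition.
    sns = df[df["is_sns"]]
    print(sns.groupby("condition")["correct"].mean())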

Analysis

E-TRIALS will not conduct the analysis after data collection; that work belongs to the researcher. As part of the OSF.io process, a project page will be created that hosts the research questions and study design created by the researcher. E-TRIALS will then post the anonymized data there, and this OSF.io page will be cited in any disseminations.