Online education technologies, such as intelligent tutoring systems, have garnered popularity for their automation. Whether through automated support for teachers (grading, feedback, summary statistics, etc.) or for students (hints, common-wrong-answer messages, scaffolding), these systems have built well-rounded support for students and teachers alike. This automation, however, has often been limited to questions with well-structured answers, such as multiple choice or fill in the blank. Recently, these systems have begun adopting support for a more diverse set of question types, most notably open response questions. A common tool for developing automated open response systems, such as automated grading or automated feedback, is the pre-trained word embedding. Recent studies have shown that such embeddings carry an underlying bias from the text they were trained on. This research aims to identify what level of unfairness may lie within machine-learned algorithms that utilize pre-trained word embeddings. We attempt to identify whether our ability to predict scores for open response questions varies across different groups of student answers, for instance, between students who use fractions and those who use decimals. By performing a simulated study, we are able to identify the potential unfairness within our machine-learned models built on pre-trained word embeddings.
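The pipeline described above, embedding student answers with pre-trained word vectors, training a grader on them, and then comparing predictions across answer groups, can be sketched as follows. This is a minimal illustration, not the paper's actual models: the vocabulary, the random stand-in vectors (a real system would load GloVe or word2vec), and the nearest-centroid grader are all assumptions made for the example.

```python
import numpy as np

# Toy stand-in for pre-trained embeddings; a real system would load
# GloVe/word2vec vectors. The vocabulary and vectors here are illustrative.
rng = np.random.default_rng(0)
VOCAB = ["the", "answer", "is", "half", "equals", "1/2", "0.5", "two"]
EMB = {w: rng.normal(size=8) for w in VOCAB}

def embed(answer):
    """Represent a student answer as the mean of its word vectors."""
    vecs = [EMB[t] for t in answer.lower().split() if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

# A minimal nearest-centroid grader trained on a few labeled answers
# (1 = full credit, 0 = no credit); labels here are hypothetical.
train = [("the answer is half", 1), ("half", 1),
         ("the answer is two", 0), ("two", 0)]
centroids = {}
for label in (0, 1):
    pts = np.array([embed(a) for a, y in train if y == label])
    centroids[label] = pts.mean(axis=0)

def grade(answer):
    """Assign the label of the nearest class centroid in embedding space."""
    v = embed(answer)
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

# Fairness check: do equivalent answers written with a fraction vs. a
# decimal receive the same predicted grade?
frac_grade = grade("the answer is 1/2")
dec_grade = grade("the answer is 0.5")
```

Because "1/2" and "0.5" occupy unrelated regions of the embedding space in this sketch, the grader can assign the two mathematically equivalent answers different grades; group-level disparities of this kind are what the simulated study probes.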