<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Deconstruction of Holistic Rubrics into Analytic Rubrics for Large-Scale Assessments of Students’ Reasoning of Complex Science Concepts</dc:title><dc:creator>Jescovitch, Lauren N.; Scott, Emily E.; Cerchiara, Jack A.; Doherty, Jennifer H.; Wenderoth, Mary Pat; Merrill, John E.; Urban-Lurain, Mark; Haudek, Kevin C.</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Constructed responses can be used to assess the complexity of student thinking and can be evaluated using rubrics. The two most common rubric types are holistic and analytic. Holistic rubrics may be difficult to use with expert-level reasoning that has additive or overlapping language. In an attempt to unpack complexity in holistic rubrics at a large scale, we have developed a systematic approach called deconstruction. We define deconstruction as the process of converting a holistic rubric into defined, individual conceptual components that can be used for analytic rubric development and application. These individual components can then be recombined into the holistic score, which stays true to the purpose of the holistic rubric while maximizing the benefits and minimizing the shortcomings of each rubric type. This paper outlines the deconstruction process and presents a case study with concept definitions for a hierarchical holistic rubric developed for an undergraduate physiology-content reasoning context. These methods offer assessment developers one way to unpack complex student reasoning, which may ultimately improve the reliability and validation of assessments targeted at uncovering large-scale complex scientific reasoning.</dc:description><dc:publisher/><dc:date>2019-09-01</dc:date><dc:nsf_par_id>10112967</dc:nsf_par_id><dc:journal_name>Practical Assessment, Research &amp; Evaluation</dc:journal_name><dc:journal_volume>24</dc:journal_volume><dc:journal_issue>7</dc:journal_issue><dc:page_range_or_elocation>1-13</dc:page_range_or_elocation><dc:issn>1531-7714</dc:issn><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>1660643</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>