<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Surveying online interaction: Relating college instructor characteristics and perceptions to online instructional practices</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>11/01/2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10232474</idno>
					<idno type="doi"></idno>
					<title level='j'>Online Learning Research Center</title>
<idno></idno>
<biblScope unit="volume"></biblScope>
<biblScope unit="issue"></biblScope>					

					<author>G. Orona</author><author>Q. Li</author><author>P. McPartlan</author><author>C. Bartek</author><author>D. Xu</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[Little is known regarding the use of, and factors associated with, interaction-oriented practices. In this study we investigate instructors’ use of interaction-oriented practices in online college courses. We begin by drawing on several strands of literature to offer a person-purpose interaction framework for categorizing interaction-oriented practices. The framework’s six sub-domains integrate with whom students are interacting (instructor, student, content) with the interaction’s pedagogical purpose (academic, social, managerial). Subsequently, we examine factors that predict instructors’ use of these six domains of practices, including instructors’ characteristics and their perceptions of online learning, using a sample of community college instructors (n = 126) teaching online courses. The results show that instructors using more interaction-oriented practices consistently have greater employment status and teaching load, greater self-efficacy for using learning management systems, and greater perceived benefits of online learning for students, with subtle distinctions found across sub-domains. The findings have several implications for future research examining pedagogical behavior, as well as the design of professional development activities aimed at enhancing the use of effective online instructional practices among college instructors.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Online education continues to rapidly expand at postsecondary institutions. In 2018, roughly 35% of college students took at least one course online (U.S. Department of <ref type="bibr">Education, 2019</ref>). Yet, despite its fast growth and potential benefits of expanding access through increased flexibility, research consistently identifies a persistent performance gap between the face-to-face and online course delivery modes, particularly at broad access institutions <ref type="bibr">(Xu &amp; Xu, 2019)</ref>.</p><p>In light of the online performance gaps, a growing effort has been directed at identifying the unique challenges of online learning, as well as instructional practices that have the potential to circumvent these challenges <ref type="bibr">(Martin et al., 2019;</ref><ref type="bibr">Means et al., 2010)</ref>. Among these discussions, one of the most cited challenges of online learning is the difficulty in achieving effective student-content and interpersonal interactions <ref type="bibr">(Azevedo et al. 2004;</ref><ref type="bibr">Corbeil 2003;</ref><ref type="bibr">Cox 2006;</ref><ref type="bibr">Jaggars and Xu 2016)</ref>. Unlike traditional face-to-face settings where students meet and interact with the course instructor and peers during lectures, online learning imposes a physical separation between the instructor and students. This physical separation, coupled with the self-directed nature of online learning, often leads to diminished instructor-student and student-student communication and greater challenges for students to engage with the course content, which may lead to feelings of isolation and low levels of performance (e.g., 
<ref type="bibr">Huguet, et al., 2001;</ref><ref type="bibr">Moore, 1989;</ref><ref type="bibr">Nissenbaum &amp; Walker, 1998)</ref>.</p><p>The importance of connecting interpersonally and being engaged in the course content has led researchers and practitioners to propose an array of instructional practices conducive to facilitating student-content, instructor-student, and student-student interactions in an online setting.</p><p>Yet, there is limited knowledge regarding the patterns in which these interaction-oriented practices are being employed in current college online classes, and even less is known about the factors related to using (or not using) them. Understanding the prevalence and predictive factors of these practices among instructors can provide important insights into possible ways to improve the quality of online instruction in higher education. For example, an instructor's decision to employ certain practices may be rooted in their existing online course teaching experiences, perceptions of the nature and challenges of online teaching and learning, as well as their knowledge of and confidence in using technological tools and enacting specific instructional practices successfully.</p><p>Answers to these questions are fundamental to the development of effective professional development programs to help instructors improve the quality of courses taught online, especially programs intending to promote the adoption of promising instructional practices.</p><p>To address these research gaps, this study aims to achieve a better understanding of college online instructors' use of various interaction-oriented practices that the literature suggests are promising in improving student engagement in an online learning environment, as well as factors that are correlated with the implementation of these practices. 
More specifically, we address three research questions (RQ):</p><p>&#8226; RQ1: Are there meaningful patterns in the implementation of online interaction-oriented practices that classify instructors into distinct groups?</p><p>&#8226; RQ2: How can instructors' various perceptions of online learning be meaningfully organized and understood?</p><p>&#8226; RQ3: To what extent are instructor characteristics and perceptions of online learning associated with their use or implementation of online interaction-oriented practices?</p><p>To answer these questions, we develop a comprehensive survey that collects three categories of information: (i) instructor background information, including their demographic characteristics, teaching experiences, and employment status; (ii) instructor use of instructional practices in online courses, specifically interaction-oriented practices; and (iii) instructor perceptions of online education. The survey instruments are informed by two lines of literature: one regarding various domains of interaction-oriented teaching practices that the literature suggests are promising in engaging and supporting students online; the other regarding instructor perceptions of online education that may influence an instructor's teaching practices.</p><p>The survey we have developed is one of the first attempts to systematically collect information on both instructional practices and instructors' perceptions of online education.</p><p>While this study collects information from one community college only, the problems associated with online teaching and learning at this institution resemble those at other institutions in this state as well as colleges nationwide. 
In addition, given higher education's unanticipated and sweeping transition to online education in response to the COVID-19 pandemic, four-year institutions are likely to begin exhibiting substantial variability in online course design and teaching practices akin to that of community colleges, rendering the results of this study much more broadly applicable. Thus, lessons learned from this study regarding instructors' perceptions of online education and their teaching practices are likely to be useful to college administrators and professional development program directors at higher education institutions nationwide.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Conceptual Framework</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Frameworks for understanding interaction-oriented practices in online learning</head><p>We build on two existing frameworks to understand instructional practices centered on interactions in online courses. The first one is <ref type="bibr">Moore's (1989)</ref> prominent distance education framework. This framework has been used for decades in the online instruction literature; <ref type="bibr">Moore (1989)</ref> classifies pedagogical practices according to the type of interaction each promotes: student-content, instructor-student, and student-student interactions (e.g., <ref type="bibr">Anderson, 2004;</ref><ref type="bibr">Bolliger &amp; Martin, 2018;</ref><ref type="bibr">Wagner, 1998)</ref>. Based on this framework, it is essential for instructors to not only provide instructional materials that help students interact intellectually with the course content (e.g., offering lecture videos that explain how to solve an equation), but also to provide sufficient opportunities for both instructor-student and student-student interactions <ref type="bibr">(Moore, 1989;</ref><ref type="bibr">Morris, et al., 2005;</ref><ref type="bibr">Yang &amp; Cornelious, 2005)</ref>.</p><p>The second group of frameworks pivots on the pedagogical purpose of the instructional interaction and classifies online interactions into three major purposes: academic, managerial, and social <ref type="bibr">(Berge, 1995;</ref><ref type="bibr">Quality Matters, 2014;</ref><ref type="bibr">Jaggars &amp; Xu, 2016;</ref><ref type="bibr">Zhang, 1998)</ref>. Academic practices primarily aim at facilitating student learning of knowledge and skills (e.g., providing video lectures and responding to students' questions about a difficult concept). 
Managerial practices refer to logistical and administrative coordination (e.g., setting up late work policies).</p><p>Social practices refer to instructors actively promoting social exchanges between students and themselves (e.g., an instructor introducing herself to allow students to become familiar with her personality), as well as between students (e.g., providing forums where students can get to know each other's interests) to promote feelings of belonging and develop a learning community.</p><p>Taken together, the two types of frameworks suggest that instructional practices are not unidimensional: each specific interaction-oriented instructional practice can be characterized based on the parties it involves during the interaction process, as well as the pedagogical purpose that interaction serves. However, most previous studies have relied only on one type of framework, resulting in limited understanding of how the two dimensions interact with each other in shaping online instruction.</p><p>Drawing on both frameworks, we propose a three-by-three matrix (interaction × purpose) presented in Table <ref type="table">1</ref> that categorizes interaction-oriented practices in online courses into six cells defined by the intersections between two dimensions: (i) the type of interaction that the instructional practice aims to promote, and (ii) the pedagogical purpose of the interactions. Specifically, the six categories include: instructor-student academic interaction (ISAI), instructor-student social interaction (ISSI), instructor-student course management interaction (ISCMI), student-student academic interaction (SSAI), student-student social interaction (SSSI), and student-content academic interaction (SCAI). A major benefit of this more fine-grained categorization is that it enables researchers and practitioners to distinguish between different instructional practices with a higher degree of specificity. 
In the next section, we provide a brief review of existing evidence on the benefits and importance of instructional practices for each cell in Table <ref type="table">1</ref>.</p><p>[Insert <ref type="table">Table 1]</ref> </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Interaction-oriented online instructional practices</head><p>Instructor-student interaction. Extensive evidence consistently indicates that instructional practices that promote instructor-student interaction and connection can increase student engagement and satisfaction, which may lead to better learning outcomes in online courses <ref type="bibr">(Dixson, 2010;</ref><ref type="bibr">Martin &amp; Bolliger, 2018;</ref><ref type="bibr">Gayton &amp; McEwen, 2007;</ref><ref type="bibr">Sher, 2009)</ref>. These practices can be further divided into three subcategories depending on the purpose the interaction intends to serve. The first subcategory is instructor-student academic interaction, where instructors communicate with students regarding the knowledge or skills to be learned in a course. Examples include an instructor answering content-related questions in synchronous sessions, discussion boards, and/or providing timely and constructive feedback on assignments <ref type="bibr">(Bolliger &amp; Martin, 2018;</ref><ref type="bibr">Gayton and McEwen, 2007;</ref><ref type="bibr">Martin et al., 2019;</ref><ref type="bibr">Sher, 2009)</ref>. 
Indeed, multiple studies highlight the importance of instructors providing "meaningful feedback": content-related feedback beyond a mere grade or simple mark, which often leads to greater student engagement <ref type="bibr">(Gayton and McEwen, 2007;</ref><ref type="bibr">Sher, 2009)</ref>.</p><p>The second subcategory is instructor-student social interaction, where instructors and students engage in positive interpersonal interactions unrelated to academic performance.</p><p>Several strategies that enhance instructor-student social interaction are recognized as important by online students, such as instructors introducing their interests and personal experiences and referring to students by name when interacting with them in discussion forums (e.g., <ref type="bibr">Bolliger &amp; Martin, 2018;</ref><ref type="bibr">Ralston-Berg et al., 2015)</ref>. Recent developments have emphasized the significance of non-academic communication between instructors and students as a strategy leading to enhanced student learning and course satisfaction <ref type="bibr">(Cho &amp; Cho, 2016;</ref><ref type="bibr">Kang &amp; Im, 2013)</ref>.</p><p>The third subcategory is instructor-student managerial interaction, where instructors communicate with students about course policy, schedule, and other logistical issues clearly and frequently to keep students informed of course events and requirements. <ref type="bibr">Bolliger and Martin (2018)</ref> identified a list of managerial interactions between instructors and students that were highly rated by both instructors and students in online learning, such as instructors sending regular announcements and reminders and posting a "due date checklist" at the end of each instructional unit.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Student-student interaction.</head><p>Prior research has supported the important role of student-student interaction in the context of online learning in terms of enhancing student performance, completion rate, course satisfaction, and sense of belonging <ref type="bibr">(Ke &amp; Kwak, 2013;</ref><ref type="bibr">Sher, 2009;</ref><ref type="bibr">Jung, et al., 2002)</ref>. Researchers point out that frequent and effective peer interaction not only allows students to learn from each other, but also promotes positive peer relationships and a sense of community in otherwise isolated virtual environments <ref type="bibr">(Anderson, 2004;</ref><ref type="bibr">Liu, et al., 2007)</ref>.</p><p>Instructional practices that facilitate student-student interaction can serve two main pedagogical purposes. The first group of practices focuses on improving student-student academic interaction, which is characterized as activities and learning opportunities where academic exchanges occur between students and other students. Constructivist theory suggests that peer-to-peer collaborative learning can expose students to new and diverse perspectives, prompt them to think critically, and help them gain a deeper and more comprehensive understanding of the content <ref type="bibr">(Bangert, 2006;</ref><ref type="bibr">Huang, 2002;</ref><ref type="bibr">Van Merri&#235;nboer &amp; Paas, 2003;</ref><ref type="bibr">Walker, 2005)</ref>. 
In addition to working together collaboratively, students may also help each other through direct teaching (e.g., explaining a solution to an assignment question in discussion forums), which has been found to be valuable for both the students and their peers <ref type="bibr">(Goldschmid &amp; Goldschmid, 1976)</ref>.</p><p>The second group of practices attempts to achieve higher levels of student-student social interaction, such as students uploading a personal profile to the learning management system and participating in icebreaker activities to introduce themselves and connect with one another <ref type="bibr">(Bolliger &amp; Martin, 2018;</ref><ref type="bibr">Stepich &amp; Ertmer 2003)</ref>. The social interaction among students is of particular importance for enhancing students' sense of belonging and sense of community, which is essential for online engagement and persistence <ref type="bibr">(Muilenburg &amp; Berge, 2005;</ref><ref type="bibr">Hung, et al., 2015)</ref>. For example, <ref type="bibr">Stepich and Ertmer (2003)</ref> found that social interaction activities where students introduce themselves at the beginning of a course could enhance their sense of belonging. Interestingly, along these lines, <ref type="bibr">Bettinger et al. (2017)</ref> found that when online students merely address their classmates by name in discussion forums, the recipients experience positive effects on their outcomes.</p><p>Student-content interaction. Finally, unlike instructor-student and student-student interactions, student-content interaction typically pivots on improving academic understanding, rarely serving managerial or social purposes. The first line of research on strategies for improving student-content interaction focuses on the delivery media students use to access course content, such as digital textbooks, video/audio lectures, and PowerPoints <ref type="bibr">(Abrami, et al., 2011)</ref>. 
Early research in multimedia learning suggests that multimedia materials (e.g., video) provide reinforcing information channels (e.g., auditory and visual), which can improve retention of information and enhance student learning <ref type="bibr">(Lang 1995, p. 86;</ref><ref type="bibr">Mayer &amp; Anderson, 1991;</ref><ref type="bibr">Mayer &amp; Moreno, 1998;</ref><ref type="bibr">Moreno &amp; Mayer, 1999)</ref>. In a similar vein, there is evidence that students prefer instructors to provide instructional materials in more than one format, such as text, video, and audio, giving them the flexibility to choose the media most useful for a specific circumstance <ref type="bibr">(Martin &amp; Bolliger, 2018)</ref>. For instance, students may prefer audio lectures so that they can listen to the lectures on their way to work, whereas they may prefer printing out and reviewing PowerPoint slides before exams.</p><p>Moreover, instructors can promote deeper learning through activities that require higher levels of cognitive engagement with course materials <ref type="bibr">(Craik &amp; Lockhart, 1972;</ref><ref type="bibr">Czerkawski, 2014)</ref>. Strategies that are more cognitively engaging (e.g., elaboration and self-testing) play an important role in improving students' online performance <ref type="bibr">(Huamao, et al., 2006;</ref><ref type="bibr">Puzziferro, 2008;</ref><ref type="bibr">Carson, 2011)</ref>. Along these lines, <ref type="bibr">Dixson (2010)</ref> found that students consider activities engaging when they allow for the application of course concepts to case studies and involve problem-solving skills. Moreover, unlike learning in face-to-face settings, online learning often requires students to work with instructional materials independently due to instructor absence. 
Therefore, it is important for instructors to provide additional guidance and encouragement in applying cognitive learning strategies and effective studying techniques (e.g., <ref type="bibr">Rodriguez, et al., 2018)</ref>, highlighting, again, the significant role student-content interaction plays in online student learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Instructor perceptions regarding online education</head><p>We draw on the broad literature in psychology in understanding how instructors' perceptions might be related to their instructional practices. In particular, broad frameworks of motivation have been instrumental in helping education researchers study how perceptions influence motivation and behavior, such as Eccles and colleagues' expectancy-value theory <ref type="bibr">(EVT;</ref><ref type="bibr">Eccles et al., 1983)</ref>. This motivation framework is built on the idea that a variety of perceptions inform the two questions most critical to predicting motivation: "Can I do it?" (i.e., expectancies), and "Do I want to do it?" (i.e., values). It has been used to study a variety of choices for teachers (i.e., pedagogical decisions; <ref type="bibr">Foley, 2011)</ref> and students (e.g., major selection; <ref type="bibr">Keyserlingk, et al., 2019)</ref>, including student motivation to participate in online and blended learning environments <ref type="bibr">(Vanslambrouck et al., 2018)</ref>. Similar models have emerged that are more narrowly tailored to studying how instructors' perceptions affect their pedagogical practices in online courses. The technology acceptance model (TAM), for instance, specifically highlights perceptions that predict instructors' intentions to adopt technology <ref type="bibr">(Davis, 1989;</ref><ref type="bibr">Venkatesh &amp; Davis, 2000;</ref><ref type="bibr">Wingo, et al., 2017)</ref>. 
The two main predictors, perceived ease of use (how much effort a person must expend to master the technology) and perceived usefulness (how helpful the technology will be for one's job performance), bear substantial similarity to the expectancies and values central to Eccles and colleagues' <ref type="bibr">EVT (1983)</ref>, and suggest the relevance of an expectancy-value framework for identifying perceptions that may predict online instructors' practices.</p><p>Although TAM is well suited to understanding why instructors adopt specific technologies within an expectancy-value framework, online instruction encompasses a range of decisions beyond just adopting technological tools and may rely on a broader set of perceptions of the online environment as a whole <ref type="bibr">(Mercado, 2008;</ref><ref type="bibr">Wasilik &amp; Bollinger, 2009)</ref>. To organize this literature, we categorize instructor perceptions into four broad categories specific to online learning environments that have theoretical implications for instructors' expectancies and values.</p><p>Self-efficacy in using online platforms. Perhaps one of the most critical perceptions of instructors is their confidence, or self-efficacy, in their ability to use online tools to teach effectively <ref type="bibr">(Wright, 2014;</ref><ref type="bibr">Zhen, et al., 2008)</ref>. Self-efficacy is critical to, if not synonymous with, instructors' expectancies of success. Self-efficacy in online courses involves instructors' confidence in their ability to manage the course and convey content through digital media. This may subsequently feed into students' own abilities and expectations about communicating with the instructor and engaging in the online course <ref type="bibr">(Almeda &amp; Rose, 2000;</ref><ref type="bibr">Baglione &amp; Nastanski, 2007;</ref><ref type="bibr">Young, 2002)</ref>. 
Teachers' self-efficacy is widely shown to support both student achievement and teachers' own job satisfaction <ref type="bibr">(Caprara, et al., 2006;</ref><ref type="bibr">Mojavezi &amp; Tamiz, 2012)</ref>, an association likely to be mediated by the practices they adopt.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Feelings of Support.</head><p>Many barriers exist to adopting online instructional practices, making institutional efforts to alleviate initial concerns crucial for supporting instructors' decisions to attempt and persist in online teaching <ref type="bibr">(Orr, et al., 2009)</ref>. Institutions can increase instructors' expectancies for success and perceived ease of use in online teaching by alleviating concerns about compensation and time, organizational change, and technical expertise, support, and infrastructure <ref type="bibr">(Berge &amp; Muilenburg, 2001;</ref><ref type="bibr">Muilenburg &amp; Berge, 2001;</ref><ref type="bibr">Porter, 2003)</ref>. The amount of time required to design an online course is seen as a major barrier when it is seen as taking away from other activities such as research <ref type="bibr">(Bolliger &amp; Wasilik, 2009;</ref><ref type="bibr">Rockwell, et al., 1999)</ref>, and is considered a reason institutions should allot greater compensation for teaching online courses <ref type="bibr">(Porter, 2003)</ref>. Additionally, the technical complexity of online courses can discourage faculty from adopting online instruction <ref type="bibr">(Zhen et al., 2008)</ref>. Therefore, perceptions of support provided by an institution to address issues of time, inexperience, and technical problems can improve faculty's approach to online teaching <ref type="bibr">(Frederickson et al., 2000)</ref>.</p><p>Benefits. The support that institutions offer to deal with the inherent difficulties of online instruction can be complemented by instructors' perceptions of the inherent value of online instruction. Foremost among these benefits is flexible scheduling <ref type="bibr">(Wingo, et al., 2017)</ref>. 
Having a flexible work schedule is recognized by most instructors as a benefit of teaching online <ref type="bibr">(Green &amp; Brown, 2009)</ref>, and is often considered the greatest overall benefit to teaching online <ref type="bibr">(Chapman, 2011;</ref><ref type="bibr">Shea, 2007)</ref>. However, other benefits may include the professional growth that comes with adopting online instruction or the ability to reach a wider student population <ref type="bibr">(Chapman, 2011;</ref><ref type="bibr">Green &amp; Brown, 2009;</ref><ref type="bibr">Wright, 2014)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Perceived differences between online and face-to-face learning.</head><p>Beyond an instructor's capacity to use different types of instructional practices, her perceptions of potential differences between online and face-to-face education may also influence how she is going to teach the class. Face-to-face courses are intuitive benchmarks against which to judge the affordances of online courses and students. Thus, instructors' perceptions of these differences, regardless of their accuracy, may have important implications for their approach to the course, and ultimately, student outcomes <ref type="bibr">(Jussim &amp; Harber, 2005)</ref>. First, students themselves may be different in terms of their motivation <ref type="bibr">(Jaggars, 2014)</ref> and competing obligations <ref type="bibr">(Bailey et al., 2015;</ref><ref type="bibr">Author, under review)</ref>. Additionally, perceptions of how online and face-to-face students differ may interact with perceptions of online course affordances to impact instructors' perceptions that online courses are more or less advantageous for achieving common pedagogical goals, such as engaging students, organizing group projects, and monitoring students' progress. Similarly, this interaction may also impact instructors' perceptions of whether it is more or less difficult to help develop students' writing, critical thinking, or content knowledge in online courses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Summary</head><p>In summary, different forms of interactions provide the foundation for framing online pedagogical innovations. While existing research has investigated these practices to document their use and importance in the online learning environment, a more nuanced framework integrating the type of interaction and the pedagogical purpose it serves has yet to be established.</p><p>Additionally, while a variety of instructor perceptions have proved to be associated with teacher pedagogical approaches and practices in the broad teaching and learning literature, there is limited understanding regarding perceptions specific to online education that might be associated with instructors' use of interaction-oriented practices. Such information would provide important insight into possible mechanisms through which instructors choose to approach online instruction, potentially fueling targeted interventions to enhance adoption of practices beneficial to student learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Setting and Background</head><p>This study was conducted at a large suburban community college located in the southeastern United States. The institution serves over 30,000 students in associate degree and certificate programs. Additionally, over one-third of the institution's enrollment is in fully online courses. To better support online teaching and learning, the institution introduced a mandatory online learning assessment and orientation for all students registering for online courses starting in spring 2015, and a comprehensive, mandatory online teaching certification for faculty teaching online courses starting in Fall 2017. Perhaps as a result of these efforts, the success rate (A/B/C/P) of online courses has been increasing slowly in the past few years, although there is still a persistent performance gap between online and face-to-face courses: In the academic year of 2015-2016, the average course success rate in online classes was 69%, compared to 76% in face-to-face classes; in 2018-2019, the corresponding rates were 72% and 78% for online and face-to-face classes, respectively.</p><p>It is important to note that with the advent of the COVID-19 pandemic and the swift shift to online instruction in higher education, many institutions have implemented student preparation and faculty professional development programs similar to the online education initiatives at the institution of the current study <ref type="bibr">(Lederman, 2020)</ref>. Therefore, this study is relevant for understanding the instructional practices and perceptions of online instructors not only at this institution, but at all institutions attempting to improve online teaching and learning during and after the COVID-19 pandemic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Instrument Development</head><p>The development of the Online College Instructor Survey (OCIS) followed an iterative process. First, the research team conducted an intensive literature review to identify online course design features and instructional practices that are shown to be related to student learning outcomes, as well as specific domains of instructor perceptions of online education that may be correlated with the usage of these instructional practices. Subsequently, we developed a survey instrument to capture relevant aspects of these domains, according to the conceptual framework introduced in Section 2.</p><p>After the initial phase of survey development, the instrument was disseminated widely to experts in online education at the study site as well as educational psychologists to vet the items.</p><p>Feedback was incorporated and used to refine, edit, drop, or rephrase existing items and their response categories. A focus group interview was then conducted with five educational researchers specializing in online education to determine whether items appeared relevant to the domains they were intended to measure (face validity). This information spurred further refinement.</p><p>Finally, individual cognitive interviews were conducted with community college online instructors from the target population to review the survey item-by-item. Twelve instructors spanning math, physics, business, computer technology, and humanities departments at the study site were recruited. Some participants were asked to go through the full survey and provide feedback on the general clarity, time limit, and any missing aspects. Others were asked to provide step-by-step thoughts on the appropriateness, relevance, and clarity of the perception items.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Measures</head><p>The OCIS intends to capture instructors' use of interaction-oriented teaching practices and the instructor characteristics and perceptions that may be related to the use of these practices.</p><p>Accordingly, the survey consists of three sections: (i) the frequency of instructors' implementation of various interaction-oriented practices, (ii) instructor characteristics, and (iii) instructors' perceptions about online education. Most practice and perception questions were developed on a Likert-type scale, where unipolar scales (i.e., one end of the scale corresponds to low levels and the other to high levels of the measured construct) were positioned on a 5-point response anchor, whereas bipolar scales (i.e., each end corresponds to high levels of a construct, with the ends being opposite from one another, such as preference for F2F on one end and preference for online learning formats on the other) were positioned on a 7-point response anchor <ref type="bibr">(Gehlbach, 2015)</ref>. Below we briefly describe the specific items for each relevant category of variables.</p><p>Teaching practices. Considering that instructors' teaching practices may vary depending on the specific course, survey participants were asked at the beginning of the survey to indicate the course to which they would refer when answering subsequent questions regarding their teaching practices. All the questions were framed to retrospectively elicit instructors' frequency of using interaction-oriented practices in the online course they selected. The survey contains 34 questions related to teaching practices. 
As explained in more detail in Section 2 and summarized in Table <ref type="table">1</ref>, the conceptual framework specifies instructor-student, student-student, and student-content interaction along the academic, social, and course management dimensions, leading to a total of six categories of interaction-oriented practices. All the practice-related survey items have high response rates: only 5% of all response cells had missing values (not 5% of all items). Due to the low missing rate, we used a principal component analysis (PCA) data imputation method to handle missing values. A more detailed explanation of this method can be found in <ref type="bibr">Josse &amp; Husson (2016)</ref>. Appendix A, Tables A.1-A.3 present the content of the specific survey items included in each of the six categories of interaction-oriented practices, together with their scales and descriptive statistics.</p><p>Instructor characteristics. Ten items were included to obtain information on instructor characteristics. The questions pertain to instructor teaching experiences (two items), employment characteristics (five items), highest education received (one item), instructor age (one item), and discipline area (one item). Table <ref type="table">2</ref> shows the definition of these variables and sample summary statistics.</p><p>Instructor perceptions. Lastly, the OCIS contains 34 instructor perception questions on the following concepts: instructor-related benefits of or motivation for teaching online (5 items), student-related benefits of or motivation for teaching online (3 items), self-efficacy in using online platforms (6 items), feeling supported for online teaching (5 items), perceived differences in the characteristics of F2F and online students (6 items), perceived ease of achieving pedagogical goals in the F2F as opposed to the online delivery format (5 items), and perceived ease of developing student capacities in F2F as opposed to online courses (4 items). 
All perception questions were positioned on a 5-point Likert scale except items indicating a preference between online and face-to-face formats. These items were positioned on a 7-point Likert scale, with lower values representing a preference for the online format and higher values representing a preference for the face-to-face format. The content, scales, and summary statistics of all perception items are shown in Appendix A, Table A.4.</p></div>
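The PCA-based imputation cited above (Josse &amp; Husson, 2016) is implemented in the R package missMDA; the following sketch illustrates the underlying iterative-PCA idea in Python on a small hypothetical matrix. The function name, rank parameter, and data are ours, not the authors', and this is a minimal illustration rather than the exact regularized procedure they describe.

```python
import numpy as np

def iterative_pca_impute(X, n_components=1, tol=1e-6, max_iter=200):
    """EM-style imputation: alternate between a rank-k PCA reconstruction
    and refilling the missing cells with the reconstructed values."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(max_iter):
        mu = filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
        # Truncated SVD of the centered matrix = rank-k PCA reconstruction
        approx = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        new_filled = np.where(mask, approx, X)
        if np.max(np.abs(new_filled - filled)) < tol:
            return new_filled
        filled = new_filled
    return filled
```

On data with strong low-rank structure (as Likert batteries tapping a common practice domain tend to have), the refilled cells converge toward values consistent with the dominant components rather than simple column means.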
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Sample</head><p>The survey was administered in spring 2019 among all 399 instructors at the institution who had taught at least one online course in the last three years. Online instructors within the college were contacted via their school email address and invited to participate in the study, with three follow-up reminders during a two-week period. Each instructor was provided with a study information sheet that explained the purpose, rationale, and nature of their participation in the study, and that also stated their right to not participate.</p><p>A total of 209 instructors agreed to participate in the study and started the survey, yielding an over-50% participation rate; of the 209 instructors, 60% (N = 126) completed at least 89% of the survey and were included in our analytical sample. The majority of the instructors who did not complete the survey did not respond to any of the perception, practice, and characteristic questions. Most instructors in our sample had taught at least two online courses (81%), were employed full-time during the term of the survey (67%), and were employed only at the surveyed college (72%). The instructors were concentrated in STEM and health sciences (33%) and arts and humanities fields (36%). The majority of the respondents had earned a master's degree (72% of instructors in our sample). Access to instructor data for the target population is very limited, though degree attainment information was accessible. The percentages of instructors with bachelor's (11%), master's (65%), and doctoral degrees (17%) in the population are comparable, with only a slight overrepresentation of those with master's and doctoral degrees in our sample.</p><p>[Enter <ref type="table">Table 2</ref>]</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Analytic plan</head><p>Instructional practices. The analysis of instructional practices follows a three-step procedure. 
First, we compute a composite score for each of the six domains of interaction by taking the average of all items under each domain. It should be noted here that we are not using these scores to make measurement claims (e.g., presenting a validity argument that a latent construct has been appropriately quantified); nor do we presume that the practices combined in a composite share covariance and/or represent a metaphysical entity <ref type="bibr">(Markus &amp; Borsboom, 2013, p. 112)</ref>. Rather, since the literature indicates that these practices are advantageous, our aggregation is an expedient way of examining which intersection of the above-referenced framework discriminates instructors the most from one another. Furthermore, since the most common reliability estimate increases with the number of items included <ref type="bibr">(Drost, 2011</ref>), it is only by convention (e.g., <ref type="bibr">Taber, 2018</ref>) that we report Cronbach's alphas (&#945;) in Appendix A.</p><p>Table <ref type="table">3</ref> displays the descriptive statistics for the six composite scores of instructional practices and provides information on the extent to which each domain of practice is used by instructors in our sample. For example, instructor-student academic interaction (ISAI) has a mean of 4.39. Since the responses for items under ISAI were positioned on a 5-point Likert scale ranging from 1 = "Never", 2 = "Once", 3 = "Three times in total during the semester", 4 = "Every two weeks", and 5 = "Every week", a mean score of 4.39 indicates that, on average, instructors in our sample use instructional practices that center on instructor-student academic interactions between every two weeks and every week. In contrast, student-student academic interaction (SSAI) has a substantially lower mean of 2.8. 
Also following a 5-point Likert scale, a mean score of 2.8 indicates that instructors in our sample, on average, use instructional practices that center on student-student academic interactions between only once and three times in total during a semester.</p><p>Second, we standardize the six composite practice variables using z-score transformations to ensure that those with wider ranges do not exert undue influence on the subsequent cluster analysis <ref type="bibr">(Mohamad &amp; Usman, 2013)</ref>. Lastly, we employ cluster analysis to sort instructors into meaningful groups according to their composite scores in each of the six domains. Specifically, following the process outlined by <ref type="bibr">Antonenko et al. (2012)</ref>, we use k-means specifying two, three, four, and five clusters (k) through three steps: cluster identification (setting k), verification (determining the appropriate number of k), and interpretation (making meaning from the derived clusters). Cluster identification refers to the number of cluster solutions that are set to be compared to determine which solution best summarizes the patterns in the data. Verification refers to selecting or deciding on the number of clusters to interpret. A combination of the silhouette plot, the scree plot, and visual inspection of overlap serves as the criteria for selecting the number of clusters (k). Finally, interpretation refers to making meaning of the classifications by comparing clusters against the variables that generated them <ref type="bibr">(Antonenko et al., 2012)</ref>. In validating our cluster solution, we plot the instructional groups against both composites and individual items.</p></div>
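The composite scoring and Cronbach's alpha computations described above can be sketched concretely; the hypothetical item matrix below (respondents in rows, a domain's items in columns) and the function names are ours.

```python
import numpy as np

def composite_score(items):
    """Composite = each respondent's mean across a domain's items."""
    return np.asarray(items, dtype=float).mean(axis=1)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the summed scale), with k the number of items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

Perfectly parallel items yield alpha = 1, and alpha shrinks as items share less variance; as the text notes, alpha also mechanically rises with the number of items, which is why it is reported here only by convention.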
<div xmlns="http://www.tei-c.org/ns/1.0"><head>[Enter Table 3]</head><p>Instructor perceptions. Since we drew on the psychological literature in designing all our perception items, we expect adequate loadings among items that elicit information about a specific concept, and we utilized confirmatory factor analysis (CFA) to test the fit of the model to the data. We hypothesize a unique factor for self-efficacy in using online platforms, instructor-centered benefits for teaching online, student-centered benefits for teaching online, and feelings of support, and three factors for perceived differences in F2F versus online environments: (i) differences in student readiness, characteristics, and time; (ii) in achieving pedagogical goals; and (iii) in developing student capacities. Because the last three factors may be the cause of a general preference for, and perceived difference in ease of, F2F over online platforms, we begin by specifying a model with a higher-order factor over the three perceived-difference sets, for a total of eight latent factors (seven first-order, one higher-order). We then sequentially remove model complexity, starting with the higher-order factor, to find the simplest model that fits the data <ref type="bibr">(Gelman et al., 2020;</ref><ref type="bibr">McElreath, 2020)</ref>. We examine a variety of fit indices typically reported in survey validation studies (e.g., <ref type="bibr">Cronin et al., 2019;</ref><ref type="bibr">Krumrei-Mancuso &amp; Rouse, 2016)</ref>, including the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and the Bayesian Information Criterion (BIC).</p><p>Regression Analysis. In addition to performing cluster analysis and CFA for practice and perception questions, respectively, we utilize logistic regression to examine which instructor characteristics and perceptions are correlated with the usage of interaction-oriented practices. 
Specifically, we use an instructor's membership in a specific cluster as the dependent variable and regress it on instructor characteristics and the composite perception variables.</p></div>
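The logistic regression with marginal effects described above can be sketched as follows. This is a minimal Newton-Raphson implementation on synthetic data, purely illustrative (in practice one would use a statistical package); the coefficient values and variable names are hypothetical.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Logistic regression via Newton-Raphson; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        gradient = X.T @ (y - p)
        hessian = (X * (p * (1 - p))[:, None]).T @ X
        beta = beta + np.linalg.solve(hessian, gradient)
    return beta

def average_marginal_effects(X, beta):
    """AME of predictor j: the sample mean of beta_j * p_i * (1 - p_i),
    i.e., the average change in P(high-user) per unit change in x_j."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return (p * (1 - p)).mean() * beta
```

The AME is what makes statements like "each additional year of teaching is related to a roughly 3 percentage point higher probability of being a high-user" possible, since raw logit coefficients are not on the probability scale.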
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">RQ1: Identifying meaningful patterns in the implementation of online interaction-oriented practices</head><p>Cluster identification and verification. The purpose of employing cluster analysis is to describe the patterns of practice usage among instructors, and to subsequently examine the relations of cluster membership with instructor characteristics and perceptions of online education. Since we do not have pre-existing assumptions regarding the optimal number of distinct clusters of instructors, we display in Figure <ref type="figure">1</ref> the k-means solutions based on different numbers of clusters (k = 2, 3, 4, and 5, respectively), and then use both the elbow method and silhouette analysis to evaluate the degree of separation between clusters under each solution.</p><p>[Insert Figure <ref type="figure">1</ref>] Specifically, the elbow method provides insight into the optimal number of clusters by examining the sum of squared distances between each data point and its assigned cluster's centroid as a function of the number of clusters chosen (k). The optimal k is the one beyond which adding additional clusters no longer substantially improves the within-cluster sum of squares. As a result, in the scree plot that shows the within-cluster sum of squares as a function of k, the location of a bend, or "elbow," is generally considered an indicator of the optimal k. The top graph in Figure <ref type="figure">2</ref> displays the scree plot from the elbow method. 
The curve declines most sharply from one to two clusters, although it continues to decrease noticeably, rather than flattening out, beyond two clusters.</p><p>An alternative and perhaps less ambiguous way to identify the optimal number of clusters is the silhouette method, which measures the quality of a clustering solution by examining the ratio of the between-cluster distance (which captures how different instructors in different clusters are from one another) to the within-cluster distance (which captures how different instructors within the same cluster are from one another; <ref type="bibr">Burney &amp; Tariq, 2014)</ref>. Thus, over a range of possible values of k, the optimal number of clusters is the one that maximizes the silhouette, indicating maximum differentiation between clusters and minimal differentiation within clusters.</p><p>The bottom graph in Figure <ref type="figure">2</ref> displays visually the average silhouette of observations for k ranging from 1 to 10 and shows that k = 2 has the largest average silhouette score, around 0.3, a value that has been reported and used in previous studies (e.g., <ref type="bibr">Harrak et al., 2019)</ref>. Accordingly, we set k = 2 for the cluster analysis.</p></div>
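The standardization, k-means clustering, and silhouette steps described above can be sketched end-to-end; the two-blob synthetic data in the test stands in for the 126 × 6 matrix of composite scores and is purely illustrative, and the function names are ours.

```python
import numpy as np

def zscore(X):
    """Standardize each column (composite) to mean 0, SD 1."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    recompute centroids, repeat until stable."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette: (b - a) / max(a, b) per point, where a is the mean
    distance to one's own cluster and b the smallest mean distance to
    any other cluster."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        if not same.any():
            scores.append(0.0)  # common convention for singleton clusters
            continue
        a = D[i, same].mean()
        b = min(D[i, labels == c].mean() for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Running `silhouette` on the solutions for k = 2, ..., 5 and keeping the maximizer mirrors the verification step; well-separated groups push the score toward 1, overlapping groups toward 0.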
<div xmlns="http://www.tei-c.org/ns/1.0"><head>[Insert Figure 2]</head><p>Cluster interpretation. To interpret the clusters, we deploy two strategies: testing the mean differences using multivariate analysis of variance (MANOVA), and visualizing the mean differences between the clusters across the six composite variables (used in the cluster analysis) and the individual practice items (used to form the six composite variables). In Appendix A we display additional analyses to test the robustness of our results. Pillai's trace (the test statistic for MANOVA) indicates that there is a significant difference between the two clusters on the scores of the six composites (V = 0.73, F(6, 119), p &lt; .001). Additionally, separate univariate ANOVAs reveal significant differences between the two clusters across all six instructional practice composite variables, p &lt; .001. Figure <ref type="figure">3</ref> presents the average value of the six composite variables for the two groups and shows a clear "high-user" group and a "low-user" group.</p><p>[Insert Figure <ref type="figure">3</ref>] Figure <ref type="figure">4</ref> further presents the average values of all 34 individual practice items for the two groups and yields consistent patterns: the high-user group displays higher mean scores across all items except ISCM_1. Moreover, comparisons across items suggest that items that fall under the domain of student-content academic interaction are associated with the most pronounced gaps between the two groups.</p><p>[Insert Figure <ref type="figure">4</ref>]</p></div>
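The univariate follow-up ANOVAs above can be illustrated with a small self-contained function (Pillai's trace itself requires a multivariate routine, such as the MANOVA class in statsmodels, and is omitted here); the two groups in the test are hypothetical cluster composite scores.

```python
import numpy as np

def one_way_anova(groups):
    """One-way ANOVA: F = between-group mean square / within-group mean square."""
    all_x = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

With two clusters, this F test on each composite is equivalent to a two-group comparison of means; large F values across all six composites are what license the "high-user" versus "low-user" labels.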
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">RQ2: Examining how instructors' perceptions of online learning can be meaningfully organized</head><p>As our hypothesis regarding the explanation of the perception items posits a seven first-order factor model with one higher-order factor, we sequentially remove complexity to examine the fit of simpler models. Though contemporary technical arguments cast doubt on the formal justification for preferring simpler explanations of the data (e.g., <ref type="bibr">Herrmann, 2020;</ref><ref type="bibr">Sterkenburg, 2016)</ref>, a long-held commitment in the philosophy of science encourages trimming theoretical excess, and thus placing credence in simpler and seemingly more probable explanations, all things being equal <ref type="bibr">(Gelman et al., 2020;</ref><ref type="bibr">McElreath, 2020)</ref>. Because our first model is relatively complex, we sequentially shave theoretical entities (e.g., factors) while examining model fit indices. Table <ref type="table">4</ref> displays the results of all seven models tested. The table is arranged such that complexity decreases as one moves from the top row downward. While all models meet acceptable and satisfactory criteria across all fit indices (except the &#967;2 test), based on the strategy just described, we prefer the model with five first-order factors and one correlated uniqueness: &#967;2(516) = 686.94, p &lt; .05, CFI = .91, TLI = .90, RMSEA = .054, 90% CI (.043, .065).</p></div>
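For reference, the incremental and absolute fit indices reported in Table 4 are simple functions of the fitted model's and the baseline (independence) model's chi-square statistics. The sketch below uses one common set of formulas; conventions vary slightly across packages (e.g., N versus N − 1 in the RMSEA denominator), and the inputs in the test are hypothetical, not the values from Table 4.

```python
import numpy as np

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFI, TLI, and RMSEA from the model chi-square (chi2_m, df_m),
    the baseline model chi-square (chi2_b, df_b), and sample size n."""
    d_m = max(chi2_m - df_m, 0.0)   # model non-centrality
    d_b = max(chi2_b - df_b, 0.0)   # baseline non-centrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = float(np.sqrt(d_m / (df_m * (n - 1))))
    return cfi, tli, rmsea
```

A model whose chi-square equals its degrees of freedom (fit no worse than expected by chance) yields CFI = TLI = 1 and RMSEA = 0; excess chi-square relative to the baseline pulls CFI and TLI down and RMSEA up.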
<div xmlns="http://www.tei-c.org/ns/1.0"><head>[Insert Table 4]</head><p>Factor loadings for each perception item, the full text associated with the items, and Cronbach's alpha (&#945;) scores for each factor are presented in Appendix Table <ref type="bibr">A.4</ref>. The &#945; values generally range from acceptable to excellent for each factor. Accordingly, we generate perception scores by taking the mean across items for each factor; these scores are then used as predictor variables in the subsequent regression analysis (descriptive statistics for the perception scores are provided in Appendix Table A.4).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">RQ3: Relations between instructor perceptions/characteristics and instructional practices</head><p>Finally, Table <ref type="table">5</ref> shows the results of the logistic regression analysis predicting the probability of being a high-user (versus a low-user) of interaction-oriented practices. The estimated marginal effects from three model specifications are presented: (1) model 1 (column 1), which includes instructor characteristics and the subject area of the course; (2) model 2 (column 2), which focuses on the five perception factors; and (3) model 3 (column 3), which includes both instructor characteristics and perceptions.</p><p>Based on model 1, which includes instructor characteristics only, total years of post-secondary teaching at any institution positively and significantly predicts membership in the high-user group, b = .03, p &lt; .01. The marginal effect of .03 suggests that each additional year of teaching experience is related to a higher probability of being a high-user instead of a low-user by approximately 3 percentage points. Model 2 focuses on perception variables only and shows that self-efficacy in using online platforms (b = .22, p &lt; .01) and teaching online for student-centered benefits (b = .14, p &lt; .05) are significant and positive predictors of membership in the high-user group. These correspond to 22 and 14 percentage point increases for every one standard deviation increase in the self-efficacy and student-centered benefits factors, respectively. 
While not statistically significant (p &gt; .05), the coefficient on the instructor-centered benefit factor was negative, indicating a directional association with high-user membership opposite to that of the student-centered benefit factor.</p><p>In model 3, with both sets of predictors included, the coefficients for total years of post-secondary teaching, self-efficacy in using online platforms, and student-centered benefits of teaching online remain significant and, generally, stable in magnitude.</p><p>[Insert <ref type="table">Table 5</ref>] </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Key findings and implications</head><p>In this study, we develop and administer a comprehensive online college instructor survey to examine whether and to what extent instructor characteristics and perceptions about online education are related to the use of various online interaction-oriented practices. We present an integrated framework for conceptualizing online interaction-oriented practices. The holistic yet nuanced features of the framework are leveraged to develop a survey that systematically collects data on key online interactions.</p><p>Our results indicate that perceptions and characteristics associated with teachers' expectancies for success relate to decisions to use instructional practices in a manner consistent with Eccles and colleagues' expectancy-value theory <ref type="bibr">(1983)</ref>. First, self-efficacy in navigating online learning systems positively predicts being a high-user of instructional practices. Total years teaching, which is typically associated with instructors' pedagogical confidence and may therefore also be a proxy for self-efficacy (e.g., <ref type="bibr">Helleve et al., 2009;</ref><ref type="bibr">Pancsofar &amp; Petroff, 2013)</ref>, also positively predicts instructional behavior.</p><p>Furthermore, the different benefits of teaching online (e.g., benefits for students versus benefits for instructors) predict instructional practices in appropriately different ways. Both expectancy-value theory (EVT) and the technology acceptance model (TAM) posit that perceptions of an activity's benefits, or "utility," should increase motivation to engage in it. In line with these models, our results indicate that when instructors recognize the benefits that online learning holds for students, use of desirable instructional practices increases. Conversely, when instructors primarily perceive online teaching as something beneficial for themselves, use of desirable practices decreases. 
Though this relationship is not statistically significant, a reasonable theoretical explanation would be that instructors may realize benefits of online teaching for their own lifestyle, which does not motivate them to employ student-centered pedagogies. Some current empirical evidence points to this as a plausible explanation. For instance, <ref type="bibr">K&#246;nig and Rothland (2012)</ref>, in applying EVT to understand why instructors choose teaching as a profession, found that intrinsic motivation (being driven by the satisfaction of doing an activity) was positively related to pedagogical knowledge, whereas extrinsic motivation (behavior driven by external rewards) was negatively associated with pedagogical knowledge. Overall, these results support the intuitive notion that instructors are more likely to engage in desirable pedagogical practices when they are driven more by the perceived benefits online teaching can provide for their students than by the benefits it can provide for themselves.</p><p>To gain a better understanding of motivated choice in using student-centered practices, future research should more formally operationalize and test the relations stipulated in EVT as they pertain to explaining online instructional behavior in higher education settings. Furthermore, based on the results of this study, and using a motivational lens, interventions aimed at increasing instructors' use of effective practices can be tailored to target instructor self-efficacy and the perceived student-centered value of online education as conduits to higher usage.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Limitations and future research</head><p>There are a few limitations to this study. First, our sample is drawn from one college in one state, and only approximately one third of the targeted online instructor population completed the survey. As a result, the extent to which the responses of these instructors can be generalized to the broader college instructor population may be limited. However, previous work conducted elsewhere reports similar levels of endorsement for the practices reported in our study.</p><p>For instance, <ref type="bibr">Bolliger and Martin (2018)</ref> found that online instructors rate sending announcements and email reminders as the most valuable instructor-student engagement strategy; in our study we likewise found this to be the highest-rated instructional practice in the instructor-student domain.</p><p>Another limitation is the sample size. Although the sample is not small, traditional recommendations for structural equation modeling suggest larger indicator-to-individual ratios than that of the current study. Still, methodological work has shown the multi-faceted determinants that must be weighed when estimating sample size requirements for structural models, revealing that many indicators do not necessitate large samples in all situations <ref type="bibr">(Wolf et al., 2013)</ref>. Lastly, although instructors were asked to reflect on their actual use of practices, the cross-sectional data collected in this study limit our understanding of the temporal relationship between practices and instructor characteristics and perceptions. This aspect of the study could be strengthened by gathering data on instructor characteristics and perceptions before collecting data on instructional practices.</p><p>There are also several avenues for future research to explore based on the results of this study. First, the pedagogical behavior documented here relies on instructor introspection and self-report. 
Although some studies have shown strong positive correlations between self-reported measures and objective observations <ref type="bibr">(Junco, 2013;</ref><ref type="bibr">Hill et al., 2011)</ref>, determining the extent to which instructors accurately report their behavior is beyond the scope of this study. Observations of course design features and teaching strategies would be a more direct measure of teaching practices, and future research should consider examining the extent to which self-reported measures and observations of course design features are compatible. Second, future confirmatory work is needed to further validate the structure of the perception constructs. Finally, although it is important to document the relationship between college online teaching practices and perceptions, relating these to student outcomes would shed light on their respective contributions to student learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Conclusion</head><p>While research has highlighted the central role interaction-oriented practices play in student engagement and learning in online platforms, little is known about the factors contributing to their use in online college courses. To advance this area of research, the present study introduced a comprehensive online college instructor survey for the purpose of gaining insight into the characteristics and attitudes related to instructional behavior. We observe robust associations of instructor self-efficacy and perceived student-centered benefits with higher use of interaction-oriented practices. The findings of this study hold potential for future work aiming to benchmark online instructional quality and highlight areas of intervention. The results also provide an impetus for future research to examine the relationships among instructor characteristics, perceptions, practices, and achievement to enhance the quality of online learning experiences for college students.</p><p>[Enter <ref type="table">Table 1</ref>] Example item (ISCMI): "After grading an assignment, I proactively identified students that were struggling (e.g., missing assignments, low grades, low class participation) and reached out to them." Note: Text in cells are example items included in the survey measuring online interaction; the full list of practice items appears in Appendix Tables A.1-A.3. ISAI = instructor-student academic interaction; ISSI = instructor-student social interaction; ISCMI = instructor-student course management interaction; SSAI = student-student academic interaction; SSSI = student-student social interaction; SCAI = student-content academic interaction. Empty cells indicate that no meaningful instructional practice is conceptualized at that intersection of the framework.</p><p>Note to Table 2: (a) This variable is numeric; all other variables are binary (1/0). The means for binary variables represent the percent of individuals in that category; for example, 19% of instructors had previously taught at least one online course at any post-secondary institution, and 67% of the sample were full-time instructors. (b) Education variables reflect the highest degree earned as reported by the instructor at the time of the survey; that is, 71% of instructors indicated a master's degree as the highest degree earned up to the time of the survey.</p><p>Note to Table 3: Composites were computed by taking the means of items. ISAI = instructor-student academic interaction; ISSI = instructor-student social interaction; ISCMI = instructor-student course management interaction; SSAI = student-student academic interaction; SSSI = student-student social interaction; SCAI = student-content academic interaction. ISAI and SSAI items were positioned on a 5-point Likert scale ranging from 1 = "Never", 2 = "Once", 3 = "Three times in total during the semester", 4 = "Every two weeks", and 5 = "Every week". ISSI and SSSI items ask whether or not an instructor employed a certain technique (e.g., encouraged students to introduce and get to know each other) and were placed on a binary response format, with 0 = "No" and 1 = "Yes". ISCMI items did not lend themselves to weekly administration; response options included 1 = "Never", 2 = "Rarely", 3 = "Occasionally", 4 = "Frequently", and 5 = "Very frequently". SCAI_1-7 response options likewise ranged from 1 = "Never" to 5 = "Very frequently", while SCAI_8-15 used the frequency scale from 1 = "Never" to 5 = "Every week".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Additional Tables and Figures</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Appendix A: Selected Item Statistics</head><p>ISCMI_5 ("The last time you taught the course online, how often did you provide opportunities for students to give feedback about the course?"): M = 2.92, SD = 1.10, range 2-5. ISCMI_6 ("I provided explicit grading criteria (e.g., rubric) for discussion forum assignments."): M = 4.47, SD = 1.12, range 1-5. SCAI_12 ("I encouraged my students to make outlines."): M = 2.67, SD = 1.61, range 1-5. SCAI_13 ("I encouraged my students to make diagrams."): M = 2.22, SD = 1.49, range 1-5. SCAI_14 ("I encouraged my students to use flashcards."): M = 2.52, SD = 1.60, range 1-5. SCAI_15 ("I encouraged my students to reflect on their learning."): M = 3.52, SD = 1.35, range 1-5. Note: ISAI = instructor-student academic interaction; ISSI = instructor-student social interaction; ISCMI = instructor-student course management interaction; SSAI = student-student academic interaction; SSSI = student-student social interaction; SCAI = student-content academic interaction. Response scales are as described in the note to Table 3.</p><p>Note to Table 5: Model (3) includes both controls and perception variables. Across all models, the low-user category is used as the reference, so positive coefficients represent a greater likelihood of being classified as a high-user of effective online practices. Estimates shown can be interpreted as the change in the logit of the dependent variable associated with a one-unit change in the independent variable. Because this interpretation is not intuitive, we present odds ratios for model (3) in the subsequent figure.</p></div></body>
		</text>
</TEI>
