Overview – The Article & Key Findings
Many resources outline good practice for online course design, including quality rubrics such as Quality Matters, each created to ensure effective practices in the development and delivery of online courses. However, few of these resources report empirical evidence demonstrating a clear link between the approaches recommended in quality rubrics and concrete student outcomes. Moreover, among the good practices listed, little information suggests which elements matter most. The 2016 journal article “How Do Online Course Design Features Influence Student Performance?”, published in Computers & Education, addresses both of these challenges and was the focus of our May Pedagogical Innovations Journal Club.
The take-home message of this paper, by Shanna Smith Jaggars and Di Xu, is that Interpersonal Interaction is positively correlated with student performance as measured by course grades.
The authors began the study with a broad review of course quality rubrics (like Quality Matters), practitioner-oriented literature (including case studies of successful courses and theory-based frameworks), surveys, and empirical studies. From these existing resources, the authors identified four broad categories of recommendations for good practice:
- Organization and Presentation – the course is easy to navigate, and the material is clear and well organized.
- Learning Objectives and Assessments – objectives are clearly outlined at the course and unit levels, and assignments provide clear expectations.
- Interpersonal Interactions – interactions reinforce course content and objectives.
- Use/Choice of Technology – technology effectively supports course learning outcomes.
The authors then built an online course design assessment rubric with one area for each of the four categories and applied it to 23 existing online courses at two community colleges, with a total enrollment of 678 students. After determining a score in each category for each course, the authors compared these scores to the final grades students received. With this approach, they found that the scores for Interpersonal Interaction correlated significantly with student grades:
Courses with higher Interpersonal Interaction ratings tended to have higher student grades. Interestingly, Interpersonal Interaction was the only category that correlated significantly with grades; the other three did not.
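The core idea of the analysis — relating course-level rubric ratings to the grades students earned — can be sketched with a simple correlation. The study itself used richer statistical models and real data from the 23 courses; the numbers and 0-to-4 grade scale below are invented purely for illustration.

```python
# Minimal sketch: correlate course-level rubric ratings with mean grades.
# All data here are hypothetical; the actual study's analysis was more
# sophisticated and controlled for student characteristics.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical Interpersonal Interaction ratings (1 = little, 3 = strong)
interaction = [1, 1, 2, 2, 3, 3]
# Hypothetical mean final grades (0-4 scale) for the same six courses
grades = [2.1, 2.4, 2.6, 2.8, 3.1, 3.3]

print(round(pearson_r(interaction, grades), 2))  # prints 0.96
```

A value near +1 would mirror the paper's finding for Interpersonal Interaction; the other three categories would show coefficients close to zero under the same kind of comparison.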
Defining and Measuring Interpersonal Interaction
The authors described Interpersonal Interaction as “opportunities for students to meaningfully interact with the instructor, and with other students, in ways that enhance knowledge development and create productive relationships among learners.” The rubric the authors constructed for this category measured both instructor-student and student-student interaction. Below are examples of course elements that were “counted” in that rubric.
- Instructor feedback to students is specific, actionable, and timely, clearly indicating what students are doing well and what opportunities are available for improvement.
- Instructors used strategies to increase “instructor presence,” allowing students to become familiar with the instructor’s personality.
- Student-student interactions are embedded in thoughtfully designed instructional activities that are relevant and engaging to students and that advance specified learning objectives.
- The type and nature of interactivity are determined by the desired learning goal, not by arbitrary criteria for collaboration or communication.
- Interactions facilitate knowledge and skill application, not recitation.
The authors counted the instances of the above elements that they found in each course. The rubric scores were defined as follows:
- Little or no meaningful Interpersonal Interaction
- Moderate meaningful interaction with instructor and/or amongst students
- Strong meaningful interaction with instructor and amongst students
Gathering Student Feedback
To determine how students responded to faculty efforts to encourage Interpersonal Interaction, the authors conducted focus groups with a sample of students enrolled in the classes that ranked high in Interpersonal Interaction. The authors gathered student feedback about what faculty in these high-interaction classes did that helped student learning. Below are the findings from the focus groups that list faculty behaviors in the high-interaction courses.
- Instructors posted frequently – Instructors posted announcements on a regular basis reminding students of assignment requirements, upcoming deadlines, newly posted course materials and other logistical issues.
- Instructors responded to student questions in a timely manner – Typically, instructors responded to student questions within 24 hours.
- Instructors invited student questions through a variety of modalities – These instructors also tended to provide multiple ways for students to communicate with them, including:
- Discussion board postings
- Synchronous chatting (video or text only)
- In-person office hours
- Instructors were more likely to ask for student feedback and seemed responsive to that input.
- Instructors sent emails asking for student input on how to improve the course.
- Instructors seemed to care about the course and students’ performance
- From the focus group responses, it seemed that students could easily distinguish between instructors who cared and those who did not. Students said that instructors in high-interaction courses seemed to care about the course and students’ performance. This helped students feel connected to the course, motivated to learn and succeed, and personally connected to the instructor.
- Student-student interactions
- The authors state that across the 23 courses, students didn’t seem particularly engaged with each other. Even though most of the courses had discussion forums, student postings seemed perfunctory. Information from the focus groups indicated that many students perceived online discussion with their classmates as forced and artificial. The authors suggest that in the courses studied, instructor-student interaction appeared more important for student grades than student-student interaction.
Cautions from the Journal Club audience
The measurements were somewhat subjective and arbitrary. The definitions used in the rubrics were somewhat subjective, so the results might differ if the courses were scored by another rater. The choice of elements was also somewhat arbitrary, so important elements may have gone unmeasured.
Findings from a community college may not apply to all settings. Community college students can differ from students at doctoral universities or liberal arts colleges, so the findings from this study may not hold for other institutional types.
Just because the other factors didn’t correlate with student performance doesn’t mean they weren’t important. For instance, a recent Pedagogical Innovations Journal Club article described the importance of online course findability (how easy the course is to navigate). This study only examined student grades, so it’s possible that other elements matter for other outcomes, such as withdrawal and failure rates.
As with many pedagogical studies, more research is needed before we make assumptions about the generalizability of these findings. Nonetheless, this article provides a list of good-practice approaches to improving instructor-student interaction that, in at least one setting, were found to have a positive impact on student learning.