

Triple Hybrid (TriHy): What Happened When COVID Hit the Research Study

Ildiko Porter-Szucs and Barry DeCicco

Abstract

This study examines whether the success of students in a master's-level TESOL (teaching English to speakers of other languages) course on assessment is comparable regardless of the mode of attendance chosen for this 'Triple Hybrid' (or 'TriHy') course. The three modes of instruction were face-to-face (F2F), synchronous online (SO), and asynchronous online (ASO). The study began in January 2020, before the COVID-19 pandemic reached the university. For the first ten weeks, the course was taught as originally designed, but when the university suspended in-person instruction, the F2F group had to join the synchronous online (SO) group for the final five weeks of the semester. Although the differences in student success in the course were not statistically significant, the differences in students' perceptions of the factors contributing to their success were. We conclude that instructional quality is not compromised in a Triple Hybrid format that benefits from considerable institutional support, a substantial commitment of instructor time and effort, and thoughtful choices by students.



Introduction

1This study was designed prior to the SARS-CoV-2/COVID-19 pandemic, in response to recent changes in enrolment patterns in language-teacher-education programs at US universities. Students seeking to become teachers of English to speakers of other languages (TESOL) had with increasing frequency been inquiring about online courses and even fully online degrees. Others, however, had continued to prefer on-campus degrees. While some TESOL programs had been quick to offer master’s degrees online, others—such as the site of this study—remained reluctant to do so. The training of language teachers is primarily about skill building. In particular, teachers-in-training need to be able to acquire both theoretical knowledge and practical skills in teaching, learning, and assessment; integrate theory with practice; and hone their praxis in response to expert feedback. The primary concern for many institutions has been the perceived inferiority of online language-teacher education (OLTE) in developing the requisite skills in their pre-service teachers. These institutions fear that their options consist of either compromising their students’ learning outcomes due to modality limitations or losing their degree programs due to declining enrolment in face-to-face (F2F) classes (Epstein, 2001; Mills, Yanes, and Casebeer, 2009; Prescott, 2010; Shin and Kang, 2018). In order for institutions to be convinced otherwise, applicable studies are needed to demonstrate that student learning outcomes are comparable across the two types of distance education (synchronous online, or SO, and asynchronous online, or ASO) when directly compared to F2F education. This study aims to fill this gap by investigating whether the success of students in an MA TESOL assessment course is comparable regardless of the students’ chosen mode of attendance (F2F, SO, ASO) in this ‘Triple Hybrid’ (or ‘TriHy’) course.

Triple Hybrid Course Design

2In the TriHy course, the students are offered three different modes of attendance: F2F, SO, and ASO. In this set-up, the instructor teaches three cross-listed sections of a single course. Through a learning management system (LMS) platform, all students have access to course tools such as course materials, assignment descriptions, assignment submission areas, discussion boards, email communication, quizzes, and video recordings of class sessions. From the students’ viewpoint, the most apparent difference among the three modalities lies in their engagement in class sessions. F2F students are present in the classroom with the instructor. Simultaneously, SO students connect to the classroom remotely via technology, which employs a camera and microphone to transmit real-time classroom activities to them; wall-mounted monitors allow the F2F students to see the SO students as well as to project any slides or audio-visual materials. These class sessions are video recorded (e.g., on Zoom), uploaded to the LMS, and made accessible for ASO (and the other) students to view outside of scheduled class hours. Students’ attendance modality naturally influences their ability to participate in class, ask clarifying questions, and interact with their classmates.

3TriHy, however, is unlike the well-known hybrid flexible (HyFlex) model, where the students can choose how to participate on any given day; in TriHy, students are asked to commit to one mode and, barring unforeseen circumstances, will remain in that modality for the entire semester. This provides the instructor with consistency and stability for the practical activities, which is crucial in some fields such as language-teacher training. Another feature of TriHy is that students are intentionally integrated throughout the course. They are encouraged to form mixed-modality groups for the entire semester. These groups serve two primary purposes: firstly, to provide mutual support and create a cohesive class community where students can readily seek assistance regardless of their modality, ensuring that no student feels disadvantaged due to their modality; and secondly, to participate in a group project aligned with the course’s learning objectives. For example, in the class under study, students designed, piloted, administered, and evaluated an English-language assessment tool. In a TriHy course, each student has the opportunity to take a leading role in certain assignments and to manage the flow and content of the live lessons. The ASO students may even be asked to take the lead on the asynchronous online discussion boards, where all the modalities meet on equal footing. The instructor considers the needs of the three groups when selecting the technology, designing the course architecture, delivering the course, and being accessible to the students outside of class. The present article reports on a study examining whether student success in a TriHy MA TESOL assessment course was comparable across the three different modes of attendance and how the COVID-19 pandemic affected students’ experiences when in-person instruction was shut down and the F2F group had to join the SO group for the final five weeks of the semester.

Literature Review

4Publications investigating student learning outcomes in online education abound (DETA Center, n.d.; Fishman et al., 2013; Joosten et al., 2021). These studies have examined different instructional modalities and yielded a wide variety of results. However, there are virtually no publications comparing student learning outcomes in TESOL or in language-teacher education. The publications that do exist do not compare the success of language-teacher candidates in all three modalities of participation: F2F, SO, and ASO (Fishman et al., 2013; Moradi and Farvardin, 2019; Rovai and Jordan, 2004).

5The earliest study of OLTE was Nunan’s (2002) comparison of MA TESOL students’ modality preferences in F2F and ASO settings. The students were found to prefer F2F even though the convenience of ASO appealed to them. This sentiment resonates with many students to this day (Gherheș, 2021; Qiang and Zhang, 2023; Rachman, 2020; Spencer and Temple, 2021) and provides an argument in favor of offering students multiple modality options.

6Another early study by Rovai and Jordan (2004) compared sixty-eight primary and secondary-school teachers pursuing an MA in Education program through F2F, ASO, and blended (a hybrid of F2F and ASO) modalities. The F2F group was found to outperform the others in learning outcomes while the blended group felt most connected to the class. In the intervening twenty years, however, course design practices have changed and even F2F classes tend to contain an online component and ASO classes consist of more than mere reading of texts, as alluded to earlier. Thus, with the advancement of course-design practices, Rovai and Jordan’s study leaves few relevant lessons for us today besides the importance of intentional community building among students in different modality groups. This became a pillar of the TriHy model and it can be credited—to some extent—with the success of this course design.

7Two studies on the quality of interactions in various modalities are worth examining more closely. Although the first study examined the specific learning outcomes of English learners rather than their teachers, both ESL and TESOL classes build skills through frequent interaction and are relevant to the current study. Moradi and Farvardin (2019) investigated the frequency of negotiation of meaning among English learners in pair and small-group interactions. The two modality groups were computer-mediated SO, where the students exchanged typed messages, and F2F, where the students were able to talk to each other. While both groups were found to produce frequent negotiations, the F2F group outperformed the SO group in the number of negotiation moves in less time. It is not surprising that students who had to type were slower in producing language than those who could talk, but the fact that both groups engaged in frequent negotiation of meaning is noteworthy.

8Leijon and Lundgren (2019), on the other hand, interviewed college professors who teach in hybrid (F2F and SO) settings about their use of the physical and virtual spaces inhabited by the various modalities. The teachers reported often struggling with the design of interaction and ‘that a HyFlex model requires an increased didactic awareness of designing for learning’ (p. 1). This emphasis on course architecture significantly influenced the development of the TriHy model.

9An examination of the literature highlights a scarcity of directly applicable studies, specifically those comparing student learning outcomes in teacher-education programs across F2F, SO, and ASO modalities, particularly where even the F2F group utilises a learning management system. However, the examined studies did inform the course design and the current study by flagging a range of factors: the need for a course architecture that attends to both the online and the physical classroom environments; the importance of creating a cohesive class community through course assignments; the provision of opportunities for all students to take on leadership roles in the course; easy access to the instructor and peers for all students; and the monitoring of student preferences regarding modalities and course elements.

Research Questions

10The study sought to answer the following primary (1) and secondary (2, 3, 4) research questions:

11RQ1: What are the differences—depending on the modality of attendance—in the students’ success in the language assessment course?

12RQ2: What are the differences—depending on the modality of attendance—in the students’ perception of the factors that are expected to contribute to their success?

13RQ3: What are the differences—depending on the modality of attendance—in the students’ prediction/perception of their success?

14RQ4: Which sources of information and support—depending on the modality of attendance—do the students rely on in pursuit of success in the class?

Methodology

15The researchers used an interactive, convergent, mixed-methods design, in which they simultaneously collected quantitative and qualitative data, analysed them, combined the results, and interpreted the results in a way that each informed the other. They collected data using the same variables, constructs, and concepts across both data types (as described in Creswell and Creswell, 2018). They were guided by a pragmatic, participant-centred framework.

Measurements

  • 1 The 2022 article focuses on quantitative analyses of these measures.

16RQ1: The goal of the class was for all students, regardless of modality, to be successful. The purpose of this study was to measure, compare, and understand success across modalities. Success was measured by these quantitative proxies (Porter-Szucs and DeCicco, 2022):1

  • course-completion rates,

  • weekly class attendance,

  • completion of weekly assignments,

  • grades on low-stakes individual assignments,

  • grades on a high-stakes individual assignment, and

  • the final course grade.

17RQ2 and RQ3 were evaluated by an analysis of the responses to a set of survey questions on perception, attitude, and preferences. These were questions 3(a)–(m) in Appendix A (for RQ2), which were asked at the beginning and middle of the class, and the same questions in Appendix B (for RQ3), asked at the end of the class. The post-course part 1 survey (‘post-part-1’ in Appendix B Part 1) asked students to answer based on the groups they were in before the March 11 COVID lockdown and change in class structure; the post-course part 2 survey (‘post-part-2’ in Appendix B Part 2) asked students to answer based on the groups they were in after March 11. This means that the same questions were asked at the end of the class about both the pre- and post-March 11 groups. The differences in scores (called ‘change scores’) between these two sets of responses would give information on how the students perceived the effects of the changes.

18RQ4: Additional measurements were taken to gain deeper insights into the factors contributing to the students’ motivations and preferences. These measurements included

  • surveys of participants before, during, and after the course (see Appendices A and B), and

  • records of how they communicated with the instructor (the instructor logged communications: date; time; topic; and mode of communication, such as email, F2F, telephone, text message, individual, or group).

19These measurements were designed to capture various aspects of the students’ experiences, such as preferred teaching modalities, degree of clarity/confusion, perceived access to sources of support, aspects of practicality of course attendance, and student satisfaction.
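The communication log described above amounts to a small structured dataset. As a minimal sketch (the field names and example records below are hypothetical illustrations, not the study's data), each logged contact could be modelled and tallied as follows:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Contact:
    """One logged student-instructor communication."""
    when: datetime
    topic: str           # e.g. "course content" or "housekeeping"
    mode: str            # e.g. "email", "F2F", "telephone", "text message"
    group: bool = False  # individual (False) or group (True) contact

# Hypothetical log entries, for illustration only.
log = [
    Contact(datetime(2020, 1, 15, 10, 5), "housekeeping", "email"),
    Contact(datetime(2020, 1, 22, 18, 30), "course content", "F2F"),
    Contact(datetime(2020, 3, 12, 9, 0), "housekeeping", "email", group=True),
]

# Tally contacts by mode of communication.
print(Counter(c.mode for c in log))
```

Tallying by topic or timing works the same way, yielding the frequency counts per modality group on which the RQ4 comparisons rest.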

Timing, Groups, and Comparisons

Timing:

– Questions (on attitudes, preferences, and beliefs) were asked at each of three times:

- Pre-semester (Appendix A)

- Mid-semester (Appendix A)

- End of semester (Appendix B) - This was a pair of post-course surveys: post-part-1 and post-part-2. (See Measurement section above.)

– Quizzes, tests and discussions were administered throughout the semester.

Groups:

– The original groups, as of the start of class, were: asynchronous online (ASO), synchronous (SO), and face to face (F2F).

– The restructured groups, created after March 11 and the COVID lockdown, were Asynchronous and Synchronous (the latter consisting of the original synchronous online [SO] + the original face to face [F2F]).

Comparisons: These could be made in three ways:

– Within group, from one time to another (across time), examining within-group changes.

– Between groups, at the same time (across groups), examining between-group differences.

– Both (comparing group changes), examining how changes across time differed between groups.

Procedure

20The study began in January 2020, before the pandemic reached the university. This course was offered at the undergraduate and graduate levels. A master’s-level class was selected for the study due to the experimental course design and a lack of evidence in the literature that such a course design would necessarily be successful. Graduate students—given their more extensive life and educational experiences and greater self-confidence in bringing potential challenges to the instructor’s attention—were deemed more likely to respond positively to this experiment than were undergraduates. A fellow professor not affiliated with the study requested consent from the participants at the beginning of the first class. Eighteen of the twenty students (ranging in age from approximately early 20s to mid-50s) consented to participate in the study. Nine enrolled in the F2F, five in the SO, and four in the ASO sections.

  • 2 In early 2020, the coronavirus spread rapidly worldwide, including in the United States. By March, (...)

21During the semester, the instructor logged various aspects of communications with the students: topic (related to course content or to housekeeping matters, such as logistics about assignments or due dates), timing (during class, immediately before or after class, on another day), and modality (in person, email, text message, telephone call, video conference). The colleague who was not involved in the study also distributed surveys to the participants. These surveys, containing both closed- and open-ended questions, included a pre-course survey (pre-), a mid-semester survey during week 7 (mid-), and two versions of the post-semester survey (post-part-1 and post-part-2), as explained below (see Appendices A and B). For ten weeks, the class ran as originally designed, with students attending according to their chosen modalities. Due to the shutdown of F2F instruction in the wake of the COVID-19 pandemic lockdowns in early 2020,2 the former F2F section was combined with the SO section for the remaining five weeks of the semester. The initial and mid-semester surveys were administered prior to the lockdown on March 11, which was during the 10th week of the semester. The originally designed final survey was then revised to elicit reflections on the semester both as originally conceived (post-part-1 in Appendix B) and as materialised (post-part-2 in Appendix B).

22After the end of the semester and submission of the final course grades, the instructor-researcher (first author) deidentified the collected data. Only then were the anonymised data transferred to the statistician (second author) in the form of an Excel workbook. No open-ended data were included, only scores and survey ratings. The statistician reviewed the data with the other author to identify possible errors, and the two communicated to decide the specific hypotheses to be tested for each question, score, or rating. After conducting the statistical analyses, the statistician put the results into tabular form for clarity and then discussed them with the other author. The results were then summarised and discussed in the body of the paper.

23The colleague who was not involved in the study deidentified the qualitative data by substituting participants’ names with numbers. The instructor-researcher then analysed the qualitative data. Data sources included the open-ended questions on each of the surveys, the weekly online discussion posts, the final test-development project, and open-ended questions on the final exam. Qualitative data underwent manual coding, utilising both inductive and deductive approaches. Open-ended survey responses from all four surveys (pre-, mid-, post-part-1, and post-part-2) were systematically analysed inductively to identify themes raised by the participants, while the student learning objectives served as themes for deductive analysis of assignments. The next section analyses the data quantitatively and qualitatively, presenting the results in relation to the research questions. It also highlights unexpected findings due to changes in the study caused by the pandemic.

Analyses and Results

RQ1: What are the Differences—Depending on the Modality of Attendance—in the Students’ Success in the Language Assessment Course?

24Results (quantitative): There was no statistically significant difference in median scores on any criterion evaluating the learning outcomes of the three groups of students within the same class, although there was a statistically significant difference across the groups for the median of the pretest and for the percentage change from the pretest to the final exam. In other words, the SO group started with the lowest score on the pretest but ended with a comparable score on the post-test (final exam); hence they made the most progress.
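The pretest-to-final percentage-change comparison can be illustrated with a short sketch. The scores below are invented for illustration only (the study's raw data are not reproduced here); the pattern merely mirrors the reported result of the SO group starting lowest and gaining most:

```python
from statistics import median

def percent_change(pre, post):
    """Percentage change from pretest score to final-exam score."""
    return 100.0 * (post - pre) / pre

# Invented (pretest, final) score pairs for illustration only.
scores = {
    "F2F": [(70, 85), (75, 88), (80, 90)],
    "SO":  [(55, 84), (60, 88), (58, 86)],
    "ASO": [(72, 86), (68, 84), (74, 89)],
}

for group, pairs in scores.items():
    changes = [percent_change(pre, post) for pre, post in pairs]
    print(group, "median % change:", round(median(changes), 1))
```

Comparing group medians of such change figures (rather than means) matches the median-based comparisons reported throughout this section.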

25Results (qualitative), for all students regardless of modality: As compared to the often shallow yet definite views they held at the beginning, students ended the course with a more nuanced and deeper understanding of assessment concepts. For instance, Participant #8-F2F initially expressed their philosophy of assessment in a discussion post during the first week:

As far as assessment goes, I feel that it does have its place, but it is not my favorite part of being a teacher. At present, I view it as a necessary evil, as somewhat of a burden. I have never enjoyed creating tests or quizzes because I feel as if I am not good at it.

26In other words, initially Participant #8-F2F conflated testing and assessment, reducing assessment to the creation of tests and quizzes. Likewise, Participant #13-SO narrowed down assessment to formative and state standardised tests, stating firmly: ‘I love to teach, I love learning, I use formative assessment daily, and I feel strongly—very—that all state standardised tests should be taken to a “galaxy far, far away.”’ In the same vein, Participant #2-ASO expressed frustration over the purpose of assessment:

[It] is never really communicated to the teachers the true intended purpose of the assessment—is it to assess the efficacy of the curriculum? Create a great image for the school for marketing purposes? Judge the efficacy of the teacher? The possibilities are endless. Next, it has been unclear what purpose the results serve other than end up in a grade book.

27This participant also failed to differentiate between assessments and high-stakes standardised tests, revealing confusion over their purpose, as the results would not end up in the gradebook because they are not classroom assessments.

28By the end, participants were able to create an assessment to specifications for a concrete group of language learners, pilot it, revise it based on the results of the pilot, administer the revised test to the target group of learners, run basic statistical analyses, interpret the results, revise the assessment in response to the findings, and describe the entire process in a research paper. For example, during week 12, in response to learning to use statistics to improve the quality of assessments, participant #2-ASO wrote in a discussion post:

The readings help me look at the questions, not so much the students. It could be that the students understood the concept well, but the question may not be testing that, as it could be confusing, poorly worded, culturally non-responsive, or any number of other issues. By looking at the scores, the students, and the questions, I now have a full view to either change the questions or my instruction methods!

29Participant #2-ASO is highlighting how statistical analysis can reveal why students might answer a question incorrectly—not necessarily due to a lack of understanding, but because the question itself may be flawed. This insight empowers teachers to adjust either their questions or teaching methods based on data. Continuing, participant #2-ASO adds:

There is no question that any person who doesn’t have 48 hours in one day could not do an assessment and calculate all of the stats. So, the software is helpful. Note: It doesn’t do things like item discrimination, etc., but that is where the experience (and commitment) of the teacher comes into play. Yes, I agree, I finally understand what [the professor] is talking about in her reviews on the test:)

30Here, participant #2-ASO humorously acknowledges the time-consuming nature of assessment design and statistical analysis, such as calculating item discrimination. The final sentence indicates a breakthrough in understanding the professor’s statistical reviews of the MA TESOL assessment class’s weekly quizzes.
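The item-discrimination statistic mentioned in the quotation above can be computed by hand when software does not provide it. A minimal sketch of the classic upper-lower discrimination index, on made-up response data (not from the study):

```python
from statistics import mean

def discrimination_index(item_scores, total_scores, frac=0.27):
    """Upper-lower index: proportion answering the item correctly among
    the top-scoring students minus that proportion among the bottom."""
    n = max(1, round(len(total_scores) * frac))
    order = sorted(range(len(total_scores)), key=total_scores.__getitem__)
    low, high = order[:n], order[-n:]
    return mean(item_scores[i] for i in high) - mean(item_scores[i] for i in low)

# Made-up data: 1 = answered this item correctly, 0 = incorrectly;
# totals are each student's overall test score.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [95, 90, 88, 80, 78, 60, 55, 50, 45, 40]

print(round(discrimination_index(item, totals), 2))
```

An index near 1 means high scorers got the item right while low scorers did not; values near 0 or below flag items worth revising, which is the kind of judgement the participant attributes to the teacher's experience.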

31Other participants also adeptly integrated previously unfamiliar concepts into their discourse and used terminology with accuracy. In week 13, in responding to a classmate’s account of a negative assessment experience, Participant #5-F2F remarked: ‘Wow! Sorry you went through that! Indeed, it sounds very unfair and actually describes what we are learning not to do in this class (i.e., not being specific in grading, focusing on local vs. global errors, etc.)!’ This comment demonstrates that having internalised the course content, this student was able to critically evaluate a peer’s experience, identify the concerns in it, and offer relevant, supportive feedback. Meanwhile, during the same week, Participant #8-F2F wrote the following about rubrics, or rating scales:

I used to use holistic rubrics very heavily, but I am leaning more towards analytical or single-point rubrics now because of the advantages they offer. One of my priorities in assessment is positive washback—I have generally hated assessment in the past because I felt it took away from learning time—and these types of rubrics can lead to positive washback.

32This response reveals a nuanced understanding of various types of rubrics and their benefits as well as the participant’s familiarity with the concept of positive washback, or the positive impact that assessments can exert on teaching and learning. The quotation also reflects the participant’s intention to leverage rubrics (or rating scales) strategically to effect this positive washback. Another participant, #3-F2F, took part in the discussion about rubrics in the following way:

It’s good that you’ve found a style that works for you. I wouldn’t mind a bit of elaboration regarding your point with analytic vs. single-point rubrics: what specifically about the single-point rubric makes it clearer for students as compared to analytic? It seems to me that the benefits you speak of in regards to single-point rubrics would also exist for analytic. Is it the relative brevity that makes the difference? I’m curious because I have never created my own rubric (like many people here, I’m sure, my employers had one for me), and I always assumed that analytic would be the holy grail because of the level of detail. I, myself, prefer analytic rubrics as a student since it’s easier to understand what’s being asked of me (in my opinion) and easier to understand where I went off track. But… I’m also a graduate student and not [an English Language Learner].

33Participant #3-F2F was observed grappling with concerns about rubrics analogous to those of Participant #8-F2F, particularly the relative merits of single-point rubrics. Having been primarily exposed to analytic rubrics, Participant #3-F2F was engaged in a process of reassessment and integration of new information. Analytic rubrics represent the prevailing trend in the field of language assessment and are widely recognised. Therefore, when introduced to the lesser-known single-point rubric, this participant tried to evaluate how factors such as age and language proficiency might influence the suitability of each type of rubric. Lastly, in the same discussion, Participant #1-F2F was able to provide a carefully thought-out and technically correct response to a peer’s question, connecting the theory studied during the semester with the practical application of teaching language learners.

I can see your situation is a little tricky. One thing you might consider is to give students pre and after writing tests, grade them based on the same rubric, and then conduct [a] statistical analysis to see if there is a significant difference in scores between the two tests. If there is, it could tell you that students have got progress in writing skills; otherwise you may consider reteaching lessons.

34This response could easily have originated from the instructor herself. However, the fact that it came from a student underscores the participants’ mastery of assessment concepts and their practical application to teaching: a clear demonstration of their growth over the course of the semester.

RQ2: What are the Differences—Depending on the Modality of Attendance—in the Students’ Perception of the Factors that are Expected to Contribute to their Success?

35Results (quantitative): RQ2 was answered by comparing responses strictly across groups at the same time. The same questions were asked twice at the end of the class, in two surveys called ‘post-part-1’ and ‘post-part-2’: post-part-1 asked students to answer based on the groups they were in before the March 11 COVID lockdown and change in class structure (ASO, F2F, SO), while post-part-2 asked them to answer based on the groups they were in after March 11. The responses were likewise broken down twice, first grouping the students by their pre-March 11 groups and then by their post-March 11 groups.

36First Post-Class Survey (post-part-1): Using the original groups (pre-March 11), the ASO group scored statistically significantly higher than the F2F group for convenience and time efficiency, but lower for being better informed, being part of the class community, having access to the instructor, and having access to classmates (all based on median ratings).

37Before March 11, the F2F group gave statistically significantly higher ratings than the SO for their ability to be part of the class community, to have access to the instructor, and to have access to their classmates (all based on median ratings). The F2F group gave statistically significantly lower ratings than the SO group for convenience and cost effectiveness (all based on median ratings).

38Using the post-March 11 groups (when the F2F and SO groups were merged to form the Synchronous group), the median ratings for the Asynchronous and Synchronous groups were statistically significantly different for four questions: the Asynchronous group gave statistically significantly lower ratings for their modality allowing them to be the best informed, to be part of the class community, to have the best access to classmates, and to be successful in class (all based on median ratings; see the ‘Measurements’ section above for a description of the structure of the questions).

39Second Post-Class Survey (post-part-2): Using the original groups (pre-March 11), the ASO group scored statistically significantly higher than the F2F group for time efficiency, processing information, retaining information, and convenience (all based on higher median ratings). The SO group gave statistically significantly higher ratings than the F2F group for time efficiency, processing information, retaining information, and convenience (all based on higher median ratings).

40Using the post-March 11 groups (when the F2F and SO groups were merged to form the Synchronous group), there were no statistically significant differences in the median ratings between the Asynchronous and Synchronous groups (see the ‘Measurements’ section above for a description of the structure of the questions).

41Summary: For RQ2, the F2F group gave generally higher ratings than the ASO group for the pre-March 11 time period, but lower for the post-March 11 time period.

42Results (changes): After March 11, when the F2F students had to switch to synchronous online attendance, several statistically significant changes were observed in the responses of the formerly F2F students. Attending synchronously after March 11 increased perceptions of the convenience of attendance but decreased their reported ability to retain and process information, be informed, feel part of the class community, and have access to classmates and the instructor. The implications of this are discussed below in the ‘Discussion and Conclusion’ section.

RQ3: What are the Differences—Depending on the Modality of Attendance—in the Students’ Prediction/Perception of their Success?

43Results (quantitative): There were statistically significant differences for several questions in the post-course surveys (see the ‘Measurements’ section above for a description of the structure of the questions and Appendixes A and B for the survey instruments).

Changes from Post-course Survey-1 (post-part-1) to Post-course Survey-2 (post-part-2):

44Whereas RQ2 was answered by comparing groups at the same point in time (an ‘across-group’ comparison), RQ3 was answered by comparing changes over time across groups. Change scores were calculated within each participant for each question: the participant’s post-course survey-2 rating minus that same participant’s post-course survey-1 rating. The groups were then compared for differences in the medians of their change scores, yielding an across-time/across-group comparison of the changes from post-course survey-1 to post-course survey-2. There were two sets of comparisons: (a) for the pre-March 11 modality groups (ASO, F2F, SO) and (b) for the post-March 11 modality groups (Asynchronous and Synchronous).
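The change-score procedure just described can be sketched in a few lines; the participant IDs, group labels, and ratings below are all invented for illustration:

```python
from statistics import median

# Hypothetical 1-5 Likert ratings per participant on one survey question,
# from post-course survey-1 and post-course survey-2 (invented, not study data).
survey1 = {"p1": 5, "p2": 4, "p3": 3, "p4": 3, "p5": 4, "p6": 5}
survey2 = {"p1": 2, "p2": 2, "p3": 3, "p4": 4, "p5": 4, "p6": 5}
groups = {"p1": "F2F", "p2": "F2F", "p3": "ASO", "p4": "ASO", "p5": "SO", "p6": "SO"}

# Within-participant change score: survey-2 rating minus survey-1 rating.
change = {p: survey2[p] - survey1[p] for p in survey1}

# Median change score per modality group: the across-time/across-group comparison.
by_group = {}
for p, g in groups.items():
    by_group.setdefault(g, []).append(change[p])
group_medians = {g: median(vals) for g, vals in by_group.items()}
print(group_medians)  # in this invented example, F2F declines; ASO and SO do not
```

The group medians would then be submitted to a between-group test, as described in the study's 'Measurements' section.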

45Detailed quantitative results by survey question, showing changes in scores from post-course survey-1 to post-course survey-2:

46Survey question 3: ‘Attending this class in the way that I am currently attending since COVID-19’…

  • 3b ‘allowed me to retain information best’: for the pre-March 11 modality groups, the F2F group had a median change of -3; the other two had median changes of 0. The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there were no statistically significant differences in the median change scores between the Asynchronous and Synchronous modality groups.

  • 3c ‘allowed me to process information best’: again, for the pre-March 11 modality groups, the F2F group had a median change of -3; the other two had median changes of 0. The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there were no statistically significant differences in the median change scores between the Asynchronous and Synchronous modality groups.

  • 3g ‘allowed me to be the best informed’: for the pre-March 11 modality groups, the F2F group had a median change of -2; the other two had median changes of 0. The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there was a statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups. The Asynchronous group had a median change of 0 (no change); the Synchronous group had a median change of -1 (a decline).

  • 3h ‘allowed me to be most part of the class community’: for the pre-March 11 modality groups, the F2F group had a median change of -2; the ASO group had a median change of +1; the SO group, 0 (no change). The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the ASO group experienced an increase, and the SO group did not change. For the post-March 11 modality groups, there was a statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups. The Asynchronous group had a median change of +1; the Synchronous group had a median change of -1.5 (a decline).

  • 3i ‘gave me the best access to the instructor’: for the pre-March 11 modality groups, the F2F group had a median change of -3; the other two had median changes of 0. The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there was no statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups.

  • 3j ‘gave me the best access to my classmates’: for the pre-March 11 modality groups, the F2F group had a median change of -2; the ASO group had a median change of +0.5; the SO group, 0 (no change). The changes were statistically significantly different between the F2F group and each of the other two groups (ASO and SO). Summary: the F2F group experienced a decline; the ASO group experienced an increase, and the SO group did not change. For the post-March 11 modality groups, there was a statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups. The Asynchronous group had a median change of +0.5; the Synchronous group had a median change of -1 (a decline).

  • 3k ‘allowed me to be as successful in this class as I can be’: for the pre-March 11 modality groups, the F2F group had a median change of -1; the other two had median changes of 0. The change was statistically significantly different between the F2F group and the SO group only. Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there was no statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups.

  • 3l ‘resulted in the best scores on assessments’: for the pre-March 11 modality groups, the F2F group had a median change of -1; the other two had median changes of 0. The change was statistically significantly different between the F2F group and the SO group only. Summary: the F2F group experienced a decline; the other two groups did not. For the post-March 11 modality groups, there was no statistically significant difference in the median change scores between the Asynchronous and Synchronous modality groups.

47In summary, for the pre-March 11 modality groups, the F2F group experienced declines in median change scores, while the other two groups (ASO, SO) experienced no changes or positive changes. For the post-March 11 modality groups, the Synchronous group experienced declines or no changes depending on the survey question, while the Asynchronous group experienced increases or no changes depending on the survey question.


48For more information on the statistical findings, see Porter-Szucs and DeCicco, 2022.3

Qualitative Results of Participants’ Perception of Success in Class:

49The pandemic affected the participants’ perception of their success. As the pandemic was rapidly approaching the university, one F2F TESOL student asked to attend the last F2F day of TriHy synchronously online because of health concerns. Thus, on March 11, this one student attended SO after having attended F2F for the previous nine weeks. They reported that their class participation was negatively affected by the limitations of the technology in the SO format. One challenge was that they could not always hear all the dialog in the F2F classroom. While attending F2F, they had always been able to hear and follow the class discussion and participate. After switching to SO while their F2F colleagues remained in the classroom, however, this student could no longer follow the discussion; as a result, they did not feel confident about joining the ongoing dialog. Another difficulty arose from so-called Zoom lag: because of the network delay, they found it hard to identify a lull in the conversation and to jump in at the appropriate time. After several unsuccessful or unsatisfactory attempts to engage in the natural flow of the conversation, this student preferred to limit their own participation rather than interrupt their classmates. In summary, although there were no statistically significant differences in the overall success of students regardless of modality of attendance, going beyond the numbers and into the students’ stories allows us to recognise that the modalities of attendance do make a difference in the way students perceive their ability to be successful in the class.

RQ4: Which Sources of Information and Support—Depending on the Modality of Attendance—do the Students Rely on in Pursuit of Success in the Class?

50Results: These statements are based on survey responses. Participants claim to have relied on a variety of sources of information and support. All students cited the textbook and PowerPoint lecture slides. All F2F and SO participants mentioned relying on class attendance as a key to success; that is, having perfect attendance. All but one reported relying on classmates and further reading. All four ASO students (and one SO student) reported relying on the video recording of the class. Other sources of information included searching the internet and communicating with the instructor.

51An analysis of the communication rates with the instructor, as logged throughout the semester, showed no significant difference in the median communication rates across groups, but one statistically significant difference for the median change in rates. The SO group had a significantly higher median percentage change in communication rates from pretest to final exam, compared to the ASO group. At this time, the authors can offer no ready explanation or implication for this finding.
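As a hedged sketch of the kind of percentage-change measure described, the snippet below computes per-participant percentage changes and group medians; the participant IDs and counts are invented, not the study's logged data:

```python
from statistics import median

# Invented counts of logged student-instructor communications in two windows:
# around the pretest and around the final exam (not the study's actual data).
pretest = {"s1": 2, "s2": 4, "s3": 2, "s4": 3}
final_exam = {"s1": 3, "s2": 6, "s3": 2, "s4": 3}
groups = {"s1": "SO", "s2": "SO", "s3": "ASO", "s4": "ASO"}

# Percentage change in each participant's communication rate.
pct_change = {s: 100 * (final_exam[s] - pretest[s]) / pretest[s] for s in pretest}

# Median percentage change per modality group.
by_group = {}
for s, g in groups.items():
    by_group.setdefault(g, []).append(pct_change[s])
group_medians = {g: median(v) for g, v in by_group.items()}
print(group_medians)  # in this invented example, the SO median exceeds the ASO one
```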

Discussion and Conclusion

52In this study investigating whether the success of three groups of MA TESOL students was comparable regardless of which mode of attendance they chose in the Triple Hybrid course, the results show that overall there was no statistically significant difference in learning outcomes, even though two of the groups’ pretest scores differed statistically significantly: the SO group scored statistically significantly lower on the pretest (Quiz 1) than the F2F and ASO groups. On the final exam, however, the three groups’ scores no longer differed statistically significantly. The SO group showed a statistically significantly greater improvement than the ASO group, but not than the F2F group. All students were successful by the end of the class, with the SO group improving more than the other two. This could be due to the modality, or to a ‘catch-up effect’, whereby it was easier for the lower-scoring group to make larger gains; because the participants assigned themselves to their groups (due to preference and constraints) rather than being randomly assigned, these explanations cannot be disentangled, which is a limitation of the study. In answer to RQ1, which asked what differences there were in the learning outcomes of the three modality groups, we can say that there were none that were statistically significant. The course had a levelling effect, as posttest scores no longer differed by modality, so the groups’ differing starting points do not undermine the validity of this comparison.

53While students’ success in the course did not differ, their perceptions of the factors contributing to their success did. In particular, the group that had attended F2F for ten weeks because that modality best suited their learning style was negatively affected by being forced online in the last five weeks of the course. They felt that their ability to process and retain information declined, as did their access to the instructor and classmates. One of these students (participant #18-F2F) nearly dropped out of the course due to a sudden loss of motivation, and it took the instructor’s personal intervention to revive the student’s sense of purpose; this appears in the data as a single housekeeping-related phone conversation. The ASO and original SO students, on the other hand, emphasised that although some of them would have preferred to attend F2F, they would not have been able to pursue their studies without the flexibility and convenience of online attendance, because further in-person study would not have been compatible with other parts of their lives.

54Statistically significant results were also found in the change scores for survey questions 3h and 3j from the first to the second post-course surveys (pre-March 11 groups to post-March 11 groups) as they pertain to the ASO group: ASO students reported a heightened sense of belonging to the class community and improved access to their classmates, respectively. One potential explanation lies in the technological setup. At the time of the experiment, the university lacked smart classrooms. Consequently, during the first ten weeks of the semester, the instructor-researcher would place a 360-degree (omnidirectional) microphone on the teacher’s desk and connect an external webcam to the laptop. This setup transmitted the F2F classroom activities to the SO group, with recordings made available to the ASO group and, indeed, to all groups. However, after the forced transition of the F2F group to the online format on March 11, the originally F2F and SO groups converged in the same Zoom space. Presumably, this unified virtual environment allowed for enhanced audio and visual clarity, potentially giving the ASO students better access to the class community.

55The disruption to the study was a natural experiment, and a blessing in disguise. It created a second experiment within the first, allowing us to see how the F2F group responded when forced into a less preferred mode of attendance. The effect appeared to be negative for this group of students. For the one student (participant #8-F2F) who transitioned from F2F to SO a day early, the switch of modalities had a stifling effect on class participation. For another student (participant #18-F2F, as mentioned earlier), the switch nearly jeopardised their persistence in the course. The fact that all students persisted and successfully completed the course despite the disruption caused by the pandemic and the forced change of modality can, at least to some extent, be attributed to the vigilance of the instructor and the intentional Triple Hybrid course design. The latter consists not only of offering three simultaneous modalities of course attendance and asking the students to commit to one modality for the semester, but also of deliberately integrating students across the three modalities. In this way, students formed learning communities that supported one another in course content and in personal matters.

56The authors thus conclude that with considerable forethought, a substantial investment of time and commitment from the instructor, and meaningful choices from the students, the quality of instruction does not need to be compromised even in a post-pandemic language-teacher education program. In the semesters since the conclusion of the study, the TriHy course design has been extended to other graduate and even undergraduate courses in the same program, with similarly successful results. The instructor’s investment of time has decreased steadily, while students’ success has held constant.

57This study presents several limitations. Firstly, the sample size was small, consisting of only 18 participants out of a class of 20. While efforts were made to minimise spurious effects from grouping, the lack of random assignment into attendance modalities poses a challenge. The experimental grouping was further confounded by demographic factors, with K-12 teachers predominantly self-selecting into the online groups and teachers of adult ESL or young EFL learners opting into the F2F group. While this was a limitation, it was also a strength: the decision to allow self-selection was based on pedagogical considerations, and letting students work in their preferred modality arguably brought the observed differences closer to the bare effects of the modality itself. Nevertheless, the authors are publishing the results of this study in the hope of encouraging further research and contributions to the discourse. Pooling similar studies and file-drawer results would allow for a meta-analysis, which could yield valuable insights into this topic.


Bibliographie

Creswell, J. W. and Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Sage Publications, Inc.

DETA Center: National Research Center for Distance Education and Technological Advancements (DETA). (n.d.). https://www.detaresearch.org/research-support/no-significant-difference/view-full-database/ (accessed October 15, 2024).

Epstein, R. (2001). Teacher education at a distance in Canada and Thailand: How two cases measure up to quality distance education indicators. In Teachers of English to Speakers of Other Languages, Inc. and L. E. Henrichsen (ed.), Distance-learning programs (pp. 127–139).

Fishman, B., Konstantopoulos, S., Kubitskey, B. W., Vath, R., Park, G., Johnson, H. and Edelson, D. C. (2013). Comparing the impact of online and face-to-face professional development in the context of curriculum implementation. Journal of Teacher Education, 64(5), 426–438. https://0-www-doi-org.catalogue.libraries.london.ac.uk/10.1177/0022487113494413.

Gherheș, V., Stoian, C. E., Fărcașiu, M. A. and Stanici, M. E. (2021). E-Learning vs. Face-to-Face Learning: Analysing Students’ Preferences and Behaviors. Sustainability, 13(8), 4381. https://0-www-doi-org.catalogue.libraries.london.ac.uk/10.3390/su13084381.

Joosten, T., Pfeifer-Luckett, R., Baker, M., Schletzbaum, A. and Craig, K. (2021). The digital learning environment experience: A University of Wisconsin System study. The National Research Center for Distance Education and Technological Advancements. https://www.detaresearch.org/news/publications (accessed October 15, 2024).

Leijon, M. and Lundgren, B. (2019). Connecting physical and virtual spaces in a hyflex pedagogic model with a focus on teacher interaction. Journal of Learning Spaces, 1(8). https://core.ac.uk/download/pdf/234819874.pdf (accessed October 15, 2024).

Mills, S. J., Yanes, M. J. and Casebeer, C. M. (2009). Perceptions of distance learning among faculty of a college of education. Journal of Online Learning and Teaching, 5(1), 10–28. https://jolt.merlot.org/vol5no1/mills_0309.htm (accessed October 15, 2024).

Moradi, A. and Farvardin, M. T. (2019). Negotiation of meaning by mixed-proficiency dyads in face-to-face and synchronous computer-mediated communication. TESOL Journal, 11(1). https://0-www-doi-org.catalogue.libraries.london.ac.uk/10.1002/tesj.446.

Nunan, D. (2002). Teaching MA-TESOL courses online: Challenges and rewards. TESOL Quarterly, 36(4), 617–621. https://0-doi-org.catalogue.libraries.london.ac.uk/10.2307/3588243.

Porter-Szucs, I. and DeCicco, B. (2022). TriHy: Teaching an MA TESOL class face-to-face, synchronously online, and asynchronously online. SN Social Sciences, 2(143), 1–25. https://doi.org/10.1007/s43545-022-00434-4.

Prescott, D. L. (2010). Online English language teacher training courses: Quality and innovation. EA Journal, 26(1), 4–40.

Qiang, S. and Zhang, L. J. (2023). Examining the relative effectiveness of online, blended and face-to-face teaching modes for promoting EFL teacher professional development. Porta Linguarum. https://revistaseug.ugr.es/index.php/portalin/article/view/29619/26690 (accessed October 15, 2024).

Rachman, N. (2020). Effectiveness of online vs offline classes for EFL classroom: A study case in higher education. Journal of English Teaching, Applied Linguistics, and Literatures, 3(1). https://0-dx-doi-org.catalogue.libraries.london.ac.uk/10.20527/jetall.v3i1.7703.

Rovai, A. and Jordan, H. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. The International Review of Research in Open and Distance Learning, 5(2). https://www.irrodl.org/index.php/irrodl/article/view/192/795 (accessed October 15, 2024).

Shin, D-s. and Kang, H-S. (2018). Online Language Teacher Education: Practices and Possibilities. RELC Journal, 49(3), 369–380.

Spencer, D. and Temple, T. (2021). Examining students’ online course perceptions and comparing student performance outcomes in online and face-to-face classrooms. Online Learning, 25(2), 233–261. https://files.eric.ed.gov/fulltext/EJ1301720.pdf (accessed October 15, 2024).



Appendices

Appendix A

Original Course Survey

Pre-/Mid-/Post-Course Survey

Please complete this form during the 1st, 7th, and 15th weeks of the semester.

What is your name? ____________________________________

This is the (circle one) 1st / 7th / 15th week of the semester

   1. I am taking this class (select one) face-to-face / synchronously online / asynchronously online

   2. I learn best (select one) face-to-face / synchronously online / asynchronously online

Please use the following scale to answer the next question.

(1) Strongly Disagree / (2) Disagree / (3) Neither Agree nor Disagree / (4) Agree / (5) Strongly Agree

3. Attending this class in the way that I’m attending…

a. is the most convenient for me (1 / 2 / 3 / 4 / 5)

b. allows me to retain information best (1 / 2 / 3 / 4 / 5)

c. allows me to process information best (1 / 2 / 3 / 4 / 5)

d. allows me to meet deadlines (1 / 2 / 3 / 4 / 5)

e. is the most cost-effective for me (1 / 2 / 3 / 4 / 5)

f. is the most time-efficient for me (1 / 2 / 3 / 4 / 5)

g. allows me to be the best informed (1 / 2 / 3 / 4 / 5)

h. allows me to be most part of the class community (1 / 2 / 3 / 4 / 5)

i. gives me the best access to the instructor (1 / 2 / 3 / 4 / 5)

j. gives me the best access to my classmates (1 / 2 / 3 / 4 / 5)

k. will allow me to be as successful in this class as I can be (1 / 2 / 3 / 4 / 5)

l. will result in the best scores on assessments (1 / 2 / 3 / 4 / 5)

m. will result in the best final course grade (1 / 2 / 3 / 4 / 5)

4. If I could take this class again, I would take it (select one) face-to-face / synchronously online / asynchronously online

Comments: ______

Thank you for your participation.

Appendix B

Revised Course Survey

Pre-/Mid-/Post-Course Survey—Revised

[Part 1]

Please complete this form during the 1st, 7th, and 15th weeks of the semester.

What is your name? ____________________________________

This is the (circle one) 1st / 7th / 15th week of the semester

NOTE: Due to the coronavirus pandemic and face-to-face classes being moved online, starting with the 12th week of the semester face-to-face participants have attended this class synchronously online. This change in modality may have caused all students, regardless of modality, to experience change. Therefore, all students are asked to complete the post-course (15th week) survey twice: once reflecting on their experience pre-COVID-19 (including week 11) and once post-COVID-19 (since week 12).

1. Prior to COVID-19, I was taking this class (select one) face-to-face / synchronously online / asynchronously online

2. I learn best (select one) face-to-face / synchronously online / asynchronously online

Please use the following scale to answer the next question.

(1) Strongly Disagree / (2) Disagree / (3) Neither Agree nor Disagree / (4) Agree / (5) Strongly Agree

3. Attending this class in the way that I was attending prior to COVID-19…

a. was the most convenient for me (1 / 2 / 3 / 4 / 5)

b. allowed me to retain information best (1 / 2 / 3 / 4 / 5)

c. allowed me to process information best (1 / 2 / 3 / 4 / 5)

d. allowed me to meet deadlines (1 / 2 / 3 / 4 / 5)

e. was the most cost-effective for me (1 / 2 / 3 / 4 / 5)

f. was the most time-efficient for me (1 / 2 / 3 / 4 / 5)

g. allowed me to be the best informed (1 / 2 / 3 / 4 / 5)

h. allowed me to be most part of the class community (1 / 2 / 3 / 4 / 5)

i. gave me the best access to the instructor (1 / 2 / 3 / 4 / 5)

j. gave me the best access to my classmates (1 / 2 / 3 / 4 / 5)

k. allowed me to be as successful in this class as I can be (1 / 2 / 3 / 4 / 5)

l. resulted in the best scores on assessments (1 / 2 / 3 / 4 / 5)

m. resulted in the best final course grade (1 / 2 / 3 / 4 / 5)

[Part 2]

1. Since COVID-19, I have been taking this class (select one) face-to-face / synchronously online / asynchronously online

2. I learn best (select one) face-to-face / synchronously online / asynchronously online

Please use the following scale to answer the next question.

(1) Strongly Disagree / (2) Disagree / (3) Neither Agree nor Disagree / (4) Agree / (5) Strongly Agree

3. Attending this class in the way that I am currently attending since COVID-19…

a. is the most convenient for me (1 / 2 / 3 / 4 / 5)

b. allows me to retain information best (1 / 2 / 3 / 4 / 5)

c. allows me to process information best (1 / 2 / 3 / 4 / 5)

d. allows me to meet deadlines (1 / 2 / 3 / 4 / 5)

e. is the most cost-effective for me (1 / 2 / 3 / 4 / 5)

f. is the most time-efficient for me (1 / 2 / 3 / 4 / 5)

g. allows me to be the best informed (1 / 2 / 3 / 4 / 5)

h. allows me to be most part of the class community (1 / 2 / 3 / 4 / 5)

i. gives me the best access to the instructor (1 / 2 / 3 / 4 / 5)

j. gives me the best access to my classmates (1 / 2 / 3 / 4 / 5)

k. will allow me to be as successful in this class as I can be (1 / 2 / 3 / 4 / 5)

l. will result in the best scores on assessments (1 / 2 / 3 / 4 / 5)

m. will result in the best final course grade (1 / 2 / 3 / 4 / 5)

4. If I could take this class again, I would take it (select one) face-to-face / synchronously online / asynchronously online

5. In the course of the entire semester, I have relied on the following as sources of information and support in pursuit of success in this class. (select all that apply)

textbook, additional readings, live class (face-to-face or synchronous), PowerPoint lecture slides, video recording of class, professor in class, professor outside of class, classmates in class, classmates outside of class, other ______

Comments: ______

Thank you for your participation.


Notes

1 The 2022 article focuses on quantitative analyses of these measures.

2 In early 2020, the coronavirus spread rapidly worldwide, including in the United States. By March, recognizing the high contagion and health risks, authorities implemented public health measures. Schools and universities closed their campuses and shifted to online learning to protect students, teachers, and staff.

3 The 2022 article delves into the original research study, emphasizing its statistical findings and details such as technology and course design. In contrast, the current article shifts its focus toward the qualitative aspects of student learning and the natural experiment that emerged due to the pandemic within the framework of the originally designed study.


How to cite this article

Electronic reference

Ildiko Porter-Szucs and Barry DeCicco, ‘Triple Hybrid (TriHy): What Happened When COVID Hit the Research Study’, Distances et médiations des savoirs [Online], 47 | 2024, published online 25 October 2024, accessed 16 January 2025. URL: http://0-journals-openedition-org.catalogue.libraries.london.ac.uk/dms/10320; DOI: https://0-doi-org.catalogue.libraries.london.ac.uk/10.4000/12jk1


Authors

Ildiko Porter-Szucs

Professor of ESL/TESOL, Eastern Michigan University, USA

Barry DeCicco

Senior Statistician, Quality Insights


Copyright

CC-BY-SA-4.0

The text alone may be used under the CC BY-SA 4.0 licence. All other elements (illustrations, imported attachments) are ‘All rights reserved’ unless otherwise stated.
