Challenge

The COVID-19 pandemic disrupted normative engineering education practices, as well as many of the assumptions that engineering faculty make in designing learning experiences. Whether done explicitly, implicitly, or anecdotally, using information about learners to plan learning experiences is a constant part of educators’ work. Doing so aligns plans with students’ needs and enables adjustments when differences, like poor preparation from pre-requisite courses, become apparent. While formal tools and frameworks for learner analysis exist,7 most faculty rely instead on their previous experience to build their understanding of students and drive course design.3 These experiences with students’ prior knowledge and learning preferences include assumptions about students’ physical, cognitive, and emotional states as college students that typically seem sufficient.

Beyond the change in medium, COVID-19 also disrupted many normative assumptions about learners (e.g., housing security) that subtly inform how faculty design and implement courses.6 Two organizing themes educators faced were a massive increase in the external factors that could affect students’ experience in all courses and a lack of access to resources that could otherwise be assumed in higher education. Reasonable assumptions about learners were suddenly untenable, introduced serious equity issues (Footnote 1), and concerned conditions beyond what faculty could directly observe and fix.4 Further, the rapid switch did not support typical curricular planning processes. Instead, iterative and real-time action was necessary to support the implementation of a new learning environment with massive external constraints.1 All of these factors left faculty without an adequate body of learner information and without even knowing what data to collect.

Faced with the challenge of a chaotic shift to online education led by faculty new to online teaching, our department sought to put systems and processes in place to support students and faculty. Individual faculty worked to implement online education in their courses within a week. Meanwhile, at the department level we sought to implement training and systems to support faculty and students in their new learning environment. Below we describe one of those innovations: a department-level system to gather and operationalize information about learners to improve learning environments in near-real-time.

Novel Initiative

We created a system to both capture and react to information about the student experience during the COVID-19 educational shift. Grounded in learner analysis principles,7,3 the system has two key parts: data collection and an organized, systematic response to the information collected. This section describes both parts.

The data collection part gathers information about students' experiences during COVID-19 to iteratively inform course design by identifying new factors affecting our learners. Our work here was grounded in classical learner analyses. From that grounding, we adapted the concepts of learner analysis from an a priori approach to an emergent one, iteratively identifying and collecting information about aspects of learning during COVID-19. Our data collection (Footnote 2) involved a survey with a core set of questions, asked repeatedly over time, and a set of variable questions, which changed as we identified new aspects of our learners where information was useful.

With the core questions, we captured information about students' learning experience in ways that we could predict in advance would be informative to transitioning courses online. The initial grounding of the core questions was the Critical Incident Questionnaire.2 However, as we analyzed information about students’ experiences, the questions took a broader focus to include both in-class and not-in-class aspects of the COVID-19 learning experience. Nine core questions appeared in every survey distribution with only minor changes to collect information about our online transition. Those nine included four questions to identify specific follow-up actions that might be necessary:

  • Since the last time you took this survey, is there anything non-course related that is limiting your ability to participate in online learning activities? (If yes, students were asked to describe)

  • Overall, which of your courses transitioned to online the best and why is that the best experience?

  • Overall, which of your courses transitioned to online the least well and why?

  • Are there any specific situations related to learning that you believe the department needs to address? If so, please explain here. If appropriate, please identify a specific course number.

  • [If this box was used, a set of questions appeared for optional contact information]

In addition to the core questions, variable questions allowed a timely focus on relevant information based on upcoming events in the semester (e.g., exams) or deeper exploration and quantification of qualitative themes. We acknowledged the limitations of minimal pre-testing but accepted them in pursuit of more timely or focused information. During our institution’s ‘test and tune’ period prior to beginning online learning, the variable questions asked about initial communication for the new online form of each course, to identify early issues. After the decision was announced to keep summer courses online, the questions collected learners’ software preferences, access to technology, and time zone. This provided a more traditional learner analysis approach, informed by what we had already learned. Ongoing review of the questions was informed by the data collected in the survey, interactions with students, questions from faculty, and the authors’ reflection on our own teaching experience. Because of concerns about collecting unnecessary identifying information, the optional contact-information questions were placed near the end of the survey and not displayed unless needed. The contact information was also isolated from the rest of the data.
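To make the core-plus-variable structure concrete, the sketch below shows one way a distribution’s question set could be assembled; it is an illustration under assumptions, not our actual instrument. The four follow-up-action core questions are those listed above, while the variable-question wording and phase labels are hypothetical stand-ins for the topics described in this section.

```python
# Illustrative sketch only: variable-question wording and phase labels are hypothetical;
# the four core questions shown are the follow-up-action questions listed above.

CORE_QUESTIONS = [
    "Since the last time you took this survey, is there anything non-course related "
    "that is limiting your ability to participate in online learning activities?",
    "Overall, which of your courses transitioned to online the best and why is that "
    "the best experience?",
    "Overall, which of your courses transitioned to online the least well and why?",
    "Are there any specific situations related to learning that you believe the "
    "department needs to address? If so, please explain here.",
]

# Variable questions were swapped in as new information needs emerged.
VARIABLE_QUESTIONS = {
    "test_and_tune": [
        "Has each of your instructors communicated how their course will run online?",
    ],
    "summer_planning": [
        "Which software platforms have worked best for you so far?",
        "What technology (hardware and internet) do you have reliable access to?",
        "What time zone will you be in during the summer term?",
    ],
}


def build_distribution(phase: str) -> list:
    """Assemble the ordered question list for one survey distribution."""
    return CORE_QUESTIONS + VARIABLE_QUESTIONS.get(phase, [])


if __name__ == "__main__":
    for question in build_distribution("summer_planning"):
        print("-", question)
```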

We distributed the survey using Qualtrics™ in a repeating pattern to ensure emergent changes in courses and the learning experience were visible. Starting with our first week online, the survey was distributed twice per week (Tuesday and Friday morning). By doing so, we sought to balance the need for timely data with concerns about survey fatigue due to the significant increase in survey distribution and email from other units in the institution (e.g., student services, the president’s office). Our undergraduate population was randomly split into three sample groups. Each distribution was sent from the email of the department’s Associate Chair for Undergraduate Learning and Experience to one sample group. Students were asked to, if possible, complete the survey that day but the survey links stayed open until that group’s next distribution. The random sample distribution ensured that each learner received the survey approximately every 10 days while each course in the department likely received feedback every three or four days. In total, each student received the survey four times, including once at the conclusion of exams when the survey was sent to all students simultaneously.
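As a minimal sketch of this rotation (not our actual Qualtrics workflow; the roster size, random seed, and start date are hypothetical), the following shows how splitting the roster into three groups and cycling them through Tuesday/Friday distributions produces roughly a ten-day gap between surveys for any individual student while keeping department-wide feedback arriving every three or four days.

```python
# Hypothetical illustration of the rotating distribution scheme described above.
import random
from datetime import date, timedelta


def split_into_groups(roster, n_groups=3, seed=0):
    """Randomly partition the student roster into n_groups sample groups."""
    shuffled = roster[:]
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]


def distribution_schedule(start, n_distributions, n_groups=3):
    """Yield (date, group_index) pairs for Tuesday/Friday distributions."""
    day, sent = start, 0
    while sent < n_distributions:
        if day.weekday() in (1, 4):       # Tuesday = 1, Friday = 4
            yield day, sent % n_groups    # rotate through the sample groups
            sent += 1
        day += timedelta(days=1)


# Example with an invented roster and start date.
roster = ["student_{}".format(i) for i in range(1080)]
groups = split_into_groups(roster)
for send_date, g in distribution_schedule(date(2020, 3, 24), 12):
    print(send_date, "-> group", g, "({} students)".format(len(groups[g])))
```

In this toy schedule, a given group is surveyed every third distribution (e.g., Tuesday of one week and then Friday of the following week), which is the approximately ten-day interval noted above.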

We received 575 responses from 367 students over 41 days (Fig. 1 shows response frequency by day). Those responses came from 34.0% of our population and represent 13.4% of surveys distributed, with a decrease across the four distributions. Each distribution saw a peak in response activity shortly after distribution that quickly decayed. The large peak on May 1st corresponds with the distribution to the entire student population, and each smaller peak corresponds with a sample-group distribution. Of the 367 students who responded, 232 (63%) responded once, 77 (21%) twice, 43 (12%) three times, and 15 (4%) all four times they received the survey. While this might suggest a level of survey fatigue, other patterns suggest otherwise. First, we observed a relatively balanced level of responses across the four distributions (n = 163, 146, 121, and 145 per distribution, respectively). Additionally, the number of students responding to the survey for the first time was nontrivial across the later three distributions (n = 30, 9, and 30 first-time respondents, respectively). Further, of the 163 students who responded to the first survey offering, 55% (89) responded at least one more time.
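As a quick consistency check on the counts reported above (a simple worked calculation using only the figures in the text, not additional data), the per-student response frequencies and the per-distribution totals both sum to the 575 responses received:

```python
# Consistency check on the reported response counts (all numbers taken from the text).
per_student = {1: 232, 2: 77, 3: 43, 4: 15}      # students by number of responses given
per_distribution = [163, 146, 121, 145]          # responses received per distribution

total_from_students = sum(k * v for k, v in per_student.items())   # 232 + 154 + 129 + 60
total_from_distributions = sum(per_distribution)                   # 163 + 146 + 121 + 145

assert total_from_students == total_from_distributions == 575
print(sum(per_student.values()), "unique respondents")             # 367
```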

Figure 1. Number of responses to survey by day.

The response patterns suggest continued engagement with the survey by students as the semester progressed; we return to this point in the Reflection. We did not collect any demographic data, and for ethical reasons we are limited in reporting on data provided for individual questions. Response rate also varied by question, as students were asked to leave blank any questions that were not applicable.

As noted in the introduction, we see our innovation as not just collecting information but also systematically reacting to the information received. Each survey response was reviewed and triaged by a team of three faculty members (the authors) based on the types of action warranted (Fig. 2 provides a flowchart of our process). The audience for a response differed as appropriate, including all learners, individual learners, all department instructors, and individual instructors. Actions included mass emails to students and/or faculty, LMS announcements, training sessions, individual interactions, and revising the survey itself. Our base action was a weekly email to all instructors that explained the current state of the department’s online transition, summarized students’ needs and concerns, and kept faculty informed of any key policies or learnings. In this way, we used the principles of learner analyses as a framework to continuously evolve our understanding of our learners and guide our faculty’s design of learning experiences.

Figure 2. System diagram of responses to actionable feedback received through the survey.
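To illustrate the kind of routing logic summarized in Fig. 2, the sketch below maps a single response to the audiences and actions described above. The field names, checks, and the example course number are hypothetical simplifications for illustration, not a specification of our actual triage process.

```python
# Hypothetical illustration of the triage step in Fig. 2; fields, checks, and the
# example course number are invented for this sketch.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Response:
    text: str
    course: Optional[str] = None        # course number, if the student named one
    contact_info: Optional[str] = None  # present only if the student opted in


def triage(response: Response) -> List[str]:
    """Map one survey response to the follow-up actions it warrants."""
    actions = []
    if response.contact_info:
        # Individual learner needs (e.g., emergency resources) get a direct reply.
        actions.append("follow up individually with the learner")
    if response.course:
        # Course-specific issues go to that instructor.
        actions.append("contact the instructor of " + response.course)
    # Everything also feeds the weekly summary email to all department instructors.
    actions.append("include in weekly summary to all instructors")
    return actions


print(triage(Response("Exam upload window was too short", course="ENGR 0000")))
```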

Reflection

On reflection, we saw enormous value in taking an emergent approach to learner analysis that likely has applications beyond COVID-19 (Footnote 3). Critically, much value came from being able to move information about learners from ‘unknown-unknowns’ to ‘known-unknowns’ and then to ‘known-knowns’ that could be operationalized in courses. Additionally, some responses suggest that learners’ willingness to provide useful data relied heavily on the systematization of actions taken from that data. With more time or forewarning, we likely would have made some different choices, but the effect of the data collection and action seems to have increased the quality of the student experience in a very difficult time. Primarily, we learned how powerful the “cyclical process”7, p. 228 of course design can be when information about learners drives iterations that occur faster than the typical semesterly cycle.

We were successful in our primary goal of using an emergent data approach to systematically react to learners’ needs during a mid-semester transition to online education. While our ability to provide specific data is limited, as described in Footnote 3, we can provide a general overview of how the system functioned, which we see as a marker of success. Through the survey we identified certain courses that transitioned poorly as well as areas in which many courses were struggling. The ability to separate these two phenomena did not come solely from the survey but would have been impossible without it. By developing a system to respond as we developed the survey, we were able to target support for specific instructors while separately deploying training for all faculty members on broader issues. The results of these two types of interventions became apparent in learner comments about ‘recovery’ in specific courses (e.g., poor audio quality of a certain instructor) or issues common across courses (e.g., too many video conference platforms). Learners also noted that the courses in our department transitioned better than others, specifically noting that our department actively sought and responded to feedback. Lastly, we believe that the decrease in responses reflects the deployment of training that fixed issues that otherwise would have continued. While other approaches may have been possible, our department saw particular value in pursuing a centralized and systematized approach. We did use feedback from other sources (e.g., our student advisory board) as well, but the repeated survey approach allowed a broader reach to identify patterns of issues, specific problems that may not have reached the advisory board, and issues in courses whose instructors may not have sought feedback.

For example, during the first two weeks of online education, multiple students expressed concern about testing procedures. In response, we hosted a workshop on online assessment to help faculty understand, select, and implement different practices to fit the problems students encountered (e.g., having windows rather than pre-set times for completing exams and allowing time for uploading issues). In other cases, we worked with instructors one-on-one before issues students identified became self-sustaining (e.g., unease with video conferencing software). As increasing numbers of students had difficulty with internet access, we communicated to instructors the need to provide asynchronous options and be flexible with synchronous attendance. In other cases, the survey provided the ability to link students with emergency loans, housing assistance, mental health services, and other critical resources. Mass student emails provided information on free internet options, ergonomic workspace configuration, and best practices for remote working. As a final example, when summer classes were announced as online, our data on students' experiences with different software and the availability of hardware was available to faculty who did not teach in the spring. That information (Table 1 is an example provided to faculty) was accompanied by comments from learners that the multitude of platforms was confusing. This helped drive the department to standardize on Microsoft Teams as the classroom interaction platform while continuing to use Canvas as the standard LMS. Encouraging faculty to standardize on Teams was chosen because it provided a single platform for synchronous and asynchronous interaction, a need identified from bimodal attitudes towards synchronous and asynchronous delivery and from learners’ identification of significant, ongoing, and intermittent technological challenges. Data on hardware availability (Fig. 3 was created for faculty) helped specifically because they deviated from assumptions that were driving decisions related to assessment or teaching (e.g., only 60% of learners had access to a printer and only 45% to a private workspace).

Table 1. Student ratings of various instructional tools deployed by the department and used for summer instructional planning.
Figure 3. Students’ self-reported information on availability of different digital technology in their homes. Information was provided to faculty for use in summer and fall online course planning.

What did not go well with this system was primarily a result of building the airplane as we flew it. Not having the opportunity to test and refine questions, or to target them in advance, likely caused missed opportunities or less useful information. For example, our initial focus was identifying problems that we could act to address. However, the phrasing of one question (is there anything new non-course related that is limiting your ability to participate in online learning activities?) brought in information that was non-actionable but still useful (e.g., sharing a room with a sibling). We learned to reframe such information to give instructors insight into learners’ ongoing experience. This point, giving faculty access to the less-apparent challenges students faced, is something we have found useful during the summer and fall semesters for changing faculty attitudes about challenges beyond those we could actively solve. We continue to use a modified version of the survey as a tool to humanize students and encourage greater empathy and flexibility from faculty, which some faculty may not be comfortable pursuing independently. Similarly, our efforts to act on information were at times delayed by a lack of clearly defined processes for escalation or for interacting with other units.

Linking our system back to concepts of learner analyses, we gained an understanding of the impact that students' lives outside of the classroom have on their experience in it. With one goal of a learner analysis being to understand the learner context in which we teach,7 this project served as a reminder of the magnitude and diversity of experiences that can affect learning. Whether expressed in frustrations about peers' academic dishonesty, issues of internet access, or simply fear over the state of the world, the stark realities that affected students' experience were now unavoidable. We are left asking what the impact of our prior assumptions or ignorance about these issues may have been. Several of the authors were deeply impacted by exposure to these parts of students' lives and are motivated to make them part of our educational efforts. How we gather and analyze good information on students as people with lives should expand, in the time of COVID and beyond, to improve our educational activities. While learner analyses may seem an a priori tool for curricular planning, our experience suggests that such tools and mindsets can also be used for impactful real-time adjustment to help faculty and students alike.

Reflecting on our effort does show some limitations, potential adaptations, and opportunities for future improvement of which readers should be aware. Limitations include the possibility that internet access may have mediated response rate, that students were overwhelmed by the transition, and that the increased amount of email may have affected who completed the survey. Because we did not collect demographic data, we are also unable to make comparisons across years, gender, and other such variables. Additionally, students may not have perceived value or safety in participating in this survey because it was administered by the department and came from a faculty email address rather than through typical student evaluation of teaching processes. As a practical limitation, this approach is somewhat labor intensive, averaging about 3–4 h per week of faculty labor. We do see this approach, and mindset, as potentially valuable to anyone introducing anything new in a classroom or curriculum.

As engineering education scholarship continues to evolve, there has been an ongoing tension between rigorous research and instructional practice in the field.5 One of our personal takeaways from this work is the degree to which tools and their use can be separated. If the primary goal is development or foundational rigorous research, traditional approaches to planning research and data collection can be useful. If the primary goal is tuning implementation for unexpected factors, deep prior planning may not be necessary, warranted, or even useful. Adaptive feedback loops can also be useful and provide actionable empirical data, and if properly managed that data can become insights for educators. Areas of educational research such as Design Studies8 show paths where discovery, validation, and intervention in the educational experience can be better integrated. Static and adaptive approaches reflect different thinking about curricular change and serve different goals. As engineers, the authors typically find ourselves drawn to clearly defined, thoroughly planned, and structured (i.e., well-engineered) approaches to implementing and evaluating change. The opportunity we faced with COVID was that such planning was not possible, because we only broadly knew what we were looking for. Instead, we took a different approach to planning and executing research on our students. By doing so, we discovered aspects of learners’ experience that we likely would not have seen, thought to look for, or been able to share with colleagues to improve classes.