The use of formative assessments in traditional and hybrid lecture-labs of industrial engineering undergraduates and their motivational profiles

The aim of this study was to examine the relationships between the mode of viewing a lecture, participants' motivational profiles, the use of a formative assessment or not on a laboratory exercise, and participants' performance on the laboratory exercise and summative assessment.


INTRODUCTION
Accessible and high-quality higher education is essential for a healthy nation. However, there is no consensus on how to ensure such an education. In addition, students are not obtaining degrees in science, technology, engineering, and math (STEM) majors to the extent necessary to maintain the economic prosperity and safety of the United States (President's Council of Advisors on Science and Technology, 2012). One possible avenue for providing accessible, high-quality STEM higher education is the inclusion of online components. While there have been many studies on online learning, few focus on STEM undergraduates, and those that do usually have methodological shortcomings. Therefore, research with a robust methodological framework focused on the online learning of undergraduate STEM students was needed.
A review of current research on STEM student motivation did not indicate the Work Preference Inventory had been used with this population. In addition, while significant research exists on the use of formative assessments to improve student performance, only minimal work has examined them in hybrid courses. This study utilized random assignment, a large sample size, and detailed methodology. Its goal was to examine relationships among the mode of viewing a lecture, either live or online, participants' motivational profiles, the use of a formative assessment or not on a laboratory exercise, and participants' performance on the laboratory exercise and summative assessment.

THESIS ORGANIZATION
Chapter 2 details the current study by expanding upon the need for such research, the study's methodology, and the study's results. Chapter 3 provides general conclusions of the study as well as its limitations and suggestions for future work.

CHAPTER 2: THE USE OF FORMATIVE ASSESSMENTS IN TRADITIONAL AND HYBRID LECTURE-LABS OF INDUSTRIAL ENGINEERING UNDERGRADUATES & THEIR MOTIVATIONAL PROFILES
Sarah Gidlewski 1,2, Paul Componation 2, Richard Stone 2

A paper to be submitted to The Journal of Engineering Education

ABSTRACT

BACKGROUND
Online learning is one way to increase the quality and accessibility of STEM higher education. Most previous research on online learning suffers from methodological deficiencies and does not focus on STEM undergraduates.

PURPOSE/HYPOTHESIS
The aim of this study was to examine the relationships between the mode of viewing a lecture, participants' motivational profiles, the use of a formative assessment or not on a laboratory exercise, and participants' performance on the laboratory exercise and summative assessment.

DESIGN/METHOD
Students in a freshman and a sophomore industrial engineering class were randomly assigned to watch a lecture either live or online and then complete a hands-on laboratory exercise either with or without a formative assessment. All participants completed a demographics survey, the Work Preference Inventory, and a summative assessment.

1 Primary researcher and author
2 Graduate student, Professor, and Assistant Professor, respectively, Department of Industrial and Manufacturing Systems Engineering, Iowa State University

RESULTS
This study showed no difference among the summative assessment scores of participants in each of the four conditions. Four variables explained 33.7% of the variability in summative assessment scores: the participant's intrinsic subscale challenge score, whether the participant used a formative assessment or not, the quantity of unique observations on the laboratory exercise, and the participant's gender. Lastly, participants scored higher on the extrinsic primary scale and subscales than participants in two other studies.

CONCLUSIONS
This study corrected methodological deficiencies found in other online learning research, focused on industrial engineering undergraduates, and found no difference in the learning outcomes of students who watched the lecture online as opposed to live. Participants also had higher extrinsic primary scale and subscale scores than samples of psychology students and management majors.

INTRODUCTION
Marshall Hill asserts, "The accessibility and quality of public higher education will largely determine the competitiveness of the U.S. workforce for the next half century and the ability of our people to meet the challenges of citizenship in an increasingly complex world" (as cited in State Higher Education Executive Officers Association, 2013). While it is difficult to contest that accessible and quality higher education is essential for a healthy nation, a consensus has not been reached on how to provide such an education. Therefore, research is needed on how to provide quality and accessible public higher education.
Of particular importance is the development of professionals in science, technology, engineering, and math (STEM). Professionals in science- and engineering-related fields are responsible for over 50% of the sustained economic expansion in the U.S., yet constitute only 5% of U.S. workers (Adkins, 2012). While STEM professionals are essential for private industry, they are also vital to the safety of the U.S.: Pentagon officials have noted the necessity for more students to pursue STEM careers so the U.S. can confront future security challenges.
The proportion of STEM degrees has declined for the past 10 years and analyses indicate the need for approximately 1 million more STEM professionals in the next ten years than are currently anticipated to graduate (President's Council of Advisors on Science and Technology, 2012). Non-STEM occupations are projected to grow 9.8% as compared to 17.0% for STEM occupations in 2008-2018 (Langdon, McKittrick, Beede, Khan, & Doms, 2011). Over 60% of students who begin college intending to pursue a STEM major do not graduate in these areas.
Many of these students perform well in other majors but, "describe the teaching methods and atmosphere in introductory STEM classes as ineffective and uninspiring" (President's Council of Advisors on Science and Technology, 2012).
The question stands: how can STEM higher education be made easily accessible and of high quality? The use of technology is one method being explored to make education more accessible, with online education being critical to the long-term strategy of almost 70% of higher education institutions (Allen & Seaman, 2013). Questions arise as to whether the use of technology affects the quality of learning. There are thousands of studies on technology in the classroom; however, very few focus on undergraduate education, and those that do usually suffer from methodological shortcomings.
The gap in research with rigorous experimental design on online education of STEM undergraduates must be filled for the prosperity and safety of the U.S. This research was designed to examine if and how the mode of viewing a lecture, either live or online, and the use of a formative assessment on a laboratory exercise relates to undergraduate Industrial Engineering (IE) student performance on the laboratory exercise and related summative assessment. In addition, the relationship between participants' motivational profiles and performance will be examined to determine if students' motivational profiles relate to excellence or challenges in traditional and hybrid classrooms with and without the use of formative assessments.

ONLINE LEARNING
The use of the Internet to deliver course content and assess knowledge has grown drastically. Changing Course: Ten Years of Tracking Online Education in the United States defines online courses as those with 80% or more of content delivered online; blended or hybrid courses as those with 30-79% of content delivered online; web-facilitated courses as those with 1-29% of content delivered online; and traditional courses as those with no content delivered online. Under those definitions, in the fall of 2002, 1,602,970 students were enrolled in at least one online course at postsecondary institutions. In the fall of 2011, there were 6,714,792 students taking at least one online course, an all-time high of 32.0% (Allen & Seaman, 2013).
This increase does not inherently support nor refute the effectiveness of online learning.
William Bowen eloquently summarizes the current state of online learning: How effective has online learning been in improving (or at least maintaining) learning outcomes achieved by various populations of students? Unfortunately, no one really knows the answer to either this question or the obvious follow-on query about cost savings. There have been literally thousands of studies of online learning, and Kelly Lack and I have attempted to catalog them and summarize their import. This has been a daunting - and, we have to say, discouraging - task. Very few of these studies are relevant to the teaching of undergraduates, and the few that are relevant almost always suffer from serious methodological deficiencies. (2012, p. 27)
A deficiency is an aspect of experimental methodology that precludes valid, generalized conclusions from being ascertained.
One deficiency is that terms like online, hybrid, and blended learning mean different things to different researchers (Lack, 2013). A review of e-learning identified forty-six different terms synonymous with "e-learning" (Brown, Charlier, & Pierotti, 2012). Other methodological deficiencies include lack of random assignment of participants to the face-to-face or online/hybrid course, failure to control for pre-existing differences in treatment groups, small sample size, and non-explicit explanation of the variability between online/hybrid sections and face-to-face sections (Lack, 2013).
The impact on student learning in online and hybrid classes as compared to traditional classes varies depending on the studies being considered. A meta-analysis conducted by the U.S. Department of Education found, "the learning outcomes for students in purely online conditions and those for students in purely face-to-face conditions were statistically equivalent" (Means, Toyama, Murphy, Bakia, & Jones, 2010, p. xv). Blended learning (which includes elements of online and face-to-face learning), however, had an effect size of 0.35 as compared to purely face-to-face instruction (Means et al., 2010). In other words, 63.7% of participants in the blended learning groups had higher scores than those in the face-to-face learning groups (Coe, 2002).
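The conversion from an effect size to that percentage follows from the standard normal distribution: under an assumption of normally distributed scores, an effect size d places the treatment-group mean d standard deviations above the control mean, and the normal CDF at d gives the share of the control distribution below that point. A minimal check of the 63.7% figure:

```python
from scipy.stats import norm

# Effect size of blended vs. purely face-to-face instruction (Means et al., 2010).
d = 0.35

# Share of the face-to-face distribution falling below the blended-group mean,
# assuming normality (the basis of Coe's effect-size-to-percentile conversion).
share_above_control_mean = norm.cdf(d)
print(round(share_above_control_mean * 100, 1))  # 63.7
```

This is an illustrative check of the published conversion, not an analysis of the study's own data.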

Bradford S. Bell and Jessica E. Federman succinctly summarize the findings of the meta-analyses and reviews in E-Learning in Postsecondary Education:
E-learning is at least as effective as, and in some cases more effective than, classroom instruction. But taking into account various methodological and instructional factors can change the findings - typically not reversing them but rather weakening or eliminating the observed benefits of e-learning. Furthermore, some of the meta-analyses found widely varying effect sizes for the relationship between e-learning and the learning outcomes, with some studies finding e-learning much more effective than classroom instruction and others finding it much less effective. Such variability suggests that other explanations - such as aspects of the instruction, teacher effectiveness, or student characteristics - account for the relative effectiveness of e-learning in the studies. (2013, p. 174)
In order to examine the effectiveness of online learning in STEM higher education, rigorous experiments must be deployed. The results can inform on how to ensure high quality and accessible STEM higher education.

FORMATIVE & SUMMATIVE ASSESSMENTS
Formative assessments are used to inform and further learning, whereas summative assessments are used to quantify learning. It is the difference in the functions of a formative assessment and a summative assessment that defines them (Black & Wiliam, 1996). The present study utilizes the definition of formative assessment coined by Black and Wiliam in 2009:
Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. (p. 9)
From this definition, formative assessments must result in better decisions in teaching and learning. This leads to improved learning over what would have occurred without the formative assessment. There have been few initiatives in education with such robust evidence to support the assertion that student performance can be improved through formative assessments (Black & Wiliam, 2004). A formative assessment consists of five key strategies:
1. "Clarifying and sharing learning intentions and criteria for success
2. "Engineering effective classroom discussions, questions, and learning tasks that elicit evidence of learning
3. "Providing feedback that moves learners forward
4. "Activating students as instructional resources for one another
5. "Activating students as the owners of their own learning" (Wiliam & Thompson, 2008)
Current research on the use of formative assessments in a blended classroom environment focuses on formative assessments employed via technology (e.g. Gikandi, Morrow, & Davis, 2011). Conversely, there is little work on formative assessments employed via traditional methods in a blended classroom. The current study focuses on formative assessments employed during a traditional, hands-on laboratory exercise that is preceded by viewing a lecture either live or online and how this relates to student performance on the laboratory exercise and related summative assessment.

INTRINSIC & EXTRINSIC MOTIVATION
There are two types of motivation: intrinsic motivation and extrinsic motivation.
Intrinsic motivation is, "the motivation to engage in work primarily for its own sake, because the work itself is interesting, engaging, or in some way satisfying" (Amabile et al., 1994). Extrinsic motivation is, "the motivation to work primarily in response to something apart from the work itself, such as reward or recognition or the dictates of other people" (Amabile et al., 1994).
While a person's temporary motivation may change in specific situations, most theorists accept the possibility that motivation can be a relatively stable, enduring trait (Amabile, 1993). However, contrasting theories exist as to how intrinsic and extrinsic motivation relate to each other and thus how they can be measured.
The Academic Motivation Scale measures motivation as a continuum from amotivation (lack of motivation) to extrinsic motivation to intrinsic motivation (Vallerand et al., 1992).
However, later research did not find support for the motivation continuum (Fairchild et al., 2005). Theorists question the assumption that intrinsic and extrinsic motivation are mutually exclusive constructs. Rather, there are conditions where extrinsic and intrinsic motivation may coincide (Covington & Mueller, 2001), and it is possible for extrinsic motivators to enhance intrinsic motivation (Eisenberger & Cameron, 1996).
The Work Preference Inventory (WPI) was designed by Amabile, Hill, Hennessey, and Tighe (1994) to assess an individual's intrinsic and extrinsic motivation as two distinct constructs. Extensive factor analytic research was conducted to develop the WPI. Its current seventh edition consists of 30 questions written in the first person, in which the individual is asked how much the statement describes him or her from 1 (never or almost never true) to 4 (always or almost always true). While originally written for working adults, there is a version for college students in which statements regarding salary and promotions are restated using grades and awards.
The WPI consists of two primary scales, intrinsic and extrinsic motivation, each of which has two subscales. The intrinsic motivation scale contains the subscales challenge (five questions, e.g. "I enjoy trying to solve complex problems") and enjoyment (ten questions, e.g. "What matters most to me is enjoying what I do"). The extrinsic motivation scale consists of the subscales outward (ten questions, e.g. "I am strongly motivated by the recognition I can earn from other people") and compensation (five questions, e.g. "I am keenly aware of the GPA goals I have for myself").
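The scale structure above can be made concrete with a short scoring sketch. The item-to-subscale mapping below is purely illustrative (the actual assignment of the 30 items comes from the WPI's published scoring key); the sketch only shows how subscale and primary-scale scores are formed as means of 1-4 item responses:

```python
# Hypothetical item-to-subscale mapping; the real WPI key assigns each of
# the 30 items to one subscale (challenge: 5 items, enjoyment: 10,
# outward: 10, compensation: 5).
SUBSCALES = {
    "challenge":    range(0, 5),
    "enjoyment":    range(5, 15),
    "outward":      range(15, 25),
    "compensation": range(25, 30),
}

def wpi_scores(responses):
    """Score a 30-item WPI response vector (each item rated 1-4)."""
    assert len(responses) == 30 and all(1 <= r <= 4 for r in responses)
    scores = {name: sum(responses[i] for i in items) / len(items)
              for name, items in SUBSCALES.items()}
    # Primary scales are computed over all 15 items of their two subscales.
    intrinsic = list(SUBSCALES["challenge"]) + list(SUBSCALES["enjoyment"])
    extrinsic = list(SUBSCALES["outward"]) + list(SUBSCALES["compensation"])
    scores["intrinsic"] = sum(responses[i] for i in intrinsic) / 15
    scores["extrinsic"] = sum(responses[i] for i in extrinsic) / 15
    return scores

scores = wpi_scores([3] * 30)
print(scores["intrinsic"], scores["challenge"])  # 3.0 3.0
```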
There were 1,363 undergraduate participants at two universities in the study by Amabile et al. (1994). The intrinsic and extrinsic scales showed a weak negative correlation (r = -0.22), while the two intrinsic subscales were moderately positively correlated (r = 0.44) and the two extrinsic subscales were moderately positively correlated (r = 0.36). Retests were conducted on subsamples of the student participants at intervals between 6 and 54 months. The test-retest reliability at 6 months and 54 months was 0.84 and 0.70 for intrinsic motivation and 0.94 and 0.73 for extrinsic motivation, respectively.
In the creation of the WPI, covariance structure analyses were performed to determine the fit of the two-factor model (intrinsic, extrinsic) and the four-factor model (enjoyment, challenge, outward, and compensation). The adjusted goodness-of-fit index (AGFI) was higher for the four-factor model than the two-factor model (for students, 0.83 and 0.73, respectively).
Amabile et al. indicate the motivational structure is more complex than simply intrinsic and extrinsic.
However, the researchers proceeded with the primary and secondary scales because the grouping of items on each scale is conceptually meaningful, the scales relate to other measures in useful ways, the fit is within an acceptable range, and the items on each scale correlate most strongly with the other items on the same scale (Amabile et al., 1994). Loo (2001) examined the WPI in a study of 200 management undergraduates from six classes at a small Canadian university. Loo's analysis complemented Amabile et al.'s study in that it showed better support for the four-factor model than the two-factor model: "… scores on the WPI scales showed construct validity as indicated by the pattern of intercorrelations among the scales and the pattern of correlations between the WPI scales and the Values Scale scales" (Loo, 2001).
The WPI motivational profiles of undergraduate STEM majors had not been examined, nor had the interplay between the use of a formative assessment and how a lecture is viewed. This research focused on these areas.

RESEARCH QUESTIONS
The experiment was designed to fulfill the following four objectives:
1. Examine the relationship between the mode of viewing a lecture, either live or online, and student performance on a laboratory exercise and summative assessment.
2. Examine the relationship between the use of a formative assessment and student performance on the related laboratory exercise and summative assessment.
3. Examine the relationship between a student's motivational profile and their performance on a laboratory exercise and summative assessment.
4. Examine the relationship between students' summative assessment scores and all other collected data.

METHOD

PARTICIPANTS
Participants were students at a large Midwestern university in two lower level industrial engineering (IE) courses. The first course is typically taken the fall semester of an IE student's freshman year; the second course is typically taken the fall semester of an IE student's sophomore year. The university's Institutional Review Board approved the study. A researcher visited the IE courses two weeks prior to the start of the study to explain the purpose of the study, required time commitment, and tasks sought of participants.
Of the 144 participants who signed up for the study, 124 participants (86.1%) began the study. Of the 124 participants, 110 (88.7%) successfully completed it. Two students were unable to complete the study due to circumstances beyond their control. The remaining twelve students did not successfully complete the study for one of the two following reasons.
First, in the demographics survey, to verify participants viewed the lecture in the manner assigned to them, researchers asked participants to self-report whether they watched the lecture online or live. Researchers assigned six participants the lecture condition opposite that which they self-reported. Because it was not possible to be certain how the participants viewed the lecture, the researcher removed the six participants from the final data set.
Secondly, the demographics survey asked participants, "Is there a reason this lab might have been more difficult for you than for others?" Six participants marked "Yes" or left the question blank. The researcher removed the participants from the final data set under the supposition these students were unable to complete the study to the best of their ability.
The demographics surveys showed that of the 110 participants, 41 (37.3%) were female and 69 (62.7%) were male. There were 48 participants (43.6%) in the freshman-level IE course, 57 (51.8%) in the sophomore-level IE course, and five (4.5%) in both courses. Students in both courses had the option to complete an alternate activity in addition to the study to receive extra credit in the second course.
All participants were IE majors, except for one pre-business major.

RESEARCH DESIGN
The researcher randomly assigned half the participants to watch a 15-minute lecture live and the other half to watch the same lecture pre-recorded online (Figure 1). The live lectures took place on a Tuesday and Wednesday, and the pre-recorded lecture was available beginning Tuesday. Participants returned for a 45-minute laboratory that consisted of completing a demographics survey, the WPI, a laboratory exercise, and a summative assessment. There were 21 laboratory time slots available throughout Wednesday, Thursday, and Friday of the same week. Half of the participants completed the laboratory exercise with a formative assessment and the other half without.

The demographics survey consisted of 12 questions (Appendix A). These questions solicited basic information (e.g. age, gender, GPA), information that could potentially relate to performance on the laboratory exercise (e.g. "Is there a reason this lab might have been more difficult for you than for others?"), and information that could potentially relate to participants' motivational profiles (e.g. "Do you have a job?").

The laboratory exercise involved participants using the K24 Blowhard Reactor Maintenance Manual (Appendix B) to replace the K24 Blowhard Reactor's cartridge (Figure 3).

Figure 3: K24 Blowhard Reactor, Inside
The K24 Blowhard Reactor was an inert apparatus purchased from Auburn Engineering with over twenty intentional device, display, and instruction defects. In completing the nine steps detailed in the K24 Blowhard Reactor Maintenance Manual, participants could see unsafe, difficult, and inefficient device, display, and instruction design. Participants were asked to list these defects (e.g. the K24 Blowhard Reactor had superfluous screws on the cover plate). All participants were provided with the Lab Objectives and Lab Procedures (Figure 4), and half of the participants were also provided with the formative assessment (Figure 5).

The laboratory exercise with a formative assessment provided an outline in the form of a table for students to note the defects. It followed the nine steps in the K24 Blowhard Reactor Maintenance Manual and provided dedicated space for recording defects in device, display, and instruction design. The instructions also reminded participants to consider defects at each step ("When completing each step in the 'K24 Blowhard Reactor Maintenance Manual'…").
Researchers designed the formative assessment to enable the student: to be aware of and understand the objectives of the activity, to determine his or her current accomplishments, and to decide what next steps to take to meet the objectives. In order to increase the student's awareness and comprehension of the objectives, the formative assessment elaborated upon the basic instructions and reminded the student two additional times to consider safety, efficiency, and effectiveness. The laboratory exercise without the formative assessment did not clarify or reinforce the objectives.
To enable students to reflect upon and determine their achievements, the structure provided space for each step. By breaking the exercise down by steps, the formative assessment gave the student points at which to reflect on the task and verify whether there were any device, display, or instruction defects. The laboratory exercise without the formative assessment did not provide logical reflection points, reducing the possibility that the student would review the task he or she had completed or revisit the objectives and be reminded of what to consider when completing the next step.
The summative assessment consisted of 15 multiple-choice questions (Appendix C) covering material taught in the lecture and reemphasized during the laboratory exercise. The purpose of the assessment was to quantify how much the students learned through the lecture and laboratory exercise.

EXPERIMENT IMPLEMENTATION
The researcher conducted the live lecture during the last 15 minutes of the regular class session. There were two sections of the freshman-level IE course and one section of the sophomore-level course. To reduce variability in the lectures, the researcher presented the live lecture in one randomly selected section of the freshman-level IE course; participants in the other section viewed the online lecture. Using a random number generator, researchers randomly assigned participants in the sophomore-level course to view the lecture live or online.
The university's distance education services recorded the live lecture, enabling a professional-quality recording. The researcher emailed a link to the online lecture to participants in the online condition, who then watched it at their convenience prior to participating in the laboratory component. Up to eight participants could partake in each laboratory session. Each participant was assigned to an individual workstation (Figure 6) that contained general instructions (Appendix D), the K24 Blowhard Reactor, and a pen.

Researchers also examined the laboratory exercise by looking at defects noted during the beginning and end of the activity. When participants began to go through the K24 Blowhard Reactor Maintenance Manual, they encountered defects related to seven unique observation categories. At the end of the process, they encountered defects corresponding to eight different observation categories. Researchers recorded whether each participant noted at least one defect in the "front-end" observation categories and at least one defect in the "back-end" categories, or failed to note a front-end defect, a back-end defect, or both.
Lastly, researchers scored the summative assessments and recorded the percentage of questions participants correctly answered.

RESULTS
Analyses were performed to examine the original four objectives:
1. Examine the relationship between the mode of viewing a lecture, either live or online, and student performance on a laboratory exercise and summative assessment.
2. Examine the relationship between the use of a formative assessment and student performance on the related laboratory exercise and summative assessment.
3. Examine the relationship between a student's motivational profile and their performance on a laboratory exercise and summative assessment.
4. Examine the relationship between students' summative assessment scores and all other collected data.
In addition, analyses revealed noteworthy observations outside of the original objectives.

OBJECTIVE 1:
An ANOVA was completed to compare the mean summative assessment scores of the four conditions (two treatments with two levels each: lecture viewed live or online, and laboratory exercise completed with or without a formative assessment). No statistical differences (p = 0.655) were found among any of the groups. With regard to the laboratory exercise, researchers recorded the number of participants who noted two specific observations that were specifically emphasized, verbally and pictorially, in the lecture. T-tests showed no differences between conditions (p = 0.817 for defect #1, p = 0.545 for defect #2). In conclusion, the data indicate the mode of viewing the lecture, either live or online, did not significantly affect student performance on either the laboratory exercise or the summative assessment.
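The four-condition comparison above can be sketched as a one-way ANOVA. The data below are synthetic stand-ins (the study's raw scores are not reproduced here); the sketch only illustrates the shape of the analysis:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic summative-assessment scores (percent correct) for the four
# conditions of the 2x2 design: lecture mode x formative assessment.
conditions = {
    "live_with":      rng.normal(78, 10, 28),
    "live_without":   rng.normal(77, 10, 27),
    "online_with":    rng.normal(79, 10, 28),
    "online_without": rng.normal(78, 10, 27),
}

# One-way ANOVA across the four condition means; a large p-value, as in
# the study (p = 0.655), gives no evidence the condition means differ.
f_stat, p_value = f_oneway(*conditions.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```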

OBJECTIVE 2:
Participants who completed the laboratory exercise with the formative assessment noted an average of 9.39 more total observations and 6.09 more unique observations than participants who completed it without the formative assessment (Table 1). Participants who completed the laboratory exercise with the formative assessment were also more likely to note both a front-end and a back-end defect (p < 0.001) (Table 2). Researchers conducted a t-test comparing the total number of observations between participants who scored 80% or greater on the summative assessment and those who scored below 80%. Students who scored 80% or better noted an average of 4.30 more total observations (3.72 more unique observations) than students who scored below 80%.
Therefore, evidence supports that the formative assessment related to increased student engagement with the laboratory exercise. However, the data did not support a direct link between the use of a formative assessment and student performance on the summative assessment.

OBJECTIVE 3:
No relationships were found among the participants' motivational profiles (two subscales of extrinsic motivation, compensation and outward, and two subscales of intrinsic motivation, challenge and enjoyment) and their performance on the laboratory exercise or summative assessment (Table 3). Other results of note included the finding that students with a GPA of 3.0 or higher (out of 4.0) scored an average of 0.21 points higher on the extrinsic subscale compensation (p = 0.019). Also, females scored an average of 0.17 points higher than males on the intrinsic subscale enjoyment (p = 0.017).

Researchers compared this study's sample motivational profile with those of Amabile et al.'s and Loo's studies (Table 4) because the population of each study varied: the majority (85.8%) of Amabile et al.'s participants were in an undergraduate psychology course or seminar (Amabile et al., 1994), and Loo's participants were volunteer management undergraduates (Loo, 2001). Except for one participant, all students in this study majored in industrial engineering. The effect size was used for the comparison instead of a p-value because p-values are influenced by sample size, and small differences may be statistically significant when the sample size is large (Berben, Sereika, & Engberg, 2012). As Amabile et al.'s student study had 1,363 participants, the effect size was used to articulate the practical importance of the differences (Coe, 2002). A medium effect size, defined as d around 0.5 (Cohen, 1969), was found for the extrinsic primary scale and subscale scores in comparison to both Amabile et al.'s and Loo's studies (Amabile et al., 1994; Loo, 2001).

OBJECTIVE 4:
A linear regression model combining four explanatory variables explained 33.7% of the variability in summative assessment scores. The model used participants' intrinsic subscale challenge score (p = 0.011), whether they used a formative assessment in completing the laboratory exercise (p = 0.001), the quantity of unique observations noted on the exercise (p < 0.001), and the participant's gender (p = 0.044).
Notably absent from the model is the mode in which the lecture was viewed.
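The structure of the four-variable regression can be sketched as below. This fits an ordinary least squares model on synthetic data; the coefficients, sample size, and noise level are assumptions, so the resulting R² only loosely resembles the reported 33.7%.

```python
# Minimal sketch of a four-predictor linear regression (synthetic data;
# NOT the study's data or coefficients).
import numpy as np

rng = np.random.default_rng(1)
n = 150
challenge = rng.normal(3.0, 0.5, n)                    # intrinsic: challenge
used_formative = rng.integers(0, 2, n).astype(float)   # 1 = used formative
unique_obs = rng.poisson(10, n).astype(float)          # unique observations
gender = rng.integers(0, 2, n).astype(float)           # 1 = female

# Synthetic summative score loosely reflecting the reported relationships.
score = (60 + 3 * challenge + 4 * used_formative
         + 1.2 * unique_obs + 2 * gender + rng.normal(0, 6, n))

# Ordinary least squares via numpy: design matrix with an intercept column.
X = np.column_stack([np.ones(n), challenge, used_formative, unique_obs, gender])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
fitted = X @ beta
r_squared = 1 - np.sum((score - fitted) ** 2) / np.sum((score - score.mean()) ** 2)
print(f"R^2 = {r_squared:.3f}")  # share of score variability explained
```

In this formulation, the lecture-viewing mode simply never enters the design matrix, mirroring its absence from the final model.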

DISCUSSION
The question posed was how to make STEM higher education accessible and of high quality. This study narrowed the scope to industrial engineering undergraduates and how the mode of viewing a lecture related to their learning. The study also examined how the use of a formative assessment and the student's motivational profile related to performance on a laboratory exercise and summative assessment.
Several findings are discussed below: first, student performance on the laboratory exercise and summative assessment in connection with whether the student viewed the lecture live or online; next, the finding that the formative assessment performed as intended by furthering students' engagement and learning; third, similarities and differences between this study and comparable studies; and last, the finding that 33.7% of the variability in summative assessment scores could be explained by a combination of four factors, and what this implies.
This study addressed methodological deficiencies of prior research by focusing on undergraduate students and utilizing random assignment of participants to treatments, a large sample size, and explicit description of the procedures used. Within this robust, experimental framework studying university undergraduates, the mode in which students viewed the lecture was unrelated to their performance on the laboratory exercise and summative assessment.
The formative assessment performed as intended by increasing student engagement with the laboratory exercise, as demonstrated by more total observations and more unique observation categories. Participants who used it were also more likely to remain engaged, as evidenced by observations from both the beginning and end of the laboratory exercise. While results did not show a difference among the four conditions on the summative assessment, students with higher summative assessment scores recorded more observations (total and unique).
This lack of a direct relationship between use of the formative assessment and summative assessment score is an avenue for further research. One possible explanation is that differences existed between the conditions that were not captured by the demographics survey or WPI. Participants in this study also scored higher on the extrinsic primary scale and subscales than participants in the comparison studies (Amabile et al., 1994; Loo, 2001). These findings suggest industrial engineering undergraduates may tend to be more extrinsically motivated than students pursuing psychology or management undergraduate degrees.
When examining the data in its entirety, 33.7% of the variability in summative assessment scores can be explained by participants' intrinsic subscale challenge score, whether they used a formative assessment, the quantity of unique observation categories noted on the laboratory exercise, and the participant's gender. That use of the formative assessment and the quantity of unique observations explain variability in summative assessment scores indicates that engagement with the laboratory exercise related to a higher summative assessment score. The variability explained by participants' intrinsic subscale challenge score also makes sense: the external reward (in this case, extra credit) was unrelated to participants' summative assessment score, whereas participants with high intrinsic subscale challenge motivation were motivated by the novelty of the task at hand. That the participants' gender also accounted for part of the variability (women tended to score higher on the summative assessment than men) remains a point for further research.
This study represents a robust experiment on online learning. From this study, additional research may be performed to examine other aspects of online learning that diminish or bolster student performance. The study also supports the use of formative assessments to encourage engagement independent of how a lecture is viewed. Lastly, this study gives insight into the motivational profiles of undergraduate IE students.
There is still room for growth and expansion. This study examined one learning experience and one sample pool. An avenue for future research is to replicate the study while expanding upon the methodology detailed in the present study. This study consisted of one lecture-laboratory session for which students received extra credit for participating. Future research could investigate semester-long courses, ideally over multiple years. In addition, the study should be conducted at universities throughout the U.S. and in multiple STEM disciplines. Such steps will strengthen the generalizability of these findings.

CHAPTER 3: GENERAL CONCLUSIONS
In conclusion, while there are many studies on online learning, few focus on undergraduates, and most suffer from methodological deficiencies. This study focused on undergraduate industrial engineers and utilized a robust methodology to research the relationship between the mode in which a lecture is viewed and student performance on a laboratory exercise and summative assessment. No differences were found. The methodology detailed in this thesis represents a solid stepping-stone upon which further research can build in the continued pursuit of accessible and high quality STEM education.
In addition, half of the participants used a formative assessment when completing the laboratory exercise, and all students completed a demographics survey and the Work Preference Inventory. Four explanatory variables accounted for 33.7% of the variability in summative assessment scores: the participant's intrinsic subscale challenge score, whether the participant used a formative assessment, the quantity of unique observations on the laboratory exercise, and the participant's gender. Further research is required to determine why participants' intrinsic subscale challenge score and gender accounted for variability in participants' summative assessment scores.
An unanticipated but noteworthy observation is that participants in this study had much higher extrinsic primary scale and subscale scores than participants in two other studies, which consisted mostly of psychology students and management majors, respectively. Researchers suggest examining the motivational profiles of other undergraduates in STEM and non-STEM majors.

K24 Blowhard Reactor Maintenance Manual
Diagnosing Equipment Failure: The K24 Blowhard Reactor represents the state-of-the-art in automatic process control. Nonetheless, for various reasons it may occasionally stop functioning. If the unit has ceased to operate, use the following procedures to diagnose and repair any defective components.