Article

A Comparison between Online Quizzes and Serious Games: The Case of Friend Me

by Lampros Karavidas, Georgina Skraparli and Thrasyvoulos Tsiatsos *
Department of Informatics, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Computers 2024, 13(3), 58; https://doi.org/10.3390/computers13030058
Submission received: 21 December 2023 / Revised: 16 February 2024 / Accepted: 21 February 2024 / Published: 23 February 2024
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

Abstract
The rapid changes in digital technology have had a substantial influence on education, resulting in the development of learning technologies (LTs) such as multimedia, computer-based training, intelligent tutoring systems, serious games, social media, and pedagogical agents. Serious games are games that are specifically created to fulfill a primary goal other than entertainment. They have demonstrated their effectiveness in several domains, although the data on their efficacy in modifying behavior are contradictory and possible disadvantages have been reported. The objective of our study is to evaluate the effectiveness of a serious game designed for the self-assessment of students' knowledge of web technologies by comparing it with an equivalent online quiz that uses the same collection of questions. Our primary hypotheses were that those using the serious game would achieve better results in terms of engagement, subjective experience, and learning than those using the online quiz. To examine these hypotheses, the IMI questionnaire, the total number of answered questions, and post-test grades were used to compare the two groups, which consisted of 34 undergraduate students. Our findings indicate that the serious game users did not have a better experience or better learning outcomes, but they did engage more, answering significantly more questions. Future steps include recruiting more participants and extending the experimental period.

1. Introduction

The rapid advancements in digital technology have had a profound impact on education. The introduction of the internet into daily life marked a transformative shift in the way individuals interact with information [1]. Furthermore, numerous studies indicate that users have become increasingly attracted to social media platforms and online gaming [2,3,4,5,6,7,8]. This development has prompted inquiries into the potential role of technology in education and the possible effects of this integration.
As a result of the integration of technology into educational settings, the term learning technologies (LTs) was created. According to Jan Elen and Geraldine Clarebout [9], learning technology is defined as “a field of study and ample practices of mainly two different types: technology for learning and technology of learning. Technology for learning pertains to the use of technology during (the support of) learning processes… In line with the more general meaning of ‘technology’ as the application of scientific insights to solve practical problems, ‘technology of learning’ relates to the question on how scientific findings with respect to (supporting) learning can actually be used to support learning processes”. Learning technologies comprise a combination of web tools, software applications, and mobile technologies that merge the technological and pedagogical aspects of internet advances and mobile devices [10]. Their purpose is to facilitate the entire learning process, including its design, development, delivery, and management [11]. Among the technologies explored for their integration into education are multimedia learning, computer-based training, intelligent tutoring systems, serious games, social media, and pedagogical agents [12]. In particular, Dabbagh et al. [13] identified five evolving models in the field of learning technologies (LTs): Massive Open Online Courses (MOOCs), mobile learning, social media, Augmented Reality (AR), and Game-Based Learning (GBL).
As the number of individuals with mobile phones and internet access continues to rise, technology’s integration in educational settings has become more common, leading to the expansion of distance and online learning [1,13]. A notable advantage of LTs is their flexibility: education can be delivered either synchronously or asynchronously. Additionally, LTs focus on the use of personal, portable, and handheld devices, facilitating learners’ convenient access to learning resources [14].
As previously mentioned, serious games are a type of learning technology. The term “serious game” was introduced by Abt in his book “Serious Games”, published in 1970, in which he characterized such games as having an “explicit and carefully thought-out educational purpose and are not intended to be played primarily for amusement” [15]. While Abt originally referred to card and board games, serious games are now available in digital formats. Accordingly, serious games are widely defined as games designed to serve a primary purpose beyond entertainment. The distinction between serious games and standard video games centers on their objectives: serious games are oriented towards acquiring new knowledge or skills or towards changing behavior. While it is not mandatory for all serious games to be fun, it is considered essential for players to immerse themselves in the gaming experience [16]. Furthermore, there is disagreement regarding the term “game” within the context of “serious games”, as a game is usually associated with a voluntary occupation, whereas individuals may be obligated to play a serious game, for example in a training scenario. Overall, although there is debate about the term “serious game”, it has gained general acceptance [16,17,18].
Serious games are implemented in a wide variety of contexts and appear in fields such as business, the military, education, well-being, cultural heritage, health care, and more [17]. Extensive research has been conducted on their effectiveness, with evidence supporting their positive impact in various fields. Research demonstrates that serious games are effective in enhancing learning, promoting desirable behaviors, and offering engaging experiences, as well as providing assessment tools [19,20,21,22,23]. On certain occasions they are as effective as their analog counterparts, such as when assessing emotional intelligence [23]. Nevertheless, conflicting evidence exists regarding their effectiveness in generating behavioral change, and negative reactions to competitive serious games have been recorded [19,24]. Moreover, potential drawbacks such as a lack of interest, negative perceptions towards gaming, and limited computer or tablet proficiency have been identified. Thus, despite their significant promise, the creation of successful and impactful serious games remains a challenging task.
In the context of self-regulation, studies indicate that serious games can positively impact self-efficacy [25,26]. These serious games leverage effective design mechanisms, including narrative contexts, feedback, avatars, simulations, goals, a variety of difficulty levels, and social interactions [27]. They have also shown positive results in health management, such as for diabetes [28], heart failure [29], and asthma self-management [30]. Furthermore, according to Roozeboom et al. [25], serious games effectively measure generic learning features, fostering active, self-directed engagement and contributing to self-assessment and self-regulated learning.
An example of a serious game used for self-evaluation is Serena Supergreen [31], a point-and-click game designed for adolescents, especially girls, which aims to enhance their self-evaluation of technical competence through a problem-solving approach. Two evaluations were conducted. In the first, 54 students from two German secondary schools, aged 13 to 15, played the game for two school hours. This short version positively affected players’ self-evaluations of their competence, but only when assessed through a specific perceived technical competence measure; it produced no changes in the self-concept of their technical abilities and did not significantly affect their intrinsic motivation towards technical tasks, although it did show positive changes in task interest. In the second evaluation, 39 students (mean age 15.13) at a German secondary school played the complete version of the game (180 h). The results indicated that the full game positively influenced players’ perceived technical competence, self-concept of their technical abilities, and intrinsic motivation towards technical tasks. Moreover, the feedback strategies within the game are highlighted as contributing to these positive effects on players’ self-evaluations of their competence.
In the context of software development, the use of serious games to teach software engineering has gained attention, indicating that real-world activities may include game design elements [32]. While further research is required, there is evidence to support the integration of serious games into software engineering education as a valuable adjunct to conventional teaching approaches [33]. In addition, serious games have gained specific interest in the learning of programming languages, indicating a growing trend in utilizing serious games for educational purposes [34]. Within this domain, serious games have gained attention as complementary tools to traditional teaching methods, with a focus on enhancing learning experiences and engagement [35].
The application of serious games to learning programming presents a promising approach to engaging students, incorporating motivation and enhancing the overall entertainment value of the learning process [36]. Specifically designed to facilitate a deep understanding of programming concepts and languages, these games offer an alternative and engaging way for learners to immerse themselves in simulations or experiments within the field of computer science [34]. Two examples of such serious games are ProBot [37] and Py-rate Adventures [36]. ProBot is a digital game designed to reinforce and improve students’ abilities in sequencing, defined iteration, and nesting. The game follows a problem-solving approach and implements a score-comparing mechanism to encourage students to analyze their solutions and seek better ones. Its evaluation, in which 123 engineering students from a Colombian university, aged 16–18, participated as part of their core course “Programming fundamentals”, demonstrated its effectiveness in enhancing the students’ understanding of programming concepts. Py-rate Adventures, on the other hand, is a 2D platform game designed to teach basic programming concepts using Python, aiming to engage and educate players with no prior programming knowledge. An evaluation conducted with 31 graduate and postgraduate students of an Interdepartmental Programme of Postgraduate Studies in Information Systems, with different bachelor’s degrees (economics, business administration, informatics, etc.), yielded positive results in terms of player experience and short-term learning. In summary, the increasing interest in and diverse applications of serious games in software development underscore their potential as engaging tools for teaching programming, offering a promising avenue by which to enhance learning experiences and foster student engagement.
The aim of our study is to assess the effectiveness of a serious game designed for the self-assessment of undergraduate students’ knowledge of web technologies, in comparison to a similar online quiz that utilizes the same set of questions. Our main hypotheses were that those using the serious game would achieve better results in terms of engagement, subjective experience, and learning than those using the online quiz. To investigate these research questions, the IMI questionnaire, the overall number of answered questions, and post-test scores were employed to compare the two groups, which comprised 34 undergraduate students in total. Our data suggest that the serious game users did not have an enhanced subjective experience or achieve better learning outcomes. However, they did exhibit higher levels of engagement, responding to a much larger number of questions than the users of the learning platform.
The remainder of this paper is organized as follows: Section 2 outlines the materials and methods employed in our experiment, presenting the serious game “Friend Me”, the equivalent online quiz, the research goals of the study, the measurement instruments (questionnaires and tests), and the experimental procedure that was followed. Section 3 presents the data analysis and the outcomes of the evaluation. Section 4 provides an in-depth discussion and the conclusions drawn from our findings. Section 5 addresses the limitations of our investigation and outlines our future steps.

2. Research Goals, Materials, and Methods

This section presents the materials created and the procedure followed to test our hypothesis that educational games with questions tend to be more engaging and have a greater impact on academic performance than learning platform quizzes.

2.1. Research Goals

As stated before, the present study aims to compare users’ subjective experience, engagement, and learning results when answering computer science questions using either a learning platform, such as Moodle, or a serious game, such as “Friend Me”.
Thus, the first research goal (RG1) of the paper is to explore the participants’ subjective experience of the main activity they performed, which involved answering multiple questions using either online quizzes or a serious game. Their experience is measured along multiple dimensions, such as interest/enjoyment, perceived competence, effort/importance, and value/usefulness, among others. The hypothesis is that the serious game users will have a better experience than those using the online quiz.
The second research goal (RG2) of this article is to examine whether the participants were more engaged while using the serious game compared to using the learning platform. The hypothesis is that the serious game users will engage more compared to the online quiz users.
The third research goal (RG3) of this paper is to establish the learning impact of the serious game compared to the learning platform on participants’ academic performance. The hypothesis is that the serious game users will perform better on the final test compared to the online quiz users.

2.2. Participants

To test the hypotheses of our experiment, a total of 34 undergraduate computer science students took part in the activity, all of whom were taking the same elective course on Virtual Learning Environments (VLEs). The participants’ ages ranged from 21 to 41 years, with a mean age of 21.97 years, a median of 21, and a standard deviation of 3.49. Among the participants, 23.53% were female and the remaining 76.47% were male. The participants were also asked how often they play digital games. The largest share, 32.35%, play digital games at least once per week. The remaining respondents consist of 2.94% who do not play video games at all, 23.53% who play rarely (less than once a month), 20.59% who play on a monthly basis, and 20.59% who play every day.
The participants were randomly separated into two groups. The experimental group comprised 16 individuals, whereas the control group comprised 18 people. Neither group had any prior exposure to the questions utilized in the main activity.
Prior to the experiment, all participants were asked to sign a consent form.

2.3. Materials

2.3.1. The “Friend Me” Serious Game

“Friend Me” [38] is a serious game designed to facilitate the learning of any programming language, either from scratch or through practice, making it particularly suitable for novice programmers. The game was initially evaluated by undergraduate students from the computer science department, and a new version was then created, incorporating improvements based on their feedback and introducing new features [39]. As a result, the game has been assessed twice [38,39], by users with the same profile as the target group of our experiment, in terms of player experience, namely in the areas of Usability, Confidence, Challenge, Satisfaction, Social Interaction, Fun, Focused Attention, and Relevance. This ensured its quality and made it well suited for our purposes, as it is tailored to the specific requirements of the course.
The research described in this paper was conducted using the improved version of the game. The current version of the game includes questions related to web development, focusing on enhancing and practicing users’ Hyper Text Markup Language (HTML), Cascading Style Sheets (CSS), and PHP knowledge.
It is designed to allow teachers to update its questions easily and regularly, and it is easy to expand. The game is accessible through a web browser or downloadable via the Google Play Store, allowing players to choose their preferred platform. However, a constant internet connection is required throughout gameplay.
The game employs various elements, including goal-setting, interactions, rewards, and feedback, to engage learners and motivate them to persist in their learning journey. Leveraging competition as a motivational factor, “Friend Me” encourages players to strive to pursue exceptional performance. Each player has a personal account to monitor and track their progress.
The game’s scenario revolves around a character attempting to make new friends in a new school by assisting their fifteen classmates (non-player characters, NPCs). These classmates assign the player missions, drawn from five unique mission types that repeat. Each mission consists of a question—either theoretical or code-based—that must be answered correctly to accomplish the mission. For those learning a programming language from scratch, the game offers mini courses covering fundamental programming concepts; these were not used in our case, as the students already had some basic knowledge.
Beyond answering the questions, players must explore and interact with the game’s environment, transitioning from a question-and-answer format to a more immersive gaming experience (Figure 1). The game contains three types of questions: multiple choice, putting code blocks in the right order, and writing a code’s result. In each question, there is a timer at the top-left part of the screen, set to one minute, challenging players to answer within the allocated time; otherwise, their response is counted as a mistake. In addition, the game displays a title at the top of the screen to indicate the question’s context.
In a multiple-choice type question (Figure 2a), users are presented with four possible answers, with only one being correct. Upon selecting their answer, players receive immediate visual and sound feedback, confirming the accuracy of their response (Figure 2b,c). If they make a mistake, the correct answer is not revealed.
In the putting-code-blocks-in-the-right-order type of question (Figure 3a), five blocks are presented on the user’s screen, of which only four are required; players must arrange these in the correct order. Users can adjust the order until the final block is placed, at which point the game provides visual and sound feedback, indicating correct and incorrect placements (Figure 3b). In the case of a wrong answer, the correct order is revealed (Figure 3c).
In the final question type (Figure 4a), writing the code’s result, users must accurately write the output of the code as it would appear when executed. After submitting their answer by clicking the “Done” or “Unlock” button, visual and auditory feedback is provided; if the user answers incorrectly, the correct answer is not revealed (Figure 4b,c).
If players wish to answer multiple questions consecutively, they can approach the computer area. When they decide to stop answering questions, they can return to the main game by clicking the return (left arrow) button on their screens.
For users practicing their programming skills, questions are randomly selected from the database. Players can track their performance by viewing the total number of questions they have answered correctly; their rank; the top three players and their scores (scores are calculated from the number of correct answers, with each correct answer earning 1 point); earned badges linked to learning goals; and the mood of their classmates towards them, which reflects successful or unsuccessful mission outcomes.

2.3.2. The Online Quiz

The online quiz was designed to be similar to the serious game “Friend Me”. It was created in a shared Moodle course on which all the participants were registered. On every attempt, the quiz randomly displayed 15 questions from the question bank created in the Moodle course. The Moodle settings utilized to create an interactive quiz experience like that of the game included (a) Timing, (b) Layout, (c) Question behavior, and (d) Review options.
The Timing settings were configured to enforce a time restriction of 15 min for the quiz. This ensured that participants, like those engaged in the serious game, had an allocation of 1 min per question, given that there were 15 questions in total. In terms of the quiz’s Layout options, each question was displayed on a separate page, providing an experience similar to the serious game, where the player can only view one question at a time. In addition, the Layout settings were adjusted to reorder the possible answers and questions in a manner comparable to the game’s functionality. Moreover, instant feedback was provided to the users: the system indicated the accuracy of a response using the textual labels “Correct”, “Partially correct”, or “Incorrect”, as well as accompanying colored highlights that conveyed the same information.
Three types of questions were created (multiple choice, code block ordering, and code result writing), mirroring the serious game. The questions were exactly the same as those in the serious game and functioned in the same manner.

2.4. Instruments

The Intrinsic Motivation Inventory (IMI) questionnaire [40,41] was used to measure the subjective experience of the participants while using either the serious game or the learning platform to gain knowledge.
It consists of 45 items organized into the following seven categories: (a) Interest/Enjoyment, (b) Perceived Competence, (c) Effort/Importance, (d) Pressure/Tension, (e) Perceived Choice, (f) Value/Usefulness, and (g) Relatedness. In accordance with the construction guidelines of the questionnaire, the Relatedness category was not utilized in our study due to the absence of any interactions between participants during the experiment, meaning that the questionnaire handed to the participants consisted of 37 items. The questions were rated on a 7-point Likert scale, with the anchors being 1 for strongly disagree, 2 for disagree, 3 for slightly disagree, 4 for neither agree nor disagree, 5 for slightly agree, 6 for agree, and 7 for strongly agree.
Meanwhile, two sets of questions were prepared to serve as pre-test and post-test questionnaires. Each test consisted of nine open-ended questions designed to assess participants’ knowledge of web technologies, including HTML, CSS, and the PHP programming language, before and after the main activity. In both the pre-test and the post-test questionnaire, all the questions assessed fundamental HTML proficiency, with six of them further evaluating CSS expertise and the remaining three assessing participants’ PHP knowledge.
In relation to the questions utilized in the main activity, three types of questions were created: multiple choice questions, code block ordering questions, and code result writing questions. The generated collection comprises a total of 124 questions, covering both theoretical and code-based content. The distribution is as follows: 49 HTML questions (34 multiple choice, 12 putting code blocks in the right order, and 3 writing the code’s result), 34 CSS questions (24 multiple choice and 10 putting code blocks in the right order), and 41 PHP questions (15 multiple choice, 14 putting code blocks in the right order, and 12 writing the code’s result).
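To make the setup concrete, the sketch below shows one hypothetical way (not the authors' actual implementation) to represent this shared question bank and to draw a random 15-question attempt, mirroring the selection behavior described for both the game and the Moodle quiz. All names are illustrative, and the one-kind-per-topic assignment is a simplification of the mixed distribution reported above.

```python
# Hypothetical sketch of a shared question bank and a random 15-question draw.
import random
from dataclasses import dataclass

@dataclass
class Question:
    topic: str   # "HTML", "CSS", or "PHP"
    kind: str    # "multiple_choice", "order_blocks", or "write_result"
    prompt: str

# Illustrative pool matching the reported topic counts (49 HTML, 34 CSS, 41 PHP).
pool = (
    [Question("HTML", "multiple_choice", f"HTML q{i}") for i in range(49)]
    + [Question("CSS", "order_blocks", f"CSS q{i}") for i in range(34)]
    + [Question("PHP", "write_result", f"PHP q{i}") for i in range(41)]
)

def draw_attempt(bank: list[Question], n: int = 15) -> list[Question]:
    """Randomly select n distinct questions, as in each quiz attempt."""
    return random.sample(bank, n)

print(len(pool))               # 124 questions in total
print(draw_attempt(pool)[:3])  # first three questions of one attempt
```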
The course professor designed both the pre-test and post-test questionnaires, as well as the set of questions utilized in the serious game and the online quiz. The instructor, an expert in Virtual Learning Environments (VLEs) as well as internet technologies, verified the validity of the questions.

2.5. Procedure

The research goals were achieved using a three-phase experiment, which included the Pre-test Phase, the Main Activity Phase, and the Post-test Phase. The participants were instructed to utilize different educational technology tools throughout the Main Activity Phase according to the group they were allocated to.
The participants were informed that the experiment aimed to assess the system that they utilized. They were also informed that the experiment would include some knowledge tests for statistical purposes, ensuring that they would not feel pressured to perform well. Furthermore, it was explicitly stated that their participation or performance in the activity would have no impact on their overall mark in the course. After agreeing to participate voluntarily, the participants also retained the right to end their involvement in the study at any moment.
The participants were informed about each phase only after the previous one had been completed. The specifics of each phase were provided during a lecture and were also posted as announcements on the Moodle page of their shared course. The entire experiment lasted a period of three weeks.

2.5.1. The Pre-Test Phase

During the Pre-test Phase, the course students who volunteered to participate in the experiment were handed a test, the pre-test questionnaire. The test aimed to assess their prior knowledge of web technologies such as HTML, PHP, and CSS, which students were expected to develop throughout the Main Activity Phase. The test was presented in class and uploaded as a Moodle quiz. It was explicitly stated that their performance would have no impact on their final mark for the course. The participants were given a two-hour timeframe to complete the test, although less time was generally required. Upon finishing the quiz, participants did not receive any feedback or a score.
While taking the test, the students were randomly divided into two groups: one served as the control group, while the other served as the experimental group. Upon completing the test, they were informed which group they belonged to.
Afterwards, the software used in the Main Activity Phase was presented to each group. In particular, the researchers completed a quiz from the main activity and showed it to the control group. As the participants were already familiar with Moodle, no further explanation was needed.
Meanwhile, for the experimental group, the researchers played the serious game “Friend Me” for a period of time and demonstrated its many features. Participants were then instructed to engage with the online version of the game during class, and the researchers answered any questions that arose. The objective of this brief session was to familiarize the participants with the game prior to its use in the main activity.

2.5.2. The Main Activity Phase

The Main Activity Phase lasted for three weeks. The participants were informed that they could start playing the game or engaging with the online quiz, according to their group. Each group had to fulfil a set of criteria to ensure that the participants had adequately used the software.
The individuals assigned to the control group had to answer an online quiz. The online quiz displayed a set of questions in a random sequence from the same pool of questions utilized in the serious game. The quiz had 15 questions, selected at random, and participants received immediate feedback on their responses. The participants were asked to undertake the quiz at least four times within the designated time frame. The quiz featured a strict time constraint of 15 min, allowing just one minute for each question. However, there were no restrictions on the number of attempts permitted for taking the quiz.
The experimental group was encouraged to play the game for as long as they wanted. However, to guarantee that they had answered a sufficient number of questions and had actively participated in the game, they were directed to refer to the Total Coverage bar, depicted in Figure 5, and advised to aim for at least twenty correctly answered questions. The function of this bar is to display the player’s activity by showing the number of questions they have answered correctly relative to the total number of questions available in the database. Additionally, they were recommended to interact with a minimum of two non-player characters (NPCs). Each question had a duration of one minute.
For the Main Activity Phase, the participants used the software from their own personal computers, whenever they wanted and from their own space. The researchers were constantly available via email to provide any clarifications regarding the aforementioned criteria.

2.5.3. The Post-Test Phase

Following the conclusion of the three-week timeframe, the participants were given access to a final quiz. During class they were asked to complete the final assessment, known as the post-test questionnaire, which assessed their understanding of web technologies after successfully completing the main activity.
Upon the completion of the assessment by all individuals, they were subsequently given the IMI questionnaire. The survey was conducted over the internet. The participants were notified that the purpose of this questionnaire was to enhance our comprehension of their experience and gather feedback for possible improvements to the overall procedure’s structure.

3. Results

3.1. Data Analysis and Results

The statistical analysis was performed using the SPSS 29 statistical package.

3.1.1. Analysis and Results of the First Research Goal (RG1)

Our main hypothesis for the first research goal was that serious game users would have a better experience than those using the online quiz. Thus, our null hypothesis was that there would be no difference in the scores of any dimension of the IMI questionnaire between the serious game and online quiz users.
Several studies have relied on the IMI questionnaire to assess the intrinsic motivation and overall experience of users when utilizing digital learning systems, including serious games [42,43,44,45] and learning management systems for e-learning [46,47]. In [48] it was used as a way to gather data regarding the intrinsic motivation of users using a course on an e-learning platform and an equivalent mobile serious game, with the objective of determining whether the serious game significantly increased users’ intrinsic motivation.
Prior to any statistical analysis, a reliability test was performed for each of the six IMI factors used, employing Cronbach’s alpha. The Interest/Enjoyment factor score was 0.90, the Perceived Competence score was 0.87, the Effort/Importance score was 0.86, the Pressure/Tension score was 0.83, the Perceived Choice score was 0.85, and the Value/Usefulness score was 0.91. Therefore, every factor exhibited internal consistency (measured by Cronbach’s alpha) above 0.70; alpha coefficients of at least 0.70 are considered acceptable [49].
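As an illustration of this reliability check, the following minimal sketch computes Cronbach's alpha for one factor from a (participants × items) response matrix. The random 7-point responses are placeholders, not the study's data, and the actual analysis was run in SPSS.

```python
# Minimal Cronbach's alpha computation on placeholder Likert data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows = participants, columns = items of one factor."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder 7-point Likert responses: 34 participants, 7 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(34, 7))
print(f"alpha = {cronbach_alpha(responses):.2f}")   # acceptable if >= 0.70 [49]
```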
In order to investigate the first research goal (RG1), we conducted a comparison between the outcomes of the IMI questionnaire from the experimental group and the control group. The scoring instructions of the questionnaire state that some item replies should be reversed by subtracting the item response from 8 before then using the resultant number as the item score. Subsequently, a score may be computed for every factor by taking the average of all the items associated with that factor. Consequently, we obtained a score for each participant for every factor, which we utilized to compare the two groups.
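A minimal sketch of this scoring rule follows. The item indices marked as reverse-coded are illustrative only, since which items are reverse-scored depends on the IMI form used.

```python
# Sketch of IMI scoring: reverse-code flagged items as (8 - response) on the
# 7-point scale, then take the mean of a factor's items per participant.
import numpy as np

def imi_factor_scores(responses: np.ndarray, reverse_coded: list[int]) -> np.ndarray:
    """responses: (participants x items) matrix for one IMI factor, rated 1-7;
    returns one factor score (the item mean) per participant."""
    scored = responses.astype(float)                      # work on a copy
    scored[:, reverse_coded] = 8 - scored[:, reverse_coded]  # reverse-code items
    return scored.mean(axis=1)

# Placeholder factor with 5 items; items 1 and 3 assumed reverse-coded.
rng = np.random.default_rng(1)
data = rng.integers(1, 8, size=(34, 5))
print(imi_factor_scores(data, reverse_coded=[1, 3])[:5])
```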
Subsequently, we performed an independent samples t-test comparing the two groups. The independent variable was the group variable, with a value of 0 representing the experimental group and 1 representing the control group. The dependent variables were the participants’ factor scores on the IMI questionnaire.
Given that the sample size was less than 50 (N ≤ 50), we employed the Shapiro–Wilk test to assess whether the dependent variables of both the control and experimental groups conformed to a normal distribution. The p-value was more than 0.05 for the data sets for each factor in both groups, as seen in Table 1.
As a result, we could proceed with the independent samples t-test. The obtained data were statistically insignificant, with a p-value greater than 0.05 at the 95% confidence level. Table 2 presents Levene’s test for the homogeneity of variance assumption, t-values, degrees of freedom, two-tailed significance p-values, and the effect size (Cohen’s d) for each factor of the IMI questionnaire in both the experimental and control groups.
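The sketch below mirrors this analysis pipeline (Shapiro–Wilk per group, Levene's test, then an independent samples t-test with Cohen's d) using SciPy in place of the SPSS package the study used. The normally distributed sample scores are placeholders drawn with the study's group sizes (16 and 18), and the simple pooled-SD form of Cohen's d is an assumption that suits near-equal groups.

```python
# RG1-style comparison of two groups' factor scores with SciPy.
import numpy as np
from scipy import stats

def compare_groups(experimental: np.ndarray, control: np.ndarray) -> None:
    # Shapiro-Wilk normality check per group (suited to samples with N <= 50).
    for name, group in (("experimental", experimental), ("control", control)):
        w, p = stats.shapiro(group)
        print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")
    # Levene's test for the homogeneity of variance assumption.
    _, p_levene = stats.levene(experimental, control)
    # Independent samples t-test; Welch's variant if variances are unequal.
    t, p = stats.ttest_ind(experimental, control, equal_var=p_levene > 0.05)
    # Cohen's d with a simple pooled SD (adequate for near-equal group sizes).
    d = (experimental.mean() - control.mean()) / np.sqrt(
        (experimental.var(ddof=1) + control.var(ddof=1)) / 2)
    print(f"t = {t:.3f}, p = {p:.3f}, Cohen's d = {d:.3f}")

rng = np.random.default_rng(2)
compare_groups(rng.normal(5.2, 1.1, 16), rng.normal(4.6, 1.2, 18))
```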
To conclude, an independent samples t-test was conducted to compare the scores in every dimension of the IMI questionnaire for serious game and online quiz users. The analysis revealed no significant difference between the scores of serious game users in the Interest/Enjoyment (M = 5.16, SD = 1.05), Perceived Competence (M = 4.73, SD = 1.06), Effort/Importance (M = 3.96, SD = 1.13), Pressure/Tension (M = 2.85, SD = 1.29), Perceived Choice (M = 5.25, SD = 0.92), and Value/Usefulness (M = 5.43, SD = 1.27) dimensions and those of online quiz users in the Interest/Enjoyment (M = 4.59, SD = 1.17), Perceived Competence (M = 4.69, SD = 0.99), Effort/Importance (M = 4.03, SD = 1.21), Pressure/Tension (M = 2.88, SD = 1.35), Perceived Choice (M = 5.11, SD = 1.30), and Value/Usefulness (M = 5.35, SD = 0.90) dimensions.
Nevertheless, regarding the Interest/Enjoyment factor, although no significant difference was observed, the p-value was comparatively low (p = 0.14) and the effect size was 0.510, which can be described as a medium effect size [50]. The descriptive statistics, alongside the medium effect size, show that the serious game received a higher score (M = 5.17, SD = 1.06) than the online quizzes (M = 4.59, SD = 1.17), even though statistical significance was not reached. A boxplot of these variables is given in Figure 6.
Consequently, our primary hypothesis was rejected, indicating that the users of the serious game did not have a superior experience during the main activity compared to the group who used the online quiz.

3.1.2. Analysis and Results of the Second Research Goal (RG2)

The main hypothesis of our second research goal was that serious game users would exhibit higher levels of engagement than online quiz users. Thus, our null hypothesis was that there would be no difference in the number of answered questions between the serious game and online quiz users. The number of answered questions was utilized as the measure of engagement, serving as a metric shared by both the serious game and the online quiz. Previous studies have utilized the number of quiz attempts as an indicator of engagement with a learning module [51,52]. However, in our experiment this measure is not directly applicable, because the questions in the serious game were presented individually. Therefore, we converted quiz attempts into the number of questions answered, since each additional quiz attempt yields additional answered questions.
The data were obtained from the log data of the serious game and the online quiz results.
A Shapiro–Wilk test indicated that both the control and experimental variables deviated from a normal distribution, with significances below 0.001. Therefore, we conducted the independent samples Mann–Whitney U test, which yielded a significance level of p < 0.001. The mean rank for the experimental group is 26, whereas the mean rank for the control group is 9.94. A boxplot of the results is depicted in Figure 7. Individuals whose values exceeded or fell below the maximum and minimum values of each boxplot are shown as asterisks, each accompanied by the ascending identifying number of that participant in the experiment.
A Mann–Whitney U test was conducted to compare the number of questions answered by serious game and online quiz users. The analysis revealed a significant difference in the values for serious game users (Mdn = 136, IQR = 61) and online quiz users (Mdn = 60, IQR = 4): U = 8, Z = −4.865, p < 0.001. The results indicate that serious game users answered significantly more questions than the online quiz users.
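A minimal sketch of this nonparametric comparison follows. The per-participant question counts are placeholders loosely echoing the reported medians (136 vs. 60), not the study's actual log data.

```python
# Mann-Whitney U test on placeholder questions-answered counts.
from scipy import stats

game_counts = [120, 136, 98, 160, 142, 131, 175, 110, 155, 128]  # experimental
quiz_counts = [60, 58, 60, 64, 59, 61, 60, 62, 57, 60]           # control

u, p = stats.mannwhitneyu(game_counts, quiz_counts, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")  # a small p indicates the groups differ
```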

3.1.3. Analysis and Results of the Third Research Goal (RG3)

Our third research goal had as its main hypothesis that serious game users would have a higher knowledge gain than online quiz users. Thus, our null hypothesis was that there would be no difference in the post-test scores between the users of serious games and online quizzes. Pre-test and post-test scores have traditionally been utilized in educational research to examine the impact of educational innovations [53]. In our study, we use post-test scores to determine whether serious game players acquired more knowledge than online quiz users.
However, first, we should establish whether there was any significant difference in the prior knowledge of web technologies between the two groups. To do so, we conducted the independent samples Mann–Whitney U test between the two groups. The independent variable was the group variable (0 for the experimental and 1 for the control group) and the dependent variable was the score on the pre-test questionnaires.
As our sample size was smaller than 50 (N ≤ 50), the Shapiro–Wilk test was used to examine normality; both pre-test score variables (control and experimental) deviated from a normal distribution, with significances of p = 0.02 < 0.05 and p = 0.01 < 0.05, respectively. Therefore, we conducted the independent samples Mann–Whitney U test, which yielded a significance level of p = 0.93 > 0.05, leading us to retain the null hypothesis that the distribution of the pre-test scores was the same across both groups. The plot of these two variables is shown in Figure 8. This allowed us to investigate further the performance of the users in the post-test questionnaire.
As the sample size was smaller than 50 (N ≤ 50), the Shapiro–Wilk test was used to examine normality; both post-test score variables (control and experimental) deviated from a normal distribution, with significances of p < 0.01 and p = 0.01 < 0.05, respectively. Therefore, we conducted the independent samples Mann–Whitney U test, which yielded a significance level of p = 0.99 > 0.05, leading us to retain the null hypothesis that the distribution of the post-test scores is the same across both groups. The plot of these two variables is shown in Figure 9.
An independent samples Mann–Whitney test was conducted to compare the scores on the final test between the serious game and online quiz users. Our analysis revealed no significant difference in the values for serious game users (Mdn = 7, IQR = 2.5) and online quiz users (Mdn = 7.75, IQR = 1.63): U = 143.5, Z = −0.17, p = 0.986. The results indicate that the serious game users did not perform better in their post-test questionnaire compared to the online quiz users.

4. Discussion and Conclusions

The results above are summarized and discussed in detail for each research goal in the following paragraphs.

4.1. First Research Goal (RG1)

Regarding the first research goal, our main hypothesis was rejected, suggesting that the participants who used the serious game did not have a superior experience in the primary activity compared to the group that utilized the online quiz. Specifically, the serious game users did not report a significantly greater sense of competence than the online quiz users; they reported putting in equal effort and feeling similar levels of pressure to the participants in the control group. Additionally, the level of choice they perceived was similar to that of the online quiz users. Finally, they found both tools equally valuable.
The present results indicate that online quizzes are both motivating and able to provide an experience comparable to serious games. This aligns with findings that Moodle, and learning platforms in general, are effective tools for teachers to motivate students [54], particularly through the use of e-assessments [55].
Furthermore, the comparable motivation levels seen between participants taking online quizzes and those playing the serious game can be attributed to the competitive goal structure of the game, which has been found to elicit weaker motivation than a cooperative goal structure [56]. In our serious game, a scoreboard displaying the top three players is visible to all participants. The user’s objective is either to achieve a rating among the top three players or to build relationships with non-player characters (NPCs) by answering questions correctly. This might have made the game overly competitive: since the player’s primary objective is simply to answer questions correctly, its entertainment value may have been reduced to that of a simple online quiz.
In addition, the game included rewards such as points and badges, which usually lead to enjoyment according to [57], as was also shown in [58]. However, respondents also had the option to answer questions using a computer station in the game. By bypassing the socializing and tasks, players could answer questions continually, which might have prevented them from obtaining items and exploring the game area. This may also have diminished the level of enjoyment for certain individuals.

4.2. Second Research Goal (RG2)

Regarding our second research goal (RG2), our main hypothesis was confirmed, indicating that the experimental group had a considerably greater number of answered questions compared to the control group. Consequently, the serious game effectively enhanced the participants’ engagement with the activity, resulting in a significant rise in the number of questions they answered.
A recent study demonstrated that serious games are an effective method for enhancing student engagement, among other benefits [59]. Another investigation [60] utilized serious games to improve performance during the COVID-19 period and found that the proposed pedagogical model received high praise from students in terms of engagement, increased motivation, and a meaningful learning experience. These findings support our own observation that students answered a greater number of questions through the serious game than through the online quiz during the Main Activity Phase.
The finding that the players of the serious game exhibited higher levels of engagement is important, as engagement in virtual learning typically leads to a pleasant learning experience and a sense of fulfilment [61]. Moreover, the findings of [62] indicate that different kinds of engagement (behavioral, cognitive, and emotional) exert a substantial influence on the learning outcomes of online learning.

4.3. Third Research Goal (RG3)

Concerning our third research goal (RG3), our primary hypothesis that serious game users would perform better in the post-test questionnaire than online quiz users was rejected, as the findings were not significant. This can also be seen in the descriptive statistics: the experimental group had a mean of 6.75 (SD = 2.41), while the control group had a mean of 7 (SD = 2.21).
This aligns with the findings of [25], which used three empirical studies to examine the efficacy of serious games compared to standard classroom instruction in terms of learning aspects and results. Although the findings indicated that students felt more competent when utilizing serious games as a learning tool, and they reported high engagement and motivation, this improvement was not reflected in their performance on knowledge tests. The study argued that the assessment methods used were not aligned with the students’ learning experience in the serious game, which was one of the determining factors behind these results.
In addition, further research [63] aimed to establish the correlation between the enjoyment of games, self-reported cognitive and motivational learning improvements, and test outcomes through an empirical investigation. The study determined that the extent to which students found studying enjoyable had a significant influence on their motivation and level of involvement, but that it did not have any noticeable impact on their measured learning outcomes.
The absence of substantial findings with regard to this research question might potentially be attributed to time constraints. The entire experiment lasted three weeks, which may have been insufficient to adequately demonstrate the potential influence of the game on academic achievement. In [64], a direct correlation was found at school sites between the number of instructional minutes in an academic year and standardized test results. Had the participants utilized the serious game for a longer duration, they might have exhibited improved performance, given that active engagement typically results in enhanced learning outcomes, as discussed under our previous research goal.
At the same time, according to the meta-analysis in [65], the association between engagement and learning is difficult to verify. In [66], even though the users were more engaged than with traditional teaching, according to their feedback questionnaires, observation found no significant difference in the knowledge gains between the two groups. Although users may have been initially attracted to the idea of learning through a game and may have enjoyed it more than traditional methods, minimal knowledge gains were found because the users were distracted by the game.

4.4. Conclusions

The present study concludes that, throughout the duration of the experiment, the level of engagement exhibited by the participants utilizing the serious game was significantly higher than that of the online quiz users, as evidenced by their larger number of answered questions (RG2). This might be attributed to their heightened interest in the game, which led them to prefer continuing to play, and thereby answering questions, over accessing an online quiz.
Regarding our first research goal, there were no significant findings in terms of the subjective experiences of the two groups, control and experimental. However, the fact that the Interest/Enjoyment dimension of the questionnaire had a comparatively low p-value (p = 0.14) and a medium effect size suggests that the game may be more motivating for users and that it would be advantageous to replicate the experiment with a larger sample size.
Furthermore, the absence of any noteworthy discoveries regarding the serious game in the questionnaire, even in areas where serious games are known to have an impact, may be influenced by the number of questions that participants were required to answer, as previously described. This may have significantly impacted dimensions such as the perceived pressure experienced during the utilization of the serious game in the Main Activity Phase, as well as the dimension of Perceived Choice. Although the users had the option to withdraw from the experiment without facing any consequences, individuals may have felt compelled to complete the experiment, turning it into another academic obligation.
Another element that might have affected these results was the particular serious game used. Even though the game’s questions were created by the course professor, an expert in his field, and although the game had been evaluated twice for player experience and usability, by undergraduate students and by experts, some elements considered essential for enhancing multimedia learning are missing [67]. The fact that the NPCs did not use a voice, either human or computer-generated, when the user interacted with them might have decreased the emotional engagement of the game, making it less appealing to users.
Moreover, the Pre-training principle [67] suggests that a brief introduction to the key concepts might benefit players before they are asked to answer questions. As the game was tailored to the needs of the course and was utilized as supplementary material, similar to online quizzes with feedback, no learning occurred through direct teaching, only through self-assessment of one’s knowledge and through feedback, as would happen when answering online quizzes. This might have negatively affected several dimensions of the IMI questionnaire, such as the effort dimension, as both the online quiz and the serious game were equally hard for participants. However, since the serious game was customized to the course and was meant to be compared with regular quizzes on a learning platform, including such introductory tips would have made the two conditions incomparable for determining the best way to provide practice quizzes to students.
Finally, regarding the third research goal, the grades from the post-test questionnaire did not show any significant difference. The outcome may have been influenced by limitations such as the duration of the experiment and the type of post-test questions used.

5. Limitations and Future Steps

It is important to acknowledge the limitations of the current study when evaluating its results. The control and experimental groups had limited sample sizes. Moreover, despite the small sample, no qualitative evaluation was conducted, leaving us without participant comments with which to explore the results of our research questions in greater depth. Furthermore, the duration of the experiment may not have been adequate to demonstrate significant improvements in performance on the final exam or notable learning outcomes (RG3). The requirement for a minimal level of engagement with the materials, imposed so that a participant could be considered to have effectively utilized them, might have negatively affected the participants’ experience (RG1). Since the participants were computer science students, they could have acquired knowledge about the web technologies covered in the experiment from external sources they found on their own, potentially affecting the results of the post-test questionnaire (RG3). Finally, the study employed a serious game of only one genre for the comparison with online quizzes, which restricts our capacity to make comprehensive generalizations about serious games.
Future research should include samples of participants without a computer science background and provide these tools, the quizzes and the serious game, alongside a set of relevant educational resources. Furthermore, we intend to extend the experimental period to an entire semester of 18 weeks. It is anticipated that the serious game players would then achieve improved post-test scores due to their increased engagement, which is likely to result in enhanced learning outcomes. Finally, a more extensive pool of participants will be recruited, perhaps yielding significant findings for the Interest/Enjoyment factor of the IMI questionnaire.
In future work, it will be essential to complement the quantitative evaluation with a qualitative one involving interviews or focus groups. Should outcomes without significant findings arise again, as for the first and third research goals, this would help researchers address gaps in the quantitative data and deepen their understanding of the results. Utilizing a variety of serious games with distinct game mechanics, aesthetic and narrative designs, and musical scores aligned with the learning objective of our present project would also be beneficial, as the results would then not depend on the particular game chosen by the researchers.

Author Contributions

Conceptualization, L.K. and G.S.; methodology, L.K. and G.S.; software, L.K. and G.S.; validation, L.K. and G.S.; formal analysis, L.K.; investigation, L.K.; resources, L.K. and G.S.; data curation, L.K. and G.S.; writing—original draft preparation, L.K. and G.S.; writing—review and editing, T.T.; visualization, L.K.; supervision, T.T.; project administration, T.T.; funding acquisition, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to restrictions enforced by the process of the Aristotle University ethics and legislation committee.

Acknowledgments

We express our gratitude to H. Apostolidis for his invaluable assistance and to the participants in the experiment who generously dedicated their time to supporting our research efforts.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Säljö, R. Digital tools and challenges to institutional traditions of learning: Technologies, social memory and the performative nature of learning. J. Comput. Assist. Learn. 2010, 26, 53–64. [Google Scholar] [CrossRef]
  2. van Rooij, A.J.; Schoenmakers, T.M.; van de Eijnden, R.J.; van de Mheen, D. Compulsive Internet Use: The Role of Online Gaming and Other Internet Applications. J. Adolesc. Health 2010, 47, 51–57. [Google Scholar] [CrossRef] [PubMed]
  3. Van den Eijnden, R.; Koning, I.; Doornwaard, S.; Van Gurp, F.; Ter Bogt, T. The impact of heavy and disordered use of games and social media on adolescents’ psychological, social, and school functioning. J. Behav. Addict. 2018, 7, 697–706. [Google Scholar] [CrossRef] [PubMed]
  4. Dolan, R.; Conduit, J.; Frethey-Bentham, C.; Fahy, J.; Goodman, S. Social media engagement behavior: A framework for engaging customers through social media content. Eur. J. Mark. 2019, 53, 2213–2243. [Google Scholar] [CrossRef]
  5. Neumann, D.; Huddleston, P.T.; Behe, B.K. Fear of Missing Out as motivation to process information: How differences in Instagram use affect attitude formation online. New Media Soc. 2023, 25, 220–242. [Google Scholar] [CrossRef]
  6. Kim, K.-S.; Sin, S.-C.J.; Yoo-Lee, E.Y. Undergraduates’ Use of Social Media as Information Sources. Coll. Res. Libr. 2014, 75, 442–457. [Google Scholar] [CrossRef]
  7. Silaban, P.H.; Silalahi, A.D.K.; Octoyuda, E. Understanding consumers’ addiction to online mobile games and in apps purchase intention: Players stickiness as the mediation. J. Manaj. dan Pemasar. Jasa 2021, 14, 165–178. [Google Scholar] [CrossRef]
  8. Xu, Z.; Turel, O.; Yuan, Y. Online game addiction among adolescents: Motivation and prevention factors. Eur. J. Inf. Syst. 2012, 21, 321–340. [Google Scholar] [CrossRef]
  9. Elen, J.; Clarebout, G. Learning Technology. In Encyclopedia of the Sciences of Learning; Seel, N.M., Ed.; Springer: New York, NY, USA, 2012; pp. 1980–1981. [Google Scholar] [CrossRef]
  10. Haleem, A.; Javaid, M.; Qadri, M.A.; Suman, R. Understanding the role of digital technologies in education: A review. Sustain. Oper. Comput. 2022, 3, 275–285. [Google Scholar] [CrossRef]
  11. Kitsantas, A.; Dabbagh, N. Learning to Learn with Integrative Learning Technologies (ILT): A Practical Guide for Academic Success; Information Age Publishing: Charlotte, NC, USA, 2010. [Google Scholar]
  12. Graesser, A.C. Evolution of Advanced Learning Technologies in the 21st Century. Theory Into Pract. 2013, 52 (Suppl. S1), 93–101. [Google Scholar] [CrossRef]
  13. Dabbagh, N.; Benson, A.D.; Denham, A.; Joseph, R.; Al-Freih, M.; Zgheib, G.; Fake, H.; Guo, Z. Evolution of Learning Technologies: Past, Present, and Future. In Learning Technologies and Globalization: Pedagogical Frameworks and Applications; Dabbagh, N., Benson, A.D., Denham, A., Joseph, R., Al-Freih, M., Zgheib, G., Fake, H., Guo, Z., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1–7. [Google Scholar] [CrossRef]
  14. Chan, T.-W.; Roschelle, J.; Hsi, S.; Kinshuk; Sharples, M.; Brown, T.; Patton, C.; Cherniavsky, J.; Pea, R.; Norris, C.; Soloway, E.; et al. One-to-one technology-enhanced learning: An opportunity for global research collaboration. Res. Pract. Technol. Enhanc. Learn. 2006, 1, 3–29. [Google Scholar] [CrossRef]
  15. Abt, C.C. Serious Games; University Press of America: Lanham, MA, USA, 1987. [Google Scholar]
  16. Marsh, T. Serious games continuum: Between games for purpose and experiential environments for purpose. Entertain. Comput. 2011, 2, 61–68. [Google Scholar] [CrossRef]
  17. Laamarti, F.; Eid, M.; El Saddik, A. An Overview of Serious Games. Int. J. Comput. Games Technol. 2014, 2014, 1–15. [Google Scholar] [CrossRef]
  18. Djaouti, D.; Alvarez, J.; Jessel, J.-P. Classifying Serious Games: The G/P/S Model; IGI Global: Hershey, PA, USA, 2011; p. 118. [Google Scholar] [CrossRef]
  19. Zhonggen, Y. A Meta-Analysis of Use of Serious Games in Education over a Decade. Int. J. Comput. Games Technol. 2019, 2019, e4797032. [Google Scholar] [CrossRef]
  20. Granic, I.; Lobel, A.; Engels, R.C.M.E. The benefits of playing video games. Am. Psychol. 2014, 69, 66–78. [Google Scholar] [CrossRef]
  21. Giglioli, I.A.C.; Ripoll, C.d.J.; Parra, E.; Raya, M.A. EXPANSE: A novel narrative serious game for the behavioral assessment of cognitive abilities. PLoS ONE 2018, 13, e0206925. [Google Scholar] [CrossRef] [PubMed]
  22. Wiemeyer, J.; Kliem, A. Serious games in prevention and rehabilitation—A new panacea for elderly people? Eur. Rev. Aging Phys. Act. 2011, 9, 41–50. [Google Scholar] [CrossRef]
  23. Marengo, A.; Pagano, A.; Soomro, K.A. Serious games to assess university students’ soft skills: Investigating the effectiveness of a gamified assessment prototype. Interact. Learn. Environ. 2024, 1–17. [Google Scholar] [CrossRef]
  24. De Troyer, O. Towards effective serious games. In Proceedings of the 2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Athens, Greece, 6–8 September 2017; pp. 284–289. [Google Scholar] [CrossRef]
  25. Bakhuys Roozeboom, M.; Visschedijk, G.; Oprins, E. The effectiveness of three serious games measuring generic learning features. Br. J. Educ. Technol. 2017, 48, 83–100. [Google Scholar] [CrossRef]
  26. David, O.A.; Magurean, S.; Tomoiagă, C. Do Improvements in Therapeutic Game-Based Skills Transfer to Real Life Improvements in Children’s Emotion-Regulation Abilities and Mental Health? A Pilot Study That Offers Preliminary Validity of the REThink In-game Performance Scoring. Front. Psychiatry 2022, 13, 828481. [Google Scholar] [CrossRef]
  27. Nørlev, J.; Sondrup, K.; Derosche, C.; Hejlesen, O.; Hangaard, S. Game Mechanisms in Serious Games That Teach Children with Type 1 Diabetes How to Self-Manage: A Systematic Scoping Review. J. Diabetes Sci. Technol. 2022, 16, 1253–1269. [Google Scholar] [CrossRef]
  28. Lieberman, D.A. Video Games for Diabetes Self-Management: Examples and Design Strategies. J. Diabetes Sci. Technol. 2012, 6, 802–806. [Google Scholar] [CrossRef]
  29. Radhakrishnan, K.; Toprac, P.; O’Hair, M.; Bias, R.; Kim, M.T.; Bradley, P.; Mackert, M. Interactive Digital e-Health Game for Heart Failure Self-Management: A Feasibility Study. Games Health J. 2016, 5, 366–374. [Google Scholar] [CrossRef]
  30. Sarasmita, M.A.; Larasanty, L.P.F.; Kuo, L.N.; Cheng, K.J.; Chen, H.Y. A Computer-Based Interactive Narrative and a Serious Game for Children With Asthma: Development and Content Validity Analysis. J. Med. Internet Res. 2021, 23, e28796. [Google Scholar] [CrossRef] [PubMed]
  31. Kapp, F.; Spangenberger, P.; Kruse, L.; Narciss, S. Investigating changes in self-evaluation of technical competences in the serious game Serena Supergreen: Findings, challenges and lessons learned. Metacognition Learn. 2019, 14, 387–411. [Google Scholar] [CrossRef]
  32. Passos, E.B.; Medeiros, D.B.; Neto, P.A.S.; Clua, E.W.G. Turning Real-World Software Development into a Game. In Proceedings of the 2011 Brazilian Symposium on Games and Digital Entertainment, Salvador, Brazil, 7–9 November 2011; pp. 260–269. [Google Scholar] [CrossRef]
  33. Caulfield, C.; Xia, J.; Veal, D.; Maj, S. A Systematic Survey of Games Used for Software Engineering Education. Mod. Appl. Sci. 2011, 5, 6. [Google Scholar] [CrossRef]
  34. Yassine, A.; Chenouni, D.; Berrada, M.; Tahiri, A. A Serious Game for Learning C Programming Language Concepts Using Solo Taxonomy. Int. J. Emerg. Technol. Learn. (IJET) 2017, 12, 3. [Google Scholar] [CrossRef]
  35. Sideris, G.; Xinogalos, S. PY-RATE ADVENTURES: A 2D Platform Serious Game for Learning the Basic Concepts of Programming With Python. Simul. Gaming 2019, 50, 754–770. [Google Scholar] [CrossRef]
  36. Xinogalos, S. Programming Serious Games as a Master Course: Feasible or Not? Simul. Gaming 2017, 49, 8–26. [Google Scholar] [CrossRef]
  37. Cadavid, J. Digital Competition Game to Improve Programming Skills. Educ. Technol. Soc. 2012, 15, 288–297. [Google Scholar]
  38. Skraparli, G.; Karavidas, L.; Tsiatsos, T. Dynamic Serious Game for Developing Programming Skills. In New Realities, Mobile Systems and Applications; Auer, M.E., Tsiatsos, T., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 580–592. [Google Scholar] [CrossRef]
  39. Skraparli, G.; Akritidis, M.; Karavidas, L.; Tsiatsos, T. Improvement and Evaluation of Serious Game “Friend Me”. In Learning in the Age of Digital and Green Transition; Auer, M.E., Pachatz, W., Rüütmann, T., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 971–981. [Google Scholar] [CrossRef]
  40. Ryan, R.M.; Mims, V.; Koestner, R. Relation of reward contingency and interpersonal context to intrinsic motivation: A review and test using cognitive evaluation theory. J. Personal. Soc. Psychol. 1983, 45, 736. [Google Scholar] [CrossRef]
  41. Ryan, R.M. Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. J. Personal. Soc. Psychol. 1982, 43, 450. [Google Scholar] [CrossRef]
  42. Alexandrovsky, D.; Friehs, M.A.; Grittner, J.; Putze, S.; Birk, M.V.; Malaka, R.; Mandryk, R.L. Serious snacking: A survival analysis of how snacking mechanics affect attrition in a mobile serious game. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–18. [Google Scholar]
  43. Bessa, D.; Rodrigues, N.F.; Oliveira, E.; Kolbenschag, J.; Prahm, C. Designing a serious game for myoelectric prosthesis control. In Proceedings of the 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), Vancouver, BC, Canada, 12–14 August 2020; pp. 1–5. [Google Scholar]
  44. Guillén-Climent, S.; Garzo, A.; Muñoz-Alcaraz, M.N.; Casado-Adam, P.; Arcas-Ruiz-Ruano, J.; Mejías-Ruiz, M.; Mayordomo-Riera, F.J. A usability study in patients with stroke using MERLIN, a robotic system based on serious games for upper limb rehabilitation in the home setting. J. Neuroeng. Rehabil. 2021, 18, 41. [Google Scholar] [CrossRef] [PubMed]
  45. Hashim, N.A.; Abd Razak, N.A.; Gholizadeh, H.; Abu Osman, N.A. Video game–based rehabilitation approach for individuals who have undergone upper limb amputation: Case-control study. JMIR Serious Games 2021, 9, e17017. [Google Scholar] [CrossRef] [PubMed]
  46. Jurgelaitis, M.; Čeponienė, L.; Čeponis, J.; Drungilas, V. Implementing gamification in a university-level UML modeling course: A case study. Comput. Appl. Eng. Educ. 2019, 27, 332–343. [Google Scholar] [CrossRef]
  47. Facey-Shaw, L.; Specht, M.; van Rosmalen, P.; Bartley-Bryan, J. Do badges affect intrinsic motivation in introductory programming students? Simul. Gaming 2020, 51, 33–54. [Google Scholar] [CrossRef]
  48. Leenaraj, B.; Arayaphan, W.; Intawong, K.; Puritat, K. A gamified mobile application for first-year student orientation to promote library services. J. Librariansh. Inf. Sci. 2023, 55, 137–150. [Google Scholar] [CrossRef]
  49. Nunnally, J.C. Psychometric Theory, 2nd ed.; McGraw-Hill: New York, NY, USA, 1978. [Google Scholar]
  50. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  51. Doherty, C. An investigation into the relationship between multimedia lecture design and learners’ engagement behaviours using web log analysis. PLoS ONE 2022, 17, e0273007. [Google Scholar] [CrossRef]
  52. O’Dowd, I. Using learning analytics to improve online formative quiz engagement. Ir. J. Technol. Enhanc. Learn. 2018, 3, 30–43. [Google Scholar] [CrossRef]
  53. Dugard, P.; Todman, J. Analysis of pre-test-post-test control group designs in educational research. Educ. Psychol. 1995, 15, 181–198. [Google Scholar] [CrossRef]
  54. Aikina, T.; Bolsunovskaya, L. Moodle-based learning: Motivating and demotivating factors. Int. J. Emerg. Technol. Learn. (IJET) 2020, 15, 239–248. [Google Scholar] [CrossRef]
  55. Elshareif, E.; Mohamed, E.A. The Effects of E-Learning on Students’ Motivation to Learn in Higher Education. Online Learn. 2021, 25, 128–143. [Google Scholar] [CrossRef]
  56. Peng, W.; Hsieh, G. The influence of competition, cooperation, and player relationship in a motor performance centered computer game. Comput. Hum. Behav. 2012, 28, 2100–2106. [Google Scholar] [CrossRef]
  57. Plass, J.L.; Homer, B.D.; Kinzer, C.K. Foundations of game-based learning. Educ. Psychol. 2015, 50, 258–283. [Google Scholar] [CrossRef]
  58. King, D.L.; Delfabbro, P.H.; Griffiths, M.D. The role of structural characteristics in problematic video game play: An empirical study. Int. J. Ment. Health Addict. 2011, 9, 320–333. [Google Scholar] [CrossRef]
  59. Yu, Z.; Sukjairungwattana, P.; Xu, W. Effects of Serious Games on Student Engagement, Motivation, Learning Strategies, Cognition, and Enjoyment. Int. J. Adult Educ. Technol. (IJAET) 2022, 13, 1–15. [Google Scholar] [CrossRef]
  60. Arias-Calderón, M.; Castro, J.; Gayol, S. Serious games as a method for enhancing learning engagement: Student perception on online higher education during COVID-19. Front. Psychol. 2022, 13, 889975. [Google Scholar] [CrossRef] [PubMed]
  61. Farrell, O.; Brunton, J. A balancing act: A window into online student engagement experiences. Int. J. Educ. Technol. High. Educ. 2020, 17, 25. [Google Scholar] [CrossRef]
  62. Wang, C.; Mirzaei, T.; Xu, T.; Lin, H. How learner engagement impacts non-formal online learning outcomes through value co-creation: An empirical analysis. Int. J. Educ. Technol. High. Educ. 2022, 19, 1–26. [Google Scholar] [CrossRef]
  63. Iten, N.; Petko, D. Learning with serious games: Is fun playing the game a predictor of learning success? Br. J. Educ. Technol. 2016, 47, 151–163. [Google Scholar] [CrossRef]
  64. Jez, S.J.; Wassmer, R.W. The impact of learning time on academic achievement. Educ. Urban Soc. 2015, 47, 284–306. [Google Scholar] [CrossRef]
  65. Girard, C.; Ecalle, J.; Magnan, A. Serious games as new educational tools: How effective are they? A meta-analysis of recent studies. J. Comput. Assist. Learn. 2013, 29, 207–219. [Google Scholar] [CrossRef]
  66. Wrzesien, M.; Raya, M.A. Learning in serious virtual worlds: Evaluation of learning effectiveness and appeal to students in the E-Junior project. Comput. Educ. 2010, 55, 178–187. [Google Scholar] [CrossRef]
  67. Mayer, R.E. Multimedia learning. In Psychology of Learning and Motivation; Academic Press: Cambridge, MA, USA, 2002; Volume 41, pp. 85–139. [Google Scholar]
Figure 1. “Friend Me”: interactions during a mission.
Figure 2. “Friend Me”: multiple-choice type questions. (a) Demonstration of a question in the game; (b) visualization of a correct answer; (c) visualization of a wrong answer.
Figure 3. “Friend Me”: putting-code-blocks-in-the-right-order type of question. (a) Demonstration of a question in the game; (b) visualization of correct and incorrect placements; (c) revelation of the correct order.
Figure 4. “Friend Me”: questions that involve writing the code’s result. (a) Demonstration of a question in the game; (b) visualization of a correct answer; (c) visualization of a wrong answer.
Figure 5. Total Coverage bar of the serious game “Friend Me”.
Figure 6. Boxplot of the Interest/Enjoyment variables.
Figure 7. Boxplot of the total number of answered questions.
Figure 8. Independent Samples Mann–Whitney U test of the pre-test scores.
Figure 9. Independent Samples Mann–Whitney U test of the post-test scores.
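For readers who wish to reproduce the non-parametric comparison summarized in Figures 8 and 9, the following is a minimal sketch, assuming SciPy; the grade arrays are hypothetical placeholders, not the study data.

```python
# Minimal sketch of the Mann-Whitney U comparison in Figures 8 and 9.
# The grade arrays below are hypothetical placeholders, not the study data.
from scipy import stats

quiz_grades = [62, 70, 55, 81, 66]  # hypothetical grades, online quiz group
game_grades = [68, 74, 59, 77, 71]  # hypothetical grades, serious game group

u_statistic, p_value = stats.mannwhitneyu(game_grades, quiz_grades,
                                          alternative="two-sided")
print(f"U = {u_statistic}, p = {p_value:.3f}")
```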
Table 1. Shapiro–Wilk normality test.

Variable               Group   Sig.
Interest/Enjoyment       0     0.785
                         1     0.509
Perceived Competence     0     0.769
                         1     0.831
Effort/Importance        0     0.784
                         1     0.964
Pressure/Tension         0     0.670
                         1     0.520
Perceived Choice         0     0.964
                         1     0.487
Value/Usefulness         0     0.102
                         1     0.303
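The checks in Table 1 can be reproduced with standard statistical tooling; the following is a minimal sketch, assuming SciPy, with hypothetical placeholder scores rather than the study data.

```python
# Minimal sketch of the per-group Shapiro-Wilk normality checks in Table 1.
# The IMI factor scores below are hypothetical placeholders.
from scipy import stats

quiz_scores = [4.2, 3.8, 5.1, 4.6, 3.9]  # group 0 (online quiz), hypothetical
game_scores = [4.9, 5.3, 4.1, 4.7, 5.0]  # group 1 (serious game), hypothetical

for label, sample in (("group 0", quiz_scores), ("group 1", game_scores)):
    w_statistic, p_value = stats.shapiro(sample)
    # p > 0.05 means normality cannot be rejected, as Table 1 reports
    # for every IMI factor in both groups.
    print(f"{label}: W = {w_statistic:.3f}, p = {p_value:.3f}")
```

Since every significance value in Table 1 exceeds 0.05, normality cannot be rejected for any factor, which supports the use of the parametric tests reported in Table 2.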
Table 2. Independent samples test for factors of the IMI questionnaire.

Variable               F       Sig.    t       df   Two-Sided p   Effect Size (Cohen’s d)
Interest/Enjoyment     0.048   0.827   1.486   32   0.147          0.510
Perceived Competence   0.120   0.731   0.113   32   0.910          0.039
Effort/Importance      0.187   0.668   0.175   32   0.862         −0.060
Pressure/Tension       0.121   0.730   0.059   32   0.953         −0.020
Perceived Choice       1.627   0.211   0.359   32   0.722          0.123
Value/Usefulness       2.684   0.111   0.203   32   0.840          0.070
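The F and Sig. columns in Table 2 match the layout of standard independent-samples t-test output (e.g., SPSS), where they correspond to Levene’s test for equality of variances. The following is a minimal sketch, assuming SciPy and NumPy, of how such a comparison with a pooled-standard-deviation Cohen’s d can be computed; the data arrays are hypothetical placeholders, not the study data.

```python
# Minimal sketch of an independent-samples comparison like Table 2:
# Levene's variance-equality test, a two-sided t-test, and Cohen's d
# with a pooled standard deviation. Arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

quiz = np.array([4.2, 3.8, 5.1, 4.6, 3.9])  # hypothetical IMI scores, group 0
game = np.array([4.9, 5.3, 4.1, 4.7, 5.0])  # hypothetical IMI scores, group 1

f_stat, f_p = stats.levene(quiz, game)     # variance-equality check (F, Sig.)
t_stat, t_p = stats.ttest_ind(game, quiz)  # two-sided by default

n1, n2 = len(game), len(quiz)
pooled_sd = np.sqrt(((n1 - 1) * game.var(ddof=1) + (n2 - 1) * quiz.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (game.mean() - quiz.mean()) / pooled_sd  # pooled-SD effect size

print(f"Levene F = {f_stat:.3f} (p = {f_p:.3f})")
print(f"t({n1 + n2 - 2}) = {t_stat:.3f}, p = {t_p:.3f}, d = {cohens_d:.3f}")
```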
