Abstract

This study aims to adapt and validate an Arabic version of a students’ satisfaction scale that measures students’ satisfaction with the McGraw–Hill Education (MGHE) Connect platform in Saudi Arabia, providing Saudi and Arab academics with a valid instrument for further studies and interventions to improve students’ learning and learning environments. The study examined the items to establish content, construct, convergent, and discriminant validity, using two samples of Chemistry 101 students in two phases (N = 50 and N = 193). In the pilot phase, exploratory factor analysis (EFA) with maximum likelihood extraction and Promax rotation was used to explore the survey’s constructs; it supported a five-factor structure. In the main phase, three competing construct models were investigated using confirmatory factor analysis. The model that fitted the study data and satisfied the reliability and validity standards was a second-order model identifying two distinct primary constructs: satisfaction (N = 3, α = 0.912) and utility (N = 19, α = 0.965). The utility scale was composed of four subscales: understanding (N = 5, α = 0.913), studying (N = 3, α = 0.896), preparation (N = 4, α = 0.893), and usability (N = 7, α = 0.913). The results indicated that students’ overall satisfaction with MGHE Connect was significantly met (M = 3.52, SD = 0.176), and students were significantly satisfied with the MGHE Connect utility (M = 3.51, SD = 0.221). The highest level of satisfaction was with understanding (M = 3.60, SD = 0.170), and the lowest was with preparation for classes (M = 3.23, SD = 0.259). Students were equally satisfied with using MGHE Connect to understand the materials, to study and review for exams, and with its user-friendliness.

1. Introduction

Nowadays, blended learning (b-learning) has become the new traditional approach among higher education institutions [13]. B-learning can be seen as a combination of traditional teaching and the e-learning environment, based on the principle that face-to-face and online activities are optimally integrated into a single learning experience [47].

Moreover, educators’ calls, together with advancements in computer technology and the Internet, have led textbook publishers to incorporate more pedagogically oriented technological supplements [8]. Text technology supplements (TTSs) are specific technologies within the broader category of computer-assisted learning [8]. TTSs have become more prolific in higher education as complementary tools to assist student learning [9].

Many publishers and researchers have claimed that textbook supplement products improve learning efficiency, time management, in-class discussions, student engagement, personalized learning experiences, exam scores, course grades, and overall satisfaction with the course and coursework [2, 10–12]. However, others have disagreed [13–15]. Moreover, these systems provide just-in-time feedback to students and let instructors intervene at the right time to support them [16, 17]. A McGraw–Hill Education (MGHE) product is one such supplement; it uses interactive learning technology to enable a more personalized learning experience by enhancing students’ engagement with the course content and learning activities [12].

Learners’ satisfaction with b-learning plays a crucial role in evaluating its effectiveness and measuring the quality of such programs [1, 2, 18–21]. Institutions implement b-learning to meet learners’ needs; thus, it is equally important to measure learners’ perceived satisfaction to determine a program’s effectiveness [2]. Evaluating learning effectiveness and learners’ satisfaction are interconnected [1, 22–24]. Learners’ satisfaction is positively correlated with the quality of learning outcomes, and studies have established a relationship between students’ perceived satisfaction, their learning environment, and their quality of learning [25–27]. Learners’ satisfaction is also critical for learners to continue using blended learning [25]. That is why institutions involved in blended learning should be concerned with increasing learning satisfaction. Chen and Tat Yao concluded that it is essential to understand learners’ attitudes, perceptions, acceptance, and satisfaction to evaluate the success of technology-based instructional design [2].

Moreover, institutions can intentionally provide learning environments with appropriate supplements when the factors influencing students’ satisfaction are identified [25]. Understanding the factors influencing student satisfaction with blended learning can help in designing a learning environment and positively impact the student learning experience [25]. Standard measures of learners’ satisfaction in blended courses use students’ overall satisfaction with the experience, the perceived quality of teaching and learning, and the ease of use of technology [20, 21]. Although students’ satisfaction is not necessarily associated with achievement, satisfied students are more likely to accomplish their cognitive goals [27].

Although students’ satisfaction matters to institutions seeking to provide quality education, the field remains at a preliminary stage in which more valid and reliable instruments are needed [28]. There is also a need to understand more deeply the components of perceived satisfaction and the quality of blended learning [27]. Interventions based on reliable data that support students’ learning are critical [29], and accurate data are necessary to support learning improvement and measure progress toward the goal [29]. Survey results are usually used to make recommendations for curriculum interventions, faculty training, and products aimed at developing teaching methods. That is why it is essential to rely on well-designed and validated instruments [29, 30].

Instruments initially developed in a particular language for use in one context can be made appropriate for use in one or more other languages or contexts [31, 32]. In such cases, the translation/adaptation process aims to produce an instrument with psychometric qualities comparable to the original by following a specific procedure [30, 31, 33–35], and the instrument developer should evaluate its validity for the target population [31].

Validity is an instrument’s ability to measure the latent construct it is supposed to measure [36] (p. 55). It can be examined through content, construct, convergent, and discriminant validity [36] (p. 55) [37]. Content validity can be established when subject matter experts examine the constructs, including the definitions and items for each construct [28]. Once content validity is established, the instrument is implemented to examine construct validity.

The purpose of construct validity is to determine whether the constructs being measured are a valid conceptualization of the phenomena being tested [28]. If given items do not load on the intended construct, they should be eliminated, as they are not an adequate measure of that construct [37]. In confirmatory factor analysis (CFA), construct validity is achieved when the fitness indices for a construct are satisfied [30].

Convergent validity is achieved when all items in a measurement model are significantly correlated with their respective latent constructs [30, 35, 38]. It can also be verified using the average variance extracted (AVE) for every construct. The AVE estimate is the average amount of variation that a latent construct explains in the observed variables to which it is theoretically related [37, 39] (pp. 600–638). The AVE of every latent construct should be above 0.5 to establish convergent validity [40]. Discriminant validity indicates that the measurement model of a construct is free from redundant and unnecessary items. It checks whether items within a construct intercorrelate more highly than they correlate with items from other constructs to which they are theoretically not supposed to relate [30, 37, 38, 41].
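As an illustration of these criteria (not the study’s own computations), the AVE, composite reliability, and the Fornell–Larcker comparison can be derived directly from standardized loadings and interfactor correlations; the loading and correlation values below are hypothetical:

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    loadings = np.asarray(loadings)
    return np.mean(loadings ** 2)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings)
    sum_l = loadings.sum()
    error = (1 - loadings ** 2).sum()
    return sum_l ** 2 / (sum_l ** 2 + error)

# Hypothetical standardized loadings for two constructs (not the study data).
satisfaction_loadings = [0.85, 0.88, 0.91]
utility_loadings = [0.80, 0.83, 0.86, 0.79]
r_sat_util = 0.86  # hypothetical interfactor correlation

ave_sat, ave_util = ave(satisfaction_loadings), ave(utility_loadings)
msv = r_sat_util ** 2  # shared variance between the two constructs

# Convergent validity: AVE > 0.5; discriminant validity (Fornell-Larcker):
# each construct's AVE should exceed its maximum shared variance (MSV).
print(f"AVE(sat) = {ave_sat:.3f}, AVE(util) = {ave_util:.3f}, MSV = {msv:.3f}")
print("Discriminant validity:", ave_sat > msv and ave_util > msv)
```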

This study aims to adapt and validate an Arabic version of the students’ satisfaction scale. It aims to measure students’ satisfaction with the MGHE Connect platform in Saudi Arabia and provide Saudi and other Arab academics with a valid instrument for further studies and interventions to improve students’ learning and environments.

2. Materials and Methods

2.1. Materials

The study started with a survey proposed by Gearhart [8]. Gearhart’s survey consisted of 30 items covering four general categories of perception: satisfaction (N = 5, α = 0.87), utility (N = 12, α = not reported), usability (N = 9, α = 0.87), and perceived value (N = 4, α = 0.91). The utility scale consisted of three subscales: understanding (N = 4, α = 0.66), studying (N = 4, α = 0.73), and preparation (N = 4, α = 0.87). Gearhart described these categories as follows [8]:
(i) Satisfaction concerns whether the tool generally met the needs of the students.
(ii) Utility relates to how students used the technology, and it includes three subscales:
(iii) Understanding reflects the degree to which students thought Connect helped them to comprehend the material better.
(iv) Preparation measures the students’ use of Connect to introduce course content before discussions and lectures.
(v) Studying assesses the use of technology to review for exams.
(vi) Usability gauges student perceptions about access and user-friendliness.
(vii) The perceived value indicates if it is worth it (p. 13).

The survey’s response scaling ranged from 1 = strongly disagree to 5 = strongly agree. Two items from the satisfaction subscale, items 3 and 5, were negatively worded and reverse-coded before analysis.
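On a 1–5 Likert scale, reverse coding replaces each raw score x with 6 − x; a minimal pandas sketch with hypothetical item columns (not the study data) is:

```python
import pandas as pd

# Hypothetical responses; columns named after the survey item numbers.
df = pd.DataFrame({"item3": [1, 2, 5], "item5": [4, 3, 1]})

# Reverse-code negatively worded items on a 1-5 scale: 1<->5, 2<->4, 3 stays 3.
df[["item3", "item5"]] = 6 - df[["item3", "item5"]]
print(df)
```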

Although the students’ perceptions survey developed by Gearhart consisted of 30 items [8], when the researcher contacted the author for permission and the item list, the researcher received a list of 34 items classified as follows: satisfaction (N = 5), utility (N = 16), usability (N = 9), and perceived value (N = 4) [42]. The satisfaction and usability scales were identical in both versions, while the utility scale was not. In [8], the utility-scale items (N = 12) were categorized into three subscales (understanding, studying, and preparation), while in [42], all items (N = 16) were grouped under one scale. Thus, the researcher started the investigation with more items before reaching the final validated scale.

Nevertheless, since student access licenses in this study were provided free of charge under an arrangement between Yanbu Industrial College and the McGraw–Hill company agent, the perceived value scale was excluded from this adaptation study. Thus, the researcher started the adaptation work with 30 items, of which five represented satisfaction, 16 represented utility, and nine represented usability (see Appendix A).

2.2. Procedure

The researcher followed the International Test Commission (ITC) Guidelines and other literature for translating and adapting tests [30, 31, 33, 43–50]. The 30-item survey was translated from English into Arabic by two bilingual experts, and the translations were discussed with the researcher and consolidated into one version. Then, the translated version was back-translated into English by two other bilingual experts. Semantic adaptations, corrections, and discussions were made by the researcher and the other two experts to reach consensus on the initial version of the translated survey.

The translated version was then sent to seven professional subject matter experts in educational technology, e-learning, computer science, and chemistry to review the relevance of the content to the constructs and the items for each construct. One item from the usability scale (item 22) was deleted because 71% of the reviewers disagreed that it was relevant to usability or to any other scale.

After verifying the content validity, the survey was administered using a 50-participant pilot sample to examine the instrument and its items empirically using EFA. The researcher also interviewed ten respondents to check if they had any questions, concerns, or comments about the survey. The survey was then ready for further empirical investigations in the primary phase using CFA.

2.3. Participants

The study used two samples in two phases. In the pilot study phase, a cluster sample was obtained by randomly selecting two of the ten sections (each with about 25 students on average) of Chemistry 101 offered at Yanbu Industrial College, located in the western region of Saudi Arabia, in the Fall semester of 2019. The two-section sample comprised 55 students. In that semester, the students used MGHE Connect throughout the semester as a supplementary platform in addition to face-to-face instruction. After using MGHE Connect for 15 weeks and before the final exams started, the students were invited to respond to the survey for that phase. Fifty students responded, and the participants’ ages ranged between 19 and 21.

In the following semester, Spring 2020, there were also ten Chemistry 101 sections (each with about 23 students on average) whose students used MGHE Connect in the same way. After 15 weeks of using MGHE Connect in their learning, they were invited to respond to the survey. This final implementation sample comprised 193 students aged between 19 and 22 years.

2.4. Data Analysis

The study used exploratory factor analysis (EFA) with the maximum likelihood extraction method and Promax rotation to explore the survey’s constructs in the pilot sample. The Promax rotation method was used because the constructs were correlated [51]. The EFA solution used Kaiser’s criterion (eigenvalue > 1) to retain factors [52]. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO > 0.7) and a significant Bartlett’s test of sphericity were used to examine whether factor analysis was appropriate [53]. IBM SPSS version 20 was used to analyze the data.
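The analyses were run in IBM SPSS; purely as an illustration, an equivalent EFA workflow could be sketched in Python with the factor_analyzer package (the file name and data below are placeholders, not the study dataset):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Placeholder: a respondents x items matrix of 1-5 Likert responses.
responses = pd.read_csv("pilot_responses.csv")  # hypothetical file

# Sampling adequacy: KMO should exceed 0.7 and Bartlett's test should be significant.
chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"KMO = {kmo_overall:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Maximum likelihood extraction with Promax (oblique) rotation.
efa = FactorAnalyzer(n_factors=5, method="ml", rotation="promax")
efa.fit(responses)

# Kaiser's criterion: retain factors with eigenvalues greater than 1.
eigenvalues, _ = efa.get_eigenvalues()
print("Factors with eigenvalue > 1:", (eigenvalues > 1).sum())
print(pd.DataFrame(efa.loadings_, index=responses.columns))
```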

The study assessed the proposed competing models using CFA in IBM SPSS AMOS v. 22 [47, 54] (pp. 103–122) [52–56]. The maximum likelihood method was used to estimate parameters. Construct validity was examined using six fit indices: χ2/df (<5), CFI (>0.9), GFI (>0.9), TLI (>0.9), SRMR (<0.08), and RMSEA (<0.08) [57, 58]. Convergent validity was examined using the indicator loadings on the expected factors (>0.4) and the AVE (>0.5) [30, 40, 59, 60] (pp. 73–84). Discriminant validity was examined using Fornell and Larcker’s criterion [61]: the AVE of each construct should be higher than its maximum shared variance (MSV) with any other construct [62, 63]. The shared variance (SV) is the square of the correlation between any two constructs [37].
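The CFA itself was conducted in AMOS; the following is only a rough sketch of how a comparable measurement model and its fit indices could be obtained in Python with the semopy package, using placeholder item and factor names rather than the actual survey items:

```python
import pandas as pd
import semopy

# Placeholder measurement model in lavaan-style syntax (illustrative item names).
model_desc = """
satisfaction =~ q1 + q2 + q4
usability    =~ q23 + q24 + q26
satisfaction ~~ usability
"""

data = pd.read_csv("main_sample.csv")  # hypothetical respondents x items file

model = semopy.Model(model_desc)
model.fit(data)                    # maximum likelihood (Wishart) is the default objective
stats = semopy.calc_stats(model)   # chi2, DoF, CFI, GFI, TLI, RMSEA, etc.
print(stats.T)                     # transpose for a readable one-column listing
```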

The study used Cronbach’s alpha (α) to assess reliability (α should be >0.7) [61, 64] and the item-total correlation between each item and its construct (r should be >0.4) [65]. The composite reliability (CR) of each latent variable was also estimated because it is a more suitable indicator of reliability than Cronbach’s alpha [40, 66]. MaxR(H), which refers to McDonald’s construct reliability, was also estimated; the coefficient H describes the relationship between the latent construct and its measured indicators [40]. Means and parametric tests were used to describe Likert scale responses and test the significance of differences [67].
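For reference, Cronbach’s alpha and the corrected item-total correlation criteria described above can be computed directly from a scale’s item responses; the following is a minimal sketch assuming a pandas DataFrame with one column per item of a single scale:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items in its scale."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items},
        name="corrected item-total r",
    )
```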

3. Results and Discussion

3.1. Pilot Study Results

The study started with the three-construct version suggested by the original author [42]; the Cronbach’s alpha values were as follows: satisfaction (N = 5, α = 0.785), utility (N = 16, α = 0.935), and usability (N = 8, α = 0.851). The item analyses, showing the item-total correlations and alpha-if-item-deleted values, are presented in Table 1.

The results indicated that the three proposed constructs were internally consistent. However, three items (3, 20, and 25) had low item-total correlation coefficients (<0.4) and contributed negatively to Cronbach’s alpha. Thus, they were removed from the suggested survey item list.

Then, EFA was conducted to explore the preliminary constructs of the survey. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = 0.772) and Bartlett’s test of sphericity (approx. chi-square = 1054.012, df = 325) indicated that factor analysis was appropriate. The EFA solution using Kaiser’s criterion retained five factors. The total sum of squared loadings was 65.32%, and the five extracted factors and item loadings are shown in Table 2.

The EFA solution, shown in Table 2, supported the five-factor structure. Three items (9, 18, and 29) showed cross-loadings on multiple factors, and two items (10 and 21) loaded below 0.4 on all factors.

Thus, items 10 and 21 were removed, and the EFA was conducted again; the resulting solution is shown in Table 3.

The EFA solution again supported the five-factor structure. All items loaded adequately (>0.40) on their respective factors. Items 11 and 27 were swapped between constructs.

Cronbach’s alphas of the pilot version constructs were as follows: satisfaction (N = 4, α = 0.820), understanding (N = 5, α = 0.911), studying (N = 4, α = 0.860), preparation (N = 4, α = 0.855), usability (N = 7, α = 0.877), and utility, which covers understanding, studying, and preparation (N = 13, α = 0.930). The internal consistency of all constructs/subconstructs improved, and there was no indication that any further modification was required at this phase.

3.2. Main Study Results

In this phase, the survey was implemented to examine construct, convergent, and discriminant validity. Based on the survey’s theoretical background [8] and the empirical findings of the EFA in the pilot phase, three proposed underlying construct/subconstruct models of the survey were examined (see Figures 1–3; an illustrative model-syntax sketch follows the list):
Model 1: first-order three-factor construct. This model represented the proposed constructs sent by Gearhart through personal communication, in which the survey items were categorized into three constructs: satisfaction (N = 4), utility (N = 13), and usability (N = 7) [42]. In this model, the utility construct was not divided into subconstructs, unlike the subscale structure Gearhart suggested in [8].
Model 2: first-order five-correlated-factor construct. This model represented what was suggested by the EFA findings of the pilot study. In this model, the five correlated constructs were satisfaction (N = 4), understanding (N = 5), studying (N = 4), preparation (N = 4), and usability (N = 7).
Model 3: three-factor first-order with a higher-order factor. This model was proposed based on Gearhart’s inputs in [8] and the pilot study’s EFA findings. In this model, the first-order three-factor constructs were understanding (N = 5), studying (N = 4), and preparation (N = 4). These three factors loaded on a higher-order factor, utility (N = 13), which was correlated with two other factors: satisfaction (N = 4) and usability (N = 7).
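For readers who prefer model syntax to path diagrams, the competing structures can be expressed in lavaan-style syntax such as that accepted by SEM packages (e.g., semopy); the item names below are placeholders that follow the groupings above, not a verified item map:

```python
# Model 2: five correlated first-order factors (placeholder item names).
model2 = """
satisfaction  =~ sat1 + sat2 + sat3 + sat4
understanding =~ und1 + und2 + und3 + und4 + und5
studying      =~ stu1 + stu2 + stu3 + stu4
preparation   =~ pre1 + pre2 + pre3 + pre4
usability     =~ use1 + use2 + use3 + use4 + use5 + use6 + use7
"""

# Model 3: the same first-order factors, but understanding, studying, and
# preparation load on a second-order utility factor that is allowed to
# correlate with satisfaction and usability.
model3 = """
understanding =~ und1 + und2 + und3 + und4 + und5
studying      =~ stu1 + stu2 + stu3 + stu4
preparation   =~ pre1 + pre2 + pre3 + pre4
satisfaction  =~ sat1 + sat2 + sat3 + sat4
usability     =~ use1 + use2 + use3 + use4 + use5 + use6 + use7
utility       =~ understanding + studying + preparation
"""
```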

The three proposed models were tested using CFA with the maximum likelihood estimation method. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = 0.959) and Bartlett’s test of sphericity (approx. chi-square = 4107.110, df = 276) showed that factor analysis was appropriate. The fit indices of the CFA conducted for the three proposed models are shown in Table 4.

The CFA solutions showed poor fit indices for model 1 (CFI < 0.90, GFI < 0.90, TLI < 0.90, RMSEA > 0.08), while both model 2 and model 3 fitted the study data adequately (CFI > 0.90, TLI > 0.90, SRMR < 0.08, RMSEA < 0.08); only the GFI was below the cutoff (0.90). Thus, model 2 and model 3 were considered to achieve the construct validity criteria, with a slight advantage for model 2.

3.3. Reliability and Validity Evidence

The study started by investigating model 2 to examine construct reliability, convergent validity, and discriminant validity; the results are shown in Table 5. The results indicated that the composite reliability (CR) and McDonald’s construct reliability (MaxR(H)) were high (>0.7), establishing the reliability of all five constructs suggested in model 2. Also, the AVE values of all constructs were above 0.5, establishing the constructs’ convergent validity.

However, the MSV values were higher than the AVE values for all constructs, and the square root of the AVE, shown in bold on the diagonal, was not consistently higher than the interconstruct correlations. This finding indicated that the model did not achieve discriminant validity [40]. Thus, the researcher moved to model 3 to investigate whether it would satisfy the required validities.

When model 3 was examined, the results were as shown in Table 6.

The composite reliability (CR) and McDonald’s construct reliability (MaxR(H)) were satisfactory (>0.7), and all average variance extracted (AVE) values of the higher-order constructs in model 3 were above 0.5, indicating that the convergent validity of these constructs was established. However, the MSV values were higher than the AVE values, and the square root of the AVE, shown in bold on the diagonal, was not consistently higher than the higher-order interconstruct correlations, as shown in Table 6. These findings indicated that model 3 achieved construct reliability, construct validity, and convergent validity but had a discriminant validity problem.

Neither model 2 nor model 3 was acceptable because of the lack of discriminant validity. Thus, further modifications and investigations were needed to resolve the discriminant validity issue and reach a better solution.

3.4. Alternative Models

A minimal modification was conducted to achieve discriminant validity while maintaining the content validity that had already been established. The research literature suggests that grouping highly correlated constructs, using higher-order constructs, or eliminating some items can resolve the discriminant validity challenge [37].

The researcher selected model 3 for modification because it already had higher-order constructs. The highest correlation between constructs in this model was between the utility and usability constructs (r = 0.981). Thus, the researcher grouped the usability construct under the utility construct to propose a new model 4, as shown in Figure 4.

The fit indices of model 4 were χ2/df = 2.186 (<3), CFI = 0.927 (>0.9), SRMR = 0.054 (<0.08), and RMSEA = 0.079 (<0.08), indicating that the data fit the model. The AVEs were 0.652 and 0.873 for the satisfaction and utility constructs, respectively. The correlation coefficient between the satisfaction and utility constructs was 0.860. Since the square root of the AVE of the satisfaction construct (0.807) was less than the correlation between the satisfaction and utility constructs, this model still lacked discriminant validity. Gaskin and Lim suggested deleting item 13 to improve the model [68]. After removing item 13, the fit indices of the new model were χ2/df = 2.000 (<3), CFI = 0.941 (>0.9), SRMR = 0.048 (<0.08), and RMSEA = 0.072 (<0.08), indicating that the data fitted the proposed model better. The AVEs were 0.652 and 0.929 for the satisfaction and utility constructs, respectively. The square of the correlation coefficient between the satisfaction and utility constructs was 0.741, which was higher than the AVE of the satisfaction construct (0.652), indicating that the model still lacked discriminant validity.

One further step that could help achieve discriminant validity is to improve the satisfaction construct [69]. The item loadings on the satisfaction construct showed that item 5 had the lowest loading on its respective construct. When item 5 was deleted, the AVEs were 0.780 and 0.865 for the satisfaction and utility constructs, respectively. The square of the correlation coefficient between satisfaction and utility was 0.734, which was less than the AVE values of both constructs, indicating that the model achieved discriminant validity. The fit indices of this model were χ2/df = 2.079 (<3), CFI = 0.942 (>0.9), GFI = 0.83 (<0.9), TLI = 0.934 (>0.9), SRMR = 0.046 (<0.08), and RMSEA = 0.075 (<0.08), indicating that the data fitted this model best.

The composite reliability CR and AVE for all construct/subconstructs were 0.994 (0.780), 0.998 (0.865), 0.990 (0.681), 0.987 (0.758), 0.986 (0.673), and 0.987 (0.602) for satisfaction, utility, understanding, studying, preparation, and usability construct/subconstruct, respectively. This finding indicated that this model successfully achieved construct reliability, construct validity, and convergent validity in addition to discriminant validity. The model constructs, loadings, and variance explained for the optimal model are shown in Figure 5.

When the three proposed models were tested using the main study sample, the results supported the five-factor constructs (model 2 and model 3) rather than the three-factor construct (model 1). The CFA findings consistently supported the construct reliability, construct validity, and convergent validity of the proposed models. However, none of them achieved discriminant validity. It was clear that the survey items and dimensions were highly correlated, making discriminant validity challenging to achieve. Farrell and Rudd [37] and Yale et al. [70] suggested solving this challenge by using higher-order grouped constructs or by increasing the satisfaction construct’s AVE. Finally, the study reached a model that satisfied discriminant, construct, and convergent validity.

Even though the high associations among items and constructs make the survey highly internally consistent and reliable, they made discriminant validity challenging to achieve. The survey’s lack of discriminant validity indicated that the total scores of the different constructs could not be interpreted distinctly. The study tried to detect constructs that were as distinct as possible and reached the two higher-order construct solution shown in model 5. These findings, to some extent, supported the construct validity suggested by Gearhart [8]. The survey measures overall satisfaction and a utility dimension, which has four subscales measuring understanding, studying, preparation, and usability.

3.5. Students’ Satisfaction

Based on the validated survey, the subscale means of students’ satisfaction are shown in Table 7 and Figure 6.

The results indicated that students’ overall satisfaction with MGHE Connect was met, with the mean (M = 3.52, SD = 0.176) significantly above the neutral point (3.0), t(192) = 5.663. Students were also satisfied with its utility (M = 3.51, SD = 0.221), t(192) = 7.319. The highest level of satisfaction was with understanding (M = 3.60, SD = 0.170), t(192) = 7.802, and the lowest was with preparation (M = 3.23, SD = 0.259), t(192) = 3.012. These findings support what Gearhart found in [8].
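The comparisons against the neutral midpoint (3.0) reported above are one-sample t-tests; a minimal scipy sketch with hypothetical scores (not the study data) is:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student subscale means on the 1-5 Likert scale.
satisfaction_scores = np.array([3.7, 3.2, 4.0, 3.5, 3.1, 3.8])

# Test whether the mean differs from the neutral midpoint of 3.0.
t_stat, p_value = stats.ttest_1samp(satisfaction_scores, popmean=3.0)
print(f"M = {satisfaction_scores.mean():.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")
```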

The t-tests for paired differences between utility subscales are shown in Table 8.

The results indicated significant differences in satisfaction means between understanding and preparation (∆M = 0.36917, SD = 0.6804, t(192) = 7.538), between studying and preparation (∆M = 0.34672, SD = 0.91129, t(192) = 5.286), and between preparation and usability (∆M = −0.34845, SD = 0.63893, t(192) = −7.576). In all of these comparisons, the satisfaction level for preparation was significantly lower than for the other subscales. Students were equally satisfied with using MGHE Connect to understand materials, to study for exams, and with its usability.
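The pairwise subscale comparisons in Table 8 are paired t-tests on the same respondents; a minimal sketch with hypothetical scores is:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student subscale means for the same respondents.
understanding = np.array([3.8, 3.4, 3.9, 3.6, 3.2])
preparation = np.array([3.3, 3.0, 3.5, 3.4, 2.9])

# Paired t-test: each student contributes one score to each subscale.
t_stat, p_value = stats.ttest_rel(understanding, preparation)
diff = (understanding - preparation).mean()
print(f"dM = {diff:.3f}, t = {t_stat:.3f}, p = {p_value:.4f}")
```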

In sum, the study indicated overall student satisfaction with MGHE Connect, which means that the tool generally meets student needs. It indicated that MGHE Connect is adequate in helping students understand and comprehend the study material. It also helped them study and review for exams. Students thought MGHE Connect helped them prepare for classes and study course content ahead of lectures and class discussions. However, students found MGHE Connect significantly less helpful for preparation than for understanding and studying. The study also showed that students were as satisfied with MGHE Connect’s usability as with its support for understanding and studying.

Unlike studies that found MGHE Connect ineffective in improving students’ academic performance measures [13–15], this study showed that student perceptions of MGHE Connect were positive. These findings show that it is essential to consider both direct measures, such as exam scores, and indirect measures, such as surveys, when assessing blended learning programs [14, 15, 71–73]. Using both types of measures might resolve the ambiguity in assessing such programs’ effectiveness [70]. The findings also agreed with [27, 70], which stated that student self-reports of learning have no relationship with actual learning: students might perceive a tool as substantially impacting their learning while it has no impact on direct learning measures.

4. Conclusion

This study aimed to adapt and validate a scale to assess students’ satisfaction with MGHE Connect in Saudi Arabia and to provide a valid instrument measuring students’ satisfaction with MGHE Connect for further studies and interventions to improve students’ learning and learning environments. The study followed a well-established procedure to translate the survey and establish content validity. It examined the survey items to establish construct, convergent, and discriminant validity, as well as composite reliability. The only model that fitted the data and satisfied all reliability and validity standards was a second-order model in which two primary constructs were distinctly identified: satisfaction and utility. The utility scale was composed of four subscales: understanding, studying, preparation, and usability. The survey constructs were strongly associated, and it was challenging to establish discriminant validity for the proposed models. Grouping constructs under a higher-order factor allowed discriminant validity to be achieved in addition to the already established reliability and validity coefficients. The final version of the survey was reliable and valid (see Appendix B), and it can be used in further studies and interventions. The study showed that MGHE Connect, in general, met the students’ needs and satisfaction. MGHE Connect significantly helped students comprehend course materials better, study before exams to get better scores, and prepare for class discussions in advance. However, students’ use of MGHE Connect for preparation was significantly lower than their use of it for understanding and studying. Also, students were satisfied with MGHE Connect’s ease of use and friendliness. The study showed how useful MGHE Connect was based on students’ perceptions in Saudi Arabia. However, this study is limited in that the sample used may not represent the broader population.

Data Availability

The data used to support the findings of this study are available upon request to the author.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author would like to thank Dr. Christopher Gearhart, Tarleton State University, USA, for sharing the original survey items. Also, the author thanks Dr. Saeed Al-Qahtani, Yanbu, English Language Institute Deputy Managing Director, Dr. Mahmoud Alabdallah, Dr. Bijal Kottukkal Bahuleyan, Dr. Islam Khan, Prof. Adulkareem Al-Alwani, Dr. Adnane Habibm, Dr. Osman Barnawi, Mr. Wahieb Al-Baroudi, Mr. Mohammad Al-Johani, Mr. Adel Almotairi, Mr. Omar Alkhowaiter, Mr. Salem Alsufyani, and Mr. Sultan Almalki for their valuable contributions in translating, reviewing, and implementing the survey instrument.

Supplementary Materials

The items and subscales of the initial version of the scale are listed in Appendix A. Also, the items of the final version of the validated survey are shown in Appendix B. (Supplementary Materials)