Construction and Validation of a Scale to Measure Islamic Primary School Teachers’ Readiness in Implementing Emancipated Curriculum Referring to the Technological Pedagogical and Content Knowledge

Abstract
This research aims to develop and validate a scale to measure the readiness of science teachers in Islamic primary schools to implement the Emancipated Curriculum, referring to the Technological Pedagogical and Content Knowledge (TPACK) framework. The eight development steps of DeVellis were used to develop the scale. A total of 224 respondents, comprising six experts and 218 science teachers in Central Java Province and the Special Province of Yogyakarta, agreed to participate. Data were analyzed using SEM-PLS. The development process produced a scale of 34 valid and reliable items. These items consist of Technological Knowledge factors (four items), Pedagogical Knowledge factors (14 items), Content Knowledge factors (four items), Pedagogical Content Knowledge factors (five items), Technological Content Knowledge factors (two items), Technological Pedagogical Knowledge factors (two items), and Technological Pedagogical and Content Knowledge factors (three items). High reliability was obtained for the developed scale (Cronbach's alpha = 0.983). This validated scale is ready to be used to examine teachers' readiness to implement the Emancipated Curriculum. Measurement with this scale can illustrate the extent to which teachers possess the TPACK needed to implement the curriculum, so that policy recommendations can be made.


Introduction
Indonesia's lag in education is one reason the government implemented a reform curriculum called the Kurikulum Merdeka (Emancipated Curriculum) (Gumilar et al. 2023). This curriculum is intended to prepare a generation that is resilient in facing the technological revolution (Randall et al. 2022), the demands of the 21st century (Faiz and Faridah 2022; Faiz, Parhan, and Ananda 2022), and Society 5.0 (Indarta et al. 2022). The Emancipated Curriculum provides unrestricted space for teachers to create activities that lead students to achieve 21st-century skills, i.e., creativity and innovation, critical thinking and problem-solving, communication and collaboration, and literacy in information, media, and information and communication technology (Yue 2019), which creates challenges for teachers.
The main feature of the Emancipated Curriculum is a student-centered approach, which gives teachers the freedom to guide students in determining how to behave, process, and think for their own self-development; this curriculum therefore has significant potential to increase students' motivation and enthusiasm for learning (Lince, 2022). The Emancipated Curriculum is also relevant to the digital era of Industrial Revolution 4.0 and Society 5.0, where the information students receive through digital media and technology can stimulate their creativity (Darmayani 2022). Therefore, teachers should also be aware of the demands of Industrial Revolution 4.0, which alter teachers' roles in 21st-century classrooms (Shafie, Abd Majid, and Shah Ismail 2019).
Teachers are now encouraged to teach using a student-centered approach and to utilize technologies in their classrooms. The Technological Pedagogical and Content Knowledge (TPACK) framework can address these new teacher roles. TPACK is a conceptual framework for preparing teachers to use technology effectively in learning (Chai, Koh, and Tsai 2010; Kaplon-Schilis and Lyublinskaya 2015; Kurt et al. 2014; Mishra and Koehler 2006; Niess 2011; Santos and Castro 2021), aligning with the demands of the Emancipated Curriculum. In this study, the TPACK framework was combined with indicators for Emancipated Curriculum implementation (ECI) according to Minister of Religion Decrees No. 183 and 184 of 2019 to portray the readiness of science teachers at Islamic primary schools to implement the Emancipated Curriculum. Minister of Religion Decree No. 347 of 2022 concerning Guidelines for Emancipated Curriculum Implementation in Islamic Schools states that the Emancipated Curriculum will be implemented in Islamic schools gradually, starting in the 2022/2023 academic year, with some schools appointed as pilot projects to implement this curriculum.
The ECI poses challenges for science teachers in elementary schools because the Emancipated Curriculum policy integrates natural and social sciences into a single subject (natural-social science, or IPAS). Natural science, which is inquiry-based and closer to scientific discovery (Breiner et al. 2012), must be combined with social science, which is characterized by solving social problems involving social intelligence (Talitha and Sari 2016). Natural and social sciences nevertheless share the same characteristics, namely reasoning and scientific thinking, which form a strong basis in cognitive science (Berland and McNeill 2010; Dunbar and Fugelsang 2005) for studying scientific ways of knowing and understanding the world.
As the main actors in ECI, teachers should be well trained and ready to change their mindset and performance for the ECI to succeed. Although the government has prioritized the development of teacher competencies and professional development programs to support the ECI, teachers still need to build their capacities for quality teaching, differentiated instruction, and assessment to determine student learning needs and to monitor progress and attainment (Randall et al. 2022). Some studies have highlighted that teachers encounter many obstacles in implementing the curriculum due to a lack of preparation, such as the absence of definite textbooks, being stuck in the previous curriculum (Syarochil & Abadi, 2023), and limited facilities and infrastructure, such as digital learning media (Ellen and Sudimantara 2023). In addition, there is little research on Islamic primary school teachers' readiness for ECI, particularly focusing on natural-social science subjects. No instrument has been found to capture teachers' readiness; therefore, this research is fundamental in designing a standard instrument for measuring teacher readiness in ECI, referring to TPACK.

Research Method
Eight development steps by DeVellis (2003) were carried out to develop a reliable and valid scale (see Figure 1).

Results and Discussion
Combining the ECI and TPACK components produced 34 valid items with very high reliability (0.983). The results of each step are described below.

Step 1. Determining the construct
In this initial stage, the construct to be measured was determined: teachers' readiness for ECI. The scale developed refers to the ECI guidelines and the TPACK framework. This phase also included a comprehensive literature review on ECI and TPACK. Existing TPACK scales from previous studies were adapted to determine the construct.

Step 2. Making item statements
Writing statement items is often the most challenging part of the scale development process. The initial scale consisted of 36 items spread across seven aspects, namely Technological Knowledge (TK) (six items), Pedagogical Knowledge (PK) (14 items), Content Knowledge (CK) (four items), Pedagogical Content Knowledge (PCK) (five items), Technological Content Knowledge (TCK) (two items), Technological Pedagogical Knowledge (TPK) (two items), and Technological Pedagogical and Content Knowledge (TPACK) (three items).

Step 3. Determining scale format
Choosing a response format is an essential step in scale development. This scale uses a semantic differential ranging from 1 to 5; the closer to 5, the more positive the answer. The score on the scale is calculated by summing the points selected by the respondent.
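The scoring rule can be sketched in a few lines; the item responses below are hypothetical, used only to illustrate the sum-of-points calculation:

```python
# Minimal sketch of scoring the 1-5 semantic-differential scale described
# above; the responses are hypothetical.
def scale_score(responses):
    """Sum the points (1-5) a respondent selected across all items."""
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on the 1-5 scale")
    return sum(responses)

respondent = [5, 4, 4, 3, 5]  # five illustrative item responses
print(scale_score(respondent))  # 21
```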

Step 4. Reviewing of statement items by experts
Obtaining content validation is also an essential part of the scale development process. A total of 36 initial items were assessed for content validity by six experts through a Focus Group Discussion (FGD). Four faculty members and two primary education teachers who have implemented the Emancipated Curriculum were involved in this step.
The experts assigned a score to each item examined and offered comments and feedback on the clarity and conciseness of the scale's content. The expert ratings were then converted to a four-level scale (excellent, good, fair, poor), as in Table 1. All 36 items were declared to meet the validity criteria, with average scores ranging from 3.67 to 4, corresponding to the excellent category. Nevertheless, the experts and practitioners provided some suggestions. Some items needed to be adjusted to the terms used in the Emancipated Curriculum, such as "teaching modules" instead of "lesson plans". The term P5 (the Pancasila Student Profile Strengthening Project) needed to be added alongside the Rahmatan Lill Alamiin (Islamic values) Student Profile. Another suggestion was to add a statement on diagnostic assessment, which is a characteristic of ECI. The experts' suggestions were then followed up for improvement.
Step 5. Considering the inclusion of validation items
According to DeVellis (2003), it is necessary to consider including additional items to establish the validity of the scale. The experts suggested adding one item to the Technological Knowledge aspect and four to Pedagogical Knowledge. Thus, the total number of items became 41.

Step 6. Field Testing
After the items in the scale were determined, the scale had to be administered to a large sample of subjects. The final version of the scale of Islamic primary teachers' readiness for ECI referring to TPACK was given to 305 science teachers of Islamic primary schools in two provinces, Central Java and the Special Province of Yogyakarta, of whom 218 were willing to participate as respondents (130 teachers from Central Java and 88 from the Special Province of Yogyakarta). The scale was distributed via Google Forms and on paper by mail, but most respondents chose to respond through Google Forms.
Step 7. Evaluating the items
After administering the scale to a large and representative sample, determining the nature of the latent variables underlying the set of items and measuring internal consistency reliability is an essential step in the scale development process. At this stage, quantitative methods were employed, using SEM-PLS software, to determine the extent to which the instrument is valid and reliable. Construct validity for each knowledge domain subscale (TK, PK, CK, TPK, PCK, TCK, TPACK) was analyzed using the Partial Least Squares (PLS) method assisted by SmartPLS 4.0 software to assess internal consistency reliability, convergent validity, and discriminant validity. The validity testing includes unidimensionality, local dependence, and monotonicity tests.
The unidimensionality test examines whether the indicators (items) can be explained by a single variable (J. F. Hair et al. 2019; De Ayala 2022). The unidimensionality of the instrument was analyzed using Confirmatory Factor Analysis (CFA) via Jamovi (The Jamovi project, 2022) software by considering the values of Chi-square (CMIN), the standardized root mean square residual (SRMR), the root mean square error of approximation (RMSEA), the Comparative Fit Index (CFI), and the Tucker-Lewis Index (TLI). The fit index criteria that must be met in the unidimensionality test are (1) SRMR close to 0.06 or below; (2) RMSEA close to 0.08 or below; and (3) CFI and TLI close to 0.90 or above (Hu and Bentler 1998; Whittaker and Schumacker 2022). Because the number of respondents exceeds 200, the chi-square p-value is no longer informative; to assess model fit, we therefore used the relative chi-square (CMIN/df) with a criterion of below 5.00 (Wheaton et al. 1977; Castéra et al. 2020). Several items had to be removed because they disturbed the unidimensionality of the instrument: TK4, TK6, TK7, PK7, PK8, PK17, and PK18.
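As an illustration, a small helper can check a set of CFA outputs against these cutoffs. The index values passed in below are hypothetical, and the decimal form of the SRMR/RMSEA cutoffs is an assumption of this sketch:

```python
# Sketch of the unidimensionality fit-index checks described above.
# The CFA outputs in the call are hypothetical; cutoffs assume the
# conventional decimal form (SRMR <= 0.06, RMSEA <= 0.08, CFI/TLI >= 0.90,
# CMIN/df < 5).
def fit_ok(cmin, df, srmr, rmsea, cfi, tli):
    checks = {
        "CMIN/df < 5": cmin / df < 5.0,
        "SRMR <= 0.06": srmr <= 0.06,
        "RMSEA <= 0.08": rmsea <= 0.08,
        "CFI >= 0.90": cfi >= 0.90,
        "TLI >= 0.90": tli >= 0.90,
    }
    return all(checks.values()), checks

ok, detail = fit_ok(cmin=1250.4, df=521, srmr=0.045, rmsea=0.052,
                    cfi=0.93, tli=0.92)
print(ok)  # True
```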
The next test is the local dependence test on the statement items. Local dependence is the dependence of a statement item on other statement items, which can be caused by the similarity of latent variables (J. F. Hair et al. 2019; De Ayala 2022). The local dependence value is obtained through the residual correlation test from the CFA, with the requirement that residual correlations be less than 0.25 (Edelen and Reeve 2007; Chen and Thissen 1997). The CFA demonstrated no residual correlations above 0.25, showing that the statement items do not exhibit dependence between the designed variables.
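The residual-correlation idea can be illustrated for a one-factor model, where the model-implied correlation between two items is the product of their loadings, so the residual is the observed correlation minus that product. The loadings and correlation below are hypothetical:

```python
# Sketch of the local-dependence check under a one-factor model:
# residual correlation = observed correlation - loading_i * loading_j.
# All numbers are hypothetical illustrations.
def residual_corr(r_obs, loading_i, loading_j):
    """Residual correlation of an item pair after removing the factor."""
    return r_obs - loading_i * loading_j

r = residual_corr(r_obs=0.71, loading_i=0.82, loading_j=0.79)
print(abs(r) < 0.25)  # True: this pair passes the < 0.25 criterion
```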
The final test is the monotonicity test using the "mokken" package (Van der Ark 2015) in the R-Studio software (RStudio Team 2021). Model fit for monotonicity was tested using the scalability H-coefficient, with criteria of H ≥ 0.30 for each item and H ≥ 0.50 for the full scale (Mokken 1971; Klaufus et al. 2021). The monotonicity test found that each item met the model fit requirements, with values ranging from 0.635 to 0.758. For all items together, a value of 0.706 was obtained, which shows that the monotonicity of the instrument can be considered valid.
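The paper uses the polytomous Mokken procedure in R; as a minimal illustration of the H-coefficient idea, the sketch below computes Loevinger's H for a single pair of dichotomous items on hypothetical data (H = 1 minus the ratio of observed to expected Guttman errors):

```python
# Minimal sketch of Loevinger's H for two dichotomous items; the paper's
# analysis uses the polytomous version in the R "mokken" package.
# Response vectors are hypothetical.
def pair_H(x, y):
    """H for an item pair: 1 - observed / expected Guttman errors.
    The 'easier' item is the one with the higher proportion correct;
    a Guttman error is failing the easy item while passing the hard one."""
    n = len(x)
    px, py = sum(x) / n, sum(y) / n
    easy, hard = (x, y) if px >= py else (y, x)
    observed = sum(1 for e, h in zip(easy, hard) if e == 0 and h == 1)
    expected = n * (1 - max(px, py)) * min(px, py)  # under independence
    return 1 - observed / expected

x = [1, 1, 1, 1, 0, 1, 0, 1, 1, 0]  # easier item, proportion correct 0.7
y = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]  # harder item, proportion correct 0.5
print(round(pair_H(x, y), 3))  # 0.333
```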
The instrument's reliability was tested with CFA to determine Cronbach's alpha and the factor loadings of each indicator. The scale and its indicators are considered reliable if Cronbach's alpha and the factor loadings exceed 0.700 (J. F. Hair et al. 2019; Joseph F. Hair et al. 2021). The Cronbach's alpha obtained was 0.983, which can be categorized as very reliable, and the indicators' factor loadings ranged from 0.776 to 0.946.
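Cronbach's alpha itself is straightforward to compute. The sketch below applies the standard formula to hypothetical item scores; the study's value of 0.983 comes from the real data set:

```python
# Sketch of Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/var(totals)).
# The four-item, five-respondent score matrix is hypothetical.
def cronbach_alpha(items):
    """items: list of per-item score lists, aligned by respondent."""
    k = len(items)

    def var(xs):  # population variance, as in common alpha implementations
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(resp) for resp in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

scores = [[5, 4, 4, 5, 3],
          [4, 4, 5, 5, 3],
          [5, 3, 4, 5, 2],
          [4, 4, 4, 5, 3]]
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # 0.923, above the 0.700 reliability threshold
```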
The next step was to calibrate the scale using the Graded Response Model (GRM), which analyzes the probability that a respondent chooses a high category over a low category on a statement item by considering the degree of discrimination (a) and the degree of difficulty (b) of the item (De Ayala, 2022; Samejima, 2018). The degree of discrimination tests the ability of the items to differentiate respondents' abilities based on the items' levels of difficulty. The criteria for the degree of discrimination are (1) low (0.4 to 0.99); (2) moderate (1.00 to 2.09); and (3) high (above 2.10). The degree of difficulty reflects the quality of an item in separating respondents by ability level: respondents with low ability will only respond positively to statements with a low level of difficulty, whereas respondents with high ability will be able to respond to statements with a high level of difficulty (Fernandes et al. 2020; Baker and Kim 2017). Item fit was tested using Orlando and Thissen's S-X², with a p-value criterion of more than 0.001 (Orlando and Thissen 2000; Fernandes et al. 2020).
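The GRM category probabilities can be sketched directly: the probability of responding in category k or above is a logistic function of ability, and category probabilities are differences between adjacent cumulative probabilities. The discrimination and threshold values below are illustrative, not the Table 2 estimates:

```python
import math

# Sketch of Samejima's Graded Response Model for a five-category item.
# Discrimination a and thresholds b are hypothetical, not estimates
# from the paper's calibration.
def grm_probs(theta, a, b):
    """b: sorted thresholds (len = number of categories - 1).
    Returns the probability of each response category at ability theta."""
    # P*(k): probability of responding in category k or above
    p_star = [1.0] + [1 / (1 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # Category probabilities are differences of adjacent cumulative curves
    return [p_star[k] - p_star[k + 1] for k in range(len(b) + 1)]

probs = grm_probs(theta=0.0, a=2.0, b=[-2.0, -0.8, 0.5, 1.9])
print([round(p, 3) for p in probs])
```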
The calibration results in Table 2 show discrimination values in the moderate to high range (1.541 to 3.322). The difficulty values (b) ranged from -3.763 to 4.297. Each category of answer choices has a wide range of values, which shows that the items' choice categories function well. Figure 2 shows that each item has a nearly identical probability pattern and answer-choice category function: the higher the respondent's ability (x-axis), the lower the probability of choosing a low category (y-axis). For item fit, the p-values ranged from 0.003 to 0.944, indicating that all items fit the model. Thus, the scale items developed in this study can discriminate respondents' abilities at each level.

Figure 2. Scale item category curve

GRM also measures the total information curve of each item, which shows the item's potential to contribute information about the respondent's ability position across the entire range of assessment scores (De Ayala 2022; Baker and Kim 2017). Figure 3 shows respondents' abilities on the x-axis and total item information and item standard errors on the y-axis. The total information value and standard error span logit -4 to logit 4, with high information values indicated by low standard errors (close to 0). The total information graph has several peaks with values close to each other, the highest at logit 1.

Figure 3. Scale total information curve
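The relationship between total information and standard error that Figure 3 depicts can be sketched numerically: SE(theta) = 1/sqrt(I(theta)), so the information peak is where measurement error is lowest. The information values below are hypothetical, chosen only to mirror the highest-peak-at-logit-1 pattern:

```python
import math

# Sketch of the information/standard-error relationship behind Figure 3.
# The total-information values per logit are hypothetical illustrations.
info_curve = {-2.0: 9.5, -1.0: 14.2, 0.0: 18.9, 1.0: 21.3, 2.0: 12.7}

# SE(theta) = 1 / sqrt(I(theta)): more information means less error
se_curve = {theta: 1 / math.sqrt(i) for theta, i in info_curve.items()}

peak = max(info_curve, key=info_curve.get)  # ability with most information
print(peak, round(se_curve[peak], 3))  # 1.0 0.217
```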
The final analysis, differential item functioning (DIF) (see Table 3), tests the performance of items for respondents with two different characteristics (De Ayala 2022; Baker and Kim 2017). DIF analysis was done using the snowIRT module in the Jamovi software (Seol, 2023; The Jamovi project, 2022) and was carried out using the CMIN calculation, significant at 0.05, with the logistic regression detection method in the difNLR package (De Ayala 2022; Hladká and Martinková 2020). The variables tested in the DIF analysis are gender (DIF1), respondent's education level (DIF2), type of school (DIF3), certification (DIF4), ECI training (DIF5), place of teaching (DIF6), length of teaching experience (DIF7), and age (DIF8).
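The logistic-regression detection idea can be sketched in miniature: fit a reduced model predicting an item response from ability alone and a full model that adds group membership, then compare them with a likelihood-ratio statistic against the 5% chi-square critical value (3.841 for one added parameter). Everything below, including the tiny data set and the gradient-ascent fitter, is a hypothetical illustration, not the paper's difNLR analysis:

```python
import math

# Sketch of logistic-regression DIF detection with hypothetical data.
def log_lik(w, X, y):
    """Bernoulli log-likelihood of weights w on data (X, y)."""
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def fit_logistic(X, y, step=0.5, iters=3000):
    """Batch gradient ascent on the logistic log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + step * gj / len(X) for wj, gj in zip(w, grad)]
    return w

theta = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5] * 2   # abilities, both groups
group = [0] * 6 + [1] * 6                       # 0 = reference, 1 = focal
y = [0, 1, 0, 1, 1, 1,  0, 0, 0, 1, 0, 1]      # hypothetical item responses

X_red = [[1.0, t] for t in theta]                       # ability only
X_full = [[1.0, t, g] for t, g in zip(theta, group)]    # ability + group

# Likelihood-ratio statistic: 2 * (ll_full - ll_reduced), df = 1
lr_stat = 2 * (log_lik(fit_logistic(X_full, y), X_full, y)
               - log_lik(fit_logistic(X_red, y), X_red, y))
print(round(lr_stat, 2))  # compare to the 5% critical value 3.841
```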
In the certification variable (DIF4), respondents who hold teaching certification have an advantage in answering questions TK1, TK2, TK3, TK5, TCK2, TPK1, and TPK2, while respondents who do not have teaching certification have an advantage in answering item PK15. In the ECI training variable (DIF5), respondents who have attended training have an advantage in answering questions CK3 and TPACK2, while respondents who have not participated in training have an advantage in answering question PCK1. In the teaching location variable (DIF6), respondents who teach in Central Java have an advantage in answering questions PK12, PK14, and PK15, while respondents who teach in the Special Province of Yogyakarta have an advantage in answering question PCK4. In the teaching experience variable (DIF7), respondents with less than 15 years of teaching experience have an advantage in answering questions TK1, TK2, TK3, TK5, TCK2, TPK1, and TPK2, while respondents who have taught for more than 15 years have an advantage in answering questions CK4, PCK2, and PCK4. In the age variable (DIF8), respondents under 40 years of age have an advantage in answering questions TK1, TK2, TK3, TK5, TCK2, TPK1, and TPK2, while respondents over 40 have an advantage in answering questions PK13, PK14, PK15, PK16, PCK2, and PCK5.
This research compiled a scale of Islamic primary school teachers' readiness for the ECI referring to TPACK. Theoretically, the indicators in the ECI are closely related to TPACK, which is an essential element for lecturers, teachers, and preservice teachers in designing and implementing learning that contains a well-adjusted combination of knowledge of technology, pedagogy, and content. The TPACK model has been widely used in both quantitative (Chai et al., 2011; Hall et al., 2020; Pamuk et al., 2015; Yildiz Durak, 2019) and qualitative studies (Canbazoglu Bilici et al., 2016; Demir & Bozkurt, 2011; Groth et al., 2009; Koh et al., 2014; Mcgrath et al., 2011; Santos & Castro, 2021). In recent years, the TPACK model has also been used to investigate the development of teachers' TPACK in different learning contexts, such as science (Canbazoglu Bilici et al., 2016; Jang & Tsai, 2013), mathematics (Hernawati & Jailani, 2019; Muhtadi et al., 2017; Niess et al., 2009), and English (Kurt et al., 2014; Solak & Cakir, 2014). A study by Koh (2020) shows that TPACK can provide a theoretical framework for teaching and learning centers to compile and disseminate types of institutional knowledge through three approaches: technology modeling, pedagogical modeling, and deepening practice. Practically, the scale provides a tool for teachers to self-evaluate their pedagogical practices and to identify which areas of their TPACK need improvement.

Conclusion
This study succeeded in developing a scale to measure the readiness of Islamic elementary school teachers for ECI, referring to the TPACK framework, with 34 valid and reliable items (Cronbach's alpha = 0.983). The scale contains indicators of TK (4 items), PK (14 items), CK (4 items), PCK (5 items), TCK (2 items), TPK (2 items), and TPACK (3 items). The scale can be used as a tool to evaluate teacher readiness for implementing the Emancipated Curriculum. In addition, the portrait of teachers' readiness for ECI that it captures can serve to evaluate the curriculum itself and to determine which training should be provided to elevate teachers' knowledge and skills for ECI.