Developing a short assessment instrument for Technological Pedagogical Content Knowledge (TPACK.xs) and comparing the factor structure of an integrative and a transformative model

Technological Pedagogical Content Knowledge (TPACK) is regarded as one of the most important models describing teachers' competencies for successfully teaching with technology. TPACK is most frequently assessed by means of self-report questionnaires, which, besides their inherent methodological limitations, present constraints related to the validity, reliability, or practical applicability of existing instruments. Furthermore, the internal structure of the TPACK framework is a topic of debate. The two goals of this study were (1) to develop a valid and reliable short questionnaire for measuring TPACK (TPACK.xs) and (2) to compare the factor structure of an integrative and a transformative model of TPACK.


Introduction
In the past 30 years, teacher knowledge has been understood along the lines of Shulman's (1986, 1987) views, according to which skilled teachers are more than just experts in pedagogy and content but rather possess the unique combination of these two knowledge types, labelled "pedagogical content knowledge". More recently, Mishra and Koehler (2006) proposed an extension of this model to include technological knowledge as a third core component for effectively teaching in the digital era. Their Technological Pedagogical Content Knowledge (TPACK) framework is based on the three core components pedagogical knowledge (PK), content knowledge (CK), and technological knowledge (TK), and four hybrid components formed at their intersections, namely pedagogical content knowledge (PCK), technological pedagogical knowledge (TPK), technological content knowledge (TCK), and technological pedagogical content knowledge (TPCK; see Fig. 1; for a detailed description see Koehler & Mishra, 2008; Mishra & Koehler, 2006). Since then, TPACK has inspired a great body of research, and although a number of adaptations (e.g., Lee & Tsai, 2010) and extensions (e.g., Angeli & Valanides, 2009; Porras-Hernández & Salinas-Amescua, 2013) of the framework have been proposed, the original framework persists as the consistent core for representing teacher knowledge. However, two issues have raised concerns and sparked discussions in recent years: it is unclear how exactly the different knowledge components are related and how they can be measured. These issues are described in detail in the following sections.

Contrasting integrative and transformative views on TPACK
Although the TPACK framework can be considered as one of the most popular concepts in research on educational technology (Hew, Lan, Tang, Jia, & Lo, 2019), it has also been criticized for its lack of conceptual clarity and specificity (e.g., Angeli & Valanides, 2009;Brantley-Dias & Ertmer, 2013), as well as for its "fuzzy boundaries" (Graham, 2011;Kimmons, 2015). These issues have led to the emergence of a body of literature based on a wide range of definitions and interpretations of TPACK (e.g., Voogt, Fisser, Pareja Roblin, Tondeur, & van Braak, 2013;Petko, 2020). In particular, the different interpretations have given rise to two contrasting perspectives on the development of and the relations between TPACK components (e.g., Angeli & Valanides, 2009;Graham, 2011). In the first perspective, the integrative view, the central component of TPACK (i.e., TPCK) is considered to arise from the integration of the other components of teacher knowledge and thus be related to each of these domains. In this view, high levels of TPCK will be constituted by high levels of TPK, TCK, PCK, TK, PK, and CK. In contrast, the transformative view describes the intersections of knowledge components to give rise to unique bodies of knowledge which are more than the fusion of the core components. In other words, according to the transformative perspective TPCK cannot simply be accounted for by summing all other TPACK components, but rather it is a distinct form of knowledge which transforms beyond the components at its base. In this view, TPCK will be influenced by TPK, TCK, and PCK but not directly by TK, PK, and CK. Although Mishra and Koehler (2006) theoretically described TPACK along the lines of the transformative view, to date only a few researchers have made efforts to empirically validate this assumption (e.g., Angeli & Valanides, 2005;Jang & Chen, 2010;Jin, 2019). 
A few studies have investigated the interplay of TPACK knowledge domains by means of structural equation modeling, yet with regard to this question, results remain inconclusive (Celik, Sahin, & Akturk, 2014; Dong, Chai, Sang, Koh, & Tsai, 2015; Koh, Chai, & Tsai, 2013; Pamuk, Ergun, Cakir, Yilmaz, & Ayas, 2015). Instead of testing the basic TPACK model, these uncertainties have led to numerous extensions of the original model, which tend to complicate the fundamental issues even further (e.g., TPACK-ICT: Angeli & Valanides, 2009; TPACK-deep: Kabakci Yurdakul et al., 2012; TPACK-XL: Saad, Barbar, & Abourjeili, 2012). Empirical evidence is required to unify views on TPACK and the framework proposed by Mishra and Koehler (2006). To this end, it is essential to provide the field of TPACK research with assessment tools which are not only valid and reliable, but also easy and practical to administer. A short and economically feasible tool could provide valuable advantages for assessing TPACK in various contexts as well as in combination with other relevant constructs (e.g., beliefs, self-efficacy).

Developing valid, reliable, and economical TPACK measures
Although providing a theoretical framework is essential for guiding reflections and inquiries regarding the integration of technology into educational systems, the true value of a framework lies in its ability to be traced in and make sense of the real world (Frigg & Hartmann, 2018; Grønfeldt Winther, 2015). To this end, reliable and valid instruments are fundamental tools for assessing the consistency between theory and practice of TPACK research (Koehler, Shin, & Mishra, 2012; Niess, 2012). Furthermore, measuring and analyzing TPACK data provides concrete information which is much needed to create consensus and bridge the gaps between the numerous theoretical perspectives of TPACK (Fisser, Voogt, van Braak, & Tondeur, 2015).
To date, the literature distinguishes different types of instruments to measure TPACK, which can be divided into two broad categories, namely those of a self-report nature and those which are performance-based (Fisser et al., 2015). In the former category, we find self-report questionnaires (open-ended and/or close-ended questions) and interviews. Performance-based assessments include lesson planning, teachers' classroom performance, and performance on specific tasks. Currently, self-report methods are the most frequently used approaches for measuring TPACK (Koehler et al., 2012; Willermark, 2018). Self-report instruments provide time- and cost-effective ways to easily collect large amounts of quantitative data, which, if properly randomized, can be used for generalizations (Demetriou, Uzun Özer, & Essau, 2015). Nevertheless, there are also problems associated with this method which require consideration. Aside from the traditional limitations of self-report (e.g., social desirability bias, response bias, subjective interpretations or misinterpretations of items, response constraints due to fixed-choice questions; Demetriou et al., 2015), there are a number of issues pertaining specifically to current self-report methods for assessing TPACK (e.g., Abbitt, 2011).
A major issue consists in the fact that the lack of common definitions and the fuzzy boundaries have produced a vast heterogeneity of self-report instruments. For the most part, these fail to provide sufficient reliability and validity evidence. Koehler et al. (2012) found that approximately two-thirds of studies investigating TPACK via self-report were missing validity evidence and almost as many lacked reliability evidence. Stemming from this issue, the literature presents inconsistent empirical support for the seven-factor structure: Although a number of studies report successfully confirming the seven-factor structure of TPACK (e.g., Deng, Chai, So, Qian, & Chen, 2017; Pamuk et al., 2015; Sahin, 2011; Schmidt et al., 2009), others find these components to be highly correlated and thus distinguish different factor structures (e.g., Archambault & Barnett, 2010; Chai, Koh, Tsai, & Tan, 2011; Jang & Tsai, 2012). These findings raise serious concerns regarding the framework's construct and discriminant validity (Koehler, Mishra, Kereluik, Shin, & Graham, 2014). As a result, not all existing self-report instruments assess TPACK holistically; instead, many assess only subsets of its components (e.g., TK, PCK, and TPCK in Archambault & Barnett, 2010, or only the T-dimensions in Scherer, Tondeur, & Siddiq, 2017) or include qualitatively different components entirely (e.g., knowledge from critical reflection). At present, one of the most widely used self-report instruments is the survey developed by Schmidt et al. (2009) for assessing pre-service teachers' TPACK knowledge domains.
The distinguishing strengths of this survey are that it assesses all seven components and that it has been validated by a number of authors, either in its original or in adapted forms, reporting high reliabilities (Cronbach's alpha > .80; e.g., Chai et al., 2010; Chai, Koh, & Tsai, 2011; Chai, Koh, Tsai, & Tan, 2011; Koh et al., 2010; Valtonen et al., 2017; Valtonen, Sointu, Mäkitalo-Siegl, & Kukkonen, 2015). Nevertheless, it also shares three disadvantages common among TPACK self-report instruments. First, self-report instruments assessing TPACK holistically tend to be quite long and thus cumbersome to use in practice (e.g., 52 items, Bilici, Yamak, Kavak, & Guzey, 2013; 34 items, Chai, Koh, & Tsai, 2011; 47 items, Schmidt et al., 2009; 38 items, Valtonen et al., 2017). Second, these instruments present an unequal distribution of items per subscale, which leads to different measurement accuracies (for an overview, see Pamuk et al., 2015). Third, a number of these instruments present features making them relevant only for specific populations of teachers, thus limiting the generalizability of their use. For example, the items of Schmidt et al. (2009) refer to four different subjects, and (pre-service) teachers can only complete the questionnaire if they teach all of them (i.e., mathematics, social studies, science, and literacy). Other questionnaires refer to specific subjects (e.g., science in Jimoyiannis, 2010, or geography in Doering & Veletsianos, 2008) or technologies (e.g., "online environments" for assessing online teachers in Archambault & Barnett, 2010) and are therefore not generally applicable either. Addressing these three points in Schmidt et al.'s (2009) survey could greatly enhance its value and provide the field with a much sought-after valid, reliable, and practical tool for conducting TPACK research.
Furthermore, the discussions of integrative versus transformative view of TPACK also have implications for determining the construct validity of assessment instruments (Graham, 2011). As mentioned previously, a few studies have attempted to provide evidence for Mishra and Koehler's (2006) transformative model of TPACK (e.g., Angeli & Valanides, 2009;Jang & Chen, 2010;Jin, 2019). Although these varied in their approaches, all reported supportive findings. Nevertheless, none of the studies investigated whether their transformative models really were superior to an integrative counterpart.

The present study
Presently, the TPACK model is one of the most prominent models of teacher knowledge for the effective use of digital technologies in teaching. However, as described above, TPACK research faces several theoretical and methodological issues. In light of these issues, the aim of this study is to develop and validate a short self-report questionnaire (TPACK.xs). Addressing aspects of parsimony and practical usability, the goal for TPACK.xs is to provide a short instrument which validly and reliably measures all seven components of TPACK. Providing a shorter scale facilitates the integration of TPACK in large-scale studies and reduces the risk of respondent fatigue when answering questionnaires, while still providing sufficient levels of accuracy and reliability (Rammstedt & Beierlein, 2014; Schweizer, 2011). The second goal of this study is to use TPACK.xs to investigate the internal structure of the TPACK framework, as well as the relations between individual components. Thus, the main research questions are formulated as follows: 1. Is it possible to validly and reliably measure TPACK using a short scale? 2. How are TPACK components internally related, and do these relations reflect (a) the integrative or (b) the transformative view?
Table 1. Descriptive statistics (M, SD), corrected item discrimination (r), and reliabilities (α/ω) of the initial TPACK questionnaire (42 items).

Sample
The study was conducted with pre-service upper secondary school teachers attending a compulsory course on teaching methodology as part of a teacher training program at a Swiss university. Participation was based on informed consent. With a response rate of 54.2%, the final sample consisted of 117 respondents (63 females, 52 males, and 2 omitting their gender information) from two cohorts (fall semester 2018: n = 49; spring semester 2019: n = 68). The mean age of the sample was 31.8 years (SD = 8.3; range: 22-56). As a prerequisite for enrolling in the teacher training program, all pre-service teachers held at least a bachelor's degree (and were in the process of completing a master's degree) in the subject in which they were specializing. In total, the sample presented a range of 17 different teaching subjects. In terms of teaching experience, 70 (59.8%) of the pre-service teachers in the sample had no teaching experience, 31 (26.5%) reported having one to two years of experience, 11 (9.4%) had between three and six years, and 5 (4.3%) reported having more than six years. Given the focus of this study, pre-service teachers also reported whether they had attended the optional module on educational technology, with only 7 (6.0%) having completed this course.

Measures
The questionnaire developed in this study was largely based on the questionnaire of Schmidt et al. (2009). Given that the original content-related items from Schmidt et al. (2009) were formulated for elementary school teachers, the CK, PCK, TCK, and TPCK subscales consisted of variations of a single item addressing four teaching subjects. In order to use the instrument in our sample and enhance its generalizability, items referencing content were reformulated, similarly to Chai, Koh, and Tsai (2011), in a neutral but specific way reflecting a single "teaching subject" (e.g., "I have sufficient knowledge about my teaching subject" instead of "I have sufficient knowledge about mathematics"). Consequently, the content-related subscales needed to be supplemented with additional items. For the CK subscale, items focusing on knowledge of theories and concepts of a subject were adapted from Valtonen et al. (2017). With regard to the PCK and TCK subscales, we developed novel items based on the current literature and the definitions provided by Mishra and Koehler (2006). Finally, the TPCK items were again largely adapted from the questionnaire of Schmidt et al. (2009). In the TK scale, explicit mention was made of digital media (e.g., computer, tablet, mobile phone, Internet).
The initial questionnaire completed by the sample in this study was composed of 42 items (Table 1, with detailed comments on the original sources of the items). All items were rated on a five-point scale (1 = "strongly disagree" to 5 = "strongly agree").

Data analysis
Addressing the first research question, a reliability analysis followed by a confirmatory factor analysis (CFA) was chosen to test whether the data fit the theoretically expected structure and to construct the short scale (Schmitt, 2011). In a first step, the reliabilities of the full set of items for the seven subscales were computed. Next to Cronbach's alpha, which has been criticized because it often underestimates internal consistency (e.g., Cho & Kim, 2015; Deng & Chan, 2017), McDonald's omega was additionally calculated as an alternative reliability measure (McDonald, 1999). In a second step, a CFA was conducted to test the structure and the internal consistency of this full scale. To develop the short scale, we removed the items with the lowest item discrimination in the reliability analysis and the lowest factor loadings in the CFA until each subscale was reduced to the minimum number of items deemed necessary by the authors for capturing all relevant aspects of each knowledge component (i.e., face validity). The final model was once again tested for its reliability using Cronbach's alpha as well as McDonald's omega, and by computing a CFA in which some items within subscales were allowed to correlate where this could be theoretically justified (see Schmitt, 2011).
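The two reliability coefficients used in this step can be sketched as follows. This is a minimal Python illustration, not the R code used in the study (the authors used the "psych" package); the function and variable names are our own. Alpha is computed from the raw item scores, whereas omega requires standardized factor loadings and residual variances from a one-factor model.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(loadings: np.ndarray, residual_vars: np.ndarray) -> float:
    """McDonald's omega from standardized loadings and residual variances
    of a single-factor model: (sum of loadings)^2 / ((sum)^2 + sum of residuals)."""
    common = np.sum(loadings) ** 2
    return common / (common + np.sum(residual_vars))
```

For instance, a four-item subscale with standardized loadings of .80 each and residual variances of .36 yields ω ≈ .88; the loading values here are hypothetical, chosen only for illustration.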
Considering the second research question, structural equation modeling (SEM) was used to investigate the relations between TPACK components. Models representing the integrative view (i.e., core and first level hybrids predicting TPCK) and the transformative view (i.e., core components predicting first level hybrids which predict TPCK) were defined and compared using the likelihood ratio test. To assess the indirect effects of core components through respective first level hybrids on TPCK, mediation analysis was conducted. All analyses were conducted in the R software environment (version 3.6.0; R Core Team, 2019) using the packages "psych" (version 1.8.12; Revelle, 2018), "lavaan" (version 0.6-3; Rosseel, 2012), and "semPower" (version 1.0.0; Moshagen, 2018;Moshagen & Erdfelder, 2016).
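Because the transformative model is nested in the integrative one (it fixes the direct paths from TK, PK, and CK to TPCK to zero), the likelihood ratio test reduces to a chi-square difference test on the two model chi-squares. A minimal sketch in Python, assuming the model chi-squares and degrees of freedom have already been estimated (the study itself used R with "lavaan"); the function name is our own:

```python
from scipy.stats import chi2

def chisq_difference_test(chisq_restricted: float, df_restricted: int,
                          chisq_full: float, df_full: int):
    """Likelihood ratio (chi-square difference) test for nested SEMs.
    The restricted model has more degrees of freedom (fewer free paths)."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chisq, delta_df)  # survival function = upper-tail p
    return delta_chisq, delta_df, p_value
```

With the difference reported in the Results (Δχ² = 1.13, Δdf = 3), this yields p ≈ .77, i.e., no significant loss of fit from dropping the three direct paths.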
For CFA and SEM, various goodness-of-fit indicators have to be reported. As measures of goodness of fit, we report the chi-square (χ²), the Bentler Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Steiger-Lind Root Mean Square Error of Approximation (RMSEA), and, considering our small sample size (N < 150), the Standardized Root Mean Square Residual (SRMR) for the CFA (Hooper, Coughlan, & Mullen, 2012; Hu & Bentler, 1999; Schreiber, Nora, Stage, Barlow, & King, 2006). The cut-off criteria were fixed as follows: CFI > 0.95, TLI > 0.95, RMSEA < 0.05 with a confidence interval of 0.05-0.10, and SRMR < 0.08 (Hooper et al., 2012; Hu & Bentler, 1999; McDonald & Ho, 2002; Schreiber et al., 2006). All analyses were based on a significance level of p ≤ .05. Although the sample size seems rather small when considering older rule-of-thumb recommendations, more recent recommendations for CFA have shown that even small sample sizes can yield adequate models if the number of variables per factor is not too small and internal consistency is high (Gagné & Hancock, 2006; Wolf, Harrington, Clark, & Miller, 2013). In addition, we carried out post-hoc power analyses for each structural equation model.
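These cut-off criteria can be applied mechanically once the indices are estimated. An illustrative helper, with hypothetical names, hard-coding the cut-offs fixed in this study:

```python
def evaluate_fit(cfi: float, tli: float, rmsea: float, srmr: float) -> dict:
    """Check each fit index against the cut-off criteria used in this study
    (CFI > .95, TLI > .95, RMSEA < .05, SRMR < .08)."""
    return {
        "CFI": cfi > 0.95,
        "TLI": tli > 0.95,
        "RMSEA": rmsea < 0.05,
        "SRMR": srmr < 0.08,
    }
```

For instance, the full 42-item model reported in the Results (CFI = 0.832, TLI = 0.819, RMSEA = 0.068, SRMR = 0.084) fails all four criteria, which motivated the item reduction.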

Results
The first goal of this study was to develop a short-scale questionnaire (TPACK.xs). We started with a 42-item questionnaire, which is documented in Table 1. Although the reliabilities of the different scales are adequate, a CFA with the full set of 42 items grouped in seven factors does not yield a satisfactory fit of the model (χ²(798) = 1223.8, p < .001; TLI = 0.819; CFI = 0.832; RMSEA = 0.068, 90% CI [0.060, 0.074]; SRMR = 0.084). Therefore, for each subscale, we removed items based on item discrimination and factor loadings as well as theoretical considerations with regard to construct facets (i.e., wordings leading to item redundancy or limitations of generalizability). For example, although the item pck4 showed lower factor loadings compared to that of pck6, the authors considered the aspect of "student evaluation" (pck4) as more comprehensive than "identifying student errors" (pck6). Based on these considerations, pck4 was retained over pck6. The final model consisted of seven components, each with four items per subscale, for a total of 28 items (Table 2).
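The statistical part of this pruning step can be sketched as follows, with the face-validity overrides (such as retaining pck4 over pck6) represented by an explicit keep-set. All names and loading values here are illustrative, not the authors' actual code or data:

```python
def prune_subscale(loadings: dict, n_keep: int = 4,
                   must_keep: frozenset = frozenset()) -> list:
    """Keep the n_keep items with the highest factor loadings,
    forcing items in must_keep (theoretical overrides) into the result."""
    forced = [item for item in loadings if item in must_keep]
    ranked = sorted((i for i in loadings if i not in must_keep),
                    key=lambda i: loadings[i], reverse=True)
    return forced + ranked[: n_keep - len(forced)]
```

For example, with hypothetical loadings {"pck1": .78, "pck2": .75, "pck3": .70, "pck4": .62, "pck5": .66, "pck6": .65} and must_keep={"pck4"}, pck4 is retained even though pck5 and pck6 load higher.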
Addressing the second goal of this study, pathways between components were added according to the integrative (Model 1) and the transformative (Model 2) perspectives to investigate TPACK's internal structure. Structural equation modeling revealed satisfactory fits for both Model 1 (χ²(330) = 399. …) and Model 2. Fig. 3 depicts the relations between components of the two models. Significant relations emerged between the core components (PK, CK, TK) and their respective first-level hybrid components (PCK, TPK, TCK), as well as between the two hybrid components PCK and TPK and the central component TPCK. In contrast, neither the core components nor the hybrid component TCK were significantly related to TPCK. In comparing the models, results showed a slightly lower Akaike information criterion (AIC) value for the transformative model (Model 2; AIC = 7080.5) than for the integrative model (Model 1; AIC = 7085.0), but the difference between the two was not significant (χ²(3) = 1.13, p = .77).
Although the sample size is rather small, the post-hoc statistical power for the structural equation models is still reasonable (1 − β = 0.78 for the confirmatory factor analysis in Fig. 2; 1 − β = 0.81 for the integrative model and 1 − β = 0.79 for the transformative model in Fig. 3).

Discussion and conclusion
Building on the work of Schmidt et al. (2009), Chai, Koh, and Tsai (2011), and Valtonen et al. (2017), the first goal of this study was to develop a short questionnaire to measure TPACK economically and practically. The results of the reliability analysis, the CFA, and also the SEM models show that it is possible to validly assess the seven knowledge components of TPACK using TPACK.xs with four items per subscale. Each of the seven subscales emerged as reliable, with Cronbach's alphas between .77 and .91 and McDonald's omegas between .79 and .92. The CFA showed that each subscale can be sufficiently differentiated. However, it also revealed significant intercorrelations between subscales, with particularly high values between PK and PCK, PCK and TPCK, as well as TPK and TPCK. These patterns correspond to the results of other studies (e.g., Archambault & Crippen, 2009; Schmidt et al., 2009; Valtonen et al., 2019) and can also be explained theoretically (see Koehler et al., 2014). Overall, TPACK.xs is a valid and reliable instrument that can measure teacher knowledge parsimoniously. The short scale is intended to facilitate the integration of TPACK measures in studies where questionnaire space is limited. Additionally, TPACK.xs focuses on one school subject while still being a generic scale with a wording that is applicable to different subjects. This is intended to help build up accumulated research evidence across subjects and school levels.
With regard to the second goal of investigating the internal relationships between the various components of TPACK, different conclusions can be drawn. On a holistic structural level, no significant difference emerged between the integrative and the transformative models. However, despite the integrative model defining TPCK as the intersection of both core and first-level hybrid components, relations with core components were dissolved as they did not reach significance. This resulted in the integrative model naturally mirroring the structure of the transformative model. Thus, the findings here are in line with Mishra and Koehler's (2006) original definition of TPACK, as well as with the current literature supporting the transformative view (Angeli & Valanides, 2009; Jang & Chen, 2010; Jin, 2019). This means that, measured in its current form, TPCK is primarily influenced by the hybrid components TPK and PCK. In particular, TPK and TPCK seem to be closely related. However, deviating from the theoretical model, we did not find a significant influence of TCK on TPCK. When comparing our results with previous structural equation models of TPACK, some differences become apparent. Pamuk et al. (2015) found TPK, TCK, and PCK to be positive predictors of TPCK. Dong et al. (2015) and Koh et al. (2013) found TPK and TCK to be positive predictors of TPCK while PCK was not. Celik et al. (2014) found PCK and TCK to have a positive influence on TPCK but not TPK. In our study, it was TCK that did not have a substantial influence on TPCK while TPK and PCK did. These divergent findings raise the question of how these differences can be explained. One reason might be that the TCK items were reformulated in the present study to be better differentiated from TPCK. Another reason might be that the interplay of TPACK knowledge components is likely to differ across contexts (e.g., subject, school level, educational culture).
On the basis of these results, some conclusions can be drawn for the training and professional development of teachers. When TPACK is considered as transformative, growth in a component such as TK or PK does not automatically result in TPCK growth (see also Angeli, Valanides, & Christodoulou, 2016). This means that teacher development activities that focus on TK will not directly translate into TPCK. Instead, the transfer from each knowledge domain to another must be addressed in a deliberate way. Teacher training must therefore provide different opportunities to learn and exercise the different components of knowledge and, more importantly, their combinations. This supports findings from the literature emphasizing the crucial role of forms of "high-quality technology experience" during teacher training for developing TPCK (e.g., Foulger, Graziano, Schmidt-Crawford, & Slykhuis, 2017; Pamuk, 2012; Wang, Schmidt-Crawford, & Jin, 2018).

Limitations and future research
The present study has some limitations that need to be addressed in future research. The limitations relate mainly to (1) the sample, (2) the survey instrument, and (3) the cross-sectional design of this study.
(1) The sample comprised only pre-service upper secondary school teachers at the beginning of their teacher training. Therefore, with the exception of content knowledge, these teachers naturally have little knowledge in the various components of TPACK (see also Koehler et al., 2014). To extend the general validity of the questionnaire, further studies are required to test the instrument as well as its individual subscales in larger samples with higher statistical power, across cultures, and in different teacher populations. In addition, such an extension of the sample could also be crucial for shedding more light on the situated nature of TPACK.
(2) Further limitations can be stated with regard to the survey instrument. As with the majority of TPACK studies, the realm of context knowledge was not considered here. Although context has been conceptualized as a relevant body of knowledge (e.g., Mishra, 2019; Rosenberg & Koehler, 2015; see also Fig. 1), only a few studies have made empirical efforts to investigate it as an additional component of teachers' knowledge (e.g., Jang & Tsai, 2012, with context knowledge as CxK). Future research should therefore also take context and its various levels (e.g., micro, meso, macro; Porras-Hernández & Salinas-Amescua, 2013) into consideration so as to shed more light on the structure and practical significance of this domain of knowledge. In addition to the lack of references to contextual factors, another consideration regarding the content of the instrument is that assessing TPACK at the level of the teaching subject may be too broad an approach, given that knowledge may vary across topics. Therefore, one approach for future studies should be to investigate assessing TPACK more specifically (i.e., in relation to a specific teaching topic).
In addition, this instrument is based exclusively on self-reported knowledge (also see Chapter 1.2), and it is unclear how reliably pre-service teachers are able to report on their own knowledge (e.g., Drummond & Sweeney, 2017). Future research should triangulate self-reports with other measures of TPACK, such as lesson observations or performance assessments (e.g., Koehler et al., 2012), to overcome potential biases. This could also provide important evidence of the convergent validity of self-reported TPACK (see also Kopcha, Ottenbreit-Leftwich, Jung, & Baser, 2014). Furthermore, the complex relationship between TPACK and self-efficacy, self-regulation, beliefs, or attitudes concerning educational technology should also be examined in more detail, which is important for the discriminant validity of these relevant constructs. For example, analyses by Krauskopf and Forssell (2018) indicate that correlations between self-reported TPACK and other variables (e.g., computer use) are mediated by beliefs. Understanding how these aspects interact could be crucial for effectively improving technology integration in teacher training as well as in classroom practice.
(3) Finally, although our study supports a transformative view of TPACK, the effects are only correlational. Longitudinal studies will be necessary to investigate the exact interplay of TPACK components and the reciprocal effects of their development over time.
Even when considering these limitations, the development of TPACK.xs can be regarded as a step towards a simplified and at the same time valid measurement of TPACK in survey studies, which also addresses the transformative nature of the combination of knowledge domains within this model. As an efficient and short instrument, TPACK.xs might be a particularly useful tool for extending research as it facilitates the integration of TPACK assessments into broader studies with multiple measures, as well as across various teacher populations.