The Prediction of Students’ Academic Performance With Fluid Intelligence in Giving Special Consideration to the Contribution of Learning

The present study provides a new account of how fluid intelligence influences academic performance. In this account, a complex learning component of fluid intelligence tests is proposed to play a major role in predicting academic performance. A sample of 2,277 secondary school students completed two reasoning tests assumed to represent fluid intelligence, as well as standardized math and verbal tests assessing academic performance. The fluid intelligence data were decomposed into a learning component associated with the position effect of intelligence items and a constant component independent of the position effect. Results showed that the learning component contributed significantly more to the prediction of math and verbal performance than the constant component did. The link from the learning component to math performance was especially strong. These results indicated that fluid intelligence, which has so far been considered homogeneous, could be decomposed in such a way that the resulting components showed different properties and contributed differently to the prediction of academic performance. Furthermore, the results were in line with the expectation that learning was a predictor of performance in school.


Introduction
Numerous studies have demonstrated that intelligence is a main predictor of academic performance (e.g., Deary, Strand, Smith, & Fernandes, 2007; Watkins, Lei, & Canivez, 2007). Fluid intelligence, which has been found to be especially closely related to general intelligence (Kvist & Gustafsson, 2008; McArdle & Woodcock, 1998), has frequently played a leading role in studies on the relationship with academic performance. Although this relationship has been regarded as a well-established fact, the source of the relationship still seems to be in need of a convincing account. Cattell's (1963, 1987) investment hypothesis, which states that individuals invest their fluid intelligence to acquire strategies and knowledge, can be considered an attempt to provide such an account. More recently, research has shifted its focus to the underlying cognitive processes. Attempts have been made to understand why and how complex cognitive processes influence students' academic performance (e.g., Ferrer & McArdle, 2004; Krumm, Ziegler, & Buehner, 2008). This paper adds another approach to this line of research: fluid intelligence is decomposed into components showing different cognitive properties and contributing differently to the prediction of academic performance.

The position effect observed in intelligence tests
The new approach originates from research on the position effect. This effect has frequently been observed in items of intelligence tests. It denotes the dependency of responses to items on the position of the items within a test (Schweizer, Troche, & Rammsayer, 2011). Since intelligence tests are composed of a number of items showing a high degree of similarity, there is a high likelihood of observing the position effect among the items within a test (e.g., Kubinger, Formann, & Farkas, 1991; Schweizer et al., 2011; Schweizer, Schreiner, & Gold, 2009). Further, a few empirical studies have suggested that learning serves as the source of the position effect in intelligence items (Embretson, 1991; Ren, Wang, Altmeyer, & Schweizer, 2014; Verguts & De Boeck, 2000). The position effect thus provides the starting point for investigating whether the assumed learning processes underlying it could account for the relationship between fluid intelligence and academic performance.
Research on the position effect has a long history, starting in the 1950s (Campbell & Mohr, 1950). The work by Knowles (1988), who observed that in personality scales item reliability increases as a function of the item's serial position, was especially enlightening. The position-related change was also found in ability tests such as Raven's Standard Progressive Matrices (Kubinger et al., 1991). The results of these studies indicate that responses to the items become increasingly consistent as testing continues. The more recent focus of this line of research is to represent the position effect observed in intelligence items by means of advanced confirmatory factor analysis (CFA) models (e.g., Ren, Goldhammer, Moosbrugger, & Schweizer, 2012; Schweizer et al., 2009).

Complex learning as the source of the position effect accounts for academic performance
There are reasonable grounds for suggesting learning as the source of the position effect. First, the position effect appears to be associated with the similarity among the items of a test, and this similarity provides opportunities for test-takers to detect regularities and extrapolate them from one item to the next. Since items of many fluid intelligence tests are dominated by only a few underlying rules (Carpenter, Just, & Shell, 1990), it is quite likely that test-takers are able to infer these rules and improve their ability to solve the items as testing continues. Second, previous work conducted in the framework of item response theory (IRT) suggested that such learning occurs while completing items of an intelligence test even without direct external feedback (e.g., Fischer & Formann, 1982; Verguts & De Boeck, 2000).
The nature of the learning associated with the position effect of intelligence items was made explicit by a more recent study considering both associative learning and complex learning (Ren et al., 2014). While associative learning represents an individual's ability to form and maintain new associations between knowledge items stored in memory, complex learning mainly reflects an individual's ability to acquire and develop a series of goal-directed strategies based on the use of abstract rules (cf. Anderson, Fincham, & Douglass, 1997). The study by Ren et al. (2014) identified complex learning as the main source of the position effect. This finding was especially revealing with respect to the prediction of academic outcomes on the basis of fluid intelligence. Fluid intelligence has been considered a causal factor in learning activities, especially in novel situations (Kvist & Gustafsson, 2008). This argument has been bolstered by empirical studies demonstrating a substantial relationship between learning and fluid intelligence when the learning tasks are new and complex (e.g., Tamez, Myerson, & Hale, 2008). Additionally, the investment hypothesis and related empirical research suggest that fluid intelligence supports the acquisition of skills and knowledge across a wide spectrum of domains, including arithmetic skills and vocabulary (Cattell, 1987; Ferrer & McArdle, 2004). Therefore, it appears reasonable to hypothesize complex learning as an underlying source that gives rise to the association between fluid intelligence and knowledge acquisition.

Method

Participants
The data for the present study came from a large research project conducted across China to assess children's and adolescents' cognitive, academic, and social development. The sample used for this paper comprised students enrolled in 10 junior secondary schools located in a medium-sized city in south China. There were 2,277 students (1,176 males and 1,101 females) in the second year of junior secondary school, with an average age of 13.53 years (SD = 0.28). Data were collected at the beginning of the academic year. Since the reasoning tests and the academic tests were administered separately (within one week), a total of 17 participants had missing scores on either the reasoning tests or the academic tests. The loss was very small because data collection was conducted during normal teaching time, and absence from school is rare in China. Data from those participants were excluded from the analysis.

Measures
The measures included two analogical reasoning tests (figural and numerical versions) to assess fluid intelligence. Academic performance was assessed by standardized math and verbal tests. All these tests came from the test reservoir developed for the national research project and had gone through rigorous construction processes (Dong & Lin, 2011).

Reasoning tests
Fluid intelligence was assessed using analogy tasks with different contents. The figural reasoning (FR) test consisted of 19 items, each presented in the form of analogy patterns composed of geometric figures (see Figure 1 for an example). To complete each item, participants had to infer the rule underlying the first pattern and apply it to complete the second pattern by choosing the correct figure out of four alternatives. The 19 items of this test were presented in ascending order of difficulty. The numerical reasoning (NR) test was the numerical equivalent of the FR test. The elements of its patterns were simple numbers composed according to underlying rules. This test consisted of 22 items, also presented in ascending order of difficulty.
Participants had 8 min to complete each test. The time limit was chosen on the basis of several pilot testing sessions to make sure that participants had sufficient time to attempt each item of each test. The response to each item was recorded as binary data. According to the technical report of these tests (Dong & Lin, 2011), internal consistency indexed by Cronbach's α was computed based on a national norm of 12,000 junior middle school students. The internal consistencies were .77 for the FR test and .86 for the NR test. Criterion validity of the reasoning tests was established on the basis of 120 students. The Matrix Reasoning subtest of the Wechsler Intelligence Scale for Children (WISC-IV) served as the external criterion. Correlations of the FR and NR tests with WISC-IV Matrix Reasoning were .66 (p < .01) and .64 (p < .01), respectively.
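Cronbach's α, the internal-consistency index reported above, is computed from the item variances and the variance of the total score. A minimal sketch of that computation follows; the binary response matrix is made up for illustration and is not data from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the person total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative 0/1 responses for six hypothetical test-takers on four items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
alpha = cronbach_alpha(scores)
```

With binary items, as in the FR and NR tests, the same formula applies directly since the 0/1 responses are just treated as numeric scores.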

Academic tests
The math and verbal tests were constructed strictly according to

Statistical analysis
Individual items provided the basis for analyzing the data of the reasoning tests. The research approach selected for decomposing and representing the constant and position components of the reasoning tests was a set of special CFA models addressed as fixed-links models (cf. Schweizer, 2008). A characteristic of fixed-links models is that factor loadings are constrained according to theory-based expectations so that the variances of the manifest variables are decomposed into independent components. Independence of the latent components means that the latent variables are prevented from accounting for the same variances and covariances. If the latent variables were allowed to correlate with each other, this would very likely lead to substantial correlations of both latent variables with the same criterion measures. In that case, it would become virtually impossible to demonstrate whether the increasing component representing the position effect correlates to a higher degree with the criterion than the other latent variable does. The limits proposed by Kline (2005) were used to evaluate model-data fit. In addition, competing non-nested models were compared on the basis of the Akaike Information Criterion (AIC). Lower AIC values reflect better model-data fit, and the model with the lowest AIC value is preferred.
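The constraint pattern just described can be made concrete by writing out the fixed loading vectors themselves: the constant component receives equal loadings at every item position, whereas a position component receives loadings that increase with position, either linearly or quadratically. A minimal sketch, assuming a 19-item test like the FR measure; the unit-length scaling is an illustrative assumption, not necessarily the scaling used in the study:

```python
import numpy as np

n_items = 19                        # e.g., the figural reasoning test
pos = np.arange(1.0, n_items + 1)   # item positions 1..19

def unit(v: np.ndarray) -> np.ndarray:
    """Scale a loading vector to unit length so component variances stay comparable."""
    return v / np.linalg.norm(v)

# Fixed (not freely estimated) loading vectors for the latent components:
constant_loadings = unit(np.ones(n_items))   # constant component: equal weight everywhere
linear_loadings = unit(pos)                  # position effect growing linearly
quadratic_loadings = unit(pos ** 2)          # position effect growing quadratically
```

Because the loadings on the position components increase monotonically with item position while the constant component is flat, the two sources of variance are kept apart by construction rather than estimated freely.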

Results
The item-based scores of the reasoning tests are presented in Table 1.
Descriptive results for the two reasoning tests, the math and verbal tests and their respective dimensions, as well as the intercorrelations among the variables are presented in Table 2. All correlations reached significance at the .01 level (two-tailed).

The representation of the components of fluid intelligence
As described in the Method section, three measurement models were examined for each reasoning test. Table 3 presents the fit results of the models. A comparison of the constant model with the other two models for each reasoning test clearly indicated that consideration of the position effect reduced the χ² and AIC values considerably. Although the CFI outcomes for the position-related models were not very favorable, they could be considered acceptable, since the large sample size affected the statistic on which the CFI is based. Table 3 also indicates that the quadratic models showed better fits than the linear models, as can be seen from the clearly lower AIC values of the quadratic models. These fit results indicate an advantage of representing the position effect according to a quadratic function. Therefore, the two quadratic models were selected for further analyses. The scaled variances
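The AIC-based comparison follows the selection rule stated in the Method section: compute AIC for each competing model and prefer the smallest. A sketch using one common SEM convention for AIC (χ² plus twice the number of free parameters); the fit values below are hypothetical placeholders, not the figures from Table 3:

```python
def sem_aic(chi_square: float, n_free_params: int) -> float:
    # One common SEM convention: AIC = chi-square + 2 * (number of free parameters)
    return chi_square + 2 * n_free_params

# Hypothetical fit results for the three competing measurement models
aics = {
    "constant":  sem_aic(chi_square=480.0, n_free_params=20),
    "linear":    sem_aic(chi_square=430.0, n_free_params=21),
    "quadratic": sem_aic(chi_square=410.0, n_free_params=21),
}

# Lower AIC is better; non-nested models can be compared directly this way.
preferred = min(aics, key=aics.get)
```

Note that the penalty term is what allows non-nested models of different complexity to be compared on a common footing, which a raw χ² difference would not permit.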

Accounting for academic performance by components of fluid intelligence
Figure: The latent structure of the second-order CFA model with the constant and learning components of fluid intelligence as higher-order factors, which were derived from the four components of the reasoning tests. Completely standardized factor loadings and completely standardized error variances of the latent variables are also presented (** p < .01). The correlations between the constant and the position components were fixed to zero.

These results suggest that the reason why fluid intelligence predicts academic outcomes is that highly intelligent individuals are especially efficient in learning new skills in novel and complex situations, which seems to lead to a high potential for achieving success in academic activities.
The present findings were in accordance with, and updated, two lines of previous research. One line, conducted in the framework of psychometric studies, has found a positive relationship between fluid intelligence and the rate of learning, or learning in real-life situations (e.g., Klauer & Phye, 2008; Tamez et al., 2008). This line of research suggests that a fundamental aspect of fluid intelligence is the ability to learn in novel situations, as demonstrated in the current study by the fact that a learning component could be represented and derived from measures of fluid intelligence. Furthermore, the findings of the current study updated previous work conducted to test the investment hypothesis, which provides insight into the learning function of fluid intelligence for acquiring strategies and knowledge (Ferrer & McArdle, 2004). Although direct evidence supporting Cattell's (1963, 1987) investment hypothesis was limited by the cross-sectional nature of this study, the result that the learning component of fluid intelligence had a substantial correlation with math and verbal performance underscores the importance of the learning function implicated in fluid intelligence. It is necessary to note that, since the learning and constant components of fluid intelligence were not orthogonal, it is quite likely that these two components accounted for an overlapping part of the variance of math or verbal performance. In spite of that, it was clear from the current results that the learning component played a more important part than the constant component of fluid intelligence in predicting academic performance. In addition, although these components of fluid intelligence accounted for a large part of the variance of academic performance, other factors such as conscientiousness and motivation should also play a crucial role in predicting students' academic achievement (e.g., Mega, Ronconi, & De Beni, 2014).
Lastly, concerning the fit statistics of the measurement models, although both RMSEAs and SRMRs were acceptable, the CFIs were not at or above .90. This finding may partly be due to the large number of variables within each model (cf. Kenny & McCoach, 2003).
To conclude, the current study decomposed measurements obtained by two reasoning measures into two components and showed that these components related differently to two types of academic achievement. The results indicate that reasoning data, which have been considered homogeneous, can be decomposed in such a way that the resulting components show different properties. Furthermore, the results are in line with the expectation that learning is a predictor of performance in school. To be more specific, the position component, which mainly reflects complex learning, accounted for a larger part of the variance of academic performance than the constant component of fluid intelligence did. These findings provide evidence of how tests of fluid intelligence predict academic performance and justify the use of intelligence tests as educational tools. Furthermore, the finding that the learning component of fluid intelligence predicts a substantial part of the variance of academic achievement provides empirical evidence supporting Cattell's (1963, 1987) investment hypothesis and also provides insight into the learning function of fluid intelligence for acquiring strategies and knowledge in various domains.

Figure: The prediction model including the constant and learning components of fluid intelligence as predictor variables and math and verbal achievements as predicted variables. All completely standardized path coefficients reached the level of significance (** p < .01). The path coefficient from the learning component to each predicted variable was statistically larger than the one from the constant component.