Invariance across Sex, School, and Educational Level of the Learning Approaches Scale (EABAP)


Abstract

The Learning Approaches Scale (EABAP) has shown evidence of structural and external validity for assessing the deep and surface approaches of elementary and high school students. However, this evidence is supported only by participants from a single school. The present study evaluates the generality of the EABAP by verifying, through multigroup confirmatory factor analysis, whether the scale is invariant across the variables sex, type of school, and educational level. The sample consisted of 2,148 students from elementary school II, high school, and higher education, enrolled in public and private schools. The results indicate configural, metric, and partial scalar invariance for the sex variable; configural, partial metric, and partial scalar invariance for the educational level variable; and configural, partial metric, and scalar invariance for the type of school variable. We conclude that it is possible to compare the means of the latent variables measured by the EABAP for the groups analyzed in this sample.

Keywords: Validity; Psychological testing; Educational psychology


Predictive studies on academic performance indicate that many variables are associated with educational outcomes. The most important predictors are intelligence (Gomes, 2010b, 2011b, 2012; Gomes & Borges, 2007; Gomes et al., 2022; Gomes & Golino, 2012; Ohtani & Hisasaka, 2018), metacognition (Gomes, Golino et al., 2014; Ohtani & Hisasaka, 2018), and socioeconomic variables (Gomes, Amantes et al., 2020; Gomes, Fleith et al., 2020; Gomes & Jelihovschi, 2019; Gomes, Lemos et al., 2020; Pazeto et al., 2019; Selvitopu & Kaya, 2021). There are several variables of secondary importance, such as the students’ approaches to learning (Gomes, 2010c, 2011a, 2013; Takase & Yoshida, 2021), motivation for learning (Gomes & Gjikuria, 2018; Nauzeer & Jaunky, 2021; Nunes et al., 2022), their beliefs about teaching and learning (Gomes & Borges, 2008), learning styles (Gomes, Marques et al., 2014), and self-referent academic cognitions (Costa et al., 2017; Nunes et al., 2022).

These predictors are sustained by an implicit or explicit assumption that the active interaction of the subject with the objects of knowledge is essential for better learning and achievement (Cardoso et al., 2019; Pereira et al., 2019), which is in agreement with the constructivist approach (Gomes, 2007, 2010a; Gomes & Borges, 2009; Pires & Gomes, 2018) and with neuropsychology (Dias et al., 2015).

Deep approach and surface approach are two variables of educational psychology that predict school performance. Deep approach is defined in theory as the combination of strategies and motivations that mobilize the active interaction of the subject with objects of knowledge. In turn, surface approach is defined as the combination of strategies and motivations that mobilize the subject’s passive interaction with objects of knowledge (Gomes, 2013).

The theory of learning approaches holds that the active interaction of the subject with the objects of knowledge, represented by the deep approach, is the fundamental condition for better quality learning, because active interaction produces learning through conceptual understanding of what is learned, together with an understanding of how the learned content relates to prior knowledge. Meaningful learning is a consequence of this kind of interaction. Furthermore, the theory argues that active interaction is driven by intrinsic motives, that is, the subject interacts with the objects of knowledge because the interaction itself produces benefits to the subject. The theory also proposes the existence of an interaction that is opposite to active interaction, represented by what it calls the surface approach. It holds that this interaction generates poor quality learning because it results in an absence of conceptual understanding and in fragmented knowledge, given the lack of relevant relationships between what is learned and prior knowledge. Moreover, the theory also posits that this passive interaction is guided by motives extrinsic to the interaction with the objects of knowledge, mainly motives that generate disengagement of the subject (Gomes, Araújo et al., 2020). In short, the theory of learning approaches has a conceptual framework that helps explain how people learn and how teaching can foster better learning (Rodrigues & Gomes, 2020). Through this framework, it is possible to integrate, at the theoretical level, the aforementioned predictors in terms of their role as mobilizers of the subject’s active interaction with the objects of knowledge.

The theory of learning approaches contributes to the field of education and psychology by investigating how school activities and assessments impact the way students interact with objects of knowledge and, consequently, learn more or less effectively. The theory also contributes to the diagnosis of learning problems, as well as to the planning of pedagogical interventions. The prime assumption is that the deep approach, as opposed to the surface approach, provides better quality learning (Rodrigues & Gomes, 2020).

One contribution of learning approaches theory is an understanding of how the approaches are used in different groups and contexts. Research in this area has examined differences in the use of approaches by male and female students of different nationalities, educational levels, and courses (Asikainen & Gijbels, 2017; Chiesi et al., 2016; Freiberg-Hoffmann & Romero-Medina, 2020; Immekus & Imbrie, 2009). This kind of information about differences between groups is relevant for the diagnosis of student learning and for pedagogical planning. However, the field of psychometrics cautions that comparisons between groups are valid only if they are supported by invariance analysis, which provides evidence that the scores used for the comparison are valid for the different groups (Putnick & Bornstein, 2016).

There are different levels at which the adequacy of the scores is analyzed. The first involves establishing whether the constructs of interest are measured in both groups. For example, when comparing the deep and surface approaches of men and women, there must be evidence that the measurement instrument is able to measure these two approaches in both sexes. This analysis is called configural invariance. Two other levels are required for the comparison to be valid. One is the analysis of whether the items have the same importance for the composition of the scores; in more technical language, whether the factor loadings of the constructs on the items are the same across groups. This analysis, called metric invariance, is mandatory, because groups cannot be compared if the items carry different weights. For example, suppose that, when comparing the deep approach of men and women, all items have the same weight of 1, except for one item whose weight is 1 for men and 10 for women. In this case, the instrument is biased in favor of women, so that many differences in favor of women would be spurious. Finally, it is necessary to assess whether the items show equal difficulty for the groups. This is examined in the so-called scalar invariance analysis, which inspects the distances between the values representing an item’s score with respect to the latent variable. For instance, items in a self-report instrument might describe learning approach behaviors and have three response options indicating how much the respondent believes the behavior is present in his or her academic life: “Not at all” scored 1, “More or Less” scored 2, and “Totally” scored 3. On this raw scale, the distance between 1 and 2 is the same as the distance between 2 and 3. However, the item scores and their distances must be mapped onto the scale of the estimated latent variable itself, which in our example is either the deep or the surface approach. On that scale, an item may have a distance of 3 points between scores 1 and 2, and a distance of 4 points between scores 2 and 3. These distances need to be equal across the groups being compared. If, for this item, the distance between scores 1 and 2 on the latent variable is 3 points for men and 10 points for women, it is much easier for men than for women to move from option 1 to option 2, which biases the measure. This is why scalar invariance analysis is mandatory: it avoids this kind of bias and allows a proper comparison between the groups (Putnick & Bornstein, 2016).
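In formal terms, these three levels can be summarized with a common probit formulation for ordinal items (a standard notation in the invariance literature, not one taken from the EABAP studies themselves). The probability that person i endorses category c or higher of item j is

P(Y_ij ≥ c) = Φ(λ_j η_i - τ_jc),

where η_i is the person’s latent approach score, λ_j is the item’s factor loading, and τ_jc is the item’s c-th threshold. Configural invariance requires the same pattern of loadings in all groups, metric invariance requires the λ_j to be equal across groups, and scalar invariance additionally requires the τ_jc to be equal; only then can differences in the latent means be interpreted without item-level bias.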

The literature on learning approaches contains few studies on invariance. Through a search of the Capes journals portal, which gathers 455 databases with diverse contents, using the keywords “Approaches to learning”, “Students’ Approaches to learning”, “Learning approaches”, “Studying approach”, “Deep approach”, and “Surface approach”, combined with the keywords “Measurement Invariance”, “Measurement Equivalence”, “Multigroup Confirmatory Factor Analysis”, “Factorial Invariance”, “Invariance”, “CFA”, and “SEM”, we found only 19 articles on invariance, 9 of which appeared in duplicate. After screening these articles, we found that only 3 studies performed invariance analysis on some measurement instrument of the approaches. In addition to this systematic search, potential studies were also sought through the references of the selected articles themselves (snowball technique) and Google Scholar, which added only 1 more article. It is worth noting that, after this extensive search, only 4 studies on invariance were found. One of them investigated the invariance of approaches across sex (Duff, 2002), two across countries (Chiesi et al., 2016; Immekus & Imbrie, 2009), and one across undergraduate courses (Freiberg-Hoffmann & Romero-Medina, 2020).

The proper comparison, through invariance analysis, of approaches as a function of educational level, type of school, and sex is important because it allows arguments of the approaches theory itself to be tested. For example, the theory assumes that as students advance through educational levels, they increase the use of the deep approach and decrease the use of the surface approach (Asikainen & Gijbels, 2017). Learning approaches theory also argues that approaches are context dependent (Asikainen & Gijbels, 2017), so that their use can be expected to differ across types of school, such as public and private, and across sex. Comparisons between male and female students have been made since the earliest studies on learning approaches. Duff’s (2002) study stands out as the only attempt to perform an invariance analysis to support such a comparison. However, the methodology of that study is inadequate, because Duff (2002) did not perform an invariance analysis of the items of the instrument used; his invariance analysis involved sums of groups of items (composite scores). Duff’s (2002) choice is understandable, because at the time confirmatory factor analyses were not very accessible and popular software only offered estimators that required the observable variables to be continuous. The technical limitations of the time probably prevented a more adequate analysis. In summary, to the best of our knowledge, there is no study on approaches that has satisfactorily tested invariance across the variables educational level, type of school, and sex.

In Brazil, there are few initiatives aimed at creating an agenda of studies on learning approaches (Fontes & Duarte, 2019). To date, only one self-report instrument developed in Brazil to measure the approaches is known: the Learning Approach Scale (EABAP). The EABAP has presented evidence of structural and external validity in assessing the deep and surface approaches of Brazilian students in elementary and high school (Gomes, 2010c, 2011a, 2013; Gomes et al., 2011; Gomes & Golino, 2012b). In addition, the EABAP has influenced the construction of several other tests, such as the Students’ Learning Approach Test - Identification of Thinking Contained in Texts (SLAT-Thinking) (Gomes, Linhares et al., 2021; Gomes, Quadros et al., 2020), the SLAT-Thinking Second Version (Gomes & Nascimento, 2021a, 2021b), and the Approach-in-Process Test (Gomes & Rodrigues, 2021).

Gomes et al. (2011) tested a model in which the EABAP measures the deep and surface approaches, with the nine target items of the deep approach loading only on that factor and, similarly, the eight target items of the surface approach loading only on the latter factor. Gomes et al. (2011) present evidence that this model adequately represents the factorial structure of the EABAP. Furthermore, they found that the deep and surface approaches correlate negatively.

The objective of the present study is to evaluate the invariance of the EABAP across the variables sex, type of school, and educational level, using a relatively large and diverse sample. The deep and surface approach scores of male and female students (sex), from private and public schools (type of school), attending elementary school II, high school, and higher education (educational level) are compared.

Method

Participants

The sample of this study comes from three studies: 709 participants from Gomes (2010c), 791 from Costa (2014), and 648 from Gomes, Quadros et al. (2020). The sample comprised 2,148 students: 415 from elementary school II (19.32%), 1,085 from high school (50.51%), and 648 from higher education (30.17%). Private education institutions represented 71.55% of the sample, comprising 415 elementary school II students (27%), 735 high school students (47.82%), and 387 higher education students (25.18%). Of the public-school students, 349 (57.12%) are from state schools and 262 (42.88%) from federal schools. Females represented 52.14% of the sample, and participants’ ages ranged from 8 to 68 years (M = 17.67, SD = 5.68).

Instrument

Learning Approach Scale (EABAP)

The EABAP was created by C. M. A. Gomes. It is a self-report scale composed of 17 items, each consisting of a statement expressing a student behavior that represents either the surface or the deep approach (Table 1). The respondent assesses the frequency with which the behavior described in each item occurs in his or her academic life, answering on a Likert scale that ranges from (1) not at all to (5) completely. The EABAP measures deep and surface approaches to learning: eight items measure the surface approach and nine items measure the deep approach. In the surface approach domain, the EABAP yields a minimum raw score of 8 points and a maximum of 40 points; in the deep approach domain, the minimum raw score is 9 points and the maximum is 45 points. There is evidence of internal and external validity that the EABAP measures surface and deep approaches and predicts academic performance in elementary school II and high school (Gomes, 2010c, 2011a, 2013; Gomes et al., 2011; Gomes & Golino, 2012b).

Table 1
Learning Approach Scale (EABAP) Items

Procedures

This study used data from Gomes (2010c), Costa (2014), and Gomes, Quadros et al. (2020). The EABAP was applied collectively, with no time limit, and always by properly trained psychologists or psychology students.

The study by Gomes (2010c) was approved by the ethics committee of the Federal University of Minas Gerais (ETIC 456/07), and its data were collected in early 2008 in a private school in Belo Horizonte. The study by Costa (2014) was approved by the ethics committee of the Federal University of Minas Gerais (364.253), with data collected in 2013 in six schools: three public state schools and one private school in Belo Horizonte, and one public federal school and one private school in Viçosa. The study by Gomes, Quadros et al. (2020) was approved by the ethics committees of the Federal University of Minas Gerais (n. 3,377,700) and the Santa Catarina State University (n. 3,293,418), with data collected in 2019 in private and public colleges of the state and federal networks located in Belo Horizonte, Divinópolis, and Itaúna, in Minas Gerais, as well as in Joinville, Santa Catarina, Brazil.

Data analysis

The Gomes et al. (2011) model of correlated approaches will be evaluated in the complete sample of this study using item-level confirmatory factor analysis (CFA) with the Weighted Least Squares Mean and Variance Adjusted (WLSMV) estimator, because the EABAP items are ordinal, with raw scores from 1 to 5. The model of correlated approaches defines that the latent variable of the deep approach loads items 3, 5, 7, 9, 13, 14, 15, 16, and 17, while the latent variable of the surface approach loads items 1, 2, 4, 6, 8, 10, 11, and 12 of the EABAP. These two latent variables are correlated.
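As an illustration of this specification, the sketch below shows how the model of correlated approaches can be written in lavaan, the package used in this study. The data frame eabap_data and the item names i1 to i17 are hypothetical placeholders for the actual data set; the call itself follows lavaan’s documented syntax for ordinal indicators with the WLSMV estimator.

# Minimal sketch of the correlated two-factor model in lavaan (R).
# "eabap_data" and the column names i1...i17 are hypothetical placeholders.
library(lavaan)

eabap_model <- '
  deep    =~ i3 + i5 + i7 + i9 + i13 + i14 + i15 + i16 + i17
  surface =~ i1 + i2 + i4 + i6 + i8 + i10 + i11 + i12
  deep ~~ surface   # the two approaches are allowed to correlate
'

fit_cfa <- cfa(eabap_model,
               data      = eabap_data,
               ordered   = paste0("i", 1:17),  # items treated as ordinal
               estimator = "WLSMV")

fitMeasures(fit_cfa, c("cfi.scaled", "rmsea.scaled"))

With WLSMV, the scaled (robust) versions of the CFI and RMSEA are the ones usually inspected against the cutoffs described next.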

The goodness of fit of the model is verified using the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA); the first must be ≥ .90 and the second < .10 for the model not to be rejected. For a good data fit, the model needs to present CFI ≥ .95 and RMSEA < .06 (Cangur & Ercan, 2015; Kline, 2016). Despite their relevance, the literature points out that the cutoff points of these indexes are quite arbitrary and not very precise; readers interested in this issue can refer to Xia and Yang (2019).

The generality of the EABAP will be evaluated through an analysis of its configural, weak (metric), and strong (scalar) invariance with respect to sex (male and female), educational level (elementary school II, high school, and higher education), and type of school (private and public) in this sample.

In invariance analysis, the configural invariance model is taken as the baseline, and the more restrictive models (metric and scalar invariance) are compared with it. The configural model is the simplest level of invariance, as it only assumes that the scores of people in the groups pertain to the same latent variables (Putnick & Bornstein, 2016). The configural model must have CFI ≥ .90 and RMSEA < .10 in order not to be rejected. If it is rejected, the scale is not invariant even at the simplest level, and the analysis stops there.

In this study, the configural model to be analyzed is the model of correlated approaches, previously tested in the complete sample. This model defines that the deep and surface approaches, as well as the measurement errors (the part of the item variance not explained by the two approaches), are the latent variables that explain the variation in people’s responses to the EABAP items. If the configural model is not rejected, it is compared to the metric invariance model. This model is more restricted than the configural model, as it retains the latent variables of the configural model and determines that the factor loadings are equal across the compared samples. The metric model is rejected if it presents ΔCFI > .002, ΔRMSEA > .000, and a p-value < .01 for the chi-square difference, in favor of the configural model (Putnick & Bornstein, 2016). The last step of the invariance analysis is the comparison between the configural model and the scalar invariance model. In addition to imposing the same restrictions as the previous models, this model also defines that the threshold values of the EABAP items are the same for the compared samples. The same criteria used to compare the configural and metric invariance models are used to determine whether the scalar invariance model should be rejected. If the scalar invariance model is rejected, the parameters that caused the rejection will be identified, these parameters will be relaxed, and partial invariance models will be tested to find out whether there is partial scalar invariance and whether this level of partial invariance allows the comparison of latent variable means across the compared groups. According to the literature, there is no well-defined cutoff point, but partial scalar invariance affecting no more than 20% of the instrument’s parameters can be considered non-compromising, which allows us to conclude that the instrument has strong invariance (Putnick & Bornstein, 2016).
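A sketch of how this sequence of nested models can be fitted is shown below, again using hypothetical object names (eabap_data, the eabap_model string from the previous sketch, and a grouping variable sex). It uses lavaan’s group.equal and group.partial arguments, with semTools::compareFit collecting the fit differences; semTools::measEq.syntax offers finer control over the identification constraints for ordinal items and is omitted here for brevity.

# Minimal sketch of the configural / metric / scalar sequence for sex.
library(lavaan)
library(semTools)

items <- paste0("i", 1:17)

fit_config <- cfa(eabap_model, data = eabap_data, ordered = items,
                  estimator = "WLSMV", group = "sex")

fit_metric <- cfa(eabap_model, data = eabap_data, ordered = items,
                  estimator = "WLSMV", group = "sex",
                  group.equal = "loadings")

fit_scalar <- cfa(eabap_model, data = eabap_data, ordered = items,
                  estimator = "WLSMV", group = "sex",
                  group.equal = c("loadings", "thresholds"))

# If full scalar invariance is rejected, offending parameters can be freed,
# e.g. thresholds 2 to 4 of item 5 (the relaxation reported for model M6s):
fit_partial <- cfa(eabap_model, data = eabap_data, ordered = items,
                   estimator = "WLSMV", group = "sex",
                   group.equal = c("loadings", "thresholds"),
                   group.partial = c("i5|t2", "i5|t3", "i5|t4"))

compareFit(fit_config, fit_metric, fit_scalar, fit_partial)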

The groups are compared using the true scores of the latent variables, estimated in the scalar invariance model. The first group is taken as the reference group, with a mean of zero, and the values of the other groups are always estimated relative to the reference group. This comparison is expressed in standard deviation units. For example, if in the scalar invariance model a group has a mean of 0.50 on a given latent variable, this means that it is 0.50 standard deviations above the reference group. Following Hattie’s (2009) suggestions for effect sizes in educational data, we considered d ≤ 0.2 a small effect size, d = 0.4 a medium effect size, and d ≥ 0.6 a large effect size.
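Under this setup, the latent mean differences can be read from the fitted scalar (or partial scalar) model, as sketched below. Whether the non-reference groups’ latent means are freely estimated depends on the identification constraints imposed when the thresholds are held equal; the snippet assumes that this identification has been handled (for instance via semTools::measEq.syntax) and that the reference group’s factor variances are fixed to 1, so the means are already in standard deviation units.

# Sketch: inspecting the estimated latent means in the scalar model.
# The reference group's latent means are fixed at 0, so the other groups'
# means are read as standardized differences from the reference group.
pe <- parameterEstimates(fit_scalar)
subset(pe, op == "~1" & lhs %in% c("deep", "surface"))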

All analyses were performed using the lavaan (Rosseel, 2012) and semTools (Jorgensen et al., 2020) packages of the statistical software R (R Core Team, 2020). The reliability analysis involved estimating the alpha and omega indexes in the complete sample (Gomes et al., 2018; Raykov, 2001), calculated with the semTools package (Jorgensen et al., 2020). We present alpha because it is commonly used in the literature, but rely on McDonald’s omega to assess factor reliability. Values greater than or equal to .65 were classified as acceptable (Kalkbrenner, 2021).
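A sketch of the reliability computation, using the fitted CFA object from the earlier sketch, is shown below. semTools 0.5-3, the version cited in this article, exposes the reliability() function, which returns alpha and omega-type coefficients per factor; more recent versions of the package provide compRelSEM() for the same purpose.

# Sketch: alpha and omega for the deep and surface factors.
library(semTools)
reliability(fit_cfa)   # rows include alpha and omega for each factor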

Results

The model of correlated approaches showed an acceptable data fit (see CFI and RMSEA in Table 2). The deep approach factor has factor loadings between .55 and .79 (M = .65, SD = .08) and the surface approach between .43 and .67 (M = .58, SD = .07). The correlation between the factors is -.57. Cronbach’s alpha for the deep approach was .86 and for the surface approach, .80. McDonald’s omega for the deep approach was .86 and for the surface approach, .78.

Table 2
Results Of The Confirmatory Factor Analysis And Invariance

The comparison between men and women exhibits configural invariance, because this model shows an acceptable data fit (see CFI and RMSEA in Table 2). The differences between the configural and metric models in the CFI and RMSEA were .001 and .000, respectively, so we did not reject the (weak) metric model. The (strong) scalar invariance model was rejected for presenting ΔCFI > .002, ΔRMSEA > .000, and a Δχ² (Δdf) p-value < .01, in favor of the configural model. While the partial scalar models M4s and M5s present a worse fit than the configural model, the partial scalar model M6s presents a fit similar to that of the configural model (see Table 2). The M6s partial scalar model removed the constraints on thresholds 2, 3, and 4 of item 5. Therefore, we conclude that the EABAP can be used to compare the deep and surface approaches of men and women in the sample of this study.

The educational level variable indicates configural invariance (see CFI and RMSEA in Table 2). Metric invariance was rejected for exhibiting ΔCFI > .002, ΔRMSEA > .001, and a Δχ² (Δdf) p-value < .01, in favor of the configural model. While the partial metric models M3n, M4n, and M5n have a worse data fit than the configural model, the partial metric model M6n has a fit similar to that of the configural model (see Table 2). Scalar invariance was rejected, but the partial scalar model M8n, which only relaxes the factor loading of item 4, has a fit similar to that of the configural model. In sum, we conclude that the EABAP can be used to compare the deep and surface approaches of the students in elementary school II, high school, and higher education included in the sample of this study.

The configural invariance model for the type of school variable has an acceptable fit (see CFI and RMSEA in Table 2). Metric invariance was rejected because it produced ΔCFI > .002, ΔRMSEA > .001, and a Δχ² (Δdf) p-value < .01, in favor of the configural model. While the partial metric models M3e, M4e, and M5e have a worse data fit than the configural model, the partial metric model M6e has a fit similar to that of the configural model (see Table 2). The M7e scalar model was not rejected, since it produced a ΔRMSEA of .000 in comparison to the configural model, indicating that the EABAP can be used to compare the approaches of students from public and private schools in the sample of this study.

Table 3 shows the differences between the groups in the deep and surface approaches, their statistical significance, and effect sizes. The male and female students in this sample show no statistically significant difference in the deep approach, while there is an almost moderate difference in the surface approach in favor of men (Table 3). High school students in this sample report a greater deep approach and a lesser surface approach than elementary school II students, and higher education students exhibit a greater deep approach and a lesser surface approach than high school students. This result suggests that higher educational levels are associated with an increase in the deep approach and a decrease in the surface approach in this sample (Table 3). The greatest difference in the deep approach favored higher education over high school. School types show small effect sizes, with public school students scoring higher on the deep approach and lower on the surface approach than private school students in this sample (Table 3).

Table 3
Mean Differences In The Deep And Surface Approaches

Discussion

In this study, we investigated the invariance of the EABAP across sex, educational level, and type of school, using a large and heterogeneous sample. The model tested in this study, called correlated approaches, was not refuted. The deep and surface approaches showed a negative correlation, which corroborates the work of Gomes et al. (2011). This study shows evidence of configural, metric, and scalar invariance of the EABAP with respect to the variables mentioned above. There were cases in which full scalar invariance was rejected, but partial scalar invariance held with very few relaxed parameters. The literature on invariance has followed the recommendation of Dimitrov (2010), according to which a partial model with less than 20% of relaxed parameters does not compromise invariance.

Our study also provided evidence that there is no difference between the men and women of the sample in the deep approach; on the other hand, there is an almost moderate difference in the surface approach in favor of men. Unfortunately, there is a scarcity of studies about the invariance of instruments that measure students’ approaches to learning. The comparison between men and women is made in several studies, but we found only one study, by Duff (2002), in which the comparison was based on an invariance analysis. That study employed a small sample of students from a single university in the UK and performed a confirmatory factor analysis of composites using the maximum likelihood estimator. It is not possible to know whether the estimator was chosen correctly, because the author does not report an analysis of the multivariate normality of the composites. Furthermore, a proper analysis would require a multigroup item-level confirmatory factor analysis. Therefore, Duff’s (2002) study differs significantly from the analysis carried out in this article regarding the choice of estimator and the use of composites instead of items. The results of Duff’s study show an absence of invariance for the sex variable already in the configural model of the Revised Approaches to Studying Inventory. Despite the absence of invariance studies on gender, Severiens and Ten Dam’s (1994) meta-analysis indicates that men tend to have more extrinsic motivations while women have more intrinsic motivations. The authors argue that this difference may result from greater competitiveness and focus on academic grades among men. Several studies have examined differences between men and women in learning approaches, but their results are quite contradictory, sometimes indicating no difference, sometimes favoring either women or men (McDonald et al., 2017).

The present study found evidence that the progression from elementary education II to higher education is associated with an increase in the deep approach and a decrease in the surface approach. There was an almost large effect size in the deep approach when comparing high school to higher education. It is noteworthy that we did not find any study that performed invariance analysis to compare students’ approaches to learning across educational levels. However, the theory of learning approaches itself assumes that the deep approach increases and the surface approach decreases across educational levels, as it conceives a positive relationship between the deep approach and cognitive development and a negative relationship between the surface approach and cognitive development (Asikainen & Gijbels, 2017).

Public school students scored higher on the deep approach and lower on the surface approach than private school students. However, it must be taken into account that the public institutions in this sample include universities and federal and state high schools that concentrate an elite of Brazilian secondary students, whereas the private school students belong to institutions of lesser prestige in comparison to these public institutions.

Regarding the limitations of the study, the EABAP invariance analyses were performed with convenience samples, despite their size and heterogeneity, so other studies need to be performed with new samples. This study considered three important variables for the invariance analysis but did not consider other relevant variables, such as the discipline in which the student is enrolled. Further studies should examine whether the EABAP is invariant across different university courses and different disciplines in the academic curriculum.

Although we have shown differences between the groups in our sample, we must point out that no evidence has been presented that these results represent the populations of the groups analyzed. In order to affirm that men present a greater surface approach than women, we would need a representative sample of these groups. In Brazil, due to its size, logistical difficulties, and very high costs, it is practically impossible to collect representative samples. Therefore, we suggest that those who wish to apply the EABAP to compare groups perform invariance analysis on their own samples. Our focus was on invariance analysis with the specific intent of providing initial evidence that the EABAP can be used by clinicians and educational psychologists when carrying out comparisons. Our evidence indicates the relevance of the EABAP for comparing the groups analyzed; because it is initial evidence, it may be restricted to the characteristics of the sample analyzed.

This paper, as far as we know, is the first to investigate the invariance of the approaches across the variables sex, educational level, and type of school, allowing empirically well-supported comparisons of the groups defined by these variables. We hope that this article will encourage researchers in the area of students’ approaches to learning who wish to compare groups to use invariance analysis, since this is a basic condition for valid comparisons (Gomes, de Araujo et al., 2021).

Acknowledgments:

Heitor Blesa Farias holds a master’s degree scholarship from FAPEMIG.

References

  • Asikainen, H., & Gijbels, D. (2017). Do Students Develop Towards More Deep Approaches to Learning During Studies? A Systematic Review on the Development of Students’ Deep and Surface Approaches to Learning in Higher Education. Educational Psychology Review, 29(2), 205-234. https://doi.org/10.1007/s10648-017-9406-6
  • Cangur, S., & Ercan, I. (2015). Comparison of model fit indices used in structural equation modeling under multivariate normality. Journal of Modern Applied Statistical Methods, 14(1), 152-167. https://doi.org/10.22237/jmasm/1430453580
  • Cardoso, C. O., Seabra, A. G., Gomes, C. M. A., & Fonseca, R. P. (2019). Program for the neuropsychological stimulation of cognition in students: impact, effectiveness, and transfer effect on student cognitive performance. Frontiers in Psychology, 10, 1-16. https://doi.org/10.3389/fpsyg.2019.01784
  • Chiesi, F., Primi, C., Bilgin, A., Lopez, M., Del Carmen Fabrizio, M., Gozlu, S., & Tuan, N. (2016). Measuring University Students’ Approaches to Learning Statistics: An Invariance Study. Journal of Psychoeducational Assessment, 34(3), 256-268. https://doi.org/10.1177/0734282915596125
  • Costa, B. C. G. (2014). Investigando a Relação entre Autoconceito, Autoeficácia e Autoestima: Construção de evidências a partir da Escala de Cognições Acadêmicas Autorreferentes [Master’s thesis, Universidade Federal de Minas Gerais]. http://hdl.handle.net/1843/BUOS-AU3M76
  • Costa, B. C. G., Gomes, C. M. A., & Fleith, D. S. (2017). Validade da Escala de Cognições Acadêmicas Autorreferentes: autoconceito, autoeficácia, autoestima e valor. Avaliação Psicológica, 16(1), 87-97. https://doi.org/10.15689/ap.2017.1601.10
  • Dias, N. M., Gomes, C. M. A., Reppold, C. T., Fioravanti-Bastos, A. C. M., Pires, E. U., Carreiro, L. R. R., & Seabra, A. G. (2015). Investigação da estrutura e composição das funções executivas: análise de modelos teóricos. Psicologia: Teoria e Prática, 17(2), 140-152. https://doi.org/10.15348/1980-6906/psicologia.v17n2p140-152
  • Dimitrov, D. M. (2010). Testing for factorial invariance in the context of construct validation. Measurement and Evaluation in Counseling and Development, 43(2), 121-149. https://doi.org/10.1177/0748175610373459
  • Duff, A. (2002). Approaches to learning: Factor invariance across gender. Personality and Individual Differences, 33(6), 997-1010. https://doi.org/10.1016/S0191-8869(01)00208-2
  • Fontes, M. A., & Duarte, A. M. (2019). Aprendizagem de estudantes do ensino técnico brasileiro: motivos, investimento e satisfação. Educação e Pesquisa, 45, e192610. https://doi.org/10.1590/s1678-4634201945192610
  • Freiberg-Hoffmann, A., & Romero-Medina, A. (2020). Validación del Approaches and Study Skills Inventory for Students (ASSIST) en Universitarios de Buenos Aires, Argentina. Acción Psicológica, 16(2), 1-16. https://doi.org/10.5944/ap.16.2.23042
  • Gomes, C. M. A. (2007). Softwares educacionais podem ser instrumentos psicológicos. Psicologia Escolar e Educacional, 11(2), 391-401. https://doi.org/10.1590/S1413-85572007000200016
  • Gomes, C. M. A. (2010a). Avaliando a avaliação escolar: notas escolares e inteligência fluida. Psicologia em Estudo, 15(4), 841-849. http://www.redalyc.org/articulo.oa?id=287123084020
  • Gomes, C. M. A. (2010b). Estrutura fatorial da Bateria de Fatores Cognitivos de Alta-Ordem (BaFaCalo). Avaliação Psicológica, 9(3), 449-459. http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S1677-04712010000300011&lng=pt
  • Gomes, C. M. A. (2010c). Perfis de Estudantes e a relação entre abordagens de aprendizagem e rendimento Escolar. Psico (PUCRS. Online), 41(4), 503-509. http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/article/view/6336
  • Gomes, C. M. A. (2011a). Abordagem profunda e abordagem superficial à aprendizagem: diferentes perspectivas do rendimento escolar. Psicologia: Reflexão e Crítica, 24(3), 438-447. https://doi.org/10.1590/S0102-79722011000300004
  • Gomes, C. M. A. (2011b). Validade do conjunto de testes da habilidade de memória de curto-prazo (CTMC). Estudos de Psicologia (Natal), 16(3), 235-242. https://doi.org/10.1590/S1413-294X2011000300005
  • Gomes, C. M. A. (2012). Validade de construto do conjunto de testes de inteligência cristalizada (CTIC) da bateria de fatores cognitivos de alta-ordem (BaFaCAlO). Gerais: Revista Interinstitucional de Psicologia, 5(2), 294-316. http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S1983-82202012000200009&lng=pt&tlng=pt
  • Gomes, C. M. A. (2013). A Construção de uma Medida em Abordagens de Aprendizagem. Psico (PUCRS. Online), 44(2), 193-203. http://revistaseletronicas.pucrs.br/ojs/index.php/revistapsico/article/view/11371
  • Gomes, C. M. A., Amantes, A., & Jelihovschi, E. G. (2020). Applying the regression tree method to predict students’ science achievement. Trends in Psychology, 28, 99-117. https://doi.org/10.9788/s43076-019-00002-5
  • Gomes, C. M. A., Araujo, J. D., & Castillo-Díaz, M. A. (2021). Testing the Invariance of the Metacognitive Monitoring Test. Psico-USF, 26, 685-696. https://doi.org/10.1590/1413-82712021260407
  • Gomes, C. M. A., de Araujo, J., & Jelihovschi, E. G. (2020). Approaches to learning in the non-academic context: constructo validity of learning approaches test in video game (lat-video game). International Journal of Development Research, 10(11), 41842-41849. https://doi.org/10.37118/ijdr.20350.11.2020
  • Gomes, C. M. A., & Borges, O. N. (2007). Validação do modelo de inteligência de Carroll em uma amostra brasileira. Avaliação Psicológica, 6(2), 167-179. http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S1677-04712007000200007&lng=en&tlng=pt
  • Gomes, C. M. A., & Borges, O. N. (2008). Avaliação da validade e fidedignidade do instrumento crenças de estudantes sobre ensino-aprendizagem (CrEA). Ciências & Cognição (UFRJ), 13(3), 37-50. http://www.cienciasecognicao.org/revista/index.php/cec/article/view/60
  • Gomes, C. M. A., & Borges, O. N. (2009). O ENEM é uma avaliação educacional construtivista? Um estudo de validade de construto. Estudos em Avaliação Educacional, 20(42), 73-88. https://doi.org/10.18222/eae204220092060
  • Gomes, C. M. A., Farias, H. B., & Jelihovschi, E. G. (2022). Approaches to learning does matter to predict academic achievement. Revista de Psicología, 40(2), 905-933. https://doi.org/10.18800/psico.202202.010
  • Gomes, C. M. A., Fleith, D. S., Marinho-Araujo, C. M., & Rabelo, M. L. (2020). Predictors of students’ mathematics achievement in secondary education. Psicologia: Teoria e Pesquisa, 36, e3638. https://doi.org/10.1590/0102.3772e3638
  • Gomes, C. M. A., & Gjikuria, E. (2018). Structural Validity of the School Aspirations Questionnaire (SAQ). Psicologia: Teoria e Pesquisa, 34, e3438. https://doi.org/10.1590/0102.3772e3438
  • Gomes, C. M. A., & Golino, H. F. (2012). Validade incremental da Escala de Abordagens de Aprendizagem (EABAP). Psicologia: Reflexão e Crítica, 25(4), 400-410. https://doi.org/10.1590/S0102-79722012000400001
  • Gomes, C. M. A., Golino, H. F., & Menezes, I. G. (2014). Predicting School Achievement Rather than Intelligence: Does Metacognition Matter? Psychology, 5, 1095-1110. https://doi.org/10.4236/psych.2014.59122
  • Gomes, C. M. A., Golino, H. F., Pinheiro, C. A. R., Miranda, G. R., & Soares, J. M. T. (2011). Validação da Escala de Abordagens de Aprendizagem (EABAP) em uma amostra Brasileira. Psicologia: Reflexão e Crítica, 24(1), 19-27. https://doi.org/10.1590/S0102-79722011000100004
  • Gomes, C. M. A., Golino, H. F., & Peres, A. J. de S. (2018). Análise da fidedignidade composta dos escores do ENEM por meio da análise fatorial de itens. European Journal of Education Studies, 5(8), 331-344. https://doi.org/10.5281/zenodo.2527903
  • Gomes, C. M. A., & Jelihovschi, E. (2019). Presenting the regression tree method and its application in a large-scale educational dataset. International Journal of Research & Method in Education, 43(2), 201-221. https://doi.org/10.1080/1743727X.2019.1654992
  • Gomes, C. M. A., Lemos, G. C., & Jelihovschi, E. G. (2020). Comparing the predictive power of the CART and CTREE algorithms. Avaliação Psicológica, 19(1), 87-96. https://doi.org/10.15689/ap.2020.1901.17737.10
  • Gomes, C. M. A., Linhares, I. S., Jelihovschi, E. G., & Rodrigues, M. N. S. (2021). Introducing rationality and contente validity of SLAT-Thinking. International Journal of Development Research, 11(1), 43264-43272. https://doi.org/10.37118/ijdr.20586.01.2021
  • Gomes, C. M. A., Marques, E. L. L., & Golino, H. F. (2014). Validade Incremental dos Estilos Legislativo, Executivo e Judiciário em Relação ao Rendimento Escolar. Revista E-Psi, 2, 31-46. https://revistaepsi.com/artigo/2013-2014-ano3-volume2-artigo3/
  • Gomes, C. M. A., & Nascimento, D. F. (2021a). Presenting SLAT-Thinking Second Version and its content validity. International Journal of Development Research, 11(3), 45590-45596. https://doi.org/10.37118/ijdr.21368.03.2021
  • Gomes, C. M. A., & Nascimento, D. F. (2021b, September). Evidences of validity of Students’ Learning Approach Test: Identification of Thinking Contained in Texts 2. Anais completos do XVI Congresso Internacional Galego-Português de Psicopedagogia, Universidade do Minho, Braga, Portugal. https://www.researchgate.net/publication/356725706_Evidencias_de_validade_do_Teste_de_Abordagens_de_Aprendizagem_Identificacao_do_Pensamento_contido_em_Textos_2
  • Gomes, C. M. A., Quadros, J. S., Araujo, J., & Jelihovschi, E. G. (2020). Measuring students’ learning approaches through achievement: structural validity of SLAT-Thinking. Estudos de Psicologia, 25(1), 33-43. https://doi.org/10.22491/1678-4669.20200004
  • Gomes, C. M. A., & Rodrigues, M. N. S. (2021). Teste Abordagem-em-Processo. https://doi.org/10.13140/RG.2.2.17602.71363/2
  • Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Taylor & Francis.
  • Immekus, J. C., & Imbrie, P. K. (2009). A Test and Cross-Validation of the Revised Two-Factor Study Process Questionnaire Factor Structure Among Western University Students. Educational and Psychological Measurement, 70(3), 495-510. https://doi.org/10.1177/0013164409355685
  • Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., & Rosseel, Y. (2020). semTools: Useful tools for structural equation modeling (R package version 0.5-3) [Computer software]. The Comprehensive R Archive Network. https://CRAN.R-project.org/package=semTools
  • Kalkbrenner, M. T. (2021). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation, 14(1), 77-88. https://doi.org/10.1080/21501378.2021.1940118
  • Kline, R. B. (2016). Principles and Practice of Structural Equation Modeling (4th ed.). New York: Guilford.
  • McDonald, F., Reynolds, J., Bixley, A., & Spronken-Smith, R. (2017). Changes in approaches to learning over three years of University undergraduate study. Teaching & Learning Inquiry, 5(2), 65-79. https://doi.org/10.20343/teachlearninqu.5.2.6
  • Nauzeer, S., & Jaunky, V. C. (2021). A Meta-Analysis of the Combined Effects of Motivation, Learning and Personality Traits on Academic Performance. Pedagogical Research, 6(3), em0097. https://doi.org/10.29333/pr/10963
  • Nunes, C., Oliveira, T., Santini, F. de O., Castelli, M., & Cruz-Jesus, F. (2022). A Weight and Meta-Analysis on the Academic Achievement of High School Students. Education Sciences, 12, 1-17. https://doi.org/10.3390/educsci12050287
  • Ohtani, K., & Hisasaka, T. (2018). Beyond intelligence: A meta-analytic review of the relationship among metacognition, intelligence, and academic performance. Metacognition and Learning, 13, 179-212. https://doi.org/10.1007/s11409-018-9183-8
  • Pazeto, T. C. B., Dias, N. M., Gomes, C. M. A., & Seabra, A. G. (2019). Prediction of arithmetic competence: role of cognitive abilities, socioeconomic variables and the perception of the teacher in early childhood education. Estudos de Psicologia, 24(3), 225-236. https://doi.org/10.22491/1678-4669.20190024
  • Pereira, B. L. S., Golino, M. T. S., & Gomes, C. M. A. (2019). Investigando os efeitos do Programa de Enriquecimento Instrumental Básico em um estudo de caso único. European Journal of Education Studies, 6(7), 35-52. https://doi.org/10.5281/zenodo.3477577
  • Pires, A. A. M., & Gomes, C. M. A. (2018). Proposing a method to create metacognitive school exams. European Journal of Education Studies, 5(8), 119-142. https://doi.org/10.5281/zenodo.2313538
  • Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The State of the Art and Future Directions for Psychological Research. Developmental Review, 41, 71-90. https://doi.org/10.1016/j.dr.2016.06.004
  • Raykov, T. (2001). Bias of coefficient alpha for fixed congeneric measures with correlated errors. Applied Psychological Measurement, 25(1), 69-76. https://doi.org/10.1177/01466216010251005
  • R Core Team (2020). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/
  • Rodrigues, M. N. S., & Gomes, C. M. A. (2020). Testing the hypothesis that the deep approach generates better academic performance. International Journal of Development Research, 10(12), 42925-42935. https://doi.org/10.37118/ijdr.20579.12.2020
  • Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1-36. https://doi.org/10.18637/jss.v048.i02
  • Selvitopu, A., & Kaya, M. (2021). A Meta-Analytic Review of the Effect of Socioeconomic Status on Academic Performance. Journal of Education, 1-13. https://doi.org/10.1177/00220574211031978
  • Severiens, S. E., & Ten Dam, G. T. (1994). Gender differences in learning styles: A narrative review and quantitative meta-analysis. Higher Education, 27(4), 487-501. https://doi.org/10.1007/BF01384906
  • Takase, M., & Yoshida, I. (2021). The relationships between the types of learning approaches used by undergraduate nursing students and their academic achievement: A systematic review and meta-analysis. Journal of Professional Nursing, 37(5), 836-845. https://doi.org/10.1016/j.profnurs.2021.06.005
  • Xia, Y., & Yang, Y. (2019). RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behavior Research Methods, 51(1), 409-428. https://doi.org/10.3758/s13428-018-1055-2

Edited by

  • Editor:
    Evandro Morais Peixoto

Publication Dates

  • Publication in this collection
    28 June 2024
  • Date of issue
    2024

History

  • Received
    11 Apr 2022
  • Reviewed
    10 July 2023
  • Accepted
    08 Oct 2023