The Influence of Cognitive Ability on Cognitive Biases Generated by the Representativeness Heuristic

Abstract

Purpose:  This study aims to investigate the influence of cognitive ability on cognitive biases generated by the representativeness heuristic.

Design/methodology/approach:  The data collection was performed through questionnaires in order to measure the cognitive ability of 1,064 Brazilian accounting students and professionals using the Cognitive Reflection Test. To perform the analysis, we used the Student’s t-test, analysis of variance, correlations, and regressions.

Findings:  Our initial findings indicate that cognitive ability only influences the incidence of base rate insensitivity and illusion of validity biases, indicating that the higher the cognitive ability, the lower the incidence of these biases in decision making. However, robustness tests expand this influence to misconceptions of chance and regression fallacy biases. Furthermore, we show that the incidence of the representativeness heuristic biases differs by gender, level of education, and geographic region.

Originality/value:  This paper contributes to the literature on behavioral accounting considering that, although investigations into this subject do exist, no previous accounting study has covered all the cognitive biases of one heuristic in a single investigation.

Keywords: cognitive biases; behavioral accounting; Cognitive Reflection Test

1 Introduction

Humans continuously make judgments and decisions. In this process, it is common for the brain to use shortcuts to reduce cognitive effort. Lilienfeld, Lynn, Ruscio, and Beyerstein (2010) offer the example of a person walking down a street who sees a masked man running out of a bank with a gun: that person will probably try to get out of the way as quickly as possible, because they perceive in the man characteristics similar to those of a bank robber as usually portrayed on television and (or) in movies.

In this situation, a mental shortcut is used to reach a sensible decision quickly. However, shortcuts do not always lead to the best choices: our brain sometimes makes mistakes without us noticing. To judge correctly and make sound decisions, it is necessary to understand the lapses we commit so as not to fall victim to them. This awareness helps individuals act more consciously, increasing the likelihood of achieving their goals.

Considering that human behavior is influenced by several psychological aspects, which can distort the rational process of decision making (Kimura, 2003), Birnberg, Luft, and Shields (2007) claim that there is a strong relationship between accounting and human behavior, since psychological aspects have an impact on accounting.

Accounting aims to provide information to various internal and external users. Such information can influence actions or behaviors directly, through the informational content of the transmitted message, or indirectly, through the behavior involved in generating and transmitting the information. One of the challenges is to understand whether cognitive biases affect the judgment process of the professionals responsible for preparing accounting information.

Birnberg (2011) affirms that studies on behavioral accounting are richer because of the topics covered, the methods used, and the range of accounting sub-areas in which they are performed. The growth of these studies has been accompanied and benefited by a similar increase in behavioral studies in other areas. The empirical evidence that challenges the idea of a rational individual (Hogarth, 1975; Mussweiler & Englich, 2005; Tversky & Kahneman, 1974; Veeraraghavan, 2010) encourages new studies in the behavioral area. These studies show that a heuristic, while simplifying complex decision-making tasks, can also lead to severe errors of judgment.

Some studies have investigated the relationship between biases and individuals’ cognitive ability (Frederick, 2005; Hoppe & Kusterer, 2011; Oechssler, Roider, & Schmitz, 2009), which is a brain mechanism used to perform any activity, whether simple or complex. Researchers (Frederick, 2005; Kahneman & Frederick, 2002) point out that decision-making theory divides cognitive processes into two types: those that are executed quickly and with little conscious effort (called type 1) and those that are slower and more reflective (known as type 2).

Considering that individuals are prone to various heuristic-driven biases in the decision-making process (Ramiah, Zhao, & Moosa, 2014), cognitive biases from the representativeness heuristic, such as base rate insensitivity, are well known in theory. However, until recently there has only been limited research on how to deal with those biases in managerial decision making (Ohlert & Weissenberger, 2015).

Therefore, our study aims to investigate the influence of cognitive ability on cognitive biases generated by the representativeness heuristic, thus contributing to reflections on the importance of understanding biases arising from this heuristic.

Decision biases should be studied in detail for several reasons. These include: the interest that the subject itself generates; the emergence of practical implications; and the possibility of clarifying the psychological processes that underlie perception and judgment, since by knowing the biases accountants can minimize their cognitive failures, either when preparing information for stakeholders or when using that information for decision making.

Our study reveals that there is an inverse relationship between cognitive ability and the base rate insensitivity and illusion of validity biases, indicating that the higher the cognitive ability, the lower the incidence of these biases in decision making. Regarding the respondents’ perceptions on whether they make judgments and decisions based on intuition or reason, it can be noted that the individuals with a high CRT score considered their judgment and decision making to be based more on reason than those with lower scores, who showed greater use of intuition. Additionally, robustness tests expand this negative influence to misconceptions of chance and regression fallacy biases.

We also show that there are differences between genders in the incidence of the base rate insensitivity, insensitivity to sample size, illusion of validity, misconception of chance, and regression fallacy biases. For level of education, the biases impacted were base rate insensitivity and insensitivity to sample size, confirming the idea that educational level affects individuals’ behavior in the decision-making process (Bellouma & Belaid, 2016; Khan, Naz, Qureshi, & Ghafoor, 2017).

Finally, we demonstrate that respondents residing in the Midwest, Southeast, and South regions are less susceptible to the base rate insensitivity and illusion of validity biases than those residing in the Northeast and North regions, which may be explained by cultural factors (Chen, Kim, Nofsinger, & Rui, 2007).

In summary, our results show that the cognitive biases that arise from the representativeness heuristic influence the decision-making process. Thus, they should be added to the descriptive model, which captures real human behavior, providing an improvement in the explanatory power of economic models by considering the behavior of non-specialist economic agents and of those specialists who sometimes fail to make optimal decisions when faced with complex problems.

According to Thaler (2016), there are two types of decision-making models: normative models, which characterize optimal solutions to specific problems, and descriptive models, which seek to capture how humans actually behave. Thus, descriptive models, which deal with supposedly irrelevant factors, can help to improve the explanatory power of economic models. In this vein, our study contributes to the literature as it seeks to understand and reinforce the possible influence of cognitive ability on cognitive biases and on people’s judgments and decisions. Furthermore, our study shifts the focus of the discussion from the theoretical principles of the best economic decision-making model to the impact of cognitive ability on biases and how they affect judgment and decision making. In this sense, this paper contributes to the literature on behavioral accounting since, despite the existence of investigations of cognitive biases in several areas (Bellouma & Belaid, 2016; Dohmen, Falk, Huffman, & Sunde, 2010; Hoppe & Kusterer, 2011; Liberali, Reyna, Furlan, Stein, & Pardo, 2011; Oechssler et al., 2009; Ohlert & Weissenberger, 2015; Ramiah et al., 2014; Toplak, West, & Stanovich, 2011), to the best of our knowledge no previous study analyzes the relationship between cognitive ability and all the cognitive biases of the representativeness heuristic in a single study, and with a large sample (our 1,064 participants, whereas the sample sizes of these previous studies range between 13 and 449 participants).

Based on this view, we highlight that our study differs from those of Hoppe and Kusterer (2011) and Ohlert and Weissenberger (2015) since they only examine the base rate insensitivity bias and do not explore the other biases that arise from the representativeness heuristic (insensitivity to sample size, misconceptions of chance, illusion of validity, and insensitivity to predictability).

Moreover, it differs from previous research that analyzes different cognitive biases or fallacies, such as conservatism (Oechssler et al., 2009), risk aversion (Dohmen et al., 2010), loss aversion, high confidence level, anchoring, and self-serving biases (Bellouma & Belaid, 2016), and conjunction and disjunction fallacies in probability judgments (Liberali et al., 2011).

One similar approach to ours was taken by Toplak et al. (2011). However, they analyze an index of 15 different cognitive biases (including only a few representativeness heuristic biases, such as insensitivity to sample size and regression fallacy), and do not explore all the cognitive biases that arise from the representativeness heuristic, nor the individual effect of the CRT on each cognitive bias.

We consider that this individual analysis of the relationship the CRT has with each bias is relevant since our results show that whereas cognitive ability significantly influences the incidence of base rate insensitivity, illusion of validity, misconceptions of chance, and regression fallacy biases, it does not significantly influence insensitivity to sample size and insensitivity to predictability biases.

Besides the contribution made by analyzing the relationship between cognitive ability and all the biases arising from the representativeness heuristic, this research contributes to business practice by demonstrating that individuals who are (or will be) directly involved in a decision-making process are susceptible to the cognitive biases generated by the representativeness heuristic, which may negatively affect companies if these individuals do not consider rational premises in the decision-making process.

Considering that knowing the representativeness heuristic biases allows decision makers to mitigate their incidence, our study contributes to the literature by pointing out errors such as the tendency of individuals to (i) erroneously judge the likelihood of a situation, neglecting the base rate when they come across descriptive information, even if irrelevant (base rate insensitivity); (ii) assume that future results will be directly predictable from past results, presuming a perfect correlation between them (regression fallacy); (iii) overestimate their ability to interpret and predict an outcome accurately when analyzing a set of data that shows a pattern consistent with their beliefs (illusion of validity); and (iv) believe that a sequence of events generated by a random process represents its essential characteristics (misconception of chance). These errors can be mitigated by increasing cognitive ability, for example through statistical training that enables individuals to identify and demand logical reasoning based on the more reflective cognitive process (type 2).

Finally, considering that heuristics simplify the decision-making process, it is important to expand the discussion about them: given the amount of information that individuals process daily, using heuristics in decision making is common. Through their use, individuals may fall prey to behavioral biases, reducing the probability of rational decisions due to a higher propensity for intuitive ones, which requires more caution in the decision-making process.

The remainder of this paper is structured as follows. Section 2 reviews the previous literature about the Cognitive Reflection Test (CRT) and representativeness heuristic biases. Section 3 introduces the methodological procedures. Section 4 presents and discusses our results. Section 5 presents the theoretical and practical implications of the results, including future research opportunities.

2 Literature Review

2.1 Cognitive Reflection Test (CRT)

Frederick (2005) defined cognitive reflection as the ability or willingness to resist providing the answer that first comes to mind. His Cognitive Reflection Test (CRT) serves precisely to measure this disposition. Oechssler et al. (2009) consider the CRT to be a quick and simple test whose results are comparable to those of more complex intelligence tests.

The CRT differentiates more impulsive (type 1) decision-makers from more reflective ones (type 2). To do so, each of the three CRT problems has an intuitive, though incorrect, answer that quickly comes to mind. Those problems lead people to rely on type 1 processing by appearing easy to understand and solve. The activation of type 2 deliberative processes is presumably necessary to override the intuitive response and calculate the correct one (Browne, Pennycook, Goodwin, & McHenry, 2014).

In the CRT, individuals may answer incorrectly by processing information impulsively. The proposition that the three CRT problems generate incorrect intuitive responses is based on the following: nearly all the incorrect answers people provide are the intuitive ones; even when answering correctly, respondents often consider the wrong answer first, as evidenced by introspection, verbal reports, and scribbles in the margin; and individuals who answer the test correctly find it more difficult than those who answer incorrectly, because those who err do not perceive the underlying complexity, relying instead on impulsive intuition (Frederick, 2005).

Individuals who answer all three items of the test correctly are considered to be in the high cognitive ability group; those who get one or two answers correct are in the average cognitive ability group; and those who answer all three questions incorrectly form the low cognitive ability group. This information is necessary for the data analysis of this study since it makes it possible to relate the CRT to the incidence of cognitive biases.

Some studies relate intelligence to biases in information processing; however, this correlation does not always exist, as is the case with anchoring, whose incidence is unrelated to level of intelligence (West, Meserve, & Stanovich, 2012). Intelligent people can act ignorantly due to holding false beliefs, or to gaps in, or contamination of, their mental apparatus (Barrouillet, 2011). On the other hand, studies have shown a relationship between biases and the individual’s cognitive ability (Frederick, 2005; Hoppe & Kusterer, 2011; Oechssler et al., 2009; Stanovich, West, & Toplak, 2011). In these studies, the CRT was an adequate tool to measure the relationship between cognitive ability and biases.

Oechssler et al. (2009) identified in their study that individuals with high CRT scores commit fewer logical fallacies and are less overconfident than those with low scores. According to Moritz, Siemsen, and Kremer (2014), cognitive reflection can be used to investigate errors in decision making: if decision heuristics are based on misguided intuition, decision-makers with more cognitive reflection should be better able to override those responses and apply type 2 processing to improve their decision making. Furthermore, Evans and Stanovich (2013) argue that there is a clear and relevant distinction between immediate, natural responses on the one hand and reflective responses on the other to many questions regarding reasoning and judgment.

2.2 Representativeness heuristics and generated cognitive biases

There are three major heuristics used in decision making under uncertainty: availability (rules created to measure the chances of an event occurring), anchoring (anchor-based assessment), and representativeness, which is the focus of this study (Tversky & Kahneman, 1974). In the representativeness heuristic, an event or sample is judged probable/plausible when it represents the essential characteristics of its population and reflects the process by which it was generated (Hogarth, 1975; Kahneman & Frederick, 2002; Kahneman & Tversky, 1972; Tversky & Kahneman, 1973).

In the representativeness heuristic, McDowell, Occhipinti, and Chambers (2013) believe that people make their judgments based on the similarity of an event or object with something known. Additionally, Uribe, Manzur, and Hidalgo (2013) consider that the judgments in this heuristic are usually extrapolations based on an analysis of some individual examples, allowing people to make inferences regardless of sample size.

According to Kahneman and Tversky (1972), the biases generated by the representativeness heuristic are insensitivity to prior probability or frequency at the base rate; insensitivity to sample size; misconceptions of chance; regression fallacy; illusion of validity; and insensitivity to predictability.

Base rate insensitivity refers to people ignoring probabilities. Tversky and Kahneman (1974) exemplified base rate insensitivity through the following experiment. One group received brief personality descriptions of several people with information that was randomly selected from a group of 100 professional engineers and lawyers. Another group received the information that the personality descriptions were taken from a group of 70 engineers and 30 lawyers. In a third group, the individuals were informed that the data were regarding a group of 30 engineers and 70 lawyers. All the groups studied by Tversky and Kahneman (1974) were asked to evaluate the chance of each description being of an engineer. Thus, it was expected that the probability of any description belonging to an engineer would be higher in the second group than in the third group, which had more lawyers. The results show that the participants in the experiment evaluated the probability of the description belonging to an engineer based on stereotypes, without taking into account the probabilities. These results were confirmed by further studies (Joyce & Biddle, 1981; Kahneman, 2003; Stanovich & West, 2000).
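The normative benchmark that the participants neglected is Bayes’ rule, which weighs the base rate against the diagnosticity of the description. A minimal sketch of the engineer/lawyer calculation (the likelihood ratio is a hypothetical illustration, not a value from the experiment):

```python
def posterior_engineer(base_rate, likelihood_ratio=1.0):
    """P(engineer | description) via Bayes' rule in odds form.

    likelihood_ratio = P(description | engineer) / P(description | lawyer);
    1.0 means the description carries no diagnostic information
    (a hypothetical value used purely for illustration).
    """
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Group 2 (70 engineers / 30 lawyers) vs. group 3 (30 engineers / 70 lawyers):
# with an uninformative description, the base rate IS the answer.
print(round(posterior_engineer(0.70), 2))  # 0.7
print(round(posterior_engineer(0.30), 2))  # 0.3
```

Participants who judge both cases as roughly equally likely on the basis of a stereotyped description are exhibiting exactly the base rate insensitivity described above.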

Insensitivity to sample size refers to people ignoring the number of observations in a sample. This explains why people, including scientists, are often overconfident in conclusions drawn from statistical tests based on small samples. One example is the test performance of classes of different sizes: the chance of a small class ranking among the best performers is higher than that of larger classes, simply because small samples produce more extreme averages (Kane & Staiger, 2002).
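The statistical mechanism behind this example is that the variability of a sample mean shrinks with sample size. A small simulation (with a hypothetical score distribution and class sizes) makes the point:

```python
import numpy as np

rng = np.random.default_rng(42)

def spread_of_class_means(class_size, n_classes=10_000):
    """Standard deviation of class-average scores for classes of a given size."""
    # Each student's score ~ N(70, 10) (hypothetical distribution).
    scores = rng.normal(70, 10, size=(n_classes, class_size))
    return scores.mean(axis=1).std()

# Small classes produce far more extreme averages than large ones:
print(round(spread_of_class_means(10), 2))   # near 10/sqrt(10)  ≈ 3.16
print(round(spread_of_class_means(100), 2))  # near 10/sqrt(100) = 1.00
```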

Misconception of chance is the expectation that a sequence of events generated by a random process will represent that process’s essential characteristics, even for a short sequence. For example, consider two sequences of six coin tosses: HTHTHT and HHHTTT (where H = heads and T = tails). People tend to consider the first sequence more likely because the second one seems less random, even though both specific sequences are equally probable (Tversky & Kahneman, 1974).
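The fallacy is easy to check by enumeration: every specific ordering of six fair coin tosses has exactly the same probability.

```python
from itertools import product

# All 2^6 = 64 possible sequences of six tosses of a fair coin.
sequences = [''.join(s) for s in product('HT', repeat=6)]
p = 1 / len(sequences)  # probability of any one specific sequence

print(len(sequences))  # 64
print('HTHTHT' in sequences, 'HHHTTT' in sequences)  # True True
print(p)  # 0.015625, identical for both sequences
```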

In the case of the regression fallacy, people assume that future results will be directly predictable from past results, presuming a perfect correlation between future and past data. Spurious correlations are examples of situations in which this bias can lead to bad decisions.

Illusion of validity bias occurs when individuals are overconfident in their own predictions without ascertaining the validity of the underlying information; it is also known as overconfidence. A typical example is information posted on the internet that is spread without any verification of its validity. Thus, reaching a sound decision requires verifying the accuracy of the information.

Lastly, insensitivity to predictability refers to forecasts made on the basis of the representativeness of a description rather than its reliability: if the description is highly favorable, a very high profit will seem the most reasonable forecast, regardless of how predictive the description actually is. Several studies have been carried out based on the representativeness heuristic.

Considering that various behavioral biases, including representativeness, affect the decision-making process, Ramiah et al. (2014) show that corporate treasurers who are involved in the decision-making process exhibit signs of behavioral biases, such as representativeness, anchoring, self-serving, high confidence, and loss aversion biases.

Base rate insensitivity, one of the biases of the representativeness heuristic, was studied by Ohlert and Weissenberger (2015), who examined how management accountants should prepare information in order to reduce the phenomenon of base rate insensitivity in probability judgments to guide managerial decision making.

Ohlert and Weissenberger (2015) suggest that a visual-based information format, especially in comparison to a tabular representation, significantly reduces the fallacy of neglecting base rates. Moreover, the results reveal a significant relationship between the base rate fallacy and the person’s cognitive style, suggesting that the base rate fallacy decreases in line with people’s preference to process information analytically.

In a similar study, Bellouma and Belaid (2016) show that capital managers exhibit various behavioral biases, such as the self-serving, overconfidence, loss aversion, representativeness, and anchoring biases, and that these biases affect their decisions. Moreover, it was verified that fundamental factors such as size, credit rating, firm performance, gender, age, education, and industry can affect the decisions of working capital managers.

Considering that studies such as those of Frederick (2005), Oechssler et al. (2009), and Dohmen et al. (2010) have identified that low cognitive ability measured through the CRT positively influences the incidence of cognitive biases in decision making, Hypothesis 1 considers that the incidence of cognitive biases generated by the representativeness heuristic is influenced by the cognitive ability of the individuals surveyed.

This hypothesis is based on the perspective that people with higher cognitive ability differ from those with lower cognitive ability in various ways, such as in influence on judgment and decision making (Frederick, 2005).

Given that Frederick (2005) identified that individuals with lower CRT scores perceived the test to be easier by solely making use of type 1 processing, and those with higher CRT scores used type 2 information processing, which is slow, conscious, and reflective, Hypothesis 2 considers that there is a statistical difference in the degree of difficulty perceived by the respondents in the three CRT items and their cognitive ability.

Finally, based on earlier evidence that gender (Ohlert & Weissenberger, 2015), level of education (Bellouma & Belaid, 2016), and region of residence (Tekçe & Yilmaz, 2015) influence cognitive biases, Hypothesis 3 considers that the respondents’ characteristics (gender, geographic region, and level of education) influence the cognitive biases generated by the representativeness heuristic.

3 Methodological Procedures

3.1 Data collection

Our data source was a survey that aimed to identify the cognitive ability of accountants and graduate students in accounting, measured by the Cognitive Reflection Test (CRT). We then verify the influence of cognitive ability on the cognitive biases arising from the representativeness heuristic, consequently ascertaining whether these biases affect judgment and decision making (Figure 1).

Figure 1
Synthesis of the main assumption of this study

The survey was composed of three sections (Table 1). The first part consisted of sociodemographic information. The second part was for measuring cognitive ability through the CRT. The third consisted of 30 questions, five for each of the biases. In the development of the scenarios to measure the studied biases, the focus was not on evaluating the accounting knowledge of the respondents; thus, the questions were regarding everyday situations.

Table 1
Synthesis of the Data Collection Instrument

Each question of the third part was measured by a scale ranging from 0 to 10, where 0 was designated as “I completely disagree” and 10 as “I completely agree.” Moreover, the questions were randomly arranged to try to prevent one biased response from affecting another response to the same bias.

We chose to use the CRT because it is the shortest, and one of the most useful, measures of cognitive performance available among cognitive ability tests. It has been widely used in previous studies (Bialek & Pennycook, 2018; Erceg & Bubic, 2017; Ross, Hartig, & McKay, 2017; Simonovic, Stupple, Gale, & Sheffield, 2017). Among cognitive ability tests, it is one that measures the capacity or disposition to resist responding with the first answer that comes to mind, the kind of impulse involved in heuristics.

The CRT can be compared, in terms of the relationship between its score and observed behavior, to more complex intelligence tests, which require more time to obtain the data necessary for their measurement. However, although the CRT is a widely used measure of the propensity to engage in analytic or deliberative reasoning, the test has some limitations (Bialek & Pennycook, 2018).

Prior experience with the CRT or any similar test has a substantial influence on the CRT score. Besides this, more educated participants (secondary education, university) also tend to achieve higher scores than less educated ones. However, although the items seem to be of medium difficulty, the CRT may not be suitable for the highly educated, because they tend to solve all of the items (Stieger & Reips, 2016). We highlight this possible limitation since we examine more educated individuals.

A total of 1,138 individuals answered the questionnaire through a form available on the internet. The high number of responses was due to the network of contacts and a prize raffle among the participants. After excluding incomplete questionnaires, duplicates, and outliers, the valid sample totaled 1,064 respondents, of whom 47.3% were female.

3.2 Dimensionality and reliability of the scale

Dimensionality and reliability were verified after the data collection. For a scale to be created, it is essential that the items are unidimensional, meaning that they are strongly associated with each other and represent a single concept. Unidimensionality was tested through factor analysis and reliability was measured by Cronbach’s alpha.

We also used Bartlett’s sphericity test, the Kaiser-Meyer-Olkin (KMO) test, the measure of sampling adequacy (MSA), communalities, and the explained variance (Table 2).

Table 2
Statistical measures for dimensionality and scale reliability through factorial analysis

The dimensionality and reliability analysis of the cognitive biases led to the removal of eight questions. The final result indicates that, on average, the instrument presented good reliability as measured by Cronbach’s alpha (internal consistency), indicating that the measurements can evaluate the phenomenon studied. After the removal of those variables, the dimensionality analysis verified that the variables within each construct are associated with each other. No additional analysis was performed for the CRT scale since it had already been validated in previous studies.
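For reference, Cronbach’s alpha can be computed directly from an items matrix: alpha = k/(k-1) × (1 − sum of item variances / variance of the total score). A minimal sketch with hypothetical 0–10 scale answers:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical answers from five respondents to three items of one construct:
answers = [[8, 7, 9],
           [2, 3, 2],
           [6, 5, 6],
           [9, 9, 8],
           [4, 4, 5]]
print(round(cronbach_alpha(answers), 3))  # 0.974: high internal consistency
```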

3.3 Data analysis

In order to test the hypotheses, it was necessary to compute the mean of each cognitive bias studied and to separate the respondents into three CRT groups (those who answer all three items of the test correctly are considered members of the high cognitive ability group, those who get one or two questions right form the average cognitive ability group, and those who get all of them wrong constitute the low cognitive ability group), and then to verify whether the means are statistically different.
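The grouping rule can be sketched as follows (function and label names are illustrative):

```python
def crt_group(n_correct):
    """Classify a respondent by the number of correct CRT answers (0-3)."""
    if n_correct == 3:
        return 'high'
    if n_correct in (1, 2):
        return 'average'
    return 'low'

print([crt_group(n) for n in (0, 1, 2, 3)])
# ['low', 'average', 'average', 'high']
```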

To evaluate the differences between the means of the groups, according to pre-established hypotheses, we used the Student’s t-test for situations with two groups. For three or more groups, we used the analysis of variance (ANOVA). To use the ANOVA test, the assumptions of normality and equality of variance-covariance matrices were tested.

Regarding data normality, the central limit theorem was considered, where the mean vectors converge to the normal multivariate distribution in large samples, as is the case of this study. The equality of variance-covariance matrices was verified using Levene statistics. In the case of the existence of heteroscedasticity, the Welch and Brown-Forsythe tests were used. In addition, considering that the ANOVA does not identify which means are different, we performed Tukey’s test of honestly significant difference (HSD).
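This test battery can be sketched with SciPy (the bias scores below are hypothetical; the Welch and Brown-Forsythe variants used under heteroscedasticity, and Tukey’s HSD follow-up, are noted in the comments):

```python
from scipy import stats

# Hypothetical bias scores (0-10 scale) for the three CRT groups.
low     = [7.1, 6.8, 7.5, 6.9, 7.3, 6.5, 7.0, 7.4]
average = [6.2, 6.6, 6.0, 6.4, 5.9, 6.3, 6.1, 6.5]
high    = [5.1, 5.5, 4.9, 5.3, 5.0, 5.4, 5.2, 4.8]

# 1) Levene's test for equality of variances across groups.
lev_stat, lev_p = stats.levene(low, average, high)

# 2) Classic one-way ANOVA; if Levene rejects equal variances, the
#    Welch / Brown-Forsythe variants would be used instead.
f_stat, anova_p = stats.f_oneway(low, average, high)

# 3) ANOVA does not say WHICH means differ; a Tukey HSD follow-up
#    (scipy.stats.tukey_hsd in recent SciPy versions) does.
print(f'Levene p = {lev_p:.3f}, ANOVA F = {f_stat:.1f}, p = {anova_p:.2e}')
```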

Finally, in line with prior studies (Liberali et al., 2011; Toplak et al., 2011), we performed a robustness test using the Spearman correlation (due to the non-normality of the data) and ordinary least squares (OLS) regressions to examine the relationship between the CRT score and the cognitive biases generated by the representativeness heuristic.

In the regression analysis, since Kang and Park (2018) suggest considering the influence of individuals’ demographic profiles on psychological aspects related to heuristic biases, we follow prior literature and control for gender (Ohlert & Weissenberger, 2015), age (Dohmen et al., 2010; Koehler, 1996), and educational level (Bellouma & Belaid, 2016; Chen et al., 2007; Khan et al., 2017).

These control variables also mitigate possible problems of our research design that may arise from the confounding effect between cognitive biases and cognitive ability (Almeida, 2019), in which a variable can influence both the dependent and the independent variable. Thus, considering the role of gender and academic background as confounding factors that may affect cognitive biases (Wang, Jusup, Shi, Lee, Iwasa, & Boccaletti, 2018) and cognitive ability (Frederick, 2005), we try to mitigate this problem by controlling for gender and educational level.
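A sketch of this robustness design with simulated data (all variable names and effect sizes are hypothetical; OLS is fitted via plain least squares, with dummy controls standing in for the demographic profile):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated respondents: CRT score (0-3), controls, and a bias score
# built with a negative CRT effect of -0.6 (hypothetical).
n = 200
crt    = rng.integers(0, 4, size=n).astype(float)
female = rng.integers(0, 2, size=n).astype(float)   # gender dummy
age    = rng.normal(25, 5, size=n)
bias   = 7 - 0.6 * crt + 0.3 * female + rng.normal(0, 1, size=n)

# Spearman correlation (chosen because of the non-normality of the data).
rho, rho_p = stats.spearmanr(crt, bias)

# OLS: bias = b0 + b1*CRT + b2*female + b3*age + e.
X = np.column_stack([np.ones(n), crt, female, age])
coefs, *_ = np.linalg.lstsq(X, bias, rcond=None)

print(f'Spearman rho = {rho:.2f} (p = {rho_p:.3g}); CRT slope = {coefs[1]:.2f}')
```

With these simulated data, the recovered CRT slope sits close to the planted −0.6, mirroring the negative relationship that the robustness tests report.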

4 Presentation and Analysis of Results

4.1 Characterization of participants

Among the validated forms, most participants were undergraduate students (51%), while 22% had completed their undergraduate degrees and 11.8% had already completed a master’s or doctorate degree. The majority of the respondents lived in the northeastern region of Brazil (77%), were between 20 and 29 years of age (56%), and had some professional experience (62%), with an average of approximately two years in their profession.

4.2 CRT responses

Table 3 presents an overview of the results for the CRT items. For the first item (bat and ball), 562 (52.8%) of the participants answered intuitively (type 1), while 450 (42.3%) responded reflectively (type 2). For the second item (machines and shirts), the numbers of both intuitive (485) and reflective (376) responses decreased, which increased the share of other wrong answers (19.1%). For the third item (water lily), 46.8% of the responses were intuitive, given by participants who settled for the first answer that spontaneously came to mind.

Table 3
Distribution of the valid sample by responses to CRT items

Based on these questions, it was possible to classify the 1,064 respondents into the CRT levels. Of these, 405 (38.1%) demonstrated low cognitive ability, using only type 1 processing; 452 (42.5%) showed average cognitive ability; and 207 (19.2%) showed high cognitive ability, making use of type 2 (reflective) processing. These percentages are similar to those obtained by Frederick (2005). However, the respondents’ average number of correct answers in this study was 1.81, higher than the value obtained by Frederick (1.24).
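The grouping rule described above can be sketched as a small function; the score vector is hypothetical, not the study's data.

```python
# Sketch of the CRT scoring rule: respondents are grouped by their number
# of correct answers (0, 1-2, or 3). Scores below are hypothetical.

def classify_crt(correct_answers):
    """Map a 0-3 CRT score onto the three ability groups used in the study."""
    if correct_answers == 0:
        return "low"       # only type 1 (intuitive) processing
    if correct_answers == 3:
        return "high"      # consistent type 2 (reflective) processing
    return "average"       # 1 or 2 correct answers

scores = [0, 2, 3, 1, 0, 3, 2]          # hypothetical sample
groups = [classify_crt(s) for s in scores]
mean_score = sum(scores) / len(scores)  # analogous to the 1.81 reported
```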

4.3 Analysis of study hypotheses

Hypothesis 1 seeks to verify whether the incidence of biases generated by the representativeness heuristic is influenced by the cognitive ability of the individuals surveyed. To verify the difference between the group means for high cognitive ability (CRT = 3), average cognitive ability (CRT = 1 or 2), and low cognitive ability (CRT = 0), ANOVA was used, where the null hypothesis tested is the equality of means of the dependent variable across the groups (Table 4). In the case of heteroscedasticity in the variances, the Welch and Brown-Forsythe tests were used.

Table 4
Comparison of cognitive biases studied in the high, medium, and low cognitive ability groups

The base rate insensitivity bias means are 6.63, 6.25, and 5.15 in the groups with CRT = 0, 1-2, and 3, respectively. This finding suggests that the lower the cognitive ability, the greater the respondents’ susceptibility to this bias: they neglect the base rate and direct their attention to other descriptive information, based on prior beliefs and intuitions. This is consistent with the evidence of Ohlert and Weissenberger (2015) that base rate insensitivity decreases in line with people’s preference for processing information analytically.

Similarly, Hoppe and Kusterer (2011) also show that individuals in the low (high) CRT group are more (less) susceptible to the base rate insensitivity bias. Thus, it is important to point out that when the data required for judgment and decision making have some description, accountants need to be careful not to ignore the base rate.

When performing the homogeneity of variances test, through the Levene statistic, we observed the occurrence of heteroscedasticity (p-value < 0.0001). For this reason, the Welch and Brown-Forsythe tests were used, which reveal that there is a statistically significant difference between the means for the base rate insensitivity and insensitivity to sample size biases.
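Welch's heteroscedasticity-robust one-way statistic applied here can be sketched directly from its textbook formula; the group data are hypothetical, and p-values (which require the F distribution) are omitted.

```python
# Welch's one-way test statistic F*, robust to unequal group variances,
# as used when Levene's test rejects homoscedasticity. Data are hypothetical.

def welch_anova_F(groups):
    """Welch's F* for k groups with unequal variances."""
    k = len(groups)
    stats = []
    for g in groups:
        n = len(g)
        mean = sum(g) / n
        var = sum((x - mean) ** 2 for x in g) / (n - 1)
        stats.append((n, mean, var))
    w = [n / var for n, mean, var in stats]            # precision weights
    sw = sum(w)
    grand = sum(wj * mean for wj, (n, mean, var) in zip(w, stats)) / sw
    num = sum(wj * (mean - grand) ** 2
              for wj, (n, mean, var) in zip(w, stats)) / (k - 1)
    lam = sum((1 - wj / sw) ** 2 / (n - 1)
              for wj, (n, mean, var) in zip(w, stats))
    den = 1 + 2 * (k - 2) / (k * k - 1) * lam
    return num / den

groups = [[7, 6, 8, 7, 6], [6, 6, 7, 5, 6], [5, 4, 6, 5, 5]]
F_star = welch_anova_F(groups)
```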

Another bias that presented a statistically significant difference between the means in the three CRT groups was the illusion of validity. The Levene homoscedasticity test (p-value = 0.279) confirmed equal variances in the three groups; so ANOVA could be used, which indicated a statistically significant difference between the means (p-value = 0.006).

This result suggests that people with low cognitive ability tend to spread information without checking its validity. Thus, when using information in the decision-making process, it is necessary to verify the rigor of its content, even if it seems to be true. Based on this, individuals will know the possibility of the incidence of the illusion of validity in their judgment and decision-making process, resisting the urge to believe the truth of given data without first confirming the accuracy of their content.

The misconceptions of chance bias, like the base rate insensitivity and illusion of validity biases, presented a lower mean in the group of respondents with three correct CRT answers (5.41) and higher means in the groups with one or two correct answers (5.49) and none (5.62). However, we verified through ANOVA that this difference is not statistically significant.

In the case of the other biases, insensitivity to predictability, insensitivity to sample size, and regression fallacy, the inverse relationship between the incidence of cognitive bias and the CRT groups was not found, corroborating studies such as those of Barrouillet (2011) and West et al. (2012), showing that there is not always a negative correlation between intelligence and bias, since intelligent individuals can act unconsciously.

The Tukey test (Table 5) was used to show the separation of means obtained for the base rate insensitivity and illusion of validity biases by CRT group. For the other biases, the separation of means into different groups was not performed.

Table 5
Tukey’s test for insensitivity to base rate and illusion of validity biases by CRT

Table 5 confirms the results shown in Table 4 for the base rate insensitivity and illusion of validity biases. Respondents with high cognitive ability presented a lower mean (5.15) for the base rate insensitivity bias; thus, they are classified into a group separate from the participants with average cognitive ability (mean of 6.25) and low cognitive ability (6.63).

From this, it can be concluded that the higher the bias means, the greater the probability of an individual composing the lower CRT group, due to the significant statistical difference between their means. This finding corroborates Oechssler et al. (2009) in that individuals with low cognitive abilities are more susceptible to cognitive biases since the results show that the biases in the low (high) CRT group are significantly more (less) pronounced.

For the illusion of validity bias, the results are similar: respondents with CRT = 3 registered a lower mean (2.46) and were placed by Tukey’s test in a group separate from the participants with CRT = 0 (3.23) and CRT = 1-2 (3.31).

Hypothesis 2 was formulated to verify whether there is a statistical difference between the respondents’ degree of difficulty in the three items of the CRT and their perception as to the form of judgment and decision making. The ANOVA test was also used to verify the difference between the means in the groups (Table 6).

Table 6
Comparison of the degree of difficulty and how it considers judgment and decision making by CRT classification

Both the respondents’ degree of difficulty regarding the three items in the CRT test and how these participants considered their judgment and decision making (based on reason or intuition), compared with the number of correct answers in the CRT, had statistically significant differences between their means (p-value <0.001 for both relationships).

According to Table 6, in the first relationship, the respondents who did not answer any of the items correctly or those who got 1 or 2 items correct in the CRT obtained higher averages (4.70 and 5.29, respectively) than those who answered 3 items correctly in the CRT (4.41). The equality of means test between CRT = 3 and CRT = 1 and 2, as well as the same test for CRT = 1 and 2 and CRT = 0 (not shown in the Table), had high t statistics (-3.950 and 3.245, respectively). However, it is possible to verify that the mean of participants with CRT = 0 (4.70) was very similar to those with CRT = 3 (4.41).

Regarding the number of correct answers in the CRT, these results differ slightly from those obtained by Frederick (2005). However, when comparing the two extreme groups, the equality of means test reveals that those who scored higher in the cognitive test also indicated greater difficulty in the same test.

Regarding the respondents’ perception of how they judge and make decisions, whether based on intuition or reason, a comparison of the CRT scores shows that the participants with CRT = 3 achieved higher means, meaning they considered their judgment and decision making to be based more on reason than those with lower scores (0, 1, and 2), who rely more on intuition.

It is possible to observe (Table 7) the separation of means obtained for the degree of difficulty and the form of judgment and decision making via Tukey’s test according to the respondents’ opinion in groups 1 and 2, thus corroborating the results of Table 6.

Table 7
Tukey’s test for the degree of difficulty and the type of judgment and decision making according to the respondents’ opinion on the CRT

Regarding the study participants’ opinion on how they judge and make decisions, based either on reason or intuition, and comparing it with the three-group CRT classification, we found that the higher the number of correct answers in the CRT, the lower the mean, meaning that judgment and decision making are more based on reason and the use of type 2 processing. The lower the number of correct answers, the higher the mean, indicating judgment and decision making more based on intuition.

Thus, these results confirm the second hypothesis of the study, indicating the existence of a statistical difference in the degree of difficulty perceived by the respondents in the three items of the CRT and their cognitive ability.

Hypothesis 3 was formulated to identify whether educational level (technical level, incomplete graduation, completed graduation, specialization, master’s, or doctorate degree), gender, and region of residence (Midwest, Northeast, North, Southeast, and South) influence the incidence of the biases. ANOVA was used to verify the difference between the means in the groups classified by region and by level of education. The Student’s t-test for equality of means was used to evaluate the differences between the means of the cognitive biases by respondent gender.
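The two-group mean comparison (Student's t) used for gender can be sketched with a pooled-variance statistic; the gender-group scores below are hypothetical.

```python
# Pooled-variance two-sample t statistic, as in the gender comparison
# described above. Bias scores are hypothetical.

def student_t(a, b):
    """Student's t for the difference between two group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)            # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

female = [7, 6, 8, 7, 6]     # hypothetical bias scores, one group
male   = [6, 5, 6, 5, 6]     # the other group
t = student_t(female, male)  # positive: higher mean bias in the first group
```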

Regarding educational level, the ANOVA (Table 8) showed significant differences for the base rate insensitivity (p-value of 0.000) and insensitivity to sample size (p-value of 0.043) biases. The other biases presented p-values higher than 0.1, with no effect of educational level verified in the results. However, the extreme response values suggest that it may have had an influence.

Table 8
Comparison of cognitive biases studied by educational level

To show the differences between the educational level means of the base rate insensitivity and insensitivity to sample size biases, Tukey’s test was applied, as evidenced in Table 9.

Table 9
Tukey’s test for base rate insensitivity and insensitivity to sample size biases by the level of education

Based on the results, we verify that the means of the respondents with master’s and doctorate degrees are statistically different from those who only have a technical course in accounting, which in Brazil represents 11 years of schooling. In other words, there is a difference between these two levels of education, which can represent up to ten years of additional study. Regarding the insensitivity to sample size bias, there is a statistical difference between the participants with a technical level and those with incomplete graduation, completed graduation, and doctorate degrees.

Regarding gender (Table 10), the influence was higher than for academic training and geographic region. Of the six biases studied, five presented a mean difference. The female values were higher in the base rate and misconception of chance biases; the mean of males was higher for sample size, illusion of validity, and regression fallacy.

Table 10
Influence of gender on the biases

These findings suggest differences between genders in the judgment and decision making process involving biases of the representativeness heuristic. However, we did not determine the factors that led to the mean of one gender being superior to the other for any one specific type of bias.

Despite this, Ohlert and Weissenberger’s (2015) findings demonstrate that the tendency to be subject to the base rate fallacy decreases in line with individuals’ preference for processing information analytically, which is significantly related to gender, with men making significantly fewer judgment errors than women.

In Table 11, an analysis by the respondents’ region shows that this variable influences base rate insensitivity (p-value = 0.000) and the illusion of validity (0.006). Thus, the tests show that region has a partial effect on the biases. Given the geographical dimensions of Brazil, we can assume that region serves as a proxy for cultural aspects. In this sense, the results seem to suggest that these aspects may be at least partially important.

Table 11
Influence of region on the biases

However, despite the assumptions of normality and equality of variance-covariance matrices in the ANOVA being tested, these results should be analyzed carefully due to the unbalanced sample, in which the significant variables correspond to the groups with the highest response rate of the questionnaires (Northeast), as well as the lowest response rate (North).

The Tukey test (Table 12) complements the ANOVA results, identifying 3 groups for base rate insensitivity bias by region. In a comparison of the first group with the second, the statistically significant difference between the means of the South region (5.14) and the Northeast (6.43) is verified. There was also a significant difference between the Midwest region (5.27) and the North (6.75), as well as between the South (group 1) and North regions (group 3).

Table 12
Tukey’s test for base rate insensitivity and illusion of validity biases by region

4.4 Robustness tests

We performed robustness tests to examine the relationship between the CRT score and the cognitive biases generated by the representativeness heuristic. We first performed a Spearman correlation due to the non-normality of the variables (Shapiro-Wilk test). This showed that there is a negative relationship between the individuals’ cognitive reflection and the cognitive biases generated by the representativeness heuristic, since individuals with a higher CRT score tended to exhibit lower levels of the base rate insensitivity, misconceptions of chance, regression fallacy, and illusion of validity biases, as evidenced in Appendix B.

This evidence expands our previous findings that cognitive reflection was negatively related to the incidence of base rate insensitivity and illusion of validity biases. In this sense, we believe that this negative relationship between CRT and representativeness heuristic biases contradicts the evidence from the study of Toplak et al. (2011), which found, through correlation analyses, a positive and significant relationship between the CRT and the heuristic and biases index.

One possible reason for this divergence is that Toplak et al. (2011) did not individually analyze the relationship between each bias and the CRT. It may also be because their analysis used an index that includes other biases that do not arise from the representativeness heuristic.

Thus, our results expand this previous finding by showing that although cognitive ability is negatively and significantly related to the base rate insensitivity, illusion of validity, misconceptions of chance, and regression fallacy biases, it is not significantly related to the insensitivity to sample size and insensitivity to predictability biases.

In addition, we find similar results in the OLS regression analysis since the CRT score has a negative influence on the base rate insensitivity, misconceptions of chance, regression fallacy, and illusion of validity models, as evidenced in Table 13.

Table 13
Relationship between CRT score and representativeness heuristic biases

We performed regressions with robust standard errors when the White test p-values were below 5%, in order to avoid heteroscedasticity. We included the educational level variable Incomplete Graduation in the base group (constant) in order to avoid the dummy variable trap, and since this variable was leading to high collinearity in the econometric model.
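The dummy coding described above, with Incomplete Graduation absorbed into the constant, can be sketched as follows; the category labels follow the paper, but the encoding itself is an illustrative assumption.

```python
# Illustrative dummy coding for educational level, omitting one base
# category to avoid the dummy variable trap (perfect collinearity with
# the regression constant). Labels follow the paper; coding is a sketch.

LEVELS = ["incomplete_graduation",  # base group, absorbed by the constant
          "technical", "completed_graduation",
          "specialization", "masters", "phd"]

def encode(level):
    """One dummy per non-base category; the base level maps to all zeros."""
    return [1 if level == cat else 0 for cat in LEVELS[1:]]

row = encode("masters")                 # → [0, 0, 0, 1, 0]
base = encode("incomplete_graduation")  # → [0, 0, 0, 0, 0]
```

Because the five dummies never sum to one for the base group, they are not a linear combination of the intercept column, which is exactly what avoids the trap.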

We highlight that all variables present a variance inflation factor (VIF) below 5, showing no multicollinearity problems across the models. However, the non-normally distributed residuals across specifications, as evidenced by the Shapiro-Wilk test, limit the possibility of generalizing the results to the whole population. Thus, also considering that we did not select a random sample, our results are only valid for the sample analyzed.
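The VIF diagnostic can be sketched for the simple two-predictor case, where VIF_j = 1 / (1 − R²_j) reduces to 1 / (1 − r²) with r the correlation between the two predictors; the predictor data below are hypothetical.

```python
# VIF sketch: for predictor j, VIF_j = 1 / (1 - R²_j), where R²_j comes
# from regressing predictor j on the others. With exactly two predictors
# this reduces to 1 / (1 - r²). Data are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def vif_two_predictors(x1, x2):
    r = pearson_r(x1, x2)
    return 1 / (1 - r ** 2)

age  = [22, 35, 31, 28, 40, 25]       # hypothetical predictors
educ = [1, 1, 2, 2, 3, 3]
vif = vif_two_predictors(age, educ)   # low here; the study flags VIF > 5
```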

Our results corroborate the view that individuals with high cognitive ability are less susceptible to cognitive biases in decision making (Dohmen et al., 2010; Frederick, 2005; Oechssler et al., 2009), expanding our prior results to the influence of misconception of chance and regression fallacy biases since these variables are statistically significant in Table 13.

However, similarly to Hoppe and Kusterer’s (2011) results, we cannot conclude that the individuals’ performance in the CRT is a good predictor for their susceptibility to all behavioral biases analyzed since the susceptibility to the insensitivity to sample size and insensitivity to predictability biases does not vary with the CRT score.

The overall results also do not confirm the view that gender influences all the cognitive biases generated by the representativeness heuristic, as shown in Table 10, since this variable was only significant in the insensitivity to predictability model. Based on our sample, males are more likely than females to make intuitive predictions based on insufficient information, expanding Ohlert and Weissenberger’s (2015) view that males are more susceptible to representativeness heuristic biases, such as base rate insensitivity.

Furthermore, we do not find across all the models that this susceptibility increases with age, as evidenced by Koehler (1996), since the age variable only has a positive influence on the base rate insensitivity, illusion of validity, and misconceptions of chance models. However, this finding is consistent with the results of Dohmen et al. (2010) in that age may influence some biases, such as risk aversion, but not others.

Finally, corroborating the view that differences in educational level may influence cognitive biases (Chen et al., 2007), our results show that educational levels, especially specialization and master’s degrees, which are significant across several models, are negatively associated with representativeness heuristic biases. Thus, our results corroborate the view that individuals with higher educational levels are less likely to be affected by heuristics, such as representativeness (Khan et al., 2017).

5 Conclusion

Understanding cognitive biases can help minimize cognitive failures: knowing their influence on information processing improves decision-making capacity, since errors can be corrected once they are known. Thus, this study aimed to investigate whether cognitive ability influences the occurrence of cognitive biases generated by the representativeness heuristic.

We verified that there is an inverse relationship between cognitive ability and the base rate insensitivity and illusion of validity biases, indicating that the higher the cognitive ability, the lower the incidence of these biases in decision making. Further analyses also expand this negative influence to misconceptions of chance, showing that higher cognitive ability negatively influences the occurrence of this representativeness heuristic bias.

Additionally, we identify a lower degree of difficulty in the three CRT items for individuals with CRT = 3; however, individuals with CRT = 0 also perceived the test as easy. The higher the CRT score, the higher the perception that judgment and decision making are related to reason.

Regarding gender, a difference was observed for all biases except insensitivity to predictability, corroborating Ohlert and Weissenberger’s (2015) findings, which demonstrate that the tendency to be subject to the base rate fallacy, one of the representativeness heuristic biases, decreases in line with people’s preference for processing information analytically. This preference is, in turn, significantly related to gender, with men showing significantly fewer judgment errors than women.

For the level of education, the biases impacted by this variable were base rate insensitivity and insensitivity to sample size, confirming the idea that educational level affects individuals’ behavior in the decision-making process, as reported by Bellouma and Belaid (2016).

For the regions, the biases affected by this variable were base rate insensitivity and illusion of validity. We identified that respondents residing in the Midwest, Southeast, and South regions are less sensitive to these biases than those residing in the Northeast and North regions, which may be explained by cultural factors.

In summary, the results of this study indicate that accounting students and professionals, like any other individual, are subject to cognitive biases of the representativeness heuristic. Thus, it is necessary for these individuals to understand and avoid them, in order to fulfill their main purpose, which is to provide useful information to stakeholders. It should be emphasized that the findings of this study confirm that individuals with high cognitive ability can act unconsciously.

However, based on the view that individuals with a higher educational level tend to solve all the CRT items (Stieger & Reips, 2016), we consider one of the limitations of our study to be that the analyzed sample may tend to present higher levels of CRT scores. This occurs because the majority of the participants are undergraduate students, graduate professionals, and master or doctoral students.

Another limitation is that we do not perform procedures for selecting a random sample. Thus, our results are only valid for the sample analyzed. Furthermore, in avoiding an extensive questionnaire, another limitation is that we chose only one measure of cognitive ability (CRT score). Hence, we recommend that future studies examine other alternatives for this test, such as the Wonderlic Personnel Test (WPT), the Need for Cognition scale (NFC), and the self-reported SAT and ACT.

For future studies, we also suggest applying the study instrument (Appendix A) to individuals from different courses to verify whether the findings are consistent or whether there are differences due to their area of education. A different study could apply the questionnaire to any area of study to verify the incidence of biases in judgment and decision-making processes. We also suggest using experiments to investigate whether situations that simulate reality and include control variables also evidence the incidence of representativeness heuristic biases, as well as developing and applying a new instrument that addresses the cognitive biases of the availability and anchoring-and-adjustment heuristics.

Finally, it is important to highlight that our results may be subject to confounding effects. However, we believe that the statistical approach used in the study avoided this problem, even though it was not possible to obtain an effectively random sample.

Despite mitigating possible confounding effects of gender and academic background on our dependent (cognitive biases) and independent variables (cognitive ability), when we test specific cognitive biases, it is possible that other biases are interrelated. Thus, future studies could explore this by examining possible confounding effects between cognitive biases.

References

  • Almeida, S. (2019). Do as i do, not as i say: Incentivization and the relationship between cognitive ability and risk aversion. Revista Brasileira de Economia, 73(4), 413-434.
  • Barrouillet, P. (2011). Dual-process theories of reasoning: The test of development. Developmental Review, 31(2-3), 151-179.
  • Bellouma, M., & Belaid, F. (2016). Decision-making of working capital managers: A behavioral approach. Journal of Business Studies Quarterly, 7(4), 31-43.
  • Bialek, M., & Pennycook, G. (2018). The cognitive reflection test is robust to multiple exposures. Behavior Research Methods, 50(5), 1-7.
  • Birnberg, J. G. (2011). A proposed framework for behavioral accounting research. Behavioral Research in Accounting, 23(1), 1-43.
  • Birnberg, J. G., Luft, J., & Shields, M. D. (2007). Psychology theory in management accounting research. In C. Chapman, A. G. Hopwood, & M. D. Shields (Eds.), Handbooks of Management Accounting Research, (Vol. 1, pp. 113-135). Amsterdam, Netherlands: Elsevier Science.
  • Browne, M., Pennycook, G., Goodwin, B., & McHenry, M. (2014). Reflective minds and open hearts: Cognitive style and personality predict religiosity and spiritual thinking in a community sample. European Journal of Social Psychology, 44(7), 736-742.
  • Chen, G., Kim, K. A., Nofsinger, J. R., & Rui, O. (2007). Trading performance, disposition effect, overconfidence, representativeness bias, and experience of emerging market investors. Journal of Behavioral Decision Making, 20(4), 425-451.
  • Dohmen, T., Falk, A., Huffman, D., & Sunde, U. (2010). Are risk aversion and impatience related to cognitive ability? American Economic Review, 100(3), 1238-1260.
  • Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223-241.
  • Erceg, N., & Bubic, A. (2017). One test, five scoring procedures: Different ways of approaching the cognitive reflection test. Journal of Cognitive Psychology, 29(3), 381-392.
  • Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42.
  • Hogarth, R. M. (1975). Cognitive processes and the assessment of subjective probability distributions. Journal of the American Statistical Association, 70(350), 271-289.
  • Hoppe, E. I., & Kusterer, D. J. (2011). Behavioral biases and cognitive reflection. Economics Letters, 110(2), 97-100.
  • Joyce, E. J., & Biddle, G. C. (1981). Are auditors’ judgments sufficiently regressive? Journal of Accounting Research, 19(2), 323-349.
  • Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449-1475.
  • Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. Heuristics and biases: The psychology of intuitive judgment. 49-81. http://dx.doi.org/10.1017/CBO9780511808098.004
    » http://dx.doi.org/10.1017/CBO9780511808098.004
  • Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430-454.
  • Kane, T. J., & Staiger, D. O. (2002). The promise and pitfalls of using imprecise school accountability measures. The Journal of Economic Perspectives, 16(4), 91-114.
  • Kang, M., & Park, M. J. (2018). Employees’ judgment and decision making in the banking industry: The perspective of heuristics and biases. International Journal of Bank Marketing, 37(1), 382-400.
  • Khan, H. H., Naz, I., Qureshi, F., & Ghafoor, A. (2017). Heuristics and stock buying decision: Evidence from Malaysian and Pakistani stock markets. Borsa Istanbul Review, 17(2), 97-110.
  • Kimura, H. (2003). Aspectos comportamentais associados às reações do mercado de capitais. RAE-eletrônica, 2(1), 1-14.
  • Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19(1), 1-17. https://doi.org/10.1017/S0140525X00041157
    » https://doi.org/10.1017/S0140525X00041157
  • Liberali, J. M., Reyna, V. F., Furlan, S., Stein, L. M., & Pardo, S. T. (2011). Individual differences in numeracy and cognitive reflection, with implications for biases and fallacies in probability judgment. Journal of Behavioral Decision Making, 25(4), 361-381.
  • Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2010). 50 Great myths of popular psychology: Shattering widespread misconceptions about human behavior. Hong Kong: Wiley-Blackwell.
  • McDowell, M. E., Occhipinti, S., & Chambers, S. K. (2013). The influence of family history on cognitive heuristics, risk perceptions, and prostate cancer screening behavior. Health Psychology, 32(11), 1158-1169. https://doi.org/10.1037/a0031622
    » https://doi.org/10.1037/a0031622
  • Moritz, B., Siemsen, E., & Kremer, M. (2014). Judgmental forecasting: Cognitive reflection and decision speed. Production and Operations Management Society, 23(7), 1146-1160.
  • Mussweiler, T., & Englich, B. (2005). Subliminal anchoring: Judgmental consequences and underlying mechanism. Organizational Behavior and Human Decision Processes, 98(2), 133-143.
  • Oechssler, J., Roider, A., & Schmitz, P. W. (2009). Cognitive abilities and behavioral biases. Journal of Economic Behavior & Organization, 72(1), 147-152.
  • Ohlert, C. R., & Weissenberger, B. E. (2015). Beating the base-rate fallacy: An experimental approach on the effectiveness of different information presentation formats. Journal of Management Control, 26(1), 51-80.
  • Ramiah, V., Zhao, Y., Moosa, I., & Graham, M. (2014). A behavioral finance approach to working capital management. The European Journal of Finance, 22(8-9), 1-26.
  • Ross, R. M., Hartig, B., & McKay, R. (2017). Analytic cognitive style predicts paranormal explanations of anomalous experiences but not the experiences themselves: Implications for cognitive theories of delusions. Journal of Behavior Therapy and Experimental Psychiatry, 56, 90-96.
  • Simonovic, B., Stupple, E. J. N., Gale, M., & Sheffield, D. (2017). Stress and risky decision making: Cognitive reflection, emotional learning or both. Journal of Behavioral Decision Making, 30(2), 658-665.
  • Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645-726.
  • Stanovich, K. E., West, R. F., & Toplak, M. E. (2011). The complexity of developmental predictions from dual process models. Developmental Review, 31(1-2), 103-118.
  • Stieger, S., & Reips, U.-D. (2016). A limitation of the Cognitive Reflection Test: Familiarity. PeerJ, 4, 1-12.
  • Tekçe, B., & Yilmaz, N. (2015). Are individual stock investors overconfident? Evidence from an emerging market. Journal of Behavioral and Experimental Finance, 5, 35-45.
  • Thaler, R. H. (2016). Behavioral economics: Past, present, and future. American Economic Review, 106(7), 1577-1600.
  • Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39(7), 1275-1289.
  • Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
  • Uribe, R., Manzur, E., & Hidalgo, P. (2013). Exemplars’ impacts in marketing communication campaigns. Journal of Business Research, 66(10), 1787-1790.
  • Veeraraghavan, K. (2010). Role of behavioral finance: A study. International Journal of Enterprise and Innovation Management Studies, 1(3), 109-112.
  • Wang, Z., Jusup, M., Shi, L., Lee, H., Iwasa, Y., & Boccaletti, S. (2018). Exploiting a cognitive bias promotes cooperation in social dilemma experiments. Nature Communications, 9(1), 1-7.
  • West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506-519.
  • Evaluation process:
    Double Blind Review
  • Copyrights:
    RBGN owns the copyrights of this published content
  • Plagiarism analysis:
    RBGN performs plagiarism analysis on all its articles at the time of submission and after approval of the manuscript using the iThenticate tool.

Appendix A - Research Instrument

Appendix B - Correlation Matrix

Variable                          1       2       3       4       5       6       7       8       9      10      11      12      13      14
1.  CRT score                     1
2.  Base rate insensitivity  -0.200       1
3.  Insens. to sample size    0.029   0.337       1
4.  Misconceptions of chance -0.100   0.344   0.346       1
5.  Regression fallacy       -0.328   0.646   0.288   0.220       1
6.  Illusion of validity     -0.207   0.313   0.243   0.324   0.251       1
7.  Insens. to predictability-0.015   0.214   0.173   0.231   0.141   0.244       1
8.  Gender                    0.214  -0.090  -0.009  -0.043  -0.082  -0.032   0.061       1
9.  Age                       0.061  -0.087  -0.036  -0.016  -0.057   0.053   0.028   0.151       1
10. Technical level           0.011   0.038  -0.003  -0.027  -0.001  -0.043   0.020   0.027   0.030       1
11. Completed graduation      0.008   0.072   0.002   0.020   0.036  -0.003  -0.017  -0.015   0.161  -0.032       1
12. Specialization            0.046  -0.051  -0.047  -0.061  -0.003  -0.009  -0.023   0.039   0.326  -0.024  -0.218       1
13. Master’s degree           0.053  -0.114  -0.035  -0.057  -0.093  -0.014   0.076   0.058   0.294  -0.018  -0.168  -0.124       1
14. Ph.D. degree              0.090  -0.102   0.003  -0.016  -0.122  -0.025  -0.006   0.056   0.252  -0.010  -0.097  -0.072  -0.055       1
Bold coefficients are statistically significant at the 5% level.
  • Responsible Editor:
    Prof. Dr. Joelson Sampaio

Publication Dates

  • Publication in this collection
    26 Apr 2021
  • Date of issue
    Jan-Mar 2021

History

  • Received
    01 Nov 2018
  • Accepted
    07 Jul 2020
Fundação Escola de Comércio Álvares Penteado, Av. da Liberdade, 532, 01.502-001, São Paulo, SP, Brazil. Phone: (+55 11) 3272-2340, (+55 11) 3272-2302
E-mail: rbgn@fecap.br