Impairments of facial detection in tobacco use disorder: baseline data and impact of smoking duration

Abstract

Objective:  Chronic tobacco consumption, classified as tobacco use disorder (TUD), has been associated with a variety of health problems. Investigations of face processing in TUD are hampered by a lack of evidence. Here, we evaluated facial detection in TUD and assessed the test-retest reliability of a facial detection task.

Methods:  Participants were instructed to indicate the location (left or right) of the face when a face/non-face pair was presented on the monitor screen; stimulus presentation was controlled by Bayesian entropy estimation. Bland-Altman analysis and intraclass correlation coefficients were used to test the reliability of the task. The general linear model and Bayesian statistics were then used to evaluate differences between the TUD group (n=48) and healthy controls (n=34).

Results:  The reliability of the task was high for the 96-stimulus presentations. Slower reaction times (p < 0.001) and a lower discrimination index (p < 0.001) were observed in the TUD group than in healthy controls. Mediation analysis indicated direct effects of smoking duration on reaction time (p < 0.001) and discrimination index (p < 0.001).

Conclusions:  Overall, we observed high reliability of this task and reduced facial detection performance in tobacco use disorder. We conclude that our findings are relevant to public health initiatives and call for follow-up studies.

Keywords: Face processing; face detection; visual processing; smoking; addiction; tobacco use disorder; psychiatry


Introduction

Cigarettes contain many harmful substances, such as nicotine. One of the main assumptions regarding nicotine (and, consequently, tobacco) addiction is that binding of nicotine to nicotinic acetylcholine receptors (nAChRs) in the brain upregulates these receptors, increasing their number.1-3

Chronic tobacco consumption, identified as tobacco use disorder (TUD) according to the DSM criteria, is a public health problem4 and has also been associated with a variety of health problems.5 Some authors report that smoking and TUD appear to be linked to visual impairments.6-8 Their main findings indicated that healthy nonsmokers performed better than heavy smokers in several domains of early-stage visual processing (e.g., contrast processing and chromatic discrimination). An unanswered question, however, is whether TUD affects not only form and color perception, but also facial detection.

Face detection requires extraction of the features that are common to faces.9 The ability to detect and, subsequently, process the details of a visual scene is determined by the ability of the visual system to isolate and characterize differences in contrast, color, and shape, to cite just a few.10,11 Thus, the ability to detect faces in visual scenes involves the detection of variations not only in facial characteristics,9 but also in environmental and health conditions. In addition, it is related to the perceptive capacity of the observer. Face processing also involves the decoding of low, medium, and high spatial frequencies. Facial detection differs from higher-level aspects of face processing (i.e., those involving both sensory and cognitive processes), such as recognition, identification, and expression.12 However, impairments in facial detection can lead to impairments in facial recognition.13

Impairments in facial detection in smokers can affect their daily functioning and quality of life. de Almeida et al.13 investigated the effects of smoking on facial detection and concluded that (i) awareness of the presence of a face can be understood as the core of higher-order processes, and (ii) heavy smokers performed worse than healthy controls. However, that study had some drawbacks: a small sample size, the absence of correlations between duration of smoking and the findings, and limited generalizability to individuals with TUD. With this in mind, our main purpose was to extend and replicate these past findings.

Chronic smoking can result in tobacco addiction, and the interactions of tobacco compounds with neurotransmitters such as dopamine, acetylcholine, and glutamate can affect visual processing.14 These substances affect the functioning of different structures related to visual processing and facial detection, such as the retina, the fusiform gyrus, the primary visual cortex, and the prefrontal cortex.15

The present research had two objectives: a) to evaluate the reproducibility and test-retest reliability of a facial detection task; and b) to investigate whether participants with TUD present impairments in facial detection. This research was divided into two studies. Study 1 describes the evaluation of baseline values (using 48-, 96-, and 122-stimulus presentations) and of the reproducibility and reliability of the facial detection task, providing coefficients of repeatability and intraclass correlation coefficients. Study 2 describes the investigation of facial detection in healthy nonsmokers and participants with TUD. The aim was to investigate the effects of TUD on individual face detection ability. The study hypothesis was that TUD would be associated with impairments of facial detection, and that these impairments would be related to duration of smoking.

Study 1: Baseline data and reliability of a facial detection task

Method

Participants

Healthy nonsmokers (n=57) had no ocular abnormalities, as confirmed by funduscopic or optical coherence tomography examination. Vision was normal or corrected-to-normal (visual acuity of at least 20/20). The participants were aged 19-40 years (mean ± standard deviation [SD] = 30.3±6.8), and participation was voluntary and non-remunerated.

All participants were screened for cognitive impairment using the Mini-Mental State Examination (MMSE), scored above the cutoff point,16 and did not satisfy any of the criteria for specific disorders according to the Structured Clinical Interview for DSM-5 (SCID).17 The exclusion criteria were: age < 20 or > 45 years (to avoid adding confounding variables associated with the effects of aging on visual processing), history of neurological disorder, cardiovascular disease, head trauma, or chronic exposure to solvents, and current use of medications that may affect visual processing and cognition. These participants were part of a large database (involving different countries) investigating the test-retest reliability of different software or tasks.

The participants self-reported having no caffeine dependence or withdrawal. They were asked to abstain from caffeine-containing products beginning at midnight prior to the start of testing.7

Stimuli

The stimuli were presented on an Apple iMac (21.5-inch screen, 1,920 × 1,080 pixels, refresh rate of 60 Hz). All measurements were performed with binocular vision. Monitor luminance was controlled and calibration was performed using DisplayCAL software.

Facial stimuli were randomly taken from the Fundação Educacional Inaciana (FEI) database,18 an adaptation of the Face Recognition Technology (FERET) database,19 which has been widely used in facial recognition research. An oval mask was applied to remove visual cues (e.g., hair, texture). Gaussian smoothing was applied to each image and luminance was averaged to a grayscale value of 127 (Figure 1A). The image and screen backgrounds were set to this same value. Faces were segmented (200 × 240 pixels) into 30 squares, each with a size of 33 × 40 pixels, and each square was rotated, with equal probabilities, by 45°, 90°, or 180° to create the non-faces. For further information about the stimuli used herein, see Comfort et al.20
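
To make the stimulus-construction step above concrete, the following MATLAB sketch scrambles a grayscale face into blocks of the reported size, rotating each block by 45°, 90°, or 180° with equal probability. It is an illustrative reconstruction, not the authors' original script: the input file name is a placeholder, it assumes the Image Processing Toolbox, the tiling is naive (so the block count may differ slightly from the 30 squares reported), and the luminance step simply re-centers the mean at the mid-gray value.

```matlab
% Illustrative sketch of the non-face construction (not the original script).
img = imread('fei_face_example.jpg');          % hypothetical FEI image file
if size(img, 3) == 3, img = rgb2gray(img); end % ensure grayscale
face = imresize(im2double(img), [240 200]);    % 200 x 240 pixels (width x height)

blockW = 33; blockH = 40;                      % block size reported in the text
angles = [45 90 180];                          % rotations with equal probability
nonFace = face;
for row = 1:blockH:size(face, 1) - blockH + 1
    for col = 1:blockW:size(face, 2) - blockW + 1
        block = face(row:row+blockH-1, col:col+blockW-1);
        ang = angles(randi(3));
        nonFace(row:row+blockH-1, col:col+blockW-1) = ...
            imrotate(block, ang, 'bilinear', 'crop');   % keep block size
    end
end

% Re-center mean luminance at the mid-gray background (127 on a 0-255 scale)
nonFace = nonFace - mean(nonFace(:)) + 127/255;
imshow([face, nonFace]);                       % face (left), non-face (right)
```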

Figure 1
Illustration of our study design. Examples of the created faces and non-faces (A), training trials (B), and procedure (C) used in the facial detection task. DI = discrimination index; RT = reaction time.

We opted to use Bayesian entropy estimation to avoid bias error (the difference between the estimate and the true value) and mean squared error (the expected squared difference between the estimate and the true value).21 This design is an interesting approach for psychophysical studies22; it is as rigorous as the staircase method, and we expected a reduction of response bias. Based on the criteria proposed by Kontsevich & Tyler,23 the task followed a psi-probit method. The procedure is adaptive (i.e., each stimulus presentation depends on the participant’s previous responses). According to the proposed method, each trial consists of nine steps (run automatically in MATLAB): 1) calculate the probability of eliciting a response given the stimulus; 2) estimate the probability for the next trial based on Bayes’ rule; 3) estimate the entropy of this probability; 4) estimate the expected response; 5) estimate the minimum expected entropy; 6) run a trial; 7) keep the posterior probability from step 2; 8) find an estimate based on the posterior probability distribution; and 9) return to step 1 until the specified number of stimulus presentations is reached.
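
As an illustration of the nine steps above, the following MATLAB sketch implements a minimal psi-probit loop for a simulated 2-AFC observer. The parameter grids, guess and lapse rates, and the simulated observer are our assumptions for demonstration only; in the actual task the adapted quantity was the presentation time of the face/non-face pair, whereas the sketch uses abstract stimulus intensities.

```matlab
% Minimal sketch of the psi method (Kontsevich & Tyler, 1999) with a probit
% psychometric function; grids and the simulated observer are assumptions.
alphas = linspace(0.05, 1, 40);          % candidate thresholds
betas  = linspace(0.5, 10, 25);          % candidate slopes
xs     = linspace(0.05, 1, 60);          % candidate stimulus intensities
guess  = 0.5;                            % 2-AFC guess rate
lapse  = 0.02;                           % lapse rate

probit = @(z) 0.5 * erfc(-z ./ sqrt(2)); % cumulative Gaussian
psych  = @(x, a, b) guess + (1 - guess - lapse) .* probit(b .* (x - a));

[A, B] = ndgrid(alphas, betas);
prior  = ones(size(A)) / numel(A);       % flat prior over (threshold, slope)
trueAlpha = 0.3; trueBeta = 4;           % simulated observer (assumption)

for t = 1:96                             % 96 stimulus presentations
    % Steps 1-5: expected posterior entropy for every candidate stimulus
    H = zeros(size(xs));
    for k = 1:numel(xs)
        pC = psych(xs(k), A, B);                     % P(correct | parameters)
        pS = sum(prior(:) .* pC(:));                 % P(correct)
        postS = prior .* pC ./ pS;                   % posterior if correct
        postF = prior .* (1 - pC) ./ (1 - pS);       % posterior if wrong
        H(k) = pS * (-sum(postS(:) .* log(postS(:) + eps))) + ...
               (1 - pS) * (-sum(postF(:) .* log(postF(:) + eps)));
    end
    [~, kBest] = min(H);                             % minimum expected entropy
    x = xs(kBest);
    % Step 6: run a (simulated) trial
    correct = rand < psych(x, trueAlpha, trueBeta);
    % Steps 7-9: keep the posterior as the next prior and continue
    if correct
        prior = prior .* psych(x, A, B);
    else
        prior = prior .* (1 - psych(x, A, B));
    end
    prior = prior / sum(prior(:));
end
thresholdEstimate = sum(prior(:) .* A(:));           % posterior mean threshold
fprintf('Estimated threshold: %.3f\n', thresholdEstimate);
```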

Procedure

The test was directly followed by a retest, and participants completed the facial detection task on the same day. The 48 presentations consisted of 24 face images (12 male and 12 female) and 24 non-face images (12 male and 12 female). Three conditions were used: 48, 96, and 122 stimulus presentations. Bayesian adaptive estimation23 was used for stimulus presentation. The presentation times ranged from 16.7 to 3,006 ms. This variation in presentation times helped avoid possible learning effects or response bias. The training phase consisted of a few trials in which the faces or non-faces had suprathreshold contrast values (i.e., high contrast and no Gaussian smoothing) and were presented without time limits (Figure 1B). The task was run in MATLAB version R2016a (MathWorks, Natick, USA) using the PsychToolbox.24,25

Facial detection task

The participants were instructed to maintain fixation on a small black cross at the center of the monitor. A two-alternative forced-choice (2-AFC) method was used. The participants’ task was to indicate, using the keyboard, the location (left or right) of the face in each face/non-face pair (Figure 1C). The detection task began only after all participants understood the procedure.

The presentation time in each trial was selected to yield the maximum expected information for the prediction of the expected mean threshold (a minimum of 75% correct responses, following the criteria proposed by Kontsevich & Tyler23). The threshold is the minimum amount of stimulus energy, or contrast, required for detection. Accordingly, the presentation time varied across trials depending on the responses (i.e., correct or incorrect) in order to reach this criterion. The ocular distance to the screen was set to 100 cm. In the facial detection task, once a response was given, the experiment progressed to the next trial.

Statistical analysis

For each condition, data distribution was presented using measures of central tendency and dispersion. Distributions for each group were compared using the Monte Carlo method for skewness and kurtosis, with a cutoff value of > 1.96.26,27 Statistical analysis was performed using SPSS version 23.0 and MATLAB version R2018b.

To explore test-retest bias, paired t-tests were used for each of the three database presentations (Bonferroni-corrected). To evaluate test-retest agreement, a Bland-Altman analysis was conducted. More specifically, for each pair of databases, the following measures were calculated: mean, SD, limits of agreement (LoAs), 95% confidence intervals (95%CIs) of the mean, 95%CIs of the LoAs, and coefficient of repeatability (COR). Such measures are fundamental for reporting the Bland-Altman indices correctly. We calculated the intraclass correlation coefficient (ICC) using a two-way random-effects model with absolute agreement. For a more detailed description of the Bland-Altman indices or the test reliability measure (ICC), see Fernandes et al.28
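
For transparency, the following MATLAB sketch reproduces the agreement and reliability indices described above: bias, limits of agreement and their confidence intervals, COR, and ICC(2,1) for a two-way random-effects, absolute-agreement, single-measures model. The test/retest vectors are simulated placeholders rather than the study data, and the COR is taken here as 1.96 × the SD of the differences, one common definition.

```matlab
% Sketch of Bland-Altman indices and ICC(2,1); data are simulated placeholders.
rng(1);
n = 57;                                    % Study 1 sample size
test   = 0.5 + 0.10 * randn(n, 1);         % simulated test thresholds
retest = test + 0.05 * randn(n, 1);        % simulated retest thresholds

% --- Bland-Altman indices ---
d      = test - retest;
bias   = mean(d);                          % mean test-retest difference
sdD    = std(d);
loa    = bias + [-1.96 1.96] * sdD;        % 95% limits of agreement
ciBias = bias + [-1.96 1.96] * sdD / sqrt(n);        % 95%CI of the bias
seLoa  = sdD * sqrt(3 / n);                % approximate SE of each LoA
ciLoaLow  = loa(1) + [-1.96 1.96] * seLoa; % 95%CI of the lower LoA
ciLoaHigh = loa(2) + [-1.96 1.96] * seLoa; % 95%CI of the upper LoA
cor    = 1.96 * sdD;                       % coefficient of repeatability

% --- ICC(2,1): two-way random effects, absolute agreement, single measures ---
Y = [test retest];                         % n subjects x k = 2 sessions
[nS, k] = size(Y);
grandM = mean(Y(:));
MSR = k  * var(mean(Y, 2));                % between-subjects mean square
MSC = nS * var(mean(Y, 1));                % between-sessions mean square
res = Y - mean(Y, 2) - mean(Y, 1) + grandM;
MSE = sum(res(:).^2) / ((nS - 1) * (k - 1));
icc = (MSR - MSE) / (MSR + (k - 1) * MSE + (k / nS) * (MSC - MSE));

fprintf('Bias = %.3f, LoA = [%.3f, %.3f], COR = %.3f, ICC(2,1) = %.3f\n', ...
        bias, loa(1), loa(2), cor, icc);
```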

Ethics statement

This research followed the ethical principles of the Declaration of Helsinki and was approved by the ethics committee of Centro de Ciências da Saúde, Universidade Federal da Paraíba (CAAE 60944816.3.0000.5188). Written informed consent was obtained from all participants.

Results

Descriptive statistics for each database

Detailed descriptive statistics of the databases are presented in Table 1. Mean values for the three databases were comparable between test and retest measurements. All of the retest sessions presented lower means and SDs, a difference that may have resulted from fatigue or unknown factors.

Table 1
Descriptive statistics of the task test-retest measures

Reproducibility of the facial detection task

The paired t-tests for the three database presentations showed significant differences for the 48-stimulus (t[56] = 2.292, p = 0.025, Hedges’ g = 0.303 [95%CI 0.084 to 0.655]) and 122-stimulus (t[56] = 2.188, p = 0.033, Hedges’ g = 0.290 [95%CI 0.100 to 0.637]) presentations. No significant differences were observed for the 96-stimulus presentation (t[57] = 0.841, p = 0.409).

Bayesian statistics for the 48-stimulus presentations indicated that the data were approximately 1.60 times more likely to occur under the alternative hypothesis (H1). The error percentage was < 0.001%, which indicated good stability of the algorithm used to obtain the results (BF10 = 1.60, δ = 0.39, with a central 95% credible interval for δ of 0.038 to 0.749). The maximum value for robustness indicated evidence for H1 over H0 (BF10 = 2.479). The same pattern was observed for the 122-stimulus presentations (BF10 = 1.30, δ = 0.36, with a central 95% credible interval for δ of 0.036 to 0.736; maximum value for robustness, BF10 = 2.112). No significant differences were found for the 96-stimulus presentation (BF10 = 0.040; maximum value, BF10 = 0.998).

Bland-Altman values for the 48-, 96-, and 122-stimulus presentations are given for all test-retest combinations in Table 2. The mean test-retest differences deviated only slightly from zero, indicating good reproducibility of the task, with the exception of the 122-stimulus presentations (mean deviation of 0.7 units). The means varied from 0.02 to 0.07. The upper limits for the 48-stimulus (0.18) and 96-stimulus (0.24) presentations were similar, again with the exception of the 122-stimulus presentations (0.57). The same pattern was observed for the lower limits, which ranged from -0.13 to -0.42; again, the 122-stimulus presentations presented higher absolute values. The 95%CIs of the LoAs were larger for the 122-stimulus presentations. The CORs for each database were 0.16 (48-stimulus presentations), 0.23 (96-stimulus presentations), and 0.49 (122-stimulus presentations). Although the COR for the 122-stimulus presentations was higher than the others, this was very likely due to measurement noise, since the other indices for the 122-stimulus presentation condition also deviated from those reported for the 48- and 96-stimulus presentations. The coefficients of variation were also larger for this condition (Table 1).

Table 2
Bland-Altman indices and intraclass correlation coefficients for the three databases

Reliability of the facial detection task

Results of the facial detection reliability analysis revealed low-to-moderate ICCs for the 48- (0.08; 95%CI -0.57 to 0.45), 96- (0.29; 95%CI -0.24 to 0.59), and 122-stimulus (0.14; 95%CI -0.36 to 0.48) presentations. These results can be explained by taking into account that the ICC is estimated by relating within- and between-participant measurement variance. Thus, for a homogeneous sample, the ICC is expected to be low. This can be further observed in Fernandes et al.,28 in which a sample of color vision-deficient observers yielded higher ICC values due to its greater heterogeneity compared to normal trichromats. Considering the presence of test-retest bias in the 48- and 122-stimulus presentations and the reported indices, using 96 presentations can avoid the bias of learning effects (48 presentations) or fatigue (122 presentations). In view of this, we opted to employ 96-stimulus presentations in our second study.

Study 2: Facial detection in tobacco use disorder

Method

Participants

Forty-eight healthy nonsmokers (mean age = 30.22 years; SD = 7.58 years) and 34 participants with tobacco addiction (mean age = 32.84 years; SD = 7.85 years) were recruited from the general population. Participants were aged 20 to 45 years and had no retinal or other ocular impairments, as self-reported and based on previous examinations. All participants were screened for possible cognitive impairment (baseline measures) using the MMSE.16

All of the smokers met the DSM-5 criteria for TUD, currently smoked > 20 cigarettes/day, and had a score > 7 on the Fagerström Test for Nicotine Dependence (FTND).29 Smokers were allowed to smoke until the beginning of the experiment (as in our previous studies) and were free from cognitive disorders.7 None of the participants with TUD had comorbidities such as attention disorders, and none had used nicotine patches in recent years. In addition, the participants with TUD reported no withdrawal symptoms or attempts to stop smoking.

Participants were excluded if they were unable to complete the session for any reason (e.g., lack of motivation; n=4) or fulfilled the criteria for any other substance use disorder (e.g., alcohol; n=2). The participants did not have ocular diseases and had been examined by an ophthalmologist during the previous 12 months. Female participants were tested outside the luteal phase. All of the healthy controls fulfilled the criteria for never-smokers (lifetime consumption of < 15 cigarettes).30 Based on the DSM-5 criteria,17 the exclusion criterion of substance use disorder did not apply to tobacco in the TUD group (although other substance use disorders were not allowed). The use of medications that might affect cognitive processing (e.g., benzodiazepines) was also an exclusion criterion. The groups were matched for gender, age, and education level. These participants were part of a moderate-scale database investigating TUD, and some of the controls had participated in previous tasks.6,7

Stimuli and procedures

The stimuli were run on an Acer 8565U computer with an NVIDIA MX130 graphics card and presented on a 17-inch LED monitor with 1,366 × 768 resolution and a refresh rate of 85 Hz. All of the measurements were performed with binocular vision. Monitor luminance was controlled and calibrations were performed using DisplayCAL (ArgyllCMS, displaycal.net).

The stimuli were the same as those used in Study 1 (Figure 1). First, we explained the purpose of this research and described the testing protocol in detail. Then, the participants underwent the facial detection task. The participants were allowed to take breaks as desired. The task consisted of 96-stimulus presentations.

Statistical analysis

For each condition, the distribution of data was presented using measures of central tendency and dispersion. The data distributions were assessed for normality by comparing values of skewness and kurtosis. Statistical analysis was performed using SPSS version 23.0 and MATLAB version R2018b.

Parametric statistical tests were used to analyze the data. To compare groups on the nominal variable gender, the nonparametric chi-square test was conducted. For between-group comparisons of demographic variables, the independent-samples t-test was used. Hedges’ g was used to assess effect sizes for the t-tests.
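
As a minimal sketch of the effect-size computation mentioned above, the following MATLAB code computes Hedges' g (Cohen's d with a small-sample correction) for two independent groups; the group vectors are simulated placeholders, not the study data.

```matlab
% Hedges' g for two independent groups; data below are simulated placeholders.
x1 = randn(48, 1);                               % e.g., healthy nonsmokers
x2 = randn(34, 1) + 0.5;                         % e.g., participants with TUD
n1 = numel(x1); n2 = numel(x2);

sPooled = sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2));
d = (mean(x2) - mean(x1)) / sPooled;             % Cohen's d
J = 1 - 3 / (4 * (n1 + n2) - 9);                 % small-sample correction factor
g = J * d;                                       % Hedges' g
fprintf('Hedges'' g = %.3f\n', g);
```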

A multivariate analysis of variance (MANOVA) was conducted to analyze the results of the detection task for reaction time and discrimination index (two dependent variables). There was homogeneity of variance-covariance matrices (Box’s M test). No multicollinearity was observed. The absence of multivariate outliers was checked by assessing Cook’s distance (cutoff of 4/[n - k - 1]).

Canonical discriminant analysis was used as a post-hoc test. Post-hoc analyses were conducted using Bonferroni correction. Omega squared (ω² = [SSb − dfb × MSw]/[SSt + MSw], where SSb = between-groups sum of squares, dfb = between-groups degrees of freedom, MSw = within-groups mean square, and SSt = total sum of squares) was used to assess effect sizes, since ω² reduces bias.31
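
The omega-squared formula above can be computed directly from a univariate ANOVA table; the MATLAB sketch below (assuming the Statistics and Machine Learning Toolbox and using simulated placeholder data) illustrates this for a single outcome such as reaction time.

```matlab
% Omega squared from a one-way ANOVA table; data are simulated placeholders.
rng(2);
rt    = [randn(48, 1); randn(34, 1) + 1];              % simulated outcome
group = [repmat({'HC'}, 48, 1); repmat({'TUD'}, 34, 1)];

[~, tbl] = anova1(rt, group, 'off');                   % suppress the ANOVA figure
SSb = tbl{2, 2};  dfb = tbl{2, 3};                     % between-groups SS and df
MSw = tbl{3, 4};  SSt = tbl{4, 2};                     % within-groups MS, total SS
omega2 = (SSb - dfb * MSw) / (SSt + MSw);
fprintf('omega^2 = %.3f\n', omega2);
```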

Product-moment and point-biserial correlation analyses were performed to test for associations between demographics and the results of the facial detection task. The authors hypothesized that TUD (X) would influence performance (Y) on the facial detection task through the mediator duration of smoking in years (M). Data were resampled 5,000 times32 using the SPSS macro PROCESS (model 4).
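
The mediation analysis itself was run with the PROCESS macro; as a transparent illustration of the underlying logic, the MATLAB sketch below estimates a simple mediation model (X = group, M = smoking years, Y = task outcome) and bootstraps the indirect effect a×b with 5,000 resamples. The data are simulated placeholders, and the sketch uses percentile rather than bias-corrected and accelerated confidence intervals.

```matlab
% Simple mediation with a percentile bootstrap of the indirect effect (a*b).
% Data are simulated placeholders; PROCESS itself was used in the analyses.
rng(3);
n = 82;
X = [zeros(48, 1); ones(34, 1)];               % group (0 = control, 1 = TUD)
M = 10 * X + randn(n, 1);                      % smoking duration in years
Y = 0.05 * M + 0.02 * X + 0.10 * randn(n, 1);  % task outcome (e.g., reaction time)

nBoot = 5000;
ab = zeros(nBoot, 1);
for b = 1:nBoot
    idx = randi(n, n, 1);                      % resample cases with replacement
    Xb = X(idx); Mb = M(idx); Yb = Y(idx);
    aPath = [ones(n, 1) Xb] \ Mb;              % path a: X -> M
    bPath = [ones(n, 1) Mb Xb] \ Yb;           % paths b and c': M, X -> Y
    ab(b) = aPath(2) * bPath(2);               % indirect effect for this resample
end
abSorted = sort(ab);
ci = abSorted(round([0.025 0.975] * nBoot));   % percentile 95%CI
cTotal = [ones(n, 1) X] \ Y;                   % total effect c
fprintf('Indirect effect = %.3f, 95%%CI [%.3f, %.3f], total effect c = %.3f\n', ...
        mean(ab), ci(1), ci(2), cTotal(2));
```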

Ethics statement

This research followed the ethical principles of the Declaration of Helsinki and was approved by the ethics committee of Centro de Ciências da Saúde, Universidade Federal da Paraíba (CAAE 60944816.3.0000.5188). Written informed consent was obtained from all participants.

Results

The groups did not differ in age (t[80] = 1.735, p = 0.087), level of education (t[80] = 0.482, p = 0.632), or gender (χ²[1] = 0.455, p = 0.654). The main characteristics of the sample are shown in Table 3.

Table 3
Demographic characteristics of the sample (n=82)

The MANOVA indicated significant differences between groups for the facial detection task (F[2,79] = 27.93, p < 0.001, Pillai’s trace = 0.414, ω² = 0.82 [95%CI 0.58 to 1.10]). Participants with TUD had slower reaction times (p < 0.001, Hedges’ g = 1.34; 95%CI 0.86 to 1.84) and a lower discrimination index (p < 0.001, Hedges’ g = 1.38; 95%CI 0.90 to 1.89) in the facial detection task. The main results are shown in Figure 2.

Figure 2
Results of the facial detection task. The horizontal line displays the median, the boxes represent the 25th to 75th percentiles, and the whiskers represent the range. Circles represent individual participant means for reaction time (A) and discrimination index (B).

Bayesian statistics were also calculated. Differences between the TUD group and healthy controls were found for reaction time (BF10 = 9.94, error < 0.008%; this indicated high evidence in favor of H1 over H0, with posterior R² = 0.27). The same pattern was found for the discrimination index (BF10 = 5.93, error < 0.002%; this indicated high evidence in favor of H1 over H0, with posterior R² = 0.30).

No correlations were found between age and reaction time (r = 0.07, p = 0.46), level of education and reaction time (r = 0.28, p = 0.03), or gender and reaction time (r = -0.19, p = 0.14). The same pattern was found for discrimination index (all p-values > 0.05). Nevertheless, duration of smoking (years) correlated both with reaction time (r = 0.62, p < 0.001; 95%CI 0.34 to 0.78) and with discrimination index (r = -0.49, p = 0.02; 95%CI -0.64 to -0.06). Vovk-Sellke maximum odds were 31.98 and 4.70 for reaction time and discrimination index, respectively. The main results are shown in Figure 3.

Figure 3
Scatter diagram for the facial detection task. Solid lines represent the regression lines. Dotted lines represent 95% confidence interval (95%CI) curves. Circles represent individual means for the tobacco use disorder (TUD) group. The left panels show data for reaction time (A, top) and discrimination index (B, bottom). The right panels show the residuals for reaction time (A, top) and discrimination index (B, bottom).

The total effect on reaction time was significant (c = 0.100, bias-corrected and accelerated [BCa] 95%CI 0.067 to 0.137, z = 5.89, p < 0.001; standard error [SE] = 0.017). A direct effect was not observed (c’ = 0.018, BCa 95%CI -0.073 to 0.120, z = 0.46, p = 0.46; SE = 0.038). However, an indirect effect was observed (a1b1 = 0.084, BCa 95%CI 0.002 to 0.178, z = 2.362, p = 0.018; SE = 0.038).

With regard to the discrimination index, total (c = 5.509, BCa 95%CI 3.714 to 6.987, z = 6.32, p < 0.001; SE = 0.871) and direct effects (c’ = 5.528, BCa 95%CI 1.954 to 8.346, z = 2.72, p = 0.006; SE = 2.032) were observed. However, an indirect effect was not observed (a1b1 = 0.020, BCa 95%CI -2.680 to 2.721, z = 0.011, p = 0.089; SE = 1.834).

Discussion

Our main purpose was to investigate the effects of TUD on individuals’ face detection ability and to provide baseline values for a facial detection task. Study 1 assessed baseline and reproducibility data for the facial detection task. The findings indicated that the task could be carried out with 48-stimulus presentations, but additional analysis indicated that this condition was affected by temporal factors or measurement errors. In contrast, the 96-stimulus presentations showed an absence of test-retest bias (i.e., absence of temporal, learning, or fatigue effects). Furthermore, both the Bland-Altman indices and the ICC parameters (Table 2) indicated that the task was reliable, especially when using 96- rather than 48- or 122-stimulus presentations.

The findings from Study 2 indicated that individuals with TUD performed worse on the facial detection task (both reaction time and discrimination index) than nonsmokers. As expected, duration of smoking (in years) affected the performance of individuals with TUD and had a direct effect on the outcomes when used as a mediator. As this was a cross-sectional study, the results of the mediation analyses could be influenced by other factors that we are unable to account for (e.g., homogeneity of the sample). This approach therefore calls for further studies using other mediators.

Considering past findings on the relationship between TUD and visual impairments,7 it is possible that long-term smokers have some degree of impairment in facial detection, particularly related to the early stages of visual processing. Although no neural mechanisms supporting this hypothesis are yet known, an explanation can be inferred from some of the alterations induced by the components of tobacco. These alterations are related to the synthesis, release, or uptake of neurotransmitters in primary and secondary areas of the visual cortex, or in areas responsible for facial processing.

These findings are relevant to the understanding of the processing of low- and high-level visual stimuli in terms of contrast sensitivity and chromatic discrimination using noise-adapted stimuli, which could involve the three visual pathways (magno-, parvo-, and koniocellular).33,34

Impairments in early-stage visual processing can be associated with the activation of several cortical areas involved in face processing (e.g., superior temporal sulcus, occipital lobe face area, right mid-fusiform gyrus, and fusiform area),9 whether in the case of low-level stimuli (line drawing of faces) or high-level stimuli (hair and texture), although activation in those areas may not be necessarily related to early-stage visual processing.9 Future studies – preferably, controlled clinical trials – are needed to replicate the results reported herein.

As noted above, there is no detailed pharmacological or physiological explanation for our findings, but this study provides relevant information. First, it is important to take into account that activity of the CYP2A6 enzyme can influence nicotine metabolism in males and females,1,14 and this could lead to biases in face detection. No differences associated with the participants’ demographics were observed, and the effects could not be distinguished from those attributable to chronic tobacco consumption. As we did not assess serum cotinine concentrations35 or carbon monoxide36,37 in all participants, and our sample size did not allow this direct association, a more extensive investigation of smoking status and of any possible confounding factors is necessary.

Investigations of early impairments in visual processing are necessary, and the use of nicotine replacement therapies (e.g., gum or patches) may help our understanding of some of the changes in cognitive processing triggered by nicotine use. Such knowledge may help in the development and promotion of policies to directly help individuals with TUD before tobacco-related impairments affect their visuocognitive processing abilities. The use and development of drugs that mitigate the effects of abstinence (or craving) and have direct actions on nAChRs may also help improve smoking cessation efforts.38

Future research should include other variables (smoking intensity, cessation, presence of comorbidities) and seek to replicate our findings in larger samples. Other biochemical measures should be used and correlated with serum cotinine or carbon monoxide. Addressing some of the questions raised by our findings may help promote policies that act directly on tobacco addiction. It may also help explain why some populations have high rates of tobacco consumption.39

In summary, the ability to detect, recognize, and process faces constitutes one of the key neurocognitive bases for many social phenomena, e.g., social attribution.40 Our findings suggest, without establishing causal relationships, that the impairments observed in the facial detection ability of individuals with TUD are substantial enough to warrant further research. We trust our findings will contribute to future studies on whether TUD affects other aspects of facial processing (e.g., facial discrimination or facial recognition), as well as to future research into nicotine replacement therapy for smoking cessation or even into the use of nicotine41,42 for patients with low vision.

Acknowledgements

Financial support for this study was provided by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq; 305258/2019-2) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

The authors thank Pamela D. Butler for proofreading and helping improve this article with valuable comments.

References

  • 1 Govind AP, Vezina P, Green WN. Nicotine-induced upregulation of nicotinic receptors: underlying mechanisms and relevance to nicotine addiction. Biochem Pharmacol. 2009;78:756-65.
  • 2 Levin ED. Nicotinic receptors in the nervous system. New York: CRC Press; 2001.
  • 3 Metherate R. Nicotinic acetylcholine receptors in sensory cortex. Learn Mem. 2004;11:50-9.
  • 4 World Health Organization (WHO). WHO report on the global tobacco epidemic, 2015: raising taxes on tobacco. Geneva: WHO; 2015.
  • 5 Brody AL. Functional brain imaging of tobacco use and dependence. J Psychiatr Res. 2006;40:404-18.
  • 6 Fernandes TM, Silverstein SM, de Almeida NL, Dos Santos NA. Psychophysical evaluation of contrast sensitivity using Gabor patches in tobacco addiction. J Clin Neurosci. 2018;57:68-73.
  • 7 Fernandes TP, Silverstein SM, Almeida NL, Santos NA. Visual impairments in tobacco use disorder. Psychiatry Res. 2018;271:60-7.
  • 8 Kunchulia M, Pilz KS, Herzog MH. Small effects of smoking on visual spatiotemporal processing. Sci Rep. 2014;4:7316.
  • 9 Tsao DY, Livingstone MS. Mechanisms of face perception. Annu Rev Neurosci. 2008;31:411-37.
  • 10 Maher S, Ekstrom T, Tong Y, Nickerson LD, Frederick B, Chen Y. Greater sensitivity of the cortical face processing system to perceptually-equated face detection. Brain Res. 2016;1631:13-21.
  • 11 Ohayon S, Freiwald WA, Tsao DY. What makes a cell face-selective? The importance of contrast. Neuron. 2012;74:567-81.
  • 12 Chen Y, Norton D, Ongur D, Heckers S. Inefficient face detection in schizophrenia. Schizophr Bull. 2008;34:367-74.
  • 13 de Almeida NL, Fernandes TMP, Comfort WEM, dos Santos NA. Tobacco use and lower accuracy in a novel facial detection task. Psychol Neurosci. 2018;11:393-403.
  • 14 Balfour DJ, Munafò MR. The neuropharmacology of nicotine dependence. New York: Springer; 2015.
  • 15 Karama S, Ducharme S, Corley J, Chouinard-Decorte F, Starr JM, Wardlaw JM, et al. Cigarette smoking and thinning of the brain’s cortex. Mol Psychiatry. 2015;20:778-85.
  • 16 Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189-98.
  • 17 American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). Arlington: American Psychiatric Publishing; 2013.
  • 18 Thomaz CE, Giraldi GA. A new ranking method for principal components analysis and its application to face image analysis. Image Vis Comput. 2010;28:902-13.
  • 19 Phillips PJ, Wechsler H, Huang J, Rauss PJ. The FERET database and evaluation procedure for face-recognition algorithms. Image Vis Comput. 1998;16:295-306.
  • 20 Comfort WE. Viewing face detection and individuation in the context of spatial frequency and schizophrenia [dissertation]. São Bernardo do Campo: Universidade Federal do ABC; 2015.
  • 21 Archer E, Park IM, Pillow JW. Bayesian entropy estimation for countable discrete distributions. J Mach Learn Res. 2014;15:2833-68.
  • 22 Teufel C, Subramaniam N, Fletcher PC. The role of priors in Bayesian models of perception. Front Comput Neurosci. 2013;7:25.
  • 23 Kontsevich LL, Tyler CW. Bayesian adaptive estimation of psychometric slope and threshold. Vision Res. 1999;39:2729-37.
  • 24 Brainard DH. The psychophysics toolbox. Spat Vis. 1997;10:433-6.
  • 25 Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997;10:437-42.
  • 26 Antonius R. Interpreting quantitative data with SPSS. London: SAGE Publications; 2003.
  • 27 Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Allyn & Bacon/Pearson Education; 2007.
  • 28 Fernandes TP, Santos NA, Paramei GV. Cambridge Colour Test: reproducibility in normal trichromats. J Opt Soc Am A. 2020;37:A70-80.
  • 29 Heatherton TF, Kozlowski LT, Frecker RC, Fagerström KO. The Fagerström test for nicotine dependence: a revision of the Fagerström tolerance questionnaire. Br J Addict. 1991;86:1119-27.
  • 30 Pomerleau CS, Pomerleau OF, Snedecor SM, Mehringer AM. Defining a never-smoker: results from the nonsmokers survey. Addict Behav. 2004;29:1149-54.
  • 31 Lakens D. Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front Psychol. 2013;4:863.
  • 32 Hayes AF. Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. New York: Guilford Press; 2013.
  • 33 Fernandes TP, Shaqiri A, Brand A, Nogueira RL, Herzog MH, Roinishvili M, et al. Schizophrenia patients using atypical medication perform better in visual tasks than patients using typical medication. Psychiatry Res. 2019;275:31-8.
  • 34 Santos NA dos, Simas ML de B. Contrast sensitivity function: indicator of the visual perception of form and of the spatial resolution. Psicol Reflex Crit. 2001;14:589-97.
  • 35 Yamazaki H, Horiuchi K, Takano R, Nagano T, Shimizu M, Kitajima M, et al. Human blood concentrations of cotinine, a biomonitoring marker for tobacco smoke, extrapolated from nicotine metabolism in rats and humans and physiologically based pharmacokinetic modeling. Int J Environ Res Public Health. 2010;7:3406-21.
  • 36 Cunnington AJ, Hormbrey P. Breath analysis to detect recent exposure to carbon monoxide. Postgrad Med J. 2002;78:233-7.
  • 37 Robinson JC, Forbes WF. The role of carbon monoxide in cigarette smoking. I. Carbon monoxide yield from cigarettes. Arch Environ Health. 1975;30:425-34.
  • 38 Benowitz NL. Nicotine addiction. N Engl J Med. 2010;362:2295-303.
  • 39 Fernandes TM, de Andrade MJ, Santana JB, Nogueira RM, Dos Santos NA. Tobacco use decreases visual sensitivity in schizophrenia. Front Psychol. 2018;9:288.
  • 40 Todorov A, Olivola CY, Dotsch R, Mende-Siedlecki P. Social attributions from faces: determinants, consequences, accuracy, and functional significance. Annu Rev Psychol. 2015;66:519-45.
  • 41 Fernandes TP, Hovis JK, Almeida N, Souto JJS, Bonifacio TA, Rodrigues S, et al. Effects of Nicotine Gum Administration on Vision (ENIGMA-Vis): study protocol of a double-blind, randomized, and controlled clinical trial. Front Hum Neurosci. 2020;14:314.
  • 42 Almeida NL, Rodrigues SJ, Gonçalves LM, Silverstein SM, Sousa IC, Gomes GH, et al. Opposite effects of smoking and nicotine intake on cognition. Psychiatry Res. 2020;293:113357.

Publication Dates

  • Publication in this collection
    28 Sept 2020
  • Date of issue
    Jul-Aug 2021

History

  • Received
    6 May 2020
  • Accepted
    21 June 2020