Validation studies on migraine diagnostic tools for use in nonclinical settings: a systematic review

Estudos de validação de ferramentas de diagnóstico de enxaqueca (migrânea) para uso em ambientes não clínicos: uma revisão sistemática

Abstract

Background  Migraine underdiagnosis and undertreatment are so widespread that it is essential to diagnose migraine sufferers in nonclinical settings. A systematic review of validation studies on migraine diagnostic tools applicable to nonclinical settings can help researchers and practitioners in tool selection decisions.

Objective  To systematically review and critically assess published validation studies on migraine diagnostic tools for use in nonclinical settings, as well as to describe their diagnostic performance.

Methods  A multidisciplinary workgroup followed transparent and systematic procedures to collaborate on this work. PubMed, Medline, and Web of Science were searched for studies up to January 17, 2022. The QUADAS-2 was employed to assess methodological quality, and the quality thresholds adopted by the Global Burden of Disease (GBD) study were used to tailor the signaling questions.

Results  From 7,214 articles identified, a total of 27 studies examining 19 tools were eligible for inclusion. No tool was supported by high-quality evidence for migraine diagnosis in nonclinical settings. The diagnostic accuracy of the ID-migraine, the structured headache questionnaire, and the HARDSHIP questionnaire was supported by moderate-quality evidence, with sensitivity and specificity above 70%. Of these, the HARDSHIP questionnaire was the most extensively validated. The evidence for the remaining 16 tools regarding migraine diagnosis in nonclinical populations was of poor quality.

Conclusions  To date, the HARDSHIP questionnaire is the optimal choice for diagnosing migraine in nonclinical settings, with satisfactory diagnostic accuracy supported by moderate methodological quality. This work also reveals the crucial next step: further high-quality validation studies in diverse nonclinical population groups.

Keywords: Migraine Disorders; Diagnosis; Sensitivity and Specificity; Systematic Review

Resumo

Antecedentes  O sub-diagnóstico e o subtratamento da enxaqueca são tão difundidos que é essencial diagnosticar os portadores de enxaqueca em ambientes não-clínicos. Uma revisão sistemática dos estudos de validação das ferramentas de diagnóstico da enxaqueca aplicáveis a ambientes não-clínicos pode ajudar os pesquisadores e profissionais nas decisões de seleção de ferramentas.

Objetivo  Revisar sistematicamente e avaliar criticamente estudos de validação publicados sobre ferramentas de diagnóstico da enxaqueca para uso em ambientes não-clínicos, bem como descrever seu desempenho diagnóstico.

Métodos  Um grupo de trabalho multidisciplinar seguiu procedimentos transparentes e sistemáticos para colaborar neste trabalho. PubMed, Medline e Web of Science foram pesquisados por estudos até 17 de janeiro de 2022. O QUADAS-2 foi empregado para avaliar a qualidade metodológica, e os limiares de qualidade adotados pelo estudo Global Burden of Disease (GBD) foram usados para adaptar as questões de sinalização.

Resultados  De 7.214 artigos identificados, um total de 27 estudos examinando 19 ferramentas foram elegíveis para inclusão. Nenhuma ferramenta foi apoiada por evidências de alta qualidade para o diagnóstico de enxaqueca em ambientes não-clínicos. A precisão diagnóstica do ID-Migraine, do questionário estruturado de cefaleia e do questionário HARDSHIP foi apoiada por evidências de qualidade moderada, com sensibilidade e especificidade acima de 70%. Deles, o questionário HARDSHIP foi o mais amplamente validado. As evidências das 16 ferramentas restantes para o diagnóstico de enxaqueca em populações não-clínicas foram de má qualidade.

Conclusões  Até agora, o questionário HARDSHIP é a escolha ideal para o diagnóstico da enxaqueca em ambientes não-clínicos, com precisão diagnóstica satisfatória apoiada por uma qualidade metodológica moderada. Este trabalho revela o próximo passo crucial, que é a realização de mais estudos de validação de alta qualidade em diversos grupos populacionais não-clínicos.

Palavras-chave: Transtornos de Enxaqueca; Diagnóstico; Sensibilidade e Especificidade; Revisão Sistemática

INTRODUCTION

Migraine ranks as the second leading cause of disability worldwide according to the 2017 Global Burden of Disease (GBD) study.1 Even though migraine does not cause death,2 this condition leads to 45.1 million (95% uncertainty interval [UI]: 29 to 62.8) disability-adjusted life years (DALYs) each year and is responsible for an age-standardized DALY rate of 599 (95% UI: 386 to 833) per 100,000 population.3 That is equivalent to 45.1 million years of healthy life lost each year. It has been estimated that approximately 2% of the global gross domestic product is lost annually due to migraine.4 However, despite the debilitating effects of migraine, more than half of migraine patients have never consulted a medical practitioner,5 and more than two-thirds have not received any treatment.6

Therefore, considering the low disease awareness, it is essential to allow more patients to be diagnosed in nonclinical settings. Several systematic reviews of migraine identification tools have been published, but their inclusion criteria were restricted to tools that support clinical decisions by primary care practitioners.7, 8 Even though advanced digital diagnostic tools, such as wearable headsets and machine learning programs, have appeared recently,9 the diagnosis of migraine remains largely reliant on physician interpretation. The performance of currently available migraine diagnostic tools that are usable in nonclinical settings is unclear. We attempted to bridge this gap by conducting a systematic review and providing tool selection advice for researchers and practitioners.

Although evidence-based International Classification of Headache Disorders (ICHD) criteria are available, they are intended for professional use only.10 This is because technical concepts in the criteria, such as photophobia and phonophobia, are not easily understood by lay respondents. If a study intends to apply a diagnostic tool for migraine in nonclinical settings, the tool must be validated to demonstrate that it performs acceptably in comparison with the “gold standard”.11 The “gold standard” for migraine diagnosis has been widely accepted as a clinical diagnosis made by a neurologist, based on the latest ICHD criteria, after physical examination and review of the patient’s medical history,12 as there is no objective biological or instrumental marker for the diagnosis of migraine.13

A systematic review of validation studies can aid in understanding existing evidence on diagnostic tools for use in nonclinical settings. As a result, we performed this systematic review with the objectives of 1) assessing the methodological quality of published validation studies on migraine diagnostic tools that have been reported to be usable in nonclinical settings, and 2) describing their diagnostic accuracy, including sensitivity, specificity, positive predictive value (PPV), or negative predictive value (NPV).

Following this introduction, the Methods section details the methods for this systematic review. Next, the Results section presents the findings, followed by the Discussion, which addresses the findings and the quality issues of the existing evidence. Finally, the conclusions on tool selection and suggestions for future work are provided.

METHODS

We followed the Cochrane guidelines14 for methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for reporting. The data obtained were secondary; therefore, ethical approval was not required. The protocol for this systematic review was registered on the International Prospective Register of Systematic Reviews (PROSPERO), under registration ID CRD42021296848.

Multidisciplinary workgroup

In August 2021, a workgroup was formed for this systematic review; it included five academics with extensive knowledge of public health, one academic with expertise in statistics, and one neurologist with practical experience. Additionally, methodologists and medical librarians were involved as supporting members.

From August 2021 to April 2022, the workgroup and supporting members met at least once a week, either face to face or virtually, to conceptualize the research framework, establish objectives and eligibility criteria, search for evidence, appraise quality, integrate and analyze evidence, and conclude.

Eligibility criteria

The inclusion and exclusion criteria are illustrated in ►Table 1. We examined validation studies focused on tools for migraine diagnosis, classification, or screening (hereinafter referred to as diagnostic tools) in nonclinical settings, whose eligible users were adults (≥ 18 years old). The “gold standard” reference was a clinical diagnosis made by a neurologist who relied on the ICHD criteria and was blinded to the tool’s diagnosis.

Table 1
Inclusion and exclusion criteria

Information sources and search strategies

Prior to commencing, a search was conducted to ensure that we were not unnecessarily duplicating a review already performed by other scholars. Studies published from the inception of the databases until January 17, 2022, were searched in three electronic databases: PubMed, Medline, and Web of Science. To avoid missing any relevant studies, subject terms from the controlled vocabulary were combined with free-text terms. The complete search strings for the three databases are presented in ►Table S1 (Supplementary Material, available online only). Additional articles were manually located from the citations and references of the included studies.

Study screening

The Endnote 20 (Clarivate Analytics, London, UK) software was used for screening, removing duplicates, and recording. After the searches and removal of duplicates, two independent reviewers (DW and HNZ), blinded to each other's decisions, determined whether each article met the aforementioned eligibility criteria by studying titles and abstracts. After this initial screening, full-text articles were reviewed by at least two workgroup members (DW and RRT). Any discrepancies between the two reviewers regarding the inclusion or exclusion of studies were resolved by consensus after discussion with a third reviewer (LPW).

Methodological quality assessment

The methodological quality of the included studies was appraised using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2), which is divided into two parts: risk of bias and applicability. According to its developers, the signaling questions in the QUADAS-2 should be tailored to the subject of the review.15 Therefore, the quality thresholds for population-based studies on headache established by the GBD studies (►Table S2, Supplementary Material)16 were applied to tailor the signaling questions in the present study (►Table S3, Supplementary Material).

Each QUADAS-2 domain was assessed, and each study was given a rating of “high risk/concern,” “low risk/concern,” or “unclear.” For the overall rating of risk of bias or applicability, an overall “low risk of bias” or “low concern regarding applicability” was given to a study rated “low” in all domains, and an overall “at risk of bias” or “concerns regarding applicability” was given to a study rated “high” or “unclear” in one or more domains.15 Furthermore, we classified “quality” into 3 groups: “high quality” (overall “low risk of bias” in combination with overall “low concern regarding applicability”), “moderate quality” (one domain receiving “high” or “unclear” risk of bias in combination with overall “low concern regarding applicability”), and “poor quality” (all other rating combinations). Two independent reviewers (DW and YC) appraised the methodological quality of the included studies. To resolve any disagreements, a third reviewer (TL) was invited.
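
As a rough illustration of the grouping rule described above, the following minimal Python sketch renders it as a decision procedure. The function and variable names are our own (hypothetical), not part of the QUADAS-2 tool or the original analysis.

```python
# Hypothetical sketch of the quality grouping used in this review:
# four QUADAS-2 risk-of-bias domains and three applicability domains,
# each rated "low", "high", or "unclear".

RISK_DOMAINS = ["participant selection", "index test", "reference standard", "flow and timing"]
APPLICABILITY_DOMAINS = ["participant selection", "index test", "reference standard"]

def overall_quality(risk: dict, applicability: dict) -> str:
    low_concern = all(applicability[d] == "low" for d in APPLICABILITY_DOMAINS)
    flawed = [d for d in RISK_DOMAINS if risk[d] != "low"]
    if low_concern and not flawed:
        return "high quality"      # low risk of bias in every domain, low applicability concern
    if low_concern and len(flawed) == 1:
        return "moderate quality"  # exactly one domain rated "high" or "unclear" risk of bias
    return "poor quality"          # all other rating combinations

# Example with made-up ratings (not those of any included study):
print(overall_quality(
    {"participant selection": "low", "index test": "unclear",
     "reference standard": "low", "flow and timing": "low"},
    {"participant selection": "low", "index test": "low", "reference standard": "low"},
))  # -> "moderate quality"
```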

Data collection

Data collection forms were constructed and piloted for the included articles. The fields extracted from each study included tool characteristics (name, aim, and language), first author, year of publication, sample characteristics (sample size and participant demographics), reference standard, time interval, and diagnostic accuracy (sensitivity, specificity, PPV, and NPV). Two reviewers (DW and MKAK) performed data extraction independently. Any disagreements were identified and resolved by another reviewer (LPW). Missing accuracy statistics were calculated and supplemented using the RevMan (Cochrane, London, UK) software, version 5.4, and all outcomes were double-checked and recalculated. Pooled data were presented when possible.
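
The supplemented accuracy statistics relate to the underlying 2×2 counts through standard definitions. The sketch below is our illustration with made-up counts, not a reproduction of the RevMan output; it shows how sensitivity, specificity, PPV, and NPV are obtained when a tool is compared against the neurologist's reference diagnosis.

```python
# Illustrative only: standard diagnostic accuracy measures from a 2x2 table,
# where the reference standard is the neurologist's ICHD-based diagnosis.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # share of reference-positive cases the tool labels positive
        "specificity": tn / (tn + fp),  # share of reference-negative cases the tool labels negative
        "ppv": tp / (tp + fp),          # probability of migraine given a positive tool result
        "npv": tn / (tn + fn),          # probability of no migraine given a negative tool result
    }

# Hypothetical counts (not data from any included study):
print(diagnostic_accuracy(tp=80, fp=20, fn=20, tn=80))
# -> {'sensitivity': 0.8, 'specificity': 0.8, 'ppv': 0.8, 'npv': 0.8}
```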

RESULTS

Literature search results

Figure 1 describes the PRISMA flow diagram. The search retrieved 7,213 publications, of which 3,362 duplicates were excluded. A manual search yielded 1 additional article. After eligibility was determined based on titles and abstracts, 119 papers remained for full-text review. Finally, 27 studies, published between 1991 and 2022, were included.

Figure 1
PRISMA flow diagram of the study screening process.

Tool description

In total, 19 tools have been reported as being able to diagnose migraine in adults in nonclinical contexts. The characteristics of the studies, arranged by tool name, are detailed in ►Table 2. Among them, 14 tools were designed for the diagnosis of migraine overall (ID-migraine,17, 18, 19 extended version of ID-migraine,20 MS-Q,21, 22, 23 simple questionnaire,24 Michel’s standardized migraine diagnosis questionnaire,25 diagnostic headache diary,26 DMQ3,27 ID-CM,28 HUNT,29 HUNT3,30 HUNT4,31 self-administered headache questionnaire,32 HARDSHIP questionnaire,33, 34, 35, 36 and POEM37), 9 tools for migraine with aura (extended version of ID-migraine,20 visual aura rating scale,38, 39 DMQ3,27 Finnish migraine-specific questionnaire,40 LUMINA,41 HUNT3,30 HUNT4,31 Italian ICHD-II-based questionnaire,42 and POEM37), and 4 tools for migraine without aura (DMQ3,27 Finnish migraine-specific questionnaire,40 Italian ICHD-II-based questionnaire,42 and POEM37). The structured headache questionnaire43 and the ID-CM28 can determine chronic migraine, and the self-administered headache questionnaire32 can recognize a combination of migraine and tension-type headaches.

Table 2
Characteristics of included studies

The HARDSHIP questionnaire has been validated by 4 studies,33, 34, 35, 36 the ID-migraine17, 18, 19 and MS-Q21, 22, 23 by 3 each, and the visual aura rating scale has been validated by 2 studies.38, 39 All of the other tools have been validated by a single study.

Study description

Among the included research papers, 10 studies conducted the validation by enrolling a general population sample,20, 26, 28, 29, 30, 31, 33, 35, 36, 43 2 by enrolling university students,18, 25 14 by enrolling patients,19, 21, 22, 23, 24, 27, 32, 34, 37, 38, 39, 40, 41, 42 and 1 by enrolling workers.17 Approximately 37% (10 out of 27) of the studies involved probability sampling or a census,19, 21, 23, 24, 33, 35, 36, 40, 42, 43 whereas the remainder involved nonprobability sampling. In total, 17,198 individuals took part in the 27 validation studies, with sample sizes ranging from 4926 to 9,346.21 The mean age varied between 22.0318 and 58.4,31 although 11 studies failed to provide this information. The percentage of female participants was higher than that of males among the studies reporting sex ratios.

The 27 validation studies covered 17 languages, with English being the most frequent.26, 28, 37, 41 Among cross-cultural works that required translation, adaptation, and validation, 8 studies implemented backward-translation verification,17, 23, 33, 34, 35, 36, 39, 43 whereas 4 studies did not.18, 19, 25, 26 A great number of studies (n = 15) administered the migraine diagnostic tools through self-completed questionnaires;20, 21, 22, 23, 25, 27, 28, 29, 30, 31, 32, 37, 39, 40, 41 9 administered the tools through interviews by headache experts or trained interviewers,17, 26, 33, 34, 35, 36, 38, 42, 43 and the remaining 3 did not specify how the validation was conducted.18, 19, 24 The reference standard of the included studies was a clinical diagnosis based on the ICHD, editions 1, 2, 3β, or 3, depending on when the validation was completed. The time interval between the tool diagnosis and the reference standard was less than 1 month in 11 studies,17, 19, 21, 22, 23, 25, 32, 34, 38, 39, 42 whereas in the others it was more than 1 month or was not mentioned.

Quality assessment

A summary of the methodological quality assessment of each study is presented in ►Table 3. Overall, all studies were “at risk of bias”: 63% of them in the participant selection domain,17, 18, 20, 22, 25, 26, 27, 28, 29, 30, 31, 32, 34, 37, 38, 39, 41 70.4% in the index test domain,18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 37, 38, 39, 40, 41 37% in the reference standard domain,19, 20, 22, 23, 28, 31, 37, 38, 40, 41 and 63% in the flow and timing domain.18, 20, 24, 26, 27, 28, 29, 30, 31, 33, 35, 36, 37, 40, 41, 42, 43 Moreover, 51.9% of the studies were identified as having “concerns regarding applicability”,19, 21, 22, 23, 24, 27, 32, 34, 37, 38, 39, 40, 41, 42 with the participant selection domain being the dominant cause. ►Figure 2 depicts the cumulative bar plot of the included studies’ risks of bias and applicability concerns.

Table 3
Methodological quality assessment for each study according to the QUADAS-2

Figure 2
Cumulative bar plot of included studies’ risks of bias and applicability concerns.

Diagnostic accuracy

Table 4 gives the diagnostic accuracy of these tools for migraine diagnosis. Because the majority of the studies were of poor quality, caution should be exercised when considering pooled data; thus, no meta-analysis was performed. The sensitivity ranged from 24%29 to 100%,42 while the specificity ranged from 29%26 to 100%.20, 29, 40, 42 According to the GBD criteria, diagnostic tools for migraine with both sensitivity and specificity ≥ 70% are desirable.16 Among the included studies, 19 studies validating 14 tools exhibited sensitivity and specificity above 70%,17, 20, 21, 22, 23, 24, 27, 28, 32, 33, 34, 36, 37, 38, 39, 40, 41, 42, 43 with 3 of them reporting both above 90%.27, 38, 42 However, due to the different cutoffs for a migraine-positive diagnosis among the tools, a direct comparison of diagnostic accuracy was challenging.

Table 4
Summary of studies reporting on the diagnostic accuracy of migraine diagnostic tools in non-clinical settings

Whether a study has good methodological quality determines whether it can generate unbiased estimates of diagnostic accuracy.14 It should be noted that no tool has been supported by high-quality evidence regarding use in nonclinical circumstances. The diagnostic accuracy of the ID-migraine,17 the structured headache questionnaire,43 and the HARDSHIP questionnaire33, 34, 35, 36 has been supported by moderate-quality evidence, with satisfactory sensitivity and specificity. Of these, the HARDSHIP questionnaire was the most extensively validated. The evidence for the remaining 16 tools regarding use in nonclinical populations has been of poor quality; thus, their diagnostic performance should be generalized with caution.

DISCUSSION

Summary of findings

This systematic review identified 27 studies that validated 19 tools currently used for migraine diagnosis without the need for a doctor consultation. For use in nonclinical settings, no tool has been supported by high-quality evidence; the diagnostic accuracy of 3 tools (the HARDSHIP questionnaire, the ID-migraine, and the structured headache questionnaire) has been supported by moderate-quality evidence, and the remaining tools are supported only by studies providing poor-quality evidence. The quality assessment findings are largely consistent with those of a previously published systematic review focusing on chronic headache disorders.7

The ID-migraine, structured headache questionnaire, and HARDSHIP questionnaire have demonstrated satisfactory diagnostic accuracy in nonclinical settings, supported by moderate-quality evidence. The HARDSHIP questionnaire has been the most widely validated of all. In poor-quality studies, evidence for diagnostic accuracy is limited by certain shortcomings; as these quality issues have the potential to impair the robustness of these studies, we caution against extrapolating outcomes. Considering the evidence mentioned above, it is suggested that the HARDSHIP questionnaire is the optimal choice for diagnosing migraine in nonclinical settings to date.

Public health significance

Underdiagnosis and undertreatment of migraine are common, largely due to its trivialization.44 Furthermore, migraine is stigmatized, and people typically conceal migraine attacks due to guilt about missing work and fear of workplace retaliation and dismissal.45 Patients themselves are also an obstacle to better care, usually due to mistrust in doctors’ abilities; this may be related to the fact that few individuals contact their physicians regarding this matter and, hence, are unable to benefit from medical expertise or available treatments.46

This necessitates the advancement of a migraine diagnostic tool that allows for effective case detection in nonclinical contexts, such as the community or the workplace. Guidelines suggest that migraine diagnostic tools designed for use in nonclinical settings should be validated against the “gold standard” in populations from these settings,11 because the diagnostic accuracy of a tool may vary with the population being tested, the target context, and many other factors.14 The present systematic review provides sufficient detail about existing migraine diagnostic tools for application in nonclinical settings. Its public health significance lies in the fact that it can inform researchers’ and practitioners’ decisions on how to choose and use these tools, promoting earlier diagnosis, initiation of appropriate treatment, and reduction of the disease burden.

Quality issues of existing evidence and recommendations for future research

High risk of bias and/or applicability concerns are important limitations of the robustness of a study.14 Among the included studies, self-administration of the diagnostic tool, delay between the tool’s diagnosis and the reference standard, lack of representativeness of nonclinical populations, absence of blinding, and poor study flow were the leading sources of risk of bias, while participant selection was the leading source of concern regarding applicability in nonclinical settings.

Self-administered diagnosis, which was frequently employed in the included studies,20, 21, 22, 23, 25, 27, 28, 29, 30, 31, 32, 37, 39, 40, 41 may introduce information bias because, compared with face-to-face or telephone interviews, it does not allow for question clarification, assistance for respondents with low literacy, or participant engagement.11 Furthermore, as migraine progression between the tool diagnosis and the “gold standard” is likely to vary, the time interval between them should preferably be less than 1 month.11 In terms of sampling methods, some studies enrolled patients from clinical settings,19, 21, 22, 23, 24, 27, 32, 34, 37, 38, 39, 40, 41, 42 preliminarily screen-positive subjects,17, 41 or case-control samples,32, 37 whose participants had more typical or more extreme symptoms, resulting in inflated sensitivity and specificity estimates.14 A further quality issue is that several studies17, 18, 20, 25, 26, 28, 29, 30, 31 recruited unrepresentative convenience/volunteer samples, despite drawing from nonclinical circumstances, which could introduce selection bias. Also, a low participation rate (< 70%) cannot guarantee representativeness.11 Next, in studies in which there was no blinding between the tool’s diagnosis and the reference standard,19, 20, 22, 23, 28, 31, 37, 38, 40, 41 the interpretation of the tools’ results could be influenced by knowledge of the reference standard results.15 Additionally, in several studies, not all participants received the same reference standard: some had a face-to-face clinical interview with a neurologist, while others had a telephone interview,18, 26, 42 which may lead to biased estimates of diagnostic performance.47

Even though all of the included tools were reported to be applicable to nonclinical settings, we identified a primary applicability concern with respect to participant selection: the participants in 14 validation studies19, 21, 22, 23, 24, 27, 32, 34, 37, 38, 39, 42 were healthcare users, who were more likely to be disabled and to have rehearsed their medical histories. They did not match our target nonclinical population, resulting in a lack of external validity.11 Furthermore, cross-cultural validation was lacking for some tools.18, 19, 25, 26

This systematic review seeks to provide relevant and up-to-date information on the use of migraine diagnostic tools in nonclinical contexts, as well as to uncover knowledge gaps. A crucial next step is to conduct more high-quality validation studies against the “gold standard” in diverse samples of the nonclinical population.

It is suggested that future studies enhance their methodological quality, with particular attention to interview administration, time interval, sampling methods, response rate, blinding, and study flow. Most importantly, tools should be validated in the general population. Moreover, the diversity of the global population, particularly in terms of ethnicity, culture, and language, warrants cross-cultural validation.

Strengths and limitations

This is, to the best of our knowledge, the first systematic review of studies validating migraine diagnostic tools applicable to nonclinical settings. Multidisciplinary workgroup collaboration, a combination of comprehensive search strategies across multiple electronic databases and manual searches, an explicit and systematic methodology, and rigorous quality assessment are the strengths of our systematic review.

However, this work has several limitations. The first is the inclusion of only English-language peer-reviewed articles. Certain studies were also excluded because they did not report diagnostic accuracy; however, when the authors provided specific data, such as prevalence, we were able to calculate some outcomes. Also, the possibility of publication bias cannot be ruled out. In addition, quantitative synthesis and data comparison were not straightforward due to the quality of the evidence and the heterogeneity of the included studies. The various cutoff levels, which represent a compromise between false positives and false negatives, resulted in non-comparability among studies.

In conclusion, to date, the HARDSHIP questionnaire is the optimal choice for diagnosing migraine in nonclinical settings, with satisfactory diagnostic accuracy supported by moderate methodological quality. The significance of this study lies in informing tool selection decisions for researchers and practitioners, contributing to earlier diagnosis, treatment initiation, and reduction of the disease burden. For better migraine case identification in nonclinical settings, future high-quality validation studies among varied nonclinical population groups are encouraged, with a methodological emphasis on interview administration, time interval, sampling methods, response rate, blinding, and study flow.

  • Support
    The study was supported financially by the Youth Science and Technology Talent Promotion Project of the Guizhou Educational Department, China (粉教合KY字[2022]241号).

References

  • 1 GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet 2018;392 (10159):1789–1858
  • 2 World Health Organization. Headache disorders [Internet]. Geneva: WHO; 2016 [cited 2022 Feb 15]. Available from: https://www.who.int/news-room/fact-sheets/detail/headache-disorders
  • 3 GBD 2016 Neurology Collaborators. Global, regional, and national burden of neurological disorders, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol 2019;18(05):459–480
  • 4 Saylor D, Steiner TJ. The Global Burden of Headache. Semin Neurol 2018;38(02):182–190
  • 5 Takeshima T, Wan Q, Zhang Y, et al. Prevalence, burden, and clinical management of migraine in China, Japan, and South Korea: a comprehensive review of the literature. J Headache Pain 2019;20(01):111
  • 6 Katsarava Z, Mania M, Lampl C, Herberhold J, Steiner TJ. Poor medical care for people with migraine in Europe – evidence from the Eurolight study. J Headache Pain 2018;19(01):10
  • 7 Potter R, Probyn K, Bernstein C, Pincus T, Underwood M, Matharu M. Diagnostic and classification tools for chronic headache disorders: A systematic review. Cephalalgia 2019;39(06):761–784
  • 8 van der Meer HA, Visscher CM, Vredeveld T, Nijhuis van der Sanden MW, Hh Engelbert R, Speksnijder CM. The diagnostic accuracy of headache measurement instruments: A systematic review and meta-analysis focusing on headaches associated with musculoskeletal symptoms. Cephalalgia 2019;39(10):1313–1332
  • 9 Woldeamanuel YW, Cowan RP. Computerized migraine diagnostic tools: a systematic review. Ther Adv Chronic Dis 2022; 13:20406223211065235
  • 10 International Headache Society. Guidelines [Internet]. UK: The Society; 2017 [cited 2021 Aug 16]. Available from: https://ihs-headache.org/en/resources/guidelines/
  • 11 Stovner LJ, Al Jumah M, Birbeck GL, et al. The methodology of population surveys of headache prevalence, burden and cost: principles and recommendations from the Global Campaign against Headache. J Headache Pain 2014;15:5
  • 12 Charles A. The pathophysiology of migraine: implications for clinical management. Lancet Neurol 2018;17(02):174–182
  • 13 Ashina S, Olesen J, Lipton RB. How Well Does the ICHD 3 (Beta) Help in Real-Life Migraine Diagnosis and Management? Curr Pain Headache Rep 2016;20(12):66
  • 14 Deeks J, Bossuyt P, Leeflang M, Takwoingi Y, Flemyng E. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy (Version 2.0) [Internet]. London: Cochrane; 2022 [cited 2022 Jan 2]. Available from: https://training.cochrane.org/handbook-diagnostic-test-accuracy
  • 15 Whiting PF, Rutjes AW, Westwood ME, et al; QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155(08):529–536
  • 16 GBD 2019 Diseases and Injuries Collaborators. Global burden of 369 diseases and injuries in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet 2020;396(10258):1204–1222
  • 17 Siva A, Zarifoglu M, Ertas M, et al. Validity of the ID-Migraine screener in the workplace. Neurology 2008;70(16):1337–1345
  • 18 Wang X, San YZ, Sun JM, et al. Validation of the Chinese Version of ID-Migraine in Medical Students and Systematic Review with Meta-Analysis Concerning Its Diagnostic Accuracy. J Oral Facial Pain Headache 2015;29(03):265–278
  • 19 Csépány É, Tóth M, Gyüre T, et al. The validation of the Hungarian version of the ID-migraine questionnaire. J Headache Pain 2018; 19(01):106
  • 20 Streel S, Donneau A-F, Dardenne N, et al. Validation of an extended French version of ID Migraine™ as a migraine-screening tool. Cephalalgia 2015;35(05):437–442
  • 21 Láinez MJ, Castillo J, Domínguez M, Palacios G, Díaz S, Rejas J. New uses of the Migraine Screen Questionnaire (MS-Q): validation in the Primary Care setting and ability to detect hidden migraine. MS-Q in Primary Care. BMC Neurol 2010;10:39
  • 22 Láinez MJA, Domínguez M, Rejas J, et al. Development and validation of the migraine screen questionnaire (MS-Q). Headache 2005;45(10):1328–1338
  • 23 Delic D, Ristic A, Grujic B, et al. Translation and Transcultural Validation of Migraine Screening Questionnaire (MS-Q). Med Arh 2018;72(06):430–433
  • 24 Gervil M, Ulrich V, Olesen J, Russell MB. Screening for migraine in the general population: validation of a simple questionnaire. Cephalalgia 1998;18(06):342–348
  • 25 Rueda-Sánchez M, Díaz-Martínez LA. Validation of a migraine screening questionnaire in a Colombian university population. Cephalalgia 2004;24(10):894–899
  • 26 Phillip D, Lyngberg A, Jensen R. Assessment of headache diagnosis. A comparative population study of a clinical interview with a diagnostic headache diary. Cephalalgia 2007;27 (01):1–8
  • 27 Kirchmann M, Seven E, Björnsson A, et al. Validation of the deCODE Migraine Questionnaire (DMQ3) for use in genetic studies. Eur J Neurol 2006;13(11):1239–1244
  • 28 Lipton RB, Serrano D, Buse DC, et al. Improving the detection of chronic migraine: Development and validation of Identify Chronic Migraine (ID-CM). Cephalalgia 2016;36 (03):203–215
  • 29 Hagen K, Zwart JA, Vatten L, Stovner LJ, Bovim G. Head-HUNT: validity and reliability of a headache questionnaire in a large population-based study in Norway. Cephalalgia 2000;20(04): 244–251
  • 30 Hagen K, Zwart J-A, Aamodt AH, et al. The validity of questionnaire-based diagnoses: the third Nord-Trøndelag Health Study 2006-2008. J Headache Pain 2010;11(01):67–73
  • 31 Hagen K, Åsberg AN, Uhlig BL, Tronvik E, Brenner E, Sand T. The HUNT4 study: the validity of questionnaire-based diagnoses. J Headache Pain 2019;20(01):70
  • 32 Fritsche G, Hueppe M, Kukava M, et al. Validation of a german language questionnaire for screening for migraine, tension-type headache, and trigeminal autonomic cephalgias. Headache 2007; 47(04):546–551
  • 33 Ayzenberg I, Katsarava Z, Mathalikov R, et al; Lifting The Burden: Global Campaign to Reduce Burden of Headache Worldwide and Russian Linguistic Subcommittee of International Headache Society. The burden of headache in Russia: validation of the diagnostic questionnaire in a population-based sample. Eur J Neurol 2011;18(03):454–459
  • 34 Herekar AD, Herekar AA, Ahmad A, et al. The burden of headache disorders in Pakistan: methodology of a population-based nationwide study, and questionnaire validation. J Headache Pain 2013;14:73
  • 35 Rao GN, Kulkarni GB, Gururaj G, et al. The burden of headache disorders in India: methodology and questionnaire validation for a community-based survey in Karnataka State. J Headache Pain 2012;13(07):543–550
  • 36 Yu SY, Cao XT, Zhao G, et al. The burden of headache in China: validation of diagnostic questionnaire for a population-based survey. J Headache Pain 2011;12(02):141–146
  • 37 Kaiser EA, Igdalova A, Aguirre GK, Cucchiara B. A web-based, branching logic questionnaire for the automated classification of migraine. Cephalalgia 2019;39(10):1257–1266
  • 38 Eriksen MK, Thomsen LL, Olesen J. The Visual Aura Rating Scale (VARS) for migraine aura diagnosis. Cephalalgia 2005;25(10): 801–810
  • 39 Kim BK, Cho S, Kim HY, Chu MK. Validity and reliability of the self-administered Visual Aura Rating Scale questionnaire for migraine with aura diagnosis: A prospective clinic-based study. Headache 2021;61(06):863–871
  • 40 Kallela M, Wessman M, Färkkilä M. Validation of a migraine-specific questionnaire for use in family studies. Eur J Neurol 2001;8(01):61–66
  • 41 van Oosterhout WP, Weller CM, Stam AH, et al. Validation of the web-based LUMINA questionnaire for recruiting large cohorts of migraineurs. Cephalalgia 2011;31(13):1359–1367
  • 42 Abrignani G, Ferrante T, Castellini P, et al. Description and validation of an Italian ICHD-II-based questionnaire for use in epidemiological research. Headache 2012;52(08): 1262–1282
  • 43 El-Sherbiny NA, Shehata HS, Amer H, et al. Development and validation of an Arabic-language headache questionnaire for population-based surveys. J Pain Res 2017;10:1289–1295
  • 44 Rains JC, Penzien DB, Martin VT. Migraine and women’s health. J Am Med Womens Assoc 2002;57(02):73–78
  • 45 Martinez LF, Ferreira AI. Sick at work: presenteeism among nurses in a Portuguese public hospital. Stress Health 2012;28(04):297–304
  • 46 Peters M, Abu-Saad HH, Robbins I, Vydelingum V, Dowson A, Murphy M. Patients’ management of migraine and chronic daily headache: a study of the members of the Migraine Action Association (United Kingdom). Headache 2005;45(05):571–581
  • 47 Rutjes AW, Reitsma JB, Di Nisio M, Smidt N, van Rijn JC, Bossuyt PM. Evidence of bias and variation in diagnostic accuracy studies. CMAJ 2006;174(04):469–476

Publication Dates

  • Publication in this collection
    05 June 2023
  • Date of issue
    Apr 2023

History

  • Received
    29 Apr 2022
  • Reviewed
    13 July 2022
  • Accepted
    29 July 2022