A cross-sectional study analyzing the quality of YouTube videos as a source of information for COVID-19 intubation

Abstract

Introduction:  There are many possible sources of medical information; however, the quality of the information varies. Poor-quality or inaccurate resources may be harmful if they are trusted by providers. This study aimed to analyze the quality of coronavirus disease 2019 (COVID-19)-related intubation videos on YouTube.

Methods:  The term COVID-19 intubation was searched on YouTube. The top 100 videos retrieved were sorted by relevance and 37 videos were included. The video demographics were recorded. The quality of the videos was analyzed using an 18-point checklist, which was designed for evaluating COVID-19 intubation. Videos were also evaluated using general video quality scores and the modified Journal of the American Medical Association score.

Results:  The educational quality was graded as good for eight (21.6%) videos, moderate for 13 (35.1%) videos, and poor for 16 (43.2%) videos. The median safe COVID-19 intubation score (SCIS) was 11 (IQR = 5-13). The SCISs indicated that videos prepared in an intensive care unit were higher in quality than videos from other sources (p < 0.05). The length of the video was predictive of quality (area under the curve = 0.802, 95% CI = 0.658-0.945, p = 0.10).

Conclusions:  The quality of YouTube videos for COVID-19 intubation is substandard. Poor quality videos may provide inaccurate knowledge to viewers and potentially cause harm.

KEYWORDS Airway management; COVID-19; Coronavirus; Hand washing; Intubation

YouTube (www.youtube.com) is the second most visited website in the world behind Google.1 Free and easy access makes YouTube one of the most popular sources of information. Considering its popularity and easy accessibility, YouTube offers invaluable opportunities for the dissemination of medical information. However, unfiltered information may be unscientific, misleading, or even harmful.2,3 Intubation of a patient with COVID-19 carries a high risk to healthcare providers because of the highly contagious nature of the disease, which is transmitted by droplets or aerosols. Although there are some videos on YouTube about COVID-19 intubation, the quality of these videos has not been evaluated. We therefore aimed to assess the quality of COVID-19 intubation videos that are accessible on YouTube.

The term COVID-19 intubation was searched on YouTube on May 9, 2020. The only search filter used was the "sort by" filter set to relevance, which is the default for a typical YouTube search. Using previously described methods, and on the assumption that users rarely go beyond the first 100 videos for a specific search term, only the first 100 videos were evaluated.2 The search was performed in the most current version of Google Chrome, in incognito mode, with a cleared cache and all available updates installed. The main researcher prescreened the top 100 videos and created a watch list. Two researchers (BA, TS) independently reviewed and scored the videos, and a third researcher (OC) reviewed and resolved any remaining discrepancies between the first two. Only videos in English (or with comments or subtitles in English) were included, as English is a global language. Duplicates and irrelevant videos were excluded, as were videos without a demonstration of intubation or unrelated to COVID-19. Videos that met the study criteria were assessed in terms of video length, total number of views, days online, daily views, likes, dislikes, upload source, recording location, general video quality score, modified Journal of the American Medical Association (JAMA) score, and COVID-19 intubation score. The upload source was classified as an intensive care unit, an emergency room, or an operating room; when the upload source could not be determined, it was classified as other. Institutional Review Board approval was not required for this report because only publicly accessible data were used.

There were no validated tools available for assessing online information on the intubation of patients with COVID-19. Thus, to determine the educational quality of video content, two authors (BA, TS) created a novel 18-point Safe COVID-19 Intubation Score (SCIS) based on a recently published clinical consensus statement and current recommendations.4,5 The SCIS consists of 18 items covering preparation, equipment, number of staff members, prevention measures, and precautions related to COVID-19 intubation recommendations. One point was assigned for each item fulfilled, resulting in a maximum possible score of 18 points (Table 1). Based on the SCIS, video quality was graded as (1) good if SCIS > 13; (2) moderate if 7 < SCIS ≤ 13; and (3) poor if SCIS ≤ 7. The reliability of the videos was assessed using the modified JAMA benchmark criteria.6 The JAMA benchmark assesses the reliability of online information based on four parameters: authorship, attribution, disclosure, and currency. One point is given for each parameter, and a score of four points indicates the highest reliability.

Table 1
Safe COVID-19 Intubation Score.
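For clarity, the SCIS grading rule described above can be expressed as a short function. The Python sketch below is illustrative only; the function name and the example score are our own and assume the 18 checklist items have already been tallied for a video.

```python
# Minimal sketch of the SCIS grading rule; the example score is hypothetical.

def grade_scis(score: int) -> str:
    """Map an 18-item Safe COVID-19 Intubation Score to a quality grade."""
    if not 0 <= score <= 18:
        raise ValueError("SCIS must be between 0 and 18")
    if score > 13:
        return "good"
    if score > 7:          # 7 < score <= 13
        return "moderate"
    return "poor"          # score <= 7

print(grade_scis(11))      # a video fulfilling 11 items -> "moderate"
```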

To evaluate general video quality, the authors used a variation of the parameters defined in the Evaluation of the Video Media Guidelines. This tool consists of four sections (content, production, users, and presentation free of bias), of which only the first three were used in the current study; these sections were used previously in a similar study.7 Each parameter was rated on a Likert-type scale from 0 to 5: 0 = does not apply; 1 = very unsatisfying; 2 = unsatisfying; 3 = regular; 4 = satisfying; and 5 = very satisfying. Each video could therefore reach a maximum score of 70.
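As a worked example of the scoring arithmetic, the sketch below assumes the three retained sections contain 14 Likert-rated parameters in total (consistent with the stated maximum of 14 × 5 = 70); the individual ratings are invented for illustration.

```python
# Hypothetical ratings for one video: one 0-5 Likert value per parameter.
likert_ratings = [4, 3, 5, 2, 4, 3, 3, 4, 5, 2, 3, 4, 3, 4]
assert len(likert_ratings) == 14 and all(0 <= r <= 5 for r in likert_ratings)

general_quality = sum(likert_ratings)   # maximum possible score: 14 * 5 = 70
print(general_quality)                  # -> 49 for this hypothetical video
```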

Data were analyzed using IBM SPSS Statistics version 21.0 (IBM Co., Armonk, NY, USA). The data distribution was assessed using the Shapiro-Wilk test. Numerical variables are presented as medians (interquartile ranges, IQRs) and categorical variables as frequencies. Pairwise comparisons of numerical variables were performed using Mann-Whitney U tests, while Kruskal-Wallis tests were used for comparisons of three or more groups. Categorical data were analyzed using Fisher's exact test. Spearman's rho was used to assess correlations between parameters. Interrater reliability (IRR) was calculated separately for the SCIS items using Cohen's kappa coefficient (κ), and kappa values were interpreted according to the criteria defined by Landis and Koch.8 Cutoff points were obtained by evaluating the best Youden index (sensitivity + specificity - 1) and the maximum area under the receiver operating characteristic (ROC) curve. A p-value < 0.05 was considered significant.
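A minimal Python sketch of this analysis pipeline is shown below, using invented data with the study's group sizes (12 intensive care unit, 8 operating room, 6 emergency room, 11 other). It illustrates the tests named above and is not a reproduction of the authors' SPSS analysis; all values are simulated.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
scis = rng.integers(0, 19, size=37)        # hypothetical SCIS per video (0-18)
views = rng.integers(100, 50_000, size=37) # hypothetical total views per video
group = np.array(["ICU"] * 12 + ["OR"] * 8 + ["ER"] * 6 + ["other"] * 11)

# Normality check guiding the choice of non-parametric tests
print(stats.shapiro(scis))

# Pairwise comparison (e.g., ICU videos vs. all others)
print(stats.mannwhitneyu(scis[group == "ICU"], scis[group != "ICU"]))

# Comparison across three or more groups
print(stats.kruskal(*(scis[group == g] for g in ("ICU", "OR", "ER", "other"))))

# Correlation between SCIS and number of views
print(stats.spearmanr(scis, views))

# Inter-rater reliability for one binary SCIS item scored by two raters
rater1 = rng.integers(0, 2, size=37)
rater2 = rater1.copy()
rater2[:5] = 1 - rater2[:5]                # simulate some disagreement
print(cohen_kappa_score(rater1, rater2))
```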

Among the 100 videos identified, irrelevant videos (n = 50), duplicates (n = 9), and non-English-language videos (n = 4) were excluded. A total of 37 videos were included in the study (available at http://dx.doi.org/10.17632/5nd4bv3dpk.2). The median video length was 5:31 minutes (IQR = 3:22-5:08). The median number of views was 2,734 (IQR = 730-20,377) and the median number of likes was 28 (IQR = 10-108). Of the videos included in the analysis, the first was uploaded on February 25, 2020, and the most recent on April 19, 2020.

The median SCIS was 11 (IQR = 5-13). The IRR was calculated for each SCIS parameter: kappa scores were between 0.81 and 1.00 (almost perfect agreement) for 10 parameters, between 0.61 and 0.80 (substantial agreement) for six parameters, and between 0.41 and 0.60 (moderate agreement) for two parameters. The highest and lowest kappa scores were 1.00 and 0.54, respectively. Of the 37 videos, 31 (83.8%) mentioned cuff inflation before ventilation. Thirty videos (81.1%) demonstrated the use of high-efficiency hydrophobic filters and video laryngoscopes (Table 2). The majority of videos mentioned the need for goggles (or face shields), an N95 respirator (or powered air-purifying respirator), and protective clothing. Hand hygiene, use of double gloves, and doffing of personal protective equipment (PPE) were covered in fewer than one-third of the videos. According to the SCIS, 8 videos (21.6%) were graded as good, 13 (35.1%) as moderate, and 16 (43.2%) as poor. There was no statistically significant difference in the number of views (p = 0.22), daily views (p = 0.20), likes (p = 0.23), or days online (p = 0.81) between videos graded as good, moderate, and poor quality. The only variable that differed significantly was video length (p = 0.005). ROC analysis showed that video duration could predict a good-quality video (area under the curve = 0.802, 95% CI = 0.658-0.945, p = 0.10). The cutoff value for predicting good quality was 5:50 minutes, with a sensitivity of 87.5% and a specificity of 65.5%.

Table 2
Analysis of the content covered in 37 YouTube videos about safe COVID-19 intubation.
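The duration cutoff reported above can, in principle, be derived from an ROC curve and the Youden index. The sketch below uses invented durations and quality labels, so its output will not match the reported 5:50-minute cutoff; it only illustrates the procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
duration_s = rng.integers(60, 1200, size=37)   # hypothetical video lengths in seconds
# Hypothetical good/not-good labels, loosely tied to duration for illustration
is_good = (duration_s + rng.normal(0, 200, size=37) > 350).astype(int)

fpr, tpr, thresholds = roc_curve(is_good, duration_s)
youden_j = tpr - fpr                            # equals sensitivity + specificity - 1
best = np.argmax(youden_j)

print("AUC:", roc_auc_score(is_good, duration_s))
print("cutoff (s):", thresholds[best],
      "sensitivity:", tpr[best],
      "specificity:", 1 - fpr[best])
```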

The SCIS correlated positively with the general video quality score, the JAMA score, and video length (rho = 0.875, p < 0.001; rho = 0.552, p < 0.001; and rho = 0.508, p = 0.001, respectively). The recording location was an intensive care unit for 12 (32.4%) videos, an operating room for eight (21.6%), an emergency room for six (16.2%), and other places for 11 (29.7%). The SCIS and general video quality scores were significantly higher for intensive care unit-based videos than for the other videos (p < 0.05).

The main finding of this study is that YouTube videos do not provide sufficient or comprehensive educational information on COVID-19 intubation. Poor SCIS grades were twice as common as good grades. More importantly, hand hygiene, double gloving, and doffing, which are key steps in preventing contamination, were demonstrated in only a limited number of videos (16.2%, 24.3%, and 10.8%, respectively). The median SCIS of the videos was 11, which also indicates low quality. Our findings are consistent with the results of previous studies. Keelan et al. first analyzed the content of immunization-related videos on YouTube, and low quality scores have since been reported for various medical conditions.3 A report evaluating the quality of regional anesthesia videos found that half of the videos were of poor quality with respect to the procedural technique.9 Similarly, a study on brachial plexus blocks also showed low quality scores.7 Ocak recently assessed endotracheal intubation videos on YouTube using a specific 15-item intubation scoring system and reported a mean score of 4.6/15 (± 2.7) among videos posted by academics.10

This study demonstrates that most of the videos related to COVID-19 intubation on YouTube are of poor quality, as many omit key steps for preventing COVID-19 transmission during the procedure. In addition, there was no correlation between the number of views and the quality of the content, so many viewers may obtain information from low-quality material.

References

  • 1 Youtube.com Competitive Analysis, Marketing Mix and Traffic - Alexa. Available at: https://www.alexa.com/siteinfo/youtube.com. Accessed on April 21, 2020.
  • 2 Erdem H, Sisik A. The reliability of bariatric surgery videos in YouTube platform. Obes Surg. 2018;28:712-6.
  • 3 Keelan J, Pavri-Garcia V, Tomlinson G, et al. YouTube as a source of information on immunization: A content analysis. JAMA. 2007;298:2482-4.
  • 4 Anaesthesia and caring for patients during the COVID-19 outbreak. Available at: https://www1.health.gov.au/internet/main/publishing.nsf/Content/cdna-song-novel-coronavirus. Accessed on April 10, 2020.
  • 5 Orser BA. Recommendations for endotracheal intubation of COVID-19 patients. Anesth Analg. 2020;130:1109-10.
  • 6 Silberg WM, Lundberg GD, Musacchio RA. Assessing, controlling, and assuring the quality of medical information on the Internet: Caveant lector et viewor - Let the reader and viewer beware. JAMA. 1997;277:1244-5.
  • 7 Selvi O, Tulgar S, Senturk O, et al. YouTube as an informational source for brachial plexus blocks: evaluation of content and educational value. Braz J Anesthesiol. 2019;69:168-76.
  • 8 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
  • 9 Tulgar S, Selvi O, Serifsoy TE, et al. YouTube as an information source of spinal anesthesia, epidural anesthesia and combined spinal and epidural anesthesia. Braz J Anesthesiol. 2017;67:493-9.
  • 10 Ocak U. Evaluation of the content, quality, reliability and accuracy of YouTube videos regarding endotracheal intubation techniques. Niger J Clin Pract. 2018;21:1651-5.

Publication Dates

  • Publication in this collection
    28 Feb 2022
  • Date of issue
    Mar-Apr 2022

History

  • Received
    07 July 2020
  • Accepted
    09 Oct 2021