
A YOLO-V5 approach for the evaluation of normal fillings and overhanging fillings: an artificial intelligence study

Abstract

Dental fillings, frequently used in dentistry to address various dental tissue issues, may cause problems when they do not follow the anatomical contours and physiology of dental and periodontal tissues. Our study aims to detect the prevalence and distribution of normal and overhanging filling restorations on panoramic radiography images using a deep CNN architecture trained through supervised learning. A total of 10480 fillings and 2491 overhanging fillings were labeled using CranioCatch software in 2473 and 1850 images, respectively. After data collection, training (80%), validation (10%), and test (10%) groups were formed from the images for both label types. The YOLOv5x architecture was used to develop the AI model. The model's performance was assessed through a confusion matrix, and its sensitivity, precision, and F1 score were calculated. For filling, sensitivity, precision, and F1 score were 0.95, 0.97, and 0.96, respectively; for overhanging filling, they were 0.86, 0.89, and 0.87. The results demonstrate the capacity of the YOLOv5 algorithm to segment dental radiographs efficiently and accurately, and its proficiency in detecting and distinguishing between normal and overhanging filling restorations.

Artificial Intelligence; Radiography, Panoramic; Deep Learning; Dentistry

Introduction

Dental fillings, a common procedure for treating cavities and restoring damaged teeth, can lead to complications if overhangs occur. An overhang is a restorative material that protrudes beyond the limits of the cavity preparation and is considered a type of iatrogenic error in terms of the anatomical form of the restoration. It can be caused by faulty restorative methods and by morphological variations in the cervical area of the tooth, such as concavities, fluting, and furcation. Such variations make it challenging to accurately adapt the matrix band and wedge to the gingival cavity margin, resulting in an ill-fitting restoration with overhang.1 To prevent overhanging fillings and associated complications, dentists must ensure precise placement and contouring of the filling material.1-3 However, studies indicate a significant prevalence of overhanging dental restorations (ODRs), ranging from 25% to 76% of restored tooth surfaces.4,5

Artificial intelligence (AI) is a rapidly evolving field with the potential to transform many aspects of healthcare, including dentistry. AI in dentistry involves the use of machine learning algorithms, computer vision, natural language processing, and other AI technologies to improve diagnostic and treatment options, enhance patient outcomes, and increase efficiency. AI can be applied in many areas, such as image analysis, treatment planning, patient communication, robotic dentistry, and predictive analytics.6,7 AI algorithms can analyze radiographic images to detect and diagnose conditions such as dental caries, periodontal disease, and oral cancer.8 Given this transformative potential, the present study focuses on deep learning, specifically deep convolutional neural networks (CNNs).
Deep learning, a subfield of AI, has attracted much attention in recent years due to its rapid development. Among the various deep learning models, deep CNNs have been extensively studied. They offer excellent performance in analyzing image data, including detection, classification, quantification, and segmentation, owing to the development of self-learning algorithms and advances in computing power.9
Deep learning is being explored in the dental field to identify and analyze various anatomical and pathological findings, such as orthodontic landmarks, dental caries, periodontal disease, and osteoporosis. However, these applications are still in the early stages of development.7,10,11

The current study aims to assess the performance and efficacy of deep learning algorithms, particularly the YOLOv5x architecture, in identifying both normal and overhanging dental fillings using performance metrics. The study is framed around hypotheses to clarify whether these algorithms can effectively detect normal and overhanging dental fillings with high sensitivity and accuracy (H1) or not (H0). Given the critical need for precise, fast, and sensitive detection in dental image analysis, our research aims to contribute to the advancement of automated dental diagnostic systems.

Methods

Ethical approval and study design

The study protocol was approved by the relevant ethics committee (04.10.2022/22), and the study adhered to the standards of the Declaration of Helsinki (NCT06022731). A YOLOv5 model implemented in PyTorch (CranioCatch, Eskisehir, Turkey) was used to create filling and overhanging-filling models on panoramic radiographs obtained from different orthopantomographic devices (Figure 1).

Figure 1
Schematic representation of the study design

Data Input

The datasets were obtained from images of patients who presented to our clinic for various dental reasons. Radiographs were acquired using different panoramic devices (68–85 kVp, 10–14 mA, 10–13 s, minimum total beam filtration equivalent to 2.5 mm Al, and a pixel size of 48 μm). Images of individuals with mixed dentition were not included, as this may cause errors during labeling. Radiographic images affected by incorrect patient positioning or containing metal artifacts were excluded. The study was conducted with 1850 images for overhanging fillings and 2473 images for adequate fillings.

Labeling and Training of Data

Labeling is the process of identifying relevant regions in an image and deciding to which class each region belongs. For this purpose, the outer borders of fillings and overhanging fillings in the images were delineated by polygonal segmentation and saved in JSON format. Labeling was performed on panoramic radiography images by two restorative dentistry specialists on the same flat-panel monitor in a semi-dark room. All labels were then reviewed by three oral and maxillofacial radiology specialists, and the collected data were analyzed using an artificial intelligence application. In total, 10480 labels were made in 2473 radiographs for filling, and 2491 labels were made in 1850 images for overhanging filling.
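As an illustration of how polygonal labels of this kind can be prepared for YOLOv5 segmentation training, the sketch below converts absolute polygon vertices into a single normalized label line. The exact JSON layout used by the labeling software is not specified in the text, so the coordinates and the class index here are hypothetical:

```python
def polygon_to_yolo_seg(points, img_w, img_h, class_id):
    """Convert absolute polygon vertices to one normalized
    YOLO-segmentation label line: 'class x1 y1 x2 y2 ...'."""
    coords = []
    for x, y in points:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return f"{class_id} " + " ".join(coords)

# Hypothetical filling outline on a 640x320 radiograph
poly = [(100, 50), (160, 50), (160, 120), (100, 120)]
line = polygon_to_yolo_seg(poly, 640, 320, class_id=0)
```

Each polygon produces one line per object in the per-image label file, with coordinates scaled to the 0–1 range expected by the training pipeline.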

The images were first anonymized; panoramic radiographs of varied sizes were then resized to 640×320 resolution for dental filling and to 1280×640 resolution for overhanging filling. A random ordering was created using the open-source Python programming language together with the OpenCV, PyTorch, NumPy, Pandas, TorchVision, TensorBoard, and Seaborn libraries. To prevent images used in training from being reused for testing, the dataset was divided into three parts: 80% training, 10% validation, and 10% testing (Table 1).

Table 1
Distribution of training, testing, and validation groups for filling and overhanging filling.

  1. Training group: the data used to train the model, comprising 80% of the dataset;

  2. Validation group: 10% of the dataset, kept independent of model training; it contains examples the model does not see during training. The model is evaluated on this set when deciding whether to end training or revise training variables;

  3. Test group: comprising 10% of the images, this is the set on which the model trained with the training and validation data is finally evaluated.
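The three-way split described above can be sketched as follows; the seed and the helper name are illustrative rather than taken from the study's code. With the 2473 filling images, an 80% split yields 1978 training images, matching the count reported for the training dataset:

```python
import random

def split_dataset(image_ids, seed=42):
    """Shuffle image IDs and split them 80/10/10 into
    training, validation, and test groups."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(2473))  # filling dataset size
```

Splitting by image (rather than by label) is what prevents regions of a training radiograph from leaking into the test set.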

The training and validation datasets were used to estimate and generate the optimal weight factors of the AI algorithm. Model success was then checked on the test dataset; the model was trained using transfer learning from a pre-trained model.

Deep-learning Algorithm

Effective segmentation depends on the 2D CNN architectures implemented with the PyTorch library in the Python programming environment during the training phase. YOLOv5, a state-of-the-art CNN architecture, plays a crucial role in training the model for dental filling and overhanging filling segmentation. The model was trained using the YOLOv5x segmentation algorithm over 500 epochs.

Ease of deployment is also an essential requirement for real-life use. YOLOv5 uses a genetic algorithm to evolve its anchor boxes: if the default anchors do not fit the data, a process called auto-anchor recalculates them, working together with the k-means algorithm to produce improved anchor boxes. This is one reason YOLOv5 performs well even on very different datasets. Another reason for its good training and detection results is mosaic augmentation, which, in simple terms, combines four different images into one using different scaling techniques, teaching the model to handle varied and complex scenes. The PyTorch model output was evaluated on the 10% test split, yielding accuracy, precision, and recall metrics. Additionally, mean average precision (mAP@0.5) values were computed using an IoU threshold of 50%.
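Since mAP@0.5 counts a prediction as correct only when its overlap with a ground-truth region reaches the 50% IoU threshold, a minimal IoU computation for axis-aligned boxes might look like this (a sketch, not the study's code; segmentation evaluation uses mask overlap, but the thresholding logic is the same):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2).
    A detection with IoU >= 0.5 against a ground-truth box counts as a
    true positive under the mAP@0.5 criterion."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```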

Statistical Analysis

The performance of the model was evaluated using a confusion matrix. In statistical analysis of artificial intelligence studies, the confusion matrix is a fundamental tool for evaluating classification models: it provides a detailed comparison between predicted and actual results (Table 2), enabling assessment of classification accuracy and error patterns. The aim of using the confusion matrix is to systematically analyze the model's ability to classify instances into their respective classes correctly and to identify areas for improvement. In addition, true positive (TP), false positive (FP), and false negative (FN) counts were computed to gauge the model's effectiveness.

Sensitivity (true positive rate, TPR), precision (positive predictive value, PPV), and F1 score were selected as key parameters for their relevance to the study’s objectives. By utilizing these metrics, statistical analysis aims not only to quantify the model’s performance but also to assess its robustness and generalization capabilities.

a. Sensitivity, true positive rate (TPR): TP/ (TP + FN)

b. Precision, positive predictive value (PPV): TP/ (TP+FP)

c. F1 score: 2TP/(2TP + FP + FN)
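The three formulas apply directly to confusion-matrix counts. The counts below are hypothetical, chosen only to illustrate how values in the reported range arise; the study reports only the resulting metrics, not the raw counts:

```python
def classification_metrics(tp, fp, fn):
    """Sensitivity (TPR), precision (PPV), and F1 score
    computed from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)        # TP / (TP + FN)
    precision = tp / (tp + fp)          # TP / (TP + FP)
    f1 = 2 * tp / (2 * tp + fp + fn)    # 2TP / (2TP + FP + FN)
    return sensitivity, precision, f1

# Hypothetical counts for the filling class
sens, prec, f1 = classification_metrics(tp=950, fp=29, fn=50)
```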

Results

Within the scope of the study, 10480 labels were made in a total of 2473 images for filling, of which 8350 labels in 1978 images were used as the training dataset. For overhanging filling, a total of 2491 labels were made in 1850 images, of which 1994 labels in 1480 images were used as the training dataset (Table 1). Sensitivity, precision, and F1 scores for filling were 0.95, 0.97, and 0.96, respectively; for overhanging filling, they were 0.86, 0.89, and 0.87 (Table 2). The model achieved a mAP@0.5 of 0.86 for the filling class and 0.62 for the overhanging filling class. Predictions of filling and overhanging filling restorations are illustrated in Figures 2 and 3.

Table 2
AI model prediction performance values from confusion matrix.

Figure 2
Ground-truth and AI-generated segmentation images of overhanging filling and normal filling.

Figure 3
Panoramic radiograph of a separate patient depicting ground-truth and AI-generated segmentations of overhanging and normal fillings, as in Figure 2.

Discussion

In the literature, diverse studies across various populations consistently report differing rates of overhanging restorations, emphasizing their association with dental complications. Detecting carious lesions or overhanging restorations on the contact areas of posterior teeth can be challenging with conventional clinical examination alone; therefore, combining clinical evaluations, such as visual and tactile examination, with radiographic evaluation is the most reliable approach for diagnosing overhanging margins.

In dentistry, intraoral and panoramic radiographs are common imaging techniques used to diagnose, treat, and monitor patients. Although they provide 2D images of complex 3D structures, they have limitations such as lower image quality, geometric distortions, and overlaps that can affect the reliability of measurements, and standardization can also be challenging.1,12 Despite its lower resolution, panoramic radiography is a recommended method in dentistry for comprehensive diagnosis and treatment planning. This protocol enables the selection of additional intraoral periapical radiographs in specific areas, capturing a larger area of the oral cavity while providing a complete examination of the teeth and surrounding bone at a lower radiation dose. The primary advantage of panoramic radiographs is that they show all teeth, which facilitates the detection of impacted teeth, foreign bodies in the jaws, and significant abnormalities in the number, position, and morphology of teeth.13,14 However, there is considerable variation in dentists' ability to interpret panoramic radiographs, influenced by individual skill, experience, and bias; these limitations may lead to misdiagnosis or inappropriate treatment.14

To overcome these challenges, deep learning (DL) models built on CNNs have recently emerged as the foundation for training computer vision systems. These models use various frameworks and methods, such as ResNet, Inception, plain CNNs, YOLO, and Detectron2, to perform tasks like classification, detection, and segmentation; by predicting an object's bounding box, these frameworks can increase the accuracy of detection and segmentation tasks. YOLOv5 is a computer vision model available in four primary versions: small (s), medium (m), large (l), and extra-large (x), with increasing levels of accuracy. YOLOv5 performs extremely well for this purpose compared with other recent techniques: it makes training and inference on custom datasets straightforward, offers quick training with a dataset in the proper format, and provides several export options for completing an object detection pipeline beyond model training and inference.

Our team previously conducted a study using the Faster R-CNN Inception v2 architecture, achieving an F1 score of 0.87 for filling detection.15 Our primary aim here was to investigate whether the YOLOv5x architecture, known for its success on the COCO dataset, can also improve the detection of fillings in panoramic radiographs. By expanding our dataset and using a more advanced architecture, we observed a significant improvement in performance metrics. Since accurate and effective detection of restorations is a basic requirement of our study, we used the YOLOv5 algorithm, which performs effectively in terms of speed and accuracy and outperforms most of its predecessors. The YOLOv5 model has since been used in various applications in the literature and has gained confidence through effective results. In addition, YOLOv5's real-time processing ability, ease of use, scalability, adaptability to various fields and datasets, and continuous updates through constant feedback are further reasons for its use in our study.

Recent research has investigated the use of DL in a range of medical conditions and clinical situations, including tooth detection and localization and the identification of oral cancer. The potential of AI to improve treatment approaches in dentistry makes it a promising tool for future trends in healthcare thanks to its accuracy and versatility.16 Thanathornwong and Suebnukarn (2020) proposed a DL-based object detection method to identify periodontally compromised teeth in digital panoramic radiographs.13
The aim was to reduce the effort required for diagnosis by saving evaluation time and providing automated screening documentation. The proposed approach has the potential to improve the precision and coherence of the diagnostic process while reducing the burden on clinicians.

Many studies have compared the performance of CNNs with that of human experts. Despite potential inaccuracies in human examination methods, CNN methods have consistently demonstrated accuracy, specificity, and sensitivity comparable to those of human experts in various clinical scenarios.10,17,18
The study highlights the usefulness of digital technologies to improve dental practice and patient outcomes. There have been some studies on the application of deep learning algorithms for the radiographic detection of dental fillings in dental radiographs.

Baydar et al. (2023) developed a CNN-based deep learning algorithm to automatically detect dental fillings in bitewing radiographs; trained on a dataset of over 1,000 bitewing radiographs, it achieved over 95% accuracy.19 In another study with bitewing radiographs, researchers developed a CNN-based deep learning algorithm to automatically detect overhanging fillings.20
Compared with our findings, the success of the filling models was similar. Considering that bitewing radiographs provide more detailed images owing to their resolution and central beam angulation, we consider the success of our models on panoramic radiographs to be high, and we expect it to increase with newer CNN models and larger datasets containing more diversified labels.

Besides, these studies used deep convolutional neural networks (DCNNs) as the CNN architecture for their deep learning models; the specific models were VGG-16/19, AlexNet, and ResNet-50.22,23 VGG-16 and ResNet-50 are well-established, widely used CNN architectures with many pre-trained models available, which makes it easier to implement and fine-tune models for specific tasks. They have been shown to perform well on a wide range of image classification tasks, including medical imaging, and can be trained on relatively small datasets. However, both are relatively deep models with a large number of parameters, making them slower to train and more computationally demanding, and they are primarily designed for image classification rather than object detection tasks such as identifying the exact location of dental fillings. YOLOv5, by contrast, is designed specifically for object detection, making it well suited to locating normal and overhanging fillings in radiographs, and its comparatively light architecture can be trained faster with fewer computational resources.21

In their 2021 study, Mao et al. investigated four distinct CNN models for restoration detection, evaluating their performance and establishing GoogleNet as the most effective.16 Notably, their study did not categorize restorations as normal or overhanging, a distinction that our current study incorporates, and their comparison of models did not include a YOLO model. Çelik and Çelik (2022) investigated the success of 10 different CNN algorithms, including YOLOv3, in restoration and implant identification and found YOLO's success to be lower; however, those researchers used object detection for labeling, whereas we used segmentation, which benefited from YOLO's strength in this respect.24

To the best of our knowledge, no previous study has investigated the success of a model for detecting overhanging fillings, and this study presents the performance of a YOLOv5x model developed for that purpose. While the impact of dental fillings on segmentation success in both 2- and 3-dimensional tooth segmentation remains a topic of debate, few studies have evaluated the effectiveness of DL algorithms, the most commonly utilized method in dentistry, for detecting dental fillings. It is worth noting that comparing detection performance across studies may be unrealistic, particularly because the structures labeled in other studies predominantly exhibit radiolucent characteristics, such as caries and apical lesions, falling within the scope of density-based analysis. The use of deep learning algorithms for detecting normal and overhanging fillings on panoramic radiographs shows promise for improving the accuracy and efficiency of dental diagnosis and treatment planning. A noteworthy strength of our research lies in the use of images from different devices. This multi-device approach broadened the scope of our analysis and enhanced the robustness and generalizability of our findings: incorporating data from various imaging platforms allowed us to account for potential device-specific variations, reflecting the real-world scenario in which dentists use a variety of imaging devices. This diversity of image sources contributes to the reliability of our findings and establishes a solid foundation for future investigations in this field.

A limitation of our study is that it relies exclusively on retrospective radiographic data without incorporating clinical dental examinations, which are fundamental to a comprehensive dental assessment. However, our study indirectly aims to expedite the identification of areas that warrant careful examination during intraoral evaluation rather than to replace clinical assessment, which should make targeted treatment planning more effective. Another limitation is the exclusion of radiographs with errors or poor image quality, which do occur in real clinical workflows; this partially inflates the model's success and distances it somewhat from everyday dental practice. Although only one algorithm was used in our study, chosen for its simple, fast application and high success rate, future research should compare multiple algorithms in the field of data engineering and AI. Moreover, longitudinal studies evaluating the long-term impact of DL on treatment planning are essential to understand the full potential of artificial intelligence in dentistry. These avenues of investigation are likely to provide more robust findings on the feasibility and effectiveness of AI applications in dental practice.

Conclusion

In conclusion, our study developed a YOLOv5-based DL algorithm that achieved high accuracy, with a sensitivity of 0.95, precision of 0.97, and F1 score of 0.96 for normal fillings, and a sensitivity of 0.86, precision of 0.89, and F1 score of 0.87 for overhanging fillings. These results highlight the proficiency of YOLOv5 in accurately identifying dental anomalies, offering promising applications for enhanced efficiency and precision in dental diagnosis and treatment planning. While acknowledging the study's successes, future research should address its inherent limitations to ensure the algorithm's robustness across diverse clinical scenarios. The YOLOv5 algorithm emerges as a valuable tool with the potential to transform dental diagnostics, promising improved accuracy and efficiency in the identification of normal and overhanging fillings.
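The reported F1 scores are the harmonic mean of the stated precision and sensitivity values, which can be verified directly:

```python
def f1(precision, sensitivity):
    # F1 is the harmonic mean of precision and sensitivity (recall).
    return 2 * precision * sensitivity / (precision + sensitivity)

print(round(f1(0.97, 0.95), 2))  # normal fillings
print(round(f1(0.89, 0.86), 2))  # overhanging fillings
```

Both values round to the F1 scores reported above (0.96 and 0.87, respectively).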

References

  • 1
    Tarcin B, Gumru B, Idman E. Radiological assessment of alveolar bone loss associated with overhanging restorations: a retrospective cone beam computed tomography study. J Dent Sci. 2023 Jan;18(1):165-74. https://doi.org/10.1016/j.jds.2022.06.021
  • 2
    Paolantonio M, Di Murro C, Cattabriga M. [Modifications in the clinical and microbiological parameters of the periodontal tissues after the removal of overhanging class-II amalgam fillings]. Minerva Stomatol. 1990 Aug;39(8):697-701. Italian.
  • 3
    Loomans BA, Opdam NJ, Roeters FJ, Bronkhorst EM, Huysmans MC. Restoration techniques and marginal overhang in Class II composite resin restorations. J Dent. 2009 Sep;37(9):712-7. https://doi.org/10.1016/j.jdent.2009.05.025
  • 4
    Ghulam OA, Fadel HT. Can clusters based on caries experience and medical status explain the distribution of overhanging dental restorations and recurrent caries? A cross-sectional study in Madinah - Saudi Arabia. Saudi J Biol Sci. 2018 Feb;25(2):367-71. https://doi.org/10.1016/j.sjbs.2017.02.001
  • 5
    Brunsvold MA, Lane JJ. The prevalence of overhanging dental restorations and their relationship to periodontal disease. J Clin Periodontol. 1990 Feb;17(2):67-72. https://doi.org/10.1111/j.1600-051X.1990.tb01064.x
  • 6
    Orhan K, Jagtap R. Introduction to artificial intelligence. In: Orhan K, Jagtap R, editors. Artificial intelligence in dentistry. Springer International; 2024. p. 1-7.
  • 7
    Jagtap R, Bayrakdar IS, Orhan K. Advantages, disadvantages, and limitations of AI in dental health. In: Orhan K, Jagtap R, editors. Artificial intelligence in dentistry. Springer International; 2024. p. 235-46.
  • 8
    Corbella S, Srinivas S, Cabitza F. Applications of deep learning in dentistry. Oral Surg Oral Med Oral Pathol Oral Radiol. 2021 Aug;132(2):225-38. https://doi.org/10.1016/j.oooo.2020.11.003
  • 9
    Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional neural networks for radiologic images: a radiologist's guide. Radiology. 2019 Mar;290(3):590-606. https://doi.org/10.1148/radiol.2018180547
  • 10
    Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020 Jan;26(1):152-8. https://doi.org/10.1111/odi.13223
  • 11
    Jaju PP, Bayrakdar IS, Jaju S, Vidhi S, Orhan K, Jagtap R. Applications of artificial intelligence in dentistry. In: Orhan K, Jagtap R, editors. Artificial intelligence in dentistry. Springer International; 2024. p. 43-68.
  • 12
    Pack AR, Coxhead LJ, McDonald BW. The prevalence of overhanging margins in posterior amalgam restorations and periodontal consequences. J Clin Periodontol. 1990 Mar;17(3):145-52. https://doi.org/10.1111/j.1600-051X.1990.tb01078.x
  • 13
    Thanathornwong B, Suebnukarn S. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks. Imaging Sci Dent. 2020 Jun;50(2):169-74. https://doi.org/10.5624/isd.2020.50.2.169
  • 14
    Ba-Hattab R, Barhom N, Osman SA, Naceur I, Odeh A, Asad A, et al. Detection of periapical lesions on panoramic radiographs using deep learning. Appl Sci (Basel). 2023;13(3):1516. https://doi.org/10.3390/app13031516
  • 15
    Basaran M, Çelik Ö, Bayrakdar IS, Bilgir E, Orhan K, Odabas A, et al. Diagnostic charting of panoramic radiography using deep-learning artificial intelligence system. Oral Radiol. 2022 Jul;38(3):363-9. https://doi.org/10.1007/s11282-021-00572-0
  • 16
    Chen CC, Wu YF, Aung LM, Lin JC, Ngo ST, Su JN, et al. Automatic recognition of teeth and periodontal bone loss measurement in digital radiographs using deep-learning artificial intelligence. J Dent Sci. 2023 Jul;18(3):1301-9. https://doi.org/10.1016/j.jds.2023.03.020
  • 17
    Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dentomaxillofac Radiol. 2019 Jan;48(1):20170344. https://doi.org/10.1259/dmfr.20170344
  • 18
    Ariji Y, Yanashita Y, Kutsuna S, Muramatsu C, Fukuda M, Kise Y, et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg Oral Med Oral Pathol Oral Radiol. 2019 Oct;128(4):424-30. https://doi.org/10.1016/j.oooo.2019.05.014
  • 19
    Baydar O, Rózylo-Kalinowska I, Futyma-Gabka K, Saglam H. The U-net approaches to evaluation of dental bite-wing radiographs: an artificial intelligence study. Diagnostics (Basel). 2023 Jan;13(3):453. https://doi.org/10.3390/diagnostics13030453
  • 20
    Mao YC, Chen TY, Chou HS, Lin SY, Liu SY, Chen YA, et al. Caries and restoration detection using bitewing film based on transfer learning with CNNs. Sensors (Basel). 2021 Jul;21(13):4613. https://doi.org/10.3390/s21134613
  • 21
    Murthy JS, Siddesh GM, Lai WC, Parameshachari BD, Patil SN, Hemalatha KL. Object detect: a real-time object detection framework for advanced driver assistant systems using YOLOv5. Wirel Commun Mob Comput. 2022;2022:1-10. https://doi.org/10.1155/2022/9444360
  • 22
    Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv; 2015. arXiv:1409.1556. https://doi.org/10.48550/arXiv.1409.1556
  • 23
    He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016. p. 770-8.
  • 24
    Çelik B, Çelik ME. Automated detection of dental restorations using deep learning on panoramic radiographs. Dentomaxillofac Radiol. 2022 Dec;51(8):20220244. https://doi.org/10.1259/dmfr.20220244

Publication Dates

  • Publication in this collection
    30 Sept 2024
  • Date of issue
    2024

History

  • Received
    15 Jan 2024
  • Accepted
    11 June 2024
  • Received
    25 July 2024
Sociedade Brasileira de Pesquisa Odontológica - SBPqO Av. Prof. Lineu Prestes, 2227, 05508-000 São Paulo SP - Brazil, Tel. (55 11) 3044-2393/(55 11) 9-7557-1244 - São Paulo - SP - Brazil
E-mail: office.bor@ingroup.srv.br