Systematic Review
8(2); 91-97
doi: 10.25259/JGOH_30_2025

Deep learning in dental diagnostics: Caries detection through smartphone photographs – A systematic review

Department of General Dentistry, JDC Healthcare PLLC, Spring, United States.
Department of General Dentistry, Brident Dental and Orthodontics, Houston, United States.

*Corresponding author: Niranjani Krothapalli, Department of General Dentistry, JDC Healthcare PLLC, Spring, United States. nirukrothapalli@gmail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Krothapalli N, Cherukumalli Kapalavayi N. Deep learning in dental diagnostics: Caries detection through smartphone photographs – A systematic review. J Global Oral Health. 2025;8:91-7. doi: 10.25259/JGOH_30_2025

Abstract

Tooth decay is a common problem worldwide, and detecting it early is crucial for preventing serious complications at a later stage. However, many people, owing to socioeconomic factors and geographical barriers, do not have easy access to dentists. This review examines how deep learning, a subset of artificial intelligence (AI), can help detect caries in photographs captured with smartphones. Smartphones are now widely available and have cameras good enough to take clear pictures of teeth, and deep learning models can analyze these pictures to identify cavities. The present study reviewed studies published between 2005 and 2025, retrieved from major research databases, to evaluate how well these technologies work for early cavity detection, especially for people with limited access to dental care. The findings show that deep learning models using smartphone images can detect visible cavities with good accuracy. Methods such as improving image quality and combining different deep learning techniques improved detection further. This approach is low-cost and easy to use, which makes it ideal for basic dental screenings in low-income or hard-to-reach areas. However, detecting very early-stage cavities remains challenging with this approach. Factors such as saliva, lighting, and camera angles can lower the quality of the pictures and degrade the performance of these AI models. In addition, these models need large and varied collections of tooth images to train properly, but gathering these can be expensive and challenging. Overall, deep learning applied to images captured with a smartphone offers a promising and accessible way to screen for tooth decay. More research is needed to improve the detection of early cavities and to build larger, more diverse image databases to train these models better. This technology could bring dental care within easier reach of many people around the world.

Keywords

Artificial intelligence
Caries
Deep learning
Images
Smartphones

INTRODUCTION

Tooth decay, or dental caries, is one of the most common health problems in the world, affecting billions of people of all ages. Globally, oral diseases impact more than 3.7 billion people, with untreated caries in permanent teeth being the most prevalent dental issue.[1] Approximately 2.3 billion adults experience decay in their permanent teeth, while around 530 million children are affected by caries in their primary teeth.[2] If dental caries is not addressed promptly, it can result in pain, infection, tooth loss, and broader health complications, severely diminishing quality of life and increasing healthcare burdens.[3] Early identification of caries is therefore important for preventing disease progression and minimizing the need for invasive treatments, which are not only painful but also expensive and thus less accessible to disadvantaged populations.[4] Even though tooth decay is very common, detecting it early remains difficult, especially in communities without easy access to dentists or proper dental tools.

Socioeconomic factors, geographical barriers, and lack of awareness about oral health all play an important role in the ongoing gaps in dental care.[5]

Smartphone ownership has surged in recent years, with nearly 7 billion users worldwide,[6] presenting a potential solution to bridge this gap. Smartphones are equipped with advanced cameras that allow individuals to capture high-quality images of their oral cavity, enabling remote assessment through tele-dentistry or automated analysis using artificial intelligence (AI) technologies.

Deep learning is a subset of AI that has greatly improved image analysis in healthcare by making it possible to automatically detect and classify diseases with high accuracy from complex images. It has been widely adopted in areas such as dermatology, radiology, and ophthalmology because of its strong performance in identifying complex patterns.[7,8] In dental research, deep learning has shown promising results in identifying caries from radiographic images.[9,10] Recently, the focus has shifted toward smartphone-captured intraoral photographs for caries detection, which allows for more accessible, easy-to-use, and scalable diagnostic approaches.[11] Early studies have demonstrated that deep learning models can accurately detect dental caries from smartphone images, suggesting potential for use in screening programs and tele-dentistry platforms.[11,12] The aim of this systematic review is to synthesize current research on the application of deep learning techniques for the detection of dental caries using smartphone photographs. It evaluates how effective and practical these technologies are, as well as their potential to improve early cavity detection for people with limited access to dental care.

MATERIALS AND METHODS

A systematic review was conducted following an organized and transparent approach, in accordance with the preferred reporting items for systematic reviews and meta-analyses[13] guidelines, to locate, assess, and synthesize all the studies relevant to the topic. The study has been registered in the Open Science Framework (DOI: 10.17605/OSF.IO/5EBVN; https://archive.org/details/osf-registrations-5ebvn-v1). A systematic literature search was conducted on June 27, 2025, using three electronic databases: PubMed, IEEE Xplore, and Google Scholar. The aim of this study was to identify the studies published between 2005 and 2025 that investigated the use of deep learning for the detection of dental caries using intraoral photographic images captured by smartphones. Database-specific search strings detailed below were applied for this search.

  • In PubMed, the search used ([Caries OR cavity OR dental caries OR dental cavity] AND [machine learning OR AI OR deep learning] AND [intraoral images OR photographic images]).

  • In IEEE Xplore, the search used (Caries OR cavity OR dental caries OR dental cavity) AND (machine learning OR AI OR deep learning) AND (intraoral images OR photographic images).

  • For Google Scholar, the search used: Caries | cavity | “dental caries” | “dental cavity” “machine learning” | “artificial intelligence” | “deep learning” “intraoral images” | “photographic images” - “systematic review.” Harzing’s Publish or Perish software was utilized to structure and extract results from Google Scholar.[14]

Inclusion criteria:

  • Studies with the detection of dental caries as the primary objective
  • Studies utilizing intraoral photographic images specifically captured using smartphone cameras
  • Articles published in the English language
  • Studies involving a dataset of at least 500 images
  • Studies using deep learning techniques for image classification and analysis

Exclusion criteria:

  • Studies focused on conditions other than dental caries
  • Articles published in any language other than English
  • Studies using radiographic images, quantitative light-induced fluorescence (QLF) images, or any non-photographic imaging modality
  • Studies in which the image source was not clearly specified or disclosed
  • Studies using techniques other than deep learning, including traditional machine learning or rule-based approaches
  • Systematic reviews, meta-analyses, narrative reviews, case reports, and surveys
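The eligibility criteria above amount to a simple filter over study records. The sketch below expresses them as a predicate; the field names (language, modality, n_images, method, design) are illustrative assumptions, not taken from the review's data-collection form.

```python
# Hypothetical sketch of the eligibility screen; field names are illustrative.

def is_eligible(record):
    """Apply the review's inclusion/exclusion criteria to one record."""
    return (
        record.get("language") == "English"                # excludes non-English articles
        and record.get("modality") == "smartphone_photo"   # excludes radiographs, QLF, unspecified sources
        and record.get("n_images", 0) >= 500               # minimum dataset size
        and record.get("method") == "deep_learning"        # excludes traditional ML, rule-based approaches
        and record.get("design") == "primary_study"        # excludes reviews, case reports, surveys
    )

records = [
    {"language": "English", "modality": "smartphone_photo", "n_images": 7980,
     "method": "deep_learning", "design": "primary_study"},
    {"language": "English", "modality": "radiograph", "n_images": 3000,
     "method": "deep_learning", "design": "primary_study"},
]

eligible = [r for r in records if is_eligible(r)]
print(len(eligible))  # 1
```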

The selection of studies followed a two-step screening process carried out independently by the first and second authors. Initially, the titles and abstracts of all identified records were examined to assess their relevance to the review objectives. Studies that could not be clearly excluded at this stage were then evaluated in full text. Both reviewers conducted the full-text screening independently. After screening, the results were compared, and both authors were in full agreement regarding the inclusion of studies, with no conflicts requiring further resolution.

Data from the selected studies were collected using a form designed for this review. The first author filled out the form by going through each included study. The collected data were then reviewed and confirmed with the second author to make sure everything was accurate and agreed upon. For each study, the following details were recorded: the first author's name, the year of publication, the deep learning model used, the number of images in the dataset, the type of caries classification, and performance measures such as accuracy, sensitivity, specificity, precision, F1-score, and mean average precision (mAP). All of this information is detailed in Table 1.

Table 1: Results of deep learning models for caries detection through smartphone photography.
Author/year | Deep learning model(s) | Dataset size | Caries classification | Accuracy (%) | Sensitivity/recall (%) | Specificity (%) | Precision (%) | F1-score/mAP
Ding et al., 2021[16] | YOLOv3 (original group) | 7,980 (combined images) | Primary caries | N/A | 49.59 | N/A | 76.92 | 0.60 (F1), 55.63 (AP)
Ding et al., 2021[16] | YOLOv3 (original group) | 7,980 (combined images) | Secondary caries | N/A | 52.38 | N/A | 91.67 | 0.67 (F1), 56.78 (AP)
Ding et al., 2021[16] | YOLOv3 (enhanced group) | 7,980 (combined images) | Primary caries | N/A | 52.07 | N/A | 81.82 | 0.64 (F1), 68.21 (AP)
Ding et al., 2021[16] | YOLOv3 (enhanced group) | 7,980 (combined images) | Secondary caries | N/A | 33.33 | N/A | 100 | 0.50 (F1), 65.17 (AP)
Ding et al., 2021[16] | YOLOv3 (comprehensive group) | 7,980 (combined images) | Primary caries | N/A | 69.42 | N/A | 93.33 | 0.80 (F1), 85.09 (AP)
Ding et al., 2021[16] | YOLOv3 (comprehensive group) | 7,980 (combined images) | Secondary caries | N/A | 52.38 | N/A | 100 | 0.69 (F1), 85.87 (AP)
Tareq et al., 2023[17] | Hybrid YOLO ensemble + VGG16 | 1,703 (augmented images) | Overall caries detection | 86.96 | 88 | N/A | 89 | 0.88 (F1)
Mahaveerakannan et al., 2024[15] | Faster R-CNN | 1,903 images | Cavitated versus non-cavitated | 84.3 | 75 | 87.7 | 66.4 | N/A
Mahaveerakannan et al., 2024[15] | YOLOv3 | 1,903 images | Cavitated versus non-cavitated | 88.5 | 72.3 | 94 | 78.2 | N/A
Mahaveerakannan et al., 2024[15] | RetinaNet | 1,903 images | Cavitated versus non-cavitated | 84 | 64.3 | 90.9 | 68.8 | N/A
Mahaveerakannan et al., 2024[15] | Single-shot multi-box detector | 1,903 images | Cavitated versus non-cavitated | 82 | 0 | 97.7 | 97.4 | N/A
Mahaveerakannan et al., 2024[15] | Faster R-CNN | 1,903 images | Visually non-cavitated versus no surface change | 61.2 | 37.8 | 72.5 | 50.8 | N/A
Mahaveerakannan et al., 2024[15] | YOLOv3 | 1,903 images | Visually non-cavitated versus no surface change | 67.9 | 24.5 | 88.8 | 62.6 | N/A
Mahaveerakannan et al., 2024[15] | RetinaNet | 1,903 images | Visually non-cavitated versus no surface change | 66.8 | 27.6 | 84.4 | 62.4 | N/A
Mahaveerakannan et al., 2024[15] | Single-shot multi-box detector | 1,903 images | Visually non-cavitated versus no surface change | 68.9 | 0 | 98.5 | 0 | N/A

mAP: Mean average precision, R-CNN: Regions with convolutional neural networks, VGG16: Visual geometry group 16, YOLOv3: You Only Look Once version 3, N/A: Not applicable

RESULTS

The initial literature search conducted across three databases – PubMed, IEEE Xplore, and Google Scholar – identified a total of 203 records, comprising 67 from PubMed, 30 from IEEE Xplore, and 106 from Google Scholar. Following the removal of 2 duplicate records and the exclusion of 2 articles published in languages other than English, 199 unique records remained for screening. The first phase of screening evaluated the titles of all records against the predefined inclusion and exclusion criteria, which resulted in 46 articles being selected for abstract review. During the second phase, abstracts were thoroughly examined to determine their relevance, reducing the pool to 15 articles for full-text assessment. The full texts of these 15 articles were then retrieved and reviewed in detail to assess their eligibility according to the review criteria. After this comprehensive evaluation, 3 studies satisfied all inclusion requirements and were included in the final systematic review [Figure 1].

Figure 1: Preferred reporting items for systematic reviews and meta-analyses flow diagram showing search strategy.

Although the initial database search identified 203 studies, only 3 met all the inclusion criteria for this review. This low number highlights that the use of deep learning for caries detection specifically from smartphone photographs is still an emerging area of research. Many excluded studies used other imaging methods such as radiographs or non-smartphone imaging, applied traditional machine learning instead of deep learning, or lacked clear information about their datasets. Another major reason is the limited availability of large, well-annotated image datasets captured using smartphones. Without standardized and diverse datasets, it is difficult to train and validate deep learning models effectively. This limitation not only reduces the number of eligible studies but also points to a larger challenge in the field. To support progress in this area, future research should focus on creating open, high-quality image databases and developing clear protocols for capturing consistent, usable intraoral photographs with smartphones. These steps are essential for advancing AI tools that can reliably detect dental caries, especially in resource-limited settings.

Evaluating deep learning models for detecting dental caries mainly depends on their diagnostic accuracy. This is measured by metrics such as accuracy, sensitivity, specificity, precision, F1-score, and mAP. A comparative analysis across various studies highlights both promising results and ongoing challenges.

Accuracy: The proportion of all cases that are correctly classified[15]

Sensitivity: The proportion of actual positives that are correctly identified (also called recall)[15]

Specificity: The proportion of actual negatives that are correctly identified[15]

Precision: The proportion of positive predictions that are truly positive[15]

F1-score: The harmonic mean of precision and recall[16]

mAP: The mean of the average precision (AP) values across the classes or validation sets being evaluated, used as a measure of detection accuracy in object detection.[16]
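All of these metrics except mAP derive directly from the four cells of a 2x2 confusion matrix. A minimal Python sketch, using illustrative counts rather than values from the reviewed studies:

```python
def metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)   # all correct / all cases
    sensitivity = tp / (tp + fn)                    # recall: positives found
    specificity = tn / (tn + fp)                    # negatives found
    precision   = tp / (tp + fp)                    # positive predictions that are correct
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts only (not from the reviewed studies)
acc, sens, spec, prec, f1 = metrics(tp=60, fp=10, tn=100, fn=30)
print(round(sens, 3), round(prec, 3), round(f1, 3))  # 0.667 0.857 0.75
```

Note how precision and sensitivity trade off: the Ding et al. secondary-caries results (100% precision, 33.33% recall) are a typical case of a conservative detector that flags few lesions but is right when it does.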

Ding et al.[16] used the You Only Look Once version 3 (YOLOv3) algorithm to detect dental caries in oral photographs taken with smartphones. The study used three datasets: augmented images (n = 3,990), enhanced images (n = 3,990), and combined augmented and enhanced images (n = 7,980). An independent test set of smartphone photographs from 70 patients was also used. The experiments were organized into three groups based on data processing: the original group trained the model on the unchanged dataset; the enhanced group trained on an augmented version of the dataset; and the comprehensive group combined both the original and enhanced datasets for training. The mAP for the original group was 56.20%, with primary caries recognition precision of 76.92%, recall of 49.59%, and F1-score of 0.60; for secondary caries, precision was 91.67%, recall 52.38%, and F1-score 0.67. The enhanced group achieved a mAP of 66.69%, with primary caries precision of 81.82%, recall of 52.07%, and F1-score of 0.64; for secondary caries, precision was 100%, recall 33.33%, and F1-score 0.50. The comprehensive group yielded the highest mAP at 85.48%, with primary caries precision of 93.33%, recall of 69.42%, and F1-score of 0.80; secondary caries precision was 100%, recall 52.38%, and F1-score 0.69. These results indicate that the detection capability of the model improved significantly when trained with image augmentation and enhancement techniques. Tareq et al.[17] used 1,703 augmented images from 233 de-identified tooth specimens, captured with a consumer smartphone without specialized equipment.
The methodology involved state-of-the-art ensemble modeling, test-time augmentation (TTA), and transfer learning, evaluating YOLO derivatives (v5s, v5m, v5l, v5x) and augmenting the best results with models such as ResNet50, ResNet101, VGG16, AlexNet, and DenseNet. Outcomes were assessed using precision, recall, and mAP. The YOLO model ensemble achieved a mAP of 0.732, an accuracy of 0.789, and a recall of 0.701. When this model was transferred to VGG16, it showed a diagnostic accuracy of 86.96%, with a precision of 0.89 and a recall of 0.88, surpassing other base object detection methods on freehand, non-standardized smartphone photographs. The authors concluded that this computer vision AI system, which combines a model ensemble, TTA, and transferred deep learning, can predict dental cavitations from non-standardized photographs with reasonable clinical accuracy.
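The ensemble-plus-TTA idea can be sketched generically: score every augmented view of an image with every model, average per model (TTA), then average across models (ensemble). The models and augmentations below are trivial stand-ins for illustration, not the published networks:

```python
# Sketch of ensemble scoring with test-time augmentation (TTA).
# Models and augmentations are hypothetical stand-ins.

def tta_ensemble_score(image, models, augmentations):
    per_model = []
    for model in models:
        views = [aug(image) for aug in augmentations]
        per_model.append(sum(model(v) for v in views) / len(views))  # TTA average
    return sum(per_model) / len(per_model)                           # ensemble average

identity = lambda img: img
hflip    = lambda img: img[::-1]   # horizontal flip of a pixel row

model_a = lambda img: 0.8          # stand-in "caries" confidence scores
model_b = lambda img: 0.6

score = tta_ensemble_score([0.1, 0.9], [model_a, model_b], [identity, hflip])
print(round(score, 3))  # 0.7
```

Averaging over flipped or perturbed views tends to smooth out prediction noise from any single framing, which is one reason TTA helps on freehand, non-standardized photographs.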

The study by Mahaveerakannan et al.[15] used a dataset of 1,903 intraoral photographs from 696 participants, captured with an iPhone 7, for training and assessment. The images were labeled by a qualified dentist according to the International Caries Classification and Management System standards and classified as sound: no surface change (NSC) (Class 0), visually non-cavitated (VNC) (Class 1), or cavitated lesions (C) (Classes 2 and 3). For cavitated lesion detection, faster regions with convolutional neural networks (R-CNN) demonstrated the highest sensitivity at 75%, while YOLOv3 achieved the best accuracy at 88.5% and the highest specificity at 94% [Table 1]. However, for the more complex task of identifying early caries (NSC vs. VNC), the sensitivity of all models dropped significantly: Faster R-CNN to 37.8%, YOLOv3 to 24.5%, RetinaNet to 27.6%, and the single-shot multi-box detector (SSD) to 0%. Despite these limitations, all models maintained specificities exceeding 87% for cavitated lesions and over 72% for VNC lesions. The study concludes that YOLOv3 and Faster R-CNN show significant potential for real-world detection of cavitated lesions, but further improvements in input image quality and expanded training data are critical for enhancing early caries detection.
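The pattern of 0% sensitivity alongside very high specificity is what a detector degenerates to when it effectively never predicts the rare class. A toy illustration with made-up counts shows how accuracy and specificity can look deceptively good on an imbalanced set:

```python
# Why specificity stays high while sensitivity collapses to 0% on an
# imbalanced dataset. Counts below are illustrative, not from the studies.

labels = [1] * 20 + [0] * 180   # 20 early lesions, 180 sound surfaces
preds  = [0] * 200              # degenerate model: never predicts "lesion"

tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))

print(tp / (tp + fn))    # sensitivity: 0.0 -- every lesion missed
print(tn / (tn + fp))    # specificity: 1.0
print((tp + tn) / 200)   # accuracy: 0.9 -- looks good despite being useless
```

This is why sensitivity, not accuracy or specificity alone, is the metric to watch for early-lesion detection.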

DISCUSSION

In recent years, AI, particularly deep learning, has seen significant advances in image classification, profoundly impacting medical image analysis. Deep learning is now considered the state-of-the-art machine learning approach owing to its superior performance in tasks such as lesion detection and classification.[18,19] The development of convolutional neural networks (CNNs), a class of deep learning algorithms, has significantly advanced image analysis.[19] CNNs have been successfully used in dentistry for tasks such as recognizing teeth, classifying dental structures, and diagnosing dental caries.[20] These algorithms can process and learn from millions of images, identifying patterns and irregularities that might be missed by human observers.[15] This capability automates operations such as photograph processing, leading to faster and more accurate outcomes.[15] The widespread availability of open-source and free libraries, along with powerful computing resources, has significantly increased the adoption of deep learning by researchers and clinicians for image interpretation.[19]

Smartphones are widely used and reasonably priced in many countries, making them a powerful tool for increasing the reach and frequency of oral health screening. This is especially helpful in underserved areas, where access to dental care is limited.[15] Using basic images captured with smartphones can reduce the need for expensive dental equipment and the costs involved, making dental check-ups cheaper and easier to obtain, especially for basic screening in places where advanced tools or dental clinics are not available. As a result, more people can access dental care, even in remote or low-income areas.[16] Studies have consistently shown that using smartphone photographs to examine teeth and detect cavities is practical, with reasonable accuracy.[16] The main goal is to develop tools that let people take pictures of their teeth at home and get a basic assessment of their oral health, reducing the need for expensive dental equipment or for a dentist to be present for early check-ups and cavity detection.[16]

While the reviewed studies reported good accuracy for detecting advanced or cavitated caries, performance declined significantly for early-stage lesions such as VNC areas. Several factors contribute to this challenge. First, early lesions appear as small white or brown spots rather than clear cavities, making them hard to distinguish from healthy tooth structure or image flaws: they are small, low-contrast, and easily confused with glare, plaque, or natural tooth texture, especially in non-standardized smartphone images.[15] Second, the datasets used for training often lack sufficient examples of early lesions. For instance, in Mahaveerakannan et al.'s study, the number of labeled VNC images was far lower than that of cavitated ones, leading to poor model sensitivity for early detection, ranging from 0% to 37.8% across models.[15] Such dataset imbalance can cause models to overfit to more obvious lesions and miss subtle cases.
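One standard mitigation for such imbalance is to oversample the minority (early-lesion) class during training. A stdlib-only sketch, assuming simple random duplication of minority samples (real pipelines would typically generate augmented image variants rather than exact duplicates):

```python
# Random oversampling of a minority class to balance a training set.
# Function name and data are illustrative, not from the reviewed studies.
import random

def oversample(samples, labels, minority=1, seed=0):
    """Duplicate random minority-class samples until classes are balanced."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    extra = [rng.choice(minority_idx)
             for _ in range(len(majority_idx) - len(minority_idx))]
    idx = majority_idx + minority_idx + extra
    return [samples[i] for i in idx], [labels[i] for i in idx]

X = ["img%d" % i for i in range(10)]
y = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # 2 early lesions vs. 8 sound surfaces
Xb, yb = oversample(X, y)
print(sum(yb), len(yb))  # 8 16  -> classes now balanced
```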

Furthermore, the data used to train these models often contain biases that limit their usefulness in real-world situations. For example, datasets may over-represent certain tooth surfaces or people in a narrow age range (which changes how cavities appear), and lighting and camera angles can vary widely. These issues mean the models do not generalize well across different people or clinical settings. In Ding et al.'s study, for instance, even with enhanced images, the YOLOv3 model still missed cases because of these data problems.[16] The use of deep learning on smartphone photographs for caries detection thus has both significant strengths and weaknesses. The major strength lies in accessibility and cost-effectiveness: by leveraging mobile devices, these technologies provide a non-invasive, affordable, and widely accessible means of caries screening.[15,17,21,22] Furthermore, the reviewed studies consistently demonstrated promising performance for cavitated lesions, with high accuracy and sensitivity in detecting and classifying advanced or clearly cavitated caries, indicating a strong capability for classification based on smartphone-captured images.[15,17,21,22] Finally, improvements in model robustness have been achieved through image processing techniques: data augmentation, image enhancement, ensemble modeling, and transfer learning significantly improved model performance and the ability to handle non-standardized images, making these systems more adaptable to real-life conditions.[15-17]

A significant and consistent limitation is the difficulty of detecting early-stage caries. The studies reviewed here reported lower sensitivity and accuracy when identifying VNC or incipient lesions, a critical disadvantage given that early detection is important for effective preventive intervention.[15-17] Despite significant advances, the quality of images captured by smartphones can be strongly influenced by factors such as the presence of saliva, lighting conditions, camera angles, and interference from soft tissues within the oral cavity.[16] Achieving high-resolution images through standardized capture methods is necessary, and establishing guidelines for users on how to capture images can help achieve consistent results. Furthermore, the demand for large, high-quality datasets poses an important challenge, as image acquisition and annotation are labor-intensive and expensive,[16] and small datasets can lead to model overfitting.[16]

CONCLUSION

Deep learning combined with smartphone photography offers a promising, non-invasive, affordable, and accessible approach to basic screening for dental caries, with accuracy comparable to traditional examinations. Techniques such as data augmentation, image enhancement, ensemble modeling, and transfer learning have improved model performance on non-standardized images. However, detecting early-stage cavities remains challenging owing to factors such as image quality, plaque, and lighting interference. Another important issue is that most models have been trained and tested on small, limited datasets.

Another important consideration moving forward is the ethical handling of patient images. Ensuring proper data privacy, secure storage, and consent protocols is critical as more personal dental images are collected and analyzed by AI systems. As the field advances, building large, diverse, and well-annotated datasets will be essential, but this should be done under clear ethical guidelines to protect patient rights and privacy. To accelerate progress, international collaboration and the creation of open-source dental image databases should be encouraged. Such initiatives can reduce duplication of effort and aid the development of more robust and generalizable AI models for global dental health applications.

Ethical approval:

Institutional Review Board approval is not required.

Declaration of patient consent:

Patient’s consent is not required as there are no patients in this study.

Conflicts of interest:

There are no conflicts of interest.

Use of artificial intelligence (AI)-assisted technology for manuscript preparation:

The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.

Financial support and sponsorship: Nil.

References

  1. Oral health. Available from: https://www.who.int/news-room/fact-sheets/detail/oral-health [Last accessed on 2025 Jun 28]
  2. Key facts about oral health. Available from: https://www.fdiworlddental.org/cleftcare [Last accessed on 2025 Jun 28]
  3. Quality of life related to oral health and its impact in adults. J Stomatol Oral Maxillofac Surg. 2019;120:234-9.
  4. Detection, diagnosis, and monitoring of early caries: The future of individualized dental care. Diagnostics (Basel). 2023;13:3649.
  5. Facilitators and barriers to oral healthcare for women and children with low socioeconomic status in the United States: A narrative review. Healthcare (Basel). 2023;11:2248.
  6. Number of smartphone users worldwide from 2016 to 2023. Statista. Available from: https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide [Last accessed on 2025 Jun 28]
  7. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88.
  8. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115-8.
  9. Deep learning for dental caries detection on bitewing radiographs. Oral Radiol. 2021;37:340-7.
  10. Transforming dental caries diagnosis through artificial intelligence-based techniques. Cureus. 2023;15:e41694.
  11. Automated diagnosis of dental caries using deep learning and a smartphone camera: A pilot study. IEEE Access. 2020;8:118074-83.
  12. Caries detection on intraoral images using artificial intelligence. J Dent Res. 2022;101:158-65.
  13. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
  14. Publish or perish. Ver 8. Available from: https://harzing.com/resources/publish-or-perish [Last accessed on 2025 Jun 27]
  15. A deep learning application for identifying cavities in dentistry: Utilizing smartphone-taken intraoral photos. In: 2024 International Conference on Sustainable Communication Networks and Application (ICSCNA), Theni, India. p. 718-23.
  16. Detection of dental caries in oral photographs taken by mobile phones based on the YOLOv3 algorithm. Ann Transl Med. 2021;9:1622.
  17. Visual diagnostics of dental caries through deep learning of non-standardised photographs using a hybrid YOLO ensemble and transfer learning model. Int J Environ Res Public Health. 2023;20:5351.
  18. Deep learning in medical image analysis. Adv Exp Med Biol. 2020;1213:3-21.
  19. The evolution of artificial intelligence in medical imaging: From computer science to machine and deep learning. Cancers (Basel). 2024;16:3702.
  20. Artificial intelligence in dental radiology: A narrative review. Ann Med Surg. 2025;87:2212-7.
  21. Automated caries detection with smartphone color photography using machine learning. Health Inform J. 2021;27:14604582211007530.
  22. AI-powered dental caries detection. In: 2024 26th International Multi-Topic Conference (INMIC), Karachi, Pakistan. p. 1-6.