REVIEW ARTICLE

Year: 2019 | Volume: 11 | Issue: 1 | Page: 3-10
The Use of Deep Convolutional Neural Networks in Biomedical Imaging: A Review
Yu-Cheng Chen1, Derek Jin-Ki Hong2, Chia-Wei Wu1, Muralidhar Mupparapu MDS, Dip. ABOMR2
1 Department of Learning Sciences and Technologies, Division of Oral and Maxillofacial Radiology, University of Pennsylvania School of Dental Medicine, Philadelphia, PA, USA
2 Department of Oral Medicine, Division of Oral and Maxillofacial Radiology, University of Pennsylvania School of Dental Medicine, Philadelphia, PA, USA
Date of Web Publication: 9-Aug-2019
Correspondence Address: Muralidhar Mupparapu, Professor of Oral Medicine and Director, Division of Oral and Maxillofacial Radiology, Department of Oral Medicine, University of Pennsylvania School of Dental Medicine, Philadelphia, PA 19104, USA
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/jofs.jofs_55_19
Introduction: This review sought to present the fundamental principles of deep convolutional neural networks (DCNNs) and to provide an overview of their applications in medicine and dentistry. Materials and Methods: Scientific databases including PubMed, Science Direct, Web of Science, JSTOR, and Google Scholar were searched for literature on DCNNs and their applications in the medical and dental fields from 2010 to September 2018. Two independent reviewers rated the articles against the exclusion and inclusion criteria, and the remaining articles were reviewed. Results: The comprehensive literature search yielded 110,750 citations. After the exclusion and inclusion criteria were applied, 340 articles pertaining to the use of DCNNs in medicine and dentistry remained. Further exclusion of nonbiomedical applications left a total of 26 articles for review. Conclusion: Advances in the development of neural network systems have permeated the medical and dental fields, particularly in imaging and diagnostic testing. Researchers are attempting to use deep learning as an aid in assessing medical images in clinical applications, and its optimization will provide powerful tools to the next generation. However, the authors caution that these tools should serve as supplements that improve diagnosis, not as replacements for the medical professional.
Keywords: artificial intelligence, biomedical imaging, cone beam CT, digital imaging, neural networks, systematic review
How to cite this article: Chen YC, Hong DJ, Wu CW, Mupparapu M. The Use of Deep Convolutional Neural Networks in Biomedical Imaging: A Review. J Orofac Sci 2019;11:3-10
How to cite this URL: Chen YC, Hong DJ, Wu CW, Mupparapu M. The Use of Deep Convolutional Neural Networks in Biomedical Imaging: A Review. J Orofac Sci [serial online] 2019 [cited 2023 Jun 9];11:3-10. Available from: https://www.jofs.in/text.asp?2019/11/1/3/264186
Introduction
Biomedical radiographic images predominate in clinical diagnosis and treatment and convey important visual information about the human body. However, tracing and diagnosing radiographs with the unaided eye is time consuming, and the result is inherently subjective. In 2012, Krizhevsky et al.[1] demonstrated the outstanding ability of deep convolutional neural networks (DCNNs) in image recognition and classification. Since then, several promising and successful applications of DCNNs have been introduced in the fields of medical[2],[3],[4],[5],[6] and dental[7],[8] radiography.
The DCNN was developed as a computerized implementation of human-like intelligence inspired by the biological nervous system. Biological neural networks consist of neurons, each comprising a central cell body, a long axon, and branching dendrites. Dendrites act as feelers that pick up electrical activity and pass that information to the cell body; the cell body accumulates the information and sends a signal down the axon. A group of neurons packed tightly together and connected to their neighbors can thus signal and communicate with one another. Modeled on this biological system, computerized neural networks are well suited to fields in which large amounts of data must be analyzed efficiently, and they are already used in applications such as natural language processing,[9] computer vision, speech recognition,[10] and online recommendation systems.[11]
Convolutional neural networks (CNNs) are multilayer neural networks that learn basic features, such as edges, dots, and bright and dark spots, at the primary layers and then build upon them at higher layers to identify an object. CNNs learn differently from other machine learning algorithms: by matching parts of an image rather than the image as a whole, a CNN can tolerate patterns that are shifted, larger or smaller, thicker or thinner, or rotated. In this article, we discuss the layers commonly used in CNNs: convolution, pooling, rectified linear unit (ReLU), and fully connected layers.
The first layer type, convolution, is based on the mathematical operation of filtering, which generates a map showing where each feature matches the image. This process involves (1) mapping the feature onto a patch of the image, (2) multiplying each feature value by the corresponding image pixel, (3) adding the products, and (4) dividing by the total number of pixels in the feature.[12] After convolving the image with all of the designated filters, the resulting stack of filtered images forms the convolution layer, as in the sketch below.
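To make the four steps concrete, here is a minimal NumPy sketch of this filtering operation; the toy image, the 3 × 3 feature, and the valid-only window placement are illustrative assumptions rather than code from any of the reviewed papers.

```python
# Minimal sketch of 2D convolution as described above (illustrative only).
import numpy as np

def convolve2d(image, feature):
    fh, fw = feature.shape
    oh = image.shape[0] - fh + 1
    ow = image.shape[1] - fw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + fh, j:j + fw]           # (1) map to a patch
            # (2) multiply pixel-wise, (3) add up, (4) divide by patch size
            out[i, j] = np.sum(patch * feature) / feature.size
    return out

image = np.random.rand(8, 8)             # toy grayscale image
feature = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 0, 1]])          # a toy "X"-shaped feature
print(convolve2d(image, feature).shape)  # (6, 6) map of feature matches
```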
The next layer type, pooling, makes the CNN less sensitive to position. Pooling shrinks the image stack by choosing a window size, stepping the window across each filtered image at a fixed stride, and keeping the maximum value in each window across all the filters. The pooling layer is built by shrinking each matrix obtained in the previous step.[12]
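A matching max-pooling sketch, under the same caveats; the 2 × 2 window and stride of 2 are illustrative choices.

```python
# Minimal max-pooling sketch: slide a window over the filtered image and
# keep the maximum of each window. Window and stride are illustrative.
import numpy as np

def max_pool(filtered, window=2, stride=2):
    oh = (filtered.shape[0] - window) // stride + 1
    ow = (filtered.shape[1] - window) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            r, c = i * stride, j * stride
            out[i, j] = filtered[r:r + window, c:c + window].max()
    return out

filtered = np.random.rand(6, 6)
print(max_pool(filtered).shape)  # (3, 3): the image stack shrinks
```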
The third layer type, the rectified linear unit (ReLU), is a simple computational step. ReLU involves a process called normalization, a way to preserve valid arithmetic computation by converting every negative value to zero. A ReLU layer therefore looks like the filtered layer that precedes it, except that negative values are absent. Deep learning algorithms designed with ReLU demonstrate faster training than traditional neuron models, and a faster learning process is crucial to performance on large datasets such as those used in medical imaging.
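ReLU itself reduces to a single element-wise operation, sketched below.

```python
# ReLU as described above: negative values become zero, positive values
# pass through unchanged.
import numpy as np

def relu(x):
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```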
After all the preceding layers have been processed, the final output is determined by a fully connected layer, which takes the highly filtered images and translates them into votes. The votes are weighted by values called voting weights or connection strengths. By repeatedly and deeply stacking the layers described above, namely, convolution, pooling, ReLU, and fully connected layers, we form the DCNN, as in the sketch below.
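As an illustration of such a stack, the following Keras sketch chains convolution, ReLU, pooling, and a fully connected output layer; all filter counts, sizes, and the ten-class output are illustrative assumptions, not an architecture taken from the reviewed literature.

```python
# Minimal Keras sketch of a DCNN built by stacking the layer types above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(64, 64, 1)),   # convolution + ReLU
    tf.keras.layers.MaxPooling2D(2),                   # pooling
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # deeper stacking
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # fully connected "votes"
])
model.summary()
```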
There is, however, a need for precaution when designing a CNN. Within the network, the computed outcome of each neuron should be independent of the outcomes of the neurons in the next layer; with a large number of parameters, however, neurons within the same layer can become interdependent and the network overfits. To combat overfitting, the dropout method[13] randomly drops neurons during training (with a probability of 0.5 for each hidden neuron) to reduce coadaptation between neurons. With dropout, the generated CNN architectures produce results that are more accurate and robust.

The DCNN is constructed from a large number of layers, and its features and voting weights are obtained through backpropagation, by which the network learns on its own: errors in the computed result are propagated backward to determine how the network should adjust as it descends the gradient toward the minimum-error point. This information is useful for optimizing the parameterization of the CNN in subsequent trials. Although there is no standard method for tuning the filtered layers, the performance of a CNN can be improved by altering the size, number, or features of the convolution, the mapped window size, the stride of the pooling, the number of fully connected neurons, the number of each type of layer, and the order of the layers.
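A sketch of how dropout and backpropagation-driven training appear in practice, again in Keras with illustrative hyperparameters: dropout silences each hidden neuron with probability 0.5 during training, and compiling with a gradient-descent optimizer makes fitting proceed by backpropagating errors.

```python
# Dropout (p = 0.5 per hidden neuron) and gradient-descent training;
# all shapes and hyperparameters are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # randomly drop neurons during training
    tf.keras.layers.Dense(10, activation="softmax"),
])
# model.fit() backpropagates errors and descends the loss gradient.
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
```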
Materials and Methods
Literature was searched in five databases, namely Google Scholar, Web of Science, Science Direct, JSTOR, and PubMed, from 2010 to September 2018. The keywords used were "deep convolutional neural networks," "deep neural networks," and "DCNN." The search yielded a total of 110,750 citations on the topic. After scrutiny, removal of duplicates, and application of the exclusion criteria [physics of artificial intelligence (AI), computer applications and algorithms, newspaper articles, and nonindexed citations] and inclusion criteria (relevant articles on DCNNs with medically and dentally focused biomedical imaging applications), 340 articles related to the use of DCNNs in the medical and dental fields remained. After further removal of nonbiomedical applications, only 26 articles remained for full-text review by the authors [Figure 1].

Figure 1 PRISMA flow chart showing the search strategies and the schema for identifying the final articles included in the review.
Discussion
The use of DCNNs in medicine and dentistry was examined across several application areas, namely medical image segmentation, medical image detection, medical image diagnosis, and dental image analysis, for ease of understanding and to show the relevance of AI applications in each area.
DCNN in medical image segmentation
Medical image segmentation is a method for dividing an image into multiple regions based on a specific feature.[14],[15] It allows the scientist to focus on an object based on shape, volume, relative position, and abnormality. To this end, DCNNs have been successfully demonstrated in several applications of medical image segmentation.
Ibragimov and Xing[16] were the first to use DCNNs for image segmentation of the oral cavity, paranasal sinuses, and hypopharynx as organs-at-risk in head and neck cancers. Their DCNN segmented anatomy with recognizable boundaries, namely, the spinal cord, mandible, pharynx, and eye globes, but concerns remained about objects with vague boundaries. Moreover, small objects with complex detail, such as the cerebral vasculature inside the intracranial space, are difficult to segment in related pathologies. Meijs and Manniesing[17] demonstrated that a 3D fully convolutional network, taking a time-to-signal image as input and integrating spatial features at the final layer, can segment the arteries and veins of the cerebral vasculature in 4D computed tomographic (CT) images.
Liu et al.[18] built a model that combines a deep CNN (SegNet) with a 3D simplex deformable approach and applied it to segmentation of musculoskeletal tissue in magnetic resonance imaging. SegNet offers three advantages: (1) it was designed for the analysis of high-resolution images, which musculoskeletal imaging requires to show fine details such as thin cartilage; (2) its scheme is memory- and computation-efficient, reducing output time; and (3) it is easy to implement, which enables multiple musculoskeletal applications. The 3D simplex deformable modeling preserves information about the shape and surface of musculoskeletal structures. In short, their method produces rapid and accurate results in clinical studies.
Another use for CNNs in clinical practice is the segmentation of chronic wounds, which are difficult to assess because of intensity inhomogeneity, color distortion, and changes in area over multiple weeks. To address these problems, Lu et al.[19] designed three methods: a fast, level set, model-based method to overcome intensity inhomogeneity; a method based on spectral properties to resolve color distortion; and a CNN for segmentation of the wound region. The proposed model performs with high efficiency and reduced computation time and is robust to both inhomogeneity and color distortion.
Finally, while the application of DCNNs to single-target segmentation has been well demonstrated, Hu et al.[20] focused on the more challenging problem of multiple abdominal organ segmentation, specifically of the liver, spleen, and both kidneys. Abdominal segmentation in CT images poses two major challenges: first, surrounding soft tissues cause large variation in the shape and size of these organs; second, the boundaries between the organs and the surrounding soft tissue are fuzzy. These problems were successfully addressed by using a deep, fully convolutional network for initial segmentation and then refining the result through time-implicit multiple surface evolution.
DCNN in medical image detection
Medical image detection is the process of identifying medically significant features within an image,[21],[22],[23] for example, tumors, anatomical structures, and cells. It aims to assist experts by increasing the detection rate and the positive decision ratio. Within this field, the power of DCNNs in image recognition could make them an important diagnostic tool.
Bejnordi et al.[24] used DCNNs to demonstrate that stroma is involved in breast cancer development. Their system is based on three DCNNs: Network I was trained to classify fat, stroma, and epithelium; Network II generated a probability of cancer-associated stroma; and Network III provided a probability that the slide contains invasive cancer. This threefold DCNN model showed the ability to classify lesions and evaluate the biology of breast cancer.
To address the clinical challenges of interstitial lung diseases, Gao et al.[25] proposed a DCNN model trained on whole CT images. Because this setup avoids the manual identification of regions of interest, it is better adapted to clinical use. The framework uses three attenuation scales, low, normal, and high lung attenuation, to categorize interstitial lung disease patterns before training the CNN. The proposed model shows potential for predicting interstitial disease of the lung.
Acharya et al.[26] aimed to identify arrhythmias using a CNN. They classified electrocardiogram (ECG) signals, a standard test used to monitor heart activity, into five main classes: nonectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. The proposed CNN comprises three convolution layers, three max-pooling layers, and three fully connected layers, as in the sketch below. After data augmentation, the classification accuracy for heartbeats in the original ECG dataset reached 94.03%.
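A sketch of an architecture of the kind described: three one-dimensional convolution layers, three max-pooling layers, and three fully connected layers ending in five heartbeat classes. The filter counts, kernel sizes, and 260-sample segment length are illustrative assumptions, not the published hyperparameters.

```python
# Illustrative 1D CNN for five-class heartbeat classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(260, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),     # three fully
    tf.keras.layers.Dense(32, activation="relu"),     # connected
    tf.keras.layers.Dense(5, activation="softmax"),   # layers, 5 classes
])
```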
The study by Saltz et al.[27] represents a milestone in digital pathology-based quantification. It presented the successful detection of tumor-infiltrating lymphocytes in hematoxylin and eosin-stained slides from 5000 patients across 13 cancer types. Two CNNs, one for classifying lymphocyte infiltration and one for segmenting necrosis, were used to map patterns in the local size and density of tumor-infiltrating lymphocytes. Quantifying size and density provides important pathological information linking tumor types, immune profiles, and overall survival.
DCNN in medical image diagnosis
Clinicians who interpret medical images can benefit from computer-aided systems that provide more detailed information about certain features,[28],[29] which can help discriminate pathology and aid treatment planning. There are several applications of DCNNs in medical image diagnosis.
The standard methodology for diagnosing lung cancer (squamous carcinoma, adenocarcinoma, small cell carcinoma, and undifferentiated carcinoma) is histopathological assessment of biopsies. Li et al.[30] compared several DCNN models for lung cancer diagnosis, but the results still show a slightly low area under the curve (AUC, 0.9119), which suggests that their models are challenged by the large variation in patterns between samples. In addition, their training did not benefit from fine-tuning a pretrained model.
Glaucoma is caused by progressive loss of retinal ganglion cells, which in turn affects the optic nerve and leads to partial vision loss. To enable early diagnosis from digital fundus images, Raghavendra et al.[31] designed a 19-layer DCNN. Their study reports high confidence in the system, with best accuracy, sensitivity, and specificity of 98.13%, 98%, and 98.30%, respectively, in testing on 1426 fundus images.
Thyroid nodules are currently assessed by ultrasound, a real-time and noninvasive diagnostic technology, to determine whether a nodule is malignant, indeterminate, or suspicious. Thyroid nodules are heterogeneous in appearance, with many internal components and vague boundaries, which makes it difficult to differentiate benign from malignant. To eliminate operator error and improve the accuracy of the result, Ma et al.[32] hybridized two CNNs: the networks were trained separately and then fused to diagnose thyroid nodules with a softmax classifier. The proposed model achieves an accuracy of approximately 83.02%.
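A simplified sketch of this kind of two-network fusion, trained jointly here for brevity, whereas Ma et al. trained the branches separately before fusing: two convolutional branches process the same ultrasound patch, their features are concatenated, and a softmax layer makes the benign-versus-malignant call. All layer sizes are illustrative assumptions.

```python
# Illustrative feature-level fusion of two CNN branches with a softmax head.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(64, 64, 1))  # toy ultrasound patch

def branch(x, filters):
    x = layers.Conv2D(filters, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    return layers.Flatten()(x)

fused = layers.Concatenate()([branch(inp, 16), branch(inp, 32)])
out = layers.Dense(2, activation="softmax")(fused)  # benign vs. malignant
model = tf.keras.Model(inp, out)
```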
To promote global standardization and reduce variation in the assessment of prostate cancer, Ishioka et al.[33] designed a framework that combines U-net and ResNet50. The U-net consists of 17 layers in an encoder-decoder structure for distinguishing whole and local pelvic structures, while ResNet50 reformulates its layers as residual functions trained with reference to the layer inputs. This method has the potential to produce consistent diagnoses with improved cost-effectiveness and efficiency.
DCNN in dental image analysis
Although applications of DCNNs to medical image identification have been increasingly investigated by many groups in recent years, to our knowledge only a limited number of studies have discussed the use of CNNs in dental image processing.
Wang et al.[34] presented the first article using DCNNs in the diagnosis and analysis of dental radiographs. They shared the analysis algorithms that competing teams built to address the challenges posed at the IEEE International Symposium on Biomedical Imaging competition in 2015. The three challenges were landmark prediction, classification for cephalometric tracing, and segmentation of bitewing radiographs; the competing groups developed four types of CNNs. In addition, Miki et al.[35] conducted research on classifying tooth types in dental cone-beam CT images using an automated DCNN method.
Choi et al.[36] proposed a four-module system to identify proximal dental caries: (1) horizontal alignment of the pictured teeth to minimize performance degradation, (2) generation of a probability map via a CNN, (3) crown extraction, segmented by a level set method, to mask crowns and suppress caries probabilities in dental pulps, and (4) refinement. The results show that this system can be used to detect proximal dental caries.
Periodontal disease is common at all ages and is correlated with systemic diseases such as endocarditis. To assist in the diagnosis and prediction of periodontally compromised teeth, Lee et al.[37] developed a DCNN-based architecture consisting of 16 convolution layers and two fully connected layers. Its accuracy in detecting periodontitis is 81.0% for premolars and 76.7% for molars. Further, Rana et al.[38] presented an autoencoder framework with convolutional layers to segment gingival disease in oral images; this model successfully distinguishes inflamed from healthy gingiva.
Recently, Lee et al.[39] studied a DCNN-based computer-assisted diagnosis (CAD) system for the detection of osteoporosis on panoramic radiographs. The radiographs were analyzed using a single-column DCNN (SC-DCNN), a single-column DCNN with data augmentation (SC-DCNN Augment), and a multicolumn DCNN (MC-DCNN). The DCNN CAD system was compared with an experienced oral and maxillofacial radiologist, and the results showed high agreement between the two. The authors concluded that a DCNN-based CAD system may provide information to dentists for the early detection of osteoporosis. Most recently, another group used a deep learning-based CNN algorithm to detect and diagnose dental caries: Lee et al.[40] applied a DCNN based on the GoogLeNet Inception v3 architecture to diagnose dental caries on periapical radiographs in premolar, molar, and combined premolar-molar models. The algorithm achieved an AUC of 0.917 on premolars, 0.890 on molars, and 0.845 on premolars and molars combined; the AUC of the premolar model was statistically significantly higher than those of the other models.
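A sketch of transfer learning with the Inception v3 architecture of the kind Lee et al.[40] describe; the frozen ImageNet base, the input size, and the binary caries head are assumptions about a typical setup, not the paper's exact pipeline.

```python
# Illustrative transfer learning with Inception v3 for caries detection.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # reuse pretrained features, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # caries vs. no caries
])
```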
Finally, as outlined by Mupparapu et al.,[41] DCNNs and the age of machine learning have arrived in dentistry and medicine. As the use of CNNs continues to grow, they have the potential to take on a more important role in helping clinicians diagnose diseases and in making recommendations to clinicians. AI and DCNNs were intended not to replace the clinician but to serve as a diagnostic adjunct, perhaps as a tool for a second opinion, and to streamline and prioritize workflow based on the criticality of the intervention. The most important process in machine learning is segmentation itself, as it leads to easier machine learning and eventually to autolearning mechanisms. [Figure 2], [Figure 3], and [Figure 4] were generated using ITK-SNAP (University of Pennsylvania, Philadelphia, PA, USA), a software program jointly developed at the University of Pennsylvania and the University of Utah.

Figure 2 ITK-SNAP software-generated cone beam computed tomography (CBCT) slice of a mandible after thresholding segmentation of an area of dental infection that will go through further segmentation steps. (Image courtesy: Katherine Shi, University of Pennsylvania School of Dental Medicine.)
Figure 3 ITK-SNAP software-generated cone beam computed tomography (CBCT) slice showing the second step in segmentation of the area in question. The next step would be full quantification of the area of interest. (Image courtesy: Katherine Shi, University of Pennsylvania School of Dental Medicine.)
Figure 4 ITK-SNAP software-generated cone beam computed tomography (CBCT) slice showing the third step in the segmentation process. Note the complete segmentation of the area of interest, which facilitates quantification. Segmentation is an important process in machine learning and in the application of artificial intelligence programs that will lead to self-learning modules. (Image courtesy: Katherine Shi, University of Pennsylvania School of Dental Medicine.)
Conclusion
This survey of current applications of DCNNs in medical and dental imaging shows their clear potential to aid and improve the quality of image diagnosis and interpretation. However, several barriers remain in the development of CNNs for medicine, including (1) privacy rights and legal issues, (2) the large datasets necessary for training DCNNs, and (3) the enigmatic nature of DCNN algorithms, whose exact mechanism of action is not fully understood.
The first barrier concerns privacy rights and legal issues surrounding medical data. The Health Insurance Portability and Accountability Act (HIPAA) protects the personally identifiable information of patients and restricts disclosure of such information. These privacy rights impair collaboration between hospital providers, clinicians, and researchers on the one hand and the architects of CNN algorithms on the other. Additionally, data privacy makes it difficult for researchers to collect and analyze patient data while protecting patients' personal information.
Furthermore, massive amounts of data are necessary to train DCNNs to improve their accuracy and precision. However, curating a high-quality dataset requires the coordination of multiple experts, which is time consuming and not cost efficient. In addition, because medical systems and machine models vary widely, datasets for the same task show substantial variation that hinders useful CNN training, and rare diseases will nearly always lack datasets of sufficient size.
Finally, the behavior of a deep neural network trained by backpropagation is still extraordinarily complicated, and scientists cannot yet fully explain how it works or why it provides better results than more well-understood methods. Because of these enigmatic machinations, it can sometimes produce unexpected results. Traditionally, medical experts are responsible for a diagnosis if the judgment is wrong, but what factors are to blame when a not fully understood AI produces erroneous results? Although researchers are working on optimal solutions in deep learning, unanswered questions remain and pieces of the puzzle are still missing. Beyond the heavy application of DCNNs in myriad industries, such as autonomous cars and robotics, the medical field is a critical area that could benefit greatly from them.
Deep learning will evolve once researchers tackle the challenges mentioned above, and its optimization will provide powerful tools to the next generation. Currently, researchers are attempting to use deep learning as an aid to assess medical images in clinical applications.
Promising results from myriad researchers have already been published, which paints a bright future for DCNNs in healthcare. Future research may include models that predict or diagnose dental disease using imaging analytics to develop algorithms, drawing on huge libraries of images; such algorithms can analyze hundreds of thousands of radiographic studies to become more knowledgeable and accurate. However, the authors must caution that AI technology should not replace the medical professional but instead serve as a supplemental tool to improve diagnosis and detection. Thus, we conclude that the use of deep learning methods will make a great impact on medical image analysis and patient healthcare.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References

1. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ, editors. Advances in Neural Information Processing Systems 25. Proceedings of the Neural Information Processing Systems Conference; December 3-8, 2012; Lake Tahoe, NV. NIPS; 2012. pp. 1097-105.
2. Kallenberg M, Petersen K, Nielsen M, Ng AY, Diao P, Igel C et al. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring. IEEE Trans Med Imaging 2016;35:1322-31.
3. Kleesiek J, Urban G, Hubert A, Schwarz D, Maier-Hein K, Bendszus M et al. Deep MRI brain extraction: a 3D convolutional neural network for skull stripping. NeuroImage 2016;129:460-9.
4. Setio AAA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ et al. Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans Med Imaging 2016;35:1160-9.
5. Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging 2016;35:1207-16.
6. Cheng JZ, Ni D, Chou YH, Qin J, Tiu M, Chang YC et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep 2016;6:24454.
7. Karimian N, Hassan SS, Mahdian M, Alnajjar H, Tadinada A. Deep learning classifier with optical coherence tomography images for early dental caries detection. In: Rechmann P, Fried D, editors. Proceedings of SPIE Vol. 10473, Lasers in Dentistry XXIV; January 27-February 8, 2018; San Francisco, CA. SPIE; 2018.
8. Srivastava M, Kumar P, Pradhan L, Varadarajan S. Detection of tooth caries in bitewing radiographs using deep learning. Machine Learning for Health (ML4H) Workshop, Thirty-First Annual Conference on Neural Information Processing Systems; December 4-7, 2017; Long Beach, CA. NIPS; 2017.
9. Collobert R, Weston J. A unified architecture for natural language processing: deep neural networks with multitask learning. In: Cohen W, McCallum A, Roweis S, editors. Proceedings of the 25th International Conference on Machine Learning (ICML); July 5-9, 2008; Helsinki, Finland. 2008. pp. 160-7.
10. Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks. In: Krishnamurthy V, Plataniotis K, editors. Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP); May 26-31, 2013; Vancouver, Canada. 2013. pp. 6645-9.
11. Cheng HT, Koc L, Harmsen J, Shaked T, Chandra T, Aradhye H et al. Wide and deep learning for recommender systems. In: Karatzoglou A, Hidasi B, Tikk D, editors. Proceedings of the 1st Workshop on Deep Learning for Recommender Systems; September 15, 2016; Boston, MA. New York; 2016. pp. 7-10.
12. Ciaburro G, Venkateswaran B. Neural Networks with R: Smart Models Using CNN, RNN, Deep Learning, and Artificial Intelligence Principles. Birmingham, UK: Packt Publishing; 2017.
13. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 2014;15:1929-58.
14. Liu X, Guo S, Yang B, Ma S, Zhang H, Li J et al. Automatic organ segmentation for CT scans based on super-pixel and convolutional neural networks. J Digit Imaging 2018;31:748-60.
15. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y et al. Brain tumor segmentation with deep neural networks. Med Image Anal 2017;35:18-31.
16. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017;44:547-57.
17. Meijs M, Manniesing R. Artery and vein segmentation of the cerebral vasculature in 4D CT using a 3D fully convolutional neural network. In: Petrick N, Mori K, editors. Proceedings Volume 10575, Medical Imaging 2018: Computer-Aided Diagnosis; SPIE Medical Imaging; February 10-15, 2018; Houston, TX. SPIE; 2018. p. 10575Q.
18. Liu F, Zhou Z, Jang H, Samsonov A, Zhao G, Kijowski R. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging. Magn Reson Med 2017;79:2379-91.
19. Lu H, Li B, Zhu J, Li Y, Li Y, Xu X et al. Wound intensity correction and segmentation with convolutional neural networks. Concurr Comput Pract Exper 2017;29:e3927. Available at: https://onlinelibrary.wiley.com/doi/pdf/10.1002/cpe.3927. [Cited September 2018].
20. Hu P, Wu F, Peng J, Bao Y, Chen F, Kong D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int J Comput Assist Radiol Surg 2017;12:399-411.
21. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115-8.
22. Rajpurkar P, Irvin J, Zhu K, Yang B, Mehta H, Duan T et al. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv:1711.05225 [cs]. Available at: https://arxiv.org/pdf/1711.05225.pdf. [Cited September 14, 2017].
23. Kirienko M, Sollini M, Silvestri G, Mognetti S, Voulaz E, Antunovic L et al. Convolutional neural networks detect local infiltration of lung cancer primary lesions on baseline FDG-PET/CT. Medical Imaging with Deep Learning [Internet]; 2018. Available at: https://openreview.net/pdf?id=BJ5Q13jiM. [Cited September 14, 2018].
24. Bejnordi EB, Mullooly M, Pfeiffer RM, Fan S, Vacek PM, Weaver DL et al. Using deep convolutional neural networks to identify and classify tumor-associated stroma in diagnostic breast biopsies. Mod Pathol 2018;31:1502-12.
25. Gao M, Bagci U, Lu L, Wu A, Buty M, Shin HC et al. Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks. Comput Methods Biomech Biomed Eng Imaging Vis 2018;6:1-6.
26. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adam M, Gertych A et al. A deep convolutional neural network model to classify heartbeats. Comput Biol Med 2017;89:389-96.
27. Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep 2018;23:181-193.e7.
28. Sun W, Tseng TL, Zhang J, Qian W. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput Med Imaging Graph 2017;57:4-9.
29. Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018;172:1122-1131.e9.
30. Li Z, Hu Z, Xu J, Tao T, Chen H, Duan Z et al. Computer-aided diagnosis of lung carcinoma using deep learning: a pilot study. arXiv:1803.05471; March 2018. pp. 1-22. Available at: https://arxiv.org/abs/1803.05471. [Cited September 15, 2018].
31. Raghavendra U, Fujita H, Bhandary SV, Gudigar A, Tan JH, Acharya UR. Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images. Inform Sci 2018;441:41-9.
32. Ma J, Wu F, Zhu J, Xu D, Kong D. A pre-trained convolutional neural network based method for thyroid nodule diagnosis. Ultrasonics 2017;73:221-30.
33. Ishioka J, Matsuoka Y, Uehara S, Yasuda Y, Kijima T, Yoshida S et al. Computer-aided diagnosis of prostate cancer on magnetic resonance imaging using a convolutional neural network algorithm. BJU Int 2018;122:411-7.
34. Wang CW, Huang CT, Lee JH, Li CH, Chang SW, Siao MJ et al. A benchmark for comparison of dental radiography analysis algorithms. Med Image Anal 2016;31:63-76.
35. Miki Y, Muramatsu C, Hayashi T, Zhou X, Hara T, Katsumata A et al. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput Biol Med 2017;80:24-9.
36. Choi J, Eun H, Kim C. Boosting proximal dental caries detection via combination of variational methods and convolutional neural network. J Signal Process Syst 2018;9:87-97.
37. Lee JH, Kim D, Jeong SN, Choi SH. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci 2018;48:114-23.
38. Rana A, Yauney G, Wong LC, Gupta O, Muftu A, Shah P. Automated segmentation of gingival diseases from oral images. In: Proceedings of the 2017 IEEE Healthcare Innovations and Point of Care Technologies Conference (HI-POCT); November 6-8, 2017; Bethesda, MD. IEEE; 2017. pp. 144-7. Available at: IEEE Xplore. [Cited September 23, 2018].
39. Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dentomaxillofac Radiol July 13, 2018. Available at: https://www.ncbi.nlm.nih.gov/pubmed/30004241. [Cited September 23, 2018].
40. Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 2018;77:106-11.
41. Mupparapu M, Wu CW, Chen YC. Artificial intelligence, machine learning, neural networks, and deep learning: futuristic concepts for new dental diagnosis. Quintessence Int 2018;49:687-8.