Review Research of Medical Image Analysis Using Deep Learning

Bakhtyar Ahmed Mohammed1,2*, Muzhir Shaban Al-Ani1

1University of Human Development, College of Science and Technology, Department of Computer Science, Sulaymaniyah, KRG, Iraq; 2University of Sulaimani, College of Science, Department of Computer, Sulaymaniyah, KRG, Iraq

Corresponding author: Department of Computer Science, College of Science and Technology, University of Human Development, Sulaymaniyah, KRG, Iraq. E-mail: bakhtyar.mohammed@uhd.edu.iq
Received: 08-04-2020 Accepted: 19-08-2020 Published: 27-08-2020
DOI: 10.21928/uhdjst.v4n2y2020.pp75-90


ABSTRACT

In the modern world, medical image analysis plays a significant role in the diagnostic process. In general, it involves five processes: medical image classification, medical image detection, medical image segmentation, medical image registration, and medical image localization. Medical imaging is used in the diagnostic process for most human body organs and conditions, such as brain tumors and chest, breast, colonoscopy, and retinal cases, among many others, using various modalities. These multi-modality images include magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), positron emission tomography (PET), optical coherence tomography (OCT), confocal laser endoscopy (CLE), magnetic resonance spectroscopy (MRS), computed tomography (CT), X-ray, wireless capsule endoscopy (WCE), mammography, Papanicolaou smear, hyperspectral imaging, and ultrasound, each used to diagnose different body organs and cases. Medical image analysis is an appropriate environment for interaction with automated intelligent system technologies. Among intelligent systems, deep learning (DL) is the most modern approach for handling medical image analysis processes, decomposing an image into fundamental components to extract meaningful information, and the deep convolutional neural network is the best model on which to build such systems. This study reviews a selection of these works for the following reasons: improvements in medical imaging are increasing the demand for automated medical image analysis systems using DL; in most tested cases, the accuracy of intelligent methods, especially DL methods, is higher than that of hand-crafted work; and manual work requires far more time than systematic diagnosis.

Index Terms: Medical Image Analysis, Medical Image Modalities, Deep Learning, Convolutional Neural Network

1. INTRODUCTION

In recent years, medical imaging has become one of the most widely used techniques for diagnosing diseases of human organs and for anatomical visualization of the body. It is a broad area of digital image processing known for its effectiveness, ease of use, and safety in diagnosing and following up diseases. The growth of huge multimodal datasets has driven the growth of data analytics, especially in medical imaging. The architecture of deep learning (DL) is based on neural networks whose layers perform feature extraction and classification in medical image processing, and it includes many methods for different tasks [1]. DL has evolved in many fields, such as computer-aided diagnosis (CAD), radiology, and medical image analysis, which can include tasks such as finding shapes, detecting edges, removing noise, counting objects, and calculating statistics for texture analysis or image quality [2].

In a short period, DL has taken on a great role in training artificial agents to replace complicated manual scientific work at a reasonable time cost in various fields related to medical image analysis, relying on public and private datasets [3]. Human body organs vary in complexity; some organs are more affected by ionizing radiation. Hence, it is important to carefully match medical image modalities with the techniques used for diagnosis. Furthermore, the accuracy of these modalities is critical at the first step of medical image processing [4]. That accuracy depends on the sensors or imaging devices, which capture images using the ray spectra appropriate to each modality type. Many spectra are used for body imaging; some, such as gamma rays, involve very strong radiation, while others, such as magnetic resonance imaging (MRI), which uses radio frequency (RF) signals, interact only weakly with the human body [5]. The deep artificial neural network (deep ANN) model emerged in 2009, and this branch has been developing ever since; at present, deep neural network variants are the strongest machine learning methods for analyzing various kinds of medical imaging [6]. In general, medical image analysis consists of five processes: medical image classification, detection, segmentation, registration, and localization. Furthermore, the graphics processing unit (GPU) is an indispensable hardware component that supports and accelerates medical image analysis processes such as image segmentation, image registration, and image de-noising, across modalities such as X-ray, CT, positron emission tomography (PET), single photon emission computed tomography (SPECT), MRI, functional MRI (fMRI), ultrasound (US), optical imaging, and microscopy. It enables parallel acceleration of medical image processing, working in harmony with DL [7].

DL is rapidly improving performance in different medical applications [8]. Some important criteria play a great role in the development of medical image analysis processes. One is the region of interest (ROI), which is central to early detection and localization, for example in predicting the bounding box coordinates of the optic disc (OD) to diagnose glaucoma and diabetic retinopathy using DL methods [9], and in colonoscopy, where the adenoma detection rate (ADR) is improved using a convolutional neural network (CNN) [10]. Within this process, automatic analysis supports report generation and real-time decision support, such as localization and tracking in cataract surgery using a CNN [11]. A large training set is another essential element, since DL methods can learn strong image features from volumetric data such as 3D images for landmark detection, and there are many good ways to train on such datasets [12]. Advances in machine learning, especially DL, make it possible to learn many features of medical imaging data for processes such as identifying, classifying, and quantifying patterns, automating interpretations that previously required hand-crafted processing for each image modality [8].

However, medical imaging data include noise, missing values, and inhomogeneous ROIs, which cause inaccurate diagnoses. The ROI provides accurate knowledge that aids clinical decision-making for diagnostics and treatment planning, and an accurate feature extraction process yields more accurate diagnoses and higher overall accuracy [13]. Edge detection is another key process for medical imaging applications; it can be used in image segmentation, usually based on homogeneity, through two steps: classification and detection of all pixels by a CNN using filters [14]. A CNN can exploit both local features and more global contextual features at the same time, regardless of the particular architecture adopted [15]. The CNN architecture can also be modified, for example by using a fully convolutional network (FCN) instead of a standard CNN for semantic segmentation, to effectively and accurately detect brain tumors in MRI images [16].

Certainly, the advancement of medical image analysis has been slower than that of medical imaging technologies. For this reason, studying DL for the components of medical image analysis, and CNNs specifically, is a pressing necessity to improve the accuracy of methods for each component by reducing obstacles such as limited training datasets and by lowering error rates.

2. MEDICAL IMAGE MODALITIES

Medical images are the essential data type in medical image processing. There are various cases, depending on body sites, organs, and diseases, that have led physiologists to devise different techniques to reveal significant features related to each medical case. Most techniques used in medical imaging rely on visible and non-visible radiation, except MRI. These techniques are applied to various body organs depending on the case. The multi-variability of these modalities is necessary for several reasons. The most significant is the effectiveness of particular techniques for specific tasks, such as MRI for the brain and CT for the lungs. Another reason is the impact of radiation on the human body: ionizing rays damage DNA, whereas non-ionizing rays have no known side effects on human organs [5].

MRI uses radiofrequency signals with a powerful magnetic field to produce images of human tissues. MRI is dominant among the modality types because of its safety and the richness of its information [17]. It is usually used in neurology and neurosurgery of the brain and spine. It shows human anatomy in all three planes: axial, sagittal, and coronal. It is used for quantitative analysis of most neurological diseases of the brain [18]. Furthermore, it is able to detect flowing blood and hidden vascular distortions. MRI takes priority over other modalities because of its superior image quality and its freedom from ionizing radiation [19].

It is beneficial for accuracy enhancement, noise reduction, detection speed improvement, segmentation, and classification [17].

Automatic and accurate segmentation of sub-cortical brain structures in MRI using a CNN, which extracts prior spatial features and trains on the most complicated features to improve accuracy, is effective for processes such as pre-operative evaluation, surgical planning, radiotherapy treatment planning, and longitudinal monitoring of disease progression [20]. MRI provides a wealth of imaging biomarkers for cardiovascular disease care and for the segmentation of cardiac structures [21]. Furthermore, it provides rich information about human tissue anatomy, yielding wide soft-tissue contrast, and is considered a standard technique [17]. It provides detailed and sufficient information about the different tissues inside the human body, with high contrast and spatial resolution, and it is broadly engaged in the anatomical structural examination of cerebral tissues [18]. Bidani et al. (2019) showed that MRI is important for diagnosing dementia, a disease indicated by declining memory, by scanning brain MRI [22].

Geok et al. (2018) applied DL methods to MRI of the brain stem and anterior cingulate cortex to classify migraine and non-migraine data [23].

Another application of brain MRI is the early detection and classification of multi-class Alzheimer's disease [24]. Suchita et al. (2013) showed that MRI brain diagnosis is challenging because of the variance and complexity of tumors [25]. Padrakhti et al. (2019) showed that brain MRI is useful for age prediction, as in brain age estimation [26].

During MRI data acquisition, a group of 2D MRI images can be represented as a 3D volume because of the large number of frames, as in the brain. Many different MRI contrast types exist: axial T2 cases are used for edematous regions, axial T1 cases for healthy tissues, T1-GD (gadolinium contrast enhancement) to determine tumor borders, and fluid-attenuated inversion recovery (FLAIR), in which the cerebrospinal fluid (CSF) signal is suppressed, for edematous regions. In summary, the contrast image types include FLAIR, T2-weighted MRI (T2), T1-weighted MRI (T1), and T1-GD [17].

Brain MRI is one of the best imaging techniques employed by researchers to detect brain tumors in the progression phase, serving both the detection and treatment steps [27]. It is useful for supplying information about the location, volume, and level of tumor malignancy [28]. Talo et al. (2018) showed that radiologists traditionally selected MRI to determine the status of brain abnormalities; because analyzing these images manually is time-consuming and hard, computer-based detection aids were introduced to make the diagnosis process accurate and fast [29].

Magnetic resonance spectroscopy (MRS) is a specific modality for the evaluation of thyroid nodules, differentiating benign from malignant thyroid tissues [30]. PET is a type of nuclear medicine imaging, like scintigraphy; it is a common and useful medical imaging technique used clinically in oncology, cardiology, and neurology [7]. SPECT can supply a true three-dimensional anatomical image using gamma rays [7]. Elastography is used for liver fibrosis; related techniques include tactile imaging, photo-acoustic imaging, thermography (passive and active), and tomography (conventional and computer-assisted) [31]. Accurate features of chest CT images, such as ground-glass opacity for detecting COVID-19 pneumonia, have made CT useful for training computer-aided methods quickly; it also aids clinicians, especially in diagnosing COVID-19 infection cases [32]. Optical coherence tomography (OCT) uses low-coherence light to produce two- and three-dimensional micrometer-resolution images from within optically scattering media. It is used for early diagnosis of retinal diseases [33]. OCT images clearly show intensity variances, low-contrast regions, speckle noise, and blood vessels [34]. Furthermore, the retinal fundus image is another modality, used to measure retinal vessel diameter [35]. Sun et al. (2017) used another sensor, a portable fundus camera, for large-scale retinal image quality classification, which differs from diabetic retinopathy screening systems, using CNN algorithms [36]. The Papanicolaou (PAP) smear is another medical image modality, used to identify cancerous changes of the uterine cervix with a learning-based method that segments separated PAP-smear image cells [37]. Nguyen et al. (2018) tested microscopic images, taken from the 2D-HeLa and PAP-smear datasets, as another type of medical image modality [38].
Confocal laser endoscopy (CLE) is another medical image modality, relied upon to diagnose and detect brain tumors for its accuracy and effectiveness in automatic diagnosis [39]. It is a type of advanced optical fluorescence technology undergoing application assessment in brain tumor surgery, although many of its images are distorted and interpreted as non-diagnostic [40]. For gastrointestinal diseases, a newer medical imaging technique known as wireless capsule endoscopy (WCE) records WCE frame images in order to detect abnormal patterns [41]. It diagnoses gastrointestinal diseases through a sensor small enough to swallow, which captures scenes of the anatomical parts it passes through [41]. The dermoscopic image is another useful modality, used for skin lesions [42], [43]. Breast cancer (BrC) imaging relies on medical image modalities such as mammography, known as breast X-ray, and US, also called sonography [44]. Furthermore, histology images are used to determine multi-size, discriminative patches for classifying BrC [45]. Masood et al. (2012) identified fine-needle aspiration (FNA) as another way to take a breast sample [46]. The hyperspectral image (HSI) is another new modality, used for diagnosis and early detection of oral cancer with a CNN before surgery [47]. Dey et al. (2018) used it for early detection of oral cancer in habitual smokers [39]. Single X-ray projection is used for monitoring and radiotherapy tumor-tracking to analyze tumor motion [48].

3. MEDICAL IMAGE ANALYSIS

Medical image analysis is the process of analyzing medical images through a set of techniques composed of five main components: medical image classification, medical image detection, medical image segmentation, medical image localization, and medical image registration.

3.1. Medical Image Classification

This component of medical image analysis is responsible for classifying labeled image classes based on their features. In this process, the homogeneity and heterogeneity of features determine how the classes are categorized. In traditional methods, shape, color, and texture were the key features for categorizing labeled image classes, whereas in modern methods, where DL is essential for labeling images, various algorithms have become fundamental tools for accurate multi-class label classification [49]. Categorization follows the feature extraction process and runs on the selected features [27].
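As a toy illustration of the traditional feature-based approach described above, the sketch below (hypothetical data and feature choices, not drawn from any cited study) classifies small image patches by simple texture statistics using a nearest-centroid rule:

```python
import numpy as np

def texture_features(img):
    """Hand-crafted features: mean intensity, variance, and mean
    absolute gradient, standing in for the classic texture cues."""
    img = np.asarray(img, dtype=float)
    grad = np.abs(np.diff(img, axis=0)).mean() + np.abs(np.diff(img, axis=1)).mean()
    return np.array([img.mean(), img.var(), grad])

def nearest_centroid(train_imgs, train_labels, test_img):
    """Assign test_img to the class whose mean feature vector is closest."""
    feats = np.array([texture_features(im) for im in train_imgs])
    labels = np.array(train_labels)
    centroids = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    f = texture_features(test_img)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Toy data: smooth low-intensity patches (class 0) vs noisy bright ones (class 1)
rng = np.random.default_rng(0)
smooth = [np.full((8, 8), 0.2) + rng.normal(0, 0.01, (8, 8)) for _ in range(5)]
noisy = [rng.uniform(0.5, 1.0, (8, 8)) for _ in range(5)]
train = smooth + noisy
labels = [0] * 5 + [1] * 5
print(nearest_centroid(train, labels, np.full((8, 8), 0.21)))  # class 0
```

DL methods replace the hand-picked `texture_features` step with features learned by convolutional layers, which is the shift this section describes.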

Litjens et al. (2017) divided the classification process into two phases: image classification and object (or lesion) classification. Image classification is the first medical image analysis process, dividing the image into several smaller sizes, while object classification works on the small regions identified earlier [50]. Suchita et al. (2013) identified distinguishing different objects in the image as the main function of the classification technique, and accordingly divided classification methods into two main subdivisions: supervised and unsupervised [25].

In supervised learning, datasets are the most significant factor for teaching the methods and increasing accuracy through the feature extraction process [22].

Wong et al. (2018) showed that MRI brain images are used to diagnose tumors and classify them into classes such as no tumor, low-grade gliomas, and glioblastomas. These classes can also be subdivided, as with gliomas, which are classified into grades I to IV according to the World Health Organization classification [51].

Image quality determines the class of the examined images; images of low quality are considered inappropriate for diagnosis [52].

It is worth mentioning that some researchers use synonyms for classification, such as CADx. Among them, Ker et al. (2017) employed different terms to represent various CNN algorithms [53].

Rani (2011) explained that data mining can be performed in many ways; all techniques are important in particular settings, and classification is an analysis technique used to retrieve important and relevant information from data. It can be applied to micro-calcifications in mammograms, classification of chest X-rays, and tissue and vessel classification in MRI. When this technique is built on a CNN in DL, it brings valuable benefits, such as working properly in noisy environments [54].

Suzuki (2017) compared the massive training artificial neural network (MTANN) and CNN models, both used for classifying lung nodules versus non-nodules. Each has advantages that distinguish it from the other. For instance, in classifying lesions and non-lesions in CAD, MTANN scored better at decreasing false positives, while CNN can reach a higher accuracy level measured by the area under the ROC curve (AUC). For example, MTANN achieved an AUC of 0.882 for lung nodules, while a CNN scored 0.888 for seven tooth types under comparable computer vision circumstances [1].
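The AUC figures quoted above can be read through the rank interpretation of the ROC curve: the AUC equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch with made-up scores:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive outscores the
    negative (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for nodule (positive) and non-nodule cases
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.5, 0.4, 0.2]
print(auc(pos, neg))  # 15 wins out of 16 pairs -> 0.9375
```

An AUC of 0.5 corresponds to chance-level ranking, while 1.0 means every lesion is scored above every non-lesion, which is why the 0.88-level figures above indicate strong but imperfect separation.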

Yamashita et al. (2018) explained that CAD has become part of routine clinical work for detecting diseases of the brain, breast, eye, chest, and so on. For each organ, the classification process plays a special role. For the brain, CAD applies fMRI in two stages to detect autism spectrum disorder (ASD): in the first stage, CAD identifies the biomarkers for ASD, while in the second stage, across two sub-steps, CAD relies on fMRI, with an accuracy of 70%, to identify the anatomical structure.

Certainly, the CNN can be used as a powerful tool for classification. Another advantage of CNNs in this regard is their use for processing target objects separated from the medical images. However, it is undeniable that this process requires a large amount of training data [55].

Ruvalcaba-Cardenas et al. (2018) showed that 2D-CNN and 3D-CNN models perform well for small-class separation using a single-photon avalanche diode sensor, in low-light indoor and outdoor daytime conditions, as long as a noise-removal algorithm is used with 64 × 64-pixel resolution [56].

The process of identifying label and lesion types requires much careful work, especially to determine early treatment [14]. A whole-chain process extracts the features for microscopic image classification [38]. Table 1 illustrates some important reviews of the classification process.

TABLE 1. Classification methods for different body organs


3.2. Medical Image Detection

Finding abnormal objects is the main goal of medical image detection. Usually, abnormality is detected by comparing two cases in the images. Most of the time, this process takes place with the aid of computer-aided detection (CAD), starting by identifying objects in the images through the application of detector algorithms [16]. To reduce time consumption and reach efficient detection, experts have dedicated time and effort to finding faster and more accurate methods. Marginal space learning is one significant approach, more efficient and faster than traditional methods [3]. The function of CAD in this process is to relieve radiologists who diagnose manually, by easily marking abnormalities on the images. From this standpoint, CAD can take different forms based on its function: detection of candidate regions with the aid of processing techniques, a set of extracted features, and extracted features fed into a classifier [8]. Diagnosing brain tumors through automatic detection may face difficulties that require intelligent intervention [64].

In fact, MRI is used in diagnosing many other diseases. Alkadi et al. (2018) used it for prostate cancer diagnosis, providing information on the location, volume, and level of malignancy [28].

The advantage of automated diagnosis across all medical imaging fields is the attempt to increase accuracy and reduce time consumption [65]. This applies to neurodegenerative diseases, for instance dementia, which causes loss of memory, language, and judgment [22]. Automation can boost the performance of a CNN and improve detection and localization accuracy [41]. For super-pixel image analysis, detection of different structures is required; this engages image augmentation to help the CNN extract features from the original dermoscopy image data [43].

The role of detection lies in identifying abnormal cases among normal ones; the whole pipeline is called a CNN-based CAD system. Ker et al. (2017) employed computer-aided 2D- and 3D-CNN detection for various purposes, especially lymph node detection to diagnose infection or tumor [53]. Tajbakhsh et al. (2016) shed light on the detection process, which is complicated, dividing it into two applications. The first is polyp detection, which works on decreasing the rate of misdetection by finding perceptually varying features such as the color, shape, and size of colon features; among these, shape is more effective than the others. The second is pulmonary embolism (PE) detection. PE blocks the pulmonary arteries with blood clots that travel from a lower-extremity source to the lung; it is diagnosed with CT pulmonary angiography (CTPA), which is time-consuming to read. The death rate of PE is 30%, but it drops to 2% with the right treatment, motivating the implementation of deep CNN methods [52].

The advantages and disadvantages of each practical edge detection technique lie in the balance between accuracy and computational cost. The Laplacian of Gaussian edge detector convolves the image with a high-pass filter to find edge pixels, analyzing edge pixel locations from both sides. The Canny edge detector is considered an optimal edge detector, achieving the lowest error rate in detecting real edge points. The 2D Gabor filter relies on frequency and orientation representations [35]. It is agreed that CAD works on the ROI in image analysis, meaning that detection gathers the regions of interest into one limited area; this can be seen in MRI of brain tumors, where it identifies the earliest signs of abnormality. Altaf et al. (2019) used a 3D CNN to detect BrC in automated breast US images, using a sliding-window technique to extract volumes of interest and then a 3D-CNN to determine the probability that a tumor exists [59].
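As a minimal sketch of the filter-based edge detection described above (using a plain 3 × 3 Laplacian kernel rather than the full Laplacian of Gaussian, on a synthetic image), edge pixels are those where the absolute filter response exceeds a threshold:

```python
import numpy as np

# 3x3 Laplacian kernel: approximates the second derivative, so its
# response peaks where intensity changes abruptly (edges).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def convolve2d(img, kernel):
    """Valid-mode 2D convolution (no padding), in plain NumPy."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_mask(img, threshold=0.5):
    """Mark pixels whose absolute Laplacian response exceeds the threshold."""
    return np.abs(convolve2d(img, LAPLACIAN)) > threshold

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(edge_mask(img).astype(int))  # flags the two columns flanking the step
```

A CNN learns many such kernels from data instead of fixing them by hand, which is how the filter-based classification and detection of pixels mentioned above is realized in DL.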

Experts and technology developers have been working hard in this field to make medical image analysis more adequate and fruitful. The attention of experts is not limited to software; the hardware side is also receiving a good share of care. Every now and then, CAD witnesses development in one way or another, with every trial aimed at reducing errors and increasing accuracy [66]. Table 2 illustrates some important reviews of the detection process.

TABLE 2. Detection methods for different body organs


3.3. Medical Image Segmentation

Medical image segmentation is the process of analyzing a digital image and partitioning it into multiple regions. The main purpose of segmentation is to highlight the objects detected in the image [68]. By another definition, medical image segmentation is the process of accurately delineating the outlines of anatomical body organs [3].

From the given definitions, we realize that segmentation is a complicated process; therefore, researchers have been working on developing procedures to make it easier [15]. Automated segmentation accelerates applications such as pre-operative assessment, surgical planning, radiotherapy treatment planning, and longitudinal monitoring [20]. Improvement of medical image segmentation can happen in many ways; to improve its hardware support, the GPU is the key answer [7]. Segmentation is either semantic or non-semantic: semantic segmentation links each pixel in an image to a class label, whereas non-semantic segmentation groups similar shapes, as in clustering [51].
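The per-pixel nature of semantic segmentation can be sketched as follows: given one score map per class, as the final layer of an FCN-style network might output, each pixel takes the label of its highest-scoring class. The score maps below are made up for illustration:

```python
import numpy as np

def semantic_segment(score_maps):
    """Per-pixel semantic labeling: given one score map per class
    (shape: classes x H x W), assign each pixel the class whose score
    is highest there -- the decision step of FCN-style segmentation."""
    return np.argmax(score_maps, axis=0)

# Toy 4x4 example with two classes: background (0) and lesion (1)
background = np.full((4, 4), 0.6)
lesion = np.full((4, 4), 0.2)
lesion[1:3, 1:3] = 0.9          # lesion scores dominate in the centre
mask = semantic_segment(np.stack([background, lesion]))
print(mask)  # a 2x2 block of 1s surrounded by 0s
```

Non-semantic segmentation would instead group pixels by similarity (e.g. clustering intensities) without attaching class meanings to the resulting regions.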

In the segmentation process, the methods are changeable, and the quality of the process changes accordingly. In medical image segmentation, MRI plays a significant role in quantitative image analysis [16].

Through MRI, the image is cut into many regions sharing similar attributes [6]. Dividing the image into ROIs means the image is divided into sections containing objects, adjacent regions, and pixels of similar regions [13]. Through the application of CNN models, brain tumor tissues are labeled from small patches around each point; the labeling process exploits the intensity information fed into multi-channel CNN methods [69].

Certainly, successful segmentation requires detecting object boundaries, a process called edge detection. As the name indicates, this process involves many factors that affect edge shapes, including geometrical and optical properties, separation conditions, and noise; it is also used for feature detection and texture analysis [14]. Despite all this complexity, a CNN is able to diagnose brain tumors through MRI, and automatic segmentation simplifies the task [64]. Like other medical image analysis techniques, segmentation is a staged process and is either organ segmentation or lesion segmentation. The role of organ segmentation is to analyze quantities such as volume and shape as clinical parameters, while lesion segmentation combines object detection with organ and substructure segmentation and applies them in DL algorithms [50].

On the surface, segmentation resembles quantitative assessment of medically meaningful regions; indeed, in some applications segmentation depends on quantitative assessment to be carried out within a short period of time [55]. In surgical planning, segmentation is applied to 2D image slices to determine accurate lesion boundaries in preparation for the operation [53]. Medical image segmentation is either automatic or semi-automatic. Both work on extracting the ROI, but for different applications, such as coronary angiograms, surgical planning, surgery simulation, tumor segmentation, and brain segmentation [70].

Segmentation separates and bounds different components of body organs, automatically or semi-automatically, into different tissue classes, pathologies, organs, and other biological criteria, according to the organ in question [69]. In short, the segmentation process aims to solve problems appearing in regions of body organs such as the brain, skin, and so on; for this purpose, medical imaging uses MRI and CT to select optimal weights [71]. Another important process for medical imaging applications is edge detection, used in image segmentation, usually based on homogeneity, through two steps: classification and detection of all pixels by a CNN using filters [14].

Hamad et al. (2018) focused on pathology image segmentation as a prerequisite of disease diagnosis, determining features such as shape, size, and morphological appearance for cancer of the nuclei, glands, and lymphocytes [63]. Dey et al. (2018) shed light on subdivisions of segmentation, naming Otsu's method, which computes an optimal global threshold, and the gradient vector flow active contour method, which analyzes dynamic or 3D image data [39].

Image quality has an impact on the segmentation process, since it affects feature extraction, model matching, and object recognition [72]. Rupal et al. (2018) delineated three soft tissues in the normal brain using MRI: gray matter (GM), white matter, and CSF. They showed that both algorithms and the GPU play a big role in speeding up this process, with many methods devised to enhance segmentation [73].

Beyond the factors that impact the segmentation process, other factors enhance segmentation, such as the body organ, the image modality, and the algorithm. On the other hand, segmentation faces challenges that hold the process back, such as large variability in sensing modality and artifacts that vary from organ to organ. Ngo et al. (2017) classified segmentation approaches into active contour models, machine learning models, and hybrid active contour and machine learning models [74]. Table 3 illustrates some important reviews of the segmentation process.

TABLE 3. Segmentation methods for different body organs


3.4. Medical Image Localization

Every method has a different contour strategy for selecting the location of target shapes in images. Wei et al. (2019) studied tumor localization in 3D images of three patients based on the contour, the location of the tumor centroid in 3D space, and the tumor angle, in order to find the tumor localization error at different angles. The results, across tumor motions and projection angles, showed that the CNN-based method was more robust and accurate in real-time tumor localization [48].

Lan et al. (2019) found that combining multi-region proposals, such as selective search, edge boxes, and objectness, improves object localization, which is essential given the non-rigid and amorphous characteristics of the targets [41].

Urban et al. (2018) examined the ADR aim of colonoscopy and the accuracy of colonoscopies with respect to ADR. Advancements in computer-assisted image analysis, especially DL models such as CNNs, help build agents that perform these tasks with improved performance. Every point of accuracy gained over manual work matters, and their results show that real-time polyp localization and detection exceed hand-crafted work [10].

Muthu et al. (2019) verified that appropriate hardware is beneficial for adequately localizing brain tumors, achieving high detection and classification accuracy using a CNN [18].

Localization is used at every step of applications in which radiology systems analyze images and prepare reports without any human intervention, especially in MRI and CT modalities using CNNs, such as CT images of the neck, lung, liver, pelvis, and legs [53].

Mitra et al. (2018) improved the localization process for the OD in color retinal fundus images by predicting the bounding box coordinates, which act like an ROI. Some methods recast finding the ROI frame as a single regression problem from image pixel values to ROI coordinates. A CNN can predict bounding boxes based on intersection over union, increasing the chance of recovery and strengthening detection and diagnosis accuracy [9].
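The intersection-over-union criterion mentioned above can be computed directly from two box coordinates; a minimal sketch with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by the area of the union."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs ground-truth optic disc boxes (pixel coordinates)
pred = (10, 10, 50, 50)   # area 1600
truth = (30, 30, 70, 70)  # area 1600, overlap 20 x 20 = 400
print(iou(pred, truth))   # 400 / (1600 + 1600 - 400) = 0.142857...
```

IoU is 1.0 for a perfect match and 0.0 for disjoint boxes, which is what makes it a natural training target and evaluation score for bounding box prediction.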

Mader et al. (2018) proposed localizing multiple landmarks in medical image analysis so that the framework can easily be transferred to new applications. It integrates various localizers, algorithms requiring little test-time computation and little training data, and interoperability. The advantage of this approach is detecting and localizing spatially correlated point landmarks [78].

The localization process usually precedes the detection process, and the two are often integrated, especially because misdetection depends on the localization process [59].

Zheng et al. (2017) divided the localization process into two steps: first selecting the abdomen area, then detecting and localizing the kidneys. In this scheme, the body consists of three parts: the region above the abdomen (head and thorax), the abdomen, and the legs. The diaphragm separates the thorax from the abdomen, and an optimal slice index maximizes the separation between the abdomen and the legs. In the second step, the kidneys are localized from axial images much as the abdomen is detected, using surrounding organs to determine their location, because the kidneys lie next to the liver and spleen; however, the positions of abdominal organs are not fixed [12].
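The idea of an "optimal slice index maximizing separation" between two body regions can be illustrated with a toy sketch. Assume (hypothetically) a 1D profile of some per-slice quantity, such as tissue area per axial slice; the split index that maximizes the contrast between the two sides approximates the region boundary. This is only an illustration of the principle, not the cited method:

```python
def best_split_index(profile):
    """Return the slice index that best separates two body regions,
    chosen to maximize the difference of the mean values on either side.
    `profile` is a hypothetical 1D sequence, e.g. tissue area per slice."""
    best_i, best_sep = 1, float("-inf")
    for i in range(1, len(profile)):
        upper = sum(profile[:i]) / i
        lower = sum(profile[i:]) / (len(profile) - i)
        sep = abs(upper - lower)
        if sep > best_sep:
            best_i, best_sep = i, sep
    return best_i
```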

Banerjee et al. (2019) designed a framework consisting of CNN methods implemented to enhance the performance of localization, detection, and annotation of surgical tools. The proposed method can learn most of the relevant features [11]. Table 4 illustrates some important reviews of the localization process.

TABLE 4: Localization methods for different body organs


3.5. Medical Image Registration

Image registration involves determining a spatial transformation or mapping that relates positions in one image to corresponding positions in one or more other images, transforming an image into the same digital form according to mapping points. A rigid transformation of the image coordinates involves only translation and rotation. A transformation that maps parallel lines onto parallel lines is affine; one that maps lines onto lines is projective; and one that maps lines onto curves is curved or elastic [72].
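The rigid case described above (rotation plus translation only) can be sketched for 2D point coordinates; the function below is an illustrative helper, not part of any cited method:

```python
import math

def rigid_transform(points, angle_rad, tx, ty):
    """Apply a 2D rigid transform (rotation by angle_rad, then translation
    by (tx, ty)) to a list of (x, y) points. Rigid registration allows
    only these two operations, so lengths and angles are preserved."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

Affine registration would generalize the 2x2 rotation matrix to an arbitrary invertible matrix, which is why it preserves parallel lines but not lengths.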

The purpose of developing medical image modalities is to obtain higher resolutions and multi-parametric tissue information at adequate accuracy and speed, which increases the importance of image registration. Nowadays, DL is commonly used to improve its accuracy and speed [6]. Registration takes two forms: mono-modal, within the same device, and multi-modal, across different devices. In general, it consists of four steps: feature detection, feature matching, transform model estimation, and image resampling and transformation [13]. Registration is a common image analysis task that traditionally works in an iterative framework; DL can substantially increase registration performance, especially by using deep regression networks to predict the transformation directly [50]. Ker et al. (2017) noted that medical image registration is also beneficial in neurosurgery and spinal surgery, for selecting the location of a mass or target landmark and for performing the operation systematically [53].
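The "transform model estimation" step can be sketched for the affine case: given matched feature points from the detection and matching steps, the transform parameters follow from least squares. This is a minimal illustration under the assumption of 2D point correspondences, not a cited implementation:

```python
import numpy as np

def estimate_affine(src, dst):
    """Estimate a 2D affine transform A (2x3) mapping src -> dst by least
    squares. src, dst: (N, 2) arrays of matched feature points; at least
    three non-collinear correspondences are needed."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Design matrix [x, y, 1]; solve X @ P = dst for the 3x2 parameters P.
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P.T  # rows: [a11, a12, tx] and [a21, a22, ty]
```

The resampling step would then apply the inverse of this transform to pull source intensities onto the reference grid.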

Transferring from source to destination with an appropriate method relies on selecting the modality for spatial alignment and on the fusion necessary to display the integrated data [72].

Marstal et al. created a collaborative, open-source platform for registration algorithms in medicine, the continuous registration challenge (CRC), which involves eight common datasets [79].

Ramamoorthy et al. (2019) showed that polycystic ovary syndrome, a disease in women caused by an imbalance of follicle stimulating hormone, can be monitored as cysts grow using a registration technique applied in the following steps. First, initial registration takes pre-processed ultrasound (US) images as input. Second, similarity measurement applies the correlation coefficient to the reference and source images. Third, image transformation monitors the growth of the cyst at the initial stage and at periodic checkups, using either mono-modal or multi-modal images. Fourth, final registration performs the alignment. Last, optimization refines the spatial information by changing the affine point optimizer radius at various appointments determined by gynecologists, together with the correlation coefficient similarity metric and the affine transformation [80].
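The correlation coefficient used as the similarity measure in the second step can be sketched for two images of equal shape; this is an illustrative implementation of the standard Pearson coefficient, not code from the cited study:

```python
import numpy as np

def correlation_coefficient(reference, source):
    """Pearson correlation coefficient between two equal-shape images,
    used as a registration similarity measure: 1.0 for identical images,
    values near 0 for unrelated ones."""
    a = np.asarray(reference, float).ravel()
    b = np.asarray(source, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

An optimizer would adjust the transform parameters to maximize this value between the reference image and the transformed source image.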

Table 5 illustrates some important reviews of the registration process.

TABLE 5: Registration methods for different body organs


4. DISCUSSION AND CONCLUSION

This study is a review of medical image modalities and their most significant types, focusing on medical image analysis and its components using DL. Medical image modalities clearly show how important these techniques and devices are for medical image processing tasks, especially medical image analysis. The study demonstrates the substantial role of the modalities used in medical image processing by covering the most common ones, such as MRI, SPECT, PET, OCT, CLE, MRS, CT, X-ray, WCE, BrC, PAP smear, HSI, and US, and shows how these modalities are imperative for extracting significant features from medical image values. Some significant diseases diagnosed using specific modalities are also reviewed, which motivates improving these tasks and implementing them automatically using different approaches.

Both medical image analysis and its components are properly introduced. The components, namely, medical image classification, medical image detection, medical image segmentation, medical image localization, and medical image registration, are enumerated and defined. For the sake of accurate results, the study reviewed research performed on each modality in various cases. Localization of anatomical structures is a prerequisite for many tasks in medical image analysis [81]. Medical image segmentation is defined in many ways; in simple words, it is the process of partitioning medical images into smaller parts [82].
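The partitioning idea can be illustrated with a minimal intensity-threshold sketch; this is a toy stand-in for illustration only, whereas the DL methods reviewed here learn such partitions from data:

```python
def threshold_segment(image, t):
    """Partition a 2D image into foreground (1) and background (0) by a
    fixed intensity threshold t; a minimal illustration of 'partitioning
    an image into smaller parts'."""
    return [[1 if px >= t else 0 for px in row] for row in image]
```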

Medical image detection is the process of localizing and detecting important targets inside medical images, including object detection, edge detection, and boundary detection [83]. Medical image classification is the process of distinguishing different cases according to their shared features and assigning classes to them; it plays an essential role in clinical treatment and teaching tasks [84]. There are more than 120 types of brain and central nervous system tumors, classified as less aggressive (benign, Grades I and II) or aggressive (malignant, Grades III and IV) [73]. Early diagnosis of a tumor plays a significant role in increasing treatment possibilities.

The main aim of this survey is to discuss medical image analysis and its components, namely, medical image classification, detection, segmentation, localization, and registration, based on DL methods. The CNN in particular is the dominant model for computer vision, with algorithms such as AlexNet, DenseNet, ResNet-18/34/50/152, VGGNet, GoogLeNet, Inception-V3, pre-trained CNN, hybrid CNN, VGG-16, Inception-V4, fine-tuned VGG-16, carotid AlexNet, 3D CNN, and Caffe CNN. The study compares methods that used many public and private datasets for the different medical image analysis components, with different accuracies, and tabulates the proposed methods and the advantages of each process. These approaches have been applied to various human body organs over time, which indicates that CNN-based algorithms are preferred and achieve optimal accuracies compared to other DL methods for medical imaging. Most of the studies rely on different medical image modalities and on public and private datasets of different types and sizes. The most accurate of these approaches was brain MRI using CNN, implying that the approaches applied to brain tumors were preferable. The study also considers the strong points, such as reducing the error rate and building a strong training dataset for the CNN (since it is a supervised learning method), as well as the weak points and how DL has improved medical image analysis.

REFERENCES

[1]. K. Suzuki. "Overview of deep learning in medical imaging". Radiological Physics and Technology, vol. 10, no. 3, pp. 257-273, 2017.

[2]. D. Ravi, C. Wong, F. Deligianni, M. Berthelot and J. Andreau-Perez. "Deep learning for health informatics". IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 4-21, 2017.

[3]. A. Maier, C. Syben, T. Lasser and C. Riess. "A gentle introduction to deep learning in medical image processing". Zeitschrift für Medizinische Physik, vol. 29, no. 2, pp. 86-101, 2019.

[4]. J. K. Han. "Terahertz medical imaging". In: "Convergence of Terahertz Sciences in Biomedical Systems". Springer, Netherlands, pp. 351-371, 2012.

[5]. J. O'Doherty, B. Rojas-Fisher and S. O'Doherty. "Real-life radioactive men: The advantages and disadvantages of radiation exposure". Superhero Science and Technology, vol. 1, no. 1, 2928, 2018.

[6]. A. S. Lundervold and A. Lundervold. "An overview of deep learning in medical imaging focusing on MRI". Zeitschrift für Medizinische Physik, vol. 29, no. 2, pp. 102-127, 2019.

[7]. A. Eklund, P. Dufort, D. Forsberg and S. M. LaConte. "Medical image processing on the GPU - past, present and future". Medical Image Analysis, vol. 17, no. 8, pp. 1073-1094, 2013.

[8]. D. Shen, G. Wu and H. Suk. "Deep learning in medical image analysis". Review in Advance, vol. 19, pp. 221-248, 2017.

[9]. A. Mitra, P. S. Banerjee, S. Roy, S. Roy and S. K. Setua. "The region of interest localization for glaucoma analysis from retinal fundus image using deep learning". Computer Methods and Programs in Biomedicine, vol. 165, pp. 25-35, 2018.

[10]. G. Urban, P. Tripathi, T. Alkayali, M. Mittal, F. Jalali, W. Karnes and P. Baldi. "Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy". Gastroenterology, vol. 155, no. 4, pp. 1069-1078.e8, 2018.

[11]. N. Banerjee, R. Sathish and D. Sheet. "Deep neural architecture for localization and tracking of surgical tools in cataract surgery". Computer Aided Intervention and Diagnostics in Clinical and Medical Images, vol. 31, pp. 31-38, 2019.

[12]. Y. Zheng, D. Liu, B. Georgescu, D. Xu and D. Comaniciu. "Deep Learning Based Automatic Segmentation of Pathological Kidney in CT: Local Versus Global Image Context". Springer, Cham, Switzerland, pp. 241-255, 2017.

[13]. M. Berahim, N. A. Samsudin and S. S. Nathan. "A review: Image analysis techniques to improve labeling accuracy of medical image classification". Advances in Intelligent Systems and Computing, vol. 700, pp. 1-11, 2018.

[14]. M. A. El-Sayed, Y. A. Estaitia and M. A. Khafagy. "Automated edge detection using convolutional neural network". International Journal of Advanced Computer Science and Applications, vol. 4, no. 10, 11, 2013.

[15]. M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P. M. Jodoin and H. Larochelle. "Brain tumor segmentation with deep neural networks". Medical Image Analysis, vol. 35, pp. 18-31, 2017.

[16]. S. Kumar, A. Negi, J. N. Singh and H. Verman. "A Deep Learning for Brain Tumor MRI Images Semantic Segmentation using FCN". In: 2018 4th International Conference on Computing Communication and Automation, Greater Noida, India, 14-15 Dec 2018.

[17]. M. K. Abd-Ellah, A. I. Awad, A. A. M. Khalafd and H. F. A. Hamed. "A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned". Magnetic Resonance Imaging, vol. 61, pp. 300-318, 2019.

[18]. R. P. M. Krishnammal and S. Selvakumar. "Convolutional Neural Network based Image Classification and Detection of Abnormalities in MRI Brain Images". In: 2019 International Conference on Communication and Signal Processing, Chennai, India, 4-6 April 2019.

[19]. H. H. Sultan, N. M. Salem and W. Al-Atabany. "Multi-Classification of Brain Tumor Images Using Deep Neural Network". IEEE Access, vol. 1, pp. 1-11, 2019.

[20]. K. Kushibar, S. Valverde, S. Gonzalez-Villa, J. Bernal, M. Cabezas, A. Oliver and X. Liado. "Automated sub-cortical brain structure segmentation combining spatial and deep convolutional features". Medical Image Analysis, vol. 48, pp. 177-186, 2018.

[21]. F. Guo, M. Ng, M. Goubran, S. E. Petersen, S. K. Piechnik, S. N. Bauerd and G. Wright. "Improving cardiac MRI convolutional neural network segmentation on small training datasets and dataset shift: A continuous kernel cut approach". Medical Image Analysis, vol. 61, 101636, 2020.

[22]. A. Bidani, M. S. Gouider and C. M. Traviesco-Gonzalez. "Dementia Detection and Classification from MRI Images Using Deep Neural Networks and Transfer Learning". In: International Work-Conference on Artificial Neural Networks IWANN 2019, vol. 11506, pp. 925-933, 2019.

[23]. H. N. G. Geok, M. Kerzel, J. Mehnert, A. May and S. Wermter. "Classification of MRI Migraine Medical Data Using 3D Convolutional Neural Network". ICANN 2018, vol. 11141, pp. 300-309, 2018.

[24]. Z. J. Islam and Y. Yanqing. "A novel deep learning based multi-class classification method for Alzheimer's disease detection using brain MRI data". In: International Conference, BI 2017, Beijing, China, November 16-18, 2017.

[25]. S. Goswami and L. K. P. Bhaiya. "Brain Tumor Detection Using Unsupervised Learning Based Neural Network". In: 2013 International Conference on Communication Systems and Network Technologies, Gwalior, India, 6-8 April 2013.

[26]. H. S. A. Pardakhti. "Age prediction based on brain MRI image: A survey". Journal of Medical Systems, vol. 43, no. 8, 279, 2019.

[27]. H. Mohsen, A. E. S. A. El-Dahshan, E. S. M. El-Horbaty and A. B. M. Salem. "Classification using deep learning neural networks for brain tumors". Future Computing and Informatics Journal, vol. 3, no. 1, pp. 68-71, 2018.

[28]. R. Alkadi, F. Taher, A. El-Baz and N. Werghi. "A deep learning-based approach for the detection and localization of prostate cancer in T2 magnetic resonance images". Journal of Digital Imaging, vol. 32, no. 12, pp. 793-807, 2018.

[29]. M. Talo, U. B. Baloglu, O. Yildirim and U. R. Acharya. "Application of deep transfer learning for automated brain abnormality classification using MR images". Cognitive Systems Research, vol. 54, pp. 176-188, 2018.

[30]. L. Aghaghazvini, P. Pirouzi, H. Sharifian, N. Yazdani, S. Kooraki, A. Ghadiri and M. Assadi. "3T magnetic resonance spectroscopy as a powerful diagnostic modality for assessment of thyroid nodules". SciELO Analytics, vol. 62, no. 5, pp. 2359-4292, 2018.

[31]. A. Elangovan and T. Jeyaseelan. "Medical Imaging Modalities: A Survey". In: 2016 International Conference on Emerging Trends in Engineering, Technology and Science, Pudukkottai, India, 24-26 Feb 2016.

[32]. Y. Song, S. Zheng, L. Li, X. Zhang, X. Zhang, Z. Huang, J. Chen, H. Zhao, Y. Jie, R. Wang, Y. Chong, J. Shen and Y. Yang. "Deep learning Enables Accurate Diagnosis of Novel Coronavirus (COVID-19) with CT images". medRxiv, 2020.

[33]. J. Men, Y. Huang, J. Solanki, X. Zeng, A. Alex, J. Jerwick, Z. Zhang, R. E. Tanzi, A. Li and C. Zhou. "Optical coherence tomography for brain imaging and developmental biology". IEEE Journal of Selected Topics in Quantum Electronics, vol. 22, no. 4, 6803213, 2016.

[34]. L. Ngo, G. Yih, S. Ji and J. H. Han. "A Study on Automated Segmentation of Retinal Layers in Optical Coherence Tomography Images". In: 2016 4th International Winter Conference on Brain-Computer Interface (BCI), Yongpyong, South Korea, 22-24 Feb 2016.

[35]. L. Moraru, C. D. Obreja, N. Dey and A. S. Ashour. "Dempster-Shafer Fusion for Effective Retinal Vessels Diameter Measurement". Elsevier, Amsterdam, Netherlands, pp. 149-160, 2018.

[36]. J. Sun, C. Wan, J. Cheng, F. Yu and J. Liu. "Retinal Image Quality Classification using Fine-Tuned CNN". In: OMIA 2017, FIFI 2017: Fetal, Infant and Ophthalmic Medical Image Analysis, vol. 10554. Springer, Berlin, Germany, pp. 126-133, 2017.

[37]. Y. Song, J. Z. Cheng, D. Ni, S. Chen, B. Lei and T. Wang. "Segmenting Overlapping Cervical Cell in Pap Smear Images". In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13-16 April 2016.

[38]. L. D. Nguyen, D. Lin, Z. Lin and J. Cao. "Deep CNNs for Microscopic Image Classification by Exploiting Transfer Learning and Feature Concatenation". In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27-30 May 2018.

[39]. S. Dey, D. N. Tibarewala, S. P. Maity and A. Barui. "Automated Detection of Early Oral Cancer Trends in Habitual Smokers". Elsevier, Amsterdam, Netherlands, pp. 83-107, 2018.

[40]. M. Izadyyazdanabadi, E. Belykh, M. Mooney, N. Martirosyan, J. Eschbacher, P. Nakaji, M. C. Preul and Y. Yang. "Convolutional neural networks: Ensemble modeling, fine-tuning and unsupervised semantic localization for neurosurgical CLE images". The Journal of Visual Communication and Image Representation, vol. 1, pp. 10-20, 2018.

[41]. L. Lan, C. Ye, C. Wang and S. Zhou. "Deep convolutional neural networks for WCE abnormality detection: CNN architecture, region proposal and transfer learning". IEEE Access, vol. 7, pp. 30017-30032, 2019.

[42]. A. H. Shahin, A. Kamal and M. A. Elattar. "Deep Ensemble Learning for Skin Lesion Classification from Dermoscopic Images". In: 2018 9th Cairo International Biomedical Engineering Conference, Cairo, Egypt, 20-22 Dec 2018.

[43]. S. V. Georgakopoulos, K. Kottari, K. Delibasis, V. P. Plagianakos and I. Maglogiannis. "Improving the performance of convolutional neural network for skin image classification using the response of image analysis filters". Neural Computing and Applications, vol. 31, no. 6, pp. 1805-1822, 2019.

[44]. G. Murtaza, L. Shuib, A. W. A. Wahab, G. Mujtaba, H. F. Nweke, M. A. Al-Garadi, F. Zulfiqar, G. Raza and N. A. Azmi. "Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges". Artificial Intelligence Review, vol. 53, pp. 1-66, 2019.

[45]. Y. Li, J. Wu and Q. S. Wu. "Classification of breast cancer histology images using multi-size and discriminative patches based on deep learning". IEEE Access, vol. 7, pp. 21400-21408, 2019.

[46]. A. M. Ahmad, G. Muhammad and J. F. Miller. "Breast Cancer Detection Using Cartesian Genetic Programming evolved Artificial Neural Networks". In: GECCO '12 Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, Philadelphia, Pennsylvania, USA, July 07-11, 2012.

[47]. P. R. Jeyaraj and E. R. S. Nadar. "Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm". Journal of Cancer Research and Clinical Oncology, vol. 145, no. 4, pp. 829-837, 2019.

[48]. R. Wei, F. Zhou, B. Liu, X. Bai, D. Fu, Y. Li and B. Liang. "Convolutional neural network (CNN) based three dimensional tumor localization using single X-ray projection". IEEE Access, vol. 7, pp. 37026-37038, 2019.

[49]. Z. Lai and H. F. Deng. "Medical image classification based on deep features extracted by deep model and statistic feature fusion with multilayer perceptron". Computational Intelligence and Neuroscience, vol. 2018, pp. 1-13, 2018.

[50]. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. W. V. Laak, B. Van Ginneken and C. I. Sánchez. "A survey on deep learning in medical image analysis". Medical Image Analysis, vol. 42, pp. 60-88, 2017.

[51]. K. C. L. Wong, T. Syeda-Mahmood and M. Moradi. "Building medical image classifiers with very limited data using segmentation networks". Medical Image Analysis, vol. 49, pp. 105-116, 2018.

[52]. N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. Kendall, M. Gotway and J. Liang. "Convolutional neural networks for medical image analysis: Full training or fine tuning?". IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299-1312, 2016.

[53]. J. Ker, L. Wang, J. Rao and T. Lim. "Deep learning applications in medical image analysis". IEEE Access, vol. 6, pp. 9375-9389, 2017.

[54]. K. U. Rani. "Analysis of heart diseases dataset using neural network approach". International Journal of Data Mining and Knowledge Management Process, vol. 1, no. 5, pp. 1-8, 2011.

[55]. R. Yamashita, M. Nishio, R. K. G. Do and K. Togashi. "Convolutional neural networks: An overview and application in radiology". Insights into Imaging, vol. 9, no. 4, pp. 611-629, 2018.

[56]. A. D. Ruvalcaba-Cardenas, T. Scolery and G. Day. "Object classification using deep learning on extremely low-resolution time-of-flight data". In: 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, Australia, 10-13 Dec 2018.

[57]. E. Ahn, A. Kumar, M. Fulham, D. Feng and J. Kim. "Convolutional sparse kernel network for unsupervised medical image analysis". Medical Image Analysis, vol. 56, pp. 140-151, 2019.

[58]. Z. Wu, S. Zhao, Y. Peng, X. He, X. Zhao, K. Huang, X. Wu, W. Fan, F. Li, M. Chen, J. Li, W. Huang, X. Chen and Y. Li. "Studies on different CNN algorithms for face skin disease classification based on clinical images". IEEE Access, vol. 7, pp. 66505-66511, 2019.

[59]. F. Altaf, S. M. S. Islam, N. Akhtar and N. K. Janjua. "Going deep in medical image analysis: Concepts, methods, challenges and future directions". IEEE Access, vol. 7, pp. 99540-99572, 2019.

[60]. K. M. Hosny, M. A. Kassem and M. M. Foaud. "Classification of skin lesions using transfer learning and augmentation with Alex-net". PLoS One, vol. 14, no. 5, p. e0217293, 2019.

[61]. J. Arevalo, F. A. Gonzalez, R. R. Pollan, J. L. Oliveira and M. A. G. Lopez. "Convolutional neural networks for mammography mass lesion classification". In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25-29 Aug 2015.

[62]. M. F. B. Othman, N. B. Abdullah and N. F. Kamal. "MRI Brain Classification Using Support Vector Machine". In: 2011 4th International Conference on Modeling, Simulation and Applied Optimization, Kuala Lumpur, Malaysia, 19-21 April 2011.

[63]. S. H. Shirazi, S. Naz, M. I. Razzak, A. I. Umar and A. Zaib. "Automated Pathology Image Analysis". Elsevier, Pakistan, pp. 13-29, 2018.

[64]. N. C. Ouseph and K. Shruti. "A reliable method for brain tumor detection using CNN technique". IOSR Journal of Electrical and Electronics Engineering, vol. 1, pp. 64-68, 2017.

[65]. A. Srivastava, S. Sengupta, S. J. Kang, K. Kant, M. Khan, S. A. Ali, S. R. Moore, B. C. Amadi, P. Kelly, S. Syed and D. E. Brown. "Deep Learning for Detecting Diseases in Gastrointestinal Biopsy Images". In: 2019 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 26 April 2019.

[66]. R. M. Summers. "Deep Learning and Computer-Aided Diagnosis for Medical Image Processing: A Personal Perspective". Springer International Publishing, Switzerland, pp. 3-10, 2017.

[67]. G. Carnerio, Y. Zheng, F. Xing and L. Yang. "Review of Deep Learning Methods in Mammography, Cardiovascular, and Microscopy Image Analysis". Springer, Switzerland, pp. 11-35, 2017.

[68]. V. V. Kumar, K. S. Krishna and S. Kusumavathi. "Genetic algorithm based feature selection brain tumour segmentation and classification". International Journal of Intelligent Engineering and Systems, vol. 12, no. 5, pp. 214-223, 2019.

[69]. D. Zikic, Y. Ioannou, M. Brown and A. Criminisi. "Segmentation of brain tumor tissues with convolutional neural networks". In: MICCAI Workshop on Multimodal Brain Tumor Segmentation Challenge (BRATS), Boston, Massachusetts, pp. 36-39, 2014.

[70]. A. Norouzi, M. S. M. Rahim, A. Altameem, T. Saba, A. E. Rad, A. Rehman and M. Uddin. "Medical image segmentation methods, algorithms, and applications". IETE Technical Review, vol. 31, no. 3, pp. 199-213, 2014.

[71]. N. Dey and A. S. Ashour. "Computing in Medical Image Analysis". Elsevier, Amsterdam, Netherlands, pp. 3-11, 2018.

[72]. N. Padmasini, R. Umamaheswari and M. Y. Sikkandar. "State-of-the-Art of Level-Set Methods in Segmentation and Registration of Spectral Domain Optical Coherence Tomographic Retinal Images". Elsevier, United Kingdom, pp. 163-181, 2018.

[73]. R. R. Agravat and M. S. Raval. "Deep Learning for Automated Brain Tumor Segmentation in MRI Images". Elsevier, United Kingdom, pp. 183-201, 2018.

[74]. T. A. Ngo and G. Carneiro. "Fully automated segmentation using distance regularised level set and deep-structured learning and inference". In: L. Lu, Y. Zheng, G. Carneiro and L. Yang (eds.). "Deep Learning and Convolutional Neural Networks for Medical Image Computing. Advances in Computer Vision and Pattern Recognition". Springer, Cham, pp. 197-224, 2017.

[75]. J. Bernal, K. Kushibar, M. Cabezas, S. Valverde, A. Oliver and X. Llado. "Quantitative analysis of patch-based fully convolutional neural networks for tissue segmentation on brain magnetic resonance imaging". IEEE Access, vol. 7, pp. 89986-90002, 2019.

[76]. R. Ceylan and H. Koyuncu. "ScPSO-Based Multithresholding Modalities for Suspicious Region Detection on Mammograms". Elsevier, Amsterdam, Netherlands, pp. 109-135, 2018.

[77]. N. Dhungel, G. Carneiro and A. P. Bradley. "Combining deep learning and structured prediction for segmenting masses in mammograms". In: L. Lu, Y. Zheng, G. Carneiro and L. Yang (eds.). "Deep Learning and Convolutional Neural Networks for Medical Image Computing. Advances in Computer Vision and Pattern Recognition". Springer, Cham, pp. 225-240, 2017.

[78]. A. O. Mader, C. Lorenz, M. Bergtholdt, J. von Berg, H. Schramm, J. Modersitzki and C. Meyer. "Detection and localization of spatially correlated point landmarks in medical images using an automatically learned conditional random field". Computer Vision and Image Understanding, vol. 176-177, pp. 45-53, 2018.

[79]. K. Marstal, F. Berendsen, N. Dekker, M. Staring and S. Klein. "The Continuous Registration Challenge: Evaluation-as-a-Service for Medical Image Registration Algorithms". In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8-11 April 2019.

[80]. S. Ramamoorthy, R. Vinodhini and R. Sivasubramaniam. "Monitoring the growth of Polycystic Ovary Syndrome using Mono-modal Image Registration Technique". In: International Conference on Data Science and Management of Data (CoDS-COMAD '19), Kolkata, India, January 03-05, 2019.

[81]. B. D. de Vos, J. M. Wolterink, P. A. de Jong, T. Leiner, M. A. Viergever and I. Isgum. "ConvNet-based localization of anatomical structures in 3D medical images". IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1470-1481, 2017.

[82]. U. Bagci. "Medical image computing CAVA: Computer Aided Visualization". University of Central Florida, Florida, 2017.

[83]. M. M. Murray, M. L. Rosenberg, A. J. Allen, M. Baranoski, R. Bernstein, J. Blair, C. H. Brown, E. Caine, S. Greenberg and V. M. Mays. "Violence and Mental Health: Opportunities for Prevention and Early Detection: Proceedings of a Workshop". The National Academies Press, Washington, DC, 2018.

[84]. S. S. Yadav and S. M. Jadhav. "Deep convolutional neural network based medical image classification for disease diagnosis". Journal of Big Data, vol. 6, 113, 2019.