Can Machines Perform Medical Diagnosis?
By Dr. Maheshi Dissanayake
Artificial intelligence (AI) is intelligence demonstrated, without emotion, by machines, and deep learning is a type of AI that imitates the way the human brain processes acquired data inputs in a given situation and uses that knowledge for decision-making. It is now in use almost everywhere. It teaches machines with processing capacity, such as computers or mobile phones, to do what comes naturally to humans by assimilating the necessary inputs: learn by example.
Owing to the depth and breadth of the self-learning capacity associated with deep learning architectures, it has become one of today’s hottest research areas, finding applications not only in computer vision but also in marketing, industrial automation, big data, and the ‘Internet of Things’ (IoT). For instance, deep learning can be found in driverless cars and voice-controlled devices. It also has the potential to transform the entire landscape of healthcare.
In medical imaging, visual representations of parts of the human body, such as organs, bones, and tissues, are created for clinical purposes: monitoring, diagnosing, and treating diseases and injuries. Analysing these medical images for diagnosis using machines (computer programs) has taken flight in the past decade. One of the very first deep learning-based applications approved for clinical use was the retinopathy detection software IDx-DR by Digital Diagnostics (formerly known as IDx), in April 2018. Since then, deep learning has been deployed in many other healthcare applications, such as CT image reconstruction, MRI image analysis for stroke detection, and breast cancer tumour analysis.
How does deep learning actually work?
Most deep learning models use neural network architectures, which closely mimic the example-based learning process of human intelligence. These neural networks consist of a number of layers with different functions and calculations (convolution being a preferred operation) which help the model learn the basic structures, colour maps, and texture patterns in an input image. The term ‘deep’ is adopted when the number of layers is high and the number of learned parameters runs into the millions. Simply put, a deep learning-based algorithm works in the following way: input data is fed to the deep neural network through the input layer.
The filtered output of each lower layer is passed on to the layers above it until the output layer is reached. The inter-connection arrangement between layers, such as cascade, parallel, and skip connections, depends on the deep learning architecture chosen for the problem. The function of the output layer is simply to give the answer to the question the network is supposed to answer. To support this example-based learning, each input is pre-labelled according to the research problem, and after each training loop the model checks the accuracy of the learned outcome against the label supplied along with the input image.
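The training loop described above can be sketched in miniature. The example below is a pure-Python illustration rather than a real medical model: it trains a single artificial ‘neuron’ (the simplest possible ‘layer’) on pre-labelled toy data, and after each pass it compares its output with the label and nudges its weights to reduce the error. The dataset and all the numbers here are made up for demonstration.

```python
import math
import random

# Toy labelled dataset: each "image" is reduced to two feature values,
# and the pre-assigned label says whether the pattern of interest is
# present (1) or absent (0).
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
lr = 0.5  # learning rate: how strongly each error nudges the weights

def predict(x):
    # One "layer": a weighted sum of the inputs followed by a sigmoid
    # activation, giving an output between 0 and 1.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

# Training loop: the output is checked against the label supplied with
# each input, and the weights are adjusted in the direction that
# reduces the error.
for epoch in range(1000):
    for x, label in data:
        error = label - predict(x)
        for i in range(2):
            weights[i] += lr * error * x[i]
        bias += lr * error

print([round(predict(x)) for x, _ in data])  # → [0, 0, 1, 1]
```

After training, the neuron reproduces the labels it was shown; a real deep network applies the same idea across millions of parameters and many stacked layers.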
What type of specific tasks could a deep learning model perform?
There are several tasks a trained deep learning model could perform, especially in the healthcare domain. In a broader scope, detection, classification, and segmentation of medical images can all be performed by a deep learning model. Detection: A deep learning model can be trained to learn features for positive and negative detection through labelled data. The functions used by each layer to extract features from the input can be fine-tuned to look for the characteristic signs of a specific disease. The model is then trained, using these features, to place a bounding box around the detected abnormality. Classification: Deep learning-based classification can be achieved by following a learning architecture very similar to that of the detection task.
Here, after learning the image characteristics, called a ‘feature map’, that make an image part of a specific group, the network predicts whether the test data contains those specific features or patterns. If the pattern is present, the test data is categorised as diseased. Segmentation: Another popular task performed by deep learning models in medical image analysis is segmentation.
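The classification idea can be illustrated with a deliberately simplified sketch. In a real network the convolutional layers would learn a feature map from data; here a hand-written feature score (the fraction of bright pixels, a made-up stand-in for a disease pattern) and a fixed threshold play that role:

```python
def feature_score(image):
    # Stand-in "feature": the fraction of bright pixels in the image.
    # In a real model, learned convolutional filters would produce
    # a feature map instead of this single hand-crafted number.
    pixels = [p for row in image for p in row]
    return sum(1 for p in pixels if p > 0.7) / len(pixels)

def classify(image, threshold=0.25):
    # If the pattern of interest is strongly present,
    # the image is categorised as diseased.
    return "diseased" if feature_score(image) > threshold else "healthy"

# Toy 2x2 "scans" with intensity values between 0 and 1.
healthy_scan = [[0.1, 0.2], [0.3, 0.1]]
abnormal_scan = [[0.9, 0.8], [0.2, 0.9]]
print(classify(healthy_scan), classify(abnormal_scan))  # healthy diseased
```

The essential point survives the simplification: classification asks only whether the learned pattern is present, not where in the image it lies.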
In segmentation, the input is closely analysed both to look for specific features and to determine the location of those features. A segmentation model is trained on these features, which assists localisation as well as detection. Once deployed, it extracts features from the test input and groups them according to the problem of interest, paying close attention to the location of the features. Ultimately, this results in a segmented image. If such is the case, one should ask…
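Unlike classification, segmentation produces a decision for every pixel. The toy sketch below stands in for a trained segmentation model: a simple intensity threshold (which a real network would learn from labelled examples) marks each pixel of a made-up scan as part of the region of interest or not, yielding a segmentation mask that preserves location:

```python
# Toy "scan": a grid of intensity values, with a bright region
# in the top-right corner standing in for an abnormality.
scan = [
    [0.1, 0.1, 0.8, 0.9],
    [0.2, 0.1, 0.9, 0.8],
    [0.1, 0.2, 0.1, 0.1],
]

def segment(image, threshold=0.5):
    # Label every pixel: 1 where the feature of interest appears,
    # 0 elsewhere. A trained model would learn this per-pixel
    # decision; here a fixed threshold stands in for it.
    return [[1 if p > threshold else 0 for p in row] for row in image]

mask = segment(scan)
for row in mask:
    print(row)
# [0, 0, 1, 1]
# [0, 0, 1, 1]
# [0, 0, 0, 0]
```

The mask keeps the spatial layout of the input, which is exactly what distinguishes segmentation from plain classification: it answers both ‘is it there?’ and ‘where is it?’.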
Is deep learning the future of medical image analysis?
Deep learning has reached an indispensable level in today’s healthcare industry, especially in developed nations. Deep learning models are powerful and accurate enough to serve as a first line of screening through medical images, especially in radiology. They provide automation and cut down the total labour requirement and the expense of maintaining a healthcare system. Deep learning has the potential to make healthcare accessible and affordable to all, especially in low-resource environments. Although the potential exists, to what extent should we rely on a machine? Only the future can tell how far deep learning will make its way into clinical practice.
(The author is a Senior Lecturer at the Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Sri Lanka. One of her research interests is medical image analysis using artificial intelligence techniques. She has published her research team’s work on AI-based medical image analysis, such as brain tumour analysis and tuberculosis prediction, in reputed journals and conferences.)