Research
Airway Extraction from Chest CT using Deep Learning

While invasive procedures such as thoracoscopy can be used for tissue biopsy in the chest, they often increase the patient's burden and may result in poor clinical outcomes. As a less invasive alternative, tissue samples can be collected through bronchoscopy; however, this requires a detailed map of the airway to navigate toward the target lesion. Conventional pixel intensity-based airway segmentation methods suffer from high false-positive rates and often fail to detect small peripheral airways. Deep learning (DL)-based approaches have been proposed to overcome these limitations and have improved segmentation performance, but they still exhibit limited sensitivity for fine airways, and the incompleteness of ground-truth labels for supervised learning poses additional challenges. In this study, we propose an Encoder-Guided Attention U-Net that enhances the sensitivity of airway detection in chest CT images and can detect deeper and finer airway branches even under incomplete supervision. It achieved state-of-the-art (SOTA) performance in terms of Tree Detection Ratio (TDR) and Branch Detection Ratio (BDR) in the long-term validation phase of the ATM'22 Challenge. In collaboration with the Department of Respiratory Medicine at Pusan National University Yangsan Hospital, we are developing a DL-based image-guided method that provides easy and accurate guidance for accessing peripheral lung lesions, aiming to overcome the limitations of peripheral lung lesion tissue biopsy and enable early lung cancer tissue examination.
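
To illustrate the attention mechanism involved, below is a minimal sketch of an additive attention gate of the kind used in Attention U-Net variants, written in PyTorch. The module name, channel sizes, and patch shapes are illustrative assumptions; this is not the exact Encoder-Guided Attention U-Net described above.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net.

    Skip-connection features are re-weighted by a gating signal from a
    coarser decoder level, suppressing background voxels and keeping
    candidate airway voxels. Channel sizes are illustrative.
    """
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv3d(inter_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, skip, gate):
        # skip and gate are assumed to share spatial size here;
        # in a full U-Net the gate is usually upsampled first.
        attn = self.psi(self.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * attn  # attention-weighted skip features

# Toy usage on a small 3D patch: (batch, channels, D, H, W)
gate = AttentionGate(skip_ch=32, gate_ch=32, inter_ch=16)
skip = torch.randn(1, 32, 16, 32, 32)
g = torch.randn(1, 32, 16, 32, 32)
print(gate(skip, g).shape)  # torch.Size([1, 32, 16, 32, 32])
```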


Lung Cancer Segmentation in PET-CT using Deep Neural Networks

Lung cancer is one of the most prevalent and deadly cancers, and early diagnosis followed by timely treatment is critically important for improving patient outcomes. Because lesions are not always clearly visible in CT images, PET-CT scans are often acquired to provide a more comprehensive diagnosis. PET imaging highlights regions of high metabolic activity, which helps identify potential malignancies; however, determining whether these regions actually represent cancerous tissue requires expert interpretation, typically by experienced nuclear medicine physicians. This diagnostic process is time-consuming, labor-intensive, and subject to inter-observer variability. To address these challenges, deep learning (DL)-based computer-aided diagnosis tools are gaining attention for their ability to automatically detect and segment cancerous lesions in PET-CT images. In this study, we propose a U-Net-based DL algorithm that automatically extracts lung cancer regions from PET-CT scans. The proposed framework includes image registration to handle the multimodal data, ground-truth mask preprocessing, and PET-CT axial alignment, along with post-processing steps to enhance sensitivity.
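
The multimodal alignment step can be illustrated with SimpleITK. This is a minimal sketch assuming NIfTI inputs and scanner-reported alignment (an identity transform); the file names are placeholders, and the study's actual registration pipeline is more involved.

```python
import numpy as np
import SimpleITK as sitk

# Placeholder paths; any volumes readable by SimpleITK will do.
ct = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)
pet = sitk.ReadImage("pet.nii.gz", sitk.sitkFloat32)

# Resample PET onto the CT grid using the physical coordinates stored
# in both headers (identity transform = trust scanner alignment;
# replace with a registration result for real data).
pet_on_ct = sitk.Resample(
    pet,               # image to resample
    ct,                # reference grid (size, spacing, origin, direction)
    sitk.Transform(),  # identity transform
    sitk.sitkLinear,   # interpolation
    0.0,               # default value outside the PET field of view
)

# Stack the two modalities channel-wise for a U-Net style network.
ct_arr = sitk.GetArrayFromImage(ct)         # (slices, H, W)
pet_arr = sitk.GetArrayFromImage(pet_on_ct)
x = np.stack([ct_arr, pet_arr], axis=0)     # (2, slices, H, W)
print(x.shape)
```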


Pathological Image Compression Using Deep Learning Techniques

The demand for pathological examinations is increasing, but managing physical pathology slides is challenging because they can be lost or damaged. Large medical institutions are therefore transitioning to digital pathology using slide scanners. Scanned pathology images have ultra-high resolution, and whole-slide imaging requires large storage capacity, reaching gigabytes per slide, so compression techniques beyond JPEG2000 are needed to store many specimen images over the long term. Better compression reduces server storage costs and shortens the transmission and loading times of image data, thereby improving diagnostic efficiency. In this study, we aim to improve the compression ratio of pathology images while minimizing the loss of image quality by applying state-of-the-art deep learning techniques such as Instant-NGP and coordinate-based MLPs. Additionally, we intend to develop standardized image file formats using AI-based compression technology to aid in generating training data for image interpretation and disease prediction in the future. This project is supported by the Seegene Medical Foundation.
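
To show the idea behind coordinate-based compression, here is a minimal sketch that overfits an MLP with Fourier feature encodings to a single image patch (PyTorch assumed): the trained weights become the compressed representation, and decoding is just re-evaluating the network on a coordinate grid. The network sizes, frequency count, and random target patch are illustrative only.

```python
import torch
import torch.nn as nn

def fourier_features(coords, n_freqs=8):
    # coords: (N, 2) in [-1, 1]; sin/cos encodings help the MLP
    # represent the high-frequency texture of pathology tiles.
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    angles = coords[:, None, :] * freqs[None, :, None]     # (N, F, 2)
    return torch.cat([angles.sin(), angles.cos()], dim=1).flatten(1)

# f(x, y) -> RGB; input dim = 2 coords * 2 (sin, cos) * 8 frequencies.
mlp = nn.Sequential(
    nn.Linear(2 * 2 * 8, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),   # RGB in [0, 1]
)

# Toy target: a random 64x64 RGB patch standing in for a WSI tile.
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = torch.rand(H * W, 3)

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(200):
    pred = mlp(fourier_features(coords))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final reconstruction MSE: {loss.item():.4f}")
```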


Development of an Object Detection Algorithm for Vocal Cord Location Prediction

When imaging the inside of a patient's throat, it is often difficult to visually confirm the exact location of the vocal cords because of surrounding structures. To assist with diagnosis, we are developing an algorithm that can predict the location of the vocal cords even when they are not directly visible. We are collaborating with the Departments of Anesthesiology and Pain Medicine at Pusan National University Yangsan Hospital and Dongguk University Ilsan Hospital for this study.


Screening scabies from camera images using deep learning techniques

With the growing number of nursing facilities and long-term care hospitals driven by the aging population, contact-transmitted infectious diseases have become increasingly important. In particular, the incidence of scabies has been rising steadily, yet it is difficult to diagnose scabies accurately and quickly using conventional molecular biological techniques and microscopy-based methods. Therefore, in this study, we are collecting camera images of skin lesions from scabies patients, using them to train artificial neural networks, and developing an app that can assist healthcare professionals in diagnosing scabies through image analysis and deep learning techniques. This research is being conducted in collaboration with the Nursing Science Research Institute at Pusan National University.


Deep learning approaches for bone marrow edema detection and interpretation in dual-energy CT

Dual-energy computed tomography can be an excellent substitute for magnetic resonance imaging for identifying bone marrow edema; however, it has rarely been used in practice owing to its low contrast. To overcome this problem, we constructed a deep learning framework that screens for disease in axial bone images and localizes bone lesions. To cope with the scarcity of labeled samples, we developed a generative adversarial network (GAN) that goes beyond conventional augmentation based on geometric transformations by synthesizing new image appearances. We developed data augmentation strategies optimized for GANs to stably generate synthetic images, along with methods to train a classification model on both real and synthetic samples. In addition, we developed an explainable AI technique that leverages principal component analysis (PCA) to facilitate visual analysis of the network's results. (In collaboration with the Department of Radiology, Pusan National University Yangsan Hospital.)
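
As an illustration of PCA-based visual analysis of network activations, the sketch below projects per-pixel feature vectors from one convolutional layer onto their first principal component to obtain a spatial heatmap. This is a generic NumPy sketch under that assumption; the function name and the random activations standing in for a real layer are placeholders, not the exact method developed in the study.

```python
import numpy as np

def pca_activation_map(feature_maps):
    """Project per-pixel feature vectors onto their first principal
    component to obtain a single-channel heatmap.

    feature_maps: (C, H, W) activations from one convolutional layer.
    Returns an (H, W) map normalized to [0, 1].
    """
    c, h, w = feature_maps.shape
    x = feature_maps.reshape(c, -1).T        # (H*W, C) pixel vectors
    x = x - x.mean(axis=0, keepdims=True)    # center the features
    # First right-singular vector == first principal axis.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    heat = (x @ vt[0]).reshape(h, w)
    heat -= heat.min()
    return heat / (heat.max() + 1e-8)

# Toy usage with random activations standing in for a real layer.
maps = np.random.rand(64, 32, 32).astype(np.float32)
print(pca_activation_map(maps).shape)  # (32, 32)
```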


Prediction of bone mineral density in CT using deep learning with explainability

Bone mineral density (BMD) is a key feature in diagnosing bone diseases. Although computed tomography (CT) is a common imaging modality, it seldom provides BMD information in the clinic owing to technical difficulties, so dual-energy X-ray absorptiometry (DXA) is required to measure BMD at the expense of additional radiation exposure. In this study, we developed a deep learning framework that estimates BMD from an axial cut of the L1 vertebra on CT. Using explainable artificial intelligence techniques, we also found that the network focuses on a local area spanning the tissues around the vertebral foramen. This method is well suited as an auxiliary tool in clinical practice and as an automatic screener for identifying latent patients in CT databases. (In collaboration with the Department of Orthopedic Surgery, Busan Medical Center, Busan.) This study was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (Artificial Intelligence Convergence Research Center, Pusan National University).
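
A minimal sketch of the regression setup, assuming PyTorch: a small CNN maps one HU-normalized axial L1 slice to a scalar BMD value. The class name, layer sizes, and random data are illustrative assumptions, not the study's actual architecture.

```python
import torch
import torch.nn as nn

class BMDRegressor(nn.Module):
    """Tiny CNN: one axial L1 slice in, one scalar BMD estimate out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                     # x: (B, 1, H, W) normalized HU
        return self.head(self.features(x).flatten(1))

model = BMDRegressor()
slice_batch = torch.randn(4, 1, 128, 128)     # fake axial L1 crops
bmd_pred = model(slice_batch)                 # (4, 1), e.g. in g/cm^2
loss = nn.functional.mse_loss(bmd_pred, torch.rand(4, 1))
print(bmd_pred.shape, loss.item())
```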


Development of a deep learning program for confirming the course of the ureters in non-contrast-enhanced lumbar CT images

During CT scans, contrast agents are used to increase the visibility of specific organs, blood vessels, or tissues by enhancing the contrast of structures or fluids within the body. However, contrast agents can cause adverse effects such as allergic reactions or stomach cramps, and in rare cases severe reactions that can be fatal. This study aims to develop an AI system that can identify the ureter without the need for a contrast agent. The study is being conducted in collaboration with the Department of Neurosurgery at Pusan National University Hospital.

