• mkim180@pusan.ac.kr
  • 49, Busandaehak-ro, Yangsan-si, South Korea
Research
3D Ultrasound Panorama Imaging using Neural Radiance Field

Ultrasound 3D imaging can be achieved with a mechanically oscillating array, but its field of view is narrow. An alternative is panoramic imaging, which forms a wide 3D view by sequentially stacking 2D sections acquired from freehand sweeps with a standard transducer. However, few studies have addressed the poor spatial resolution along the elevational direction. Although each transmission beam is focused by the probe lens, the beam spreads over a wide region in the near field and far field, so each 2D image is effectively a projection through a thick beam. In this study, we aim to develop a deep learning method that reconstructs a 3D volume from these projections and enhances the elevational resolution. Our contribution is applying Neural Radiance Fields (NeRF) to 3D panoramic imaging to optimally combine the projection images.
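To illustrate the idea, here is a minimal PyTorch sketch under simplifying assumptions (the names ScatterField, render_pixels, and beam_sigma are illustrative, not our actual implementation): each observed 2D pixel is modeled as a Gaussian-weighted integral of a learned 3D scattering field across the elevational beam width, and the field is a small coordinate MLP with positional encoding optimized against the observed pixels.

```python
# Minimal sketch (not the actual implementation): fit a coordinate MLP so that
# Gaussian-weighted integrals across the elevational beam reproduce observed pixels.
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    # x: (..., 3) coordinates -> (..., 3 * 2 * n_freqs) Fourier features
    feats = []
    for k in range(n_freqs):
        feats += [torch.sin((2.0 ** k) * x), torch.cos((2.0 ** k) * x)]
    return torch.cat(feats, dim=-1)

class ScatterField(nn.Module):
    """Coordinate MLP mapping a 3D point to a non-negative scattering value."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        self.n_freqs = n_freqs
        self.net = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, xyz):
        return self.net(positional_encoding(xyz, self.n_freqs)).squeeze(-1)

def render_pixels(field, pixel_xyz, elev_dir, beam_sigma=1.5e-3, n_samples=16):
    """Predict pixel values as a Gaussian-weighted sum of field samples
    taken across the elevational (slice-thickness) direction."""
    offsets = torch.linspace(-2 * beam_sigma, 2 * beam_sigma, n_samples)   # (S,)
    weights = torch.exp(-0.5 * (offsets / beam_sigma) ** 2)                # (S,)
    weights = weights / weights.sum()
    pts = pixel_xyz[:, None, :] + offsets[None, :, None] * elev_dir[None, None, :]  # (N, S, 3)
    values = field(pts)                                                    # (N, S)
    return (values * weights[None, :]).sum(dim=-1)                         # (N,)

# Toy optimization loop over spatially tracked pixels from a freehand sweep.
field = ScatterField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
pixel_xyz = torch.rand(1024, 3) * 0.04           # placeholder pixel positions [m]
elev_dir = torch.tensor([0.0, 1.0, 0.0])         # placeholder elevational axis
observed = torch.rand(1024)                      # placeholder pixel intensities
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(render_pixels(field, pixel_xyz, elev_dir), observed)
    loss.backward()
    opt.step()
```

Once optimized, the field can be queried on a dense 3D grid to render a volume with finer elevational sampling than any single sweep provides.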


Automatic Organ Segmentation and Disease Diagnosis in Medical Abdominal Ultrasound using Deep Learning

This project was initiated as an industry-academia collaborative research project supported by the Ministry of Science and ICT and is conducted by the Practical Problem-solving Research Group at Pusan National University. Medical Innovation Co., Ltd., a company in Busan, proposed the research topic, and a team of undergraduate researchers is currently carrying out the work. The team is developing a deep learning-based algorithm that automatically segments organs and measures their size in abdominal ultrasound images, with the aim of assisting disease diagnosis. Building on this technology, the goal is to develop an AI interpretation solution for ultrasound images, intended to become the world's first AI-based diagnostic support device for medical ultrasound images.


Ultrasound Image Reconstruction using Deep Learning

The ultrafast ultrasound (US) framework enables high-frame-rate imaging by using plane or diverging waves to capture a wide field of view in a single transmission. To improve contrast and spatial resolution, multiple waves are transmitted at different angles and combined to form one image. Deep learning (DL) is now used in many medical fields to improve image quality or to quickly reconstruct an image from raw data acquired under ill-posed conditions. We are developing an end-to-end DL model that produces an enhanced US B-mode image from a smaller volume of raw data, and we are examining its performance when supervised by a set of high-quality images. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
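The sketch below illustrates this setup in PyTorch with an assumed, illustrative architecture (FewAngleNet and the angle counts are placeholders, not our actual model): a small CNN takes a few beamformed plane-wave images at different angles stacked as channels and is supervised by a B-mode image compounded from many more angles.

```python
# Illustrative sketch only: a small CNN maps a few-angle plane-wave stack to an
# enhanced B-mode image, supervised by a many-angle compounded reference.
import torch
import torch.nn as nn

class FewAngleNet(nn.Module):
    def __init__(self, n_angles=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_angles, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):          # x: (batch, n_angles, depth, lateral)
        return self.net(x)         # -> (batch, 1, depth, lateral)

model = FewAngleNet(n_angles=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: 3-angle input, high-quality compounded target.
few_angle = torch.rand(4, 3, 256, 128)
compounded_target = torch.rand(4, 1, 256, 128)

pred = model(few_angle)
loss = nn.functional.l1_loss(pred, compounded_target)
loss.backward()
opt.step()
```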


Real-time Photoacoustic and Ultrasound System and Imaging using Deep Learning Techniques

Photoacoustic imaging provides optical contrast at large depths within the human body at ultrasound spatial resolution. By integrating real-time photoacoustic and ultrasound (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality, bringing the molecular sensitivity of optics to standard ultrasound imaging. Current approaches, however, cannot obtain high-quality maps of vascular structure or of oxygen saturation in vessels due to technical limitations in data acquisition. Deep learning (DL), which uses data-driven modeling with minimal human design, has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS systems. This study aims to develop our own PAUS imaging system and DL models to obtain higher-quality images and more accurate quantification. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).


Ultrasound Vascular Imaging using Deep Learning

Ultrasound Doppler imaging is commonly used to display vascular structure and to quantify blood flow speed or blood volume. The process involves transmitting sequential pulses at a specific time interval to trace object motion, acquiring the corresponding spatiotemporal data, and separating blood signals from tissue clutter. Filtering methods based on singular value decomposition (SVD) have been widely adopted to facilitate the isolation of independent components. However, tissue and blood components overlap significantly in eigenspace, especially when a short acquisition time is required to obtain one image frame. In this study, we explore a deep learning framework to replace the SVD filtering process, with the aim of generating an enhanced vascular image from fewer transmissions and with a lower computational burden. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
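For reference, here is a minimal NumPy sketch of the conventional SVD clutter filter that the DL model is intended to replace (the rank thresholds are illustrative; in practice they are chosen adaptively): the spatiotemporal IQ ensemble is reshaped into a Casorati matrix, the largest singular components (slow-moving tissue) and the smallest (noise) are discarded, and a power Doppler image is formed from the remaining blood signal.

```python
# Conventional SVD clutter filter (illustrative fixed thresholds), the baseline
# that the DL model is meant to replace.
import numpy as np

def svd_power_doppler(iq, tissue_rank=8, noise_rank=4):
    """iq: complex IQ ensemble of shape (depth, lateral, n_frames)."""
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)                 # space x time matrix
    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
    keep = np.ones(len(s), dtype=bool)
    keep[:tissue_rank] = False                         # reject tissue clutter
    keep[len(s) - noise_rank:] = False                 # reject noise floor
    blood = (U[:, keep] * s[keep]) @ Vh[keep, :]       # reconstruct blood signal
    power = np.mean(np.abs(blood) ** 2, axis=1).reshape(nz, nx)
    return power

# Toy usage with random IQ data standing in for a Doppler ensemble.
iq = np.random.randn(128, 64, 48) + 1j * np.random.randn(128, 64, 48)
pd_image = svd_power_doppler(iq)
```

With short ensembles (small n_frames), the tissue and blood subspaces found by this decomposition overlap, which is the failure mode the learned filter targets.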


Freehand 3D Ultrasound Reconstruction without Positioning Sensor using Deep Learning

Clinical ultrasound is one of the most widely used imaging modalities. Capturing a large field of view requires a special transducer, such as a 2D array or a mechanically oscillating array, yet for certain clinical applications the field of view is still limited. An alternative is panoramic imaging, which sequentially aligns image sections acquired from freehand sweeps with a standard transducer. An external device can be used to track the transducer's motion so that a 3D volume can be reconstructed accurately from the 2D sections. However, such sensors can provide incorrect measurements in a clinical setting due to various optical or electrical interferences. In this study, we explore a deep learning (DL) framework that predicts scan trajectories from US data and images without the need for an external device. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
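The PyTorch sketch below shows the sensorless idea under simplifying assumptions (InterFrameMotionNet, the small-angle rotation, and all sizes are illustrative placeholders, not our actual model): a small CNN predicts the 6-DoF rigid motion between consecutive frames, and the per-pair predictions are chained into a sweep trajectory used to place each 2D section in 3D.

```python
# Illustrative sketch: predict 6-DoF inter-frame motion from consecutive B-mode
# frames and chain the relative poses into a sweep trajectory (no external sensor).
import torch
import torch.nn as nn

class InterFrameMotionNet(nn.Module):
    """Two consecutive frames stacked as channels -> [tx, ty, tz, rx, ry, rz]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)

    def forward(self, pair):                   # pair: (batch, 2, H, W)
        return self.head(self.features(pair).flatten(1))

def small_angle_rotation(r):
    """3x3 rotation matrix from (rx, ry, rz) under a small-angle approximation."""
    rx, ry, rz = r
    zero = torch.zeros(())
    return torch.eye(3) + torch.stack([
        torch.stack([zero, -rz, ry]),
        torch.stack([rz, zero, -rx]),
        torch.stack([-ry, rx, zero]),
    ])

@torch.no_grad()
def chain_trajectory(frames, model):
    """frames: (n_frames, H, W). Returns a list of 4x4 frame-to-world poses."""
    pose, poses = torch.eye(4), [torch.eye(4)]
    for i in range(len(frames) - 1):
        pair = torch.stack([frames[i], frames[i + 1]])[None]   # (1, 2, H, W)
        params = model(pair)[0]
        rel = torch.eye(4)
        rel[:3, :3] = small_angle_rotation(params[3:])
        rel[:3, 3] = params[:3]
        pose = pose @ rel                       # accumulate relative motion
        poses.append(pose)
    return poses

model = InterFrameMotionNet()
frames = torch.rand(10, 128, 128)              # placeholder freehand sweep
trajectory = chain_trajectory(frames, model)   # one pose per frame
```

In practice the network would be trained against poses recorded by an external tracker, which is then no longer needed at inference time.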


Contact
AMI lab