• mkim180@pusan.ac.kr
  • 49, Busandaehak-ro, Yangsan-si, South Korea
Research
Deep Learning-based Ultrasound Scan Motion Estimation and 3D Volume Reconstruction

Conventional ultrasound (US) systems with one-dimensional (1D) array transducers are typically used to capture cross-sectional images of regions of interest (RoIs). Despite advantages such as cost-efficiency, safety, and real-time imaging, US imaging requires skilled sonographers because of its limited two-dimensional (2D) field of view (FoV). To address this limitation, we propose a deep learning (DL)-based scan motion estimation framework composed of a ResNet-based encoder, a correlation operation, and a customized global-local attention module. The estimated relative motions are integrated to reconstruct the absolute scan trajectory. Each 2D US frame is then aligned in 3D space using the reconstructed scan path, thereby generating a 3D US volume. Furthermore, this approach enables the reconstruction and visualization of vascular structures by integrating Power Doppler (PD) or Photoacoustic (PA) ultrasound imaging. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
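As a minimal sketch of the trajectory-integration step described above: frame-to-frame motions (parameterized here as three translations and three Euler angles, an assumed convention, not necessarily the project's) can be chained into absolute poses. Function names are illustrative:

```python
import numpy as np

def pose_to_matrix(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 homogeneous transform from translations and Euler
    angles (radians), with rotations applied in z-y-x order."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def integrate_trajectory(relative_poses):
    """Chain estimated frame-to-frame transforms into absolute poses;
    the first frame defines the world origin."""
    absolute = [np.eye(4)]
    for p in relative_poses:
        absolute.append(absolute[-1] @ pose_to_matrix(*p))
    return absolute
```

Each 2D frame would then be placed into the 3D volume by mapping its pixel coordinates through the corresponding absolute pose.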


Automatic Organ Segmentation and Disease Diagnosis in Medical Abdominal Ultrasound using Deep Learning

This project was initiated as industry-academia collaborative research supported by the Ministry of Science and ICT and is conducted by the Practical Problem-solving Research Group at Pusan National University. Medical Innovation Co., Ltd., a company in Busan, proposed the research topic, and a team of undergraduate researchers is currently carrying out the work. The team is developing a deep learning-based algorithm that automatically segments organs and measures their size in medical abdominal ultrasound, aiming to assist in disease diagnosis. Building on these results, the goal is to develop an AI interpretation solution for ultrasound images, positioning it as the world's first AI-based medical image diagnostic support device.


Deep Learning (DL)-based Speed of Sound (SoS) Reconstruction

This study proposes a DL-based framework to estimate a SoS map from ultrasound (US) signals, namely raw radiofrequency (RF) data and B-mode images. In the encoding stage, the RF data and B-mode images are refined in parallel via ResNet-based networks. The decoder then reconstructs the SoS map by incorporating both RF and B-mode features. The training data, simulated using the k-Wave toolbox, reflect real-tissue conditions such as density, SoS, and anatomical variations. Experimental results demonstrate improved quantitative scores (MAE, PSNR, correlation), along with precisely reconstructed SoS maps from raw US signals. We expect the proposed method to contribute to clinical applications through B-mode quality enhancement and tissue-property inspection.
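The quantitative scores mentioned above can be computed as follows. This is a generic NumPy sketch of MAE, PSNR, and Pearson correlation between a predicted and a reference SoS map, not the project's actual evaluation code:

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between predicted and reference SoS maps (m/s)."""
    return np.mean(np.abs(pred - ref))

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the span of
    physically plausible SoS values (an assumed normalization choice)."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def pearson_corr(pred, ref):
    """Pearson correlation coefficient between the flattened maps."""
    return np.corrcoef(pred.ravel(), ref.ravel())[0, 1]
```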


Real-time Photoacoustic and Ultrasound System and Imaging using Deep Learning Techniques

Photoacoustic imaging provides optical contrast at large depths within the human body at ultrasound spatial resolution. By integrating real-time photoacoustic and ultrasound (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality, bringing the molecular sensitivity of optics to standard ultrasound imaging. Current approaches, however, cannot obtain high-quality maps of vascular structure or oxygen saturation in vessels due to technical limitations in data acquisition. Deep learning (DL), using data-driven modeling with minimal human design, has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS systems. This study aims to develop our PAUS imaging system and DL models to obtain higher-quality images and more accurate quantification. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).


Ultrasound Vascular Imaging using Deep Learning

Ultrasound Doppler imaging is commonly used to display vascular structure and quantify blood flow speed or blood volume. This process involves transmitting sequential pulses at a specific time interval to trace object motion, acquiring the corresponding spatiotemporal data, and separating blood signals from tissue clutter. Currently, filtering methods based on singular value decomposition (SVD) have been widely adopted to facilitate the isolation of independent components. However, the tissue and blood components overlap significantly in the eigenspace, especially when a short acquisition time is required to obtain one image frame. In this study, we explore a deep learning framework to replace the SVD filtering process, with the aim of generating an enhanced vascular image from fewer transmissions at a lower computational burden. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
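For reference, the classical SVD clutter filter that the proposed DL framework aims to replace can be sketched as follows. This is a generic implementation; the frame-stack layout and the tissue-component cutoff are illustrative assumptions:

```python
import numpy as np

def svd_clutter_filter(frames, n_tissue):
    """Baseline SVD clutter filter for slow-time ultrasound data.

    frames:   (n_frames, h, w) stack of beamformed frames.
    n_tissue: number of leading singular components attributed to
              high-energy, slowly varying tissue clutter.
    Returns the stack with tissue clutter suppressed.
    """
    n, h, w = frames.shape
    # Casorati matrix: pixels along rows, one column per frame.
    casorati = frames.reshape(n, h * w).T            # (h*w, n)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_blood = s.copy()
    s_blood[:n_tissue] = 0.0                         # discard tissue components
    filtered = U @ np.diag(s_blood) @ Vt
    return filtered.T.reshape(n, h, w)
```

With a short acquisition (small `n`), the singular spectrum is coarse and the tissue/blood cutoff becomes ambiguous, which is the failure mode motivating the DL replacement.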


Ultrasound Image Reconstruction using Deep Learning

The ultrafast ultrasound (US) framework enables high-frame-rate imaging by using plane or diverging waves to capture a wide field of view at once. To improve contrast and spatial resolution, multiple waves are transmitted at different angles and combined into one image. Deep learning (DL) is now widely used in medical imaging to improve image quality or to rapidly reconstruct an image from raw data under ill-posed acquisition conditions. We are developing an end-to-end DL model that produces an enhanced US B-mode image from a smaller volume of raw data, and we are examining its performance when supervised by a set of high-quality images. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT).
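A minimal sketch of the multi-angle compounding baseline described above, with a toy single-scatterer example showing how noise averages down across angles (the data, shapes, and noise level are hypothetical):

```python
import numpy as np

def compound(angle_images):
    """Compound images reconstructed from different plane-wave angles
    by averaging: the coherent target signal is preserved while
    independent noise averages down by roughly sqrt(n_angles)."""
    return np.mean(np.stack(angle_images, axis=0), axis=0)

# Toy demonstration: one bright scatterer plus independent noise in
# each single-angle image (purely synthetic data).
rng = np.random.default_rng(0)
target = np.zeros((32, 32))
target[16, 16] = 1.0
singles = [target + 0.3 * rng.standard_normal((32, 32)) for _ in range(16)]
compounded = compound(singles)
```

The DL model in this project targets the same quality from far fewer angle transmissions than such averaging requires.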


3D Ultrasound Panorama Imaging using Neural Radiance Field

Ultrasound 3D imaging can be achieved with a mechanically oscillating array, but its field of view is narrow. An alternative is panoramic imaging, which forms a wide 3D view by sequentially stacking 2D sections from freehand sweeps with a standard transducer. However, few studies have considered the poor spatial resolution along the elevational direction. Although each transmission beam is focused by the probe lens, it spreads over a wide region in the near and far fields, so each 2D image is effectively a projection through the thick beam. In this study, we aim to develop a deep learning method that reconstructs a 3D volume from these projections and enhances the elevational resolution. Our contribution is applying NeRF to 3D panoramic imaging to optimally combine the projection images.
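The thick-beam projection idea can be illustrated with a 1D toy along the elevational axis: each frame is assumed to be a slab average of neighboring slices, and a profile is recovered by gradient descent so that its re-projections match the observations. In the actual project, a NeRF model plays this role with a learned continuous representation; the slab model and all shapes below are simplifying assumptions:

```python
import numpy as np

def projection_matrix(n_slices, slab):
    """Toy forward model: each acquired 2D frame averages `slab`
    neighboring elevational slices, standing in for the thick
    elevation beam (a deliberate simplification)."""
    n_frames = n_slices - slab + 1
    A = np.zeros((n_frames, n_slices))
    for i in range(n_frames):
        A[i, i:i + slab] = 1.0 / slab
    return A

def fit_volume(frames, n_slices, slab, steps=5000, lr=0.5):
    """Recover an elevational profile by gradient descent on the
    re-projection error 0.5 * ||A x - frames||^2."""
    A = projection_matrix(n_slices, slab)
    x = np.zeros(n_slices)
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - frames))
    return x
```

Replacing the explicit slab matrix with volume rendering through a learned radiance field, optimized against the freehand sweep frames, gives the NeRF formulation used in the project.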


