
NA-MIC Project Weeks

Back to Projects List

Real-time ultrasound AI segmentation using Tensorflow and PyTorch models

Key Investigators

Project Description

The module “Segmentation U-Net”, from the SlicerAIGT extension, applies deep learning models to an ultrasound image stream to generate the predicted segmentation in real time. For example, it can be used to detect tumour tissue (highlighted in red) in breast images; a live volume reconstruction can then be applied to this prediction to visualize the complete region of interest (in this case, the area of the tumour). Another instance, using spine images, is shown in Figure 1.


Currently, this module supports models trained with the TensorFlow ecosystem. However, in recent years, PyTorch has become an increasingly popular machine learning framework, especially in medical imaging applications (an example of this is the MONAI framework, which is based on PyTorch).

We have developed a separate module to run inference with a PyTorch model for the segmentation of breast ultrasound images: Breast Lesion Segmentation (Figure 2). However, this module does not integrate parallel processing to enable real-time image segmentation.
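As context, one common way to decouple inference from the image acquisition and rendering loop is to run the model in a worker process fed through queues. The following is a minimal, generic sketch of that pattern, not the actual SlicerAIGT implementation; the model path, input normalization, and threshold are illustrative assumptions:

```python
import multiprocessing as mp

import numpy as np
import torch


def inference_worker(model_path, frame_queue, result_queue):
    """Worker process: load the PyTorch model once, then segment frames as they arrive."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Assumes the whole model object was saved with torch.save(model);
    # newer PyTorch releases may require weights_only=False here.
    model = torch.load(model_path, map_location=device)
    model.eval()
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel value stops the worker
            break
        x = torch.from_numpy(frame.astype(np.float32) / 255.0)  # normalize to [0, 1]
        x = x.unsqueeze(0).unsqueeze(0).to(device)               # shape (1, 1, H, W)
        with torch.no_grad():
            prob = torch.sigmoid(model(x))                       # binary segmentation
        result_queue.put((prob.squeeze().cpu().numpy() > 0.5).astype(np.uint8))


if __name__ == "__main__":
    frames, results = mp.Queue(), mp.Queue()
    worker = mp.Process(target=inference_worker, args=("model.pth", frames, results))
    worker.start()
    frames.put(np.zeros((256, 256), dtype=np.uint8))  # placeholder ultrasound frame
    print(results.get().shape)                        # predicted mask for that frame
    frames.put(None)                                  # tell the worker to stop
    worker.join()
```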

In this project, we aim to adapt the current “Segmentation U-Net” module to enable the use of models trained with both ecosystems, PyTorch and TensorFlow, for real-time ultrasound image segmentation.

In addition, we will discuss further improvements to this module; for instance, automatically visualizing the prediction overlaid on the input ultrasound image, so that the user does not have to switch to other modules to activate the visualization.

Objective

  1. Adapt the current “Segmentation U-Net” module to support models trained with PyTorch
  2. Automatically display the AI segmentation overlaid on the input ultrasound image (a possible Slicer snippet follows this list)
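
For objective 2, one possible way to overlay a predicted labelmap on the incoming ultrasound image in 3D Slicer's slice views is sketched below; the node names are hypothetical and the real module keeps its own references to these nodes:

```python
import slicer

# Node names are hypothetical placeholders for the live ultrasound image
# and the volume that receives the AI prediction.
inputVolume = slicer.util.getNode("Image_Image")
predictionVolume = slicer.util.getNode("Prediction")

# Show the prediction as a semi-transparent foreground layer over the ultrasound,
# so the user does not have to switch modules to set up the visualization.
slicer.util.setSliceViewerLayers(
    background=inputVolume,
    foreground=predictionVolume,
    foregroundOpacity=0.5,
    fit=True,
)
```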

Approach and Plan

  1. Integrate a TensorFlow/PyTorch model selector, so the module automatically uses the model given by the user
  2. Develop the image pre- and post-processing required by the PyTorch model (a rough sketch follows this list)
  3. Record an ultrasound image stream and run the inference in real time using a PyTorch model
  4. Apply the selected prediction transform to the output volume automatically
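
For step 2, the pre- and post-processing around the PyTorch model could look roughly like the sketch below; the expected input size, normalization, and threshold are assumptions that depend on how the model was trained:

```python
import numpy as np
import torch
import torch.nn.functional as F


def preprocess(frame, input_size=(256, 256)):
    """Convert a grayscale ultrasound frame (H, W) into a (1, 1, h, w) float tensor."""
    x = torch.from_numpy(frame.astype(np.float32) / 255.0)  # normalize to [0, 1]
    x = x.unsqueeze(0).unsqueeze(0)                          # add batch and channel dims
    return F.interpolate(x, size=input_size, mode="bilinear", align_corners=False)


def postprocess(logits, output_size, threshold=0.5):
    """Turn the raw model output back into a binary mask at the original frame size."""
    prob = torch.sigmoid(logits)
    prob = F.interpolate(prob, size=output_size, mode="bilinear", align_corners=False)
    return (prob.squeeze().cpu().numpy() > threshold).astype(np.uint8)
```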

Progress and Next Steps

  1. The module uses the file extension to determine the model framework (.h5 for TensorFlow, .pth or .pt for PyTorch) and executes the corresponding actions in each case (see the sketch after this list).

  2. We have recorded a stream from a breast ultrasound phantom containing an inclusion that simulates injured tissue. A PyTorch model previously trained on the BUSI dataset was used to run the inference for the real-time segmentation.

  3. When the box “Use separate process for prediction” is not checked, the selected prediction transform is applied automatically and the AI segmentation is displayed overlaid on the input ultrasound image (as shown above). When this box is checked, the input stream and the prediction have different frame rates, so it is more convenient to visualize the prediction in a separate view, which has to be made visible manually.
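
A rough sketch of the extension-based dispatch described in item 1; the function name and return convention are illustrative, not the module's actual API:

```python
import os


def load_segmentation_model(model_path):
    """Choose the inference backend from the model file extension."""
    extension = os.path.splitext(model_path)[1].lower()
    if extension == ".h5":
        import tensorflow as tf
        return "tensorflow", tf.keras.models.load_model(model_path)
    if extension in (".pt", ".pth"):
        import torch
        # Assumes the whole model object was saved; newer PyTorch may need weights_only=False.
        model = torch.load(model_path, map_location="cpu")
        model.eval()
        return "pytorch", model
    raise ValueError(f"Unsupported model file extension: {extension}")
```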

Next Steps

Illustrations

Previous work:


Figure 1. Real-time spine segmentation and volume reconstruction using the module “Segmentation U-Net”

Figure 2. Segmentation of breast ultrasound images using the module “Breast Lesion Segmentation”

Background and References

This project is based on the previous “Segmentation U-Net” and “Breast Lesion Segmentation” modules:

Integration of PyTorch and Slicer: