
NA-MIC Project Weeks

Back to Projects List

SlicerModalityConverter Extension - addition of new models and use case examples

Key Investigators

Project Description

SlicerModalityConverter is a 3D Slicer extension designed for medical image-to-image (I2I) translation.

The ModalityConverter module provides a user-friendly interface for integrating multiple AI models trained for I2I translation (currently MRI-to-CT). It also supports GPU acceleration for faster inference and is designed to let users easily integrate custom models.

More about the module here.
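As a rough sketch (not the documented API), the typical starting point from the Slicer Python console could look like the snippet below; the file path is a placeholder, and the GPU check assumes the translation models are PyTorch-based, which is not confirmed here:

```python
import slicer

# Load the MRI volume to be translated (path is a placeholder).
inputVolume = slicer.util.loadVolume("/path/to/t1w_brain_mri.nii.gz")

# Optional: check whether a CUDA-capable GPU is visible, assuming the
# translation models run on PyTorch (an assumption, not confirmed here).
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this Slicer environment.")

# Open the ModalityConverter module; model selection and inference are
# then driven from the module's user interface.
slicer.util.selectModule("ModalityConverter")
```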

Objective

  1. Integration of new translation models for T1w-to-T2w MRI translation.
  2. Creating use case examples with video tutorials.

Approach and Plan

  1. Integrate two new pre-trained models for T1w-to-T2w translation (presented in this study and released in this repository), following the guidelines reported in the module documentation for integrating custom models (a hypothetical sketch of a model entry is shown after this list).

  2. Create video tutorials demonstrating common uses of the existing models. For example, show how an MRI-to-synthetic-CT translation model can be used to extract the skull from a T1w brain MRI.
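For the first item, the kind of information a custom model entry typically needs might look like the sketch below; every key, value, and file name here is a hypothetical illustration, and the authoritative format is the one described in the module documentation for custom models:

```python
# Hypothetical description of a pre-trained T1w-to-T2w model entry.
# Keys and file names are illustrative only; follow the ModalityConverter
# documentation for the actual integration format.
custom_model = {
    "name": "T1w-to-T2w (brain)",           # label shown in the module UI
    "input_modality": "T1w MRI",
    "output_modality": "T2w MRI",
    "weights": "t1w_to_t2w_generator.pth",  # released checkpoint file
    "device": "cuda",                       # fall back to "cpu" if no GPU
}
```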

Progress and Next Steps

Progress

  1. The two pre-trained T1w-to-T2w translation models were integrated and evaluated, but not accepted. Preliminary tests showed that the translation quality was not sufficient for practical use, and the training strategy adopted for these models limits their general usability and scalability in a broader clinical/research context. For these reasons, the models were not officially integrated into the module.

  2. A CBCT-to-CT translation model for the head and neck region was successfully integrated into the module.

  3. A short tutorial was added demonstrating how to extract the skull directly from a T1w MRI, using the MRHead sample dataset and the models available in the ModalityConverter module (see the sketch below).
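For reference, a minimal scripted sketch of the bone-extraction step of that tutorial is shown below. It assumes the synthetic CT produced by the module is available as a node named "syntheticCT", and the intensity thresholds are illustrative values rather than the ones used in the tutorial, which works through the Segment Editor GUI:

```python
import slicer
import SampleData

# Load the MRHead sample volume used in the tutorial.
t1Volume = SampleData.downloadSample("MRHead")

# Run the MRI-to-CT model in the ModalityConverter module on t1Volume first;
# here the resulting synthetic CT is assumed to be a node named "syntheticCT".
syntheticCT = slicer.util.getNode("syntheticCT")

# Create a segmentation referenced to the synthetic CT and add a "Skull" segment.
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(syntheticCT)
segmentationNode.GetSegmentation().AddEmptySegment("Skull")

# Set up the Segment Editor for scripted use.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(syntheticCT)

# Keep only voxels in a typical bone intensity range (values are illustrative).
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "300")
effect.setParameter("MaximumThreshold", "3000")
effect.self().onApply()
```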

Next steps

  1. Evaluate the possibility of integrating alternative, more robust T1w-to-T2w translation models with better generalization performance.

  2. Extend the tutorial section with additional use cases based on the currently integrated models.

Illustrations

Background and References
