
NA-MIC Project Weeks

Back to Projects List

Conversion of MONAI Label trained network into a MONAI bundle

Key Investigators

Project Description

MONAI Label has become a very popular tool in the NA-MIC community for developing new trained models and incorporating expert feedback into the training process.

Unfortunately, it is currently not straightforward to take the models trained using MONAI Label and apply them in batch mode.

MONAI supports bundles, which are designed for batch mode processing, but the process of converting MONAI Label trained networks into MONAI bundle representation is not well understood and (per MONAI experts) currently requires support from MONAI developers.

In this project we want to explore the process of converting MONAI Label-trained networks into the MONAI bundle format, and demonstrate how the resulting bundles can be applied to datasets in the NCI Imaging Data Commons (IDC).
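
A MONAI bundle packages the trained weights together with the pre- and post-processing configuration, so inference can be scripted rather than driven interactively. As a rough sketch of what that looks like from Python (the paths and config keys below, such as `dataset_dir` and `evaluator`, are placeholders rather than the actual bundle produced in this project):

```python
# Minimal sketch of scripted inference with a MONAI bundle.
# File names and config keys ("dataset_dir", "evaluator") are illustrative only.
from monai.bundle import ConfigParser

parser = ConfigParser()
parser.read_config("full_ct_segmentation_bundle/configs/inference.json")

# Point the bundle at a folder of CT volumes to process in batch mode,
# assuming the config exposes a "dataset_dir" entry.
parser["dataset_dir"] = "/data/ct_volumes"

# Many inference configs define an "evaluator" that wires together the
# dataloader, network, inferer, and post-processing transforms.
evaluator = parser.get_parsed_content("evaluator")
evaluator.run()
```

The same configuration can also be executed from the command line through the `python -m monai.bundle run` entry point, which is what makes applying a bundle to a whole collection straightforward.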

Objective

  1. Develop a complete example of transforming a MONAI Label-trained network into a MONAI bundle.
  2. Improve existing documentation.
  3. Demonstrate how a MONAI Label-trained network, converted to a bundle, can be applied to a representative sample of data from IDC.

Approach and Plan

  1. Use a MONAI Label-trained model for segmentation of vertebrae in CT as the use case.
  2. Identify MONAI documentation for transforming a MONAI Label-trained network into the MONAI bundle format.
  3. Develop a MONAI bundle from the network in step 1.
  4. Select an applicable, representative subset of data from IDC and apply the resulting bundle to produce segmentations, save the segmentations as DICOM SEG (see the sketch after this list), and confirm visualization with OHIF.
  5. Document the process and any refinements to the existing instructions.
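
One option for the DICOM SEG step in item 4 is dcmqi's itkimage2segimage converter; the sketch below simply wraps it from Python. It assumes dcmqi is installed and on the PATH, and every path, including the metadata JSON describing the segment labels and coded terminology, is a placeholder.

```python
# Hypothetical wrapper around dcmqi's itkimage2segimage for turning a NIfTI
# label map into a DICOM SEG object. All paths are placeholders.
import subprocess

def nifti_to_dicom_seg(label_nifti, source_dicom_dir, metadata_json, output_seg):
    """Convert a NIfTI segmentation into a DICOM SEG referencing the source CT series."""
    subprocess.run(
        [
            "itkimage2segimage",
            "--inputImageList", label_nifti,            # segmentation label map (NIfTI)
            "--inputDICOMDirectory", source_dicom_dir,  # original CT series the labels refer to
            "--inputMetadata", metadata_json,           # JSON with segment labels / coded terms
            "--outputDICOM", output_seg,                # resulting DICOM SEG file
        ],
        check=True,
    )

nifti_to_dicom_seg(
    "output/segmentation.nii.gz",
    "/data/idc/ct_series",
    "configs/seg_metadata.json",
    "output/segmentation.seg.dcm",
)
```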

Progress and Next steps

  1. We decided instead to convert the full CT segmentation MONAI Label app from Andres to a bundle, since it has a single stage compared to the three-stage vertebrae pipeline. This model was trained on TotalSegmentator data and uses a SegResNet architecture.
  2. We were able to convert the app to a bundle for inference! We had to modify a few transforms related to orientation. Now inference can be run with a single command instead of manually opening 3D Slicer and choosing the data to run on.
  3. We tested the bundle on a spleen dataset from the Medical Segmentation Decathlon (Figure 1 below).
  4. We compared this approach to the actual TotalSegmentator segmentation (Figure 2 below).
  5. Now we want to test on data from IDC (an NSCLC-Radiomics patient that has some ground-truth segmentation). Unfortunately, we are getting CUDA out-of-memory errors, since these datasets are much larger than the spleen dataset we previously tested on. We are working on changes to the inference.json file and on cropping the images before inference (Figures 3 and 4; see the sketch after this list).
  6. Future work involves solving these memory errors, saving the output as DICOM SEG, and a more thorough comparison between the MONAI bundle and TotalSegmentator outputs. Testing on larger collections and comparing against ground-truth segmentations is also planned.
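
Regarding the memory errors in item 5: one common mitigation, sketched below, is to shrink the sliding-window size and keep only the individual windows on the GPU while stitching the full-volume output on the CPU. This is an illustration of the idea, not the contents of the bundle's current inference.json; the window size is arbitrary.

```python
# Sketch of reducing GPU memory use during inference on large IDC CT volumes.
# The roi_size is illustrative; the bundle's inference.json would carry the
# equivalent settings in its inferer section.
import torch
from monai.inferers import SlidingWindowInferer

inferer = SlidingWindowInferer(
    roi_size=(96, 96, 96),            # smaller window than the training patch size
    sw_batch_size=1,                  # one window at a time on the GPU
    overlap=0.25,
    sw_device=torch.device("cuda"),   # run each window on the GPU
    device=torch.device("cpu"),       # stitch the full-volume result on the CPU
)

# usage: prediction = inferer(inputs=ct_tensor, network=model)
```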

GitHub repo: https://github.com/deepakri201/monai_full_ct_segmentation_bundle

Illustrations

Figure 1 - Full CT segmentation on a subject from the spleen Decathlon data

Figure 2 - Comparison, on spleen Decathlon data, of the MONAI full CT segmentation bundle we created (left) with the output TotalSegmentator produces (right): monai_bundle_vs_total_seg_spleen.webm

Figure 3 - Full CT segmentation on a subject from IDC: 02_03_23_full_ct_segmentation_success_idc

Figure 4 - Comparison, on IDC data, of the MONAI full CT segmentation bundle we created (left) with the output TotalSegmentator produces (right): monai_bundle_vs_total_seg_idc.webm

Discussion notes

  1. Identified the MONAI Label network location; discussed the project with Stephen and identified relevant expertise on the MONAI side; planning to have a coordination meeting with Roya.
  2. Identified from Andres an example of a MONAI Label app and the corresponding MONAI bundle.
  3. Identified another possible example of a MONAI Label app and the corresponding MONAI bundle.
  4. In-progress Colab notebook for the conversion of the spine localization task.
  5. Discussion with Jesse, Andres, Stephen, Steve: look into creating a bundle vs. using the MONAI Deploy App SDK. We started a public discussion of MONAI Label app to bundle conversion here.
  6. Yesterday during the MONAI Label AWS workshop, we tried out the localization_spine step. It seemed to work on the dataset, but we noticed that the model was both named differently and in a different app folder. We wanted to make sure that we could also perform localization_spine successfully either in Slicer or in a script. We started with the scripted version here, on a NIfTI file from VERSE, which was also used for training. We expected this to produce a somewhat reasonable segmentation of the spine, but it produced an empty segment. We tried the second-stage localization_vertebra model, and this produced a partial segmentation of one vertebra.
  7. We then explored setting up MONAI Label locally. We managed to install everything and start the server on Windows (Mac is a work in progress), and ran inference using the localization_spine model through Slicer on a NIfTI file from the VERSE training data. This again yielded empty segments.
  8. We found a post on the Slicer Discourse forum where others also had problems with the vertebrae pipeline. However, this does not completely address our problem, as that approach uses a network trained on TotalSegmentator data to segment organs plus vertebrae. Though the segmentations might be acceptable, this may not work for all use cases, since the ordering of the vertebrae is not preserved; that is better addressed by the three-stage vertebra pipeline (localization_spine, localization_vertebra, vertebra_segmentation).
  9. We will talk to Nazim tomorrow to see if he has encountered issues using the localization_spine step on data that it was trained on. In the meantime, we will try out the updated model provided by Andres for vertebra segmentation here to make sure we can at least get results with this in Slicer. Perhaps we can convert this to a bundle first? As a test, this model worked on a dataset from the TotalSegmentator training set, and also worked on a dataset from VERSE, with expected differences in segmentation accuracy because of resolution, etc.
  10. Cosmin and I met with Nazim to talk about our issues with the localization_spine step in Slicer producing empty labels. We tried running all three stages and got a runtime error about the tensor shape. We then tried the segmentation_spleen model on training data from Task09_Spleen, which should produce a proper spleen label. It did not; it produced a fragmented spleen. Is this a CPU vs. GPU problem? Cosmin will try to test on his Linux machine that has a GPU. Do we have the latest versions of the pretrained models? The spleen model is coming from here, which is the most recent one. Nazim suggested trying to install everything again. I will also try the segmentation_spleen model using a script.
  11. I posted on the Slicer Discourse about some issues with MONAI Label and the three-stage vertebra segmentation pipeline: https://discourse.slicer.org/t/using-monailabel-for-vertebrae-segmentation/27511

  12. We tried installing the latest preview release of Slicer to see if inference worked with localization_spine on the 2019 and 2020 VERSE datasets; it did not. We also tried the whole vertebrae pipeline and got the same tensor shape error: RuntimeError: Expected 4D or 5D (batch mode) tensor with possibly 0 batch size and other non-zero dimensions for input, but got: [1, 1, 0, 0, 0]. Umang: also, where is the temp file saved for the first localization_spine step?
  13. In the meantime we will try converting the full CT segmentation model (trained using TotalSegmentator data) to a bundle. If that works, we can go back to the vertebra pipeline. Steve also suggested that we do this instead of focusing on the vertebrae. If we later want to retrain the full CT model with higher-resolution data (changing the target_spacing is probably the main thing we need to do), we could think about that at a later stage.
  14. We posted two issues for the vertebrae segmentation, here and here. We got some responses; they suggested running inference on a dataset from the Decathlon instead. localization_spine ran successfully! Not the best result, but some of the spine is segmented, so the earlier failures are probably because of the resolution. The original VERSE dataset is fairly high resolution, but the target_spacing for localization_spine is (1.3, 1.3, 1.3). So we will try resampling VERSE to the target_spacing and then running inference (see the resampling sketch after this list). We tried running the full vertebra segmentation on the spleen dataset, and all three stages seem to work with no errors related to tensor shape.
  15. We created the bundle for full CT segmentation, and here is the first run on a spleen dataset. We will have to fix the transforms.
  16. Steve suggested we might need to do something like this: https://github.com/LymphNodeQuantification/Monailabel-LNQ/blob/main/apps/radiology-retrain-2022-12/lib/infers/segmentation.py. We need to save out the NIfTI file at each stage of the transforms to see where the orientation changes (see the debugging sketch after this list); check the Invertd and Orientationd transforms, etc.
  17. This post from yesterday is about creating a bundle for a SegResNet trained on TotalSegmentator data: https://github.com/Project-MONAI/MONAILabel/issues/1269
  18. I am able to get the inference to work for the above (image below)! We had to remove the Orientationd transform. We will test on more data and start looking into the vertebrae segmentation pipeline.
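
For the resampling mentioned in item 14, a minimal sketch using MONAI's dictionary transforms; the 1.3 mm spacing is the localization_spine target_spacing noted above, while the file names are placeholders.

```python
# Resample a high-resolution VERSE CT to the spacing localization_spine expects
# before running inference. File names are placeholders.
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, SaveImaged, Spacingd

resample = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Spacingd(keys="image", pixdim=(1.3, 1.3, 1.3), mode="bilinear"),
    SaveImaged(keys="image", output_dir="resampled", output_postfix="1p3mm", resample=False),
])

resample({"image": "verse/sub-verse004_ct.nii.gz"})
```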
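
For the orientation issue in item 16, one way to see where the orientation changes is to interleave SaveImaged between the pre-processing transforms and inspect the intermediate volumes in Slicer. The chain below is abbreviated and illustrative, not the bundle's actual pipeline.

```python
# Dump the image after each pre-processing step to find where the orientation flips.
# The transform chain is abbreviated; spacing and file names are illustrative.
from monai.transforms import (
    Compose, EnsureChannelFirstd, LoadImaged, Orientationd, SaveImaged, Spacingd,
)

debug_preprocess = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    SaveImaged(keys="image", output_dir="debug", output_postfix="loaded", resample=False),
    Orientationd(keys="image", axcodes="RAS"),
    SaveImaged(keys="image", output_dir="debug", output_postfix="oriented", resample=False),
    Spacingd(keys="image", pixdim=(1.5, 1.5, 1.5), mode="bilinear"),
    SaveImaged(keys="image", output_dir="debug", output_postfix="spaced", resample=False),
])

debug_preprocess({"image": "Task09_Spleen/imagesTr/spleen_10.nii.gz"})
```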

Background and References