Kaapana is a Kubernetes-based open source toolkit for platform provisioning in the field of medical data analysis. Kaapana leverages a number of open source tools that are relevant for the NA-MIC community (specifically, OHIF Viewer, MITK, nnU-Net segmentation tools) and relies on DICOM for managing images, image-derived data, and metadata.
In this project, current, prospective, and aspiring users of Kaapana will have the opportunity to work with the developers of the platform to get help with deploying and using the platform, and to discuss potential problems or directions for future development and collaboration.
Improved Slicer integration: we already have a Slicer app added to Kaapana, following the example of MITK (see https://github.com/fedorov/kaapana/tree/0.1.2-november-slicer). However, communication into and out of the app is quite clunky. Specifically, we have not figured out how to select cases from the dashboard and open those directly in Slicer. Also, we would like to have a workflow that writes DICOM segmentations etc. back into the DICOM server. Related to Integration of Desktop Apps.
collect-metadata DAG: 1) read cohort_identifiers (this is the conf object that is accessed from LocalWorkflowCleanerOperator); 2) write the manifest to MinIO, as in the dag_collect_metadata.py workflow. It will probably be better to combine manifest export with launching Slicer with the cohort opened.
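As a concrete sketch, the cohort manifest written to MinIO might look like the following. Note that the schema, field names, and endpoint URL below are illustrative assumptions on my part, not the actual Kaapana conf format:

```python
import json

def build_cohort_manifest(cohort_identifiers, dicomweb_endpoint):
    """Serialize a list of StudyInstanceUIDs into a JSON manifest that a
    downstream step (e.g., launching Slicer with the cohort) could consume.

    Hypothetical schema; Kaapana's actual conf object may differ.
    """
    manifest = {
        "dicomweb_endpoint": dicomweb_endpoint,
        "studies": [{"StudyInstanceUID": uid} for uid in cohort_identifiers],
    }
    return json.dumps(manifest, indent=2)

# Placeholder UIDs and endpoint, for illustration only
print(build_cohort_manifest(
    ["1.2.840.113654.2.55.100", "1.2.840.113654.2.55.101"],
    "https://kaapana.example.org/dicomweb",
))
```

A Slicer module could then fetch this manifest from MinIO and issue DICOMweb retrieves for each listed study.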
Integration with GCP Healthcare DICOM stores: right now we use dcm4chee as the DICOM server. This is problematic when deploying Kaapana on the cloud, since 1) it is a huge waste of resources: we already have our data in storage buckets, yet we need to replicate those files on an attached disk (and attached storage is very expensive), then import them into dcm4chee (which is very slow, and does not work for all types of DICOM objects: SRs are rejected); 2) I am not sure dcm4chee is scalable. We can very easily set up a DICOM store under GCP Healthcare, which is cheaper, faster, highly scalable, and can be accessed using the standard DICOMweb interface with authentication. It would be extremely helpful to be able to use that GCP DICOM store in place of dcm4chee. Related to Connecting/Using Kaapana to Google Cloud/Google Health/Google FHIR.
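For reference, a Healthcare API DICOM store is addressed by a fixed resource path, so the DICOMweb base URL can be composed directly. A minimal sketch (project, dataset, and store names are placeholders; requests against the resulting URL are authenticated with an OAuth2 bearer token, e.g. from `gcloud auth print-access-token`):

```python
def healthcare_dicomweb_url(project, location, dataset, dicom_store):
    """Compose the DICOMweb base URL for a GCP Healthcare API DICOM store
    (v1 REST surface)."""
    return (
        "https://healthcare.googleapis.com/v1/"
        f"projects/{project}/locations/{location}/"
        f"datasets/{dataset}/dicomStores/{dicom_store}/dicomWeb"
    )

# Placeholder resource names, for illustration only
base = healthcare_dicomweb_url("my-project", "us-central1", "my-dataset", "my-store")

# QIDO-RS study search against the store, e.g. studies containing SRs
query = f"{base}/studies?ModalitiesInStudy=SR"
```

If Kaapana's services talked to the DICOM server only through DICOMweb, pointing them at such a URL (plus an auth header) instead of dcm4chee would be the natural integration point.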
Integration with IDC: all of the IDC data is available from public GCP buckets, and egress is free. All you need is the Google Cloud SDK (https://cloud.google.com/sdk) installed; to do searching, one also needs a GCP project and credentials. Maybe we can discuss this. Related to Data and model exchange across different sources.
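Searching IDC is typically done against its public BigQuery metadata tables. A sketch of composing such a query; the `idc_current.dicom_all` table and `gcs_url` column reflect my understanding of the IDC release layout and should be checked against the IDC documentation:

```python
def idc_series_query(collection_id, modality):
    """Compose a BigQuery SQL query listing series (and their GCS URLs)
    for one IDC collection and modality."""
    return (
        "SELECT DISTINCT SeriesInstanceUID, gcs_url\n"
        "FROM `bigquery-public-data.idc_current.dicom_all`\n"
        f"WHERE collection_id = '{collection_id}' AND Modality = '{modality}'"
    )

# Example: MR series from a prostate collection (collection id illustrative)
sql = idc_series_query("tcga_prad", "MR")
```

The query can be run with `bq query` or the google-cloud-bigquery client under the user's GCP project; the returned `gcs_url` values can then be fetched with `gsutil cp` at no egress cost.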
Integration of new analysis tools into Kaapana: we have been developing use cases that utilize publicly available AI tools, starting from DICOM images and producing DICOM output; see some here: https://app.modelhub.ai/. It would be good to go over the process of adding one of those to Kaapana as an experiment, so I can understand the process. We could also use the prostate cancer segmentation model from the MONAI model zoo that we are going to investigate in this project: https://github.com/NA-MIC/ProjectWeek/pull/486/files#diff-1b4e320dd5db1df87192959dee521ff75d94129c1b97ede523d6b740271191b7R3. Related to Data and model exchange across different sources.
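Most of these tools ship as containers that read DICOM from an input directory and write results to an output directory, so wrapping one as a processing step largely reduces to assembling an invocation like the following (the image name and mount points are illustrative, not a specific modelhub or MONAI container):

```python
def model_container_cmd(image, input_dir, output_dir):
    """Assemble a `docker run` command that mounts DICOM input read-only
    and collects the model's DICOM output from a second mount."""
    return [
        "docker", "run", "--rm",
        "-v", f"{input_dir}:/input:ro",
        "-v", f"{output_dir}:/output",
        image,
    ]

# Placeholder image and paths, for illustration only
cmd = model_container_cmd("modelhub/example-model", "/data/dicom-in", "/data/dicom-out")
# cmd could be handed to subprocess.run(cmd, check=True) from an operator
```

In a Kaapana DAG, the same pattern would presumably be expressed as a container-running operator with the data directory mounts managed by the platform.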
Running Kaapana on Google Kubernetes Engine: while using GCP, we have been following an extremely naive and inefficient approach to deploying Kaapana. We allocate a fixed Linux VM and install the platform as if we were on an on-prem server. As I understand it, to fully leverage the power of k8s, it would make a lot more sense to use Google Kubernetes Engine. My knowledge of k8s and microk8s is very close to zero, so maybe this is highly trivial. Maybe we could experiment with this together. We can even set up a shared GCP project where I can add you, so you can experiment directly. Related to Connecting/Using Kaapana to Google Cloud/Google Health/Google FHIR.
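For the GKE side of this experiment, cluster creation itself is a single gcloud CLI invocation; a sketch of assembling it (cluster name, zone, and node count are placeholders, and any Kaapana-specific node sizing would still need to be worked out):

```python
def gke_create_cluster_cmd(name, zone, num_nodes=3):
    """Assemble a `gcloud container clusters create` invocation for a
    basic GKE cluster; a real deployment would add machine-type and
    disk-size flags appropriate for Kaapana's services."""
    return [
        "gcloud", "container", "clusters", "create", name,
        "--zone", zone,
        "--num-nodes", str(num_nodes),
    ]

# Placeholder cluster name and zone, for illustration only
cmd = gke_create_cluster_cmd("kaapana-test", "us-central1-a")
```

The open question is less the cluster creation than whether Kaapana's microk8s-oriented deployment scripts can target a managed cluster like this unchanged.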
Maintenance of a Kaapana instance: discuss the process of checking for security vulnerabilities, notifying the developers of identified vulnerabilities, communicating the need to update to the users, and looking into whether the scanning features available in GCP could be helpful.
develop instance on a Linux laptop that was then used for development.