NA-MIC Project Weeks

Implementing support for running inference engines in CustusX

Key Investigators

Project Description

Running trained deep learning networks with inference engines. The focus will be on implementing this in CustusX.

Objective

  1. Start by implementing C++ support for running one pre-trained model.

Approach and Plan

  1. Use the FAST library for inference engine support (see the sketch below).
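
FAST builds image processing pipelines by connecting process objects, and its neural network classes hide the choice of inference engine behind a common interface. The following is a rough sketch of what running one pre-trained segmentation model could look like; the class and method names follow FAST's published examples, but header paths, signatures, and the file names used here are placeholders that should be checked against the FAST version in use:

```cpp
#include <FAST/Importers/ImageFileImporter.hpp>
#include <FAST/Algorithms/NeuralNetwork/SegmentationNetwork.hpp>
#include <FAST/Visualization/SegmentationRenderer/SegmentationRenderer.hpp>
#include <FAST/Visualization/SimpleWindow.hpp>

using namespace fast;

int main() {
    // Import a single image from disk (placeholder filename).
    auto importer = ImageFileImporter::New();
    importer->setFilename("ultrasound_frame.mhd");

    // Load a pre-trained model (placeholder filename); FAST dispatches it
    // to one of its supported inference engines at runtime.
    auto network = SegmentationNetwork::New();
    network->load("segmentation_model.xml");
    network->setInputConnection(importer->getOutputPort());

    // Render the segmentation result in a window, which also triggers
    // execution of the pipeline.
    auto renderer = SegmentationRenderer::New();
    renderer->addInputConnection(network->getOutputPort());

    auto window = SimpleWindow::New();
    window->addRenderer(renderer);
    window->start();
    return 0;
}
```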

Progress and Next Steps

The task of implementing support for multiple inference engines proved too large for Project Week, so we ended up using the OpenVINO Toolkit directly. The OpenVINO inference engine allows us to run the trained networks on various Intel devices (CPU, GPU, FPGA, Movidius Stick, …), so this choice still provides us with a decent multi-platform solution.
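
As a minimal sketch of what loading and running a model through OpenVINO looks like in C++, using the current `ov::` runtime API (which postdates the code written at Project Week; the model path and device name are placeholders):

```cpp
#include <openvino/openvino.hpp>
#include <algorithm>
#include <memory>

int main() {
    ov::Core core;

    // Read a network in OpenVINO IR format (model.xml + model.bin).
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // Compile for a target device: "CPU", "GPU", etc.
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Fill the input tensor with preprocessed image data
    // (assumes a single f32 input; zeros here as a placeholder).
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.0f);

    // Run inference and read back the output (e.g. a segmentation map).
    request.infer();
    ov::Tensor output = request.get_output_tensor();
    const float* result = output.data<float>();
    (void)result;
    return 0;
}
```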

Illustrations

Processed example image

Background and References

CustusX is the toolbox we bring to the OR. It’s our tool for reusing results from earlier research projects.

We currently have several research projects in which deep learning networks are created: Examples from FAST

We want to be able to run these networks from inside CustusX to allow more seamless integration in the OR. Some projects require the deep learning networks to run in real time, and in these cases they will need to run on inference engines.

Video: Highlighting nerves and blood vessels on ultrasound images