Automated Bone Segmentation and 3D Modelling using Tracked 2D Ultrasound Imaging
Key Investigators
- Nicholas Kawwas (Concordia University, Canada)
- Hassan Rivaz (Concordia University, Canada)
Project Description
This project aims to create sub-millimetre-accurate bone models using ultrasound (US) imaging, optical tracking (OptiTrack), and deep learning. Modelling can be split into two key steps: segmentation and reconstruction. Segmentation takes each ultrasound B-mode image and uses deep learning to automatically delineate the bone surface. These segmented bone surfaces, together with the OptiTrack coordinates associated with each US image, are used to create a 3D model of the imaged bone. A free-hand sweep with the US probe generates a 3D volume of the bone, enabling fast, precise bone modelling at low cost and without radiation.
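As a sketch of the tracking geometry involved (all names, spacings, and poses below are hypothetical, not the project's calibration), each segmented pixel (u, v) in a B-mode image can be mapped to 3D world coordinates by chaining an image-to-probe calibration transform with the tracker-reported probe pose:

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical image-to-probe calibration: pixel spacing (mm/px), no rotation or offset.
sx, sy = 0.2, 0.2                       # assumed pixel spacing in mm
T_image_to_probe = np.eye(4)
T_image_to_probe[0, 0] = sx             # scale u -> mm
T_image_to_probe[1, 1] = sy             # scale v -> mm

# Hypothetical probe pose from the tracker for one frame: identity rotation, 10 mm offset.
T_probe_to_world = pose_matrix(np.eye(3), np.array([10.0, 0.0, 0.0]))

def pixel_to_world(u, v):
    """Map a segmented pixel (u, v) in the B-mode image to 3D world coordinates (mm)."""
    p_image = np.array([u, v, 0.0, 1.0])         # homogeneous image coordinates
    p_world = T_probe_to_world @ T_image_to_probe @ p_image
    return p_world[:3]

print(pixel_to_world(100, 50))  # world-space point in mm for pixel (100, 50)
```

In a real setup the calibration transform is estimated once (e.g. with a tracked phantom) and the tracker pose changes every frame during the sweep.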
Objective
- Perform a free-hand sweep capturing 2D US images along with the associated probe coordinates
- Reconstruct the bone from the bone surface, image coordinates, and probe positioning
- Provide a fast, sub-millimetre-precise 3D bone model with no radiation and low cost
Approach and Plan
- Perform a free-hand sweep capturing 2D US images with Verasonics along with the associated coordinates from OptiTrack
- Automatically segment each US image, locating the bone with deep learning
- Reconstruct the bone from the segmented bone surface shape, image coordinates, and probe angles using neural fields
- Provide a fast, sub-millimetre-precise 3D bone model with no radiation and low cost
- Visualize each slice and the entire model in 3D Slicer
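To make the reconstruction step concrete, here is a minimal sketch of the simplest alternative to a learned model: transforming segmented surface pixels from each tracked frame into world coordinates and compounding them into a shared voxel grid (which could then be exported as a volume for 3D Slicer). All spacings and poses are hypothetical; the project itself targets neural fields rather than this nearest-voxel compounding.

```python
import numpy as np

VOXEL_MM = 0.5                                  # assumed isotropic voxel size in mm
GRID = np.zeros((64, 64, 64), dtype=np.uint8)   # occupancy grid covering the sweep region

def insert_frame(surface_px, T_frame_to_world, spacing_mm=(0.2, 0.2)):
    """Splat the segmented surface pixels of one tracked frame into the voxel grid.

    surface_px: (N, 2) array of (u, v) pixel coordinates on the bone surface
    T_frame_to_world: 4x4 transform combining image calibration and tracker pose
    """
    n = surface_px.shape[0]
    # pixels -> mm in the image plane (z = 0), homogeneous coordinates
    pts = np.column_stack([surface_px[:, 0] * spacing_mm[0],
                           surface_px[:, 1] * spacing_mm[1],
                           np.zeros(n), np.ones(n)])
    world = (T_frame_to_world @ pts.T).T[:, :3]
    idx = np.round(world / VOXEL_MM).astype(int)    # nearest-voxel compounding
    ok = np.all((idx >= 0) & (idx < np.array(GRID.shape)), axis=1)  # clip to grid
    GRID[tuple(idx[ok].T)] = 1

# Two hypothetical frames from a sweep: identity pose, then a 2 mm elevational shift.
frame_pts = np.array([[10, 20], [11, 20], [12, 21]])
insert_frame(frame_pts, np.eye(4))
shifted = np.eye(4)
shifted[2, 3] = 2.0
insert_frame(frame_pts, shifted)
print(GRID.sum(), "voxels occupied")
```

A neural field replaces the discrete grid with a continuous function (e.g. an occupancy or signed-distance MLP) fitted to these same transformed surface points, avoiding voxelization artefacts between sparse freehand slices.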
Progress and Next Steps
- Trained a new segmentation model achieving a high Dice score and low surface-distance error
- Collect US and OptiTrack data from the thigh for femur reconstruction
- Train a neural fields model, similar to MaskField, for 3D model generation from multiple 2D images and coordinates
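The evaluation metrics mentioned above can be computed as in the following sketch on synthetic data (the project's actual evaluation code is not shown; these are the standard definitions of the Dice coefficient and symmetric mean surface distance):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def mean_surface_distance(pred_pts, gt_pts):
    """Symmetric mean distance (mm) between two point sets sampled on the surfaces."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Synthetic example: a ground-truth strip and a prediction shifted by one pixel.
gt = np.zeros((8, 8), dtype=bool)
gt[3, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3, 3:7] = True
print("Dice:", dice(pred, gt))

# Two short surface samplings 1 mm apart.
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
print("MSD (mm):", mean_surface_distance(a, b))
```

For sub-millimetre claims the surface-distance metric is the more informative of the two, since Dice is insensitive to small boundary shifts on large structures.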