
NA-MIC Project Weeks


Automatic Landmark Identification in 3D Cone-Beam Computed Tomography scans

Key Investigators

Project Description

We propose a novel approach that reformulates anatomical landmark detection as a classification problem, using a virtual agent placed inside a 3D Cone-Beam Computed Tomography (CBCT) scan. The agent is trained to navigate a multi-scale volumetric space to reach the estimated landmark position. Its movement decisions rely on a combination of Densely Connected Convolutional Networks (DCCN) and fully connected layers. Our method achieved high accuracy, with an average error of less than 1.3 mm on landmark positions and no failures.
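The reformulation above can be illustrated as a per-step classification: the network's fully connected head emits one logit per candidate move (six axis-aligned directions), and the agent takes the argmax class. This is a minimal NumPy sketch; the direction encoding and the example logits are illustrative assumptions, not the project's actual DCCN.

```python
import numpy as np

# Candidate agent moves: one voxel along each axis direction
# (illustrative encoding, not the project's actual action space).
MOVES = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_move(logits):
    """Detection as classification: one logit per candidate move,
    and the agent takes the argmax class."""
    probs = softmax(np.asarray(logits, dtype=float))
    return int(np.argmax(probs)), probs

# Hypothetical logits produced by the network for one agent step:
move_idx, probs = classify_move([0.1, 2.3, -0.5, 0.0, 0.4, -1.2])
displacement = MOVES[move_idx]  # the step applied to the agent position
```

Framing the step decision as a 6-way classification is what lets a standard convolutional classifier (here DCCN + fully connected layers) drive the agent.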


The goal is to have a model that automatically finds accurate landmarks in CBCT scans.

Approach and Plan

A virtual agent is placed inside a 3D CBCT scan and trained to navigate a multi-scale volumetric space to reach the estimated landmark position. Decision making is handled by a deep neural network.
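The multi-scale navigation can be sketched as a coarse-to-fine loop: the agent moves in large steps until it stops improving, then the step size is refined. In this self-contained sketch a greedy distance oracle stands in for the trained network, and the scale schedule (8, 4, 1) and stopping rule are assumptions for illustration.

```python
import numpy as np

# Six axis-aligned candidate moves (illustrative action space).
MOVES = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]])

def search(start, target, scales=(8, 4, 1), max_steps=200):
    """Coarse-to-fine navigation: at each scale the agent moves in steps
    of `scale` voxels until no candidate move reduces the distance to
    the target, then the scale is refined. The greedy distance check
    below replaces the trained network so the sketch runs on its own."""
    pos = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    for scale in scales:
        for _ in range(max_steps):
            cands = pos + scale * MOVES          # the six possible steps
            best = cands[np.argmin(np.linalg.norm(cands - target, axis=1))]
            if np.linalg.norm(best - target) >= np.linalg.norm(pos - target):
                break                            # converged at this scale
            pos = best
    return pos
```

Starting coarse keeps the number of steps low even in a large volume, while the finest scale pins down the final voxel position.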

Progress and Next Steps

Done:

  1. Prepared the data used for training and prediction
  2. Trained the model with a set of 6 landmarks and 60 CBCT scans
  3. Tested the prediction accuracy on new scans
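Testing accuracy means comparing predicted and ground-truth landmark positions in millimetres, since the reported error is in mm while predictions live in voxel indices. This is a hypothetical helper, not the project's evaluation code; the conversion simply multiplies the voxel offset by the scan's voxel spacing.

```python
import numpy as np

def mean_error_mm(pred_vox, true_vox, spacing_mm):
    """Mean Euclidean distance between predicted and ground-truth
    landmark positions, converted from voxel indices to millimetres
    via the scan's voxel spacing (illustrative helper)."""
    diff_mm = (np.asarray(pred_vox) - np.asarray(true_vox)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(diff_mm, axis=1).mean())

# Example: two landmarks in a scan with 0.5 mm isotropic voxels.
err = mean_error_mm([[10, 10, 10], [20, 20, 20]],
                    [[10, 12, 10], [20, 20, 23]],
                    [0.5, 0.5, 0.5])
```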

Next steps:

  1. Train the model on new landmarks and new CBCT sets
  2. Create a Slicer module that can predict landmarks on various file types
  3. Optimize the training method so clinicians can train on their own datasets


Selection of the 6 landmarks used to test the method

Environment used for the landmark search

Architecture of the agent used to find the landmark

The 3 steps of the landmark search

Results (error in mm):


Project week results

During this project week I learned the basics of developing a Slicer module. I spent the week creating a first sketch of a future module that will be used to launch the landmark prediction. For now, it lets the user browse the folders where the AI models are located and generates a menu from which the clinician can choose which landmark to predict. Our prediction method can be trained on any type of 3D image. The module must be user friendly and flexible so that any clinician can easily train and predict new landmarks.
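The folder-browsing step that backs the landmark menu could look like the following sketch. The file layout is an assumption (one trained model file per landmark, named `<landmark_name>.pth`); the actual module's layout and API are not shown in this page.

```python
from pathlib import Path

def list_landmarks(model_dir):
    """Build the landmark menu entries from a folder of trained models,
    assuming the hypothetical layout of one <landmark_name>.pth file
    per landmark."""
    return sorted(p.stem for p in Path(model_dir).glob("*.pth"))
```

A menu populated this way stays in sync with the model folder: adding a newly trained model file makes the corresponding landmark selectable without changing the module code.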

Browser to load the trained models

Landmarks menu generated after reading the model folder
