Developing a standardized head orientation approach for medical and dental images is crucial to improving the reliability of automated image analysis for clinical decision-making. Manual, user-dependent head orientation is time-consuming and error-prone. This study therefore aims to automatically obtain the desired standardized orientation of Cone Beam Computed Tomography (CBCT) scans, regardless of the patient's positioning during the scan or any CT scanner initialization changes.
The Automated Standardized Orientation (ASO) tool presented in this work automatically identifies landmarks on 3D volumes regardless of orientation, using a deep learning landmark identification algorithm that handles images with random orientation (ALI_CBCT). ASO then uses a landmark-based registration approach to orient the 3D volume to a common space, aligning the identified landmarks to a set of reference landmarks. The method starts by aligning three randomly chosen landmarks and refines the alignment using an Iterative Closest Point (ICP) transform. The tool also supports user-selected landmarks for higher precision. All the transforms computed during this process are concatenated, and the final transform is applied to the CBCT volume.
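As a rough illustration of the landmark-based registration step, the sketch below (not ASO's actual implementation) computes the least-squares rigid transform that maps identified landmarks onto reference landmarks using the Kabsch algorithm; the function name and the example coordinates are hypothetical.

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rigid transform (Kabsch): find R, t so that target ~= R @ source + t."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered landmark sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # correct a possible reflection
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical landmark coordinates (mm) in input and reference space
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])  # 90 degrees about z
tgt = src @ R_true.T + np.array([5.0, -2.0, 3.0])

R, t = rigid_transform(src, tgt)
aligned = src @ R.T + t  # landmarks mapped into the reference space
```

With exact landmark correspondences, the recovered transform reproduces the reference positions; in practice an ICP refinement step follows, as described above.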
To make ASO more robust, a pre-orientation algorithm has been developed. It uses a deep learning model to identify the head orientation and then rotates the volume to the desired orientation. This algorithm is currently being tested and will be integrated into the ASO module. The model was trained with random rotations.
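One way such training data can be generated is by applying random rotations to correctly oriented volumes. The sketch below (an assumed augmentation strategy, not the module's code) draws approximately uniform random 3D rotation matrices:

```python
import numpy as np

def random_rotation_matrix(rng):
    """Draw a random 3x3 rotation matrix via QR decomposition of a Gaussian matrix."""
    A = rng.standard_normal((3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))  # fix column signs so the distribution is uniform
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]  # flip one axis to get a proper rotation (det = +1)
    return Q

rng = np.random.default_rng(42)
Q = random_rotation_matrix(rng)  # orthogonal, det(Q) = +1
```

Each training volume would then be resampled under such a matrix so the network sees every orientation.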
Automated Standardized Orientation (ASO) is an extension for 3D Slicer that performs automatic orientation of either IOS or CBCT files.
The ASO module provides a convenient user interface for orienting different types of scans:
To select the Input Type in the extension, choose between CBCT and IOS here:
| Mode | Required Inputs |
| ---- | --------------- |
| Semi-Automated | Scans, Landmark files |
| Fully-Automated | Scans, ALI Models, Pre ASO Models (for CBCT files), Segmentation Models (for IOS files) |
To select the Mode in the extension, choose between Semi-Automated and Fully-Automated here:
The Fully-Automated Mode's Input section is slightly different:
| Input Type | Input Extension Type |
| ---------- | -------------------- |
| CBCT | .nii, .nii.gz, .gipl.gz, .nrrd, .nrrd.gz |
To select the Input Folder in the extension, select the folder containing your data here:
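As an illustration of how the accepted CBCT extensions from the table above could be matched when scanning an input folder, here is a hypothetical helper (not part of the module):

```python
from pathlib import Path

# Accepted CBCT extensions listed in the table above
CBCT_EXTENSIONS = (".nii", ".nii.gz", ".gipl.gz", ".nrrd", ".nrrd.gz")

def find_cbct_scans(folder):
    """Recursively collect files whose names end with an accepted CBCT extension."""
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.name.lower().endswith(CBCT_EXTENSIONS)
    )
```

Note that compound extensions such as `.nii.gz` are matched against the full file name rather than `Path.suffix`, which would only see `.gz`.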
The input has to be an IOS with tooth segmentation. The segmentation can be performed automatically using the SlicerDentalModelSeg extension. The IOS file names must include the jaw type (Upper or Lower).
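Since the jaw type is read from the file name, the naming convention can be illustrated with a small hypothetical helper (not the module's actual code):

```python
def jaw_type(filename):
    """Infer the jaw type from an IOS file name; returns 'Upper', 'Lower', or None."""
    name = filename.lower()
    if "upper" in name:
        return "Upper"
    if "lower" in name:
        return "Lower"
    return None  # file name does not follow the naming convention

result = jaw_type("patient01_Upper.vtk")  # 'Upper'
```

A file named, for example, `patient01_Upper.vtk` would be recognized as an upper-jaw scan, while a file without either keyword could not be processed.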
The user has to choose a folder containing a Reference Gold File, i.e., an oriented scan with landmarks.
You can either use your own files or download ours using the Download Reference button in the module.
| Input Type | Reference Gold Files |
| ---------- | -------------------- |
| CBCT | CBCT Reference Files |
| IOS | IOS Reference Files |
To select the Reference Folder in the extension, select the folder containing your reference data here:
The user has to decide which landmarks to use to run ASO.
| Input Type | Landmarks Available |
| ---------- | ------------------- |
| CBCT | Cranial Base, Lower Bones, Upper Bones, Lower and Upper Teeth |
| IOS | Upper and Lower Jaw |
For IOS: the user has to indicate the name of the label array in the VTK surface. By default the name is PredictedID.
The landmark selection is handled here:
For the Fully-Automated Mode, models are required as input. Use the Download Models button or follow these instructions:
Pre-Orientation and ALI_CBCT models are needed.
To add the Pre-Orientation models, download PreASOModels.zip, unzip it, and select it here:
To add the ALI_CBCT models, go to this link, select the desired models, unzip them into a single folder, and select that folder here:
You can choose the extension of the output files and the folder they will be saved to here:
Now that everything is in order, just press the Run button in this section:
The implementation is based on the Iterative Closest Point (ICP) algorithm to perform a landmark-based registration. Some preprocessing steps are applied to make the orientation work better (described in the CBCT and IOS sections, respectively):
1. A deep learning model is used to predict the head orientation and correct it. Models are available for download (Pre ASO CBCT Models).
2. A Landmark Identification Algorithm (ALI_CBCT) is used to determine user-selected landmarks.
3. An ICP transform is used to match the reference and the input file.
For the Semi-Automated mode, only step 3 is used, matching the input landmarks with the reference ones.
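The concatenation of the transforms produced by these steps can be sketched with 4x4 homogeneous matrices (an illustrative sketch under assumed conventions, not the module's code):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def concatenate(transforms):
    """Compose a sequence of 4x4 transforms; the first in the list is applied first."""
    final = np.eye(4)
    for T in transforms:
        final = T @ final
    return final

# Hypothetical example: translate by (1, 0, 0), then rotate 90 degrees about z
T_translate = to_homogeneous(np.eye(3), np.array([1.0, 0.0, 0.0]))
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
T_rotate = to_homogeneous(Rz, np.zeros(3))
final = concatenate([T_translate, T_rotate])
p = final @ np.array([0.0, 0, 0, 1])  # the origin ends up at (0, 1, 0)
```

Composing all intermediate transforms into one final matrix means the volume is resampled only once, which avoids accumulating interpolation error.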
Nathan Hutin (University of Michigan), Luc Anchling (UoM), Felicia Miranda (UoM), Selene Barone (UoM), Marcela Gurgel (UoM), Najla Al Turkestani (UoM), Juan Carlos Prieto (UNC), Lucia Cevidanes (UoM)
It is covered by the Apache License, Version 2.0: