Improved automated segmentation of dental CBCT images with Auto3DSeg
Key Investigators
- Csaba Pinter (EBATINCA, Spain)
- Daniel Palkovics (Semmelweis University, Hungary)
- Andres Diaz-Pinto (NVIDIA, UK)
Project Description
The majority of currently available deep learning (DL) cone-beam computed tomography (CBCT) segmentation models were trained on data from healthy, fully dentate patients. These models may not produce accurate segmentations of datasets with dentoalveolar hard tissue defects. Our group has previously developed a DL-based model for the automatic segmentation of dental CBCT scans that was trained on images with dentoalveolar pathological processes [1][2]. The current model uses a two-stage SegResNet-based architecture from MONAI Label. Despite the relatively small amount of training data, it produced sufficient accuracy (93% compared with semi-automatic segmentation). However, the model's robustness has to be improved. Using the MONAI Auto3DSeg framework and an enlarged training database, this project aims to develop an improved model for the automatic segmentation of dental CBCT scans presenting with dentoalveolar pathological processes.
Objective
We have previously trained a two-stage SegResNet-based model for the automatic segmentation of dental CBCT scans. The project was initiated at the 36th project week.
The goal of this project is to re-train the model using the new training data and the latest DL tools.
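Assuming Auto3DSeg is chosen (see the plan below), the enlarged training database would be described with an MSD-style datalist JSON. The sketch below only illustrates the expected structure; the file names, paths, and fold assignments are placeholders, not the actual project data.

```python
import json

# Hypothetical MSD-style datalist for the enlarged dental CBCT database.
# Paths and case IDs are placeholders; folds are pre-assigned here for
# cross-validation, although Auto3DSeg can also generate them.
datalist = {
    "training": [
        {"image": "imagesTr/case_001.nii.gz", "label": "labelsTr/case_001.nii.gz", "fold": 0},
        {"image": "imagesTr/case_002.nii.gz", "label": "labelsTr/case_002.nii.gz", "fold": 1},
        # ... one entry per uniformly annotated CBCT scan
    ],
    "testing": [
        {"image": "imagesTs/case_101.nii.gz"},
    ],
}

with open("dental_cbct_datalist.json", "w") as f:
    json.dump(datalist, f, indent=2)
```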
Approach and Plan
- Establish an enlarged training database with uniformly annotated CBCT data
- Decide on an adequate network framework and architecture (MONAI Auto3DSeg?)
- Come up with an initial configuration of the chosen architecture (stages, options, pre- and post-processing)
- Perform preliminary training on the available data (a minimal training sketch follows this list)
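For the preliminary training step above, a minimal Auto3DSeg run could look like the sketch below. The work directory, datalist, and data root paths are assumptions, and the algorithm bundles and hyperparameters generated by Auto3DSeg would still need to be reviewed for CBCT.

```python
from monai.apps.auto3dseg import AutoRunner

# Minimal Auto3DSeg task definition; all paths are placeholders.
# "modality" is set to CT, as CBCT is not a dedicated option.
input_cfg = {
    "name": "dental_cbct",
    "task": "segmentation",
    "modality": "CT",
    "datalist": "dental_cbct_datalist.json",
    "dataroot": "/data/dental_cbct",
}

runner = AutoRunner(work_dir="./auto3dseg_work_dir", input=input_cfg)
runner.set_num_fold(num_fold=5)  # match the fold assignment in the datalist
runner.run()  # data analysis -> algorithm generation -> training -> ensembling
```

This is close to the MONAI Label training workflow mentioned in the progress notes below: Auto3DSeg analyzes the data, generates candidate algorithm configurations, trains them with cross-validation, and ensembles the resulting models.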
Progress and Next Steps
- Discussion with Andres:
  - The existing teeth models were trained with MONAI Label using a two-stage approach:
    - Stage 1 produced a single-label teeth segmentation, mainly to determine a narrower ROI for the next stage.
    - Stage 2 cropped the image to the ROI derived from stage 1's teeth label and ran the multi-label inference.
  - Auto3DSeg does not inherently support a multi-stage approach, but it can be achieved by successively running two individual models.
  - Ebatinca previously developed a labelmap-to-labelmap model for vertebra posterior element removal that worked well and required little training data; we may be able to leverage this approach.
  - The single-stage approach for bone (and nerve) segmentation can be kept. Training with Auto3DSeg is very similar to training with MONAI Label.
- Possible approaches for the teeth:
  - Try single-stage segmentation; Auto3DSeg may perform well even with a single stage, since it automatically predicts suitable hyperparameters.
  - Reproduce the same two-stage approach as before: one model determines the ROI, and a second model segments the individual teeth within it (see the sketch after this list).
  - Use a different two-stage approach, where the first stage segments all teeth as a single label and a second stage separates them into individual teeth, similar to the vertebra labelmap-to-labelmap approach above (see the datalist sketch after this list).
  - Implement all of the above and vote on the best-performing option.
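Since Auto3DSeg has no built-in multi-stage pipeline, reproducing the previous two-stage teeth approach means chaining two independently trained models. The sketch below assumes the two trained bundles are wrapped in `infer_roi` and `infer_teeth` callables (hypothetical names); only the ROI cropping and label paste-back logic is shown.

```python
import numpy as np


def crop_to_roi(image, roi_mask, margin=10):
    """Crop a volume to the bounding box of a binary ROI mask, padded by `margin` voxels."""
    coords = np.argwhere(roi_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, image.shape)
    slices = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
    return image[slices], slices


def two_stage_teeth_segmentation(cbct_volume, infer_roi, infer_teeth, margin=10):
    """Chain two independently trained models.

    infer_roi   -- placeholder callable: single-label teeth mask on the full volume (stage 1)
    infer_teeth -- placeholder callable: multi-label per-tooth segmentation on the crop (stage 2)
    """
    roi_mask = infer_roi(cbct_volume)
    cropped, slices = crop_to_roi(cbct_volume, roi_mask, margin)
    teeth_crop = infer_teeth(cropped)
    # Paste the per-tooth labels back into full-volume space.
    teeth_full = np.zeros(cbct_volume.shape, dtype=teeth_crop.dtype)
    teeth_full[slices] = teeth_crop
    return teeth_full
```

In this setup the cropping margin becomes a tunable pipeline parameter, and each stage can be retrained or swapped out independently.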
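For the alternative two-stage idea (single-label teeth mask first, then a labelmap-to-labelmap separation into individual teeth), the second model could potentially also be trained with Auto3DSeg by passing the CBCT volume and the binary teeth mask as a multi-channel input. This is only an assumption about how such a training entry might be set up; all paths are placeholders.

```python
# Hypothetical datalist entry for the labelmap-to-labelmap separation stage:
# the binary all-teeth mask (stage-1 output) is provided as a second input
# channel alongside the CBCT intensities, and the per-tooth annotation is
# the training target. Paths are placeholders.
entry = {
    "image": [
        "imagesTr/case_001.nii.gz",             # CBCT intensities
        "stage1Tr/case_001_teeth_mask.nii.gz",  # binary teeth mask from stage 1
    ],
    "label": "labelsTr/case_001_per_tooth.nii.gz",
    "fold": 0,
}
```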
Illustrations
Two-stage SegResNet architecture
A: semi-automatic segmentation, B: deep learning segmentation
Background and References
1. Hegyi, A., Somodi, K., Pintér, C., Molnár, B., Windisch, P., García-Mato, D., Diaz-Pinto, A., & Palkovics, D. (2024). Mesterséges intelligencia alkalmazása fogászati cone-beam számítógépes tomográfiás felvételek automatikus szegmentációjára [Automatic segmentation of dental cone-beam computed tomography scans using a deep learning framework]. Orvosi hetilap, 165(32), 1242–1251. https://doi.org/10.1556/650.2024.33098
2. Palkovics, D., Hegyi, A., Molnar, B., Frater, M., Pinter, C., García-Mato, D., Diaz-Pinto, A., & Windisch, P. (2025). Assessment of hard tissue changes after horizontal guided bone regeneration with the aid of deep learning CBCT segmentation. Clinical oral investigations, 29(1), 59. https://doi.org/10.1007/s00784-024-06136-w