3D stereophotogrammetry has become increasingly common in biomedical and clinical fields as an economical, fast, flexible, and safe (non-invasive, radiation-free) way to obtain accurate 3D surface models. In particular, photogrammetry captures realistic surface texture that allows researchers and clinicians to accurately assess traits and place landmarks.
To further reduce the cost and facilitate the use of photogrammetry, we want to build an open-source pipeline for photogrammetry in 3D Slicer, covering everything from digital image post-processing to photogrammetric 3D mesh reconstruction and texturing, that incorporates open-source software and packages. Eventually, we will also create guidelines for clinicians and researchers, especially those using mobile devices (e.g., smartphones), for Slicer-based photogrammetry.
`vtkRenderer`. First build a `vtkOpenGLPolyDataMapper` linked to the mesh and store each texture image according to its material name, then add a `vtkOpenGLActor` that refers to the content of the mapper. Finally, pass the actor to the renderer. However, for our dataset, `vtkRenderer` crashed whenever I rendered the model with more than 14 texture images (on Windows), while the full model comes with 111 accompanying texture images. Steve confirmed the same issue on Mac. Below is an example of rendering with 10 texture images.
3. I then followed Steve and Andras' suggestions to try using Blender to merge texture images. I merged the first 25 texture images into one using Blender and mapped it to the model using the Texture Modeler in Slicer. Slicer successfully rendered the model with the texture; the result is shown below. The resolution is quite low, probably because of the scaling applied to each texture image when merging them into a single texture. Below are the 1st texture image (the dominant one) (left) and the merged one (right). In the merged image, the texture of the specimen is concentrated in the lower-left corner.
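The resolution loss is consistent with how atlas packing rescales UV coordinates: when each source texture is squeezed into a fraction of the merged image, its effective pixel density drops by the same factor. A small illustrative calculation (a simple grid layout is assumed here, not Blender's actual packing algorithm):

```python
def atlas_uv(u, v, col, row, ncols, nrows):
    """Map a per-texture UV coordinate into a grid-packed atlas.
    Each source texture occupies one cell of an ncols x nrows grid."""
    return (u + col) / ncols, (v + row) / nrows

def effective_resolution(src_px, ncols, nrows, atlas_px):
    """Pixels actually available to one source texture inside the atlas."""
    return min(src_px, atlas_px // max(ncols, nrows))

# Packing 25 textures of 2048 px into a 5x5 grid inside a 4096 px atlas
# leaves each source texture only 4096 // 5 = 819 px of detail.
u, v = atlas_uv(0.5, 0.5, 0, 0, 5, 5)   # cell (0, 0): the lower-left corner
px = effective_resolution(2048, 5, 5, 4096)
```

This also matches what we observe: the dominant first texture ends up in one corner cell of the merged image, at a fraction of its original resolution.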
4. Future directions:
   - Merge texture images properly:
     - I did not merge all 111 texture images because Blender requires adding the same texture-mapping node for every image. It appears that we can do Python scripting in Blender, so it might be useful to loop through every image in a script and later connect the result to Slicer.
     - Find a proper way to merge textures that retains the resolution. I'm also checking with the ODM people to see whether they can do it.
   - Following Andras' suggestion, directly access vtkRenderer() in the Slicer scene for stable rendering.
   - Steve suggested that geometric accuracy is more important than visual fidelity at this moment, and that we can archive the images for adding more algorithms in the future, such as machine learning. For the near future, we can focus on first getting a pipeline based on ODM. In the long run, we should definitely consider adding machine/deep learning algorithms, for example to image registration, which is the foundation of geometric & texture accuracy in structure-from-motion photogrammetry. This could also greatly improve the efficiency of photo taking: currently we have to take a lot of photos carefully to ensure proper registration, and it is still tricky. We will have more discussions with Murat.
   - We will also discuss how much we can rely on Slicer and how much we have to use 3rd-party software & packages.

# Illustrations

Example textured model (.obj) exported from WebODM, viewed in MeshLab

![Picture1](https://user-images.githubusercontent.com/80793828/175820570-1dd33815-a151-4469-9f30-42470906fc0a.png)

Example camera positions reconstructed by WebODM, viewed in WebODM

![Picture2](https://user-images.githubusercontent.com/80793828/175820800-78f81f9c-6d44-42a1-b8e2-c201abd04fc3.png)

Example point cloud exported from WebODM and loaded in Slicer using [this code](https://gist.github.com/pieper/e4ca5e4c753c5ed6c61656d25b93402c).
![image](https://user-images.githubusercontent.com/126077/174670532-75d16428-15a5-4647-8b80-7820fe4dfde3.png)

![image](https://user-images.githubusercontent.com/126077/174670684-eae5cc87-b0da-41cb-9a79-6a903148168f.png)

# Background and References

1. The repository for SlicerPhotoGram: [https://github.com/SlicerMorph/PhotoGram](https://github.com/SlicerMorph/PhotoGram).
2. We have created a script, [output_cropped_image.py](https://github.com/SlicerMorph/PhotoGram/blob/main/output_cropped_images.py), for loading a digital image sequence as a volume, cropping each image with the ROI tool to reduce background noise, and exporting each cropped slice as a TIFF image.
3. WebODM for photogrammetry, which relies on OpenCV and OpenSfM: [https://www.opendronemap.org/docs/](https://www.opendronemap.org/docs/) and [https://github.com/OpenDroneMap/WebODM](https://github.com/OpenDroneMap/WebODM).
4. A script for loading WebODM point clouds in Slicer: [https://gist.github.com/pieper/e4ca5e4c753c5ed6c61656d25b93402c](https://gist.github.com/pieper/e4ca5e4c753c5ed6c61656d25b93402c)