Technical Note

Automatization of CT Annotation: Combining AI Efficiency with Expert Precision

1 Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
2 Department of Computer Graphics and Computer Vision, Riga Technical University, LV-1048 Riga, Latvia
3 Institute of Electronics and Computer Science, LV-1006 Riga, Latvia
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(2), 185; https://doi.org/10.3390/diagnostics14020185
Submission received: 8 November 2023 / Revised: 12 January 2024 / Accepted: 13 January 2024 / Published: 15 January 2024
(This article belongs to the Special Issue Artificial Intelligence in Radiology 2.0)

Abstract

The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum–coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlined the segmentation process by eliminating the need for volume stitching, aligning seamlessly with the methodology we present. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.

1. Introduction

Recent advancements in artificial intelligence (AI) have revolutionized numerous sectors, including automotive engineering, economics, finance, and particularly medical diagnostics and treatment planning. AI, a pivotal branch of computer science, involves designing algorithms capable of mimicking human tasks, thereby facilitating in-depth learning, remote teaching, prompt feedback, innovative assessment, and efficient data storage in education [1]. According to Saga et al. [2], AI’s precision and wide-ranging benefits have attracted extensive research interest. In medical contexts, AI has proven invaluable in managing large datasets, aiding in clinical decision-making, and enhancing the accuracy of medical practice [3,4,5,6,7,8,9]. Specifically, AI-powered tools have become integral in image-guided systems for surgical planning and simulation, leveraging advanced visualization technologies [10].
Moreover, the evolution of imaging techniques has markedly improved the visualization and identification of anatomical structures. AI’s application in anatomy education promises significant advancements, potentially enhancing both student learning and instructor teaching methodologies [11,12]. The scalability and speed of AI technologies offer unprecedented opportunities in educational settings [13]. In medical data analysis, AI’s precision is instrumental in CT and MRI scans, enabling the detailed detection of complex anatomical structures. This precision aids in various medical applications, from surgery planning to patient care. AI also plays a crucial role in developing and utilizing 3D anatomical models for educational purposes, enhancing students’ comprehension of human anatomy [14,15]. Visual representations in medical imaging are vital for professionals, researchers, educators, students, and patients. AI has significantly advanced medical data annotation methods, including in the study of human bone structures. AI algorithms efficiently process and update large datasets, creating detailed and accurate images for medical analysis [14,15].
This paper presents a semi-automatic workflow for the semantic segmentation of human spine CT data using AI tools. Tested on real clinical data, this method greatly reduces the time and resources required for manual segmentation, offering significant benefits for AI and healthcare applications. The workflow includes volume stitching for multiple series, AI-driven semantic segmentation, validation of anatomical correctness, and practical applications of segmented radiological data. Section 2 details the materials and methods, Section 3 presents the results, and Section 4 discusses the methodology, concludes the paper, and outlines potential future research directions.

1.1. Performing Semantic Segmentation

This section outlines a method for performing semi-automatic semantic segmentation of computed tomography (CT) data using AI-based tools. The workflow is methodically structured into four distinct stages: two are always mandatory, one is mandatory only in certain cases, and one is optional, depending on the specific requirements of the dataset.
Volume Stitching (Mandatory in Certain Cases): This step is essential in instances where multiple series are involved, such as when the CT data encompasses separate scans of the cervical region and thoraco-abdominal areas. The stitching process integrates these series into a cohesive volume, facilitating a more comprehensive analysis.
Semantic Segmentation with AI (Mandatory): At this core stage, AI-driven instruments are employed to segment the CT data semantically. The AI tools identify and delineate various anatomical structures within the scans, streamlining the segmentation process.
Validation of Segmentation Anatomical Correctness (Mandatory): This critical step involves the verification of the AI-segmented data for anatomical accuracy. It ensures that the segmentation aligns correctly with established anatomical standards, thereby maintaining the precision and reliability of the results.
Post-Processing (Optional): In cases where further refinement or adjustments are necessary, post-processing can be applied. This step allows for the fine-tuning of the segmented data, addressing any specific needs or discrepancies that may arise from the initial AI segmentation.
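The four stages above can be pictured as a simple pipeline in which only segmentation and validation always run. The sketch below is purely illustrative; the function names and signatures are placeholders, not actual 3D Slicer or TotalSegmentator APIs.

```python
from typing import Callable, List, Optional


def run_segmentation_pipeline(
    series: List[str],
    segment: Callable[[str], dict],
    validate: Callable[[dict], dict],
    stitch: Optional[Callable[[List[str]], str]] = None,
    post_process: Optional[Callable[[dict], dict]] = None,
) -> dict:
    """Illustrative four-stage workflow: stitching (conditional),
    AI segmentation (mandatory), anatomical validation (mandatory),
    post-processing (optional)."""
    # Stage 1: volume stitching, needed only when several series
    # (e.g. cervical + thoraco-abdominal) cover the target structure.
    volume = stitch(series) if stitch and len(series) > 1 else series[0]
    # Stage 2: AI-driven semantic segmentation of the unified volume.
    masks = segment(volume)
    # Stage 3: verification against established anatomical standards.
    masks = validate(masks)
    # Stage 4: optional refinement of the validated masks.
    return post_process(masks) if post_process else masks
```

The conditional stages are passed as optional callables, so a single-series dataset simply skips stitching without changing the mandatory path.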

1.2. Volume Stitching

In radiological imaging, particularly for semantic segmentation, the integration of individual volumes is paramount. This is especially true for complex anatomical structures like the spine, where a comprehensive and uninterrupted view is essential. The vertebral column extends from the cervical region at the top to the thoraco-abdominal region at the bottom. If these areas are scanned separately, the resulting images may be fragmented, failing to provide a holistic view of the spine’s anatomy.
Semantic segmentation, a process where each pixel in an image is classified into specific categories such as different tissues or structures [16], relies heavily on the completeness of the data. By stitching together separate volumes from cervical and thoraco-abdominal scans, we ensure that the segmentation algorithms have a full representation of the spine. This holistic view is crucial for achieving accurate and reliable segmentation outcomes.
Furthermore, different scanning protocols for cervical and thoraco-abdominal regions often lead to variations in image quality, resolution, or contrast [17]. By merging these distinct volumes, we create a uniform dataset, which is critical for consistent analysis. This uniformity significantly reduces the likelihood of errors or inconsistencies that can arise during the segmentation process. Moreover, working with a single, unified dataset, as opposed to multiple disjointed ones, greatly streamlines the analysis process, enhancing efficiency and accuracy.

1.3. Semantic Segmentation with AI

Semantic segmentation, a critical process in medical radiology, involves classifying each pixel in an image into specific classes, thereby extracting detailed anatomical and pathological information. This technique is instrumental in diagnosing diseases, planning treatments, and monitoring disease progression. Despite its importance, the implementation of semantic segmentation in medical radiology encounters several significant challenges.
One primary challenge is the inherent complexity of medical images. Modalities like MRIs, CT scans, and X-rays reveal detailed internal structures, where the diverse appearances of tissues and overlapping structures can complicate the segmentation process. Additionally, images are often subject to noise, artifacts, and distortions due to factors like patient movement, limitations of imaging devices, or specific scanning parameters [18,19]. These imperfections can lead to inaccurate segmentations. Another obstacle is the variability among patients. Factors such as age, genetics, and health conditions result in wide anatomical differences, often necessitating extensive customization or manual adjustments in standardized segmentation models. Further complexity arises from the high dimensionality of medical imaging; CTs and MRIs produce volumetric, 3D data, adding to the segmentation challenges.
A significant hurdle in semantic segmentation is the scarcity of accurately annotated data. Supervised learning models require extensive labeled datasets for training [20]. However, obtaining precise annotations from experienced radiologists is a resource-intensive and time-consuming task. Class imbalance in medical images, where crucial features like tumors may occupy a small portion of the image, can skew algorithms towards more dominant classes, leading to segmentation inaccuracies [21].
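The effect of class imbalance on evaluation can be shown with a toy example (not data from this study): when a lesion occupies a few percent of the image, raw pixel accuracy rewards a model that never predicts the lesion at all, which is why overlap metrics such as the Dice coefficient are preferred.

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    matches = sum(p == t for p, t in zip(pred, truth))
    return matches / len(truth)


def dice_coefficient(pred, truth, label=1):
    """Dice overlap for one class: 2|P ∩ T| / (|P| + |T|)."""
    p = {i for i, v in enumerate(pred) if v == label}
    t = {i for i, v in enumerate(truth) if v == label}
    if not p and not t:
        return 1.0
    return 2 * len(p & t) / (len(p) + len(t))


# Toy 100-pixel scan in which the structure of interest covers only 5 pixels.
truth = [1] * 5 + [0] * 95
miss_all = [0] * 100                       # a model that never predicts the structure
print(pixel_accuracy(miss_all, truth))     # 0.95: looks excellent
print(dice_coefficient(miss_all, truth))   # 0.0: the structure is entirely missed
```

The gap between the two numbers is exactly the skew toward the dominant background class described above.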
Inter-rater variability in medical image annotations poses another challenge. Even among expert radiologists, discrepancies in image interpretation and annotation are common [22]. This variability complicates the establishment of a consistent ground truth for training models. In certain applications, such as image-guided surgeries, real-time semantic segmentation is required, adding to the complexity.
Lastly, the issue of model generalization is a significant concern. Models trained with data from one medical institution or a specific imaging device may not perform well with data from different sources, due to variations in imaging protocols, patient demographics, or device characteristics [23]. This highlights the need for adaptable and robust models capable of handling diverse datasets.
The recent advancements in machine learning (ML) and deep learning (DL) algorithms have markedly increased the adoption of artificial intelligence (AI) in the medical field [24]. These developments, coupled with powerful new imaging modalities provided by advanced scanners, present unique challenges in visualization. Future healthcare technologies, driven by these advancements, are expected to enhance the quality of medical imaging while reducing associated costs, thereby simplifying the extraction and optimal combination of individual data [25].
Computer-aided platforms, leveraging these technological advancements, assist medical professionals in making critical clinical decisions related to diagnosis, disease tracking, and prognosis [26,27,28]. Medical image creation, tailored to individual patients’ medical data, enables professionals to visualize and detect specific manifestations of diseases, enhancing personalized medical care. AI algorithms play a crucial role in diagnosing diseases, planning effective treatment strategies, and monitoring treatment outcomes. These algorithms can analyze medical imaging data to create detailed illustrations of bone dislocations, fractures, or tumors, specifying their location, size, shape, and characteristics. Moreover, AI has demonstrated its capability to surpass human performance in certain tasks, such as image segmentation [29]. It can process a wide array of data types in the medical context, including outputs from imaging scanners, sensor data, or patient metadata. Modern computers, equipped with DL algorithms, are adept at semantic labeling and image classification. In medical data analysis, these algorithms focus on identifying anatomical landmarks, geometric descriptors, centerlines of structures, shape, deformation, and fiber orientation, thereby providing remarkable diagnostic accuracy for various diseases [30].
New methodologies in AI have been developed for the identification and classification of objects in medical images [31,32]. The creation of better methods and standards for designing, implementing, and evaluating clinical AI-enabled instruments is crucial to ensure their effectiveness and value to end users [33]. These algorithms, trained on large datasets, can predict outcomes on new data without explicit programming [34]. Current computer vision algorithms excel at identifying patterns in digital data, achieving human-level accuracy in object detection.
Image segmentation, a fundamental procedure in many medical imaging applications, involves delineating regions within images that are occupied by objects of interest [35]. Traditional segmentation methods, which often rely on areas and edges, face limitations due to factors such as non-uniform grayscale, individual differences, and the presence of artifacts and noise in images [36]. The advent of deep learning (DL) has brought about advanced architectures and feature extraction mechanisms that significantly enhance the segmentation of deformed anatomy [37], thereby improving the accuracy of disease diagnosis and minimizing redundant computations. Automatic segmentation of bones is a crucial step in deriving quantitative markers for accurate clinical diagnosis and in developing computer-aided decision support systems. In modalities like computed tomography (CT) and magnetic resonance imaging (MRI), understanding the pathology and observing changes in the anatomical structures, shape, size, and texture of bones are vital for initial disease diagnosis and monitoring its progression. For instance, in spine image analysis and 3D spine reconstruction applications, accurately locating and segmenting vertebrae from CT spinal images is fundamental [38]. Accurate bone segmentation provides a stable structural reference for both inter and intra-patient registration and internal organ localization, facilitating the automatic segmentation of internal organs [39]. However, despite the relative ease of visual observation of bones in CT images, challenges such as low signal-to-noise ratio, insufficient spatial resolution, and indistinct image intensity between spongy bones and soft tissues make the precise segmentation of individual bones a complex task [40]. Accurately segmenting the spine into individual vertebrae is crucial for diagnosing spine-related illnesses, especially for detecting and classifying bone damage, fractures, lesions, and tumors [41,42,43].

1.4. Validation of Segmentation Anatomical Correctness

To maintain the anatomical accuracy of the model, it is cross-referenced with credible academic references. Furthermore, the use of expert and accredited medical virtual applications is recommended. Complete Anatomy serves as a prime example of such tools, providing an accurate and intuitive interface for manipulating anatomical models. Developed by 3D4Medical under the aegis of Elsevier, Complete Anatomy stands as a sophisticated educational 3D anatomy platform, with the company’s expertise in the development of medical products dating back to 2009 [44,45].

1.5. Practical Use of Segmented Radiological Data

Semantically segmented medical datasets have become fundamental in advancing medical research and technology. Their integration into artificial intelligence (AI) and 3D technologies heralds a new era of opportunities and breakthroughs.
In medical AI, these datasets are indispensable. AI algorithms, trained on semantically segmented data, have transformed diagnostic processes by precisely identifying and classifying abnormalities [46]. This advancement aids radiologists and clinicians in early and accurate diagnosis, paving the way for personalized medicine. Furthermore, AI models utilizing these datasets can perform predictive analyses, anticipating disease progression, patient outcomes, and treatment efficacy [47]. Automated segmentation, another critical application, allows AI to process intricate medical images efficiently, reducing manual labor and ensuring uniformity in results [48,49,50].
Three-dimensional technologies have significantly benefited from these datasets, enabling the creation of detailed anatomical reconstructions [51]. Surgeons utilize these 3D models for meticulous procedural planning, enhancing patient outcomes and optimizing surgical times. Additionally, the integration of these datasets into virtual and augmented reality platforms enhances medical training and patient education, providing immersive experiences and real-time surgical guidance [52].
Beyond AI and 3D applications, semantically segmented datasets are pivotal in telemedicine [53]. They facilitate real-time sharing and analysis, ensuring access to expert care regardless of location. These datasets are at the forefront of contemporary medical innovation, enhancing current practices and setting the stage for future advancements.
In clinical contexts, computer-aided and semi-automated segmentation techniques are crucial for interpreting CT and MRI images for bone pathologies. Correct vertebra segmentation allows for monitoring the progression of diseases under treatment. Traditional manual segmentation is time-consuming, requiring an in-depth understanding of anatomy and extensive effort; for instance, segmenting a vertebral column can take up to 24 h, depending on data quality and scanning protocols. This duration may double for novices. Our proposed methodology enables non-specialists to perform initial segmentation of general, non-pathological structures semi-automatically. This process provides a basis for adjustments, allowing initial segmentation by AI, followed by manual validation and refinement against literature sources and anatomical standards.

2. Materials and Methods

The 3D Slicer software (version 5.4; slicer.org, accessed on 30 August 2023) [54] was used in this work. It is important to highlight that the program is open-source and can be downloaded and used at no cost.
The dataset used in this research comprised 250 patient CT scans sourced from Riga East Clinical University Hospital.
In cases where it is necessary to merge multiple series in order to segment the whole anatomical structure, the steps below should be followed.

2.1. Methodology for Series Merging (Optional)

Data import
  • Start the 3D Slicer software;
  • Access the DICOM database;
  • Choose the appropriate patient;
  • Identify two series intended for later merging;
  • Import the identified series.
Creation of region of interest (ROI)
  • Activate the Crop Volume module;
  • Under Input Volume, choose a series;
  • Under Input ROI, opt for Create ROI. The resulting volume will be named Crop Volume ROI;
  • Click the Center View button in the 3D visualization window;
  • Select the Fit to Volume option;
  • Resize the ROI to ensure both loaded volumes, presented as two individual entities, fit within this new ROI.
The process of stitching two volumes
  • Access the Stitch Volume module;
  • Under Stitched Volume ROI, select the Crop Volume ROI you previously created;
  • For Original Volume 1, choose the primary volume to which the secondary volume will be appended;
  • For Original Volume 2, select the other loaded volume;
  • Click the Create Stitched Volume button.
The Stitch Volume function is designed to create a unified composite image from multiple image volumes. It operates by selecting specific images and merging them within a user-defined rectangular region of interest (ROI). This function meticulously handles the trimming and fusion of overlapping areas in the volumes, dividing each area based on its proximity to the midpoint of the overlap.
Key to the process is the preservation of image resolution. The function precisely adopts the resolution of the initial input volume. To maintain image fidelity, nearest-neighbor interpolation is applied, which accounts for minor shifts in voxel placement. In situations where the image volumes have varying resolutions, windowed sync interpolation is utilized to achieve a uniform resolution across the merged image.
For areas outside the scope of the original volumes, the function assigns a default voxel value of zero. This assignment ensures clear demarcation of the merged areas from the non-covered regions. The process concludes with the generation of a consolidated image, confined within the defined ROI and matching the resolution of the first input volume.
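The overlap-splitting and zero-fill behaviour described above can be illustrated on a 1-D toy analogue. This sketch is only a conceptual model of the Stitch Volume logic; it ignores resampling, interpolation, and 3-D geometry, and is not the module's actual implementation.

```python
def stitch_1d(vol1, start1, vol2, start2, roi_start, roi_end, fill=0):
    """Toy 1-D analogue of volume stitching: within the overlap, each output
    position takes its value from the volume on its side of the overlap
    midpoint; positions covered by neither volume get the fill value (zero)."""
    end1, end2 = start1 + len(vol1), start2 + len(vol2)
    # Midpoint of the overlapping region decides which volume "owns" a voxel.
    overlap_lo, overlap_hi = max(start1, start2), min(end1, end2)
    mid = (overlap_lo + overlap_hi) / 2
    out = []
    for x in range(roi_start, roi_end):
        in1 = start1 <= x < end1
        in2 = start2 <= x < end2
        if in1 and in2:
            out.append(vol1[x - start1] if x < mid else vol2[x - start2])
        elif in1:
            out.append(vol1[x - start1])
        elif in2:
            out.append(vol2[x - start2])
        else:
            out.append(fill)     # outside both volumes: default value of zero
    return out


# "Cervical" volume covers positions 0-5, "thoraco-abdominal" 4-9; overlap at 4-5.
cervical = [10, 11, 12, 13, 14, 15]   # starts at position 0
thoraco = [24, 25, 26, 27, 28, 29]    # starts at position 4
print(stitch_1d(cervical, 0, thoraco, 4, 0, 12))
# → [10, 11, 12, 13, 14, 25, 26, 27, 28, 29, 0, 0]
```

Position 4 falls below the overlap midpoint and keeps the first volume's value, position 5 falls above it and takes the second volume's value, and the two trailing positions outside either scan are zero-filled.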

2.2. Methodology for Segmentation

The creation of the semantic mask (TotalSegmentator module)
  • For Input Volume, choose the volume you have loaded;
  • In Segmentation Tasks, opt for Total;
  • Click the Apply button.
Deletion of unnecessary data
4. Remove all items, retaining only the volume used in the prior step and the desired semantic segmentations of anatomical structures.
Mask validation and correction
5. When smoothing is necessary:
  • For the Smoothing Method, select Median;
  • Set the Kernel Size to 3.00 mm;
  • Toggle the Apply to Visible Segments option to Enable.
6. Verify the morphological accuracy of segmented structures using references from the literature or digital platforms. If required, make adjustments utilizing the Draw, Paint (with the Sphere Brush feature activated), Erase, and Scissors tools.
7. If there is a segmentation error where a part of one structure is identified as a segment of another bone, follow these steps to merge and rectify:
  • In the Data module, duplicate the segment that contains a section of the incorrect bone;
  • Utilize the Scissors tool from the Segment Editor module to remove everything except the mislabeled structure;
  • Using the Logical Operators tool, integrate the two segments of a single structure with the Add operation.
Data export
8. Save the volume in .nrrd file format;
9. Save the segmentation for the volume in .seg.nrrd file format.
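The Median smoothing step (step 5) can be pictured on a toy 2-D binary mask. The sketch below is a plain majority-vote median filter working in voxel units rather than millimetres; it illustrates the idea, not 3D Slicer's actual implementation.

```python
def median_smooth(mask, radius=1):
    """Median filter over a (2*radius+1)^2 neighbourhood. On a binary mask
    this acts as a majority vote: isolated speckles are removed and small
    pinholes inside a structure are filled."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(rows):
        for c in range(cols):
            window = [
                mask[rr][cc]
                for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                for cc in range(max(0, c - radius), min(cols, c + radius + 1))
            ]
            window.sort()
            out[r][c] = window[len(window) // 2]   # median of the window
    return out


# A vertebra-like blob with one speckle (top-left) and one pinhole (centre).
mask = [
    [1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
smoothed = median_smooth(mask)
```

After smoothing, the isolated speckle at the top-left corner is removed and the single-voxel pinhole inside the blob is filled, which is the typical clean-up expected from this step before manual correction.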

3. Results

The methodology we discuss has been rigorously validated by applying it to a dataset of 250 patient CT scans from Riga East Clinical University Hospital. This validation specifically involved the semantic segmentation of the vertebral column, with a keen focus on accurately registering and naming each individual vertebra.
In selecting our patient cohort, we implemented a meticulously designed criterion to ensure a demographically balanced sample in terms of age and sex, thus enhancing the generalizability of our study’s findings. A pivotal inclusion criterion was the structural integrity of the vertebral column. We exclusively analyzed scans where the vertebrae were free from severe dispositions or deformities to minimize potential confounding factors that could bias our results.
The core of our workflow, aside from the optional step of volume stitching to create a single comprehensive volume, was the segmentation of the vertebral column. This involved the detailed semantic segmentation of all 25 bones in the vertebral column, including C1 to C7 (cervical), T1 to T12 (thoracic), L1 to L5 (lumbar), and the sacrum combined with the coccyx. An illustrative example of a semantically segmented vertebral column, as executed following our presented methodology, is depicted in Figure 1.
The segmentation was conducted under the supervision of an experienced radiologist and an anatomy professor, ensuring accuracy and adherence to established medical standards. All the created masks have been validated and corrected according to the methodology by comparing all the semantically segmented bones with the reference anatomy from the literature (Figure 2).
In our analysis, we identified instances of false registration anomalies (in 62 cases), a significant error in which two separate vertebrae were mistakenly identified and registered as a single anatomical entity. This misregistration presents a considerable source of error, adversely affecting both the qualitative and quantitative assessments of spinal structure.
Moreover, in every case we observed segmentation inaccuracies in which the delineated region either exceeded or fell short of the intended anatomical limits. In some instances, the segmentation extended into adjacent tissues, while in others it failed to include parts of the target structure. Such deviations result in a misrepresented portrayal of the spine's true morphology, as exemplified in Figure 3.
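Some of these anomalies can be flagged automatically before manual validation begins. The sketch below is a hypothetical sanity check, not part of the study's workflow: it verifies that every expected vertebra label is present (a missing label is a hint of a false registration, where two vertebrae were merged under one label) and that no label is split into disconnected fragments.

```python
def connected_components(cells):
    """Count 4-connected components in a set of (row, col) cells."""
    remaining = set(cells)
    count = 0
    while remaining:
        count += 1
        stack = [remaining.pop()]
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    stack.append(nb)
    return count


def check_labels(label_map, expected_labels):
    """Flag missing labels and labels split into several fragments."""
    cells_by_label = {}
    for r, row in enumerate(label_map):
        for c, v in enumerate(row):
            if v:
                cells_by_label.setdefault(v, set()).add((r, c))
    issues = []
    for lab in expected_labels:
        if lab not in cells_by_label:
            issues.append(f"label {lab} missing (possible merge into a neighbour)")
        elif connected_components(cells_by_label[lab]) > 1:
            issues.append(f"label {lab} fragmented")
    return issues


# Toy map: the region that should carry label 2 was absorbed into label 1.
label_map = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 1, 0],
]
print(check_labels(label_map, expected_labels=[1, 2]))
# → ['label 2 missing (possible merge into a neighbour)']
```

A real implementation would run this in 3-D with 6- or 26-connectivity, but even this toy version shows how a 62-case misregistration problem could be surfaced without opening each scan.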
These artifacts not only undermine the data’s accuracy but also present substantial challenges in clinical or research contexts that depend on precise anatomical demarcation. Ensuring the correctness of vertebral identification and maintaining the integrity of anatomical boundaries are crucial for the reliability and applicability of our findings in subsequent medical analyses and applications.
The successful conclusion of our study resulted in the creation of a composite volume, which effectively integrated two critical anatomical regions: the cervical and the thoraco-abdominal areas. This integration was complemented by their corresponding segmentation masks. A notable aspect of our data storage approach was the adoption of the Nearly Raw Raster Data (.NRRD) file format, diverging from the traditional Digital Imaging and Communications in Medicine (DICOM) format.
This deliberate choice to utilize the .NRRD format was driven by its several advantages, particularly in the context of machine learning applications. The .NRRD format facilitates faster data loading speeds, a crucial factor in efficiently processing large datasets in AI-driven research. This efficiency is especially beneficial when dealing with the complex, multi-region datasets like ours, where streamlined data handling can significantly enhance computational performance and analysis throughput.
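Part of the NRRD format's appeal is its simplicity: a short plain-text header (magic line, `field: value` pairs, a blank line) followed by the raw voxel data. The sketch below writes and re-reads a deliberately minimal uint8 NRRD file to make that layout concrete; it is not a full parser, and real work should use a library such as pynrrd or 3D Slicer's own I/O.

```python
def write_nrrd(path, sizes, voxels):
    """Write a minimal NRRD file: ASCII header, blank line, raw uint8 data."""
    header = (
        "NRRD0004\n"
        "type: uint8\n"
        f"dimension: {len(sizes)}\n"
        f"sizes: {' '.join(map(str, sizes))}\n"
        "encoding: raw\n"
        "endian: little\n"
        "\n"                       # blank line separates header from data
    )
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        f.write(bytes(voxels))


def read_nrrd(path):
    """Parse the header fields and return (fields, voxel values)."""
    with open(path, "rb") as f:
        data = f.read()
    head, _, body = data.partition(b"\n\n")
    lines = head.decode("ascii").splitlines()
    assert lines[0].startswith("NRRD")          # magic line
    fields = dict(line.split(": ", 1) for line in lines[1:])
    return fields, list(body)


path = "toy_volume.nrrd"
write_nrrd(path, sizes=(2, 2, 1), voxels=[0, 1, 2, 3])
fields, voxels = read_nrrd(path)
print(fields["sizes"], voxels)
```

Because the header is human-readable and the payload is a single contiguous block, an entire volume loads in one read, which is the loading-speed advantage noted above relative to slice-per-file DICOM series.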

4. Discussion

Our methodology employs AI in an easy-to-use format, providing operators with a graphical user interface (GUI). All necessary components related to different Slicer modules are pre-installed within the program, eliminating the need for external downloads.
However, a significant limitation of this methodology is the need for manual validation of all segmented anatomical structures. This process requires a high level of anatomical expertise and introduces potential sources of error and subjectivity. The segmentation process is prone to inaccuracies such as the omission of critical regions, incorrect merging of different anatomical regions, and imprecise registration of structures. For example, two adjacent vertebrae might be inaccurately registered as a single entity, affecting data integrity. Pathological anomalies and anatomical variations add complexity to the segmentation task, requiring both AI and human validators to uphold morphological precision and methodological robustness. The lack of quantitative benchmarks and metric evaluation for AI-segmented structures further complicates the validation process.
Additionally, the segmentation capabilities are limited by the TotalSegmentator software (ver. 2.0.4), which supports only a specific range of anatomical structures, thereby restricting the method’s applicability in broader anatomical studies [55,56].
This methodology introduces an enhanced approach to the annotation of radiological data. Its distinct advantage lies in its broad applicability to a variety of anatomical structures, making it a valuable tool in AI-driven projects, especially in the area of medical data segmentation. A notable feature of this methodology is its inclusive design, which expands the range of annotators beyond traditional radiological professionals. This inclusivity accelerates the data annotation process, enabling quicker completion of projects and resulting in efficient use of resources. Furthermore, for specialists in segmentation tasks, this approach offers a significant reduction in the time required, enhancing overall productivity.

Author Contributions

Conceptualization, E.E.; methodology, E.E.; software, E.E.; validation, E.E. and K.B.; formal analysis, E.E.; investigation, E.E.; resources, K.B. and E.E.; writing—original draft preparation, E.E. and D.K.; writing—review and editing, E.E., D.K., K.B., K.S. and M.P.; visualization, E.E.; supervision, E.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Rīga Stradiņš University (2-PĒK-4/97/2022, 21 February 2022).

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdellatif, H.; Al Mushaiqri, M.; Albalushi, H.; Al-Zaabi, A.A.; Roychoudhury, S.; Das, S. Teaching, Learning and Assessing Anatomy with Artificial Intelligence: The Road to a Better Future. Int. J. Environ. Res. Public Health 2022, 19, 14209. [Google Scholar] [CrossRef] [PubMed]
  2. Saga, R. AI and Visualization with Medical Data: Medical Image and Deep Learning. J. Vis. 2018, 38, 19–22. [Google Scholar] [CrossRef]
  3. Minopoulos, G.M.; Memos, V.A.; Stergiou, K.D.; Stergiou, C.L.; Psannis, K.E. A Medical Image Visualization Technique Assisted with AI-Based Haptic Feedback for Robotic Surgery and Healthcare. Appl. Sci. 2023, 13, 3592. [Google Scholar] [CrossRef]
  4. Choudhry, P. Use of Artificial Intelligence in Healthcare Applications. Acta Sci. Neuro 2021, 4, 73–74. [Google Scholar] [CrossRef]
  5. Botwe, B.O.; Akudjedu, T.N.; Antwi, W.K.; Rockson, P.; Mkoloma, S.S.; Balogun, E.O.; Elshami, W.; Bwambale, J.; Barare, C.; Mdletshe, S.; et al. The Integration of Artificial Intelligence in Medical Imaging Practice: Perspectives of African Radiographers. Radiography 2021, 27, 861–866. [Google Scholar] [CrossRef]
  6. Zhuang, Y.; Jiang, N.; Xu, Y. Progressive Distributed and Parallel Similarity Retrieval of Large CT Image Sequences in Mobile Telemedicine Networks. Wirel. Commun. Mob. Comput. 2022, 2022, 1–13. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Wang, L.; Zheng, W.; Yin, L.; Hu, R.; Yang, B. Endoscope Image Mosaic Based on Pyramid ORB. Biomed. Signal Process Control 2022, 71, 103261. [Google Scholar] [CrossRef]
  8. Lu, S.; Yang, B.; Xiao, Y.; Liu, S.; Liu, M.; Yin, L.; Zheng, W. Iterative Reconstruction of Low-Dose CT Based on Differential Sparse. Biomed. Signal Process Control 2023, 79, 104204. [Google Scholar] [CrossRef]
  9. Lu, S.; Liu, S.; Hou, P.; Yang, B.; Liu, M.; Yin, L.; Zheng, W. Soft Tissue Feature Tracking Based on Deep Matching Network. Comput. Model. Eng. Sci. 2023, 136, 363–379. [Google Scholar] [CrossRef]
  10. Golland, P.; Kikinis, R.; Halle, M.; Umans, C.; Grimson, W.E.L.; Shenton, M.E.; Richolt, J.A. AnatomyBrowser: A Novel Approach to Visualization and Integration of Medical Information. Comput. Aided Surg. 1999, 4, 129–143. [Google Scholar] [CrossRef]
  11. Liu, M.; Zhang, X.; Yang, B.; Yin, Z.; Liu, S.; Yin, L.; Zheng, W. Three-Dimensional Modeling of Heart Soft Tissue Motion. Appl. Sci. 2023, 13, 2493. [Google Scholar] [CrossRef]
  12. Shen, X.; Du, S.-C.; Sun, Y.-N.; Sun, P.Z.H.; Law, R.; Wu, E.Q. Advance Scheduling for Chronic Care Under Online or Offline Revisit Uncertainty. IEEE Trans. Automat. Sci. Eng. 2023, 1–14. [Google Scholar] [CrossRef]
  13. Lazarus, M.D.; Truong, M.; Douglas, P.; Selwyn, N. Artificial Intelligence and Clinical Anatomical Education: Promises and Perils. Anat. Sci. Educ. 2022; Early View. [Google Scholar] [CrossRef]
  14. Potočnik, J.; Foley, S.; Thomas, E. Current and Potential Applications of Artificial Intelligence in Medical Imaging Practice: A Narrative Review. J. Med. Imaging Radiat. Sci. 2023, 54, 376–385. [Google Scholar] [CrossRef]
  15. Deng, Y.; Wang, L.; Zhao, C.; Tang, S.; Cheng, X.; Deng, H.-W.; Zhou, W. A Deep Learning-Based Approach to Automatic Proximal Femur Segmentation in Quantitative CT Images. Med. Biol. Eng. Comput. 2022, 60, 1417–1429. [Google Scholar] [CrossRef] [PubMed]
  16. Krithika Alias AnbuDevi, M.; Suganthi, K. Review of Semantic Segmentation of Medical Images Using Modified Architectures of UNET. Diagnostics 2022, 12, 3064. [Google Scholar] [CrossRef]
  17. Trattner, S.; Pearson, G.D.N.; Chin, C.; Cody, D.D.; Gupta, R.; Hess, C.P.; Kalra, M.K.; Kofler, J.M.; Krishnam, M.S.; Einstein, A.J. Standardization and Optimization of CT Protocols to Achieve Low Dose. J. Am. Coll. Radiol. 2014, 11, 271–278. [Google Scholar] [CrossRef]
  18. Alzain, A.F.; Elhussein, N.; Fadulelmulla, I.A.; Ahmed, A.M.; Elbashir, M.E.; Elamin, B.A. Common Computed Tomography Artifact: Source and Avoidance. Egypt. J. Radiol. Nucl. Med. 2021, 52, 151. [Google Scholar] [CrossRef]
  19. Noda, C.; Ambale Venkatesh, B.; Wagner, J.D.; Kato, Y.; Ortman, J.M.; Lima, J.A.C. Primer on Commonly Occurring MRI Artifacts and How to Overcome Them. RadioGraphics 2022, 42, E102–E103. [Google Scholar] [CrossRef]
  20. Ahmad, Z.; Rahim, S.; Zubair, M.; Abdul-Ghafar, J. Artificial Intelligence (AI) in Medicine, Current Applications and Future Role with Special Emphasis on Its Potential and Promise in Pathology: Present and Future Impact, Obstacles Including Costs and Acceptance among Pathologists, Practical and Philosophical Considerations. A Comprehensive Review. Diagn. Pathol. 2021, 16, 24. [Google Scholar] [CrossRef]
  21. Schutera, M.; Rettenberger, L.; Pylatiuk, C.; Reischl, M. Methods for the Frugal Labeler: Multi-Class Semantic Segmentation on Heterogeneous Labels. PLoS ONE 2022, 17, e0263656. [Google Scholar] [CrossRef]
  22. Brady, A.; Laoide, R.Ó.; McCarthy, P.; McDermott, R. Discrepancy and Error in Radiology: Concepts, Causes and Consequences. Ulster Med. J. 2012, 81, 3–9. [Google Scholar]
  23. Liang, X.; Nguyen, D.; Jiang, S. Generalizability Issues with Deep Learning Models in Medicine and Their Potential Solutions: Illustrated with Cone-Beam Computed Tomography (CBCT) to Computed Tomography (CT) Image Conversion. Mach. Learn. Sci. Technol. 2020, 2, 015007. [Google Scholar] [CrossRef]
  24. Sapci, A.H.; Sapci, H.A. Artificial Intelligence Education and Tools for Medical and Health Informatics Students: Systematic Review. JMIR Med. Educ. 2020, 6, e19285. [Google Scholar] [CrossRef] [PubMed]
  25. Diaz, O.; Kushibar, K.; Osuala, R.; Linardos, A.; Garrucho, L.; Igual, L.; Radeva, P.; Prior, F.; Gkontra, P.; Lekadir, K. Data Preparation for Artificial Intelligence in Medical Imaging: A Comprehensive Guide to Open-Access Platforms and Tools. Phys. Med. 2021, 83, 25–37. [Google Scholar] [CrossRef]
  26. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2021, 14, 4–15. [Google Scholar] [CrossRef] [PubMed]
  27. Yin, G.; Zhang, L.; Dai, T. Application and Visualization of Human 3D Anatomy Teaching for Healthy People Based on a Hybrid Network Model. J. Healthc. Eng. 2022, 2022, 1–6. [Google Scholar] [CrossRef]
  28. Xia, Q.; Du, M.; Li, B.; Hou, L.; Chen, Z. Interdisciplinary Collaboration Opportunities, Challenges, and Solutions for Artificial Intelligence in Ultrasound. Curr. Med. Imaging Rev. 2022, 18, 1046–1051. [Google Scholar] [CrossRef]
  29. Gillmann, C.; Smit, N.N.; Groller, E.; Preim, B.; Vilanova, A.; Wischgoll, T. Ten Open Challenges in Medical Visualization. IEEE Comput. Grap. Appl. 2021, 41, 7–15. [Google Scholar] [CrossRef]
  30. Al-Naser, Y.A. The Impact of Artificial Intelligence on Radiography as a Profession: A Narrative Review. J. Med. Imaging Radiat. Sci. 2023, 54, 162–166. [Google Scholar] [CrossRef]
  31. Kawamoto, M.; Kamiya, N.; Zhou, X.; Kato, H.; Hara, T.; Fujita, H. Simultaneous Learning of Erector Spinae Muscles for Automatic Segmentation of Site-Specific Skeletal Muscles in Body CT Images. IEEE Access 2023, 1. [Google Scholar] [CrossRef]
  32. Kamiya, N.; Kume, M.; Zheng, G.; Zhou, X.; Kato, H.; Chen, H.; Muramatsu, C.; Hara, T.; Miyoshi, T.; Matsuo, M.; et al. Automated Recognition of Erector Spinae Muscles and Their Skeletal Attachment Region via Deep Learning in Torso CT Images. In Computational Methods and Clinical Applications in Musculoskeletal Imaging; Vrtovec, T., Yao, J., Zheng, G., Pozo, J.M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 11404, pp. 1–10. ISBN 978-3-030-11165-6. [Google Scholar]
  33. Tulk Jesso, S.; Kelliher, A.; Sanghavi, H.; Martin, T.; Henrickson Parker, S. Inclusion of Clinicians in the Development and Evaluation of Clinical Artificial Intelligence Tools: A Systematic Literature Review. Front. Psychol. 2022, 13, 830345. [Google Scholar] [CrossRef] [PubMed]
  34. Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Navarrete-Welton, A.; Sankaranarayanan, G.; Brunt, L.M.; Okrainec, A.; et al. Artificial Intelligence for Intraoperative Guidance: Using Semantic Segmentation to Identify Surgical Anatomy During Laparoscopic Cholecystectomy. Ann. Surg. 2022, 276, 363–369. [Google Scholar] [CrossRef] [PubMed]
  35. Jin, C.; Udupa, J.K.; Zhao, L.; Tong, Y.; Odhner, D.; Pednekar, G.; Nag, S.; Lewis, S.; Poole, N.; Mannikeri, S.; et al. Object Recognition in Medical Images via Anatomy-Guided Deep Learning. Med. Image Anal. 2022, 81, 102527. [Google Scholar] [CrossRef] [PubMed]
  36. Song, Y.; Ren, S.; Lu, Y.; Fu, X.; Wong, K.K.L. Deep Learning-Based Automatic Segmentation of Images in Cardiac Radiography: A Promising Challenge. Comput. Methods Programs Biomed. 2022, 220, 106821. [Google Scholar] [CrossRef]
  37. Wang, J.; Lv, Y.; Wang, J.; Ma, F.; Du, Y.; Fan, X.; Wang, M.; Ke, J. Fully Automated Segmentation in Temporal Bone CT with Neural Network: A Preliminary Assessment Study. BMC Med. Imaging 2021, 21, 166. [Google Scholar] [CrossRef] [PubMed]
  38. Cheng, P.; Yang, Y.; Yu, H.; He, Y. Automatic Vertebrae Localization and Segmentation in CT with a Two-Stage Dense-U-Net. Sci. Rep. 2021, 11, 22156. [Google Scholar] [CrossRef] [PubMed]
  39. Kim, S.; Bae, W.; Masuda, K.; Chung, C.; Hwang, D. Semi-Automatic Segmentation of Vertebral Bodies in MR Images of Human Lumbar Spines. Appl. Sci. 2018, 8, 1586. [Google Scholar] [CrossRef]
  40. Fu, Y.; Liu, S.; Li, H.H.; Yang, D. Automatic and Hierarchical Segmentation of the Human Skeleton in CT Images. Phys. Med. Biol. 2017, 62, 2812–2833. [Google Scholar] [CrossRef]
  41. Jakubicek, R.; Chmelik, J.; Jan, J. Vertebrae Segmentation in 3D CT Data: A Review of Methods and Evaluation Approaches. Curr. Med. Imaging Rev. 2018, 14, 853–866. [Google Scholar] [CrossRef]
  42. Qadri, S.F.; Lin, H.; Shen, L.; Ahmad, M.; Qadri, S.; Khan, S.; Khan, M.; Zareen, S.S.; Akbar, M.A.; Bin Heyat, M.B.; et al. CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int. J. Intell. Syst. 2023, 2023, 1–14. [Google Scholar] [CrossRef]
  43. Saeed, M.U.; Dikaios, N.; Dastgir, A.; Ali, G.; Hamid, M.; Hajjej, F. An Automated Deep Learning Approach for Spine Segmentation and Vertebrae Recognition Using Computed Tomography Images. Diagnostics 2023, 13, 2658. [Google Scholar] [CrossRef] [PubMed]
  44. Motsinger, S.K. Complete Anatomy. Available online: https://3d4medical.com/ (accessed on 30 August 2023).
  45. Nair, R.R.; Lindsey, A. Student Perception on Integration of 3D Complete Anatomy Software Application in Medical Curriculum. FASEB J. 2020, 34, 1. [Google Scholar] [CrossRef]
  46. Mekov, E.; Miravitlles, M.; Petkov, R. Artificial Intelligence and Machine Learning in Respiratory Medicine. Expert Rev. Respir. Med. 2020, 14, 559–564. [Google Scholar] [CrossRef] [PubMed]
  47. Rezayi, S.; R Niakan Kalhori, S.; Saeedi, S. Effectiveness of Artificial Intelligence for Personalized Medicine in Neoplasms: A Systematic Review. BioMed Res. Int. 2022, 2022, 1–34. [Google Scholar] [CrossRef] [PubMed]
  48. Kart, T.; Fischer, M.; Küstner, T.; Hepp, T.; Bamberg, F.; Winzeck, S.; Glocker, B.; Rueckert, D.; Gatidis, S. Deep Learning-Based Automated Abdominal Organ Segmentation in the UK Biobank and German National Cohort Magnetic Resonance Imaging Studies. Investig. Radiol. 2021, 56, 401–408. [Google Scholar] [CrossRef]
  49. Lenchik, L.; Heacock, L.; Weaver, A.A.; Boutin, R.D.; Cook, T.S.; Itri, J.; Filippi, C.G.; Gullapalli, R.P.; Lee, J.; Zagurovskaya, M.; et al. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad. Radiol. 2019, 26, 1695–1706. [Google Scholar] [CrossRef]
  50. Thomas, M.F.; Kofler, F.; Grundl, L.; Finck, T.; Li, H.; Zimmer, C.; Menze, B.; Wiestler, B. Improving Automated Glioma Segmentation in Routine Clinical Use Through Artificial Intelligence-Based Replacement of Missing Sequences With Synthetic Magnetic Resonance Imaging Scans. Investig. Radiol. 2022, 57, 187–193. [Google Scholar] [CrossRef]
  51. Bücking, T.M.; Hill, E.R.; Robertson, J.L.; Maneas, E.; Plumb, A.A.; Nikitichev, D.I. From Medical Imaging Data to 3D Printed Anatomical Models. PLoS ONE 2017, 12, e0178540. [Google Scholar] [CrossRef]
  52. Frajhof, L.; Borges, J.; Hoffmann, E.; Lopes, J.; Haddad, R. Virtual Reality, Mixed Reality and Augmented Reality in Surgical Planning for Video or Robotically Assisted Thoracoscopic Anatomic Resections for Treatment of Lung Cancer. J. Vis. Surg. 2018, 4, 143. [Google Scholar] [CrossRef]
  53. Hwang, S.; Song, Y.; Kim, J. Evaluation of AI-Assisted Telemedicine Service Using a Mobile Pet Application. Appl. Sci. 2021, 11, 2707. [Google Scholar] [CrossRef]
  54. Kikinis, R.; Pieper, S.D.; Vosburgh, K.G. 3D Slicer: A Platform for Subject-Specific Image Analysis, Visualization, and Clinical Support. In Intraoperative Imaging and Image-Guided Therapy; Jolesz, F.A., Ed.; Springer: New York, NY, USA, 2014; pp. 277–289. ISBN 978-1-4614-7656-6. [Google Scholar]
  55. Wasserthal, J.; Breit, H.-C.; Meyer, M.T.; Pradella, M.; Hinck, D.; Sauter, A.W.; Heye, T.; Boll, D.; Cyriac, J.; Yang, S.; et al. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiol. Artif. Intell. 2023, 5, e230024. [Google Scholar] [CrossRef] [PubMed]
  56. Wasserthal, J. TotalSegmentator. 2023. Available online: https://github.com/wasserth/TotalSegmentator (accessed on 12 January 2024).
Figure 1. Semantically segmented vertebral column serving as proof of concept of the presented methodology (25 bones in total, from C1 to the sacrum combined with the coccyx).
Figure 2. Reference digital models of vertebrae used during the validation methodology, from the Complete Anatomy software (ver. 10.0.1) [44].
Figure 3. Illustrative examples of artifacts in anatomical segmentation. (A) Improper registration: A segment of the L4 vertebra is inaccurately identified as part of the L5 vertebra. (B) Omission of critical regions: This artifact represents the absence of specific anatomical regions that should have been included in the segmentation. (C) False registration: Soft tissues are incorrectly registered as constitutive elements of the L5 vertebra, compromising the integrity of the segmented structure.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Edelmers, E.; Kazoka, D.; Bolocko, K.; Sudars, K.; Pilmane, M. Automatization of CT Annotation: Combining AI Efficiency with Expert Precision. Diagnostics 2024, 14, 185. https://doi.org/10.3390/diagnostics14020185


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
