Clin Orthop Surg. 2024 Feb;16(1):113-124. English.
Published online Jan 15, 2024.
Copyright © 2024 by The Korean Orthopaedic Association
Original Article

Automatic Segmentation and Radiologic Measurement of Distal Radius Fractures Using Deep Learning

Sanglim Lee, MD, Kwang Gi Kim, PhD,* Young Jae Kim, PhD,* Ji Soo Jeon, BS,* Gi Pyo Lee, BS,* Kyung-Chan Kim, MD and Suk Ha Jeon, MD
    • Department of Orthopedic Surgery, Inje University Sanggye Paik Hospital, Seoul, Korea.
    • *Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, Korea.
    • Department of Orthopaedic Surgery, National Medical Center, Seoul, Korea.
Received April 27, 2023; Revised September 10, 2023; Accepted October 24, 2023.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background

Recently, deep learning techniques have been used in medical imaging studies. We present an algorithm that measures radiologic parameters of distal radius fractures using a deep learning technique and compares the predicted parameters with those measured by an orthopedic hand surgeon.

Methods

We collected anteroposterior (AP) and lateral X-ray images of 634 wrists in 624 patients with distal radius fractures treated conservatively with a follow-up of at least 2 months. We allocated 507 AP and 507 lateral images to the training set (80% of the images were used to train the model, and 20% were utilized for validation) and 127 AP and 127 lateral images to the test set. The margins of the radius and ulna were annotated for ground truth, and the scaphoid in the lateral views was annotated in the box configuration to determine the volar side of the images. Radius segmentation was performed using attention U-Net, and the volar/dorsal side was identified using a detection and classification model based on RetinaNet. The proposed algorithm measures the radial inclination, dorsal or volar tilt, and radial height using index axes and points derived from the segmented radius and ulna.

Results

The segmentation model for the radius exhibited an accuracy of 99.98% and a Dice similarity coefficient (DSC) of 98.07% for AP images, and an accuracy of 99.75% and a DSC of 94.84% for lateral images. The segmentation model for the ulna showed an accuracy of 99.84% and a DSC of 96.48%. Based on the comparison of the radial inclinations measured by the algorithm and the manual method, the Pearson correlation coefficient was 0.952, and the intraclass correlation coefficient was 0.975. For dorsal/volar tilt, the correlation coefficient was 0.940, and the intraclass correlation coefficient was 0.968. For radial height, it was 0.768 and 0.868, respectively.

Conclusions

The deep learning-based algorithm demonstrated excellent segmentation of the distal radius and ulna in AP and lateral radiographs of the wrist with distal radius fractures and afforded automatic measurements of radiologic parameters.

Keywords
Distal radius fractures; Computer-assisted radiographic image interpretation; Deep learning

Distal radial fractures are the most common fractures of the upper extremities. The treatment of these fractures depends on the fracture pattern and the age of the patient. Surgery is recommended for displacements > 10°–15° of dorsal tilt, < 15° of radial inclination, > 2–5 mm of ulnar shortening, and > 2 mm of intraarticular step-off.1) Research suggests that re-displacement or instability following closed reduction of distal radius fractures is related to the initial displacement.2, 3, 4, 5) The most common method for comparing the degree of displacement and alignment in distal radius fractures involves measuring radiologic parameters such as radial inclination, dorsal angulation, radial height (or radial length), and ulnar variance.6) These parameters are measured manually by clinicians. In clinical studies, measuring several angulations or lengths in hundreds or thousands of patients can be tedious and time-consuming.

Recently, deep learning techniques have been employed in medical imaging studies. In orthopedic surgery, these techniques assist in fracture detection and the automated measurement of radiologic parameters in X-ray imaging. Most automated measurements using deep learning have been reported in studies focusing on the spine or hip.7, 8, 9, 10, 11, 12) Deep learning-based detection or classification of distal radius fractures has also been reported.13, 14, 15) In clinical settings, however, nondisplaced distal radius fractures are diagnosed and managed not solely on radiologic findings, but also on physical examination findings such as tenderness and swelling. Hence, deep learning aimed only at deciding whether a fracture is present offers limited clinical benefit. Conversely, it can take over time-consuming, repetitive measurement tasks and thereby assist clinicians.

In our research, we focused on the automated measurement of radiologic parameters in distal radius fractures using deep learning techniques. Suna et al.16) also focused on the automated computation of radiologic parameters using a deep learning technique. However, their study primarily concentrated on segmentation techniques and investigated distal radius fractures with less severe deformations. We present a more detailed algorithm capable of measuring the radiologic parameters of distal radius fractures with more severe deformities through automated segmentation using deep learning techniques. Additionally, we compare the predicted angles with those measured by an orthopedic hand surgeon.

METHODS

This study received approval from the Institutional Review Board of Inje University Sanggye Paik Hospital (No. SGPAIK 2021-07-009), and informed consent was waived, as it solely utilized wrist X-rays and limited epidemiological data, including age and sex.

Image Acquisition

We collected anteroposterior (AP) and lateral X-ray images of the wrist from patients with distal radius fractures from October 2002 to March 2019. Inclusion criteria encompassed patients with distal radius fractures treated conservatively with a minimum follow-up of 2 months. Exclusion criteria included any signs of prior surgeries, anatomical deformities, involvement of inflammatory arthritis at the wrist, or being under the age of 17. A total of 634 wrist images from 624 patients were included, with 78% being female. The mean age of the patients was 61 years, ranging from 17 to 94 years. Bilateral distal radius fractures were detected in 10 patients, with 5 experiencing simultaneous bilateral wrist fractures and 5 fractures occurring at different times. A total of 634 AP and 634 lateral radiographs were saved in the digital imaging and communications in medicine (DICOM) format, taken immediately after the fracture and 2 months after injury. We allocated 507 AP and 507 lateral images to the training set (80% of the images were used for model training and 20% were used for validation), while 127 AP and 127 lateral images were reserved for the test set. We used Image J (National Institutes of Health) to annotate the margins of the radius and ulna in polygon form, with confirmation by the first author, who is an orthopedic hand surgeon (SL). The confirmed images were used to generate mask images for ground truth in deep learning. Scaphoids in the lateral views were annotated using a box configuration to determine the volar side of the images.

Experiment Environment

The deep learning system used two NVIDIA GeForce RTX 2080 Ti 11GB GPUs (NVIDIA) and operated on Ubuntu 16.04.6. We used Python language (version 3.6.12) along with Tensorflow-GPU 1.13.1, Cuda 10.1, cuDNN 7.6.5, and Keras 2.3.0 for training and testing deep learning networks. Image processing and algorithm development were carried out using OpenCV-python 4.1.1, Pydicom 2.1.2, read-roi 1.6.0, and scipy 1.4.1.

Image Pre-processing

X-ray images were transformed into 8-bit portable network graphic (PNG) files for preprocessing and resized to 512 × 512 pixels. To enhance image uniformity, histogram matching was performed on the PNG images. Contrast-limited adaptive histogram equalization was performed to accentuate the contrast of the PNG images. Data augmentation included random rotation (–30° to 30°) and cropping (600 × 600–1,200 × 1,200), yielding a total of 5,070 AP and 5,070 lateral images and effectively increasing the dataset tenfold.
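The study performed histogram matching and contrast-limited adaptive histogram equalization (CLAHE, available in OpenCV as `cv2.createCLAHE`). As a hedged, dependency-free illustration of the underlying idea only, a plain (non-adaptive) histogram equalization of an 8-bit image can be sketched as follows; the function name and list-of-rows representation are ours, not from the study:

```python
def equalize_8bit(img):
    """Plain histogram equalization of an 8-bit grayscale image given as a
    list of rows of pixel values (0-255). CLAHE, as used in the study,
    additionally tiles the image and clips the histogram before equalizing."""
    flat = [p for row in img for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function over gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(n - cdf_min, 1)
    # Look-up table spreading the occupied gray levels over the full range.
    lut = [round((c - cdf_min) / denom * 255) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

Mapping each pixel through a cumulative-histogram look-up table stretches the contrast so that bone/soft-tissue boundaries become more uniform across radiographs acquired with different exposure settings.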

Image Segmentation

The segmentation of the radius and ulna was achieved using an attention U-Net, employing the Adam optimizer and a Dice coefficient loss function. The training process comprised up to 200 epochs, a batch size of 16, and a learning rate of 1e–3 (Fig. 1). When the loss function value did not decrease for 10 epochs, the training was stopped ("early stopping"); this occurred after 63 epochs on AP views and 58 epochs on lateral views. In lateral images, we identified the volar/dorsal side using a detection and classification model based on RetinaNet with the Adam optimizer and focal loss function (epochs = 200, batch size = 4, learning rate = 1e–5).
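The Dice coefficient loss named above rewards overlap between the predicted and ground-truth masks. A minimal sketch over flattened binary masks (plain lists here; the study's Keras implementation would operate on tensors, and the smoothing constant is our assumption):

```python
def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity between two flat binary masks (lists of 0/1).
    eps avoids division by zero for empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)

def dice_loss(pred, truth):
    # Minimized during training; 0 means perfect overlap.
    return 1.0 - dice_coefficient(pred, truth)
```

Unlike pixel-wise accuracy, the Dice loss is insensitive to the large true-negative background that dominates wrist radiographs, which is why it suits bone segmentation.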

Fig. 1
(A) Anteroposterior (left) and lateral (right) X-ray of the wrist in the test set. (B) Predicted masks of the radius (left) and ulna (middle) on anteroposterior X-ray and the radius (right) on lateral X-ray.

Automatic Measurement of Radial Inclination in AP Radiographs

The calculation of the longitudinal axis of the radial shaft was executed as follows: the point located at the distal 30% of the radial shaft axis was designated as the distal center point, while the point positioned 60 pixels proximal to the distal central point was designated as the proximal center point (Fig. 2A). The radius was rotated until the radial axis line between the proximal and distal center points coincided with the Y-axis (longitudinal axis) of the image (Fig. 2B).
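The alignment step described above amounts to finding the angle between the proximal-to-distal center-point axis and the image Y-axis and rotating by its negative. A hedged sketch in pixel coordinates (function names are ours; the study's OpenCV-based implementation is not reproduced here):

```python
import math

def angle_from_y_axis(proximal, distal):
    """Signed angle in degrees between the radial axis line
    (proximal -> distal center point) and the image Y-axis."""
    dx = distal[0] - proximal[0]
    dy = distal[1] - proximal[1]
    return math.degrees(math.atan2(dx, dy))

def rotate_point(p, center, deg):
    """Rotate point p about center by deg degrees (counterclockwise)."""
    r = math.radians(deg)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(r) - y * math.sin(r),
            center[1] + x * math.sin(r) + y * math.cos(r))
```

Rotating the whole image by `-angle_from_y_axis(...)` makes the radial axis vertical, so subsequent landmark comparisons reduce to simple coordinate arithmetic.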

Fig. 2
(A) Longitudinal axis of the radial shaft drawn using proximal and distal center points. (B) Radius rotated until the radius axis line coincided with the Y-axis of the image.

To identify the radial and ulnar sides, we applied the horizontal intensity profile employed previously by Reyes-Aldasoro et al.17) to measure the thickness of trabecular and cortical regions in the third metacarpal bone. Intensity was sampled horizontally along the x-axis (transverse axis) line (Fig. 3A). Because an abrupt intensity change occurs at the boundary between bone and soft tissue, intensity changes of more than 80 gray levels at 5-pixel intervals were counted on both sides of the distal center point, and the side with the smaller count was identified as the radial side (Fig. 3B).
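The side-detection rule above can be sketched directly from its description; thresholds match the text, but the function names and the left/right framing are our illustrative assumptions:

```python
def count_abrupt_changes(profile, threshold=80, step=5):
    """Count jumps of more than `threshold` gray levels sampled at
    `step`-pixel intervals along a horizontal intensity profile."""
    return sum(
        1
        for i in range(0, len(profile) - step, step)
        if abs(profile[i + step] - profile[i]) > threshold
    )

def pick_radial_side(left_profile, right_profile):
    """The side with fewer abrupt bone/soft-tissue transitions is taken as
    the radial side: the ulna does not cross the profile on that side."""
    if count_abrupt_changes(left_profile) <= count_abrupt_changes(right_profile):
        return "left"
    return "right"
```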

Fig. 3
(A) Intensity horizontally detected from –120 pixels to +120 pixels through the X-axis line intersecting the distal center point. (B) Number of abrupt intensity changes counted at both sides of the distal center point to identify the radial (upper) and ulnar (lower) sides.

Using the illustrated rectangular box, we determined a radial styloid point and a distal ulnar point (Fig. 4A-D). We then identified a vertical point on the line that passes through the distal ulnar point and is perpendicular to the radial axis line. The radial inclination was subsequently measured from the angle formed by these three points (Fig. 4E). If the radial styloid point is located proximal to the distal ulnar point, the radial inclination is assigned a negative value.
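The angle formed by three landmarks reduces to the angle at a vertex between two rays. A minimal sketch (the vertex/ray assignment for radial inclination, i.e., the angle at the distal ulnar point toward the radial styloid and vertical points, is our reading of Fig. 4E; the sign is handled separately by comparing proximal/distal positions):

```python
import math

def angle_at(vertex, a, b):
    """Unsigned angle in degrees at `vertex` between rays vertex->a and
    vertex->b, computed from the dot product of the two direction vectors."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp guards against tiny floating-point overshoot outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```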

Fig. 4
(A) Most distally and radially located points identified as a radial styloid point. (B) Rectangular box abutting the outer margin of the radius illustrated. (C) Most distal point of the ulnar margin of the box determined as a standard (sd) point. (D) Point featuring the least distance from the standard point determined as a distal ulnar point. (E) Radial inclination measured from the angle made by a radial styloid point, distal ulnar point, and vertical point.

Automatic Measurement of Dorsal or Volar Tilt in Lateral Radiographs

The radial axis line was determined using the same method as that used for AP radiographs. The most distal point on the radius was identified as distal point 1, and distal point 2 was determined from it on the opposite cortex (Fig. 5A). To identify the volar and dorsal sides in the lateral radiographs, the scaphoid was annotated in a box configuration, and the volar side was labeled (Fig. 5B). After identifying the volar and dorsal sides, the two distal points were designated the distal volar and distal dorsal points. A vertical point was determined on the line passing through the distal dorsal point, perpendicular to the radial axis line. Dorsal or volar tilt was measured as the angle between the distal dorsal, distal volar, and vertical points (Fig. 5C). If the tilt was dorsal, the angle had a positive value; if the tilt was volar, the angle had a negative value.

Fig. 5
(A) Most distal point on the radius determined as a distal point 1. Distal point 2 determined by the first point on the radius, wherein a straight line starting from a distal point 1 in the opposite direction is met. (B) Scaphoid annotated in box configuration and volar side was labeled using RetinaNet. (C) Dorsal or volar tilt measured from the angle made by a distal dorsal point, distal volar point, and vertical point.

Automatic Measurement of Radial Height in AP Radiographs

Using the illustrated rectangular box that runs parallel to the radial axis, the ulnar head point was identified as the most distal point of the ulnar head, excluding the ulnar styloid. This point was defined either as the location where the horizontal line of the rectangular box of the ulnar bone meets the most distal part of the ulnar head, excluding the ulnar styloid, or as the point where a tangent drawn from the ulnar styloid tip touches the ulnar head surface (Fig. 6A). Radial height was measured as the distance between two lines passing through the ulnar head point and the radial styloid point, each perpendicular to the radial axis line (Fig. 6B). If the radial styloid point was located proximal to the ulnar head point, the radial height had a negative value.
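Because both reference lines are perpendicular to the radial axis, the distance between them equals the difference of the two landmarks' projections onto the axis direction. A hedged sketch (function name and the unit-vector convention are ours):

```python
def radial_height_px(styloid, ulnar_head, axis_unit):
    """Signed radial height in pixels: the difference between the
    projections of the radial styloid point and the ulnar head point onto
    the radial axis direction (`axis_unit` is a unit vector pointing
    distally). Negative when the styloid lies proximal to the ulnar head."""
    proj = lambda p: p[0] * axis_unit[0] + p[1] * axis_unit[1]
    return proj(styloid) - proj(ulnar_head)
```

Converting the pixel distance to millimeters would additionally require the radiograph's pixel spacing from the DICOM metadata.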

Fig. 6
(A) Ulnar head point defined as the point where the horizontal line of the rectangular box of the ulnar bone meets the most distal part, excluding the ulnar styloid, or as the point where the tangent from the ulnar styloid tip touches down on the ulnar head surface. (B) Algorithm measured the radial height, radial inclination, and dorsal tilt.

Evaluation of Segmentation and Statistical Analysis of Predicted Angles

Pixels in the segmented radius and ulna were defined as positive pixels, while others were classified as negative pixels. By comparing the 254 ground truth images and those predicted using deep learning, pixels were categorized into four fundamental indicators of a confusion matrix: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). The performance of the segmentation model was evaluated by calculating accuracy, sensitivity, specificity, and Dice similarity coefficient (DSC). Recall and mean average precision were used to evaluate the classification model for the volar/dorsal side.

Accuracy = (TP + TN) / (TP + FP + FN + TN); Sensitivity (recall) = TP / (TP + FN); Specificity = TN / (FP + TN); DSC = 2TP / (2TP + FP + FN); Precision = TP / (TP + FP)
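These definitions translate directly into code. A minimal sketch over flattened binary masks (plain lists for illustration; the zero-division guards are our addition):

```python
def segmentation_metrics(pred, truth):
    """Confusion-matrix metrics for two flat binary masks (lists of 0/1),
    matching the formulas above: accuracy, sensitivity (recall),
    specificity, Dice similarity coefficient, and precision."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    n = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (fp + tn) if fp + tn else 0.0,
        "dsc": 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }
```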

The algorithm measured radial inclinations, dorsal/volar tilts, and radial heights using predicted images. These measurements were compared to those obtained manually through paired t-tests or Wilcoxon signed-rank tests (selected based on the results of a normality test). Intraclass correlation coefficient (ICC) analysis was also conducted, and the results were plotted as scatter and Bland-Altman plots. Using the angles predicted using deep learning, correlations among radiologic parameters were analyzed. Moreover, correlations between the initial parameters and values at the 2-month follow-up were analyzed. A p-value less than 0.05 was considered statistically significant.
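Of the statistics listed, Pearson's correlation coefficient has the simplest closed form; a dependency-free sketch is shown below for reference (the study's actual analysis, including ICC and the normality-dependent test selection, would be done with a statistical package and is not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```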

RESULTS

Performance of Segmentation

Our deep learning segmentation model for the radius exhibited an accuracy of 99.98% and a DSC of 98.07% for AP images and an accuracy of 99.75% and a DSC of 94.84% for lateral images. The segmentation model for the ulna showed an accuracy of 99.84% and a DSC of 96.48%. The accuracy, sensitivity, specificity, and DSC of AP radiographs were higher than those of the lateral views (Table 1). To determine the dorsal or volar side using annotation of the scaphoid on the lateral radiographs, the recall and mean average precision of 254 images were 0.996 and 0.992, respectively.

Table 1
Performances of Segmentation of the Radius and Ulna Using Deep Learning

Performance of Automatic Measurement

Comparing the radial inclinations measured by the algorithm with those measured manually, the Pearson correlation coefficient was 0.952 and the ICC was 0.975 (Table 2, Fig. 7). For dorsal/volar tilt, the correlation coefficient was 0.940 and the ICC was 0.968 (Fig. 8). The correlation coefficient and ICC for radial height were smaller than those for the other two parameters (Fig. 9). The correlation coefficients between radial inclination and dorsal tilt, calculated using angles predicted by deep learning and by manual measurement, were –0.284 and –0.371, respectively. The scatter plot showed a similarly weak negative relationship (Fig. 10).

Fig. 7
Bland-Altman (A) and scatter plot (B) for radial inclinations (RI) measured by algorithms and manually. Manual: angles measured by manual measurement, Auto: angles measured by deep learning, SD: standard deviation.

Fig. 8
Bland-Altman (A) and scatter plot (B) for dorsal tilts (DT) measured by algorithms and manually. Manual: angles measured by manual measurement, Auto: angles measured by deep learning, SD: standard deviation.

Fig. 9
Bland-Altman (A) and scatter plot (B) for radial heights (RH) measured by algorithms and manually. Manual: angles measured by manual measurement, Auto: angles measured by deep learning, SD: standard deviation.

Fig. 10
Scatter plots of radial inclinations (RI) and dorsal tilts (DT) predicted by deep learning (A) and manual measurement (B). Auto: angles measured by deep learning, Manual: angles measured by manual measurement.

Table 2
Comparison of Radiologic Parameters Measured Manually and Predicted Using Deep Learning

The initial radial inclinations and those after 2 months were not statistically significantly different for each method, and they exhibited a strong positive relationship (Table 3, Fig. 11). Dorsal tilts after 2 months were smaller than the initial tilt angles (p < 0.001); however, the initial tilt and angles after 2 months featured a weak positive relationship (Fig. 12). Radial height exhibited a similar trend as radial inclination (Fig. 13).

Fig. 11
Scatter plots of initial and 2-month radial inclinations (RI) predicted by deep learning (A) and manual measurement (B). Auto: angles measured by deep learning, Manual: angles measured by manual measurement.

Fig. 12
Scatter plots of initial and 2-month dorsal tilts (DT) predicted by deep learning (A) and manual measurement (B). Auto: angles measured by deep learning, Manual: angles measured by manual measurement.

Fig. 13
Scatter plots of initial and 2-month radial heights (RH) predicted by deep learning (A) and manual measurement (B). Auto: angles measured by deep learning, Manual: angles measured by manual measurement.

Table 3
Comparison of Radiologic Parameters on Initial X-ray and at 2 Months Predicted Using Deep Learning and Measured Manually

DISCUSSION

Measuring the radiologic parameters of medical images is an important factor in planning treatment; however, it is tedious and time-consuming. Kreder et al.18) reported inter-rater ICCs of 0.74 for palmar tilt, 0.38 for radial angle, and 0.44 for radial length, and intraobserver ICCs of 0.71, 0.39, and 0.49, respectively, based on manual measurement of healed distal radius fractures. In our study, the ICC between the algorithm and the manual method was 0.975 for radial inclination, 0.968 for dorsal/volar tilt, and 0.868 for radial height. This implies that our automatic measurement was comparable to manual measurement by an experienced orthopedic surgeon.

The images in this study included various types of distal radius fractures, ranging from nondisplaced to severely comminuted or displaced fractures. Measuring radiologic parameters in cases with severely displaced distal radius fractures is challenging for less experienced students or residents because anatomical knowledge is necessary to distinguish the superimposed margins of the fractured radius. Our algorithm, however, excels in measuring even negative radial inclination or radial height, particularly when the radial styloid is positioned proximal to the articular surface of the ulnar head. Notably, Suna et al.16) also proposed automated computation of radiologic parameters using deep learning but avoided addressing severely collapsed fractures with negative radial inclination or radial heights.

We assumed that using a limited dataset would be sufficient for achieving satisfactory segmentation performance as the object of interest, the radius and ulna, is clearly visible. Indeed, our evaluation metrics yielded high performance values with 634 AP and 634 lateral radiographs. In comparison, similar studies such as Suna et al.’s16) used 90 AP and 93 lateral X-rays for radius and ulna segmentation, but relied on 1,833 radiographs from the Stanford ML group’s MURA dataset for forearm segmentation. Korez et al.9) trained their model with 242 images to measure the sagittal spinopelvic balance using 97 X-rays. Pei et al.,7) in their study on hip-knee angle measurement, utilized 676 images. Additionally, Rouzrokh et al.12) conducted research on acetabular inclination and version using 600 AP and 600 lateral radiographs. Considering these precedents, our study, which employed 634 AP and 634 lateral radiographs for measuring radiologic parameters of the distal radius through deep learning, is considered adequate.

We selected the attention U-Net because it is one of the most widely used segmentation algorithms, offering satisfactory performance and several advantages: it can take full images as input and produce pixel-wise outputs with a relatively simple structure and low computational cost, and it can achieve good performance even when trained with very few labeled images. For classification, we experimented with both Mask R-CNN and RetinaNet. Mask R-CNN generated high-quality segmentation masks for the scaphoid, which were unnecessary for our study, whereas RetinaNet is one of the best object detection algorithms and provided sufficient performance for our classification model.

Because the differences between radiologic parameters measured manually and those predicted using deep learning did not follow a normal distribution on the normality test, the Wilcoxon signed-rank test was used for comparison (Table 2). For the predicted radial inclinations versus those measured manually, the p-value (p = 0.004) suggests a statistical difference. However, their means, standard deviations, 95% confidence intervals, medians, and interquartile ranges were similar, and the Pearson correlation coefficient was 0.952, indicating an extremely strong association. Our statistical adviser confirmed the accuracy of the p-values: because the number of compared measurements was large (n = 254), the standard error became small enough to render the difference statistically significant, even though it is likely clinically insignificant. In contrast, the means, standard deviations, and 95% confidence intervals of the predicted and manually measured dorsal/volar tilts were similar, with a correlation coefficient of 0.94, and did not differ statistically. Additionally, in the scatter plots of predicted versus measured values, radial inclination and dorsal/volar tilt showed similarly correlated patterns. Therefore, despite the statistically significant p-value, the predicted radial inclinations were comparable to the measured values.

The comparison between the predicted mean radial heights and those measured manually showed no statistically significant differences. However, the Pearson correlation coefficient and ICC for radial height were smaller than those for radial inclination and dorsal tilt. Deep learning showed excellent performance in the segmentation of the radius and ulna; nonetheless, in cases involving displaced fractures or nonunion of the ulnar styloid base, the accuracy of locating the ulnar head point could be compromised.

The results of our study revealed a weak correlation between radial inclination and dorsal tilt. This can be attributed to the inclusion of various patterns of radius fractures in our study. Some cases featured minimal displacement, while others exhibited distal fragments that were dorsally displaced, tilted, or radially translated. Overall, the scatter plot depicting the relationship between radial inclination and dorsal tilt showed a scattered pattern, indicating a weak and inconsistent relationship.

Radial inclinations and height immediately after injury and after 2 months were not statistically significantly different and exhibited a strong positive relationship. Two explanations for this phenomenon are plausible: either radial inclination and height might not fully return to their normal values even after closed reduction or they might not be maintained in the reduced state within 2 months. Unfortunately, our study did not include radiographs acquired immediately after closed reduction, which makes ascertaining the exact cause challenging. However, the dorsal tilts after 2 months were smaller than the initial tilts, indicating that closed reduction had a beneficial effect on the dorsal angulation of the distal radius.

This study has several limitations. First, we only analyzed radiographs taken at the initial injury and 2 months after injury, which prevented us from assessing the effect of different degrees of closed reduction; radiographs acquired immediately after reduction often exhibited artifact shadows from the sugar-tong splint, hindering manual drawing of the margins of the distal radius and scaphoid. Second, the study included various types of distal radial fractures, ranging from nondisplaced to severely comminuted or displaced fractures. Third, data for this study were collected from a single hospital, and while we achieved excellent accuracy, external validation using data from different hospitals is necessary to assess the robustness of our algorithm. Fourth, the ground truth measurements were confirmed by a single hand surgeon, and interobserver variability was not evaluated, because only one physician was available to measure all patients' radiographs. The annotated margins were confirmed and all radiologic measurements were performed twice by one surgeon; to mitigate intraobserver variability, measurements that differed were repeated until an agreed-upon result was obtained. Fifth, our study did not consider clinical factors such as wrist range of motion, clinical scoring system results, and patient satisfaction, nor did it further analyze the clinical implications of the radiologic parameters of distal radius fractures. Combining clinical and radiologic data could lead to patient-specific treatment decision protocols using deep learning. Sixth, our study did not incorporate three-dimensional analysis using computed tomography (CT) or magnetic resonance imaging (MRI) to analyze the type or configuration of fractures; we relied solely on X-rays, which remain the primary imaging modality for measuring radiologic parameters in clinical practice. Ohs et al.19) reported a technique for automated segmentation of the fractured radius using a three-dimensional morphological geodesic active contours algorithm in high-resolution peripheral quantitative CT. We anticipate that future research will require CT or MRI to enable a more comprehensive analysis of distal radius fractures.

In summary, we present an algorithm that measures the radiologic parameters of distal radius fractures using a deep learning technique and that can be used to measure these parameters at large scale in investigations of distal radius fractures.

Notes

CONFLICT OF INTEREST: No potential conflict of interest relevant to this article was reported.

ACKNOWLEDGEMENTS

The authors appreciate the contribution of Hyung-Yung Lee (Department of Statistics, Pusan National University, Busan, Korea) who gave statistical advice and Jiyun Ha, RN (Inje University Sanggye Paik Hospital, Seoul, Korea), who collected radiological data.

References

    1. Del Pinal F, Jupiter JB, Rozental TD, Arora R, Nakamura T, Bain GI. Distal radius fractures. J Hand Surg Eur Vol 2022;47(1):12–23.
    2. Dias JJ, Wray CC, Jones JM. The radiological deformity of Colles’ fractures. Injury 1987;18(5):304–308.
    3. Altissimi M, Mancini GB, Azzara A, Ciaffoloni E. Early and late displacement of fractures of the distal radius: the prediction of instability. Int Orthop 1994;18(2):61–65.
    4. Hove LM, Solheim E, Skjeie R, Sorensen FK. Prediction of secondary displacement in Colles’ fracture. J Hand Surg Br 1994;19(6):731–736.
    5. Leone J, Bhandari M, Adili A, McKenzie S, Moro JK, Dunlop RB. Predictors of early and late instability following conservative treatment of extra-articular distal radius fractures. Arch Orthop Trauma Surg 2004;124(1):38–41.
    6. Lalone EA, Grewal R, King GJ, MacDermid JC. A structured review addressing the use of radiographic measures of alignment and the definition of acceptability in patients with distal radius fractures. Hand (N Y) 2015;10(4):621–638.
    7. Pei Y, Yang W, Wei S, et al. Automated measurement of hip-knee-ankle angle on the unilateral lower limb X-rays using deep learning. Phys Eng Sci Med 2021;44(1):53–62.
    8. Galbusera F, Niemeyer F, Wilke HJ, et al. Fully automated radiological analysis of spinal disorders and deformities: a deep learning approach. Eur Spine J 2019;28(5):951–960.
    9. Korez R, Putzier M, Vrtovec T. A deep learning tool for fully automated measurements of sagittal spinopelvic balance from X-ray images: performance evaluation. Eur Spine J 2020;29(9):2295–2305.
    10. Yang W, Ye Q, Ming S, et al. Feasibility of automatic measurements of hip joints based on pelvic radiography and a deep learning algorithm. Eur J Radiol 2020;132:109303.
    11. Schwartz JT, Cho BH, Tang P, et al. Deep learning automates measurement of spinopelvic parameters on lateral lumbar radiographs. Spine (Phila Pa 1976) 2021;46(12):E671–E678.
    12. Rouzrokh P, Wyles CC, Philbrick KA, et al. A deep learning tool for automated radiographic measurement of acetabular component inclination and version after total hip arthroplasty. J Arthroplasty 2021;36(7):2510–2517.
    13. Tobler P, Cyriac J, Kovacs BK, et al. AI-based detection and classification of distal radius fractures using low-effort data labeling: evaluation of applicability and effect of training set size. Eur Radiol 2021;31(9):6816–6824.
    14. Bluthgen C, Becker AS, Vittoria de Martini I, Meier A, Martini K, Frauenfelder T. Detection and localization of distal radius fractures: deep learning system versus radiologists. Eur J Radiol 2020;126:108925.
    15. Suzuki T, Maki S, Yamazaki T, et al. Detecting distal radial fractures from wrist radiographs using a deep convolutional neural network with an accuracy comparable to hand orthopedic surgeons. J Digit Imaging 2022;35(1):39–46.
    16. Suna A, Davidson A, Weil Y, Joskowicz L. Automated computation of radiographic parameters of distal radial metaphyseal fractures in forearm X-rays. Int J Comput Assist Radiol Surg 2023;18(12):2179–2189.
    17. Reyes-Aldasoro CC, Ngan KH, Ananda A, d’Avila Garcez A, Appelboam A, Knapp KM. Geometric semi-automatic analysis of radiographs of Colles’ fractures. PLoS One 2020;15(9):e0238926.
    18. Kreder HJ, Hanel DP, McKee M, Jupiter J, McGillivary G, Swiontkowski MF. X-ray film measurements for healed distal radius fractures. J Hand Surg Am 1996;21(1):31–39.
    19. Ohs N, Collins CJ, Tourolle DC, et al. Automated segmentation of fractured distal radii by 3D geodesic active contouring of in vivo HR-pQCT images. Bone 2021;147:115930.
