Application of Deep Learning Methods in a Moroccan Ophthalmic Center: Analysis and Discussion

Diabetic retinopathy (DR) remains one of the world's most frequent eye diseases and a leading cause of vision loss among working-aged individuals. Hemorrhages and exudates are examples of signs of DR. Artificial intelligence (AI), particularly deep learning (DL), is poised to impact nearly every aspect of human life and to gradually transform medical practice. Insight into the condition of the retina is becoming more accessible thanks to major advancements in diagnostic technology. AI approaches can assess large morphological datasets derived from digital images in a rapid and noninvasive manner, and computer-aided diagnosis tools for the automatic detection of early-stage DR signs will ease the pressure on clinicians. In this work, we apply two methods to the color fundus images taken on-site at the Cheikh Zaïd Foundation's Ophthalmic Center in Rabat to detect both exudates and hemorrhages. First, we apply the U-Net method to segment exudates and hemorrhages, highlighted in red and green, respectively. Second, the You Only Look Once version 5 (YOLOv5) method identifies the presence of hemorrhages and exudates in an image and predicts a probability for each bounding box. The proposed segmentation method obtained a specificity of 85%, a sensitivity of 85%, and a Dice score of 85%. The detection software successfully detected 100% of diabetic retinopathy signs, the expert doctor detected 99% of DR signs, and the resident doctor detected 84%.


Introduction
Diabetic retinopathy is the primary cause of vision loss in working-aged individuals around the world, and its silent progression makes it a sight-threatening condition. By 2030, the number of patients with vision-threatening diabetic retinopathy (VTDR) is predicted to jump from 37.3 million to 56.3 million [1]. At the national level, according to the Moroccan Society of Ophthalmology [2], the prevalence of DR in Morocco is 35%, with legal blindness approaching 10%.
Diabetic retinopathy is a small-vessel complication of diabetes and continues to be the leading cause of preventable ocular morbidity among working-aged individuals. Diabetes currently affects 422 million people globally, with 600 million people expected to be affected by 2040, primarily in low- and middle-income nations [3]. DR is found in one-third of diabetics, and it is related to a higher risk of life-threatening systemic vascular problems such as small-vessel heart disease, cardiac failure, and stroke [4]. Prevention is the key factor in reducing the risk of diabetic retinopathy progression by keeping blood pressure, blood glucose, and blood lipids under control. We distinguish many signs of DR, such as retinal hemorrhages, cottony nodules, and exudates. Diabetic retinopathy is a type of retinal microangiopathy. It entails changes in the vascular wall in addition to

Related Work
Using the IDRiD dataset [11], Xu et al. [12] worked on a segmentation model called FFU-Net (Feature Fusion U-Net), which improves the U-Net architecture. To begin, the network's pooling layer is replaced with a convolutional layer to minimize the spatial information loss of the retinal image. Then, they integrated multiscale feature fusion (MSFF) blocks into the encoders, which help the network learn multiscale features and enrich the information provided by the skip connections and lower-resolution decoders by fusing contextual channel attention (CCA) models. Finally, the authors proposed a balanced focal loss function to address misclassification and data imbalance issues.
Kou et al. [13] proposed an enhanced residual U-Net (ERU-Net) for segmenting microaneurysms (MAs) and exudates (EXs). They evaluated ERU-Net's performance for MA and EX segmentation on three public datasets: IDRiD, DDR, and E-Ophtha. On these three datasets, the architecture achieves AUC values of 0.9956, 0.9962, 0.9801, 0.9866, 0.9679, and 0.9609 for microaneurysm and exudate segmentation, which are higher than the original U-Net values.
Li et al. [14] presented MAU-Net, which is a retinal image segmentation method based on the U-Net structure, to segment retinal blood vessels. The authors used DRIVE, STARE, and CHASEDB1 to validate their method.
Zhang et al. [15] proposed a CNN architecture that incorporates the Inception-Res module, as well as densely connected convolutional modules, into the U-Net model. The authors tested their model on vessel segmentation from retinal images, MRI brain neoplasm segmentation datasets from MICCAI BraTS 2017, and lung CT scan segmentation data from Kaggle datasets. The lung segmentation achieved an average Dice score of 0.9857, the brain tumor segmentation a Dice score of 0.9867, and the vessel segmentation an average Dice score of 0.9582.
Dai et al. [16] developed a DL solution called DeepDR, which allows users to detect different stages of DR. A total of 466,247 color fundus images was used for training. The detection results for different DR signs, namely microaneurysms, cotton-wool spots, hard exudates, and hemorrhages, were 0.901, 0.941, 0.954, and 0.967, respectively. DR classification into mild, moderate, severe, and proliferative achieved areas under the curve of 0.943, 0.955, 0.960, and 0.972, respectively.
Sambya et al. [17] used a U-Net model based on a residual network with sub-pixel convolution initialized to the nearest convolution. The suggested architecture was trained and validated on two publicly accessible datasets, IDRiD and e-ophtha, for microaneurysm and hard exudate segmentation. On the IDRiD dataset, the network obtains a Dice score of 0.9998, as well as 99.88% accuracy, 99.85% sensitivity, and 99.95% specificity for microaneurysms and exudates.
Yaday et al. [18] proposed a U-Net based approach for retinal vessel segmentation. Before starting the segmentation procedure, some preprocessing approaches are used to improve the image's impacted region. Then, a discrete double-tree Ridgelet transform (DT-DRT) is applied to the dataset to extract the features of the region of interest. The proposed segmentation achieved an accuracy of 96.01% in CHASE DB1, 97.65% in DRIVE, and 98.61% in STARE.
Toufique A. Soomro et al. [19] first used preprocessing steps to make the training process more efficient. They implemented a CNN model based on a variational autoencoder (VAE), which is a modified version of U-Net. Their main contribution to the CNN implementation is to replace all pooling layers with progressive convolution and deeper layers. The proposed model generates a segmented vessel image. The authors used both the DRIVE and STARE datasets to train and test their model. It gave a sensitivity of 0.739 and an accuracy of 0.948 on the DRIVE database. Additionally, a sensitivity of 0.748 and an accuracy of 0.947 are observed for the STARE database.
Swarup et al. [20] compared different architectures (UNET, TLCNN, PCNSVM, and rSVMbCNN). Then, they presented the selected retinal image segmentation method based on a ranking support vector machine (rSVM) with a convolutional neural network in deep learning for the detection of diabetic retinopathy. They started by computing the pixel-by-pixel score with rSVM, and they then designed a deep convolutional neural network for retinal image segmentation, followed by automatic anomaly detection using morphological operations. As a result, they achieved segmentation accuracies of 96.4%, 97%, and 98.2% for three different databases: STARE, DIARETDB0, and DIARETDB1.
Many AI devices have been developed to revolutionize DR screening [21]. Pal et al. [22] applied the You Only Look Once version 3 (YOLOv3) algorithm to automatically detect hemorrhages in fundus images. The YOLOv3 algorithm recognized all red spots and surrounded them with multiple boxes. It identifies bounding boxes using a CNN-based model named Darknet53 and a squared error loss function, as well as logistic regression to determine an object's confidence score. Finally, non-max suppression was used to delete everything other than the best-fit bounding boxes. In order to train their model, the authors used the MESSIDOR dataset, which is made up of 1200 RGB fundus images. Only 742 images were chosen out of the 1200, with 572 used for training and 170 for validation. The average precision on the test data was 83.3%.
The results of a YOLO model were published by Rohan Akut [23]. It entails detecting microaneurysms and identifying their location on retinal images. He developed this algorithm based on 85,000 fundus images used for training. The dataset was divided in a 90-10% ratio, with 90% going to training and 10% going to testing; a further 10% of the training dataset was used for validation. The model creates a green bounding box around each microaneurysm.
Ming et al. [24] evaluated EyeWisdom, an AI solution based on the YOLO detection system, in a real-world setting. It was created using 25,297 retinographs (3785 from Peking Union Medical College Hospital and Henan Eye Hospital and 21,512 from the Kaggle dataset) [25]. The sensitivity was 90.4%, and the specificity was 95.2%.
Yang et al. [26] presented a collaborative learning framework for robust DR grading that integrates patch-level lesion and image-level grading features (CLPI). They used the IDRiD dataset as a lesion dataset, which contains 81 color fundus images (54 for training and 27 for testing) with pixel-level annotations of lesions such as exudates, MAs, and hemorrhages. The authors also used image-level datasets such as Messidor-1 [27], Messidor-2, LIQ-EyePACs [28], and other private datasets. They demonstrated that the proposed CLPI outperforms senior ophthalmologists, as well as SOTA algorithms. The authors demonstrated the reliability of CLPI by evaluating the DR grading methods in real-world scenarios. The findings demonstrated the effectiveness of the lesion attention scheme, as well as the benefits of CLPI's end-to-end collaborative learning. Table 1 below summarizes the results of the literature review. The novelty of this paper lies in creating new software that can automatically detect all types of hemorrhages and exudates. Furthermore, the developed software can recognize laser marks, which are the results of a specific retina therapy and are not pathological signs, and differentiate them from both hemorrhages and exudates even if they look similar.

Diabetic Retinopathy Segmentation
In computer vision and deep learning, image segmentation is the process of partitioning an image into segments or highlighted groups of pixels that are considered meaningful entities [29]. To segment our color fundus images, we trained and tested a U-Net multi-class segmentation model on 200 labeled retinal images (Figure 1). U-Net [30] has become a widely known medical image segmentation technique and has demonstrated excellent performance as a fully convolutional neural network [3,30]. The U-Net architecture contains two "paths". First, the contraction path, known as the encoder, is used to capture an image's context. In fact, it is a combination of convolution and max pooling that not only reduces the image size but also generates a feature map, thereby decreasing the number of network parameters. The symmetric expansion path, also known as the decoder, is the second path. Thanks to transposed convolutions, it also provides precise localization. What characterizes the U-Net architecture is the shortcut connections between the layers of equal resolution from the analysis path to the expansion path. These connections provide important high-resolution features for the deconvolution layers.
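The contraction/expansion bookkeeping described above can be sketched with plain NumPy (a toy illustration of the shape flow only, not the trained Keras model used in this work): max pooling halves the spatial resolution on the way down, upsampling doubles it on the way up, and the shortcut connection concatenates the encoder feature map with the decoder one at equal resolution.

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling: halves height and width (contraction path step)."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2, :].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2x2(x):
    """Nearest-neighbour upsampling: doubles height and width (expansion path step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_concat(encoder_feat, decoder_feat):
    """U-Net shortcut connection: concatenate equal-resolution feature maps."""
    return np.concatenate([encoder_feat, decoder_feat], axis=-1)

# toy 64x64 feature map with 16 channels
feat = np.random.rand(64, 64, 16)
pooled = max_pool2x2(feat)           # 32x32x16: contraction
upped = upsample2x2(pooled)          # back to 64x64x16: expansion
merged = skip_concat(feat, upped)    # 64x64x32: high-res encoder features reinjected
print(pooled.shape, merged.shape)
```

The channel doubling after concatenation is why the decoder's deconvolution layers receive both coarse context and fine localization cues.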

Diabetic Retinopathy Detection
Object detection is an important branch of both image processing and computer vision. It is the process of detecting occurrences of a particular type of object in videos and images [31]. Deep learning research has given a lot of attention to object detection algorithms [32]. The recent YOLO series of algorithms, known for their high speed and precision, have been used in different detection tasks. "YOLO" is an abbreviation for "You Only Look Once". In YOLO, a CNN architecture is applied only once on the entire image to find the objects. The YOLO system calculates a large number of image features and detects all of the objects. This improves the model's computational efficiency and makes it more suitable for real-time applications. In our paper, we trained the fifth generation of the YOLO model, also known as the YOLOv5 architecture, to detect both hemorrhages and exudates (Figure 2). The model processes the entire image with a single neural network, then divides it into parts and predicts probabilities and bounding boxes for each object. The predicted probability is used to weight these bounding boxes. An appropriate selection of the activation function is mandatory for DL networks. Leaky ReLU is used in the middle/hidden layers of YOLOv5, and the sigmoid function is used in the last detection layer [33].
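The two activation choices mentioned above can be illustrated with a small NumPy sketch (illustrative only; the real YOLOv5 layers are PyTorch modules, and the negative slope used here is an assumed value): Leaky ReLU lets a small gradient pass for negative hidden activations, while the sigmoid in the detection head squashes raw box scores into [0, 1] confidences used to weight the bounding boxes.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    """Hidden-layer activation: identity for positive inputs, small slope otherwise."""
    return np.where(x > 0, x, slope * x)

def sigmoid(x):
    """Detection-layer activation: maps raw scores to [0, 1] confidences."""
    return 1.0 / (1.0 + np.exp(-x))

# raw scores for three candidate boxes coming out of the detection head
raw_scores = np.array([2.0, 0.0, -3.0])
confidences = sigmoid(raw_scores)         # roughly [0.88, 0.50, 0.05]
print(leaky_relu(np.array([-1.0, 2.0])))  # negative input is damped, not zeroed
print(confidences)
```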



Material
iCare EIDON is a non-mydriatic retinal camera (Figure 3). It is one of the first TrueColor confocal systems to set new quality standards in retinal imaging by combining the best features of scanning laser ophthalmoscopy (SLO) systems with those of basic fundus imaging. iCare EIDON is a retinal imaging system that provides high image quality, a confocal view in a non-dilating procedure, and wide-field, ultra-high-resolution imaging. Furthermore, it distinguishes itself by offering users a variety of imaging modalities, including blue, TrueColor, red-free, and red confocal images, as well as infrared. In addition to this, it enables users to work in both fully manual and fully automated modes and to image through cataract and media opacities.

Besides, a personal computer is connected to the iCare EIDON in order to facilitate the display, as well as the management and printing of the color fundus images. This computer will be used for the deployment of the proposed software that analyzes images.

Dataset
Our study is approved by the ethics committee of the Cheikh Zaid International University Hospital, and patient consent was obtained.
In order to test both the segmentation and detection methods, 1000 color fundus images were collected from the Foundation Ophthalmic Center of Rabat. These images were taken by the EIDON retinograph, which can automatically produce composite images that allow an overview of the retina of the patient.

Mask Generation for Segmentation
A total of 200 color images were taken from the Cheikh Zaïd International University in Rabat and then used for training and testing the U-Net algorithm. The mask creation procedure is still in progress, and more masks are being manually created in order to improve the model dataset. The GIMP software was used to create the masks. After being validated by expert ophthalmologists, all of them were saved as JPEG files and divided into four folders (hard exudates, soft exudates, hemorrhages, and red small dots) (Figure 4). These four folders were then merged into two folders to keep only two classes, which are hemorrhages (small red dots and hemorrhages) and exudates (hard and soft exudates).
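The four-to-two class merge described above amounts to a pixel-wise union of binary masks; a minimal sketch (a hypothetical helper operating on toy arrays; the actual masks are JPEG images prepared in GIMP and validated by ophthalmologists):

```python
import numpy as np

def merge_masks(*masks):
    """Pixel-wise union of binary lesion masks (values 0 or 1)."""
    return np.maximum.reduce(list(masks))

# toy 4x4 binary masks standing in for the four mask folders
hard_exudates = np.zeros((4, 4), dtype=np.uint8);  hard_exudates[0, 0] = 1
soft_exudates = np.zeros((4, 4), dtype=np.uint8);  soft_exudates[1, 1] = 1
hemorrhages = np.zeros((4, 4), dtype=np.uint8);    hemorrhages[2, 2] = 1
red_small_dots = np.zeros((4, 4), dtype=np.uint8); red_small_dots[3, 3] = 1

# the two classes actually used to train U-Net
exudates_class = merge_masks(hard_exudates, soft_exudates)
hemorrhages_class = merge_masks(hemorrhages, red_small_dots)
print(exudates_class.sum(), hemorrhages_class.sum())
```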

Annotation Generation for Detection
For image detection, a dataset of 1000 local color fundus images taken from the Cheikh Zaïd International University Ophthalmic Center was used, following the approval of the Cheikh Zaïd International University Hospital ethics committee. A population with an age range of 39 to 75, a rate of severe pre-proliferative DR of 12%, a rate of complicated DR of 1%, and a rate of legal blindness of 3% was selected. A total of 70% of our images went to training, 20% to testing, and 10% to validation.
Annotations were created manually using LabelImg (Figure 5). Bounding boxes were drawn manually around both hemorrhages and exudates, and then they were saved as ".txt" files.
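LabelImg's YOLO export stores each box as one line of the ".txt" file: a class index followed by the box centre and size, all normalized by the image dimensions. A minimal converter from pixel coordinates (hypothetical helper, shown only to make the file format concrete) looks like this:

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space box to a YOLO annotation line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. a hemorrhage (class 0) boxed at pixels (100, 200)-(180, 260) in a 640x480 image
print(to_yolo_line(0, 100, 200, 180, 260, 640, 480))
# 0 0.218750 0.479167 0.125000 0.125000
```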
The CNN model starts by resizing the input images to 448 × 448. A convolution is then performed through the horizontal and vertical application of several filters, with the aim of extracting the image features. Finally, to improve the accuracy of the bounding boxes of the detected signs, non-max suppression is applied (Figure 6).
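The non-max suppression step can be sketched as follows (a generic implementation for illustration, not the exact YOLOv5 code): candidate boxes are sorted by confidence, and any box that overlaps an already kept box above an IoU threshold is discarded.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the best-scoring boxes; drop boxes overlapping a kept box."""
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

# two overlapping candidates around one lesion, plus one distinct lesion
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # the lower-scoring duplicate is dropped
```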

Experiments and Results
We ran our notebook on a Colab Pro GPU (Tesla P100-PCIE-16GB) with high RAM. We also used PyTorch for YOLOv5 and Keras with TensorFlow as the backend for U-Net.

Diabetic Retinopathy Segmentation
As preliminary results of the solution under development by our research team, we obtained an output of either the absence or the presence of DR. As shown in Figure 7, the segmentation enables separating DR signs from the retinal image background to make them recognizable by highlighting exudates in red and hemorrhages in green. The use of this technique, however, necessitates the manual creation of training and validation labels. To assess the adequacy of the proposed solution, evaluation metrics were calculated (Table 2). The proposed segmentation method obtained a specificity of 85%, a sensitivity of 85%, a Dice score of 85%, and a Jaccard score of 66% for hemorrhage and exudate segmentation.
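The metrics in Table 2 follow the usual pixel-wise definitions; a small sketch computing them from confusion counts between a predicted and a ground-truth binary mask (illustrative helper, toy masks):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics between binary masks (values 0/1).
    Sensitivity = TP/(TP+FN), Specificity = TN/(TN+FP),
    Dice = 2TP/(2TP+FP+FN), Jaccard = TP/(TP+FP+FN)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
    }

# toy example: one true positive, one false negative, one false positive, one true negative
truth = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 1, 0]])
m = segmentation_metrics(pred, truth)
print(m)
```

Note that Dice and Jaccard are tied by J = D / (2 − D), which is why the two scores reported for the same masks differ.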

Diabetic Retinopathy Detection
The fifth version of YOLO was used to detect hemorrhages and exudates in color fundus images. Ultralytics introduced it in June 2020, and it is now known as an advanced object detection algorithm. YOLOv5 provides four versions of the object detection network: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. In this work, the YOLOv5s model was used; it is the network with the smallest depth and width in the YOLOv5 series. The specifics of training the YOLOv5 model are the following. Image size: 640; batch size: 16; data description: coco128.yaml; configuration (cfg): the models are described in YAML configuration files in the "models" directory, in four versions of different sizes, and "YOLOv5s.yaml" was used for training.
YOLOv5 is provided with two optimizers: Adam and Stochastic Gradient Descent (SGD), SGD being the default. For this paper, we used the Adam optimizer, which is a better choice for smaller datasets. After being trained, our detection model allowed us to show diabetic retinopathy signs by drawing red bounding boxes around exudates and pink bounding boxes around hemorrhages. In order to assess the detection model's performance in real-life situations, we then tested it on recent color fundus images taken in real-world conditions. As depicted in Figure 8, our model was able to differentiate between laser impacts and exudates that look alike.
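As a toy illustration of the colored overlays (the actual boxes are rendered by YOLOv5's own plotting utilities; the specific pink RGB value here is an assumption), class-colored rectangle outlines can be painted directly onto the image array:

```python
import numpy as np

# assumed RGB colors: red for exudates, pink for hemorrhages
CLASS_COLORS = {"exudate": (255, 0, 0), "hemorrhage": (255, 105, 180)}

def draw_box(image, x1, y1, x2, y2, label, thickness=2):
    """Draw a rectangle outline in the class color on an HxWx3 RGB array."""
    color = np.array(CLASS_COLORS[label], dtype=np.uint8)
    image[y1:y1 + thickness, x1:x2] = color  # top edge
    image[y2 - thickness:y2, x1:x2] = color  # bottom edge
    image[y1:y2, x1:x1 + thickness] = color  # left edge
    image[y1:y2, x2 - thickness:x2] = color  # right edge
    return image

fundus = np.zeros((100, 100, 3), dtype=np.uint8)  # stand-in for a fundus image
draw_box(fundus, 10, 10, 40, 40, "exudate")       # red outline
draw_box(fundus, 60, 60, 90, 90, "hemorrhage")    # pink outline
```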

Abulcasis DR-AI Detection Software
In order to facilitate the use of the detection code, a graphical interface was created using QT Python and FPDF Python.

The first interface allows users to fill in fields with different information, such as the patient's name and the date, and they can also add comments, as shown in Figure 9. A "Validate" button allows the user to validate the filled fields and then to choose the color fundus image path, as well as to switch to the second interface.
The second interface permits us to select a specific image to analyze (Figure 10). Once the "Detection" button is pressed, the third interface is displayed. A "Back" button allows one to go back to the first interface. The detection result is shown after that, and the user can also enlarge the resulting image in order to see different details of the detected signs of DR.
Furthermore, it is possible to add comments, as shown in Figure 11 (the detection result interface), so as to describe the patient's case.
By clicking on "Validate", a full report is shown containing the entered information, as well as the resulting color fundus image (Figure 12). A blind test was performed using different real color fundus images taken directly from the iCare EIDON modality. Table 3 below shows the clinical characteristics of 20 patients. The comparison results of the detection software are summarized in Table 4 below. Figure 13 below represents the precision and recall curves obtained using YOLOv5.

Furthermore, a public dataset named DeepDRID was used in order to test the Abulcasis DR-AI software.
The proposed software detected all hemorrhages and exudates, as presented in Figure 14. It also recognized the laser impacts in example (d) and did not consider them as pathological signs.

Discussion
As shown in Table 5, each of the two studied solutions has advantages and limitations. What characterized segmentation is the fact that it allowed us to assign a label to every pixel in our color fundus images. Detection enables users to insert colored bounding boxes around the objects in order to make them recognizable.

Table 5. Advantages and limitations of both segmentation and detection approaches.

Segmentation
Advantages:
- Highlights the segmented parts of the image with different colors, which makes them easily recognizable.
- Detects, classifies, and segments every object by assigning a label to each pixel of the image.
Limitations:
- Requires the creation of learning masks.
- The grey color of the output image may hide unsegmented signs.

Detection
Advantages:
- Inserts a colored bounding box around each object.
Limitations:
- Requires the creation of learning masks.
Since we aim to use this solution for early detection of DR signs, even the smallest exudates and hemorrhages must be taken into consideration and treated to prevent later complications. The grey color of the segmentation output, however, may hide unsegmented signs and details. Moreover, the detection method showed better results during the blind test.
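The segmentation quality reported for this work (85% sensitivity, 85% specificity, and an 85% Dice score) is computed pixel-wise from the predicted and ground-truth masks. The sketch below shows those three formulas on toy flattened binary masks; it is not the authors' evaluation code.

```python
# Sketch of the pixel-wise segmentation metrics: sensitivity, specificity
# and Dice score, computed from flattened binary masks (toy values).

def seg_metrics(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))              # lesion pixels found
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))  # background kept clean
    fp = sum(p and (not t) for p, t in zip(pred, truth))        # false lesion pixels
    fn = sum((not p) and t for p, t in zip(pred, truth))        # missed lesion pixels
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice

pred  = [1, 1, 0, 0, 1, 0]  # hypothetical predicted mask
truth = [1, 0, 0, 0, 1, 1]  # hypothetical ground-truth mask
sens, spec, dice = seg_metrics(pred, truth)
```

Note that Dice weighs false positives and false negatives symmetrically, which is why it is a common single-number summary for lesion segmentation.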
The proposed software, named Abulcasis DR-AI, allows users to automatically detect DR signs (Figure 15). Unlike the works presented earlier, it can detect both types of exudates (hard and soft) as well as hemorrhages. Abulcasis DR-AI can recognize laser impacts and does not consider them DR signs. Tested on fundus images, it successfully detected 100% of diabetic retinopathy signs, including those not easily visible to the naked eye, while the expert doctor detected 99% of DR signs and the resident doctor 84%. No false positives were produced during the tests.

Artificial intelligence is the future of early screening for diabetic retinopathy and of blindness reduction. The solution we have developed could be used in medical caravans organized in regions with a low density of hospital structures. It would optimize patient flow within hospital structures, allow people without access to health care to benefit from medical assistance, and save travel costs for those who do not need an examination by a specialist.

Conclusions
With a growing diabetic population and a widening gap between demand and the number of trained specialists, we believe that early screening and management of DR must be prioritized. Automating DR screening would be beneficial in Morocco, where few doctors serve a fast-expanding patient population, and artificial intelligence is the future of early screening to address this health issue and help reduce blindness. The U-Net and YOLOv5 models were applied and compared for detecting hemorrhages and exudates in retinography images. We therefore aim to use the proposed method in medical caravans arranged in low-density areas, with the goal of improving tele-screening in ophthalmology, optimizing patient flow within healthcare facilities, and allowing people who do not require a specialist checkup to save travel expenses. In upcoming projects, the creation of more labeled retinal images is mandatory for further training of our software, in order to improve its metrics and enable it to recognize rare cases of DR. Nevertheless, the Abulcasis DR-AI software still has limitations, and further investigation is needed to create a grading tool for DR severity.

Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Cheikh Zaid International University Hospital (protocol code CEFCZ/PR/09/01/23).

Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
A local dataset was created for training and testing the software; however, it is unavailable due to privacy and ethical restrictions. The software was also tested using a public dataset called DeepDRiD: github.com/mutual-ai/Deep-Diabetic-Retinopathy-Image-Dataset-DeepDRiD-, accessed on 20 February 2023.