Data fusion based lane departure warning framework using fuzzy logic [version 1; peer review: awaiting peer review]

Background: Lane detection is a difficult problem because of varying lane conditions. It plays an important part in advanced driver assistance systems, which provide information about the host vehicle relative to the lane centre, such as lane structure and lane position. Lane departure warning (LDW) is used to warn the driver about an unplanned exit from the original lane. The objective of this study was to develop a data fusion-based LDW framework to improve the rate of detection of lane departure during daylight and at night. Methods: Vision-based LDW is a comprehensive framework based on vision-based lane detection with additional lateral offset ratio computations based on the detected X12 and X22 coordinates. The computed lateral offset ratio is used to detect lane departure based on predefined LDW identification criteria for vision-based LDW. Data fusion-based LDW was developed using a multi-input-single-output fuzzy logic controller. The data fusion involved the lateral offset ratio and the yaw acceleration response from the vision-based LDW and model-based vehicle dynamics frameworks, respectively. Real-life datasets were generated for simulation under the MATLAB Simulink platform. Results: Experimental results showed that data fusion-based LDW achieved average lane departure detection rates of 99.96% and 98.95% with false positive rates (FPR) of 0.04% and 1.05% using road footage clips #5–#27 in daytime and night-time, respectively. The average FPR using data fusion-based LDW was reduced by 18.83% and 15.22% compared to vision-based LDW in daytime and night-time, respectively.


Introduction
According to 1, single vehicle road departure incidents account for the majority of road accidents. The frequency of deadly car accidents on the road has become one of today's most significant issues. Lane departures cause the bulk of highway fatalities, resulting in hundreds of deaths, thousands of injuries, and billions of dollars in damages each year. Malaysia has been rated as the nation with the greatest number of road-related deaths per 100,000 inhabitants every year since 1996 2. According to World Health Organization statistics from 2013, released in 3, Malaysia placed third among developing countries for hazardous roads, behind only Thailand and South Africa. As a consequence, car safety systems such as lane departure warning (LDW) systems have proven to be essential for avoiding lane departure.
According to 2, the geographical distribution of the 750,000 annual fatalities due to road accidents in 1999 put almost half of them in Asia. Furthermore, the statistical data on road deaths published in 4 show an increase in worldwide traffic fatalities, which is consistent with the predicted future rise in road fatalities in different geographical regions revealed in 5. The total number of road fatalities predicted for 2020 is over 3.5 times the total recorded in 1990, with South Asia bearing the brunt of the increase. The data trend displayed in 5 also shows that the overall number of victims of road accidents in developing nations is increasing. In contrast, in high-income nations, there has been a continuous decline over the past 20 years. The reduction in road deaths in high-income nations was mostly driven by legislative enforcement, such as the requirement that LDW systems be installed in all vehicles sold in the country.
As a result, it is important to disclose the proportion of road deaths from motorised four-wheeled vehicles compared to motorcyclists, bicycles, and pedestrians in terms of the World Health Organization sub-region categorisation, as shown in 6. That article displays a breakdown of road traffic fatalities by road user group in World Health Organization sub-regions, as well as the global average breakdown for each road user group. The breakdown of road user groups presented in 6 was based on published and unpublished information on country-specific road traffic injuries from 1999 to 2006. According to 6, worldwide road traffic deaths among motorised four-wheeled vehicles are on average expected to make up almost half of all fatalities (45%), followed by the pedestrian, motorcyclist, and bicycle user groups at 31%, 18%, and 7%, respectively.
In 2013, a similar distribution of road traffic deaths by road user type was found, with four-wheeled vehicles accounting for 35%, followed by pedestrians, motorised two- or three-wheel vehicles, cyclists, and other road users accounting for 31%, 22%, 11%, and 1%, respectively. Based on these statistical data, four-wheeled vehicles continue to be the leading contributor to worldwide road deaths when compared to other road user categories. Moreover, 7 revealed that approximately 37.4% of all fatal vehicle crashes in the United States are caused by single vehicle lane departure. According to related research 8, single-vehicle lane departure accidents accounted for the majority of road traffic fatalities caused by drifting into oncoming traffic, adjacent traffic, or off the highway.
Most road casualties have a close connection with the driver's behaviour, such as unintended steering wheel motions, dozing, negligence, fatigue, drowsiness, intoxication, or use of cell phones 9. As a result, automobile safety has become a concern for road users, as the majority of road fatalities are the result of a driver's faulty judgement of the vehicle route 10. Over the past decade, automobile safety has received a lot of attention, with numerous researchers working to improve car safety and comfort, according to 11. One of the main attempts by researchers has been to use a calculated risk indicator to provide a triggered warning signal to the driver immediately before an accident occurs to prevent road casualties 12, such as the LDW discussed in this article.
The present LDW framework is mostly made up of the environment detection component of the vision sensor, which detects the lane border, lane marker, and road contour. The determination of lane location is a critical component of the LDW application. It is paramount to evaluate how the lane is detected and determine its accuracy with applicable metrics in various environmental conditions 13. As a result, LDW is often only used on roads with well-defined lane markers, and the systems may be hampered by erroneous activity and circumstances on the road. An example of erroneous activity on the road is failing to engage a turn signal before making a lane change. As a result, performance assessment criteria for the general LDW system include the lane detection rate, false positive rate, and false negative rate. Numerous prior studies have examined and reported on these 14.
Image quality difficulties, low-visibility circumstances, and a variety of lane conditions are problems for LDW, according to 15. LDW performance decreases as a result of flaws brought on by environmental constraints. Examples of lane detection challenges arising from environmental limitations are the roadway lane markings in the daytime and night-time. Due to LDW constraints caused by environmental circumstances that make it difficult to identify correct lanes 16, a new framework for LDW development is needed to improve the system's resilience in coping with the present difficulties. Most of the developed LDW techniques are based on the vision sensor, and some have global positioning system (GPS) integration 17. Still, the lane departure detection results depend on the reliability of the link between the GPS receiver and the satellites.
This research aims to design a data fusion-based LDW framework that improves lane departure detection in daytime and night-time driving environments. The main motivation is to investigate frameworks that combine vision data from vision-based LDW and the vehicle dynamical state from model-based vehicle dynamics, so that the effectiveness of data fusion-based LDW can be enhanced in solving lane departure detection problems. Considering that various disturbances exist in a vision system, the lane departure detection performance of vision-based LDW can be severely degraded if vision disturbances appear in vision-based lane detection. It is, thus, desirable to design a data fusion-based LDW that is capable of enhancing lane departure detection through the combination of yaw acceleration from model-based vehicle dynamics and the lateral offset ratio from vision-based LDW, while accounting for the effect of vision disturbances in daytime and night-time driving environments.

Methods
As part of an intelligent transportation system, LDW plays a vital role in reducing road fatalities by giving a warning to the driver about any accidental lane departure. Prior to lane departure, the driver or monitoring system detects one lane boundary moving horizontally towards the centre of the front view. A lane departure may be recognised by analysing the horizontal position of each detected lane boundary, which corresponds to the X-coordinates of the bottom end-points of the lane borders in the image plane 18. The technology issues a warning message when the vehicle reaches a set distance from the lane boundary. An LDW warns that a vehicle is about to leave its current lane or is about to cross a lane boundary.

Data fusion based lane departure warning framework
Figure 1 shows the overall data fusion-based LDW framework. The vision-based lane detection framework found in 19 is extended by determining the lateral offset ratio based on the detected X12 and X22 coordinates. The model-based vehicle dynamics framework can be found in 20. The lane departure detection is based on the pre-defined vision-based LDW lane departure identification for the lateral offset ratio. However, all vision-based LDW systems have encountered performance constraints 21. Undetectable lane boundary markings limit the performance of vision-based LDW systems and their supporting algorithms. Limitations include environmental conditions, highway condition, and other marker occlusions. Hence, many research communities are finding new ways to improve the LDW system. In this article, data fusion between vision data and vehicle data enhances the LDW results. The lateral offset ratio and yaw acceleration are calculated using a combination of vision-based LDW and model-based vehicle dynamics. These two signals are then utilised as the input variables for fuzzy logic. The computed fuzzy output variable of LDW, f(u), is then used for detecting lane departure based on the defined fuzzy logic rules for LDW.

Lateral offset ratio
In 22, the vehicle's lateral offset in relation to the lane centre was utilised to forecast lane departure. However, existing techniques depend on camera calibration to obtain the lateral offset, while vision-based LDW does not require any intrinsic or extrinsic camera parameters 23. In this article, both lane boundary X-coordinates (X12 and X22) are analysed for each frame to compute the lateral offset ratio (LOR). The difference between the minimum selected X-coordinate and the warning threshold TH gives the horizontal pixel distance between the warning threshold and the detected X12 or X22. The LOR is then obtained by normalising this horizontal pixel distance against the corresponding distance for the detected Xm. The projection of a path in the image plane for lateral offset ratio calculations is shown in Figure 2.
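The excerpt does not give the closed-form LOR expression explicitly. One reading consistent with the reported values (0.25 at the lane centre, 0 at the warning threshold, -1 when crossing a boundary) can be sketched as follows; the interpretation of TH as a pixel distance measured from the image centre, and the implied nominal boundary distance of 1.25·TH when the vehicle is centred, are assumptions of this sketch, not the article's definition.

```python
def lateral_offset_ratio(x12, x22, x_m, t_h):
    """Hedged reconstruction of the lateral offset ratio (LOR).

    x12 : X-coordinate of the left lane boundary's bottom end-point
    x22 : X-coordinate of the right lane boundary's bottom end-point
    x_m : one-half the horizontal width of the image plane
    t_h : warning threshold, taken here as a pixel distance from the
          image centre (an assumption of this sketch)
    """
    # Pixel distance from the image centre to the nearer lane boundary.
    d = min(x_m - x12, x22 - x_m)
    # Normalise so that d == t_h gives 0 and d == 0 (boundary at the
    # image centre, i.e. the vehicle crossing the line) gives -1.
    lor = (d - t_h) / t_h
    # Clamp to the range reported in the article.
    return max(-1.0, min(lor, 0.25))
```

Under these assumptions, a centred vehicle with both boundaries 1.25·TH pixels from the image centre yields exactly 0.25.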
Lane departure identification in vision-based lane departure warning framework
Assume the car is travelling in the middle of the lane, parallel to the lane borders. The lateral offset ratio is therefore constant and equal to 0.25 as a function of the left bottom end-point of the left lane boundary, X12, the right bottom end-point of the right lane boundary, X22, one-half the horizontal width of the image plane, Xm, and the warning threshold, TH. Now assume that the car is travelling parallel to the lane borders but has left the lane's centre. The lateral offset ratio is constant in this instance, but it is less than 0.25. Because the vehicle does not seem to be leaving its lane, no LDW signal should be activated. As the vehicle approaches the lane border, the lateral offset ratio will decrease from 0.25 to -1. Because the vehicle then seems to have strayed from its lane, an LDW signal should be activated. As a result, the 'Lane Departure' text appears in the series of detected lane departure frame images.
Lane departure detection utilising the lateral offset ratio in vision-based LDW is shown in Table 1, with values ranging from -1 to 0.25. For the lane departure zone, the lateral offset ratio range is -1 ≤ lateral offset ratio ≤ 0. Furthermore, a lateral offset ratio of zero indicates that the vehicle has crossed the alert threshold. As a result, the 'Lane Departure' text appears on the frame image. For the no lane departure zone, the lateral offset ratio ranges over 0 < lateral offset ratio ≤ 0.25, with the highest value indicating that the vehicle is in the middle of the lane. As a result, the 'Lane Departure' text is not visible on the frame image. When the lateral offset ratio is equal to -1, the vehicle is crossing one lane border. As a result, the 'Lane Departure' text appears on the frame image.
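The zone logic of Table 1 can be sketched directly. The 'No Departure' return label is a hypothetical stand-in, since the article only specifies whether the 'Lane Departure' text is shown on the frame image.

```python
def vision_ldw(lor):
    """Map a lateral offset ratio to the warning zone of Table 1.

    -1 <= lor <= 0   : lane departure ('Lane Departure' text shown)
     0 <  lor <= 0.25: no lane departure (no text shown)
    """
    if not -1.0 <= lor <= 0.25:
        raise ValueError("lateral offset ratio outside [-1, 0.25]")
    # 'No Departure' is a placeholder label, not a string from the paper.
    return "Lane Departure" if lor <= 0.0 else "No Departure"
```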

Fuzzy logic controller
Analysing only the value of the lateral offset ratio using vision-based LDW is not adequate to detect the likelihood of lane departure, because the vision-based method tends to provide more false warnings due to deterioration of road conditions. The computed lateral offset ratio from vision-based LDW is usually noisy, caused by a road sign, a leading vehicle in the next lane, occluded lane boundaries, or a poor condition of road painting 23. These interferences multiply when computing the lateral offset ratio in particular.
Fuzzy logic is used to smartly determine the LDW to eliminate such high-rate differences in the computed lateral offset ratio. In this article, the lateral offset ratio and yaw acceleration are chosen as the input variables for the fuzzy logic, reflecting the strictly dynamic characteristic of the output variable 26. It is crucial to analyse vehicle dynamics responses such as yaw acceleration during lane departure activity. It has been shown that the yaw acceleration response can provide earlier insight in predicting a forthcoming lane departure event compared to other vehicle dynamics responses. The fuzzification, rule-based inference engine 27, and defuzzification used in data fusion-based LDW are presented in Figure 3.
The lateral offset ratio input and the LDW, f(u), output variables are divided into two membership function (MF) levels, while the yaw acceleration input variable is divided into three MF levels. The output variable can take any value within the range of LDW, f(u), for various road conditions. The average centre of gravity method for defuzzification is adopted.
Table 2 tabulates the centre and spread parameters used in the lateral offset ratio and yaw acceleration input MFs to compute the output variable of LDW, f(u). The PO Gaussian MF of the lateral offset ratio input variable has a centre of 0.25 and a spread of 0.087. Two Gaussian MFs are chosen because of vision-based LDW's lane departure identification for the lateral offset ratio, as described in Table 1. For the yaw acceleration input variable, the PO Gaussian MF has a centre of 0.1 and a spread of 0.03538, the ZE Gaussian MF has a centre of 0 and a spread of 0.03538, and the NE Gaussian MF has a centre of -0.1 and a spread of 0.03538. Three Gaussian MFs are chosen to fully cover the range of yaw acceleration.
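A minimal sketch of the multi-input-single-output controller is given below, assuming min as the AND operator, singleton outputs at the ends of the stated f(u) range, and a hypothetical rule base (Table 3 is not reproduced in this excerpt). The NE parameters of the lateral offset ratio MF are likewise placeholders, as only the PO pair is quoted above.

```python
import math

def gaussmf(x, centre, spread):
    """Gaussian membership function exp(-(x - c)^2 / (2*s^2))."""
    return math.exp(-((x - centre) ** 2) / (2.0 * spread ** 2))

# Input MF parameters from Table 2; the NE pair for the lateral offset
# ratio is not quoted in this excerpt, so -1.0/0.087 are placeholders.
LOR_MF = {"PO": (0.25, 0.087), "NE": (-1.0, 0.087)}
YAW_MF = {"PO": (0.1, 0.03538), "ZE": (0.0, 0.03538), "NE": (-0.1, 0.03538)}

# Output singletons LD / NLD spanning the stated f(u) range [-5, 0.6].
OUT = {"LD": -5.0, "NLD": 0.6}

# Hypothetical rule base: an NE lateral offset always votes LD; a PO
# offset votes NLD only when yaw acceleration is near zero.
RULES = [
    (("NE", "PO"), "LD"), (("NE", "ZE"), "LD"), (("NE", "NE"), "LD"),
    (("PO", "PO"), "LD"), (("PO", "ZE"), "NLD"), (("PO", "NE"), "LD"),
]

def fuzzy_ldw(lor, yaw_acc):
    """Compute f(u) by weighted average of the output singletons
    (centre of gravity defuzzification for singleton outputs)."""
    num = den = 0.0
    for (lor_lbl, yaw_lbl), out_lbl in RULES:
        # Rule firing strength: AND of the two input memberships.
        w = min(gaussmf(lor, *LOR_MF[lor_lbl]),
                gaussmf(yaw_acc, *YAW_MF[yaw_lbl]))
        num += w * OUT[out_lbl]
        den += w
    return num / den
```

With these placeholder rules, a vehicle at the lane centre with zero yaw acceleration yields a positive f(u) (no departure), while a vehicle crossing a boundary yields a negative f(u).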
Data fusion-based LDW is controlled according to the fuzzy rules established from the control scheme shown in Table 3. The testbed used for the experiment is shown in Figure 4, and the equipment used in the experimental testbed is listed in Table 5. The real-life datasets were generated using a camera capturing the look-ahead road footage and rotary encoders capturing the steering wheel angle and vehicle speed responses.
The real-life datasets of road footage, steering wheel angle responses, and vehicle speed responses were acquired off-line at a rate of 30 Hz and trimmed into the corresponding clip numbers presented in Table 6 and Table 7 for daytime and night-time driving environments, respectively. The data trimming is applied to the real-life datasets before they are transferred to a laptop for running the data fusion-based LDW simulation, as illustrated in Figure 4. Data trimming is required to ensure all the responses acquired from the various sensors are synced at the exact time-stamp.
In this case, outlier or out-of-sync responses from the sensors in the early seconds of acquisition were discarded in order to match the reference time-stamp.
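The trimming step above can be sketched as dropping every sample recorded before the slowest-starting sensor's first time-stamp; the data layout (sensor name mapped to time-stamped samples) is hypothetical.

```python
def trim_to_common_start(streams):
    """Drop leading samples so all sensor streams share a start time.

    `streams` maps a sensor name to a list of (timestamp, value) pairs,
    each sorted by timestamp. Samples recorded before the slowest-
    starting sensor's first time-stamp are discarded, mirroring the
    data trimming described above (a sketch; the layout is assumed).
    """
    # Reference time-stamp: the latest first-sample time across sensors.
    t0 = max(samples[0][0] for samples in streams.values())
    return {name: [(t, v) for (t, v) in samples if t >= t0]
            for name, samples in streams.items()}
```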
In order to validate the effectiveness of the proposed data fusion-based LDW framework, real-life datasets with variations in driving environment (daytime and night-time), road structure (straight and curving roads), and outlier road features (occluded lane markers and arrow signs printed on the road surface) were considered. The authors spent many hours manoeuvring the instrumented car to generate real-life datasets specifically for use in the current study.
The experimental testbed used for acquiring road footage for data fusion-based LDW is also described in 19, which proposes a vision-based lane departure warning framework for lane departure detection under daytime and night-time driving environments. The traffic flow and road surface conditions for both urban roads and highways in the city of Malacca are analysed in terms of lane detection rate and false positive rate. The vision-based lane departure warning framework includes lane detection followed by a computation of the lateral offset ratio. The lane detection is composed of two stages: pre-processing and detection. In the pre-processing, colour space conversion, region of interest extraction, and lane marking segmentation are carried out. In the subsequent detection stage, the Hough transform is used to detect lanes. Lastly, the lateral offset ratio is computed to yield a lane departure warning based on the detected X-coordinates of the bottom end-points of each lane boundary in the image plane.
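As an illustration of the detection stage, a didactic pure-Python Hough transform over a small binary edge image is sketched below; it is not the article's implementation, which operates on the pre-processed, segmented road footage.

```python
import math

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform over a binary edge image (list of rows).

    Votes each edge pixel into (rho, theta) bins using the normal form
    rho = x*cos(theta) + y*sin(theta), and returns the accumulator
    peak, i.e. the dominant straight line. A didactic sketch of the
    Hough-based detection stage, kept deliberately small.
    """
    h, w = len(edges), len(edges[0])
    acc = {}
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
    # Peak bin: the (rho, theta) pair with the most votes.
    (rho, t), _ = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / n_theta
```

For a synthetic vertical edge at x = 3, the peak lands at rho = 3, theta = 0, i.e. the line x = 3.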

Ethical approval
Ethical approval was obtained from Multimedia University with approval number EA1902021. The authors submitted a self-declaration form on 17/05/2021 stating that the research was conducted from 01/02/2012 to 30/04/2020. Ethical approval is required by the institution prior to publication of the article.

Performance assessment
The number of lane departure frames, the number of identified lane departure frames, and the number of false positive frames were manually tallied frame by frame for the performance assessment of lane departure detection. The total number of lane departure frames was computed as N_TL = N_DL + N_FL, where N_TL is the total number of lane departure frames detected, N_DL is the total number of correctly detected lane departure frames, and N_FL is the total number of false positive lane departure frames.

Results
All three test situations, namely straight road, curving road, and false alarms, were included in the real-life datasets presented in Table 6 and Table 7. The formula used for the lane departure detection rate was: lane departure detection rate = (N_DL / N_TL) × 100%. The formula used for the false positive rate was: false positive rate = (N_FL / N_TL) × 100%. Data fusion-based LDW achieved an average lane departure detection rate of 99.96% and a false positive rate of 0.04% using the real-life datasets in the daytime driving scenario. In the night-time driving scenario, the data fusion-based LDW obtained an average lane departure detection rate of 98.95% and a 1.05% false positive rate using the real-life datasets. The integration of a vision system such as vision-based LDW with the dynamic conditions of the vehicle substantially decreased the false positive rate and increased lane departure detection precision by eliminating superfluous LDW frames.
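The two formulas above can be sketched as a small helper, using N_TL = N_DL + N_FL from the performance assessment:

```python
def ldw_rates(n_dl, n_fl):
    """Detection and false positive rates from per-frame tallies.

    n_dl : correctly detected lane departure frames (N_DL)
    n_fl : false positive lane departure frames (N_FL)
    The total is N_TL = N_DL + N_FL, so the two rates sum to 100%.
    """
    n_tl = n_dl + n_fl
    detection_rate = 100.0 * n_dl / n_tl
    false_positive_rate = 100.0 * n_fl / n_tl
    return detection_rate, false_positive_rate
```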

Discussion
In real-world situations, low illumination and road surface interference are often encountered in daily driving. Thus, a standalone vision-based system may be unreliable under complex driving environments and road surface conditions. Examples of complex driving environments and road surface conditions encountered in the experiments were low illumination at night, worn lane markings, arrow signs, and occluded lane markings. As part of an intelligent transportation system, lane departure detection performance can be further enhanced by combining vision data with model-based vehicle dynamics responses.

Figure 3. Fuzzy logic blocks in data fusion-based lane departure warning.
The MF levels are defined as positive (PO) and negative (NE) for the lateral offset ratio; lane departure (LD) and no lane departure (NLD) for LDW, f(u); and positive (PO), zero (ZE), and negative (NE) for yaw acceleration. The Gaussian MF is chosen to improve fuzzification speed, because only the centre and spread need to be updated, and because the Gaussian MF is well known and widely used. The ranges of the lateral offset ratio and yaw acceleration are (-1, 0.25) and (-0.1, 0.1), respectively. The range of the output variable LDW, f(u), is a singleton within -5 to 0.6.

Lane departure identification in data fusion-based lane departure warning framework

Figure 4. Real-life dataset acquisition flow for data fusion-based lane departure warning.
Table 4 tabulates the lane departure identification in data fusion-based LDW, which uses the output of the fuzzy logic, LDW, f(u). The LDW, f(u), has a range of values between -5 and 0.6, with lane departure falling within -5 ≤ f(u) ≤ 0. In addition, a value of f(u) equal to zero indicates the vehicle is crossing the warning threshold; hence, the 'Lane Departure' text is presented in the frame image. The range of f(u) for the no lane departure zone falls within 0 < f(u) ≤ 0.6, with the maximum value indicating the vehicle is located at the centre of the lane; hence, no 'Lane Departure' text is presented in the frame image. The minimum value, f(u) = -5, indicates that the vehicle is crossing one lane boundary; hence, the 'Lane Departure' text is presented in the frame image. The simulation was run under a Windows 10 operating system, using an Intel i5 1.60 GHz processor and 4 GB RAM. Alternatively, Scilab's Xcos 28 is suggested as an open-source alternative to replace MATLAB and Simulink. To simulate the model, the real-life datasets (clips #5–#27) are input sequentially into the Simulink model. The Simulink model and its parameters are shared via Zenodo and GitHub 25.
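The f(u) zone logic of Table 4 mirrors the vision-based case and can be sketched as follows; the 'No Departure' label is again a hypothetical stand-in for "no text shown".

```python
def fusion_ldw_label(f_u):
    """Map the fuzzy output f(u) to the warning zone of Table 4.

    -5 <= f(u) <= 0  : lane departure ('Lane Departure' text shown)
     0 <  f(u) <= 0.6: no lane departure (no text shown)
    """
    if not -5.0 <= f_u <= 0.6:
        raise ValueError("f(u) outside [-5, 0.6]")
    # 'No Departure' is a placeholder label, not a string from the paper.
    return "Lane Departure" if f_u <= 0.0 else "No Departure"
```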

Table 2. Centre and spread of the input membership functions (MF). LOR, lateral offset ratio; PO, positive; NE, negative; ZE, zero.

Table 5. List of equipment used in real-life dataset acquisition for data fusion-based lane departure warning. (Columns: testbed equipment, quantity, description.)
In both daylight and night-time driving situations, the effectiveness of data fusion-based LDW in lane departure detection is assessed. The efficacy of the data fusion-based LDW concept is shown by comparing the lane departure detection results for vision-based LDW and data fusion-based LDW. Table 8 and Table 9 show the results of lane departure detection based on vision-based LDW in daylight and night-time driving situations, respectively. Table 10 and Table 11 show the results of lane departure detection based on data fusion-based LDW in daylight and night-time driving situations, respectively. No public or known datasets of road footage, vehicle speed responses, and steering wheel angle responses for the identification of lane departure were identified for a fair comparison between data fusion-based LDW and vision-based LDW. The lane departure detection findings of the data fusion-based and vision-based LDWs reported above are thus compared using the real-life datasets. The real-life datasets for clips #5–#27 consist of 58,670 road video frames for the lane identification study. As the real-life datasets consist of 23 different clips, each clip has an average lane departure detection rate and false positive rate. Of all the road video frames, 15,636 and 2,702 frames were LDW frames in vision-based LDW and data fusion-based LDW, respectively.
Data fusion-based LDW obtained strong lane departure detection results in both daytime and night-time driving situations, in particular by decreasing the false positive rate under unfavourable circumstances such as worn lane markings, poor light, obscured lane markings, and other road signs.

Table 12 . Comparison of lane departure detection results using real-life datasets.
Conclusions
In this article, a data fusion-based LDW system is presented, which is made up of vision-based LDW and model-based vehicle dynamics, with multi-input-single-output fuzzy logic in between. The lateral offset ratio is used to determine whether the 'Lane Departure' text should be shown on the frame image. This calculation is based on the identified X-coordinates of each lane boundary's bottom end-points in the image plane. The LDW, f(u), is intelligently computed using multi-input-single-output fuzzy logic based on the lateral offset ratio input from vision-based LDW and the vehicle's yaw acceleration response from model-based vehicle dynamics. To assess performance in lane identification and lane departure detection, road video from urban roads and a highway in Malacca was gathered. A reduced detection rate was observed in clips #25 and #26. Nonetheless, based on the findings of the experiments, data fusion-based LDW worked well throughout the day without being hampered by road surface conditions. Each frame was processed in about 18.7 milliseconds during the testing of data fusion-based LDW. Low light and road surface interference are common occurrences in real-world driving conditions. As a result, vision-based systems, particularly vision-based lane detection and vision-based LDW, are unreliable in challenging driving situations and road surface conditions. Low lighting at night, faded lane markings, arrow signs, and obscured lane markings were all instances of difficult driving situations and road surface characteristics observed in the tests. The focus of future study should be on all-weather test settings. The performance of data fusion-based LDW may be improved further by using adaptive tuning of the fuzzy rules and MF parameters as part of an intelligent transportation system.

Clip #19: https://doi.org/10.17632/kcxpm835gw.3
Clip #21: https://doi.org/10.17632/cjptbmddpk.4
Clip #23: https://doi.org/10.17632/5zjf62drv7.3
Clip #24: https://doi.org/10.17632/r8vm7nbgvm.3
Clip #26: https://doi.org/10.17632/wmymrk79tg.3
Clip #27: https://doi.org/10.17632/wb4hgnr6k3.3

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0). Data are available under the terms of The MIT License (MIT).