Publicly available. Published by De Gruyter, April 8, 2024.

PRORETA 5 – building blocks for automated urban driving enhancing city road safety

PRORETA 5 – Bausteine zum automatisierten urbanen Fahren für höhere Verkehrssicherheit
Christoph Popp, Andreas Serov, Felix Glatzki, Christoph Ziegler, Andreea-Iulia Olaru, Jaime Maldonado, Joachim Clemens, Jürgen Adamy, Maxim Arbitmann, Florin Leon, Steven Peters, Kerstin Schill, Sighard Schräbler and Hermann Winner

Abstract

In the joint research project PRORETA 5, building blocks for automated driving in urban areas have been developed, implemented, and tested. The developed blocks include object tracking for cars, bicycles, and pedestrians, which feeds a multimodal object prediction that estimates the traffic participants’ most likely trajectories. Then, an anytime tree-based planning algorithm calculates the vehicle’s desired path. Finally, logic-based safety functions ensure a collision-free trajectory for the ego vehicle. The mentioned building blocks were integrated and tested in a prototype vehicle in urban scenarios. Furthermore, a novel general framework for specifying and testing traffic rule compliance has been developed. In this paper, the automated driving concept of PRORETA 5 is introduced and the developed methods are briefly explained.

Zusammenfassung

Im Projekt PRORETA 5 wurden Funktionsbausteine für das automatisierte Fahren im urbanen Umfeld entwickelt, implementiert und getestet. Die entwickelten Elemente umfassen eine Objektverfolgung für Autos, Fahrräder und Fußgänger, welche als Grundlage für eine multimodale Objektprädiktion dient, die in der Lage ist, die wahrscheinlichsten Trajektorien der Verkehrsteilnehmer vorherzusagen. Anschließend berechnet ein baumbasierter Planungsalgorithmus den gewünschten Pfad des Fahrzeugs. Schließlich gewährleisten logikbasierte Absicherungsfunktionen eine kollisionsfreie Trajektorie für das Ego-Fahrzeug. Die genannten Bausteine wurden in ein Prototypenfahrzeug integriert und in städtischen Szenarien getestet. Darüber hinaus wurde ein neuartiger generischer Ansatz für die Spezifikation und Prüfung der Verkehrsregelkonformität entwickelt. In diesem Beitrag wird das automatisierte Fahrkonzept von PRORETA 5 vorgestellt und die entwickelten Methoden kurz erläutert.

1 Introduction and motivation

Automated Driving (AD) requires an interplay of environmental and proprioceptive perception, decision making, and action execution depending on the traffic context. For perceiving the environment, the vehicle needs to be equipped with sensors that are able to acquire relevant information. Cameras, Light Detection And Ranging (LIDAR), Radio Detection And Ranging (RADAR), and ultrasonic sensors are typically used in this context to detect, among other things, road markings, traffic signs, and other traffic participants. Furthermore, the vehicle’s own state is estimated using Global Navigation Satellite System (GNSS) sensors, an Inertial Measurement Unit (IMU), and wheel odometry. After processing the sensors’ inputs, decision making takes place on a macro and a micro level: the former considers the ultimate goal, the global position, and the road network for route planning; the latter calculates a plan of movement and takes into account the vehicle’s immediate surroundings for collision avoidance, braking, and generally following the traffic flow. Finally, the actuators of the vehicle receive control signals to execute the plan and perform the vehicle’s movement.

PRORETA 5 is the fifth project of the successful research cooperation between Continental and the Technical University of Darmstadt and ran from 2019 to 2022. In contrast to PRORETA 1–4, PRORETA 5 also involves the University of Bremen and the “Gheorghe Asachi” Technical University of Iaşi. In the first project, an anti-collision system was developed that is able to perform emergency braking, including potential steering maneuvers, for vehicles moving in the same direction [1]. In PRORETA 2, the system was enhanced for more dynamic use cases like overtaking with oncoming traffic [2]. The detection of a drivable corridor and intervening decision making were investigated in the third project [3]. PRORETA 4 featured Machine Learning (ML) for the first time to analyze a personalized driving style. Additionally, driver gaze tracking and visual SLAM methods were investigated [4]. While the systems developed in the first four projects only assisted the vehicle operator, PRORETA 5 targets SAE automation level 4 [5]. Level 4 systems are also developed in other research projects like OPA3L [6], which is about autonomous shuttle rides in known suburban surroundings, UNICARagil [7], where a modular architecture for agile automated vehicle concepts has been developed, and the current follow-up project AUTOtech.agil [8], which also includes infrastructure sensors as well as cooperative concepts with control rooms and clouds. However, the PRORETA 5 project especially aims at mastering urban use cases without any active human contribution, similar to @CITY [9], which ran in parallel to PRORETA 5. To achieve this, the focus in PRORETA 5 is on handling the entirety of urban driving with the automated vehicle in a safe way. Besides perception and driving algorithms, unique elements of this project are a safeguarding concept for AI-based driving functions and a generic description of traffic rule compliance, which have been developed to meet the safety aspect. STADT:up [10] started in 2023 and continues research on urban automated driving, also including a differentiated look at the interaction and communication concepts between users, vehicles, and other traffic participants. Besides the mentioned research projects, the state of the art of automated driving in series vehicles is the SAE level 3 DRIVE PILOT [11] and the SAE level 4 valet parking system INTELLIGENT PARK PILOT [12] of Mercedes-Benz.

1.1 Use case

The research in the project focuses on urban scenarios, developing novel approaches for different functions of AD systems. Urban environments pose particular challenges for AD: they are highly dynamic, with pedestrians and bicyclists quickly changing directions; unclear priority situations, for example because of narrow roads, demand situational understanding and cooperation; and parked vehicles or other objects occlude other traffic participants. The use case of PRORETA 5 covers these challenges by developing an urban pilot for narrow roads. Table 1 gives a brief overview of the operational design domain (ODD) of the system.

Table 1:

Operational design domain of the PRORETA 5 system.

Roadway infrastructure
  Route network: City of Griesheim, Germany
  Road types: bidirectional roads, two-lane roads
  Speed limit: 30 km/h on all roads
  Lane width: 2.25 m – 3.5 m
  Intersections: uncontrolled intersections
  Traffic control devices: signs (205, 206, 207, 274, …), bollards (narrowed road)
  Horizontal curvature: ≤ 0.1 1/m
  On-street parking: parallel parking on one or both sides
Environmental conditions
  Temperature: > 0 °C and < 40 °C
  Precipitation: light rain
  Sky condition: sunny, mostly sunny, partly sunny, mostly cloudy, cloudy
  Illuminance: sunlight, full daylight, overcast day
Road users
  Automobiles, bicyclists, pedestrians, motorcycles, scooters, micromobility vehicles, wheelchairs
Roadside objects
  Overhanging vegetation, guard rails, trees

1.2 Contribution and overview

The PRORETA 5 system architecture is presented in Figure 1. The software architecture of the automated system is based on the sense-plan-act structure. In the sense layer, raw sensor data and an HD map are processed. A road model is created by localizing the vehicle inside the HD map using a GNSS reference system. By combining RADAR and LIDAR data, traffic participants of the current traffic situation are detected. In the planning layer, the movements of dynamic road users are predicted. Then, the trajectory planner combines all previously processed information into a future trajectory. In the act layer, the planned trajectory is safeguarded with the safety check module before it is executed by the feedback controller.

Figure 1: PRORETA 5 system architecture. The modules marked in yellow were developed in this project.

Several building blocks for AD in urban environments have been developed in this project. Modules that are outside the scope of this project have been provided by Continental’s AD software stack. Object tracking based on RADAR and LIDAR data estimates the state of other traffic participants in the vehicle’s surveillance area. The tracking is performed on a manifold and described in Section 2.1. In addition to the object detection based on LIDAR and RADAR data, the usage of visual saliency models for the detection of traffic participants in images was investigated in PRORETA 5. In Section 2.2, an overview of the main results and challenges of this approach is presented with a view to a prospective implementation of saliency-based detection in forthcoming projects.

The object tracking output is fed into the multimodal object prediction presented in Section 2.3. The prediction module is a neural network trained on simulated and real-world data. It predicts the traffic participants’ most likely trajectories, which supports the decision making of the trajectory planner. The trajectory planner is a tree-based anytime algorithm that outputs the currently best trajectory based on the ego vehicle’s speed, steering angle, the road network, and the predicted trajectories of other traffic participants (see Section 2.4). Before the trajectory of the planner is sent to the vehicle controller, the logic-based safety check ensures that the planned trajectory is safe and can be performed collision-free. If this is not the case, the safety check performs a risk-minimizing emergency maneuver to standstill. The safety check, which is based on RADAR data and unprocessed LIDAR point cloud data, is presented in Section 2.5. Furthermore, a framework for traffic rule compliance that generates behavior constraints in the context of AD has been developed and is presented in Section 2.6.

2 PRORETA 5 system

The test vehicle in PRORETA 5 is a Volkswagen Passat B8, which is equipped with several environmental sensors. Figure 2 depicts the position of the sensors on the vehicle as well as the fields of view (the radial range of the fields of view is not true to scale). The forward-facing RADAR sensor of type Continental ARS430 has two independent scans for near and long range. Three Ibeo LUX-8 LIDAR sensors, each with eight layers, are installed at the front of the vehicle, directed straight ahead, to the front left, and to the front right. Another Ibeo LUX-8 is located at the rear of the vehicle and is oriented in the reverse direction of travel. All Ibeo LUX-8 sensors are connected to an Electronic Control Unit (ECU), where the individual scans are combined into a single point cloud and an object list is generated. Additionally, two Velodyne PUCK (VLP16) LIDAR sensors are mounted on the roof of the vehicle. Each one has 16 layers and covers a 360° field of view. Furthermore, the OxTS RT4000 reference system is used in the vehicle, which is an inertial navigation system with GNSS aided by Real-Time Kinematics (RTK).

Figure 2: Environmental sensor setup in the test vehicle with 1: RADAR Continental ARS430, 2: LIDAR Velodyne Puck (VLP 16), 3: LIDAR Ibeo LUX-8.

The main computer that is used to run the developed driving functions in the vehicle contains an Intel Xeon Gold 6230 CPU, 64 GB of RAM, and an Nvidia GeForce RTX 2080Ti GPU. The software modules running on this computer communicate using the open source software eCAL.[1] The investigated methods are presented in the following subsections.

2.1 Extended object tracking

In the realm of AD, the capability to estimate the kinematics and extent of objects in the immediate surroundings is crucial for safe operation and collision avoidance. Extended object tracking addresses this challenge by estimating the state of pedestrians, bicyclists, and vehicles. In PRORETA 5, the object tracking is based on RADAR and LIDAR, which play a pivotal role in AD. An overview of different tracking approaches for the individual sensors is given in [13]. RADAR-based solutions are especially effective for detecting objects with reflective surfaces and estimating their velocities. LIDAR relies on processing point cloud data to track entities with complex shapes and provides better estimates of an object’s position.

Due to the availability of two object lists, provided by the Continental ARS430 RADAR and the Ibeo LIDAR ECU, a high-level fusion is performed instead of processing the RADAR’s and LIDAR’s low-level sensor information. The object’s tracking state is partially provided by the respective sensor object list and probabilistically fused by means of one Extended Kalman Filter (EKF) per object. Our solution is akin to [14], where the authors track vehicles with infrastructure-based sensing systems and propose a generic sensor interface for the tracking approach; however, our approach tracks objects relative to a moving ego vehicle instead of stationary sensors. Our approach tracks a reference point of the object; a similar approach using LIDAR only is presented in [15]. Both sensors are fused in [16], where the authors only track an object’s 2D position and velocity, while our algorithm also estimates the object’s extent, turn rate, and 2D acceleration. We also provide detailed information on the measurement functions and the Jacobian derivations. The presented approach takes advantage of each sensor’s strengths and can easily be extended to additional sensing systems such as cameras. Furthermore, the presented high-level fusion can potentially benefit from vehicle-to-vehicle (V2V) communication, where vehicles share motion information like position, speed, acceleration, turn rate, and other useful quantities among each other [17].

The state of an object at time t is defined as

(1) $\mathbf{x} = \begin{pmatrix} \mathbf{p}_{L,t}^\top & \psi_{L,t} & \mathbf{v}_{O,t}^\top & \dot{\psi}_t & \mathbf{a}_{O,t}^\top & \mathbf{s}_t^\top \end{pmatrix}^\top,$

where the capital-letter subscripts indicate the coordinate frame in which the respective variable is given. There are three relevant coordinate frames: the ego vehicle frame E, the frame of the tracked object O, and the local odometry frame L. $\mathbf{p}_L$ and $\psi_L$ are the 2D position and the yaw angle in the local odometry frame L, respectively. $\mathbf{v}_O$ and $\mathbf{a}_O$ are the body-fixed 2D velocity and acceleration in the object’s coordinate frame O, respectively. The object’s yaw rate is given by $\dot{\psi}$. Finally, the state contains the object’s length and width given by $\mathbf{s}$. All relevant frames, variables, and measures for the object tracking are displayed in Figure 3. The ego vehicle on the left is tracking an object on the right. Before performing EKF updates at time t + 1, the object’s position and orientation are predicted with

(2) $g(\mathbf{x}) = \begin{pmatrix} \mathbf{p}_{L,t} + R_2(\psi_{L,t})\, \mathbf{v}_{O,t}\, \Delta t \\ \operatorname{norm}\!\left( \psi_{L,t} + \dot{\psi}_t\, \Delta t \right) \\ \mathbf{v}_{O,t} \\ \dot{\psi}_t \\ \mathbf{a}_{O,t} \\ \mathbf{s}_t \end{pmatrix},$

where $R_2(\cdot) \in SO(2)$ is the 2D rotation matrix of the respective angle and $\Delta t$ is the time between $t+1$ and $t$. Although the object’s acceleration is part of the state vector, we refrain from predicting the velocity using $\mathbf{v}_{O,t} + \mathbf{a}_{O,t}\,\Delta t$, since this may lead to instability in certain cases. Special care is needed for estimating the angle $\psi_L \in [-\pi, \pi)$ with its discontinuity at $2\pi k$, $k \in \mathbb{Z}$. In our case, we perform state estimation on a manifold and normalize the angle after each operation according to [18, eq. (24)].
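As an illustration, the following minimal Python sketch shows the prediction step (2) with on-manifold angle normalization. The flat state layout and helper names are our assumptions for this sketch, not the project’s implementation.

```python
import numpy as np

def norm_angle(psi: float) -> float:
    # Wrap an angle to [-pi, pi), handling the discontinuity at 2*pi*k.
    return (psi + np.pi) % (2.0 * np.pi) - np.pi

def rot2(psi: float) -> np.ndarray:
    # 2D rotation matrix R_2(psi) in SO(2).
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s], [s, c]])

def predict_state(x: np.ndarray, dt: float) -> np.ndarray:
    # Assumed state layout:
    # [p_Lx, p_Ly, psi_L, v_Ox, v_Oy, yaw_rate, a_Ox, a_Oy, length, width]
    x_pred = x.copy()
    p_L, psi_L, v_O, yaw_rate = x[0:2], x[2], x[3:5], x[5]
    x_pred[0:2] = p_L + rot2(psi_L) @ v_O * dt     # position row of eq. (2)
    x_pred[2] = norm_angle(psi_L + yaw_rate * dt)  # normalize after each operation
    # Velocity is deliberately NOT predicted with v_O + a_O * dt (see text);
    # velocity, yaw rate, acceleration, and extent stay constant in g(x).
    return x_pred
```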

Figure 3: Relevant frames, variables, and measures for the object tracking.

The RADAR measurement used for Kalman filter updates is given by

(3) $\mathbf{z}_{\text{RADAR}} = \begin{pmatrix} \mathbf{p}_{E,t}^\top & \psi_{E,t} & \mathbf{v}_{E,t}^\top & \mathbf{a}_{E,t}^\top & \mathbf{s}_t^\top \end{pmatrix}^\top.$

The RADAR measurement offers the possibility to update all parts of the state vector (1) except for the object’s yaw rate. The measurement function is

(4) $h_{\text{RADAR}}(\mathbf{x}) = \begin{pmatrix} R_2(\psi_{L,t}^{\text{ego}})^{-1} \left( \mathbf{p}_{L,t} - \mathbf{p}_{L,t}^{\text{ego}} \right) \\ \operatorname{norm}\!\left( \psi_{L,t} - \psi_{L,t}^{\text{ego}} \right) \\ R_2(\psi_{L,t}^{\text{ego}})^{-1} \left( R_2(\psi_{L,t})\, \mathbf{v}_{O,t} - \mathbf{v}_{L,t}^{\text{ego}} \right) \\ R_2(\psi_{L,t}^{\text{ego}})^{-1} \left( R_2(\psi_{L,t})\, \mathbf{a}_{O,t} - \mathbf{a}_{L,t}^{\text{ego}} \right) \\ \mathbf{s}_t \end{pmatrix},$

where the superscript ego indicates measures of the ego vehicle. The LIDAR measurement used for Kalman filter updates is given by

(5) $\mathbf{z}_{\text{LIDAR}} = \begin{pmatrix} \mathbf{p}_{E,t}^\top & \psi_{E,t} & \mathbf{v}_{E,t}^\top & \dot{\psi}_{E,t} \end{pmatrix}^\top,$

where the LIDAR measurement does not contain the object’s size, since the size is estimated from RADAR measurements in our case. While the LIDAR cannot infer an object’s full dimensions when observing it from one specific side only, the RADAR is able to determine the size more precisely through reflections on the ground. In contrast to the RADAR measurement, the Ibeo LUX LIDAR ECU provides an estimate of the object’s yaw rate, which is particularly helpful when predicting an object’s movement. The measurement function is given by

(6) $h_{\text{LIDAR}}(\mathbf{x}) = \begin{pmatrix} R_2(\psi_{L,t}^{\text{ego}})^{-1} \left( \mathbf{p}_{L,t} - \mathbf{p}_{L,t}^{\text{ego}} \right) \\ \operatorname{norm}\!\left( \psi_{L,t} - \psi_{L,t}^{\text{ego}} \right) \\ R_2(\psi_{L,t}^{\text{ego}})^{-1} \left( R_2(\psi_{L,t})\, \mathbf{v}_{O,t} - \mathbf{v}_{L,t}^{\text{ego}} \right) \\ \dot{\psi}_t - \dot{\psi}_t^{\text{ego}} \end{pmatrix}.$

Tracking an object’s center with multiple sensor sources may lead to jumps in the estimated state when the different bounding box sizes are not taken into account [15]. This circumstance is visualized in Figure 4, where a vehicle is tracked from a rear viewpoint and the two bounding box centers exhibit a non-negligible distance to each other. To alleviate this problem, a reference point of the bounding box, provided by the respective sensor object lists, is tracked instead. In Figure 3, the reference point is in the object’s center. When a measurement arrives with a reference point different from the one being tracked, the position $\mathbf{p}_L$ has to be changed using

(7) $f(\mathbf{x}) = \mathbf{p}_L + R_2(\psi_L)\, \mathbf{o},$

where o is the offset between the previous and new reference point.
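A possible implementation of the reference-point switch (7), reusing the rot2 helper and the assumed state layout from the prediction sketch above:

```python
import numpy as np

def switch_reference_point(x: np.ndarray, o: np.ndarray) -> np.ndarray:
    # o: 2D offset between the previous and the new reference point, given in
    # the object frame O. Only the tracked position p_L changes here; the
    # covariance is propagated separately with the Jacobian F = df/dx.
    x_new = x.copy()
    x_new[0:2] = x[0:2] + rot2(x[2]) @ o   # eq. (7): p_L + R_2(psi_L) * o
    return x_new
```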

Figure 4: Exemplary LIDAR and RADAR bounding boxes of the same object with a rear view point and possible reference points (circles).

The EKF’s covariance is propagated for the operations (2), (4), (6), and (7) using $\Sigma_{t+1} = J \Sigma_t J^\top$, where $J$ is the respective Jacobian. We leave out the definitions of the Jacobians $G = \partial g / \partial \mathbf{x}$, $F = \partial f / \partial \mathbf{x}$, $H_{\text{RADAR}} = \partial h_{\text{RADAR}} / \partial \mathbf{x}$, and $H_{\text{LIDAR}} = \partial h_{\text{LIDAR}} / \partial \mathbf{x}$ due to lack of space. However, these Jacobians can be computed using a symbolic computer algebra system.
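For instance, the prediction Jacobian $G = \partial g / \partial \mathbf{x}$ can be derived symbolically with SymPy. The sketch below covers only the planar part of the state, purely as an illustration of the approach:

```python
import sympy as sp

p_x, p_y, psi, v_x, v_y, w, dt = sp.symbols("p_x p_y psi v_x v_y omega Delta_t")

R2 = sp.Matrix([[sp.cos(psi), -sp.sin(psi)],
                [sp.sin(psi),  sp.cos(psi)]])
p = sp.Matrix([p_x, p_y])
v = sp.Matrix([v_x, v_y])

# Planar part of the motion model g(x) from eq. (2); the angle normalization
# is omitted here since it does not affect the derivatives.
g = sp.Matrix.vstack(p + R2 * v * dt,
                     sp.Matrix([psi + w * dt]),
                     v,
                     sp.Matrix([w]))
x = sp.Matrix([p_x, p_y, psi, v_x, v_y, w])

G = sp.simplify(g.jacobian(x))   # used in Sigma_{t+1} = G Sigma_t G^T
print(G)
```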

The extended object tracking fuses processed information of the RADAR and LIDAR, which is sufficient to robustly track pedestrians, bicyclists, and vehicles. State estimation with respect to the extent of the object can be improved by also handling the object’s width and length on a manifold, as in [19].

2.2 Visual saliency

In general, a visual saliency model takes an input image and detects areas likely to contain whole, partial, or groups of objects based on salient features, such as size, color, and shape [20]. The detected areas are typically displayed using an intensity map, as illustrated in Figure 5. In the context of AD, saliency maps have been used for object detection applications: the salient areas in a map can serve as region proposals for object detection [21], [22]. Similarly, saliency maps have also been used to identify the areas of the scene [23] as well as the objects and traffic participants that attract the driver’s attention [24].

Figure 5: Coverage and agreement trade-off of saliency maps. Traffic image and binary mask of traffic participants taken from the KITTI semantic segmentation dataset [25].

Two applications of the saliency maps were investigated in PRORETA 5: saliency-based traffic participant detection [26] and detection of driver attention over traffic participants, along with the assessment of drivers’ attentive state based on measures of gaze allocation [27].

Previous works on saliency-based object detection focus on the overall detection results and thus lack an analysis of the quality of the saliency maps in terms of the correspondence between the salient areas and the traffic participants in the image. In order to fill this gap, a systematic evaluation of different saliency models was conducted [26]. The evaluation revealed that saliency models exhibit a trade-off between the coverage of the salient areas and the agreement with the image segments containing traffic participants [26]. This trade-off is illustrated in Figure 5 with a set of saliency maps computed using the Spectral Residual model [28] at different resolutions (width × height). A coarse resolution produces a map with large connected salient areas covering several traffic participants. This results in good coverage but bad agreement, as shown in the 64 × 19 map, where the smaller traffic participants in the background are contained within a single salient area. At higher resolutions, the model produces maps with small and sparse salient areas. This results in bad coverage and bad agreement for the traffic participants in the foreground, as shown in the 256 × 77 map.
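For reference, a compact implementation of the Spectral Residual model [28] as we understand it; the working resolution and the smoothing parameters are exactly the tunable quantities driving the trade-off discussed above.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    # gray: 2D float array, already resized to the working resolution
    # (e.g. 64 x 19 or 256 x 77 as in the comparison above).
    f = np.fft.fft2(gray)
    log_amplitude = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    # The spectral residual is the log amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)   # post-smoothing
    return saliency / (saliency.max() + 1e-9)         # normalized map in [0, 1]
```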

The coverage and agreement trade-off can thus be summarized as follows: while a large salient area might cover many traffic participants simultaneously (i.e., good coverage with bad agreement), small and sparse salient areas might only cover them partially (i.e., bad coverage and agreement for mid-size traffic participants and good coverage and agreement for small traffic participants). This trade-off hinders the reliability of saliency models for detection tasks in challenging situations, such as images containing large traffic participants in the foreground and small ones in the background. Furthermore, saliency models tend to produce a large number of false positives due to high-contrast areas of the image, such as the contours of trees and buildings against the sky [26]. In conclusion, the results show that saliency detection needs to be adjusted for traffic participants appearing in different dimensions and that false positives caused by elements of the surroundings and the landscape need to be minimized [26]. It is important to note that saliency-based object detection was not integrated into the PRORETA 5 system illustrated in Figure 1 and is thus not featured in the system demonstration reported in Section 3. Nevertheless, the results are reported here in order to provide a complete overview of the methods investigated in PRORETA 5.

2.3 Multimodal object trajectory prediction

In order to predict the future behavior of various traffic participants, a multimodal trajectory prediction approach is used, which is explained in detail in [29]. Trajectory prediction plays an important role in the automated vehicle industry, helping to anticipate the behavior of other road users and helping the vehicle make informed decisions, such as avoiding collisions or adjusting speed.

Therefore, this area of investigation has been addressed by numerous researchers, particularly focusing on diverse deep learning prediction techniques. Recent papers that report the application of such methods are reviewed in [30]. Currently, several benchmark datasets are available, e.g. [31], [32], alongside active corresponding competitions such as Argoverse [33].

To calculate an accurate trajectory prediction, multiple inputs are considered, such as the observed movement of the traffic participant, the movement of other surrounding traffic participants, the road structure, traffic signs, and other environmental information. An important aspect is to consider the interactions of road users, e.g., overtaking maneuvers or a pedestrian crossing the street, which can lead to different future trajectories depending on the decisions made during the traffic situation.

The approach used in the PRORETA 5 project is defined by a model with multimodal trajectory prediction [29] for traffic participants like cars, buses, trucks, bicycles, and pedestrians. Understanding the environment and the intentions of other traffic users can improve the quality of the planned trajectory and offer a more stable and comfortable experience for the passengers in an automated vehicle. The method implemented for prediction uses vectors and an original design of a neural network that combines information about the observed objects, the group context, and the road context. This network was developed using the inD dataset [34] and a dataset created from real-world scenarios using the PRORETA 5 vehicle.

Besides its real-time capability, which allows the model to run in a real car, another advantage of this approach is that it combines several types of data about the agents, road information, and context, and is capable of learning the interactions between traffic participants.

As presented in Figure 6, the model has an encoding part that extracts information from the observed object (past trajectory and object type), the group context (information about the surrounding objects near the observed object), and the road context (information about the road structure), followed by a decoding part that predicts multiple possible trajectories (modes); more details can be found in [29]. The encoding part consists of three fully connected layers with a leaky ReLU activation function for each observed agent, for the group context, and for the road context. For the group and the road context, a symmetric function must be used, because such a function is insensitive to the order in which its operands are considered. In our case, a max pooling component is applied to the embeddings of the other traffic participants and the road structure. The decoding part of the network extracts the information for future trajectories using three fully connected layers with a leaky ReLU activation function and a final fully connected layer with a sigmoid (logistic) activation function. The outputs represent the future trajectories with a probability for each mode (trajectory). For this network, information about the past, the group context, and the road context was extracted in order to learn different traffic interactions. In this way, the network can associate a past trajectory with surroundings information.
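The following PyTorch sketch mirrors the described encode/max-pool/decode structure. Layer widths, input dimension, prediction horizon, and number of modes are assumptions chosen for illustration, not the values of [29].

```python
import torch
import torch.nn as nn

class MultimodalPredictor(nn.Module):
    def __init__(self, in_dim=16, hid=128, horizon=30, modes=6):
        super().__init__()
        def mlp(d_in):
            # Three fully connected layers with leaky ReLU, as described above.
            return nn.Sequential(
                nn.Linear(d_in, hid), nn.LeakyReLU(),
                nn.Linear(hid, hid), nn.LeakyReLU(),
                nn.Linear(hid, hid), nn.LeakyReLU())
        self.enc_agent = mlp(in_dim)   # observed object: past trajectory + type
        self.enc_group = mlp(in_dim)   # surrounding traffic participants
        self.enc_road = mlp(in_dim)    # road-structure vectors
        self.dec_traj = nn.Sequential(mlp(3 * hid),
                                      nn.Linear(hid, modes * horizon * 2))
        self.dec_prob = nn.Sequential(nn.Linear(3 * hid, modes), nn.Sigmoid())
        self.horizon, self.modes = horizon, modes

    def forward(self, agent, group, road):
        # agent: (B, in_dim); group, road: (B, N, in_dim).
        # Max pooling is the symmetric function: the result is insensitive
        # to the order of the surrounding elements.
        ctx = torch.cat([self.enc_agent(agent),
                         self.enc_group(group).max(dim=1).values,
                         self.enc_road(road).max(dim=1).values], dim=-1)
        trajs = self.dec_traj(ctx).view(-1, self.modes, self.horizon, 2)
        probs = self.dec_prob(ctx)     # one probability per mode
        return trajs, probs
```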

Figure 6: Overview of the object trajectory prediction approach, based on [29].

The training of the model is based on a conditional loss function that consists of two terms. The first term is a general loss that is applied to all modes and aims to decrease the mean squared error (MSE) of the trajectories, weighted by their mode probabilities. The second term is a specific loss that is used only for the best mode, i.e., the mode with the smallest MSE. In order to integrate the object prediction in the PRORETA 5 vehicle, Frenet coordinates are used. This improves the precision of the predicted trajectories by transforming curved road segments into straight ones.
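A minimal sketch of such a two-term loss, assuming the trajectory and probability tensors from the network sketch above; the exact weighting in [29] may differ.

```python
import torch

def conditional_loss(trajs: torch.Tensor, probs: torch.Tensor,
                     gt: torch.Tensor) -> torch.Tensor:
    # trajs: (B, M, T, 2) predicted modes, probs: (B, M), gt: (B, T, 2).
    mse = ((trajs - gt.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # per-mode MSE, (B, M)
    general = (probs * mse).sum(dim=1)   # all modes, weighted by mode probability
    specific = mse.min(dim=1).values     # best mode only (smallest MSE)
    return (general + specific).mean()
```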

Furthermore, the object trajectory prediction module was verified on the test track for different types of traffic participants in various traffic scenes. The prediction results can be improved by including more variation (e.g. speed, traffic participant type, and behavior) in the dataset.

2.4 Anytime tree-based trajectory planning

To plan the future trajectory of the automated system, a novel approach, CarPre trajectory planning (Monte Carlo Model Predictive Trajectory Planner), is used, which is explained in detail in [35]. In the following, the motivation as well as a short overview of the planning approach is presented.

In the state of the art, the planning problem is divided into behavioral planning and motion planning as in [36], [37] to cope with the complex, non-convex planning problem. While this division is often chosen (e.g. [38], [39]), it has disadvantages in the urban domain, since the finite set of high-level behavior classes must be chosen such that all possible traffic scenarios can be handled. However, the number of possible traffic scenarios is very large due to the potentially high number of other traffic participants as well as the limited driving area. Therefore, determining the set of discretized behavior classes is difficult, which limits this divided approach. The common alternative of determining a behavioral trajectory (e.g. [40]) has the problem of ensuring that the motion planning follows the behavioral trajectory and does not converge to another local optimum. Therefore, the combined planning problem of behavioral and motion planning is considered as in [41], [42]. However, since this is complex and a long planning horizon is needed to make foresighted decisions, efficient algorithms are needed. One promising approach, which is used in behavior planning, e.g. [43], is Monte Carlo tree search (MCTS) [44]. MCTS is known to solve complex decision problems efficiently and is therefore chosen as the basis for CarPre planning. In our approach, a conversion to Frenet coordinates as in, e.g., [45] is omitted, since lateral accelerations could then no longer be taken into account during planning, which causes problems with sharp curves [41], such as the 90° curves that commonly appear in urban areas. Likewise, a path-velocity decomposition as in, e.g., [46] is omitted, because the lateral vehicle movements depend directly on the vehicle speed [47].

In CarPre trajectory planning, a trajectory tree is iteratively created by MCTS, where each tree node represents a possible vehicle state x ego and each edge a discretized action u ego. Each action is a pair of an acceleration a and a steering rate ω. By applying an input u ego,t for a sampling time T in, a new state x ego,t+1 can be calculated from the previous state x ego,t using the kinematic single-track model of [35] with the reference point at the middle of the front axle, as discussed in [48]. This creates an equitemporal graph from which the planned trajectory is extracted. To discretize the action space, the acceleration values are equally distributed and limited by the minimum and maximum acceleration:

(8) $a_d \in \left\{ a_{\min} + p\, \Delta a \;\middle|\; p \in \mathbb{Z}_{\geq 0},\; a_{\min} + p\, \Delta a \leq a_{\max} \right\}.$

For the lateral movements, the steering angle δ is discretized by equally distributing δ between the limits defined by the speed-dependent transformation from [47]:

(9) $\left| \delta_{\max}(v) \right| = \min\!\left( \arcsin\!\left( \kappa_{\max}\, l \right),\; \arcsin\!\left( \frac{a_{\text{lat,max}}\, l}{v^2} \right) \right)$

with the maximum drivable path curvature κ max, the maximum lateral acceleration a lat,max, and the wheelbase l of the vehicle. This can be seen in Figure 7, where the boundaries extracted from [47] (dashed red lines) are adapted for the CarPre planner (red lines) to increase the comfort of the system. The steering rate ω is then chosen so that no discretization errors arise.
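The two discretizations (8) and (9) could look as follows in Python; all numeric limits are illustrative placeholders rather than the calibrated values of [35] and [47].

```python
import numpy as np

def discrete_accelerations(a_min: float = -3.0, a_max: float = 2.0,
                           delta_a: float = 0.5) -> np.ndarray:
    # Equally distributed acceleration values between the limits, eq. (8).
    return np.arange(a_min, a_max + 1e-9, delta_a)

def delta_max(v: float, kappa_max: float = 0.2, a_lat_max: float = 2.0,
              wheelbase: float = 2.79) -> float:
    # Speed-dependent steering-angle limit, eq. (9). Arguments to arcsin are
    # clamped to 1 to stay in the valid domain at very low speeds.
    limit_curvature = np.arcsin(min(1.0, kappa_max * wheelbase))
    if v <= 0.0:
        return float(limit_curvature)
    limit_lateral = np.arcsin(min(1.0, a_lat_max * wheelbase / v ** 2))
    return float(min(limit_curvature, limit_lateral))
```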

Figure 7: Discretization of the steering angle δ (orange lines) compared to extracted statistics of human drivers from [47] (2D histogram with a logarithmic scale). Figure taken from [35].

To estimate the long-term effect of each short-term action of the vehicle, Monte Carlo simulations combined with a heuristic are used. The heuristic, extracted from human driving data, models steering to follow the driving lane as well as minimizing the jerk (i.e. choosing the same acceleration value as before). With this, the estimated reward for each node is always available and improves with more iterations of the algorithm. This enables real-time application, since a planned trajectory can be extracted at any time. The algorithm was put into operation on the PRORETA 5 vehicle, and a first evaluation, which reveals comfortable and foresighted driving, can be found in [35]. For future work, the result of the Monte Carlo simulations can be approximated using neural networks to increase the speed of convergence, which enables longer planning horizons.

2.5 Logic-based safety check for AI functions

As Nascimento et al. [49] point out in their study, besides their great potential in various fields of AD, AI approaches also pose risks regarding safety. Since it is difficult to gain insight into the reasoning of AI algorithms, approaches for enhancing the safety of AI-driven automated vehicles include increasing the amount of training data and especially the coverage of critical scenarios [50]. However, training data can hardly represent the entirety of reality. There are several approaches for safeguarding an automated vehicle, like [51] and [52], but they mostly assume ideally working sensors and thus only safeguard a part of the automated vehicle. The goal of the so-called Safety Check (SC) concept, which was previously presented in [53], [54], is to safeguard the results of the AI algorithms during operation in a separate module before the resulting trajectory is executed by the vehicle. Consequently, the SC uses only conventional approaches without learning-based AI, in order not to share the AI-caused safety issues of the normal-operation behavior planner. Since another goal of the SC is to be easily integrable, no additional safeguarding hardware is used. The SC only has access to the same sensor data as all other modules in the AD system but interprets them with different approaches.

To check the safety of the currently planned trajectory and the state of the AD system, several submodules perform different checks. Besides the system health check, which verifies that all modules and sensors in the AD system are running and sending data, the plausibility of the sensor data as well as of the object list from the perception module is checked. Thus, the SC can rely on the correctness and completeness of the available sensor data and object lists for the object criticality check, another SC submodule used to identify collision-critical objects. Furthermore, the behavior space conformity (see Section 2.6), the physical feasibility, and the temporal stability of the planned trajectory are checked, the latter meaning the consistency of successively planned trajectories. The architecture of the SC is depicted in Figure 8. Further details about the architecture and the functionality of the submodules can be found in the dissertation of Popp [54].

Figure 8: Architecture of the SC module, based on [54].

The safety flag is set by the mentioned submodules and can take the values safe or unsafe. If none of the submodules rates the state of the AD system or the currently planned trajectory as unsafe, the planned trajectory is forwarded to the motion controller. Otherwise, if at least one submodule detects a safety issue, an emergency trajectory is initiated. Since the considered ODD includes a speed limit of 30 km/h, braking to standstill is mostly preferable to evasion maneuvers in safety-critical situations. Thus, the preferred strategy to reach a minimal risk condition is to decelerate to standstill along the path of the currently planned or the last safe trajectory, depending on the kind of detected unsafe state.
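Conceptually, the flag aggregation reduces to a conjunction over all submodule verdicts. A schematic sketch, with interfaces and names assumed for illustration:

```python
from enum import Enum

class Flag(Enum):
    SAFE = 0
    UNSAFE = 1

def aggregate_safety(checks) -> Flag:
    # `checks` is an iterable of submodule verdicts (system health, sensor and
    # object-list plausibility, object criticality, behavior space conformity,
    # physical feasibility, temporal stability). One unsafe vote suffices.
    return Flag.UNSAFE if any(c is Flag.UNSAFE for c in checks) else Flag.SAFE

def select_trajectory(flag: Flag, planned, emergency):
    # Forward the planned trajectory only if every check passed; otherwise
    # initiate the risk-minimizing deceleration to standstill.
    return planned if flag is Flag.SAFE else emergency
```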

The functionality of the safety check was verified on the test track as well as in public traffic, focusing on object list plausibility and object criticality [54]. The validation of the safety check module is still pending.

2.6 Specifying and testing traffic rule compliance

One major aspect of road safety is conformity to traffic regulations. Like all other traffic participants, an automated vehicle is required to behave according to the traffic rules. Thus, within PRORETA 5, the consideration of traffic rule compliance is incorporated in the development process. From this requirement, it follows that a holistic consideration of traffic rule compliance is indispensable. This means that first the behavior constraints for the vehicle based on traffic rules need to be specified. Then, necessary functionalities on system level within the targeted ODD need to be derived, and finally, test criteria and methods need to be developed to verify these functionalities.

Current approaches in the field of behavior frameworks for automated vehicles consider traffic rules, but either the rules are only regarded on a high level within a rule hierarchy [55] or they are modeled for specific scenarios only [56], [57]. Apart from this, various approaches aim to formalize traffic rules for testing general traffic rules at runtime in the vehicle [58]–[61]. Also, there exist approaches for specific traffic areas (e.g. specific intersections) [62], [63]. Still, no approach known to the authors is applicable to different traffic domains by using a universal description of behavior constraints. Thus, methods have been developed within PRORETA 5 to close this gap.

In order to specify the behavior constraints, it is necessary to delineate which behavior is rule compliant and which is not within the entire ODD. This directly depends on the local scenery,[2] which represents the static traffic environment. Present elements (e.g. traffic signs) instantiate the rules that constrain the vehicle behavior. Current description approaches represent the scenery as a human perceives it, raising the questions: what information is actually necessary from a behavioral perspective, and how may it be represented? These questions are answered with the Behavior-Semantic Scenery Description (BSSD), which was previously presented in [64] and [65]. This approach introduces a structure of attributes (speed limit, boundary, reservation, and overtake) that assigns the traffic-rule-based behavioral information to the scenery and thus directly specifies the behavioral constraints. Because these attributes describe behavioral rules rather than elements of the traffic environment, they are applicable to any traffic area.

Figure 9 depicts one of the scenarios analyzed in PRORETA 5 with the BSSD of the local scenery. The green and red colored sections of the roadway depict lanes that are own-reserved and externally-reserved, respectively. The reservation attribute describes the rules regarding priority and the right to reside in a certain space. The lane pointing in the driving direction of the ego vehicle is therefore own-reserved: the ego vehicle has priority within this lane and can stay in it permanently. Shifting over the dashed lane marking into the oncoming lane means moving into an externally-reserved space, because priority must be given to oncoming vehicles. The ego vehicle is only allowed to stay in this lane as long as it does not hinder oncoming vehicles, which is expressed by the link (red arrow) of the reservation. As a consequence of the dashed lane marking, the boundary attribute is set to allowed. In both lanes, a speed limit of 30 km/h applies due to the respective speed limit sign in the scenery. Because there are no further traffic signs, overtaking is allowed. In [66], a method to derive these attributes automatically from an HD map is presented.
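To make the attribute structure concrete, the following schematic encodes the four BSSD attributes for the two lanes of Figure 9. The field names are illustrative and not the normative BSSD schema.

```python
from dataclasses import dataclass
from enum import Enum

class Reservation(Enum):
    OWN = "own-reserved"              # ego has priority and may stay permanently
    EXTERNAL = "externally-reserved"  # priority must be given to others

@dataclass
class BssdLane:
    # Minimal sketch of the four behavioral attributes for one lane.
    speed_limit_kmh: float
    boundary_crossing_allowed: bool   # dashed marking -> crossing allowed
    reservation: Reservation
    overtake_allowed: bool

ego_lane = BssdLane(30.0, True, Reservation.OWN, True)
oncoming_lane = BssdLane(30.0, True, Reservation.EXTERNAL, True)
```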

Figure 9: Overview of shift out of lane scenario with underlying BSSD description.

With this representation, the constraints for vehicle behavior are described semantically. To be able to evaluate the vehicle behavior, it is necessary to quantify these constraints in order to obtain test criteria. In [67], all behavioral attributes are formalized using predicate logic. These criteria were used within PRORETA 5 to evaluate the test cases. By using the systematic approaches of Schuldt [68] and Amersbach [69], a test plan for the validation of traffic rule compliance was derived. The test effort was thereby reduced by implementing equivalence partitioning [70, Part 4, p. 10], boundary value analysis [70, Part 4, p. 12], and combinatorial test design techniques [70, Part 4, p. 15]. A more detailed insight into this process is given in the dissertation of Glatzki [67].

For the example introduced in Figure 9, insufficiencies of the vehicle behavior were found during development by applying the reservation attribute criterion. The criterion was violated because the ego vehicle left the lane in front of the oncoming vehicle too late and thus hindered it from continuing in its lane.

Further analysis revealed that the ego vehicle had already left the ego lane at a large distance from the parked objects. Because the environmental perception had not yet picked up the oncoming object, the planner was unable to determine with sufficient reliability whether driving in the oncoming lane constituted a violation. When the oncoming object was eventually perceived, the ego vehicle also failed to reposition itself in the ego lane. By modifying the cost function of the planner module, the ego vehicle moves out of the lane closer to the parked objects, and the planner is thus able to better judge whether it can move into the oncoming lane. With this adaptation, the vehicle conformed to the reservation attribute. The introduced method therefore helped to find behavioral insufficiencies of the system and to improve it such that it conforms to the traffic rules.

Specifying and testing traffic rule compliance uncovered various functional insufficiencies in the system. Since traffic rules are not the only influence on the behavior, a clear hierarchy for all behavior specifications needs to be determined in the future (e.g. [55]): a violation of the traffic rules might be reasonable if a collision is thereby avoided. This hierarchy needs to account for misbehavior of other traffic participants and possible compensation by the ego vehicle. For this, knowing and understanding the rules is essential, for which the BSSD serves as the basis. An international agreement of the regulating legislative bodies on quantified limits for vehicle behavior will be needed.

3 System demonstration

The automated driving system has been extensively tested in simulation and in real-world driving scenarios on the August-Euler airfield in the city of Griesheim, Germany. A video showcasing the system’s capabilities is provided.[3] Through the cohesive integration of the modules described in the previous chapter, the vehicle has executed distinct driving scenarios at SAE level 4 automation. Figure 10 displays the scenarios that the test vehicle carried out robustly and repeatedly on the test track. The scenarios comprise following the lane, following a preceding car, shifting out of the lane to pass a parked vehicle, and approaching objects that require emergency braking. The maximum velocity during the execution of the scenarios was 30 km/h. For lane following, the test vehicle completed several laps on the airfield without crossing lane boundaries. For the second scenario, a vehicle or bicyclist preceded the test vehicle at varying speeds, forcing decelerations and accelerations of the automated car in its own lane. Furthermore, the vehicle has also been able to shift out of a lane when passing a parked car that extends onto the driving lane. In order to examine the behavior during emergency situations, the object tracking module simulates a failure by publishing an empty object list. The logic-based safety check, relying on raw point cloud data, registers the obstacles and triggers an emergency braking to standstill.

Figure 10: Use case represented by typical urban scenarios.

During a demonstration event, automotive industry experts were invited as passengers while the scenarios described above were performed. A total of 26 vehicle passengers answered a questionnaire about the system; the results are shown in Figure 11. The overall system was rated 7.6 out of 10 points after the demonstration, see Figure 11(a). The answers to five additional questions are illustrated in Figure 11(b). The vast majority of passengers agreed that driving in the automated vehicle felt comfortable and safe during closed-loop operation. According to the passengers, the reaction time of the closed-loop system was not too long. Note that the representation of this answer in Figure 11(b) deviates from the other boxes due to the formulation of the question. The passengers also agreed that the safety check module operated in an appropriate way and increased their general trust in the automated system.

Figure 11: Results of the demonstration survey. (a) Boxplot on a scale from 1 to 10, (b) boxplots for five questions on a scale from −2 (strongly disagree) to 2 (strongly agree).

4 Conclusions

Within this article, building blocks for automated driving in urban traffic have been presented that were developed in the scope of the PRORETA 5 project. Addressing use cases that are typical for residential areas, like object following, oncoming traffic, passing parked vehicles on the road, and approaching crossing objects, several functional modules with novel approaches were integrated and tested in a real test vehicle.

In summary, the general applicability of the developed building blocks and their interaction within the developed system were verified in selected driving scenarios. The next steps are improving robustness and general applicability of the PRORETA 5 system and testing its safe functionality in public traffic.


Corresponding author: Christoph Popp, Institute of Automotive Engineering, Technical University of Darmstadt, Darmstadt, Germany, E-mail:

About the authors

Christoph Popp

Dr.-Ing. Christoph Popp is currently senior engineer for measurement and automation at Persival GmbH. The mentioned research on the logic-based safety check for AI functions originates from his time as a research associate at the Institute of Automotive Engineering of Technical University of Darmstadt.

Andreas Serov

Andreas Serov is a research associate with the Cognitive Neuroinformatics of the University of Bremen. His research interests include state estimation, visual-inertial odometry, and SLAM.

Felix Glatzki

Dr.-Ing. Felix Glatzki is currently systems engineer at Innovation Line Driverless of Business Area Autonomous Mobility (AM) at Continental. The mentioned research on specification and testing traffic rule compliance originates from his time as a research associate at the Institute of Automotive Engineering of Technical University of Darmstadt.

Christoph Ziegler

Dr.-Ing. Christoph Ziegler was research associate with the Department of Control Methods and Intelligent Systems of Technical University of Darmstadt. Main fields of activity: analysis of human driving data and trajectory planning.

Andreea-Iulia Olaru

Andreea-Iulia Olaru is a PhD student in the Department of Computer Science and Engineering of “Gheorghe Asachi” Technical University of Iaşi, Romania and a software engineer at Continental Autonomous Mobility Romania. Main fields of activity: AI-based trajectory prediction, Android development.

Acknowledgment

We kindly thank Continental and all the extraordinary people working there for their great cooperation and support within PRORETA 5.

  1. Research ethics: Not applicable.

  2. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: The authors state no conflict of interest.

  4. Research funding: None declared.

  5. Data availability: Not applicable.

References

[1] E. Bender, M. Darms, M. Schorn, U. Stählin, and R. Isermann, “Antikollisionssystem PRORETA auf dem Weg zum unfallvermeidenden Fahrzeug,” in Automobiltechnische Zeitschrift, 2007, pp. 337–341.10.1007/BF03221883Search in Google Scholar

[2] A. Hohm, R. Mannale, K. Schmitt, and C. Wojek, “Vermeidung von Überholunfällen,” in Automobiltechnische Zeitschrift, 2010, pp. 712–718.10.1007/BF03222197Search in Google Scholar

[3] E. Bauer, et al.., “PRORETA 3: an integrated approach to collision avoidance and vehicle automation,” Automatisierungstechnik, vol. 60, no. 12, pp. 755–765, 2021. https://doi.org/10.1524/auto.2012.1046.Search in Google Scholar

[4] J. Schwehr, et al.., “The PRORETA 4 city assistant system,” Automatisierungstechnik, vol. 67, no. 9, pp. 783–798, 2019. https://doi.org/10.1515/auto-2019-0051.Search in Google Scholar

[5] SAE International On-Road Automated Driving (ORAD) Committee, “Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles,” SAE Standard J3016, Rev. Apr. 2021. Available at: https://www.sae.org/standards/content/j3016_202104/; http://volunteers.sae.org/authors/FormattingCitations.pdf.Search in Google Scholar

[6] A. Folkers, et al.., “The OPA3L system and testconcept for urban autonomous driving,” in IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 2022, pp. 1949–1956.10.1109/ITSC55140.2022.9922416Search in Google Scholar

[7] T. Woopen, et al.., “UNICARagil – disruptive modular architectures for agile, automated vehicle concepts,” in 27. Aachen Colloquium Automobile and Engine Technology: October 8th-10th, 2018, Eurogress Aachen, Germany. Veranstaltungstitel: 27. Aachen Colloquium Automobile and Engine Technology, Aachen, Germany, Institute for Automotive Engineering, RWTH Aachen, 2022, pp. 663–694. Available at: http://tuprints.ulb.tu-darmstadt.de/22039/.Search in Google Scholar

[8] RWTH Aachen, AUTOtech.agil, 2023. Available at: https://www.ika.rwth-aachen.de/de/kompetenzen/projekte/automatisiertes-fahren/autotech-agil.html Accessed: Aug. 17, 2023.Search in Google Scholar

[9] S. Bohnaker, @CITY, 2023. Available at: https://www.atcity-online.de Accessed: Aug. 17, 2023.Search in Google Scholar

[10] S. Bohnaker, STADT:up, 2023. Available at: https://www.stadtup-online.de Accessed: Aug. 17, 2023.Search in Google Scholar

[11] Mercedes-Benz, The Front Runner in Automated Driving and Safety Technologies, 2022. Available at: https://group.mercedes-benz.com/innovation/case/autonomous/drive-pilot-2.html Accessed: Aug. 17, 2023.Search in Google Scholar

[12] Mercedes-Benz, Mercedes-Benz and Bosch Driverless Parking System, 2022. Available at: https://group.mercedes-benz.com/innovation/product-innovation/autonomous-driving/intelligent-park-pilot.html Accessed: Aug. 17, 2023.Search in Google Scholar

[13] K. Granström, M. Baum, and S. Reuter, “Extended object tracking: introduction, overview, and applications,” J. Adv. Inf. Fusion, vol. 12, no. 2, pp. 139–174, 2017.Search in Google Scholar

[14] M. Herrmann, J. Müller, J. Strohbeck, and M. Buchholz, “Environment modeling based on generic infrastructure sensor interfaces using a centralized labeled-multi-Bernoulli filter,” in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 2414–2420.10.1109/ITSC.2019.8916923Search in Google Scholar

[15] Y. Zhang, X. Sun, H. Xu, and E. Yao, “Tracking multi-vehicles with reference points switches at the intersection using a roadside LiDAR sensor,” IEEE Access, vol. 7, pp. 174072–174082, 2019. https://doi.org/10.1109/access.2019.2953747.Search in Google Scholar

[16] H. Hajri and M.-C. Rahal, “Real time lidar and radar high-level fusion for obstacle detection and tracking with evaluation on a ground truth,” in 20th International Conference on Automation, Robotics and Applications Lisbon sept 24-25, 2018, 2018. Available at: https://hal.science/hal-01846271.Search in Google Scholar

[17] A. Demba and D. P. F. Möller, “Vehicle-to-vehicle communication technology,” in 2018 IEEE International Conference on Electro/Information Technology (EIT), 2018, pp. 0459–0464.10.1109/EIT.2018.8500189Search in Google Scholar

[18] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds,” Inf. Fusion, vol. 14, no. 1, pp. 57–77, 2013. https://doi.org/10.1016/j.inffus.2011.08.003.Search in Google Scholar

[19] L. A. Giefer, J. Clemens, and K. Schill, “Extended object tracking on the affine group aff(2),” in IEEE 23rd International Conference on Information Fusion (FUSION), 2020, pp. 1–8.10.23919/FUSION45008.2020.9190566Search in Google Scholar

[20] L. Zhang and W. Lin, Selective Visual Attention: Computational Models and Applications, Hoboken, NJ, USA, John Wiley & Sons – IEEE Press, 2013.10.1002/9780470828144Search in Google Scholar

[21] G. Silva, L. Schnitman, and L. Oliveira, “Multi-scale spectral residual analysis to speed up image object detection,” in 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, 2012, pp. 79–86.10.1109/SIBGRAPI.2012.20Search in Google Scholar

[22] A.-K. Fattal, M. Karg, C. Scharfenberger, and J. Adamy, “Saliency-guided region proposal network for CNN based object detection,” in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017, pp. 1–8.10.1109/ITSC.2017.8317756Search in Google Scholar

[23] T. Deng, K. Yang, Y. Li, and H. Yan, “Where does the driver look? Top-Down-Based saliency detection in a traffic driving environment,” IEEE Trans. Intell. Transp. Syst., vol. 17, no. 7, pp. 2051–2062, 2016. https://doi.org/10.1109/tits.2016.2535402.Search in Google Scholar

[24] Y. Rong, N.-R. Kassautzki, W. Fuhl, and E. Kasneci, “Where and what: driver attention-based object detection,” in Proc. ACM Hum.-Comput. Interact. 6.ETRA, 2022.10.1145/3530887Search in Google Scholar

[25] H. Alhaija, S. Mustikovela, L. Mescheder, A. Geiger, and C. Rother, “Augmented reality meets computer vision: efficient data generation for urban driving scenes,” in International Journal of Computer Vision (IJCV), 2018.10.1007/s11263-018-1070-xSearch in Google Scholar

[26] J. Maldonado and L. A. Giefer, “A comparison of bottom-up models for spatial saliency predictions in autonomous driving,” Sensors, vol. 21, no. 20, p. 6825, 2021. https://doi.org/10.3390/s21206825.Search in Google Scholar PubMed PubMed Central

[27] J. Maldonado and L. A. Giefer, “On the use of distribution-based metrics for the evaluation of drivers’ fixation maps against spatial baselines,” in Symposium on Eye Tracking Research and Applications. ETRA ’22, New York, NY, USA, Association for Computing Machinery, 2022.10.1145/3517031.3529629Search in Google Scholar

[28] X. Hou and L. Zhang, “Saliency detection: a spectral residual approach,” in IEEE Conference on Computer Vision and Pattern Recognition, 2007.10.1109/CVPR.2007.383267Search in Google Scholar

[29] A.-I. Patachi and F. Leon, “Multiagent multimodal trajectory prediction in urban traffic scenarios using a neural network-based solution,” Mathematics, vol. 11, no. 8, pp. 1–25, 2023. https://doi.org/10.3390/math11081923.Search in Google Scholar

[30] F. Leon and M. Gavrilescu, “A review of tracking and trajectory prediction methods for autonomous driving,” Mathematics, vol. 9, no. 6, pp. 1–37, 2021. https://doi.org/10.3390/math9060660.

[31] H. Caesar, et al., “nuScenes: a multimodal dataset for autonomous driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 11618–11628. https://doi.org/10.1109/CVPR42600.2020.01164.

[32] S. Ettinger, et al., “Large scale interactive motion forecasting for autonomous driving: the Waymo Open Motion Dataset,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9710–9719. https://doi.org/10.1109/ICCV48922.2021.00957.

[33] B. Wilson, et al., “Argoverse 2: next generation datasets for self-driving perception and forecasting,” in Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks), 2021.

[34] J. Bock, R. Krajewski, T. Moers, S. Runde, L. Vater, and L. Eckstein, “The inD dataset: a drone dataset of naturalistic road user trajectories at German intersections,” in 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 1929–1934. https://doi.org/10.1109/IV47402.2020.9304839.

[35] C. Ziegler and J. Adamy, “Anytime tree-based trajectory planning for urban driving,” IEEE Open J. Intell. Transp. Syst., vol. 4, pp. 48–57, 2023. https://doi.org/10.1109/ojits.2023.3235986.

[36] F. Garrido and P. Resende, “Review of decision-making and planning approaches in automated driving,” IEEE Access, vol. 10, pp. 100348–100366, 2022. https://doi.org/10.1109/access.2022.3207759.

[37] B. Paden, M. Cap, S. Z. Yong, D. Yershov, and E. Frazzoli, “A survey of motion planning and control techniques for self-driving urban vehicles,” IEEE Trans. Intell. Veh., vol. 1, no. 1, pp. 33–55, 2016. https://doi.org/10.1109/tiv.2016.2578706.

[38] S. Manzinger, C. Pek, and M. Althoff, “Using reachable sets for trajectory planning of automated vehicles,” IEEE Trans. Intell. Veh., vol. 6, no. 2, pp. 232–248, 2021. https://doi.org/10.1109/tiv.2020.3017342.

[39] F. Seccamonte, J. Kabzan, and E. Frazzoli, “On maximizing lateral clearance of an autonomous vehicle in urban environments,” in IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 2019. https://doi.org/10.1109/ITSC.2019.8917353.

[40] W. Lim, S. Lee, K. Jo, and M. Sunwoo, “Behavioral trajectory planning for motion planning in urban environments,” in IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017. https://doi.org/10.1109/ITSC.2017.8317933.

[41] S. Heinrich, “Planning universal on-road driving strategies for automated vehicles,” Ph.D. thesis, Freie Universität Berlin, 2018. https://doi.org/10.1007/978-3-658-21954-3.

[42] M. McNaughton, C. Urmson, J. M. Dolan, and J.-W. Lee, “Motion planning for autonomous driving with a conformal spatiotemporal lattice,” in 2011 IEEE International Conference on Robotics and Automation, IEEE, 2011. https://doi.org/10.1109/ICRA.2011.5980223.

[43] K. Kurzer, M. Fechner, and J. M. Zöllner, “Accelerating cooperative planning for automated vehicles with learned heuristics and Monte Carlo tree search,” in 2020 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2020. https://doi.org/10.1109/IV47402.2020.9304597.

[44] R. Coulom, “Efficient selectivity and backup operators in Monte-Carlo tree search,” in Computers and Games, Berlin, Heidelberg, Springer, 2007, pp. 72–83. https://doi.org/10.1007/978-3-540-75538-8_7.

[45] Y. Meng, Y. Wu, Q. Gu, and L. Liu, “A decoupled trajectory planning framework based on the integration of lattice searching and convex optimization,” IEEE Access, vol. 7, pp. 130530–130551, 2019. https://doi.org/10.1109/access.2019.2940271.

[46] T. Puphal, M. Probst, M. Komuro, Y. Li, and J. Eggert, “Comfortable priority handling with predictive velocity optimization for intersection crossings,” in IEEE Intelligent Transportation Systems Conference (ITSC), 2019. https://doi.org/10.1109/ITSC.2019.8917240.

[47] C. Ziegler, V. Willert, and J. Adamy, “Modeling driving behavior of human drivers for trajectory planning,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 11, pp. 20889–20898, 2022. https://doi.org/10.1109/tits.2022.3183204.

[48] C. Popp, C. Ziegler, M. Sippel, and H. Winner, “Ideal reference point in planning and control for automated car-like vehicles,” IEEE Trans. Intell. Veh., vol. 8, no. 2, pp. 1415–1424, 2023. https://doi.org/10.1109/tiv.2022.3156370.

[49] A. M. Nascimento, et al., “A systematic literature review about the impact of artificial intelligence on autonomous vehicle safety,” IEEE Trans. Intell. Transp. Syst., vol. 21, no. 12, pp. 4928–4946, 2020. https://doi.org/10.1109/tits.2019.2949915.

[50] S. Burton, L. Gauerhof, and C. Heinzemann, “Making the case for safety of machine learning in highly automated driving,” in Computer Safety, Reliability, and Security: SAFECOMP 2017 Workshops, ASSURE, DECSoS, SASSUR, TELERISE, and TIPS, Trento, Italy, September 12, 2017, Proceedings 36, Springer, 2017, pp. 5–16.

[51] C. B. S. T. Molina, J. R. De Almeida, L. F. Vismari, R. I. R. Gonzalez, J. K. Naufal, and J. Camargo, “Assuring fully autonomous vehicles safety by design: the autonomous vehicle control (AVC) module strategy,” in 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), IEEE, 2017, pp. 16–21. https://doi.org/10.1109/DSN-W.2017.14.

[52] T. Stahl, M. Eicher, J. Betz, and F. Diermeyer, “Online verification concept for autonomous vehicles – illustrative study for a trajectory planning module,” in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2020, pp. 1–7. https://doi.org/10.1109/ITSC45102.2020.9294703.

[53] C. Popp, S. M. Ackermann, and H. Winner, “Approach to maintain a safe state of an automated vehicle in case of unsafe desired behavior,” in 14. Uni-DAS e.V. Workshop Fahrerassistenz und automatisiertes Fahren: 09.–11.05.2022, 2022, pp. 35–46.

[54] C. Popp, “Simultaner Safety-Check von Trajektorien beim Automatisierten Fahren im Urbanen Verkehr,” Ph.D. thesis, Technische Universität Darmstadt, 2023.

[55] A. Censi, et al., “Liability, ethics, and culture-aware behavior specification using rulebooks,” in IEEE International Conference on Robotics and Automation (ICRA), 2019, pp. 8536–8542. https://doi.org/10.1109/ICRA.2019.8794364.

[56] D. Lopez, R. Waldmann, C. Joerdens, and R. Rojas, “Scenario interpretation based on primary situations for automatic turning at urban intersections,” 2017, pp. 15–23.

[57] C. Zhao, et al., “A right-of-way assignment strategy to ensure traffic safety and efficiency in lane change,” arXiv:1904.06500, 2019. Available at: https://api.semanticscholar.org/CorpusID:119117034.

[58] K. Esterle, L. Gressenbuch, and A. C. Knoll, “Formalizing traffic rules for machine interpretability,” arXiv:2007.00330, 2020. Available at: https://arxiv.org/abs/2007.00330.

[59] C. Pek, S. Manzinger, M. Koschi, and M. Althoff, “Using online verification to prevent autonomous vehicles from causing accidents,” Nat. Mach. Intell., vol. 2, pp. 518–528, 2020. https://doi.org/10.1038/s42256-020-0225-y.

[60] A. Rizaldi and M. Althoff, “Formalising traffic rules for accountability of autonomous vehicles,” in IEEE 18th International Conference on Intelligent Transportation Systems (ITSC), Gran Canaria, Spain, IEEE, 2015. https://doi.org/10.1109/ITSC.2015.269.

[61] S. Shalev-Shwartz, S. Shammah, and A. Shashua, “On a formal model of safe and scalable self-driving cars,” arXiv:1708.06374 [cs.RO], 2018.

[62] S. Maierhofer, P. Moosbrugger, and M. Althoff, “Formalization of intersection traffic rules in temporal logic,” in 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, IEEE Press, 2022, pp. 1135–1144. https://doi.org/10.1109/IV51971.2022.9827153.

[63] A. Karimi and P. Duggirala, “Formalizing traffic rules for uncontrolled intersections,” in ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), Sydney, NSW, Australia, IEEE, 2020, pp. 41–50. https://doi.org/10.1109/ICCPS48487.2020.00012.

[64] F. Glatzki, M. Lippert, and H. Winner, “Behavioral attributes for a behavior-semantic scenery description (BSSD) for the development of automated driving functions,” in IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, pp. 667–672. https://doi.org/10.1109/ITSC48978.2021.9564892.

[65] M. Lippert, F. Glatzki, and H. Winner, “Behavior-semantic scenery description (BSSD) of road networks for automated driving,” arXiv:2202.05211, 2022. Available at: https://arxiv.org/pdf/2202.05211.

[66] F. Glatzki and H. Winner, “Inferenz von Verhaltensattributen der Verhaltenssemantischen Szeneriebeschreibung für die Entwicklung automatisierter Fahrfunktionen,” in 14. Workshop Fahrerassistenz und automatisiertes Fahren, Darmstadt, 2022, pp. 151–163.

[67] F. Glatzki, “Methodology for specifying and testing traffic rule compliance for automated driving,” Ph.D. thesis, Technische Universität Darmstadt, 2023.

[68] F. Schuldt, “Ein Beitrag für den methodischen Test von automatisierten Fahrfunktionen mit Hilfe von virtuellen Umgebungen,” Ph.D. thesis, Technische Universität Braunschweig, 2017.

[69] C. T. Amersbach, “Functional decomposition approach – reducing the safety validation effort for highly automated driving,” Ph.D. thesis, Technische Universität Darmstadt, 2020.

[70] International Organization for Standardization, ISO 29119: Software and Systems Engineering – Software Testing, Geneva, Switzerland, ISO, 2022.

[71] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer, “Defining and substantiating the terms scene, situation, and scenario for automated driving,” in IEEE 18th International Conference on Intelligent Transportation Systems (ITSC), Piscataway, NJ, 2015, pp. 982–988. https://doi.org/10.1109/ITSC.2015.164.

Received: 2023-05-24
Accepted: 2024-01-10
Published Online: 2024-04-08
Published in Print: 2024-04-25

© 2024 Walter de Gruyter GmbH, Berlin/Boston
