Adaptation of the Automotive Product Development Process for AI Development

Abstract—Artificial Intelligence (AI) functionalities are increasingly being used in vehicle applications. While current product development models take the growing proportion of software into account, the special requirements of AI development are hardly ever explicitly considered. The new requirements result both from increasing standardisation and regulation and from the iterative and explorative approach inherent in AI model development. This paper identifies the key adaptations to the standard automotive product development process that are required to cover the requirements of AI development. The adapted development model was trialled in two vehicle developments, the most important lessons learnt from which are summarised in this paper.


I. INTRODUCTION
The product development process (PDP), especially in the automotive industry, needs to comply with many norms and requirements, e.g., ISO 26262 and ISO/PAS 21448, which have helped to address safety in a systematic and consistent way but were not designed to accommodate technologies such as Machine Learning (ML) or Artificial Intelligence (AI) in general [1]. The chosen PDP must fit in with the overall development process for the vehicle. In the automotive industry, the V-model process model is the norm, especially for Autosar-compliant development [2]. AI systems promise better, or better tuned and online-tuneable, controllers, for example in the classification of the battery and charging status of an electric vehicle, where conventional control mechanisms (e.g., PID controllers) are currently used [3], [4]. However, AI and ML systems differ from conventional software in the way they are designed and developed. A circular process is almost always used for AI development due to its iterative and exploratory nature [5]. AI solutions are highly dynamic and cannot be systematically tested after their development in the same way as classical software, but rather on representative test sets [6]. One major reason is that the explainability of these systems, that is, how a system arrived at its decision, is a hard and, to some extent, unsolved problem, potentially causing a lack of trust [7]. Furthermore, the predictability of the suitability and performance of an AI model in a number of different dimensions, such as performance, latency and memory requirements, is vastly lower than in classical software development. Thus, adding AI into products requires continuous testing, integration and explanation, while being agnostic to the specific technology and AI architecture in use. That means that decisions on the most "fit-for-use" architecture and implementation are based on system validation and performance, referred to in lean-agile methodology as set-based design.
The paper is organised as follows: Section II outlines the need to adapt the PDP model for AI-based systems, arising from new regulation and practical use cases. Section III provides an overview of the top-level structure of the standard automotive PDP and its shortcomings, as well as the development model used in the use cases. Section IV describes the adaptations based on the new requirements. Section V discusses the main lessons learnt from applying the development model in two use cases.

A. Regulatory Requirements
In general, two types of standards and regulations for the PDP must be considered.
First, there are basic regulations and standards already in place covering the development and operation of automotive systems. The most important ones to be listed here are ISO 12207, ISO 15288, ISO 26262, ISO 27001 and Automotive SPICE. These contain specifications that are also relevant for AI development but do not explicitly address the specifics of AI.

B. Use Cases
The need to adapt the traditional PDP model also arose in the context of two development projects: an AI-based system for classifying the condition and safeguarding the performance of a battery, as well as an e-trailer platform that can reliably estimate the condition of the trailer based on an AI [20]. Although the requirements came directly from these development projects, they can largely be generalised to software development projects including AI components in the automotive environment.

C. Concluded Requirements for a PDP Adaptation
Derived from the standards and the specifics of the use cases, the following core requirements have been formulated for the PDP:
• The rules and parameters of the AI system are not defined prescriptively and its outputs are inherently probabilistic; therefore, the feasibility of systematic test case derivation as in traditional software development is limited.
• As a prediction of the quality and suitability of different models for a given problem is limited to generic experience, several models can and should be considered for a considerable part of the development.
• The suitability and quality of an AI model strongly depend on the data used to train it. It is, therefore, of utmost importance to focus on the requirements for data within the intended "safe operating envelope".
• As the training data is typically based on real-world data, the data cleaning and wrangling process is an important part of the development process. Understanding the data, its information content and influence on the system behaviour, as well as its bias, is important for the safety and performance of the AI.
• An AI model, when reasonably trained (e.g., not overfitted, enough quality data, etc.), is capable of interpolation, yet extrapolation beyond the training data must be avoided in a safety-critical environment due to the mathematically proven shortcomings of extrapolation.
• To enable trust in the system (e.g., to make sure it evaluates the "correct characteristics", thus validating the successful training), an explanation of the behaviour should be achieved.
• Predictions regarding model size and performance are unreliable. The behaviour of the models and their quality strongly depend on their operating environment; predictions of system behaviour based on the development environment are usually unreliable.
• In general, the overall system must conform to the system requirements, specifically regarding inference time, data throughput, reliability, transferability, size, robustness, "correctness" within the probabilistic nature of the system (as defined by accuracy, precision, recall or other metrics) and running on the intended target environment.
• As none of the above can be predicted from the AI model alone as it is created in a typical AI tool chain, assumptions regarding the "right arm of the V" must be questioned. Not directly part of the PDP or development model, yet important for every product organisation, is the role of the data lifecycle as compared to the product lifecycle. A change in requirements might cause a need for, e.g., relabelling and retraining the whole AI component, while it could deteriorate the ML system performance.
• Data is as important (arguably more important) to the AI solution as the SW code. It must, therefore, be kept under version and configuration management, as well as quality management, like the code itself.
• Data can be an attack vector for AI systems ("data poisoning"). Therefore, measures to ensure the integrity of data, as well as its "usability, integrity and truthfulness", have to be taken.
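The last two points, keeping data under version and configuration management and protecting it against tampering, can be illustrated with a minimal sketch (all names and the record format are hypothetical): a content hash over the training set serves both as a configuration-management identifier and as a tamper check, since any modification of the data changes the digest.

```python
import hashlib

def dataset_fingerprint(records):
    """Content hash over an ordered collection of data records, usable as a
    version identifier under configuration management and as a tamper check
    against data poisoning (any modification changes the digest)."""
    h = hashlib.sha256()
    for rec in records:
        h.update(repr(rec).encode("utf-8"))
    return h.hexdigest()

# Hypothetical labelled training records: (sensor reading, label).
train_v1 = [(0.1, "ok"), (0.2, "ok"), (0.9, "fault")]
fp_v1 = dataset_fingerprint(train_v1)  # registered alongside the code revision

# A single flipped label (a possible poisoning attack) changes the digest.
train_tampered = [(0.1, "ok"), (0.2, "fault"), (0.9, "fault")]
fp_tampered = dataset_fingerprint(train_tampered)
```

Comparing the stored fingerprint against a freshly computed one before each training run is a cheap integrity gate; it does not, of course, replace provenance checks on how the data was gathered.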

III. STANDARD AUTOMOTIVE PDP & DEVELOPMENT MODEL
The standard reference model for automotive product development is ASPICE® [21]. This development model and the associated detailed processes formed the basis for the comparison with the new AI requirements. Figure 1 shows an adaptation of the development model used for the two use cases explained in Section II B. Most deviations occur within the activities and are explained in Section IV. However, individual activities have been added or emphasised. The main changes at the top level are as follows:
• The two strongly interacting, parallel development cycles of software and AI development continuously integrate different AI/ML modules with the classically developed software components (e.g., Autosar-compliant development). The illustration shows that these are agile development cycles rather than a waterfall-like development process.
• The continuous monitoring & validation activity is not explicitly mentioned in ASPICE®, but is a direct SOTIF requirement that has to be considered at the top level.
• The data decommissioning step has to ensure the secure disposal of data according to its future use. Several standards set out these requirements, which are not considered by ASPICE®.
The following section deals with the specific adjustments for AI development within the depicted activities.

IV. CHANGES BASED ON THE NEW REQUIREMENTS
Table I summarises the main changes to the conventional development process shown in Fig. 1.

System requirements analysis
Ensure that the specific requirements for the creation and operation of an AI/ML system are captured and made measurable.
Perform an AI risk analysis. Consider and document AI-specific requirements, e.g.:
• Requirements towards the data and data acquisition for the AI system, and data exchange requirements of the vehicle to other vehicles and infrastructure;
• The desired performance level of the system;
• Intended functionality of the AI component, intended use cases, update process requirements, foreseeable misuse of the AI, known functional insufficiencies and requirements supporting risk mitigation abilities during operation;
• Requirements regarding transparency and explainability;
• Requirements for operation of the system in the field;
• Requirements for continuous validation of the system;
• Fairness, privacy and security requirements (cf. risk assessment).
Ensure that AI operations requirements are identified, e.g.:
• Monitoring and continuous validation;
• Continuous checks of effectiveness and performance;
• Release concept and release management;
• Update management.
• The use case and the problem addressed by the solution are clearly formulated, understood, agreed upon and documented;
• The necessity of the AI solution is analysed and the result documented;
• A risk analysis with regard to the application of AI, and specific to AI, is carried out and the mitigation of risks is initiated;
• A clear decision framework for the development, training and selection of the AI models is created.
System architecture
Define selection criteria for the AI solution. Clearly document the criteria which determine the selection of a given AI solution/architecture based on its performance.
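Documented selection criteria of this kind can be made executable, so that a design set of candidate models is filtered mechanically rather than by ad-hoc judgement. The sketch below is a minimal illustration under assumed names; the candidate models, metrics and thresholds are hypothetical, and in a real project the criteria would come from the documented system requirements.

```python
# Hypothetical design set of candidate models with measured properties.
candidates = [
    {"name": "cnn_small", "accuracy": 0.91, "latency_ms": 4.0, "size_mb": 2.1},
    {"name": "cnn_large", "accuracy": 0.95, "latency_ms": 18.0, "size_mb": 24.0},
    {"name": "gbdt",      "accuracy": 0.93, "latency_ms": 1.5, "size_mb": 0.8},
]

# Documented hard selection criteria (assumed thresholds): minimum required
# performance and an inference-time budget on the target hardware.
criteria = {
    "accuracy":   lambda v: v >= 0.92,
    "latency_ms": lambda v: v <= 10.0,
}

# Set-based design: keep every candidate that satisfies all hard criteria;
# the surviving set stays under consideration until validation rules more out.
surviving = [m for m in candidates
             if all(ok(m[k]) for k, ok in criteria.items())]
```

Making the criteria data-driven also yields the required documentation for free: the criteria dictionary itself can be versioned alongside the evaluation results.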

System design
• Add AI-specific failure modes to the FMEA procedure, e.g., performance failures, model failures and robustness failures;
• Create a warning and degradation concept also for the AI component;
• Describe the procedures supporting data collection and monitoring of data (for the intended AI function) during and after the development;
• Describe designs and mechanisms for supporting risk mitigation abilities during operation and address known insufficiencies;
• The verification and validation strategy must include the strategy for verifying and validating the SOTIF of the AI component, including the validation targets.
AI solutions add additional failure modes to existing systems. The failure modes added to the FMEA are intended to provide additional insights into potential issues to be dealt with.
The requirements from the relevant functional safety standards must also be applied to the AI functionality.

SW requirements analysis
• Specify and document data requirements using the system requirements, software requirements and the AI use case(s);
• Structure data requirements to ensure correctness, feasibility and verifiability, and analyse the impact of these requirements, e.g., on cost, timeline and technical solution; ensure consistency between them and other requirements at software and system level;
• Identify the impact of the data requirements on the development and operational environment, including the data infrastructure;
• Perform an analysis of the data requirements and potential failures to comply with these requirements, to identify risks in the development of the system, the operation of the system or the operational infrastructure;
• Link and version the documented requirements to the relevant SW and system requirements, as well as the related verification criteria;
• Document and communicate the data requirements to all relevant stakeholders in a timely manner.
Add and detail AI specific SW and SW safety requirements (e.g., regarding privacy, security and licensing).
Refine the data requirements for the AI functionality (e.g., regarding availability and usability of data, as well as questions regarding amount, format and type of data).
The iterative and explorative nature of the AI development process obviously does not allow for a prescriptive definition of these requirements. However, initial planning and the continuous update and documentation of the learnings during development must be ensured by this process. It should be ensured (by interface definition, architectural measures and relevant coding and testing) that the AI component is only used within a relevant and defined operating envelope. Internal mechanisms to check the effectiveness of fail-safes against invalid input or obviously wrong outputs must be in place.
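Restricting the AI component to its defined operating envelope can be enforced with a thin guard around inference. The sketch below is an illustrative minimum, not the projects' implementation; the feature names, envelope bounds and fallback value are hypothetical stand-ins for a battery-state classifier.

```python
def within_envelope(features, envelope):
    """True if every input feature lies inside the interval covered by the
    training data; outside it the model would have to extrapolate."""
    return all(lo <= features[name] <= hi for name, (lo, hi) in envelope.items())

def guarded_inference(model, features, envelope, fallback):
    """Run the AI model only inside its defined operating envelope;
    otherwise degrade to a safe fallback instead of extrapolating."""
    if not within_envelope(features, envelope):
        return fallback  # warning/degradation concept takes over here
    return model(features)

# Hypothetical envelope for a battery-state classifier.
envelope = {"cell_temp_c": (-10.0, 45.0), "soc": (0.05, 0.95)}
model = lambda features: "charge_ok"  # stand-in for the trained model
result = guarded_inference(model, {"cell_temp_c": 20.0, "soc": 0.5},
                           envelope, "fallback")
```

In practice such a guard sits at the interface defined in the architecture, so the check is testable independently of the model it wraps.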

SW integration and integration testing
• Test the integrated system's AI functionality and verify its performance;
• Confirm that the explanation of the AI model still holds true.
The AI component of the system created is an "encapsulated code unit", which is integrated like a library into the overall SW system. Integration procedures are, therefore, the same as for non-AI systems.
Verify and validate the AI model in the integrated system and check the functioning of the fail-safes.

System integration and integration testing
Transferring the AI-based system to the target hardware requires additional confirmation compared to standard integration, as the "HW architecture change" may negatively affect AI model performance. Two additional steps must be carried out:
• Test the integrated system's AI functionality and verify its performance. This is to be done via relevant test drivers and a validation dataset, using the intended HW sensors.
• Confirm that the explanation of the AI model still holds true.
Confirm that the transfer of the model (typically developed on a different HW architecture) does not negatively affect functionality, performance or explainability of the AI model and the overall system.

Continuous monitoring and validation
• Define the data that shall be monitored for continuous validation of the system, how to collect the data, the schedule/frequency, and the data gathering and validation activities;
• Define and build up the necessary physical and technical elements to capture and collect the monitoring data on a regular basis, including an operations concept for this infrastructure;
• Regularly gather real-world data for analysis according to the monitoring and validation plan;
• Check collected real-world input data against the training data with statistical methods to identify and understand potential differences to the training data of currently deployed systems;
• Perform a performance evaluation of the currently deployed model against current input data. Check whether the model still performs within the defined performance parameters;
• Decide whether a retraining/update of the AI model in the field is necessary;
• Document the results of the continuous validation.
AI models are subject to a number of changes over time, which can affect the performance of the model. The continuous monitoring and validation process monitors the performance of in-field systems and delivers information regarding potential deterioration of system performance and necessary updates of the AI model. Furthermore, the continuous learning process must detect new or previously unknown scenarios containing triggers for hazardous behaviour.
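The statistical comparison of field input data against the training data can be as simple as a two-sample distribution-distance test per feature. The sketch below implements the classical two-sample Kolmogorov-Smirnov statistic from scratch; the data, the drift threshold of 0.2 and the single-feature setting are illustrative assumptions, not prescriptions from the paper.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
    the two empirical CDFs, a simple measure of distribution shift."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(s, x):  # fraction of sample values <= x
        return bisect.bisect_right(s, x) / len(s)
    # Both ECDFs only change at sample points, so checking those suffices.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

training    = [0.1 * i for i in range(100)]          # training distribution
field_ok    = [0.1 * i + 0.01 for i in range(100)]   # field data, no drift
field_drift = [0.1 * i + 5.0 for i in range(100)]    # field data, shifted

# Hypothetical decision rule: flag for retraining above an agreed threshold.
drift_detected = ks_statistic(training, field_drift) > 0.2
```

A real monitoring pipeline would run such a check per monitored feature on a schedule, log the statistics as part of the continuous-validation documentation, and feed the retraining decision rather than making it automatically.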

Data decommissioning
• Identify the data types used in the system and define data categories based on, e.g., type, sensitivity or legal requirements. Define retention policies and periods for the different types of data;
• Define the different methods of data disposal and the standard operating procedures to correctly dispose of data (deleting, archiving, etc.);
• Identify the data to be decommissioned. Ensure that all versions are considered. Assign a data disposal category to the data to ensure correct disposal;
• Dispose of the identified data according to the defined policies and standard operating procedures for the disposal category;
• Document the disposal of data according to the disposal policies and ensure the documentation is retained.
AI models or systems that are no longer used contain or make use of data (training data, models, etc.) which must be disposed of accordingly. This disposal can take different forms, e.g., secure deletion, secure archiving, repurposing, etc. Data gathered during operation might also need to be secured or archived (e.g., for regulatory purposes).
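The pairing of disposal categories with documented disposal actions can be expressed as a small dispatch table that writes the required disposal record as a side effect. Everything here is a hypothetical sketch: the category names, the actions (which only return strings instead of actually deleting or archiving anything) and the log format would all be defined by the organisation's retention policies.

```python
from datetime import date

# Hypothetical disposal categories mapped to (stubbed) disposal actions.
DISPOSAL_ACTIONS = {
    "secure_delete": lambda asset: f"securely deleted {asset}",
    "archive":       lambda asset: f"archived {asset}",
}

def dispose(asset, category, log):
    """Dispose of a data asset according to its assigned disposal category
    and document the disposal, as the process requires."""
    if category not in DISPOSAL_ACTIONS:
        raise ValueError(f"unknown disposal category: {category!r}")
    result = DISPOSAL_ACTIONS[category](asset)
    log.append({"asset": asset, "category": category,
                "result": result, "date": date.today().isoformat()})
    return result

disposal_log = []
dispose("training_set_v3", "archive", disposal_log)
```

Rejecting unknown categories up front keeps the "assign a disposal category before disposing" ordering enforced by code rather than by convention.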

V. DISCUSSION
The development model was tested in the context of the veoPipe research project in two development projects, for the stabilisation of an e-trailer and the detection of the state of charge of a battery management system of an electric vehicle [21]. The test of the proposed adapted PDP showed that following it increased the reliability and functional safety of the system with AI functionalities. Nevertheless, the test revealed some deficits and gaps that need to be closed:
• The previous development processes, methods and tool chain did not meet the increased requirements for stringency in documentation and version management. The time and storage space requirements for training and validation data increased considerably.
• Development effort and timeline were more difficult to estimate due to the iterative approach, which made it especially difficult to work with a client (e.g., an automotive Original Equipment Manufacturer) who scheduled classic gate reviews.
• In practice, there was a lack of methods and hard criteria to assess when the next development phase could be started, for example, when the training data set was considered sufficiently representative.
• A higher degree to which the AI's behaviour can be explained could significantly reduce the number of iterations and, thus, the development time. In the view of the authors, approaches with a higher level of explainability are, therefore, generally to be preferred, given the same level of suitability.
• The set-based design approach inherent in AI development led to considerable uncertainties regarding the effort and cost estimation of such a project. For example, a processor that had already been procured had to be replaced during development because the requirements for the processor architecture had changed significantly during the iterative development process.
• In particular, the lack of valid and applicable standards for the conformity assessment of AI systems ("certified AI") poses the risk of over-engineering by developers seeking to secure themselves as much as possible for safety-critical applications. Guidelines such as the European AI Act are still on a very abstract level and do not contain specific criteria for practical implementation in companies. In the "continuous monitoring and validation" step in particular, it was extremely difficult to determine in practice when the requirements of the standard had been met.
We, therefore, conclude that most of the problems that arose lie in the nature of the AI itself rather than in the development process. Adjustments to the development process tend to be easier to implement, as illustrated by the application of the hybrid development model. Most of the problems in the pilot projects would have been solved with a sufficiently high degree of explainability of the AI. In our view, increasing the explainability of AI behaviour should be the focus of further research activities.

Fig. 1. Adapted development model.

TABLE I. CHANGES PER PHASE IN THE DEVELOPMENT MODEL

Architecture definition and design set definition
• Create a number of different options to find a solution fitting the requirements and selection criteria of the model;
• Define the internal architecture of the use of the AI model;
• Define and document a whole design set of models.
Define the AI model architecture(s) used, based on the system and software requirements and the specific body of knowledge describing the match of model types to certain problems. Ensure the complete documentation of the models, especially with regard to the selected one.

Software architecture
• Ensure loose coupling of the AI component and continuous internal test and check features;
• Document and ensure the fail-safe measures in the architecture.

FUNDING
The research has been supported by the Bundesministerium für Wirtschaft und Klimaschutz der Bundesrepublik Deutschland (Federal Ministry for Economic Affairs and Climate Action of the Federal Republic of Germany) [grant number 19A21040C].