Microprocessors and Microsystems

Advancements in electronic systems' design have a notable impact on design verification technologies. The recent paradigms of Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) assume devices immersed in physical environments, significantly constrained in resources and expected to provide certain levels of security, privacy, reliability, performance and low-power features. In recent years, numerous extra-functional aspects of electronic systems have been brought to the front, implying verification of hardware design models in a multidimensional space along with the functional concerns of the target system. However, unlike in the software domain, such a holistic approach remains underdeveloped. The contributions of this paper are a taxonomy for multidimensional hardware verification aspects and a state-of-the-art survey of related research works and trends enabling the multidimensional verification concept. Further, an initial approach to perform multidimensional verification based on machine learning techniques is evaluated. The importance and challenge of performing multidimensional verification is illustrated by an example case study. © 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
Recently, several prominent trends in electronic systems design can be observed. Safety-critical applications in the automotive domain set stringent requirements for electronics certification, while Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) devices are immersed in physical environments, significantly constrained in resources and expected to provide certain levels of security and privacy [1], ultra-low-power operation or high performance. Very complex electronic systems, including those built from commercial off-the-shelf components not certified for reliability, are used for safety- and business-critical applications. These trends, along with gigascale integration at nanoscale technology nodes and multi-/many-processor systems-on-chip architectures, have ultimately brought to the front various extra-functional aspects of electronic systems' design at the chip design level. The latter include security, reliability, timing, power consumption, etc. There exist numerous threats causing an electronic system to violate its specification. In the hardware part, these are design errors (bugs), manufacturing defects and variations, reliability issues, such as soft errors and aging faults, or malicious faults, such as security attacks. In addition, there can also be bugs in the software part.
Hardware design model verification detects design errors affecting functional and extra-functional (interchangeably referred to as non-functional) aspects of the target electronic system. Strictly speaking, the sole task of extra-functional verification of a design model is limited to detecting deviations that cause violation of extra-functional requirements. In practice, it often intersects with the task of functional verification [2,14], thus establishing a multidimensional space for verification. A "grey area" in the distinction between functional and extra-functional requirements may appear when an extra-functional requirement is part of the design's main functionality. For example, the security requirements for a HW design can be split into extra-functional and functional sets if the design's purpose and specified functionality is a system's security aspect, e.g. a secure cryptoprocessor.
The contributions of this paper are a taxonomy for multidimensional hardware verification aspects and a state-of-the-art survey of related research works towards enabling the multidimensional verification concept. Further, an approach is evaluated which performs multidimensional verification by using machine learning techniques. The rest of this paper is organized as follows. Section 2 provides a taxonomy of multidimensional verification aspects. Section 3 presents a state-of-the-art survey with the key trends in verification for the main extra-functional aspects. Section 4 discusses the multidimensional verification challenges and presents a motivational example for the functional and power verification dimensions. Section 5 proposes the adoption of machine learning techniques to support a design's multi-aspect feature extraction and verification. Finally, Section 6 draws the conclusions.

Taxonomy of multidimensional verification aspects
In practice, the relevance of each functional and extra-functional aspect strongly depends on the design type, target system application and specific user requirements. Following the design paradigm shifts, a number of extra-functional aspects have recently received significant academic research attention, e.g., security. At the same time, there already exist established industrial practices for measuring and maintaining particular design qualities, e.g. the RAS (Reliability-Availability-Serviceability) aspect introduced by IBM [6]. While in the software engineering discipline the taxonomy of extra-functional requirements is comprehensively covered by the literature [7][8][9][10][11][12], it cannot be directly re-used for the HW verification discipline because of significant differences in the design models. Fig. 1 introduces a taxonomy of multidimensional verification aspects derived from the performed literature review. The conventional functional concerns are safety and liveness properties, combinational and temporal dependencies along with data types; however, this list can be extended for particular designs. The extra-functional aspects can be strictly categorized into two groups: System Qualities and System Resources and Requirements (in bold). The main system qualities for extra-functional verification are manufacturability of the design, security, in-field safety, reliability during the operational lifespan and a set of timing aspects. The second group embraces the power and architectural resources as well as design constraints set by the operational environment.
Several extra-functional aspects, such as manufacturability (i.e. primarily yield and testability against manufacturing defects), fault tolerance, reliability (subject to transient, intermittent and permanent hardware faults) and several aspects from the System Resources and Requirements group, do not have a direct correspondence in the software engineering discipline because of the distinct nature of faults and specification violations. Other aspects such as real-time constraints are very similar between the two domains. Table 1 presents a survey of recent publications targeting extra-functional and multidimensional verification. Here, along with the specific extra-functional aspects, details about the design model and verification approach are outlined, i.e., the type of design under verification, the verification engine, the level of abstraction, the design representation language, the compute model and the tool used in the research. We outline these key points for all studies in this area from the last ten years. Further, in the following subsections, we focus on understanding trends for the extra-functional aspects that have received the strongest attention in the literature, i.e. security, in-field reliability, timing and power.

Security aspects
Security is difficult to quantify as today there are no commonly agreed metrics for this purpose [1]. The key targeted security services [16] commonly represented as extra-functional aspects for verification are confidentiality, integrity and availability. Verifying security aspects is highly dependent on the type of attack and the attacker model assumed.

Table 1. Survey of the state-of-the-art solutions for extra-functional and multidimensional verification.
Many of the existing works in security verification (e.g. [22,24,26,29,30]) focus on the integrity attribute, mostly addressing hardware trojan detection. There also exist works that additionally target [19,21,23,25,34] or exclusively consider [27,28] the confidentiality aspect. Several solutions in security verification are restricted to specific target architectures or types of modules such as Reconfigurable Scan Networks (RSNs) [23,27] or macro-asynchronous micro-synchronous pipelines [30]. To that end, for complex hardware architectures (e.g. large IEEE 1687 Reconfigurable Scan Networks or MPSoCs) the specific on-chip security features to be verified also tend to be very sophisticated. These may include on-chip mechanisms for attack prevention (firewalls, user management, communications' isolation), attack protection (traffic scrambling, encryption) and attack resilience (checkers for side-channel attacks, covert channel detection, attack recovery mechanisms). Several works consider security verification for NoC-based MPSoCs. The work in [20] proposes a method to formally verify the correctness and the security properties of a NoC router. Some solutions in the security verification of NoCs indirectly address reliability as well, due to the fact that they implement hardware monitors that allow avoiding both attacks and in-field faults [21,22].
According to recent surveys [37] and [38], cache access driven side-channel attacks have become a major concern in hardware security. In modern processors, a deep cache memory hierarchy is implemented to increase system performance. However, this makes modern computing systems, including IoT devices, vulnerable to cache side-channel attacks. There exist several works addressing verification of cache security. In [31], the authors propose Computation Tree Logic (CTL) based modeling of timing-driven and access-driven cache attacks. This work concentrates on formally describing the attack types. Zhang and Lee [32] model the cache as a state machine and propose a metric based on the noninterference condition to evaluate access-based cache vulnerability. Canones et al. [33] propose a model to formally analyze the security of different cache replacement policies. None of the above-mentioned works consider multiple dimensions, or aspects.
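The access-driven leakage that these formal cache models reason about can be illustrated with a deliberately simplified sketch. The toy Python model below is an assumption-laden illustration, not the CTL or state-machine formalisms of [31-33]: a direct-mapped cache whose only observable behavior is hit/miss, leaking which set a victim touched through a prime-and-probe sequence.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: the state is just the tag held by each set."""
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.sets = [None] * num_sets  # one tag per set, None = empty

    def access(self, addr):
        """Return 'hit' or 'miss'; a miss replaces the resident line."""
        s, tag = addr % self.num_sets, addr // self.num_sets
        if self.sets[s] == tag:
            return "hit"
        self.sets[s] = tag
        return "miss"

def prime_probe(victim, num_sets=4):
    """Prime-and-probe: the attacker learns which set the victim used."""
    cache = DirectMappedCache(num_sets)
    for s in range(num_sets):          # PRIME: fill every set with attacker lines
        cache.access(num_sets + s)     # addresses chosen to map to set s, tag 1
    victim(cache)                      # victim makes one secret-dependent access
    # PROBE: re-access the attacker lines; a miss marks the evicted set
    return [s for s in range(num_sets)
            if cache.access(num_sets + s) == "miss"]
```

Running `prime_probe(lambda c: c.access(secret_addr))` returns the cache-set index of the secret-dependent address, i.e. exactly the information flow that a noninterference condition such as the one in [32] is designed to rule out.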
An approach that is designed for modeling a multitude of extra-functional aspects is the model-based engineering example of the Architecture Analysis and Design Language (AADL) [19]. While, in principle, AADL allows representing several extra-functional aspects (called quality attributes in AADL), Hansson et al. [19] only concentrate on the analysis of confidentiality as a part of verifying security in a system with multiple levels of security. The authors in [36] have targeted a general Uppaal Timed Automata based multi-view hardware modeling and verification approach that takes the security view into consideration. The survey of related literature clearly shows that, up to this moment, there is virtually no work considering security verification in combination with other extra-functional aspects.

Reliability aspects
The key drivers for the reliability aspect in today's designs are the recent industrial standards in different application domains, such as IEC 61508, ISO 26262, IEC 61511, IEC 62279, IEC 62061, RTCA/DO-254, IEC 60601, etc. Integrated circuits used in high-reliability applications, e.g. complying with a high (Automotive) Safety Integrity Level, (A)SIL, must demonstrate low failure rates (modelled by FIT, Failures in Time) and high fault coverage (e.g. the Single-Point Fault Metric, SPFM, and the Latent Fault Metric, LFM). These requirements ultimately mandate extra-functional validation efforts for reliability analysis, such as Failure Mode and Effects (and Criticality) Analysis, FME(C)A, and imply generalized use of methods and features, such as safety mechanisms, for error management. Functional safety is a property of the complete system rather than just a component property, because it depends on the integrated operation of all sensors, actuators, control devices and other integrated units. The goal is to reduce the residual risk associated with a functional failure of the target system below a threshold given by the assessment of severity, exposure and controllability.
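To make the role of these metrics concrete, the following sketch computes the Single-Point Fault Metric as it is commonly defined in the context of ISO 26262; the failure-rate figures are invented for illustration.

```python
def spfm(fit_total, fit_spf_rf):
    """Single-Point Fault Metric: the share of the overall failure rate
    (in FIT, failures per 1e9 device-hours) that is NOT due to
    single-point or residual faults; higher is better."""
    return 1.0 - fit_spf_rf / fit_total

# Hypothetical element: 1000 FIT overall, of which 30 FIT stem from
# single-point/residual faults not covered by any safety mechanism.
metric = spfm(1000.0, 30.0)   # 0.97, i.e. 97%
```

An SPFM of 97% would meet the commonly cited ASIL C target (at least 97%) but not the ASIL D target (at least 99%), so such a design would need additional safety mechanisms for the highest integrity level.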
The dominant threats for reliability are, first, random hardware faults such as transient faults caused by radiation-induced single event effects, or soft errors [15], i.e. the subject of Soft-Error Reliability (SER). Second, these are extreme operating conditions, electronic interference and intermittent to permanent faults caused by process or time-dependent variations, such as aging induced by Bias Temperature Instability (BTI) [13], where the latter is the subject of Life-Time Reliability (LTR). The reliability verification challenge is emphasized by the adoption of advanced nanoscale implementation technology nodes and the high complexity of systems, utilizing tens or hundreds of complex microelectronic components and embedding large quantities of standard logic and memory. Moreover, these designs integrate IP cores from multiple design teams, making the reliability evaluation task scattered and complex. Initiatives such as RIIF (Reliability Information Interchange Format) [39] allow the formalization, specification and modeling of extra-functional reliability properties for technology, circuits and systems.
Similar to other aspects, reliability in large complex electronic systems, e.g. safety-critical CPSs, may be tackled starting at a high level of abstraction. A system's fault tolerance can be formally checked using UPPAAL and timed automata models generated from AADL specifications [41]. HW design models and tools at such a level also enable verification of the interference of several extra-functional design aspects [36]. There are research works relying on design soft-error reliability verification by fault-injection campaigns, e.g. [49], or formal analysis, e.g. of error-correction code (ECC) based mechanisms against single-bit errors in memory elements [48]. Burlyaev and Fradet [42] propose a general approach to verify gate-level design transformations for reliability against single-event transients by soft errors that combines formal reasoning on execution traces. Thompto and Hoppe [43] and Kan et al. [44] focus on the RAS (Reliability, Availability and Serviceability) group of extra-functional aspects outlined by IBM for complex processor designs, where embedded error protection mechanisms and the design's intrinsic immunity to errors (due to various masking) are evaluated by fault injection. Vinco et al. [45,46] propose extensions to system descriptions in the IP-XACT format to enable multi-layer representation and simulation of several mutually influencing extra-functional aspects of smart system designs, such as lifetime reliability, power and temperature. A complex approach to the verification of multiple reliability concerns (soft errors, BTI, etc.) across layers in industrial CPS designs is proposed in [47] as a collaborative research result of the IMMORTAL project. Last but not least, addressing the need for reliability verification automation tools, the authors of [50] propose a fully automated tool, INFORMER, to estimate memory reliability metrics by circuit-level simulations of failure mechanisms such as soft errors and parametric failures.
The survey clearly shows that currently there is a very small number of works considering verification of reliability together with other aspects.

Timing aspects
Functional temporal properties are an essential part of a sequential design's specification. For functional verification they are often modelled by Computation Tree Logic (CTL), applied in formal approaches, and by Linear Temporal Logic (LTL) temporal assertions expressed either ad hoc, e.g. in the Property Specification Language (PSL) or SystemVerilog Assertions (SVA), or systematically, e.g. within the Universal Verification Methodology (UVM). In the extra-functional context, these can be extended to specific requirements and properties such as real-time (RT) constraints, performance, throughput, latency, on-chip communication time constraints, worst-case execution time constraints, etc. Several works have studied these timing properties extensively. Some researchers mainly focus on generating timing properties to reduce the verification effort, for example, state space and cost [54,56,65]. Other works instead use the timing properties to assess whether the system under verification is functioning correctly or not [55,62,64]. In the following, we discuss the state of the art for each timing aspect.
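As an illustration of the kind of properties involved, a checker for two common assertion patterns over a finite simulation trace can be sketched in a few lines. This is a hypothetical Python sketch operating on recorded signal values, not on PSL/SVA syntax.

```python
def always(pred, trace):
    """Safety-style check: pred holds in every cycle of the trace."""
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    """Liveness-style check on a finite trace: pred holds at least once."""
    return any(pred(s) for s in trace)

def always_implies_next(pre, post, trace):
    """Whenever pre holds in a cycle, post must hold one cycle later
    (cf. PSL's `always (pre -> next post)`, restricted to a finite trace)."""
    return all(post(trace[i + 1])
               for i in range(len(trace) - 1) if pre(trace[i]))

# A three-cycle request/grant handshake trace (signal values per cycle)
trace = [{"req": 0, "gnt": 0}, {"req": 1, "gnt": 0}, {"req": 0, "gnt": 1}]
ok = always_implies_next(lambda s: s["req"], lambda s: s["gnt"], trace)
```

The same trace-based view is what simulation-based assertion checkers evaluate cycle by cycle, and it extends naturally to extra-functional properties once the trace carries timing or power annotations.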
A real-time system describes hardware and software systems subject to a real-time constraint, which ensures a response within a specified time. The correctness of the function depends both on the correctness of the result and on its timeliness. In [54], an approach to verify a timed Petri-Net model is proposed. A non-instantaneous model is abstracted from the timed Petri-Net model in a hierarchical structure. The non-instantaneous model, which is verified with a model-checking tool, is used to reduce the state space of the timed Petri-Net model for verification with satisfiability modulo theories (SMT) solvers [76,77]. The timed Petri-Net is used to model the interacting relations of the software components and the binding relations between software and hardware in a certain period of time. Görgen et al. [65] introduce a tool called CONTREX to complement current activities in the area of predictable computing platforms and segregation mechanisms with techniques to compute real-time properties. CONTREX enables energy-efficient and cost-aware design through analysis and optimization of real-time constraints. The authors in [62] propose a method to combine the real-time constraint aspect of a model with its energy-aware real-time (ERT) behaviors in UPPAAL for formal verification.
Throughput is a measure of how many units of information a system can process in a given amount of time. In [66], a verification environment has been proposed to estimate the throughput of a SoC. The intention of the paper is to judge whether the verification system can handle SoC verification and provide the necessary performance in terms of speed and throughput. Khamis et al. [67] introduce a Universal Verification Methodology (UVM) environment to measure the throughput of a NoC. UVM is a SystemVerilog class library explicitly designed to help build modular, reusable verification components and test-benches. It is an industry standard, so it is possible to acquire UVM IP from other sources and reuse it.
Performance refers to the amount of work which is done during a process, for instance, executing instructions per second. In [56] , a framework has been developed to analyze performance of a system design. The framework is based on stochastic modeling and simulation and it is applied on a set of NoC topologies. The methodology uses a selective abstraction concept to reduce complexity.
When referring to hardware, latency is the time required for a hardware component to respond to a request made by another component. In the case of hardware, latency is sometimes also referred to as the access time. In [55], an analysis tool is developed to work with AADL models [78] to assure the correctness of a scheduling model that binds the relation of different components in a model.
On-chip communication time constraints refer to the requirements on the start and end times of each task in a system's critical path, i.e. the sequence of tasks that cannot be delayed without delaying the entire system. For instance, in [51] and [52] a framework has been proposed that is based on a set of quality-of-service aware NoC architectures, along with an analysis methodology including selected relevant metrics, that enables an efficient trade-off between guarantees and overheads in mixed-criticality application scenarios. These architectures overcome the notion of strictly divided regions by allowing non-critical communication to pass through the critical region, provided it does not utilize common router resources. Such a problem formulation is relevant to facilitate the usage of NoC technology by safety-critical industries such as avionics.
The worst-case execution time of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. The designer of a system can employ techniques such as schedulability analysis to verify that the system responds fast enough [40]. For instance, Zimmermann et al. [64] present an approach to generate a virtual execution platform in SystemC to advance the development of real-time embedded systems, including early validation and verification. These virtual execution platforms allow the execution of embedded software with strict consideration of the underlying hardware platform configuration in order to reduce subsequent development costs and to allow a short time-to-market by tailoring and exploring distributed embedded hardware and software architectures.
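Schedulability analysis of this kind can be illustrated by the classic Liu and Layland utilization test for rate-monotonic scheduling, sketched below. It is a sufficient (not necessary) condition, and the task parameters are invented for illustration.

```python
def rm_schedulable(tasks):
    """Liu-Layland test: a set of n periodic tasks (wcet, period) is
    schedulable under rate-monotonic priorities if the total
    utilization does not exceed n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

# Hypothetical task sets: WCETs and periods in milliseconds
light_load = [(1, 4), (1, 5), (2, 10)]   # U = 0.65, below the n=3 bound (~0.78)
heavy_load = [(2, 4), (2, 5), (2, 10)]   # U = 1.10, the test fails
```

WCET bounds obtained from timing analysis feed directly into such tests, which is why timing verification and scheduling verification are usually coupled.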
Last but not least, a few works also take into account dependencies between several extra-functional aspects. For instance, the works in [62,65] and [56] present the effect of optimizing timing properties (performance and latency) on power consumption, and the study in [64] analyzes the effect of decreasing execution time on power consumption. Such analysis is mostly limited to two extra-functional aspects or neglected altogether [53][54][55][69], while design timing constraints can strongly influence not only power consumption but also reliability, security, availability, etc., as well as functional properties.

Power aspects
In commercial flows, verification of the power aspect can be addressed relatively independently from the functional verification dimension. The power intent and detailed power modelling can be done starting at TLM or RTL with minimal interference with the HDL functional description, e.g. using the Accellera introduced Unified Power Format (UPF) employed for power-aware design verification automation by commercial tools especially with the latest UPF3.0 [60] or Cadence/Si2 Common Power Format CPF [61] . For the advanced device implementation technologies, power specification implies multi-voltage design with up to tens of power domains and may consider dynamic and adaptive voltage scaling.
In recent research works, design verification against the power aspect is performed at different abstraction levels with a trade-off between speed and accuracy. Some works such as [58,59,74,75] perform power analysis at the system level, targeting high simulation speed while aiming at accuracy close to that achievable at lower levels. In [58], the authors applied their approach to SRAM and AES encryption IPs and obtained a significant simulation speed-up in comparison to gate-level simulation, with a high fidelity of the system-level power simulation. A promising software tool for power simulation in SystemC designs is the Powersim framework [59,75]. In [59], a methodology to estimate the dissipation of energy in hardware at any level of abstraction is proposed. In [75], the authors propose a SystemC class library aimed at calculating the energy consumption of hardware described at the system level. The work in [73] introduces a series of tools (PowerBrick, which constructs a power library for a standard cell library; PowerMixer, an RTL/gate-level estimator; PowerMixer ip, an IP-based model builder; and PowerDepot, which estimates system-level power consumption) which can be tightly linked and enable power analysis from the layout, gate, RT and IP levels up to the system level with good simulation speed while retaining high accuracy. The power aspect verification could benefit from holistic multi-level modelling, such as that available for functional verification, e.g. [17]. Rafiev et al. [56], Vinco et al. [45,46], Kang et al. [62], Zimmermann et al. [64] and Görgen et al. [65] aim at methodologies suitable for specific applications (such as cyber-physical systems [62]) that assume verification of extra-functional aspects such as power, timing and thermal behavior at the system level.
This extra-functional aspect has a tight relation to the implementation technology assumed for the synthesis of the design model under verification. With planar bulk MOSFET technology known for exponential growth of the static leakage power at smaller device geometries, and with the employment of FinFET and Tri-Gate transistors in advanced technology nodes, the CMOS device parameters are essential for this analysis [57].
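The first-order dependence on these device parameters can be made explicit with the standard CMOS dynamic power equation, sketched below; the activity, capacitance, voltage and frequency values are illustrative only.

```python
def dynamic_power(activity, c_switched, vdd, freq):
    """First-order CMOS dynamic power: P = alpha * C * Vdd^2 * f,
    where alpha is the switching activity factor."""
    return activity * c_switched * vdd ** 2 * freq

# Hypothetical block: 10% switching activity, 1 nF total switched
# capacitance, 0.9 V supply, 1 GHz clock
p_nominal = dynamic_power(0.1, 1e-9, 0.9, 1e9)    # 0.081 W
p_scaled = dynamic_power(0.1, 1e-9, 0.45, 1e9)    # half Vdd -> quarter power
```

The quadratic dependence on Vdd is what makes the multi-voltage domains and dynamic voltage scaling captured by UPF/CPF power intent so effective, and also why power verification cannot be decoupled from the assumed implementation technology.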

Machine learning based techniques
The complex problem of multidimensional verification can be assisted by recent advances in the machine learning discipline. This type of approach (along with e.g. evolutionary algorithms) is particularly suitable for multi-aspect optimization problems where formal deterministic approaches may lack scalability.
Machine Learning (ML) is the concept of a machine learning from examples and making predictions based on its experience, without being explicitly programmed [82]. Previous works have shown that ML can be used for verification purposes at different levels. In [83], machine learning was introduced into physical design analysis. The feasibility of ML in physical design verification (e.g., lithography hotspot detection) was investigated, and a reference model for its application was presented. Building on this work, [84] used ML to speed up the performance evaluation (power and area) of a circuit design after physical design by a factor of 40, as well as to perform a Design Rule Check. In [85], ML was used to predict the timing behavior of the final floorplan of a circuit during the Place & Route routine, thus shifting the analysis to an earlier design stage. In [79], the analysis is moved to an even higher abstraction level: high-level synthesis (HLS) resource usage and timing estimation were improved by training ML models with data from real implementations. Thus, the design flow can be assisted by machine learning to predict accurate values even at very early design stages. Machine learning was further applied to security verification in [80,81,86], where it was used to detect hardware trojans based on features from the gate-level netlist. In Section 5, we propose an approach to assist the multidimensional verification flow by using machine learning techniques to estimate a reliability metric as well as a timing metric.

The challenges of multidimensional verification
The performed analysis of the state of the art has outlined a gap in methodologies and tools for holistic multidimensional verification of hardware design models.
Different from functional verification, approaches for the verification of extra-functional hardware design aspects remain underdeveloped even when tackled in isolation. Here, one of the key issues is a lack of established metrics for verification confidence. For a particular functional verification plan, the functional dimension usually includes conventional structural (code) coverage metrics, functional coverage [3] in the form of asserted and assumed properties and design parameters, along with stimuli quality assessment by model mutations [18]. Metrics for confidence in extra-functional dimension verification results may be challenging, as in practice the requirements are subjective and can be specified as a mixture of quantitative and qualitative constraints. Accurate hardware verification in a particular dimension requires both sufficient extra-functional design modeling and extra-functional aspect target modeling [36]. There is a limited number of dedicated commercial tools and common standards for extra-functional verification flows. In particular, for the security dimension, JasperGold SPV [35] is one of the few such commercial tools that stand out from the academic research frameworks. Finally, eliciting the extra-functional requirements [4,5] is a challenging task, as ambiguity and (sometimes conflicting) interdependency of the extra-functional aspects in the specifications increase complexity and may leave gaps in the multidimensional verification plans.
Unfortunately, there is no established hardware design methodology supporting multidimensional verification plans for mutually influencing functional and extra-functional aspects. There is a very limited number of research works going beyond analysis of one extra-functional verification aspect under constraints of another as the complexity of the problem grows extremely fast with the number of dimensions (interdependent constraints) and the electronic system size. The first works in this direction are, for example, Vinco et al. [46] and Vain et al. [36] .
Ultimately, the results of multidimensional verification campaigns proposed in this work are to be represented in a multidimensional space, as illustrated in Fig. 2a. It shows six hypothetical independent verification campaigns in a three-dimensional verification space. A verification campaign in this example shows the level of confidence in the different dimensions: (F)unctionality, (P)ower and (S)ecurity. In this illustrative example, only three aspects are taken into consideration; on demand, verification engineers can involve different dimensions. Here, the different colors of the lines represent different multidimensional spaces, e.g. Campaign_1 in blue stands for the verification result considering three aspects, i.e., the functional, power and security aspects, at the same time. The figure shows the interdependency of these three requirements and thus can help designers to choose the most suitable design combination. Subsequently, Campaign_2 represents the combination of the functional and security aspects, Campaign_3 the combination of the functional and power aspects, etc. Radar charts, as shown in Fig. 2b, are thus an instrument for summarizing multidimensional verification results for a large number of dimensions (where the dimensions can be ordered to emphasize correlation or interdependencies between adjacent dimensions).
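A minimal sketch of how such a radar-chart summary can be constructed is given below; the per-dimension confidence scores are invented, and the function merely computes the chart geometry (each dimension's score on its own axis, axes spread evenly around a circle).

```python
import math

def radar_points(scores):
    """Map per-dimension confidence scores (0..1) to radar-chart
    vertices: axis i lies at angle 2*pi*i/n, radius = score."""
    n = len(scores)
    return [(s * math.cos(2 * math.pi * i / n),
             s * math.sin(2 * math.pi * i / n))
            for i, s in enumerate(scores)]

# Hypothetical campaign: confidence in Functionality, Power,
# Security and Timing, respectively
campaign = radar_points([0.9, 0.6, 0.4, 0.8])
```

Plotting the closed polygon through these vertices (e.g. with matplotlib) yields a Fig. 2b-style chart; ordering the axes so that interdependent aspects sit next to each other makes their correlations visually apparent.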

Motivational example
Single-dimension verification campaigns ignoring interdependencies between the dimensions may lead to gaps in the overall electronic system quality. As an example to show the importance of multidimensional verification, let us consider an actual verification campaign for the open-source NoC framework Bonfire [71,72]. The design under verification is in RTL VHDL and implements a 2 × 2 NoC infrastructure (processing elements excluded). The verification plan considered a 2-dimensional verification campaign targeting functionality and power consumption requirements. For the former, assertion-based functional verification by simulation was employed, targeting statement, branch, condition and toggle coverage metrics and the satisfaction of a set of temporal simple-subset PSL assertions. For the latter, a set of power targets was extracted for the targeted silicon implementation assuming a particular switching activity (set to 12 mW in this example).
Among the documented design errors in the Bonfire project, bug f1, shown in Fig. 3, is an example of a functional misbehavior due to improper usage of the write and read pointers in the FIFO. The figure shows the erroneous code lines in red and the corrected versions in blue. Bugs f1 and p1 are demonstrated in Figs. 3 and 4, respectively. Bug p1 causes violations of the specified power consumption targets because of unnecessary excessive use of a counter related to a fault-tolerance structure. The corresponding power consumption reports are given in Table 2. The power consumption is shown in the Total Power cell, which is composed of the dynamic power, i.e. the Switching Power in the interconnects and the Internal Power in the logic cells, and the (for the target technology) insignificant static leakage power, Leak Power. As summarized in the first row, for bug f1 the Total Power is equal to 10.211, consistent with the power consumption requirement. Similarly, in the third row, which represents the power consumption of the correct version of the code, the total power is equal to 10.184. This shows that even though the code contains a bug (bug f1), the power consumption requirement is still met. On the contrary, for bug p1, even though there are no functional errors, the reported Total Power is equal to 22.137. Thus bug p1 results in double the power consumption of the correct implementation and violates the power targets in the specification. This proves that it is critical to know how and where the code should be modified in order to reduce the power consumption as well as to maintain functional correctness. In general, this simple motivational example demonstrates the challenge of the interdependency of different aspects when requirements in more than one dimension are present.
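The two-dimensional pass/fail outcome of this campaign can be reproduced with a few lines of arithmetic; the Total Power values are the ones reported in Table 2, and the 12 mW budget is the plan's power target.

```python
POWER_BUDGET_MW = 12.0  # power target from the verification plan

# Total Power reported for each design variant (mW, from Table 2)
reports = {"bug_f1": 10.211, "bug_p1": 22.137, "corrected": 10.184}

power_violations = {name: total > POWER_BUDGET_MW
                    for name, total in reports.items()}
# bug_f1 passes the power dimension despite its functional bug, while
# bug_p1 violates it despite being functionally correct -- a gap that a
# single-dimension campaign would miss.
```

The check makes the interdependency explicit: neither the functional nor the power dimension alone flags both bugs, so both variants look clean in one of the two single-dimension campaigns.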

Machine learning to tackle the challenges of multidimensional verification
As can be seen in the previous sections, multidimensional verification is a complex multi-aspect optimization problem. Machine learning algorithms are known to be able to learn complex relationships and have been used for several optimization problems. Section 3.5 has shown that machine learning techniques have already been used successfully for estimating several different single verification metrics. This suggests that machine learning can also be applied to the multidimensional verification problem. Therefore, an initial approach based on machine learning techniques is proposed in order to tackle this multidimensional verification challenge.

Table 2. Power consumption of the Bonfire system implementation: corrected and with bugs f1 and p1.

Proposed methodology
The proposed approach aims to predict two different verification metrics based on the same feature set extracted from the gate-level netlist of a given circuit. The first metric is the De-Rating (or Vulnerability) Factor, which relates to the reliability verification flow and is a major metric of failure analysis. The second metric is the path delay, which relates to timing analysis and is usually obtained during the synthesis or place-and-route stage of design development. Machine learning can therefore help to shift these analyses to an earlier design stage.
A possible application scenario consists in extracting a list of circuit features and training an ML tool with a limited set of reference inputs (the values of the selected circuit features) and expected outputs (reliability and timing metrics). Depending on the exhaustiveness of the training campaign, the trained ML tool can then provide actual reliability metrics from a limited list of circuit features while spending far fewer resources (CPU time, EDA tool licenses, man-power) than classical methods.
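This scenario can be sketched as follows. The snippet is a minimal illustration on synthetic data: the per-flip-flop features and the ground-truth metric are fabricated, and a closed-form ridge regressor stands in for the trained ML tool (the study itself ultimately favours SVR with an RBF kernel).

```python
import numpy as np

def train_ridge(X, y, lam=1e-2):
    # Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y.
    # A simple stand-in for the trained ML tool; the study itself
    # favours SVR with an RBF kernel for these metrics.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Hypothetical per-flip-flop features (e.g. fan-out, logic depth,
# toggle rate, a synthesis attribute) -- synthetic data for illustration.
rng = np.random.default_rng(0)
X = rng.random((200, 4))                          # 200 flip-flops, 4 features
true_w = np.array([0.5, 0.2, 0.1, 0.05])          # fabricated ground truth
y = X @ true_w + 0.01 * rng.standard_normal(200)  # e.g. a reliability metric

w = train_ridge(X, y)   # train on the reference data
y_hat = X @ w           # predict the metric for all instances
```

Once trained, such a model returns a metric estimate from the feature vector alone, avoiding a full fault-simulation or timing run per instance.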

Prediction results
The proposed idea was implemented and evaluated on a practical example. To this end, a set of features is defined which characterizes each flip-flop instance in the circuit. The feature set is composed of static elements (cell properties, circuit structure, synthesis attributes) and dynamic elements (signal activity). After extracting the features for the full list of circuit instances, reference data was obtained: the Functional De-Rating per flip-flop was determined through first-principles fault-simulation approaches, and the path delay was extracted by classical static timing analysis. One part of the reference dataset is used to train the machine learning model; the remaining data is used to validate and benchmark the accuracy of the trained tool.
As circuit under test, the Ethernet 10GE MAC Core was used, which is available as an RTL description from OpenCores. The circuit consists of control logic, state machines, FIFO controllers and memory interfaces. By synthesizing the design with the NanGate FreePDK45 Open Cell Library, 1054 flip-flops were identified and the corresponding features extracted.
Several machine learning models have been evaluated: Linear Least Squares, Ridge regression (with linear and non-linear kernels), k-Nearest Neighbors (k-NN), Decision Tree (CART) and Support Vector Regression (SVR, with linear and non-linear kernels). Notably, the linear models are not very suitable for predicting the reliability metrics; the non-linear models perform much better, with SVR using a Radial Basis Function (RBF) kernel among the best. Therefore, the SVR model with RBF kernel is used in the following presentation of the prediction results. Figs. 5 and 7 show the prediction of the two metrics when 50% (527 flip-flops) of the data is used to train the model and the remaining 50% to evaluate it. The performance of regression models is usually evaluated using the Coefficient of Determination (R²); the model reaches a score of R² = 0.844 for predicting the Functional De-Rating and R² = 0.975 for predicting the path delay. Figs. 6 and 8 show the learning curves of the model, which describe its performance for different sizes of the dataset used for training, with the remaining data used for evaluation. The learning curves suggest that using more than 50% of the available data for training does not significantly improve the prediction performance. However, a valuable prediction can still be performed with less than 50%. Thus, by accepting a slight reduction in accuracy, the cost of an exhaustive analysis can be reduced further.
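A learning-curve experiment of this kind can be sketched as below. This is a simplified illustration on synthetic data: a closed-form ridge regressor replaces the SVR/RBF model of the study, and while the 1054 rows mirror the flip-flop count of the case study, the features and targets are fabricated for illustration.

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of Determination (R^2), the score used in the study.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def learning_curve(X, y, fractions, lam=1e-2, seed=1):
    # For each training fraction, fit on that share of the data and
    # score R^2 on the held-out remainder (ridge as a simple stand-in).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    scores = []
    for f in fractions:
        n_train = max(2, int(f * len(y)))
        tr, te = idx[:n_train], idx[n_train:]
        w = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ y[tr])
        scores.append(r2_score(y[te], X[te] @ w))
    return scores

rng = np.random.default_rng(0)
X = rng.random((1054, 4))   # one row per flip-flop, as in the case study
y = X @ np.array([0.5, 0.2, 0.1, 0.05]) + 0.01 * rng.standard_normal(1054)
scores = learning_curve(X, y, [0.1, 0.25, 0.5, 0.75])
```

Plotting `scores` against the training fractions yields the kind of learning curve discussed above, from which a cost/accuracy trade-off for the training-set size can be read off.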
This practical example has shown that machine learning can be successfully applied for different verification purposes. In order to use ML to support the multidimensional verification problem, features from different design stages need to be extracted and used to train either a unified model or several separate models, which can then be used to predict the required verification metrics.
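The separate-models variant can be sketched as follows: a hedged illustration in which the metric names and features are synthetic, and a simple linear fit stands in for whatever regressor each metric would actually require.

```python
import numpy as np

def fit(X, y, lam=1e-6):
    # Lightly regularized least-squares fit; a placeholder for any
    # per-metric regressor (SVR, k-NN, decision tree, ...).
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.random((300, 5))  # shared feature matrix, one row per instance

# Hypothetical targets: one per verification dimension (synthetic data).
targets = {
    "functional_derating": X @ rng.random(5),
    "path_delay": X @ rng.random(5),
}

# One separate model per metric, all trained on the same shared features.
models = {name: fit(X, y) for name, y in targets.items()}
predictions = {name: X @ w for name, w in models.items()}
```

The shared feature matrix is extracted once, while each verification dimension keeps its own model, which keeps the dimensions independently retrainable.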

Conclusion
In recent years, numerous extra-functional aspects of electronic systems have been brought to the front and imply verification of hardware design models in a multidimensional space along with the functional concerns of the target system. Aiming at an understanding of this new verification paradigm, we have performed a comprehensive analysis of the state of the art, presented a taxonomy for multidimensional hardware verification aspects and an up-to-date survey of related research works and trends towards enabling the multidimensional verification concept, and investigated the potential of machine-learning-based techniques to support this concept. As a result of the performed analysis of the state of the art, we have outlined a gap in methodologies and tools for holistic multidimensional verification of hardware design models, along with the key challenges.

Declaration of competing interest
All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version.
Xinhui Lai is one of the early-stage researchers in the RESCUE European Training Network. She is pursuing her research work and Ph.D. at Tallinn University of Technology, one of the institutions involved in the RESCUE project. She received her bachelor's and master's degrees in Electronic Engineering from Politecnico di Torino, Italy, in October 2014 and April 2017, respectively. Her research focuses on functional verification of design errors and automated debug, i.e. localization and correction, as well as verification of extra-functional interdependent aspects in nanoelectronic system design, such as security, reliability and the power/performance envelope.