Knowledge Driven Synthesis Using Resource-Capability Semantics for Control Software Design

This paper presents a knowledge-driven approach for the automated synthesis of controllers, evaluated on three different use-cases. The approach addresses the engineering challenge posed by Industrie 4.0, which requires fast, reliable, and flexible integration of multiple heterogeneous hardware and software components. Manual design approaches do not scale to large systems because of this complexity. The proposed approach captures resource-capability knowledge and uses a reasoning-based synthesis mechanism to compose a controller design for a given plant goal. Domain-specific languages (DSLs) describe the components, their interfaces, and their capabilities, and the generated control designs are translated into executable code that implements the control strategy. The proposed approach reduces the average engineering time by 70% and generates, on average, 60% of the executable code in each use-case. A knowledge repository stores the resource-capability knowledge and enables rapid prototyping and iterative design. The approach thus offers a promising way to automate the synthesis of controllers across use-cases involving multiple heterogeneous hardware and software components.


I. INTRODUCTION
Large-scale industrial systems such as power plants, robotic systems for inventory management, and manufacturing plants have hundreds of heterogeneous hardware and software components. These plant systems operate through a hierarchy of controllers. In many systems, the controller is implemented as a collection of software routines collectively called the control software of the system. Krupitzer et al. [1] view control software as the nervous system of any discrete plant control system, which connects with the different underlying hardware and software components, interacts with the elements, and controls their functioning. The control software is responsible for several aspects of the plant's functioning, such as ensuring its safety and correct operation, fault handling, and achieving stakeholder objectives. Even a small bug in the control software can cause a massive loss of cost and life. A case study by Dźwiarek [2] discusses the accidents caused by failures in control software, which further underlines the importance of a robust and accurate controller design. (The associate editor coordinating the review of this manuscript and approving it for publication was Engang Tian.)
Designing a large-scale plant controller is a complex task, involving identifying components, understanding their usage and applicability to the context, and finally integrating them in a specific pattern that ensures objective accomplishment. A study conducted by Törngren and Sellgren [3] shows that significant design effort is spent identifying the system objectives and suitable components. Consider, for example, the design of the Telescope Management System (TelMgt) [4] for the Square Kilometer Array (SKA) Radio Telescope, a project in which the first author participated. It took four years and 40 engineers from various domains to design the telescope. The design effort and complexity grow by several factors as the number of components in a plant increases, making it challenging to design such large systems manually in a short period. Alongside the manual effort, there is also a margin for design errors during the design phase, which adds to the engineering cost. In short, manual design of large-scale plant controllers requires significant effort in identifying system objectives and suitable components, and is time-consuming and error-prone.
Industrie 4.0 has set high expectations for future systems, which need to be intelligent, accurate, connected, efficient, and adaptable. These demands have pushed the need to revisit engineering approaches to enable fast, resilient, and verified systems. Going forward, we see a need for an approach that enables automated offline engineering and rapid development even while the systems are operating, and design synthesis is the approach we propose to meet these needs.
To address these needs, we propose a knowledge-driven controller synthesis approach. Our approach aims to address the need for automated design engineering and rapid development of large-scale plant controllers, given the challenges and complexity involved in their manual design. Knowledge about the plant's components is captured in order to identify suitable components for a given objective and then to synthesize a controller that orchestrates them. To capture the knowledge about components, we abstract the component behavior and usage using a concept we call Capability. In real-world projects, engineers with good knowledge about components and their capabilities can reason about a component's applicability for a specific objective. We create a capability ontology to explicate the capability knowledge of the components. This ontology becomes the basis for determining which offered Capability can satisfy the required Capability for a plant objective.
We evaluated our approach by generating control software for three different plant systems, namely, a pick-and-place robot, a smart meeting room, and a vehicle entry gate automation. The results showed that our approach generated 60% of the executable code that implements the control strategy, reducing the engineering cost and error margin while ensuring the quality and correctness of the control software.
We believe that our approach can be extended to other domains requiring large-scale control software development, enabling efficient and fast systems that meet future demands. In Section II we discuss the related work in the area of control design using the capabilities concept, and present the proposed approach in Section III. In Section IV, we discuss the capabilities concept, followed by a formal definition of capabilities using Transition Systems [5] in Section V. In Section VI, we work out a vehicle entry automation scenario with the formal definitions. The initial prototype and the results are discussed in Sections VII and VIII. In Section IX, we discuss the threats to validity for our approach and how it compares against modern deep learning and artificial intelligence based approaches. Finally, in Section X we conclude the paper with a brief discussion of future work.

II. RELATED WORK
The thesis by Nuzzo [6] proposes a design methodology addressing the complexity and heterogeneity of cyber-physical systems through assume-guarantee contracts to formalize the design process. The approach enables the realization of system architectures and control algorithms in a hierarchical and compositional way: components are specified by contracts and systems by compositions of contracts. The thesis also establishes a link between the theory of contracts and interfaces to resolve the compatibility problem for systems with uncontrolled inputs and controlled outputs. Soururi [7] has proposed a compositional approach to automate design creation using mechatronic modularity. This research explores the inherent mechatronic modularity of machines and systems as a base for their control software. The thesis proposes a methodology for developing component software for mechatronic components, enhancing the concept of Intelligent Mechatronic Components (IMC) for better composability. The approach aims at enhancing the performance of software design and maintenance, as well as system flexibility and design compositionality. The work introduces a rule-based approach for the automatic configuration of physical mechatronic components and a uniform architecture for control software design of IMCs. This architecture envisages the machines' control to be composed of modular software components with a standardized interface.
Component-Oriented Software Engineering (COSE) supports the fast development and deployment of software-intensive systems. In addition to the construction-level support through component-based approaches, component-orientation further speeds up the earlier analysis and modeling phases through domain-specific guidance, and cuts down the declarative modeling work that targets code generation compared with object-oriented approaches. Sometimes, component-based approaches also depend on other paradigms for modeling the problem and solution spaces. For example, the development of ambient intelligent systems [8] is guided by domain-specific notions and constructs that quickly map to existing components.
The approach proposed by Francesca et al. [9] proposes a novel automatic design approach for robot swarms, called AutoMoDe (automatic modular design). AutoMoDe generates an individual-level behavior in the form of a probabilistic finite state machine by searching for the best combination of preexisting parametric modules. In other words, AutoMoDe develops control software by selecting, via an optimization algorithm, the topology of a probabilistic finite state machine, the modules to be included, and the value of their parameters. The set of modules and the rules to compose a probabilistic finite state machine represent the automatic design process's bias.
Dai et al. [10] propose a knowledge-driven service-based capability orchestration engine for cyber-physical systems, which dynamically composes services to process task or goal requests from humans or devices. The approach uses SQWRL (Semantic Query-Enhanced Web Rule Language) queries to reason over an ontological knowledge base that defines five elements: Equipment, Data, ServiceContract, ServiceRequest, and Report. The service-based approach is limited to a specific context, which makes it challenging to reuse service designs for different scenarios. Secondly, the approach does not incorporate a knowledge description interface and a service description interface, which could limit the system's ability to learn and adapt to changing requirements. Lastly, the approach does not explicitly map how the composed services are translated into a machine control design that the approach could validate.
The article by Wautelet and Kolp [11] presents a framework for model-driven development using services to encapsulate business processes, allowing domain analysts to focus on the core business and added value of the enterprise. The framework provides models at strategic, tactical, and operational levels to develop an agent-oriented software system. The strategic level uses a service model to understand the business's high-level values and quality expectations, while the tactical level describes the business processes automated by the system, with actors' accountability and responsibility visualized. The models are then mapped to operational models that instantiate the Belief/Desire/Intention paradigm, proposing entities mapping to the real-life organization. The framework builds a business-driven transformation process that aligns and traces the business and the IT system, ensuring better alignment and traceability between the two. Such approaches are suitable for models that do not undergo frequent changes and mostly remain static. For dynamic systems, such as manufacturing plants, that undergo topological or objective changes, these models become restrictive. Also, this approach is focused on business models rather than mechanistic models, keeping it out of scope of the interface-behavior-interaction paradigm.
The same model-driven framework by Wautelet and Kolp [11] proposes strategic, tactical, and operational views for developing agent-oriented software. It aims to address the limitations of existing models and provide a comprehensive approach. The framework includes a strategic analysis model based on the concept of Service, which incorporates elements of quality and risk management. It also discusses actors' responsibility assignment and utilizes classical models for tactical knowledge representation. The operational perspective is represented through three different models focused on implementing agent software using the Belief, Desires, Intentions (BDI) paradigm. The study outlines the structure and organization of the content, highlights related work, emphasizes the need for a high-level vision, and introduces various models for different decision levels in agent-oriented software engineering. The capability concept is also mentioned in this study to describe an abstraction for the goals of IT systems. However, there is no formal discussion or reasoning about capabilities. The paper is focused on IT systems, where most components are well known and engineers understand the domain very well.
Each of the above research efforts, and other similar efforts, contributes towards composition and automated design/strategy generation. However, the problem of creating the requirement specifications and component descriptions still exists. Another gap is that these approaches support a specific application domain rather than the generic area of discrete control. The need for an approach that addresses the control domain and enables re-use of that domain for any target application domain, such as robotics, CPS, or IoT, still exists. The synthesis approaches discussed above lack the following: 1) control domain semantics, 2) simplification of the synthesis approach, and 3) an integrated environment supporting control software engineering.

III. OUR APPROACH
We consider the objective of automating the synthesis of controller designs that align with stakeholder goals. This objective involves identifying suitable components and establishing the necessary connections between the controller and these components. We aim to enhance the level of abstraction when specifying this objective. Our approach simplifies the synthesis process, resulting in a controller design that can be translated into executable code. To achieve these goals, we utilize the concept of capability, which abstracts the operational details of a component. We have developed domain-specific languages (DSLs) to capture component interfaces and capability knowledge. Additionally, we employ a custom workflow description language to capture the plant objective. Figure 1 provides a visual representation of the conceptual architecture of our approach.

A. SOLUTION ARCHITECTURE
As shown in Figure 1, the architecture comprises two different types of knowledge bases: i) RDF/OWL-based ontologies that store knowledge about the components and their capabilities, and ii) a machine learning model that learns and stores knowledge about the different types of workflows and their corresponding required capabilities.
The end-user (engineer) provides a very high-level goal description (e.g., Warehouse automation, Smart Home) that becomes the input for the machine learning model to predict a suitable workflow and the required capabilities based on the learning data. The synthesizer uses the generated workflow and required capabilities to query the RDF/OWL-based knowledge repository and obtain the list of components that provide the workflow's capabilities. This capability requirement satisfaction is executed through reasoning queries that perform type checks between the required and the offered capabilities. Once the synthesizer identifies the suitable components through knowledge queries, it uses an algorithm to stitch the workflow together with the components' capabilities and generate a hierarchical controller that ensures workflow execution by orchestrating the components. Finally, the generated controller can be used to auto-generate the execution code and auto-deploy the solution.
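The capability-matching step can be illustrated with a minimal in-memory sketch. Plain Python dictionaries stand in for the RDF/OWL repository and its reasoning queries, and all component and capability names here are illustrative, not taken from the actual knowledge base:

```python
# Illustrative stand-in for the RDF/OWL repository: maps each component
# to the capability types it offers (names are hypothetical).
OFFERED = {
    "RoboticGripperArm": {"Pick"},
    "SuctionArm": {"Pick"},
    "Camera": {"Imaging"},
    "RFIDReader": {"Sensing"},
}

def components_for(required_capability):
    """Type-check a required capability against offered capabilities,
    mimicking the reasoning query over the knowledge repository."""
    return sorted(c for c, caps in OFFERED.items() if required_capability in caps)

def bind_workflow(workflow):
    """Bind every (task, required capability) pair in a workflow to the
    candidate components that can satisfy it."""
    return {task: components_for(cap) for task, cap in workflow}

workflow = [("Detect the item", "Sensing"), ("Hold the item", "Pick")]
print(bind_workflow(workflow))
```

In the full approach this lookup is a reasoning query over the ontology, which also checks subtype relations between capability types rather than exact name matches.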
To arrive at the proposed architecture, the following research steps were taken:
• Requirements gathering: The first step was to identify the requirements of the system. This involved understanding the domain and the problem that the architecture was supposed to solve. The requirements were gathered by studying existing systems, interviewing domain experts, and analyzing use cases.
• Knowledge capturing strategy: Once the requirements were gathered, a strategy was developed to capture the necessary knowledge for the system. The strategy involved capturing knowledge about the components and their capabilities using OWL/RDF-based knowledge repositories. It also involved capturing dynamic and implicit knowledge about changing contexts and their effect on workflow activities using a machine learning-based approach.
• Synthesis algorithm development: With the knowledge capturing strategy in place, algorithms were developed to synthesize workflows and generate controllers. The synthesis algorithms involved using the machine learning model to predict suitable workflows and their required capabilities based on learning data. The synthesizer then used the generated workflow and required capabilities to query the RDF/OWL-based knowledge repository and get the list of components that provide the workflow's capabilities. The synthesizer then used an algorithm to stitch the workflow with the components' capabilities and generate a hierarchical controller.
• Implementation: The final step involved implementing the proposed architecture. This involved developing the necessary software and hardware components, integrating them, and testing the system.
The knowledge capturing strategy was realized using OWL/RDF-based knowledge repositories to capture the knowledge about the components and their capabilities. To capture dynamic and implicit knowledge, a machine learning-based approach was used: Recurrent Neural Networks were studied and implemented to learn and store knowledge about the different types of workflows and their corresponding required capabilities. The training data was captured using an Activity DSL, which captures the learning information. To demonstrate the effectiveness of the proposed architecture, experiments were conducted comparing manual engineering against the proposed approach (Section VIII). Tables 1 and 2 compare the effort, code generation, and design-time reduction achieved with the proposed approach. The experimental results showed a significant reduction in design time and manual effort, as well as a high percentage of code generation using the proposed approach.
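The synthesis step described above, stitching a workflow with bound components into a hierarchical controller, can be sketched as follows. This is an illustrative simplification under the assumption of a simple sequential workflow; the names and the first-candidate selection policy are hypothetical, not the paper's actual algorithm:

```python
def synthesize_controller(workflow, bindings):
    """Stitch a workflow with bound components into a simple hierarchical
    controller: one child entry per task, executed in sequence.
    Illustrative sketch only; the real synthesizer reasons over the ontology."""
    plan = []
    for task, capability in workflow:
        candidates = bindings.get(task, [])
        if not candidates:
            raise ValueError(f"no component offers capability {capability!r} for {task!r}")
        # Naive policy: take the first candidate component.
        plan.append({"task": task, "capability": capability, "component": candidates[0]})
    return {"type": "SequenceController", "children": plan}

workflow = [("Detect the item", "Sensing"), ("Hold the item", "Pick")]
bindings = {"Detect the item": ["RFIDReader"], "Hold the item": ["SuctionArm"]}
controller = synthesize_controller(workflow, bindings)
print(controller["children"][1]["component"])
```

The generated structure is the input to the later code generation step, which emits executable orchestration code from the controller description.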
The objective of a controller is to achieve the plant goals by orchestrating the plant components. For example, a warehouse automation robotic system needs to pick objects from locations and place them in destination locations. The activities and tasks to be executed need to be specified, along with the capabilities required for each task. For example: 1) the task Detect the item needs a Sensing Capability; 2) the task Hold the item needs a Gripping Capability.
Workflow activities, their sequence, and the capabilities required by each activity are dynamic and change with the context. To capture this dynamic and implicit knowledge about the changing context and its effect on the activities, we use a machine learning-based approach. The conceptualization of this approach is in progress, and we are studying Recurrent Neural Networks to implement the learning. To capture the training data, we use an Activity DSL [12], which captures the learning information. Once trained with data, the model is used to predict the task workflow, with bindings to the required capabilities, for any input goal. This approach is currently at a conceptual level, and we are working towards implementing it. To represent the workflow, we have developed an Activity domain-specific language inspired by the SysML activity diagram.
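Since the RNN component is still at a conceptual stage, the idea of learning workflow structure from traces can be illustrated with a much simpler stand-in: a bigram frequency model that predicts the most likely next activity. The traces and activity names below are invented for illustration only:

```python
from collections import Counter, defaultdict

# Stand-in for the RNN: a bigram frequency model over observed workflow
# traces. Traces and activity names are illustrative.
TRACES = [
    ["Detect", "Pick", "Move", "Place"],
    ["Detect", "Pick", "Move", "Place"],
    ["Detect", "Scan", "Move", "Place"],
]

def train(traces):
    """Count activity-to-activity transitions across all traces."""
    model = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            model[a][b] += 1
    return model

def predict_next(model, activity):
    """Return the most frequently observed successor activity, if any."""
    counts = model[activity]
    return counts.most_common(1)[0][0] if counts else None

model = train(TRACES)
print(predict_next(model, "Detect"))  # -> Pick
```

An RNN replaces the bigram table with a learned sequence model, letting the prediction condition on the whole activity history and on contextual features rather than just the previous activity.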

2) COMPONENT AND CAPABILITY KNOWLEDGE
We capture the knowledge about the components and their capabilities using an OWL/RDF-based knowledge repository. The component interface and capabilities are static knowledge and do not change irrespective of the context. The OWL/RDF-based knowledge ontologies enable type checks while reasoning about the applicability of a component offering a capability type against a required capability type. The component knowledge is stored by capturing its interface and behavior using the M&CML domain-specific language [13]. Using this DSL, we can capture the control interface of components in terms of commands, events, alarms, data, and behavior as a state machine. Components offer abstract capabilities, based on which engineers decide the applicability of a specific component. For example, a warehouse task like Picking will need a component that offers the abstract Capability to Pick; a robotic gripper arm and a suction arm both offer the Capability to Pick. We discuss the capability concept and ontology in the next section.
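The kind of component description captured by the DSL (a control interface of commands and events, plus behavior as a state machine) can be sketched as a plain data structure. The field names and the example component below are illustrative, not the actual M&CML schema:

```python
from dataclasses import dataclass

# Minimal sketch of a component description in the spirit of M&CML:
# a control interface (commands/events) plus behavior as a state machine.
# Field names are illustrative, not the actual DSL schema.
@dataclass
class Component:
    name: str
    capabilities: set
    commands: set
    events: set
    transitions: dict           # (state, command) -> next state
    state: str = "idle"

    def execute(self, command):
        """Fire a command and advance the behavioral state machine."""
        if command not in self.commands:
            raise ValueError(f"unknown command {command!r}")
        self.state = self.transitions[(self.state, command)]
        return self.state

gripper = Component(
    name="RoboticGripperArm",
    capabilities={"Pick"},
    commands={"grip", "release"},
    events={"gripped", "released"},
    transitions={("idle", "grip"): "holding", ("holding", "release"): "idle"},
)
print(gripper.execute("grip"))  # -> holding
```

A synthesized controller drives components through exactly this kind of interface: it issues commands and handles events, while the state machine constrains which commands are valid in which state.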

IV. CAPABILITIES CONCEPT
We start with a definition of device capabilities better suited to the systems world (rather than to enterprises): a device's capabilities capture the transformations it can produce in the state of its environment, and possibly in its own state, with associated behaviors and characteristics. For example, a toasting capability focuses on the change in the state of bread, while a projection capability focuses on the sequence of images exhibited. Capability descriptions usually involve domain-specific terminology. Since both state and behavior are observable quantities, capabilities can be seen as validity constraints over the sequence of observables associated with a system, along with associated characteristics such as timing and reliability.
Capability specifications are usually functional abstractions of operational behaviour and apply at every level of a system: low-level capabilities such as TemperatureSensing, equipment-level capabilities such as BottleFilling, and system-level capabilities such as CarryingOutRadioTelescopeObservations. It is also possible to frame capabilities at the level of individual actions (e.g., grasping), tasks (e.g., fetch item), collections of related actions (e.g., equipment configuration), or goals (e.g., building security monitoring and management). Capabilities may have associated features (e.g., auto shut-off, notification features): augmentations to the basic functionality provided.
Using a capability involves a series of interactions between the device and its environment (including users/operators). A Device Capability Self-description should capture all the information required to deploy/access the device and utilize the capability to achieve system goals. This includes not only capability specifications but also operational interfaces and usage protocols, control and life cycle management interfaces and protocols, configurabilities, behaviors, and outcomes including variations e.g., failure modes and their handling, physical characteristics and affordances, dependencies, and assumptions about operating context. The completeness of this content is based on the framework presented in [14] and validated against the informational requirements of the reconfiguration life cycle activities.
Capability Composition involves determining a target configuration and operating model that produces a match between desired system goals and the capability descriptions of available resources, using domain and pattern knowledge. Compositionality reasoning involves determining the expected outcomes from devices in a specified configuration, interacting according to a specified operating model, based on domain and contextual knowledge. 1) Capability specifications enable compositionality reasoning at a higher level of abstraction, in terms of functional decomposition knowledge (into lower-level functions or tasks) and reasoning about quality characteristics. 2) Interface and usage model specifications, together with knowledge of collaboration patterns/processes, enable reasoning about outcomes at the level of sequences of actions and their effects on state. 3) Complete capability self-descriptions expand the scope of reasoning to include the full range of scenarios and concerns: not only faults, security, and safety, but also control, life cycle management, dependencies, physical aspects, and assumptions about the context.
This enables separation of concerns in compositionality reasoning: capabilities required, collaboration models, detailed configuration, and operational models. The Capability concept has been studied in the past by Aitken et al. [15] and Wang et al. [16], in the context of problem-solving methods and interoperability, respectively. For our work, we define Capability as a component's ability to deliver a desired outcome by interacting with the environment. The environment can be the other interacting components or a controller. E.g., a robotic arm offers a Picking capability, a camera offers an Imaging capability, and a mobile four-wheeled chassis offers a Mobility capability. It is natural for humans to understand components in terms of their capabilities and to reason about a task by comparing the required capability against the offered capability. E.g., the task Picking a box can be performed by a robotic arm and not by a camera, due to the different capabilities offered by the two components. We consider Capabilities as wrappers around a component's usage model, which expose its abstract engineering behavior. However, a capability name alone is not sufficient to reason about the component; it is also necessary to know the component's usage model. A usage model for a capability captures the control actions required to invoke the abstract functional capability from a component. To capture the usage model, we borrow the concept of Session Types [17] to model the interaction protocol from the component's side. The interaction protocol defines the control actions that the controller should perform to use the component's capability. During synthesis, the controller implements the complement [17] of the component's interaction protocol. Figure 2 shows the structure of the capability ontology.
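The session-type complement mentioned above can be sketched very simply: for a linear protocol, the controller's side is obtained by flipping every send into a receive and vice versa. The action vocabulary and message names below are illustrative, not the paper's actual protocol notation:

```python
# Sketch of the session-type complement: the controller's protocol is the
# dual of the component's protocol, with sends and receives swapped.
DUAL = {"send": "receive", "receive": "send"}

def complement(protocol):
    """Return the complementary protocol: each (direction, message) pair
    has its direction flipped, preserving the order of interactions."""
    return [(DUAL[direction], message) for direction, message in protocol]

# Component side of a hypothetical Picking capability:
# receive the 'grip' command, then send a 'gripped' event.
component_protocol = [("receive", "grip"), ("send", "gripped")]
controller_protocol = complement(component_protocol)
print(controller_protocol)  # -> [('send', 'grip'), ('receive', 'gripped')]
```

Full session types also cover choice and recursion, whose duals are defined analogously; the linear case shown here is enough to convey why a controller synthesized from the complement is guaranteed to be compatible with the component's interaction protocol.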
As seen in Figure 2, the capability ontology has the following classes: i) Capability: describes the capability name and links the Capability with the component through its interface. This class also captures the control actions necessary to initialize the component; the synthesized controller needs to ensure that it executes the initialization actions before invoking a capability. ii) Control Capabilities: describes the types of control actions that need to be executed/handled by the interacting party to invoke a capability from the component. A capability can be invoked by firing a command, handling an event or alarm, receiving a data stream, or executing a script operation. The ControlCapabilities need to be invoked by the controller to orchestrate multiple components. iii) Capabilities Outcome: describes the outcomes that need to be handled or addressed by the other parties interacting with the component; the outcomes include the errors that a component produces. To capture the capability description as per the capability ontology, we developed a capability DSL that provides a textual interface to create descriptive knowledge models.
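A tiny triple store conveys how these ontology classes support reasoning queries. The class names follow Figure 2; the instances, property names, and the query function are illustrative stand-ins for the RDF store and SPARQL-style queries:

```python
# Tiny triple store standing in for the RDF/OWL capability ontology.
# Instances and property names are illustrative.
TRIPLES = {
    ("Pick", "rdf:type", "Capability"),
    ("Pick", "offeredBy", "RoboticGripperArm"),
    ("grip", "rdf:type", "ControlCapability"),
    ("Pick", "invokedVia", "grip"),
    ("gripFailed", "rdf:type", "CapabilityOutcome"),
    ("Pick", "hasOutcome", "gripFailed"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional (s, p, o) pattern,
    like a trivial SPARQL basic graph pattern."""
    return {t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

# Which control actions invoke the Pick capability?
print(query(subject="Pick", predicate="invokedVia"))
```

The synthesizer's type checks amount to such queries: confirming that a required capability has an offering component, finding the control actions to invoke it, and enumerating the outcomes the controller must handle.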
A detailed description of the capability elements is given in the next subsection.

V. TRANSITION SYSTEMS VIEW OF CAPABILITIES
Given goals, capability descriptions, and domain pattern knowledge, we need a mathematical formalism that supports compositionality reasoning and verification, and preferably enables the synthesis of target configurations and operating models. Transition systems with Tabuada interconnects [5] fit this need particularly well from a functional perspective. Our framing is that i) each device is a transition system, ii) the system-of-interest, consisting of interacting devices, is a collection of interacting transition systems, iii) the physical and informational flows among them are modeled as a Tabuada interconnect, and iv) the system's physical environment is another transition system. First we describe the formalism, taken from [5], and then explore its usage for compositionality reasoning and synthesis.

A. TRANSITION SYSTEM
A transition system S is a tuple S = (X_S, X_S^0, U_S, →_S, Y_S, H_S), where X_S is the state space of S, X_S^0 ⊆ X_S is the set of initial states of S, U_S is the set of inputs or actions of S, →_S ⊆ X_S × U_S × X_S is the transition relation, Y_S is the set of outputs (observables) of S, and H_S : X_S → Y_S is the output map.
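The tuple definition above can be made executable as a small data structure; this is a direct transcription of the formalism, with a toy two-state example invented purely for illustration:

```python
from dataclasses import dataclass

# A transition system in the style of Section V-A: states, initial states,
# inputs, a transition relation, outputs, and an output map H_S.
@dataclass(frozen=True)
class TransitionSystem:
    states: frozenset
    initial: frozenset
    inputs: frozenset
    transitions: frozenset      # set of (state, input, next_state) triples
    outputs: frozenset
    output_map: tuple           # pairs (state, output) encoding H_S

    def step(self, state, u):
        """All successor states of `state` under input `u`."""
        return {x2 for (x1, ui, x2) in self.transitions if x1 == state and ui == u}

# Toy example: a light that toggles between off and on.
light = TransitionSystem(
    states=frozenset({"off", "on"}),
    initial=frozenset({"off"}),
    inputs=frozenset({"toggle"}),
    transitions=frozenset({("off", "toggle", "on"), ("on", "toggle", "off")}),
    outputs=frozenset({"dark", "lit"}),
    output_map=(("off", "dark"), ("on", "lit")),
)
print(light.step("off", "toggle"))  # -> {'on'}
```

Because the transition relation is a set rather than a function, the encoding naturally accommodates the non-deterministic systems the formalism allows.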

C. INTERCONNECT AND INTERACTIONS
Interconnect and interactions are defined as follows. Formally, given systems A and B, an interconnect between A and B is a relation I ⊆ X_A × X_B × U_A × U_B. An interconnect can be easily generalized to more than two entities. An interconnect constrains the possible interactions between systems.

D. SYSTEM COMPOSITION
System composition is defined as follows. Let A and B be two systems connected via an interconnect I. The composition of A and B, denoted A ×_I B, is a system C whose state space is X_C ⊆ X_A × X_B, and whose transitions are given by: (x_A, x_B) →_C (x_A', x_B') under input (u_A, u_B) if and only if x_A →_A x_A' under u_A, x_B →_B x_B' under u_B, and (x_A, x_B, u_A, u_B) ∈ I. A transition in the composite system is thus obtained by constraining the transitions of the individual component subsystems so that they respect the interconnect.
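The composition rule can be checked mechanically on small transition relations. The systems and the interconnect below are toy inventions; the function is a direct reading of the definition (a joint transition exists only when both components can step and the interconnect admits the pair):

```python
# Sketch of composition A x_I B: a joint transition exists only when both
# components can take their individual steps and the interconnect I admits
# the pair of states and inputs.
def compose(trans_a, trans_b, interconnect):
    """trans_*: sets of (state, input, next_state) triples.
    interconnect: predicate over (x_a, x_b, u_a, u_b).
    Returns the composite transition set."""
    return {((xa, xb), (ua, ub), (xa2, xb2))
            for (xa, ua, xa2) in trans_a
            for (xb, ub, xb2) in trans_b
            if interconnect(xa, xb, ua, ub)}

A = {("a0", "go", "a1")}
B = {("b0", "ack", "b1"), ("b0", "nack", "b0")}

# Interconnect: B may acknowledge only while A fires 'go'.
def I(xa, xb, ua, ub):
    return ua == "go" and ub == "ack"

print(compose(A, B, I))
```

Only one joint transition survives: the interconnect filters out the `nack` step, which is exactly the constraining effect described in the definition.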

E. FUNCTIONAL GOALS
Functional goals can be expressed as a set of observables Y_R along with a set of traces B_R ⊆ Y_R^ω over Y_R. The operating model synthesis problem is to construct as a solution a system S, and identify a subset I ⊆ X_S^0 such that (a) Y_R ⊆ Y_S, and (b) ∅ ⊂ B_S(I) ⊆ B_R. In other words, the design problem consists of constructing a system S and initializing it to I so that its behavior originating in I is a (non-empty) subset of the required behavior. In this case, we say that the solution (S, I) implements the requirement specification (Y_R, B_R).
The solution system S consists of a collection of interacting and suitably initialized components. In the simplest case of interaction, S consists of an environment E, a device D, and an interconnect I. D is capable of meeting a requirement specification R if there exist an environment E, an interconnect I, and an initialization I ⊆ X_E^0 × X_D^0 such that (E ×_I D, I) implements R. In practice, a device is capable of meeting a fixed set of requirements, and it advertises all the requirements it is capable of meeting. Each capability has a protocol consisting of the interconnects, initialization, and sequence of interactions with the environment necessary to implement a requirement.

F. SYNTHESIS
Synthesis of solutions involves connecting multiple components in order to implement a (more substantial) requirement. In some cases, this may involve the introduction of an additional component, a Controller C, and an interconnect I, such that the revised composition satisfies the requirement. Controller synthesis to satisfy such a mathematical relationship is, of course, a well-studied problem.

VI. EXAMPLE USE-CASE: VEHICLE ENTRY AUTOMATION
To explore the fit between the transition systems formalism, device self-descriptions, goal specifications, and compositionality reasoning, we examine a simple Vehicle Entry Automation scenario. The scenario consists of a boom barrier gate that opens only when an authorized vehicle approaches. The vehicle contains an RFID tag that is detected by an RFID reader. The RFID reader retrieves the vehicle identity, and based on the authorization, allows the vehicle to pass through the boom barrier gate.

A. REQUIREMENT/GOAL SPECIFICATION
The specification of requirements begins by identifying the set of high-level observables and the desirable behavior. At a high level, an observable has two components: vehicle and barrier. A vehicle is either present (with an id) or absent (⊥). The barrier either allows a vehicle identified with an id, denies the vehicle, or remains in a ready state until a vehicle arrives. The observables in the requirement can be defined as: Y_R = Vehicle × Barrier, where Vehicle = vehicle(n : N) + ⊥ and Barrier = allow(n : N) + deny(n : N) + ready.
The desired behavior is a predicate on traces over the high-level observables: if at time instant i there is a vehicle with id n : N, then at some future instant j the barrier allows n if n is present in the authorized set S, and denies n otherwise. The requirements assume that when a vehicle arrives, the barrier is ready. This is captured by the required behavior:

vehicle[i] = vehicle(n) implies there exists j > i such that
    barrier[j] = allow(n) if n ∈ S,
    barrier[j] = deny(n) if n ∉ S,
    and forall i < k < j, barrier[k] = ready
Note that the specification does not preclude out-of-order servicing by the barrier. In a physical implementation, this is possible if there is a bank of barriers.
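The trace predicate above can be made concrete as an executable check. The following is a hedged Python sketch (the trace encoding and function name are our own): a trace is a list of (vehicle, barrier) observable pairs, `None` stands for ⊥, and `authorized` plays the role of the set S:

```python
def satisfies_requirement(trace, authorized):
    """Check the requirement: every vehicle(n) at instant i is eventually
    allowed (if n is authorized) or denied (otherwise) at some j > i,
    with the barrier ready at every instant strictly between i and j."""
    for i, (veh, _) in enumerate(trace):
        if veh is None:                       # ⊥: no vehicle at instant i
            continue
        n = veh                               # vehicle(n)
        expected = ("allow", n) if n in authorized else ("deny", n)
        for j in range(i + 1, len(trace)):
            if trace[j][1] == expected:
                # barrier must remain ready strictly between i and j
                if all(trace[k][1] == "ready" for k in range(i + 1, j)):
                    break
        else:
            return False                      # vehicle n never serviced correctly
    return True

S = {7}                                       # authorized ids
good = [(7, "ready"), (None, ("allow", 7))]   # authorized vehicle, allowed
bad  = [(9, "ready"), (None, ("allow", 9))]   # unauthorized vehicle must be denied
assert satisfies_requirement(good, S)
assert not satisfies_requirement(bad, S)
```

Such a checker is useful as a test oracle against traces produced by a candidate solution system, independent of how the system itself is composed.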

B. SOLUTION SYSTEM
In this and the next sub-section, we give a high-level specification of the system B representing the boom barrier and then show that B is capable of meeting the boom barrier requirement specification R.
Note that vehicles arrive only when B is in the ready state. This is captured via an interconnect that specifies that the inputs to B come from an environment E and that the environment ensures vehicles arrive only when B is ready. Formally, let I(x_E, u_E, x_B, u_B) be the predicate expressing this constraint. It is easily verified that the composition of B and E (say H) implements the requirement specification R (modulo the projection of the output of H to the observables Vehicle × Barrier). The construction of H is relatively mechanical and hence excluded. In practice, the above boom barrier system is composed of multiple devices, such as an RFID reader R, an identity validator V, and an actuator A. Each component comes with its capability specification. These specifications are used to interconnect the devices, and the resultant composite system is then shown to implement the behavior of B.

C. REALISATION OF THE BOOM BARRIER SYSTEM USING COMPONENT DEVICES
The components in our scenario are: 1) RFID Sensor, 2) Validator, and 3) Actuator. The RFID Sensor is modelled as a transition system that provides a Sensing capability: it receives an RFID tag signal and outputs the detected tag, with on/off states. The interface and behaviour of the RFID sensor R are defined below as a transition system: 1) X_R = {idle, detecting, detected(n)} is the set of all states supported by the device. 2) X⁰_R = {idle} is the initial state of the device. 3) U_R = {σ(n), on, off, ⊥}, where σ is the input corresponding to the detection of the RFID signal, n ∈ N is the RFID tag identity, and ⊥ is the empty input when R receives no input. 4) Y_R = {tag(n), on, ⊥} is the set of outputs from R. Using the device interface defined above, we define the inherent behaviour of R using a function f. In this behavior description, the observable input σ is received from the environment. On receiving σ(n), R detects an RFID signal and transitions to the detected(n) state, where n ∈ N is the identity value. This becomes the basis for identifying the offered behavior of the RFID reader device. We define the observable behavior of the sensor as a trace of the runs executed by the system: B^ω_R = {(_,_), (σ(n), tag(n)), (_,_)}. Similarly, we define the interface and behavior for the other devices in the solution space.
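The RFID sensor's transition system can be sketched as code. The following Python fragment follows the interface above (states X_R, initial state idle, inputs U_R, outputs Y_R); the body of the transition function f is an assumption reconstructed from the prose, since the paper elides its definition:

```python
def f_R(state, inp):
    """Transition function f for the RFID sensor R.
    Returns (next_state, output); None stands for the empty output ⊥."""
    if state == "idle" and inp == "on":
        return "detecting", "on"                     # power on, start detecting
    if state == "detecting" and isinstance(inp, tuple) and inp[0] == "sigma":
        n = inp[1]                                   # σ(n): RFID signal, tag id n
        return ("detected", n), ("tag", n)           # emit tag(n)
    if inp == "off":
        return "idle", None                          # power off from any state
    return state, None                               # empty input ⊥: no change

# Run: power on, then detect tag 42
s, y = f_R("idle", "on")
s, y = f_R(s, ("sigma", 42))
assert s == ("detected", 42) and y == ("tag", 42)
```

Analogous transition functions for the Validator and Actuator would follow the interfaces defined next.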
The Validator provides an Authorisation capability: it looks up tag information and maps it to a boolean flag indicating whether the tag is authorised. The interface and behaviour of the Validator V are defined as a transition system: 1) X_V = {alive, down} is the set of all states supported by the validator server. 2) X⁰_V = {down} is the initial state of the device. 3) U_V = {validate(n), ⊥}, where validate is the input command to validate a number n ∈ N, and ⊥ is the empty input when V receives no input. 4) Y_V = {result}, where result is a boolean output corresponding to a valid or invalid number. Using the device interface defined above, we define the inherent behaviour of V using the function f. The Actuator (boom barrier) provides a Controlled Entry capability: it receives an authorisation boolean flag and raises the gate if the flag is true. The interface and behaviour of the Actuator A are defined below as a transition system: 1) X_A = {on, off, opened, ready} is the set of all states supported by the actuator.
2) X⁰_A = {off} is the initial state of the device. 3) U_A = {open_A, close_A, power-on, ⊥}. 4) Y_A = {gate-opened, gate-ready} are the observable outcomes.
Using the device interface defined above, we define the inherent behaviour of A using the function f. To infer the capability of a system composed of R, V, and A, we show that the observed behaviors of the devices are a subset of the desired observed behavior of the target system B. The observed (context-independent) behavior of the new composed system C is expressed over the traces of its constituent devices. Contextual annotations are required to interpret the observed behaviors of the devices in terms of the application context. For example, the input σ(n) to R corresponds to the input v(n) for the desired system S, and hence σ(n) ≡ v(n). Similarly, allow(n) ≡ open_A, passed ≡ gate-opened, deny ≡ close_A, and restricted ≡ gate-ready.

By making the above substitutions in the observed behavior of the composed system, we can infer that a system C composed of R, V, and A has the capability to produce the desired behavior of B.
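The capability-inference step can be sketched as an executable check: apply the contextual annotations (σ(n) ≡ v(n), open_A ≡ allow, close_A ≡ deny, and so on) as a symbol renaming over the composed system's observed traces, then check inclusion in the desired behavior of B. The renaming table and the example traces below are illustrative:

```python
# Contextual annotations: device-level symbols -> application-level observables
RENAME = {"sigma": "v", "open_A": "allow", "close_A": "deny",
          "gate-opened": "passed", "gate-ready": "restricted"}

def annotate(trace):
    """Rewrite a trace of (input, output) pairs into context observables."""
    return tuple((RENAME.get(u, u), RENAME.get(y, y)) for u, y in trace)

# Illustrative behaviors: each element is a trace of (input, output) pairs
composed = {(("sigma", "open_A"),), (("sigma", "close_A"),)}   # behavior of C
desired  = {(("v", "allow"),), (("v", "deny"),), (("v", "ready"),)}  # behavior of B

# C has the capability of B if every annotated trace of C is a trace of B
assert all(annotate(t) in desired for t in composed)
```

The inclusion check is the mechanical core of the argument; the contextual renaming is the part that requires domain knowledge.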
We see that the transition systems formalism is a good fit for modeling the behavior of physical as well as logical devices and components, and their interactions. Transition system specifications match up well with those of a capability self-description. At first glance, the formalism seems to cover only operational behaviors, but it can easily be expanded to include control and life-cycle management. Dependencies and context can be handled as components, inputs from which represent variations (e.g., power QoS) that affect component behavior. Component faults are modelled as states that affect behaviour. Interconnects reflect connectivities between the components. We can also raise the level of abstraction so that each transition represents an entire capability, for example, Authorization. Transition systems can also be used for reasoning about system transition strategies. Domain knowledge patterns are relationships that serve as inputs to compositionality reasoning. In short, the transition systems formalism captures in mathematics the complete semantics of the composition problem. The only aspects left out are physical structures and quality-characteristics reasoning, which need to be handled separately.
It is not the case that this formalism will be used to construct mathematical proofs manually for every system composition scenario. Instead, its value is that 1) it provides clarity on self-description contents and representation, e.g., that behavior should be expressed as traces and relationships over high-level observables; 2) if we use different algorithms to address different parts of the problem, e.g., functional composition, transition strategy, fault management, etc., it provides an overarching framework for reasoning about the completeness of the approach and the correctness and contribution of each algorithm; and 3) in some cases, e.g., controller synthesis, we may be able to directly use the transition systems formalism to guide the creation of methods and tools to solve the problem.
Thus, its primary value is that it frames the entire problem, and the approach to its solution, in a formal way that encourages precision, reasoning, and automation.

E. DEVICE CAPABILITY SELF-DESCRIPTIONS
In principle, device self-descriptions should include all the information needed for capability compositionality reasoning and enable the life cycle activities:

1) FUNCTIONALITY, FEATURES, AND QUALITY CHARACTERISTICS
The functionality, features, and quality characteristics are typically specified in domain terminology. It is necessary to standardize the domain vocabularies used for capability description as part of Industry 4.0, to enable the dynamic composition of capabilities across devices from many independent providers.

2) INTERFACE SPECIFICATIONS
The interface specifications (descriptions) should cover all the layers of interfaces: physical, technological, technical, and functional. They cover not only operational interfaces but also control and life-cycle management interfaces, and may be specified as conformance to standards.

3) USAGE AND INTERACTION MODELS
The usage and interaction models capture usage protocols associated with each device interface. In order to facilitate compositionality reasoning, protocol descriptions should preferably be in the form of session types [17] for the interaction, extended with the roles and functional responsibilities of each participant, the resulting state changes, exhibited behavior, and quality characteristics. Physical usage models are expressed in terms of affordances [18] which identify the actions that can be performed on the device e.g. install, deploy.

4) DEPENDENCIES AND CONTEXT ASSUMPTIONS
Dependencies and context assumptions capture assumptions about the resources needed for the device to operate, and assumptions about the environment and context entities involved (such as bread for toaster), including their behaviors and characteristics, e.g., shielding from noise, QoS of power supply, operator dependencies, etc. This enables reasoning about the well-formedness of a target configuration.

5) DEVICE CONFIGURABILITIES
Device configurabilities capture their operating modes and configuration parameters and their effect on functions, interactions, and quality characteristics.

6) VARIATIONS, FAILURE MODES, AND HANDLING
This captures the variations in inputs and device state (including undesired variations such as security and safety threats, and failure modes), their effect on behaviors and characteristics, and the actions for handling fault situations. It can readily be appreciated that this is a much more exhaustive set of specifications than even human-readable datasheets. In practice, not every device provider may choose to create exhaustive domain-standards-conformant self-descriptions enabling full compositionality reasoning, instead limiting themselves to basic capability specifications and perhaps operational and other interfaces. As indicated earlier, we can organize capability self-descriptions into elaboration levels, which enable successively more complete compositionality reasoning. Any missing information puts the burden on engineers to manually ensure that the residual concerns are addressed properly.
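The idea of elaboration levels can be made concrete with a small sketch. The following Python fragment is a hypothetical encoding (the section names and helper are our own, not a prescribed schema) in which the six self-description categories above form successive layers, and a provider's elaboration level is the number of leading layers it actually supplies:

```python
# The six self-description categories, in elaboration order
SECTIONS = ["functionality", "interfaces", "usage_models",
            "dependencies", "configurabilities", "failure_modes"]

def elaboration_level(description: dict) -> int:
    """Count the leading self-description sections actually provided.
    A higher level enables more complete compositionality reasoning;
    missing layers leave residual concerns to manual engineering."""
    level = 0
    for section in SECTIONS:
        if description.get(section):
            level += 1
        else:
            break
    return level

# A provider supplying only basic capability and interface specifications
basic = {"functionality": {"capability": "Sensing"},
         "interfaces": {"operational": "MQTT"}}
assert elaboration_level(basic) == 2   # partial reasoning only
```

A composition engine could use this level to decide which reasoning steps are automatable for a given device and which must be flagged for manual review.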

F. DOMAIN AND CONTEXTUAL KNOWLEDGE SPECIFICATION
Capability composition requires domain patterns knowledge, including goal decomposition knowledge, and process specifications of how multiple entities collaborate to achieve particular goals. Goal decomposition knowledge should include configuration patterns which capture: 1) The participant roles involved in goal accomplishment.
For example, toasting involves a user, bread, and a toaster. 2) Capability/responsibility specifications for each participant in the pattern. 3) Relationships among the participants, including any physical connectors. 4) How the individual characteristics of each participant compose to generate the overall quality characteristics of the goal. Dynamic reconfiguration also requires contextual knowledge about stakeholder value, goals, and the operational environment. Goals are typically desired capability specifications for the overall system, established at system design time, modified or augmented with contextual goals, and decomposed down to the level of the reconfiguration scope, e.g., a tele-meeting system may have meeting-specific goals that indicate whether capabilities to share presentations or whiteboards are needed. Stakeholder value specifications take the form of parametric relationships that express how system capability characteristics affect stakeholder value, e.g., how video QoS and communication lags may impact meeting effectiveness. Operational environment specifications are relevant if any device behavior models identify dependencies on particular operational environment parameters, e.g., ambient temperature.
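The four elements of a configuration pattern can be illustrated with the toasting example from the text. The following Python fragment is an illustrative encoding (the dictionary structure is our own assumption, not a prescribed schema) of participant roles, per-role responsibilities, and relationships among participants:

```python
# A goal-decomposition configuration pattern for the "toasting" goal
toasting_pattern = {
    "goal": "toasting",
    "roles": {
        "user":    {"responsibility": "initiate and monitor toasting"},
        "bread":   {"responsibility": "undergo toasting"},
        "toaster": {"responsibility": "provide Toasting capability"},
    },
    # relationships among participants, including physical connectors
    "relationships": [("bread", "inserted-into", "toaster"),
                      ("user", "operates", "toaster")],
}

def participants(pattern):
    """The set of participant roles a configuration must bind."""
    return set(pattern["roles"])

assert participants(toasting_pattern) == {"user", "bread", "toaster"}
```

A reconfiguration engine would match each role against the capability self-descriptions of available devices and context entities before instantiating the pattern.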

VII. INITIAL PROTOTYPE
This section describes a prototype that we created to explore and demonstrate the derivation of operating models from capability specifications, and dynamic reconfiguration based on requirements change. Consider a meeting room enabled with IoT devices such as audio-video system, ambiance controllers, etc. The meeting process is specified as a high-level task description that orchestrates device usage towards stakeholder objectives.
Our prototype demonstrated how actions in the high-level task description bind to capabilities offered by specific devices, and how a smart controller for the meeting room is automatically synthesized to orchestrate the devices using their device-specific control interfaces. Figure 3 below depicts the architecture of the implemented solution. Subsections below discuss some of the key features of the architecture.
We implemented a knowledge repository storing knowledge of the device capabilities used in a smart meeting room. We implemented a Domain Specific Modelling Language (DSML) to express this knowledge in terms of device-specific commands, responses, events/alarms, and behavior in terms of a state machine. The described capabilities can be atomic (e.g., lights, cameras) or composite (e.g., handling communication faults). Figure 4 shows an example of a resource capability. The left side of Figure 4 shows the task flow description, which is mapped to a higher-level capability. The right side of the figure shows the description of a Video Conferencing Capability. The high-level task description is specified using an activity flow specification (DSL) that binds tasks to device capabilities. A synthesizer transforms the task description into the design of an orchestration logic. This design is finally realized as a smart controller design model for the meeting room. We use the idea of model-to-model (M2M) transformation and code generation [19] to realize this controller. We also showed that an updated task specification involving a new device with capability specifications triggers the generation of an updated controller, which can be automatically downloaded to the target system, reconfiguring its operating model. This exploratory prototype showed us the value and feasibility of controller synthesis to achieve dynamic reconfiguration based on capabilities knowledge, for a moderately complex environment such as a meeting room.
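The binding-and-synthesis step can be sketched in code. The following Python fragment is a minimal illustration (the repository contents, device names, and command names are hypothetical, not taken from the prototype): each task in the activity flow is bound to a device offering the required capability, and the synthesizer emits an ordered orchestration plan of (task, device, command) triples:

```python
# Knowledge repository: device -> offered capability and control command
repository = {
    "projector": {"capability": "Video Conferencing", "command": "start_vc"},
    "lights":    {"capability": "Ambience",           "command": "dim"},
}

# Activity flow: ordered tasks, each requiring a capability
task_flow = [("set ambience", "Ambience"),
             ("start meeting", "Video Conferencing")]

def synthesize(flow, repo):
    """Bind each task to a device offering its capability and emit
    the orchestration plan the smart controller will execute."""
    plan = []
    for task, needed in flow:
        device = next(d for d, info in repo.items()
                      if info["capability"] == needed)   # capability binding
        plan.append((task, device, repo[device]["command"]))
    return plan

assert synthesize(task_flow, repository) == [
    ("set ambience", "lights", "dim"),
    ("start meeting", "projector", "start_vc")]
```

Adding a new device to the repository and a new task to the flow yields a revised plan without changing the synthesizer, which mirrors the dynamic-reconfiguration behavior described above.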

VIII. INITIAL RESULTS
We have tried out this approach to automatically synthesize the controller for a Kuka Bot [20], a Smart Meeting Room, and an automated vehicle entry system. We automatically generated the controller design and the computer program implementing the design for all these three use-cases.

A. EXPERIMENTAL SETUP
To evaluate the effectiveness of our approach, we conducted experiments on three use-cases: the Kuka Bot, the Smart Meeting Room, and the automated vehicle entry system.

B. MANUAL ENGINEERING PROCESS
For the manual engineering process, we hired three experienced developers/engineers to design and develop the use-cases. The developers used standard modelling languages such as SysML and UML to describe the components and their interfaces, textual documents to describe the component capabilities, and workflow editors to describe the workflows. The resulting designs were manually coded to realise the control software.

C. AUTOMATED APPROACH
For the automated approach, we used our framework to automatically synthesize the design for each use-case. The engineers used M&CML to describe the component interfaces and behavior. The Capability DSL was used to model the capabilities of the components, and the Activity DSL was used to store the knowledge about the domain workflow. We used the same descriptions of the components and their capabilities, as well as the workflow described in the manual engineering process. Our framework generated executable code based on the component descriptions and workflow. Tables 1 and 2 show the experimental results for each use-case. We compared the time and code required for manual engineering against our automated approach. The results show that our approach significantly reduces the design time and manual effort required for each use-case, while also generating a significant portion of the executable code.

D. EXPERIMENTAL RESULTS
The results show that our approach significantly reduces the time required to design and develop the use-cases, with time reductions ranging from 50% to 85%. Our approach also generates a significant portion of the executable code, ranging from 60% to 80% per use-case. Overall, the results demonstrate the effectiveness of our approach in reducing the manual effort and time required for developing complex systems.
The use-cases are described briefly below:

E. KUKA BOT DESIGN
Warehouse domains deploy robotic arms to pick and place items. The Kuka Bot is a robot with the following components: i) a camera, ii) a gripper, and iii) a mechatronic arm. The engineers used the M&CML DSL to describe the components and their interfaces, Capability DSLs to describe the component capabilities, and the Activity DSL to describe the workflow: i) detect the item, ii) open the gripper, iii) move the arm to the item, iv) close the gripper, v) move to the drop location, and vi) open the gripper. The use-case took 1500 lines of code and seven days for a team of three engineers to design and develop manually. With our approach, the use-case took just three days to automatically synthesize the design [12], reducing design time by about 60%.
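The six-step pick-and-place workflow above can be sketched as an ordered binding of steps to the bot's components. The following Python fragment is illustrative (the dispatch mechanism is our own; step and component names follow the text):

```python
# The Activity DSL workflow, flattened as (step, responsible component) pairs
WORKFLOW = [("detect item",        "camera"),
            ("open gripper",       "gripper"),
            ("move arm to item",   "arm"),
            ("close gripper",      "gripper"),
            ("move to drop point", "arm"),
            ("open gripper",       "gripper")]

def execute(workflow, dispatch):
    """Run each step by invoking its component's controller; return a log."""
    log = []
    for step, component in workflow:
        dispatch[component](step)         # invoke the component's control interface
        log.append((component, step))
    return log

# Stub controllers standing in for the real device interfaces
stubs = {c: (lambda s: None) for c in ("camera", "gripper", "arm")}
log = execute(WORKFLOW, stubs)
assert len(log) == 6 and log[0] == ("camera", "detect item")
```

The synthesized controller's job is essentially to generate this kind of orchestration from the Activity DSL description and the components' capability specifications, rather than having engineers hand-code it.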

F. SMART MEETING ROOM
In the Smart Meeting Room use-case, we deployed our approach to automatically synthesize a controller enabling automated setup of a meeting room. The meeting room had the following components: i) an air conditioning (AC) system providing the Air Conditioning capability, ii) an ambiance system providing the Ambience capability, and iii) video and audio conference systems providing the Conference capability.

IX. THREATS TO VALIDITY
The potential threats to the validity of our approach are listed below: 1) Sampling bias: The domain experts and existing systems that were used to gather requirements may not be representative of all possible scenarios. This could lead to an incomplete understanding of the domain and inaccurate requirements. 2) Knowledge capture limitations: The OWL/RDF-based knowledge repositories used for capturing component knowledge may not contain all necessary information. Additionally, the machine learning approach used for capturing dynamic and implicit knowledge may not generalize well to new contexts. 3) Synthesis algorithm limitations: The algorithms used for synthesizing workflows and generating controllers may not produce optimal results in all scenarios. This could lead to suboptimal system performance. 4) Implementation limitations: The implementation of the proposed architecture may be impacted by technical limitations, such as hardware constraints or software compatibility issues. This could result in deviations from the intended system design and reduced system performance. 5) Scalability performance: Another potential concern is the scalability of the proposed architecture. While the synthesis algorithm can handle complex workflows, it may struggle to scale to very large or complex systems. Additionally, the approach may require  significant domain expertise and resources to implement effectively, which could limit its applicability in certain domains or organizations.
While our approach does not rely solely on deep learning (DL) and artificial intelligence (AI) techniques, it does incorporate machine learning (ML) methods for knowledge capture. Compared to traditional AI-based approaches, our approach has several advantages: • First, it does not require large amounts of training data, as it relies on a combination of expert knowledge and machine learning [21]. This reduces the need for extensive data collection and annotation efforts.
• Second, our approach is more explainable, as the captured knowledge is explicitly represented in knowledge repositories and can be easily inspected and validated. The knowledge-driven approach can provide greater transparency and interpretability than AI-based approaches. With the proposed architecture, domain experts can directly input their knowledge into the system, allowing greater control and understanding of the system's behavior. In contrast, many AI-based approaches rely on opaque machine learning models that can be difficult to interpret or understand [22]. • Finally, our approach is more flexible and adaptable to changing contexts, as the machine learning component can learn from new data and update the knowledge repositories accordingly [23].
Overall, our approach strikes a balance between traditional expert-based approaches and DL/AI approaches, leveraging the strengths of both while mitigating their weaknesses.

X. CONCLUSION
This paper has formulated the problem of control software design composition for industrial plant systems. It proposes a comprehensive framework for device self-descriptions based on the capabilities concept. It proposes the use of transition systems with Tabuada interconnects as a formalism that captures the device capability model, maps well to device capability self-descriptions, and facilitates compositionality reasoning and operating-model synthesis. We have described an initial prototype that establishes the effectiveness of composing control software.
Our primary objective in the coming months is to develop and implement a machine learning strategy that can learn and predict the workflow activities required to achieve a high-level goal. This approach will enable the system to predict the required capabilities and bindings to execute the workflow activities efficiently. To achieve this, we are exploring the use of Recurrent Neural Networks (RNNs) and deep learning models to effectively capture the dynamic and implicit knowledge involved in the workflow activities. Once implemented, this approach will significantly reduce the effort required to design and rapidly prototype control software designs, enabling a faster and more efficient controller synthesis process.
In addition, we plan to extend our approach to support dynamic adaptation, enabling the system to re-synthesize the controller dynamically when the goal or the underlying components change. This will improve the flexibility and adaptability of our system, making it more suitable for real-world scenarios where goals and requirements often change over time.
Furthermore, as part of our future work, we plan to evaluate our approach's effectiveness and efficiency by conducting experiments in various domains, such as warehouse automation and smart home systems. We will also investigate the scalability and performance of our approach by testing it on larger and more complex systems. Additionally, we will explore the possibility of integrating our approach with other existing control synthesis tools to provide a more comprehensive solution. Finally, we aim to investigate the use of explainable AI techniques to provide insights into the controller design process and enhance the trust and reliability of our system.