Logic-Based Technologies for Intelligent Systems: State of the Art and Perspectives

Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are re-gaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify the research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are more likely to adopt them in the future.


Introduction
Artificial intelligence (AI) is attracting ever-growing attention from both academia and industry, in terms of resources, economic impact, available technologies, and widespread adoption in virtually any application area. In fact, more and more industries are adopting and applying state-of-the-art AI techniques to actively pursue challenging business objectives.
Such a general interest, and technology adoption, has been favored by two main ingredients: (i) the development of advanced technologies even at the micro scale, and (ii) the availability of large amounts of data in the environment around us to learn from. These ingredients have boosted sub-symbolic AI techniques, such as machine learning (ML), including deep learning and neural networks, aimed at exploiting big data to make predictions and take autonomous decisions-in contrast to the longer-established symbolic techniques, based on the formal representation of knowledge and its elaboration via explicit reasoning rules.
The increasing role of intelligent systems in human society, however, raises unprecedented issues about the need to explain the behavior, or the results, of intelligent systems-in the sense of being capable of motivating their decisions and making the underlying decision process understandable by human beings: this is where sub-symbolic techniques, despite their efficiency, fall short. This is especially relevant when AI is exploited in the context of human organizations meant to provide public services-such as, e.g., health care/diagnostic systems or legal advice. There is therefore an emerging need to reconcile the efficiency of sub-symbolic techniques with the transparency and explainability of symbolic, logic-based approaches.

State of the Art: 60 Years in a Nutshell
According to Encyclopaedia Britannica (https://www.britannica.com/topic/logic), logic can be defined as "the study of correct reasoning, especially as it involves the drawing of inferences". There, reasoning refers to one of the most complex and fascinating capabilities of the human mind, whereas inference is one particular way of reasoning-indeed, a principled way of reasoning, which consists of drawing conclusions out of premises following a set of rules. Hence its relevance in science, where it can be seen as the main tool to understand the world and gain new knowledge about it, rigorously.
In the next sections we quickly overview the main logic-based formalisms, technologies, and research lines. In particular, from the technological viewpoint, the logic-based technologies developed over more than 60 years can be classified as technologies capable of responding to the needs of (i) knowledge representation, (ii) reasoning, and (iii) model-checking and verification-the first two leading in their turn to expert systems and intelligent systems, which aim to mimic human behavior. Figure 1 (left) summarizes this classification, discussed in detail in the following.

The Premise: Computational Logic
In computer science (CS) and artificial intelligence (AI), logic is perhaps the main means for endowing machines with human-like reasoning capabilities, as it is rigorous enough to express computations, and to represent knowledge in a human-understandable way. Over the decades, the contribution of logic to CS and AI has widened enough to be considered a discipline by itself-computational logic (CL). Logic in CL has then been exploited for diverse purposes-for instance, as a means to (i) compute, (ii) represent computation, and (iii) reason about computational systems.
Several logic formalisms exist in CL. They all leverage some sort of inference rule aimed at drawing new consequences from a corpus of premises. Both premises and consequences represent chunks of useful knowledge expressed as valid formulas according to some language of choice. Therefore, the language adopted to represent them, as well as the rules exploited to draw inferences, are of paramount importance when it comes to defining logic formalisms. This is why formalisms in CL essentially differ (i) in the particular language adopted for representing knowledge, and (ii) in the particular sort of inference rule adopted for reasoning.
Among the best-known ones, it is certainly worth mentioning propositional calculus, first-order logic, higher-order logic, modal logic, up to Horn clauses, and description logic. Propositional calculus, for instance, adopts modus ponens, modus tollens, and abduction as inference rules, while other formalisms make different choices, e.g., induction or modalities (in modal logic). Horn clauses, in their turn, adopt the resolution principle (in several variants: SLD, SLDNF, tabled SLD); the same applies to description logic.
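To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python; the propositions and rules are invented for the example) of how modus ponens, applied exhaustively, derives all consequences of a small propositional Horn knowledge base:

```python
# Forward chaining with modus ponens over propositional Horn rules.
# Each rule maps a frozenset of premises to a single conclusion.
rules = [
    (frozenset({"rain"}), "wet_ground"),
    (frozenset({"wet_ground", "cold"}), "icy_ground"),
]
facts = {"rain", "cold"}

changed = True
while changed:  # repeat until no rule adds a new fact (a fixpoint)
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # modus ponens: all premises hold, so conclude
            changed = True

print(sorted(facts))  # ['cold', 'icy_ground', 'rain', 'wet_ground']
```

Iterating to a fixpoint like this is the essence of forward chaining; resolution-based systems such as Prolog instead work backwards from a goal.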

Knowledge Representation
Knowledge representation (KR, Figure 1, top left) has been regarded as a key issue since the early days of AI, since no reasoning can exist without knowledge. Its instances range from deductive databases [7] to description logics [8] and ontologies [9], to name just a few. Many kinds of logic-based knowledge representation systems have been proposed over the years, mostly relying on description logics and modal logics to represent, respectively, terminological knowledge and time-dependent or subjective knowledge.
Early KR formalisms, such as semantic networks and frames [10], also aimed at providing a structured representation of information. Description logics (DL) are based on the same ideas, but without the formal weaknesses that made the use of their precursors problematic. In DL, complex concepts are built from simpler ones, with an emphasis on the decidability of the reasoning tasks and on the provision of sound, complete, and empirically tractable reasoning services. Applications range from reasoning with database schemas and queries [11] to ontology languages such as OIL, DAML+OIL, and OWL [12]-always keeping in mind that not only should the key inference problems be decidable, but also that the decision procedures should be implemented efficiently.
Ontology-based approaches are popular because of their basic goal-a common understanding of some domain that can be shared between people and application systems. At the same time, it should be understood that the general concepts and relations of a top-level ontology can rarely accommodate all of a system's peculiarities [13,14].
Several systems based on DL have been developed-e.g., [15,16]-in diverse application domains, such as natural language processing, configuration of technical systems, software information systems, optimizing queries to databases, planning. It is especially worth mentioning Flora-2 [17], a rule-based, object-oriented knowledge base system for a variety of automated tasks on the semantic web, ranging from metadata management to information integration to intelligent agents. The system integrates F-logic, HiLog, and transaction logic into a coherent knowledge representation and inference language, resulting in an effective framework that combines the rule-based and the object-oriented paradigms.

Reasoning Approaches and Techniques
Logic-based reasoning approaches root back to John McCarthy's work of 1958 [18], aimed at developing the idea of formalizing so-called commonsense reasoning to build intelligent artifacts. This also means considering the non-trivial issues involved, such as the need to formalize situations, actions, causal laws, and so on. Many tools have been developed over the years for the formalization of commonsense reasoning: there are freely available commonsense knowledge bases and natural language processing toolkits supporting practical textual-reasoning tasks on real-world documents, including analogy-making and other context-oriented inferences-see for instance [19][20][21][22][23]. There have also been several attempts to construct very large knowledge bases of commonsense knowledge by hand, one of the largest being the CYC program by Douglas Lenat at CyCorp [24].
The modern approach to automated theorem proving starts with Robinson's resolution principle [25]: since then, several technologies have exploited deduction on first-order logic knowledge bases to provide reasoning capabilities in diverse areas-logic programming, deductive databases, and constraint logic programming (CLP) possibly being the major ones. Other approaches and techniques, however, built upon the induction and abduction principles (Figure 1, middle left).
As its name suggests, deduction operates top-down, deriving conclusions from universally true premises: logically speaking, this means that the conclusion's truth necessarily follows from the premises' truth. Induction, instead, operates bottom-up, basically making a guess-a generalization-from specific known facts: so, the reasoning involves an element of probability, as the conclusion is not based on universal premises. Abduction is somehow similar, but seeks cause-effect relationships-i.e., the goal is to find out under which hypotheses (or premises) a certain goal is provable. Such technologies are exploited, in particular, for the verification of compliance with specific properties [26].
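The difference between the directions of inference can be sketched in a few lines of Python (a toy propositional example with invented rules): deduction closes a set of facts under the rules, while abduction searches for hypothesis sets under which a goal becomes derivable:

```python
from itertools import combinations

# Two invented rules: flu causes fever, sunstroke causes fever.
rules = [
    (frozenset({"flu"}), "fever"),
    (frozenset({"sunstroke"}), "fever"),
]
abducibles = ["flu", "sunstroke"]  # hypotheses we are allowed to assume

def derivable(goal, facts):
    """Deduction: naive forward closure of the facts, then a membership test."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

# Abduction: which minimal hypothesis sets explain the observation "fever"?
explanations = []
for k in range(len(abducibles) + 1):
    for hyp in combinations(abducibles, k):
        if derivable("fever", hyp) and not any(set(e) <= set(hyp) for e in explanations):
            explanations.append(hyp)

print(explanations)  # [('flu',), ('sunstroke',)]
```

Real abductive systems add integrity constraints and work over first-order, not propositional, theories; this sketch only conveys the direction of inference.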
Logic programming (LP) is likely the most widely adopted technology based on deduction. Since Colmerauer and Kowalski's seminal work [27,28], the Prolog language has been one of the most exploited languages in AI applications [29]. Other valuable approaches include fuzzy logic, answer-set programming (ASP), constraint logic programming (CLP), non-monotonic reasoning, and belief-desire-intention (BDI) logic (Figure 1, bottom left).
Fuzzy logic [30] aims at dealing with lack of precision or uncertainty. In this sense, it is perhaps closer in spirit to human thinking than traditional logic systems. Not surprisingly, fuzzy approaches are exploited as a key technology in specific application areas, e.g., the selection of manufacturing technologies [31], and industrial processes where control via conventional methods suffers from the lack of quantitative data about I/O relations. There, a fuzzy-logic controller effectively synthesizes an automatic control strategy from a linguistic control strategy based on an expert's knowledge.
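A minimal sketch of such a fuzzy-logic controller, assuming a single temperature input and a heater output (the membership functions and linguistic rules are invented for illustration; Mamdani-style min inference with centroid defuzzification):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def controller(temp):
    # Linguistic rules: IF temp is cold THEN heater is high;
    #                   IF temp is warm THEN heater is low.
    cold = tri(temp, 0, 10, 20)
    warm = tri(temp, 15, 25, 35)
    # Mamdani-style inference + centroid defuzzification over heater power 0..100
    num = den = 0.0
    for y in range(0, 101):
        mu = max(min(cold, tri(y, 60, 80, 100)),  # cold -> high heater
                 min(warm, tri(y, 0, 20, 40)))    # warm -> low heater
        num += mu * y
        den += mu
    return num / den if den else 0.0

print(controller(12.0))  # a cold reading yields a high heater setting (about 80)
```

Note how the crisp input first gets fuzzified into degrees of membership, and the fuzzy conclusion is finally turned back into a single crisp control value.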
Answer-set programming (ASP) and constraint logic programming (CLP) are the two main logical paradigms for dealing with various classes of NP-complete combinatorial problems. ASP solvers are aimed at computing the answer sets of standard logic programs; these tools can be seen as theorem provers, or model builders, enhanced with several built-in heuristics to guide the exploration of the solution space. Some of the best-known solvers are Clingo [32] and DLV [33].
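The stable-model semantics these solvers implement can be illustrated in miniature: the sketch below (pure Python, brute-force enumeration, only viable for tiny ground programs) checks candidate interpretations against the Gelfond-Lifschitz reduct for the classic two-rule program `p :- not q. q :- not p.`:

```python
from itertools import chain, combinations

# A ground normal program as (head, positive body, negative body) triples:
# p :- not q.    q :- not p.
program = [
    ("p", frozenset(), frozenset({"q"})),
    ("q", frozenset(), frozenset({"p"})),
]
atoms = {"p", "q"}

def minimal_model(pos_program):
    """Least model of a definite (negation-free) program, via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in pos_program:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(m):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects m,
    # strip negation from the rest; m is stable iff it is the reduct's least model.
    reduct = [(h, pb) for h, pb, nb in program if not (nb & m)]
    return minimal_model(reduct) == m

answer_sets = [set(s) for s in chain.from_iterable(
        combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    if is_answer_set(set(s))]
print(answer_sets)  # [{'p'}, {'q'}]: two alternative, mutually exclusive answer sets
```

Solvers such as Clingo and DLV avoid this exponential enumeration through grounding, propagation, and conflict-driven search, but the semantics checked is the same.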
Constraint logic programming (CLP) [34], perhaps the most natural extension of LP (or, its most relevant generalization), has evolved over the years into a powerful programming paradigm, widely used to model and solve hard real-life problems [35] in diverse application domains-from circuit verification to scheduling, resource allocation, timetabling, control systems, etc. CLP technologies can be seen as complementary to operations research (OR) techniques: while OR is often the only way to find the optimal solution, CLP provides generality, together with a high-level modeling environment, search control, compactness of the problem representation, constraint propagation, and fast methods to achieve a valuable solution [36]. CLP tools evolved from the ancestor-CHIP [37], the first to adopt constraint propagation-to the constraint-handling libraries of ILOG [38] and COSYTEC [39], up to CLP languages such as Prolog III [40], Prolog IV [41], CLP(R) [42], and clp(fd) [43].
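The core idea of constraint propagation can be sketched as follows (a naive arc-consistency pass in Python over a toy problem; real clp(fd) engines use far more refined algorithms and interleave propagation with search):

```python
# Naive finite-domain propagation for two constraints, in the spirit of clp(fd):
# X + Y = 7 and X < Y, with X, Y in 1..6. Repeatedly prune values with no
# support in the other variable's domain, until a fixpoint is reached.
dom = {"X": set(range(1, 7)), "Y": set(range(1, 7))}

def propagate():
    changed = True
    while changed:
        changed = False
        for x in set(dom["X"]):
            if not any(x + y == 7 and x < y for y in dom["Y"]):
                dom["X"].discard(x); changed = True
        for y in set(dom["Y"]):
            if not any(x + y == 7 and x < y for x in dom["X"]):
                dom["Y"].discard(y); changed = True

propagate()
print(dom)  # X narrowed to {1, 2, 3}, Y to {4, 5, 6} without any search
```

Propagation alone already shrinks the search space dramatically; a full solver would then label the remaining values, triggering further propagation after each choice.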
Non-monotonic reasoning faces the basic objection [44] that logic cannot represent knowledge and commonsense reasoning as humans do, because human reasoning is inherently non-monotonic-that is, conclusions are not always preserved when new information is added, in contrast to first-order logic. Since then, a family of approaches has been developed to suit specific needs-among these, default reasoning [45], defeasible reasoning [46], and abstract argumentation theory [47]. Defeasible reasoning, in particular, is widely adopted in AI and law applications, to represent the complex intertwining of legal norms, which often overlap with each other, possibly coming from different, non-coherent sources. Abstract argumentation theory, in its turn, is concerned with the formalization and implementation of methods for rationally resolving disagreements, providing a general approach for modeling conflicts between arguments, and a semantics to establish whether an argument is acceptable or not.
Belief-desire-intention (BDI) logic is a kind of modal logic used for formalizing, validating, and designing cognitive agents-typically, in the multi-agent systems (MAS) context. A cognitive agent is an entity consisting of (i) a belief base storing the agent's beliefs, i.e., what the agent knows about the world, itself, and other agents; (ii) a set of desires (or goals), i.e., the properties of the world the agent wants to eventually become true; (iii) a plan library, encapsulating the agent's procedural knowledge (in the form of plans) aimed at making some goals become true; and (iv) a set of intentions, storing the states of the plans the agent is currently enacting as an attempt to satisfy some desires. All such data usually consist of first-order formulas. The dynamic behavior of a BDI agent is then driven by either internal (updates to the belief base or changes in the set of desires) or external (perceptions or messages coming from the outside) events, which may cause new intentions to be created, or current intentions to be dropped. By suitably capturing the revision of beliefs, and supporting the concurrent execution of goal-oriented computations, BDI architectures overcome critical issues of "classical" logic-based technologies-concurrency and mutability-in a sound way. Overall, the BDI architecture leads to a clear and intuitive design, where the underlying BDI logic provides the formal background. Among the frameworks rooted in the BDI approach, let us mention the AgentSpeak(L) [48] abstract language and its major implementation, namely Jason, Structured Circuit Semantics [49], Act Plan Interlingua [50], JACK [51], and dMARS-a platform for building complex, distributed, time-critical systems in C++ [52].
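The deliberation cycle described above can be sketched as follows (an illustrative Python skeleton with invented names and a deliberately naive belief revision; actual BDI platforms such as Jason add event queues, plan failure handling, and concurrency):

```python
# Minimal sketch of a BDI deliberation cycle.
class BDIAgent:
    def __init__(self, beliefs, plan_library):
        self.beliefs = set(beliefs)       # what the agent holds true
        self.desires = []                 # goals to eventually achieve
        self.intentions = []              # plans currently being enacted
        self.plan_library = plan_library  # goal -> (context condition, actions)

    def perceive(self, percept):
        self.beliefs.add(percept)         # belief revision (naive: just add)

    def adopt_goal(self, goal):
        self.desires.append(goal)

    def deliberate(self):
        # Turn desires into intentions when an applicable plan exists,
        # i.e., when the plan's context condition holds in the belief base.
        for goal in list(self.desires):
            context, actions = self.plan_library.get(goal, (None, None))
            if context is not None and context <= self.beliefs:
                self.intentions.append((goal, list(actions)))
                self.desires.remove(goal)

    def step(self):
        # Execute one action of the first active intention per cycle.
        if self.intentions:
            goal, actions = self.intentions[0]
            if actions:
                print("executing:", actions.pop(0))
            else:
                self.intentions.pop(0)    # plan finished, intention dropped

plans = {"have_tea": (frozenset({"kettle_works"}), ["boil_water", "brew_tea"])}
agent = BDIAgent({"kettle_works"}, plans)
agent.adopt_goal("have_tea")
agent.deliberate()
agent.step()  # executing: boil_water
```

The key design point the sketch preserves is that goals are not executed directly: they are first committed to as intentions, which persist across cycles and can be dropped if circumstances change.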

Verification and Model-Checking
Verification (Figure 1, bottom left) is another crucial issue: in the logic context, it takes the form of model-checking-validating a (logic-based) model representation against some given (logic-based) specification. This process is typically performed by an automated tool, the model checker, which can be built following different philosophies-explicit/implicit-state model checkers, inductive model checkers, epistemic model checkers, and others.
In short, explicit-state model checkers construct and store a representation of each visited state, while implicit-state model checkers adopt logical representations of sets of states (such as Binary Decision Diagrams) to describe the regions of the model state space that satisfy the properties under evaluation, thus achieving a more compact representation. Inductive model checkers exploit induction over the state transition relation to prove that a property holds over all the executable paths in a model, while epistemic model checkers focus on how the information states of agents change over time. The most recent symbolic approach is perhaps SAT-based model-checking (SAT solvers are algorithms that solve the satisfiability problem for propositional formulas), which translates the model-checking problem for a temporal-epistemic logic into a satisfiability problem for a propositional formula, achieving higher efficiency.
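As a concrete illustration of the explicit-state approach, the sketch below (Python; the two-process transition system and its naive lock are invented for the example) exhaustively enumerates reachable states to check a simple safety property, returning a counterexample state if one exists:

```python
from collections import deque

# Explicit-state check of a safety property ("both processes are never in
# the critical section at once") on a tiny hand-written transition system.
init = ("idle", "idle")

def successors(state):
    a, b = state
    nxt = {"idle": ["trying"], "trying": ["critical"], "critical": ["idle"]}
    for a2 in nxt[a]:                                   # process A moves
        if not (a2 == "critical" and b == "critical"):  # a naive lock
            yield (a2, b)
    for b2 in nxt[b]:                                   # process B moves
        if not (b2 == "critical" and a == "critical"):
            yield (a, b2)

def check_safety(bad):
    # Breadth-first search, storing every visited state explicitly.
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if bad(s):
            return False, s  # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

ok, cex = check_safety(lambda s: s == ("critical", "critical"))
print("safe" if ok else f"violation at {cex}")  # prints "safe"
```

The explicit `seen` set is exactly what implicit-state checkers replace with a symbolic (e.g., BDD-based) representation when the state space grows too large to enumerate.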
Overall, model checkers have been used to verify, among others, MAS and IoT systems, and are currently extensively used in the hardware industry, as well as to check (kinds of) software systems [53].

Logic-Based AI: Application Areas
The above techniques have been applied in a variety of fields: Figure 2 summarizes both the main application areas and their interconnections. For each application area, in the following we (i) introduce what the area is about, (ii) outline the main sub-categories (if any), (iii) discuss the role of logic in each sub-category as well as (iv) its benefits and limits, and (v) present the main actual applications.

Figure 2. Logic-based technologies' application areas with respect to the main AI categories-namely AI Foundations, AI for Society, and AI for Business. Intentionally, the picture only illustrates the AI areas that are closely related to logic.
To make the comparison actually effective, Tables 1 and 2 summarize our findings from two different viewpoints. The first outlines the strengths of the diverse techniques per application area, i.e., where they provide the strongest contribution; the second puts each technique in relation to the different market segments, providing appropriate references to the literature where each technique is used.

Formalization and Verification of Computational Systems
Formalization and verification of computational systems refer to a collection of techniques for the automatic analysis of reactive systems-in particular, safety-critical systems, where subtle design errors can easily elude conventional simulation and testing techniques.
The main logic-based technology exploited in this field is model-checking-presently a standard procedure for quality assurance, both because of its cost-effectiveness and because of its ease of integration with more conventional design methods. The model checker input is a (usually finite-state) description of the system to be analyzed, together with the expected properties expressed as temporal-logic formulas: logic is used both to formalize the system description-states, transitions, model description, and specifications to be verified-and to express the behavioral aspects, capturing the key properties of information flow. Accordingly, such descriptions are often expressed in temporal and probabilistic logics (and their extensions/variations).
Application examples (Section 2.4) are notably heterogeneous, since model-checking is multidisciplinary and cross-field by its very nature.
Model-checking can provide a significant increase in the level of confidence in a system, enabling system verification a priori, a posteriori, and-what is most relevant presently-at run-time. On the other hand, any validation is, by definition, only as good as the system model itself: so, the validation result strongly depends on the precision of the input model. In addition, model-checking can turn out to be unsuitable for data-intensive applications, as it increases the number of communications.

Cognitive Agents and Intelligent Systems
Cognitive architectures are design methodologies, i.e., collections of knowledge and strategies applied to the problem of creating situated intelligence. Here, the main technologies borrow from multi-agent systems (MAS), since the cognitive architecture can be considered the brain of an agent that reasons to solve problems, achieve goals, and make decisions.
Generally speaking, cognitive agents in intelligent MAS straightforwardly exploit logic-based models and technologies for the rational process, knowledge representation, expressive communication, and effective coordination. Developing an agent means setting up a deduction process: each agent is encoded as a logic theory, and selecting an action means performing a deduction that reduces the problem to a solution, as in theorem proving. Logic can also be used to represent the agent's surrounding environment and the society of agents-that is, overall, two of the three key aspects when it comes to modeling the structure and dynamics of non-trivial MAS [100].
Technologies reflect the above context: many are related to agent programming and reasoning (Section 2.3), others to agent reliability and verification (Section 2.4). Many others focus on the societal aspect of cognitive architectures, by interpreting society as the ensemble where the collective behavior of the MAS is coordinated towards the achievement of the global system goals. Along this line, coordination models glue agents together by governing agent interaction, paving the way towards social intelligence: after the seminal work on Shared Prolog [101], notable examples are TuCSoN [102], ReSpecT [103], and AORTA [104].
Within an agent society, agents can enter into argumentation processes to reach agreements and dynamically adapt to changes: so, disputes and conflicts need to be managed in order to achieve a common agreement and establish the winning argument. Several technologies exist for solving reasoning tasks on abstract argumentation frameworks [105]. Since problems of this kind are intractable, efficient algorithms and solvers are needed. As discussed in [106], most solvers are based on logic programming or logic frameworks, including ASP and SAT solvers.
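As an example of such a reasoning task, the grounded extension of a Dung-style abstract argumentation framework can be computed as the least fixpoint of the characteristic function-sketched here in plain Python for a tiny three-argument framework (the arguments and attacks are invented for illustration):

```python
# Grounded semantics of a Dung-style abstract argumentation framework.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c

def defended(x, s):
    """x is acceptable w.r.t. s if s attacks every attacker of x."""
    attackers = {y for (y, z) in attacks if z == x}
    return all(any((d, y) in attacks for d in s) for y in attackers)

# Iterate the characteristic function from the empty set to its least fixpoint.
grounded = set()
while True:
    nxt = {x for x in args if defended(x, grounded)}
    if nxt == grounded:
        break
    grounded = nxt

print(sorted(grounded))  # ['a', 'c']: a is unattacked, and a defends c against b
```

The grounded extension is the most skeptical semantics; other semantics (preferred, stable) require search over candidate sets, which is where ASP and SAT encodings become valuable.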
Specific technologies exist for dealing with the environment abstraction of cognitive architectures, mostly in the coordination area. There, coordination/interaction artifacts work as run-time abstractions which encapsulate and support coordination/interaction services, usable as building blocks for designing and governing coordination, collaboration, and competition services inside heterogeneous MAS. Description spaces with fuzziness [107] and semantic tuple centers [108] can be read as technologies for situated interaction and coordination, emphasizing the situated aspect of interaction, i.e., the environment-related aspect. In between lies LPaaS (Logic Programming as a Service), a framework aimed at supporting the distribution of logic knowledge in the environment. There, artifacts work as knowledge repositories (in the form of environment structure and properties) while also embedding the reasoning process (enabled and constrained by the knowledge they embody).
When agents are immersed in a Knowledge-Intensive Environment (KIE), the cognition process goes beyond that of the individual agent, and distributed cognition processes may take place, promoting the idea of intelligent environment [109]. In such a way, the environment concept is extended beyond situated action-which, by the way, motivates the inclusion of the semantic web within the macro-area of environmental abstractions.
Starting from [110], the "Agents in the semantic web" sub-category lists JADL, AgentOWL, and EMERALD, which exploit semantic web technologies to inter-operate.
Hybrid cognitive architectures have recently gained attention to combine symbolic and sub-symbolic (emergent) approaches. Examples are ACT-R [111]-based on the modeling of human behavior-CLARION [112] and LIDA [113].
Despite the simplicity and elegance of the logical semantics in logic-based architectures, some issues do exist. The transduction problem [114] has to do with the difficulty of accurately translating the model into a symbolic representation, especially in a complex environment. One more difficulty comes from suitably representing information in a symbolic form that agents can reason about and with, in a time-constrained environment. Finally, the transformation of percepts may not be accurate enough to describe the environment itself, due to sensor faults, reasoning errors, etc.

Healthcare and Wellbeing
In the healthcare domain, AI typically takes the form of complex algorithms and software systems to emulate human cognition in the analysis of complicated medical data, approximating conclusions without direct human input. The primary aim here is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.
In this field, logic is exploited to represent knowledge in a human-understandable way, and reason on it via properly formalized rules-in particular, decision support (symbolic) rules, obtained from domain experts and/or decision models induced from data.
At the same time, symbolic logic does not scale easily: knowledge engineers need to extract the logic by interviewing or observing human experts. On the other hand, sub-symbolic techniques such as supervised deep learning scale more easily, but are subject to bias in the training data-and, of course, their outcomes cannot be easily explained.
Here again, the semantic web provides a technical framework for the formal semantic modeling-i.e., interpretation, abstraction, axiomatization, and annotation-of healthcare knowledge in terms of classes, properties, relations, and axioms [115]. The semantic web framework for healthcare systems provides notable features: (i) semantic modeling of the procedural and declarative healthcare knowledge as ontologies, hence a semantically rich and executable knowledge representation formalism; (ii) annotation-typically via RDF (Resource Description Framework)-of healthcare knowledge artifacts, guided by the ontological model of the knowledge artifact, so as to characterize the main concepts and relations within the artifact; (iii) representation of different patient data sources in a semantically enriched formalism that helps to integrate heterogeneous data sources by establishing semantic similarity between data elements; (iv) semantic interoperability between multiple ontologies, using ontology alignment and mediation methods to dynamically synthesize/shape multiple knowledge resources so as to address all the facets of the specific healthcare problem; (v) specification of the decision-making logic in terms of symbolic rules, which can be executed using proof engines to infer suitable recommendations/actions; and (vi) provision of a justification trace of the inferred recommendations, so as to let users understand the rationale of the recommended interventions [116].
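The ontological modeling and RDF-style annotation described above can be miniaturized as a toy triple store with a single RDFS-style subclass inference rule (the vocabulary and medical terms are invented for illustration; real systems rely on RDF stores and OWL reasoners):

```python
# Toy RDF-style triple store with one RDFS-style inference rule:
# if ?x rdf:type ?c and ?c rdfs:subClassOf ?d, then ?x rdf:type ?d.
triples = {
    ("aspirin", "rdf:type", "NSAID"),
    ("NSAID", "rdfs:subClassOf", "AntiInflammatory"),
    ("AntiInflammatory", "rdfs:subClassOf", "Drug"),
}

changed = True
while changed:  # apply the rule until no new triples are inferred
    changed = False
    inferred = {(x, "rdf:type", d)
                for (x, p1, c) in triples if p1 == "rdf:type"
                for (c2, p2, d) in triples
                if p2 == "rdfs:subClassOf" and c2 == c}
    new = inferred - triples
    if new:
        triples |= new
        changed = True

print(("aspirin", "rdf:type", "Drug") in triples)  # True: type is propagated up
```

The same pattern-annotated data plus ontological axioms closed under inference rules-is what lets heterogeneous patient data sources be queried through a shared conceptual model.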
Among the healthcare systems based on reasoning, CARA (Context-Aware Real-time Assistant) [83] aims at providing personalized healthcare services for chronic patients in a timely manner, adapting the healthcare technology so that it fits in both with the normal activities of the elderly and with the working practices of the caregivers. Based on a fuzzy-logic context model and a related context-aware reasoning middleware, CARA provides context-aware data fusion and representation, as well as inference mechanisms that support remote patient monitoring and caregiver notification.

Law and Governance
Due to its wide potential impact on society and the economy, AI and law is one of today's most relevant research areas. It is an interdisciplinary effort combining methods and results from several sources: deontic logic, norms and agent-based simulation, game theory and norms, normative agents, norms and organization, norms and trust, norms and argumentation.
Contributions in the field of AI and law are strongly connected with the aforementioned agent architectures. Agreement technologies [117], in particular, is a new vision outlining next-generation, open, distributed systems where interaction between computational agents could be based on the notion of agreement. This calls for (i) a normative context defining the rules of the game, or the "space" of agreements that the agents can possibly reach; (ii) an interaction mechanism to establish (first) and enact (then) agreements; and (iii) a joint research effort from several fields-including, but not limited to, multi-agent systems, semantic technologies, social sciences-aimed at fruitfully combining results and contributions from all such areas-like, for instance, semantic alignment, negotiation, argumentation, virtual organizations, learning, real time, and others. Semantic web standards provide a good basis for representing the knowledge of local agents, the functionalities and everything needed to achieve a goal in agreement with other agents.
However, the formalisms behind these technologies fall short when dealing with the distributed, open, and heterogeneous nature of AT systems, where agents may have different views of the world and, therefore, mutually inconsistent knowledge bases. To cope with this issue, new logical formalisms-specifically aimed at handling situations where pieces of knowledge are independently defined in different contexts-have been defined, extending classical logics in order to deal with incomplete and defeasible knowledge. Logic is thus exploited to represent knowledge with its domain-specific peculiarities (for instance defeasibility, but also the possibility to distinguish among permissions, obligations, and beliefs in deontic logic), and to reason over such knowledge.
Many interesting experiments have been performed in this application area. Notable defeasible argumentation implementations, aimed at supporting reasoning and resolving inconsistencies, are Defeasible Logic Programs [118], ASPIC [119], and ABA [120]. Several other applications use ontologies and legal search engines [121], which exploit advanced search technology from AI, data mining, data analytics, ontologies, and natural language processing [122]. The main issue that remains unsolved in this area is that a unique and general framework for dealing with norms and argumentative issues is still missing: in fact, most solutions are too narrow in scope, tailored to specific use cases, besides being possibly weak from the software engineering perspective.
In short, logic-based approaches in the legal field [123] (i) help formalize legal norms and concepts in a clear and understandable way, thus enabling verification and the detection of unfair policies, or the violation of essential rights; (ii) support explanatory and arguable decisions in the regulatory context.
In general, however, current tools are unable to imitate advanced cognitive processes such as human reasoning, understanding, meta-cognition, or the contextual perception of abstract concepts that are essential for legal thinking. Indeed, a lawyer's work is often very complex, implying the management and processing of huge amounts of data, in which correlations between facts and circumstances must be found, and reasoned opinions and action guidelines formulated, taking into account all the applicable rights and obligations. This is why the process of understanding and formulating a decision is mostly creative-the result of a complex cognitive process.

Education
AI has been part of many e-learning platforms for a long time, with applications ranging from personalized learning, recommendation of resources, automated grading, to prediction of attrition rates-to name just a few. The rapid expansion of the educational technology industry is now further pushing and exploiting advanced AI-enabled learning technologies.
Within this area, symbolic AI techniques such as fuzzy logic and decision trees have been used in adaptive educational systems; there, logic has been mainly applied for knowledge management and recommendation. In some systems, for instance, the focus is on examining and assessing student characteristics in order to generate students' profiles, to be used for evaluating their overall level of knowledge and, consequently, as a basis for the prescribed software pedagogy. Symbolic AI approaches are used to support the diagnostic process, so that the course content can be adjusted to cater to each student's needs. Some of them, in addition, also learn from students' behavior to adjust the prescribed software pedagogy.
Applications are related to semantic web technologies, contextualized to e-learning so as to adapt instruction to the learner's cognitive requirements in three ways-background knowledge, knowledge objectives and the most suitable learning style [124,125]. Over the years, fuzzy-logic techniques and logic MAS have also been experimented with for e-learning purposes. In particular, in [126,127], a fuzzy-logic-based system learns the users' preferred knowledge delivery to generate a personalized learning environment; whereas in [128] agents detect, recognize, eliminate, and repair the faults of the e-learning course, keeping the system up and working, providing robustness. Another interesting application in the domain of formal logic proofs, taken as the base of several further extensions, is the Logic-ITA (Intelligent Teaching Assistant) web-based system [129]: its purpose is to mitigate the issues caused by large classes or distance learning, acting as an intermediary between teacher and students. On the one hand, it provides students with an environment to practice formal proofs, giving proper feedback; on the other, it allows teachers to monitor the class's progress and mistakes.
Although the impact on classrooms has been relatively minor so far, the potential of AI in education is high and likely to increase, as demonstrated by the many European actions/projects currently in place [130,131]. The main challenges and issues concern the creation of a sustainable educational environment, capable of developing equitable education even for the least developed countries-to be dealt with at the suitable political level.

AI for Business: Automation and Robotics
Automation is probably the earliest and perhaps most impactful application area for AI, as it represents the first step towards machine autonomy. Autonomy, in turn, is highly desirable whenever there is a need to re-design, re-build, or re-program machines while the deployment context evolves. Autonomous machines differ from automatic ones in that designers no longer need to forecast any possible situation, because the machine is programmed for learning or planning. This is particularly interesting in cyber-physical systems (CPS) [132] and robotics, where machines have a physical body that makes them capable of affecting (or being affected by) the physical world.
Needless to say, applications of automation and robotics in industry are manifold, and so are the corresponding research lines. In the following we explore the role and impact of logic-based paradigms and technologies in such areas.

Planning and Task Allocation
Planning and scheduling are among the oldest fields in AI, also related to multi-agent systems and cognitive architectures: research mainly concerns the decision-making process that determines what, when, where, and how to reach a goal and carry out a task. There, logic is exploited to represent the knowledge domain, its constraints, and the reasoning mechanism.
Logic-based scheduling methodologies include rule-based approaches and constraint-guided search. Rule-based scheduling methods aim to emulate the decision-making behavior of human schedulers, captured in terms of suitable logic rules. Correspondingly, rule-based systems are typically envisioned to replicate the actions of experienced humans with specific scheduling skills. Unsurprisingly, this is one of the most successful application domains of CLP techniques: the scheduler's goal is to identify feasible solutions which balance different constraints or schedule requirements.
Applications spread from manufacturing to traffic scheduling and management (e.g., autonomous vehicles, aircraft, . . . ), up to urban search and rescue activities (e.g., traffic assignment in natural disaster evacuations, . . . ), and many others. A typical example of a constraint-based scheduling application is ATLAS [133], which schedules the production of herbicides at the Monsanto plant in Antwerp. PLANE [134] is another system used at Dassault Aviation to plan the production of the military Mirage 2000 jet and the Falcon business jet: the objective is to minimize changes in the production rate, which has a high set-up cost, while finishing the aircraft just in time for delivery. The COBRA system [134] generates workplans for train drivers of North-Western Trains in the UK: each week, about 25,000 activities need to be scheduled in nearly 3000 diagrams on a complex route network. The DAYSY Esprit project [135] and the SAS-Pilot program [136] consider the operational re-assignment of airline crews to flights. The STP (Short Term Planning) application at Renault [137] assigns product orders to factories to minimize transportation costs. The MOSES application by COSYTEC [137] schedules the production of compound food for different animal species, eliminating contamination risks while satisfying customer's demand at the minimal cost. FORWARD [138] is a decision support system, based on CHIP, used in three oil refineries in Europe to tackle all the scheduling problems occurring in the process of crude oil arrival, processing, finished product blending and final delivery. Finally, Xerox has adopted a constraint-based system for scheduling various tasks in reprographic machines (like photocopiers, printers, fax machines, etc.); the scheduler is supposed to determine the sequence of print making and coordinate the time-sensitive activities of the several hardware modules that make up the machine configuration [139].
Overall, the CLP approach addresses well some of the key issues, such as development time, nodes visited in the search tree, number of generated feasible solutions, and efficiency. At the same time, the very nature of such systems demands considerable development and tuning effort for each new application, as there is no expert to emulate. More generally, the two main issues are (i) the lack of a structured way to carry the insight gained from one application to the next, and (ii) the complexity of generating the symbolic knowledge that fully describes the application domain.
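The rule-based flavor of scheduling described above can be illustrated with a minimal sketch: dispatching rules that emulate the criteria a human scheduler might apply. The rule names, job fields, and data are illustrative, not taken from any of the systems cited.

```python
# Minimal sketch of a rule-based scheduler: jobs are dispatched by
# priority rules that emulate a human scheduler's decision criteria.
# Rule names and job fields are illustrative, not from any cited system.

def earliest_due_date(jobs):
    """Rule: prefer the job with the earliest due date."""
    return min(jobs, key=lambda j: j["due"])

def shortest_processing_time(jobs):
    """Alternative rule: prefer the quickest job."""
    return min(jobs, key=lambda j: j["duration"])

def schedule(jobs, rule):
    """Repeatedly apply the dispatching rule to build a sequence."""
    pending, sequence, clock = list(jobs), [], 0
    while pending:
        chosen = rule(pending)
        pending.remove(chosen)
        clock += chosen["duration"]
        sequence.append((chosen["name"], clock))  # (job, completion time)
    return sequence

jobs = [
    {"name": "A", "due": 9, "duration": 4},
    {"name": "B", "due": 5, "duration": 2},
    {"name": "C", "due": 7, "duration": 3},
]
print(schedule(jobs, earliest_due_date))
# B (due 5) first, then C (due 7), then A (due 9)
```

Real CLP-based schedulers go far beyond this: they interleave such heuristics with constraint propagation over the search space, rather than committing greedily to one rule.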

Robotics and Control
Cognitive architectures, planning, and task allocation techniques have been widely applied to robotics and control systems: indeed, robotics applications translate the agent abstraction of cognitive architectures into a mechanical robot capable of performing actions and taking decisions.
Logic in robotics is, much more than elsewhere, tailored to the specificity of the application field, since control mechanisms need to drive the robot's sensors and actuators, along with all their low-level control software (for instance, robot motion necessarily requires a set of feedback control primitives in order to keep motion coherent). More generally, control systems are present in lifts, photocopiers, car engines, assembly lines, power stations, etc.
Again, the logic-based approaches (and technologies) that are mostly adopted in this context are CLP, fuzzy logic, and temporal logic: in fact, many works dealing with robotic reasoning [140] exploit languages and technologies detailed in Section 2.3. CLP-based applications are typically at the smaller end, where it is still possible to prove that some global properties can be guaranteed with a given control. Thus, CLP is often exploited to build control software for electro-mechanical systems with a finite number of inputs, outputs, and internal states: each component is connected to a small part of the overall system, so its behavior can be captured quite simply. However, when the system is considered to be a whole, the number of global states can become very large: this calls for a smart technology that is able to handle such combinatorial explosion [141].
On the other side, many control systems have been developed exploiting a fuzzy-logic approach for dealing with real data, which are sometimes imprecise, uncertain, complex, and with a high degree of randomness. Due to its good tolerance of uncertainty and imprecision, fuzzy logic has gained wide application in the area of advanced control of humanoid robots. For the same reason, fuzzy systems are powerful tools for facing crucial problems in industrial engineering and technology, such as risk management or product quality assurance, as well as in intelligent decision support systems for adaptive industrial engineering [142][143][144][145][146]. Also, hybrid techniques based on the integration of neuro-fuzzy networks, neuro-genetic algorithms, and fuzzy-genetic algorithms are of great importance in the development of efficient algorithms [147].
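The mechanics of fuzzy control can be sketched in a few lines: membership functions map a crisp input to degrees of truth, if-then rules fire to those degrees, and defuzzification blends the rule outputs. The rules, membership shapes, and numbers below are illustrative assumptions, not from any cited controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance):
    """Two illustrative rules for an obstacle-avoiding robot:
       IF distance is near THEN speed is slow;
       IF distance is far  THEN speed is fast."""
    near = tri(distance, -1.0, 0.0, 5.0)   # degree to which distance is "near"
    far  = tri(distance, 2.0, 10.0, 21.0)  # degree to which distance is "far"
    slow, fast = 1.0, 9.0                  # crisp rule consequents
    total = near + far
    if total == 0:
        return 0.0
    # weighted-average (Sugeno-style) defuzzification
    return (near * slow + far * fast) / total
```

For a distance of 0 the "near" rule fully dominates and the output is the slow speed (1.0); at distance 10 the "fast" rule dominates (9.0); intermediate distances blend the two smoothly, which is exactly the graceful degradation that makes fuzzy control attractive.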
Temporal-logic approaches and technologies have been exploited, e.g., for controlling robot motion or planning activities [148,149], because of their ability to reason over time and change, which makes it possible to build control laws that can be verified as time elapses. This is especially relevant for mobile robots, whose specifications are often temporal-even though time is not necessarily captured explicitly. For example, a swarm might be required to eventually reach a certain position and shape, or to maintain a size smaller than a specified value until a final desired value is achieved. Other examples are collision avoidance among robots, obstacle avoidance, and cohesion, which are always required. In a surveillance mission, a selected area needs to be visited "infinitely often" [150].
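The temporal operators mentioned above ("eventually", "until", "always") can be checked over a finite, recorded execution trace, which is a common way to monitor such specifications at run time. The sketch below is illustrative; the state encoding and predicates are assumptions, and real model checkers work over infinite behaviors rather than finite traces.

```python
# Checking LTL-style properties over a finite trace of robot states.

def eventually(trace, pred):
    """<>p : pred holds in at least one state of the trace."""
    return any(pred(s) for s in trace)

def always(trace, pred):
    """[]p : pred holds in every state of the trace."""
    return all(pred(s) for s in trace)

def until(trace, p, q):
    """p U q : q eventually holds, and p holds in all states before that."""
    for s in trace:
        if q(s):
            return True
        if not p(s):
            return False
    return False

# illustrative trace of (position, swarm_size) pairs
trace = [(0, 12), (2, 11), (5, 10), (9, 10)]
at_goal = lambda s: s[0] >= 9
small   = lambda s: s[1] <= 12

print(eventually(trace, at_goal))    # True: goal reached in the last state
print(until(trace, small, at_goal))  # True: swarm stays small until the goal
```

The swarm example from the text maps directly: "reach a position eventually" is `eventually`, "maintain a size smaller than a value until a final value is achieved" is `until`.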

Perspective and Future Trends
In this section, we systematize and frame the main perspectives for the evolution of logic-based technologies. To this end, Figure 3 is meant to illustrate and assess the state of the art discussed in Sections 2 and 3, paired with the AI requirements mentioned in the Introduction. The main research directions for logic-based approaches and technologies are represented as directional signs, pointing at a cloud of words that aims to capture the main concepts behind logic-based technologies. The cloud is populated by the main keywords extracted from the papers cited in this section, taken as representatives of the most promising research lines. In particular, we focus on (i) promising research areas, as well as applications, leveraging on logic-based technologies, which we expect to grow in the near future; and (ii) promising research directions which involve logic-based technologies that are currently gaining momentum.

Integration of Symbolic and Sub-Symbolic AI
In the second half of the 20th century-when AI was first recognized as a discipline in itself-several approaches towards machine intelligence became the subject of intensive research efforts, leading to the vast corpus of literature and to the abundance of techniques available today.
Notably, two main families of approaches emerged-the symbolic and the connectionist or sub-symbolic ones [151,152]. While the former focuses on representing the world through symbols-which, in turn, represent concepts-thus emulating how the human mind reasons and infers, the latter aims at mimicking human intuition by emulating how the human brain works at a very low level. Both families have their pros and cons, and have stepped through both glory and misery-in terms of expectations, funding, research interest, and industry adoption [153,154]. Despite the effort devoted to symbolic AI research and application in recent decades, numeric and connectionist approaches (e.g., neural networks) have gained an unprecedented momentum since the early 2010s. However, even if it is currently not as popular as neural networks, the history of symbolic AI is extremely important as well-mostly because of its prominent influence on the many fields converging in AI: in fact, logic-based approaches represent the warhorse of symbolic AI.
In recent years, the historical dichotomy between the "two souls" of AI has been reconciled, in favor of a comprehensive vision where symbolic and sub-symbolic approaches are seen as complementary-rather than in competition-so that they mutually soften their corners [155][156][157] (see Figure 1). While symbolic approaches are well suited for relatively small-sized problems implying complex but exact tasks-possibly relying on structured data-sub-symbolic approaches are best suited to use cases processing big (possibly huge) amounts of possibly unstructured data-where errors, or lack of precision, are tolerated to some extent, if unavoidable. More precisely, complementarity between symbolic and sub-symbolic AI naturally emerges when comparing the two approaches under the following perspectives:
• sub-symbolic AI is opaque, meaning that human beings struggle to understand the functioning and behavior of sub-symbolically intelligent systems; instead, symbolic AI is more transparent, as it is both human- and machine-interpretable at the same time;
• sub-symbolic AI can improve itself automatically by consuming data, but it is difficult to extend and re-use outside the contexts it was designed for; conversely, symbolic AI is flexible and extensible, but requires humans to manually provide symbolic knowledge;
• sub-symbolic AI is adequate for fuzzy problems where some (minimal) degree of error or uncertainty can be tolerated; symbolic AI, instead, calls for precise data and queries provided by human beings, yet provides exact, crisp results as its outcome.
The two most active research lines that currently focus on unifying symbolic and sub-symbolic AI are neural-symbolic computing (NSC) [158] and explainable artificial intelligence (XAI) [1]. The former focuses on merging neural networks with logic-based technologies at a fundamental level, while the latter encompasses diverse studies aimed at exploiting symbolic AI to explain the internal functioning of sub-symbolic AI, thus making it more interpretable in the eyes of human beings. In the remainder of this section we explore both NSC and XAI.

Techniques and Approaches: Hybrid Models for Intelligent Systems
Methods and approaches for the integration of symbolic and sub-symbolic techniques can be grouped under the heading of so-called "hybrid intelligent systems", which exploit a combination of methods and techniques from AI sub-fields. The area includes several approaches such as-to name a few-neuro-fuzzy systems and hybrid connectionist-symbolic models, neural-symbolic computing, fuzzy and connectionist expert systems, evolutionary neural networks, genetic fuzzy systems, rough fuzzy hybridization, and Reinforcement Learning with fuzzy, neural, or evolutionary methods as well as symbolic reasoning methods. Generally speaking, hybrid approaches are based on the explicit integration of symbolic and sub-symbolic models, so as to take the best of each approach in matching the context needs. In the following, we discuss two of such approaches-namely neuro-fuzzy systems and neural-symbolic computing-as exemplars of the key points to be addressed in such integration: for a full discussion, we refer the reader to [159,160].
Neuro-fuzzy systems (NFS) [161][162][163] (also called fuzzy neural networks) are a form of hybridization aimed at a synergy of neural and fuzzy systems, combining human-like reasoning with the learning approach of neural networks to provide "an effective vehicle for modeling the uncertainty in human reasoning" [164]. Knowledge is modeled as if-then rules, and the fuzzy nature provides the capability of universal approximation, somehow conjugating two contradictory requirements-interpretability vs. accuracy.
Neural-symbolic computing (NSC) combines the benefits of meta-heuristics, neural networks, and logic programming in order to incorporate two fundamental cognitive abilities-namely the ability to learn and reason from what has been learned [165]-for diverse applications, such as finding an optimal solution in optimization problems [166,167]. NSC internalizes the benefits of novel and robust learning, and the logical reasoning and interpretability of symbolic simplification of artificial neural networks [168]: more recent advances in the fields of applied machine learning and reasoning can be found in [158,169]. The nature of integration, based on the convergence of neural processing with the symbolic representation of intelligence and reasoning, makes it particularly interesting for the development of explainable AI (XAI) systems, which focus on interpretability and transparency.
Indeed, the idea of logic as a programming language in neural networks to serve, represent and interpret a problem dates back to [170]: the motivating force behind it is the idea that a single formalism is adequate both for logic and computing, and that it subsumes computation.
On the XAI wave, traditional logical approaches have in some sense been refurbished and are now experiencing a new dawn. However, a solid model is still to be defined, and a corresponding stable technology is still missing. For this reason, the field represents a hot issue at the frontier research level.

Application Scenarios: Explainable, Responsible, and Ethical AI
Despite their large adoption, intelligent systems whose behavior is the result of ML-based procedures are difficult to trust for most people, in particular for non-experts. The reason is that numeric, sub-symbolic approaches tend to produce opaque systems, whose internal behavior is hard to explain even for their expert developers: opacity not only makes training and development error prone, but also prevents people from fully trusting-and thus accepting-the system itself. This is a key issue today in several contexts, as it is often not sufficient for intelligent systems to produce bare decisions-they must also be explained, as ethical and legal issues may arise. The financial and medical playgrounds are typical examples: according to current regulations, intelligible explanations need to be produced for each decision.
When it comes to building explainable ML-based systems, there are two main approaches. One consists of building systems which only leverage transparent-by-construction ML predictors-such as decision trees. The other consists of leveraging the full spectrum of ML techniques, trying to extract post-hoc symbolic explanations out of trained numeric predictors. The implicit assumption behind this approach is that symbols are far closer to what humans' conscious, rational mind is used to handling. The second case is of particular interest here, as it aims at combining the best of the symbolic and sub-symbolic realms. Several works in recent decades have proposed to extract symbolic knowledge from numeric models. As witnessed by several surveys [158,[171][172][173] and works on the topic [174][175][176][177][178][179][180][181][182][183][184]-some of which date back to the 80s or the 90s-the potential of symbolic knowledge extraction is well understood, although without hype.
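The first approach can be made concrete with a minimal sketch: given a transparent, tree-shaped predictor, every root-to-leaf path yields one symbolic if-then rule. The toy tree, feature names, and labels below are illustrative assumptions; real extraction techniques also handle opaque predictors, which is considerably harder.

```python
# Turning a (toy, hand-built) decision-tree predictor into
# human-readable if-then rules by enumerating root-to-leaf paths.

tree = {
    "feature": "income", "threshold": 30,
    "left":  {"label": "deny"},                       # income <= 30
    "right": {"feature": "debt", "threshold": 10,
              "left":  {"label": "grant"},            # debt <= 10
              "right": {"label": "deny"}},
}

def extract_rules(node, conditions=()):
    """Walk every root-to-leaf path and emit one symbolic rule per leaf."""
    if "label" in node:
        body = " AND ".join(conditions) or "true"
        return [f"IF {body} THEN {node['label']}"]
    f, t = node["feature"], node["threshold"]
    return (extract_rules(node["left"],  conditions + (f"{f} <= {t}",)) +
            extract_rules(node["right"], conditions + (f"{f} > {t}",)))

for rule in extract_rules(tree):
    print(rule)
```

The resulting rules ("IF income > 30 AND debt <= 10 THEN grant", etc.) are exactly the kind of intelligible explanation that regulations in the financial domain call for.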
At the current state of the art, a comprehensive and general framework tackling such a problem is still missing, as well as a technological implementation making the aforementioned techniques usable in practice. However, many research studies are exploiting the models and techniques mentioned in Section 4.1.1 to address the issue of explainable and responsible AI [185,186].

Relational Learning
Most approaches to learning machines simply target specific activities where the task to be learned from data is both static and straightforward. For instance, despite the many applications of classification tasks and the plethora of things which can be recognized (a.k.a. properly classified) through well-engineered classifiers, one may hardly expect or require an ML-based classifier to also infer the hierarchical relations among the classes it is able to recognize. Consider for example a deep neural network perfectly trained to recognize animals: it may easily discriminate among cats, dogs, and other mammals, but it would be very hard for it to learn what a mammal actually is. Similar concerns characterize other well-established tasks in ML such as regression, or best policy estimation.
Statistical relational learning (or just relational learning henceforth) is a research area, lying at the intersection of logic, machine learning, and statistics, which essentially focuses on letting machines learn relations among concepts from examples. This is somewhat related to classification, except that relational learning is more general, as it is capable of inferring hierarchies of concepts from data, thus producing a symbolic knowledge base of logic rules as outcome, instead of a flat classifier.
The most common sort of inference performed by relational learning systems is induction. Roughly speaking, induction is the sort of reasoning performed by an agent willing to infer a general rule justifying several (positive or negative) examples or observations. As widely acknowledged in philosophy and logic, induction is very useful as it produces new valuable knowledge, but also unsound, as the rules it produces are not guaranteed to hold universally. In other words, when leveraging induction, an agent can never be 100% certain that the rules it has inferred from data are (and will always be) true; it can only be more or less confident, depending on the statistics supporting the inferred rules according to the available data. Thus, relational learning is inherently statistical in nature: the rules it induces are true up to a given probability.
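A toy illustration of induction with statistical support: generalize a conjunctive rule from positive examples, then measure its confidence against all examples. The attribute names and data are illustrative; real relational learners induce far richer, relational hypotheses.

```python
# Sketch of rule induction: keep the conjunction of attribute-value
# pairs shared by all positive examples, then compute the fraction of
# covered examples that are actually positive (the rule's confidence).

def induce(positives, negatives):
    rule = dict(positives[0])
    for ex in positives[1:]:
        rule = {k: v for k, v in rule.items() if ex.get(k) == v}
    covers = lambda ex: all(ex.get(k) == v for k, v in rule.items())
    covered_pos = [ex for ex in positives if covers(ex)]
    covered_neg = [ex for ex in negatives if covers(ex)]
    confidence = len(covered_pos) / (len(covered_pos) + len(covered_neg))
    return rule, confidence

# illustrative examples: mammals (positive) vs. a bird (negative)
positives = [{"legs": 4, "fur": True, "milk": True},
             {"legs": 2, "fur": True, "milk": True}]
negatives = [{"legs": 2, "fur": False, "milk": False}]

rule, conf = induce(positives, negatives)
print(rule, conf)   # {'fur': True, 'milk': True} 1.0
```

The `legs` attribute is dropped because the positives disagree on it; the induced rule ("fur and milk imply mammal") is plausible but, as the text stresses, only probable: a counterexample outside the data could still refute it.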
Generally speaking, as far as future perspectives are concerned, relational learning is expected to bring about huge benefits in at least two areas: XAI and the semantic web.
As far as XAI is concerned, it is worth noting how relational learning is essentially a very intelligible and interpretable way to learn from data. Accordingly, it may have a role to play in the development of future explainable systems. Furthermore, by using it in combination with state-of-the-art image/language recognition technologies, relational learning may endow them with generalization facilities.
As far as the semantic web is concerned, relational learning may have a role to play in the management of large and incomplete ontologies. In fact, semantic web technologies (such as RDF and OWL) can be exploited to build huge graphs of symbolic concepts and their relations, also known as ontologies. Such graphs are often built manually by communities of users or experts, or automatically by crawling other large sources of information such as Wikipedia. In both cases, the probability of inconsistencies, holes, or incompleteness arising in the resulting knowledge graphs is high. In this framework, relational learning may have a role to play in mitigating the aforementioned issues in an automatic and principled way.

Inductive Logic Programming
Inductive logic programming (ILP henceforth) is arguably the most developed branch of relational learning-as well as the soundest one, from a foundational point of view.
Several theoretical frameworks exist for ILP, as well as some logic-based technologies implementing them. For instance, the Progol technology leverages the entailment inversion framework [187]; the CIGOL system leverages resolution inversion [188], whereas the Golem system is based on the "relative least general generalization" approach [189]. Such technologies have been exploited in many application scenarios, mostly involving the inference of rules and concepts out of logic knowledge bases. Despite their success, however, the technological development of ILP is still in its early stages. In fact, most ILP technologies are currently unmaintained and poorly integrated with mainstream logic- or ML-based technologies-even if ILP may have a role to play in both contexts.
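The "least general generalization" (lgg) operation behind Golem-style systems can be sketched compactly: two terms generalize to themselves if identical, argument-wise if they share functor and arity, and to a (consistently reused) fresh variable otherwise. The term encoding below is an assumption for illustration; real implementations handle clauses, background knowledge, and much more.

```python
# Sketch of Plotkin's least general generalization over two atoms.
# Atoms are nested tuples (functor, arg, ...); plain strings are
# constants, and "X0", "X1", ... are invented variables.

def lgg(t1, t2, table=None):
    if table is None:
        table = {}
    # identical terms generalize to themselves
    if t1 == t2:
        return t1
    # compound terms with same functor and arity: generalize argument-wise
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    # otherwise replace the mismatching pair with a variable, reusing the
    # same variable whenever the same pair of subterms recurs
    if (t1, t2) not in table:
        table[(t1, t2)] = f"X{len(table)}"
    return table[(t1, t2)]

a1 = ("parent", "ann", ("child_of", "ann", "bob"))
a2 = ("parent", "eve", ("child_of", "eve", "bob"))
print(lgg(a1, a2))   # ('parent', 'X0', ('child_of', 'X0', 'bob'))
```

Note how `ann`/`eve` is replaced by the *same* variable `X0` in both positions: preserving such co-occurrences is what makes the generalization "least" general.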
Similar concerns arise for MAS as well. In fact, while the influence of computational logic on MAS is well established, and most agent- and logic-based programming frameworks (e.g., Jason [190]) are well integrated with deductive inference, an integration with ILP is still missing. Such an integration would be interesting, as it would represent a straightforward means for learning symbolic knowledge.

Constraint (Logic) Programming
Constraint programming (CP) is a programming paradigm for addressing search-related problems over particular domains, such as Booleans, integers, or real numbers. There, users are in charge of modeling a (satisfaction or optimization) problem in terms of variables-defined over the aforementioned domains-and the constraints they are subject to-which may be numerical equations or inequalities, as well as other relations, involving one, two, or more variables at a time.
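The modeling style just described-variables, finite domains, constraints, then search-can be sketched with a bare backtracking solver. Real CP systems add constraint propagation to prune domains during search; this illustration omits it for brevity, and the variables and constraints are arbitrary examples.

```python
# Minimal sketch of CP-style search: variables with finite integer
# domains, constraints as predicates over assignments, and plain
# backtracking (no propagation).

def solve(domains, constraints, assignment=None):
    assignment = assignment if assignment is not None else {}
    if len(assignment) == len(domains):
        return dict(assignment)                  # every variable assigned
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # check only the constraints whose variables are all assigned
        ok = all(check(assignment) for vars_, check in constraints
                 if all(v in assignment for v in vars_))
        if ok:
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                      # backtrack
    return None

domains = {"X": [1, 2, 3], "Y": [1, 2, 3]}
constraints = [
    (("X", "Y"), lambda a: a["X"] < a["Y"]),
    (("X", "Y"), lambda a: a["X"] + a["Y"] == 4),
]
print(solve(domains, constraints))   # {'X': 1, 'Y': 3}
```

In a CLP system the same model would be written declaratively, e.g. as Prolog goals posting `X #< Y, X + Y #= 4` over finite domains, with the solver handling search and propagation transparently.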
It is widely acknowledged how logic programming technologies-and, in particular, Prolog-can naturally represent both CP variables and constraints. For this reason, the CLP acronym soon emerged in the logic programming playground. It is used to indicate the field of constraint logic programming (CLP), i.e., the exploitation of logic programming as a linguistic and technological reification for CP.
CLP is considered the most natural generalization of logic programming-as LP is CLP over the Herbrand domain. Intuitively, the rationale behind such a shared opinion is that-differently from what happens for most search strategies, such as SLD-CLP does not commit to any particular order when exploring a search space. This of course paves the way towards the efficient exploration of infinite domains, other than the definition of purely declarative specifications of logic programs.
Although several Prolog implementations-such as SICStus or SWI-Prolog-have been extended to support logic programming over the most common domains, CLP is not part of the ISO Prolog standard. Nevertheless, many Prolog implementations share a common syntax and semantics for their CLP modules, which often provide common functionalities. Thus, technological interoperability is not a major issue in this field.
However, when dealing with future perspectives, it would be interesting to extend CLP towards more domains than the classical ones. In particular, two main questions arising from this perspective are:
• will (C)LP ever reach full declarativeness? In other words, will it ever be possible to write CLP programs containing custom, user-defined domains and constraints?
• can sub-symbolic AI play a role in the development of more efficient or more expressive CLP solvers?

Argumentation
Argumentation is part of the world around us: we are all increasingly interested in why a certain behavior occurs. This is actually the very reason behind the argumentative process: rather than just the mere reasoning, its ambition is to provide and document the interaction aspect-the dialogue that "subtitles" the reasoning. In the XAI perspective, getting to the root of certain behaviors and decisions becomes fundamental for intelligent systems: hence argumentation, with all its annexes, is an ineluctable research perspective. As discussed in Section 3.2.2, formal models of argumentation are making significant and increasing contributions to AI-from defining the semantics of logic programs, to implementing persuasive medical diagnostic systems, up to studying negotiation dialogues in multi-agent systems.
Therefore, from the two fundamental pillars of argumentation and AI, the perspectives in this area spread over the intersection of several fields such as autonomous agents, AI and law, logic in computer science, electronic governance, multi-agent systems. To be able to take part in an argumentative context, intelligent systems need to be explainable: this is why integrating symbolic and sub-symbolic approaches turns out to be, once again, a key theoretical and engineering issue. In turn, this calls for adequate supporting technologies-namely, tools for argument analysis, evaluation, visualization, etc.
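The formal core of abstract argumentation can be illustrated with Dung's grounded semantics: given arguments and an attack relation, iterate the characteristic function from the empty set until a fixpoint. The example arguments are illustrative.

```python
# Sketch of abstract argumentation: compute the grounded extension of
# an attack graph by iterating F(S) = {a | S defends a} from the empty
# set to a fixpoint.

def grounded_extension(arguments, attacks):
    """An argument is defended by S if S attacks all of its attackers."""
    attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
print(sorted(grounded_extension(args, attacks)))   # ['a', 'c']
```

Here `a` is accepted because nothing attacks it, and `c` is accepted because its only attacker `b` is in turn defeated by `a`-a minimal instance of the "reinstatement" pattern at the heart of argumentation semantics.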

Argumentation Mining
To this end, an intriguing research area is argumentation mining [191], which aims at automatically detecting, classifying, and structuring argumentation in text: such a technology could become an important component of an argumentation analysis system, used to understand the content of serial arguments and their linguistic structure, to capture the relationship between preceding and following arguments, to recognize the underlying conceptual beliefs, and finally to situate them within the overall coherence of the specific topic.
One further intriguing perspective is to investigate the connection between such patterns and logical concepts such as ontologies: by mapping argumentation patterns onto existing suitable ontologies on the matter of discourse, it could be possible to enable new kinds of reasoning, possibly building high-level aggregations aimed at synthesizing an argumentative process, to be possibly emulated. The ability to perform such sorts of analysis on these (possibly recurrent) structures in available data and texts opens a new range of challenging applications-for instance: verifying the compliance of a regulatory system; highlighting injustices, discrimination, or prejudices; or even injecting ethical (responsible, and explainable) behavior within an intelligent system.

Coordination and Self-Organization
As discussed in Section 3.1.2, logic has been largely exploited for coordination in multi-agent systems-to represent and enact the coordination rules to govern/constrain the interaction space, express agent coordination and communication languages, represent the environment, etc. More generally, the logical formalization makes it possible to express coordination (specification, rules, etc.) in a rigorous yet readable way, both at design-time and run-time-and possibly analyze, verify, enact them. Indeed, the adoption of a logic-based representation creates the potential to formalize a "coordination theory" that could then be manipulated by specialized (meta-level) intelligent agents [192].
One of the major research perspectives regards self-organization. In fact, a logic-based approach can fruitfully and effectively support the increasing need for artificial systems to self-organize-that is, to self-configure, self-protect, possibly automatically recover from errors (self-healing), and self-optimize [193]. Going further, under the hypothesis of adopting a logic, tuple-based representation, one could devise a fully distributed, swarm-like strategy for clustering tuples according to their type-as a tuple template-so that tuples with the same type are aggregated together. This could be particularly interesting when coupled with a (logic) tuple space abstraction (and infrastructure), which could work as the aggregator and natural support for subsequent reasoning by intelligent agents [194].
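The tuple-clustering idea above can be sketched with Linda-style template matching: tuples are logic terms, templates contain wildcards, and clustering groups the tuples of a space under the template they match. The tuple shapes and data are illustrative assumptions.

```python
# Sketch of a logic tuple space with template matching: tuples are
# (functor, arg, ...) terms; templates use None as a wildcard.
# Clustering by template emulates the swarm-like aggregation idea.

def matches(template, tup):
    return (len(template) == len(tup)
            and all(t is None or t == v for t, v in zip(template, tup)))

def cluster(space, templates):
    """Group each tuple of the space under the first template it matches."""
    groups = {tpl: [] for tpl in templates}
    for tup in space:
        for tpl in templates:
            if matches(tpl, tup):
                groups[tpl].append(tup)
                break
    return groups

space = [("temp", "room1", 21), ("temp", "room2", 19), ("light", "room1", 300)]
groups = cluster(space, [("temp", None, None), ("light", None, None)])
print(groups[("temp", None, None)])   # the two temperature tuples
```

In a distributed setting, each node would run such matching locally and migrate tuples towards neighbors holding similar ones, yielding the swarm-like aggregation without any central coordinator.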

Education
Educational institutions-from primary to higher education, as well as adult and professional learning-can benefit from the introduction of AI technologies into the learning process to actively support the achievement of learning objectives and enable new forms of learning patterns. One of the greatest challenges in this area concerns personalized education-that is, tailoring the learning path to the student's skills, assisting educators with organizational tasks, etc. More generally, AI tools can play a role in improving the student-teacher relationship, e.g., by guaranteeing fair education especially in developing countries, and overall in addressing the issues discussed in Section 3.2.3.
However, even more interesting than the "mere" integration of AI tools in education is the perspective of educating AI itself: in order to build reliable, ethical, and fair applications, chances are that AI itself should be educated. This calls for contributions from the most diverse disciplines-from logic to humanistic disciplines such as psychology, philosophy, and ethics: the logic-based approach makes it possible not only to formalize shareable, human-understandable principles, but also to verify the compliance of AI with-and its non-violation of-those very principles.

Declarative Languages
One of the earliest and clearest advantages of logic-based technologies is declarativeness. In a declarative program one simply writes what a machine should do, rather than how to do it. Declarativeness is today central to some very popular contexts-such as build/cloud automation languages (e.g., Gradle, OpenStack) or continuous integration systems (e.g., Travis CI, GitLab)-even when no logic-based language is used. Declarativeness in such contexts is often achieved via markup languages (e.g., XML, JSON, or YAML) or via the principled design of declarative APIs on top of functional languages (e.g., Gradle).
Consider for instance the case of OpenStack, an open-source technology aimed at supporting Infrastructure-as-a-Service (IaaS) Cloud providers. It is endowed with a module, namely Heat, aimed at supporting the orchestration of virtual resources for Cloud users. From the user perspective, using Heat is as simple as writing a Heat Orchestration Template (HOT) specification-that is, a declarative description of which virtual resources Heat should instantiate to set up the Cloud infrastructure. Declarativeness is there achieved through a JSON or YAML file describing virtual resources along with their dependencies and relationships. The Heat engine is then capable of translating a well-formed HOT specification into an actual virtual infrastructure, deriving an instantiation plan from the dependencies and relationships among resources. A similar approach is exploited by Travis CI and GitLab, which likewise leverage declarative YAML specification files, albeit for a different purpose: there, the goal is to let users automate the building, testing, and deployment phases of their software projects. Despite the difference in goals, such approaches share a common trait, i.e., a strong reliance on declarativeness attained through markup languages.
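As an illustration, a minimal HOT specification might look as follows-a sketch only, where resource names, flavor, and image values are purely illustrative:

```yaml
heat_template_version: 2018-08-31

description: Minimal sketch of a HOT specification (illustrative values)

resources:
  app_port:                       # a virtual network port
    type: OS::Neutron::Port
    properties:
      network: private-net        # assumed pre-existing network
  app_server:                     # a virtual machine attached to the port
    type: OS::Nova::Server
    properties:
      flavor: m1.small
      image: ubuntu-22.04
      networks:
        - port: { get_resource: app_port }  # declares a dependency on app_port
```

The `get_resource` intrinsic function expresses the relationship between server and port: the user states only which resources exist and how they relate, while Heat works out the instantiation order.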
Of course, the practical exploitation of markup languages easily falls short when it comes to implementing non-trivial, possibly parametric activities involving loops, choices, string manipulation, or other computational constructs-for instance, instantiating a parametric number of resources, each customized differently according to some variable condition. There, implementors tend to tackle such issues through vendor-specific extensions of markup languages supporting loops or recursion, conditional statements, or string manipulation, thus giving birth to a plethora of poorly interoperable solutions.
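To make the limitation concrete, the following minimal Python sketch shows the kind of parametric generation that static markup cannot express natively; the `make_servers` helper and all resource names are hypothetical, merely mimicking the shape of a resource section:

```python
def make_servers(n, flavor="m1.small"):
    """Generate a parametric resource specification: n similarly-shaped
    servers, each customized by its index -- the kind of loop that plain
    markup languages cannot express without vendor-specific extensions."""
    return {
        f"server_{i}": {
            "type": "OS::Nova::Server",
            "properties": {"name": f"node-{i}", "flavor": flavor},
        }
        for i in range(n)
    }

spec = make_servers(3)
print(sorted(spec))  # → ['server_0', 'server_1', 'server_2']
```

The number of resources becomes a plain parameter, and any further customization is ordinary code-precisely the expressiveness gap that vendor-specific markup extensions try to paper over.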
To avoid this sort of issue, some technologies, such as Gradle, attempt to reach declarativeness by properly engineering their functional APIs, and by leveraging mainstream languages with flexible syntaxes-such as Kotlin or Groovy. There, users declare the tasks to be performed on a software project; inter-dependencies among tasks can be registered as well, along with execute-before or execute-after relations. To manage the project, users then invoke some of the previously defined tasks (e.g., 'compile', 'test', or 'deploy') and let the Gradle system work out which tasks need to be executed for the requested task to become executable.
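A minimal Gradle build-script fragment in the Kotlin DSL might sketch this style as follows (task names and printed actions are illustrative):

```kotlin
// build.gradle.kts -- minimal sketch (task names and actions illustrative)
tasks.register("compile") {
    doLast { println("compiling sources") }
}
tasks.register("test") {
    dependsOn("compile")          // inter-task dependency
    doLast { println("running tests") }
}
tasks.register("deploy") {
    dependsOn("test")             // deploy needs test, which needs compile
    doLast { println("deploying artifact") }
}
```

Invoking `gradle deploy` would then execute compile, test, and deploy in order: the user declares tasks and relations, and the engine derives the execution plan.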
Although such technologies are not logic-based in nature, their declarativeness is undeniable, as is their success-as proven by their wide adoption. However, comparing that success against the moderate adoption of logic-based technologies, some questions immediately arise, which future research efforts in the field should answer:
• what is favoring the adoption of non-logic declarative technologies?
• what is preventing a wider adoption of logic-based declarative technologies in these areas?
• can computational logic and logic programming contribute to overcoming the current shortcomings of non-logic declarative technologies?
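As a hint toward the last question, the same kind of task graph handled by Gradle could be expressed directly in a logic language-below, a minimal Prolog sketch (task names and the `depends`/`needs` predicates are illustrative), where the transitive closure of dependencies comes for free by resolution:

```prolog
% Task dependencies as logic facts (illustrative task names)
depends(test, compile).
depends(deploy, test).

% needs(T, D): task D must be executed for task T to run
needs(T, T).
needs(T, D) :- depends(T, X), needs(X, D).

% ?- needs(deploy, D).
% D = deploy ; D = test ; D = compile.
```

Such a representation is not only declarative but also directly amenable to meta-level reasoning and verification-precisely the added value that logic-based technologies could bring to these areas.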

Discussion and Conclusions
As mentioned above, the new AI era calls for two fundamental enabling factors: (i) the availability of big computing power even in minimal spaces, and (ii) the availability of huge amounts of context-related data. On the one side, such factors make it possible to learn from experience, which is the playground of sub-symbolic algorithms; on the other, they make logic algorithms-historically used for expert systems, and hence very effective in terms of transparency and explainability-computable in reasonable time.
Big data have naturally led sub-symbolic approaches to prevail, because of their effectiveness in elaborating context-related data and extracting valuable results from it, learning trends and recurring patterns: yet, their inherent black-box nature is a clear issue. This is precisely where the integration with symbolic approaches can naturally provide added value, making symbolic and sub-symbolic approaches complement each other. In fact, the two main weaknesses of symbolic approaches concern (a) the extraction of context-related knowledge, and (b) computational complexity-the first being the natural territory of sub-symbolic approaches.
Computational complexity, in its turn, can be partially addressed thanks to the exponential increase of computational power, by suitably exploiting parallelism, as well as by re-defining and re-tuning symbolic approaches so as to fit present computing paradigms and architectures-as in the case, for instance, of LPaaS [195]. However, it remains an issue in some application contexts, often leading to higher processing costs. An example is propositional interval temporal logics (ITL) [196], which provide a natural framework for representing and reasoning about temporal properties, but whose computational complexity constitutes a barrier to extensive use in practical applications. To cope with this issue, several approaches exploit constraints or adopt a locality principle; in other cases, as in description logics (DLs), complexity is decreased at the price of limited expressiveness. For instance, in Bowman and Thompson's decision procedure for propositional ITL, decidability is achieved by means of a simplifying hypothesis-the locality principle-that constrains the relation between the truth value of a formula over an interval and its truth values over initial sub-intervals [197]. Tableau-based decision procedures have recently been developed [198] for some interval temporal logics over specific classes of temporal structures, without resorting to any simplifying assumption.
Parallelism and concurrent programming techniques are another valuable tool for dealing with complexity. The computing power of multi-level parallelism (MLP), in particular, constitutes a promising technique for facilitating concurrent programming while delivering performance comparable to that of fine-grained locking implementations-see for instance [199] and [200].
To recap, due to their strong foundations and features, logic-based technologies have the full potential to power symbolic approaches in such an integration, opening intriguing perspectives currently under exploration in many research contexts, as discussed in Section 4. Tables 1 and 2, in particular, point out the connections among the diverse logics, their application areas, and market segments. Table 1 highlights that cognitive agents and robotics are the application areas exploiting the widest spectrum of logic-based approaches, whereas other areas typically rely on a smaller subset. Reading the table orthogonally, first-order logic, description logic, and fuzzy logics appear to be the most general-purpose ones, the others being more specific. Table 2, in turn, highlights that some logics-such as description logic and fuzzy logics-are widely used across a variety of market segments, while the same does not hold for others, like BDI, defeasible reasoning, and probabilistic logic, thus opening a space for their possible expansion. BDI and defeasible reasoning, in particular, could effectively help intelligent systems-intended as agents immersed in continuous collaboration with humans-to behave and reason more "like a human".
Overall, the synergy of symbolic and sub-symbolic approaches appears to be a viable and promising option to face key issues in today's intelligent systems-namely the need for explainable, responsible, ethical AI. In particular, the adoption of symbolic approaches can help to achieve the key features of e-justice, fairness, ethics and transparency.