
Roboadvice, Artificial Intelligence and Responsibility: The Regulatory Framework Between Present Scenarios and Future Perspectives

  • Chapter
  • First Online:
Economic and Policy Implications of Artificial Intelligence

Part of the book series: Studies in Systems, Decision and Control ((SSDC,volume 288))

Abstract

Artificial Intelligence offers distinctive development prospects in the field of electronic commerce, online banking and financial services: this is a typical manifestation of FinTech, a phenomenon considered a driving force for progress and for promoting the capital markets union (a primary goal of the European Union). The automation of financial advice is particularly relevant: robo-advice and its “subspecies” (“pure”, mixed, “robo for advisors”) prompt questions about the legal qualification of the relationship involving the investor, about the remedies available to him/her in the event of disturbances in that relationship, and about the applicable liability profiles. This last aspect is complicated by the new “factors” that AI introduces into the structure of the relationship, which can no longer be simplified as a direct and exclusive one between two human beings. It is therefore necessary to investigate the possible lines of development and to ask whether the current legal system already provides an adequate regulatory framework or whether, alternatively, direct intervention by the legislator is necessary. We may find: new interests whose legal recognition must be assessed; ways to better pursue already recognized interests thanks to new technologies; or, again, the need for regulatory intervention aimed at establishing new rules for better protection of those same interests.


Notes

  1. 1.

    They are completely new, as opposed to an interpretative adaptation of the existing ones.

  2. 2.

    (P. 1818). The author refers in particular to the European context.

  3. 3.

    (P. 3 recital E). The document goes on to highlight how, nonetheless, there are also “concerns about the future of employment and the sustainability of social security systems if the current fiscal base is maintained, giving potentially rise to a growing inequality in the distribution of wealth and power”.

  4. 4.

    Word linked to the expression “Financial Technology”.

  5. 5.

    IOSCO (P. 4 Fig. 1) highlighted eight macro areas of operation of FinTech players: (1) payments; (2) insurance; (3) financial planning; (4) equity and lending-based crowdfunding; (5) blockchain/DLT (cryptocurrencies, smart contracts, registration and asset tracking); (6) investments and trading (high frequency trading, roboadvisory); (7) research and analysis of information (Big Data, predictive analysis, etc.); (8) security (digital identity, encryption, fraud management).

  6. 6.

    See, above all, the definition of the European Parliament (European Parliament 2017b): “FinTech should be understood as finance enabled or provided by new technologies, affecting the whole financial sector, from banking to insurance, pension funds, investment advice and market infrastructures”. One of the first attempts (in Italy) to deal systematically with issues relating to FinTech may be found in Paracampo (2017), p. 1 et seq.

  7. 7.

    P. 46.

  8. 8.

    This is mostly true with reference to Western markets; in different contexts, such as those of Asian and African countries, FinTech has primarily represented an opportunity for economic development.

  9. 9.

    This is demonstrated both by its diffusion and by the types of activities involved, as well as by the actors who gravitate around this macro-phenomenon, thanks to which today we can speak of a “global fintech revolution”: Kauffman and Ma (2015).

  10. 10.

    It is an integral part of the third pillar of the Commission’s investment plan for Europe.

  11. 11.

    In this regard, the European Parliament has addressed an invitation to the European Supervisory Authorities and to the European Commission, in the perspective of “achieving an efficient and competitive European financial system that is more in-depth and more integrated, stable and sustainable”: European Parliament 2017b, p. 7.

  12. 12.

    P. 7.

  13. 13.

    Whether he is an independent consultant or a professional working for an intermediary that is a legal entity.

  14. 14.

    P. 8.

  15. 15.

    If these are offered as an additional service compared to pure advice.

  16. 16.

    Thus including not only the marketing of the service and the conclusion of the contract, but also the phase in which the customer is asked for the information necessary for his profiling, up to the formulation of the investment advice.

  17. 17.

    De Franceschi speaks about “innovation engines”.

  18. 18.

    Obviously, this does not happen only in the field of financial services; the potential for sharing also facilitates web users’ learning processes, as well as conscious purchasing choices in the broad sense. See in this regard Colangelo and Zeno-Zencovich (2016), Sénéchal (2016).

  19. 19.

    This will allow, whenever the robot receives a specific input, an increasingly suitable choice of output aimed at achieving the goal.

  20. 20.

    A robot trained through reinforcement learning will interact with a context with variable features; the paradigmatic example is the “scenario” of the financial markets.
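As a purely illustrative sketch of the reinforcement-learning loop this note alludes to (the toy “market” states, actions and rewards below are invented for the example, not drawn from the chapter), a tabular Q-learning agent can be trained against an environment whose state changes on its own, as financial markets do:

```python
import random

# Hypothetical toy "market": three states that evolve independently of
# the agent, three actions, and an invented reward table.
states = ["bull", "bear", "flat"]
actions = ["buy", "hold", "sell"]
reward = {("bull", "buy"): 1.0, ("bull", "hold"): 0.2, ("bull", "sell"): -0.5,
          ("bear", "buy"): -1.0, ("bear", "hold"): 0.0, ("bear", "sell"): 0.8,
          ("flat", "buy"): -0.2, ("flat", "hold"): 0.3, ("flat", "sell"): -0.2}

q = {(s, a): 0.0 for s in states for a in actions}   # learned action values
alpha, gamma, epsilon = 0.1, 0.5, 0.2                # learning rate, discount, exploration

random.seed(0)
state = "flat"
for _ in range(5000):
    # epsilon-greedy: usually exploit the best known action, sometimes explore
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q[(state, a)])
    r = reward[(state, action)]
    next_state = random.choice(states)               # the market moves on its own
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    state = next_state

# The learned policy: the increasingly suitable input-to-output mapping
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
print(policy)
```

With the rewards assumed above, repeated interaction drives the policy towards buying in the rising market, selling in the falling one and holding otherwise, which is the sense in which each new input yields an “increasingly suitable choice of output”.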

  21. 21.

    Legislative Decree n. 58/1998.

  22. 22.

    Directive 2014/65/EU. It entered into force on 3rd January 2018.

  23. 23.

    CONSOB is the acronym of Commissione Nazionale per le Società e la Borsa, i.e., the Italian national commission for listed companies and the stock exchange. See art. 19 TUF, which governs conditions, limits and procedures and confers the relevant powers on CONSOB.

  24. 24.

    The definition of the expression “financial instrument” is found in art. 1, paragraph 1, lett. t, TUF; its fundamental characteristic is the negotiability in the capital market.

  25. 25.

    Also for a reassurance in terms of possible responsibilities.

  26. 26.

    In the case of independence, the consultant has no ties with a product manufacturer, so his remuneration consists of a fee linked to the consultancy provided; otherwise, the remuneration mechanism of retrocessions will operate.

  27. 27.

    See Roppo (2009, p. 489), Perrone (2016, p. 204). Nevertheless, even in the light of secondary regulation, intermediaries bear information obligations towards customers which are almost always discharged in writing: see Sfameni and Giannelli (2015, p. 90).

  28. 28.

    The form requirement, in this case, is required for evidentiary purposes of the intermediary-investor agreement (see, on the subsequent recommendation, Article 58, paragraph 1, Delegated Regulation 565/2017, which limits the reference to the written form pursuant to Article 23 of the Consolidated Law on Finance, effective from 3 January 2018).

  29. 29.

    In particular, the investor will be asked what kinds of services and financial transactions he knows, or has had experience with, in addition to verifying both his level of knowledge and the work activity he has carried out. A product may be defined as appropriate if the customer demonstrates sufficient knowledge and experience to understand the risks involved.

  30. 30.

    Provided for the portfolio management service, as well as for investment advice.

  31. 31.

    See art. 25, paragraph 2, MiFID II and art. 54, paragraph 1, second period of the 2017/565 Delegated Regulation.

  32. 32.

    According to ESAs (2015, pp. 12–14), the automation requirement occurs, in particular, if: “(1) The automated tool is used directly by the consumer, without (or with very limited) human intervention; (2) An algorithm uses information provided by the consumer to produce an output; (3) The output of the tool is, or is perceived to be, financial advice”.
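The three cumulative ESAs conditions quoted above lend themselves to a minimal sketch; the function name and parameters below are illustrative labels chosen for this example, not ESAs terminology:

```python
# Hypothetical helper encoding the three ESAs (2015) conditions for
# automation in financial advice as one cumulative boolean check.
def is_automated_advice(used_directly_by_consumer: bool,
                        algorithm_uses_consumer_input: bool,
                        output_is_perceived_as_advice: bool) -> bool:
    """True only when all three conditions hold together."""
    return (used_directly_by_consumer
            and algorithm_uses_consumer_input
            and output_is_perceived_as_advice)

# A tool whose output is not perceived as financial advice (e.g. a bare
# price-comparison ranking) fails the third condition.
print(is_automated_advice(True, True, False))  # False
print(is_automated_advice(True, True, True))   # True
```

The cumulative structure matters: dropping any one condition (say, a human advisor mediating the tool) takes the service outside the notion sketched here.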

  33. 33.

    For the international standard ISO 8373:2012, a robot is an “actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks”. The standard further specifies that a robot includes the control system and the interface of the control system. The International Federation of Robotics (IFR) uses the same definition as the ISO standard. Nevejans suggests that a legal definition of robots be based on six conditions. In her view, a robot: (i) is a physical machine (“machine matérielle”); (ii) is powered by energy; (iii) has a capacity to act in the real world; (iv) can analyse the environment; (v) can render decisions; and (vi) can learn.

  34. 34.

    This constitutes a “common observation and constant starting point, even if negative, of every reflection on the subject” (Palmerini 2016, p. 1825). Some studies, above all those that have taken into consideration different categories of robots (by context of diffusion and utilization), have tried to frame them through classification categories. Among others, starting from the aspects most recurrent in the various definitions of robots, the Robolaw project promoted by the European Union led to the identification of five categories on the basis of which robots can be evaluated and defined: “The categories are as follows: (1) nature, which refers to the material in which the robot manifests itself; (2) autonomy, which refers to the level of independence from external human control; (3) task, which refers to the application or service provided by the robot; (4) operational environment, which refers to the contexts of use; and (5) human–robot interaction, which refers to the relationship established with human beings”: Palmerini et al. (2016).

  35. 35.

    P. 1825.

  36. 36.

    P. 14.

  37. 37.

    So-called clickstream behavior: see Katawetawaraks and Wang (2011).

  38. 38.

    It is made up of the European Banking Authority (EBA), the European Securities and Markets Authority (ESMA) and the European Insurance and Occupational Pensions Authority (EIOPA).

  39. 39.

    The considerations were then resumed later: “the ESAs identified benefits of automated advice to consumers and financial institutions in respect of:—reduced costs for both consumers and financial institutions;—easy access to more products and wider client base for financial institutions; and—improved quality of the service provided” (ESAs 2018, p. 8).

  40. 40.

    See p. 6.

  41. 41.

    See p. 51, where reference is made to the risks deriving from excessive reliance on informal advice and from the so-called framing effect; see also ESAs (2015), p. 21 et seq.

  42. 42.

    “Disguised consulting” risks eliminating the asset guarantee in the event of liability.

  43. 43.

    Among the risks, one must also consider the hermeneutical difficulty of allocating the costs of liability deriving from the increased complexity that robotic technology brings to the case; this aspect, the specific subject of this research, will be examined in detail shortly.

  44. 44.

    P. 9.

  45. 45.

    There are peculiar outcomes of some studies which, on the basis of statistical data, show a series of critical profiles potentially capable of severely limiting the development of roboadvice. In fact, the presence of certain contractual “forms” was detected, from which it emerges that the device: does not provide genuine financial advice; is not immune to conflicts of interest; does not necessarily entail a cost benefit for the customer; does not act in the latter’s best interest; and does not meet the safety standards required for fiduciary investments: see Fein (2015, p. 8). Furthermore, these “are not designed for ERISA retirement accounts and would not meet the DOL's proposed ‘best interest’ contract exemption”.

  46. 46.

    Collection of information on the customer, processing of the same for the purposes of assessing adequacy, asset allocation, portfolio selection, up to the formulation of the “final” recommendation and, possibly, negotiation and portfolio rebalancing, if envisaged as additional activities with respect to the mere advice.

  47. 47.

    In this sense a summary of the current situation can be found in ESAs (2016, p. 17): “The majority of respondents to the DP that are currently offering financial advice actually use the hybrid business models. In particular in the securities sector, respondents indicated the two main types of tools:

    • “Fully automated tools”: these are completely automated tools, driven by algorithms or decision trees, with no or very limited human interaction involved in the advisory process; the recommendations are produced in response to the information provided by the consumer.

    • “Hybrid advisory tools”: these tools combine an automated tool (algorithm or decision tree) with the ability to interact with a human advisor. Certain business models always involve a human advisor, typically to provide additional customer service, and/or to provide an additional quality overlay to the recommendations presented by the automated tool. Respondents to the DP also noted that, in some business models, a human advisor would be engaged when it is considered that the consumer has more complex needs than those that the automated tool can accommodate.

  48. 48.

    Formally, from a purely legal point of view, the “subjects” involved could also remain just two, for example where the consulting service is provided by a brokerage company that develops the software for the operation of the roboadvice on its own, perhaps with internal staff. Nevertheless, the necessary involvement of persons other than the protagonists of the “classic” financial advisory relationship remains, though in phases preceding (or following) the ordinary “value chain” of the service.

  49. 49.

    While in traditional advice the client's profile can be defined and then refined on the basis of further questions, so as to meet all individual needs or to clarify any inconsistencies, an automatic process based on a set of standard questions could have a lower capacity to capture the peculiarities of the individual and incorporate them into the proposal.

  50. 50.

    Evaluation and/or subsequent declaration, understood as the final manifestation of the consultancy.

  51. 51.

    Indeed, IOSCO (2017, p. 33) notes how, beyond the damage profiles, differently programmed algorithms in any case generate diversified outputs even starting from the same investor profile.
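The divergence noted here can be made concrete with a small sketch; the two allocation rules below are invented for illustration (they are not IOSCO's, nor any real provider's), yet they show how one investor profile yields different recommendations under different programming choices:

```python
# Hypothetical investor profile (risk tolerance on a 1..5 scale).
profile = {"age": 40, "risk_tolerance": 3, "horizon_years": 20}

def algo_a(p):
    # Invented rule of thumb: equity share = (100 - age), scaled by risk tolerance.
    equity = (100 - p["age"]) * p["risk_tolerance"] / 5
    return {"equity": round(equity), "bonds": round(100 - equity)}

def algo_b(p):
    # Invented alternative: horizon-driven equity share, capped by risk tolerance.
    equity = min(p["horizon_years"] * 3, p["risk_tolerance"] * 20)
    return {"equity": equity, "bonds": 100 - equity}

print(algo_a(profile))  # {'equity': 36, 'bonds': 64}
print(algo_b(profile))  # {'equity': 60, 'bonds': 40}
```

Same input, materially different portfolios: which output counts as the “adequate” recommendation then depends on design choices buried in the algorithm, which is precisely the liability-allocation difficulty the note raises.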

  52. 52.

    Respectively: Directive 2004/39/EC; Directive 2014/65/EU. MiFID I was implemented in Italian law by Legislative Decree 17 September 2007, n. 164.

  53. 53.

    See in this regard, in addition to art. 19, paragraph 5, also art. 21, paragraph 4, as well as art. 44, paragraph 1.

  54. 54.

    Apart from some innovations which it is not worth highlighting here.

  55. 55.

    Although formulated in a peculiar manner, given that, literally, intermediaries are expected to “ensure that”.

  56. 56.

    See Linciano et al. (2019, p. 81), according to which this would not require an immediate rethinking of the regulations. For similar assessments in comparative perspective, see Hunt (2017).

  57. 57.

    See pp. 255 and 260.

  58. 58.

    Article 3, paragraph 1, lett. a, of the “Consumer Code” (Legislative Decree 6 September 2005, n. 206) defines the consumer as the natural person who acts for purposes unrelated to the business activity (or craft, or professional) eventually carried out.

  59. 59.

    There is also a duty to return what has been received, without prejudice to the further damages granted to the consumer.

  60. 60.

    Two judgments of the Supreme Court, sitting in joint session (Court of Cassation 2017 n. 26724; Court of Cassation 2017 n. 26725), have emphasized that, in the field of financial services marketed at a distance, it is not possible to identify a general rule that sanctions the violation of pre-contractual information obligations with nullity.

  61. 61.

    For more information on the nature of liability, see Maggiolo (2014), p. 497.

  62. 62.

    According to which “in the compensation claims for damages caused to the client in the performance of investment and ancillary services, the burden of proof of having acted with the specific due diligence required is up to the authorized parties”.

  63. 63.

    The expressions are used by Court of Cassation 2016, n. 810, which conforms to an orientation in this sense expressed by the Supreme Court (see, among others, Court of Cassation 2009 n. 3733; Court of Cassation 2015, n. 826).

  64. 64.

    Legislative Decree 6 September 2005, n. 206.

  65. 65.

    As a remedy operating both in the event of termination and of invalidity of the contract, provided that there is an unlawful act performed by the investment service provider: see Maggiolo (2014, p. 508), according to whom compensation is “exclusive or concurrent protection … always available for the customer damaged by the pathological behavior of the intermediary”.

  66. 66.

    See Court of Cassation 2009, n. 3773.

  67. 67.

    It is a case of annulment (not nullity) ruled by Court of Cassation 2007, n. 26172; (Ticozzi 2007; see already also Scalisi 1994, p. 190).

  68. 68.

    Indeed, if the error is caused by the advisor's violation of information obligations in the pre-contractual phase and if the investor is a consumer, the more radical hypothesis of nullity of the contract would apply, by virtue of art. 67-septiesdecies, paragraph 4, of the Consumer Code: but only if that violation results in a significant alteration of the representation of the features of the offered product.

  69. 69.

    The cited provision is entitled “Contributory negligence of the creditor” and provides, in the first paragraph, that “If the creditor's negligent conduct has contributed to causing the damage, compensation is reduced according to the gravity of the fault and the extent of the consequences arising from it.” On its abstract applicability see Court of Cassation 2006, n. 8229.

  70. 70.

    In this regard, it was stated that it is “the human behind the robots to be held responsible for its actions. The choice [is between] the owner/user and the producer” (Bertolini 2013, p. 227); and again (p. 230): “furthermore, there may be conditions where it could prove useful to hold the owner or user [liable] because he is in the best position possible to intervene and avoid harm, or rather compensate the damage when it occurred, irrespective of whether he was at fault in causing it”.

  71. 71.

    Or, in any case, of the whole or part of what is necessary for the provision of automated advice.

  72. 72.

    P. 87.

  73. 73.

    The warranty for defects (Article 1492 of the Civil Code) would not satisfactorily cover the abstractly conceivable situations; moreover, the forfeiture and prescription terms (art. 1495 c.c.) would be too short, and above all the limitations on compensation for damages deriving from defects of the sold product would be excessive (art. 1494 c.c.).

  74. 74.

    We refer to Court of Cassation 2008, n. 11410; Court of Cassation 2014, n. 3021.

  75. 75.

    More recently, Court of Cassation 2017, n. 16654 expressed itself in the same way.

  76. 76.

    Outsourcing is here understood as a variegated category of contracts, which spread over time and became socially typical even before finding legal formalization, though needing protection. They are contracts for consideration with reciprocal performances, referable to one or more contract types (with the possibility of a mixed or complex contract) depending on the concrete agreements of the parties. The category includes the various ways in which a company may relinquish the direct management of sections of activity not included in its core business. For example, the business owner may choose between outsourcing the performance of the service and selling a branch of the company: see Court of Cassation 2006, n. 21287.

  77. 77.

    For this reason, the relationship can be better defined as a “software development contract”. Studies in this regard date back several decades: see above all Rossello 1984. It is not always easy to identify the differences between the software development contract and outsourcing. In the first case, the intermediary has an “instrument” (even if it is an algorithm) with which he produces the immaterial activities constituting the service he is providing; in the second case, it is the activity itself (in whole or in part) that is outsourced, and therefore carried out by others.

  78. 78.

    Pp. 10–11. Also relevant is the subsequent observation, which reports the results of the survey conducted on practical cases: “Respondents noted that disputes-related costs and future liability could be a barrier to automated advice for some institutions. Some respondents expressed uncertainty over the allocation of liability when consumers receive advice via an automated tool, particularly whether the liability can be attributed to consumers, providers or third parties. The outsourcing of functions to specialised providers such as entities in the FinTech industry was also a response to this risk”.

  79. 79.

    Recital X.

  80. 80.

    P. 155. The author, referring to the European project “Robolaw”, underlined the negative answer to the question “do we need a special regulation for robots?”: the conclusion is that no particular law for robots is needed, since the technical performance of robots has not reached the level of “ethical agents”.

  81. 81.

    European Parliament (2017a), point 6, expressed itself in favor of the principle of technological neutrality also for setting the future regulation of the subject.

  82. 82.

    Thus, where a link is affirmed between technological neutrality and the absence of “significant deviations” of the automated financial advisory service from the more general phenomenon of advice: see Linciano et al. (2019, p. 69).

  83. 83.

    P. 497.

  84. 84.

    We will return shortly to a similar qualification. European Parliament (2017a), point 29, also highlights how a massive use of automation in data collection and processing through algorithms amplifies the difficulties in identifying (and subsequently managing) the hypotheses of liability in roboadvice.

  85. 85.

    European Parliament (2017b, pp. 4–5). This point holds even if one does not share the view that we will reach “the point that, if we are not prepared, it could be dangerous for humans’ ability to control what they have created and, consequently, also to control their capacity to be responsible of their own destiny in order to guarantee the survival of the species”.

  86. 86.

    P. 81.

  87. 87.

    P. 7. To date, in fact, “Only Financial Professionals Can Provide Portfolio Analysis in light of all the relevant factors a fiduciary must consider […]. While digital tools can assist financial professionals in determining clients’ profiles, the report indicates that prudent investment advice requires human judgment beyond such tools (although human judgment still depends on the skill of the professional)”.

  88. 88.

    P. 217.

  89. 89.

    Ibidem.

  90. 90.

    Decker (2017, p. 155), referring to some research conducted by Susanne Beck, highlights how “adaptive learning robots are thought to interact with humans in normal environments, and, consequently, they can react to new sensory inputs in an unpredictable way. If that is the case, one can hardly assume that this reaction was caused by a wrongful act of the programmer, producer, or even the user”.

  91. 91.

    Palmerini et al. (2016). This is evidenced according to the perspective of a necessary organization of the regulatory framework “in order to balance opposing interests, but also—once desired policies are identified—taking into account the concrete effects and impacts of the rules on the market, not entirely related to general assumptions and unverified considerations about their presumed—or expected—consequences”.

  92. 92.

    […] If robots are regarded as products, the rules on liability for defective products should be revised, as they are otherwise not adequate to protect the interests involved. But the draft European rules on robotics (recital U) have already proposed possible itineraries in this direction: “manufacturers, owners or users could be considered objectively responsible for the acts or omissions of a robot if, for example, the robot has been classified as a dangerous object or within the area of product liability rules”.

  93. 93.

    Recital P: “it is appropriate, considering the stage reached in the development of robotics and artificial intelligence, to start with the issues of civil liability and to assess whether the best starting point is not an approach based strictly on the objective liability of whoever is best placed to provide guarantees”.

  94. 94.

    The basic juridical phenomenology, according to which the robot can be considered an object and not a subject, cannot be overcome.

  95. 95.

    P. 80: “[unless] the robot is capable of determining its own preferences and goals and acts towards their satisfaction in a completely free and autonomous fashion, it cannot be deemed an ethical agent”.

  96. 96.

    Among the first studies about the perspective of a personhood of devices equipped with artificial intelligence, see Solum (1992).

  97. 97.

    P. 242. The author adds: “in such a perspective, though, the specific ends need to be identified, and alternative tools to be taken into account, before concluding that would be the preferred way to achieve the desired result. It may indeed prove useful to attribute legal personhood to a software agent, which would be registered, know how to identify the limits of its ability to validly conclude contracts, the maximum amount of obligations it could assume, and eventually the (physical or legal) person it is representing”.

  98. 98.

    Recital T.

  99. 99.

    Decker (2017, p. 155) also refers to the proposal (advocated by others) of “introducing new legal status in order to overcome the diffusion of responsibility, namely electronic personhood, which develops along the line of the legal person of companies or corporations”.

  100. 100.

    The author notes that, in any case, the proposal to introduce the “electronic personality” of the robot should be accompanied by the creation of a register, equipping “each robot with an identifier when it is put on the market”.

  101. 101.

    P. 12. The European Parliament (2017b), recital T, also recognized that the autonomy of robots “raises the question of their nature in light of the existing legal categories—if they are to be considered as natural persons, legal persons, animals or objects—or if a new category with specific own characteristics and implications regarding the attribution of rights and duties must be created, including the liability for damages”.

  102. 102.

    Asaro (2007, p. 23) affirms: “it seems reasonable to think that some robots will eventually become a kind of quasi-agent in the law before they achieve personhood”.

  103. 103.

    The conclusions of the Robolaw research project are of the same opinion: see Palmerini et al. (2016, p. 80): “Hence, short of a full-fledged autonomy, robots may not be considered liable for the damages caused; rather, the humans behind them should. The choice between user, programmer and producer can vary with different circumstances and applications. When discussing the structure of the liability rules in a functional perspective, it is necessary to pay attention to optimal distribution and management of the costs associated with the device, as well as minimization of risks”.

  104. 104.

    See European Parliament (2017b), recital U.

  105. 105.

    And always on the condition that it is not the initial setup itself that decisively influences the data collection and subsequent processing.

  106. 106.

    P. 1841: “From a technical point of view, the robot with learning skills, after a certain period of use, is therefore different from the other robots with which it shared design and manufacture. Nevertheless, given that the characteristics that make this deviation from the initial standard possible were installed in the machine by the manufacturer and the programmer, the possibility of ascribing responsibility for any damage to those same subjects remains confirmed”.

  107. 107.

    Decker (2017, p. 154) refers to a gray zone between the robot producer and the robot user with regard to who is liable for the robot's actions: “Adaptivity and learning, crucial elements in HRI [human–robot interaction], are the key to successful cooperation in complex tasks. The system is able to learn about ‘its user’. It can be described as a technology that changes its performance while being used. Who is responsible for these changes?”

  108. 108.

    And this could, indeed, be the subject of an autonomous and interesting research project. It is an adjoining case in the topography of the code, in which we witness the transition from a contractual scheme with a pension purpose to a genuinely for-profit one, based on the evolutionary perspectives of artificial intelligence. Uncertainties about the development of this innovative phenomenon could thus be “re-used” as a wager for the purpose of further enrichment, in the typical perspective of derivative contracts. The insurance-scheme solution would thus be exposed to (new) speculative dynamics in the markets. The comments that follow, aimed at justifying the possibility of introducing an insurance system, should therefore be read with a cautious awareness of what has just been mentioned.

  109. 109.

    According to Palmerini (2016, p. 1842), in the case of an individual insurance scheme, “the accidents would be so rare as to keep the insurance premium very low, and it would therefore be accessible to all private consumers.” The insurance-system solution, in any case, should not be considered generally applicable to the progress of robotics; it is considered adequate for the specific phenomenon of roboadvice.

  110. 110.

    In the Italian internal system, the basis of a legislative policy choice in this sense can be found in art. 21, paragraph 1, lett. d) TUF, which requires intermediaries to “have the resources and procedures, including internal controls, to ensure the efficient performance of services and activities”; see Lucantoni (2012, p. 259).

  111. 111.

    If the robot had only the ability to collect and process data, the quid pluris recognizable in the expert human consultant (who is, moreover, facilitated by the robot's data collection) on account of his experience and competence should direct liability towards the latter.

  112.

    A situation that could arise in the “robo for advisors” model, where the consultant who uses the tool “educates” the robot over time, informing, or better, “feeding” the artificial intelligence with the outcomes of the consultancy reports: associating with each situation the recommendation made to the investor and, why not, where available, also the outcome of the investment (if carried out), as “proof of confirmation” of the adequacy of the advice issued.

  113.

    The authors underline the presence of “things you just know how to do without being able to explain the rules for how you do them; […] that makes one expert with respect to a particular set of knowledge and/or tasks”.

  114.

    In other words, when does the robot acquire such status, after the processes of machine and deep learning conducted over time? And above all, who is entitled to grant such recognition? It is not easy to identify this verifying/controlling figure; in a science-fiction world, only a “super artificial intelligence” could do it.

  115.

    Indeed, while the previous hypothesis grows in complexity as we imagine the development of artificial intelligence, in this case the process is the opposite: the possibility of verification is inversely proportional to the progress of the AI.

  116.

    Despite the declared uncertainty and the inability to settle the issue of the pre-eminence of the robot or of the human being in the envisaged hypotheses, it should be highlighted that, albeit from a general perspective on the man–robot relationship, some incline without too many uncertainties towards a position in favor of the robot: “Owing to the evidence in their favor (stipulated by definition), it is more appropriate to think of expert robots as better than average in their ability to make decisions that will produce desirable outcomes. This fact suggests that granting a general decision-making authority to human experts will be problematic once expert robots are properly on the scene. It might seem justifiable to grant humans ‘override’ authority in situations where there is ‘clear’ evidence contradicting the expert’s judgment, but even this would be contra-evidence-based. Furthermore, it would beg important questions about the weight to be placed on claims of ‘clear’ evidence, given that human–human expert disagreements tend to be characterized by a lack, rather than an excess, of clarity”: Millar and Kerr (2013, p. 19); see also p. 23.

  117.

    P. 1.

  118.

    The Commission was asked to follow the detailed recommendations contained in the Report; it was also invited, “once the technological developments will allow the construction of robots with a higher degree of autonomy than it is reasonable to expect at the moment, to propose an update of the pertinent legislation in due time” (point 25).

  119.

    Point 26.

  120.

    “Requesting a simple proof of the damage occurred and identifying a causal link between the injurious behavior of the robot and the damage suffered by the injured party” (European Parliament 2017b, point 27).

  121.

    Point 28. Nevertheless, it is always necessary to keep in mind the distinction between machine learning, education in fieri and the original programming of the algorithm.

  122.

    See European Parliament (2017b), points 29, 30, 31: “a possible solution to the complex problem of attributing liability for the damage caused by increasingly autonomous robots could be a compulsory insurance scheme, as already happens, for example, with cars; however, unlike the motor vehicle insurance scheme, which covers human actions or errors, robot insurance could be based on the manufacturer’s obligation to take out an insurance policy for the autonomous robots he produces”. This regime should then be “supplemented by a fund to guarantee the possibility of reimbursing damages in the absence of insurance coverage”; the resolution also invites the insurance industry to develop new products in line with the progress of robotics.

  123.

    P. 17.

References

  • Alpa, G.: FinTech: un laboratorio per i giuristi. Contratto e impresa, 2, 377 (2019)

  • Amidei, A.: Robotica intelligente e responsabilità: profili e prospettive evolutive del quadro normativo europeo. In: Ruffolo, U. (ed.), Intelligenza artificiale e responsabilità, Giuffrè, 63 (2017)

  • Asaro, P.: Robots and responsibility from a legal perspective. Proceedings of the IEEE (2007). https://pdfs.semanticscholar.org/427c/27a48205293fe59b94898ba1d266b4b3ea89.pdf?_ga=2.18658096.825710681.1574415992-333878711.1574415992

  • Bertolini, A.: Robots as products: the case for a realistic analysis of robotic applications and liability rules. Law, Innovation and Technology, 214 (2013)

  • Bouyon, S.: The Future of Retail Financial Services: What Policy Mix for a Balanced Digital Transformation? CEPS-ECRI (2017). www.ceps.eu and www.ecri.eu

  • Bravo, F.: Commercio elettronico. Enciclopedia del Diritto, Annali, V, Giuffrè (2012)

  • Colangelo, M., Zeno-Zencovich, V.: Online platforms, competition rules and consumer protection in travel industry. Journal of European Consumer and Market Law, 75 (2016)

  • De Franceschi, A.: Le piattaforme online nel mercato unico digitale: il caso Uber (2016). https://agenda.unibocconi.it/eventi/attach/7_De_Franceschi20171106125305.pdf

  • Decker, M.: The next generation of robots for the next generation of humans. Robotics and Autonomous Systems, Elsevier (2017)

  • Fein, M.: Robo-Advisors: A Closer Look (2015). https://ssrn.com/abstract=2658701

  • Fein, M.: FINRA’s Report on Robo-Advisors: Fiduciary Implications (2016), p. 7. https://ssrn.com/abstract=2768295 or https://doi.org/10.2139/ssrn.2768295

  • Gorassini, A.: Il soggetto come logos del diritto. Panorami (1995)

  • Gorassini, A.: Il valore vivente nel Diritto. In: Tescione, F., Persona e Soggetto - Il Soggetto come fattispecie della Persona, ESI (2010)

  • Hofacker, C.F.: Internet Marketing. John Wiley & Sons, New York, Chichester (2001)

  • Hunt, T.: Crypto currency and property rights. Russell McVeagh (2017)

  • Katawetawaraks, C., Wang, C.: Online shopper behavior: influences of online shopping decision. Asian Journal of Business Research, 1, 2 (2011)

  • Kauffman, R.J., Ma, D.: Contemporary research on payments and cards in the global fintech revolution. Electronic Commerce Research and Applications, 14, 261 (2015)

  • Linciano, N., Soccorso, P., Lener, R.: La digitalizzazione della consulenza in materia d’investimenti finanziari. CONSOB, Quaderni FinTech, 3 (2019). http://www.consob.it/documents/46180/46181/FinTech_3.pdf/64bcf8bd-7fda-459d-9e01-82b1504dc316

  • Lucantoni, P.: Le regole di condotta degli intermediari finanziari. In: Gabrielli, E., Lener, R., I contratti del mercato finanziario, UTET Giuridica (2011)

  • Macario, F., Addante, A.: Contratti. Formulario commentato - Outsourcing, Ipsoa, II ed., 2011 (2014)

  • Maggiolo, M.: Servizio di consulenza in materia di investimenti vs. servizio di ricezione e trasmissione di ordini. Banca borsa titoli di credito, 4, 497 (2014)

  • Millar, J., Kerr, I.R.: Delegation, relinquishment and responsibility: the prospect of expert robots (2013). Draft in progress. SSRN: https://ssrn.com/abstract=2234645 or https://doi.org/10.2139/ssrn.2234645

  • Palmerini, E.: Robotica e diritto: suggestioni, intersezioni, sviluppi a margine di una ricerca europea. Responsabilità Civile e Previdenza, 6, 1815 (2016)

  • Palmerini, E., Bertolini, A., Battaglia, F., Koops, B.J., Carnevale, A., Salvini, P.: RoboLaw: towards a European framework for robotic regulation. Elsevier, 10 (2016)

  • Paracampo, M.T.: FinTech e il mercato unico tecnologico dei servizi finanziari. In: Id., FinTech. Introduzione ai profili giuridici di un mercato unico tecnologico dei servizi finanziari, Giappichelli (2017)

  • Perrone, A.: Il diritto del mercato dei capitali. Giuffrè (2016)

  • Prasad, J.S., Aryasri, A.R.: Determinants of shopper behavior in e-tailing: an empirical analysis. Paradigm, 13, 1 (2009)

  • Roppo, V.: Sui contratti del mercato finanziario, prima e dopo la MiFID. Rivista di diritto privato, 489 (2009)

  • Rossello, C.: I contratti dell’informatica. Spunti di riflessione e comparazione con l’esperienza statunitense e francese. In: Alpa, G., I contratti di utilizzazione del computer, Giuffrè, 105 ss. (1984)

  • Ruffolo, U.: Per i fondamenti di un diritto della robotica self-learning; dalla machinery produttiva all’auto driverless: verso una “responsabilità da algoritmo”? In: Id., Intelligenza artificiale e responsabilità, Giuffrè, 1 (2017)

  • Santosuosso, A., Boscarato, C., Caroleo, F.: Robot e diritto: una prima ricognizione. Nuova Giurisprudenza Civile Commentata, Second Part, 1 ss. (2012)

  • Scalisi, V.: Dovere di informazione e attività di intermediazione mobiliare. Rivista di Diritto Civile, 167 ss. (1994)

  • Schueffel, P.: Taming the beast: a scientific definition of fintech. Journal of Innovation Management, 4, 46 (2016)

  • Sénéchal, J.: The diversity of the services provided by online platforms and the specificity of the counter-performance of these services: a double challenge for European and national contract law. Journal of European Consumer and Market Law, 39 ss. (2016)

  • Sfameni, P., Giannelli, A.: Diritto degli intermediari e dei mercati finanziari. Egea (2015)

  • Solum, L.B.: Legal personhood for artificial intelligences. North Carolina Law Review, 70, 4, 1231 ss. (1992)

  • Ticozzi, M.: Violazione di obblighi informativi e sanzioni: un problema non solo degli intermediari finanziari. Contratti, 363 (2007)

  • Uricchio, A.: Intelligenza artificiale e diritto – robot tax: modelli di prelievo e prospettive di riforma. Giurisprudenza Italiana, 7, 1749 (2019)

Correspondence to Pasquale Cuzzola.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Cuzzola, P. (2020). Roboadvice, Artificial Intelligence and Responsibility: The Regulatory Framework Between Present Scenarios and Future Perspectives. In: Marino, D., Monaca, M. (eds) Economic and Policy Implications of Artificial Intelligence. Studies in Systems, Decision and Control, vol 288. Springer, Cham. https://doi.org/10.1007/978-3-030-45340-4_8
