EDITORIAL article

Front. Phys., 01 March 2023
Sec. Interdisciplinary Physics
Volume 11 - 2023 | https://doi.org/10.3389/fphy.2023.1150796

Editorial: Interdisciplinary approaches to the structure and performance of interdependent autonomous human machine teams and systems

W. F. Lawless1*, D. A. Sofge2, D. Lofaro2 and R. Mittu2

  • 1Paine College, Augusta, GA, United States
  • 2United States Naval Research Laboratory, Washington, DC, United States

Our Research Topic seeks to advance the physics of autonomous human-machine teams with a mathematical, generalizable model [1]. However, team science remains limited (e.g., to aircrews [2]). Why? Team science has been hindered by its reliance on observing how “independent” individuals act and communicate (viz., i.i.d. data [3,4]), yet independent data cannot reproduce the interdependence observed in teams [5]. In agreement, the National Academy of Sciences stated that the “performance of a team is not decomposable to, or an aggregation of, individual performances” ([6], p. 11), evidence of non-factorable teams and of data dependency. Consequently, well-fitted teammates must be found by random search, and a fitted team is characterized by fewer degrees of freedom and the reduced entropy that interdependence produces. We review what else we know about a physics of autonomous human-machine teams.
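To make the entropy claim concrete, here is a minimal sketch (ours, not from the cited reports) using Shannon’s information measures [4]: for two binary “teammates,” the joint entropy of independent agents equals the sum of their individual entropies, while perfect coordination halves it; the gap is the mutual information that i.i.d. sampling of individuals cannot see.

```python
# A minimal sketch: why i.i.d. observations cannot capture interdependence.
# Joint entropy is subadditive, H(X,Y) <= H(X) + H(Y), with equality only
# when X and Y are independent; the gap is the mutual information I(X;Y),
# one way to quantify a team's interdependence.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Two binary "teammates": independent vs. perfectly coordinated.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])   # p(x, y) = p(x) p(y)
coordinated = np.array([[0.5, 0.0],
                        [0.0, 0.5]])     # x always equals y

for name, pxy in [("independent", independent), ("coordinated", coordinated)]:
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mi = entropy(px) + entropy(py) - entropy(pxy.ravel())
    print(f"{name}: H(X,Y) = {entropy(pxy.ravel()):.1f} bits, I(X;Y) = {mi:.1f} bits")
# independent: H(X,Y) = 2.0 bits, I(X;Y) = 0.0 bits
# coordinated: H(X,Y) = 1.0 bits, I(X;Y) = 1.0 bits
```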

First, we argue that state-dependency [7] rescues traditional social science from its current validation crisis (e.g., “implicit” bias [8,9]) and replication crisis [10] (e.g., attempts to reduce bias have proved “dispiriting” [11]), both caused by assuming that cognition subsumes individual behavior and that independent (i.i.d.) data therefore suffice for teams. Traditional models built on this assumption include game theory and large language models like OpenAI’s ChatGPT. Being strictly cognitive, ChatGPT and two-person games are assumed to connect easily to reality; but ChatGPT has its skeptics [12,13], and, as reported in Science [14], real-world multi-agent approaches are “currently out of reach for state-of-the-art AI methods.” As previewed in Science, “real-world, large-scale multiagent problems … are currently unsolvable” [15].

Second, to describe the interdependence between cognition and behavior, Bohr, the quantum pioneer [16,17], borrowed “complementarity” from the psychologist William James [18]. Later, but long before the Academy’s 2021 report, Schrödinger [19] wrote that entanglement meant that “the best possible knowledge of a whole does not necessarily include the best possible knowledge of all its parts, even though they may be entirely separate.” Lewin [20] borrowed Schrödinger’s state-dependency to found social psychology, and engineers [21] borrowed it to found Systems Engineering; the concept was then mostly abandoned until resurrected by the Academy’s 2015 report on team interdependence [5].
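Schrödinger’s remark has a standard modern formalization, which we include as an illustration (not a claim from the editorial’s sources): for an entangled pure state, the entropy of the whole vanishes while each part alone is maximally uncertain.

```latex
% Schrödinger's point in modern notation (a standard textbook illustration):
% for the entangled Bell state, the whole is perfectly known, yet each part
% alone is maximally uncertain.
\[
  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),
  \qquad
  S(\rho_{AB}) = 0,
  \qquad
  S(\rho_{A}) = S(\rho_{B}) = \ln 2 > 0,
\]
where $\rho_{A} = \mathrm{Tr}_{B}\,\rho_{AB} = \tfrac{1}{2}\mathbb{1}$ is the reduced state of one part and $S(\rho) = -\mathrm{Tr}\,\rho\ln\rho$ is the von Neumann entropy.
```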

Third, generalizing the Academy’s 2021 claim while adhering to the laws of thermodynamics, data dependency arises when individuals become teammates, reducing degrees of freedom as a team coheres. With coherence, entropy decreases and the power of a team’s productivity increases; however, when teammates are interviewed as individuals, that coherence is lost to observation.
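As a toy illustration of this claim (under our own assumption that Shannon entropy counts a team’s accessible configurations), take two agents, each with four equally likely action states:

```latex
% Toy arithmetic (our assumption, not a result from the cited reports):
% two agents, each with four equally likely action states.
\[
  S_{\text{independent}} = 2 \log_2 4 = 4 \text{ bits},
  \qquad
  S_{\text{coherent team}} = \log_2 4 = 2 \text{ bits},
\]
```

since coherence constrains the second agent’s state to match the first, halving the accessible degrees of freedom; interviewing members one at a time samples only the marginals and misses this reduction.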

Fourth, testing for data dependence in teams has proved successful. We treat the structure of a team as key for autonomous agents, assuming a team’s size matches its problem [22]; with Smith’s [23] “invisible hand” as a baseline, team structure ranges from a group of independent individuals to a coherent team, generating from the least to the maximum team power. Several barriers lie ahead; e.g., the tradeoff between structure and performance may be a mathematical cul-de-sac, yet one we have generalized to multiple phenomena [1,24–26]: uncertainty and conflict (where logic fails [27]); deception; blue-red team challenges; emotion; vulnerability; innovation; and mergers (viz., random searches for team fittedness).

Fifth, by exploiting data dependency, reducing uncertainty inside of bounded spaces may recover rational choice [28], game theory, and Simon’s [29] bounded rationality. For example, cross-examination in a courtroom, the greatest means of discovering truth [30], takes place in a bounded space with strict rules (judges), where opposing officers of the court (lawyers) facing uncertainty compete to persuade an audience (the jury) of their competing interpretations of reality; legal appeals further reduce uncertainty with an “informed assessment of competing interests” [31]. Generalizing, we see how a blue team’s decisions under uncertainty on the battlefield, when challenged by an AI-assisted red team, might prevent future tragedies [32]; and why machine learning and game theory require controlled contexts.

Finally, for now, interdisciplinary explorations include social science (e.g., bidirectional trust [33]) and philosophy (e.g., ethics). Citing UN Secretary-General António Guterres, the Editors of the New York Times [34] recommended that “humans never completely surrender life and decision choices in combat to machines.” However, from [35], “Autonomous weapons already … [may] start their own war … [but] No theory for this encroaching world yet exists.” Though uncertain of the next step, we have confirmed the limits of a team science built on observing independent individuals; open science is critical to advancing the science of autonomy; and an interdisciplinary approach to the physics of teamwork may master autonomous human-machine teams and offer guidance to prevent future wars.

Next, we introduce the published articles for our Research Topic.

Ira Moskowitz uses Riemannian distance as a cost metric to improve multi-agent team efficiency. With an idealized model of the problem’s geometry, he finds solutions that may satisfice. Specifically, a combination of increasing skill and interdependence may maximize the probability that a multi-agent team reaches the correct conclusion to the problem it confronts.
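To give a flavor of what a Riemannian cost metric can look like, here is a hedged sketch: the Fisher-Rao distance between univariate Gaussian belief states, which has a closed form because the Gaussian family under the Fisher information metric is a hyperbolic half-plane. This is our illustrative choice of geometry, not necessarily the construction in Moskowitz’s article.

```python
# Illustrative sketch only: a Riemannian cost metric on Gaussian "belief"
# states. Under the Fisher information metric, the univariate-Gaussian family
# is a hyperbolic half-plane, giving a closed-form geodesic distance.
import math

def fisher_rao_distance(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    delta = ((mu2 - mu1) ** 2 / 2 + (sigma2 - sigma1) ** 2) / (2 * sigma1 * sigma2)
    return math.sqrt(2) * math.acosh(1 + delta)

# Hypothetical cost of moving one agent's belief toward a team consensus N(0, 1):
print(fisher_rao_distance(0.0, 1.0, 0.0, 2.0))  # ~0.980 = sqrt(2) * ln 2
print(fisher_rao_distance(3.0, 1.0, 0.0, 1.0))  # disagreement in the mean costs more
```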

William Lawless proposes that a science of interdependent agents is necessary for autonomous human-machine teams. As evidence, a case study of the fatal Uber pedestrian accident in 2018 finds that the Uber car and its operator acted independently of each other. With an open approach, he uncovers a tradeoff between a team’s structural entropy and its productivity.

Robert Hunjet’s team considers bidirectional communication between humans and AI swarms to improve efficiency. To reduce ambiguity, they design a language based on Jingulu, an Australian Aboriginal language, naming it JSwarm; it allows them to separate semantics from syntax. They provide a real-time example with shepherding and plan human studies next.

Rino Falcone and Cristiano Castelfranchi investigate social interaction primitives in a dependence network of agents to model subjective valuations of trustworthiness when performing tasks. Their model allows a comparison of reality and subjective beliefs in preparation for autonomous collaboration with humans. They observe objective relationships emerge between agents, and they plan a future simulation.
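A minimal sketch of the kind of comparison their model enables, under our own toy assumptions (hypothetical agents A and B and a made-up belief-revision rate): subjective trustworthiness valuations are checked against objective success rates over repeated delegations.

```python
# Toy model (ours, not Falcone and Castelfranchi's): agents in a dependence
# network hold subjective trustworthiness beliefs about delegates; comparing
# them with objective success rates shows where belief and reality diverge.
import random

random.seed(1)
objective_skill = {"A": 0.9, "B": 0.5}   # true task-success probabilities
belief = {"A": 0.6, "B": 0.8}            # truster's subjective valuations

for agent, skill in objective_skill.items():
    outcomes = [random.random() < skill for _ in range(200)]  # observed delegations
    observed = sum(outcomes) / len(outcomes)
    # Simple belief revision: move the subjective valuation toward the evidence.
    belief[agent] += 0.5 * (observed - belief[agent])
    print(f"{agent}: objective={skill:.2f} observed={observed:.2f} "
          f"revised belief={belief[agent]:.2f}")
```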

Fred Petry and his team briefly review game theory for autonomy across several successful applications. They focus on Nash and Stackelberg equilibria and on social dilemmas. They find that the use of “best responses” may create a negative result: in some situations, cooperation may violate moral rules, a finding that has created lively discussions among practitioners about autonomy.

Krishna Pattipati’s team simulates autonomous multi-agent systems with path-planning algorithms for interdependent agents to produce intelligent courses of action under uncertainty; their derived generalized recursions subsume the well-known sum-product, max-product, dynamic-programming, and joint reward/entropy-maximization approaches as special cases. Using unified probabilistic inference and dynamic programming, communication rules, and factor graphs in reduced normal form, they produce optimal decisions subject to agent schedules, predicting that bounded rationality and human biases can be overcome.
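For readers unfamiliar with the special cases named above, here is a minimal sketch of one of them, the max-product (Viterbi) recursion on a chain-structured factor graph, with toy potentials of our own invention; the article’s generalized recursions subsume this and much more.

```python
# Minimal sketch of one special case named above: the max-product (Viterbi)
# recursion on a chain-structured factor graph. The toy potentials are made
# up for illustration.
import numpy as np

def max_product_chain(unary, pairwise):
    """unary: T x S log-scores; pairwise: S x S log transition scores.
    Returns the highest-scoring state sequence (a best 'course of action')."""
    T, S = unary.shape
    msg = np.zeros((T, S))          # running max-product messages (log domain)
    back = np.zeros((T, S), int)    # backpointers for decoding
    msg[0] = unary[0]
    for t in range(1, T):
        scores = msg[t - 1][:, None] + pairwise + unary[t][None, :]
        back[t] = scores.argmax(axis=0)
        msg[t] = scores.max(axis=0)
    path = [int(msg[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

unary = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]]))  # toy evidence
pairwise = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))           # toy coordination
print(max_product_chain(unary, pairwise))  # -> [0, 0, 0]
```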

Tony Gillespie wants to ensure trust in autonomous human-machine teams when decision-making transfers between humans and machines. He identifies three key topics and important questions for human trust in, and acceptance of, an autonomous entity’s actions; describes teams as hierarchical control systems for responsibilities and actions, with practical solutions; and presents three applications of his technique.

Ryan Quandt questions assumptions as human-machine teams approach autonomy, namely, that interactions depend on how AI is housed and positioned in, and navigates, society. Behaviors in these settings reveal whether human and machine act and communicate jointly. Experiments should be performed and interpreted so that the successes of teams help society (and AI) to understand their actions.

Nicolas Hili’s team notes that paper and pens are still used for modeling systems, partly because Computer-Aided Systems Engineering (CASE) whiteboard tools remain problematic. Newer CASE tools have improved such applications, but without explainability. Instead, by separating handwritten text from geometrical symbols, they validate a human-machine interface for sketching that captures system models on interactive whiteboards with explainability.

Ashok Goel’s team studies robots tasked with assembling objects by manipulating parts, a complex problem prone to failure. They use meta-reasoning, robotic principles, and dual encoding of state expectations, finding that low-level information or high-level expectations alone produce poor results. They outline a multi-level robotic system for assembling objects having six degrees of freedom.

Ibrahim et al. review safety for human-machine teams in uncertain or safety-critical contexts, highlighting trust as essential for safe and effective operation. They use Autonomous Ground Vehicles to explore examples of interdependent teaming, communication, and trust between humans and autonomous systems, emphasizing that context influences trust in these systems.

Tom McDermott and Dennis Folds describe an information model that distributed human and machine teams can interpret for decisions under complexity (military hierarchical command-and-control structures; Rules of Engagement; Commander’s Intent; and Transfer of Authority language). They use Construal Level Theory with progressive disclosure across real-time mission planning and control systems, demonstrated for simulated military mine countermeasures.

Mito Akiyoshi applies social science to interacting humans to guide the emergence of trust for Autonomous Human Machine Teams and Systems in real-world contexts. She integrates three theoretical perspectives: the ecological theory of actors and tasks; the theory of introducing social problems for civics; and the political economy developed in the sociological study of markets.

Matthew Johnson’s team generalizes the effects of interdependence on adaptability and team effectiveness, finding it critical to human-machine team success. To help engineers move beyond models of individuals, they operationalize interdependence with formal structure and activity graphs to address complexity. They provide an example of an adversarial domain that exploits interdependence for effective, adaptive management.

Ariel Greenberg and Julie Marble (https://www.frontiersin.org/articles/10.3389/fphy.2022.1080132/full) provide an overview of the conceptual foundations of teaming between people and intelligent machines. They examine the original meaning of relevant interpersonal terms as a basis from which to enrich their translated usage in the context of human-machine teaming, highlighting social and experiential aspects to be accounted for in the design of autonomous systems.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Acknowledgments

For the many past years of summer research support, the first author thanks RM, Branch Head, Information and Decision Sciences, Information Technology Department, Naval Research Laboratory, Washington, DC; for his many years of editing and other support, the first author thanks his colleague DS, Computer Scientist and Roboticist, who leads the Distributed Autonomous Systems Group at the Navy Center for Applied Research in Artificial Intelligence (NCARAI), Naval Research Laboratory, Washington, DC; and for her unwavering support over many years of conceptual discussions and constructive debates, the first author thanks Kim Butler, North Augusta, SC.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Lawless WF. Toward a physics of interdependence for autonomous human-machine systems: The case of the Uber fatal accident. Front Phys (2022). doi:10.3389/fphy.2022.879171

2. Bisbey TM, Reyes DL, Traylor AM, Salas E. Teams of psychologists helping teams: The evolution of the science of team training. Am Psychol (2019) 74(3):278–89. doi:10.1037/amp0000419

3. Schölkopf B, Locatello F, Bauer S, Ke NR, Kalchbrenner N, Goyal A, et al. Towards causal representation learning. arXiv (2021).

4. Shannon CE. A mathematical theory of communication. Bell Syst Tech J (1948) 27:379–423, 623–56. doi:10.1002/j.1538-7305.1948.tb00917.x

5. Cooke NJ, Hilton ML, editors. Enhancing the effectiveness of team science. Committee on the Science of Team Science; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; National Research Council. Washington, DC: National Academies Press (2015).

6. Endsley MR. Human-AI teaming: State-of-the-art and research needs. The National Academies of Sciences, Engineering, and Medicine. Washington, DC: National Academies Press (2021).

7. Davies P. Does new physics lurk inside living matter? Phys Today (2021) 73(8):34–40. doi:10.1063/PT.3.4546

8. Blanton H, Klick J, Mitchell G, Jaccard J, Mellers B, Tetlock PE. Strong claims and weak evidence: Reassessing the predictive validity of the IAT. J Appl Psychol (2009) 94(3):567–82. doi:10.1037/a0014665

9. Leach CW. Editorial. J Personal Soc Psychol: Interpersonal Relations Group Process (2021). Available from: https://www.apa.org/pubs/journals/features/psp-pspi0000226.pdf (Retrieved November 15, 2021).

10. Nosek B. Estimating the reproducibility of psychological science. Science (2015) 349(6251):943. doi:10.1126/science.aac4716

11. Paluck EL, Porat R, Clark CS, Green DP. Prejudice reduction: Progress and challenges. Annu Rev Psychol (2021) 72:533–60. doi:10.1146/annurev-psych-071620-030619

12. Klein E. A skeptical take on the A.I. revolution: The A.I. expert Gary Marcus asks, what if ChatGPT isn’t as intelligent as it seems? New York Times (2023). Available at: https://www.nytimes.com/2023/01/06/opinion/ezra-klein-podcast-gary-marcus.html (Retrieved January 7, 2023).

13. Zumbrun J. ChatGPT Needs Some Help with Math Assignments. ‘Large language models’ supply grammatically correct answers but struggle with calculations. Wall Street J (2023).

14. Perolat J, De Vylder B, Hennes D, Tarassov E, Strub F, de Boer V, et al. Mastering the game of Stratego with model-free multiagent reinforcement learning. Science (2022) 378(6623):990–6. doi:10.1126/science.add4679. See also the Research Highlight by Yury Suleymanov in the same issue.

15. Suleymanov Y. Research Highlight on Perolat et al., Mastering the game of Stratego with model-free multiagent reinforcement learning. Science (2022) 378(6623).

16. Bohr N. Science and the unity of knowledge. In: L Leary, editor. The unity of knowledge. New York: Doubleday (1955). p. 44–62.

17. Pais A. Niels Bohr’s times: In physics, philosophy, and polity. Oxford, UK: Clarendon Press (1991).

18. James W. The principles of psychology. New York, United States: Dover Publications (1950).

19. Schrödinger E. Discussion of probability relations between separated systems. Proc Cambridge Phil Soc (1935) 31:555–63; Probability relations between separated systems. Proc Cambridge Phil Soc (1936) 32:446–52.

20. Lewin K. Field theory in social science: Selected theoretical papers. Cartwright D, editor. New York: Harper and Brothers (1951).

21. Walden DD, Roedler GJ, Forsberg KJ, Hamelin RD, Shortell TM, editors. Systems engineering handbook: A guide for system life cycle processes and activities. 4th ed. Prepared by the International Council on Systems Engineering (INCOSE-TP-2003-002-04). Hoboken, NJ: John Wiley and Sons (2015).

22. Cummings J. Team science successes and challenges. Bethesda, MD: National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science (2015).

23. Smith A. An inquiry into the nature and causes of the wealth of nations. Chicago: University of Chicago Press (1977).

24. Lawless WF. Risk determination versus risk perception: A new model of reality for human–machine autonomy. Informatics (2022) 9:30. doi:10.3390/informatics9020030

25. Lawless WF. Interdependent autonomous human–machine systems: The complementarity of fitness, vulnerability and evolution. Entropy (2022) 24(9):1308. doi:10.3390/e24091308

26. Lawless WF. Autonomous human-machine teams: Reality constrains logic, but hides the complexity of data dependency. Data Sci Finance Econ (2022) 2(4):464–99 (invited, special issue). doi:10.3934/DSFE.2022023

27. Mann RP. Collective decision making by rational individuals. PNAS (2018) 115(44):E10387–E10396. doi:10.1073/pnas.1811964115

28. Sen A. The formulation of rational choice. Am Econ Rev (1994) 84(2):385–90.

29. Simon HA. Bounded rationality and organizational learning. Technical Report AIP 107. Pittsburgh, PA: CMU (1989).

30. White J. California v. Green, 399 U.S. 149 (1970). U.S. Supreme Court. Available at: https://www.supremecourt.gov/.

31. Ginsburg RB. American Electric Power Co., Inc. v. Connecticut, 564 U.S. 410 (2011). Available at: http://www.supremecourt.gov/opinions/10pdf/10-174.pdf (Accessed May 11, 2017).

32. DoD. Pentagon Press Secretary John F. Kirby and Air Force Lt. Gen. Sami D. Said hold a press briefing (2021). Available from: https://www.defense.gov/News/Transcripts/Transcript/Article/2832634/pentagon-press-secretary-john-f-kirby-and-air-force-lt-gen-sami-d-said-hold-a-p/ (Retrieved November 3, 2021).

33. Lawless WF, Sofge DA. The intersection of robust intelligence and trust: Hybrid teams, firms and systems. In: Lawless WF, Mittu R, Sofge D, Russell S, editors. Autonomy and artificial intelligence: A threat or savior? New York: Springer (2017). p. 255–70.

34. Editors. Ready for weapons with free will? New York Times (2019). Available from: https://www.nytimes.com/2019/06/26/opinion/weapons-artificial-intelligence.html.

35. Kissinger H. Henry Kissinger’s guide to avoiding another world war: Ukraine has become a major state in Central Europe for the first time in modern history. The Spectator (2022). Available from: https://thespectator.com/topic/henry-kissinger-guide-avoiding-another-world-war/ (Retrieved December 27, 2022).

Keywords: interdependency, data dependency, autonomy, human-machine teams, systems

Citation: Lawless WF, Sofge DA, Lofaro D and Mittu R (2023) Editorial: Interdisciplinary approaches to the structure and performance of interdependent autonomous human machine teams and systems. Front. Phys. 11:1150796. doi: 10.3389/fphy.2023.1150796

Received: 25 January 2023; Accepted: 13 February 2023;
Published: 01 March 2023.

Edited and reviewed by:

Alex Hansen, Norwegian University of Science and Technology, Norway

Copyright © 2023 Lawless, Sofge, Lofaro and Mittu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: W. F. Lawless, w.lawless@icloud.com
