Dear readers,

It’s conference time, and with their selection of keynote speakers, conferences tend to be seismographs for trends in research. I had a look at a handful of this year’s international AI conferences. Here is my selection: a continuing trend seems to be explainable AI, with IJCAI even featuring two keynotes in this area. Mihaela van der Schaar brought our attention to machine learning interpretability in medicine, which requires new methods for non-static data and aims to enable medical scientists to make new discoveries by unraveling the underlying governing equations of medicine from data. Tim Miller, however, reminded us not to let the “inmates run the asylum”: machine learning researchers may not bring the best perspective to developing explanation approaches that are helpful and understandable for lay persons. He made a case for instead bringing social scientists on board, together with experts from human-computer interaction. Indeed, interdisciplinary research has to be at the core of making AI decisions understandable and tractable for lay persons. At AAAI, Cynthia Rudin shared her experiences with bringing interpretable models into situations with high societal stakes, such as decisions in criminal justice, healthcare, financial lending, and beyond, collaborating with people from different fields. It appears that this branch of AI requires new efforts in trans- and interdisciplinary research, and I think we can expect highly interesting new insights from this field.

Developing AI towards more human-interoperable forms is a topic that recurs in a range of keynotes. At ICRA, Josh Tenenbaum highlighted his agenda of “Scaling AI the human way”, which aspires to develop cognitive capabilities in machines that can match human ones and requires more than pattern recognition. With the metaphor of a “child hacker” in mind, where knowledge can be conceived as code and learning as programming, he goes beyond the current machine learning paradigm of parameter learning towards explicit reasoning capabilities that allow structures and concepts to be communicated. With a similar perspective on the difficulty of modeling human capabilities, Anthony Hunter summarized at KR approaches to model people’s everyday use of defeasible knowledge through non-monotonic reasoning and the evolution of argumentation and commonsense reasoning. While humans exhibit capabilities that are currently unavailable to machines, Ulrike Hahn, also at KR, highlighted in contrast the benefit of AI for human cognition, especially for laypeople’s ability to match up to probabilistic norms. Whether humans ultimately benefit from AI, and whether an AI approach is human compatible, can ultimately be tested in human-machine collaboration. Bringing in a social dimension at ICRA, Julie Shah reported on her work on human-machine partnerships and the work of the future, with a focus on the downstream consequences of AI design decisions for human workers. Ayanna Howard, also at ICRA, addressed another much-discussed social dimension that must not be left out: bias and strategies for its mitigation. Less familiar perspectives were presented as well: AI can not only revolutionize human labor but also enable new forms of democracy through social choice and algorithms for collective decision making. This is the message of Jérôme Lang at IJCAI, pointing out new ways for citizens to contribute to a plethora of public decision processes.

While these are interesting insights, they remain subjectively selected topics, of course. So let me finish with a more comprehensive reflection on the current state of AI: at AAAI, Michael Littman presented the second report of the “One Hundred Year Study on AI” (the 2021 Study Panel Report), which investigates the current progress and impact of AI and derives from this suggestions for future directions. Having missed Littman’s keynote myself, find here an (alas, subjective) selection from the report itself:

Given that current research is largely focused on narrow AI, the report discusses prospects for more general AI by naming three key abilities:

  1. the ability to learn in a self-supervised or self-motivated way,

  2. the ability for a single AI system to learn in a continual way to solve problems from many different domains, and

  3. the ability for an AI system to generalize between tasks through the use of intrinsic motivation.

Yet, I wonder if the community will have enough incentive to decide to move towards more general AI, as this may require giving up the currently rewarding narrow-purpose approaches.

On the important question of how to inform and educate the public, the report makes the remarkable suggestion to move beyond the goal of educating in favor of a more participatory approach with the public. This is indeed an interesting perspective, which would require entirely new approaches to AI that take the whole development and deployment process into account as a socio-technical system. Accordingly, as the most pressing dangers of AI, the report identifies “insufficient thought given to the human factors of AI integration”, leading to different kinds of misuse of AI systems. So, the human as a disturbing factor for AI?

Yet, the most remarkable comment, in my view, is made with respect to long-term strategy, where the report states that with its focus on data rather than models, “the recent dominance of deep learning may be coming to an end”. In addition to faster processors and bigger data, future progress may depend on hand-coded methods, in other words on GOFAI (good old-fashioned AI).

Best wishes and enjoy reading this issue of KI,

Britta Wrede

1 Forthcoming Special Issues

1.1 Explainable AI

Guest Editors: Ute Schmid (Universität Bamberg), Britta Wrede (Universität Bielefeld)

Scope: In recent years, Explainable AI (XAI) has been established as a new area of research focusing on approaches that allow humans to comprehend and possibly control machine-learned (ML) models and other AI systems whose complexity renders the process leading to a specific decision opaque. In the beginning, most approaches were concerned with post-hoc explanations for classification decisions of deep learning architectures, especially for image classification. Furthermore, a growing number of empirical studies addressed the effects of explanations on trust in and acceptability of AI/ML systems. Recent work has broadened the perspective of XAI, covering topics such as verbal explanations, explanations by prototypes and contrastive explanations, combining explanations with interactive machine learning, multi-step explanations, explanations in the context of machine teaching, relations between interpretable approaches to machine learning and post-hoc explanations, and neuro-symbolic and other hybrid approaches combining reasoning and learning for XAI. Addressing criticism regarding missing adaptivity, more interactive accounts have been developed that take individual differences into account. The question of evaluation beyond mere batch testing has also come into focus.

In this special issue, the focus will be on research addressing such recent developments in XAI. Furthermore, interdisciplinary contributions as well as specific applications of XAI from domains such as education, healthcare, and industrial production are welcome.

The topics of interest for the special issue include, but are not limited to:

  • interactive approaches to XAI

  • adaptive XAI

  • deployment of explainable decision support systems in real-life settings (e.g., the medical domain, work contexts)

  • multi-modal approaches to XAI

  • process explanations

  • self-explaining robots

  • empirical evaluation of XAI approaches

  • measures for understanding of XAI

  • evaluation measures for XAI beyond trust and acceptability

Contributions can be from the following categories (for more detailed information please refer to the author instructions for each of these categories): Technical Contribution; System Descriptions; Project Reports; Dissertation and Habilitation Abstracts; AI Transfer; Discussion

If you are interested in submitting a paper, please contact one of the guest editors:

Contact: Ute Schmid, ute.schmid@uni-bamberg.de

1.2 GeoAI

Guest Editors: Simon Scheider, Zena Wood, Kai-Florian Richter

Scope: Researchers in Artificial Intelligence (AI) and Geography have developed various points of contact in the past, with many possibilities for mutual benefit in the future. Recently, subsymbolic AI methods such as deep learning have increased the quality and scalability of data processing methods in remote sensing, geographic information retrieval, natural language processing (NLP), and geospatial modeling, among others. Furthermore, there is a tradition of using symbolic AI approaches to raise the quality and scalability of methods by linking, e.g., Geography with agent-based simulation (ABM), spatial cognitive reasoning with Robotics, and Geography with Knowledge Graphs (KGs) in the Semantic Web. At the same time, geographic information has become an indispensable resource in itself, needed not only for adding spatial intelligence to machines and for making opaque models transparent, but also for understanding what kind of intelligence is needed to refer to place and to handle space. Understood in this broader sense, geoAI has the potential to fundamentally improve the way geographic information can be processed and interpreted by both humans and machines.

For this special issue, we invite researchers who investigate the kind of knowledge needed to account for Geography and space with(in) intelligent machines. We are looking for original research articles, project reports and discussion articles on (among others):

  • Symbolic (Semantic Web and ontological) approaches to geoAI

  • Sub-symbolic (deep learning/ML) based approaches to geoAI

  • Explainable geoAI (XgeoAI): interpreting and opening black-box models with a priori knowledge

  • Computational models of geospatial intelligence and spatial cognition

  • Methods for geospatial knowledge graphs (geoKG)

  • Reusability of geoAI models and reproducibility

  • Knowledge models of Geography and geographic information for data scientists

  • Pragmatic intelligence: Models of purpose and design of workflows with geoinformation

  • The human in the loop and models of human interaction in geoAI

Application areas include, but are not restricted to:

  • Agent-based models (ABM) and geoAI in Geography and Geosciences

  • AI in geographic information retrieval (GIR) and NLP: distant reading of geolocated texts

  • Geographic question-answering (geoQA) and automation of geographic data analysis

  • AI-enhanced geovisualization and dialogue methods

  • Object recognition in remote sensing and georeferenced image processing

  • geoAI in robotics, ubiquitous sensors and navigation systems

Contacts: Simon Scheider (s.scheider@uu.nl), Zena Wood (Z.M.Wood2@exeter.ac.uk), Kai-Florian Richter (kai-florian.richter@umu.se)

1.3 AI in Current and Future Agriculture

Guest Editors: Joachim Hertzberg, Jan Christoph Krause, Benjamin Kisliuk

Scope:

Agriculture is a perfect field for applying AI technology: it features uncertainty, data-rich and knowledge-rich applications, and a high degree of digitalization in today’s farming technology. Today, assistive technologies such as precision agriculture, farm management systems, and monitoring systems improve existing processes and their performance, while various robots have been in use in animal husbandry and are starting to be used in crop farming. Still, there is a lack of fully automated and integrated solutions for conventional agriculture that would transform practical procedures. Furthermore, alternative cultivation concepts such as agroforestry, spot farming, and mixed cultivation could be made feasible by AI in the first place. For AI to enable this transformation in agriculture, advances would be required in the fields of perception, navigation, autonomy, learning, data analysis, inference, and (multi-)robot control. Besides improving the technology, addressing ethical, legal, and social implications (ELSI) is vital for putting AI further into practice and for increasing acceptance by users as well as by society at large. This Special Issue aims at providing an overview of work on AI in agriculture regarding, but not limited to, the following topics. All submissions will be peer-reviewed:

  • Monitoring and data acquisition in agricultural applications

  • AI-based assistive systems for decision making and execution

  • Robotic solutions for automating (partial) processes

  • Upcoming developments of robotic and AI technologies in agriculture

  • AI for alternative agriculture concepts

  • AI for indoor farming

  • Human-robot interaction and user acceptance

  • ELSI aspects of AI in agriculture

Contributions can be from the following categories (for more detailed information please refer to the author instructions for each of these categories): Technical Contribution; System Descriptions; Project Reports; Dissertation and Habilitation Abstracts; AI Transfer; Discussion

Contact: Benjamin Kisliuk (DFKI), benjamin.kisliuk@dfki.de