Beware of Bureaucrats: A commentary on Lustick and Tetlock (2021)

The article by Lustick and Tetlock (2021) impressively embeds the need for reliable foresight in concrete historical events and, based on this, makes a strong appeal for significantly improved foresight and decision-making groundwork, through the use of theory-guided computer simulations such as the mentioned Virtual Strategic Analysis and Forecasting Tool (VSAFT). It goes without saying that theory-guided computer simulations improve the status quo of foresight, which is why they should complement the method portfolio of any strategist, risk manager, analyst, or policy maker.
However, it is doubtful whether future pandemics, terrorist attacks, international conflicts, or even social upheavals can be regularly anticipated and, above all, whether appropriate preventive measures can be implemented consistently or even only to a predominant extent.

| A VOICE CRYING IN THE WILDERNESS
In fact, even much improved foresight remains subject to the caveat of a voice crying in the wilderness: who actually listens?
Foresight managers and planning teams are essential, but they are only a single link in the network. Further up, in key positions in countries and institutions around the world, sit (and have long sat) bureaucrats or, to put it neutrally, decision-makers.
Systems like VSAFT could therefore, to put it pointedly, possess an almost divine foresight in certain contexts (see Table 1, Lustick & Tetlock, 2021, p. 3). Nevertheless, this would fizzle out without leaving a mark if the responsible decision-maker simply favors other, competing, erroneously prioritized strategic hypotheses. No existing or conceivable machine simulation system can protect against this human bias in decision-making, short of a literal "rule of the machines." As long as that does not come to pass, a truism of professional practice applies: foresight is a craft, but so is persuasion. What good is the best foresight if it convinces no one, or only a very few?

Persuasive foresight is a powerful necessity in the age of the attention economy, which treats a potential consumer's attention as a scarce resource (Davenport & Beck, 2001; Goldhaber, 1997). From this perspective, theory-guided computer simulations are a prerequisite for more reliable foresight, but their capacity to actually improve decision-making depends on the effectiveness of their presentation. This is by no means a call for guerrilla bureaucratism, in which futurists use all sorts of sleight of hand and political games to maneuver die-hard bureaucrats out of their well-worn groove. It is, however, an appeal to better understand decision-making hierarchies, systems, and mechanisms, and to equip them more effectively with bureaucracy-appropriate decision-making tools. This requires not only a deeper understanding of political and other decision-making systems, but also in-depth knowledge of personality traits, recipients' perception types, and representation systems (Koch, 2011; Lotto, 2017; Yamagata, 2007).

These profiles should be taken into account when presenting the established scenarios: a recipient who is a numbers person, for example, expects a different presentation and line of reasoning than the narrative type, the visual type, or the model person. Or, to modify a well-known motto of the trade: non-personalized scenarios are boring.
| THE SKYNET EFFECT

A further risk associated with systems like those advocated by Lustick and Tetlock (2021) is the so-called Skynet effect, also known as algocracy: the rule of algorithms (see, e.g., the theory of algocracy by the sociologist A. Aneesh, 2016). Systems like VSAFT may deliver impressively illustrative simulations and reliable results that are nevertheless not, or not sufficiently, taken into consideration by the relevant authorities, because those authorities simply cannot comprehend how the results were generated and how the algorithm produced them. There is a reason why many players in science, business, and politics now have a new buzzword on their agenda: XAI, explainable artificial intelligence (Rai, 2020; Shin, 2021). What good is the best algorithm if humans can no longer comprehend its findings and therefore refuse to trust them?

Or consider the other, far more serious alternative: the question of comprehensibility is no longer pondered and resolved by the decision-makers and recipients of these systems at all, because they prefer to rely comfortably on the systems' data and conclusions, in accordance with the doctrine of papal and legal infallibility: Roma locuta, causa finita. Skynet has spoken, so what is the point of even questioning it? We are already experiencing the potentially serious consequences of this in investments made on the advice of so-called robo-advisors: the investor transfers a substantial amount of money to an account managed autonomously by the robo-advisor, the share price falls unexpectedly and unforeseen by the algorithm, the money is gone, the customer chalks it up to bad luck and switches to the next robo-advisor. This must not become the modus operandi in the fight against terrorism or in predicting international conflicts.

| THE WORSENING PARADOX
Even if theory-guided computer simulations could reliably advise decision-makers, and bureaucracies or decision-makers would actually follow them and adopt and implement appropriate measures: does the system then also predict a positive outcome for the chosen interventions? Can such systems effectively prevent the much-cited case in which the solution to a problem has an unintended and, more importantly, unforeseen effect on the target variables that is worse than the original problem? Even without simulation systems, experienced strategists know: doing nothing is also an option. In highly complex contexts such as international interdependencies or systemic processes in natural ecosystems, laissez-faire, wait-and-see, and keen observation are in many cases better than activism or intervention whose consequences are difficult to predict.

| A NEW "ARMS RACE"
Does this mean that theory-guided computer simulations are unnecessary, superfluous, or even harmful? Quite the contrary.
Regardless of the extent to which they are actually integrated into decision-making: no political apparatus, government, bureaucratic institution, or company will be able to afford to operate without such systems in the future. A new "arms race" is emerging, a battle of systems. If the neighboring country or the industry competitor uses such a system, how can I do without it? It would mean putting myself at a potentially significant strategic competitive disadvantage. Taken to its logical conclusion, nothing will then stand in the way of a benevolent or dictatorial algocracy.

| CONCLUDING THOUGHTS
Unless, of course, we revisit a central hypothesis of the two authors: human thinking might be just too noisy, disturbed by inconsistencies, and collectively distorted by interference to serve as a basis for reliable foresight (Lustick & Tetlock, 2021, p. 6). Machines, on the other hand, think rationally, consistently, and without interference. Humans do not: they are known to think and act erratically, irrationally, and emotionally. The question, then: how could rationally thinking machines even approximately predict the actions of erratically thinking humans? In 2016, for example, how many campaign forecasting systems predicted Donald J. Trump's victory ex ante (Lohr & Singer, 2016)? Only a marginal minority; the vast majority still suffers from its highly embarrassing error. As long as humans continue to think and remain erratic, simulative systems may not be the predictive tool of choice, at least not on their own. Unless, that is, they learn to think as chaotically as humans do. That, however, would be a very peculiar type of "progress."