Does the method for Military Utility Assessment of Future Technologies provide utility?

The Military Utility Assessment of Future Technologies (MUAFT) method was developed as a cost-efficient alternative to methods such as NATO's Disruptive Technology Assessment Games, to be used as a part of the Swedish Armed Forces' long-term capability development process. The question addressed in this study is whether MUAFT can be considered to have validity in its context and thus whether it has the potential to be useful to other small to medium size states. The analysis was based on an operationalization of Clark's framework for science and technology intelligence analysis, combined with a military capability centric view of military utility. MUAFT reports from 2012 to 2018 were reviewed in terms of how they satisfy five key criteria. The study shows that MUAFT provides utility, if used by a suitably composed group of experts who are aware of the method's limitations. The limitations mainly originate from a lack of explicit support for assessing the impact of forces for change, other than technological forces, on military capability development. The expert group serves as the synthesizing bridge between technology forecasts and military utility assessments. Therefore, comprehensive expertise is needed in various military technology specialisations, in the sponsor's military capabilities and in subjects necessary to master in order to assess other influential societal forces for change.


Introduction
"Technological surprise is the unilateral advantage gained by the introduction of a new weapon (or by the innovative use of a known weapon) in war against an adversary who is either unaware of its existence or not ready with effective counter-measures, the development of which requires time."[1] Therefore, most states consider it crucial to explore long-term developments in science and technology when making military strategic capability decisions. In the past, the use of new technologies, and new applications of mature technologies, has affected the way conflicts and wars are conducted, and it will most probably continue to do so. The effects of technological developments on warfare are often evolutionary, but sometimes, as the initial quote by Handel suggests, they are surprising, even revolutionary [2]. Such effects are known as disruptive [3], because they render some capabilities obsolete, while also creating a need for new capabilities. Gunpowder, railways, radar technology, information technology and global navigation satellite systems are but a few of many possible examples in military history [4]. These examples tell us that a military actor who truly understands the potential in technology can gain advantages, either by exploiting it, or by avoiding surprise on the battlefield. Furthermore, from experience, we know that integrating new technologies into weapon systems, introducing them into military units, and adapting doctrine often takes decades. Consequently, military decision makers have a real need for competent, accurate technology forecasting.
Forecasting has been characterized as "making more or less linear systematic estimations, statements, extrapolations, projections, or predictions of highly probable events" [5]. Consequently, forecasting is a concept that is applicable to research areas and sectors of choice, such as technology and the defence and security sector. Although technological forecasting was, even in its infancy, predicted to become as accepted and useful as economic and weather forecasts [6], forecasting methods can rightfully be criticized for their inescapable inaccuracy. For example, the failure to predict the 1973 "Oil Shock" led to considerable scepticism about the validity and utility of forecasting [7]. Today some military thinkers even claim it is futile to think that the use of scientific or military intelligence in force planning can avoid technological surprise on the battlefield. Instead, Finkel suggests that the remedy is to design a force flexible enough to recover quickly from the inevitable technological surprise when it happens [8]. He claims some of the necessary flexibility requires diversity and redundancy in military materiel, but he emphasizes the organizational requirements and, above all, the education and training of officers. However, because the purpose of forecasting methods is to help decision-makers evaluate the probability and significance of different technological developments, rather than to predict their precise form in specific applications at specific future dates [6], research into feasible methods is still ongoing.
The sequence of activities in the Swedish technology forecasting process typically includes activities one through eight. All activities could, but need not, be performed by one group in an organization. In activity four the sponsor, e.g. the Swedish Armed Forces (SwAF) via the Swedish Defence Materiel Administration (FMV), selects prioritized technologies and orders scientific reports. The preparation of scientific reports for the selected technologies corresponds to activities five and six. These reports are then used as input to MUAFT in the following step. The MUAFT method is further described in section three. The results of the MUAFT assessment, together with recommendations for future action for each technology, are delivered in a yearly Technology Forecast Report to the sponsor. The recommendations are formulated as to either invest in, monitor or disregard the specific technology. The sponsor is responsible for activity eight.
In the 1980s the technology forecasting coordinated by the Swedish Armed Forces (SwAF) produced trend reports for relevant areas of technology every four years or so. This work was completed in close collaboration with the Swedish Defence Materiel Administration (FMV) and the Swedish Defence Research Agency (FOI). However, because the resulting reports, which were extensive and classified, were rarely used in SwAF studies or for other development purposes, the last report in this format was issued in 2005. Coincidentally, at the end of the 1990s, military development in Europe was characterized by de-escalation, after the collapse of the Soviet Union. From the end of the 1990s and well into the first decade of the 21st century, SwAF long-term planning was heavily influenced by concepts such as Dominant Battlefield Awareness (DBA), Revolution in Military Affairs (RMA) and Net Centric Warfare (NCW). There was considerable focus on the technological aspects of the development of sensors and command and control systems [9]. Then, after the 9/11 attacks, and in keeping with SwAF involvement in the war against the Taliban in Afghanistan, the focus changed. There was more interest in technologies to support missions abroad, e.g. protection and power supply for soldiers' equipment [20]. At this time, the SwAF did not want to reintroduce the previous, ineffective and resource consuming, technology forecasting method. Therefore, a new approach to future technology trend analysis was developed. As a small state, with limited economic resources for technology forecasting, Sweden needed a process that was simple, cost-efficient and could support a comprehensive approach to military capability development [20]. A number of medium size nations used complex, costly and time consuming technology forecasting methods and had begun to change them [21,22]. The primary objective of the Swedish military technology forecast was to provide the SwAF with policy recommendations for defence R&D
appropriations. Due to budget cuts, the number of technologies that could be covered by the technology forecast had to be reduced. Therefore, the technologies to be assessed had to be rigorously prioritized (activity four in Fig. 1). Between 2005 and 2015, a group at the Swedish Defence Materiel Administration (FMV) decided the priorities and selected the scientific reports of future technologies to be analysed with the MUAFT method. Since 2016, the SwAF HQ has taken part in the selection process by approving the list of technologies before scientific reports are commissioned from research institutes such as the Fraunhofer Institute, Massachusetts Institute of Technology or the Swedish Defence Research Agency.
Furthermore, the SwAF wanted to increase the understanding of future technologies, specifically among armed forces personnel. Therefore, a new reporting format aimed at the general public was developed in 2009. The new publication used graphics and popular scientific explanations of future applications of technologies, which may otherwise have been difficult for decision-makers to grasp, and its popularity in the Swedish defence community increased [20].
Until 2011 policy recommendations (activity eight) were based directly on technology assessment reports issued by the Swedish Defence Research Agency, FOI, or partner research institutes. In 2012 the Swedish Defence University (SEDU) was asked to assess the potential military utility of selected future technologies; thus, activity seven in Fig. 1 was added to the sequence. The assessments focused on whether the technologies had the potential to be successfully integrated into military systems contributing to military capabilities, either by the SwAF or by a possible adversary. The rationale of the method was to avoid embarking on long, uncertain, often expensive, paths to develop new military materiel. Instead, the potential contribution of certain selected technologies to military capability should be explored. Previous recommendations, based purely on potential technical performance, were to be abandoned in the new Swedish technology forecast process [2,23,24]. From the outset, the SEDU assessments also included policy recommendations regarding proposals for SwAF studies, R&D investments or monitoring the development of technologies, i.e. input to activity eight in Fig. 1. The method evaluated in this article has been incrementally developed and is now called the Military Utility Assessment of Future Technologies (MUAFT) [24].
There are reasons to believe that the Swedish cost-efficient MUAFT method could be of interest to other small and medium size countries. However, does it provide utility? While traditional technology forecasting activities have been thoroughly discussed elsewhere, MUAFT has not. The method has been used for the better part of a decade and is now in need of scrutiny. The aim of this study is to analyse and assess the utility of the MUAFT method as part of the Swedish Armed Forces' long-term capability development process.
In section two we present the theory, i.e. our view of the central concepts and our theoretical framework, derived from the experiences of the American intelligence analyst Robert M. Clark [25,26]. In section three, we briefly introduce the MUAFT method. The structure thereafter is traditional. We describe the method for evaluation in section four. The analysis is presented in section five, followed by a discussion of the utility of the results and of the MUAFT method in section six. The paper ends with conclusions and recommendations for future research in section seven.

Military capability as a system
Our point of departure is a belief that the technologies chosen by the military, and their use, will affect outcomes on the battlefield and the sustainability of capabilities over time. Hence, military capability is central. Military capability is inherent in military units; it is what enables an actor to perform military tasks and thereby achieve the desired effects. In the literature, a common view of military capability is that of Fighting Power, comprising three interdependent components: the conceptual, the moral and the physical [24,25,26]. In this article, we have instead chosen a management view, developed within the concept of capability-based planning. It has proven useful in supporting the management of military capabilities under uncertainty [27,28]. In this context, military capability can be seen as being composed of Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities and Interoperability (DOTMLPFI), as in NATO publications [29]. Our point is that it is advantageous to view capability as a system. No single component delivers effect on its own, because it is dependent on other components. Consequently, the military utility of the technical component can be assessed only if it is viewed as a contributing element in a capability system.

A capability-centric view on military utility
This study builds on previous results from Andersson et al. [30] and takes the same view of military capability, and the importance of technology. They argue that it is essential for a decision-maker in the military domain to understand what constitutes the military utility of technology. The technology forecast is put forward as one of the typical decision situations. The generic question can be phrased: in which technologies should the armed forces of a small state invest their limited R&D funds to maximize military utility? The concept is described as follows: "Military Utility is a function of three situational variables: the Element of Interest (EoI), the Military Actor and the Context. The concept has three dimensions. The Military Effectiveness dimension is a measure of the overall ability to accomplish a mission when the EoI is used by representative personnel in the environment planned or expected for operational employment of the military force. The Military Suitability dimension is the degree to which an EoI can be satisfactorily placed in military use in a specified context with consideration to interaction with other elements of the capability system. The Affordability dimension is a measure of compliance to the maximum resources a military actor has allocated to the EoI in a time frame defined by the context." [30].
Thus, when assessing military utility, it is important to first specify what the EoI is (in our case the technology in a technical system), from which military actor's perspective it is assessed, and in which context. The outcome of the assessment will differ depending on how we consider these aspects: Is an aircraft assessed as a single weapon system or as part of an air force? Are the Swedish Armed Forces (SwAF) or the US Armed Forces doing the assessment? Is the aircraft assessed as part of a tactical reconnaissance mission in northern Europe or an airlift mission to Central Africa? Furthermore, military utility is a compound measure of three dimensions: Military Effectiveness, Military Suitability and Affordability. The element evaluated has to be militarily effective as a component in a capability system, it has to fit with all the other components of that capability system, and it has to be affordable to the military actor assessing it. For example, an American two-billion-dollar B-2 bomber aircraft would probably not be militarily suitable, or affordable, to the Swedish Air Force. Hence, its military utility would be limited regardless of potential military effectiveness.
When using the military utility concept for assessing technology there is a complication. Andersson et al. [30] point out that technology in itself cannot be viewed as a system element. An analyst using their concept to assess the military utility of a technology must first apply the technology of interest to a technical system. Therefore, in the MUAFT method, the technology of interest is applied to a conceptual military technical system before doing the assessment. See section 3.2 for details.

Clark's framework for military intelligence analysis
Building on the concepts above, in order to assess the military utility of a future technology it is necessary to be able to make a good prediction of how that technology will be used in future capabilities. We have chosen Clark's structuring of concepts and methods as our point of reference. In his books on estimation and prediction [25,26], Clark presents a practicing military intelligence analyst's view of making predictions in the science and technology domain. He argues that the framework introduced can be used to describe predictions for any target of interest, on any timescale. In our application, the MUAFT analyst strives to predict the state of military capability (the target) two decades into the future (the timescale), given a predicted technology development.
Fig. 2 illustrates the terminology used henceforth. Estimations of the present and past states of a target are based on the information available, taking into account its inevitable uncertainty. When estimating the present state of an interesting technology, an analyst has to work with a limited number of available articles and reports. These might be outdated or of limited value due to confidentiality. The uncertainty about a past state is smaller than that of the present, as one would expect. This is illustrated by different sized ellipses around the corresponding true states in the diagram (the dots). Assessments of the future states of a target are called predictions. In Clark's terminology these can be categorized into three types. Extrapolations are predictions of a future state, assuming that the present Forces for change (the arrows in Fig. 2) will continue to act on the target. Projections instead assume that the forces acting on the target will change. Finally, Forecasts assume that new forces will act on the target. However, the uncertainty of the target's true state will increase in any forecast when compared to an extrapolation.
Clark states that the first thing an analyst should do when making a projection or forecast is to identify the forces affecting the target [25,26]. He says you should at least make sure that you include assessments of those forces relevant to your specific problem. In his book for analysts in the science and technology domain, Clark argues that there are six major forces to consider: Organization, Programs, Technology, Capital, Market and Regulations [25]. He states:
• Organization analysis includes: intents, power distribution, effectiveness, human resources, decision-making processes, etc.
• Program analysis is used to make predictions based on knowledge about the development process of the military actor in focus.
• Technology analysis is about assessing a technology of interest seen as a force for change, including:
  o predicting the future performance of the technology while taking supporting technologies into account,
  o predicting the usage, transfer or dissemination of the technology, and
  o forecasting innovation or breakthroughs in the technology.
• Capital analysis addresses the question of how innovation is brought into production.
• Market analysis centres on the strength and nature of market forces.
• Finally, the analysis of regulations centres on the influence of government legislation.
In this study we argue, using Clark's terminology, that the input on technology developments provided by research institutes (see section one) is the result of a technology analysis. The scientific reports serve two purposes. Each report includes estimates of a new technology force found to be acting on military capabilities, and a prediction of the future of that same technology force. The scientific report brings with it certain assumptions that will steer the utility assessment performed using MUAFT. The MUAFT expert group must also agree on further assumptions in order to structure each specific technology assessment. The scenario and technology system constructed for each assessment becomes a result of a combination of these assumptions.
With that view, the task of forecasting requires an analyst to produce:
• extrapolated predictions of the future state of military capabilities of interest, assuming that the forces acting on their development do not change, and
• forecast predictions of the future state of military capabilities of interest, assuming that the new technology forces act on their development.
In both cases, a forecasting analyst has to take the non-technology forces acting on predicted states into account.A MUAFT analyst is then required to assess the military utility of technologies by relating forecasted states to extrapolated states, thereby considering the changing and new forces.

The Military Utility Assessment of Future Technologies method
In the following sections, we briefly present the MUAFT method that is evaluated later in the paper. It is more thoroughly described in Ref. [24], available online.

The context and constraints
The initial inspiration for MUAFT came from the Disruptive Technology Assessment Games, DTAG, developed by NATO and further developed by NORDEFCO [9,31]. The MUAFT and DTAG methods largely have the same purpose, but MUAFT is designed to be capability-centric and more cost-efficient. DTAG involves substantially more personnel in the preparation and execution of the war game, with structured data capture followed by extensive analysis, which makes it considerably more costly. An additional benefit is that the central seminar activity in MUAFT makes the method feasible to set up in a defence university environment. MUAFT requires that the group of experts performing the assessment is large enough to represent the perspectives required to assess a spectrum of military capabilities, but small enough to fit the budget. At a university, a staff of experts with different scientific perspectives is available and usually keen to participate in order to develop their knowledge. Likewise, creative students, military and civilian, can be involved by making the seminar part of the curriculum. This study therefore builds on the assumption that MUAFT costs less than DTAG. Consequently, if the study also shows that MUAFT is valid for its purpose, we can conclude that it is cost-efficient.
We would also like to make a special note clarifying that selecting the technologies to be assessed is not a task for MUAFT.The sponsor makes that selection earlier in the technology forecasting process.

Phase 1 preparation
Each input scientific report for a specific MUAFT technology assessment should include an estimate of the present state, and a prediction of the future state, of the technology in focus, within the timeframe set for the process. The technologies can then play the role of forces for change acting on military capabilities.
Each scientific report is assigned to the member of the expert group with the best match in expertise and interest for the technology in focus. He or she reviews the report and takes the role of an advocate, promoting the use of that technology. The scientific reports are basically held to be true, including the predicted Technology Readiness Level (TRL) of the technology. Otherwise, the task of the assessment team would slip into reviewing the work of the research institute, which would expend time and money.
On the basis of the scientific report, the advocate of the technology designs one or two conceptual technical systems exploiting the new technology, and places them in one or two credible future military scenarios. The systems and the scenarios are chosen in such a way that the benefits of the technology become evident. The scenarios indirectly define the military capabilities of interest.
Before the seminar, the advocate prepares and distributes a memo as a basis for the assessment of the military utility of the technology. For further details, see Ref. [24].

Phase 2 The seminar
At the seminar, the underlying scientific report describing the technology is briefly introduced. The advocate for the technology presents the idea of a technical system, its identified possibilities and constraints, and the assumptions that have been made. The suggested military use and the concept scenario are presented and then discussed in plenum. The advocate promotes the use of the new technology in the scenario. The other participants' role is primarily to criticize, based on the contents of the memo and on their individual knowledge and experience.

Fig. 2. Illustration of the terminology used. It has been developed from the original diagram presented by Clark (1996).
Next, the use of the technical system in the specified military scenario, including the use of the technology in focus, is analysed in four steps.
1. SWOT analysis. The purpose is to identify the Strengths, Weaknesses, Opportunities and Threats of using the conceptual technical system in the scenario, to be used as a basis for continued assessment. Risk assessment is part of the analysis of weaknesses and threats.
2. Assessment of capability impact. The purpose of the second step is, in essence, to create a basis for assessing the military effectiveness dimension of military utility [30]. The impact of the technology is assessed in terms of the Elements of Combat Power in Swedish doctrine: Effect, Mobility, Sustainability, Command and Control, Protection, and Intelligence and Information [32].
3. Assessment of footprint. The purpose of the third step is to create a basis for assessing the military suitability and affordability dimensions of military utility. "Footprint" thus denotes the influence on other components in the capability system.
4. Assessment of the need for military R&D. The purpose of the fourth step is to identify actions required by the SwAF R&D directorate, should they wish to facilitate the introduction of the technology into service.
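The record produced by the four seminar steps can be sketched as a simple data structure. This is our own illustration, not part of the published method; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field

# The six Elements of Combat Power from Swedish doctrine [32].
COMBAT_POWER_ELEMENTS = (
    "Effect", "Mobility", "Sustainability",
    "Command and Control", "Protection", "Intelligence and Information",
)

@dataclass
class SeminarAssessment:
    """Hypothetical record of one MUAFT seminar assessment."""
    technology: str
    # Step 1: SWOT analysis (risk assessment folded into weaknesses/threats).
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    threats: list = field(default_factory=list)
    # Step 2: capability impact, keyed by Element of Combat Power.
    capability_impact: dict = field(default_factory=dict)
    # Step 3: footprint on other components of the capability system.
    footprint: list = field(default_factory=list)
    # Step 4: R&D actions needed to bring the technology into service.
    rnd_needs: list = field(default_factory=list)
```

Using `default_factory` keeps each assessment's lists and dicts independent, so entries recorded for one technology never leak into another.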

Phase 3 conclusions about military utility and recommendations
When all the technologies provided by the sponsor have been assessed at seminars, a final seminar is held, where the expert group formulates a conclusion about the assessment of the military utility of each future technology in focus. Four statements are used and consensus is sought using a variant of the Delphi method [33]:
a. The military utility is considered significant if the technology is assessed to make significant contributions to military capabilities, or if it is considered potentially disruptive.
b. It is negligible if no increase in military effectiveness can be foreseen, or if either the cost or the mismatch in military suitability is anticipated to be too great.
c. It is moderate if it is neither significant nor negligible.
d. Finally, it is considered uncertain if the expert group as a whole has difficulties in deciding on any of the other value statements.
Using these four statements of military utility, the technologies are sorted into three categories of recommendation:
1. To make investments in R&D with the aim of exploiting technologies with potentially significant military utility, i.e. presumably having significant contributions to military capabilities,
2. To monitor the development of technologies with moderate or uncertain impact, or
3. Not to invest in technologies with negligible impact.
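The sorting of utility statements into recommendation categories amounts to a simple decision rule, which can be sketched as follows (the function name and return strings are our own shorthand, not MUAFT terminology):

```python
def recommend(utility: str) -> str:
    """Map a MUAFT military-utility statement to a recommendation category.

    Significant utility -> invest in R&D; moderate or uncertain utility ->
    monitor development; negligible utility -> do not invest.
    """
    utility = utility.lower()
    if utility == "significant":
        return "invest in R&D"
    if utility in ("moderate", "uncertain"):
        return "monitor development"
    if utility == "negligible":
        return "do not invest"
    raise ValueError(f"unknown utility statement: {utility!r}")
```

For example, `recommend("uncertain")` returns "monitor development", reflecting that uncertain and moderate assessments lead to the same recommendation.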

Choice of approach
The straightforward approach to evaluating methods for assessing the future military utility of technology forecasts is to assess success at or after the horizon time [34]. The time lag is the obvious disadvantage. Kott and Perconti analysed the accuracy of long-term forecasts of military technologies 20-30 years after the forecasts were made. They suggest an approach where the focus is on the Technology Readiness Level (TRL). They considered a technology forecast to be true if there is evidence of the technology reaching TRL 8, and to be less than true if the TRL is lower. With this approach, they managed to show that the average accuracy of a wide range of technology forecasts was actually quite high [35].
In this study, however, we evaluate a method that has not yet been used for the full duration of the forecast horizon. Fortunately, Clark's framework describing best practice in military intelligence analysis (see section 2.3) can be regarded as a theory describing the relationships between contemporary trends in society and likely future developments in military capability. Thus, the approach chosen was to perform a text analysis of the technology forecast reports resulting from the MUAFT assessments conducted so far, and to ask ourselves: Given this result, how does the MUAFT method comply with Clark's framework for military intelligence analysis in the science and technology domain?

Sources
Our units of analysis are the technology forecast reports on assessments of future military utility produced in the Swedish technology forecasting process, using MUAFT, from 2012 to 2018 [36-42]. They are considered primary sources, briefly describing the military utility assessment method and process applied and the military utility assessments of the technologies in focus each year, and specifying the participants in the expert groups doing the assessments.

Operationalization
The approach was operationalised by deriving four key criteria of a valid assessment from Clark's framework and combining them with a military capability centric view of military utility (see section two). A fifth criterion was introduced to include scrutiny of how useful the resulting assessments are to the sponsor in the continuing process. The resulting five key criteria for a valid assessment of the military utility of a future technology were:
1. Predictions are based on estimations of past and present states of military capability.
2. The technology in focus is introduced in the form of estimations and predictions of a force for change acting on the military capability.
3. Estimations and predictions of forces for change, other than technological forces, are also included.
4. The assessment is based on an evaluation of the predicted future state of military capability.
5. The assessment, with its uncertainty, is presented in a format useful as a basis for decision.
During the review of the sources, using text analysis, each key criterion was used to collect statements from the technology forecast reports consistent with that criterion. These were then documented in five corresponding protocols. The analysis presented in this text includes a discussion of the extent to which that evidence satisfies the respective criterion.
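The bookkeeping behind the five protocols can be sketched as follows. This is a hypothetical illustration of the review procedure, not the authors' actual tooling; the abbreviated criterion texts and the helper name are ours, while the example entry quotes a 2012 statement on electromagnetic guns cited later in the analysis:

```python
from collections import defaultdict

# Abbreviated paraphrases of the five key criteria.
CRITERIA = {
    1: "Predictions based on estimations of past and present capability states",
    2: "Technology introduced as an estimated and predicted force for change",
    3: "Non-technological forces for change also included",
    4: "Assessment evaluates the predicted future state of capability",
    5: "Assessment presented, with uncertainty, in a decision-ready format",
}

# One protocol per criterion; each entry is (technology, year, statement).
protocols = defaultdict(list)

def record(criterion: int, technology: str, year: int, statement: str) -> None:
    """File a statement from a forecast report under the criterion it supports."""
    if criterion not in CRITERIA:
        raise ValueError(f"unknown criterion: {criterion}")
    protocols[criterion].append((technology, year, statement))

# Example entry, quoting the 2012 technology forecast report:
record(1, "Electromagnetic guns", 2012,
       "The first attempts to make electromagnetic guns were made "
       "around the turn of the 20th century.")
```

Grouping entries per criterion makes it straightforward to count, per criterion, how many technologies provided supporting evidence.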

An overview of MUAFT results
Since the introduction of MUAFT in 2012, forty-one (41) different technologies have been assessed. In Table 1 each technology is presented with the expert group's conclusion regarding its potential military utility. Note that the "Moderate" assessment was introduced in 2016 in order not to mix these technologies up with those of uncertain military utility [24].
Nineteen technologies were found to have potential for significant military utility, eleven were uncertain, four negligible and six had moderate military utility. One technology, "Temporal Analytics", a tool used to analyse future cyber threats, was difficult to grade due to a deviation in the format of the input scientific report.
We find it interesting to note that, although on average only a handful of technologies were assessed each year, quite a number of technologies have been assessed over a period of seven years. It is also interesting to note that half of the technologies in total, and all the technologies in 2015, were assessed to have potential for significant military utility. No doubt, this can be explained by the selection process, where the input technologies were chosen from a list of promising technologies. One difficulty for the validity of the method is the eleven technologies assessed to have uncertain military utility. This is studied in greater detail in section 5.6.
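As a quick arithmetic check, the distribution of assessment outcomes reported above can be tallied to confirm that all forty-one technologies are accounted for (the category labels are our own shorthand):

```python
# Assessment outcomes 2012-2018, as reported above.
outcomes = {
    "significant": 19,
    "uncertain": 11,
    "moderate": 6,
    "negligible": 4,
    "ungraded": 1,  # "Temporal Analytics", whose input report deviated in format
}

total = sum(outcomes.values())
assert total == 41  # matches the forty-one technologies assessed

# Roughly half of the technologies were assessed as potentially significant.
share_significant = outcomes["significant"] / total  # 19/41, about 46%
```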

Estimations of past and present states of military capability
According to Clark, accuracy in the prediction of a future state requires good estimates of past and present states (see section 2.3). Therefore, we have analysed the extent to which the MUAFT method meets these requirements.
There are no systematic steps in the MUAFT method to account for the estimates of past and present states. However, our analysis of the technology forecast reports from 2012 to 2018 shows that there are examples of statements of past and present states under other headings. Past states of technologies were discussed eight times in assessments of six different technologies, e.g. "The first attempts to make electromagnetic guns were made around the turn of the 20th century. Still, a hundred years later, there are needs for further development before they can be used in the field." (Electromagnetic guns, 2012) or "Research on air-breathing propulsion systems for enabling such speeds [faster than Mach 5], hypersonic propulsion, has been ongoing for decades." (Hypersonic Propulsion, 2018).
More often, we find descriptions of present states. Our review of the technology forecast reports shows that there are 56 statements about present states in assessments of 34 technologies, e.g. "The drivers of combat vehicles do not have sufficient view in all directions, especially not in the rear direction … Today the driver of fighting vehicles has to raise his (or her) head through the manhole in order to get a better overview of the situation." (Augmented Reality, 2012). The scientific reports in most cases include TRL levels for the present state of the technology in focus; however, there is no evaluation of the present status with regard to capabilities.
In summary, the analysis shows that, where descriptions of past states occur, they are mainly used to indicate for how long there has been an interest in the technology. This period of time can, in turn, be seen as an indicator of the technology's feasibility and development rate. Descriptions of present states instead include statements about current applications and the shortcomings of particular technologies. Correspondingly, these can be used as indicators of the direction of development.

Estimations and predictions of the technology in focus as a force for change
As input, MUAFT requires a feasible description of a technology as a force for change. According to Clark [26], force analysis includes an assessment of how a specific technology has affected the development of a capability up to the present day, how the technology is changing, in what direction, how rapidly, and whether there are any new technologies coming into play. The MUAFT method presupposes that the scientific reports from research institutes fulfil the requirements of Clark's force analysis.
The analysis shows that for three quarters of the technologies assessed, the scientific reports came from the Fraunhofer Institute. They all share essentially the same six-section format. The first section describes the general idea of the technology and its potential benefits in support of military technology systems. The second section presents the potential benefits of the performance delivered, and a list of supported military functions, e.g. cognitive radar and surveillance. Technology readiness level (TRL) assessments twenty years into the future are also included. In MUAFT assessments, the TRL levels presented in the scientific reports are taken as true. The third section presents a survey of institutions doing research and development. The fourth section presents the technical and market forces for development. The last two sections usually present perceived restraining factors, such as technical challenges and regulations. On three occasions, out of forty-one, the expert group judged the scientific report to be inappropriate for MUAFT assessment.
In summary, our analysis shows that most scientific reports provide suitable descriptions of technology as a force for change. The direction of technology development, the speed of change, and the forecasted impact of technology on capability development are all well covered in a typical scientific report. However, the reports often provide only rudimentary descriptions of the impact of present technology on the capability in focus. Where such descriptions do occur, the aim is often to contrast a current design with a forecasted design. The forecasting and analysis technology described in the scientific report "Future of Cyber Threats" was considered to have potential for significant military utility, if the tool were combined with advanced artificial intelligence algorithms. In its present form, it was assessed to have uncertain military utility.

Estimations and predictions of other forces for change
Technology is not the only force for change acting on military capabilities, as pointed out in section 2.3. The analysis indicates that all other key forces, organization, development program process, capital, the market, and government legislation, are taken into account implicitly at some point in the assessment process. However, the extent to which this happens varies depending on the technology in focus.
The most common force for change mentioned in the statements, besides technology, is organization, as defined in section 2.3. The analysis shows that these statements are often used in assumptions supporting the credibility of scenarios, or in assessments of strengths and opportunities. One characteristic statement is: "Fighter pilots are expensive and their training is time consuming. The suggested mix of manned and unmanned platforms would allow for fewer pilots in the organization while maintaining the same operational capability." (Unmanned Combat Aerial Vehicle, 2014). However, most statements about organization arise from the DOTMLPFI influence factor analysis, i.e. the "assessment of footprint". In our view, "M", for Materiel, should be translated to technology, while all the other factors translate to organizational effects. Consequently, these statements should be understood as organizational forces for change.
Statements in the assessments regarding military development programs are used to predict constraints on development, e.g. "Military systems have a much longer product cycle compared to civilian systems and adaption in procurement programs to third generation fuels will have to start 2020+." (Alternative fuels, 2013). Similarly, statements regarding capital seem to concern only issues of limited funding. Both forces add complexity to the development process.
Statements related to the market or government legislation are used both ways. Sometimes they indicate limiting effects, e.g. when the civilian market is assessed to have no interest in developing the technology in focus (Alternative fuels, 2013), or when it is anticipated that authorities will regulate its use: "It is important to monitor the legal development regarding unmanned and automated violence since legislations can be a deal-breaker for the military utility of UCAV" (Unmanned Combat Air Vehicles, 2014). In other scenarios, statements related to the market or the absence of regulation are presented as drivers. Characteristic examples are "development in this area is commercially driven, bringing down acquisition costs." (Nano Air Vehicles, 2012) and "The operation does not violate the territorial integrity of any state and hence it does not require a mandate from UN or EU" (Small satellites, 2012).
In summary, we find that past MUAFT assessments have given little explicit support for incorporating the influence of forces for change other than technological and, perhaps, organizational. However, the method allows all key forces to be incorporated in the process. Whether they are incorporated or not depends on the breadth and competence of the expert group, and on the significance of each key force in the chosen scenarios. Despite the obvious need for a broadly composed expert group, the MUAFT method does not have an explicit process for its composition.

Predictions of the future state of the military capability
Errors in the estimates of past and present states, combined with errors in the assessments of forces for change, add to the uncertainty of predictions of the future state of military capabilities. Another aspect is whether the focus is on the most relevant military capabilities when designing the conceptual technical system and the future military scenario. The method only requires an expert advocate to develop one or two scenarios that make the benefits of the technology evident. Nevertheless, we conclude that the method is valid, even though a MUAFT analyst is not required to find the most relevant scenario. The logic is that if one relevant scenario can easily be found, there are probably several; conversely, if no convincing scenario can be found, the military utility of the technology is probably questionable.
In summary, our evaluation of the MUAFT method showed that in three quarters of the assessments reported, it was possible to make a conclusive judgement (significant, moderate or negligible) of the potential military utility of the technology in focus. Thus, we can conclude that the states of military capability predicted in these cases were found to be relevant and credible. In the remaining quarter of the assessments, the statement was inconclusive, i.e. uncertain. We provide more details on the latter cases in the next section.

Assessments of future military utility as a basis for decisions
Ultimately, the important question is whether the assessment is useful to the decision-maker, and whether any uncertainty is acceptable.
The MUAFT core mechanism for dealing with the aggregated uncertainty is to present qualitative judgements of military utility in four classes: significant, moderate, negligible or uncertain, while using a version of the Delphi method to avoid biases in the group assessment. Our analysis shows that significant military utility translates to a recommendation to invest in R&D, moderate military utility translates to a recommendation to monitor the development, and negligible military utility typically translates to a recommendation not to invest, at least not at the present time. Thirty assessments, out of forty-one, fall into these categories; see Table 1.
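As a sketch, the translation from utility class to typical recommendation described above can be expressed as a simple lookup (the function name and exact wording are our own illustration, not part of the MUAFT specification; the "uncertain" entry reflects the watch-list option returned to in the Discussion):

```python
def recommend(utility_class: str) -> str:
    """Map a MUAFT military utility class to the typical recommendation
    reported in the 2012-2018 assessments (illustrative wording only)."""
    recommendations = {
        "significant": "invest in R&D",
        "moderate": "monitor the development",
        "negligible": "do not invest, at least not at the present time",
        # "Uncertain" is treated as a signal to reassess in a few years.
        "uncertain": "put on a watch list and reassess",
    }
    return recommendations[utility_class]

print(recommend("significant"))  # invest in R&D
```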
However, as touched upon earlier, eleven MUAFT assessments resulted in "uncertain" military utility. A detailed analysis of the technology forecast reports shows that there are two recurrent reasons for such an assessment. First, in three occurrences there were other competing technologies. Second, in five occurrences the capability was assessed to be unrealizable within the timeframe of the forecast. The remaining three occurrences were singular events. In one, the expert group was genuinely uncertain about whether it would ever be possible to realize the system concept assessed (A metamaterial-cloaked submarine, 2012). In another, the uncertainty in the cost assessment was found to be too great. Last, in one occurrence, the scientific report was clearly unsuitable for MUAFT assessment (Temporal Analytics, 2013), as mentioned above. We find that the first two types of uncertain military utility can be considered legitimate results, because they translate readily into continued monitoring of the corresponding technologies. However, the last three occurrences are arguably cases of poor execution. If the advocate expert was unable to find a relevant system concept for the technology, the assessment should have resulted in negligible military utility.

Discussion
The Military Utility Assessment of Future Technologies method (MUAFT) was developed as a low-cost alternative to methods such as NATO's Disruptive Technology Assessment Games, to be used as part of the long-term capability development process in the Swedish Armed Forces. The question addressed in this study is whether MUAFT can be considered to have validity in its context, and thus be potentially useful to other small to medium size states. The analysis was based on an operationalization of Clark's established pattern for science and technology intelligence analysis [25], combined with a military capability centric view of military utility [30]. MUAFT reports from 2012 to 2018 were reviewed in terms of how they satisfy five key criteria.
In summary, the analysis indicates that MUAFT is useful for assessing the military utility of future technologies, if used by a well-composed expert group aware of its inherent limitations. This generalization of results is elaborated upon in the following discussion.
Consistency with the first criterion requires the method to produce predictions based on good estimates of the past and present states of military capability, including supporting technologies. The analysis (5.2) shows that there are no systematic steps in the method to satisfy this criterion. However, some well-documented assessments nonetheless provide a basis for estimates of the technology's feasibility and development rate, and of the direction of development. A reasonable conclusion is that if the analysts are aware of the need for estimates of past and present states, the method has the potential to provide them. One remaining question is how to make predictions when a technology is disruptive, i.e. in cases when the capability in focus is a military innovation made possible by the new technology, one that significantly alters warfighting. We argue that the use of MUAFT is straightforward, because a forecasted capability is described simply in the form of a mission scenario, and there are no formal connections to past or present defined capabilities. Of course, in the absence of any known doctrinal or tactical constraints, the uncertainty increases and the accuracy of the forecast decreases.
The second criterion requires a feasible description of the technology in focus as a force for change on military capability. The analysis (5.3) shows that the scientific reports largely fulfil the requirements of Clark's force analysis. However, in order to reduce assessment uncertainty, the results indicate a need to improve the estimates of the impact of present technology on capability, because neither aspect of the second criterion is normative in MUAFT. Consequently, the accuracy of military utility assessments could be improved by including people in the expert group with knowledge about the use of corresponding current technology in the problem domain, and by making group members aware of the method's limitations. However, the analysis also shows that in some cases the input reports were not feasible descriptions of the technology as a force for change. In such cases, our conclusion is that a MUAFT assessment is a waste of time. Our argument is that those cases are either the result of an unsuccessful selection process by the sponsor, or of an unfortunate interpretation of the task by the analyst management.
The third criterion requires MUAFT to take forces for change other than technology into consideration. The analysis (5.4) shows that these other forces, as categorized by Clark, are all at least implicitly included in the assessment at some point in the process. The organizational force for change is systematically covered during the DOTMLPFI influence factor analysis, while development program process, capital, government legislation, and the market are incorporated if the expert group finds them relevant in relation to the chosen scenario. According to Clark, omissions are legitimate if they are conscious. We also find it important to note that the doctrinal aspects (D) of organization influence the analysis in quite another way. By examining the list of technologies assessed over the years, it becomes evident that doctrinal changes affect their selection. At the beginning of the decade studied, there was an emphasis on technologies suitable for soldier-centric applications, in accordance with the asymmetric expeditionary operations at the time. Towards the end of the decade, as states in northern and eastern Europe once again emphasized national defence, we see a shift towards technologies useful in complex technical systems at the tactical system level or above, such as artificial intelligence, deep learning, and post-quantum cryptography. However, the military sponsor, the SwAF, issued no directives as to whether the analysis should relate to capabilities in one operational context or another. The MUAFT method is not affected, but such directives would no doubt increase the accuracy of future analysis, by focusing each individual assessment on relevant conceptual systems and scenarios.
The fourth criterion requires the military utility assessment to be based on a future state of military capability, predicted with a reasonable sense of uncertainty. The analysis (5.5) concludes that there are supporting mechanisms. First, capabilities are described indirectly in the form of future military mission scenarios, chosen to be credible to the expert group and to highlight evident benefits. This is feasible because the focus is on the military utility of the technology, not the forecasting of a specific capability. Choosing one or two credible scenarios is arguably enough to establish the potential of the technology. Second, the scale used to measure military utility is designed to indicate a clear distinction between assessments. The experts have to agree on whether there is significant military utility, only moderate military utility, or negligible utility. The logic is that the resolution should be just good enough to support a useful recommendation to the sponsor. If the expert group cannot agree, the military utility is consequently uncertain, usually because of a lack of information, competing technologies, or a technology readiness level that is too low. We argue that this is not a failure of the MUAFT method, but a signal to the sponsor to put the technology on a watch list and to assess it again in a few years' time. We find this to be a reasonable option, based on the pre-condition that MUAFT assessments are low-cost, thereby supporting successive assessments. We would also like to acknowledge that the selection of technologies is itself an interesting process, which deserves attention. However, because it does not affect the validity of the MUAFT method, it is not included in this study.
The fifth and final criterion used in the study requires the assessment, with its inherent uncertainty, to be presented in a format that is useful as a basis for decisions. The analysis (5.6) shows that significant, moderate or negligible military utility typically translates to a recommendation to invest in R&D, to monitor the development, or not to invest, respectively. But is that useful to the sponsor? Interviews indicate that the technology forecasts have, to some extent, been used as a basis for long-term strategic planning of capability development. However, what the sponsor really wants is for MUAFT analysts to transfer their knowledge by actively taking part in the planning. According to Kott and Perconti [35], specific predictions, such as those resulting from the "RAND Method", are perhaps more useful to a military actor. Kott and Perconti illustrate this with predictions such as "electro-magnetic guns will operate on an aerial gunship" or "UAVs will roam at high altitudes for days". Very high accuracy is reported. On the other hand, this is not surprising, since the predictions do not seem to be very bold. In summary, we find that our study is inconclusive in this fifth respect. The MUAFT method results in an assessment of whether a technology has military utility, when compared to current or competing technologies, but not in predictions of specific capabilities.
Arguably, the accuracy of the method can only be judged at a time beyond the first forecast horizon. However, given that the MUAFT method is valid, if properly applied, what does the study indicate that we should do to make the best use of it? The point of departure was viewing technology as a component of a technical system, which in turn is a component of a military capability, interacting with an organization. Thus, to assess the military utility of technology from this viewpoint, the assessment team must have considerable knowledge of the military actor for whom the assessment is being completed, and some sense of the context in which the capability is likely to be used. The utility of a technology is probably different for actor A than it is for actor B, because their requirements and objectives will probably differ. The analysis of technology forecast reports shows that it is important to include knowledge from a broad spectrum of societal trends in the development of military capability, not only the technological. Unfortunately, the MUAFT method gives little explicit support for these other perspectives, which is why the experts themselves must ensure that they are integrated into the assessment process. To conclude, the authors see two main paths for future development. First, the inherent limitations of the MUAFT method should be addressed in a separate study, taking the requirement for cost-efficiency into account. Second, when using MUAFT, the expert group gathered for the assessment seminars should incorporate a broad spectrum of societal perspectives on military capability, not only military and technical expertise. We argue that a defence university is a good setting, because many scientific and academic perspectives are represented, and because of the potential for achieving cost-efficiency by emphasizing the role the assessment seminars can serve as educational activities. Finally, if the military sponsor wishes to focus the assessments on specific operational or tactical capabilities, input to the MUAFT method could easily be limited in terms of the type of conceptual technical systems or military scenarios that are of interest.

Conclusions
Armed forces need to follow technology developments closely and assess how they can affect future capabilities and threats. They also need to understand which military capabilities are needed in order to counter the threats from new technology. The objective of this paper is therefore to evaluate the validity of a cost-efficient method for military utility assessment of future technologies, developed and used by the authors. The study shows, by analysing reported output using an acknowledged framework for science and technology intelligence, that the Military Utility Assessment of Future Technologies (MUAFT) method, if properly applied, offers a cost-efficient solution; it makes efficient use of technology forecasts from commissioned scientific reports and requires a relatively small group of scientific and military experts.
However, the composition of the expert group is important for validity, as it serves as the synthesizing bridge between technology forecasts and military utility assessments. Comprehensive expertise is required: in various military technology specialisations, in the sponsor's military capabilities, and in the subjects necessary to master in order to assess other influential societal forces for change.
One important distinction to make, when compared to other forecasting methods, is that the method does not produce predictions of specific capabilities; it is limited to assessing the potential future military utility of a technology of interest for a specific sponsor.
The cost-efficient approach allows for re-evaluation of any particular technology as it develops, which is especially useful if the technology was initially assessed to have uncertain military utility.
The study makes two recommendations for future research. Firstly, the inherent limitations of the MUAFT method, that is, the need to explicitly address all forces for change and the need for a wide representation of competences in the expert group, should be addressed in a separate study, taking the prerequisite for cost-efficiency into account.
Secondly, the process of selecting technologies for MUAFT assessment, within the overall technology forecasting process, is important for the outcome of long-term capability planning, but has not been addressed in this study.
To summarize, this study shows that the method for Military Utility Assessment of Future Technologies has utility, if used by a well-composed expert group aware of the limitations of the method.

Table 1
Future technologies and their assessed military utility for 2012-2018.