How can technology assessment tools support sustainable innovation? A systematic literature review and synthesis

Sustainability considerations are increasingly important for manufacturing companies seeking to develop products that meet the needs of society and the environment. The way technologies are assessed in the early design stages plays a crucial role in the integration of sustainability into innovation activities – a necessary step towards the development of products and processes with better environmental and social performance. However, existing sustainability assessment tools are difficult to deploy in the highly uncertain and data-scarce front-end of innovation. To establish what makes technology assessment methods effective, a systematic literature review was conducted to systematize best practices in technology assessment and derive a set of design propositions for improving early-stage sustainability assessment. Subsequently, recommendations for designing and effectively implementing sustainability assessment tools in technology development were elicited. Several avenues for future research are proposed, including the testing and refinement of the design propositions and the operationalization of early-stage sustainability assessment.


Introduction
Technology development is a critical phase of the innovation process in manufacturing companies (Gaubinger and Rabl, 2014) that can significantly impact a product's sustainability performance across its life cycle (Chebaeva et al., 2021). Technology development projects are foundational to a company's portfolio (Cooper, 2006), giving rise to multiple potential commercial offerings through new product capabilities and functions, new concepts and architectures, or novel production processes. Therefore, technology research and development (R&D) projects, as early design activities, have a disproportionate impact on the sustainability performance of a company's business (McAloone and Pigosso, 2018). Decisions and assessments made in technology development have a major influence on the future environmental, economic, and social impacts of technologies, processes, and products (Fisher and Rip, 2013).
Technology assessment (TA) is an essential part of technology development activities. TA is a systematic approach to evaluate the technical feasibility, economic viability, and societal impacts of new technologies and innovations (Rip, 2015). Companies often need to make decisions about which innovation projects to engage in, even when little information is available (Mitchell et al., 2022). Thus, technology assessment methods have been widely applied in manufacturing companies to inform decision-making and optimize R&D investments (Rip, 2015; Tran and Daim, 2008). TA can be applied in various stages of technology development, or even repeatedly in the same project, depending on how the design process is structured (Aristodemou et al., 2019). In addition, TA can be useful in opportunity scoping, idea selection, and project planning, and can be applied iteratively as the technology concept is further developed (Gaubinger and Rabl, 2014).
As sustainability has become an increasingly important consideration in innovation activities, manufacturing companies aim to integrate sustainability into their technology assessment methods to address potential future environmental and social impacts (Farrukh and Holgado, 2020). Sustainability assessment (SA) is an umbrella term for the set of appraisal methods, often complex and multidisciplinary, that seek to support decision-makers in choosing which actions to take towards a more sustainable society (Sala et al., 2015). Several SA methods exist and are routinely used by manufacturing companies to evaluate the sustainability performance of products; examples include life cycle assessment (McAloone and Pigosso, 2018) and ecodesign tools (Pigosso et al., 2015).
While there are similarities between TA and (product-focused) SA methods, there are also significant differences (Chebaeva et al., 2021).
Four challenges for applying SA in early-stage technology development and innovation have been identified (Parolin et al., 2023), namely: (i) lack of data regarding low-maturity technologies; (ii) breadth of scope, as projects can vary greatly in goals and maturity; (iii) context uncertainty, given that a technology's degree of sustainability is highly dependent on its application and future socio-technical factors; and (iv) competing interpretations of the concept of sustainability at the operational level. For these reasons, conventional SA methods aimed at product appraisal can often fail when applied to early-stage innovation projects.
Therefore, in this paper, we review TA methods and subsequently synthesize actions that manufacturing companies can adopt to improve their SA tools for innovation, ultimately enhancing the sustainability performance of their technologies, products, and operations. We adopt a Design Science (DS) approach (Romme and Holmström, 2023), due to its relevance to the development of tools that bridge the gap between theory and practice (Schutselaars et al., 2023). Via a systematic literature review, existing TA methods are categorized and analyzed, and best practices are synthesized as design propositions (Romme and Dimov, 2021). With a problem-focused methodology (Romme and Holmström, 2023), actions and possible mechanisms are proposed for practitioners to apply when performing sustainability-driven TA and for scholars to explore when developing new SA tools for early-stage projects.
In the following section, the systematic literature review methodology is described, including study selection and analysis. Next, the findings are presented, which highlight characteristics of existing technology assessment methods and their relation to sustainability assessment. Based on these findings, a set of design propositions to guide SA application in technology development is highlighted. In the discussion section, possible mechanisms for the design propositions are tentatively explored and the findings are contextualized within the existing literature on SA. Finally, the paper is concluded by summarizing the key contributions and discussing avenues for future research.

Methodology
This study consists of a systematic literature review (de Almeida Biolchini et al., 2007) with the goal of collecting and analyzing existing methods for conducting technology assessment in an industrial context. Following a DS-informed methodology, the research was synthesized into design principles (Denyer et al., 2008) for technology assessment tools in practice. DS has been proposed as a useful approach for linking theory and practice when addressing multifaceted challenges (Schutselaars et al., 2023), including the development of tools for sustainability-related issues (Romme and Holmström, 2023). The ensuing subsections describe the review and synthesis process, following the PRISMA framework (Page et al., 2021). The complete search strategy and literature review protocol can be accessed in the supplementary material.

Literature search
The search was conducted in the Scopus and Web of Science databases. Both databases have a broad indexing of technical and socio-technical literature in the fields of sustainable design, management, and innovation. The search strategy was built around relevant keywords based on the aim of the review: technology development, assessment, tool, and innovation. Search strings were developed for each keyword, grouping related concepts and synonyms with "OR" logic connectors (Table 1). Finally, the four strings were combined into one with "AND" connectors. Since the goal was to achieve a broad mapping of technology assessment methods, the search strategy was not restricted to sustainability-related approaches or to specific publication dates.

Study selection and data collection
The initial pool of studies was screened for duplicates and for relevance. Included studies presented methods, tools, indicators, or any approaches designed to assess or evaluate technologies during early development activities in industrial settings. Articles from other fields, such as medicine or pharmacy (where health technology assessments are often conducted), were excluded. Studies focused on the public sector and policymaking were excluded, as the review only related to industrial innovation. Simple viability studies in the chemical or process industry were also excluded, since they cannot be easily transposed to other industrial contexts. Life cycle assessment papers which did not stray from their traditional process or did not implement any new methodological feature were excluded due to their retrospective nature, which is unsuitable for early development activities (Villares et al., 2017). Additionally, when a selected study was primarily review-based, other articles from its references were analyzed for inclusion (also referred to as snowballing). The selected articles were read in full, and data on the TA and SA methods and tools presented in the studies were extracted and logged.

Method analysis
To characterize the selected TA methods, all authors analyzed the extracted data to look for commonalities and differences among (categories of) methods. Special focus was placed on understanding the gaps in sustainability-related methods compared to other technology assessment tools. All data analysis was performed using Microsoft Excel. Each method or tool (henceforth referred to only as method) was analyzed according to the parameter or virtue (e.g., cost, quality, efficiency) being assessed, building on top of Olesen's universal virtues framework for products (Olesen, 1992). Additionally, methods were characterized according to industrial sector; type of technology (aimed at product or process); stage of the development process; data requirements; time requirements; type of intended user (expert or novice); presence or absence of sustainability concerns; presence of scenarios; number and type of indicators used; presence of multi-criteria analysis; type of implementation (software or workshop); and presence of uncertainty considerations. Findings are shown in section 3.
TA methods were examined following a synthesis approach in line with DS methodology (Denyer et al., 2008). The methods were surveyed for practices that could guide the implementation of TA in industry. These best practices are also called design principles or design propositions, and they can be used to direct the development and deployment of SA in technology development and other innovation activities. Design propositions are common artifacts resulting from DS-informed literature reviews (Bhatnagar et al., 2022), which aim to connect science and design, or retrospective and prospective knowledge (Romme and Dimov, 2021). They are often phrased in a Context-Agency-Mechanism-Outcome (CAMO) or Context-Intervention-Mechanism-Outcome (CIMO) logic format, acting both as descriptive-explanatory and prescriptive-normative statements (Romme and Dimov, 2021). In this article, the term design propositions is used to clearly differentiate them from design principles, since they are only initial proposals which have not been tested.
The DS-informed literature synthesis approach employed in this study is illustrated in Fig. 1. To extract design propositions from the literature review findings (indicated by number 1 in the figure), methods which were tested in practice (number 2) were assumed to be more effective than untested ones (3). A method was considered tested in practice when the study included at least one industrial case or real application, as opposed to only hypothetical cases or simulations. Bearing in mind this assumption, it is then possible to define which characteristics are disproportionately present in effective technology assessment methods (i.e., best practices, number 4). These characteristics are then formulated as the Context and Action/Intervention of a design proposition (5), presented in section 4.
The Mechanisms and Outcomes related to these Actions were often not present or not clearly stated in the original studies. Therefore, tentative explanations for the Actions are proposed and discussed in section 5, substantiated by management and innovation studies literature (not originally included in the scope of the systematic review). Via abductive reasoning, these sources are used to hypothesize possible mechanisms that may explain why certain actions lead to more effective TA methods.

Findings
The initial search in the databases resulted in 1984 unique studies, out of which 168 were included in the review. See the supplementary material for a complete description and flow diagram of the search and screening process. The 168 papers were analyzed in depth, and 170 technology assessment methods for industry were identified. This section presents the results of the analysis by describing characteristics of the methods, commonalities and differences between them, as well as their relationship to sustainability. Methods are identified by an alphanumerical code with the format [M000]. The mapping of method code to source (study) is available in the Appendix.

Summary of technology assessment methods
The majority of the 170 analyzed methods deal with technologies for product innovation (n = 67) or process innovation (n = 72), while only one is aimed specifically at digital technologies. Some authors claim their methods can be used for any type of technology (n = 22), and a few studies have an indeterminate application (n = 8). The assessments span a wide range of complexity, including complex mathematical simulations of fuel synthesis.
Methods were also sorted according to the main industrial sector where they can be applied, based on the classification from the European Union's Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (European Commission, n.d.). Most product-related technologies are applicable to Mechanical Engineering industries (n = 21), followed by Electric and Electronic (n = 13) and Automotive (n = 12). Process-related technologies are predominantly useful for Chemical (n = 23), followed by Mechanical Engineering (n = 22). The breakdown of the type of technology contemplated by assessment methods in each industrial sector is shown in Fig. 2. Moreover, methods were categorized according to the virtues (Olesen, 1992) or parameters that they can assess in a technology (Fig. 3). Most methods included business metrics, such as financial measures.
Fig. 1. Literature synthesis approach: extracting design principles from a systematic literature review.
In addition to virtues, TA methods can also be analyzed according to the development stage they can be applied in and the type of data they require (Fig. 4). TA methods were described according to Cooper's stage classification (Cooper, 2006), namely project scoping and technical assessment, detailed investigation, and business case. The methods were classified according to the earliest stage they could be applied to (i.e., a method categorized in "technical assessment" could also be applied in "detailed investigation"). Data requirements range from estimates (i.e., by experts in a workshop setting) to proxy data (extracted from similar or related technologies, such as a previous version or the currently used process), and finally real data (from experiments or simulations).
Findings show that most methods are applicable in scoping (i.e., portfolio management) and technical assessments, while few are relevant in the business case stage. This is especially salient for sustainability-specific TA methods, which are nonexistent in the business case stage. Additionally, data requirements change significantly according to the technology development stage. Results indicate that technology development is characterized by progressive data availability, as are other design and engineering activities (Shishank and Dekkers, 2013). More than half of the methods aimed at later technology development stages require real data for the assessment (56% of business case stage methods), while in initial stages more than half of the methods can operate with estimates (61% of project scoping or technical assessment methods).
In summary, the average TA method is focused on product innovation, measures cost-related impacts, and is applied in the early scoping stages (i.e., portfolio management). The inclusion of sustainability concerns is not uncommon, but social impacts are rarely present. Finally, very few TA methods consider trade-offs between virtues, which can undermine their practical impact on decision-making.

Clusters of technology assessment methods
TA methods were clustered into 9 families (Fig. 5), according to their school of thought and main theoretical constructs. The classification considered the criteria used for the assessment (prescribed set of indicators versus discretionary choice) and degree of structure (a cluster is structured if the same steps or procedures are applied for all methods in it, while methods in unstructured clusters do not necessarily share the same guidelines). Although a clear-cut categorization was aimed for, the final clusters are arbitrary to an extent, and there are unavoidable overlaps and connections between clusters. They are described and exemplified below. The examples shown do not represent an exhaustive list of all pertinent methods; refer to the supplementary material for the complete list of methods in each cluster.
• Multi-criteria decision analysis (MCDA). The most populated cluster, with 40 instances, combines all methods which employ primarily MCDA approaches to technology assessment. MCDA is an umbrella term for decision support methods where multiple criteria or factors are taken into account to select one alternative from a set of possibilities (Belton and Stewart, 2002).
The assessment methods in the clusters were classified according to their aim and type of measurement (Fig. 6). A method's aim could be either: (i) the diagnosis of a technology, where aspects of the technology are explored and improvement opportunities are established; or (ii) finding the most appropriate alternative among a set of technologies or concepts. Type of measurement refers to the output of the assessment, which could be either absolute (i.e., "is this technology good or bad?") or relative (i.e., "is this technology better or worse than another alternative?"). By placing the method clusters along these two dimensions, it is possible to visualize that most methods (LCA, MCDA, FoM, QFD, Policy-inspired, and Simulation) are commonly solution-focused, measuring technologies relatively. The readiness and CBA clusters, on the other hand, are often diagnosis-focused and assess technologies in absolute terms. Few methods present either solution-based assessments with absolute measurements or relative assessments for diagnosis purposes. The only cluster present in all four quadrants is FoM, although admittedly it is more prominent in the relative-solution section.
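As an illustration, the two-dimensional classification underlying Fig. 6 can be sketched as a simple lookup structure. The quadrant placements below follow the tendencies described in the text (FoM, which appears in all four quadrants, is shown only in its most prominent position) and are a simplification, not the full mapping from the review.

```python
# Illustrative sketch of the aim x measurement classification: each cluster
# is placed by assessment aim ("diagnosis" or "solution") and type of
# measurement ("absolute" or "relative"). Placements are simplified from
# the tendencies described in the text, not an exhaustive mapping.

QUADRANTS = {
    ("solution", "relative"): [],
    ("solution", "absolute"): [],
    ("diagnosis", "relative"): [],
    ("diagnosis", "absolute"): [],
}

cluster_positions = {
    "LCA": ("solution", "relative"),
    "MCDA": ("solution", "relative"),
    "FoM": ("solution", "relative"),  # FoM spans all quadrants; most prominent here
    "QFD": ("solution", "relative"),
    "Policy-inspired": ("solution", "relative"),
    "Simulation": ("solution", "relative"),
    "Readiness": ("diagnosis", "absolute"),
    "CBA": ("diagnosis", "absolute"),
}

for cluster, quadrant in cluster_positions.items():
    QUADRANTS[quadrant].append(cluster)

print(QUADRANTS[("diagnosis", "absolute")])  # ['Readiness', 'CBA']
```

Reading the structure this way makes the sparsity of two quadrants visible: no cluster is predominantly placed in absolute-solution or relative-diagnosis assessment.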
Most of the technology assessment methods employ some sort of technology forecasting (n = 83), although most use expert judgment to generate the forecast (n = 56) to the detriment of more analytical methods (Mas-Machuca et al., 2014). Structured techniques to generate future scenarios are seldom applied (n = 27); methods usually rely on extrapolations from past data or expert judgment. Scenarios are primarily directed towards incorporating uncertainty about external factors into the assessment.
Many methods that consider trade-offs between technology virtues

Sustainability in technology assessment
Sustainability is present in a large share of the methods (n = 78) and has become more common in recent years, as more than half of these methods were published after 2018 (Fig. 7). SA methods for technology usually measure a single or few environmental impacts, namely greenhouse gas emissions or, more broadly, climate change impacts. These methods rarely include environmental and social concerns at the same time, and very few exclusively assess social impacts (n = 2). On the other hand, the combined assessment of environmental and economic impacts is common (n = 54), especially in methods that evaluate the contribution of a technology to the use-phase of an energy-consuming product. These methods often focus on energy saving, which may lead to both cost and environmental impact reduction, explaining the simultaneously economic and environmental nature of the assessments.
No method mentions the circular economy explicitly, although some include aspects of circularity, such as reducing the use of resources (24%). The circular economy is a relatively recent concept in sustainability, defined as a "regenerative economic system" with the aim to promote "value maintenance and sustainable development" by "reducing, reusing, recycling, and recovering materials throughout the supply chain" (Kirchherr et al., 2023). Technology and innovation are widely considered to be enablers of this circular economic model (Guzzo et al., 2023). Very few methods in sustainability technology assessment encompass increasing the lifetime of resources (4%) or closing the resource loop with recycling, reuse, and other end-of-life strategies (8%).
Technology sustainability assessment methods most commonly belong to the LCA or MCDA clusters, sometimes combining both approaches. Environmental concerns are especially common in methods designed for the chemical industry (32% of sustainability-related methods), which usually employ LCA-informed approaches. Sustainability-related methods frequently analyze either the environmental impacts of production (30%) or use (33%) of a technology. Only 25% of sustainability-focused TA methods include impacts from cradle-to-grave, including sourcing and end-of-life.
Sustainability-focused assessment methods display some factors which may discourage their use in early-stage projects. They are mostly designed for higher technology maturity (only 12% aim at TRL <4) and later stages of the development process. Additionally, they are less tested in industrial cases (70% are untested or evaluated only in mock cases). Furthermore, they usually require intensive data collection, which consequently makes application times longer (73% of tools take

Design propositions
To understand how to improve sustainability assessment methods for early-stage innovation projects, we extracted initial design propositions (DP) from tested technology assessment methods (Fig. 1). The design propositions shown in Table 2 can be interpreted as a list of recommended actions for technology assessment in certain contexts. The contexts refer to various stages in a TA implementation, namely organizational pre-conditions (before performing an assessment), planning (when designing and preparing an assessment), evaluation (when appraising technologies), and interpretation (when analyzing the results). Each action is followed by a strength rating, which represents to what extent the action is more frequent in tested than in untested methods, that is, the action's importance. A high strength denotes that tested methods include the action considerably more often (by 15% or more) than untested ones. On the other hand, a low strength implies that the action is only slightly more common (between 5 and 10%) in tested than in untested methods.
Furthermore, the extent to which the design propositions extracted from TA methods are also adopted by sustainability-focused methods was evaluated. This allowed the identification of commonalities and differences in best practices between sustainability-focused methods and TA methods not driven by sustainability. To do this, the procedure to obtain TA best practices was repeated exclusively with sustainability methods. That is, a certain characteristic was considered a best practice if its occurrence was significantly higher in tested than in untested sustainability-oriented TA methods, and the difference in occurrence is shown as the strength rating in the rightmost column of Table 2. Design propositions may be well adopted by sustainability-focused methods, in which case the strength rating is equal to or higher in sustainability-focused TA than in general TA. Another possibility is that the action is frequently adopted in sustainability methods but slightly less common than in general TA methods. In more extreme cases, actions from TA methods are absent from sustainability methods (marked by *).
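The strength rating described above can be read as a banding of the occurrence gap between tested and untested methods. A minimal sketch follows, assuming a "moderate" band for gaps between 10% and 15% (an interval the text leaves implicit):

```python
# Hedged sketch of the strength rating: the rating bands the gap between an
# action's occurrence rate in tested methods and in untested ones. The
# "moderate" band for gaps between 0.10 and 0.15 is an assumption to cover
# the interval the text does not explicitly define.

def strength_rating(gap):
    """Map occurrence gap (tested minus untested, as a fraction) to a rating."""
    if gap >= 0.15:
        return "high"
    if gap >= 0.10:
        return "moderate"  # assumed band; not explicitly defined in the text
    if gap >= 0.05:
        return "low"
    return "not a best practice"

print(strength_rating(0.20))  # high
print(strength_rating(0.07))  # low
```

Applying the same function to the gaps computed over sustainability-oriented methods alone would yield the rightmost column of Table 2.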
Each design proposition is detailed in the following subsections, grouped by context. Possible mechanisms and outcomes for these actions are further explored in section 5, completing the traditional CAMO format of design propositions (Romme and Dimov, 2021).

Context: pre-conditions
One organizational pre-condition for effective TA application was identified and described below.
DP01: Having a well-defined technology development process is preferred to having no structure in the front-end of the innovation process. A TA tool must be aligned with the process where it is employed, be it a technology stage-gate model (Cooper, 2006) or a more iterative approach (Aristodemou et al., 2019). This is not always straightforward, as even stage-gate models for technology development may retain certain "fuzzy" characteristics of innovation processes, such as not having clear requirements at development gates or a pre-defined number of stages (Ajamian and Koen, 2002). For example, one scorecard method for innovation [M087] implements this design proposition by aligning assessment activities with gate-meetings at the end of each development stage. The assessment activities include appraisals of cost and success potential, calculated by scoring the technology according to competitiveness, market size, competitor intensity, etc. This action is also observed to the same extent in tested sustainability-related methods. The ETEA framework tool [M026], for example, follows a stage-gate approach based on TRL for green technologies; the activities suggested by the framework are applied at each development gate and change according to the maturity of the technology.

Context: planning
When preparing for a technology assessment exercise or designing a tool to support the assessment, five design propositions have been identified and are discussed below.
DP02: Designing the assessment tool for non-expert users is moderately more common in tested TA methods. "Non-expert" refers to users without expertise in either the assessment method or the virtue being evaluated. For example, in the case of cost-benefit analysis, a non-expert user would not be highly familiar with financial metrics or the methodology itself. Methods following this design proposition usually fall outside the LCA or MCDA clusters. An example is the MEPT method [M050], developed by Siemens, in which an easy-to-understand scorecard is used to judge the attractiveness and technological position of a company's portfolio. The appraisals in MEPT are established via expert scoring of factors such as development potential, customer acceptance, and availability of human and financial resources. In sustainability-related tools, this action is present to a lesser extent. This could be due to the high influence of LCA and other product development approaches on sustainability quantification, which usually require more expert knowledge to be applied (McAloone and Pigosso, 2018). The design proposition is applied in the IISA method [M111], for example, which uses open public events to crowd-evaluate social aspects of innovations and map stakeholders' acceptance of the technology.
DP03: Involving the decision-maker during the assessment, and not only in the interpretation stage, is a best practice in TA methods. This design proposition can be exemplified by the STAR method [M152], in which technologies are assessed following a real-options approach. The appraisal is performed in a project group which includes the project leader or other decision-makers. Through a checklist containing questions about strategic and financial aspects of the technology, the project group reaches a single conclusion, in consensus, and the decision is made collectively. The checklist contains statements such as "the technology will be able to offer substantial performance advantages over current solutions" and "we have the right skills in place for commercialization." This design proposition is less common in sustainability-focused projects. This could be explained by the common view that sustainability assessment requires people in specialist roles, who are often not decision-makers. One pairwise comparison tool [M030] employs this proposition by developing personas to reflect the views of decision-makers from distinct parts of the organization. Each persona is then used to develop weights for the evaluation criteria. For instance, a persona representing an environmental enthusiast would rate the "carbon intensity" of the technology as very important, while "ease of delivery" would receive a relatively low importance. On the flip side, a persona representing a local resident of the area where the technology would be deployed would invert these importance ratings.
DP04: Implementing assessment tools in workshop settings with facilitation is recommended as opposed to software or spreadsheets for individual use. Workshop settings can be employed for group assessment of technologies with physical (i.e., paper-based) methods or digital approaches. The future-oriented risk assessment method [M084] proposes participative approaches, such as workshops and brainstorms, to align technology assessment with risk assessment, displaying the use of this design proposition. Although less common in sustainability-related tools, there are methods which employ facilitation for environmental impact assessment, such as Pindar [M048], used for robotics design. In Pindar, a series of facilitated evaluation steps are executed, from the selection of evaluation criteria to the quantification of results, guiding the choice of the most promising robotics design proposals.
DP05: Including a diverse set of stakeholders is preferred to relying solely on the development project personnel to assess technologies. Stakeholders included could be internal to the company but in different departments, as exemplified by the BRLa framework for emerging technologies [M017], or external, such as final users of the technology, as shown in the SCORE method for defense and military applications [M011]. In BRLa, or Balanced Readiness Level assessment, different "readiness" aspects of technologies are evaluated by a broad group of experts from within the company. The assessments include technology readiness levels, market readiness levels, organizational readiness levels, etc. In SCORE, feedback from actual end-users is captured in testing settings and is then incorporated into the technology assessment. The design proposition is also highly relevant in sustainability assessment tools, as employed in the LCA-inspired study of a bus fleet in Qatar [M161], which considered different user groups in its analysis.
DP06: Designing the assessment tool to be generic is substantially more common in tested TA methods than in untested ones. Generic tools are easy to understand and to apply to a broad range of industries. A generic and wide-ranging tool such as the innovation impact map [M113] can be used to evaluate the opportunities provided by distinct types of technologies in many industries, improving its uptake by manufacturing companies. The innovation impact map proposes a quantification of quality-of-life improvements as a consequence of the adoption of a technology, and it is not limited to a specific type of technology or industry. However, the design proposition extracted from sustainability-focused assessment methods points in the opposite direction: tested tools tend to be more specific than untested ones. Possible explanations for this conflict are explored in section 5.2. A generic SA method for technology can be exemplified by [M136], a multi-indicator assessment method comprised of guidelines for the selection and application of economic, environmental, and social indicators. This method, although developed for e-waste sorting technologies, can also be used to guide indicator selection for other types of process technologies. Specific tools for sustainability can be illustrated by the guidelines on how to conduct LCA combined with techno-economic assessment for specific carbon capture and utilization technologies [M142].

Context: evaluation
When executing the appraisal of a technology, five design propositions have been identified, as described below. All actions in this context are present to the same extent in both general TA methods and sustainability-focused ones.
DP07: Using leading indicators in place of lagging indicators. Lagging indicators reflect final outcomes, while leading indicators monitor the current situation (Pojasek, 2009). The use of leading indicators is exemplified in tools like the Combined Compromise Solution (CoCoSo) [M004], used to select manufacturing technologies with an MCDA approach, or the sustainability-driven method [M060], where expert judgement is used to rate product-related technological scenarios using pre-defined leading indicators. In CoCoSo, a set of indicators (such as quality, cost, and profit from after-sales services) is combined using a series of mathematical expressions and matrices and used to appraise a novel technology. In [M060], leading indicators are chosen to represent the triple bottom line of sustainability: economic, environmental, and social aspects. Results are then displayed in radar plots.
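To make the aggregation logic concrete, the following is a minimal sketch of a CoCoSo-style computation in Python. The decision matrix (three candidate technologies scored on three benefit criteria) and the weights are invented for illustration and are not taken from [M004]:

```python
# Illustrative CoCoSo-style aggregation (Combined Compromise Solution).
# All input values are hypothetical; real applications would use
# elicited indicator scores and expert-derived weights.
def cocoso(matrix, weights, lam=0.5):
    # Min-max normalise each criterion column (all treated as benefit criteria).
    cols = list(zip(*matrix))
    norm = [[(v - min(c)) / (max(c) - min(c)) for v, c in zip(row, cols)]
            for row in matrix]
    # Weighted sum (S) and weighted power (P) for each alternative.
    S = [sum(w * r for w, r in zip(weights, row)) for row in norm]
    P = [sum(r ** w for w, r in zip(weights, row)) for row in norm]
    # Three appraisal scores, then their compromise combination.
    # (Assumes no alternative is worst on every criterion, so min(S) > 0.)
    ka = [(s + p) / sum(si + pi for si, pi in zip(S, P)) for s, p in zip(S, P)]
    kb = [s / min(S) + p / min(P) for s, p in zip(S, P)]
    kc = [(lam * s + (1 - lam) * p) /
          (lam * max(S) + (1 - lam) * max(P)) for s, p in zip(S, P)]
    return [(a * b * c) ** (1 / 3) + (a + b + c) / 3
            for a, b, c in zip(ka, kb, kc)]

# Three candidate technologies scored on quality, cost rating, and profit.
scores = cocoso([[7, 9, 6], [8, 6, 8], [5, 7, 9]], [0.4, 0.3, 0.3])
best = scores.index(max(scores))  # index of the highest-ranked technology
```

The appeal for early-stage assessment is that the whole procedure fits in a spreadsheet: only a small score matrix and a weight vector are needed.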
DP08: Employing multiple indicators instead of a single metric. Multiple indicators can better capture the nuances of a technology assessment problem when they are not weighted or otherwise combined into a single (composite) measure. For example, they are present in the US Air Force's QTA method [M028], an analytic method to capture the impact of new technologies on aircraft performance. Several indicators of a technology's impact on aircraft performance are considered simultaneously, such as fuel flow, drag, and weight. In sustainability-focused methods, we can observe multiple indicators being employed in a visualization technique for sustainable energy system scenarios [M131]. The technique combines indicators for measuring energy generation, energy consumption, and greenhouse gas emissions for each developed scenario.
DP09: Applying qualitative indicators rather than quantitative ones. Methods applying this design proposition usually fall outside the LCA and MCDA clusters. Qualitative data can be easier to obtain than quantitative information, especially in the early stages of technology development. Qualitative data used for technology assessment can range from interviews for ethical evaluation [M061] (e.g., "does the technology enhance or diminish your sense of control?") to Likert-scale questionnaires about innovation used in the MIM method [M110] (including questions about protection level, global technical environment, and competitors' competence). In MIM, the degree of innovation of a technology is measured on a scale that ranges from "there is a sophisticated product and a huge identified market" to "preliminary idea for a product, the market is not well defined."
DP10: Using simple matrix or scoring methods, instead of algorithms, simulations, and other software tools. Matrix and scoring methods consist of a series of indicators, usually organized in tabular form, which are either scored following a pre-defined scale or filled in with available data. This approach tends to be easier to understand and implement than more sophisticated algorithms or software. Technology assessment methods employing this design principle can be illustrated by QFD approaches ([M138] combines QFD and AHP for augmented reality technology selection, where QFD is used to identify relevant criteria and AHP is used to rank criteria against each other), MCDA methods (MAUT [M137] uses utility theory to formally map the preferences of decision makers and important scoring criteria), or several KPI tools (US EPA's GREENSCOPE [M037], consisting of environmental and economic indicators arranged visually for the characterization of chemical process technologies, such as ethanol manufacturing from biomass).
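As an illustration of how lightweight such matrix/scoring methods can be, here is a hypothetical weighted scorecard in Python; the criteria, weights, and 1-5 ratings are invented and do not come from any of the reviewed methods:

```python
# Hypothetical scorecard: each technology is rated 1-5 against a
# pre-defined scale for each criterion, then a weighted total is formed.
criteria = {"technical feasibility": 0.4,
            "market readiness": 0.3,
            "environmental benefit": 0.3}

def scorecard(ratings, weights=criteria):
    # Weighted sum of the 1-5 ratings; weights are assumed to sum to 1.
    return sum(weights[c] * ratings[c] for c in weights)

tech_a = scorecard({"technical feasibility": 4,
                    "market readiness": 2,
                    "environmental benefit": 5})
tech_b = scorecard({"technical feasibility": 3,
                    "market readiness": 4,
                    "environmental benefit": 3})
```

The entire evaluation reduces to one table of ratings, which is precisely why such methods remain usable in data-scarce early design stages.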
DP11: Using real data, if possible, rather than estimates or proxy information. This design principle is more common for higher-maturity technologies, where experimental or pilot-scale data may already be available. For example, TCM [M073] uses a computer-based spreadsheet technique to simulate manufacturing costs based on historical data and data regressions. TCM has been used to identify cost drivers and economic potential in production technologies for ceramic matrix composites, diamond films, and engine components, among others. Among the sustainability-focused methods, those in the LCA cluster often require real data, as in the highly specific method [M006] combining LCA and AHP for technologies in the coal industry.

Context: interpretation
Finally, when interpreting the results from technology assessment, three design propositions have been extracted and are discussed in this subsection.
DP12: Considering the trade-offs and conflicts between technology virtues is more common in tested methods than in untested ones. Trade-offs are situations in (product or technology) design where all existing requirements cannot be simultaneously satisfied by the current alternative (Andreasen et al., 2015). To deal with these conflicts, the assessment must consider how improving one virtue may (negatively) affect another. For example, TOPSIS and CBA can be combined to simultaneously evaluate several virtues of semiconductor manufacturing processes, as suggested by [M154]. In this method, benefits for company managers are defined as important technology attributes, such as production lead time, flexibility, and quality. The same managers then rank the relative importance of these attributes, alongside technology implementation cost, providing key information for resolving trade-offs. Finally, the technology alternatives are ranked using this preference information via the TOPSIS method. However, for sustainability assessment methods, this design proposition does not hold (i.e., consideration of trade-offs occurs equally in tested and untested tools). Possible reasons for this discrepancy between generic TA and sustainability-focused methods are further discussed in section 5.2. An example where trade-offs between environmental issues and other virtues are considered is the QSA method [M033], which combines LCA with economic analysis and simulation in the chemical industry. The results of both assessments are taken into account simultaneously whenever a technology development decision is made.
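For readers unfamiliar with TOPSIS, the core ranking step (closeness to the ideal and anti-ideal solutions) can be sketched in a few lines of Python; the decision matrix and weights below are illustrative and are not taken from [M154]:

```python
import math

# Minimal TOPSIS sketch: alternatives are ranked by relative closeness
# to an ideal solution. All criteria are treated as benefit criteria;
# the matrix and weights are invented for illustration.
def topsis(matrix, weights):
    cols = list(zip(*matrix))
    # Vector-normalise each criterion column, then apply the weights.
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    vm = [[w * v / n for v, w, n in zip(row, weights, norms)]
          for row in matrix]
    vcols = list(zip(*vm))
    ideal = [max(c) for c in vcols]   # best value per criterion
    anti = [min(c) for c in vcols]    # worst value per criterion
    def dist(row, ref):
        return math.sqrt(sum((v - r) ** 2 for v, r in zip(row, ref)))
    # Relative closeness: 1 = at the ideal point, 0 = at the anti-ideal.
    return [dist(r, anti) / (dist(r, ideal) + dist(r, anti)) for r in vm]

closeness = topsis([[7, 9, 6], [8, 6, 8], [5, 7, 9]], [0.4, 0.3, 0.3])
ranked = sorted(range(3), key=lambda i: -closeness[i])
```

Because every alternative is measured against the same ideal point, conflicting virtues (e.g., lead time versus cost) are traded off on a common basis rather than compared pairwise.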
DP13: Considering the uncertainty in data and context. Technology development and other early innovation activities are often described as "fuzzy" (Eling and Herstatt, 2017), largely due to the several types of uncertainty that occur in the front-end of innovation and that may hinder technology assessments. Data uncertainty, commonly associated with early design activities (Andreasen et al., 2015), can limit the types of assessment methods that may be applied. More troublesome are context uncertainties, such as those related to technical factors, organizational factors, markets, and resources, which are particular to innovation activities (O'Connor and Rice, 2013). Most methods that propose to deal with uncertainties leave out the influence of these contextual factors. There are multiple ways to manage uncertainties in technology assessment, ranging from mostly qualitative to mostly quantitative methods. A proposed NASA lunar outpost program [M044] used decision trees and sensitivity analysis for decision support. The method models different system characteristics and analyses their impacts at a system level. A "hard" approach is taken by [M077], which includes a fuzzy best-worst method for new product idea selection under group decision-making, with rigorous quantification of uncertainty. In sustainability-focused methods, this design proposition is mostly absent from the studied tools. The possible reasons behind this are further discussed in section 5.2.
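The combination of decision-tree evaluation and sensitivity analysis described for [M044] can be illustrated with a toy expected-value model; all probabilities and payoffs here are invented and serve only to show the mechanics:

```python
# Toy decision-tree node: a single chance node with two outcomes,
# plus a one-at-a-time sensitivity analysis over one parameter.
def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Hypothetical base case: 60% chance of success.
base = dict(p_success=0.6, payoff_success=100.0, payoff_failure=-40.0)

def sensitivity(param, low, high):
    # Vary one parameter over its plausible range, holding the rest fixed.
    lo = expected_value(**{**base, param: low})
    hi = expected_value(**{**base, param: high})
    return lo, hi

ev_base = expected_value(**base)   # 0.6*100 + 0.4*(-40) = 44
lo, hi = sensitivity("p_success", 0.4, 0.8)
```

Sweeping each uncertain input this way shows which parameters the decision is most sensitive to, which is exactly the kind of transparency DP13 asks for when data are scarce.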
DP14: Considering future scenarios in structured ways is preferred to ad-hoc approaches or not using foresight techniques at all. Foresight methods can be useful in technology assessment tools to reduce (context) uncertainties and lead to more informed decisions (Keenan et al., 2007). For example, scenarios can be created in a systematic way to address the impact of market-related factors on the financial aspects of a technology [M120]. Scenarios in this method are used as a form of sensitivity analysis and robustness check to verify whether technologies would maintain their placement in the assessment even in non-ideal future situations. This design proposition is less common in sustainability assessment methods, which usually use unstructured approaches to technology foresight, if any. A contrasting example [M042] uses a participative method to co-develop scenarios from narratives and visions that participants have regarding the promises of additive manufacturing. As the authors of the method state, "the goal is to move from often little reflected technology-driven visions to reflexive sociotechnical scenarios that address the complexity of grand challenges in a more relevant way."
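The scenario-as-robustness-check idea described for [M120] can be sketched as follows; the technologies, criterion scores, and scenario weight sets are all invented for illustration:

```python
# Re-run a simple weighted ranking under several future scenarios
# (modelled here as different criterion weight sets) and check whether
# the winning technology is stable across all of them.
scores = {"tech_A": [8, 5, 7], "tech_B": [6, 8, 6]}   # per-criterion scores
scenarios = {
    "baseline":         [0.5, 0.3, 0.2],
    "cost_pressure":    [0.2, 0.6, 0.2],
    "green_regulation": [0.2, 0.2, 0.6],
}

def winner(weights):
    # Weighted total per technology; the highest total wins the scenario.
    total = {t: sum(w * s for w, s in zip(weights, vals))
             for t, vals in scores.items()}
    return max(total, key=total.get)

winners = {name: winner(w) for name, w in scenarios.items()}
robust = len(set(winners.values())) == 1   # True if one tech wins everywhere
```

A technology that wins only under the baseline scenario (as tech_A does here) flags a context uncertainty that an ad-hoc, single-scenario assessment would miss.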

Discussion
In this section, the learnings from technology assessment best practices that could support the design and effective implementation of sustainability assessment are discussed. First, possible mechanisms to explain the design propositions established in section 4 are explored, elucidating how these actions may lead to positive outcomes in TA. Then, the gaps between sustainability-focused and non-sustainability-focused TA methods are investigated, resulting in an analysis of how the design propositions may be applied to early-stage SA tools.

Mechanisms and outcomes of design propositions
A complete design proposition must establish the generative mechanisms and expected outcomes of an action in a given context (Romme and Dimov, 2021). The mechanisms aim to explain why a certain action works to achieve a certain outcome. In this research, the possible mechanisms and outcomes of the design propositions were investigated using abductive thinking and are discussed in the remainder of this section (Tables 3-5). The mechanisms should be interpreted as initial proposals containing possible explanations for the design propositions, and they reflect a limited body of knowledge in the literature within technology and innovation management, futures studies, and engineering design.
In the pre-conditions and planning context (Table 3), the literature points to two positive outcomes that may be achieved through different mechanisms: (i) higher adoption of the tool in the organization; and (ii) increased reliability and trustworthiness of decisions. Regarding driving up the adoption of TA tools, companies working with a higher level of formalization in the product development process are shown to be better equipped to implement management tools (Nijssen and Frambach, 2000). This mechanism, if transposed to technology development, may explain why TA tools designed for a well-defined process (DP01) achieve higher adoption rates. Tools in compliance with DP01 are easier to set up in an organization with structured technology development processes and are less dependent on the willingness of specific people in the organization to be effectively used. On the other hand, if the company does not have an established process for technology development, this may constrain which tools can be employed and diminish the positive effect of the applied tools. Similarly, technology management toolkits with user-friendly processes and a modular or generic structure show increased adoption rates (Kerr et al., 2013). This effect may clarify why designing simple and generic tools (DP06) for non-expert users (DP02) is a best practice for TA methods. Another way to achieve higher adoption of assessment tools is by driving accountability and participation of top management, which is shown to play a key role in the success of new product development (Nijssen and Frambach, 2000). If this mechanism can be transposed to technology development, it may explain why the direct involvement of decision-makers in the evaluation process (DP03) can lead to positive outcomes.
Additionally, actions in the preparation context can also lead to more robust decisions. The involvement of the decision-maker in the assessment (DP03) can increase the acceptance of results by others (Wiebe et al., 2018) and enhance the legitimacy of the decision. It may also streamline the assessment procedure, as the decision-maker, by participating directly, becomes aware of the assumptions and limitations of the analysis earlier in the process. However, when decision-makers are involved, their hierarchical position can cause other participants to feel overruled and afraid to speak up, possibly resulting in sub-optimal decisions (Kerr and Tindale, 2004). Furthermore, the inclusion of a group of decision-makers in the evaluation may be challenging for practical reasons (e.g., space limitations, logistic or scheduling constraints).
Technology management scholars argue that workshop settings (DP04) generate more trustworthy decisions, as they increase the levels of communication and knowledge-sharing among participants (Franco and Montibeller, 2010). The same is said of including diverse participants (DP05) in the assessment (Kerr et al., 2013), who are then incentivized to reach consensus and co-create solutions, driving increased reliability and trustworthiness of the decision. On the negative side, large assessments including multiple stakeholders can be drawn out and delay the decision-making process (Wiebe et al., 2018).
There is abundant literature on the potential benefits of using multiple qualitative leading indicators in management tools in the context of evaluating technologies (Table 4). Leading indicators (DP07) are argued to have positive effects in promoting early and preventive action (Pojasek, 2009) by offering a way to monitor the degree of compliance with management criteria instead of measuring outcomes. Multiple indicators (DP08) can be used to map a more complete picture of the impacts of a technology by including in the assessment a broader set of issues and sub-components of a complex system (Greco et al., 2019). Qualitative indicators (DP09) are shown to increase adoption of the assessment tool by simplifying the assessment (Franco and Montibeller, 2010) and by better representing how participants think and communicate in the early design stages. Simplification is also argued to be the main mechanism behind the use of scoring methods (DP10) (Mitchell et al., 2022), as they offer a robust yet easy way to break down complex assignments into simple evaluations (Kerr et al., 2013).
On the other hand, the use of purely qualitative indicators can be challenging in a highly technical setting within an engineering organization, such as technology development. Decision-makers and participants may prefer quantitative indicators in an aggregated form, which are more readily understandable (Greco et al., 2019). Additionally, the use of scorecard methods may increase the likelihood of participants distrusting the assessment if the scoring is too vague, inconsistent, or not representative of their values. These concerns can be alleviated by using real data (DP11) to substantiate the assessment, which is reasoned to increase the trustworthiness of decision-making by ensuring results are "based upon a sound knowledge base" (Keenan et al., 2007). Combining qualitative indicators and real-data requirements is a delicate balance between simplicity and reliability.
In the interpretation context (Table 5), actions are argued to contribute to more informed and robust decisions. Considering trade-offs between options (DP12) may help uncover hidden costs or benefits of different alternatives and compare them on a common basis (Belton and Stewart, 2002; Kravchenko et al., 2020a). Being transparent about uncertainty (DP13) can also lead to more credible results by ensuring that limitations and assumptions regarding data and context are acknowledged in the assessment (Mitchell et al., 2022). Finally, the use of foresight techniques such as future scenarios (DP14) may result in more comprehensive decisions by allowing the exploration of possible changes to the decision context and increasing awareness of uncertainties in the assessment (Chermack, 2004; Parolin et al., 2023).

Gaps in sustainability assessment methods
The dominant position in the sustainable design literature does not always align with the design propositions; possible reasons for these gaps are explored in this section. The design propositions for general technology assessment methods were analyzed in the context of the sustainable design literature (Table 6). The investigated sustainability-related literature is focused on studies of sustainability assessment methods and tools, and it is not limited to technology development but also includes new product development, innovation topics more broadly, and SA as applied in other disciplines.
Best practices extracted from the sustainability-related literature match most of the design propositions in the preparation context, with the noted exception of a discussion around generalizability versus customization of assessment tools. In his PhD thesis, O'Hare (2010) presents recommendations for eco-innovation tools in the early design stages, backed by industrial investigation and literature reviews. The author states that environmental considerations should be integrated within the new product development process, matching the first proposed action in this study (DP01) (Pigosso et al., 2014). Additionally, O'Hare encourages the use of tools that require a low level of effort to apply and that are "easy to learn, understand and use," especially in the early design stages, mirroring DP02. However, the author also states that one should "customize the tools to the specific company or application" (O'Hare, 2010), which may conflict with DP06. In fact, SA tools for technologies tend to be specific to a certain use-case, as evidenced in Table 2. We can hypothesize that the lack of generic tools for sustainability assessment may stem from the (apparent) need for more specialist knowledge when evaluating environmental and social impacts, compared to business-related metrics. Sustainability assessments tend to be complex methods (Huang, 2021) that may require more tailoring to company-specific characteristics to be usable by non-sustainability experts (McAloone and Pigosso, 2018).
The other actions in the preparation context are largely supported by the sustainable design literature. Participation of decision-makers in collaborative TA exercises (DP03) has been shown to have positive effects on the decision (Gasde et al., 2020a). Workshop settings (DP04) were shown to increase participants' "consideration of technology value towards customers, society as a whole and the environment" (Farrukh and Holgado, 2020). However, awareness may not be enough, and such workshop activities may need to be qualified and complemented by more analytical approaches. The importance of stakeholder engagement (DP05) has been argued for in front-end of innovation activities, like sustainable business model innovation (Pieroni et al., 2018; Schlüter et al., 2023), and in other types of technology assessment, namely in policy making and evaluation (Sala et al., 2015).
The design propositions in the evaluation context are moderately endorsed in the sustainable design literature. The use of leading indicators (DP07) is commonly advocated for: (i) measuring corporate circular economy initiatives (Kravchenko et al., 2020b); (ii) evaluating product-related environmental performance (Issa et al., 2015); and (iii) gauging process-related environmental performance (Rodrigues et al., 2016, 2017). Likewise, scorecard methods (DP10) find applications in SA deployed in technology development (Farrukh and Holgado, 2020) and new product development (McAloone and Bey, 2009).
On the other hand, having multiple indicators (DP08) is less represented as a best practice in the sustainability literature. It is seen as a positive feature of SA (Sala et al., 2015), but there are claims that multiple indicators can make decision-making more complicated, since there may be conflicts between indicators (Saidani et al., 2021). While it is understandable that presenting multiple indicators may result in a less straightforward assessment, qualitative indicators (DP09) are supported for early-stage SA of technologies (Chebaeva et al., 2021) and product development projects (Kravchenko et al., 2020a), whereas later-stage SA is traditionally carried out in quantitative terms using specialized software, as in the case of LCA (McAloone and Pigosso, 2018). Also not supported by the sustainable design literature is the use of real data (DP11), since early-stage SA typically calls for the use of estimates or proxy measures due to resource availability issues (Matthews et al., 2019a,b). A combination of real data and estimates may be an adequate compromise to ensure both ease of use and trustworthiness of the assessment.
Sustainable design literature generally supports research on and implementation of the design propositions in the interpretation context. How to deal with trade-offs (DP12) is a major challenge of sustainability evaluation in product development and innovation (Dekoninck et al., 2016; Schlüter et al., 2023), and several methods have been proposed recently, from manufacturing companies (Kravchenko et al., 2020a) to the built environment (de Magalhães et al., 2019). The same can be said about uncertainty quantification and management (DP13), which is recognized as a key research area within SA (Sala et al., 2015). Scenarios (DP14) and foresight techniques are also increasingly valued ways of incorporating uncertainty into sustainability assessment (Bisinella et al., 2021).
In contrast, the design propositions in the interpretation context are rarely employed in practice (Table 2), even though they are valued by sustainable design scholars. A plausible reason for this lack of adherence to best practices could be the increased methodological sophistication required of SA tools to employ these actions. Similarly, conducting uncertainty and trade-off management can be a time-consuming and resource-intensive task, which may not fit within technology development constraints. Finally, these practices may challenge existing norms and expectations of decision-makers (Sala et al., 2015), leading to "fuzzier" decisions and less deterministic results.

Developing and adapting sustainability-focused methods for technology assessment
In addition to the design propositions themselves, there are several opportunities to improve tools aimed at assessing the sustainability of new technologies in manufacturing companies, according to the gaps discussed throughout this study. Specifically, the following remarks stem from: (i) the gaps in sustainability-focused technology assessment methods (section 3.2); (ii) the differences between design propositions extracted from generic TA methods and sustainability-oriented ones (Table 2); and (iii) the disagreements that arise when linking TA best practices to the SA literature (section 5.2).
• Some recognized best practices in both TA and SA are still not widely employed in SA methods for technology development, such as uncertainty and trade-off management. The use of simplified approaches to uncertainty quantification and streamlined MCDA methods that do not require extensive calculations are possible "low-hanging fruit" actions to improve existing sustainability-focused TA tools.
• There needs to be a reconciliation between the advantages of generic TA methods (broad applicability and apparent simplicity) and the perceived benefits of company-specific SA tools (efficiency and ease of use). One possible solution is suggested by Kerr et al. (2013), who recommend the development of flexible and modular tools, consisting of a general form that can be made specific according to the needs of a company or industry.
• Furthermore, methods that include circularity criteria in the assessment are absent from the studies included in this review. At the same time, initiatives to include circular economy principles in companies' practices, including technology and product design, are increasingly common, but are seldom evaluated according to their environmental or social impacts (Das et al., 2022). As a first step to fill this evaluation gap in technology development, current TA methods based on leading indicators could be adapted to include circularity-related figures from existing databases (Kravchenko et al., 2020b).
• Finally, although process models for technology development have existed for decades (Aristodemou et al., 2019), there are few SA tools for technology that are clearly designed with process integration in mind. Other organizational barriers to the implementation of appropriate tools must also be investigated, including which competences and resources are needed in the front end of innovation to enable the use of these methods, as well as the role of culture and leadership (Dekoninck et al., 2016).
There are some existing sustainability-focused technology assessment tools included in this study which fulfil the design propositions and the recommendations above to a considerable extent. A notable example is method M115 (Farrukh and Holgado, 2020), a modular toolset for early-stage TA that includes the scoring of sustainability criteria. Implemented in a facilitated setting and tested in industrial workshops, it was shown to lead to useful discussions among the project group members. The method includes value mapping and stakeholder identification exercises. Lacking from the method is the explicit consideration of trade-offs and future scenarios, although both aspects are, to a certain extent, covered implicitly. Additionally, circularity aspects are not mentioned, and there is no predefined set of indicators or sustainability criteria recommended to users. Making use of the modular nature of this tool, the aforementioned functionalities could be included in the method as new modules on top of the existing elements.

Conclusion
In this paper, TA methods for manufacturing companies are systematically catalogued and explored in relation to SA. The importance of the front-end of innovation and technology development in manufacturing companies was explored, as was the need to integrate sustainability considerations into these activities. In total, 170 existing TA methods were mapped and categorized through a systematic literature review, identifying their strengths and limitations. From this collection of TA methods, a set of design propositions, or best practices, for technology assessment was proposed. Using a design science approach, the potential for technology assessment methods to be expanded to include sustainability considerations was demonstrated, highlighting the importance of doing so given the increasing pressure on companies to address their environmental and social impacts. By integrating sustainability into state-of-the-art TA methods, manufacturing companies can improve their innovation processes, increase the value of their offerings, and contribute to a more sustainable future.
Our findings have practical implications for manufacturing companies seeking to improve the sustainability performance of their technologies, products, and operations, as well as for researchers and practitioners interested in the intersection of technology, innovation management, and sustainability assessment. Manufacturing companies can improve their sustainability assessment practices in technology development by following the design propositions discussed in this study, namely:
• Design generic (Kerr et al., 2013) sustainability assessment tools that can be integrated into the technology development process (Nijssen and Frambach, 2000) and used by non-sustainability experts (O'Hare, 2010).
• Implement the assessment as a facilitated workshop (Franco and Montibeller, 2010) involving a diverse set of stakeholders in the evaluation itself, including the decision-maker (Kerr et al., 2013).
• Employ multiple leading indicators (Kravchenko et al., 2020b; Saidani et al., 2021) in simple scoring methods (McAloone and Pigosso, 2018), instead of simplified single measures (Mitchell et al., 2022), avoiding overdependence on quantitative data (Chebaeva et al., 2021) that is often not available or reliable in early-stage projects.
• Employ scenario and foresight techniques to systematically incorporate internal and external uncertainties into the assessment (Bisinella et al., 2021; Parolin et al., 2023) and consider possible trade-offs (Schlüter et al., 2023) when interpreting the results.
Additional recommendations to practice were also developed, stemming from current gaps in SA methods compared to TA methods.
• Use simplified approaches to uncertainty quantification and streamlined MCDA methods in sustainability-focused TA tools.
• Develop flexible and modular tools, consisting of a general form that can be made specific according to the needs of a company or industry.
• Include circular economy principles in the sustainability assessment by, for example, introducing circularity leading indicators.
• Design the assessment tools around the technology development process and be aware of organizational factors that may affect their operationalization.
This review has limitations, mainly related to publication bias. There might be other TA methods in use in manufacturing companies which have not been published in the scientific literature and are instead available in grey literature, distributed within companies themselves, consultancies, or other organizations. Furthermore, the criteria for establishing design propositions rely on the reporting of the methods and their testing. If the reporting of testing is limited in any way (e.g., due to confidentiality issues, publication before industrial cases were concluded, or inaccuracy), the design propositions could be skewed towards methods that are easier to deploy or more popular with industry professionals. Additionally, the discussion on possible mechanisms and outcomes for the design propositions is abductive in nature and reflects a partial view of the management and sustainability literature. The mechanisms proposed in this study should be interpreted as provisional, not conclusive. Finally, since simple LCA studies were excluded from the reviewed articles for being rarely applicable to technology development projects in manufacturing companies, there might be a bias towards less sustainability-focused and more qualitative methods in the final findings.
Overall, this paper contributes to the field of technology management and innovation by mapping existing technology assessment tools, providing novel insights into best practices for TA, and offering practical recommendations for SA in early-stage projects.
Following the framework for research on the front-end of innovation (FEI) (Eling and Herstatt, 2017), this study contributes important findings to the themes General FEI Methods and Tools and Idea/Concept Evaluation, specifically to the questions "Which FEI methods and tools exist and what are the benefits and drawbacks of applying these?", "Which organizational, project, environmental, team, or other factors impact the successful application of FEI methods or tools?", and "Which evaluator characteristics determine idea/concept evaluation or screening success?".
Our study suggests several avenues for future research in this area. One important direction would be to further refine the design propositions proposed here for integrating sustainability considerations into technology assessment methods, testing their applicability in different contexts and industries. Testing of the design propositions could also scrutinize the conflicts between best practices in methods targeted at sustainability versus generic ones, reconciling the divergent recommendations in the literature. Furthermore, there is a need for research on how manufacturing companies can effectively implement sustainability assessment in their innovation processes, including the role of organizational culture, leadership, and incentives. Moreover, there is an important gap in the current suite of assessment tools when it comes to assessing the circularity of technologies. Finally, future research could investigate how to successfully operationalize indicator selection, scenario planning, and uncertainty and trade-off management practices in the "fuzzy" front end of innovation for sustainability.

Appendix Table 7
Technology assessment methods. Methods in italics were named by the authors of this study.
[M155] to simple qualitative checklists of product viability [M103]; and from very specialized multi-indicator assessment of biochemical methane generation technologies [M134] to very generic and accessible scoring of innovation projects [M157].

Fig. 2 .
Fig. 2. Distribution of methods according to industrial sectors and types of technologies.

Fig. 3 .
Fig. 3. The predominance of business and technical factors in the distribution of virtues considered by technology assessment methods. One method may consider multiple virtues.

Fig. 4 .
Fig. 4. Data requirements, technology development stage, and virtues encompassed by technology assessment methods.

Fig. 5 .
Fig. 5. TA method clusters. Methods were first categorized according to the type of criteria used (discretionary choice of criteria or prescribed) and degree of structuredness.

Fig. 6 .
Fig. 6. Clusters of technology assessment methods according to aim and type of measurement. The area of each rectangle is proportional to the number of methods in each cluster.

Fig. 7 .
Fig. 7. Share of sustainability- and circular economy-related technology assessment methods included in the review. Each circle represents roughly 4 methods, according to year of publication.

Table 1
Search strategy.Each string was combined with "AND" connectors.

Life Cycle-oriented (LCA/LCC). The 35 methods in this category bring inspiration from Life Cycle Assessment (LCA) or Life Cycle Costing (LCC), prescribing a set of environmental or economic indicators for TA. LCA is the de facto standard method for product and process environmental impact assessment, with a large body of work

Table 2
Initial design propositions (context and action) extracted from technology assessment methods. The strength rating indicates to what extent the action is more frequent in tested than in untested methods. * = irrelevant strength or absence of action in sustainability-focused TA. TD = technology development.

Table 3
Mechanisms for the design propositions before technology assessment and when designing and preparing (pre-condition and planning context). TD = technology development.

Table 4
Mechanisms for design propositions when appraising technologies (evaluation context).

Table 5
Mechanisms and actions for the design propositions when analyzing technology assessment results (interpretation context).

Table 6
Relationship and alignment (agreement) between design propositions and sustainability assessment literature. TD = technology development. ++ = straightforward conclusion.

Clearly displaying conflicts between sustainability aspects can be a positive consequence for SA, helping uncover uncertainties and capturing a broader view of sustainability. Similarly debatable is the use of qualitative indicators (DP09). Early-stage SA often resorts to qualitative indicators (DP09) when data are lacking, as seen in R&D project assessment