Every Cloud Has a Silver Lining: A History of Barriers to Intellectual Capital Measurement

The purpose of this paper is to contribute to the Intellectual Capital Accounting (ICA) literature by investigating the barriers to IC measurement and how they can (re)shape IC measurement projects. The paper presents an interventionist case study of a company which has been measuring its IC for several years. It focuses on some emblematic situations in which barriers arose during the design and implementation of IC indicators, followed by a (re)moulding of the IC measurement project within the organization. The case analysis revealed two barriers: the heavy workload that the design and calculation process of IC indicators entails and the perceived limited reliability of those indicators. These barriers did not lead to a complete failure of the project, but acted as a "filter", resulting in the discontinued use of those indicators that were not perceived as useful or reliable and in the adoption of those that the managers actually considered useful. While this hindered a complete implementation of the Intellectual Capital Measurement System (ICMS), it promoted a selection of IC indicators and a legitimation of those that subsequently became part of the management control system. While previous studies explore how barriers can hamper the design as well as the implementation of an ICMS, this paper sheds light on what happens to IC measurement projects when the magnitude of barriers is not so significant as to lead to the complete failure of the project. This offers a new and different view of the barriers related to the adoption of ICMSs. Rather than considering them as factors which can interrupt IC measurement, they could be seen as factors which can challenge the implementation of an ICMS, strengthening those IC indicators that are perceived as reliable and useful and eliminating or disregarding those that do not meet expectations.


Introduction
The development of the knowledge-based economy has meant that a company's degree of competitiveness depends more and more on the management of Intellectual Capital (IC) elements rather than on physical and financial assets (Edvinsson, 2013; Roos, 2017). IC is the complex set of company-specific knowledge resources, such as brands, customer relationships, employee competencies, databases, intellectual properties, etc. (Dumay, 2009; Dumay & Cuganesan, 2011; Montemari & Nielsen, 2013).
Starting from the familiar adage "What you measure is what you get" (Kaplan & Norton, 1992, p. 71), several frameworks to measure IC have been proposed over the years, the aim being to encourage companies to adopt IC management practices by promoting the adoption of narratives and indicators on IC (Meritum Project, 2002; Mouritsen et al., 2003; Andriessen, 2004; FMEL, 2004; European Commission, 2008; Sveiby, 2010). Along these same lines, a specific research stream arose within the IC discourse, that of Intellectual Capital Accounting (ICA), defined by Guthrie et al. (2012, p. 68) as "an accounting, reporting and management technology of relevance to organizations to understand and manage knowledge resources. It can account and report on the size and development of knowledge resources such as employee competencies, customer relations, financial relationships and communication and information technologies".
Despite the large portfolio of Intellectual Capital Measurement Systems (ICMSs) available for companies, their actual adoption in practice is still limited (Dumay, 2009; Lönnqvist et al., 2009; Chiucchi, 2013; Chiucchi & Montemari, 2016). Therefore, one of the current aims of ICA research is to explore the levers and the barriers to their adoption and use (Guthrie et al., 2012; Dumay, 2013; Dumay, 2014), in order to shed light on how and why successful (or unsuccessful) experiences occur (Dumay, 2012, p. 12).
As proposed by the extant literature, narratives and indicators on IC coexist within ICMSs (Andriessen, 2004; Sveiby, 2010). Nevertheless, while the role that narratives can play to support the management of IC has been extensively explored (Mouritsen et al., 2001; Cuganesan et al., 2007; Dumay & Rooney, 2011; Dumay & Roslender, 2013), the role of IC indicators deserves additional attention (Catasús et al., 2007; Mouritsen, 2009). The literature, in fact, has emphasized that their adoption can be problematic because of several factors that can affect their actual use for managerial purposes (Vaivio, 2004; Catasús & Gröjer, 2006; Demartini & Paoloni, 2013; Chiucchi et al., 2018). In many cases, the research (and case study analysis, as well) has been aimed at exploring how barriers to their adoption can lead to a failure of the IC measurement project within companies (Chiucchi & Montemari, 2016; Giuliani et al., 2016; Schaper, 2016; Nielsen et al., 2017). In contrast, scholars have paid little attention to what happens to IC measurement projects when the magnitude of the barriers is not so relevant as to lead to a complete failure of the project.
This paper aims to contribute to filling this gap through an interventionist case study of a company which has been measuring its IC for several years. The story of the evolution of the ICMS tells of a non-linear progression, with many twists and turns as the project has taken several different directions over the years. In particular, the paper focuses on some emblematic situations in which barriers arose during the design and the implementation of IC indicators, in order to show how they affected the IC measurement project and (re)shaped it.
The structure of the remainder of the paper is as follows: Section 2 presents the literature review and the research question, and Section 3 describes the method chosen to answer the research question. Section 4 illustrates the case study and focuses on the barriers that arose during the IC measurement project and their effects. Finally, Section 5 discusses the case findings and concludes the paper by presenting its main contributions.

Literature Review
Today, intangible resources are unanimously considered by academics and practitioners to be one of the main drivers of value creation (Edvinsson, 2013; Roos, 2017). The relevance of these resources has grown exponentially over the last thirty years in relation to increased competitive pressure and the advent of new information and communication technologies (Lev, 2001). These factors have forced companies to privilege innovation as their primary source of competitive advantage, with a resulting greater emphasis on intangible assets than on tangible and financial resources.
In view of the pivotal role taken on by intangible resources, the specific research area of IC gradually emerged toward the end of the millennium (Petty & Guthrie, 2000). Edvinsson (1997, p. 368) defined IC as "the possession of knowledge, applied experience, organizational technology, customer relationships and professional skills that provide […] a competitive edge in the market".
Over the years, several taxonomies have been proposed to classify IC (for a thorough review, see Keong Choong, 2008). A broad consensus was gradually reached on the categorization that splits IC into human capital, organizational capital, and relational capital (Bjurström & Roberts, 2007; Guthrie et al., 2012). Human capital entails the individual knowledge, competences, skills, and experience of people working within a company (Sveiby, 1997). Organizational capital is the codified knowledge which is structured in "tangible" elements so that it can be shared and transmitted in time and space. It includes databases, information systems, intellectual properties, technologies, software, and internal procedures (Stewart, 1997). Relational capital is the network of relationships established with stakeholders, investors, suppliers and, above all, with current and potential customers. It also includes the corporate image, reputation, and brand (Sánchez et al., 2000).
Starting from the consideration that "What you measure is what you get" (Kaplan & Norton, 1992, p. 71), academics and practitioners have suggested several frameworks and models to measure and report IC (Chiucchi & Gatti, 2015), both for internal decision-making purposes and for external disclosure purposes (for a thorough review, see Andriessen, 2004; Sveiby, 2010). Moreover, governments and institutions have also proposed innovative models to measure and report IC, the aim being to encourage companies to adopt IC management practices (Meritum Project, 2002; Mouritsen et al., 2003; FMEL, 2004; European Commission, 2008).
Despite several IC frameworks and measurement models having been proposed over the years, their actual adoption by companies is still limited (Dumay, 2009; Lönnqvist et al., 2009; Chiucchi, 2013; Chiucchi & Montemari, 2016). Therefore, one of the current aims of ICA research is to identify the levers and the barriers to the use in practice of ICMSs by companies (Guthrie et al., 2012; Dumay, 2013; Dumay, 2014) in order to shed light on how and why successful (or unsuccessful) experiences occur (Dumay, 2012, p. 12).
The extant literature has explored the factors that can hinder or enable the design, the implementation, the transmission, and the use of IC indicators. In particular, research has emphasized that indicators should be designed to fit the characteristics of the company setting where they are supposed to be used (Dumay, 2009; Gatti & Chiucchi, 2017). At this stage, managerial information needs, strategic objectives, and key performance areas should be at the core of the design and the calculation of indicators. Following along these lines, research has also acknowledged that involving the information users in designing the IC indicators fosters their understanding of them, which may increase the chances of producing measures coherent with their information needs (Chiucchi & Dumay, 2015). On the contrary, when the participation of managers is limited from the design stages onward, IC does not acquire an organizational meaning and IC measurement projects dwindle and stop altogether (Giuliani et al., 2016). The influence of company actors on the evolution of IC measurements and reports has also been explored recently, highlighting which actors influence both the adoption and the fate of an IC report and how they exert this influence (Giuliani & Chiucchi, 2019).
Moreover, the complexity of the process through which indicators are designed has also catalyzed the attention of researchers (Catasús & Gröjer, 2006). Firstly, as IC is a multifaceted phenomenon, a high number of indicators must be designed and implemented in order to capture all its dimensions (Andriessen, 2004). Secondly, developing IC indicators can be a long process, as it requires the cooperation of many data providers within the company, who may be unwilling to contribute to projects that are usually perceived to be unrelated to their day-to-day activities (Demartini & Paoloni, 2013). Thirdly, IC indicators can rapidly become outdated; IC is a dynamic phenomenon, and over time, new IC items may emerge and become a priority for management, while others may lose relevance, and some may even return to the spotlight in a revitalized format. Thus, the ICMS should be continuously updated, and the benefits of calculating IC indicators are sometimes short-lived (Chiucchi, 2013). Taken together, all these factors inevitably make data collection and processing very difficult and time-consuming. This, in turn, may lead managers to consider IC measurement too complicated (i.e., costly) and of low value when compared to its benefits (Schaper, 2016).
Another stream of research concerns the way information collected through IC indicators is transmitted, i.e., the logics through which indicators are selected as well as how and when they are conveyed to the users. Several contributions have revealed that there are a few ways to select and present indicators that can favor subsequent reception. One way is by "dramatizing" indicators; this takes place when a certain set of conditions occurs (Catasús & Gröjer, 2006, pp. 195-197). First, indicators should be presented as tools that provide information on a company's priority concerns and specific organizational challenges; second, indicators should be easy to interpret and understand; third, the labels chosen to present indicators should be appealing to managers, as this increases the chances of catching their attention.
Moreover, indicators could be designed and presented through causal maps (Montemari & Nielsen, 2012; Montemari & Nielsen, 2013); this tool displays how IC items interact with each other for the sake of value creation, therefore revealing how IC really works in the specific context in which it is deployed (Cuganesan, 2005). This method of selecting and conveying indicators may improve managerial awareness of the relevant IC items, and indicators may be perceived as useful tools to govern the value creation process (Cuganesan & Dumay, 2009).
Another way to design and transmit IC indicators is through business models (Montemari & Chiucchi, 2017; Chiucchi et al., 2018). The business model is the framework through which companies execute their strategy (McGrath, 2010; Nielsen & Montemari, 2012; Lambert & Montemari, 2017) and it clarifies how value is actually created and captured (Osterwalder & Pigneur, 2010; Arend, 2013). Extracting and presenting IC indicators through the business model highlights which IC elements are of utmost importance and what role they play in the company's value creation process, thus ensuring that the measures are closely connected to the company's conception of value creation.
Finally, barriers to IC indicators have been identified by the extant literature with regard to their use. A number of contributions have underlined that IC indicators can be criticized and rejected by information users for several reasons. First, IC indicators are provocative as they spotlight deeply rooted local practices, and this may provoke reactions from company actors whose performance may end up being publicly exposed (Vaivio, 2004, p. 61). Second, the relevance and the validity of the IC indicators may be questioned by information users, as well. It is worthy of note that non-financial measures outnumber the financial ones in ICMSs (Andriessen, 2004; Sveiby, 2010) and, by nature, these indicators have some features that may be perceived as drawbacks by information users (Gatti, 2015). As a matter of fact, they may be seen as partial, i.e., not able to capture all the relevant dimensions of the phenomenon to be measured (Vaivio, 2004, p. 61), they may be rejected because of their lack of objectivity (Chiucchi & Montemari, 2016), or they may be criticized because they can cause ambiguity and subjective interpretation (Vaivio, 2004, p. 55). Third, the indicator score may also influence the information recipients' attitude toward using it or not in the decision-making process; the score may indeed highlight some aspects of the phenomenon being measured that catch managerial attention and require action, especially if the score is extremely positive or negative (Catasús & Gröjer, 2006). However, the score may also hinder the use of IC indicators if it does not confirm the information recipients' perception of reality; moreover, the lack of objectivity and completeness can be further reasons why users reject IC indicators (Chiucchi & Montemari, 2016).
What has been presented thus far shows that the literature has analyzed in depth the main barriers that can arise and the levers that can be used when an IC measurement project is carried out. Concerning barriers, most of the contributions have shown that they can lead to a complete failure of the project (Chiucchi & Montemari, 2016; Giuliani et al., 2016; Schaper, 2016; Nielsen et al., 2017). Limited attention has been paid to how these barriers can affect the project, namely to how they can contribute to (re)shaping or (re)moulding it depending on the nature and the magnitude of the barriers that emerge during the design and the implementation stage of the IC indicators. This paper aims to contribute to filling this gap by exploring the case of an Italian company that experienced many barriers during the design and implementation of an IC measurement project and by showing how they affected the project itself within the organization.

Method
To carry out the empirical research, the authors have adopted the case study method and have analyzed an Italian medium-sized company which designed, implemented, and used an ICMS.
As IC is a complex and context-dependent phenomenon (Mouritsen, 2006; Jørgensen, 2006; Montemari & Nielsen, 2013), the case study method is particularly suitable to achieve the research aim because it allows the holistic and in-depth exploration of a complex phenomenon in the real-life context in which it takes place (Yin, 2002; Scapens, 2004; Lukka, 2005).
The in-depth nature of the analysis is enhanced by the choice of a single case, which makes it possible for the researchers to obtain a richer and thicker understanding of the phenomenon under investigation; the term "richer" refers to the quality and quantity of information on the phenomenon itself (Ferreira & Merchant, 1992) and on the reasons that push actors to take certain decisions (Ahrens & Dent, 1998), while the term "thicker" is connected to the opportunities to draw theoretical reflections on the phenomenon under analysis (Baxter & Chua, 1998). The case was chosen purposefully (Patton, 1990) because some barriers deeply affected the IC measurement project, thus providing the opportunity to explore how these barriers reshaped the project itself.
The case was conducted by one of the authors of this paper using a "strong" interventionist approach (Kasanen et al., 1993; Lukka, 2005; Jönsson & Lukka, 2005; Dumay, 2010). In 2001, the cooperation between the researcher and the organizational actors aimed to introduce an "innovative construction" to solve one of the company's practical problems. In particular, the General Manager (GM) had invested a considerable amount of money to develop human resources, to structure knowledge in databases and procedural manuals, to improve the product design processes, and to strengthen customer relationships. He was sure that developing the company's intangible resources would affect the company's performance in a positive way. However, the company's management control system was not able to show and to measure the effects of these investments in IC, as it was mainly focused on financial outcomes. In light of this, the intention of the GM was to design and to implement a system able to measure the company's IC and its performance, as well as its contribution to the company's overall financial performance.
A collaboration between the company and the researcher began with the aim of achieving this objective; the measurement model chosen was the one that the researcher had designed during her studies on IC, as it was considered suitable to satisfy the GM's information needs. In particular, the model was inspired by the Meritum framework (Meritum Project, 2002) and it was based on the "intangible resources - activities" logic (Chiucchi, 2008). The GM expected the model to be used in an interactive mode (Simons, 2000) by promoting discussion, among the managers, on IC and on the managerial actions that would improve its performance.
The purpose of constructive research is twofold: solving a practical problem and making a theoretical contribution (Jönsson & Lukka, 2005, p. 5; Dumay, 2010, p. 48). With regard to the case under analysis, the interventionist project allowed the researcher to test the model she had designed for IC measurement so that any barriers and levers relative to its design, implementation, and use could be explored. While conducting the interventionist research project, the researcher acted as a "member of the team". As such, she was involved in decisions and actions, and was also considered responsible for ensuring that the objective of the measurement project would be achieved. This role entails, on the one hand, the opportunity to gather more subtle data, but on the other, the risk of "going native"; in other words, there is the implied danger of biasing the research findings (Jönsson & Lukka, 2005, p. 20). To eliminate or, at least, to mitigate these risks, some counterweights were adopted by the researcher; for example, she made sure to have periods of absence from the company in order to safeguard the research gaze, especially during times when personal involvement was high.
Participant observation, semi-structured interviews, and document analysis (Denzin & Lincoln, 1998; Yin, 2002; Scapens, 2004) were the techniques used to collect empirical data. The notes referring to observations were completed within one day of the meeting, meaning that the "24-hour rule" was applied (Eisenhardt & Bourgeois, 1988; Scapens, 2004). The semi-structured interview was chosen as a technique for collecting data because it offers the opportunity to address themes that come to light during the interview, thus allowing a deeper understanding of the motivations that drive the interviewee's actions (Kvale & Brinkmann, 2009; Qu & Dumay, 2011). As suggested by Kreiner and Mouritsen (2005), during the semi-structured interviews, the interviewees were posed reflective questions and were asked for anecdotes and examples, in an effort to obtain further stories, trigger more thoughts, and encourage the provision of detailed information. Moreover, data was also gathered through informal talks, e-mail exchanges, and phone calls. A qualitative data analysis was performed on the data collected, the aim being to focus on the meanings the respondents attributed to the IC indicators while keeping sensitivity to the context (Denzin & Lincoln, 2000; Patton, 2002).

The Design and the Implementation of the ICMS
Aestas is a medium-sized Italian manufacturing company. The company's competitive advantage depends on technological and design innovation and on the services associated with its products. The latter are designed both by company designers and by world-renowned designers and architects who cooperate with the company in designing the products to insert in their projects and which can later be sold by the company to its own customers, as well. Over the years, Aestas invested a sizeable amount of resources to develop intangible assets, as the GM, Dr. Red, was convinced that the outcomes related to them were not only significant, but also the "real" source of company value. Nevertheless, at that time he was not able to make these outcomes visible and, consequently, was unable to measure them.
After a meeting with a researcher who was conducting studies on IC measurement and reporting, a project was officially started with the university in 2001. Its aim was to improve the existing management control system by introducing tools for measuring the company's IC, to both fully understand the role of IC as a determinant of the company value and, at the same time, to manage it. The following sections analyze how the ICMS was designed and implemented (Section 4.1), the barriers that arose during the design and implementation processes of IC indicators (Section 4.2), and their effects on the ICMS (Section 4.3).

Building the ICMS
The project team in charge of carrying out the measurement project was composed of the researcher, the management accountant, and the GM, who acted as the sponsor and "gatekeeper" of the project. At the beginning, the researcher who was part of the team was also the "expert" and she guided the design and the implementation of the system. It was her intent, which dovetailed with the GM's expectations, that the system would be used interactively (Simons, 2000), thus favouring discussions and strategic debate among managers and subordinates. At the same time, the ICMS was supposed to be gradually and autonomously managed by the management accountant. Therefore, the latter was taught all the possible ways of measuring IC and was involved in discussions pertaining to the model for measuring IC that was going to be implemented within the company.
In order to develop a measurement system that could support the management of IC and of its performance, the first stage entailed identifying the key intangible resources. Afterwards, the activities, or actions, that allow intangible assets to be created and developed were also identified. This was done by exploring the actual and potential ways of increasing the extent and the quality of the intangible resources.
During this first stage, managers were actively engaged in order to help them to make sense of IC and build their own meaning of it.The idea was that if the final aim was to develop managerial actions and to improve the management of IC, those who were responsible for these resources and for their management had to be personally involved in the construction of the information which would assist them in this task.
Once the intangible resources and the activities had been identified, the indicators were developed. The aim was to monitor, on the one hand, the efficiency and the effectiveness of the activities undertaken for creating and developing IC and, on the other hand, to gain control over the obtained results. Regarding the activities that allow IC to be managed, efficiency was monitored through measures referring to their cost or by resorting to non-financial indicators (e.g., for training activities, the per-capita cost and the per-capita hours can be calculated). Effectiveness was measured in terms of punctuality, reliability, perceived utility, timeliness, etc. (e.g., for training activities, effectiveness can be monitored by the growth/decline rate of employees' potential or of competencies measured after the activity takes place).
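To make the indicator logic above concrete, the efficiency and effectiveness measures for training activities could be formalized as follows. This is an illustrative sketch only; the notation is introduced here for exposition and does not reproduce the case company's actual formulas.

```latex
% Illustrative efficiency indicators for training activities
% (hypothetical notation, for exposition only)
\[
\text{per-capita training cost} = \frac{\text{total training cost}}{\text{number of employees trained}}
\qquad
\text{per-capita training hours} = \frac{\text{total training hours}}{\text{number of employees trained}}
\]
% An effectiveness indicator could track the growth/decline rate of
% competencies measured before and after the training activity
\[
\text{competence growth rate} =
\frac{\text{competence score}_{\text{post}} - \text{competence score}_{\text{pre}}}
     {\text{competence score}_{\text{pre}}}
\]
```

In this reading, the first two ratios capture the "cost" side of the activity, while the growth rate captures whether the activity actually improved the underlying intangible resource.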
The management accountant, the researcher, and the managers cooperated in all steps, i.e., identifying the intangible resources, selecting the activities for developing them, and establishing the IC indicators. Concerning the latter, the researcher made an initial proposal which was discussed, reviewed, and refined in collaboration with the management accountant. Finally, the proposals were shared and discussed with the managers. Changes and adaptations were made according to their suggestions and reactions to the indicators. Some of these referred to the label assigned to certain indicators, which was perceived in a negative way, while others concerned the way certain indicators were calculated. In some cases, indicators were replaced by new ones or simply eliminated.
The high (estimated) benefits of this cooperative procedure were presumed to be attained at a certain "cost" to the managers. Involving them in the building of the system meant that they had to "invest" some of their work hours by participating in a project whose outcome would not be evident in the short term and which would "steal" time away from their ordinary activities. Scheduling managers' participation in the project was not an easy task. In fact, breaks between one meeting and another were sometimes long, and meetings were sometimes postponed to make room for more urgent daily activities. Table 1 shows the intangible resources identified in the case company, articulated according to the three IC categories, while Table 2 displays some of the indicators designed and implemented in Aestas.

Barriers to IC Measurement
Albeit with some delays, the design of the ICMS was accomplished. Intangible resources, development activities, and indicators were established. Nevertheless, when it came time to present and discuss them in meetings with company managers, some barriers arose.
More specifically, when reflecting upon the intangible resources that were the key drivers of the company performance, many managers brought up the company software. An in-depth analysis of this driver led to the realization that not all the software represented a key driver. Consequently, a distinction was made between "strategic" and "other" software. The latter was intended to be standard software useful for carrying out ordinary activities; the former, instead, was the software which was essential for carrying out value-added activities, since it was able to increase their speed, precision, effectiveness, etc. Thus, this distinction was considered helpful to more adequately plan training activities on software, investment in software updates, etc. Nevertheless, when it came time to specify which software was strategic and which was not, some problems arose. A list of every type of department software had to be made and interviews had to be carried out to discern whether it was strategic or not. Moreover, a survey was conducted to measure personnel knowledge and satisfaction with reference to it. The IT department was expected to take a front-row position in participating in these activities, which were time-consuming. However, because it was already involved in other projects, these activities were postponed, and this part of the measurement system got held up due to these calculation problems. In other words, the project was not perceived as important and urgent. Consequently, it was not given the necessary priority by the people involved in it, people who were fundamental to ensuring the implementation of the system.
Other problems also arose which seemed to be related to the way the management accounting system worked and to the changes that introducing the ICMS implied. The company management accounting system was composed of a budget, managerial reports predominantly based on financial measures, and an accounting system for calculating product costs and for controlling efficiency. Therefore, the heads of departments such as IT or Human Resources (HR) were used to contributing to the company budgeting and reporting processes by simply indicating the amount of expected/actual expenses. Participating in the IC measurement and reporting process implied doing something more. In fact, for every intangible resource related to their activities, they were asked to quantify a specific objective, define actions to reach it and, above all, to publicly discuss them. In certain departments, the system implied going about a different way of planning and measuring the activities undertaken and also required a higher "exposure" of the objectives and of the results achieved. Consequently, some of the managers did not feel at ease with the ICMS.
An example may help to understand this aspect of the problem. Considering the importance attributed to HR over the past few years, the GM had promoted investments to develop employee recruitment and evaluation processes. When establishing indicators for measuring employees' competences, difficulties arose since employees' competences were not being mapped at the time and, therefore, there were no indicators able to monitor them (e.g., number and level of competences, their rate of growth, etc.). The head of the HR department did not accept the researcher's suggestion to undertake a pilot project aimed at measuring employees' competences. Because this option was not viable, the researcher and the management accountant proposed proxy measures ("employee seniority index" and "career track index"), which were the only ones that could be calculated at that time. However, the HR manager did not accept them either, claiming they were unreliable, and he did not want to monitor the results of the activities carried out by his department with measures he did not perceive as trustworthy. It is worthy of note that, when the management accountant and the researcher presented these measures to the HR manager, they had clearly identified them as proxies, thereby underlining their obvious limitations and pointing out the fact that their scores could not be considered completely accurate or reliable.
Other criticisms arose with regard to the label given to some of the IC indicators. In order to focus attention on the most relevant competences, i.e., those that contributed to the company's competitive advantage, the project team proposed categorizing employees and their competences as "easily replaceable" and "source of competitive advantage". When the categorization and the corresponding indicators (i.e., "number of easily replaceable competences") were presented to managers, they were rejected. Upon closer analysis, their concern revolved around how employees would feel if their competences had fallen into this category. Their opposition did not regard the substance of the indicator but rather its appearance, i.e., its label. Sentences like "Guess what employees would ask each other: -Hey dude, are you among the easily replaceable competences or not?" clearly reflect the managers' concern. Although the management accountant and the researcher had said that these labels were only "names" that could be changed, the words had very clearly caught the full attention of the managers, who completely disregarded the underlying substance. In other words, they neglected to even look at the competences in terms of which ones were standard and which ones, instead, were key, nor did they consider what their level was. These were the key points the management accountant and the researcher had tried to address through the proposed categorization and indicators. The managers' negative reaction made the researcher and the management accountant aware of how important the label of an indicator can be.
To sum up, at the end of the period in question, the system seemed to have arrived at a dead end. Some relevant intangibles could not be measured in the way originally envisaged. The researcher and the management accountant felt they were at a crossroads: they could either give up measuring these resources in the way they thought most appropriate (i.e. as designed) and settle for other measures, or persist in their efforts to build the most complete system possible. The latter option, however, entailed the risk not only of taking more time, but also of losing the managers' attention and engagement. As frequently happens, neither of these paths was chosen; instead, a third one was found. The researcher and the management accountant engaged in brainstorming in order to break the impasse by overcoming the obstacles that stood in the way.

How Barriers Reshaped the IC Measurement Project
Given the barriers that emerged during the design phase and, particularly, during the implementation stage, the researcher and the management accountant started an in-depth reflection process aimed at identifying the best way to overcome them. A possible solution came from the GM, who underlined the need to "sell the project to the managers". The idea came to him when he realized that some managers involved in the project had not completely understood how it would make a positive contribution to their daily work. This was the case of the Marketing manager who, during one meeting, asked the researcher and the management accountant: "What do I stand to gain from the project?" It seemed, then, that the benefit the management accountant and the researcher perceived as the most important, i.e. having a system that made it possible to govern the company's intangibles, was not perceived as important enough by this manager; he expected different or greater benefits. The management accountant and the researcher on the one side, and this manager (and probably other managers as well) on the other, seemed to hold very different viewpoints. Therefore, the team members decided that it was up to them to tip the scales: if the entire project (in terms of its success or failure) was at stake, it was necessary to increase the "revenues", lessen the "costs", or work on both sides of the equation. For this reason, changes and adaptations were made to the ICMS in the hope of counteracting the reluctance perceived in some of the managers' behaviours and observed reactions. These changes and adaptations differed in nature and complexity. Sometimes it was enough to simply change the name given to an indicator; for instance, "level of competence of easily replaceable competences" was changed to "level of standard competences" in order to reduce the managers' reticence.
In other cases, it was necessary to change the indicator itself in order to identify a proxy that the managers perceived as more reliable. Following along these lines, a "Sales Life Cycle" was designed in order to map and measure all the activities to be performed during each stage of a relationship with a customer (scouting, promotion, prescription, sale, follow-up).
All in all, the design and implementation of the ICMS at Aestas were fraught with barriers that hampered its complete implementation. Thus, the researcher and the management accountant undertook actions to try to overcome these barriers or, at least, to mitigate their effects. These actions were sometimes unsuccessful, so that the barriers persisted and some IC indicators were neither implemented nor used, as happened with the indicators proposed to the HR manager. In other cases, the actions were successful and limited the effects of the barriers, leading not only to a stable adoption of the IC indicators for decision-making purposes, but also to a further evolution of these indicators over time. This was the case in the Marketing department, where actions aimed at increasing the perceived reliability of the IC indicators actually persuaded the Marketing manager to use them when planning actions to acquire new customers.

Discussion and Conclusions
The purpose of this study was to investigate the barriers that can emerge when an IC measurement project is carried out and, in particular, how these barriers may affect the design, the implementation, and the possible evolution of an ICMS. To reach this objective, the paper has presented a case in which barriers arose during the design and implementation of IC indicators, in order to explore the effects they had on the IC measurement project.
From a broad perspective, the case analysis revealed that getting an IC measurement project up and running in this company was quite difficult. Even more difficult was ensuring that the project was understood and accepted by the managers and generally seen as something worth carrying out. At the beginning of the project, it was evident that the benefits the management accountant and the researcher attributed to the ICMS (i.e. providing information to properly plan and govern the company's intangible resources) were not relevant enough for the managers; in their view, the costs of participating in the IC project were clearly higher than the revenues.
Calculating the indicators implied, at best, an in-depth analysis or a new way of processing existing data; more often, it required collecting new data through interviews, surveys, etc. This meant that the calculation process, in which the various company departments were directly involved, was time consuming and "competed" with other activities already scheduled. For these reasons, the calculation process was often postponed as the already-planned activities were given priority. Similarly to what Schaper (2016) found, the IC measurement project was considered too costly when compared to its benefits, and many managers decided to focus on other priorities. Moreover, many of the indicators used for measuring IC were non-financial in nature and their calculation process presented margins of subjectivity at different levels. As a result, some managers perceived the IC information as unreliable and too open to criticism. Thus, at that stage, the IC measurement project stopped because of two important barriers: the heavy workload entailed in the design and calculation process of IC indicators (Demartini & Paoloni, 2013) and their perceived limited reliability (Chiucchi & Montemari, 2016).
The researcher and the management accountant adopted different measures in order to limit the negative effects of these barriers or to try to overcome them. During the design and implementation stages, the management accountant experienced a learning process, primarily concerning issues related to other areas, which was fundamental to gaining the trust of managers (employed in those areas) who were particularly sceptical about the implementation of the ICMS (Chiucchi, 2013; Gatti et al., 2018). From this perspective, it could be stated that the abovementioned barriers stimulated a stronger dramatization of the IC indicators. Moreover, the learning process that affected the management accountant and some of the managers might have been expected to lead to the full adoption of the ICMS. Nevertheless, as stated above, this did not happen. Despite this apparently negative outcome, and contrary to previous studies showing how barriers can block the design as well as the implementation of an ICMS (Chiucchi & Montemari, 2016; Giuliani et al., 2016; Schaper, 2016; Nielsen et al., 2017), in the Aestas case the barriers acted as a sort of "filter". In fact, although the whole system was not used by the managers, at least initially, some of the indicators did become part of the company's management control system at a subsequent stage. In other words, the abovementioned barriers filtered out those indicators that were not perceived as useful or reliable and supported the adoption of those that the managers actually considered useful. If, on the one side, this did not ensure the implementation of the whole ICMS, on the other side it made it possible to select and legitimate some IC indicators, which later became part of the management control system. This offers a new and different view of such barriers. Rather than considering them merely as factors which can block the adoption of ICMSs, they can be seen as factors which can challenge their implementation, strengthening those IC indicators that are perceived as reliable and useful, and allowing the others to fail.
Another relevant finding of this paper lies in the fact that involving managers in the design step does not always lead to positive outcomes. As suggested by Dumay (2009) and Chiucchi and Dumay (2015), the IC indicators were designed to foster the managers' understanding of IC, catch their attention, and let them see IC itself as a solution to their problems. However, this attempt did not lead to the development of IC interventions; rather, the case analysis showed that involving managers in the design step can also stop the measurement process, as happened with the HR manager, who refused both to design direct measures of competences (e.g. number and level of competences, their rate of growth, etc.) and to use proxy measures (e.g. the employee seniority index and the career track index). As also found by Vaivio (2004), the HR manager's reticence to carry out the IC measurement project was related to the public exposure that the activities performed and the results achieved would imply. This reticence may also be attributable to the fact that adopting the ICMS might have influenced the way certain activities were managed. The case analysis showed that, when IC indicators are highly provocative, involving managers in the design step can hinder, and even halt, the measurement process, and that limited reliability is the main reason given by subjects who reject the IC measures.
Finally, it is important to acknowledge the limitations of this paper. Although the use of a single case study provides in-depth and rich data, it limits the generalizability of the observations to other companies. Moreover, it is worth noting that the results obtained were influenced by the researcher's intervention, as the case was conducted using a "strong" interventionist approach. Concerning avenues for future research, it could be interesting to understand whether these findings can be extended to other types of non-financial measures referring to objects of analysis different from IC, particularly in organizational settings where non-financial indicators play a relevant role. In addition, it could be fruitful to further explore the reasons and conditions under which company-wide measurement systems fail to achieve the expected impact on a company's global management control system, while instead fostering long-lasting, circumscribed changes in local (departmental) management control systems.

Table 1 .
The intangible resources at Aestas company

Table 2 .
Some of the indicators used to monitor intangible resources at Aestas company