Teaching technological forecasting to undergraduate students: a reflection on challenges and opportunities

This discussion article contains a reflection on the challenges I encountered while teaching technological forecasting to undergraduate students. I group these challenges into three categories: the inherent uncertainty of the forecasting process, the lack of appropriate learning materials, and methodological inconsistencies found in the literature.


Introduction
Three years ago, I was tasked by my department at the university to create an introductory course on technological forecasting. At the time of writing, I have taught the course twice, and I am preparing to teach it for a third time. This paper is a reflection upon my personal experience and the challenges I have encountered along the way. The insights I provide come from approximately a hundred hours of course preparation; twenty hours of interaction with educational support staff; fifty hours of interactions with students; two standardized student evaluations; and a focus group with students, organized by the university's quality assurance office after the course concluded in its first year. Appendix A contains additional information about the learning objectives and the assessment methods of the course.
While I was given almost complete freedom to design the course, I was bound by two structural constraints which had a profound impact on what I could do with our students, and of which the reader should be aware. First, the course would be an elective in the Bachelor college, so students could register for it as early as their second year. The vast majority of students are enrolled in the Industrial Engineering program. After graduating, most of them pursue a Master's degree in either Innovation Management or Operations Management. At the end of their studies, they are usually employed by a high-tech firm in the region. This background implies that, while students had taken mandatory lessons in basic calculus, statistics, and programming, they had not yet been trained in multivariate statistics or advanced innovation management topics. I needed to make sure that the concepts related to technological forecasting were explained in a relatively simple manner. Second, the course was limited to a duration of eight weeks, with only 24 hours of in-class interaction and a maximum workload of 140 hours. Therefore, I had to be selective about the topics covered in class, and the depth with which I covered each of them. If I resorted to supplementary materials such as papers and videos to read or watch at home, I needed to ensure they could be digested by undergraduate students.
The next sections focus on the most important challenges I encountered while preparing and delivering the course, namely: the inherent uncertainty in the forecasting practice; the lack of adequate teaching materials for undergraduate students and newcomers; and some methodological inconsistencies found in the literature. For each of the challenges, I present how I have tried to (partially) overcome them, and which issues still need to be resolved. Based on my experience, I conclude with a discussion on how to tackle some of the challenges I encountered in a collective manner, to facilitate the task of future teachers. My ultimate goal is to promote a healthy debate among forecasting practitioners about the best way to teach this discipline.

Challenge 1: Technological forecasting is an inherently uncertain process
Technological forecasting is an iterative process during which the forecaster progressively becomes familiar with a certain technological field (Porter and Cunningham, 2004). At the beginning, the problem might be fuzzy, and forecasters need to interpret the data, filtering out irrelevant results and focusing only on what they believe are the most interesting aspects or drivers of technological change. Forecasters only know where they start, but never where they will end. This slow process of discovery might be what makes forecasting fascinating for us researchers, but it can be a nightmare for undergraduate students. In most cases, students just 'want to pass' the course in the easiest way possible, without having to resort to their creativity. That said, even in the case of motivated students who are interested in the topic, I have observed that most struggle to cope with uncertain outcomes. In the first week, when they read the introductory chapter by Martino (1992) and his explanation of why a good forecast is not necessarily one which comes true, their natural aversion to ambiguity intensifies. What do they need to do? How are they going to know if what they do is right? How is it going to be graded?
To tackle this problem, I chose a teaching method tailored to open-ended problems. After consultation with our university's teaching support specialists, I designed the course following the guidelines of challenge-based learning (CBL) (Gallagher and Savage, 2020), which is strongly related to the concept of experiential learning (Wurdinger and Carlson, 2009). In CBL, students must work in interdisciplinary teams to identify, analyze and propose a solution to a problem presented by a challenge owner. In my case, I have worked with local companies which often collaborate with the university. From a teacher's perspective, in CBL the focus is more on the process (how students reach their solution) than on the product (what the results are).
One of the advantages of CBL is that it gives students a certain freedom to choose the direction of their group projects. For instance, in the first year the challenge was related to the automation and sustainability of the construction industry. Some groups looked into chemical components and production processes to make concrete more sustainable; other groups looked into the projected evolution of production costs for concrete structures, or the effect that greater automation could have on occupational safety. Student evaluations suggest that this freedom provides students with a higher sense of ownership, which increases their motivation. In addition, from a collective learning perspective, having different topics within the same challenge makes it more enriching for students, and more interesting for the collaborating company.
In addition, CBL, like other practice-based teaching methods, may help foster critical thinking among students (Gelder, 2005; Seibert, 2021). Students start by analyzing a societal issue, and asking themselves whether technology can provide a solution to that problem. They need to choose which method is best suited for that problem, decide where to find enough data, and defend their results, acknowledging any potential limitations. I consider this teaching method appropriate for forecasting, given the unstructured nature of forecasting problems. My role as a teacher is to push students outside of their comfort zone by asking why their approach is (not) good, or why their results look correct (or not). Given that teacher-student interactions are stronger in CBL than in traditional lecturing, as the course progresses, students become more involved with the course and more willing to speak up and discuss their approaches.
CBL comes, however, with two limitations I consider important. First, as a teacher I need to ensure that students acquire the knowledge required to analyze data and produce forecasts. My experience suggests that I cannot rely on students completing all the independent learning activities (readings, videos, programming exercises). Consequently, during the first four weeks of the course I combine traditional, theoretical lectures with the introduction of the challenge. During these four weeks, I teach the very basics of technology diffusion and life cycle theories, judgmental methods, and bibliometrics (academic publications and patents). I assign one or two mandatory readings for each topic. Furthermore, I recorded several videos which cover more applied topics, such as where to find good data, the pros and cons of each database, how to assess the quality of the data, and data cleaning. At the end of week four, I conduct a mid-term exam to assess whether students know the basics before reaching the data analysis stage in their projects. This way, the entire second half of the course is devoted to the challenge, and students can focus on data collection, analysis and interpretation.
The second limitation is that CBL is quite resource intensive and may not scale well with large numbers of students. To ensure that groups progress as desired, I assigned teaching assistants to mentor them through the project. Each teaching assistant is assigned two student groups, and I meet with the assistants every session to check teams' progress and potential hurdles. In addition, I hired another teaching assistant exclusively to tackle issues related to programming, which for some students might be the most important barrier in the course. Furthermore, as a teacher I constantly receive requests from students who need help or clarifications, and I need to act as the interface with the challenge owner. It is time consuming, although it also feels rewarding in the end.

Challenge 2: Lack of appropriate learning materials
Students should have a good manual or reference source they can consult when they have questions about a topic covered in class. This is even more important in CBL, as I expect students to conduct most of their group work independently, and to delve deeper into the methods as they require. Given that they are undergraduate students, an ideal manual for me would be one that 1) covers the basics of a variety of forecasting methods, 2) explains the strengths and limitations of each method, 3) provides examples which can easily be checked and replicated by the reader, and 4) uses language which can easily be understood by non-researchers. What I found is that most seminal works are either too complex, or too old.
When I started designing the course, I loosely followed the structure of Porter and Cunningham (2004). It is a concise book which speaks clearly to the reader and provides a good overview of what can be done, without going into many methodological details. I thought it was more digestible for non-experts than Porter et al. (2011), in which one can more easily get lost among the long list of forecasting methods. However, during my first year of teaching I realized that Porter and Cunningham's approach might have been too simple for the students. To my surprise, both in informal conversations and in final evaluations they asked for more theoretical grounding as well as more methodological detail.
In the second year, I decided to switch to Martino (1992) as the reference manual. It is a much bigger book, but I think it has a pleasant narrative tone and, compared to others, it places more emphasis on why forecasting works the way it does. So far, I have not received any complaints about it from the students. However, this choice comes at the expense of being dated in terms of data analytics, and of not covering topics such as technology roadmapping which have become popular in more recent years. I compensate for this shortcoming by providing students with academic papers which I believe give a good overview of the basic methods without using extremely complex language. Topics covered by the papers include how to connect funding data to Technology Readiness Levels (TRL) (Jeffrey et al., 2014); the differences and time lag between academic publications and patents (Lezama-Nicolás et al., 2018); the limitations of using experts in decision-making (Bolger and Wright, 2017); and how to create a technology roadmap (Pearson et al., 2020).
The area where I believe most work needs to be done is the development of up-to-date materials to teach data collection and analysis tools. All the manuals I found either present information which is too old, or come bundled with their own proprietary software. This is shocking in a day and age where a lot of data is publicly available at little or no cost. In fact, when I started to search online for resources that my students could use, I found almost the opposite problem: there are almost 'too many' software packages available, and content curation is needed, as it is not always clear how their functionalities differ.
In accordance with our university's rules, students use Python as the main programming language. Without much effort, I was able to find several libraries devoted to collecting and handling data from diverse sources such as Web of Science, Scopus, Google Scholar, Google Patents and more. For network-based bibliographic analysis, tools like VOSviewer and Biblioshiny are powerful and beginner-friendly. PATSTAT offers a free one-month trial, which is enough for my students. The Lens (lens.org) offers free access to a basic, but curated and clean, patent database which contains basic analysis tools and is perfect for students and practitioners starting in the field who do not need to perform very complex analyses. My experience with these tools has been quite positive, and students have sometimes surprised me by finding ways to conduct complex analyses I thought they would not be able to do.
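To give a sense of the kind of analysis students perform once data is collected, the following minimal Python sketch counts annual and cumulative publications from a bibliographic export. The CSV layout and column names below are hypothetical; real exports from tools such as The Lens or Scopus use their own (richer) formats.

```python
import io

import pandas as pd

# Hypothetical CSV export from a bibliographic database (illustrative only).
csv_data = io.StringIO("""title,publication_year,citations
Paper A,2018,12
Paper B,2019,5
Paper C,2019,20
Paper D,2021,3
""")
df = pd.read_csv(csv_data)

# Annual and cumulative publication counts: the raw input for a
# growth-curve analysis of a technology's life cycle.
counts = df.groupby("publication_year").size()
cumulative = counts.cumsum()
print(cumulative.to_dict())
```

The cumulative series produced this way is exactly the kind of time series students later fit with growth models.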

Challenge 3: Methodological inconsistencies in the literature

Theoretical substantiation of forecasting methods
In the last fifty years, the process through which innovations are developed and commercialized has changed dramatically (Meade and Islam, 2006). The number of stakeholders involved in innovation has increased substantially (Howells, 2006). A great portion of the basic research has moved from large companies to universities and research centers (Leydesdorff, 2000). Recent trends in innovation management highlight the emergence of innovation platforms and the so-called ecosystems (Gomes et al., 2018), and the phenomena of open innovation (Bogers et al., 2018) and user-centric innovation (Hopkins et al., 2011) to provide solutions to current societal challenges.
Parallel to changes in the innovation environment, innovation studies have witnessed the development of a wide variety of theories to explain how innovations reach the market and evolve. Older innovation models, such as the linear model of innovation (Balconi et al., 2010) or the Kline-Rosenberg chain-link model (Kline and Rosenberg, 1986), while still having explanatory power, are much less used. Instead, scholars currently focus on theories which highlight the social aspects of innovation and stakeholder heterogeneity, such as technological innovation systems (Bergek et al., 2008); the multi-level perspective and transitions theory (Svensson and Nikoleris, 2018); technology and industry life cycles (Klepper, 1997; Taylor and Taylor, 2012); and innovation platforms (Nambisan et al., 2018), among many others.
However, the forecasting literature has evolved in ways that do not necessarily converge with these new theoretical perspectives. I found it relatively easy to establish links between the classical forecasting literature and Rogers' (1962) diffusion of innovations theory, or Vernon's (1966) product life cycle. In the modern forecasting literature, links with emerging trends in innovation studies are less explicit. If nothing is done, this divergence could worsen in the near future. With the advent of modern data science techniques and artificial intelligence (AI), researchers have started to develop plenty of algorithms with a forecasting purpose. While AI brings exciting possibilities to the field, I think there is a real danger of focusing too much on how to perform an analysis, and too little on why, and why a given method works. This could be an unfortunate deviation from the historical roots of this field.

Misalignment between considered best practices and published forecasting research
During these two years, it has happened several times that a student would come to me and ask: 'in the lecture you taught us that we should (not) do this, but in this paper…'. It is an awkward situation, as it may undermine both the students' trust in the teacher and the teacher's self-confidence. In some cases, I had to thoroughly check and double-check my sources to make sure that what I was teaching was right. However, the other side of the problem is that there is an undesirably high number of published papers which, sometimes for good reasons, do not follow classical teachings and norms, yet do not provide any convincing explanation of why.
A notable example of these discrepancies appears in the use of S-curves as growth models to predict the life cycle of a technology. A historically well-known problem is the estimation of the upper limit, or saturation level, in models such as the logistic or Gompertz curves. As Martino (2003, p. 727) explains, "it is generally recommended that the forecaster not try to estimate the upper limit of a growth curve by using a fitting procedure". The reason is that a slight deviation in the input data, especially when only a few data points are available, translates into a large deviation in the upper limit. Furthermore, the growth curves which best fit the historical data tend not to be the best models for forecasting purposes (Young, 1993). However, many papers published in recent years do use regression techniques to estimate the upper limit, and/or choose their growth model based on basic statistical measures such as R² (e.g. Adamuthe and Thampi, 2019; Yuan and Cai, 2021).
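The sensitivity of a fitted upper limit can be demonstrated to students in a few lines of Python. The sketch below uses synthetic data with illustrative parameter values, not data from any real technology: the true saturation level is 100 and the inflection point lies beyond the observation window, so a small perturbation of a single early-phase data point can shift the estimated limit noticeably.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """S-curve with saturation level L, growth rate k and inflection point t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical series: true saturation L = 100, inflection at t0 = 15,
# but only the first 12 observations (the early growth phase) are available.
t = np.arange(12, dtype=float)
y = logistic(t, 100.0, 0.5, 15.0)

# Fit on clean early-phase data: recovers the true parameters.
p_clean, _ = curve_fit(logistic, t, y, p0=[80.0, 0.4, 12.0], maxfev=20000)

# Perturb only the last observation by 5%: a small change in the input
# can shift the estimated saturation level disproportionately.
y_pert = y.copy()
y_pert[-1] *= 1.05
p_pert, _ = curve_fit(logistic, t, y_pert, p0=[80.0, 0.4, 12.0], maxfev=20000)

print("fitted L, clean data:    ", round(p_clean[0], 1))
print("fitted L, perturbed data:", round(p_pert[0], 1))
```

Running the two fits side by side makes Martino's warning concrete: with pre-inflection data only, the upper limit is poorly identified, which is why it is better fixed from external knowledge (e.g. market size) than estimated by the fitting procedure.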
In my classroom, a group of students ran into similar trouble. At first, they used linear regression to fit a logistic curve to cumulative publications on alignment technologies in photonics. They obtained an R² of 0.99, but the fitted curve suggested that the end of the life cycle would come around 2030, which the students considered extremely early for technologies that were quite immature according to their conversations with an expert on the topic. In the end, they changed their model. As a teacher, I was happy that my students displayed critical thinking and reflection. As a researcher, I understand that, in cases like this, choosing another model might not look 'scientific' and might raise concerns among peer reviewers. In many respects, forecasting remains more of an art than a science.
The other big controversial topic is the difference between description and prediction. As Porter and Cunningham (2004) elaborate, both are useful for managers and researchers, but they have distinct purposes and requirements. In particular, they emphasize the need for forecasters to understand the underlying mechanisms of technological change, and to combine data on R&D outputs (patents, publications, etc.) with data on R&D inputs such as funding patterns, expert opinion, or industry roadmaps. Triangulating different data sources may increase the forecaster's awareness of potential shocks which may alter current trends. However, very few studies in the literature combine both inputs and outputs.
Other questions have appeared during teaching for which there is no clear-cut answer. One is the misuse of methodological terms. For instance, the word 'Delphi' has been used so broadly that most applications nowadays have little to do with its origins, and not always with the desired methodological rigor (Hasson and Keeney, 2011). Other issues are more minor and practical: does increased complexity in a model translate into higher accuracy? Should the different assessments of an expert pool be aggregated? In order to improve a forecast, is it admissible to delete some of the early data in cases where there is a long tail of values close to zero? The answers to these questions are arguably highly contextual, and the details are hard to explain to undergraduate students.

Treatment of uncertainty in the forecasts
We cannot predict the technological future. Some forecasts might reflect reality more accurately than others, but there is always an intrinsic uncertainty that the forecaster needs to embrace. As such, I considered uncertainty a topic I could not overlook as a teacher, even when my target audience was new to the field. The problem I found here is that, while there is a large number of theoretical and method-oriented papers which discuss how to treat uncertainty (usually making use of complex mathematics, and oriented towards time series), it was difficult to find empirical papers which brought that theory into practice in a comprehensible manner. Given the undergraduate nature of my course, I had to strike a balance between methodological rigor and complexity.
To make it simpler for my students, I divided the treatment of uncertainty into three components. The first component is the quality of the data, which depends on how good the data sources are and how good the search query is. I put emphasis on developing a good search query, and made students compute recall and precision values for their searches. This way, they realized how many false positives they had, and adapted their queries accordingly, observing that the more specific their search was, the higher its precision.
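Computing precision and recall requires only set operations, so it is well within reach of second-year students. A minimal sketch follows; the document IDs and set sizes are hypothetical, purely for illustration.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a search query.

    retrieved: IDs of the documents returned by the query
    relevant:  IDs of the documents judged relevant (gold standard)
    """
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = retrieved & relevant
    precision = len(true_positives) / len(retrieved) if retrieved else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: 50 documents retrieved (IDs 0-49), of which 30 are
# relevant; 40 relevant documents (IDs 20-59) exist in total.
retrieved = set(range(50))
relevant = set(range(20, 60))
p, r = precision_recall(retrieved, relevant)
print(p, r)  # 0.6 0.75
```

A more specific query shrinks the retrieved set, which tends to raise precision at the cost of recall; making students observe this trade-off on their own queries is the point of the exercise.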
The second component is how good the chosen prediction model is. Traditionally, authors have used confidence intervals to express this uncertainty (Martino, 1992; Porter and Cunningham, 2004). I think, however, that confidence intervals are a good way to show how good your fit is, but not how good your prediction is. As mentioned in the previous section, one may have a very good fit but a poor prediction. To ameliorate this problem, I ask my students to argue, on theoretical grounds, why one model might be better than another, regardless of the quality of the fit. To this end, I think that CBL works better than the 'cookbook approach' at making students think critically about which method is best suited to achieve their goal, and at making them aware of the strengths and weaknesses of their analysis.
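The gap between fit and prediction is easy to demonstrate numerically. In the synthetic sketch below (illustrative parameter values, no real data), an exponential model fitted to the early phase of a logistic process achieves a near-perfect in-sample R², yet its long-range forecast overshoots the true saturation level by orders of magnitude.

```python
import numpy as np

def logistic(t, L=100.0, k=0.5, t0=15.0):
    """S-curve with saturation level L, growth rate k and inflection point t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical early-phase data: 15 yearly observations before the inflection.
t_train = np.arange(15, dtype=float)
y_train = logistic(t_train)

# In its early phase a logistic looks exponential, so a log-linear
# least-squares fit of an exponential model matches the history very well.
b, a = np.polyfit(t_train, np.log(y_train), deg=1)  # slope b, intercept a
log_fit = a + b * t_train
ss_res = np.sum((np.log(y_train) - log_fit) ** 2)
ss_tot = np.sum((np.log(y_train) - np.log(y_train).mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("in-sample R^2 (log scale):", round(r2, 4))

# But a good fit is not a good forecast: at t = 40 the exponential
# explodes while the true process saturates near 100.
exp_forecast = np.exp(a + b * 40.0)
print("exponential forecast at t = 40:", round(exp_forecast))
print("true value at t = 40:", round(logistic(40.0), 1))
```

The in-sample statistic says nothing about which model to trust beyond the data, which is precisely why I push students toward theoretical arguments rather than goodness-of-fit measures.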
The third component is whether, and why, the current trend will continue in the future. This question cannot be answered objectively, but I ask my students to at least show signs that there is growing interest in the diffusion of the technology among the research community, government, and industry. This can be done through a review of the (grey) literature, something that students struggle with: for instance, by looking at future funding plans and industry roadmaps, or by speaking with experts. For teaching purposes, I found it particularly useful for my students to speak with PhD candidates at our university, given that they are considerably easier to reach than industry members, and they should have a good understanding of how mature certain industry applications are.

Discussion
The challenges I have mentioned in this paper are not unique to technology forecasting. Indeed, educators and researchers in areas such as chemistry (Browne and Blackburn, 1999), biology (Sundberg et al., 2005), and even qualitative research methods (Graebner et al., 2012) have warned against the use of a 'cookbook approach' in teaching. The proliferation of this teaching style is a complex phenomenon in itself. Cookbooks make teaching easier for inexperienced teachers and passing easier for students, which makes them an attractive tool for scoring higher in educational quality control evaluations (Green, 2007). The cookbook approach also requires fewer resources than problem-based learning and is better suited to large classroom sizes.
Nonetheless, I think that technology forecasting presents two characteristics not applicable to other fields. First, forecasting focuses on potential future events that depend on a series of factors external to the forecaster. Consequently, there is no straightforward way to assess the validity of a forecast. In the natural sciences, or in other branches of the social sciences which use methods based on optimization or econometrics, there is a wealth of underlying mathematical theory, and a large number of tests, which researchers may use to check their results. The development of such quality assurance tools in forecasting is at a much more immature stage. Second, many technology forecasting techniques have been developed by practitioners, not by academic researchers. The field was born to help governments and large companies create strategic plans, often in confidential settings such as military applications (Coates et al., 2001). Forecasting experts were scarce, results-oriented, and much of their knowledge was tacit. Knowledge flows from practitioners to academics have proven difficult to formalize and codify.
Because of these characteristics, I think it is critical that forecasting education focuses more on the development of students' critical thinking. Nowadays, the forecasting community has grown considerably, propelled by globalization and the larger role universities now play in corporate research. I think this is the chance for the community to take some steps towards making forecasting less of an art and closer to a science, into which newcomers can more easily initiate themselves. Actions can be taken to facilitate the tasks of both researchers and educators.
As explained in section 4.1, most of the forecasting literature is disconnected from theoretical advances in innovation studies. I think there is an opportunity to conduct a thorough review of the forecasting literature with a focus on theoretical grounding. Such a review could answer questions such as how compatible forecasting studies are with empirical results from other disciplines, what have historically been the largest sources of uncertainty in a forecast, and under which conditions a certain model or method works better than others. Answering these questions rigorously would likely require writing several papers and gathering an interdisciplinary team, but I think the result would be highly beneficial. These studies could be published together, for instance, in a Special Issue of this journal.
To facilitate teaching, existing manuals would need to be updated to establish a more solid foundation and incorporate novel methods. An example of a reference manual which is updated periodically, and which is useful to both academics and practitioners in the field of innovation management, is the Oslo Manual (OECD, 2018). The Oslo Manual not only contains a detailed explanation of procedures to measure innovation activities, but also provides definitions, refers to the most common theoretical models, and warns about some of the most common errors and methodological weaknesses. I acknowledge, however, that creating such a 'forecasting manual' would be a challenging endeavor. Its creation and maintenance would require both human and financial resources which might be hard to obtain. In addition, the field of technology forecasting is growing rapidly (Sarin et al., 2020), and therefore the amount of work to review might become difficult to manage.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix A. Learning objectives and course structure

The primary objective of the course is to provide students with the knowledge to apply basic technology forecasting methods to evaluate real-life technology investment decisions, and to treat and communicate the uncertainty implicit in the forecasts. Other learning objectives are:
• Understand the different sources of uncertainty present in the technology adoption process, and how they change as technology evolves
• Understand the differences across different types of technology forecasting methods, and discuss their suitability under different business circumstances
• Review, discuss, and apply scientific literature in the field of technology forecasting
During the course, students work in groups on a 'challenge' provided by an external organization (the 'challenge owner'). Each week, three hours of in-class interaction and one hour of tutorials are scheduled, the latter typically devoted to programming in Python and troubleshooting. During the first four weeks, students receive theoretical lessons and are expected to perform background research to present a proposal for their project. The last four weeks of the course are exclusively devoted to data collection and analysis, and in-class moments are used to report progress and discuss next steps. At the end of the course, students present their work to the challenge owner, and write a 20-page report with their findings and recommendations.

Assessment methods
1. An individual mid-term written examination (35%). This typically consists of ten short open-ended questions where students have to apply basic theoretical concepts.
2. Three presentations (15% total). Each presentation is assessed according to its content (50%), slide design (25%), and oral delivery (25%):
   a. Week 4: Proposals to tackle the challenge, including method and data sources
   b. Week 7: Discussion of the findings
   c. Week 8: Final presentation with the challenge owner
3. A final report (50%), to be delivered one week after the final presentation.
A minimum grade of 5.0 in the final report, and an average grade of 5.5, are required to pass the course.

Jaime Bonnin Roca is an Assistant Professor of Innovation and Entrepreneurship in the Innovation, Technology Entrepreneurship and Marketing Group at Eindhoven University of Technology, Netherlands. He completed his PhD (2017) in Engineering and Public Policy in a dual program between Carnegie Mellon University (USA) and Universidade de Lisboa (Portugal). His research focuses on the role of uncertainty in entrepreneurship and technology adoption, especially in manufacturing industries, and the role of public organizations and policy in shaping that uncertainty. Besides research, he teaches courses related to decision analysis and technology forecasting.