The History of Academic Research in Marketing and Its Implications for the Future

Purpose – The purpose of this study is to determine what the history of research in marketing implies for the field's reaction to recent developments in technology, notably the internet and associated changes.

Design/methodology/approach – This paper examines the introduction of new research topics over 10-year intervals from 1960 to the present. These provide the basic body of knowledge that drives the field at the present time.

Findings – While researchers have always borrowed techniques, they have refined them to make them applicable to marketing problems. Moreover, the field has always responded to new developments in technology, such as more powerful computers, scanners and scanner data, and the internet, with a flurry of research that applies the technologies.

Research limitations/implications – Marketing will adapt to changes brought on by the internet, increased computer power and big data. While the field faces competition from other disciplines, its established body of knowledge about solving marketing problems gives it a unique advantage.

Originality/value – This paper traces the history of academic marketing from 1960 to the present to show how major changes in the field responded to changes in computer power and technology. It also derives implications for the future from this analysis.


Introduction
Beginning in the mid-1990s, improved computer power, the internet and other advances in technology have spawned a number of major changes on both sides of markets. There has been an explosion of the data available to sellers: clickstream data, click-through data, online product reviews, blogs, social media data, video data, purchase history data and numerous other types. This has given rise to extremely large data sets (big data) that require new approaches to analysis. In turn, this has led to a need for methods capable of making sense of this mass of data, and to the rise of data science as a profession. Big data has also led to major increases in the ability to target individual consumers, and the diffusion of the smartphone has made mobile advertising a major medium. On the buyer side, online retailing has grown rapidly, the availability of online information has greatly lowered search costs, and the internet has fostered social networks and online reviews that have become a new source of information for both buyers and sellers. Very large intermediaries such as Google, Amazon and Facebook have emerged to facilitate the transmission of online data to buyers, the flow of data to sellers and the flow of transactions between buyers and sellers.
These changes have created exciting research opportunities for marketing scholars, provided that they keep abreast of the developments. But there are also threats. Examination of recent issues of the leading journals in MIS will uncover many titles that could easily apply to marketing papers. A common viewpoint in the popular literature is that marketing personnel are not equipped to handle marketing analytics, and that only data scientists can handle big data issues of targeting, measuring sentiment and mapping social networks (Deloitte, 2018; Olenski, 2018; Roubaud, 2018). In short, there is a potential turf war about who will supply marketing research in the future.
Thus, a major question is what will happen to the field of marketing in the face of the technological changes outlined above, and of increased competition from data scientists and others. In this paper, I will address this question by showing that the field has always responded favorably to major changes in technology, and to major research needs. I will do this through a detailed examination of research topics that occupied the profession from 1960 to the present. Three key insights emerge from this analysis. One is that marketing has always adapted to changes in technology and research needs, and generally has adapted and refined research methods developed elsewhere in doing so. There is no reason why this will not be the case in the future. Another is that there is a large body of knowledge about marketing practice and consumer behavior that is unique to marketing, and has developed over a long time period. I doubt that most of us fully appreciate the extent of this knowledge, which is not easily acquired by data scientists and others. A third key insight is the enormous impact that the internet has had on firms and consumers, aided by improved computers and other technology. This has led to major changes in the practice of marketing, and created a vast number of research opportunities that can occupy academics in the field for a long time to come. A final observation is that many of the techniques favored in machine learning and by data scientists are derived from methods first applied in marketing many years ago. Until recently, a lack of computer power hindered their further development.
While there is other work on the history of marketing, this paper has a different focus. There are several studies of trends in marketing topics over time (Cho et al., 2017; Mela et al., 2013; Huber et al., 2014; Wang et al., 2015). While these articles focus on the entire life cycle of topics, our focus is on their introduction to the field. There are also review articles that trace developments in the sub-areas of quantitative marketing (Winer and Neslin, 2014) and strategy (Kerin, 1996; Kumar, 2015). There is no review article that covers all areas of the field, and the studies of trends and review articles cited above generally do not focus on the drivers of changes in the field. This study examines changes in all areas of marketing, and attempts to trace their origin to technological changes, demands of practitioners or research needs that emerge as the field progresses.
This study will trace changes in methods and topics introduced into the field in 10-year intervals starting at 1960, and ending in the present. A list of the changes in technology and available data that appeared in each 10-year interval is also provided. It will be clear that these changes triggered many of the developments in methodology and research topics. An overview of the major developments is presented in Table I. Developments at each 10-year period will be discussed in detail in the following sections. In preparing this analysis, I relied heavily on lists of highly cited papers in Journal of Marketing, Journal of Marketing Research, Journal of Consumer Research and Marketing Science, articles in the book edited by Winer and Neslin (2014), and on other articles that are cited in the following sections.
The historical analysis reveals that the basic conceptual framework and techniques for empirical analysis used in marketing that emerged in the 1960s and 1970s are still in use today, albeit in refined versions that take advantage of improved data and computer power. A reason for this is that the fundamental problems of marketing management remain the same as they were in that period. Aided by the influx of faculty trained in psychology, a focus on the micro aspects of consumer information processing behavior emerged in the 1970s and 1980s. With an influx of faculty trained in economics, the 1980s also saw many applications of game theory to marketing problems. But the major development in the 1980s was the diffusion of bar code scanning, which provided an important new source of data, plus improved computer power and the logit model to facilitate the analysis. The advent of scanner data made it possible to create customer data bases, which facilitated the development of a large body of research on customer relationship management (CRM) in the 1990s. The late 1990s saw the emergence of the internet as a new communication medium. This created new sources of information for consumers, e.g. social networks and online reviews, and massive amounts of data for firms to analyze. This led to a demand for data scientists and also attracted the attention of scholars in MIS. Marketing now had a new source of competition. While the field was initially slow to adopt the internet as a research area, research on internet-related topics has dominated marketing in the most recent decade.
Following the historical analysis, I will outline some needed changes and opportunities for further research that emerge from the presentation. I will also outline the implications for attracting students into marketing rather than MIS or data science.

1960-1969: marketing management and quantitative methods
With the introduction of computers that use integrated circuits in the middle of this period, computer power increased considerably. By today's standards computer power was still quite limited, and older readers may recall boxes of punched cards which were provided to an operator, and waiting for an output that might materialize hours later. Still, tasks could be performed given a certain amount of patience.

Much of the research during this period emerged from fellowships aimed at training business school faculty members that were provided by the Ford Foundation (Winer and Neslin, 2014, pp. 2-3). The list of attendees included many who became the most influential scholars in our field. Two collections of readings that were required reading when I was a graduate student emerged from these programs: Bass et al. (1961) and Frank et al. (1962). The primary data sources during this period were aggregate time series data, sometimes assembled from diary panels, and surveys. The era started with the development of two key concepts that are still useful: the marketing concept (Levitt, 1960) and the marketing mix, the 4 Ps (McCarthy, 1960). However, most of the innovative research over 1960-1969 centered on empirical applications to practical marketing problems. For example, the new product forecasting models of Fourt and Woodlock (1960) and Parfitt and Collins (1968) used panel data to forecast steady-state levels of trial and repeat purchase. Other examples are the response model for price and dealing of Massy and Frank (1965), and the models of response to advertising of Palda (1965) and Bass (1969). These response models applied regression analysis, and Bass (1969) was a pioneering application of simultaneous equation techniques.
The concept of the product life cycle also emerged during this period (Kotler, 1965a, 1965b; Cox, 1967). Kotler developed a numerical simulation of the long-term strategy of a firm introducing a new product that anticipates later game-theoretic models. The well-known Bass diffusion model, another application of regression analysis, appeared in 1969. There was also considerable interest in brand loyalty and brand switching during this period, usually analyzed by studying purchase sequences in diary panel data with some form of stochastic model (Kuehn, 1962; Massy, 1966; Morrison, 1966).
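The Bass model's estimation strategy can be summarized compactly: in discrete time, period sales S_t are a quadratic function of lagged cumulative adoption Y_{t-1}, so ordinary least squares recovers the coefficient of innovation p, the coefficient of imitation q and the market potential m. The sketch below uses synthetic, noiseless data; all parameter values are illustrative, not taken from Bass (1969).

```python
import numpy as np

# Discrete-time analogue of the Bass (1969) diffusion model:
#   S_t = p*m + (q - p) * Y_{t-1} - (q / m) * Y_{t-1}^2
# so regressing S_t on Y_{t-1} and Y_{t-1}^2 recovers a, b, c
# and hence p, q and m.

def simulate_bass(p, q, m, periods):
    """Generate noiseless adoption counts from the discrete Bass model."""
    sales, cum = [], 0.0
    for _ in range(periods):
        s = p * m + (q - p) * cum - (q / m) * cum ** 2
        sales.append(s)
        cum += s
    return np.array(sales)

def fit_bass(sales):
    """Estimate (p, q, m) by OLS of S_t on Y_{t-1} and Y_{t-1}^2."""
    cum = np.concatenate(([0.0], np.cumsum(sales)[:-1]))  # Y_{t-1}
    X = np.column_stack([np.ones_like(cum), cum, cum ** 2])
    a, b, c = np.linalg.lstsq(X, sales, rcond=None)[0]
    # m is the positive root of c*m^2 + b*m + a = 0 (a = p*m, c = -q/m)
    m = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * c)
    return a / m, -c * m, m  # p, q, m

sales = simulate_bass(p=0.03, q=0.38, m=1_000, periods=15)
p_hat, q_hat, m_hat = fit_bass(sales)
```

Because the synthetic data are noiseless, the regression recovers the true parameters essentially exactly; with real adoption data the same regression yields estimates subject to sampling error.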
The period 1960-1969 also saw the initial marketing applications of discriminant analysis (Massy, 1965; Morrison, 1969), cluster analysis (Frank and Green, 1968) and multidimensional scaling (Neidell, 1969; Green and Carmone, 1969). These articles were mainly methodological. However, Massy (1965) presented an application to discriminating between the audiences of a set of radio stations; Frank and Green (1968) outlined applications of clustering in various areas, including grouping TV programs on the basis of audience, and grouping respondents on the basis of purchasing patterns; Green and Carmone (1969) provided a perceptual map of automobile models.
A conceptual framework for examining product differentiation and market segmentation as alternative strategies was developed by Smith (1956). The framework is based in economics, and there is no empirical analysis. Claycamp and Massy (1968) developed a more precise normative segmentation model, with components expressed mathematically. Several empirical approaches to segmentation were also developed during 1960-1969. Examples are Bass et al. (1968), Yankelovich (1964), Frank (1967) and Engel et al. (1969).
In sum, the period 1960-1969 is characterized by pioneering work in many familiar areas of marketing, which was driven by a serious effort to improve the quality of marketing education and research. Microeconomics provided the conceptual starting point, and the empirical approaches were derived from work in other disciplines. For example, simultaneous equations regression originated in economics; stochastic models originated in operations research and psychometrics; clustering originated in biology; and multidimensional scaling and discriminant analysis originated in statistics.
While much of the work is rudimentary by today's standards, many of the problems still exist, and many of the approaches are still in use today. Some of the techniques introduced during 1960-1969 play an important role in machine learning. For example, support vector machines are basically a technique for discrimination, and clustering is still used for classification. We still teach the marketing concept and the product life cycle; we still use response models to develop an optimal marketing mix; we still use extensions of the Bass (1969) model to address problems of diffusion and new product forecasting; we still examine brand loyalty and brand switching; competitive positioning maps are still useful; and we still attempt to define market segments. What has changed is the size of the data sets that we have, the computer power needed to deal with them, and the technical sophistication of our tools.

1970-1979: perceptions, preferences, segmentation and positioning
The years 1970-1979 might be labeled as the era of perceptions, preferences, segmentation and positioning. During this period, products also came to be viewed as bundles of attributes, and a consumer's choice problem was modeled as an attempt to find the product offering the best mix of attributes. Three approaches to measuring preferences emerged over 1970-1979: the multi-attribute attitude model; preference mapping based on regressions of stated preference on brand attributes; and conjoint analysis based on relations between stated preferences and hypothetical brand attributes.
Attitudes. Based on frameworks presented by Rosenberg (1956) and Fishbein (1963), an extensive literature on measuring attitudes toward brands and their attributes emerged around 1970. The underlying model postulates that consumer k's attitude toward brand j, A_jk, is the sum of the ratings of the brand on each attribute, each multiplied by the consumer's evaluation of the importance of the corresponding attribute. The brand with the highest value of A_jk is the most preferred and should be chosen. In their review of this literature, Wilkie and Pessemier (1973) discuss the many different ways in which attributes and their importance have been measured, and the many measurement problems that this model poses. Nevertheless, obtaining ratings of brands on attributes, and of the importance of each attribute, has become standard practice in marketing research surveys. The attitude model is an indicator of the strong interest in applying psychology to marketing problems that emerged around 1970, which resulted in the founding of the Journal of Consumer Research, first published in 1974.
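As a minimal illustration of the multi-attribute attitude model described above, the following sketch computes A_jk for two hypothetical brands; the attribute names, ratings and importance weights are invented for the example, not drawn from any of the cited studies.

```python
import numpy as np

# Multi-attribute attitude model: A_jk = sum_i b_ijk * e_ik, where b_ijk is
# consumer k's rating of brand j on attribute i, and e_ik is the stated
# importance of attribute i to consumer k. All data are hypothetical.

attributes = ["taste", "price", "packaging"]
importance = np.array([0.5, 0.3, 0.2])       # e_ik for one consumer
ratings = {                                   # b_ijk: brand -> attribute ratings
    "Brand A": np.array([7, 4, 6]),
    "Brand B": np.array([5, 8, 4]),
}

# Attitude score per brand: importance-weighted sum of attribute ratings
attitude = {brand: float(r @ importance) for brand, r in ratings.items()}
preferred = max(attitude, key=attitude.get)   # brand with the highest A_jk
```

Here Brand A scores 5.9 and Brand B 5.7, so the model predicts choice of Brand A for this consumer.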
Preference mapping. Rather than asking the consumer to rate the importance of different attributes, one can estimate their importance if one has data on brand preferences, and the positioning of brands on individual attributes. The positioning can be derived from attribute ratings or multi-dimensional scaling of similarity judgements (Urban, 1975). Attribute importance can be obtained for each consumer by regressing stated preferences on brand positions for each consumer. The results can be used in predicting choices of a new concept given the positioning of existing brands. This is essentially the approach taken in Urban (1975) and Silk and Urban (1978). The latter article outlines the structure of the ASSESSOR pre-test market model, which is a laboratory simulation of an actual test market. A variant of this model is still in commercial use.
Conjoint. Alternatively, one can construct hypothetical profiles of the attributes of different brands, and obtain respondents' preference evaluations of each profile. Relating these evaluations to the attribute levels in the profiles gives a measure of the importance of each level of each attribute. This is the basic idea behind conjoint analysis (Green, 1974; Green and Srinivasan, 1978), which has become a standard tool for evaluating the attractiveness of different brands, and for testing the viability of attribute bundles not currently on the market. The basic difference between conjoint and the other models is that the attributes are actual values rather than perceptions, which allows products that match the actual bundles to be designed, eliminating the need to find relations between perceived and actual values. Green and Srinivasan attribute the basic theory of conjoint measurement to Luce and Tukey (1964); Luce was a psychologist, Tukey a statistician. Conjoint analysis is widely used in practice; for example, Green et al. (2001) state that there had been over 1,000 applications of conjoint in the 30 years prior to 2001.

Measurement: behavioral and quantitative. Consistent with the increasing influence of psychology on marketing, many papers were written on generating valid and reliable measures of constructs used in marketing studies. The measurement paper by Churchill (1979) has approximately 5,800 citations in SSCI, the most of any marketing paper in 1970-1979 by a wide margin. Other highly cited papers on measurement in this era are Peter (1979), Green and Rao (1970) and Heeler and Ray (1972). This period also marks the first marketing applications of covariance structure analysis, a formal model for measuring relationships between constructs (Bagozzi et al., 1979; Aaker and Bagozzi, 1979). In this model, scale items are imperfect measures of latent constructs, and the constructs are connected by regression relationships.
The model allows the quality of the scales and the predictive ability of the relation between constructs to be assessed. The covariance structure model was developed by the Swedish statistician Joreskog (1970).
In addition to attempts to measure attribute perceptions, there was considerable interest in the process of acquiring the information that leads to the perceptions. Much of the data was acquired through experiments that monitored how respondents progressed through information displayed in different formats (Jacoby et al., 1974a, 1974b; Bettman and Kakkar, 1977; Bettman and Zins, 1979). These are early examples of experimental consumer research, which has become the norm in recent years.
The period 1970-1979 was also marked by an interest in qualitative research. Two methodological articles from this period that are still relevant to current practice are Calder (1977) and Kassarjian (1977). Calder focused on the methodology of focus groups, and made the point that their appropriateness in a given situation depends on the objective of the research, and on whether the moderator is an active or passive participant. Kassarjian specified a set of conditions for a content analysis to be valid, and outlined relevant units for the analysis. Interestingly, he anticipated the use of computerized text mining procedures, but stressed the practical difficulty of developing dictionaries appropriate to a given study. Those currently interested in text mining would do well to pay attention to this paper.
Two other streams of literature emerged in the 1970s: marketing mix models estimated on aggregate sales data, and decision support systems for managers. These are formalizations and extensions of work that began in the 1960s. Examples of the marketing mix models are Lambin (1970), Naert and Bultez (1973) and Nakanishi and Cooper (1974). These models were commonly based on log-linear relations between market share and decision variables in which shares were constrained to sum to one. They produced price, advertising and distribution elasticities, typically accounted for lagged effects, and sometimes accounted for competitive reactions. They provided a basic structure that was used in later applications of logit models to individual-level data, and also provided the basis for marketing mix models that are currently used by practitioners.
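The logical-consistency idea behind these models, shares derived from brand "attractions" so that they necessarily sum to one, can be sketched in the attraction (MCI) form associated with Nakanishi and Cooper (1974). The elasticities and brand data below are purely illustrative, not drawn from any of the cited studies.

```python
import numpy as np

# Attraction (MCI) market share model: each brand's attraction is a product
# of its marketing variables raised to elasticity powers, and market shares
# are attractions normalized to sum to one. All values are illustrative.

beta_price, beta_adv = -2.0, 0.5        # assumed price and advertising elasticities
price = np.array([2.0, 2.5, 3.0])       # price per brand
adv = np.array([1.0, 4.0, 9.0])         # advertising spend per brand

attraction = price ** beta_price * adv ** beta_adv
share = attraction / attraction.sum()   # normalization enforces sum-to-one

assert np.isclose(share.sum(), 1.0)     # logical consistency by construction
```

Taking logarithms of the attraction specification yields the log-linear regression form that was actually estimated on aggregate sales data.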
The decision support models, which were often built around the basic framework contained in the market share models, were focused on aiding managers in making marketing decisions. If actual data was not available, management judgments were quantified (Little, 1970). Examples of these systems are the call planning model of Lodish (1971), and the industrial marketing mix model of Lilien (1979).
Summary. In sum, many concepts and measurement approaches that are part of the mainstream of marketing practice were introduced in 1970-1979. While positioning approaches have been refined, we still speak of product attributes and competitive positioning, we still collect ratings data on attributes, we still use refined versions of conjoint analysis to measure preferences for attributes, the results of information processing experiments still underlie models of presenting information, and qualitative research is still carried out. These approaches generally originated in economics, statistics or psychology and were extended and modified by researchers in marketing.
1980-1989: scanner data, strategy and information processing
The period 1980-1989 began with the rapid diffusion of bar code scanning, and of recording sales at the point of sale, which created a source of big data that was rapidly exploited by retailers and the data suppliers IRI and Nielsen. This created a unique opportunity to estimate response to price, promotion and advertising at the level of a UPC and at the individual or store level. Scanner data also facilitated the creation of customer databases that could be used to predict churn and target promotional mailings.
For grocery stores in particular, the large amount of detailed and accurate individual-level data facilitated the construction of consumer panels, which soon replaced the older diary panels. The rich data made it possible to predict choices as a function of price, promotion, advertising and other variables at the individual level. But as choices are discrete (usually one brand is chosen from a set on a purchase occasion), standard regression was not appropriate. This gave rise to the widespread application of the logit model, which can be estimated via maximum likelihood using gradient search techniques. Increases in computing power made it feasible to apply these methods to large data sets. In its basic form, this model assumes that consumers maximize a utility function on each purchase occasion, where the independent variables are marketing mix variables, and the errors have a Type 1 extreme value distribution that is IID across alternatives. This is the logit model based on the work of Luce (1958) and McFadden (1974). While this model was introduced into marketing by Silk and Urban (1978), Punj and Staelin (1978) and Gensch and Recker (1979), it did not receive widespread use until scanner data was available.
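A minimal sketch of the basic logit calculation just described: deterministic utilities are linear in the marketing mix variables, and the extreme value error assumption yields choice probabilities of the familiar softmax form. The coefficients and brand attributes are invented for illustration, not estimates from any of the cited studies.

```python
import numpy as np

# Basic multinomial logit: utility U_j = x_j'beta + eps_j with IID Type 1
# extreme value errors implies P(j) = exp(V_j) / sum_k exp(V_k), where
# V_j = x_j'beta. All coefficients and attributes are illustrative.

beta = np.array([-1.5, 0.8, 0.4])   # price, promotion and loyalty weights
X = np.array([                       # one row of attributes per brand
    [2.0, 1.0, 0.6],                 # brand 0: on promotion, high loyalty
    [2.4, 0.0, 0.3],
    [1.9, 0.0, 0.1],
])

v = X @ beta                         # deterministic utilities V_j
prob = np.exp(v - v.max())           # subtract max for numerical stability
prob /= prob.sum()                   # choice probabilities, sum to one
```

Maximum likelihood estimation of beta then amounts to choosing the coefficients that make the observed purchase-occasion choices most probable under this formula.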
The pioneering marketing application of the model to scanner data on packaged goods was Guadagni and Little (1983). Many other applications of this model appear in the literature. For example, Winer (1986) applied the model to estimate the effect of reference prices; Lattin and Bucklin (1989) used it to examine the reference effects of both prices and promotions; Gupta (1988) used the logit model as the brand choice component of a model that also incorporated purchase incidence and purchase quantity; and Neslin and Shoemaker (1989) used the logit model as part of their examination of the decline in aggregate repeat purchase rates after a promotion.
As it assumes that consumers have the same basic preference for brand attributes, and that any differences in preference across consumers are reflected in the IID error term, this basic form of the logit model implies that the effect of a change in one brand's marketing variable on another brand is proportional to that brand's share (proportional draw). Allenby (1989) addressed this problem by applying a nested logit model to aggregate data, where each nest is a group of brands with homogeneous preferences, while preferences vary freely across nests. Allenby used the results to draw a positioning map. Kamakura and Russell (1989) went beyond this, using individual-level data to estimate a finite mixture logit model that defines segments with different preferences, and determines the probability of each sample member's membership in each segment. This is a pioneering attempt to account for heterogeneity in individual preferences in a logit model used in marketing. Other approaches to the heterogeneity problem would await the development of sufficient computer power.
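The finite mixture idea can be sketched in a few lines: the market is a mix of S segments, each with its own coefficient vector, and a consumer's choice probability is the segment-share weighted average of segment-level logit probabilities. Segment shares and coefficients below are illustrative, not estimates from Kamakura and Russell's data.

```python
import numpy as np

# Finite mixture logit in the spirit of Kamakura and Russell (1989):
# P(j) = sum_s pi_s * P_s(j), where pi_s is segment s's share and P_s(j)
# is a logit probability using that segment's coefficients.

def logit(v):
    """Softmax choice probabilities from deterministic utilities."""
    e = np.exp(v - v.max())
    return e / e.sum()

X = np.array([[2.0, 1.0],            # brand attributes: price, promotion
              [2.4, 0.0],
              [1.9, 0.0]])
segment_share = np.array([0.6, 0.4])  # pi_s (illustrative)
segment_beta = np.array([[-2.5, 1.2],  # price-sensitive, deal-prone segment
                         [-0.5, 0.2]]) # relatively price-insensitive segment

probs = np.array([logit(X @ b) for b in segment_beta])  # P_s(j), one row per segment
mixture_prob = segment_share @ probs                    # P(j) for the market
```

Because cross-brand effects now differ by segment, the model escapes the proportional-draw restriction of the homogeneous logit.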
Theory and strategy. The decade 1980-1989 also marked the development of a body of theoretical literature based on economic modeling, much of it using game theory. One stream of this literature dealt with the problem of coordinating channels between the manufacturer and retail levels, and with how to achieve coordination through pricing and/or vertical integration. Examples are Jeuland and Shugan (1983), McGuire and Staelin (1983), Coughlan (1985) and Moorthy (1987, 1988b). There is also an extensive theoretical literature on various aspects of pricing: dynamic pricing (Bass, 1980; Dolan and Jeuland, 1981; Kalish, 1983; Rao and Bass, 1985), product line pricing (Moorthy, 1988a; Dobson and Kalish, 1988), quantity discounts (Dolan, 1987) and pricing under price expectations (Narasimhan, 1989).
Marketing strategy emerged as an identifiable field during this period. Some of the strategy papers are based on economic theory. Examples are papers on defensive marketing strategy (Hauser and Shugan, 1983), and strategy for durables producers faced with competition from used versions (Levinthal and Purohit, 1989). Other papers are more similar to work in the management strategy literature. Examples are papers on pioneering advantage (Robinson and Fornell, 1985; Urban et al., 1986), asymmetric competition (Carpenter et al., 1988), marketing control (Jaworski, 1988), brand image management (Park et al., 1986), cause-related marketing (Varadarajan and Menon, 1988) and international marketing strategy (Jain, 1989). Strategies for marketing services also received attention during this period. Prominent examples are papers on classifying services (Lovelock, 1983), service quality (Gronroos, 1984) and strategies for marketing services.
An extensive literature on channels also emerged during this period. The game theory papers on channel coordination listed above are part of this literature. But there was also an extensive body of literature related to organizational theory and transaction cost economics. Examples of topics are opportunism (John, 1984), international entry and expansion (Anderson and Coughlan, 1987), power (Gaski and Nevin, 1985), inter-organizational exchange (Frazier et al., 1988) and continuity (Anderson and Weitz, 1989). With the exception of Anderson and Coughlan (1987), there is little overlap between game theory and the methods used in these papers.
Search and information processing. Much of the consumer research literature over 1980-1989 was focused on information. But instead of focusing on the mechanics of acquiring and processing information, the focus was on various aspects of the search process: involvement, the motivation to acquire information (Petty et al., 1983; Zaichkowsky, 1985); knowledge and expertise, which define the stock of information (Alba and Hutchinson, 1987); the effects of knowledge on search (Brucks, 1985) and product evaluation (Bettman and Park, 1980; Sujan, 1985; Rao and Monroe, 1988); and the search process itself (Punj and Staelin, 1983; Bloch et al., 1986; Beatty and Smith, 1987; Furse et al., 1984).
Summary. In sum, the availability of scanner data and improved computer power facilitated the development of empirical studies of marketing mix elements for packaged goods. But the limitations of the basic logit model hindered the development of this literature, and there was a need to develop improved models of heterogeneity and state dependence. While models that address the problem of heterogeneity were presented by Allenby (1989) and Kamakura and Russell (1989), much further development of choice models would come in the following decade.
With the exception of the marketing mix models, there was a shift away from developing models directly applicable to managerial decisions toward more theory development, and toward models that explain behavior rather than provide a direct application. This was true among researchers who study marketing strategy, and among those who study consumer behavior. This does not mean that the research was not ultimately useful for marketing practice. In fact, a reason for the shift of emphasis may be that the remaining research topics that could be studied with existing methods did not promise the same returns as the more theoretically oriented topics: the low-hanging fruit had been picked.
The period 1980-1989 also marked the development of marketing into at least three different sub-fields that are grounded in different underlying disciplines. With the introduction of the journal Marketing Science, quantitative research based on economics and statistics became a more distinct area. As noted above, strategy research based on research on organizations became another. Finally, consumer behavior research based on psychology continued to develop as a separate area.
1990-1999: early internet, improved choice models, customer relationship management and consumption behavior
Early internet. The introduction of the internet, the Web browser and search engines in the 1990s created a new communication medium that has had a profound influence on marketing, advertising and retailing. The internet was commercialized around 1990-1992, with the removal of restrictions on commercial use of the ARPANET, an online communication network involving the US Government and universities. The first popular Web browser, Mosaic, was introduced in 1993. While the first search engines also emerged around 1993, Google, which became the dominant search engine, was not founded until 1998, and only rose to prominence around 2000. Similarly, while Wi-Fi was introduced in 1998, dial-up modems were the main means of accessing the internet until broadband became popular in the mid-2000s.
Research on the internet began to emerge in the second half of the decade. Three widely cited conceptual articles appeared in the marketing literature during this period. Hoffman and Novak (1996) presented a decision process model of internet use built around the concept of flow, total engagement with the current online task. Based on an assessment of the strengths and weaknesses of the internet relative to other channels, Peterson et al. (1997) developed predictions about the viability of online marketing for different products and services with different characteristics. Alba et al. (1997) examined the implications of electronic shopping for consumers, retailers and manufacturers. The authors also posed a series of research questions about the internet at the end of their paper. Most of these have been addressed in subsequent research.
Work directly related to online consumer behavior began to emerge in the marketing literature late in the decade. Lal and Sarvary (1999) demonstrated that the internet has a competitive advantage in selling "digital goods" (those that do not require personal inspection), which can be exploited through higher margins for internet sellers compared to traditional retailers. Examples of empirical studies from this period are a characterization of internet shoppers (Donthu and Garcia 1999), a scale to measure attitudes toward the website (Chen and Wells, 1999), and a study of user responses to online privacy concerns (Sheehan and Hoy, 1999). In general, the studies cited in this paragraph provide insights into what the internet was like for early users, and into the ability of researchers to predict its ultimate development.
Mention should also be made of two widely cited studies produced by researchers in the MIS area, the study of the impact of the internet on buyer search costs by Bakos (1997), and the study of bundling information goods by Bakos and Brynjolfsson (1999).
Simulation-based choice models. Driven by increases in computer power, major advances in research methods also took place in marketing during 1990-1999. Simulation-based techniques that eliminated the need to estimate high-dimensional integrals were introduced. This removed a major impediment to the incorporation of heterogeneity and dynamics into choice models. Rossi et al. (1996) presented a Bayesian model for estimating household-level parameters which employed Gibbs sampling and the Markov chain Monte Carlo (MCMC) estimation technique. This approach has become widely used and is now incorporated into all major computer packages. An alternative approach to estimating high-dimensional integrals, simulated maximum likelihood, was used by Erdem and Keane (1996) in their dynamic structural model of learning the quality of competing brands. In this model, consumers are assumed to maximize their expected utility, taking into account the benefit of quality information learned on the current purchase for improving future utility. The forward-looking behavior makes the model dynamic, while maximizing expected utility makes it structural. Following this pioneering paper, many papers using this structural approach have appeared in the marketing literature.
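The core idea behind these hierarchical Bayes methods, drawing each block of parameters from its exact conditional posterior in turn, can be illustrated with a toy Gibbs sampler. This is a minimal sketch of the general MCMC logic, not the Rossi et al. (1996) choice model: it estimates household-level means under a normal population distribution, with the variance components held fixed for brevity, and all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: each "household" h has its own mean response theta_h,
# drawn from a population distribution N(mu, tau^2) -- the heterogeneity.
H, n = 50, 20                       # households, observations per household
mu_true, tau, sigma = 2.0, 1.0, 0.5
theta_true = rng.normal(mu_true, tau, H)
y = rng.normal(theta_true[:, None], sigma, (H, n))

# Gibbs sampler: alternate between the household-level parameters and the
# population mean, each drawn from its exact conditional posterior.
mu = 0.0
draws = []
for it in range(2000):
    # theta_h | mu, y: precision-weighted combination of data and prior
    prec = n / sigma**2 + 1 / tau**2
    mean = (y.sum(axis=1) / sigma**2 + mu / tau**2) / prec
    theta = rng.normal(mean, 1 / np.sqrt(prec))
    # mu | theta (flat prior on mu)
    mu = rng.normal(theta.mean(), tau / np.sqrt(H))
    if it >= 500:                   # discard burn-in
        draws.append(mu)

mu_hat = float(np.mean(draws))      # posterior mean of the population mean
```

The retained draws approximate the posterior of the population mean, and the household-level draws shrink each household's estimate toward it, which is exactly what made household-level targeting from scanner panels feasible.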
Customer relationship management and strategy. As to the substantive content of the marketing literature during 1990-1999, there was an extensive literature devoted to different aspects of customer relationship management (CRM). Many of these studies had something to do with commitment and trust in dyadic relationships. Examples are Morgan and Hunt (1994), Doney and Cannon (1997) and Smith and Barclay (1997). Other noteworthy papers in this stream are about incorporation of consumer input into product design (Griffin and Hauser, 1993), modeling the duration of a customer's relationship with a seller (Bolton, 1998), and about collaboration between buyers and sellers (Jap, 1999).
Related to the CRM literature was an extensive literature on market and customer orientation. Examples are Kohli and Jaworski (1990), Webster (1992), Jaworski and Kohli (1993) and Day (1994). A number of papers in this general stream attempted to relate market orientation to firm performance. Examples are Narver and Slater (1990), Gatignon and Xuereb (1997) and Srivastava et al. (1998, 1999). A concern about the long-run relation between market orientation and firm performance implies a need to determine long-term effects. Dekimpe and Hanssens (1995) used the persistence modeling approach of time series analysis to address this problem.
Customer satisfaction and brand equity are two outcomes of CRM. Several widely cited studies of customer satisfaction appeared during 1990-1999. Examples are Hauser et al. (1994), Anderson et al. (1994) and Smith et al. (1999). Widely cited studies of brand equity by Simon and Sullivan (1993) and Park and Srinivasan (1994) also appeared during this decade.
An extensive literature on consideration sets emerged during the 1990s. Widely cited studies are Hauser and Wernerfelt (1990), Nedungadi (1990) and Roberts and Lattin (1991). Roberts and Lattin (1997) present a brief review. A key finding is that consideration sets are generally smaller than the set of all available brands, so ensuring that a brand enters the consideration set is a necessary condition for its ultimate purchase.
Consumption behavior. The marketing literature has usually been concerned with the purchase decision, and how products are actually consumed did not receive much attention prior to 1990. Over the 1990-1999 period, a large number of studies used ethnographic techniques to understand the benefits that consumers receive from consuming. Examples are studies of river rafting (Arnould and Price, 1993), bikers as a subculture (Schouten and McAlexander, 1995), sky diving (Celsi et al., 1993) and uses of fashion (Thompson and Haytko, 1997).
Summary. In sum, while the introduction of the internet received research attention, a major contribution in this period was the development of Bayesian and simulated maximum likelihood techniques into useable tools for modeling heterogeneity and dynamic customer behavior. This would not have been possible without major improvements in computer power. Major substantive research innovations in 1990-1999 were related to CRM, customer satisfaction, measuring brand equity and consideration sets. All of these are intertwined in some way with branding, and all have practical applications in areas like lifetime value, customer churn, targeting, determining the market value of a brand name, measuring customer satisfaction and its determinants, and persuading consumers to consider buying a particular brand. These innovations were enabled by the ability to collect and maintain databases of scanner data, and by improved computer power. Ethnographic studies, which can provide valuable insights into how products are used, also became popular during this period.

2000-2009
Internet, marketing profitability, structural models of markets
As shown in Table I, many innovations that made the internet more productive were introduced and achieved some degree of use during 2000-2009. While Wi-Fi was introduced in 1997, the initial version was very slow, and it did not achieve widespread use until the technology was improved and broadband became popular. Broadband was introduced in 2000 and gradually replaced the earlier dial-up connections over the period 2000-2009. Social networks were introduced in 2005, smart phones in 2007. While their use grew rapidly, social media and smart phones did not achieve 50 per cent penetration until after 2009.
Broadband and Wi-Fi made online communication much faster and made it feasible to transmit much larger files. Users could easily share content with one another, and sellers could provide more detailed information, and allow faster transactions, online. Sellers could also mine the content of online reviews and conversations on social networks and could engage in online conversations with customers. Wi-Fi access also made it possible for the internet to be reached through smart phones: location was no longer a major constraint on access, and firms had the ability to track the location of their customers. Cloud computing, computer services provided on demand over the internet, also emerged during this period.
Internet-related studies. Internet-related studies in the marketing literature became more prominent during 2000-2009. Novak et al. (2000) presented an empirical version of the widely cited conceptual paper by Hoffman and Novak (1996). Other studies analyzed brand communities (Muniz and O'Guinn, 2001; Schau et al., 2009), online advertising (Chen et al., 2009), online word-of-mouth (Godes and Mayzlin, 2004; Mayzlin, 2006; Trusov et al., 2009), customization (Ansari and Mela, 2003), recommendation systems (Ansari et al., 2000), motives for posting on social media (Schau and Gilly, 2003) and the impact of Web page attributes on choice (Mandel and Johnson, 2002). These studies used a wide variety of qualitative and quantitative methods.
The large amount of text data generated by online word-of-mouth created a need to develop efficient methods for analyzing the data. One response was the development of netnography (Kozinets, 2002), a method for doing ethnographic research using online data instead of face-to-face interviews. Text mining was applied to measure sentiment conveyed by online reviews; originally, this involved measuring their volume, dispersion and valence (positive or negative); see Godes and Mayzlin (2004) for a widely cited example. The relative ease of contacting customers or other respondents online also facilitated the development of field experiments. Examples are the pricing experiment of Kannan et al. (2009) and the test of methods for stimulating word-of-mouth communication by Godes and Mayzlin (2009). Machine learning methods for predictive modeling were first applied in the marketing literature during this period (Cui and Curry, 2005; Cui et al., 2006; Lemmens and Croux, 2006). Machine learning techniques for estimation in conjoint studies were developed by Evgeniou et al. (2005) and Evgeniou et al. (2007).
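The early volume, dispersion and valence measures are simple enough to sketch directly. The following is an illustrative computation on a hypothetical list of star ratings, not a reimplementation of any cited study; the function name and the 0-to-1 valence scaling are my own conventions.

```python
from statistics import pstdev

def review_metrics(ratings, scale_max=5):
    """Early-style word-of-mouth metrics from a list of star ratings:
    volume (how many reviews), valence (how positive on average,
    scaled to 0..1) and dispersion (how much reviewers disagree)."""
    volume = len(ratings)
    valence = sum(ratings) / (volume * scale_max)
    dispersion = pstdev(ratings)    # population standard deviation
    return {"volume": volume, "valence": valence, "dispersion": dispersion}

# Five hypothetical reviews on a 5-star scale, one strongly negative
m = review_metrics([5, 4, 5, 2, 5])
```

Later work refined these aggregates (e.g. weighting by reviewer credibility or platform), but the volume-valence-dispersion triple remains the baseline description of online word-of-mouth.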
However, compared to the management information systems (MIS) discipline, marketing lagged behind in picking up on marketing-related aspects of the internet. Of the 100 most cited articles in the Social Sciences Citation Index for the two leading MIS journals (top 50 in Information Systems Research and top 50 in MIS Quarterly) over 2000-2009, I counted 24 that were on topics related to online marketing, such as trust and satisfaction with online transactions, social networks, online reviews, and electronic retailing. By contrast, I counted only 15 articles related to online marketing among the 200 most cited articles in the four leading marketing journals (50 most cited articles in the Social Sciences Citation Index in each of Journal of Consumer Research, Journal of Marketing, Journal of Marketing Research, and Marketing Science over 2000-2009). The topics of these articles were similar to those in the MIS literature. The marketing discipline is not the only one working on marketing-related problems in electronic commerce.
Structural models of markets. Drawing on the economics literature, structural models of markets became very popular during 2000-2009. In order to describe competitive behavior, these models estimate relationships derived from optimizing behavior on both the supply and demand sides of the market. Examples are Kadiyali et al. (2000), Sudhir (2001a, 2001b), Mehta et al. (2003) and Chintagunta et al. (2006).
Service dominant logic. The most widely cited article from this period was Vargo and Lusch (2004), which presented a conceptual framework in which services (the application of resources for the benefit of others) are the fundamental unit of exchange. Consumers obtain value from the services. This service dominant logic (SDL) model emphasizes co-creation: the interaction of customer, firm and other consumers in producing the services.
This framework has spawned a large body of research, which has been reviewed by Vargo and Lusch (2017) and Wilden et al. (2017). The reviews provide numerous suggestions for further research. One weakness of the SDL model is that it does not provide an explicit model of economic incentives, e.g. how prices and costs are determined. The model of household production in economics (Stigler and Becker, 1977;Ratchford, 2001) focuses on these incentives while treating the consumer as obtaining value from the consumption of activities. An effort to synthesize the two models might be beneficial.
Summary. In sum, the internet became an important research area in marketing over 2000-2009, when pioneering work on online reviews, text mining, field experiments and online word-of-mouth was published. Applications of machine learning also appeared in the marketing literature during this time. However, marketing-related studies of online behavior also appeared in the MIS literature during this period. Aside from the internet, marketing profitability, structural models of market behavior and the introduction of the SDL framework were prominent during this period.

2010-2019: the era of internet research
Volume of internet research. To measure the impact of the internet on research in marketing, I started by listing the 50 most cited articles in the Social Sciences Citation Index for Journal of Consumer Research, Journal of Marketing, Journal of Marketing Research and Marketing Science over 2010-present. I added others rated as highly cited by the Social Sciences Citation Index, primarily recent articles that had not reached their citation potential. Of the 242 articles that satisfied the above criteria, I found that 65, or approximately 28 per cent, were related to the internet, making the internet by far the most common topic area, and the dominant research domain, in the marketing literature between 2010 and the present. Based on the Social Sciences Citation Index, I also found that about half of the most cited articles published in 2010-2019 in the two leading MIS journals were related to online marketing. One of these articles is an excellent review of the entire field of business analytics by Chen et al. (2012), which provides valuable insights into how MIS and data scientists approach empirical research. The emphasis is on the data and prediction, not on testing and applying theories.
Review articles. Major areas of internet research in 2010-present continued to be those listed in Table I for 2000-2009, plus online advertising, multi-channel and mobile, which are listed in Table I for 2010-2019. Some review articles also focused on specific topics related to online marketing. Wedel and Kannan (2016) reviewed the development of marketing analytics and provided a detailed discussion of research opportunities created by the vast amounts of online data. You et al. (2015) performed a meta-analysis of the relation between the volume and valence of reviews and sales, and found that valence had a larger impact. Rosario et al. (2016) presented a similar meta-analysis, but with a much larger data set. They found that a composite volume-valence measure based on the proportion of positive reviews explained the results best, and that the results indicated considerable differences across products and platforms.
Mobile marketing. Turning to the developments over 2010-2019 listed in Table I, there is much recent work related to mobile marketing. Shankar et al. (2016) provided a review of mobile marketing in general, Grewal et al. (2016) provided a review of mobile advertising, and Hofacker et al. (2016) reviewed the literature in mobile gaming. A number of recent studies examined the effect of app adoption on purchasing (Bellman et al., 2011;Kim et al., 2015;Liu et al., 2019;Narang and Shankar, 2019). Melumad et al. (2019) examined the effect of mobile use on user generated content, and Grewal and Stephen (2019) found that mobile reviews tend to be more trusted than non-mobile reviews. Andrews et al. (2016) found that mobile ads become more effective as the respondent's environment becomes more crowded.
Research methods for online data. Though machine learning was introduced to the marketing literature before 2010, recent applications of these techniques have become more sophisticated and useful for practice. For example, in applications of text mining, Netzer et al. (2012) and Tirunillai and Tellis (2014) demonstrated how to use review or online forum data to develop competitive maps. Liu et al. (2016) combined text data from online reviews with TV viewing data to develop a forecasting model for TV ratings. Buschken and Allenby (2016) developed a sentence-based topic structure for text mining that improved on previous word-based structures.
There have been recent applications of machine learning to areas other than text mining. For example, Xia et al. (2019) developed a machine learning model for estimating shopping patterns from very large scanner datasets. Hu et al. (2019) applied a deep learning model to develop a quality measure in their study of daily deals. Timoshenko and Hauser (2019) used a neural network to develop an efficient procedure for identifying customer needs.
Using the abundant online data available from clickstreams and advertising responses, several studies have conducted controlled advertising field experiments using various techniques for forming control groups (Johnson et al., 2017) and homing in on an optimal allocation policy (Schwartz et al., 2017). Nair et al. (2017) described an approach to marketing analytics for large databases, and a field experiment to measure the effectiveness of a promotional campaign. Gordon et al. (2019) presented a detailed comparison between experimental and observational methods for measuring online advertising effectiveness.
Online privacy. In the area of online privacy, Krafft et al. (2017) examined consumers' willingness to grant permission to access their personal data. They found that consumers tend to make a cost-benefit calculation in which the expected benefits of granting permission (personal relevance, entertainment, allowing consumers to control information) are traded off against the costs of granting permission (registration effort, privacy concerns and intrusiveness). Other recent studies on the general effects of privacy concerns include Gardete and Bart (2018) and Martin and Murphy (2017).
Social media. In the area of social media, much attention has been paid to content posted by brand communities and how firms might relate to it (Peters et al., 2013; Gensler et al., 2013). Gensler et al. (2013) emphasize the importance of brand stories generated by consumers for creating a brand image, and provide both positive and negative examples. Ameri et al. (2019) examine the effects of online word-of-mouth on adoption. Online seeding through social media, e.g. targeting consumers who may pass along a message, has also received attention. Kozinets et al. (2010) provided an in-depth study of how seeded marketing messages spread and the impact they have. Other studies have emphasized the use of network measures in determining seeding strategies (Hinz et al., 2011; Chen et al., 2017). The strategy of selecting seeds based on their degree centrality (number of friends) appears to be the most successful at spreading information.
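Seeding by degree centrality, the strategy found most successful at spreading information, amounts to ranking consumers by their number of friends and targeting the top of the list. A minimal sketch on a hypothetical friendship network (the names and the adjacency-list representation are illustrative, not from any cited study):

```python
def seeds_by_degree(friends, k):
    """Pick the k seed consumers with the most friends (degree centrality).
    friends maps each consumer to the list of consumers they are tied to."""
    return sorted(friends, key=lambda u: len(friends[u]), reverse=True)[:k]

# Hypothetical five-person network as an adjacency list
network = {
    "ann": ["bob", "cat", "dan", "eve"],
    "bob": ["ann", "cat"],
    "cat": ["ann", "bob", "dan"],
    "dan": ["ann", "cat"],
    "eve": ["ann"],
}
top = seeds_by_degree(network, 2)   # the two best-connected consumers
```

More elaborate strategies in this literature swap degree for other network measures (e.g. betweenness or closeness centrality), but the selection logic is the same ranking step.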
Multi-channel. Many retailers now offer both online and offline channels, allowing them to offer all types of service and giving rise to multi-channel retailing; Amazon's acquisition of Whole Foods is a prominent example. Studies have generally shown that the benefits of cross-channel synergy outweigh the costs (Pauwels and Neslin, 2015), and that the benefits are enhanced if services are integrated across channels (Herhausen et al., 2015).
If the offline seller belongs to a different company than the online one, the practice of searching offline and buying online, which is called showrooming, leaves offline sellers uncompensated for the information they provide. In an extensive survey study of showrooming behavior, Gensler et al. (2017) found that consumers are more likely to showroom when they have difficulty finding a sales person, and the authors concluded that large numbers of salespeople may be more important than high-quality salespeople. While the received wisdom assumes that showrooming is harmful to offline retailers, Kuksov and Liao (2018) show that this need not be the case if manufacturers compensate retailers for providing the appropriate level of service.
The internet and well-conceived customer databases allow consumers to be tracked through different stages of the purchase process. This has led to omni-channel marketing: developing a marketing strategy for each stage in the purchase process (Verhoef et al., 2015). For example, search and display advertising, one's own website, referral websites, offline retailers, desktop and mobile devices, may all be used by consumers, and may merit their own channel strategy. As an example, Kalyanam et al. (2017) developed a structural model to examine the use of different online and offline channels by customers of a catalog retailer and found that the market can be segmented by the shopping costs associated with different channels. These costs are related to a customer's past experience and basket size. As another example, Li and Kannan (2014) used detailed data on different channels used in searching for a hotel to develop a model for attributing conversions to different channels. Li et al. (2016) examined the profitability of different attribution strategies.
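The attribution problem studied by Li and Kannan (2014) and Li et al. (2016) can be illustrated with the two simplest rules practitioners compare: last-touch (all credit to the final channel before conversion) and linear (credit split evenly along the path). This toy sketch is not the authors' model, which is far more sophisticated; the journey data and channel names are hypothetical.

```python
from collections import defaultdict

def attribute(paths, rule="last"):
    """Credit each channel for conversions under a simple attribution rule.
    paths: list of channel sequences, each ending in one conversion."""
    credit = defaultdict(float)
    for path in paths:
        if rule == "last":          # all credit to the final touchpoint
            credit[path[-1]] += 1.0
        elif rule == "linear":      # credit split evenly along the path
            for ch in path:
                credit[ch] += 1.0 / len(path)
    return dict(credit)

# Three hypothetical customer journeys, each ending at the seller's site
journeys = [
    ["display", "search", "site"],
    ["search", "site"],
    ["display", "site"],
]
last = attribute(journeys, "last")
linear = attribute(journeys, "linear")
```

Under last-touch, the site captures all three conversions; under the linear rule, display and search each earn partial credit. The gap between the two allocations is exactly why model-based attribution, which estimates each channel's incremental contribution, matters for budget decisions.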
Research on non-internet topics. Apart from the internet, the areas with the most research in 2010-2019 were ethnographic studies of consumption behavior, and studies of branding, advertising and word-of-mouth, all of which are traditional areas. There were also a number of papers on research methods, ranging from a framework for conceptual contributions (MacInnis, 2011), to procedures for testing mediation (Zhao et al., 2010) and moderation (Spiller et al., 2013), to various econometric methods, including estimating the logit scale factor (Fiebig et al., 2010), estimating control functions (Petrin and Train, 2010) and estimating copulas (Danaher and Smith, 2011;Park and Gupta, 2012).
Applications of neuroscience (measuring brain activity) to marketing also became more prominent during this period. Reviews that emphasize barriers to the application of neuroscience techniques in marketing were presented by Plassmann et al. (2015) and Ramsoy (2019). Plassmann et al. (2015) also outlined a number of potential marketing applications of neuroscience methods. Two examples of recent papers that demonstrate the potential usefulness of neuroscience techniques are Venkatraman et al. (2015) and Chan et al. (2018). In a test of the ability of different measures to predict market-level advertising elasticities, Venkatraman et al. (2015) found that functional magnetic resonance imaging (fMRI) measures added explanatory power to a battery of traditional measures of advertising response. Chan et al. (2018) found that profiles of brand image developed from fMRI measures were associated with self-report measures of brand image. In general, neuroscience is a promising area that will likely be developed further in the coming years.
Summary. In sum, faced with a new medium, new forms of advertising, social networks as a new vehicle for word-of-mouth, the growth of online retailing, clickstream data, data from online reviews and increased computer power, the marketing field has been provided with a wealth of research topics, and increased feasibility for addressing them. The marketing field has taken great advantage of these opportunities over the past 10 years.

Discussion
The past
Over the past 50 years, academic research has produced a unique body of knowledge that defines the marketing field as we know it today. This is outlined in Table I. As shown in Table I, basic concepts such as the marketing mix, product life cycle and diffusion of new products date at least to the 1960s. With the advent of useable computers, the basic empirical methods for analyzing consumers and markets also emerged at this time. Multiattribute models of products, perceptions and preferences emerged in the 1970s; in particular, conjoint measurement dates to this period. With the advent of scanner data, discrete choice models became prominent in the 1980s. Models of competitive strategy also emerged during this time. Customer relationship management and brand equity developed during the 1990s, and measuring marketing profitability was a major topic in the period 2000-2009. Finally, changes triggered by the internet have dominated the field after 2000 and have featured new developments in online advertising, multi-channel marketing and mobile marketing, as well as major additions to the literature on text mining and social networks.
Though they have been developed and refined, all of these topics, and others listed in Table I, are relevant today. Moreover, this review demonstrates that the field has always responded with a flurry of research whenever a new problem has become prominent, a new source of data has emerged, or improved computer power has enabled improvements in analysis.
The future
Innovations and research topics. Traditional machine learning approaches are good at uncovering patterns and prediction, but not at explaining why things happen. Marketing models tend to emphasize explanation. We have already reviewed some papers that combine the strengths of the two approaches, and this is an area of research that will likely develop further. There are a number of environmental changes that are also likely to spawn research in marketing. We are moving into an age of automation. The period 2010-2019 saw the introduction of virtual assistants, such as Amazon Alexa, that can respond to voice commands and perform household tasks. The increasing availability of automated devices means that the smart home is evolving to join the smart phone as a mainstream technology. For analyses of these innovations, see Hoffman and Novak (2018) and Novak and Hoffman (2019). We also see an evolution in retailing toward multi-channel selling, and experimentation with delivery technologies such as self-driving cars and drones. There is a need to understand the drivers of these changes: in many cases, delivery through physical stores may be the most cost-effective method.
Needed improvements in the marketing field. In general, the analytic rigor in our field is desirable and is the reason why the field turned to underlying disciplines in the 1960s. The result has been an impressive body of work that has had a major impact on practice. However, marketing currently faces significant problems and significant competition. Some of the problems, and approaches to solving them, are listed below.
First, the fact that basic concepts dating to the 1960s are still used is not necessarily a positive thing. For example, Webster and Lusch (2013) argue that the focus on the needs of individual consumers is obsolete, and that a broader focus on quality of life and the impact of marketing on society is needed.
Second, the field has become increasingly divided into groups that look to different basic disciplines for inspiration (economics, statistics, psychology, management, anthropology, etc.) and tend to limit their work to their chosen area. Cross-fertilization of topics across areas is limited, yet there are major opportunities for synergies. For example, quantitative research can be used to establish empirical results, while experiments can be used to study the underlying process. Quantitative text mining methods can replace hand coding of qualitative data. Clickstream data can be used to formulate and test models of search behavior (Kim et al., 2011; Bronnenberg et al., 2016).
Third, articles within each sub-field contain a degree of technical rigor that corresponds to the state of the art for that area. While this is desirable, the articles are not easy to read and understand for researchers who are not in that area, and even less so for practitioners. Possibly because of this, most practitioners no longer attend our conferences. Review articles, which provide a relatively quick overview of an area, are one way of addressing this problem.
With the exception of research sponsored by the Marketing Science Institute, and the contacts made by a few scholars, the field has not paid much attention to user needs in the recent past. Good marketing implies that appreciating user needs should be a high priority, so we need to develop closer contacts with practitioners. Partly due to this lack of contact, topics that are important to practitioners may not receive attention from marketing academics, and practitioners may need to look elsewhere for help. For example, because they need to generate responses to consumer inputs in real time, practitioners may favor machine learning approaches from computer science that give useable solutions quickly over complex econometric models that may take a week or more to estimate. This is another reason for working more closely with practitioners.
The field is also facing increasing competition from MIS and other fields that have chosen to apply their methods to what we would call marketing problems. Who wins will depend on who wins the battle for students, which in turn depends on the demand by practitioners for employees who possess the body of knowledge that we impart to our students. To win this competition, marketing scholars need to be able to explain to others what our unique competence is: understanding marketing problems and applying the best available techniques, including those of data science where appropriate, to solving them.
There are articles that detail contributions of marketing to practice. An example is the ISMS Practice Prize competition articles, which report solutions to actual marketing problems. If the field is to be relevant to practitioners, such articles should be encouraged.
In sum, the main problem facing marketing is not keeping up with the techniques favored by data scientists; marketing academics with quantitative skills can easily learn these techniques and find applications for them. The main problem lies in convincing constituents that we have a better solution to marketing problems, one that follows from knowledge of the marketing discipline, a body of knowledge that is unique to our field. If employers appreciate this, our students will be in demand, students will take our courses, and our field will flourish.