Guest editorial: Favoring fieldwork makes marketing more meaningful

Tobias Otterbring (University of Agder, Kristiansand, Norway)
Giampaolo Viglia (University of Portsmouth, Portsmouth, UK)
Laura Grazzini (University of Florence, Florence, Italy)
Gopal Das (Indian Institute of Management Bangalore, Bangalore, India)

European Journal of Marketing

ISSN: 0309-0566

Article publication date: 9 June 2023

Issue publication date: 9 June 2023


Citation

Otterbring, T., Viglia, G., Grazzini, L. and Das, G. (2023), "Guest editorial: Favoring fieldwork makes marketing more meaningful", European Journal of Marketing, Vol. 57 No. 7, pp. 1793-1803. https://doi.org/10.1108/EJM-07-2023-987

Publisher: Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited


1. Introduction

Online retailing and digital marketing are steadily growing in popularity (Ares et al., 2023; Singh and Söderlund, 2020). However, most everyday consumer decisions still take place beyond the computer screen, far away from university labs or crowdsourced online platforms. Despite this reality, field studies – designed to capture actual purchase or choice behavior – are surprisingly scarce in psychology, marketing and consumer research (Li et al., 2015; Liberali et al., 2013; Robitaille et al., 2021; Wang et al., 2016). In contrast, ample academic work in these disciplines uses self-reported measures as the principal outcome, mainly based on university students or paid online panel members (Baumeister et al., 2007; Doliński, 2018; Gneezy, 2017; Otterbring et al., 2020), with the data commonly collected through the “research by convenience” approach (Pham, 2013).

Reliance on stated attitudes, preferences or intentions – as collected either under controlled conditions inside a lab or through “Turkers” or “Prolificants” – makes good sense in many situations. For example, wisely designed online and lab experiments can increase rigor by allowing scholars to collect data on several variables that may act as confounds (Ding et al., 2020; Elbæk et al., 2022; Viglia et al., 2021). Rigor can be construed as encompassing careful design, execution, analysis, interpretation of results and retention of data, including operationalization and measurement of key constructs to test and provide evidence for the effect of an independent variable on a dependent variable, even when other alternative accounts have been considered empirically (McAlister, 2016). Therefore, controlled lab and online experiments may offer a more complete theoretical picture by, for instance, ruling out competing explanations, documenting the cognitive, affective or motivational mechanisms driving an effect (mediation), demonstrating under what specific circumstances the effect can be turned on and off (moderation) and ultimately providing more compelling evidence for the causal relationship between variables – at least in equally controlled contexts (Otterbring, 2021).

Lab and online studies also tend to be substantially less labor-intensive than field studies and can often be conducted in a much shorter timeframe, while costing considerably less to execute (Cialdini, 2009; Pham, 2013). Indeed, it can take many months before even getting the green light to run a field study and then several additional weeks to reach a sufficiently large sample size. This long-lasting and costly process often makes researchers prefer prompt online panel studies instead, where they can easily collect data from hundreds of participants in a few minutes using only a tiny fraction of the finances needed for fieldwork.

In light of the above, what is the value of running field studies? Below, we delineate some of their many merits, while simultaneously advising marketing scholars to more frequently test their theorizing in actual shopping settings to stay relevant and ensure that their findings also apply in the real world (Rozin, 2001). Next, we introduce the articles of this special issue, with a summary of what each included paper investigates and the data used for these purposes. Finally, we conclude by presenting an example checklist of things to consider for marketing scholars and reviewers who conduct or review studies with a quantitative touch, including cross-sectional and (quasi-) experimental online, lab and field studies, with the hope of seeing more field-based investigations and increased methodological pluralism in the loftiest marketing journals.

2. Favorable features with fieldwork

One benefit of field studies, regardless of their precise design, is that they usually allow for studying actual purchase or choice behavior in ecologically valid settings, as they tend to blend in with what is naturally going on. This is important, as very few studies in marketing and consumer research include behavioral dependent variables as the focal outcomes (e.g. product or brand choices, money spent), collected in actual consumption contexts (online or offline). Although much academic work within and beyond the marketing discipline captures responses that, in some way, resemble behavior (e.g. hypothetical choice or self-reported behavior), few articles explicitly deal with real, observable behavior as the principal outcome (Baumeister et al., 2007; Gneezy, 2017). Instead, researchers often abandon fieldwork entirely to stay productive in the “publish or perish” game.

To be clear, we are not advising against the use of self-report measures, and we frequently incorporate such measures in our own study setups. Researchers often have good reasons for abstaining from conducting studies in actual consumption contexts and opting for self-report measures instead of providing behavioral evidence for a particular phenomenon. Certain research topics, such as the precise psychological processes driving a given marketing phenomenon, are messy to investigate in the field. A research team might therefore want to maximize internal validity in those instances to isolate cause-effect relationships while avoiding the unpredictable “noise” that is characteristic of virtually all field studies (Baumeister et al., 2007; Bergquist et al., 2020; Cialdini, 2009). However, the scarcity of field studies is problematic because we need a leap of faith to believe that results obtained under controlled conditions will automatically replicate in the field. We see a set of serious risks in moving away from behavioral studies and work conducted in actual field settings, with scholars now strategically seeming to stay away from such “slow” studies and instead submitting “speedy” multi-study packages largely based on self-report to boost their publication portfolios.

The practical significance of behavioral dependent variables is arguably much easier to articulate to the public compared to most self-report measures (Baca-Motes et al., 2013; Chen et al., 2020; Cialdini, 2009; Munz et al., 2020). This might be especially true in business disciplines, such as marketing, where relevance can be thought of as the extent to which a given study focuses on factors that managers can influence and are interested in (Jaworski, 2011). When viewed through this lens, well-designed marketing studies should optimally give actionable advice to managers in the pursuit of their own goals, thus implying that addressing real and up-to-date marketing problems is key in making a substantive contribution with clear practical relevance.

As a case in point, consider pitching one of the following two findings to a retail manager: A certain tweak in the shelf configuration of packaged products

  1. increases consumers’ choice likelihood of a more (vs. less) expensive product in a store by 25%; or

  2. increases students’ purchase intentions from 6.2 to 6.6 on a nine-point scale ranging from 1 (not at all willing to buy) to 9 (very willing to buy) inside a university lab.

Most store managers would likely favor the first pitch and would most certainly have an easier time understanding its relevance for marketing practice (Otterbring, 2023). Cleverly conducted field experiments, such as those capturing actual purchase behavior or naturalistic consumer choice, overcome most of the limitations associated with one-off cross-sectional survey research without randomization, the latter of which mainly relies on self-report measures. Compared to such complementary methods, field experiments can establish cause-and-effect relationships, thus offering more tangible practical insights to marketing researchers and practitioners alike.

Field experiments have also been discussed as an effective way to counteract endogeneity bias (McAlister, 2016), which exists when changes in an independent variable X are associated with changes in a dependent variable Y, not because X causally influenced Y but rather due to other reasons linked to omitted explanatory variables, simultaneity and measurement error (Antonakis et al., 2014; Semykina and Wooldridge, 2010; Toubia et al., 2003). As the classic example goes, the positive link between ice-cream consumption and outdoor violence is not necessarily causal, given that both these events tend to occur more frequently in the summertime as opposed to the wintertime, suggesting that an omitted variable (seasonality) accounts for this spurious correlation.
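The ice-cream example can be illustrated with a short simulation. All numbers below are purely hypothetical and chosen only to show how an omitted variable (here, temperature) produces a sizable raw correlation that vanishes once the confound is regressed out.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Omitted variable: seasonality, proxied by temperature
temperature = rng.normal(20, 8, n)

# Ice-cream consumption and outdoor violence both rise with temperature,
# but neither causally affects the other (hypothetical coefficients)
ice_cream = 2.0 * temperature + rng.normal(0, 10, n)
violence = 0.5 * temperature + rng.normal(0, 5, n)

# The raw correlation between the two outcomes looks substantial...
r_raw = np.corrcoef(ice_cream, violence)[0, 1]

def residualize(y, x):
    """Residuals of y after a simple OLS regression on x."""
    xc = x - x.mean()
    beta = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    return y - y.mean() - beta * xc

# ...but partialling out temperature removes it almost entirely
r_partial = np.corrcoef(residualize(ice_cream, temperature),
                        residualize(violence, temperature))[0, 1]

print(f"raw correlation:     {r_raw:.2f}")
print(f"partial correlation: {r_partial:.2f}")
```

With these parameters the raw correlation lands around 0.5, while the partial correlation hovers near zero, which is the signature of a spurious association driven by an omitted variable.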

The behavioral component of many field studies is equally important because consumers do not always live as they learn, suggesting that their stated attitudes and intentions cannot always be used to draw accurate behavioral inferences (Mittal, 1988; Otterbring, 2021; Park and Lin, 2020; Sheeran and Webb, 2016). This gap between attitudes and intentions, on the one hand, and actual behavior, on the other hand, can have many explanations, such as biases linked to social desirability, expectancy and experimenter effects and demand characteristics (Sheeran, 2002; Otterbring and Frank, 2023; Rosenthal, 1976; Viglia and Acuti, 2023).

Considering the above, scholars cannot (and should not) take for granted that models and theories primarily developed through stated preferences, self-reported attitudes and behavioral intentions will automatically generalize to consumers’ actual purchase or choice behavior (Carrington et al., 2014; Loebnitz et al., 2022; Maner, 2016; Morales et al., 2017; Rutz and Watson, 2019). Whereas self-report can be an extremely useful tool in explaining behavior, this tool should not replace (or erase) the study of behavior completely – at least if scholars strive to make inferences about behavior, which typically tends to be the case in psychological science and most of the marketing literature (Baumeister et al., 2007; Bougie et al., 2003; Gneezy, 2017; Nisbett and Wilson, 1977). Moreover, moving away from behavioral variables and field studies may dilute the perceived managerial relevance of the published literature (Cialdini, 2009; Holbrook, 1987; MacInnis et al., 2020), not least because self-reported responses do not always predict behavior. In fact, self-report measures frequently produce findings that are in direct contrast to customers’ actual shopping responses (Otterbring, 2020; Sciandra et al., 2019) and can even change customers’ subsequent behavior when self-report data are collected prior to the behavioral outcome (Morwitz and Fitzsimons, 2004). Accordingly, the common focus areas of rigor and control, frequently treated as top priorities in lab-based studies, can lead to results that are irrelevant to managerially important problems taking place in the real world if most published findings are purely based on self-report, eventually hurting marketing science as a discipline (Lehmann et al., 2011; Qian et al., 2023).

The excessive use of student samples and paid participants from crowdsourced online platforms may also make the academic literature biased and prone to over-generalization (Pham, 2013; Saad, 2021; Van Heerde et al., 2021; Yarkoni, 2020). By all means, online panel studies can be more representative than studies based on other common sample types (Goodman and Paolacci, 2017), and often yield comparable or higher data quality when compared to traditional sample types, such as university students (Hauser and Schwarz, 2016). Student samples may also be justified for testing general principles (e.g. consumer preferences among young and educated adults), but should not be used as surrogates of the general population for matters that they are not representative of (e.g. business travelers’ reactions to price changes; Viglia et al., 2021). For these and other reasons, collecting data from customers in actual field settings is frequently superior when it comes to getting more heterogeneous – and often more representative – samples, which in part may help to mitigate the so-called WEIRD bias in the literature, with most of the published findings based on participants from Western, educated, industrialized, rich and democratic societies (Henrich et al., 2010; Muthukrishna et al., 2020), typically in the form of Western university students or online panel members.

The current special issue strove to publish papers based on data collected in actual field settings and hence prioritized realism, relevance and real-world impact when assessing all submissions. It is our hope that this special issue will stimulate further work based on a multitude of metrics, methods and marketing phenomena that can be captured, used or addressed through various field-based approaches.

3. Special issue articles

We received a total of 42 submissions to this special issue on field studies in marketing. Following the regular peer-review process, we ended up with six accepted articles (five empirical articles and one review article), yielding an acceptance rate of 14.3%. In terms of the continents and countries linked to the authors’ first or primary affiliation, the accepted articles cover Europe (9), North America (6), Asia (4) and South America (3), with country-specific data represented by the USA (6), Brazil (3), India (2), Taiwan (2), Sweden (2), Norway (2), Finland (2), Denmark (1), Iceland (1) and Italy (1), although some authors’ secondary or tertiary affiliations cover additional countries, such as South Africa, Belgium and France. Accordingly, and despite the persistent WEIRD bias, the author teams cover a fair share of the planet, with additional countries and continents also covered by all initial submissions that did not make it all the way to final publication.

The first empirical paper by Isojärvi and Aspara (2023) deals with pricing in the case of organic products. The topic is relevant in that organic products have higher costs and, therefore, identifying the appropriate online advertising strategies to enhance their quality perceptions is essential for the survival of companies offering such products. With a pre-study and a field experiment, the authors find that consumers perceive the price promotion of an organic product as a periodic promotional action, much like the price promotions for conventional, non-organic products. Importantly, consumers assume that the regular prices of organic products are so high that the retailer/manufacturer can well afford periodic price discounts. This offers actionable advice for managers on when and why to apply such discounts. Additionally, the findings encourage cost transparency activities for brands selling organic products online, as consumers perceive that the sector is characterized by higher margins.

In the second empirical contribution, de Mesquita et al. (2023) focus on understanding the intention–behavior gap in the context of service failure. This is a lingering issue, as most of the empirical evidence is based on intentions rather than behavioral evidence. The paper uses a longitudinal panel of customers accessing fitness centers and measures the association between switching intentions and actual customer exit, also looking at important boundary conditions, such as gender and type of failure (i.e. process or outcome). The authors show the key role of failure severity, which reduces the intention–behavior gap. Overall, the results reveal a large share of customers who voice the idea of switching but then stick with the provider, thus highlighting a widespread intention–behavior gap.

Next, Otterbring et al. (2022) present two field studies that test whether salesperson–customer proximity influences consumers’ purchase behavior and store loyalty, and whether the short-term effects on purchase behavior are moderated by the extent to which the consumption context has a clear connection to consumers’ own bodies. Drawing on the nonverbal communication literature and theories on processing fluency, the authors find that salesperson proximity increases consumers’ purchase behavior in consumption contexts with a bodily basis (e.g. clothes, beauty, health), but – if anything – decreases purchase behavior in contexts that lack a clear bodily connection (e.g. building materials, furniture, books). These findings suggest that salesperson-customer proximity may improve short-term revenue in bodily consumption contexts and shopping settings characterized by one-time purchases. However, as salesperson proximity consistently decreases store loyalty, regardless of whether the shopping setting has a bodily basis, the costs of violating consumers’ personal space may outweigh the benefits.

In the fourth article, Yim et al. (2022) examine the relationship between the extent to which salespeople have a babyface in their profile pictures and the number of online reviews they receive, also looking at important boundary conditions such as consumer involvement and salesperson gender. Using an experimental design and field data from online profile pictures of real estate agents, combined with an artificial intelligence-based facial recognition application programming interface, the authors find that salespeople with a babyface get fewer (more) online reviews in high-involvement (low-involvement) service settings, but that the negative effect of a babyface on the number of online reviews is less (more) severe among female (male) salespeople. From a managerial viewpoint, these results indicate that salespeople should look more (less) mature in their online profile photos in high-involvement (low-involvement) purchase situations to generate more online reviews by consumers.

As the fifth and final empirical paper, Park et al. (2023) investigate how combining religion and business may be risky, as doing so typically leaves consumers with a perception of greed. Accordingly, the authors examine certain antecedents and consequences of consumers’ greed perception, when applied to the domain of for-profit religious-affiliated companies. Across an online experiment and a subsequent field experiment, they demonstrate that consumers indeed perceive greed in the commercial activities initiated by those companies, but that indirect rather than direct appeals in the form of third-party promoters may mitigate such greed perceptions, with indirect appeals also having the potential of increasing consumers’ purchase responses, at least in the short run. Practically, these findings imply that for-profit religious-affiliated companies may benefit from subtle cues that only indirectly convey their religious affiliation (e.g. through third-party promoters) instead of saliently signaling their religious identity to consumers. Further, considering the short-lived gains of using such indirect appeals, these companies may have to rely on a combination of direct and indirect appeals, as each appeal type has its inherent strengths and weaknesses.

Ending with a review article, Malodia et al. (2023) synthesize the proportion of field experiments appearing in 10 leading marketing journals (i.e. Journal of Marketing, Journal of Marketing Research, Journal of the Academy of Marketing Science, Marketing Science, Journal of Consumer Research, Journal of Consumer Psychology, International Journal of Research in Marketing, Journal of Retailing, Journal of Public Policy and Marketing and European Journal of Marketing), from these journals’ inception to early 2022, with their review spanning approximately half a century of marketing and consumer research. Evidently, experiments conducted in actual field settings are relatively rare, with the reviewed data revealing that less than 2% of all papers published in the reviewed journals between their inception and February 2022 contained field experiments. However, there is variability between the reviewed journals regarding the proportion of published fieldwork and a trend toward more field experiments during the past decade – partially attributed to the increased reliance on online platforms for running such studies – with roughly 4% of published papers containing at least one field experiment from 2011 to February 2022, according to the authors’ search terms and their review of the literature.

4. Concluding remarks and study checklist

Internal and external validity are frequently thought to be conflicting forces (Schram, 2005), with internal validity referring to how confidently we can conclude that a given independent variable is the primary cause for changes in the dependent variable(s) and with external validity representing the extent to which the results can be generalized across samples and settings (Galizzi and Navarro-Martinez, 2019; Trafimow, 2022). As such, academic studies can be put on a continuum, ranging from maximum internal validity (often at the expense of lower external validity) to maximum external validity (often at the expense of lower internal validity). Studies with improved behavioral realism constitute an intermediate category between traditional laboratory and field studies, as they typically trade some internal validity for greater external validity. Below, we present a checklist for researchers and reviewers designing, running and evaluating marketing studies with a quantitative touch (Table 1).

Given that most top-tier publications in marketing now almost require multi-study submissions, some of the above points (e.g. the typical tension between rigor and relevance or between internal and external validity; Holbrook, 1987; Jaworski, 2011; Lehmann et al., 2011; McAlister, 2016; MacInnis et al., 2020; Simonson et al., 2001) may not always be feasible to address in a single study. Therefore, as complementarity between samples, settings and study paradigms is often an underused resource, we urge scholars to mix methods more than what is currently the case to create a whole that is greater than the sum of its parts, while also carefully considering how to enhance realism, incorporate more behavioral measures and conduct a larger proportion of studies in actual field settings.

As evidenced from the articles included in this special issue, field studies can be conducted in a wide range of shopping environments, capturing multiple key customer outcomes, including but not restricted to customer switching (de Mesquita et al., 2023), customers’ purchase likelihood and the amount of money spent in a store (Otterbring et al., 2022), clickthrough rates of online ads (Isojärvi and Aspara, 2023) and the number of online reviews written by customers (Yim et al., 2022). Moreover, such field-based investigations are not restricted to traditional physical commercial settings but can be effectively run in a variety of digitally enabled environments. In fact, the ease of conducting field studies in online settings has even given rise to a new managerial jargon known as “split tests” or “A/B testing,” allowing managers to run field tests to evaluate the effects of different landing pages or website content using the highly granular unit of an individual website visit. That said, it is typically more challenging to use realistic study scenarios when the connection between the researcher and the participant is remote and anonymous, such as in many online studies. A possible solution, therefore, is to emphasize realism, which could entail requiring participants to put in actual effort when making decisions, searching for information or actively choosing between real products (Morales et al., 2017), while simultaneously highlighting the social dimension of the tasks at hand by, for instance, having participants engage in some sort of interpersonal interactions (Baumeister et al., 2022).
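As a minimal sketch of the A/B-testing logic, the snippet below compares purchase conversion between two hypothetical landing-page variants with a standard two-proportion z-test; all visitor and conversion counts are invented for illustration.

```python
from math import erf, sqrt

# Hypothetical split-test data: visitors and purchases per landing-page variant
visitors_a, conversions_a = 5_000, 400   # variant A: 8.0% conversion
visitors_b, conversions_b = 5_000, 460   # variant B: 9.2% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled two-proportion z-test
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}, p = {p_value:.4f}")
```

With these invented counts the test flags the 1.2 percentage-point lift as statistically significant at the conventional 5% level, which is precisely the decision managers face when choosing between page variants.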

Field studies certainly have their own flaws and usually do not allow for the same level of rigor as studies conducted under more controlled conditions. That said, journal editors and reviewers need to stop asking why, for example, five possible confounding factors were not controlled for in a field study, why two different control conditions were not incorporated in the study design, why some of the variables were not consistently measured through multi-item scales and why self-report measures of the presumed mediator and moderator were not included in a field-based investigation. All such questions can be crucial to address in follow-up studies. Yet, fieldwork should not be assessed through the same evaluation metrics as those used for online and lab studies conducted in more controlled settings. Therefore, as some of the major strengths of field studies relate to realism, relevance and real-world impact as well as generalizability and enhanced confidence in the behavioral consequences of a given phenomenon, assessments of field studies need to be firmly founded in these building blocks instead of a reflexive use of the evaluation criteria adopted for assessing controlled studies (Otterbring, 2023).

Table 1. Example checklist for researchers and reviewers running/evaluating marketing studies

Key aspect Things to consider
Relevance • Address a real and timely marketing problem
• Choose a topic relevant to marketing practice (e.g. it satisfies practitioners’ current informational needs)
Rigor • Develop an appropriate and robust research design
• Pay attention to measurement-related issues, construct operationalizations and reliability aspects
• Come up with a clear design that rules out possible alternative explanations for the examined effects
• Determine possible confounding variables (e.g. temperature, consumer income, seasonality)
• In experiments, consider the criteria needed to compellingly demonstrate that the independent variable has the intended causal effect on the dependent variable
• Consider whether the effectiveness of a manipulation (manipulation check) needs to be ascertained in the main study or in an earlier validation study
• Determine if study participants can be randomly assigned to conditions
Internal and external validity • Choose a design that allows for internal and/or external validity
• In multiple-study packages, use a combination of studies that maximizes both internal and external validity
• Determine how many observations are needed through statistical power calculations and sample size considerations
• In experiments, determine the number of conditions (e.g. control vs treatment)
Ethical approval • If applicable, receive an ethical approval from a relevant authority. These aspects vary greatly between countries, universities and academic journals, so adhering to the strictest guidelines may be advisable
Research impact • Measure actual behavior as the dependent variable or, at the very least, document behavioral realism
• Be aware that self-report measures can suffer from various bias sources
• Seek to offer actionable advice to practitioners (e.g. managers, policy makers, researchers)
Replicability • Describe the sample, design, procedure, measures, data and analyses with a sufficient level of detail so that others can replicate the study
Informativeness and illustrativeness • Show accuracy and completeness in the reporting of the results (e.g. go beyond p-values by also reporting effect sizes and/or confidence intervals)
• Include adequate visual representations of the results (e.g. tables, stimuli used, figures)
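The checklist’s point on statistical power can be made concrete with a small calculation. The sketch below uses the standard normal approximation for a two-group comparison at a 5% significance level and 80% power; the effect sizes are Cohen’s conventional benchmarks, not values taken from any of the studies discussed above.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    detecting a standardized effect size d (Cohen's d), using the
    normal approximation n = 2 * (z_alpha/2 + z_beta)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):  # Cohen's small, medium and large benchmarks
    print(f"d = {d}: {n_per_group(d)} participants per condition")
```

The approximation makes the practical trade-off visible: halving the expected effect size roughly quadruples the required sample, which is one reason field studies that chase small effects need long data-collection windows.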

References

Note that sources with *asterisks reflect articles included in the special issue.

Antonakis, J., Bendahan, S., Jacquart, P. and Lalive, R. (2014), “Causality and endogeneity: problems and solutions”, The Oxford Handbook of Leadership and Organizations, Vol. 1, pp. 93-117.

Ares, G., Alcaire, F., Gugliucci, V., Machín, L., de León, C., Natero, V. and Otterbring, T. (2023), “Colorful candy, teen vibes and cool memes: prevalence and content of Instagram posts featuring ultra-processed products targeted at adolescents”, European Journal of Marketing, Vol. 1.

Baca-Motes, K., Brown, A., Gneezy, A., Keenan, E.A. and Nelson, L.D. (2013), “Commitment and behavior change: evidence from the field”, Journal of Consumer Research, Vol. 39 No. 5, pp. 1070-1084.

Baumeister, R.F., Tice, D.M. and Bushman, B.J. (2022), “A review of multisite replication projects in social psychology: is it viable to sustain any confidence in social psychology’s knowledge base?”, Perspectives on Psychological Science, Vol. 1.

Baumeister, R.F., Vohs, K.D. and Funder, D.C. (2007), “Psychology as the science of self-reports and finger movements: whatever happened to actual behavior?”, Perspectives on Psychological Science, Vol. 2 No. 4, pp. 396-403.

Bergquist, M., Nyström, L. and Nilsson, A. (2020), “Feeling or following? A field‐experiment comparing social norms‐based and emotions‐based motives encouraging pro‐environmental donations”, Journal of Consumer Behaviour, Vol. 19 No. 4, pp. 351-358.

Bougie, R., Pieters, R. and Zeelenberg, M. (2003), “Angry customers don’t come back, they get back: the experience and behavioral implications of anger and dissatisfaction in services”, Journal of the Academy of Marketing Science, Vol. 31 No. 4, pp. 377-393.

Carrington, M.J., Neville, B.A. and Whitwell, G.J. (2014), “Lost in translation: exploring the ethical consumer intention–behavior gap”, Journal of Business Research, Vol. 67 No. 1, pp. 2759-2767.

Chen, Y., Lee, J.Y., Sridhar, S., Mittal, V., McCallister, K. and Singal, A.G. (2020), “Improving cancer outreach effectiveness through targeting and economic assessments: insights from a randomized field experiment”, Journal of Marketing, Vol. 84 No. 3, pp. 1-27.

Cialdini, R.B. (2009), “We have to break up”, Perspectives on Psychological Science, Vol. 4 No. 1, pp. 5-6.

*de Mesquita, J.M.C., Shin, H., Urdan, A.T. and Pimenta, M.T.C. (2023), “Measuring the intention-behavior gap in service failure and recovery: the moderating roles of failure severity and service recovery satisfaction”, European Journal of Marketing, Vol. 1.

Ding, Y., DeSarbo, W.S., Hanssens, D.M., Jedidi, K., Lynch, J.G. and Lehmann, D.R. (2020), “The past, present, and future of measurement and methods in marketing analysis”, Marketing Letters, Vol. 31 Nos 2/3, pp. 175-186.

Doliński, D. (2018), “Is psychology still a science of behaviour?”, Social Psychological Bulletin, Vol. 13 No. 2, pp. 1-14.

Elbæk, C.T., Mitkidis, P., Aarøe, L. and Otterbring, T. (2022), “Honestly hungry: acute hunger does not increase unethical economic behaviour”, Journal of Experimental Social Psychology, Vol. 101.

Galizzi, M.M. and Navarro-Martinez, D. (2019), “On the external validity of social preference games: a systematic lab-field study”, Management Science, Vol. 65 No. 3, pp. 976-1002.

Gneezy, A. (2017), “Field experimentation in marketing research”, Journal of Marketing Research, Vol. 54 No. 1, pp. 140-143.

Goodman, J.K. and Paolacci, G. (2017), “Crowdsourcing consumer research”, Journal of Consumer Research, Vol. 44 No. 1, pp. 196-210.

Hauser, D.J. and Schwarz, N. (2016), “Attentive Turkers: MTurk participants perform better on online attention checks than do subject Pool participants”, Behavior Research Methods, Vol. 48 No. 1, pp. 400-407.

Henrich, J., Heine, S.J. and Norenzayan, A. (2010), “The weirdest people in the world?”, Behavioral and Brain Sciences, Vol. 33 Nos 2/3, pp. 61-83.

Holbrook, M.B. (1987), “What is consumer research?”, Journal of Consumer Research, Vol. 14 No. 1, pp. 128-132.

*Isojärvi, J. and Aspara, J. (2023), “Consumers’ behavioural responses to price promotions of organic products: an introspective pre-study and an online field experiment”, European Journal of Marketing, Vol. 1.

Jaworski, B.J. (2011), “On managerial relevance”, Journal of Marketing, Vol. 75 No. 4, pp. 211-224.

Lehmann, D.R., McAlister, L. and Staelin, R. (2011), “Sophistication in research in marketing”, Journal of Marketing, Vol. 75 No. 4, pp. 155-165.

Li, J.Q., Rusmevichientong, P., Simester, D., Tsitsiklis, J.N. and Zoumpoulis, S.I. (2015), “The value of field experiments”, Management Science, Vol. 61 No. 7, pp. 1722-1740.

Liberali, G., Urban, G.L. and Hauser, J.R. (2013), “Competitive information, trust, brand consideration and sales: two field experiments”, International Journal of Research in Marketing, Vol. 30 No. 2, pp. 101-113.

Loebnitz, N., Frank, P. and Otterbring, T. (2022), “Stairway to organic heaven: the impact of social and temporal distance in print ads”, Journal of Business Research, Vol. 139, pp. 1044-1057.

McAlister, L. (2016), “Rigor versus method imperialism”, Journal of the Academy of Marketing Science, Vol. 44 No. 5, pp. 565-567.

MacInnis, D.J., Morwitz, V.G., Botti, S., Hoffman, D.L., Kozinets, R.V., Lehmann, D.R., … and Pechmann, C. (2020), “Creating boundary-breaking, marketing-relevant consumer research”, Journal of Marketing, Vol. 84 No. 2, pp. 1-23.

*Malodia, S., Dhir, A., Hasni, M.J.S. and Srivastava, S. (2023), “Field experiments in marketing research: a systematic methodological review”, European Journal of Marketing, Vol. 1.

Mittal, B. (1988), “Achieving higher seat belt usage: the role of habit in bridging the attitude‐behavior gap”, Journal of Applied Social Psychology, Vol. 18 No. 12, pp. 993-1016.

Morales, A.C., Amir, O. and Lee, L. (2017), “Keeping it real in experimental research—understanding when, where, and how to enhance realism and measure consumer behavior”, Journal of Consumer Research, Vol. 44 No. 2, pp. 465-476.

Morwitz, V.G. and Fitzsimons, G.J. (2004), “The mere-measurement effect: why does measuring intentions change actual behavior?”, Journal of Consumer Psychology, Vol. 14 Nos 1/2, pp. 64-74.

Munz, K.P., Jung, M.H. and Alter, A.L. (2020), “Name similarity encourages generosity: a field experiment in email personalization”, Marketing Science, Vol. 39 No. 6, pp. 1071-1091.

Muthukrishna, M., Bell, A.V., Henrich, J., Curtin, C.M., Gedranovich, A., McInerney, J. and Thue, B. (2020), “Beyond Western, educated, industrial, rich, and democratic (WEIRD) psychology: measuring and mapping scales of cultural and psychological distance”, Psychological Science, Vol. 31 No. 6, pp. 678-701.

Nisbett, R.E. and Wilson, T.D. (1977), “Telling more than we can know: verbal reports on mental processes”, Psychological Review, Vol. 84 No. 3, pp. 231-259.

Otterbring, T. (2020), “Appetite for destruction: counterintuitive effects of attractive faces on people’s food choices”, Psychology and Marketing, Vol. 37 No. 11, pp. 1451-1464.

Otterbring, T. (2021), “Peer presence promotes popular choices: a ‘spicy’ field study on social influence and brand choice”, Journal of Retailing and Consumer Services, Vol. 61, p. 102594.

Otterbring, T. (2023), “Field studies in food settings: lessons learned and concrete cases”, in Gómez-Corona, C. and Rodrigues, H. (Eds), Consumer Research Methods in Food Science, Humana, New York, NY, pp. 313-328.

Otterbring, T., Sundie, J., Li, Y.J. and Hill, S. (2020), “Evolutionary psychological consumer research: bold, bright, but better with behavior”, Journal of Business Research, Vol. 120, pp. 473-484.

*Otterbring, T., Samuelsson, P., Arsenovic, J., Elbæk, C.T. and Folwarczny, M. (2022), “Shortsighted sales or long-lasting loyalty? The impact of salesperson-customer proximity on consumer responses and the beauty of bodily boundaries”, European Journal of Marketing, Vol. 1.

Park, H.J. and Lin, L.M. (2020), “Exploring attitude–behavior gap in sustainable consumption: comparison of recycled and upcycled fashion products”, Journal of Business Research, Vol. 117, pp. 623-628.

*Park, S., Kang, J.S. and Markman, G.D. (2023), “Can we serve both God and money? The role of indirect appeal and its limitation”, European Journal of Marketing, Vol. 1.

Pham, M.T. (2013), “The seven sins of consumer psychology”, Journal of Consumer Psychology, Vol. 23 No. 4, pp. 411-423.

Qian, D., Yan, H., Pan, L. and Li, O. (2023), “Bring it on! Package shape signaling dominant male body promotes healthy food consumption for male consumers”, Psychology and Marketing, Vol. 1.

Robitaille, N., Mazar, N., Tsai, C.I., Haviv, A.M. and Hardy, E. (2021), “Increasing organ donor registrations with behavioral interventions: a field experiment”, Journal of Marketing, Vol. 85 No. 3, pp. 168-183.

Rosenthal, R. (1976), Experimenter Effects in Behavioral Research, Irvington, New York, NY.

Rozin, P. (2001), “Social psychology and science: some lessons from Solomon Asch”, Personality and Social Psychology Review, Vol. 5 No. 1, pp. 2-14.

Rutz, O.J. and Watson, G.F. (2019), “Endogeneity and marketing strategy research: an overview”, Journal of the Academy of Marketing Science, Vol. 47 No. 3, pp. 479-498.

Saad, G. (2021), “Addressing the sins of consumer psychology via the evolutionary lens”, Psychology and Marketing, Vol. 38 No. 2, pp. 371-380.

Schram, A. (2005), “Artificiality: the tension between internal and external validity in economic experiments”, Journal of Economic Methodology, Vol. 12 No. 2, pp. 225-237.

Sciandra, M.R., Inman, J.J. and Stephen, A.T. (2019), “Smart phones, bad calls? The influence of consumer mobile phone use, distraction, and phone dependence on adherence to shopping plans”, Journal of the Academy of Marketing Science, Vol. 47 No. 4, pp. 574-594.

Semykina, A. and Wooldridge, J.M. (2010), “Estimating panel data models in the presence of endogeneity and selection”, Journal of Econometrics, Vol. 157 No. 2, pp. 375-380.

Sheeran, P. (2002), “Intention—behavior relations: a conceptual and empirical review”, European Review of Social Psychology, Vol. 12 No. 1, pp. 1-36.

Sheeran, P. and Webb, T.L. (2016), “The intention–behavior gap”, Social and Personality Psychology Compass, Vol. 10 No. 9, pp. 503-518.

Singh, R. and Söderlund, M. (2020), “Extending the experience construct: an examination of online grocery shopping”, European Journal of Marketing, Vol. 54 No. 10, pp. 2419-2446.

Toubia, O., Simester, D.I., Hauser, J.R. and Dahan, E. (2003), “Fast polyhedral adaptive conjoint estimation”, Marketing Science, Vol. 22 No. 3, pp. 273-303.

Trafimow, D. (2022), “A new way to think about internal and external validity”, Perspectives on Psychological Science.

Van Heerde, H.J., Moorman, C., Moreau, C.P. and Palmatier, R.W. (2021), “Reality check: infusing ecological value into academic marketing research”, Journal of Marketing, Vol. 85 No. 2, pp. 1-13.

Viglia, G. and Acuti, D. (2023), “How to overcome the intention–behavior gap in sustainable tourism: tourism agenda 2030 perspective article”, Tourism Review, Vol. 78 No. 2, pp. 321-325.

Viglia, G., Zaefarian, G. and Ulqinaku, A. (2021), “How to design good experiments in marketing: types, examples, and methods”, Industrial Marketing Management, Vol. 98, pp. 193-206.

Wang, Y., Lewis, M., Cryder, C. and Sprigg, J. (2016), “Enduring effects of goal achievement and failure within customer loyalty programs: a large-scale field experiment”, Marketing Science, Vol. 35 No. 4, pp. 565-575.

Yarkoni, T. (2020), “The generalizability crisis”, Behavioral and Brain Sciences, Vol. 45.

*Yim, A., Price, B., Agnihotri, R. and Cui, A.P. (2022), “Do salespeople’s online profile pictures predict the number of online reviews? Effect of a Babyface”, European Journal of Marketing, Vol. 1.

Further reading

Hoyer, W.D. (1984), “An examination of consumer decision making for a common repeat purchase product”, Journal of Consumer Research, Vol. 11 No. 3, pp. 822-829.

Viglia, G. and Dolnicar, S. (2020), “A review of experiments in tourism and hospitality”, Annals of Tourism Research, Vol. 80, p. 102858.