
Understanding Underground Incentivized Review Services

Published: 11 May 2024

Abstract

While human factors in fraud have been studied by the HCI and security communities, most research has been directed toward understanding either the victims' perspectives or prevention strategies, and not toward fraudsters, their motivations, and their operation techniques. Additionally, the focus has been on a narrow set of problems: phishing, spam and bullying. In this work, we seek to understand review fraud on e-commerce platforms through an HCI lens. Through surveys with real fraudsters (N=38 agents and N=36 reviewers), we uncover sophisticated recruitment, execution, and reporting mechanisms fraudsters use to scale their operation while resisting takedown attempts, including the use of AI tools like ChatGPT. We find that countermeasures that crack down on the communication channels through which these services operate are effective in combating incentivized reviews. This research sheds light on the complex landscape of incentivized reviews, providing insights into the mechanics of underground services and their resilience to removal efforts.


1 INTRODUCTION

While cyber security is an area of growing importance in the HCI community, most research focuses on the victims' experiences with certain kinds of cyber crime [7] or the effectiveness of intervention strategies to prevent those crimes [22, 26, 33]. Relatively little is known about the experiences and perspectives of actors who actually perpetrate online fraud, including how they operate and, more crucially, how they evade detection and combat takedown efforts. Additionally, most research efforts at the HCI-security intersection have focused on a narrow set of problems: mainly phishing [17, 35, 45] and privacy [24, 44]. Although more recent threats such as reputation manipulation and review fraud are growing in prevalence [28], no academic work has treated this as an HCI problem, focusing instead on technical detection measures.

Our work aims to bridge these gaps by treating review manipulation as an HCI problem, and, in doing so, uniquely focuses on the modus operandi of the attackers involved. We study underground incentivized review services, which allow sellers to solicit positive reviews from real customers in exchange for free products. While there exist some forms of legitimate incentivized reviews (e.g., platform-run programs such as Amazon Vine that aim to solicit honest and unbiased opinions and are marked as “Vine Customer Review of Free Product” for transparency [48]), our unique focus is on illegitimate incentivized reviews that aim to solicit misleading guaranteed positive reviews and do not carry any disclosure of the free product incentive. Since such incentivized reviews are generally prohibited on e-commerce marketplaces [9], buyers actually purchase the product on the e-commerce marketplace, and then get reimbursed out-of-band (e.g., via PayPal) after they provide evidence of the submitted positive review. Operating through a complex web of multiple intermediaries, a myriad of social media platforms, and spanning multiple countries, underground incentivized review services mediate the interactions between buyers and sellers while receiving a commission from the sellers in exchange for sourcing a five-star review.

In order to understand the organization and operational characteristics of these services, the motivations and incentives for the players involved, and fraudsters' mental models of detection, we conduct qualitative surveys with review agents and reviewers who engage in incentivized review fraud. Drawing upon insights from N = 38 review agents and N = 36 incentivized reviewers, we discover how review rings operate and evade detection.

Incentivized reviews have garnered significant attention over the past few years, with both the public and private sectors taking steps to combat them. The Federal Trade Commission (FTC) has proposed a new set of rules [12] through which it can prosecute businesses for failing to disclose an incentive in a review. Multiple platforms (Google, Yelp, TrustPilot, TripAdvisor and Amazon) responded favorably, stating that they have been removing millions of suspect reviews over the years [23, 47, 55]. Additionally, Amazon, through a string of lawsuits, sued over 10,000 individuals who were moderators of Facebook groups involved in review brokering [1]. To examine the effect of the lawsuit by Amazon and the subsequent group takedown by Facebook, we conducted a follow-up survey with N = 34 agents. Through these responses, we discovered how effective the lawsuit was, and how the fake reviews economy adapted to the changing landscape of group deletions.

Research Questions. Our work aims to answer the following research questions:

  • RQ1. What are the demographic, behavioral and operational characteristics of the key players involved in the reviews ecosystem?

  • RQ2. What strategies are used by agents and reviewers to evade detection of incentivized reviews?

  • RQ3. How do agents in review marketplaces adapt to the countermeasures against incentivized reviews?

  • RQ4. What is the effectiveness of existing takedown measures (both technical and non-technical) in detecting incentivized reviews?

Contributions. Our study examines recruitment, execution and evasion strategies in underground review services. While our investigation focuses on products on Amazon.com, evidence from our qualitative study shows that much of what we uncover also exists on other e-commerce marketplaces such as Walmart, Target, and Wayfair. Our study furthers the understanding of the inner workings of the incentivized review ecosystem. In summary, our key contributions are as follows:

  • We systematically examine the inner workings of fake review rings, including their operational characteristics, the interplay between the various actors involved, and their motivations for engaging in fraud.

  • We discover evasion strategies that fraudsters use to avoid detection by the e-commerce platform. We find that fraudsters engage in purposeful manipulation of their behavior (such as tweaking their browsing activity or review content) so that incentivized reviews are not discovered.

  • We show the role that social media plays in review services, and demonstrate how certain features (such as targeted advertisements) actually enable more effective perpetration of review fraud.

  • We conduct an audit to evaluate current detection mechanisms and find that while technical countermeasures (machine learning solutions and review removal) are ineffective in countering incentivized reviews, collaborative efforts between platforms and legal or policy measures show potential for driving review services into decline.

  • We show how incentivized review services are adapting to the changing landscape around them (such as reacting to lawsuits by Amazon, and leveraging AI tools to enhance their fraud).

Paper Organization. The rest of the paper is organized as follows. In Section 2, we provide an overview of the reviews ecosystem, discuss prior research, and describe the novelty of our work. Section 3 describes our study design and data collection strategies. Section 4 draws upon our qualitative analysis to identify motivations and operational characteristics of agents and buyers. Section 5 outlines key evasion strategies used by agents and jennies to avoid review detection by Amazon, followed by Section 6, which audits existing takedown measures and their effectiveness in curbing incentivized reviews. In Section 7, we present the results of our second survey and highlight how fraudsters adapt to targeted countermeasures such as lawsuits and group moderation. Finally, we summarize our findings and conclude in Section 8.


2 BACKGROUND

2.1 Key Players

There are three major players in the incentivized reviews ecosystem: sellers, agents, and jennies. These roles have been briefly described in prior investigations [16, 37, 38].

Sellers. Third-party sellers on e-commerce platforms can buy products in bulk from a wholesaler like AliExpress, and use another platform (like Amazon, Target or Walmart) for sales; poor-quality products can be purchased at a low price and sold at a significant margin. Sellers stand to gain the most from positive reviews; more positive reviews usually mean a better ranking and exposure to more potential buyers. Note that here we specifically refer to third-party vendors as sellers, and not the e-commerce platform, which itself is a first-party seller (for example, Amazon acts as a platform for other sellers, but also sells its own line of products through the Amazon Basics brand).

Jennies. Buyers who will buy a product and leave a five-star review for it are referred to as jennies by the agents. Note that jennies are regular, paying customers of the e-commerce platform; they are sources of revenue, engagement, and advertisement impressions. A jenny is different from a traditional crowd-worker: while crowd-workers rely on crowd-sourced tasks for their livelihood and typically do hundreds of tasks a day, jennies write the occasional fake review in exchange for a free product, and their engagement is far less persistent. Jennies engage in review fraud to get free products, and some even make a profit by reselling the products they receive [37].

Agents. These are the middle-men contracted by sellers to identify potential buyers. Through Facebook groups, Telegram and Slack channels, Discord servers, and targeted advertisements, these agents reach out to buyers and help the sellers obtain incentivized positive reviews. They instruct buyers (jennies) on how to make purchases and what evasion tactics to use to avoid detection. They also communicate information about orders, reviews, and refunds (which happen through a medium outside of the e-commerce platform, such as PayPal) from buyers to sellers and vice-versa.

2.2 End-to-End Functioning

At a high level, a jenny leaves a five-star positive review for a product and gets the product for free in return. Figure 1 depicts how the incentivized reviews market functions end-to-end. First, sellers identify potential buyers (also called jennies) via their agents (steps 1 - 3). Agents assist jennies in searching for and buying the product (steps 4 - 8). The jenny then sends a screenshot of their order, which is ultimately confirmed by the seller (steps 9 - 11); the order screenshot contains an order number that the seller can use for verifying the order and tracking any return / refund activity. After the jenny receives the product, they submit a review. All reviews need to be approved by Amazon; once a review is live, the buyer sends a screenshot to the agent which is ultimately confirmed by the seller (steps 13 - 16). The seller now has a five-star positive review for their product; they pay the agreed-upon commission to agents and refund the buyer for the full price of the product (steps 17 - 18), effectively making it free for them; they can either use it or resell it through eBay or Facebook Marketplace and make a profit.

Figure 1: End-to-End Functioning of the Reviews Market

2.3 Legitimacy of Incentivized Reviews

2.3.1 Legitimate vs. Illegitimate Incentivized Reviews.

Incentivized reviews are a marketing strategy used to solve the cold-start problem and encourage consumers to buy a newly introduced product [36, 41, 42]. Platform-run programs, affiliate marketing, and sponsored posts on social media are all forms of incentivized reviews. Incentivized reviews are not inherently illegitimate. They are considered problematic if (a) the incentive to write the review is not disclosed or (b) the incentive depends on the sentiment of the review (i.e., the review has to be positive to earn the incentive). For example, in platform-run incentivized review programs such as Amazon Vine, the fact that the review comes from a Vine reviewer is disclosed ("Vine Voice"), and the incentive does not depend on the sentiment of the review; Vine reviewers get paid whether the review is positive or not. Our goal, however, is to study illegitimate incentivized reviews that violate these principles. Such incentivized reviews on Amazon are typically obtained by sellers without the e-commerce platform's knowledge; they do not carry a disclosure of the incentive and are required by the seller to be five-star reviews. As a result, these incentivized reviews falsely appear to be organic and not driven by any incentives. There is evidence of the harm caused by fraudulent incentivized reviews. For example, an investigation by Nguyen [37] revealed that a boric acid health supplement had stellar (incentivized) reviews, but was determined by doctors to be potentially fatal. The same investigation revealed that a local business lost more than half of its sales because of fraudulent positive reviews posted for another seller's counterfeit goods.

2.3.2 Legal Violations.

Major e-commerce platforms such as Walmart [50], Amazon [8], eBay [19] and Etsy [20] explicitly prohibit such reviews in their terms of service, and such restrictions are also built into the contracts sellers have with them. Therefore, sellers who engage in incentivized review activity are at least in violation of the terms of their contracts (terms of service [ToS]), which itself is grounds for a civil lawsuit [1]. Additionally, incentivized review activity may also amount to criminal fraud in certain jurisdictions. In 2022, Amazon filed a lawsuit [1] against several incentivized review groups arguing that ToS violations may also constitute "exceeding authorized access" that violates the Computer Fraud and Abuse Act (CFAA) [14], resulting in criminal liability [49]. In June 2018, in the EU, the owner of a company called PromoSalento (which offered review-boosting services for TripAdvisor) was found guilty of criminal conduct on the grounds of using a fake identity to commit fraud, and was sentenced to nine months in prison [27]. Finally, as discussed in more detail next, incentivized review activity can be deemed "deceptive" and violate consumer protection laws. For example, Amazon's lawsuit alleged violation of the Washington Consumer Protection Act because undisclosed incentivized reviews deceive consumers.

2.3.3 Consumer Deception.

In the United States, the Federal Trade Commission (FTC) has the authority to investigate misleading endorsements and take enforcement actions, under Section 5 of the FTC Act [13], against businesses that engage in review fraud as a deceptive practice. The FTC has exercised this power in the past [11] to specifically address the issue of incentivized reviews. Our findings have partially informed the FTC's recent publication of a new set of rules [12] that provide guidelines around incentivized reviews. Among other things, the rules require that any disclosure be "clear and conspicuous", that is, "easily noticeable (i.e., difficult to miss) and easily understandable". §465.4 specifically states that "It is an unfair or deceptive act or practice and a violation of this Rule for a business to provide compensation or other incentives in exchange for, or conditioned on, the writing or creation of consumer reviews expressing a particular sentiment, whether positive or negative, regarding the product, service, or business that is the subject of the review." Therefore, both the lack of disclosure of the incentive and the requirement that the reviews be five-star constitute violations of the FTC rules.

2.4 Related Work

2.4.1 Fraudsters and Fraud.

Most HCI research on cyber crime has focused on examining the prevalence of certain kinds of crime [7], user perception of various attacks [46, 51], or user privacy behaviors [6, 30, 52]. However, little is known about how fraudsters organize themselves, how they function, and what their motivations for fraud are. Padgett [40] provides a comprehensive description of various kinds of fraud actors, their modus operandi, and tell-tale signs. While that work focuses mainly on financial fraud, it describes a motivation model that profiles fraudsters as motivated by need, greed, anger, or pressure. Annual reports released by the Association of Certified Fraud Examiners (ACFE) discuss characteristics of fraudsters from reported fraud, and notably find that fraudsters typically have limited education, with most of them finishing only high school [3]. Maimon and Louderback [34] provide a detailed overview of cyber crime ecosystems and the malicious actors involved. Financial gain and peer pressure from family were noted as the major motivating factors for fraudsters in Nigeria, according to a study [39] that conducted in-depth interviews with cyber criminals. Farooqi et al. [21] provide a detailed analysis of an online black-hat marketplace and study user demographics, revenue, and operational characteristics for services like backlink generation, fake social media followers, and spam. Along similar lines to our work, Rahman et al. [43] conducted interviews with black-hat app search optimization workers and discovered their operational mechanisms, mental models around detection and being flagged, and strategies employed to evade detection.

2.4.2 Detecting Incentivized Reviews.

A study by Zhang et al. [56] collects incentivized review data from social media channels (groups on Facebook and WeChat) and, by generating a set of co-review graphs, detects suspicious user communities writing reviews for a particular set of products. Similarly, He et al. [25] show that products engaging in review manipulation are highly clustered in product-reviewer networks, and therefore network-based features can be used to detect products that have onboarded incentivized reviews. Another study [53] examines underground markets for reputation escalation and finds that such services can improve seller reputation rapidly while having a detection rate of less than 3%. Fake review marketplaces, especially for Amazon, have received wide media coverage [16, 32, 38], which reveals that Amazon is trying to crack down on these incentivized reviews using machine learning systems in combination with social media monitoring to detect them and stop them at the source. The work most closely aligned with ours is a 2018 investigation by Nguyen [37]. Through interviews with a few complicit buyers and analysis of social media channels for review brokering, they investigate the fake reviews market, the incentives of buyers, and the operation of incentivized review services through Facebook groups, Slack channels, and Reddit threads. Their work shows that a family-run business selling bedsheet fasteners suddenly suffered a 50% loss in revenue because of counterfeit products: the counterfeit sellers manufactured duplicates and obtained incentivized reviews that ranked them higher within a short period.

2.4.3 Novelty and Motivation.

Review fraud through underground services continues to grow in prevalence. While media investigations have discovered some characteristics of reviewers, our work is the first to systematically examine the reviews ecosystem and gather insights from both agents and jennies. Academic research on fraudsters has focused on phishing, romance, or financial scams, and never on fake reviewers. As a result, our understanding of the underground reviews economy is severely limited and consists only of anecdotal reports. Additionally, as we note above, the fraudsters under consideration are not typical fraudsters; they are actual users of Amazon.com, with some fraud activity interleaved into legitimate activity on their accounts. This makes them particularly interesting to study, as they are likely to exhibit characteristics and motivations different from those of traditional fraudsters. To the best of our knowledge, no prior work has sought to understand review services and discover the motivations of such fraudsters and their evasion strategies.


3 METHODOLOGY

In this work, we study the functioning of the fraudulent reviews ecosystem. We use Amazon as a case study because (i) it is the largest e-commerce platform in the US, and (ii) it appeared to be the platform most favored by sellers and agents seeking incentivized reviews. In order to closely understand the inner workings of the incentivized review ecosystem, and to examine the motivations and key tactics involved, we conducted semi-structured surveys (consisting of both multiple-choice and free-form response questions) with agents and jennies.

3.1 Data Crawling

We identified groups on Facebook where agents offer products for review (more information about groups in Section 4.2). We ordered the groups by the number of members and average daily posts and chose the ones that were reasonably active. Based on the distribution of members and daily active posts, we defined a group to be active if it had more than 1,500 members and at least 10 posts a day. This resulted in 156 groups. We posed as buyers from the United States and observed group activity to identify who the agents and buyers were. The groups serve as a hub for agents and buyers to interact with each other. Agents frequently advertise the products listed, and share links to spreadsheets that contain the list of products that need a five-star review. An example of this can be seen in Figure 5 in Appendix A. Through agent posts and comments, we were able to obtain 8 spreadsheets through links that were shared with public read permissions. In all, we were able to compile a list of 1,600 unique products. We then systematically crawled product metadata (e.g., price, seller, variants) as well as reviews for each product, including the review text, posted date, additional media attachments, and helpful votes. Our crawl was conducted during February–April 2022. Because this list of products was obtained directly through agents, we have high confidence that the reviews of these products have been manipulated through incentivized review services.
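The paper does not name the crawling tooling; the following is a minimal sketch, assuming Python with the requests and BeautifulSoup libraries, of how per-product metadata collection of this kind could be structured. The URL pattern and CSS selectors are illustrative assumptions, not a stable interface.

```python
# Minimal sketch of a product-metadata crawler (illustrative only; the
# URL pattern and selectors below are assumptions and will break as
# the site's markup changes).
import time
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0 (research crawler)"}

def fetch_product(asin: str) -> dict:
    """Fetch one product page and extract coarse metadata."""
    resp = requests.get(f"https://www.amazon.com/dp/{asin}",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    title = soup.select_one("#productTitle")          # hypothetical selector
    price = soup.select_one(".a-price .a-offscreen")  # hypothetical selector
    return {
        "asin": asin,
        "title": title.get_text(strip=True) if title else None,
        "price": price.get_text(strip=True) if price else None,
    }

def crawl(asins: list) -> list:
    """Crawl a list of product IDs with a polite delay between requests."""
    records = []
    for asin in asins:
        records.append(fetch_product(asin))
        time.sleep(5)  # rate-limit to avoid burdening the site
    return records
```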

Six of the links we obtained pointed to sheets that contained multiple tabs and information other than product details. These additional tabs contained links to other folders and documents as well. We collected all of this information manually and downloaded all the files we could find. The supplementary material we crawled includes: (i) slide decks and video tutorials on how agents should communicate with jennies and form a network of jennies, (ii) instructions to be followed so that Amazon does not delete the review, and (iii) agent leader boards containing information about how many reviews each agent obtained and the bonus they earned. In our qualitative analysis, we draw on these sources as materials complementary to our survey responses.

3.2 Participant Recruitment

We recruited participants for our study using Facebook groups where reviews are brokered by agents. We used the search term "Free Amazon Products" and ordered the search results by number of group members and post frequency. We joined the top-ranked group (900k members and 78 posts per day on average), as it would provide a large sample to study the fake reviews market. By observing the posts and group activity, we identified which group members were agents (members who posted pictures of products and asked for reviews) and which were jennies (members who expressed interest in receiving those products through comments on the posts). For every potential participant, we examined group activity over the past two weeks, and selected those who were fairly active (at least 10 posts, or comments on 10 unique posts, in the two-week period). Following these criteria, we identified the top 500 agents and jennies, and reached out to these individuals via Facebook Messenger. Potential participants were provided details about the nature of our study and assured that we would not collect any PII (name, email, IP address). Once they consented, we sent them a link to our survey. All participants were over 21 years of age (self-reported).

3.3 Survey Design

Our surveys were conducted online via Google Forms in two phases. In the first survey, conducted in March 2022, we posed questions in four main areas: operations (how the process works, key players), demographics (age, location, gender identity), incentives (motivations and earnings), and experience and evasion (tactics to avoid detection, observations, mental models). The goal of this survey was to characterize the key players in underground review services and their manipulation strategies. In July 2022, Amazon filed a lawsuit [1] against more than 10,000 agents who were moderators of Facebook groups. According to the lawsuit, Amazon investigators worked with Facebook to have those groups deleted. To understand the perceptions, impact, and response of the services around these targeted takedowns by Amazon, we conducted a follow-up survey with agents in January 2023. We surveyed a total of N = 36 jennies and N = 38 agents in the first survey. We surveyed N = 34 agents in the second survey; however, as we did not collect any personally identifying information, we were unable to link these responses to the first survey. We refer to the agents and jennies in the first survey as participants A1–A38 and J1–J36, respectively. We refer to agents in the second survey as B1–B34 to distinguish them from the agents in the first round. All of the questions asked are described in Appendix B.

3.4 Analysis

The surveys were semi-structured, meaning that some responses were free-form. In order to identify key themes in our free-text survey responses, we used iterative coding inspired by grounded theory [10]. Two coders were involved in the analysis. Initially, the first coder analyzed our responses in batches of 5, and identified codes for every free-text response. After every batch, they recoded earlier batches if new codes had been found. The second coder then used the resulting code book and performed deductive coding independently. The inter-coder agreement was high (> 0.95), indicating a substantial strength of agreement between both coders and the reliability of our coding. We identified codes in broadly three phases of the incentivized reviews process (recruitment, purchase, and review) and in roughly three themes (onboarding, motivation, and evasion). The code book can be found in Appendix D.
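The paper reports the agreement score without naming the statistic; a minimal sketch, assuming Cohen's kappa (a common choice for two coders) computed with scikit-learn on hypothetical labels, is shown below.

```python
# Sketch of an inter-coder agreement check. The statistic is assumed to
# be Cohen's kappa; the labels are hypothetical examples, not study data.
from sklearn.metrics import cohen_kappa_score

coder1 = ["evasion", "motivation", "onboarding", "evasion", "motivation"]
coder2 = ["evasion", "motivation", "onboarding", "evasion", "evasion"]

# One disagreement out of five items yields kappa ~ 0.69; identical
# label sequences would yield 1.0.
print(f"kappa = {cohen_kappa_score(coder1, coder2):.2f}")
```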

3.5 Ethical Considerations

Our recruitment strategy, surveys, and overall study protocol was reviewed and approved by the Institutional Review Board (IRB) at our institution.

We did not collect any personally identifying information during the course of this study, and as a result, we are able to maintain participant anonymity. All participants were aware that the answers they provided would be reported as part of a research study. Participants were recruited and shown the survey only after they provided consent. All participants had the choice to decline to answer any of the questions, or to withdraw their participation at any time during the survey. We treated all participants and their responses with respect, and followed the principles laid down by the Menlo Report [31].

We analyzed the content posted in various Facebook groups to curate our list of fraudulent products. These groups were public and open to join for any individual with a Facebook account. Agents marketed their products through public posts and comments in the group, which could be read by all members (an example is shown in Figure 5). Therefore, we did not engage in deception to curate our list of fraudulent products or conduct our analysis.

Finally, throughout the course of our study, we identified several sellers and brands which engage in buying reviews for their products sold on Amazon. We do not reveal who these sellers are; if we were to do so, the sellers could trace the source back to our participants, which would jeopardize their anonymity. However, in order to further research in detecting incentivized reviews, we will share de-identified datasets, interview transcripts, key operational features and evasion tactics. Additionally, we were invited by Amazon to discuss our research with the review integrity team, where we shared insights from our work and disclosed some key discoveries.


4 OPERATIONAL CHARACTERISTICS OF REVIEW SERVICES

In this section we examine the characteristics of the key players involved in underground review services, such as their demographics, operational characteristics, recruitment strategies and motivations.

4.1 Demographics

Attribute            Response               Jennies        Agents
Gender               Male                   14 (38.89%)    13 (33.33%)
                     Female                 19 (52.78%)    26 (66.67%)
                     Prefer not to say       3 (8.33%)      0 (0%)
Age                  18-21                   0 (0%)         3 (7.69%)
                     22-32                  21 (58.33%)    31 (79.49%)
                     33-43                  12 (33.33%)     1 (2.56%)
                     44-54                   3 (8.33%)      3 (7.69%)
Education Level      High School or Lower    0 (0%)        19 (48.72%)
                     Diploma                 0 (0%)        20 (51.28%)
                     Bachelors Degree       15 (41.67%)     0 (0%)
                     Masters Degree         16 (44.44%)     0 (0%)
                     Doctorate Degree        5 (13.89%)     0 (0%)
Location             Bangladesh              0 (0%)        18 (46.15%)
                     Pakistan                0 (0%)        21 (53.85%)
                     United States          24 (66.67%)     0 (0%)
                     Canada                  7 (19.44%)     0 (0%)
                     United Kingdom          5 (13.89%)     0 (0%)
Annual Income (USD)  $0                      0 (0%)         8 (20.51%)
                     $1-$1,000               0 (0%)        10 (25.64%)
                     $1,001-$2,000           0 (0%)        16 (41.03%)
                     $2,001-$3,000           0 (0%)         4 (10.26%)
                     $3,001-$4,000           0 (0%)         2 (5.13%)
                     $4,001-$50,000          3 (8.33%)      0 (0%)
                     $50,001-$100,000        8 (22.22%)     0 (0%)
                     $100,001-$150,000       7 (19.44%)     0 (0%)
                     $150,001-$200,000       6 (16.67%)     0 (0%)
                     $200,001-$250,000       4 (11.11%)     0 (0%)

Table 1: Demographic Characteristics of Agents and Jennies

Sellers. All of the agents we surveyed reported that the sellers they work with are based in China. The focus of this work is incentivized reviews on Amazon; however, the underground reviews economy spans other platforms like Walmart (A8, A12, A35, J36), Wayfair (A20, J1, J19, J33), and Target (A3, A25, J14, J32). Nine agents reported that a seller often lists the same product on multiple e-commerce platforms.

Agents. The agents we surveyed are based in Pakistan (54%) and Bangladesh (46%). Agents are 21–35 years old (μ = 25.6). Agents work in groups, and each group has an admin. The admin is the interface between agents and the seller. An admin and their group of agents work for multiple sellers at the same time. Agents tend to have limited education; 20 agents had earned a diploma (equivalent to an associate degree in the U.S.) and 19 agents had only a high-school education. Only 5 agents were pursuing further education (a diploma or bachelor's degree). Detailed agent demographics can be seen in Table 1.

Jennies. The jennies we surveyed were Amazon account holders based in the US, UK, and Canada. However, agents reported that they also target jennies in Ireland, France, and Germany, since some products are geared towards those countries. Jennies were 22–51 years old (μ = 31.6). All of the jennies held at least a bachelor's degree, with 16 holding a master's degree and 5 holding a doctorate. This level of education is significantly higher than that observed among fraudulent actors in prior work [3, 40]. Given their unusually high educational qualifications, it is likely that jennies intricately understand the nature of the fraud they are perpetrating and the safeguards that might be in place to detect it, and purposely manipulate their activities to evade them. In fact, when asked about how their education shapes their understanding of review deletion by Amazon, one jenny (J15), who holds a bachelor's degree in a technical field, says:

"...I studied machine learning and web dev in my college....I built a classifier for fake review detection for my bachelors thesis project. So very helpful in knowing what to avoid..."

Detailed jenny demographics can be seen in Table 1.

4.2 Recruitment

Agents are responsible for recruiting jennies and helping them buy products. There are two means by which agents recruit reviewers: social media channels and targeted advertisements.

Facebook Groups. We were able to find 156 active Facebook groups that served as a hub for reviewers and sellers to contact each other. The largest of these groups had close to one hundred thousand members. In these groups, agents post information about the products they have. Typically, agents post products along with what the seller is looking for; it can be a five-star review, a five-star rating, positive seller feedback, upvotes to existing reviews, or answers to questions in the Q&A section (J24, J25, J28, J30, J33). Additionally, reviewers can also ask for any products they want. For example, a reviewer can make a post sharing that they are an Amazon account holder in the U.S. and are looking for Bluetooth speakers. Any agents who have the product can comment or reach out. Jenny J30 says:

"...wanted to set up my home office, and h ad written a few reviews before. So I posted as k ing for a standing desk and office chairs - my inbox was flooded with agents who had these products..."

Agents are aware that their activities are potentially illegal, and hence always operate under an alias. They also apply minor perturbations to keywords so that they are not flagged by Facebook (see Figure 6 in Appendix C). Jennies were part of at least 1 and at most 25 such groups (μ = 6) on Facebook. Some jennies also reported that both jennies and agents post about scammers in the groups and warn others not to trust them (J4, J9, J21, J26, A11, A12, A29, A31). Jenny J4 notes:

"An agent offered me a very lucrative deal and then disappeared after I sent them the review screenshot. So I immediately posted his profile screenshot on the group. Many fellow reviewers commented saying that they were talking to the same agent, and will now be cautious..."

Agents also use groups to report fraudulent jennies; agent A29 says:

"If some jenny returns the product after refund, we lose commission....so I post on the group, and also include their PayPal (so they cannot just change profile name)..."

The goal of the groups is to function as an exchange where reviews are bought and sold (J1, J11, J13, J15). Groups are the most important means for agents to continue their operations and keep marketing new products to jennies they have already identified. Through auxiliary materials shared by agents, we discovered extensive training material, complete with scripts and scenarios, on how agents should operate in groups, how they can identify and approach a jenny, and how to walk them through the process.

Targeted Advertisements. Most jennies (75%) reported that they were introduced to the incentivized reviews economy via targeted advertisements on Facebook. Facebook facilitates the delivery of advertisements to a targeted audience; therefore, agents are able to leverage search history, marketplace activity, and other data available to Facebook to identify users who might be interested in writing such incentivized reviews. Further, a direct fallout of targeted advertisements is the rabbit-hole phenomenon: once a user clicks or interacts positively with an advertisement for free products, they go down the rabbit hole and see several more advertisements from similar pages. As J1 says:

"I just chanced upon this side-hustle one day; I clicked on the advertisement, and then saw a few more. Pretty soon, my feed was full of such advertisements; every third or fourth post I saw was an ad for free products in exchange for reviews."

As Instagram is also owned by Facebook, jennies reported that once they interacted with a few advertisements on Facebook, they started being exposed to similar ads (advertising free products in exchange for reviews) on Instagram as well (J13, J16, J22). An example is shown in Figure 7 in Appendix C.

4.3 Motivations & Incentives

4.3.1 Agent Incentives.

Incentives for agents are mainly monetary. The agents we surveyed are located primarily in low-income countries. Agents earn either $4 or $5 as commission on a review they helped procure, with the average being $4.42. The annual income of agents (apart from the reviews) ranges from $0 to $3,600 (μ = $1,203). Most agents earn commission comparable to their monthly income; income from reviews ranges from 33% to 100% of their total income. Seven of the agents have no other full-time job; the reviews are their only source of income. All of the agents were aware that by facilitating incentivized reviews, they were violating Amazon's terms of service. Three important factors contribute to agents' motivation: additional income that is comparable to a full-time salary (15 agents), no requirement for any infrastructure (9 agents), and no need for special education or skills (11 agents). According to agent A6, based in Bangladesh:

"One month I earned almost $250; my regular job pays me around $100 a month. And it requires no additional expense; just an internet connection."

Additionally, through the auxiliary materials, we discovered that there is an incentive structure to encourage agents to seek more reviews. Leader boards updated in real time list agent names and the number of reviews they helped procure. The top 3–5 agents at the end of every month receive additional commission (A13, A19, A34).

4.3.2 Jenny Incentives.

Major motivations for jennies were the opportunity to get products for free, and not having to pay for some expensive products (such as robot vacuum cleaners (J7), a treadmill (J14), and blenders (J17)). J8 says:

"I know it’s wrong....but when I moved into my new place, I was able to get a chair, desk, humidifier, kitchen utensils, storage boxes, vacuum cleaners and a lot more for free. I saved so much by just writing reviews."

Other buyers go a step further. They get their products for free by writing reviews, and then sell them online. Jennies reported selling products on sites like eBay, Facebook Marketplace, and OfferUp. Jenny J2 tells us:

"The product is basically free. Once I get my refund, I sell it on Facebook marketplace for around \(75\%\) of the price. I made over $300 last month."

When asked if they were aware that writing incentivized reviews was against Amazon's terms of service, 32 jennies (> 86%) responded Yes. 4 responded Maybe and only 1 said No.

We also see that the financial incentives for jennies are weaker than they are for agents. Jenny incomes range from $25,000 to $230,000 annually. Our lowest-earning jenny, a graduate student, earns $25,000 a year, and the amount they save by writing reviews ($1,200 in a year) is less than 5% of their annual income. This is in stark contrast with agents, for whom the money earned via reviews forms at least a third of their total income (and note that agents earn a smaller amount per review than jennies save!).


5 EVASION TACTICS

In this section, we discuss the evasion tactics that jennies are asked to follow. Agents and sellers provide these guidelines during the purchase process and before a review is written. The goal is to minimize the likelihood of the purchase and the subsequent review being detected and deleted by Amazon. By analyzing our free-text survey responses, we identified the following evasion tactics.

Tactic                                                      n (Agents)    n (Jennies)
Search product with keywords                                11 (28.94%)   10 (27.78%)
Spend time reading features and reviews                      7 (18.42%)    7 (19.44%)
Mark reviews as helpful and ask questions in the QnA        12 (31.57%)    8 (22.22%)
Add similar products to saved lists and shopping cart        8 (21.05%)    9 (25%)
Not paying with gift cards or coupons                        6 (15.79%)    2 (5.5%)
Wait 10-15 days to submit a review                          11 (28.94%)    4 (11%)
Add photos and videos to submitted review                   13 (34.21%)   12 (33.33%)
Write reviews of at least 300 words                          8 (21.05%)    1 (2.25%)
Avoid buying from the same seller                            3 (7.89%)     2 (5.5%)
Write some 1-star and 2-star reviews for other products     13 (34.21%)   11 (30.55%)
Be consistent across all reviews (timing, content, media)    0 (0%)        6 (16.66%)

Table 2: Key Evasion Tactics employed by Jennies and recommended by Agents to avoid detection of reviews by Amazon.

5.1 Organic Search for Product

Agents recommend that buyers search for the product with keywords, rather than using a direct link or the brand name. Buyers are then asked to scroll through the results and find the correct item. According to agents, this is because Amazon can track that a buyer landed on a product from a link, and can consider it suspicious. An agent (A3) notes:

"It is for security of your account. Sellers don’t allow us to share direct links with buyers, as Amazon will detect this, and may remove the review that you write."

Similarly, buyers are advised never to search for a product with the brand name in the search term. As far as possible, agents never directly disclose the brand name. A jenny (J2) describes this:

"They never show us the brand. In the image they send us, the brand name is blurred out. I try to find the correct product and send them a screenshot or link to confirm."

5.2 Platform Engagement

As an extension of the above, jennies are encouraged to engage with the Amazon platform while searching for the product, as a legitimate buyer would. They are asked to look at similar products, browse through images, read reviews, and simulate an organic product discovery experience. A2 says:

"We always ask buyers to spend at least 1 minute browsing similar products, and at least 15 seconds on checkout page since if search and purchase is done too quickly, it is suspicious to Amazon."

Jennies generally follow the guidelines, since their accounts are at risk. A jenny (J18) concurs, saying:

"I spend a lot of time in searching for the products. I really read 5-10 5* and some 1* reviews. I also ask a question or two in the forum. For some of my trusted agents, I also write reviews AFTER the return period. That will tell Amazon that my product is not probably refunded (if it was, I wouldn’t risk losing the 30 day return period)."

5.3 Payment Restrictions

Jennies are advised not to pay using gift cards, and to always make the full payment using a different payment method. In addition, Amazon at times displays coupons that buyers can use to get a discount on the product; buyers are cautioned not to use these either. The reason is that the seller has designated a fixed number of coupons, which are meant to attract other, genuine customers to buy the product. An agent (A7) says:

"Most sellers tell us not to ask buyers to use coupons. If they do, they still refund them, as it is not a loss. But the coupons are mainly there for other customers (who we will not be refunding) to think that they are getting a discount."

Our hypothesis for why sellers do not allow the use of gift cards is that since a gift card is redeemed via a coupon code, it could be construed by Amazon as a discount offered by the seller.

5.4 Review Timing

Jennies are recommended to wait at least 10 days before submitting a review on Amazon. Reviews which are submitted too soon arouse suspicion. According to an agent (A3):

"...Please submit reviews 10-15 days after you receive the shipment. Use the product for a few days and then review. Else, your review will be removed by Amazon."

Buyers are, of course, eager to receive the refund as soon as possible. However, they understand the reasoning. Jenny J7 says:

"It makes sense. I need to have experienced the product before writing a review, otherwise it is suspicious. I maintain a spreadsheet of items I purchased along with their dates, so I know when 10 days have passed to write the review."

5.5 Review Content

Agents provide detailed instructions to buyers on the content of the reviews. According to agents, factors like the length of the review and the presence of supplementary media (images and videos) affect whether Amazon will remove the review. All ratings have to be five-star. Agents always recommend writing a review of at least 300 words. A jenny (J6) tells us:

"Picture and video reviews are really important to them. Once, an agent offered me additional commission of $5 if I wrote a review of more than 200 words. And $5 more if I attached a picture or video to it."

However, 5 jennies reported that they do not follow this advice because they find it challenging to come up with a long, positive, five-star fake review (J1, J8, J12, J25). In addition to review length, jennies are cautioned never to strongly emphasize any particular seller. An agent (A11) notes:

"Never ever mention ”always buy from this seller” in your review. We want the product from our seller to be highly ranked. If you write about a seller, Amazon moves the review to seller feedback, which sellers don’t want, because customers don’t view seller feedback often."

5.6 Seller Diversity

In general, buyers are not permitted to review multiple products from the same seller within a certain period of time. According to agent A8:

"Very suspicious if you buy too many products from same store. I never return to the same buyer before 2 months of the first review."

Buyers follow another rule of thumb: avoid buying too many products offered by the same agent. Jenny J10 shares their experience:

"I purchased a vacuum cleaner and sent the order screenshot to the agent. She confirmed my order, and showed me some more products to buy. When I went to amazon, I saw the SAME product as a suggestion, under the heading ”people with similar purchases to you also bought....”. It would have been very suspicious. Clearly, a pattern had been captured."

5.7 Account Metadata

Agents and sellers also take account metadata into consideration when analyzing whether a particular buyer's review is likely to be removed. Of our 38 participating agents, 31 reported that they strongly prefer Amazon Prime account holders. According to agents, reviews from accounts with Amazon Prime subscriptions are less likely to be removed. 24 of our participating agents reported that when some products had reviews removed, all of the removed reviews were from non-Prime accounts. In addition, agents also check whether a prospective buyer is an "easy rater", that is, someone who has too many five-star reviews and no three- or four-star reviews. Agents also recommend that reviewers write reviews for items they have purchased organically (that is, without a rebate). They also recommend that not all reviews be five-star. Jennies are aware that diversity in ratings is important to keep up the impression that their reviews are genuine. Jenny J16 says:

"I have some non-5* reviews on my profile. If I write only 5*, amazon will think I am writing fake reviews, if I add some 1,2* they will think I am doing it to balance out, but I write some 3/4*; there is no clear incentive for me, which increases their confidence that I am a genuine buyer."

We would like to note how this need for rating diversity can be harmful to other sellers. 11 jennies reported that for products which they purchased themselves, or which were not incentivized, they left a negative review even if the product was good, so that they would have some 1-star and 2-star reviews on their reviewer profile.

5.8 Review Consistency

This strategy is unique because none of the agents expressed it as an evasion tactic, but 8 jennies shared that they kept all of their reviews consistent in terms of content, length, tone, and media attachments. Jenny J34 says:

"I try to be consistent across all reviews so amazon doesn’t feel I’m giving special attention to any product. All my reviews are same length, all have photos, etc."

Having these attributes consistent over various products (both incentivized and genuine purchases) makes it harder for Amazon to detect the incentivized reviews as different. It is interesting to note that jennies are able to understand the significance of consistency and employ it as an adversarial tactic.


6 EFFECTIVENESS OF TAKEDOWN MEASURES

6.1 Incentivized Review Detection by Amazon

In this section, we evaluate how effective Amazon is at detecting incentivized reviews. To this end, we re-scraped product reviews for all products in our incentivized-products dataset every week over a 6-week period. This allowed us to examine how many reviews were added and deleted each week. While we cannot evaluate the precision of Amazon's removals (since we do not have access to all reviews they removed), we can evaluate the recall on our dataset of incentivized products. Figure 2 shows how reviews were deleted over the 6-week period. We considered all of the reviews in our original crawl as a baseline, and computed the proportion of reviews removed relative to this baseline during each week. We then plot an empirical cumulative distribution function (ECDF) over products for every week. We can see that nearly 50% of the products seeking incentivized reviews had none of their reviews removed.

Figure 2: Week-over-Week Deletion of Reviews
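To make this computation concrete, the sketch below is an assumed reconstruction of the per-product removal proportion and the ECDF plotted in Figure 2; the data layout (sets of review IDs per product) is illustrative, not the paper's actual pipeline.

```python
# Assumed reconstruction of the removal-proportion ECDF described above;
# the data layout (review-ID sets keyed by product) is illustrative.
import numpy as np

def removal_proportions(baseline: dict, week: dict) -> np.ndarray:
    """Fraction of each product's baseline review IDs missing in a later week."""
    props = []
    for product, base_ids in baseline.items():
        removed = base_ids - week.get(product, set())
        props.append(len(removed) / len(base_ids))
    return np.array(props)

def ecdf(values: np.ndarray):
    """ECDF over products: x = removal proportion, y = fraction of products <= x."""
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Tiny example: P1 keeps all reviews (proportion 0.0, the mass at x = 0
# in Figure 2), P2 loses one of two reviews (proportion 0.5).
baseline = {"P1": {"r1", "r2"}, "P2": {"r3", "r4"}}
week6 = {"P1": {"r1", "r2"}, "P2": {"r4"}}
x, y = ecdf(removal_proportions(baseline, week6))
print(list(zip(x, y)))  # [(0.0, 0.5), (0.5, 1.0)]
```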

6.2 Temporal Analysis

Our analysis so far has considered how reviews are removed week-over-week. However, in every week that the product is not delisted, the seller can acquire more incentivized reviews. When Amazon removes reviews, the seller can counter by launching an incentivized review campaign to seek out more incentivized reviews. Upon examining the week-over-week patterns for each product, we identified five main classes of products based on their review addition and removal trends (a rough sketch of this labeling follows Figure 3):

Products which had some reviews removed, and then launched an incentivized review campaign to recover from deletion (Figure 3a).

Products which had some reviews removed, and did not attempt to gain more reviews (Figure 3b).

Products which obtained reviews via a campaign, and these reviews were not removed by Amazon; the campaign was successful. (Figure 3c).

Products which have ongoing campaigns; they are engaged in a cat-and-mouse game with Amazon. There are frequent review deletions, followed by review additions to recover from them (Figure 3d).

Products which are eventually removed from Amazon (Figure 3e).

Figure 3: Review Addition and Removal Trends over 6 weeks. Blue points denote the total number of reviews in that week. Green and red points indicate the number of reviews added and the number of reviews deleted since the previous week, respectively.
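The sketch below shows one way the weekly addition/removal series could be mapped to these five classes; the thresholds are illustrative assumptions, not decision rules stated in the paper.

```python
# Hedged sketch of the five-class labeling above; thresholds are
# illustrative assumptions, not the paper's actual rules.
def classify_product(added: list, removed: list, delisted: bool) -> str:
    """added/removed: weekly counts of reviews gained/lost for one product."""
    if delisted:
        return "removed from Amazon"           # Figure 3e
    churn_weeks = sum(1 for a, r in zip(added, removed) if a > 0 and r > 0)
    had_removals = any(r > 0 for r in removed)
    had_additions = any(a > 0 for a in added)
    if churn_weeks >= 3:
        return "ongoing cat-and-mouse"         # Figure 3d: repeated churn
    if had_removals and had_additions:
        return "recovered via new campaign"    # Figure 3a
    if had_removals:
        return "no recovery attempt"           # Figure 3b
    return "successful campaign"               # Figure 3c: nothing removed
```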


7 ECOSYSTEM EVOLUTION WITH CHANGING LANDSCAPE

Amazon attempted to crack down on incentivized reviews by targeting the communication platform (i.e., groups on Facebook) [1]. In July 2022, Amazon filed a lawsuit [1, 2] against more than 10,000 individuals who were administrators of Facebook groups and pages involved in review brokering. The lawsuit reveals targeted interventions conducted by Amazon in collaboration with Facebook. According to the lawsuit, Amazon investigators identified thousands of groups that engaged in review brokering. Based on the screenshots and chats presented in the lawsuit, it is likely that investigators from Amazon infiltrated the groups, monitored their activity, and posed as jennies or sellers to gather evidence about the review brokering being perpetrated in the groups. Upon discovering such groups, Amazon reported them to Facebook, citing that the groups violated Facebook's own terms of service. Amazon then collaborated with Facebook to deactivate the discovered groups. In order to examine how the ecosystem adapted and evolved after these takedowns, we conducted a qualitative survey with review agents, asking them about the impact of Facebook pages and groups being removed and their response to it. We now discuss our findings from this survey.

7.1 Agent Perceptions of Group Removal

Agents believe that Facebook uses a mix of automated technical countermeasures as well as undercover human operatives to identify and flag groups where reviews are brokered.

Automated Mechanisms. 14 agents reported that they believe Facebook uses automated methods to detect and flag groups that are engaged in incentivized review services. These methods include looking for certain keywords (refund, review, free product, refund after review). According to agents, Facebook analyzes group posts, comments, and ads to look for posts about incentivized reviews. Agent B9 says:

"...they probably use data science and text analysis rules to detect which groups and profiles are of agents..."

Honeypot Jennies. 14 agents also believe that there are investigators from Amazon and/or Facebook masquerading as jennies, whose goal is to report groups and get products removed. Amazon has, in the past, inserted undercover operatives in these forums and used them to identify sellers who are engaged in incentivized reviews. According to agent B25:

"Some users are fake jennies but working for Amazon. They find such pages, join them and then report them. Also, they see what we are posting and report products to Amazon to remove them."

7.2 Evasion Tactics against Group Removal

Agents employ evasion tactics to prevent their profiles and groups from being detected by Facebook. Agents report that they purposefully manipulate their content in order to evade detection. There are four main strategies that we uncovered.

Obfuscation. First, agents obfuscate the text in the posts so as to avoid certain keywords being flagged. This also throws machine learning models off, as they see words they have not encountered during training (words which are out of vocabulary). Agent B1 reports:

"Never use direct words like free, review, rating or refund. Always use safe keywords say f-R-**ee instead of free, RevW instead of Review...."

Images instead of Text. Second, agents use images instead of text to post their content. These images contain the text message, often in varying font styles and with blemishes, lines, and non-text symbols. The goal is to make it harder for an automated system to consume the image and look for keywords. Agent B7, who has a bachelor's degree in a related technical field, notes:

"... send messages in images or screenshots with lines and scribbles, so they can’t do OCR 3 to extract what you post...."

Frequent Post Removal. Third, agents keep a particular post active only for a short period, then delete it and make a new one. This limits the number of times the post can be reported or flagged by automated systems. Agent B1 tells us:

"I remove posts after 12 hours or so and make new post. Now FB 4 cannot get multiple flags and reports on my post and has less time to remove it. "

Defending against Honeypots. Agents also report that they protect their groups from undercover investigators who may flag and report their content. 13 agents reported that they add a buyer to their private groups only after confirming that the buyer is an "honest" jenny. This is typically confirmed after the buyer has completed a few (3–5) orders end-to-end (from purchase to review) successfully. Additionally, agents report that they work with other agents to identify fraudulent jennies and maintain a centralized list of "honest" jennies. Agent B29 says:

"....add only safe jenny to the group. We have a network of agents, and keep a shared sheet to report fake jennies. We mention amazon profile URL, screenshots and PayPal email. We add jennies as trusted in this sheet only after they did some orders successfully."

7.3 Recovery Strategy after Group Removal

Our survey also asked agents about the recovery steps they take after their groups are removed. Findings revealed that agents maintain backup channels on alternative platforms like Telegram, Signal, Discord, and WhatsApp. Backup information is communicated after a successful order. 7 agents reported that they share backup contact information with jennies. Agent B1 says:

"....when order confirm, I immediately send backup email, telegram and whatsapp so they can contact me if need and the page gets deactivated."

When a group or page is deleted, the first step is generally to reach out to the jennies and communicate the issue to them, followed by creating a new group/page and adding the same jennies again. 11 agents reported that they follow this approach. According to B26:

"whatever FB does, they cannot remove email accounts....[when group is removed] email jennies but quickly else they think we have scammed them. Tell jennies that group is removed by FB, and you will create new group for new free products.."

Some even go as far as automating this approach. Agent B29, who is fairly technical, says:

"I wrote a script that can send email messages to all my jennies if I create new account. So quickly I can reach all my jennies if my account is removed. When accounts are removed I make new page and share those details."

Often, agents do not even have to resort to contacting jennies via email. 26 agents reported that they maintain backup channels with jennies on platforms other than Facebook. It is noteworthy that agents are aware of the technical and legal limitations of Facebook’s actions against them. Agent B23 says:

"I keep backup groups on WhatsApp and Signal. FB cannot control those. And they are end-to-end encrypted, so no messages can be analyzed."

7.4 Seller Response to Takedown

Amazon attempts to remove fake/incentivized reviews and also implements targeted interventions against these services. While agents and jennies attempt to evade detection entirely, some products and their sellers are eventually detected and removed by Amazon. In our survey, we asked agents about their understanding of how sellers respond to takedowns by Amazon.

In the event that a product is removed, agents reported that sellers employ one or both of the following strategies: changing the store name or changing the platform. 12 agents reported that sellers simply close down the store, create a new seller account, and operate under a different store name to sell and ship the same or similar products. 23 agents reported that if a product is de-listed on Amazon, the seller moves to other platforms such as Walmart or Target, where review moderation is considered lax. Agent B16 says:

"Amazon is very strict and removes reviews, but I have never seen walmart remove reviews. So if Amazon removes the product, seller just moves to Walmart and we continue our orders."

7.5 Role of AI Tools

In addition to the four evasion strategies described in Section 7.2, we uncovered another technique used by agents. This is not so much an evasion strategy as a means to avoid being banned from Facebook and Amazon, and to claim plausible deniability in the event of a lawsuit. 14 agents said that they used ChatGPT to generate text that lets them create their content without using certain keywords or being too overt about their intentions. There were three main ways in which ChatGPT was used.

Review Generation. Because review length is a characteristic Amazon may look at when detecting incentivized reviews, longer reviews are recommended (the more detail a review contains, the lower the likelihood of it being flagged as fake). Agents use ChatGPT to generate reviews and then send the generated reviews to the reviewer to post. Agent B11 says:

"Many jennies write very small, one-sentence reviews which can easily be detected. So, we give ChatGPT the product description and ask it to generate a 250-word review, covering many points in detail, and then send this review to the jenny."

Some agents also use specific prompts so that the review appears organic. For example, agent B2 says that they first share the product description from Amazon with ChatGPT and then use the following prompt, which has worked well for them so far:

"..Write a positive, five-star review for the vacuum cleaner whose description is above. The review should appear unbiased and neutral but convey positive points. It should not appear to be an incentivized review."

Style Copying. In line with review generation, some agents also use ChatGPT to emulate the writing styles of certain reviewers (generally, highly ranked ones). According to them, this bolsters the credibility of the review and makes it less suspicious to Amazon. Agent B8 tells us:

"..reviewers with high ranks are considered good by Amazon, so I tell jennies to write reviews in that style. I just upload a few samples on ChatGPT and ask it to create new reviews as per my requirement."

Post Generation. Finally, agents use ChatGPT to rephrase the content in their posts and messages so as to avoid being flagged. This allows them to market their products and ask for reviews without explicitly going against Facebook’s terms of service. For example, agent B28 tells us:

"..I want to ask for reviews in exchange for free products. So I give my message to ChatGPT and ask it to rephrase it without using words like review, free, or refund. So the text is sufficiently vague that I can convey my message without getting banned."

Based on the example and prompt provided by agent B17, we see that this approach is indeed effective in achieving the intended effect:

Original Sentence: If you buy this product on Amazon, and write a five-star review for it, we will refund you the entire cost of the product. So effectively the product is free for you.

Prompt: I will provide you with a sentence. Rewrite this sentence without using the words: free, review, refund and five-star. The meaning of the sentence should not change.

Transformed Sentence: If you acquire this product through Amazon, and provide your thoughts online, we will cover the full expense of the item. This makes the product essentially obtainable for you at no cost.

7.6 Impact of Targeted Countermeasures

7.6.1 Legal Action.

Through the lawsuit, Amazon seeks relief in the form of monetary damages and other sanctions against the involved jennies and agents. Being located in Pakistan and Bangladesh (see Section 4.1), the defendants (group moderators and agents, currently addressed as John Does) are outside the jurisdiction of the Superior Court of the State of Washington, where the lawsuit was filed. However, geographical location is not necessarily an impediment to legal action, as evidenced by historical cases. For example, Microsoft, while taking down the Kelihos botnet in 2011, sued entities in the Czech Republic and established jurisdiction in Virginia, citing that the botnet affected Virginia-based computers. Requests by American courts to defend against malicious behavior have sometimes been honored by entities located outside the U.S. Collaborations with law enforcement and international bodies have resulted in successful takedowns of large botnets (such as the Waledac botnet [15] in 2010 and the ZeroAccess click-fraud network [18] in 2013). These cases demonstrate that collaborative efforts across industry and law enforcement are potential solutions to combating fraudulent incentivized review services.

7.6.2 Pre-Post Analysis.

In order to examine the effect of Amazon’s targeted intervention, we compared activity on Facebook groups before and after the lawsuit was filed. During our initial data collection, we had identified 156 active groups, of which 147 were named in the lawsuit by Amazon. As of February 2023, 142 of these groups (96%) have been removed (either taken down by Facebook or shut down by the group administrators). In the five groups that remain active, as well as in private Facebook Messenger chats that we were monitoring, group activity has reduced considerably. For example, a private chat which had around 22 messages per day (by agents advertising products) now averages less than one post per day.


8 CONCLUDING REMARKS

We conclude with a summary of our key takeaways with a focus on actionable insights as well as a discussion of limitations that present opportunities for future work.

8.1 Key Takeaways

8.1.1 Characterizing Fraudsters.

Our qualitative analysis revealed several interesting behavioral characteristics of fraudsters (both agents and jennies). We see that jennies are highly educated, with some even holding doctorate degrees. This contrasts with prior findings that fraudsters typically have limited educational qualifications [4]. Motivations for both appear to be mainly monetary, as also seen in prior work on profiling fraudsters [34]. We also see that the money jennies save by writing reviews forms a very small fraction of their income, whereas for agents it forms a much larger portion. In fact, for some agents (all of whom are women), incentivized reviews are their only source of income. Another notable fact is that agents earn a smaller amount per review than jennies save; this disparity is clearly seen in Figure 4.

Figure 4: Reviews Income vs. Total Income for Agents and Jennies

8.1.2 Purposeful Manipulation.

Based on our survey responses, it appears that both agents and jennies have a non-trivial understanding of how incentivized-review detection works and what features Amazon might use for detection. This supports our hypothesis of purposeful and deliberate manipulation that jennies engage in to perpetrate this fraud while remaining undetected, similar to what was observed in prior research on fraudulent Play Store reviews [43]. The evasion tactics employed here may convince Amazon’s detection systems (both automated and human) that a review is authentic by introducing signs of organic behavior. For example, if a user browses several products before buying one and then writes a review for it, a human moderator may easily believe that the review is genuine. Because machine learning models likely leverage such human-labeled data, the evasion tactics may successfully bypass automated detection mechanisms too.

8.1.3 Design Implications.

We identified 8 evasion tactics used to prevent fraudulent incentivized reviews from being detected. Based on these, we can devise several features for detecting incentivized review fraud, which we categorize into two distinct classes: individual features and group features.

Individual Features: A reasonable consumer will try to get the best price for the product they are buying. However, a user wanting to buy a specific product because they are instructed to do so will ignore lower-priced and even higher-rated products. Additionally, as discussed earlier, jennies are instructed not to use any coupons offered by Amazon. A user who does not care about getting the best deal may be getting compensated outside the platform. E-commerce platforms can capture these signals to identify potentially fraudulent behavior.
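The sketch below makes these signals concrete; the purchase fields are hypothetical stand-ins for telemetry a platform would already have, and the weights and threshold are illustrative assumptions rather than tuned values:

    from dataclasses import dataclass

    @dataclass
    class Purchase:
        price_paid: float
        cheapest_comparable: float  # cheapest similar item shown in the session
        coupon_available: bool
        coupon_used: bool

    def suspicion_score(p: Purchase) -> float:
        score = 0.0
        if p.price_paid > 1.2 * p.cheapest_comparable:
            score += 1.0  # ignored a clearly cheaper comparable product
        if p.coupon_available and not p.coupon_used:
            score += 1.0  # left an on-platform discount unused
        return score

    # A buyer who overpaid and skipped an available coupon scores 2.0.
    print(suspicion_score(Purchase(29.99, 19.99, True, False)))

In practice, such scores would feed into a larger model alongside account history rather than trigger action on their own.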

Group Features: Agents build their network of jennies via Facebook groups and tend to reuse jennies multiple times. Because of this, there are likely groups of users that always review particular groups of products. A graph can be built with users and products as nodes, with an edge whenever a user has reviewed a product. Community detection or clustering algorithms may help identify suspicious clusters of users and products [5, 25, 29]. Because of the natural graph structure, advanced graph machine learning (such as graph neural networks) could be used to detect malicious users, products, review activity, or fraudulent sub-graphs of users and products.
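As a minimal sketch of this idea (toy data; any community-detection or clustering routine could stand in for the modularity-based one used here), the open-source networkx library can surface tightly knit user-product clusters:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical (reviewer, product) pairs; real input would come from review logs.
    reviews = [("u1", "p1"), ("u1", "p2"), ("u2", "p1"), ("u2", "p2"), ("u3", "p3")]

    G = nx.Graph()
    for user, product in reviews:
        G.add_node(user, kind="user")
        G.add_node(product, kind="product")
        G.add_edge(user, product)  # edge = "user reviewed product"

    # Dense user-product communities are candidates for manual audit.
    for community in greedy_modularity_communities(G):
        users = {n for n in community if G.nodes[n]["kind"] == "user"}
        print(sorted(users), "reviewed", sorted(community - users))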

8.1.4 Audit of Countermeasures.

As discussed in prior work [16], Amazon’s anti-fraud systems attempt to detect incentivized reviews, and Amazon routinely removes the detected reviews, products and user accounts from the platform. Our week-over-week analysis provides some evidence of review deletion by Amazon. However, out of 1,600 products, Amazon was able to detect and eventually remove only 20 from the platform. Furthermore, removing a small subset of the reviews without removing the product does not meaningfully solve the issue, as sellers can counter the removal simply by seeking more incentivized reviews. Thus, audits like ours can be useful to assess the effectiveness of anti-fraud systems in the wild.

8.1.5 Changing Landscape.

Our investigation shows that Amazon’s targeted interventions have been largely successful in cracking down on review brokering in Facebook groups. These groups are an important channel of recruitment and communication between agents and jennies, as recruitment of jennies happens primarily on Facebook (see Section 4.2). While agents may use certain tactics (such as posting images or avoiding keywords) to evade automated detection systems, these tactics are not effective against manual investigation and human moderation. While new groups and chats have popped up, they are not as active as the previous ones. Agents have responded to these interventions by creating backup channels on other communication platforms such as Signal or WhatsApp, but this only enables them to continue communicating with existing jennies; recruiting new jennies is not as effective on Signal or WhatsApp. As Facebook groups are taken down, the supply of jennies will be exhausted over time if new ones are not recruited at the same rate. Therefore, although agents have devised workarounds against group removal, Amazon’s targeted interventions will continue to cause a meaningful decline in incentivized review services.

8.2 Limitations and Future Work

Our work provides insights into the inner workings of fake review rings for Amazon products and the evasion tactics employed by fraudsters. While we have some evidence of review fraud on other sites (like Walmart and Target), we did not quantify it or audit their detection measures at scale. Future work could obtain datasets of incentivized product reviews on sites other than Amazon, examine the characteristics of fraudsters and their evasion tactics, and compare the findings to ours. Our findings revealed that fraudsters now use AI tools like ChatGPT to create reviews that appear unbiased and authentic. This raises an important question: Is it possible to distinguish AI-generated reviews from human-written ones, and are the former more effective at evading automated detection systems? While prior work has investigated this potential [54], it is important that future work revisits these questions, as LLM-powered generative models are becoming increasingly effective and accessible.
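One heuristic from that line of work is to measure how statistically predictable a review is under a language model, since fluent machine-generated text tends to score low in perplexity. The sketch below (assuming the transformers and torch packages; the model choice is illustrative, and this is a weak signal rather than a validated detector) computes GPT-2 perplexity for a review:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean token cross-entropy
        return float(torch.exp(loss))

    print(perplexity("This vacuum cleaner exceeded my expectations in every way."))

As generators improve, such perplexity gaps shrink, which is precisely why these questions merit revisiting.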

8.3 Conclusion

In this paper, we studied incentivized reviews through the perspective of the fraudulent actors (agents and jennies). The qualitative analysis of our surveys identified various tactics employed by agents and jennies to evade detection by Amazon, such as manipulating account activity, writing negative reviews for genuine products, and modifying review style and content. Our audit of Amazon’s existing countermeasures, by way of incentivized review removal, showed that these services are either able to evade detection or keep adding new incentivized reviews when Amazon does delete some of them. To overcome targeted countermeasures by Amazon (which led to the removal of Facebook groups), agents have adopted measures to protect their groups, such as moving to Signal and more carefully vetting group members to weed out undercover investigators. Additionally, we discovered how review agents use generative AI tools such as ChatGPT to write reviews and other promotional material that evades detection by platforms. We have disclosed our findings to Amazon and shared insights about evasion strategies that can help them extract relevant features for future-proofing their detection systems. Overall, we find that detecting incentivized reviews is a challenging problem, particularly because these reviews come from real people who do not leave an obvious trace of the fraudulent activity for Amazon to detect, and fraudulent reviewers have genuine account activity, making the fraud hard to isolate. However, targeted countermeasures like the ones carried out by Amazon can succeed in eliminating the primary channels where complicit buyers (jennies) are recruited. While backup channels exist, the agents’ inability to identify and recruit new jennies on Facebook is likely to cause the underground incentivized review economy to grind to a halt.


A DATA CRAWLING

When we pretended to be buyers from the United States, we posted in the groups asking for lists of items. Agents responded by commenting on the posts; one such example is given in Figure 5. Note that these were public posts – anyone in the group could have accessed the links.

Figure 5: Agents responding on groups with links to spreadsheets containing product lists


B SURVEY QUESTIONNAIRE

The surveys were semi-structured, meaning that there were some questions with free-form responses. In this section, we describe our survey questionnaire.

B.1 Phase 1

In our first phase, we posed questions in four areas, described below. All questions were optional, and we interpreted a lack of response as "Prefer not to Answer". Note that the questions below are from the survey sent to agents; the survey for jennies had questions along the same lines, but worded to reflect their point of view.

Introduction. This section did not have any questions but was meant to introduce participants to the study. It consisted of the following information:

Purpose of the study

Nature of questions asked

Disclosure of how responses would be used

Participant rights about withdrawing participation or declining to answer

Contact information for lead researcher

Demographics. This section consisted of basic questions to understand the agent demographic. While this has already been covered in prior work, we believed it would help us provide context for the other responses.

What is your age?

What gender do you identify as? (Male / Female / Other)

What country are you located in?

What is your highest level of education? (High School / Diploma / Bachelors / Masters / PhD)

Are you pursuing any form of education? (High School / Diploma / Bachelors / Masters / PhD)

What is your annual income in USD?

What is your annual income, apart from that earned through reviews, in USD?

What countries are sellers from?

Operations. This section aimed to understand how the reviews ecosystem works and the operational characteristics of agents and jennies.

How many products do you have listed with you at the moment?

Which websites do the products belong to?

In your words, how does the process of getting a review for a product work?

How do you reach out to Jennies?

How many Facebook groups are you a part of?

Typically, what happens in the Facebook groups?

Incentives. In this section, we asked questions to discover the incentives and motivating factors that lead agents and jennies to engage in incentivized review fraud.

How much do you earn per review?

How often are you paid?

In the past year, what is the highest you have earned from reviews in a month?

Are you aware that engaging in incentivized reviews is against Amazon’s Terms of Service? (Y / N)

Evasion. Finally, this section dealt with review deletion by Amazon, including how agents ensure that reviews are not deleted, and what happens if a review is deleted.

If a review is deleted by Amazon, what happens to the buyer refund?

If a review is deleted by Amazon, what happens to the agent commission?

How do you prevent reviews from being deleted by Amazon?

B.2 Phase 2

Our second-phase survey applied only to agents; it aimed to discover how agents perceive and respond to the changing landscape of fake reviews, such as the lawsuit by Amazon and group removal by Facebook.

Introduction. This section did not have any questions but was meant to introduce participants to the study. It consisted of the following information:

Purpose of the study

Nature of questions asked

Disclosure of how responses would be used

Participant rights about withdrawing participation or declining to answer

Contact information for lead researcher

Demographics. This section consisted of basic questions to understand the agent demographic. While this has already been covered in prior work, we believed it would help us provide context for the other responses.

What is your age?

What gender do you identify as? (Male / Female / Other)

What country are you located in?

What is your highest level of education? (High School / Diploma / Bachelors / Masters / PhD)

Perceptions on Group Removal. Through this section, we aimed to explore agents’ mental models of group removal by Facebook.

Has a group you were involved in ever been removed by Facebook? (Y / N)

If you answered YES above, how many groups have been removed?

Has your Facebook profile ever been banned? (Y / N)

According to you, how does Facebook detect your groups?

Evasion Tactics. This section consists of questions about how agents change their online behavior in order to prevent their groups from being detected.

How do you prevent your posts from being flagged by Facebook?

How do you prevent your groups from being flagged by Facebook?

Recovery Measures. In this section, we posed questions to understand whether and how agents continued marketing their products even after the lawsuit by Amazon and group removal by Facebook.

What do sellers typically do, if their product is removed from Amazon?

How do you contact jennies if your profile is removed?

How do you continue sharing the products you have, if the group is removed?


C JENNY RECRUITMENT

Recruitment of Jennies via Facebook groups with manipulated words can be seen in Figure 6. The power and role of targeted advertisements can be seen in Figure 7.

Figure 6: Posts in Facebook Groups seeking Jennies. Terms like revi**ew instead of ’review’ are used to avoid being flagged by Facebook.

Figure 7: A targeted advertisement on Facebook, and the same advertisement delivered moments later to the same user via Instagram.


D CODEBOOK

D.1 Recruitment Phase

Onboarding: Recruited via targeted advertisement on Facebook

Onboarding: Recruited via targeted advertisement on Instagram

Onboarding: Recruited via Facebook group

Onboarding: Recruited via email from seller

D.2 Purchase Phase

Evasion Tactic: Simulate organic search with keyword

Evasion Tactic: Interact with questions and comments

Evasion Tactic: Add to Shopping Cart and wait

Evasion Tactic: No Payment with Gift Cards or coupons

Evasion Tactic: Avoid products from the same seller

D.3 Review Phase

Evasion Tactic: Waiting Period before Review

Evasion Tactic: Add Media to Reviews

Evasion Tactic: Long Reviews (at least 300 words)

Evasion Tactic: Consistency across Reviews

Evasion Tactic: Avoid extreme opinions in Reviews

Evasion Tactic: Write some 1/2-star reviews

D.4 General

Motivation: Financial Incentive

Motivation: Free Products

Motivation: Lack of Skills requirement

Fraud: Refund and Return

Fraud: Order Spoofing

Fraud: Competitive Reviews Fraud

Footnotes

*. As of this writing, the author is employed by Microsoft Corporation. However, this research is not endorsed by Microsoft in any way. Opinions expressed are the author’s own, and not of Microsoft Corporation.

1. https://sell.amazon.com/tools/vine

2. Computer Science, Information Technology, Electronics Engineering, and allied fields

3. Optical Character Recognition; the technology used to extract text from images.

4. FB is the colloquial shorthand for Facebook.
Supplemental Material

Video Presentation (mp4, 21.3 MB)

References

  1. 2022. Amazon Inc. vs. John Does. Superior Court of the State of Washington.
  2. Amazon.com Press Room. 2022. Amazon Sues Fake Review Brokers Who Attempt to Profit From Generating Misleading and Fraudulent Reviews. https://press.aboutamazon.com/news-releases/news-release-details/amazon-sues-fake-review-brokers-who-attempt-profit-generating/. Accessed: 2022-05-12.
  3. Kimberly Bales and Terry L Fox. 2011. Evaluating a trend analysis of fraud factors. Journal of Finance and Accountancy 5 (2011), 1.
  4. Rodrigo Barbado, Oscar Araque, and Carlos A Iglesias. 2019. A framework for fake review detection in online consumer electronics retailers. Information Processing & Management 56, 4 (2019), 1234–1244.
  5. Alex Beutel, Wanhong Xu, Venkatesan Guruswami, Christopher Palow, and Christos Faloutsos. 2013. CopyCatch: stopping group attacks by spotting lockstep behavior in social networks. In Proceedings of the 22nd International Conference on World Wide Web. 119–130.
  6. Maia J Boyd, Jamar L Sullivan Jr, Marshini Chetty, and Blase Ur. 2021. Understanding the security and privacy advice given to Black Lives Matter protesters. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
  7. Casey Breen, Cormac Herley, and Elissa M Redmiles. 2022. A large-scale measurement of cybercrime against individuals. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–41.
  8. Amazon Seller Central. 2022. Answers to Questions About Product Reviews. https://sellercentral.amazon.com/gp/help/external/G201972160?language=en_US. Accessed: 2023-09-12.
  9. Chee Chew. 2016. Update on customer reviews, Amazon. https://www.aboutamazon.com/news/innovation-at-amazon/update-on-customer-reviews.
  10. Ylona Chun Tie, Melanie Birks, and Karen Francis. 2019. Grounded theory research: A design framework for novice researchers. SAGE Open Medicine 7 (2019), 2050312118822927.
  11. Federal Trade Commission. 2021. FTC Puts Hundreds of Businesses on Notice about Fake Reviews and Other Misleading Endorsements. https://www.ftc.gov/news-events/news/press-releases/2021/10/ftc-puts-hundreds-businesses-notice-about-fake-reviews-other-misleading-endorsements. Accessed: 2023-11-20.
  12. Federal Trade Commission. 2023. Trade Regulation Rule on the Use of Consumer Reviews and Testimonials. https://www.ftc.gov/system/files/ftc_gov/pdf/r311003consumerreviewsandtestimonials_nprm.pdf. Accessed: 2023-09-12.
  13. United States Congress. 1914. Federal Trade Commission Act. https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim-title15-chapter2-subchapter1&edition=prelim. Accessed: 2023-11-12.
  14. United States Congress. 2012. Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030. https://uscode.house.gov/view.xhtml?req=computer+crime&f=treesort&num=76. Accessed: 2023-11-24.
  15. Tim Cranton. 2010. Cracking Down on Botnets. https://blogs.microsoft.com/on-the-issues/2010/02/24/cracking-down-on-botnets/. Accessed: 2022-12-12.
  16. Zachary Crockett. 2019. 5-star phonies: Inside the fake Amazon review complex. https://thehustle.co/amazon-fake-reviews.
  17. Verena Distler. 2023. The Influence of Context on Response to Spear-Phishing Attacks: An In-Situ Deception Study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 619, 18 pages. https://doi.org/10.1145/3544548.3581170
  18. Brian Donohue. 2013. Microsoft and Friends Take Down ZeroAccess Botnet. https://threatpost.com/microsoft-and-friends-take-down-zeroaccess-botnet/103122/. Accessed: 2023-01-08.
  19. eBay. 2023. eBay ToS on Incentivized Reviews. https://www.ebay.com/help/policies/feedback-policies/feedback-extortion-policy?id=4230. Accessed: 2023-09-12.
  20. Etsy. 2023. Etsy ToS on Incentivized Reviews. https://www.etsy.com/legal/policy/shilling/243317364583. Accessed: 2023-09-12.
  21. Shehroze Farooqi, Guillaume Jourjon, Muhammad Ikram, Mohamed Ali Kaafar, Emiliano De Cristofaro, Zubair Shafiq, Arik Friedman, and Fareed Zaffar. 2017. Characterizing key stakeholders in an online black-hat marketplace. In 2017 APWG Symposium on Electronic Crime Research (eCrime). IEEE, 17–27.
  22. Anirudh Ganesh, Chinenye Ndulue, and Rita Orji. 2023. Tailoring a Persuasive Game to Promote Secure Smartphone Behaviour. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 618, 18 pages. https://doi.org/10.1145/3544548.3581038
  23. Google. 2023. Google’s Response to FTC ANPR on the Use of Consumer Reviews and Testimonials. https://www.regulations.gov/comment/FTC-2022-0070-0034. Accessed: 2023-09-12.
  24. Rakibul Hasan, Rebecca Weil, Rudolf Siegel, and Katharina Krombholz. 2023. A Psychometric Scale to Measure Individuals’ Value of Other People’s Privacy (VOPP). In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 581, 14 pages. https://doi.org/10.1145/3544548.3581496
  25. Sherry He, Brett Hollenbeck, Gijs Overgoor, Davide Proserpio, and Ali Tosyali. 2022. Detecting fake-review buyers using network structure: Direct evidence from Amazon. Proceedings of the National Academy of Sciences 119, 47 (2022), e2211932119.
  26. Hendrik Heuer and Elena Leah Glassman. 2022. A Comparative Evaluation of Interventions Against Misinformation: Augmenting the WHO Checklist. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 241, 21 pages. https://doi.org/10.1145/3491102.3517717
  27. TripAdvisor Insights. 2018. Investigations Spotlight: Jail Time for Review Fraud. https://www.tripadvisor.com/TripAdvisorInsights/w4237. Accessed: 2023-11-22.
  28. Soheil Jamshidi, Reza Rejaie, and Jun Li. 2019. Characterizing the dynamics and evolution of incentivized online reviews on Amazon. Social Network Analysis and Mining 9 (2019), 1–15.
  29. Meng Jiang, Peng Cui, Alex Beutel, Christos Faloutsos, and Shiqiang Yang. 2014. CatchSync: catching synchronized behavior in large directed graphs. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 941–950.
  30. Dilara Kekulluoglu, Kami Vaniea, and Walid Magdy. 2022. Understanding privacy switching behaviour on Twitter. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–14.
  31. Erin Kenneally and David Dittrich. 2012. The Menlo Report: Ethical principles guiding information and communication technology research. Available at SSRN 2445102 (2012).
  32. Leticia Miranda. 2019. Some Amazon Sellers Are Paying $10,000 A Month To Trick Their Way To The Top. https://www.buzzfeednews.com/article/leticiamiranda/amazon-marketplace-sellers-black-hat-scams-search-rankings. Accessed: 2020-09-21.
  33. Yifang Li and Kelly Caine. 2022. Obfuscation Remedies Harms Arising from Content Flagging of Photos (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 35, 25 pages. https://doi.org/10.1145/3491102.3517520
  34. David Maimon and Eric R Louderback. 2019. Cyber-dependent crimes: An interdisciplinary review. Annual Review of Criminology 2 (2019), 191–216.
  35. Ioana Andreea Marin, Pavlo Burda, Nicola Zannone, and Luca Allodi. 2023. The Influence of Human Factors on the Intention to Report Phishing Emails (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 620, 18 pages. https://doi.org/10.1145/3544548.3580985
  36. Ioana Marinescu, Nadav Klein, Andrew Chamberlain, and Morgan Smart. 2018. Incentives can reduce bias in online reviews. Technical Report. National Bureau of Economic Research.
  37. Nicole Nguyen. 2018. Inside Amazon’s Fake Review Economy. https://www.buzzfeednews.com/article/nicolenguyen/amazon-fake-review-problem. Accessed: 2020-09-11.
  38. Nicole Nguyen. 2019. Her Amazon Purchases Are Real. The Reviews Are Fake. https://www.buzzfeednews.com/article/nicolenguyen/her-amazon-purchases-are-real-the-reviews-are-fake. Accessed: 2020-08-24.
  39. Yetunde O Ogunleye, Usman A Ojedokun, and Adeyinka A Aderinto. 2019. Pathways and Motivations for Cyber Fraud Involvement among Female Undergraduates of Selected Universities in South-West Nigeria. International Journal of Cyber Criminology 13, 2 (2019).
  40. Simon Padgett. 2014. Profiling the fraudster: Removing the mask to prevent and detect fraud. John Wiley & Sons.
  41. Sungsik Park, Woochoel Shin, and Jinhong Xie. 2020. Incentivized Reviews. Available at SSRN 3860510 (2020).
  42. Maria Petrescu, Kathleen O’Leary, Deborah Goldring, and Selima Ben Mrad. 2018. Incentivized reviews: Promising the moon for a few stars. Journal of Retailing and Consumer Services 41 (2018), 288–295.
  43. Mizanur Rahman, Nestor Hernandez, Ruben Recabarren, Syed Ishtiaque Ahmed, and Bogdan Carbunar. 2019. The art and craft of fraudulent app promotion in Google Play. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2437–2454.
  44. Kat Roemmich, Florian Schaub, and Nazanin Andalibi. 2023. Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 588, 20 pages. https://doi.org/10.1145/3544548.3580950
  45. Anne Clara Tally, Jacob Abbott, Ashley M Bochner, Sanchari Das, and Christena Nippert-Eng. 2023. Tips, Tricks, and Training: Supporting Anti-Phishing Awareness among Mid-Career Office Workers Based on Employees’ Current Practices. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 621, 13 pages. https://doi.org/10.1145/3544548.3580650
  46. Kurt Thomas, Patrick Gage Kelley, Sunny Consolvo, Patrawat Samermit, and Elie Bursztein. 2022. “It’s common and a part of being a content creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–15.
  47. TrustPilot. 2023. TrustPilot’s Response to FTC ANPR on the Use of Consumer Reviews and Testimonials. https://www.regulations.gov/comment/FTC-2022-0070-0028. Accessed: 2023-09-12.
  48. Amazon US. [n. d.]. Amazon Vine. https://www.amazon.com/vine/about.
  49. Eugene Volokh. 2021. Computer Fraud & Abuse Act Lawsuit for Posting Reviews That Allegedly Violate Terms of Service. https://reason.com/volokh/2021/02/23/computer-fraud-abuse-act-lawsuit-for-posting-reviews-that-allegedly-violate-terms-of-service/. Accessed: 2023-11-24.
  50. Walmart. 2023. Walmart ToS on Incentivized Reviews. http://reviews.walmart.com/content/1334seller/termsandconditions.htm. Accessed: 2023-09-12.
  51. Ye Wang, Patrick Zuest, Yaxing Yao, Zhicong Lu, and Roger Wattenhofer. 2022. Impact and user perception of sandwich attacks in the DeFi ecosystem. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–15.
  52. Fiona Westin and Sonia Chiasson. 2021. “It’s So Difficult to Sever that Connection”: The Role of FoMO in Users’ Reluctant Privacy Behaviours. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
  53. Haitao Xu, Daiping Liu, Haining Wang, and Angelos Stavrou. 2015. E-Commerce Reputation Manipulation: The Emergence of Reputation-Escalation-as-a-Service. In Proceedings of the 24th International Conference on World Wide Web (Florence, Italy) (WWW ’15). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1296–1306. https://doi.org/10.1145/2736277.2741650
  54. Yuanshun Yao, Bimal Viswanath, Jenna Cryan, Haitao Zheng, and Ben Y Zhao. 2017. Automated crowdturfing attacks and defenses in online review systems. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1143–1158.
  55. Yelp, Inc. 2023. Yelp’s Response to FTC ANPR on the Use of Consumer Reviews and Testimonials. https://www.regulations.gov/comment/FTC-2022-0070-0028. Accessed: 2023-09-12.
  56. Yubao Zhang, Shuai Hao, and Haining Wang. 2021. Detecting incentivized review groups with co-review graph. High-Confidence Computing 1, 1 (2021), 100006.
