DOI: 10.1145/3613904.3642520

“I Prefer Regular Visitors to Answer My Questions”: Users’ Desired Experiential Background of Contributors for Location-based Crowdsourcing Platform

Published: 11 May 2024

Abstract

This three-phase study explores the experiential background of contributors to platforms that provide crowdsourced location-related information. Initially, we utilized interviews to understand users’ expectations for location-related information and the contributors’ experiential background they believe would enhance this information’s utility. We then deployed a survey to identify the top eight sought-after location-information types and their perceived characteristics. Then the concluding online scenario-based study provided quantitative evidence about the interrelationships of eight types of location-related information, ten crucial quality attributes, and aspects of the contributors’ experiential background believed to enhance the utility of the descriptions they provide. Notably, although certain experiential background aspects were deemed universally advantageous across all information types, unique connections were identified among specific information types and distinct experiential background aspects seen as augmenting the contributor’s descriptions’ utility. These insights underline the importance of location-based crowdsourcing platforms incorporating contributors’ experiential background when assigning tasks.


1 INTRODUCTION

Collecting geolocation data serves a plethora of purposes, from advancing scientific research [8, 15] and boosting commerce [65] to empowering local communities [45]. Crowdsourcing provides a vast array of knowledge, insights, and perspectives about specific locales for individuals to absorb [4, 61]. However, as the number of contributors sharing their views on these locations increases, so too does the diversity in their feedback. This diversity is rooted in the varied experiential backgrounds of the contributors, encompassing both their personal experiences, such as previous visits to the location, and professional experiences that are directly related to the location in question. The diversity of opinions present on these platforms can be beneficial, serving as an indicator of platform impartiality [11, 14, 67]. However, a significant challenge arises from the current platforms’ lack of contextual information about the contributors’ experiential backgrounds. This deficiency can lead to user confusion, particularly when users encounter a spectrum of differing opinions [52, 59, 70]. It becomes challenging for users to navigate through this array of perspectives and determine which ones are most relevant to their needs. This issue is compounded by the uncertainty surrounding how relevant and applicable the contributors’ experiential backgrounds are to the locations in question.

Assuming present platforms recognize the significance of contributors’ experiential backgrounds, the subsequent question to tackle is: how to discern which such experiential backgrounds are pertinent to the information platform users are seeking. It seems reasonable to expect that such users will find insights more valuable when they are contributed by people with certain background characteristics [54, 69]. For instance, in certain situations, the perspectives of long-term local residents might be deemed more informative [42], while in different contexts, the views of recent visitors or those who have extensive experience visiting similar places could be more valued [66]. Acknowledging their users’ preferences about their contributors’ experiential backgrounds will allow platforms not only to more effectively curate information – i.e., to strategically emphasize specific types of experiential background in their user interfaces – but also to identify whom to engage to provide specific types of information that meet information-seekers’ expectations.

Therefore, while the practice of soliciting information from mobile crowds, as exemplified by platforms like Google Maps1 and Google Crowdsourcing2, has been well-established, enhancing this approach further necessitates an understanding of the specific types of experiential backgrounds users value from contributors. This insight is crucial for platforms aiming to target specific mobile crowds capable of providing descriptions with the highest utility – the practical value and usefulness that individuals derive from these descriptions [7], which are essential in assisting individuals to form accurate judgments about a place or make informed decisions. However, there has been a notable lack of investigation into the specific types of contributors’ experiential backgrounds that individuals find beneficial for enhancing the utility of descriptions about a particular place. Accordingly, this study aims to address three research questions:

RQ1. What kinds of location-related information are people seeking on location-based crowdsourcing platforms, and what characteristics do they perceive this information as having?

RQ2. What particular aspects of quality in the information descriptions are valued when people are seeking specific kinds of location-related information?

RQ3. What types of experiential backgrounds of contributors do information seekers consider as beneficial for enhancing the utility of the descriptions of specific types of location-related information?

To address the research questions, we conducted a three-phase study. The first phase involved semi-structured interviews with 22 users of location-based platforms (21 of whom had also contributed) to garner preliminary insights. The second involved a survey of 162 information seekers to understand the most commonly sought types of location-related information and the attributes they perceived each type as having. Leveraging the survey findings, the final phase engaged 307 participants in an online scenario-based study focused on capturing two key elements: 1) participants’ perceptions of essential aspects of description quality for the commonly sought location-centric information, and 2) their views on the beneficial aspects of contributors’ experiential background that could enhance the utility of the provided descriptions.

As such, the contributions of the current paper are threefold:

The interview study reveals five key attributes of location-related information, ten aspects of description quality valued by users of location-based crowdsourcing platforms, and seven aspects of experiential background that such users consider contributors should have in order to enhance the utility of their descriptions.

The online study identifies aspects of contributors’ experiential background that are universally perceived as beneficial across various types of location-related information. These include the recentness, quantity, and regularity of the contributor’s visits to the location in question.

The online study also brings to light the unique associations between specific types of location-related information, the most valued aspects of description quality and particular elements of a contributor’s experiential background that are perceived as enhancing the utility of the provided descriptions.


2 BACKGROUND

2.1 Mobile Crowdsourcing for Collecting Location-related Information

Crowdsourcing has emerged as a useful method of acquiring data from large groups of individuals [25]. It has been employed in various contexts, including the division of creative labor [63], feedback collection [10, 12, 41], labeling for machine-learning tasks [49, 64], and the provision of objective information about specific objects or places [5, 13, 19, 41, 63]. Mobile crowdsourcing refers both to crowdsourcing tasks performed ‘on the go,’ for which mobile devices are an ideal medium, and to tasks that require contributors to be at a specific location [34, 41]. Various research projects have harnessed distributed crowds to collect data about moving objects [28, 72], specific objects at a particular location [5, 13, 19, 63], or a location itself [10, 12, 41]. Given that much of the information collected pertains to particular locations, such tasks are typically classified as location-related or geographical [27]. A common example is the provision of comments or feedback on online map services such as Google Maps3 and OpenStreetMap4, which can include reviews of restaurants [12, 41], stores [12], or attractions [10, 12, 41].

Numerous studies have demonstrated how mobile crowdsourcing can be used for collecting diverse types of location-related information. For example, the fields of citizen science and mobile crowdsensing have explored recruiting individuals to use their phones to collect data about urban places [9], including about noise [43, 53, 60], air quality [26], and the provision of local facilities like biking trails [46] and accessibility features [19, 56]. Prior research, such as Chang et al. [10], have also investigated the use of mobile crowds to contribute up-to-date information about potential attractions and points of interest in a given area. Likewise, Liu [41] developed a social question-answering service whereby users could ask time-sensitive and location-specific questions and receive prompt answers from the crowd, covering topics such as restaurants, travel, and transportation. As the basis for news reporting, Väätäjä et al. [63] recruited reader-reporters as crowd workers who collected information, including other media content. Similarly, Agapie et al.’s [1] hybrid crowdsourcing process for event reporting used a combination of local workers to gather firsthand information and remote ones to curate it and generate event reports. And Huang et al. [28] developed a participatory sensing system that invited transit passengers to share their location data and provide real-time reports, as a means of predicting buses’ arrival times.

While demonstrating the effective use of crowdsourcing to acquire geolocation data, sometimes relying on real-time updates from on-site contributors, none of the studies reviewed above considered the potential influence of the contributors’ experiential background. This could be particularly pertinent when the information being sought is not time-sensitive, but involves more general aspects of a place. Where platform users’ need or preference is for non-real-time data, contributors who have already moved on from the target location could offer information just as valuable as those who are still there – or sometimes perhaps more valuable, thanks to the additional time they have had to reflect on their experience or compile and structure their observations into a detailed account of the location. On the minus side, however, extending the range of potential contributors is likely to increase the challenge of identifying the most pertinent crowdsourced information, given their different experiential backgrounds with the place.

The present study seeks to augment the relevant literature by investigating whether the end users of crowdsourced information prefer the contributors of such information to have had specific experiential backgrounds; and if so, what types, and how those types vary across different information classes.

2.2 Matching Contributors with Tasks

Task assignment and contributor matching is a specific research area directly relevant to the present study. Prior research in this area has proposed various methods for matching tasks to contributors [23]. These methods include utilizing historical worker data [47, 51], analyzing existing answer distributions [35, 71], asking ‘gold-standard’ questions with known answers [30, 40], and reviewing workers’ attributes [21, 33] and behavioral data [55]. In this section, our focus is on the matching of contributors to tasks based on attributes such as contextual factors [34], skills [22, 44], and cognitive abilities [21, 24]. Some prior studies have looked at matching tasks with crowd workers based on their current location. For example, Kazemi and Shahabi [34] implemented centralized task assignment by assigning workers tasks near their reported locations, with the aim of maximizing the overall number of assigned tasks. Similarly, Tran et al. [62] restricted task performance to workers within the spatiotemporal vicinity of the task; Liu et al. [41] allocated temporal and geo-sensitive questions to workers based on their current location by analyzing live streams from public microblogging platforms; and Linnap and Rice [39] assigned geo-sensitive questions based on location tracking and model-based methods. Konomi et al. [38] assigned tasks based on crowd workers’ movement patterns, and reported achieving greater geographical relevance than basic proximity-based methods could. To maximize the coverage of collected data in a given spatio-temporal space, Ji et al. [31] also assigned tasks to crowd workers based on their mobility, as measured by the time-gap between a participant’s arrival and departure, minus the necessary travel time between his/her point of origin and destination.

In addition to tasks’ geographic locations, researchers have explored task matching based on the expertise and skills the crowd needs to effectively perform them. Heimerl et al. [22], for instance, installed a physical kiosk in a university Computer Science building with the aim of attracting individuals who possessed specialized knowledge or skills in that field to participate in grading tasks. Mavridis et al. [44], meanwhile, proposed a skills-based task-assignment model that measures the distance between the skills of a given worker and the skills required for a specific task, and assigns the most specialized tasks first to those workers with fewer skills. Goncalves et al. [21] investigated the matching of tasks and workers based on the latter’s cognitive abilities, including visual perception and fluency, which were measured in a laboratory setting using the well-established Kit of Factor-Referenced Cognitive Test [18]. Hettiachchi et al. [24] also implemented task assignment based on workers’ cognitive abilities, but via a dynamic system that recommended tasks based on workers’ performance on its four types of online cognitive tasks: namely, classification, counting, transcription, and sentiment analysis. Lastly, instead of using a system to match workers to tasks, Wang et al.’s [68] proposed task-recommendation approach helps workers select tasks based on their own progress and resource contexts.

Nevertheless, few if any previous studies have explored the matching of crowd workers to location-related description tasks based on their personal experiential backgrounds with the locations in question. Given the rising utilization of location-based mobile crowdsourcing platforms, it is critical to ensure that the information available on these platforms is relevant and useful to their users. Gaining an understanding of the aspects of contributors’ experiential backgrounds that users perceive as enhancing the utility of contributors’ descriptions can offer important benefits. Therefore, the present study is intended to bridge this research gap.

The remainder of this paper is structured as follows. First, we describe the design and execution of a semi-structured interview study aimed at obtaining preliminary insights into our three research questions. Next, we describe the survey study we conducted to delve deeper into the types of location-related information commonly sought by crowdsourcing-platform users, along with their perceptions regarding the attributes of this information. Then, we explain how the results from that second phase informed the design of our third: an online scenario-based study, the objective of which was to capture crowdsourcing-platform users’ expectations about the information they were seeking and the aspects of experiential backgrounds they believed would enhance the utility of the descriptions provided.


3 STUDY 1: PRELIMINARY INTERVIEW STUDY


Figure 1: A collection of pre-printed cards supplied by the researchers for use during interviews, covering numerous examples of location-related experience and information

3.1 Interviewees

Our semi-structured interview study involved a diverse group of 22 interviewees, 12 male and 10 female, aged between 21 and 50. Each interviewee self-reported seeking location-based information on crowdsourcing platforms like online maps or forums at least once a month, and all except one also had experience serving as contributors on such platforms. We recruited them through a variety of Facebook groups specifically designed to bridge the gap between researchers and potential research subjects in our country. In appreciation of their time, each participant was compensated with NT$300 (approximately US$10.75).

3.2 Study Procedure

We informed invited participants about the study’s objective. Once they provided their informed consent by signing the consent form, we commenced the interview. During the interviews, we sought to understand the kinds of information the participants had procured or wished to procure from location-based crowdsourcing platforms or online forums. We also inquired about their anticipations/preferences regarding the experiences and backgrounds of the contributors to such platforms and forums, and prompted them to reflect on how such anticipations/preferences differed across location-related inquiry types. In doing this, our goal was to identify the central characteristics of the location-related information they commonly pursued; the quality attributes of the descriptions they expected; and the types of experiential background they wanted contributors to have when answering specific types of location-related questions.

To aid the interview process, we prepared a set of cards (as shown in Fig. 1) featuring examples of various location-related experiences and information. These cards functioned as prompts, helping interviewees recall their past experiences of seeking and obtaining similar information. The content of the cards underwent iterative development, with new ones being introduced whenever the researchers identified a novel type of location-related experience or information from the interview data. Ultimately, we formulated 23 cards that covered types of location-based information (shown in blue in Fig. 1) and 11 cards that addressed types of contributors’ experiences (shown in yellow).

Due to COVID-19 restrictions, all interviews were conducted via online video conferencing with screen-sharing capabilities. The cards were exhibited on Conceptboard5, an online whiteboard platform, which allowed the researchers and interviewees to view them concurrently. Each interview was video-recorded and transcribed, and lasted from 90 to 120 minutes. The interview study, along with the subsequent survey and online study that will be presented later, all received approval from the Institutional Review Board (IRB) of the authors’ institution.

3.3 Data Analysis

The construction of our codebook was informed by our research questions, and we utilized MAXQDA6 for thematic analysis of our interview data. To ensure the reliability of our coding procedure, three researchers independently coded the same three interview transcripts. Then, they jointly reviewed and compared their selected codes, explored similarities and differences in their interpretations, and ultimately reached a consensus on the coding schema. The codebook was then updated accordingly.

3.4 Preliminary Insights from the Interview Study

3.4.1 Key Perceived Attributes of Location-related Information.

Through analysis of our interview data, we found that the interviewees perceived the location-related information they typically sought as having five critical attributes. These attributes, listed below, profoundly influenced their expectations of how the information should be described as well as the experiential backgrounds they sought from contributors.

Objectivity: The degree to which the sought information leans towards being objective or subjective.

Relativity: The degree to which the sought information relates or is comparable to similar information from other locations.

Specificity: The extent to which the sought information applies to a specific item or a wider range of items, inclusive of location and time-period descriptors.

Variability: The degree to which the sought information tends to vary vs. remain stable.

Temporal Regularity: The extent to which the sought information follows a regular or irregular pattern of change over time.

3.4.2 Desired Quality Attributes of Information Description.

As our interviewees shared their perceptions of the attributes of the location-related information being sought, they also conveyed their expectations of an ideal description and the aspects of quality they considered essential for creating useful descriptions. Often, these discussions were intertwined with their opinions on the desired experiential background of the contributors, as such experiences would influence their evaluation and perception of the utility and relevance of the location-related information described. The ten key aspects of description quality that emerged from the interview data are as follows.

Completeness: The extent to which the description provides sufficient breadth and depth of information.

Degree of Context: The extent to which the description provides contextual information.

Enjoyability: The degree to which consuming the description is enjoyable or fun.

Novelty: The extent to which the description is entirely new to information seekers, or differs from their existing knowledge.

Objectivity: The degree to which the description is unbiased and impartial.

Recentness: The extent to which the description is sufficiently up-to-date to be useful in completing the task at hand.

Reliability: The degree to which the description is accurate and trustworthy.

Specificity: The extent to which the description is specific to a particular item, topic, location, and/or time appropriate to the user’s needs.

Temporal Specificity: The degree to which the description satisfies the user’s time-specific needs.

Understandability: The extent to which the user can comprehend the description.

3.4.3 Desired Aspects of Contributors’ Experiential Background.

Finally, the interviewees frequently cited the following seven key aspects of contributors’ experiential background as influencing their evaluation of the descriptions provided.

Length of Residence: The time the contributor spent living in or exposed to the place associated with the described information.

Quantity: The number of times the contributor observed, interacted with, or visited the place associated with the described information.

Recentness: The temporal proximity of the contributor’s last interaction or observation with the described information or visit to the associated place.

Regularity: The frequency with which the contributor observed or interacted with the described information or visited the associated place.

Variety: The diversity of places where the contributor encountered the same or similar types of information.

Professional Relevance: The relatedness of the contributor’s professional experience to the described information.

Engagement in Commentary: The contributor’s propensity to publicly and proficiently share their thoughts or opinions about a place.

3.4.4 The Perceived Attributes of Sought Information Influence User Desires about Description Quality and Contributors’ Experiential Background.

As anticipated, all interviewees acknowledged that their expectations about the aspects of description quality and contributors’ experiential background varied according to the nature of the information being sought. For instance, when interviewees were confronted with information perceived as having high temporal variability, such as crowd density in a specific area, they expressed a preference for very recent descriptions. Consequently, they favored contributions from individuals who had either visited the location recently or did so regularly. On the other hand, when the interviewees perceived that the sought information exhibited specific temporal patterns – e.g., data on traffic congestion and seasonal weather conditions – they were inclined to want clear information about the specific time periods when conditions were observed. As a result, they preferred contributors with a track record of consistent, long-term observations, irrespective of how recent such observations were.

In instances where interviewees perceived information as highly subjective – such as about the taste of food, attitudes of staff, or ambiance – they sought dependable information that thoroughly covered diverse aspects of the location. In such cases, they assigned a high value to completeness and exhibited a preference for contributors who had made vast numbers of observations. This preference appeared to be motivated by a concern that insufficient experience could result in one-off or outlier insights. A majority of the interviewees also showed a preference for contributors who had offered commentary or left online reviews previously, as they believed this experience would equip contributors to identify which aspects of their subjective experience would be most beneficial to future visitors to a location. Additionally, when the requested information was perceived as highly subjective, variable, and relative, such as comparing the tastiness of food or crowd levels at different times and from different places, interviewees desired additional contextual information that might help them discern whether differences among descriptions were attributable to the information itself, the contributor’s subjective feelings or opinions, or the specific circumstances under which the information was observed. With that end in mind, they preferred contributors with a broad range of experience as a basis for such comparisons. Additionally, if a contributor held formal expertise or professional skills related to the topic, such as being a food critic or chef in the case of food-related information, interviewees considered this an advantage.

Lastly, we learned that when it came to highly precise descriptions of information such as Wi-Fi connectivity, operating hours, or menu options at a certain location or during a defined period, the interviewees favored contributors who had visited the location multiple times during that specific timeframe. But when seeking broader information encompassing an entire region, interviewees expressed a preference for contributors who had lived there for a considerable duration, based on a belief that long-term residents would possess comprehensive and diverse knowledge of their own locales, including information about shortcuts, parking, and lesser-known attractions that might be known and accessible primarily to locals.

In summary, our interview findings indicated that the weights assigned to different aspects of description quality and contributors’ experiential backgrounds were highly contingent upon the particular location-related information being sought by the interviewees. Moreover, the range of such information types corresponded to a spectrum of aspects of description quality deemed important by the interviewees, and this in turn led them to have varied preferences about contributors’ experiential backgrounds. We also encountered notable differences of opinion and divergent preferences among interviewees, further underscoring the need for quantitative evidence when delineating the relationships among the type of information being described, user-valued aspects of quality of such description, and user-desired experiential background of contributors. The interviewees also mentioned such a wide variety of location-related information that an exhaustive investigation of it would have been infeasible. Consequently, we elected to narrow our focus to location-related information that was more commonly sought after and diverse enough in its perceived attributes.


4 STUDY 2: A SURVEY OF LOCATION-RELATED INFORMATION AND ITS ATTRIBUTES

Building on the preliminary results described above, we designed a survey aimed at capturing the types of location-related information commonly sought by users of online map platforms, and how those users perceive the attributes of such information. The survey comprised 73 items covering location-related information that had been either 1) frequently mentioned during our interview study or 2) previously recognized as crowdsourcing tasks in the mobile-crowdsourcing literature. These included food reviews [41], product-supply details [20, 32], product pricing [17, 32], crowdedness levels [13, 20, 32, 41], event-related information [1, 20, 41], conditions of public equipment [13, 19, 32, 63], region-specific public issues [43, 50, 53, 60], parking availability [5, 6], scenery descriptions [48, 63], and regional points of interest (POI) recommendations [10, 12, 16, 41]. In ten of the items, we asked the respondents to indicate how frequently they sought each of these types of location-related information, using a seven-point Likert scale ranging from 1=“never” to 7=“always”. In an additional 60 items, we presented an information attribute in the form of a scale with polar opposites as its ends (e.g., “subjective <—> objective”) and asked the respondents to specify the value of the attribute along this continuum, again utilizing a seven-point scale. The six attributes comprised four of those discussed in the context of the interview study (i.e., objectivity, relativity, variability, and temporal regularity), plus time-specificity and location-specificity, to help us further distinguish between these two types of specificity. More detailed descriptions of the survey questions are provided in the supplementary document. Lastly, three items covered the respondents’ basic demographic data. The online survey was administered via SurveyCake7, a tool designed for creating online questionnaires and visualizing data results. The survey took approximately 15 minutes to complete.

4.1 Survey Respondents

The online survey was publicized in numerous Facebook groups and pages dedicated to residents of various cities in our country. These venues were selected for advertising due to our assumption that large numbers of their members would be interested in gathering information about their local areas. Before initiating the study, participants were informed about their rights and the withdrawal process. As an incentive to participate, we entered all respondents into a raffle, with every fifth participant chosen at random receiving a reward of NT$200 (approximately US$7.17). Initially, we received a total of 240 responses. Subsequently, we undertook a data cleaning process, which included eliminating duplicate responses, discarding responses that contained incorrect answers to two embedded attention-check questions, and removing entries from participants who took an unusually short time to complete the questionnaire. This resulted in a final dataset of 162 responses that were used for analysis. In this final dataset, 58.6% of respondents identified as female, 39.5% as male, and 1.9% chose not to disclose their gender. The age range of the respondents was 20 to 56 years, with an average age of 29.8 years and a standard deviation of 7.8.

4.2 Results

4.2.1 Most Frequently Sought Types of Location-related Information.

Table 1 illustrates the frequency with which respondents sought location-related information of various types via the mobile crowdsourcing platforms.

| Information type | Frequency score | Objectivity | Variability | Location-specificity | Temporal regularity | Time-specificity | Relativity |
|---|---|---|---|---|---|---|---|
| Food review | 5.97 | 2.83 | 4.28 | 4.67 | 3.52 | 3.75 | 4.96 |
| Product supply | 4.91 | 5.14 | 4.32 | 4.70 | 4.79 | 4.89 | 4.33 |
| Product price | 6.01 | 5.21 | 3.70 | 4.81 | 3.47 | 3.67 | 5.25 |
| Crowdedness | 4.69 | 4.07 | 5.44 | 4.66 | 5.01 | 5.17 | 4.41 |
| Event-related information | 4.94 | 4.56 | 4.74 | 4.85 | 4.77 | 5.78 | 4.20 |
| Condition of public equipment | 3.73 | 4.53 | 4.42 | 4.05 | 3.43 | 3.89 | 4.07 |
| Regional issue | 3.67 | 3.92 | 4.72 | 3.17 | 3.27 | 3.36 | 4.46 |
| Parking availability | 4.94 | 5.05 | 5.19 | 4.05 | 4.09 | 4.25 | 4.88 |
| Scenery description | 5.05 | 3.06 | 3.91 | 3.40 | 4.34 | 4.33 | 4.37 |
| Regional POI recommendations | 5.75 | 2.90 | 4.19 | 3.70 | 3.53 | 3.77 | 4.61 |

Table 1: Average scores assigned to the frequency and the six key information attributes, by information type


Figure 2: Categorization of information types based on similarities in their attributes, illustrated in a radar chart format. There are two primary groups: (a) Information characterized as highly subjective and relative, and (b) Information noted for being highly time-specific and showing temporal regularity. Category (c) includes various other information types that do not align with the first two groups. For ease of comparison, the radar charts for groups (a) and (b) are stacked in panel (d) for a direct visual comparison.

The three categories of information most frequently sought were product price (average score of 6.01 out of 7), food review (5.97), and regional POI recommendations (5.75). These three categories were sought significantly more often than the others (χ2(9)=462.25; product price: p<.001; food review: p<.001; regional POI recommendations: p<.001). On the other hand, regional issue (3.67) and the condition of public equipment (3.73) were the significantly least sought-after categories (χ2(9)=462.25; regional issue: p<.001; condition of public equipment: p<.001). As such, these two categories were not considered in our subsequent online study.
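The reported omnibus statistic has nine degrees of freedom over the ten information types, but the specific test is not named in this section. As an illustration only, a Friedman test on the repeated frequency ratings is one standard non-parametric choice that yields a chi-squared statistic with k−1 = 9 degrees of freedom; the sketch below assumes a hypothetical 162 × 10 respondent-by-type rating matrix named freq_ratings.

```r
# Illustrative only: the paper does not specify which omnibus test produced the
# reported chi-squared statistic. A Friedman test over the ten information types
# (rows = respondents, columns = information types) is one standard option.
friedman.test(as.matrix(freq_ratings))
```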

4.2.2 Perceived Characteristics of Location-related Information Items.

The respondents’ perceptions of the characteristics of each location-related information item are presented in Table 1. We grouped the information items based on similarities in their perceived characteristics, as depicted in Figure 2. Note that the three items illustrated in Figure 2c – namely product price, parking availability, and scenery description – were grouped together due to their distinct differences from the rest. Our omission of the two least frequently sought information types has already been noted.

Figure 2a shows that food review and regional POI recommendations were similar, insofar as both were perceived as having low objectivity and high relativity compared to the three types of information in Figure 2b. However, food review was perceived as having a higher level of location-specificity than regional POI recommendations. As illustrated in Figure 2b, three types of location-related information – product supply, crowdedness, and event-related information – had similar attributes to one another. That is, they were all perceived as having relatively high objectivity, temporal regularity, and time-specificity compared to food review and regional POI recommendations, as shown in Figure 2d. Among them, crowdedness was regarded as having moderate objectivity. Of the remaining information types (shown in Fig. 2c), scenery description was perceived as having particularly low objectivity and location-specificity. Conversely, product price was seen as highly objective, location-specific, and relative, but as having a low level of temporal regularity. Parking availability was likewise seen as highly objective, location-specific, and relative, but was notable for its high level of variability.
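To make the grouping by attribute similarity concrete, the following sketch (an illustration, not necessarily the procedure the authors used) standardizes the six attribute scores from Table 1 for the eight retained information types and clusters them hierarchically; types with similar profiles, such as those grouped in Figures 2a and 2b, should fall close together.

```r
# Illustrative grouping of the eight retained information types by the similarity
# of their six attribute ratings (values transcribed from Table 1).
attrs <- data.frame(
  row.names = c("Food review", "Product supply", "Product price", "Crowdedness",
                "Event-related information", "Parking availability",
                "Scenery description", "Regional POI recommendations"),
  objectivity          = c(2.83, 5.14, 5.21, 4.07, 4.56, 5.05, 3.06, 2.90),
  variability          = c(4.28, 4.32, 3.70, 5.44, 4.74, 5.19, 3.91, 4.19),
  location_specificity = c(4.67, 4.70, 4.81, 4.66, 4.85, 4.05, 3.40, 3.70),
  temporal_regularity  = c(3.52, 4.79, 3.47, 5.01, 4.77, 4.09, 4.34, 3.53),
  time_specificity     = c(3.75, 4.89, 3.67, 5.17, 5.78, 4.25, 4.33, 3.77),
  relativity           = c(4.96, 4.33, 5.25, 4.41, 4.20, 4.88, 4.37, 4.61)
)
hc <- hclust(dist(scale(attrs)))  # standardize each attribute, then cluster on Euclidean distance
plot(hc, main = "Similarity of perceived attribute profiles")
```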

Based on the survey results, we decided that our final-phase online study should explore, for the eight most frequently sought location-related information types, which aspects of description quality users deemed crucial to the utility of the descriptions. These eight types of information exhibit diverse attributes, ensuring that the online study would not investigate only similar items. Additionally, we aimed to identify the types of experiential background that users believed made contributors’ descriptions more valuable.

| Scenario | Description |
|---|---|
| Food review | An advertisement for a hot pot restaurant caught my attention, and I want to learn about “whether the dishes taste good” on the platform. |
| Product supply | A limited-edition product at a particular bakery is in high demand, so before heading there, I came to the platform hoping to find out “if they are likely to still have this item available at this time”. |
| Product price | A friend recommended a restaurant, and I’m considering whether to go. I came to the platform hoping to find out “the usual price of a set meal” at this restaurant. |
| Crowdedness | I’m planning to visit a ramen restaurant, and before heading there, I came to the platform hoping to find out “if it’s typically crowded at this time”. |
| Event-related information | A store regularly holds promotional events where they offer gifts with purchase. I came to the platform hoping to find out “what are the usual promotional gifts given by this store”. |
| Parking availability | I’m planning to visit a restaurant, and before heading there, I came to the platform hoping to find out “where is usually the best place to park near the destination”. |
| Scenery description | I’m planning to visit a scenic area to look at the flowers, and before heading there, I came to the platform hoping to find out “how the scenery usually is at this time of year in the scenic area”. |
| Regional POI recommendations | I’m planning to travel to a city, and before that, I’d like to learn “what are the recommended tourist attractions in that area” from the platform. |

Table 2: Descriptions of eight scenarios in the online scenario-based study


5 STUDY 3: ONLINE SCENARIO-BASED STUDY

In the final phase, our goal was to understand if users of location-based crowdsourcing platforms valued specific aspects of description quality more highly for certain types of location-related information than for others, and if they believed that particular aspects of contributors’ experiential background enhanced the utility of such information. Accordingly, in our online scenario-based study, participants rated the importance of ten description-quality attributes: completeness, degree of context, enjoyability, novelty, objectivity, recentness, reliability, specificity, temporal specificity, and understandability. Then, they rated how they thought the utility of the provided information would be enhanced by seven key aspects of contributors’ experiential background, i.e., length of residence, quantity, recentness, regularity, variety, professional relevance, and engagement in commentary. Further details are provided below.

5.1 Study Design

We conducted an online study using SurveyCake8, an online survey tool, with scenarios to prompt participants to imagine the experience of using a real location-based crowdsourcing platform. We first captured their general perceptions regarding the expected quality of location-related information descriptions and expected contributors’ experiential background. Specifically, we asked participants to rate the importance of ten aspects of description quality, as well as how much the presence of each of seven aspects of contributors’ experiential background would enhance the utility of the provided descriptions based on their general experiences of using platforms such as Google Maps, Facebook, and BBS Local Boards to search for/inquire about location-related information.

Subsequently, the participants encountered eight different scenarios, as shown in Table 2, each revolving around one of the eight most sought-after location-related information types identified in our survey results. These were food review, product supply, product price, crowdedness, event-related information, parking availability, scenery description, and regional POI recommendations.

For each scenario, participants responded to a set of three question sections. First, they rated their level of interest in the presented location-related information. Then, they rated the importance of different aspects of description quality on a five-point Likert scale ranging from 1=“not very important” to 5=“very important”, and how much the presence of each of seven aspects of contributors’ experiential background would enhance the utility of the provided descriptions from 1=“no utility increase” to 5=“significant utility increase”. To ensure attentive participation, we included two attention-check questions in the study, both of which simply instructed the participants to select “4” as their answer.

5.2 Participants and Recruitment

We disseminated the recruitment message across various online platforms, including local residential-oriented Facebook groups, Internet forums, Google Local Guides, and travel forums. The message outlined specific eligibility criteria: being aged 20 or older and having prior experience in obtaining location-related information from crowdsourcing platforms such as Google Maps, BBS Local Boards, and Facebook local groups. The recruitment message included a link to the study webpage, allowing participants to access it directly.

During the four-month period from April 1 to July 31, 2022, we gathered 404 responses. Following data cleaning, as detailed in the upcoming section, a total of 307 responses were deemed valid and included in our subsequent analyses. These valid responses came from 193 female participants, 108 male participants, and six participants who opted not to reveal their gender. Their ages ranged from 20 to 69 years (M=28.9, SD=8.0). In terms of frequency of using location-based crowdsourcing platforms, 71 reported doing so several times a day; 18, once a day; 116, several times a week; 20, once a week; 59, several times a month; 10, once a month; and 13, several times a year. The participants with valid responses were remunerated NT$100 (US$3.19).

5.3 Study Procedure

The study procedure is depicted in Figure 3, which outlines the step-by-step webpages encountered by participants. When participants clicked on the study’s webpage link, they were first presented with an introduction and consent form. This form included information on eligibility for participation, the study’s objectives, detailed procedures, participants’ rights, and the process for withdrawing from the study. At this stage, participants could choose to continue by clicking an "Agree" button or opt to exit the page. Upon agreeing, they gained access to an online document explaining key terms they would encounter during the study, including definitions and examples of the ten aspects of description quality for enhanced clarity. Participants were allowed to refer to this document at any point during the study.


Figure 3: This figure provides a comprehensive overview of the online study procedure. It begins with the Introduction and Consent phase, followed by a detailed explanation of the key terms pertinent to the study. Subsequent to this, study instructions are provided, setting the stage for the core component of the study, which consists of nine distinct sets of questions. The first set pertains to a general situation, while the subsequent eight sets are centered around specific scenario descriptions, each addressing three types of queries: interest in the scenario, the quality of information valued, and experiences considered helpful. After the completion of all question sets, the study progresses to the phase where participants provide their background information.


Figure 4: Ratings of how much seven aspects of contributors’ experiential background enhanced description utility, by location-related information type

Initially, they encountered a brief set of instructions explaining that they would be presented with nine sets of questions. Each set included a scenario description and three types of questions: participants’ interest in the information type, the aspects of information quality they valued, and the experiences they found helpful. Participants were asked to imagine themselves using a platform similar to Google Maps, Google Local Guide, or a local community on Facebook or BBS forums, where individuals collaborate to answer each other’s local queries. Their task was to seek local information on this platform. The first set of questions was based on a general situation, while the subsequent sets focused on scenarios for eight different types of information, as detailed in Table 2. Upon completing all question sets, participants provided basic background information, such as gender, age, and frequency of viewing location-related information on crowdsourcing platforms. Lastly, they received an identification number for participation in a raffle. The goal of this design was to ensure participant understanding and engagement throughout the study.

5.4 Data Cleaning and Analysis

To ensure the validity of our data, we adopted a multifaceted approach: scrutinizing responses for two attention-check questions, tracking the time participants spent on the survey, and identifying duplicate responses. This led to the removal of 97 responses in total. Among these, 89 were disqualified for failing the attention checks, 5 were eliminated due to being multiple submissions by the same individual, and 3 were excluded for excessively rapid completion times.
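A minimal sketch of this screening logic, under the assumption of hypothetical variable names (raw_responses, attention_check_1, completion_seconds, and so on) and an assumed timing cutoff, is shown below.

```r
# Sketch only: all object/column names and the timing threshold are assumptions,
# not the study's actual variables.
library(dplyr)

min_plausible_seconds <- 300  # assumed cutoff for "excessively rapid" completion

cleaned <- raw_responses %>%
  filter(attention_check_1 == 4, attention_check_2 == 4) %>%  # both checks asked respondents to select "4"
  distinct(participant_id, .keep_all = TRUE) %>%              # drop duplicate submissions by the same person
  filter(completion_seconds >= min_plausible_seconds)         # drop implausibly fast completions
```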

For statistical analysis, we ran linear mixed-effects models using the R package lme4 [3]. We opted for a linear mixed-effects model due to the repeated measures from participants in our study. This approach, accounting for individual differences, allowed us to estimate fixed effects on the dependent variable while controlling for random effects. Participant IDs were used as random effects to manage repeated data from the same individuals. Moreover, the robustness of mixed-effects linear regression against assumption violations underscores its appropriateness for our analysis [58]. Below, we report test results with p-values computed using Satterthwaite’s approximation of the effective degrees of freedom [57].
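As a minimal sketch of this setup (variable names below are placeholders rather than the study's actual columns), loading lmerTest on top of lme4 provides Satterthwaite-approximated degrees of freedom for the fixed-effect p-values.

```r
# Sketch of the modeling approach: ratings in long format, one row per
# participant x scenario; participant ID enters as a random intercept.
library(lme4)
library(lmerTest)  # adds Satterthwaite-approximated df and p-values to lmer() summaries

m <- lmer(rating ~ information_type + (1 | participant_id), data = ratings_long)
summary(m)  # fixed effects with Satterthwaite-based p-values
```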

5.5 Results

In this section, we first identify which aspects of contributors’ experiential background were regarded as enhancing and not enhancing the utility of descriptions for the eight types of location-related information, respectively. Subsequently, we explore the relationship between description quality aspects and each type of location-related information. We also examine the correlation between contributors’ experiential backgrounds and description quality aspects. Finally, we utilize linear mixed-effects regression models to identify which among the ten aspects of description quality are effective predictors of the value attributed to specific types of experiential background.


Figure 5: Ratings of how ten description-quality attributes were valued, by location-related information type

5.5.1 Experiential Background Deemed to Enhance Description Utility, by Information Type.

Overall, participants regarded recentness (M=4.34, SD=0.84), quantity (M=4.12, SD=1.00), and regularity (M=3.87, SD=1.15) as the top three aspects of contributors’ experiential background that were likely to enhance the utility of location-related information descriptions. As shown in Figure 4, which presents the participants’ ratings of seven aspects of contributors’ experiential background across eight types of location-related information, recentness was rated above 4 across all information types. In contrast, professional relevance (M=2.28, SD=1.2) was considered the least helpful aspect of a contributor’s experiential background, averaging below 3 across all types of information.

The figure reveals intriguingly specific relations between types of information and aspects of contributors’ experiential background. Notably, length of residence received low ratings in the context of food-review, product-supply, product-price, and event-related information, all of which were regarded as highly location-specific by the respondents in Study 2. In contrast, length of residence received higher ratings for other types of information that were perceived as more location-general (i.e., parking-availability information, scenery description, and regional POI recommendations). The ratings assigned to this aspect of experiential background for these three types of information were significantly higher than for the rest (parking availability: F=316.25, p<.001; scenery description: F=234.63, p<.001; regional POI recommendations: F=245.25, p<.001).

Similarly, variety received ratings of 3.6 or above for scenery description, regional POI recommendations, and food review: significantly higher than the ratings it received in connection with other types of information (food review: F=282.64, p<.001; scenery description: F=101.76, p<.001; regional POI recommendations: F=249.16, p<.001). Regularity, in contrast, was not considered beneficial to food review, product price, or event-related information (food review: F=45.76, p<.001; product price: F=21.177, p<.001; event-related information: F=112.17, p<.001). Interestingly, in the case of the latter type of information, only recentness was perceived as beneficial.

5.5.2 Perceived Important Aspects of Description Quality, by Information Type.

Figure 5 links the perceived importance of various aspects of description quality to the eight types of location-related information. Similar to contributors’ experiential background, participants of the online study underscored the importance of the recentness of descriptions (M=4.39, SD=0.84). Reliability was also assigned high importance across most-sought types of information (M=4.44, SD=0.82). The differences in the importance ratings given to these aspects compared to others were both statistically significant (recentness: p<.001; reliability: p<.001). In contrast, enjoyability (M=2.72, SD=1.3) and novelty (M=2.97, SD=1.26) were not perceived as important across most information types. The differences in the importance ratings given to these aspects compared to the others were also both statistically significant (enjoyability: p<.001; novelty: p<.001). The exception was regional POI recommendations, for which these aspects of description quality received statistically significantly higher scores of 3.57 (SD=1.23) and 3.83 (SD=1.14), respectively (enjoyability: F=317.62, p<.001; novelty: F=304.05, p<.001).

Figure 5 also reveals that participants valued or discounted specific aspects of description quality for different types of information. For instance, temporal specificity was seen as important for five out of eight information types, but relatively unimportant for food review (M=3.34, SD=1.25), product price (M=3.52, SD=1.30), and event-related information (M=3.78, SD=1.19). The ratings assigned to this description quality for these three types of information were significantly less than the rest (food review: F=425.31, p<.001; product price: F=286.9, p<.001; event-related information: F=155.29, p<.001). Conversely, objectivity was rated as significantly more important for food review (M=4.15, SD=0.95) than for other types of information (F=73.914, p<.001). Similarly, completeness was rated significantly higher in the context of product price (M=4.09, SD=0.94) than in other information contexts (F=32.231, p<.001).

These results implicitly reveal associations between aspects of description quality and contributors’ experiential background, as both display specific associations with certain types of location-related information. Below, we explore the correlations between these two sets of factors.

5.5.3 Correlation between Desired Contributors’ Experiential Background and Valued Aspects of Description Quality.

We conducted a Spearman correlation analysis to initially explore the relationships between different aspects of contributors’ experiential backgrounds and the aspects of description quality. The results are shown in Figure 6.
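Assuming two aligned data frames of per-response ratings (hypothetical names below: seven experiential-background columns and ten description-quality columns in matching row order), the matrix underlying Figure 6 can be computed in a single call.

```r
# Sketch only: `experience_ratings` (7 columns) and `quality_ratings` (10 columns)
# are assumed to hold the ratings for the same responses in the same row order.
rho <- cor(experience_ratings, quality_ratings, method = "spearman")
round(rho, 2)  # 7 x 10 matrix of Spearman correlations, cf. Figure 6
```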


Figure 6: Spearman correlations between seven desired aspects of contributors’ experiential background and ten desired description-quality attributes

It is important to emphasize that such correlations represent overarching relationships, and do not take account of the specific types of location-related information. As indicated earlier, some facets of experiential background or aspects of description quality were perceived as particularly helpful when seeking certain types of information, and the above correlations do not reflect those specific relationships. As such, the results should not be interpreted as fully explaining the relationship between the two elements. Indeed, probably because of the intricate nature of the relationships involved, no pairing of an experiential-background aspect and a description-quality aspect demonstrated a strong correlation. Even in the seemingly straightforward pairing of recentness of experience and recentness of description, the correlation was only moderate, at 0.43. This suggests that the perceived benefit of a contributor’s experiential background is not dependent on any one aspect, but rather on a combination of various ones.

That being said, the correlation results did indicate that certain aspects of experiential background were more likely to be seen as beneficial when particular aspects of description quality were valued. For example, when participants valued temporal specificity in a description, they were also more likely to value contributors’ longer residence and the quantity, recentness, and regularity of their visiting experiences. In contrast, variety, professional relevance, and engagement in commentary were more likely to be deemed beneficial when participants assigned a high value to the enjoyment they might derive from reading descriptions.

Additionally, some extremely low absolute correlation values in our results can be used to identify which aspects of contributors’ experiential background and which aspects of description quality were considered to be unrelated. For instance, participants’ perception of whether recent experience was beneficial showed no correlation with the importance they assigned to the enjoyability of descriptions. Similarly, the perceived value of relevant professional experience was uncorrelated with the perceived importance of a description being temporally specific and recent.

5.5.4 Predictors of Desired Aspects of Contributors’ Experiential Background.

Lastly, we employed linear mixed-effects regression models to identify which of the ten aspects of description quality served as effective predictors of the value assigned to particular types of experiential background. These models were structured using the R package lme4 [3].

The predictor variables, often referred to as independent variables, included all ten aspects of description quality, while the type of location-related information was included as a categorical variable (with event-related information used as the reference level). We also included each participant’s ID as a random effect to control for repeated measures from the same individual. The findings of our seven regression models predicting the perceived helpfulness of each aspect of contributors’ experiential background are presented in Table 3. The effects linked with information type are outlined in the top section of the table, and those associated with description quality in the lower section.
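For illustration, one of the seven models (here, predicting the perceived helpfulness of length of residence) might be specified as sketched below; the data frame d and its column names are placeholders rather than the study's actual variables.

```r
library(lme4)
library(lmerTest)

# Event-related information as the reference level of the categorical predictor.
d$information_type <- relevel(factor(d$information_type), ref = "event-related information")

m_residence <- lmer(
  length_of_residence ~ information_type +
    completeness + degree_of_context + enjoyability + novelty + objectivity +
    recentness + reliability + specificity + temporal_specificity + understandability +
    (1 | participant_id),
  data = d
)
summary(m_residence)                 # cf. the first column of Table 3
# MuMIn::r.squaredGLMM(m_residence)  # one way to obtain marginal/conditional R-squared values
```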

| Predictor | Length of residence | Quantity | Recentness | Regularity | Variety | Professional relevance | Engagement in commentary |
|---|---|---|---|---|---|---|---|
| (Intercept) | 0.517** (0.198) | 2.069*** (0.151) | 2.499*** (0.126) | 1.920*** (0.169) | 1.287*** (0.186) | 1.449*** (0.174) | 1.769*** (0.177) |
| Food review | 0.361*** (0.082) | 0.482*** (0.061) | 0.020 (0.052) | 0.206** (0.065) | 0.880*** (0.082) | 0.301*** (0.066) | 0.488*** (0.065) |
| Product supply | 0.790*** (0.082) | 0.542*** (0.060) | 0.145** (0.052) | 0.610*** (0.065) | -0.120 (0.082) | 0.181** (0.066) | 0.182** (0.065) |
| Product price | 0.116 (0.081) | 0.324*** (0.060) | 0.126* (0.051) | 0.301*** (0.064) | 0.208* (0.081) | -0.032 (0.065) | 0.195** (0.064) |
| Crowdedness | 1.262*** (0.083) | 0.579*** (0.061) | 0.099 (0.052) | 0.533*** (0.065) | -0.005 (0.083) | -0.236*** (0.067) | 0.114 (0.066) |
| Parking availability | 1.635*** (0.083) | 0.550*** (0.061) | 0.104* (0.052) | 0.464*** (0.065) | 0.014 (0.083) | -0.401*** (0.066) | -0.044 (0.066) |
| Scenery description | 1.469*** (0.082) | 0.427*** (0.060) | 0.200*** (0.052) | 0.376*** (0.064) | 0.478*** (0.082) | -0.244*** (0.066) | 0.150* (0.065) |
| Regional POI recommendations | 1.448*** (0.083) | 0.426*** (0.061) | 0.067 (0.053) | 0.249*** (0.065) | 0.701*** (0.083) | 0.136* (0.067) | 0.368*** (0.067) |
| Completeness | -0.012 (0.031) | 0.056* (0.023) | 0.088*** (0.020) | 0.045 (0.026) | 0.036 (0.031) | 0.035 (0.026) | 0.020 (0.026) |
| Degree of context | 0.050* (0.025) | 0.022 (0.019) | -0.009 (0.016) | 0.016 (0.020) | 0.113*** (0.025) | 0.039 (0.021) | 0.079*** (0.021) |
| Enjoyability | -0.002 (0.026) | 0.005 (0.020) | -0.007 (0.017) | 0.027 (0.021) | 0.109*** (0.026) | 0.062** (0.022) | 0.079*** (0.022) |
| Novelty | 0.058* (0.024) | 0.044* (0.018) | -0.007 (0.015) | 0.029 (0.020) | 0.120*** (0.024) | 0.072*** (0.020) | 0.035 (0.020) |
| Objectivity | 0.052* (0.026) | 0.063*** (0.019) | -0.009 (0.016) | 0.066** (0.021) | 0.105*** (0.025) | 0.083*** (0.021) | 0.069** (0.021) |
| Recentness | -0.002 (0.032) | 0.039 (0.024) | 0.248*** (0.020) | 0.044 (0.026) | 0.020 (0.032) | -0.006 (0.026) | 0.038 (0.026) |
| Reliability | 0.064 (0.033) | 0.036 (0.025) | -0.002 (0.021) | 0.010 (0.027) | 0.004 (0.033) | -0.019 (0.027) | 0.000 (0.027) |
| Specificity | 0.088** (0.028) | 0.067** (0.021) | 0.016 (0.018) | 0.076*** (0.022) | 0.035 (0.028) | 0.007 (0.023) | 0.026 (0.023) |
| Temporal specificity | 0.154*** (0.024) | 0.082*** (0.018) | 0.043** (0.015) | 0.081*** (0.019) | -0.066*** (0.024) | -0.014 (0.020) | -0.039* (0.020) |
| Understandability | 0.030 (0.033) | 0.009 (0.025) | 0.050* (0.021) | 0.027 (0.027) | 0.065* (0.033) | 0.013 (0.028) | 0.032 (0.028) |
| Marginal R² | 0.293 | 0.115 | 0.137 | 0.091 | 0.235 | 0.091 | 0.076 |
| Conditional R² | 0.508 | 0.459 | 0.419 | 0.528 | 0.452 | 0.553 | 0.615 |
Note: Reference level for the eight types of location-related information is event-related information.

Table 3: The eight types of location-related information and the ten valued description-quality attributes as predictors of desirable aspects of contributors' experiential background. *** p<.001, ** p<.01, * p<.05
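For interpreting the two \(R^2\) rows: under the definitions commonly reported for lme4 models (an assumption here, since the paper does not state its exact computation), marginal \(R^2\) reflects the variance explained by the fixed effects alone, while conditional \(R^2\) also includes the participant-level random intercept:

\[ R^2_{\mathrm{marginal}} = \frac{\sigma^2_{f}}{\sigma^2_{f} + \sigma^2_{\alpha} + \sigma^2_{\varepsilon}}, \qquad R^2_{\mathrm{conditional}} = \frac{\sigma^2_{f} + \sigma^2_{\alpha}}{\sigma^2_{f} + \sigma^2_{\alpha} + \sigma^2_{\varepsilon}}, \]

where \(\sigma^2_{f}\) is the fixed-effects variance, \(\sigma^2_{\alpha}\) the participant (random-intercept) variance, and \(\sigma^2_{\varepsilon}\) the residual variance.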

In most of the seven experience models, there were significant positive main effects of both information types and aspects of description quality. Here, it is important to note that the coefficients for information type should be read in comparison to the reference level: in this case, event-related information, which as noted earlier was the type of information associated with the lowest average ratings for all aspects of contributors’ experiential background.

Each model of experiential background has a unique set of potent predictors among the aspects of description quality, and importantly, some aspects demonstrate no predictive power in any of the models. Below, we discuss the predictors for each experience model.

Length of Residence, Quantity, and Regularity

Table 3 reveals that three kinds of experiential background – length of residence, quantity, and regularity – share two predictors with strong statistical significance: temporal specificity and specificity. This similarity could have arisen from our participants perceiving these three kinds of experiential background as similar and/or closely related. However, the predictive power of temporal specificity was stronger for length of residence than for quantity and regularity, suggesting that when assigning value to the temporal specificity of descriptions, the participants particularly desired contributors to be long-term residents of the place being described.

On the other hand, objectivity emerged as a significant predictor for the experiential backgrounds of quantity and regularity. In other words, participants more frequently associated the ability to deliver objective information with the experience derived from regular and frequent visits, rather than with long-term residence. This could have been due to the participants’ sense that residing near a specific location does not necessarily equate to frequent and regular visits to it.

Recentness

The aspect of recentness in experiential background is prominently linked with the predictor of recentness in description quality, reflecting a clear and direct correlation between these two factors. Moreover, this predictor does not strongly predict other aspects of experiential background, hinting at a unique and exclusive relationship between these two variables. Although their direct relationship may seem intuitive, the lack of cross-predictivity with other variables was somewhat unexpected.

Variety, Professional Relevance, and Engagement in Commentary

Three aspects of experiential background – variety, professional relevance, and engagement in commentary – were found to have two powerful, statistically significant predictors in common: enjoyability and objectivity. In other words, when the participants sought enjoyable and objective descriptions, they preferred contributors to have these three aspects of experiential background. The variety aspect of experiential background was also predicted by a broader range of description-quality aspects. Specifically, it shared a strong predictor – degree of context – with the engagement-in-commentary aspect, implying that participants thought contributors with such experience were more likely than others to include context in their descriptions. The variety aspect, meanwhile, shared a predictor – novelty – with the professional-relevance aspect of experience, suggesting that participants thought contributors with these two types of experience would likely provide information previously unknown to the participants. Variety was also predicted by temporal specificity, perhaps due to an assumption that variety implies quantity and/or coverage of multiple time periods.

Interestingly, no contributor-experience rating was predicted by reliability, and in two of our models, understandability was only a weak predictor. These results could have been due to the explanatory power of these two variables being overshadowed by more specific aspects of description quality. However, Figure 6 suggests another possible reason: low variance in the rated importance of these two aspects of description quality across different types of information. In other words, the participants generally considered reliability and understandability to be equally important irrespective of information type. In the sections below, we summarize the key findings from the results presented and discuss their implications.

6 DISCUSSION

Current research on location-based crowdsourcing platforms tends to focus on increasing the numbers of contributors [21], enhancing productivity [20, 32], decreasing disruptions to the crowd [37], improving task quality [29], and selecting contributors based on factors like location context [34, 62], mobility [31, 38], and cognitive abilities [21, 24]. The present study, in contrast, is the first to establish a link between the types of location-related information being sought and the experiential background that users believe would enhance the utility of the descriptions provided by contributors. The subsections below provide detailed discussions of our results, along with their design implications for future location-based crowdsourcing platforms.

6.1 Experiential Background Deemed Universally Beneficial vs. Beneficial for Specific Types of Location-related Information

Our findings indicate that certain aspects of contributors' experiential background were perceived as universally beneficial to information descriptions by most of our online-study participants, with the top three being recentness, quantity, and regularity. In other words, regardless of the type of information being sought, the presence of these types of experiential background among contributors will generally be considered more beneficial than their absence. Accordingly, across the majority of location-related information being sought, location-based crowdsourcing platforms could usefully prioritize contributors who possess these universally preferred experiential backgrounds.

However, and notably, our findings highlight the additional benefit of task assignments tailored to specific experiential backgrounds. While prior research has delved into matching contributors with tasks based on their expertise and skills in non-location-related contexts [22, 44], our study sheds light on the distinct and sometimes exclusive relationships between experiential backgrounds and particular types of location-related information. This insight supports the adoption of a similar task assignment strategy, which involves pairing contributors with location-related queries based on their relevant experiential backgrounds. This approach represents a notable contribution of our study, enhancing both the methodology of task assignment and the field of location-related crowdsourcing [1, 10, 28, 41, 63].

For example, when temporally specific descriptions are desired – e.g., when querying high-variability information such as about parking availability or crowdedness – participants favored contributions by long-time residents. This suggests that the participants associated the ability to provide time-specific information with extensive observations both across seasons and at multiple times of day.

Conversely, when participants were interested in obtaining highly subjective information, such as assessments of food, recommendations about POIs, or descriptions of scenery, they tended to assign a high value to contributors with a variety of similar experiences. This preference was correlated with a strong desire for objectivity in descriptions of these types of inherently subjective location-related information. The preference for variety of experience was likely informed by the idea that contributors who have visited a diverse range of locations are able to provide nuanced comparisons and contextualize their assessments. Such context-rich comparative analyses, in turn, could lend a degree of objectivity to subjective evaluations, by situating assessments in relation to other contexts rather than attributing them solely to the information items in question. Such an approach enhances the utility of the information provided by enabling information seekers to discern between individual bias and assessments made after thoughtful comparison and consideration of contextual factors.

Participants also deemed the variety aspect of experiential background beneficial to the provision of novel information, presumably because encountering similar information items in a variety of contexts implies that the contributor has experienced those items in various situations, and has visited places or observed things that the information seekers have not.

When participants perceived novelty, enjoyment, and objectivity as highly valuable, they perceived contributors’ professionally relevant experience as beneficial, alongside variety. This could have been because their professional backgrounds allow contributors to provide information from an expert perspective, introducing insights and knowledge uncommon among the general public, thus boosting typical platform users’ sense of novelty and enjoyment.

Probably because contributors who engage actively in commentary are often perceived as possessing experience of delivering information in an enjoyable and engaging way, active-commentary experience was deemed helpful by participants seeking descriptions that were enjoyable. Such experience was also deemed useful when participants desired a deeper understanding of information’s context. This could have been due to an expectation that contributors who actively engage in commentary have a better understanding of the specific needs of information seekers, and are able to provide context-specific information to effectively meet those needs. Participants also desired such experience when seeking objective descriptions, since they believed this experience would equip contributors to identify which aspects of their subjective experience would be most beneficial to future visitors to a location.

Due to space considerations, our discussion cannot cover every link in detail. Overall, however, these specific connections suggest that location-based crowdsourcing platforms can do more than match tasks with the crowd's geolocation and movement, as prior studies have done [31, 34, 38, 39, 41]. They can also thoughtfully consider the link between the nature of location-related crowdsourcing tasks and the crowd's experiential background when assigning tasks. This additional consideration could potentially enhance the relevance and applicability of the information provided to the seeker. However, it would also be beneficial for the system to determine the urgency level required by the information seeker, as this would enable the system to tailor its crowd-recruitment strategy, striking a balance between prioritizing individuals' geolocation – essential when immediate receipt of information is critical – and their experiential background, which may be more relevant for other types of inquiries.

6.2 Moving away from Locality and toward Experience in Location-based Crowdsourcing Platforms

Several location-based crowdsourcing platforms, including Google's Local Guides and Local Wiki, concentrate on using ‘local’ individuals to offer location-specific details, under an assumption that these individuals are best equipped to answer queries about a specific area. However, our findings suggest the importance of broadening the focus from a strict ‘local’ emphasis to encompassing a more diverse range of experiences and interactions with a location. This expansion is necessary for two primary reasons. Firstly, the concept of ‘local’ is subject to varying interpretations, which in turn influences the types of crowdsourced groups and their relative strengths in addressing different types of location-based queries. For instance, in several platforms discussed in the literature, priority for task assignments is often given to those spatially close to the location, such as recent visitors [10, 41], those about to pass by [31, 38], or those currently nearby [34, 62]. This approach typically emphasizes the immediacy of the visit experience or observation. However, when the definition of ‘local’ is focused on long-term residents [1], this kind of experiential background is deemed particularly valuable for responding to queries that demand a high degree of specificity. Hence, when prioritizing the recruitment of ‘locals’, it is critical to identify the particular types of location inquiries that benefit from a specific kind of local insight. Essentially, this becomes a question of which experiential backgrounds are most valued for addressing specific types of location-related questions.

Secondly, exclusively depending on ‘locals’ to source contributors may not guarantee the best responses. Our study findings indicate that for certain location queries, traditional ‘local’ perspectives may not be sufficient. For example, objective information was perceived in our study as better provided by individuals who frequently visit the location, rather than those who are merely spatially close or long-term residents. Similarly, regular visitors or even one-time tourists can offer unique viewpoints that may be absent among long-term residents. If these visitors have been to various similar places, they can provide fresh insights and comparative experiences that would be beneficial to platform users in certain scenarios. Therefore, we suggest that platforms broaden their focus from a strict ‘local’ orientation to encompass a more diverse array of experiences and interactions with the location. By placing greater emphasis on ‘experience,’ a platform can refine its task assignment strategy, aligning tasks with contributors based on various aspects of their interaction with and understanding of the location. Additionally, when presenting multiple responses to information seekers on the platform, it can prioritize or suitably display contributions from those whose experiential backgrounds are most relevant to the specific nature of the location queries. Adopting this expanded focus could serve as an effective strategy to attract both contributors and information seekers. It would signal that the platform welcomes anyone with relevant experience to answer specific location-related queries, and not just those who identify as ‘locals’. Logically, this would tend to increase the pool of potential respondents and result in more accurate and relevant contributions.

6.3 Design Implications

Our research findings have several design implications for platforms operating in the space of location-related mobile crowdsourcing. On that basis, this section proposes design directions aimed at improving the user experience of those seeking information and at making more effective use of input from contributors with diverse experience.

Our design recommendations can be divided into two primary strategies. First, our study underlines the potential benefits of customized task assignment and targeted application of contributors' experiential background to better match platforms' location-related information requests. Specifically, when task requests do not require real-time information, such as those seeking general assessments of POIs or scenery, we propose that platforms take into consideration the characteristics of the information being sought, the specific aspects of description quality that the information seekers are likely to emphasize and care about, as well as the possible experiential background of contributors that could be perceived as beneficial by such seekers. We propose that platforms implement a scoring system to evaluate potential contributors based on how well their experiential backgrounds align with the requested information. In this system, contributors would receive scores reflecting the suitability of their experience, and then be ranked and selected for tasks accordingly. By employing such strategies, platforms can more effectively match contributors to the specific needs of information requests, thereby enhancing the likelihood that requesters receive relevant and applicable answers. The results from our study could serve as a foundational reference for designing such a scoring mechanism. This approach may increase users' trust in the platform and their perception of its utility. However, for the scoring system to be effective, the platform should keep track of contributors' visit and mobility histories, or allow contributors to periodically report their experiences when responding to queries. This information can then be integrated into their profiles, ensuring that their scores in future tasks are as accurate and reflective of their real-world experiences as possible. This approach could be particularly advantageous for platforms that already push tasks to consumers' devices and seek feedback post-exposure to the information, e.g., Google Maps, Local Guides, and rewards-based applications.
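As a minimal sketch of such a scoring mechanism (in R, consistent with the analysis code above), the example below weights contributors' experiential-background attributes by per-information-type weights; the weights, profile values, and names are illustrative assumptions rather than values derived from the study.

```r
# Illustrative per-information-type weights over experiential-background aspects;
# in practice these could be informed by findings such as those in Table 3.
weights <- list(
  parking_availability = c(length_of_residence = 0.5, regularity = 0.3, quantity = 0.2),
  food_review          = c(variety = 0.4, engagement_in_commentary = 0.3,
                           quantity = 0.2, professional_relevance = 0.1)
)

# Hypothetical contributor profiles with aspect scores normalized to [0, 1].
contributors <- data.frame(
  id = c("c1", "c2", "c3"),
  length_of_residence = c(0.9, 0.2, 0.5),
  regularity = c(0.7, 0.9, 0.3),
  quantity = c(0.6, 0.8, 0.4),
  variety = c(0.3, 0.9, 0.6),
  engagement_in_commentary = c(0.2, 0.8, 0.5),
  professional_relevance = c(0.1, 0.4, 0.9)
)

# Rank contributors for a given information type by weighted experiential fit.
score_contributors <- function(contributors, info_type, top_n = 5) {
  w <- weights[[info_type]]
  contributors$score <- drop(as.matrix(contributors[names(w)]) %*% w)
  head(contributors[order(-contributors$score), ], top_n)
}

# Usage: shortlist candidates for a parking-availability query.
score_contributors(contributors, "parking_availability", top_n = 2)
```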

Second, beyond the task assignment phase, we recommend that platforms display the relevant aspects of a contributor's experiential background alongside the information they provide. Specifically, platforms should organize information based on the type of aspect being sought. Using restaurant reviews as an example, the information can be organized into categories such as food quality, ambiance, pricing, crowdedness, and product availability, among others. For each piece of content (e.g., a review), the platform could display indicators that emphasize the contributor's relevant experiences, especially those experiences that are considered valuable for that particular type of content. This organized presentation, along with clear indicators of the contributors' backgrounds, would assist users who focus on specific aspects of description quality, allowing them to find relevant information more efficiently. Additionally, it would help them to better determine the relevance and applicability of each piece of information in relation to their specific situations and requirements. Platform managers should also consider giving higher visibility and priority to information provided by contributors who have demonstrated relevant experiential background for addressing specific information needs. Such an approach would enable users to quickly find the most helpful descriptions from the most suitable contributors, thereby improving the overall efficiency of platform use.
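One lightweight way to operationalize such indicators is to map each content category to the experiential-background aspects found most valuable for it and to surface only those as badges next to a contribution; the category names and mapping below are illustrative assumptions, not prescriptions from the study.

```r
# Illustrative mapping from content category to the experiential-background
# aspects most worth highlighting next to a contribution of that category.
badge_map <- list(
  crowdedness = c("length_of_residence", "regularity"),
  food        = c("variety", "engagement_in_commentary"),
  scenery     = c("length_of_residence", "variety")
)

# Given a review's category and a contributor profile (a named list of counts,
# e.g., years of residence or number of similar places visited), return the
# badges worth displaying alongside the review.
badges_for <- function(category, profile) {
  aspects <- badge_map[[category]]
  aspects[vapply(aspects,
                 function(a) !is.null(profile[[a]]) && profile[[a]] > 0,
                 logical(1))]
}

# Usage: only aspects the contributor actually has on record are shown.
badges_for("food", list(variety = 12, engagement_in_commentary = 0,
                        length_of_residence = 4))
# -> "variety"
```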

In summary, our findings can serve as a valuable guide for platforms looking to implement these two strategies. As a reference encapsulating the results of this study, we have provided a summary table in the supplementary information that organizes types of location-related information, information attributes, aspects of description quality, and aspects of contributors' experiential background.

6.4 Limitations

The current study has several limitations that should be acknowledged. First, the range of information items we collected and summarized from prior literature and our exploratory interview study might not fully cover the diversity of information found on real location-related crowdsourcing platforms. This limits the generalizability of our findings. Moreover, all three phases of our study were conducted in our country. This geographic and cultural specificity raises questions about the generalizability of our results to other cultural contexts and regions of the world. Thus, future research should aim to collect a more comprehensive range of requested information from a wider variety of sources and in multiple world regions, to enhance the applicability of the results.

Second, our sample population had a gender imbalance, with a higher proportion of females (58.6% in the survey and 62.9% in the online study). Although this aligns with previous literature’s findings that women seem more motivated than men to seek online reviews [2, 36], it may limit the generalizability of our findings across genders. Finally, during the final phase of our study, we did not collect data about information attributes. This decision was made to avoid extending the length of the study excessively. However, it means that we did not capture the complete interrelationship of information attributes, description quality, and contributors’ experiential background. Future studies should aim to fill these gaps to arrive at a more comprehensive understanding.

7 CONCLUSION

In this paper, with the aims of 1) contributing to the location-based mobile-crowdsourcing literature and 2) improving location-based crowdsourcing platforms commonly used by consumers around the world to access location-related information, we conducted a three-phase study. This investigation involved semi-structured interviews with 22 participants, a survey study with 162 respondents, and an online study involving 307 participants.

By synthesizing the findings from these three phases, we have made several significant contributions. The most notable are 1) our identification of the aspects of experiential background that are universally valued across different types of location-related information, and 2) our revelation of the unique associations between specific types of location-related information, the most valued aspects of description quality, and the aspects of contributors’ experiential background that were perceived as enhancing the utility of contributors’ provided descriptions. These results underscore the potential of leveraging contributors’ experience as a key mechanism for matching requested information on location-related crowdsourcing platforms. By tailoring task assignment and/or contributor recruitment based on these insights, platforms should be able to enhance the effectiveness and accuracy of their information provision, ultimately improving the overall user experience.

Supplemental Material

Video Presentation (mp4, 44.5 MB)

References

1. Elena Agapie, Jaime Teevan, and Andrés Monroy-Hernández. 2015. Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3, 1, 2–11. https://doi.org/10.1609/hcomp.v3i1.13235
2. Soonyong Bae and Taesik Lee. 2011. Gender differences in consumers’ perception of online consumer reviews. Electronic Commerce Research 11, 2 (May 2011), 201–214. https://doi.org/10.1007/s10660-010-9072-y
3. Douglas Bates, Martin Maechler, Ben Bolker, Steven Walker, Rune Haubo Bojesen Christensen, Henrik Singmann, Bin Dai, Fabian Scheipl, Gabor Grothendieck, Peter Green, et al. 2009. Package ‘lme4’. URL http://lme4.r-forge.r-project.org (2009).
4. Ivo Blohm, Jan Marco Leimeister, and Helmut Krcmar. 2013. Crowdsourcing: How to benefit from (too) many great ideas. MIS Quarterly Executive 12, 4 (2013).
5. Fabian Bock, Sergio Di Martino, and Antonio Origlia. 2020. Smart Parking: Using a Crowd of Taxis to Sense On-Street Parking Space Availability. IEEE Transactions on Intelligent Transportation Systems 21, 2 (Feb 2020), 496–508. https://doi.org/10.1109/TITS.2019.2899149
6. Fabian Bock, Sergio Di Martino, and Monika Sester. 2016. What Are the Potentialities of Crowdsourcing for Dynamic Maps of On-Street Parking Spaces? In Proceedings of the 9th ACM SIGSPATIAL International Workshop on Computational Transportation Science (Burlingame, California) (IWCTS ’16). Association for Computing Machinery, New York, NY, USA, 19–24. https://doi.org/10.1145/3003965.3003973
7. Mark Boons and Daan Stam. 2019. Crowdsourcing for innovation: How related and unrelated perspectives interact to increase creative performance. Research Policy 48, 7 (2019), 1758–1770. https://doi.org/10.1016/j.respol.2019.04.005
8. Thierry Buecheler, Jan Henrik Sieg, Rudolf Marcel Füchslin, and Rolf Pfeifer. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: a research agenda and operational framework. In The 12th International Conference on the Synthesis and Simulation of Living Systems, Odense, Denmark, 19-23 August 2010. MIT Press, 679–686. https://doi.org/10.21256/zhaw-4094
9. Francesco Calabrese, Massimo Colonna, Piero Lovisolo, Dario Parata, and Carlo Ratti. 2011. Real-Time Urban Monitoring Using Cell Phones: A Case Study in Rome. IEEE Transactions on Intelligent Transportation Systems 12, 1 (March 2011), 141–151. https://doi.org/10.1109/TITS.2010.2074196
10. Yung-Ju Chang, Chu-Yuan Yang, Ying-Hsuan Kuo, Wen-Hao Cheng, Chun-Liang Yang, Fang-Yu Lin, I-Hui Yeh, Chih-Kuan Hsieh, Ching-Yu Hsieh, and Yu-Shuen Wang. 2020. Tourgether: Exploring Tourists’ Real-Time Sharing of Experiences as a Means of Encouraging Point-of-Interest Exploration. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 4, Article 128 (sep 2020), 25 pages. https://doi.org/10.1145/3369832
11. Peng Cheng, Xiang Lian, Zhao Chen, Rui Fu, Lei Chen, Jinsong Han, and Jizhong Zhao. 2014. Reliable diversity-based spatial crowdsourcing by moving workers. arXiv preprint arXiv:1412.0223 (2014).
12. Chia-En Chiang, Yu-Chun Chen, Fang-Yu Lin, Felicia Feng, Hao-An Wu, Hao-Ping Lee, Chang-Hsuan Yang, and Yung-Ju Chang. 2021. “I Got Some Free Time”: Investigating Task-Execution and Task-Effort Metrics in Mobile Crowdsourcing Tasks. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 648, 14 pages. https://doi.org/10.1145/3411764.3445477
13. Chih-Chi Chung, Yen-Chun Lin, Yu-Cheng Wang, Tze-Yu Chen, Chia-Yu Chen, Xinye Jiang, Fang-Yu Lin, Yu-Hao Weng, and Yung-Ju Chang. 2022. CAMPUS: A University Crowdsourcing Platform for Reporting Facility, Status Update, and Problem Area Information. In Companion Publication of the 2022 Conference on Computer Supported Cooperative Work and Social Computing (Virtual Event, Taiwan) (CSCW’22 Companion). Association for Computing Machinery, New York, NY, USA, 59–62. https://doi.org/10.1145/3500868.3559447
14. Sara Cohen and Moran Yashinski. 2017. Crowdsourcing with Diverse Groups of Users. In Proceedings of the 20th International Workshop on the Web and Databases (Chicago, IL, USA) (WebDB’17). Association for Computing Machinery, New York, NY, USA, 7–12. https://doi.org/10.1145/3068839.3068842
15. Jeffrey P. Cohn. 2008. Citizen Science: Can Volunteers Do Real Research? BioScience 58, 3 (03 2008), 192–197. https://doi.org/10.1641/B580303
16. David Dearman, Timothy Sohn, and Khai N. Truong. 2011. Opportunities Exist: Continuous Discovery of Places to Perform Activities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI ’11). Association for Computing Machinery, New York, NY, USA, 2429–2438. https://doi.org/10.1145/1978942.1979297
17. Y. F. Dong, S. Kanhere, C. T. Chou, and N. Bulusu. 2008. Automatic Collection of Fuel Prices from a Network of Mobile Cameras. In Distributed Computing in Sensor Systems. Springer Berlin Heidelberg, Berlin, Heidelberg, 140–156.
18. Ruth B. Ekstrom and Harry Horace Harman. 1976. Manual for kit of factor-referenced cognitive tests, 1976. Educational Testing Service.
19. Ge Gao, Yuling Sun, and Yongle Zhang. 2020. Engaging the Commons in Participatory Sensing: Practice, Problems, and Promise in the Context of Dockless Bikesharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376439
20. Kapil Garg, Yongsung Kim, Darren Gergle, and Haoqi Zhang. 2019. 4X: A Hybrid Approach for Scaffolding Data Collection and Interest in Low-Effort Participatory Sensing. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 90 (nov 2019), 28 pages. https://doi.org/10.1145/3359192
21. Jorge Goncalves, Michael Feldman, Subingqian Hu, Vassilis Kostakos, and Abraham Bernstein. 2017. Task Routing and Assignment in Crowdsourcing Based on Cognitive Abilities. In Proceedings of the 26th International Conference on World Wide Web Companion (Perth, Australia) (WWW ’17 Companion). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1023–1031. https://doi.org/10.1145/3041021.3055128
22. Kurtis Heimerl, Brian Gawalt, Kuang Chen, Tapan Parikh, and Björn Hartmann. 2012. CommunitySourcing: Engaging Local Crowds to Perform Expert Work via Physical Kiosks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, USA) (CHI ’12). Association for Computing Machinery, New York, NY, USA, 1539–1548. https://doi.org/10.1145/2207676.2208619
23. Danula Hettiachchi, Vassilis Kostakos, and Jorge Goncalves. 2022. A Survey on Task Assignment in Crowdsourcing. ACM Comput. Surv. 55, 3, Article 49 (feb 2022), 35 pages. https://doi.org/10.1145/3494522
24. Danula Hettiachchi, Niels van Berkel, Vassilis Kostakos, and Jorge Goncalves. 2020. CrowdCog: A Cognitive Skill Based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 110 (oct 2020), 22 pages. https://doi.org/10.1145/3415181
25. Jeff Howe. 2006. The rise of crowdsourcing. Wired Magazine 14, 6 (2006), 176–183.
26. Yen-Chia Hsu, Jennifer Cross, Paul Dille, Michael Tasota, Beatrice Dias, Randy Sargent, Ting-Hao (Kenneth) Huang, and Illah Nourbakhsh. 2020. Smell Pittsburgh: Engaging Community Citizen Science for Air Quality. ACM Trans. Interact. Intell. Syst. 10, 4, Article 32 (nov 2020), 49 pages. https://doi.org/10.1145/3369397
27. Yun Huang, Alain Shema, and Huichuan Xia. 2017. A proposed genome of mobile and situated crowdsourcing and its design implications for encouraging contributions. International Journal of Human-Computer Studies 102 (2017), 69–80. https://doi.org/10.1016/j.ijhcs.2016.08.004 Special Issue on Mobile and Situated Crowdsourcing.
28. Yun Huang, John Zimmerman, Anthony Tomasic, and Aaron Steinfeld. 2016. Combining Contribution Interactions to Increase Coverage in Mobile Participatory Sensing Systems. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (Florence, Italy) (MobileHCI ’16). Association for Computing Machinery, New York, NY, USA, 365–376. https://doi.org/10.1145/2935334.2935387
29. Kazushi Ikeda and Keiichiro Hoashi. 2017. Crowdsourcing GO: Effect of Worker Situation on Mobile Crowdsourcing Performance. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 1142–1153. https://doi.org/10.1145/3025453.3025917
30. Panagiotis G. Ipeirotis and Evgeniy Gabrilovich. 2014. Quizz: Targeted Crowdsourcing with a Billion (Potential) Users. In Proceedings of the 23rd International Conference on World Wide Web (Seoul, Korea) (WWW ’14). Association for Computing Machinery, New York, NY, USA, 143–154. https://doi.org/10.1145/2566486.2567988
31. Shenggong Ji, Yu Zheng, and Tianrui Li. 2016. Urban Sensing Based on Human Mobility. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Heidelberg, Germany) (UbiComp ’16). Association for Computing Machinery, New York, NY, USA, 1040–1051. https://doi.org/10.1145/2971648.2971735
32. Thivya Kandappu, Nikita Jaiman, Randy Tandriansyah, Archan Misra, Shih-Fen Cheng, Cen Chen, Hoong Chuin Lau, Deepthi Chander, and Koustuv Dasgupta. 2016. TASKer: Behavioral Insights via Campus-Based Experimental Mobile Crowd-Sourcing. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Heidelberg, Germany) (UbiComp ’16). Association for Computing Machinery, New York, NY, USA, 392–402. https://doi.org/10.1145/2971648.2971690
33. Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. 2011. Worker Types and Personality Traits in Crowdsourcing Relevance Labels. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (Glasgow, Scotland, UK) (CIKM ’11). Association for Computing Machinery, New York, NY, USA, 1941–1944. https://doi.org/10.1145/2063576.2063860
34. Leyla Kazemi and Cyrus Shahabi. 2012. GeoCrowd: Enabling Query Answering with Spatial Crowdsourcing. In Proceedings of the 20th International Conference on Advances in Geographic Information Systems (Redondo Beach, California) (SIGSPATIAL ’12). Association for Computing Machinery, New York, NY, USA, 189–198. https://doi.org/10.1145/2424321.2424346
35. Asif R. Khan and Hector Garcia-Molina. 2017. CrowdDQS: Dynamic Question Selection in Crowdsourcing Systems. In Proceedings of the 2017 ACM International Conference on Management of Data (Chicago, Illinois, USA) (SIGMOD ’17). Association for Computing Machinery, New York, NY, USA, 1447–1462. https://doi.org/10.1145/3035918.3064055
36. Ellen Eun Kyoo Kim, Anna S. Mattila, and Seyhmus Baloglu. 2011. Effects of Gender and Expertise on Consumers’ Motivation to Read Online Hotel Reviews. Cornell Hospitality Quarterly 52, 4 (Nov 2011), 399–406. https://doi.org/10.1177/1938965510394357
37. Yongsung Kim, Darren Gergle, and Haoqi Zhang. 2018. Hit-or-Wait: Coordinating Opportunistic Low-Effort Contributions to Achieve Global Outcomes in On-the-Go Crowdsourcing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3173670
38. Shin’ichi Konomi and Tomoyo Sasao. 2015. The Use of Colocation and Flow Networks in Mobile Crowdsourcing. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (Osaka, Japan) (UbiComp/ISWC’15 Adjunct). Association for Computing Machinery, New York, NY, USA, 1343–1348. https://doi.org/10.1145/2800835.2800967
39. Mattias Linnap and Andrew Rice. 2014. Managed Participatory Sensing with YouSense. Journal of Urban Technology 21, 2 (2014), 9–26. https://doi.org/10.1080/10630732.2014.888216
40. Xuan Liu, Meiyu Lu, Beng Chin Ooi, Yanyan Shen, Sai Wu, and Meihui Zhang. 2012. CDAS: A Crowdsourcing Data Analytics System. Proc. VLDB Endow. 5, 10 (jun 2012), 1040–1051. https://doi.org/10.14778/2336664.2336676
41. Yefeng Liu, Todorka Alexandrova, and Tatsuo Nakajima. 2013. Using Stranger as Sensors: Temporal and Geo-Sensitive Question Answering via Social Media. In Proceedings of the 22nd International Conference on World Wide Web (Rio de Janeiro, Brazil) (WWW ’13). Association for Computing Machinery, New York, NY, USA, 803–814. https://doi.org/10.1145/2488388.2488458
42. Lori A. Brainard and John G. McNutt. 2010. Virtual Government–Citizen Relations: Informational, Transactional, or Collaborative? Administration & Society 42, 7 (2010), 836–858. https://doi.org/10.1177/0095399710386308
43. Nicolas Maisonneuve, Matthias Stevens, and Bartek Ochab. 2010. Participatory Noise Pollution Monitoring Using Mobile Phones. Info. Pol. 15, 1-2 (apr 2010), 51–71.
44. Panagiotis Mavridis, David Gross-Amblard, and Zoltán Miklós. 2016. Using Hierarchical Skills for Optimized Task Assignment in Knowledge-Intensive Crowdsourcing. In Proceedings of the 25th International Conference on World Wide Web (Montréal, Québec, Canada) (WWW ’16). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 843–853. https://doi.org/10.1145/2872427.2883070
45. Michael F. Goodchild and J. Alan Glennon. 2010. Crowdsourcing geographic information for disaster response: a research frontier. International Journal of Digital Earth 3, 3 (2010), 231–241. https://doi.org/10.1080/17538941003759255
46. Aditi Misra, Aaron Gooze, Kari Watkins, Mariam Asad, and Christopher A. Le Dantec. 2014. Crowdsourcing and its application to transportation data collection and management. Transportation Research Record 2414, 1 (2014), 1–8. https://doi.org/10.3141/2414-01
47. Kaixiang Mo, Erheng Zhong, and Qiang Yang. 2013. Cross-Task Crowdsourcing. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Chicago, Illinois, USA) (KDD ’13). Association for Computing Machinery, New York, NY, USA, 677–685. https://doi.org/10.1145/2487575.2487593
48. Shigeya Morishita, Shogo Maenaka, Daichi Nagata, Morihiko Tamai, Keiichi Yasumoto, Toshinobu Fukukura, and Keita Sato. 2015. SakuraSensor: Quasi-Realtime Cherry-Lined Roads Detection through Participatory Video Sensing by Cars. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Osaka, Japan) (UbiComp ’15). Association for Computing Machinery, New York, NY, USA, 695–705. https://doi.org/10.1145/2750858.2804273
49. Stefanie Nowak and Stefan Rüger. 2010. How Reliable Are Annotations via Crowdsourcing: A Study about Inter-Annotator Agreement for Multi-Label Image Annotation. In Proceedings of the International Conference on Multimedia Information Retrieval (Philadelphia, Pennsylvania, USA) (MIR ’10). Association for Computing Machinery, New York, NY, USA, 557–566. https://doi.org/10.1145/1743384.1743478
50. Sangkeun Park, Sujin Kwon, and Uichin Lee. 2018. CampusWatch: Exploring Communitysourced Patrolling with Pervasive Mobile Technology. Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 134 (nov 2018), 25 pages. https://doi.org/10.1145/3274403
51. Eyal Peer, Joachim Vosgerau, and Alessandro Acquisti. 2014. Reputation as a sufficient condition for data quality on Amazon Mechanical Turk. Behavior Research Methods 46 (2014), 1023–1031. https://doi.org/10.3758/s13428-013-0434-y
52. Lingyun Qiu, Jun Pang, and Kai H. Lim. 2012. Effects of conflicting aggregated rating on eWOM review credibility and diagnosticity: The moderating role of review valence. Decision Support Systems 54, 1 (2012), 631–643. https://doi.org/10.1016/j.dss.2012.08.020
53. Rajib Kumar Rana, Chun Tung Chou, Salil S. Kanhere, Nirupama Bulusu, and Wen Hu. 2010. Ear-Phone: An End-to-End Participatory Urban Noise Mapping System. In Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (Stockholm, Sweden) (IPSN ’10). Association for Computing Machinery, New York, NY, USA, 105–116. https://doi.org/10.1145/1791212.1791226
54. Soo Young Rieh and Nicholas J. Belkin. 1998. Understanding judgment of information quality and cognitive authority in the WWW. In Proceedings of the 61st Annual Meeting of the American Society for Information Science, Vol. 35. 279–289.
55. Jeffrey M. Rzeszotarski and Aniket Kittur. 2011. Instrumenting the Crowd: Using Implicit Behavioral Measures to Predict Task Performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (Santa Barbara, California, USA) (UIST ’11). Association for Computing Machinery, New York, NY, USA, 13–22. https://doi.org/10.1145/2047196.2047199
56. Manaswi Saha, Michael Saugstad, Hanuma Teja Maddali, Aileen Zeng, Ryan Holland, Steven Bower, Aditya Dash, Sage Chen, Anthony Li, Kotaro Hara, and Jon Froehlich. 2019. Project Sidewalk: A Web-Based Crowdsourcing Tool for Collecting Sidewalk Accessibility Data At Scale. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300292
57. F. E. Satterthwaite. 1946. An Approximate Distribution of Estimates of Variance Components. Biometrics Bulletin 2, 6 (1946), 110–114. http://www.jstor.org/stable/3002019
58. Holger Schielzeth, Niels J. Dingemanse, Shinichi Nakagawa, David F. Westneat, Hassen Allegue, Céline Teplitsky, Denis Réale, Ned A. Dochtermann, László Zsolt Garamszegi, and Yimen G. Araya-Ajoy. 2020. Robustness of linear mixed-effects models to violations of distributional assumptions. Methods in Ecology and Evolution 11, 9 (2020), 1141–1152. https://doi.org/10.1111/2041-210X.13434
59. Guohou Shan, Lina Zhou, and Dongsong Zhang. 2021. From conflicts and confusion to doubts: Examining review inconsistency for fake review detection. Decision Support Systems 144 (2021), 113513. https://doi.org/10.1016/j.dss.2021.113513
60. Matthias Stevens and Ellie D’Hondt. 2010. Crowdsourcing of Pollution Data using Smartphones. In Proceedings of the Workshop on Ubiquitous Crowdsourcing, held at the ACM Conference on Ubiquitous Computing 2010 (UbiComp 2010), Copenhagen, Denmark, September 26-29, 2010. ACM.
61. James Surowiecki. 2005. The Wisdom of Crowds. Anchor.
62. Luan Tran, Hien To, Liyue Fan, and Cyrus Shahabi. 2018. A Real-Time Framework for Task Assignment in Hyperlocal Spatial Crowdsourcing. ACM Trans. Intell. Syst. Technol. 9, 3, Article 37 (jan 2018), 26 pages. https://doi.org/10.1145/3078853
63. Heli Väätäjä, Teija Vainio, Esa Sirkkunen, and Kari Salo. 2011. Crowdsourced News Reporting: Supporting News Content Creation with Mobile Phones. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (Stockholm, Sweden) (MobileHCI ’11). Association for Computing Machinery, New York, NY, USA, 435–444. https://doi.org/10.1145/2037373.2037438
64. Jennifer Wortman Vaughan. 2017. Making better use of the crowd: How crowdsourcing can advance machine learning research. The Journal of Machine Learning Research 18, 1 (2017), 7026–7071.
65. Maja Vukovic. 2009. Crowdsourcing for Enterprises. In 2009 Congress on Services - I. 686–692. https://doi.org/10.1109/SERVICES-I.2009.56
66. Chelsey Walden-Schreiner, Yu-Fai Leung, and Laura Tateosian. 2018. Digital footprints: Incorporating crowdsourced geographic information for protected area management. Applied Geography 90 (2018), 44–54. https://doi.org/10.1016/j.apgeog.2017.11.004
67. Ana Wang, Meirui Ren, Hailong Ma, Lichen Zhang, Peng Li, and Longjiang Guo. 2020. Maximizing user type diversity for task assignment in crowdsourcing. Journal of Combinatorial Optimization 40, 4 (2020), 1092–1120. https://doi.org/10.1007/s10878-020-00645-6
68. Junjie Wang, Ye Yang, Song Wang, Chunyang Chen, Dandan Wang, and Qing Wang. 2022. Context-Aware Personalized Crowdtesting Task Recommendation. IEEE Transactions on Software Engineering 48, 8 (Aug 2022), 3131–3144. https://doi.org/10.1109/TSE.2021.3081171
69. Stephan Winter and Nicole C. Krämer. 2014. A question of credibility – Effects of source cues and recommendations on information selection on news sites and blogs. Communications 39, 4 (2014), 435–456. https://doi.org/10.1515/commun-2014-0020
70. Dezhi Yin, Triparna de Vreede, Logan M. Steele, and Gert-Jan de Vreede. 2023. Decide now or later: making sense of incoherence across online reviews. Information Systems Research 34, 3 (2023), 1211–1227. https://doi.org/10.1287/isre.2022.1150
71. Yudian Zheng, Jiannan Wang, Guoliang Li, Reynold Cheng, and Jianhua Feng. 2015. QASCA: A Quality-Aware Task Assignment System for Crowdsourcing Applications. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (Melbourne, Victoria, Australia) (SIGMOD ’15). Association for Computing Machinery, New York, NY, USA, 1031–1046. https://doi.org/10.1145/2723372.2749430
72. John Zimmerman, Anthony Tomasic, Charles Garrod, Daisy Yoo, Chaya Hiruncharoenvate, Rafae Aziz, Nikhil Ravi Thiruvengadam, Yun Huang, and Aaron Steinfeld. 2011. Field Trial of Tiramisu: Crowd-Sourcing Bus Arrival Times to Spur Co-Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI ’11). Association for Computing Machinery, New York, NY, USA, 1677–1686. https://doi.org/10.1145/1978942.1979187
