Published January 15, 2020 | Version 1.0
Dataset | Open Access

Stance in Replies and Quotes (SRQ): A New Dataset For Learning Stance in Twitter Conversations

  • Carnegie Mellon University

Description

Automated ways to extract stance (denying vs. supporting opinions) from conversations on social media are essential to advance opinion mining research. Recently, there has been renewed excitement in the field, with new models attempting to improve the state of the art. However, the datasets used to train and evaluate these models are often small. Additionally, these small datasets have uneven class distributions: only a tiny fraction of the examples carry a clear favoring or denying stance, while most express no clear stance at all. Moreover, the existing datasets do not distinguish between the different types of conversations on social media (e.g., replying vs. quoting on Twitter). As a result, models trained on one event do not generalize well to other events.

In this work, we create a new dataset by labeling stance in responses to posts on Twitter (both replies and quotes) on controversial issues. To the best of our knowledge, this is currently the largest human-labeled stance dataset for Twitter conversations, with over 5,200 stance labels. More importantly, we designed a tweet collection methodology that favours the selection of denial-type responses. This class is expected to be more useful for identifying rumours and determining antagonistic relationships between users.
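For readers who want to inspect the label distribution after downloading, the following is a minimal sketch in Python. It assumes the stance annotations are distributed as a CSV file; the file name (srq_stance_labels.csv), column names (response_type, label), and label strings are hypothetical placeholders and should be adjusted to match the actual files in this record.

    # Minimal sketch for inspecting the SRQ stance labels after download.
    # NOTE: the file name, column names, and label values below are
    # hypothetical placeholders; adjust them to the distributed files.
    import pandas as pd

    df = pd.read_csv("srq_stance_labels.csv")  # hypothetical file name

    # Tally stance labels separately for replies and quotes.
    counts = df.groupby(["response_type", "label"]).size().unstack(fill_value=0)
    print(counts)

    # Share of denial-type responses, the class the collection methodology favours.
    denial_share = df["label"].str.contains("denial", case=False).mean()
    print(f"Denial-type share: {denial_share:.2%}")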

Files (69.5 MB)

Name                 Size     md5
event_universe.zip   68.5 MB  8f6445c5605d9991ed7934830ec602f7
—                    1.0 MB   a771acc486a7e89587cb4a03174449c1