Xenophobia Meter Project
Xenophobia is a pervasive problem that exists in many societies today. It is an irrational fear and hatred of people from other countries and cultures that often leads to discrimination, prejudice, and racism. Xenophobia can take many forms, including hateful language and acts of violence against individuals and groups.
Xenophobia has surged around the world in recent years due to economic instability, political tensions, social media, and other factors. Xenophobic attitudes and behaviors often target immigrants and refugees, who are irrationally seen as a threat to local jobs and culture, as well as to humanitarian efforts globally. Such attitudes can lead to exclusion and marginalization and make it difficult for individuals and communities to fully integrate into society. To ensure a more inclusive and just society for all, it is critical to address and combat xenophobia through education, awareness-raising, and policy change.
The Xenophobia Meter Project (XMP) aims to track anti-immigrant hate speech. It includes the Xenophobia Meter, a labeling procedure, and a labeled dataset of Twitter (X) data. Read more about the project in the Cornell Chronicle.
The Meter
The Meter is a tool designed to help classify social media data on a detailed scale of expressions of attitudes toward foreigners and those perceived as foreigners. Unlike binary classifiers, the Meter is a 7-item scale that captures an ordinal range of attitudes, from “Very Xenophobic” to “Very Pro-Foreigner”, offering nuanced insight into language use as it relates to foreigners, immigrants, and immigration policy. It includes both positive and negative sentiments in order to highlight both problematic xenophobic speech and the possibility of allyship, equity, and inclusion. Each of the 7 categories includes specific reasoning criteria and examples, which are useful for understanding the nuances of xenophobic and pro-foreigner language. Through our Xenophobia Meter, researchers and advocates can gain a deeper understanding of attitudes toward foreigners online and take steps toward creating a safer and more welcoming space for all.
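For analysis, the 7-item scale can be encoded numerically. The sketch below shows one possible encoding; only the two endpoint labels come from the Meter itself, while the symmetric -3 to +3 coding and the intermediate and midpoint names are illustrative assumptions, not the Meter's official wording.

```python
# One possible numeric encoding of the 7-item ordinal scale for downstream analysis.
# Only the two endpoint labels are named in the Meter; the -3..+3 coding and the
# intermediate/midpoint labels here are illustrative assumptions.
XENOPHOBIA_SCALE = {
    -3: "Very Xenophobic",         # endpoint named in the Meter
    -2: "Xenophobic",              # assumed intermediate label
    -1: "Somewhat Xenophobic",     # assumed intermediate label
     0: "Neutral / Not Relevant",  # assumed midpoint label
     1: "Somewhat Pro-Foreigner",  # assumed intermediate label
     2: "Pro-Foreigner",           # assumed intermediate label
     3: "Very Pro-Foreigner",      # endpoint named in the Meter
}
```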
The Meter is freely available to the public to use and modify for your own needs, language, country, and online content. We welcome proposals to collaborate on translating the Meter into other languages and modifying it to be relevant in other contexts. Please cite it as the Cornell University Xenophobia Meter.
Labeling
To use the Meter on a dataset, start by defining the distinct pieces of content to analyze: sentences, posts, images, or other units of analysis. For each piece, examine its content and decide the degree to which it fits any of the reasoning criteria outlined in the Meter. In the case of Twitter (X) data, our labelers reviewed each tweet and any accompanying media and thumbnails, and followed links provided in the tweet. Once you find a reasoning criterion that matches the content, assign the corresponding rating to that content piece, indicating the rationale for the chosen rating.
Learning to use the Meter and apply it to a dataset requires training. We suggest training with content that is highly relevant to foreigners or immigrants and that unambiguously displays one of the 7 categories with clear reasoning. Over time, human labelers become familiar with xenophobic and anti-xenophobic language and more aware of current events, politics, and immigration-related policies. This helps labelers become more confident in their labeling decisions and better able to accurately identify nuanced language that demonstrates xenophobic and anti-xenophobic speech.
To ensure the reliability of the labeled content, we recommend assigning at least two labelers to each piece of content and then computing inter-rater reliability. In XMP, we also held weekly calibration meetings, in which labelers discussed cases where they felt unsure about their ratings and where multiple labelers disagreed by an absolute difference of 2 or more. The discussions covered U.S. policies, nuanced language use, and possible political motives related to a tweet and its language. After the discussion, a labeler could change their rating.
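As a rough illustration of these reliability checks, the sketch below computes one common inter-rater statistic (quadratically weighted Cohen's kappa, our illustrative choice rather than necessarily the statistic XMP reports) and flags items where two labelers differ by 2 or more points, using the illustrative numeric coding from the sketch above.

```python
# A sketch of the reliability checks described above, assuming numeric ratings on
# the 7-point scale and exactly two labelers per item. Quadratically weighted
# Cohen's kappa is an illustrative choice of statistic.
from sklearn.metrics import cohen_kappa_score

def reliability_report(ratings_a, ratings_b, disagreement_threshold=2):
    """Compute inter-rater reliability and flag items for calibration discussion."""
    # Weighted kappa respects the ordinal scale: a 1-point disagreement is
    # penalized less than a 3-point disagreement.
    kappa = cohen_kappa_score(ratings_a, ratings_b, weights="quadratic")

    # Flag items where the two labelers differ by 2 or more points, mirroring
    # the cases brought to the weekly calibration meetings.
    flagged = [
        i for i, (a, b) in enumerate(zip(ratings_a, ratings_b))
        if abs(a - b) >= disagreement_threshold
    ]
    return kappa, flagged

# Example: ratings from two hypothetical labelers for five tweets.
kappa, flagged = reliability_report([0, -2, 1, 3, 0], [0, -3, 1, 1, 0])
print(f"Weighted kappa: {kappa:.2f}; items to discuss: {flagged}")
```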
The Dataset
The dataset is our application of the Xenophobia Meter to Twitter data. Since one of our goals is to highlight the accountability of public actors who bear legal or moral responsibility for their speech, we focus on discourse by figures and entities such as journalists, government agencies, and non-profit organizations with verified Twitter accounts that produce a high volume of social media content related to immigration and immigrant communities.
The dataset contains tweets from 11 verified accounts, collected in September 2020, of which 7,500 tweets are each labeled by at least 2 human labelers. The dataset is freely available in order to open our process to comment and feedback, and to promote awareness, research, and advocacy for a more inclusive society. A sketch of loading the labeled data follows the list below. The accounts we collected tweets from are:
- AAAJ_AAJC: A non-profit organization advocating for Asian American communities.
- AILANational: The American Immigration Lawyers Association.
- BAJItweet: Black Alliance for Just Immigration.
- BreitbartNews: A far-right news network.
- FAIRImmigration: Federation for American Immigration Reform.
- ICEgov: U.S. Immigration and Customs Enforcement.
- IngrahamAngle: The Ingraham Angle, a political talk show on Fox News Channel.
- Splcenter: Southern Poverty Law Center.
- StatePRM: U.S. Bureau of Population, Refugees, and Migration.
- TuckerCarlson: Host of Tucker Carlson Tonight on Fox News Channel.
- UNHCRUSA: U.S. chapter of the United Nations High Commissioner for Refugees.
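As referenced above, here is a minimal sketch of loading and summarizing the labeled tweets, assuming the dataset is distributed as a CSV file. The file name and the column names (`account`, `rating_1`, `rating_2`) are hypothetical placeholders, not the dataset's actual schema.

```python
# A minimal sketch of loading and summarizing the labeled tweets, assuming a CSV
# distribution. The file name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("xmp_labeled_tweets.csv")  # hypothetical file name

# Average the (at least) two human ratings per tweet into a single score.
df["mean_rating"] = df[["rating_1", "rating_2"]].mean(axis=1)

# Number of labeled tweets and summary of mean ratings, per account.
print(df.groupby("account")["mean_rating"].describe())
```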
Our Findings
Across the 11 accounts in the dataset, the majority of tweets are neutral, with most of those rated as “not relevant” to foreigners or immigrants. More tweets fall on the pro-foreigner side than on the anti-foreigner side.
Each account presents a different distribution of average tweet ratings. These distributions correspond with our expectations given each account's U.S. partisan perspective, interests, and activities.
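A per-account breakdown of this kind could be tabulated roughly as in the sketch below, which reuses the hypothetical `df` (with its `mean_rating` column) from the loading sketch and assumes, for illustration, that negative mean ratings indicate the anti-foreigner side, positive ratings the pro-foreigner side, and zero neutral.

```python
# Tabulate the share of tweets falling on each side of the scale, per account.
# Reuses the hypothetical `df` from the loading sketch above; the sign convention
# is an illustrative assumption.
import numpy as np
import pandas as pd

df["side"] = np.select(
    [df["mean_rating"] < 0, df["mean_rating"] > 0],
    ["anti-foreigner", "pro-foreigner"],
    default="neutral",
)

print(pd.crosstab(df["account"], df["side"], normalize="index"))
```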
For more information about the Xenophobia Meter Project, the research, and our findings, check out our ICWSM 2024 paper (forthcoming).
Collaborators
This research was carried out by Beth Lyon (Cornell Law School), Gilly Leshed (Cornell Information Science), Khonzoda Umarova (Computer Science), Oluchi Okorafor (Information Science), Pinxian Lu (Information Science), Jialin (Sophia) Shan (Information Science), Alex Xu (Information Science), Ray Zhou (Information Science), and Jennifer Otiono (Information Science).
Funding
The Xenophobia Meter Project was launched with seed awards from Global Cornell’s Mario Einaudi Center for International Studies and Migrations initiative, with support from a Just Futures partnership with the Andrew W. Mellon Foundation.