2020 Assembly Projects
THE 2020 ASSEMBLY COHORT came together to work on the challenge of disinformation on online platforms. This year’s cohort gathered 17 competitively selected fellows, coming primarily from the private sector, with additional representation from government and nonprofits. Each fellow participated as an individual, and not as a representative of their organization. Assembly Fellows conducted their work independently, with light-touch guidance from program advisors and staff.
This year, the cohort created five projects that tackle a range of problems. Disinfodex is an online database prototype that indexes public disclosures of disinformation campaigns by online platforms. Into the Voids analyzes data voids and offers a framework to evaluate the harms they pose. Coordinated Authentic Behavior collects cases of coordinated responses to disinformation and draws lessons for whole-of-society responses to future campaigns. Misinfo Motives explores motivations behind creating and sharing disinformation and offers a framework for designing interventions. And Semaphore is a prototype of a tool to help users publicly flag misinformation.
Read more below about the five projects developed during Assembly 2020.
DISINFODEX, developed by fellows with backgrounds in journalism, policy, and cybersecurity, is a prototype of a searchable database that indexes “public disclosures” of disinformation campaigns issued by online platforms. Upon discovering coordinated disinformation activity, platforms are increasingly sharing their findings with the public through blog posts and reports. Currently, the project captures disclosures from Facebook, Instagram, Google, YouTube, Twitter, and Reddit. By aggregating these resources in one searchable database, Disinfodex provides a way to analyze and interpret publicly available information about disinformation. An accompanying white paper details the process of building Disinfodex as an independent aggregator, explores the benefits of advancing a shared infrastructure in the field, and outlines plans to build upon the database.
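As a rough illustration of what indexing disclosures in a searchable database might look like, here is a minimal Python sketch. The record fields, entry titles, and search logic are invented for illustration and do not reflect Disinfodex’s actual schema or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    """One public disclosure of a disinformation campaign (fields are hypothetical)."""
    platform: str           # e.g. "Facebook", "Twitter"
    date: str               # publication date of the disclosure, ISO format
    title: str              # title of the blog post or report
    attributed_origin: str  # actor or region named in the disclosure, if any
    tags: list = field(default_factory=list)

def search(disclosures, query):
    """Return disclosures whose text fields contain the query, case-insensitively."""
    q = query.lower()
    return [d for d in disclosures
            if q in d.platform.lower()
            or q in d.title.lower()
            or q in d.attributed_origin.lower()
            or any(q in t.lower() for t in d.tags)]

# Made-up example entries, for illustration only:
db = [
    Disclosure("Twitter", "2020-06-11", "Disclosing state-linked information operations", "Region A"),
    Disclosure("Facebook", "2020-07-08", "Removing coordinated inauthentic behavior", "Region B"),
]
results = search(db, "twitter")
```

A real aggregator would add persistence and richer filtering (by platform, date range, or attribution), but the core value described above is the same: many scattered disclosures become one queryable corpus.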
INTO THE VOIDS analyzes data voids and offers a framework for conceptualizing the harms they pose. The risks posed by “data voids” (the absence of high-quality, authoritative sources in search results) have only recently been explored in the context of disinformation. The team, made up of fellows with expertise in design, policymaking, and human rights, created a harms framework to evaluate both existing and emerging data voids, and developed a data-driven methodology for mapping the lifecycle of data voids across Google search trends, Wikipedia page edits, and Mediacloud journalistic coverage. The methodology produces lifecycle maps that make it easy to identify the “spike” and “long tail” phases of data voids, and that highlight the role of mainstream media coverage and organic search in amplifying and countering disinformation narratives. The team hopes its work will add further clarity, structure, and empirical data to the conversation around data voids and their implications.
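To make the “spike” and “long tail” phases concrete, the sketch below labels points of an attention time series (for example, weekly search interest for a term) using a simple threshold heuristic. This is an illustrative assumption, not the team’s actual methodology: the thresholds, phase definitions, and example data are all invented.

```python
def classify_phases(series, spike_factor=3.0):
    """Label each point of an attention time series as 'baseline', 'spike',
    or 'long_tail'. Heuristic sketch: a spike is any value above
    spike_factor times the median; the long tail is any post-spike value
    still above the median."""
    med = sorted(series)[len(series) // 2]  # median of the series
    labels, seen_spike = [], False
    for v in series:
        if v > spike_factor * med:
            labels.append("spike")
            seen_spike = True
        elif seen_spike and v > med:
            labels.append("long_tail")
        else:
            labels.append("baseline")
    return labels

# Invented weekly-interest values: quiet baseline, sharp spike, slow decay.
labels = classify_phases([1, 1, 2, 10, 9, 3, 2, 1, 1])
```

Run against real Google Trends, Wikipedia edit, or Mediacloud counts, this kind of labeling is what lets a lifecycle map show where attention surged and how long elevated interest persisted afterward.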
COORDINATED AUTHENTIC BEHAVIOR collects cases of successfully mitigated disinformation and draws lessons for whole-of-society responses to future campaigns. Despite the documentation and journalistic coverage disinformation campaigns have received in recent years, there has been relatively little focus on successful responses to the problem that might serve as models. The team, consisting of fellows with expertise in policymaking, linguistics, and data science, produced case study analyses scrutinizing the work of journalists, governments, social media platforms, and civil society organizations in devising and executing effective responses to disinformation. Over the course of the fellowship, the team developed a prototype of a searchable database that collects examples of successful mitigation efforts, as well as a framework outlining potential strategies for countering disinformation campaigns.
MISINFO MOTIVES puts forward an initial framework to try to answer a complex question: why do some people create and share mis/disinformation? Most interventions against mis- and disinformation attempt to treat its symptoms, but don’t address the underlying incentives that encourage its creation and dissemination, in part because those incentives are often not well understood. The team, made up of professionals from the non-profit sector, government, and tech industry, worked to better understand the complex interplay between actors, motivations, and goals to bring more clarity to efforts aimed at countering mis- and disinformation. Over the course of the fellowship, the team developed a draft white paper that lays out a taxonomy of disinformation actors, identifies four major motivations behind the efforts of those actors, and maps existing interventions to those taxonomies.
SEMAPHORE is a beta-version browser extension that enables users to share data about instances in which they flag inauthentic content through the “report an issue” feature on platforms; the tool is currently available for use on Twitter. Disinformation researchers are limited by their lack of access to data under the control of platforms, including records of takedowns and copies of removed content (indeed, platforms themselves frequently don’t retain the latter). Developed by technologists and a human rights expert, Semaphore springs into action when a user with the extension installed flags a piece of content, capturing both the flagged content and its related metadata. That information can be archived in a database for future study and analysis, effectively creating an alternative means of data access for researchers. The team is interested in building upon Semaphore to allow users to share their data with interested parties like research communities. The team hopes that by enhancing the data accessible to experts through a crowdsourced data-donation and sharing model, they can enable more people to take part in the fight against disinformation.
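The capture-and-archive flow described above can be sketched in a few lines. The actual extension runs in the browser; this Python sketch only illustrates the data flow, and every field name and function here is invented for illustration rather than taken from Semaphore’s codebase.

```python
import datetime

archive = []  # stand-in for the shared research database

def capture_flag_event(post):
    """Build an archival record from a flagged post. Capturing a copy of the
    text means the record survives even if the platform later removes the post."""
    return {
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "platform": "twitter",
        "post_id": post["id"],
        "author": post["author"],
        "text": post["text"],
    }

def on_user_flag(post):
    """Called when a user with the extension flags content via 'report an issue'."""
    archive.append(capture_flag_event(post))

# Simulated flag event with a made-up post:
on_user_flag({"id": "123", "author": "@example", "text": "a dubious claim"})
```

The key design point is that the record is created at flag time, client-side, so researchers get a copy of the content and its metadata without depending on platforms to preserve or release it.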