2019-2020 Assembly Student Projects
THE 2020 ASSEMBLY STUDENT FELLOWSHIP COHORT came together to study and tackle disinformation from a cybersecurity perspective. Over the course of the academic year, student fellows learned from experts in the field, including Professor Jonathan Zittrain and Dr. Joan Donovan. The cohort also participated in team building, ideation activities, and project development.
This year's projects tackle a range of problems. Team CDA 230 developed an interactive explainer for Section 230 of the Communications Decency Act. Team Politics researched disinformation campaigns that target minority political candidates. The Disinformation Literacy Team created the basis for an infographic that teaches first-time voters how to identify inauthentic content online. Finally, the Taxonomy of COVID-19 Disinformation team developed a framework for a taxonomy of COVID-19 disinformation and a map of stakeholder efforts to mitigate the infodemic.
Read more below about the four student projects developed during Assembly 2019-2020.
TEAM POLITICS focuses on the experiences of underrepresented minority politicians and candidates online. While all politicians must deal with some degree of disinformation about their campaigns, minority politicians often face disinformation that intersects with harassment: disinformation about these candidates frequently mobilizes racial stereotypes, gendered language, and xenophobia alongside false information. Through research and conversations with candidates, we learned that while social media is critical to running a successful grassroots campaign, platform content policies fail to adequately protect this group. Candidates feel unprepared to campaign online and are uncertain about what protections they have. To better understand how platforms treat this intersection of harassment and misinformation, we performed a content policy audit of the six major platforms candidates use to campaign: Facebook, Instagram, Twitter, LinkedIn, YouTube and Medium. Our evaluation revealed inconsistent standards of protection for public figures, inconsistent definitions of harassment, and inconsistent stances on combating disinformation.
Surveillance State of the Union is a data visualization and set of short illustrative cases that seeks to raise awareness among tech workers, academics, military decision-makers, and journalists about the risks of pursuing surveillance-related work in AI. Work that a researcher may regard as theoretical has very real consequences for people subjected to state surveillance, as evidenced by the suppression of the Uyghur minority in China's Xinjiang province and of other marginalized communities around the world.
The project leveraged a variety of data sources, such as government contracts, co-authored papers, and public releases, to begin mapping the surveillance research network. The work shows, for example, overlap between universities collaborating on US state-funded surveillance research and similar research by Chinese companies implicated in Xinjiang.
Watch Your Words examines the expansion of Natural Language Processing / Natural Language Understanding systems. More and more often, people are asked to interact with these systems in order to access education, job markets, customer service, medical care, and government services. Without active attention, biases encoded in written language will be reinforced, extended, and perpetuated in these systems, resulting in multiple types of harm to vulnerable populations.
Because discussion of bias needs to move beyond the machine-learning community to include developers who build applications based on "off-the-shelf" models, Watch Your Words will present evidence of these biases, explore approaches to raise awareness of bias, define harms visited on vulnerable groups, and suggest approaches for bias mitigation.
AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems.
Organizations often lack a framework for addressing bias across the full AI pipeline: audit tools tend to focus on specific parts of a system rather than on how it is conceptualized, built, and deployed end to end, which can lead to unintended consequences. AI Blindspot is a discovery process that helps AI developers and teams evaluate and audit their systems at each of these stages. We produced a set of printed and digital prompt cards to help teams identify and address potential blindspots.