© 2019 by Assembly.

    "Unbiased" data and ML systems are a risky fiction; there is no view from nowhere. The Kaleidoscope: Positionality-Aware Machine Learning project explores the development of positionality-aware ML/AI systems.

    ML/AI systems are trained on data, and classification systems enable data set creation and curation. Classification systems are, simply put, sets of boxes into which things can be put (e.g., the International Classification of Diseases, ICD). In designing such a system, one decides what can and will be visible in data sets. Context shapes these decisions (e.g., the discovery of HIV required changes to the ICD). Classification systems are informed by the perspectives, experiences, and knowledge of their creators. As such, categories are data, too, and classification systems have positionality, an inherited perspective.
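    The way a classification scheme decides what is visible can be sketched in a few lines of Python. The disease codes below are invented for illustration, not actual ICD entries:

```python
# A classification system is a fixed set of boxes: anything that
# does not fit an existing box collapses into a residual category
# and effectively disappears from the data set.
CODES = {
    "tuberculosis": "A15",  # invented codes, for illustration only
    "influenza": "J10",
}

def classify(diagnosis: str) -> str:
    """Map a free-text diagnosis onto the scheme's boxes."""
    return CODES.get(diagnosis.lower(), "R99")  # "R99" stands in for "other/ill-defined"

classify("influenza")    # lands in a known box
classify("novel virus")  # collapses to the residual code
```

    Until the scheme's creators revise it, as the ICD was revised after the discovery of HIV, records of an unanticipated condition are indistinguishable from "other".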

    Surveillance State of the Union

    Surveillance State of the Union is a data visualization and set of short illustrative cases that seeks to raise awareness among tech workers, academics, military decision-makers, and journalists about the risks of pursuing surveillance-related work in AI. Work that may, to a researcher, seem purely theoretical has very real consequences for people subjected to state surveillance, as evidenced in the suppression of the Uyghur minority in China's Xinjiang province and of other marginalized communities around the world.

    The project leveraged a variety of data sources such as government contracts, co-authored papers, and public releases to begin to map the surveillance research network. The work shows, for example, overlap between universities collaborating on US state-funded surveillance research and similar research by Chinese companies implicated in Xinjiang.
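    The core of that mapping step can be sketched with plain Python sets. The institution names below are invented placeholders, not findings from the project's actual data sources:

```python
# Institutions named in each (hypothetical) data source.
us_funded_research = {"University A", "University B", "University C"}
collaborations_with_flagged_firms = {"University B", "University C", "University D"}

# The overlap is simply the set intersection: institutions that
# appear in both networks and so link the two research efforts.
overlap = us_funded_research & collaborations_with_flagged_firms
```

    The real analysis is more involved, since contracts, co-authorships, and press releases name institutions inconsistently and must be reconciled before any intersection is meaningful, but the overlap it visualizes reduces to this operation.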

    Watch Your Words examines the expansion of Natural Language Processing / Natural Language Understanding systems. More and more often, people are being asked to interact with these systems in order to access education, job markets, customer service, medical care, and government services. Without active attention, biases encoded in written language will be reinforced, extended, and perpetuated in these systems, resulting in multiple types of harm to vulnerable populations.

    Because discussion of bias needs to move beyond the machine-learning community to include developers who build applications based on "off-the-shelf" models, Watch Your Words will present evidence of these biases, explore approaches to raise awareness of bias, define harms visited on vulnerable groups, and suggest approaches for bias mitigation.
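    One common way such biases surface in "off-the-shelf" models is through word embeddings: the association between an occupation word and group terms can be probed with cosine similarity. The tiny three-dimensional vectors below are invented for illustration and stand in for a real model's embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented embeddings, standing in for vectors from a pretrained model.
emb = {
    "engineer": [0.9, 0.1, 0.0],
    "he":       [0.8, 0.2, 0.1],
    "she":      [0.1, 0.9, 0.2],
}

# A large gap suggests the embedding associates the occupation
# more strongly with one group term than the other.
gap = cosine(emb["engineer"], emb["he"]) - cosine(emb["engineer"], emb["she"])
```

    An application developer who ranks resumes or routes support tickets with such a model inherits whatever associations the pretrained vectors carry, which is why auditing needs to reach beyond the teams that train the models.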

    AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems.

    Many organizations lack a framework for preventing, detecting, and mitigating bias in AI systems. Audit tools often focus on specific parts of a system rather than the entire AI pipeline, which can lead to unintended consequences. AI Blindspot is a discovery process to help AI developers and teams evaluate and audit how their systems are conceptualized, built, and deployed. We produced a set of printed and digital prompt cards to help teams identify and address potential blindspots.