Dr Tracie Farrell
Senior Research Fellow
Biography
Tracie Farrell is a Senior Research Fellow at the Open University and recipient of a UKRI Future Leaders Fellowship (Round 6). Her transdisciplinary, mixed-methods research explores the impact of Artificial Intelligence on society, including impacts on people, their communities, and the wider ecosystems and societies in which they live. Before her academic career, Tracie worked for 18 years in the non-formal education sector on issues related to human rights, gender, leadership and citizenship.
Projects
Unlearning AI (Extension to AI4J fellowship, AMS 1675774)
Artificial Intelligence (AI) is a powerful technology that utilises many (shared) resources and has far-reaching impacts on individuals, populations and our wider ecologies. We have to think carefully about its role, particularly in light of the emergence of Generative AI (GenAI) and the environmental costs of computing. It has historically been difficult to predict the impacts of any technology comprehensively; however, the factors that lead to harm remain largely unchanged. The original FLF proposal, "How can we create a more just society with AI?", focused on power as one of those factors: the mechanism by which any impact of Artificial Intelligence (good or bad) will be materially felt. Through the fellowship, we have discovered that certain ideas and assumptions have become hegemonic and act as barriers to innovation in AI research, with further-reaching consequences. For the renewal, we will look at three of these in more detail: one technical, one theoretical and one methodological.

The technical assumption is that giving AI more data and computing power will result in improved reasoning (the premise behind Large Reasoning Models, LRMs). Recently, researchers at Apple showed that LRMs "face a complete accuracy collapse beyond certain complexities": models failed to reason consistently or to apply the correct algorithms for certain tasks. In our team, we are exploring how our values and beliefs offer a lens for understanding complexity, for interpreting it in very specific and useful ways. Values allow us to prioritise, guide ethical judgement, frame problems, filter information, and resolve conflicts. When others (effective altruists, rationalists, etc.) discuss the "Alignment Problem", they tend to talk about values as necessary for determining the goodness of AI. We are suggesting that values may be necessary for AI to function past a certain threshold of complexity, and we will experiment with this in the upcoming renewal.

Related to this, there is a theoretical assumption that it will be possible to standardise values, arriving at a consensus on what AI should value that can then be applied universally. Our position, following the first part of the fellowship, is that plurality is inevitable and useful for innovation. Our team will explore the interconnection between subjectivity and objectivity in research, via Ludwik Fleck's concept of "Thought Collectives", to create new models of interoperability between distinct pieces of AI research.

Finally, we have observed changes to methodological approaches for qualitative research, in which the use of Generative AI is increasing while interpretative skill and insight are decreasing. As much of our project relies on qualitative data collected from marginalised groups, this is a serious concern. Because qualitative research is often interpretative, which we view as its strength, we will develop ways of maintaining subjectivity when using GenAI, and create the potential for it to scale a methodology whose analytical traditions have remained largely unchanged for 50 years. Our team aims to use curation in training: fine-tuning models to the interpretative lens of the researcher, so that portions of their study become reproducible at certain levels (a sketch of this idea follows below). This research is driven by a clear purpose (shifting power), executed with rigour and integrity (using all tools and methodologies available), and supported by resources (UKRI).
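As a minimal illustration of what "fine-tuning to a researcher's interpretative lens" could look like, the sketch below trains one lightweight text classifier per researcher on that researcher's own codings of the same excerpts, then compares where the resulting models diverge. All names, excerpts and codings are invented for illustration, and the simple TF-IDF pipeline stands in for whatever model the project actually fine-tunes; the point is only the structure: one curated training set per interpretative lens, with divergence between lenses treated as signal rather than noise.

```python
# Hypothetical sketch: one classifier per researcher's interpretative lens.
# Requires scikit-learn; data and codings are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

excerpts = [
    "They always talk over her in meetings.",
    "The schedule was changed without telling the night staff.",
    "He said the decision was final and walked out.",
    "Nobody asked the community before the system went live.",
]

# The same excerpts, coded independently (1 = "exercise of power") by two
# researchers whose interpretative lenses differ.
codings = {
    "researcher_a": [1, 0, 1, 1],  # also codes structural exclusion as power
    "researcher_b": [1, 0, 1, 0],  # codes only direct interpersonal acts
}

# Fit one model per lens on that researcher's curated codings.
models = {
    name: make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(excerpts, labels)
    for name, labels in codings.items()
}

new_excerpt = ["The form only offers two options for gender."]
for name, model in models.items():
    print(name, "->", model.predict(new_excerpt)[0])
# Where the per-lens models disagree, the disagreement is itself data: it
# marks passages where interpretation, not classification, is doing the work.
```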
In the last phases of the current fellowship, we are focusing on sharing our research in ways that matter (publications, but also public engagement via podcasting and post-disciplinary collaboration).
How can we create a more just society with A.I.?
Justice can be viewed as "objective" or as mediated through power [Chomsky & Foucault, 1971; Costanza-Chock, 2018]. Finding commonalities across different legal and ethical frameworks [Floridi & Cowls, 2019; Jobin et al., 2019] is an example of the former. In the latter, justice is a "requirement" for non-equitable societies, ensuring protection for the most harmed [Cugueró-Escofet & Fortin, 2014]. The difficulty in achieving this type of justice through A.I. is that A.I. is used primarily for classification and prediction [Vinuesa et al., 2020]. Growing evidence indicates that A.I. accelerates and compounds social bias, contributing to unequal distributions of power [O'Neil, 2016, p. 3; Noble, 2018; Benjamin, 2019]. "Trade-offs" in providing accurate and fair predictions also impact sub-populations disproportionately [Yu et al., 2020], meaning that people with multiple forms of marginalisation are more likely to be misunderstood by A.I. than those with normative characteristics [Costanza-Chock, 2018]. While there are legal and ethical frameworks that should govern the way we use A.I., minority voices are still under-represented [Buolamwini & Gebru, 2018; Costanza-Chock, 2018; Magalhães & Couldry, 2020] and there are few structures for enforcement and accountability [Mittelstadt, 2019]. We need to rethink how A.I. contributes to justice as a relational concept, one that includes dimensions of power and marginalisation.

This project draws together the cultural, technical, and socio-technical expertise necessary to extend our current notions of justice in empirical research on A.I. for social good (AI4SG). To start, a team of three researchers will develop a conceptual model of A.I. and "justice" that includes a) the different definitions of justice used to frame the tasks of A.I. and evaluate their efficacy, b) the questions that can be answered under each definition, and c) the trade-offs that are determined to be acceptable in the process. The research team will map scholarly literature from AI4SG to the ethical, legal or political frameworks that underpin the research, identifying gaps or conflicts in how justice is operationalised within AI4SG in comparison with other social justice models. In particular, we will explore the questions: are different positions on justice incompatible with A.I.? Can we identify new pathways for justice to emerge?

To extend our conceptual model, we will conduct three case studies in which minority interests are ignored within specific A.I. tasks: 1) non-binary people in gender-based analysis of sexism; 2) discriminatory deplatforming of sex workers or artists through content moderation; and 3) shadow-banning of activists as part of a counter-terrorism approach. The case studies will explore conflicts between these communities' concepts of justice and the A.I. task, and which alternative solutions exist. They will also contribute to the global problem of tackling online harm and using A.I. techniques to help identify and classify relevant cases.

Finally, to test alternative solutions, a multi-sectoral Advisory Board of A.I. and community experts will be brought together to create a design challenge for A.I. researchers. Issued through two workshops at top-level A.I. conferences, the challenge will be to prioritise marginalised perspectives. The outputs of the challenge and their evaluation will inform a set of guidelines for dealing with errors and trade-offs in AI4SG. Our contribution is to a) expose connections between how A.I. researchers define justice and which justice questions we attend to in AI4SG; b) reflect on which societies benefit from A.I.; and c) influence and inspire researchers to question the assumptions of A.I. research around acceptable trade-offs and errors. This research will bring together social scientists, community experts and A.I. researchers to explore what new lines of inquiry can be opened by focusing on maximising the benefits of A.I. for marginalised groups.
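To make the trade-off point above concrete, the short sketch below shows, with invented numbers, how a classifier with respectable overall accuracy can still concentrate its errors on a small sub-population. The data, group split and error rates are hypothetical, and the toy "abuse" classifier stands in for any AI4SG classification task discussed above.

```python
# Hypothetical illustration: aggregate accuracy can hide sub-group harm.
# All numbers are invented; "group 1" stands in for a marginalised sub-population.
import numpy as np

rng = np.random.default_rng(0)

# 1,000 posts: 900 from a majority group, 100 from a minority group.
group = np.array([0] * 900 + [1] * 100)
truth = rng.integers(0, 2, size=1000)      # 1 = genuinely abusive

# A toy classifier that errs rarely on the majority, often on the minority.
noise = np.where(group == 0, 0.05, 0.30)   # per-example error probability
pred = np.where(rng.random(1000) < noise, 1 - truth, truth)

print("overall accuracy:", (pred == truth).mean())
for g in (0, 1):
    mask = group == g
    fp = ((pred == 1) & (truth == 0) & mask).sum()   # wrongly flagged as abusive
    neg = ((truth == 0) & mask).sum()
    print(f"group {g}: accuracy={(pred == truth)[mask].mean():.2f}, "
          f"false-positive rate={fp / neg:.2f}")
# The aggregate looks acceptable, but the minority group is flagged in error
# far more often: the "trade-off" is not felt equally across sub-populations.
```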
Publications
Journal Article
Understanding AI and Power: Situated Perspectives from Global North and South Practitioners (2025)
Abuse in the time of COVID-19: the effects of Brexit, gender and partisanship (2024)
Mediating learning with learning analytics technology: guidelines for practice (2022)
Misogynoir: Challenges in Detecting Intersectional Hate (2022)
Decentralized Learning Infrastructures for Community Knowledge Building (2020)
Presentation / Conference
A Qualitative Study on Cultural Hegemony and the Impacts of AI (2024)
Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir (2023)
False Hopes in Automated Abuse Detection (Short Paper) (2023)
Understanding the Acceptance of Artificial Intelligence in Primary Care (2023)
MisinfoMe: A Tool for Longitudinal Assessment of Twitter Accounts’ Sharing of Misinformation (2023)
Understanding Misogynoir: A Study of Annotators’ Perspectives (2023)
Misogynoir: Public Online Response Towards Self-Reported Misogynoir (2021)
Agents for Fighting Misinformation Spread on Twitter: Design Challenges (2021)
On the use of Jargon and Word Embeddings to Explore Subculture within the Reddit’s Manosphere (2020)
Co-Spread of Misinformation and Fact-Checking Content during the Covid-19 Pandemic (2020)
Pathway to a Human-Values Based Approach to Tackle Misinformation Online (2020)
Challenging Misinformation: Exploring Limits and Approaches (2019)
Exploring Misogyny across the Manosphere in Reddit (2019)
Understanding the Role of Human Values in the Spread of Misinformation (2019)
A Microservice Infrastructure for Distributed Communities of Practice (2018)
Transferring a Question-Based Dialog Framework to a Distributed Architecture (2017)
Are you thinking what I'm thinking? Representing Metacognition with Question-based Dialogue (2016)
Thesis
Affordances of Learning Analytics for Mediating Learning (2018)