
Dr Tracie Farrell

Senior Research Fellow

Knowledge Media Institute

tracie.farrell@open.ac.uk


Biography

Tracie Farrell is a Senior Research Fellow at the Open University and recipient of the UKRI Future Leaders Fellowship (Round 6). Her transdisciplinary, mixed-methods research explores the impact of Artificial Intelligence on people, their communities, and the wider ecosystems and societies in which they live. Before her academic career, Tracie worked for 18 years in the non-formal education sector on issues related to human rights, gender, leadership and citizenship.

Projects

How can we create a more just society with A.I.?

Justice can be viewed as "objective" or as mediated through power [Chomsky & Foucault, 1971; Costanza-Chock, 2018]. Finding commonalities across different legal and ethical frameworks [Floridi & Cowls, 2019; Jobin et al., 2019] is an example of the former. In the latter view, justice is a "requirement" for non-equitable societies, ensuring protection for those most harmed [Cugueró-Escofet & Fortin, 2014]. The difficulty in achieving this type of justice through A.I. is that A.I. is used primarily for classification and prediction [Vinuesa et al., 2020]. Growing evidence indicates that A.I. accelerates and compounds social bias, contributing to unequal distributions of power [O'Neil, 2016, p. 3; Noble, 2018; Benjamin]. "Trade-offs" in providing accurate and fair predictions also impact sub-populations disproportionately [Yu et al., 2020], meaning that people with multiple forms of marginalisation are more likely to be misunderstood by A.I. than those with normative characteristics [Costanza-Chock, 2018] (see the illustrative sketch after this description). While there are legal and ethical frameworks that should govern the way we use A.I., minority voices remain under-represented [Buolamwini & Gebru, 2018; Costanza-Chock, 2018; Magalhães & Couldry, 2020] and there are few structures for enforcement and accountability [Mittelstadt, 2019]. We need to rethink how A.I. contributes to justice as a relational concept, one that includes dimensions of power and marginalisation.

This project draws together the cultural, technical, and socio-technical expertise necessary to extend our current notions of justice in empirical research on A.I. for social good (AI4SG). To start, a team of three researchers will develop a conceptual model of A.I. and "justice" that includes: a) the different definitions of justice used to frame A.I. tasks and evaluate their efficacy; b) the questions that can be answered under each definition; and c) the trade-offs determined to be acceptable in the process. The research team will map scholarly literature from AI4SG to the ethical, legal or political frameworks that underpin the research, identifying gaps or conflicts in how justice is operationalised within AI4SG compared with other social justice models. In particular, we will explore two questions: are different positions on justice incompatible with A.I., and can we identify new pathways for justice to emerge?

To extend the conceptual model, we will conduct three case studies in which minority interests are ignored within specific A.I. tasks: 1) non-binary people in gender-based analysis of sexism; 2) discriminatory deplatforming of sex workers and artists through content moderation; and 3) shadow-banning of activists as part of a counter-terrorism approach. The case studies will explore the conflicts between these communities' concepts of justice and the A.I. task, and which alternative solutions exist. They will also contribute to the global problem of tackling online harm, using A.I. techniques to help identify and classify relevant cases.

Finally, to test alternative solutions, a multi-sectoral Advisory Board of A.I. and community experts will be brought together to create a design challenge for A.I. researchers. Issued through two workshops at top-level A.I. conferences, the challenge will be to prioritise marginalised perspectives. The outputs of the challenge and their evaluation will inform a set of guidelines for dealing with errors and trade-offs in AI4SG. Our contribution is to: a) expose connections between how A.I. researchers define justice and which justice questions we attend to in AI4SG; b) reflect on which societies benefit from A.I.; and c) influence and inspire researchers to question the assumptions of A.I. research around acceptable trade-offs and errors. This research will bring together social scientists, community experts and A.I. researchers to explore what new lines of inquiry can be opened by focusing on maximising the benefits of A.I. for marginalised groups.
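The disparate error rates described above are easy to make concrete. The following Python sketch is purely illustrative and not part of the project: the data and the hypothetical abuse classifier's predictions are invented to show how a tolerable-looking aggregate accuracy figure can conceal a much higher false positive rate for a marginalised subgroup.

```python
# Purely illustrative: invented data for a hypothetical abuse classifier,
# showing how aggregate accuracy can hide disparate subgroup error rates.
from collections import defaultdict

# Each record: (group, true_label, predicted_label); 1 = "abusive".
records = [
    # Majority group: one error in ten posts.
    *[("majority", 0, 0)] * 7, ("majority", 0, 1),
    *[("majority", 1, 1)] * 2,
    # Marginalised group: benign posts (e.g. reclaimed in-group
    # language) are frequently misclassified as abusive.
    *[("marginalised", 0, 1)] * 4, *[("marginalised", 0, 0)] * 4,
    ("marginalised", 1, 1), ("marginalised", 1, 0),
]

def false_positive_rate(rows):
    """Share of truly benign posts (label 0) flagged as abusive."""
    benign = [predicted for _, true, predicted in rows if true == 0]
    return sum(benign) / len(benign)

by_group = defaultdict(list)
for record in records:
    by_group[record[0]].append(record)

accuracy = sum(true == predicted for _, true, predicted in records) / len(records)
print(f"overall accuracy: {accuracy:.0%}")  # 70% across both groups
for group, rows in by_group.items():
    print(f"{group} false positive rate: {false_positive_rate(rows):.1%}")
# majority: 12.5%  vs  marginalised: 50.0%
```

On these invented numbers the classifier reports 70% overall accuracy, yet it wrongly flags 50% of the marginalised group's benign posts against 12.5% for the majority group: exactly the kind of error and trade-off the proposed guidelines would address.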

Publications

Digital Artefact

Elon Musk could roll back social media moderation – just as we’re learning how it can stop misinformation (2022)

Augmented Reality in Activism: Go March (2018)

Journal Article

Co-creating an equality diversity and inclusion learning analytics dashboard for addressing awarding gaps in higher education (2024)

Abuse in the time of COVID-19: the effects of Brexit, gender and partisanship (2024)

Mediating learning with learning analytics technology: guidelines for practice (2022)

Experience of Health Professionals with Misinformation and Its Impact on Their Job Practice: Qualitative Interview Study (2022)

Misogynoir: Challenges in Detecting Intersectional Hate (2022)

Demographics and topics impact on the co-spread of COVID-19 misinformation and fact-checks on Twitter (2021)

Decentralized Learning Infrastructures for Community Knowledge Building (2020)

Scaffolding Reflection: Prompting Social Constructive Metacognitive Activity in Non-Formal Learning (2017)

Presentation / Conference

A Qualitative Study on Cultural Hegemony and the Impacts of AI (2024)

Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir (2023)

False Hopes in Automated Abuse Detection (Short Paper) (2023)

Understanding the Acceptance of Artificial Intelligence in Primary Care (2023)

MisinfoMe: A Tool for Longitudinal Assessment of Twitter Accounts’ Sharing of Misinformation (2023)

Understanding Misogynoir: A Study of Annotators’ Perspectives (2023)

Misogynoir: Public Online Response Towards Self-Reported Misogynoir (2021)

Agents for Fighting Misinformation Spread on Twitter: Design Challenges (2021)

Opinions, Intentions, Freedom of Expression, ... , and Other Human Aspects of Misinformation Online (2021)

On the use of Jargon and Word Embeddings to Explore Subculture within the Reddit’s Manosphere (2020)

Co-Spread of Misinformation and Fact-Checking Content during the Covid-19 Pandemic (2020)

Pathway to a Human-Values Based Approach to Tackle Misinformation Online (2020)

Challenging Misinformation: Exploring Limits and Approaches (2019)

Exploring Misogyny across the Manosphere in Reddit (2019)

Understanding the Role of Human Values in the Spread of Misinformation (2019)

A Microservice Infrastructure for Distributed Communities of Practice (2018)

Transferring a Question-Based Dialog Framework to a Distributed Architecture (2017)

“We’re Seeking Relevance”: Qualitative Perspectives on the Impact of Learning Analytics on Teaching and Learning (2017)

Are you thinking what I'm thinking? Representing Metacognition with Question-based Dialogue (2016)

Developing Self-Regulated Learning through Reflection on Learning Analytics in Online Learning Environments (2015)

Presentation / Conference Contribution

Foreword: Towards a Safer Web for Women - First International Workshop on Protecting Women Online (2025)

Thesis

Affordances of Learning Analytics for Mediating Learning (2018)