Participatory Harm Auditing Workbenches and Methodologies

A project to investigate the systematic and participatory auditing of current and evolving AI technologies

AI now affects people’s work and everyday lives through decision support systems, information search, and AI-generated text and images. A significant barrier to the safe and trustworthy deployment of predictive and generative AI is their unassessed potential for harm: AI that is inaccurate, biased or unfair. High-profile failures, such as hallucinated legal precedents and gender bias in hiring algorithms, are already emerging. Assessment of AI has so far been haphazard, unsystematic and left solely in the hands of AI experts.

Our project will investigate the systematic and participatory auditing of current and evolving AI technologies, in terms of accuracy, bias and fairness as well as the trade-offs between them, by stakeholders with diverse levels of expertise. Our research will create auditing workbenches comprising novel user interfaces, algorithms and privacy-preserving mechanisms that underpin participatory approaches accessible to domain experts, regulators, affected parties, and the public. We will advance systematic auditing methodologies, embedded in the auditing workbenches, that reflect, anticipate and inform regulatory frameworks.

Challenges

We will address significant research challenges:

  • Lack of objective ground truth makes measures unreliable.
  • Fairness notions are context-dependent.
  • Bias (a technical concept) does not necessarily lead to unfairness (a human value).
  • Fairness measures may not reflect stakeholders’ fairness notions.
  • Stakeholders hold conflicting fairness notions even within the same context.
  • The choice of measure can lead to “fairwashing”.
  • Mitigating issues found in auditing is complex.
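
One of these challenges, fairwashing through the choice of measure, can be made concrete with a small worked example. The sketch below uses invented data and plain Python (it is not part of the project’s workbenches): a selection outcome that satisfies demographic parity while violating equal opportunity, so an audit that reports only the first measure would look fair while hiding a real disparity.

```python
# Minimal, hypothetical sketch of "fairwashing": the same outcome passes a
# demographic-parity check but fails an equal-opportunity check.

# Each record is (group, qualified, selected) for one applicant.
records = (
    [("A", 1, 1)] * 4 + [("A", 0, 0)] * 4 +   # Group A: all 4 qualified applicants selected
    [("B", 1, 1)] * 2 + [("B", 1, 0)] * 2 +   # Group B: only 2 of 4 qualified selected
    [("B", 0, 1)] * 2 + [("B", 0, 0)] * 2     # ... plus 2 unqualified selected
)

def selection_rate(group):
    """Share of the group that was selected (demographic parity compares these)."""
    rows = [r for r in records if r[0] == group]
    return sum(sel for _, _, sel in rows) / len(rows)

def qualified_selection_rate(group):
    """Share of qualified group members selected (equal opportunity compares these)."""
    rows = [r for r in records if r[0] == group and r[1] == 1]
    return sum(sel for _, _, sel in rows) / len(rows)

for g in ("A", "B"):
    print(f"Group {g}: selection rate = {selection_rate(g):.2f}, "
          f"qualified selection rate = {qualified_selection_rate(g):.2f}")

# Output:
#   Group A: selection rate = 0.50, qualified selection rate = 1.00
#   Group B: selection rate = 0.50, qualified selection rate = 0.50
# Demographic parity holds (0.50 vs 0.50), yet qualified applicants in group B
# are selected half as often as those in group A.
```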

Our project will:

  1. Build novel participatory auditing workbenches for designing and deploying predictive and generative AI, targeted at diverse stakeholders.
    • Investigate auditing across a range of contexts (e.g. health, cultural heritage, public sector decision-making, information seeking) and diverse stakeholders, individually and in groups.
    • Explore novel interfaces to explain models, outcomes, measures, their trade-offs and expected social impact.
    • Develop new fairness measures.
    • Create privacy-preserving algorithms that support participatory auditing.
    • Ensure that AI auditing contributes to an improved equality, diversity and inclusion (EDI) agenda.
    • Investigate participatory approaches to mitigate issues yet guard against harms by malicious actors.
  2. Develop novel participatory AI auditing methodologies.
    • Align AI auditing with current and anticipated regulatory frameworks in the UK, EU, Japan and USA.
    • Embed responsible innovation in AI auditing to anticipate, reflect on, engage with and act on potential harms.
    • Create a new certification framework for participatory AI auditing.
  3. Extend the network of researchers in Responsible AI.
    • Contribute to the emerging national and international AI ecosystem.
  4. Embed participatory auditing in future AI deployment practices.
    • Create a network of stakeholders.
    • Train stakeholders in participatory auditing.
    • Educate the public about AI auditing and participatory mechanisms.

Methods

Project activities will be undertaken by technical and non-technical researchers across institutions, thereby training them in multidisciplinary research and extending their knowledge and skills. We will flexibly extend our research focus to other contexts as necessary.

We will use qualitative, quantitative and mixed research methods – interviews, observations and focus groups – to collect requirements for auditing workbenches from a range of stakeholders (AI experts, UX designers, domain experts, regulators and end users) and to evaluate auditing in prototype-driven experiments. Non-technical researchers will work closely with technical researchers to build interfaces, algorithms and mechanisms that support systematic and privacy-preserving participatory auditing and mitigation. To establish novel participatory AI auditing methodologies, we will review regulatory frameworks and interview policy makers, embedding our findings in the auditing workbenches and the AI development lifecycle.

Consortium

Our consortium brings together a multidisciplinary team (Computer Science, HCI, Humanities, Information Science, Law, Psychology, Social Science) of 21 academics from 7 institutions (Glasgow, Edinburgh, KCL, Sheffield, Stirling, Strathclyde, York) and 15 partners from across Scotland, England, the USA and Japan. Our research will be a fundamental step in supporting the participation of stakeholders with diverse levels of expertise in AI auditing, thus addressing harms and maximising benefits for society. Partners can contribute to our project in the following ways:

  • Contribute use cases and datasets for investigating participatory AI auditing
  • Provide access to domain experts and other stakeholders
  • Work directly on research activities with academics
  • Be part of experiments and studies which evaluate workbenches and methodologies
  • Organise outreach events and training and awareness campaigns
  • Form a community of interest around AI auditing
  • Join an independent advisory committee, led by Professor Gina Neff, University of Cambridge

Project Team

University of Glasgow

  • Simone Stumpf, Reader in Responsible and Interactive AI (HCI). End-user interactions with AI, Explainable AI (XAI), interactive machine learning, human-in-the-loop AI fairness.
  • Monika Harvey, Professor of Neuropsychology and Cognitive Neuroscience (Psychology). People’s lived experiences in healthcare systems. Deputy Director of UKRI CDT in Social AI.
  • Yunhyong Kim, Lecturer in Information Studies (Humanities). AI in the Humanities. Member of AI Special Interest Group of the Education Committee for Dublin Core Metadata Initiative.
  • Craig Macdonald, Professor of Information Retrieval (Computer Science). Predictive and generative AI for search and recommendation systems.
  • Graham McDonald, Senior Lecturer (Computer Science).  Fairness in search and fairness evaluation methodologies.
  • Iadh Ounis, Professor of Information Retrieval (Computer Science).  Responsible interactive search and recommendation systems.
  • Alessandro Vinciarelli, Professor of Computational Social Intelligence (Computer Science). Social Signal Processing for predictive AI, Director of EPSRC CDT in Social AI.
  • Mark Wong, Senior Lecturer in Public Policy and Research Methods (Social Science). Equitable, Inclusive and Responsible AI.

University of Edinburgh

  • Adam Turner, Head of Transformational Projects at the Data Lab (Cross Discipline). AI translation into real-life settings within industry and the public sector.
  • Vyron Christodoulou, Data Scientist at the Data Lab (Computer Science). Deployment of AI within multi-disciplinary, multi-sector teams.

King’s College London

  • Elizabeth Black, Reader in AI (Computer Science). Joint human-AI reasoning and decision-making, AI fairness, Director of the UKRI CDT in Safe and Trusted AI.
  • Daniele Quercia, Professor of Computer Science (HCI/Computer Science). Responsible AI, user interactions with AI, AI Fairness, Predictive AI.
  • Elena Simperl, Professor of Computer Science (Computer Science). Human-in-the-loop AI, responsible AI, citizen participation in AI, Director of Research at the Open Data Institute.
  • Dan Hunter, Professor of Law (Law). AI and Law, internet and intellectual property law. ARC Centre of Excellence for Automated Decision-Making and Society. Founder or on the founding teams of four start-ups in EdTech, legal tech and regulatory compliance.   

University of Sheffield

  • Kathryn Simpson, Lecturer in Digital Humanities (Humanities). AI in the Cultural Sector. Curator for the David Livingstone Museum.

University of Stirling

  • Leonardo Bezerra, Lecturer in AI/Data Science (Computer Science). Automated multi-criteria AI design, higher education and public health.
  • Alexander Brownlee, Senior Lecturer in Computing Science (Computer Science). XAI, multi-objective optimisation.

University of Strathclyde

  • Leif Azzopardi, Associate Professor in Artificial Intelligence and Information Retrieval (Information Science). Bias and fairness in search systems, user interactions with AI.
  • Ian Ruthven, Professor of Information Seeking and Retrieval (Information Science). User interactions with search and information, cultural heritage.

University of York

  • Siamak Shahandashti, Senior Lecturer in Cyber Security & Privacy (Computer Science). Cryptography and usable security and privacy, auditing of electronic voting systems and privacy-enhancing technologies.