Impact Accelerator

Our Impact Accelerator projects will maximise impact from existing responsible AI research to rapidly realise benefits for the economy, society, culture, policy, health, the environment and quality of life.

“AI Equality by Design, Deliberation and Oversight”

This project will develop the theoretical foundations of ‘equality-by-design, deliberation and oversight’ (EbD) approaches to AI governance, striving to embed that knowledge into AI standards by (1) representing European equality defenders in European AI standard-setting by CEN/CENELEC; (2) empowering and equipping public equality defenders and other social stakeholders with the knowledge and skills to advocate for, adopt and embed EbD principles into the development and implementation of technical systems and organisational frameworks, protecting fundamental rights to equal treatment from AI-generated discrimination; and (3) providing UK tech firms with training in equality law to address gaps and misunderstandings in their current knowledge.

Project Lead: Professor Karen Yeung, University of Birmingham

Working with the European Network of Equality Bodies (Equinet), the Equality and Human Rights Commission (EHRC), the University of Oxford, Beyond Reach Consulting Ltd, and Supertech


“Automated Empathy – Globalising International Standards (AEGIS): Japan and Ethically Aligned Regions”

Partnering with Japan’s National Institute of Informatics and the standards developer Institute of Electrical and Electronics Engineers (IEEE), and engaging the Information Commissioner's Office (UK), this project augments our UK-Japan social science to create soft governance of autonomous systems that interact with human emotions and/or emulate empathy. This includes a regional (Japan/ethically aligned regions) IEEE Recommended Practice for these technologies, and the advancement of a global parent standard. We achieve this through in-person workshops with regional experts, deriving lessons for the UK and EU from a region immersed in social robotics and steeped in ethical questions about systems that process intimate data.

Project Lead: Professor Andrew McStay, Bangor University

Working with the University of Sussex, University of Winchester, the Institute of Electrical and Electronics Engineers, and the National Institute of Informatics


“RAISE - Responsible generative AI for SMEs in UK and Africa”

Building on insights developed across several ethical AI projects, notably the EU-funded SHERPA and SIENNA projects, this work will provide actionable guidance to small and medium-sized enterprises (SMEs) on how generative AI systems can be developed and used responsibly. By working with our SME partner Trilateral Research (using a ‘with SMEs, for SMEs’ approach), the project has access to a company leading on socially responsible AI, enabling it to co-design and test practical, actionable guidance on how generative AI can be integrated into innovative products. Impact will be generated for SMEs in both the UK and Africa.

Project Lead: Professor Bernd Stahl, University of Nottingham

Working with Trilateral Research Ltd


“Amplify: Empowering Underserved Voices in Spoken Language Interaction”

Speech is the richest, most precious form of human communication. But speech-based interactive systems are currently available for only a small fraction of the world’s languages. Consequently, hundreds of millions of people are being excluded globally. For the past three years we have worked with under-served "low-resource" language communities to explore highly innovative responsible AI techniques for developing speech recognition with low data requirements. In this project we will facilitate the uptake of the speech toolkit that we have created, working with a network of community partners and NGOs to prove and refine the tools, and expand spoken language support.

Project Lead: Professor Simon Robinson, Swansea University

Working with Minah's Research Services and Studio Hasi


“RAKE (Responsible Innovation Advantage in Knowledge Exchange)”

Conducting research and innovation with Responsible Innovation (RI) involves collaboration, foresight, reflection, responsiveness, and consideration of impacts on people and the environment. RI is therefore central to ‘responsible’ AI, but RI training and interdisciplinary practices must be developed to provide robust mechanisms for creating and assessing responsible AI. RAKE consolidates existing experience and resources to work with funders, businesses, projects, Centres for Doctoral Training (CDTs), and university spinouts. It will investigate how RI can be better embedded within these pipelines to improve AI development and deployment, and will build collectively on past work to support a new generation of RI-in-practice, strengthening RAI-UK’s responsible-AI research agenda.

Project Lead: Professor Marina Jirotka, University of Oxford

Working with University of Warwick, University of Nottingham, Beyond Reach, Kainos, Swansea University, the National Archives, Platform 7, Pantar, Newton Europe, and Albino Mosquito


“AIPAS - AI Accountability in Policing and Security”

The challenge for Law Enforcement Agencies (LEAs) is balancing the significant opportunities AI presents for safeguarding society against societal concerns and expectations about its responsible use. Accountability is at the core of UK Government efforts to ensure responsible AI use; it is, however, highly abstract. Building on the research team’s global work on AI Accountability, AIPAS will design the practical mechanisms and software tools that LEAs in the UK need to assess and implement AI Accountability for their AI applications. These solutions will support AI Accountability not only during deployment but also proactively during the design and procurement stages.

Project Lead: Professor Babak Akhgar, CENTRIC, Sheffield Hallam University

Working with the Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC), Humberside Police, the Metropolitan Police Service, and Innovate UK KTN Defence and Security


“SAGE-RAI: Smart Assessment and Guided Education with Responsible AI”

Can responsible Generative AI (GenAI) lead to improved student outcomes? In SAGE-RAI, we explore this question using education-oriented GenAI tools applied by our partners. Inspired by Bloom’s 1984 study on the efficacy of one-to-one teaching, and the potential for cost-effective, scalable personalised education, we aim to unlock this potential. Addressing the limits on tutors’ capacity to accommodate large cohorts, we investigate how responsible GenAI can enhance tutoring, offer more tailored, personalised learning experiences and generate student feedback. Our goal is to create a platform supporting assessment and student guidance while applying GenAI responsibly, addressing challenges of misinformation, copyright, and bias. The journey embodies educational innovation for better outcomes.

Project Lead: Professor John Domingue, Knowledge Media Institute, The Open University

Working with the Open Data Institute