International Partnerships

Our International Partnership projects will develop strategic collaborations with world-leading research organisations to ensure society deploys and uses AI in a responsible way, beyond national boundaries. These projects will explore technical, social, legal and ethical challenges to generate global impact for people, communities, and societies.

“The human role to guarantee an ethical AI for healthcare”

The integration of AI-based tools into routine clinical care is opening the door to a new paradigm in which doctors and AI systems collaborate to decide on the right diagnosis or treatment for a patient, based on the individual’s biomedical information. Important ethical challenges arise at every stage, from the development of clinical AI to its implementation. What happens if the AI tool and the clinician disagree? Who holds responsibility? Is there a risk of human replacement? Can we counter model bias and potential unfairness? Is there a risk of disempowering clinicians and patients? In the year of the UK’s global AI Safety Summit and the European AI Act, this project aims to build international partnerships with academia, governments around the world, and industry to reflect, together with patients and clinicians, on the human role that can ensure the ethical development and deployment of clinical AI models that benefit all and respect human dignity.

Project Lead: Dr Raquel Iniesta, King’s College London

Working with the Government of Catalonia and the Open University of Catalonia


“Harnessing AI to enhance electoral oversight”

For democratic elections to be trusted, they must be free and fair. To ensure this, democracies across the world establish regulators and management bodies to oversee electoral activity. However, these bodies often operate with limited budgets and personnel, and they have been slow to adapt to the digital age and to harness the power of automation. This project will bring together an international team of researchers, practitioners, and activists to develop automated systems that promote compliance, and to test these tools to protect against backfire effects and engender trust.

Project Lead: Dr Sam Power, University of Sussex

Working with University of Sheffield, Johns Hopkins University, International Foundation for Electoral Systems, International IDEA, OpenSecrets, and InternetLab


“Transparency Regulation Toolkits for Responsible Artificial Intelligence”

The United Kingdom and the European Union are creating new regulations for AI transparency, drafting and implementing laws that will require bodies to communicate when they are using AI, and how they are using it. However, the exact meaning of ‘AI transparency’ is contestable, so implementing these rules requires interpretation. Our aim is to examine how data scientists are interpreting and implementing transparency rules in practice, and how they plan to do so in the future. We will create two legal and responsible-innovation toolkits to help small and medium-sized enterprises (SMEs) comply with AI transparency requirements.


Project Lead: Dr John Downer, University of Bristol

Working with University of Antwerp, the European Digital SME Alliance, and the Digital Catapult

"For the FATES of Africa: A co-developed pipeline for responsible AI in online learning across Africa"

The project focuses attention on a population that is vast on a global scale yet often left behind: online learners in Sub-Saharan Africa (SSA). Our partnership will develop a ‘how-to’ pipeline for protecting and championing responsible AI among SSA online learners, from the conception of an online learning idea, through iterative design and development, to the moment of scale. We will deliver a comprehensive programme of consultation across the edtech ecosystem in SSA. Finally, an openly available toolkit will be published, alongside policy and academic papers, as outputs of this project.

Project Lead: Dr Nora McIntyre, University of Southampton

Working with Whizz Education UK & Kenya and Investing In People, DRC


“TAS Hub and Good Systems Strategic Partnership”

This international partnership aims to strengthen the existing ties between TAS Hub (UK) and Good Systems (USA), two renowned leaders in Responsible and Trustworthy AI and Autonomous Systems, both aligned with the mission and objectives of RAI UK. The partnership will involve a series of innovative and ambitious activities for research development, knowledge exchange, and sharing of best practices. These activities will empower existing members and bring in new members from diverse backgrounds, including non-academic affiliates and partners from the Global South, to nurture an international community dedicated to advancing Responsible AI.

Project Lead: Helena Webb, University of Nottingham

Working with Good Systems at the University of Texas at Austin; University of Missouri-Columbia; University of Southampton; and King’s College London


“AI Regulation Assurance in Safety-Critical Systems”

Safety-critical systems that use artificial intelligence (AI) can pose a variety of challenges and opportunities. This class of AI systems comes, in particular, with the risk of real, consequential harms. With a cross-border approach spanning the UK, US, and Australia, our team aims to thoroughly investigate AI safety risks for technologies in the aerospace, maritime, and communication sectors. Through in-depth case studies, the project will identify technical and regulatory gaps and propose solutions to mitigate potential safety risks. Bridging the wider scientific community with international government stakeholders will allow us to positively influence the development and regulation of AI in safety-critical systems for the betterment of society.


Project Lead: Dr Jennifer Williams, University of Southampton

Working with George Washington University in Washington DC and the Australian National University in Canberra, as well as multiple government, military, regulatory, and industry partners across the three nations.


“Disruption Mitigation for Responsible AI”

In today's dynamic landscape, AI applications across critical sectors face continual disruptions, ranging from environmental shifts to human errors and adversities. To navigate these challenges effectively, AI solutions must demonstrate adaptability and responsibility, aligning with the diverse social, legal, ethical, empathetic, and cultural (SLEEC) norms of stakeholders. Yet current AI development frameworks fall short of addressing this multifaceted demand. DOMINOS is poised to fill this void by delivering a comprehensive methodology and toolkit for the seamless development, deployment, and use of responsible AI solutions capable of mitigating a wide spectrum of disruptions in ways that comply with SLEEC norms.

Project Lead: Prof. Radu Calinescu, University of York, Institute for Safe Autonomy

Working with the University of Toronto's Schwartz Reisman Institute for Technology and Society, Thales UK, Critical Systems Labs Canada, and Advai UK


“Responsible AI international community to reduce bias in AI music generation and analysis”

This project aims to establish an international community dedicated to addressing Responsible AI challenges, specifically bias in AI music generation and analysis. The prevalent dependence on large training datasets in deep learning often results in AI models biased towards Western classical and pop music, marginalising other genres. The project will bring together an international and interdisciplinary team of researchers, musicians, and industry experts to develop AI tools, expertise, and datasets aimed at enhancing access to marginalised music genres. This will directly benefit both musicians and audiences, encouraging them to explore a broader spectrum of musical styles. Additionally, the initiative will contribute to the evolution of the creative industries by introducing novel forms of music consumption.

Project Lead: Prof. Nick Bryan-Kinns, University of the Arts London

Working with Music Hackspace UK, DAACI UK, Steinberg Germany and Bela UK


“Exploring Fairness and Bias of Multimodal Natural Language Processing for Mental Health”

The world is grappling with an escalating mental health crisis. Emerging technologies such as Artificial Intelligence (AI), and in particular Natural Language Processing (NLP), are presented as promising tools to address these challenges by tackling online abuse and bullying, detecting suicidal ideation, and countering other behavioural online harms. This international partnership between the University of Southampton and Northeastern University is dedicated to the responsible use of AI in addressing mental health issues. With a core focus on ethical implementation, our partnership prioritises fairness and bias mitigation within AI models. Key initiatives encompass reciprocal resource sharing, model evaluations to identify and mitigate biases, workshops on AI policy and mental and public health, and policy recommendations for the ethical integration of AI in mental health. In addition, we will foster extensive collaborations with experts in AI, public health, and mental health, engaging stakeholders from the outset to ensure that our approach to AI integration in mental health remains innovative, ethically sound, and genuinely responsive to user needs.

Project Lead: Dr Rafael Mestre, University of Southampton, New Frontiers Fellow

Working with Northeastern University


“Responsible AI networks for industries and governments in Latin America”

This project aims to establish a global network to advocate for the responsible use of Artificial Intelligence in Latin America. This network will explore and share best practices and experiences in designing safeguard measures and regulations for the responsible use of AI in governments and industries in developing countries. A significant part of our discussions will focus on the ethical and economic consequences of using AI in countries that heavily depend on imported technology and regulatory frameworks. By focusing on Latin America, we aim to enhance our understanding of these issues and their implications not only for the region but also for the UK and Europe.

Project Lead: Nestor Castaneda, University College London (Associate Professor and Deputy Director of the UCL Social Data Institute)

Working with Chile's National Center for Artificial Intelligence (CENIA) and UCL Social Data Institute


“Understanding Robot Autonomy in Public”

In recent years, robots deployed in public settings, such as autonomous delivery bots and shuttles, have been operating services in towns and cities worldwide. Yet there is little systematic understanding of how these technologies influence the daily lives of the residents who coexist with them. This project aims to bridge disciplines including transport, human-computer interaction (HCI), human-robot interaction (HRI), robotics, sociology, and linguistics. By forming partnerships among established and emerging collaborators, we will share empirical data on human-robot interactions in public spaces and jointly develop interdisciplinary insights to guide the responsible design of public robotics in the future.


Project Lead: Stuart Reeves, University of Nottingham

Working with Swedish National Road and Transport Research Institute (VTI) and Linköping University