FRIDGE is helping the UK’s AI Research Resource (AIRR) support sensitive data research by creating secure Trusted Research Environments (TREs) within large-scale AI supercomputers. This will make it easier for approved researchers to use powerful computing resources while ensuring strict governance and security are in place.

Artificial intelligence (AI) models can help accelerate data research into complex challenges in health and society, but developing them often requires access to sensitive data on powerful supercomputers. Accessing and analysing sensitive data at this scale is difficult, as strict governance is needed to protect people’s privacy.

Led by The Alan Turing Institute, in partnership with University College London and the Universities of Cambridge and Bristol, FRIDGE (Federated Research Infrastructure by Data Governance Extension) is one of the DARE UK Early Adopter projects exploring how parts of the UK’s AI Research Resource (AIRR) can be securely configured as TREs. This would allow approved researchers to use large-scale AI systems for sensitive data research while keeping security and governance at the core.

AIRR is a national service made up of powerful computers designed for AI research, giving researchers, universities, and businesses access to world-class computing resources for developing and testing AI.

What FRIDGE is working to achieve

FRIDGE wants to show how AI-driven research using sensitive data can be done more safely, securely, and efficiently. The project’s goals include:

  • Creating a robust AIRR TRE platform – establishing a space where innovations and standards developed through TREvolution (a DARE UK initiative enhancing UK TRE capabilities and operations) can be tested and validated in real-world supercomputing environments.
  • Delivering a TRE compliant with SATRE (the Standardised Architecture for Trusted Research Environments) and NHS standards – ensuring researchers can carry out AI-driven research while meeting strict governance and security requirements.
  • Protecting sensitive data – securely isolating supercomputing resources so only authorised researchers can access them.
  • Sharing knowledge and resources – making the software, legal agreements, and policies developed through FRIDGE openly available to help other projects and TREs use sensitive data safely.

Progress so far

In its first few months, the FRIDGE team has made good progress towards achieving its goals. Working with partners and gathering feedback, the project has:

  • Developed a working prototype system that allows workloads from a host TRE to be securely processed on high-performance computing (HPC) systems using Kubernetes, an open-source container orchestration platform that lets TRE workloads run consistently across public and private clouds and on-premises servers (see the sketch after this list).
  • Enabled the safe return of results to the host TRE, so researchers can harness supercomputing power without compromising data security.
  • Developed the design for a secure TRE “tenancy”, a protected space within the supercomputer; validation of this design on the Dawn AI supercomputer (one of the UK’s fastest artificial intelligence supercomputers, hosted at the University of Cambridge’s Research Computing Service) is underway.
  • Hosted a hands-on workshop with partners from University College London, the Alan Turing Institute, Cambridge, and Bristol, helping to refine the architectural design and ensure it will be deployable across both the Dawn and Isambard-AI supercomputers that make up the AI Research Resource.
  • Participated in TREvolution public engagement activities, gathering feedback and raising awareness of secure AI research capabilities.
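
To make the first bullet above more concrete, here is a minimal, hypothetical sketch of how a host TRE might hand a containerised workload to an isolated Kubernetes namespace (the “tenancy”) on an HPC system, using the official Kubernetes Python client. The namespace, job and image names are illustrative assumptions; FRIDGE’s actual submission interface, approval process and egress controls are not described in this update.

    # Hypothetical sketch (not FRIDGE's actual interface): a host TRE submits a
    # containerised workload to an isolated Kubernetes namespace ("tenancy") on
    # an HPC system. All names below are illustrative assumptions.
    from kubernetes import client, config


    def submit_workload(namespace: str, job_name: str, image: str) -> None:
        """Create a Kubernetes Job inside the tenancy's namespace."""
        # Credentials are assumed to be scoped to this namespace only, so the
        # submitting TRE cannot reach resources outside its tenancy.
        config.load_kube_config()

        container = client.V1Container(
            name=job_name,
            image=image,
            resources=client.V1ResourceRequirements(
                requests={"cpu": "4", "memory": "16Gi"},
                # Request one accelerator on the HPC node.
                limits={"nvidia.com/gpu": "1"},
            ),
        )
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"tre-tenancy": namespace}),
            spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
        )
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name=job_name, namespace=namespace),
            spec=client.V1JobSpec(template=template, backoff_limit=0),
        )
        client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)


    if __name__ == "__main__":
        # Hypothetical names: a governance layer in the host TRE would normally
        # approve both the image and the destination tenancy before submission.
        submit_workload(
            "tre-tenancy-demo",
            "approved-analysis",
            "registry.example/approved-workload:1.0",
        )

In a real deployment, results would leave the tenancy only through an approved egress route back to the host TRE, and network policies would block all other traffic; none of those controls are shown in this sketch.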

Over the coming months, the team will focus on finalising the FRIDGE system design, deploying it on Dawn, and further validating it on Isambard-AI. These steps will prepare the platform for broader testing and practical use in AI-driven research.

How FRIDGE supports TREvolution

FRIDGE directly contributes to TREvolution by creating a real-world testbed for implementing Trusted Research Environments on large-scale AI supercomputers. By developing secure, standards-compliant TREs, FRIDGE helps demonstrate how sensitive data can be used safely and efficiently for AI research. 

These insights will inform the wider UK TRE community, enabling other projects and TREs to adopt and scale standards and innovations assembled through TREvolution across the UK, ultimately supporting faster, safer, and more impactful data research for public benefit.

About the DARE UK Early Adopters

The DARE UK Early Adopters are pioneering projects helping to put the standards and innovations assembled through TREvolution into practice. Working across more than 10 Trusted Research Environments (TREs), including five sub-national NHS Secure Data Environments (SDEs) in England, these projects bring together researchers, TRE operators and members of the public. Their role is to test the new approaches in real-world settings, making sure they are practical, effective and safe before they are more widely adopted for research in the public interest.

Learn more about the FRIDGE project