DARE UK Community Working Group

AI Risk Evaluation Working Group

As interest in applying Artificial Intelligence (AI) to sensitive data grows among the data research community, the need to understand the unique privacy risks that AI poses has never been greater. The AI Risk Evaluation Working Group is a group of experts and members of the public on a mission to create guidelines for using AI on patient data in Trusted Research Environments (TREs) in a way that protects the identities of the individuals within the data.

The increasing availability of potentially sensitive medical data, such as brain imaging and genetic blood tests, within TREs for AI model development holds transformative potential for healthcare, but it also raises important questions about data privacy and security. The AI Risk Evaluation Working Group will bring together a team of experts and members of the public to develop comprehensive guidelines for the ethical use of AI on sensitive data in TREs, building on previous initiatives and engaging with diverse stakeholders to ensure the responsible integration of AI into clinical settings and its use with medical data.

Projected Outputs

The AI Risk Evaluation Working Group plans to conduct four workshops to assess and mitigate the risks associated with AI models trained on patient data:

  • Public/Patient Workshop: what are the risks associated with AI model release? This workshop will focus on gathering patient perspectives on the risks of using their data for AI research.
  • Researcher Workshop: what are the most effective mitigation techniques? This workshop will bring together AI and clinical experts to discuss current AI methods, privacy-preserving techniques, and the potential privacy risks of deploying AI models, and will produce a set of best-practice recommendations for mitigating privacy risks in AI through effective privacy-preserving techniques.
  • Data Provider Workshop: what is the risk appetite of data providers? The third workshop will engage data providers to understand their risk appetite and any barriers or restrictions they place on AI training on their data.
  • Developing guidelines and recommendations for TREs on assessing AI risk and implementing privacy-preserving methods: The final workshop will consolidate insights from the first three sessions, bringing all participants together to co-develop comprehensive guidelines for AI risk assessment and recommendations for privacy-preserving methods within TREs.

Participation and Collaboration

The AI Risk Evaluation Working Group comprises researchers with expertise in neuroimaging, data security, AI development, and clinical research, together with data and infrastructure providers and public representatives. The group includes members of Dementias Platform UK (DPUK), which has built a community of data providers and researchers and plans to collaborate with networks such as the British Neuroscience Association, the Deep Dementia Phenotyping (DEMON) Network, and the UK Health Data Research Alliance. DPUK has extensive experience with neuroimaging and genomic data, and its ongoing AI model development involves close collaboration with data providers to ensure safe model deployment in clinical environments. This collective expertise and these ongoing initiatives lay a solid foundation for the group's objectives.

Ways of Working

Four workshops will be held over the funding period (November 2023 – March 2024), both in person and online, to allow for maximum attendance and participation.

Group Co-Chairs

  • Prof Simon Thompson, Swansea University
  • Prof John Gallacher, Oxford University
  • Dr Timothy Rittman, Cambridge University
  • Lewis Hotchkiss, Swansea University

For enquiries, please email catrin.morris@swansea.ac.uk.