
Working group
AI Risk Evaluation Community Group (AI Risk Evaluation Working Group)
The growth of AI model development within Trusted Research Environments (TREs) holds transformative potential, but releasing trained models raises data privacy concerns. This group aimed to develop risk assessment methodologies, governance, and training to enable the safe release of AI models from TREs while ensuring that privacy is protected.
Status:
Primary contact: lewis.hotchkiss@chi.swan.ac.uk
Join the community!
Contact us to join the community group and learn more about our work.

Community charter
This group held several workshops with members of the public to understand privacy concerns, with data owners to discuss barriers to release and assessment, and with researchers to explore privacy-preserving techniques. All the outputs from these workshops contributed to the development of our governance framework and recommendations for AI assessment and release.
Download full charter
Co-chairs
Meet the co-chairs who led this work on AI risk evaluation to protect privacy in sensitive data research.

Lewis Hotchkiss
Dementias Platform UK

Simon Thompson
Dementias Platform UK

Timothy Rittman
University of Cambridge

John Gallacher
University of Oxford
Group events
Explore events to connect, collaborate, and advance the safe release of AI models from TREs, protecting privacy in research.
All group events
No events at the moment. Check back soon!
Group updates
Latest news, insights, and developments from the AI Risk Evaluation Community Group
Latest
No posts at the moment. Check back soon!
Group outputs
Key outputs for the development of assessment methodologies, governance, and training for AI risk evaluation in sensitive data research.