arXiv Preprint
Trusted Research Environments (TREs) are widely and increasingly used to
support statistical analysis of sensitive data across a range of sectors
(e.g., health, policing, taxation and education), as they enable secure and
transparent research whilst protecting data confidentiality. There is an
increasing desire from academia and industry
to train AI models in TREs. The field of AI is developing quickly, with
applications including spotting human errors, streamlining processes,
automating tasks and supporting decisions. These complex AI models require more
information to describe and reproduce, increasing the possibility that
sensitive personal data can be inferred from such descriptions. TREs do not
have mature processes and controls against these risks. This is a complex
topic, and it is unreasonable to expect all TREs to be aware of all such risks,
or to assume that TRE researchers have received AI-specific training that
addresses them.
GRAIMATTER has developed a draft set of usable recommendations for TREs to
guard against the additional risks that arise when trained AI models are
disclosed from TREs.
The development of these recommendations has been funded by the GRAIMATTER UKRI
DARE UK sprint research project. This version of our recommendations was
published at the end of the project in September 2022. During the course of the
project, we identified many areas for future investigation to expand and
test these recommendations in practice. Therefore, we expect that this document
will evolve over time.
Emily Jefferson, James Liley, Maeve Malone, Smarti Reel, Alba Crespi-Boixader, Xaroula Kerasidou, Francesco Tava, Andrew McCarthy, Richard Preen, Alberto Blanco-Justicia, Esma Mansouri-Benssassi, Josep Domingo-Ferrer, Jillian Beggs, Antony Chuter, Christian Cole, Felix Ritchie, Angela Daly, Simon Rogers, Jim Smith
2022-11-03