Nvidia and King's College London join forces to inject privacy into medical imaging analysis


Nvidia researchers, in partnership with King's College London, have created the first privacy-preserving federated learning system for training AI-powered medical imaging analysis tools.

Federated learning involves multiple parties, such as developers and organisations, collaboratively training a centralised deep neural network (DNN) on data from multiple sources.

Patient data is needed to train such models, but sharing it between parties raises privacy issues. With this method, private data never has to leave the hospital where it's stored.

Instead of taking the data out of the hospital, the federated learning method relies on installing a client on the hospital's systems. A centralised federated learning server, located elsewhere, communicates with that client, which processes the data locally at the hospital and sends only the learned insights, not the raw data, back to the server.
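In rough terms, the client-side step might look like the minimal sketch below, which assumes a toy linear model built with NumPy; the function local_update and its parameters are hypothetical stand-ins, not Nvidia's actual implementation. The key property is that only the updated model parameters ever leave the function, never the patient data.

```python
import numpy as np

def local_update(global_weights, patient_data, labels, lr=0.01, epochs=5):
    """Hypothetical client-side step: train on data that never leaves the hospital."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = patient_data @ w                          # forward pass on local data
        grad = patient_data.T @ (preds - labels) / len(labels)
        w -= lr * grad                                    # gradient step, computed on-site
    return w  # only the learned parameters are sent back, never the raw data
```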

"The client contains data that has been stored in a harmonised way," said Jorge Cardoso, associate professor in AI at King's College London. "We then take an algorithm, we put it into this container that is then sent to the different hospitals. This container learns from the data locally and the parameters of the model, not the data, is sent back to a centralised server.

"The centralised server then takes multiple realisations of the model that have been trained on separate datasets and creates a consensus out of it," he added. "This consensus is sent back to the hospitals, learns from data again and the updated models are sent back to the centralised server."

The process is repeated until the algorithm has learned all it can from the data, without the system ever learning who the data belongs to or where it came from. According to experts, the approach appears to solve one of the most fundamental issues in medical data sharing.
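Taken together, the train-and-average loop Cardoso describes resembles federated averaging. The sketch below, which reuses the hypothetical local_update above, is one generic way such a loop could be structured, not the team's code; HospitalClient and the fixed round count are illustrative stand-ins.

```python
import numpy as np

class HospitalClient:
    """Stand-in for the container installed at each hospital."""
    def __init__(self, data, labels):
        self.data, self.labels = data, labels             # stays on-site

    def train(self, global_weights):
        return local_update(global_weights, self.data, self.labels)

def federated_training(clients, n_features, rounds=10):
    weights = np.zeros(n_features)
    for _ in range(rounds):                               # repeat until the model converges
        local_models = [c.train(weights) for c in clients]
        weights = np.mean(local_models, axis=0)           # consensus: average the models
    return weights                                        # redistributed to clients each round
```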

"As I see it, the flaw in most solutions is model inversion, allowing the ultimate exposure of the individual patient's identity," said Peter Borner, president and CEO at The Data Privacy Group.

"It would seem that the approach being adopted by Nvidia and Kings has taken this major issue into account. I am excited to see this approach mature into a fully-fledged solution that will enable research teams to fully cooperate with the danger of breaching patient confidentiality."

To further strengthen the privacy that federated learning affords, the researchers are investigating the feasibility of adding the ε-differential privacy framework, a way of formally defining and limiting privacy loss.
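As a rough illustration of the idea, and not necessarily the mechanism the researchers are evaluating, a model update can be clipped to bound its sensitivity and then perturbed with Laplace noise calibrated to a privacy budget ε before it leaves the hospital. In the sketch below, privatise_update, clip_norm and epsilon are illustrative names and values.

```python
import numpy as np

def privatise_update(update, clip_norm=1.0, epsilon=0.5):
    """Generic sketch of an epsilon-differentially-private model update."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))   # bound sensitivity
    noise = np.random.laplace(scale=clip_norm / epsilon, size=clipped.shape)
    return clipped + noise   # smaller epsilon -> stronger privacy, noisier update
```

A smaller ε gives a stronger formal privacy guarantee at the cost of a noisier, and therefore slightly less accurate, model.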

The model was tested using data from the BraTS 2018 dataset, which contains MRI scans of 285 patients with brain tumours. The data allowed the researchers to evaluate the model's performance on a multi-modal, multi-class segmentation task.

They found the federated learning model performed comparably to a system trained on centralised data; Nvidia said the slight drop in performance was a "natural tradeoff" between privacy protection and the quality of the trained model.

The research will be formally presented to attendees of MICCAI, the International Conference on Medical Image Computing and Computer Assisted Intervention, a large medical imaging conference starting today in Shenzhen, China.

Connor Jones
News and Analysis Editor
