Nvidia and King's College London join forces to inject privacy into medical imaging analysis

The new model is said to help solve one of the most fundamental issues relating to the sharing of private data

Nvidia researchers, in partnership with King's College London, have created what they describe as the first privacy-preserving federated learning system for training medical imaging analysis tools with artificial intelligence (AI).

Federated learning involves multiple parties such as developers and organisations collaboratively training a centralised deep neural network (DNN) using data from multiple sources.

Privacy issues arise when the patient data needed to train such models is shared between parties. With this method, the private data never has to leave the hospital where it is stored.

Instead of taking the data out of the hospital, the federated learning method installs a client on the hospital's systems. A centralised federated learning server, located elsewhere, communicates with that client; the client processes the data locally at the hospital and sends only the learned insights back to the server, never the raw data itself.

"The client contains data that has been stored in a harmonised way," said Jorge Cardoso, associate professor in AI at King's College London. "We then take an algorithm, we put it into this container that is then sent to the different hospitals. This container learns from the data locally and the parameters of the model, not the data, is sent back to a centralised server.

"The centralised server then takes multiple realisations of the model that have been trained on separate datasets and creates a consensus out of it," he added. "This consensus is sent back to the hospitals, learns from data again and the updated models are sent back to the centralised server."

The process is repeated until the algorithm has learned all it can from the data, without ever learning who the data belongs to or where it came from. According to experts, the model appears to solve one of the most fundamental issues in medical data sharing.
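The round-based loop Cardoso describes resembles federated averaging, a standard approach in this field. The following is a minimal illustrative sketch in Python, with an invented one-parameter model and made-up "hospital" datasets; it is not Nvidia and King's College London's actual implementation.

```python
# Illustrative federated averaging loop. All names and data are
# hypothetical; the real system trains deep neural networks, not
# the toy linear model y = w * x used here.

def local_update(weights, local_data, lr=0.1):
    """Train the one-parameter model on local data only."""
    w = weights
    for x, y in local_data:
        grad = 2 * (w * x - y) * x  # squared-error gradient
        w -= lr * grad
    return w  # only the updated parameter leaves the "hospital"

def federated_round(global_w, hospital_datasets):
    """One round: send the model out, train locally, average the results."""
    local_ws = [local_update(global_w, data) for data in hospital_datasets]
    return sum(local_ws) / len(local_ws)  # consensus by averaging

# Three hospitals hold private samples of the same relationship y = 2x.
hospitals = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (1.5, 3.0)],
    [(0.5, 1.0), (2.5, 5.0)],
]

w = 0.0
for _ in range(20):  # repeat until the model stops improving
    w = federated_round(w, hospitals)
print(round(w, 2))  # → 2.0
```

The central server never sees a single (x, y) pair; it only receives and averages the locally trained parameters, which mirrors the "consensus" step described above.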

"As I see it, the flaw in most solutions is model inversion, allowing the ultimate exposure of the individual patient's identity," said Peter Borner, president and CEO at The Data Privacy Group.

"It would seem that the approach being adopted by Nvidia and King's has taken this major issue into account. I am excited to see this approach mature into a fully-fledged solution that will enable research teams to fully cooperate without the danger of breaching patient confidentiality."

To further protect the privacy that federated learning affords, the researchers are investigating the feasibility of adding the ε-differential privacy framework, a way to formally define privacy loss.
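To make the ε-differential privacy idea concrete: a common mechanism adds calibrated Laplace noise to any released statistic, so that the presence or absence of any single patient changes the output distribution by at most a factor governed by ε. The sketch below is a generic illustration using a noisy count query; the data, epsilon value, and function names are hypothetical and not drawn from the Nvidia/KCL system.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale): the difference of two i.i.d.
    exponential variables is Laplace-distributed."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(values, predicate, epsilon):
    """Release a count under epsilon-differential privacy.
    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical patient ages; the true count of ages over 40 is 4.
ages = [34, 51, 29, 62, 45, 38, 57]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
print(noisy)  # true count 4 plus Laplace noise of scale 2
```

A smaller ε means more noise and stronger privacy; the same trade-off between privacy protection and model quality is what Nvidia calls a "natural tradeoff" below.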

The model has been tested using data pulled from the BraTS 2018 dataset, which contains the MRI scans of 285 patients with brain tumours. The data allowed the researchers to evaluate the performance of the model on a multi-modal, multi-class segmentation task.

They found the federated learning model performed comparably to a data-centralised system; Nvidia said the slight performance decrease was a "natural tradeoff" between privacy protection and the quality of the trained model.

The research will be formally presented to attendees of MICCAI, a large medical imaging conference starting today in Shenzhen, China.
