New security protocol shields data from attackers during cloud-based computation


Deep-learning models are being used in many fields, from health care diagnostics to financial forecasting. However, these models are so computationally intensive that they require the use of powerful cloud-based servers.

This reliance on cloud computing poses significant security risks, particularly in areas like health care, where hospitals may be hesitant to use AI tools to analyze confidential patient data due to privacy concerns.

To tackle this pressing issue, MIT researchers have developed a security protocol that leverages the quantum properties of light to ensure that data sent to and from a cloud server remain secure during deep-learning computations.

By encoding data into the laser light used in fiber-optic communications systems, the protocol exploits the fundamental principles of quantum mechanics, making it impossible for attackers to copy or intercept the information without detection.

Furthermore, the technique guarantees security without compromising the accuracy of the deep-learning models. In tests, the researchers demonstrated that their protocol could maintain 96 percent accuracy while ensuring robust security measures.

“Deep learning models like GPT-4 have unprecedented capabilities but require massive computational resources. Our protocol enables users to harness these powerful models without compromising the privacy of their data or the proprietary nature of the models themselves,” says Kfir Sulimany, an MIT postdoc in the Research Laboratory of Electronics (RLE) and lead author of a paper on this security protocol.

Sulimany is joined on the paper by Sri Krishna Vadlamani, an MIT postdoc; Ryan Hamerly, a former postdoc now at NTT Research, Inc.; Prahlad Iyengar, an electrical engineering and computer science (EECS) graduate student; and senior author Dirk Englund, a professor in EECS, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE. The research was recently presented at the Annual Conference on Quantum Cryptography.

A two-way street for security in deep learning

The cloud-based computation scenario the researchers focused on involves two parties: a client that has confidential data, like medical images, and a central server that controls a deep-learning model.

The client wants to use the deep-learning model to make a prediction, such as whether a patient has cancer based on medical images, without revealing information about the patient.

In this scenario, sensitive data must be sent to generate a prediction. However, throughout the process the patient data must remain secure.

Also, the server doesn’t want to reveal any parts of the proprietary model that a company like OpenAI spent years and millions of dollars building.

“Both parties have something they want to hide,” adds Vadlamani.

In digital computation, a bad actor could easily copy the data sent from the server or the client.

Quantum information, on the other hand, cannot be perfectly copied. The researchers leverage this property, known as the no-cloning principle, in their security protocol.
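For background, the no-cloning principle is a textbook result of quantum information (a standard statement, not specific to this paper): no single operation can duplicate an arbitrary unknown quantum state. A compact statement and proof sketch:

```latex
% Suppose a unitary U cloned every state |psi> into a blank register |s>:
%   U(|\psi\rangle \otimes |s\rangle) = |\psi\rangle \otimes |\psi\rangle .
% Applying this to two states |psi>, |phi> and taking the inner product
% of both sides of the two equations gives
\[
  \langle\psi\vert\phi\rangle \;=\; \langle\psi\vert\phi\rangle^{2}
  \quad\Longrightarrow\quad
  \langle\psi\vert\phi\rangle \in \{0,\,1\},
\]
% so only identical or mutually orthogonal states could ever be copied;
% no device can clone arbitrary, unknown states such as optical fields.
```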

For the researchers’ protocol, the server encodes the weights of a deep neural network into an optical field using laser light.

A neural network is a deep-learning model that consists of layers of interconnected nodes, or neurons, that perform computation on data. The weights are the components of the model that do the mathematical operations on each input, one layer at a time. The output of one layer is fed into the next layer until the final layer generates a prediction.
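As a minimal illustration of that layer-by-layer structure (a generic NumPy sketch of a feed-forward network, not the paper’s optical implementation):

```python
import numpy as np

def forward(x, weights, biases):
    """Run an input through a feed-forward network, one layer at a time.

    Each layer's weight matrix performs the mathematical operations on its
    input; the output of one layer is fed into the next, and the final
    layer produces the prediction.
    """
    activation = x
    for W, b in zip(weights[:-1], biases[:-1]):
        activation = np.maximum(0.0, W @ activation + b)  # ReLU hidden layers
    return weights[-1] @ activation + biases[-1]          # final prediction

# Tiny example: 4 inputs -> 8 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]
print(forward(rng.normal(size=4), weights, biases))
```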

The server transmits the network’s weights to the client, which implements operations to get a result based on their private data. The data remain hidden from the server.

At the same time, the security protocol allows the client to measure only one result, and it prevents the client from copying the weights because of the quantum nature of light.

Once the client feeds the first result into the next layer, the protocol is designed to cancel out the first layer so the client can’t learn anything else about the model.

“Instead of measuring all the incoming light from the server, the client only measures the light that is necessary to run the deep neural network and feed the result into the next layer. Then the client sends the residual light back to the server for security checks,” Sulimany explains.

Due to the no-cloning theorem, the client unavoidably applies tiny errors to the model while measuring its result. When the server receives the residual light from the client, the server can measure these errors to determine whether any information was leaked. Importantly, this residual light is proven not to reveal the client’s data.
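The round trip described above can be caricatured in classical pseudocode. This is only a conceptual sketch with hypothetical names (server_send_layer, client_measure, server_check); the real protocol encodes weights in optical fields, and the Gaussian noise below merely stands in for the unavoidable quantum measurement back-action:

```python
import numpy as np

rng = np.random.default_rng(1)

def server_send_layer(W):
    # Stand-in for encoding one layer's weights into laser light.
    return W.copy()

def client_measure(signal, x, noise_scale=1e-3):
    # The client extracts only the single result it needs (one layer's
    # output); "measuring" unavoidably perturbs the signal with tiny errors.
    result = signal @ x
    residual = signal + rng.normal(scale=noise_scale, size=signal.shape)
    return result, residual

def server_check(sent, residual, threshold=1e-2):
    # The server compares the returned residual with what it sent; deviations
    # much larger than the expected measurement noise would flag an attack.
    return np.abs(residual - sent).mean() < threshold

W = rng.normal(size=(8, 4))   # one layer of proprietary weights
x = rng.normal(size=4)        # the client's private input

signal = server_send_layer(W)
y, residual = client_measure(signal, x)
print("layer output:", y)
print("security check passed:", server_check(W, residual))
```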

A practical protocol

Modern telecommunications equipment typically relies on optical fibers to transfer information because of the need to support massive bandwidth over long distances. Since this equipment already incorporates optical lasers, the researchers can encode data into light for their security protocol without any special hardware.

When they tested their approach, the researchers found that it could guarantee security for both server and client while enabling the deep neural network to achieve 96 percent accuracy.

The tiny bit of information about the model that leaks when the client performs operations amounts to less than 10 percent of what an adversary would need to recover any hidden information. Working in the other direction, a malicious server could obtain only about 1 percent of the information it would need to steal the client’s data.

“You can be guaranteed that it is secure in both ways, from the client to the server and from the server to the client,” Sulimany says.

“A few years ago, when we developed our demonstration of distributed machine learning inference between MIT’s main campus and MIT Lincoln Laboratory, it dawned on me that we could do something entirely new to provide physical-layer security, building on years of quantum cryptography work that had also been shown on that testbed,” says Englund. “However, there were many deep theoretical challenges that had to be overcome to see if this prospect of privacy-guaranteed distributed machine learning could be realized. This didn’t become possible until Kfir joined our team, as Kfir uniquely understood the experimental as well as the theory components needed to develop the unified framework underpinning this work.”

In the future, the researchers want to study how this protocol could be applied to a technique called federated learning, where multiple parties use their data to train a central deep-learning model. The protocol could also be used in quantum operations, rather than the classical operations they studied for this work, which could provide advantages in both accuracy and security.

“This work combines in a clever and intriguing way techniques drawing from fields that do not usually meet, in particular, deep learning and quantum key distribution. By using methods from the latter, it adds a security layer to the former, while also allowing for what appears to be a realistic implementation. This can be interesting for preserving privacy in distributed architectures. I am looking forward to seeing how the protocol behaves under experimental imperfections and its practical realization,” says Eleni Diamanti, a CNRS research director at Sorbonne University in Paris, who was not involved with this work.

This work was supported, in part, by the Israeli Council for Higher Education and the Zuckerman STEM Leadership Program.
