
Accelerating AI tasks while preserving data security


With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand.

Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers.

Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that preserve data security while boosting performance.

Their search tool, known as SecureLoop, is designed to consider how the addition of data encryption and authentication measures will affect the performance and energy usage of the accelerator chip. An engineer could use this tool to obtain the optimal design of an accelerator tailored to their neural network and machine-learning task.

Compared to conventional scheduling techniques that don’t consider security, SecureLoop can improve the performance of accelerator designs while keeping data protected.

Using SecureLoop could help a user improve the speed and performance of demanding AI applications, such as autonomous driving or medical image classification, while ensuring sensitive user data remain safe from some types of attacks.

“If you are interested in doing a computation where you are going to preserve the security of the data, the rules that we used before for finding the optimal design are now broken. So all of that optimization needs to be customized for this new, more complicated set of constraints. And that’s what [lead author] Kyungmi has done in this paper,” says Joel Emer, an MIT professor of the practice in computer science and electrical engineering and co-author of a paper on SecureLoop.

Emer is joined on the paper by lead author Kyungmi Lee, an electrical engineering and computer science graduate student; Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research will be presented at the IEEE/ACM International Symposium on Microarchitecture.

“The community passively accepted that adding cryptographic operations to an accelerator will introduce overhead. They thought it would introduce only a small variance in the design trade-off space. But, this is a misconception. In fact, cryptographic operations can significantly distort the design space of energy-efficient accelerators. Kyungmi did a fantastic job identifying this issue,” Yan adds.

Secure acceleration

A deep neural network consists of many layers of interconnected nodes that process data. Typically, the output of one layer becomes the input of the next layer. Data are grouped into units called tiles for processing and transfer between off-chip memory and the accelerator. Each layer of the neural network can have its own data tiling configuration.

A deep neural network accelerator is a processor with an array of computational units that parallelizes operations, like multiplication, in each layer of the network. The accelerator schedule describes how data are moved and processed.
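To make the tiling and scheduling ideas concrete, here is a minimal sketch, not SecureLoop itself, of how one layer’s data might be split into tiles that move between off-chip memory and the accelerator. The tile dimensions, loop order, and function name are illustrative assumptions only.

```python
# Minimal sketch (not SecureLoop): split one layer's activation map into tiles
# that are moved between off-chip memory and the accelerator one at a time.
def make_tiles(height, width, tile_h, tile_w):
    """Yield (row, col, rows, cols) for each tile of a height x width layer."""
    for r in range(0, height, tile_h):
        for c in range(0, width, tile_w):
            yield r, c, min(tile_h, height - r), min(tile_w, width - c)

# A toy "schedule": visit 32 x 32 tiles of a 128 x 96 layer in row-major order.
# Each layer of the network could use a different tiling configuration.
schedule = list(make_tiles(128, 96, 32, 32))
print(len(schedule), "tiles; first tile:", schedule[0])
```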

Since space on an accelerator chip is at a premium, most data are stored in off-chip memory and fetched by the accelerator when needed. But because data are stored off-chip, they’re vulnerable to an attacker who could steal information or change some values, causing the neural network to malfunction.

“As a chip manufacturer, you can’t guarantee the security of external devices or the overall operating system,” Lee explains.

Manufacturers can protect data by adding authenticated encryption to the accelerator. Encryption scrambles the data using a secret key. Then authentication cuts the data into uniform chunks and assigns a cryptographic hash to each chunk, which is stored along with the data chunk in off-chip memory.

When the accelerator fetches an encrypted chunk of data, known as an authentication block, it uses a secret key to recover and verify the original data before processing it.
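As a rough illustration of what authenticated encryption does (the article does not say which scheme the accelerator uses; AES-GCM is assumed here purely for the example, and it relies on the third-party cryptography package):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)      # secret key held by the accelerator
aead = AESGCM(key)

chunk = bytes(range(64))                       # one "authentication block" of off-chip data
nonce = os.urandom(12)                         # must be unique per block
ciphertext = aead.encrypt(nonce, chunk, None)  # output carries an authentication tag

# On fetch, decryption and verification happen together; a tampered block
# raises cryptography.exceptions.InvalidTag instead of returning bad data.
recovered = aead.decrypt(nonce, ciphertext, None)
assert recovered == chunk
```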

But the sizes of authentication blocks and data tiles don’t match up, so there could be multiple tiles in one block, or a tile could be split between two blocks. The accelerator can’t arbitrarily grab a fraction of an authentication block, so it may end up grabbing extra data, which uses additional energy and slows down computation.

Plus, the accelerator still must run the cryptographic operation on each authentication block, adding even more computational cost.
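A back-of-the-envelope sketch of that overhead, with made-up numbers: a tile that straddles authentication-block boundaries forces the accelerator to fetch whole blocks it only partially needs.

```python
import math

def fetch_overhead(start, size, block_bytes):
    """Extra bytes fetched when a tile at byte offset `start` of length `size`
    must be read as whole authentication blocks of `block_bytes` each."""
    first = start // block_bytes
    last = math.ceil((start + size) / block_bytes)
    return (last - first) * block_bytes - size

# A 1,000-byte tile starting at byte 100 with 512-byte blocks touches 3 blocks,
# so 1,536 bytes are fetched: 536 bytes of extra traffic plus 3 crypto checks.
print(fetch_overhead(100, 1000, 512))   # -> 536
```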

An efficient search engine

With SecureLoop, the MIT researchers sought a method that could identify the fastest and most energy-efficient accelerator schedule: one that minimizes the number of times the device must access off-chip memory to grab extra blocks of data due to encryption and authentication.

They began by augmenting an existing search engine that Emer and his collaborators previously developed, called Timeloop. First, they added a model that could account for the additional computation needed for encryption and authentication.

Then, they reformulated the problem into a simple mathematical expression, which enables SecureLoop to find the ideal authentication block size in a far more efficient manner than searching through all possible options.

“Depending on how you assign this block, the amount of unnecessary traffic might increase or decrease. If you assign the cryptographic block cleverly, then you can just fetch a small amount of additional data,” Lee says.
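SecureLoop finds the best block size analytically through that mathematical expression; the brute-force scan below is only a hedged illustration of the trade-off being balanced, and the cost model (extra bytes fetched plus a per-block cryptographic cost) is an assumption for this sketch, not the paper’s actual model.

```python
def blocks_touched(start, size, block_bytes):
    """Number of authentication blocks a tile overlaps (ceiling minus floor)."""
    return -(-(start + size) // block_bytes) - start // block_bytes

def total_cost(tile_starts, tile_size, block_bytes, crypto_cost_per_block=50):
    cost = 0
    for s in tile_starts:
        n = blocks_touched(s, tile_size, block_bytes)
        cost += n * block_bytes - tile_size       # extra bytes of off-chip traffic
        cost += n * crypto_cost_per_block         # per-block authentication work
    return cost

tiles = [i * 1000 for i in range(16)]             # 16 contiguous 1,000-byte tiles
candidates = [128, 256, 512, 1024, 2048]
best = min(candidates, key=lambda b: total_cost(tiles, 1000, b))
print("best block size under this toy model:", best)
```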

Finally, they incorporated a heuristic technique that ensures SecureLoop identifies a schedule that maximizes the performance of the entire deep neural network, rather than just a single layer.

In the end, the search engine outputs an accelerator schedule, which includes the data tiling strategy and the size of the authentication blocks, that provides the best possible speed and energy efficiency for a specific neural network.

“The design spaces for these accelerators are huge. What Kyungmi did was figure out some very pragmatic ways to make that search tractable so she could find good solutions without needing to exhaustively search the space,” says Emer.

When tested in a simulator, SecureLoop identified schedules that were up to 33.2 percent faster and exhibited a 50.2 percent better energy-delay product (a metric related to energy efficiency) than other methods that didn’t consider security.
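For reference, the energy-delay product is simply energy multiplied by execution time, with lower values being better. The numbers below are invented solely to show how such a percentage comparison is computed; they are not measurements from the paper.

```python
baseline_energy, baseline_delay = 1.0, 1.0     # arbitrary units, hypothetical baseline
secure_energy, secure_delay = 0.83, 0.60       # hypothetical improved design

edp_baseline = baseline_energy * baseline_delay
edp_secure = secure_energy * secure_delay
improvement = (edp_baseline - edp_secure) / edp_baseline * 100
print(f"EDP improvement: {improvement:.1f}%")  # 50.2% with these toy numbers
```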

The researchers also used SecureLoop to explore how the design space for accelerators changes when security is taken into account. They learned that allocating a bit more of the chip’s area to the cryptographic engine and sacrificing some space for on-chip memory can lead to better performance, Lee says.

In the future, the researchers want to use SecureLoop to find accelerator designs that are resilient to side-channel attacks, which occur when an attacker has access to physical hardware. For instance, an attacker could monitor the power consumption pattern of a device to obtain secret information, even if the data have been encrypted. They are also extending SecureLoop so it can be applied to other kinds of computation.

This work is funded, in part, by Samsung Electronics and the Korea Foundation for Advanced Studies.
