Generative AI Privacy Risks

Privacy Risks of Large Language Models (LLMs)

Fig: Gen AI vs. Traditional ML Privacy Risks (Image by Author)

In this article, we focus on the privacy risks of large language models (LLMs) in the context of their scaled deployment in enterprises.

We also see a growing (and worrisome) trend where enterprises apply the privacy frameworks and controls that they designed for their data science / predictive analytics pipelines, as-is, to Gen AI / LLM use-cases.

This is clearly insufficient (and dangerous), and we need to adapt enterprise privacy frameworks, checklists, and tooling to take into account the novel and differentiating privacy aspects of LLMs.

Let us first consider the privacy attack scenarios in a traditional supervised ML context [1, 2]. This covers the majority of the AI/ML world today, with most machine learning (ML) / deep learning (DL) models developed to solve a prediction or classification task.

Fig: Traditional machine (deep) learning privacy risks / leakage (Image by Author)

There are two broad categories of inference attacks: membership inference and property inference attacks.
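
To make the membership inference idea concrete, here is a minimal sketch of a confidence-based attack against a traditional classifier. The setup is purely illustrative (synthetic data, a scikit-learn random forest standing in for the target model, and a simple confidence threshold); real attacks typically train shadow models [1, 2], but the underlying intuition is the same: overfitted models tend to be more confident on records they were trained on, which lets an attacker guess whether a given record was part of the training data.

```python
# Minimal sketch of a confidence-based membership inference attack.
# Illustrative only: synthetic data and a threshold attack; real-world
# attacks usually rely on shadow models [1, 2].
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an enterprise training dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Target model: overfitting widens the member / non-member confidence gap.
target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_train, y_train)

def max_confidence(model, X):
    """Top-class predicted probability for each record."""
    return model.predict_proba(X).max(axis=1)

# Attack: flag a record as a training member if the model's confidence
# on it exceeds a threshold calibrated on known non-member records.
threshold = np.quantile(max_confidence(target_model, X_test), 0.5)
member_scores = max_confidence(target_model, X_train)
nonmember_scores = max_confidence(target_model, X_test)

print(f"Flagged as members (true members):     "
      f"{(member_scores > threshold).mean():.2f}")
print(f"Flagged as members (true non-members): "
      f"{(nonmember_scores > threshold).mean():.2f}")
```

If the model has memorized its training set, the first rate will be far higher than the second, and that gap is precisely the privacy leakage a membership inference attacker exploits.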
