Generative AI Privacy Risks
by Debmalya Biswas, Jul 2024



Privacy Risks of Large Language Models (LLMs)

Fig: Gen AI vs. Traditional ML Privacy Risks (Image by Author)

In this article, we focus on the privacy risks of large language models (LLMs), with respect to their scaled deployment in enterprises.

We also see a growing (and worrisome) trend where enterprises apply the privacy frameworks and controls designed for their data science / predictive analytics pipelines, as-is, to Gen AI / LLM use-cases.

This is clearly inefficient (and risky). Enterprise privacy frameworks, checklists, and tooling need to be adapted to account for the novel and differentiating privacy aspects of LLMs.

Let us first consider the privacy attack scenarios in a traditional supervised ML context [1, 2]. This covers the majority of the AI/ML world today, where machine learning (ML) / deep learning (DL) models are developed with the goal of solving a prediction or classification task.

Fig: Traditional machine (deep) learning privacy risks / leakage (Image by Author)

There are mainly two broad categories of inference attacks: membership inference and property inference. In a membership inference attack, the adversary tries to determine whether a specific data point was part of the model's training set; in a property inference attack, the adversary tries to infer aggregate properties of the training data that the model was never intended to reveal.
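To make the membership inference risk concrete, here is a minimal sketch of a confidence-thresholding attack against a supervised classifier. Everything in it (the synthetic dataset, the RandomForestClassifier target model, the 0.9 threshold) is an illustrative assumption, not part of the original article; the underlying idea is simply that overfit models tend to be more confident on their training points, and an attacker can exploit that gap.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: train a target model on half the data, then test
# whether a confidence threshold separates members (training points)
# from non-members (held-out points).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_member, y_member)

def max_confidence(model, X):
    # The attacker's signal: models tend to be more confident on
    # examples they were trained on.
    return model.predict_proba(X).max(axis=1)

member_conf = max_confidence(target_model, X_member)
nonmember_conf = max_confidence(target_model, X_nonmember)

# Declare "member" whenever confidence exceeds an (assumed) threshold.
threshold = 0.9
tpr = (member_conf > threshold).mean()     # members correctly flagged
fpr = (nonmember_conf > threshold).mean()  # non-members wrongly flagged
print(f"Membership inference attack: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

On an overfit model like an unregularized random forest, the member/non-member confidence gap is large; regularization shrinks it, which is why overfitting is a key driver of membership inference risk.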
