February 3, 2025 | Intelligence Community News
By Gina Scinta, Deputy CTO, Thales Trusted Cyber Technologies
As artificial intelligence and machine learning continue to extend into all aspects of the enterprise, there is a growing need to protect sensitive private data in Large Language Models (LLMs). This is true not only for data at rest and in transit, but also for data in use during computation.
Protection efforts for LLMs center on the backend framework, which stores all data users submit to the LLM, along with user credentials, logs, metadata, and more. Any prompt or response that is stored can contain sensitive data that requires protection.
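As one illustration of the point above, sensitive values can be redacted from prompts and responses before they ever reach the backend store. The following sketch is an assumption for illustration only (the article does not prescribe a specific technique, and the patterns and function names here are hypothetical); real deployments would pair redaction with encryption and a far more complete PII detector.

```python
import re

# Illustrative patterns only -- a production system would use a much
# broader sensitive-data detector than these two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive values with typed placeholders
    before the prompt or response is logged or stored."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com, SSN 123-45-6789, about the contract."
print(redact(prompt))
# → Email [EMAIL], SSN [SSN], about the contract.
```

Redacting before storage shrinks the sensitive footprint of the backend, but it does not replace encryption of the data that remains at rest.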
To understand how LLM data protection works in two separate but closely related use cases, we should begin by breaking down the components of a typical LLM framework.