Devon Kerr

Now available: The LLM safety assessment

Check out the newest report from Elastic Security Labs, which explores how you can protect your organization from LLM threats.

1 min lireRapports
Now available: The LLM safety assessment

Today Elastic Security Labs publishes our LLM safety assessment report, a research endeavor meant to collect and clarify information about practical threats to large language models. These forms of generative AI are likely to become ubiquitous in the near future-- but we need to consider the security of them a little sooner than that.

One of the most immediate and significant challenges-- and this is true of every new data source-- is understanding the properties and characteristics of the data, if it exists. You can read more about that process in this excellent pair of articles, which speak to a challenge many detection engineers are facing today.

New data sources are problematic in a unique way: with no visibility to rank malicious techniques by popularity, how does a detection engineer determine the most effective detections? Mapping fields and normalizing a data source is a good initial step that makes it possible to begin investigating; it's exciting to be a little closer to the answer today than we were yesterday.

Check out the new report, browse our prior research on this topic, and join us in preparing for tomorrow.

Partager cet article