LLM Applications: Security, Reliability, Trust & Safety

Aug 16, 2023

01:00 PM Eastern Time (US and Canada)

Despite rapid progress and rising commercial interest in Large Language Model (LLM) applications, critical gaps remain in the development and adoption of solutions that ensure their security, reliability, and safety. From risks like sensitive data leakage and prompt injection attacks to malicious LLM alignment by bad actors, broad industry understanding of LLM application vulnerabilities, and of how to address them, is still being forged in the shadows of the hype.

This panel will shed light on LLM application risks and the experts working hands-on to expose their complexity while developing mitigation strategies and solutions. We’ll discuss:

  • LLM application vulnerabilities across security, reliability, and trust & safety, including data leakage, prompt injection attacks, and lack of human alignment or outright misalignment.

  • Emerging and available standards and solutions for addressing LLM application vulnerabilities, spanning open source projects and commercial software.

  • Threats involving fully autonomous agents and AGI: analysis, predictions, and suggestions for preventing worst-case scenarios.

You’ll hear from:

  • Kai Greshake, independent security researcher whose recent work on LLMs and security risk has been featured in MIT Technology Review and Wired magazine

  • Will Pienaar, creator of Feast, the leading open source feature store, and of Rebuff, software for detecting LLM prompt injection

  • Frank Walsh, Field CTO at HUMAN Security, the world’s largest internet fraud sensor network

  • Tristan Zajonc, CEO & Co-Founder of Continual, who will moderate the discussion

Register below to access the replay!