Building Responsible & Compliant Generative AI
Wednesday, October 25, 2023, 2:00 PM Eastern Time
Regulation of AI is not new. Both the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) contain provisions addressing machine learning (ML) models used in consumer data processing. US federal law has also applied: Weight Watchers, for example, was fined $1.5 million and required to delete ML algorithms for violating the Children’s Online Privacy Protection Act (COPPA), enforced by the FTC.
Commercial AI was largely limited to major tech giants and the Fortune 500 until the recent arrival of powerful, cloud-hosted large language models (LLMs). Now, as the opportunity for AI applications expands dramatically, traditional governance techniques based on model observability and explainability must be rethought. Meanwhile, AI regulatory engines are roaring, with an array of new US-based AI regulations joining the chorus alongside the EU’s forthcoming AI Act.
What do organizations need to know, and what should they plan to do, to ensure responsible use of AI and regulatory compliance in the LLM era? This expert panel will guide us through this murky, nuanced realm and shed some light.
You’ll hear from:
- Aparna Dhinakaran, Chief Product Officer, Arize AI
- Justin Norman, CTO of Vera, a new product focused on helping companies use LLMs with built-in policy controls and governance
- Nicolette Nowak, VP Legal, Associate General Counsel & Data Protection Officer, Beamery
- Tamara Kneese, Director and Senior Researcher, Data & Society
Register below to access the replay!
Speakers
- Aparna Dhinakaran, Arize AI
- Justin Norman, Vera
- Nicolette Nowak, Beamery
- Tamara Kneese, Data & Society
- Bethann Noble, Continual