The AI PM's Guide to Building Guardrails for Responsible AI
Overview
What if the biggest threat to your AI product isn't bad actors, but the guardrails you never built?
As AI capabilities explode, product leaders are facing security risks that don't show up in traditional testing. In this session, you'll hear real-world examples of how targeted prompting exposed vulnerabilities, even in systems considered "safe," and learn about the real work behind guardrails that actually scale.
On December 9, Okta’s Engineering Security Director Arun Kumar Elengovan will dive into:
- How prompting reveals hidden security gaps before users ever see them
- Surprising edge cases and model behaviors that highlighted the need for stronger guardrails
- Balancing usability, velocity, and security without slowing teams down
- The real-world playbook for responsible AI: checkpoints, governance, and workflows
- Designing scalable AI governance frameworks that protect innovation rather than restrict it
You’ll leave with tactical frameworks to identify vulnerabilities earlier, build safety mechanisms that scale, and strengthen your AI’s reliability across the entire product lifecycle.
You will also get the chance to take part in our Community Shoutout. Hiring? Growing your team? Searching for a co-founder or beta users? This is your opportunity to share announcements, make connections, and spotlight what you are building.
Seats are limited. RSVP today to save your spot.
Lineup
Good to know
Highlights
- 1 hour
- Online
Location
Online event
Presentation
Q&A
Community Shoutouts
Organized by
Products That Count