The AI PM's Guide to Building Guardrails for Responsible AI

By Products That Count
Online event

Overview

A guide to how AI PMs can build guardrails that scale, with real-world examples of catching hidden risks before they ship.

What if the biggest threat to your AI product isn't bad actors, but the guardrails you never built?

As AI capabilities explode, product leaders are facing security risks that don't show up in traditional testing. In this session, you'll hear real-world examples of how targeted prompting exposed vulnerabilities, even in systems considered "safe," and learn about the real work behind guardrails that actually scale.

On December 9, Okta’s Engineering Security Director Arun Kumar Elengovan will dive into:

  • How prompting reveals hidden security gaps before users ever see them
  • Surprising edge cases and model behaviors that highlighted the need for stronger guardrails
  • Balancing usability, velocity, and security without slowing teams down
  • The real-world playbook for responsible AI: checkpoints, governance, and workflows
  • Designing scalable AI governance frameworks that protect innovation rather than restrict it

You’ll leave with tactical frameworks to identify vulnerabilities earlier, build safety mechanisms that scale, and strengthen your AI’s reliability across the entire product lifecycle.

You will also get the chance to take part in our Community Shoutout. Hiring? Growing your team? Searching for a co-founder or beta users? This is your opportunity to share announcements, make connections, and spotlight what you are building.

Seats are limited. RSVP today to save your spot.


Category: Science & Tech, Science

Highlights

  • 1 hour
  • Online

Location

Online event

Agenda
12:00 PM - 12:30 PM

Presentation

12:30 PM - 12:45 PM

Q&A

12:45 PM - 1:00 PM

Community Shoutouts

Organized by

Products That Count

Free
Dec 9 · 12:00 PM PST