An Introduction to the MLOps Tool Evaluation Rubric

In this webcast, Violet Turri and Emily Newman discuss the challenges of finding the right tools to support Machine Learning Operations

By Software Engineering Institute at Carnegie Mellon University

Date and time

Tuesday, June 17 · 10:30 - 11:30am PDT

Location

Online

About this event

  • Event lasts 1 hour

Organizations looking to build and adopt artificial intelligence (AI)–enabled systems face the challenge of identifying the right capabilities and tools to support Machine Learning Operations (MLOps) pipelines. Navigating the wide range of available tools can be especially difficult for organizations new to AI or those that have not yet deployed systems at scale. This webcast introduces the MLOps Tool Evaluation Rubric, which helps acquisition teams pinpoint organizational priorities for MLOps tooling, customize the rubric to evaluate those key capabilities, and ultimately select tools that effectively support ML developers and systems across the entire lifecycle, from exploratory data analysis to model deployment and monitoring. The webcast walks viewers through the rubric's design and content, shares lessons learned from applying the rubric in practice, and concludes with a brief demo.


What Attendees Will Learn:

  • How to identify and prioritize key capabilities for MLOps tooling within their organizations
  • How to customize and apply the MLOps Tool Evaluation Rubric to evaluate potential tools effectively
  • Best practices and lessons learned from real-world use of the rubric in AI projects


Who Should Attend:

  • Acquisition teams involved in selecting MLOps tools
  • AI/ML engineers and developers interested in tool evaluation and adoption strategies
  • Project managers overseeing AI-enabled system deployments


About the Speakers

Violet Turri is an associate software developer in the SEI AI Division, where she leads AI engineering efforts focused on AI design and adoption, test and evaluation strategies, and MLOps pipelines. She holds a degree in computer science from Cornell University, and her research background in human-computer interaction shapes her user-centered approach to developing and deploying AI systems. By connecting technical innovation with practical application, Violet works to ensure AI solutions are robust, scalable, and user-aligned.

Emily Newman is an associate machine learning engineer in the SEI AI Division, working on applying assured AI to autonomous systems. Emily earned her bachelor's degree in computer science and robotics from Carnegie Mellon University. Prior to joining the SEI, she spent five years at NASA JPL as a software systems engineer and Mars rover operator. Currently, Emily applies her robotics experience to research on AI-enabled autonomous agents operating at the edge.

Organized by

The SEI is a not-for-profit federally funded research and development center (FFRDC) at Carnegie Mellon University.
