Lessons from Using LLMs to Check Software Security
Overview
Large language models (LLMs) like ChatGPT are transforming how software is developed and evaluated, although claims that AI will replace programmers are often overstated. This session examines the core technologies behind LLMs and their role in generating and assessing software, emphasizing how training data that includes insecure coding practices shapes their output. We draw on historical analyses of over 100 million lines of code in languages such as C, C++, and Java to evaluate the performance of ChatGPT 3.5, ChatGPT 4, and Copilot. Participants will gain a comprehensive understanding of the advantages and pitfalls of LLMs, strategies for mitigating the associated risks, and a view of the future of secure AI-driven software development.
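To make the concern concrete, consider the kind of insecure C idiom that pervades legacy codebases and, by extension, LLM training data. The sketch below is illustrative only and is not taken from the session; it shows a classic unbounded string copy (a stack buffer overflow, CWE-121) alongside a bounded alternative, the sort of pattern a model trained on old code can reproduce:

```c
/* Illustrative sketch only, not code from the presentation.
 * Models trained on legacy C code frequently emit the unsafe form. */
#include <stdio.h>
#include <string.h>

#define BUF_LEN 16

/* Unsafe: strcpy() performs no bounds check, so any input longer
 * than BUF_LEN - 1 bytes overflows buf (stack buffer overflow). */
void copy_unsafe(const char *input) {
    char buf[BUF_LEN];
    strcpy(buf, input);
    printf("copied: %s\n", buf);
}

/* Safer: snprintf() bounds the copy and guarantees NUL termination,
 * truncating oversized input instead of corrupting the stack. */
void copy_safe(const char *input) {
    char buf[BUF_LEN];
    snprintf(buf, sizeof buf, "%s", input);
    printf("copied: %s\n", buf);
}

int main(void) {
    const char *long_input = "this string is longer than sixteen bytes";
    /* copy_unsafe(long_input) would overflow buf; the bounded
     * variant truncates safely. */
    copy_safe(long_input);
    return 0;
}
```

Whether an LLM offers the first or the second version for a given prompt depends heavily on what it has seen in training, which is exactly the risk the session explores.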
About the Presenter
Dr. Mark Sherman is the Technical Director of the Cybersecurity Foundations directorate in the CERT division of the Carnegie Mellon University Software Engineering Institute (CMU SEI).
Dr. Sherman leads a diverse team of researchers and engineers on projects spanning foundational research on the lifecycle for building secure software, data-driven analysis of cybersecurity, cybersecurity of quantum computers, cybersecurity for and enabled by machine learning applications, and digital media authenticity. Previously, Dr. Sherman worked at IBM and at various startups on mobile systems, integrated hardware-software appliances, transaction processing, languages and compilers, virtualization, network protocols, and databases.
Good to know
Highlights
- 1 hour
- Online
Location
Online event
Organized by
The Software Excellence Alliance (SEA)