Glasgow Computing Science Innovation Lab - LLMs for Software Development

By School of Computing Science

Overview

Leveraging LLMs in the Software Development Lifecycle: Practical Insights on Code Quality and Test Generation

Glasgow Computing Science Innovation Lab (GLACSIL) welcomes colleagues, researchers, GLACSIL partners, and industry representatives to the latest in our series of hybrid events, where hot topics in computing and innovation are explored over a working lunch and networking.


Speakers: Dr Gul Calikli and Dr Debasis Ganguly, School of Computing Science, University of Glasgow

Large Language Models (LLMs) such as GPT-4o are becoming powerful tools in modern software engineering—powering code assistants, enabling rapid prototyping, and now playing a growing role in testing. This talk shares two industry-relevant research studies focused on helping developers make the most of LLMs in real-world workflows.

The first study addresses a common challenge in feature-driven and rapid development environments: how do you assess the quality of generated code when you don’t yet have tests? We present a practical technique that uses in-context learning (ICL) to estimate the functional correctness of LLM-generated code by analyzing ranked alternatives—similar to how search engines rank results. By showing LLMs examples of correct code during generation, developers can get more reliable signals about which output is most likely to work.
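
To make the ranking idea concrete, here is a minimal, hypothetical sketch in Python (not the speakers' implementation): known-correct solutions are placed in the prompt as in-context examples, several candidate outputs are scored, and the candidates are ordered so the most promising one surfaces first. The helper names (`build_prompt`, `score_candidate`, `rank_candidates`) and the difflib-based similarity heuristic are illustrative assumptions only; the study's actual correctness estimator is described in the talk.

```python
import difflib

# Hypothetical pool of known-correct solutions; in practice these would come
# from a curated set and be shown to the LLM in the prompt (in-context learning).
CORRECT_EXAMPLES = [
    "def add(a, b):\n    return a + b",
    "def is_even(n):\n    return n % 2 == 0",
]

def build_prompt(task: str) -> str:
    """Prepend known-correct examples to the task description (ICL)."""
    shots = "\n\n".join(f"# Correct example\n{ex}" for ex in CORRECT_EXAMPLES)
    return f"{shots}\n\n# Task\n{task}\n"

def score_candidate(candidate: str) -> float:
    """Toy proxy score: text similarity to the in-context correct examples.
    The study estimates functional correctness from ranked alternatives;
    this heuristic only illustrates the ranking step."""
    return max(
        difflib.SequenceMatcher(None, candidate, ex).ratio()
        for ex in CORRECT_EXAMPLES
    )

def rank_candidates(candidates: list[str]) -> list[str]:
    """Order generated alternatives so the most promising one comes first,
    much like a search engine ranks results."""
    return sorted(candidates, key=score_candidate, reverse=True)

if __name__ == "__main__":
    prompt = build_prompt("Write a function add(a, b) that returns the sum of a and b.")
    # `candidates` stands in for several samples an LLM would return for `prompt`.
    candidates = [
        "def add(a, b):\n    return a - b",   # plausible-looking but wrong
        "def add(a, b):\n    return a + b",   # functionally correct
    ]
    best = rank_candidates(candidates)[0]
    print(best)
```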

The second study dives into automated unit test generation—a time-consuming but critical task. We evaluate how LLMs perform when prompted with different types of test examples: human-written, generated by traditional tools (such as search-based software testing, SBST), and LLM-generated. Our findings, based on popular benchmarks and GPT-4o (used in tools like GitHub Copilot), show that the right few-shot examples—especially human-written ones—can significantly improve test quality and code coverage. We also demonstrate how combining code and problem similarity helps select the most effective examples automatically.
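
As a rough, hypothetical sketch of the "combine code and problem similarity" idea (the function names, the difflib-based similarity measure, and the equal weighting are illustrative assumptions, not the study's actual design), example selection can be pictured as scoring each candidate few-shot example by a weighted mix of how similar its problem description and its code are to the target, then keeping the top-k:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Simple string similarity in [0, 1]; a stand-in for whatever
    similarity measure the study actually uses."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def select_examples(target_problem: str, target_code: str,
                    pool: list[dict], k: int = 2, alpha: float = 0.5) -> list[dict]:
    """Pick the k pool entries whose problem description and code are jointly
    most similar to the target, weighting the two signals with `alpha`.
    Each pool entry is assumed to look like:
      {"problem": ..., "code": ..., "test": ...}
    """
    def combined(entry: dict) -> float:
        return (alpha * similarity(target_problem, entry["problem"])
                + (1 - alpha) * similarity(target_code, entry["code"]))
    return sorted(pool, key=combined, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        {"problem": "Reverse a string", "code": "def rev(s): return s[::-1]",
         "test": "assert rev('ab') == 'ba'"},
        {"problem": "Sum two integers", "code": "def add(a, b): return a + b",
         "test": "assert add(1, 2) == 3"},
    ]
    shots = select_examples("Add two numbers", "def add(x, y): return x + y", pool, k=1)
    print(shots[0]["test"])  # the chosen few-shot test example to include in the prompt
```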

Packed with actionable insights, this session will help practitioners understand how to better guide LLMs, improve the reliability of generated code, and boost the effectiveness of automated testing—all without overhauling existing workflows.



Category: Science & Tech, High Tech

Good to know

Highlights

  • 1 hour 45 minutes
  • In person

Location

Advanced Research Centre (ARC), University of Glasgow

11 Chapel Lane

Glasgow G11 6EW, United Kingdom

Organized by

School of Computing Science


Free
Nov 10 · 12:15 PM GMT