AI Deep Research Models

Orbach Science Library (Room 122) or Zoom, Riverside, CA
Tuesday, May 12  •  10 AM - 11:30 AM
Overview

This workshop will provide an introduction to today's top AI large language models (LLMs) and the current AI benchmark landscape for research and learning.

AI Tools Series

AI large language models have demonstrated rapid performance progress in the last few years across reasoning, coding, and domain-specific research benchmarks, including Abstract Reasoning (ARC), Coding (SWE-Bench), and Graduate-Level Academic Disciplines (GPQA Diamond). For researchers and learners at all levels, the critical question is which model best fits their specific research and learning needs and academic disciplinary contexts.

This workshop dives deep into the current AI LLM benchmark landscape, examining use cases for both academic research and learning. Participants will review how today's leading global LLMs, including both Western and open-source Chinese models, perform on recent top benchmarks, and will explore each model's unique affordances, strengths, and limitations. The session will be valuable for researchers at all levels, including students, faculty, and staff, who want to make informed decisions about which models to integrate into their research and learning workflows.

Beyond model selection, the workshop will offer practical recommendations for working with models across a range of research and learning tasks, including strategies for human/AI collaboration and obtaining strong results through cross-model meta-analysis, synthesis of research and data, and report-based and data-driven visualization.

All UCR community members are welcome to attend. Make sure to register with your UCR email. The Zoom link will be emailed to registered participants one hour before the workshop starts. Participants are expected to follow and uphold UC Riverside's Principles of Community.

Good to know

Highlights

  • 1 hour 30 minutes
  • In person or online (Zoom)

Location

Orbach Science Library (Room 122) or Zoom

900 University Ave

Riverside, CA 92521

Organized by
UC Riverside Library