$239 – $285

The Next AI Platform

Event Information

Date and Time

Location

The Glasshouse

2 South Market Street

San Jose, CA 95113

Refund Policy

No Refunds

Event description

Much of what sets The Next Platform apart from other tech publications is depth and analysis. As it turns out, the key to both is knowing what questions to ask and pushing for answers that go beyond the basics and cut through marketing and hype.

This time we are conducting interviews in a new format—and we want you involved in the process.

Please join us on May 9th for an all-day event featuring the same in-depth conversations you expect, live on stage, followed by a cocktail reception and evening dinner opportunities for networking with key people defining the next generation of AI infrastructure.

Meet the Next Platform team with plenty of time to talk about what matters to you, get first access to exclusive interviews, and spend the day with us in an intimate setting at San Jose’s premier event venue, The Glasshouse.

No marketing, no hype, no PowerPoint presentations, no one-sided vendor material. Just some of the best interviewers in the high-end infrastructure space and a lineup of thought leaders building the next generation of large-scale infrastructure to support emerging AI workloads. More on our approach to this topic can be found here. Because like you, we have attended plenty of events that were enjoyable but offered presentations that were too general (or too specific), were pure marketing, or provided little real insight because of limited time for questions and a format that did not encourage well-rounded delivery of insight.

Registration in mid-March: $285

Registration April through early May: $345

The Next AI Platform will feature live in-depth interviews with those at the forefront of both the technology creation and end user sides of the AI infrastructure spectrum.

Our agenda as it evolves is below, but take a quick peek at just a fraction of the day’s content…

Your hosts for the wide range of in-depth, technical interviews: Timothy Prickett Morgan, Co-Founder and Co-Editor, The Next Platform, and Nicole Hemsoth, Co-Founder and Co-Editor, The Next Platform. Additional interviews will be conducted by contributing editor Stacey Higginbotham and analyst/contributing author Paul Teich.

FROM LARGE-SCALE ML INFRASTRUCTURE AT TWITTER TO THE FUTURE OF TRAINING

This live interview with Clement Farabet will chart the shifting hardware requirements for deep learning at Twitter scale over time and bring us to the present, where both training and inference requirements are pushing server infrastructure in new directions, as seen from his current vantage point at Nvidia. This conversation will focus on the shifting needs of AI hardware infrastructure and look ahead to a future with larger, more complex training sets and models (GANs, etc.) and efficient inference.

ARCHITECTURAL AND INFRASTRUCTURE EVOLUTION FOR AI AT SCALE

Greg Diamos from Baidu Research will detail the evolution of architectures and the impact on overall infrastructure during Baidu’s long journey to continue scaling AI. We will talk about how they considered various accelerators and the system-level impacts of those decisions, as well as how they evaluated price, performance, and efficiency along the way.

AI INFRASTRUCTURE IN CONTEXT: NASA’S AI HARDWARE JOURNEY

We will talk mission-critical deep learning in this interview with NASA’s Graham Mackintosh, focusing on building and using AI hardware infrastructure for the agency’s space weather prediction program. We will discuss balancing hardware with application demands and the path to making the right decisions on cost, performance, efficiency, and multi-application suitability.


EVALUATING AI PROCESSOR AND ACCELERATOR PERFORMANCE

Greg Diamos will return for a detailed session on evaluating various architectures for AI performance, including discussion of which benchmarks are representative and how they can be used. We will also look ahead at MLPerf, including its coming inference chip benchmark, which should be released not long after our event. Greg will be joined by Peter Mattson from Google and Lingjie Xu from Alibaba, as well as David Kanter on behalf of the MLPerf effort.

AI CHIP STARTUP ECOSYSTEM: IN-DEPTH INTERVIEW SERIES FOLLOWED BY PANEL

We will sit down with the leaders behind some of the most noteworthy AI chip architectures, including Nigel Toon, co-founder and CEO of Graphcore, and Jin Kim, VP and Chief Data Scientist at Wave Computing, for individual interviews, followed by a panel where they will be joined by other architecture startup founders, including Mike Henry, co-founder and CEO of AI chip startup Mythic, for a view into how inference-specific chips fit into this evolving ecosystem.

VC PANEL: FUTURE CONCEPTS & DIRECTIONS FOR AI HARDWARE INVESTMENTS

This panel will feature Vijay Reddy of Intel Capital, Kanu Gulati of Khosla Ventures, Michael Stewart of Applied Ventures, and Rob Cihra, as well as other VCs, as we discuss the balance (and lack thereof) in hardware investments in AI chips, software, and other elements of the stack, including storage and networking. We will talk about how and where the market might drive the industry and how that meshes with real (and perceived) demand. In other words, we’ll be focused on asking what is real in this space and what might not stand the test of time.

INTEGRATING AI INTO EXISTING MISSION-CRITICAL SYSTEMS

Weather agency NOAA is a prime example of an organization with mission-critical systems that simply need to work with predictable performance and efficiency at all times. While NOAA is keenly aware that there could be improvements over time from adding AI into existing workflows, this takes some serious thought. This process of evaluation is no different from that at large companies that see AI benefits but need to think carefully about how and where it fits, and whether it is hardened and stable enough to qualify for critical systems. We’ll learn about this process from Mark Govett, head of HPC at NOAA, who has over 30 years of experience with the agency’s systems and software.

EVOLVING AI INFRASTRUCTURE AND THE STORAGE/IO IMPACT

This section will feature a number of rapid-fire interviews, with some starting points from storage guru Gary Grider. The interviews will examine the current state of storage infrastructure, starting with a hyperscale end user view as well as an end user view representing the challenges of parallel file systems failing to keep up with mixed AI workloads. These two perspectives will branch out into the various ways these file and storage system problems are being addressed via NVMe-over-fabrics and other tweaks. There are five interviews scheduled for this storage-focused section, including Renen Hallak, Andy Watson, Curtis Anderson, Lior Gal, and others.

DEEP LEARNING AT SUPERCOMPUTER SCALE PART I (AI INFRASTRUCTURE DECISION-MAKING)

A deep dive with renowned UC Berkeley researcher and NERSC group leader Prabhat about the trickle-down of various parts of the hardware and software stacks in high performance computing to enterprise, hyperscale, and cloud AI efforts. From accelerators to applications, we will focus on the common elements between HPC and AI and what the two still have to learn from one another, as well as how they are diverging.

Following the infrastructure and systems-level requirements for deep learning at scale discussed above, we will put this in AI framework and application context with Fred Streitz, director of the HPC Innovation Center at Lawrence Livermore National Laboratory. Here we will use real application examples to talk about the limitations and benefits of certain architectures (ranging from processors/accelerators to storage and I/O, memory, etc.) for demanding AI workloads.

And much more…

CLICK HERE TO REGISTER NOW, SPACE IS LIMITED

EVOLVING LINEUP FOR THE PACKED DAY

9:00 – 9:30 – Registration with light snacks, coffee, networking

9:30 – 9:45 – Introduction with hosts, co-founders, co-editors of The Next Platform, Timothy Prickett Morgan and Nicole Hemsoth.

9:45 – 10:00 – AI Infrastructure in Context: NASA’s AI Hardware Journey – Live interview with Graham Mackintosh

10:00 – 11:00 – Architecture/Accelerator Focus: In-depth interviews focused on how specific datacenter chip architectures fit against the current norms for training and where both applications and end users might take such emerging technologies. Interviews with Jin Kim, Chief Data Science Officer at Wave Computing; Nigel Toon, co-founder of AI chip startup Graphcore; and Gaurav Singh of Xilinx, with time for brief audience Q&A following each.

11:00 – 11:20 – Coffee, Networking Break

11:20 – 11:40 – AI Chip/Accelerator Panel – the above participants remain on stage for a deep dive panel hosted by analyst and Next Platform contributor, Paul Teich, featuring the addition of Mike Henry of AI chip startup, Mythic.

11:30 – 12:00 – VC Panel: Future Concepts and Directions for AI Hardware Investments hosted by Next Platform contributor Stacey Higginbotham, with Vijay Reddy (Intel Capital), Kanu Gulati (Khosla Ventures), Michael Stewart (Applied Ventures). Brief audience Q&A to follow.

12:00 – 12:10 – Platinum Spotlights: AMD, Cray, DDN

12:10 – 1:00 – Lunch and networking

1:00 – 1:10 – Platinum Spotlights: Excelero, IBM, Panasas

1:10 – 1:30 – From Large Scale AI Infrastructure at Twitter to the Future of Training with Clement Farabet.

1:30 – 1:50 – Integrating AI Into Mission-Critical Systems – Live interview with Mark Govett, NOAA

1:50 – 2:05 – Kick-off to Topic: System Balance and Key Oversights in Building AI Machines with Dave Turek and Bob Picciano.

1:50 – 2:50 – I/O and Systems-Level Views of AI – Rapid-fire interviews (followed by panel to wrap up key ideas) – Andy Watson, Renen Hallak, Curtis Anderson, and Lior Gal.

2:50 – 3:00 – The Big Picture for AI Storage and IO panel hosted by storage guru Gary Grider of Los Alamos National Lab with the interviewees above. (Topics to include momentum in NVMe and NVMe-over-fabrics, flash, memory balance considerations, and the main challenges in building AI systems from an I/O infrastructure view.)

3:00 – 3:10 – Platinum Spotlights: Graphcore, Wave, Xilinx

3:10 – 3:30 – Break/Networking

3:30 – 3:40 – Platinum Spotlights: WekaIO, Vast Data, Nvidia

3:40 – 4:00 – Architecture and Infrastructure Evolution: Lessons Learned Building AI Systems at Scale – Greg Diamos

4:00 – 4:20 – Evaluating AI Processor and Accelerator Performance – Hosted by Nicole Hemsoth, featuring David Kanter (MLPerf), Peter Mattson (Google), Lingjie Xu (Alibaba).

4:40 – 5:00 – New Competitiveness in the Ecosystem: A General Purpose Processing POV (AMD)

5:00 – 5:45 – Deep Learning at Supercomputer Scale – Kicks off with insights from leading researchers at Lawrence Berkeley National Lab, including Prabhat and Brian Spears, about their lessons learned integrating AI into HPC systems, followed by perspective from supercomputer maker Cray via Per Nyberg on how this is happening at scale in commercial HPC and will continue to grow over coming years.

5:45 – 6:00 – Closing remarks from The Next Platform team

6:00 – 7:30 – Join us on the patio (weather permitting) for our happy hour to discuss the day’s conversations. Light refreshments, beer, wine, etc.

The Glasshouse is located in the heart of downtown San Jose with many options for hotels, parking, and dining within close walking distance. The venue is known for the quality of its catering, with excellent food (provided for all attendees throughout the day) and drinks, including those served at the Next Platform Happy Hour that follows the event and takes place (weather permitting) on the attractive Glasshouse patio.
