Artistic Innovation with AI

Two-Day Intensive on Image Making, Prompt Engineering & Node-Based Workflows

By Integrated Design & Media Program at NYU Tandon

Date and time

July 19 · 10am - July 20 · 5pm EDT

Location

NYU Tandon @ The Yard

Mc Donough Avenue, Brooklyn, NY 11205

Refund Policy

Refunds up to 7 days before event

About this event

  • Event lasts 1 day 7 hours

Across two full days you’ll move from a tour of today’s most powerful AI image platforms to building your own Stable Diffusion + ComfyUI pipeline. Day 1 demystifies the AI landscape while drilling effective prompt-engineering strategies for storyboards, concept decks, prototypes, and stock imagery. Day 2 dives deeper: you’ll install Stable Diffusion locally or in the cloud, master ComfyUI’s node graph, and automate advanced image manipulations. Hands-on labs throughout ensure you leave with a repeatable workflow and a portfolio of AI-generated visuals.


For more workshops like this, visit Tandon at The Yard's 2025 Summer Workshops page.


Who Is This For

  • Designers, artists, architects seeking AI for ideation or client decks
  • Creative technologists & coders wanting node-based, reproducible pipelines
  • Marketers, producers, educators needing fast, royalty-free imagery

Anyone comfortable with basic computer skills and curious about text-to-image tools is welcome; no coding is required, though a little JavaScript/Python literacy helps.


Materials

  • Laptop (Windows, macOS, or Linux) with 16 GB RAM recommended
  • GPU (NVIDIA 6 GB VRAM +) optional for local Stable Diffusion
  • Pre-created images or mood boards welcome for experimentation


Workshop Schedule (Two Days, 10 AM – 5 PM each)

Day 1 – Overview of AI Image Tools and Basic Prompt Engineering

This course provides an overview of the most relevant AI tools currently available in the market and an in-depth exploration of prompt engineering techniques. Participants will learn how to strategically control the image-making process to generate reference material, decks, stock photos, architectural and product prototype explorations, storyboards, and more. We will explore various AI tools and how they can be combined, as well as how LLMs can assist the process. This high-level, introductory workshop will give participants the background and confidence to explore further.


10:00 – 10:15

Welcome & Setup

Introductions, goals, account sign-ins for cloud tools

10:15 – 11:15

Survey of AI Image Platforms

Compare Midjourney, DALL·E 3, Stable Diffusion (& variants), Runway; pricing, strengths, legal notes

11:15 – 12:15

Prompt Engineering 101

Anatomy of a prompt; style, subject, lighting, camera, negative prompts; asserting control
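The anatomy covered here can be sketched as a small helper that assembles labeled parts into a single prompt string. This is an illustrative sketch, not a platform standard: the field names, the subject-first ordering, and the comma separator are common conventions that vary by tool.

```python
def build_prompt(subject, style=None, lighting=None, camera=None, extras=None):
    """Assemble a text-to-image prompt from labeled parts.

    Subject-first ordering is a common convention, not a rule;
    experiment with ordering and weighting per platform.
    """
    parts = [subject]
    for part in (style, lighting, camera):
        if part:
            parts.append(part)
    if extras:
        parts.extend(extras)
    return ", ".join(parts)

prompt = build_prompt(
    subject="a glass pavilion in a birch forest",
    style="architectural rendering, minimalist",
    lighting="soft morning light",
    camera="wide-angle, eye level",
)
# Negative prompts are supplied as a separate field on most platforms.
negative = "blurry, text, watermark, extra limbs"
```

Keeping the parts separate makes it easy to swap one variable (say, lighting) while holding the rest constant, which is the core habit of systematic prompt iteration.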

12:15 – 12:30

Break


12:30 – 1:30

Mood boards and reference

Strategies to produce reference boards, stock photo sets, visual metaphors

1:30 – 2:30

Special Topics

A look at the latest news and developments in AI image making, and emerging tools worth watching.

2:30 – 2:45

Break


2:45 – 3:45

Mini-Project Sprint

Build a concept deck using multiple AI tools, including mood boards, reference imagery, styleframes, storyboards, and product/architectural concepts.

3:45 – 4:00

Wrap-Up Day 1

Share results, outline Day 2 requirements (installers, GPU checks)


Day 2 – Intermediate AI Image Making with Node-Based Tools

Open-source image generation tools offer significant advantages over online platforms: complete creative control with custom models and complex workflows, privacy since your data never leaves your computer, cost efficiency for heavy users who aren't limited by credits or subscriptions, immunity from platform changes or shutdowns, access to cutting-edge community developments, and deeper technical understanding that improves your skills. While they require more initial setup, these tools provide the flexibility, reliability, and advanced capabilities that serious creators need for professional work.

This intermediate workshop explores the integration of Stable Diffusion with ComfyUI, an open-source graphical user interface (GUI) designed to enhance and streamline the image generation process. Participants will gain a basic understanding of the ComfyUI environment, which facilitates sophisticated image manipulation, model management, and workflow automation. The course will cover both cloud-based and local setups, providing students with the necessary skills to operate Stable Diffusion efficiently within various computing environments. Through hands-on exercises and detailed demonstrations, this course will equip participants with the technical expertise required to effectively leverage Stable Diffusion for a wide range of creative applications.

This class assumes participants have a basic understanding of text-to-image, image-to-image, and basic prompting techniques, all of which are covered on the first day of the course.


10:00 – 10:15

Recap & Environment Check
Verify local / cloud SD installs; launch ComfyUI

10:15 – 11:15

ComfyUI Basics
A brief tour of common nodes: what a node graph is and how it works, the ComfyUI Manager, and accessing the community.

11:15 – 12:15

Node-Based Image Pipelines
Build text-to-image and image-to-image graphs.
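In ComfyUI's API (JSON) format, a graph is a dictionary of numbered nodes: each node names its `class_type` and wires inputs to other nodes' outputs as `[node_id, output_index]` pairs. A minimal text-to-image graph of the kind built in this session might look like the sketch below; the checkpoint filename is a placeholder, and node ids are arbitrary labels.

```python
# Minimal text-to-image graph in ComfyUI's API (JSON) format.
# Links between nodes are written as [source_node_id, output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},  # placeholder filename
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a watercolor lighthouse at dusk", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, watermark", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0, "model": ["1", 0],
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0]}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "workshop"}},
}
```

An image-to-image graph replaces the `EmptyLatentImage` node with a loaded, VAE-encoded source image and lowers `denoise` below 1.0 so some of the source survives.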

12:15 – 12:30

Break


12:30 – 1:30

Advanced Techniques
ControlNet, IP-Adapter, differential diffusion.

1:30 – 2:30

Advanced Techniques 2
Model training; working with the ComfyUI API.
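A running ComfyUI instance exposes an HTTP API: posting a workflow graph to its `/prompt` endpoint (port 8188 by default) queues it for generation. The sketch below builds such a request with the standard library but does not send it, since it assumes a local server is running; the tiny one-node graph is a stand-in for a real workflow.

```python
import json
import urllib.request

def make_prompt_request(graph, server="http://127.0.0.1:8188"):
    """Build (but do not send) a request that queues a workflow
    on a ComfyUI server. 8188 is ComfyUI's default port; adjust
    the server URL for cloud setups."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# A stand-in graph; in practice, export your full workflow in API format.
req = make_prompt_request({"1": {"class_type": "CheckpointLoaderSimple",
                                 "inputs": {"ckpt_name": "sd15.safetensors"}}})

# To actually queue it (requires a running ComfyUI server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))  # the response includes a prompt_id for tracking
```

Driving ComfyUI this way is what makes node-based workflows automatable: the same graph can be re-queued in a loop with varied seeds or prompts.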

2:30 – 2:45

Break


2:45 – 3:45

Studio Time
Choose a challenge and build a piece using the day's techniques, with instructor support.

3:45 – 4:00

Showcase & Closing
Group demos, Q&A, next steps, resources



Outcomes

By the end of the workshop you will:

  1. Navigate today’s AI image ecosystem and pick the right tool for the task.
  2. Craft precise prompts to steer style, composition, and fidelity.
  3. Combine multiple AI services—and LLMs—to accelerate concept generation.
  4. Install & operate Stable Diffusion locally or in the cloud.
  5. Design node-based workflows in ComfyUI for repeatable, high-quality outputs.
  6. Produce a portfolio-ready storyboard, prototype deck, or interactive generator.

Presenters

David Lobser is a creative technologist and immersive media specialist focusing on AI applications within therapeutic XR environments. As founder of Light Clinic, he served as director of generative content for a healthcare startup utilizing a sophisticated node-based system for creating complex content with large language models, generating over 1,000 hours of personalized meditation experiences. His R&D work explores AI-assisted video game creation, merging procedural design principles with generative AI capabilities. Lobser has taught AI workshops at prestigious institutions including NYU and Harvard (light.clinic/aiwhisperer/), sharing his technical expertise in Unity/C#, WebGL, and shader programming. His technically sophisticated approach combines AI with biometric tracking in projects like MindStream/Eunoe, creating responsive therapeutic environments that adapt to users' physiological states. Through collaborations with medical researchers at Penn Health and Johns Hopkins, Lobser continues pioneering the application of AI technologies to create deeply personalized immersive experiences that bridge cutting-edge technical innovation with evidence-based mental wellness applications.

Aysu Unal is a New York-based architect, artist, and design technologist whose work explores the intersection of spatial design, emerging technologies, and storytelling. With a background in architecture from UCLA, she has been at the forefront of integrating XR into experiential design. At Gensler, she served as a Design Technology Specialist, leading the integration of immersive tools and digital workflows within architectural practice. Aysu organizes monthly XR gatherings in New York, teaches AI-driven design workshops at NYU, CCNY, and architecture firms, and creates interactive experiences that merge spatial computing, sensory engagement, and narrative experimentation. She adapts AI tools into spatial design for virtual environments and leverages them as collaborative instruments in physical, in-person activation spaces.

Organized by

Programs that teach one thing, or even several things neatly bounded and categorized, are generally easy to describe and easy to write about. IDM is not such a program. Even a cursory look at the makeup of our faculty, the courses we teach, and our academic and professional practice cannot fail to give the impression that we are a program hard to pin down: an eclectic crew of singular individuals gathering the arts, design, engineering, and humanities into our capacious minds and hands. A visit to our floor and a few conversations with our students would reveal much the same: terrifically busy crisscrossing mediums, genres, and forms; curious, critical, and creative. We could add, with no little pride, that we temper this spirit of experimentation and invention with a commitment to criticality and ethical and social responsibility; to engage in 'art for art's sake, design for the market' would be no good. So perhaps this is what, despite the diversity of disciplines, practices, and skills we present, binds us together, faculty and students, in common cause: the belief that to create entails a commitment to what Hyginus deemed constitutive of the human condition: care.

$535.38