How Do We Build Trust in AI Applications? Human-Centric Design
A workshop exploring what human-centric AI is and what needs to be done to build more effective AI applications that the public can trust.
Date and time
Location
Winchester School of Art
Park Avenue, Winchester SO23 8DL, United Kingdom
Agenda
10:30 AM - 11:30 AM
Dr Stephen Anning shares his research on building human-centric AI
11:30 AM - 12:00 PM
Tea/Coffee Networking Break
12:00 PM - 1:00 PM
Dr Christina Silver: the further potential of human-centred qualitative research
1:00 PM - 2:00 PM
Questions and Discussion
1:00 PM - 2:00 PM
Lunch
About this event
- Event lasts 4 hours
- Paid venue parking
The UK government is creating extraordinary opportunities to accelerate the adoption of artificial intelligence (AI) across public services. As the current Government’s AI action plan states, “AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity”. Fully realising this potential, however, requires revisiting a long-standing debate: the role of quantitative versus qualitative research in understanding human experiences. Designing technology around the human as both user and subject makes for a better AI application and builds trust in the model.
Join us at the Winchester School of Art for an interactive discussion on building trust in AI applications through human-centric design. Discover how we can create AI technologies that prioritise human values and ethics. Learn from industry experts and academics about the importance of transparency, accountability, and inclusivity in AI development. Whether you're a student, researcher, government service provider or tech enthusiast, this event will provide valuable insights into the future of AI. Don't miss out on this opportunity to be part of the conversation! A light lunch is provided. This project and workshop are co-funded by the Web Science Institute at the University of Southampton and the Survival and Flourishing Fund.
Frequently asked questions
What is human-centric AI?
Human-centric AI is the design of AI systems that prioritise human needs, values, and ethics. It aims to augment human abilities and to ensure transparency, fairness, and privacy, fostering trust and collaboration between humans and AI for better societal outcomes.
Why does human-centric design matter for trust in AI?
It builds trust and acceptance by focusing on ethical, transparent, and user-friendly AI. Involving users reduces bias and privacy risks, making AI more equitable and effective, which is essential for widespread adoption and positive societal impact.
What is qualitative research?
Qualitative research explores human experiences and social phenomena through interviews, observations, and text analysis. It provides deep insights into meanings and behaviours, essential for understanding complex social contexts beyond numbers.
What is Zig Zag?
Zig Zag is a start-up pioneering new approaches to narrative intelligence. Following the principles of human-centric AI, we uniquely combine narrative analysis methodologies with a graph-based approach to natural language processing, generating insights to a new standard of rigour.
What is the CAQDAS Networking Project?
The CAQDAS Networking Project supports researchers using qualitative data analysis software, offering resources, training, and a community to improve the coding and analysis of qualitative data.
What is Web Science?
Web Science studies the web as a socio-technical system, combining technical and social sciences to understand its impact, governance, and evolution, and addressing issues such as privacy, misinformation, and digital inclusion.
What is the Survival and Flourishing Fund?
The Survival and Flourishing Fund is a philanthropic fund supporting projects that reduce existential risks and promote humanity’s long-term safety and well-being, focusing on AI safety, biosecurity, climate, and global resilience.