When AI Goes Bad: What Every Tech Leader Should Learn from Failure
Proposed session for SQLBits 2026

TL;DR
AI can accelerate innovation—or amplify mistakes at scale. In this session, Dr. Davis McAlister, a military-trained interrogator turned leadership expert featured on Ticker News, shares real-world examples of AI gone wrong and what developers can do to prevent similar failures. Through stories and case studies, you’ll learn how bias, poor data, and blind trust in automation can derail even the smartest systems—and how transparency, testing, and communication can keep your projects (and reputation) intact.
Session Details
5 things you'll get out of this session
1. Identify the most common types of AI failures (hallucinations, bias, misinformation, data leaks) and their real-world consequences.
2. Analyze real-world case studies of AI misuse in business, healthcare, education, and media.
3. Evaluate the leadership and reputational risks associated with overreliance on generative AI tools.
4. Apply basic ethical frameworks and critical thinking protocols to review AI-generated output before acting on it.
5. Develop a basic set of AI use policies to guide teams toward responsible and transparent use of AI tools in professional settings.
Speakers
Davis McAlister's other proposed sessions for 2026
Trust in the Machine: Leading People Through AI Fear and Change - 2026