Architecting Integrity: Restoring Content Authenticity Using Azure Databricks Agents
Proposed session for SQLBits 2026

TL;DR
Learn how to build a multi-agent 'Trust Engine' on Azure Databricks to detect generic LLM-generated content and verify authenticity in your data. Leave with a blueprint and ideas for adapting the solution to your own industry-specific use cases.
Session Details
While LLMs make polished prose free and infinite, they are also eroding trust and verified expertise. From sophisticated bot farms flooding ecommerce sites with convincing product reviews to synthetic expertise swamping academic publishing and professional networks, the mass output of AI-generated content is diluting authentic human experience.
In this session, we explore the power of a modular framework that can detect synthetic sentiment and help preserve verified expertise and human authenticity. Using Azure Databricks and Agent Bricks, we show how a multi-agent framework can scale beyond simple detection to deliver actionable insights for a multitude of use cases.
Through a live demo, we will show how to combat fake reviews with a 'Trust Engine' that runs linguistic fingerprint and behavioural metadata analysis in parallel to identify AI-generated reviews and assign a 'Verified Human' score.
Attendees will leave with an 'Authenticity Layer' blueprint for their own high-stakes data streams, plus ideas for adapting the solution to their own use cases and industry-specific problems.
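To make the parallel pattern concrete, here is a minimal sketch of how two specialist agents could score a review concurrently and blend the results into a single 'Verified Human' score. The agent functions, weights, and metadata fields below are hypothetical placeholders, not Agent Bricks APIs; the session demo builds the real versions on Azure Databricks.

```python
# Hypothetical sketch of the parallel scoring pattern: two specialist
# checks run concurrently and their verdicts are blended into a single
# 'Verified Human' score. Agent bodies, weights, and metadata fields
# are illustrative only.
import asyncio

async def linguistic_fingerprint_agent(review_text: str) -> float:
    # Placeholder: in a real pipeline this would call a scoring model
    # (e.g. a perplexity check or LLM judge) and return 0-1 human-likeness.
    return 0.7

async def behavioural_metadata_agent(review_meta: dict) -> float:
    # Placeholder: inspects signals like account age, posting cadence,
    # and review bursts, and returns 0-1 human-likeness.
    return 0.9 if review_meta.get("account_age_days", 0) > 30 else 0.2

async def verified_human_score(text: str, meta: dict) -> float:
    # Run both specialist agents in parallel, then blend their verdicts.
    linguistic, behavioural = await asyncio.gather(
        linguistic_fingerprint_agent(text),
        behavioural_metadata_agent(meta),
    )
    return 0.5 * linguistic + 0.5 * behavioural

score = asyncio.run(verified_human_score(
    "Great product, five stars!", {"account_age_days": 4}))
print(f"Verified Human score: {score:.2f}")
```

The point of the pattern is that neither signal is trusted alone: a fluent review from a four-day-old account still scores low, which is why the Trust Engine runs the two analyses side by side rather than in sequence.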
3 things you'll get out of this session
Discover how to architect a multi-agent 'Trust Engine' using Azure Databricks and Agent Bricks to orchestrate specialised agents that run linguistic fingerprint and behavioural metadata analysis in parallel to assign a 'Verified Human' score.
Identify 'LLM fingerprints' by understanding technical indicators such as low perplexity (predictability) that distinguish synthetic text from authentic human expertise and original thought (see the perplexity sketch after this list).
Develop an adaptable 'Authenticity Layer' blueprint that demonstrates how to pivot a modular framework to various high-stakes data streams such as legal filings, journalism, or scientific peer reviews by adapting ingestion patterns and logic agents.
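Perplexity measures how predictable a text is to a language model: the exponential of the average negative log-likelihood of each token given its prefix. Flat, formulaic LLM output tends to score low, while idiosyncratic human writing scores higher. Here is a minimal sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in scoring model (not the session's production setup):

```python
# A minimal sketch of scoring text by perplexity, assuming the
# `transformers` library and GPT-2 as a stand-in scoring model.
# Lower perplexity = more predictable text, one weak signal of LLM output.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenise and ask the model to predict each token from its prefix.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    # Perplexity is the exponential of the mean negative log-likelihood.
    return torch.exp(loss).item()

# Flat, formulaic prose tends to score lower than idiosyncratic writing.
print(perplexity("This product is great. I would recommend it to anyone."))
print(perplexity("Smelled like my nan's attic, but the zips survived Snowdon."))
```

Perplexity alone is a weak and gameable signal, which is exactly why the Trust Engine pairs it with behavioural metadata analysis rather than relying on linguistic evidence in isolation.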
Speakers
Pardeep Singh Japper

Other proposed sessions for 2026
Shadow AI to Secure AI: Patterns and Pitfalls of Building an Internal Knowledge Chatbot - 2026