22-25 April 2026

RAG Rewired: Building and Evaluating Connected Intelligence with Knowledge Graphs

Proposed session for SQLBits 2026

TL;DR

Competitive AI isn’t won by model choice, but by how domain knowledge is extracted, structured, and evaluated. This talk walks through experiments with traditional RAG, Graph RAG, and agentic search, showing how the data extraction and evaluation layers turn retrieval into connected, context-aware intelligence for high-stakes domains.

Session Details

As organisations move toward expert AI workflows, competitive advantage is increasingly determined not by model choice, but by how effectively domain knowledge is extracted, structured, and retrieved. While many AI solutions rely on fine-tuned foundation models, teams operating in high-stakes, document-heavy domains are discovering that success hinges on the data extraction layer, long before multi-agent orchestration enters the picture.

In this talk, we share lessons learned from building and evaluating retrieval systems over large, unstructured, and highly nuanced document corpora. The challenges we faced (inconsistent formats, implicit relationships, domain-specific terminology, and severe consequences for retrieval errors) will be familiar to anyone working with complex enterprise data.

We present a set of practical experiments comparing three approaches:

1. Traditional RAG, using hierarchical chunking and hybrid search (a minimal fusion sketch follows this list)
2. Knowledge-graph-enhanced RAG, where entities and relationships are explicitly modelled
3. Agentic search patterns, where retrieval is decomposed into multi-step, intent-driven queries
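
To make the first approach concrete, here is a minimal sketch of the "hybrid" half: fusing a dense (vector) ranking with a sparse (keyword) ranking via reciprocal rank fusion. The abstract does not prescribe a fusion method, so RRF, the constant k=60, and the document ids below are illustrative assumptions rather than the system presented in the session.

```python
# Minimal sketch of hybrid search via reciprocal rank fusion (RRF).
# The ranked lists here are stand-ins for real vector and keyword backends.

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids into one ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant commonly used in the RRF literature.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a dense (vector) and a sparse (keyword) retriever.
vector_hits = ["doc_7", "doc_2", "doc_9"]
keyword_hits = ["doc_2", "doc_4", "doc_7"]

print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# doc_2 and doc_7 rise to the top because both retrievers agree on them.
```

RRF needs only ranks, never comparable scores, which is why it is a common default when mixing retrievers whose scoring scales differ.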

We go under the hood of Graph RAG, showing how to extract entities from unstructured text, construct knowledge graphs, and combine them with vector search to improve contextual grounding, traceability, and reasoning depth.
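
As a taste of what "under the hood" looks like, the sketch below builds a toy knowledge graph from extraction triples and uses it to expand a vector-search hit with its graph neighbourhood. The triples, entity names, and one-hop expansion are hypothetical stand-ins; in a real system the triples would come from an LLM entity/relation extraction step over the documents.

```python
# Toy Graph RAG step: store (subject, relation, object) triples in a
# graph, then enrich a retrieved entity with nearby facts.
import networkx as nx

# Hard-coded stand-ins for the output of an LLM extraction pass.
triples = [
    ("Contract_42", "governed_by", "English_law"),
    ("Contract_42", "references", "Master_Agreement"),
    ("Master_Agreement", "amended_by", "Amendment_3"),
]

graph = nx.DiGraph()
for subject, relation, obj in triples:
    graph.add_edge(subject, obj, relation=relation)

def expand_context(entity: str, hops: int = 1) -> list[str]:
    """Return every stored fact within `hops` edges of an entity."""
    nearby = nx.ego_graph(graph.to_undirected(as_view=True), entity, radius=hops)
    return [
        f"{s} --{data['relation']}--> {o}"
        for s, o, data in graph.edges(data=True)
        if s in nearby and o in nearby
    ]

# Suppose vector search surfaced a chunk mentioning "Contract_42":
# graph expansion contributes facts the chunk itself never states.
print(expand_context("Contract_42"))
```

The point of the pattern is traceability: every expanded fact is an explicit edge that can be cited back to its source document, rather than an implicit association buried in an embedding.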

Another focus of the session is evaluation. We demonstrate how we designed an evaluation pipeline for multi-step retrieval workflows, combining standard LLM quality metrics, retrieval diagnostics, safety checks, and custom domain-aware metrics driven by simulated user journeys. Evaluation became a design tool, informing chunking strategies, metadata extraction, and retrieval orchestration choices.
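
For a flavour of such a harness, the sketch below runs two of the cheaper checks (retrieval recall against gold document ids and a crude lexical safety gate) over simulated cases. The gold labels, the `retrieve` stub, and the metric choices are assumptions made for illustration; the pipeline discussed in the talk layers LLM-judged quality metrics and domain-aware journey checks on top.

```python
# Minimal sketch of a multi-metric evaluation loop over simulated cases.
from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str
    gold_doc_ids: set[str]     # documents a correct answer must draw on
    forbidden_terms: set[str]  # simple stand-in for a safety check

def retrieve(question: str) -> list[str]:
    # Stub: replace with the retrieval system under test.
    return ["doc_2", "doc_7", "doc_9"]

def recall_at_k(retrieved: list[str], gold: set[str], k: int = 5) -> float:
    """Fraction of gold documents found in the top-k results."""
    return len(gold & set(retrieved[:k])) / len(gold) if gold else 0.0

def evaluate(cases: list[EvalCase]) -> dict[str, float]:
    recalls, safety_passes = [], []
    for case in cases:
        retrieved = retrieve(case.question)
        recalls.append(recall_at_k(retrieved, case.gold_doc_ids))
        answer = " ".join(retrieved)  # stand-in for a generated answer
        safety_passes.append(not any(t in answer for t in case.forbidden_terms))
    return {
        "recall@5": sum(recalls) / len(recalls),
        "safety_pass_rate": sum(safety_passes) / len(safety_passes),
    }

cases = [EvalCase("Which amendments apply?", {"doc_7"}, {"doc_13"})]
print(evaluate(cases))
```

Even a loop this small earns its keep as a design tool: re-running it after each chunking or metadata change shows immediately whether the change helped or hurt.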

Attendees will leave with concrete design patterns, performance trade-offs, and evaluation techniques they can apply to RAG systems across legal, financial, compliance, and other expert-driven domains, transforming retrieval from simple similarity search into performant, connected, context-aware intelligence.

3 things you'll get out of this session

• Practical design patterns for building and comparing traditional RAG, Graph RAG, and agentic retrieval systems over complex, unstructured enterprise data.
• Concrete evaluation techniques for multi-step retrieval workflows, including quality, safety, and domain-aware metrics that actively inform system design.
• A technical blueprint for turning raw documents into connected, context-aware intelligence by improving chunking, metadata extraction, and retrieval orchestration.