Inaccuracies, discrepancies, and small errors that go unnoticed in queries against small and moderately sized tables can turn into serious problems when the same queries run against large tables.
What challenges do you face with tables that exceed a terabyte in size? How do you index them? What do their statistics look like, how do you keep those statistics up to date, and how do you fight index fragmentation? And how does all of this affect cardinality estimation and the tuning of queries that involve very large tables?
If you want answers to these questions from someone with hands-on experience managing many large tables in production, come to this session.