In the beginning we were manually deploying database changes and checking them into source control after the fact using SSDT projects. This gave us the impression we were at least using source control: we could track changes, and we had CI builds… However, the true benefit of source control comes from using it as the source of your deployments! We quickly managed to use build pipelines to deploy SSIS packages, as we didn't have to worry about the data, but with complex data structures and the risky nature of state-based deployments the mountain seemed too difficult to climb, or at least to find the time for! New publishing options in SSDT in 2017 finally spurred us on to start deploying database changes from source, as we could deploy with lower risk and avert data loss where we needed to.
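
To give a flavour of the publishing options that lowered the risk for us, here is a minimal sketch of a SqlPackage publish in PowerShell; the file, server and database names are placeholders:

    # Publish a dacpac, but refuse any deployment that would lose data.
    # BlockOnPossibleDataLoss and DropObjectsNotInSource are standard
    # SqlPackage publish properties; everything else here is illustrative.
    & SqlPackage.exe /Action:Publish `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"test-sql01" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=True `
        /p:DropObjectsNotInSource=False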

Due to the risky nature of state-based deployments, such as the dreaded renaming or reordering of columns, we also started raising awareness and embedding best practices in all developers. Rebuilds of data structures in test environments could go undetected due to the smaller data volumes, so we used SqlPackage.exe at CI time to carry out comparisons against production, checking the XML deploy reports and the actual scripts of what would eventually be deployed, much earlier in the development process.
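
A minimal sketch of that comparison step, assuming hypothetical server and file names: /Action:DeployReport produces the XML breakdown of pending changes, and /Action:Script produces the script that would run against production:

    # The XML report of what *would* change if we deployed to production...
    & SqlPackage.exe /Action:DeployReport `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"prod-sql01" `
        /TargetDatabaseName:"MyDatabase" `
        /OutputPath:"DeployReport.xml"

    # ...and the actual T-SQL script that would be executed, for review.
    & SqlPackage.exe /Action:Script `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"prod-sql01" `
        /TargetDatabaseName:"MyDatabase" `
        /OutputPath:"DeployScript.sql"

    # Fail the build early if the report flags a data issue. (That the
    # report marks these with a 'DataIssue' alert is an assumption about
    # the report format - verify against your own output.)
    if (Select-String -Path "DeployReport.xml" -Pattern 'DataIssue' -Quiet) {
        throw "Possible data issue detected - review DeployReport.xml"
    }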

We then started to get some quick wins with the smaller databases, proving that it was now possible to deploy databases from source using SSDT. At this time we were using builds in TFS, but by embedding a database team member with the configuration team we quickly started to utilise TFS release pipelines. This also introduced TFS task groups and libraries, which we used for parameter-driven template deployment sequences and centralised secrets respectively.
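
Task groups were built in the TFS UI rather than in code, but the shape of a parameter-driven deployment template can be sketched in PowerShell; every parameter name here is hypothetical, with the values supplied per environment by the release pipeline and its library:

    param (
        [string]$ServerName,
        [string]$DatabaseName,
        [string]$DacpacPath,
        [string]$SqlPassword   # in practice sourced from the centralised library
    )

    # The same template runs for every environment; only the parameters change.
    & SqlPackage.exe /Action:Publish `
        /SourceFile:$DacpacPath `
        /TargetServerName:$ServerName `
        /TargetDatabaseName:$DatabaseName `
        /TargetUser:"deploy" `
        /TargetPassword:$SqlPassword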

The release pipelines involved quality gates, where we introduced QA as approvers through the test environments and our operations team for deployment into production. After much discussion and feedback we were then able to remove the need for release tickets for the majority of releases, as the proof was in the pudding (of the approved release pipeline!). Progress was slow and difficult at this stage, but tools such as Slack and a great culture of change really helped us along.

We also started to introduce Redgate SQL Change Automation projects, which were great for reference and configuration data due to their migration-based approach.
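
To illustrate why a migration-based approach suits reference data: a migration script states the data change explicitly and repeatably, instead of leaving a state-based compare to guess at it. A sketch, applied here via the SqlServer module's Invoke-Sqlcmd, with invented table, column and server names:

    # Migration 0042: upsert the reference rows explicitly.
    $migration = "
        MERGE dbo.OrderStatus AS target
        USING (VALUES (1, 'Open'), (2, 'Shipped'), (3, 'Cancelled'))
            AS source (StatusId, StatusName)
        ON target.StatusId = source.StatusId
        WHEN MATCHED THEN UPDATE SET StatusName = source.StatusName
        WHEN NOT MATCHED THEN INSERT (StatusId, StatusName)
            VALUES (source.StatusId, source.StatusName);"

    Invoke-Sqlcmd -ServerInstance "test-sql01" -Database "MyDatabase" -Query $migration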

Later, combining the deployment of databases and their associated ETLs, followed by the automated execution of the related job using PowerShell, all within the test release pipelines, meant we started getting even faster feedback on our release packets. If a non-database developer was making schema-only changes, we could still detect whether they impacted other systems by failing the release if the job began to fail.
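
A rough sketch of that job-execution step (job and server names hypothetical): start the related SQL Agent job, poll until it finishes, and fail the release step if the last run did not succeed:

    $server = "test-sql01"
    $job    = "Load MyDatabase ETL"

    # Kick off the SQL Agent job that exercises the freshly deployed schema and ETL.
    Invoke-Sqlcmd -ServerInstance $server -Database msdb `
        -Query "EXEC dbo.sp_start_job @job_name = N'$job';"

    # Poll until the job stops running.
    do {
        Start-Sleep -Seconds 15
        $running = Invoke-Sqlcmd -ServerInstance $server -Database msdb -Query "
            SELECT COUNT(*) AS Running
            FROM dbo.sysjobactivity a
            JOIN dbo.sysjobs j ON j.job_id = a.job_id
            WHERE j.name = N'$job'
              AND a.start_execution_date IS NOT NULL
              AND a.stop_execution_date IS NULL;"
    } while ($running.Running -gt 0)

    # step_id = 0 is the job outcome row; run_status 1 means succeeded.
    $outcome = Invoke-Sqlcmd -ServerInstance $server -Database msdb -Query "
        SELECT TOP (1) h.run_status
        FROM dbo.sysjobhistory h
        JOIN dbo.sysjobs j ON j.job_id = h.job_id
        WHERE j.name = N'$job' AND h.step_id = 0
        ORDER BY h.instance_id DESC;"
    if ($outcome.run_status -ne 1) {
        throw "Job '$job' did not succeed - failing the release."
    }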

Migrating to VSTS and linking Azure Key Vault to our libraries is also moving the whole system further towards platform as a service, giving us an even lower infrastructure overhead. More recently we have introduced ARM deployments prior to the database schema, meaning we can combine entire data architectures in our release packets (infrastructure, database and ETL).
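
The ARM step that now precedes the schema deployment can be sketched with the AzureRM PowerShell module of the era; the resource group, template and parameter file names are placeholders:

    # Deploy the infrastructure first (SQL server, database, etc.),
    # so the schema and ETL steps have a consistent place to land.
    New-AzureRmResourceGroupDeployment `
        -ResourceGroupName "rg-data-test" `
        -TemplateFile "azuredeploy.json" `
        -TemplateParameterFile "azuredeploy.test.parameters.json" `
        -Mode Incremental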

There is still some way to go, but I know that the majority of tasks can, and now will, be automated. I therefore want to share this journey with you and convince you that manual deployments of data architecture will be a thing of the past.