T-SQL Tuesday: The "Smoking Man" of SQL Server Defaults
Going with the defaults is a big deal, and in most situations where performance matters, it is a BIG MISTAKE. So much so that I've listed it as one of the Top 10 DBA Mistakes since the very first time I wrote and delivered that popular presentation ten years ago, with help from SQL Server MVPs like Mike Walsh (t | b) and Colin Stasiuk (t | b).
If these default settings are so problematic, then why doesn't Microsoft change them? There are a couple of reasons.
First, Microsoft has intentionally designed SQL Server to be easy to set up, install, get running, walk away from, kill the server room lights, and ignore for years. Defaults of that sort often run afoul of the settings that deliver the best performance, because achieving high performance depends heavily on the type of workload running on the instance. Basically, the defaults aim for the highest common denominator of uptime while settling for the lowest common denominator of performance.
Second, default settings don't really help sell more licenses. And licensing revenue is the driver for the product teams' new development efforts. Consequently, once a default is established for a given piece of functionality, for example the five-second broker activation wakeup task in Service Broker, it is VERY unlikely to ever change. Even when dramatic improvements in the underpinning technology make a default setting quite silly, there's very little incentive and quite a lot of risk for the Dev teams to tinker with it. As another example, the sp_configure settings governing parallelism (MAXDOP and Cost Threshold for Parallelism) were established at a time when multi-core SMP processors were an extravagant rarity for many IT organizations. Today, 16+ cores are not uncommon on a single processor die. Those old-fashioned settings were established for old-fashioned hardware, in turn introducing some problematic behavior on current-generation hardware.
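As a sketch, those parallelism defaults can be inspected and raised with sp_configure. The values below (MAXDOP 8, cost threshold 50) are illustrative starting points for modern hardware, not universal recommendations; test against your own workload first:

```sql
-- 'cost threshold for parallelism' is an advanced option, so surface it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Defaults are 0 (unbounded) and 5, respectively; both date from much
-- older hardware. The values here are illustrative, not prescriptive.
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```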
So what'cha get is what'cha got, when it comes to default settings.* (*Footnote: Not true in every situation. As with all things technology, your mantra is "It depends"). Sometimes default settings do get changed by Microsoft. But it always takes a well-constructed argument, empirical proof, and a large amount of community support to move the needle - consider Aaron Bertrand's (b | t) plea for better guidance about tempdb during setup (Connect #1380861).
If you're of a certain age here in the USA, then you're likely to be intimately familiar with an intriguing and villainous character, the nameless "Smoking Man" from The X-Files television show. This mysterious character had as much power over the story arc as a puppet master does over his marionettes. And he never, ever worried about getting cancer. (Freaky!)
In our case, the SQL Server Smoking Man isn't quite so sinister. In fact, the completely innocuous nature of SQL Server's Smoking Man is part of the reason it can be so threatening when ignored.
I'm talking about the SQL Server model database.
This (usually) tiny system database is tucked away under the "System Databases" folder in SSMS. If you never open up and look at the system databases, it's possible you might go years without looking at the model database. And yet, the model database is used as a template for all newly created databases, including tempdb. Thus, every time a SQL Server instance is restarted, the settings of the model database are read and applied anew to tempdb. Any and every database configuration setting of model is then likely to be repeated over and over again, pulling strings in your SQL Server instance possibly without your knowledge.
Naturally, that also means when a CREATE DATABASE statement is issued for a new user database, the first portion of the new database is created by copying in the contents of the model database. The rest of the new database is then filled with empty pages. If you modify the model database, all databases created thereafter will inherit those changes, such as permissions, recovery model, database options (e.g. auto-shrink or page verify), and added tables, functions, and stored procedures.
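A minimal sketch of that inheritance, using a hypothetical database name (SalesDemo) and a hypothetical utility table purely for illustration:

```sql
-- Add an object to model...
USE model;
GO
CREATE TABLE dbo.DatabaseInfo     -- hypothetical table for illustration
(
    CreatedOn datetime2 NOT NULL DEFAULT SYSDATETIME()
);
GO

-- ...and every database created from this point forward starts with a copy.
CREATE DATABASE SalesDemo;        -- hypothetical database name
GO
USE SalesDemo;
SELECT name FROM sys.tables;      -- dbo.DatabaseInfo is already here
```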
Remember that whole "It Depends" thing from earlier? Same here. It depends on what your aims are. But I'll give you a few of the defaults which I usually change on model database in my important instances of SQL Server:
- Initial and auto-grow database and log file sizes: This is one that's easy to get wrong if you're too aggressive. But I believe that the current defaults are moronic. I usually change the initial file size and the auto-grow sizes to 250MB for the database file and 50MB for the transaction log, assuming I know absolutely nothing else about the needs of the system.
- Recovery model: Again, this is risky when applied blindly. But I often set this to 'simple' for instances that are considered low-touch and/or low-importance. Otherwise, leave it at 'full' but ensure that a database and transaction log backup schedule is in place as preventative maintenance.
- Optimize for Ad Hoc Workloads: Strictly speaking, this is an instance-level sp_configure setting rather than a model database option, but I enable it for all OLTP instances unless I have reason to do otherwise.
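The changes above can be sketched in T-SQL as follows. The logical file names ('modeldev' and 'modellog') are the standard names for model's files, but verify yours against sys.master_files before running, and treat the sizes as the same rough starting points discussed above:

```sql
-- Initial size and auto-grow for databases created from model:
-- 250MB for data, 50MB for the log, per the guidance above.
ALTER DATABASE model
    MODIFY FILE (NAME = modeldev, SIZE = 250MB, FILEGROWTH = 250MB);
ALTER DATABASE model
    MODIFY FILE (NAME = modellog, SIZE = 50MB, FILEGROWTH = 50MB);

-- Simple recovery: only for low-touch, low-importance instances.
ALTER DATABASE model SET RECOVERY SIMPLE;

-- Optimize for Ad Hoc Workloads is set at the instance level:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```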
There are a handful of other settings that I might change and objects which I might add, for example a NUMBERS table, to provide additional utility. But the circumstances under which I might alter those settings or make those additions are much more variable and not ones I make as blanket choices.
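For instance, a NUMBERS table added to model might look like the sketch below; the 10,000-row size is arbitrary and purely illustrative:

```sql
USE model;
GO
CREATE TABLE dbo.Numbers (n int NOT NULL PRIMARY KEY);

-- Cross-joining a pair of catalog views is a common trick for generating
-- enough rows; 10,000 here is an arbitrary illustrative size.
INSERT INTO dbo.Numbers (n)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
```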
I'm sure I've missed a beneficial consideration or two for the model database. What's your favorite tip for maximizing the power, and minimizing the risk, of the model database?