
Changes to the default installation of SQL Server

SQL Server pretty much works out of the box – go through the install wizard, click Next, Next, Next… create a database and off you go. However, that doesn't mean that everything is taken care of and no further changes are required.

There are some changes that I like to make to the vanilla installation – some of these are personal preference but I think all of them afford benefits. As I work for a project team I often have to install SQL Server on new hardware. I follow through the list below to ensure I have covered everything.

I have a selection of SQL jobs I like to set up, plus some server configuration options. I won't be dealing with hardware aspects or RAID configuration here, only with changes to the SQL Server configuration itself; I may write a post about those in the future.

So here it is:

Model Database

Change the model database to the SIMPLE recovery model and set its files to grow by a fixed number of megabytes rather than by percentage, so that any future databases created inherit these properties.
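This can be scripted rather than done through the GUI. A minimal sketch is below; the logical file names modeldev and modellog are the defaults for the model database, and the 64MB growth increment is just an example value, so adjust both to suit your environment:

```sql
-- Switch the model database to SIMPLE recovery
ALTER DATABASE model SET RECOVERY SIMPLE;

-- Grow by a fixed number of megabytes rather than by percent
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILEGROWTH = 64MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILEGROWTH = 64MB);
```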

Configure database mail

Some of the scripts below rely on an Operator being configured to receive alerts, so we will need to configure Database Mail. That is out of scope for this article, but details can be found here.
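Once Database Mail is working, the Operator itself can be created with a one-liner. The operator name and email address below are placeholders; substitute your own:

```sql
-- Create an Operator for the alert scripts to notify
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba.team@example.com';
```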

Configure SQL Agent job notification emails 

After you have configured Database Mail you need to configure the SQL Agent to use it. Follow these steps:

  1. Right-click SQL Server Agent and select Properties.
  2. Select Alert System.
  3. Tick Enable mail profile.
  4. Select your mail profile.
  5. Click OK.
  6. Restart the SQL Server Agent service.

SQL jobs

This is a list of jobs I create by default on every new server:
  • MaintenanceSolution by Ola Hallengren – includes backup and maintenance jobs. Only the jobs are created; you will need to create the schedules manually. You may also want to make changes to the steps, such as adding compression to the backup job.
  • Weekly Cycle of Error Logs by Paul Hewson – prevents the error log from bloating by cycling it every week (this script includes the schedule).
  • sp_WhoIsActive by Adam Machanic – a useful diagnostic query.
  • dbWarden by Stevie Rounds and Michael Rounds – a health report emailed to your inbox each day.
  • sp_Blitz by Brent Ozar – checks that everything is configured correctly.
  • Monitor File Growth by Paul Hewson – useful for keeping a record of the growth of data and log files, which can be queried later using SSRS or Excel.
  • Agent Alerts Management Pack by Tibor Karaszi.
When you have run all the scripts above, it is worth running the following script so that we log as much SQL Agent job history as we can:
Enable additional logging for SQL Agent jobs by Paul Hewson

SQL Agent History

I find that the default limits for SQL Agent job history (1000 rows in total / 100 rows per job) are usually not enough, so I like to increase them substantially using this command:

USE [msdb]
GO
EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows=100000,
              @jobhistory_max_rows_per_job=10000
GO

NB: this will slightly increase the size of the msdb database – another reason to keep it off the C: drive.

Configure TempDB  with additional files if required

This is a little contentious, but if I know the server is going to be used quite heavily, with several large databases installed, I will move TempDB to its own LUN/drive and split it into several separate but equally sized files.

Manually alter the default data file to 1024MB, then add additional files with a script similar to this:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'F:\MSSQL\Data\tempdb2.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'F:\MSSQL\Data\tempdb3.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'F:\MSSQL\Data\tempdb4.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev5, FILENAME = 'F:\MSSQL\Data\tempdb5.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev6, FILENAME = 'F:\MSSQL\Data\tempdb6.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev7, FILENAME = 'F:\MSSQL\Data\tempdb7.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev8, FILENAME = 'F:\MSSQL\Data\tempdb8.ndf', SIZE = 1024MB);

Change memory settings

This is pretty much an essential change to make, as by default SQL Server will use all of the available memory on the server, which may cause problems. I use the details from this post by Glenn Berry to determine the values.
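The setting itself is applied with sp_configure. The 57344MB figure below is only an illustrative value (leaving roughly 8GB for the OS on a 64GB server); derive your own number for your hardware:

```sql
-- max server memory is an advanced option, so expose it first
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only: cap SQL Server at 56GB on a 64GB server
EXEC sys.sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```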

Configure the Trace Flags

By default SQL Server captures only limited detail on deadlocks when they occur, so I enable trace flag 1222 for more verbose logging. To ensure this trace flag is enabled every time the server starts, I alter the startup parameters of the SQL Server service using SQL Server Configuration Manager (the example here is from SQL Server 2008). I also find it convenient to know when Auto Update Statistics events have occurred, so I enable trace flag 8721 as well.
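For a persistent change, add -T1222 and -T8721 to the service startup parameters in Configuration Manager. If you want to enable the flags immediately without a restart, they can also be switched on globally at runtime (this does not survive a service restart):

```sql
-- Enable globally (-1) for all sessions until the next restart
DBCC TRACEON (1222, -1);
DBCC TRACEON (8721, -1);
```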


Check Configuration

After you have done all the above, you could run sp_Blitz against the server to see if you have missed anything.
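Assuming you installed sp_Blitz as part of the job list above, running it with no parameters is enough for a quick sanity check:

```sql
-- Run from the database where the procedure was created (commonly master)
EXEC dbo.sp_Blitz;
```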

That's it! I may amend this posting from time to time but generally if you follow the steps above you should have a well configured and stable server. If you can think of anything else, please let me know.
