
Contents

Overview of SQL Server Tools and Utilities


New and updated articles
SQL Operations Studio (preview)
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
mssql-cli (command-line query tool)
Configuration Manager
mssql-conf (Linux)
Distributed Replay
Database Engine Tuning Advisor (dta)
SQL Server Profiler
Service Broker (ssbdiagnose)
Command Prompt Utilities
Bulk Copy Utility (bcp)
SqlLocalDB Utility
osql Utility
Profiler Utility
sqlagent90 Application
sqlcmd Utility
SQLdiag Utility
sqlmaint Utility
sqllogship Application
sqlps Utility
sqlservr Application
tablediff Utility
sqlpackage
Install sqlpackage
Release notes
sqlpackage ref
SQL Tools and Utilities for SQL Server, Azure SQL
Database, and Azure SQL Data Warehouse
6/1/2018 • 5 minutes to read

THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
To manage (query, monitor, and so on) your database, you need a tool. Several database tools are available. Your databases can run in the cloud, on Windows, or on Linux, and your tool doesn't need to run on the same platform as the database.
This article provides information about the available tools for working with your SQL databases.

Tools to run queries and manage databases


TOOL DESCRIPTION

Microsoft SQL Operations Studio (preview) SQL Operations Studio (preview) is a free, lightweight tool for
managing databases wherever they're running. This preview
release provides database management features, including an
extended Transact-SQL editor and customizable insights into
the operational state of your databases. SQL Operations
Studio (preview) runs on Windows, macOS, and Linux.

SQL Server Management Studio (SSMS) Use SQL Server Management Studio (SSMS) to query, design,
and manage your SQL Server, Azure SQL Database, and Azure
SQL Data Warehouse. SSMS runs on Windows.

SQL Server Data Tools (SSDT) Turn Visual Studio into a powerful development environment
for SQL Server, Azure SQL Database, and Azure SQL Data
Warehouse. SSDT runs on Windows.

mssql-cli mssql-cli is an interactive command-line tool for querying SQL
Server. mssql-cli runs on Windows, macOS, and Linux.

Visual Studio Code After installing Visual Studio Code, install the mssql extension
for developing Microsoft SQL Server, Azure SQL Database, and
SQL Data Warehouse. Visual Studio Code runs on
Windows, macOS, and Linux.

Which tool should I choose?


Do you want to manage a SQL Server instance or database in a lightweight editor on Windows, Linux, or macOS?
Choose Microsoft SQL Operations Studio (preview).
Do you want to manage a SQL Server instance or database on Windows with full GUI support? Choose SQL
Server Management Studio (SSMS).
Do you want to create or maintain database code, including compile-time validation, refactoring, and designer
support on Windows? Choose SQL Server Data Tools (SSDT).
Do you want to query SQL Server with a command-line tool that features IntelliSense, syntax highlighting, and
more? Choose mssql-cli.
Do you want to write T-SQL scripts in a lightweight editor on Windows, Linux, or macOS? Choose Visual Studio
Code and the mssql extension.

Additional tools
TOOL DESCRIPTION

Configuration Manager Use SQL Server Configuration Manager to configure SQL
Server services and configure network connectivity.
Configuration Manager runs on Windows.

mssql-conf Use mssql-conf to configure SQL Server running on Linux.

SQL Server Migration Assistant Use SQL Server Migration Assistant to automate database
migration to SQL Server from Microsoft Access, DB2, MySQL,
Oracle, and Sybase.

Distributed Replay Use the Distributed Replay feature to help you assess the
impact of future SQL Server upgrades. Also use Distributed
Replay to help assess the impact of hardware and operating
system upgrades, and SQL Server tuning.

ssbdiagnose The ssbdiagnose utility reports issues in Service Broker
conversations or the configuration of Service Broker services.

Command line utilities


Command line utilities enable you to script SQL Server operations. The following table contains a list of command
prompt utilities that ship with SQL Server.

bcp Utility
    Used to copy data between an instance of Microsoft SQL Server and a data file in a user-specified format.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn

dta Utility
    Used to analyze a workload and recommend physical design structures to optimize server performance for that workload.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

dtexec Utility
    Used to configure and execute an Integration Services package. A user interface version of this command prompt utility is called DTExecUI, which brings up the Execute Package Utility.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\DTS\Binn

dtutil Utility
    Used to manage SSIS packages.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\DTS\Binn

Deploy Model Solutions with the Deployment Utility
    Used to deploy Analysis Services projects to instances of Analysis Services.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VShell\Common7\IDE

mssql-scripter (Public Preview)
    Used to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse.
    Installed in: See our GitHub repo for download and usage information.

osql Utility
    Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

Profiler Utility
    Used to start SQL Server Profiler from a command prompt.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

RS.exe Utility (SSRS)
    Used to run scripts designed for managing Reporting Services report servers.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

rsconfig Utility (SSRS)
    Used to configure a report server connection.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

rskeymgmt Utility (SSRS)
    Used to manage encryption keys on a report server.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqlagent90 Application
    Used to start SQL Server Agent from a command prompt.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\<instance_name>\MSSQL\Binn

sqlcmd Utility
    Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn

SQLdiag Utility
    Used to collect diagnostic information for Microsoft Customer Service and Support.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqllogship Application
    Used by applications to perform backup, copy, and restore operations and associated clean-up tasks for a log shipping configuration without running the backup, copy, and restore jobs.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

SqlLocalDB Utility
    An execution mode of SQL Server targeted to program developers.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqlmaint Utility
    Used to execute database maintenance plans created in previous versions of SQL Server.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn

sqlps Utility
    Used to run PowerShell commands and scripts. Loads and registers the SQL Server PowerShell provider and cmdlets.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqlservr Application
    Used to start and stop an instance of Database Engine from the command prompt for troubleshooting.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn

Ssms Utility
    Used to start SQL Server Management Studio from a command prompt.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VSShell\Common7\IDE

tablediff Utility
    Used to compare the data in two tables for non-convergence, which is useful when troubleshooting a replication topology.
    Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\COM
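As a quick illustration of scripting with these utilities, the commands below show typical invocations of sqlcmd, bcp, and SqlLocalDB. The server, database, login, table, and file names are placeholders (assumptions for illustration), not values from this article:

```shell
# Placeholder server/database/login values; substitute your own.
# sqlcmd: run a Transact-SQL statement from the command prompt.
sqlcmd -S myserver.example.com -d mydb -U myuser -P 'myStr0ngPass' -Q "SELECT @@VERSION"

# bcp: export a table to a character-format (-c) data file.
bcp dbo.mytable out C:\temp\mytable.dat -c -S myserver.example.com -d mydb -U myuser -P 'myStr0ngPass'

# SqlLocalDB: create and start a lightweight developer instance.
sqllocaldb create MyInstance
sqllocaldb start MyInstance
```

Because these utilities are ordinary command-line programs, they can be combined in batch files or scheduled jobs to automate routine operations.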

SQL Command Prompt utilities syntax conventions


CONVENTION USED FOR

UPPERCASE Statements and terms used at the operating system level.

monospace Sample commands and program code.

italic User-supplied parameters.

bold Commands, parameters, and other syntax that must be typed exactly as shown.
New and Recently Updated: Tools for SQL Server
5/1/2018 • 4 minutes to read

Nearly every day Microsoft updates some of its existing articles on its Docs.Microsoft.com documentation website.
This article displays excerpts from recently updated articles. Links to new articles might also be listed.
This article is generated by a program that is rerun periodically. Occasionally an excerpt can appear with imperfect
formatting, or as markdown from the source article. Images are never displayed here.
Recent updates are reported for the following date range and subject:
Date range of updates: 2018-02-03 to 2018-04-28
Subject area: Tools for SQL Server.

New Articles Created Recently


The following links jump to new articles that have been added recently.
mssql-cli command-line query tool for SQL Server

Updated Articles with Excerpts


This section displays the excerpts of updates gathered from articles that have recently experienced a large update.
The excerpts displayed here appear separated from their proper semantic context. Also, sometimes an excerpt is
separated from important markdown syntax that surrounds it in the actual article. Therefore these excerpts are for
general guidance only. The excerpts only enable you to know whether your interests warrant taking the time to
click and visit the actual article.
For these and other reasons, do not copy code from these excerpts, and do not take as exact truth any text excerpt.
Instead, visit the actual article.

Compact List of Articles Updated Recently


This compact list provides links to all the updated articles that are listed in the Excerpts section.
bcp Utility

1. bcp Utility
Updated: 2018-04-25

-G This switch is used by the client when connecting to Azure SQL Database or Azure SQL Data Warehouse to
specify that the user be authenticated using Azure Active Directory authentication. The -G switch requires version
14.0.3008.27 or later. To determine your version, execute bcp -v. For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or SQL Data Warehouse.

TIP
To check whether your version of bcp includes support for Azure Active Directory Authentication (AAD), type bcp -- (bcp<space>
<dash><dash>) and verify that you see -G in the list of available arguments.

Azure Active Directory Username and Password:


When you want to use an Azure Active Directory user name and password, you can provide the -G option
and also use the user name and password by providing the -U and -P options.
The following example exports data using an Azure AD user name and password, where the user name and password
are an AAD credential. The example exports the table bcptest from the database testdb on the Azure server
aadserver.database.windows.net and stores the data in the file c:\last\data1.dat :

bcp bcptest out "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx

The following example imports data using an Azure AD user name and password, where the user name and password
are an AAD credential. The example imports data from the file c:\last\data1.dat into the table bcptest in the database
testdb on the Azure server aadserver.database.windows.net using an Azure AD user/password:

bcp bcptest in "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx

Azure Active Directory Integrated


For Azure Active Directory Integrated authentication, provide the -G option without a user name or
password. This configuration assumes that the current Windows user account (the account the bcp
command is running under) is federated with Azure AD:
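A minimal sketch of such a command, modeled on the user name/password examples above (the output file name is a hypothetical placeholder); note that -G is supplied without -U or -P:

```shell
# Export using Azure AD Integrated authentication; the federated Windows
# account running bcp is used, so no -U or -P switches are given.
bcp bcptest out "c:\last\data2.dat" -c -t -S aadserver.database.windows.net -d testdb -G
```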

Similar articles about new or updated articles


This section lists very similar articles for recently updated articles in other subject areas, within our public
GitHub.com repository: MicrosoftDocs/sql-docs.
Subject areas that do have new or recently updated articles
New + Updated (11+6): Advanced Analytics for SQL docs
New + Updated (18+0): Analysis Services for SQL docs
New + Updated (218+14): Connect to SQL docs
New + Updated (14+0): Database Engine for SQL docs
New + Updated (3+2): Integration Services for SQL docs
New + Updated (3+3): Linux for SQL docs
New + Updated (7+10): Relational Databases for SQL docs
New + Updated (0+2): Reporting Services for SQL docs
New + Updated (1+3): SQL Operations Studio docs
New + Updated (2+3): Microsoft SQL Server docs
New + Updated (1+1): SQL Server Data Tools (SSDT) docs
New + Updated (5+2): SQL Server Management Studio (SSMS) docs
New + Updated (0+2): Transact-SQL docs
New + Updated (1+1): Tools for SQL docs
Subject areas that do not have any new or recently updated articles
New + Updated (0+0): Analytics Platform System for SQL docs
New + Updated (0+0): Data Quality Services for SQL docs
New + Updated (0+0): Data Mining Extensions (DMX) for SQL docs
New + Updated (0+0): Master Data Services (MDS) for SQL docs
New + Updated (0+0): Multidimensional Expressions (MDX) for SQL docs
New + Updated (0+0): ODBC (Open Database Connectivity) for SQL docs
New + Updated (0+0): PowerShell for SQL docs
New + Updated (0+0): Samples for SQL docs
New + Updated (0+0): SQL Server Migration Assistant (SSMA) docs
New + Updated (0+0): XQuery for SQL docs
What is Microsoft SQL Operations Studio (preview)?
5/17/2018 • 2 minutes to read

SQL Operations Studio (preview) is a free tool that runs on Windows, macOS, and Linux, for managing SQL
Server, Azure SQL Database, and Azure SQL Data Warehouse, wherever they're running.
Download and Install SQL Operations Studio (preview)

Transact-SQL (T-SQL) code editor with IntelliSense


SQL Operations Studio (preview) offers a modern, keyboard-focused T-SQL coding experience that makes your
everyday tasks easier with built-in features, such as multiple tab windows, a rich T-SQL editor, IntelliSense,
keyword completion, code snippets, code navigation, and source control integration (Git). Run on-demand T-SQL
queries, view and save results as text, JSON, or Excel. Edit data, organize your favorite database connections, and
browse database objects in a familiar object browsing experience. To learn how to use the T-SQL editor, see Use
the T-SQL editor to create database objects.

Smart T-SQL code snippets


T-SQL code snippets generate the proper T-SQL syntax to create databases, tables, views, stored procedures,
users, logins, roles, etc., and to update existing database objects. Use smart snippets to quickly create copies of your
database for development or testing purposes, and to generate and execute CREATE and INSERT scripts.
SQL Operations Studio (preview) also provides functionality to create custom T-SQL code snippets. To learn more,
see Create and use code snippets.

Customizable Server and Database Dashboards


Create rich customizable dashboards to monitor and quickly troubleshoot performance bottlenecks in your
databases. To learn about insight widgets, and database (and server) dashboards, see Manage servers and
databases with insight widgets.

Connection management (server groups)


Server groups provide a way to organize connection information for the servers and databases you work with. For
details, see Server groups.

Integrated Terminal
Use your favorite command-line tools (for example, Bash, PowerShell, sqlcmd, bcp, and ssh) in the Integrated
Terminal window right within the SQL Operations Studio (preview) user interface. To learn about the integrated
terminal, see Integrated terminal.

Next steps
Download and Install SQL Operations Studio (preview)
Connect and query SQL Server
Connect and query Azure SQL Database
Download SQL Server Management Studio (SSMS)
6/26/2018 • 6 minutes to read

THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
SSMS is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database.
SSMS provides tools to configure, monitor, and administer instances of SQL. Use SSMS to deploy, monitor, and
upgrade the data-tier components used by your applications, as well as build queries and scripts.
Use SQL Server Management Studio (SSMS) to query, design, and manage your databases and data warehouses,
wherever they are - on your local computer, or in the cloud.
SSMS is free!
SSMS 17.x is the latest generation of SQL Server Management Studio and provides support for SQL Server 2017.

Download SQL Server Management Studio 17.8.1

Download SQL Server Management Studio 17.8.1 Upgrade Package (upgrades 17.x to 17.8.1)
Version Information
Release number: 17.8.1
Build number: 14.0.17277.0
Release date: June 26, 2018
The SSMS 17.x installation does not upgrade or replace SSMS versions 16.x or earlier. SSMS 17.x installs side by
side with previous versions so both versions are available for use. If a computer contains side by side installations
of SSMS, verify that you start the correct version for your specific needs. The latest version is labeled Microsoft SQL
Server Management Studio 17 and has a new icon.

Available Languages
NOTE
Non-English localized releases of SSMS require the KB 2862966 security update package if installing on: Windows 8, Windows
7, Windows Server 2012, and Windows Server 2008 R2.

This release of SSMS can be installed in the following languages:


SQL Server Management Studio 17.8.1:
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French | German | Italian |
Japanese | Korean | Portuguese (Brazil) | Russian | Spanish
SQL Server Management Studio 17.8.1 Upgrade Package (upgrades 17.x to 17.8.1):
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French | German | Italian |
Japanese | Korean | Portuguese (Brazil) | Russian | Spanish

NOTE
The SQL Server PowerShell module is now a separate install through the PowerShell Gallery. For more information, see
Download SQL Server PowerShell Module.

SQL Server Management Studio


New in this Release
SSMS 17.8.1 is the latest version of SQL Server Management Studio. The 17.x generation of SSMS provides
support for almost all feature areas on SQL Server 2008 through SQL Server 2017. Version 17.x also supports
SQL Analysis Services PaaS.
Version 17.8.1 includes:
General SSMS
Database Properties:
This improvement exposes the "AUTOGROW_ALL_FILES" configuration option for Filegroups. This new config
option is added under the Database Properties > Filegroups window in the form of a new column (Autogrow
All Files) of checkboxes for each available Filegroup (except for Filestream and Memory Optimized Filegroups).
The user can enable/disable AUTOGROW_ALL_FILES for a particular Filegroup by toggling the corresponding
Autogrow_All_Files checkbox. Correspondingly, the AUTOGROW_ALL_FILES option is properly scripted both when
scripting the database for CREATE and when generating scripts for the database (SQL Server 2016 and above).
SQL Editor:
Improved experience with IntelliSense in Azure SQL Database when the user doesn't have master access.
Scripting:
General performance improvements, especially over high-latency connections.
Analysis Services (AS)
Analysis Services client libraries and data providers updated to the latest version, which added support for the
new Azure Government AAD authority (login.microsoftonline.us).

Supported SQL offerings


This version of SSMS works with all supported versions of SQL Server 2008 - SQL Server 2017 and provides
the greatest level of support for working with the latest cloud features in Azure SQL Database and Azure SQL
Data Warehouse.
Use SSMS 17.x to connect to SQL Server on Linux.
Additionally, SSMS 17.x can be installed side by side with SSMS 16.x or SQL Server 2014 SSMS and earlier.
SQL Server Integration Services (SSIS) - SSMS version 17.x does not support connecting to the legacy SQL
Server Integration Services service. To connect to an earlier version of the legacy Integration Services, use the
version of SSMS aligned with the version of SQL Server. For example, use SSMS 16.x to connect to the legacy
SQL Server 2016 Integration Services service. SSMS 17.x and SSMS 16.x can be installed side-by-side on the
same computer. Since the release of SQL Server 2012, the SSIS Catalog database, SSISDB, is the
recommended way to store, manage, run, and monitor Integration Services packages. For details, see SSIS
Catalog.

Supported Operating systems


This release of SSMS supports the following 64-bit platforms when used with the latest available service pack:
Windows 10 (64-bit)
Windows 8.1 (64-bit)
Windows 8 (64-bit)
Windows 7 (SP1) (64-bit)
Windows Server 2016 *
Windows Server 2012 R2 (64-bit)
Windows Server 2012 (64-bit)
Windows Server 2008 R2 (64-bit)
* SSMS 17.x is based on the Visual Studio 2015 Isolated Shell, which was released before Windows Server 2016.
Microsoft takes app compatibility seriously and ensures that already-shipped applications continue to run on the
latest Windows releases. To minimize issues running SSMS on Windows Server 2016, ensure SSMS has all of the
latest updates applied. If you experience any issues with SSMS on Windows Server 2016, contact support. The
support team determines if the issue is with SSMS, Visual Studio, or with Windows compatibility. The support
team then routes the issue to the appropriate team for further investigation.

SSMS installation tips and issues


Minimize Installation Reboots
Take the following actions to reduce the chances of SSMS setup requiring a reboot at the end of installation:
Make sure you are running an up-to-date version of the Visual C++ 2013 Redistributable Package.
Version 12.0.40649.5 (or greater) is required. Only the x64 version is needed.
Verify the version of .NET Framework on the computer is 4.6.1 (or greater).
Close any other instances of Visual Studio that are open on the computer.
Make sure all the latest OS updates are installed on the computer.
The noted actions are typically required only once. There are a few cases where a reboot is required during
additional upgrades to the same major version of SSMS. For minor upgrades, all the prerequisites
for SSMS are already installed on the computer.

Release Notes
The following are issues and limitations with this 17.8 release:
Clicking the Script button after modifying any filegroup property in the Properties window generates two
scripts: one script with a USE statement, and a second script with a USE master statement. The script with USE
master is generated in error and should be discarded. Run the script that contains the USE statement.
Some dialogs display an invalid edition error when working with new General Purpose or Business Critical
Azure SQL Database editions.
Some latency in the XEvents viewer may be observed. This is a known issue in the .NET Framework;
consider upgrading to .NET Framework 4.7.2.

Uninstall and reinstall SSMS


If your SSMS installation is having problems, and a standard uninstall and reinstall doesn't resolve them, you can
first try repairing the Visual Studio 2015 IsoShell. If repairing the Visual Studio 2015 IsoShell doesn't resolve the
problem, the following steps have been found to fix many random issues:
1. Uninstall SSMS the same way you uninstall any application (using Apps & features, Programs and features,
etc. depending on your version of Windows).
2. Uninstall Visual Studio 2015 IsoShell from an elevated cmd prompt:
PUSHD "C:\ProgramData\Package Cache\FE948F0DAB52EB8CB5A740A77D8934B9E1A8E301\redist"

vs_isoshell.exe /Uninstall /Force /PromptRestart

3. Uninstall Microsoft Visual C++ 2015 Redistributable the same way you uninstall any application. Uninstall
both x86 and x64 if they're on your computer.
4. Reinstall Visual Studio 2015 IsoShell from an elevated cmd prompt:
PUSHD "C:\ProgramData\Package Cache\FE948F0DAB52EB8CB5A740A77D8934B9E1A8E301\redist"

vs_isoshell.exe /PromptRestart

5. Reinstall SSMS.
6. Upgrade to the latest version of the Visual C++ 2015 Redistributable if you're not currently up to date.

Previous releases
Previous SQL Server Management Studio Releases

Feedback
SQL Client Tools Forum

Get Help
UserVoice - Suggestion to improve SQL Server?
Setup and Upgrade - MSDN Forum
SQL Server Data Tools - MSDN forum
Transact-SQL - MSDN forum
DBA Stack Exchange (tag sql-server) - ask SQL Server questions
Stack Overflow (tag sql-server) - also has some answers about SQL development
Reddit - general discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft

See Also
Tutorial: SQL Server Management Studio
SQL Server Management Studio documentation
Additional updates and service packs
Download SQL Server Data Tools (SSDT)

Contribute SQL documentation


How to contribute to SQL Server Documentation
Download and install SQL Server Data Tools (SSDT)
for Visual Studio
7/2/2018 • 4 minutes to read

THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
SQL Server Data Tools is a modern development tool for building SQL Server relational databases, Azure SQL
databases, Analysis Services (AS) data models, Integration Services (IS) packages, and Reporting Services (RS)
reports. With SSDT, you can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.
For most users, SQL Server Data Tools (SSDT) is installed during Visual Studio installation. Installing SSDT using
the Visual Studio installer adds the base SSDT functionality, but you still need to run the SSDT standalone installer to
get the AS, IS, and RS tools.

Install SSDT with Visual Studio 2017


To install SSDT during Visual Studio installation, select the Data storage and processing workload, and then
select SQL Server Data Tools. If Visual Studio is already installed, you can edit the list of workloads to include
SSDT:

Install Analysis Services, Integration Services, and Reporting Services tools

To install AS, IS, and RS project support, run the SSDT standalone installer.
The installer lists available Visual Studio instances to add the SSDT tools to. If Visual Studio is not installed,
selecting Install a new SQL Server Data Tools instance installs SSDT with a minimal version of Visual Studio,
but for the best experience we recommend using SSDT with the latest version of Visual Studio.
SSDT for VS 2017 (standalone installer)
Download SSDT for Visual Studio 2017 (15.7.1)

IMPORTANT
Before installing SSDT for Visual Studio 2017 (15.7.1), uninstall Analysis Services Projects and Reporting Services Projects
extensions if they are already installed, and close all VS instances.
When installing SSDT on Windows 10 and choosing Install new SQL Server Data Tools for Visual Studio 2017
instance, please clear any checkbox and install the new instance first. After the new instance is installed, please reboot the
computer and open the SSDT installer again to continue the installation.

Version Information
Release number: 15.7.1
Build number: 14.0.16167.0
Release date: July 02, 2018
For a complete list of changes, see the changelog.
SSDT for Visual Studio 2017 has the same system requirements as Visual Studio.
Available Languages - SSDT for VS 2017
This release of SSDT for VS 2017 can be installed in the following languages:
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French
German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish

SSDT for VS 2015 (standalone installer)


Download SSDT for Visual Studio 2015 (17.4)
Version Information
Release number: 17.4
Build number: 14.0.61712.050
For a complete list of changes, see the changelog.
Available Languages - SSDT for VS 2015
This release of SSDT for VS 2015 can be installed in the following languages:
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French
German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish
ISO Images - SSDT for VS 2015
An ISO image of SSDT can be used as an alternative way to install SSDT or to set up an Administrative Installation
point. The ISO is a self-contained file that contains all of the components needed by SSDT and it can be
downloaded using a restartable download manager, useful for situations with limited or less reliable network
bandwidth. Once downloaded, the ISO can be mounted as a drive or burned to a DVD.

NOTE
The SSDT for VS 2015 17.4 ISO images are now available.

Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French
German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish

Supported SQL versions


Relational databases
    SQL Server 2005* – SQL Server 2017 (use SSDT 17.x or SSDT for Visual Studio 2017 to connect to SQL Server on Linux)
    Azure SQL Database
    Azure SQL Data Warehouse (supports queries only; database projects are not yet supported)
    * SQL Server 2005 support is deprecated; please move to an officially supported SQL version.

Analysis Services models and Reporting Services reports
    SQL Server 2008 – SQL Server 2017

Integration Services packages
    SQL Server 2012 – SQL Server 2017

DacFx
SSDT for Visual Studio 2015 and SSDT for Visual Studio 2017 both use DacFx 17.4.1: Download Data-Tier
Application Framework (DacFx) 17.4.1.

Next steps
After installing SSDT, work through these tutorials to learn how to create databases, packages, data models, and
reports using SSDT:
Project-Oriented Offline Database Development
SSIS Tutorial: Create a Simple ETL Package
Analysis Services tutorials
Create a Basic Table Report (SSRS Tutorial)

Get Help
UserVoice - Suggestion to improve SQL Server?
Setup and Upgrade - MSDN Forum
SQL Server Data Tools - MSDN forum
Transact-SQL - MSDN forum
DBA Stack Exchange (tag sql-server) - ask SQL Server questions
Stack Overflow (tag sql-server) - also has some answers about SQL development
Reddit - general discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft

See Also
SSDT MSDN Forum
SSDT Team Blog
DACFx API Reference
Download SQL Server Management Studio (SSMS )
mssql-cli command-line query tool for SQL Server
5/17/2018 • 2 minutes to read

THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
mssql-cli is an interactive command-line tool for querying SQL Server. Install it on Windows, macOS, or Linux.

Install mssql-cli
For detailed installation instructions, see the Installation Guide. Or, if you're familiar with pip, install mssql-cli by
running the following command:
$ pip install mssql-cli
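Once installed, an interactive session can be started as sketched below. The server, database, and login values are placeholder assumptions; mssql-cli prompts for the password:

```shell
# Connect interactively to a hypothetical server and database.
mssql-cli -S myserver.example.com -d mydb -U myuser
```

Inside the session, type T-SQL statements and press Enter to run them.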

mssql-cli documentation
Documentation for mssql-cli is located in the mssql-cli GitHub repository.
Main page/readme
Installation Guide
Usage Guide
Additional documentation is located in the doc folder.
SQL Server Configuration Manager Help
5/3/2018 • 2 minutes to read

THIS TOPIC APPLIES TO: SQL Server (Windows only), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Use SQL Server Configuration Manager to configure SQL Server services and configure network connectivity. To
create or manage database objects, configure security, and write Transact-SQL queries, use SQL Server
Management Studio. For more information about SQL Server Management Studio, see SQL Server Books Online.

TIP
If you need to configure SQL Server on Linux, use the mssql-conf tool. For more information, see Configure SQL Server on
Linux with the mssql-conf tool.
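As a sketch of what configuring with mssql-conf looks like, the commands below set the TCP port and a memory limit and then restart the service. The specific values are illustrative assumptions, not recommendations:

```shell
# Run on the Linux host as a user with sudo rights.
sudo /opt/mssql/bin/mssql-conf set network.tcpport 1433
sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 2048

# Restart SQL Server so the new settings take effect.
sudo systemctl restart mssql-server
```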

This section contains the F1 Help topics for the dialogs in SQL Server Configuration Manager.

NOTE
SQL Server Configuration Manager cannot configure versions of SQL Server earlier than Microsoft SQL Server 2005.

Services
SQL Server Configuration Manager manages services that are related to SQL Server. Although many of these
tasks can be accomplished using the Microsoft Windows Services dialog, it is important to note that SQL Server
Configuration Manager performs additional operations on the services it manages, such as applying the correct
permissions when the service account is changed. Using the normal Windows Services dialog to configure any of
the SQL Server services might cause the service to malfunction.
Use SQL Server Configuration Manager for the following tasks for services:
Start, stop, and pause services
Configure services to start automatically or manually, disable the services, or change other service settings
Change the passwords for the accounts used by the SQL Server services
Start SQL Server using trace flags (command line parameters)
View the properties of services

SQL Server Network Configuration


Use SQL Server Configuration Manager for the following tasks related to the SQL Server services on this
computer:
Enable or disable a SQL Server network protocol
Configure a SQL Server network protocol
NOTE
For a short tutorial about how to configure protocols and connect to the SQL Server Database Engine, see Tutorial: Getting
Started with the Database Engine.

SQL Server Native Client Configuration


SQL Server clients connect to SQL Server by using the SQL Server Native Client network library. Use SQL Server
Configuration Manager for the following tasks related to client applications on this computer:
For SQL Server client applications on this computer, specify the protocol order, when connecting to
instances of SQL Server.
Configure client connection protocols.
For SQL Server client applications, create aliases for instances of SQL Server, so that clients can connect
using a custom connection string.
For more information about each of these tasks, see F1 help for each task.
To open SQL Server Configuration Manager
On the Start menu, point to All Programs, point to Microsoft SQL Server (version), point to Configuration
Tools, and then click SQL Server Configuration Manager.
To access SQL Server Configuration Manager Using Windows 8
Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not
a stand-alone program, SQL Server Configuration Manager does not appear as an application when running
Windows 8. To open SQL Server Configuration Manager, in the Search charm, under Apps, type
SQLServerManager12.msc (for SQL Server 2014 (12.x)) or SQLServerManager11.msc (for SQL Server 2012
(11.x)), and then press Enter.

See Also
SQL Server Services
SQL Server Network Configuration
SQL Native Client 11.0 Configuration
Choosing a Network Protocol
Configure SQL Server on Linux with the mssql-conf
tool
7/11/2018 • 14 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server (Linux only) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
mssql-conf is a configuration script that installs with SQL Server 2017 for Red Hat Enterprise Linux, SUSE Linux
Enterprise Server, and Ubuntu. You can use this utility to set the following parameters:

Agent Enable SQL Server Agent

Collation Set a new collation for SQL Server on Linux.

Customer feedback Choose whether or not SQL Server sends feedback to
Microsoft.

Database Mail Profile Set the default database mail profile for SQL Server on Linux

Default data directory Change the default directory for new SQL Server database
data files (.mdf).

Default log directory Changes the default directory for new SQL Server database
log (.ldf) files.

Default master database file directory Changes the default directory for the master database files on
existing SQL installation.

Default master database file name Changes the name of master database files.

Default dump directory Change the default directory for new memory dumps and
other troubleshooting files.

Default error log directory Changes the default directory for new SQL Server ErrorLog,
Default Profiler Trace, System Health Session XE, and Hekaton
Session XE files.

Default backup directory Change the default directory for new backup files.

Dump type Choose the type of memory dump file to collect.

High availability Enable Availability Groups.

Local Audit directory Set a directory to add Local Audit files.

Locale Set the locale for SQL Server to use.

Memory limit Set the memory limit for SQL Server.


TCP port Change the port where SQL Server listens for connections.

TLS Configure Transport Level Security.

Traceflags Set the traceflags that the service is going to use.

TIP
Some of these settings can also be configured with environment variables. For more information, see Configure SQL Server
settings with environment variables.

Usage tips
For Always On Availability Groups and shared disk clusters, always make the same configuration changes
on each node.
For the shared disk cluster scenario, do not attempt to restart the mssql-server service to apply changes.
SQL Server is running as an application. Instead, take the resource offline and then back online.
These examples run mssql-conf by specifying the full path: /opt/mssql/bin/mssql-conf. If you choose to
navigate to that path instead, run mssql-conf in the context of the current directory: ./mssql-conf.

Enable SQL Server Agent


The sqlagent.enabled setting enables SQL Server Agent. By default, SQL Server Agent is disabled. If
sqlagent.enabled is not present in the mssql.conf settings file, then SQL Server internally assumes that SQL
Server Agent is disabled.
To change this setting, use the following steps:
1. Enable the SQL Server Agent:

sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true

2. Restart the SQL Server service:

sudo systemctl restart mssql-server

Change the SQL Server collation


The set-collation option changes the collation value to any of the supported collations.
1. First backup any user databases on your server.
2. Then use the sp_detach_db stored procedure to detach the user databases.
3. Run the set-collation option and follow the prompts:

sudo /opt/mssql/bin/mssql-conf set-collation

4. The mssql-conf utility will attempt to change to the specified collation value and restart the service. If there
are any errors, it rolls back the collation to the previous value.
5. Restore your user database backups.
For a list of supported collations, run the sys.fn_helpcollations function: SELECT Name from sys.fn_helpcollations() .

Configure customer feedback


The telemetry.customerfeedback setting changes whether SQL Server sends feedback to Microsoft or not. By
default, this value is set to true for all editions. To change the value, run the following commands:

IMPORTANT
You cannot turn off customer feedback for the free editions of SQL Server (Express and Developer).

1. Run the mssql-conf script as root with the set command for telemetry.customerfeedback. The following
example turns off customer feedback by specifying false.

sudo /opt/mssql/bin/mssql-conf set telemetry.customerfeedback false

2. Restart the SQL Server service:

sudo systemctl restart mssql-server

For more information, see Customer Feedback for SQL Server on Linux and the SQL Server Privacy Statement.

Change the default data or log directory location


The filelocation.defaultdatadir and filelocation.defaultlogdir settings change the location where the new
database and log files are created. By default, this location is /var/opt/mssql/data. To change these settings, use the
following steps:
1. Create the target directory for new database data and log files. The following example creates a new
/tmp/data directory:

sudo mkdir /tmp/data

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/data


sudo chgrp mssql /tmp/data

3. Use mssql-conf to change the default data directory with the set command:

sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /tmp/data

4. Restart the SQL Server service:

sudo systemctl restart mssql-server

5. Data files for all newly created databases are now stored in this new location. If you would like
to change the location of the log (.ldf) files of new databases, you can use the following set command:
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultlogdir /tmp/log

6. This command also assumes that a /tmp/log directory exists, and that it is under the user and group mssql.
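Steps 1 and 2 above (create the directory, then change its owner and group) can be collapsed into a single install(1) call. The following sketch is illustrative and not part of the documented procedure; the make_owned_dir helper name and its parameters are assumptions for the example.

```shell
#!/bin/sh
# Sketch: combine mkdir + chown + chgrp into one install(1) call.
# make_owned_dir is an illustrative helper, not an mssql-conf feature.
make_owned_dir() {
    # $1 = directory, $2 = owner, $3 = group
    install -d -o "$2" -g "$3" "$1"
}

# On a real server you would run this as root, for example:
#   make_owned_dir /tmp/data mssql mssql
```

Changing ownership to another account requires root privileges, so on a server the call runs under sudo, just like the documented steps.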

Change the default master database file directory location


The filelocation.masterdatafile and filelocation.masterlogfile settings change the location where the SQL
Server engine looks for the master database files. By default, this location is /var/opt/mssql/data.
To change these settings, use the following steps:
1. Create the target directory for the master database files. The following example creates a new
/tmp/masterdatabasedir directory:

sudo mkdir /tmp/masterdatabasedir

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/masterdatabasedir


sudo chgrp mssql /tmp/masterdatabasedir

3. Use mssql-conf to change the default master database directory for the master data and log files with the
set command:

sudo /opt/mssql/bin/mssql-conf set filelocation.masterdatafile /tmp/masterdatabasedir/master.mdf


sudo /opt/mssql/bin/mssql-conf set filelocation.masterlogfile /tmp/masterdatabasedir/mastlog.ldf

4. Stop the SQL Server service:

sudo systemctl stop mssql-server

5. Move the master.mdf and mastlog.ldf files:

sudo mv /var/opt/mssql/data/master.mdf /tmp/masterdatabasedir/master.mdf


sudo mv /var/opt/mssql/data/mastlog.ldf /tmp/masterdatabasedir/mastlog.ldf

6. Start the SQL Server service:

sudo systemctl start mssql-server

NOTE
If SQL Server cannot find master.mdf and mastlog.ldf files in the specified directory, a templated copy of the system
databases will be automatically created in the specified directory, and SQL Server will successfully start up. However,
metadata such as user databases, server logins, server certificates, encryption keys, SQL agent jobs, or old SA login password
will not be updated in the new master database. You will have to stop SQL Server and move your old master.mdf and
mastlog.ldf to the new specified location and start SQL Server to continue using the existing metadata.

Change the name of the master database files


The filelocation.masterdatafile and filelocation.masterlogfile settings also control the file names that the SQL
Server engine expects for the master database data and log files. The defaults are master.mdf and mastlog.ldf. To
change these settings, use the following steps:
1. Stop the SQL Server service:

sudo systemctl stop mssql-server

2. Use mssql-conf to change the expected master database names for the master data and log files with the
set command:

sudo /opt/mssql/bin/mssql-conf set filelocation.masterdatafile /var/opt/mssql/data/masternew.mdf


sudo /opt/mssql/bin/mssql-conf set filelocation.masterlogfile /var/opt/mssql/data/mastlognew.ldf

3. Change the name of the master database data and log files

sudo mv /var/opt/mssql/data/master.mdf /var/opt/mssql/data/masternew.mdf


sudo mv /var/opt/mssql/data/mastlog.ldf /var/opt/mssql/data/mastlognew.ldf

4. Start the SQL Server service:

sudo systemctl start mssql-server

Change the default dump directory location


The filelocation.defaultdumpdir setting changes the default location where the memory and SQL dumps are
generated whenever there is a crash. By default, these files are generated in /var/opt/mssql/log.
To set up this new location, use the following commands:
1. Create the target directory for new dump files. The following example creates a new /tmp/dump directory:

sudo mkdir /tmp/dump

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/dump


sudo chgrp mssql /tmp/dump

3. Use mssql-conf to change the default data directory with the set command:

sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdumpdir /tmp/dump

4. Restart the SQL Server service:

sudo systemctl restart mssql-server

Change the default error log file directory location


The filelocation.errorlogfile setting changes the location where the new error log, default profiler trace, system
health session XE, and Hekaton session XE files are created. By default, this location is /var/opt/mssql/log. The
directory that contains the SQL Server errorlog file becomes the default log directory for the other logs.
To change these settings:
1. Create the target directory for new error log files. The following example creates a new /tmp/logs
directory:

sudo mkdir /tmp/logs

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/logs


sudo chgrp mssql /tmp/logs

3. Use mssql-conf to change the default errorlog filename with the set command:

sudo /opt/mssql/bin/mssql-conf set filelocation.errorlogfile /tmp/logs/errorlog

4. Restart the SQL Server service:

sudo systemctl restart mssql-server

Change the default backup directory location


The filelocation.defaultbackupdir setting changes the default location where the backup files are generated. By
default, these files are generated in /var/opt/mssql/data.
To set up this new location, use the following commands:
1. Create the target directory for new backup files. The following example creates a new /tmp/backup
directory:

sudo mkdir /tmp/backup

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/backup


sudo chgrp mssql /tmp/backup

3. Use mssql-conf to change the default backup directory with the "set" command:

sudo /opt/mssql/bin/mssql-conf set filelocation.defaultbackupdir /tmp/backup

4. Restart the SQL Server service:

sudo systemctl restart mssql-server

Specify core dump settings


If an exception occurs in one of the SQL Server processes, SQL Server creates a memory dump.
There are two options for controlling the type of memory dumps that SQL Server collects:
coredump.coredumptype and coredump.captureminiandfull. These relate to the two phases of core dump
capture.
The first phase capture is controlled by the coredump.coredumptype setting, which determines the type of
dump file generated during an exception. The second phase is controlled by the coredump.captureminiandfull
setting. If coredump.captureminiandfull is set to true, the dump file specified by coredump.coredumptype is
generated and a second mini dump is also generated. Setting coredump.captureminiandfull to false disables
the second capture attempt.
1. Decide whether to capture both mini and full dumps with the coredump.captureminiandfull setting.

sudo /opt/mssql/bin/mssql-conf set coredump.captureminiandfull <true or false>

Default: false
2. Specify the type of dump file with the coredump.coredumptype setting.

sudo /opt/mssql/bin/mssql-conf set coredump.coredumptype <dump_type>

Default: miniplus
The following table lists the possible coredump.coredumptype values.

TYPE DESCRIPTION

mini Mini is the smallest dump file type. It uses the Linux
system information to determine threads and modules in
the process. The dump contains only the host
environment thread stacks and modules. It does not
contain indirect memory references or globals.

miniplus MiniPlus is similar to mini, but it includes additional
memory. It understands the internals of SQLPAL and the
host environment, adding the following memory regions
to the dump:
- Various globals
- All memory above 64TB
- All named regions found in /proc/$pid/maps
- Indirect memory from threads and stacks
- Thread information
- Associated Teb’s and Peb’s
- Module Information
- VMM and VAD tree

filtered Filtered uses a subtraction-based design where all
memory in the process is included unless specifically
excluded. The design understands the internals of SQLPAL
and the host environment, excluding certain regions from
the dump.

full Full is a complete process dump that includes all regions
located in /proc/$pid/maps. This dump type is not controlled by the
coredump.captureminiandfull setting.
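For reference, these two options map onto the [coredump] section of mssql.conf, using the same layout as the sample file later in this article. The values below are illustrative:

```ini
[coredump]
captureminiandfull = true
coredumptype = filtered
```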
Set the default database mail profile for SQL Server on Linux
The sqlagent.databasemailprofile setting allows you to set the default Database Mail profile for email alerts.

sudo /opt/mssql/bin/mssql-conf set sqlagent.databasemailprofile <profile_name>

High Availability
The hadr.hadrenabled option enables availability groups on your SQL Server instance. The following command
enables availability groups by setting hadr.hadrenabled to 1. You must restart SQL Server for the setting to take
effect.

sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1


sudo systemctl restart mssql-server

For information about how this is used with availability groups, see the following two topics:
Configure Always On Availability Group for SQL Server on Linux
Configure read-scale availability group for SQL Server on Linux

Set local audit directory


The telemetry.userrequestedlocalauditdirectory setting enables Local Audit and lets you set the directory
where the Local Audit logs are created.
1. Create a target directory for new Local Audit logs. The following example creates a new /tmp/audit
directory:

sudo mkdir /tmp/audit

2. Change the owner and group of the directory to the mssql user:

sudo chown mssql /tmp/audit


sudo chgrp mssql /tmp/audit

3. Run the mssql-conf script as root with the set command for
telemetry.userrequestedlocalauditdirectory:

sudo /opt/mssql/bin/mssql-conf set telemetry.userrequestedlocalauditdirectory /tmp/audit

4. Restart the SQL Server service:

sudo systemctl restart mssql-server

For more information, see Customer Feedback for SQL Server on Linux.

Change the SQL Server locale


The language.lcid setting changes the SQL Server locale to any supported language identifier (LCID).
1. The following example changes the locale to French (1036):
sudo /opt/mssql/bin/mssql-conf set language.lcid 1036

2. Restart the SQL Server service to apply the changes:

sudo systemctl restart mssql-server

Set the memory limit


The memory.memorylimitmb setting controls the amount of physical memory (in MB) available to SQL Server.
The default is 80% of the physical memory.
1. Run the mssql-conf script as root with the set command for memory.memorylimitmb. The following
example changes the memory available to SQL Server to 3.25 GB (3328 MB).

sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 3328

2. Restart the SQL Server service to apply the changes:

sudo systemctl restart mssql-server
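Because the default is 80% of physical memory, it can help to compute that figure before picking an explicit limit. The following sketch reads MemTotal from /proc/meminfo on Linux; the default_limit_mb helper is illustrative, not part of mssql-conf.

```shell
#!/bin/sh
# Sketch: compute 80% of physical memory in MB, the documented default limit.
# default_limit_mb is an illustrative helper, not an mssql-conf feature.
default_limit_mb() {
    # $1 = total memory in kB, as reported by /proc/meminfo
    echo $(( $1 * 80 / 100 / 1024 ))
}

if [ -r /proc/meminfo ]; then
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    echo "Default SQL Server memory limit: $(default_limit_mb "$mem_kb") MB"
fi
```

A memory.memorylimitmb value below this figure leaves headroom for the rest of the system.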

Change the TCP port


The network.tcpport setting changes the TCP port where SQL Server listens for connections. By default, this port
is set to 1433. To change the port, run the following commands:
1. Run the mssql-conf script as root with the "set" command for "network.tcpport":

sudo /opt/mssql/bin/mssql-conf set network.tcpport <new_tcp_port>

2. Restart the SQL Server service:

sudo systemctl restart mssql-server

3. When connecting to SQL Server now, you must specify the custom port with a comma (,) after the
hostname or IP address. For example, to connect with SQLCMD, you would use the following command:

sqlcmd -S localhost,<new_tcp_port> -U test -P test

Specify TLS settings


The following options configure TLS for an instance of SQL Server running on Linux.

OPTION DESCRIPTION

network.forceencryption If 1, then SQL Server forces all connections to be encrypted.
By default, this option is 0.

network.tlscert The absolute path to the certificate file that SQL Server uses
for TLS. Example: /etc/ssl/certs/mssql.pem The certificate
file must be accessible by the mssql account. Microsoft
recommends restricting access to the file using
chown mssql:mssql <file>; chmod 400 <file> .

network.tlskey The absolute path to the private key file that SQL Server uses
for TLS. Example: /etc/ssl/private/mssql.key The
certificate file must be accessible by the mssql account.
Microsoft recommends restricting access to the file using
chown mssql:mssql <file>; chmod 400 <file> .

network.tlsprotocols A comma-separated list of which TLS protocols are allowed by
SQL Server. SQL Server always attempts to negotiate the
strongest allowed protocol. If a client does not support any
allowed protocol, SQL Server rejects the connection attempt.
For compatibility, all supported protocols are allowed by
default (1.2, 1.1, 1.0). If your clients support TLS 1.2, Microsoft
recommends allowing only TLS 1.2.

network.tlsciphers Specifies which ciphers are allowed by SQL Server for TLS. This
string must be formatted per OpenSSL's cipher list format. In
general, you should not need to change this option.
By default, the following ciphers are allowed:
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-
GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-
AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-
ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-
RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-
ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-
AES128-SHA:AES256-GCM-SHA384:AES128-GCM-
SHA256:AES256-SHA256:AES128-SHA256:AES256-
SHA:AES128-SHA

network.kerberoskeytabfile Path to the Kerberos keytab file

For an example of using the TLS settings, see Encrypting Connections to SQL Server on Linux.

Enable/Disable traceflags
The traceflag option enables or disables trace flags for the startup of the SQL Server service. To enable or disable a
traceflag, use the following commands:
1. Enable a traceflag using the following command. For example, for Traceflag 1234:

sudo /opt/mssql/bin/mssql-conf traceflag 1234 on

2. You can enable multiple traceflags by specifying them separately:

sudo /opt/mssql/bin/mssql-conf traceflag 2345 3456 on

3. In a similar way, you can disable one or more enabled traceflags by specifying them and adding the off
parameter:

sudo /opt/mssql/bin/mssql-conf traceflag 1234 2345 3456 off


4. Restart the SQL Server service to apply the changes:

sudo systemctl restart mssql-server
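To double-check which trace flags are recorded after these commands, you can scan the [traceflag] section of mssql.conf. This is an illustrative sketch (list_traceflags is not an mssql-conf command) that assumes the simple key = value layout shown in the sample file later in this article:

```shell
#!/bin/sh
# Sketch: print the trace flag values recorded in the [traceflag] section
# of an mssql.conf-style file. list_traceflags is an illustrative helper.
list_traceflags() {
    # $1 = path to the configuration file
    awk '/^\[traceflag\]/ {in_tf = 1; next}
         /^\[/            {in_tf = 0}
         in_tf && /=/     {gsub(/ /, ""); split($0, kv, "="); print kv[2]}' "$1"
}

# Example: list_traceflags /var/opt/mssql/mssql.conf
```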

Remove a setting
To unset any setting made with mssql-conf set , call mssql-conf with the unset option and the name of the
setting. This clears the setting, effectively returning it to its default value.
1. The following example clears the network.tcpport option.

sudo /opt/mssql/bin/mssql-conf unset network.tcpport

2. Restart the SQL Server service.

sudo systemctl restart mssql-server

View current settings


To view any configured settings, run the following command to output the contents of the mssql.conf file:

sudo cat /var/opt/mssql/mssql.conf

Note that any settings not shown in this file are using their default values. The next section provides a sample
mssql.conf file.
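If you only need one value rather than the whole file, a small awk filter over the key = value lines works. This is an illustrative sketch (get_setting is not an mssql-conf command) and assumes the file layout shown in the next section:

```shell
#!/bin/sh
# Sketch: extract a single "key = value" setting from an mssql.conf-style
# file. get_setting is an illustrative helper, not an mssql-conf command.
get_setting() {
    # $1 = path to the configuration file, $2 = setting name (e.g. tcpport)
    awk -F' *= *' -v key="$2" '$1 == key {print $2; exit}' "$1"
}

# Example: get_setting /var/opt/mssql/mssql.conf tcpport
```

Settings that are absent from the file are at their defaults, so an empty result means the default value is in effect.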

mssql.conf format
The following /var/opt/mssql/mssql.conf file provides an example for each setting. You can use this format to
manually make changes to the mssql.conf file as needed. If you do manually change the file, you must restart
SQL Server before the changes are applied. To use the mssql.conf file with Docker, you must have Docker persist
your data. First add a complete mssql.conf file to your host directory and then run the container. There is an
example of this in Customer Feedback.
[EULA]
accepteula = Y

[coredump]
captureminiandfull = true
coredumptype = full

[filelocation]
defaultbackupdir = /var/opt/mssql/data/
defaultdatadir = /var/opt/mssql/data/
defaultdumpdir = /var/opt/mssql/data/
defaultlogdir = /var/opt/mssql/data/

[hadr]
hadrenabled = 0

[language]
lcid = 1033

[memory]
memorylimitmb = 4096

[network]
forceencryption = 0
ipaddress = 10.192.0.0
kerberoskeytabfile = /var/opt/mssql/secrets/mssql.keytab
tcpport = 1401
tlscert = /etc/ssl/certs/mssql.pem
tlsciphers = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-
RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-
AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-
GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA
tlskey = /etc/ssl/private/mssql.key
tlsprotocols = 1.2,1.1,1.0

[sqlagent]
databasemailprofile = default
errorlogfile = /var/opt/mssql/log/sqlagentlog.log
errorlogginglevel = 7

[telemetry]
customerfeedback = true
userrequestedlocalauditdirectory = /tmp/audit

[traceflag]
traceflag0 = 1204
traceflag1 = 2345
traceflag2 = 3456

Next steps
To instead use environment variables to make some of these configuration changes, see Configure SQL Server
settings with environment variables.
For other management tools and scenarios, see Manage SQL Server on Linux.
Install Distributed Replay - Overview
6/25/2018 • 2 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Use the following topics to install the Distributed Replay feature.

In This Section
TOPIC DESCRIPTION

Distributed Replay Requirements Procedural topic that lists the requirements for installing
Distributed Replay.

Install Distributed Replay Procedural topic that provides a typical Distributed Replay
installation by using the Setup Wizard, sample syntax and
installation parameters for running unattended Setup, and
sample syntax and installation parameters for running
Distributed Replay through a configuration file.

Complete the Post-Installation Steps Procedural topic for completing a Distributed Replay
installation.

Modify the Controller and Client Services Accounts Procedural topic for how to start and stop the Distributed
Replay controller and client services, and modify the service
accounts.

See Also
Install SQL Server 2016
dta Utility
5/3/2018 • 15 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The dta utility is the command prompt version of Database Engine Tuning Advisor. The dta utility is designed to
allow you to use Database Engine Tuning Advisor functionality in applications and scripts.
Like Database Engine Tuning Advisor, the dta utility analyzes a workload and recommends physical design
structures to improve server performance for that workload. The workload can be a plan cache, a SQL Server
Profiler trace file or table, or a Transact-SQL script. Physical design structures include indexes, indexed views, and
partitioning. After analyzing a workload, the dta utility produces a recommendation for the physical design of
databases and can generate the necessary script to implement the recommendation. Workloads can be specified
from the command prompt with the -if or the -it argument. You can also specify an XML input file from the
command prompt with the -ix argument. In that case, the workload is specified in the XML input file.

Syntax
dta
[ -? ] |
[
[ -S server_name[ \instance ] ]
{ { -U login_id [-P password ] } | –E }
{ -D database_name [ ,...n ] }
[ -d database_name ]
[ -Tl table_list | -Tf table_list_file ]
{ -if workload_file | -it workload_trace_table_name |
-ip | -iq }
{ -ssession_name | -IDsession_ID }
[ -F ]
[ -of output_script_file_name ]
[ -or output_xml_report_file_name ]
[ -ox output_XML_file_name ]
[ -rl analysis_report_list [ ,...n ] ]
[ -ix input_XML_file_name ]
[ -A time_for_tuning_in_minutes ]
[ -n number_of_events ]
[ -I time_window_in_hours ]
[ -m minimum_improvement ]
[ -fa physical_design_structures_to_add ]
[ -fi filtered_indexes]
[ -fc columnstore_indexes]
[ -fp partitioning_strategy ]
[ -fk keep_existing_option ]
[ -fx drop_only_mode ]
[ -B storage_size ]
[ -c max_key_columns_in_index ]
[ -C max_columns_in_index ]
[ -e | -e tuning_log_name ]
[ -N online_option]
[ -q ]
[ -u ]
[ -x ]
[ -a ]
]
Arguments
-?
Displays usage information.
-A time_for_tuning_in_minutes
Specifies the tuning time limit in minutes. dta uses the specified amount of time to tune the workload and generate
a script with the recommended physical design changes. By default dta assumes a tuning time of 8 hours.
Specifying 0 allows unlimited tuning time. dta might finish tuning the entire workload before the time limit expires.
However, to make sure that the entire workload is tuned, we recommend that you specify unlimited tuning time (-A
0).
-a
Tunes workload and applies the recommendation without prompting you.
-B storage_size
Specifies the maximum space in megabytes that can be consumed by the recommended index and partitioning.
When multiple databases are tuned, recommendations for all databases are considered for the space calculation.
By default, dta assumes the smaller of the following storage sizes:
Three times the current raw data size, which includes the total size of heaps and clustered indexes on tables
in the database.
The free space on all attached disk drives plus the raw data size.
The default storage size does not include nonclustered indexes and indexed views.
-C max_columns_in_index
Specifies the maximum number of columns in indexes that dta proposes. The maximum value is 1024. By
default, this argument is set to 16.
-c max_key_columns_in_index
Specifies the maximum number of key columns in indexes that dta proposes. The default value is 16, the
maximum value allowed. dta also considers creating indexes with included columns. Indexes recommended
with included columns may exceed the number of columns specified in this argument.
-D database_name
Specifies the name of each database that is to be tuned. The first database is the default database. You can
specify multiple databases by separating the database names with commas, for example:

dta –D database_name1, database_name2...

Alternatively, you can specify multiple databases by using the –D argument for each database name, for example:

dta –D database_name1 -D database_name2... n

The -D argument is mandatory. If the -d argument has not been specified, dta initially connects to the database
that is specified with the first USE database_name clause in the workload. If there is no explicit USE database_name
clause in the workload, you must use the -d argument.
For example, if you have a workload that contains no explicit USE database_name clause, and you use the following
dta command, a recommendation will not be generated:

dta -D db_name1, db_name2...


But if you use the same workload, and use the following dta command that uses the -d argument, a
recommendation will be generated:

dta -D db_name1, db_name2 -d db_name1

-d database_name
Specifies the first database to which dta connects when tuning a workload. Only one database can be specified for
this argument. For example:

dta -d AdventureWorks2012 ...

If multiple database names are specified, then dta returns an error. The -d argument is optional.
If you are using an XML input file, you can specify the first database to which dta connects by using the
DatabaseToConnect element that is located under the TuningOptions element. For more information, see
Database Engine Tuning Advisor.
If you are tuning only one database, the -d argument provides functionality that is similar to the -d argument in the
sqlcmd utility, but it does not execute the USE database_name statement. For more information, see sqlcmd
Utility.
-E
Uses a trusted connection instead of requesting a password. Either the -E argument or the -U argument, which
specifies a login ID, must be used.
-e tuning_log_name
Specifies the name of the table or file where dta records events that it could not tune. The table is created on the
server where the tuning is performed.
If a table is used, specify its name in the format: [database_name].[owner_name].table_name. The following table
shows the default values for each parameter:

PARAMETER DEFAULT VALUE DETAILS

database_name The database_name specified with the –D
option

owner_name dbo owner_name must be dbo. If any other
value is specified, then dta execution
fails and it returns an error.

table_name None

If a file is used, specify .xml as its extension. For example, TuningLog.xml.

NOTE
The dta utility does not delete the contents of user-specified tuning log tables if the session is deleted. When tuning very
large workloads, we recommend that a table be specified for the tuning log. Since tuning large workloads can result in large
tuning logs, the sessions can be deleted much faster when a table is used.
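
The defaulting rules in the table above can be sketched as follows. This is an illustrative Python helper, not part of dta; the function name and exact behavior are assumptions based solely on the documented defaults:

```python
def qualify_tuning_log(table_name, default_database):
    """Apply dta's documented defaults to a tuning-log table name.

    Accepts 'table', 'owner.table', or 'db.owner.table' and returns the
    [database].[owner].table form; any owner other than dbo is an error.
    """
    parts = table_name.split(".")
    if len(parts) == 1:
        db, owner, table = default_database, "dbo", parts[0]
    elif len(parts) == 2:
        db, owner, table = default_database, parts[0], parts[1]
    else:
        db, owner, table = parts[0], parts[1], parts[2]
    if owner != "dbo":
        # dta execution fails for any owner_name other than dbo
        raise ValueError("owner_name must be dbo, got %r" % owner)
    return "[%s].[%s].%s" % (db, owner, table)

print(qualify_tuning_log("TuningLog", "AdventureWorks2012"))
# -> [AdventureWorks2012].[dbo].TuningLog
```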

-F
Permits dta to overwrite an existing output file. If an output file with the same name already exists and -F is not
specified, dta returns an error. You can use -F with -of, -or, or -ox.
-fa physical_design_structures_to_add
Specifies what types of physical design structures dta should include in the recommendation. The following table
lists and describes the values that can be specified for this argument. When no value is specified, dta uses the
default, -fa IDX.

| VALUE | DESCRIPTION |
|-|-|
| IDX_IV | Indexes and indexed views. |
| IDX | Indexes only. |
| IV | Indexed views only. |
| NCL_IDX | Nonclustered indexes only. |

-fi
Specifies that filtered indexes be considered for new recommendations. For more information, see Create Filtered
Indexes.
-fc
Specifies that columnstore indexes be considered for new recommendations. DTA will consider both clustered and
non-clustered columnstore indexes. For more information, see
Columnstore index recommendations in Database Engine Tuning Advisor (DTA).

Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-fk keep_existing_option
Specifies what existing physical design structures dta must retain when generating its recommendation. The
following table lists and describes the values that can be specified for this argument:

| VALUE | DESCRIPTION |
|-|-|
| NONE | No existing structures. |
| ALL | All existing structures. |
| ALIGNED | All partition-aligned structures. |
| CL_IDX | All clustered indexes on tables. |
| IDX | All clustered and nonclustered indexes on tables. |

-fp partitioning_strategy
Specifies whether new physical design structures (indexes and indexed views) that dta proposes should be
partitioned, and how they should be partitioned. The following table lists and describes the values that can be
specified for this argument:

| VALUE | DESCRIPTION |
|-|-|
| NONE | No partitioning. |
| FULL | Full partitioning (choose to enhance performance). |
| ALIGNED | Aligned partitioning only (choose to enhance manageability). |

ALIGNED means that in the recommendation generated by dta every proposed index is partitioned in exactly the
same way as the underlying table for which the index is defined. Nonclustered indexes on an indexed view are
aligned with the indexed view. Only one value can be specified for this argument. The default is -fp NONE.
-fx drop_only_mode
Specifies that dta only considers dropping existing physical design structures. No new physical design structures
are considered. When this option is specified, dta evaluates the usefulness of existing physical design structures
and recommends dropping seldom-used structures. This argument takes no values. It cannot be used with the -fa,
-fp, or -fk ALL arguments.
-ID session_ID
Specifies a numerical identifier for the tuning session. If not specified, then dta generates an ID number. You can
use this identifier to view information for existing tuning sessions. If you do not specify a value for -ID, then a
session name must be specified with -s.
-ip
Specifies that the plan cache be used as the workload. The top 1,000 plan cache events for explicitly selected
databases are analyzed. This value can be changed using the -n option.
-iq
Specifies that the Query Store be used as the workload. The top 1,000 events from the Query Store for explicitly
selected databases are analyzed. This value can be changed using the -n option. For more information, see Query
Store and Tuning Database Using Workload from Query Store.

Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-if workload_file
Specifies the path and name of the workload file to use as input for tuning. The file must be in one of these
formats: .trc (SQL Server Profiler trace file), .sql (SQL file), or .log (SQL Server trace file). Either one workload file
or one workload table must be specified.
-it workload_trace_table_name
Specifies the name of a table containing the workload trace for tuning. The name is specified in the format:
[database_name].[owner_name].table_name.
The following table shows the default values for each:

| PARAMETER | DEFAULT VALUE |
|-|-|
| database_name | database_name specified with the -D option. |
| owner_name | dbo. |
| table_name | None. |

NOTE
owner_name must be dbo. If any other value is specified, execution of dta fails and an error is returned. Also note that either
one workload table or one workload file must be specified.

-ix input_XML_file_name
Specifies the name of the XML file containing dta input information. This must be a valid XML document
conforming to DTASchema.xsd. Conflicting arguments specified from the command prompt for tuning options
override the corresponding value in this XML file. The only exception is if a user-specified configuration is entered
in the evaluate mode in the XML input file. For example, if a configuration is entered in the Configuration element
of the XML input file and the EvaluateConfiguration element is also specified as one of the tuning options, the
tuning options specified in the XML input file will override any tuning options entered from the command prompt.
-m minimum_improvement
Specifies the minimum percentage of improvement that the recommended configuration must satisfy.
-N online_option
Specifies whether physical design structures are created online. The following table lists and describes the values
you can specify for this argument:

| VALUE | DESCRIPTION |
|-|-|
| OFF | No recommended physical design structures can be created online. |
| ON | All recommended physical design structures can be created online. |
| MIXED | Database Engine Tuning Advisor attempts to recommend physical design structures that can be created online when possible. |

If an index is created online, ONLINE = ON is appended to its object definition.


-n number_of_events
Specifies the number of events in the workload that dta should tune. If this argument is specified and the workload
is a trace file that contains duration information, then dta tunes events in decreasing order of duration. This
argument is useful to compare two configurations of physical design structures. To compare two configurations,
specify the same number of events to be tuned for both configurations and then specify an unlimited tuning time
for both also as follows:

dta -n number_of_events -A 0

In this case, it is important to specify an unlimited tuning time (-A 0). Otherwise, Database Engine Tuning Advisor
assumes an 8-hour tuning time by default.
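
To make the two-configuration comparison concrete, here is a small Python sketch that builds the paired dta command lines with an identical -n and an unlimited -A. The helper is hypothetical (not shipped with dta); the session-name suffixes are invented for illustration:

```python
def dta_compare_args(server, database, workload_file, session, n_events):
    """Build two dta command lines that tune the same first N workload events.

    Identical -n plus -A 0 (unlimited time) makes the two sessions comparable.
    """
    common = ["dta", "-S", server, "-E", "-D", database,
              "-if", workload_file,
              "-n", str(n_events),  # same number of events in both runs
              "-A", "0"]            # unlimited tuning time
    return (common + ["-s", session + "_baseline"],
            common + ["-s", session + "_candidate"])

baseline, candidate = dta_compare_args(
    "MyServer", "AdventureWorks2012", "workload.sql", "Compare", 50)
print(" ".join(baseline))
```

Each returned list could be passed to subprocess.run on a machine where dta is installed.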
-I time_window_in_hours
Specifies the time window (in hours) within which a query must have executed for DTA to consider it for tuning
when using the -iq option (workload from Query Store).

dta -iq -I 48

In this case, DTA uses the Query Store as the source of the workload and only considers queries that have executed
within the past 48 hours.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-of output_script_file_name
Specifies that dta writes the recommendation as a Transact-SQL script to the file name and destination specified.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -or and -ox.
-or output_xml_report_file_name
Specifies that dta writes the recommendation to an output report in XML. If a file name is supplied, then the
recommendations are written to that destination. Otherwise, dta uses the session name to generate the file name
and writes it to the current directory.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -of and -ox.
-ox output_XML_file_name
Specifies that dta writes the recommendation as an XML file to the file name and destination supplied. Ensure that
Database Engine Tuning Advisor has permissions to write to the destination directory.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -of and -or.
-P password
Specifies the password for the login ID. If this option is not used, dta prompts for a password.
-q
Sets quiet mode. No information is written to the console, including progress and header information.
-rl analysis_report_list
Specifies the list of analysis reports to generate. The following table lists the values that can be specified for this
argument:

| VALUE | REPORT |
|-|-|
| ALL | All analysis reports |
| STMT_COST | Statement cost report |
| EVT_FREQ | Event frequency report |
| STMT_DET | Statement detail report |
| CUR_STMT_IDX | Statement-index relations report (current configuration) |
| REC_STMT_IDX | Statement-index relations report (recommended configuration) |
| STMT_COSTRANGE | Statement cost range report |
| CUR_IDX_USAGE | Index usage report (current configuration) |
| REC_IDX_USAGE | Index usage report (recommended configuration) |
| CUR_IDX_DET | Index detail report (current configuration) |
| REC_IDX_DET | Index detail report (recommended configuration) |
| VIW_TAB | View-table relations report |
| WKLD_ANL | Workload analysis report |
| DB_ACCESS | Database access report |
| TAB_ACCESS | Table access report |
| COL_ACCESS | Column access report |

Specify multiple reports by separating the values with commas, for example:

... -rl EVT_FREQ, VIW_TAB, WKLD_ANL ...
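
A small, hypothetical Python helper can validate such a list before building the -rl argument value; the set of valid names is taken directly from the table above:

```python
VALID_REPORTS = {
    "ALL", "STMT_COST", "EVT_FREQ", "STMT_DET", "CUR_STMT_IDX",
    "REC_STMT_IDX", "STMT_COSTRANGE", "CUR_IDX_USAGE", "REC_IDX_USAGE",
    "CUR_IDX_DET", "REC_IDX_DET", "VIW_TAB", "WKLD_ANL", "DB_ACCESS",
    "TAB_ACCESS", "COL_ACCESS",
}

def build_rl_option(reports):
    """Return the comma-separated -rl value, validating each report name."""
    bad = [r for r in reports if r.upper() not in VALID_REPORTS]
    if bad:
        raise ValueError("unknown analysis report(s): %s" % ", ".join(bad))
    return ", ".join(r.upper() for r in reports)

print(build_rl_option(["EVT_FREQ", "VIW_TAB", "WKLD_ANL"]))
# -> EVT_FREQ, VIW_TAB, WKLD_ANL
```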

-S server_name[\instance]
Specifies the name of the computer and instance of SQL Server to connect to. If no server_name is specified, dta
connects to the default instance of SQL Server on the local computer. This option is required when connecting to a
named instance or when executing dta from a remote computer on the network.
-s session_name
Specifies the name of the tuning session. This is required if -ID is not specified.
-Tf table_list_file
Specifies the name of a file containing a list of tables to be tuned. Each table listed within the file should begin on a
new line. Table names should be qualified with three-part naming, for example,
AdventureWorks2012.HumanResources.Department. Optionally, to invoke the table-scaling feature, the name
of an existing table can be followed by a number indicating the projected number of rows in the table. Database
Engine Tuning Advisor takes into consideration the projected number of rows while tuning or evaluating
statements in the workload that reference these tables. Note that there can be one or more spaces between the
number_of_rows count and the table_name.
This is the file format for table_list_file:
database_name.[schema_name].table_name [number_of_rows]
database_name.[schema_name].table_name [number_of_rows]
database_name.[schema_name].table_name [number_of_rows]
This argument is an alternative to entering a table list at the command prompt (-Tl). Do not use a table list file (-Tf)
if you are using -Tl. If both arguments are used, dta fails and returns an error.
If the -Tf and -Tl arguments are omitted, all user tables in the specified databases are considered for tuning.
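
The table_list_file format described above is simple to parse. As an illustration (a hypothetical helper, not part of dta), in Python:

```python
def parse_table_list(text):
    """Parse a dta -Tf table list file.

    Each nonblank line is 'db.[schema].table [projected_row_count]', with
    one or more spaces before the optional row count. Returns a list of
    (table_name, row_count_or_None) pairs; None means dta uses the table's
    current row count.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.rsplit(None, 1)  # split off a trailing row count, if any
        if len(parts) == 2 and parts[1].isdigit():
            entries.append((parts[0], int(parts[1])))
        else:
            entries.append((line, None))
    return entries

sample = """AdventureWorks2012.Sales.Customer 100000
AdventureWorks2012.Sales.Store
AdventureWorks2012.Production.Product 2000000
"""
for name, rows in parse_table_list(sample):
    print(name, rows)
```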
-Tl table_list
Specifies at the command prompt a list of tables to be tuned. Place commas between table names to separate
them. If only one database is specified with the -D argument, then table names do not need to be qualified with a
database name. Otherwise, the fully qualified name in the format: database_name.schema_name.table_name is
required for each table.
This argument is an alternative to using a table list file (-Tf). If both -Tl and -Tf are used, dta fails and returns an
error.
-U login_id
Specifies the login ID used to connect to SQL Server.
-u
Launches the Database Engine Tuning Advisor GUI. All parameters are treated as the initial settings for the user
interface.
-x
Starts tuning session and exits.

Remarks
Press CTRL+C once to stop the tuning session and generate recommendations based on the analysis dta has
completed up to this point. You will be prompted to decide whether you want to generate recommendations or not.
Press CTRL+C again to stop the tuning session without generating recommendations.

Examples
A. Tune a workload that includes indexes and indexed views in its recommendation
This example uses a trusted connection (-E) to connect to the tpcd1G database on MyServer to analyze a
workload and create recommendations. It writes the output to a script file named script.sql. If script.sql already
exists, dta overwrites the file because the -F argument has been specified. The tuning session runs for an
unlimited length of time to ensure a complete analysis of the workload (-A 0). The recommendation must provide
a minimum improvement of 5% (-m 5). dta should include indexes and indexed views in its final recommendation
(-fa IDX_IV).

dta -S MyServer -E -D tpcd1G -if tpcd_22.sql -F -of script.sql -A 0 -m 5 -fa IDX_IV

B. Limit disk use


This example limits the total database size, which includes the raw data and the additional indexes, to 3 gigabytes
(GB) (-B 3000) and directs the output to d:\result_dir\script1.sql. It runs for no more than 1 hour (-A 60).

dta -D tpcd1G -if tpcd_22.sql -B 3000 -of "d:\result_dir\script1.sql" -A 60

C. Limit the number of tuned queries


This example limits the number of queries read from the file orders_wkld.sql to a maximum of 10 (-n 10) and the
tuning time to 15 minutes (-A 15); tuning stops when either limit is reached. To make sure that all 10 queries are
tuned, specify an unlimited tuning time with -A 0. If time is important, specify the number of minutes that are
available for tuning with the -A argument, as shown in this example.

dta -D orders -if orders_wkld.sql -of script.sql -A 15 -n 10

D. Tune specific tables listed in a file


This example demonstrates the use of table_list_file (the -Tf argument). The contents of the file table_list.txt are as
follows:

AdventureWorks2012.Sales.Customer 100000
AdventureWorks2012.Sales.Store
AdventureWorks2012.Production.Product 2000000

The contents of table_list.txt specifies that:


Only the Customer, Store, and Product tables in the database should be tuned.
The number of rows in the Customer and Product tables are assumed to be 100,000 and 2,000,000,
respectively.
The number of rows in Store are assumed to be the current number of rows in the table.
Note that there can be one or more spaces between the number of rows count and the preceding table
name in the table_list_file.
The tuning time is 2 hours (-A 120) and the output is written to an XML file (-ox XMLTune.xml).

dta -D pubs -if pubs_wkld.sql -ox XMLTune.xml -A 120 -Tf table_list.txt

See Also
Command Prompt Utility Reference (Database Engine)
Database Engine Tuning Advisor
SQL Server Profiler
5/3/2018 • 9 minutes to read

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
SQL Server Profiler is an interface to create and manage traces and analyze and replay trace results. Events are
saved in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose a
problem.

IMPORTANT
We are announcing the deprecation of SQL Server Profiler for Database Engine Trace Capture and Trace
Replay. These features are available in SQL Server 2016 but will be removed in a later version.
The Microsoft.SqlServer.Management.Trace namespace that contains the Microsoft SQL Server Trace and
Replay objects will also be deprecated.
Note that SQL Server Profiler for the Analysis Services workloads is NOT being deprecated, and will continue
to be supported.
Submit your feedback and questions on our Connect page.

Where is the Profiler?


You can start the Profiler in a number of ways from within SSMS. The Start SQL Server Profiler topic lists the
ways to start it.

Capture and replay trace data


The following table shows the features we recommend using in SQL Server 2017 to capture and replay your trace
data.

| Feature\Target Workload | Relational Engine | Analysis Services |
|-|-|-|
| Trace Capture | Extended Events graphical user interface in SQL Server Management Studio | SQL Server Profiler |
| Trace Replay | Distributed Replay | SQL Server Profiler |

SQL Server Profiler


Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the
Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze
later. For example, you can monitor a production environment to see which stored procedures are affecting
performance by executing too slowly. SQL Server Profiler is used for activities such as:
Stepping through problem queries to find the cause of the problem.
Finding and diagnosing slow-running queries.
Capturing the series of Transact-SQL statements that lead to a problem. The saved trace can then be used to
replicate the problem on a test server where the problem can be diagnosed.
Monitoring the performance of SQL Server to tune workloads. For information about tuning the physical
database design for database workloads, see Database Engine Tuning Advisor.
Correlating performance counters to diagnose problems.
SQL Server Profiler also supports auditing the actions performed on instances of SQL Server. Audits record
security-related actions for later review by a security administrator.

SQL Server Profiler concepts


To use SQL Server Profiler, you need to understand the terms that describe the way the tool functions.

NOTE Understanding SQL Trace helps when working with SQL Server Profiler. For more information,
see SQL Trace.

Event
An event is an action generated within an instance of SQL Server Database Engine. Examples of these are:
Login connections, failures, and disconnections.
Transact-SQL SELECT, INSERT, UPDATE, and DELETE statements.
Remote procedure call (RPC) batch status.
The start or end of a stored procedure.
The start or end of statements within stored procedures.
The start or end of an SQL batch.
An error written to the SQL Server error log.
A lock acquired or released on a database object.
An opened cursor.
Security permission checks.
All of the data generated by an event is displayed in the trace in a single row. This row is intersected by data
columns that describe the event in detail.
EventClass
An event class is a type of event that can be traced. The event class contains all of the data that can be
reported by an event. Examples of event classes are the following:
SQL:BatchCompleted
Audit Login
Audit Logout
Lock:Acquired
Lock:Released
EventCategory
An event category defines the way events are grouped within SQL Server Profiler. For example, all lock
events classes are grouped within the Locks event category. However, event categories only exist within
SQL Server Profiler. This term does not reflect the way Database Engine events are grouped.
DataColumn
A data column is an attribute of an event class captured in the trace. Because the event class determines
the type of data that can be collected, not all data columns are applicable to all event classes. For example, in
a trace that captures the Lock:Acquired event class, the BinaryData data column contains the value of the
locked page ID or row, but the IntegerData data column does not contain any value because it is not
applicable to the event class being captured.
Template
A template defines the default configuration for a trace. Specifically, it includes the event classes you want to
monitor with SQL Server Profiler. For example, you can create a template that specifies the events, data
columns, and filters to use. A template is not executed, but rather is saved as a file with a .tdf extension. Once
saved, the template controls the trace data that is captured when a trace based on the template is launched.
Trace
A trace captures data based on selected event classes, data columns, and filters. For example, you can create
a trace to monitor exception errors. To do this, you select the Exception event class and the Error, State,
and Severity data columns. Data from these three columns needs to be collected in order for the trace
results to provide meaningful data. You can then run a trace, configured in such a manner, and collect data
on any Exception events that occur in the server. Trace data can be saved, or used immediately for analysis.
Traces can be replayed at a later date, although certain events, such as Exception events, are never
replayed. You can also save the trace as a template to build similar traces in the future.
SQL Server provides two ways to trace an instance of SQL Server: you can trace with SQL Server Profiler,
or you can trace using system stored procedures.
Filter
When you create a trace or template, you can define criteria to filter the data collected by the event. To keep
traces from becoming too large, you can filter them so that only a subset of the event data is collected. For
example, you can limit the Microsoft Windows user names in the trace to specific users, thereby reducing
the output data.
If a filter is not set, all events of the selected event classes are returned in the trace output.
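
Conceptually, a filter is a predicate applied to each candidate event before it is written to the trace. The following Python toy model (the event field names are invented for illustration and do not match SQL Trace column names) shows how a user-name filter reduces the output, and how the absence of a filter returns every event of the selected classes:

```python
# Toy model of trace filtering; the event fields here are invented.
events = [
    {"event_class": "SQL:BatchCompleted", "nt_user": "alice", "duration": 40},
    {"event_class": "SQL:BatchCompleted", "nt_user": "bob", "duration": 900},
    {"event_class": "Audit Login", "nt_user": "alice", "duration": 0},
]

def apply_filter(events, predicate=None):
    """With no filter, every event of the selected classes is returned."""
    if predicate is None:
        return list(events)
    return [e for e in events if predicate(e)]

# Limit the trace to a specific Windows user name
only_bob = apply_filter(events, lambda e: e["nt_user"] == "bob")
print(len(only_bob))  # -> 1
```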

SQL Server Profiler tasks


| TASK DESCRIPTION | TOPIC |
|-|-|
| Lists the predefined templates that SQL Server provides for monitoring certain types of events, and the permissions required to replay traces. | SQL Server Profiler Templates and Permissions |
| Describes how to run SQL Server Profiler. | Permissions Required to Run SQL Server Profiler |
| Describes how to create a trace. | Create a Trace (SQL Server Profiler) |
| Describes how to specify events and data columns for a trace file. | Specify Events and Data Columns for a Trace File (SQL Server Profiler) |
| Describes how to save trace results to a file. | Save Trace Results to a File (SQL Server Profiler) |
| Describes how to save trace results to a table. | Save Trace Results to a Table (SQL Server Profiler) |
| Describes how to filter events in a trace. | Filter Events in a Trace (SQL Server Profiler) |
| Describes how to view filter information. | View Filter Information (SQL Server Profiler) |
| Describes how to modify a filter. | Modify a Filter (SQL Server Profiler) |
| Describes how to set a maximum file size for a trace file. | Set a Maximum File Size for a Trace File (SQL Server Profiler) |
| Describes how to set a maximum table size for a trace table. | Set a Maximum Table Size for a Trace Table (SQL Server Profiler) |
| Describes how to start a trace. | Start a Trace |
| Describes how to start a trace automatically after connecting to a server. | Start a Trace Automatically after Connecting to a Server (SQL Server Profiler) |
| Describes how to filter events based on the event start time. | Filter Events Based on the Event Start Time (SQL Server Profiler) |
| Describes how to filter events based on the event end time. | Filter Events Based on the Event End Time (SQL Server Profiler) |
| Describes how to filter server process IDs (SPIDs) in a trace. | Filter Server Process IDs (SPIDs) in a Trace (SQL Server Profiler) |
| Describes how to pause a trace. | Pause a Trace (SQL Server Profiler) |
| Describes how to stop a trace. | Stop a Trace (SQL Server Profiler) |
| Describes how to run a trace after it has been paused or stopped. | Run a Trace After It Has Been Paused or Stopped (SQL Server Profiler) |
| Describes how to clear a trace window. | Clear a Trace Window (SQL Server Profiler) |
| Describes how to close a trace window. | Close a Trace Window (SQL Server Profiler) |
| Describes how to set trace definition defaults. | Set Trace Definition Defaults (SQL Server Profiler) |
| Describes how to set trace display defaults. | Set Trace Display Defaults (SQL Server Profiler) |
| Describes how to open a trace file. | Open a Trace File (SQL Server Profiler) |
| Describes how to open a trace table. | Open a Trace Table (SQL Server Profiler) |
| Describes how to replay a trace table. | Replay a Trace Table (SQL Server Profiler) |
| Describes how to replay a trace file. | Replay a Trace File (SQL Server Profiler) |
| Describes how to replay a single event at a time. | Replay a Single Event at a Time (SQL Server Profiler) |
| Describes how to replay to a breakpoint. | Replay to a Breakpoint (SQL Server Profiler) |
| Describes how to replay to a cursor. | Replay to a Cursor (SQL Server Profiler) |
| Describes how to replay a Transact-SQL script. | Replay a Transact-SQL Script (SQL Server Profiler) |
| Describes how to create a trace template. | Create a Trace Template (SQL Server Profiler) |
| Describes how to modify a trace template. | Modify a Trace Template (SQL Server Profiler) |
| Describes how to set global trace options. | Set Global Trace Options (SQL Server Profiler) |
| Describes how to find a value or data column while tracing. | Find a Value or Data Column While Tracing (SQL Server Profiler) |
| Describes how to derive a template from a running trace. | Derive a Template from a Running Trace (SQL Server Profiler) |
| Describes how to derive a template from a trace file or trace table. | Derive a Template from a Trace File or Trace Table (SQL Server Profiler) |
| Describes how to create a Transact-SQL script for running a trace. | Create a Transact-SQL Script for Running a Trace (SQL Server Profiler) |
| Describes how to export a trace template. | Export a Trace Template (SQL Server Profiler) |
| Describes how to import a trace template. | Import a Trace Template (SQL Server Profiler) |
| Describes how to extract a script from a trace. | Extract a Script from a Trace (SQL Server Profiler) |
| Describes how to correlate a trace with Windows performance log data. | Correlate a Trace with Windows Performance Log Data (SQL Server Profiler) |
| Describes how to organize columns displayed in a trace. | Organize Columns Displayed in a Trace (SQL Server Profiler) |
| Describes how to start SQL Server Profiler. | Start SQL Server Profiler |
| Describes how to save traces and trace templates. | Save Traces and Trace Templates |
| Describes how to modify trace templates. | Modify Trace Templates |
| Describes how to correlate a trace with Windows performance log data. | Correlate a Trace with Windows Performance Log Data |
| Describes how to view and analyze traces with SQL Server Profiler. | View and Analyze Traces with SQL Server Profiler |
| Describes how to analyze deadlocks with SQL Server Profiler. | Analyze Deadlocks with SQL Server Profiler |
| Describes how to analyze queries with SHOWPLAN results in SQL Server Profiler. | Analyze Queries with SHOWPLAN Results in SQL Server Profiler |
| Describes how to filter traces with SQL Server Profiler. | Filter Traces with SQL Server Profiler |
| Describes how to use the replay features of SQL Server Profiler. | Replay Traces |
| Lists the context-sensitive help topics for SQL Server Profiler. | SQL Server Profiler F1 Help |
| Lists the system stored procedures that are used by SQL Server Profiler to monitor performance and activity. | SQL Server Profiler Stored Procedures (Transact-SQL) |
See also
Locks Event Category
Sessions Event Category
Stored Procedures Event Category
TSQL Event Category
Server Performance and Activity Monitoring
ssbdiagnose Utility (Service Broker)
5/3/2018 • 18 minutes to read

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The ssbdiagnose utility reports issues in Service Broker conversations or the configuration of Service Broker
services. Configuration checks can be made for either two services or a single service. Issues are reported either in
the command prompt window as human-readable text, or as formatted XML that can be redirected to a file or
another program.

Syntax
ssbdiagnose
[ [ -XML ]
[ -LEVEL { ERROR | WARNING | INFO } ]
[-IGNORE error_id ] [ ...n]
[ <baseconnectionoptions> ]
{ <configurationreport> | <runtimereport> }
]
| -?

<configurationreport> ::=
CONFIGURATION
{ [ FROM SERVICE service_name
[ <fromconnectionoptions> ]
[ MIRROR <mirrorconnectionoptions> ]
]
[ TO SERVICE service_name[, broker_id ]
[ <toconnectionoptions> ]
[ MIRROR <mirrorconnectionoptions> ]
]
}
ON CONTRACT contract_name
[ ENCRYPTION { ON | OFF | ANONYMOUS } ]

<runtime_report> ::=
RUNTIME
[-SHOWEVENTS ]
[ -NEW
[ -ID { conversation_handle
| conversation_group_id
| conversation_id
}
] [ ...n]
]
[ -TIMEOUT timeout_interval ]
[ <runtimeconnectionoptions> ]

<baseconnectionoptions> ::=
<connectionoptions>

<fromconnectionoptions> ::=
<connectionoptions>

<toconnectionoptions> ::=
<connectionoptions>

<mirrorconnectionoptions> ::=
<connectionoptions>

<runtimeconnectionoptions> ::=
[ CONNECT TO <connectionoptions> ] [ ...n]

<connectionoptions> ::=
[ -E | { -U login_id [ -P password ] } ]
[ -S server_name[\instance_name] ]
[ -d database_name ]
[ -l login_timeout ]

Command Line Options


-XML
Specifies that the ssbdiagnose output be generated as formatted XML. This can be redirected to a file or to
another application. If -XML is not specified, the ssbdiagnose output is formatted as human-readable text.
-LEVEL { ERROR | WARNING | INFO }
Specifies the level of messages to report.
ERROR: report only error messages.
WARNING: report error and warning messages.
INFO: report error, warning, and informational messages.
The default setting is WARNING.
-IGNORE error_id
Specifies that errors or messages that have the specified error_id not be included in reports. You can specify -
IGNORE multiple times to suppress multiple message IDs.
<baseconnectionoptions>
Specifies the base connection information that is used by ssbdiagnose when connection options are not included
in a specific clause. The connection information that is given in a specific clause overrides the
baseconnectionoptions information. This is performed separately for each parameter. For example, if both -S and
-d are specified in baseconnectionoptions, and only -d is specified in toconnectionoptions, ssbdiagnose uses -S
from baseconnectionoptions and -d from toconnectionoptions.
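
The per-parameter precedence described above can be modeled as a dictionary merge. This Python sketch is illustrative only and is not how ssbdiagnose is implemented:

```python
def effective_connection(base_options, clause_options):
    """Merge connection options parameter by parameter.

    A parameter given in a specific clause (FROM/TO/MIRROR) overrides the
    same parameter in the base connection options; all other base
    parameters fall through unchanged.
    """
    merged = dict(base_options)
    merged.update(clause_options)
    return merged

base = {"-S": "MyServer", "-d": "InitiatorDb"}
to_clause = {"-d": "TargetDb"}  # only -d is overridden in the TO clause

print(effective_connection(base, to_clause))
# -> {'-S': 'MyServer', '-d': 'TargetDb'}
```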
CONFIGURATION
Requests a report of configuration errors between a pair of Service Broker services, or for a single service.
FROM SERVICE service_name
Specifies the service that initiates conversations.
<fromconnectionoptions>
Specifies the information that is required to connect to the database that holds the initiator service. If
fromconnectionoptions is not specified, ssbdiagnose uses the connection information from
baseconnectionoptions to connect to the initiator database. If fromconnectionoptions is specified it must
include the database that contains the initiator service. If fromconnectionoptions is not specified, the
baseconnectionoptions must specify the initiator database.
TO SERVICE service_name[, broker_id ]
Specifies the service that is the target for the conversations.
service_name: specifies the name of the target service.
broker_id: specifies the Service Broker ID that identifies the target database. broker_id is a GUID. You can run the
following query in the target database to find it:

SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID();

<toconnectionoptions>
Specifies the information that is required to connect the database that holds the target service. If
toconnectionoptions is not specified, ssbdiagnose uses the connection information from
baseconnectionoptions to connect to the target database.
MIRROR
Specifies that the associated Service Broker service is hosted in a mirrored database. ssbdiagnose verifies that the
route to the service is a mirrored route, where MIRROR_ADDRESS was specified on CREATE ROUTE.
<mirrorconnectionoptions>
Specifies the information that is required to connect to the mirror database. If mirrorconnectionoptions is not
specified, ssbdiagnose uses the connection information from baseconnectionoptions to connect to the mirror
database.
ON CONTRACT contract_name
Requests that ssbdiagnose only check configurations that use the specified contract. If ON CONTRACT is not
specified, ssbdiagnose reports on the contract named DEFAULT.
ENCRYPTION { ON | OFF | ANONYMOUS }
Requests verification that the dialog is correctly configured for the specified level of encryption:
ON: Default setting. Full dialog security is configured. Certificates have been deployed on both sides of the dialog,
a remote service binding is present, and the GRANT SEND statement for the target service specified the initiator
user.
OFF: No dialog security is configured. No certificates have been deployed, no remote service binding was created,
and the GRANT SEND for the initiator service specified the public role.
ANONYMOUS: Anonymous dialog security is configured. One certificate has been deployed, the remote service
binding specified the anonymous clause, and the GRANT SEND for the target service specified the public role.
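As a rough T-SQL illustration of the ANONYMOUS configuration that ssbdiagnose checks for, the following sketch uses placeholder names (TargetBinding, TargetCertUser, and the /test service names are assumptions, not objects this article defines):

```sql
-- Initiator database: anonymous remote service binding to the target service.
-- TargetBinding and TargetCertUser are hypothetical names.
CREATE REMOTE SERVICE BINDING TargetBinding
    TO SERVICE '/test/target'
    WITH USER = TargetCertUser, ANONYMOUS = ON;

-- Target database: GRANT SEND to the public role, as required for
-- anonymous dialog security.
GRANT SEND ON SERVICE::[/test/target] TO public;
```

With this in place, a configuration report run with ENCRYPTION ANONYMOUS should pass the binding and permission checks.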
RUNTIME
Requests a report of issues that cause runtime errors for a Service Broker conversation. If neither -NEW nor -ID is
specified, ssbdiagnose monitors all conversations in all databases specified in the connection options. If -NEW or
-ID is specified, ssbdiagnose builds a list of the IDs specified in the parameters.
While ssbdiagnose is running, it records all SQL Server Profiler events that indicate runtime errors. It records the
events that occur for the specified IDs, plus system-level events. If runtime errors are encountered, ssbdiagnose
runs a configuration report on the associated configuration.
By default, runtime errors are not included in the output report; only the results of the configuration analysis are.
Use -SHOWEVENTS to have the runtime errors included in the report.
-SHOWEVENTS
Specifies that ssbdiagnose report SQL Server Profiler events during a RUNTIME report. Only events that are
considered error conditions are reported. By default, ssbdiagnose only monitors error events; it does not report
them in the output.
-NEW
Requests runtime monitoring of the first conversation that begins after ssbdiagnose starts running.
-ID
Requests runtime monitoring of the specified conversation elements. You can specify -ID multiple times.
If you specify a conversation handle, only events associated with the associated conversation endpoint are
reported. If you specify a conversation ID, all events for that conversation and its initiator and target endpoints are
reported. If a conversation group ID is specified, all events for all conversations and endpoints in the conversation
group are reported.
conversation_handle
A unique identifier that identifies a conversation endpoint in an application. Conversation handles are unique to
one endpoint of a conversation; the initiator and target endpoints have separate conversation handles.
Conversation handles are returned to applications by the @dialog_handle parameter of the BEGIN DIALOG
statement, and the conversation_handle column in the result set of a RECEIVE statement.
Conversation handles are reported in the conversation_handle column of the sys.transmission_queue and
sys.conversation_endpoints catalog views.
conversation_group_id
The unique identifier that identifies a conversation group.
Conversation group IDs are returned to applications by the @conversation_group_id parameter of the GET
CONVERSATION GROUP statement and the conversation_group_id column in the result set of a RECEIVE
statement.
Conversation group IDs are reported in the conversation_group_id columns of the sys.conversation_groups
and sys.conversation_endpoints catalog views.
conversation_id
The unique identifier that identifies a conversation. Conversation IDs are the same for both the initiator and target
endpoints of a conversation.
Conversation IDs are reported in the conversation_id column of the sys.conversation_endpoints catalog view.
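All three identifiers can be retrieved in one query against the conversation catalog views; a minimal sketch, run in the database that hosts the service:

```sql
-- Identifiers that can be passed to ssbdiagnose -ID,
-- one row per conversation endpoint in the current database.
SELECT conversation_handle,
       conversation_id,
       conversation_group_id,
       state_desc
FROM sys.conversation_endpoints;
```

Any of the returned GUIDs is a valid argument to -ID.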
-TIMEOUT timeout_interval
Specifies the number of seconds for a RUNTIME report to run. If -TIMEOUT is not specified, the runtime report
runs indefinitely. -TIMEOUT is used only on RUNTIME reports, not CONFIGURATION reports. Use CTRL+C to
quit ssbdiagnose if -TIMEOUT was not specified, or to end a runtime report before the time-out interval expires.
timeout_interval must be a number between 1 and 2,147,483,647.
<runtimeconnectionoptions>
Specifies the connection information for the databases that contain the services associated with conversation
elements being monitored. If all the services are in the same database, you only have to specify one CONNECT
TO clause. If the services are in separate databases, you must supply a CONNECT TO clause for each database. If
runtimeconnectionoptions is not specified, ssbdiagnose uses the connection information from
baseconnectionoptions.
–E
Open a Windows Authentication connection to an instance of the Database Engine by using your current Windows
account as the login ID. The login must be a member of the sysadmin fixed-server role.
The -E option ignores the user and password settings of the SQLCMDUSER and SQLCMDPASSWORD
environment variables.
If neither -E nor -U is specified, ssbdiagnose uses the value from the SQLCMDUSER environment variable. If
SQLCMDUSER is not set either, ssbdiagnose uses Windows Authentication.
If the -E option is used together with the -U option or the -P option, an error message is generated.
-U login_id
Open a SQL Server Authentication connection by using the specified login ID. The login must be a member of the
sysadmin fixed-server role.
If neither -E nor -U is specified, ssbdiagnose uses the value from the SQLCMDUSER environment variable. If
SQLCMDUSER is not set either, ssbdiagnose tries to connect by using Windows Authentication mode based on
the Windows account of the user who is running ssbdiagnose.
If the -U option is used together with the -E option, an error message is generated. If the –U option is followed by
more than one argument, an error message is generated and the program exits.
-P password
Specifies the password for the -U login ID. Passwords are case sensitive. If the -U option is used and the -P option
is not used, ssbdiagnose uses the value from the SQLCMDPASSWORD environment variable. If
SQLCMDPASSWORD is not set either, ssbdiagnose prompts the user for a password.

IMPORTANT
When you type a SET SQLCMDPASSWORD command, your password will be visible to anyone who can see your monitor.
If the -P option is specified without a password, ssbdiagnose uses the default password (NULL).

IMPORTANT
Do not use a blank password. Use a strong password. For more information, see Strong Passwords.

The password prompt is displayed by printing it to the console, as follows: Password:

User input is hidden. This means that nothing is displayed and the cursor stays in position.
If the -P option is used with the -E option, an error message is generated.
If the -P option is followed by more than one argument, an error message is generated.
-S server_name[\instance_name]
Specifies the instance of the Database Engine that holds the Service Broker services to be analyzed.
Specify server_name to connect to the default instance of the Database Engine on that server. Specify
server_name\instance_name to connect to a named instance of the Database Engine on that server. If -S is not
specified, ssbdiagnose uses the value of the SQLCMDSERVER environment variable. If SQLCMDSERVER is not
set either, ssbdiagnose connects to the default instance of the Database Engine on the local computer.
-d database_name
Specifies the database that holds the Service Broker services to be analyzed. If the database does not exist, an error
message is generated. If -d is not specified, the default is the database specified in the default-database property
for your login.
-l login_timeout
Specifies the number of seconds before an attempt to connect to a server times out. If -l is not specified,
ssbdiagnose uses the value set for the SQLCMDLOGINTIMEOUT environment variable. If
SQLCMDLOGINTIMEOUT is not set either, the default time-out is thirty seconds. The login time-out must be a
number between 0 and 65534. If the value that is supplied is not numeric or does not fall into that range,
ssbdiagnose generates an error message. A value of 0 specifies time-out to be infinite.
-?
Displays command line help.

Remarks
Use ssbdiagnose to do the following:
Confirm that there are no configuration errors in a newly configured Service Broker application.
Confirm that there are no configuration errors after changing the configuration of an existing Service Broker
application.
Confirm that there are no configuration errors after a Service Broker database is detached and then
reattached to a new instance of the Database Engine.
Research whether there are configuration errors when messages are not successfully transmitted between
services.
Get a report of any errors that occur in a set of Service Broker conversation elements.

Configuration Reporting
To correctly analyze the configuration used by a conversation, run an ssbdiagnose configuration report that uses
the same options that are used by the conversation. If you specify a lower level of options for ssbdiagnose than
are used by the conversation, ssbdiagnose might not report conditions that are required by the conversation. If
you specify a higher level of options for ssbdiagnose, it might report items that are not required by the
conversation. For example, a conversation between two services in the same database can be run with
ENCRYPTION OFF. If you run ssbdiagnose to validate the configuration between the two services, but use the
default ENCRYPTION ON setting, ssbdiagnose reports that the database is missing a master key. A master key is
not required for the conversation.
The ssbdiagnose configuration report analyzes only one Service Broker service or a single pair of services every
time it is run. To report on multiple pairs of Service Broker services, build a .cmd command file that calls
ssbdiagnose multiple times.
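For example, a command file along the following lines runs one configuration report per service pair (the database and service names are placeholders):

```
REM CheckBroker.cmd - one ssbdiagnose call per service pair
ssbdiagnose -E -d OrdersDatabase CONFIGURATION FROM SERVICE /orders/initiator TO SERVICE /orders/target
ssbdiagnose -E -d BillingDatabase CONFIGURATION FROM SERVICE /billing/initiator TO SERVICE /billing/target
```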

Runtime Reporting
When -RUNTIME is specified, ssbdiagnose searches all databases specified in runtimeconnectionoptions and
baseconnectionoptions to build a list of Service Broker IDs. The full list of IDs built depends on what is specified
for -NEW and -ID:
If neither -NEW nor -ID is specified, the list includes all conversations for all databases specified in the
connection options.
If -NEW is specified, ssbdiagnose includes the elements for the first conversation that starts after
ssbdiagnose is run. This includes the conversation ID and the conversation handles for both the target and
initiator conversation endpoints.
If -ID is specified with a conversation handle, only that handle is included in the list.
If -ID is specified with a conversation ID, the conversation ID and the handles for both of its conversation
endpoints are added to the list.
If -ID is specified with a conversation group ID, all the conversation IDs and conversation handles in that
group are added to the list.
The list does not include elements from databases that are not covered by the connection options. For
example, assume that you use -ID to specify a conversation ID, but only provide a
runtimeconnectionoptions clause for the initiator database and not the target database. ssbdiagnose will
not include the target conversation handle in its list of IDs, only the conversation ID and the initiator
conversation handle.
ssbdiagnose monitors the SQL Server Profiler events from the databases covered by
runtimeconnectionoptions and baseconnectionoptions. It searches for Service Broker events that
indicate an error was encountered by one or more of the Service Broker IDs in the runtime list.
ssbdiagnose also searches for system-level Service Broker error events not specifically associated with any
conversation group.
If ssbdiagnose finds conversation errors, the utility will attempt to report on the root cause of the events by
also running a configuration report. ssbdiagnose uses the metadata in the databases to try to determine
the instances, Service Broker IDs, databases, services, and contracts used by the conversation. It then runs a
configuration report using all available information.
By default, ssbdiagnose does not report error events. It only reports the underlying issues found during the
configuration check. This minimizes the amount of information reported and helps you focus on the
underlying configuration issues. You can specify -SHOWEVENTS to see the error events encountered by
ssbdiagnose.

Issues Reported by ssbdiagnose


ssbdiagnose reports three classes of issues. In the XML output file, each class of issue is reported as a separate
type of the Issue element. The three types of issues reported by ssbdiagnose are as follows:
Diagnosis
Reports a configuration issue. This includes issues found either while a CONFIGURATION report is running or during
the configuration phase of a RUNTIME report. ssbdiagnose reports each configuration issue one time.
Event
Reports a SQL Server Profiler event that indicates a problem was encountered by a conversation being monitored
during a RUNTIME report. ssbdiagnose reports events every time they are generated. Events can be reported
multiple times if several conversations encounter the problem.
Problem
Reports an issue that is preventing ssbdiagnose from completing a configuration analysis or from monitoring
conversations.

sqlcmd Environment Variables


The ssbdiagnose utility supports the SQLCMDSERVER, SQLCMDUSER, SQLCMDPASSWORD, and
SQLCMDLOGINTIMEOUT environment variables that are also used by the sqlcmd utility. You can set the
environment variables either by using the command prompt SET command, or by using the setvar command in
Transact-SQL scripts that you run by using sqlcmd. For more information about how to use setvar in sqlcmd, see
Use sqlcmd with Scripting Variables.

Permissions
In each connectionoptions clause, the login specified with either -E or -U must be a member of the sysadmin
fixed-server role in the instance specified in -S.

Examples
This section contains examples of using ssbdiagnose at a command prompt.
A. Checking the Configuration of Two Services in the Same Database
The following example shows how to request a configuration report when the following are true:
The initiator and target service are in the same database.
The database is in the default instance of the Database Engine.
The instance is on the same computer on which ssbdiagnose is run.
The ssbdiagnose utility reports the configuration that uses the DEFAULT contract because ON
CONTRACT is not specified.

ssbdiagnose -E -d MyDatabase CONFIGURATION FROM SERVICE /test/initiator TO SERVICE /test/target

B. Checking the Configuration of Two Services on Separate Computers That Use One Login
The following example shows how to request a configuration report when the initiator and target service are on
separate computers, but can be accessed by using the same Windows Authentication login.

ssbdiagnose -E CONFIGURATION FROM SERVICE /test/initiator -S InitiatorComputer -d InitiatorDatabase TO SERVICE
/test/target -S TargetComputer -d TargetDatabase ON CONTRACT TestContract

C. Checking the Configuration of Two Services on Separate Computers That Use Separate Logins
The following example shows how to request a configuration report when the initiator and target service are on
separate computers, and separate SQL Server Authentication logins are required for each instance of the Database
Engine.

ssbdiagnose CONFIGURATION FROM SERVICE /test/initiator
-S InitiatorComputer -U InitiatorLogin -P !wEx23Dvb
-d InitiatorDatabase TO SERVICE /test/target -S TargetComputer
-U TargetLogin -P ER!49jiy -d TargetDatabase ON CONTRACT TestContract

D. Checking Mirrored Service Configurations on Separate Computers With Anonymous Encryption


The following example shows how to request a configuration report when the initiator and target service are on
separate computers and the initiator is mirrored to a named instance. The report also verifies that the services are
configured to use anonymous encryption.

ssbdiagnose -E CONFIGURATION FROM SERVICE /test/initiator
-S InitiatorComputer -d InitiatorDatabase MIRROR
-S MirrorComputer\MirrorInstance TO SERVICE /test/target
-S TargetComputer -d TargetDatabase ON CONTRACT TestContract ENCRYPTION ANONYMOUS

E. Checking the Configuration of Two Contracts


The following example shows how to build a command file that requests configuration reports when the following
are true:
The initiator and target service are in the same database.
The database is in the default instance of the Database Engine.
The instance is on the same computer on which ssbdiagnose is run.
Each time ssbdiagnose is run it reports the configuration for a different contract between the same
services.

ssbdiagnose -E -d MyDatabase CONFIGURATION FROM SERVICE
/test/initiator TO SERVICE /test/target ON CONTRACT PayRaiseContract
ssbdiagnose -E -d MyDatabase CONFIGURATION FROM SERVICE /test/initiator
TO SERVICE /test/target ON CONTRACT PromotionContract

F. Monitor the status of a specific conversation on the local computer with a time-out
The following example shows how to monitor a specific conversation where the initiator and target services are in
the same database in the default instance of the same computer that is running ssbdiagnose. The time-out
interval is set to 20 seconds.

ssbdiagnose -E -d TestDatabase RUNTIME -ID D68D77A9-B1CF-41BF-A5CE-279ABCAB140D -TIMEOUT 20

G. Monitor the status of a conversation that spans two computers


The following example shows how to monitor a specific conversation where the initiator and target services are on
separate computers.

ssbdiagnose RUNTIME -ID D68D77A9-B1CF-41BF-A5CE-279ABCAB140D
-TIMEOUT 10 CONNECT TO -E -S InitiatorComputer\InitiatorInstance
-d InitiatorDatabase CONNECT TO -E -S TargetComputer\TargetInstance
-d TargetDatabase

H. Monitor the status of a conversation in two databases in the same instance


The following example shows how to monitor a specific conversation where the initiator and target services are in
separate databases in the same instance of the Database Engine. The example uses the baseconnectionoptions
to specify the instance and login information, and two CONNECT TO clauses to specify the databases. -
SHOWEVENTS is specified so that all runtime events are included in the report output.

ssbdiagnose -E -S TestComputer\DevTestInstance RUNTIME -SHOWEVENTS
-ID 5094d4a7-e38c-4c37-da37-1d58b1cb8455 -TIMEOUT 10 CONNECT TO
-d InitiatorDatabase CONNECT TO -d TargetDatabase

I. Monitor the status of two conversations between two databases


The following example shows how to monitor two conversations where the initiator and target services are in
separate databases in the same instance of the Database Engine. The example uses the baseconnectionoptions
to specify the instance and login information, and two CONNECT TO clauses to specify the databases.

ssbdiagnose -E -S TestComputer\DevTestInstance RUNTIME
-ID 5094d4a7-e38c-4c37-da37-1d58b1cb8455
-ID 9b293be9-226b-4e22-e169-1d2c2c15be86 -TIMEOUT 10 CONNECT TO
-d InitiatorDatabase CONNECT TO -d TargetDatabase

J. Monitor the status of all conversations between two databases


The following example shows how to monitor all the conversations between two databases in the same instance of
the Database Engine. The example uses the baseconnectionoptions to specify the instance and login information,
and two CONNECT TO clauses to specify the databases.

ssbdiagnose -E -S TestComputer\DevTestInstance RUNTIME
-TIMEOUT 10 CONNECT TO -d InitiatorDatabase CONNECT TO
-d TargetDatabase

K. Ignore Specific Errors


The following example shows how to ignore known errors (303 and 304) in how activation is currently configured
in a test system.

ssbdiagnose -IGNORE 303 -IGNORE 304 -E -d TestDatabase
CONFIGURATION FROM SERVICE /test/initiator TO SERVICE /test/target
ON CONTRACT TestContract

L. Redirecting ssbdiagnose XML Output


The following example shows how to request that ssbdiagnose generate its output as an XML file that is
redirected to a file. The TestDiag.xml file can then be opened by an application to analyze or report ssbdiagnose
XML files. Or, you can view it from a general XML editor such as XML Notepad.

ssbdiagnose -XML -E -d MyDatabase CONFIGURATION FROM SERVICE
/test/initiator TO SERVICE /test/target > c:\MyDiagnostics\TestDiag.xml

M. Using an Environment Variable


The following example first sets the SQLCMDSERVER environment variable to hold the server name, and then
runs ssbdiagnose without specifying -S.
SET SQLCMDSERVER=MyComputer
ssbdiagnose -XML -E -d MyDatabase CONFIGURATION FROM SERVICE
/test/initiator TO SERVICE /test/target

See Also
SQL Server Service Broker
BEGIN DIALOG CONVERSATION (Transact-SQL)
CREATE BROKER PRIORITY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE CONTRACT (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
CREATE MASTER KEY (Transact-SQL)
CREATE MESSAGE TYPE (Transact-SQL)
CREATE QUEUE (Transact-SQL)
CREATE REMOTE SERVICE BINDING (Transact-SQL)
CREATE ROUTE (Transact-SQL)
CREATE SERVICE (Transact-SQL)
RECEIVE (Transact-SQL)
sys.transmission_queue (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
sys.conversation_groups (Transact-SQL)
SQL Command Prompt Utilities (Database Engine)
5/3/2018 • 2 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Command prompt utilities enable you to script SQL Server operations. The following table contains a list of
command prompt utilities that ship with SQL Server.

UTILITY | DESCRIPTION | INSTALLED IN
bcp Utility | Used to copy data between an instance of Microsoft SQL Server and a data file in a user-specified format. | <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn
dta Utility | Used to analyze a workload and recommend physical design structures to optimize server performance for that workload. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
dtexec Utility | Used to configure and execute an Integration Services package. A user interface version of this command prompt utility is called DTExecUI, which brings up the Execute Package Utility. | <drive>:\Program Files\Microsoft SQL Server\nnn\DTS\Binn
dtutil Utility | Used to manage SSIS packages. | <drive>:\Program Files\Microsoft SQL Server\nnn\DTS\Binn
Deploy Model Solutions with the Deployment Utility | Used to deploy Analysis Services projects to instances of Analysis Services. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VShell\Common7\IDE
mssql-scripter (Public Preview) | Used to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. | See the GitHub repo for download and usage information.
osql Utility | Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
Profiler Utility | Used to start SQL Server Profiler from a command prompt. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
RS.exe Utility (SSRS) | Used to run scripts designed for managing Reporting Services report servers. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
rsconfig Utility (SSRS) | Used to configure a report server connection. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
rskeymgmt Utility (SSRS) | Used to manage encryption keys on a report server. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
sqlagent90 Application | Used to start SQL Server Agent from a command prompt. | <drive>:\Program Files\Microsoft SQL Server\<instance_name>\MSSQL\Binn
sqlcmd Utility | Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt. | <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn
SQLdiag Utility | Used to collect diagnostic information for Microsoft Customer Service and Support. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
sqllogship Application | Used by applications to perform backup, copy, and restore operations and associated clean-up tasks for a log shipping configuration without running the backup, copy, and restore jobs. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
SqlLocalDB Utility | An execution mode of SQL Server targeted to program developers. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\
sqlmaint Utility | Used to execute database maintenance plans created in previous versions of SQL Server. | <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn
sqlps Utility | Used to run PowerShell commands and scripts. Loads and registers the SQL Server PowerShell provider and cmdlets. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
sqlservr Application | Used to start and stop an instance of Database Engine from the command prompt for troubleshooting. | <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn
Ssms Utility | Used to start SQL Server Management Studio from a command prompt. | <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VSShell\Common7\IDE
tablediff Utility | Used to compare the data in two tables for non-convergence, which is useful when troubleshooting a replication topology. | <drive>:\Program Files\Microsoft SQL Server\nnn\COM
Command Prompt Utilities Syntax Conventions


CONVENTION | USED FOR
UPPERCASE | Statements and terms used at the operating system level.
monospace | Sample commands and program code.
italic | User-supplied parameters.
bold | Commands, parameters, and other syntax that must be typed exactly as shown.
See Also
Replication Distribution Agent
Replication Log Reader Agent
Replication Merge Agent
Replication Queue Reader Agent
Replication Snapshot Agent
bcp Utility
5/3/2018 • 32 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse

For content related to previous versions of SQL Server, see bcp Utility.
For the latest version of the bcp utility, see Microsoft Command Line Utilities 14.0 for SQL Server
For using bcp on Linux, see Install sqlcmd and bcp on Linux.
For detailed information about using bcp with Azure SQL Data Warehouse, see Load data with bcp.

The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file
in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server
tables or to export data out of tables into data files. Except when used with the queryout option, the utility
requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for
that table or understand the structure of the table and the types of data that are valid for its columns.
For the syntax conventions that are used for the bcp syntax, see Transact-SQL Syntax Conventions (Transact-
SQL ).

NOTE
If you use bcp to back up your data, create a format file to record the data format. bcp data files do not include any
schema or format information, so if a table or view is dropped and you do not have a format file, you may be unable to
import the data.

SYNTAX
bcp [database_name.] schema.{table_name | view_name | "query"}
{in data_file | out data_file | queryout data_file | format nul}

[-a packet_size]
[-b batch_size]
[-c]
[-C { ACP | OEM | RAW | code_page } ]
[-d database_name]
[-e err_file]
[-E]
[-f format_file]
[-F first_row]
[-G Azure Active Directory Authentication]
[-h"hint [,...n]"]
[-i input_file]
[-k]
[-K application_intent]
[-L last_row]
[-m max_errors]
[-n]
[-N]
[-o output_file]
[-P password]
[-q]
[-r row_term]
[-R]
[-S [server_name[\instance_name]]
[-t field_term]
[-T]
[-U login_id]
[-v]
[-V (80 | 90 | 100 | 110 | 120 | 130 ) ]
[-w]
[-x]

Arguments
data_file
Is the full path of the data file. When data is bulk imported into SQL Server, the data file contains the data to be
copied into the specified table or view. When data is bulk exported from SQL Server, the data file contains the data
copied from the table or view. The path can have from 1 through 255 characters. The data file can contain a
maximum of 2^63 - 1 rows.
database_name
Is the name of the database in which the specified table or view resides. If not specified, this is the default database
for the user.
You can also explicitly specify the database name with -d.
in data_file | out data_file | queryout data_file | format nul
Specifies the direction of the bulk copy, as follows:
in copies from a file into the database table or view.
out copies from the database table or view to a file. If you specify an existing file, the file is overwritten.
When extracting data, note that the bcp utility represents an empty string as a null and a null string as an
empty string.
queryout copies from a query and must be specified only when bulk copying data from a query.
format creates a format file based on the option specified (-n, -c, -w, or -N ) and the table or view
delimiters. When bulk copying data, the bcp command can refer to a format file, which saves you from re-
entering format information interactively. The format option requires the -f option; creating an XML format
file also requires the -x option. For more information, see Create a Format File (SQL Server). You must
specify nul as the value (format nul).
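For example, the following hypothetical commands create a non-XML and an XML format file for a table, using a trusted connection (the database and table names are placeholders):

```
bcp MyDatabase.dbo.MyTable format nul -c -f MyTable.fmt -T
bcp MyDatabase.dbo.MyTable format nul -c -x -f MyTable.xml -T
```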
owner
Is the name of the owner of the table or view. owner is optional if the user performing the operation owns
the specified table or view. If owner is not specified and the user performing the operation does not own the
specified table or view, SQL Server returns an error message, and the operation is canceled.
" query " Is a Transact-SQL query that returns a result set. If the query returns multiple result sets, only the first
result set is copied to the data file; subsequent result sets are ignored. Use double quotation marks around the
query and single quotation marks around anything embedded in the query. queryout must also be specified
when bulk copying data from a query.
The query can reference a stored procedure as long as all tables referenced inside the stored procedure exist prior
to executing the bcp statement. For example, if the stored procedure generates a temp table, the bcp statement
fails because the temp table is available only at run time and not at statement execution time. In this case, consider
inserting the results of the stored procedure into a table and then use bcp to copy the data from the table into a
data file.
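A sketch of that two-step workaround, with placeholder table, column, and procedure names:

```sql
-- Step 1: capture the stored procedure's result set in a permanent table.
-- The column list must match the procedure's output exactly.
CREATE TABLE dbo.ProcResults (Id int, Name nvarchar(50));
INSERT INTO dbo.ProcResults
EXEC dbo.MyProcedure;
```

Step 2 then exports the table, for example: bcp MyDatabase.dbo.ProcResults out ProcResults.dat -c -T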
table_name
Is the name of the destination table when importing data into SQL Server (in), and the source table when
exporting data from SQL Server (out).
view_name
Is the name of the destination view when copying data into SQL Server (in), and the source view when copying
data from SQL Server (out). Only views in which all columns refer to the same table can be used as destination
views. For more information on the restrictions for copying data into views, see INSERT (Transact-SQL ).
-a packet_size
Specifies the number of bytes, per network packet, sent to and from the server. A server configuration option can
be set by using SQL Server Management Studio (or the sp_configure system stored procedure). However, the
server configuration option can be overridden on an individual basis by using this option. packet_size can be from
4096 to 65535 bytes; the default is 4096.
Increased packet size can enhance performance of bulk-copy operations. If a larger packet is requested but cannot
be granted, the default is used. The performance statistics generated by the bcp utility show the packet size used.
-b batch_size
Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate
transaction that imports the whole batch before being committed. By default, all the rows in the data file are
imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the
number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are
rolled back. Batches already imported by committed transactions are unaffected by a later failure.
Do not use this option in conjunction with the -h "ROWS_PER_BATCH =bb" option.
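For example, assuming a trusted connection and a large character-format data file, the following hypothetical command commits the import in 10,000-row batches:

```
bcp MyDatabase.dbo.MyTable in bigload.dat -c -T -b 10000
```

If a batch fails, only the rows of that batch are rolled back; batches already committed remain in the table.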
-c
Performs the operation using a character data type. This option does not prompt for each field; it uses char as the
storage type, without prefixes and with \t (tab character) as the field separator and \r\n (newline character) as the
row terminator. -c is not compatible with -w.
For more information, see Use Character Format to Import or Export Data (SQL Server).
-C { ACP | OEM | RAW | code_page }
Specifies the code page of the data in the data file. code_page is relevant only if the data contains char, varchar, or
text columns with character values greater than 127 or less than 32.

NOTE
We recommend specifying a collation name for each column in a format file, except when you want the 65001 option to
have priority over the collation/code page specification.

CODE PAGE VALUE DESCRIPTION

ACP ANSI/Microsoft Windows (ISO 1252).

OEM Default code page used by the client. This is the default code
page used if -C is not specified.

RAW No conversion from one code page to another occurs. This is
the fastest option because no conversion occurs.

code_page Specific code page number; for example, 850. Versions prior to
version 13 ( SQL Server 2016 (13.x)) do not support code page
65001 (UTF-8 encoding). Versions beginning with 13 can
import UTF-8 encoding to earlier versions of SQL Server.
-d database_name
Specifies the database to connect to. By default, bcp.exe connects to the user’s default database. If -d
database_name and a three part name (database_name.schema.table, passed as the first parameter to bcp.exe) is
specified, an error will occur because you cannot specify the database name twice. If database_name begins with a
hyphen (-) or a forward slash (/), do not add a space between -d and the database name.
-e err_file
Specifies the full path of an error file used to store any rows that the bcp utility cannot transfer from the file to the
database. Error messages from the bcp command go to the workstation of the user. If this option is not used, an
error file is not created.
If err_file begins with a hyphen (-) or a forward slash (/), do not include a space between -e and the err_file value.
-E
Specifies that identity value or values in the imported data file are to be used for the identity column. If -E is not
given, the identity values for this column in the data file being imported are ignored, and SQL Server
automatically assigns unique values based on the seed and increment values specified during table creation.
If the data file does not contain values for the identity column in the table or view, use a format file to specify that
the identity column in the table or view should be skipped when importing data; SQL Server automatically assigns
unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL ).
The -E option has a special permissions requirement. For more information, see "Remarks" later in this topic.
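A minimal sketch (placeholder names; requires a live server) of preserving the identity values stored in the data file during an import:

```shell
REM Without -E, SQL Server would regenerate identity values
REM from the column's seed and increment.
bcp MyDatabase.dbo.MyTable in D:\BCP\MyTable.dat -c -T -E
```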
-f format_file
Specifies the full path of a format file. The meaning of this option depends on the environment in which it is used,
as follows:
If -f is used with the format option, the specified format_file is created for the specified table or view. To
create an XML format file, also specify the -x option. For more information, see Create a Format File (SQL
Server).
If used with the in or out option, -f requires an existing format file.

NOTE
Using a format file in with the in or out option is optional. In the absence of the -f option, if -n, -c, -w, or -N is not
specified, the command prompts for format information and lets you save your responses in a format file (whose
default file name is Bcp.fmt).

If format_file begins with a hyphen (-) or a forward slash (/), do not include a space between -f and the
format_file value.
-F first_row
Specifies the number of the first row to export from a table or import from a data file. This parameter requires a
value greater than (>) 0 but less than (<) or equal to (=) the total number rows. In the absence of this parameter,
the default is the first row of the file.
first_row can be a positive integer with a value up to 2^63-1. -F first_row is 1-based.
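Combined with -L last_row, this option lets you copy a contiguous range of rows. A sketch with placeholder names, requiring a live server:

```shell
REM Export only rows 1000 through 1999 of the data set.
bcp MyDatabase.dbo.MyTable out D:\BCP\MyTable_range.dat -c -T -F 1000 -L 1999
```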
-G
This switch is used by the client when connecting to Azure SQL Database or Azure SQL Data Warehouse to
specify that the user be authenticated using Azure Active Directory authentication. The -G switch requires version
14.0.3008.27 or later. To determine your version, execute bcp -v. For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or SQL Data Warehouse.

TIP
To check if your version of bcp includes support for Azure Active Directory Authentication (AAD), type bcp -- (bcp<space>
<dash><dash>) and verify that you see -G in the list of available arguments.

Azure Active Directory Username and Password:


When you want to use an Azure Active Directory user name and password, you can provide the -G option
and also use the user name and password by providing the -U and -P options.
The following example exports data using an Azure AD user name and password, where the user name and
password are Azure AD credentials. The example exports table bcptest from database testdb from Azure server
aadserver.database.windows.net and stores the data in file c:\last\data1.dat :

bcp bcptest out "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx

The following example imports data using an Azure AD user name and password, where the user name and
password are Azure AD credentials. The example imports data from file c:\last\data1.dat into table bcptest for
database testdb on Azure server aadserver.database.windows.net using Azure AD User/Password:

bcp bcptest in "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx

Azure Active Directory Integrated


For Azure Active Directory Integrated authentication, provide the -G option without a user name or
password. This configuration assumes that the current Windows user account (the account the bcp
command is running under) is federated with Azure AD:
The following example exports data using Azure AD Integrated account. The example exports table
bcptest from database testdb using Azure AD Integrated from Azure server
aadserver.database.windows.net and stores the data in file c:\last\data2.dat :

bcp bcptest out "c:\last\data2.dat" -S aadserver.database.windows.net -d testdb -G -c -t

The following example imports data using Azure AD Integrated auth. The example imports data from file
c:\last\data2.txt into table bcptest for database testdb on Azure server
aadserver.database.windows.net using Azure AD Integrated auth:

bcp bcptest in "c:\last\data2.dat" -S aadserver.database.windows.net -d testdb -G -c -t

-h "load hints[ ,...n]"
Specifies the hint or hints to be used during a bulk import of data into a table or view.
ORDER (column [ASC | DESC ] [,...n ])
The sort order of the data in the data file. Bulk import performance is improved if the data being imported
is sorted according to the clustered index on the table, if any. If the data file is sorted in an order other than the
order of the clustered index key, or if there is no clustered index on the table, the ORDER
clause is ignored. The column names supplied must be valid column names in the destination table. By
default, bcp assumes the data file is unordered. For optimized bulk import, SQL Server also validates that
the imported data is sorted.
ROWS_PER_BATCH = bb
Number of rows of data per batch (as bb). Used when -b is not specified, resulting in the entire data file
being sent to the server as a single transaction. The server optimizes the bulk load according to the value
bb. By default, ROWS_PER_BATCH is unknown.
KILOBYTES_PER_BATCH = cc
Approximate number of kilobytes of data per batch (as cc). By default, KILOBYTES_PER_BATCH is
unknown.
TABLOCK
Specifies that a bulk update table-level lock is acquired for the duration of the bulk load operation;
otherwise, a row-level lock is acquired. This hint significantly improves performance because holding a lock
for the duration of the bulk-copy operation reduces lock contention on the table. A table can be loaded
concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking
behavior is determined by the table option table lock on bulk load.

NOTE
If the target table has a clustered columnstore index, the TABLOCK hint is not required for loading by multiple
concurrent clients because each concurrent thread is assigned a separate rowgroup within the index and loads
data into it. Refer to the columnstore index conceptual topics for details.

CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation.
Without the CHECK_CONSTRAINTS hint, any CHECK and FOREIGN KEY constraints are ignored, and
after the operation the constraint on the table is marked as not-trusted.

NOTE
UNIQUE, PRIMARY KEY, and NOT NULL constraints are always enforced.
At some point, you will need to check the constraints on the entire table. If the table was nonempty before
the bulk import operation, the cost of revalidating the constraint may exceed the cost of applying CHECK
constraints to the incremental data. Therefore, we recommend that you normally enable constraint checking
during an incremental bulk import.
A situation in which you might want constraints disabled (the default behavior) is if the input data contains
rows that violate constraints. With CHECK constraints disabled, you can import the data and then use
Transact-SQL statements to remove data that is not valid.

NOTE
bcp now enforces data validation and data checks that might cause scripts to fail if they are executed on invalid data
in a data file.

NOTE
The -m max_errors switch does not apply to constraint checking.

FIRE_TRIGGERS
Specified with the in argument, any insert triggers defined on the destination table will run during the bulk-
copy operation. If FIRE_TRIGGERS is not specified, no insert triggers will run. FIRE_TRIGGERS is ignored
for the out, queryout, and format arguments.
-i input_file
Specifies the name of a response file, containing the responses to the command prompt questions for each
data field when a bulk copy is being performed using interactive mode (-n, -c, -w, or -N not specified).
If input_file begins with a hyphen (-) or a forward slash (/), do not include a space between -i and the
input_file value.
-k
Specifies that empty columns should retain a null value during the operation, rather than have any default
values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk
Import (SQL Server).
-K application_intent
Declares the application workload type when connecting to a server. The only value that is possible is
ReadOnly. If -K is not specified, the bcp utility will not support connectivity to a secondary replica in an
Always On availability group. For more information, see Active Secondaries: Readable Secondary Replicas
(Always On Availability Groups).
-L last_row
Specifies the number of the last row to export from a table or import from a data file. This parameter
requires a value greater than (>) 0 but less than (<) or equal to (=) the number of the last row. In the
absence of this parameter, the default is the last row of the file.
last_row can be a positive integer with a value up to 2^63-1.
-m max_errors
Specifies the maximum number of syntax errors that can occur before the bcp operation is canceled. A syntax
error implies a data conversion error to the target data type. The max_errors total excludes any errors that can be
detected only at the server, such as constraint violations.
A row that cannot be copied by the bcp utility is ignored and is counted as one error. If this option is not included,
the default is 10.
NOTE
The -m option also does not apply to converting the money or bigint data types.

-n
Performs the bulk-copy operation using the native (database) data types of the data. This option does not prompt
for each field; it uses the native values.
For more information, see Use Native Format to Import or Export Data (SQL Server).
-N
Performs the bulk-copy operation using the native (database) data types of the data for noncharacter data, and
Unicode characters for character data. This option offers a higher performance alternative to the -w option, and is
intended for transferring data from one instance of SQL Server to another using a data file. It does not prompt for
each field. Use this option when you are transferring data that contains ANSI extended characters and you want to
take advantage of the performance of native mode.
For more information, see Use Unicode Native Format to Import or Export Data (SQL Server).
If you export and then import data to the same table schema by using bcp.exe with -N, you might see a truncation
warning if there is a fixed length, non-Unicode character column (for example, char(10)).
The warning can be ignored. One way to resolve this warning is to use -n instead of -N.
-o output_file
Specifies the name of a file that receives output redirected from the command prompt.
If output_file begins with a hyphen (-) or a forward slash (/), do not include a space between -o and the output_file
value.
-P password
Specifies the password for the login ID. If this option is not used, the bcp command prompts for a password. If this
option is used at the end of the command prompt without a password, bcp uses the default password (NULL).

IMPORTANT
Do not use a blank password. Use a strong password.

To mask your password, do not specify the -P option along with the -U option. Instead, after specifying bcp along
with the -U option and other switches (do not specify -P ), press ENTER, and the command will prompt you for a
password. This method ensures that your password will be masked when it is entered.
If password begins with a hyphen (-) or a forward slash (/), do not add a space between -P and the password value.
-q
Executes the SET QUOTED_IDENTIFIER ON statement in the connection between the bcp utility and an
instance of SQL Server. Use this option to specify a database, owner, table, or view name that contains a space or a
single quotation mark. Enclose the entire three-part table or view name in quotation marks ("").
To specify a database name that contains a space or single quotation mark, you must use the -q option.
-q does not apply to values passed to -d.
For more information, see Remarks, later in this topic.
-r row_term
Specifies the row terminator. The default is \n (newline character). Use this parameter to override the default row
terminator. For more information, see Specify Field and Row Terminators (SQL Server).
If you specify the row terminator in hexadecimal notation in a bcp.exe command, the value will be truncated at
0x00. For example, if you specify 0x410041, 0x41 will be used.
If row_term begins with a hyphen (-) or a forward slash (/), do not include a space between -r and the row_term
value.
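As a sketch (placeholder names; requires a live server), overriding both the field and row terminators:

```shell
REM Comma-separated fields, pipe-plus-newline-terminated rows.
REM The pipe is quoted so the command shell does not interpret it.
bcp MyDatabase.dbo.MyTable out D:\BCP\MyTable.txt -c -T -t , -r "|\n"
```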
-R
Specifies that currency, date, and time data is bulk copied into SQL Server using the regional format defined for
the locale setting of the client computer. By default, regional settings are ignored.
-S server_name[\instance_name]
Specifies the instance of SQL Server to which to connect. If no server is
specified, the bcp utility connects to the default instance of SQL Server on the local computer. This option is
required when a bcp command is run from a remote computer on the network or a local named instance. To
connect to the default instance of SQL Server on a server, specify only server_name. To connect to a named
instance of SQL Server, specify server_name\instance_name.
-t field_term
Specifies the field terminator. The default is \t (tab character). Use this parameter to override the default field
terminator. For more information, see Specify Field and Row Terminators (SQL Server).
If you specify the field terminator in hexadecimal notation in a bcp.exe command, the value will be truncated at
0x00. For example, if you specify 0x410041, 0x41 will be used.
If field_term begins with a hyphen (-) or a forward slash (/), do not include a space between -t and the field_term
value.
-T
Specifies that the bcp utility connects to SQL Server with a trusted connection using integrated security. The
security credentials of the network user, login_id, and password are not required. If -T is not specified, you need to
specify -U and -P to successfully log in.

IMPORTANT
When the bcp utility is connecting to SQL Server with a trusted connection using integrated security, use the -T option
(trusted connection) instead of the user name and password combination. When the bcp utility is connecting to SQL
Database or SQL Data Warehouse, using Windows authentication or Azure Active Directory authentication is not supported.
Use the -U and -P options.

-U login_id
Specifies the login ID used to connect to SQL Server.

IMPORTANT
When the bcp utility is connecting to SQL Server with a trusted connection using integrated security, use the -T option
(trusted connection) instead of the user name and password combination. When the bcp utility is connecting to SQL
Database or SQL Data Warehouse, using Windows authentication or Azure Active Directory authentication is not supported.
Use the -U and -P options.

-v
Reports the bcp utility version number and copyright.
-V (80 | 90 | 100 | 110 | 120 | 130 )
Performs the bulk-copy operation using data types from an earlier version of SQL Server. This option does not
prompt for each field; it uses the default values.
80 = SQL Server 2000 (8.x)
90 = SQL Server 2005
100 = SQL Server 2008 and SQL Server 2008 R2
110 = SQL Server 2012 (11.x)
120 = SQL Server 2014 (12.x)
130 = SQL Server 2016 (13.x)
For example, to generate data for types not supported by SQL Server 2000 (8.x), but were introduced in later
versions of SQL Server, use the -V80 option.
For more information, see Import Native and Character Format Data from Earlier Versions of SQL Server.
-w
Performs the bulk copy operation using Unicode characters. This option does not prompt for each field; it uses
nchar as the storage type, no prefixes, \t (tab character) as the field separator, and \n (newline character) as the
row terminator. -w is not compatible with -c.
For more information, see Use Unicode Character Format to Import or Export Data (SQL Server).
-x
Used with the format and -f format_file options, generates an XML-based format file instead of the default non-
XML format file. The -x option does not work when importing or exporting data. It generates an error if used without
both format and -f format_file.

Remarks
The bcp 13.0 client is installed when you install Microsoft SQL Server 2017 tools. If tools are installed for both
SQL Server 2017 and an earlier version of SQL Server, depending on the order of values of the PATH
environment variable, you might be using the earlier bcp client instead of the bcp 13.0 client. This environment
variable defines the set of directories used by Windows to search for executable files. To discover which version
you are using, run the bcp /v command at the Windows Command Prompt. For information about how to set the
command path in the PATH environment variable, see Windows Help.
The bcp utility can also be downloaded separately from the Microsoft SQL Server 2016 Feature Pack. Select either
ENU\x64\MsSqlCmdLnUtils.msi or ENU\x86\MsSqlCmdLnUtils.msi.

XML format files are only supported when SQL Server tools are installed together with SQL Server Native Client.
For information about where to find or how to run the bcp utility and about the command prompt utilities syntax
conventions, see Command Prompt Utility Reference (Database Engine).
For information on preparing data for bulk import or export operations, see Prepare Data for Bulk Export or
Import (SQL Server).
For information about when row -insert operations that are performed by bulk import are logged in the
transaction log, see Prerequisites for Minimal Logging in Bulk Import.

Native Data File Support


In SQL Server 2017, the bcp utility supports native data files compatible with SQL Server 2000 (8.x), SQL Server
2005, SQL Server 2008, SQL Server 2008 R2, and SQL Server 2012 (11.x).

Computed Columns and timestamp Columns


Values in the data file being imported for computed or timestamp columns are ignored, and SQL Server
automatically assigns values. If the data file does not contain values for the computed or timestamp columns in
the table, use a format file to specify that the computed or timestamp columns in the table should be skipped
when importing data; SQL Server automatically assigns values for the column.
Computed and timestamp columns are bulk copied from SQL Server to a data file as usual.

Specifying Identifiers That Contain Spaces or Quotation Marks


SQL Server identifiers can include characters such as embedded spaces and quotation marks. Such identifiers
must be treated as follows:
When you specify an identifier or file name that includes a space or quotation mark at the command
prompt, enclose the identifier in quotation marks ("").
For example, the following bcp out command creates a data file named Currency Types.dat :

bcp AdventureWorks2012.Sales.Currency out "Currency Types.dat" -T -c

To specify a database name that contains a space or quotation mark, you must use the -q option.
For owner, table, or view names that contain embedded spaces or quotation marks, you can either:
Specify the -q option, or
Enclose the owner, table, or view name in brackets ([]) inside the quotation marks.
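For example (a sketch; the database and table names are hypothetical, and a reachable server is assumed), a table named Order Details can be exported either way:

```shell
REM Using -q: enclose the entire three-part name in quotation marks.
bcp "MyDatabase.dbo.Order Details" out D:\BCP\OrderDetails.dat -c -T -q

REM Using brackets around the object name inside the quotation marks.
bcp "MyDatabase.dbo.[Order Details]" out D:\BCP\OrderDetails.dat -c -T
```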

Data Validation
bcp now enforces data validation and data checks that might cause scripts to fail if they are executed on invalid
data in a data file. For example, bcp now verifies that:
The native representation of float or real data types are valid.
Unicode data has an even-byte length.
Forms of invalid data that could be bulk imported in earlier versions of SQL Server might fail to load now;
whereas, in earlier versions, the failure did not occur until a client tried to access the invalid data. The added
validation minimizes surprises when querying the data after bulk load.

Bulk Exporting or Importing SQLXML Documents


To bulk export or import SQLXML data, use one of the following data types in your format file.

DATA TYPE EFFECT

SQLCHAR or SQLVARYCHAR The data is sent in the client code page or in the code page
implied by the collation. The effect is the same as specifying
the -c switch without specifying a format file.

SQLNCHAR or SQLNVARCHAR The data is sent as Unicode. The effect is the same as
specifying the -w switch without specifying a format file.

SQLBINARY or SQLVARYBIN The data is sent without any conversion.

Permissions
A bcp out operation requires SELECT permission on the source table.
A bcp in operation minimally requires SELECT/INSERT permissions on the target table. In addition, ALTER
TABLE permission is required if any of the following is true:
Constraints exist and the CHECK_CONSTRAINTS hint is not specified.

NOTE
Disabling constraints is the default behavior. To enable constraints explicitly, use the -h option with the
CHECK_CONSTRAINTS hint.

Triggers exist and the FIRE_TRIGGER hint is not specified.

NOTE
By default, triggers are not fired. To fire triggers explicitly, use the -h option with the FIRE_TRIGGERS hint.

You use the -E option to import identity values from a data file.

NOTE
Requiring ALTER TABLE permission on the target table was new in SQL Server 2005. This new requirement might cause bcp
scripts that do not enforce triggers and constraint checks to fail if the user account lacks ALTER table permissions for the
target table.

Character Mode (-c) and Native Mode (-n) Best Practices


This section has recommendations for character mode (-c) and native mode (-n).
(Administrator/User) When possible, use native format (-n) to avoid the separator issue. Use the native
format to export and import using SQL Server. Export data from SQL Server using the -c or -w option if
the data will be imported to a non- SQL Server database.
(Administrator) Verify data when using BCP OUT. For example, when you use BCP OUT, BCP IN, and then
BCP OUT, verify that the data is properly exported and that the terminator values are not used as part of some
data value. Consider overriding the default terminators (using the -t and -r options) with random
hexadecimal values to avoid conflicts between terminator values and data values.
(User) Use a long and unique terminator (any sequence of bytes or characters) to minimize the possibility of
a conflict with the actual string value. This can be done by using the -t and -r options.
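A sketch of this practice (placeholder names; requires a live server), using control characters that rarely appear in real data, expressed in the hexadecimal notation described under the -t and -r options:

```shell
REM Unit separator (0x1F) as the field terminator, record separator (0x1E)
REM as the row terminator.
bcp MyDatabase.dbo.MyTable out D:\BCP\MyTable.dat -c -T -t 0x1F -r 0x1E
```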

Examples
This section contains the following examples:
A. Identify bcp utility version
B. Copying table rows into a data file (with a trusted connection)
C. Copying table rows into a data file (with Mixed-mode Authentication)
D. Copying data from a file to a table
E. Copying a specific column into a data file
F. Copying a specific row into a data file
G. Copying data from a query to a data file
H. Creating format files
I. Using a format file to bulk import with bcp
Example Test Conditions
The examples below make use of the WideWorldImporters sample database for SQL Server (starting 2016) and
Azure SQL Database. WideWorldImporters can be downloaded from
https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0. See RESTORE (Transact-SQL) for the syntax to restore the
sample database. Except where specified otherwise, the examples assume that you are using Windows
Authentication and have a trusted connection to the server instance on which you are running the bcp command.
A directory named D:\BCP will be used in many of the examples.
The script below creates an empty copy of the WideWorldImporters.Warehouse.StockItemTransactions table and then
adds a primary key constraint. Run the following T-SQL script in SQL Server Management Studio (SSMS):

USE WideWorldImporters;
GO

SET NOCOUNT ON;

IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'StockItemTransactions_bcp')
BEGIN
    SELECT * INTO WideWorldImporters.Warehouse.StockItemTransactions_bcp
    FROM WideWorldImporters.Warehouse.StockItemTransactions
    WHERE 1 = 2;

    ALTER TABLE Warehouse.StockItemTransactions_bcp
    ADD CONSTRAINT PK_Warehouse_StockItemTransactions_bcp PRIMARY KEY NONCLUSTERED
        (StockItemTransactionID ASC);
END

NOTE
Truncate the StockItemTransactions_bcp table as needed.
TRUNCATE TABLE WideWorldImporters.Warehouse.StockItemTransactions_bcp;

A. Identify bcp utility version


At a command prompt, enter the following command:

bcp -v

B. Copying table rows into a data file (with a trusted connection)


The following examples illustrate the out option on the WideWorldImporters.Warehouse.StockItemTransactions table.
Basic
This example creates a data file named StockItemTransactions_character.bcp and copies the table data into
it using character format.
At a command prompt, enter the following command:
bcp WideWorldImporters.Warehouse.StockItemTransactions out D:\BCP\StockItemTransactions_character.bcp -c -T

Expanded
This example creates a data file named StockItemTransactions_native.bcp and copies the table data
into it using the native format. The example also specifies the maximum number of syntax errors, an
error file, and an output file.
At a command prompt, enter the following command:

bcp WideWorldImporters.Warehouse.StockItemTransactions OUT D:\BCP\StockItemTransactions_native.bcp -m 1 -n -e D:\BCP\Error_out.log -o D:\BCP\Output_out.log -S -T

Review Error_out.log and Output_out.log . Error_out.log should be blank. Compare the file sizes between
StockItemTransactions_character.bcp and StockItemTransactions_native.bcp .
C. Copying table rows into a data file (with mixed-mode authentication)
The following example illustrates the out option on the WideWorldImporters.Warehouse.StockItemTransactions table.
This example creates a data file named StockItemTransactions_character.bcp and copies the table data into it using
character format.
This example assumes that you are using mixed-mode authentication, so you must use the -U switch to specify your
login ID. Also, unless you are connecting to the default instance of SQL Server on the local computer, use the -S
switch to specify the system name and, optionally, an instance name.
At a command prompt, enter the following command: (The system will prompt you for your password.)

bcp WideWorldImporters.Warehouse.StockItemTransactions out D:\BCP\StockItemTransactions_character.bcp -c -U<login_id> -S<server_name\instance_name>

D. Copying data from a file to a table


The following examples illustrate the in option on the WideWorldImporters.Warehouse.StockItemTransactions_bcp
table using files created above.
Basic
This example uses the StockItemTransactions_character.bcp data file previously created.
At a command prompt, enter the following command:

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN D:\BCP\StockItemTransactions_character.bcp -c -T

Expanded
This example uses the StockItemTransactions_native.bcp data file previously created. The example also uses
the TABLOCK hint and specifies the batch size, the maximum number of syntax errors, an error file, and an
output file.
At a command prompt, enter the following command:

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN D:\BCP\StockItemTransactions_native.bcp -b 5000 -h "TABLOCK" -m 1 -n -e D:\BCP\Error_in.log -o D:\BCP\Output_in.log -S -T

Review Error_in.log and Output_in.log .
E. Copying a specific column into a data file
To copy a specific column, you can use the queryout option. The following example copies only the
StockItemTransactionID column of the Warehouse.StockItemTransactions table into a data file.

At a command prompt, enter the following command:

bcp "SELECT StockItemTransactionID FROM WideWorldImporters.Warehouse.StockItemTransactions WITH (NOLOCK)" queryout D:\BCP\StockItemTransactionID_c.bcp -c -T

F. Copying a specific row into a data file


To copy a specific row, you can use the queryout option. The following example copies only the row for the
person named Amy Trefl from the WideWorldImporters.Application.People table into a data file Amy_Trefl_c.bcp .
Note: the -d switch is used to identify the database.
At a command prompt, enter the following command:

bcp "SELECT * from Application.People WHERE FullName = 'Amy Trefl'" queryout D:\BCP\Amy_Trefl_c.bcp -d WideWorldImporters -c -T

G. Copying data from a query to a data file


To copy the result set from a Transact-SQL statement to a data file, use the queryout option. The following
example copies the names from the WideWorldImporters.Application.People table, ordered by full name, into the
People.txt data file. Note: the -t switch is used to create a comma-delimited file.

At a command prompt, enter the following command:

bcp "SELECT FullName, PreferredName FROM WideWorldImporters.Application.People ORDER BY FullName" queryout D:\BCP\People.txt -t, -c -T

H. Creating format files


The following example creates three different format files for the Warehouse.StockItemTransactions table in the
WideWorldImporters database. Review the contents of each created file.

At a command prompt, enter the following commands:

REM non-XML character format
bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f D:\BCP\StockItemTransactions_c.fmt -c -T

REM non-XML native format
bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f D:\BCP\StockItemTransactions_n.fmt -n -T

REM XML character format
bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f D:\BCP\StockItemTransactions_c.xml -x -c -T

NOTE
To use the -x switch, you must be using a bcp 9.0 client. For information about how to use the bcp 9.0 client, see "Remarks."

For more information, see Non-XML Format Files (SQL Server) and XML Format Files (SQL Server).
I. Using a format file to bulk import with bcp
To use a previously created format file when importing data into an instance of SQL Server, use the -f switch with
the in option. For example, the following command bulk copies the contents of a data file,
StockItemTransactions_character.bcp , into a copy of the Warehouse.StockItemTransactions_bcp table by using the
previously created format file, StockItemTransactions_c.xml . Note: the -L switch is used to import only the first 100
records.
At a command prompt, enter the following command:

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in D:\BCP\StockItemTransactions_character.bcp -L 100 -f D:\BCP\StockItemTransactions_c.xml -T

NOTE
Format files are useful when the data file fields are different from the table columns; for example, in their number, ordering,
or data types. For more information, see Format Files for Importing or Exporting Data (SQL Server).

J. Specifying a code page


The following partial code example shows a bcp import while specifying code page 65001 (UTF-8):

bcp.exe MyTable in "D:\data.csv" -T -c -C 65001 -t , ...

The following partial code example shows a bcp export while specifying code page 65001 (UTF-8):

bcp.exe MyTable out "D:\data.csv" -T -c -C 65001 -t , ...

Additional Examples
The following topics contain examples of using bcp:

Data Formats for Bulk Import or Bulk Export (SQL Server)


 ● Use Native Format to Import or Export Data (SQL Server)
 ● Use Character Format to Import or Export Data (SQL Server)
 ● Use Unicode Native Format to Import or Export Data (SQL Server)
 ● Use Unicode Character Format to Import or Export Data (SQL Server)

Specify Field and Row Terminators (SQL Server)

Keep Nulls or Use Default Values During Bulk Import (SQL Server)

Keep Identity Values When Bulk Importing Data (SQL Server)

Format Files for Importing or Exporting Data (SQL Server)


 ● Create a Format File (SQL Server)
 ● Use a Format File to Bulk Import Data (SQL Server)
 ● Use a Format File to Skip a Table Column (SQL Server)
 ● Use a Format File to Skip a Data Field (SQL Server)
 ● Use a Format File to Map Table Columns to Data-File Fields (SQL Server)

Examples of Bulk Import and Export of XML Documents (SQL Server)

See Also
Prepare Data for Bulk Export or Import (SQL Server)
BULK INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
SET QUOTED_IDENTIFIER (Transact-SQL)
sp_configure (Transact-SQL)
sp_tableoption (Transact-SQL)
Format Files for Importing or Exporting Data (SQL Server)
SqlLocalDB Utility
5/3/2018 • 3 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Use the SqlLocalDB utility to create an instance of Microsoft SQL Server 2016 Express LocalDB. The
SqlLocalDB utility (SqlLocalDB.exe) is a simple command-line tool that enables users and developers to create and
manage an instance of SQL Server Express LocalDB. For information about how to use LocalDB, see SQL Server
2016 Express LocalDB.

Syntax
SqlLocalDB.exe
{
[ create | c ] <instance-name> <instance-version> [-s ]
| [ delete | d ] <instance-name>
| [ start | s ] <instance-name>
| [ stop | p ] <instance-name> [ -i ] [ -k ]
| [ share | h ] [ "<user_SID>" | "<user_account>" ] "<private-name>" "<shared-name>"
| [ unshare | u ] "<shared-name>"
| [ info | i ] <instance-name>
| [ versions | v ]
| [ trace | t ] [ on | off ]
| [ help | -? ]
}

Arguments
[ create | c ] <instance-name> <instance-version> [-s ]
Creates a new instance of SQL Server Express LocalDB. SqlLocalDB uses the version of SQL Server Express
binaries specified by the <instance-version> argument. The version number is specified in numeric format with at
least one decimal. The minor version numbers (service packs) are optional. For example, the following two version
numbers are both acceptable: 11.0 or 11.0.1186. The specified version must be installed on the computer. If not
specified, the version number defaults to the version of the SqlLocalDB utility. Adding -s starts the new instance
of LocalDB.
[ share | h ]
Shares the specified private instance of LocalDB using the specified shared name. If the user SID or account name
is omitted, it defaults to the current user.
[ unshare | u ]
Stops the sharing of the specified shared instance of LocalDB.
[ delete | d ] <instance-name>
Deletes the specified instance of SQL Server Express LocalDB.
[ start | s ] "<instance-name>"
Starts the specified instance of SQL Server Express LocalDB. When successful, the statement returns the named
pipe address of the LocalDB instance.
[ stop | p ] <instance-name> [-i ] [-k ]
Stops the specified instance of SQL Server Express LocalDB. Adding -i requests the instance shutdown with the
NOWAIT option. Adding -k kills the instance process without contacting it.
[ info | i ] [ <instance-name> ]
Lists all instances of SQL Server Express LocalDB owned by the current user.
<instance-name> returns the name, version, state (Running or Stopped), last start time for the specified instance
of SQL Server Express LocalDB, and the local pipe name of the LocalDB instance.
[ trace | t ] on | off
trace on enables tracing for the SqlLocalDB API calls for the current user. trace off disables tracing.
-?
Returns brief descriptions of each SqlLocalDB option.

Remarks
The instance name argument must follow the rules for SQL Server identifiers or it must be enclosed in double
quotes.
Executing SqlLocalDB without arguments returns the help text.
Operations other than start can only be performed on an instance belonging to the currently logged-in user. A
shared LocalDB instance can only be started and stopped by the owner of the instance.

Examples
A. Creating an Instance of LocalDB
The following example creates an instance of SQL Server Express LocalDB named DEPARTMENT using the SQL
Server 2014 (version 12.0) binaries and starts the instance.

SqlLocalDB.exe create "DEPARTMENT" 12.0 -s
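A full create-use-delete cycle combines several of the verbs documented above. The following sketch continues with the DEPARTMENT instance; the info command confirms the state before the instance is stopped and removed:

```
SqlLocalDB.exe info "DEPARTMENT"
SqlLocalDB.exe stop "DEPARTMENT"
SqlLocalDB.exe delete "DEPARTMENT"
```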

B. Working with a Shared Instance of LocalDB


Open a command prompt using Administrator privileges.

SqlLocalDB.exe create "DeptLocalDB"
SqlLocalDB.exe share "DeptLocalDB" "DeptSharedLocalDB"
SqlLocalDB.exe start "DeptLocalDB"
SqlLocalDB.exe info "DeptLocalDB"
REM The previous statement outputs the instance pipe name for the next step
sqlcmd -S np:\\.\pipe\LOCALDB#<use your pipe name>\tsql\query
CREATE LOGIN NewLogin WITH PASSWORD = 'Passw0rd!!@52';
GO
CREATE USER NewLogin;
GO
EXIT

Execute the following code to connect to the shared instance of LocalDB using the NewLogin login.

sqlcmd -S (localdb)\.\DeptSharedLocalDB -U NewLogin -P Passw0rd!!@52
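When the shared instance is no longer needed, the sharing can be reversed and the instance removed. The following sketch assumes the names from the example above and, like the setup steps, requires an elevated command prompt:

```
SqlLocalDB.exe stop "DeptLocalDB"
SqlLocalDB.exe unshare "DeptSharedLocalDB"
SqlLocalDB.exe delete "DeptLocalDB"
```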

See Also
SQL Server 2016 Express LocalDB
Command-Line Management Tool: SqlLocalDB.exe
osql Utility
5/3/2018 • 11 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The osql utility allows you to enter Transact-SQL statements, system procedures, and script files. This utility uses
ODBC to communicate with the server.

IMPORTANT
This feature will be removed in a future version of SQL Server. Avoid using this feature in new development work, and plan to
modify applications that currently use the feature. Use sqlcmd instead. For more information, see sqlcmd Utility.

Syntax
osql
[-?] |
[-L] |
[
{
{-Ulogin_id [-Ppassword]} | -E }
[-Sserver_name[\instance_name]] [-Hwksta_name] [-ddb_name]
[-ltime_out] [-ttime_out] [-hheaders]
[-scol_separator] [-wcolumn_width] [-apacket_size]
[-e] [-I] [-D data_source_name]
[-ccmd_end] [-q "query"] [-Q"query"]
[-n] [-merror_level] [-r {0 | 1}]
[-iinput_file] [-ooutput_file] [-p]
[-b] [-u] [-R] [-O]
]

Arguments
-?
Displays the syntax summary of osql switches.
-L
Lists the locally configured servers and the names of the servers broadcasting on the network.

NOTE
Due to the nature of broadcasting on networks, osql may not receive a timely response from all servers. Therefore the list of
servers returned may vary for each invocation of this option.

-U login_id
Is the user login ID. Login IDs are case-sensitive.
-P password
Is a user-specified password. If the -P option is not used, osql prompts for a password. If the -P option is used at
the end of the command prompt without any password, osql uses the default password (NULL ).
IMPORTANT
Do not use a blank password. Use a strong password. For more information, see Strong Passwords.

Passwords are case-sensitive.


The OSQLPASSWORD environment variable allows you to set a default password for the current session.
Therefore, you do not have to hard code a password into batch files.
If you do not specify a password with the -P option, osql first checks for the OSQLPASSWORD variable. If no
value is set, osql uses the default password, NULL. The following example sets the OSQLPASSWORD variable at
a command prompt and then accesses the osql utility:

C:\>SET OSQLPASSWORD=abracadabra
C:\>osql

IMPORTANT
To mask your password, do not specify the -P option along with the -U option. Instead, after specifying osql along with the -
U option and other switches (do not specify -P), press ENTER, and osql will prompt you for a password. This method ensures
that your password will be masked when it is entered.

-E
Uses a trusted connection instead of requesting a password.
-S server_name[ \instance_name]
Specifies the instance of SQL Server to connect to. Specify server_name to connect to the default instance of SQL
Server on that server. Specify server_name\instance_name to connect to a named instance of SQL Server on that
server. If no server is specified, osql connects to the default instance of SQL Server on the local computer. This
option is required when executing osql from a remote computer on the network.
-H wksta_name
Is a workstation name. The workstation name is stored in sysprocesses.hostname and is displayed by sp_who. If
this option is not specified, the current computer name is assumed.
-d db_name
Issues a USE db_name statement when osql is started.
-l time_out
Specifies the number of seconds before an osql login times out. The default time-out for login to osql is eight
seconds.
-t time_out
Specifies the number of seconds before a command times out. If a time_out value is not specified, commands do
not time out.
-h headers
Specifies the number of rows to print between column headings. The default is to print headings one time for each
set of query results. Use -1 to specify that no headers will be printed. If -1 is used, there must be no space between
the parameter and the setting (-h-1, not -h -1).
-s col_separator
Specifies the column-separator character, which is a blank space by default. To use characters that have special
meaning to the operating system (for example, | ; & < >), enclose the character in double quotation marks (").
-w column_width
Allows the user to set the screen width for output. The default is 80 characters. When an output line has reached its
maximum screen width, it is broken into multiple lines.
-a packet_size
Allows you to request a different-sized packet. The valid values for packet_size are 512 through 65535. The default
value for osql is the server default. Increased packet size can enhance performance on larger script execution where
the amount of SQL statements between GO commands is substantial. Microsoft testing indicates that 8192 is
typically the fastest setting for bulk copy operations. A larger packet size can be requested, but osql defaults to the
server default if the request cannot be granted.
-e
Echoes input.
-I
Sets the QUOTED_IDENTIFIER connection option on.
-D data_source_name
Connects to an ODBC data source that is defined using the ODBC driver for SQL Server. The osql connection
uses the options specified in the data source.

NOTE
This option does not work with data sources defined for other drivers.

-c cmd_end
Specifies the command terminator. By default, commands are terminated and sent to SQL Server by entering GO
on a line by itself. When you reset the command terminator, do not use Transact-SQL reserved words or characters
that have special meaning to the operating system, whether preceded by a backslash or not.
-q " query "
Executes a query when osql starts, but does not exit osql when the query completes. (Note that the query
statement should not include GO ). If you issue a query from a batch file, use %variables, or environment
%variables%. For example:

SET table=sys.objects
osql -E -q "select name, object_id from %table%"

Use double quotation marks around the query and single quotation marks around anything embedded in the
query.
-Q" query "
Executes a query and immediately exits osql. Use double quotation marks around the query and single quotation
marks around anything embedded in the query.
-n
Removes numbering and the prompt symbol (>) from input lines.
-m error_level
Customizes the display of error messages. The message number, state, and error level are displayed for errors of
the specified severity level or higher. Nothing is displayed for errors of levels lower than the specified level. Use -1
to specify that all headers are returned with messages, even informational messages. If -1 is used, there must be no
space between the parameter and the setting (-m-1, not -m -1).
-r { 0| 1}
Redirects message output to the screen (stderr). If you do not specify a parameter, or if you specify 0, only error
messages with a severity level 11 or higher are redirected. If you specify 1, all message output (including "print") is
redirected.
-i input_file
Identifies the file that contains a batch of SQL statements or stored procedures. The less than (<) comparison
operator can be used in place of -i.
-o output_file
Identifies the file that receives output from osql. The greater than (>) comparison operator can be used in place of
-o.
If input_file is not Unicode and -u is not specified, output_file is stored in OEM format. If input_file is Unicode or -u
is specified, output_file is stored in Unicode format.
-p
Prints performance statistics.
-b
Specifies that osql exits and returns a DOS ERRORLEVEL value when an error occurs. The value returned to the
DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity of 11 or greater; otherwise,
the value returned is 0. MS-DOS batch files can test the value of DOS ERRORLEVEL and handle the
error appropriately.
-u
Specifies that output_file is stored in Unicode format, regardless of the format of the input_file.
-R
Specifies that the SQL Server ODBC driver use client settings when converting currency, date, and time data to
character data.
-O
Specifies that certain osql features be deactivated to match the behavior of earlier versions of isql. These features
are deactivated:
EOF batch processing
Automatic console width scaling
Wide messages
It also sets the default DOS ERRORLEVEL value to -1.

NOTE
The -n, -O and -D options are no longer supported by osql.

Remarks
The osql utility is started directly from the operating system with the case-sensitive options listed here. After
osql starts, it accepts SQL statements and sends them to SQL Server interactively. The results are formatted and
displayed on the screen (stdout). Use QUIT or EXIT to exit from osql.
If you do not specify a user name when you start osql, SQL Server checks for the environment variables and uses
those, for example, osqluser=(user) or osqlserver=(server). If no environment variables are set, the workstation
user name is used. If you do not specify a server, the name of the workstation is used.
If neither the -U or -P options are used, SQL Server attempts to connect using Microsoft Windows Authentication
Mode. Authentication is based on the Microsoft Windows account of the user running osql.
The osql utility uses the ODBC API. The utility uses the SQL Server ODBC driver default settings for the SQL
Server ISO connection options. For more information, see Effects of ANSI Options.

NOTE
The osql utility does not support CLR user-defined data types. To process these data types, you must use the sqlcmd utility.
For more information, see sqlcmd Utility.

OSQL Commands
In addition to Transact-SQL statements within osql, these commands are also available.

COMMAND DESCRIPTION

GO Executes all statements entered after the last GO.

RESET Clears any statements you have entered.

QUIT or EXIT() Exits from osql.

CTRL+C Ends a query without exiting from osql.

NOTE
The !! and ED commands are no longer supported by osql.

The command terminators GO (by default), RESET, EXIT, QUIT, and CTRL+C are recognized only if they appear at
the beginning of a line, immediately following the osql prompt.
GO signals both the end of a batch and the execution of any cached Transact-SQL statements. When you press
ENTER at the end of each input line, osql caches the statements on that line. When you press ENTER after typing
GO, all of the currently cached statements are sent as a batch to SQL Server.
The current osql utility works as if there is an implied GO at the end of any script executed, therefore all
statements in the script execute.
End a command by typing a line beginning with a command terminator. You can follow the command terminator
with an integer to specify how many times the command should be run. For example, to execute this command
100 times, type:

SELECT x = 1
GO 100

The results are printed once at the end of execution. osql does not accept more than 1,000 characters per line.
Large statements should be spread across multiple lines.
The command recall facilities of Windows can be used to recall and modify osql statements. The existing query
buffer can be cleared by typing RESET.
When running stored procedures, osql prints a blank line between each set of results in a batch. In addition, the "0
rows affected" message does not appear when it does not apply to the statement executed.

Using osql Interactively


To use osql interactively, type the osql command (and any of the options) at a command prompt.
You can read in a file containing a query (such as Stores.qry) for execution by osql by typing a command similar to
this:

osql -E -i stores.qry

You can read in a file containing a query (such as Titles.qry) and direct the results to another file by typing a
command similar to this:

osql -E -i titles.qry -o titles.res

IMPORTANT
When possible, use the -E option (trusted connection).

When using osql interactively, you can read an operating-system file into the command buffer with :r file_name.
This sends the SQL script in file_name directly to the server as a single batch.

NOTE
When using osql, SQL Server treats the batch separator GO, if it appears in a SQL script file, as a syntax error.

Inserting Comments
You can include comments in a Transact-SQL statement submitted to SQL Server by osql. Two types of
commenting styles are allowed: -- and /*...*/.

Using EXIT to Return Results in osql


You can use the result of a SELECT statement as the return value from osql. If it is numeric, the last column of the
last result row is converted to a 4-byte integer (long). MS-DOS passes the low byte to the parent process or
operating system error level. Windows passes the entire 4-byte integer. The syntax is:

EXIT ( < query > )

For example:

EXIT(SELECT @@ROWCOUNT)

You can also include the EXIT parameter as part of a batch file. For example:

osql -E -Q "EXIT(SELECT COUNT(*) FROM '%1')"
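Because the EXIT() value surfaces as the process exit code, a batch file can branch on it with ERRORLEVEL. The following sketch is hypothetical (the table name is invented for illustration) and assumes a trusted connection to a local default instance; IF ERRORLEVEL 1 is true when the exit code is 1 or greater:

```
osql -E -Q "EXIT(SELECT COUNT(*) FROM Sales.Orders)"
IF ERRORLEVEL 1 ECHO At least one order row was found.
```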

The osql utility passes everything between the parentheses () to the server exactly as entered. If a stored system
procedure selects a set and returns a value, only the selection is returned. The EXIT() statement with nothing
between the parentheses executes everything preceding it in the batch and then exits with no return value.
There are four EXIT formats:
EXIT
Does not execute the batch; quits immediately and returns no value.
EXIT()
Executes the batch, and then quits and returns no value.
EXIT(query)
Executes the batch, including the query, and then quits after returning the results of the query.
RAISERROR with a state of 127
If RAISERROR is used within an osql script and a state of 127 is raised, osql will quit and return the message ID back to the
client. For example:

RAISERROR(50001, 10, 127)

This error will cause the osql script to end and the message ID 50001 will be returned to the client.
The return values -1 to -99 are reserved by SQL Server; osql defines these values:
-100
Error encountered prior to selecting return value.
-101
No rows found when selecting return value.
-102
Conversion error occurred when selecting return value.

Displaying money and smallmoney Data Types


osql displays the money and smallmoney data types with two decimal places although SQL Server stores the
value internally with four decimal places. Consider the example:

SELECT CAST(CAST(10.3496 AS money) AS decimal(6, 4))
GO
This statement produces a result of 10.3496, which indicates that the value is stored with all decimal places intact.

See Also
Comment (MDX)
-- (Comment) (MDX)
CAST and CONVERT (Transact-SQL )
RAISERROR (Transact-SQL )
Profiler Utility
5/3/2018 • 3 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The profiler utility launches the SQL Server Profiler tool. The optional arguments listed later in this topic allow
you to control how the application starts.

NOTE
The profiler utility is not intended for scripting traces. For more information, see SQL Server Profiler.

Syntax
profiler
[ /? ] |
[
{
{ /U login_id [ /P password ] }
| /E
}
{[ /S sql_server_name ] | [ /A analysis_services_server_name ] }
[ /D database ]
[ /T "template_name" ]
[ /B { "trace_table_name" } ]
{ [/F "filename" ] | [ /O "filename" ] }
[ /L locale_ID ]
[ /M "MM-DD-YY hh:mm:ss" ]
[ /R ]
[ /Z file_size ]
]

Arguments
/?
Displays the syntax summary of profiler arguments.
/U login_id
Is the user login ID for SQL Server Authentication. Login IDs are case sensitive.

NOTE
When possible, use Windows Authentication.

/P password
Specifies a user-specified password for SQL Server Authentication.
/E
Specifies connecting with Windows Authentication with the current user's credentials.
/S sql_server_name
Specifies an instance of SQL Server. Profiler will automatically connect to the specified server using the
authentication information specified in the /U and /P switches or the /E switch. To connect to a named instance of
SQL Server, use /S sql_server_name\instance_name.
/A analysis_services_server_name
Specifies an instance of Analysis Services. Profiler will automatically connect to the specified server using the
authentication information specified in the /U and /P switches or the /E switch. To connect to a named instance of
Analysis Services, use /A analysis_services_server_name\instance_name.
/D database
Specifies the name of the database to be used with the connection. This option will select the default database for
the specified user if no database is specified.
/B " trace_table_name "
Specifies a trace table to load when the profiler is launched. You must specify the database, the user or schema, and
the table.
/T" template_name "
Specifies the template that will be loaded to configure the trace. The template name must be in quotes. The
template name must be in either the system template directory or the user template directory. If two templates
with the same name exist in both directories, the template from the system directory will be loaded. If no template
with the specified name exists, the standard template will be loaded. Note that the file extension for the template
(.tdf ) should not be specified as part of the template_name. For example:

/T "standard"

/F" filename "


Specifies the path and filename of a trace file to load when profiler is launched. The entire path and filename must
be in quotes. This option cannot be used with /O.
/O " filename "
Specifies the path and filename of a file to which trace results should be written. The entire path and filename must
be in quotes. This option cannot be used with /F.
/L locale_ID
Not available.
/M "MM-DD-YY hh:mm:ss"
Specifies the date and time for the trace to stop. The stop time must be in quotes. Specify the stop time according
to the parameters in the table below:

PARAMETER DEFINITION

MM Two-digit month

DD Two-digit day

YY Two-digit year

hh Two-digit hour on a 24-hour clock

mm Two-digit minute

ss Two-digit second
NOTE
The "MM-DD-YY hh:mm:ss" format can only be used if the Use regional settings to display date and time values option
is enabled in SQL Server Profiler. If this option is not enabled, you must use the "YYYY-MM-DD hh:mm:ss" date and time
format.

/R
Enables trace file rollover.
/Z file_size
Specifies the size of the trace file in megabytes (MB ). The default size is 5 MB. If rollover is enabled, all rollover files
will be limited to the value specified in this argument.

Remarks
To start a trace with a specific template, use the /S and /T options together. For example, to start a trace using the
Standard template on MyServer\MyInstance, enter the following at the command prompt:

profiler /S MyServer\MyInstance /T "Standard"
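The switches can be combined in a single invocation. The following sketch (server name, output path, and file size are invented for illustration) starts a trace with Windows Authentication, writes the results to a file, and enables rollover with a 50-MB limit per file:

```
profiler /E /S MyServer\MyInstance /T "Standard" /O "C:\Traces\MyTrace.trc" /R /Z 50
```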

See Also
Command Prompt Utility Reference (Database Engine)
sqlagent90 Application
5/3/2018 • 2 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlagent90 application starts SQL Server Agent from the command prompt. Usually, SQL Server Agent
should be run from SQL Server Management Studio or by using SQL -SMO methods in an application. Only run
sqlagent90 from the command prompt when you are diagnosing SQL Server Agent, or when you are directed to
it by your primary support provider.

Syntax
sqlagent90
-c [-v] [-i instance_name]

Arguments
-c
Indicates that SQL Server Agent is running from the command prompt and is independent of the Microsoft
Windows Services Control Manager. When -c is used, SQL Server Agent cannot be controlled from either the
Services application in Administrative Tools or SQL Server Configuration Manager. This argument is mandatory.
-v
Indicates that SQL Server Agent runs in verbose mode and writes diagnostic information to the command-prompt
window. The diagnostic information is the same as the information written to the SQL Server Agent error log.
-i instance_name
Indicates that SQL Server Agent connects to the named SQL Server instance specified by instance_name.

Remarks
After displaying a copyright message, sqlagent90 displays output in the command-prompt window only when the
-v switch is specified. To stop sqlagent90, press CTRL+C at the command prompt. Do not close the
command-prompt window before stopping sqlagent90.

See Also
Automated Administration Tasks (SQL Server Agent)
sqlcmd Utility
5/30/2018 • 32 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse

For SQL Server 2014 and lower, see sqlcmd Utility.


For using sqlcmd on Linux, see Install sqlcmd and bcp on Linux.

The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command
prompt, in Query Editor in SQLCMD mode, in a Windows script file or in an operating system (Cmd.exe) job
step of a SQL Server Agent job. This utility uses ODBC to execute Transact-SQL batches.

NOTE
The most recent version of the sqlcmd utility is available as a web release from the Download Center. You need version 13.1
or higher to support Always Encrypted ( -g ) and Azure Active Directory authentication ( -G ). (You may have several
versions of sqlcmd.exe installed on your computer. Be sure you are using the correct version. To determine the version,
execute sqlcmd -? .)

You can try the sqlcmd utility from Azure Cloud Shell, as it is pre-installed by default.
To run sqlcmd statements in SSMS, select SQLCMD Mode from the top navigation Query Menu dropdown.

IMPORTANT
SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework SqlClient for execution in regular and SQLCMD
mode in Query Editor. When sqlcmd is run from the command line, sqlcmd uses the ODBC driver. Because different
default options may apply, you might see different behavior when you execute the same query in SQL Server Management
Studio in SQLCMD Mode and in the sqlcmd utility.

Currently, sqlcmd does not require a space between the command line option and the value. However, in a future
release, a space may be required between the command line option and the value.
Other topics:
Start the sqlcmd Utility
Use the sqlcmd Utility

Syntax
sqlcmd
-a packet_size
-A (dedicated administrator connection)
-b (terminate batch job if there is an error)
-c batch_terminator
-C (trust the server certificate)
-d db_name
-e (echo input)
-E (use trusted connection)
-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]
-g (enable column encryption)
-G (use Azure Active Directory for authentication)
-h rows_per_header
-H workstation_name
-i input_file
-I (enable quoted identifiers)
-j (Print raw error messages)
-k[1 | 2] (remove or replace control characters)
-K application_intent
-l login_timeout
-L[c] (list servers, optional clean output)
-m error_level
-M multisubnet_failover
-N (encrypt connection)
-o output_file
-p[1] (print statistics, optional colon format)
-P password
-q "cmdline query"
-Q "cmdline query" (and exit)
-r[0 | 1] (msgs to stderr)
-R (use client regional settings)
-s col_separator
-S [protocol:]server[instance_name][,port]
-t query_timeout
-u (unicode output file)
-U login_id
-v var = "value"
-V error_severity_level
-w column_width
-W (remove trailing spaces)
-x (disable variable substitution)
-X[1] (disable commands, startup script, environment variables, optional exit)
-y variable_length_type_display_width
-Y fixed_length_type_display_width
-z new_password
-Z new_password (and exit)
-? (usage)

Command-line Options
Login-Related Options
-A
Logs in to SQL Server with a Dedicated Administrator Connection (DAC ). This kind of connection is used to
troubleshoot a server. This will only work with server computers that support DAC. If DAC is not available,
sqlcmd generates an error message and then exits. For more information about DAC, see Diagnostic Connection
for Database Administrators. The -A option is not supported with the -G option. When connecting to SQL
Database using -A, you must be a SQL server administrator. The DAC is not availble for an Azure Active
Directory adminstrator.
-C
This switch is used by the client to configure it to implicitly trust the server certificate without validation. This
option is equivalent to the ADO.NET option TRUSTSERVERCERTIFICATE = true .
-d db_name
Issues a USE db_name statement when you start sqlcmd. This option sets the sqlcmd scripting variable
SQLCMDDBNAME. This specifies the initial database. The default is your login's default-database property. If the
database does not exist, an error message is generated and sqlcmd exits.
-l login_timeout
Specifies the number of seconds before a sqlcmd login to the ODBC driver times out when you try to connect to
a server. This option sets the sqlcmd scripting variable SQLCMDLOGINTIMEOUT. The default time-out for login
to sqlcmd is eight seconds. When using the -G option to connect to SQL Database or SQL Data Warehouse and
authenticate using Azure Active Directory, a timeout value of at least 30 seconds is recommended. The login
time-out must be a number between 0 and 65534. If the value supplied is not numeric or does not fall into that range,
sqlcmd generates an error message. A value of 0 specifies the time-out to be infinite.
-E
Uses a trusted connection instead of using a user name and password to log on to SQL Server. By default,
without -E specified, sqlcmd uses the trusted connection option.
The -E option ignores possible user name and password environment variable settings such as
SQLCMDPASSWORD. If the -E option is used together with the -U option or the -P option, an error message is
generated.
-g
Sets the Column Encryption Setting to Enabled . For more information, see Always Encrypted. Only master keys
stored in Windows Certificate Store are supported. The -g switch requires at least sqlcmd version 13.1. To
determine your version, execute sqlcmd -? .
-G
This switch is used by the client when connecting to SQL Database or SQL Data Warehouse to specify that the
user be authenticated using Azure Active Directory authentication. This option sets the sqlcmd scripting variable
SQLCMDUSEAAD = true. The -G switch requires at least sqlcmd version 13.1. To determine your version,
execute sqlcmd -? . For more information, see Connecting to SQL Database or SQL Data Warehouse By Using
Azure Active Directory Authentication. The -A option is not supported with the -G option.

IMPORTANT
The -G option only applies to Azure SQL Database and Azure SQL Data Warehouse.

Azure Active Directory Username and Password:


When you want to use an Azure Active Directory user name and password, you can provide the -G option
and also use the user name and password by providing the -U and -P options.

Sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -U bob@contoso.com -P MyAADPassword -G

This will generate the following connection string in the backend:

SERVER = Target_DB_or_DW.testsrv.database.windows.net;UID=bob@contoso.com;PWD=MyAADPassword;AUTHENTICATION = ActiveDirectoryPassword

Azure Active Directory Integrated


For Azure Active Directory Integrated authentication, provide the -G option without a user name or
password:
Sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G

This will generate the following connection string in the backend:

SERVER = Target_DB_or_DW.testsrv.database.windows.net;Authentication = ActiveDirectoryIntegrated;Trusted_Connection=NO

NOTE
The -E option (Trusted_Connection) cannot be used along with the -G option.

-H workstation_name
A workstation name. This option sets the sqlcmd scripting variable SQLCMDWORKSTATION. The workstation
name is listed in the hostname column of the sys.sysprocesses catalog view and can be returned using the
stored procedure sp_who. If this option is not specified, the default is the current computer name. This name can
be used to identify different sqlcmd sessions.
-j
Prints raw error messages to the screen.
-K application_intent
Declares the application workload type when connecting to a server. The only currently supported value is
ReadOnly. If -K is not specified, the sqlcmd utility will not support connectivity to a secondary replica in an
Always On availability group. For more information, see Active Secondaries: Readable Secondary Replica (Always
On Availability Groups)
-M multisubnet_failover
Always specify -M when connecting to the availability group listener of a SQL Server availability group or a SQL
Server Failover Cluster Instance. -M provides for faster detection of and connection to the (currently) active
server. If -M is not specified, -M is off. For more information, see Creation and Configuration of Availability
Groups (SQL Server), Failover Clustering and Always On Availability Groups (SQL Server), and Active
Secondaries: Readable Secondary Replicas (Always On Availability Groups).
-N
This switch is used by the client to request an encrypted connection.
-P password
Is a user-specified password. Passwords are case sensitive. If the -U option is used and the -P option is not used,
and the SQLCMDPASSWORD environment variable has not been set, sqlcmd prompts the user for a password.
To specify a null password (not recommended), use -P "". Use a strong password.
The password prompt is displayed by printing the password prompt to the console, as follows: Password:

User input is hidden. This means that nothing is displayed and the cursor stays in position.
The SQLCMDPASSWORD environment variable lets you set a default password for the current session.
Therefore, passwords do not have to be hard-coded into batch files.
The following example first sets the SQLCMDPASSWORD variable at the command prompt and then accesses
the sqlcmd utility. At the command prompt, type:
SET SQLCMDPASSWORD=p@a$$w0rd
At the following command prompt, type:
sqlcmd
If the user name and password combination is incorrect, an error message is generated.
NOTE
The OSQLPASSWORD environment variable was kept for backward compatibility. The
SQLCMDPASSWORD environment variable takes precedence over the OSQLPASSWORD environment
variable; this means that sqlcmd and osql can be used next to each other without interference and that old scripts
will continue to work.
If the -P option is used with the -E option, an error message is generated.
If the -P option is followed by more than one argument, an error message is generated and the program exits.
-S [protocol:]server[\instance_name][,port]
Specifies the instance of SQL Server to which to connect. It sets the sqlcmd scripting variable SQLCMDSERVER.
Specify server_name to connect to the default instance of SQL Server on that server computer. Specify
server_name [ \instance_name ] to connect to a named instance of SQL Server on that server computer. If no
server computer is specified, sqlcmd connects to the default instance of SQL Server on the local computer. This
option is required when you execute sqlcmd from a remote computer on the network.
protocol can be tcp (TCP/IP), lpc (shared memory), or np (named pipes).
If you do not specify a server_name [ \instance_name ] when you start sqlcmd, SQL Server checks for and uses
the SQLCMDSERVER environment variable.

NOTE
The OSQLSERVER environment variable has been kept for backward compatibility. The SQLCMDSERVER environment
variable takes precedence over the OSQLSERVER environment variable; this means that sqlcmd and osql can be used next
to each other without interference and that old scripts will continue to work.

-U login_id
Is the login name or contained database user name. For contained database users you must provide the database
name option (-d).

NOTE
The OSQLUSER environment variable is available for backward compatibility. The SQLCMDUSER environment variable takes
precedence over the OSQLUSER environment variable. This means that sqlcmd and osql can be used next to each other
without interference. It also means that existing osql scripts will continue to work.

If neither the -U option nor the -P option is specified, sqlcmd tries to connect by using Microsoft Windows
Authentication mode. Authentication is based on the Windows account of the user who is running sqlcmd.
If the -U option is used with the -E option (described later in this topic), an error message is generated. If the –U
option is followed by more than one argument, an error message is generated and the program exits.
-z new_password
Change password:
sqlcmd -U someuser -P s0mep@ssword -z a_new_p@a$$w0rd

-Z new_password
Change password and exit:
sqlcmd -U someuser -P s0mep@ssword -Z a_new_p@a$$w0rd

Input/Output Options
-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]
Specifies the input and output code pages. The codepage number is a numeric value that specifies an installed
Windows code page.
Code-page conversion rules:
If no code pages are specified, sqlcmd will use the current code page for both input and output files, unless
the input file is a Unicode file, in which case no conversion is required.
sqlcmd automatically recognizes both big-endian and little-endian Unicode input files. If the -u option has
been specified, the output will always be little-endian Unicode.
If no output file is specified, the output code page will be the console code page. This enables the output to
be displayed correctly on the console.
Multiple input files are assumed to be of the same code page. Unicode and non-Unicode input files can be
mixed.
Enter chcp at the command prompt to verify the code page of Cmd.exe.
-i input_file[,input_file2...]
Identifies the file that contains a batch of SQL statements or stored procedures. Multiple files may be
specified that will be read and processed in order. Do not use any spaces between file names. sqlcmd will
first check to see whether all the specified files exist. If one or more files do not exist, sqlcmd will exit. The -i
and the -Q/-q options are mutually exclusive.
Path examples:

-i C:\<filename>
-i \\<Server>\<Share$>\<filename>
-i "C:\Some Folder\<file name>"

File paths that contain spaces must be enclosed in quotation marks.


This option may be used more than once: -i input_file -i input_file2.
-o output_file
Identifies the file that receives output from sqlcmd.
If -u is specified, the output_file is stored in Unicode format. If the file name is not valid, an error message is
generated, and sqlcmd exits. sqlcmd does not support concurrent writing of multiple sqlcmd processes to the
same file. The file output will be corrupted or incorrect. See the -f switch for more information about file formats.
This file will be created if it does not exist. A file of the same name from a prior sqlcmd session will be
overwritten. The file specified here is not the stdout file. If a stdout file is specified this file will not be used.
Path examples:

-o C:\<filename>
-o \\<Server>\<Share$>\<filename>
-o "C:\Some Folder\<file name>"

File paths that contain spaces must be enclosed in quotation marks.


-r[0 | 1]
Redirects the error message output to the screen (stderr). If you do not specify a parameter or if you specify 0,
only error messages that have a severity level of 11 or higher are redirected. If you specify 1, all error message
output including PRINT is redirected. Has no effect if you use -o. By default, messages are sent to stdout.
-R
Causes sqlcmd to localize numeric, currency, date, and time columns retrieved from SQL Server based on the
client’s locale. By default, these columns are displayed using the server’s regional settings.
-u
Specifies that output_file is stored in Unicode format, regardless of the format of input_file.
Query Execution Options
-e
Writes input scripts to the standard output device (stdout).
-I
Sets the SET QUOTED_IDENTIFIER connection option to ON. By default, it is set to OFF. For more information,
see SET QUOTED_IDENTIFIER (Transact-SQL ).
-q" cmdline query "
Executes a query when sqlcmd starts, but does not exit sqlcmd when the query has finished running. Multiple
semicolon-delimited queries can be executed. Use quotation marks around the query, as shown in the following
example.
At the command prompt, type:
sqlcmd -d AdventureWorks2012 -q "SELECT FirstName, LastName FROM Person.Person WHERE LastName LIKE 'Whi%';"

sqlcmd -d AdventureWorks2012 -q "SELECT TOP 5 FirstName FROM Person.Person;SELECT TOP 5 LastName FROM
Person.Person;"

IMPORTANT
Do not use the GO terminator in the query.

If -b is specified together with this option, sqlcmd exits on error. -b is described later in this topic.
-Q" cmdline query "
Executes a query when sqlcmd starts and then immediately exits sqlcmd. Multiple semicolon-delimited queries
can be executed.
Use quotation marks around the query, as shown in the following example.
At the command prompt, type:
sqlcmd -d AdventureWorks2012 -Q "SELECT FirstName, LastName FROM Person.Person WHERE LastName LIKE 'Whi%';"

sqlcmd -d AdventureWorks2012 -Q "SELECT TOP 5 FirstName FROM Person.Person;SELECT TOP 5 LastName FROM
Person.Person;"

IMPORTANT
Do not use the GO terminator in the query.

If -b is specified together with this option, sqlcmd exits on error. -b is described later in this topic.
-t query_timeout
Specifies the number of seconds before a command (or SQL statement) times out. This option sets the sqlcmd
scripting variable SQLCMDSTATTIMEOUT. If a query_timeout value is not specified, the command does not time out.
The query_timeout must be a number between 1 and 65534. If the value supplied is not numeric or does not fall
into that range, sqlcmd generates an error message.
NOTE
The actual time-out value may vary from the specified query_timeout value by several seconds.

-v var = value [var = value...]
Creates a sqlcmd scripting variable that can be used in a sqlcmd script. Enclose the value in quotation marks if
the value contains spaces. You can specify multiple var="value" pairs. If there are errors in any of the values
specified, sqlcmd generates an error message and then exits.
sqlcmd -v MyVar1=something MyVar2="some thing"

sqlcmd -v MyVar1=something -v MyVar2="some thing"

-x
Causes sqlcmd to ignore scripting variables. This is useful when a script contains many INSERT statements that
may contain strings that have the same format as regular variables, such as $(variable_name).
Formatting Options
-h headers
Specifies the number of rows to print between the column headings. The default is to print headings one time for
each set of query results. This option sets the sqlcmd scripting variable SQLCMDHEADERS. Use -1 to specify
that headers must not be printed. Any value that is not valid causes sqlcmd to generate an error message and
then exit.
-k [1 | 2]
Removes all control characters, such as tabs and new line characters, from the output. This preserves column
formatting when data is returned. If 1 is specified, each control character is replaced by a single space. If 2 is
specified, consecutive control characters are replaced by a single space. -k is the same as -k1.
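The difference between the two modes can be sketched in Python; this is an illustration of the documented behavior, not sqlcmd's actual implementation:

```python
import re

def strip_control_chars(text, mode=1):
    # Illustration of -k1 vs -k2 (not sqlcmd source code):
    # -k1 replaces every control character with a single space;
    # -k2 collapses each run of consecutive control characters into one space.
    if mode == 1:
        return "".join(" " if ord(c) < 32 else c for c in text)
    return re.sub(r"[\x00-\x1f]+", " ", text)

print(strip_control_chars("col1\t\ncol2", mode=1))  # two spaces remain
print(strip_control_chars("col1\t\ncol2", mode=2))  # one space remains
```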
-s col_separator
Specifies the column-separator character. The default is a blank space. This option sets the sqlcmd scripting
variable SQLCMDCOLSEP. To use characters that have special meaning to the operating system such as the
ampersand (&), or semicolon (;), enclose the character in quotation marks ("). The column separator can be any 8-
bit character.
-w column_width
Specifies the screen width for output. This option sets the sqlcmd scripting variable SQLCMDCOLWIDTH. The
column width must be a number greater than 8 and less than 65536. If the specified column width does not fall
into that range, sqlcmd generates an error message. The default width is 80 characters. When an output line
exceeds the specified column width, it wraps on to the next line.
-W
This option removes trailing spaces from a column. Use this option together with the -s option when preparing
data that is to be exported to another application. Cannot be used with the -y or -Y options.
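Why -W pairs with -s for export can be sketched as follows. This hypothetical helper assumes fixed per-column display widths, a simplification of sqlcmd's type-based widths:

```python
def format_row(values, widths, col_sep=" ", trim=False):
    # Without -W, each column is padded with spaces to its display width;
    # with -W (trim=True) the trailing padding is removed, producing
    # cleanly delimited output when a separator is set with -s.
    cells = [v.ljust(w) for v, w in zip(values, widths)]
    if trim:
        cells = [c.rstrip() for c in cells]
    return col_sep.join(cells)

print(format_row(["285", "Syed"], [16, 10], col_sep=",", trim=True))   # no padding
print(format_row(["285", "Syed"], [16, 10], col_sep=",", trim=False))  # padded columns
```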
-y variable_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXVARTYPEWIDTH . The default is 256. It limits the number of characters
that are returned for the large variable length data types:
varchar(max)
nvarchar(max)
varbinary(max)
xml
UDT (user-defined data types)
text
ntext
image

NOTE
UDTs can be of fixed length depending on the implementation. If the length of a fixed-length UDT is shorter than
display_width, the value of the UDT returned is not affected. However, if the length is longer than display_width,
the output is truncated.

IMPORTANT
Use the -y 0 option with extreme caution because it may cause serious performance issues on both the server and the
network, depending on the size of data returned.

-Y fixed_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXFIXEDTYPEWIDTH . The default is 0 (unlimited). Limits the number of
characters that are returned for the following data types:
char(n), where 1<=n<=8000
nchar(n), where 1<=n<=4000
varchar(n), where 1<=n<=8000
nvarchar(n), where 1<=n<=4000
varbinary(n), where 1<=n<=4000
variant
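The truncation rule that both -y and -Y apply can be sketched with a hypothetical helper (not part of sqlcmd):

```python
def display_value(value, display_width):
    # A display_width of 0 means unlimited (-Y's default); any positive
    # width truncates longer values to that many characters.
    if display_width <= 0:
        return value
    return value[:display_width]

print(display_value("abcdefgh", 4))  # truncated to 4 characters
print(display_value("abcdefgh", 0))  # returned in full
```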
Error Reporting Options
-b
Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs. The value that is
returned to the DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity level
greater than 10; otherwise, the value returned is 0. If the -V option has been set in addition to -b, sqlcmd
will not report an error if the severity level is lower than the values set using -V. Command prompt batch
files can test the value of ERRORLEVEL and handle the error appropriately. sqlcmd does not report errors
for severity level 10 (informational messages).
If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting variable,
ERRORLEVEL returned is 1.
-m error_level
Controls which error messages are sent to stdout. Messages that have a severity level greater than or
equal to this level are sent. When this value is set to -1, all messages, including informational messages, are
sent. Spaces are not allowed between the -m and -1. For example, -m-1 is valid, and -m -1 is not.
This option also sets the sqlcmd scripting variable SQLCMDERRORLEVEL. This variable has a default of
0.
-V error_severity_level
Controls the severity level that is used to set the ERRORLEVEL variable. Error messages that have severity
levels greater than or equal to this value set ERRORLEVEL. Values that are less than 0 are reported as 0.
Batch and CMD files can be used to test the value of the ERRORLEVEL variable.
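One reading of how -b and -V combine to set ERRORLEVEL can be sketched like this; it is an interpretation of the text above, not sqlcmd's exact logic:

```python
def errorlevel_for(severity, v_threshold=None):
    # Without -V, -b returns ERRORLEVEL 1 when severity is greater
    # than 10 (severity 10 informational messages never report an error).
    # With -V, the severity itself sets ERRORLEVEL once it meets or
    # exceeds the -V threshold; below the threshold no error is reported.
    if v_threshold is not None:
        return severity if severity >= v_threshold else 0
    return 1 if severity > 10 else 0
```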
Miscellaneous Options
-a packet_size
Requests a packet of a different size. This option sets the sqlcmd scripting variable SQLCMDPACKETSIZE.
packet_size must be a value between 512 and 32767. The default is 4096. A larger packet size can enhance
performance for execution of scripts that have lots of SQL statements between GO commands. You can
request a larger packet size. However, if the request is denied, sqlcmd uses the server default for packet
size.
-c batch_terminator
Specifies the batch terminator. By default, commands are terminated and sent to SQL Server by typing the
word "GO" on a line by itself. When you reset the batch terminator, do not use Transact-SQL reserved
keywords or characters that have special meaning to the operating system, even if they are preceded by a
backslash.
-L [c]
Lists the locally configured server computers, and the names of the server computers that are broadcasting
on the network. This parameter cannot be used in combination with other parameters. The maximum
number of server computers that can be listed is 3000. If the server list is truncated because of the size of
the buffer, a warning message is displayed.

NOTE
Because of the nature of broadcasting on networks, sqlcmd may not receive a timely response from all servers. Therefore,
the list of servers returned may vary for each invocation of this option.

If the optional parameter c is specified, the output appears without the Servers: header line and each server line is
listed without leading spaces. This is referred to as clean output. Clean output improves the processing
performance of scripting languages.
-p[1]
Prints performance statistics for every result set. The following is an example of the format for performance
statistics:
Network packet size (bytes): n

x xact[s]:

Clock Time (ms.): total t1 avg t2 (t3 xacts per sec.)

Where:
x = Number of transactions that are processed by SQL Server.
t1 = Total time for all transactions.
t2 = Average time for a single transaction.
t3 = Average number of transactions per second.
All times are in milliseconds.
If the optional parameter 1 is specified, the output format of the statistics is in colon-separated format that can be
imported easily into a spreadsheet or processed by a script.
If the optional parameter is any value other than 1, an error is generated and sqlcmd exits.
-X[1]
Disables commands that might compromise system security when sqlcmd is executed from a batch file. The
disabled commands are still recognized; sqlcmd issues a warning message and continues. If the optional
parameter 1 is specified, sqlcmd generates an error message and then exits. The following commands are
disabled when the -X option is used:
ED
!! command
If the -X option is specified, it prevents environment variables from being passed on to sqlcmd. It also
prevents the startup script specified by using the SQLCMDINI scripting variable from being executed. For
more information about sqlcmd scripting variables, see Use sqlcmd with Scripting Variables.
-?
Displays the version of sqlcmd and a syntax summary of sqlcmd options.

Remarks
Options do not have to be used in the order shown in the syntax section.
When multiple results are returned, sqlcmd prints a blank line between each result set in a batch. In addition, the
<x> rows affected message does not appear when it does not apply to the statement executed.

To use sqlcmd interactively, type sqlcmd at the command prompt with any one or more of the options described
earlier in this topic. For more information, see Use the sqlcmd Utility

NOTE
The options -L, -Q, -Z or -i cause sqlcmd to exit after execution.

The total length of the sqlcmd command line in the command environment (Cmd.exe), including all arguments
and expanded variables, is limited by the maximum command-line length that the operating system allows for Cmd.exe.

Variable Precedence (Low to High)


1. System-level environmental variables.
2. User-level environmental variables.
3. Command shell (SET X=Y) set at the command prompt before running sqlcmd.
4. sqlcmd -v X=Y
5. :Setvar X Y

NOTE
To view the environmental variables, in Control Panel, open System, and then click the Advanced tab.

sqlcmd Scripting Variables


| VARIABLE | RELATED SWITCH | R/W | DEFAULT |
|---|---|---|---|
| SQLCMDUSER | -U | R | "" |
| SQLCMDPASSWORD | -P | -- | "" |
| SQLCMDSERVER | -S | R | "DefaultLocalInstance" |
| SQLCMDWORKSTATION | -H | R | "ComputerName" |
| SQLCMDDBNAME | -d | R | "" |
| SQLCMDLOGINTIMEOUT | -l | R/W | "8" (seconds) |
| SQLCMDSTATTIMEOUT | -t | R/W | "0" = wait indefinitely |
| SQLCMDHEADERS | -h | R/W | "0" |
| SQLCMDCOLSEP | -s | R/W | "" |
| SQLCMDCOLWIDTH | -w | R/W | "0" |
| SQLCMDPACKETSIZE | -a | R | "4096" |
| SQLCMDERRORLEVEL | -m | R/W | 0 |
| SQLCMDMAXVARTYPEWIDTH | -y | R/W | "256" |
| SQLCMDMAXFIXEDTYPEWIDTH | -Y | R/W | "0" = unlimited |
| SQLCMDEDITOR | | R/W | "edit.com" |
| SQLCMDINI | | R | "" |
| SQLCMDUSEAAD | -G | R/W | "" |
SQLCMDUSER, SQLCMDPASSWORD and SQLCMDSERVER are set when :Connect is used.


R indicates the value can only be set one time during program initialization.
R/W indicates that the value can be modified by using the setvar command and subsequent commands will be
influenced by the new value.

sqlcmd Commands
In addition to Transact-SQL statements within sqlcmd, the following commands are also available:

GO [count]
[:] RESET
[:] ED
[:] !!
[:] QUIT
[:] EXIT
:r
:ServerList
:Setvar
:List
:Error
:Out
:Perftrace
:Connect
:On Error
:Help
:XML [ON | OFF]
:Listvar
Be aware of the following when you use sqlcmd commands:


All sqlcmd commands, except GO, must be prefixed by a colon (:).

IMPORTANT
To maintain backward compatibility with existing osql scripts, some of the commands will be recognized without the
colon. This is indicated by the [:].

sqlcmd commands are recognized only if they appear at the start of a line.
All sqlcmd commands are case insensitive.
Each command must be on a separate line. A command cannot be followed by a Transact-SQL statement
or another command.
Commands are executed immediately. They are not put in the execution buffer as Transact-SQL statements
are.
Editing Commands
[:] ED
Starts the text editor. This editor can be used to edit the current Transact-SQL batch, or the last executed
batch. To edit the last executed batch, the ED command must be typed immediately after the last batch has
completed execution.
The text editor is defined by the SQLCMDEDITOR environment variable. The default editor is 'Edit'. To
change the editor, set the SQLCMDEDITOR environment variable. For example, to set the editor to
Microsoft Notepad, at the command prompt, type:
SET SQLCMDEDITOR=notepad

[:] RESET
Clears the statement cache.
:List
Prints the content of the statement cache.
Variables
:Setvar <var> [ "value" ]
Defines sqlcmd scripting variables. Scripting variables have the following format: $(VARNAME) .
Variable names are case insensitive.
Scripting variables can be set in the following ways:
Implicitly using a command-line option. For example, the -l option sets the SQLCMDLOGINTIMEOUT
sqlcmd variable.
Explicitly by using the :Setvar command.
By defining an environment variable before you run sqlcmd.

NOTE
The -X option prevents environment variables from being passed on to sqlcmd.

If a variable defined by using :Setvar and an environment variable have the same name, the variable defined by
using :Setvar takes precedence.
Variable names must not contain blank space characters.
Variable names cannot have the same form as a variable expression, such as $(var).
If the string value of the scripting variable contains blank spaces, enclose the value in quotation marks. If a value
for a scripting variable is not specified, the scripting variable is dropped.
:Listvar
Displays a list of the scripting variables that are currently set.

NOTE
Only scripting variables that are set by sqlcmd, and those that are set using the :Setvar command will be displayed.

Output Commands
:Error
<filename> | STDERR | STDOUT
Redirect all error output to the file specified by file name, to stderr or to stdout. The Error command can appear
multiple times in a script. By default, error output is sent to stderr.
file name
Creates and opens a file that will receive the output. If the file already exists, it will be truncated to zero bytes. If the
file is not available because of permissions or other reasons, the output will not be switched and will be sent to the
last specified or default destination.
STDERR
Switches error output to the stderr stream. If this has been redirected, the target to which the stream has been
redirected will receive the error output.
STDOUT
Switches error output to the stdout stream. If this has been redirected, the target to which the stream has been
redirected will receive the error output.
:Out <filename> | STDERR | STDOUT
Creates and redirects all query results to the file specified by file name, to stderr or to stdout. By default, output is
sent to stdout. If the file already exists, it will be truncated to zero bytes. The Out command can appear multiple
times in a script.
:Perftrace <filename> | STDERR | STDOUT
Creates and redirects all performance trace information to the file specified by file name, to stderr or to stdout.
By default performance trace output is sent to stdout. If the file already exists, it will be truncated to zero bytes.
The Perftrace command can appear multiple times in a script.
Execution Control Commands
:On Error[ exit | ignore]
Sets the action to be performed when an error occurs during script or batch execution.
When the exit option is used, sqlcmd exits with the appropriate error value.
When the ignore option is used, sqlcmd ignores the error and continues executing the batch or script. By default,
an error message will be printed.
[:] QUIT
Causes sqlcmd to exit.
[:] EXIT[ (statement) ]
Lets you use the result of a SELECT statement as the return value from sqlcmd. If numeric, the first column of the
last result row is converted to a 4-byte integer (long). MS -DOS passes the low byte to the parent process or
operating system error level. Windows 200x passes the whole 4-byte integer. The syntax is:
:EXIT(query)

For example:
:EXIT(SELECT @@ROWCOUNT)

You can also include the EXIT parameter as part of a batch file. For example, at the command prompt, type:
sqlcmd -Q "EXIT(SELECT COUNT(*) FROM '%1')"

The sqlcmd utility sends everything between the parentheses () to the server. If a system stored procedure selects
a set and returns a value, only the selection is returned. The EXIT() statement with nothing between the
parentheses executes everything before it in the batch and then exits without a return value.
When an incorrect query is specified, sqlcmd will exit without a return value.
Here is a list of EXIT formats:
:EXIT
Does not execute the batch, and then quits immediately and returns no value.
:EXIT( )
Executes the batch, and then quits and returns no value.
:EXIT(query)
Executes the batch that includes the query, and then quits after it returns the results of the query.
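The low-byte behavior on MS-DOS can be shown with a one-line sketch:

```python
def dos_errorlevel(return_value):
    # MS-DOS passes only the low byte of the 4-byte integer to the
    # parent process, so return values above 255 wrap around.
    return return_value & 0xFF

print(dos_errorlevel(5))    # 5
print(dos_errorlevel(260))  # 4
```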
If RAISERROR is used within a sqlcmd script and a state of 127 is raised, sqlcmd will quit and return the
message ID back to the client. For example:
RAISERROR(50001, 10, 127)

This error will cause the sqlcmd script to end and return the message ID 50001 to the client.
The return values -1 to -99 are reserved by SQL Server; sqlcmd defines the following additional return
values:

| RETURN VALUES | DESCRIPTION |
|---|---|
| -100 | Error encountered prior to selecting return value. |
| -101 | No rows found when selecting return value. |
| -102 | Conversion error occurred when selecting return value. |
GO [count]
GO signals both the end of a batch and the execution of any cached Transact-SQL statements. When count is
specified, the cached statements are executed count times, as separate batches; you cannot declare a variable
more than once in a single batch.
Miscellaneous Commands
:r < filename >
Parses additional Transact-SQL statements and sqlcmd commands from the file specified by <filename> into the
statement cache.
If the file contains Transact-SQL statements that are not followed by GO, you must enter GO on the line that
follows :r.

NOTE
< filename > is read relative to the startup directory in which sqlcmd was run.

The file will be read and executed after a batch terminator is encountered. You can issue multiple :r commands.
The file may include any sqlcmd command. This includes the batch terminator GO.

NOTE
The line count that is displayed in interactive mode will be increased by one for every :r command encountered. The :r
command will appear in the output of the list command.

:Serverlist
Lists the locally configured servers and the names of the servers broadcasting on the network.
:Connect server_name[\instance_name] [-l timeout] [-U user_name [-P password]]
Connects to an instance of SQL Server . Also closes the current connection.
Time-out options:

0 wait forever

n>0 wait for n seconds

The SQLCMDSERVER scripting variable will reflect the current active connection.
If timeout is not specified, the value of the SQLCMDLOGINTIMEOUT variable is the default.
If only user_name is specified (either as an option, or as an environment variable), the user will be prompted to
enter a password. This is not true if the SQLCMDUSER or SQLCMDPASSWORD environment variables have
been set. If neither options nor environment variables are provided, Windows Authentication mode is used to
login. For example to connect to an instance, instance1 , of SQL Server , myserver , by using integrated security
you would use the following:
:connect myserver\instance1
To connect to the default instance of myserver using scripting variables, you would use the following:
:setvar myusername test

:setvar myservername myserver

:connect $(myservername) $(myusername)

[:] !! <command>


Executes operating system commands. To execute an operating system command, start a line with two
exclamation marks (!!) followed by the operating system command. For example:
:!! Dir

NOTE
The command is executed on the computer on which sqlcmd is running.

:XML [ON | OFF]


For more information, see XML Output Format and JSON Output Format in this topic.
:Help
Lists sqlcmd commands together with a short description of each command.
sqlcmd File Names
sqlcmd input files can be specified with the -i option or the :r command. Output files can be specified with the -o
option or the :Error, :Out and :Perftrace commands. The following are some guidelines for working with these
files:
:Error, :Out and :Perftrace should use separate <filename>. If the same <filename> is used, inputs from
the commands may be intermixed.
If an input file that is located on a remote server is called from sqlcmd on a local computer and the file
contains a drive file path such as :Out c:\OutputFile.txt, the output file will be created on the local computer
and not on the remote server.
Valid file paths include: C:\<filename> , \\<Server>\<Share$>\<filename> and "C:\Some Folder\<file name>" .
If there is a space in the path, use quotation marks.
Each new sqlcmd session will overwrite existing files that have the same names.
Informational Messages
sqlcmd prints any informational messages that are sent by the server. In the following example, after the Transact-
SQL statements are executed, an informational message is printed.
At the command prompt, type the following:
sqlcmd

At the sqlcmd prompt type:

USE AdventureWorks2012;

GO

When you press ENTER, the following informational message is printed: "Changed database context to
'AdventureWorks2012'."
Output Format from Transact-SQL Queries
sqlcmd first prints a column header that contains the column names specified in the select list. The column names
are separated by using the SQLCMDCOLSEP character. By default, this is a space. If the column name is shorter
than the column width, the output is padded with spaces up to the next column.
This line will be followed by a separator line that is a series of dash characters. The following output shows an
example.
Start sqlcmd. At the sqlcmd command prompt, type the following:
USE AdventureWorks2012;

SELECT TOP (2) BusinessEntityID, FirstName, LastName

FROM Person.Person;

GO

When you press ENTER, the following result set is returned.


BusinessEntityID FirstName LastName

---------------- ------------ ----------

285 Syed Abbas

293 Catherine Abel

(2 row(s) affected)

Although the BusinessEntityID column is only 4 characters wide, it has been expanded to accommodate the
longer column name. By default, output is terminated at 80 characters. This can be changed by using the -w
option, or by setting the SQLCMDCOLWIDTH scripting variable.
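The padding rule described above can be sketched as follows (a minimal illustration, not sqlcmd's actual code; each column is assumed to be padded to the larger of its name length and its data width, with columns joined by the SQLCMDCOLSEP character):

```python
def format_header(columns, colsep=" "):
    """Render a sqlcmd-style header row plus its dashed separator line.
    columns: list of (name, data_width) pairs; each column is padded to
    the larger of its name length and its data width."""
    widths = [max(len(name), width) for name, width in columns]
    header = colsep.join(name.ljust(w) for (name, _), w in zip(columns, widths))
    separator = colsep.join("-" * w for w in widths)
    return header, separator

# BusinessEntityID's data is only 4 characters wide, but the column is
# expanded to the 16-character column name, as in the output above.
header, sep = format_header(
    [("BusinessEntityID", 4), ("FirstName", 9), ("LastName", 8)])
print(header)
print(sep)
```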
XML Output Format
XML output that is the result of a FOR XML clause is output, unformatted, in a continuous stream.
When you expect XML output, use the following command: :XML ON .

NOTE
sqlcmd returns error messages in the usual format. Notice that the error messages are also output in the XML text stream
in XML format. By using :XML ON , sqlcmd does not display informational messages.

To set the XML mode off, use the following command: :XML OFF .
The GO command should not appear before the XML OFF command is issued, because the XML OFF command
switches sqlcmd back to row-oriented output.
XML (streamed) data and rowset data cannot be mixed. If the XML ON command has not been issued before a
Transact-SQL statement that outputs XML streams is executed, the output will be garbled. If the XML ON
command has been issued, you cannot execute Transact-SQL statements that output regular row sets.

NOTE
The :XML command does not support the SET STATISTICS XML statement.

JSON Output Format


When you expect JSON output, use the following command: :XML ON . Otherwise the output includes both the
column name and the JSON text. This output is not valid JSON.
To set the XML mode off, use the following command: :XML OFF .
For more info, see XML Output Format in this topic.
Using Azure Active Directory Authentication
Examples using Azure Active Directory Authentication:

sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -l 30
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -U bob@contoso.com -P MyAADPassword -G -l 30

sqlcmd Best Practices


Use the following practices to help maximize security and efficiency.
Use integrated security.
Use -X in automated environments.
Secure input and output files by using appropriate NTFS file system permissions.
To increase performance, do as much in one sqlcmd session as you can, instead of in a series of sessions.
Set time-out values for batch or query execution higher than you expect it will take to execute the batch or
query.

See Also
Start the sqlcmd Utility
Run Transact-SQL Script Files Using sqlcmd
Use the sqlcmd Utility
Use sqlcmd with Scripting Variables
Connect to the Database Engine With sqlcmd
Edit SQLCMD Scripts with Query Editor
Manage Job Steps
Create a CmdExec Job Step
SQLdiag Utility
5/3/2018 • 18 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The SQLdiag utility is a general purpose diagnostics collection utility that can be run as a console application or as
a service. You can use SQLdiag to collect logs and data files from SQL Server and other types of servers, and use
it to monitor your servers over time or troubleshoot specific problems with your servers. SQLdiag is intended to
expedite and simplify diagnostic information gathering for Microsoft Customer Support Services.

NOTE
This utility may be changed, and applications or scripts that rely on its command line arguments or behavior may not work
correctly in future releases.

SQLdiag can collect the following types of diagnostic information:


Windows performance logs
Windows event logs
SQL Server Profiler traces
SQL Server blocking information
SQL Server configuration information
You can specify what types of information you want SQLdiag to collect by editing the configuration file
SQLDiag.xml, which is described in a following section.

Syntax
sqldiag
{ [/?] }
|
{ [/I configuration_file]
[/O output_folder_path]
[/P support_folder_path]
[/N output_folder_management_option]
[/M machine1 [ machine2 machineN]| @machinelistfile]
[/C file_compression_type]
[/B [+]start_time]
[/E [+]stop_time]
[/A SQLdiag_application_name]
[/T { tcp [ ,port ] | np | lpc } ]
[/Q] [/G] [/R] [/U] [/L] [/X] }
|
{ [START | STOP | STOP_ABORT] }
|
{ [START | STOP | STOP_ABORT] /A SQLdiag_application_name }

Arguments
/?
Displays usage information.
/I configuration_file
Sets the configuration file for SQLdiag to use. By default, /I is set to SQLDiag.Xml.
/O output_folder_path
Redirects SQLdiag output to the specified folder. If the /O option is not specified, SQLdiag output is written to a
subfolder named SQLDIAG under the SQLdiag startup folder. If the SQLDIAG folder does not exist, SQLdiag
attempts to create it.

NOTE
The output folder location is relative to the support folder location that can be specified with /P. To set an entirely different
location for the output folder, specify the full directory path for /O.

/P support_folder_path
Sets the support folder path. By default, /P is set to the folder where the SQLdiag executable resides. The support
folder contains SQLdiag support files, such as the XML configuration file, Transact-SQL scripts, and other files that
the utility uses during diagnostics collection. If you use this option to specify an alternate support files path,
SQLdiag will automatically copy the support files it requires to the specified folder if they do not already exist.

NOTE
To set your current folder as the support path, specify %cd% on the command line as follows:
SQLDIAG /P %cd%

/N output_folder_management_option
Sets whether SQLdiag overwrites or renames the output folder when it starts up. Available options:
1 = Overwrites the output folder (default)
2 = When SQLdiag starts up, it renames the output folder to SQLDIAG_00001, SQLDIAG_00002, and so on.
After renaming the current output folder, SQLdiag writes output to the default output folder SQLDIAG.

NOTE
SQLdiag does not append output to the current output folder when it starts up. It can only overwrite the default output
folder (option 1) or rename the folder (option 2), and then it writes output to the new default output folder named SQLDIAG.
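The /N 2 renaming scheme can be sketched as a search for the first unused five-digit suffix (an assumption based on the SQLDIAG_00001, SQLDIAG_00002 naming shown above, not SQLdiag's actual code):

```python
import os

def next_renamed_folder(base_dir, name="SQLDIAG"):
    """Pick the next SQLDIAG_00001-style name for the /N 2 rename by
    scanning for the first unused 5-digit suffix in base_dir."""
    n = 1
    while os.path.exists(os.path.join(base_dir, f"{name}_{n:05d}")):
        n += 1
    return f"{name}_{n:05d}"
```

After the rename, output goes to a fresh folder with the default name (SQLDIAG), so earlier collections are preserved side by side.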

/M machine1 [ machine2 ... machineN ] | @machinelistfile


Overrides the machines specified in the configuration file. By default the configuration file is SQLDiag.Xml, or is
set with the /I parameter. When specifying more than one machine, separate each machine name with a space.
Using @machinelistfile specifies a machine list filename to be stored in the configuration file.
/C file_compression_type
Sets the type of file compression used on the SQLdiag output folder files. Available options:
0 = none (default)
1 = uses NTFS compression
/B [+]start_time
Specifies the date and time to start collecting diagnostic data in the following format:
YYYYMMDD_HH:MM:SS
The time is specified using 24-hour notation. For example, 2:00 P.M. should be specified as 14:00:00.
Use + without the date (HH:MM:SS only) to specify a time that is relative to the current date and time. For
example, if you specify /B +02:00:00, SQLdiag will wait 2 hours before it starts collecting information.
Do not insert a space between + and the specified start_time.
If you specify a start time that is in the past, SQLdiag forcibly changes the start date so the start date and time are
in the future. For example, if you specify /B 01:00:00 and the current time is 08:00:00, SQLdiag forcibly changes
the start date so that the start date is the next day.
Note that SQLdiag uses the local time on the computer where the utility is running.
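The two forms of the /B argument described above can be sketched like this (a simplified illustration of the documented rules, not SQLdiag's parser; the YYYYMMDD_-prefixed absolute form is omitted for brevity):

```python
from datetime import datetime, timedelta

def resolve_start(spec, now):
    """Resolve a /B argument. '+HH:MM:SS' is relative to now; a bare
    'HH:MM:SS' that has already passed today is forcibly moved to the
    next day, mirroring SQLdiag's forward adjustment."""
    if spec.startswith("+"):
        h, m, s = map(int, spec[1:].split(":"))
        return now + timedelta(hours=h, minutes=m, seconds=s)
    h, m, s = map(int, spec.split(":"))
    start = now.replace(hour=h, minute=m, second=s, microsecond=0)
    if start <= now:
        start += timedelta(days=1)  # start time is in the past: push to tomorrow
    return start

now = datetime(2018, 6, 1, 8, 0, 0)
print(resolve_start("+02:00:00", now))  # 2018-06-01 10:00:00 (waits 2 hours)
print(resolve_start("01:00:00", now))   # 2018-06-02 01:00:00 (moved to next day)
```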
/E [+]stop_time
Specifies the date and time to stop collecting diagnostic data in the following format:
YYYYMMDD_HH:MM:SS
The time is specified using 24-hour notation. For example, 2:00 P.M. should be specified as 14:00:00.
Use + without the date (HH:MM:SS only) to specify a time that is relative to the current date and time. For
example, if you specify a start time and end time by using /B +02:00:00 /E +03:00:00, SQLdiag waits 2 hours
before it starts collecting information, then collects information for 3 hours before it stops and exits. If /B is not
specified, SQLdiag starts collecting diagnostics immediately and ends at the date and time specified by /E.
Do not insert a space between + and the specified start_time or end_time.
Note that SQLdiag uses the local time on the computer where the utility is running.
/A SQLdiag_application_name
Enables running multiple instances of the SQLdiag utility against the same SQL Server instance.
Each SQLdiag_application_name identifies a different instance of SQLdiag. No relationship exists between a
SQLdiag_application_name instance and a SQL Server instance name.
SQLdiag_application_name can be used to start or stop a specific instance of the SQLdiag service.
For example:
SQLDIAG START /A SQLdiag_application_name
It can also be used with the /R option to register a specific instance of SQLdiag as a service. For example:
SQLDIAG /R /A SQLdiag_application_name

NOTE
SQLdiag automatically prefixes DIAG$ to the instance name specified for SQLdiag_application_name. This provides a
sensible service name if you register SQLdiag as a service.

/T { tcp [ ,port ] | np | lpc }


Connects to an instance of SQL Server using the specified protocol.
tcp [,port]
Transmission Control Protocol/Internet Protocol (TCP/IP). You can optionally specify a port number for the
connection.
np
Named pipes. By default, the default instance of SQL Server listens on named pipe \\.\pipe\sql\query and
\\.\pipe\MSSQL$<instancename>\sql\query for a named instance. You cannot connect to an instance of SQL Server
by using an alternate pipe name.
lpc
Local procedure call. This shared memory protocol is available if the client is connecting to an instance of SQL
Server on the same computer.
/Q
Runs SQLdiag in quiet mode. /Q suppresses all prompts, such as password prompts.
/G
Runs SQLdiag in generic mode. When /G is specified, on startup SQLdiag does not enforce SQL Server
connectivity checks or verify that the user is a member of the sysadmin fixed server role. Instead, SQLdiag defers
to Windows to determine whether a user has the appropriate rights to gather each requested diagnostic.
If /G is not specified, SQLdiag checks to determine whether the user is a member of the Windows
Administrators group, and will not collect SQL Server diagnostics if the user is not an Administrators group
member.
/R
Registers SQLdiag as a service. Any command line arguments that are specified when you register SQLdiag as a
service are preserved for future runs of the service.
When SQLdiag is registered as a service, the default service name is SQLDIAG. You can change the service name
by using the /A argument.
Use the START command line argument to start the service:
SQLDIAG START
You can also use the net start command to start the service:
net start SQLDIAG
/U
Unregisters SQLdiag as a service.
Use the /A argument also if unregistering a named SQLdiag instance.
/L
Runs SQLdiag in continuous mode when a start time or end time is also specified with the /B or /E arguments,
respectively. SQLdiag automatically restarts after diagnostics collection stops due to a scheduled shutdown. For
example, by using the /E or the /X arguments.

NOTE
SQLdiag ignores the /L argument if a start time or end time is not specified by using the /B and /E command line
arguments.

Using /L does not imply the service mode. To use /L when running SQLdiag as a service, specify it on the
command line when you register the service.
/X
Runs SQLdiag in snapshot mode. SQLdiag takes a snapshot of all configured diagnostics and then shuts down
automatically.
START | STOP | STOP_ABORT
Starts or stops the SQLdiag service. STOP_ABORT forces the service to shut down as quickly as possible without
finishing collection of diagnostics it is currently collecting.
When these service control arguments are used, they must be the first argument used on the command line. For
example:
SQLDIAG START
Only the /A argument, which specifies a named instance of SQLdiag, can be used with START, STOP, or
STOP_ABORT to control a specific instance of the SQLdiag service. For example:
SQLDIAG START /A SQLdiag_application_name

Security Requirements
Unless SQLdiag is run in generic mode (by specifying the /G command line argument), the user who runs
SQLdiag must be a member of the Windows Administrators group and a member of the SQL Server sysadmin
fixed server role. By default, SQLdiag connects to SQL Server by using Windows Authentication, but it also
supports SQL Server Authentication.

Performance Considerations
The performance effects of running SQLdiag depend on the type of diagnostic data you have configured it to
collect. For example, if you have configured SQLdiag to collect SQL Server Profiler tracing information, the more
event classes you choose to trace, the more your server performance is affected.
The performance impact of running SQLdiag is approximately equivalent to the sum of the costs of collecting the
configured diagnostics separately. For example, collecting a trace with SQLdiag incurs the same performance cost
as collecting it with SQL Server Profiler. The overhead added by SQLdiag itself is negligible.

Required Disk Space


Because SQLdiag can collect different types of diagnostic information, the free disk space that is required to run
SQLdiag varies. The amount of diagnostic information collected depends on the nature and volume of the
workload that the server is processing and may range from a few megabytes to several gigabytes.

Configuration Files
On startup, SQLdiag reads the configuration file and the command line arguments that have been specified. You
specify the types of diagnostic information that SQLdiag collects in the configuration file. By default, SQLdiag
uses the SQLDiag.Xml configuration file, which is extracted each time the tool runs and is located in the SQLdiag
utility startup folder. The configuration file uses the XML schema, SQLDiag_schema.xsd, which is also extracted
into the utility startup directory from the executable file each time SQLdiag runs.
Editing the Configuration Files
You can copy and edit SQLDiag.Xml to change the types of diagnostic data that SQLdiag collects. When editing
the configuration file always use an XML editor that can validate the configuration file against its XML schema,
such as Management Studio. You should not edit SQLDiag.Xml directly. Instead, make a copy of SQLDiag.Xml and
rename it to a new file name in the same folder. Then edit the new file, and use the /I argument to pass it to
SQLdiag.
Editing the Configuration File When SQLdiag Runs as a Service
If you have already run SQLdiag as a service and need to edit the configuration file, unregister the SQLDIAG
service by specifying the /U command line argument and then re-register the service by using the /R command
line argument. Unregistering and re-registering the service removes old configuration information that was cached
in the Windows registry.
Output Folder
If you do not specify an output folder with the /O argument, SQLdiag creates a subfolder named SQLDIAG under
the SQLdiag startup folder. For diagnostic information collection that involves high volume tracing, such as SQL
Server Profiler, make sure that the output folder is on a local drive with enough space to store the requested
diagnostic output.
When SQLdiag is restarted, it overwrites the contents of the output folder. To avoid this, specify /N 2 on the
command line.

Data Collection Process


When SQLdiag starts, it performs the initialization checks necessary to collect the diagnostic data that have been
specified in SQLDiag.Xml. This process may take several seconds. After SQLdiag has started collecting diagnostic
data when it is run as a console application, a message displays informing you that SQLdiag collection has started
and that you can press CTRL+C to stop it. When SQLdiag is run as a service, a similar message is written to the
Windows event log.
If you are using SQLdiag to diagnose a problem that you can reproduce, wait until you receive this message
before you reproduce the problem on your server.
SQLdiag collects most diagnostic data in parallel. All diagnostic information is collected by connecting to tools,
such as the SQL Server sqlcmd utility or the Windows command processor, except when information is collected
from Windows performance logs and event logs. SQLdiag uses one worker thread per computer to monitor the
diagnostic data collection of these other tools, often simultaneously waiting for several tools to complete. During
the collection process, SQLdiag routes the output from each diagnostic to the output folder.

Stopping Data Collection


After SQLdiag starts collecting diagnostic data, it continues to do so unless you stop it or it is configured to stop at
a specified time. You can configure SQLdiag to stop at a specified time by using the /E argument, which allows
you to specify a stop time, or by using the /X argument, which causes SQLdiag to run in snapshot mode.
When SQLdiag stops, it stops all diagnostics it has started. For example, it stops SQL Server Profiler traces it was
collecting, it stops executing Transact-SQL scripts it was running, and it stops any sub processes it has spawned
during data collection. After diagnostic data collection has completed, SQLdiag exits.

NOTE
Pausing the SQLdiag service is not supported. If you attempt to pause the SQLdiag service, it stops after it finishes
collecting the diagnostics that it was collecting when you paused it. If you restart SQLdiag after stopping it, the application
restarts and overwrites the output folder. To avoid overwriting the output folder, specify /N 2 on the command line.

To stop SQLdiag when running as a console application


If you are running SQLdiag as a console application, press CTRL+C in the console window where SQLdiag is
running to stop it. After you press CTRL+C, a message displays in the console window informing you that
SQLDiag data collection is ending, and that you should wait until the process shuts down, which may take several
minutes.
Press Ctrl+C twice to terminate all child diagnostic processes and immediately terminate the application.
To stop SQLdiag when running as a service
If you are running SQLdiag as a service, run SQLDiag STOP in the SQLdiag startup folder to stop it.
If you are running multiple instances of SQLdiag on the same computer, you can also pass the SQLdiag instance
name on the command line when you stop the service. For example, to stop a SQLdiag instance named
Instance1, use the following syntax:

SQLDIAG STOP /A Instance1

NOTE
/A is the only command-line argument that can be used with START, STOP, or STOP_ABORT. If you need to specify a
named instance of SQLdiag with one of the service control verbs, specify /A after the control verb on the command line as
shown in the previous syntax example. When control verbs are used, they must be the first argument on the command line.

To stop the service as quickly as possible, run SQLDIAG STOP_ABORT in the utility startup folder. This command
aborts any diagnostics collecting currently being performed without waiting for them to finish.

NOTE
Use SQLDiag STOP or SQLDIAG STOP_ABORT to stop the SQLdiag service. Do not use the Windows Services Console to
stop SQLdiag or other SQL Server services.

Automatically Starting and Stopping SQLdiag


To automatically start and stop diagnostic data collection at a specified time, use the /B start_time and
/E stop_time arguments. For example, if you are troubleshooting a problem that consistently appears at
approximately 02:00:00, you can configure SQLdiag to automatically start collecting diagnostic data at 01:00:00
and automatically stop at 03:00:00. Use 24-hour
notation to specify an exact start and stop date and time with the format YYYYMMDD_HH:MM:SS. To specify a
relative start or stop time, prefix the start and stop time with + and omit the date portion (YYYYMMDD_) as
shown in the following example, which causes SQLdiag to wait 1 hour before it starts collecting information, then
it collects information for 3 hours before it stops and exits:

sqldiag /B +01:00:00 /E +03:00:00

When a relative start_time is specified, SQLdiag starts at a time that is relative to the current date and time. When
a relative end_time is specified, SQLdiag ends at a time that is relative to the specified start_time. If the start or
end date and time that you have specified is in the past, SQLdiag forcibly changes the start date so that the start
date and time are in the future.
This has important implications on the start and end dates you choose. Consider the following example:

sqldiag /B +01:00:00 /E 08:30:00

If the current time is 08:00, the end time passes before diagnostic collection actually begins. Because SQLDiag
automatically adjusts start and end dates to the next day when they occur in the past, in this example diagnostic
collection starts at 09:00 today (a relative start time has been specified with +) and continues collecting until 08:30
the following morning.
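The adjustment in the example above can be checked numerically (a sketch under the same assumptions: a relative start time, an absolute end time, and a current time of 08:00):

```python
from datetime import datetime, timedelta

now = datetime(2018, 6, 1, 8, 0, 0)              # current time is 08:00
start = now + timedelta(hours=1)                 # /B +01:00:00 -> 09:00 today
end = now.replace(hour=8, minute=30, second=0)   # /E 08:30:00
if end <= start:                                 # end would pass before collection begins,
    end += timedelta(days=1)                     # so it is pushed to the next day
print(start)  # 2018-06-01 09:00:00
print(end)    # 2018-06-02 08:30:00
```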
Stopping and Restarting SQLdiag to Collect Daily Diagnostics
To collect a specified set of diagnostics on a daily basis without having to manually start and stop SQLdiag, use
the /L argument. The /L argument causes SQLdiag to run continuously by automatically restarting itself after a
scheduled shutdown. When /L is specified, and SQLdiag stops because it has reached the end time specified with
the /E argument, or it stops because it is being run in snapshot mode by using the /X argument, SQLdiag restarts
instead of exiting.
The following example specifies that SQLdiag run in continuous mode to automatically restart after diagnostic
data collecting occurs between 03:00:00 and 05:00:00.

sqldiag /B 03:00:00 /E 05:00:00 /L

The following example specifies that SQLdiag run in continuous mode to automatically restart after taking a
diagnostic data snapshot at 03:00:00.

sqldiag /B 03:00:00 /X /L

Running SQLdiag as a Service


When you want to use SQLdiag to collect diagnostic data for long periods of time during which you might need to
log out of the computer on which SQLdiag is running, you can run it as a service.
To register SQLDiag to run as a service
You can register SQLdiag to run as a service by specifying the /R argument at the command line. This registers
SQLdiag to run as a service. The SQLdiag service name is SQLDIAG. Any other arguments you specify on the
command line when you register SQLDiag as a service are preserved and reused when the service is started.
To change the default SQLDIAG service name, use the /A command-line argument to specify another name.
SQLdiag automatically prefixes DIAG$ to any SQLdiag instance name specified with /A to create sensible service
names.
To unregister the SQLDIAG service
To unregister the service, specify the /U argument. Unregistering SQLdiag as a service also deletes the Windows
registry keys of the service.
To start or restart the SQLDIAG service
To start or restart the SQLDIAG service, run SQLDiag START from the command line.
If you are running multiple instances of SQLdiag by using the /A argument, you can also pass the SQLdiag
instance name on the command line when you start the service. For example, to start a SQLdiag instance named
Instance1, use the following syntax:

SQLDIAG START /A Instance1

You can also use the net start command to start the SQLDIAG service.
When you restart SQLdiag, it overwrites the contents in the current output folder. To avoid this, specify /N 2 on
the command line to rename the output folder when the utility starts.
Pausing the SQLdiag service is not supported.

Running Multiple Instances of SQLdiag


Run multiple instances of SQLdiag on the same computer by specifying /A SQLdiag_application_name on the
command line. This is useful for collecting different sets of diagnostics simultaneously from the same SQL Server
instance. For example, you can configure a named instance of SQLdiag to continuously perform lightweight data
collection. Then, if a specific problem occurs on SQL Server, you can run the default SQLdiag instance to collect
diagnostics for that problem, or to gather a set of diagnostics that Microsoft Customer Support Services has asked
you to gather to diagnose a problem.

Collecting Diagnostic Data from Clustered SQL Server Instances


SQLdiag supports collecting diagnostic data from clustered SQL Server instances. To gather diagnostics from
clustered SQL Server instances, make sure that "." is specified for the name attribute of the <Machine> element
in the configuration file SQLDiag.Xml and do not specify the /G argument on the command line. By default, "." is
specified for the name attribute in the configuration file and the /G argument is turned off. Typically, you do not
need to edit the configuration file or change the command line arguments when collecting from a clustered SQL
Server instance.
When "." is specified as the machine name, SQLdiag detects that it is running on a cluster, and simultaneously
retrieves diagnostic information from all virtual instances of SQL Server that are installed on the cluster. If you
want to collect diagnostic information from only one virtual instance of SQL Server that is running on a computer,
specify that virtual SQL Server for the name attribute of the <Machine> element in SQLDiag.Xml.

NOTE
To collect SQL Server Profiler trace information from clustered SQL Server instances, administrative shares (ADMIN$) must be
enabled on the cluster.

See Also
Command Prompt Utility Reference (Database Engine)
sqlmaint Utility
5/3/2018 • 10 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlmaint utility performs a specified set of maintenance operations on one or more databases. Use sqlmaint
to run DBCC checks, back up a database and its transaction log, update statistics, and rebuild indexes. All database
maintenance activities generate a report that can be sent to a designated text file, HTML file, or e-mail account.
sqlmaint executes database maintenance plans created with previous versions of SQL Server. To run SQL Server
maintenance plans from the command prompt, use the dtexec Utility.

IMPORTANT
This feature will be removed in the next version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Use SQL Server maintenance plan feature instead. For more
information on maintenance plans, see Maintenance Plans.

Syntax
sqlmaint
[-?] |
[
[-S server_name[\instance_name]]
[-U login_ID [-P password]]
{
[-D database_name | -PlanName name | -PlanID guid ]
[-Rpt text_file]
[-To operator_name]
[-HtmlRpt html_file [-DelHtmlRpt <time_period>] ]
[-RmUnusedSpace threshold_percent free_percent]
[-CkDB | -CkDBNoIdx]
[-CkAl | -CkAlNoIdx]
[-CkCat]
[-UpdOptiStats sample_percent]
[-RebldIdx free_space]
[-SupportComputedColumn]
[-WriteHistory]
[
{-BkUpDB [backup_path] | -BkUpLog [backup_path] }
{-BkUpMedia
{DISK [
[-DelBkUps <time_period>]
[-CrBkSubDir ]
[-UseDefDir ]
]
| TAPE
}
}
[-BkUpOnlyIfClean]
[-VrfyBackup]
]
}
]
<time_period> ::=
number[minutes | hours | days | weeks | months]

Arguments
The parameters and their values must be separated by a space. For example, there must be a space between -S and
server_name.
-?
Specifies that the syntax diagram for sqlmaint be returned. This parameter must be used alone.
-S server_name[ \instance_name]
Specifies the target instance of Microsoft SQL Server. Specify server_name to connect to the default instance of
SQL Server Database Engine on that server. Specify server_name\instance_name to connect to a named instance
of Database Engine on that server. If no server is specified, sqlmaint connects to the default instance of Database
Engine on the local computer.
-U login_ID
Specifies the login ID to use when connecting to the server. If not supplied, sqlmaint attempts to use Microsoft
Windows Authentication. If login_ID contains special characters, it must be enclosed in double quotation marks (");
otherwise, the double quotation marks are optional.

IMPORTANT
When possible, use Windows Authentication.
-P password
Specifies the password for the login ID. Only valid if the -U parameter is also supplied. If password contains special
characters, it must be enclosed in double quotation marks; otherwise, the double quotation marks are optional.

IMPORTANT
The password is not masked. When possible, use Windows Authentication.

-D database_name
Specifies the name of the database in which to perform the maintenance operation. If database_name contains
special characters, it must be enclosed in double quotation marks; otherwise, the double quotation marks are
optional.
-PlanName name
Specifies the name of a database maintenance plan defined using the Database Maintenance Plan Wizard. The
only information sqlmaint uses from the plan is the list of the databases in the plan. Any maintenance activities
you specify in the other sqlmaint parameters are applied to this list of databases.
-PlanID guid
Specifies the globally unique identifier (GUID) of a database maintenance plan defined using the Database
Maintenance Plan Wizard. The only information sqlmaint uses from the plan is the list of the databases in the
plan. Any maintenance activities you specify in the other sqlmaint parameters are applied to this list of databases.
This must match a plan_id value in msdb.dbo.sysdbmaintplans.
-Rpt text_file
Specifies the full path and name of the file into which the report is to be generated. The report is also generated on
the screen. The report maintains version information by adding a date to the file name. The date is generated as
follows: at the end of the file name but before the period, in the form _yyyyMMddhhmm. yyyy = year, MM =
month, dd = day, hh = hour, mm = minute.
If you run the utility at 10:23 A.M. on December 1, 1996, and this is the text_file value:

c:\Program Files\Microsoft SQL Server\Mssql\Backup\AdventureWorks2012_maint.rpt

The generated file name is:

c:\Program Files\Microsoft SQL Server\Mssql\Backup\AdventureWorks2012_maint_199612011023.rpt
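The _yyyyMMddhhmm stamping rule can be sketched as follows (an illustration of the documented naming, not sqlmaint's code; a shortened path stands in for the full example path above):

```python
from datetime import datetime

def versioned_report_name(path, when):
    """Insert sqlmaint's _yyyyMMddhhmm stamp at the end of the file
    name but before the extension's period."""
    base, dot, ext = path.rpartition(".")
    stamp = when.strftime("_%Y%m%d%H%M")
    return f"{base}{stamp}{dot}{ext}" if dot else f"{path}{stamp}"

print(versioned_report_name(r"C:\backup\AdventureWorks2012_maint.rpt",
                            datetime(1996, 12, 1, 10, 23)))
# -> C:\backup\AdventureWorks2012_maint_199612011023.rpt
```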

The full Universal Naming Convention (UNC) file name is required for text_file when sqlmaint accesses a remote
server.
-To operator_name
Specifies the operator to whom the generated report is sent through SQL Mail.
-HtmlRpt html_file
Specifies the full path and name of the file into which an HTML report is to be generated. sqlmaint generates the
file name by appending a string of the format _yyyyMMddhhmm to the file name, just as it does for the -Rpt
parameter.
The full UNC file name is required for html_file when sqlmaint accesses a remote server.
-DelHtmlRpt <time_period>
Specifies that any HTML report in the report directory be deleted if the time interval after the creation of the
report file exceeds <time_period>. -DelHtmlRpt looks for files whose name fits the pattern generated from the
html_file parameter. If html_file is c:\Program Files\Microsoft SQL
Server\Mssql\Backup\AdventureWorks2012_maint.htm, then -DelHtmlRpt causes sqlmaint to delete any files
whose names match the pattern C:\Program Files\Microsoft SQL
Server\Mssql\Backup\AdventureWorks2012_maint*.htm and that are older than the specified <time_period>.
-RmUnusedSpace threshold_percent free_percent
Specifies that unused space be removed from the database specified in -D. This option is only useful for databases
that are defined to grow automatically. Threshold_percent specifies in megabytes the size that the database must
reach before sqlmaint attempts to remove unused data space. If the database is smaller than the
threshold_percent, no action is taken. Free_percent specifies how much unused space must remain in the database,
specified as a percentage of the final size of the database. For example, if a 200-MB database contains 100 MB of
data, specifying 10 for free_percent results in the final database size being 110 MB. Note that a database is not
expanded if it is smaller than free_percent plus the amount of data in the database. For example, if a 108-MB
database has 100 MB of data, specifying 10 for free_percent does not expand the database to 110 MB; it remains
at 108 MB.
-CkDB | -CkDBNoIdx
Specifies that a DBCC CHECKDB statement or a DBCC CHECKDB statement with the NOINDEX option be run in
the database specified in -D. For more information, see DBCC CHECKDB.
A warning is written to text_file if the database is in use when sqlmaint runs.
-CkAl | -CkAlNoIdx
Specifies that a DBCC CHECKALLOC statement, or a DBCC CHECKALLOC statement with the NOINDEX option, be run in the database specified in -D. For more information, see DBCC CHECKALLOC (Transact-SQL).
-CkCat
Specifies that a DBCC CHECKCATALOG (Transact-SQL ) statement be run in the database specified in -D. For
more information, see DBCC CHECKCATALOG (Transact-SQL ).
-UpdOptiStats sample_percent
Specifies that the following statement be run on each table in the database:

UPDATE STATISTICS table WITH SAMPLE sample_percent PERCENT;

If the tables contain computed columns, you must also specify the -SupportedComputedColumn argument
when you use -UpdOptiStats.
For more information, see UPDATE STATISTICS (Transact-SQL ).
-RebldIdx free_space
Specifies that indexes on tables in the target database should be rebuilt by using the free_space percent value as
the inverse of the fill factor. For example, if free_space percentage is 30, then the fill factor used is 70. If a
free_space percentage value of 100 is specified, then the indexes are rebuilt with the original fill factor value.
If the indexes are on computed columns, you must also specify the -SupportComputedColumn argument when
you use -RebldIdx.
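The free_space-to-fill-factor inversion is simple enough to state in one line; a minimal sketch (hypothetical helper, returning None for the "use the original fill factor" case):

```python
def fill_factor_from_free_space(free_space):
    """-RebldIdx treats free_space as the inverse of the fill factor.
    A value of 100 means the index is rebuilt with its original fill factor."""
    if free_space == 100:
        return None          # keep the original fill factor value
    return 100 - free_space

print(fill_factor_from_free_space(30))  # 70, matching the example above
```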
-SupportComputedColumn
Must be specified to run DBCC maintenance commands with sqlmaint on computed columns.
-WriteHistory
Specifies that an entry be made in msdb.dbo.sysdbmaintplan_history for each maintenance action performed by
sqlmaint. If -PlanName or -PlanID is specified, the entries in sysdbmaintplan_history use the ID of the specified
plan. If -D is specified, the entries in sysdbmaintplan_history are made with zeroes for the plan ID.
-BkUpDB [ backup_path] | -BkUpLog [ backup_path ]
Specifies a backup action. -BkUpDb backs up the entire database. -BkUpLog backs up only the transaction log.
backup_path specifies the directory for the backup. backup_path is not needed if -UseDefDir is also specified, and
is overridden by -UseDefDir if both are specified. The backup can be placed in a directory or a tape device address
(for example, \\.\TAPE0). The file name for a database backup is generated automatically as follows:

dbname_db_yyyyMMddhhmm.BAK

where
dbname is the name of the database being backed up.
yyyyMMddhhmm is the time of the backup operation with yyyy = year, MM = month, dd = day, hh = hour,
and mm = minute.
The file name for a transaction backup is generated automatically with a similar format:

dbname_log_yyyymmddhhmm.BAK

If you use the -BkUpDB parameter, you must also specify the media by using the -BkUpMedia parameter.
-BkUpMedia
Specifies the media type of the backup, either DISK or TAPE.
DISK
Specifies that the backup medium is disk.
-DelBkUps <time_period>
For disk backups, specifies that any backup file in the backup directory be deleted if the time interval after the
creation of the backup exceeds the <time_period>.
-CrBkSubDir
For disk backups, specifies that a subdirectory be created in the [backup_path] directory or in the default backup
directory if -UseDefDir is also specified. The name of the subdirectory is generated from the database name
specified in -D. -CrBkSubDir offers an easy way to put all the backups for different databases into separate
subdirectories without having to change the backup_path parameter.
-UseDefDir
For disk backups, specifies that the backup file be created in the default backup directory. UseDefDir overrides
backup_path if both are specified. With a default Microsoft SQL Server setup, the default backup directory is
C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup.
TAPE
Specifies that the backup medium is tape.
-BkUpOnlyIfClean
Specifies that the backup occur only if any specified -Ck checks did not find problems with the data. Maintenance
actions run in the same sequence as they appear on the command line. If you are also going to specify
-BkUpOnlyIfClean, specify the -CkDB, -CkDBNoIdx, -CkAl, -CkAlNoIdx, -CkTxtAl, or -CkCat parameters before
the -BkUpDB/-BkUpLog parameter(s); otherwise, the backup occurs whether or not the checks report problems.
-VrfyBackup
Specifies that RESTORE VERIFYONLY be run on the backup when it completes.
number[minutes | hours | days | weeks | months]
Specifies the time interval used to determine if a report or backup file is old enough to be deleted. number is an
integer followed (without a space) by a unit of time. Valid examples:
12weeks
3months
15days
If only number is specified, the default date part is weeks.
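The time-interval format above can be sketched as a small parser in Python (an illustrative, hypothetical helper showing the number-plus-unit form and the weeks default):

```python
import re

UNITS = {"minutes", "hours", "days", "weeks", "months"}

def parse_time_period(value):
    """Parse sqlmaint's number[unit] time interval; the default unit is weeks."""
    m = re.fullmatch(r"(\d+)([a-z]*)", value.strip().lower())
    if not m:
        raise ValueError(f"bad time period: {value!r}")
    number, unit = int(m.group(1)), m.group(2) or "weeks"
    if unit not in UNITS:
        raise ValueError(f"unknown unit: {unit!r}")
    return number, unit

print(parse_time_period("12weeks"))  # (12, 'weeks')
print(parse_time_period("15"))       # (15, 'weeks') -- default date part
```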

Remarks
The sqlmaint utility performs maintenance operations on one or more databases. If -D is specified, the operations
specified in the remaining switches are performed only on the specified database. If -PlanName or -PlanID are
specified, the only information sqlmaint retrieves from the specified maintenance plan is the list of databases in
the plan. All operations specified in the remaining sqlmaint parameters are applied against each database in the
list obtained from the plan. The sqlmaint utility does not apply any of the maintenance activities defined in the
plan itself.
The sqlmaint utility returns 0 if it runs successfully or 1 if it fails. Failure is reported:
If any of the maintenance actions fail.
If -CkDB, -CkDBNoIdx, -CkAl, -CkAlNoIdx, -CkTxtAl, or -CkCat checks find problems with the data.
If a general failure is encountered.

Permissions
The sqlmaint utility can be executed by any Windows user with Read and Execute permission on sqlmaint.exe ,
which by default is stored in the x:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER1\MSSQL\Binn folder.
Additionally, the SQL Server login specified with -login_ID must have the SQL Server permissions required to
perform the specified action. If the connection to SQL Server uses Windows Authentication, the SQL Server login
mapped to the authenticated Windows user must have the SQL Server permissions required to perform the
specified action.
For example, using the -BkUpDB requires permission to execute the BACKUP statement. And using the -
UpdOptiStats argument requires permission to execute the UPDATE STATISTICS statement. For more
information, see the "Permissions" sections of the corresponding topics in Books Online.

Examples
A. Performing DBCC checks on a database

sqlmaint -S MyServer -D AdventureWorks2012 -CkDB -CkAl -CkCat -Rpt C:\MyReports\AdvWks_chk.rpt

B. Updating statistics using a 15% sample in all databases in a plan, and shrinking any database that has
reached 110 MB so that it has only 10% free space

sqlmaint -S MyServer -PlanName MyUserDBPlan -UpdOptiStats 15 -RmUnusedSpace 110 10

C. Backing up all the databases in a plan to their individual subdirectories in the default x:\Program
Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup directory, and deleting any backups
older than two weeks

sqlmaint -S MyServer -PlanName MyUserDBPlan -BkUpDB -BkUpMedia DISK -UseDefDir -CrBkSubDir -DelBkUps 2weeks

D. Backing up a database to the default x:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup directory

sqlmaint -S MyServer -BkUpDB -BkUpMedia DISK -UseDefDir

See Also
BACKUP (Transact-SQL )
UPDATE STATISTICS (Transact-SQL )
sqllogship Application
5/3/2018 • 4 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqllogship application performs a backup, copy, or restore operation and associated clean-up tasks for a log
shipping configuration. The operation is performed on a specific instance of Microsoft SQL Server for a specific
database.
For the syntax conventions, see Command Prompt Utility Reference (Database Engine).

Syntax
sqllogship -server instance_name { -backup primary_id | -copy secondary_id | -restore secondary_id } [ -verboselevel level ] [ -logintimeout timeout_value ] [ -querytimeout timeout_value ]

Arguments
-server instance_name
Specifies the instance of SQL Server where the operation will run. The server instance to specify depends on
which log-shipping operation is being specified. For -backup, instance_name must be the name of the primary
server in a log shipping configuration. For -copy or -restore, instance_name must be the name of a secondary
server in a log shipping configuration.
-backup primary_id
Performs a backup operation for the primary database whose primary ID is specified by primary_id. You can
obtain this ID by selecting it from the log_shipping_primary_databases system table or by using the
sp_help_log_shipping_primary_database stored procedure.
The backup operation creates the log backup in the backup directory. The sqllogship application then cleans out
any old backup files, based on the file retention period. Next, the application logs history for the backup operation
on the primary server and the monitor server. Finally, the application runs sp_cleanup_log_shipping_history, which
cleans out old history information, based on the retention period.
-copy secondary_id
Performs a copy operation to copy backups from the specified secondary server for the secondary database, or
databases, whose secondary ID is specified by secondary_id. You can obtain this ID by selecting it from the
log_shipping_secondary system table or by using the sp_help_log_shipping_secondary_database stored procedure.
The operation copies the backup files from the backup directory to the destination directory. The sqllogship
application then logs the history for the copy operation on the secondary server and the monitor server.
-restore secondary_id
Performs a restore operation on the specified secondary server for the secondary database, or databases, whose
secondary ID is specified by secondary_id. You can obtain this ID by using the
sp_help_log_shipping_secondary_database stored procedure.
Any backup files in the destination directory that were created after the most recent restore point are restored to
the secondary database, or databases. The sqllogship application then cleans out any old backup files, based on
the file retention period. Next, the application logs history for the restore operation on the secondary server and
the monitor server. Finally, the application runs sp_cleanup_log_shipping_history, which cleans out old history
information, based on the retention period.
-verboselevel level
Specifies the level of messages added to the log shipping history. level is one of the following integers:

LEVEL DESCRIPTION

0 Output no tracing and debugging messages.

1 Output error-handling messages.

2 Output warnings and error-handling messages.

3 Output informational messages, warnings, and error-handling messages. This is the default value.

4 Output all debugging and tracing messages.

-logintimeout timeout_value
Specifies the amount of time allotted for attempting to log in to the server instance before the attempt times out.
The default is 15 seconds. timeout_value is int.
-querytimeout timeout_value
Specifies the amount of time allotted for starting the specified operation before the attempt times out. The default
is no timeout period. timeout_value is int.

Remarks
We recommend that you use the backup, copy, and restore jobs to perform the backup, copy and restore when
possible. To start these jobs from a batch operation or other application, call the sp_start_job stored procedure.
The log shipping history created by sqllogship is interspersed with the history created by log shipping backup,
copy, and restore jobs. If you plan to use sqllogship repeatedly to perform backup, copy, or restore operations for
a log shipping configuration, consider disabling the corresponding log shipping job or jobs. For more information,
see Disable or Enable a Job.
The sqllogship application, SqlLogShip.exe, is installed in the x:\Program Files\Microsoft SQL
Server\130\Tools\Binn directory.

Permissions
sqllogship uses Windows Authentication. The Windows Authentication account where the command is run
requires Windows directory access and SQL Server permissions. The requirement depends on whether the
sqllogship command specifies the -backup, -copy, or -restore option.

OPTION DIRECTORY ACCESS PERMISSIONS

-backup Requires read/write access to the backup directory. Requires the same permissions as the BACKUP statement. For more information, see BACKUP (Transact-SQL).

-copy Requires read access to the backup directory and write access to the copy directory. Requires the same permissions as the sp_help_log_shipping_secondary_database stored procedure.

-restore Requires read/write access to the copy directory. Requires the same permissions as the RESTORE statement. For more information, see RESTORE (Transact-SQL).

NOTE
To find out the paths of the backup and copy directories, you can run the sp_help_log_shipping_secondary_database
stored procedure or view the log_shipping_secondary table in msdb. The paths of the backup directory and destination
directory are in the backup_source_directory and backup_destination_directory columns, respectively.

See Also
About Log Shipping (SQL Server)
log_shipping_primary_databases (Transact-SQL )
log_shipping_secondary (Transact-SQL )
sp_cleanup_log_shipping_history (Transact-SQL )
sp_help_log_shipping_primary_database (Transact-SQL )
sp_help_log_shipping_secondary_database (Transact-SQL )
sp_start_job (Transact-SQL )
sqlps Utility
5/3/2018 • 3 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlps utility starts a Windows PowerShell session with the SQL Server PowerShell provider and cmdlets
loaded and registered. You can enter PowerShell commands or scripts that use the SQL Server PowerShell
components to work with instances of SQL Server and their objects.

IMPORTANT
This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. Use the sqlps PowerShell
module instead. For more information about the sqlps module, see Import the SQLPS Module.

Syntax
sqlps
[ [ [ -NoLogo ][ -NoExit ][ -NoProfile ]
[ -OutPutFormat { Text | XML } ] [ -InPutFormat { Text | XML } ]
]
[ -Command { -
| script_block [ -args argument_array ]
| string [ command_parameters ]
}
]
]
[ -? | -Help ]

Arguments
-NoLogo
Specifies that the sqlps utility hide the copyright banner when it starts.
-NoExit
Specifies that the sqlps utility continue running after the startup commands have completed.
-NoProfile
Specifies that the sqlps utility not load a user profile. User profiles record commonly used aliases, functions, and
variables for use across PowerShell sessions.
-OutPutFormat { Text | XML }
Specifies that the sqlps utility output be formatted as either text strings (Text) or in a serialized CLIXML format
(XML ).
-InPutFormat { Text | XML }
Specifies that input to the sqlps utility is formatted as either text strings (Text) or in a serialized CLIXML format
(XML ).
-Command
Specifies the command for the sqlps utility to run. The sqlps utility runs the command and then exits, unless
-NoExit is also specified. Do not specify any other switches after -Command; they will be read as command
parameters.

-Command - specifies that the sqlps utility read the input from the standard input.
script_block [ -args argument_array ]
Specifies a block of PowerShell commands to run; the block must be enclosed in braces: {}. script_block can only be
specified when the sqlps utility is called from either PowerShell or another sqlps utility session. The
argument_array is an array of PowerShell variables containing the arguments for the PowerShell commands in
the script_block.
string [ command_parameters ]
Specifies a string that contains the PowerShell commands to be run. Use the format "& {command}". The
quotation marks indicate a string, and the invoke operator (&) causes the sqlps utility to run the command.
[ -? | -Help ]
Shows the syntax summary of the sqlps utility options.

Remarks
The sqlps utility starts the PowerShell environment (PowerShell.exe) and loads the SQL Server PowerShell
module. The module, also named sqlps, loads and registers these SQL Server PowerShell snap-ins:
Microsoft.SqlServer.Management.PSProvider.dll
Implements the SQL Server PowerShell provider and associated cmdlets such as Encode-SqlName and
Decode-SqlName.
Microsoft.SqlServer.Management.PSSnapin.dll
Implements the Invoke-Sqlcmd and Invoke-PolicyEvaluation cmdlets.
You can use the sqlps utility to do the following:
Interactively run PowerShell commands.
Run PowerShell script files.
Run SQL Server cmdlets.
Use the SQL Server provider paths to navigate through the hierarchy of SQL Server objects.
By default, the sqlps utility runs with the scripting execution policy set to Restricted. This prevents running
any PowerShell scripts. You can use the Set-ExecutionPolicy cmdlet to enable running signed scripts, or
any scripts. Only run scripts from trusted sources, and secure all input and output files by using the
appropriate NTFS permissions. For more information about enabling PowerShell scripts, see Running
Windows PowerShell Scripts.
The version of the sqlps utility in SQL Server 2008 and SQL Server 2008 R2 was implemented as a
Windows PowerShell 1.0 mini-shell. Mini-shells have certain restrictions, such as not allowing users to load
snap-ins other than those loaded by the mini-shell. These restrictions do not apply to the SQL Server 2012
(11.x) and higher versions of the utility, which have been changed to use the sqlps module.

Examples
A. Run the sqlps utility in default, interactive mode without the copyright banner

sqlps -NoLogo
B. Run a SQL Server PowerShell script from the command prompt

sqlps -Command "&{.\MyFolder.MyScript.ps1}"

C. Run a SQL Server PowerShell script from the command prompt, and keep running after the script
completes

sqlps -NoExit -Command "&{.\MyFolder.MyScript.ps1}"

See Also
Enable or Disable a Server Network Protocol
SQL Server PowerShell
sqlservr Application
5/3/2018 • 5 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlservr application starts, stops, pauses, and continues an instance of Microsoft SQL Server from a
command prompt.

Syntax
sqlservr [-sinstance_name] [-c] [-dmaster_path] [-f]
[-eerror_log_path] [-lmaster_log_path] [-m]
[-n] [-Ttrace#] [-v] [-x] [-gnumber]

Arguments
-s instance_name
Specifies the instance of SQL Server to connect to. If no named instance is specified, sqlservr starts the default
instance of SQL Server.

IMPORTANT
When starting an instance of SQL Server, you must use the sqlservr application in the appropriate directory for that
instance. For the default instance, run sqlservr from the \MSSQL\Binn directory. For a named instance, run sqlservr from
the \MSSQL$instance_name\Binn directory.

-c
Indicates that an instance of SQL Server is started independently of the Windows Service Control Manager. This
option is used when starting SQL Server from a command prompt, to shorten the amount of time it takes for SQL
Server to start.

NOTE
When you use this option, you cannot stop SQL Server by using SQL Server Service Manager or the net stop command, and
if you log off the computer, SQL Server is stopped.

-d master_path
Indicates the fully qualified path for the master database file. There are no spaces between -d and master_path. If
you do not provide this option, the existing registry parameters are used.
-f
Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value
(for example, over-committing memory) has prevented the server from starting.
-e error_log_path
Indicates the fully qualified path for the error log file. If not specified, the default location is <Drive>:\Program
Files\Microsoft SQL Server\MSSQL\Log\Errorlog for the default instance and <Drive>:\Program Files\Microsoft
SQL Server\MSSQL$instance_name\Log\Errorlog for a named instance. There are no spaces between -e and
error_log_path.
-l master_log_path
Indicates the fully qualified path for the master database transaction log file. There are no spaces between -l and
master_log_path.
-m
Indicates to start an instance of SQL Server in single-user mode. Only a single user can connect when SQL Server
is started in single-user mode. The CHECKPOINT mechanism, which guarantees that completed transactions are
regularly written from the disk cache to the database device, is not started. (Typically, this option is used if you
experience problems with system databases that require repair.) Enables the sp_configure allow updates option.
By default, allow updates is disabled.
-n
Allows you to start a named instance of SQL Server. Without the -s parameter set, the default instance attempts to
start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting
sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the
\mssql$Instance1\binn directory to start sqlservr.exe -s instance1. If you start an instance of SQL Server with
the -n option, it is advisable to use the -e option too, or SQL Server events are not logged.
-T trace#
Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect. Trace flags
are used to start the server with nonstandard behavior. For more information, see Trace Flags (Transact-SQL ).

IMPORTANT
When specifying a trace flag, use -T to pass the trace flag number. A lowercase t (-t) is accepted by SQL Server; however, -t
sets other internal trace flags required by SQL Server support engineers.

-v
Displays the server version number.
-x
Disables the keeping of CPU time and cache-hit ratio statistics. Allows maximum performance.
-g memory_to_reserve
Specifies an integer number of megabytes (MB ) of memory that SQL Server leaves available for memory
allocations within the SQL Server process, but outside the SQL Server memory pool. The memory outside of the
memory pool is the area used by SQL Server for loading items such as extended procedure .dll files, the OLE
DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements.
The default is 256 MB.
Use of this option may help tune memory allocation, but only when physical memory exceeds the configured limit
set by the operating system on virtual memory available to applications. Use of this option may be appropriate in
large memory configurations in which the memory usage requirements of SQL Server are atypical and the virtual
address space of the SQL Server process is totally in use. Incorrect use of this option can lead to conditions under
which an instance of SQL Server may not start or may encounter run-time errors.
Use the default for the -g parameter unless you see any of the following warnings in the SQL Server error log:
"Failed Virtual Allocate Bytes: FAIL_VIRTUAL_RESERVE <size>"
"Failed Virtual Allocate Bytes: FAIL_VIRTUAL_COMMIT <size>"
These messages may indicate that SQL Server is trying to free parts of the SQL Server memory pool in
order to find space for items such as extended stored procedure .dll files or automation objects. In this case,
consider increasing the amount of memory reserved by the -g switch.
Using a value lower than the default increases the amount of memory available to the buffer pool and
thread stacks; this may, in turn, provide some performance benefit to memory-intensive workloads in
systems that do not use many extended stored procedures, distributed queries, or automation objects.

Remarks
In most cases, the sqlservr.exe program is only used for troubleshooting or major maintenance. When SQL Server
is started from the command prompt with sqlservr.exe, SQL Server does not start as a service, so you cannot stop
SQL Server using net commands. Users can connect to SQL Server, but the SQL Server tools still report the status
of the service: SQL Server Configuration Manager correctly indicates that the service is stopped, and SQL Server
Management Studio can connect to the server but also indicates that the service is stopped.

Compatibility Support
The -h parameter is not supported in SQL Server 2017. This parameter was used in earlier versions of 32-bit
instances of SQL Server to reserve virtual memory address space for Hot Add memory metadata when AWE is
enabled. For more information, see Discontinued SQL Server Features in SQL Server 2016.

See Also
Database Engine Service Startup Options
tablediff Utility
5/3/2018 • 5 minutes to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The tablediff utility is used to compare the data in two tables for non-convergence, and is particularly useful for
troubleshooting non-convergence in a replication topology. This utility can be used from the command prompt or
in a batch file to perform the following tasks:
A row by row comparison between a source table in an instance of Microsoft SQL Server acting as a
replication Publisher and the destination table at one or more instances of SQL Server acting as replication
Subscribers.
Perform a fast comparison by only comparing row counts and schema.
Perform column-level comparisons.
Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and
destination tables into convergence.
Log results to an output file or into a table in the destination database.

Syntax
tablediff
[ -? ] |
{
-sourceserver source_server_name[\instance_name]
-sourcedatabase source_database
-sourcetable source_table_name
[ -sourceschema source_schema_name ]
[ -sourcepassword source_password ]
[ -sourceuser source_login ]
[ -sourcelocked ]
-destinationserver destination_server_name[\instance_name]
-destinationdatabase subscription_database
-destinationtable destination_table
[ -destinationschema destination_schema_name ]
[ -destinationpassword destination_password ]
[ -destinationuser destination_login ]
[ -destinationlocked ]
[ -b large_object_bytes ]
[ -bf number_of_statements ]
[ -c ]
[ -dt ]
[ -et table_name ]
[ -f [ file_name ] ]
[ -o output_file_name ]
[ -q ]
[ -rc number_of_retries ]
[ -ri retry_interval ]
[ -strict ]
[ -t connection_timeouts ]
}
Arguments
[ -? ]
Returns the list of supported parameters.
-sourceserver source_server_name[\instance_name]
Is the name of the source server. Specify source_server_name for the default instance of SQL Server. Specify
source_server_name\instance_name for a named instance of SQL Server.
-sourcedatabase source_database
Is the name of the source database.
-sourcetable source_table_name
Is the name of the source table being checked.
-sourceschema source_schema_name
The schema owner of the source table. By default, the table owner is assumed to be dbo.
-sourcepassword source_password
Is the password for the login used to connect to the source server using SQL Server Authentication.

IMPORTANT
When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file
to prevent unauthorized access.

-sourceuser source_login
Is the login used to connect to the source server using SQL Server Authentication. If source_login is not supplied,
then Windows Authentication is used when connecting to the source server. When possible, use Windows
Authentication.
-sourcelocked
The source table is locked during the comparison using the TABLOCK and HOLDLOCK table hints.
-destinationserver destination_server_name[\instance_name]
Is the name of the destination server. Specify destination_server_name for the default instance of SQL Server.
Specify destination_server_name\instance_name for a named instance of SQL Server.
-destinationdatabase subscription_database
Is the name of the destination database.
-destinationtable destination_table
Is the name of the destination table.
-destinationschema destination_schema_name
The schema owner of the destination table. By default, the table owner is assumed to be dbo.
-destinationpassword destination_password
Is the password for the login used to connect to the destination server using SQL Server Authentication.

IMPORTANT
When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file
to prevent unauthorized access.

-destinationuser destination_login
Is the login used to connect to the destination server using SQL Server Authentication. If destination_login is not
supplied, then Windows Authentication is used when connecting to the server. When possible, use Windows
Authentication.
-destinationlocked
The destination table is locked during the comparison using the TABLOCK and HOLDLOCK table hints.
-b large_object_bytes
Is the number of bytes to compare for large object data type columns, which includes: text, ntext, image,
varchar(max), nvarchar(max) and varbinary(max). large_object_bytes defaults to the size of the column. Any
data above large_object_bytes will not be compared.
-bf number_of_statements
Is the number of Transact-SQL statements to write to the current Transact-SQL script file when the -f option is
used. When the number of Transact-SQL statements exceeds number_of_statements, a new Transact-SQL script
file is created.
-c
Compare column-level differences.
-dt
Drop the result table specified by table_name, if the table already exists.
-et table_name
Specifies the name of the result table to create. If this table already exists, -dt must be used or the operation will
fail.
-f [ file_name ]
Generates a Transact-SQL script to bring the table at the destination server into convergence with the table at the
source server. You can optionally specify a name and path for the generated Transact-SQL script file. If file_name is
not specified, the Transact-SQL script file is generated in the directory where the utility runs.
-o output_file_name
Is the full name and path of the output file.
-q
Perform a fast comparison by only comparing row counts and schema.
-rc number_of_retries
Number of times that the utility retries a failed operation.
-ri retry_interval
Interval, in seconds, to wait between retries.
-strict
Source and destination schema are strictly compared.
-t connection_timeouts
Sets the connection timeout period, in seconds, for connections to the source server and destination server.

Return Value
VALUE DESCRIPTION

0 Success

1 Critical error

2 Table differences
Remarks
The tablediff utility cannot be used with non-SQL Server servers.
Tables with sql_variant data type columns are not supported.
By default, the tablediff utility supports the following data type mappings between source and destination
columns.

SOURCE DATA TYPE DESTINATION DATA TYPE

tinyint smallint, int, or bigint

smallint int or bigint

int bigint

timestamp varbinary

varchar(max) text

nvarchar(max) ntext

varbinary(max) image

text varchar(max)

ntext nvarchar(max)

image varbinary(max)

Use the -strict option to disallow these mappings and perform a strict validation.
The source table in the comparison must contain at least one primary key, identity, or ROWGUID column. When
you use the -strict option, the destination table must also have a primary key, identity, or ROWGUID column.
The Transact-SQL script generated to bring the destination table into convergence does not include the following
data types:
varchar(max)
nvarchar(max)
varbinary(max)
timestamp
xml
text
ntext
image

Permissions
To compare tables, you need SELECT ALL permissions on the table objects being compared.
To use the -et option, you must be a member of the db_owner fixed database role, or at least have CREATE TABLE
permission in the subscription database and ALTER permission on the destination owner schema at the
destination server.
To use the -dt option, you must be a member of the db_owner fixed database role, or at least have ALTER
permission on the destination owner schema at the destination server.
To use the -o or -f options, you must have write permissions to the specified file directory location.

See Also
Compare Replicated Tables for Differences (Replication Programming)
Download and install sqlpackage
6/28/2018 • 2 minutes to read • Edit Online

sqlpackage runs on Windows, macOS, and Linux.


Download and install the latest Windows (.NET Framework) release, or the macOS and Linux previews:

PLATFORM DOWNLOAD RELEASE DATE VERSION BUILD

Windows Installer June 22, 2018 17.8 14.0.4079.2

macOS (preview) .zip May 9, 2018 0.0.1 15.0.4057.1

Linux (preview) .zip May 9, 2018 0.0.1 15.0.4057.1

For details about the latest release, see the release notes.

Get sqlpackage for Windows


This release of sqlpackage includes a standard Windows installer experience, and a .zip:
1. Download and run the DacFramework.msi installer for Windows.
2. Open a new Command Prompt window, and run sqlpackage.exe
sqlpackage is installed to the C:\Program Files\Microsoft SQL Server\140\DAC\bin folder

Get sqlpackage (preview) for macOS


1. Download sqlpackage for macOS.
2. To extract the file and launch sqlpackage, open a new Terminal window and type the following commands:
.zip Installation:

mv ~/Downloads/sqlpackage-linux-<version string> ~/sqlpackage
echo 'export PATH="$PATH:~/sqlpackage"' >> ~/.bash_profile
source ~/.bash_profile
sqlpackage

Get sqlpackage (preview) for Linux


1. Download sqlpackage for Linux by using one of the installers or the tar.gz archive:
2. To extract the file and launch sqlpackage, open a new Terminal window and type the following commands:
.zip Installation:

cd ~
mkdir sqlpackage
unzip ~/Downloads/sqlpackage-linux-<version string>.zip -d ~/sqlpackage
echo 'export PATH="$PATH:~/sqlpackage"' >> ~/.bashrc
source ~/.bashrc
sqlpackage
NOTE
On Debian, Redhat, and Ubuntu, you may have missing dependencies. Use the following commands to install these
dependencies depending on your version of Linux:

Debian:

sudo apt-get install libunwind8

Redhat:

yum install libunwind
yum install libicu

Ubuntu:

sudo apt-get install libunwind8

# install the libicu library based on the Ubuntu version
sudo apt-get install libicu52 # for 14.x
sudo apt-get install libicu55 # for 16.x
sudo apt-get install libicu57 # for 17.x
sudo apt-get install libicu60 # for 18.x
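
The version-to-package mapping in the comments above can be sketched as a small shell case statement; UBUNTU_VER is hard-coded here for illustration (on a real system you would read VERSION_ID from /etc/os-release):

```shell
# Pick the libicu package for a given Ubuntu release, per the list above.
UBUNTU_VER="16.04"

case "$UBUNTU_VER" in
  14.*) PKG="libicu52" ;;
  16.*) PKG="libicu55" ;;
  17.*) PKG="libicu57" ;;
  18.*) PKG="libicu60" ;;
  *)    PKG="" ;;   # unknown release: check which libicu your distro ships
esac

echo "sudo apt-get install $PKG"
```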

Uninstall sqlpackage (preview)


If you installed sqlpackage using the Windows installer, then uninstall the same way you remove any Windows
application.
If you installed sqlpackage with a .zip or other archive, then simply delete the files.

Supported Operating Systems


sqlpackage runs on Windows, macOS, and Linux, and is supported on the following platforms:
Windows
Windows 10
Windows 8.1
Windows 8
Windows 7 SP1
Windows Server 2016
Windows Server 2012 R2
Windows Server 2012
Windows Server 2008 R2
macOS
macOS 10.13 High Sierra
macOS 10.12 Sierra
Linux (x64)
Red Hat Enterprise Linux 7.4
Red Hat Enterprise Linux 7.3
SUSE Linux Enterprise Server v12 SP2
Ubuntu 16.04

Next Steps
Learn more about sqlpackage
Microsoft Privacy Statement
sqlpackage release notes
6/28/2018 • 2 minutes to read • Edit Online

Download the latest version

sqlpackage 17.8
Release date: June 22, 2018
Build: 14.0.4079.2
The release includes the following fixes:
Improved error messages for connection failures, including the SqlClient exception message.
Added MaxParallelism command-line parameter to specify the degree of parallelism for database operations.
Support index compression on single partition indexes for import/export.
Fixed a reverse engineering issue for XML column sets with SQL 2017 and later.
Fixed an issue where scripting the database compatibility level 140 was ignored for Azure SQL Database.
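
As a sketch of the new MaxParallelism parameter, the line below assembles (but does not run) an Export asking for eight parallel workers; the server and database names are placeholders, and the value you choose should match your workload:

```shell
# Hypothetical Export invocation using the MaxParallelism parameter added
# in 17.8; the command is printed rather than executed, since running it
# needs sqlpackage and a reachable server.
CMD="sqlpackage /Action:Export /SourceServerName:MyServer"
CMD="$CMD /SourceDatabaseName:MyDb /TargetFile:./MyDb.bacpac /MaxParallelism:8"
echo "$CMD"
```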

sqlpackage 17.4.1
Release date: January 25, 2018
Build: 14.0.3917.1
The release includes the following fixes:
When importing an Azure SQL Database .bacpac to an on-premises instance, fixed errors due to 'Database
master keys without password are not supported in this version of SQL Server'.
Database catalog collation support.
Fixed an unresolved pseudo column error for graph tables.
Added ThreadMaxStackSize command-line parameter to parse TSQL with a large number of nested
statements.
Fixed using the SchemaCompareDataModel with SQL authentication to compare schemas.

sqlpackage 17.4.0
Release date: December 12, 2017
Build: 14.0.3881.1
The release includes the following fixes:
Do not block when encountering a database compatibility level not understood. Instead, the latest Azure SQL
Database or on-premises platform will be assumed.
Added support for 'temporal retention policy' on SQL2017+ and Azure SQL Database.
Added /DiagnosticsFile:"C:\Temp\sqlpackage.log" command-line parameter to specify a file path to save
diagnostic information.
Added /Diagnostics command-line parameter to log diagnostic information to the console.

sqlpackage on macOS and Linux 0.0.1 (preview)


Release date: May 9, 2018
Build: 15.0.4057.1
This release contains the cross-platform preview build of sqlpackage that targets .NET Core 2.0, and can run on
macOS and Linux.
This release is an early preview with following known issues:
The /p:CommandTimeout parameter is hard coded to 120.
Build and deployment contributors aren't supported.
Will be fixed after moving to .NET Core 2.1 where System.ComponentModel.Composition.dll is
supported.
Need to handle case-sensitive paths.
SQL CLR UDT types aren't supported, including SQL Server CLR UDT Types: SqlGeography, SqlGeometry, &
SqlHierarchyId.
Older .dacpac and .bacpac files that use json data serialization aren't supported.
Referenced .dacpacs (for example master.dacpac) may not resolve due to issues with case-sensitive file systems.
A workaround is to capitalize the name of the reference file (for example, MASTER.DACPAC).
SqlPackage.exe
7/13/2018 • 62 minutes to read • Edit Online

SqlPackage.exe is a command-line utility that automates the following database development tasks:
Extract: Creates a database snapshot (.dacpac) file from a live SQL Server or Azure SQL Database.
Publish: Incrementally updates a database schema to match the schema of a source .dacpac file. If the
database does not exist on the server, the publish operation creates it. Otherwise, an existing database is
updated.
Export: Exports a live database - including database schema and user data - from SQL Server or Azure SQL
Database to a BACPAC package (.bacpac file).
Import: Imports the schema and table data from a BACPAC package into a new user database in an
instance of SQL Server or Azure SQL Database.
DeployReport: Creates an XML report of the changes that would be made by a publish action.
DriftReport: Creates an XML report of the changes that have been made to a registered database since it
was last registered.
Script: Creates a Transact-SQL incremental update script that updates the schema of a target to match the
schema of a source.
The SqlPackage.exe command line allows you to specify these actions along with action-specific parameters and
properties.
Download the latest version. For details about the latest release, see the release notes.

Command-Line Syntax
SqlPackage.exe initiates the actions specified on the command line, together with any action-specific parameters,
properties, and SQLCMD variables.

SqlPackage {parameters}{properties}{SQLCMD Variables}
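
Two illustrative invocations following this shape, one Extract and one Publish, are sketched below; the server and database names (MyServer, MyDb) are placeholders, and the commands are only assembled and printed here since running them needs an installed sqlpackage and a reachable server:

```shell
# Snapshot a live database into a .dacpac, then publish that .dacpac to a
# target database. Both lines are echoed, not executed.
EXTRACT="sqlpackage /Action:Extract /SourceServerName:MyServer"
EXTRACT="$EXTRACT /SourceDatabaseName:MyDb /TargetFile:./MyDb.dacpac"

PUBLISH="sqlpackage /Action:Publish /SourceFile:./MyDb.dacpac"
PUBLISH="$PUBLISH /TargetServerName:MyServer /TargetDatabaseName:MyDb"

printf '%s\n%s\n' "$EXTRACT" "$PUBLISH"
```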

Help for the Extract action


PARAMETER SHORT FORM VALUE DESCRIPTION

/Action: /a Extract Specifies the action to be


performed.

/Diagnostics: /d {True|False} Specifies whether diagnostic


logging is output to the
console. Defaults to False.

/DiagnosticsFile: /df {string} Specifies a file to store


diagnostic logs.
/OverwriteFiles: /of {True|False} Specifies if sqlpackage.exe


should overwrite existing
files. Specifying false causes
sqlpackage.exe to abort
action if an existing file is
encountered. Default value is
True.

/Properties: /p {PropertyName}={Value} Specifies a name value pair


for an action-specific
property; {PropertyName}=
{Value}. Refer to the help for
a specific action to see that
action's property names.
Example: sqlpackage.exe
/Action:Publish /?.

/Quiet: /q {True|False} Specifies whether detailed


feedback is suppressed.
Defaults to False.

/SourceConnectionString: /scs {string} Specifies a valid SQL


Server/Azure connection
string to the source
database. If this parameter is
specified, it shall be used
exclusively of all other source
parameters.

/SourceDatabaseName: /sdn {string} Defines the name of the


source database.

/SourceEncryptConnection: /sec {True|False} Specifies if SQL encryption
should be used for the source database connection.

/SourcePassword: /sp {string} For SQL Server Auth


scenarios, defines the
password to use to access
the source database.

/SourceServerName: /ssn {string} Defines the name of the


server hosting the source
database.

/SourceTimeout: /st {int} Specifies the timeout for


establishing a connection to
the source database in
seconds.

/SourceTrustServerCertificate: /stsc {True|False} Specifies whether to use SSL
to encrypt the source database connection and bypass walking the
certificate chain to validate trust.

/SourceUser: /su {string} For SQL Server Auth


scenarios, defines the SQL
Server user to use to access
the source database.

/TargetFile: /tf {string} Specifies a target file (that is,


a .dacpac file) to be used as
the target of action instead
of a database. If this
parameter is used, no other
target parameter shall be
valid. This parameter shall be
invalid for actions that only
support database targets.

/TenantId: /tid {string} Represents the Azure AD


tenant ID or domain name.
This option is required to
support guest or imported
Azure AD users as well as
Microsoft accounts such as
outlook.com, hotmail.com,
or live.com. If this parameter
is omitted, the default
tenant ID for Azure AD will
be used, assuming that the
authenticated user is a
native user for this AD.
However, in this case any
guest or imported users
and/or Microsoft accounts
hosted in this Azure AD are
not supported and the
operation will fail.
For more information about
Active Directory Universal
Authentication, see Universal
Authentication with SQL
Database and SQL Data
Warehouse (SSMS support
for MFA).

/UniversalAuthentication: /ua {True|False} Specifies if Universal


Authentication should be
used. When set to True, the
interactive authentication
protocol is activated
supporting MFA. This option
can also be used for Azure
AD authentication without
MFA, using an interactive
protocol requiring the user
to enter their username and
password or integrated
authentication (Windows
credentials). When
/UniversalAuthentication is
set to True, no Azure AD
authentication can be
specified in
SourceConnectionString
(/scs). When
/UniversalAuthentication is
set to False, Azure AD
authentication must be
specified in
SourceConnectionString
(/scs).
For more information about
Active Directory Universal
Authentication, see Universal
Authentication with SQL
Database and SQL Data
Warehouse (SSMS support
for MFA).

Properties specific to the Extract action


PROPERTY VALUE DESCRIPTION

/p: CommandTimeout=(INT32 '60') Specifies the command timeout in


seconds when executing queries against
SQL Server.

/p: DacApplicationDescription=(STRING) Defines the Application description to


be stored in the DACPAC metadata.

/p: DacApplicationName=(STRING) Defined the Application name to be


stored in the DACPAC metadata. The
default value is the database name.

/p: DacMajorVersion=(INT32 '1') Defines the major version to be stored


in the DACPAC metadata.

/p: DacMinorVersion=(INT32 '0') Defines the minor version to be stored


in the DACPAC metadata.

/p: ExtractAllTableData=(BOOLEAN) Indicates whether data from all user


tables is extracted. If 'true', data from all
user tables is extracted, and you cannot
specify individual user tables for
extracting data. If 'false', specify one or
more user tables to extract data from.

/p: ExtractApplicationScopedObjectsOnly=(BOOLEAN 'True') If true, only extract
application-scoped objects for the specified source. If false, extract all
objects for the specified source.

/p: ExtractReferencedServerScopedElements=(BOOLEAN 'True') If true, extract
login, server audit, and credential objects referenced by source database
objects.

/p: ExtractUsageProperties=(BOOLEAN) Specifies whether usage properties,


such as table row count and index size,
will be extracted from the database.

/p: IgnoreExtendedProperties=(BOOLEAN) Specifies whether extended properties


should be ignored.

/p: IgnorePermissions=(BOOLEAN 'True') Specifies whether permissions should be


ignored.

/p: IgnoreUserLoginMappings=(BOOLEAN) Specifies whether relationships between


users and logins are ignored.

/p: Storage=({File|Memory} 'File') Specifies the type of backing storage for


the schema model used during
extraction.

/p: TableData=(STRING) Indicates the table from which data will


be extracted. Specify the table name
with or without the brackets
surrounding the name parts in the
following format:
schema_name.table_identifier.

/p: VerifyExtraction=(BOOLEAN) Specifies whether the extracted dacpac


should be verified.
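
Properties from this table combine with the Extract parameters as repeated /p: pairs; the sketch below assembles (but does not run) an extract that also captures all table data and verifies the result, with placeholder server and database names:

```shell
# Printed rather than executed; ExtractAllTableData and VerifyExtraction
# are properties from the table above.
CMD="sqlpackage /Action:Extract /SourceServerName:MyServer"
CMD="$CMD /SourceDatabaseName:MyDb /TargetFile:./MyDb.dacpac"
CMD="$CMD /p:ExtractAllTableData=true /p:VerifyExtraction=true"
echo "$CMD"
```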

Publish Parameters, Properties, and SQLCMD Variables


A SqlPackage.exe publish operation incrementally updates the schema of a target database to match the structure
of a source database. Publishing a deployment package that contains user data for all or a subset of tables updates
the table data in addition to the schema. Data deployment overwrites the schema and data in existing tables of the
target database. Data deployment will not change existing schema or data in the target database for tables not
included in the deployment package.
Help for Publish action
PARAMETER SHORT FORM VALUE DESCRIPTION

/Action: /a Publish Specifies the action to be


performed.

/AzureKeyVaultAuthMethod: /akv {Interactive|ClientIdSecret} Specifies what
authentication method is used for accessing Azure KeyVault.

/ClientId: /cid {string} Specifies the Client ID to be


used in authenticating
against Azure KeyVault,
when necessary

/Diagnostics: /d {True|False} Specifies whether diagnostic


logging is output to the
console. Defaults to False.

/DiagnosticsFile: /df {string} Specifies a file to store


diagnostic logs.

/OverwriteFiles: /of {True|False} Specifies if sqlpackage.exe


should overwrite existing
files. Specifying false causes
sqlpackage.exe to abort
action if an existing file is
encountered. Default value is
True.

/Profile: /pr {string} Specifies the file path to a


DAC Publish Profile. The
profile defines a collection of
properties and variables to
use when generating
outputs.

/Properties: /p {PropertyName}={Value} Specifies a name value pair


for an action-specific
property;{PropertyName}=
{Value}. Refer to the help for
a specific action to see that
action's property names.
Example: sqlpackage.exe
/Action:Publish /?.

/Quiet: /q {True|False} Specifies whether detailed


feedback is suppressed.
Defaults to False.

/Secret: /secr {string} Specifies the Client Secret to


be used in authenticating
against Azure KeyVault,
when necessary

/SourceConnectionString: /scs {string} Specifies a valid SQL


Server/Azure connection
string to the source
database. If this parameter is
specified, it shall be used
exclusively of all other source
parameters.

/SourceDatabaseName: /sdn {string} Defines the name of the


source database.

/SourceEncryptConnection: /sec {True|False} Specifies if SQL encryption
should be used for the source database connection.

/SourceFile: /sf {string} Specifies a source file to be


used as the source of action
instead of a database. If this
parameter is used, no other
source parameter shall be
valid.

/SourcePassword: /sp {string} For SQL Server Auth


scenarios, defines the
password to use to access
the source database.

/SourceServerName: /ssn {string} Defines the name of the


server hosting the source
database.

/SourceTimeout: /st {int} Specifies the timeout for


establishing a connection to
the source database in
seconds.

/SourceTrustServerCertificate: /stsc {True|False} Specifies whether to use SSL
to encrypt the source database connection and bypass walking the
certificate chain to validate trust.

/SourceUser: /su {string} For SQL Server Auth


scenarios, defines the SQL
Server user to use to access
the source database.

/TargetConnectionString: /tcs {string} Specifies a valid SQL


Server/Azure connection
string to the target
database. If this parameter is
specified, it shall be used
exclusively of all other target
parameters.

/TargetDatabaseName: /tdn {string} Specifies an override for the


name of the database that is
the target of sqlpackage.exe
Action.

/TargetEncryptConnection: /tec {True|False} Specifies if SQL encryption
should be used for the target database connection.

/TargetPassword: /tp {string} For SQL Server Auth


scenarios, defines the
password to use to access
the target database.

/TargetServerName: /tsn {string} Defines the name of the


server hosting the target
database.

/TargetTimeout: /tt {int} Specifies the timeout for


establishing a connection to
the target database in
seconds. For Azure AD, it is
recommended that this
value be greater than or
equal to 30 seconds.

/TargetTrustServerCertificate: /ttsc {True|False} Specifies whether to use SSL
to encrypt the target database connection and bypass walking the
certificate chain to validate trust.

/TargetUser: /tu {string} For SQL Server Auth


scenarios, defines the SQL
Server user to use to access
the target database.

/TenantId: /tid {string} Represents the Azure AD


tenant ID or domain name.
This option is required to
support guest or imported
Azure AD users as well as
Microsoft accounts such as
outlook.com, hotmail.com,
or live.com. If this parameter
is omitted, the default
tenant ID for Azure AD will
be used, assuming that the
authenticated user is a
native user for this AD.
However, in this case any
guest or imported users
and/or Microsoft accounts
hosted in this Azure AD are
not supported and the
operation will fail.
For more information about
Active Directory Universal
Authentication, see Universal
Authentication with SQL
Database and SQL Data
Warehouse (SSMS support
for MFA).

/UniversalAuthentication: /ua {True|False} Specifies if Universal


Authentication should be
used. When set to True, the
interactive authentication
protocol is activated
supporting MFA. This option
can also be used for Azure
AD authentication without
MFA, using an interactive
protocol requiring the user
to enter their username and
password or integrated
authentication (Windows
credentials). When
/UniversalAuthentication is
set to True, no Azure AD
authentication can be
specified in
SourceConnectionString
(/scs). When
/UniversalAuthentication is
set to False, Azure AD
authentication must be
specified in
SourceConnectionString
(/scs).
For more information about
Active Directory Universal
Authentication, see Universal
Authentication with SQL
Database and SQL Data
Warehouse (SSMS support
for MFA).

/Variables: /v {PropertyName}={Value} Specifies a name value pair


for an action-specific
variable;{VariableName}=
{Value}. The DACPAC file
contains the list of valid
SQLCMD variables. An error
results if a value is not
provided for every variable.
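
A SQLCMD variable value is supplied with one /v: pair per variable, as described above; in this sketch EnvironmentName is a hypothetical variable assumed to be declared in the .dacpac, and the command is printed rather than executed:

```shell
# EnvironmentName is a hypothetical SQLCMD variable used for illustration;
# every variable declared in the .dacpac must receive a value this way.
CMD="sqlpackage /Action:Publish /SourceFile:./MyDb.dacpac"
CMD="$CMD /TargetServerName:MyServer /TargetDatabaseName:MyDb"
CMD="$CMD /v:EnvironmentName=Production"
echo "$CMD"
```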

Properties specific to the Publish action


PROPERTY VALUE DESCRIPTION

/p: AdditionalDeploymentContributorArguments=(STRING) Specifies additional
deployment contributor arguments for the deployment contributors. This
should be a semi-colon delimited list of values.

/p: AdditionalDeploymentContributors=(STRING) Specifies additional deployment
contributors, which should run when the dacpac is deployed. This should be a
semi-colon delimited list of fully qualified build contributor names or IDs.

/p: AllowDropBlockingAssemblies=(BOOLEAN) This property is used by SqlClr
deployment to cause any blocking assemblies to be dropped as part of the
deployment plan. By default, any blocking/referencing assemblies will block
an assembly update if the referencing assembly needs to be dropped.

/p: AllowIncompatiblePlatform=(BOOLEAN) Specifies whether to attempt the action


despite incompatible SQL Server
platforms.

/p: AllowUnsafeRowLevelSecurityDataMovement=(BOOLEAN) Do not block data motion
on a table that has Row Level Security if this property is set to true.
Default is false.

/p: BackupDatabaseBeforeChanges=(BOOLEAN) Backs up the database before
deploying any changes.

/p: BlockOnPossibleDataLoss=(BOOLEAN 'True') Specifies that the publish
episode should be terminated if there is a possibility of data loss
resulting from the publish operation.

/p: BlockWhenDriftDetected=(BOOLEAN 'True') Specifies whether to block
updating a database whose schema no longer matches its registration or is
unregistered.

/p: CommandTimeout=(INT32 '60') Specifies the command timeout in


seconds when executing queries against
SQL Server.

/p: CommentOutSetVarDeclarations=(BOOLEAN) Specifies whether the declaration
of SETVAR variables should be commented out in the generated publish script.
You might choose to do this if you plan to specify the values on the command
line when you publish by using a tool such as SQLCMD.EXE.

/p: CompareUsingTargetCollation=(BOOLEAN) This setting dictates how the
database's collation is handled during deployment; by default the target
database's collation will be updated if it does not match the collation
specified by the source. When this option is set, the target database's (or
server's) collation should be used.

/p: CreateNewDatabase=(BOOLEAN) Specifies whether the target database


should be updated or whether it should
be dropped and re-created when you
publish to a database.

/p: DatabaseEdition=({Basic|Standard|Premium|Default} 'Default') Defines the
edition of an Azure SQL Database.

/p: DatabaseMaximumSize=(INT32) Defines the maximum size in GB of an


Azure SQL Database.

/p: DatabaseServiceObjective=(STRING) Defines the performance level of an
Azure SQL Database, such as "P0" or "S1".

/p: DeployDatabaseInSingleUserMode=(BOOLEAN) If true, the database is set to
Single User Mode before deploying.

/p: DisableAndReenableDdlTriggers=(BOOLEAN 'True') Specifies whether Data
Definition Language (DDL) triggers are disabled at the beginning of the
publish process and re-enabled at the end of the publish action.

/p: DoNotAlterChangeDataCaptureObjects=(BOOLEAN 'True') If true, Change Data
Capture objects are not altered.

/p: DoNotAlterReplicatedObjects=(BOOLEAN 'True') Specifies whether objects
that are replicated are identified during verification.

/p: DoNotDropObjectType=(STRING) An object type that should not be


dropped when DropObjectsNotInSource
is true. Valid object type names are
Aggregates, ApplicationRoles,
Assemblies, AsymmetricKeys,
BrokerPriorities, Certificates,
ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.

/p: DoNotDropObjectTypes=(STRING) A semicolon-delimited list of object


types that should not be dropped when
DropObjectsNotInSource is true. Valid
object type names are Aggregates,
ApplicationRoles, Assemblies,
AsymmetricKeys, BrokerPriorities,
Certificates, ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.

/p: DropConstraintsNotInSource=(BOOLEAN 'True') Specifies whether constraints
that do not exist in the database snapshot (.dacpac) file will be dropped
from the target database when you publish to a database.

/p: DropDmlTriggersNotInSource=(BOOLEAN 'True') Specifies whether DML
triggers that do not exist in the database snapshot (.dacpac) file will be
dropped from the target database when you publish to a database.

/p: DropExtendedPropertiesNotInSource=(BOOLEAN 'True') Specifies whether
extended properties that do not exist in the database snapshot (.dacpac)
file will be dropped from the target database when you publish to a
database.

/p: DropIndexesNotInSource=(BOOLEAN 'True') Specifies whether indexes that do
not exist in the database snapshot (.dacpac) file will be dropped from the
target database when you publish to a database.

/p: DropObjectsNotInSource=(BOOLEAN) Specifies whether objects that do not


exist in the database snapshot (.dacpac)
file will be dropped from the target
database when you publish to a
database. This value takes precedence
over DropExtendedProperties.

/p: DropPermissionsNotInSource=(BOOLEAN) Specifies whether permissions that
do not exist in the database snapshot (.dacpac) file will be dropped from
the target database when you publish updates to a database.

/p: DropRoleMembersNotInSource=(BOOLEAN) Specifies whether role members that
are not defined in the database snapshot (.dacpac) file will be dropped from
the target database when you publish updates to a database.

/p: DropStatisticsNotInSource=(BOOLEAN 'True') Specifies whether statistics
that do not exist in the database snapshot (.dacpac) file will be dropped
from the target database when you publish to a database.

/p: ExcludeObjectType=(STRING) An object type that should be ignored


during deployment. Valid object type
names are Aggregates, ApplicationRoles,
Assemblies, AsymmetricKeys,
BrokerPriorities, Certificates,
ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.

/p: ExcludeObjectTypes=(STRING) A semicolon-delimited list of object


types that should be ignored during
deployment. Valid object type names
are Aggregates, ApplicationRoles,
Assemblies, AsymmetricKeys,
BrokerPriorities, Certificates,
ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.

/p: GenerateSmartDefaults=(BOOLEAN) Automatically provides a default value


when updating a table that contains
data with a column that does not allow
null values.

/p: IgnoreAnsiNulls=(BOOLEAN 'True') Specifies whether differences in the


ANSI NULLS setting should be ignored
or updated when you publish to a
database.

/p: IgnoreAuthorizer=(BOOLEAN) Specifies whether differences in the


Authorizer should be ignored or
updated when you publish to a
database.

/p: IgnoreColumnCollation=(BOOLEAN) Specifies whether differences in the


column collations should be ignored or
updated when you publish to a
database.

/p: IgnoreColumnOrder=(BOOLEAN) Specifies whether differences in table


column order should be ignored or
updated when you publish to a
database.

/p: IgnoreComments=(BOOLEAN) Specifies whether differences in the


comments should be ignored or
updated when you publish to a
database.

/p: IgnoreCryptographicProviderFilePath= Specifies whether differences in the file


(BOOLEAN 'True') path for the cryptographic provider
should be ignored or updated when
you publish to a database.

/p: IgnoreDdlTriggerOrder=(BOOLEAN) Specifies whether differences in the


order of Data Definition Language
(DDL) triggers should be ignored or
updated when you publish to a
database or server.

/p: IgnoreDdlTriggerState=(BOOLEAN) Specifies whether differences in the


enabled or disabled state of Data
Definition Language (DDL) triggers
should be ignored or updated when
you publish to a database.

/p: IgnoreDefaultSchema=(BOOLEAN) Specifies whether differences in the


default schema should be ignored or
updated when you publish to a
database.

/p: IgnoreDmlTriggerOrder=(BOOLEAN) Specifies whether differences in the


order of Data Manipulation Language
(DML) triggers should be ignored or
updated when you publish to a
database.

/p: IgnoreDmlTriggerState=(BOOLEAN) Specifies whether differences in the


enabled or disabled state of DML
triggers should be ignored or updated
when you publish to a database.

/p: IgnoreExtendedProperties=(BOOLEAN) Specifies whether differences in the


extended properties should be ignored
or updated when you publish to a
database.

/p: IgnoreFileAndLogFilePath=(BOOLEAN Specifies whether differences in the


'True') paths for files and log files should be
ignored or updated when you publish
to a database.

/p: IgnoreFilegroupPlacement=(BOOLEAN Specifies whether differences in the


'True') placement of objects in FILEGROUPs
should be ignored or updated when
you publish to a database.
PROPERTY VALUE DESCRIPTION

/p: IgnoreFileSize=(BOOLEAN 'True') Specifies whether differences in the file


sizes should be ignored or whether a
warning should be issued when you
publish to a database.

/p: IgnoreFillFactor=(BOOLEAN 'True') Specifies whether differences in the fill


factor for index storage should be
ignored or whether a warning should be
issued when you publish to a database.

/p: IgnoreFullTextCatalogFilePath= Specifies whether differences in the file


(BOOLEAN 'True') path for the full-text catalog should be
ignored or whether a warning should be
issued when you publish to a database.

/p: IgnoreIdentitySeed=(BOOLEAN) Specifies whether differences in the seed


for an identity column should be
ignored or updated when you publish
updates to a database.

/p: IgnoreIncrement=(BOOLEAN) Specifies whether differences in the


increment for an identity column should
be ignored or updated when you
publish to a database.

/p: IgnoreIndexOptions=(BOOLEAN) Specifies whether differences in the


index options should be ignored or
updated when you publish to a
database.

/p: IgnoreIndexPadding=(BOOLEAN 'True') Specifies whether differences in the


index padding should be ignored or
updated when you publish to a
database.

/p: IgnoreKeywordCasing=(BOOLEAN Specifies whether differences in the


'True') casing of keywords should be ignored
or updated when you publish to a
database.

/p: IgnoreLockHintsOnIndexes= Specifies whether differences in the lock


(BOOLEAN) hints on indexes should be ignored or
updated when you publish to a
database.

/p: IgnoreLoginSids=(BOOLEAN 'True') Specifies whether differences in the


security identification number (SID)
should be ignored or updated when
you publish to a database.

/p: IgnoreNotForReplication=(BOOLEAN) Specifies whether the not for replication


settings should be ignored or updated
when you publish to a database.
PROPERTY VALUE DESCRIPTION

/p: IgnoreObjectPlacementOnPartitionSche Specifies whether an object's placement


me=(BOOLEAN 'True') on a partition scheme should be
ignored or updated when you publish
to a database.

/p: IgnorePartitionSchemes=(BOOLEAN) Specifies whether differences in partition


schemes and functions should be
ignored or updated when you publish
to a database.

/p: IgnorePermissions=(BOOLEAN) Specifies whether differences in the


permissions should be ignored or
updated when you publish to a
database.

/p: IgnoreQuotedIdentifiers=(BOOLEAN Specifies whether differences in the


'True') quoted identifiers setting should be
ignored or updated when you publish
to a database.

/p: IgnoreRoleMembership=(BOOLEAN) Specifies whether differences in the role


membership of logins should be
ignored or updated when you publish
to a database.

/p: IgnoreRouteLifetime=(BOOLEAN 'True') Specifies whether differences in the


amount of time that SQL Server retains
the route in the routing table should be
ignored or updated when you publish
to a database.

/p: IgnoreSemicolonBetweenStatements= Specifies whether differences in the


(BOOLEAN 'True') semi-colons between T-SQL statements
will be ignored or updated when you
publish to a database.

/p: IgnoreTableOptions=(BOOLEAN) Specifies whether differences in the


table options will be ignored or updated
when you publish to a database.

/p: IgnoreUserSettingsObjects=(BOOLEAN) Specifies whether differences in the user


settings objects will be ignored or
updated when you publish to a
database.

/p: IgnoreWhitespace=(BOOLEAN 'True') Specifies whether differences in white


space will be ignored or updated when
you publish to a database.

/p: IgnoreWithNocheckOnCheckConstraint Specifies whether differences in the


s=(BOOLEAN) value of the WITH NOCHECK clause for
check constraints will be ignored or
updated when you publish.
PROPERTY VALUE DESCRIPTION

/p: IgnoreWithNocheckOnForeignKeys= Specifies whether differences in the


(BOOLEAN) value of the WITH NOCHECK clause for
foreign keys will be ignored or updated
when you publish to a database.

/p: IncludeCompositeObjects=(BOOLEAN) Include all composite elements as part


of a single publish operation.

/p: IncludeTransactionalScripts=(BOOLEAN) Specifies whether transactional


statements should be used where
possible when you publish to a
database.

/p: NoAlterStatementsToChangeClrTypes= Specifies that publish should always


(BOOLEAN) drop and re-create an assembly if there
is a difference instead of issuing an
ALTER ASSEMBLY statement.

/p: PopulateFilesOnFileGroups=(BOOLEAN Specifies whether a new file is also


'True') created when a new FileGroup is
created in the target database.

/p: RegisterDataTierApplication= Specifies whether the schema is


(BOOLEAN) registered with the database server.

/p: RunDeploymentPlanExecutors= Specifies whether


(BOOLEAN) DeploymentPlanExecutor contributors
should be run when other operations
are executed.

/p: ScriptDatabaseCollation=(BOOLEAN) Specifies whether differences in the


database collation should be ignored or
updated when you publish to a
database.

/p: ScriptDatabaseCompatibility= Specifies whether differences in the


(BOOLEAN) database compatibility should be
ignored or updated when you publish
to a database.

/p: ScriptDatabaseOptions=(BOOLEAN Specifies whether target database


'True') properties should be set or updated as
part of the publish action.

/p: ScriptDeployStateChecks=(BOOLEAN) Specifies whether statements are


generated in the publish script to verify
that the database name and server
name match the names specified in the
database project.

/p: ScriptFileSize=(BOOLEAN) Controls whether size is specified when


adding a file to a filegroup.
PROPERTY VALUE DESCRIPTION

/p: ScriptNewConstraintValidation= At the end of publish all of the


(BOOLEAN 'True') constraints will be verified as one set,
avoiding data errors caused by a check
or foreign key constraint in the middle
of publish. If set to False, your
constraints are published without
checking the corresponding data.

/p: ScriptRefreshModule=(BOOLEAN 'True') Include refresh statements at the end of


the publish script.

/p: Storage=({File|Memory}) Specifies how elements are stored when


building the database model. For
performance reasons the default is
InMemory. For large databases, File
backed storage is required.

/p: TreatVerificationErrorsAsWarnings= Specifies whether errors encountered


(BOOLEAN) during publish verification should be
treated as warnings. The check is
performed against the generated
deployment plan before the plan is
executed against your target database.
Plan verification detects problems such
as the loss of target-only objects (such
as indexes) that must be dropped to
make a change. Verification will also
detect situations where dependencies
(such as a table or view) exist because of
a reference to a composite project, but
do not exist in the target database. You
might choose to do this to get a
complete list of all issues, instead of
having the publish action stop on the
first error.

/p: VerifyCollationCompatibility= Specifies whether collation compatibility


(BOOLEAN 'True') is verified.

/p: VerifyDeployment=(BOOLEAN 'True') Specifies whether checks should be


performed before publishing that will
stop the publish action if issues are
present that might block successful
publishing. For example, your publish
action might stop if you have foreign
keys on the target database that do not
exist in the database project, and that
causes errors when you publish.
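As a sketch of how these /p: properties are passed, the following hypothetical publish invocation combines two of the properties listed above. The file, server, and database names are placeholders, not values from this document, and the command line is built and printed rather than executed so the sketch stands alone.

```shell
# Hypothetical publish invocation; MyDb.dacpac, myserver, and MyDb are
# placeholder names. The command line is echoed rather than executed so
# the sketch is self-contained.
cmd='sqlpackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /p:GenerateSmartDefaults=True /p:IgnoreWhitespace=True'
echo "$cmd"
```

Each /p: option is an independent name-value pair, so several properties can be combined on one command line.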

SQLCMD Variables
The following describes the format of the option that you can use to override the value of a SQL command (sqlcmd) variable used during a publish action. Variable values specified on the command line override other values assigned to the variable (for example, in a publish profile).

/Variables:{VariableName}={Value}
    Specifies a name-value pair for an action-specific variable: {VariableName}={Value}. The DACPAC file contains the list of valid SQLCMD variables. An error results if a value is not provided for every variable.
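As a sketch of the /Variables: syntax, the following hypothetical publish invocation overrides a SQLCMD variable assumed to be declared in the .dacpac. The variable name EnvironmentName and the other names are placeholders, and the command line is printed rather than executed so the sketch stands alone.

```shell
# Hypothetical override of a SQLCMD variable at publish time; all names
# are placeholders. Echoed rather than executed so the sketch is
# self-contained.
cmd='sqlpackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /Variables:EnvironmentName=Production'
echo "$cmd"
```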

Export Parameters and Properties


A SqlPackage.exe Export action exports a live database from SQL Server or Azure SQL Database to a BACPAC
package (.bacpac file). By default, data for all tables will be included in the .bacpac file. Optionally, you can specify
only a subset of tables for which to export data. Validation for the Export action ensures Azure SQL Database
compatibility for the complete targeted database even if a subset of tables is specified for the export.
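As a sketch of an Export invocation, the following hypothetical command exports a database to a .bacpac and limits the exported data to two tables by repeating the TableData property. All names are placeholders, and the command line is printed rather than executed so the sketch stands alone.

```shell
# Hypothetical export to a .bacpac with data from only two tables; the
# server, database, file, and table names are placeholders. Echoed rather
# than executed so the sketch is self-contained.
cmd='sqlpackage.exe /Action:Export /SourceServerName:myserver /SourceDatabaseName:MyDb /TargetFile:MyDb.bacpac /p:TableData=dbo.Customers /p:TableData=dbo.Orders'
echo "$cmd"
```

Note that even with a table subset, validation still checks the complete database for Azure SQL Database compatibility, as described above.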
Help for Export action
/Action:Export (short form: /a)
    Specifies the action to be performed.

/Diagnostics:{True|False} (short form: /d)
    Specifies whether diagnostic logging is output to the console. Defaults to False.

/DiagnosticsFile:{string} (short form: /df)
    Specifies a file to store diagnostic logs.

/OverwriteFiles:{True|False} (short form: /of)
    Specifies whether sqlpackage.exe should overwrite existing files. Specifying False causes sqlpackage.exe to abort the action if an existing file is encountered. Default value is True.

/Properties:{PropertyName}={Value} (short form: /p)
    Specifies a name-value pair for an action-specific property: {PropertyName}={Value}. Refer to the help for a specific action to see that action's property names. Example: sqlpackage.exe /Action:Publish /?

/Quiet:{True|False} (short form: /q)
    Specifies whether detailed feedback is suppressed. Defaults to False.

/SourceConnectionString:{string} (short form: /scs)
    Specifies a valid SQL Server/Azure connection string to the source database. If this parameter is specified, it is used exclusively of all other source parameters.

/SourceDatabaseName:{string} (short form: /sdn)
    Defines the name of the source database.

/SourceEncryptConnection:{True|False} (short form: /sec)
    Specifies whether SQL encryption should be used for the source database connection.

/SourcePassword:{string} (short form: /sp)
    For SQL Server Authentication scenarios, defines the password to use to access the source database.

/SourceServerName:{string} (short form: /ssn)
    Defines the name of the server hosting the source database.

/SourceTimeout:{int} (short form: /st)
    Specifies the timeout for establishing a connection to the source database, in seconds.

/SourceTrustServerCertificate:{True|False} (short form: /stsc)
    Specifies whether to use SSL to encrypt the source database connection and bypass walking the certificate chain to validate trust.

/SourceUser:{string} (short form: /su)
    For SQL Server Authentication scenarios, defines the SQL Server user to use to access the source database.

/TargetFile:{string} (short form: /tf)
    Specifies a target file (that is, a .dacpac file) to be used as the target of the action instead of a database. If this parameter is used, no other target parameter is valid. This parameter is invalid for actions that only support database targets.

/TenantId:{string} (short form: /tid)
    Represents the Azure AD tenant ID or domain name. This option is required to support guest or imported Azure AD users as well as Microsoft accounts such as outlook.com, hotmail.com, or live.com. If this parameter is omitted, the default tenant ID for Azure AD will be used, assuming that the authenticated user is a native user for this AD. However, in this case any guest or imported users and/or Microsoft accounts hosted in this Azure AD are not supported and the operation will fail. For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/UniversalAuthentication:{True|False} (short form: /ua)
    Specifies whether Universal Authentication should be used. When set to True, the interactive authentication protocol is activated, supporting MFA. This option can also be used for Azure AD authentication without MFA, using an interactive protocol requiring the user to enter their username and password or integrated authentication (Windows credentials). When /UniversalAuthentication is set to True, no Azure AD authentication can be specified in SourceConnectionString (/scs). When /UniversalAuthentication is set to False, Azure AD authentication must be specified in SourceConnectionString (/scs). For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).
Properties specific to the Export action

/p:CommandTimeout=(INT32 '60')
    Specifies the command timeout in seconds when executing queries against SQL Server.

/p:Storage=({File|Memory} 'File')
    Specifies the type of backing storage for the schema model used during extraction.

/p:TableData=(STRING)
    Indicates the table from which data will be extracted. Specify the table name with or without the brackets surrounding the name parts, in the following format: schema_name.table_identifier.

/p:TargetEngineVersion=({Default|Latest|V11|V12} 'Latest')
    Specifies what the target engine version is expected to be. This affects whether to allow objects supported by Azure SQL Database servers with V12 capabilities, such as memory-optimized tables, in the generated bacpac.

/p:VerifyFullTextDocumentTypesSupported=(BOOLEAN)
    Specifies whether the supported full-text document types for Microsoft Azure SQL Database v12 should be verified.

Import Parameters and Properties

A SqlPackage.exe Import action imports the schema and table data from a BACPAC package (.bacpac file) into a new or empty database in SQL Server or Azure SQL Database. When importing into an existing database, the target database cannot contain any user-defined schema objects.
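As a sketch of an Import invocation, the following hypothetical command imports a .bacpac into a new Azure SQL Database, setting the edition and service objective through action-specific properties. All names are placeholders, and the command line is printed rather than executed so the sketch stands alone.

```shell
# Hypothetical import of a .bacpac into a new database; the server,
# database, and file names are placeholders. Echoed rather than executed
# so the sketch is self-contained.
cmd='sqlpackage.exe /Action:Import /SourceFile:MyDb.bacpac /TargetServerName:myserver.database.windows.net /TargetDatabaseName:MyDb /p:DatabaseEdition=Standard /p:DatabaseServiceObjective=S1'
echo "$cmd"
```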
Help for Import action

/Action:Import (short form: /a)
    Specifies the action to be performed.

/Diagnostics:{True|False} (short form: /d)
    Specifies whether diagnostic logging is output to the console. Defaults to False.

/DiagnosticsFile:{string} (short form: /df)
    Specifies a file to store diagnostic logs.

/Properties:{PropertyName}={Value} (short form: /p)
    Specifies a name-value pair for an action-specific property: {PropertyName}={Value}. Refer to the help for a specific action to see that action's property names. Example: sqlpackage.exe /Action:Publish /?

/Quiet:{True|False} (short form: /q)
    Specifies whether detailed feedback is suppressed. Defaults to False.

/SourceFile:{string} (short form: /sf)
    Specifies a source file to be used as the source of the action. If this parameter is used, no other source parameter is valid.

/TargetConnectionString:{string} (short form: /tcs)
    Specifies a valid SQL Server/Azure connection string to the target database. If this parameter is specified, it is used exclusively of all other target parameters.

/TargetDatabaseName:{string} (short form: /tdn)
    Specifies an override for the name of the database that is the target of the sqlpackage.exe action.

/TargetEncryptConnection:{True|False} (short form: /tec)
    Specifies whether SQL encryption should be used for the target database connection.

/TargetPassword:{string} (short form: /tp)
    For SQL Server Authentication scenarios, defines the password to use to access the target database.

/TargetServerName:{string} (short form: /tsn)
    Defines the name of the server hosting the target database.

/TargetTimeout:{int} (short form: /tt)
    Specifies the timeout for establishing a connection to the target database, in seconds. For Azure AD, it is recommended that this value be greater than or equal to 30 seconds.

/TargetTrustServerCertificate:{True|False} (short form: /ttsc)
    Specifies whether to use SSL to encrypt the target database connection and bypass walking the certificate chain to validate trust.

/TargetUser:{string} (short form: /tu)
    For SQL Server Authentication scenarios, defines the SQL Server user to use to access the target database.

/TenantId:{string} (short form: /tid)
    Represents the Azure AD tenant ID or domain name. This option is required to support guest or imported Azure AD users as well as Microsoft accounts such as outlook.com, hotmail.com, or live.com. If this parameter is omitted, the default tenant ID for Azure AD will be used, assuming that the authenticated user is a native user for this AD. However, in this case any guest or imported users and/or Microsoft accounts hosted in this Azure AD are not supported and the operation will fail. For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/UniversalAuthentication:{True|False} (short form: /ua)
    Specifies whether Universal Authentication should be used. When set to True, the interactive authentication protocol is activated, supporting MFA. This option can also be used for Azure AD authentication without MFA, using an interactive protocol requiring the user to enter their username and password or integrated authentication (Windows credentials). When /UniversalAuthentication is set to True, no Azure AD authentication can be specified in SourceConnectionString (/scs). When /UniversalAuthentication is set to False, Azure AD authentication must be specified in SourceConnectionString (/scs). For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).
Properties specific to the Import action:

/p:CommandTimeout=(INT32 '60')
    Specifies the command timeout in seconds when executing queries against SQL Server.

/p:DatabaseEdition=({Basic|Standard|Premium|Default} 'Default')
    Defines the edition of an Azure SQL Database.

/p:DatabaseMaximumSize=(INT32)
    Defines the maximum size in GB of an Azure SQL Database.

/p:DatabaseServiceObjective=(STRING)
    Defines the performance level of an Azure SQL Database, such as "P0" or "S1".

/p:ImportContributorArguments=(STRING)
    Specifies deployment contributor arguments for the deployment contributors. This should be a semicolon-delimited list of values.

/p:ImportContributors=(STRING)
    Specifies the deployment contributors that should run when the bacpac is imported. This should be a semicolon-delimited list of fully qualified build contributor names or IDs.

/p:Storage=({File|Memory})
    Specifies how elements are stored when building the database model. For performance reasons the default is InMemory. For large databases, File-backed storage is required.

DeployReport Parameters and Properties

A SqlPackage.exe DeployReport action creates an XML report of the changes that would be made by a publish action.
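As a sketch of a DeployReport invocation, the following hypothetical command compares a .dacpac against a target database and writes the XML change report to a file. All names are placeholders, and the command line is printed rather than executed so the sketch stands alone.

```shell
# Hypothetical deploy-report invocation; the file, server, and database
# names are placeholders. Echoed rather than executed so the sketch is
# self-contained.
cmd='sqlpackage.exe /Action:DeployReport /SourceFile:MyDb.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /OutputPath:DeployReport.xml'
echo "$cmd"
```

Because DeployReport only reports the changes a publish would make, it is a low-risk way to preview a deployment before running the Publish action.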
Help for DeployReport action

/Action:DeployReport (short form: /a)
    Specifies the action to be performed.

/Diagnostics:{True|False} (short form: /d)
    Specifies whether diagnostic logging is output to the console. Defaults to False.

/DiagnosticsFile:{string} (short form: /df)
    Specifies a file to store diagnostic logs.

/OutputPath:{string} (short form: /op)
    Specifies the file path where the output files are generated.

/OverwriteFiles:{True|False} (short form: /of)
    Specifies whether sqlpackage.exe should overwrite existing files. Specifying False causes sqlpackage.exe to abort the action if an existing file is encountered. Default value is True.

/Profile:{string} (short form: /pr)
    Specifies the file path to a DAC Publish Profile. The profile defines a collection of properties and variables to use when generating outputs.

/Properties:{PropertyName}={Value} (short form: /p)
    Specifies a name-value pair for an action-specific property: {PropertyName}={Value}. Refer to the help for a specific action to see that action's property names. Example: sqlpackage.exe /Action:Publish /?

/Quiet:{True|False} (short form: /q)
    Specifies whether detailed feedback is suppressed. Defaults to False.

/SourceConnectionString:{string} (short form: /scs)
    Specifies a valid SQL Server/Azure connection string to the source database. If this parameter is specified, it is used exclusively of all other source parameters.

/SourceDatabaseName:{string} (short form: /sdn)
    Defines the name of the source database.

/SourceEncryptConnection:{True|False} (short form: /sec)
    Specifies whether SQL encryption should be used for the source database connection.

/SourceFile:{string} (short form: /sf)
    Specifies a source file to be used as the source of the action instead of a database. If this parameter is used, no other source parameter is valid.

/SourcePassword:{string} (short form: /sp)
    For SQL Server Authentication scenarios, defines the password to use to access the source database.

/SourceServerName:{string} (short form: /ssn)
    Defines the name of the server hosting the source database.

/SourceTimeout:{int} (short form: /st)
    Specifies the timeout for establishing a connection to the source database, in seconds.

/SourceTrustServerCertificate:{True|False} (short form: /stsc)
    Specifies whether to use SSL to encrypt the source database connection and bypass walking the certificate chain to validate trust.

/SourceUser:{string} (short form: /su)
    For SQL Server Authentication scenarios, defines the SQL Server user to use to access the source database.

/TargetConnectionString:{string} (short form: /tcs)
    Specifies a valid SQL Server/Azure connection string to the target database. If this parameter is specified, it is used exclusively of all other target parameters.

/TargetDatabaseName:{string} (short form: /tdn)
    Specifies an override for the name of the database that is the target of the sqlpackage.exe action.

/TargetEncryptConnection:{True|False} (short form: /tec)
    Specifies whether SQL encryption should be used for the target database connection.

/TargetFile:{string} (short form: /tf)
    Specifies a target file (that is, a .dacpac file) to be used as the target of the action instead of a database. If this parameter is used, no other target parameter is valid. This parameter is invalid for actions that only support database targets.

/TargetPassword:{string} (short form: /tp)
    For SQL Server Authentication scenarios, defines the password to use to access the target database.

/TargetServerName:{string} (short form: /tsn)
    Defines the name of the server hosting the target database.

/TargetTimeout:{int} (short form: /tt)
    Specifies the timeout for establishing a connection to the target database, in seconds. For Azure AD, it is recommended that this value be greater than or equal to 30 seconds.

/TargetTrustServerCertificate:{True|False} (short form: /ttsc)
    Specifies whether to use SSL to encrypt the target database connection and bypass walking the certificate chain to validate trust.

/TargetUser:{string} (short form: /tu)
    For SQL Server Authentication scenarios, defines the SQL Server user to use to access the target database.

/TenantId:{string} (short form: /tid)
    Represents the Azure AD tenant ID or domain name. This option is required to support guest or imported Azure AD users as well as Microsoft accounts such as outlook.com, hotmail.com, or live.com. If this parameter is omitted, the default tenant ID for Azure AD will be used, assuming that the authenticated user is a native user for this AD. However, in this case any guest or imported users and/or Microsoft accounts hosted in this Azure AD are not supported and the operation will fail. For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/UniversalAuthentication:{True|False} (short form: /ua)
    Specifies whether Universal Authentication should be used. When set to True, the interactive authentication protocol is activated, supporting MFA. This option can also be used for Azure AD authentication without MFA, using an interactive protocol requiring the user to enter their username and password or integrated authentication (Windows credentials). When /UniversalAuthentication is set to True, no Azure AD authentication can be specified in SourceConnectionString (/scs). When /UniversalAuthentication is set to False, Azure AD authentication must be specified in SourceConnectionString (/scs). For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/Variables:{VariableName}={Value} (short form: /v)
    Specifies a name-value pair for an action-specific variable: {VariableName}={Value}. The DACPAC file contains the list of valid SQLCMD variables. An error results if a value is not provided for every variable.

Properties specific to the DeployReport action


PROPERTY VALUE DESCRIPTION

/p: AdditionalDeploymentContributorArgu Specifies additional deployment


ments=(STRING) contributor arguments for the
deployment contributors. This should
be a semi-colon delimited list of values.

/p: AdditionalDeploymentContributors= Specifies additional deployment


(STRING) contributors, which should run when
the dacpac is deployed. This should be a
semi-colon delimited list of fully
qualified build contributor names or IDs.
PROPERTY VALUE DESCRIPTION

/p: AllowDropBlocking Assemblies= This property is used by SqlClr


(BOOLEAN) deployment to cause any blocking
assemblies to be dropped as part of the
deployment plan. By default, any
blocking/referencing assemblies will
block an assembly update if the
referencing assembly needs to be
dropped.

/p: AllowIncompatiblePlatform=(BOOLEAN) Specifies whether to attempt the action


despite incompatible SQL Server
platforms.

/p: AllowUnsafeRowLevelSecurityDataMove Do not block data motion on a table


ment=(BOOLEAN) that has Row Level Security if this
property is set to true. Default is false.

/p: BackupDatabaseBeforeChanges= Backups the database before deploying


(BOOLEAN) any changes.

/p: BlockOnPossibleDataLoss=(BOOLEAN Specifies that the publish episode


'True') should be terminated if there is a
possibility of data loss resulting from
the publish.operation.

/p: BlockWhenDriftDetected=(BOOLEAN Specifies whether to block updating a


'True') database whose schema no longer
matches its registration or is
unregistered.

/p: CommandTimeout=(INT32 '60') Specifies the command timeout in


seconds when executing queries against
SQL Server.

/p: CommentOutSetVarDeclarations= Specifies whether the declaration of


(BOOLEAN) SETVAR variables should be commented
out in the generated publish script. You
might choose to do this if you plan to
specify the values on the command line
when you publish by using a tool such
as SQLCMD.EXE.

/p: CompareUsingTargetCollation= This setting dictates how the database's


(BOOLEAN) collation is handled during deployment;
by default the target database's
collation will be updated if it does not
match the collation specified by the
source. When this option is set, the
target database's (or server's) collation
should be used.

/p: CreateNewDatabase=(BOOLEAN) Specifies whether the target database


should be updated or whether it should
be dropped and re-created when you
publish to a database.
PROPERTY VALUE DESCRIPTION

/p: DatabaseEdition= Defines the edition of an Azure SQL


({Basic|Standard|Premium|Default} Database.
'Default')

/p: DatabaseMaximumSize=(INT32) Defines the maximum size in GB of an


Azure SQL Database.

/p: DatabaseServiceObjective=(STRING) Defines the performance level of an


Azure SQL Database such as "P0" or
"S1".

/p: DeployDatabaseInSingleUserMode= if true, the database is set to Single User


(BOOLEAN) Mode before deploying.

/p: DisableAndReenableDdlTriggers= Specifies whether Data Definition


(BOOLEAN 'True') Language (DDL) triggers are disabled at
the beginning of the publish process
and re-enabled at the end of the
publish action.

/p: DoNotAlterChangeDataCaptureObjects If true, Change Data Capture objects


=(BOOLEAN 'True') are not altered.

/p: DoNotAlterReplicatedObjects= Specifies whether objects that are


(BOOLEAN 'True') replicated are identified during
verification.
PROPERTY VALUE DESCRIPTION

/p: DoNotDropObjectType=(STRING) An object type that should not be


dropped when DropObjectsNotInSource
is true. Valid object type names are
Aggregates, ApplicationRoles,
Assemblies, AsymmetricKeys,
BrokerPriorities, Certificates,
ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.
PROPERTY VALUE DESCRIPTION

/p: DoNotDropObjectTypes=(STRING)
A semicolon-delimited list of object types that should not be dropped when DropObjectsNotInSource is true. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.

/p: DropConstraintsNotInSource=(BOOLEAN 'True')
Specifies whether constraints that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropDmlTriggersNotInSource=(BOOLEAN 'True')
Specifies whether DML triggers that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropExtendedPropertiesNotInSource=(BOOLEAN 'True')
Specifies whether extended properties that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.
/p: DropIndexesNotInSource=(BOOLEAN 'True')
Specifies whether indexes that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropObjectsNotInSource=(BOOLEAN)
Specifies whether objects that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database. This value takes precedence over DropExtendedProperties.

/p: DropPermissionsNotInSource=(BOOLEAN)
Specifies whether permissions that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish updates to a database.

/p: DropRoleMembersNotInSource=(BOOLEAN)
Specifies whether role members that are not defined in the database snapshot (.dacpac) file will be dropped from the target database when you publish updates to a database.

/p: DropStatisticsNotInSource=(BOOLEAN 'True')
Specifies whether statistics that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.
/p: ExcludeObjectType=(STRING)
An object type that should be ignored during deployment. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.

/p: ExcludeObjectTypes=(STRING)
A semicolon-delimited list of object types that should be ignored during deployment. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.

/p: GenerateSmartDefaults=(BOOLEAN)
Automatically provides a default value when updating a table that contains data with a column that does not allow null values.

/p: IgnoreAnsiNulls=(BOOLEAN 'True')
Specifies whether differences in the ANSI NULLS setting should be ignored or updated when you publish to a database.

/p: IgnoreAuthorizer=(BOOLEAN)
Specifies whether differences in the Authorizer should be ignored or updated when you publish to a database.

/p: IgnoreColumnCollation=(BOOLEAN)
Specifies whether differences in the column collations should be ignored or updated when you publish to a database.

/p: IgnoreColumnOrder=(BOOLEAN)
Specifies whether differences in table column order should be ignored or updated when you publish to a database.

/p: IgnoreComments=(BOOLEAN)
Specifies whether differences in the comments should be ignored or updated when you publish to a database.

/p: IgnoreCryptographicProviderFilePath=(BOOLEAN 'True')
Specifies whether differences in the file path for the cryptographic provider should be ignored or updated when you publish to a database.

/p: IgnoreDdlTriggerOrder=(BOOLEAN)
Specifies whether differences in the order of Data Definition Language (DDL) triggers should be ignored or updated when you publish to a database or server.

/p: IgnoreDdlTriggerState=(BOOLEAN)
Specifies whether differences in the enabled or disabled state of Data Definition Language (DDL) triggers should be ignored or updated when you publish to a database.

/p: IgnoreDefaultSchema=(BOOLEAN)
Specifies whether differences in the default schema should be ignored or updated when you publish to a database.

/p: IgnoreDmlTriggerOrder=(BOOLEAN)
Specifies whether differences in the order of Data Manipulation Language (DML) triggers should be ignored or updated when you publish to a database.

/p: IgnoreDmlTriggerState=(BOOLEAN)
Specifies whether differences in the enabled or disabled state of DML triggers should be ignored or updated when you publish to a database.

/p: IgnoreExtendedProperties=(BOOLEAN)
Specifies whether differences in the extended properties should be ignored or updated when you publish to a database.

/p: IgnoreFileAndLogFilePath=(BOOLEAN 'True')
Specifies whether differences in the paths for files and log files should be ignored or updated when you publish to a database.

/p: IgnoreFilegroupPlacement=(BOOLEAN 'True')
Specifies whether differences in the placement of objects in FILEGROUPs should be ignored or updated when you publish to a database.
/p: IgnoreFileSize=(BOOLEAN 'True')
Specifies whether differences in the file sizes should be ignored or whether a warning should be issued when you publish to a database.

/p: IgnoreFillFactor=(BOOLEAN 'True')
Specifies whether differences in the fill factor for index storage should be ignored or whether a warning should be issued when you publish to a database.

/p: IgnoreFullTextCatalogFilePath=(BOOLEAN 'True')
Specifies whether differences in the file path for the full-text catalog should be ignored or whether a warning should be issued when you publish to a database.

/p: IgnoreIdentitySeed=(BOOLEAN)
Specifies whether differences in the seed for an identity column should be ignored or updated when you publish updates to a database.

/p: IgnoreIncrement=(BOOLEAN)
Specifies whether differences in the increment for an identity column should be ignored or updated when you publish to a database.

/p: IgnoreIndexOptions=(BOOLEAN)
Specifies whether differences in the index options should be ignored or updated when you publish to a database.

/p: IgnoreIndexPadding=(BOOLEAN 'True')
Specifies whether differences in the index padding should be ignored or updated when you publish to a database.

/p: IgnoreKeywordCasing=(BOOLEAN 'True')
Specifies whether differences in the casing of keywords should be ignored or updated when you publish to a database.

/p: IgnoreLockHintsOnIndexes=(BOOLEAN)
Specifies whether differences in the lock hints on indexes should be ignored or updated when you publish to a database.

/p: IgnoreLoginSids=(BOOLEAN 'True')
Specifies whether differences in the security identification number (SID) should be ignored or updated when you publish to a database.

/p: IgnoreNotForReplication=(BOOLEAN)
Specifies whether the not for replication settings should be ignored or updated when you publish to a database.
/p: IgnoreObjectPlacementOnPartitionScheme=(BOOLEAN 'True')
Specifies whether an object's placement on a partition scheme should be ignored or updated when you publish to a database.

/p: IgnorePartitionSchemes=(BOOLEAN)
Specifies whether differences in partition schemes and functions should be ignored or updated when you publish to a database.

/p: IgnorePermissions=(BOOLEAN)
Specifies whether differences in the permissions should be ignored or updated when you publish to a database.

/p: IgnoreQuotedIdentifiers=(BOOLEAN 'True')
Specifies whether differences in the quoted identifiers setting should be ignored or updated when you publish to a database.

/p: IgnoreRoleMembership=(BOOLEAN)
Specifies whether differences in the role membership of logins should be ignored or updated when you publish to a database.

/p: IgnoreRouteLifetime=(BOOLEAN 'True')
Specifies whether differences in the amount of time that SQL Server retains the route in the routing table should be ignored or updated when you publish to a database.

/p: IgnoreSemicolonBetweenStatements=(BOOLEAN 'True')
Specifies whether differences in the semicolons between T-SQL statements will be ignored or updated when you publish to a database.

/p: IgnoreTableOptions=(BOOLEAN)
Specifies whether differences in the table options will be ignored or updated when you publish to a database.

/p: IgnoreUserSettingsObjects=(BOOLEAN)
Specifies whether differences in the user settings objects will be ignored or updated when you publish to a database.

/p: IgnoreWhitespace=(BOOLEAN 'True')
Specifies whether differences in white space will be ignored or updated when you publish to a database.

/p: IgnoreWithNocheckOnCheckConstraints=(BOOLEAN)
Specifies whether differences in the value of the WITH NOCHECK clause for check constraints will be ignored or updated when you publish to a database.

/p: IgnoreWithNocheckOnForeignKeys=(BOOLEAN)
Specifies whether differences in the value of the WITH NOCHECK clause for foreign keys will be ignored or updated when you publish to a database.

/p: IncludeCompositeObjects=(BOOLEAN)
Include all composite elements as part of a single publish operation.

/p: IncludeTransactionalScripts=(BOOLEAN)
Specifies whether transactional statements should be used where possible when you publish to a database.

/p: NoAlterStatementsToChangeClrTypes=(BOOLEAN)
Specifies that publish should always drop and re-create an assembly if there is a difference instead of issuing an ALTER ASSEMBLY statement.

/p: PopulateFilesOnFileGroups=(BOOLEAN 'True')
Specifies whether a new file is also created when a new FileGroup is created in the target database.

/p: RegisterDataTierApplication=(BOOLEAN)
Specifies whether the schema is registered with the database server.

/p: RunDeploymentPlanExecutors=(BOOLEAN)
Specifies whether DeploymentPlanExecutor contributors should be run when other operations are executed.

/p: ScriptDatabaseCollation=(BOOLEAN)
Specifies whether differences in the database collation should be ignored or updated when you publish to a database.

/p: ScriptDatabaseCompatibility=(BOOLEAN)
Specifies whether differences in the database compatibility should be ignored or updated when you publish to a database.

/p: ScriptDatabaseOptions=(BOOLEAN 'True')
Specifies whether target database properties should be set or updated as part of the publish action.

/p: ScriptDeployStateChecks=(BOOLEAN)
Specifies whether statements are generated in the publish script to verify that the database name and server name match the names specified in the database project.

/p: ScriptFileSize=(BOOLEAN)
Controls whether size is specified when adding a file to a filegroup.

/p: ScriptNewConstraintValidation=(BOOLEAN 'True')
At the end of publish, all of the constraints will be verified as one set, avoiding data errors caused by a check or foreign key constraint in the middle of publish. If set to False, your constraints are published without checking the corresponding data.

/p: ScriptRefreshModule=(BOOLEAN 'True')
Include refresh statements at the end of the publish script.

/p: Storage=({File|Memory})
Specifies how elements are stored when building the database model. For performance reasons the default is InMemory. For large databases, File backed storage is required.

/p: TreatVerificationErrorsAsWarnings=(BOOLEAN)
Specifies whether errors encountered during publish verification should be treated as warnings. The check is performed against the generated deployment plan before the plan is executed against your target database. Plan verification detects problems such as the loss of target-only objects (such as indexes) that must be dropped to make a change. Verification will also detect situations where dependencies (such as a table or view) exist because of a reference to a composite project, but do not exist in the target database. You might choose to do this to get a complete list of all issues, instead of having the publish action stop on the first error.

/p: UnmodifiableObjectWarnings=(BOOLEAN 'True')
Specifies whether warnings should be generated when differences are found in objects that cannot be modified, for example, if the file size or file paths were different for a file.

/p: VerifyCollationCompatibility=(BOOLEAN 'True')
Specifies whether collation compatibility is verified.

/p: VerifyDeployment=(BOOLEAN 'True')
Specifies whether checks should be performed before publishing that will stop the publish action if issues are present that might block successful publishing. For example, your publish action might stop if you have foreign keys on the target database that do not exist in the database project, and that causes errors when you publish.

DriftReport Parameters
A SqlPackage.exe DriftReport action creates an XML report of the changes that have been made to the registered database since it was last registered.

Help for the DriftReport action

PARAMETER SHORT FORM VALUE DESCRIPTION

/Action: /a DriftReport
Specifies the action to be performed.

/Diagnostics: /d {True|False}
Specifies whether diagnostic logging is output to the console. Defaults to False.

/DiagnosticsFile: /df {string}
Specifies a file to store diagnostic logs.

/OutputPath: /op {string}
Specifies the file path where the output files are generated.

/OverwriteFiles: /of {True|False}
Specifies if sqlpackage.exe should overwrite existing files. Specifying false causes sqlpackage.exe to abort the action if an existing file is encountered. Default value is True.

/Quiet: /q {True|False}
Specifies whether detailed feedback is suppressed. Defaults to False.

/TargetConnectionString: /tcs {string}
Specifies a valid SQL Server/Azure connection string to the target database. If this parameter is specified, it shall be used exclusively of all other target parameters.

/TargetDatabaseName: /tdn {string}
Specifies an override for the name of the database that is the target of sqlpackage.exe Action.

/TargetEncryptConnection: /tec {True|False}
Specifies if SQL encryption should be used for the target database connection.

/TargetPassword: /tp {string}
For SQL Server Auth scenarios, defines the password to use to access the target database.

/TargetServerName: /tsn {string}
Defines the name of the server hosting the target database.
/TargetTimeout: /tt {int}
Specifies the timeout for establishing a connection to the target database in seconds. For Azure AD, it is recommended that this value be greater than or equal to 30 seconds.

/TargetTrustServerCertificate: /ttsc {True|False}
Specifies whether to use SSL to encrypt the target database connection and bypass walking the certificate chain to validate trust.

/TargetUser: /tu {string}
For SQL Server Auth scenarios, defines the SQL Server user to use to access the target database.

/TenantId: /tid {string}
Represents the Azure AD tenant ID or domain name. This option is required to support guest or imported Azure AD users as well as Microsoft accounts such as outlook.com, hotmail.com, or live.com. If this parameter is omitted, the default tenant ID for Azure AD will be used, assuming that the authenticated user is a native user for this AD. However, in this case any guest or imported users and/or Microsoft accounts hosted in this Azure AD are not supported and the operation will fail. For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/UniversalAuthentication: /ua {True|False}
Specifies if Universal Authentication should be used. When set to True, the interactive authentication protocol is activated supporting MFA. This option can also be used for Azure AD authentication without MFA, using an interactive protocol requiring the user to enter their username and password or integrated authentication (Windows credentials). When /UniversalAuthentication is set to True, no Azure AD authentication can be specified in SourceConnectionString (/scs). When /UniversalAuthentication is set to False, Azure AD authentication must be specified in SourceConnectionString (/scs). For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

Script Parameters and Properties

A SqlPackage.exe script action creates a Transact-SQL incremental update script that updates the schema of a target database to match the schema of a source database.

Help for the Script action

PARAMETER SHORT FORM VALUE DESCRIPTION

/Action: /a Script
Specifies the action to be performed.

/Diagnostics: /d {True|False}
Specifies whether diagnostic logging is output to the console. Defaults to False.

/DiagnosticsFile: /df {string}
Specifies a file to store diagnostic logs.

/OutputPath: /op {string}
Specifies the file path where the output files are generated.

/OverwriteFiles: /of {True|False}
Specifies if sqlpackage.exe should overwrite existing files. Specifying false causes sqlpackage.exe to abort the action if an existing file is encountered. Default value is True.

/Profile: /pr {string}
Specifies the file path to a DAC Publish Profile. The profile defines a collection of properties and variables to use when generating outputs.

/Properties: /p {PropertyName}={Value}
Specifies a name value pair for an action-specific property; {PropertyName}={Value}. Refer to the help for a specific action to see that action's property names. Example: sqlpackage.exe /Action:Publish /?.

/Quiet: /q {True|False}
Specifies whether detailed feedback is suppressed. Defaults to False.

/SourceConnectionString: /scs {string}
Specifies a valid SQL Server/Azure connection string to the source database. If this parameter is specified, it shall be used exclusively of all other source parameters.

/SourceDatabaseName: /sdn {string}
Defines the name of the source database.

/SourceEncryptConnection: /sec {True|False}
Specifies if SQL encryption should be used for the source database connection.

/SourceFile: /sf {string}
Specifies a source file to be used as the source of action. If this parameter is used, no other source parameter shall be valid.

/SourcePassword: /sp {string}
For SQL Server Auth scenarios, defines the password to use to access the source database.

/SourceServerName: /ssn {string}
Defines the name of the server hosting the source database.
/SourceTimeout: /st {int}
Specifies the timeout for establishing a connection to the source database in seconds.

/SourceTrustServerCertificate: /stsc {True|False}
Specifies whether to use SSL to encrypt the source database connection and bypass walking the certificate chain to validate trust.

/SourceUser: /su {string}
For SQL Server Auth scenarios, defines the SQL Server user to use to access the source database.

/TargetConnectionString: /tcs {string}
Specifies a valid SQL Server/Azure connection string to the target database. If this parameter is specified, it shall be used exclusively of all other target parameters.

/TargetDatabaseName: /tdn {string}
Specifies an override for the name of the database that is the target of sqlpackage.exe Action.

/TargetEncryptConnection: /tec {True|False}
Specifies if SQL encryption should be used for the target database connection.

/TargetFile: /tf {string}
Specifies a target file (that is, a .dacpac file) to be used as the target of action instead of a database. If this parameter is used, no other target parameter shall be valid. This parameter shall be invalid for actions that only support database targets.

/TargetPassword: /tp {string}
For SQL Server Auth scenarios, defines the password to use to access the target database.

/TargetServerName: /tsn {string}
Defines the name of the server hosting the target database.
/TargetTimeout: /tt {int}
Specifies the timeout for establishing a connection to the target database in seconds. For Azure AD, it is recommended that this value be greater than or equal to 30 seconds.

/TargetTrustServerCertificate: /ttsc {True|False}
Specifies whether to use SSL to encrypt the target database connection and bypass walking the certificate chain to validate trust.

/TargetUser: /tu {string}
For SQL Server Auth scenarios, defines the SQL Server user to use to access the target database.

/TenantId: /tid {string}
Represents the Azure AD tenant ID or domain name. This option is required to support guest or imported Azure AD users as well as Microsoft accounts such as outlook.com, hotmail.com, or live.com. If this parameter is omitted, the default tenant ID for Azure AD will be used, assuming that the authenticated user is a native user for this AD. However, in this case any guest or imported users and/or Microsoft accounts hosted in this Azure AD are not supported and the operation will fail. For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/UniversalAuthentication: /ua {True|False}
Specifies if Universal Authentication should be used. When set to True, the interactive authentication protocol is activated supporting MFA. This option can also be used for Azure AD authentication without MFA, using an interactive protocol requiring the user to enter their username and password or integrated authentication (Windows credentials). When /UniversalAuthentication is set to True, no Azure AD authentication can be specified in SourceConnectionString (/scs). When /UniversalAuthentication is set to False, Azure AD authentication must be specified in SourceConnectionString (/scs). For more information about Active Directory Universal Authentication, see Universal Authentication with SQL Database and SQL Data Warehouse (SSMS support for MFA).

/Variables: /v {VariableName}={Value}
Specifies a name value pair for an action-specific variable; {VariableName}={Value}. The DACPAC file contains the list of valid SQLCMD variables. An error results if a value is not provided for every variable.

Properties specific to the Script action

PROPERTY VALUE DESCRIPTION

/p: AdditionalDeploymentContributorArguments=(STRING)
Specifies additional deployment contributor arguments for the deployment contributors. This should be a semicolon-delimited list of values.

/p: AdditionalDeploymentContributors=(STRING)
Specifies additional deployment contributors, which should run when the dacpac is deployed. This should be a semicolon-delimited list of fully qualified build contributor names or IDs.

/p: AllowDropBlockingAssemblies=(BOOLEAN)
This property is used by SqlClr deployment to cause any blocking assemblies to be dropped as part of the deployment plan. By default, any blocking/referencing assemblies will block an assembly update if the referencing assembly needs to be dropped.

/p: AllowIncompatiblePlatform=(BOOLEAN)
Specifies whether to attempt the action despite incompatible SQL Server platforms.

/p: AllowUnsafeRowLevelSecurityDataMovement=(BOOLEAN)
Do not block data motion on a table that has Row Level Security if this property is set to true. Default is false.

/p: BackupDatabaseBeforeChanges=(BOOLEAN)
Backs up the database before deploying any changes.

/p: BlockOnPossibleDataLoss=(BOOLEAN 'True')
Specifies that the publish episode should be terminated if there is a possibility of data loss resulting from the publish operation.

/p: BlockWhenDriftDetected=(BOOLEAN 'True')
Specifies whether to block updating a database whose schema no longer matches its registration or is unregistered.

/p: CommandTimeout=(INT32 '60')
Specifies the command timeout in seconds when executing queries against SQL Server.

/p: CommentOutSetVarDeclarations=(BOOLEAN)
Specifies whether the declaration of SETVAR variables should be commented out in the generated publish script. You might choose to do this if you plan to specify the values on the command line when you publish by using a tool such as SQLCMD.EXE.

/p: CompareUsingTargetCollation=(BOOLEAN)
This setting dictates how the database's collation is handled during deployment; by default the target database's collation will be updated if it does not match the collation specified by the source. When this option is set, the target database's (or server's) collation should be used.

/p: CreateNewDatabase=(BOOLEAN)
Specifies whether the target database should be updated or whether it should be dropped and re-created when you publish to a database.
/p: DatabaseEdition=({Basic|Standard|Premium|Default} 'Default')
Defines the edition of an Azure SQL Database.

/p: DatabaseMaximumSize=(INT32)
Defines the maximum size in GB of an Azure SQL Database.

/p: DatabaseServiceObjective=(STRING)
Defines the performance level of an Azure SQL Database such as "P0" or "S1".

/p: DeployDatabaseInSingleUserMode=(BOOLEAN)
If true, the database is set to Single User Mode before deploying.

/p: DisableAndReenableDdlTriggers=(BOOLEAN 'True')
Specifies whether Data Definition Language (DDL) triggers are disabled at the beginning of the publish process and re-enabled at the end of the publish action.

/p: DoNotAlterChangeDataCaptureObjects=(BOOLEAN 'True')
If true, Change Data Capture objects are not altered.

/p: DoNotAlterReplicatedObjects=(BOOLEAN 'True')
Specifies whether objects that are replicated are identified during verification.

/p: DoNotDropObjectType=(STRING)
An object type that should not be dropped when DropObjectsNotInSource is true. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.

/p: DoNotDropObjectTypes=(STRING)
A semicolon-delimited list of object types that should not be dropped when DropObjectsNotInSource is true. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.

/p: DropConstraintsNotInSource=(BOOLEAN 'True')
Specifies whether constraints that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropDmlTriggersNotInSource=(BOOLEAN 'True')
Specifies whether DML triggers that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropExtendedPropertiesNotInSource=(BOOLEAN 'True')
Specifies whether extended properties that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropIndexesNotInSource=(BOOLEAN 'True')
Specifies whether indexes that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: DropObjectsNotInSource=(BOOLEAN)
Specifies whether objects that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database. This value takes precedence over DropExtendedProperties.

/p: DropPermissionsNotInSource=(BOOLEAN)
Specifies whether permissions that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish updates to a database.

/p: DropRoleMembersNotInSource=(BOOLEAN)
Specifies whether role members that are not defined in the database snapshot (.dacpac) file will be dropped from the target database when you publish updates to a database.

/p: DropStatisticsNotInSource=(BOOLEAN 'True')
Specifies whether statistics that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database.

/p: ExcludeObjectType=(STRING)
An object type that should be ignored during deployment. Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.
PROPERTY VALUE DESCRIPTION

/p: ExcludeObjectTypes=(STRING) A semicolon-delimited list of object


types that should be ignored during
deployment. Valid object type names
are Aggregates, ApplicationRoles,
Assemblies, AsymmetricKeys,
BrokerPriorities, Certificates,
ColumnEncryptionKeys,
ColumnMasterKeys, Contracts,
DatabaseRoles, DatabaseTriggers,
Defaults, ExtendedProperties,
ExternalDataSources,
ExternalFileFormats, ExternalTables,
Filegroups, FileTables, FullTextCatalogs,
FullTextStoplists, MessageTypes,
PartitionFunctions, PartitionSchemes,
Permissions, Queues,
RemoteServiceBindings,
RoleMembership, Rules,
ScalarValuedFunctions,
SearchPropertyLists, SecurityPolicies,
Sequences, Services, Signatures,
StoredProcedures, SymmetricKeys,
Synonyms, Tables, TableValuedFunctions,
UserDefinedDataTypes,
UserDefinedTableTypes,
ClrUserDefinedTypes, Users, Views,
XmlSchemaCollections, Audits,
Credentials, CryptographicProviders,
DatabaseAuditSpecifications,
DatabaseScopedCredentials, Endpoints,
ErrorMessages, EventNotifications,
EventSessions, LinkedServerLogins,
LinkedServers, Logins, Routes,
ServerAuditSpecifications,
ServerRoleMembership, ServerRoles,
ServerTriggers.

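As a sketch of how the semicolon-delimited ExcludeObjectTypes property might be passed on the command line (the server, database, and .dacpac names here are placeholders, not values from this reference):

```shell
# Compose a hypothetical publish command that skips security objects
# during deployment. Echoing the command lets you inspect it before
# running it on a machine where sqlpackage is installed.
ARGS="/Action:Publish /SourceFile:MyDatabase.dacpac"
ARGS="$ARGS /TargetServerName:yourserver /TargetDatabaseName:MyDatabase"
# Semicolon-delimited list of object types to ignore (see the table above)
ARGS="$ARGS /p:ExcludeObjectTypes=Logins;Users;Permissions"
echo "sqlpackage $ARGS"
```

Note that in an interactive shell the semicolons must stay inside quotes, as above, or the shell will treat them as command separators.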
| PROPERTY VALUE | DESCRIPTION |
| --- | --- |
| /p: GenerateSmartDefaults=(BOOLEAN) | Automatically provides a default value when updating a table that contains data with a column that does not allow null values. |
| /p: IgnoreAnsiNulls=(BOOLEAN 'True') | Specifies whether differences in the ANSI NULLS setting should be ignored or updated when you publish to a database. |
| /p: IgnoreAuthorizer=(BOOLEAN) | Specifies whether differences in the Authorizer should be ignored or updated when you publish to a database. |
| /p: IgnoreColumnCollation=(BOOLEAN) | Specifies whether differences in the column collations should be ignored or updated when you publish to a database. |
| /p: IgnoreColumnOrder=(BOOLEAN) | Specifies whether differences in table column order should be ignored or updated when you publish to a database. |
| /p: IgnoreComments=(BOOLEAN) | Specifies whether differences in the comments should be ignored or updated when you publish to a database. |
| /p: IgnoreCryptographicProviderFilePath=(BOOLEAN 'True') | Specifies whether differences in the file path for the cryptographic provider should be ignored or updated when you publish to a database. |
| /p: IgnoreDdlTriggerOrder=(BOOLEAN) | Specifies whether differences in the order of Data Definition Language (DDL) triggers should be ignored or updated when you publish to a database or server. |
| /p: IgnoreDdlTriggerState=(BOOLEAN) | Specifies whether differences in the enabled or disabled state of Data Definition Language (DDL) triggers should be ignored or updated when you publish to a database. |
| /p: IgnoreDefaultSchema=(BOOLEAN) | Specifies whether differences in the default schema should be ignored or updated when you publish to a database. |
| /p: IgnoreDmlTriggerOrder=(BOOLEAN) | Specifies whether differences in the order of Data Manipulation Language (DML) triggers should be ignored or updated when you publish to a database. |
| /p: IgnoreDmlTriggerState=(BOOLEAN) | Specifies whether differences in the enabled or disabled state of DML triggers should be ignored or updated when you publish to a database. |
| /p: IgnoreExtendedProperties=(BOOLEAN) | Specifies whether differences in the extended properties should be ignored or updated when you publish to a database. |
| /p: IgnoreFileAndLogFilePath=(BOOLEAN 'True') | Specifies whether differences in the paths for files and log files should be ignored or updated when you publish to a database. |
| /p: IgnoreFilegroupPlacement=(BOOLEAN 'True') | Specifies whether differences in the placement of objects in FILEGROUPs should be ignored or updated when you publish to a database. |
| /p: IgnoreFileSize=(BOOLEAN 'True') | Specifies whether differences in the file sizes should be ignored or whether a warning should be issued when you publish to a database. |
| /p: IgnoreFillFactor=(BOOLEAN 'True') | Specifies whether differences in the fill factor for index storage should be ignored or whether a warning should be issued when you publish. |
| /p: IgnoreFullTextCatalogFilePath=(BOOLEAN 'True') | Specifies whether differences in the file path for the full-text catalog should be ignored or whether a warning should be issued when you publish to a database. |
| /p: IgnoreIdentitySeed=(BOOLEAN) | Specifies whether differences in the seed for an identity column should be ignored or updated when you publish updates to a database. |
| /p: IgnoreIncrement=(BOOLEAN) | Specifies whether differences in the increment for an identity column should be ignored or updated when you publish to a database. |
| /p: IgnoreIndexOptions=(BOOLEAN) | Specifies whether differences in the index options should be ignored or updated when you publish to a database. |
| /p: IgnoreIndexPadding=(BOOLEAN 'True') | Specifies whether differences in the index padding should be ignored or updated when you publish to a database. |
| /p: IgnoreKeywordCasing=(BOOLEAN 'True') | Specifies whether differences in the casing of keywords should be ignored or updated when you publish to a database. |
| /p: IgnoreLockHintsOnIndexes=(BOOLEAN) | Specifies whether differences in the lock hints on indexes should be ignored or updated when you publish to a database. |
| /p: IgnoreLoginSids=(BOOLEAN 'True') | Specifies whether differences in the security identification number (SID) should be ignored or updated when you publish to a database. |
| /p: IgnoreNotForReplication=(BOOLEAN) | Specifies whether the not for replication settings should be ignored or updated when you publish to a database. |
| /p: IgnoreObjectPlacementOnPartitionScheme=(BOOLEAN 'True') | Specifies whether an object's placement on a partition scheme should be ignored or updated when you publish to a database. |
| /p: IgnorePartitionSchemes=(BOOLEAN) | Specifies whether differences in partition schemes and functions should be ignored or updated when you publish to a database. |
| /p: IgnorePermissions=(BOOLEAN) | Specifies whether differences in the permissions should be ignored or updated when you publish to a database. |
| /p: IgnoreQuotedIdentifiers=(BOOLEAN 'True') | Specifies whether differences in the quoted identifiers setting should be ignored or updated when you publish to a database. |
| /p: IgnoreRoleMembership=(BOOLEAN) | Specifies whether differences in the role membership of logins should be ignored or updated when you publish to a database. |
| /p: IgnoreRouteLifetime=(BOOLEAN 'True') | Specifies whether differences in the amount of time that SQL Server retains the route in the routing table should be ignored or updated when you publish to a database. |
| /p: IgnoreSemicolonBetweenStatements=(BOOLEAN 'True') | Specifies whether differences in the semicolons between T-SQL statements will be ignored or updated when you publish to a database. |
| /p: IgnoreTableOptions=(BOOLEAN) | Specifies whether differences in the table options will be ignored or updated when you publish to a database. |
| /p: IgnoreUserSettingsObjects=(BOOLEAN) | Specifies whether differences in the user settings objects will be ignored or updated when you publish to a database. |
| /p: IgnoreWhitespace=(BOOLEAN 'True') | Specifies whether differences in white space will be ignored or updated when you publish to a database. |
| /p: IgnoreWithNocheckOnCheckConstraints=(BOOLEAN) | Specifies whether differences in the value of the WITH NOCHECK clause for check constraints will be ignored or updated when you publish. |
| /p: IgnoreWithNocheckOnForeignKeys=(BOOLEAN) | Specifies whether differences in the value of the WITH NOCHECK clause for foreign keys will be ignored or updated when you publish to a database. |
| /p: IncludeCompositeObjects=(BOOLEAN) | Include all composite elements as part of a single publish operation. |
| /p: IncludeTransactionalScripts=(BOOLEAN) | Specifies whether transactional statements should be used where possible when you publish to a database. |
| /p: NoAlterStatementsToChangeClrTypes=(BOOLEAN) | Specifies that publish should always drop and re-create an assembly if there is a difference instead of issuing an ALTER ASSEMBLY statement. |
| /p: PopulateFilesOnFileGroups=(BOOLEAN 'True') | Specifies whether a new file is also created when a new FileGroup is created in the target database. |
| /p: RegisterDataTierApplication=(BOOLEAN) | Specifies whether the schema is registered with the database server. |
| /p: RunDeploymentPlanExecutors=(BOOLEAN) | Specifies whether DeploymentPlanExecutor contributors should be run when other operations are executed. |
| /p: ScriptDatabaseCollation=(BOOLEAN) | Specifies whether differences in the database collation should be ignored or updated when you publish to a database. |
| /p: ScriptDatabaseCompatibility=(BOOLEAN) | Specifies whether differences in the database compatibility should be ignored or updated when you publish to a database. |
| /p: ScriptDatabaseOptions=(BOOLEAN 'True') | Specifies whether target database properties should be set or updated as part of the publish action. |
| /p: ScriptDeployStateChecks=(BOOLEAN) | Specifies whether statements are generated in the publish script to verify that the database name and server name match the names specified in the database project. |
| /p: ScriptFileSize=(BOOLEAN) | Controls whether size is specified when adding a file to a filegroup. |
| /p: ScriptNewConstraintValidation=(BOOLEAN 'True') | At the end of publish, all of the constraints will be verified as one set, avoiding data errors caused by a check or foreign key constraint in the middle of publish. If set to False, your constraints are published without checking the corresponding data. |
| /p: ScriptRefreshModule=(BOOLEAN 'True') | Include refresh statements at the end of the publish script. |
| /p: Storage=({File\|Memory}) | Specifies how elements are stored when building the database model. For performance reasons the default is InMemory. For large databases, File backed storage is required. |
| /p: TreatVerificationErrorsAsWarnings=(BOOLEAN) | Specifies whether errors encountered during publish verification should be treated as warnings. The check is performed against the generated deployment plan before the plan is executed against your target database. Plan verification detects problems such as the loss of target-only objects (such as indexes) that must be dropped to make a change. Verification will also detect situations where dependencies (such as a table or view) exist because of a reference to a composite project, but do not exist in the target database. You might choose to do this to get a complete list of all issues, instead of having the publish action stop on the first error. |
| /p: UnmodifiableObjectWarnings=(BOOLEAN 'True') | Specifies whether warnings should be generated when differences are found in objects that cannot be modified, for example, if the file size or file paths were different for a file. |
| /p: VerifyCollationCompatibility=(BOOLEAN 'True') | Specifies whether collation compatibility is verified. |
| /p: VerifyDeployment=(BOOLEAN 'True') | Specifies whether checks should be performed before publishing that will stop the publish action if issues are present that might block successful publishing. For example, your publish action might stop if you have foreign keys on the target database that do not exist in the database project, and that causes errors when you publish. |
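To show how several of these /p: properties can be combined in one publish action, here is a sketch (the server, database, and .dacpac names are placeholders; the command is echoed so it can be reviewed before running it where sqlpackage is on the PATH):

```shell
# Compose a hypothetical publish command combining drop and ignore
# properties from the reference above.
CMD="sqlpackage /Action:Publish /SourceFile:MyDatabase.dacpac"
CMD="$CMD /TargetServerName:yourserver /TargetDatabaseName:MyDatabase"
CMD="$CMD /p:DropObjectsNotInSource=True"       # drop target-only objects
CMD="$CMD /p:IgnorePermissions=True"            # leave target permissions as-is
CMD="$CMD /p:IncludeTransactionalScripts=True"  # use transactional statements where possible
echo "$CMD"
```

Because DropObjectsNotInSource=True can remove objects from the target, pairing it with IgnorePermissions (and, if needed, IgnoreRoleMembership) is a common way to avoid clobbering environment-specific security settings.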