Channel: SQL Database Engine Blog

Monitor local storage usage on General Purpose Azure SQL Managed Instance


Azure SQL Managed Instance has a predefined amount of storage space that depends on the reserved storage and the number of vCores you choose when you provision the instance. In this post you will see how to monitor storage space on the Managed Instance.

In Managed Instance you can reach three storage limits:

  • The storage limit of the managed instance that you choose on the portal. This limit cannot exceed 8 TB in General Purpose or 4 TB in Business Critical. In this post you can find how to check storage usage and create alerts using SQL Agent.
  • (General Purpose only) The allocation limit of the underlying remote Azure premium storage, which is described here.
  • (General Purpose only) The storage limit of local SSD disk storage – in Managed Instance, tempdb is placed on a local SSD that can have 24 GB per vCore of space. If you reach this limit, you will not be able to create temporary objects in tempdb.
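The local SSD cap from the last bullet can be estimated directly on the instance. This is a minimal sketch; inferring the vCore count from the number of visible online schedulers is an assumption that should hold on Managed Instance:

```sql
-- Estimate the local SSD limit (24 GB per vCore on General Purpose).
-- Counting VISIBLE ONLINE schedulers as vCores is an assumption.
DECLARE @vCores int =
    (SELECT COUNT(*)
     FROM sys.dm_os_schedulers
     WHERE status = 'VISIBLE ONLINE');

SELECT local_ssd_limit_gb = @vCores * 24;
```

On an 8-vCore instance this returns 192, matching the 8*24GB calculation shown later in this post.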

If you reach a storage limit, you will need to increase the storage space or the number of vCores, or free some resources. It is important to add more storage before you reach the limits, because changing storage is done via the update-service-tier operation, which can take a few hours.

sys.dm_os_volume_stats provides information about the volumes on the Managed Instance, including total and used storage. You can find storage information using the following query:

SELECT volume_mount_point,
used_gb = CAST(MIN(total_bytes / 1024. / 1024 / 1024) AS NUMERIC(8,1)),
available_gb = CAST(MIN(available_bytes / 1024. / 1024 / 1024) AS NUMERIC(8,1)),
total_gb = CAST(MIN((total_bytes+available_bytes) / 1024. / 1024 / 1024) AS NUMERIC(8,1))
FROM sys.master_files AS f 
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id)
GROUP BY volume_mount_point;

If you execute this query, it will return the amount of total, available, and used storage on the remote Azure premium storage and the local SSD:

In this case, I have an 8-core instance that has 8*24GB = 192GB of local SSD storage, shown as the c:\WFRoot\ volume. The http:// volume shows how much storage you are using on remote Azure Premium Disk storage.

You should periodically monitor the results of this query and react if you see that available_gb is decreasing, because you might run out of space.

You can also create a SQL Agent job that periodically runs this query and sends you a warning using Database Mail (db_mail) when you approach the maximum storage space. In this post you can find how to check remote storage usage and create alerts using SQL Agent, so you can use a similar approach with the local storage.
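A sketch of what the job step body might look like is shown below. The 20 GB threshold, the Database Mail profile name, and the recipient address are all assumptions to adjust for your environment:

```sql
-- Hypothetical SQL Agent job step: warn when any volume is nearly full.
-- The profile name, recipient, and 20 GB threshold are placeholders.
IF EXISTS (
    SELECT 1
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs
    GROUP BY vs.volume_mount_point
    HAVING MIN(vs.available_bytes) / 1024. / 1024 / 1024 < 20
)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MyDbMailProfile',
        @recipients   = 'dba@example.com',
        @subject      = 'Managed Instance storage warning',
        @body         = 'A volume has less than 20 GB of free space.';
END
```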


Running Azure CosmosDB queries from SQL Server using ODBC driver


Azure Cosmos DB provides an ODBC driver that enables you to query Cosmos DB collections like classic databases. In this post you will see how to query Cosmos DB collections from SQL Server using Transact-SQL.

Why query Cosmos DB from SQL Server? Cosmos DB enables you to store documents and other non-relational types of data, and provides a SQL API that enables you to fetch, filter, and sort results. However, in some cases you need to run more complex queries that use GROUP BY, HAVING, or analytical functions, or to join your non-relational data from Cosmos DB with data that you are storing in SQL Server tables. In that case, you might want to leverage the full power of Transact-SQL to query the data in Cosmos DB.

Cosmos DB setup

First you need to set up a Cosmos DB account and add some documents to your collections. I have created a Cosmos DB account called odbc, with a database WWI and a collection Orders, and added three documents:

I’m accessing this collection using SQL API.

Driver setup

Now you need to install the ODBC Driver for Cosmos DB on the computer where SQL Server is installed. I'm using Microsoft Azure Cosmos DB ODBC 64-bit.msi for 64-bit Windows – 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.

Once you install this driver, you should set up an ODBC source in the System DSN and test the connection:

Querying CosmosDB

Once everything is set up, you can use the classic OPENROWSET function to query Cosmos DB data by specifying the Cosmos DB DSN in the connection:
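A minimal query of this form, assuming the DSN created above is named cosmosdb1 and that ad hoc distributed queries are enabled on the instance, looks like this:

```sql
-- Query the whole Orders collection through the Cosmos DB ODBC DSN.
-- 'cosmosdb1' is the DSN name from the driver-setup step above.
SELECT a.*
FROM OPENROWSET('MSDASQL',
                'DSN=cosmosdb1',
                'select * from Orders') AS a;
```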

As a result, you will get three rows representing the three documents in Cosmos DB. Missing fields will be returned as NULL. You can also filter results based on some criterion, for example:

SELECT a.*
FROM OPENROWSET('MSDASQL', 
'DSN=cosmosdb1', 
'select * from Orders where billto_Name = ''John Smith''') as a

One interesting thing to be aware of is that complex JSON objects such as shipTo or billTo from the first image are flattened, and every field is returned in the format <object name>_<field name>. Keep this in mind if you expect to get sub-objects as-is. Also, in my case, array properties such as tags are not mapped/returned.
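For instance, the flattened sub-object fields can be selected by their generated column names. In this sketch, billto_Name comes from the earlier filtered query, while shipto_City is a hypothetical field name used only for illustration:

```sql
-- Select flattened sub-object fields by their <object>_<field> names.
-- shipto_City is a hypothetical column; adjust to your own documents.
SELECT a.id, a.billto_Name, a.shipto_City
FROM OPENROWSET('MSDASQL',
                'DSN=cosmosdb1',
                'select * from Orders') AS a;
```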

Conclusion

The ODBC Driver for Cosmos DB enables you to run Transact-SQL queries on Cosmos DB data, which might be useful if you need rich data analytics on remote data stored in Cosmos DB. In this case, you can send a query to Cosmos DB with a predicate that filters the results, select only the fields that you need, and then do rich analysis in SQL Server using the full Transact-SQL language.

Note that this is possible only on SQL Server and not in Azure SQL Database, because the Cosmos DB driver is not installed on Azure SQL and you cannot add your own drivers. If you are interested in this feature on Azure SQL, you can submit the idea on the feedback site for SQL Database or SQL Managed Instance.

Identify log write limits on Azure SQL Managed Instance using QPI library


Azure SQL Managed Instance is a fully managed SQL Server instance hosted in the Azure cloud. Managed Instance introduces some limits, such as a max log write throughput, that can slow down your workload. In this post you will see how to identify a log write throughput issue on Managed Instance.

Azure SQL Managed Instance has several built-in resource limits, such as a max log write rate. The reason for introducing this log write limit is the necessity to ensure that log backups can keep up with the incoming data.

In this post, I'm using the QPI library to easily analyze wait statistics on Managed Instance. To install the QPI library, go to the installation section and download the SQL script for your version of SQL Server (it supports Azure SQL/SQL Server 2016+ because it depends on Query Store views).

Disclaimer: The QPI library is an open source library provided as-is and not maintained by Microsoft. There are no guarantees that the results are correct or that there are no bugs in the calculations. This is a helper library that can help you analyze the performance of your Managed Instance more easily, but you can do the same job by looking directly at the DMVs.

With this library, I can easily take a snapshot of wait statistics, wait some time and read the wait statistics values:

exec qpi.snapshot_wait_stats;

waitfor delay '00:00:03';

select *
from qpi.wait_stats
order by wait_time_s desc;

Example of the result returned by this query is shown below:

In this example, you can see that the tasks on the Managed Instance are waiting on the INSTANCE_LOG_RATE_GOVERNOR wait type (with a link to the description of the wait type). You can go further and identify the queries that are causing these issues, as described here: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2019/03/05/analyzing-wait-statistics-on-managed-instance/

As an alternative, you can analyze IO performance on the Managed Instance to identify bottlenecks using the following procedure/view:

  • The qpi.snapshot_file_stats procedure takes a snapshot of IO statistics from the sys.dm_io_virtual_file_stats DM function. You MUST take the snapshot because sys.dm_io_virtual_file_stats contains cumulative information, and you need to calculate the sample over a recent time interval.
  • The qpi.file_stats view gets the file statistics since the last snapshot. This view contains several calculations, such as IOPS and throughput, based on the data from sys.dm_io_virtual_file_stats.

The following query summarizes write throughput (MB/s) and IOPS on the instance, categorized per file type (LOG/DATA):
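A sketch of such a summary is shown below. It follows the snapshot-then-read pattern from the wait statistics example; the qpi.file_stats column names used here (type, write_mbps, iops) are assumptions about the library's schema, so check them against your installed version:

```sql
-- Take an IO snapshot, wait, then summarize writes per file type.
-- Column names (type, write_mbps, iops) are assumed; verify in qpi.file_stats.
exec qpi.snapshot_file_stats;

waitfor delay '00:00:10';

SELECT [type],
       write_mb_per_s = SUM(write_mbps),
       total_iops     = SUM(iops)
FROM qpi.file_stats
GROUP BY [type];
```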

You can see in the result that my instance is writing 47.5 MB/s to the log file and performing 1226 IOPS in total. If you look at the description of resource limits in the Azure documentation, you will see that ~48 MB/s is the log rate limit that I'm hitting, and this is the reason why I see the dominant INSTANCE_LOG_RATE_GOVERNOR wait statistic.

This analysis tells you that you are running some log-write-heavy operation that uses most of the log write throughput – in this case, I'm running several REBUILD INDEX operations.

Our new Azure SQL Database blog site is live

Follow us at our new blog site going forward: https://techcommunity.microsoft.com/t5/Azure-SQL-Database/bg-p/Azure-SQL-Database...