---
title: "MSSQLSERVER_701"
description: "MSSQLSERVER_701"
author: MashaMSFT
ms.author: mathoma
ms.reviewer: wiassaf
ms.date: 03/16/2021
ms.service: sql
ms.subservice: supportability
ms.topic: reference
helpviewer_keywords:
  - "701 (Database Engine error)"
---

# MSSQLSERVER_701
[!INCLUDE SQL Server]
## Details
Attribute | Value |
---|---|
Product Name | SQL Server |
Event ID | 701 |
Event Source | MSSQLSERVER |
Component | SQLEngine |
Symbolic Name | NOSYSMEM |
Message Text | There is insufficient system memory to run this query. |
> [!NOTE]
> This article is focused on SQL Server. For information on troubleshooting out of memory issues in Azure SQL Database, see Troubleshoot out of memory errors with Azure SQL Database.
## Explanation
Error 701 occurs when [!INCLUDEssNoVersion] has failed to allocate sufficient memory to run a query. Insufficient memory can be caused by a number of factors, including operating system settings, physical memory availability, other components using memory inside the SQL Server process, or memory limits on the current workload. In most cases, the transaction that failed isn't the cause of this error. Overall, the causes can be grouped into three categories:
### External or OS memory pressure
External pressure refers to high memory utilization coming from a component outside of the SQL Server process that leads to insufficient memory for SQL Server. You have to determine whether other applications on the system are consuming memory and contributing to low memory availability. SQL Server is one of the few applications designed to respond to OS memory pressure by cutting back its memory use. If some application or driver asks for memory, the OS sends a signal to all applications to free up memory, and SQL Server responds by reducing its own memory usage. Very few other applications respond, because they aren't designed to listen for that notification. So if SQL Server starts cutting back its memory usage, its memory pool shrinks, whichever components need memory may not get it, and you start getting 701 and other memory-related errors. For more information, see SQL Server Memory Architecture.
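If you want a quick read on OS-level memory state without leaving SQL Server, the sys.dm_os_sys_memory DMV exposes the same signals; a minimal sketch:

```sql
-- Snapshot of OS memory as SQL Server sees it; the *_signal_state columns
-- reflect the Windows low/high memory notifications described above.
SELECT total_physical_memory_kb / 1024     AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       system_high_memory_signal_state,
       system_low_memory_signal_state,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```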
### Internal memory pressure, not coming from SQL Server
Internal memory pressure refers to low memory availability caused by factors inside the SQL Server process. There are components that may run inside the SQL Server process but are "external" to the SQL Server engine. Examples include DLLs such as linked servers, SQLCLR components, extended procedures (XPs), and OLE Automation (`sp_OA*`). Others include anti-virus or other security programs that inject DLLs inside a process for monitoring purposes. An issue or poor design in any of these components could lead to large memory consumption. For example, consider a linked server caching 20 million rows of data that come from an external source into SQL Server memory. As far as SQL Server is concerned, no memory clerk will report high memory usage, but memory consumed inside the SQL Server process will be high. This memory growth from a linked server DLL, for example, would cause SQL Server to start cutting its memory usage (see above) and would create low-memory conditions for components inside SQL Server, causing errors like 701.
### Internal memory pressure, coming from SQL Server component(s)
Internal memory pressure coming from components inside the SQL Server engine can also lead to error 701. There are hundreds of components, tracked via memory clerks, that allocate memory in SQL Server. To resolve the error, you must identify which memory clerk(s) are responsible for the largest memory allocations. For example, if you find that the OBJECTSTORE_LOCK_MANAGER memory clerk is showing large memory allocations, you need to understand why the Lock Manager is consuming so much memory. You may find queries that acquire a large number of locks and optimize them by using indexes, shorten transactions that hold locks for long periods, or check whether lock escalation is disabled. Each memory clerk or component has a unique way of accessing and using memory. For more information, see memory clerk types and their descriptions.
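For the Lock Manager example, one hedged starting point is to count locks per session with sys.dm_tran_locks; sessions holding an outsized share of locks are candidates for better indexing or shorter transactions:

```sql
-- Count currently held/requested locks per session; large counts point at
-- the workloads feeding OBJECTSTORE_LOCK_MANAGER.
SELECT request_session_id,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY lock_count DESC;
```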
## User action
If error 701 appears occasionally or for a brief period, there may have been a short-lived memory issue that resolved itself, and you may not need to take action. However, if the error occurs multiple times, on multiple connections, and persists for seconds or longer, follow the steps below to troubleshoot further. The following list outlines general steps that will help in troubleshooting memory errors.
### Diagnostic tools and capture
The diagnostic tools that allow you to collect troubleshooting data are Performance Monitor, sys.dm_os_memory_clerks, and DBCC MEMORYSTATUS.
Configure and collect the following counters with Performance Monitor:
- Memory:Available MB
- Process:Working Set
- Process:Private Bytes
- SQL Server:Memory Manager (all counters)
- SQL Server:Buffer Manager (all counters)
Collect periodic outputs of this query on the impacted SQL Server:

```sql
SELECT pages_kb, type, name, virtual_memory_committed_kb, awe_allocated_kb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;
```
### PSSDIAG or SQL LogScout
An alternative, automated way to capture these data points is to use tools like PSSDIAG or SQL LogScout.
- If you use PSSDIAG, configure it to capture the Perfmon collector and the Custom Diagnostics\SQL Memory Error collector.
- If you use SQL LogScout, configure it to capture the Memory scenario.
The following sections describe more detailed steps for each scenario — external or internal memory pressure.
### External pressure: diagnostics and solutions
- To diagnose low memory conditions on the system outside of the SQL Server process, collect Performance Monitor counters. Investigate whether applications or services other than SQL Server are consuming memory on this server by looking at these counters:

  - Memory:Available MB
  - Process:Working Set
  - Process:Private Bytes
Here is a sample Perfmon log collection using PowerShell:

```powershell
clear
$serverName = $env:COMPUTERNAME
$Counters = @(
    ("\\$serverName" + "\Memory\Available MBytes"),
    ("\\$serverName" + "\Process(*)\Working Set"),
    ("\\$serverName" + "\Process(*)\Private Bytes")
)
Get-Counter -Counter $Counters -SampleInterval 2 -MaxSamples 1 |
ForEach-Object {
    $_.CounterSamples | ForEach-Object {
        [pscustomobject]@{
            TimeStamp = $_.TimeStamp
            Path      = $_.Path
            Value     = ([Math]::Round($_.CookedValue, 3))
        }
    }
}
```
- Review the System event log and look for memory-related errors (for example, low virtual memory).
- Review the Application event log for application-related memory issues.

Here is a sample PowerShell script that queries the System and Application event logs for the keyword "memory". Feel free to use other strings like "resource" for your search:

```powershell
Get-EventLog System -ComputerName "$env:COMPUTERNAME" -Message "*memory*"
Get-EventLog Application -ComputerName "$env:COMPUTERNAME" -Message "*memory*"
```
- Address any code or configuration issues for less critical applications or services to reduce their memory usage.
- If applications besides [!INCLUDEssNoVersion] are consuming resources, try stopping or rescheduling these applications, or consider running them on a separate server. These steps remove external memory pressure.
### Internal memory pressure, not coming from SQL Server: diagnostics and solutions
To diagnose internal memory pressure caused by modules (DLLs) inside SQL Server, use the following approach:
- If SQL Server isn't using Locked Pages in Memory (AWE API), then most of its memory is reflected in the Process:Private Bytes counter (`sqlservr` instance) in Performance Monitor. The overall memory usage coming from within the SQL Server engine is reflected in the SQL Server:Memory Manager:Total Server Memory (KB) counter. If you find a significant difference between Process:Private Bytes and Total Server Memory (KB), that difference is likely coming from a DLL (linked server, XP, SQLCLR, and so on). For example, if Private Bytes is 300 GB and Total Server Memory is 250 GB, then approximately 50 GB of the overall memory in the process is coming from outside the SQL Server engine (a comparison query sketch appears after this list).
- If SQL Server is using Locked Pages in Memory (AWE API), identifying the issue is more challenging because Performance Monitor doesn't offer AWE counters that track memory usage for individual processes. The overall memory usage coming from within the SQL Server engine is reflected in the SQL Server:Memory Manager:Total Server Memory (KB) counter. Typical Process:Private Bytes values may vary between 300 MB and 1-2 GB overall. If you find Process:Private Bytes significantly above this typical range, the difference is likely coming from a DLL (linked server, XP, SQLCLR, and so on). For example, if the Private Bytes counter is 4-5 GB and SQL Server is using Locked Pages in Memory (AWE), then a large part of Private Bytes may be coming from outside the SQL Server engine. This is an approximation technique.
- Use the Tasklist utility to identify any DLLs that are loaded inside the SQL Server process space:

```console
tasklist /M /FI "IMAGENAME eq sqlservr.exe"
```
- You could also use this query to examine loaded modules (DLLs) and see if anything unexpected is there:

```sql
SELECT * FROM sys.dm_os_loaded_modules;
```
- If you suspect that a linked server module is causing significant memory consumption, you can configure it to run out of process by disabling the Allow inprocess option. For more information, see Create Linked Servers. Not all linked server OLE DB providers can run out of process; contact the product manufacturer for more information.
- In the rare case that OLE Automation objects (`sp_OA*`) are used, you may configure the object to run in a process outside SQL Server by setting context = 4 (Local (.exe) OLE server only). For more information, see sp_OACreate.
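As referenced in the first bullet above, here is a sketch of the comparison between process-level memory and the engine's Total Server Memory; looking the counter up by name assumes the instance publishes the standard Memory Manager counters:

```sql
-- Process-level memory (includes DLLs loaded in-process) vs. the engine's
-- own Total Server Memory; a large gap suggests memory used outside the engine.
SELECT pm.physical_memory_in_use_kb / 1024   AS process_physical_mb,
       pm.locked_page_allocations_kb / 1024  AS locked_pages_mb,
       (SELECT cntr_value / 1024
        FROM sys.dm_os_performance_counters
        WHERE counter_name LIKE 'Total Server Memory%') AS total_server_memory_mb
FROM sys.dm_os_process_memory AS pm;
```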
### Internal memory usage by SQL Server engine: diagnostics and solutions
- Start collecting Performance Monitor counters for [!INCLUDEssNoVersion]: SQL Server:Buffer Manager and SQL Server:Memory Manager.
- Query the SQL Server memory clerks DMV multiple times to see where the highest consumption of memory occurs inside the engine:

```sql
SELECT pages_kb, type, name, virtual_memory_committed_kb, awe_allocated_kb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;
```
- Alternatively, observe the more detailed DBCC MEMORYSTATUS output and the way it changes when you see these error messages.
- If you identify a clear offender among the memory clerks, focus on addressing the specifics of memory consumption for that component. Here are several examples:

  - If the MEMORYCLERK_SQLQERESERVATIONS memory clerk is consuming memory, identify queries that use huge memory grants and optimize them via indexes, rewrite them (remove ORDER BY, for example), or apply query hints.
  - If a large number of ad hoc query plans are cached, the CACHESTORE_SQLCP memory clerk will use large amounts of memory. Identify non-parameterized queries whose query plans can't be reused and parameterize them by converting them to stored procedures, by using sp_executesql, or by using forced parameterization (see the sketch after this list).
  - If the object plan cache store CACHESTORE_OBJCP is consuming much memory, identify which stored procedures, functions, or triggers are using large amounts of memory and possibly redesign the application. Commonly this happens with large numbers of databases or schemas with hundreds of procedures in each.
  - If the OBJECTSTORE_LOCK_MANAGER memory clerk is showing large memory allocations, identify queries that apply many locks and optimize them by using indexes, shorten transactions that cause locks to be held for long periods in certain isolation levels, or check whether lock escalation is disabled.
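For the ad hoc plan example above, the parameterized rewrite could look like this sketch (the table and parameter names are hypothetical):

```sql
-- Instead of: SELECT * FROM Sales.Orders WHERE CustomerID = 42  (one plan per literal)
-- parameterize so one cached plan is reused for every customer ID:
EXEC sp_executesql
     N'SELECT * FROM Sales.Orders WHERE CustomerID = @cust',
     N'@cust INT',
     @cust = 42;

-- Or have the optimizer parameterize ad hoc statements database-wide
-- (evaluate carefully before enabling; [YourDb] is a placeholder):
ALTER DATABASE [YourDb] SET PARAMETERIZATION FORCED;
```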
### Quick relief that may make memory available
The following actions may free some memory and make it available to [!INCLUDEssNoVersion]:
- Check the following SQL Server memory configuration parameters and consider increasing max server memory if possible:

  - max server memory
  - min server memory

  Notice unusual settings and correct them as necessary. Account for increased memory requirements. Default settings are listed in Server memory configuration options.

- If you haven't configured max server memory, especially with Locked Pages in Memory, consider setting it to a particular value to leave some memory for the OS (see the sp_configure sketch after this list). For more information, see the Locked Pages in Memory server configuration option.
- Check the query workload: the number of concurrent sessions and currently executing queries. See whether there are less critical applications that can be stopped temporarily or moved to another SQL Server.
- If you're running SQL Server on a virtual machine (VM), ensure the memory for the VM isn't overcommitted. For ideas on how to configure memory for VMs, see the blog post Virtualization – Overcommitting memory and how to detect it within the VM and the article Troubleshooting ESX/ESXi virtual machine performance issues (memory overcommitment).
- You can run the following DBCC commands to free several [!INCLUDEssNoVersion] memory caches:

  - DBCC FREESYSTEMCACHE
  - DBCC FREESESSIONCACHE
  - DBCC FREEPROCCACHE

- If you're using Resource Governor, check the resource pool and workload group settings and see whether they're limiting memory too drastically.
- If the problem continues, you'll need to investigate further and possibly increase server resources (RAM).
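As referenced in the max server memory bullet above, here is a sketch of checking and setting the value with sp_configure; the 11000 MB figure is only an illustration, so size it for your own host:

```sql
-- Inspect current memory settings.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
EXEC sp_configure 'min server memory (MB)';

-- Cap SQL Server so the OS keeps some memory (illustrative value).
EXEC sp_configure 'max server memory (MB)', 11000;
RECONFIGURE;
```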
---

## Question
Hi experts,

We got several SQL error 701s today, and I found one document mentioning this (http://blogs.msdn.com/b/poojakamath/archive/2014/12/24/there-is-insufficient-system-memory-in-resource-pool-internal-to-run-this-query-error-701-severity-17-state-123-quot-on-ssb-target-server-cachestore-brokerto-is-consuming-memory.aspx), but it seems my Service Broker is OK. Anyway, how do I analyze the result of DBCC MEMORYSTATUS and identify which processes/components consume too much memory?

The SQL Server version is Microsoft SQL Server 2012 (SP2-CU7) (KB3072100) - 11.0.5623.0 (X64) Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.2 <X64> (Build 9200:), running on an HP DL980, 64 cores, 1 TB DRAM.
--- errorlog ---

```output
2016-02-16 04:26:39.26 spid2121 Error: 14421, Severity: 16, State: 1.
2016-02-16 04:26:39.26 spid2121 The log shipping secondary database DL980-3.TCP has restore threshold of 45 minutes and is out of sync. No restore was performed for 408579 minutes. Restored latency is 17 minutes. Check agent log and logshipping monitor information.
2016-02-16 04:27:59.42 spid2284 Failed allocate pages: FAIL_PAGE_ALLOCATION 207
2016-02-16 04:28:00.10 spid2284
Process/System Counts Value
—————————————- ———-
Available Physical Memory 23512313856
Available Virtual Memory 74109546496
Available Paging File 337114435584
Working Set 44043255808
Percent of Committed Memory in WS 100
Page Faults 26218401
System physical memory high 0
System physical memory low 0
Process physical memory low 0
Process virtual memory low 0
2016-02-16 04:28:00.10 spid2284
Memory Manager KB
—————————————- ———-
VM Reserved 8515836296
VM Committed 32772696
Locked Pages Allocated 950265632
Large Pages Allocated 6604800
Emergency Memory 1024
Emergency Memory In Use 1024
Target Committed 983040024
Current Committed 983038328
Pages Allocated 883841392
Pages Reserved 14696
Pages Free 4449992
Pages In Use 142901680
Page Alloc Potential 790986320
NUMA Growth Phase 2
Last OOM Factor 3
Last OS Error 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 0 KB
—————————————- ———-
VM Reserved 8515446968
VM Committed 32537016
Locked Pages Allocated 90342984
Pages Allocated 66683032
Pages Free 131632
Target Committed 122880000
Current Committed 122880000
Foreign Committed 359536
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 1 KB
—————————————- ———-
VM Reserved 35840
VM Committed 35860
Locked Pages Allocated 122844136
Pages Allocated 116877064
Pages Free 618016
Target Committed 122880000
Current Committed 122880000
Foreign Committed 1600924
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 2 KB
—————————————- ———-
VM Reserved 189456
VM Committed 35860
Locked Pages Allocated 122842716
Pages Allocated 116615568
Pages Free 757064
Target Committed 122880000
Current Committed 122878576
Foreign Committed 715576
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 3 KB
—————————————- ———-
VM Reserved 32768
VM Committed 32788
Locked Pages Allocated 122847208
Pages Allocated 117247616
Pages Free 578216
Target Committed 122880000
Current Committed 122880000
Foreign Committed 1373532
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 4 KB
—————————————- ———-
VM Reserved 32768
VM Committed 32788
Locked Pages Allocated 122847212
Pages Allocated 117208608
Pages Free 556024
Target Committed 122880000
Current Committed 122880000
Foreign Committed 745884
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 5 KB
—————————————- ———-
VM Reserved 32768
VM Committed 32788
Locked Pages Allocated 122847212
Pages Allocated 116304184
Pages Free 1117072
Target Committed 122880000
Current Committed 122880000
Foreign Committed 5846644
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 6 KB
—————————————- ———-
VM Reserved 32768
VM Committed 32788
Locked Pages Allocated 122846956
Pages Allocated 116393200
Pages Free 304096
Target Committed 122880000
Current Committed 122879744
Foreign Committed 4729832
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 7 KB
—————————————- ———-
VM Reserved 32768
VM Committed 32788
Locked Pages Allocated 122847208
Pages Allocated 116511992
Pages Free 387880
Target Committed 122880000
Current Committed 122880000
Foreign Committed 1912704
Away Committed 0
Taken Away Committed 0
2016-02-16 04:28:00.10 spid2284
Memory node Id = 64 KB
—————————————- ———-
VM Reserved 0
VM Committed 20
Locked Pages Allocated 0
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 0) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 139792
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 1) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 24
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 3) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 16
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 4) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 216
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 6) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 216
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (node 7) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 144
2016-02-16 04:28:00.10 spid2284
MEMORYCLERK_SQLGENERAL (Total) KB
—————————————- ———-
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
Pages Allocated 140408
2016-02-16 04:28:00.10 spid2284
2016-02-16 04:28:17.39 spid2284 Error: 701, Severity: 17, State: 123.
2016-02-16 04:28:17.39 spid2284 There is insufficient system memory in resource pool 'default' to run this query.
```

Please check the complete DBCC MEMORYSTATUS output above.
## Answers
Hi Shanky,

I googled and found this bug: https://support.microsoft.com/en-gb/kb/3074434. I'm preparing to install SP3. Any thoughts?

Aaron Morelli on Twitter (https://twitter.com/sqlcrossjoin/status/623525334441312256), Jul 21, 2015: "New Trace Flag: 8075 fixes VAS exhaustion (on x64!) problem after applying SQL 2012 SP2 CU7."
If you see continuous growth in virtual address space even after you apply the fix, you can determine which queries or operations are requesting large chunks of memory by using the page_allocated extended event. A sample script looks like this:

```sql
CREATE EVENT SESSION [memory_tracking] ON SERVER
ADD EVENT sqlos.page_allocated (
    ACTION (
        package0.callstack, sqlos.cpu_id, sqlos.task_address, sqlos.worker_address,
        sqlserver.database_id, sqlserver.query_hash, sqlserver.request_id,
        sqlserver.session_id, sqlserver.sql_text
    )
    WHERE ([number_pages] > (1))
)
ADD TARGET package0.event_file (SET filename = N'E:\Data\MSSQL11.MSSQLSERVER\MSSQL\Log\memory_tracking.xel')
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = PER_CPU,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
)
GO
```
Dennis,

Creating an event trace to find out which queries take a lot of VAS is a good idea, but I'm not sure about enabling the trace flag. If I were you, I would apply SP3 to be on the safer side.
---

Today I am going to discuss SQL Server memory issues that your database instance might face during bulk data loads or other memory-intensive operations. You may also face insufficient memory issues when you try to execute numerous queries on large data sets at the same time in SQL Server. In such circumstances, SQL Server throws error 701 or error 802 with the error text "There is insufficient system memory to run this query." Full error descriptions for both error codes are given below.
```output
Error 701, Severity: 17, State: 193.
There is insufficient system memory to run this query.
```

or

```output
Error 802, Severity: 17, State: 20
There is insufficient memory available in the buffer pool.
```
## Error 701 or Error 802: Root Cause
If SQL Server is running out of memory and has no more memory available to allocate to its transactions, it will generate SQL Server memory error 701 or error 802. SQL Server memory pressure or insufficient memory issues can arise due to huge data loads or other memory-intensive operations.

We should check the DBCC MEMORYSTATUS output to get the latest snapshot of the current memory status of the SQL Server. This command is very useful in troubleshooting memory issues in SQL Server. You might even fail to run this DBCC command because the system does not have enough memory; in that case, reduce some memory pressure by stopping or killing some processes, and then try to run the command again to gather in-depth information about SQL Server's memory state.
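For reference, the command itself is a single statement; when the full dump is too much to scan, a top-clerks query is a lighter-weight alternative:

```sql
DBCC MEMORYSTATUS;  -- full snapshot of SQL Server's internal memory state

-- A lighter-weight view of the top memory consumers by clerk:
SELECT TOP (10) type, name, pages_kb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;
```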
The transaction that failed due to memory pressure and generated error 701 or error 802 may or may not be the cause of the error. We should analyze the trends before reaching any conclusion.
## Solution
There are mainly two reasons behind SQL Server memory issues on most SQL Server systems: either your server has too little memory to handle the day-to-day workload, or you are running ad hoc transactions that require more memory for their execution, like huge bulk data loads or complex reports. If your SQL Server is running with insufficient memory during day-to-day activities, you should seriously evaluate the total workload and propose an optimum memory size that can handle your current workload.

If you are planning to run a heavy ad hoc transaction, you should temporarily increase the memory so it can handle the transaction smoothly, or you should run such transactions on a server that is built to handle such loads. You should also consider reducing the batch size to a value that is optimum for your system.

SQL Server is a memory-intensive application. We need to plan memory sizing for a server after proper evaluation of the server state and the overall load that will run on that SQL Server instance. But if you are facing a memory issue on one of your SQL Servers, first we need to troubleshoot it and find the root cause of the memory pressure. Here I am giving you a step-by-step method to find the actual culprits that are causing the memory issue, and possible solutions that can reduce some memory overhead.

First, make sure your SQL Server instance is running on a dedicated server and not alongside other applications. If other applications are also using your database server, make sure they are not eating most of the memory. If most of the system memory is taken by other applications, you need to seriously think about migrating those applications off this database server.

If you cannot migrate these applications to another server, the workaround for such issues is to set an appropriate max server memory value for your SQL Server instance. You can do it either by using sp_configure or by using the GUI. Read the attached article to change the max memory value for your SQL Server instance.
- Understanding SQL Server Memory Allocation and Role of MIN/MAX Server Memory Configuration
The next thing you should check is the server memory configuration. Check these SQL Server memory configuration parameters:

- max server memory
- min server memory
- min memory per query

Notice any unusual settings and fix them as necessary. All three options are covered in detail in the article attached above.
Now, check the workload in terms of the number of concurrent sessions and currently executing queries, along with their memory grants (a query sketch follows the link below). You should also focus on SQL Server transactions that request huge memory grants. Look at those transactions and gather a report of highly memory-intensive operations. Evaluate them and, if possible, optimize them so that they take minimal memory.
- Understanding Memory Wait type Resource_SEMAPHORE and how to fix it?
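As a starting point for that workload check, here is a sketch using the sys.dm_exec_query_memory_grants DMV; any session still waiting on its grant shows a NULL grant_time:

```sql
-- Active and pending query memory grants, largest requests first.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       grant_time,      -- NULL while the query is still waiting for its grant
       query_cost
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```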
You can also run the DBCC commands below to free several SQL Server memory caches. This will provide some relief for your currently running transactions, but in the longer run it will not help if your system needs more memory to process its transactions.
- DBCC FREESYSTEMCACHE
- DBCC FREESESSIONCACHE
- DBCC FREEPROCCACHE
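For reference, here are the commands as you would run them; use with care on a busy system, since they flush caches server-wide:

```sql
DBCC FREESYSTEMCACHE ('ALL');  -- releases unused entries from all caches
DBCC FREESESSIONCACHE;         -- flushes the distributed-query connection cache
DBCC FREEPROCCACHE;            -- clears the plan cache (forces recompiles)
```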
Another effective way to look into the memory status is to get the output of the DBCC MEMORYSTATUS command. It gives you a good starting point for identifying the culprit memory consumers.
We can also start collecting Performance Monitor counters for memory. Perfmon counters like SQL Server:Buffer Manager\Buffer Cache Hit Ratio, the SQL Server:Memory Manager counters, and Page Life Expectancy are very helpful in identifying memory pressure on a SQL Server system. Analyze the trend for these counters and then decide whether the issue occurs during specific operations or the system is continuously facing memory pressure.
Based on these analyses, you can choose to increase the system memory, increase the SQL Server memory allocation, or reschedule some of the memory-intensive transactions to off hours when load on the system is minimal. You can also reduce the batch size of a transaction to reduce memory pressure. This will certainly be helpful in troubleshooting memory issues.

Your transactions are not necessarily poorly designed just because SQL Server takes a lot of memory to process them. Sometimes even the system on which SQL Server runs might have a problem that ends up causing memory pressure. So you should consider all of these points while fixing any performance issue.
---

This question actually seems to come up every so often here. Mark has the correct (and most commonly employed) answer, but let me try to add what I can to make this clearer.

The error message is a little misleading. SQL Server tells you that it doesn't have enough memory to run the query, but what it really means is that it doesn't have enough memory to parse the query.
When it comes to running the query, SQL Server can use all it wants: gigabytes if necessary. Parsing is another story; the server has to build a parse tree, and there is only a very limited amount of memory available for that. I've never found the actual limit documented anywhere, but for a typical batch full of `INSERT` statements, it can't handle more than a few MB at a time.

So I am sorry to tell you this, but you cannot make SQL Server execute this script exactly as it is written. No way, no how, doesn't matter what settings you tweak. You do, however, have three options to work around it:
1. Use `GO` statements. `GO` is used by SSMS and various other tools as a batch separator. Instead of a single parse tree being generated for the entire script, individual parse trees are generated for each segment of the batch separated by `GO`. This is what most people do, and it is very simple to still make the script transactionally safe, as others have demonstrated and I won't repeat here.

2. Instead of generating a massive script to insert all of the rows, keep the data in a text file (i.e. comma-separated). Then import it using the bcp utility. If you need this to be "scriptable" - i.e. the import needs to happen in the same script/transaction as the `CREATE TABLE` statement - then use BULK INSERT instead. Although `BULK INSERT` is a non-logged operation, believe it or not, it can still be placed within a `BEGIN TRAN`/`COMMIT TRAN` block.

3. If you really, really want the `INSERT` to be a logged operation, and don't want the insertions to happen in batches, then you can use OPENROWSET to open up a text file, Excel file, etc. as an ad-hoc "table", and then insert this into your newly created table. I'm normally loath to ever recommend the use of `OPENROWSET`, but as this is clearly an administrative script, it's not really a major problem.
Previous comments suggest that you're uncomfortable with #1, although that may just be because of an incorrect assumption that it can't be done in a single transaction, in which case see Thomas's answer. But if you're dead-set on going another way, I suggest going with #2, creating a text file and using `BULK INSERT`. An example of a "safe" script would be:
```sql
BEGIN TRAN

BEGIN TRY
    CREATE TABLE MyTable (...)

    BULK INSERT MyTable
    FROM 'C:\Scripts\Data\MyTableData.txt'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\r\n',
        BATCHSIZE = 1000,
        MAXERRORS = 1
    )

    COMMIT
END TRY
BEGIN CATCH
    ROLLBACK
END CATCH
```
Hopefully this helps put you on the right track. I'm pretty sure this covers all of your available "in the box" options; beyond these, you'd have to start writing actual application programs or shell scripts to do the work, and I don't think that level of complexity is really warranted here.
---

This story begins, as so many often do, with a few key elements:
1) A client who experienced a problem on Thursday and waited until Friday evening to report it, even though the problem was "critical" enough to ask us to work on it all weekend if we couldn't solve it.
2) A co-worker who got locked out of the client machine, thereby getting me involved.
3) A relatively obscure error message that I hadn’t seen before.
In this case the error message was an Application Log error 701, "There is insufficient system memory in resource pool 'internal' to run this query", followed by lots of 18053/17300s. The next thing that happened was a series of DBCC MEMORYSTATUS dumps into the SQL Server error log.
The client reported an interesting problem: they were scripting stored procedures for modification (scripting out the ALTER PROCEDURE, editing it, and running it to apply changes), and they found that the process worked for a brief time, but after a few successful runs, executing the ALTER statements started throwing the "insufficient system memory in resource pool 'internal'" messages.
Looking at the MEMORYSTATUS dumps in the error logs I had available, I saw that the only unusual values were very large "Pages Allocated" values for the MEMORYCLERK_XE clerk.
The client had tried rebooting the server and restarting the MSSQLServer service multiple times before reporting the issue to us, and each time they found that it helped for a short time but within an hour or so the problem had returned.
This behavior (let’s reboot — reboot always fixes it!) resulted in the SQL Server Error Logs having been cycled out so that I couldn’t see the original log from the original failure.
—
BEGIN RANT
Every SQL Server should be set up to retain more than the default six logs — there are too many situations (like the one I am describing here) where six restarts (and therefore six historical logs) just isn’t good enough.
Best case is to set the retention to 40-50 logs *and* to create a SQL Server Agent job to run sp_cycle_errorlog on a nightly basis (say at 11:59:59 or 12:00:01 every night) which generates a new log every day. In most scenarios this will give you just over a month’s error logs, and in a scenario such as this one, it will give multiple days’ logs *and* allow for multiple MSSQLServer service restarts.
Even if you can’t/don’t want to implement the daily cycle job, you should definitely increase the retention to some larger number, such as 15 or 20 — the log files are relatively small text files, and even if your SQL service doesn’t restart regularly (because you don’t apply Windows patches, etc., in which case that is a different rant) the files don’t get that large. (They can be unwieldy to open in the log viewer, but they aren’t that large that they fill your disk.)
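For reference, a minimal sketch of both changes; the registry path shown is the commonly used instance-relative one, and 50 is just the retention count suggested above:

```sql
-- Raise the number of retained error logs (also settable in SSMS under
-- Management > SQL Server Logs > Configure).
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    50;

-- Cycle the error log; schedule this in a nightly SQL Server Agent job.
EXEC sp_cycle_errorlog;
```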
END RANT
—
Without the SQL Server Error Log from the time of the original failure, I went into the Windows Application and System Logs and the SQL Server Default Trace trying to find anything else of value. Unfortunately the only other item was one that I saw in the available logs, "Failed Allocate Pages: FAIL_PAGE_ALLOCATION 1", which unfortunately didn't tell me anything else other than that the failure was related to a Single Page Allocation (1) rather than a Multi-Page Allocation.
Possible scenarios I was able to rule out at this point:
1) SQL Server 2012 SP1 Memory Leak — Fixed by 2012SP1CU2, and they were already at 2012SP1CU8
2) Service Broker Misconfiguration — Service Broker was not enabled.
Having exhausted my Google-Fu, I turned to my next avenue: #sqlhelp on Twitter.
—
Note my rookie mistake in the middle (I blame it on working on the problem for a couple of hours on a Friday night when I wasn’t really the primary on-call, but that’s just ducking the blame) — I forgot to convert the pages value from the MEMORYSTATUS dump from 8KB pages into actual bytes/KB/MB/GB. I have done this correctly many times in the past, but in this case forgetting to make the conversion made what I already knew to be a bad situation…worse.
So I knew the instance had a memory problem — but why did this suddenly start, and what was the cause?
Trying to mitigate the issue, I recommended that the client increase the memory on their VM. (IMPORTANT NOTE — this was not in any way a VM vs. physical server issue!) The client increased the memory from 4GB to 12GB and I set the Max Server Memory to 11000MB (too high for long-term but set there to test against the 1409032 pages shown in the Tweet above), monitored the situation for another 30 minutes, and tried to go to bed, planning to review the situation in the morning.
No Good.
Pages came in overnight, and when I got online I found a tweet that provided a whole new insight on the situation.

That made me pause and consider something that I hadn't thought about: in all of my Googling of the various problems earlier in the evening, I hadn't considered *which* memory clerk (MEMORYCLERK_XE) was complaining.

I checked the Extended Events section of Management Studio, and sure enough, there was a user-defined Extended Events session named "Locks" alongside the default system_health session.

I checked the XE sessions DMV and found that the Locks session was almost as large as the system_health session:
```sql
SELECT name, total_buffer_size
FROM sys.dm_xe_sessions;
```
I opened the properties of the Locks session and, not being an XE person myself, took the most obvious route (for me anyway): I started comparing it to the default system_health session properties to see what was different. And then it hit me.

The system_health session writes to a ring buffer (memory) and an event file (disk). The ring buffer target is a memory target, and the event file target persists the completed buffers to disk.

The user-defined Locks session was set up to write to a ring buffer (OK, or so I thought) and a histogram (hmm), but not to an event file. I looked up the histogram and found that it too is a memory target. So the session was set up to write to memory and memory, and I have a memory problem... I see.
To investigate further, I scripted out the two sessions and looked at the definitions of the targets. For the system_health session:

```sql
ADD TARGET package0.event_file (SET filename = N'system_health.xel', max_file_size = (5), max_rollover_files = (4)),
ADD TARGET package0.ring_buffer (SET max_events_limit = (5000), max_memory = (4096))
WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 120 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = OFF, STARTUP_STATE = ON)
```
For the Locks session:

```sql
ADD TARGET package0.histogram (SET filtering_event_name = N'sqlserver.lock_acquired', source = N'sqlserver.query_hash'),
ADD TARGET package0.ring_buffer (SET max_events_limit = (1000000))
WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = ON, STARTUP_STATE = OFF)
```
Originally I thought the histogram was the issue, but after following up with Jonathan Kehayias (@sqlpoolboy/blog) of SQLskills (when you think of Extended Events, you should think of Jonathan) I found I was mistaken. I thought that the MAX_MEMORY=4096KB would throttle the ring buffer to 4 MB. Instead, Jonathan explained to me that the MAX_MEMORY parameter in the WITH clause just determines the "event session memory buffer space for buffering events before they are dispatched to the targets for consumption": the buffer *before* the ring buffer. There is a MAX_MEMORY parameter for the target, but it needs to be included in the SET clause (like max_events_limit) to affect the size of the target. If SET max_memory is specified, it causes the buffer to cycle, effectively having the events overwrite themselves, once the size is reached. With no parameter defined, there is *no* cap on the size of the ring buffer other than the indirect limit imposed by the max_events_limit parameter.
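To make that distinction concrete, here is a sketch (my own, not the client's actual session) of a Locks-style session whose ring buffer is capped in the SET clause; the event and cap values are illustrative:

```sql
CREATE EVENT SESSION [locks_capped] ON SERVER
ADD EVENT sqlserver.lock_acquired
ADD TARGET package0.ring_buffer (
    SET max_events_limit = (5000),   -- cap by event count
        max_memory = (4096)          -- cap the *target* itself at ~4 MB
)
WITH (MAX_MEMORY = 4096 KB);         -- sizes only the dispatch buffer, not the target
```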
As you can see above, the default system_health session has its ring buffer set to a max_events_limit of 5,000 events, while the user-defined "Locks" session has a max_events_limit of 1,000,000 events! (Say it with me: one Millllllion events). The catch Jonathan described to me is that events can be of any size, so you can't directly convert a certain number of events to any particular MB/GB size.
As he so often does, Jonathan went above and beyond my basic question and found he could easily reproduce this problem (the XE session causing the 701 errors) on a SQL 2012 instance in his lab using the same XE session configuration. As Jonathan notes in this blog post on sqlperformance.com, this is an issue especially with SQL 2012 and the way XE works in 2012.
—
The catch to all of this is that had I realized it was an XE problem right away, there might (*might*) have been enough data still present in the default trace for me to see who had "Object:Created" the "Locks" object, but by the time I realized it was an XE situation, the default trace had spun past the time of the original problem when the XE session was created, so there was no way (that I know of) to find the login responsible without someone confessing at the client.
The cautionary tale here is: don't use a tool in Production that you don't understand (and remember, almost every environment is Production to someone!). I have not been able to find out what the unknown admin was trying to accomplish with this session, but had they understood how XE sessions and ring buffers work, they never would have set a max_events_limit so high without an accompanying max_memory parameter to cap the size of the ring buffer itself.
Thanks to Denny Cherry (@mrdenny/blog), Robert Davis (@sqlsoldier/blog), @sql_handle, and especially Jonathan Kehayias for their insights and #sqlhelp — community is amazing and I am proud to be part of it.
This was several hours for me on a Friday night and a Saturday early morning — hopefully this will help someone else in a similar situation in the future!