
First published on TechNet on Mar 09, 2014

Jesse Esquivel here again with another post I hope you find useful. Today I'm going to talk about the analysis of a leak in paged pool kernel memory and the tools and methods used to diagnose the issue and find its root cause. A memory leak occurs when software (drivers) makes kernel memory allocations and never frees them; over time this can deplete kernel memory. This post is a great complement to Jerry Devore's post on diagnosing a leak in non paged pool using event viewer, poolmon, and perfmon!

Though paged pool depletion takes considerably more effort on an x64 based system, it's not impossible or unheard of for it to happen and cause a server to go down or into a hard hang state. Sometimes a low virtual memory condition can cause the operating system to become unstable and hang. The server we are looking at here is a virtual machine running Windows Server 2008 R2 SP1.

After about 90 hours, or 4-5 days, the server would become unresponsive, go into a hard hang state, and the services it was hosting would be unavailable, necessitating a reboot to restore functionality. Like clockwork, this would happen every 5-7 days and the server would need to be rebooted again. Since it's in a hard hang state you can't actually get into the server until after it has been rebooted, so the investigation started when the server wasn't actually exhibiting the problem... yet. All we had to go on was the fact that the box would go belly up almost weekly like clockwork.

First things first: Task Manager, a great place to start and get a very quick at-a-glance view of the health of the server. Having seen this strange behavior before I suspected a leak in kernel memory, but alas, we are not in the business of speculation. My suspicion zeroed me in on kernel memory consumption and handle count, as seen here (this is just a shot of a random VM for reference):
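The same counters highlighted there (paged and nonpaged kernel pool plus the system-wide handle count) can also be captured from code, which is handy if you want to log them on an interval while waiting for the next hang. Here is a minimal C sketch using the GetPerformanceInfo API; treat it as an illustration of where Task Manager gets those numbers rather than a polished monitoring tool:

```c
// Minimal sketch: read the system-wide counters Task Manager shows on the
// Performance tab (kernel paged/nonpaged pool and total handle count).
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

#pragma comment(lib, "psapi.lib")   // older SDKs; newer ones resolve via kernel32

int main(void)
{
    PERFORMANCE_INFORMATION pi = { 0 };
    pi.cb = sizeof(pi);

    if (!GetPerformanceInfo(&pi, sizeof(pi)))
    {
        printf("GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    // Pool sizes are reported in pages, so convert to MB for readability.
    unsigned long long pagedMB    = (unsigned long long)pi.KernelPaged    * pi.PageSize / (1024 * 1024);
    unsigned long long nonpagedMB = (unsigned long long)pi.KernelNonpaged * pi.PageSize / (1024 * 1024);

    printf("Paged pool:    %llu MB\n", pagedMB);
    printf("Nonpaged pool: %llu MB\n", nonpagedMB);
    printf("Handles:       %lu\n", pi.HandleCount);
    return 0;
}
```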

I took note of the values for kernel memory and the overall handle count of the system. Since everything appeared to be operating normally at the moment, I decided to do some post mortem investigation and reviewed the event logs around the time just before the server was rebooted. What I found was an entry in the System event log from the Resource-Exhaustion Detector, event ID 2004. This indicated that the server was low on virtual memory (kernel) during the time it was in the hard hang state, which appears to back up my suspicion of an issue with kernel memory on the box being depleted.
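Digging those 2004s out of the System log by hand works fine, but if you want to pull every occurrence in one shot, the Windows Event Log API will do it. This is a rough sketch; the provider name "Microsoft-Windows-Resource-Exhaustion-Detector" is an assumption based on how these events are typically sourced, so check the provider on your own events and adjust the XPath filter if it differs:

```c
// Rough sketch: list Resource-Exhaustion Detector events (ID 2004) from the
// System log, newest first. Link with wevtapi.lib; run elevated or as a member
// of Event Log Readers so the System log can be read.
#include <windows.h>
#include <winevt.h>
#include <stdio.h>
#include <stdlib.h>

#pragma comment(lib, "wevtapi.lib")

int main(void)
{
    // XPath filter: assumed provider name plus event ID 2004.
    LPCWSTR query =
        L"*[System[Provider[@Name='Microsoft-Windows-Resource-Exhaustion-Detector']"
        L" and EventID=2004]]";

    EVT_HANDLE hResults = EvtQuery(NULL, L"System", query,
                                   EvtQueryChannelPath | EvtQueryReverseDirection);
    if (!hResults)
    {
        wprintf(L"EvtQuery failed: %lu\n", GetLastError());
        return 1;
    }

    EVT_HANDLE hEvent = NULL;
    DWORD returned = 0;
    while (EvtNext(hResults, 1, &hEvent, INFINITE, 0, &returned) && returned == 1)
    {
        DWORD needed = 0, props = 0;
        // First call reports the required buffer size, second call renders the XML.
        EvtRender(NULL, hEvent, EvtRenderEventXml, 0, NULL, &needed, &props);
        LPWSTR xml = (LPWSTR)malloc(needed);
        if (xml && EvtRender(NULL, hEvent, EvtRenderEventXml, needed, xml, &needed, &props))
            wprintf(L"%ls\n\n", xml);
        free(xml);
        EvtClose(hEvent);
    }

    EvtClose(hResults);
    return 0;
}
```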
In a few hours I checked back with the server and saw that the paged pool kernel memory had increased, as well as the overall handle count for the system. To see which process has the highest handle count we can add the "Handles" column to Task Manager by clicking View | Select Columns and then sorting by it. Lsass.exe had the highest handle count of any process, but it was still fairly low. As a general rule of thumb, any process with a handle count higher than 10k should be investigated for a possible leak. I took note of the system time and the number of handles for lsass.exe.
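The Task Manager column is the quick way, but the same sweep can be scripted so you don't have to eyeball it every few hours. Here's a small C sketch that walks the process list with EnumProcesses and flags anything over the 10k rule of thumb mentioned above (the threshold is just that rule of thumb, not a hard limit):

```c
// Sketch: flag processes whose handle count exceeds the ~10k rule of thumb.
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

#pragma comment(lib, "psapi.lib")   // older SDKs; newer ones resolve via kernel32

int main(void)
{
    DWORD pids[4096], bytesReturned = 0;
    if (!EnumProcesses(pids, sizeof(pids), &bytesReturned))
        return 1;

    DWORD count = bytesReturned / sizeof(DWORD);
    for (DWORD i = 0; i < count; i++)
    {
        HANDLE hProc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pids[i]);
        if (!hProc)
            continue;   // some processes can't be opened without extra privileges

        DWORD handles = 0;
        WCHAR image[MAX_PATH] = L"<unknown>";
        DWORD imageLen = MAX_PATH;
        QueryFullProcessImageNameW(hProc, 0, image, &imageLen);

        if (GetProcessHandleCount(hProc, &handles) && handles > 10000)
            wprintf(L"PID %6lu  handles %7lu  %ls\n", pids[i], handles, image);

        CloseHandle(hProc);
    }
    return 0;
}
```

Running something like that on a schedule and keeping the output alongside the system time makes it easy to see whether one particular process is driving the overall handle count up.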
At this point we have a server with increasing paged pool consumption, an increasing overall handle count, and Event ID 2004s in the System event log during the time the server was in a hard hang. These are all classic indicators of kernel memory depletion. Now to find out what is consuming paged pool. There are two tools that we will use to analyze kernel memory consumption: Poolmon and Windows Performance Recorder. Poolmon can be used to view the kernel memory pools and their consumption, and it is included in the Windows Driver Kit. A great explanation of poolmon and how it works can be found here. In the aforementioned poolmon link, note the explanation of how pool tags work – a pool tag is a four-letter string that's used to label a pool allocation and should be unique to each driver that is making the allocation (keep this in mind – more on this later).
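To make the pool tag idea concrete, here is roughly what it looks like from the driver side. This is a hypothetical kernel-mode sketch (it needs the WDK to build, and the tag "Lky1" is made up purely for illustration): the driver passes a four-character tag to ExAllocatePoolWithTag, and that tag is what poolmon groups allocations under.

```c
// Hypothetical driver-side sketch of a tagged paged pool allocation.
#include <ntddk.h>

// Pool tags are conventionally written byte-reversed in source so they read
// the right way round ("Lky1") in poolmon output on x86/x64.
#define LEAKY_SAMPLE_TAG '1ykL'

VOID ExampleAllocation(VOID)
{
    // Allocate 256 bytes of paged pool, labeled with our tag.
    PVOID buffer = ExAllocatePoolWithTag(PagedPool, 256, LEAKY_SAMPLE_TAG);
    if (buffer == NULL)
        return;

    RtlZeroMemory(buffer, 256);

    // Every tagged allocation should eventually come back through
    // ExFreePoolWithTag; allocations that never do are what show up in
    // poolmon as a tag whose byte count only ever grows.
    ExFreePoolWithTag(buffer, LEAKY_SAMPLE_TAG);
}
```

Because the tag rides along with every allocation, a driver that reuses a generic or borrowed tag makes the poolmon data much harder to attribute, which is exactly why the tag should be unique to the driver making the allocation.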
