

A Node Shows Incorrect CPU & Memory Utilization



Memory gauges for some nodes show high or 100% memory utilization in the SolarWinds Web Console.


  • All SolarWinds software versions that use memory gauges.



This issue occurs when NPM, SAM, or another Orion module polls the wrong OID by default for the monitored device. The device may respond to multiple OIDs for CPU and memory, or it may have multiple CPUs.
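As background on why the polled OID matters: a memory gauge is typically a ratio of two counters from the same MIB table, so polling values from the wrong table (or the wrong table row) produces a misleading percentage. The sketch below, with hypothetical sample values rather than data from a real device, shows how a gauge percentage is commonly derived from the standard HOST-RESOURCES-MIB hrStorage columns:

```python
# Sketch: how a memory gauge percentage is typically derived from
# HOST-RESOURCES-MIB hrStorage values. OID prefixes shown for reference;
# the sample numbers below are hypothetical, not from a real device.

HR_STORAGE_SIZE_OID = "1.3.6.1.2.1.25.2.3.1.5"  # hrStorageSize (in allocation units)
HR_STORAGE_USED_OID = "1.3.6.1.2.1.25.2.3.1.6"  # hrStorageUsed (in allocation units)

def memory_utilization_percent(used_units: int, size_units: int) -> float:
    """Percent used; hrStorageAllocationUnits cancels out of the ratio."""
    if size_units <= 0:
        raise ValueError("hrStorageSize must be positive")
    return 100.0 * used_units / size_units

# Hypothetical polled values for a physical-memory hrStorage row:
used, size = 786432, 1048576  # e.g. 3 GB used of 4 GB at 4096-byte units
print(round(memory_utilization_percent(used, size), 1))  # 75.0
```

If a poller instead reads a row for a different storage area (swap, a disk partition, or a vendor-specific memory OID), the same arithmetic yields a gauge that looks pegged at or near 100% even though physical memory is fine, which is why switching to a more suitable poller resolves the symptom.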


Note: The following resolution applies to a single node only; it is not a global change for all devices/nodes.




Change the poller in use to one better suited to the device's memory information (in this example, a Check Point device):


1. On the main Orion server, launch pollerchecker.exe from C:\Program Files (x86)\SolarWinds\Orion\ (or your own installation directory).
2. Select the node.
3. Select the poller type (Memory in this instance).
4. Click Detect Pollers.
5. Right-click each supported poller and complete a Poll Now.
6. Check whether any of the supported pollers produce the expected results.
7. Enable the correct poller by selecting it and clicking Add/Replace Pollers.
8. Click Yes in the confirmation dialog.
9. After the next scheduled poll of the node, return to the SolarWinds Web Console and verify the CPU and memory gauges.

