
LEM is queueing and dropping event data

Created by Justin Rouviere, last modified by Abdul.Aziz on Jan 20, 2017



Overview

The manager.log file shows that LEM begins to queue data and then drops event data.

The following is an example of manager.log reporting dropped alerts:

10:40:49 PST 2016) II:INFO [SnakQ] {EventPump:Rules:75} :postAlert:Total alerts dropped: 792000
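As a quick sketch of how to gauge the problem from a copy of manager.log (this assumes you have copied the log off the appliance; the sample line below is created locally only so the commands are runnable as-is):

```shell
# Create a local sample of the kind of line manager.log contains.
# On a real system, run the grep commands against your copied manager.log instead.
cat > manager.log <<'EOF'
10:40:49 PST 2016) II:INFO [SnakQ] {EventPump:Rules:75} :postAlert:Total alerts dropped: 792000
EOF

# How many drop notices the log contains.
grep -c "Total alerts dropped" manager.log        # prints 1 for this sample

# The most recent running totals of dropped alerts (last field of each notice).
grep "Total alerts dropped" manager.log | awk '{print $NF}' | tail -5
```

A rising total across successive notices means LEM is still dropping data, not recovering from a one-time spike.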


Environment

All LEM versions


Cause

LEM queues and eventually drops data when it receives more alerts than the resources reserved for LEM can handle. For example:

  • Something in the environment, such as an attack or a misconfigured or broken device, is causing a spike in alert data.
  • New nodes were added and LEM does not have the resources necessary to handle the new load.
  • Rules are firing often, either because of an influx of alert data or because a rule is misconfigured.
  • The known HSQL DB maximum file size limitation of 16 GB causes /tmp to become full, and running cleantemp does not help.
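For the HSQL /tmp case in particular, a standard df check shows whether /tmp is full (a sketch; it assumes you have shell access to the appliance, otherwise use the CMC diskusage command described in the resolution steps):

```shell
# Show /tmp usage. A Use% of 100% is consistent with the HSQL 16 GB
# maximum file size limitation having been hit.
df -h /tmp
```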


Resolution

Increase the resources reserved for LEM or reduce the incoming alert data:

  1. Verify which queue is filling and causing the issue:
    1. Log in to CMC.
      • Virtual Console: Click Advanced Configuration and then press Enter.
      • SSH Client: Log in using your CMC credentials.
    2. Type appliance and press Enter.
    3. Type diskusage and press Enter.
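As a console session, the steps above look roughly like this (the prompt shown is an illustrative placeholder, not an exact capture of CMC output):

```text
cmc> appliance     # enter the appliance command menu
cmc> diskusage     # report partition usage and queue depths
```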
  2. Check the following areas to identify the cause and the appropriate solution:

    1. The LEM partition is at 100%.
    2. The Logs/Data partition is at 100%.
    3. Temp is at 10% or more and the Database Queues show a high number of alerts waiting in memory.
    4. The Rules Queue and/or EPIC Rules Queue shows a high number of alerts waiting in memory.
    5. The Console Queue shows a high number of alerts waiting in memory.


Other Solution:

Apply 6.2.1 HF2 or upgrade to 6.3.1. Before you do, back up your settings and rules and take a proper backup of the LEM appliance.

If you are still having an issue, contact SolarWinds Support.

