
Collector Not Responding

Created by Kevin Sperrazza, last modified by Rodim Suarez on Mar 06, 2017


Overview

While jobs are running, collections start to fail, showing a red exclamation point and the message:

Collector Not Responding

The collector's self-monitoring log shows blocked ActiveMQ broker threads, for example:

2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Broker[PHL-VMAN-Collector] Scheduler [27]: TIMED_WAITING, block: 0.000 (1x), wait: 97189.954 (23830x), time: 5.294
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Transport: tcp:///10.25.0.191:61616 [30]: BLOCKED, block: 96879.708 (153x), wait: 0.000 (12x), time: 2.097, blocked on java.lang.Object@75af1035 owned by [44] BrokerService[PHL-VMAN-Collector] Task-2
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:1] Scheduler [42]: WAITING, block: 0.000 (0x), wait: 0.000 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:3] Scheduler [146]: WAITING, block: 0.000 (0x), wait: 97072.530 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:4] Scheduler [255]: WAITING, block: 0.000 (0x), wait: 96722.522 (1x), time: 0.000
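
If you need to pull these entries from a collector yourself, a plain text search of the self-monitoring output for blocked ActiveMQ transport threads is usually enough. The log file name below is a placeholder, not a documented VMAN path:

    grep "BLOCKED" <collector-self-monitoring-log>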

Environment

VMAN 6.X

Cause

In larger environments with federated collectors, the ActiveMQ broker does not have enough memory allocated to move the collected data over to the master appliance. This causes the ActiveMQ messaging layer to back up and block processing, producing the log messages shown in the Overview above.
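
Before applying the fix, you can confirm that the broker configuration still contains the default 41943040-byte (40 MB) limit that the resolution below raises. The file path and value come from the resolution steps; the exact setting name wrapped around the value may differ between VMAN versions, so treat this as a quick sanity check:

    grep -n "41943040" /etc/hyper9/broker.properties

No output means the value has already been changed (or lives elsewhere), and the sed command in the resolution will have no effect.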

Resolution

  1. Run the following commands from the command line on the VMAN master appliance to raise the ActiveMQ broker memory limit and restart the service:
    sudo sed -i "s/>41943040</>141943040</g" /etc/hyper9/broker.properties
    sudo service tomcat6 restart
  2. Restart any federated collectors so that any current jobs are cleared out. (A quick verification sketch follows these steps.)
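
To verify the change, the commands below check that the new value is in place and that the Tomcat service came back up; this is only a sketch, reusing the file path and service name from the steps above:

    grep -n "141943040" /etc/hyper9/broker.properties
    sudo service tomcat6 status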

 
