


Collector Not Responding

Created by Kevin Sperrazza, last modified by Rodim Suarez_ret on Mar 06, 2017


Overview

While jobs are running, collections start to fail with a red exclamation point and the message:

Collector Not Responding

 

2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Broker[PHL-VMAN-Collector] Scheduler [27]: TIMED_WAITING, block: 0.000 (1x), wait: 97189.954 (23830x), time: 5.294
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Transport: tcp:///10.25.0.191:61616 [30]: BLOCKED, block: 96879.708 (153x), wait: 0.000 (12x), time: 2.097, blocked on java.lang.Object@75af1035 owned by [44] BrokerService[PHL-VMAN-Collector] Task-2
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:1] Scheduler [42]: WAITING, block: 0.000 (0x), wait: 0.000 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:3] Scheduler [146]: WAITING, block: 0.000 (0x), wait: 97072.530 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:4] Scheduler [255]: WAITING, block: 0.000 (0x), wait: 96722.522 (1x), time: 0.000

Environment

VMAN 6.X

Cause 

In larger environments with federated collectors, the collector does not have enough memory to send its collected data to the master appliance. This causes the ActiveMQ messaging layer to back up and block processing. Messages like the ones shown above appear in the logs.
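The telltale sign in the log excerpt above is the ActiveMQ transport thread stuck in the BLOCKED state with a large accumulated block time. A minimal sketch of how to check for it, demonstrated on the sample line from the Overview (the log path in the comment is an assumption, not from this article):

```shell
# The signature of this issue is an "ActiveMQ Transport" thread reporting
# BLOCKED with a large block time. Against a live appliance you might run
# (log path is an assumption -- substitute your collector's log location):
#   grep 'ActiveMQ Transport.*BLOCKED' /var/log/vman/vman.log
# Demonstrated here on the sample line from the Overview; grep -c prints
# the count of matching lines.
sample='ActiveMQ Transport: tcp:///10.25.0.191:61616 [30]: BLOCKED, block: 96879.708 (153x)'
echo "$sample" | grep -c 'ActiveMQ Transport.*BLOCKED'
```

A count of zero means the collections are failing for some other reason and this article's fix likely does not apply.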

Resolution

  1. Run the following commands from the command line on the VMAN master appliance to raise the ActiveMQ broker memory limit and restart the service.
    sudo sed -i "s/>41943040</>141943040</g" /etc/hyper9/broker.properties
    sudo service tomcat6 restart
  2. Restart any federated collectors so that any in-progress jobs are cleared.
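The sed command in step 1 rewrites the broker memory limit in place, from 41943040 bytes (40 MiB) to 141943040 bytes (~135 MiB). A minimal sketch of what the substitution does, shown on a sample line rather than the real file (the memoryLimit element name here is illustrative, not taken from the actual broker.properties contents):

```shell
# Same substitution as step 1, demonstrated on a sample line instead of
# editing /etc/hyper9/broker.properties in place with -i. The pattern
# ">41943040<" matches the old limit between tags and replaces it with
# ">141943040<". The memoryLimit element name is illustrative only.
echo '<memoryLimit>41943040</memoryLimit>' | sed 's/>41943040</>141943040</g'
```

Matching on ">41943040<" rather than the bare number keeps the substitution from touching that digit sequence anywhere else in the file.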

 
