
Collector Not Responding

Created by Kevin Sperrazza, last modified by Rodim Suarez on Mar 06, 2017


Overview

While jobs are running, collections begin to fail with a red exclamation point and the following message:

Collector Not Responding

 

2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Broker[PHL-VMAN-Collector] Scheduler [27]: TIMED_WAITING, block: 0.000 (1x), wait: 97189.954 (23830x), time: 5.294
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQ Transport: tcp:///10.25.0.191:61616 [30]: BLOCKED, block: 96879.708 (153x), wait: 0.000 (12x), time: 2.097, blocked on java.lang.Object@75af1035 owned by [44] BrokerService[PHL-VMAN-Collector] Task-2
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:1] Scheduler [42]: WAITING, block: 0.000 (0x), wait: 0.000 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:3] Scheduler [146]: WAITING, block: 0.000 (0x), wait: 97072.530 (1x), time: 0.000
2016-01-14 10:43:00,484 [SelfMonitoring]  INFO com.solarwinds.vman.selfmonitoring.ThreadMonitoring:77 -   ActiveMQConnection[ID:phlprvavmanc001.kmhp.com-44110-1452688979465-3:4] Scheduler [255]: WAITING, block: 0.000 (0x), wait: 96722.522 (1x), time: 0.000

Environment

VMAN 6.X

Cause 

In larger environments, the federated collectors do not have enough memory to send their data to the master appliance, which causes the ActiveMQ layer to back up and block processing. When this happens, messages like those shown above appear in the logs.
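
Because the fix in the Resolution below edits a value in /etc/hyper9/broker.properties, you can first confirm that the 41943040 value the sed command expects to find is actually present. A minimal pre-check, assuming the same file path used in the Resolution:

    grep -n "41943040" /etc/hyper9/broker.properties

If this returns no match, the value has most likely already been changed on that appliance.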

Resolution

  1. Run the following commands from the command line on the VMAN master appliance. This raises the configured value in /etc/hyper9/broker.properties from 41943040 to 141943040 and then restarts Tomcat (a quick verification sketch follows these steps).
    sudo sed -i "s/>41943040</>141943040</g" /etc/hyper9/broker.properties
    sudo service tomcat6 restart
  2. Restart any federated collectors so that any current jobs are cleared out.
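
After the restart, you can confirm the new value was written and that the service came back up. A minimal verification sketch, assuming the same file path and service name used in step 1:

    sudo grep -n "141943040" /etc/hyper9/broker.properties
    sudo service tomcat6 status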

 
