
Scalability Engine Guidelines for SolarWinds Orion Products

 
Updated July 17, 2018 
 


This Orion Platform topic applies to the following products:

DPAIM, EOC, ETS, IPAM, LM, NCM, NPM, NTA, SAM, SRM, UDT, VMAN, VNQM, WPM

Your Orion Platform product installation consists of a Main Polling Server (Orion Web Console and the Main Polling Engine) and the Orion Database Server. Polling engines gather device statistics and store the information on the Orion Database Server. The Main Polling Server reads the stored information from the Orion Database Server.

Check out this short video (:58) on the Enterprise-class scalability of Orion products.

Your Main Polling Engine polls up to a defined maximum number of elements, depending on the Orion Platform product. Too many elements on a single polling engine can have a negative impact on your SolarWinds server. When the maximum polling throughput of a single polling engine is reached, the polling intervals are automatically increased to handle the higher load. To keep the default polling intervals, you need to add polling capacity.

What is a scalability engine?

"Scalability engine" is a general term that refers to any server that extends the monitoring capacity of your SolarWinds installation, such as Additional Polling Engines, Additional Web Servers, or High Availability backups.

How do I improve the monitoring capacity?

  • Install an Additional Polling Engine to disperse the load between multiple servers.
  • Install an Additional Web Server to balance the load on the Main Polling Server, for example, when many users are logged in at the same time, or in secured environments where your Orion Platform products are behind a firewall.
  • For NPM and SAM, poll more elements by using multiple Additional Polling Engine licenses on a polling engine ("stacking").

Additional Polling Engines can cope with database outages, handle bad or slow connections, and work over a WAN. Additional Polling Engines use Microsoft Message Queuing (MSMQ) to store up to 1 GB of polled data locally per polling engine, preventing data loss when the connection between the polling engine and the database is temporarily lost.

How do I know that I need to scale my Orion Platform product?

When you exceed a polling engine's capacity, Orion Platform products notify you in several ways.

  • See the Notifications in the Orion Web Console.


  • Review your alerts. If the Polling rate limit exceeded out-of-the-box alert is enabled, the alert sends an email and adds an entry to All Active Alerts.


  • Go to the Polling Settings page in the Orion Web Console. Click Settings > All Settings, and then click Polling Settings under Thresholds & Polling.

 

Available Scalability Engine deployment options

Additional information on scalability improvements

Requirements

Before you begin, be sure your Additional Polling Engines and Additional Web Servers meet the following requirements.

  • Install or upgrade the Main Polling Engine before installing or upgrading any Additional Polling Engine.
  • The latency (RTT) between each Additional Polling Engine and the database server should be below 300 ms; degradation may begin around 200 ms, depending on your utilization. Ping the Orion SQL Server to find the current latency. A reliable, static connection between the database server and the regions is required.
  • Installing an Additional Polling Engine and an Additional Web Server on the same host is not supported.
  • Recommended hardware specifications for Additional Polling Engines in XL deployments (up to 100 Additional Polling Engines in a single deployment):
    • 4-core processor or better
    • 16 GB of memory
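The 300 ms latency guidance above can be spot-checked even where ICMP ping is blocked: timing a TCP handshake to the SQL Server port gives a comparable round-trip figure. This is an illustrative sketch in Python; the host name is a placeholder for your own Orion database server.

```python
import socket
import time

def tcp_rtt_ms(host, port, attempts=5, timeout=5.0):
    """Estimate round-trip latency by timing TCP handshakes to host:port."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; close immediately
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# Example (placeholder host): measure from the Additional Polling Engine
# to the Orion SQL Server on the default SQL Server port.
# rtt = tcp_rtt_ms("orion-sql.example.com", 1433)
# print(f"Average RTT: {rtt:.1f} ms")  # should stay well below 300 ms
```

A TCP handshake includes one full round trip, so the measured value tracks ICMP ping closely; repeated samples smooth out transient spikes.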

Additional Polling Engine Ports

Additional Polling Engines have the same port requirements as the Main Polling Engine. The following ports are the minimum required for an Additional Polling Engine to ensure the most basic functions.

  • Port 1433 (TCP), SolarWinds Collector Service, Outbound: communication between the Additional Polling Engine and the Orion database.
  • Port 1801 (TCP), Message Queuing WCF, Inbound: MSMQ messaging from the Orion Web Console to the Additional Polling Engine.
  • Port 5671 (TCP), RabbitMQ, Bidirectional: SSL-encrypted RabbitMQ messaging from the Orion Web Console to the Additional Polling Engine.
  • Port 17777 (TCP), SolarWinds Information Service, Bidirectional: communication between the Additional Polling Engine and the Orion Web Console.
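Before bringing an Additional Polling Engine online, you can verify that the ports listed above are reachable from the APE host. A minimal sketch in Python; the target host is a placeholder, and only outbound TCP reachability is tested (it does not validate inbound firewall rules or the services behind the ports).

```python
import socket

# Minimum ports an Additional Polling Engine requires (see the list above).
REQUIRED_PORTS = {
    1433: "SolarWinds Collector Service (Orion database)",
    1801: "Message Queuing WCF (MSMQ)",
    5671: "RabbitMQ (SSL)",
    17777: "SolarWinds Information Service",
}

def check_ports(host, ports=REQUIRED_PORTS, timeout=3.0):
    """Return {port: True/False} for TCP connectivity from this host."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example (placeholder host):
# for port, ok in check_ports("orion-main.example.com").items():
#     print(port, REQUIRED_PORTS[port], "open" if ok else "BLOCKED")
```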

Additional Web Server Ports

  • Port 80 (TCP), World Wide Web Publishing Service, Inbound: default Additional Web Server port. Open the port to enable communication from your computers to the Orion Web Console. If you specify any port other than 80, you must include that port in the URL used to access the web console. For example, if you specify an IP address of 192.168.0.3 and port 8080, the URL used to access the web console is http://192.168.0.3:8080.
  • Port 1433 (TCP), SolarWinds Collector Service, Outbound: communication between the SolarWinds server and the SQL Server. Open the port from your Orion Web Console to the SQL Server.
  • Port 1801 (TCP), Message Queuing, Outbound: MSMQ messaging from the Additional Web Server to the Main Polling Engine.
  • Port 5671 (TCP), RabbitMQ, Outbound: SSL-encrypted RabbitMQ messaging from the Additional Web Server to the Additional Polling Engine.
  • Port 17777 (TCP), SolarWinds Collector Service, Outbound: Orion module traffic. Open the port to enable communication from your polling engine to the web server, and from the web server to your polling engine.

Centralized Deployment with Additional Polling Engines

In a Centralized Deployment with Additional Polling Engines, data is polled locally in each region and stored centrally on the database server in the primary region. All licenses are shared in a Centralized Deployment. Use this deployment if your organization requires centralized IT management and localized collection of monitoring data.

Users can view all network data from the Orion Web Console in the Primary Region where the main SolarWinds Orion server is installed.

Users can log in to a local Web Console if an Additional Web Server is installed in a secondary region.

With Centralized Deployment, you can:

  • Add, delete, and modify nodes, users, alerts and reports centrally, on the Main Orion Server.
  • Scale all installed Orion Platform products. Scaling one Orion Platform product increases the capacity of the other Orion Platform products. For example, installing an Additional Polling Engine for NPM also increases the polling capacity for SAM.
  • Specify the polling engine that collects data for monitored nodes and reassign nodes between polling engines.

All Key Performance Indicators (KPIs), such as Node Response Times, are calculated from the perspective of the polling engine. For example, the response time for a monitored node in Region 2 is equal to the round trip time from the Additional Polling Engine in Region 2 to that node.


centralizeddeployment.png

 

For additional information on Centralized Deployment, see the SolarWinds Orion Platform Scalability Tech Tip.

Distributed Deployment with Main and Additional Polling Engines in regions

In a Distributed Deployment each region is licensed independently, and data is polled and stored locally in each region. Scale each region independently by adding Additional Polling Engines. You can access monitoring data from each region in a central location with the Enterprise Operations Console (EOC).

SolarWinds Enterprise Operations Console must be installed and licensed if you want to view aggregated data from multiple SolarWinds Orion servers in a Distributed Deployment.

With Distributed Deployment you can:

  • Use local administration to manage, administer, and upgrade each region independently.
  • Create, modify, or delete nodes, users, alerts, and reports separately in each region.
  • Export and import objects, such as alert definitions, Universal Device Pollers, and SAM templates between instances.
  • Mix and match modules and license sizes as needed. For example:
    • Region 1 has deployed NPM SL500, NTA for NPM SL500, UDT 2500, and 3 additional polling engines
    • Region 2 has deployed NPM SLX, SAM AL1500, UDT 50,000, and 3 additional polling engines
    • Region 3 has deployed NPM SL100 only and 3 additional polling engines

EOC 2.0 leverages a function called SWIS Federation to query for specific data only when needed. This method allows EOC to display live, on-demand data from all monitored SolarWinds Sites, and it does not store historical data.

distributeddeployment.png

Centralized Deployment with Remote Office Pollers

To deploy your Orion Platform product in numerous remote locations when you do not need to scale up your installation, use a Remote Office Poller (ROP, also called a mini-poller), a limited-capacity version of the Additional Polling Engine.

See Scalability Engine Guidelines by product to verify if your product supports Remote Office Pollers.

Select a Remote Office Poller by the number of elements you need to poll:

  • ROP250 polls up to 250 elements.
  • ROP1000 polls up to 1000 elements.

Follow the steps for installing and activating Additional Polling Engines to deploy Remote Office Pollers.

Deploy Scalability Engines

  1. Review the Scalability Engine deployment options.
  2. Review the requirements and pre-flight checklist.

    See the requirements for large deployments in Orion multi-module system guidelines.

  3. On Orion Platform 2017.3 MSP4 and later, use the Orion Installer to deploy Additional Polling Engines and Additional Web Servers.

    If you are running an earlier version of the Orion Platform, you can install or upgrade scalability engines with the Orion Scalability Engine Installer.

Pre-flight checklist

Before you install or upgrade an Additional Polling Engine in your environment, complete the following actions:

  • Be sure your product uses Orion Platform 2016.2 or later. To find the Orion Platform version, log in to the Orion Web Console and check the version in the footer. If the version is 2016.1 or earlier, see Orion Bundle for additional servers.
  • Install or upgrade the Main Polling Engine.
  • Ensure product versions match between the Primary Polling Engine, all Additional Polling Engines, and Additional Web Servers. This includes the version of .NET. Version numbers are listed in the footer of the Orion Web Console. If your product versions do not match, you must upgrade before you can install Additional Polling Engines.
  • Verify port requirements for your SolarWinds product.
  • Acquire a user name and password with administrative privileges to the Orion Web Console on your Main Polling Engine.
  • Be sure the Additional Polling Engine uses the same SQL database as the Main Polling Engine.
  • Verify the latency between your Orion database server and the Additional Polling Engine. Performance degradation can begin around 200 ms.
  • If you configured an alert with a Send Email action to trigger on a node monitored by an Additional Polling Engine, confirm that the Additional Polling Engine can access your SMTP server.

Add the IP address of your Additional Polling Engine to the SNMP service settings of your monitored Windows servers (on the Security tab), and make sure that the following options are set:

  • Ensure that a case-sensitive community name has been specified.
  • Ensure that Accept SNMP packets from any host is selected, OR that the new polling engine is listed in the Accept SNMP packets from these hosts list.
  • Ensure that your network devices allow SNMP access from the new polling engine. On Cisco devices, for example, you can modify the Access Control List.

Deploy Additional Polling Engines and Additional Web Servers

With Orion Platform 2017.3 MSP4 and later, use the Orion Installer to install Additional Polling Engines and Additional Web Servers.

Stack licenses (NPM and SAM)

If your NPM or SAM polling engines have enough resources available, you can stack the licenses. Stacking licenses enhances the polling capacity of your Main Polling Engine or Additional Polling Engine. A stack requires only one IP address, regardless of the number of additional polling engines.

If the resources on your polling engine are already constrained and you cannot allocate additional resources, consider installing an Additional Polling Engine.

Assign multiple licenses to a polling engine with the web-based License Manager. The maximum number of licenses you can apply to a single server depends on the Orion Platform product.

  1. In the Orion Web Console, click Settings > All Settings > License Manager.
  2. Click Add/Upgrade License, enter the activation key and registration details, and click Activate.

    The activated license with activation details displays in the License Manager.

  3. In the License Manager, select the license, and click Assign.
  4. Select a polling engine, and click Assign.

    The license is stacked on the selected polling engine, and its polling capacity is extended.

DameWare in Centralized Mode

DameWare Scalability Engine Guidelines

Scalability Options

150 concurrent Internet Sessions per Internet Proxy

5,000 Centralized users per Centralized Server

10,000 Hosts in Centralized Global Host list

5 MRC sessions per Console

Database Performance Analyzer (DPA)

DPA Scalability Engine Guidelines

Scalability Options

Less than 20 database instances monitored on a system with 1 CPU and 1 GB RAM

21 - 50 database instances monitored on a system with 2 CPU and 2 GB RAM

51 - 100 database instances monitored on a system with 4 CPU and 4 GB RAM

101 - 250 database instances monitored on a system with 4 CPU and 8 GB RAM

More than 250 database instances monitored through Central Server mode

See Link together separate DPA servers in the DPA Administrator Guide

Engineer's Toolset on the Web

Engineer's Toolset on the Web Scalability Engine Guidelines

Scalability Options

45 active tools per Engineer's Toolset on the Web instance

3 tools per user session

1 active tool per mobile session

10 nodes monitored at the same time per tool

48 interfaces monitored at the same time per tool

12 metrics rendered at same time per tool

Enterprise Operations Console (EOC)

EOC Scalability Engine Guidelines

Scalability Options

EOC 2.1 was successfully tested with 30 SolarWinds Sites with a total of 1 million elements (nodes, interfaces, and volumes).

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the EOC server and any remote Orion servers or Additional Polling Engines.

Connectivity

For redundancy, multiple EOC servers can be connected to the same SolarWinds Site.

Latency

SolarWinds recommends that latency between the EOC server and connected SolarWinds Sites be less than 200 ms (both ways). EOC can function at higher latencies, but performance might be affected. 

EOC was tested with up to 500 ms of latency and remained functional, but performance (specifically with reports) was affected.

IP Address Manager (IPAM)

IPAM Scalability Engine Guidelines

Scalability Options

3 million IPs per SolarWinds IPAM instance

1 million IPs per APE.

Log and Event Manager (LEM)

LEM Scalability Engine Guidelines

Scalability Options

Maximum 120 million events per day

10,000 rule hits per day

Log Manager for Orion (LM)

LM Scalability Engine Guidelines

Scalability Options

1000 events per second

NetFlow Traffic Analyzer (NTA)

NTA Scalability Engine Guidelines

Remote Office Poller

No

Main Polling Engine Limits

50k FPS per polling engine

For more information, see Network Performance Monitor (NPM)

Scalability Options

Up to 300k FPS

For more information, see Network Performance Monitor (NPM)

WAN and/or Bandwidth Considerations

1.5% - 3% of total traffic seen by exporter

Other Considerations

See Flow environment best practices in the NTA Getting Started Guide.

Network Configuration Manager (NCM)

NCM Scalability Engine Guidelines

Remote Office Poller

No

Main Polling Engine Limits

~10K devices

Scalability Options

Each SolarWinds NCM instance can support up to 100 additional polling engines (a maximum introduced in Orion Platform 2017.3 SP3).

Each additional polling engine can support ~10K devices. However, the number of devices in the entire environment (the primary engine plus all APEs) cannot exceed ~30K.

Examples:

  • The primary engine and two APEs could support 10K devices each, for a total of 30K devices.
  • The primary engine and 20 APEs could support around 1,400 devices each, but the combined total cannot exceed the 30K maximum. 
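The per-engine budget in these examples follows from taking the smaller of the ~10K per-engine limit and an even share of the ~30K environment-wide limit. A quick illustration; the constants mirror the approximate limits quoted above, and the function name is our own:

```python
NCM_TOTAL_DEVICE_CAP = 30_000   # approximate environment-wide limit
NCM_PER_ENGINE_CAP = 10_000     # approximate per-engine limit

def ncm_devices_per_engine(num_apes):
    """Devices each engine (primary + APEs) can hold under both caps,
    assuming devices are spread evenly across engines."""
    engines = 1 + num_apes
    return min(NCM_PER_ENGINE_CAP, NCM_TOTAL_DEVICE_CAP // engines)

# ncm_devices_per_engine(2)  -> 10000 (matches the "10K devices each" example)
# ncm_devices_per_engine(20) -> 1428  (the "around 1,400 devices each" example)
```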

Integrated standalone mode

Network Performance Monitor (NPM)

NPM Scalability Engine Guidelines

Stackable Polling Engines

NPM 12.0 and later: up to four total polling engines can be installed on a single server, for example, one Primary Polling Engine with up to three Additional Polling Engines, or four Additional Polling Engines on the same server.

NPM 11.5.3 and earlier: up to three polling engines on a single server

A stack requires only 1 IP address, regardless of the number of APEs.

Remote Office Poller

NPM 10.4 and later

ROP250 supports 250 elements

ROP1000 supports 1000 elements

Main Polling Engine Limits

~12k elements at standard polling frequencies:

  • Node and interface up/down: 2 minutes/poll
  • Node statistics: 10 minutes/poll
  • Interface statistics: 9 minutes/poll

25 - 50 concurrent Orion Web Console users

SNMP Traps: ~500 messages per second (~1.8 million messages/hr)

Syslog: 700 - 1,000 messages/second (2.5 - 3.6 million messages/hr)

To monitor more than ~400,000 elements, consider using SolarWinds Enterprise Operations Console.

Scalability Options

One polling engine for every ~12,000 elements. See How is SolarWinds NPM licensed?

Starting with Orion Platform 2017.3 SP3, a maximum of 100 polling engines per instance with up to 100,000 elements monitored per instance.

Starting with Orion Platform 2018.2, a maximum of 100 additional polling engines per instance with up to 400,000 elements monitored per instance.
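Working from the one-engine-per-~12,000-elements rule, the number of polling engines needed for a given element count is a simple ceiling division. A sketch under the Orion Platform 2018.2 limits quoted above; the function name is illustrative:

```python
import math

ELEMENTS_PER_ENGINE = 12_000          # approximate per-engine capacity
MAX_ELEMENTS_PER_INSTANCE = 400_000   # Orion Platform 2018.2 and later

def npm_polling_engines_needed(elements):
    """Rough count of polling engines for a given number of elements."""
    if elements > MAX_ELEMENTS_PER_INSTANCE:
        raise ValueError("Exceeds the per-instance limit; consider EOC.")
    return max(1, math.ceil(elements / ELEMENTS_PER_ENGINE))

# npm_polling_engines_needed(12_000) -> 1
# npm_polling_engines_needed(36_000) -> 3
```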

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the primary SolarWinds NPM server and any Additional Polling Engines that are connected over a WAN. Most traffic related to monitoring is between an Additional Polling Engine and the SolarWinds Orion database.

NetPath™ Scalability

The scalability of NetPath™ depends on the complexity of the paths you are monitoring, and the interval at which you are monitoring them.

In most network environments:

  • You can add up to 100 paths per polling engine.
  • You can add 10 - 20 paths per probe.

    See NetPath requirements for more information.

Other Considerations

How much bandwidth does SolarWinds require for monitoring?

See "Orion Server Hardware Requirements" in the SolarWinds NPM Administrator Guide

Orion Agents

Orion Agents Scalability Engine Guidelines

Scalability Options

1,000 agents per polling server

Patch Manager

Patch Manager Scalability Engine Guidelines

Scalability Options

1,000 nodes per automation server

1,000 nodes per SQL Server Express instance (SQL Server does not have this limitation)

SQL Express is limited to 10 GB storage. For large deployments, SolarWinds recommends using remote SQL.

Quality of Experience (QoE)

QoE Scalability Engine Guidelines

Scalability Options

1,000 QoE sensors per polling server

50 applications per sensor

Server & Application Monitor (SAM)

SAM Scalability Engine Guidelines

Stackable Polling Engines

SAM 6.3 and later

2 polling engines can be installed on a single server.

Remote Office Poller

SAM 5.5 and later

Main Polling Engine Limits

~8 - 10K component monitors per polling engine

25 - 50 concurrent Orion Web Console users

Scalability Options

One Additional Polling Engine (APE) for every 8 - 10K component monitors.

Maximum of 150K component monitors per primary SAM installation (1 Orion server + 14 APEs). 

Starting in Orion Platform 2018.2, a maximum of 100 APEs per instance.

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the Orion server and any APEs connected over a WAN. Most traffic related to monitoring is between an APE and the Orion database server.

Bandwidth requirements depend on the size of the relevant component monitor. Based on 67.5 kB / WMI poll and a 5-minute polling frequency, the estimate is 1.2 Mbps for 700 component monitors. See How do SNMP and WMI polling compare?
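The 1.2 Mbps figure can be reproduced from the stated assumptions (67.5 kB per WMI poll, a 5-minute interval, and 700 component monitors); decimal kilobytes are assumed here:

```python
# Reproduce the WMI bandwidth estimate quoted above.
BYTES_PER_POLL = 67.5 * 1000   # 67.5 kB per WMI poll (decimal kB assumed)
POLL_INTERVAL_S = 5 * 60       # 5-minute polling frequency
MONITORS = 700                 # component monitors

bits_per_second = MONITORS * BYTES_PER_POLL * 8 / POLL_INTERVAL_S
mbps = bits_per_second / 1_000_000
print(f"{mbps:.2f} Mbps")  # prints "1.26 Mbps", consistent with ~1.2 Mbps
```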

WMI is best suited for environments where latency is < 100ms. See also WMI Security Blog.

Serv-U FTP Server and MFT Server

Serv-U FTP Server and MFT Server Scalability Engine Guidelines

Scalability Options

500 simultaneous FTP and HTTP transfers per Serv-U instance

50 simultaneous SFTP and HTTPS transfers per Serv-U instance

For more information, see the Serv-U Distributed Architecture Guide.

Storage Resource Monitor (SRM)

SRM Scalability Engine Guidelines

Stackable Polling Engines

No, one APE instance can be deployed on a single host

Remote Office Poller

Yes

Poller remotability is a feature that uses MSMQ to store up to ~1 GB of polled data locally per poller when the connection between the polling engine and the database is temporarily lost.

Main Polling Engine Limits

Maximum of 40K LUNs per polling engine (primary or additional)

25 - 50 concurrent Orion Web Console users

Scalability Options

Use Additional Polling Engines for horizontal scaling

A single SRM instance can handle up to 160K LUNs. For larger environments, contact SolarWinds for further assistance.

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the primary SRM server and any Additional Polling Engines that are connected over a WAN. Most traffic related to monitoring is between an Additional Polling Engine and the SolarWinds database.

User Device Tracker (UDT)

UDT Scalability Engine Guidelines

Remote Office Poller

No

Main Polling Engine Limits

100k ports

Scalability Options

1 Additional Polling Engine per 100k additional ports

Maximum of 500k ports per instance (1 Primary Polling Engine and 4 Additional Polling Engines)

WAN and/or Bandwidth Considerations

None

Other Considerations

UDT version 3.1 supports scheduled port discovery.

In UDT version 3.1, the maximum discovery size is 2,500 nodes/150,000 ports.

Virtualization Manager (VMAN)

VMAN Scalability Engine Guidelines

Scalability Options

(VMAN in Orion Platform)

1 Additional Polling Engine per 3000 monitored virtual machines

Scalability Options (Legacy VMAN appliance)

1 Additional federated collector per 3000 monitored virtual machines

Main Polling Engine system requirements

The main polling engine should be upgraded to meet greater polling demands as the virtual environment increases in size. See the VMAN Deployment Sizing Guide.

 

Deployment Sizing Guide

For VMAN-specific sizing and scaling guidelines, see the VMAN Deployment Sizing Guide.

VoIP & Network Quality Manager (VNQM)

VNQM Scalability Engine Guidelines

Remote Office Poller

No

Primary Polling Engine Limits

~5,000 IP SLA operations

~200k calls/day with 20k calls/hour spike capacity

Scalability Options

1 Additional Polling Engine per 5,000 IP SLA operations and 200,000 calls per day

Maximum of 15,000 IP SLA operations and 200,000 calls per day per SolarWinds VNQM instance (SolarWinds VNQM + 2 VNQM Additional Polling Engines)

WAN and/or Bandwidth Considerations

Between Call Manager and VNQM: 34 Kbps per call, based on estimates of ~256 bytes per CDR and CMR and based on 20k calls per hour

Web Help Desk (WHD)

WHD Scalability Engine Guidelines

Deployments with fewer than 20 techs

You can run Web Help Desk on a system with:

  • A supported 32-bit operating system
  • A 32-bit Java Virtual Machine (JVM)
  • 4 GB RAM (up to 3.7 GB for the tech sessions, JVM support, operating system, and any additional services you need to run on the system)

This configuration supports 10 - 20 tech sessions with no onboard memory issues.

To adjust the maximum memory setting, edit the MAXIMUM_MEMORY option in the WebHelpDesk/conf/whd.conf file.

Deployments with more than 20 techs

If your deployment will support more than 20 tech sessions, SolarWinds recommends installing Web Help Desk on a system running:

  • A supported 64-bit operating system
  • A 64-bit JVM
  • 3 GB RAM for 20 tech sessions, plus 1 GB RAM for each additional 10 tech sessions
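The RAM guideline above amounts to a small formula. A sketch; rounding partial groups of 10 sessions up is our assumption, since the guideline does not say how to treat partial groups:

```python
import math

def whd_ram_gb(tech_sessions):
    """RAM sizing per the guideline: 3 GB covers the first 20 tech
    sessions, plus 1 GB for each additional 10 sessions (partial
    groups rounded up, an assumption)."""
    if tech_sessions <= 20:
        return 3
    return 3 + math.ceil((tech_sessions - 20) / 10)

# whd_ram_gb(20) -> 3
# whd_ram_gb(50) -> 6
```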

To enable the 64-bit JVM, add the following argument to the JAVA_OPTS option in the /library/WebHelpDesk/conf/whd.conf file:

JAVA_OPTS="-d64"

To increase the max heap memory on a 64-bit JVM, edit the MAXIMUM_MEMORY option in the WebHelpDesk/conf/whd.conf file.

For other operating systems, install your own 64-bit JVM and then update the JAVA_HOME option in the WebHelpDesk/conf/whd.conf file to point to your Java installation.

Web Performance Monitor (WPM)

WPM Scalability Engine Guidelines

Remote Office Poller

Not directly supported, but recordings may be made from multiple locations

Main Polling Engine Limits

12 recordings per WPM Player. See Scalability Options, below, for details.

Scalability Options

SolarWinds recommends one transaction location per 12 monitored transactions.

You can use the Player Load Percentage widget to estimate the load on a machine that hosts a WPM Player. Many factors affect capacity, including:

  • The complexity of assigned transactions.
  • The length of playback for each transaction.
  • The length of intervals between each transaction playback.
  • The processor speed and RAM available on the machine hosting the WPM Player.
  • The number of SEUM-User or domain accounts involved in playback. See How WPM works and Manage SEUM-User accounts.

If you notice a high load percentage, consider increasing the time intervals between polls and/or adding more players to a given location. Adding more players can reduce the load by distributing the load more evenly.

Frequently Asked Questions

Does each module have its own polling engine?

No, an Additional Polling Engine may have all relevant modules installed on it, and it performs polling for all installed modules. An Additional Polling Engine works the same way as your Main Polling Engine on your main server.

For example, if you have NPM and SAM installed, install one Additional Polling Engine and it performs polling for both NPM and SAM.

Are polling limits cumulative or independent? For example, can a single polling engine poll 12k NPM elements AND 10k SAM monitors together?

The limits are independent: a single polling engine can poll up to the limit of each installed module, provided sufficient hardware resources are available.

Are there different license sizes available for the Additional Polling Engine?

No, the Additional Polling Engine is available only with an unlimited license.

Can you add an Additional Polling Engine to any size module license?

Yes, you can add an Additional Polling Engine to any size license.

Adding an Additional Polling Engine does not increase your license size. For example, if you are licensed for an NPM SL100, adding an Additional Polling Engine does not increase the licensed limit of 100 nodes/interfaces/volumes, but the polling load is spread across two polling engines.

What happens if the connection from a polling engine to the Orion Database Server is lost?

If there is a connection outage to the Orion Database Server, polling engines use Microsoft Message Queuing (MSMQ) to cache the polled data on the Additional Polling Engine servers.

The amount of data that can be cached depends on the disk space available on the polling engine server. The default storage space is 1 GB. Up to one hour of data can be cached.

When the connection to the database is restored, the locally cached data is written to the Orion Database Server, with the oldest data processed first.

If the database connection is down for longer than an hour, the collector queue becomes full, and the newest data is discarded until a connection to the database is re-established.
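The caching behavior described here (replay oldest-first, discard the newest once full) is a bounded store-and-forward queue. A toy sketch of the policy in Python; capacity is counted in records here rather than the real ~1 GB of disk, and the class name is our own:

```python
from collections import deque

class PollCache:
    """Sketch of the APE's store-and-forward behavior: FIFO replay on
    reconnect, with the newest data discarded once the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._queue = deque()

    def store(self, record):
        """Cache one polled record; returns False if it was discarded."""
        if len(self._queue) >= self.capacity:
            return False  # queue full: newest data is discarded
        self._queue.append(record)
        return True

    def flush(self):
        """On reconnect, replay cached records oldest-first."""
        while self._queue:
            yield self._queue.popleft()
```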
