
How do I determine the right amount of storage for my NTA Flow Storage Database, based on my NTA 3.x usage?

Updated January 12, 2017

Overview

How do I determine the right amount of storage for my NTA Flow Storage Database, based on my NTA 3.x usage?

 

Environment

  • NTA 4.x

Detail

If you are using NTA 3.x, run the following query against your Orion database in SQL Server Management Studio:

DECLARE @FSDB_row_size INT;
DECLARE @detailed_row_count BIGINT;
DECLARE @retention_period   BIGINT;
DECLARE @AmountOfDetailTablesPerNode AS INT;
SET @FSDB_row_size = 126;   -- approximate size of one Flow Storage Database row, in bytes
---------------------------------------------------
-- ** Select settings from NTA Global settings **
SELECT @AmountOfDetailTablesPerNode = CONVERT(INT, Value)
FROM dbo.NetFlowGlobalSettings
WHERE KeyName = 'RetainUncompressedDataIn15MinuteIncrements';
-- Retention period is stored in days; convert it to seconds
SELECT @retention_period = Value*24*60*60
FROM dbo.NetFlowGlobalSettings
WHERE KeyName = 'RetainCompressedDataInDays';
-- ** Get all detail tables **
SELECT * INTO #AllDetailTables FROM
(
    SELECT SUBSTRING(name,15,PATINDEX('%[0-9][_]%', name)-14) AS NodeID,
           RIGHT(name, LEN(name) - PATINDEX('%[0-9][_]%', name)-1) as Interval,
           name, id
    FROM dbo.sysobjects
    WHERE name LIKE 'NetFlowDetail[_][0-9]%' AND OBJECTPROPERTY(id, N'IsUserTable') = 1
) AS A
-- ** Get only first [amount of 15 minutes intervals] detail tables per each node **
DECLARE @script as NVARCHAR(max)
SET @script ='
SELECT * INTO ##DetailTables FROM #AllDetailTables a
WHERE name IN
(
    SELECT TOP ' + CONVERT(nvarchar, @AmountOfDetailTablesPerNode) + ' name FROM #AllDetailTables b
    WHERE b.NodeID=a.NodeID
    ORDER BY Interval
)
ORDER BY name
'
EXECUTE sp_executesql @script
-- ** Get sum of all rows in given detail tables **
SELECT @detailed_row_count = SUM(row_count)
FROM sys.dm_db_partition_stats stats
JOIN ##DetailTables dt ON (stats.object_id = dt.id)
-- ** Calculate target size: rows per second * bytes per row * retention period in seconds, converted to GB **
SELECT ((@detailed_row_count / (@AmountOfDetailTablesPerNode*15*60)) * @FSDB_row_size * @retention_period) / 1024 / 1024 / 1024 AS [Required_FSDB_Disk_Size_GB];
DROP TABLE #AllDetailTables
DROP TABLE ##DetailTables

The result should look like this:
[Screenshot: ntafsdbsizescript.png — the query output, a single Required_FSDB_Disk_Size_GB value]

This is the estimated size (in GB) that the NTA Flow Storage Database will require after the upgrade from 3.x, for the currently configured retention period (default = 30 days), assuming the flow pattern remains similar to what it is in NTA 3.x.
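
To make the arithmetic behind Required_FSDB_Disk_Size_GB concrete, here is a worked example with hypothetical numbers (the row count and table count below are invented for illustration; the 126 bytes per row and the 30-day default retention come from the script above):

-- Hypothetical input: 1,800,000 rows across 4 fifteen-minute detail tables per node
-- => 1,800,000 / (4*15*60) = 500 rows per second
-- 500 rows/s * 126 bytes/row * 2,592,000 s (30 days) = 163,296,000,000 bytes, or roughly 152 GB
SELECT (CAST(1800000 AS BIGINT) / (4*15*60)) * 126 * CAST(30*24*60*60 AS BIGINT)
       / 1024 / 1024 / 1024 AS [Example_FSDB_Disk_Size_GB];   -- returns 152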

Note: This size is calculated from the last hour of activity, so there can be significant variation because the last hour may not reflect longer periods of time. We recommend using the result as a general guideline and performing your own testing in your own environment. The result is still considerably more accurate than estimating the required storage from the average number of PDUs processed per second, as described in the previous item.
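
If you want to double-check which settings the estimate is based on before acting on it, you can read the same two keys the script uses directly from the settings table:

-- Current retention settings used by the sizing script (30 days is the default)
SELECT KeyName, Value
FROM dbo.NetFlowGlobalSettings
WHERE KeyName IN ('RetainCompressedDataInDays', 'RetainUncompressedDataIn15MinuteIncrements');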