There is a lot of hype around Big Data, and it helps to develop some perspective. This blog looks at why Big Data is happening, whether Network Managers should care, and what should be done about it. Interestingly, it is your network management tool that will make the difference.


Here's a myth: Big Data is an uncontrollable monster that has suddenly popped out of nowhere and no one knows what to do about it. It's like people said a few decades ago, "Oh my, TV will have 400 channels! What will we do?!" Sigh.


The truth: Big Data is a reality that stems from the technical advances we have made since the 1960s. Moore's law, loosely stated, says that hardware power (including CPU processing power and hard disk storage) doubles roughly every 18 months, and this is expected to continue until about 2020. That is how a computer that once had 1 KB of memory now has 1 GB. And as more resources become available, we generate more and more data to fill the space (Parkinson's law). No wonder data creation is also doubling every 18 months. Technology should be able to deal with it and get the most out of it. There is no need to panic: SolarWinds will help you solve the problem and, as always, we will not make a huge noise about it.


Why should Network Managers care?

Data and hardware power may only be doubling, but network capacity and complexity are roughly tripling every 18 months. This is because as hardware power doubles, network capacity and complexity need to more than double to keep up with usage. If your network capacity today is 50 Mbps, expect roughly 150 Mbps in 18 months and 450 Mbps after 18 more. It is the network management tool that becomes the bottleneck, not the hardware and certainly not our ability to create data. One of the secret sauces in your Big Data recipe is a network management tool that can scale in a very short period of time.
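
To make that growth concrete, here is a minimal Python sketch assuming the illustrative factors above (hardware doubling, network capacity roughly tripling every 18 months, starting from the 50 Mbps example). The numbers are for intuition, not measurements:

```python
# A minimal sketch: project hardware power (doubling per 18-month period)
# against network capacity (tripling per period). All figures are illustrative.

def project(initial, factor, periods):
    """Value at the start and after each 18-month period."""
    return [initial * factor ** p for p in range(periods + 1)]

hardware = project(initial=1.0, factor=2, periods=3)    # relative hardware power
capacity = project(initial=50.0, factor=3, periods=3)   # network capacity in Mbps

for period, (hw, cap) in enumerate(zip(hardware, capacity)):
    print(f"+{period * 18} months: hardware x{hw:.0f}, capacity {cap:.0f} Mbps")

# +0 months:  hardware x1, capacity 50 Mbps
# +18 months: hardware x2, capacity 150 Mbps
# +36 months: hardware x4, capacity 450 Mbps
# +54 months: hardware x8, capacity 1350 Mbps
```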

[Figure: Network management and Moore's law]

Reference: Alexander Clemm, Ph.D., Network Management Fundamentals, Cisco Press.

Network management tool selection criteria

Let us look at the parameters you need to keep in mind when investing in a network management product. I have given examples of how we at SolarWinds built Orion NPM to handle scale in a very short span of time: all you need to do is buy more polling engines and NPM will do the rest. I strongly encourage you to look at our competitors as well, to see whether they can give you the right scale at the right time for the right price.

 

First, will the tool be able to scale to three times its current size in 18 months? Some tools need long sales, configuration, and training cycles spanning many months.

Second, do the tool's scale-related features support the following scaling aspects? (A short concurrency sketch follows the table.)

Scaling aspect: Operational concurrency
  • Party analogy (from the earlier blog): Serving different items on the menu simultaneously instead of one after the other
  • What it means: Maximizing communication concurrency to maximize management operations throughput, e.g. sending requests and responses simultaneously
  • How NPM does it: Parallel processing of input data

Scaling aspect: Event propagation
  • Party analogy: Refilling a glass quickly as soon as the drink runs out
  • What it means: Allowing events to propagate and update the system as quickly as possible
  • How NPM does it: The summary page auto-refreshes, and the message center allows searching and filtering of events

Scaling aspect: Scoping
  • Party analogy: Using a larger tray to serve drinks
  • What it means: Accessing and manipulating large chunks of data at once
  • How NPM does it: Bulk response collection

Scaling aspect: Distribution and addressing
  • Party analogy: Using multiple trays, carried by more helpers, to serve drinks simultaneously
  • What it means: Allowing processing to be distributed across multiple servers
  • How NPM does it: Multiple parallel polling engines

Table: Aspects to consider in a network management tool to keep up with Moore's law
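
As a rough illustration of the operational concurrency and distribution rows above, here is a hedged Python sketch. poll_device and the device list are hypothetical placeholders, not an NPM or SolarWinds API; the point is simply that overlapping requests keeps total polling time close to one round-trip rather than the sum of all round-trips:

```python
# A minimal sketch of concurrent vs. sequential polling.
# poll_device() is a hypothetical stand-in for whatever SNMP/API call a tool makes.
import concurrent.futures
import time

def poll_device(address):
    """Pretend to poll one device and return a (device, status) pair."""
    time.sleep(0.1)          # stand-in for network round-trip latency
    return address, "up"

devices = [f"10.0.0.{i}" for i in range(1, 21)]   # 20 hypothetical devices

# Sequential polling: total time is roughly n * round_trip
start = time.time()
sequential_results = [poll_device(d) for d in devices]
print(f"sequential: {time.time() - start:.2f}s")

# Concurrent polling: overlapping requests keeps the poller busy,
# so total time is roughly one round_trip (bounded by the worker pool size).
start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    concurrent_results = list(pool.map(poll_device, devices))
print(f"concurrent: {time.time() - start:.2f}s")
```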

And third, how does the tool define the scale it can handle? For example, what does it mean to support millions of objects? What is an object: a device, an interface, a Boolean state? How often will these objects be synchronized with the network resources? What type of hardware does the application need to run on? Here are some basic questions to consider:

  1. Management operations throughput: What type of operation can be performed, of what complexity, on which type of object per unit time?
  2. Event throughput: What is the maximum throughput that can be achieved per unit time in which scenario (receipt of events, processing of data)?
    • NPM has an event throughput of 700-1000 messages per second
  3. Network synchronization capacity: How many network objects can the application synchronize with per unit of time? (See the back-of-the-envelope sketch after this list.)
    • NPM processes up to 12K elements within a two-minute standard polling interval and a 10-minute statistics collection interval
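
To turn those synchronization figures into a throughput requirement, here is a back-of-the-envelope sketch. It reuses the NPM numbers quoted above, but the calculation applies to whatever element count and polling interval you plug in:

```python
# Given an element count and a polling interval, what sustained per-second
# throughput must the poller achieve to finish one full cycle in time?

def required_throughput(elements, interval_seconds):
    """Elements the poller must complete per second to finish one cycle on time."""
    return elements / interval_seconds

print(required_throughput(12_000, 2 * 60))    # status polling: 100 elements/s
print(required_throughput(12_000, 10 * 60))   # statistics collection: 20 elements/s
```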

 

Conclusion

In conclusion, it is important to know exactly how your network will scale in the future and how soon. A general sense that it will scale is not enough. Big Data is not a sudden reality; as Network Managers we need to prepare for it, and we need to be very careful about choosing the right network management tool, or it will become the bottleneck. We at SolarWinds would love to hear your thoughts. Tell us about other scaling parameters you considered while buying NPM or any other tool. Why were they important to you?