
Learning from the Past for Better Data-Driven Decisions


By Joe Kim, SolarWinds EVP, Engineering and Global CTO

With the advent of the Internet of Things (IoT) and connected devices, the amount of data agencies collect continues to grow, as do the challenges associated with managing that data. Handling these big data challenges will require federal IT pros to use new data mining methodologies that are ideal for hybrid cloud environments. These methodologies can improve network efficiency through automated and intelligent decision-making that’s driven by predictive analytics.

Today’s environments require a break from the data analysis methods of the past, which were time-consuming and required an enormous amount of manual labor. Manual analysis was difficult enough before the IoT, connected devices, and hybrid cloud environments became commonplace; today, it’s nearly impossible.

Data lives across numerous departmental silos, making it hard for IT departments to keep track of it all. It’s difficult to achieve clear insights into these types of environments using traditional data mining approaches, and even more difficult to take those insights and use them to ensure consistent and flawless network performance.

Agencies need tools that have a cross-stack view of their IT data so they can compare disparate metrics and events across hybrid cloud infrastructure, identify patterns and the root cause of problems, and analyze historical data to help pinpoint the cause of system behavior.

Predicting the Future

Automated data mining paired with predictive analytics addresses both the need to identify useful data patterns and use that analysis to predict—and prevent—possible network issues. By using predictive analytics, administrators can automatically analyze and act on historical trends in order to predict future states of systems. Past performance issues can be evaluated in conjunction with current environments, enabling networks to “learn” from previous incidents and avert future issues.

With predictive analysis, administrators can be quickly alerted about potential problems so they can address issues before they occur. The system derives this intelligence based on past experiences and known performance issues, and can apply that knowledge to the present situation so that network slowdowns or downtime can be proactively prevented.
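To make the idea concrete, here is a minimal, illustrative sketch of how such a predictive check might work; it is not a SolarWinds feature or API, and the metric, threshold, and data are hypothetical. It fits a simple trend line to historical utilization samples and estimates when a capacity threshold will be crossed, so an alert can fire before the slowdown actually happens.

```python
# Illustrative sketch only (not SolarWinds' implementation): fit a linear
# trend to historical utilization samples and estimate when a hypothetical
# capacity threshold will be crossed, so an alert can fire ahead of time.
from statistics import mean

def hours_until_threshold(samples, threshold=90.0):
    """samples: (hour_offset, utilization_percent) pairs from history."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in samples)
    den = sum((x - x_bar) ** 2 for x in xs)
    slope = num / den
    intercept = y_bar - slope * x_bar
    if slope <= 0:                      # utilization flat or falling: no ETA
        return None
    return (threshold - intercept) / slope - xs[-1]

# Hypothetical history: utilization climbing roughly 0.5% per hour.
history = [(h, 70 + 0.5 * h) for h in range(24)]
eta = hours_until_threshold(history)
if eta is not None and eta < 48:
    print(f"Predicted to hit 90% in ~{eta:.0f} hours -- act now.")
```

A real monitoring platform would of course use far more robust models (seasonality, confidence intervals, multiple metrics), but the principle is the same: historical trends drive the prediction.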

Learning from the Past

Administrators can take things a step further and incorporate prescriptive analytics and machine learning into their data analysis mix. Prescriptive analytics and machine learning go beyond prediction to recommend actions that prevent problems, such as viruses or malware. By establishing what “normal” network activity looks like, these approaches can help agencies spot suspicious behavior and react to threats.
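As a rough illustration of the “normal baseline” idea (again, a hand-rolled sketch rather than anything the products actually do), the snippet below learns a baseline from past traffic samples and flags values that deviate far from it:

```python
# Illustrative sketch only: learn a "normal" baseline from historical
# traffic samples and flag current activity that deviates strongly from it.
from statistics import mean, stdev

def is_anomalous(baseline, current, z_threshold=3.0):
    """Flag `current` if it is more than z_threshold standard deviations
    away from the historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hypothetical history: outbound connections per minute under normal load.
normal_traffic = [120, 115, 130, 125, 118, 122, 127, 119]
print(is_anomalous(normal_traffic, 128))   # False: within the normal range
print(is_anomalous(normal_traffic, 480))   # True: suspicious spike
```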

Using new, modern approaches to data analysis can help agencies make sense of their data and keep their networks running at the utmost efficiency. Predictive and prescriptive analysis, along with machine learning, can help keep networks running smoothly and prevent potential issues before they occur. Each of these approaches will prove invaluable as agencies’ data needs continue to grow.

Find the full article on Government Computer News.

8 Comments
vinay.by
Level 16

Nice write up

gfsutherland
Level 14

Interesting article. Read-React-Remember-Repeat?

rschroeder
Level 21

We all KNOW what happens to those who don't study history.

And we know NPM is a pretty good tool for seeing recent history, and extrapolating from it to predict possible futures.

When will we have a Solarwinds Data Warehouse option that lets us view historic data and project trends based on the last five years, or ten years of data?  I'm not looking for every port's data to be available, but it would be SWEET if I could select a few ports for that kind of treatment.  I'd start with:

  • Internet-facing ports
  • Router physical ports

Depending on cost, I'd extend this kind of historical monitoring access as close to the access ports as possible.  Core ports and Distribution ports at a minimum.

Then correlate them with a CMDB that integrates the information from Change Management so we can see what happened when bandwidth utilization / demand increased.  (Or decreased.  Does that ever happen?)

The more I think on it, History professors and History teachers are sort of a wetware version of a Solarwinds data warehouse of human events.  They're the interface between activities, historic records, and real time human needs.

Open mind:  escape on tangent . . .

mtgilmore1
Level 13

Big brother is watching you.  Predicting what you are going to do....  Beware.

michael.kent
Level 13

Love the thought of network monitoring software knowing what 'Normal' is and then alerting when things aren't.

tinmann0715
Level 16

A position/role that I have recently become fascinated with is the Data Scientist. I had the opportunity to spend some time with one at an SAP conference in August for their Leonardo product. Talk about meticulous...

tallyrich
Level 15

These are things we should all be doing in any environment, but with the cloud and fast pace of adoption of new technologies/products/services it's even more important.

Cloud can be a great money saver in situations where you only need resources on a limited basis - i.e. weekly live streaming. Turn up a machine; when the load gets heavy, turn up another, and so on. When the stream is over, you turn things off and the billing stops. On the other hand, when it's that easy, the resources start to take on the appearance of "free" and the cost can quickly spiral out of control. Planning is key, knowledge of the growth patterns is key, and so is keeping everyone involved in the loop on cost, tech, and all the other elements.

byrona
Level 21

I would love to see Orion incorporate some machine learning and be able to tell me about things that I may not have thought to write a specific alert for.

About the Author
Joseph is a software executive with a track record of successfully running strategic and execution-focused organizations with multi-million dollar budgets and globally distributed teams. He has demonstrated the ability to bring together disparate organizations through his leadership, vision and technical expertise to deliver on common business objectives. As an expert in process and technology standards and various industry verticals, Joseph brings a unique 360-degree perspective to help the business create successful strategies and connect the “Big Picture” to execution. Currently, Joseph serves as the EVP, Engineering and Global CTO for SolarWinds and is responsible for the technology strategy, direction and execution for SolarWinds products and systems. Working directly for the CEO and partnering across the executive staff in product strategy, marketing and sales, he and his team are tasked with providing overall technology strategy, product architecture, platform advancement and engineering execution for the Core IT, Cloud and MSP business units. Joseph is also responsible for leading the internal business application and information technology activities to ensure that all SolarWinds functions, such as HR, Marketing, Finance, Sales, Product, Support, Renewals, etc., are aligned from a systems perspective, and that the company uses its own products to continuously improve their functionality and performance, which ensures success and expansion for both SolarWinds and its customers.