At a special panel at RSA Security’s gargantuan conference this year, Jerry Sto. Tomas, director of global information security at Allergan, said "I don't call it Big Data...,” when discussing the challenges security professionals face in gathering useful information. I felt vindicated, given the work I’ve been doing that challenges the current trend toward log and event gluttony. News flash: information technology groups aren’t failing at being proactive because they aren’t collecting enough events. Most organizations have enough SNMP, syslog, and NetFlow data to circle the planet a few times. I believe it’s because most of the correlation applications on the market are so complicated to use that they practically require a PhD in physics. Then there’s the impossible task of finding the performance “sweet spot” of these applications. I recently needed to perform some log correlation for an investigation, and after carefully constructing the query in our event reporting and analysis tool, I still hadn’t received the full results after four hours. As I yearned for the bygone days of using the grep/sed/awk triple threat on some text files, I thought, “Since when do you need a supercomputer for log and event correlation?”
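To make the nostalgia concrete: the kind of correlation a grep/sed/awk pipeline used to handle in seconds can be sketched in a few lines of Python. This is a minimal illustration, not any product's engine; the log lines and the "failed password" pattern are hypothetical examples of a common syslog case.

```python
# A grep + awk style pass in Python: filter matching lines, then tally a field.
# The sample data and regex below are illustrative, not from a real system.
import re
from collections import Counter

def failed_logins_by_host(lines):
    """Count 'Failed password' events per source IP in one pass."""
    hits = Counter()
    pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    for line in lines:
        m = pattern.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

sample = [
    "Apr  1 10:02:11 gw sshd[911]: Failed password for root from 10.0.0.5 port 4122",
    "Apr  1 10:02:13 gw sshd[911]: Failed password for root from 10.0.0.5 port 4123",
    "Apr  1 10:02:20 gw sshd[912]: Accepted password for alice from 10.0.0.9 port 5000",
]
print(failed_logins_by_host(sample).most_common())
```

No supercomputer required; for a single investigation against a bounded set of text files, a streaming one-pass filter like this is often all the "correlation engine" you need.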
As our society relies ever more heavily on the latest, greatest thing and the "more is better, bigger is better" mantra, at the end of the day it is still the people, not the machines, that save the day.
As processors and storage have grown, it seems programmers have gotten lazier, throwing layers of unfiltered or useless information at us just to be able to say, "look how much more we can give you." Someone still has to interpret and validate the information. And truth be told, in our industry I still see people predict cyber attacks and vulnerability issues from anecdotal evidence or gut feeling based on real-time observation.
We can throw all the data we want at people, but someone with real-world experience and that "it" factor can do more with a little data than someone who put in four years at college and a run at the CCNA Security and CISSP certs -- who just thought this would be a well-paying job or give them their dictator fix -- can do with a terabyte of data.
Since when, indeed...
I guess it's since some time in the not-so-distant past when application/solution suppliers seemingly heard that what everyone wanted was event/log/forensic information as an available output, erring on the side of information hemorrhage just to be safe.
Those of us who have only had to keep track of logging for routing/switching/firewalls over the years are pretty fortunate nowadays - syslog is easily digestible and standardized enough for decent readability without a lot of finish work.
Application/platform/other event creation, however, doesn't always work as easily - because we need more specific information from our applications, don't we? The more info we want to get, the greater the complexity of the event-creation logic used to forward that info, and as a result the tools to shape this data into something consumable grow in complexity as well. Not to mention integrating application-level events with transport-level ones with some intelligence.
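The integration point above can be sketched simply: pair each application-level event with transport-level flow records that share a key (here, the client IP) within a time window. This is a hedged toy example; the field names, records, and 30-second window are assumptions for illustration, not any SIEM's actual schema.

```python
# Illustrative correlation of application events with flow (transport) records.
# All data and field names here are made up for the sketch.
from datetime import datetime, timedelta

app_events = [
    {"time": datetime(2013, 4, 1, 10, 0, 5), "ip": "10.0.0.5", "event": "login_failure"},
]
flows = [
    {"start": datetime(2013, 4, 1, 10, 0, 3), "src": "10.0.0.5", "dst_port": 22, "bytes": 840},
    {"start": datetime(2013, 4, 1, 10, 0, 4), "src": "10.0.0.9", "dst_port": 443, "bytes": 12000},
]

def correlate(events, flows, window=timedelta(seconds=30)):
    """Pair each app event with flows from the same source IP inside the window."""
    pairs = []
    for ev in events:
        for fl in flows:
            if fl["src"] == ev["ip"] and abs(fl["start"] - ev["time"]) <= window:
                pairs.append((ev["event"], fl["dst_port"], fl["bytes"]))
    return pairs

print(correlate(app_events, flows))
```

The naive nested loop is quadratic, which is exactly why this gets hard at scale: real tools index flows by key and time bucket before joining, and that indexing machinery is where the complexity (and the "supercomputer" feel) comes from.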
With SDN rolling down the tracks at increasing speed, I can see that inroads may be made with event correlation and intelligence when transport/routing/switching systems are more tightly integrated with - and intelligent to - the compute/memory/storage pieces that live alongside them. It's early days yet, I think.
I agree. Unless there's a product on the market that can analyze the data correctly and easily, most of the data collected is in fact useless. Luckily for me, I only collect data from switches/routers. I remember attempting to use HP OpenView, and it was way too complicated at the time. CiscoWorks was a little easier but still required a course to make it really useful. SolarWinds, on the other hand, makes everything a breeze in comparison.
But as so much data comes in (you have to make sure everything is collected in case of an "event"), a supercomputer is needed to get the info as quickly as possible. Nobody wants to wait 30 minutes for a report.
We are in the process of implementing LogRhythm for this purpose. It is owned by another team, so I won't get the opportunity to play with it too much. One of the key reasons it was chosen was its correlation engine. We will see how well it performs once it's fully functioning.
While I realize this is being done by a different team, I thought I would throw out there that I had the opportunity to talk to a Gartner researcher about SIEM products. He indicated that, out of the dozen or so products he was tracking, LogRhythm and SolarWinds Log & Event Manager are both great products and very comparable.
One of the key reasons we left off the SW product was its lack of scale. Per the engineers we talked to at SW, if the deployment got too big you had to stand up a new (completely separate) instance. That makes it hard to correlate if I'm sending my data to two places. As for features, we were very impressed with the SW product, but we are quite frankly too large to utilize it.
Interesting! When I talked to SW about scale, I was told you can scale out the environment by adding additional appliances, but my understanding of their explanation was that it all acted as one system, just with added capacity. Perhaps I misunderstood something there.
I spoke with two different engineers, posed the same question, and both provided the same answer. Honestly, after we were told that, we moved on and I didn't question it further.