
Looking for SIEM Love in Some of the Wrong Places?

Level 11

Good morning, Thwack!

I'm Jody Lemoine. I'm a network architect specializing in the small and mid-market space... and for December 2014, I'm also a Thwack Ambassador.

While researching the ideal sweet spot for SIEM log sources, I found myself wondering where, and how far, one should go for an effective analysis. I've seen logging depth discussed a great deal, but where are we with sources?

The beginning of a SIEM system's value is its ability to collect logs from multiple systems into a single view. Once this is combined with an analysis engine that can correlate these and provide a contextual view, the system can theoretically pinpoint security concerns that would otherwise go undetected. This, of course, assumes that the system is looking in all of the right places.
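As a toy illustration of that collect-and-correlate idea, consider merging two log sources into one timeline and flagging a pattern that neither source reveals on its own. The formats, event names, and thresholds below are invented for illustration:

```python
# Collect-then-correlate sketch: merge events from two sources into one
# time-ordered view, then flag hosts showing failed logins followed by a
# success and outbound traffic within a short window.
from datetime import datetime, timedelta

auth_events = [
    # (timestamp, host, event) -- e.g. parsed from an authentication log
    (datetime(2014, 12, 2, 1, 0, 0), "10.0.0.5", "auth_failure"),
    (datetime(2014, 12, 2, 1, 0, 5), "10.0.0.5", "auth_failure"),
    (datetime(2014, 12, 2, 1, 0, 9), "10.0.0.5", "auth_success"),
]
fw_events = [
    # (timestamp, host, event) -- e.g. parsed from a firewall log
    (datetime(2014, 12, 2, 1, 1, 30), "10.0.0.5", "outbound_permit"),
]

# Step 1: collect -- a single, time-ordered view across sources.
timeline = sorted(auth_events + fw_events)

# Step 2: correlate -- context neither source shows by itself.
WINDOW = timedelta(minutes=5)
for i, (ts, host, event) in enumerate(timeline):
    if event != "auth_failure":
        continue
    later = [e for e in timeline[i + 1:]
             if e[1] == host and e[0] - ts <= WINDOW]
    if any(e[2] == "auth_success" for e in later) and \
       any(e[2] == "outbound_permit" for e in later):
        print(f"ALERT {host}: failed logins, then success, "
              f"then outbound traffic within {WINDOW}")
```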

A few years ago, the top sources for event data were firewalls, application servers, and database servers. Client computers weren't high on the list, presumably (and understandably) because of the much larger amount of aggregate data that would need to be collected and analyzed. Surprisingly, IDS/IPS and NAS/SAN logs were even lower on the scale. [Source: Information Week Reports - IT Pro Ranking: SIEM - June 2012]

These priorities suggested a focus on detecting incidents that involve standard access via established methods: the user interface via the firewall, the APIs via the application server, and the query interface via the database server. Admittedly, these were the most probable sources for incidents, but the picture was hardly complete. Without the IDS/IPS and NAS/SAN logs, any intrusion outside of the common methods wouldn't even be a factor in the SIEM system's analysis.

We've now reached the close of 2014, two and a half years later. Have we evolved in our approach to SIEM data sources, or have the assumptions of 2012 stood the test of time? If they have, is it because these sources have been sufficient, or are there other factors preventing a deeper look?

9 Comments
cahunt
Level 17

Any small to mid-size organization should be able to address this easily, considering there aren't more managers in the dugout than there are players on the field. Larger organizations get siloed, and hopefully your information security group is good enough to tackle this need, sometimes after it gets discovered that you already own at least two of the tested products and have purchased a third to use across the board.

ghostinthenet
Level 11

This is where we get into the question of need. In your experience, does going below the level of sessions, APIs and database queries for our logging sources make sense? Is there a need? If so, is that need enough to justify the additional resources required to factor them in?

cahunt
Level 17

It's all about budget in the end, but along the way, anyone who has had that bad experience will push the 'need'. The hard part, and possibly the crux of the job for any manager or director, is balancing the resources with the resourceful staff needed to run and maintain them. In any sort of aftermath, having too much to review and parse through is certainly not a problem; not even being able to piece together a story of what events took place would be the deciding factor in letting the hammer fall, at least for anyone worth their weight. For your own protection, the company's, and your employees', and for safeguarding all that you work for, the resources are justified. Then consider your customers' personal data alongside your own intellectual property... yes and yes, the resources allocated become even more valuable and justified. Visibility over your domain is the first part of being able to maintain and secure what is yours.

ghostinthenet
Level 11

So the idea is to maximize the logging sources and depth, even if the routine analysis doesn't factor all of that in. Essentially, keeping a historical database to be examined in the event that something gets through. That makes sense. Where then does the routine analysis end?

cahunt
Level 17

Never, if you ask me, but I don't write the checks. Best that this be a continued effort, maybe given some time daily or weekly depending on the availability of your drones. Of course, that newsworthy event will always make these efforts ramp up.

jswan
Level 13

I'm not sure I fully understand your question, but from the perspective of doing security-oriented network forensics the data sources I use most frequently are:

Web proxy logs

NetFlow

Authentication logs (AD, RADIUS, VPN, etc.)

DHCP

DNS query logs

IDS logs

Web server and file server logs

Firewall logs

The things I frequently want but that are hard to get:

Executable file hashes and execution time

Executable-to-socket correlation (i.e., the network flow TCP 1.1.1.1:30000 > 2.2.2.2:80 was started by foo.exe with SHA1 hash 0x1234etc at 2014-12-02T01:00:00)
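A minimal sketch of that kind of executable-to-socket correlation, gathered host-side with the cross-platform psutil library (a tooling assumption on my part; no specific agent is named in the thread):

```python
# For each established TCP connection on this host, report the owning
# executable, its SHA1 hash, and the process start time. May require
# elevated privileges on some platforms.
import hashlib
from datetime import datetime, timezone

import psutil  # pip install psutil

def sha1_of(path):
    # Hash the executable as it sits on disk (the running image could differ).
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_ESTABLISHED or conn.pid is None:
        continue
    try:
        proc = psutil.Process(conn.pid)
        exe = proc.exe()
        started = datetime.fromtimestamp(proc.create_time(), timezone.utc)
        print(f"{conn.laddr.ip}:{conn.laddr.port} > "
              f"{conn.raddr.ip}:{conn.raddr.port} started by {exe} "
              f"sha1={sha1_of(exe)} at {started.isoformat()}")
    except (psutil.NoSuchProcess, psutil.AccessDenied, OSError):
        continue  # process exited or we lack privileges; skip it
```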

ghostinthenet
Level 11

The question was related to primary data sources used. In 2012, it was mostly focused on firewall logging, web/API logging and database logging. I was interested in whether this was still the case or not... and in either case, why?

You've got a much more comprehensive set of data sources, so I have to ask. How many of these are you using for your routine analysis versus keeping as historical data for forensics?

jswan
Level 13

I regularly (at least daily) look at alerts and/or reports for:

IDS

NetFlow

Authentication

Probably weekly I look at filtered reports for DNS and DHCP. Other people look at web and file server logs, mainly for operational reasons. The other stuff would be forensics related. We don't have a fully automated way to pull hashes or process audits from end stations today, but we're looking into it.
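A minimal sketch of that kind of filtered DNS report, flagging never-before-seen domains against a historical baseline (the one-name-per-line log format and file paths are assumptions for illustration):

```python
# Weekly DNS review sketch: report queried names absent from the baseline,
# then fold the new names into the baseline for next week's run.
def load_domains(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

baseline = load_domains("dns_domains_baseline.txt")   # names seen historically
this_week = load_domains("dns_queries_this_week.txt") # names queried this week

new_domains = sorted(this_week - baseline)
print(f"{len(new_domains)} never-before-seen domains this week:")
for name in new_domains:
    print(" ", name)

# After review, extend the baseline so these only alert once.
with open("dns_domains_baseline.txt", "a") as f:
    for name in new_domains:
        f.write(name + "\n")
```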

I don't find firewall logs to be all that useful unless I'm troubleshooting a problem or looking at forensics; I find I can get better information out of NetFlow most of the time.
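Part of why flow data works so well for this is that it aggregates readily. A sketch of a simple "top talkers" summary over flow records exported as CSV (the src,dst,bytes column layout is an assumption for illustration):

```python
# Sum bytes sent per source address across flow records and print the
# ten busiest senders.
import csv
from collections import Counter

bytes_by_src = Counter()
with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects header: src,dst,bytes
        bytes_by_src[row["src"]] += int(row["bytes"])

for src, total in bytes_by_src.most_common(10):
    print(f"{src:15s} {total:>12d} bytes sent")
```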

jkump
Level 15

Good information.

About the Author
Network Greasemonkey, Packet Macrame Specialist, Virtual Pneumatic Tube Transport Designer and Connectivity Nerfherder. The possible titles are too many to count, but they don't really mean much when I'm essentially a hired gun in the wild west that is modern networking. I'm based in the Niagara region of Ontario, Canada and operate tishco networks, a consulting firm specializing in the wholesale provisioning of networking services to IT firms for resale to their respective clientele. Over my career, I have developed a track record designing and deploying a wide variety of successful networking solutions in areas of routing, switching, data security, unified communications and wireless networking. These range from simple networks for small-to-medium business clients with limited budgets to large infrastructure VPN deployments with over 450 endpoints. My broad experience with converged networks throughout Canada and the world has helped answer many complex requirements with elegant, sustainable and scalable solutions. In addition, I maintain current Cisco CCDP and CCIE R&S (41436) certifications. I tweet at @ghostinthenet, am a Tech Field Day delegate, render occasional pro bono assistance on sites like the Cisco Support Community and Experts' Exchange and occasionally rant publicly on my experiences by "limpet blogging" on various sites. Outside of the realm of IT, I am both a husband and father. In what meagre time remains, I contribute to my community by serving as an RCAF Reserve Officer, supporting my local squadron of the Royal Canadian Air Cadets as their Commanding Officer.