Syslog - The Blue-Collar Worker of the Data Center

Striking a balance between good visibility into infrastructure events and too much noise is difficult. I’ve worked in plenty of enterprise environments where multiple tools are deployed (often covering the same infrastructure elements) to monitor for critical infrastructure events. They are frequently deployed with minimal customization and seen as a pain to deal with by the teams who aren’t directly responsible for them. They’re infrequently updated, and it’s a lot of hard work to get all of the elements of your infrastructure covered. Some of this is due to the tools being difficult to deploy, and some of it is due to a lack of available resources, but mostly the problem is people.

Information security is a similar beast to deal with. There are many tools available to help monitor your environment for security breaches and critical issues, and plenty of data centers have several of them installed. Yet enterprises continue to suffer from poor visibility into their environments.

Syslog is an underrated tool. I call it the blue-collar worker of the DC. It will happily sit in your environment and collect your log data for you, ready to dish up information about problems and respond to issues at a moment’s notice. All you have to do is tell your devices to send information in its direction and it takes care of the rest. It catches the messages, stores them, analyzes them, and sends you information when there’s something you should probably look at.
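To make "tell your devices to send information in its direction" concrete, here is a minimal sketch in Python that forwards application messages to a central syslog collector over UDP port 514. The hostname, facility, and messages are placeholders for illustration, not anything prescribed here; most network gear does the equivalent with a one-line configuration pointing its logging output at the collector.

    import logging
    import logging.handlers

    # Forward log records to a central syslog collector over UDP/514.
    # "syslog.example.com" is a placeholder hostname.
    handler = logging.handlers.SysLogHandler(
        address=("syslog.example.com", 514),
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    )
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

    logger = logging.getLogger("myapp")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("backup job completed")
    logger.error("disk /dev/sdb reporting SMART errors")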

But it’s not just a useful way to view events in your DC. It can also be a very useful tool when it comes to managing security incidents. 

First, set up sensible alerting in your syslog environment. An important consideration is understanding what you want to alert on. Turn on too much and the alerts will end up in someone's trash folder rather than being read. Turn on too little and you won't catch the minor issues before they become major ones. You should also think about the right mechanism to use for alerting. Some people work well with email, while others prefer messages or dashboard access.
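As a rough illustration of how severity can drive the choice of alert mechanism, the sketch below reads the PRI field at the front of a raw syslog line (facility is the value divided by 8, severity is the remainder) and picks a delivery channel. The thresholds, channel names, and sample message are assumptions for illustration only:

    import re

    SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

    def parse_pri(line: str):
        """Extract (facility, severity) from the leading <PRI> field, if present."""
        m = re.match(r"<(\d{1,3})>", line)
        if not m:
            return None, None
        pri = int(m.group(1))
        return pri // 8, pri % 8

    def route_alert(line: str) -> str:
        """Pick a delivery channel based on syslog severity (lower = more urgent)."""
        _facility, severity = parse_pri(line)
        if severity is None:
            return "ignore"
        if severity <= 2:      # emerg / alert / crit: interrupt a human
            return "email"
        if severity <= 4:      # err / warning: surface on a dashboard
            return "dashboard"
        return "store-only"    # notice and below: just archive

    # <34> = facility 4, severity 2 (critical), so this routes to "email".
    print(route_alert("<34>Oct 11 22:14:15 core-sw01 chassis temperature critical"))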

The second thing is to understand what you want to look for in your events. A login using local credentials on a device that normally authenticates against directory services, for example, should trigger an alert. The key is to understand what's normal in your environment and alert on the things that aren't. It can take some time to work out what normal looks like, but it's worth the effort.
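As one example of alerting on something that isn't normal, the sketch below flags logins that used a local account on devices expected to authenticate against directory services. The message format, device name, and the allowed break-glass account are invented for illustration; real vendor log formats will differ:

    import re

    # Accounts that are allowed to log in locally (e.g., break-glass access).
    EXPECTED_LOCAL_ACCOUNTS = {"break-glass-admin"}

    # Invented message format for illustration; adjust to your devices' output.
    LOCAL_LOGIN = re.compile(r"login: user '(?P<user>\w+)' authenticated via local database")

    def is_suspicious(line: str) -> bool:
        """True if a local-credential login came from an unexpected account."""
        m = LOCAL_LOGIN.search(line)
        return bool(m) and m.group("user") not in EXPECTED_LOCAL_ACCOUNTS

    sample = "<85>Oct 12 03:02:11 edge-rtr02 login: user 'admin' authenticated via local database"
    if is_suspicious(sample):
        print("ALERT: unexpected local-credential login on edge-rtr02")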

Third, you must understand how to respond when an alert comes your way. Syslog is a great tool to have running in the DC because it centralizes your infrastructure logging and provides a single place to look when things go awry. But you still need to evaluate the severity of the problem, the available resolutions, and the impact on the business.

The key to managing security events with syslog is to have the right information at hand. Syslog gives you the information you need in terms of what, when, and who. It can’t always tell you how it happened, but having that information in one place makes working that out easier.

Infrastructure operations can be challenging, particularly when there’s been some kind of security incident. Having the right tools in place gives you a better chance of getting through those events without too many problems and your sanity intact.

  • I was logging way more than they state before mine would break. All of my messages were logged to a file on disk. Try to keep it on local disk if possible, not SAN, and it will do better.


  • 2 million per hour was about the breaking point on mine. It depends on whether you are logging them to disk or not.

  • thwackerinva That information is documented HERE. I hope this helps!

  • Is there a general rule of thumb regarding how much data Kiwi Syslog can gather/filter in a day? I'd like to push more stuff to that server, but I feel like I might be pressing my luck if I send too much more traffic/data to it.

  • I just responded to THIS post regarding Syslog and I feel that response is totally relevant here also:

    While I think managing your logs is great, basic syslog and alerting will no longer cut it for environments of any size due to the sheer volume of logs you will end up dealing with. Syslog is certainly a great place to start, but I think most folks will quickly outgrow a simple syslog solution and need something with more advanced parsing, dashboarding, and reporting capabilities. When dealing with large quantities of logs I have found alerting almost unruly; in a few cases for specific events it can work out well, but overall it becomes unmanageable. What I have found works out best is a more advanced log management system, or even a SIEM, that is capable of processing the events and turning them into dashboards that provide business and/or operational insights, which are then reviewed either constantly or multiple times a day by folks who have been trained on what to look for.

    From a security standpoint, a great way to start is with a Threat Matrix: begin with the different threats you are looking for, then note what data you need to identify each threat, how you go about detecting it, and lastly what you do when it is detected (a rough sketch of what this could look like follows below). This approach is product agnostic and provides a structure for you to build your technology and training around. It also gives your analysts or threat hunters prescriptive paths to follow. Keep in mind that this is a living document that is always being expanded and modified as things change.
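    A rough sketch of what such a matrix could look like when captured as structured data (Python here purely for illustration; a spreadsheet works just as well). The threats, data sources, detections, and responses are placeholders, not a complete or authoritative matrix:

        threat_matrix = [
            {
                "threat": "Brute-force login attempts",
                "data_needed": "Authentication failures from device syslog",
                "detection": "More than 10 failed logins from one source in 5 minutes",
                "response": "Block the source address and notify the on-call engineer",
            },
            {
                "threat": "Local-credential logins on directory-integrated devices",
                "data_needed": "Login success events including authentication source",
                "detection": "Any local-database login outside approved break-glass accounts",
                "response": "Open an incident and verify with the device owner",
            },
        ]

        # Print a quick summary of what each row says to watch for.
        for entry in threat_matrix:
            print(f"{entry['threat']}: detect via '{entry['detection']}'")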
