THWACK Tuesday Tip :: How to Quickly Identify Suspicious Network Behavior With Intuitive Dashboards

FEATURED EPISODE: 
How to Quickly Identify Suspicious Network Behavior With Intuitive Dashboards
February 18, 2020

Log and event data are a boundless and valuable resource for identifying suspicious network activity and stopping potential breaches. However, analyzing line after line of text-based data can make this resource more trouble than it's worth. In this video, we'll explore the different ways you can visualize log data in an easy-to-understand dashboard in Security Event Manager to help turn it into something you can act on.
Anonymous
  • Another thing about dashboards... if you haven't added the new KPI widget to your dashboard, it's great for watching the Oldest Stored Event Occurred Time, Logs/Data Used Storage Percentage, and Logs/Data Used Storage. Now that I have my appliance set up correctly, it's not blowing up with the snapshots anymore and is aging off the oldest correlated events correctly. There are two settings in the CMC you have to set to the same amount so it won't let the appliance fill up; diskusageconfig and dbdiskconfig are the two, if I'm remembering correctly. I had to open a case to get it straightened out. My appliance hasn't run out of space yet. Commvault was using snapshots and eating up some of the datastore normally used by the SEM appliance. I reduced the snapshots some, set the two settings in the CMC, and now it's stable.
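
    If you want the same early warning outside the dashboard, a small script can watch partition usage the way the KPI widget's storage-percentage metric does. This is just an illustrative sketch, not anything SEM ships; the paths and the 85% threshold are assumptions you'd tune for your own appliance.

    ```python
    # Hypothetical stand-alone check that mirrors the idea behind the SEM
    # KPI widget's "Logs/Data used storage percentage": alert before a
    # partition fills up. Mount points and threshold are illustrative only.
    import shutil

    def storage_used_percent(path):
        """Return how full the filesystem holding `path` is, as a percentage."""
        usage = shutil.disk_usage(path)
        return 100.0 * usage.used / usage.total

    def check_threshold(path, limit_percent=85.0):
        """True if usage is still under the limit, False if it needs attention."""
        return storage_used_percent(path) < limit_percent

    if __name__ == "__main__":
        for mount in ("/", "/var"):
            pct = storage_used_percent(mount)
            status = "OK" if check_threshold(mount) else "ALERT"
            print(f"{mount}: {pct:.1f}% used [{status}]")
    ```

    Run from cron or an Orion script monitor, this gives you the same "am I about to fill up?" signal even when the web console is down.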

  •   The beauty of the snaps is that if your appliance fills up and crashes, you can roll back to a point in time right before the crash happened. I've found that SEM, once it can reboot, will clean itself up somewhat and free up space. The problem is that if a critical part of the filesystem (like /var in my case) fills up all the way, it won't be able to boot at all, which is a horrible situation to find yourself in. If you have snapshots taken at regular intervals, you can just roll the datastore back to the closest time before the crash, reboot the appliance, and it'll recover and clean up.

    What I'm working on is figuring out how to configure SEM so this doesn't happen in the first place, right? The appliance shouldn't just let itself crash like this. I'm monitoring it in Orion with NPM, but the issue I've run into is that it's all about the datastore. A lot goes into that datastore: it's not just the VM and its volumes but also the reserved snapshot storage, which is often configured as a percentage of the total. I'm still working on finding a configuration that won't crash the datastore by filling it up under some circumstances.

    Bill
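
    The sizing problem Bill describes can be sketched as simple arithmetic: the VM's volumes plus the snapshot reserve (a percentage of the datastore) both have to fit, or the datastore fills and the appliance crashes. All the numbers below are made-up examples, not SEM requirements.

    ```python
    # Rough datastore-sizing sketch: how much headroom is left once the
    # appliance VM and a percentage-based snapshot reserve are accounted
    # for. A negative result means this layout can fill the datastore.
    def datastore_headroom_gb(total_gb, vm_gb, snapshot_reserve_pct):
        """Free space left after the VM and its snapshot reserve."""
        reserve_gb = total_gb * snapshot_reserve_pct / 100.0
        return total_gb - vm_gb - reserve_gb

    # Example: 1 TB datastore, 600 GB appliance, 25% reserved for snapshots.
    print(datastore_headroom_gb(1000, 600, 25))   # 150.0 GB of headroom
    # Same appliance with a 40% reserve leaves nothing to spare:
    print(datastore_headroom_gb(1000, 600, 40))   # 0.0
    ```

    The point of the exercise is that a percentage-based reserve grows with the datastore, so resizing the datastore without rechecking the reserve can quietly erase your headroom.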

  • I probably need to get with you   on the Commvault piece. I had not thought of that, and we run Commvault. I am not sure if I got the root PW, but I was able to get past my issues by reinstalling. Sometimes it has to be done.

  • I agree ... much better now in HTML5. I'm already thinking about figuring out how to use the HTML pieces and show them in Orion. Did you ever get the root password for your appliance? Each appliance has a unique root password, but it's not something you usually ever need. I keep constant snapshots of my appliance so I can roll back to any point in time on the NetApp NAS. I used to do it with NetApp snapshots, but for the past few years I've been using Commvault snapshots because we're on the Commvault backup system. It's saved my bacon more than once.

    I've had some issues with the Tomcat web server that LEM, and now SEM, uses. For no apparent reason, Tomcat logs would fill up the /var partition and crash the LEM appliance. I'm hoping this won't happen now on 2019.4 with SEM.

    One thing is for sure... the support guys for SEM know the product really well, because many have been with it since even before SolarWinds acquired the product. They know their appliance.

    Bill
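
    When logs are quietly filling /var, the first question is which files are doing it. Here's a generic sketch for finding the biggest files under a log directory; the /var/log path is an assumption, and on the SEM/LEM appliance the Tomcat log location may differ.

    ```python
    # Quick triage for a filling /var: list the largest files under a
    # directory so you can see which logs are eating the partition.
    # The directory path is illustrative, not a documented SEM location.
    import os

    def largest_files(directory, top_n=5):
        """Return (size_bytes, path) pairs for the biggest files, largest first."""
        sizes = []
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                try:
                    sizes.append((os.path.getsize(path), path))
                except OSError:
                    continue  # file vanished or unreadable; skip it
        return sorted(sizes, reverse=True)[:top_n]

    if __name__ == "__main__":
        for size, path in largest_files("/var/log"):
            print(f"{size / 1024 / 1024:8.1f} MB  {path}")
    ```

    Once you know the culprit, a log-rotation policy on that directory is usually the durable fix, rather than deleting files by hand after each scare.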

  • Much, much better. As a SEM user, I love it.