4 Replies Latest reply on Aug 7, 2014 1:09 PM by byrona

    LEM: It's time to step out of SPLUNK's shadow, spread your wings and be all you can be!


      I absolutely love LEM and its capabilities.  As a service provider, we use LEM as one of the products behind the Log and Event Management services we offer our customers.  LEM provides a quick ROI, demos extremely well in front of customers, and works especially well for customers with regulatory compliance needs.  The problem I run into is using LEM as a Log & Event Management solution in a non-regulatory environment where the customer is more focused on operational management and diagnostics; this is where Splunk overshadows LEM.


      Make no mistake about it, I much prefer LEM over Splunk.  Of all the things that LEM does, I think it does most of them better than Splunk; the problem is the things that Splunk does that LEM doesn't.  Splunk has a more scalable architecture and no limit on what logs it can consume and process for operational management and diagnostics.  When I have to tell customers that I am limited in what logs I can consume with LEM, and that the logs that are important to them are not supported, Splunk enters the conversation.  I fully believe that with focus on a few key areas LEM could easily outshine Splunk, or at least step out of its shadow and make a showing in Gartner's Leaders quadrant.


      LEM already does a great job in the security and regulatory compliance realm; focus on expanding FIM to include Linux and Networking and it will dominate on that front.


      For operational management and diagnostic implementations we have a bit of work to do so I can keep Splunk out of the conversation...

      • Make it so that LEM can consume any type of log, no matter what!
        • This doesn't necessarily need the normalization provided by the connectors; just get those logs into LEM and make them searchable and reportable.
        • If the user base is large enough, a studio where customers can build their own connectors could really help as well; your user and community base will help if you provide them the means.
      • Create a more scalable architecture that lets me drop collectors into DMZs as aggregation points that then send their logs back to the master.
        • This helps with secure distributed environments and/or multi-site deployments.
      • Anomaly detection!
        • This is certainly less important than the two items above; however, it would provide huge value not only in the security realm but also in the operational realm.
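      To illustrate what I mean by the first bullet, here's a rough sketch (Python, entirely hypothetical; table and function names are mine, not anything in LEM) of "just get the logs in and make them searchable" with no connector-style normalization at all:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical sketch: store raw log lines with only a timestamp and a
# source tag, no parsing or normalization, and make them searchable
# after the fact.  SQLite stands in for whatever store LEM would use.

def open_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS raw_events ("
        " received TEXT, source TEXT, message TEXT)"
    )
    return db

def ingest(db, source, line):
    # No parsing beyond trimming; searchability comes from the raw text.
    db.execute(
        "INSERT INTO raw_events VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source, line.strip()),
    )

def search(db, term):
    cur = db.execute(
        "SELECT source, message FROM raw_events WHERE message LIKE ?",
        (f"%{term}%",),
    )
    return cur.fetchall()
```

      Even something this crude would let me answer "can you get those logs in?" with a YES, and normalization could come later.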


      Thanks for reading and I look forward to any and all feedback you may have!

        • Re: LEM: It's time to step out of SPLUNK's shadow, spread your wings and be all you can be!
          nicole pauls

          A couple of thoughts on the collector side - we're currently working on adding agentless event log support, so you COULD deploy an agent that collected both syslog (via kiwi, or a second system) and Windows logs from remote systems. Kind of a hack, not exactly what you're asking for, but may get 90% of the job done and is at least more flexible architecture-wise. It does mean no active response/FIM/USB, so we'd still need to build the "forwarder/collector" model out further to get there.


          On the anomaly detection side, I agree, it would be nice to see some behavioral/statistical analysis and baselining, similar to what SAM has been doing on the performance side. These would be better "early warning systems" than just pure thresholds, too.
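
          To make the idea concrete, a rolling-baseline check could be as simple as this sketch (purely illustrative Python; not how SAM or LEM actually implement anything):

```python
import statistics
from collections import deque

# Hypothetical sketch of statistical baselining as an "early warning
# system": keep a rolling window of a metric (say, events per minute)
# and flag values more than k standard deviations from the learned
# mean, instead of using a fixed threshold.

class Baseline:
    def __init__(self, window=60, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 10:  # need some history before judging
            mean = statistics.mean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9
            anomalous = abs(value - mean) > self.k * stdev
        self.window.append(value)
        return anomalous
```

          The point is that the threshold adapts to each event source's normal behavior instead of being hand-tuned per rule.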


          Lastly.... I get mixed messages on the connectors, so I'm totally up for a discussion here.


          We've thought about developing a generic connector (it would probably be multiples - generic flat file, generic CSV, generic field=value, generic SQL database, generic Event Log, etc) that would pull in the data with some generic event type (even... GenericEvent, the top of our tree). We'd pull in the fields to EventInfo or ExtraneousInfo with very little parsing (with the event log we can do some automatic metadata but with most unstructured logs all we get is the date and source).
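
          As a rough illustration of the field=value case (hypothetical Python, not our actual connector code; the dict names merely stand in for EventInfo/ExtraneousInfo):

```python
import re

# Hypothetical sketch of a generic field=value connector: pull
# key=value pairs out of an unstructured line into a dict (standing in
# for EventInfo) and keep whatever text is left over as the raw
# remainder (standing in for ExtraneousInfo).

PAIR = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_generic(line):
    event_info = {}
    for key, value in PAIR.findall(line):
        event_info[key.lower()] = value.strip('"')
    extraneous = PAIR.sub("", line).strip()
    return {"event_type": "GenericEvent",
            "event_info": event_info,
            "extraneous_info": extraneous}
```

          With unstructured flat files we'd get even less than this; often just the date and source.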


          The push back we get is that these events would not map to our default rules AT ALL, which means you'd be stuck building your own rules, and a lot of people get stuck building rules because it's not necessarily intuitive the first few times you go through it. Also, on the reporting side, those reports could be huge and you'd need to build out custom/filtered reports (which again might not be an intuitive process).


          The second big factor is volume. With connectors, we have the ability to filter less than useful data and not create performance problems, or make it so there's a different connector or set of configuration steps for high volume event sources (let's say, plugging in your Exchange message tracking logs - ouch). We don't really have those tools if we build a generic connector, so there's a chance you could get in a performance situation quickly (necessitating us to build a connector anyway).


          In the end, it comes down to: does the benefit outweigh the cost?


          The device studio sort of thing is where we want to be; we're just concerned that it would need to come first in order not to tie the noose around your neck so tightly.


          Interested in your (or anyone's) thoughts on this.

            • Re: LEM: It's time to step out of SPLUNK's shadow, spread your wings and be all you can be!

              I think the biggest issue to tackle is the connector bit.  When a customer asks "can you support logs for <insert item here>?", you need to be able to say YES in nearly all cases.  Every time I tell somebody NO, I get Splunk thrown in my face.  Based on your responses HERE, HERE, and HERE, I had to go back to a customer and relay that information, and they have already said they will go with Splunk.  This is bad for both us and SolarWinds: it makes it more difficult for us to sell our Log Management service that is based on LEM, so we will be purchasing less licensing from SolarWinds.


              I think a generic connector is EXACTLY what you need, so let's think on that...


              The fact that it won't apply to any of the default rules is not a problem so long as you make that clear.  I think it's always better to err on the side of giving your users more capabilities than not.  Flag these connectors as being for advanced users and explain why in a pop-up or something creative like that; you have an awesome UX team, so I am sure they can help there.


              The volume issue seems like it could also be addressed in several ways; a few that quickly come to mind are as follows...

              • As part of the generic connector, add a filter that users can build into the connector configuration; this would offload the filtering to the end nodes and reduce what comes into LEM.
              • Add the ability to expire the logs pulled by a specific connector so that they are not kept as long in the database.
              • Provide the ability to dump logs from the database on a routine basis via some form of archive process; I think we have discussed this in a different thread as well.
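
              To show how little the first bullet would take, here's a toy sketch (hypothetical Python; the patterns and function names are just mine for illustration) of a user-defined drop filter evaluated on the end node before anything is forwarded:

```python
import re

# Hypothetical sketch: the user supplies drop patterns as part of the
# generic connector configuration; lines matching any pattern are
# discarded on the end node and never reach LEM.

def make_filter(drop_patterns):
    compiled = [re.compile(p) for p in drop_patterns]
    def should_forward(line):
        return not any(p.search(line) for p in compiled)
    return should_forward

def collect(lines, should_forward):
    # Only lines that pass the filter are queued for the manager.
    return [line for line in lines if should_forward(line)]
```

              Even a simple pattern list like this would keep the noisy DEBUG/heartbeat traffic from ever becoming a volume problem on the appliance.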

              These are just a few ideas that come to mind; I am sure that if you posted this problem to the community you would get all sorts of creative ideas.  You have a great group of people here willing to help solve problems (and the best part is we work for free).


              On the reporting side you definitely need some serious reform (as I have pointed out in a different thread).  The filtering using the reporting tools is AWFUL!  Move the reporting into the WebUI using a better model more conducive to customized reporting and filtering.


              Thanks for responding and hopefully we can keep these types of discussions moving for future improvements of the product and to help me sell more of it! 


              P.S.  I realize it's easy for me to sit here and suggest changes without understanding what it would take to make them work; I don't fully understand the back-end of the product or the implications of what I ask for.  I am just trying to provide feedback to make a product I love better, and to help position both the product and myself to be successful out there in the world.