I have the same problem. I have my LEM on a host pretty much by itself; half the host's resources are dedicated solely to the LEM VM, and the disk is local, not SAN disk.
The searches seem very laggy and slow most of the time, both in the reports console and the web console. I find that if my search parameters are too generic or my date range is too large, the search will pretty much fail. As it stands right now I can only search about a week's worth of data in the web console using nDepth. If my date range is longer than that, the search always fails no matter how specific a search I put in.
The Flash on the web console is also getting pretty rough to deal with, and I usually have to close out my browser every couple of hours of having the console loaded or I run into memory issues. This is more of an issue with Flash than with the LEM product, though.
That's almost exactly our scenario. I have one test instance running on a SAN (to take performance stats), and the production instance is on a standalone machine. Searches on the SAN LEM hit the max search time and fail, which leads me to think that LEM cannot handle the volume of messages Support says it should.
I know that the database makes heavy use of compression to make it very space efficient. My current alert database is 389.27 GB and has 7.8 billion alerts stored in it. I average between 4 million and 5 million alerts a day, and the system has no issues keeping up with that part of it. I can't help but wonder if the compression is being overzealous, sacrificing search performance in return for storage capacity. I have no real evidence for any of this; it's just my speculation. I have mostly learned how to work around the limitations of the search functions. While they are annoying, I am able to work around them. I can see that being a massive deal breaker for other people, though.
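For what it's worth, the numbers quoted above work out to only about 54 bytes of storage per alert, which does support the idea that compression is very aggressive (raw log messages are typically hundreds of bytes each). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on storage per alert, using the
# figures quoted above (389.27 GB database, 7.8 billion alerts).
db_size_bytes = 389.27 * 1024**3   # database size in bytes
alert_count = 7.8e9                # stored alerts

bytes_per_alert = db_size_bytes / alert_count
print(f"~{bytes_per_alert:.1f} bytes per alert")  # prints ~53.6 bytes per alert
```

Whether that ratio actually costs search performance is another question, but it does show the on-disk format is packed very tightly.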
Let me rephrase that comment: "LEM cannot effectively search through the volume of messages Support says it should." Our installation has no problem with storing ~48 million messages/day.
I'd be curious about more of the details of what you're running into so that I could maybe help understand what's going on in your specific situation. It sounds like you've already reached out to Support for this; otherwise I would make the standard suggestion to have them take a look at your specific situation.
A couple of things just to start with a foundation:
1) nDepth works best searching the last week of data. Outside of that, your mileage may vary, and it may vary quite a bit based on your environment.
2) The search timeout can actually scale up to 30 minutes by default.
Here's the standard KB article for this type of issue if you want to review the standard suggestions:
With proper reservations for your setup (at 48 million events per day, I would say 6 cores and 24 GB of RAM at a minimum, and 8 cores and 32 GB if you wanted better performance), you should be able to pull back a week-long search in a few minutes even without any filters. There are definitely other variables that can go into it, so it's worth investigating those as well.
How many rules do you have firing per day? Very active rules can contribute to performance issues.
Have you had Support test the LEM's effective IOPS? I mention this because slow disk (as unlikely as it may be in your situation) can manifest as performance issues.
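For reference, here's a crude way to get a feel for random-read IOPS on a Linux box. This is a rough illustrative sketch, not the test Support runs: without direct I/O these reads can be served from the page cache, so a dedicated tool like fio with `--direct=1` gives far more honest numbers.

```python
import os
import random
import time

# Crude random-read probe. NOTE: reads may come from the page cache,
# so treat the result as an upper bound, not a real disk benchmark.
PATH = "iops_test.bin"      # hypothetical scratch file name
FILE_SIZE = 64 * 1024**2    # 64 MB test file
BLOCK = 4096                # 4 KB reads
DURATION = 2.0              # seconds to run

# Create the scratch file with random (incompressible) data.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

fd = os.open(PATH, os.O_RDONLY)
blocks = FILE_SIZE // BLOCK
reads = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    offset = random.randrange(blocks) * BLOCK
    os.pread(fd, BLOCK, offset)
    reads += 1
os.close(fd)
os.remove(PATH)

print(f"~{reads / DURATION:.0f} random 4K reads/sec (cached upper bound)")
```

If even a cached run like this is slow, or if Support's real measurement comes back low, that would point at the storage layer rather than the search engine itself.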
If there are performance or configuration issues with your LEM, Support should be able to help identify them, and those could be what's causing the problems with nDepth searches.
That's all assuming the basics though, at least:
1) Reservations are set appropriately for the event load.
2) You're searching within the last week.
Outside of that situation, many more variables can come into play, and it would help to have more specific information on what you're running into in your environment.