I had to go digging in the internal docs for this. The caveat is that the virtual appliance only supports one logical disk, so any RAID is handled transparently to the LEM by your storage infrastructure. There may be better ways to optimize it based on your virtualization environment, storage methods, and the infrastructure connecting the VM to that storage.
What is true in any case is that the LEM is a very high I/O product: during normal operation we anticipate it will be constantly writing new information to the alert and/or nDepth databases, so anything you can do to maximize those IOPS is a good idea.
When Log and Event Manager was sold as a hardware appliance by TriGeo, the Manager and DB (if it was separate) were set up with RAID 5. This maximized available storage while still providing some redundancy, but was obviously constrained by the physical chassis and the number of drive connections.
SolarWinds neither endorses nor discourages any particular RAID configuration, but recommends that you choose the configuration that best meets your needs for redundancy while still allowing the LEM to keep its queues down.
To add to what Curtis said, we generally don't see issues with direct-attached storage; where we start to run into problems is network-attached storage with high latency. Virtual appliances/machines are usually sized in terms of IOPS rather than RAID levels, and as long as your storage can deliver enough IOPS you'll be okay. Of course, faster disk = faster performance.
Reports are a high read-write situation because we actually create a cache of the data from the primary datastore into a secondary temporary datastore for reporting. On top of that, older data gets compressed, so we end up decompressing, caching, and then reading from that cache to create your report.
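The decompress-then-cache flow described above can be sketched roughly like this. This is a generic illustration of the pattern, not LEM's actual implementation; the function name, gzip format, and file layout are all assumptions for the sake of the example:

```python
import gzip
import os

def build_report_cache(archive_path, cache_dir):
    """Decompress a gzip-archived datastore file into a temporary cache
    so a report can read it repeatedly without re-decompressing.

    Illustrative sketch only -- LEM's real reporting cache and archive
    format are internal and not shown here.
    """
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, "report_cache.dat")
    # Decompress (heavy reads) and write the cache (heavy writes) --
    # this is why reporting is a high read-write workload.
    with gzip.open(archive_path, "rb") as src, open(cache_path, "wb") as dst:
        dst.write(src.read())
    return cache_path
```

The key point the example makes is that a single report over archived data costs you reads of the compressed store, writes of the temporary cache, and then more reads from that cache.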
Searching can also be high read-write when searching older data, but it doesn't have to do any caching - it just has to decompress the older data temporarily. For recent data, it's mostly reads.
Inserting data is dependent on your event load, of course, and that's the primary use case for fast writes (more data = faster disk to keep up).
The two variables are really event load and expected performance: for better performance or higher event loads you need more IOPS/faster disk; for adequate performance or lower event loads you can relax a little.
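To make the event-load-to-IOPS relationship concrete, here is a back-of-envelope sizing sketch. Every number in it (average event size, I/O size, overhead factor) is an assumption I've picked for illustration, not SolarWinds sizing guidance - plug in figures from your own environment:

```python
def estimated_write_iops(events_per_sec, bytes_per_event=500,
                         io_size=4096, overhead_factor=2.0):
    """Rough estimate of sustained write IOPS needed to keep the
    alert/nDepth databases fed. All defaults are assumptions:

    bytes_per_event  -- assumed average normalized event size on disk
    io_size          -- assumed typical write size (4 KiB)
    overhead_factor  -- fudge factor for indexes, metadata, etc.
    """
    bytes_per_sec = events_per_sec * bytes_per_event * overhead_factor
    return bytes_per_sec / io_size

# e.g. 2,000 events/sec works out to roughly 488 write IOPS
# under these (assumed) parameters.
```

The exact constants matter far less than the shape of the relationship: double the event load and you double the write IOPS your storage has to sustain.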