I implement both approaches, and there is a right tool for every job, so you need to understand your actual requirements before you can select the correct solution.
The problem with storage replication stems from the requirement for like arrays. While there are obvious efficiencies to be gained when the replication solution can make assumptions about both the source and target, matching arrays isn't feasible in all situations. Consider DRaaS providers: they would lose the benefits of scale if they had to purchase one of every type of array in order to handle customer replication requests. In my experience, it is much easier to leverage a common abstraction at the hypervisor or application level, especially if you're potentially going to leverage a cloud provider's infrastructure for DR.
From a customer perspective, using the hypervisor or application replication model, I can even pair a high-dollar, high-performance storage array in production with something less expensive (or even host-local storage) for DR. What's more, hypervisor-controlled replication provides finer granularity for protection (per-VM vs. per-LUN), so I can move gradually into a replicated data solution, manage VM provisioning and protection from a single location, and prioritize based on application SLAs.
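To make the per-VM granularity point concrete, here's a minimal toy sketch (hypothetical names and SLA tiers, not any vendor's actual API) of assigning each VM its own replication RPO from its SLA, rather than inheriting one setting for an entire LUN:

```python
# Toy illustration of per-VM replication policy (hypothetical names,
# not a real vendor API): each VM gets an RPO tier from its SLA,
# instead of every VM on a LUN sharing one array-level setting.

SLA_RPO_MINUTES = {
    "gold": 15,      # tier-1 apps: tight RPO
    "silver": 60,
    "bronze": 240,   # low-priority VMs can tolerate more data loss
}

def replication_plan(vms):
    """Return (vm_name, rpo_minutes) pairs, tightest RPO first."""
    plan = [(vm["name"], SLA_RPO_MINUTES[vm["sla"]]) for vm in vms]
    return sorted(plan, key=lambda p: p[1])

vms = [
    {"name": "sql01",  "sla": "gold"},
    {"name": "file01", "sla": "bronze"},
    {"name": "web01",  "sla": "silver"},
]

print(replication_plan(vms))
# [('sql01', 15), ('web01', 60), ('file01', 240)]
```

With a per-LUN model, all three VMs above would get the same protection level simply because they share a datastore; per-VM policy lets the SQL server replicate aggressively while the file server replicates cheaply.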
Currently we are using storage-level replication with NetApp. We also have application-level replication in place for many customers, specifically for SQL. Since we are a service provider, our customers' requirements often drive how we configure things. We are not currently using hypervisor-level replication; however, we do have plans for it in the future, using a remote site for DR.
Ultimately, I agree with dobaer: it's all about using the right tool for the right job.