
Guest Blog by David Marshall, Sr. Marketing Manager and Social Media Strategist at Virtual Bridges

Virtual Bridges and SolarWinds have jointly put together a four-part blog series intended to provide guidance on implementing and managing a VDI environment. These will be posted alternately on the SolarWinds Whiteboard blog at http://thwack.solarwinds.com/community/solarwinds-community/whiteboard/blog and the Virtual Bridges blog at http://www.vbridges.com/company/news/blog/ .

This is part one in the series; the other parts are:

 

  • Part 1: Achieving Success with VDI, the First Time (Whiteboard Blog)
  • Part 2: Successful Performance and Capacity Management for VDI (Virtual Bridges Blog)
  • Part 3: Considering VDI? 10 tips to ensure your VDI investment delivers real returns (Whiteboard Blog)
  • Part 4: Key Considerations for Storage Optimization for VDI (Virtual Bridges Blog)

 

We hope this is useful to you and we look forward to your feedback.

 

One of the questions we frequently hear from customers is how to get started with VDI. Over the next two weeks, Virtual Bridges and SolarWinds will be taking a deeper look at the subject, partnering to help educate the community with a multi-part blog series. As our first post, we’ll walk through key steps to achieving VDI success – the first time.

Windows migration, Bring Your Own Device (BYoD), cost reduction, mobility, security, compliance – these are just a few of the many business reasons driving VDI adoption. Whatever your rationale, there are a number of keys to achieving success with your VDI project, but it all starts with careful planning of the infrastructure.

Before committing to any VDI initiative, make sure you know the answers to the following:

  • What are the key elements of my VDI infrastructure?
  • What are some of the key factors that impact the size, capacity, and performance of this infrastructure?
  • How do I monitor and manage this infrastructure to ensure reliability and performance?
  • How do I measure success?

 

Now that you have the questions in mind, let’s get started on the answers.


VDI Basics


When you think about VDI, there are three consistent key components – servers, storage, and networks. How you work with your VDI vendor to design, size, and manage the infrastructure can be the difference between success and ongoing headaches.

  • Servers – The servers required for VDI are industry-standard x86-architecture servers. Local disk storage is recommended for the servers; however, you can use any physical form factor, including blade servers. VDI works best when multiple servers are clustered together to meet the capacity needs of thousands of desktops. In addition, clustering buys you load balancing and high availability. You want the servers to be stateless so there is no loss of critical data if a server goes down. It is also ideal if the servers themselves scale horizontally, much like a web server farm: as your capacity needs grow, you simply add more servers to the cluster, since the same software component runs on every server. The local storage can be attached via standard IDE or iSCSI interfaces. Dual 1 GbE network cards are recommended on the servers. These network ports can be dedicated individually to different types of traffic, or teamed or bonded together.

 

  • Storage – In a VDI implementation, the cluster of servers is connected to shared storage, which is the persistent repository of all data. The storage is NAS-based, and the servers rely on file-based access to the shared storage. Both NFS and CIFS file protocols are supported.

 

A key warning: storage can be a significant capacity and performance bottleneck with VDI and a source of significant frustration. Shared storage combined with a storage-optimizer technology can alleviate this and reduce the performance requirements. The key to this technology is its ability to leverage the local disks on the servers for almost all real-time operations. The IOPS on the shared storage is reduced to 4-5 per VDI session, even for persistent desktops.
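To see why this matters for sizing, here is a rough back-of-the-envelope sketch. The per-session IOPS figures are illustrative assumptions for the unoptimized case, not vendor benchmarks; only the 4-5 IOPS figure comes from the discussion above.

```python
# Rough VDI shared-storage IOPS sizing sketch.
# Per-session figures are illustrative assumptions, not measured values.

def shared_storage_iops(sessions, iops_per_session):
    """Total steady-state IOPS the shared storage must sustain."""
    return sessions * iops_per_session

sessions = 1000

# Without local-disk offload, assume a persistent desktop pushes
# roughly 20 IOPS to shared storage (an assumed figure).
baseline = shared_storage_iops(sessions, 20)

# With a storage-optimizer layer serving most real-time I/O from
# local server disks, the post cites 4-5 IOPS per session.
optimized = shared_storage_iops(sessions, 5)

print(f"baseline:  {baseline} IOPS")
print(f"optimized: {optimized} IOPS ({baseline // optimized}x reduction)")
```

At a thousand desktops, the difference between 20,000 and 5,000 sustained IOPS is often the difference between commodity NAS and a much more expensive storage tier.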

 

  • Network – The network is another critical aspect of the VDI infrastructure. This is primarily the network that connects the access or client devices to the VDI sessions running on the servers. Typically, this part of the network is a WAN. Most VDI implementations worry about optimizing for WAN bandwidth and latency. Make sure to consider the reliability of the network and remember that VDI requires a persistent network connection.
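A quick capacity check on that WAN link can be sketched the same way. The per-session bandwidth figure below is an illustrative assumption; actual consumption depends heavily on the display protocol and workload.

```python
# Rough WAN capacity check for remote VDI sessions.
# The per-session bandwidth figure is an illustrative assumption.

def wan_utilization(sessions, kbps_per_session, wan_mbps):
    """Fraction of WAN capacity consumed by concurrent VDI sessions."""
    demand_mbps = sessions * kbps_per_session / 1000
    return demand_mbps / wan_mbps

# 200 concurrent sessions at an assumed ~150 kbps of typical office
# traffic each, over a 50 Mbps WAN link:
util = wan_utilization(200, 150, 50)
print(f"WAN utilization: {util:.0%}")  # 60%
```

A result well under 100% still leaves no headroom for bursts such as video or printing, which is why latency and reliability deserve as much attention as raw bandwidth.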

 

Stay tuned for our next post which will highlight the infrastructure requirements to keep in mind as you get started with VDI.


For more on VDI, check out Virtual Bridges.

So it’s no grand surprise that VMware was going to buy someone in the SDN (software defined networking) space, but $1.26B for such a young company?  Anyway, shock aside, software defined networking is here to stay and VMware’s acquisition certainly gives the space another boost, not that it needed one given Cisco’s recent moves and the broad industry support behind OpenFlow.  For those of you wondering what the heck I’m talking about, you can check out this whitepaper.

It’s clear to me that software defined networks are the future of networking and will likely move beyond adoption in just big networks to also play a role in mid-market and smaller networks. Of course, penetration into the mid-market will take much longer, but there’s too much inherent value in the shift for the move not to happen.

Today, SDNs are being touted in high-end data centers because they solve a real problem: creating a network fabric that works in tandem with the speed of the cloud. The folks at VMware did a post that describes the strategy behind the acquisition, and I think it articulates the problem well. And the promise of low-cost commodity networking hardware with specialized software running on top of it is an appealing one. Think about what it would have cost Google to run their data centers on IBM hardware versus what they’ve done with commodity hardware. Now replicate that on the networking side with Cisco hardware versus commodity hardware.

If you’re in a mid-size company, you might think this doesn’t apply to you because you don’t have the data center scale that Google has, but don’t be fooled. Software defined networking will mean lower hardware costs and a more flexible network: you get to buy commodity hardware and really just manage the controllers (software). SDNs still have a long way to go – they don’t yet offer all the network services that traditional networking technologies do – but those will come.

For our part, we are constantly listening to the feedback you give us about when you’re adopting these new technologies and the challenges you’re having with them. Some of our customers are already leveraging our products to do some work with products from companies like Vyatta (see the device template on thwack), but many of you may not have started.

So, when will software defined networks make it in to your business?  Let us know.

In WSUS (Windows Server Update Services) and Windows Update, Microsoft has one of the most reliable patch management engines in existence. According to Microsoft, Windows Update currently updates more than 800 million PCs worldwide. But over the past two years Microsoft patches themselves have been causing more and more problems, such as those users had installing Windows 7 Service Pack 1 last April and the recent Skype (now owned by Microsoft) patch - a package that was supposed to be update-only, but actually initiated full deployments to all targeted systems.

 

In other cases, the patch packages fail to find machines that need updates, or attempt to install patches where they’re not needed. Over the last year, I’ve heard of two or three cases where Microsoft patches reported a failed installation when the patch had actually installed. Sometimes, a report of a failed update is actually triggered by an older update, not the most recent one.

 

This leads to a lot of confusion and wasted effort for the patch administrator. Usually, something happens unexpectedly and you don’t know why – for instance, a report of an unsuccessful patch where you can’t identify what failed. The other side of that, which is even more dangerous, is when a patch needs to be applied but its detection logic can’t identify all the systems that need it, so it never gets deployed. What you don’t know can hurt you.

 

In only a very few cases is the problem a question of interaction between the patch and WSUS. Most often, it’s flagrant errors in the way the patch logic itself has been written. After close to 14 years of solid performance, one has to ask: why has the quality of Microsoft patches fallen so quickly?

 

Can Product Groups Do Patches, Too?

 

I don’t have a window into Microsoft, but my sense is there has been reorganization in the WSUS and the Windows Update hierarchy recently, and perhaps resources once devoted to WSUS, a mature offering, have been diverted to other projects such as System Center Configuration Manager. It’s almost as if WSUS has gone into maintenance mode, with Microsoft treating it as a product that doesn’t need continued enhancement.

 

Another factor, I suspect, is that the updates for each product are developed by the individual product groups, each of which is supposed to use Microsoft’s standard methodology for developing update packs. But expecting each product group to have the same skill sets, level of experience, and, frankly, level of commitment to patching is asking for trouble. For a process as complex as patching, and a product line as complex as Microsoft’s, a more centralized management structure is the only way to deliver the quality customers expect.

 

 

What can Microsoft Do Better?

 

What steps would I like to see Microsoft take? First, publish (to both Microsoft product groups and externally) more details about how the WSUS infrastructure works, and the best practices for working with it. This should include information about how to use Microsoft’s tools for building packages and injecting them into the WSUS database. Internally, this documentation will help keep the product groups on track with best practices for creating patches. For those of us on the outside, this detailed documentation will help us understand what has gone wrong when a Microsoft patch blows up.

 

Second, the folks in Redmond need to do whatever it takes – whether that means organizational or personnel moves, or both – to follow those best practices themselves. Millions of customers worldwide rely on Microsoft to lead the way in patch management, at least with their own systems. If we can’t trust Microsoft to patch their own products correctly, what else can we trust them with?

 

And finally, it would be helpful if Microsoft would expose in the WSUS console the actual logic rules defined in a Microsoft patch package.  The SolarWinds Patch Manager console exposes the logic rules that we use for third party packages – and that readily accessible information has helped immensely with customer troubleshooting of unexpected behaviors.

 

Being able to see, for example, that IE9 on Vista Service Pack 2 had an unexpected dependency on a certain video driver would have helped hundreds of patch administrators last year when nobody could figure out why IE9 wouldn’t install on Vista SP2 systems.
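That kind of hidden prerequisite can be illustrated with a toy applicability-rule evaluator. The rule structure and field names below are entirely hypothetical – they are not the actual WSUS or Windows Update metadata format – but they show how one undocumented condition in detection logic can silently block an install.

```python
# Toy model of patch applicability rules. The rule format is
# hypothetical; it only illustrates how an undocumented prerequisite
# buried in detection logic can silently block an install.

def is_applicable(rules, system):
    """A patch applies only if every rule matches the system's state."""
    return all(system.get(key) == value for key, value in rules.items())

# Hypothetical rules for an IE9 package with an unexpected
# video-driver prerequisite baked into its detection logic:
ie9_rules = {
    "os": "Vista SP2",
    "ie_version": 8,
    "video_driver": "v2.1",   # the hidden dependency
}

# A system that looks eligible to the administrator:
system = {"os": "Vista SP2", "ie_version": 8, "video_driver": "v1.8"}

print(is_applicable(ie9_rules, system))  # False – blocked, with no hint why
```

With the rules visible, the mismatch on `video_driver` is obvious in seconds; with them hidden, the administrator sees only a desktop that mysteriously never receives the patch.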

After months of speculation, it seems that Dell is primed to win its bidding war for Quest Software. The price tag: $2.4 billion, a 21% premium to the deal Quest had reached with private-equity buyer Insight Venture Partners in March. While this is great news for Quest shareholders, Quest users might not share the enthusiasm. Dell has been on a shopping spree in the last 18 months with a clear intent to sell a combination of hardware, software and services to larger enterprises and compete more directly with the IBMs and HPs of the world.

 

This is particularly ironic when you look back at where Quest started out in the late nineties: a lower-cost alternative to the Big 4 IT management vendors (IBM, BMC, CA, and HP). But it’s not a surprising fate. While their products were originally less complicated and more user-centric than the Big 4’s (which contributed to their initial success), after going public they used their capital to acquire smaller companies, and in many cases bought several companies that did the same thing. For instance, they bought at least three companies that offered virtually the same Active Directory management functionality. It was a similar story with virtualization management. Most importantly, they failed to consolidate these overlapping properties, so today they find themselves with a very messy portfolio.

 

Will the Dell acquisition fix that problem? Time will tell. It may give them the resources to try to figure it out over time, but Quest users should either expect disruption in the product lines or, if the product rationalization never happens, expect the overlapping products to continue to languish with little attention. In either case, Quest users have the right to question what level of investment and innovation in their products will be sustained, what the roadmap will look like, and whether their products will now be part of a much more complex and expensive software platform. Not to mention the professional services fees that will be tacked on as Dell competes with HP and IBM for those services dollars.

 

At SolarWinds, we have a different philosophy. We don’t believe in professional services. We don’t believe in complex suites of software that users hardly ever get to work. Instead, we focus our investment and resources on the needs of IT professionals across organizations of all sizes who demand powerful, usable products that deliver immediate value at a price that doesn’t break the bank. We like simplicity. And we like it even better when our customers praise our powerful, scalable, and affordable products.

 

If you are a Quest user, you are probably already looking at alternatives. Well, the quest is over (no pun intended). You should definitely take a look at our Server & Application, Virtualization, and Network Management products. We make it easy for you with a free 30-day trial and a price likely lower than your current software maintenance cost.

 

Now that will make you smile!
