JK:  How did you get started blogging (blog.scottlowe.org)?

SL:  I started writing on my blog in 2005.  At the time, I was learning a lot, and I wanted a way to capture the knowledge I was gaining.  I guess you could say that my blog started as more of a knowledge base than anything else.  It wasn’t until 2007, when I liveblogged VMworld, that the site really took off.


JK: Do you take your topics from things you are working on at your job or do you take comments and questions from readers as topics?


SL:  Most of my topics come from whatever I am working on; however, from time to time, someone will email me with a question or a comment, and that might turn into a post.  Sometimes the question is about a problem with which the reader is struggling, and sometimes the question is about one of the books that I’ve written or a presentation that I’ve given.  Pulling topics from whatever I’m working on professionally is pretty common among the other bloggers that I know.


JK: Virtualization technology is fairly mature now.  Are there any virtualization concepts that are still not widely understood?


SL:  I’d say the one thing I see a lot is that customers try to do things the same way in the virtual world as in the physical environment, and that is often not the best approach.  Because VMware and the other leading virtualization players make it so easy and so seamless to run workloads in virtualized environments, administrators don’t take the time to optimize for a virtualized environment.  This is especially true for business critical workloads and virtual desktop environments; with virtual desktops in particular, people just re-create what they were doing for physical desktops and never truly optimize for the virtual environment.


I think because vSphere and other virtualization solutions do such a great job of making everything seem the same as it was, people don’t even realize they could be doing more.  People port the application over, it runs, and they don’t realize they could optimize it and make it run even better than it did in the physical environment.


JK: Are there still challenges with or objections to moving mission critical applications to virtualized environments?


SL:  If I had to name a single issue, it’s that organizations don’t realize their virtualization platforms are capable of supporting mission critical applications, because they are just looking at recreating what existed in the physical world.  I think it was Albert Einstein who said, “You cannot fix problems using the same thinking that you used to create them.”  The same applies to virtual environments: you can’t apply the same thinking to running a mission critical workload in a virtual environment that you used in the physical environment.  Customers will attempt to run mission critical workloads, but because they haven’t optimized them and the performance is different, they assume there is too much overhead.  All of the virtualization platforms are very robust and capable of handling mission critical workloads; customers just have to go about designing the environment a little bit differently than perhaps they realize.

JK: In terms of looking at the performance of applications, would you say it is a must to look at application performance and the virtualized elements at the same time?

SL:  You need a comprehensive view of all the different layers in your datacenter.  Where before we had workloads sitting on bare metal, now we have an abstraction layer in between.  That abstraction is beneficial in that it gives us hardware independence, workload mobility, and easier disaster recovery.  On the other hand, it also introduces an inability to see what is happening on the other side of that layer.

 

Consider this: a VM sees only what the hypervisor wants it to see, but you need to see what the application is doing, what the OS is doing, and also what the host (or hypervisor) is doing.

Building on the same theme we have been discussing, the problem is that customers look at things using the same monitoring solution they used in the physical environment – one that is not virtualization aware.  Because the tool is not virtualization aware, it might not gather information from all the appropriate layers, which results in incorrect information.  That incorrect information prevents people from properly assessing the performance of the application and whether SLAs are being satisfied.  It’s only through looking at all the different layers that you can get comprehensive information, in my opinion.

 

As we now move into environments where a single application could comprise multiple VMs and multiple hosts, it becomes necessary to correlate performance across hosts, operating systems, and applications to get a holistic view of performance for customers.  And I use the term “customers” to mean the consumers of the services IT provides, whether those consumers are internal, as in a business unit, or external (the end users).

 

JK: I noticed you’ve been writing quite a bit about libvirt recently.  How has libvirt matured over the last year?

SL:  My experience with libvirt has only been over the past few months, in my new role.  There is a tremendous amount of promise in libvirt, as with many open source projects.  Unfortunately, many open source projects still lack some of the commercial support mechanisms necessary for enterprises to adopt them.


Without commercial support mechanisms, the adoption of open source projects tends to happen in organizations that have the ability to look at the code and fix it themselves, like MSPs and telcos.  These types of companies are already writing solutions for their customers, and they need to keep their costs down, so they leverage their expertise to support these open source projects while also satisfying the needs of their customers.

 

When I talk about commercial support mechanisms, think about companies like Red Hat. Red Hat has made it possible for enterprises to use an open source project like Linux, in that they can get assistance from Red Hat if there is an issue with the code.

 

As I said, I think libvirt is a very promising project.  Libvirt is a Red Hat-sponsored project, by the way, but it is not commercially backed as a product today.  Open vSwitch is in a similar position, although the inclusion of Open vSwitch in several commercial products might change that situation.  We also hear the same thing about OpenStack, which is promising technology but will require commercial backing for broad adoption.
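
[Editor’s note: As a rough illustration of what libvirt exposes, here is a minimal sketch using the libvirt Python bindings against a local QEMU/KVM host.  The qemu:///system URI and the libvirt-python package are assumptions made for this example; it simply opens a read-only connection and lists the defined VMs with their current state.

    # Minimal sketch: connect read-only to a local QEMU/KVM hypervisor via the
    # libvirt Python bindings and list the defined VMs with their current state.
    # The qemu:///system URI is an assumption for illustration.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    if conn is None:
        raise SystemExit('Failed to open a read-only connection to qemu:///system')

    for dom in conn.listAllDomains():
        state = 'running' if dom.isActive() else 'shut off'
        print('%s: %s' % (dom.name(), state))

    conn.close()

The same URI scheme also works against remote hosts (for example, qemu+ssh://), which is part of what makes libvirt attractive as a common management layer.]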

 

JK: I noticed you participate in user groups around infrastructure coding.  Can you tell me a little more about this trend?

 

SL:  Organizations are pressing employees to work at greater scale, with fewer resources, and at greater speed.  The only way to do that is through automation and orchestration.  Because companies need to do things as inexpensively as possible, we don’t see organizations going out and paying for very expensive, highly complex automation/orchestration solutions, which then require professional services to implement.  Instead, organizations start writing shell scripts, or start looking at open source projects to help automate some of their tasks.  As organizations continue along this path, I see administrators needing to embrace automation and orchestration as a core part of their job, or they won’t be able to scale effectively.

 

For that reason, I have been advising users in these user groups to take a look at Puppet, Chef, and others, and to look at the ideas and culture of the DevOps space.  Anywhere an organization can apply orchestration and automation, it will reap the benefits of responding more quickly and having more consistent configurations, which helps with troubleshooting and performance.  I personally am going down that route and am looking extensively at Puppet.

 

I don’t necessarily think that administrators need to be programmers or that programmers need to be administrators, but administrators need to have some sort of idea about creating configuration files, which might require some quasi-programming, like with the Puppet Domain Specific Language (DSL), which is similar to Ruby.

 

JK: What other IT trends should administrators pay attention to as they plan for next year?

SL:  I just gave a presentation at a user group meeting in Canada on this topic, and I listed three technologies to which users should pay attention. Here are the three technologies I gave the attendees:


1) Network virtualization, or software-defined networking (SDN).  This technology is about creating logical networks on top of physical networks, similar to what has been done on the server side.  VMware recently acquired Nicira for this technology, although there are other players in the market as well.
2) Open vSwitch.  This is something I think administrators should really watch.  It is the basis of a number of network virtualization products, and administrators should understand its role in network virtualization.
3) Automation and orchestration.  It’s important, in my opinion, for administrators to continue to try to bring greater levels of automation and configuration management into the environment.  This is important for deploying workloads more quickly and for having assurance that those workloads will operate correctly over time, eliminating configuration drift and similar operational challenges.