
Thin Provisioning—Have You Moved From Fat to Thin?

Level 10

As an admin, how do you ensure that you don't run out of disk space? In my opinion, thin provisioning is the best option. It reduces the amount of storage that needs to be purchased before an application can start working. Also, monitoring thin provisioning helps you understand the total available free space, so you can allocate more storage dynamically when needed. In a previous blog, I explained how thin provisioning works and the environments where it can be useful. Now I'd like to discuss the different approaches for converting from fat volumes to thin.


Once you've decided to move forward with thin provisioning, you can start implementing all your new projects with minimum investment. With thin provisioning, it's very important to account for your active data (on fat volumes) and to be aware of challenges you might encounter. For example, if you do a straight copy of existing data from a fat volume to a thin one, all the blocks associated with the fat volume will be copied to the thin volume, negating any benefit of thin provisioning.


There are several ways to approach copying existing data. Let's look at a few:


File copy approach

This is the oldest approach for migrating data from a fat volume to a thin volume. In this method, the old fat data is backed up at the file level and restored as new thin data. The disadvantage of this type of backup and restore is that it's very time-consuming. In addition, this type of migration can interrupt the application. However, an advantage of the file copy approach is that zero-valued blocks are never transferred, so the thin volume only allocates space for actual file data.
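To make the idea concrete, here is a minimal Python sketch of a file-level migration. Two temporary directories stand in for the fat and thin volume mount points (a real migration would use a backup/restore tool such as tar or rsync against actual mounts); the paths and file names are illustrative only.

```python
import os
import shutil
import tempfile

# Temp directories standing in for the old fat volume and the new thin volume.
fat_mount = tempfile.mkdtemp(prefix="fat_")
thin_mount = os.path.join(tempfile.mkdtemp(prefix="thin_"), "restore")

# Simulate some application data living on the fat volume.
with open(os.path.join(fat_mount, "app.log"), "w") as f:
    f.write("application data\n")

# File-level copy: only real file contents are read and written, so the
# zero-filled free space of the fat volume is never transferred.
shutil.copytree(fat_mount, thin_mount)

print(sorted(os.listdir(thin_mount)))
```

Because the copy walks files rather than raw blocks, the new thin volume allocates space only as real data lands on it.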


Block-by-block copy approach

Another common practice is using a tool that does a block-by-block copy from an old array (fat volume) to a new thin volume. This method offers much higher performance compared to the file copy method. However, the drawback to this method is the zero-detection issue: fat volumes contain unused capacity filled with zeros, awaiting the eventual probability of an application writing data to it. So, when you migrate by copying data block-by-block from the old array to the new, you receive no benefit from thin provisioning. The copied data will include the unused zero-filled blocks, and you end up with wasted space.
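The waste is easy to demonstrate. The sketch below simulates a small "fat volume" image (one block of real data followed by three zero-filled blocks) and copies it block-by-block; the 4 KB block size and file layout are assumptions for illustration.

```python
import os
import tempfile

BLOCK = 4096  # copy unit for this sketch; real tools use the array's block size

# Simulate a fat-volume image: one block of data, then three blocks of
# zeros representing unused capacity.
src_path = tempfile.NamedTemporaryFile(delete=False).name
dst_path = tempfile.NamedTemporaryFile(delete=False).name
with open(src_path, "wb") as f:
    f.write(b"data" * (BLOCK // 4))   # one block of real data
    f.write(b"\x00" * (BLOCK * 3))    # three zero-filled blocks

# Naive block-by-block copy: every block is written, zeros included.
with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
    while chunk := src.read(BLOCK):
        dst.write(chunk)

print(os.path.getsize(dst_path))  # 16384 — the zero blocks were copied too
```

All four blocks land on the destination, so a thin target ends up fully allocated, just like the fat source.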


Zero-detection

A tool that can handle zero-block detection can also be used. The tool removes zero-valued blocks while copying the old array to the new. This zero-detection technology can be software based or hardware based, and both types of fat-to-thin conversion can remove zero blocks. However, the software-based fat-to-thin conversion has a disadvantage: the software needs to be installed on a server, where it will consume significant server resources and impact other server activities. The hardware-based fat-to-thin conversion also has a disadvantage: it's on the expensive side.
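The core of software-based zero detection can be sketched in a few lines: compare each block against zeros and, instead of writing a zero block, seek past it so the destination stays sparse. This is an illustrative toy, not a real conversion product; actual tools do this in the storage data path, and the 4 KB block size and file layout below are assumptions.

```python
import os
import tempfile

BLOCK = 4096

# Simulated fat-volume image: one data block followed by three zero blocks.
src_path = tempfile.NamedTemporaryFile(delete=False).name
dst_path = tempfile.NamedTemporaryFile(delete=False).name
with open(src_path, "wb") as f:
    f.write(b"data" * (BLOCK // 4))
    f.write(b"\x00" * (BLOCK * 3))

zero_block = b"\x00" * BLOCK
with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
    while chunk := src.read(BLOCK):
        if chunk == zero_block:
            # Skip zero blocks: seek forward to leave a hole instead of
            # writing zeros, so the destination stays sparse.
            dst.seek(len(chunk), os.SEEK_CUR)
        else:
            dst.write(chunk)
    dst.truncate()  # extend the file to its full apparent size

apparent = os.path.getsize(dst_path)           # same logical size as the source
allocated = os.stat(dst_path).st_blocks * 512  # blocks actually allocated on disk
print(apparent, allocated)
```

The destination reports the same logical size as the source, but on a filesystem that supports sparse files, only the non-zero block consumes physical space — which is exactly the behavior a thin volume wants.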


As discussed, all the methods for converting fat volumes to thin have advantages and disadvantages. But you cannot continue using traditional (fat) provisioning for storage, since it wastes money and results in poor storage utilization. Therefore, I highly advise using thin provisioning in your environment, but make sure you convert your old fat volumes to thin ones first.

After you have implemented thin provisioning, you can start over-committing storage space. Keep an eye out for my upcoming blog, where I will discuss the over-commitment of storage.

8 Comments
MVP
MVP

Interesting... not being involved in that area, the perspective is appreciated.

Level 10

When looking at a VMware posting for converting Thick to Thin (VMware KB: Changing the thick or thin provisioning of a virtual disk) or Cloning and Converting (VMware KB: Cloning and converting virtual machine disks with vmkfstools) it sounds like they would have some kind of Zero Detection built into their system.

Level 17

Very Interesting, who knew FAT still prevailed in some places.

Level 15

Interesting perspective. As I am starting to get involved with our SAN environment, the background information provided was helpful. Thanks!

MVP
MVP

We have always used thin provisioning unless a specific appliance required otherwise. We find that it is better to allocate on demand as needed rather than eating up unneeded disk space.

Level 14

We do the same thing as you kurtrh‌. We set up autogrow with increments, use initial and max volume sizes, and pretty much use thin provisioning for everything. We haven't really had any problems with it yet, other than always needing more space as our environment is always growing.

Level 12

Very nice.

Fat to thin makes sense.  It's just another way of applying what Citrix Thin Clients did for PC replacement; look at the huge improvements that resulted from that--especially WAN utilization decreases.

Virtualization is another example.  Five years ago our SysAdmins had to stop installing new apps because they'd run out of weight capacity in two of our data centers--too many servers!

Since then we've moved to UCM chassis and VMware, and where we had a few thousand physical boxes housing apps, many of them have now been replaced by blades and VM solutions. The result is that our data center profile is only a fraction of what it was back then, which means big savings in power consumption and air conditioning. Not to mention the applications keep coming--growing, expanding, and multiplying without the former requirement of putting only two or three apps on a dedicated server.

Thin is in!