Chris Wahl (@ChrisWahl) recently wrote a great product review of Virtualization Manager on his blog Wahl Network. It got me thinking that it would be nice to get to know him a little better. So, we tracked him down and got answers to a few questions for this month’s IT Blogger Spotlight. Enjoy!


SWI: So, tell us a little about Wahl Network.


CW: I use the tagline “Technical Solutions for Technical People” as a way to quickly sum up Wahl Network. As I stumble on things I think are interesting – such as new products, tips for a solution or ways to identify and overcome technical challenges – I write about them. Most of my experiences are drawn from maintaining and managing a home lab, working with clients as a technical architect and traveling the world to speak and engage with the IT community at events and conferences.


Posts that solve problems are my favorite to write. They feel like the purest form of giving back to the community, and I especially love it when someone comments to let me know that I either solved their problem or helped them figure it out.


I also run a rather popular segment on creating, buying and building home labs. A number of my resources address the food groups – servers, storage and networking – that can be used in a home lab environment. In my opinion, it’s gained so much traction because budgets are so tight and there are no right ways to build a home lab – as long as it works for you, it’s a great home lab! It’s also fun to have a totally safe segment of equipment to bang on that won’t result in taking down production, but also gives you the power to really understand how the various hardware and software bits fit together.


SWI: Nice. How did it all begin?


CW: I used to use a wiki-style site for recording my notes for various projects and workarounds at work. After several years of reading and finding very helpful tips and tricks on a number of blogs, I decided I’d do the same. I founded Wahl Network in 2010 with two goals: to record my thoughts around different challenges and solutions, and to make those same thoughts available on the Internet. I chose WordPress because it was simple to use, popular on the Internet and eliminated my need to maintain the back-end – backups, OS and so on.


SWI: OK. So, obviously your day job very much ties into the topics you blog about. What do you do for a living?


CW: After 13 years working on the customer side of operations – systems admin, IT manager and virtualization engineer – I finally went over to the consulting side of IT. My role as a senior solutions architect blends a variety of fun things into one job: I help clients elevate their operations and build efficient data centers, I get to educate IT folks on a variety of solutions that exist in the market and how they can take advantage of them and I get to create written and video content for our company blog.


I also own my own company, which is focused on creating content for the greater community. So, beyond my blog, I also create training courses and materials for Pluralsight, speak at various VMware User Groups, write for a number of technical publishers and co-host the Virtualization User Podcast as a Service (VUPaaS) with some great folks.


SWI: Very cool. How did you get into IT in the first place?


CW: It’s more like IT got into me. I put my hands on an Apple computer at a very early age and never looked back. At first, I started playing games – like Choplifter, Burger Time and Lode Runner – and later I decided to learn how to write games in BASIC. As I grew older, I ended up co-hosting a BBS – we hosted VGA Planets – and doing some development work for my local grade school as a lab assistant. Once I finished my undergrad in network and communications management, the sky was the limit. I’ve always known that I wanted to work with technology and am extremely pleased at the amount of encouragement, support and guidance I received from my parents, spouse and mentors.


SWI: You mentioned your spouse, so that means you must have a life outside of IT and blogging, right? What are some of your non-IT hobbies?


CW: I prefer to completely unplug from the Internet for recreation. This typically means going skeet or trap shooting at one of the local clubs – I shoot a 12 gauge. I also enjoy working on various woodworking, plumbing or electrical projects around the house. There is something incredibly rewarding about seeing a room both as it is and as it can be, and then working with your hands to build something new and special.


SWI: Stepping back into the topic of IT for a minute. What are the tools you find you use the most?


CW: Scripting is still my first love, so I tend to gravitate towards PowerShell by way of the ISE and an app called PowerGUI. I’m also a bit of a SQL gearhead, so I spend a fair amount of time in good ol’ SQL Management Studio. I’ve begun shifting my gaze towards open source projects, and over the past year or so I have been spending much more time staring at GitHub – I run the client on my PC – and IDLE for Python scripting. Other tools that I’ve really enjoyed working with include Onyx (a VMware fling), Notepad++, SecureCRT for managing SSH sessions and keys, and KeePass.


SWI: Given your expertise on both the admin and consulting sides combined with your industry blogger perspective, what do you think is next for IT?


CW: The need for IT specialists is beginning to erode, making room for a return to the IT generalist. Skillset requirements are evolving from hardware-centric to software-centric. With abstraction technologies taking care of a great deal of the complexity within a number of hardware products – especially storage – even large IT shops are no longer going to need to find a super deeply technical person to manage the gear on the floor. Instead, an IT generalist who knows a fair amount about storage, network and compute – servers – along with a healthy dash of scripting will be king.


As an example, I recently wrote about a company that sells hybrid flash storage arrays. There are no tuning, configuration or performance tweaks to apply. You buy, rack and set up the box. After that, it just works. This is a rather new trend in storage products, but it’s catching on even with the larger and more established vendors. The business wants to consume the resource immediately, not wait while IT fiddles around with making it work. This trend is driving the way vendors build products and the way businesses purchase and consume them. As such, the people in IT will have to be able to adapt. Change is hard, but overall I think this will result in greater job satisfaction and fewer Sev1 calls in the middle of the night for IT!


You guys rock!

Posted by arjantim May 26, 2014

The last couple of weeks I've been the storage ambassador for the thwack community. I've really enjoyed sharing my thoughts with you and reading all of your comments. I've noticed that the thwack community is great and there is a lot of knowledge floating around that needs to be shared.


In the ever-changing world of IT, storage is a component that really matters. In a software-defined world where every cloud delivers everything as a service, storage is one of the keystones. Laying the foundation means choosing the right materials and plotting the right sizes, and I've seen a lot of companies struggle with their infrastructures because they didn't lay the right foundation.


That's not really where I wanted to go with this post, though. As I said, I really liked the interaction in the thwack community, and it made me wonder what you think of it, and where you see room for improvement.


What are the questions you need answers to?


What would you want to hear more about?


What resources do you use?


There is so much we can help each other with, and I know you guys and girls rock, so let's keep helping each other! I, for one, am more than willing to answer the questions you have, and I'm eager to hear what you are using, reading and working on.


See you in the comments! And keep on rocking!

Latency is the principal enemy of an administrator. If your virtual infrastructure is running smoothly and latency is at an acceptable level, everything is fine, but if latency on the storage side or on the network goes through the ceiling, you're in trouble. You might also be running applications that are very sensitive from a latency perspective.


Concerning network latency, there are a few things you can tweak not only on the virtual infrastructure, but also on the physical infrastructure. Packets traveling from a VM on one ESXi host to a VM on another ESXi host cross a physical switch at some point, and a few improvements can be achieved on that physical switch.


So how do you "fight" latency, and what fine-tuning can an admin implement for very latency-sensitive VMs?


  • BIOS settings - new servers with newer Intel or AMD CPUs are often set to power saving by default, but high performance is the only way to go! C-states increase latency too, even while saving energy, so if you're running latency-sensitive applications and chasing every single microsecond in your environment, disable C-states. Also make sure that vSphere (ESXi) is set to high performance. Depending on your hardware manufacturer, check its documentation for the best possible performance settings too.


  • NUMA - vCPU processor affinity should be scheduled on specific NUMA nodes, and memory affinity for all VM memory should be allocated from those same nodes. In vSphere, for example, you can enable it through Manage > Settings > Edit > Advanced > Edit configuration, where you add the option "numa.nodeAffinity" with a comma-separated list of nodes as the value. The exact steps depend on your virtualization platform and, for vSphere, on your version; check the documentation for "numa.nodeAffinity".


  • Physical and virtual NIC settings - it's possible to disable interrupt moderation (also called interrupt throttling) on physical NICs, which helps achieve lower latency for very sensitive, low-latency applications. The feature exists to prevent the host from being overwhelmed with CPU cycles that only service interrupts, so keep in mind that while disabling it benefits very latency-sensitive VMs, it brings some CPU overhead on the host and can affect the performance of other VMs running there. On the virtual NIC side, it's important to choose the right vNIC type, such as VMXNET3, which is now the default for most guest OSes, but you should verify it, especially for applications and VMs where you really need the best possible performance. It's also possible to disable virtual interrupt coalescing on the vNIC (by adding "ethernetX.coalescingScheme" with the value "disabled"). A host-based setting exists as well, but it affects ALL virtual machines running on that host.


  • Virtual disk SCSI controller choice - depending on the OS running in a particular VM, it can be useful to change from the usual default (LSI Logic SAS) to VMware Paravirtual (PVSCSI), which leads to lower CPU utilization and higher throughput: to deliver the same number of IOPS it uses roughly 10% fewer CPU cycles. The problems PVSCSI had back in vSphere 4.1 were solved long ago, so now is the time to use this most efficient driver. Note that only 4 PVSCSI controllers per VM are currently supported.


  • Guest OS optimizations - I've already mentioned the importance of vCPU and memory sizing, but other tweaks are possible at the application and guest OS levels. Depending on the workloads you're running, you can find specific optimization guides: for VDI deployments you tweak the master image, deactivate some services and so on, while for server OSes things are usually optimized VM by VM. You can start with the virtual hardware by deleting unnecessary floppy drives, COM ports or USB ports. Another application that often consumes huge in-guest resources is Java.
  • There is a configuration for using large memory pages. On the Java side, this is done by adding a command-line option when launching Java (-XX:+UseLargePages). On the guest OS side, the tweak is often called "Lock pages in memory" and can be applied to a VM or group of VMs via GPO. The corresponding VMware ESXi settings are beneficial but have some drawbacks too, considering that other optimization techniques are involved as well.
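Several of the per-VM tweaks above end up as entries in the VM's .vmx file (or the Advanced Configuration dialog). A hypothetical fragment pulling a few of them together - the node numbers and device indexes are examples, not recommendations, and should match your own host and VM layout:

```
# Pin vCPU scheduling and memory allocation to NUMA node 0
# (use a comma-separated list, e.g. "0,1", for multiple nodes)
numa.nodeAffinity = "0"

# Disable virtual interrupt coalescing on the first vNIC
ethernet0.coalescingScheme = "disabled"

# Use the paravirtual SCSI controller for the first disk controller
scsi0.virtualDev = "pvscsi"
```

As always with advanced settings, test the change on a non-production VM first, since affinity settings in particular can hurt performance if they fight the scheduler.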


Wrap up:


In a few articles I've tried to help other admins and users of virtualization technologies get the most benefit from their virtual infrastructures. The posts can be found here:

As I'm trying to explain Towel Day to the non-geek folks in my life, I'm struck yet again with how few people know about the brilliance of Douglas Adams. The conversations always seem to go something like this. "Seriously?! You've never heard of Hitchhiker's Guide? Hollywood even made that one into a (bad) movie?! I suppose that means you've never read Dirk Gently or Last Chance to See, or seen the couple of Doctor Who episodes he wrote." /sigh. Blank stares, and I change the subject.


Stephen Fry, another genius of our time, has said "He changed the way people spoke. You still hear some of his jokes from the Hitchhiker's books being told in pubs." If you are not an Adams fan, I will avoid waxing poetic about this awesome man and his work and simply tell you to go. Go pick up any of his works and you'll see why he's such a staple of a community that holds intelligence above all and isn't afraid of being a bit silly. Or just read Wikipedia http://en.wikipedia.org/wiki/Douglas_Adams. That's what Ford Prefect would probably do.


Towel Day was created as an ongoing celebration of the life and work of the late Douglas Adams (1952-2001). The first Towel Day took place on May 25th, exactly two weeks after Adams' death, as a day designated for an international celebration of his life.


The great man himself describes the usefulness of a towel:

"A towel, it says, is about the most massively useful thing an interstellar hitchhiker can have. Partly it has great practical value. You can wrap it around you for warmth as you bound across the cold moons of Jaglan Beta; you can lie on it on the brilliant marble-sanded beaches of Santraginus V, inhaling the heady sea vapours; you can sleep under it beneath the stars which shine so redly on the desert world of Kakrafoon; use it to sail a miniraft down the slow heavy River Moth; wet it for use in hand-to-hand-combat; wrap it round your head to ward off noxious fumes or avoid the gaze of the Ravenous Bugblatter Beast of Traal (such a mind-bogglingly stupid animal, it assumes that if you can't see it, it can't see you); you can wave your towel in emergencies as a distress signal, and of course dry yourself off with it if it still seems to be clean enough.


More importantly, a towel has immense psychological value. For some reason, if a strag (strag: non-hitch hiker) discovers that a hitchhiker has his towel with him, he will automatically assume that he is also in possession of a toothbrush, face flannel, soap, tin of biscuits, flask, compass, map, ball of string, gnat spray, wet weather gear, space suit etc., etc. Furthermore, the strag will then happily lend the hitch hiker any of these or a dozen other items that the hitch hiker might accidentally have "lost." What the strag will think is that any man who can hitch the length and breadth of the galaxy, rough it, slum it, struggle against terrible odds, win through, and still knows where his towel is, is clearly a man to be reckoned with.


  — Douglas Adams, The Hitchhiker's Guide to the Galaxy


I invite you to pop on over to http://www.towelday.org/ and maybe take in an event. Or maybe just snuggle up with your favorite towel and a good book.
How will you be celebrating Towel Day? Have pictures? Post them below!

Following the Target store breach last December, eBay is the most recent victim of data theft. While Target is still dealing with the fallout from its massive data breach, eBay has asked its 128 million active users to change their passwords. One thing that both of these breaches have in common is that when they occur, the victim comes under scrutiny and is always asked the question, "What were the security and response systems in place that allowed this to happen?"


These instances act as a wake-up call, and remind us that we need to do a reality check for disaster preparedness in our organizations. Do you have your guard up against attempts by unauthorized entities? Are the required security controls in place?



One statement that holds true for all security incidents: simple security best practices help you prepare for when things go wrong.

Given the nature, diversity, complexity, volume, and size of an organization’s business transactions, it’s high time that companies took a proactive stance against unexpected cyber threats and breach attempts. Enforcement of these controls alone is not enough; continuous monitoring and improvement of security policies is also mandatory.


A key aspect of being prepared for bad times is to ensure that external regulatory standards are applied and continuously monitored for conformance. Companies like eBay that deal with credit/debit card transactions must implement or revisit their existing PCI compliance strategy. To achieve compliance with externally imposed policies, ensure compliance with internal systems of control. Automation and use of software to continuously monitor and quickly configure network elements help accomplish effective enforcement of security policies. This saves time and helps administrators stay watchful and ensure that the network is secure.


In spite of enforcing protective controls, there’s still no assurance that you won’t be a victim of a breach. Given the ever-evolving talent of cyber criminals, organizations must, in addition to protecting their network and data, have a feasible response system. As the recent Heartbleed bug showed, organizations should have a plan and the means to quickly recover from, and guard against, incidents that are not in their control. It’s critical to have the necessary tools to implement the recovery plan without delays caused by factors like resource availability, device complexity and count, up-to-date device/network information, and so on.


Even more than the millions of dollars lost due to interruption in operations, organizations suffer the most from loss of reputation and credibility. The price paid for not being vigilant cannot be gauged and differs from industry to industry. Are you doing everything you can to avoid putting your organization at such risk?

Every other day, the Internet is flooded with reports of cardholder information theft, financial data loss due to misconfigured ‘secure network environments’, identity theft and so on.


If you are in the financial services industry, how do you create a secure environment that is compliant with the Payment Card Industry Data Security Standard (PCI DSS)? To start with, the PCI compliance standard defines various merchant levels, validation types, and most importantly, PCI requirements (12 requirements) and hundreds of controls/sub-controls that ought to be followed to the letter.


According to the Verizon 2014 PCI Compliance Report, only 11.1% of companies passed all 12 requirements, and a little over 50% of companies passed 7 requirements.



Hackers are upping the ante. Getting into the specifics of PCI compliance to protect financial data can be daunting, yet unavoidable. The good news is that with proper NCCM (network change and configuration management) software, you can ensure that:

  1. Your network is secure and compliant
  2. You efficiently pass audits and avoid ‘last minute’ pressure (not to mention that unique combination of surprise audits & Murphy’s Law!), and
  3. You don’t contribute to the ‘cost of non-compliance’


Cost of non-compliance: costs incurred as heavy fines (millions of USD) for regulatory non-compliance, and/or financial data losses amounting to millions or billions of dollars.

Some inconvenient stats: global card fraud exceeded $11 billion in 2012 (The Nilson Report, 2013). Losses from fraud using cards issued in the Single Euro Payments Area (SEPA) were about €1.33 billion in 2012 (Third Report on Card Fraud, European Central Bank).


Ensuring 100% PCI compliance in your network can be challenging due to one or more of the following:

  • Many routers, switches and firewalls – manually tracking configuration changes is a pain
  • Manually running cron jobs to back up configurations – time-consuming and error-prone
  • Manually pushing configs to the network devices via TFTP servers
  • Manually checking PCI requirements on a periodic basis, and applying changes as appropriate
  • Existing software that doesn’t support a multi-vendor environment
  • No visibility into what changed, when, and by whom
  • Manual processes that are outrageously laborious when you have hundreds of network devices to manage and too few network admins
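As a tiny illustration of the kind of automation that replaces those manual checks, here is a minimal sketch (not any particular product's approach, and the sample config lines are made up) that flags configuration drift by diffing a freshly pulled device config against an approved baseline:

```python
import difflib

def check_drift(running: str, baseline: str) -> str:
    """Compare a device's pulled running config against the approved
    baseline and report either compliance or the exact drift."""
    diff = list(difflib.unified_diff(
        baseline.splitlines(), running.splitlines(),
        fromfile="baseline", tofile="running", lineterm=""))
    if not diff:
        return "COMPLIANT: no changes since baseline"
    return "DRIFT DETECTED:\n" + "\n".join(diff)

# Example: a single changed line is enough to flag the device for review
print(check_drift("hostname edge-rtr-01\nsnmp community public",
                  "hostname edge-rtr-01\nsnmp community S3cret"))
```

Run on a schedule against every device, a check like this turns "manually checking on a periodic basis" into an alert that arrives when something actually changes.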


Of course, all network admins try their best to ensure compliance and keep their networks secure, each in their own style. A few important things they may need to better manage compliance:

  • Getting hold of readily available PCI reports
  • Having fine control over policies, reports and rules
  • Automating remediation scripts on a node or bunch of nodes
  • Change approval management
  • Backing up/updating/restoring devices to compliant configurations when config changes go awry


The PCI DSS standard is here to stay, and it’s only going to get tougher and tougher to counter the rising fraud rates. So, how are you coping with PCI compliance?

With a lot of customers running their infrastructure almost 100% virtualized these days, I see more and more people moving away from array-based replication to application-based (or hypervisor-based) replication. With Zerto ZVM, VMware vSphere Replication, Veeam Replication, PHD Virtual ReliableDR and hyper-converged vendors offering their own replication methods, the once-so-mighty array-based replication feature seems beaten by application-based replication.


Last week I went to a customer where storage-based replication was the only replication type used, but their architects were changing the game in the virtual environment, and application-based replication was the road they wanted to ride. Where the virtualization administrators and most of the other administrators saw a lot of potential in application-based replication, the storage administrators and some of the others remained convinced by what storage-based replication had offered them all these years.


What about you? Do you prefer storage-level replication, or is application/hypervisor-level replication the way to go?

Performance in the virtual world is complex, but it's also one of the most important topics! Every admin I meet puts performance in first (or second) place among his priorities; the other top priority is disaster recovery. Depending on the environment, admins usually prefer to deliver the best performance, but care about fast backups (and recovery) too. No one likes users complaining about performance, right?


Virtualization brought another layer of complexity: performance bottlenecks. In fully virtual or mixed environments, where some shops add further complexity by running two different hypervisors, things might not be simple at all, but rather complex to solve.


Usually there isn't just a single bottleneck; bottlenecks tend to be accompanied by misconfigurations on the VM side, and a few principal misconfigurations repeat again and again. Misconfiguration on the VM side, as well as wasted resources on the infrastructure side, can lead to loss of performance. Here are a few areas in which to seek improvements:


  • VMs with multiple vCPUs when only a single vCPU is needed - start with less and add more later.
  • VMs with network adapter types not adequate for the OS - check the requirements and follow the documentation (RTFM).
  • VMs with too much memory allocated - does that VM really need 16 GB of RAM?
  • Storage bottlenecks - we all know that storage is the slowest part of the datacenter (except flash). This is changing with server-side flash, various acceleration solutions and hyper-converged solutions like VMware VSAN, but the problem might also be in the storage network or the HBA.
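The memory point above is exactly the kind of check a right-sizing report automates. A minimal sketch of the idea - the field names, inventory data and the 25% threshold are all illustrative, not from any particular tool:

```python
def flag_oversized(vms, usage_threshold=0.25):
    """Return names of VMs whose active memory is below `usage_threshold`
    of their allocation -- candidates for right-sizing."""
    return [vm["name"] for vm in vms
            if vm["active_mb"] / vm["allocated_mb"] < usage_threshold]

# Hypothetical inventory: db01 actually uses its RAM, web03 does not
inventory = [
    {"name": "db01",  "allocated_mb": 16384, "active_mb": 12000},
    {"name": "web03", "allocated_mb": 16384, "active_mb": 1500},
]
print(flag_oversized(inventory))  # prints ['web03']
```

In practice you would feed this from your hypervisor's statistics over a sensible time window, since a VM that looks idle today may be busy at month-end.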


Virtual environments also struggle with the fact that smaller shops often have few rules about who does what - the case where several admins plus a developer team treat the virtual infrastructure as their playground. Everyone creates VMs that then sit there doing nothing, VM snapshots lie around consuming valuable SAN disk space, and suddenly there is not enough space on the datastores, backups stop working, or VMs have performance problems. There is a name for that: VM sprawl!


Here, too, some order first, please. Define rules about who does what, which VMs should exist, and in which situations it's important to keep a snapshot (or not). Think of archiving VMs rather than creating snapshots; there are some good backup tools for that.


Modern virtualization management tools can help when performance suddenly falls. Some of them can detect configuration changes in the virtual environment, which means you can see what changed on the day performance started to go bad. It doesn't always help, since there are other factors outside their scope - the tools might be monitoring only changes in the virtual environment itself, not the outside physical world - but it can narrow down the problem.

Performance forms the basis of measurement for any system, process, or solution performing an action. How well a system performs tells us how effective and beneficial it is in meeting its intended purpose, and monitoring performance indicators is the right way to keep performance under control.

Performance monitoring is not the same for all aspects of the IT infrastructure. A virtualization environment (VMware® or Hyper-V®, etc.) differs from a server or application environment in the context of performance management: there is more to monitor, with different entities and different metrics interfacing between the physical hardware and the guest operating system. What makes virtualization performance management difficult is that the guest OS does not see the physical hardware, but rather the virtual hardware emulated by the hypervisor. To gain complete visibility, you need a VMware and Hyper-V monitoring solution that is aware of the virtualization layer, can distinguish it from the physical hardware, and can pinpoint the exact source of an issue.


The performance of a virtualization environment is best measured by capturing a combination of VM and host statistics such as:

  • CPU usage and the time a VM waits for a CPU to become available (CPU ready)
  • Memory actively used, allocated and swapped by the VMkernel
  • Disk usage in terms of I/O and the latency to process commands issued by the guest OS
  • Network in terms of the number of transmit and receive packets dropped
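As a rough illustration of how these counters translate into something actionable: vSphere reports CPU ready as a summation in milliseconds per sample interval, which converts to a percentage of the interval (the 20-second real-time interval and the 5% rule of thumb below are common conventions, not hard rules):

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a CPU ready summation (ms accumulated over one sample
    interval) into a percentage of that interval."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# 1000 ms of ready time in a 20 s real-time sample = 5% CPU ready,
# a commonly cited warning threshold per vCPU
print(cpu_ready_percent(1000))  # prints 5.0
```

The same arithmetic applies to the other summation counters; what matters is always the ratio against the sampling interval, not the raw millisecond value.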


In a virtualized environment, pinpointing the source of a problem can get complex, as the symptoms can be caused by many different things. The very nature of a virtual environment – which is all about the sharing of resources by hosts and VMs and the flexibility to move VMs – can make troubleshooting performance problems more difficult. For example, a host performance issue can cause VM resource contention and performance bottlenecks, and a VM performance issue may slow down the applications running on it.

To stay ahead of performance problems, it is essential to detect the underlying resource contention and the top resource consumers in the environment, and also to surface performance trends to forecast where the virtual environment is headed in terms of resource utilization, capacity and workload assignment.


End-to-end, holistic virtualization monitoring – from host to VM to storage – is the best way to gain insight into the performance metrics of your virtual infrastructure. If you need to get ahead of virtualization performance problems, you need to be able to receive real-time alerts pointing out irregularities, so you can take corrective action immediately. In its essence, performance management is a must-have capability to constantly monitor the health of the virtualization infrastructure, identify issues in real time and prevent them from causing virtualization performance bottlenecks and, eventually, application performance issues.


Read this free white paper to learn more about virtualization performance management.


As a systems administrator you are often tasked with knowing a lot about server and application performance. Tools like SAM can give you insight into overall server health, and AppInsight for SQL can show you details about the activity querying an instance of SQL Server. But what do you do when you need to take a deeper dive into the database engine?


Join me next week, May 22nd, at 12 Noon ET for a live webinar titled "Optimizing the Database for Faster Application Performance". I will talk about general monitoring and troubleshooting best practices, how SAM can help identify servers that need attention, and how to use Database Performance Analyzer to drill down inside the database engine to find the root cause for slow query performance.


You can register for the event by going here: http://marketo.confio.com/DatabasePerformanceforSAMMay_RegPage.html?source=SWI (note: you need a valid business email address to register!)


See you then!

System Center is a great platform to perform application monitoring and management, but it contains a few gaps where native functionality isn’t provided. Fortunately, System Center is an extensible platform and SysAdmins can leverage 3rd party tools that quickly fill technology gaps and often plug into the System Center infrastructure to provide seamless visibility.  SolarWinds®, a partner in the System Center Alliance program, offers multiple capabilities to help SysAdmins fill functional gaps in System Center, including:

  • Non-Microsoft® application monitoring & custom application monitoring
  • Capacity planning for Hyper-V® & VMware® virtual environments
  • Third party patch management
  • Multi-vendor database performance
  • Storage & network management


Application performance management tools, such as Server & Application Monitor (SAM), offer additional visibility beyond System Center. Moreover, SAM can be easily deployed alongside your System Center environment so you can seamlessly manage other aspects of your IT environment.


  • Monitor the Monitor: SolarWinds has worked with Cameron Fuller, Operations Manager MVP, to build application monitoring templates to monitor the health of System Center Operations Manager (services, Agent & Management Server) and System Center Configuration Manager 2012.  To get a detailed list of the services, apps, and TCP ports monitored, please see details in the Template Reference Document
  • Deep Microsoft application monitoring: Go deeper than just looking at availability and performance of critical Microsoft applications using System Center. Microsoft application monitoring tools like Server & Application Monitor (SAM) from SolarWinds provide a level of depth to manage issues in applications like Exchange and SQL Server® that System Center doesn’t provide. For Exchange environments, Server & Application Monitor provides details on individual mailbox performance (attachments, synced mailboxes, etc.). In addition, you can monitor Exchange Server performance metrics like RPC slow requests, replication status checks, and mailbox database capacity.   SAM also provides detailed dashboards for finding SQL server issues like most expensive queries, active sessions, and database & transaction log size. 
  • Non-Microsoft application monitoring: An IT infrastructure isn’t limited to Microsoft, and because System Center has limited capabilities for managing non-Microsoft applications, IT pros need management packs for non-Microsoft application monitoring. SolarWinds lets you quickly add monitoring for non-Microsoft apps and supports over 150 apps out of the box. With the SolarWinds Management Pack, you can integrate application performance metrics right into System Center Operations Manager.
  • Capacity planning for Hybrid Virtual Environments: In addition to having visibility into application performance, SolarWinds provides capacity planning for organizations with larger virtual server farms or a private cloud infrastructure. Whether it’s Microsoft Hyper-V or VMware, you get instant insights into capacity planning, performance, configuration, and usage of your environment with the built-in virtualization dashboard.
  • Contextual visibility from the app to the LUN: For applications that leverage shared compute and storage resources, Admins are able to contextually navigate from the application to the datastore and related LUN performance across multiple vendors in each layer of the infrastructure. SolarWinds supports over 150 apps, Hyper-V & VMware, and all the major storage vendors.  Integrated visibility is provided natively with SolarWinds Server & Application Monitor, Virtualization Manager, and Storage Manager.
  • Third-party patch management: Expand the scope of WSUS by leveraging pre-packaged third-party updates for common apps such as Java®, Adobe®, etc. A patch management tool such as Patch Manager, which natively integrates with System Center, manages patch updates for Microsoft as well as third-party applications. You can deploy, manage, and report on patches for third-party applications, get notified when new patches are available for deployment, and synchronize patches automatically on a fixed schedule.
  • Database performance management: System Center offers limited visibility into performance of non-Microsoft databases from Oracle®, IBM®, Sybase®, etc.  Database Performance Analyzer is an agentless, multi-vendor database performance monitoring tool that goes deeper into monitoring the performance of databases. For example, it allows Admins to look at the performance problems most impacting end-user response time and provides historical trends over days, months, and years.
  • Network management: In a previous release of System Center, Microsoft added limited network management capabilities. SolarWinds provides advanced network management capabilities, including network top talkers, multi-vendor device details, network traffic summaries, NetFlow analysis, and more. This information can be integrated into System Center Operations Manager with the free Management Pack.


In conclusion, Admins should have a single comprehensive view of servers, applications, and the virtual and network infrastructure, especially when all of them are integrated into System Center. In turn, you can pinpoint the exact performance issue, easily detect hardware and application failures, and proactively alert on slow applications.


Learn how SolarWinds can help you gain additional visibility beyond System Center for comprehensive application, virtualization, and systems management by checking out this interactive online demo.

Bring your own device (BYOD) is not new to IT. Most companies now allow employee-owned devices on the corporate network. It may bring cost savings to management and a fair deal of employee convenience, but for IT it is certainly an uphill task with a boatload of management headaches. Given the known security issues and the lack of management capabilities that come with ever more device endpoints (thanks, BYOD!), organizations are making their BYOD policies more stringent and pushing IT departments to ensure security, compliance, and optimized IT resource utilization.


What most companies have overlooked in the BYOD trend is that it has given rise to greater personalization and consumerization of IT services than anticipated. Employees have grown comfortable using their own devices – which is fine – but they have also started leveraging shadow IT, bringing their own applications (aka BYOA), collaboration systems, and cloud storage into the enterprise network. This growing practice is called bring your own IT (BYOIT), and it includes not only mobile devices, tablets and laptops, but a host of other IT services and third-party applications that IT teams have no control over. To make matters more arduous, BYOIT happens from employee-owned devices that IT already has little visibility into. To name a few examples, employees are

  • Installing applications from the Internet to resolve issues
  • Leveraging third-party cloud services for data storage
  • Using insecure file transfer methods to share data
  • Connecting mass storage devices to enterprise workstations
  • Using antivirus and antimalware software that are not IT-approved
  • Using collaboration systems and instant messengers for communication outside the firewall
  • Utilizing network bandwidth for personal use – streaming videos and downloading apps


Besides the obvious security and compliance implications, there are many IT management headaches such as:

  • Maintaining inventory of connected devices and tracking BYOD assets to employees
  • An inconsistent IT approach to patch management and upgrades
  • Difficulty enforcing policies across different device platforms and operating systems
  • Difficulty handling IT tickets, with more time spent investigating and troubleshooting issues
  • Difficulty spreading IT awareness when different users are on different devices and platforms


Do you face this challenge in your organization? If you are in IT, how do you control the situation and rein in BYO-anything?

There is a technology available throughout most of the United States capable of providing net bit rates in the terabits-per-second range with extremely low latency. Though big data enterprises like Google, Microsoft and Facebook already use it for their data transfers, not many Fortune 500 enterprises have considered it. The technology is known as ‘dark fiber’.


What is dark fiber?


During the best years of the dot-com bubble in the late ’90s, telecom and other large utility companies, foreseeing exponential demand for network access, laid more fiber optic cable than needed. One reason was that they expected readily available network capacity to help capture the market. Another is that more than 70% of the cost of laying fiber optic cable goes toward labor and other infrastructure development1. It made sense to lay more fiber than needed and save on future labor expenses, rather than lay new cable as and when needed. But two factors left most of this fiber unused:

  1. The development of wavelength-division multiplexing (I refuse to explain that, but you can read up on it here) increased the capacity of existing optical fiber cables by a factor of 100!
  2. The dot-com bubble burst, and the demand for network connectivity died down.
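To get a feel for the scale of that first factor, here is a rough back-of-envelope sketch; the channel count and per-channel rate are illustrative assumptions, not figures from any specific vendor:

```python
# Rough illustration of how wavelength-division multiplexing (WDM)
# multiplies the capacity of a fiber pair that once carried one signal.
# Channel count and per-channel rate are illustrative assumptions.

per_channel_gbps = 10      # one wavelength carrying 10 Gbps
dwdm_channels = 96         # an assumed dense-WDM channel count

single_wavelength_gbps = per_channel_gbps
dwdm_gbps = per_channel_gbps * dwdm_channels

print(f"One wavelength: {single_wavelength_gbps} Gbps")
print(f"{dwdm_channels} DWDM channels: {dwdm_gbps} Gbps, "
      f"a {dwdm_gbps // single_wavelength_gbps}x increase on the same glass")
```

The same glass, lit with more wavelengths, is why so much of the installed base could sit dark.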

In fiber optic communication, light pulses carry the information, so when data is being transmitted the fiber lights up. Any fiber that is not transmitting data remains unlit and is called ‘dark fiber’. Today, the term mostly refers to fiber optic cable that was laid in anticipation of demand but is now unused or abandoned. There are thousands of miles of dark fiber2 available throughout the United States, sold or leased out by the companies that built it or purchased it from bankrupt telecoms.


Should you consider dark fiber?


Dark fiber is suitable for enterprises that need low-latency, high-speed connectivity with zero interference from service providers and have the capex to invest. Here are a few scenarios where dark fiber can help.


Point-to-point connections, such as those to network-neutral data centers, cloud providers, and DR and backup sites, do better with Gigabit or even Terabit transfer speeds. Fiber optic cables are capable of exactly that – a single pair of fibers can transfer gigabits of data per second.
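To see what that means for a bulk job such as a DR replication sync, here is a rough transfer-time comparison; the dataset size and link speeds are illustrative assumptions, and protocol overhead is ignored:

```python
# Transfer-time comparison for a bulk job such as a DR replication sync.
# Dataset size and link speeds are illustrative assumptions; protocol
# overhead is ignored.

def transfer_hours(terabytes, gbps):
    """Hours needed to move `terabytes` over a `gbps` link (decimal units)."""
    bits = terabytes * 8 * 1000**4          # TB -> bits
    seconds = bits / (gbps * 1000**3)       # bits / (bits per second)
    return seconds / 3600

print(round(transfer_hours(50, 1), 1))      # 50 TB on 1 Gbps   -> 111.1 hours
print(round(transfer_hours(50, 100), 1))    # 50 TB on 100 Gbps -> 1.1 hours
```

The difference between days and an hour is what makes the fatter pipe worth considering for these point-to-point jobs.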


There are enterprises whose bandwidth speed requirements can change from a few Gbps to an unpredictably high limit. Optical fiber is capable of virtually unlimited data speeds allowing it to meet high and unpredictable bandwidth demands.


Enterprises, especially those involved in stock trading or online gaming and those using newer communication technologies such as HD video conferencing and VoIP need ultra-low latency connections which optical fiber is capable of providing.


Dark fiber also gives you net neutrality. If you are purchasing Gold-class QoS from your ISP for priority data delivery to your data center or branches, dark fiber needs none of that. Your data is delivered over your privately owned cable, and thanks to its high bandwidth there is no need for traffic prioritization either.


Finally, you get the ability to transfer data from your offices to data centers or the cloud without having to worry about the data being read, modified or stolen. Dark fiber is your private connection where only you have access to both the data as well as the fiber that transmits the data.


And a few more facts:


Dark fiber is an excellent choice if you already have a router that supports fiber connections, thereby ensuring last mile high speed data delivery. But before you consider buying or leasing dark fiber, make sure you have a real business requirement. Here are a few more facts to consider:

  • Renting or buying dark fiber is cheap but you still need to invest in hardware and other equipment needed to light up the cables.
  • Optical fiber is a physical asset that needs maintenance. Consider the costs involved in maintaining the fiber and related infrastructure.
  • The time needed to identify and resolve outages is much higher than with Ethernet. On the other hand, issues such as cable cuts happen very rarely with fiber optic cables due to the manner in which they are laid.


If dark fiber costs are prohibitive, consider alternatives such as ‘wavelength services’ where instead of the whole fiber, you lease a specific wavelength on the fiber based on requirements.

Still sounds like hype? Trust copper!


In IT we're used to having a buzzword every now and then. Most technicians just continue doing what they're good at, maintaining and upgrading their infrastructure in a manner only they can, as they know how the company is best served with the available funds and resources.


As a successor to the cloud buzzword we now have software defined everything: Software Defined DataCenter (SDDC), Software Defined Networking (SDN), Software Defined Storage (SDS) and so on. And although I don't like buzzwords, there is a bigger meaning behind Software Defined Everything.


Looking at the present datacenter, the evolution of hardware has been very impressive. CPU, storage, RAM and network have evolved to a stage where software seems to have become the bottleneck. That's where Software Defined Everything comes into the picture: making the software that will use the full potential of the hardware. My only point is that saying everything should be "software defined" bypasses everything the hardware vendors have done, and will continue doing – a tremendous amount of hard work, research and development to give us the datacenter possibilities we have today. So naming it "software defined" is wrong if you ask me.


Looking at storage, there is a lot of great storage software to leverage the just-as-great storage hardware. Look at some of the SolarWinds Storage Manager features:



  • Heterogeneous Storage Management
  • Storage Performance Monitoring
  • Automated Storage Capacity Planning
  • VM to Physical Storage Mapping

You can see how software uses the hardware to give technicians the tools they need to manage their datacenter, and give the customer the IT they need. In future posts I would like to go deeper into some of these features, but for now I just wanted to share my thoughts on why software is an extension of hardware AND the other way around. You’re more than welcome to disagree and leave your view in the comments.


See you in the comments

One of the cool functions of any virtualization management suite is not only a nice dashboard showing which VMs perform poorly on which datastore/host, but also the ability to see what will happen if:


  • I create 10 VMs a week – in how many weeks/days will I need a new SAN or new host(s)? (storage capacity planning)
  • I add more VMs – I increase my workloads by adding 20 new SQL Server databases
  • Workloads rise – I want to know the storage latency I'll have on my datastore if the overall workload increases 20%
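The first "what if" above can be sketched as a naive projection; all of the numbers below are hypothetical example values, not taken from any real environment:

```python
# Naive 'what if' projection: given a datastore's size, current usage and a
# steady VM creation rate, estimate when it fills up. All inputs below are
# hypothetical example values.

def weeks_until_full(capacity_gb, used_gb, vms_per_week, gb_per_vm):
    """Whole weeks of growth that still fit in the remaining free space."""
    free_gb = capacity_gb - used_gb
    weekly_growth_gb = vms_per_week * gb_per_vm
    return free_gb // weekly_growth_gb

# 20 TB datastore, 12 TB already used, 10 new VMs/week at ~80 GB each
print(weeks_until_full(20_000, 12_000, 10, 80))  # -> 10 (weeks)
```

A real management suite layers trend analysis and latency modeling on top of this, but the core arithmetic is this simple.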


The prediction possibilities. If you could predict the future in your life, you would probably be a millionaire, right? You know you can't.


However, what if you want to predict the future in your datacenter? Yes, you can! By simulating an increase in workloads, for example, you get the whole picture 2 weeks, 2 months or more ahead. Pretty awesome.


As an IT guy I usually visit existing datacenters, so pure green fields are rare. Usually the datacenter, or the client, already has some virtualization in place. In that case I always take the existing workloads as the baseline for predicting future expansion.


A client of mine recently asked me how many new hosts he would need to accommodate X new VMs plus growth of Y VMs a day. No more guessing. The tools are here for such tasks, and used correctly they give accurate results.


Have you tried digging into one of these tools, or do you keep on guessing?

It comes as no surprise that cyber intrusions and data breaches are increasing in the industry today, given the sophistication of the threat landscape. We have seen catastrophic breaches in recent times at the likes of Target, Neiman Marcus, Michaels, and many more. While cybersecurity protection looks vulnerable across all sectors – be it retail, financial, manufacturing, or even the public sector – there is particular concern around healthcare IT security and data protection. The number of security incidents is on the upsurge in healthcare IT, and according to the breach report by healthcare IT security firm Redspin, HIPAA data breaches rose by 138% in 2013 over 2012.



In a private notice to healthcare providers, the FBI has issued a warning that the healthcare industry is not as resilient to cyber intrusions compared to the financial and retail sectors, therefore the possibility of increased cyber intrusions is likely.

While it’s a good thing that the FBI is focusing efforts on alerting healthcare companies about their security policies, it’s also worrying that the state of healthcare information security is in shambles. The recent Verizon DBIR 2014 found data theft and data loss to be the most common ways healthcare cybersecurity was compromised in 2013.



[Chart from the DBIR: the other leading breach patterns in healthcare – Insider Misuse and POS Intrusion]
Experts say that medical information is becoming a more sought-after target for hackers these days, given the different things a hacker can do with this data.

  • It’s harder to detect that stolen healthcare information has been used, compared with financial records, which hackers use immediately to purloin money
  • Criminals also use medical records to impersonate patients with diseases so they can obtain prescriptions for prescription-only medication and drugs
  • Stolen healthcare patient data supposedly fetches more in the underground crime market than credit card records


Take a look at this list from hhs.gov of all recorded healthcare breaches in the US: https://ocrnotifications.hhs.gov/iframe?dir=desc&sort=affected_count


The recent St. Joseph Health System breach revealed that the Texas hospital exposed up to 405,000 past and current patient records, including employee and employee-beneficiary information. Healthcare IT keeps getting assailed by cyber criminals and just can’t seem to set up the right defenses for information security. Breaches don’t only cost the healthcare institution the stolen records; there are other implications, including compliance violations and penalties, reputational loss, and lawsuits, which can take an even bigger financial toll.



The New York-Presbyterian Hospital and the Columbia University Medical Center have agreed to pay the largest-ever HIPAA violation settlement, totaling $4.8 million, in response to a joint data breach report submitted by the affiliated healthcare institutions in 2010 that reportedly exposed the electronic protected health information (ePHI) of 6,800 patients. Earlier, in 2011, Maryland-based healthcare provider Cignet Health faced a massive $4.3 million fine for HIPAA violations.



Well, healthcare IT teams can start with a risk assessment. If you are responsible for IT security in a healthcare organization, answer the following questions first:

  • What data do I store in my systems?
  • Do I know the existing vulnerabilities in my system?
  • What protection mechanism is in place?
  • Do I have governing policies to secure IT assets?

In case of a breach,

  • Do I have a threat/breach detection system?
  • Do I have a breach containment mechanism?
  • Do I have a response plan and automated remediation in place during a data breach?

If you are complying with regulatory requirements such as HIPAA and HITECH,

  • Have I reviewed HIPAA and HITECH guidelines and requirements?
  • Do I have the right measures and policies in place in conformance with compliance norms?


Once you understand the state of risk to your IT infrastructure, find cost-effective options to enhance cybersecurity and network defense. Read this free white paper to learn more about healthcare IT security and risk management.


The following is an actual description of a discussion one of our engineers, Matt Quick, had with a customer as told in his words.


A customer using the product evaluation copy of SolarWinds Server and Application Monitor (SAM) called us wanting an extension on the trial because “SAM was broke, keeps alerting a component when we know everything was fine.” I asked to take a look at the customer’s historical data. The component in question was actually from the Windows 2003-2012 Services and Counters template; specifically, “Pages/sec” was going critical. I’d seen this before, and it always relates back to the disk.


“But this VM is backended by a NetApp!!  It can do 55,000 IOPS!!!”  Yeah, I was suspicious of that, so I asked, “OK, do you have Storage Manager (STM) or another storage monitoring product installed so we can check?”  He sure did, and promptly informed me that NetApp’s Balancepoint was telling him that while he averaged about 860 IOPS per day, during that hour he spiked to 1,350 IOPS – still well within his supposed “55,000” IOPS limit.


OK, so I go into SolarWinds Storage Manager, hit search in the upper right, and find the VM with the component in question.  I go to the storage tab and into Logical Mapping to find which LUN and aggregate it belongs to.  Next, I go into the NetApp and look at the RAID report to see how many IOPS it can do.  A quick calculation later, I estimate about 3,500 IOPS total.  The customer then realizes the original “55,000 IOPS” number probably isn’t real in his specific setup.  Then I look at the volume IOPS report on the NetApp during the same timeframe.  Sure enough: March 1st @ 8:30 pm, a 3,500 IOPS spike.
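That "quick calculation" is the generic spindle-count IOPS estimate; the sketch below uses hypothetical disk counts, per-disk IOPS and a write penalty for illustration, not NetApp's exact sizing math:

```python
# Back-of-envelope IOPS ceiling for a disk aggregate: raw spindle IOPS
# discounted by the RAID write penalty on the write fraction.
# Disk count, per-disk IOPS and workload mix below are hypothetical;
# this is NOT NetApp's exact sizing math.

def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_penalty * write_fraction)

# e.g. 24 spindles at ~175 IOPS each, 70% reads, write penalty of 2
print(round(effective_iops(24, 175, 0.70, 2)))  # -> 3231
```

Run against realistic spindle counts, numbers in the low thousands come out quickly, which is why the "55,000 IOPS" brochure figure didn't survive contact with the RAID report.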


“But Balancepoint says I got 1,350 at that time!”  So I ask him to open it up, and sure enough: 1,350 @ 9 pm.  I ask him to look at the next data point… 800 IOPS @ 11 pm.  He was looking at a bi-hourly aggregate.  Sure enough, if you aggregate the 8 pm hour, you get 1,350.  And we couldn’t figure out how to zoom in with NetApp’s software.  At this point the customer is speechless as he realizes his current tools were giving him incomplete information.
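The way a coarse roll-up blunts a spike is easy to demonstrate; the IOPS series below is invented to mirror the story, not taken from the customer's data:

```python
# How a coarse roll-up blunts a spike: hourly IOPS samples vs. a bi-hourly
# average. The series below is invented to mirror the story above.

hourly = {20: 3500, 21: 800, 22: 900, 23: 800}   # hour of day -> IOPS

# bi-hourly roll-up: average each pair of adjacent hours
bihourly = {h: (hourly[h] + hourly[h + 1]) / 2 for h in (20, 22)}

print(max(hourly.values()))   # 3500: the real 8:30 pm spike is visible
print(bihourly[20])           # 2150.0: the spike is heavily blunted
```

The coarser the aggregation interval, the more a short spike averages away, which is exactly why the customer's dashboard and the volume report disagreed.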


Then I ask whether Virtualization Manager (VMan) is installed, and sure enough it is.  I look in STM at which datastores are on that NetApp aggregate, add all of them to a performance chart in VMan for the same timeframe, and isolate the problem to a single datastore.  From there I add all the VMs related to that datastore, and boom – we find the culprit VM: apparently someone was running some kind of backup every day @ 8:30 pm.


All this from what looked like an ‘erroneous’ SAM alert.


This story exemplifies the value of an integrated set of tools that gives you visibility across the extended application stack, from the application and its processes and services through the underlying infrastructure so that you can identify the root cause and then solve hard problems. The following video gives an overview of how we are making this possible with the integration of Server and Application Monitor, Virtualization Manager and Storage Manager to provide extended application stack visibility.

If you have used SAM, STM, VMAN and Database Performance Analyzer to find the root cause of tricky problems or to prevent problems, please share your story (your story is worth a cool 50 thwack points)!




Over the last few weeks (or even years) there has been a lot of news on flash: flash as a cache, all-flash arrays (AFA), hybrid arrays and flash in memory banks (UltraDIMM). It seems the start-ups make a lot of rumble about all this new technology, but looking at the announcements made during the first two days of EMCWorld2014, it looks like the big fish are hunting the small ones and will soon circle the smaller fish, ready to eat them… or not ;-D


Processing the IOPS as close to the CPU as possible, on the quickest medium out there, seems to be the way to go. But we need a way to leverage this. Which application goes on which medium, and when will the medium “flush” the data to the next tier? Or will we just rely on caching software and let it decide where the data is stored?


Everybody seems to be using flash as the tier behind RAM, but does that mean HDDs are dead? Or will they function as the capacity tier for as long as flash is much more expensive than traditional storage? And what about the future, like ReRAM or other flash successors?


But in the midst of all this rumble there is a question that needs answering… how to manage this new storage world, and in particular how to manage the multiple storage arrays a lot of companies have these days. I’m writing a VMware design for a company in the Netherlands that has HP and NetApp storage in its datacenters, and managing them is not as simple as it could be… What is your secret to managing the datacenter of the future (or the one you’re managing now)? Or do you already use the perfect tool, and are you willing to share?

Gone are the days of desk-to-desk IT assistance. IT admins are increasingly using remote support tools to simplify manual efforts and save time in troubleshooting remotely located systems. Traditional remote connectivity tools establish remote control sessions only over the enterprise network – LAN or WLAN. The drawback with this type of connectivity is that when your end-users are outside the corporate network – when they are working at home or working on-the-go without VPN access – it is not possible for IT teams to provide remote administration and support.


In today’s fast-paced digital world, having users travelling and working from different locations and network set-ups, with or without VPN access, puts more onus on IT teams to find convenient ways to provide tech support to these users. This doesn’t mean organizations should invest in an expensive IT workaround to address the challenge. The smart approach is to find and invest in a simple, affordable remote support tool that offers multiple options for providing remote support.


End-User Location and the corresponding Remote Support Connectivity:

  • Inside the corporate network (inside firewall): The IT technician should be able to remotely connect to these systems easily from a single management console.
  • Outside the corporate network with VPN access (inside firewall): The IT technician should be able to remotely connect to these systems easily from a single management console.
  • Outside the corporate network without VPN access (outside firewall): The IT technician should be able to connect to these systems via a secure Internet proxy and initiate over-the-Internet remote sessions.


IT Technician Location and the corresponding Remote Support Connectivity:

  • At his own system within the corporate network (inside firewall): The IT technician can directly establish a remote session with end-user systems using a remote support tool.
  • No access to his system, mobile access only, within the corporate network (inside firewall): The IT technician can leverage a client-server model, with his mobile device as the client and a mobile gateway server deployed within the LAN, to view and control the end-user system directly from the mobile interface.
  • Mobile access only, outside the corporate network without VPN access (outside firewall): The IT technician can leverage the same client-server model, with his mobile device as the client and a mobile gateway server (configured on a server in a DMZ or on a server accessible through a VPN connection), to view and control the end-user system directly from the mobile interface.


With access to all these remote connectivity options, IT support and administration can truly happen anywhere, anytime. IT pros can keep providing remote support whether the end-user is inside or outside the corporate network.


Check out DameWare Remote Support version 11.0, which provides easy-to-use remote control functionality for accessing remote systems both inside and outside the corporate firewall. With a centralized administration server, a secure Internet proxy for over-the-Internet sessions, and mobile remote connectivity options, DameWare Remote Support can be your cost-effective and reliable IT aid, supporting IT administration round-the-clock, whenever and wherever!


In a private notice to healthcare providers, the FBI has warned that the United States healthcare industry is not as resilient to cyber intrusions as the financial and retail sectors, and that increased cyber intrusions are therefore likely.

"The healthcare industry is not as resilient to cyber intrusions compared to the financial and retail sectors, therefore the possibility of increased cyber intrusions is likely," the Federal Bureau of Investigation said in a private notice it has been distributing to healthcare providers, obtained by Reuters.

In addition to weaker defenses, there is more to gain.  Health data is far more valuable to hackers on the black market than credit card numbers because it tends to contain details that can be used to access bank accounts or obtain prescriptions for controlled substances.

So why has this happened?  Does healthcare just not care about security?  Far from it.  Here are some challenges we’ve seen healthcare organizations face in building stronger defenses:

  • Weak regulatory enforcement:  While the HIPAA security standard is one of the oldest IT security regulations out there, until recently it hasn’t been well policed.  This means that healthcare, which is largely cost-sensitive and to a large extent not-for-profit, didn’t make security spending as big a priority. 
  • Availability Trumps All:  Whether it’s doctors who refuse to take extra steps to access health data, or concern that certain security measures may threaten availability and the care process, healthcare as a practice is resistant to any security measure that might block or delay access. 
  • Managing the Cost of Care:  The debate on how to lower the cost of care has been going on for decades.  Healthcare is particularly sensitive about managing costs, and security often doesn’t come cheap – from both a product and a workforce perspective. 


Aside from the FBI warning, the real costs associated with data breaches and non-compliance have never been greater for healthcare. 

Need Some Tips on Improving Healthcare Security? 

Watch our 2 Part Video: For helpful tips on how to manage these changes, we’ve put together a short 2 part video series on how to improve healthcare security without spending a fortune.

Part 1 covers the shift in the regulatory and risk environment, and Part 2 goes through some helpful tips on how to cost-effectively improve healthcare security.


Read a guide on improving security developed by a SolarWinds security expert: for more information, read this free white paper to understand more about healthcare IT security and risk management.


Share your tips and tricks here too.  We'd love to keep the conversation going. 

One of the good parts of my job is helping customers benefit from implementing virtualization management solutions. In most cases they start looking for solutions only after they experience problems. Slow VMs, slow applications, you know….


Then what? Fire in the house. The first person called is the “guy who manages” the whole system – my client. He gets complaints about slow application or network performance. By then it’s of course already too late to prevent the problem, especially if there is no virtualization management suite that allows you not only to monitor, but also to predict with “what if” scenarios.

Wouldn't it be better to fix problems before they occur? Slow VM performance has a source somewhere, and it would be nice to be alerted that, for example, “In one month you’ll run out of space in your datastore”, or “In 2 months the capacity of your 2 brand-new ESXi hosts will not be sufficient” because you’re creating 5 new VMs per day…


Usually the admins have to firefight two things (and often many more):


  • Internal problems on applications they run on their infrastructure.
  • Problems related to VM sprawl where the virtualization system becomes inefficient and the only solution would be to throw in more hardware.


One of the first functions of every virtualization management suite is the capability to get the “full picture” – to see, for example, that half of the client’s VMs are performing poorly because they’re over- (or under-) provisioned with CPU or memory, or that every second VM is running with snapshots.


Other problems can arise from the poor performance of existing hardware, which is often out of warranty and too old to do the job required.

Now what? Buy first and optimize later? No, that’s not what I usually advise. The optimization phase consists of helping the client solve their problem first and then advising them to implement a solid virtualization management solution. The client thanks you not only for your help, which saved their bacon, but also for the advice that saves them from future problems.

FTP, FTPS and SFTP are the most widely used file transfer protocols in the industry today. All three differ in their data exchange process, security provisions and firewall considerations. Let’s discuss how they differ so it’s easier for you to select the right protocol based on your requirements.



File Transfer Protocol (FTP)

FTP works in a client-server architecture: one computer acts as the server that stores data, and another acts as the client that sends files to or requests files from the server. FTP typically uses port 21 for communication, and the FTP server listens on this port for client connections.

FTP exchanges data using two separate channels:

  • Command Channel: The command channel is used for transmitting commands (e.g. the USER and PASS commands) between the FTP client and server, typically over port 21 on the server side. This channel remains open until the client sends the QUIT command or the server forcibly disconnects due to inactivity.
  • Data Channel: The data channel is used for transmitting the actual data. In active mode FTP, the data channel is normally on port 20 (on the server side); in passive mode, a random port is selected and used. This channel carries directory listings (in response to the LIST command) and file transfers (the STOR and RETR commands for uploading and downloading files). Unlike the command channel, the data channel closes its connection once the transfer is complete.
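The passive-mode negotiation above is visible on the command channel: the server answers the PASV command with six comma-separated numbers that encode the data-channel endpoint. As a minimal sketch (the reply string below is a made-up example; the function name is illustrative, but the decoding rule follows RFC 959):

```python
import re

def parse_pasv_reply(reply: str) -> tuple[str, int]:
    """Decode a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply.

    The first four numbers are the server's IP address; the data-channel
    port is p1 * 256 + p2, per RFC 959.
    """
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if match is None:
        raise ValueError(f"not a PASV reply: {reply!r}")
    h1, h2, h3, h4, p1, p2 = (int(n) for n in match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

# Example: for the reply below, the client would open its data
# connection to 192.168.1.20 on port 192 * 256 + 168 = 49320.
host, port = parse_pasv_reply("227 Entering Passive Mode (192,168,1,20,192,168)")
```

This also makes the firewall implication concrete: the client must be allowed to reach whatever passive port range the server hands out, not just port 21.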


FTP is an unencrypted protocol and is susceptible to interception and attacks. The requirement of ports to remain open also poses a security risk.


File Transfer Protocol over SSL (FTPS)

FTPS is an extension to FTP that adds support for cryptographic protocols such as Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). FTPS allows encryption of both the control and data channel connections, either concurrently or independently. Two FTPS methods are possible:

  • Implicit FTPS: This is a simple technique that uses standard secure TLS sockets in place of plain sockets at all points. Since standard TLS sockets require an exchange of security data immediately upon connection, it is not possible to offer standard FTP and implicit FTPS on the same port. For this reason, another port needs to be opened – usually port 990 for the FTPS control channel and port 989 for the FTPS data channel.
  • Explicit FTPS: In this technique, the FTPS client must explicitly request security from an FTPS server and then step up to a mutually agreed encryption method. If a client does not request security, the FTPS server can either allow the client to continue in unsecure mode or refuse/limit the connection.


The primary difference between the two techniques is that with the explicit method, FTPS-aware clients can invoke security with an FTPS-aware server without breaking overall FTP functionality for non-FTPS-aware clients. With the implicit method, all clients of the FTPS server must be aware that SSL is to be used on the session, making it incompatible with non-FTPS-aware clients.
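The port conventions behind the two methods can be summed up in a small helper (a sketch – the function name and mode strings are illustrative, but the port numbers follow the conventions described above):

```python
def ftps_ports(mode: str) -> dict[str, int]:
    """Return the conventional server-side ports for an FTPS deployment.

    Explicit FTPS shares port 21 with plain FTP (the client upgrades the
    session in-band, e.g. via AUTH TLS); implicit FTPS needs dedicated
    ports because the TLS handshake happens immediately on connect.
    """
    if mode == "explicit":
        # Data port is then negotiated exactly as in plain FTP.
        return {"control": 21}
    if mode == "implicit":
        return {"control": 990, "data": 989}
    raise ValueError(f"unknown FTPS mode: {mode!r}")
```

For what it’s worth, Python’s standard library takes the explicit route: `ftplib.FTP_TLS` connects on port 21 like plain FTP and then upgrades the session.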


SSH File Transfer Protocol (SFTP)

SFTP is not FTP run over SSH, but rather a new protocol designed from the ground up to provide secure file access, file transfer, and file management functionalities over any reliable data stream. Here, there is no concept of command channel or data channel. Instead, both data and commands are encrypted and transferred in specially formatted binary packets via a single connection secured via SSH.

  • For basic authentication, you may use a username and password to secure the file transfer; for stronger authentication, you can use SSH keys (a public/private key pair).
  • Though SFTP clients are functionally similar to FTP clients, you cannot use a traditional FTP client to transfer files via SFTP. You must use an SFTP client for this.


A major functional benefit of SFTP over FTP and FTPS is that, in addition to file transfer, you can also perform file management functions such as permission and attribute manipulation, file locking, etc.
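The “specially formatted binary packets” mentioned above are length-prefixed. As an illustration, the very first packet an SFTP client sends (SSH_FXP_INIT, requesting protocol version 3) can be built with nothing but the struct module. This sketch only constructs the bytes per the IETF SFTP drafts; it does not open a connection, and the function name is made up for illustration:

```python
import struct

SSH_FXP_INIT = 1  # packet type for the client's opening INIT message

def build_init_packet(version: int = 3) -> bytes:
    """Build an SSH_FXP_INIT packet: uint32 length, byte type, uint32 version.

    The length field counts everything after itself
    (1 type byte + 4 version bytes = 5).
    """
    payload = struct.pack(">BI", SSH_FXP_INIT, version)
    return struct.pack(">I", len(payload)) + payload

# Unlike FTP's text commands, this is not human-readable:
# build_init_packet() -> b"\x00\x00\x00\x05\x01\x00\x00\x00\x03"
```

In a real session these packets would travel inside an encrypted SSH channel, which is why everything on the wire, commands and data alike, is protected.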







Security

  • FTP: Unencrypted information exchange on both the command and data channels. Communication is human-readable.
  • FTPS: Encryption happens on both the command and data channels via either implicit or explicit SSL. Commands are still human-readable text, but are encrypted in transit.
  • SFTP: All information exchanged between the server and client is encrypted via the SSH protocol. Communication is not human-readable, as it is in a binary format.

Firewall Port for Server

  • FTP: Allow inbound connections on port 21.
  • FTPS: Allow inbound connections on port 21 and/or ports 990 and 989.
  • SFTP: Allow inbound connections on port 22.

Firewall Port for Client

  • FTP: Allow outbound connections to port 21 and the passive port range defined by the server.
  • FTPS: Allow outbound connections to port 21 and the passive port range defined by the server.
  • SFTP: Allow outbound connections to port 22.


Choosing which protocol to use for file transfer depends entirely on your requirements and how secure you need the file sharing method to be. An effective approach is to use a third-party managed file transfer server that supports all three options, making it convenient to adjust based on your needs.

I’m very pleased to announce that Thomas LaRock, AKA sqlrockstar, has joined the Head Geek team. Many of you already know Tom from his work at Confio, where he played a critical part in their rapid growth as the Ignite product Evangelist. He remains in that role for what is now SolarWinds Database Performance Analyzer (DPA) and we’re thrilled to add his expertise to our group.


What makes Thomas such a highly ranked GEEK*?

  • He’s a Microsoft SQL Server Certified Master and six-time Microsoft SQL Server MVP. (He joins Lawrence Garvin as the second Microsoft MVP on the Head Geek team!)
  • He is known as “SQL Rockstar” in SQL circles and on his blog, which he started in 2003.
  • With over 15 years of IT industry experience, he has worked as a programmer, developer, analyst, and database administrator (DBA), among other roles.
  • He wrote “DBA Survivor: Become a Rock Star DBA,” sharing his wisdom and experience with junior- and mid-level DBAs to help them excel in their careers.
  • He is President of the Board of Directors for the Professional Association for SQL Server (PASS), an independent, not-for-profit association dedicated to supporting, educating and promoting the global SQL Server community.


*GEEK: We use this term with all the affection we can muster. Geeks rule. But you knew that, right?

Tom really knows his stuff and he’s great at sharing that knowledge. Seriously, see for yourself on his blog: http://thomaslarock.com. Oh, and he’s already received his SolarWinds lab coat, has started appearing in SolarWinds Lab episodes, and is writing on Geek Speak. Keep it up, Thomas!


Tom lives in Massachusetts with his family and loves running, basketball and until recently, rugby. He also enjoys cooking and is a film junkie. He earned a Master’s in Mathematics from Washington State, and also loves bacon.


Welcome Thomas!


Twitter: @HeadGeeks, @SQLRockstar

thwack: SQLRockstar
