
Geek Speak


Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many contractors located offshore, mostly in India, primarily programmers working on internal systems. When they connected to our network over VPN, we ran into a number of issues with inconsistent latencies and inconsistent desktop configurations. We decided to deploy a cluster of VMware hosts and, onto these, static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we would consider VDI today, I’ll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, place desktops in the DMZ when appropriate, and so on. We were also able to mitigate the latency between the interface and the application back end, since the desktops resided inside the data center. We no longer had issues with malware or viruses, and when a machine did become compromised in any way, we could respond swiftly to the end user’s needs by simply redeploying the machine. User data resided on a network volume, so after a redeploy, users retained their settings and personal drives from the redirected home directory. It was a very viable solution for us.


Citrix had accomplished this concept years earlier, but as a primarily VMware shop, we wanted to leverage our ELA for this. We did, however, have a few applications deployed via MetaFrame, which was still a functional solution.


Time moved on, and VMware View was released. It added the ability to deploy applications and desktops from thin images, easing the special requirements on storage. In addition, desktop images could now be either persistent or non-persistent, the latter meaning users received fresh desktops at login. The biggest benefit there was that a desktop only consumed storage while in use; when a user was not in the system, they had no footprint whatsoever.


There were some issues, though. The biggest was that the non-persistent desktops demanded so much processing power at login that we experienced significant “boot storms,” causing noticeable drag on the system for our users. At the time, with a series of LUNs dedicated to this environment, all spinning disk, the I/O issues forced us to stay in a traditional, fully persistent state.


In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add I/O to storage, and to make it easier to push applications to these machines.


The promise of VDI rests on some very compelling rationale. I’ve only outlined a few points above; beyond them, concepts such as pushing apps to mobile devices and “Bring Your Own Device” are just as appealing. Next I will talk about how VDI has grown, how solutions have become more elegant, and how hardware has fixed many of our issues.

“Spear phishing continues to be a favored means by APT attackers to infiltrate target networks”. - Trend Micro Research Paper 2012


“The reason for the growth in spear phishing: it works”. - FireEye Spear Phishing Attacks White Paper



One morning, a colleague in my data center network team and I received the following email:

[Screenshot: the phishing email]

I heard my colleague call the Help Desk and report that, a few minutes earlier, he had clicked a link in an email he suspected was a phishing message. That one click could have been damaging; it could have put my company in the US headline news. But…


Two days before my colleague clicked the link in that phishing email…


Our Information Security (InfoSec) team had coordinated with the Help Desk, the Email team, the Network Security team (my team), and an outside vendor to run a phishing email campaign as part of user security education. The outcome was favorable, in the sense that there were users besides my colleague who failed the test. The follow-up training was convincing (at least for those who failed…).



The above is an example of ordinary phishing, in which emails attack a mass audience. Cybercriminals, however, are increasingly using targeted attacks against individuals instead of large-scale campaigns. The individually targeted attack, aka spear phishing, is usually associated with the Advanced Persistent Threat (APT) and long-term cyberespionage.


The following incidents show that spear phishing has been quite “successful,” and the damage has been beyond anything we imagined.



Employees of more than 100 Email Service Providers (ESPs) experienced targeted email attacks. The well-crafted emails addressed those ESP employees by name. Even worse, email security company Return Path, the security provider to those ESPs, was also compromised.



Four individuals at the security firm RSA received malicious spear phishing emails. The success of the attack gave the cybercriminals access to RSA’s proprietary information about its two-factor authentication platform, SecurID. Due to the RSA breach, several high-profile US SecurID customers were subsequently compromised.



The White House confirmed that a computer system in the White House Military Office was attacked by Chinese hackers and that the attack affected an unclassified network. The hack began with a spear phishing attack against White House staffers, in which a White House Communications Agency staffer opened an email he wasn’t supposed to open.



An Associated Press journalist clicked a link in a targeted email that appeared to be a Washington Post news story. The AP’s official Twitter account was then hacked, and a fake tweet reporting two explosions at the White House briefly erased $136 billion in equity market value from US stock markets. In the same year, a hacker group in China was said to have hacked more than 100 US companies via spear phishing emails, stealing proprietary manufacturing processes, business plans, communications data, and more. In addition, you remember Target’s massive data breach, right?



Unauthorized access was obtained to the Centralized Zone Data System (CZDS) of the Internet Corporation for Assigned Names and Numbers (ICANN), the overseer of the Internet’s addressing system. ICANN announced that it believed the compromised credentials resulted from a spear phishing attack. Through the same attack, access was also gained to ICANN's public Governmental Advisory Committee wiki, blog, and WHOIS information portal. Again, you still remember Home Depot’s 2014 breach that exposed 56 million payment cards and 53 million email addresses, right?



The US confirmed that the Pentagon was hit by a spear phishing attack in July, most likely from Russian hackers, which compromised the information of around 4,000 military and civilian personnel who work for the Joint Chiefs of Staff. The hackers used automated social engineering tactics to harvest information from employees' social media accounts and then used that information to conduct the spear phishing attack.



How do we protect against and detect the increasing spear phishing attacks? Our beloved defense-in-depth comes to mind: NGFW, IPS/IDS, SPF/DKIM validation, signature-less analysis services for zero-day exploit detection, IP/domain reputation services, web proxies, and up-to-date client/server patching, to name a few. Is a well-built security infrastructure sufficient against spear phishing? The incidents listed above tell us no. In the RSA breach, it took only one of the four targeted individuals falling into the trap to make the hackers happy. So user education is an essential component of any spear phishing defensive strategy. Make smarter users: remind them regularly not to fall into spear phishing traps, and send them mock phishing drills at random intervals.
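Several of the layered defenses above (SPF, DKIM) boil down to checking a message's authentication results before a user ever sees the link. As a rough, hypothetical sketch, not a production filter, here is how a script might flag any message whose SPF or DKIM result is not a pass. The Authentication-Results header is standard (RFC 8601), but its exact formatting varies by mail gateway, and the sample message below is invented:

```python
# Sketch: flag SPF/DKIM failures from a message's Authentication-Results
# header. The sample message and domains are made up for illustration.
from email import message_from_string

RAW = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=attacker.test; dkim=none
From: "IT Help Desk" <helpdesk@example.com>
Subject: Urgent: verify your password
To: user@example.com

Click here to keep your mailbox active.
"""

def auth_failures(raw_message: str) -> list[str]:
    """Return the spf/dkim/dmarc results that did not pass."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "")
    failures = []
    for part in results.split(";"):
        part = part.strip()
        for mech in ("spf", "dkim", "dmarc"):
            # Anything other than mech=pass is treated as suspicious.
            if part.startswith(mech + "=") and not part.startswith(mech + "=pass"):
                failures.append(part)
    return failures

print(auth_failures(RAW))
```

A real gateway does far more (alignment checks, DMARC policy, reputation), but even this simple check would have quarantined the sample message.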


I won’t ask you to share your spear phishing story. But how does your organization protect against spear phishing? What user awareness and training does your organization provide? Please share; I would like to hear from you.

As you might have read a couple of weeks ago in my AWS review, I started this post almost a month ago. Unfortunately, I ran into a snag after creating my database instance, and it kept me from completing the article. HP Support, however, was pretty awesome and jumped on the issue right away; in fact, within 48 hours the problem was fixed and I was able to continue. With that said, let's jump right in... but keep one thing in mind: this service is probably the newest of any of the services I've looked at so far. I think I actually signed up when it was in beta, and it's so new that not all geos have it as an option yet.



As I mentioned, this feature isn't available everywhere, so I had to navigate to the US West geo to find it. Once there, I was able to click the button in Horizon and get started creating.


HP sticks pretty close to the standard OpenStack Horizon dashboard layout, so things are pretty clean throughout the UI.


Once you click Launch Instance, you get pretty much the same questions as on every other wizard.


I tossed in some generic stuff for my first database on the instance.


I did find that they make restoring backups pretty simple... Just create a new instance and select the backup image to use... simple as that!


And there you have it, an HP DBaaS! If you look closely you can also see where some of my problems started to show up before HP fixed them... Volume Size showed as "Not Available".



So this is where I noticed I was having problems the first time through... When I went to look for my users and databases in the Horizon UI, everything would time out and just say the information wasn't available. After HP support did their thing, everything was much happier... I should note that they were able to replicate the issue on other instances, so it must have been a bug, which they were able to quickly fix. (I do wonder if I was the first person to try their MySQL DBaaS offering, though, LOL, because it was unusable before the fix...)


Anyhow, the details/Overview page did work just fine, and it's where I got my connection string from... down at the bottom.


The database tab simply shows what databases are on the instance and gives you the ability to delete them.


So this is where my love/hate relationship starts... If I reboot my instance, my user account will show up... for a while... then I get this awesome error. I decided to assume it's a Horizon issue and not a problem with the instance; plus, I know my username is jpaul. One other note about the user area: even when it is working, I could not find a way to change my password for the instance or add other users to it... I would think I could from the MySQL CLI, but the GUI is pretty bare.

[Screenshot: the user data error]

Well, I wasn't about to let a busted web interface slow me down... I'll open another support ticket today, and I'm sure they'll have it fixed again in no time... But I know my username, I know the IP address of the instance... and I have a MySQL client.


Unfortunately, the CLI didn't get much further. Looks like I will certainly have to open a ticket.


The only other thing I can really show you right now is how to restore a backup.



So to take a backup, there's a Create Backup button listed beside each instance that you have running. I won't go into too much detail because it works the same as any other create-backup button.


Restoring from one of those backups is pretty simple too: navigate back to the Launch Instance button and fill out the same form you did when creating the initial instance, with one exception... this time, on the Restore From Backup tab, you select which backup you want to restore. Simple as that.




Let me start by saying I hate doing "bad" reviews, even if they are constructive. I will say that I am a little disappointed that I have to reopen a ticket with HP support, too. I think I opened my first ticket on a Tuesday or a Wednesday... It took them until Thursday or Friday to figure out the problem, and then they expected me to respond within 24 hours (which fell on a weekend). I didn't, so they closed the case.


Overall I think that the service has promise, but right now I would say it still needs some work. I will post an update once HP support helps me get things going...

My name's Rick Schroeder, and I've been a member of Thwack since 2004.


I manage a health care network of 40,000+ computers across three states, and I've been enjoying using SolarWinds products like NPM, NCM, and NTA to proactively manage "Pure Network Services" across my organization.


In my environment, "Pure Networking" means my team supports LAN, WAN, firewall services, wireless environments, and VPN solutions.  Our babies are the Edge, Access, Distribution, Core, and Data Center blocks.  We don't directly support user-accessible applications, servers, workstations, or other end devices--we focus solely on LAN and WAN services: switches, routers, APs, and firewalls.  This truly makes my team the "D.O.T. for the Info Highway."  Not dealing with edge devices, users, and their apps is a luxury, but there's still a LOT of work to do, and I'm always looking for new tools and products that can help my team of six manage more systems in a better way.


When Danielle Higgins asked me to be a guest blogger for Thwack, I was pleased to share some thoughts about the resources SolarWinds has given us for getting what we need from Administration.


Sophisticated, powerful, well-developed tools that can improve our customers' experience are not free.  Persuading Administration to allocate budget to purchase them can be an intimidating challenge.  But as Joel Dolisy and Leon Adato explained, it's all about seeing the world from Administration's point of view.  They are featured in a YouTube video for the Thwack Mission for August 2015, called "Buy Me a Pony: How to Make IT Requests that Management Will Approve."  SolarWinds has leveraged their knowledge to provide a resource you can use to build a successful pitch for allocating funds to get what you need--and it applies to anything, not just Orion products.


I watched the video and took away a lot of great ideas.  I made notes during the video and then put them together into an e-mail that I shared with my team.  We're using it as a guide for improving the network by learning the steps to make a winning presentation to Management for additional resources (tools, people, etc.).  You can use the same steps and make progress toward the tools and resources you need with the concepts I've prepared below:





One day you’ll want to champion a new product.  When you make your case for that product, your success depends on focusing on points that Management wants.  Your job is to show them how funding your project will accomplish the things they find important.


Some examples of important items to decision makers:

  • Growth (revenue, market share, etc.).
  • Cost Reduction (improve cash flow, save $$).
  • Risk (Avoid or resolve compliance issues, prevent exposure).


Find out how your new tool will fit the items above, then promote those features.  This simple idea will give you a better chance of getting Administrative approval than if you spend your time describing to them the technical features or coolness of the product.


Target what will get Management’s attention and support. 


Example:  Suppose an e-mail system keeps crashing.  You know replacing it will prevent that issue, so your goal is to buy/install a new e-mail system.  If you can’t convince Management that your project will match their top needs, you won't get quick approval to proceed.


If Management pays most attention to Risk Avoidance, then show them how preventing e-mail crashes reduces Risk.  Show that SOX and HIPAA compliance will be much improved by a new e-mail solution, and you’re halfway to getting funding for your project.


If Management is concentrating on bottom line issues, show how a new e-mail system will improve cash flow & save money.  Now your pitch is much more likely to receive the right response from them.


If Management pays most attention to Dollars and Growth and Cost Reduction, then learn the cost of e-mail crashes & show them how your recommendation addresses those specific items.  An example:

  • Crashing e-mail services cost money.  Find out how much and show them:
    • X dollars of support staff time per minute of down time and recovery time.  You could do the calculation based on industry-standard salaries broken down to hours & minutes, and show the actual dollar cost of support to recover from an e-mail server crash. Talk about persuasive!
    • 3X dollars in lost new orders as customers fail to get timely responses and turn to the competition because they can’t get responses to their e-mail.
    • 20X dollars in lost order payments that come in via e-mail.
    • 50X dollars in missed opportunities when our competitors get to the customers before us because our e-mail service is down.
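To make the math above concrete, here is a minimal back-of-the-envelope sketch in Python. Every number and multiplier is a placeholder I've invented for illustration; substitute your organization's real figures before presenting anything:

```python
# Sketch of the X / 3X / 20X / 50X downtime-cost estimate described above.
# All inputs are hypothetical placeholders, not real industry figures.

def downtime_cost(minutes_down: float,
                  staff_involved: int,
                  hourly_rate: float,
                  lost_orders_multiplier: float = 3.0,
                  lost_payments_multiplier: float = 20.0,
                  missed_opportunity_multiplier: float = 50.0) -> dict:
    """Estimate the dollar impact of an e-mail outage."""
    support_cost = staff_involved * hourly_rate * (minutes_down / 60)  # X
    return {
        "support": round(support_cost, 2),                                           # X
        "lost_orders": round(support_cost * lost_orders_multiplier, 2),              # 3X
        "lost_payments": round(support_cost * lost_payments_multiplier, 2),          # 20X
        "missed_opportunities": round(support_cost * missed_opportunity_multiplier, 2),  # 50X
    }

# Example: a 90-minute crash with 4 support staff at $75/hour.
costs = downtime_cost(90, 4, 75.0)
print(costs)
```

Numbers like these, presented honestly with your own multipliers, are exactly the kind of verifiable data decision makers respond to.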



Don’t make up facts.  Being honest builds their trust in you and your team.  The key is to create a solution to your problem that not only fixes your issues, but fixes Management’s problems, too.


Remember that telling the down side of a product is not necessarily bad.  If you don’t include the full impact of your project on the business, decision makers may decide your proposal is not yet mature.  They might think "So you’re asking for a free puppy?  And you're not talking about food costs & vet bills and training, etc.?"  That can be the path to a quick denial from them.


When you hear “no”, it may mean:

  • Your pitch is right on, but the timing isn’t good for the company right now due to cash flow issues.
  • You simply haven’t convinced the decision maker yet.
  • You haven’t given the decision maker what they need to successfully take your case to THEIR manager.  They don’t want to look foolish merely because of your enthusiasm about a product; you have to show you really know what you're talking about.  Then they can champion your cause to their boss without being at risk.
  • You haven’t shown how your application or hardware matches the company’s core focus (Risk, Revenue/Growth, Cost, etc.).
  • You haven’t brought data that can be verified, or it’s too good to be true, or it simply is not believable.


If you haven’t shown the downsides, management knows you:

  • Haven’t done due diligence to discover them.
  • Are hiding them.
  • Are caught up in the vendor’s sales pitch so much that you only see what’s shiny, and haven’t thought about (and found) problems with the product.


Your job is to treat “No” as if it only means “not yet.”


“No” does NOT mean:

  • You can’t come back with this again.
  • They're denying it because they don't like/trust you.
  • Your idea is no good.
  • You can’t have it due to politics.


Instead of accepting the denial and feeling you've failed, you can think of “no” as administrators simply saying your cake isn’t completely baked yet.  Interpret it as "Once your cake's fully cooked, we're interested in having you bring it back for review."



When you hear “no”, ask questions to find out why “no” is the right answer for Administration at the present moment:

  • What must be changed for the outcome to be “yes”?  What is not being heard that is needed?
  • Can we come back at a better time and talk about this again?  When?
  • How can we align the project better with the business focus/goals?
  • Is there a better point in the budget year to look at this?
  • Do we need other in-house skills, maybe an outside contractor, before we tackle this?
  • Do we need a training plan to develop our staff’s skills for the new technology?
  • Can we set up a small-scale demo to show you the product’s merits in-house via a proof-of-concept?



Remember:  There are no “IT Projects”.  There are only Business Projects that IT supports.



Help everyone on your team understand how they must make this project align with the Business’s goals when they present it to peers and administrators.


Example:  Suppose your goal is moving away from Nagios to Orion:

  • Bring the idea to your System Administrators, DBAs, Apps Analysts--anyone who uses the old product (or who COULD be a candidate to use it).
  • Get their input and wish list, then show them how your new tools will give them better access and insight into their environments' functionality.
  • Ask them what they’re not getting from Nagios, listen carefully and then show them how Orion can provide those specific items to them.
  • Set up a 30 day free demo and then get them excited about what a SHARED tool like Orion can do across the silos.
  • Now you've turned them into supporters for the new project, and they can report positively about it to their managers.


Who is the funding Approver?

The CIO.


Who are the consumers of the new project?

Managers of Apps and Servers.


What part of the decision making process happens outside your view?

The CIO goes to the department Managers for their opinion. But you did your homework and the Managers have already heard great reports about your project from their trusted teams.


Result:  They share the good news to the CIO, and the CIO OK’s the purchase.


It's all about finding how your good idea is ALSO a good idea to the decision makers.  It's a formula for success.



Don’t overdo the presentation. If you want to lose the audience, include every stat, every permutation of dollars and numbers, show them a PowerPoint presentation with more than 10 slides, etc.

Your detailed/extra available information is not appropriate for the initial presentation.  But keep it for answering future queries.


It’s more important that the bosses feel comfortable with the solution than that they see every detail.


Here are some resources I found on https://thwack.solarwinds.com/ that can help you get buy-in from decision makers.  These tools may convince them that your project will make a significant contribution to improving the environment:

  • Feature Function Matrix:
      • Lists all the features that a great monitoring tool might have.
      • Lets you compare what you have in-house today to what a comparable SolarWinds tool provides.
      • Shows the gap between the services and information your existing product provides and what the new SolarWinds product can fill.
      • Lets you identify primary sources of truth.  Example:
        • Suppose you have multiple ping latency measuring tools and you’re not eliminating any of them.
          • The Feature Function Matrix lets you prioritize them.
          • Now you can see which ones are most important, and which tools are backups to the important ones.
  • Sample RFPs:
      • Let you score and weight features.
      • Auto-calculate the information you need to show the decision makers.
      • Allow competing vendors to show their strengths & cost.


  • In the Geek Speak forum you can find items associated with the Cost of Monitoring.  They might be just what you need to show management, highlighting the cost of:
      • Not monitoring.
      • Monitoring with the wrong tool.
      • Monitoring the wrong things.
      • Monitoring but not getting the right notifications.
      • Forgetting to automate the monitoring response.
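The RFP scoring idea above (weighting features, then comparing vendors) is just a weighted sum, and it's easy to sketch. In this toy illustration, all the feature names, weights, and vendor scores are invented for the example:

```python
# Toy weighted-score RFP comparison. Weights are 1-5 (how much the feature
# matters to us); vendor scores are 0-5 (how well each vendor delivers it).
# All names and numbers below are invented placeholders.

WEIGHTS = {"alerting": 5, "netflow": 3, "config_backup": 4}

VENDOR_SCORES = {
    "Vendor A": {"alerting": 4, "netflow": 5, "config_backup": 3},
    "Vendor B": {"alerting": 5, "netflow": 2, "config_backup": 5},
}

def weighted_total(scores: dict) -> int:
    """Sum each feature score multiplied by its weight."""
    return sum(WEIGHTS[feature] * score for feature, score in scores.items())

ranked = sorted(VENDOR_SCORES,
                key=lambda v: weighted_total(VENDOR_SCORES[v]),
                reverse=True)
print(ranked[0], weighted_total(VENDOR_SCORES[ranked[0]]))
```

The point of showing decision makers a table like this is that the ranking follows transparently from agreed-upon weights, not from anyone's gut feeling.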


Here's hoping you can leverage the ideas I've shared to successfully improve your environment.  Remember, these processes can be applied to anything--getting a raise, adding more staff, increasing WAN pipes, improving server hardware, getting a company car--the sky's the limit (along with your creative ways to sell great ideas).


Swift Packets!


Rick S.

One of the hottest technologies in networking right now is the Software Defined WAN (SD WAN), and pretty much every solution touts Zero Touch Deployment (ZTD) as a key feature. ZTD for SD WAN means connecting a new device to the LAN and WAN at a site and having it magically become part of the corporate WAN, with no need to pre-configure the device, ship a flash drive to the site, or have smart hands on site configuring it.

It seems then that SD WAN has pretty much nailed the concept of “plug and play” configuration; but what does this have to do with Network Management?


Managing New Devices


We have plenty of ways to manage the addition of new devices to our networks. Perhaps the most obvious example is DHCP, which alleviates the need to manually configure IP addresses, masks, gateways, and DNS servers on new endpoints. Server administrators have largely addressed the wider issue of initializing new hosts: DHCP and PXE can bring up an uninitialized server (or VM) on the network and have it automatically install a specified OS image. Beyond that, tools like Puppet and Chef, pre-installed on the base image, can automate the installation and configuration of services on the OS, and can be used for configuration management and validation from then on.


It would be nice if adding a new device to the network were that easy. Can our management systems do that for us? Sure, we can use DHCP to give a device a management IP address, but what about taking it to the next step by installing software and configuring it?


Some Existing Technologies



You could argue that Cisco is the granddaddy of automatic installation, having supported tftpboot for software images and configuration files for many years now. But even ignoring the unreliability of TFTP for data transfers, it has been a distinctly flaky system, and it requires configuration on the end system to get the desired results.




IP phones have in general addressed this issue a little more intelligently, using DHCP options to provide them with important information. Cisco uses DHCP Option 150 to tell a phone where to download its configuration file, which for a new phone means starting the registration process with Call Manager. This is a little odd, because DHCP already has Option 66 to define a TFTP server, but Cisco is not alone. Avaya, for example, can use Option 176 to override Option 66 and provide a list of information to the client, including the Media Gateway server IP and port. ShoreTel avoids TFTP altogether and provides the ShoreTel Director IP, along with other information, via DHCP Options 155 and 156.


Looking at this, I see two clear –but common– elements at play:


  1. DHCP is incredibly flexible. Information can be delivered to end clients giving them a wide range of information which could help them determine where to boot from, what image to run, and what configuration file to load.
  2. DHCP is incredibly flexible. This means that vendors have used that flexibility to deploy their applications in the way that best suited them, and do not seem to share a common approach. Equally interesting, some of the DHCP options being used are in conflict with the existing formally assigned definitions.


Network Device Management



I’d argue that the reason PXE has been so successful in the server world is because it’s a single standard that’s implemented consistently regardless of vendor. The Open Network Install Environment (ONIE) –contributed to the Open Source community by Cumulus Networks– is trying to do the same thing for network devices, with a typical use case being the deployment of an uninitialized bare metal switch triggering the download of the customer’s chosen network operating system. Once the switch boots the new OS though, we’re typically back to vendor-specific mechanisms by which to obtain configuration.


Puppet and Chef


Puppet and Chef are a great concept for initial configurations, and Juniper’s inclusion of a Puppet agent in Junos on some of their platforms is certainly a nice touch. But between limited feature support and the agent’s periodic polling model (the default is every 30 minutes), it rapidly becomes obvious that Puppet may not be the best “real time” device management solution.


One DHCP to Rule Them All


So here’s a straw man that you can set fire to as needed. What if –stop laughing at the back– we decided to use DHCP to provision a new device with everything it needed to get on the network and be manageable? Let’s set up DHCP options to send:


  • IP / Mask / Default Gateway
  • Optional directive to make the IP your permanent IP (and not use DHCP on the next boot)
  • Management network host routes as needed
  • Device hostname and domain
  • Name servers (DNS)
  • SNMP RO strings
  • A directive (sounds like Puppet here) to enable SSH and generate keys, if needed.
  • A URL to register itself with network management system
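To make the straw man slightly more concrete, here is a sketch of what such options could look like on the wire. DHCP options are simple type-length-value triples; options 12 and 15 genuinely carry the hostname and domain name, but options 224-226 below are purely hypothetical placeholders drawn from the site-specific range, with meanings I've invented for this example:

```python
# Sketch: encoding provisioning data as DHCP type-length-value options.
# Options 12/15 are standard; 224-226 and their meanings are hypothetical.
import struct

def encode_option(code: int, payload: bytes) -> bytes:
    """Encode one DHCP option as type-length-value."""
    if len(payload) > 255:
        raise ValueError("DHCP option payload limited to 255 bytes")
    return struct.pack("BB", code, len(payload)) + payload

options = b"".join([
    encode_option(12, b"edge-sw-01"),                         # hostname (standard option 12)
    encode_option(15, b"corp.example.net"),                   # domain name (standard option 15)
    encode_option(224, b"public-ro-string"),                  # hypothetical: SNMP RO community
    encode_option(225, b"https://nms.example.net/register"),  # hypothetical: NMS registration URL
    encode_option(226, b"\x01"),                              # hypothetical: "make this IP permanent"
])

print(len(options), "bytes of options")
```

The encoding itself is trivial; the hard part, as noted above, would be getting vendors to agree on the option numbers and semantics.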


This much would get the device on the network and make it manageable using the NMS. Maybe we take it a step further:


  • Like PXE, a host to ask for your boot image (include logic to say “if it differs from what you already have”), so that a new device will load the standard image for that platform automatically.
  • A path to a configuration file perhaps?


If this were supported across multiple vendors, then in theory a device could be connected, add itself to the NMS (which would have the right ACLs to allow it), pull down the code we wanted running on it, and load a specific or default configuration from a path we pointed to. Configuration from that point onward would have to use some other mechanism, but at least we would achieve the goal of ZTD without inventing an entirely new protocol. All we need on our side is a script that maps device properties to a management MAC address (ideally), or that automatically provisions devices based on vendor, device type, and location.
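That mapping script could be as simple as a lookup table keyed on the management MAC, falling back to a vendor default keyed on the OUI. A hypothetical Python sketch, where all the MAC addresses, image names, and TFTP paths are invented:

```python
# Sketch: map a device's management MAC to its provisioning profile.
# Every entry here is a made-up example, not a real device or image.

PROFILES = {
    "00:1b:21:aa:bb:01": {"hostname": "core-sw-01", "image": "ios-xe-17.9.bin",
                          "config_url": "tftp://10.0.0.5/core-sw-01.cfg"},
    "00:1b:21:aa:bb:02": {"hostname": "edge-rtr-01", "image": "junos-23.2.tgz",
                          "config_url": "tftp://10.0.0.5/edge-rtr-01.cfg"},
}

DEFAULTS_BY_OUI = {
    # Fall back to a vendor-default profile keyed on the OUI (first 3 octets).
    "00:1b:21": {"hostname": "unprovisioned", "image": "default.bin",
                 "config_url": "tftp://10.0.0.5/default.cfg"},
}

def profile_for(mac: str) -> dict:
    """Exact MAC match first, then vendor default by OUI, else empty."""
    mac = mac.lower()
    if mac in PROFILES:
        return PROFILES[mac]
    return DEFAULTS_BY_OUI.get(mac[:8], {})

print(profile_for("00:1B:21:AA:BB:01")["hostname"])
```

A real implementation would hang this lookup off the DHCP server or NMS registration endpoint, but the core logic is no more complicated than this.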


Genius or Crazy?


Would this help? Would it be good if a device could register itself automatically in NMS? What else would you need? Let’s either build this up or tear it down; is this the best idea in the world or the most stupid?


I look forward to your opinions. And hey, if this turns out to be a world-changing idea and takes off, I’m going to call it TugBoot or something like that. It’s only fair.


Please let me know.

What does this CI concept mean? A converged architecture is, to put it simply, a purpose-built system, including server, storage, and network, designed to ease the build-out of a centralized data center environment. By this I mean that servers and storage are combined with networking such that, with very minor configuration after plugging the equipment in, one can manage the environment as a whole.


In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out appropriate sales opportunities where an architecture like this would be viable, and to qualify our prospects for what was called the vBlock. A vBlock comprised Cisco switches (the just-released Nexus line), Cisco servers (the also freshly released UCS blades), and EMC storage. These vBlocks were very prescriptive in their sizing, and very much dedicated to housing virtualized environments; VMware was critical to the entire infrastructure. For these systems to be validated by the federation, all workloads on them needed to be virtualized. The key, and the reason this was more significant than what customers may already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, IOS for the switch layer, and storage management. This was where the magic occurred.


Then came a number of competitors. NetApp released the FlexPod in response, and the FlexPod was just that: more flexible. Workloads were not required to be exclusively virtual; the storage, obviously, was NetApp; and importantly, customers could configure these systems less rigidly around their sizing requirements and build them up further as needed.


There were other companies, most notably Hewlett Packard and IBM that built alternative solutions, but the vBlock and FlexPod were really the main players.


After a bit of time, a new category was created: hyperconvergence. The early players in this field were Nutanix and SimpliVity. Both companies built much smaller architectures, quite reasonably called hyper-converged. They were originally seen as entry points for organizations looking to virtualize from a zero point, or as point solutions for new circumstantial projects like VDI. They’ve grown in both technology and function to the point where companies today are basing their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM, and offering replication strategies for DR, SimpliVity has other compelling pieces, such as a storage deduplication model and replication, that make a compelling case for pursuing it.

There are also many new players in the hyper-converged marketplace, making it the fastest growing segment of the market today. Hybrid cloud models are making these types of approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for new players in the game like Pivot3, Scale Computing, and IdealStor, bigger companies like Hewlett Packard, and the EVO approach from VMware, with EVO:RAIL and EVO:RACK getting their official launch this week at VMworld.

“After having spent the last two weeks in Asia I find myself sitting in a hotel room in Tokyo pondering something. I delivered a few talks in Singapore and in Manila and was struck by the fact that we’re still talking about SQL injection as a problem”. - Dave Lewis, CSO Online, July 31, 2015





The following story is based on an actual event.


A Chief Security Officer (CSO) called a junior InfoSec engineer (ENG) after 5PM.


CSO: “I am looking for your manager. Our main website was hacked…”

ENG: “He left already. And no, I heard that people complained the website was slow this afternoon. The web team is working on it.”

CSO: “I am telling you that our website was hacked! There are garbage records in the database behind the website. The DBAs are trying to clean up the database. We were hacked by SQL injection!”

ENG: “…”

CSO: “Call your boss now! Ask him to turn around and go back to the office immediately!”


Several teams at that poor company spent the whole night cleaning up the mess. They needed to restore the database to bring back the main website.



In my last Thwack Ambassador post, OMG! My Website Got Hacked!, I summarized the last four OWASP Top 10 lists since 2004. Injection in general, and SQL injection in particular, was number 1 on the OWASP Top 10 in 2010 and 2013. I predict that SQL injection will still be number 1 in the upcoming OWASP Top 10 report in 2016. Check out this list of SQL injection incidents. Do you notice the increasing number of incidents in 2014 and 2015?


It’s another Christmas Day. In Phrack Magazine issue 54, December 25, 1998, there was an article on “piggyback SQL commands” written by Jeff Forristal under the pseudonym rain.forest.puppy. Folks, 1998 was the year in which the SQL injection vulnerability was first publicly mentioned, although the vulnerability had probably existed long before then. Almost 17 years have passed since Jeff Forristal wrote his article “ODBC and MS SQL server 6.5” in Phrack Magazine, and many companies are still hit hard by SQL injection attacks today.


If you want to know more about the technical details of SQL injection, I recommend you read Troy Hunt’s "Everything you wanted to know about SQL injection (but were afraid to ask)". Then you’ll appreciate the XKCD comic, Exploits of a Mom, that I included at the top of this post.


There are a few solutions to combat SQL injection; we may actually need all solutions combined to fight against SQL injection.


DATA SANITIZATION. Right. All user inputs to websites must be filtered. If you expect to receive a phone number in the input field, make sure you receive a phone number, nothing else.
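As a minimal sketch of that phone-number example (the helper name and the allowed pattern are my own assumptions, not from any particular framework), input validation in Python might look like this:

```python
import re

# Accept only digits, spaces, dashes, parentheses, and an optional leading "+",
# with 7-15 digits total. Anything else is rejected outright.
PHONE_RE = re.compile(r"^\+?[\d\s\-()]{7,20}$")

def is_valid_phone(value: str) -> bool:
    """Return True only if the input looks like a phone number."""
    digits = re.sub(r"\D", "", value)  # count the digits on their own
    return bool(PHONE_RE.match(value)) and 7 <= len(digits) <= 15

print(is_valid_phone("+1 (555) 123-4567"))       # True
print(is_valid_phone("555'; DROP TABLE users;--"))  # False -- rejected, not cleaned
```

The important design choice is allow-listing: instead of trying to strip out "bad" characters, reject anything that doesn’t match the narrow shape you expect.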


SQL DEFENSES. As OWASP recommends, use parameterized statements, use stored procedures, escape all user-supplied input, and enforce least database privilege. Don’t forget to log all database calls. And last but not least, protect your database servers.
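To show why parameterized statements are the first recommendation, here’s a self-contained Python sketch using the standard library’s sqlite3 module (the table and the attacker’s input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: concatenating user input lets the attacker rewrite the query.
unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())   # [('admin',)] -- the OR clause matched every row

# SAFE: a parameterized statement treats the input as a literal value, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no user is literally named that
```

The same pattern applies to any database driver: the query text and the data travel separately, so the payload can never change the statement’s structure.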


APPLICATION FIREWALL AND IPS. I agree that it’s not easy to customize security rules to fit your applications. But if you invest in AFW and/or IPS, they will be your first line of defense. Some vendors offer IDS-like, application behavioral model products to detect and block SQL injection attacks.


FINDING VULNERABILITIES AHEAD OF HACKERS. Perform constant security assessments and penetration tests on your web applications, both internal and internet-facing. Also, common-sense wisdom: patch your web servers and database servers.


EDUCATION. EDUCATION. EDUCATION. Train your developers, DBAs, application owners, etc. to have a better understanding of information security. It will be beneficial to your company to train some white-hat hackers in different teams. Troy Hunt made a series of FREE videos for Pluralsight in 2013, Hack Yourself First: How to go on the Cyber-Offense. Troy made it clear in the Introduction that the series was for web developers. You don’t have to log in or register; just click on the orange play icons to launch the videos.



Do you have any story of a SQL injection attack to share? You may not be able to share your own story, but you can share the stories you’ve heard. Do you think that it’s hard to guard against SQL injection attacks, and that’s why even many Fortune 500 companies still suffer from the threats? How do you protect your web applications and database servers from SQL injection threats?

In the past few articles I’ve been covering the issues around moving your database from on-premise servers to the cloud. And there are definitely a lot of issues to be concerned about. In my last article I dug into the areas of latency and security, but there are a host of other issues to be concerned about as well, including the type of cloud implementation (IaaS, PaaS, DBaaS), performance, geographical ownership of data, and the last-mile problem. While it’s clear that the cloud is certainly becoming more popular, it’s also clearly not an inevitable path for everyone. Even so, you shouldn’t necessarily seal the cloud off in a box and forget it for the next five years. Cloud technologies have evolved quickly, and there are places where the cloud can be a benefit – even to DBAs and IT pros who have no intention of moving to the cloud. In this article I’ll tackle the cloud database issue from another angle: where does using the cloud make sense? There are a few places: disaster recovery (DR), availability, and new infrastructure. Let’s take a closer look at each.



Off-site backups are an obvious area where the cloud can be a practical alternative to traditional off-site solutions. For regulatory and disaster recovery purposes most businesses need to maintain an offsite backup. These backups are often still in tape format. Maintaining an offsite storage service is an expense, and there isn’t immediate access to the media. Plus, there is a high rate of failure when performing restores from tape. Moving your offsite backups to the cloud can address these issues. The cloud is immediately available, plus the digital backup is more reliable. Of course, since connection latency is more of an issue with the cloud than it is with on-premise infrastructure, one important consideration for cloud-based backups is the time to backup and to restore. The backup time isn’t so much of an issue because that can be staged. Cloud restores can be sped up by utilizing network and data compression or low-latency connections like ExpressRoute.


DR and HA

One of the places where the cloud makes the most sense is in the area of disaster recovery. The cloud can be a practical and cost effective alternative to a physical disaster recovery site. Establishing and maintaining a physical DR site can be a very expensive undertaking leaving it out of reach for many smaller and medium sized organizations. The cloud can be an affordable alternative for many of these organizations. Various types of technologies like Hyper-V Replica, VMware Replication and various third party products can replicate your on-premise virtual machines to cloud-based IaaS VMs. This enables you to have near-real time VM replicas in the cloud that can be enabled very rapidly in the event of a disaster. This type of solution leverages cloud storage and provides a disaster recovery solution that your normal day-to-day operations do not depend on. The cloud is utilized for off-site storage and possibly a temporary operations center if your on-premise data center fails.


Another closely related area is application availability. Technologies like SQL Server’s AlwaysOn Availability Groups can protect your on-premise or even cloud databases by using cloud-based VMs as asynchronous secondary replicas. For instance, you could set up a SQL Server AlwaysOn Availability Group with synchronous on-premise secondary replicas that provide automatic failover and high availability, and at the same time have asynchronous secondary replicas in Azure. If there were a site failure or a problem with a synchronous replica, the asynchronous replicas in Azure could be manually failed over to with little to no data loss.



Another area where the cloud makes sense is for startups or other smaller businesses that are in need of an infrastructure update. For businesses where there’s no existing infrastructure, or businesses with aging infrastructure that needs to be replaced, buying all new on-premise infrastructure can be a significant expense. Taking advantage of cloud technologies can enable businesses to get up and running without needing to capitalize a lot of their equipment costs.


Cloud technology still isn’t for everyone – perhaps especially not for DBAs and database professionals – but there are still areas where it makes sense to leverage cloud technologies.


The latest innovations in storage technology allow organizations to maximize their investment by getting the most out of their storage systems. The desired outcome of optimally managed data storage is that it helps businesses grow and become more agile. However, despite advances in storage technology, organizations are still experiencing significant network issues and downtime.


The problem lies in the fact that users do not understand how to properly deploy and use the technology. If used correctly, today’s new storage technologies can help an organization grow. But first, IT admins need to know their storage environment inside and out, including understanding things like NAS, SAN, data deduplication, capacity planning, and more.


In my previous blog, I talked about hyper-convergence, open source storage, and software-defined storage. Today, I will discuss a few more storage technologies.


Cloud storage


Cloud storage is essentially data that is stored in the Cloud. This kind of architecture is most useful for organizations that need to access their data from different geographic locations. When data is in the Cloud, the burden of data backup, archival, DR, etc. becomes outsourced. Cloud storage vendors promise data security, speedy deployment, and reliability among other things. They also claim that organizations don’t have to worry about overall storage, which includes purchasing, installing, managing, upgrading, and replacing storage hardware. In addition, with Cloud storage users can access files from anywhere there is Internet connectivity.


Flash storage


The IOPS of the spinning hard disk has not evolved much over the years. However, with the introduction of solid state storage (also known as flash storage), the increase in performance has been exponential. Flash storage is often the best solution for organizations running high-IOPS applications, because it can reduce the latency for those applications, resulting in better performance. Flash storage offers other benefits as well: it consumes less power than other storage options, takes up less space in the data center, and allows more users to access storage simultaneously. Because flash storage tends to be a more expensive option, organizations still use hard disk drives (HDD) for Tier 1, Tier 2, and Tier 3, and reserve flash storage for Tier 0 (high performance) environments.



Object storage


Storage admins are quite familiar with file storage (FS) and how data is accessed from NAS using NFS, CIFS, etc. But object storage is entirely different. It works best with huge amounts of unstructured data that need to be organized. With object storage, there is no concept of a file system; input and output happen via an application program interface (API), which allows for the handling of large quantities of data. Where a file system uses a path to locate a file, object storage locates each object by its identifier and metadata. In cases where you need to frequently access certain data, it's better to go with file storage over object storage. Most people won’t wait several minutes for a Word doc to open, but are likely more patient when pulling a report that involves huge amounts of data.
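As a toy illustration only (this mirrors no real product’s API; the class and field names are invented), the object-storage idea — a flat namespace of IDs, blobs, and metadata reached through put/get calls rather than file paths — can be sketched in Python:

```python
import uuid

class ObjectStore:
    """Toy in-memory object store: no directories, just IDs, blobs, and metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())            # flat namespace, no paths
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id: str):
        return self._objects[object_id]          # (data, metadata) tuple

    def find(self, **criteria):
        """Locate objects by their metadata rather than by a file-system path."""
        return [oid for oid, (_, meta) in self._objects.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

store = ObjectStore()
oid = store.put(b"...raw sensor dump...", source="sensor-42", year=2015)
data, meta = store.get(oid)
print(meta["source"])              # sensor-42
print(store.find(year=2015) == [oid])  # True
```

Notice there is no open-a-directory, walk-a-tree step anywhere; everything is addressed by ID or discovered through metadata, which is exactly what makes this model scale to huge unstructured data sets.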


As you can see, there’s a vast market for storage technology. Before choosing one of the many options, you should ask yourself, “Is this solution right for my storage environment?” You should also consider the following when choosing a storage technology solution:


  • Will it improve my data security?
  • Will it minimize the time it currently takes to access my data?
  • Will it provide options to handle complex data growth?
  • Will there be a positive return on my investment?


  Hopefully this information will help you select the best storage platform for your needs. Let us know what technology you use, or what you look for in storage technology.

Well, if you have been following along with my DBaaS series, you know that we are slowly getting through the different services I mentioned in the first article. Up this week is Amazon Web Services’ (AWS) RDS offering. Of all the offerings we will go over, I think it’s easy to say that Amazon’s is by far the most mature, largely because it was released way back on December 29, 2009, and has a long history of regular release upgrades.


For AWS I didn’t do the performance testing like I did last time with Google’s offering, because lately I’ve also been working on a demo site for a local IT group. So I thought: why not migrate the local MySQL database that I had on my web server to the AWS offering and leave the Apache services local on that server (a remote database instead of a local one)? As you go through things and see the screenshots, that is what I was trying to get working.




It's pretty obvious that AWS has been doing this a while. The process for creating a new database instance is very straightforward. We start by picking what type of database; for the purposes of my project (WordPress) I need RDS, as it offers MySQL.


Next we select the engine type we want to use; in this case I am picking MySQL, then I click Select.


The next thing AWS wants to know is if this is a production database that requires multiple availability zones and/or a provisioned amount of IOPS.


Then we get to the meat of the config.


Everything is pretty much the same as the other offerings: fill out the name of your instance, pick how much horsepower you want, and then put in some credentials.

The advanced settings step lets you pick which VPC you put your DB instance into. VPCs seem to be popping up more and more in the different use cases I’ve run into for AWS. For this purpose I just left it as the default, as I don’t currently have any other VPCs.


Lastly, on the Advanced Settings step, you can also select how often automatic backups and maintenance windows take place. Then click Launch DB Instance to get started.



One of the things that I should have pointed out a while ago, but didn’t because I assumed everyone knew, is that DBaaS instances are basically just standard virtual machines that run inside of your account. The only thing that is different is the more advanced control panel integration with that DBaaS VM.


OK so after a few minutes we can refresh the dashboard and see our new MySQL instance.


There you go! That’s all there is to creating one.




So now that we have our database server created, we can connect to it, but there is a catch. AWS uses firewall rules that block everything except the WAN IP address from which you created the instance. So the first thing we need to do is create a proper rule if your WAN IP isn’t the same as where you will be connecting from. (In my example, my home IP was auto-whitelisted, but the web server that will be connecting to the database is at a colo, so I needed to allow the colo IP to connect.)

To create new firewall rules, you will want to navigate over to the VPC Security Groups area and find the security group that was created for your DB instance. At the bottom you’ll see an Inbound Rules tab, and there you can edit the inbound rules to include the proper IP... in my case, the colo IP.


Once we have that rule in place, I can log in to the web server and try to connect via the MySQL command-line client to verify.




In my example I already had WordPress installed and running, and I just wanted to migrate the database to AWS. So what I did was use mysqldump on the local web server to dump WordPress to a .sql file. Then from the command line I ran the client again, this time telling it to import the SQL dump into the AWS MySQL instance.


This was a new site, so it didn’t take long at all.


Once that was all done, I simply edited my WordPress config file to point at the endpoint displayed on the instance dashboard, along with the username and password I set up previously, and it worked on the first try!




Monitoring your database instance is super easy. Amazon has CloudWatch, a built-in monitoring platform, which can send you emails or other alerts once you set up some triggers. CloudWatch also provides some nice graphs that show pretty much anything you could want.


Here is another shot. This one monitors the actual VM underneath the DBaaS layer.





So Backup is pretty easy ... you configured it when you created your instance. Optionally you can also click Instance Actions and then pick Take Snapshot to manually take a point in time snap.


Restores are pretty much just as easy. As with the other providers, you aren’t really restoring the database so much as creating another database instance at that point in time. To do that, there is a "Restore to point in time" option in the instance actions... which is a little misleading, since you aren’t really restoring... oh well.


If you need some geo-redundancy, you can also create a read replica, which can later be promoted to the active instance should the primary fail. The wizard to do so looks very much like all the other wizards. The only real difference is the "source" field, where you pick which database instance you want to make a replica of.




I know what I think... experience matters. The simple fact that AWS has been doing this basically forever in terms of "cloud" and "as a service" technology, means that they have had a lot of time to work out the bugs and really polish the service.


And to be really honest, I wanted to do a post about the HP Helion service this week. However, the service is still in beta, and while testing it out I hit a snag which actually put a halt on the tests. I’ll share more about my experience with HP support next time, in the article about their service.


Until next time!

Most organizations rely heavily on their data, and managing all that information can be challenging. Data storage comes with a range of complex considerations, including:


  • Rapid business growth creates an increase in stored data.
  • Stored data should be secure, but, at the same time, accessible to all users. This requires significant investments in time and money.
  • Foreseeing a natural or man-made disaster, and ensuring adequate data backups in the event of such occurrences, can be challenging.
  • Dealing with storage architecture complications can be difficult to manage.


And the list goes on.


The storage administrator is tasked with handling these issues. Luckily for the admin, there are new methods of storing data available on the market that can help. Before choosing one of these methods, it is important to ask “is this right for my environment?” To answer this question, you need to know each of the new methods in detail. This article talks about some trends that are now available for data storage.


Software-defined storage


Software-defined storage (SDS) manages data by pushing traffic to appropriate storage pools. It does this independently of hardware, via external programmatic control, usually using an application programming interface (API).


The SDS infrastructure allows data to be stored across different devices, from different manufacturers, or in a centralized location when a request comes from an application for a specific storage service. SDS will precisely match the demand, providing storage services based on the request (capacity, performance, protection, etc.).
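As a rough sketch of that request-matching idea — the pool names and numbers below are invented for illustration, not taken from any real SDS product — a controller’s placement logic might look something like this:

```python
# Hypothetical storage pools an SDS controller might manage.
POOLS = [
    {"name": "archive-sata", "free_gb": 50000, "iops": 500,    "protected": False},
    {"name": "general-sas",  "free_gb": 20000, "iops": 5000,   "protected": True},
    {"name": "tier0-flash",  "free_gb": 2000,  "iops": 100000, "protected": True},
]

def provision(capacity_gb, min_iops, protected=False):
    """Return the least-capable pool that still satisfies the requested
    capacity, performance, and protection -- i.e. match demand precisely
    instead of handing every request the fastest tier."""
    candidates = [p for p in POOLS
                  if p["free_gb"] >= capacity_gb
                  and p["iops"] >= min_iops
                  and (not protected or p["protected"])]
    if not candidates:
        raise ValueError("no pool satisfies the request")
    return min(candidates, key=lambda p: p["iops"])["name"]

print(provision(500, 200))          # archive-sata -- cheap tier is good enough
print(provision(500, 20000, True))  # tier0-flash -- only flash meets the IOPS ask
```

The point of the sketch is the policy, not the numbers: the application states what it needs, and the software layer decides where the data lands.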


Integrated computing platform (ICP)


Servers traditionally run individual VMs, and data storage is supported by network attached storage (NAS), direct attached storage (DAS), or a storage area network (SAN). However, with ICP, or hyper-converged architecture, compute, networking, and storage are combined into one physical solution.


In a nutshell, with ICP or hyper-convergence, we have a single box that can:


  • Provide compute power.
  • Act as a storage pool.
  • Virtualize data.
  • Communicate with most common systems.
  • Serve as a shared resource.


Open source storage solutions


The growing popularity of digital storage has created a path for the development of different open source storage solutions, such as OpenStack and Hadoop. The open source community has developed, or is trying to develop, tools to help organizations store, secure, and manage their data. With open source storage solutions, there is a chance for organizations to reduce their CAPEX and OPEX costs. Open source solutions give companies the option to create in-house developments, which allows them to customize their storage solutions to their needs.


These open source solutions might not be plug-and-play; however, plenty of the pieces will be ready. You can fit the pieces together and create something unique that fits your company’s policies and needs. In this way, open source provides a flexible and cost-effective solution.


These are just a few storage solutions. My next blog talks about flash, Cloud, and object storage.


Keeping Your Secrets Secret

Posted by jgherbert Aug 24, 2015


RADIUS is down and now you can’t log into the core routers. That’s a shame because you’re pretty sure that you know what the problem is, and if you could log in, you could fix it. Thankfully, your devices are configured to fail back to local authentication when RADIUS is unavailable, but what’s the local admin password?




“It’s a security risk to have the same local admin password on every device, especially since you haven’t changed that password in three years,” said the handsome flaxen-haired security consultant. “So what we’re going to do,” he mused, pausing to slide on his sunglasses, “is to change them all. Every device gets its own unique password.”



After much searching of your hard drive, you have finally found your copy of the encrypted file where the consultant stored all the new local admin passwords.



You’ve tried all the passwords you can think of but none of them are unlocking the file. You’re now searching through your email archives from last year in the hopes that maybe somebody sent it out to the team.



An Unrealistic Scenario?


That would never happen to you, right? Managing passwords is one of the dirty little tasks that server and network administrators have to do, and I’ve seen it handled in a few ways over the years including:


  • A passworded Excel file;
  • A shared PasswordSafe file;
  • A plain text file;
  • A wiki page with all the passwords;
  • An email sent out to the team with the latest password last time it was changed;
  • An extremely secure commercial “password vault” system requiring two user passwords to be entered in order to obtain access to the root passwords;
  • Written on the whiteboard which is on the wall by the engineering team, up in the top right hand corner, right above the team’s tally for “ID10T” and “PEBCAK” errors;
  • Nobody knows what the passwords actually are any more.


So what’s the right solution? Is a commercial product the best idea, or is that overkill? Some of the methods I listed above are perhaps obviously inappropriate or insecure. For those with good encryption capabilities, a password will be needed to access the data, but how do you remember that password? Having a single shared file can also be a problem in terms of updates because users inevitably take a copy of the file and keep it on their local system, and won’t know when the main copy of the file has been changed.


Maybe putting the file in a secured file share is an answer, or using a commercial tool that can use Active Directory to authenticate. That way, the emergency credential store can be accessed using the password you’re likely using every day, plus you gain the option to create an audit trail of access to the data. Assuming, of course, you can still authenticate against AD?


What Do You Do?


Right now I’m leaning towards a secured file share, as I see these advantages:


  • the file storage can be encrypted;
  • access can be audited;
  • access can be easily restricted by AD group;
  • it’s one central copy of the data;
  • it’s (virtually) free.


But maybe that’s the wrong decision. What do you do in your company, and do you think it’s the right way to store the passwords? I’d appreciate your input and suggestions.

In previous posts, I’ve explored different aspects of orchestration concerns at the management level for application deployment, particularly in hybrid environments, and what to seek and what to avoid in orchestration and monitoring in cloud environments. I believe one of the most critical pieces of information, as well as one of the most elegant tasks to be addressed in the category of orchestration, is disaster recovery.


While backup and restore are not really the key goals of a disaster recovery environment, in many cases the considerations of backup and data integrity are part and parcel of a solid DR environment.


When incorporating cloud architectures into what began as a simple backup/recovery environment, we find that the geographic dispersal of locations is both a blessing and a curse.


As a blessing, the ability to accommodate more than one data center and full replication means that, with the proper considerations, an organization can have a completely replicated environment able to support anywhere from a segment of its users to all of its applications and users in the event of a Katrina- or Sandy-like disaster. When an organization has this in place, we’re not just discussing restoring files; we’re discussing a true disaster recovery event, including uptime and functionality concerns.


As a curse, technical challenges in terms of replication, cache coherency, bandwidth, and all of compute, storage, and network require consideration. While some issues can be resolved by sharing infrastructure in hosted environments, some significant investment must be made and weighed against the potential loss in business functionality should that business face catastrophic data and functionality loss.


For the purposes of this thought, let’s assume that dual data centers are in place, with equivalent hardware to support a fully mirrored environment. The orchestration level, replication software, lifecycle management of these replications, ease of use at the management level, and insight into physical and virtual architectures are all mission critical. Where do you go as an administrator to ensure that these pieces are all in place? Can you incorporate an appropriate backup methodology into your disaster recovery implementation? How about a tape-out function for long-term archive?


In my experience, most organizations are attempting to retrofit their new-world solution to their old-world strategies, and with the exception of very few older solutions, these functions cannot be incorporated into newer paradigms.

If I were seeking a DR solution based on already existing infrastructure, including but not limited to an existing backup scenario, I’d want to find the easiest, most global solution that allows my backup solution to be incorporated. Ideally, I’d also like to include technologies such as global centralized dedupe, lifecycle management, a single management interface, virtual and physical server backup, and potentially long-term archival storage (potentially in a tape-based archive) in my full-scope solution. Do these exist? I’ve seen a couple of solutions that feel as if they’d meet my goals. What are your experiences?


I feel that my next post should have something to do with Converged Architectures. What are your thoughts?

The recent recall of 1.4 million Fiat Chrysler cars over a remote hack vulnerability is just another patch management headache waiting to happen—only on a larger scale and more frequently. But that’s the future. Let’s talk about the current problems with patch management in organizations, big and small. In a recent SolarWinds® security survey, 62% of the respondents admitted to still using time-consuming, manual patch management processes.


Does this mean IT managers are not giving due attention to keeping their servers and workstations up-to-date? NO. Of course, security managers and system administrators know how much of a pain it is to have a 'situation' on their hands due to a bunch of unpatched, vulnerable machines in their environment. It’s never fun to be in a fire fight!


However, having a manual or incomplete patch management process in place is equivalent to having nothing at all when it comes to deploying patches, as vulnerabilities arise from:

  • Potentially unwanted programs
  • Malware
  • Unsupported software
  • Newer threats (check US-CERT or ENISA)


As a security manager or system administrator, what do you think are the common challenges that come in the way of realizing an effective patch management process? Here are a few common issues:

  • Inconsistent 3rd-party patching using the existing Microsoft WSUS and SCCM solutions
  • Complexity in addressing compliance and audit requirements
  • Complexity in customizing patches and packages for user and computer groups
  • Administrative overhead due to an increase in BYOD usage in the environment


Given the frequency and scale of cyber-attacks and data compromises, having a thorough patch management process is a must-have—not a nice-to-have. But how fast can you put one together?


If you’re already managing patch deployments in your organization with WSUS, you’re covered for Microsoft® applications. You just have to implement a process for automating the patching of non-Microsoft (or 3rd-party) applications like Adobe®, Java™, etc.


WSUS also has its own limitations, like limited hardware inventory visibility and an inability to provide software inventory information. Having inventory information is crucial when you’re formulating a comprehensive patch management strategy.


The strategy should accommodate flexible and fully-customizable patch operations so the regular business activities don’t feel the impact. Or, you can count on having an ‘oh-dear’ moment, complete with a blank stare as you wonder “Why is this server rebooting at the wrong time and hurting my business?”

There are just too many pieces that must fall into place for an effective patch management strategy. If you don’t have one, you might begin by asking yourself…

  1. How am I planning to look out for newer security threats, and regular hot-fixes/patches?
  2. How will I assess the impact to my systems/business if I manage to identify the threats?
  3. How will I prioritize the patches that may affect my systems right away?
  4. What’s the back-up/restore plan?
  5. How will I test the patches before rolling them out to production systems?
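The prioritization question above (point 3) amounts to a triage rule: rank pending patches by severity and by exposure in your environment. The following Python sketch illustrates one such rule; the patch IDs, CVSS scores, and host counts are all hypothetical, not taken from any real advisory feed:

```python
# Hypothetical patch triage: rank pending patches by CVSS severity
# and by how many systems in the environment are affected.
# All data below is illustrative, not from a real advisory feed.

patches = [
    {"id": "KB-001", "cvss": 9.8, "affected_hosts": 120, "reboot": True},
    {"id": "KB-002", "cvss": 5.4, "affected_hosts": 300, "reboot": False},
    {"id": "KB-003", "cvss": 7.5, "affected_hosts": 45,  "reboot": True},
]

def triage_score(p):
    # Weight severity most heavily, then exposure (affected host count).
    return p["cvss"] * 10 + p["affected_hosts"] / 10

ranked = sorted(patches, key=triage_score, reverse=True)
for p in ranked:
    print(p["id"], round(triage_score(p), 1))
```

The exact weights are a judgment call for your environment; the point is that the ranking is explicit and repeatable rather than ad hoc.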


Patch management should not become a fire-fighting exercise. But even if it does, a clearly defined process will minimize the impact of the security threat.


Effective patch management is simply good security practice: it protects IT systems from security threats, keeps you compliant, and helps prevent business downtime and data compromises.


I’ve recently written a couple of articles about cloud databases, and several common responses have emerged. First, it’s clear (and I can understand why) that moving your databases to the cloud is not something most IT and database professionals are keen on doing. More interestingly, two common concerns kept coming up regarding cloud database implementations, and I’ll tackle both in this article: the first is security and the second is latency.


Security Concerns

The first and foremost concern is security. Having toured the facilities of several cloud hosting vendors, I can definitely say their physical security far exceeds the security I’ve seen in typical IT environments. While I’m sure there are very secure private IT infrastructures, I’ve never seen private IT security equal to what cloud vendors offer. For instance, last year I toured two facilities run by different cloud providers. One of these facilities even had ex-military armed guards at check-in. Inside, there were man-traps at every door, where the first set of doors must close before the second set opens. The actual computing hardware was located inside locked cages – sometimes two nested cages that required different security access codes to reach the real hardware behind the cloud. In addition, the electrical feeds came from two different providers. This far exceeds the levels of security and redundancy that most businesses provide. However, I realize that physical security isn’t the only concern. You also have to trust that the cloud vendor will respect your privacy – a concern that doesn’t arise when you control your own data security.


Minimizing Latency

The next biggest concern readers have expressed is latency in cloud applications. The primary concern isn’t latency caused by a lack of compute power or storage performance; the bigger concern is network latency. If everything is in the cloud, then network latency isn’t an issue you really need to worry about. For instance, if your SQL Server database is primarily the backend for a web application, and both the database and the application live in Azure, network latency really isn’t an issue. In that case, you don’t have to worry about the latency the public Internet can introduce, because the database and the application never have to send data across the Internet. But what if you have local processing that depends on a cloud-based SQL Server database? In that scenario, Internet latency really can be an issue. While Azure-to-Azure connections between the application and the database are not subject to Internet latency, connections from Azure or another cloud to on-premises systems clearly can be. The Internet is a public medium, and you can’t be guaranteed the bandwidth will be there when you need it.
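The reason per-connection latency matters so much is that it compounds: an application that makes many sequential round trips to the database pays the per-trip latency every time. A back-of-the-envelope illustration in Python follows; the latency figures and round-trip count are assumptions for illustration only, meant to be roughly in the range of an in-cloud hop, a dedicated private link, and a typical public Internet path:

```python
# Rough model of how per-round-trip network latency compounds for a
# chatty application issuing many sequential database calls.
# All latency figures are illustrative assumptions, not measurements.

def total_latency_ms(round_trips, latency_ms_per_trip):
    # Sequential calls: total network wait is trips times per-trip latency.
    return round_trips * latency_ms_per_trip

round_trips = 500  # sequential queries in one hypothetical user transaction

same_cloud   = total_latency_ms(round_trips, 0.5)   # app and DB in the same cloud
private_link = total_latency_ms(round_trips, 2.0)   # dedicated private connection
public_net   = total_latency_ms(round_trips, 40.0)  # typical public Internet path

print(f"same cloud:   {same_cloud / 1000:.2f} s")
print(f"private link: {private_link / 1000:.2f} s")
print(f"public net:   {public_net / 1000:.2f} s")
```

Under these assumed numbers, the same transaction that takes a fraction of a second inside the cloud stretches to many seconds over the public Internet, which is exactly why chatty workloads against a remote database hurt.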


Fortunately, there are alternatives to using the public Internet to access your cloud databases. Both Amazon and Microsoft Azure support private, high-speed, on-premises-to-cloud connection technologies: Amazon calls it Direct Connect, while Microsoft Azure calls it ExpressRoute. Both are essentially private cloud connections that offer more reliability, faster speeds, lower latencies, and higher security than standard Internet connections. Essentially, they connect your private network directly to your cloud provider of choice without crossing the public Internet. Noam Shendar, Vice President of Business Development at Zadara Storage, stated that ExpressRoute provided one-to-two-millisecond response times for Azure access. Very fast indeed. These low-latency alternatives to the public Internet can help overcome the latency hurdles for cloud-based databases.


The Bottom Line

Cloud vendors have typically implemented security measures that exceed those of most IT organizations. However, it really boils down to trust: you need to trust that the cloud vendor’s personnel will secure your data and not permit, or accidentally expose, your data to unauthorized access. Next, while the Internet may be an unreliable medium, high-performance alternatives like Direct Connect and ExpressRoute are available. Both can provide very fast on-premises-to-cloud database connections – at a price. To find out more, check out AWS Direct Connect and Azure ExpressRoute.

