
Geek Speak


One of the hottest technologies in networking right now is Software Defined WAN (SD WAN), and pretty much every solution touts Zero Touch Deployment (ZTD) as a key feature. With ZTD, a new device is connected to the LAN and WAN at a site and magically becomes part of the corporate WAN, with no need to pre-configure the device, ship a flash drive to the site, or have smart hands on site configuring it.

It seems then that SD WAN has pretty much nailed the concept of “plug and play” configuration; but what does this have to do with Network Management?


Managing New Devices


We have plenty of ways to manage the addition of new devices to our networks. Perhaps the most obvious example is DHCP, which alleviates the need to manually configure IP addresses, masks, gateways, and DNS servers on new endpoints. Server administrators have largely addressed the wider issue of initializing new hosts: DHCP and PXE can bring up an uninitialized server (or VM) on the network and have it automatically install a specified OS image. Beyond that, tools like Puppet and Chef pre-installed on the base image allow services to be installed and configured automatically on the OS, and can be used for configuration management and validation from then on.


It would be nice if adding a new device to the network were that easy. Can our management systems do that for us? Sure, we can use DHCP to give a device a management IP address, but what about taking it to the next step by installing software and configuring it?


Some Existing Technologies



You could argue that Cisco is the granddaddy of automatic installation, having supported tftpboot for software images and configuration files for many years now. But even ignoring the unreliability of TFTP for data transfers, it has been a distinctly flaky system, and it requires configuration on the end system to get the desired results.




IP phones have, in general, addressed this issue a little more intelligently, using DHCP options to provide them with important information. Cisco uses DHCP Option 150 to tell the phone where to download its configuration file, which for a new phone means starting the registration process with Call Manager. This is a little odd, because DHCP already has Option 66 to define a TFTP server, but Cisco is not alone. Avaya, for example, can use Option 176 to override Option 66 and provide a list of information to the client, including the Media Gateway server IP and port. ShoreTel avoids TFTP altogether, and provides the ShoreTel Director IP along with other information via DHCP Options 155 and 156.
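To make this concrete, here is roughly how those vendor options might look in an ISC dhcpd configuration. This is a sketch only: the IP addresses and the option value formats are illustrative assumptions, not copied from any vendor's documentation.

```conf
# ISC dhcpd requires a definition for option codes outside the standard set.
option voip-tftp-150 code 150 = array of ip-address;  # Cisco: TFTP server(s)
option avaya-176     code 176 = text;                 # Avaya: option string
option shoretel-156  code 156 = text;                 # ShoreTel: config string

subnet 10.10.20.0 netmask 255.255.255.0 {
  range 10.10.20.100 10.10.20.200;
  option tftp-server-name "10.10.20.5";               # standard Option 66
  option voip-tftp-150 10.10.20.5;                    # Cisco phones
  option avaya-176 "MCIPADD=10.10.20.6,MCPORT=1719";  # Avaya override of 66
  option shoretel-156 "ftpservers=10.10.20.7";        # ShoreTel Director info
}
```

Note how each vendor crams its own key=value conventions into a plain text option, which is exactly the fragmentation discussed below.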


Looking at this, I see two clear (but common) elements at play:


  1. DHCP is incredibly flexible. It can deliver a wide range of information to end clients, helping them determine where to boot from, what image to run, and what configuration file to load.
  2. DHCP is incredibly flexible. This means that vendors have used that flexibility to deploy their applications in whatever way best suited them, and they do not seem to share a common approach. Equally interesting, some of the DHCP options being used conflict with existing, formally assigned definitions.


Network Device Management



I’d argue that the reason PXE has been so successful in the server world is that it’s a single standard implemented consistently regardless of vendor. The Open Network Install Environment (ONIE), contributed to the open source community by Cumulus Networks, is trying to do the same thing for network devices; a typical use case is the deployment of an uninitialized bare metal switch triggering the download of the customer’s chosen network operating system. Once the switch boots the new OS, though, we’re typically back to vendor-specific mechanisms for obtaining configuration.


Puppet and Chef


Puppet and Chef are a great concept for initial configuration, and Juniper’s inclusion of a Puppet agent in Junos on some of their platforms is certainly a nice touch. But between limited feature support and the agent’s periodic polling model (the default interval is 30 minutes), it rapidly becomes obvious that Puppet may not be the best “real time” device management solution.


One DHCP to Rule Them All


So here’s a straw man that you can set fire to as needed. What if (stop laughing at the back) we decided to use DHCP to provision a new device with everything it needs to get on the network and be manageable? Let’s set up DHCP options to send:


  • IP / Mask / Default Gateway
  • Optional directive to make the IP your permanent IP (and not use DHCP on the next boot)
  • Management network host routes as needed
  • Device hostname and domain
  • Name servers (DNS)
  • SNMP RO strings
  • A directive (sounds like Puppet here) to enable SSH and generate keys, if needed.
  • A URL with which to register itself with the network management system
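Sketched as an ISC dhcpd configuration, the straw man might look something like this. Everything here is hypothetical: the "ztd" option space, codes, values, and URLs are invented for illustration.

```conf
# Hypothetical vendor-neutral option space for zero-touch provisioning.
option space ztd;
option ztd.make-ip-static   code 1 = boolean;  # keep this IP after first boot
option ztd.snmp-ro          code 2 = text;
option ztd.enable-ssh       code 3 = boolean;  # generate keys if needed
option ztd.nms-register-url code 4 = text;
option ztd.boot-image-url   code 5 = text;     # load only if image differs
option ztd.config-url       code 6 = text;

subnet 10.20.30.0 netmask 255.255.255.0 {
  range 10.20.30.50 10.20.30.99;
  option domain-name-servers 10.20.30.2;
  vendor-option-space ztd;                     # send the custom options
  option ztd.make-ip-static true;
  option ztd.snmp-ro "n0tpublic";
  option ztd.enable-ssh true;
  option ztd.nms-register-url "https://nms.example.com/register";
  option ztd.boot-image-url "https://images.example.com/platform/current.bin";
  option ztd.config-url "https://config.example.com/default.conf";
}
```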


This much would get the device on the network and make it manageable using the NMS. Maybe we take it a step further:


  • Like PXE, a host to ask for your boot image (include logic to say “if it differs from what you already have”), so that a new device will load the standard image for that platform automatically.
  • A path to a configuration file perhaps?


If this were supported across multiple vendors, in theory a device could be connected, would add itself to the NMS (which would have the right ACLs to allow it), would run the code we wanted on it, and would load a specific or default configuration from a path we pointed to. Configuration from that point onward would have to use some other mechanism, but at least we’d achieve the goal of ZTD without inventing an entirely new protocol. All we need on our side is a script that maps device properties to a management MAC address (ideally), or that automatically provisions devices based on vendor/device type and location.
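That "script on our side" could be as simple as a lookup table keyed on management MAC, falling back to a vendor/device-type profile for unknown devices. A minimal Python sketch; the device records, image names, and paths are all invented for illustration:

```python
# Map a management MAC address to provisioning properties, falling back
# to a default profile for the vendor/device type if the MAC is unknown.
DEVICES = {
    "00:1b:2c:3d:4e:5f": {"hostname": "core-rtr-01",
                          "image": "ios-xe-17.9.bin",
                          "config": "configs/core-rtr-01.conf"},
}

DEFAULTS = {
    ("cisco", "switch"): {"image": "ios-xe-17.9.bin",
                          "config": "configs/switch-base.conf"},
}

def provision(mac: str, vendor: str = "", device_type: str = ""):
    """Return provisioning properties for a device: by MAC if known,
    otherwise from the vendor/device-type default profile."""
    mac = mac.lower()
    if mac in DEVICES:
        return DEVICES[mac]
    profile = DEFAULTS.get((vendor.lower(), device_type.lower()))
    if profile is None:
        raise LookupError(f"no provisioning profile for {mac}")
    # Auto-generate a hostname from the MAC for unknown-but-expected devices.
    return {"hostname": f"auto-{mac.replace(':', '')[-6:]}", **profile}

print(provision("00:1B:2C:3D:4E:5F")["hostname"])                 # known device
print(provision("aa:bb:cc:dd:ee:ff", "cisco", "switch")["hostname"])
```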


Genius or Crazy?


Would this help? Would it be good if a device could register itself automatically in NMS? What else would you need? Let’s either build this up or tear it down; is this the best idea in the world or the most stupid?


I look forward to your opinions. And hey, if this turns out to be a world-changing idea and takes off, I’m going to call it TugBoot or something like that. It’s only fair.


Please let me know.

What does this CI concept mean? A converged architecture is, to put it simply, a purpose-built system combining server, storage, and network, designed to ease the build of a centralized environment for the data center. In other words, servers and storage are combined with networking so that, with very minor configuration after plugging the equipment in, one can manage the environment as a whole.


In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out appropriate sales opportunities wherein an architecture like this would be viable, and qualify our prospects for what was called the vBlock. These included Cisco switches (the just-released Nexus line), Cisco servers (the also freshly released UCS blades), and EMC storage. These vBlocks were very constrained in their sizing, and very dedicated to housing virtualized environments. VMware was critical to the entire infrastructure; for these systems to be validated by the federation, all workloads on them needed to be virtualized. The key, and the reason this was more significant than what customers may already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, IOS for the switch layer, and storage management. This was where the magic occurred.


Then came a number of competitors. NetApp released the FlexPod in response, and the FlexPod was just that: more flexible. Workloads were not required to be exclusively virtual; storage in this case would be NetApp; and, importantly, the customer could configure these less rigidly around sizing requirements and build them up further as needed.


There were other companies, most notably Hewlett Packard and IBM that built alternative solutions, but the vBlock and FlexPod were really the main players.


After a bit of time, a new category was created, called HyperConvergence. The early players in this field were Nutanix and SimpliVity, both of which built much smaller architectures; these are called hyper-converged, reasonably enough. They were originally seen as entry points for organizations looking to virtualize from scratch, or as point solutions for new projects such as VDI. They have since grown in both technology and function to the point where companies today base their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM and replication strategies for DR, SimpliVity has other compelling pieces, such as its storage deduplication model and replication, that make a strong case for pursuing it.

There are also many new players in the HyperConverged marketplace, making it the fastest growing segment of the market right now. Hybrid cloud models are making these types of approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for new players in the game like Pivot3, Scale Computing, and IdealStor, as well as bigger companies like Hewlett Packard, and the EVO approach from VMware, with EVO: RAIL and EVO: RACK getting their official launch this week at VMworld.

“After having spent the last two weeks in Asia I find myself sitting in a hotel room in Tokyo pondering something. I delivered a few talks in Singapore and in Manila and was struck by the fact that we’re still talking about SQL injection as a problem”. - Dave Lewis, CSO Online, July 31, 2015





The following story is based on an actual event.


A Chief Security Officer (CSO) called a junior InfoSec engineer (ENG) after 5PM.


CSO: “I am looking for your manager. Our main website was hacked…”

ENG: “He left already. No, I heard that people complained the website was slow this afternoon. The web team is working on it.”

CSO: “I am telling you that our website was hacked! There are garbage records in the database behind the website. The DBAs are trying to clean up the database. We were hacked by SQL injection!”

ENG: “…”

CSO: “Call your boss now! Ask him to turn around and go back to the office immediately!”


Several teams at that poor company spent the whole night cleaning up the mess. They had to restore the database to bring the main website back.



In my last Thwack Ambassador post, OMG! My Website Got Hacked!, I summarized the last four OWASP Top 10 lists since 2004. Injection in general, and SQL injection in particular, was number 1 on the OWASP Top 10 in 2010 and 2013. I predict that SQL injection will still be number 1 in the upcoming OWASP Top 10 report in 2016. Check out this list of SQL injection incidents. Do you notice the increasing number of incidents in 2014 and 2015?


It’s another Christmas Day. In Phrack Magazine issue 54, December 25, 1998, there was an article on “piggyback SQL commands” written by Jeff Forristal under the pseudonym rain.forest.puppy. Folks, 1998 was the year in which the SQL injection vulnerability was first publicly mentioned, although it had probably existed long before then. Almost 17 years have passed since Jeff Forristal wrote his article “ODBC and MS SQL server 6.5” in Phrack Magazine, and many companies are still hit hard by SQL injection attacks today.


If you want to know more about the technical details of SQL injection, I recommend reading Troy Hunt’s "Everything you wanted to know about SQL injection (but were afraid to ask)". Then you’ll appreciate the XKCD comic, Exploits of a Mom, that I included at the top of this post.


There are a few solutions to combat SQL injection; we may actually need all solutions combined to fight against SQL injection.


DATA SANITIZATION. Right. All user input to websites must be filtered. If you expect a phone number in an input field, make sure you receive a phone number and nothing else.
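For example, the phone-number field can be validated by allow-listing exactly what you expect. A Python sketch; the accepted format (10 to 15 digits, common separators allowed) is just an assumption for illustration:

```python
import re

# Allow-list: after stripping common separators, accept only an optional
# leading + followed by 10 to 15 digits, and nothing else.
PHONE_RE = re.compile(r"^\+?\d{10,15}$")

def is_valid_phone(raw: str) -> bool:
    """Return True only if the input looks like a phone number."""
    cleaned = re.sub(r"[ \-().]", "", raw)
    return bool(PHONE_RE.match(cleaned))

print(is_valid_phone("+1 (555) 123-4567"))            # True
print(is_valid_phone("555'; DROP TABLE users; --"))   # False
```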


SQL DEFENSES. As OWASP recommends, use parameterized statements, use stored procedures, escape all user-supplied input, and enforce least database privilege. Don’t forget to log all database calls. And not least, protect your database servers.
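Parameterized statements are the core defense here: the query text and the user data travel separately, so the data can never be parsed as SQL. A minimal sketch using Python's built-in sqlite3 module; the table and rows are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # classic injection payload

# VULNERABLE: user input is concatenated into the SQL text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # both rows leak

# SAFE: the ? placeholder sends the input as data, not SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # no rows match
```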


APPLICATION FIREWALL AND IPS. I agree that it’s not easy to customize security rules to fit your applications. But if you invest in AFW and/or IPS, they will be your first line of defense. Some vendors offer IDS-like, application behavioral model products to detect and block SQL injection attacks.


FINDING VULNERABILITIES AHEAD OF HACKERS. Perform constant security assessments and penetration tests of your web applications, both internal and internet-facing. Also, common-sense wisdom: patch your web servers and database servers.


EDUCATION. EDUCATION. EDUCATION. Train your developers, DBAs, application owners, etc. to have a better understanding of information security. It will benefit your company to train some white-hat hackers on different teams. Troy Hunt made a series of FREE videos for Pluralsight in 2013, Hack Yourself First: How to go on the Cyber-Offense. Troy made it clear in the introduction that the series is for web developers. You don’t have to log in or register; just click on the orange play icons to launch the videos.



Do you have a story of an SQL injection attack to share? You may not be able to share your own story, but you can share the stories you’ve heard. Do you think it’s hard to guard against SQL injection attacks, and that’s why even many Fortune 500 companies still suffer from these threats? How do you protect your web applications and database servers from SQL injection?

In the past few articles I’ve been covering the issues around moving your database from on-premise servers to the cloud, and there are definitely a lot of issues to be concerned about. In my last article I dug into latency and security, but there are a host of other concerns as well, including the type of cloud implementation (IaaS, PaaS, DBaaS), performance, geographical ownership of data, and the last-mile problem. While it’s clear that the cloud is becoming more popular, it’s also clearly not an inevitable path for everyone. Even so, you shouldn’t necessarily seal the cloud off in a box and forget it for the next five years. Cloud technologies have evolved quickly, and there are places where the cloud can be a benefit, even to DBAs and IT pros who have no intention of moving to the cloud. In this article I’ll tackle the cloud database issue from another angle: where does using the cloud make sense? There are a few places: disaster recovery (DR), availability, and new infrastructure. Let’s take a closer look at each.



Off-site backups are an obvious area where the cloud can be a practical alternative to traditional off-site solutions. For regulatory and disaster recovery purposes, most businesses need to maintain an offsite backup, and these backups are often still on tape. Maintaining an offsite storage service is an expense, there isn’t immediate access to the media, and there is a high rate of failure when performing restores from tape. Moving your offsite backups to the cloud can address these issues: the cloud is immediately available, and the digital backup is more reliable. Of course, since connection latency is more of an issue with the cloud than with on-premise infrastructure, one important consideration for cloud-based backups is the time to back up and to restore. The backup time isn’t so much of an issue, because backups can be staged. Cloud restores can be sped up by using network and data compression or low-latency connections like ExpressRoute.


DR and HA

One of the places where the cloud makes the most sense is in the area of disaster recovery. The cloud can be a practical and cost effective alternative to a physical disaster recovery site. Establishing and maintaining a physical DR site can be a very expensive undertaking leaving it out of reach for many smaller and medium sized organizations. The cloud can be an affordable alternative for many of these organizations. Various types of technologies like Hyper-V Replica, VMware Replication and various third party products can replicate your on-premise virtual machines to cloud-based IaaS VMs. This enables you to have near-real time VM replicas in the cloud that can be enabled very rapidly in the event of a disaster. This type of solution leverages cloud storage and provides a disaster recovery solution that your normal day-to-day operations do not depend on. The cloud is utilized for off-site storage and possibly a temporary operations center if your on-premise data center fails.


Another closely related area is application availability. Technologies like SQL Server’s AlwaysOn Availability Groups can protect your on-premise or even cloud databases by using cloud-based VMs as asynchronous secondary replicas. For instance, you could set up a SQL Server AlwaysOn Availability Group with synchronous on-premise secondary replicas providing automatic failover and high availability, while also having asynchronous secondary replicas in Azure. If there were a site failure or a problem with a synchronous replica, the asynchronous replicas in Azure could be manually failed over to with little to no data loss.
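The manual failover step described above boils down to a single T-SQL command run on the Azure replica. A sketch: the availability group name is invented, and the forced-failover form shown here is the one required for an asynchronous replica, precisely because it may be slightly behind (hence the explicit acknowledgement of possible data loss):

```sql
-- Run on the asynchronous secondary replica in Azure
-- after the on-premise site has failed.
ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```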



Another area where the cloud makes sense is for startups or other smaller businesses in need of an infrastructure update. For businesses with no existing infrastructure, or with aging infrastructure that needs to be replaced, buying all-new on-premise equipment can be a significant expense. Taking advantage of cloud technologies can enable these businesses to get up and running without needing to capitalize a lot of their equipment costs.


Cloud technology still isn’t for everyone, perhaps especially not for DBAs and database professionals, but there are still areas where it makes sense to leverage cloud technologies.


The latest innovations in storage technology allow organizations to maximize their investment by getting the most out of their storage systems. The desired outcome of optimally managed data storage is that it helps businesses grow and become more agile. However, despite advances in storage technology, organizations are still experiencing significant network issues and downtime.


The problem lies in the fact that users do not understand how to properly deploy and use the technology. If used correctly, today’s new storage technologies can help an organization grow. But first, IT admins need to know their storage environment inside and out, including understanding things like NAS, SAN, data deduplication, capacity planning, and more.


In my previous blog, I talked about hyper-convergence, open source storage, and software-defined storage. Today, I will discuss a few more storage technologies.


Cloud storage


Cloud storage is, simply put, data stored on infrastructure operated by an outside provider and accessed over the network. This kind of architecture is most useful for organizations that need to access their data from different geographic locations. When data is in the Cloud, the burden of data backup, archival, DR, etc. is outsourced. Cloud storage vendors promise data security, speedy deployment, and reliability, among other things. They also claim that organizations don’t have to worry about overall storage management, including purchasing, installing, managing, upgrading, and replacing storage hardware. In addition, with Cloud storage, users can access files from anywhere there is Internet connectivity.


Flash storage


The IOPS of the spinning hard disk has not evolved much over the years. With the introduction of solid state storage (also known as flash storage), however, the increase in performance has been exponential. Flash storage is often the best solution for organizations running high-IOPS applications, because it reduces latency for those applications, resulting in better performance. Flash storage offers other benefits as well: it consumes less power than other storage options, takes up less space in the data center, and allows more users to access storage simultaneously. Because flash storage tends to be a more expensive option, organizations still use hard disk drives (HDD) for Tiers 1, 2, and 3, and reserve flash storage for Tier 0 (high-performance) environments.



Object storage


Storage admins are quite familiar with file storage and how data is accessed from NAS using NFS, CIFS, etc. But object storage is entirely different. It works best with huge amounts of unstructured data that need to be organized. With object storage there is no concept of a file system; input and output happen via an application program interface (API), which allows huge quantities of data to be handled. And where a file system uses a path to locate a file, object storage relies on each object's metadata. In cases where you need to access certain data frequently, it's better to go with file storage over object storage: most people won’t wait several minutes for a Word doc to open, but are likely more patient when pulling a report that involves huge amounts of data.
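The difference is easiest to see in miniature: an object store is essentially a flat mapping from keys to data plus metadata, accessed through API calls rather than a directory tree. A toy Python sketch; the method names are invented, loosely modeled on S3-style stores:

```python
class ToyObjectStore:
    """Flat key -> (data, metadata) mapping; no directories, no file handles."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes, **metadata):
        # The key is an opaque name, not a path; "/" has no special meaning.
        self._objects[key] = (data, metadata)

    def get(self, key: str) -> bytes:
        return self._objects[key][0]

    def head(self, key: str) -> dict:
        # Metadata is first-class: you can query it without touching the data.
        return self._objects[key][1]

store = ToyObjectStore()
store.put("2015/reports/q3.csv", b"a,b\n1,2\n",
          content_type="text/csv", owner="finance")
print(store.head("2015/reports/q3.csv")["owner"])   # finance
```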


As you can see, there’s a vast market for storage technology. Before choosing one of the many options, you should ask yourself, “Is this solution right for my storage environment?” You should also consider the following when choosing a storage technology solution:


  • Will it improve my data security?
  • Will it minimize the time it currently takes to access my data?
  • Will it provide options to handle complex data growth?
  • Will there be a positive return on my investment?


Hopefully this information will help you select the best storage platform for your needs. Let us know what technology you use, or what you look for in storage technology.

Well, if you have been following along with my DBaaS series, you know that we are slowly getting through the different services I mentioned in the first article. Up this week is Amazon Web Services' (AWS) RDS offering. Of all the offerings we will go over, I think it's easy to say that Amazon's is by far the most mature, mainly because it was released way back on December 29, 2009, and has a long history of regular release upgrades.


For AWS I didn't do the performance testing I did last time with Google's offering, because lately I've also been working on a demo site for a local IT group. So I thought: why not migrate the local MySQL database from my web server to the AWS offering, and leave the Apache services local on that server (so, a remote database instead of a local one)? As I go through things and you see the screenshots, that is what I was trying to get working.




It's pretty obvious that AWS has been doing this a while. The process for creating a new database instance is very straightforward. We start by picking what type of database; for the purposes of my project (WordPress) I need RDS, as it offers MySQL.


Next we select the engine type we want to use, in this case MySQL, and click Select.


The next thing AWS wants to know is if this is a production database that requires multiple availability zones and / or a provisioned amount of IOPS.


Then we get to the meat of the config.


Everything is pretty much the same as the other offerings: fill out the name of your instance, pick how much horsepower you want, and then put in some credentials.

The advanced settings step lets you pick which VPC you put your DB instance into. VPCs seem to be popping up more and more in the different use cases I've run into for AWS. For this purpose I just left it as the default, as I don't currently have any other VPCs.


Lastly, on the Advanced Settings step you can also select how often automatic backups and maintenance windows take place. Then click Launch DB Instance to get started.



One of the things that I should have pointed out a while ago, but didn't because I assumed everyone knew, is that DBaaS instances are basically just standard virtual machines running inside your account. The only difference is that there is some more advanced control panel integration with that DBaaS VM.


OK so after a few minutes we can refresh the dashboard and see our new MySQL instance.


There you go! That's all there is to creating the instance.




So now that we have our database server created, we can connect to it, but there is a catch. AWS uses firewall rules that block everything except the WAN IP address from which you created the instance. So the first thing we need to do is create a proper rule if your WAN IP isn't the same as where you will be connecting from. (In my example, my home IP was auto-whitelisted, but the web server that will be connecting to the database is at a colo, so I needed to allow the colo IP to connect.)

To create new firewall rules, navigate over to the VPC Security Groups area and find the security group that was created for your DB instance. At the bottom you'll see an Inbound Rules tab, and there you can edit the inbound rules to include the proper IP... in my case, the colo's IP.


Once we have that rule in place I can login to the web server and try to connect via the MySQL command line client to verify.




In my example I already had WordPress installed and running, and I just wanted to migrate the database to AWS. So what I did was use mysqldump on the local web server to dump WordPress to a .sql file. Then from the command line I ran the client again, this time telling it to import the SQL database into the AWS MySQL instance.


This was a new site, so it didn't take long at all.


Once that was all done, I simply edited my WordPress config file to point at the endpoint displayed on the instance dashboard, along with the username and password I set up previously, and it worked on the first try!




Monitoring your database instance is super easy. Amazon has CloudWatch, a monitoring platform built in, which can send you emails or other alerts once you set up some triggers. CloudWatch also provides some nice graphs that show pretty much anything you could want.


Here is another shot. This one monitors the actual VM underneath the DBaaS layer.





Backup is pretty easy... you configured it when you created your instance. Optionally, you can also click Instance Actions and then pick Take Snapshot to manually take a point-in-time snap.


Restores are pretty much just as easy. As with the other providers, you aren't really restoring the database so much as creating another database instance at that point in time. To do that, there is a "Restore to point in time" option in the instance actions... which is a little misleading, since you aren't really restoring. Oh well.


If you need some geo-redundancy, you can also create a read replica, which can later be promoted to the active instance should the primary fail. The wizard looks very much like all the others; the only real difference is the "source" field, where you pick which database instance you want to replicate.




I know what I think: experience matters. The simple fact that AWS has been doing this basically forever, in "cloud" and "as a service" terms, means they have had a lot of time to work out the bugs and really polish the service.


And to be honest, I wanted to do a post about the HP Helion service this week. However, the service is still in beta, and while testing it out I hit a snag that put a halt on my tests. I'll share more about my experience with HP support next time, in the article about their service.


Until next time!

Most organizations rely heavily on their data, and managing all that information can be challenging. Data storage comes with a range of complex considerations, including:


  • Rapid business growth creates an increase in stored data.
  • Stored data should be secure, but, at the same time, accessible to all users. This requires significant investments in time and money.
  • Foreseeing a natural or man-made disaster, and ensuring adequate data backups in the event of those occurrences, can be challenging.
  • Dealing with storage architecture complications can be difficult to manage.


And the list goes on.


The storage administrator is tasked with handling these issues. Luckily for the admin, there are new methods of storing data available on the market that can help. Before choosing one of these methods, it is important to ask “is this right for my environment?” To answer this question, you need to know each of the new methods in detail. This article talks about some trends that are now available for data storage.


Software-defined storage


Software-defined storage (SDS) manages data by pushing traffic to appropriate storage pools. It does this independent of hardware via an external programmatic control, usually using an application programming interface (API).


The SDS storage infrastructure allows the storage of data from different devices, different manufacturers, or a centralized location when a request comes from an application for a specific storage service. SDS will precisely match demand, and provide storage services based on the request (capacity, performance, protection, etc.).


Integrated computing platform (ICP)


Servers traditionally run individual VMs, and data storage is supported by network attached storage (NAS), direct attached storage (DAS), or a storage area network (SAN). With ICP or hyper-converged architecture, however, compute, networking, and storage are combined into one physical solution.


In a nutshell, with ICP or hyper-convergence, we have a single box that can:


  • Provide compute power.
  • Act as a storage pool.
  • Virtualize data.
  • Communicate with most common systems.
  • Serve as a shared resource.


Open source storage solutions


As digital storage has grown more popular, it has opened a path for the development of different open source storage solutions, such as OpenStack and Hadoop. The open source community has developed, or is trying to develop, tools to help organizations store, secure, and manage their data. With open source storage solutions, organizations have a chance to reduce their CAPEX and OPEX costs. Open source also gives companies the option of in-house development, which allows them to customize their storage solutions to their needs.


These open source solutions might not be plug and play, but plenty of the pieces are ready. You can fit the pieces together and create something unique that fits your company's policies and needs. In this way, open source provides a flexible and cost-effective solution.


These are just a few storage solutions. My next blog talks about flash, Cloud, and object storage.


Keeping Your Secrets Secret

Posted by jgherbert Aug 24, 2015


RADIUS is down and now you can’t log into the core routers. That’s a shame because you’re pretty sure that you know what the problem is, and if you could log in, you could fix it. Thankfully, your devices are configured to fail back to local authentication when RADIUS is unavailable, but what’s the local admin password?




“It’s a security risk to have the same local admin password on every device, especially since you haven’t changed that password in three years,” said the handsome flaxen-haired security consultant. “So what we’re going to do,” he mused, pausing to slide on his sunglasses, “is change them all. Every device gets its own unique password.”



After much searching of your hard drive, you have finally found your copy of the encrypted file where the consultant stored all the new local admin passwords.



You’ve tried all the passwords you can think of but none of them are unlocking the file. You’re now searching through your email archives from last year in the hopes that maybe somebody sent it out to the team.



An Unrealistic Scenario?


That would never happen to you, right? Managing passwords is one of the dirty little tasks that server and network administrators have to do, and I’ve seen it handled in a few ways over the years including:


  • A passworded Excel file;
  • A shared PasswordSafe file;
  • A plain text file;
  • A wiki page with all the passwords;
  • An email sent out to the team with the latest password last time it was changed;
  • An extremely secure commercial “password vault” system requiring two user passwords to be entered in order to obtain access to the root passwords;
  • Written on the whiteboard which is on the wall by the engineering team, up in the top right hand corner, right above the team’s tally for “ID10T” and “PEBCAK” errors;
  • Nobody knows what the passwords actually are any more.


So what’s the right solution? Is a commercial product the best idea, or is that overkill? Some of the methods I listed above are perhaps obviously inappropriate or insecure. For those with good encryption capabilities, a password will be needed to access the data, but how do you remember that password? Having a single shared file can also be a problem in terms of updates because users inevitably take a copy of the file and keep it on their local system, and won’t know when the main copy of the file has been changed.
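Generating the unique per-device passwords themselves is the easy part, and it's worth doing with a cryptographically secure generator rather than by hand. A minimal sketch in Python; the device names and password policy here are illustrative assumptions, not from any particular product:

```python
import secrets
import string

# Alphabet for generated passwords; adjust to match what your devices
# accept (some network OSes reject certain special characters).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_passwords(devices, length=20):
    """Return a dict mapping each device name to its own random password."""
    creds = {}
    for device in devices:
        # secrets uses a CSPRNG, unlike the random module.
        creds[device] = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return creds

# Hypothetical device inventory.
devices = ["core-rtr-01", "core-rtr-02", "dist-sw-01"]
creds = generate_passwords(devices)
for name, pw in creds.items():
    print(name, pw)
```

Of course, the script only moves the problem: the output still has to be stored somewhere safe, which is the whole point of this post.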


Maybe putting the file in a secured file share is an answer, or using a commercial tool that can use Active Directory to authenticate. That way, the emergency credential store can be accessed using the password you’re likely using every day, plus you gain the option to create an audit trail of access to the data. Assuming, of course, you can still authenticate against AD?


What Do You Do?


Right now I’m leaning towards a secured file share, as I see these advantages:


  • the file storage can be encrypted;
  • access can be audited;
  • access can be easily restricted by AD group;
  • it’s one central copy of the data;
  • it’s (virtually) free.


But maybe that’s the wrong decision. What do you do in your company, and do you think it’s the right way to store the passwords? I’d appreciate your input and suggestions.

In previous posts, I’ve explored differing aspects of orchestration concerns at the management level for application deployment, particularly in hybrid environments, and what to seek and what to avoid in orchestration and monitoring in cloud environments. I believe one of the most critical, and most elegant, tasks to be addressed in the category of orchestration is that of disaster recovery.


While backup and restore are not really the key goals of a disaster recovery environment, in many cases the considerations of backup and data integrity are part and parcel of a solid DR environment.


When incorporating cloud architectures into what began as a simple backup/recovery environment, we find that the geographic dispersion of locations is both a blessing and a curse.


As a blessing: the ability to accommodate more than one data center with full replication means that, with the proper considerations, an organization can have a completely replicated environment able to support anywhere from a segment of its users up to all of its applications and users in the event of a Katrina or Sandy-like disaster. When an organization has this in place, we’re not discussing restoring files; we’re discussing a true disaster recovery event, including uptime and functionality concerns.


As a curse: technical challenges in terms of replication, cache coherency, and bandwidth, across all of compute, storage, and network, require consideration. While some issues can be resolved by sharing infrastructure in hosted environments, some level of significant investment must be made and weighed against the potential loss in business functionality should the business face catastrophic data and functionality loss.


For the purposes of this thought, let’s go under the assumption that dual data centers are in place, with equivalent hardware to support a fully mirrored environment. The orchestration level, replication software, lifecycle management of these replications, management level ease of use, insight into physical and virtual architectures, these things are mission critical. Where do you go as an administrator to ensure that these pieces are all in place? Can you incorporate an appropriate backup methodology into your disaster recovery implementation? How about a tape-out function for long term archive?


In my experience, most organizations are attempting to retrofit their new-world solution to their old-world strategies, and with the exception of very few older solutions, these functions cannot be incorporated into newer paradigms.

If I were seeking a DR solution based on already existing infrastructure, including but not limited to an existing backup scenario, I would want to find the easiest, most global solution that allows my backup solution to be incorporated. Ideally, I’d also like to include technologies such as global centralized dedupe, lifecycle management, a single management interface, virtual and physical server backup, and potentially long-term archival storage (possibly in a tape-based archive) in my full-scope solution. Do these exist? I’ve seen a couple of solutions that feel as if they’d meet my goals. What are your experiences?


I feel that my next post should have something to do with Converged Architectures. What are your thoughts?

The recent recall of 1.4 million Fiat Chrysler cars over a remote hack vulnerability is just another patch management headache waiting to happen—only on a larger scale and more frequently. But that’s the future. Let’s talk about the current problems with patch management in organizations, big and small. In a recent SolarWinds® security survey, 62% of the respondents admitted to still using time-consuming, manual patch management processes.


Does this mean IT managers are not giving due attention to keeping their servers and workstations up-to-date? NO. Of course, security managers and system administrators know how much of a pain it is to have a 'situation' on their hands due to a bunch of unpatched, vulnerable machines in their environment. It’s never fun to be in a fire fight!


However, having a manual or incomplete patch management process in place is nearly equivalent to having nothing at all, given that vulnerabilities arise from:

  • Potentially unwanted programs
  • Malware
  • Unsupported software
  • Newer threats (check US-CERT or ENISA)


As a security manager or system administrator, what do you think are the common challenges that get in the way of realizing an effective patch management process? Here are a few common issues:

  • Inconsistent 3rd-party patching using the existing Microsoft WSUS and SCCM solutions
  • Complexity in addressing compliance and audit requirements
  • Complexity in customizing patches and packages for user and computer groups
  • Administrative overhead due to an increase in BYOD usage in the environment


Given the frequency and scale of cyber-attacks and data compromises, having a thorough patch management process is a must-have—not a nice-to-have. But how fast can you put one together?


If you’re already managing patch deployments in your organization with WSUS, you’re covered for Microsoft® applications. You just have to implement a process for automating the patching of non-Microsoft (or 3rd-party) applications like Adobe®, Java™, etc.


WSUS also has its own limitations, like limited hardware inventory visibility and an inability to provide software inventory information. Having inventory information is crucial when you’re formulating a comprehensive patch management strategy.


The strategy should accommodate flexible and fully-customizable patch operations so the regular business activities don’t feel the impact. Or, you can count on having an ‘oh-dear’ moment, complete with a blank stare as you wonder “Why is this server rebooting at the wrong time and hurting my business?”

There are just too many pieces that must fall in place for an effective patch management strategy. If you don’t have one, you might begin by asking yourself…

  1. How am I planning to look out for newer security threats, and regular hot-fixes/patches?
  2. How will I assess the impact to my systems/business if I manage to identify the threats?
  3. How will I prioritize the patches that may affect my systems right away?
  4. What’s the back-up/restore plan?
  5. How will I test the patches before rolling them out to production systems?
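Question 3 in the list above, prioritization, is a good candidate for even a simple script: cross-reference what's installed against current advisories and rank by severity. Here is a hedged sketch in Python; the inventory and advisory data structures are invented for illustration and are not a real US-CERT feed format:

```python
# Toy patch prioritization: match an installed-software inventory
# against advisories and rank findings by severity (highest first).
# Hosts, versions, and CVSS-style scores below are all hypothetical.

inventory = {
    "web-01": {"java": "7u51", "adobe-reader": "11.0.3"},
    "db-01": {"java": "8u25"},
}

advisories = [
    {"software": "java", "affected": {"7u51"}, "severity": 9.8},
    {"software": "adobe-reader", "affected": {"11.0.3"}, "severity": 7.5},
]

def prioritize(inventory, advisories):
    """Return (severity, host, software) tuples, highest severity first."""
    findings = []
    for host, installed in inventory.items():
        for adv in advisories:
            if installed.get(adv["software"]) in adv["affected"]:
                findings.append((adv["severity"], host, adv["software"]))
    return sorted(findings, reverse=True)

for severity, host, package in prioritize(inventory, advisories):
    print(f"{severity:>4}  {host}  {package}")
```

Even a toy like this shows why inventory data (the WSUS gap mentioned below) is the prerequisite: without knowing what's installed where, there is nothing to match advisories against.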


The notion should be to not let patch management become a fire-fighting exercise. Even if it does become a fire-fighting exercise, the process should be clearly defined to minimize the impact of the security threat.


Effective patch management should become a good security practice to protect the IT systems from security threats, stay compliant, and eliminate business downtime and data compromises.


I’ve recently written a couple of articles about cloud databases, and there have been several common responses. First, it’s clear (and I can understand why) that moving your databases to the cloud is not something most IT and database professionals are keen on doing. More interestingly, though, a couple of common concerns kept coming up regarding cloud database implementations that I’ll tackle in this article. The first is security and the second is latency.


Security Concerns

The first and foremost concern is security. Having toured the facilities provided by several cloud hosting vendors, I can definitely say their physical security far exceeds the security that I’ve seen in the normal IT environment. While I’m sure there are very secure private IT infrastructures, I’ve never seen private IT security equal to what cloud vendors offer. For instance, last year I toured two different facilities offered by different cloud providers. One of these facilities even had ex-military armed guards at the check-in. Inside the facilities there were man-traps at every door, where the first set of doors must close before the second set opens. The actual computing hardware was located inside locked cages – sometimes two locked cages that required different security access codes in order to reach the real hardware behind the cloud. In addition, the electrical conduits came from two different providers. This far exceeds the levels of security and reliability most businesses provide. However, I realize that physical security isn’t the only concern. You do have to trust that the cloud vendor will respect your privacy, which is not an issue when you remain in control of your own data security.


Minimizing Latency

The next biggest concern that readers have expressed is the latency of cloud applications. The primary concern isn’t latency caused by lack of compute power or storage performance; the bigger concern is network latency. If everything is in the cloud, then network latency is not an issue you really need to worry about. For instance, if your SQL Server database is primarily the backend for a web application that also lives in Azure, then network latency really isn’t an issue. In this example, you don’t have to worry about the latency that the public internet can introduce because the database and the application never have to send data across the Internet. But what if you have local processing that depends on a cloud-based SQL Server database? In that scenario, Internet latency really can be an issue. While Azure-to-Azure connections between the application and database are not subject to Internet latency, connections from Azure or other clouds to on-premise systems clearly can be. The Internet is a public domain, and you can’t be guaranteed that bandwidth will be there when you need it.
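The effect is easy to underestimate because latency compounds per round trip: a chatty application that issues queries sequentially pays the full RTT every time. A back-of-the-envelope sketch (the latency figures below are illustrative, not measurements):

```python
def total_query_time(round_trips, rtt_ms, server_ms=1.0):
    """Total wall time for a chatty workload that issues queries
    sequentially: every round trip pays the full network RTT."""
    return round_trips * (rtt_ms + server_ms)

# A page that issues 50 sequential queries, under hypothetical RTTs:
scenarios = [
    ("same datacenter", 0.5),
    ("private link (ExpressRoute-class)", 2.0),
    ("public Internet", 40.0),
]
for label, rtt in scenarios:
    print(f"{label}: {total_query_time(50, rtt):.0f} ms")
```

The same workload that finishes in well under a tenth of a second in-datacenter stretches past two seconds over a 40 ms Internet path, which is why the private connection options below matter.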


Fortunately, there are alternatives to using the public Internet to access your cloud databases. Both Azure and Amazon support private, high-speed, on-premise-to-cloud connection technologies: Amazon calls its offering Direct Connect, while Microsoft Azure calls its ExpressRoute. Both are essentially private cloud connections that offer more reliability, faster speeds, lower latencies, and higher security than standard Internet connections. Essentially, they connect your private network directly to your cloud provider of choice without crossing the public internet. Noam Shendar, Vice President of Business Development at Zadara Storage, stated that ExpressRoute provided one- to two-millisecond response times for Azure access. Very fast indeed. These low-latency alternatives to the public Internet can help overcome the latency hurdles for cloud-based databases.


The Bottom Line

Cloud vendors have typically implemented security measures that exceed those of most IT organizations. However, it really boils down to trust. You need to trust that the cloud personnel will secure your data and not permit, or accidentally expose, your data to unauthorized access. Next, while the Internet may be an unreliable medium, high-performance alternatives like Direct Connect and ExpressRoute are available. Both can provide very fast on-premise-to-cloud database connections – at a price. To find out more, check out AWS Direct Connect and ExpressRoute.


Leon Adato

What Defines You?

Posted by Leon Adato Aug 14, 2015

A few months back, SearchNetworking editor Chuck Moozakis interviewed me for an article discussing the future of network engineers in the IT landscape: “Will IT Generalists Replace Network Engineering Jobs?” As part of our discussion, he asked me, “What, in your mind, defined you as a networking pro in 1995, in 2005, and in 2015?” My initial answers are below, but his question got me thinking.


How we identify ourselves is a complex interaction of our beliefs, perceptions, and experiences. Just to be clear: I'm not qualified to delve into the shadowy corners of the human psyche as it relates to the big questions of who we are.


But in a much more limited scope, how we identify within the scope of IT professionals is an idea I find fascinating and ripe for discussion.


Every branch of IT has a set of skills specific to it, but being able to execute those skills doesn't necessarily define you as "one of them." I can write a SQL query, but that doesn't make me a DBA. I can hack together a Perl script, but I am by no stretch of the imagination a programmer.


Adding to the confusion is that the "definitive" skills, those tasks which DO cause me to identify as a member of a particular specialty, change over time.


So that's my question for you. What "are" you in the world of IT? Are you a master DBA, a DevOps ninja, a network guru? Besides your affinity to that area (your love of all things SQL, or your belief that Linux is better than any other OS), what are the things you DO which in your mind "make" you part of that group? Tell me about it in the comments below.


For the record, here is how I answered Chuck's original question:

“What made you identify as a networking professional in each of those years?”


I was a networking professional because I understood the physical layer. I knew that token ring needed a terminator, and how far a single line could run before attenuation won out. I knew about MAUs and star topology. I could configure a variety of NICs on a variety of operating systems. I could even crimp my own CAT3 and CAT5 cables in straight-through or crossover configurations (and I knew when and why you needed each). While there were certainly middle pieces of the network to know about (switches, routers, and bridges), the mental distance between the user screen and the server (because in those days the server WAS the application) was very short. Even to the nascent internet, everything was hard-coded. In environments that made the leap to TCP/IP (often in combination with NetWare, AppleTalk, and NetBIOS), all PCs had public-facing IP addresses. NAT hadn’t been implemented yet.



You could almost look at the early-to-mid 2000’s as the golden age of the network professional. In addition to enjoying a VERY robust employment market, networking technologies were mature, sophisticated, complex, and varied. The CCNA exam still included questions on FDDI, Frame Relay, fractional T1s, and even a NetBIOS or AppleTalk question here or there (mostly how they mapped to the OSI model). But MPLS and dark fiber were happening, wireless (in the form of 802.11b with WEP) was on the rise, VoIP was stabilizing and coming down in cost to the point where businesses were seriously considering replacing all of their equipment, and the InfoSec professionals were being born in the form of ACL jockeys and people who knew how to do “penetration testing” (i.e., white-hat hacking). How did I fit in? By 2005 I was already focused on the specialization of monitoring (and had been for about six years), but I was a networking professional because I knew and understood at least SOME of what I just named, and could help organizations monitor it so they could start to pull back the veil on all that complexity.



Today’s networking professional stands on the cusp of a sea-change. SDN, IoT, BYOD, cloud and hybrid cloud (and their associated security needs) all stand to impact the scale of networks and the volume of data they transmit in ways unimaginable just 5 years ago. If you ask me why I consider myself a networking professional today, it’s not because I have network commands memorized or because I can rack and stack a core switch in under 20 minutes. It’s because I understand all of that, but I’m mentally ready for what comes next.


OMG! My Website Got Hacked!

Posted by mfmahler Aug 14, 2015

“After all, even the best-designed and most thoroughly audited web applications have far more issues, far more frequently, than their nonweb counterparts”. - Michal Zalewski in The Tangled Web



The following is a true story.


I was looking up future concerts at Lincoln Center’s website and I was welcomed by this page:

Lincoln Center Web Site Hacked 1.jpg


I (Me) then called the Lincoln Center Customer Service (CS).


CS: "Thank you for calling Lincoln Center...."

Me: "I'm glad that you have someone answer this call on Sunday. Your web site was hacked! When I browsed your homepage, it's directed to http://new.lincolncenter.org/live/ and the page was hacked”.

CS: "That’s alright. That is our new site”.

Me: "Sir, this is not right. Your website was hacked! Why don't you see for yourself..."

CS: "No, the website is fine”.

Me: "Sir, the page may be cached on your browser. Why don't you clear your browser cache and check your site again”.

CS: "Oh yeah. You're right. We got hacked”.

Me: "I think you should contact your IT and web administrator immediately. Have a good day. Bye”.

CS: "Thank you. Bye”.


Even Google captured the hack that day:

Lincoln Center Web Site Hacked 2.jpg


Websites are hacked every day. With our daily lives relying more and more on services on the internet, web application attacks become major concerns, not only to the application hosts and owners, but also to the application users.


The Open Web Application Security Project (OWASP) is an international non-profit organization dedicated to improving the security of web applications. Every three years since 2004, OWASP has published the Top 10 Project, the ten most critical web application risks worldwide. The most recent list is from 2013, and we expect an updated list in 2016. I summarized the previous Top 10 lists in the following table.


OWASP Top 10.jpg


An interesting observation from the table above is that injection (whether SQL, OS, or LDAP) and cross-site scripting (XSS) have been among the top web application risks for years. The nature of web applications probably contributes to the ease of hacking with these techniques. So, how do we detect and protect against web application threats?
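On the injection side at least, the core defense is well understood: parameterize queries instead of concatenating user input into SQL strings. A small self-contained demonstration using Python's built-in sqlite3 module (the same principle applies to any database driver; the table and input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause,
# so the classic ' OR '1'='1 trick matches every row in the table.
vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(len(conn.execute(vulnerable).fetchall()))  # prints 1: alice leaks out

# Safe: with a bound parameter the driver treats the input strictly as
# data, never as SQL, so no rows match that literal name.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(len(safe.fetchall()))  # prints 0
```

That injection has stayed near the top of the table for a decade despite a fix this simple says a lot about how much legacy string-built SQL is still out there.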



Not with IPS/IDS alone. Web applications live at OSI Layer 7, while IPS/IDS, despite having some signatures for web application attacks, generally operate only up to OSI Layer 4. IPSs can be placed as the first line of defense to block threatening traffic, but they don’t understand web application protocol logic, URLs, or parameters.



WAFs operate at OSI Layer 7 and are designed to detect and prevent web-based attacks that IPSs can’t. WAFs are placed inline in front of the web servers and monitor traffic to and from them. WAFs understand web protocol logic, like HTTP GET, POST, HEAD, etc., and also JavaScript, SQL, HTML, cookies, and so on. That being said, deploying WAFs requires understanding the web applications behind them. Customization of WAF policies is not unusual, and WAF administrators need to work closely with the web developers. A few years ago my team (the network security team) evaluated a WAF, but the complexity of building policies for even one web application made my team stop the project.
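To make the "understands Layer 7" point concrete, here is a deliberately naive toy filter in Python. It is only a sketch of the idea of signature-based request inspection; the two patterns are invented examples, and a real WAF does far richer parsing and normalization (this toy would be trivially bypassed by encoding tricks):

```python
import re

# Two toy signatures in the spirit of a WAF rule set (not real rules).
RULES = [
    ("xss", re.compile(r"<\s*script", re.IGNORECASE)),
    ("sqli", re.compile(r"('|%27)\s*or\s+'?1'?\s*=\s*'?1", re.IGNORECASE)),
]

def inspect(url_params):
    """Return the names of rules matched by any request parameter value."""
    hits = []
    for value in url_params.values():
        for name, pattern in RULES:
            if pattern.search(value):
                hits.append(name)
    return hits

print(inspect({"q": "<script>alert(1)</script>"}))   # ['xss']
print(inspect({"id": "1' OR '1'='1"}))               # ['sqli']
print(inspect({"q": "lincoln center concerts"}))     # []
```

Notice that the filter inspects decoded parameter values, something a Layer 4 device never sees; that visibility, multiplied across every parameter of every application, is also exactly why WAF policy tuning gets so labor-intensive.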



From the OWASP Top 10 lists above, many risks can be considered the developers’ responsibility to secure the applications. Web developers are better trained in securing their code these days; for example, the number one risk of 2004, Unvalidated Input, fell off the list in recent years. But I think we still have a long way to go before developers have a sound security mindset. I swear that I have seen web applications requiring authentication run on port 80 (HTTP)!


How does your organization detect and mitigate web application threats? Do you deploy or manage a WAF? Is it a tough job to keep up with the web applications? How much thought do your web developers put in when they build applications? Do they do thoughtful security testing?


I would like to hear from you.


Lastly, I would like to point out that the IBM Security Systems Ethical Hacking team prepared a series of videos based on the OWASP Top 10 2013 list. Beware of the marketing for IBM AppScan, but the videos show good examples of those top 10 web application attacks.

When CIOs and key stakeholders were issued guidance on the implementation of IT shared services as part of a key strategy to eliminate waste and duplication back in 2012, it remained to be seen how quickly implementation would take place and how much benefit would permeate into agencies. As recently as April, we were still talking about how IT infrastructures remain “walled off within individual agencies, between agencies, and even between bureaus and commands.” So we decided to take a closer look and ask federal IT pros what they are seeing on the ground.


Partnering with government research firm Market Connections to survey 200 federal IT decision makers and influencers, we drilled down into their view of IT shared services. To start with, we provided a universal definition of IT shared services: it covers the entire spectrum of IT service opportunities, either within or across federal agencies, where previously that service had been provided within each individual agency. Is this how you define IT shared services?

To set the scene, only 21 percent of respondents indicated that IT shared services was a priority in terms of leadership focus, falling very close to the bottom of all priorities. Additionally, the IT pros surveyed indicated that they feel in control of shared services within their environment.


However, we were impressed with the amount of IT services being shared; specifically, 64 percent of DoD respondents indicated being recipients of shared services. The DoD adoption of enterprise email, a shared service, is probably the most visible and widespread use of a shared service in the DoD. DISA provides agencies with the ability to purchase an enterprise email system directly from its website and provides support services on a wide scale.

slide 1.jpg

Additionally, over 80 percent of respondents think that IT shared services, either within government or outsourced, provide a superior end-user experience. A large portion of respondents also believe that IT shared services benefit all stakeholders, including IT department personnel, agency leadership, and citizens.

slide 2.jpg

IT shared service implementation seems to still be facing an uphill battle and typical change management challenges. However, IT pros have still identified some key benefits, including: saving money, achieving economies of scale, standardized delivery and performance, and opportunities to innovate. And this is good to see, because these were many of the objectives of shared services to begin with.

slide 3.jpg

Are you using IT shared services in your environment? What challenges are you experiencing in implementation? What benefits are your end-users seeing?

Full survey results: http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx

This week, building on the DBaaS theme, I am taking a look at the Google Cloud SQL service. Last week I looked at Microsoft's DBaaS offering via Azure; this week we are looking at Google's offering, which is MySQL compatible. I think the last article had a pretty decent flow with the different subtopics, so I will try to keep to the same pattern this time:

  • How to Create
  • How to Connect
  • How to Backup and Restore
  • (new addition) How it performs

Let's dive in.

How to Create a Google Cloud SQL instance


Google's Cloud is one that I did not yet have an account with... no real reason why not, I just hadn't played with it yet. It only took about 30 seconds to enter my credit card info and collect my $300 service credit (just for signing up and playing with the service); from there all I needed to do was find the SQL services area and click create.


The entire process for getting an instance online is contained on a single page. However, it's a pretty long page, so I had to split it up into two images.

Here in part one you give your instance a name... For some reason I was thinking I was creating the actual database like I did in Azure, but what we are actually doing here is provisioning a MySQL instance, which can contain many databases. After you name it, you select a size (which can easily be changed later), select a region, and pick how you want to be billed.

Depending on your use case you may want to investigate the two different billing methods. I haven't seen this approach before, but basically you can have the database always ready and waiting for queries, or you can have Google pause the service after a predetermined time with no usage. If you are an 8-5 shop with no need for it to run after hours, the "Per Use" billing method may be cheaper. Then, a little further down, make sure you select the "On Demand" activation policy... this is the policy that will actually turn down the instance when it's not needed.


On the bottom part of image one you can also see that they are going to charge you 24 cents a day if you want IPv4 addressing... If you are using a server with an IPv6 address, you can leave this unchecked and just use IPv6. Over a month that will save you about 7 bucks. I think this method is actually pretty neat... it will help drive IPv6 adoption.

On the second part of the creation page you are essentially just setting up the firewall rules so that you can restrict access to the database instance. Just like Azure, Google allows access to the instance from anywhere INSIDE of Google Compute by default, but if you want anything outside to have access you must explicitly allow it.


It only took a few seconds for my instance to be created, and then I could access the details page. On this page you find pretty much everything you want to know about your instance. It also has some fairly real-time performance monitoring graphs, which are pretty cool.


That's it. You now have a MySQL compatible instance running on Google's Cloud. You do not, however, have a database; to create one you can use your favorite MySQL client (Linux CLI, phpMyAdmin, Toad, etc.).


How to Connect to your new DB Instance


I opted for the Linux CLI client for MySQL because I have a lot of experience with it and already had it installed on several VMs. I was very easily able to connect to the database instance with the client and run a "show databases" command; the typical default MySQL databases were returned. I also created a database with the usual "create database <database name>" command.
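One wrinkle worth noting if you script this instead of typing it: DDL statements like CREATE DATABASE can't take the database name as a bound query parameter, so if the name ever comes from user input you have to validate it yourself before building the statement. A small hedged sketch in Python; the whitelist here is deliberately a conservative subset of what MySQL actually allows for identifiers:

```python
import re

# Conservative whitelist: letters, digits, underscore, up to 64 chars
# (64 is MySQL's identifier length limit). Stricter than MySQL's full
# identifier rules on purpose, to keep the validation simple and safe.
VALID_NAME = re.compile(r"^[A-Za-z0-9_]{1,64}$")

def create_database_sql(name):
    """Build a CREATE DATABASE statement after validating the identifier,
    since DDL identifiers cannot be passed as bound query parameters."""
    if not VALID_NAME.match(name):
        raise ValueError(f"invalid database name: {name!r}")
    return f"CREATE DATABASE `{name}`"

print(create_database_sql("blog_prod"))  # CREATE DATABASE `blog_prod`
```

The returned string can then be handed to whatever MySQL driver or CLI you use; the validation step is the point, not the string formatting.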


While I was messing around the graphs on the details page were happily updating in near real time as well!


Since I had local MySQL instances installed, I thought I would do some comparisons between a local Linux VM and a DBaaS instance with the mysqlslap utility. I followed a tutorial online that also used the sample employee data available from mysql.org. I ran this test on both my blog's local MySQL instance and the DBaaS instance.


How does it perform?


Local MySQL instance (3 vCPU Xeon E5504 with 8GB of RAM and storage on an EMC CX3 FC San)

11-blog server.PNG

Here is the same test on the DBaaS instance with the D16 instance setting. I also had to use a Google Compute VM to run the commands from, as when I tried to run them from a Linux instance at my colo, they were hammering the WAN connection on my side like crazy.


When I say like crazy, I mean about 70-80Mbps constantly, so I decided that to get a realistic number I would need something a little more local to the DB instance.


I also tried out the HammerDB test tool and was able to get almost 80k TPM.

tpm d32.PNG

Note: One thing that I noticed during performance testing was that when I changed between instance sizes, all connections to the instance would drop and take about 20-30 seconds to reconnect. So a word of advice: don't change instance sizes unless you are willing to drop your connections for a bit.


How to do Backups and Restores


I saved this part for last because I really don't have much to talk about here, other than that this part is a lot less user friendly than Azure backup and restore. Other than selecting your 4-hour backup window when creating the database instance, none of the other settings are in the GUI; throughout the Google documentation they recommend installing their SDK tools instead. After you have those utilities installed, you can reference the documentation here, Backups and Recovery - Cloud SQL — Google Cloud Platform, to create backups and restore them.


I know... this section is pretty disappointing, but aside from retyping the Google documentation there really isn't too much to see here.




MySQL has to be my favorite database, probably because I've been using it since college. It's free, super easy to use, and more than enough for anything I have thrown at it thus far (personally). Google has created a crazy fast service with it, too; I have never seen my bandwidth usage as high as it was when I kicked off the mysqlslap test. Plus, with the flexibility to bill you only for when queries are actually being run... that is awesome.


Overall I think that once Google gets backup and restore integrated into the GUI, it will be much more user friendly for the non-DBA/coder to feel comfortable using.
