What does this CI concept mean? A converged architecture is, simply put, a purpose-built system combining server, storage, and network components, designed to ease the build-out of a centralized data center environment. In other words, servers and storage arrive pre-integrated with networking, so that with only minor configuration after plugging the equipment in, you can manage the environment as a whole.


In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out sales opportunities where an architecture like this would be viable, and qualify prospects for what was called the Vblock. These combined Cisco switches (the just-released Nexus line), Cisco servers (the equally fresh UCS blades), and EMC storage. Vblocks were very prescriptive in their sizing and dedicated to housing virtualized environments; VMware was critical to the entire infrastructure. For these systems to be validated by the federation, all workloads on them had to be virtualized. The key, and the reason this was more significant than what customers may already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, IOS for the switch layer, and storage management. This was where the magic occurred.


Then came a number of competitors. NetApp responded with the FlexPod, which was just that: more flexible. Workloads were not required to be exclusively virtual, storage in this case was NetApp, and, importantly, customers could configure these less rigidly around their sizing requirements and build them up further as needed.


There were other companies, most notably Hewlett Packard and IBM that built alternative solutions, but the vBlock and FlexPod were really the main players.


After a bit of time, a new category was created: hyperconvergence. The early players in this field were Nutanix and SimpliVity, both of which built much smaller architectures. They were originally seen as entry points for organizations looking to virtualize from scratch, or as point solutions for new projects like VDI. They’ve since grown in both technology and function to the point where companies today are basing their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM and replication strategies for DR, SimpliVity has other compelling pieces, such as a storage deduplication model and replication, that make for a compelling rationale to pursue it.

There are also many new players in the hyperconverged marketplace, making it the fastest growing segment of the market today. Hybrid cloud models are making these approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for newer players like Pivot3, Scale Computing, and IdealStor, as well as bigger companies like Hewlett Packard, and VMware’s EVO approach, with EVO:RAIL and EVO:RACK getting their official launch this week at VMworld.

“After having spent the last two weeks in Asia I find myself sitting in a hotel room in Tokyo pondering something. I delivered a few talks in Singapore and in Manila and was struck by the fact that we’re still talking about SQL injection as a problem”. - Dave Lewis, CSO Online, July 31, 2015





The following story is based on an actual event.


A Chief Security Officer (CSO) called a junior InfoSec engineer (ENG) after 5PM.


CSO: “I am looking for your manager. Our main website was hacked…”

ENG: “He left already. No, I heard that people complained the website was slow this afternoon. The web team is working on it.”

CSO: “I am telling you that our website was hacked! There are garbage records in the database behind the website. The DBAs are trying to clean up the database. We were hacked by SQL injection!”

ENG: “…”

CSO: “Call your boss now! Ask him to turn around and go back to the office immediately!”


Several teams at that poor company spent the whole night cleaning up the mess. They had to restore the database to bring the main website back.



In my last Thwack Ambassador post, OMG! My Website Got Hacked!, I summarized the last four OWASP Top 10 lists since 2004. Injection in general, and SQL injection in particular, was number 1 in the OWASP Top 10 in both 2010 and 2013. I predict that SQL injection will still be number 1 in the upcoming 2016 report of the OWASP Top 10. Check out this list of SQL injection incidents. Do you notice the increasing number of incidents in 2014 and 2015?


It’s another Christmas Day. In Phrack Magazine issue 54, December 25, 1998, there was an article on “piggyback SQL commands” written by Jeff Forristal under the pseudonym rain.forest.puppy. Folks, 1998 was the year in which the SQL injection vulnerability was first publicly mentioned, although it had probably existed long before then. Almost 17 years have passed since Jeff Forristal wrote his article “ODBC and MS SQL server 6.5” in Phrack Magazine, and many companies are still hit hard by SQL injection attacks today.


If you want to know more about the technical details of the SQL injection, I recommend you read Troy Hunt’s "Everything you wanted to know about SQL injection (but were afraid to ask)". Then you’ll appreciate the XKCD comic, Exploits of a Mom, that I included at the top of this post.


There are a few solutions to combat SQL injection; we may actually need all solutions combined to fight against SQL injection.


DATA SANITIZATION. Right. All user inputs to websites must be filtered. If you expect to receive a phone number in the input field, make sure you receive a phone number, nothing else.
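As a sketch of that idea, here is what validating a phone-number field might look like in Python. The pattern and length policy are my own illustration, not a standard; the point is the allow-list approach of accepting only what you expect.

```python
import re

# Illustrative allow-list: optional leading +, then only digits, spaces,
# dashes, and parentheses. The exact policy here is an assumption.
PHONE_RE = re.compile(r"^\+?[0-9 ()\-]{7,20}$")

def is_valid_phone(value: str) -> bool:
    """Accept only strings that look like a phone number; reject everything else."""
    if not PHONE_RE.fullmatch(value):
        return False
    digits = sum(ch.isdigit() for ch in value)
    return 7 <= digits <= 15  # sane digit count after ignoring punctuation

print(is_valid_phone("+1 (555) 123-4567"))           # True
print(is_valid_phone("555'; DROP TABLE users;--"))   # False: rejected outright
```

Note that filtering alone isn’t a complete defense; it’s one layer alongside the SQL defenses below.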


SQL DEFENSES. As OWASP recommends, use parameterized statements, use stored procedures, escape all user-supplied input, and enforce least database privilege. Don’t forget to log all database calls. And not least, protect your database servers.
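To make the parameterized-statement recommendation concrete, here is a minimal sketch using Python’s built-in sqlite3 module standing in for whatever database sits behind your site. The table and values are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the real one behind the website.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

malicious = "alice' OR '1'='1"

# Vulnerable: string formatting lets the input rewrite the query itself.
vulnerable = "SELECT email FROM users WHERE name = '%s'" % malicious
print(conn.execute(vulnerable).fetchall())  # data leaks: the OR clause matches every row

# Safe: the ? placeholder treats the input strictly as data, never as SQL.
safe = conn.execute("SELECT email FROM users WHERE name = ?", (malicious,))
print(safe.fetchall())  # [] -- no user is literally named "alice' OR '1'='1"
```

The same placeholder idea exists in every mainstream database driver; only the placeholder syntax varies.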


APPLICATION FIREWALL AND IPS. I agree that it’s not easy to customize security rules to fit your applications. But if you invest in an application firewall and/or IPS, they will be your first line of defense. Some vendors offer IDS-like, application behavioral-model products to detect and block SQL injection attacks.
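To illustrate the signature idea behind these products, here is a deliberately naive toy. Real WAF/IPS rules are far richer (and the behavioral products work differently entirely); this hand-rolled pattern list both misses attacks and can flag legitimate input, so treat it only as a sketch of the concept.

```python
import re

# Toy signatures for classic SQL injection fragments. Invented for
# illustration; no real product uses a list this short.
SIGNATURES = [
    re.compile(r"('|%27)\s*(or|and)\s", re.IGNORECASE),   # tautology attempts
    re.compile(r";\s*(drop|delete|insert|update)\b", re.IGNORECASE),  # piggybacked commands
    re.compile(r"union\s+select", re.IGNORECASE),          # UNION-based extraction
    re.compile(r"--|/\*"),                                 # comment truncation
]

def looks_like_injection(value: str) -> bool:
    """Flag input that matches any known-bad signature."""
    return any(sig.search(value) for sig in SIGNATURES)

print(looks_like_injection("Robert'); DROP TABLE Students;--"))  # True
print(looks_like_injection("O'Brien"))                           # False: plain apostrophes pass
```

Even this toy shows why signature tuning is hard: tighten the rules and legitimate names trip them; loosen them and encoded payloads slip through.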


FINDING VULNERABILITIES AHEAD OF HACKERS. Perform regular security assessments and penetration tests on your web applications, both internal and internet-facing. Also, common-sense wisdom: patch your web servers and database servers.


EDUCATION. EDUCATION. EDUCATION. Train your developers, DBAs, application owners, etc. to have a better understanding of information security. It will be beneficial to your company to train some white-hat hackers on different teams. Troy Hunt made a series of FREE videos for Pluralsight in 2013, Hack Yourself First: How to go on the Cyber-Offense. Troy made it clear in the introduction that the series is for web developers. You don’t have to log in or register; just click the orange play icons to launch the videos.



Do you have any story of a SQL injection attack to share? You may not be able to share your own story, but you can share the stories you’ve heard. Do you think it’s hard to guard against SQL injection attacks, and that’s why even many Fortune 500 companies still suffer from these threats? How do you protect your web applications and database servers from the SQL injection threat?

In the past few articles I’ve been covering the issues around moving your database from on-premise servers to the cloud, and there are definitely a lot of issues to be concerned about. In my last article I dug into latency and security, but there are a host of other concerns as well, including the type of cloud implementation (IaaS, PaaS, DBaaS), performance, geographical ownership of data, and the last-mile problem. While it’s clear that the cloud is becoming more popular, it’s also clearly not an inevitable path for everyone. Even so, you shouldn’t necessarily seal the cloud off in a box and forget it for the next five years. Cloud technologies have evolved quickly, and there are places where the cloud can be a benefit, even to DBAs and IT pros who have no intention of moving to the cloud. In this article I’ll tackle the cloud database issue from another angle: where does using the cloud make sense? There are a few places: disaster recovery (DR), availability, and new infrastructure. Let’s take a closer look at each.



Off-site backups are an obvious area where the cloud can be a practical alternative to traditional off-site solutions. For regulatory and disaster recovery purposes, most businesses need to maintain an off-site backup, and these backups are often still on tape. Maintaining an off-site storage service is an expense, there isn’t immediate access to the media, and there is a high rate of failure when restoring from tape. Moving your off-site backups to the cloud can address these issues: the cloud is immediately available, and the digital backup is more reliable. Of course, since connection latency is more of an issue with the cloud than with on-premise infrastructure, one important consideration for cloud-based backups is the time to back up and to restore. Backup time isn’t so much of an issue because it can be staged. Cloud restores can be sped up by using network and data compression or low-latency connections like ExpressRoute.


DR and HA

One of the places where the cloud makes the most sense is in the area of disaster recovery. The cloud can be a practical and cost effective alternative to a physical disaster recovery site. Establishing and maintaining a physical DR site can be a very expensive undertaking leaving it out of reach for many smaller and medium sized organizations. The cloud can be an affordable alternative for many of these organizations. Various types of technologies like Hyper-V Replica, VMware Replication and various third party products can replicate your on-premise virtual machines to cloud-based IaaS VMs. This enables you to have near-real time VM replicas in the cloud that can be enabled very rapidly in the event of a disaster. This type of solution leverages cloud storage and provides a disaster recovery solution that your normal day-to-day operations do not depend on. The cloud is utilized for off-site storage and possibly a temporary operations center if your on-premise data center fails.


Another closely related area is application availability. Technologies like SQL Server’s AlwaysOn Availability Groups can protect your on-premise or even cloud databases by using cloud-based VMs as asynchronous secondary replicas. For instance, you could set up a SQL Server AlwaysOn Availability Group with synchronous on-premise secondary replicas that provide automatic failover and high availability, and at the same time have asynchronous secondary replicas in Azure. If there were a site failure or a problem with the synchronous replica, the asynchronous replicas in Azure could be manually failed over to with little to no data loss.



Another area where the cloud makes sense is for startups or other smaller businesses in need of an infrastructure update. For businesses with no existing infrastructure, or with aging infrastructure that needs to be replaced, buying all-new on-premise equipment can be a significant expense. Taking advantage of cloud technologies can enable businesses to get up and running without needing to capitalize a lot of their equipment costs.


Cloud technology still isn’t for everyone (perhaps especially not for DBAs and database professionals), but there are still areas where it makes sense to leverage cloud technologies.


The latest innovations in storage technology allow organizations to maximize their investment by getting the most out of their storage systems. The desired outcome of optimally managed data storage is that it helps businesses grow and become more agile. However, despite advances in storage technology, organizations are still experiencing significant network issues and downtime.


The problem lies in the fact that users do not understand how to properly deploy and use the technology. If used correctly, today’s new storage technologies can help an organization grow. But first, IT admins need to know their storage environment inside and out, including understanding things like NAS, SAN, data deduplication, capacity planning, and more.


In my previous blog, I talked about hyper-convergence, open source storage, and software-defined storage. Today, I will discuss a few more storage technologies.


Cloud storage


Cloud storage is, essentially, data stored on remote infrastructure and accessed over the network. This kind of architecture is most useful for organizations that need to access their data from different geographic locations. When data is in the Cloud, the burden of data backup, archival, DR, etc. is outsourced. Cloud storage vendors promise data security, speedy deployment, and reliability, among other things. They also claim that organizations no longer have to worry about the storage lifecycle, which includes purchasing, installing, managing, upgrading, and replacing storage hardware. In addition, with Cloud storage, users can access files from anywhere there is internet connectivity.


Flash storage


The IOPS of the spinning hard disk have not evolved much over the years. With the introduction of solid state storage (also known as flash storage), however, the increase in performance has been exponential. Flash storage is often the best solution for organizations running high-IOPS applications, because it reduces latency for those applications, resulting in better performance. Flash storage offers other benefits as well: it consumes less power than other storage options, takes up less space in the data center, and allows more users to access storage simultaneously. Because flash storage tends to be a more expensive option, organizations still use hard disk drives (HDD) for Tiers 1, 2, and 3, and reserve flash storage for Tier 0 (high-performance) environments.



Object storage


Storage admins are quite familiar with file storage and how data is accessed from NAS using NFS, CIFS, etc. Object storage is entirely different. It works best with huge amounts of unstructured data that needs to be organized. With object storage, there is no concept of a file system; input and output happen via an application program interface (API), which allows for the handling of large quantities of data, and rich metadata, rather than a directory hierarchy, is what locates each object. In cases where you need to access certain data frequently, it’s better to go with file storage over object storage. Most people won’t wait several minutes for a Word doc to open, but they are likely more patient when pulling a report that involves huge amounts of data.
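A toy in-memory sketch of that model, with names of my own invention rather than any vendor’s API: a flat key namespace, put/get calls instead of file handles, and metadata as the lookup mechanism.

```python
class ToyObjectStore:
    """Flat namespace of objects: key -> (data, metadata). No directories, no file handles."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        # Everything arrives through the API; there is no mount point to browse.
        self._objects[key] = (bytes(data), dict(metadata))

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def find(self, **criteria):
        # Metadata, not a directory tree, is how objects get located.
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in criteria.items())]

store = ToyObjectStore()
store.put("report-2015-q1", b"...csv bytes...", kind="report", year=2015)
store.put("cat.jpg", b"...jpeg bytes...", kind="image")
print(store.find(kind="report"))  # ['report-2015-q1']
```

Real object stores add versioning, erasure coding, and HTTP transport, but the flat key-plus-metadata shape is the same.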


As you can see, there’s a vast market for storage technology. Before choosing one of the many options, you should ask yourself, “Is this solution right for my storage environment?” You should also consider the following when choosing a storage technology solution:


  • Will it improve my data security?
  • Will it minimize the time it currently takes to access my data?
  • Will it provide options to handle complex data growth?
  • Will there be a positive return on my investment?


Hopefully this information will help you select the best storage platform for your needs. Let us know what technology you use, or what you look for in storage technology.

Well, if you have been following along with my DBaaS series, you know that we are slowly working through the different services I mentioned in the first article. Up this week is Amazon Web Services' (AWS) RDS offering. Of all the offerings we will go over, I think it's easy to say that Amazon's is by far the most mature, largely because it was released way back on December 29, 2009, and has a long history of regular release upgrades.


For AWS I didn't do the performance testing like I did last time with Google's offering. Lately I've also been working on a demo site for a local IT group, so I thought: why not migrate the local MySQL database from my web server to the AWS offering and leave the Apache services local on that server (a remote database instead of a local one)? So as I go through things and you see the screenshots, that is what I was trying to get working.




It's pretty obvious that AWS has been doing this for a while. The process for creating a new database instance is very straightforward. We start by picking what type of database; for the purposes of my project (WordPress) I need RDS, as it offers MySQL.


Next we select the engine type we want to use, in this case MySQL, and then click Select.


The next thing AWS wants to know is if this is a production database that requires multiple availability zones and / or a provisioned amount of IOPS.


Then we get to the meat of the config.


Everything is pretty much the same as the other offerings: fill out the name of your instance, pick how much horsepower you want, and then put in some credentials.

The advanced settings step lets you pick which VPC to put your DB instance into. VPCs seem to be popping up more and more in the different AWS use cases I've run into. For this purpose I just left it at the default, as I don't currently have any other VPCs.


Lastly on the Advanced Settings step you also can select how often automatic backups and maintenance windows take place. Then click Launch DB Instance to get started.



One thing I should have pointed out a while ago, but didn't because I assumed everyone knew, is that DBaaS instances are basically just standard virtual machines running inside your account. The only difference is that there is some more advanced control panel integration with that DBaaS VM.


OK so after a few minutes we can refresh the dashboard and see our new MySQL instance.


There you go! That's all there is to creating an instance.




So now that we have our database server created, we can connect to it, but there is a catch. AWS uses firewall rules that block everything except the WAN IP address from which you created the instance. So the first thing we need to do is create a proper rule if your WAN IP isn't the same as where you will be connecting from. (In my example, my home IP was auto-whitelisted, but the web server that will be connecting to the database is at a colo, so I needed to allow the colo IP to connect.)

To create new firewall rules, navigate over to the VPC Security Groups area and find the security group that was created for your DB instance. At the bottom you'll see an Inbound Rules tab, and that is where you can edit the inbound rules to include the proper IP... in my case, the colo's IP.


Once we have that rule in place I can login to the web server and try to connect via the MySQL command line client to verify.




In my example I already had WordPress installed and running, and I just wanted to migrate the database to AWS. So what I did was use mysqldump on the local web server to dump WordPress to a .sql file. Then from the command line I ran the client again, this time telling it to import the SQL dump into the AWS MySQL instance.


This was a new site, so it didn't take long at all.


Once that was all done, I simply edited my WordPress config file to point at the endpoint displayed on the instance dashboard, along with the username and password I set up previously, and it worked on the first try!




Monitoring your database instance is super easy. Amazon has CloudWatch, a built-in monitoring platform, which can send you emails or other alerts once you set up some triggers. CloudWatch also provides some nice graphs that show pretty much anything you could want to see.


Here is another shot. This one monitors the actual VM underneath the DBaaS layer.





So backup is pretty easy... you configured it when you created your instance. Optionally, you can also click Instance Actions and then pick Take Snapshot to manually take a point-in-time snap.


Restores are pretty much just as easy. As with the other providers, you aren't really restoring the database so much as creating another database instance at that point in time. To do that, there is a "Restore to point in time" option in the instance actions... which is a little misleading, since you aren't really restoring... oh well.


If you need some geo-redundancy, you can also create a read replica, which can later be promoted to the active instance should the primary fail. The wizard to do so looks very much like all the other wizards. The only real difference is the "source" field, where you pick which database instance you want to make a replica of.




I know what I think... experience matters. The simple fact that AWS has been doing this basically forever in terms of "cloud" and "as a service" technology, means that they have had a lot of time to work out the bugs and really polish the service.


And to be really honest, I wanted to do a post about the HP Helion service this week. However, the service is still in beta, and while testing it out I hit a snag which put a halt on testing. I'll share more about my experience with HP support next time, in the article about their service.


Until next time!

Most organizations rely heavily on their data, and managing all that information can be challenging. Data storage comes with a range of complex considerations, including:


  • Rapid business growth creates an increase in stored data.
  • Stored data should be secure but, at the same time, accessible to all users, which requires significant investments of time and money.
  • Foreseeing natural or man-made disasters, and ensuring adequate data backups in the event of those occurrences, can be challenging.
  • Storage architecture complications can be difficult to manage.


And the list goes on.


The storage administrator is tasked with handling these issues. Luckily for the admin, there are new methods of storing data on the market that can help. Before choosing one of these methods, it is important to ask, “Is this right for my environment?” To answer that question, you need to know each of the new methods in detail. This article covers some of the data storage trends now available.


Software-defined storage


Software-defined storage (SDS) manages data by directing traffic to appropriate storage pools. It does this independently of the hardware, via external programmatic control, usually through an application programming interface (API).


An SDS infrastructure allows data to be stored across different devices, different manufacturers, or a centralized location when a request comes from an application for a specific storage service. SDS matches demand precisely, providing storage services based on the request (capacity, performance, protection, etc.).
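A toy sketch of that matching step. The pool attributes and request fields are invented for illustration, not any product’s schema; the point is that placement is a policy decision made in software, against whatever hardware happens to back each pool.

```python
# Toy model of an SDS control plane matching a request to a pool.
POOLS = [
    {"name": "flash-tier", "free_gb": 500,  "max_iops": 100_000, "protected": True},
    {"name": "bulk-sata",  "free_gb": 9000, "max_iops": 2_000,   "protected": False},
]

def place(request, pools=POOLS):
    """Return the first pool satisfying capacity, performance, and protection."""
    for pool in pools:
        if (pool["free_gb"] >= request["capacity_gb"]
                and pool["max_iops"] >= request["iops"]
                and (pool["protected"] or not request["needs_protection"])):
            return pool["name"]
    return None  # no pool can honor the request

print(place({"capacity_gb": 100, "iops": 50_000, "needs_protection": True}))   # flash-tier
print(place({"capacity_gb": 5000, "iops": 500, "needs_protection": False}))    # bulk-sata
```

A real control plane would also rebalance, report back on SLA compliance, and span many more attributes, but the request-to-pool matching is the core idea.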


Integrated computing platform (ICP)


Servers traditionally run individual VMs, with data storage supported by network attached storage (NAS), direct attached storage (DAS), or a storage area network (SAN). With ICP, or hyper-converged architecture, however, compute, networking, and storage are combined into one physical solution.


In a nutshell, with ICP or hyper-convergence, we have a single box that can:


  • Provide compute power.
  • Act as a storage pool.
  • Virtualize data.
  • Communicate with most common systems.
  • Serve as a shared resource.


Open source storage solutions


As digital storage becomes more popular, it has paved the way for different open source storage solutions, such as OpenStack and Hadoop. The open source community has developed, or is trying to develop, tools to help organizations store, secure, and manage their data. With open source storage solutions, organizations have a chance to reduce their CAPEX and OPEX. Open source also gives companies the option of in-house development, which lets them customize their storage solutions to their needs.


These open source solutions might not be plug-and-play, but plenty of the pieces are ready. You can assemble and modify those pieces to create something unique that fits your company's policies and needs. In this way, open source provides a flexible and cost-effective solution.


These are just a few storage solutions. My next blog talks about flash, Cloud, and object storage.


RADIUS is down and now you can’t log into the core routers. That’s a shame because you’re pretty sure that you know what the problem is, and if you could log in, you could fix it. Thankfully, your devices are configured to fail back to local authentication when RADIUS is unavailable, but what’s the local admin password?




“It’s a security risk to have the same local admin password on every device, especially since you haven’t changed that password in three years,” said the handsome flaxen-haired security consultant. “So what we’re going to do,” he mused, pausing to slide on his sunglasses, “is to change them all. Every device gets its own unique password.”
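The consultant’s generation step, at least, is easy to script. Here’s a minimal sketch using Python’s standard-library secrets module; the device names and character set are invented for illustration.

```python
import secrets
import string

# Character set is an assumption; match it to your devices' password rules.
ALPHABET = string.ascii_letters + string.digits + "!@#%^&*-_"

def new_password(length=20):
    """Cryptographically random password; secrets, not random, for security use."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

devices = ["core-rtr-01", "core-rtr-02", "dist-sw-01"]  # hypothetical inventory
vault = {name: new_password() for name in devices}

for name, pw in vault.items():
    print(name, pw)
```

As the story shows, generating unique passwords is the trivial part; storing them where the team can actually reach them during an outage is the real problem.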



After much searching of your hard drive, you have finally found your copy of the encrypted file where the consultant stored all the new local admin passwords.



You’ve tried all the passwords you can think of but none of them are unlocking the file. You’re now searching through your email archives from last year in the hopes that maybe somebody sent it out to the team.



An Unrealistic Scenario?


That would never happen to you, right? Managing passwords is one of the dirty little tasks that server and network administrators have to do, and I’ve seen it handled in a few ways over the years including:


  • A passworded Excel file;
  • A shared PasswordSafe file;
  • A plain text file;
  • A wiki page with all the passwords;
  • An email sent out to the team with the latest password last time it was changed;
  • An extremely secure commercial “password vault” system requiring two user passwords to be entered in order to obtain access to the root passwords;
  • Written on the whiteboard which is on the wall by the engineering team, up in the top right hand corner, right above the team’s tally for “ID10T” and “PEBCAK” errors;
  • Nobody knows what the passwords actually are any more.


So what’s the right solution? Is a commercial product the best idea, or is that overkill? Some of the methods I listed above are perhaps obviously inappropriate or insecure. For those with good encryption capabilities, a password will be needed to access the data, but how do you remember that password? Having a single shared file can also be a problem in terms of updates because users inevitably take a copy of the file and keep it on their local system, and won’t know when the main copy of the file has been changed.


Maybe putting the file in a secured file share is an answer, or using a commercial tool that can use Active Directory to authenticate. That way, the emergency credential store can be accessed using the password you’re likely using every day, plus you gain the option to create an audit trail of access to the data. Assuming, of course, you can still authenticate against AD?


What Do You Do?


Right now I’m leaning towards a secured file share, as I see these advantages:


  • the file storage can be encrypted;
  • access can be audited;
  • access can be easily restricted by AD group;
  • it’s one central copy of the data;
  • it’s (virtually) free.


But maybe that’s the wrong decision. What do you do in your company, and do you think it’s the right way to store the passwords? I’d appreciate your input and suggestions.

In previous posts, I’ve explored different management-level concerns around orchestration of application deployment, particularly in hybrid environments, and what to seek and what to avoid in orchestration and monitoring in cloud environments. I believe one of the most critical, and most elegant, tasks to be addressed in the category of orchestration is disaster recovery.


While backup and restore are not really the key goals of a disaster recovery environment, in many cases the considerations of backup and data integrity are part and parcel of a solid DR environment.


When incorporating cloud architectures into what began as a simple backup/recovery environment, we find that the geographic dispersal of locations is both a blessing and a curse.


As a blessing: the ability to accommodate more than one data center, with full replication, means that with the proper planning an organization can have a completely replicated environment capable of supporting anywhere from a segment of its users to all of its applications and users in the event of a Katrina- or Sandy-like disaster. When an organization has this in place, we’re not just discussing restoring files; we’re discussing true disaster recovery, including uptime and functionality concerns.


As a curse: the technical challenges of replication, cache coherency, and bandwidth all require consideration across compute, storage, and network. While some issues can be resolved by sharing infrastructure in hosted environments, some level of significant investment must be made and weighed against the potential loss in business functionality should the business face catastrophic data and functionality loss.


For the purposes of this discussion, let’s assume dual data centers are in place, with equivalent hardware to support a fully mirrored environment. The orchestration level, the replication software, the lifecycle management of those replications, ease of use at the management level, and insight into both physical and virtual architectures are all mission critical. Where do you go as an administrator to ensure these pieces are in place? Can you incorporate an appropriate backup methodology into your disaster recovery implementation? How about a tape-out function for long-term archive?


In my experience, most organizations are attempting to retrofit their new-world solution to their old-world strategies, and with the exception of very few older solutions, these functions cannot be incorporated into newer paradigms.

If I were seeking a DR solution based on existing infrastructure, including but not limited to an existing backup scenario, I’d want the easiest, most global solution that allows my backup solution to be incorporated. Ideally, I’d also like to include technologies such as globally centralized dedupe, lifecycle management, a single management interface, virtual and physical server backup, and potentially long-term archival storage (perhaps tape-based) in my full-scope solution. Do these exist? I’ve seen a couple of solutions that feel as if they’d meet my goals. What are your experiences?


I feel that my next post should have something to do with Converged Architectures. What are your thoughts?

The recent recall of 1.4 million Fiat Chrysler cars over a remote hack vulnerability is just another patch management headache waiting to happen—only on a larger scale and more frequently. But that’s the future. Let’s talk about the current problems with patch management in organizations, big and small. In a recent SolarWinds® security survey, 62% of the respondents admitted to still using time-consuming, manual patch management processes.


Does this mean IT managers are not giving due attention to keeping their servers and workstations up-to-date? NO. Of course, security managers and system administrators know how much of a pain it is to have a 'situation' on their hands due to a bunch of unpatched, vulnerable machines in their environment. It's never fun to be in a firefight!


However, a manual or incomplete patch management process is nearly equivalent to having nothing at all, because vulnerabilities keep arising from:

  • Potentially unwanted programs
  • Malware
  • Unsupported software
  • Newer threats (check US-CERT or ENISA)


As a security manager or system administrator, what do you think are the common challenges that come in the way of realizing an effective patch management process? Here are a few common issues:

  • Inconsistent 3rd-party patching using the existing Microsoft WSUS and SCCM solutions
  • Complexity in addressing compliance and audit requirements
  • Complexity in customizing patches and packages for user and computer groups
  • Administrative overhead due to an increase in BYOD usage in the environment


Given the frequency and scale of cyber-attacks and data compromises, having a thorough patch management process is a must-have—not a nice-to-have. But how fast can you put one together?


If you’re already managing patch deployments in your organization with WSUS, you’re covered for Microsoft® applications. You just have to implement a process for automating the patching of non-Microsoft (or 3rd-party) applications like Adobe®, Java™, etc.


WSUS also has its own limitations, like limited hardware inventory visibility and an inability to provide software inventory information. Having inventory information is crucial when you’re formulating a comprehensive patch management strategy.


The strategy should accommodate flexible and fully customizable patch operations so that regular business activities don't feel the impact. Otherwise, you can count on having an 'oh-dear' moment, complete with a blank stare, as you wonder, "Why is this server rebooting at the wrong time and hurting my business?"

There are just too many pieces that must fall into place for an effective patch management strategy. If you don't have one, you might begin by asking yourself…

  1. How am I planning to look out for newer security threats, and regular hot-fixes/patches?
  2. How will I assess the impact to my systems/business if I manage to identify the threats?
  3. How will I prioritize the patches that may affect my systems right away?
  4. What’s the backup/restore plan?
  5. How will I test the patches before rolling them out to production systems?


The notion should be to not let patch management become a fire-fighting exercise. Even if it does become a fire-fighting exercise, the process should be clearly defined to minimize the impact of the security threat.


Effective patch management is simply good security practice: it protects IT systems from security threats, keeps you compliant, and helps eliminate business downtime and data compromises.


I’ve recently written a couple of articles about cloud databases, and there have been several common responses. First, it’s clear (and I can understand why) that moving your databases to the cloud is not something most IT and database professionals are keen on doing. More interestingly, though, a couple of common concerns kept coming up regarding cloud database implementations, which I’ll tackle in this article. The first is security and the second is latency.


Security Concerns

The first and foremost concern is security. Having toured the facilities provided by several cloud hosting vendors, I can definitely say their physical security far exceeds the security I’ve seen in the typical IT environment. While I’m sure there are very secure private IT infrastructures, I’ve never seen private IT security equal to what cloud vendors offer. For instance, last year I toured two facilities offered by different cloud providers. One of them even had ex-military armed guards at check-in. Inside the facilities there were man-traps at every door, where the first set of doors must close before the second set opens. The actual computing hardware was located inside locked cages – sometimes two locked cages that required different security access codes to reach the real hardware behind the cloud. In addition, the electrical conduits came from two different providers. Most businesses do not have these levels of security and reliability. However, I realize that physical security isn’t the only concern. You also have to trust that the cloud vendor will respect your privacy – a concern that doesn’t exist when you are in control of your own data security.


Minimizing Latency

The next biggest concern readers have expressed is the latency of cloud applications. The primary concern isn’t latency caused by a lack of compute power or storage performance; the bigger concern is network latency. If everything is in the cloud, network latency is not an issue you really need to worry about. For instance, if your SQL Server database is primarily the backend for a web application that also lives in Azure, network latency really isn’t an issue: the database and the application never have to send data across the Internet. But what if you have local processing that depends on a cloud-based SQL Server database? In that scenario, Internet latency really can be an issue. While Azure-to-Azure connections between the application and database are not subject to Internet latency, connections from Azure or other clouds to on-premises systems clearly can be. The Internet is a public domain, and you can’t be guaranteed that bandwidth will be there when you need it.
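One way to put a number on that network latency is to time the TCP handshake to the database endpoint. Here is a minimal Python sketch; the commented-out endpoint is a placeholder, not a real server.

```python
import socket
import time

def tcp_connect_latency(host, port, samples=5):
    """Average TCP connect time -- a rough proxy for the network round trip."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection completes the TCP handshake, then we close.
        with socket.create_connection((host, port), timeout=5):
            timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Placeholder endpoint -- substitute your own cloud database host and port:
# print(tcp_connect_latency("mydb.example.cloud", 1433))
```

Run it from your office and from a VM inside the cloud provider, and the difference between the two averages is roughly the Internet latency penalty your queries will pay.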


Fortunately, there are alternatives to using the public Internet to access your cloud databases. Both Azure and Amazon support private, high-speed, on-premises-to-cloud connection technologies: Amazon calls it Direct Connect, while Microsoft Azure calls it ExpressRoute. Both are essentially private cloud connections that offer more reliability, faster speeds, lower latency, and higher security than standard Internet connections. Essentially, they connect your private network directly to your cloud provider of choice without crossing the public Internet. Noam Shendar, Vice President of Business Development at Zadara Storage, stated that ExpressRoute provided one-to-two-millisecond response times for Azure access. Very fast indeed. These low-latency alternatives to the public Internet can help overcome the latency hurdles for cloud-based databases.


The Bottom Line

Cloud vendors have typically implemented security measures that exceed those of most IT organizations. However, it really boils down to trust: you need to trust that the cloud personnel will secure your data and not permit or accidentally expose it to unauthorized access. And while the Internet may be an unreliable medium, high-performance alternatives like Direct Connect and ExpressRoute are available. Both can provide very fast on-premises-to-cloud database connections – at a price. To find out more, check out AWS Direct Connect and ExpressRoute.


Leon Adato

What Defines You?

Posted by Leon Adato Employee Aug 14, 2015

A few months back, SearchNetworking editor Chuck Moozakis interviewed me for an article discussing the future of network engineers in the IT landscape: “Will IT Generalists Replace Network Engineering Jobs?” As part of our discussion, he asked me, “What, in your mind, defined you as a networking pro in 1995, in 2005, and in 2015?” My initial answers are below, but his question got me thinking.


How we identify ourselves is a complex interaction of our beliefs, perceptions, and experiences. Just to be clear: I'm not qualified to delve into the shadowy corners of the human psyche as it relates to the big questions of who we are.


But in a much more limited scope, how we identify within the scope of IT professionals is an idea I find fascinating and ripe for discussion.


Every branch of IT has a set of skills specific to it, but being able to execute those skills doesn't necessarily define you as "one of them." I can write a SQL query, but that doesn't make me a DBA. I can hack together a Perl script, but I am by no stretch of the imagination a programmer.


Adding to the confusion is that the "definitive" skills, those tasks which DO cause me to identify as a member of a particular specialty, change over time.


So that's my question for you. What "are" you in the world of IT? Are you a master DBA, a DevOps ninja, a network guru? Besides your affinity to that area (your love of all things SQL, or your belief that Linux is better than any other OS), what are the things you DO which, in your mind, "make" you part of that group? Tell me about it in the comments below.


For the record, here is how I answered Chuck's original question:

“What made you identify as a networking professional in each year?”


I was a networking professional because I understood the physical layer. I knew that token ring needed a terminator, and how far a single line could run before attenuation won out. I knew about MAUs and star topology. I could configure a variety of NICs on a variety of operating systems. I could even crimp my own CAT3 and CAT5 cables in straight-through or crossover configurations (and I knew when and why you needed each). While there were certainly middle pieces of the network to know about (switches, routers, and bridges), the mental distance between the user screen and the server (because in those days the server WAS the application) was very short. Even connections to the nascent internet were hard-coded. In environments that made the leap to TCP/IP (often in combination with NetWare, AppleTalk, and NetBIOS), all PCs had public-facing IP addresses. NAT hadn’t been implemented yet.



You could almost look at the early-to-mid 2000s as the golden age of the network professional. In addition to enjoying a VERY robust employment market, networking technologies were mature, sophisticated, complex, and varied. The CCNA exam still included questions on FDDI, Frame Relay, fractional T’s, and even a NetBIOS or AppleTalk question here or there (mostly how they mapped to the OSI model). But MPLS and dark fiber were happening, wireless (in the form of 802.11b with WEP) was on the rise, VoIP was stabilizing and coming down in cost to the point where businesses were seriously considering replacing all of their equipment, and InfoSec professionals were being born in the form of ACL jockeys and people who knew how to do “penetration testing” (i.e., white-hat hacking). How did I fit in? By 2005 I was already focused on the specialization of monitoring (and had been for about six years), but I was a networking professional because I knew and understood at least SOME of what I just named, and could help organizations monitor it so they could start to pull back the veil on all that complexity.



Today’s networking professional stands on the cusp of a sea-change. SDN, IoT, BYOD, cloud and hybrid cloud (and their associated security needs) all stand to impact the scale of networks and the volume of data they transmit in ways unimaginable just 5 years ago. If you ask me why I consider myself a networking professional today, it’s not because I have network commands memorized or because I can rack and stack a core switch in under 20 minutes. It’s because I understand all of that, but I’m mentally ready for what comes next.

“After all, even the best-designed and most thoroughly audited web applications have far more issues, far more frequently, than their nonweb counterparts”. - Michal Zalewski in The Tangled Web



The following is a true story.


I was looking up future concerts on Lincoln Center’s website and was welcomed by this page:

Lincoln Center Web Site Hacked 1.jpg


I (Me) then called the Lincoln Center Customer Service (CS).


CS: "Thank you for calling Lincoln Center...."

Me: "I'm glad that you have someone answer this call on Sunday. Your web site was hacked! When I browsed your homepage, it's directed to http://new.lincolncenter.org/live/ and the page was hacked”.

CS: "That’s alright. That is our new site”.

Me: "Sir, this is not right. Your website was hacked! Why don't you see for yourself..."

CS: "No, the website is fine”.

Me: "Sir, the page may be cached on your browser. Why don't you clear your browser cache and check your site again”.

CS: "Oh yeah. You're right. We got hacked”.

Me: "I think you should contact your IT and web administrator immediately. Have a good day. Bye”.

CS: "Thank you. Bye”.


Even Google captured the hack that day:

Lincoln Center Web Site Hacked 2.jpg


Websites are hacked every day. With our daily lives relying more and more on internet services, web application attacks have become a major concern not only for application hosts and owners, but also for application users.


The Open Web Application Security Project (OWASP) is an international non-profit organization dedicated to improving the security of web applications. Every three years since 2004, OWASP has published its Top 10 Project, the ten most critical web application risks worldwide. The most recent list is from 2013, and we expect an updated list in 2016. I have summarized the previous Top 10 lists in the following table.


OWASP Top 10.jpg


An interesting observation from the table above is that injection (whether SQL, OS, or LDAP) and cross-site scripting (XSS) have been among the top web application risks for years. The nature of web applications probably contributes to the ease of hacking with these techniques. So, how do we detect and protect against web application threats?



Can IPS/IDS handle this? Not really. Web applications operate at OSI Layer 7, while IPS/IDS, despite having some signatures for web application attacks, generally operate up to OSI Layer 4. IPSs can be placed as the first line of defense to block threatening traffic, but they don’t understand web application protocol logic, URLs, or parameters.



WAFs operate at OSI Layer 7 and are designed to detect and prevent web-based attacks that IPSs can’t. WAFs are placed inline in front of the web servers and monitor traffic to and from them. WAFs understand web protocol logic, like HTTP GET, POST, HEAD, etc., as well as JavaScript, SQL, HTML, cookies, and so on. That being said, deploying WAFs requires an understanding of the web applications behind them. Customization of WAF policies is not unusual, and WAF administrators need to work closely with the web developers. A few years ago my team (the network security team) evaluated a WAF, but the complexity of building policies for even one web application made us stop the project.



Looking at the OWASP Top 10 lists above, many risks can be considered the developers’ responsibility to secure in the applications. Web developers are better trained in securing their code these days; for example, the number one risk of 2004, Unvalidated Input, fell off the list in recent years. But I think we still have a long way to go before developers have a sound security mindset. I swear I’ve seen web applications requiring authentication run over port 80 (plain HTTP)!
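To make the injection risk concrete: the difference between vulnerable and safe code is often a single line. A minimal sketch, using Python's stdlib sqlite3 module as a stand-in for any SQL backend (the table and the attacker string are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: user input is concatenated straight into the SQL string,
# so the OR '1'='1' clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safe: a parameterized query treats the input strictly as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- the injection leaked a row
print(safe)    # [] -- no user is literally named "nobody' OR '1'='1"
```

The same pattern (placeholders instead of string concatenation) applies to every driver and language, which is why Unvalidated Input and injection have been developer-fixable problems all along.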


How does your organization detect and mitigate web application threats? Do you deploy or manage a WAF? Is it a tough job to keep up with the web applications? How much thought do your web developers put in when they build applications? Do they do thoughtful security testing?


I would like to hear from you.


Lastly, I would like to point out that the IBM Security Systems Ethical Hacking team prepared a series of videos based on the OWASP Top 10 2013 list. Beware of the marketing for IBM AppScan, but the videos show good examples of those top 10 web application attacks.

When CIOs and key stakeholders were issued guidance on the implementation of IT shared services as part of a key strategy to eliminate waste and duplication back in 2012, it remained to be seen how quickly implementation would take place and how much of the benefits would permeate into agencies. As recently as April, we were still talking about how IT infrastructures remain “walled off within individual agencies, between agencies, and even between bureaus and commands.” So we decided to take a closer look and ask federal IT pros what they are seeing on the ground.


Partnering with government research firm Market Connections to survey 200 federal IT decision makers and influencers, we drilled down into their view of IT shared services. To start, we provided a universal definition of IT shared services: it covers the entire spectrum of IT service opportunities, either within or across federal agencies, where previously that service had been duplicated in each individual agency. Is this how you define IT shared services?

To set the scene, only 21 percent of respondents indicated that IT shared services was a priority in terms of leadership focus, falling very close to the bottom of all priorities. Additionally, the IT pros surveyed indicated that they feel in control of shared services within their environment.


However, we were impressed with the amount of IT services being shared; specifically, 64 percent of DoD respondents indicated being recipients of shared services. The DoD adoption of enterprise email is probably the most visible and widespread use of a shared service in the department: DISA provides agencies with the ability to purchase an enterprise email system directly from its website and provides support services on a wide scale.

slide 1.jpg

Additionally, over 80 percent of respondents think that IT shared services (either within government or outsourced) provide a superior end-user experience. A large portion of respondents also believe that IT shared services benefit all stakeholders, including IT department personnel, agency leadership, and citizens.

slide 2.jpg

IT shared services implementation still seems to face an uphill battle and the typical change management challenges. However, IT pros have identified some key benefits, including saving money, achieving economies of scale, standardized delivery and performance, and opportunities to innovate. This is good to see, because these were many of the original objectives of shared services.

slide 3.jpg

Are you using IT shared services in your environment? What challenges are you experiencing in implementation? What benefits are your end-users seeing?

Full survey results: http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx

This week, building on the DBaaS theme, I am taking a look at the Google Cloud SQL service. Last week I looked at Microsoft's DBaaS offering via Azure; this week we are looking at Google's offering, which is MySQL compatible. The last article had a pretty decent flow, I think, with its subtopics, so I will try to keep the same pattern this time:

  • How to Create
  • How to Connect
  • How to Backup and Restore
  • (new addition) How it performs

Let's dive in.

How to Create a Google Cloud SQL instance


Google's cloud is one that I did not yet have an account with... no real reason why not, I just hadn't played with it yet. It only took about 30 seconds to enter my credit card info and collect my $300 service credit (just for signing up and playing with the service); from there, all I needed to do was find the SQL services area and click create.


The entire process for getting an instance online is contained on a single page. However, it's a pretty long page, so I had to split it into two images.

Here in part one you give your instance a name... For some reason I was thinking I was creating the actual database like I did in Azure, but what we are actually doing here is provisioning a MySQL instance, which can contain many databases. After you name it, you select a size (which can easily be changed later), select a region, and pick how you want to be billed.

Depending on your use case, you may want to investigate the two billing methods. I hadn't seen this approach before, but basically you can have the database always running and waiting for queries, or you can have Google pause the service after a predetermined period with no usage. If you are an 8-to-5 shop with no need for the instance to run after hours, the "Per Use" billing method may be cheaper. Then, down a little further, make sure you select the "On Demand" activation policy... this is the policy that will actually turn down the instance when it's not needed.


On the bottom part of image one, you can also see that they will charge you 24 cents a day if you want IPv4 addressing... If you are using a server with an IPv6 address, you can leave this unchecked and just use IPv6. Over a month, that will save you about 7 bucks. I think this approach is actually pretty neat... it will help drive IPv6 adoption.

On the second part of the creation page you are essentially just setting up the firewall rules so that you can restrict access to the database instance. Just like Azure, Google allows access to the instance from anywhere INSIDE of Google Compute by default, but if you want anything outside to have access, you must explicitly allow it.


It only took a few seconds and my instance was created and I could access the details page. On this page you find pretty much everything you want to know about your instance. It also has some fairly realtime performance monitoring graphs which are pretty cool.


That's it. You now have a MySQL-compatible instance running on Google's cloud. You do not, however, have a database; to create one, you can use your favorite MySQL client (Linux CLI, phpMyAdmin, Toad, etc.).


How to connect to your new DB instance


I opted for the Linux CLI client for MySQL because I have a lot of experience with it and already had it installed on several VMs. I was easily able to connect to the database instance with the client and run a "show databases" command; the typical default MySQL databases were returned. I also created a database with the usual "create database <database name>" command.


While I was messing around, the graphs on the details page were happily updating in near real time as well!


Since I had local MySQL instances installed, I thought I would do some comparisons between a local Linux VM and a DBaaS instance with the mysqlslap utility. I followed a tutorial online that used the sample employee data available from mysql.org, and ran the test on both my blog's local MySQL instance and the DBaaS instance.
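For readers who haven't used mysqlslap, here is a minimal sketch of the shape of what it measures (run a query many times, report throughput). It uses Python's stdlib sqlite3 in-memory database purely as a stand-in; a real run would point mysqlslap or a MySQL client at the instance, and the table here is invented for illustration.

```python
import sqlite3
import time

def slap(conn, query, iterations=1000):
    """Run one query repeatedly and report elapsed time and queries/sec,
    in the spirit of mysqlslap's load emulation."""
    start = time.perf_counter()
    for _ in range(iterations):
        conn.execute(query).fetchall()
    elapsed = time.perf_counter() - start
    return elapsed, iterations / elapsed

# Stand-in dataset; the real test used the sample employee data instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(i, f"emp{i}") for i in range(100)])

elapsed, qps = slap(conn, "SELECT name FROM employees WHERE id = 42")
print(f"{elapsed:.3f}s elapsed, {qps:.0f} queries/sec")
```

Against a remote DBaaS instance, the same loop would be dominated by network round trips rather than query execution, which is exactly why running the benchmark from a VM near the instance matters.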


How does it perform?


Local MySQL instance (3 vCPU Xeon E5504 with 8GB of RAM and storage on an EMC CX3 FC SAN)

11-blog server.PNG

Here is the same test on the DBaaS instance with the D16 instance setting. I also had to use a Google Compute VM to run the commands from, as when I tried to run it from a Linux instance at my colo, things were hammering the WAN connection on my side like crazy.


When I say like crazy, I mean about 70-80 Mbps constantly, so I decided that to get a realistic number I would need something a little more local to the DB instance.


I also tried out the HammerDB test tool and was able to get almost 80k TPM.

tpm d32.PNG

Note: One thing I noticed when doing the performance testing was that when I changed between instance sizes, all connections to the instance would drop and take about 20-30 seconds to reconnect. So a word of advice: don't change instance sizes unless you are willing to drop your connections for a bit.


How to do Backups and Restores


I saved this part for last because I really don't have much to talk about here, other than that this part is a lot less user friendly than Azure backup and restore. In fact, other than selecting your four-hour backup window when creating the database instance, none of the other settings are in the GUI, and throughout the Google documentation they recommend installing their SDK tools. After you have those utilities installed, you can reference the documentation here, Backups and Recovery - Cloud SQL — Google Cloud Platform, to create backups and restore them.


I know... this section is pretty disappointing, but aside from retyping the Google documentation, there really isn't too much to see here.




MySQL has to be my favorite database, probably because I've been using it since college. It's free, super easy to use, and more than enough for anything I have personally thrown at it thus far. Google has created a crazy-fast service with it, too; I have never seen my bandwidth usage as high as it was when I kicked off the mysqlslap test. Plus, with the flexibility to bill you only when queries are actually being run... that is awesome.


Overall, I think that once Google gets backup and restore integrated into the GUI, the service will be much more user friendly, and the non-DBA/coder will feel comfortable using it.


The Simple Network Management Protocol – SNMP – was originally proposed by way of RFC 1067 back in 1988. While that doesn’t hold a candle to TCP/IP, which is approaching middle age, SNMP, at the grand old age of 27, seems to be having a bit of a mid-life crisis.




One problem with SNMP is that it’s based on UDP, or as I like to call it, the “Unreliable Data Protocol”. This was advantageous once upon a time when memory and CPU were at a premium, because the low session overhead and lack of need to retransmit lost packets made it an ideal “best effort” management protocol. These days we don’t have the same constraints, so why do we continue to accept an unreliable protocol for our critical device management? If Telnet were based on UDP, how would you feel about it? I’m guessing you wouldn’t accept that, so why accept UDP for network management?

It’s kind of funny that we’re still using SNMP when you think about it.


Inefficient Design


Querying a device using SNMP is slow, iterative, and inefficient. Things improved slightly with SNMPv2, but the basic mechanism behind SNMP remains clunky and slow. Worse, SNMPv3’s attempt at adding security to this clunkmeister remains laughably unused in most environments.




So if SNMP is the steaming pile of outdated monkey dung that I am suggesting it is, what should we use instead? I’m open to suggestions, because I know there must be something better than this.


SNMP Traps


For traps, rather than waiting for a UDP alert that may or may not get there, how about connecting over TCP and subscribing to an event stream? If you only want certain events, filter the stream (a smart filter would work on the server side, i.e. the device). This is pretty much what the Cisco UCS Manager does; the event stream reliably sends XML-formatted events to anybody that subscribes to them. This also means that you don’t get what I see so often in networks, which is a device wasting time sending traps to destinations that were decommissioned years ago. An event stream requires an active receiver to connect and subscribe, so events are only sent to current receivers.
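The subscription model described above can be sketched in a few lines of Python. The newline-delimited JSON event format and the severity field below are invented for illustration; a real device (UCS Manager, for instance) defines its own stream format.

```python
import json
import socket

def subscribe(host, port, severity="critical"):
    """Connect to a (hypothetical) device event stream over TCP and yield
    newline-delimited JSON events matching the requested severity.
    A smarter implementation would push the filter to the device side."""
    with socket.create_connection((host, port)) as sock:
        stream = sock.makefile("r")
        for line in stream:
            event = json.loads(line)
            if event.get("severity") == severity:
                yield event

# Usage (placeholder address):
# for event in subscribe("10.0.0.1", 8443):
#     print(event["message"])
```

Because the receiver actively connects, the device never wastes cycles sending events to a collector that was decommissioned years ago: when the subscriber goes away, the connection (and the stream) goes with it.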


SNMP Polling


The flavors of the month in the Software Defined Networking (SDN) world are things like NETCONF and REST APIs. These are TCP-based mechanisms by which to request and receive data formatted in JSON or XML, for example. You’ve spotted by now, I’m sure, that I’m network centric, but why not poll all devices this way? Rather than connecting each time a poll is requested, why not keep the connection alive and request data each time it’s needed? XML and JSON can seem rather inefficient as transfer mechanisms go, but if we’re running over HTTP, maybe we can use support for GZIP to keep things more efficient?

Parallel connections could be used to separate polling so that a high-frequency poll request for a few specific polls runs on one connection while the lower-frequency “all ports” poll runs on another.
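Keeping a connection alive for repeated polls is easy to sketch with Python's stdlib http.client. The device address, URL path, and JSON payload shape here are all assumptions for illustration, not a real device API.

```python
import http.client
import json
import time

def poll_counters(host, path="/api/v1/interfaces", interval=60, polls=3):
    """Poll a (hypothetical) device REST endpoint over a single persistent
    HTTP/1.1 connection instead of reconnecting for every request."""
    conn = http.client.HTTPConnection(host)  # "host" may be "addr:port"
    results = []
    try:
        for i in range(polls):
            conn.request("GET", path)
            resp = conn.getresponse()
            # Read the full body so the connection can be reused.
            results.append(json.loads(resp.read()))
            if i < polls - 1:
                time.sleep(interval)
    finally:
        conn.close()
    return results

# Usage (placeholder device address):
# counters = poll_counters("10.0.0.1:8080", interval=10)
```

A high-frequency poller and a slower "all ports" poller would simply be two instances of this loop on two separate connections.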




If we put aside fancy formatted data, have you ever wondered whether it would be easier to just SSH to a device and issue the appropriate show commands once a minute, or once every 5 minutes? SSH can also support compression, so throughput can easily be minimized too. The data is not structured, though, which is a huge disadvantage; unless you’re polling a Junos device, where you can request the response in XML format, you’ve got some work to do to turn that data into something useful. In principle, though, this is a plausible if rather unwieldy solution.
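That "work to do" is mostly parsing. A quick sketch of turning free-text show output into structured records; the interface-counter layout below is a made-up approximation of one vendor's format, since every CLI differs:

```python
import re

# Hypothetical `show interfaces` output; real formats vary by vendor.
raw = """\
GigabitEthernet0/1 is up, line protocol is up
  5 minute input rate 2034000 bits/sec
GigabitEthernet0/2 is down, line protocol is down
  5 minute input rate 0 bits/sec
"""

# One record per interface: name, status, and the input rate that follows.
pattern = re.compile(
    r"^(?P<name>\S+) is (?P<status>up|down).*?"
    r"input rate (?P<bps>\d+) bits/sec",
    re.MULTILINE | re.DOTALL,
)

interfaces = [m.groupdict() for m in pattern.finditer(raw)]
print(interfaces)
```

Every vendor, OS version, and command needs its own pattern, which is exactly why screen-scraping show commands is plausible but unwieldy compared with a structured NETCONF or REST response.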


And You?


How do you feel about SNMP; am I wrong? Are you using any alternatives – even for a small part of your assets – that you can share? Are you willing to use a different protocol? And why has SNMP remained the standard protocol when it’s evidently so lame? The fact that SNMP is ubiquitous at this point and is the de facto network management standard means that supplanting it with an alternative is a huge step, and any new standard is going to take a long time to establish itself; just look at IPv6.

Help Desk and Service Desk are names that are often used interchangeably in IT. But have you ever wondered what Help Desk and Service Desk actually mean, and how they differ?

Confused?



In this article, I’ll take you through the different characteristics of Help Desk and Service Desk, and by analyzing the discussed points, you will be able to understand which tool best suits you and your organization.


A helpdesk is useful when a service needs to be provided for customers (mostly IT users and employees, both inside and outside of an organization). This service needs a multi-threaded troubleshooting approach to maximize its benefits.

Helpdesk provides:

Case management - Ensures all tickets are addressed and resolved in a timely manner. Further, this helps prevent tickets from being overlooked by technicians.

Asset management - Keeps asset information up-to-date, in turn making it easier for technicians to quickly address and complete tickets. You also gain access to knowledge base articles related to the issue raised.

SLAs - These are associated with the help desk, but are not business oriented; rather, they are technology focused. For example, an SLA such as "% of space left in storage."

Service Desk:

The service desk comes into the picture when technicians or business administrators need service and/or coordination from the IT department. Any business that integrates with IT for its business needs will likely need a service desk in place. In a nutshell, the service desk is the point of contact for all IT-related requests and is much more complex and integrated than the help desk.

Service Desk provides:

Incident management - Helps customers get back on track (to a productive state) as soon as possible, whether by passing along information, providing a workaround for the issue, or resolving it.

Problem management - Helps customers to identify the root cause of a reported issue.

Change management - Helps make orderly changes to the IT infrastructure.

Configuration management - Helps organize asset change requests in organizations. The service desk also maintains the Configuration Management Database (CMDB), where all configuration data is saved.

The service desk also helps in the design, execution, and maintenance of SLAs (Service Level Agreements) and OLAs (Operating Level Agreements).

So, which is best for you? Take a look below at a few more things to consider when choosing between Help Desk and Service Desk:



Help Desk:

  • To provide excellent technical support.
  • To solve problems efficiently and effectively.
  • To have a single point of contact for all technical issues.
  • IT infrastructure changes happen through an informal process.
  • Keep track of the IT assets in the network.
  • Define to the customer how you will solve the issue.

Service Desk:

  • To provide excellent incident management.
  • To get the customer back up and running to productivity ASAP.
  • To have an ITIL framework for IT processes.
  • IT processes are formally defined.
  • Keep track of configuration changes and the relationships among them using the CMDB.
  • Define only SLAs and OLAs to customers.


Well, I hope this article helped you get a better understanding of which solution is best for you or your organization. Also, please feel free to share more thoughts on the differences between Help Desk and Service Desk in the comments below.

Security for mobile devices in the federal space is improving, but there’s still a lot of ground to cover


In June, we partnered with Market Connections to survey decision makers and influencers within the US federal IT space. While the report covers several areas, and I strongly recommend having a look at the whole survey, the section on mobile device usage caught my attention.



What is clear from the report is that mobile devices and the security issues that come with them are being taken seriously in federal IT—or at least some sectors. The problem is that “some sectors” is not nearly enough.


Let’s take one of the first statistics on the survey—permitting personal devices to access internal systems.


The good news is that 88% of the respondents said that some form of restriction is in place. But that leaves 11% with more or less unfettered access on their personal mobile devices. Eleven percent of the federal government is a huge number and there is almost no way to look at that 11% and derive a situation that is acceptable in terms of risk. And that’s just the first chart. Other areas of the survey show splits that are closer to 50-50.


For example, 65% of the respondents say that data encryption is in place. That means 35% of the mobile devices in the field right now have no encryption. Honestly, for almost all of the points on the survey, nothing less than a number very close to 100% is acceptable.



What can be done about this? Obviously this one essay won’t untie the Gordian Knot in one fell swoop, but I am not comfortable throwing my hands up and saying “yes, this is a very bad problem.” There are solutions and I want to at least make a start at presenting some of them.


Let’s start with some of the easy solutions:


Standard Phones

Whether or not phones are agency-provided, there should be a standard setup for each of them. Employees can provide their own phones as long as they are on a short list of approved devices and the employees allow for provisioning. This helps agencies avoid having devices with known security gaps on the network, and it allows certain other features to be implemented, like:

Remote Wipe

Any phone that connects to government assets should be subject to remote-wipe capability by the agency. This is a no-brainer, but 23% of respondents don’t have it or don’t know if it is in place (trust me, if it was there you’d know!).


VPN

Every mobile device that connects to federal systems should be set up with a VPN client, and that client set to automatically activate whenever a network is connected—whether Wi-Fi or cellular. Not only would this help keep the actual traffic secure, but it would force all federal network traffic through a common point of access, which would allow for comprehensive monitoring of bandwidth usage, traffic patterns, user access, and more.


Security Training

The number one vector for hacking is still, after all these years, the user. Social engineering remains the biggest risk to the security of any infrastructure. Therefore before a device (and the owner) is permitted onto federal systems, the feds should require a training class on security practices—including password usage, two-factor authentication, safe and unsafe networks to connect on (airports, café hotspots, etc.), and even how to keep track of the device to avoid theft or tampering.


All of those options, while significant, don’t require massive changes in technology. It’s more a matter of standardization and enforcement within an organization. Let’s move on to a few suggestions which are a little more challenging technically.


Bandwidth monitoring

Note that this becomes much easier if the VPN idea mentioned earlier is in place. All mobile devices need to have their data usage monitored. When a device suddenly shows a spike in bandwidth, it could indicate someone is siphoning data off the device on a regular basis.
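A minimal sketch of that spike detection, assuming a per-device daily usage history and a standard three-sigma rule (all of the numbers below are hypothetical):

```python
import statistics

def is_bandwidth_spike(history_mb, today_mb, sigma=3.0):
    """Flag today's usage if it exceeds the device's own baseline by
    more than `sigma` standard deviations (a common rule of thumb)."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    return today_mb > mean + sigma * stdev

# Hypothetical daily usage (MB) for one device over two weeks:
baseline = [120, 95, 130, 110, 105, 98, 140, 125, 100, 115, 108, 122, 118, 112]

print(is_bandwidth_spike(baseline, 900))  # a spike worth investigating
print(is_bandwidth_spike(baseline, 135))  # within normal variation
```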


Of course, it could also mean the owner discovered Netflix and is binge-watching “The Walking Dead.” Which is why you also need…


Traffic Monitoring

Quantity of bytes only tells half the story. Where those bytes are going is the other half. Using a technique such as NetFlow makes it possible to tell which systems each mobile device is engaged in conversations with—whether it's 1 packet or 1 million. Unexpected connection to China? That would show up on a report. Ongoing outbound connections of the same amount of data to an out-of-state (or offshore) server? You can catch those as well.
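One simplified way to flag unexpected conversations from flow data; the records and the allowlist of expected networks below are hypothetical stand-ins for real NetFlow exports:

```python
import ipaddress

# Hypothetical allowlist of expected destination networks.
EXPECTED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def unexpected_flows(flows):
    """Return (src, dst, bytes) flow tuples whose destination falls
    outside the expected networks."""
    suspicious = []
    for src, dst, nbytes in flows:
        dst_ip = ipaddress.ip_address(dst)
        if not any(dst_ip in net for net in EXPECTED_NETS):
            suspicious.append((src, dst, nbytes))
    return suspicious

flows = [
    ("10.1.2.3", "10.9.9.9", 4096),         # internal, expected
    ("10.1.2.3", "203.0.113.50", 1048576),  # offshore server, flagged
]
print(unexpected_flows(flows))
```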


The key is understanding what “normal” is—both from an organizational perspective and for each individual device. Ongoing monitoring would provide both the baseline and highlight the outliers that rise up out of those baseline data sets.


User Device Tracking

User Device Tracking correlates where a device logs on (which office, floor, switch port, etc.) with the user that logs in on that device to build a picture of how a mobile device moves through the environment, who is accessing it (and accessing resources through it), and when it is on or off the network. Having a UDT solution in place is about more than just finding a rogue device in real time. Like traffic monitoring, having a steady and ongoing collection of data allows administrators to map out where a device has been in the past.
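The kind of correlation a UDT tool performs can be sketched as a merge of two event streams; all MAC addresses, switch ports, and usernames below are made up:

```python
# Hypothetical event streams: where a device appeared on the network,
# and who logged in on it. A real UDT tool correlates these for you.
port_events = [
    ("aa:bb:cc:00:11:22", "2015-08-01 09:02", "bldg2-sw3/port17"),
    ("aa:bb:cc:00:11:22", "2015-08-01 13:45", "bldg1-sw1/port04"),
]
login_events = [
    ("aa:bb:cc:00:11:22", "2015-08-01 09:05", "jsmith"),
]

def device_history(mac):
    """Merge port and login events for one device, ordered by timestamp."""
    events = [(t, "seen on " + port) for m, t, port in port_events if m == mac]
    events += [(t, "login by " + user) for m, t, user in login_events if m == mac]
    return sorted(events)

for when, what in device_history("aa:bb:cc:00:11:22"):
    print(when, "-", what)
```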


In Closing

The mobile device section of the federal survey has much more useful information, but these recommendations I've raised—along with the reasons why it is so desperately important to implement them—would, if executed, provide a higher level of security and stability. I hope you'll take a moment to consider them for your organization.

Full survey results: http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx


First off, I’d like to introduce myself. My name is Damon, and I am the Managing Editor of Geek Speak. I’m not sure why it has taken me so long to post a blog. But that said, I’d like to thank you all for your continued support of thwack, Geek Speak, and for our amazing contributors.


While I’m not exactly versed in IT, I do respect all the various aspects of IT. However, I have always seemed to gravitate more towards Security. I am amazed at the number of security breaches that occur every year. There have been many notable breaches, but I remember firsthand experiencing the PlayStation® Network hack and it being down for an extended period of time (Yeah life was rough that weekend…no Destiny!).


On that note, I’d like to discuss a show I recently started watching, “Mr. Robot.” To be honest, I was skeptical when I heard about it, mainly because shows or movies like this always seem forced or just unrealistic. For example, “Blackhat” was alright, but come on really…Thor as a hacker?!


Okay, alright let’s get back to Mr. Robot, which stars Rami Malek as the lead character, Elliot Alderson. In short, Elliot is a Cyber Security Engineer by day and is plagued with a severe social anxiety disorder. So, as you can imagine, he struggles with interacting and relating to most people. But, by night he connects with the rest of the world—as he hacks them and assumes the life of a cyber-vigilante.


Without throwing too many spoilers out there, I will only address episode 1. In this episode, we are introduced to Elliot in a dimly lit café on a cold evening. Elliot frequently goes to this café to be alone and, well, because, as he states, they have "remarkably fast Wi-Fi." However, this particular evening he has a different reason for visiting.


As he arrives, he sits down by the owner, alerting him that he’s going to expose him and his underground business ring to the authorities. The owner then becomes unruly and tells him to leave. Elliot calmly speaks to him and explains that just because he uses Tor, it doesn’t mean his traffic is private. He further states, “You know, whoever controls the exit nodes controls the traffic.” Immediately, the owner becomes frantic and offers Elliot money, but Elliot says it’s not about the money. With a little deductive reasoning, I’m sure you can figure out what happens next. Naturally, I was intrigued…I thought ok, they are talking technical here and it’s pretty suspenseful at that—pretty good start.


Now to the second part of the episode. The company Elliot works for is hit with a DDoS attack. Ultimately, he stops the attack, but in the process he receives a message from a hacker group called “fSociety.” This group is one of many hacker groups that are collectively planning a digital revolution by deleting debt records from one of the largest corporations in the world. The problem is, they need Elliot to achieve their end goal. Annnnd…that’s where I’m going to stop. Sorry, you’ll have to tune in yourself to see what’s been happening in the latest episodes.


So, this leads me to my main question, could this fictional story really happen today? And, if you’ve been tuning in, what do you think so far? Are some of the hacks or scenarios that occur realistic/possible? Let us know or just leave your thoughts on the latest episode—Tune in Wednesdays, 10/9C on USA.

On occasion, considerations will be made regarding the migration of applications to the cloud. There are many reasons why an organization might choose to do this.


Full application upgrades, and the desire to simplify that process moving forward, are one. The desire for licensing centralization and easier rollout of future upgrades can also push application sets to the cloud. A big motivator is the end of life of older operating system versions, which forces upgrades. Corporate acquisitions, and the goal of bringing a large workforce (particularly a geographically diverse one) into the organization's structures, can be another large motivator.


Above all, the desire to provide stability through hardware redundancy can be a motivator.


When a company wants to augment the internal datacenter with additional functionality, redundancy, hardware, or uptime, moving applications to a hybrid cloud model can be the ideal solution to reach that application nirvana.


Moving a legacy application to a hybrid model is a multi-layered process, and sometimes it cannot be done. In the case of legacy, home-grown applications, we often find ourselves facing a decision: complete rewrite or retrofit? In many cases, particularly with arcane databases where the expertise regarding design, upgrade, or installation has long since left the organization, a new front end, possibly web-based, may make the most sense. Often, these databases can remain intact, or require just a small amount of massaging to function properly in their new state.


Other applications, those more "Information Worker" in focus, such as Microsoft Office, already have online equivalents, so migrating these apps to the cloud is unlikely to be as much of a problem as a complete reinstallation or migration. Components like authentication have been smoothed out, so these functions have become far less troublesome than they were at inception.


However, as stated above, there are a number of high-visibility and highly reasonable purposes for moving apps to the cloud. The question has to be: which approach is most ideal? As I've stated in many of my previous posts, the global approach must be paramount in the discussion process. For example, if a future application upgrade is the motivation, the goal may be to create a new hosted environment with a full-stack install of the new version code, a cloning/synchronization of the data, and then whatever upgrades the data set requires. At that point, you have a fully functional new platform awaiting testing and deployment. This method allows a complete backup of a moment in time to be maintained, and the ability to revert should it become necessary. Another consideration that must be entertained is how to deploy the front end. How is that application delivered to the end users? Is there a web-based front end? Or must fat applications be pushed to the endpoints? A consideration at that point would be a centralized, cloud-based application rollout such as VDI.

As you can see, there are many planning considerations involved in this type of scenario. Good project management, with realistic timeline considerations, will ensure a project such as this proceeds smoothly.


My next installment, within the next couple of weeks, will take on the management of Disaster Recovery as a Service.

Moving your databases to the cloud clearly isn't for everybody; however, there definitely are implementations where it makes sense. Smaller businesses and start-ups can often save considerable capital outlay by not having to invest in the hardware and software required to run an on-premises database. In addition to cost savings, there can be several other advantages as well. The cloud offers pay-as-you-go scalability: if you need more processing power, memory, or storage, you can easily buy more resources. Plus, there's the advantage of global scalability and built-in geographical protections. By its very nature, cloud resources can be accessed globally, and most cloud providers have built-in geographical redundancy where your storage is replicated to secondary regions that could be hundreds or thousands of miles away from your primary region. Finally, the cloud offers simplified operations. Businesses don't have to worry about hardware maintenance or software patching; the cloud provider takes care of all of those basic maintenance tasks.


Choosing a cloud destination

If you've decided that a move to the cloud might pay off for your business, it's important to realize that not all cloud databases are created equal. There are essentially two paths for running your databases in the cloud. The main database cloud providers used by most businesses today are Amazon RDS and Microsoft Azure. When you go to deploy a database in the cloud, you can use an Infrastructure as a Service (IaaS) approach, where you run your database inside a virtual machine (VM) hosted by a cloud provider, or you can use a Database as a Service (DBaaS) approach, where you use the database services directly. An example of an IaaS implementation would be an Azure VM running Windows Server and SQL Server. An example of the DBaaS approach is Azure SQL Database. The trade-off between the two essentially boils down to control: you have more explicit control over the OS and the SQL Server settings in an IaaS VM implementation, and less control over these types of factors in an Azure SQL Database implementation.


SQL Server - Azure Migration Tools

After you've decided to go with either a DBaaS or an IaaS type of cloud implementation, the next step is to pick the tools you will need to migrate your data to the cloud. For an IaaS SQL Server implementation, you can use the following tool:


  • Deploy a SQL Server Database to a Windows Azure VM wizard -- The wizard makes a full database backup of your database schema and the data from a SQL Server user database. The wizard also creates an Azure VM for you.


If you are migrating to an Azure SQL Database then you can consider the following migration approaches.


  • SQL Server Management Studio (SSMS) – You can use SSMS to deploy a compatible database directly to Azure SQL Database or you can use SSMS to export a logical backup of the database as a BACPAC which can be imported by Azure SQL Database.
  • SQL Azure Migration Wizard (SAMW) – You can also use SAMW to analyze the schema of an existing database for Azure SQL Database compatibility and then generate a T-SQL script to deploy the schema and data.
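For the BACPAC route mentioned above, the export can also be run from the command line with the SqlPackage utility (part of the DacFx tooling that SSMS uses under the hood). As a rough illustration, here is a sketch that assembles that export command; the server and database names are hypothetical:

```python
# Sketch: build the SqlPackage.exe argument list for a logical (BACPAC)
# export. Server, database, and file names below are hypothetical.

def bacpac_export_cmd(server, database, target_file):
    """Build a SqlPackage.exe argument list for a BACPAC export."""
    return [
        "SqlPackage.exe",
        "/Action:Export",
        f"/SourceServerName:{server}",
        f"/SourceDatabaseName:{database}",
        f"/TargetFile:{target_file}",
    ]

cmd = bacpac_export_cmd("myserver", "SalesDB", r"C:\backups\SalesDB.bacpac")
print(" ".join(cmd))
```

The resulting BACPAC file can then be imported into Azure SQL Database.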


Up in the Clouds

How you actually get to the cloud depends a great deal on the type of database cloud implementation that you choose. If you want more control then the IaaS option is best. If you want fewer operational responsibilities then the DBaaS option might be the better way to go.


[Image: Digital Attack Map, December 25, 2014]

“DDoS trends will include more attacks, the common use of multi-vector campaigns, the availability of booter services and low-cost DDoS campaigns that can take down a typical business or organization” - Q1 2015 State of the Internet Security Report


“Almost 40% of enterprises are completely or mostly unprepared for DDoS attacks”. - SANS Analyst Survey 2014



Over Christmas 2014, Microsoft's Xbox Live and Sony's PlayStation Network were hit by massive DDoS attacks from the hacking group Lizard Squad. Xbox Live and PlayStation Network were down for 24 hours and two days, respectively. Online gamers were not happy, I'm sure.


Earlier this year, GitHub, the largest public code repository in the world, was intermittently knocked offline for more than five days. The DDoS attacks were reportedly linked to China's "Great Cannon".


We'll never stop hearing about new victims (or old ones) crippled by distributed denial of service (DDoS) attacks. In fact, every new security report states a record-breaking number of DDoS attacks compared to the previous one. The latest data shows an increasing number of Simple Service Discovery Protocol (SSDP) attacks. I find this scary - any unsecured home-based device using the Universal Plug and Play (UPnP) protocol can be used for reflection attacks.
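To make the SSDP point concrete, here is a rough sketch of flagging likely reflection traffic in flow records; the flow format and the byte threshold are hypothetical:

```python
SSDP_PORT = 1900  # SSDP responses are sourced from UDP port 1900

def ssdp_reflection_suspects(flows, min_bytes=100_000):
    """Flag UDP flows sourced from port 1900 with unusually large volume,
    the typical signature of SSDP reflection traffic. Threshold is arbitrary."""
    return [
        f for f in flows
        if f["proto"] == "udp" and f["src_port"] == SSDP_PORT and f["bytes"] >= min_bytes
    ]

flows = [
    {"proto": "udp", "src_port": 1900, "dst": "198.51.100.7", "bytes": 5_000_000},
    {"proto": "udp", "src_port": 1900, "dst": "10.0.0.5", "bytes": 600},  # normal discovery
    {"proto": "tcp", "src_port": 443, "dst": "10.0.0.6", "bytes": 2_000_000},
]
print(ssdp_reflection_suspects(flows))
```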


Did your company/organization suffer from DDoS?


How does your organization detect DDoS threats?


What DDoS mitigations does your organization implement?




Studies found that the majority of DDoS attacks were volumetric attacks at the infrastructure layer. Firewalls, IPS/IDS, NGFWs, and IP reputation services should be deployed in a defense-in-depth manner, not only to protect an organization's network perimeter against DDoS, but also to detect any infected device inside the network launching DDoS attacks against targets within or outside the organization. NetFlow, or any flow-based technology, is indispensable for providing visibility into any network abnormality.



Application firewalls, host-based IPS/IDS, application delivery controllers (ADC) provide up to Layer 7 visibility and protection against malicious traffic. And most importantly, don’t forget to patch your systems.



You feed all the logs, flow data, packet captures, etc. to a SIEM - then what? I believe that a SIEM is not a SIEM without the human element. Even though vendors include many pre-built alerts and reports in SIEM products, it's humans who fine-tune them to fit an organization's needs, and that takes a lot of man-power. Also, who says a DDoS attack won't start at 2 AM? Therefore, 24x7 coverage (think of a NOC) is necessary.



Recently, we were approached by one of our service providers; they provide Security Operations Center (SOC) services to customers. In other words, they give customers 24x7x365 SIEM coverage. Service providers can also provide automatic DDoS mitigation, upstream blackholing, or even global content delivery network (CDN) services.



Just like organizations perform disaster recovery tests annually or twice a year, annual DDoS tests should be conducted. That way, all IT departments become familiar with DDoS incident handling. Also, weaknesses in the organization's DDoS mitigation can be revealed and improved.



In the '80s sci-fi movie WarGames, there was a scene in which big monitors in the situation room showed traces of global missile attacks. Do you want to see something similar in real life? OK, OK. No missile attacks. Check out the following websites for a taste of current cyberattacks in real time.


Cyber Threat Map from FireEye

IPViking Map from Norse Corp

Digital Attack Map from Arbor Networks and Google
