
Geek Speak


Do you want your applications to perform at their best 24x7? Would you like to boost your organization’s productivity? Of course you do, and here’s how: continuous monitoring of the application stack. Application stack? What’s that? Keep reading and I’ll walk you through it.


To understand the application stack, you should first understand how applications are deployed in a typical organization.


Any one of these components can be the sole reason for an application’s performance issues, so continuous monitoring of all of them is a must. Just as important is a graphic illustration of every component in the application deployment. The illustration should show the status of each component, giving you a bird’s-eye view of how everything is working. But a view of the statuses alone is not enough; you also need enough detail to act when one of the components has an issue.


Let’s walk through an example: an issue with one of your applications, say MS SQL Server.

Start with the server. As an administrator, you will know which server SQL Server is running on, so you can check the server’s performance directly, including memory, disk, and so on. If it is running in a virtualized environment (which is usually the case these days), you can drill down to see the host, virtual cluster, virtual data center, and the datastore the VM belongs to. If there is an issue with any of these, drill down further to locate the exact problem: check the details of the node, Hyper-V® host, ESX® host, Vserver, and so on. If everything checks out, move on to the next item in the application stack.
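That first drill-down step, checking the server itself, can be scripted. Below is a minimal Python sketch using only the standard library; the 90% warning threshold is an illustrative assumption, not a recommendation from any monitoring product.

```python
import shutil

def disk_usage_report(path="/", warn_pct=90):
    """Return (percent used, over-threshold flag) for the filesystem holding `path`.

    warn_pct is an illustrative threshold; tune it to your environment.
    """
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    return pct, pct >= warn_pct

pct, warn = disk_usage_report("/")
```

A real monitoring setup would track memory and CPU the same way, and alert on trends rather than single samples.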


Next, check the volume the VM is using, the LUN where the VM’s datastore is located, the storage pool, and the storage arrays.


If the issue is with a Web-based application, troubleshooting should start by tracking the application’s Web performance. You should be able to trace its transactions, which indicates whether the issue lies with the application itself. You can also backtrack to see if it’s a network issue.


Basically, if you can monitor your systems, VMs, storage, Web performance, and network, the complete application troubleshooting process falls into place. When all of this information is together in one console, your life as an application engineer becomes a lot easier. Moreover, you will be able to troubleshoot issues proactively, helping you keep your application performance consistent.


A combination of tools can get you all of the above, but switching from one application to another is difficult. SolarWinds® offers its Application Stack Management Bundle, which gives you full visibility into every component associated with an application. Want to know more? Visit SolarWinds at booth HH18 at IP Expo, London.

DameWare Remote Support allows IT admins to connect to remote computers wherever they are, whether inside the network or outside, attended or unattended. To make clear the value DameWare brings to your IT team, we made this graphic explaining all the popular remote connection options available in DameWare Remote Support.


The latest release, DameWare Remote Support v12.0, introduces a new connection type (#4 in the graphic below): unattended, over-the-Internet remote sessions. This adds to DameWare’s many remote connection capabilities, allowing IT pros to provide timely support and administration to remote computers whether or not an end-user is present.




Deployment Modes for Remote Connection Types:

  • Use the built-in DameWare Mini Remote Control (DMRC) application available in DameWare Remote Support (DRS) to initiate remote connections.
  • Use the DameWare Remote Support application to leverage the built-in administration tools to troubleshoot Windows® systems and Active Directory®.


Remote Connection Types and Deployment Options:

1. Attended, inside the LAN: DRS Standalone (use DMRC in DRS)

2. Unattended, inside the LAN: DRS Standalone (use DMRC in DRS)

3. Attended, outside the LAN: DRS Centralized (use DMRC in DRS + Central Server + Internet Proxy)

4. Unattended, outside the LAN (new in v12.0): DRS Centralized (use DMRC in DRS + Central Server + Internet Proxy)

5. Remote connection from a mobile device: DRS Centralized (use DMRC in DRS + Central Server + Mobile Gateway)


Download the PDF of the Infographic below.

What makes an application perform optimally? I would say it’s when the server or VM running the application, the network on which it is used, and the storage all perform well together. In this post, I cover storage and how to configure it to avoid application performance bottlenecks.


LUN contention

LUN contention is the most common issue that storage admins try to avoid. Here are a few common mistakes that usually lead to LUN contention:

  • Deploying a new application on the same volume that handles busy systems, such as email, ERP, etc.
  • Using the same drives for applications with high IOPS.
  • Configuring many VMs on the same datastore.
  • Not matching storage drivers with processors.

Application issues can be traced to LUN contention only if the relevant database is being monitored. Drilling down to the appropriate LUN helps you make sure the application runs fine.
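To make the contention idea concrete, here is a back-of-the-envelope sketch in Python. All the numbers are invented for illustration; real IOPS figures come from your storage monitoring tool.

```python
LUN_IOPS_CAPACITY = 5000  # assumed rated IOPS for the shared LUN

# Assumed observed IOPS for each workload placed on that LUN
workloads = {"email": 2200, "erp": 1900, "new_app": 1400}

total_demand = sum(workloads.values())
contended = total_demand > LUN_IOPS_CAPACITY
print(f"demand={total_demand}, contended={contended}")
```

Here the new application pushes total demand past the LUN’s rated capacity, which is exactly the first mistake in the list above: deploying a new application on a volume that already handles busy systems.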


Capacity planning

Poor application performance can often be tied to increased demand for services, and many times the bottleneck is storage and its IOPS. Storage is costly, and no organization wants to waste it on applications.

Capacity planning means knowing your application, how much space it needs, and the kind of storage it requires. It supports predictive analytics, which lets users choose the amount of storage their application requires, and it should be done before the application moves into production. Doing so not only right-sizes the application’s storage environment, but ultimately lowers the number of performance issues the application might experience during rollout.
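As a sketch of the “how much space it needs” step, a simple compound-growth forecast is often enough for a first pass. The figures below are hypothetical.

```python
def projected_capacity_gb(current_gb, monthly_growth_pct, months):
    """Project storage needs assuming steady compound monthly growth."""
    return current_gb * (1 + monthly_growth_pct / 100) ** months

# A hypothetical app: 500 GB today, growing 5% per month, sized for 12 months
needed = projected_capacity_gb(500, 5, 12)  # roughly 900 GB
```

A forecast like this is only as good as the growth assumption, which is why the post stresses knowing your application first.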


Make sure it’s not the storage

SysAdmins often blame storage for application performance issues. Monitoring your storage helps eliminate the blame game: you can see whether it’s actually the storage causing performance issues, rather than the server or the network. Continuously monitoring your database can also help you avoid LUN contention. You will also be able to track how well your capacity plan is holding up, and be alerted when it isn’t.

Storage is the lowest common denominator of application monitoring. Application stack monitoring allows you to troubleshoot issues from the application itself. Consider the following troubleshooting checklist, and ask yourself:

  • Is it the application itself?
  • Is it the server on which the application is hosted?
  • Is it the VM?
  • Is it the storage?

I will walk you through the different layers and how they help troubleshoot application issues in my next blog. To find out more about AppStack monitoring, please visit us at booth HH18 at this year’s IP Expo in London.

When:           August 2005

Where:          Any random organization experimenting with e-commerce


Employee:         I can’t access the CRM! We might lose a deal today if I can’t send that quote.

Admin:               Okay, check the WAN, check the LAN, and the server crm1.primary.

Junior:                All fine.

Admin:               Great. Restart the application service. That should solve it.

Junior:                Yes, Boss! The app service was down. I restarted it and now CRM is back up.


When:             August 2015

Where:            Any random organization that depends on e-commerce


Sales Rep:           Hey! The CRM is down! I can’t see my data. Where are my leads! I’m losing deals!

Sales Director:     Quick! Raise a ticket, call the help desk, email them, and cc me!

Help desk:           The CRM is down again! Let me assign it to the application team.

App team:            Hmm. I’ll reassign the ticket to the server guys and see what they say.

SysAdmin:           Should we check the physical server? Or the VM instance? Maybe the database is down.

DB Admin:           Array, disk, and LUN are okay. There are no issues with queries. I think we might be fine.

Systems team:     Alright, time to blame the network!

Net Admin:           No! It’s not the network. It’s never the network. And it never will be the network!

Systems team:     Okay, where do we start? Server? VM? OS? Apache®? App?


See the difference?


App deployment today

Today’s networks have changed a lot. There are no well-defined points of failure like there were when networks were flat. Today’s enterprise networks are bigger, faster, and more complex than ever before. While current network capabilities deliver more services to more users more efficiently, they have also increased the time it takes to resolve an issue, let alone pinpoint the cause of a failure.


For example, let’s say a user complains about failed transactions. Where would you begin troubleshooting? Keep in mind that you’ll need to check the Web transaction failures, make sure the server is not misbehaving, and confirm that the database is good. Don’t forget the hypervisors, VMs, OS, and the network. Also consider the overhead of switching between multiple monitoring tools, windows, and tabs, correlating the information, working out what depends on what, collaborating with various teams, and more. All of this increases the mean time to repair (MTTR), which means more service downtime and lost revenue for the enterprise.


Troubleshoot the stack

Applications are not standalone entities installed on a Windows® server anymore. Application deployment relies on a system of components that must perform in unison for the application to run optimally. A typical app deployment in most organizations looks like this:

[Image: a typical application stack]

When an application fails to function, any of these components could be to blame. When a hypervisor fails, you must troubleshoot multiple VMs and the multiple apps they host that may also have failed. Where would troubleshooting begin under these circumstances?


Ideally, the process would start with finding out which entity in the application stack failed or is in a critical state. Next, you determine the dependencies of that entity with other components in the application stack. For example, let’s say a Web-based application is slow. A savvy admin would begin troubleshooting by tracking Web performance, move to the database, and on to the hosting environment, which includes the VM, hypervisor, and the rest of the physical infrastructure.


To greatly reduce MTTR, begin troubleshooting with your application stack. This will help move your organization closer to the magic three nines of availability. To make stack-based troubleshooting easier, admins can adopt monitoring tools that support correlation and dependency mapping, also known as the AppStack model of troubleshooting.
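For context, “three nines” translates into a concrete downtime budget; the arithmetic is simple:

```python
HOURS_PER_YEAR = 24 * 365

def downtime_budget_hours(availability_pct):
    """Maximum downtime per year allowed by an availability target."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

three_nines = downtime_budget_hours(99.9)  # about 8.76 hours of downtime a year
```

Every hour shaved off MTTR is a meaningful slice of a budget that small.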


Learn more at Microsoft Convergence 2015.

If you would like to learn more, or see the AppStack demo, SolarWinds will be at booth 116 at Microsoft Convergence in Barcelona.

Security is Everyone’s Job



“Never was anything great achieved without danger.” -Niccolo Machiavelli


As we begin National Cybersecurity Awareness Month, it’s a great time to reflect on how we can all protect ourselves at work and at home. Let’s look at some current risks and see what changes we can adopt to mitigate them.


Email - we need it, love it, live it, but it’s risky.


Phishing is still the number one risk for most of us. Whether it’s an automatic preview in our work email system or a browser injection on Web mail, spam and phishing are both a security risk and an irksome annoyance.


Unfortunately, we are not winning the battle against email cybercriminals and overzealous marketers, despite almost ubiquitous deployment of spam filters. Here we are in 2015, and spam still represents more than 10% of our inboxes.[1] The statistics on phishing are even worse: from 2014 to 2015, the number of phishing sites increased from about 25,000 to 33,500, according to Google.[2]


Furthermore, malicious email is becoming more sophisticated, embedding macros in ordinary-looking attachments. In our busy lives, it’s easy to accidentally click an attachment or link with malicious content.


The following are some email checks to keep top of mind:


Stay in familiar territory


Make sure the To: and Reply-To: addresses match, or are from a company you know. Phishers will try to fool you with an email that looks like it comes from someone you know when it doesn’t.
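This check can even be automated over saved mail. Below is a minimal Python sketch using the standard library’s email module; the addresses are invented examples.

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message):
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Note the digit 1 standing in for the letter l in the Reply-To domain
suspicious = "From: Payroll <hr@example.com>\nReply-To: hr@examp1e.com\n\n"
```

A mismatch isn’t proof of fraud (mailing lists legitimately set Reply-To), but it’s a cheap signal worth a second look.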



Watch out for typosquatters


These are email domains that differ only slightly from the real company name. They are commonly used in Business Email Compromise campaigns, where fraudsters trick businesses and consumers into wiring money to a bank outside the US, often in China or Russia. This money is very difficult to recover because we don’t have the right legal relationships, and international banking laws don’t provide the same protections as US laws.

These transactions pose a big business risk: losses of 1.2 billion dollars in recent years. Even worse, this type of fraud is on the rise, up 270% according to FBI figures released just last month.[3]



Personal email accounts are not safe from fraudsters


Personal email account breaches are difficult to detect because the fraudulent request comes from a real account. Hackers use the compromised account to steal money from relatives and friends. Particularly vulnerable are older parents and grandparents.


Don’t be a victim. Here are some safe computing practices that can help you avert email fraud:


Keep private information private


Never share your password. If someone genuinely needs access to your account (this should never happen at work), change your password first, then change it back when they are done.


Add variety to your login credentials


If you use a free email account, give it a unique password, not the one you use for social media, other websites, and especially banking. Change your password frequently, at least once a quarter. It doesn’t need to be complicated: take your current password and add a special character for each quarter (see the example below), or invent your own scheme that you can remember. Also, pick a memorable change date, like the beginning of the quarter or when you pay your mortgage.


  • 1st quarter “!”
  • 2nd quarter “?”
  • 3rd quarter “&”
  • 4th quarter “%”


This makes it more difficult for password crackers to guess your password, and if there is a password leak at another site, you haven’t handed over the keys to your email house as well.
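The quarterly-suffix scheme above can be expressed in a few lines of Python; the base password in the test is of course just a placeholder.

```python
import datetime

QUARTER_SUFFIX = {1: "!", 2: "?", 3: "&", 4: "%"}  # the mapping from the list above

def seasonal_password(base, when=None):
    """Append the current quarter's special character to a base password."""
    when = when or datetime.date.today()
    quarter = (when.month - 1) // 3 + 1
    return base + QUARTER_SUFFIX[quarter]
```

The point is not the specific characters, which anyone could copy, but that the base stays unique per site while the rotation stays easy to remember.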


Keep your system patched


Many of the security vulnerabilities hackers exploit to compromise accounts are old and have already been fixed by the vendors. If you are in a corporation, talk to IT about automatic updates. If you can’t patch because you are running an older application, ask IT about creating a VM (virtual machine) for that old application so the rest of your system can stay patched and up to date. At home, make sure your operating system, PDF reader, and browsers are set for automatic updates. Patching these three things will protect you from the majority of risks.


Educate your friends and relatives


Warn your less tech-savvy acquaintances of the dangers of cyber fraud. Remind them that no true friend would ever ask for money in an email. If they do get such a request, advise them to make a phone call to the person. Also, give them the numbers of the fraud department at their bank so they have someone to call if they need advice.


Make sure your security software is current


Make sure everyone in your house has up-to-date anti-malware software. Put it on an auto-renewing charge if needed.


You may hear a lot of talk about next-generation endpoint protection. And yes, anti-malware software is not perfect, but you still brush and floss your teeth. If you can’t afford an anti-malware package, at least run the free Microsoft Security Essentials (for Vista through Windows 7; from Windows 8 on, it is built in as Windows Defender). For Mac users, Sophos offers a free antivirus solution.



As Albert Einstein said, “A ship is always safe at the shore - but that is NOT what it is built for.” If we want to fully utilize the Internet, a little caution and paranoia can reduce the risks.






It’s getting cloudy, and I’m loving it! AWS re:Invent 2015 is back and bigger than ever. SolarWinds will be there to talk about and demo full-stack cloud monitoring, so stop by Booth #643 for stellar swag and awesomely geeky conversations. Our Librato, Pingdom, and Papertrail subject matter experts will be on-site to answer questions about monitoring performance and events in the cloud. This covers application development, DevOps, infrastructure, and end-user experience, a.k.a. the full stack.


SolarWinds is also presenting a Lightning Talk in the AWS Partner Theater, Booth #645. joeld, SolarWinds CTO, and Nik Wekwerth, Librato Tech Evangelist, will discuss buying versus building monitoring software on Wednesday, October 7th at 12:40 PM PT and again on Thursday, October 8th at 12:40 PM PT. Their talk is entitled “Let’s not re:Invent the wheel: When to build vs. buy monitoring software.” I will be live-tweeting and wearing my “rage against the virtual machine” t-shirt (photo courtesy of @sthulin), so you can join the discussion by following @SolarWinds and the hashtags #SolarWinds and #reInvent.



The top 3 things that I’m looking forward to at re:Invent 2015 are:

1.)        Security Ops best practices

2.)        Microservices and containers integration best practices

3.)        Big Data with machine learning algorithms


They highlight the additional business values, enumerated below, that IT organizations must realize to remain relevant:

1.)        Compliance and governance

2.)        Agility, availability, and customizability

3.)        Automation and orchestration for scalability


Further, these cloud services need to be integrated with what IT is already responsible for; thus, hybrid IT monitoring and management will be key to the success of IT organizations as IT pros use their four core skills to bridge utility for their business units. Finally, for more context around cloud monitoring, please check out "Why Cloud Monitoring Matters in the App Age" by Dave Josephsen, Developer Evangelist for SolarWinds.


After re:Invent, I will share my experience at the event, highlight key takeaways, and present my thoughts on what it means for fellow IT pros.

[UPDATED: To include Booth #645 for the Lightning Talk Presentation and the added Thursday talk as well as the reference to Why Cloud Monitoring Matters]

In the past few articles I’ve discussed different aspects of moving your databases to the cloud. Based on the feedback and the conversations I’ve had with people in the industry, it’s clear that most businesses have no intention of moving their production databases into the cloud. The cloud is a good fit for startups, and for many businesses the hybrid cloud provides some affordable options for disaster recovery. Although cloud usage for production database workloads is limited today, there’s another place where the cloud makes perfect sense: test and development.





When your development team is working on a project, they often need to spin up new VMs for database servers and web servers, as well as development and staging systems. It’s not uncommon to need several of these for each developer. While you can certainly do this using in-house resources, you need the capacity to do so, and provisioning all these new VMs often requires the involvement of the virtualization administrator and sometimes even the storage administrator. These VMs and server systems are usually only needed on a temporary basis, but the process takes both time and resources.


A cloud like Azure or Amazon AWS really makes sense in this situation. Developers can provision the VMs and applications they need from the cloud without requiring any outside support or internal compute or storage resources. Plus, cloud providers like Microsoft Azure offer a number of prebuilt VM templates that can cut the creation of fully provisioned VMs from hours to just minutes.

For instance, to provision a VM on-premises you typically need to provision storage for the VM, create the VM, install the OS, configure the OS, and then install whatever applications are required. This can take hours for just one VM. A private cloud might make this faster, but most businesses don’t have private clouds. To provision a SQL Server VM in Azure, you simply open the Azure Portal, select the SQL Server template, pick the VM size you want and the Azure storage account to place it in, and hit the button. Azure takes care of creating the VM, installing the Windows Server operating system, optionally joining a domain, and installing SQL Server. The administrator doesn’t need to be involved, and the whole process is much faster than provisioning an on-premises VM. When the project is completed, the developer can simply tear down the VMs and get ready for the next project.
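The same portal flow can be scripted with the Azure CLI. This is an illustrative sketch only: the resource group, VM name, size, and image URN are assumptions, so list the current SQL Server images (az vm image list --publisher MicrosoftSQLServer --all) before relying on a specific URN.

```shell
# Create a throwaway resource group for the dev project
az group create --name dev-rg --location westeurope

# Provision a SQL Server VM from a gallery image (URN is an assumed example)
az vm create \
  --resource-group dev-rg \
  --name dev-sql01 \
  --image "MicrosoftSQLServer:SQL2014SP2-WS2012R2:Standard:latest" \
  --size Standard_D2_v2 \
  --admin-username devadmin

# When the project wraps up, tear everything down in one call
az group delete --name dev-rg --yes
```

Grouping the whole project under one resource group is what makes the teardown step a single command.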


Testing scenarios have similar advantages. There’s no need to deploy VMs on-site; instead, you can use VM templates from your cloud provider to rapidly deploy the scenarios you want to test and discard them when you’re done. For instance, if you need to test a SQL Server AlwaysOn deployment, you can use something like the Azure AlwaysOn template to rapidly roll out a five-VM deployment that includes two domain controllers, a file share witness, and two SQL Server VMs with AlwaysOn already configured. You pay only for the resources you use, while you use them.


The cloud may not make sense for most production SQL Server workloads, but it makes a lot of sense for test and development work.


There is a veritable food court of storage technologies on the market. How do you find the solution that’s a good fit for your organization? In my earlier blogs I talked about hyper-convergence, open source storage, software-defined storage, cloud storage, flash storage, and object storage. To make understanding these technologies a bit easier, I’ve listed the major ones below, with the pros and cons of each.


Storage technologies: pros and cons

Software-defined storage
  Pros:
  • Automate and manage data from a centralized location.
  • Build your own storage infrastructure, so you don’t have to worry about integrating different vendor products.
  • Flexible scaling lets you add new features with just software updates.
  Cons:
  • Independent software and hardware mean more components to manage.
  • You must ensure the infrastructure accounts for differences in latency and performance across your storage arrays.

Integrated computing platform (ICP), or hyper-convergence
  Pros:
  • Simplifies storage, computing, and network management.
  • Compute, storage, and network come as a complete package.
  • A good package for a moderate IT budget.
  Cons:
  • Making granular upgrades or minor tweaks is challenging. For example, if the cluster gets low on storage but compute is performing well, a storage-only upgrade is not possible.
  • You must upgrade overall capability by adding another appliance when the cluster runs low on storage, even if the rest of the system is operating well.

Open source storage
  Pros:
  • Save money on purchase and maintenance.
  • No compliance issues.
  • Code can be modified to fit your organization’s needs.
  Cons:
  • Hidden costs, especially the cost of hiring a trained admin who knows how to operate the system, or training a current admin to do so.
  • Must be compatible with the other platforms in your organization.
  • No technical guidance or customer support; you are on your own if something fails.

Cloud storage
  Pros:
  • Minimal initial investment.
  • Makes data available to users everywhere, with some availability outside the company VPN.
  • Multiple disaster recovery options keep data safe.
  Cons:
  • You are charged by the amount of storage you use. This can be expensive, though it benefits some organizations because they don’t have to pay up front.
  • The security of your data depends on third parties.
  • Because the bulk of your data has to be transferred via the Internet, you will pay more for the bandwidth you use.

Flash storage
  Pros:
  • Capacity continues to increase.
  • No moving parts (i.e., spinning disks) to create opportunities for storage mishandling.
  Cons:
  • More expensive than hard drives on a dollar-per-GB basis.
  • Performance across vendors and models can vary significantly, even for the same capacity and endurance rating.

Object storage
  Pros:
  • A strong solution for backup and recovery.
  • Good scalability and distributed access.
  Cons:
  • Data is unstructured.
  • Not suited to organizations that deal with a lot of transactional data (i.e., data that changes frequently).

“A company can spend hundreds of thousands of dollars on firewalls, intrusion detection systems and encryption and other security technologies, but if an attacker can call one trusted person within the company, and that person complies, and if the attacker gets in, then all that money spent on technology is essentially wasted”. - Kevin Mitnick


“But evil men and impostors will proceed from bad to worse, deceiving and being deceived”. - 2 Timothy 3:13 NASB



In the last five posts over the past three months, I have explored topics in security management. I touched on the top three types of threats in information security: infrastructure attacks, application attacks, and user attacks. In this final post of the series, I’m going to look back on each post and reflect on the audience’s feedback.



Dark Side Of The Encryption

The increasing amount of encrypted traffic inbound and outbound on the network certainly challenges the visibility and control of security management. Some commented that even a wonderful defense in depth still leaves something to be desired because of the nature of encrypted traffic. I agree that our monitoring technology and techniques will need to evolve, but I believe there isn’t a complete solution yet. And no, inserting SSL interception will break stuff.


It’s Christmas Day. Do You Know How Long You’ve Been DDoS’ed?

Many companies are still unprepared for DDoS attacks. It’s hard to defend against and mitigate massive DDoS attacks solely with perimeter security equipment. Wouldn’t it be nice if DDoS attacks could be stopped at the ISP before hitting your door? Some commented that this is indeed the practice at companies they knew of or worked for. It won’t be a surprise if a gaming network is taken offline by DDoS attacks before a major holiday.



OMG! My Website Got Hacked!

Let’s face it: even the best-designed and most thoroughly tested web applications still have many issues; just look at the OWASP Top 10 lists since 2004. Then we hook these web applications up to the public internet, the wild wild web of good and malicious users, and the same techniques are used again and again to successfully hack them. It’s not a matter of carelessness; web applications are written by humans on frameworks and systems that have vulnerabilities.


Almost 17 Years of SQL Injection, Are We Done Yet?

The No. 1 technique used to break web applications today is SQL injection. It’s not hard to figure out why this 17-year-old technique still cracks modern, well-protected web applications: one seldom finds a useful web site without an input form these days. If data sanitization is taught in every programming class, how has security, especially SQL injection prevention, become an afterthought? And why has the number of SQL injection incidents kept increasing through 2014 and 2015? I am looking forward to the 2016 OWASP Top 10 list; I won’t be surprised if Injection is still No. 1.
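The fix has been known as long as the attack: bind user input as a parameter instead of splicing it into the SQL string. Here is a minimal, self-contained illustration using Python’s sqlite3 module; the table and input are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: attacker-controlled input becomes part of the SQL text,
# so the WHERE clause is rewritten to match every row.
vulnerable_sql = "SELECT * FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(vulnerable_sql).fetchall()

# Safe: the driver binds the value, so the payload is treated as plain data.
safe = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
```

The vulnerable query returns the whole table; the parameterized one returns nothing, because no user is literally named after the payload.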



Spear Phishing - It Only Takes One Click

Ah, I like this topic because, as Kevin Mitnick put it, the human factor is truly security’s weakest link. We used to see phishing emails sent to a mass audience in large-scale campaigns. Now we see more and more targeted phishing, or spear phishing, emails aimed at individuals. The scary thing is that spear phishing works; even the Pentagon was hit by this kind of attack. Many companies have started internal “simulated phishing” campaigns to increase their employees’ security awareness, and have observed improving results. Still, hackers will continue to exploit the human factor.



So, what’s my conclusion? Sharpen your information security skills, because even though it’s getting more difficult, we are still able to win this loser’s game.


It’s been a great pleasure to interact with you on these information security topics this quarter. Please review my past posts in this series and leave your feedback here or on the individual posts.

Over the last three months I have taken a look at four of the major cloud providers that offer DBaaS. I was hoping to review a couple more but didn't get the opportunity (I'll explain why in a bit), so for now I'll sum up what I've learned in a single article, so those following along can get my personal thoughts on the services and where I think they are heading.


In the kickoff article I said I would like to look at several different service offerings and compare them. They included the following (note that each vendor link will take you to the respective review article on Thwack!):

  • Azure - Azure and I have a love-hate relationship. I've always been a VMware guy... so I inherently HATE that Hyper-V is the underlying hypervisor, lol. Yet at the same time, Azure has the best cloud control panel out there, in my opinion. Azure is also the only service I tried MS SQL DBaaS from... and it worked great, with no major bumps or bruises after the fact. Overall, if you need an MS SQL database delivered as a service, I don't think you can really beat Azure, and I would definitely recommend it to the people I help.


  • Google - Google is one of the newer players in cloud computing, at least as a public service offering. Yet their interface is pretty refined and the service works as advertised... in fact, it worked so well that I really didn't have much to write about, which is why you see the minimal performance testing in the article; without it, the article would have been about two paragraphs. Overall, if you are already using Google's cloud for other services, there is no reason I can think of not to use their DBaaS offering. I think the biggest thing Google will fight in the DBaaS world, as in cloud computing as a whole, is its competitors, Amazon and Azure.


  • Amazon - Speaking of Amazon... Amazon is easily the most mature of the DBaaS offerings I reviewed. It works as advertised and didn't give me any problems at all. Probably the biggest reason to pick Amazon over the others is that you already have other stuff there... if you can keep your database local to your cloud apps, why wouldn't you? Plus, Amazon probably has the most complete offering in terms of geographies. It's not all roses over at Amazon, though... but more on that below.


  • HP Helion - I've been using the Helion service since before it was called Helion... I started off in July 2013 with their DNS service for my blog, and had only played a little with the actual compute services. I also favor HP a little because they are using the purest form of OpenStack of any big service provider I know of. But with that said, they are also the newest into the DBaaS market... and unfortunately, the service does reflect that... Their support is very responsive and was able to get things going for me, only to have it break by the time I was able to get back in and test some more. Overall, they were the only service that I would NOT recommend at this time. I would certainly keep them on the radar, but they have a little work to do to smooth things out before I would bet my paycheck on their service.


  • OpenStack Trove - This is the only solution that I wasn't able to test... and probably the one I was looking forward to testing the most! Why? Well, because I'm a server hugger at heart... I like having hardware where I can see it and care for it... I guess I'm old school. Trove is basically a framework that allows you to deploy SQL DBaaS instances to your OpenStack cloud... The problem? I don't have an OpenStack cloud setup, and when I looked into how much work it would have taken to do it properly... plus the complexity of Trove on top... I just gave up before I even started. And if you read my normal blog, you will know that I don't give up easily. With that said, I think the biggest thing going against Trove, at least for now and maybe in the near future, is complexity. Once more of the OpenStack vendors integrate it with their distros, I think you will start to see it become more common.

So do they work or what?

The simple answer is heck yea they do! And they do it with pretty amazing uptime too. That isn't to say that there isn't downtime though, and it seems as though the old saying is true too, "The bigger they are the harder they fall."

I hate to say it, but the timing of the latest Amazon explosion couldn't have been better for this series of blog posts... but I'm sure the folks trying to find a hook-up or trying to watch some Netflix this past Saturday night would disagree with me. (BTW, if you had not heard, Amazon had some issues with their database services this past Sunday morning...) Luckily, however, the disruption only impacted one region of AWS... so if you had been a DBaaS customer, you could have failed over to the west region or one of the many others.

The real lesson here is that services fail just like servers do, so if you are thinking about using DBaaS to avoid downtime, just remember that it's still a server to somebody and there is still going to be downtime. The only difference is that it isn't your problem to fix when there is a problem.


What would I do?

Please do not let my opinion point you in any one particular direction... You really need to test them for yourself to see what works best for you. But! If I had to pick one today, I would pick Azure if I needed an MS SQL platform and Amazon if I needed MySQL. I base this on the thought that if I did need a database, I would probably also have other servers I would need. And if I needed Linux-based stuff, I would put it on Amazon... and if I needed Windows stuff, I would put it on Azure. Also keep in mind that I consider myself a server hugger... so before I would pick either one, I would try really hard to keep it on hardware I maintain and control... because I'm like that!


What's next?

One other thing to keep in mind is that the cloud space is by no means done. VMware, for example, does not have a DBaaS offering today... but I do know it's in beta, which means it will be production ready before too long. I also expect OpenStack to catch up as adoption rates continue to increase. So my advice is to test often, and when a project does require a DBaaS... make sure to re-review at that time so you know you're working with the best that is out there.

This will be the last in my current series of posts on Network Management, and as I look back on the topics of my previous posts, it is clear to me that there is no future for Network Management. So should you make sure to say thanks to our wonderful hosts at SolarWinds while you still can?

Not So Fast…


Yeah, hang on a moment. SolarWinds isn't going anywhere quite yet (at least, I sure hope not!). I don't really mean that we will stop managing things. Rather, I suspect we will end up changing the focus, and perhaps the mechanisms, of our management. With that in mind, I thought it might be worth opening up a conversation on how things might go in the future. My crystal ball is a little hazy, so I'm going to be a bit vague, and I'd like to hear your opinions on how you see things going over the next 5–10 years.


Solitary Confinement


When we talk about Network Management, it’s interesting that we’re focusing on managing just one part of the infrastructure, the part that for most companies is simply a transport for the stuff that makes the money. Mind you, we’re not alone in our myopia; server admins usually have a system in place to monitor their servers, but without the network, those servers aren’t really much use. The Storage team is keen on monitoring utilization, IOPS and similar, but maybe aren’t so concerned with everything else. The Security team monitors their Paranoia Alert and Notification Tracking System (PANTS), but that’s all they see, and the Database team is constantly squinting at their performance metrics and trying to improve them in isolation from everything else. I’m not saying that in the future we’ll stop keeping an eye on all these things, but I do believe we need to step back and start breaking down the barriers between these teams.


Rose Is a Rose Is a Rose Is a Rose


Resources are Resources are Resources are Resources. With the growing vision of a Software Defined Data Center (SDDC), it is becoming obvious that the current divisions that exist in most companies where we separate Network, Server, Database, Storage and Security are just not going to cut it going forward. The bottom line is that these are all just resources that support the applications (you know, the bits that make the money). To keep up with the ever-changing demands of applications, the SDDC starts viewing all these elements as resources to be provisioned on demand, in concert with one another.


From a management perspective this creates an interesting problem, which is that we need to have a wider view of our resources. We try not to allocate VMs on a host that’s already maxed out, and using that same logic, there’s no point allocating storage via an NFS mount on a SAN whose network ports are nearing saturation, or where the latency between the server and the SAN is suboptimal. We absolutely need to continue monitoring elements in the network, but we also need to understand how our resources interact and depend on one another in order to provide the best service to the applications using them. We need to understand not just where an application resides, but where its users and dependencies reside, and our –let’s call them Resource Management– systems will need to be able to monitor the infrastructure and present information and alerts to us in a way that is meaningful not just to a specific element, but to the tenants of that infrastructure. Do you spin up a new virtual firewall or use an existing one? Is a capacity problem one that should be solved by migrating VMs or by upgrading network or storage?
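To make the placement idea above concrete, here is a purely hypothetical Python sketch of what a resource-aware placement check could look like. All of the host names, metrics, and thresholds are invented for illustration; a real Resource Management system would pull these from live monitoring data.

```python
# Hypothetical resource-aware placement: before provisioning a VM, consider
# not just host CPU/memory but also the storage network path behind it.

def pick_host(hosts, san_ports, latency_ms, max_latency_ms=5.0):
    """Return the first host whose compute AND storage path have
    headroom, or None if nothing qualifies."""
    for host in hosts:
        if host["cpu_pct"] > 80 or host["mem_pct"] > 85:
            continue  # compute resources already stretched
        if san_ports[host["san_port"]]["util_pct"] > 70:
            continue  # SAN-facing network port nearing saturation
        if latency_ms[host["name"]] > max_latency_ms:
            continue  # server-to-SAN latency suboptimal
        return host["name"]
    return None

# Invented sample inventory.
hosts = [
    {"name": "esx01", "cpu_pct": 92, "mem_pct": 60, "san_port": "pA"},
    {"name": "esx02", "cpu_pct": 45, "mem_pct": 50, "san_port": "pB"},
]
san_ports = {"pA": {"util_pct": 40}, "pB": {"util_pct": 30}}
latency_ms = {"esx01": 1.2, "esx02": 2.8}

# esx01 is CPU-bound, so placement falls through to esx02.
print(pick_host(hosts, san_ports, latency_ms))
```

The point of the sketch is only that the decision crosses team boundaries: one function has to consult compute, network, and storage metrics together.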


My point is, having separate teams watching over separate management systems is a bit like trying to walk through Grand Central Terminal with your eyes focused firmly on your feet.


The Future


“What has been will be again, what has been done will be done again; there is nothing new under the sun.” (Ecclesiastes, 1:9)


What would a Resource Management System (RMS) look like? I wish I knew. Maybe SolarWinds has some insight; they're already using information gathered by one product to enhance the information in another. At some point, maybe we'll stop seeing different element management products being sold, and instead we'll be licensing a more holistic product suite that, yes, manages all elements, but whose core benefit is the sum of all those elements. Such a system could actually become part of the larger Software Defined Data Center as a provider of information that can be used to make automated deployment decisions.


Alerting needs to get smarter, and I’ve seen many previous attempts –often home grown– to relate an element failure to a service impact. It’s much easier to do this if we begin managing applications in relation to the resources they use rather than managing elements and trying to figure out who uses them afterwards. I think we have the ability to make this happen fairly quickly, but it does require that we start tracking applications, users and dependencies from the get go. Adding these things in afterwards can be a nightmare.
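As a toy illustration of managing applications in relation to their resources, here is a small Python sketch (application and resource names are made up) that inverts an application-to-resource dependency map, so an element failure can be translated directly into a service-impact alert:

```python
# Hypothetical sketch: track which resources each application depends on,
# so an element failure maps straight to the applications it impacts.
from collections import defaultdict

deps = {  # application -> resources it uses (invented names)
    "payroll":  ["vm-12", "san-lun-7", "sw-core-1"],
    "webstore": ["vm-31", "san-lun-7", "sw-edge-2"],
}

# Invert once: resource -> set of applications that depend on it.
impact = defaultdict(set)
for app, resources in deps.items():
    for r in resources:
        impact[r].add(app)

def impacted_apps(failed_element):
    """Applications affected when this element fails."""
    return sorted(impact.get(failed_element, set()))

print(impacted_apps("san-lun-7"))   # both apps share this LUN
print(impacted_apps("sw-edge-2"))   # only the webstore
```

Trivial as it is, the inversion only works if the dependency data exists, which is exactly why tracking applications, users, and dependencies from the get-go matters.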


Whaddya Think?


Is this utopia or hell on Earth? How far are we from this goal, or are you already there? Or, and this is entirely possible, am I way off the mark with this entire concept? You are out there using SolarWinds today, so what do you think you'll be buying and using from SolarWinds in the future?

Over the course of the past few articles, I've been covering some of the issues around moving your databases to the cloud. While the cloud is rapidly becoming an integral part of many IT infrastructures, businesses have been reluctant to move their database workloads from on-premise into the cloud, and rightfully so. The primary concern centers on the ability to remain in control of your data. When that data is on-premise, you are in control of it. When that data is in the cloud, that control is in the hands of the cloud provider. There are other concerns as well. The ability of multitenant cloud hosts to provide adequate performance is an important one. Security is another big issue, and for some countries there is the need to ensure that all of their data is maintained within that country's geographical boundaries. These sorts of issues make moving to the cloud difficult and in some cases impossible. However, the cloud doesn't have to be an all-or-nothing affair. The hybrid cloud provides a middle-of-the-road solution that can enable you to leverage cloud technologies where they make sense yet still maintain control of and security over your on-premise databases.


With the hybrid cloud, you can utilize your on-premise infrastructure for your day-to-day operations but also take advantage of the cloud for other related operations like backup, high availability, or disaster recovery. For instance, one scenario that utilizes the hybrid cloud can be seen in SQL Server's AlwaysOn Availability Groups. AlwaysOn Availability Groups were first introduced with SQL Server 2012. However, SQL Server 2014 adds several enhancements that enable you to take advantage of the hybrid cloud. AlwaysOn Availability Groups enable you to replicate the changes in multiple related databases to one or more secondary replicas. These secondary replicas can be on-premise and used for high availability, or, in the case of the hybrid cloud, the secondary replicas could be located in the cloud and used for disaster recovery, reporting, or backups. On-premise replicas typically use synchronous replication and can provide automatic failover. Secondary replicas in the cloud typically use asynchronous replication and require manual failover for disaster recovery scenarios. SQL Server 2014 provides built-in Azure integration that enables this kind of hybrid cloud scenario. Likewise, SQL Server 2014 also has the ability to back up to Azure, enabling you to easily leverage the cloud for offsite storage. The key to making hybrid cloud scenarios work is the network link between your on-premise network and the network that's provided by the cloud provider. Typically you need to have a hardware or software VPN in place that bridges your local network and the cloud. The VPN is responsible for securely routing your local network traffic to the cloud.
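To give a feel for the backup-to-Azure capability, here is a rough T-SQL sketch. The storage account, container, credential name, and database name are all placeholders; check the SQL Server documentation for the exact requirements in your version.

```sql
-- Placeholder names throughout. The credential holds the Azure
-- storage account name and its access key.
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage-account-access-key>';

-- Back up directly to a blob in Azure storage for offsite retention.
BACKUP DATABASE SalesDB
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
```

The same credential-plus-URL pattern is what makes the offsite storage scenario a one-statement operation instead of a backup-then-copy workflow.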


The cloud doesn’t need to be an all or nothing solution. The hybrid cloud can be an effective way to leverage cloud technologies while still maintaining your primary database workloads on-premise.



SolarWinds at VMworld 2015

Posted by kong.yang Employee Sep 15, 2015


Thank you to all of you who took the time to stop by and converse with the SolarWinds team at VMworld! We even had some thwack community members (fcpsolaradmin, rolltidega) and Geek Speak Ambassador cxi stop by the SolarWinds booth. If you stopped by our booth and I missed listing you, please comment below and I will add you to this post.


VMworld THE Virtualization Conference

VMware VMworld is still THE virtualization conference! This year VMware added a few wrinkles, such as a DevOps Day hackathon with vCloud Air. This was a not-so-subtle nod to DevOps, App Dev, and Infrastructure Developers. Again, there are disturbances in the Force as the rise of the IT Versatilists continues.

Three themes were highlighted by VMware at this year's VMworld (in no particular order):

  1. Container integration with the major container vendors like Docker, Kubernetes, CoreOS, Mesosphere's DataCenter OS, and Cloud Foundry;
  2. Cloud-Mobile strategy that included a joint keynote with Microsoft and introducing their ID management solution; and
  3. Data center abstraction continued via VMware EVO SDDC, which has VSAN for storage, NSX for networking, vSphere for Compute/memory, and vRealize for management.


Experts and Espresso Presented by SolarWinds

Once again, we had a blast hosting and presenting on the latest tech trends to VMworld attendees at our Experts and Espresso events on Monday and Tuesday mornings at Jillian's. Some would say that it was "Straight Outta Coffee." I do wish that I was a part of the photo below, but joeld, sqlrockstar, and I were presenting to the packed house inside. I may resort to photoshopping myself into this classic image.


eBook released at VMworld: Four Skills to Master Your Virtualization Universe 

I was honored to have been asked to write an eBook covering four essential skills (discovery, alerting, remediation, troubleshooting) to master any virtualization universe for VMworld. For a brief intro on the eBook, check out this post. To view the eBook, please visit here. To download the eBook, please visit here. It was an extraordinary journey and much props goes to emilie.


With that, I hope that you are ready for any!

Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many contractors located off-shore. These contractors were located mostly in India and were primarily programmers working on internal systems. We had a number of issues with them, having to do with inconsistent latencies and inconsistent desktop designs when we had them VPN into our network. We decided to deploy a cluster of VMware hosts, and onto these deploy static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we consider today to be VDI, I'll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, have them sit in the DMZ when appropriate, etc. Plus, we were able to mitigate the latency issues between the interface and the application back end, since the desktops resided inside the data center. We no longer had any issues with malware or viruses, and when a machine became compromised in any way, we were able to respond swiftly to the end user's needs by simply redeploying machines for them. Their data resided on a network volume, so in the event of a redeploy, they retained their settings and personal drives from the redirected home directory. It was a very viable solution for us.


Citrix had accomplished this type of concept years earlier, but as a primarily VMware shop, we wanted to leverage our ELA for this. However, we did have a few applications deployed by MetaFrame, which was still a functional solution.


Time moved on, and VMware View was released. This added the ability to deploy applications and desktops from thin images, and eased the special requirements on the storage. In addition, the desktop images could now be either persistent or non-persistent, meaning we could put out fresh desktops to users upon login. In this case, our biggest benefit was that the desktop would only take up space on the storage when in use, and if the user was not in the system, they'd have no footprint whatsoever.


There were some issues in this, though. The biggest concern was that the non-persistent desktops, upon login, would demand so much processing power that we'd experience significant "boot storms." These would cause our users to experience significant drag on the system. At the time, with a series of LUNs dedicated to this environment, all spinning disk, we had IO issues forcing us to sit in a traditional, fully persistent state.


In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add IO to the storage, and to expand the ease at which we were able to push applications to these machines.


The promise of VDI has some very compelling rationale. I’ve only outlined a few above, but in addition, the concepts of pushing apps to mobile devices, “Bring Your Own Device” as well as other functions are so very appealing. I will talk next about how VDI has grown, solutions have become more elegant, and how hardware has fixed many of our issues.

“Spear phishing continues to be a favored means by APT attackers to infiltrate target networks”. - Trend Micro Research Paper 2012


“The reason for the growth in spear phishing: it works”. - FireEye Spear Phishing Attacks White Paper



One morning, a colleague in my data center network team and I received the following email:

[Image: the phishing email]

I heard my colleague call the Help Desk and report that, a few minutes before, he had clicked a link in an email that he thought was a possible phishing email. It could have been a damaging magical click for my company; it could have put my company in the US headline news. But…


Two days before my colleague clicked the link on that phishing email…


Our Information Security (InfoSec) team had coordinated with the Help Desk, Email team, Network Security team (my team), and an outside vendor to create a phishing email campaign as part of our user security education. The outcomes were favorable, meaning there were users besides my colleague who failed the test. The follow-up user education sessions were convincing (of course, for those who failed…).



The above is an example of phishing, where phishing emails attack a mass audience. Cybercriminals, however, are increasingly using targeted attacks against individuals instead of large-scale campaigns. The individually targeted attack, aka spear phishing, is usually associated with Advanced Persistent Threats (APTs) for long-term cyberespionage.


The following incidents show that spear phishing has been pretty “successful,” and the damage was beyond what anyone imagined.



Employees of more than 100 Email Service Providers (ESPs) experienced targeted email attacks. The well-crafted emails addressed those ESP employees by name. Even worse, email security company Return Path, the security provider to those ESPs, was also compromised.



Four individuals in the security firm RSA were recipients of malicious spear phishing emails. The success of the attack resulted in the cybercriminals gaining access to RSA’s proprietary information on its two-factor authentication platform, SecurID. Due to the RSA breach, several high-profile US SecurID customers were compromised.



The White House confirmed that a computer system in the White House Military Office was attacked by Chinese hackers and that the attack affected an unclassified network. The hack began with a spear phishing attack against White House staffers; a White House Communications Agency staffer opened an email he wasn’t supposed to open.



An Associated Press journalist clicked a link that appeared to be a Washington Post news story on a targeted email. The AP’s official Twitter account was then hacked. A fake tweet reporting two explosions in the White House erased $136 billion in equity market value from the New York Stock Exchange index. In the same year, a hacker group in China was said to have hacked more than 100 US companies via spear phishing emails, stealing proprietary manufacturing processes, business plans, communications data, etc. In addition, you remember Target’s massive data breach, right?



Unauthorized access was obtained to the Centralized Zone Data System (CZDS) of the Internet Corporation for Assigned Names and Numbers (ICANN), the overseer of the Internet’s addressing system. ICANN announced that they believed the compromised credentials resulted from a spear phishing attack. Through that attack, access to ICANN's public Governmental Advisory Committee wiki, blog, and whois information portal was also gained. Again, you still remember Home Depot’s 2014 breach that exposed 56 million payment cards and 53 million email addresses, right?



The US confirmed that the Pentagon was hit by a spear phishing attack in July, most likely from Russian hackers, which compromised the information of around 4,000 military and civilian personnel who work for the Joint Chiefs of Staff. The hackers used automated social engineering tactics to gather information from employee social media accounts and then used that information to conduct the spear phishing attack.



How do we protect against and detect the increasing spear phishing attacks? Our beloved defense-in-depth comes to mind: NGFW, IPS/IDS, SPF/DKIM key validation, signature-less analysis services for zero-day exploit detection, IP/domain reputation services, web proxies, and up-to-date client/server patching, to name a few. Is a well-built security infrastructure sufficient against spear phishing? The incidents listed above tell us NO. In the case of the RSA breach, it only took one out of four individuals falling for the trap to make the hackers happy. So user education is an essential component of any spear phishing defensive strategy. Make smarter users: remind them regularly not to fall into the spear phishing trap, and send them mock phishing drills randomly.
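One simple heuristic that phishing filters and user-education tools often apply is checking whether a link's visible text names one domain while its actual href points somewhere else, exactly the trick used in many of the incidents above. Here is a hypothetical, stdlib-only Python sketch of that idea (the sample HTML and domain names are invented):

```python
# Hypothetical heuristic: flag HTML email links whose visible text looks
# like a URL but names a different host than the real link target.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # list of (shown_text, real_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            real = urlparse(self._href).hostname or ""
            # Parse the visible text as a URL too, so hosts compare cleanly.
            shown = urlparse(text if "://" in text else "http://" + text).hostname or ""
            if "." in shown and shown != real:
                self.suspicious.append((text, self._href))
            self._href = None

# Invented example: the text claims a bank, the href goes elsewhere.
html = '<a href="http://evil.example.net/login">www.mybank.com</a>'
checker = LinkChecker()
checker.feed(html)
print(checker.suspicious)
```

This obviously catches only one trick among many, which is the broader point: no single layer is sufficient, so the tooling and the user education have to work together.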


I won’t ask you to share your spear phishing story. But how does your organization protect against spear phishing? What user awareness and training does your organization provide? Please share; I would like to hear from you.
