
What makes an application perform optimally? I would say it is when the server or VM running the application, the network the application traverses, and the storage all perform well together. In this post, I provide information about storage and ways to configure it to avoid application performance bottlenecks.


LUN contention

LUN contention is the most common issue that storage admins try to avoid. Here are a few common mistakes that usually lead to LUN contention:

  • Deploying a new application on the same volume that handles busy systems, such as email, ERP, etc.
  • Using the same drives for applications with high IOPS.
  • Configuring many VMs on the same datastore.
  • Not matching storage drivers with processors.

Application issues can be traced to LUN contention only if the database in question is being monitored. Drilling down to the relevant LUN helps you confirm that the application is running fine.
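
To make that drill-down concrete, here is a minimal sketch of the kind of per-LUN check a monitoring script might perform. The LUN names, latency values, and the 20 ms threshold are all hypothetical placeholders; in practice these numbers would come from your array or monitoring tool.

```python
# Hypothetical per-LUN latency samples (milliseconds), e.g. exported from
# your array or monitoring tool. Names and values are illustrative only.
lun_latency_ms = {
    "LUN01_exchange": 4.2,
    "LUN02_erp": 27.8,
    "LUN03_vm_datastore": 31.5,
    "LUN04_fileshare": 6.1,
}

LATENCY_THRESHOLD_MS = 20.0  # assumed alerting threshold

def flag_contended_luns(samples, threshold):
    """Return LUNs whose latency exceeds the threshold."""
    return {lun: ms for lun, ms in samples.items() if ms > threshold}

if __name__ == "__main__":
    for lun, ms in flag_contended_luns(lun_latency_ms, LATENCY_THRESHOLD_MS).items():
        print(f"WARNING: {lun} averaging {ms:.1f} ms latency - possible LUN contention")
```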


Capacity planning

Poor application performance can often be tied to increased demand for services, and many times the culprit is storage and its IOPS. Storage is costly, and no organization wants to waste it.

Capacity planning involves knowing your application, how much space it needs, and the kind of storage it requires. Capacity planning helps in predictive analytics, which allows users to choose the amount of storage their application requires. Capacity planning should be done before the application is moved into production. Doing so not only helps to right-size the application’s storage environments, but eventually helps lower the number of performance issues an application might experience during rollout.
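
To make the "know how much space it needs" step concrete, here is a minimal sketch of the kind of back-of-the-envelope projection capacity planning implies. The starting size, growth rate, and planning horizon are assumptions you would replace with numbers measured from your own environment.

```python
def project_capacity_gb(current_gb, monthly_growth_rate, months):
    """Project storage needs assuming simple compound monthly growth."""
    sizes = [current_gb]
    for _ in range(months):
        sizes.append(sizes[-1] * (1 + monthly_growth_rate))
    return sizes

if __name__ == "__main__":
    # Hypothetical inputs: a 500 GB database growing ~5% per month, planned 12 months out.
    projection = project_capacity_gb(current_gb=500, monthly_growth_rate=0.05, months=12)
    print(f"Provision at least {projection[-1]:.0f} GB before rollout "
          f"(vs. {projection[0]:.0f} GB today)")
```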


Make sure it’s not the storage

SysAdmins often blame storage for application performance issues. It is always recommended to monitor your storage, which helps eliminate the blame game. Monitoring your storage shows you whether it is actually the storage causing performance issues, rather than the server or the network. Continuously monitoring your database can also help you avoid LUN contention. You will also be able to see how well your capacity planning is holding up, and be alerted when storage isn't performing optimally.

Storage is the lowest common denominator of application monitoring. Application stack monitoring allows you to troubleshoot issues from the application itself. Consider the following troubleshooting checklist, and ask yourself:

  • Is it the application itself?
  • Is it the server on which the application is hosted?
  • Is it the VM?
  • Is it the storage?

I will walk you through the different layers and how they help troubleshoot application issues in my next blog. Also, to find out more about App stack monitoring, please visit us at booth HH18 at this year’s IP Expo in London.

How does troubleshooting today compare with troubleshooting 10 years ago?

When:           August 2005

Where:          Any random organization experimenting with e-commerce


Employee:         I can’t access the CRM! We might lose a deal today if I can’t send that quote.

Admin:               Okay, check the WAN, check the LAN, and the server crm1.primary.

Junior:                All fine.

Admin:               Great. Restart the application service. That should solve it.

Junior:                Yes, Boss! The app service was down. I restarted it and now CRM is back up.


When:             August 2015

Where:            Any random organization that depends on e-commerce


Sales Rep:           Hey! The CRM is down! I can’t see my data. Where are my leads! I’m losing deals!

Sales Director:     Quick! Raise a ticket, call the help desk, email them, and cc me!

Help desk:           The CRM is down again! Let me assign it to the application team.

App team:            Hmm. I’ll reassign the ticket to the server guys and see what they say.

SysAdmin:           Should we check the physical server? Or the VM instance? Maybe the database is down.

DB Admin:           Array, disk, and LUN are okay. There are no issues with queries. I think we might be fine.

Systems team:     Alright, time to blame the network!

Net Admin:           No! It’s not the network. It’s never the network. And it never will be the network!

Systems team:     Okay, where do we start? Server? VM? OS? Apache®? App?


Notice the difference?


App deployment today

Today’s networks have changed a lot. There are no established points of failure like there were when networks were flat. Today’s enterprise networks are bigger, faster, and more complex than ever before. While current network capabilities deliver more services to more users more efficiently, they have also increased the time it takes to pinpoint the cause of a failure, let alone resolve it.


For example, let’s say a user complains about failed transactions. Where would you begin troubleshooting? Keep in mind that you’ll need to check the Web transaction failures, make sure the server is not misbehaving, and confirm that the database is healthy. Don’t forget the hypervisors, VMs, OS, and the network. Add to that the switching between multiple monitoring tools, windows, and tabs, trying to correlate the information, figuring out what depends on what, collaborating with various teams, and more. All of this increases the mean time to repair (MTTR), which means increased service downtime and lost revenue for the enterprise.


Troubleshoot the stack

Applications are not standalone entities installed on a Windows® server anymore. Application deployment relies on a system of components that must perform in unison for the application to run optimally. A typical app deployment in most organizations looks like this:

[Image: diagram of a typical application stack]

When an application fails to function, any of these components could be blamed for the failure. When a hypervisor fails, you must troubleshoot multiple VMs and the multiple apps they host that may have also failed. Where would troubleshooting begin under these circumstances?


Ideally, the process would start with finding out which entity in the application stack failed or is in a critical state. Next, you determine the dependencies of that entity with other components in the application stack. For example, let’s say a Web-based application is slow. A savvy admin would begin troubleshooting by tracking Web performance, move to the database, and on to the hosting environment, which includes the VM, hypervisor, and the rest of the physical infrastructure.


To greatly reduce MTTR, it is suggested that you begin troubleshooting your application stack.  This will help move your organization closer to the magic three nines for availability. To make stack-based troubleshooting easier, admins can adopt monitoring tools that support correlation and mapping of dependencies, also known as the AppStack model of troubleshooting.


Learn more at IPExpo London

If you would like to learn more, or see the AppStack demo, SolarWinds will be at booth HH18 at IPExpo London.

Security is Everyone’s Job



“Never was anything great achieved without danger.” -Niccolo Machiavelli


As we begin National Cybersecurity Awareness Month, it's a great time to reflect on how we can all protect ourselves at work and at home. Let’s look at some current risks and see what changes we can adopt to mitigate them.


Email - we need it, love it, live it, but it’s risky.


Phishing is still the number one risk for most of us. Whether it arrives through an automatic preview in our work email system or a browser injection on Web mail, spam and phishing are both a security risk and an irksome annoyance.


Unfortunately, we are not winning the battle against email cybercriminals and overzealous marketers, despite almost ubiquitous deployment of spam filters. Here we are in 2015, and spam still represents >10% of our inboxes.[1]  The statistics on phishing are even worse. From 2014 to 2015, the number of phishing sites increased from about 25,000 to 33,500, according to Google[2].


Furthermore, malicious email is becoming more sophisticated by embedding macros in ordinary looking attachments. In our busy lives, it’s easy to accidentally click on an attachment or link with malicious content.


The following are some email checks to keep top of mind:


Stay in familiar territory


Make sure the from: and reply-to: addresses match, or are from a company you know. Email phishers will try to fool you with an email that looks like it's from someone you know, when it isn't.
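
As a rough illustration of that check, here is a minimal Python sketch that compares the From: and Reply-To: domains of a saved message. The .eml filename is hypothetical, and this is only a heuristic, not a complete phishing filter.

```python
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def reply_to_mismatch(eml_path):
    """Return (from_domain, reply_to_domain) if they differ, else None."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    from_domain = parseaddr(str(msg.get("From", "")))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(str(msg.get("Reply-To", "")))[1].rpartition("@")[2].lower()
    if reply_domain and reply_domain != from_domain:
        return from_domain, reply_domain
    return None

if __name__ == "__main__":
    mismatch = reply_to_mismatch("suspicious.eml")  # hypothetical saved message
    if mismatch:
        print(f"Caution: From is @{mismatch[0]} but replies go to @{mismatch[1]}")
```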



Watch out for typosquatters


These are email domains that are just slightly different from the real company name. They are commonly used in Business Email Compromise campaigns, where fraudsters trick businesses and consumers into sending money to a bank outside the US, often in China or Russia. This money is very difficult to recover because we don’t have the right legal relationships, and international banking laws don’t provide the same protection as US laws.

These transactions pose a big business risk: businesses have lost 1.2 billion dollars to them in recent years. Even worse, this type of fraud is on the rise, up 270% according to FBI figures released just last month.[3]
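
A lightweight way to spot near-miss domains is to compare the sender's domain against the domains you normally do business with. Here is a minimal sketch using Python's standard library; the list of trusted domains is hypothetical.

```python
import difflib

# Hypothetical list of domains your organization normally corresponds with.
TRUSTED_DOMAINS = ["examplebank.com", "acme-supplier.com", "payroll-partner.com"]

def looks_like_typosquat(sender_domain, trusted=TRUSTED_DOMAINS):
    """Flag domains that are close to, but not exactly, a trusted domain."""
    sender_domain = sender_domain.lower()
    if sender_domain in trusted:
        return None
    close = difflib.get_close_matches(sender_domain, trusted, n=1, cutoff=0.8)
    return close[0] if close else None

if __name__ == "__main__":
    suspect = looks_like_typosquat("examp1ebank.com")
    if suspect:
        print(f"Careful: this domain closely resembles {suspect} but is not it.")
```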



Personal email accounts are not safe from fraudsters


Personal email account breaches are difficult to detect because the fraudulent request comes from a real account. Hackers use the compromised account to steal money from relatives and friends. Particularly vulnerable are older parents and grandparents.


Don’t be a victim. Here are some safe computing practices that can help you avert email fraud:


Keep private information private


Never share your password. If someone genuinely needs access to your account (which should never happen at work), change your password first, then change it back when they are done.


Add variety to your login credentials


If you use a free email account, use a unique password for this account, not the one you use for social media, websites, and especially banking. Change your password frequently, at least once a quarter. It doesn’t need to be complicated: use your current password and add a special character for each quarter (see the example below), or create your own scheme that you can remember. Also, make your change date memorable, like the beginning of the quarter, or when you pay your mortgage.


  • 1st quarter “!”
  • 2nd quarter “?”
  • 3rd quarter “&”
  • 4th quarter “%”


This makes it more difficult for password crackers to guess your password, and if there is a password leak at another site, you haven’t handed over the keys to your email house as well.
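
If it helps to see the rotation scheme spelled out, here is a tiny sketch of the quarter-to-character mapping described above. The base password is obviously a placeholder, and this only illustrates the rotation idea, not a full password policy.

```python
from datetime import date

QUARTER_SUFFIX = {1: "!", 2: "?", 3: "&", 4: "%"}

def quarterly_password(base, today=None):
    """Append the current quarter's special character to a base password."""
    today = today or date.today()
    quarter = (today.month - 1) // 3 + 1
    return base + QUARTER_SUFFIX[quarter]

if __name__ == "__main__":
    print(quarterly_password("CorrectHorse"))  # e.g. "CorrectHorse%" in Q4
```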


Keep your system patched


Many of the security vulnerabilities exploited by hackers to compromise accounts are old and have been fixed by the vendors. If you are in a corporation, talk to IT about automatic updates. If you can’t patch because you are running an older application, ask IT about creating a VM (virtual machine) for you to run that old application. This helps you keep your system patched and up to date. At home, make sure your operating system, PDF reader (adobe.com), and browsers are set for automatic updates. Patching these three things will protect you from the majority of risks.


Educate your friends and relatives


Warn your less tech-savvy acquaintances of the dangers of cyber fraud. Remind them that no true friend would ever ask for money in an email. If they do get such a request, advise them to make a phone call to the person. Also, give them the numbers of the fraud department at their bank so they have someone to call if they need advice.


Make sure your security software is current


Make sure everyone in your house has up-to-date anti-malware software. Put it on an auto-renewing charge if needed.


You may hear a lot of talk about next-generation endpoint protection. And yes, anti-malware software is not perfect, but you still brush and floss your teeth. If you can’t afford an anti-malware package, at least run the free Microsoft Security Essentials (for Vista and Windows 7; Windows 8 and later include Windows Defender). For Mac users, Sophos offers a free antivirus solution.



As Albert Einstein said, “A ship is always safe at the shore - but that is NOT what it is built for.” If we want to fully utilize the Internet, a little caution and paranoia can reduce the risks.


[1] http://www.radicati.com/wp/wp-content/uploads/2011/05/Email-Statistics-Report-2011-2015-Executive-Summary.pdf


[2] https://www.yahoo.com/tech/googles-security-news-malwares-down-and-youre-120208482874.html

[3] https://www.ic3.gov/media/2015/150827-1.aspx

It’s getting cloudy. And I’m loving it! AWS re:Invent 2015 is back again and bigger than ever. SolarWinds will be there to talk about and demo full-stack cloud monitoring, so stop by Booth #643 for stellar swag and awesomely geeky conversations. Our Librato, Pingdom, and Papertrail subject matter experts will be on-site to answer questions about monitoring performance and events in the cloud. This includes application development, DevOps, infrastructure, and end-user experience, a.k.a. the full stack.


SolarWinds is also presenting a Lightning Talk in the AWS Partner Theater Booth #645. joeld, SolarWinds CTO, and Nik Wekwerth, Librato Tech Evangelist, will discuss buying or building monitoring software on Wednesday, October 7th at 12:40PM PT and again on Thursday, October 8th at 12:40PM PT. Their talk is entitled Let’s not re:Invent the wheel: When to build vs. buy monitoring software. I will be live tweeting and wearing my rage against the virtual machine t-shirt (photo courtesy of @sthulin) so you can join in on the discussion by following @Solarwinds and hashtags: #SolarWinds #reInvent.



The top 3 things that I’m looking forward to at re:Invent 2015 are:

1.)        Security Ops best practices

2.)        Microservices and containers integration best practices

3.)        Big Data with machine learning algorithms


They highlight the additional business values enumerated below that IT organizations must realize to remain relevant:

1.)        Compliance and governance

2.)        Agility, availability, and customizability

3.)        Automation and orchestration for scalability


Further, these cloud services need to be integrated with what IT is already responsible for; thus, hybrid IT monitoring and management will be key to the success of IT organizations as IT pros use their four core skills to bridge utility for their business units. Finally, for more context around cloud monitoring, please check out "Why Cloud Monitoring Matters in the App Age" by Dave Josephsen, Developer Evangelist for SolarWinds.


After re:Invent, I will share my experience at the event, highlight key takeaways, and present my thoughts on what it means for fellow IT pros.

[UPDATED: To include Booth #645 for the Lightning Talk Presentation and the added Thursday talk as well as the reference to Why Cloud Monitoring Matters]

In the past few articles I’ve discussed the different aspects of moving your databases to the cloud. Based on the feedback and the conversations I’ve had with people in the industry, it’s clear that most businesses have no intention of moving their production databases into the cloud. The cloud is a good fit for startups, and for many businesses the hybrid cloud provides some affordable options for disaster recovery. Although cloud usage for production database workloads is limited today, there’s another place where the cloud makes perfect sense: test and development.





When your development team is working on a project, they often need to spin up new VMs for database servers and web servers, as well as development and staging systems. It’s not uncommon to need several of these for each developer. While you can certainly do this using in-house resources, you need to have the capacity to do so, and many times provisioning all these new VMs requires the involvement of the virtualization administrator and sometimes even the storage administrator. These VMs and server systems are usually only needed on a temporary basis, but this process takes both time and resources.


A cloud like Azure or Amazon AWS really makes sense in this situation. Developers can provision the VMs and applications they need from the cloud without requiring any outside support or internal compute or storage resources. Plus, cloud providers like Microsoft Azure offer a number of prebuilt VM templates that can cut the creation of fully provisioned VMs from hours to just minutes. For instance, to provision a VM on-premise you typically need to provision storage for the VM, create the VM, install the OS, configure the OS, and then install whatever applications are required. This can take hours for just one VM. If you’re using a private cloud this might be faster, but most businesses don’t have private clouds. To provision a SQL Server VM in Azure, you simply open the Azure Portal, select the SQL Server template, choose the size of VM you want and the Azure storage account to place it in, and hit the button. Azure takes care of creating the VM, installing the Windows Server operating system, optionally joining a domain, and installing SQL Server. The administrator doesn’t need to be involved, and the whole process is much faster than provisioning an on-premise VM. When the project is completed, the developer can simply tear down the VMs and get ready for the next project.


Testing scenarios have similar advantages. There’s no need to deploy VMs on-site. Instead, you can use VM templates from your cloud provider to rapidly deploy the scenarios you want to test and discard them when you’re done. For instance, if you need to test a SQL Server AlwaysOn deployment, you can use something like the Azure AlwaysOn template to rapidly roll out a five-VM deployment that includes two domain controllers, a file server witness, and two SQL Server VMs with AlwaysOn already configured, enabling you to rapidly deploy your test environment. You pay for the resources you use only while you use them.


The cloud may not make sense for most production SQL Server workloads but it makes a lot of sense for test and development work.


There is a veritable food court of storage technologies on the market. How do you go about finding the solution that’s a good fit for your organization? In my earlier blogs I talked about hyper-convergence, open source storage, software-defined storage, cloud storage, flash storage, and object storage. To make the process of understanding storage technologies a bit easier, I’ve listed the major technologies and included the pros and cons of each.


Software-defined storage

Pros:
  • Automate and manage data from a centralized location.
  • Build your own storage infrastructure, so you don’t have to worry about integrating different vendor products.
  • Flexible scaling allows you to add new features with just software updates.

Cons:
  • Independent software and hardware means more components to manage.
  • Must ensure that the infrastructure matches the difference in latency and performance across your storage arrays.

Integrated computing platform (ICP), or hyper-convergence

Pros:
  • Simplifies storage, computing, and network management.
  • Compute, storage, and network come as a complete package.
  • Good package for a moderate IT budget.

Cons:
  • Making granular upgrades or minor tweaks is challenging. For example, if the cluster gets low on storage but compute is performing well, a storage upgrade alone is not possible.
  • Must upgrade overall capability by adding another appliance when the cluster runs low on storage, even if the rest of the system is operating well.

Open source storage

Pros:
  • Save money on purchase and maintenance.
  • No compliance issues.
  • Code can be modified based on your organization’s needs.

Cons:
  • Hidden costs are involved, especially the cost of hiring a trained admin who knows how to operate the system, or training a current admin to do so.
  • Must be compatible with other platforms in your organization.
  • No technical guidance or customer support. You are on your own if something fails.

Cloud storage

Pros:
  • Minimal initial investment.
  • Makes data available for users everywhere. Some availability outside the company VPN.
  • Multiple disaster recovery options keep data safe.

Cons:
  • You are charged according to the amount of storage you use. This can be expensive, but beneficial at times for some organizations because they don’t have to pay ahead.
  • The security of your data depends on your third parties.
  • Because the bulk of your data has to be transferred via the Internet, you will pay more for the bandwidth you use.

Flash storage

Pros:
  • Able to continually increase capacity.
  • No moving parts (i.e., spinning disks) to create opportunities for storage mishandling.

Cons:
  • More expensive than hard drives on a dollar-per-GB basis.
  • Performance across vendors and models can vary significantly, even for the same capacity and endurance rating.

Object storage

Pros:
  • Best solution for backup and recovery options.
  • Good scalability and distributed access.

Cons:
  • Data is unstructured.
  • Not suited for organizations that deal with a lot of transactional data (i.e., data that frequently changes).

“A company can spend hundreds of thousands of dollars on firewalls, intrusion detection systems and encryption and other security technologies, but if an attacker can call one trusted person within the company, and that person complies, and if the attacker gets in, then all that money spent on technology is essentially wasted”. - Kevin Mitnick


“But evil men and impostors will proceed from bad to worse, deceiving and being deceived”. - 2 Timothy 3:13 NASB



In the last five posts over the past three months, I have explored topics in security management. I touched on the top three types of threats in information security: infrastructure attacks, application attacks, and user attacks. In this last post of my series, I’m going to look back on each post and reflect on the audience’s feedback.



Dark Side Of The Encryption

The increasing amount of encrypted traffic inbound and outbound on the network certainly challenges the visibility and control of security management. Some commented that even a well-designed defense in depth leaves something to be desired because of the nature of encrypted traffic. I agree that our monitoring technology and techniques will need to evolve, but I don’t believe there is a good solution yet. And no, inserting SSL interception will break things.


It’s Christmas Day. Do You Know How Long You’ve Been DDoS’ed?

Many companies are still unprepared for DDoS attacks. It’s hard to defend against and mitigate massive DDoS attacks solely with perimeter security equipment. Isn’t it nice when DDoS attacks can be stopped at the ISP before they hit your door? Some commented that this is indeed the practice at companies they knew of or worked for. It won’t be a surprise if a gaming network is taken offline by DDoS attacks before a major holiday.



OMG! My Website Got Hacked!

Let’s face it. The best designed and most thoroughly tested web applications still have many issues - just look up the OWASP Top 10 lists since 2004. Now we hook these web applications to the public internet, the wild wild web of good and malicious users. The same techniques are used again and again to successfully hack these internet-facing web applications. It’s not just a matter of carelessness. Web applications are written by humans on frameworks and systems that have vulnerabilities.


Almost 17 Years of SQL Injection, Are We Done Yet?

The No. 1 technique that breaks web applications today is SQL injection. It’s not hard to figure out why this 17-year-old technique still cracks modern, well-protected web applications. One seldom finds a useful web site without an input form nowadays. If data sanitization is taught in every programming class, how does security, especially SQL injection prevention, become an afterthought? And why has the number of SQL injection incidents kept increasing in 2014 and 2015? I am looking forward to the 2016 OWASP Top 10 list; I won’t be surprised if injection is still No. 1.
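
The fix has been known for as long as the attack: never splice user input into SQL strings, and always use parameterized queries. Here is a minimal sketch with Python's built-in sqlite3 module; the table and form field are hypothetical, but the same bind-parameter idea applies to any database driver.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt from a web form

# Vulnerable: the input is spliced directly into the SQL string.
vulnerable = f"SELECT email FROM users WHERE name = '{user_input}'"
print("injected query returns:", conn.execute(vulnerable).fetchall())

# Safe: the driver binds the value as data, never as SQL.
safe = "SELECT email FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())
```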



Spear Phishing - It Only Takes One Click

Ah, I like this topic. The reason is, as Kevin Mitnick put it, the human factor is truly security’s weakest link. We used to see phishing emails sent to mass audiences in large-scale campaigns. Now we see more and more targeted phishing, or spear phishing, emails aimed at individuals. The scary thing is that spear phishing works. Hey, even the Pentagon was hit by this kind of attack. Many companies have started internal “simulated phishing” campaigns in order to increase their employees’ security awareness, and have observed improving results. However, hackers will continue to exploit this human factor.



So, what’s my conclusion? Well, sharpen your information security skills because even though it’s getting more difficult, we are still able to win this loser’s game.


It’s been a great pleasure to interact with you on these information security topics this quarter. Please review my past posts in this series and leave your feedback here or on the individual post.

Over the last three months I have taken a look at four of the major cloud providers that offer DBaaS. I was hoping to review a couple of additional ones but didn't get the opportunity (I'll explain more in a bit), so for now I thought I would sum up what I've learned in a single article, so those following along can get my personal thoughts on the services and where they are heading (in my opinion).


In the kick-off article I said I would like to look at several different service offerings and compare them. They included the following (note that each of the vendor links will take you to the respective review article on Thwack!):

  • Azure - Azure and I have a love-hate relationship overall. I've always been a VMware guy... so I inherently HATE that Hyper-V is the underlying hypervisor, lol. Yet at the same time, Azure has the best cloud control panel out there, in my opinion. Azure is also the only service where I tried MS SQL DBaaS... and it worked great, no major bumps or bruises after the fact. Overall, if you need a MS SQL DB and want it delivered as a service, then I don't think you can really beat Azure, and I would definitely recommend it to the people I help.


  • Google - Google is one of the newer players in the cloud computing arena, at least as a public service offering. Yet their interface is pretty refined and the service works as advertised... in fact it worked so well that I really didn't have much to write about, which is why you see the minimal performance testing in the article as well... without it the article would have been about two paragraphs. Overall I would say that if you are already using Google's cloud for other services, then there is no reason I can think of why you would not want to use their DBaaS offering. I think the biggest thing Google will fight in the DBaaS world, as well as the cloud computing world as a whole, is their competitors Amazon and Azure.


  • Amazon - Speaking of Amazon... Amazon is easily the most mature of the DBaaS offerings that I reviewed. It works as advertised and didn't give me any problems at all. Probably the biggest reason I can think of to pick Amazon over the others is that you already have other stuff there... if you can keep your database local to your cloud apps, then why wouldn't you? Plus, Amazon probably has the most complete offering in terms of geographies (see the boto3 sketch just after this list for a sense of how little effort provisioning takes). It's not all roses over at Amazon though... but more on that below.


  • HP Helion - I've been using the Helion service since before it was called Helion... I started off in July 2013 with their DNS service for my blog, and had only played a little with the actual compute services. I also favor HP a little just because they are using the purest form of OpenStack of any big service provider that I know of. With that said, they are also the newest entrant into the DBaaS market... and unfortunately the service does reflect that. Their support is very responsive and was able to get things going for me, only to have it break by the time I was able to get back in and test some more. Overall, they were the only service that I would NOT recommend at this time. I would certainly keep them on the radar, but they have a little work to do to smooth things out before I would bet my paycheck on their service.


  • OpenStack Trove - This is the only solution that I wasn't able to test... and probably the one I was really looking forward to testing the most! Why? Well, because I'm a server hugger at heart... I like having hardware where I can see it and care for it... I guess I'm old school. Trove is basically a framework that allows you to deploy SQL DBaaS instances to your own OpenStack cloud. The problem? I don't have an OpenStack cloud setup, and when I looked into how much work it would have taken to do it properly... plus the complexity of Trove on top... I just gave up before I even started. And if you read my normal blog, http://jpaul.me, you will know that I don't give up easily. With that said, I think the biggest thing going against Trove, at least for now and maybe in the near future, is complexity. Once more of the OpenStack vendors integrate it with their distros, I think you will start to see it become more common.
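
To give a feel for how little ceremony the Amazon offering involves, here is a minimal sketch of provisioning a small MySQL RDS instance with boto3. The identifier, credentials, and sizing are placeholders, and it assumes your AWS credentials are already configured; treat it as an illustration rather than a production template.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical dev/test instance; all names and sizes are placeholders.
response = rds.create_db_instance(
    DBInstanceIdentifier="dev-mysql-01",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,              # GB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-NotARealPassword1",
)
print("Provisioning:", response["DBInstance"]["DBInstanceStatus"])

# When the dev/test project wraps up, tear the instance back down:
# rds.delete_db_instance(DBInstanceIdentifier="dev-mysql-01", SkipFinalSnapshot=True)
```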

So do they work or what?

The simple answer is heck yea they do! And they do it with pretty amazing uptime too. That isn't to say that there isn't downtime though, and it seems as though the old saying is true too, "The bigger they are the harder they fall."

I hate to say it, but the timing of the latest Amazon outage couldn't have been better for this series of blog posts... though I'm sure the folks trying to find a hook-up or watch some Netflix this past Saturday night would disagree with me. (BTW, if you had not heard, Amazon had some issues with their database services this past Sunday morning...) Luckily, the disruption only impacted one region of AWS... so if you had been a DBaaS customer you could have failed over to the west region or one of the many others.

The real lesson here is that services fail just like servers do, so if you are thinking about using DBaaS to avoid downtime, just remember that it's still a server to somebody and there is still going to be downtime. The only difference is that it isn't your problem to fix when something breaks.


What would I do?

Please do not let my opinion point you in any one particular direction... You really need to test them for yourself to see what works best for you. But! If I had to pick one today, I would pick Azure if I needed a MS SQL platform and Amazon if I needed MySQL. I base this on the thought that if I did need a database, I would probably also have other servers I would need too. And if I needed Linux-based stuff I would put it on Amazon... and if I needed Windows stuff I would put it on Hyper-V. Also keep in mind that I consider myself a server hugger... so before I picked either one, I would try really hard to keep it on hardware I maintain and control... because I'm like that!


What's next?

One other thing to keep in mind is that the cloud space is by no means done. VMware, for example, does not have a DBaaS offering today... but I do know it's in beta, which means it will be production ready before too long. I also expect OpenStack to catch up as adoption rates continue to increase. So my what's-next advice is to test often, and when a project does require a DBaaS... make sure to re-review at that time so you know you're working with the best that is out there.

This will be the last in my current series of posts on Network Management, and as I look back on the topics of my previous posts it is clear to me that there is no future for Network Management. So should you make sure to say thanks to our wonderful hosts at Solarwinds while you still can?

Not So Fast…


Yeah, hang on a moment, Solarwinds isn’t going anywhere quite yet (at least, I sure hope not!). I don’t really mean that we will stop managing things. Rather, I suspect we will end up changing the focus, and perhaps the mechanisms, of our management. With that in mind I thought it might be worth opening up a conversation on how things might go in the future. My crystal ball is a little hazy so I’m going to be a bit vague, and I’d like to hear your opinions on how you see things going over the next 5–10 years.


Solitary Confinement


When we talk about Network Management, it’s interesting that we’re focusing on managing just one part of the infrastructure, the part that for most companies is simply a transport for the stuff that makes the money. Mind you, we’re not alone in our myopia; server admins usually have a system in place to monitor their servers, but without the network, those servers aren’t really much use. The Storage team is keen on monitoring utilization, IOPS and similar, but maybe aren’t so concerned with everything else. The Security team monitors their Paranoia Alert and Notification Tracking System (PANTS), but that’s all they see, and the Database team is constantly squinting at their performance metrics and trying to improve them in isolation from everything else. I’m not saying that in the future we’ll stop keeping an eye on all these things, but I do believe we need to step back and start breaking down the barriers between these teams.


Rose Is a Rose Is a Rose Is a Rose


Resources are Resources are Resources are Resources. With the growing vision of a Software Defined Data Center (SDDC), it is becoming obvious that the current divisions that exist in most companies where we separate Network, Server, Database, Storage and Security are just not going to cut it going forward. The bottom line is that these are all just resources that support the applications (you know, the bits that make the money). To keep up with the ever-changing demands of applications, the SDDC starts viewing all these elements as resources to be provisioned on demand, in concert with one another.


From a management perspective this creates an interesting problem, which is that we need to have a wider view of our resources. We try not to allocate VMs on a host that’s already maxed out, and using that same logic, there’s no point allocating storage via an NFS mount on a SAN whose network ports are nearing saturation, or where the latency between the server and the SAN is suboptimal. We absolutely need to continue monitoring elements in the network, but we also need to understand how our resources interact and depend on one another in order to provide the best service to the applications using them. We need to understand not just where an application resides, but where its users and dependencies reside, and our –let’s call them Resource Management– systems will need to be able to monitor the infrastructure and present information and alerts to us in a way that is meaningful not just to a specific element, but to the tenants of that infrastructure. Do you spin up a new virtual firewall or use an existing one? Is a capacity problem one that should be solved by migrating VMs or by upgrading network or storage?


My point is, having separate teams watching over separate management systems is a bit like trying to walk through Grand Central Terminal with your eyes focused firmly on your feet.


The Future


“What has been will be again, what has been done will be done again; there is nothing new under the sun.” (Ecclesiastes, 1:9)


What would a Resource Management System (RMS) look like? I wish I knew. Maybe Solarwinds have some insight; they’re already using information gathered by one product to enhance the information in another. At some point, maybe we’ll stop seeing different element management products being sold and instead we’ll be licensing a more holistic product suite that, yes, manages all elements, but whose core benefit is the sum of all those elements. Such a system could actually become part of the larger Software Defined Data center as a provider of information that can be used to make automated deployment decisions.


Alerting needs to get smarter, and I’ve seen many previous attempts –often home grown– to relate an element failure to a service impact. It’s much easier to do this if we begin managing applications in relation to the resources they use rather than managing elements and trying to figure out who uses them afterwards. I think we have the ability to make this happen fairly quickly, but it does require that we start tracking applications, users and dependencies from the get go. Adding these things in afterwards can be a nightmare.
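
To make the idea of relating an element failure to a service impact slightly more tangible, here is a toy sketch of the kind of application-to-resource mapping described above. The application names and resource IDs are entirely made up.

```python
# Hypothetical application-to-resource dependency map.
APP_DEPENDENCIES = {
    "crm":      ["vm-web-01", "vm-db-01", "san-lun-07", "fw-edge-01"],
    "payroll":  ["vm-app-02", "san-lun-07"],
    "intranet": ["vm-web-03", "nas-share-02"],
}

def impacted_applications(failed_element, dependencies=APP_DEPENDENCIES):
    """Return the applications whose service depends on the failed element."""
    return [app for app, resources in dependencies.items()
            if failed_element in resources]

if __name__ == "__main__":
    # e.g. a storage LUN goes critical; which services should the alert name?
    print(impacted_applications("san-lun-07"))  # ['crm', 'payroll']
```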


Whaddya Think?


Is this utopia or hell on Earth? How far are we from this goal, or are you already there? Or –and this is entirely possible– am I way off the mark with this entire concept? You are out there using Solarwinds today, so what do you think you’ll be buying and using from Solarwinds in the future?

Over the course of the past few articles I’ve been covering some of the issues around moving your databases to the cloud. While the cloud is rapidly becoming an integral part of many IT infrastructures, businesses have been reluctant to move their database workloads from on-premise into the cloud, and rightfully so. The primary concern centers around the ability to remain in control of your data. When that data is on-premise, you are in control of it. When that data is in the cloud, that control is in the hands of the cloud provider. There are other concerns as well. The ability of multitenant cloud hosts to provide adequate performance is an important one. Security is another big issue, and for some countries there is the need to ensure that all of their data is maintained within that country’s geographical boundaries. These sorts of issues make moving to the cloud difficult and in some cases impossible. However, the cloud doesn’t have to be an all-or-nothing affair. The hybrid cloud provides a middle-of-the-road solution that can enable you to leverage cloud technologies where they make sense, yet still maintain control of and security over your on-premise databases.


With the hybrid cloud, you can utilize your on-premise infrastructure for your day-to-day operations but also take advantage of the cloud for related operations like backup, high availability, or disaster recovery. For instance, one of the scenarios that utilizes the hybrid cloud can be seen in SQL Server’s AlwaysOn Availability Groups. AlwaysOn Availability Groups were first introduced with SQL Server 2012. However, SQL Server 2014 adds several enhancements that enable you to take advantage of the hybrid cloud. AlwaysOn Availability Groups enable you to replicate the changes in multiple related databases to one or more secondary replicas. These secondary replicas can be on-premise and used for high availability, or, in the case of the hybrid cloud, the secondary replicas could be located in the cloud and used for disaster recovery, reporting, or backups. On-premise replicas typically use synchronous replication and can provide automatic failover. Secondary replicas in the cloud typically use asynchronous replication, and they require manual failover for disaster recovery scenarios. SQL Server 2014 provides built-in Azure integration that enables this kind of hybrid cloud scenario. Likewise, SQL Server 2014 also has the ability to back up to Azure, enabling you to easily leverage the cloud for offsite storage. The key to making hybrid cloud scenarios work is the network link between your on-premise network and the network provided by the cloud provider. Typically you need a hardware or software VPN in place that bridges your local network and the cloud. The VPN is responsible for securely routing your local network traffic to the cloud.
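
As a small illustration of keeping an eye on such a hybrid topology, here is a hedged sketch that queries SQL Server's availability-group DMVs through pyodbc. The connection string is a placeholder, and the exact columns you care about may vary by SQL Server version.

```python
import pyodbc

# Placeholder connection string; point it at your primary replica.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-primary.example.local;DATABASE=master;Trusted_Connection=yes;"
)

query = """
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
  ON ars.replica_id = ar.replica_id;
"""

cursor = conn.cursor()
cursor.execute(query)
for server, role, health in cursor:
    # e.g. the on-premise replica as PRIMARY/HEALTHY, the Azure replica as SECONDARY/HEALTHY
    print(f"{server}: {role}, sync health {health}")
```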


The cloud doesn’t need to be an all or nothing solution. The hybrid cloud can be an effective way to leverage cloud technologies while still maintaining your primary database workloads on-premise.



SolarWinds at VMworld 2015

Posted by kong.yang Sep 15, 2015


Thank you to all of you who took time to stop by and converse with the SolarWinds team at VMworld! We even had some thwack community members, fcpsolaradmin and rolltidega, and Geek Speak Ambassador cxi stop by the SolarWinds booth. If I missed listing you and you stopped by our booth, please comment below and I will add you to this post.


VMworld THE Virtualization Conference

VMware VMworld is still THE virtualization conference! This year VMware added a few wrinkles, such as a DevOps Day hackathon with vCloud Air. This was a not-so-subtle nod to DevOps, app dev, and infrastructure developers. Again, there are disturbances in the Force as the rise of the IT Versatilists continues.

Three themes were highlighted by VMware at this year's VMworld (in no particular order):

  1. Container integration with the major container vendors like Docker, Kubernetes, CoreOS, Mesosphere's DataCenter OS, and Cloud Foundry;
  2. Cloud-Mobile strategy that included a joint keynote with Microsoft and introducing their ID management solution; and
  3. Data center abstraction continued via VMware EVO SDDC, which has VSAN for storage, NSX for networking, vSphere for Compute/memory, and vRealize for management.


Experts and Espresso Presented by SolarWinds

Once again, we had a blast hosting and presenting on the latest tech trends to VMworld attendees at our Experts and Espresso events on Monday and Tuesday mornings at Jillian's. Some would say that it was "Straight Outta Coffee." I do wish that I was a part of the photo below, but joeld, sqlrockstar, and I were presenting to the packed house inside. I may resort to photoshopping myself into this classic image.


eBook released at VMworld: Four Skills to Master Your Virtualization Universe 

I was honored to have been asked to write an eBook covering four essential skills (discovery, alerting, remediation, troubleshooting) to master any virtualization universe for VMworld. For a brief intro on the eBook, check out this post. To view the eBook, please visit here. To download the eBook, please visit here. It was an extraordinary journey and much props goes to emilie.


With that, I hope that you are ready for any!

Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many contractors located offshore. These contractors were located mostly in India and were primarily programmers working on internal systems. We had a number of issues with them having to do with inconsistent latencies and inconsistent desktop configurations when we had them VPN into our network. We decided to deploy a cluster of VMware hosts, and onto these deploy static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we consider today to be VDI, I’ll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, have them sit in the DMZ when appropriate, etc. Plus, we were able to mitigate the latency issues between the interface and the application back end, as the desktops resided inside the data center. We no longer had any issues with malware or viruses, and when a machine became compromised in any way, we were able to respond swiftly to the end user’s needs by simply redeploying machines for them. Their data resided on a network volume, so in the event of a redeploy, they were able to retain their settings and personal drives from the redirected home directory. It was a very viable solution for us.


Citrix had accomplished this type of concept years earlier, but as a primarily VMware shop, we wanted to leverage our ELA for this. However, we did have a few applications deployed by MetaFrame, which was still a functional solution.


Time moved on, and VMware View was released. This added the ability to deploy applications and desktops from thin images and eased the special requirements on the storage. In addition, the desktop images could now be either persistent or non-persistent, meaning we could put out fresh desktops to users upon login if we chose. In this case, our biggest benefit was that the desktop would only take up space on the storage when in use, and if the user was not in the system, they’d have no footprint whatsoever.


There were some issues in this, though. The biggest concern was that the non-persistent desktops, upon login, would demand so much processing power that we’d experience significant “boot storms.” This caused our users to experience significant drag on the system. At the time, with a series of LUNs dedicated to this environment, all spinning disk, we had I/O issues that forced us to sit in a traditional, fully persistent state.


In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add IO to the storage, and to expand the ease at which we were able to push applications to these machines.


The promise of VDI has some very compelling rationale. I’ve only outlined a few above, but in addition, the concepts of pushing apps to mobile devices, “Bring Your Own Device” as well as other functions are so very appealing. I will talk next about how VDI has grown, solutions have become more elegant, and how hardware has fixed many of our issues.

“Spear phishing continues to be a favored means by APT attackers to infiltrate target networks”. - Trend Micro Research Paper 2012


“The reason for the growth in spear phishing: it works”. - FireEye Spear Phishing Attacks White Paper



One morning, a colleague in my data center network team and I received the following email:

[Image: the phishing email my colleague and I received]

I heard my colleague call the Help Desk and report that, a few minutes earlier, he had clicked a link in an email that he thought might be a phishing email. It could have been a damaging click for my company; it could have put my company in the US headline news. But…


Two days before my colleague clicked the link on that phishing email…


Our Information Security (InfoSec) team had coordinated with the Help Desk, the Email team, the Network Security team (my team), and an outside vendor to create a phishing email campaign as part of user security education. The outcomes were favorable, meaning there were users besides my colleague who failed the test. The follow-up user education was convincing (for those who failed, of course…).



The above is an example of phishing, where phishing emails attack a mass audience. Cybercriminals, however, are increasingly using targeted attacks against individuals instead of large-scale campaigns. The individually targeted attack, a.k.a. spear phishing, is usually associated with Advanced Persistent Threats (APTs) for long-term cyberespionage.


The following incidents show that spear phishing has been quite “successful,” and the damage was worse than anyone imagined.



Employees of more than 100 Email Service Providers (ESPs) experienced targeted email attacks. The well-crafted emails addressed those ESP employees by name. Even worse, email security company Return Path, the security provider to those ESPs, was also compromised.



Four individuals in the security firm RSA were recipients of malicious spear phishing emails. The success of the attacks gave the cybercriminals access to RSA’s proprietary information about its two-factor authentication platform, SecurID. Due to the RSA breach, several high-profile US SecurID customers were compromised.



The White House confirmed that a computer system in the White House Military Office was attacked by Chinese hackers and that it affected an unclassified network. This hack began with a spear phishing attack against White House staffers, and a White House Communications Agency staffer opened an email he wasn’t supposed to open.



An Associated Press journalist clicked a link that appeared to be a Washington Post news story on a targeted email. The AP’s official Twitter account was then hacked. A fake tweet reporting two explosions in the White House erased $136 billion in equity market value from the New York Stock Exchange index. In the same year, a hacker group in China was said to have hacked more than 100 US companies via spear phishing emails, stealing proprietary manufacturing processes, business plans, communications data, etc. In addition, you remember Target’s massive data breach, right?



Unauthorized access to the Centralized Zone Data System (CZDS) of the Internet Corporation for Assigned Names and Numbers (ICANN) was obtained. ICANN is the overseer of the Internet’s addressing system. ICANN announced that they believed the compromised credentials resulted from a spear phishing attack. Through that attack, access to ICANN’s public Governmental Advisory Committee wiki, blog, and WHOIS information portal was also gained. Again, you still remember Home Depot’s 2014 breach that exposed 56 million payment cards and 53 million email addresses, right?



The US confirmed that the Pentagon was hit by a spear phishing attack in July, most likely from Russian hackers, which compromised the information of around 4,000 military and civilian personnel who work for the Joint Chiefs of Staff. The hackers used automated social engineering tactics to gather information from employee social media accounts and then used that information to conduct a spear phishing attack.



How do we protect against and detect the increasing number of spear phishing attacks? Our beloved defense in depth comes to mind: NGFW, IPS/IDS, SPF/DKIM validation, signature-less analysis services for zero-day exploit detection, IP/domain reputation services, web proxies, and up-to-date client/server patching, to name a few. Is a well-built security infrastructure sufficient against spear phishing? The incidents listed above tell us NO. In the case of the RSA breach, it only took one of the four targeted individuals falling into the trap to make the hackers happy. So user education is an essential component of any spear phishing defensive strategy. Make smarter users. Remind them regularly not to fall into the spear phishing trap, and send them mock phishing drills randomly.
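
To make one item on that list a bit more concrete, here is a minimal sketch that checks whether a sending domain even publishes an SPF record, using the dnspython package. The domain is a placeholder, and real SPF/DKIM validation is done by your mail gateway, not a ten-line script.

```python
import dns.resolver  # pip install dnspython

def spf_record(domain):
    """Return the domain's SPF TXT record, or None if it doesn't publish one."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

if __name__ == "__main__":
    record = spf_record("example.com")  # placeholder domain
    print(record or "No SPF record published - treat mail from this domain with suspicion")
```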


I won’t ask you to share your spear phishing story. But how does your organization protect against spear phishing? What user awareness and training does your organization provide? Please share. I would like to hear from you.

As you might have read a couple of weeks ago in my AWS review, I started this post almost a month ago. Unfortunately, I ran into a snag after creating my database instance, and it kept me from completing the article. HP Support, however, was pretty awesome and jumped on the issue right away; in fact, within 48 hours the problem was fixed and I was able to continue. With that said, let's jump right in... but keep one thing in mind: this service is probably the newest of any of the services I've looked at so far. I think I actually signed up when it was in beta. It is also so new that not all geos have it as an option yet.



As I mentioned this feature isn't available everywhere, so I had to navigate to the US West Geo in order to find it. Once there I was able to click the button in Horizon and get started creating.


HP sticks pretty close to the standard OpenStack Horizon dashboard layout, so things are pretty clean throughout the UI.


Once you click Launch Instance, you get pretty much the same questions as we did in every other wizard.


I tossed in some generic stuff for my first database on the instance.


I did find that they make restoring backups pretty simple... Just create a new instance and select the backup image to use... simple as that!


And there you have it, an HP DBaaS! If you look closely, you can also see where some of my problems started to show up before HP fixed them... Volume Size showed as "Not Available".



So this is where I noticed that I was having problems the first time through... When I went to look for my users and databases in the Horizon UI, everything would time out and just say that the information wasn't available. After HP support did their thing, everything was much happier... I should note that they were able to replicate the issue on other instances, so it must have been a bug, which they were able to quickly fix. (I do wonder, though, if I was the first person to try their MySQL DBaaS offering, LOL, because it was unusable before the fix...)


Anyhow, the details/Overview page did work just fine, and it is where I got my connection string from... down at the bottom.


The database tab simply shows what databases are on the instance and gives you the ability to delete them.


So this is where my love-hate relationship starts... If I reboot my instance, my user account will show up... for a while... then I get this awesome error. So I decided to assume that it's a Horizon issue and not a problem with the instance; plus, I know my username is jpaul. One other note about the user area... Even when it is working, I could not find a way to change my password for the instance or add other users to it... I would think I could from the MySQL CLI, but the GUI is pretty bare.

[Image: the user details error in the Horizon UI]

Well I wasn't about to let a busted web interface slow me down... I'll open another support ticket today and I'm sure they will have it fixed again in no time... But I know my username, I know the IP address of the instance... and I have a MySQL client.
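
For the curious, this is roughly what that out-of-band check looks like from Python with the PyMySQL driver. The host, user, and password are obviously placeholders for my instance's details.

```python
import pymysql  # pip install pymysql

# Placeholder connection details for the Helion MySQL instance.
try:
    conn = pymysql.connect(
        host="203.0.113.10",      # the instance's public IP
        user="jpaul",
        password="not-my-real-password",
        connect_timeout=10,
    )
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print("Connected, server version:", cur.fetchone()[0])
    conn.close()
except pymysql.MySQLError as err:
    print("Connection failed:", err)
```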


Unfortunately, the CLI didn't go much further. Looks like I will certainly have to open a ticket.


The only other thing I can really show you right now is how to restore a backup.



So, to do a backup, there is a Create Backup button listed beside each instance that you have running. I won't go into too much detail because it's the same as any other create-backup button.


Restoring from one of those backups is pretty simple too: you navigate back to the Launch Instance button and fill out the same form we filled out when creating the initial instance, with one exception... this time, on the Restore From Backup tab, you select which backup you want to restore. Simple as that.




Let me start by saying I hate doing "bad" reviews, even if they are constructive. I will say that I am a little disappointed that I have to reopen a ticket with HP support, too. I think I opened my first ticket with them on a Tuesday or a Wednesday... It took them until Thursday or Friday to figure out the problem, and then they expected me to respond within 24 hours (which was over a weekend) and I didn't, so they closed the case.


Overall I think that the service has promise, but right now I would say it still needs some work. I will post an update once HP support helps me get things going...

My name's Rick Schroeder, and I've been a member of Thwack since 2004.


I manage a health care network of 40,000+ computers across three states, and I've been enjoying using SolarWinds products like NPM, NCM, and NTA to proactively manage "Pure Network Services" across my organization.


In my environment, "Pure Networking" means my team supports LAN, WAN, firewall services, wireless environments, and VPN solutions. Our babies are the Edge, Access, Distribution, Core, and Data Center blocks. We don't directly support user-accessible applications, servers, workstations, or other end devices; we focus solely on LAN and WAN services, switches, routers, APs, and firewalls. This truly makes my team the "D.O.T. for the Info Highway." Not dealing with edge devices and users and their apps is a luxury, but there's still a LOT of work to do, and I'm always looking for new tools and products that can help my team of six manage more systems in a better way.


When Danielle Higgins asked me to be a guest blogger for Thwack, I was pleased to share some of my thoughts about the guidance SolarWinds has given us on getting what we need from Administration.


Sophisticated, powerful, and well-developed tools that can improve our customers' experience are not free.  Persuading Administration to allocate budget to purchase them can be an intimidating challenge.  But as Joel Dolisy and Leon Adato explained, it's all about seeing the world from Administration's point of view.  They are featured in a video for the Thwack Mission for August 2015 called "Buy Me a Pony: How to Make IT Requests that Management Will Approve" on YouTube.  SolarWinds has leveraged their knowledge to provide a resource you can use to build a successful pitch for allocating funds to get what you need, and it applies to anything, not just Orion products.


I watched the video and took away a lot of great ideas.  I made notes during the video and then put them together into an e-mail that I shared with my team.  We're using it as a guide for improving the network by learning the steps to make a winning presentation to Management for additional resources (tools, people, etc.).  You can make the same progress toward the tools and resources you need by using the concepts I've prepared below:





One day you’ll want to champion a new product.  When you make your case for that product, your success depends on focusing on points that Management wants.  Your job is to show them how funding your project will accomplish the things they find important.


Some examples of important items to decision makers:

  • Growth (revenue, market share, etc.).
  • Cost Reduction (improve cash flow, save $$).
  • Risk (Avoid or resolve compliance issues, prevent exposure).


Find out how your new tool will fit the items above, then promote those features.  This simple idea will give you a better chance of getting Administrative approval than if you spend your time describing to them the technical features or coolness of the product.


Target what will get Management’s attention and support. 


Example:  Suppose an e-mail system keeps crashing.  You know replacing it will prevent that issue, so your goal is to buy/install a new e-mail system.  If you can’t convince Management that your project will match their top needs, you won't get quick approval to proceed.


If Management pays most attention to Risk Avoidance, then show them how preventing e-mail crashes reduces Risk.  Show that SOX and HIPAA compliance will be much improved by a new e-mail solution, and you're halfway to getting funding for your project.


If Management is concentrating on bottom line issues, show how a new e-mail system will improve cash flow & save money.  Now your pitch is much more likely to receive the right response from them.


If Management pays most attention to Dollars and Growth and Cost Reduction, then learn the cost of e-mail crashes & show them how your recommendation addresses those specific items.  An example:

  • Crashing e-mail services cost money.  Find out how much and show them:
    • X dollars of support staff time per minute of downtime and recovery time.  You could do the calculation based on industry-standard salaries broken down to hours & minutes and show the actual dollar cost of support to recover from an e-mail server crash (a hypothetical worked example follows this list). Talk about persuasive!
    • 3X dollars in lost new orders as customers who can't get timely e-mail responses turn to the competition.
    • 20X dollars in lost order payments that come in via e-mail.
    • 50X dollars in missed opportunities when our competitors get to the customers before us because our e-mail service is down.
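To show what that math might look like, here's a back-of-the-napkin version.  Every number below is hypothetical--plug in your own salary, outage, and order figures:

    4 support staff x $50/hour (fully loaded)     = $200/hour of recovery labor
    2-hour crash and recovery                     = $400 in support cost (our "X")
    Lost new orders (3X)                          = $1,200
    Lost order payments arriving by e-mail (20X)  = $8,000
    Missed opportunities to competitors (50X)     = $20,000
    Estimated cost of a single crash              = roughly $29,600

Even with modest assumptions, one outage adds up to a number Management will notice.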



Don’t make up facts.  Being honest builds their trust in you and your team.  The key is to create a solution to your problem that not only fixes your issues, but fixes Management’s problems, too.


Remember that telling the downside of a product is not necessarily bad.  If you don't include the full impact of your project on the business, decision makers may decide your proposal is not yet mature.  They might think, "So you're asking for a free puppy?  And you're not talking about food costs & vet bills and training, etc.?"  That can be the path to a quick denial from them.


When you hear “no”, it may mean:

  • Your pitch is right on, but the timing isn’t good for the company right now due to cash flow issues.
  • You simply haven’t convinced the decision maker yet.
  • You haven’t given the decision maker what they need to successfully take your case to THEIR manager.  They don’t want to look foolish merely because of your enthusiasm about a product; you have to show you really know what you're talking about.  Then they can champion your cause to their boss without being at risk.
  • You haven’t shown how your application or hardware matches the company’s core focus (Risk, Revenue/Growth, Cost, etc.).
  • You haven’t brought data that can be verified, or it’s too good to be true, or it simply is not believable.


If you haven’t shown the downsides, management may conclude that you:

  • Haven’t done due diligence to discover them.
  • Are hiding them.
  • Are caught up in the vendor’s sales pitch so much that you only see what’s shiny, and haven’t thought about (and found) problems with the product.


Your job is to treat “No” as if it only means “not yet.”


“No” does NOT mean:

  • You can’t come back with this again.
  • They're denying it because they don't like/trust you.
  • Your idea is no good.
  • You can’t have it due to politics.


Instead of accepting the denial and feeling you've failed, you can think of “no” as administrators simply saying your cake isn’t completely baked yet.  Your interpretation should be that they mean, "Once your cake's fully baked, we're interested in having you bring it back for review."



When you hear “no”, ask questions to find out why “no” is the right answer for Administration at the present moment:

  • What must be changed for the outcome to be “yes”?  What are we not communicating that you need to hear?
  • Can we come back at a better time and talk about this again?  When?
  • How can we align the project better with the business focus/goals?
  • Is there a better point in the budget year to look at this?
  • Do we need other in-house skills, maybe an outside contractor, before we tackle this?
  • Do we need a training plan to develop our staff’s skills for the new technology?
  • Can we set up a small-scale demo to show you the product’s merits in-house via a proof-of-concept?



Remember:  There are no “IT Projects”.  There are only Business Projects that IT supports.



Help everyone on your team understand how they must make this project align with the Business’s goals when they present it to peers and administrators.


Example:  Suppose your goal is moving away from Nagios to Orion:

  • Bring the idea to your System Administrators, DBAs, Apps Analysts--anyone who uses the old product (or who COULD be a candidate to use it).
  • Get their input and wish list, then show them how your new tools will give them better access and insight into their environments' functionality.
  • Ask them what they’re not getting from Nagios, listen carefully and then show them how Orion can provide those specific items to them.
  • Set up a 30 day free demo and then get them excited about what a SHARED tool like Orion can do across the silos.
  • Now you've turned them into supporters for the new project, and they can report positively about it to their managers.


Who is the funding Approver?

The CIO.


Who are the consumers of the new project?

Managers of Apps and Servers.


What part of the decision making process happens outside your view?

The CIO goes to the department Managers for their opinion. But you did your homework and the Managers have already heard great reports about your project from their trusted teams.


Result:  They share the good news with the CIO, and the CIO OK’s the purchase.


It's all about finding how your good idea is ALSO a good idea to the decision makers.  It's a formula for success.



Don’t overdo the presentation. If you want to lose the audience, include every stat, every permutation of dollars and numbers, show them a PowerPoint presentation with more than 10 slides, etc.

Your detailed, supporting information is not appropriate for the initial presentation, but keep it on hand for answering future queries.


It’s more important that the bosses feel comfortable with the solution than that they see every detail--don’t overwhelm them.


Here are some resources I found on https://thwack.solarwinds.com/ that can help you get buy-in from decision makers.  These tools may convince them that your project will make a significant contribution to improving the environment:

  • Feature Function Matrix: 
      • It lists all the features that a great monitoring tool might have.
      • It lets you compare what you have in-house today to what a competitive SolarWinds tool provides.
      • It shows the gap between the services and information your existing product provides and what the new SolarWinds product can fill.
      • It lets you identify primary sources of truth.  Example:
        • Suppose you have multiple ping latency measuring tools and you’re not eliminating any of them.
          • The Feature Function Matrix lets you prioritize them.
          • Now you can see which ones are most important, and which tools are backups to the important ones.
  • Sample RFPs:
      • Let you assign weights to features and score each vendor against them (a quick hypothetical illustration of weighted scoring follows these resource lists).
      • Auto-calculate the information you need to show the decision makers.
      • Allow competing vendors to show their strengths & cost.


  • In the Geek Speak forum you can find items associated with the Cost Of Monitoring.  They might be just what you need to show management; they highlight the cost of:
      • Not monitoring.
      • Monitoring with the wrong tool.
      • Monitoring the wrong things.
      • Monitoring but not getting the right notifications.
      • Forgetting to automate the monitoring response.
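Since I mentioned scoring and weighting features in the Sample RFPs above, here's a quick illustration of how a weighted comparison works.  The features, weights, and scores below are invented for the example, and the real templates do this arithmetic for you:

    Feature          Weight   Vendor A (score x weight)   Vendor B (score x weight)
    Alerting            5           4 x 5 = 20                  3 x 5 = 15
    Flow analysis       4           5 x 4 = 20                  2 x 4 = 8
    Config backup       3           3 x 3 = 9                   4 x 3 = 12
    Weighted total                  49                          35

The vendor with the higher weighted total is the better fit for the features you said matter most.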


Here's hoping you can leverage the ideas I've shared to successfully improve your environment.  Remember, these processes can be applied to anything--getting a raise, adding more staff, increasing WAN pipes, improving server hardware, getting a company car--the sky's the limit (along with your creative ways to sell great ideas).


Swift Packets!


Rick S.
