Previously, I’ve spoken about project management and microservices as analogies for life management. These comparisons have been genuinely useful to my own thinking about the goals I’ve set for myself, and I hope that in some small way they’ve helped you envision the things you want to change in your own life, and how you might approach those changes as well.


In this, the third installment of the “My Life as IT Code” series, I’ll take those ideas and explain how I visualize and organize my approach. From a project manager’s perspective, the flowchart has always been an important tool. The first step is to outline a precedence diagram, such as the one I created for a past project: a replication and disaster recovery effort for a large, multi-site law firm that leveraged VMware-based virtual machines, Citrix remote desktops, and storage-to-storage replication to maintain consistent uptime. I broke individual time streams into flowcharts, giving each stakeholder a clear indication of their personal tasks, and organized those tasks into milestones that related to the project as a whole. I delivered both Gantt charts and flowcharts to visualize the project, revealing the time used per task as well as the tasks I had broken down into their constituent parts.
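As a sketch of how a precedence diagram turns into numbers, the toy script below computes each task's earliest finish and the overall project length. The task names and durations are invented, loosely echoing the replication project above:

```python
from functools import lru_cache

# Sketch of a precedence diagram: each task lists its duration and
# prerequisites, and a longest-path walk yields the project length.
# Task names and durations here are hypothetical.
tasks = {
    "design":    {"days": 5,  "after": []},
    "replicate": {"days": 10, "after": ["design"]},
    "citrix":    {"days": 7,  "after": ["design"]},
    "failover":  {"days": 3,  "after": ["replicate", "citrix"]},
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish day: the task's duration plus the latest
    finish among its prerequisites (day 0 if it has none)."""
    task = tasks[name]
    start = max((earliest_finish(dep) for dep in task["after"]), default=0)
    return start + task["days"]

project_days = max(earliest_finish(name) for name in tasks)
print(project_days)  # design -> replicate -> failover: 5 + 10 + 3 = 18
```

The longest chain through the diagram is the critical path, which is exactly what a Gantt chart makes visible at a glance.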


This same technique applies to some of these life hacks. While it can be difficult to assign timeframes to weight loss, for example, the milestones themselves are easy to demarcate. With some creative thinking, you can find viable metrics against which to mark progress, and effective tools for establishing reasonable milestones.


There are great tools to aid in exercise programs, which enforce daily or weekly targets, and these are worth leveraging in your own plan. I have one on my phone called the Seven Minute Workout. I also note my daily steps using the fitness tracker application on both my phone and my smartwatch. Data from these tools, along with readings from a scale, can chart your progress along the path to your goals. Weight loss is never a simple downward slope; it tends toward plateaus followed by restarts. Still, as your progress moves forward, a graphical representation of your weight loss can encourage more work along those lines. For me, the best way to track progress is a spreadsheet, graphed on a simple x/y axis, which provides an effective visualization. I don’t suggest paying rigid attention to the scale, though, as those plateaus can take an emotional toll on how you see your progress.
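For the spreadsheet-minded, one way to keep a plateau from discouraging you is to graph a short moving average instead of the raw scale readings. A minimal sketch, with invented weigh-ins:

```python
# Sketch: smooth noisy weigh-ins with a short moving average so
# plateaus and daily bounces don't obscure the overall trend.
# The weights below are invented sample data, not real readings.
def moving_average(values, window=7):
    """Average each reading with up to window-1 readings before it."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(round(sum(chunk) / len(chunk), 1))
    return out

weights = [200.0, 199.5, 200.2, 199.0, 198.8, 199.1, 198.2]
trend = moving_average(weights, window=3)
print(trend)  # each point averages the last three weigh-ins
```

Graphing `trend` rather than `weights` turns a jagged scatter into the gentle downward line you actually want to see.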


I’ve never been a runner, but distance plans, target times for those distances, and the delineation of progress along the way translate easily to a flowchart. It’s important to know what you’re doing. Rather than saying something like, “I’m going to lose 40 lbs in 2018,” a more effective strategy is to focus on specifics, such as, “I plan on walking 10,000 steps, or five miles, per day.” It is easier and more impactful to adhere to smaller, more strategic commitments.


Meanwhile, creating and paying attention to the flowchart is a hugely effective way to keep yourself on track. It keeps your goals in view and gives you visibility into your progress. Once you have that, you can celebrate when you reach the milestones you have set for yourself.


As I’ve stated, the flowchart can be an amazing tool. And when you look at life goals with the eyes of a project manager, you give yourself defined timeframes and goals. The ability to visualize your goals, milestones, and intent can really assist in keeping them top of mind.


By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Can you afford for your team to lose eight hours a day? According to the 2016 State of Data Center Architecture and Monitoring and Management report by ActualTech Media in partnership with my company, SolarWinds, that’s precisely what is happening to today’s IT administrators when they try to identify the root cause of virtualization and virtual machine (VM) performance problems. This is valuable time that could otherwise be spent developing applications and innovative solutions to help warfighters and agency employees achieve mission-critical objectives.


Virtualization has quickly become a foundational element of federal IT, and while it offers many benefits, it’s also a major contributor to increasing complexity. Each additional hypervisor adds to a network’s intricacy and makes it more difficult to manually discover the cause of a fault. There’s more to sift through and more opportunity for error.


Finding that error can be time-consuming if there are not automated virtualization management tools in place to help administrators track down the source. Automated solutions can provide actionable intelligence that can help federal IT administrators address virtual machine performance issues more quickly and proactively. They can save time and productivity by identifying virtualization issues in minutes, helping to ensure that networks remain operational.


Ironically, the key to saving time and improving productivity now and in the future involves traveling back in time through predictive analysis. This is the ability to identify and correlate current performance issues based on known issues that may have occurred in the past. Through predictive analysis, IT managers can access and analyze historical data and usage trends to respond to active issues.


Further, analysis of past usage trends and patterns helps IT administrators reclaim and allocate resources accordingly to respond to the demands their networks may be currently experiencing. They’ll be able to identify zombie, idle, or stale virtual machines that may be unnecessarily consuming valuable resources, and eradicate under- or over-allocated and orphaned files that may be causing application performance issues.
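As a hypothetical illustration of that reclamation step, a zombie-VM filter over usage history might look like this. The records and thresholds are invented, not from any particular monitoring product:

```python
from statistics import mean

# Sketch: flag "zombie" VMs whose average CPU and disk activity stay
# below a cutoff across the sampled history. Data and thresholds
# here are hypothetical, for illustration only.
vm_history = {
    "web-01":  {"cpu_pct": [45, 60, 52], "disk_iops": [300, 410, 260]},
    "test-99": {"cpu_pct": [1, 0, 2],    "disk_iops": [3, 0, 1]},
}

def find_zombies(history, cpu_max=5.0, iops_max=10.0):
    """Return VMs whose mean CPU percent and mean IOPS are both below the cutoffs."""
    return [name for name, h in history.items()
            if mean(h["cpu_pct"]) < cpu_max and mean(h["disk_iops"]) < iops_max]

print(find_zombies(vm_history))  # ['test-99']
```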


Predictive analytics can effectively be used to prevent future issues resulting from virtualization sprawl. By analyzing historical data and trends, administrators will be able to optimize their IT environments more effectively to better handle future workloads. They can run “what if” modeling scenarios using historical data to predict CPU, memory, network, and storage needs.
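A minimal "what if" sketch, assuming only a series of historical utilization numbers (all invented), fits a least-squares trend and projects when a threshold would be crossed:

```python
# Sketch: a "what if" capacity projection as a least-squares trend
# over historical CPU utilization. All figures are invented.
def fit_line(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in enumerate(ys))
             / sum((x - mx) ** 2 for x in range(n)))
    return slope, my - slope * mx

cpu_pct = [40, 44, 47, 52, 55, 60]   # six months of average CPU %
slope, intercept = fit_line(cpu_pct)

month = len(cpu_pct)                 # project forward from next month
while slope * month + intercept < 90:
    month += 1
print(month)  # first projected month at or above 90% CPU
```

Real modeling tools do far more than a straight line, of course, but even this crude trend answers the planning question: roughly when will current growth exhaust current capacity?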


Predictive analytics is not just a “nice to have,” it’s something that is becoming increasingly in-demand among IT professionals. In fact, 86 percent of respondents to the State of Data Center Architecture and Monitoring and Management report identified predictive analytics as a “critical need.”


We often speak of virtualization management as the future of IT, but that’s only partially true. True virtualization management involves a combination of the past, present, and future. This combination gives federal IT managers the ability to better control their increasingly complex networks of virtual machines both today and tomorrow, by getting a glimpse into how those networks have performed in the past.


Find the full article on our partner DLT’s blog TechnicallySpeaking.

Last time I told you guys I really love the Ford story and how I view storage in the database realm. In this chapter, I would like to talk about another very important piece of this realm: the network.


When I speak with system engineers working in a client's environment, there always seems to be a rivalry between storage and network regarding who's to blame for database issues. However, blaming one another doesn’t solve anything. To ensure that we are working together to solve customer issues, we need to first have solid information about their environment.


The storage we discussed last time is responsible for storing the data, but there needs to be a medium to transport that data from the client to the server and between the server and storage. That is where the network comes in. And to stay with my Ford story from last time, this is where other aspects come into play. Speed is one of them, but speed can be measured in many ways, which is where the network often seems to get the blame. Let’s look at a comparison to other kinds of transportation.


Imagine that a city is a database where people need to be organized. There are several ways to get people there. Some live locally, so the time for them to be IN the city is very short. Some live in the suburbs, and their travel takes a bit longer because they have a greater distance to cover, with more people traveling the same road. If we concentrate on the cars again, there are a lot of them driving to and from the city. How fast any one car gets there depends on the other drivers, all similarly motivated to reach their destinations as quickly as possible. Speed is therefore impacted by how the drivers perform and what happens on the road ahead.


Sheep Jam


The network is the transportation medium for the database, so it is critical that this medium is used correctly. Some data might need something like a Hyperloop to travel back and forth over medium-to-long distances, while other data may do fine with shorter, slower trips.


Having excellent visibility into the data paths to see where congestion might become an issue is a very important measurement in the database world. As with traffic, it gives one insight into where troubles could arise, as well as offering the necessary information about how to solve the problem that is causing the jam.
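To make the congestion idea concrete, here is a small, hypothetical sketch that compares a path's tail latency against its median; the round-trip samples are invented:

```python
# Sketch: spot congestion on a data path by comparing tail latency
# (p95) against the median (p50). The samples below are invented.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

rtt_ms = [2, 2, 3, 2, 3, 2, 40, 3, 2, 2]   # one spike in ten samples
p50, p95 = percentile(rtt_ms, 50), percentile(rtt_ms, 95)
print(p50, p95, p95 > 5 * p50)              # the spike dominates the tail
```

The median looks healthy while the 95th percentile screams "traffic jam," which is exactly why averages alone hide the trouble spots.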


I don't believe the network or the storage is responsible. The issue is really about how you build and maintain your infrastructure. If you need speed, buy the fastest thing possible, but be aware that what you buy today is old tomorrow. Monitoring and maintenance are crucial to a high-performing database. Make sure you know what your requirements are and that what you end up getting satisfies them, and talk to the other resource providers to make sure everything works together.


I'd love to hear your thoughts in the comments below.


"When the student is ready, the teacher appears."


I mentioned this idea back when I revealed that the Marvel® movie, Doctor Strange, offered a wealth of lessons for itinerant IT pros (and a few for us grizzled veterans, as well). You can find Part Four here and work your way back from there.


It seems inspiration has struck again, this time in the unlikeliest of cinema experiences. There, among the rampant gore and adamantium-laced rage (not to mention the frequent f-bombs), I was struck by how Logan1 held a few IT gems of its own.


It behooves me to remind you that there are many spoilers beyond this point. If you haven't seen the movie yet, and don't want to know what's coming, bookmark this page to enjoy later.


Your most reliable tool could, at some future point, become toxic if you aren't able to let go and move on.


In the movie, it is revealed that Logan is slowly dying from the inside out. Adamantium, it seems, is not exactly surgical-grade metal, and the toxins have been leaching into his system. Initially held off by his healing factor, the continuous presence of the poison finally takes its toll and does what war, enemies, drowning, and even multiple timelines and horrible sequels could not.


One good lesson we should all draw from this is to keep evil shadow government agencies from lacing our skeletons with untested metals.


But a more usable lesson might be to let go of tools, techniques, and ideas when they become toxic to us. Even when they still appear to be useful, the wise IT pro understands when it is time to let go of the old before it becomes a deadly embrace.


When you see some of yourself in the next generation of IT pros, give them the chance to be better than you were.


Logan: "Bad sh*t happens to people I care about. Understand me?"

Laura: "I should be fine then."


(Later) Logan: "Don't be what they made you."


Many IT professionals eventually reach a tipping point when the adrenaline produced by the new, shiny, and exciting tends to wear off, and the ugly starts to become apparent. Understand, a career in IT is no uglier than other careers.


There are a few potential reasons why the honeymoon phase tends to be more euphoric, and the emotional crash more noticeable when the work becomes a grind. It could be because IT is still a relatively new field. Maybe it’s because IT reinvents itself every decade or so. Maybe it’s because the cost of entry is relatively low; it often takes no more than a willingness to learn and a couple of decent breaks.


And when that tipping point comes, often a number of years into one's career, it's easy to become "that" person. The bitter, grizzled veteran. The skeptic. The cynic who tries to "help" by warning newcomers of the horror that awaits.


Or you become a different version of "that" person, the aloof loner who wants nothing to do with the fresh crop of geeks who just walked in off the street in the latest corporate hiring binge.


In either case, you do yourself and the world around you a great disservice with such behavior.


In the movie, Logan first avoids helping, and when that option is no longer available to him, he attempts to avoid getting emotionally involved. As an audience, we know (even if we've never read the "Old Man Logan" source material), that this tactic will ultimately fail. We know we'll see the salty, world-weary X-Man open his heart to a strange child before the final credits.


What's more, the movie makes plain the opportunities Logan throws away when he chooses a snide remark instead of attempting to get to know Laura, that strange child.


So, the lesson to us as IT professionals is that we shouldn't let a bad experience make us feel bad about ourselves, or about our career. And we certainly shouldn't let it get in the way of being a kind and welcoming person to someone new to their career. If anything, we - like Logan at the end of the movie - should try to find those small kernels of capital-T Truth and pass them along, hopefully in ways and at moments when our message will be heard and received in the spirit in which it is meant.


Persistent problems need to be faced, fixed, and removed, not ignored and categorized as someone else's problem.


Near the beginning of the movie, the reaver Donald Pierce tracks down Logan and asks him for information about Gabrielle, the nurse who rescued Laura from the facility where she and the other child mutants were being raised. Donald makes it clear that he isn’t interested in bringing Logan in for the bounty. He simply wants information.


Again, because of his drive to distance himself from the rest of the world, Logan takes this at face value. Even though it is clear that Pierce intends no good for whoever it is he was hunting, Logan is happy it just didn't involve him.


And of course, the choice comes back to haunt him.


Now I'm not suggesting that Logan should have clawed him in the face in that first scene because, even in as brutal a movie as Logan, that's still not how the world works. But what I am saying is that if you let Pierce be a metaphor for a problem that isn't directly threatening your environment right now, but could come home to roost with disastrous results later, then... yeah, I am saying that you should (metaphorically speaking) claw that bastard’s eyeballs out.


I'm looking at you, #WannaCry.


Even when your experiences have made you jaded, hang on to your capacity to care.


Tightly connected to the previous thought about encouraging the next generation of IT professionals is the idea that we need to do things NOW that allow us to hold on to our capacity to care about people. As Thomas LaRock wrote recently, "Relationships Matter More Than Money." I would extend this further to include the idea that relationships matter more than a job, and they certainly matter more than a bad day.


In the movie, no moment exemplifies this as poignantly as the line that became one of the key voiceover elements in the trailer. In finding a family in trouble, Charles demands they stop and help. Logan retorts, "Someone will come along!" Charles responds quietly but just as forcefully, "Someone HAS come along."


But that isn't all I learned! Stay tuned for future installments of this series. And until then, Excelsior!


1 “Logan” (2017), Marvel Entertainment, distributed by 20th Century Fox

AdventureWorks sample data

In my soon-to-be-released eBook, 10 Ways We Can Steal Your Data, we talk about The People Problem, how people not even trying to be malicious end up exposing data to others without even understanding how their actions put data at risk. But in this post, I want to talk about intentional data theft.


What happens when insiders value the data your organization stewards? There have been several newsworthy cases where insiders have recognized that they could profit from taking data and making it available to others. In today’s post, I cover two ways I can steal your data that fall under that category.

1. Get hired at a company where security is an afterthought

At one of my former clients (the organization is no longer in business, so I feel a bit freer to talk about this situation), an IT contractor with personal financial issues was hired to help with network administration. From what I heard, he was a nice guy and a hard worker. One day, network equipment belonging to the company was found in his car, and he was let go. However, he was rehired to work on a related project just a few months later. By this time, he was experiencing even greater financial pressures than before.

Soon after he was rehired, the police called to say they had raided his home and found servers and other computer equipment with company asset control tags on them. They reviewed surveillance video that showed a security guard holding the door for the man as he carried equipment out in the early hours of the morning. The servers contained unencrypted personal data, including customer and payment information. Why? These were development servers where backups of production data were used as test data.

Apparently, the contractor was surprised to be hired back by a company that had caught him stealing, so he decided since he knew about physical security weaknesses, he would focus not on taking equipment, but the much more valuable customer and payment data. 

In another case, a South Carolina Medicaid worker requested a large number of patient records, then emailed that data to his personal address. This breach was discovered and he was fired. My favorite quotes from this story were:

Keck said that in hindsight, his agency relied too much on “internal relationships as our security system.”




Given his position in the agency, Lykes had no known need for the volume of information on Medicaid beneficiaries he transferred, Keck said.

How could this data breach be avoided?

It seems obvious to me, but rehiring a contractor who has already breached security seems like a bad idea. Having physical security that does not require paperwork to remove large quantities of equipment in the middle of the night also seems questionable. Don't let staffing pressures persuade you to make bad rehire decisions.

2. Get hired, then fired, but keep friends and family close


At one U.S. hospital, a staff member was caught stealing patient data for use in identity theft (apparently this is a major reason why health data theft happens) and was let go. But his wife, who worked at the hospital in a records administration role, kept her position after he was gone. Not surprisingly, at least in hindsight, the data thefts continued.

There have also been data breach scenarios in which one employee paid another employee or employees to gather small numbers of records to send to a third party who aggregated those records into a more valuable stockpile of sellable data.

In other data breach stories, shared logins and passwords have led to former employees stealing data, locking out onsite teams, or even destroying data. I heard a story about one employee who, swamped with work, provided his credentials to a former employee who had agreed to assist with the workload. That former employee used the access he was given to steal valuable trade secrets and resell them to his new employer.

How can these data breaches be avoided?

In the previously mentioned husband and wife scenario, I'm not sure what the impact should have been regarding the wife’s job. There was no evidence that she had been involved in the previous data breach. That said, it would have been a good idea to ensure that data access monitoring was focused on any family members of the accused.

Sharing logins and passwords is a security nightmare when employees leave. They rarely get reset, and even when they do, they are often reset to a slight variation of the former password.


This reminds me of one more much easier way to steal data, one I covered in the 10 Ways eBook: If you use production data as test and development data, it’s likely there is no data access monitoring on that same sensitive data. And no “export controls” on it, either. This is a gaping hole in data security and it’s our job as data professionals to stop this practice.

What data breach causes have you heard about that allowed people to use unique approaches to stealing or leaking data? I'd love to hear from you in the comments below.


A View from the Air

Posted by Leon Adato, Oct 27, 2017

I have something exciting to tell you: THWACKcamp is not unique. Stick with me and I'll explain why.


After a hair-raising 37-minute flight connection, I'm comfortably (if somewhat breathlessly) settled into the last row of a tiny plane, which is currently between Houston and Cleveland. And despite the fact that I listened to Patrick's closing comments over five hours ago, it may as well have been five minutes ago. I'm still invigorated by the energy around THWACKcamp 2017: the energy from the teams that put together 18 incredible sessions; the energy from the 2,000+ online chatters who joined to offer their thoughts, comments, and opinions; and the energy from the room full of THWACK MVPs who showed up to take part in this event, which seems to have taken on a life of its own (not a bad thing, in my opinion).


It's going to take me a while to process everything I heard and saw over the last two days, from the acts of graciousness, support, and professionalism to the sheer brilliance of the presenters. There's so much that I want to try out.


I'm inspired more than ever to pick up Python, both for network projects and simply for the pure joy of learning a new coding language.


I have a renewed sense of urgency to get my hands dirty with containers and orchestration so that I can translate that experience into monitoring knowledge.


I'm committed to reaching out to our community to hear your stories and help you tell them, either by giving you a platform or by sharing your stories as part of larger narratives.


So, if there was so much SolarWinds-y, THWACK-y, Head Geek-y goodness, why would I start this blog by saying THWACKcamp is not unique? Because that sense of excitement, engagement, and invigoration is exactly how I feel when I attend the best IT shows out there. I felt it flying home from the juggernaut that was Microsoft Ignite, which boasted 27,000 attendees. I felt it driving back from the 400-person-strong inaugural DevOpsDays Baltimore. That tells me that THWACKcamp is not just a boutique conference for a certain subset of monitoring snobs and SolarWinds aficionados. It's a convention for ALL of us. While the focus is necessarily on monitoring, there are takeaways for IT pros working across the spectrum of IT disciplines, from storage to security to cloud and beyond.


In short, THWACKcamp has arrived. I'll leave it to others to recount the numbers, but, by my count, attendance was in the thousands. That's nothing to sneeze at, and that's before you consider that it's free and 100% online, so many of the barriers for people to attend are removed.


I have to admit: my first sentence is deceptive. We ARE unique. At all those other shows I've been to, I have to wait weeks or months to be able to view sessions I regretfully missed at the time, or to review sessions for quotes or other things I might have missed. THWACKcamp is different. You can access all of that content NOW.


That's not exactly a groundbreaking technological feat, but it is unique in the industry. And more than that, it's refreshing.


So check out the sessions. Give them a re-listen and see if you catch another nugget of knowledge you missed the first time around. Share them with your teams. Come back to them whenever you want, just as you can for THWACKcamp 2017 or past events.


Meanwhile, I'm already looking forward to THWACKcamp 2018. I know there are going to be some amazing technologies to share, features to dive into, and stories to tell.


Until then!

Every year, THWACKcamp is my favorite industry event. Yes, I like going to Microsoft Ignite, VMworld, and Cisco Live, but interfacing with more like-minded people is my bag. This is why things like the SolarWinds User Groups and THWACKcamp are my jam. I’ve been attending THWACKcamp since the beginning.


Let me start by saying that I was a proponent of SolarWinds’ solutions long before it became cool. Does that make me a SolarWinds hipster? If so, I’ll wear that fedora at an ironic, yet jaunty, angle. Years ago, I joined the THWACK community, and sometime later I was invited to be a THWACK MVP. Since that time, I moved to the “dark side” and became a Product Manager for SolarWinds.

Although I’ve participated in THWACKcamp as a customer since the beginning, this is my third year seeing it from the inside, and I have to say that this year’s event was the biggest and best. As always, the content was great. We got input from some of the biggest names in the industry, and a few even graced our studio. This year there were more panels, more real-world discussions, and more input from the community than ever.

All of this was great, but I’m here to tell you that the best part was hearing from and meeting THWACK MVPs. This year we had 22 in attendance (25 if you count Leon, Mike, and myself). You even got to see some of them during the live interludes between sessions. Many were repeat visitors, others were people I’d only met at SWUGs, and several were new faces. Though I’ve spoken with many of them online, this was my first time meeting them face to face.

Many of these people traveled thousands of miles to be here for the event. In an effort to make it as enjoyable as possible, we tried to line up a few outside events. Thus, the hangover.

Day 0

On Day 0 (the day before the official event), we set up a game night. I got to break out my uber-nerdiness and run a session of Dungeons & Dragons for some of the MVPs.

Here there be dragons... in dungeons.

We even decorated the game room (big thanks to mrssigma for gathering all the goodies)! A week later, the decorations are still up. I have no intention of taking them down... ever.

The Gates of Fun!

Because you can't have a game night with just one game, another group gathered to play SysadMANIA. The laughter was loud and often.

sysADMANIA spotted in the wild!

Day 1

Then Day 1 happened and introductions were made, conversations exchanged, and assistance grabbed.

At the end of Day 1, there was a UX Happy Hour, which turned into a Happy Evening. So happy that even though the restaurant closed at 9:00 pm, we stayed until nearly 11:00 pm.

UX Happy Hour(s)

Day 2

Day 2 began a little slowly. I mean, we all got there, but some of us were worse for wear.

The second day also included more laughs with the MVPs, including a game of "Who Wore it Better?"

Who Wore it Better?

Day 2 ended with a surprise from the MVPs. They unveiled the T-shirts they designed in honor of the UX team.

It brought meech to tears and hugs were everywhere.

Day 2 ended with me going to my bowling league, so I had to beg out of the second-night shenanigans, but that doesn't mean that there isn't photographic evidence.

So I missed some fun.

For the second night in a row, the MVPs closed not one, but two restaurant/bars.

What a great year! It was really fun meeting up with people, talking tech, and building friendships.

Do you feel like you missed anything? Were you unable to watch one session because you had to choose another in the same time slot? Fear not, you can watch all of them here! Check out what you missed, or watch your favorite sessions again, as often as you can. If you're like me, you are already anticipating THWACKcamp 2018. I can't wait!

I'm home following a wonderful trip to Austin for THWACKcamp last week. I get to rest for a few days before heading to Seattle for the PASS Summit. If you are attending, stop by the booth! We can talk data, or discuss why there wasn’t an autonomous car to drive you from Sea-Tac.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Your app is an onion: Why software projects spiral out of control

A must-read for anyone who has tried to build their own homegrown monitoring system and one day realized they had gone from being a SysAdmin to a project manager.


Sorry Microsoft, Delta is switching to iOS

Could we do something about the dot matrix printers used at every gate and the metric ton of paper printed for each flight? Please?


Driverless Cars Made Me Nervous. Then I Tried One.

I drove the family two hours through traffic to visit relatives last weekend. If the cars were in charge it would have taken one hour. I’m ready for the cars to take over because I know they will make far fewer mistakes than human drivers.


Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level

This article helps to underscore the advances we have made with AI in the past 20 years. It is also a way for me to remind you why autonomous self-tuning databases will be a reality soon.


Fact Checks

I know what Google is trying to do here, and I want to believe they have good intentions, but who fact-checks the fact-checkers? The problem with the internet is that you can’t believe anything you see online. That’s not new, either. Growing up, my father would remind me, “Don’t believe what you read and only half of what you see.” I think Google would do better to simply put that on its pages instead.


Why GE had to kill its annual performance reviews after more than three decades

This is wonderful news for, well, everyone. You may not realize this, but almost every company follows a performance review system that is a variant of what GE installed decades ago. With this shift by GE, we may finally see real change in the area of corporate professional development for all companies.



Don’t click on this if you have any experience using SSH and have even a mild case of OCD. You will lose hours of your life. (But maybe have your kids check it out and get them some command line experience).


I'm just going to leave this here:


By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Because databases are so important to federal IT, I wanted to share a blog written earlier this year by my SolarWinds colleague, Thomas LaRock.


Federal IT pros must be prepared for every situation. For most, a database disaster is not a matter of if, but rather when. The solution is simple: have backups. The first step to surviving any database disaster is having a good backup available and ready to restore.


But agencies shouldn’t assume that having a database backup is enough; while it’s the first step in any successful database disaster recovery plan, it’s certainly not the only step. Agencies should have in place a robust, comprehensive plan that starts with the assumption that a database disaster is inevitable and builds in layers of contingencies to help ensure quick data recovery and continuity of operations.


Let’s look at the building blocks of that plan.


Defining the database disaster recovery plan


There can be a lot of confusion surrounding the terminology used when creating a backup and recovery plan. So much, in fact, that it’s well worth the space to review definitions.


High Availability (HA): This essentially means “uptime.” If your servers have a high uptime percentage, then they are highly available. This uptime is usually the result of building out a series of redundancies regarding critical components of the system.
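To make the “uptime percentage” framing concrete, here is a quick back-of-the-envelope sketch. The downtime figure is a hypothetical example chosen to land near “four nines,” not a measurement from any real system:

```python
# Illustrative "nines" arithmetic: availability as a percentage of total time.
# The downtime figure below is hypothetical, chosen to land near 99.99%.
minutes_per_year = 365 * 24 * 60      # 525,600 minutes in a (non-leap) year
downtime_minutes = 52.6               # roughly what 99.99% availability allows

availability = (minutes_per_year - downtime_minutes) / minutes_per_year * 100
print(f"{availability:.2f}% available")
```

Each additional “nine” shrinks the allowable downtime by a factor of ten, which is why the redundancies mentioned above get progressively more expensive.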


Disaster Recovery (DR): This essentially means “recovery.” If you are able to recover your data, then you have the makings of a DR plan. Recovery is usually the result of having backups that are current and available.


Recovery Point Objective (RPO): This is the point in time to which you can recover data—if there is a disaster and data is lost—as part of an overall continuity of operations plan. It defines an acceptable amount of data loss based on a time period. The key here is to establish a number based on actual data and potential data loss.


Recovery Time Objective (RTO): This is the amount of time allowable for you to recover data.


It is important to note that the continuity of operations and recovery plan should include both a recovery point objective (RPO) and a recovery time objective (RTO).


Estimated Time to Restore (ETR): ETR estimates how long it will take to restore your data. This estimate will change as your data grows in size and complexity. Therefore, ETR is the reality upon which the RTO should be based.


Remember to check often to verify that the ETR for your data is less than the RTO before disaster strikes. Otherwise, you will not be as prepared as you should be.
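That verification can be as simple as dividing backup size by observed restore throughput and comparing the result to the RTO. The database size, throughput, and RTO below are purely illustrative assumptions:

```python
# Hypothetical ETR-vs-RTO check. The database size, restore throughput, and
# RTO figures are illustrative assumptions, not recommendations.

def estimated_time_to_restore(db_size_gb: float, rate_gb_per_min: float) -> float:
    """Estimate restore time (minutes) from backup size and restore throughput."""
    return db_size_gb / rate_gb_per_min

RTO_MINUTES = 15        # recovery time objective agreed with stakeholders
etr = estimated_time_to_restore(db_size_gb=120, rate_gb_per_min=10)

if etr > RTO_MINUTES:
    print(f"ETR {etr:.0f} min exceeds the {RTO_MINUTES}-min RTO -- revisit the plan")
else:
    print(f"ETR {etr:.0f} min fits within the {RTO_MINUTES}-min RTO")
```

Because ETR grows as the data grows, the same check needs to be re-run regularly rather than computed once.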


Knowing your IT DR plan


Now that we know the lingo, let’s think about the plan itself. Some may consider replication to be all that’s necessary for successful data recovery. It’s not that simple. As I stated earlier, agencies should have a robust, comprehensive plan that includes layers of contingencies.


Remember that HA is not the same as DR. For an example of why that is, let’s look at a common piece of technology that is often used as both an HA and DR solution.


Take this scenario: You have a corruption at one site. The corruption is immediately replicated to all the other sites. That’s your HA in action. How you recover from this corruption is your DR. The reality is, you are only as good as your last backup.


Another reason for due diligence in creating a layered plan: keep your RPO and RTO in agreement.


For example, perhaps your RPO states that your agency database must be put back to a point in time no more than 15 minutes prior to the disaster, and your RTO is also 15 minutes. That means, if it takes 15 minutes in recovery time to be at a point 15 minutes prior to the disaster, you are going to have up to 30 minutes (and maybe more) of total downtime.
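The arithmetic in that example can be sketched directly, using the hypothetical 15-minute figures:

```python
# The worst-case disruption window is the data-loss window (RPO) plus the
# time spent restoring (RTO). Both figures are the hypothetical 15-minute
# objectives from the example.
rpo_minutes = 15    # data restored to a point up to 15 min before the disaster
rto_minutes = 15    # the restore itself is allowed to take up to 15 min

total_disruption = rpo_minutes + rto_minutes
print(f"Worst case: up to {total_disruption} minutes of disruption")
```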


The reality of being down for 30 minutes and not the expected 15 minutes can make a dramatic difference in operations. Research, layers, and coordination between those layers are critical for a successful backup and recovery plan.


A final point to consider in your disaster recovery plan is cost. Prices rise considerably as you try to narrow the RPO and RTO gaps. As you approach zero downtime, and no data loss, your costs skyrocket depending upon the volume of data involved. Uptime is expensive.


Cost is the primary reason some agencies settle for a less-than-robust data recovery plan, and why others sometimes decide that (some) downtime is acceptable. Sure, downtime may be tolerable to some degree, but not having backups or a DR plan in place for when your database will fail? That is never acceptable.


Find the full article on Federal Technology Insider.

Is Shadow IT in your crosshairs? As a network security professional, do you recognize the implications of people taking IT into their own hands and implementing solutions without corporate approval? Let's examine one area of shadow IT that I believe is a huge data security risk: file sharing. Sure, solutions like Box, Dropbox, iCloud, and so on make sharing files between users and locations very easy, but there’s an inherent problem that users don’t think about: once you start using one of these services outside of corporate control, you lose control. How so? Let’s have a look.


Let’s pick on Dropbox first to get a sense for what could happen. Now, I'll openly admit that in a pinch I’ve used Dropbox to share a link so I could open a file on another machine. This activity may seem benign, but in an age where data exfiltration is rampant, it can be detrimental to the business. Furthermore, what happens if the link you generate is accidentally shared via social media? I once read of someone taking some photos and emailing a link to a Dropbox folder, but mistyping the recipient address. Whoops! Now someone else has access to those pictures, which are hopefully not of a personal nature. But this is common practice these days. In fact, many organizations enforce a limit on attachment size, and users can subvert it by sending Dropbox links. It was recently reported that a health care provider inadvertently leaked user data through an email error. I'm not sure we will ever know whether this was done using a Dropbox link or the like, but there's always the possibility that it could have been.


But what else can go wrong? Let's also consider the installation of the Dropbox application on a local system. In 2016, it was reported that Dropbox was giving itself permissions to control your computer without asking the user. This was eventually sorted out, and Apple began blocking the behavior in macOS Sierra. However, it reveals an underlying issue: when a user installs an application without fully knowing how it operates, they are quite possibly exposing the organization to attack. In this case, if someone were to exploit a flaw in Dropbox's programming, they could effectively control your computer. While this is hypothetical, it could still happen, and it should be considered. This is one of the reasons IT organizations are a bit slower to approve applications for internal use; there is usually a vetting process in which these things are considered. I know that most of you will probably agree.


But let's stop picking on Dropbox. Several other services and applications allow users to share files and come with similar concerns. Google is known for scouring your data and using it for advertising purposes, among other things. What if a user were to sign up for a free Gmail account and use the free Google Drive service to share files? Could Google be scanning and analyzing the files you store there? What can they do with that data? Who could they sell it to? The list of questions goes on.


I must say that I'm not making the statement that these popular file-sharing services are bad. If an organization has reviewed the product, agrees to the EULA, and approves it internally, then have at it! But what if it's not approved? That's the gray area I'm fishing in here. I mean, just think about it. There are peer-to-peer sharing and torrent sites, instant messengers, desktop sharing and control apps, and more, each with its own slew of concerns. Let's also not forget that it's pretty easy these days to throw up an ad hoc FTP server that lacks security and allows connectivity and data transfer in clear text. Again, these all have the potential to become a means of data exfiltration as well as an attack vector for malware delivery, command-and-control connections, and the like.


So back to the problem at hand. Users will find their own solutions when we don't provide a satisfactory one for them. Sometimes this comes in the form of installing Dropbox or using some other form of file transfer to share data. While it may not be their intent to cause a security issue or share data with people they shouldn't, the fact is that it can happen. Are you as concerned as I am about this? What’s your take on this behavior, and what do you see as the happy medium: a well-vetted system for sharing data that is still user-friendly and friction-free in a user's daily life?

My fellow Head Geek and Microsoft Data Platform MVP, Thomas LaRock, defines hard skills as a tech certification, a college degree, or any tech skill including any vendor- or industry-specific skills. These hard skills represent an achievement or acknowledgement of base expertise at a specific point in time. They are static by nature but represent job security.


Unfortunately, the only constant in IT is that things change over time. And this puts IT pros in a precarious position. Our industry is changing so fast that the skills refresh cycle of IT professionals is shortening to the point where certain certifications earned today may not get you and your career through tomorrow.


So when do you say goodbye to yesterday’s IT? Technology? Company? The answer tends to be, "it depends," which is unsatisfactory, but apropos given the many factors to consider, such as experience, expertise, investment, etc. That’s why it’s so hard to say goodbye to yesterday. How do you mind the gap? Soft skills. Of note, the top three soft skills beyond the tech are communication skills, teamwork, and adaptability, as noted in this THWACKcamp 2017 session. As an IT pro, are you skilled in any of these? How have you gone about improving these soft skills? Let me know in the comment section below.


P.S. Who wore it better? Dez, adatole, sqlrockstar, or me?


THWACKcamp – the annual virtual learning event brought to you by SolarWinds – was amazing this year, in large part because so many live attendees were talking about new challenges IT admins hadn’t imagined even five years ago. As usual, in our closing thoughts we mentioned that the rate of change doesn’t seem to worry the THWACK community too much, because they’re proven, adaptable IT professionals with a solid track record of handling anything the business throws at them. And over and over in chat this year, attendees recommended technical learning resources like links to programming courses, more than ever before, especially for SDN, cloud, and DevOps. So, this year we’re doing something a little different: we’re making all the THWACKcamp 2017 sessions available immediately.


The trick with THWACKcamp is to create events that are dynamic, topical, and fun to attend live, but at the same time present a body of educational content that’s truly useful to all IT professionals for months or, hopefully, years afterward. In recent years, thousands of THWACK members have participated live because it’s a great opportunity to interact with other IT pros in real time. Tens of thousands more then watch all the sessions on-demand. (Pause and rewind are especially helpful for how-tos.)


For SolarWinds, that event critical mass is indispensable because it’s part of how the team dials in the content mix not only for the following year’s THWACKcamp, but for webinars, SolarWinds Lab, product guides in the Customer Success Center, and more. THWACKcamp is one of several tech events I recommend to anyone in IT, and all of them share a significant live element for this reason. I like that vendors are interested in observing honest, unguarded conversations among their users and curious IT professionals alike.


If you’re a member of the SolarWinds user community curious about networking, cloud, virtualization, Linux, multi-path traffic, distributed application tracing, SIEM events, configuration, or other IT issues, and you want to use your tools more effectively, check out the event session archive as always. But if you’re not a member, this year you don’t have to wait until the community preview period is over to check out thought leadership, career development, and technology roadmap conversations from industry experts and senior admins.


IT is evolving quickly and in surprising ways. Encourage curiosity in your team, continue to seek out online learning opportunities, and don’t be afraid to mix it up a little. We’re IT. We do more than keep the lights on. We prove to the business, over and over again, that we can learn anything.


Bookmark the THWACKcamp 2017 session archive now


DevOps Pitfalls

Posted by samplefive Oct 19, 2017

You've just been given the following assignment by your boss: make the organization more DevOps-oriented. You've gone back and forth on what his definition of DevOps is and what he wants you to accomplish. You're ready to get going, and think, "What could go wrong?" A good bit, it turns out. In this second post of the DevOps series, we look into some of the pitfalls and challenges of transitioning from a traditional operations model to a DevOps model.


Dev + DevOps + Ops

DevOps can help your organization when developers and operations are working together toward the same business goals. When you tighten communications, features and issues can be communicated fluidly between the two teams. Unfortunately, some organizations that have chosen to implement DevOps have placed a DevOps layer between developers and operations. Rather than increasing collaboration and communication between the two teams, this new team in the middle can slow communications and decision-making, resulting in failure. It's understandable to want people who are responsible for making the effort work, but rather than create a separate team, choose point people who will make sure that the development and operations teams are communicating effectively.


My Culture Just Ate Your DevOps

Saying that culture is an important piece of any project's success is not a new concept. Having team buy-in on DevOps tasks is critical, and it's just as important to have a culture of communication and collaboration. But what if your organization's culture doesn't include that? Shaping an organization's culture, whether for DevOps or something else, can be a long and difficult process, and there is not enough space in this post to cover everything that could be done. Instead, start with these three premises for pointing the cultural ship in the right direction. First, as leadership, be the culture. You can't expect your organization to shift its cultural norms if the people at the top aren't exhibiting the culture they want to see. Second, fake it 'til you make it. Lloyd Taylor, VP of Infrastructure at Ngmoco, said, "Behavior becomes culture." If you keep practicing the desired behaviors, eventually they become the culture. Finally, process becomes behavior. Having well-defined processes for how communications and DevOps interactions should work will build the behavior, and in turn, the culture.


Your Network Engineer Isn't a Developer

I use the term network engineer pretty loosely in the title of this section; for the purpose of my argument, it could be anyone on the operations side. Some organizations have many people in operations who do impressive scripting work to simplify tasks. These organizations may not have developers on staff and may look to these operators to be the developers in their DevOps initiatives. Don't get me wrong: for some organizations this may be exactly what they need, and it may work out just fine. Unfortunately, what sometimes happens is that the network engineer who is great at scripting lacks professional developer habits. Error handling, good software documentation, and regression testing are all things an experienced developer would do out of habit that a network engineer might not think to address. (Okay, I'll admit that good software documentation can sometimes be difficult for experienced developers, too.) Can your operations person be a developer? They might have the skills and training, but don't count on them to pull off all that a fully experienced developer can. If you happen to have an operations person who is a developer, consider making that operator responsible for the points mentioned above.


Conversely, there are things on the operations side that your developers don't know about. For instance, your developers likely don't have deep knowledge of networking protocols, or even commands that would be required to configure them. One of the people that I spoke to for the last post experienced this very issue. In addition to experiencing long delivery times, his biggest concern was the appearance that the organization's developers were attempting to be network engineers when they were clearly not up for the job.


We Test in Production

It seems almost silly to mention this here, but you need a testing methodology for the product of your DevOps efforts. Even though it is basic, it's often overlooked: you shouldn't test your code in your production environment. If a large part of your DevOps program involves automation and you roll out incorrect code, you roll out bad changes to your entire organization at crazy-fast speeds. Yay, automation! Some development platforms allow you to test code and see what the output will be prior to running it in production. However, a solid test bed that replicates key components of the environment is helpful to have. The key is an environment in which you can quickly test and validate code behavior without significantly slowing down the process of releasing code to production.


Those are just a few of the pitfalls that frequently come up in a variety of DevOps initiatives. What do you think? What are the biggest challenges you've seen when someone is rolling out DevOps? Comment and let me know!

I can summarize this week in three words: THWACKcamp! THWACKcamp! THWACKcamp!


I am in Austin this week. If you are able to visit for THWACKcamp, please say hello and let's sit down to talk data. Or bacon.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


The Absurdly Underestimated Dangers of CSV Injection

And I thought the hover text malware was bad. Excel is used by 300 million people each day. Further proof that everything is awful.


You should get the flu shot—even if it won’t keep you from getting sick

Interesting piece regarding herd immunity. For some reason, I’ve subscribed to the idea of herd immunity for things like smallpox, but I’ve never thought of it with regard to the flu. Probably because it’s much harder to prevent the flu. Then again, maybe that’s because we don’t all get flu shots.


How AI Will Change Strategy: A Thought Experiment

Honestly, I’m a bit shocked that Amazon doesn’t already ship me bacon and scotch automatically. No idea if this thought experiment will play out in real life or not, but if anyone can come up with a model to squeeze a few extra dollars out of our pockets (and make us feel good about it), it’s Jeff Bezos.


What Would It Look Like If We Put Warnings on IoT Devices Like We Do Cigarette Packets?

This needs to happen, and soon.


Microsoft teams up with Amazon again, this time on “Gluon” machine learning tech

This seems like an odd pairing: Microsoft and AWS getting together to collaborate on a deep learning project. The announcement seems more like the R&D teams getting together to work on a standard API to use with Azure and AWS. This could be an effort to remove Google as a player in this space.


KRACK Attack Devastates Wi-Fi Security

If the hackers don’t embed a message that says, “Release the KRACKen,” I will be very disappointed.


'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia

A bit long but worth the read. I remind my children about this topic from time to time. We discuss how games will want you to do an in-app purchase, or what types of posts get the most likes, or how you suddenly got emails from companies you never recall contacting. Do yourself a favor and take time to help educate others about the attention economy.


The attention economy in action, sunset at Martha's Vineyard:

Tomorrow, more than 70 SolarWinds team members, five tech industry influencers, and 22 THWACK Community MVPs will celebrate months of preparation, as we welcome thousands of IT pros, DevOps pros, and MSP pros to THWACKcamp 2017.


Joining us will be Stephen Foskett of Gestalt IT, Phoummala Schmidt aka ExchangeGoddess, Michael Coté of Pivotal and host of Software Defined Talk, Nathen Harvey of Chef, Karen López of InfoAdvisors, and Clinton Wolfe of DevOpsDays Philly, all bringing their insight for some really engaging conversations.


Now in its sixth year, THWACKcamp is the leading worldwide virtual IT event that only SolarWinds could bring you. Because everything we do is about a direct connection to our end user, our annual learning event is custom-crafted to bring 19 technical sessions, practical how-tos, thought leadership, and engaging live chat directly to technology professionals who can't get away from the office without the network going down, the data center going awry, or the marketing team's entire MacBook inventory bursting into flames.


Whether it's a panel conversation on the reports your CIO wants to see, a consideration of the Seven Deadly Sins of Data Management, conversations with IT pros who have transitioned to be data scientists [at least part-time], discussion of what it means when DevOps says "monitor," or an exploration of the five elements of a modern application performance monitoring (APM) strategy, we've got something for nearly everyone who lives and breathes IT.


Join us Wednesday and Thursday, October 18th and 19th, 2017, for two days of awesome content, and juice up your brain. We'd love to have you.

The journey of a thousand miles begins with a single step.

-Lao Tsu


In this, my second post in a five-part series, I’m approaching the concept of microservices as being analogous to my life. Hopefully, my experience will assist you in your life as an IT pro, and also serve as a life hack.


I’ve written about microservices before. It seems to me that the concept, as it relates to agility in code design and implementation in the data center, is profound. The magic of this model of code development and deployment is that it allows the code jockey to focus his efforts on the discrete area of development that needs to take place. It allows larger groups to divvy up their projects into chunks of work that make the most sense for those doing the coding, and it allows these changes to take place dynamically. The agility of the application as a whole becomes practically fluid. The ability to roll back changes, should that be necessary, preserves the fluidity and dynamism needed for the growth of a modern application in today’s data center (whether on-premises or in the cloud). Sometimes, those small incremental changes prove so impactful that they ensure the success of the application as a whole.


I can draw the same correlation of small changes to large benefits in the realm of life hacks. For example, as I’ve mentioned, I play guitar, though not particularly well. I’ve been playing since I was twelve, and literally, for decades, my ability has remained the same. But recently, I learned a few techniques that have helped me improve significantly. Small changes and new levels of understanding have helped me quite a bit. In this case, learning blues shapes on the fretboard and altering my hand positioning have made a big difference.


The same can be said of weight control. I’m a chunky guy. I’ve struggled with my weight most of my life. Small changes in my approach to food and exercise have helped me recently in achieving my goals.


Sleep is another category in which I struggle. My patterns have been bad, with my sleep averaging about five hours per night. To be pithy, I’m tired of being tired. So, I’ve looked at techniques for what I see as success in sleep. Removing screens from the room, blackout curtains, and room temperature have all been identified as key factors in staying asleep once you fall asleep, so I’ve addressed them. Another key factor is white noise. I’ve leveraged all of these tools and techniques, and I'm happy to say that recently these small changes have helped.


I like to view these small changes in the same light as microservices. Small incremental changes can make a significant difference in the life of the implementer.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


With network vulnerabilities and attacks on the rise, I wanted to share a blog written earlier this year by my SolarWinds colleague, Leon Adato.


Trends such as bring your own device (BYOD), bring your own application (BYOA), software-defined networking (SDN), and virtual desktop infrastructure (VDI) have dramatically increased network vulnerabilities, where failures, slowdowns, or breaches can cause great damage. For the military, specifically, such occurrences can be serious and mission-altering, exposing incredibly sensitive data.


The network always has been and always will be the foundation of defense information technology. The question is: How do you manage this foundation to address current network vulnerability challenges and those on the horizon? The solution is a combination of network simplicity, sophistication, and good old-fashioned network security best practices.



Automation

Resource constraints—specifically, a small budget and lack of IT staff—are a constant. Automating various processes for network management can help agencies free up resources for allocation to other mission-focused tasks. For example, agencies can automate compliance by using configuration and patching tools that locate and remediate known vulnerabilities with limited human interaction.


Network monitoring

This task is vital. Continuous monitoring provides a complete view of users, devices, network activity, and traffic. Log data can be used for real-time event correlation to improve security. The goal is to achieve network stabilization amid growing complexity. Similarly, as the Defense Department moves to hybrid IT environments, monitoring tools provide critical information about which elements of the in-house infrastructure make sense to migrate to cloud from both a cost and workflow standpoint. And once applications are migrated, availability must be monitored and performance verified.


Configuration management

This offers another powerful tool. Backing up configurations lets changes be rolled back for fast recovery. Configurations can be monitored, and those that are noncompliant can be remediated automatically. Manual configuration management doesn’t scale and is nearly impossible given the primary constraints of any military organization: a low budget and a small IT staff.
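The backup-and-compare loop at the heart of configuration management can be sketched in a few lines. This is a hypothetical illustration: `fetch_running_config()` is a stand-in for however your platform actually retrieves a device's configuration, not a real tool's API.

```python
# Hypothetical sketch of automated config backup and drift detection.
# fetch_running_config() is a stand-in, not a real library call.
import difflib
from pathlib import Path

BACKUP_DIR = Path("config_backups")

def fetch_running_config(device: str) -> str:
    # Stand-in: in practice this would query the device or a management API.
    return f"hostname {device}\nntp server 10.0.0.1\n"

def backup_and_diff(device: str) -> list[str]:
    """Save the current config; return a unified diff against the prior backup."""
    BACKUP_DIR.mkdir(exist_ok=True)
    path = BACKUP_DIR / f"{device}.cfg"
    current = fetch_running_config(device)
    previous = path.read_text() if path.exists() else current  # first run: no drift
    path.write_text(current)
    return list(difflib.unified_diff(previous.splitlines(),
                                     current.splitlines(), lineterm=""))

changes = backup_and_diff("edge-router-1")
if changes:
    print("Configuration drift detected; previous backup available for rollback")
```

A real configuration management product adds scheduling, secure retrieval, compliance policies, and push-button rollback on top of this basic loop.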


The BYOA dilemma

The Defense Department has struggled with this trend for years. It comes down to security and bandwidth. Off-duty personnel need fewer restrictions to use internet-enabled devices. (Okay, we call them game consoles, at least in certain military zones.) Of course, bandwidth isn’t cheap, and availability is significantly limited in deployed areas.


The department needs guidelines and necessary tools to enforce restrictions. It’s not difficult to eliminate rogue devices on the network, and users are more apt to follow guidelines if IT enforces them.


Looking ahead

Even in military environments, SDN quickly became a preferred method for greater network situational awareness, a centralized point of control, and the ability to introduce new applications and services while lowering costs. The rapid speed of technology demands a change to the network, and SDN is a primary component of this change.


Interestingly, being at the forefront of technology implementation, federal IT professionals might find that industry does not yet have all the appropriate tools, strategies, and processes in place to alleviate potential network vulnerability issues. The solution? Network administrators should educate themselves ahead of the trends so they’re equipped to test, prepare, and balance risk versus reward as it affects mission requirements.


Find the full article on Signal.



In my soon-to-be-released eBook, 10 Ways I Can Steal Your Data, I cover the not-so-talked-about ways that people can access your enterprise data. It covers things like the ways you're just GIVING me your data, the ways you might not realize you are giving me your data, and how to keep those things from happening.


The 10 Ways eBook was prepared to complement my upcoming panel during next week's THWACKcamp on the data management lifecycle. You've registered for THWACKcamp, right? In this panel, a group of fun and sometimes irreverent IT professionals, including Thomas LaRock sqlrockstar, Stephen Foskett sfoskett, and me, talk with Head Geek Kong Yang kong.yang about things we want to see in the discipline of monitoring and systems administration. We also did a fun video about stealing data. I knew I couldn't trust that Kong guy!


In this blog series, I want to talk a bit more about other ways I can steal your data. In fact, there are so many ways this can happen that I could do a semi-monthly blog series from now until the end of the world. Heck, with so many data breaches happening, the end of the world might just be sooner than we think.


More Data, More Breaches

We all know that data protection is getting more and wider attention. But why is that? Yes, there are more breaches, but I also think legislation, especially the regulations coming out of Europe, such as the General Data Protection Regulation (GDPR), means we are getting more reports. In the past, organizations would keep quiet about failures in their infrastructure and processes because they didn't want us to know how poorly they treated our data. In fact, during the "software is eating the world" phase of IT, when software developers were made kings of the world, most data had almost no protection and was haphazardly secured. We valued performance over privacy and security. We favored developer productivity over data protection. We loved our software more than we loved our data.


But this is all changing due to an increased focus on the way the enterprise values data.


I have some favorite mantras for data protection:


  • Data lasts longer than code, so treat it right
  • Data privacy is not security, but security is required to protect data privacy
  • Data protection must begin at requirements time
  • Data protection cannot be an after-production add-on
  • Secure your data and secure your job
  • Customer data is valuable to the customers, so if you value it, your customers will value your company
  • Data yearns to be free, but not to the entire world
  • Security features are used to protect data, but they have to be designed appropriately
  • Performance desires should never trump security requirements



And my favorite one:


  • ROI also stands for Risk of Incarceration: Keeping your boss out of jail is part of your job description



So keep an eye out for the announcement of the eBook release and return here in two weeks when I'll share even more ways I can steal your data.

It is true to the point of cliche that cloud came and changed everything, or at the very least is in the process of changing everything, even as we stand here looking at it. Workloads are moving to the cloud. Hybrid IT is a reality in almost every business. Terms and concepts like microservices, containers, and orchestration pepper almost every conversation in every IT department in every company.


Like many IT professionals, I wanted to increase my exposure without completely changing my career or having to carve out huge swaths of time to learn everything from the ground up. Luckily, there is a community ready-made to help folks of every stripe and background: DevOps. The DevOps mindset runs slightly counter to the traditional IT worldview. Along with a dedication to continuous delivery, automation, and process, a central DevOps tenet is, "It's about the people." This alone makes the community extremely open and welcoming to newcomers, especially newcomers like me who have an ops background and a healthy dose of curiosity.


So, for the last couple of years, I've been on a bit of a journey. I wanted to see if there was a place in the DevOps community for an operations-minded monitoring warhorse like me. While I wasn't worried about being accepted into the DevOps community, I did worry about whether I would find a place where my interests and skills fit.


What concerned me the most was the use of words that sounded familiar but were presented in unfamiliar ways, chief among them the word "monitoring" itself. Every time I found a talk purporting to focus on monitoring, it was mostly about business analytics, developer-centric logging, and application tracing. I was presented with slides offering such self-evident truths as:


"The easy and predictable issues are mostly solved for at scale, so all the interesting problems need high cardinality solutions."


Where, I wondered, were the hardcore systems monitoring experts? Where were the folks talking about leveraging what they learned in enterprise-class environments as they transitioned to the cloud?


I began to build an understanding that monitoring in DevOps was more than just a change of scale (like going from physical to virtual machines) or location (like going from on-premises to colo). But at the same time, it was less than an utterly different discipline of monitoring with no bearing on what I've been doing for 20-odd years. What that meant was that I couldn't ignore DevOps' definition of monitoring, but neither was I free to write it off as a variation of something I already knew.


Charity Majors (@mipsytipsy) has, for me at least, done the best job of painting a picture of what DevOps hopes to address with monitoring:

(excerpted from Charity's post):

"...And then on the client side: take mobile, for heaven's sake. The combinatorial explosion of (device types * firmwares * operating systems * apps) is a quantum leap in complexity on its own. Mix that in with distributed cache strategy, eventual consistency, datastores that split their brain between client and server, IoT, and the outsourcing of critical components to third-party vendors (which are effectively black boxes), and you start to see why we are all distributed systems engineers in the near and present future. Consider the prevailing trends in infrastructure: containers, schedulers, orchestrators. Microservices. Distributed data stores, polyglot persistence. Infrastructure is becoming ever more ephemeral and composable, loosely coupled over lossy networks. Components are shrinking in size while multiplying in count, by orders of magnitude in both directions... Compared to the old monoliths that we could manage using monitoring and automation, the new systems require new assumptions:

  • Distributed systems are never "up." They exist in a constant state of partially degraded service. Accept failure, design for resiliency, protect and shrink the critical path.
  • You can't hold the entire system in your head or reason about it; you will live or die by the thoroughness of your instrumentation and observability tooling.
  • You need robust service registration and discovery, load balancing, and backpressure between every combination of components..."

While Charity's post goes into greater detail about the challenge and some possible solutions, this excerpt should give you a good sense of the world she's addressing. With this type of insight, I began to form a better understanding of DevOps monitoring. But as my familiarity grew, so too did my desire to make the process easier for other "old monolith" (to use Charity's term) monitoring experts.


And this, more than anything else, was the driving force behind my desire to assemble some of the brightest minds in DevOps circles and discuss what it means "When DevOps Says Monitor" for THWACKcamp this year. It is not too late to register for that session. Better still, I hope you will join me for the full two-day conference and see what else might shake your assumptions, rattle your sense of normalcy, and set your feet on the road to the next stage of your IT journey.


PS: If you CAN'T join for the actual convention, not to worry! All the sessions will be available online to watch at a more convenient time.

I’ve always loved the story of how Henry Ford built his automotive empire. During the Industrial Revolution, it became increasingly important to automate the construction of products to gain a competitive advantage in the marketplace. Ford understood that building cars faster and more efficiently would be hugely advantageous, so he developed an assembly line as well as a selling method (you could buy a Model T in any color, as long as it was black). If you want to know more about how Ford changed the automotive industry (and much more), there is plenty of information on the interwebs.


In the next couple of posts, I will dive a little deeper into the reasons why keeping your databases healthy in the digital revolution is so darn important. So please, let’s dive into the first puzzle of this important part of the database infrastructure we call storage.


As I already said, I really love the story of Ford and the way he changed the world forever. We, however, live in a revolutionary time that is changing the world even faster. It seems -- and seems is the right word if you ask me -- to focus on software instead of hardware. Given that the Digital Revolution is still relatively young, we must be like Henry and think like pioneers in this new space.


In the database realm, it seems to be very hard to know what the performance, or lack thereof, is and where we should look to solve the problems at hand. In a lot of cases, it is almost automatic to blame it all on the storage, as the title implies. But knowledge is power, as my friend SpongeBob has known for so long.

Spongebob on knowledge

Storage is an important part of the database world, and with constantly changing and evolving hardware technology, we can squeeze more and more performance out of our databases. That being said, there is always a bottleneck. Of course, it could be that storage is the bottleneck we’re looking for when our databases aren’t performing the way they should. But in the end, we need to know what the bottleneck is and how we can fix it. More important is the ability to analyze and monitor the environment in a way that we can predict and modify database performance so that it can be adjusted as needed before problems occur.
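To make the "is storage actually the bottleneck?" question concrete, here is a minimal, purely illustrative sketch: given I/O latency samples exported from whatever monitoring tool you use, it flags storage as suspect only when spikes are frequent enough to matter. The sample values, the 20 ms threshold, and the 10% tolerance are invented assumptions, not recommendations.

```python
# Hypothetical I/O latency samples in milliseconds, e.g. exported from a
# monitoring tool. Threshold and tolerance values are illustrative assumptions.
def find_latency_spikes(samples_ms, threshold_ms=20.0):
    """Return the indices of samples whose I/O latency exceeds the threshold."""
    return [i for i, latency in enumerate(samples_ms) if latency > threshold_ms]

def storage_is_suspect(samples_ms, threshold_ms=20.0, tolerance=0.10):
    """Flag storage as a likely bottleneck if more than `tolerance` of the
    samples exceed the latency threshold."""
    if not samples_ms:
        return False
    spikes = find_latency_spikes(samples_ms, threshold_ms)
    return len(spikes) / len(samples_ms) > tolerance

samples = [4.2, 5.1, 3.9, 48.0, 52.3, 4.4, 61.7, 5.0]
print(storage_is_suspect(samples))  # 3 of 8 samples spike, so storage looks suspect
```

The point of the sketch is the shape of the analysis, not the numbers: you need a baseline, a threshold, and enough samples to separate a one-off blip from a real bottleneck.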


Henry Ford was looking for ways to fine-tune the way a car was built, and ultimately developed an assembly line for that purpose. His invention cut the amount of time it took to build a car from 12 hours to a surprising two-and-a-half hours. In the database world, speed is important, but blaming storage and focusing on solving only part of the database puzzle is short-sighted. Knowing your infrastructure and being able to tweak and solve problems before they start messing with your performance is where it all starts. Do you think otherwise? Please let me know if I forgot something, or got it all wrong. I would love to start the discussion, and see you in the next post.

Are you excited about THWACKcamp yet? It's next week, and I am flying down to Austin to be there for the live broadcast. Go here to register. Do it now!


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


It’s official: Data science proves Mondays are the worst

I like having data to back up what we already knew about Mondays.


The Seven Deadly Sins of AI Predictions

A bit long, but worth the read. Many of these sins apply to predictions in general, as well as Sci-Fi writing.


Who should die when a driverless car crashes? Q&A ponders the future

I've thought and talked a lot about autonomous cars, but I’ve never once pondered what the car should do when faced with a decision about who should die, and who should live.


Traceroute Lies! A Typical Misinterpretation Of Output

Because I learned something new here and you might too: MPLS is as dumb as a stump.


Replacing Social Security Numbers Is Harder Than You Think

And they make for lousy primary keys, too.


Russia reportedly stole NSA secrets with help of Kaspersky—what we know now

I included this story even though it hasn't yet been confirmed because I wanted to remind everyone that there is at least one government agency that listens to the American people: the NSA.


History of the browser user-agent string

And you will read this, and there will be much rejoicing.


Shopping this past weekend I found these socks and now I realize that SolarWinds needs to up our sock game:

The THWACKcamp 2017 session, "Protecting the Business: Creating a Security Maturity Model with SIEM," is a must-see for anyone who’s curious about how event-based security managers actually work. SolarWinds Product Manager Jamie Hynds will join me to present a hands-on, end-to-end how-to on configuring and using SolarWinds Log & Event Manager. The session will include configuring file integrity monitoring, understanding the effects of normalization, and creating event correlation rules. We'll also do live demonstrations of USB Defender's insertion and copy-activity detection and USB blocking; Active Directory® user, group, and group-policy configuration for account monitoring; lock-outs for suspicious activity; and detecting security log tampering.


Even if you’re not using LEM or a SIEM tool, this will be a valuable lesson on Active Directory threat considerations that will reveal real-world examples of attack techniques.


THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you, so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

If you're in technology, chances are you don't go too many days without hearing something about DevOps. Maybe your organization is thinking about transitioning from a traditional operations model to a DevOps model. In this series of blog posts, I plan to delve into DevOps to help you determine if it's right for your organization and illustrate how to start down the path of migrating to a DevOps model. Because DevOps means different things to different people, let's start by defining exactly what it is.


My big, semi-official definition of DevOps is that it is a way to approach software engineering with the goal of closely integrating software development and software operations. This encompasses many things, including automation, shorter release times for software builds, and continuous monitoring and improvement of software releases. That’s a lot, and it’s understandable that a good number of people may be confused about what DevOps is.


With that in mind, I reached out to people working in technology to get their thoughts on what exactly DevOps means to them. These individuals work in varying technology disciplines from data center engineers, Windows SysAdmins, and Agile developers to unified communications, wireless, and network engineers. About thirty people responded to my simple query of, "When someone says 'DevOps,' what do you think it means?" While there was a fair amount of overlap in some of the answers, here's a summary of where the spread fell out:
  • It's supposed to be the best combination of both dev and ops working in really tight coordination
  • Automation of basic tasks so that operations can focus on other things
  • It's glorified scripting; in other words, developers trying to understand network and storage concepts
  • It's the ability to get development or developers and operations to coexist in a single area with the same workflow and workloads
  • It's something for developers, but as far as what they actually do, I have no clue


A lot of these responses hit on some of the big definitions we discussed earlier, but none of them really sums up the many facets of DevOps. That’s okay. With the breadth of what DevOps encompasses, it makes sense for organizations to pick and choose the things that make the most sense for what they want to accomplish. The key part, regardless of what you are doing with DevOps, is the tight integration between development and software operations. Whether you’re looking to automate infrastructure configurations or support custom-written, inventory-tracking software, the message is the same. Make sure that your development and operations teams are closely integrated so that the tool does what it needs to do and can be quickly updated or fixed when something new is encountered.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Abruptly moving from legacy systems to the cloud is akin to building a new house without a foundation. Sure, it might have the greatest and most efficient new appliances and cool fixtures, but it’s not really going to work unless the fundamentals that support the entire structure are in place.


Administrators can avoid this pitfall by building modern networks designed for both the cloud of today and the needs of tomorrow. These networks must be software-defined, intelligent, open and able to accommodate both legacy technologies and more-advanced solutions during the cloud migration strategy period. Simultaneously, their administrators must have complete visibility into network operations and applications, wherever they may be hosted.


Let’s look at some building practices that administrators can use to effectively create a solid, modern and cloud-ready network foundation.


Create a blueprint to monitor network bandwidth on the cloud


Many network challenges will likely come from increased traffic derived from an onslaught of devices on the cloud. The result is that both traditional and non-traditional devices are generating network traffic that will inevitably impact bandwidth. Backhaul issues can also occur, particularly with traditional network architectures that aren’t equipped to handle the load that more devices and applications can put on the network.


It’s becoming increasingly important for administrators to be able to closely monitor and analyze network traffic patterns. They must have a means to track bandwidth usage down to individual users, applications, and devices so they can more easily pinpoint the root cause of slowdowns before, during, and after deploying a cloud migration strategy.
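As a rough illustration of what tracking bandwidth down to users and applications means in practice, here is a toy sketch that aggregates hypothetical flow records into "top talker" summaries. The record layout, names, and byte counts are all invented assumptions, not any particular product's API; real data would come from NetFlow, sFlow, or IPFIX exports.

```python
from collections import defaultdict

# Hypothetical flow records: (user, application, bytes transferred).
# In practice these would come from NetFlow/sFlow/IPFIX exporters.
flows = [
    ("alice", "backup",  9_000_000),
    ("bob",   "video",  42_000_000),
    ("alice", "video",  31_000_000),
    ("carol", "email",     500_000),
    ("bob",   "video",  18_000_000),
]

def top_talkers(flow_records, key_index=0, n=3):
    """Aggregate bytes by user (or by app, with key_index=1) and return the top n."""
    totals = defaultdict(int)
    for record in flow_records:
        totals[record[key_index]] += record[2]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows))                     # bob leads with 60 MB across two flows
print(top_talkers(flows, key_index=1, n=1))   # "video" dominates by application
```

The same aggregate-then-rank pattern is what lets an administrator pinpoint whether a slowdown traces back to one user, one application, or one device.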


Construct automated cloud security protocols


Agencies moving from a traditional network infrastructure to the cloud will want to make sure their security protocols evolve as well. Network notification software should automatically detect and report on potentially malicious activity, use of rogue or unauthorized devices, and other factors that can prove increasingly hazardous as agencies commence their cloud migration strategy efforts.


Automation will become vitally important because there are simply too many moving parts to a modern, cloud-ready network for managers to easily and manually control. In addition to the aforementioned monitoring practices, regular software updates should be automatically downloaded to ensure that the latest versions of network tools are installed. And administrators should consider instituting self-healing protocols that allow the network to automatically correct itself in case of a slowdown or breach.
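A self-healing protocol, stripped to its essence, is a detect-and-remediate loop: compare metrics against rules, and run the remediation tied to any rule that fires. The sketch below is purely illustrative; the metric names, thresholds, and remediation labels are invented stand-ins for whatever monitoring and orchestration APIs an agency actually uses.

```python
# A minimal sketch of an automated detect-and-remediate cycle. The metric
# names, thresholds, and remediation labels are illustrative assumptions.
def evaluate(metrics, rules):
    """Compare each metric against its rule; return the remediations to run."""
    actions = []
    for name, value in metrics.items():
        rule = rules.get(name)
        if rule and value > rule["threshold"]:
            actions.append(rule["remediation"])
    return actions

rules = {
    "link_utilization_pct":  {"threshold": 90, "remediation": "reroute_traffic"},
    "failed_logins_per_min": {"threshold": 25, "remediation": "lock_account"},
}

current = {"link_utilization_pct": 96, "failed_logins_per_min": 3}
print(evaluate(current, rules))  # only the utilization rule fires
```

In a real deployment, the returned actions would be handed to an orchestration system, and the loop would run continuously rather than once.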


Create an open-concept cloud environment


Lack of visibility can be a huge network management challenge when migrating to the cloud. Agency IT personnel must be able to maintain a holistic view of everything that’s happening on the network, wherever that activity may be taking place. Those taking a hybrid cloud approach will require network monitoring that allows them to see into the dark hallways that exist between on-premises and cloud infrastructures. They must also be able to continuously monitor the performance of those applications, regardless of where they exist.


Much as well-built real estate increases in value over time, creating a cloud-ready, modernized network will offer significant benefits, both now and in the future. Agencies will be able to enjoy better security and greater flexibility through networks that can grow along with demand, and they’ll have a much easier time managing the move to the cloud with an appropriate network infrastructure. In short, they’ll have a solid foundation upon which to build their cloud migration strategies.


Find the full article on Government Computer News.

Change is coming fast and furious, and in the eye of the storm are applications. I’ve seen entire IT departments go into scramble drills to see if they possessed the necessary personnel and talent to bridge the gap as their organizations embraced digital transformation. And by digital transformation, I mean the ubiquitous app or service that can be consumed from anywhere, on any device, at any given time. For organizations, it’s all about making money all the time from every possible engagement. Removing the barriers to consumption and making the organization more money is what digital transformation is all about.


There are new roles and responsibilities to match these new tech paradigms. Businesses have to balance their technical debt and deficit and retire any part of their organization that cannot close the gap. Ultimately, IT leaders have to decide on the architecture they’ll go with, and identify whether to buy or build the corresponding talent. The buy-or-build talent question becomes THE obstacle to success.


There is a need for IT talent that can navigate the change that is coming. Because of the increased velocity, volume, and variety that apps bring, IT leaders are going into binge-buying mode. In the rush to accomplish their goals, they don't take the time to seek out latent talent that is likely already in their organizations. Have IT pros become merely disposable resources? Are they another form of tech debt?


Buy or build? This is the billion-dollar question, because personnel talent is the primary driver of innovation. That talent turns technology and applications into industry disruptors. Where does your organization stand on this issue? Is it building or buying talent? Let me know in the comments below.

Some reports from your IT monitoring system help you do your job—like auditing, troubleshooting, managing resources, and planning. There are also reports that you create to let your boss know how the IT environment is performing, such as enterprise-level SLAs, availability/downtime, and future-proofing. But what exactly does senior management really need to know, and why?


In the "Executive BabelFish: The Reports Your CIO Wants to See" session, you’ll hear from SolarWinds CIO Rani Johnson; SANDOW Director of IT Mindy Marks; Kinnser Software CTO Joel Dolisy; and me, SolarWinds Head Geek Patrick Hubbard. We will present the reports that matter and show how the information is used.


THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you, so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

I'm back from Microsoft Ignite, which was my third event in the past five weeks. I get to stay home for a couple of weeks before heading to the premier event of the year, THWACKcamp 2017! If you haven't registered for THWACKcamp, yet, go here and do it now.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


With new Microsoft breakthroughs, general purpose quantum computing moves closer to reality

By far the biggest announcement last week at Microsoft Ignite was (for me, at least) their research into quantum computing. I’m looking forward to using up to 40 qubits of compute on Azure in a future billing cycle. I can’t wait to both pay my bill and not pay my bill each month, too!


Do You Really Need That SQL to Be Dynamic?

No. #SavedYouAClick


Microsoft CEO Satya Nadella: We will regret sacrificing privacy for national security

Truer words will never be spoken.


Malware Analysis—Mouse Hovering Can Cause Infection

I suppose the easy answer here is to not allow for URLs to be included in hover text. But short links are likely a bigger concern than hover text, so maybe focus on disabling those first. If you can’t see the full URL, don’t click.


Breach at Sonic Drive-In May Have Impacted Millions of Credit, Debit Cards

Attacking a fast food chain? Hackers are crossing a line here.


Kalashnikov unveils electric “flying car”

In case you were wondering what to get me for Christmas this year (besides bacon, of course).


Mario was originally punching his companion Yoshi in Super Mario World

Mario. What a jerk.


Behold! The Glory that is an American State Fair:

Managing federal IT networks has always been a monumental task. They have traditionally been massive, monolithic systems that require significant resources to maintain.


One would think that this situation would have improved with the advent of virtualization, but the opposite has proved to be true. In many agencies, the rise of the virtual machine has led to massive VM sprawl, which wastes resources and storage capacity because of a lack of oversight and control over VM resource provisioning. Left unattended, VM sprawl can wreak havoc, from degraded network and application performance to network downtime.


Oversized VMs that were provisioned with more resources than necessary can waste storage and compute resources, and so can the overallocation of RAM and idle VMs.


There are two ways to successfully combat VM sprawl. First, administrators should put processes and policies in place to prevent it from happening. Even then, however, VM sprawl may occur, which makes it imperative that administrators also establish a second line of defense that keeps it in check during day-to-day operations.


Let’s take a closer look at strategies that can be implemented:




The best way to get an early handle on VM sprawl is to define specific policies and processes. This first step involves a combination of five different approaches, all designed to stop VM sprawl before it has a chance to spread.


  1. Establish role-based access control policies that clearly articulate who has the authority to create new VMs.
  2. Allocate resources based on actual utilization.
  3. Challenge oversized VM requests.
  4. Create standard VM categories to help filter out abnormal or oversized VM requests.
  5. Implement policies regarding snapshot lifetimes.
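Policies like numbers 3, 4, and 5 above lend themselves to automation, since they reduce to simple checks against declared limits. Here is a hedged sketch of what those checks might look like; the category limits, field layout, and 14-day snapshot lifetime are illustrative assumptions, not recommendations.

```python
from datetime import datetime, timedelta

# Illustrative policy data; category limits and the snapshot lifetime are assumptions.
VM_CATEGORIES = {           # standard sizes: (max vCPUs, max RAM in GB)
    "small":  (2, 8),
    "medium": (4, 16),
    "large":  (8, 32),
}
MAX_SNAPSHOT_AGE = timedelta(days=14)

def request_is_oversized(category, vcpus, ram_gb):
    """Flag a VM request that exceeds the limits of its declared category."""
    max_vcpus, max_ram = VM_CATEGORIES[category]
    return vcpus > max_vcpus or ram_gb > max_ram

def stale_snapshots(snapshots, now):
    """Return snapshot names older than the policy's maximum lifetime."""
    return [name for name, created in snapshots if now - created > MAX_SNAPSHOT_AGE]

now = datetime(2017, 10, 1)
print(request_is_oversized("medium", vcpus=8, ram_gb=16))  # True: too many vCPUs
print(stale_snapshots([("pre-patch", datetime(2017, 9, 1)),
                       ("nightly",   datetime(2017, 9, 29))], now))  # only "pre-patch"
```

Checks like these can run in a provisioning pipeline, so an oversized request is challenged before the VM ever exists.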




Unfortunately, VM sprawl can occur even if these initial defenses are put in place. Therefore, it’s incumbent upon IT teams to be able to maintain a second layer of defense that addresses sprawl during operations.


Consider a scenario in which a project is cancelled or delayed. Or, think about what happens in an environment where storage is incorrectly provisioned.


During operations, it’s important to use an automated approach to virtualization management that employs predictive analysis and reclamation capabilities. Using these solutions, federal IT managers can tap into data on past usage trends to optimize their current and future virtual environments. Through predictive analysis, administrators can apply what they’ve learned from historical analysis to address issues before they occur. They can also continually monitor and evaluate their virtual environments and get alerts when issues arise so problems can be remediated quickly and efficiently.
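As a toy version of that predictive analysis, the sketch below fits a least-squares trend line to a VM's historical CPU utilization and flags idle, non-growing VMs as reclamation candidates. The sample data and the 5% idle threshold are invented for illustration; a real tool would pull this history from its monitoring database.

```python
# Toy predictive analysis: fit a least-squares line to historical utilization
# and flag VMs that are idle and not trending upward. Data and thresholds
# are illustrative assumptions.
def linear_trend(values):
    """Return (slope, intercept) of the least-squares line through the samples."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def reclamation_candidate(cpu_history, idle_pct=5.0):
    """Flag a VM whose predicted next CPU sample is idle and not growing."""
    slope, intercept = linear_trend(cpu_history)
    predicted_next = slope * len(cpu_history) + intercept
    return predicted_next < idle_pct and slope <= 0

print(reclamation_candidate([4.0, 3.5, 3.0, 2.5, 2.0]))  # True: idle and declining
print(reclamation_candidate([10.0, 20.0, 30.0, 40.0]))   # False: usage is growing
```

The extrapolation step is what turns passive monitoring into prediction: the same trend line that flags an idle VM can also warn that a growing one will exhaust its allocation.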


While each of these strategies by themselves can be effective in controlling VM sprawl, together they create a complete and sound foundation that can greatly improve and simplify virtualization management. They allow administrators to build powerful, yet contained, virtualized networks.


Find the full article on Government Computer News.

Recently, I wrote a post about the concept of a pre-mortem, which was inspired by an amazing podcast I’d heard (listen to it here). I felt that this concept could be interpreted exceptionally well within a framework of project management. The idea of thinking through as many variables and hindrances as possible that could threaten individual tasks, and in turn delay the milestones necessary to complete the project as a whole, correlates quite directly to medical care. Addressing the goals linked to a person's, or a project's, wellness really resonated with me.


In my new series of five posts, this being the first, I will discuss how concepts of IT code correlate to the way we live life. I will begin with how I see project management as a model for life in general, and for how to live it successfully. This is not to say that I am entirely successful, as definitions of success are subjective. But I do feel that each day I get closer to setting and achieving better goals.


First, we need to determine what our goals are, whether they're financial, physical fitness, emotional, romantic, professional, social, or whatever else matters to you. For me, lately, a big one has become getting better at guitar. But ultimately, the goals themselves are not as important as achieving them.


So, how do I apply the tenets of project management to my own goals? First and foremost, the most critical step is keeping the goals in mind.


  • Define your goals and set timelines
  • Define the steps you need to take to achieve your goals
  • Determine the assets necessary to achieve these goals, including vendors, partners, friends, equipment, etc.
  • Define potential barriers to achieving those goals, such as travel for work, illness, family emergencies, etc.
  • Quantify those barriers (as best you can)
  • Establish a list of potential roadblocks, and establish correlating contingencies that could mitigate those roadblocks
  • Work it


What does the last point mean? It means engaging with individuals who are integral to setting and keeping commitments. These people will help keep an eye on the commitments you’ve made to yourself and the steps you’ve outlined for yourself, and help you do the work necessary to make each discrete task, timeline, and milestone achievable.


If necessary, create a project diagram for each of your goals, including the above steps, and define the milestones with dates marked as differentiators. Align ancillary tasks with their subtasks, and define those as in-line sub-projects. Personally, I do this step for every IT project with which I am involved. Visualizing a project using a diagram helps me keep each timeline in check. I also take each of these tasks and build an overall timeline for each sub-project, and put it together as a master diagram. By visualizing these, I can ensure that future projects aren't forgotten, while also giving me a clear view of when I need to contact outside resources for their buy-in. Defining my goals prior to going about achieving them allows me to be specific. Also, seeing that I am accomplishing minor successes along the way (staying on top of the timeline I've set, for example) helps me stay motivated.


Below is a sample of a large-scale precedence diagram I created for a global DR project I completed years ago. As you start from the left side and continue to the right, you’ll see each step along the way, with sub-steps, milestones, go/no-go determinations, and final project success on the far right. Of course, this diagram has been shrunk to fit the page. In this case, visibility into each step is not as important as the definition and precedence of each step: what does each step rely on before it can be achieved? Those dependencies have all been defined on the diagram.
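For readers who think in code, a precedence diagram is essentially a dependency graph, and the ordering it encodes can be computed with a topological sort. This small sketch (the task names are invented, loosely mirroring the goal-setting steps above) shows the idea using Kahn's algorithm:

```python
from collections import deque

# Map each task to the tasks it depends on. A precedence diagram encodes
# exactly this structure; the task names here are illustrative.
def precedence_order(dependencies):
    """Kahn's algorithm: return tasks in an order that respects dependencies."""
    indegree = {task: len(deps) for task, deps in dependencies.items()}
    dependents = {task: [] for task in dependencies}
    for task, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(task)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

tasks = {
    "define goals":      [],
    "list steps":        ["define goals"],
    "gather assets":     ["list steps"],
    "identify barriers": ["list steps"],
    "work it":           ["gather assets", "identify barriers"],
}
print(precedence_order(tasks))  # "work it" always comes last
```

Whether drawn on paper or computed in code, the value is the same: no task appears before the things it relies on.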



The satisfaction I garner from a personal or professional task well accomplished cannot be minimized.

I'm back from Microsoft Ignite and I've got to tell you that even though this was my first time at the event, I felt like it was one for the record books. My record books, if nothing else.


My feelings about Microsoft are... complicated. I've used Windows for almost 30 years, since version 2.10, which came for free on 12 5.25" floppies when you bought a copy of Excel 2.0. From that point on, while also using and supporting systems that ran DOS, Novell, MacOS, OS/2, SunOS, HP-UX, and Linux, Microsoft's graphical operating system was a consistent part of my life. For a while, I was non-committal about it. Windows had its strengths and weaknesses. But as time went on, and both the company and the technology changed (or didn't), I became apathetic, and then contrarian. I've run Linux on my primary systems for over a decade now, mostly out of spite.


That said, I'm also an IT pro. I know which side of the floppy the write-protect tab is flipped to. Disliking Windows technically didn't stop me from getting my MCSE or supporting it at work.


The Keynote

The preceding huge build-up is intended to imply that, prior to this year's MS Ignite keynote, I wasn't ready to lower my guard, open my heart, and learn to trust (if not love) again. I had been so conflicted. But then I heard the keynote.


I appreciate, respect, and even rejoice in much of what was shared in Satya Nadella's address. His zen-like demeanor and measured approach proved, in every visible way, that he owns his role as Microsoft's chief visionary. (An amusing sidenote: On my way home after a week at Ignite, I heard Satya on NPR. Once again, he was calm, unassuming, passionate, focused, and clear.)


Mr. Nadella's vision for Microsoft did not include a standard list of bulleted goals. Instead, he framed a far more comprehensive and interesting vision of the kind of world the company wants to build. This was made clear in many ways, from the simultaneous, real-time translation of his address into more than a dozen languages (using AI, of course), to the fact that, besides himself, just three panelists and one non-speaking demonstration tech were men. Women who occupied important technical roles at Microsoft filled the rest of the seats onstage. Each delivered presentations full of clarity, passion, and enthusiasm.


But the inspirational approach didn't stop there. One of the more captivating parts of the keynote came when Nadella spoke about the company's work on designing and building a quantum computer. This struck me as the kind of corporate goal worthy of our best institutions. It sounded, at least to my ears, like Microsoft had an attitude of, "We MIGHT not make it to this goal, but we're going to try our damnedest. Even if we don't go all the way, we're going to learn a ton just by trying to get there!"


The Floor

Out on the exhibitor floor, some of that same aspirational thinking was in evidence.


I spent more time than I ought to have in the Red Hat booth. (I needed a little me time in a Linux-based safe space, okay?) The folks staffing the booth were happy to indulge me. They offered insights into their contributions to past cross-platform efforts, such as getting Visual Studio Code ported to Linux. That provided context when I got to zero in on the details of running SQL Server on Linux. While sqlrockstar and I have a lot to say about that, the short story is that it's solid. You aren't going to see 30% performance gains (as you do, in fact, by running Visual Studio Code on Linux), but there's a modest increase that makes your choice to cuddle up to Tux more than just a novelty.


I swung by the MSDN booth on a lark (I maintain my own subscription) and was absolutely blown away when the staffer took the time to check out my account and show me some ways to slice $300 off my renewal price next year. That's a level of dedication to customer satisfaction that I have not expected from M$ in... well, forever, honestly.


I also swung by the booth that focused on SharePoint. As some people can attest (I'm looking at you, @jbiggley), I hate that software with the fiery passion of 1,000 suns, and I challenged the techs there to change my mind. The person I spoke to got off to a pretty poor start by creating a document and then immediately losing it. That's pretty consistent with my experience of SharePoint, to be honest. He recovered gracefully and pointed out a few features and design changes which MIGHT make the software easier to stomach, in case I'm ever forced to work on that platform again.


In the booth, SolarWinds dominated in the way I've come to expect over the past four years of convention-going. There was a mad rush for our signature backpacks (the next time you can find us at a show, you NEED to sign up for one of them, and you need to hit the show floor early). The socks made a lot of people happy, of course. Folks from other booths even came over to see if they could score a pair.


What was gratifying to me was the way people came with specific, focused questions. There was less, "Welp, I let you scan my badge, so show me what you got" attitude than I've experienced at other shows. Folks came with a desire to address a specific challenge, to find out about a specific module, or even to replace their current solution.


The Buzz

But what had everyone talking? What was the "it" tech that formed the frame of reference for convention goers as they stalked the vendor booths? Microsoft gave everyone three things to think about:

  1. PowerShell. This is, of course, not a new tech. But now more than ever (and I realize one could have said this a year ago, or even two, but it keeps becoming ever truer), a modicum of PowerShell skill is a requirement if you expect your life as an IT pro to include Microsoft tech.
  2. Teams. This is the new name for the role currently occupied by Lync/Skype/Skype for Business. While there is a lot of development runway left before this software takes flight, you could already see (in the keynote demos and on the show floor) that it, along with Bing, will be central to Microsoft's information strategy. Yes, Bing. And no, I'm not making any further comment on that. I feel good enough about Microsoft that I won't openly mock them, but I haven't lost my damn mind.
  3. Project Honolulu. This is another one of those things that will probably require a completely separate blog post. But everyone who showed up at the booth (especially after the session where they unveiled the technical preview release) wanted to know where SolarWinds stood in relation to it.


The SWUG Life

Finally, there was the SolarWinds User Group (SWUG) session Tuesday night. In one of the best-attended SWUGs ever, Steven Hunt, Kevin Sparenberg, and I had the privilege of presenting to a group of users whose enthusiasm and curiosity were undiminished, despite their having been at the convention all day. Steven kicked things off with a deep dive into PerfStack, making a compelling case that we monitoring experts need to strongly consider our role as data scientists, given the vast pile of information we sit on top of. Then I had a chance to show off NetPath a bit, taking Chris O'Brien's NetPath Chalk Talk and extending it to show off some of the hidden tricks and new insights we've gained after watching users interact with it for over a year. And Kevin brought it home with his Better Together talk, which honestly gets better every time I watch him deliver it.


The Summary

If you were at Ignite this year, I'd love to hear whether you think my observations are off-base or spot-on. And if you weren't there, I'd highly recommend adding it to your travel plans (it will be in Orlando again next year), especially if you tend to walk on the wild systems side in your IT journey. Share your thoughts in the comments below.
