
Geek Speak


Last year, we kicked off a new THWACK tradition: the December Word-a-Day Writing Challenge, which fostered a new kind of interaction in the THWACK community. By sharing personal essays, images, and thoughts, we started personal conversations and created human connections. In just one month, last year’s challenge generated nearly 20,000 views and over 1,500 comments. You can see the amazing writing and thoughtful responses here: Word-A-Day Challenge 2016


Much of this can be attributed to the amazing, engaging, thriving THWACK community itself. Whether the topic is the best starship captain, what IT lessons we can learn from blockbuster movies, or the best way to deal with nodes that have multiple IP addresses, we THWACKsters love to chat, debate, and most of all, help. But I also believe that some of last year's success can be attributed to the time of year. With December in sight, and some of us already working on budgets and project plans for the coming year, many of us find our thoughts taking an introspective turn. How did the last 12 months stack up to my expectations? What does the coming year hold? How can I best prepare to meet challenges head-on? By providing a simple prompt of a single, relatively innocuous word, the Word-a-Day Challenge gave many of us a much-needed blank canvas on which to paint our hopes, concerns, dreams, and experiences.


Which takes me to this year's challenge. Once again, each day will feature a single word. Once again, one brave volunteer will serve as the "lead" writer for the day, with his or her thoughts on the word of the day featured at the top of the post. Once again, you--the THWACK community--are invited to share your thoughts on the word of the day in the comments area below the lead post. And once again, we will be rewarding your contribution with sweet, sweet THWACK points.


What is different this year is that the word list has a decidedly more tech angle to it. Also, our lead writers represent a wider range of voices than last year, with contributors coming from SolarWinds product, marketing, and sales teams. The lead writers also include contributions from our MVP community, which gives you a chance to hear from some of our most experienced customer voices.


For those who are more fact-oriented, here are the challenge details:

  • The words will appear in the Word-a-Day challenge area, located here: Word-A-Day Challenge 2017
  • The challenge runs from December 1 to December 31
  • One word will be posted per day, at midnight, US CST (GMT -6)
  • The community has until the following midnight, US CST, to post a meaningful comment on that day's word
    • Comments will earn you 150 THWACK points
    • One comment per THWACK ID per day will be awarded points
    • Points will appear throughout the challenge, BUT NOT INSTANTLY. Chill.
    • A "Meaningful" comment doesn't necessarily mean "long-winded." We're simply looking for something more than a "Me, too!" or "Nice job!" sort of response
    • Words WILL post on the weekends, BUT...

    • ...For posts on Saturday and Sunday, the community will have until midnight CST on Monday (meaning the end of Monday, start of Tuesday) to share comments about those posts. That's for those folks who really REALLY don't work on the weekend (who ARE you people?!?)

  • Only comments posted in the comments area below the word for that day will be awarded THWACK points (which means words will not count if posted on Geek Speak, product forums, your own blog, or on the Psychic Friends Network)


Once again, the Word-a-Day 2017 challenge area can be found here: Word-A-Day Challenge 2017. While nothing will appear in this new forum until December 1, I encourage everyone to follow the page now to receive notifications about new posts as they appear.


If you would like to get to know our 28 contributors before the challenge starts, you can find their THWACK pages here:


And finally, I present the Word-a-Day list for 2017! I hope that posting them here and now will give you a chance to gather your thoughts and prepare your ideas prior to the Challenge. That way you can fully participate in the conversations that will undoubtedly arise each day.


  • December 01 - Identity
  • December 02 - Access
  • December 03 - Insecure
  • December 04 - Imposter
  • December 05 - Code
  • December 06 - FUD (Fear, Uncertainty, and Doubt)
  • December 07 - Pattern
  • December 08 - Virtual
  • December 09 - Binary
  • December 10 - Footprint
  • December 11 - Loop
  • December 12 - Obfuscate
  • December 13 - Bootstrap
  • December 14 - Cookie
  • December 15 - Argument
  • December 16 - Backbone
  • December 17 - Character
  • December 18 - Fragment
  • December 19 - Gateway
  • December 20 - Inheritance
  • December 21 - Noise
  • December 22 - Object
  • December 23 - Parity
  • December 24 - Peripheral
  • December 25 - Platform
  • December 26 - Utility
  • December 27 - Initial
  • December 28 - Recovery
  • December 29 - Segment
  • December 30 - Density
  • December 31 - Postscript

In the last two posts, I talked about databases and why network and storage are so darn important for their well-being. These are two things that a lot of companies have struggled with and, in some ways, seem to have mastered over the years in their own data centers. But now it seems that mastery may become obsolete as the cloud quickly becomes the new normal.


Let’s leave the automotive comparison for now and let’s concentrate on how cloud strategy is affecting the way we use and support our databases.


There are a lot of ways to describe the cloud, but I think the best way for me to describe it is captured in the picture below:

As Master Yoda explains, cloud is not something magical. You cannot solve all of your problems by simply moving all your resources there. It will not mean that once your database is in the cloud that your storage and network challenges will magically vanish. It is literally just another computer that, in many cases, you will even share with other companies that have moved their resources to the cloud as well.


As we all know, knowledge is power. The cloud doesn’t change this. If we want to migrate database workloads to the cloud, we need to measure and monitor. We need to know the impact of moving things to the cloud. As with the Force, the cloud is always there ready to be used. It is the how and when that will impact the outcome the most. There are multiple ways to leverage the cloud as a resource to gain an advantage, but that doesn’t mean that moving to the cloud is the best answer for you. Offense can be the best defense, but defense can also be the best offense. Make sure to know thy enemy, so you won’t be surprised.
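If you want to put numbers behind "measure and monitor," something as simple as timing a representative query against both environments goes a long way before you commit to a migration. Below is a minimal sketch of that idea. The connection strings, server names, and query are hypothetical placeholders, and I'm assuming pyodbc with a SQL Server ODBC driver is available; adapt it to your own stack.

```python
import statistics
import time

import pyodbc  # assumes an ODBC driver for your database is installed

# Hypothetical connection strings: one on-premises, one cloud-hosted.
TARGETS = {
    "on-prem": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01.corp.local;DATABASE=Sales;Trusted_Connection=yes;",
    "cloud": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example.database.windows.net;DATABASE=Sales;UID=app;PWD=...;",
}

QUERY = "SELECT COUNT(*) FROM dbo.Orders;"  # a representative workload query
RUNS = 20

for name, conn_str in TARGETS.items():
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    timings = []
    for _ in range(RUNS):
        start = time.perf_counter()
        cursor.execute(QUERY)
        cursor.fetchall()
        timings.append((time.perf_counter() - start) * 1000)  # milliseconds
    print(f"{name}: median {statistics.median(timings):.1f} ms, max {max(timings):.1f} ms")
    conn.close()
```

Run the same representative queries before and after, and you have actual evidence of the impact of the move instead of a feeling.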


May the cloud be with you!

Home for a week before heading to Orlando and SQL Live. This is my third event in five weeks, and it will be four events in seven weeks once I get to AWS re:Invent. That's a bit more travel than usual, but being at events is the best way to communicate with customers. I love talking data and collecting feedback. Discussing common issues and ways to solve them is one of my favorite things. Yeah, I'm weird.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Microsoft says 40 percent of all VMs in Azure now are running Linux

Want to keep away the database administrators that should never be touching your servers? Run Linux. Without a GUI, they stay away. Problem solved.


Millions of Professional Drivers Will Be Replaced by Self-Driving Vehicles

An interesting factor that will help drive the development of autonomous vehicles. Oftentimes we hear about robots taking our jobs, but in this case, robots are going to take the jobs we don’t want anyway.


As Amazon’s Alexa Turns Three, It’s Evolving Faster Than Ever

I broke down and purchased an Echo Dot, after having used one while on vacation recently. Ask Alexa to “open a box of cats.” You’re welcome.


Three years in a row – Microsoft is a leader in the ODBMS Magic Quadrant

Just in case you didn’t know who made the best data platform on the planet.


Microsoft presenter downloads Chrome after Edge fails

Don’t judge. We've all been there.


There’s No Fire Alarm for Artificial General Intelligence

A bit long, but well worth the time. As I was reading this I was thinking about the fire alarm for those of us discussing things like autonomous databases and the future of systems and database administration.


Security and privacy, startups, and the Internet of Things: some thoughts

Also a bit long, but worth the time. Some interesting insights into why and how we got ourselves into the data security and privacy mess we are in today. SPOILER ALERT: It’s money.


One good thing about visiting Seattle is that I get to stop by the grave of Microsoft Bob.


In my eBook, 10 Ways We Can Steal Your Data, I reveal ways that people can steal or destroy the data in your systems. In this blog post, I'm focusing on unmonitored and poorly monitored systems.


Third-party Vendors


The most notorious case of this type is the 2013 Target data theft incident in which 40 million credit and debit cards were stolen from Target's systems. This data breach is a case study on the role of monitoring and alerting. It led to fines and costs in the hundreds of millions of dollars for the retailer. Target had security systems in place, but the company wasn't monitoring the security of their third-party supplier. And, among other issues, Target did not respond to their monitoring reports.


The third-party vendor, an HVAC services provider, had a public-facing portal for logging in to monitor their systems. Access to this system was breached via an email phishing attack. This information, together with a detailed security case study and architecture published by another Target vendor, gave the attackers the information they needed to successfully install malware on Target Point-of-Sale (POS) servers and systems.


Target listed their vendors on their website. This list provided a funnel for attackers to find and exploit vendor systems. The attackers found the right vulnerability to exploit with one of the vendors, then leveraged the details from the other vendor to do their work.


Misconfigured, Unprotected, and Unsecured Resources


The attackers used vulnerabilities (backdoors, default credentials, and misconfigured domain controllers) to work their way through the systems. These are easy things to scan for and monitor. So much so that "script kiddies" can do this without even knowing how their scripts work. Why didn't IT know about these misconfigurations? Why were default credentials left in enterprise data center applications? Why was information about ports and other configurations published publicly? No single one of these issues alone might have led to the same outcome, but as I'll cover below, together they formed the perfect storm of mismanaged resources that made the data breach possible.
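To illustrate how low that bar really is, here's a minimal sketch of the kind of open-port check defenders should be running against their own inventory before someone else does. It uses only the Python standard library; the host addresses are hypothetical placeholders, and this should only ever be pointed at systems you own.

```python
import socket

# Hypothetical inventory of hosts to audit; in practice, pull this from your CMDB.
HOSTS = ["10.0.0.10", "10.0.0.11"]

# Ports that commonly indicate exposed management or database services.
PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 1433: "SQL Server", 3306: "MySQL", 3389: "RDP"}

for host in HOSTS:
    for port, service in PORTS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)  # keep the sweep fast; tune for slow links
        try:
            if sock.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                print(f"{host}:{port} open ({service}) -- verify this is intentional")
        finally:
            sock.close()
```

If a dozen lines of standard-library code can find these exposures, so can an attacker.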



When all this was happening, Target's offsite monitoring team was alerted that unexpected activities were happening on a large scale. They notified Target, but there was no response.


Some of the reasons given were that there were too many false positives, so security staff had grown slow to respond to all reports. Alert tuning would have helped this issue. Other issues included having too few and undertrained security staff.


Pulling it All Together


There were monitoring controls in place at Target, as well as security staff, third-party monitoring services, and up-to-date compliance auditing. But the system as a whole failed due to not having an integrated, system-wide approach to security and threat management.



How can we mitigate these types of events?


  • Don't use many separate monitoring and alerting systems
  • Follow data flows through the whole system, not just one system at a time
  • Tune alerts so that humans respond (see the sketch after this list)
  • Test responders to see if the alerts are working
  • Read the SANS case study on this breach
  • Don't let DevOps performance get in the way of threat management
  • Monitor for misconfigured resources
  • Monitor for unpatched resources
  • Monitor for rogue software installs
  • Monitor for default credentials
  • Monitor for open ports
  • Educate staff on over-sharing about systems
  • Monitor the press for reports about technical resources
  • Perform regular pen testing
  • Treat security as a daily operational practice for everyone, not just an annual review
  • Think like a hacker
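On the alert-tuning item above, here's a minimal sketch of one simple approach: suppress one-off noise and page a human only when the same alert fires repeatedly within a window. The alert key and thresholds are invented for illustration; real alerting systems offer far richer tuning, but this is the core idea that keeps false positives from training staff to ignore reports.

```python
import time
from collections import defaultdict

WINDOW = 300   # seconds of history to consider
REPEATS = 3    # escalate only when an alert fires this many times in the window

_history = defaultdict(list)  # alert key -> recent firing timestamps

def should_page_human(alert_key, now=None):
    """Suppress one-off noise; page only on sustained, repeated alerts."""
    now = now if now is not None else time.time()
    recent = [t for t in _history[alert_key] if now - t < WINDOW]
    recent.append(now)
    _history[alert_key] = recent
    return len(recent) >= REPEATS

# Example: the first two firings are recorded but suppressed; the third pages.
for _ in range(3):
    if should_page_human("pos-server-unexpected-binary"):
        print("PAGE: sustained alert -- a human should look at this now")
```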


I could just keep adding to this list.  Do you have items to add? List them below and I'll update.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


There is currently a global shortage of cybersecurity experts - not just a few thousand, or even tens of thousands, but some one million experts.


This isn’t good news for agencies, particularly with the rising complexity of hybrid IT infrastructures of both in-house managed and cloud-based services. Traditional network monitoring tools and strategies fail to provide complete visibility into the entire network, making it difficult to pinpoint the root causes of problems, let alone anticipate those problems before they occur. This can open up security holes to outside attackers and insider threats.


Federal cybersecurity teams need complete views of their networks and applications, regardless of whether they are on-site or hosted. IT managers must also be able to easily and quickly troubleshoot, identify, and fix issues wherever they reside. Even better, they should be equipped with systems that can predict when a problem may occur based on historical data.


To help ensure the security of their networks, managers should explore options that offer three key benefits.


Better visibility. IT managers must manage and track multiple application stacks across their different environments. Therefore, they should consider solutions that track and monitor both on-premises and off-premises network activity.


These solutions must provide a single-pane-of-glass view into all network activities, and allow for review of data correlations across application stacks. Seeing different data types side by side can help you identify anomalies and track cybersecurity problems directly to the source. Timelines can be laid on top of this information to correlate the timing of an event to a specific slowdown or outage. This information can be used collectively to quickly remediate issues that could impact network security.


Better proactivity. Predictive analytics allows managers to create networks that effectively learn from past incidents and behaviors. Network monitoring tools can automatically scan for anomalies that have caused disruptions in the past. When something is detected, managers can receive notifications and directions on how to mitigate the problem before it happens.
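As a rough illustration, the simplest form of that anomaly scan compares a new reading against recent history and flags anything several standard deviations out. The latency samples below are made up; production tools use far more sophisticated models, but the principle is the same.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Hypothetical response-time samples (ms) from the last hour, then a new reading.
recent_latency = [42, 45, 44, 41, 43, 46, 44, 42, 45, 43]
new_reading = 95

if is_anomalous(recent_latency, new_reading):
    print(f"Anomaly: {new_reading} ms vs recent mean "
          f"{statistics.mean(recent_latency):.1f} ms -- notify before users notice")
```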


In essence, IT managers go from reacting to network issues to proactively preventing them. This is a handy strategy that helps keep networks secure and running without demanding a lot of resources.


Better collaboration. One of the benefits of having a smaller staff is that the network management team can be more nimble, as long as they have the right collaboration tools in place. Individuals must be able to easily share data, charts, and metrics with the rest of the team. This sets up a baseline, helps prevent confusion, and helps bring the team together to tackle problems in a more efficient manner.


Collaboration becomes even more critical when working with hybrid IT environments. Everyone needs to be able to work off the same canvas to address potential security problems.


Better security and hybrid IT environments can coexist, but agencies need to make sure that the managers they have on staff are equipped with tools that bring these two vital concerns together in a cohesive, efficient, and effective manner.


Find the full article on GovLoop.

I've rarely seen an employee of a company purposefully put their organization at risk while doing their job. If that happens, the employee is generally not happy, which likely means they’re not really doing their job. However, I have seen employees apply non-approved solutions to daily work issues. Why? Several reasons, probably, but I don’t think any are intentionally used to put their company at risk. How do I know this?


My early days as an instructor

When I started out as a Cisco instructor, I worked for a now-defunct learning partner that used Exchange for email. The server was spotty, and you could only check email on the go by using their Microsoft VPN. I hated it because it didn’t fit any of my workflows and created unnecessary friction. In response to this, I registered a domain that looked similar to the company’s domain and set up Google apps, now called G-Suite, for the domain. That way I could forward my work emails to an address that I set up. No one noticed for several months.  I would reply to them from my G-Suite address and they just went with it. Eventually, most people were sending emails directly to my “side” email.


After becoming the CTO, I migrated the company off our rusty Exchange server and over to G-Suite, but I couldn’t help but think that I would have reamed someone if they had done what I did. In hindsight, it was not the smartest thing to do. But I wasn’t trying to cause any issues or leak confidential data; I was just trying to get my job done. Management needs to come to terms with the fact that if something makes an employee's work life difficult, they will find another way. And it may not be the way you want.


Plugging the holes

Recently, I saw a commercial for FlexTAPE. It was amazing. In one part, you see a swimming pool with a huge hole in the side with water gushing out of it. A guy slaps a piece of FlexTAPE over the hole from the inside of the pool, and the water stops flowing. It reminded me of some IT organizations that metaphorically attempt to fix holes by applying FlexTAPE to them. But, by that point, so much water has escaped that the business has already been negatively impacted. Instead, companies should be looking for the slow leaks that can be repaired early on.


Going back to my first example, once people learned how I was handling my email, they started asking me to set up email addresses for them so they could do the same. First one colleague, then another. Eventually, several instructors had an “alternate” email address that they were using regularly. The size of that particular hole grew quite large.


At some point, management realized that they couldn’t backpedal on the issue, and was forced to update certain protocols. I often wonder how much confidential information could have been leaked once I was no longer the only one using the new email domain. Fortunately, those who were using it didn’t have access to confidential information, but lots of content could have been exfiltrated. That would have been bad, but in my particular organization, I don’t know if anyone would have known.


Coming full circle

Today I own my own business and deal with several external clients. When I have employees, I try to be flexible because I understand the problem with friction. I also understand that friction may not be the only reason one turns to a non-approved solution to get their work done. For core business operations, organizations would do well to clearly define approved software packages. Should employees use services like Dropbox, iCloud, or Google Drive? If they do, are there controls in place? How does the solution impact their role? Do employees have a way to express their frustrations without fear of reprimand? Having an open line of communication with an employee can help them feel like their role is important. It also helps management really understand the issues they face. If you neglect that, employees will choose their own solutions to get work done, and potentially create security issues. And we don’t want that now, do we?


The Just Us League

Posted by kong.yang on Nov 3, 2017

The original Justice League consisted of seven superheroes: Superman, Aquaman, Flash, Green Lantern, Martian Manhunter, Batman, and Wonder Woman. In a parallel universe, there is the IT Justice League, a union of IT superheroes that aims to protect IT organizations and their users from the perils of application downtime, high latency, and IT disasters. As IT pros, we like to think we possess superpowers of our own. Sometimes those powers are used for good; sometimes for evil. Most of the time, we use them to keep our jobs.


To have a successful, long career in IT, you can’t operate in a silo. You simply can't accomplish organizational, global goals by walking alone. Plus, life's too short to walk the journey all alone. In other words, it's best to share your IT pains and gains. Moreover, most of us are neither blessed with innate talents like Superman and Wonder Woman nor are we blessed with endless resources and capital like Bruce Wayne/Batman or Aquaman. Heck, even with the greatest willpower, we can't create something out of anything like Green Lantern or morph objects based on our desires like Martian Manhunter. In spite of this, we still stand and deliver when called upon, such as when trouble befalls our applications. Teamwork, collaboration, and community make integrating and delivering application services a much easier and more gratifying experience.


This is where the challenge of the Just Us League appears. The Just Us League is one where everyone knows your name and you just fit in because it’s always been that way. But what if you’re not a part of the original Just Us League? How do you join? What are the rules for joining and participating in the Just Us League? What practical tips do you use to open the Just Us League to include new-to-you people? How do you incorporate the trust-but-verify modus operandi to make sure you are nurturing vibrant and growing teams, collaboration, and community? Let me know in the comment section below.

While the previous two posts in this DevOps series have been open-ended and applicable to people on both the development and operations side, this post is focused on operations personnel. After all, if you're a developer and asking if you need to learn to code, then you might not be in the right job. I've recently had the chance to speak to several people in operations roles that are tackling what their organization is calling DevOps. Inevitably, the discussion turns to coding and whether or not operations staff need to pick up development skills. It depends, and in this post I'll lay out the cases for and against.


Why You Shouldn't Learn Development as an Operator

It may be tempting to learn how to code so that you can be the sole source for your DevOps efforts. After all, who is closer to daily operations in your organization than you? And if you can cut out the developer, you'll get to your DevOps nirvana sooner, right? Not usually. Here's why:


It's About Turning Code Around Quickly

You might be able to pick up the skills that you need to get your program off the ground, but how long is it going to take? In addition to the learning curve to get everything programmed the way it needs to be, operations has its own goals for supporting and operating the environment, and often the demands of one will not match up with the demands of the other. If you are fortunate enough to have a development team, they can really speed up this process. In addition to already having a solid basis on the coding piece, they can continue developing for the environment when operations has a fire that requires attention. By having the teams work together, you can ensure that the developers know what they need to know, while not taking significant time away from your day-to-day job.


It's About Good Development Practices

When it boils down to it, DevOps is a development practice. It's a way for development and operations to have continual communication during the development process to ensure that the final product is what operations is hoping for. In addition to the communication, on the back end there is a whole host of development practices that get involved: proper code documentation, testing, and validation, to name just a few. If you're picking up coding for your DevOps effort, it's highly likely that some of these practices will be forgotten in the effort to get yourself up to speed on the coding basics.


Value in Collaboration

Aside from the points mentioned in the previous two sections, there is value to having someone from outside your operations organization participate in the development process. You may know exactly what you want, but a healthy back and forth can help the development team come up with suggestions for handling an issue that you may not have previously considered. The development team may have a different, and sometimes better, perspective on how to approach and resolve a problem.


Why You Should Learn Development as an Operator

Whether you choose to learn how to code or not is largely going to depend on what the goal is. In this section, we'll go over some valid reasons for wanting to pick up this skillset.


There Are No Developers

In earlier posts in this series, several people have commented that their boss wants them to implement DevOps in their organization. The one problem? They have no developers. You could easily say, "That's not DevOps," and be right in this situation.  The fact of the matter is that the supervisor has said that they are doing DevOps, so in this case, you will need to pick up coding skills, even though you are still technically on the operations side.


Speak the Language

By learning how to code, you can pick up some non-coding-specific knowledge that can help out in your DevOps efforts. Understanding what it takes to write an application can give you more realistic expectations regarding timelines when you speak with the developers in your organization. Additionally, by understanding basic development concepts, you will be able to understand what things are possible (or more difficult) with code in your application. Finally, having a grasp of some of the basic terminology around coding may help you better communicate what you want from your applications to the developers who are writing them. Being able to understand basic coding concepts can expedite your DevOps processes by simplifying communications and setting achievable goals for the project.


No One Told Me There Would Be Learning

As the saying goes, if you're not moving forward, you're falling behind. One of the best reasons to learn to code as an operator is simple: expanding your skillset. By learning to code, your organization will benefit from you being able to speak the language with developers. You also increase your skillset and value, which could help in your career development.



Whether you decide to pick up coding or not is ultimately up to you. However, you may notice one key point in all of the reasoning above. Unless there are no developers in your organization, your goal for learning to code should not be to do the developers' job for them. Instead, if you choose to go that route, look at it as an opportunity to improve communications between teams, or to improve yourself.

I’m at PASS Summit this week in Seattle. If you are reading this and also at PASS Summit, stop by the SolarWinds booth and say hello.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Microsoft’s Sonar lets you check your website for performance and security issues

Microsoft continues to take steps to help users understand the importance of data security and privacy.


Amazon wants the keys to your home

Meanwhile, Amazon expects users to not understand the importance of security.


Microsoft finally kills off the Kinect, but the tech will live on in other devices

Proving that people are likely to get off the couch and move around just long enough to realize how much they prefer lying on the couch.


InfoSec Needs to Embrace New Tech Instead of Ridiculing It

A brilliant piece that tackles an issue that has frustrated me for a long time. I’m not an early adopter of tech, but I don’t dismiss new things simply because they’re new, and I don’t mock others for wanting to try them.


Windows 10 tip: Turn on the new anti-ransomware features in the Fall Creators Update

I took the time to apply the update before taking my trip and was happy to come across this piece of information regarding an anti-ransomware feature in Windows 10. I enabled this and I think you should, too.


Does Apple Think We’re Clowns?

Yes. Next question.


Why Amazon and Microsoft Shouldn't Lose Sleep Over Oracle's New Cloud Database

Because (1) Oracle has only announced features, we still haven’t seen them; (2) Amazon and Microsoft actually have products with features and customers using them; and (3) they are a fraction of the cost that Oracle will charge. And yet, somehow, Larry will make money anyway.


How many minutes are in "several?" Asking for a friend.


Previously, I’ve spoken about project management and microservices as being analogous to life management. To me, these analogies have seemed really functional and very helpful to my thought process regarding the goals I have set for myself in life. I hope that in some small way they’ve helped you to envision the things you want to change in your own life, and how you might approach those changes as well.


In this, the third installment of the “My Life as IT Code” series, I’ll take those ideas and try to explain how I visualize and organize my approach. From a project manager’s perspective, the flowchart has always been an important tool. The first step is to outline a precedence diagram, such as the one I created for a past project. It was a replication and disaster recovery project for a large, multi-site law firm that had leveraged VMware-based virtual machines, a Citrix remote desktop, and storage-to-storage replication to maintain consistent uptime. I broke individual time streams into flowcharts, giving the individual stakeholders clear indications of their personal tasks. I organized them into milestones that related to the project as a whole. I delivered both Gantt charts and flowcharts to show how the projects could be visualized, revealing time used per task, as well as the tasks that I discretely broke into their constituent parts.


This same technique is applicable to some of these life hacks. While it can be difficult to assign timeframes to weight loss, for example, the milestones themselves can be quite easy to demarcate. With some creative thinking, you can find viable metrics against which to mark progress, and effective tools for establishing reasonable milestones.


There are great tools to aid in exercise programs, which enforce daily or weekly targets, and these are great to leverage in one’s own plan. I have one on my phone called the seven-minute workout. I also note my daily steps, using the fitness tracker application on both my phone and my smartwatch. These tools, along with a scale, can chart your progress along the path to your goals. Weight loss is never a simple downward slope, but rather a decline that tends toward plateaus followed by restarts. As your progress moves forward, though, a graphical representation of your weight loss encourages more work along those lines. For me, the best way to track progress is a spreadsheet, graphing on a simple x/y axis, which provides an effective visualization of progress. I do not suggest paying rigid attention to the scale, as those plateaus can be detrimental to how one sees one’s progress.
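For the spreadsheet-inclined, here's a minimal sketch of that x/y graph in Python with matplotlib; the weigh-in numbers are invented. The fitted trend line is the piece that smooths out those discouraging plateaus.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical weekly weigh-ins (lbs); note the plateau around weeks 5-8.
weeks = np.arange(1, 13)
weight = np.array([210, 208, 206, 205, 204, 204, 204, 203, 201, 200, 198, 197])

# A fitted trend line de-emphasizes plateaus that discourage day-to-day watchers.
slope, intercept = np.polyfit(weeks, weight, 1)

plt.plot(weeks, weight, "o-", label="weekly weigh-in")
plt.plot(weeks, slope * weeks + intercept, "--", label=f"trend: {slope:.2f} lbs/week")
plt.xlabel("Week")
plt.ylabel("Weight (lbs)")
plt.legend()
plt.show()
```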


I’ve never been a runner, but the ability to define distance plans, times to those distances, and delineations of progress along those paths is easily translatable to a flowchart. It’s important to know what you’re doing. Rather than saying things like, “I’m going to lose 40 lbs in 2018,” a more effective strategy is to focus on specifics, such as, "I plan on walking 10,000 steps per day, or five miles per day." It is easier and more impactful to adhere to smaller, more strategic commitments.


Meanwhile, creating and paying attention to the flowchart is a hugely effective way to keep a person on track. It helps them pay attention to their goals, and gives them visibility into their progress. Once you have that, you can celebrate when you reach the milestones you have set for yourself.


As I’ve stated, the flowchart can be an amazing tool. And when you look at life goals with the eyes of a project manager, you give yourself defined timeframes and goals. The ability to visualize your goals, milestones, and intent can really assist in keeping them top of mind.


By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Can you afford for your team to lose eight hours a day? According to the 2016 State of Data Center Architecture and Monitoring and Management report by ActualTech Media in partnership with my company, SolarWinds, that’s precisely what is happening to today’s IT administrators when they try to identify the root cause of virtualization and virtual machine (VM) performance problems. This is valuable time that could otherwise be spent developing applications and innovative solutions to help warfighters and agency employees achieve mission-critical objectives.


Virtualization has quickly become a foundational element of Federal IT, and while it offers many benefits, it’s also a major contributor to the increasing complexity. Adding additional hypervisors increases a network’s intricacies and makes it more difficult to manually discover the cause of a fault. There’s more to sift through and more opportunities for error.


Finding that error can be time-consuming if there are not automated virtualization management tools in place to help administrators track down the source. Automated solutions can provide actionable intelligence that can help federal IT administrators address virtual machine performance issues more quickly and proactively. They can save time and productivity by identifying virtualization issues in minutes, helping to ensure that networks remain operational.


Ironically, the key to saving time and improving productivity now and in the future involves traveling back in time through predictive analysis. This is the ability to identify and correlate current performance issues based on known issues that have occurred in the past. Through predictive analysis, IT managers can access and analyze historical data and usage trends to respond to active issues.


Further, analysis of past usage trends and patterns helps IT administrators reclaim and allocate resources accordingly to respond to the demands their networks may be currently experiencing. They’ll be able to identify zombie, idle, or stale virtual machines that may be unnecessarily consuming valuable resources, and eradicate under- or over-allocated and orphaned files that may be causing application performance issues.


Predictive analytics can effectively be used to prevent future issues resulting from virtualization sprawl. By analyzing historical data and trends, administrators will be able to optimize their IT environments more effectively to better handle future workloads. They can run “what if” modeling scenarios using historical data to predict CPU, memory, network, and storage needs.
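Here's a minimal sketch of that kind of "what if" modeling: fit a linear trend to historical CPU figures and project when the host crosses an alerting threshold. The numbers are invented, and real virtualization management tools model this far more rigorously, but the mechanics of using the past to predict the future are the same.

```python
import numpy as np

# Hypothetical monthly average CPU utilization (%) for a virtualization host.
months = np.arange(1, 13)
cpu = np.array([38, 40, 41, 44, 46, 49, 50, 53, 56, 58, 61, 63])

slope, intercept = np.polyfit(months, cpu, 1)  # simple linear trend
CAPACITY = 85.0  # alerting threshold (%)

current = slope * months[-1] + intercept
months_to_threshold = (CAPACITY - current) / slope
print(f"Trend: +{slope:.1f}%/month; ~{months_to_threshold:.0f} months "
      f"until the {CAPACITY:.0f}% threshold at current growth")

# "What if" scenario: adding a workload that raises the baseline by 10 points.
months_with_new_load = (CAPACITY - current - 10) / slope
print(f"With a +10% workload: ~{months_with_new_load:.0f} months of headroom")
```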


Predictive analytics is not just a “nice to have,” it’s something that is becoming increasingly in-demand among IT professionals. In fact, 86 percent of respondents to the State of Data Center Architecture and Monitoring and Management report identified predictive analytics as a “critical need.”


We often speak of virtualization management as the future of IT, but that’s only partially true. True virtualization management involves a combination of the past, present, and future. This combination gives federal IT managers the ability to better control their increasingly complex networks of virtual machines both today and tomorrow, by getting a glimpse into how those networks have performed in the past.


Find the full article on our partner DLT’s blog TechnicallySpeaking.

Last time I told you guys I really love the Ford story and how I view storage in the database realm. In this chapter, I would like to talk about another very important piece of this realm, The Network.


When I speak with system engineers working in a client's environment, there always seems to be a rivalry between storage and network regarding who's to blame for database issues. However, blaming one another doesn’t solve anything. To ensure that we are working together to solve customer issues, we need to first have solid information about their environment.


The storage part we discussed last time is responsible for storing the data, but there needs to be a medium to transport the data from the client to the server and between the server and storage. That is where the network comes in. And to stay with my Ford story from last time, this is where other aspects come into play. Speed is one of these, but speed can be measured in many ways, which seems to cause problems with the network. Let’s look at a comparison to other kinds of transportation.


Imagine that a city is a database where people need to be organized. There are several ways to get the people there. Some are local, and thus the time for them to be IN the city is very short. Some live in the suburbs, and their travel is a bit longer due to having a greater distance to cover, with more people traveling the same road. If we go a bit further and concentrate on the cars again, there are a lot of cars driving to and from the city. How fast one gets to or from the city depends on others who are similarly motivated to reach their destination as quickly as possible. Speed is therefore impacted by the way the drivers perform and what happens on the road ahead.


Sheep Jam


The network is the transportation medium for the database, so it is critical that this medium is used in the correct way. Some of the data might need something like a Hyperloop to travel back and forth over medium-to-long distances, while other data can make do with local roads for those shorter trips.


Having excellent visibility into the data paths to see where congestion might become an issue is a very important measurement in the database world. As with traffic, it gives one insight into where troubles could arise, as well as offering the necessary information about how to solve the problem that is causing the jam.
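One inexpensive way to get a feel for that congestion is to sample round-trip time to the database endpoint and graph it over time. Below is a minimal sketch that uses TCP connect time as a rough RTT proxy; the hostname and port are hypothetical placeholders, and dedicated monitoring tools will give you far better path visibility.

```python
import socket
import time

# Hypothetical database endpoint; sample TCP connect time as a cheap RTT proxy.
HOST, PORT = "db01.example.com", 1433
SAMPLES, INTERVAL = 10, 5  # ten samples, five seconds apart

for _ in range(SAMPLES):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    start = time.perf_counter()
    try:
        sock.connect((HOST, PORT))
        rtt_ms = (time.perf_counter() - start) * 1000
        print(f"connect RTT: {rtt_ms:.1f} ms")  # graph these to spot congestion building
    except OSError as exc:
        print(f"connect failed: {exc}")
    finally:
        sock.close()
    time.sleep(INTERVAL)
```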


I don't believe the network or storage alone is responsible. The issue is really about how you build and maintain your infrastructure. If you need speed, make sure you buy the fastest thing possible. However, be aware that what you buy today is old tomorrow. Monitoring and maintenance are crucial when it comes to a high-performing database. Make sure you know what your requirements are and what you end up getting to satisfy them. Be sure to talk to the other resource providers to make sure everything works perfectly together.


I'd love to hear your thoughts in the comments below.


"When the student is ready, the teacher appears."


I mentioned this idea back when I revealed that the Marvel® movie, Doctor Strange, offered a wealth of lessons for itinerant IT pros (and a few for us grizzled veterans, as well). You can find Part Four here and work your way back from there.


It seems inspiration has struck again, this time in the unlikeliest of cinema experiences. There, among the rampant gore and adamantium-laced rage (not to mention the frequent f-bombs), I was struck by how Logan1 held a few IT gems of its own.


It behooves me to remind you that there are many spoilers beyond this point. If you haven't seen the movie yet, and don't want to know what's coming, bookmark this page to enjoy later.


Your most reliable tool could, at some future point, become toxic if you aren't able to let go and move on.


In the movie, it is revealed that Logan is slowly dying from the inside out. Adamantium, it seems, is not exactly surgical-grade metal, and the toxins have been leaching into his system. Initially held off by his healing factor, the continuous presence of the poison finally takes its toll and does what war, enemies, drowning, and even multiple timelines and horrible sequels could not.


One good lesson we should all draw from this is to keep evil shadow government agencies from lacing our skeletons with untested metals.


But a more usable lesson might be to let go of tools, techniques, and ideas when they become toxic to us. Even when they still appear to be useful, the wise IT pro understands when it is time to let go of the old before it becomes a deadly embrace.


When you see some of yourself in the next generation of IT pros, give them the chance to be better than you were.


Logan: "Bad sh*t happens to people I care about. Understand me?"

Laura: "I should be fine then."


(Later) Logan: "Don't be what they made you."


Many IT professionals eventually reach a tipping point when the adrenaline produced by the new, shiny, and exciting tends to wear off, and the ugly starts to become apparent. Understand, a career in IT is no uglier than other careers.


There are a few potential reasons why the honeymoon phase tends to be more euphoric, and the emotional crash when the work becomes a grind more noticeable. It could possibly be because IT is still a relatively new field. Maybe it’s because IT reinvents itself every decade or so. Maybe it is because the cost of entry is relatively low. In other words, it often takes no more than a willingness to learn and a couple of decent breaks.


And when that tipping point comes, often a number of years into one's career, it's easy to become "that" person. The bitter, grizzled veteran. The skeptic. The cynic who tries to "help" by warning newcomers of the horror that awaits.


Or you become a different version of "that" person, the aloof loner who wants nothing to do with the fresh crop of geeks who just walked in off the street in the latest corporate hiring binge.


In either case, you do yourself and the world around you a great disservice with such behavior.


In the movie, Logan first avoids helping, and when that option is no longer available to him, he attempts to avoid getting emotionally involved. As an audience, we know (even if we've never read the "Old Man Logan" source material), that this tactic will ultimately fail. We know we'll see the salty, world-weary X-Man open his heart to a strange child before the final credits.


What's more, the movie makes plain the opportunities Logan throws away when he chooses a snide remark instead of attempting to get to know Laura, that strange child.


So, the lesson to us as IT professionals is that we shouldn't let a bad experience make us feel bad about ourselves, or about our career. And we certainly shouldn't let it get in the way of being a kind and welcoming person to someone new to their career. If anything, we - like Logan at the end of the movie - should try to find those small kernels of capital-T Truth and pass them along, hopefully in ways and at moments when our message will be heard and received in the spirit in which it is meant.


Persistent problems need to be faced, fixed, and removed, not ignored and categorized as someone else's problem.


Near the beginning of the movie, the reaver Donald Pierce tracks down Logan and asks him for information about Gabrielle, the nurse who rescued Laura from the facility where she and the other child mutants were being raised. Donald makes it clear that he isn’t interested in bringing Logan in for the bounty. He simply wants information.


Again, because of his drive to distance himself from the rest of the world, Logan takes this at face value. Even though it is clear that Pierce intends no good for whoever it is he was hunting, Logan is happy it just didn't involve him.


And of course, the choice comes back to haunt him.


Now I'm not suggesting that Logan should have clawed him in the face in that first scene because, even in as brutal a movie as Logan, that's still not how the world works. But what I am saying is that if you let Pierce be a metaphor for a problem that isn't directly threatening your environment right now, but could come home to roost with disastrous results later, then... yeah, I am saying that you should (metaphorically speaking) claw that bastard’s eyeballs out.


I'm looking at you, #WannaCry.


Even when your experiences have made you jaded, hang on to your capacity to care.


Tightly connected to the previous thought about encouraging the next generation of IT professionals is the idea that we need to do things NOW that allow us to hold on to our capacity to care about people. As Thomas LaRock wrote recently, "Relationships Matter More Than Money." I would extend this further to include the idea that relationships matter more than a job, and they certainly matter more than a bad day.


In the movie, no moment exemplifies this as poignantly as the line that became one of the key voiceover elements in the trailer. In finding a family in trouble, Charles demands they stop and help. Logan retorts, "Someone will come along!" Charles responds quietly but just as forcefully, "Someone HAS come along."


But that isn't all I learned! Stay tuned for future installments of this series. And until then, Excelsior!


1 “Logan” (2017), Marvel Entertainment, distributed by 20th Century Fox


In my soon-to-be-released eBook, 10 Ways We Can Steal Your Data, we talk about The People Problem: how people who aren't even trying to be malicious end up exposing data to others without understanding how their actions put data at risk. But in this post, I want to talk about intentional data theft.


What happens when insiders value the data your organization stewards? There have been several newsworthy cases where insiders have recognized that they could profit from taking data and making it available to others. In today’s post, I cover two ways I can steal your data that fall under that category.

1. Get hired at a company where security is an afterthought

When working with one of my former clients (this organization is no longer in business, so I feel a bit freer to talk about this situation), an IT contractor with personal financial issues was hired to help with networking administration. From what I heard, he was a nice guy and a hard worker. One day, network equipment belonging to the company was found in his car and he was let go. However, he was rehired to work on a related project just a few months later. During this time, he was experiencing even greater financial pressures than before. 

Soon after he was rehired, the police called to say they had raided his home and found servers and other computer equipment with company asset control tags on them. They reviewed surveillance video that showed a security guard holding the door for the man as he carried equipment out in the early hours of the morning. The servers contained unencrypted personal data, including customer and payment information. Why? These were development servers where backups of production data were used as test data.

Apparently, the contractor was surprised to be hired back by a company that had caught him stealing, so he decided that, since he knew about the physical security weaknesses, he would focus not on taking equipment, but on the much more valuable customer and payment data.

In another case, a South Carolina Medicaid worker requested a large number of patient records, then emailed that data to his personal address. This breach was discovered and he was fired. My favorite quotes from this story were:

Keck said that in hindsight, his agency relied too much on “internal relationships as our security system.”




Given his position in the agency, Lykes had no known need for the volume of information on Medicaid beneficiaries he transferred, Keck said.

How could this data breach be avoided?

It seems obvious to me, but rehiring a contractor who has already breached security seems like a bad idea. Having physical security that does not require paperwork to remove large quantities of equipment in the middle of the night also seems questionable. Don't let staffing pressures persuade you to make bad rehire decisions.

2. Get hired, then fired, but keep friends and family close


At one U.S. hospital, a staff member was caught stealing patient data for use in identity theft (apparently this is a major reason why health data theft happens) and let go. But his wife, who worked at the hospital in a records administration role, maintained her position after he was gone. Not surprisingly, at least in hindsight, the data thefts continued.

There have also been data breach scenarios in which one employee paid another employee or employees to gather small numbers of records to send to a third party who aggregated those records into a more valuable stockpile of sellable data.

In other data breach stories, shared logins and passwords have led to former employees stealing data, locking out onsite teams, or even destroying data. I heard a story about one employee who, swamped with work, provided his credentials to a former employee who had agreed to assist with the workload. That former employee used the access he was given to steal valuable trade secrets and resell them to his new employer.

How can these data breaches be avoided?

In the previously mentioned husband and wife scenario, I'm not sure what the impact should have been regarding the wife’s job. There was no evidence that she had been involved in the previous data breach. That said, it would have been a good idea to ensure that data access monitoring was focused on any family members of the accused.

Sharing logins and passwords is a security nightmare when employees leave. Shared credentials rarely get reset, and even when they do, it's often to a slight variation of the former password.


This reminds me of one more, much easier, way to steal data, one I covered in the 10 Ways eBook: if you use production data as test and development data, it’s likely there is no data access monitoring on that same sensitive data. And no “export controls” on it, either. This is a gaping hole in data security, and it’s our job as data professionals to stop this practice.
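To make that concrete, here's a minimal sketch of masking production data on its way to a development server: identifiers are pseudonymized deterministically (so joins between tables still line up), and card numbers keep only their last four digits. The column names and sample row are hypothetical; a real masking pipeline would cover every sensitive column in the schema.

```python
import hashlib

def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
    """Deterministically replace an identifier so joins still work in test."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_card(card_number: str) -> str:
    """Keep only the last four digits, which is enough for most test cases."""
    return "****-****-****-" + card_number[-4:]

# Hypothetical production row on its way to a development database.
prod_row = {
    "customer_id": "C-100482",
    "email": "jane.doe@example.com",
    "card_number": "4111111111111111",
    "order_total": 129.95,  # non-identifying values can pass through unchanged
}

test_row = {
    "customer_id": pseudonymize(prod_row["customer_id"]),
    "email": pseudonymize(prod_row["email"]) + "@test.invalid",
    "card_number": mask_card(prod_row["card_number"]),
    "order_total": prod_row["order_total"],
}
print(test_row)
```

With data like this on a dev server, there is nothing worth carrying out the door at 3 a.m.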

What data breach causes have you heard about that allowed people to use unique approaches to stealing or leaking data? I'd love to hear from you in the comments below.


A View from the Air

Posted by Leon Adato on Oct 27, 2017

I have something exciting to tell you: THWACKcamp is not unique. Stick with me and I'll explain why.


After a hair-raising 37-minute flight connection, I'm comfortably (if somewhat breathlessly) settled into the last row of a tiny plane, which is currently between Houston and Cleveland. And despite the fact that I listened to Patrick's closing comments over five hours ago, it may as well have been five minutes ago. I'm still invigorated by the energy around THWACKcamp 2017: the energy from the teams that put together 18 incredible sessions; the energy from the 2,000+ online chatters who joined to offer their thoughts, comments, and opinions; and the energy from the room full of THWACK MVPs who showed up to take part in this event, which seems to have taken on a life of its own (not a bad thing, in my opinion).


It's going to take me a while to process everything I heard and saw over the last two days, from the acts of graciousness, support, and professionalism to the sheer brilliance of the presenters. There's so much that I want to try out.


I'm inspired more than ever to pick up Python, both for network projects and simply for the pure joy of learning a new coding language.


I have a renewed sense of urgency to get my hands dirty with containers and orchestration so that I can translate that experience into monitoring knowledge.


I'm committed to reaching out to our community to hear your stories and help you tell them, either by giving you a platform or by sharing your stories as part of larger narratives.


So, if there was so much SolarWinds-y, THWACK-y, Head Geek-y goodness, why would I start this blog by saying THWACKcamp is not unique? Because that sense of excitement, engagement, and invigoration is exactly how I feel when I attend the best IT shows out there. I felt it flying home from the juggernaut that was Microsoft Ignite, which boasted 27,000 attendees. I felt it driving back from the 400-person-strong inaugural DevOpsDays Baltimore. That tells me that THWACKcamp is not just a boutique conference for a certain subset of monitoring snobs and SolarWinds aficionados. It's a convention for ALL of us. While the focus is necessarily on monitoring, there are takeaways for IT pros working across the spectrum of IT disciplines, from storage to security to cloud and beyond.


In short, THWACKcamp has arrived. I'll leave it to others to recount the numbers, but, by my count, attendance was in the thousands. That's nothing to sneeze at, and that's before you consider that it's free and 100% online, so many of the barriers for people to attend are removed.


I have to admit: my first sentence is deceptive. We ARE unique. At all those other shows I've been to, I have to wait weeks or months to be able to view sessions I regretfully missed at the time, or to review sessions for quotes or other things I might have missed. THWACKcamp is different. You can access all of that content NOW.


That's not exactly a groundbreaking technological feat, but it is unique in the industry. And more than that, it's refreshing.


So check out the sessions. Give them a re-listen and see if you catch another nugget of knowledge you missed the first time around. Share them with your teams. Come back to them whenever you want, just like you can for THWACKcamp 2017 or past events.


Meanwhile, I'm already looking forward to THWACKcamp 2018. I know there are going to be some amazing technologies to share, features to dive into, and stories to tell.


Until then!
