
Geek Speak


The 2017 Word-a-Day challenge is off to an amazing start, and I wanted to share just a few of the incredible insights and amazing stories being shared in that space. If you hadn't heard about it until now, you can find all the entries here: Word-A-Day Challenge 2017.


Meanwhile, a quick reminder about the challenge rules: Each word will appear around midnight US Central time, and you have until midnight the following day to post a *meaningful* comment (you know, something more than "yeah!" or "great job!") for 150 THWACK points. You have until midnight Monday night to make comments on the words that post on Saturday and Sunday, since we here at THWACK would never want to pull you away from your valuable downtime.


So what were people talking about this week?


Identity (Posted by Leon Adato Expert)

Michael Probus Expert Nov 30, 2017 9:27 AM

I find the Where You Are an interesting addition to authentication.  Back in my day when I was teaching class, it was always the first three.  I'm wondering how many organizations are taking into account location when authenticating.  We have monitoring applications that log IP address, so it can easily be obtained, but I don't know that they are being factored in when the user is logging in (except for those that are blocked by ACL).


Simeon Castle Dec 1, 2017 6:11 AM

A personality, in a loose sense, is the sum of all experiences to date; everything from all our senses builds how we think and act and live. From this also come the desires and the ways we act, talk, and think. I've very recently taken the step of leaving a decade-long career and lifelong ambition to join you in IT administration (and earning THWACK points). When I took that step, I changed my identity: who I am and who I will become. Sometimes it takes the Earth to make a small change, and sometimes it takes a word to change a life.


Mark Roberts Expert Dec 1, 2017 9:27 AM

An immediate thought that came out of reading the first day's challenge relates to my finding out, three years ago, that I had a half-brother I did not know existed (nor did my father). We, along with my brother, father, and mother, became close very quickly after our first meeting, with the relationship very much based on our shared identity. The premise that we are who we are based on our life and the experiences within it has also very strongly been shown to be based on nature as well. The similarity in our looks is one thing, but the shared mannerisms, way of speaking, and many other traits have brought much debate, and sometimes our wives' consternation, now that our sense of humour is spread across the three of us.



Access (Posted by Eric CourtesyIT Expert)

Mercy K Dec 3, 2017 7:38 AM

Access is a big deal; it shows where one belongs in a particular place. It is the evidence of belonging.


James Percy Dec 4, 2017 9:56 AM

Access is one's ability to gain entry into a place. For some, that place was Studio 54, where it was difficult to gain access. For others, access means gaining entrance to a place that gives us the ability to do something, like joining a group that allows me to change permissions in a virtual world.


Damon Goff Dec 4, 2017 12:56 PM

At my previous job, access seemed to be a daily fight. I can't remember a single day where there wasn't a user testing to see if they could weasel their way into having access to something that they shouldn't have, and didn't need (they all thought they did). The biggest fight was always over WiFi access for personal devices. Our setup was simple: one SSID for company-owned devices that only two people knew the password to, and a second for guests/personal devices. Enough of the managers complained to the big boss over this that he came to me and told me to change the way we do things: give all users the password. I argued with him over it, and we came to an agreement that we would do an isolated test before allowing everyone access. I would broadcast a new SSID that was a clone of the existing one and only hand out the password to a small group. Not even a day had gone by before I started noticing strange activity and unknown devices connecting to this new SSID at all hours. And wouldn't you know it, a select few almost immediately gave the password to just about everyone they knew! I really enjoyed the "I told you so" after I showed the big boss.


Insecure (Posted by Peter Monaghan, CBCP, SCP, ITIL ver.3 Expert)

Thomas Iannelli Expert Dec 3, 2017 11:23 AM

I recall a conversation when I was in the Air Force about how fighter jets are very dependent upon their thrust to stay in the air, unlike the good old C-130s. It was the fighters' very instability that made them such a dynamic force in air combat. It makes me look at the instability of what I know in IT and feel comfortable with that. While it may make me insecure, if I apply the correct amount of thrust to stay in it, I can keep flying in the very dynamic career of an IT professional.


Ani Love Dec 3, 2017 11:47 AM

Confidence is silent, insecurities are LOUD


Steven Carlson Expert Dec 4, 2017 12:07 AM

Admittedly, my initial introduction to a lot of SolarWinds MVPs made me feel pretty insecure - "They know so much more than me! Why am I here?".  Over time however I have realised that this has pushed me to increase my depth of knowledge and that there are times where I have been able to provide assistance to them. We're better as a whole than the individual parts.


Imposter (Posted by Joshua Biggley Expert)

Michael Probus Expert Dec 4, 2017 7:32 AM

Good write-up.  Not what I envisioned when I saw the title, but I assume that is what you were going for: thinking outside the box. Too many times, people just try to fit in.  They are afraid to be themselves, as they don't know how they will be perceived by others. Then there are the times when someone pretends to know something they don't for fear of being looked upon negatively. As you said it: be yourself.  If you don't know something, just ask.  Everyone had to learn at some point.  If someone doesn't like you for who you are, then they don't deserve to know you.


James Percy Dec 4, 2017 10:08 AM

At first I wanted to ask whether I am Batman, or an imposter. But then I think of when the band Kiss replaced their drummer Peter Criss, and Eric Singer put on the makeup and persona of the original character. Is Eric now an imposter? Or is he just playing a role that was passed on?



Code (Posted by Craig Norborg Expert)

Ethan Beach Dec 7, 2017 4:16 PM

My kid at only 8 years old already took a Minecraft coding class. Man how young they are learning.


James Percy Dec 5, 2017 12:45 PM

When I think back on childhood, a code was more like a cipher, something you needed to decode to understand. Something you needed the secret decoder ring or tool for. In later life, as I became a magician, the code was more about ethics, where we agreed not to frivolously or intentionally give away certain secrets that might reveal how something works. At work, we have a code of conduct, a list of rules we must abide by. And in IT, I hear code bandied about as a term referring to programming.


Just looking over all the contexts and definitions and meanings of the word code leads me to ponder other languages. It is interesting how in English we can have one word that can mean multiple things depending on context; in other languages, even ancient ones like Greek, there are multiple words used, making those languages more complex but also more descriptive and precise.


Graeme Brown Dec 5, 2017 10:14 AM

I remember seeing the word code in many places, but what first struck me was how the word applies to genetics AND information technology. When you consider the history of the word, it's a simple word with a very generic meaning; now its use seems restricted to mystical, deep uses. That will change, however, as people learn to question the norm and seek out the truth in all things.



FUD (Posted by thegreateebzies Administrator)

Michael Perkins Dec 7, 2017 10:38 AM

I first heard about FUD from geeky pursuits. For a long time, though, I actually used it far more to describe things political, especially in political campaigns (e.g., negative campaign ads).


The way, in short, that I like to deal with FUD hits each aspect:

  • Fear - Apprehension is generally OK. It is alright to be concerned, but not to be paralyzed or ruled by fear.
  • Uncertainty - The more I know, the dumber I think I am, since I realize even more of what I don't know. Not being G-D, I cannot know everything, so some uncertainty is understandable.
  • Doubt - F and U definitely imply D. That doubt, however, can lead you to ask questions to perhaps catch something not foreseen, which is good.

Use FUD to guide you to where your concerns are and put efforts there to research and prove or alleviate them. Turn FUD into a productive force. Just don't let it paralyze you.


Alex Sheppard Dec 7, 2017 8:42 AM

"I must not fear!  Fear is the mind killer!  Fear is the little-death that brings total obliteration!  I will face my fears.  I will allow them to pass over me, and through me."

   - Paul Atreides, "Dune"


Also, for those who don't already know, F.E.A.R is an acronym for False Evidence Appearing Real!


Steven Carlson Expert Dec 7, 2017 8:09 AM

While not work-related, I've gone through a bout of FUD lately with my first home purchase. We went in with a bit of risk knowing there was some existing damage that needed repairing (uncertainty) but the damage looks to be more than anticipated. I have had doubts about whether I've made the right choice, and fearful of what else would come up. However, I've since overcome most of that (except for a little bit of the fear) and we're pushing ahead with what we have and fixing it up properly for our peace of mind.



Pattern (Posted by kneps)

C Potridge Dec 7, 2017 2:38 PM

Better safe than sorry, run from all tigers, real or imagined.


Richard Schroeder Expert Dec 7, 2017 2:56 PM

I loved the "Faces In Places" search--thank you for sharing that.


Yes, sometimes the pattern only appears from far away--not being able to see the forest for the trees is a hurdle we sometimes don't even know we've encountered.


Other times we are in our own pattern (a.k.a.: "rut") from having recently diagnosed and troubleshot an issue with a particular technology. Our solution, and that recent troubleshooting pattern, can lead a person to easily waste time by digging into that specific issue, when in fact the problem is unrelated.


There's a character in Ursula K. Le Guin's Earthsea saga called "The Master Patterner" who is enigmatic and subtle.…   Enjoy her works as I do, and another pattern is made.


Thomas Iannelli Expert Dec 7, 2017 3:18 PM

I recently listened to an episode of the "You Are Not So Smart" podcast that had interesting comments on how artificial intelligence uses patterns it finds in historical input data, which can create biases that we may not want as a society going forward. The ability to recognize patterns and say, "Oh, that one is bad, we should change that," is going to be an important part of how we code AI.


Virtual (Posted by Richard Letts Expert)

Richard Schroeder Expert Dec 8, 2017 9:34 AM

A definition for virtual I learned long ago is "In essence, but not in fact."  An important distinction! And I learned "virtual" has nothing to do with the goodness of "virtue"; the two words are not related except by spelling and sound for my purposes. Knowing this, you can view product claims and news reports with a more critical eye when you hear "virtual" or "virtually."


Virtually the only differences between a "real" router or server and the equivalent "virtual" model are form factor and the management/setup process.  We may be accustomed to a hardware box dedicated to routing, or to a pizza-box server taking requests for applications or files.  We reduce costs, extend flexibility, and increase uptime by moving to virtual hardware that performs the same services when our environment and budget scale to the need.  But the function of the original hardware server or router is duplicated exactly by the virtual appliance, and more flexibility and options are gained through the virtues of virtual routers and servers.


William Gonzalez Dec 8, 2017 9:56 AM

The challenge is trying to get management to virtualize every server. They keep insisting that we need physical boxes, and it drives me crazy.


Matt R Expert Dec 8, 2017 10:43 AM (in response to Christopher Good)

I will always be reminded that virtualization (née virtual) is good, as long as it is *understood*. It's critical that people know what it means to virtualize and what it doesn't mean. Do you have flexibility? Yes. HA? Ideally yes. Do you have redundancy? Hopefully yes. Do you still have physical hardware hosting that virtualization? Yes.  Do you have enough hardware, properly allocated, so that people can do what they need? Again, hopefully yes. So it's helpful to remember that not everything can or will ever be virtual, because we still live in a physical world. That being said, the benefits are immense, but the planning needs to be there from the start. Otherwise it's like asking ordinary people to figure out rocket science, only to realize they missed something core to keeping a virtual environment up. Or pushing to go virtual when you don't even have enough resources to do it.



Again, that's just a sample. Check out the Word-a-Day 2017 forum to get the full story. Meanwhile, stay tuned this coming week as we ponder Binary, Footprint, Loop, Obfuscate, Bootstrap, Cookie, and Argument!


DevOps Wrap-Up

Posted by samplefive Dec 7, 2017

Welcome to the final post in my series on DevOps. If you've been following along and reading comments for the previous posts, you know the content generated several questions. In this post, I will consolidate those questions into one place so that I can expand the answers from the original posts. Also, I'll wrap up the series with suggestions for finding DevOps resources, because we've really just scratched the surface with this series. Let's get started by following up on some of the questions.


Isn't This Just Good Development Practices?

Definitely, the most common question about DevOps is, "How is it any different from the good development practices already in existence?" Several developers commented that communicating with the operations staff was already part of their everyday work, and that they viewed DevOps as just a re-branding of an old concept. I do agree that, like a lot of concepts in IT, DevOps recycles many ideas from the past. Without good development practices as part of your process, a DevOps initiative will end up in the trash very quickly. But DevOps is more than just good development practices.


When most organizations talk about implementing DevOps, they usually mean applying those good development practices in new ways. Examples include automating server maintenance where there previously was no development work, or creating custom applications to interface with your SDN control plane to add functionality that wasn't previously there. For a lot of organizations, DevOps simply applies good development practices to the parts of the business that previously had no development at all.


The Operator Coder

My Now We do DevOps. Do I Need to Learn Coding? post prompted people on the operations side to share various opinions about picking up development skills as an operator. They pointed out a couple of things that I feel were either missed or not thoroughly covered.


But I Don't Want to Code!

If you've already picked up development skills and enjoyed the process, it may be difficult to realize that there are some people who just don't enjoy crunching away on code. Admittedly, when writing that post, I may have glossed over the fact that if you are considering whether or not to learn to code you should first think about whether you want to do it and if it is enjoyable. Learning a new skill can be difficult, but learning a new skill you really don't care about is even more challenging.


Many Hats

When you work in a smaller organization, you likely are responsible for performing a variety of jobs each day. When management decides that they want to do DevOps in a small organization, it may fall on you to pick up development skills, simply because there are no developers on staff. In this case, it's not really a matter of whether or not you want to learn development skills. Like many other things that land on your plate, you learn whatever is required to get the job done.


Yes, you could argue that because there isn't a separate development staff, this isn't technically DevOps. However, every organization determines their own definition of DevOps--usually management personnel--regardless of whether that's right or wrong. Sometimes you just have to roll with it.


Developers and Operators are Never Going to Communicate

I found the stark differences among the comments really fascinating. Several developers commented that communicating with operators was good development practice and that every developer worth his or her salt should be doing this already. Others commented that this just wasn't possible. Why? Because, based on experience, their developers and operations teams would never reach the point where they could effectively communicate with each other. Since we already addressed the first point at the top of the post, let's address the comment that developers and operations would never have enough communication to implement DevOps.


If you've got a group of developers or operators who just aren't willing to communicate, what do you do? The easy answer would be to just say, "You all suck," let everyone go, and start from scratch. But if your organization isn't quite ready to initiate a scorched-earth policy, read on. While that's certainly possible, the lack of communication may not be rooted directly in your individual contributors. As we discussed in DevOps Pitfalls, culture gets defined by behavior, which in turn gets defined by process. Does your organization have issues that make communicating between teams particularly difficult? Maybe you have a deeply rooted culture that supports a rivalry between operations and developers. Maybe operations and development have always treated each other with disdain, unfortunately. I'm sure you've seen organizations where the developers think the users are idiots who don't know what they want. This organizational attitude could be at the core of why you don't have communication between teams. It's important to look at both individual contributors and management. Are they helping to shape unhelpful team attitudes? Maybe this is why the teams can't seem to communicate effectively.


I'm Going Somewhere Else

Now that you've read through this blog series, you may be wondering where else to look for information to help move your DevOps initiatives forward. I asked people to share some of their recommendations for implementing DevOps in their organizations and received several suggestions. Some focused specifically on DevOps, while others looked at management practices. I'll list several of them below, but if you have anything else to recommend, please add it in comments!


The Phoenix Project

When you write a novel that attempts to educate readers about a certain thing, there is the potential for that novel to be pretty hokey. When someone recommended The Phoenix Project, they assured me that this was not the case. With a four-and-a-half-star rating on Amazon from nearly 2,000 reviews, this is one that I'm going to have to check out myself.


Time Management for System Administrators

On the topic of making your DevOps work a little faster, sometimes you need to better manage your time. This book may be able to help you get there.



Here, you'll find lots of articles about DevOps and how it is implemented in different organizations. I found the site very helpful when I was writing several of the posts for this series.



Another site that has a fair amount of DevOps content, including, "How to Get Started in DevOps" and "DevOps Culture." This might be a good place to pick up more information.


The Toyota Way

Another book focused on the management side of things, including processes that Toyota used to improve communication in their factories.


Team Geek: A Software Developer's Guide to Working Well with Others

Since this series has been focused on the importance of communications in your DevOps efforts, we'll round out this list with a book about working with other people, teams, and users when developing software.


As always, I look forward to reading your comments!

When I first considered writing this post about possible HIPAA violations, I could clearly imagine how file sharing outside of IT's control could cause issues with HIPAA compliance. It wasn't until I really started digging into it that a whole world of real-life examples opened up to me. In fact, if you think that your current policy of allowing file sharing outside of IT is okay, you might want to reconsider. How so?


First, let me relate the account of a hospital that was actually fined for violating HIPAA requirements. The hospital had no evidence of a breach. No patient data was compromised as far as anyone could tell. However, because their users were sharing patient data in a way that put them at risk, they were fined. The data didn't have to be exploited to be found in violation. It just had to be capable of being stolen. That hospital ended up being fined over $200,000. There is a definite downside to our users circumventing IT controls and making their own decisions.


I believe we should consider the following areas when it comes to secure file transfer and storage:




Encryption

Protecting data at rest and data in motion both need to be part of the solution. Next-generation cryptographic algorithms, such as AES-256, Elliptic Curve Diffie-Hellman, and SHA-256, are key to enhancing the security level of your data. No matter where data exists in your environment, it should be protected, at a minimum, by some form of strong encryption.
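To make the "strong encryption" point a bit more concrete, here is a minimal Python sketch (the function name and iteration count are my own illustrative choices, not from any particular product) that derives a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256 and computes a SHA-256 integrity digest. Actually encrypting the payload with AES-256 would require a vetted third-party library such as `cryptography`; the standard library only covers the key-derivation and hashing pieces shown here.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a passphrase using PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # unique random salt per secret
key = derive_key("correct horse battery staple", salt)
digest = hashlib.sha256(b"patient-record-bytes").hexdigest()

print(len(key))     # 32 bytes = 256 bits
print(len(digest))  # 64 hex characters for SHA-256
```

The same passphrase and salt always produce the same key, which is what lets two endpoints agree on a key without ever storing the passphrase itself.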




Authentication

The solution cannot leave data sitting in the open with no authentication controls. Strong user authentication also has to be in place. Password strength should be high, using the standard requirements of mixed uppercase and lowercase letters, numerals, and special characters. A longer password provides additional security. Restricting access to data, even encrypted data, ensures that it doesn't end up somewhere it shouldn't be and become vulnerable to theft.
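The password requirements above can be sketched as a small policy check. This is a hypothetical example; the specific rules and the 12-character minimum are assumptions for illustration, not any official standard:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    # Hypothetical policy: minimum length plus uppercase, lowercase,
    # numeral, and special-character requirements.
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"\d", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(bool(c) for c in checks)

print(meets_policy("Tr0ub4dor&3xtra"))  # True
print(meets_policy("password"))         # False
```

Note that, as the paragraph says, length is the knob that buys the most security; the character-class rules mostly guard against trivially guessable passwords.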


Malware filtering


We should also consider anti-malware filtering. We never want our organization to be a source of malware. Making sure that we aren’t inadvertently sharing malware is critical. The rise of malware as the preferred method of data theft and exfiltration means we need to be even more vigilant about keeping our organization free from attack.


Length of storage


For a while, I used a Keyboard Maestro snippet, a Hazel rule, three folders, and Dropbox to personally share files. I had three folders:


  • 1 Day
  • 5 Days
  • 30 Days


I could put a file in one of those three folders and then Hazel would look at them constantly. If a file in the 1 Day folder was older than 1 day, Hazel would delete it. The same was true with the other folders and their respective durations.
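For the curious, that age-based cleanup can be sketched in a few lines of Python. The folder names mirror the setup above, but the script itself is just illustrative; Hazel does this natively on macOS:

```python
import time
from pathlib import Path

# Retention periods, mirroring the 1 Day / 5 Days / 30 Days folders above.
RETENTION_DAYS = {"1 Day": 1, "5 Days": 5, "30 Days": 30}

def purge_expired(root: Path):
    """Delete files older than their folder's retention period; return the removed paths."""
    removed = []
    now = time.time()
    for folder, days in RETENTION_DAYS.items():
        directory = root / folder
        if not directory.is_dir():
            continue
        cutoff = now - days * 86400  # retention period in seconds
        for entry in directory.iterdir():
            if entry.is_file() and entry.stat().st_mtime < cutoff:
                entry.unlink()
                removed.append(entry)
    return removed
```

Run on a schedule (cron, launchd, Task Scheduler), this gives you the same "files quietly expire" behavior.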


That was a good solution for a home user, but it's not a good solution for an enterprise. However, several offerings today are able to place a lifetime on the data as well as on a user's access. These things need to be factored in as well. And remember that good data hygiene practices should still be followed with a time-limited storage and sharing system. Critical data can be exploited in a matter of minutes when it isn't protected.




Audit logging and user tracking are another important factor. If you don't have this kind of visibility, how do you know if, for example, your data is being downloaded by a trusted user in Colorado and, 15 minutes later, by the same user--only this time from China? These types of things should certainly raise a red flag. If we're flying blind, we're putting ourselves in line for big fines or worse.
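As a toy illustration of the kind of check audit logging enables, here is a Python sketch that flags logins by the same user from two different countries within a short window. The record format (user, timestamp, country) is an assumption; real solutions would work from their own log schemas:

```python
from datetime import datetime, timedelta

def flag_impossible_travel(events, window=timedelta(hours=1)):
    """Flag pairs of logins by one user from different countries within `window`."""
    flagged = []
    last_seen = {}  # user -> (timestamp, country) of most recent login
    for user, ts, country in sorted(events, key=lambda e: e[1]):
        last = last_seen.get(user)
        if last and last[1] != country and ts - last[0] <= window:
            flagged.append((user, last[1], country, ts))
        last_seen[user] = (ts, country)
    return flagged

events = [
    ("alice", datetime(2017, 12, 8, 9, 0), "US"),
    ("alice", datetime(2017, 12, 8, 9, 15), "CN"),  # 15 minutes later, different country
    ("bob",   datetime(2017, 12, 8, 9, 0), "US"),
]
print(flag_impossible_travel(events))
```

The Colorado-then-China scenario above would surface as a single flagged pair for that user, which is exactly the red flag the audit trail is there to raise.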


When it comes to file-sharing solutions, if a user is working with a laptop, we need to be concerned about that laptop being stolen and whether or not we have a remote wipe capability.

Where do we go from here?


I think the best solution is to educate ourselves about our options and then educate our users on how to get the most out of the solution. Of course, this may not be your approach. That’s okay. But anyone who’s responsible for the deployment of a solution should do their due diligence in selecting the best one for their organization.


What experiences have you had in working with a secure file sharing solution and navigating HIPAA compliance?

In this last post of my 5 More Ways I Can Steal Your Data series, I focus on my belief that all data security comes down to empathy. Yes, that one trait that we in technology stereotypically aren't known for displaying. But I know there are IT professionals out there who have and use it. These are the people I need on my teams to help guide them toward making the right decisions.


Empathy? That's Not a Technical Skill!

If we all recognize that the personal data we steward actually belongs to people who need to have their data treated securely, then we will make decisions that make that data more secure. But what about people who just don't have that feeling? We see attitudes like this:


"I know the data model calls for encryption, but we just don't have the time to implement it now. We'll do it later."


"Encryption means making the columns wider. That will negatively impact performance."


"We have a firewall to protect the data."


"Encryption increases CPU pressure. That will negatively impact performance."


"Security and privacy aren't my jobs. Someone needs to do those parts after the software is done."


"We don't have to meet European laws unless our company is in Europe." [I'm not a lawyer, but I know this isn't true.]


What all of those statements share is a lack of empathy for the people whose data we are storing--the people who will be forced to deal with the consequences of bad data practices once the other 10+ Ways I Can Steal Your Data I've been writing about in the eBook and this series come to pass. The consequences might be as minor as having to reset a password, but bad data practices could also lead to identity theft, financial losses, and personal safety issues.


Hiring for Empathy


I rarely see any interview techniques that focus on screening candidates for empathy skills or experiences. Maybe we should be adding such items to our hiring processes. I believe the best way to do this is to ask candidates to talk about:

  • Examples of times they had to choose the right type of security to implement for Personally Identifiable Information (PII)
  • A time they had to trade performance in favor of meeting a requirement
  • The roles they think are responsible for data protection
  • The methods they would use in projects focused on protecting data
  • The times they have personally experienced having their own data exposed


If I were asking these questions of a candidate, I'd be looking not so much for their answers, but the attitude they convey while answering. Did they factor in risks? Trade-offs? How a customer might be impacted?  This is what Jerry Weinberg writes about in Secrets of Consulting when he says, "Words are useful, but always listen to the music."


By the way, this concept applies to consultants as well. Sure, we tend to retain consultants who can just get things done, but they also need to have empathy to help clients make the right decisions. Consultants who lack empathy tend to not care much about your customers, just their own.


Wrapping it Up

I encourage you to read the eBook, go back through the series, then take steps to help ensure data security and empathy. Empathy is about feeling their pain and taking a stand to mitigate that pain as much as you can.


Oh, and as I said in a previous post, keeping your boss out of jail.  Do that.


UPDATE: My eBook, 10 Ways We Can Steal Your Data is now available.  Go download it.


The title of this post is a quote that might (or might not) be from Albert Einstein. I first thought the title of the post would be something like "Why Root Cause Analysis Is Your Saviour," but I think the knowledge/experience title has more potential, so I'll stay with Albert…


I’ve seen my fair share of companies collecting loads of data on the incidents happening in their database environments, but turning this “knowledge” into experience they actually learn from seems to be a hard thing to establish.


What Albert means is that the only way to make things better is by learning from our mistakes. Mistakes will happen, and so will incidents; there is nothing we can do to prevent that. And to be honest, Albert (and I ;)) wouldn’t want you to stop making mistakes, because without mistakes, innovation would come to an end. That would mean this is it: nothing will ever be better than what we have now. No. We need innovation. We need to push further and harder. In the database world, I mean.



To do so, we’ll have incidents. But if we perform root cause analysis, we can learn from the incidents that happen and figure out what to do to prevent the same problem from ruining our day ever again.


The right tool for performing root cause analysis will provide you with all the information needed. We all want to end up like the kid in the picture below, right? Although… Albert and I might say the only way to innovate is by making mistakes, AND learning from them.


In two weeks I’ll write another post in which I’ll look back at the posts from the last couple of weeks and cover what tools I think are essential for a well-performing database environment, now and in the future. In the meantime, I love the comments you all provide.  I’ll try to answer as many as possible in due time.

Had a wonderful time at AWS re:Invent last week. It was great to be at an event where I was just an attendee. It's been 15 years or so since that has happened. In case you missed it, Amazon had a long list of announcements last week. I included a link in the Actuator this week that gives a summary of all the announcements. I'll try to put together a dedicated post on all of the data-focused announcements at some point.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


AWS re:Invent Product Announcements

A complete list of everything that was announced last week. Be prepared to spend some time here; a lot of new products and services were announced.


Amazon Plays Catch Up in a Corner of the Cloud Where It Lagged

There were a handful of announcements last week at re:Invent around AI and Machine Learning. I loved hearing them and examining the tech. What I didn’t like was Amazon presenting their new products and services as if they were the only ones in existence. Saying “the world’s first” when you aren’t just makes me cringe. Amazon has wonderful products and services, built by good people. There’s no need to lie in marketing; it doesn’t make you look better.


Brilliant Jerks in Engineering

A tad long, but worth your time. Don’t be a jerk, even a brilliant jerk. This article reinforces what I have advocated for a long time now: Hard skills have a cap; soft skills do not.


Privacy not included: A Guide to Make Shopping for Connected Gifts Safer, Easier, and Way More Fun

Can it spy on me? Yeah, almost always the answer is yes.


The Fuss About VMware and Azure

Nice summary regarding the Microsoft announcement regarding VMware. My biggest takeaway from all this is that when a company tells you something is “unsupported," what they really mean is, “You will need to pay more for support."


Apple Releases Fix to Security Flaw in Mac Operating System

Apple employs 125,000 people, is worth $887 billion, and brings in $229 billion in revenue a year. Oh, and they can ship an operating system update to 400 million users with a blank admin password. Can’t wait to see what they do next.


The One Piece of Career Advice I Wish I’d Gotten

Great advice coming on the heels of re:Invent, an event that ties directly to the advice given here.


Here's a nice graphic showing some of the lesser-known AWS announcements last week:


By Joe Kim, SolarWinds EVP, Engineering and Global CTO


With the advent of the Internet of Things (IoT) and connected devices, the amount of data agencies collect continues to grow, as do the challenges associated with managing that data. Handling these big data challenges will require federal IT pros to use new data mining methodologies that are ideal for hybrid cloud environments. These methodologies can improve network efficiency through automated and intelligent decision-making that’s driven by predictive analytics.


Today’s environments require a break from the data analysis methods of the past, which were time-consuming and demanded an enormous amount of manual labor. Manual analysis was difficult before the IoT, connected devices, and hybrid cloud environments became commonplace; today, it’s nearly impossible.


Data lives across numerous departmental silos, making it hard for IT departments to keep track of it all. It’s difficult to achieve clear insights into these types of environments using traditional data mining approaches, and even more difficult to take those insights and use them to ensure consistent and flawless network performance.


Agencies need tools that have a cross-stack view of their IT data so they can compare disparate metrics and events across hybrid cloud infrastructure, identify patterns and the root cause of problems, and analyze historical data to help pinpoint the cause of system behavior.


Predicting the Future


Automated data mining paired with predictive analytics addresses both the need to identify useful data patterns and use that analysis to predict—and prevent—possible network issues. By using predictive analytics, administrators can automatically analyze and act on historical trends in order to predict future states of systems. Past performance issues can be evaluated in conjunction with current environments, enabling networks to “learn” from previous incidents and avert future issues.


With predictive analysis, administrators can be quickly alerted about potential problems so they can address issues before they occur. The system derives this intelligence based on past experiences and known performance issues, and can apply that knowledge to the present situation so that network slowdowns or downtime can be proactively prevented.
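To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not any particular product's algorithm) of alerting on a predicted future state rather than a current one: extrapolate the recent trend of a metric and fire before the threshold is actually breached.

```python
# Hypothetical sketch: predict whether a metric (e.g., queue depth) will
# breach a threshold, based on a simple linear trend over recent history.
from statistics import mean

def predict_breach(history, threshold, horizon=5):
    """Extrapolate the recent linear trend `horizon` steps ahead and
    report whether the projected value crosses `threshold`."""
    n = len(history)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(history)
    # Least-squares slope of the metric over time.
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / \
            sum((x - x_bar) ** 2 for x in xs)
    projected = history[-1] + slope * horizon
    return projected >= threshold, projected

# Samples trending upward: the alert fires before 40 is ever observed.
alert, projected = predict_breach([10, 12, 15, 19, 24], threshold=40)
```

Real products weigh in seasonality, known incidents, and many more signals, but the principle is the same: act on where the metric is heading, not where it is.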


Learning from the Past


Administrators can take things a step further and incorporate prescriptive analytics and machine learning into their data analysis mix. Prescriptive analytics and machine learning actually provide recommendations to prevent problems, like potential viruses or malware. This can help agencies overcome threats and react to suspicious behavior by establishing what “normal” network activity looks like.


Using new, modern approaches to data analysis can help agencies make sense of their data and keep their networks running at the utmost efficiency. Predictive and prescriptive analysis, along with machine learning, can help keep networks running smoothly and prevent potential issues before they occur. Each of these approaches will prove invaluable as agencies’ data needs continue to grow.


Find the full article on Government Computer News.

So you've taken the leap, battled through the migration, and your mail is in the Microsoft cloud. You turned off your locally hosted and managed Exchange cluster and reclaimed those hours of sleep lost to responding to alerts about how so-and-so can't send an email at 2:00 am on a Saturday morning.


You're probably also missing the ease of monitoring that your on-prem solution provided. Now, with the details at arm's length, your usual monitoring methods no longer work. Don't worry, Microsoft has you covered, if and only if you've been keeping your PowerShell skills sharp.


Introducing: Remote PowerShell Sessions for Microsoft Office 365


Here's how it works:

Connect to Office 365

Create a new PowerShell session, and import it into your current session. You’ll be prompted for credentials when this portion of code runs.


Important: Your Office 365 credentials are in the UPN format username@domain, not domain\username. Chasing authentication issues only to realize I was not using the correct format definitely caused me some grief.


$office365 = New-PSSession -ConfigurationName "Microsoft.Exchange" -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential (Get-Credential) -Authentication Basic -AllowRedirection

Import-PSSession $office365 | Out-Null

Query for the Information

At this point, you have access to the same commands that you would have had were you using the Exchange PowerShell module locally. Here you will find numerous sources of valuable monitoring data.  For example, to get the details on all of the inactive mailboxes:


$inactiveMailboxes = Get-Mailbox -InactiveMailboxOnly

View Your Results

At this point, you have an array of your inactive mailboxes, but what information do you have at your fingertips? An easy way to find out is by making use of PowerShell’s Out-GridView cmdlet, which gives you an easy-to-use (and filterable!) interface to pore over the data:


$inactiveMailboxes | Out-GridView

Do Something with Them

Now that you have your inactive mailboxes, what should you do with them? You have a couple of options, which are described in the following article:


Delete or restore user mailboxes in Exchange Online


Definitely read through the information in this article to help ensure that you don’t do something you’ll regret later, such as permanently delete a mailbox you want back. Assuming you’re ready to part with your inactive mailboxes, it’s as easy as this:


$inactiveMailboxes | Remove-Mailbox -Confirm:$false


By leveraging the power of the PowerShell pipeline, you can send your mailbox objects directly from your $inactiveMailboxes array into the Remove-Mailbox cmdlet.


Next Up: Working with Azure Active Directory via PowerShell

You've probably heard about the importance of business continuity and disaster recovery. Today, more businesses have business continuity plans than ever before. With so many businesses looking to secure their future, there are still a few aspects of business continuity that today’s businesses need to understand. After all, there is more to it than just data backup. Disaster recovery is something that needs to be planned, practiced, and updated regularly, and it’s important to have a management system that helps you predict, monitor, and execute your business continuity plan.

Over the last few years, the way business continuity is perceived within a business has changed. In a previous position, the company I worked for had two physical data centers, with hardware at both sites and full failover from one site to the other. But today’s ever-changing, always-on data requirements bring new complications to the business continuity plan. Today, infrastructure and applications can be hosted across multiple platforms, from on-premises to the public cloud. With these disparate environments and multiple management tools, it is key to know what is going on within your business.

So, let’s look at how some of the software packages in the market can help your business monitor your infrastructure and provide critical insights into your data.

Availability Monitoring

Experience has taught me that a lot of outages can be tracked to network issues. In the majority of cases, these outages could have been avoided. Availability monitoring software provides you a way to help identify and proactively troubleshoot network issues early. I have often blamed the network team for issues with my data center, putting pressure on them to work out what’s wrong, when the issue ended up being related to disk access or performance. With an availability monitoring solution, you can provide a quick response to your teams, helping you troubleshoot the issue before the business or end-user is affected. Availability monitoring tools can work in a standalone fashion to provide quick response for small organizations or for companies looking to provide information about a specific project. A larger organization may want to integrate availability monitoring into a more comprehensive platform.

Interface Monitoring

Sometimes you have to delve deeper into the environment when no real issues show on the network, but you begin to see actual issues with each individual interface. Take a service provider, for instance. They have large, distributed, shared networks with multiple VLANs and dedicated ports for each customer. Not to mention the different types of interfaces: 1Gb, 10Gb, Ethernet, all the way through to high-speed fiber. Now, I'm no expert when it comes to networks, so I would need help to start to decipher the issues. Using SNMP to collect interface stats within the environment, and ICMP probes to collect data (such as packet loss, round-trip times, etc.), helps the network administrator quickly identify application performance issues in the network.
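As a rough illustration (not any vendor's implementation), the kind of per-interface health summary described above can be computed from a handful of ICMP probe results:

```python
# Hypothetical sketch: summarize ICMP probe results the way an interface
# monitor might. Each probe is a round-trip time in ms, or None if the
# reply was lost.
def summarize_probes(rtts_ms):
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    avg_rtt = sum(replies) / len(replies) if replies else None
    # Jitter as the mean absolute difference between consecutive replies.
    jitter = (sum(abs(a - b) for a, b in zip(replies, replies[1:])) /
              (len(replies) - 1)) if len(replies) > 1 else 0.0
    return {"loss_pct": loss_pct, "avg_rtt_ms": avg_rtt, "jitter_ms": jitter}

# One of five probes lost -> 20% loss; the 40 ms outlier drives jitter up,
# which is often the first hint of congestion on an interface.
stats = summarize_probes([12.0, 14.0, None, 13.0, 40.0])
```

A real collector would also pull per-interface SNMP counters (errors, discards, utilization) and correlate them with these probe statistics.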

Virtualization Manager 

This is the tool that I find really cool. When I was an IT Manager, every day was a challenge, especially when we started introducing larger applications. But this was eight years ago, long before I knew about the category of virtualization management. Back then, if I had had a piece of software that could proactively recommend what I needed to do with my VMs, I would have slept a lot easier. I will spend more time going over the benefits of virtualization management software in the future, because it is a large and very detailed category, but today I want to highlight the features that I think can help your business continuity plan, including Predictive Recommendations and Active Virtualization Alerts.

Predictive Recommendations proactively monitors and analyzes active and historical data to help you prevent and fix performance issues. You can review each recommendation and choose to act now or schedule it for a later time and date. This gives you choice and control over your environment. You can also use VMAN to help prevent future issues by implementing resource settings and plans that can be actioned if any performance thresholds are breached. Now, what's key for me is not only the value that’s provided by saving time and resources, but also the uptime that can be achieved by making sure the VMs are in the right place.

In my next article, we will look at providing insight into your infrastructure to meet the compliance challenges we are seeing today.

This week's edition of The Actuator comes to you direct from Las Vegas and AWS re:Invent. This is my first time at this event, and I am fortunate to be here as an attendee. For the past 15 years or so, I have been working events, not attending them. This week I have been hanging around the data folks, trying to soak up some knowledge on RDS, Aurora, Redshift, and I also attended a workshop on building Alexa Skills. So, yeah, I'm in data geek heaven here.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Amazon launches new cloud storage service for U.S. spy agencies

“Amazon says it is the only commercial cloud provider to offer regions to serve government workloads across the full range of data classifications, including unclassified, sensitive, secret and top secret.” Well, except for Azure, which has offered this for more than a year already. But, hey, with AWS re:Invent happening, why not embellish the truth a bit, right?


Google admits it tracked user location data even when the setting was turned off

Remember when Google wasn’t the evilest company on Earth? Neither do I. Now we can only imagine what horrible things they are doing to user data stored in Google Cloud.


Decision making under stress

I read this and remembered every triage call I participated in through the years. This is why we have things like checklists and automated scripts to remove the human element and lower stress in group situations where we feel we have no control.


Uber Concealed Breach of 57 Million Accounts for a Year

Remember when Uber wasn’t as evil as Google? Yeah, that was three minutes ago. When I woke up today I didn’t need yet another reason to dislike Uber, but here we are.


Run the First Edition of Unix (1972) with Docker

“I bet people from the 1970s were really good spellers because not being able to move your cursor is pretty brutal.”


Microsoft is Bringing VMware to Azure, VMware Is Not A Happy Camper

One thing I learned a long time ago is that “unsupported” really means “you pay extra for support." Of course, VMware will tell you that this is unsupported. They have to say that because of their agreement with AWS. But at the end of the day, VMware wants to survive, and the best way to do that is to allow their software to run on any cloud.


Gambling regulators to investigate 'loot boxes' in video games

I was thinking of getting Battlefront 2 as a gift this year, but I’ve decided that I’d be better off buying actual lottery tickets instead.


If you have never been to re:Invent, here's a sample of what you should expect for the week: Standing in lines.

Over the next few weeks, I will be releasing a series of articles covering the value of data analytics and insight, focusing on five of the key business drivers that I have seen within the industry. With the IT landscape constantly changing, let’s look at what is driving businesses further down the path toward data analytics. More importantly, we will also look at the drive to understand what is happening within current infrastructures and how this knowledge can deliver value.


Business Continuity


In an ideal world, your organization would run effortlessly to provide both your business and your customers with data and resources at all times. In reality, no matter how successful, no business is without its challenges, and it often has to mitigate and eventually overcome these challenges to make sure the business can achieve its outcomes. One of the ways that organizations can prepare for disruptive events is through Business Continuity Management (BCM). For some businesses, this means deploying BCM software and creating business procedures to continue operating when the unexpected occurs.


When I am talking to businesses about Business Continuity, it is important to highlight at an early stage what the business needs to keep running and what it is that they want to monitor. Deploying the right BCM software will help businesses identify, manage, and prevent issues before they occur. This has the added benefit of possibly reducing the need to activate disaster recovery or business continuity plans.




Compliance

Compliance has become a huge topic of conversation over the past year, prompted mainly by the introduction of the General Data Protection Regulation (GDPR), which goes into effect on the 25th of May 2018. I have spent the last year working with customers to help them understand the impact of GDPR and the importance of making sure their businesses stay compliant. Now, I am no expert when it comes to its legalities, but what I have found is that becoming GDPR-compliant is not the major challenge. Instead, I have discovered that businesses are more concerned about staying compliant. It is important to have a monitoring tool that can help you maintain compliance, from security and access control through to patch management and device tracking. These aren’t all tied to GDPR, but each one, working in collaboration, will help your business stay compliant.




Responsiveness

This challenge has been brought to my attention in many different ways. For me, responsiveness can mean anything having to do with networks, data access, hyperscaling, and even cloudbursting as the business requires. With ever-changing user requirements, for applications or for the business as a whole, monitoring is crucial. Businesses want tools that can give them proactive information to help them make the kind of decisions that keep them competitive within the market.




Agility

As mentioned above, it is becoming more important to manage large infrastructures from one central management platform. As technologies move forward, there is no longer a one-stop shop for all your business requirements. Infrastructure and applications are brought in from a plethora of vendors to meet the business's needs. It is important for businesses to stay agile to best meet the trends of the market. Therefore, they must use the best possible tools to help them achieve and keep that competitive edge.




Efficiency

It's common practice for businesses to run their infrastructures efficiently. As we all know, though, this isn’t as easy as it sounds. With multiple disparate environments, each having its own operating system and management tools, it’s very hard to keep track of it all. Increasingly, I find myself talking to customers about centralized management and efficiency. I believe the SolarWinds Orion Platform helps businesses manage these very specific challenges.


Over the next four weeks, I am going to delve into each of these subjects individually in a lot more detail. I will also reveal how SolarWinds can help businesses deliver value using insightful data analytics.

Today you can find any number of online articles about the impending loss of Net Neutrality in the United States and around the world. I think most web surfers don't understand the potential impact.



There are many examples of corporations and governments violating the principles of Net Neutrality.  Here are just a few:

  • Comcast secretly injected forged packets to slow certain users' traffic down. When discovered, they didn't stop until the FCC forced them to.
  • In a different instance, the FCC fined a small ISP $15,000 for restricting its customers' access to a rival ISP's services.
  • AT&T was caught limiting their customers' access to a specific public site unless the customers paid AT&T more for the access.
  • More recently, Verizon secretly restricted its customers' ability to stream from Netflix and Youtube, until they were caught and forced to change.
  • Outside the U.S., in some countries, the general population can't access any information not officially approved by the government. They can't email anyone they'd like, or search for information about politics, medicine, religion, etc.


That's what it's already like today when Net Neutrality is not followed.   Net neutrality - Wikipedia


The obvious fallout from losing Net Neutrality is separating people from more money and sending it to carriers and big corporations. But although money is the most visible reason for doing away with it, it's not the worst.


Suppose Net Neutrality goes away in the United States due to an act of Congress, and it becomes legal for carriers and ISPs and corporations to slow traffic down or shut it down entirely based on:

  • Who you are
  • What you want to learn
  • Your past browsing history
  • Your income
  • Your race
  • Your religion
  • Your political views
  • How much extra you're willing to pay


Losing Net Neutrality sounds a lot like trading away the freedoms and rights that come with living in the United States for something closer to Russia or China or Iran or Syria. We could be giving up a LOT of freedoms and speeds that we've always taken for granted.


Are there any justifiable reasons for slowing or stopping your traffic? Maybe.

  • It costs carriers and ISPs more and more as people increase bandwidth demand by streaming audio and video, or moving increasingly larger files for work or pleasure. Should you be denied access, or slowed down, or forced to pay more to have the speed and access you already have today?
  • If you use a lot of bandwidth streaming entertainment or playing games during business hours, some online businesses may not be able to serve their customers as well. Is that a fair reason for you to be denied speed or access to what you want?


It occurred to me that throttling bandwidth with QoS is similar to a clogged freeway (aka "oversubscribed") that the DoT "fixes" by dedicating a lane to folks who pay more to bypass the slow traffic. When does slowing someone's traffic begin a movement toward losing Net Neutrality?

  • How about if the DoT intentionally slowed everyone EXCEPT you down, and you must drive 55 mph where you'd been driving 70, and everyone else must drive 40 mph?
  • Or if they said, "You may go 70 mph because you're wealthy. People who earn less than you won't be allowed to go that fast."
  • Or "Your religion or color or gender all are reasons why you may not access these sites with good speeds, or perhaps to be able to access them at all. Further, all of your surfing will be slowed until you comply with some new policy we will indicate at a later time, perhaps a loyalty oath."


Put on your best network administrator hat and imagine how monitoring will play a role in this. You might be asked to prove that your ISP/carrier/remote online service is flowing as fast as your company pays it to be. And you might be asked to identify IP addresses, users, protocols, and destinations and throttle their throughput per the demands of some higher-up in your organization.
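As a toy sketch of that second task (the flow fields and quota here are invented for illustration, not any real policy), rolling up flow records by source address shows exactly which hosts such a directive would throttle:

```python
# Hypothetical sketch: roll up (source IP, protocol, bytes) flow records
# to find which hosts would be throttled under a per-host byte quota.
from collections import defaultdict

def over_quota(flows, quota_bytes):
    """Return the sorted list of source IPs whose total bytes exceed the quota."""
    usage = defaultdict(int)
    for src_ip, proto, nbytes in flows:
        usage[src_ip] += nbytes
    return sorted(ip for ip, total in usage.items() if total > quota_bytes)

flows = [("10.0.0.5", "tcp", 900_000),
         ("10.0.0.5", "udp", 400_000),
         ("10.0.0.9", "tcp", 200_000)]
# 10.0.0.5 moved 1.3 MB against a 1 MB quota, so it gets flagged.
flagged = over_quota(flows, quota_bytes=1_000_000)
```

The uncomfortable part is not the code; it's that the same five lines can enforce any policy a higher-up hands down, fair or not.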


Maybe you already do this with QoS, prioritizing traffic because your internet pipe or WAN pipes aren't big enough for the demand.  Is that parallel to not being able to access what you want privately, or at home, or at work?


How will you react when Net Neutrality is gone and you learn that you:

  • Cannot access the sites you used to enjoy?
  • Cannot stream A/V content as fast as you used to?
  • Cannot research what you want?
  • Are prevented equal access to information based on your politics/income/race/religion/age or gender?
  • Will be forced to pay additional fees each time you use certain protocols or sites?


Think about how important Net Neutrality has been to you in the past, and how we've taken it for granted.


Then imagine it being used as a tool to separate you from more money, and to keep you in the dark about your health or your government's activities, or even from learning about coming weather conditions.


What good things could come from losing Net Neutrality?

  • Users might abandon the internet and get physically active and become fit again?
  • Think of the zillions of bits conserved!
  • Discovering ignorance is bliss?  (If this makes you smile, I recommend you read George Orwell's book 1984).


How will losing Net Neutrality affect you and your business and your network monitoring demands and discoveries?

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Through its significant investment in networked systems and smart devices, the DOD has created an enormously effective—yet highly vulnerable—approach to national network security threats. The department has begun investing more in the Internet of Things (IoT), which has gone a long way toward making ships, planes, tanks, and other weapon systems far more lethal and effective. Unfortunately, the IoT's pervasive connectivity also has increased the vulnerability of defense networks and the potential for cyberattacks.


That attack surface only continues to grow and evolve, with new cyberthreats against the government coming in a regular cadence. DOD must adapt to this rapidly changing threat landscape by embracing a two-phase plan to make network security more agile and automated.


Phase One: Speeding Up Tech Procurement


The government first must accelerate its technology procurement process. Agencies must quickly deploy easily customizable and highly adaptable tools that effectively address changing network security threat vectors. These tools must be simple to install and maintain, with frequent updates to ensure that networks remain well fortified against the latest viruses or hacker strategies.


There is hope. In recent years, the government has made it easier for agencies to buy software through a handful of measures, such as the General Services Administration Schedule and the Department of Defense Enterprise Software Initiative. All have been carefully vetted to work within government regulations and certifications.


Phase Two: Automating Network Security


Automated network security solutions to alert agency administrators to possible threats are also important. The government should implement these types of solutions to monitor activity from the myriad devices using Defense Department networks. Administrators can be alerted to potential security breaches and software vulnerabilities to provide real-time threat response capabilities.


The SolarWinds® Log & Event Manager (LEM) lets administrators gain real-time intelligence about the activity happening on their networks, alerting them to suspicious behavior. Administrators can trace questionable activity back to its source and set up automated responses—including blocking IPs, disabling users, and more— to prevent potentially hazardous and malicious intrusions.
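The general pattern behind any such tool, stripped of product specifics, is rule-based matching over log events with an automated response queued when a rule fires. A minimal, hypothetical sketch (the rule name and event fields are invented):

```python
# Hypothetical sketch of automated log-based response: each rule is a
# predicate over a parsed log event; matching rules name the response
# actions a real system would then execute (block IP, disable user, etc.).
RULES = {
    "failed_login_burst": lambda e: e["event"] == "auth_fail" and e["count"] >= 5,
}

def evaluate(event, rules=RULES):
    """Return the names of all rules triggered by one log event."""
    return [name for name, match in rules.items() if match(event)]

# Eight failed logins from one source trips the burst rule; a real
# responder might now block 203.0.113.7 at the firewall.
actions = evaluate({"event": "auth_fail", "src_ip": "203.0.113.7", "count": 8})
```

Production systems add correlation windows, whitelists, and human review, but the detect-match-respond loop is the core.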


The number of connected devices operating on government networks makes a comprehensive User Device Tracker (UDT) a necessary counterpart to LEM. UDTs have gained a significant amount of traction over the past couple of years, particularly since the workforce began using personal mobile devices on government networks.


Today, federal administrators must deploy solutions that automatically detect who and what are using the network at all times. Solutions should easily locate the devices through various means, so administrators can quickly prevent the major breaches that have become all too common.


Prevention is more about implementing network security measures quickly and automatically than it is about who has the better firewall. For the Defense Department, which has become so dependent on connected devices and the information they provide, there’s simply no time for that type of old-school thinking. Federal administrators must act now and invest in automated, agile, and efficient solutions to keep their networks safe from cyberattacks.


Find the full article on Signal.

Something that has never happened before is happening right now: we’re recording SolarWinds Lab #60 live on location at AWS re:Invent 2017 in Las Vegas. Head Geek patrick.hubbard and Michael Yang, Director of Product Management for SolarWinds Cloud, will review all of the latest from SolarWinds Cloud, including AppOptics.


Lab 60 from Las Vegas


Join us Wednesday, December 13, 2017 at 1p CST for a Lab you won’t forget.


SolarWinds Lab #60 - SolarWinds Cloud, GO! Learn Modern Monitoring for Apps, Cloud & DevOps

Someday, we may look back on IT as a subset of social science as much as a technological discipline. Because it sits at the intersection of business and technology, visibility and information are at a premium within organizations.


In another social science, economics, there is a theory that given perfect information, rational humans will behave predictably. Putting aside the assumption about rationality (and that's a major bone of contention), we can use that same principle to say that when people seem to behave unpredictably, it's a failure of information.


In any organization at scale, information disparities have the potential to cause confusion, or worse, conflict. One of the most typical examples of this in the enterprise is between storage administrators and database administrators (DBAs). Very often, these two roles butt heads. But, why? After all, at the end of the day, both are trying to serve the same organization with a clearly defined mission, as discussed in the eBook, The Unifying Force of Data, co-authored by myself and Keith Townsend.


So, it could be said that both roles have the same overall goal, but different priorities within it. These priorities inform how each determines their success within the organization. It's a lack of knowledge of their counterparts' priorities that often causes this seemingly inherent conflict.


So, what are these priorities? For a storage admin, much of the focus rests on cost. As data continues to eat the data center, the amount, and subsequent cost, of storage is the fastest-rising expense in the data center. This tends to make them the master of "no" when it comes to storage requests.


This focus on costs informs how storage admins interact with other members of an organization. When it comes to DBAs, this can create a vicious cycle of assumptions. If a DBA requests an additional allocation of storage at a given performance tier, the storage admin is naturally skeptical if it's "really" required. The storage admin will look at what the DBA actually used out of their last allocation, perhaps even digging into IOPS requested by an application as part of their calculation.
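That sanity check can be reduced to a simple calculation. The following sketch is purely illustrative; the 70% utilization bar is an invented placeholder, not a real provisioning guideline:

```python
# Hypothetical sketch of the storage admin's sanity check: compare what a
# DBA actually used from the last allocation against the new request.
def review_request(allocated_gb, used_gb, requested_gb, min_utilization=0.7):
    """Approve only if the previous allocation is mostly consumed.
    The threshold is illustrative, not a real policy."""
    utilization = used_gb / allocated_gb
    return {"utilization": round(utilization, 2),
            "approve": utilization >= min_utilization,
            "requested_gb": requested_gb}

# Only 40% of the last 500 GB is in use, so the 300 GB ask gets questioned.
decision = review_request(allocated_gb=500, used_gb=200, requested_gb=300)
```

Of course, a raw utilization number says nothing about growth rate or performance tier, which is exactly why this check alone fuels the cycle of mistrust described above.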


In the back of the storage admin's mind, there may be an assumption that the DBA is actually asking for more than they need at this moment. This might be because of the lag time in provisioning additional storage. Regardless, the storage admin is trained to be skeptical of DBA provisioning requests.


This is where a lack of information can really hamper organizations. The storage admin thinks the DBA will always seek to overprovision either capacity or speed. This causes additional delays as the admin tries to determine the "actual" requirements. Meanwhile, the DBA assumes the storage admin will be difficult to work with, and often overprovisions as a way to hedge against additional negative interactions.


This cycle isn't because the storage admin is a pessimistic sadist looking to make everyone's lives miserable. The point of storage in an organization is for it to be used to support the mission. The storage admin must provide applications with the storage they need. But they feel the squeeze on cost to make this as lean as possible.


Now, it's impossible to expect either storage admins or DBAs to have perfect information about each other's roles and priorities. That would result in duplicated effort, cognitive overload, and a waste of human capital. Instead, a monitoring scheme might bridge the gap, allowing the two to troubleshoot their issues together by correlating behavior up and down the application stack.


This kind of monitoring requires the collaboration of both a storage admin and DBA. It may require a bit of planning, and probably won't be perfect from the start. But it might be the only way to solve the underlying information imbalance causing the conflict between the two roles.
