
Geek Speak


“Too many secrets.” – Martin Bishop


One of the pivotal moments in the movie Sneakers is when Martin Bishop realizes that they have a device that can break any encryption methodology in the world.


Now 26 years old, the movie was ahead of its time. You might even say the movie predicted quantum computing. Well, at the very least, it predicted what is about to unfold as a result of quantum computing.


Let me explain, starting with some background info on quantum computing.


Quantum computing basics

To understand quantum computing, we must first look at how traditional computers operate. No matter how powerful, standard computing operates on binary units called “bits.” A bit is either a 1 or a 0, on or off, true or false. We’ve been building computers on that architecture for the past 80 or so years. Computers today are built on the same binary logic that Turing used to crack German codes in World War II.


That architecture has gotten us pretty far. (In fact, to the moon and back.) But it does have limits. Enter quantum computing, where a bit can be a 0, a 1, or a 0 and a 1 at the same time. Quantum computing works with logic gates, like classic computers do, but quantum computers use quantum bits, or qubits. A one-qubit gate is represented by a 2×2 matrix of four elements; a two-qubit gate needs a 4×4 matrix of 16 elements; and a three-qubit gate an 8×8 matrix of 64. For more details on qubits and gates, check out this post: Demystifying Quantum Gates—One Qubit At A Time.
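That exponential growth is easy to sketch in a few lines of Python (my own toy illustration, not something from the linked post):

```python
# Toy illustration of how state and gate sizes grow with qubit count:
# an n-qubit state needs 2**n amplitudes, and an n-qubit gate is a
# 2**n x 2**n matrix.

def state_size(n_qubits: int) -> int:
    """Number of amplitudes describing an n-qubit state."""
    return 2 ** n_qubits

def gate_elements(n_qubits: int) -> int:
    """Number of entries in an n-qubit gate matrix."""
    return (2 ** n_qubits) ** 2

for n in (1, 2, 3):
    print(f"{n} qubit(s): state size {state_size(n)}, gate elements {gate_elements(n)}")
```

Each added qubit doubles the state and quadruples the gate matrix: 4, 16, then 64 elements for one, two, and three qubits, which is the scaling that fixed-width bits can't match.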


This is how quantum computers outperform today’s high-speed supercomputers. This is what makes solutions to complex problems possible. Problems today’s computers can’t solve. Things like predicting weather patterns years in advance. Or comprehending the intricacies of the human genome.


Quantum computing brings these insights, out of reach today, into our grasp.


It sounds wonderful! What could go wrong?


Hold that thought.


Quantum Supremacy

Microsoft, Google, and IBM all have working quantum computers available. There is some discussion about capacity and accuracy, but they exist.


And they are getting bigger.


At some point in time, a quantum computer will perform a task that no classical computer can complete in any feasible amount of time. This is called “Quantum Supremacy.”


The following chart shows the progression in quantum computing over the past 20 years.


(SOURCE: Quantum Supremacy is Near, May 2018)


There is some debate about the number of qubits necessary to achieve Quantum Supremacy. But many researchers believe it will happen within the next eight years.


So, in a short period of time, quantum computers will start to unlock answers to many questions. Advances in medicine, science, and mathematics will be within our grasp. Many secrets of the Universe are on the verge of discovery.


And we are not ready for everything to be unlocked.


Quantum Readiness

Quantum Readiness describes whether current technology is prepared for the impacts of quantum computing. One of the largest impacts on everyone, on a daily basis, is encryption.


Our current encryption methods are effective due to the time necessary to break the cryptography. But quantum computing will reduce that processing time by orders of magnitude.
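As a back-of-the-envelope sketch of what that means for symmetric keys (my own numbers, not from the post): Grover's algorithm turns a brute-force search over 2^n keys into roughly 2^(n/2) quantum steps, effectively halving a key's strength, while Shor's algorithm is far more dramatic for public-key schemes like RSA, which it breaks outright.

```python
# Grover's algorithm roughly halves the effective strength of a
# symmetric key: a 2**n search becomes about 2**(n/2) steps.
# (These are asymptotic estimates, not engineering timelines.)

def effective_key_bits(key_bits: int) -> int:
    """Approximate post-Grover work factor for a symmetric key."""
    return key_bits // 2

print(effective_key_bits(128))  # AES-128 drops to ~64-bit strength
print(effective_key_bits(256))  # AES-256 keeps a comfortable margin
```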


In other words, in less than ten years, everything you are encrypting today will be at risk.




Databases. Emails. SSL. Backup files.


All of our data is about to be exposed.


Nothing will be safe from prying eyes.


Quantum Safe

To keep your data safe, you need to start using cryptography methods that are “Quantum-safe.”


There’s one slight problem—the methods don’t exist yet. Don’t worry, though, as we have "top men" working on the problem right now.


The Open Quantum Safe Project, for example, has some promising projects underway. And if you want to watch mathematicians go crazy reviewing research proposals during spring break, the PQCrypto conference is for you.


Let’s assume that these efforts will result in the development of quantum-safe cryptography. Here are the steps you should be taking now.


First, calculate the amount of time necessary to deploy new encryption methods throughout your enterprise. If it takes you a year to roll out such a change, then you had better get started at least a year ahead of Quantum Supremacy happening. Remember, there is no fixed date for when that will happen. Now is your opportunity to take inventory of all the things that require encryption, like databases, files, emails, etc.


Second, review the requirements around your data retention policies. If you are required to retain data for seven years, then you will need to apply new encryption methods on all of that older data. This is also a good time to make certain that data older than your policy is deleted. Remember, you can’t leave your data lying around—it will be discovered and decrypted. It’s best to assume that your data will be compromised and treat it accordingly.
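A migration pass over retained data might look roughly like the following sketch. The `decrypt_legacy` and `encrypt_quantum_safe` functions are hypothetical stand-ins, not real ciphers; real code would use your current algorithm and whichever post-quantum scheme ultimately gets standardized.

```python
# Hypothetical sketch of a re-encryption pass over retained records.
# decrypt_legacy and encrypt_quantum_safe are placeholder stand-ins,
# NOT real ciphers.
from datetime import date, timedelta

RETENTION_YEARS = 7

def decrypt_legacy(blob: bytes) -> bytes:
    return blob[::-1]  # placeholder "cipher"

def encrypt_quantum_safe(data: bytes) -> bytes:
    return b"PQ:" + data  # placeholder "cipher"

def migrate(records, today=date(2018, 12, 1)):
    """records: (created_date, encrypted_blob) pairs.

    Deletes anything past the retention window, re-encrypts the rest.
    """
    cutoff = today - timedelta(days=365 * RETENTION_YEARS)
    kept = []
    for created, blob in records:
        if created < cutoff:
            continue  # beyond retention: delete instead of re-encrypting
        kept.append((created, encrypt_quantum_safe(decrypt_legacy(blob))))
    return kept
```

The shape matters more than the details: one pass that both enforces the retention policy and upgrades the encryption of everything you keep.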


One thing worth mentioning is that some data, such as email, is (possibly) stored on the servers it touches as it traverses the internet. We will need to trust that those responsible for the mail servers are going to apply new encryption methods. Security is a shared responsibility, after all. But it’s a reminder that there are still going to be things outside your control. And maybe reconsider the data that you are making available and sharing in places like private chat messages.



Don’t wait until it’s too late. Data has value, no matter how old. Just look at the recent spike in phishing emails where they show you an old password and try to extort money. Scams like that work because the data has value, even if it is old.


Start thinking about how best to protect that data. Build yourself a readiness plan now so that when quantum computing arrives, you won’t be caught unprepared.


Otherwise…you will have no more secrets.

“I heard a bird sing in the dark of December.

A magical thing. And sweet to remember.

We are nearer to Spring than we were in September.

I heard a bird sing in the dark of December.”

- Oliver Herford


Here on the eve of the darkest month, when cultures across the world celebrate light in an attempt to brighten the short days and long nights, we want to bring some illumination to our THWACK® community too, in the form of the December Writing Challenge. In my announcement, I described how this challenge has been an uplifting event each year, and how many of us—both inside SolarWinds and in the THWACK community at large—look forward to it as a chance to reflect on the past and connect, both with each other and with our goals for the coming year.


I don't need to repeat the instructions (you can read them in the announcement, here), but I hope this post gives you a final reminder to keep an eye on the December Writing Challenge forum starting tomorrow and each day during December.


Rather than a word-a-day style writing prompt like previous years, this year's challenge has a single idea: "What I would tell my younger self." We're excited to read everyone's contributions, ideas, and discussions.


See you in the comments section tomorrow!

Today, in the fifth post of this six-part series, we’re going to cover the fourth and final domain of our reference model for IT infrastructure security. Not only is this the last domain in the model, it is one of the most exciting.


As IT professionals, we are all being asked to do more with less. This is why we need security tools that give us more visibility and control. But what do those tools look like? Let’s take a peek.


Domain: Visibility & Control

If we were securing a castle, it might be good enough to go to a high tower to see the battlefield, and we might be able to use horns or smoke signals to coordinate our defense. In a modern organization, we need to do a little better than that. Real-time visibility providing contextual awareness and granular control of all our security tools is required to defend against today’s threats.


The categories in the visibility and control domain are: automation and orchestration, security incident and event management (SIEM), user (and entity) behavior analytics (UBA/UEBA), device management, policy management, and threat intelligence.


Category: Automation and Orchestration

Automation and orchestration are the tools that make it easier to operate a secure infrastructure. These tools should work across the vendors in your environment and simplify the job of your security practitioners by reducing tedious and error-prone manual tasks, reducing incident response times, and increasing operational efficiency and resiliency. This category is still emerging, which means that, even more than in the other categories, you have the option to build this functionality with open-source tools or, more recently, to buy a commercial platform.


Category: SIEM

Security information and event management (SIEM) products and services combine security information management (SIM) and security event management (SEM) to provide real-time analysis of security alerts generated by applications and network hardware. SIEM solutions collect and correlate a wide variety of information, including logs, alerts, and network data-flow characteristics, and present the data in human-readable formats that administrators use for a variety of reasons, such as application tuning or regulatory compliance. More and more, these tools are complemented with some form of automation platform to provide instructions to analysts for how to deal with alerts, or even act on them automatically!
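To make the correlation idea concrete, here is a minimal sketch of one classic SIEM rule: flag any source IP with repeated failed logins inside a short window. The event format is invented for illustration, not any product's schema.

```python
# Minimal sketch of a SIEM-style correlation rule: flag any source
# IP with `threshold` failed logins inside `window` seconds.
from collections import defaultdict

def brute_force_alerts(events, threshold=5, window=60):
    """events: iterable of (epoch_seconds, source_ip, outcome)."""
    recent_failures = defaultdict(list)
    alerts = set()
    for ts, ip, outcome in events:
        if outcome != "failure":
            continue
        # Keep only failures still inside the sliding window.
        recent_failures[ip] = [t for t in recent_failures[ip] if ts - t <= window]
        recent_failures[ip].append(ts)
        if len(recent_failures[ip]) >= threshold:
            alerts.add(ip)
    return alerts
```

A real SIEM applies hundreds of rules like this across many log sources at once; the value is in the correlation, not any single check.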


Category: UBA / UEBA

User behavior analytics (UBA) solutions look at patterns in user behavior and then use algorithms or machine learning to detect anomalies to prevent insider threats like theft, fraud, or sabotage. User and entity behavior analytics (UEBA) tools expand that to look at the behavior of any entity with an IP address to more broadly encompass "malicious and abusive behavior that otherwise went unnoticed by existing security monitoring systems, such as SIEM and DLP."
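A toy example of the baseline-and-deviation idea behind UBA (a deliberately simplified sketch, not how any vendor actually implements it):

```python
# Toy behavioral check: flag a user whose activity today is more than
# `z_threshold` standard deviations above their own historical mean.
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """history: past daily event counts for one user; today: today's count."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > z_threshold
```

Real UEBA products build far richer models (per-peer-group baselines, machine learning over many features), but the core question is the same: is this behavior unusual *for this entity*?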


Category: Device Management

Device management is all about managing your security devices. These tools are often vendor-specific, and most attempt to display data in a single pane of glass using a central management system (CMS).  Recently, many vendors have recognized the need for a single interface and have enabled APIs to accommodate third-party reporting. Going forward, these tools may be replaced or controlled by other, vendor-agnostic automation tools in a more mature security infrastructure.


Category: Policy Management

Policy management tools make it easier to maintain homogeneous security policies across a large number of devices. These tools were initially vendor-specific, but vendor-neutral policy managers are becoming more common. They give you the ability to deploy a common policy across an organization, a group of devices, or a single device. Additionally, policy management tools often let a user test and validate configurations before deploying them. Finally, they provide a mechanism to create configuration templates used for no-touch/zero-touch provisioning.
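The template idea can be sketched with nothing more than the Python standard library. The template text and variables below are invented examples, not any vendor's format:

```python
# Sketch of template-driven policy generation for zero-touch
# provisioning: one template, one rendered config per device.
from string import Template

POLICY_TEMPLATE = Template(
    "hostname $hostname\n"
    "ntp server $ntp_server\n"
    "logging host $syslog_host\n"
)

def render_policy(device: dict) -> str:
    """Fill the common policy template with one device's variables."""
    return POLICY_TEMPLATE.substitute(device)

config = render_policy({
    "hostname": "edge-fw-01",
    "ntp_server": "192.0.2.10",
    "syslog_host": "192.0.2.20",
})
print(config)
```

Commercial policy managers add validation, versioning, and push mechanics on top, but the render step is conceptually this simple.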


Category: Threat Intelligence

Threat intelligence can take many forms. Its unifying purpose is to provide you, your security organization, and your other security tools with information on external threats. Threat intelligence gathers knowledge of malware, zero-days, advanced persistent threats (APTs), and other exploits so that you can block them before they affect your systems or data.


One More Thing

In the final post in this series we’ll look at the full model that has been described thus far and consider how you can put it to use to meet your individual security goals. Be sure to stick with me for the conclusion!

The Dream of the Data Center


For me, it started with OpenStack. I was at a conference a number of years ago listening to Shannon McFarland talking about using OpenStack to bring programmatic Network Functions Virtualization (NFV) into the data center. My own applications are much more small-scale, but the idea was captivating from the beginning. Since then, many other approaches to this have come into play, but all of them share that single idea of programmatic control over large-scale installations.


The thing about data center architectures is that they need automation. It's not an optional thing or a nice-to-have item. The human resources required to maintain those systems the way most network engineers maintain their networks just don't make financial sense. They're not all that efficient in smaller networks and are particularly ineffective at scale. Necessity is the mother of invention, and that is what built the network automation infrastructure we see in large-scale DC deployments.


For smaller deployments, automation makes things easier and takes the drudge work out of the job. It's something we want, but not something we can always justify. Still, a guy can dream.


Automation at the Device Level (NETCONF/YANG)


Meanwhile, back in the real world of smaller networks and device-centric configurations, we're trying to make things easier as best we can. We've got NETCONF interfaces for programmatic control, and YANG models to use as templates for how things should be. Some of us are using tools like Ansible and SaltStack to go beyond device-by-device configurations, but we're still focused on the devices.
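For a feel of what programmatic control looks like at this level, here is a hedged sketch that hand-builds a NETCONF `<edit-config>` payload with the standard library. In practice, a library such as ncclient manages the session and framing, and the inner config element comes from the device's YANG model; the `<hostname>` element below is an invented placeholder, not a real model.

```python
# Hand-building a NETCONF <edit-config> RPC body with the stdlib.
# The base namespace is from RFC 6241; the config subtree is a
# placeholder for illustration only.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def edit_config_payload(target: str, config_subtree: ET.Element) -> str:
    edit = ET.Element(f"{{{NC}}}edit-config")
    tgt = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(tgt, f"{{{NC}}}{target}")  # e.g., <running/> or <candidate/>
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cfg.append(config_subtree)
    return ET.tostring(edit, encoding="unicode")

hostname = ET.Element("hostname")  # device-model-specific in reality
hostname.text = "core-sw-01"
payload = edit_config_payload("candidate", hostname)
print(payload)
```

The point is that the configuration becomes structured data you can generate, diff, and validate per device, which is exactly the device-centric ceiling the rest of this post is pushing against.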


I'm not sure if this is due to the unwillingness of network engineers to change our paradigm of thinking from the devices to the network as a whole, or if it's the vendors creating equipment that interacts with the network only from its own perspective. It may well be that each feeds the other, creating a vicious cycle that's difficult to break.


If the necessity isn't there, where's the need for invention?


Commoditization and Virtualization (NFV)


As virtual machine technology began to become more common in smaller enterprises, the option of virtualizing all of the things became more appealing. If we're saving money and making more efficient use of resources by virtualizing server loads, why wouldn't we consider virtualizing some of our network infrastructures, too?


With Network Functions Virtualization, we came full circle to the dream that began with that OpenStack presentation. If the network, or at least portions of it, could be addressed programmatically like the other virtual machines, we were getting closer.


Were we dreaming too small?


Systemic Networking (SDN)


Even with NFV and the ability to use cloud and DC automation tools to provision and configure our virtual routers and switches, we're still being traditional network engineering greybeards and thinking in terms of devices rather than in terms of the entire network.


Enter Software Defined Networking, where we theoretically see the network as a programmable whole. The virtual components and the physical components are controlled through a single southbound API by a set of central controllers, and the whole thing can be programmed from there.


Of course, depending on whose definition of SDN our products are working with, this may or may not be a complete solution, but that's a topic for another article.


Once this becomes commoditized, we theoretically have all of the tools to automate the network from a holistic perspective, but do we have an automation framework that will work equally well for all of the components in the platform?


The Whisper in the Wires


We have what it takes to virtualize and automate most of the network, making automation via central controllers a workable option. We can use one framework to deploy, provision, and automate the lot, right? Here's where I'm not quite sure. Even if we have a good strategy for our NFV devices and/or SDN controllers and their satellite devices, do we have a single framework that we can use to handle the deployment and management of the lot?

I'm in Las Vegas this week for AWS re:Invent. If you are there, stop by Booth #608. I’d love to chat about data and databases with you.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Hospital Refuses Procedure, Prescribes 'Fundraising Effort' for Heart Transplant

I was told the same thing earlier this year when my daughter needed $4k worth of allergy injections. I cannot understand how our society has allowed our healthcare to reach this point.


Should a self-driving car kill the baby or the grandma? Depends on where you’re from

It’s an old question, but this time with data. I’d prefer that instead of trying to enforce our ethics into software, we spend time helping cars (and people) avoid the scenarios where accidents happen.


What The Cloud Will Look Like In Ten Years

More AI, collecting data from more things connected, hence the need for more cybersecurity. Seems legit to me.


A dystopian human scoring system in China is blocking people from booking flights

Forget blockchain, this is the tech that will change the world. And probably not in a good way.


New Vehicle Hack Exposes Users’ Private Data Via Bluetooth

On second thought, if you are dumb enough to sync your phone with a rental car, you deserve to have your data stolen.


Scared Your DNA Is Exposed? Then Share It, Scientists Suggest

Closing the barn door after the horses have escaped. Still, an interesting idea, but I’m not certain it's the right solution.


How to Shop Online Like a Security Pro

With the holiday shopping season upon us already, here’s a bunch of good advice that you should share with everyone you know.


It's hard to explain what 40,000 people crammed into the Venetian feels like.


Not too long ago, GDPR was the major topic in many conversations around business and technology.

It went "live" in May 2018, and since then we hadn’t heard much interesting news until recently, when a hospital in Portugal was caught committing the first violation of the regulation.
Well, the first we know of, at least.


Also, some websites are no longer available from Europe, as the owners weren’t able or willing to implement GDPR regulatory strategies, even six months later. Since they block me, it doesn’t affect me… but, my American friends, what do you think they do with your data?
From my point of view, coming from Europe, this behaviour is unacceptable, as it shows disrespect towards the users. But on the other hand, GDPR might clash with the First Amendment in the USA.


SolarWinds, like any other company dealing with customers in Europe, should comply. And we do! Here is the statement.
I am quite happy that the company I work for provides so much insight into the whole GDPR process.


But on top of that, in my former role here as a Sales Engineer working out of the Rebel City in Ireland, I spoke with quite a few customers who needed assistance implementing GDPR, and they checked to see if SolarWinds had a product to help them.

In some of these conversations, I felt a little sorry because the IT pros had been left alone.
I heard one example where a legal department explained GDPR to the C-Levels, and the C-Levels then forwarded the whole task to IT with a deadline and no further planning or explanation.

On that note, what was your experience implementing GDPR at your company, if you don’t mind sharing?


What is the GDPR Right to Be Forgotten Process?


Quite recently, I asked myself how GDPR compliance looks now from the perspective of a user who wants to be forgotten, so I decided to run an experiment myself.

So, the actual task was to get in touch with companies and services that I no longer use and ask them to close my accounts, delete my personal data, and confirm. For the sake of efficiency, I used this opportunity to change my passwords everywhere.


The tools I used were simple: I used LastPass™ as the primary repository of all my account credentials (which I have done for ages), and a communication method to these companies that was either a web form or an email.

Oh boy, I didn't remotely expect the layers of complexity I was facing!
You basically deal with different corporations, policies, people, and a varying amount of creativity in the way GDPR has been implemented.


The first roadblock is to find contact information. Most companies put it into the privacy policy, legal info, or FAQ. If I could not find anything, I used their customer support. That happened quite often!

Some companies replied within a day or so with a simple confirmation like, "We initiated the process, but it can take up to 30 days until all your data is gone."


Sometimes it took a while for a response, but that is fine. Here’s how I imagine some of those GDPR processes work:

A contact center handles the request first and forwards it to someone who understands what it is about but isn’t necessarily empowered to act. That person raises a ticket for IT, IT performs the deletion, and the whole thing gets routed back.


Two organizations asked for reasons, and I replied with, "I would like to express my rights as a European citizen." (I am German, after all, no need to be overly friendly!) And that worked, no more questions asked.


Two companies asked for a verification of my identity, and sure, they are right to do so!
GDPR includes not only the right to be forgotten, but also the right to retrieve a copy of all your data, and there had better be a mechanism to ensure they only talk to authorized persons.


One of these two sent me a short PDF to sign and finally rang me. Quick and painless.

The other one, unfortunately, escalated quickly. The company asked for a copy of my passport and a utility bill, and required me to return a questionnaire. Charming!


I Googled around a little and found websites explaining that companies do, in general, need to verify who they are talking to, but the effort should be proportionate to the data already stored.
I consider my passport and my electricity bill of higher value than my name and one of my email addresses.


What to Do if a Company May Be in Violation of GDPR


Each European country runs an organization dealing with privacy and data protection. For me, in Ireland, it is the Irish Data Protection Commission. I opened a concern with them, and we will see what happens next.


Now to a bad example!


I received a "newsletter" from a company and replied with my usual request. No response received other than another newsletter two days later.

On their website, I found legal@companyname.com, and I sent an email. They didn’t reply, but guess what? I received another newsletter a day later. Spam leads to anger, anger leads to…well, you know your Yoda.

So, I went to their website again and looked up the management team.

My next email went to firstname.lastname@ of the CEO, the complete board of directors, legal@, and abuse@.


Now guess what—I received a response within a day!
Not a friendly one, but it contained my requested confirmation, and I haven’t heard anything since.

This is an example of “no process in place” or perhaps even “oops, GDPwhat?”


The result of my test is that almost all companies appear to have done a good job implementing GDPR.
Some surely need finetuning, and I feel it definitely should be easier to find the responsible person or team to get in touch with them directly.


On a side note, I seriously improved my security rating over at LastPass.




© 2018 SolarWinds Worldwide, LLC.  All rights reserved.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Federal IT professionals spend a lot of time working on optimizing their IT infrastructures, so it can understandably be frustrating when agency policies or cultural issues get in the way. Unfortunately, responses to a recent SolarWinds North American Public Sector survey indicate that this is often the case. Forty-four percent of IT professionals surveyed who claimed their environments were not optimized cited inadequate organizational strategies as the primary culprit. This was followed closely by insufficient investment in training, which was mentioned by 43% of respondents.


Managers and their teams must work together to bridge the knowledge divide that exists within agencies. Top-level managers must find ways to communicate their organizational strategies, so that teams can map their activities toward those objectives. Simultaneously, agencies and individuals should consider ways to improve knowledge sharing and training, so that everyone has the skills to do their jobs.


Don’t let communication be a one-time event


Town hall meetings or emails declaring an organizational change or new priorities are fine but are rarely sufficient on their own. Agency IT leaders should consider implementing systematic methods for communicating and disseminating information to ensure that everyone understands the challenges and opportunities and can work toward strategic goals. The strategy must be sold by the leadership, bought into by middle management, and actively and appropriately pursued by the overall workforce.


Understand that training is everyone’s responsibility


A busy environment encourages a “check the box” mentality when it comes to training. People will often do the minimum required to learn new material, skipping any extra steps that could immerse them in new technologies or trends like operational assurance and security, even though training can have a remarkably positive impact on efficiency.


The Defense Department Directive 8570 certification is a good example of an agency initiative that puts a premium on the importance of training and expertise. The certification requires a baseline level of knowledge of computer systems and cybersecurity, and continuing education units must be earned and submitted on a regular basis. DOD 8570 certification requires that all employees with access to DOD information systems maintain a basic level of knowledge, helping ensure they’ll be up to speed on the technologies that impact those systems.


Self-training can be just as important. IT professionals should use the educational allowances allocated to them by their agencies, taking the time to learn about the technologies they already have in house, but also examining other solutions and tools that will help their departments become fully optimized. Vendors will be more than willing to help out through support programs and their own educational tools, including certification and training programs, online forums, and other offerings.


According to our survey, a knowledge and information-sharing gap does exist within federal IT environments. Applying the practices mentioned above should help shrink that gap and create more knowledgeable and optimized environments for all federal IT professionals.


Find the full article on Government Computer News.



Ryan Adzima

Words Matter

Posted by Ryan Adzima Nov 27, 2018

One of my biggest pet peeves with AI is the word itself. Words matter, especially in IT. Whether you’re trying to nail down a scope of work for a project, troubleshooting an outage, or even trying to buy a product, if everyone isn’t using the same terminology, you’re headed for trouble. You could even potentially be headed for a disaster. In my day-to-day work I have to deal with this constantly.


The wireless world is rife with fractured vernacular and colloquialisms. Words like "WAP" and "coverage" can be confusing, misleading, or even frowned upon in the industry. Even a word as simple as "survey" can cause major issues when talking about what work needs to be done. For a team of cable installers, this can mean a site visit to determine cable paths. For a wireless engineer, the same word could mean a site walk to assess the current wireless situation or the validation of a design that has yet to be deployed (typically by doing what’s called an AP-on-a-stick or APoS survey). Often before heading out to perform a job, I will set up a meeting to review the information and level-set on terminology. What does all this confusion in Wi-Fi have to do with AI? Like I said, the word itself is what pains me.


Artificial Intelligence has a different definition depending on who you ask, but it seems to bring the same thoughts and ideas to mind for everyone: typically HAL 9000, WOPR, and more recently TARS along with any other movie AI. A system capable of thinking like a human but with instantaneous access to information beyond our wildest dreams. Yet, in the marketplace of tools dubbed AI, there isn’t a single "intelligent" system to be seen. To me, Artificial Intelligence is a general knowledge capable of simulating human thought and response by using vast amounts of experience and seemingly non-related information to make those decisions. As I posited in my past posts, we’re not even close. AI is merely making decisions for us in our networks based on the inputs of its creators (developers).


Now why does this bother me so much? What’s wrong with calling advanced machine learning "artificial intelligence"? As we continue to use machine learning and approach true AI without drawing a line in the sand about what it really is and isn’t, the definitions will blur. The use cases, costs, and abilities are vastly different, and the similarities will begin to fade. As that happens, the machine learning systems that corporations are buying now will become antiquated and obsolete, while your customers (or even your executives) scream about all the money and time invested -- now wasted. This isn’t a simple case of aging tech, like old firewalls not capable of doing the same thing as the current generation. This is a fundamentally different issue. It’s like comparing a first-generation Ethernet hub capable of moving 10Mbps across a shared medium to the new set of core switches available today with layer 3 functions, firewall capabilities, and so much more. Machine Learning is a small subset of Artificial Intelligence and is only able to perform a small subset of the tasks. Calling Machine Learning "AI" is imprecise and leads to confusion.


I’d argue that there are maybe three or four systems in place (that we know of) that could be considered AI and I promise you, no one is selling you one of those. Amazon Alexa, Apple Siri, IBM Watson, and whatever Google has dubbed their intelligent system are just about the only intelligent machines out there. And the argument could still be made that these systems aren’t intelligent and are merely working off of vast training sets too large to house in a typical data center. Until we have machines with awareness of more than just the training sets provided to them, let’s stick to calling it Machine Learning.

It is often observed that, "The practice of writing begets more writing," which I, at least, have certainly found to be true. But more than that, the act of writing creates connections to readers (not to mention other writers) in unexpected and delightful ways. Perhaps this is because writing is always personal, even when it conveys nothing but relatively dry facts and processes. There's always a perspective, a point of view, buried in the most mundane of procedures. So how much more so when the topic is something deeply and specifically personal?


Which is why so many members of the THWACK® community look forward to this time of year: for the chance to read, and even participate, in the December Writing Challenge.


That's not just idle speculation or opinion. As with all things at SolarWinds, we have solid facts and data to back up that observation. Last year the 2017 Challenge attracted:

  • 32 days of posts by a select group of 26 authors (including 12 THWACK MVPs).
  • Over 255,000 THWACK points awarded


But beyond the raw figures, the challenge opened a window into the private lives and personal thoughts of the participants. We read about hopes and dreams, successes and setbacks. Each day’s entry allowed us to catch a glimpse of the person behind each THWACK ID and avatar.


This year will be no different, even as the format changes slightly. Rather than a new word each day, the 2018 Challenge features a single writing prompt:


“What I would tell my younger self.”


Each day, a featured writer (whether from the SolarWinds staff or our THWACK community) will share their thoughts, and the community is then encouraged to reply with comments, questions, or advice of their own. Participants will earn THWACK points (2,000 for writing the featured article, 200 for commenting).


At the end of the week, a summary article on Geek Speak™ will highlight some of the more engaging contributions.


Because a society without rules tends to descend into chaos (or in the case of THWACK, a passionate debate about who is the greatest starship captain of all time), let me clarify how this will work:

  • Each day a select author will post to the 2018 Writing Challenge Forum, which you can find here.
  • The post will appear at (roughly) 12:01 a.m. CT (GMT -6).
  • Once that post appears, the community is encouraged to offer their thoughts in the comments.
    • Commenting will earn you 200 THWACK points.
    • One comment per person per day will earn points.
    • You are free to continue to comment but points are earned only for the first comment per day.
  • You have until midnight U.S. CT (GMT -6) to comment.
  • For weekend posts, you have until Monday at midnight U.S. CT to comment for the Saturday and Sunday posts. That way, people who take their weekends seriously are not penalized.
  • If you have questions, feel free to post them in the comments below.


So sharpen your pencils, gather your thoughts, and get ready. Because December 1st is only 5 days away!

Welcome to another edition of the Actuator. This week we will celebrate Thanksgiving here in the USA. I hope that wherever you are reading this, you have the opportunity to be surrounded by family and friends and share a meal.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Nordstrom Blames Breach of Employee Data on Contractor

A nice reminder that it’s important to hire data professionals who understand the basics of data security and privacy. It’s also a reminder that a giant corporation will throw you under the bus when given the chance.


Spectre, Meltdown researchers unveil 7 more speculative execution attacks

It’s time we stop thinking we will avoid attacks. Better to adopt the "assume compromise" line of thinking, and work on how to handle incidents when they happen. Because they will.


Japanese Cybersecurity Minister Admits He Has Never Used a Computer

If you aren't on the grid, you can't be hacked. Brilliant!


Dutch Report Slams Microsoft for GDPR Violations in Office

I’m OK with companies collecting telemetry to make products better. I am not OK when that telemetry contains personal data. Microsoft, you can do better than this, and I trust you will take steps to clean up how your telemetry is done.


Your Private Data Is Quietly Leaking Online, Thanks to a Basic Web Security Error

Everything is terrible.


Sagrada Familia agrees €36 million payment after building for 136 years with no permit

Crazy to think that a building has been under construction for 136 years, never mind they didn’t have a permit.


Michael Bloomberg Will Donate $1.8 Billion To Johns Hopkins University

“No qualified high school student should ever be barred entrance to a college based on his or her family’s bank account.” This, so much this.


There was no turkey at the first Thanksgiving--they most likely ate lobster. Enjoy!


Hopefully, you have been following along with this series and finding some useful bits of information. We have covered traditional APM implementations in part one, part two, and part three. In this post, we will begin looking at how implementing APM in an agile environment is much different than a traditional environment. We will also look at how APM is potentially much easier and more precise when implementing in an agile environment.


Differences between Traditional and Agile


From what we have covered up to this point, it should be very clear what we mean by traditional APM implementations, which we have referred to as “after-the-fact implementation.” This is when APM is implemented after the environment has already been built, with the application already in production. This scenario is likely the one most of us are familiar with.


So, how does an agile environment differ from an APM perspective? In an agile environment, all the components related to APM are implemented iteratively throughout the agile process and lifecycle. As we build out the environment for our application, we are at the same time implementing the APM constructs required to accurately monitor application health. By doing so, we can effectively ensure that we have identified all the dependencies that can affect our application's performance, and implement the correct monitoring, alerting, etc., to surface any bottlenecks we may experience. This includes the application itself, which means we can also identify any issues introduced by the application's iterated versions along the way. Another important thing we can establish along the way is a true baseline of normal application performance.


Application Dependency Mapping


Adding to what we mentioned above about effectively identifying all the dependencies that can affect our application's performance: while we are building out our application, we should be mapping out all of its dependencies. These dependencies would include things we have mentioned previously, such as load balancers, hypervisors, servers, caching layers, databases, etc. Identifying these components is much easier in an agile environment, because we are managing the application's lifecycle as we build it. By effectively mapping out these dependencies along the way, we can implement the proper monitoring and begin identifying issues as they arise. Equally important, if for some reason during our application's lifecycle we decide to implement something new or change something, we can easily adapt our monitoring at the same time. Now, you may be thinking to yourself that this could be accomplished in a traditional method as well. While that is absolutely true, the chances of forgetting something are much higher. This is not to say that an agile environment is immune to forgetting something, but because we are continually iterating throughout the implementation, those chances are kept to a minimum.




So, to put these thoughts into perspective, let's look at a quick scenario:

We are working in an agile environment, which means we are (hopefully) iterating on our application in two- to four-week sprints. After our last sprint release, we decided that it would make sense to implement a message bus for our application to communicate over. We decided this because we identified an application performance issue when making certain calls between our application stacks, and we have the performance data to back that up. So, we have decided that during this next sprint, we will implement this message bus in hopes of resolving the performance issue that was identified. After doing so, we can observe through our APM solution that we have absolutely resolved that issue, but also uncovered additional issues, visible in our application's dependency mappings, that affect another tier in our application stack. We are now able to continually iterate through our application's dependencies to increase performance throughout the lifecycle of our application.
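As a loose illustration of the decoupling the message bus buys us in the scenario above, here is an in-process stand-in using only Python's standard library. A real deployment would use a broker such as RabbitMQ or Kafka, and the topic and payload names below are invented for the example:

```python
# A minimal in-process stand-in for a message bus. One tier publishes
# and returns immediately; the other tier consumes on its own schedule,
# which is the decoupling that resolved the cross-stack call latency.
import queue

bus = queue.Queue()   # decouples the two application tiers

def order_service_publish(order_id: int) -> None:
    """Publishing tier: fire-and-forget, no blocking cross-stack call."""
    bus.put({"topic": "orders.created", "order_id": order_id})

def fulfillment_service_drain() -> list:
    """Consuming tier: process whatever has accumulated on the bus."""
    handled = []
    while not bus.empty():
        handled.append(bus.get()["order_id"])
    return handled

order_service_publish(101)   # the calling tier returns immediately...
order_service_publish(102)
print(fulfillment_service_drain())  # ...the other tier processes later
```

Our APM dependency map would then show the bus itself as a new dependency to monitor (queue depth, consumer lag), which is exactly the kind of follow-on issue the scenario describes.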




As you can see, implementing APM in an agile environment can absolutely increase the accuracy of our application performance monitoring. Again, this does not mean that APM, in a traditional sense, is not effective. Rather, in an agile environment, we are easily able to adjust as we go throughout the lifecycle.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Artificial intelligence (AI) is coming. Contrary to the stuff of science fiction, however, AI has the potential to have a positive impact within the federal IT community. The adoption of AI will likely be the result of the adoption of hybrid and cloud IT computing.


AI is not new. While highly effective, AI has historically had a long adoption timeframe—similar to other excellent technologies waiting for the perfect use case within day-to-day IT environments. Many believe that’s about to change. Public sector investment in AI is expected to rise rapidly in the coming years. According to the 2018 SolarWinds North American Public Sector IT Trends report, more than a third of surveyed public sector IT pros predict that AI will be among the biggest technology priorities in three to five years.


Signs point to hybrid IT and cloud adoption as primary factors in the rise of AI adoption. In fact, the two can have a synergistic relationship, since AI can enhance the capabilities provided through a hybrid IT or cloud environment.


AI platforms


One of the great things about cloud is its ability to serve as a platform for federal IT pros to acquire and use technologies as a service, rather than buying them outright. Applications, storage, infrastructure—all of these are now available as a service.


AI is no different. Each major cloud provider offers its own machine learning as a service (MLaaS) platform, which lets third-party AI application developers build their smart applications on that provider’s cloud. With the availability of AI platforms comes the opportunity to “let someone else” handle the intricacies of creating AI applications—which may lead to a wide variety of new AI-based applications.


There are two more advantages of cloud that present an environment ripe for AI: abundant computing capacity and access to vast amounts of data. Abundant capacity means applications have the room to use as much computing power as necessary to run highly complex algorithms; access to vast amounts of data means applications have the information those algorithms need to deliver far more “intelligent” results. As the network transitions to Software-Defined Everything, AI can draw on additional resources when necessary and return that capacity when it’s finished with complex tasks. Templates, policies, and dynamic scaling are designed to make this not just possible, but simple.


The final advantage of this great convergence of technologies is AI’s role in managing this highly intelligent environment. Take the Internet of Things (IoT), for example. AI can dramatically enhance our ability to manage devices that, to date, have been difficult to manage or even track. Taking that scenario even further, the intelligence and data analytics behind AI may also enable far more broad-reaching automation.


With automation comes greater efficiency and more opportunity for innovation. I’d call that a win-win.


Find the full article on our partner DLT’s blog Technically Speaking.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

With the popularity of Agile methodologies and the ubiquity of people claiming they were embracing NetOps/DevOps, I could swear we were supposed to have adopted a new silo-busting software-defined paradigm shift which would deliver the critical foundational framework we needed to become the company of tomorrow, today.

Warning: Bull-dung Ahead


Brace For Cynicism!


I recently discussed Automation Paralysis and the difficulties of climbing the Cliffs of Despair from small, uni-functional automation tools at the bottom (the Trough of Small Successes) to larger, integrated tools at the top (the Plateau of Near-Completion). The way I see it, in most cases, the cliffs are being climbed individually by each of the infrastructure sub-specialties (network, compute, storage, and security), and even though each group goes through similar pains, there's no shared experience here. Each group works through its own problems on their own, in their own way, using their own tools.


For the sake of argument, let's assume that all infrastructure teams have successfully scaled the Cliffs of Despair with only a few casualties along the way, and are making their individual base camps on the Plateau of Near-Completion. What happens now? What does the company actually have? Four complex automation products which are most likely totally incompatible with one another.


Introducing the Silo Family!


IT Silos: Network, Compute, Storage, Security


If I may, I'd like to introduce to you all to the Silo Family. While they may not show up in genetic test results, I'll wager we all have relatives in these groups:



NetBeard

NetBeard Picture








NetBeard is proud of having automated a large chunk of what many said could not be automated: the network. Given the right inputs, NetBeard's tools can save many hours by pushing configs out to devices in record time. NetBeard's team was the first one in the world to ever have to solve these problems, and it was made all the more difficult by the fact that there were no existing tools available that could do what was needed.


Compute Monkey

Compute Monkey Picture

Looking more confident than the rest of the cohort, Compute Monkey can't understand what the fuss is all about. Compute Monkey's servers have been Puppeted and Cheffed and Whatever-Elsed for years, and it's a public secret that deploying compute now requires little more than a couple of mouse clicks.



StoreBot

StoreBot Picture

StoreBot is pleasant enough, but while everybody can hear the noises coming out of StoreBot's mouth, few have the ability to interpret what it all means. If you've ever heard the teacher talking in the Peanuts TV cartoon series it's a bit like that: Whaa waawaa scuzzy whaaaa LUN wawabyte whaaaaw.


Security Fox

Security Fox Picture

Nobody knows anything about Security Fox. Security Fox likes to keep things secret.


Family Matters


The problem is, each group works in a silo. They don't collaborate with automation, they don't believe that the other groups would really understand what they do (come on, admit it), and they keep their competitive edge to themselves. I don't believe that any of the groups really means to be insular, but, well, each team has knowledge, and to work together on automation would mean having to share knowledge and be patient while the other groups try to understand what, how, and why the group operates the way it does. And once somebody else understands that role, why should they be the ones to automate it? Isn't that automating another group out of a job? Ultimately, I am cynical about the chances of success based on most of the companies I've seen over the years.


However, if success is desired, I do have a few thoughts, and I'm sure that the THWACK community will have some too.


Bye Bye, Silos


Getting rid of silos does not mean expecting everybody to do everything. Indeed, expertise in each technology in use is required just as it was when the organization was siloed. However, merging all these skills into a single team does mean that it's possible to introduce the idea of shared fate, where the team as a whole is responsible – and hopefully rewarded – for achieving tighter integrations between the different technologies so that there can be a single workflow.


Create APIs Between Groups


If it's not possible to unite the teams, and especially where there is a legacy of automation dragged up to the Plateau, make that automation available to fellow teams via APIs, and have the other teams do the same in return. That way each team gets to feel accomplished and maintains its expertise, management team, and so on, but now automation in each group can use, and be used by, automation from other groups. For example, when deploying a server based on a request to the Compute group, wouldn't it be nice if the Compute group's automation obtained IPs, VLANs, trunks, etc., via an API provided by the Network group? Storage could be instantiated the same way. Everybody gets to do their own thing, but by publishing APIs, everybody gets smarter.
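A toy sketch of that idea in Python follows. Every function name, address, and payload here is hypothetical; in practice each function would front a team's real automation (IPAM, hypervisor manager, storage array) behind a published HTTP API, but the composition pattern is the point:

```python
# Hypothetical per-silo APIs. Each team keeps its expertise; the other
# teams only see a small, stable interface.

def network_allocate(hostname: str) -> dict:
    """Network team's published API: hand out an IP and a VLAN."""
    return {"ip": "10.20.30.40", "vlan": 120}   # invented example values

def storage_allocate(hostname: str, gb: int) -> dict:
    """Storage team's published API: carve out a LUN."""
    return {"lun": f"{hostname}-data", "size_gb": gb}

def deploy_server(hostname: str, disk_gb: int) -> dict:
    """Compute team's workflow, composed from the other teams' APIs --
    no ticket queue, no email thread, no waiting for the Silo Family."""
    server = {"host": hostname}
    server.update(network_allocate(hostname))
    server.update(storage_allocate(hostname, disk_gb))
    return server

print(deploy_server("web01", 200))
```

Swap any function's internals and the callers never notice, which is exactly the "everybody gets smarter" property the paragraph describes.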


Go Hyperconverged


Hyperconvergence is not only Buzzword Approved™, but for some it's the perfect workaround for having to create all this automation in a bespoke fashion. Of course, with convenience comes caveat, and there are quite a few to consider, perhaps including:

  • Vendor lock-in (typically only vendor-approved hardware can be used)
  • Solution lock-in (no choice but to run the vendor's software)
  • Delivers a one-size-fits-most solution, which is good if you're that size
  • May not be able to customize to particular needs if not supported by the software


I'm not against hyperconverged infrastructure (HCI) by any means, but it seems to me that it's always a compromise in one way or another.


Use Another Solution


Why write all this coordinated automation when somebody else can do it for you? Well, because somebody else might not do it quite the way you had in mind. I mean, why not spin up some OpenStack in the corporate DC? OpenStack has a component for everything, I hear, including compute, storage, network, vegan recipes, key management, 18th century French poetry, and orchestration. OpenStack can be incredibly powerful, but last I heard it's really not fun to install and maintain for oneself; it's much nicer to let somebody else run it and just subscribe to the service; sounds a bit like cloud doesn't it? On which note:




Make It Somebody Else's Problem (MISEP). The big cloud providers have managed to de-silo their teams, or maybe they were never siloed in the first place. The point is, services like AWS are halfway up the Asymptotic Dream of Full Automation; they pull together all those automation tools, make them work together, orchestrate them, and then provide pointy-clicky access via a web browser. What's not to love? All the hard work is done, it's cheaper*, there will be no need to write scripts any more**, you can do anything you like***, and life will be wonderful****.


* Rarely true with any reasonable number of servers running

** Also very rarely true

*** I made this up

**** It won't


As ever, if you read between the lines, you might guess that as with HCI (another form of MISEP), such simplicity comes at a price, both literally and figuratively. With cloud services it's usually a many-sizes-fit-most model, but if what you want to do isn't supported, that's just tough luck and you need to find another way. While skills in the previous silos may be less necessary, a new silo appears instead: Cloud Cost Optimization. Make of that what you will.


Why The Long Face?


It may seem that this is an unreasonably negative view of automation – and some of it is a tiny bit tongue-in-cheek – but I have tried to highlight some of the very real challenges standing in the way of a beautifully cost-efficient, highly agile, high-quality automated network. Wait, that reminds me of something, and allows me to make one last dig at the dream:


Pick Two: Cheap, Fast, Good


We can get there. At least, we can get much of the way there, but we have to break out of our silos and start sharing what we know. We also need to go into this with eyes wide open, an understanding of what the alternatives might be, and a reasonable expectation of what we're going to get out of it in the end.

Earlier this year I found myself in one of those dreaded career/professional ruts. My job has been extremely busy with new projects over which I had no control, talks of a big upcoming merger, and lots of annoying issues, both technical and non-technical, that have put a real drain on me. So, come summer, I found myself with little energy and even less inspiration while I was in the office. It’s unlike me, but truth be told, I’ve fallen victim to this dilemma a few times over my 25+ year IT career. However, with each passing birthday, I’ve found it harder to bounce back and return with the same gusto. On many a car ride to and from the office, I found myself wondering why I felt like I was stuck in quicksand, and I never came to a reasonable conclusion.


In late August, my family and I took a much-needed vacation. My wife and I decided to treat our two kids to an old-fashioned family road trip. We drove from Baltimore to Atlanta, minimized use of electronic devices, and stopped to see all the interesting things on the right side of the road, repeating the same process on the drive home. (Why Atlanta, you may ask? For starters, the Georgia Aquarium is amazing. Their main tank holds four whale sharks!) By the time we made it to Atlanta, we had found 34 of the 50 state license plates, as well as plates for D.C., Mexico, and Ontario, Canada. (We finished the trip with 41 plates total.)


We spent four days in Georgia seeing the sights, the parks, the museums, and some friends. The whole family had a great time. But I was still unable to shake the feeling that I was “stuck” in my job. Driving to Atlanta, you spend a lot of time thinking, and I did just that. I asked the same old questions: “Is it me? Is it my boss? Is it my staff? Am I missing the big picture here?” No answer made sense, and I found myself no closer to finding peace of mind. My brain was beginning to feel like oatmeal as I processed the same questions and scenarios. Then I compared myself to other people around my age who have had similar career paths in IT to see how I measured up. There are many far ahead of me, but almost an equal number behind me in terms of success. What I realized is that I wasn’t as passionate about IT as most. “Have I officially become an old man? Am I starting to resist all this change in IT and the constant expectation of learning new technology? Is my subconscious trying to protect me for fear that I might not have the energy to keep up?” These are scary realizations, because to stand in place in IT and not accept change is career suicide.


Now I love what I do, and I love the position I hold. I have the freedom to be creative with my SolarWinds platform to tell my story. I’m not technical anymore, but I know the technology very well. And I can translate the technical into potential business outcomes. I meet with teams frequently and I start off by asking, “How can SolarWinds make your life easier?” I follow up with “Did you know that SolarWinds can do…” Doing this and helping people always leaves me with a profound sense of satisfaction.


Driving back home through northeast Georgia, I saw a road sign for the town of Toccoa. “Honey! We need to make a detour for about two hours!” I told my wife. Right outside Toccoa is Currahee Mountain. This is the mountain the original U.S. Army Airborne paratroopers trained on, and cursed about, during WWII, and was made famous in the pilot episode of the acclaimed HBO miniseries “Band of Brothers.” Full disclosure, I’m a big WWII history buff, have watched the miniseries at least 20 times, and I like to run when I can. My wife dropped me off at the start of the Col. Robert Sink Memorial Trail and said goodbye to me for what could have been the last time. I threw on my running gear and up the mountain I ran. Immediately I thought to myself, “What was I thinking? This really is a mountain! I’m in no shape for this!” “Three miles up! Three miles down!” the soldiers would curse to themselves. In reality, it was closer to 2.4. (The paratroopers started inside the camp, which is closed to the public.) I ran some of it and walked the rest. I never stopped until I got to the top. (At one point my organs began to hurt. Seriously.) Currahee is Cherokee for “Stand Alone.” This mountain is away from others, so the view in every direction is gorgeous.


I spent about 30 minutes alone contemplating so many things. I thought about the thousands of young men who ran up this mountain almost daily in full battle gear as they prepared to go to war, many of whom never came home. I thought about who I was, thought about my family and friends, and I thought about my career. The answers I’d been looking for didn’t come to me, but my brain was fresh. And other than my pulsating spleen and sharp kidney pains, I felt invigorated. Eventually I ran, mostly walked, back down Currahee, and I felt that sense of accomplishment that I had been missing for so long. I wasn’t stuck in quicksand anymore.


For me, the lesson learned from this exercise in self-inflicted agony is that the next time I get stuck in a mental rut, I shouldn’t dwell on it… but run until it hurts, and knock off a #bucketlist item. I’m not suggesting this diagnosis for everyone. But to paraphrase Winston Churchill, “If you’re going through heck, keep going!” Stop dwelling and change things up. Push yourself in another direction, challenge yourself on a different level. Identify a short-term goal in your life and reach it… and then bask in the glory of a job well done. You will assuredly see your challenges from a whole new perspective.


As for me, it’s been almost two months since Currahee and I’m a changed man. I like to think I escaped my rut by climbing Currahee. The view from the peak was the change in perspective I needed. My answers are now in view, even if they are still far off. And my descent renewed my energy to pursue my goals. My attitude towards my work and my career is back to what it used to be.


If you find yourself in a rut and you can’t think your way out of it – run! Or walk, climb, paddle, build, swim – whatever lifts you out of your rut and inspires you again. As Col. Sink would yell, “Currahee!”



© 2018 SolarWinds Worldwide, LLC.  All rights reserved.



I’m 98% confident if you ask three data scientists to define Artificial Intelligence (AI), you will get five different answers.


The field of AI research dates to the mid-1950s, or even earlier if you consider the work of Alan Turing in the 1940s. The phrase “AI” has therefore been kicked around for 60+ years, or roughly the amount of time since the last Cleveland Browns championship.


My preference for a definition to AI is this one, from Elaine Rich in the early 1990s:

“The study of how to make computers do things which, at the moment, people do better.”


But there is also this quote from Alan Turing, in his effort to describe computer intelligence:

“A computer would deserve to be called intelligent if it could deceive a human into believing it was human.”


When I try to define AI, I combine the two thoughts:

“Anything written by a human that allows a machine to do human tasks.”


This, in turn, allows humans to find more tasks for machines to do on our behalf. Because we’re driven to be lazy.


Think about the decades spent finding ways to build better programs and automate traditional human tasks. We built robots to build cars, vacuum our houses, and even flip burgers.


In the world of IT, alerting is one example of where automation has shined. We started building actions, or triggers, to fire in response to alert conditions. We kept building these triggers until we reached a point where human intervention was needed. And then we would spend time trying to figure out a way to remove the need for a person.


This means if you ever wrote a piece of code with IF-THEN-ELSE logic, you’ve written AI. Any computer program that follows rule-based algorithms is AI. If you ever built code that has replaced a human task, then yes, you built AI.
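To make that concrete, here is a tiny, hypothetical rule-based "expert system" for the alert triage described above. The thresholds and action names are invented, but under the broad definition used in this post, this IF-THEN-ELSE logic already qualifies as "AI" -- it encodes a human's decision-making so a machine can do the task:

```python
# A hand-written rule-based system: every branch is a decision a human
# operator used to make, now delegated to the machine.

def triage_alert(cpu_pct: float, disk_free_gb: float) -> str:
    """Return an action for an alert, using hand-written rules."""
    if cpu_pct > 95 and disk_free_gb < 1:
        return "page-oncall"      # both thresholds breached: wake a human
    elif cpu_pct > 95:
        return "restart-service"  # a remediation once done by hand
    elif disk_free_gb < 1:
        return "purge-temp-files"
    else:
        return "ignore"

print(triage_alert(99.0, 0.5))    # page-oncall
print(triage_alert(50.0, 120.0))  # ignore
```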


But to many in the field of AI research, AI means more than just simple code logic. AI means things like image recognition, text analysis, or a fancy “Bacon/Not-Bacon” app on your phone. AI also means talking robots, speech translations, and predicting loan default rates.


AI means so many different things to different people because AI is a very broad field. AI contains both Machine Learning and Deep Learning, with Deep Learning itself a subset of Machine Learning.



That’s why you can find one person who thinks of AI as image classification, but another person who thinks AI is as simple as a rules-based recommendation engine. So, let’s talk about those subsets of AI called Machine Learning and Deep Learning.


Machine Learning for Mortals

Machine Learning (ML) is a subset of AI. ML offers the ability for a program to apply statistical techniques to a dataset and arrive at a determination. We call this determination a prediction, and yes, this is where the field of predictive analytics resides.


The process is simple enough: you collect data, you clean data, you classify your data, you do some math, you build a model, and that model is used to make predictions upon similar sets of data. This is how Netflix knows what movie you want to watch next, or how Amazon knows what additional products you would want to add to your cart.
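That workflow can be sketched in a few lines of standard-library Python. The "dataset" below is invented, and a real recommendation model would use far richer features, but the shape -- collect, clean, fit, predict -- is the same:

```python
# Toy ML workflow: collect data, clean it, fit a least-squares line,
# then use the model to predict on an unseen input.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# "Collected" data: (hours watched, movies rated) -- with one bad record.
raw = [(1, 2.1), (2, 3.9), (3, 6.2), (None, 9.9), (4, 8.1)]
clean = [(x, y) for x, y in raw if x is not None]   # cleaning step

a, b = fit_line([x for x, _ in clean], [y for _, y in clean])
prediction = a * 5 + b   # extrapolate to an unseen input (≈ 10.15 here)
print(round(prediction, 1))
```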


But ML requires a human to provide the input. It’s a human task to define the features used in building the model. Humans are the ones to collect and clean the data used to build the model. As you can imagine, humans desire to shed themselves of some tasks that are better suited for machines, like determining if an image is a chihuahua or a muffin.


Enter the field of Deep Learning.


Deep Learning Demystified

The first rule of Deep Learning (DL) is this: You don’t need a human to input a set of features. DL will identify features from large sets of data (think hundreds of thousands of images) and build a model without the need for any human intervention, thankyouverymuch. Well, sure, some intervention is needed. After all, it’s a human that will need to collect the data – in the example above, some pictures of chihuahuas – and tell the DL algorithm what each picture represents.


But that’s about all the human needs to do for DL. Through the use of Convolutional Neural Networks, DL will take the data (an image, for example), break it down into layers, do some math, and iterate through the data over and over to arrive at a predictive model. Humans will adjust the iterations in an effort to tune the model and achieve a high rate of accuracy. But DL is doing all the heavy lifting.
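As a rough illustration of the single building block a CNN stacks into those layers, here is a hand-rolled 2D convolution in plain Python. The 4x4 "image" and the vertical-edge filter are invented for the example; in a real network the filter values are learned during training rather than written by hand:

```python
# Slide a small filter over an image, summing element-wise products.
# Stacking many of these (plus nonlinearities) is what a CNN does.

image = [            # a tiny "image": dark on the left, bright on the right
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1],   # responds strongly where dark meets bright
          [-1, 1]]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(img[r + i][c + j] * k[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

The strong response in the middle column is the filter "finding" the vertical edge -- a feature no human had to specify.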


DL is how we handle image classifications, handwriting recognition, and speech translations. Tasks that once were better suited to humans are now reduced to a bunch of filters and epochs.



Before I let you go, I want to mention one thing: beware of companies that market their tools as “predictive” when they aren’t using traditional ML methods. Sure, you can make a prediction based upon a set of rules; that’s how Deep Blue worked. But I prefer tools that use statistical techniques to arrive at a conclusion.


It’s not that these companies are knowingly lying, it’s just that they may not know the difference. After all, the definitions for AI are muddy at best, so it is easy to understand the confusion. Use this post as a guide to ask some probing questions.


As an IT pro, you should consider use cases for ML in your daily routine. The best example I can give is the use of linear regression for capacity planning. ML can also help analyze logs for better threat detection. One caveat, though: if the model doesn’t include the right data – because a specific event has never been observed – the model may not work as expected.
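Here is a minimal sketch of that capacity-planning example in plain Python. The usage figures and the 500 GB capacity are invented; the point is just that an ordinary least-squares fit turns historical growth into an estimated time-to-full:

```python
# Fit disk usage over time with a least-squares line, then solve for
# the day the volume reaches capacity.

days = [0, 7, 14, 21, 28]           # observation day
used = [200, 215, 231, 244, 260]    # GB consumed on each day (invented)
capacity_gb = 500

n = len(days)
mx, my = sum(days) / n, sum(used) / n
slope = sum((x - mx) * (y - my) for x, y in zip(days, used)) / \
        sum((x - mx) ** 2 for x in days)          # growth in GB per day
intercept = my - slope * mx

days_until_full = (capacity_gb - intercept) / slope
print(f"~{slope:.2f} GB/day; full in ~{days_until_full:.0f} days")
```

The same caveat from above applies: if usage ever jumps in a way the historical data never showed (a new workload, a migration), the straight-line model will be wrong.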


That’s when you realize that the machines are only as perfect as the humans that program them.


And this is why I’m not worried about Skynet.
