“Too many secrets.” – Martin Bishop

 

One of the pivotal moments in the movie Sneakers is when Martin Bishop realizes that they have a device that can break any encryption methodology in the world.

 

Now 26 years old, the movie was ahead of its time. You might even say the movie predicted quantum computing. Well, at the very least, it predicted what is about to unfold as a result of quantum computing.

 

Let me explain, starting with some background info on quantum computing.

 

Quantum computing basics

To understand quantum computing, we must first look at how traditional computers operate. No matter how powerful, standard computing operates on binary units called “bits.” A bit is either a 1 or a 0, on or off, true or false. We’ve been building computers based on that architecture for the past 80 or so years. Computers today run on the same kind of bits that Turing worked with to crack German codes in World War II.

 

That architecture has gotten us pretty far. (In fact, to the moon and back.) But it does have limits. Enter quantum computing, where a bit can be a 0, a 1, or a 0 and a 1 at the same time. Quantum computing works with logic gates, like classic computers do. But quantum computers use quantum bits, or qubits. A gate acting on a single qubit is represented by a 2x2 matrix of four elements. A two-qubit gate needs a 4x4 matrix of 16 elements, and a three-qubit gate an 8x8 matrix of 64 elements; the matrices keep doubling in each dimension with every qubit added. For more details on qubits and gates, check out this post: Demystifying Quantum Gates—One Qubit At A Time.
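To make that scaling concrete, here is a minimal sketch (assuming Python with NumPy, which is not part of the original post) that prints how the state vector and gate matrices grow with qubit count, and applies a Hadamard gate to put one qubit into an equal superposition of 0 and 1.

```python
# Why qubit counts explode: an n-qubit state needs 2**n complex amplitudes,
# and an n-qubit gate is a (2**n x 2**n) matrix: 4, 16, and 64 elements
# for 1, 2, and 3 qubits.
import numpy as np

for n in (1, 2, 3):
    dim = 2 ** n                  # number of basis states
    print(f"{n} qubit(s): {dim} amplitudes, gate matrix of {dim * dim} elements")

# A single qubit in equal superposition: apply the Hadamard gate to |0>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])
print("H|0> =", H @ ket0)         # amplitude 1/sqrt(2) for both |0> and |1>
```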

 

This is how quantum computers outperform today’s high-speed supercomputers. This is what makes solutions to complex problems possible. Problems today’s computers can’t solve. Things like predicting weather patterns years in advance. Or comprehending the intricacies of the human genome.

 

Quantum computing brings these insights, out of reach today, into our grasp.

 

It sounds wonderful! What could go wrong?

 

Hold that thought.

 

Quantum Supremacy

Microsoft, Google, and IBM all have working quantum computers available. There is some discussion about capacity and accuracy, but they exist.

 

And they are getting bigger.

 

At some point in time, quantum computers will outperform classical computers at the same task. This is called “Quantum Supremacy.”

 

The following chart shows the linear progression in quantum computing for the past 20 years.

 

(SOURCE: Quantum Supremacy is Near, May 2018)

 

There is some debate about the number of qubits necessary to achieve Quantum Supremacy. But many researchers believe it will happen within the next eight years.

 

So, in a short period of time, quantum computers will start to unlock answers to many questions. Advances in medicine, science, and mathematics will be within our grasp. Many secrets of the Universe are on the verge of discovery.

 

And we are not ready for everything to be unlocked.

 

Quantum Readiness

Quantum Readiness is the term used to describe whether current technology is prepared for the impact of quantum computing. One of the largest impacts, for everyone, on a daily basis, is on encryption.

 

Our current encryption methods are effective due to the time necessary to break the cryptography. But quantum computing will reduce that processing time by orders of magnitude.

 

In other words, in less than ten years, everything you are encrypting today will be at risk.

 

Everything.

 

Databases. Emails. SSL. Backup files.

 

All of our data is about to be exposed.

 

Nothing will be safe from prying eyes.

 

Quantum Safe

To keep your data safe, you need to start using cryptography methods that are “Quantum-safe.”

 

There’s one slight problem—the methods don’t exist yet. Don’t worry, though, as we have "top men" working on the problem right now.

 

The Open Quantum Safe Project, for example, has some promising projects underway. And if you want to watch mathematicians go crazy reviewing research proposals during spring break, the PQCrypto conference is for you.

 

Let’s assume that these efforts will result in the development of quantum-safe cryptography. Here are the steps you should be taking now.

 

First, calculate the amount of time necessary to deploy new encryption methods throughout your enterprise. If it takes you a year to roll out such a change, then you had better get started at least a year ahead of Quantum Supremacy happening. Remember, there is no fixed date for when that will happen. Now is also your opportunity to take inventory of all the things that require encryption: databases, files, emails, and so on.
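As a starting point for that inventory, here is a minimal sketch (the hostnames are placeholders, and it assumes Python with the cryptography package) that records which public-key algorithm and key size each external TLS endpoint currently presents, which is exactly the kind of detail you will need when deciding what must be re-issued.

```python
# A sketch of one slice of an encryption inventory: what public-key algorithm
# and key size does each public endpoint present today? Hostnames below are
# placeholders; replace them with your own list.
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

HOSTS = ["example.com", "example.org"]

for host in HOSTS:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der_cert)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        print(f"{host}: RSA, {key.key_size}-bit")
    elif isinstance(key, ec.EllipticCurvePublicKey):
        print(f"{host}: EC, curve {key.curve.name}")
    else:
        print(f"{host}: {type(key).__name__}")
```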

 

Second, review the requirements around your data retention policies. If you are required to retain data for seven years, then you will need to apply new encryption methods to all of that older data. This is also a good time to make certain that data older than your retention policy is deleted. Remember, you can’t leave your data lying around; it will be discovered and decrypted. It’s best to assume that your data will be compromised and treat it accordingly.
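The mechanics of re-encrypting retained data look much like an ordinary key rotation. Here is a minimal sketch of that pattern using Python's cryptography package; Fernet is only a stand-in here, since the same rotate-in-place approach applies once a quantum-safe scheme is standardized.

```python
# A sketch of re-encrypting old data under a new key. MultiFernet.rotate()
# decrypts a token with any of the supplied keys and re-encrypts it with the
# first (newest) one. Fernet is a stand-in for a future quantum-safe scheme.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()          # the key protecting today's data
new_key = Fernet.generate_key()          # the replacement key/algorithm

old_token = Fernet(old_key).encrypt(b"seven-year-old customer record")

rotator = MultiFernet([Fernet(new_key), Fernet(old_key)])
new_token = rotator.rotate(old_token)    # same plaintext, new protection

assert Fernet(new_key).decrypt(new_token) == b"seven-year-old customer record"
```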

 

One thing worth mentioning is that some data, such as email, is (possibly) stored on the servers it touches as it traverses the internet. We will need to trust that those responsible for the mail servers are going to apply new encryption methods. Security is a shared responsibility, after all. But it’s a reminder that there will still be things outside your control. And maybe reconsider the data that you are making available and sharing in places like private chat messages.

 

Summary

Don’t wait until it’s too late. Data has value, no matter how old. Just look at the spike in phishing emails recently, where they show you an old password and try to extort money. Scams like that work, because the data has value, even if it is old.

 

Start thinking how to best protect that data. Build yourself a readiness plan now so that when quantum cryptography happens, you won’t be caught unprepared.

 

Otherwise…you will have no more secrets.

“I heard a bird sing in the dark of December.

A magical thing. And sweet to remember.

We are nearer to Spring than we were in September.

I heard a bird sing in the dark of December.”

- Oliver Herford

 

Here on the eve of the darkest month, when cultures across the world celebrate light in an attempt to brighten the short days and long nights, we want to bring some illumination to our THWACK® community too, in the form of the December Writing Challenge. In my announcement, I described how this challenge has been an uplifting event each year, and how many of us—both inside SolarWinds and in the THWACK community at large—look forward to it as a chance to reflect on the past and connect, both with each other and with our goals for the coming year.

 

I don't need to repeat the instructions (you can read them in the announcement, here), but I hope this post gives you a final reminder to keep an eye on the December Writing Challenge forum starting tomorrow and each day during December.

 

Rather than a word-a-day style writing prompt like previous years, this year's challenge has a single idea: "What I would tell my younger self." We're excited to read everyone's contributions, ideas, and discussions.

 

See you in the comments section tomorrow!

Today, in the fifth post of this six-part series, we’re going to cover the fourth and final domain of our reference model for IT infrastructure security. Not only is this the last domain in the model, it is one of the most exciting.

 

As IT professionals, we are all being asked to do more with less. This is why we need security tools that give us more visibility and control. But what do those tools look like? Let’s take a peek.

 

Domain: Visibility & Control

If we were securing a castle, it might be good enough to go to a high tower to see the battlefield, and we might be able to use horns or smoke signals to coordinate our defense. In a modern organization, we need to do a little better than that. Real-time visibility providing contextual awareness and granular control of all our security tools is required to defend against today’s threats.

 

The categories in the visibility and control domain are: automation and orchestration, security incident and event management (SIEM), user (and entity) behavior analytics (UBA/UEBA), device management, policy management, and threat intelligence.

 

Category: Automation and Orchestration

Automation and orchestration tools make it easier to operate a secure infrastructure. These tools should work across the vendors in your environment and simplify the job of your security practitioners by reducing tedious and error-prone manual tasks, reducing incident response times, and increasing operational efficiency and resiliency. This category is still emerging, which means that, even more than in the other categories, you have the option to build this functionality with open source tools or, more recently, to buy a commercial platform.
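For a flavor of what "build it yourself" can look like, here is a minimal sketch (assuming Python with Paramiko; the device list, credentials, and block command are placeholders) that pushes the same containment change to several devices instead of having someone type it into each one by hand.

```python
# A sketch of a small orchestration task: push one containment change to a
# list of devices over SSH. Hostnames, credentials, and the command syntax
# are placeholders; adapt them to your own platform.
import paramiko

DEVICES = ["fw-edge-01.example.net", "fw-edge-02.example.net"]
BLOCK_COMMAND = "block ip 203.0.113.99"   # hypothetical CLI syntax

for host in DEVICES:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="secops", password="change-me")
    stdin, stdout, stderr = client.exec_command(BLOCK_COMMAND)
    print(host, stdout.read().decode().strip() or "ok")
    client.close()
```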

 

Category: SIEM

Security information and event management (SIEM) products and services combine security information management (SIM) and security event management (SEM) to provide real-time analysis of security alerts generated by applications and network hardware. SIEM solutions collect and correlate a wide variety of information, including logs, alerts, and network data-flow characteristics, and present the data in human-readable formats that administrators use for a variety of reasons, such as application tuning or regulatory compliance. More and more, these tools are complemented with some form of automation platform to provide instructions to analysts for how to deal with alerts, or even act on them automatically!
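For a sense of what "collect and correlate" means at its simplest, here is a minimal sketch (standard-library Python, with a made-up log format) that counts failed logins per source address and flags anything that crosses a threshold, which is the kind of rule a SIEM evaluates continuously across far more data sources.

```python
# A toy correlation rule: flag any source IP with too many failed logins.
# The log format is made up; a real SIEM normalizes many formats for you.
import re
from collections import Counter

LOG_LINES = [
    "2018-11-27T10:01:02 sshd: Failed password for admin from 198.51.100.7",
    "2018-11-27T10:01:05 sshd: Failed password for admin from 198.51.100.7",
    "2018-11-27T10:01:09 sshd: Failed password for root from 198.51.100.7",
    "2018-11-27T10:02:14 sshd: Accepted password for alice from 192.0.2.10",
]
THRESHOLD = 3

failures = Counter()
for line in LOG_LINES:
    match = re.search(r"Failed password .* from (\S+)", line)
    if match:
        failures[match.group(1)] += 1

for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source}")
```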

 

Category: UBA / UEBA

User behavior analytics (UBA) solutions look at patterns in user behavior and then use algorithms or machine learning to detect anomalies to prevent insider threats like theft, fraud, or sabotage. User and entity behavior analytics (UEBA) tools expand that to look at the behavior of any entity with an IP address to more broadly encompass "malicious and abusive behavior that otherwise went unnoticed by existing security monitoring systems, such as SIEM and DLP."

 

Category: Device Management

Device management is all about managing your security devices. These tools are often vendor-specific, and most attempt to display data in a single pane of glass using a central management system (CMS).  Recently, many vendors have recognized the need for a single interface and have enabled APIs to accommodate third-party reporting. Going forward, these tools may be replaced or controlled by other, vendor-agnostic automation tools in a more mature security infrastructure.

 

Category: Policy Management

Policy management tools make it easier to maintain consistent security policies across a large number of devices. These tools were initially vendor-specific, but vendor-neutral policy managers are becoming more common. They give you the ability to deploy a common policy across an organization, a group of devices, or a single device. Additionally, policy management tools often let you test and validate configurations before deploying them. Finally, they provide a mechanism to create configuration templates used for no-touch/zero-touch provisioning.
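Those configuration templates are usually just parameterized text. Here is a minimal sketch (assuming Python with Jinja2; the template and the values are invented) of rendering device-specific configs from one common policy template.

```python
# A sketch of policy-as-template: one common template, rendered per device.
# The template syntax and variables here are invented for illustration.
from jinja2 import Template

POLICY_TEMPLATE = Template("""\
hostname {{ hostname }}
ntp server {{ ntp_server }}
snmp-server community {{ snmp_community }} RO
""")

devices = [
    {"hostname": "branch-sw-01", "ntp_server": "10.0.0.1", "snmp_community": "readonly"},
    {"hostname": "branch-sw-02", "ntp_server": "10.0.0.1", "snmp_community": "readonly"},
]

for device in devices:
    print(POLICY_TEMPLATE.render(**device))
```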

 

Category: Threat Intelligence

Threat intelligence can take many forms. Its unifying purpose is to provide you, your security organization, and your other security tools with information on external threats. Threat intelligence gathers knowledge of malware, zero-days, advanced persistent threats (APTs), and other exploits so that you can block them before they affect your systems or data.

 

One More Thing

In the final post in this series we’ll look at the full model that has been described thus far and consider how you can put it to use to meet your individual security goals. Be sure to stick with me for the conclusion!

The Dream of the Data Center

 

For me, it started with OpenStack. I was at a conference a number of years ago listening to Shannon McFarland talking about using OpenStack to bring programmatic Network Functions Virtualization (NFV) into the data center. My own applications are much smaller in scale, but the idea was captivating from the beginning. Since then, many other approaches to this have come into play, but all of them share that single idea of programmatic control over large-scale installations.

 

The thing about data center architectures is that they need automation. It's not an optional thing or a nice-to-have item. The human resources required to maintain those systems the way most network engineers maintain our networks just don't make financial sense. Manual methods aren't all that efficient in smaller networks and are particularly ineffective at scale. Necessity is the mother of invention, and that is what built the network automation infrastructure we see in large-scale DC deployments.

 

For smaller deployments, automation makes things easier and takes the drudge work out of the job. It's something we want, but not something we can always justify. Still, a guy can dream.

 

Automation at the Device Level (NETCONF/YANG)

 

Meanwhile, back in the real world of smaller networks and device-centric configurations, we're trying to make things easier as best we can. We've got NETCONF interfaces for programmatic control, and YANG models to use as templates for how things should be. Some of us are using tools like Ansible and SaltStack to go beyond device-by-device configurations, but we're still focused on the devices.
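As a concrete example of that device-level programmability, here is a minimal sketch (assuming Python with ncclient; the host and credentials are placeholders) that pulls the running configuration from a NETCONF-enabled device, the kind of building block that tools like Ansible and SaltStack wrap for you.

```python
# A sketch of device-level automation over NETCONF: connect to one device
# and retrieve its running configuration. Host and credentials are
# placeholders; the device must have NETCONF enabled (typically port 830).
from ncclient import manager

with manager.connect(
    host="192.0.2.1",
    port=830,
    username="netops",
    password="change-me",
    hostkey_verify=False,
) as conn:
    reply = conn.get_config(source="running")
    print(reply.data_xml[:500])   # show the first part of the config XML
```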

 

I'm not sure if this is due to the unwillingness of network engineers to change our paradigm of thinking from the devices to the network as a whole, or if it's the vendors creating equipment that interacts with the network only from its own perspective. It may well be that each feeds the other, creating a vicious cycle that's difficult to break.

 

If the necessity isn't there, where's the need for invention?

 

Commoditization and Virtualization (NFV)

 

As virtual machine technology began to become more common in smaller enterprises, the option of virtualizing all of the things became more appealing. If we're saving money and making more efficient use of resources by virtualizing server loads, why wouldn't we consider virtualizing some of our network infrastructures, too?

 

With Network Functions Virtualization, we came full circle to the dream that began with that OpenStack presentation. If the network, or at least portions of it, could be addressed programmatically like the other virtual machines, we were getting closer.

 

Were we dreaming too small?

 

Systemic Networking (SDN)

 

Even with NFV and the ability to use cloud and DC automation tools to provision and configure our virtual routers and switches, we're still being traditional network engineering greybeards and thinking in terms of devices rather than in terms of the entire network.

 

Enter Software Defined Networking, where we theoretically see the network as a programmable whole. The virtual components and the physical components share a single southbound API from a set of central controllers and the whole thing can be programmed through there.

 

Of course, depending on whose definition of SDN our products are working with, this may or may not be a complete solution, but that's a topic for another article.

 

Once this becomes commoditized, we theoretically have all of the tools to automate the network from a holistic perspective, but do we have an automation framework that will work equally well for all of the components in the platform?

 

The Whisper in the Wires

 

We have what it takes to virtualize and automate most of the network, making automation via central controllers a workable option. We can use one framework to deploy, provision, and automate the lot, right? Here's where I'm not quite sure. Even if we have a good strategy for our NFV devices and/or SDN controllers and their satellite devices, do we have a single framework that we can use to handle the deployment and management of the lot?

I’m in Las Vegas this week for AWS re:Invent. If you are there, stop by Booth #608. I’d love to chat about data and databases with you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Hospital Refuses Procedure, Prescribes 'Fundraising Effort' for Heart Transplant

I was told the same thing earlier this year when my daughter needed $4k worth of allergy injections. I cannot understand how our society has allowed our healthcare to reach this point.

 

Should a self-driving car kill the baby or the grandma? Depends on where you’re from

It’s an old question, but this time with data. I’d prefer that instead of trying to encode our ethics into software, we spend time helping cars (and people) avoid the scenarios where accidents happen.

 

What The Cloud Will Look Like In Ten Years

More AI collecting data from more connected things, hence the need for more cybersecurity. Seems legit to me.

 

A dystopian human scoring system in China is blocking people from booking flights

Forget blockchain, this is the tech that will change the world. And probably not in a good way.

 

New Vehicle Hack Exposes Users’ Private Data Via Bluetooth

On second thought, if you are dumb enough to sync your phone with a rental car, you deserve to have your data stolen.

 

Scared Your DNA Is Exposed? Then Share It, Scientists Suggest

Closing the barn door after the horses have escaped. Still, an interesting idea, but I’m not certain it's the right solution.

 

How to Shop Online Like a Security Pro

With the holiday shopping season upon us already, here’s a bunch of good advice that you should share with everyone you know.

 

It's hard to explain what 40,000 people crammed into the Venetian feels like.

 

Not too long ago, GDPR was the major topic in many conversations around business and technology.

It went "live" in May 2018, and since then, we haven’t heard much interesting news until recently, as a hospital in Portugal got caught with the first violation of the regulation.
Well, the first we know of, at least.

 

Also, some websites are no longer available from Europe, as the owners weren’t able or willing to implement GDPR regulatory strategies, even six months later. Since they blocked me, it doesn’t affect me… but, my American friends, what do you think they do with your data?
From my point of view, coming from Europe, this behaviour is unacceptable as it shows disrespect towards the users. But on the other hand, GDPR might clash with the First Amendment in the USA.

 

SolarWinds, like any other company dealing with customers in Europe, should comply. And we do! Here is the statement.
I am quite happy that the company I work for provides so much insight into the whole GDPR process.

 

But on top of that, in my former role here as a Sales Engineer working out of the Rebel City in Ireland, I spoke with quite a few customers who needed assistance during the implementation of the GDPR, and they checked to see if SolarWinds had a product to help them.

In some of these conversations, I felt a little sorry for the IT pros, because they had been left on their own.
I heard one example where a legal department explained GDPR to the C-levels, and the C-levels then forwarded the whole task to IT with a deadline and no further planning or explanation.

On that note, what was your experience implementing GDPR at your company, if you don’t mind sharing?

 

What is the GDPR Right to Be Forgotten Process?

 

Quite recently, I asked myself how GDPR compliance looks now from the perspective of a user who wants to be forgotten, so I decided to run an experiment myself.

So, the actual task was to get in touch with companies and services that I no longer use and ask them to close my accounts, delete my personal data, and confirm. For the sake of efficiency, I used this opportunity to change my passwords everywhere.

 

The tools I used were simple: I used LastPass™ as the primary repository of all my account credentials (which I have done for ages), and a communication method to these companies that was either a web form or an email.

Oh boy, I didn't remotely expect the layers of complexity I was facing!
You basically deal with different corporations, policies, people, and a varying amount of creativity in the way GDPR has been implemented.

 

The first roadblock is finding contact information. Most companies put it in the privacy policy, legal info, or FAQ. If I could not find anything, I used their customer support. That happened quite often!

Some companies replied within a day or so with a simple confirmation like, "We initiated the process, but it can take up to 30 days until all your data is gone."

 

Sometimes it took a while to get a response, but that is fine. Here’s how I imagine some of those GDPR processes look:

A contact center works on the request first and forwards it to someone who understands what it is about but isn’t necessarily empowered to execute it. A ticket then gets forwarded to IT, IT starts the deletion, and the whole thing gets routed back.

 

Two organizations asked for reasons, and I replied with, "I would like to express my rights as a European citizen." (I am German, after all, no need to be overly friendly!) And that worked, no more questions asked.

 

Two companies asked for verification of my identity, and sure, they are right to do so!
GDPR includes not only the right to be forgotten, but also the right to retrieve a copy of all your data, so there had better be a mechanism to ensure they only talk to authorized persons.

 

One of these two sent me a short PDF to sign and finally rang me. Quick and painless.

The other one, unfortunately, escalated quickly. The company asked for a copy of my passport and a utility bill, and required me to return a questionnaire. Charming!

 

I did a little Googling and found websites explaining that companies, in general, need to verify who they are talking to, but the effort should be proportionate to the data already stored.
I consider my passport and my electricity bill of higher value than my name and one of my email addresses.

 

What to Do if a Company May be in Violation of GDPR

 

Each European country runs an organization dealing with privacy and data protection. For me, in Ireland, it is the Irish Data Protection Commission. I raised a concern with them, and we will see what happens next.

 

Now to a bad example!

 

I received a "newsletter" from a company and replied with my usual request. No response received other than another newsletter two days later.

On their website, I found legal@companyname.com, and I sent an email. They didn’t reply, but guess what? I received another newsletter a day later. Spam leads to anger, anger leads to…well, you know your Yoda.

So, I went to their website again and looked up the management team.

My next email went to firstname.lastname@ of the CEO, the complete board of directors, legal@, and abuse@.

 

Now guess what—I received a response within a day!
Not a friendly one, but it contained my requested confirmation, and I haven’t heard anything since.

This is an example of “no process in place” or perhaps even “oops, GDPwhat?”

 

The result of my test is that almost all companies appear to have done a good job implementing GDPR.
Some surely need fine-tuning, and it definitely should be easier to find the responsible person or team and get in touch with them directly.

 

On a side note, I seriously improved my security rating over at LastPass.

 

 

 

© 2018 SolarWinds Worldwide, LLC.  All rights reserved.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Federal IT professionals spend a lot of time working on optimizing their IT infrastructures, so it can understandably be frustrating when agency policies or cultural issues get in the way. Unfortunately, responses to a recent SolarWinds North American Public Sector survey indicate that this is often the case. Forty-four percent of IT professionals surveyed who claimed their environments were not optimized cited inadequate organizational strategies as the primary culprit. This was followed closely by insufficient investment in training, which was mentioned by 43% of respondents.

 

Managers and their teams must work together to bridge the knowledge divide that exists within agencies. Top-level managers must find ways to communicate their organizational strategies, so that teams can map their activities toward those objectives. Simultaneously, agencies and individuals should consider ways to improve knowledge sharing and training, so that everyone has the skills to do their jobs.

 

Don’t let communication be a one-time event

 

Town hall meetings or emails declaring an organizational change or new priorities are fine but are rarely sufficient on their own. Agency IT leaders should consider implementing systematic methods for communicating and disseminating information to ensure that everyone understands the challenges and opportunities and can work toward strategic goals. The strategy must be sold by the leadership, bought into by middle management, and actively and appropriately pursued by the overall workforce.

 

Understand that training is everyone’s responsibility

 

A busy environment encourages a “check the box” mentality when it comes to training. People will often do the minimum required to learn new material, skipping any extra steps that could immerse them in new technologies or trends like operational assurance and security, even though training can have a remarkably positive impact on efficiency.

 

The Defense Department Directive 8570 certification is a good example of an agency initiative that puts a premium on the importance of training and expertise. The certification requires a baseline level of knowledge of computer systems and cybersecurity, and continuing education units must be earned and submitted on a regular basis. DOD 8570 requires that all employees with access to DOD information systems maintain a basic level of knowledge, helping ensure they’ll be up to speed on the technologies that impact those systems.

 

Self-training can be just as important. IT professionals should use the educational allowances allocated to them by their agencies. They should take the time to learn about the technologies they already have in house, but also examine other solutions and tools that will help their departments become fully optimized. Vendors will be more than willing to help out through support programs and their own educational tools, including certification and training programs, online forums, and other offerings.

 

According to our survey, a knowledge and information-sharing gap does exist within federal IT environments. Applying the practices mentioned above should help shrink that gap and create more knowledgeable and optimized environments for all federal IT professionals.

 

Find the full article on Government Computer News.

 



Words Matter

Posted by Ryan Adzima Nov 27, 2018

One of my biggest pet peeves with AI is the word itself. Words matter, especially in IT. Whether you’re trying to nail down a scope of work for a project, troubleshooting an outage, or even trying to buy a product, if everyone isn’t using the same terminology, you’re headed for trouble. You could even potentially be headed for a disaster. In my day-to-day work I have to deal with this constantly.

 

The wireless world is rife with fractured vernacular and colloquialisms. Words like "WAP" and "coverage" can be confusing, misleading, or even frowned upon in the industry. Even a word as simple as "survey" can cause major issues when talking about what work needs to be done. For a team of cable installers, this can mean a site visit to determine cable paths. For a wireless engineer, the same word could mean a site walk to assess the current wireless situation or the validation of design that has yet to be deployed (typically by doing what’s called an AP-on-a-stick or APoS survey). Often before heading out to perform a job, I will set up a meeting to review the information and level-set on terminology. What does all this confusion in Wi-Fi have to do with AI? Like I said, the word itself is what pains me.

 

Artificial Intelligence has a different definition depending on who you ask, but it seems to bring the same thoughts and ideas to mind for everyone: typically HAL 9000, WOPR, and more recently TARS along with any other movie AI. A system capable of thinking like a human but with instantaneous access to information beyond our wildest dreams. Yet, in the marketplace of tools dubbed AI, there isn’t a single "intelligent" system to be seen. To me, Artificial Intelligence is a general knowledge capable of simulating human thought and response by using vast amounts of experience and seemingly non-related information to make those decisions. As I posited in my past posts, we’re not even close. AI is merely making decisions for us in our networks based on the inputs of its creators (developers).

 

Now why does this bother me so much? What’s wrong with calling advanced machine learning "artificial intelligence?" As we continue to use machine learning and approach true AI without drawing a line in the sand about what it really is and isn’t, the definitions will blur. The use cases, costs, and abilities are vastly different, and the similarities will begin to fade. As that happens, the machine learning systems that corporations are buying now will become antiquated and obsolete, while your customers (or even your executives) scream about all the money and time invested, now wasted. This isn’t a simple case of aging tech, like old firewalls not capable of doing the same thing as the current generation. This is a fundamentally different issue. It’s like comparing a first-generation Ethernet hub capable of moving 10 Mbps across a shared medium to the new set of core switches available today with layer 3 functions, firewall capabilities, and so much more. Machine Learning is a small subset of Artificial Intelligence and only able to perform a small subset of the tasks. Calling Machine Learning "AI" is imprecise and leads to confusion.

 

I’d argue that there are maybe three or four systems in place (that we know of) that could be considered AI and I promise you, no one is selling you one of those. Amazon Alexa, Apple Siri, IBM Watson, and whatever Google has dubbed their intelligent system are just about the only intelligent machines out there. And the argument could still be made that these systems aren’t intelligent and are merely working off of vast training sets too large to house in a typical data center. Until we have machines with awareness of more than just the training sets provided to them, let’s stick to calling it Machine Learning.

It is often observed that, "The practice of writing begets more writing," which I, at least, have certainly found to be true. But more than that, the act of writing creates connections to readers (not to mention other writers) in unexpected and delightful ways. Perhaps this is because writing is always personal, even when giving over nothing but relatively dry facts and processes. There's always a perspective, a point of view, buried in the most mundane of procedures. So how much more so when the topic is something deeply and specifically personal?

 

Which is why so many members of the THWACK® community look forward to this time of year: for the chance to read, and even participate, in the December Writing Challenge.

 

That's not just idle speculation or opinion. As with all things at SolarWinds, we have solid facts and data to back up that observation. Last year the 2017 Challenge attracted:

  • 32 days of posts by a select group of 26 authors (including 12 THWACK MVPs).
  • 41,000 views
  • 1,700 comments
  • Over 255,000 THWACK points awarded

 

But beyond the raw figures, the challenge opened a window into the private lives and personal thoughts of the participants. We read about hopes and dreams, successes and setbacks. Each day’s entry allowed us to catch a glimpse of the person behind each THWACK ID and avatar.

 

This year will be no different, even as the format changes slightly. Rather than a new word each day, the 2018 Challenge features a single writing prompt:

 

“What I would tell my younger self.”

 

Each day, a featured writer (whether from the SolarWinds staff or our THWACK community) will share their thoughts, and the community is then encouraged to reply with thoughts, comments, or advice of their own. Participants will earn THWACK points (2,000 for writing the featured article, 200 for commenting).

 

At the end of the week, a summary article on Geek Speak™ will highlight some of the more engaging contributions.

 

Because a society without rules tends to descend into chaos (or in the case of THWACK, a passionate debate about who is the greatest starship captain of all time), let me clarify how this will work:

  • Each day a select author will post to the 2018 Writing Challenge Forum, which you can find here.
  • The post will appear at (roughly) 12:01 a.m. CT (GMT -6).
  • Once that post appears, the community is encouraged to offer their thoughts in the comments.
    • Commenting will earn you 200 THWACK points.
    • One comment per person per day will earn points.
    • You are free to continue to comment but points are earned only for the first comment per day.
  • You have until midnight U.S. CT (GMT -6) to comment.
  • For weekend posts, you have until Monday at midnight U.S. CT to comment for the Saturday and Sunday posts. That way, people who take their weekends seriously are not penalized.
  • If you have questions, feel free to post them in the comments below.

 

So sharpen your pencils, gather your thoughts, and get ready. Because December 1st is only 5 days away!

Welcome to another edition of the Actuator. This week we will celebrate Thanksgiving here in the USA. I hope that wherever you are reading this, you have the opportunity to be surrounded by family and friends and share a meal.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Nordstrom Blames Breach of Employee Data on Contractor

Nice reminder that it’s important to hire data professionals who understand the basics of data security and privacy. It’s also a nice reminder that a giant corporation will throw you under a bus when given the chance.

 

Spectre, Meltdown researchers unveil 7 more speculative execution attacks

It’s time we stop thinking we will avoid attacks. Better to adopt the "assume compromise" line of thinking, and work on how to handle incidents when they happen. Because they will.

 

Japanese Cybersecurity Minister Admits He Has Never Used a Computer

If you aren't on the grid, you can't be hacked. Brilliant!

 

Dutch Report Slams Microsoft for GDPR Violations in Office

I’m OK with companies collecting telemetry in order to make their products better. I am not OK when that telemetry contains personal data. Microsoft can do better than this, and I trust they will take steps to clean up how their telemetry is done.

 

Your Private Data Is Quietly Leaking Online, Thanks to a Basic Web Security Error

Everything is terrible.

 

Sagrada Familia agrees €36 million payment after building for 136 years with no permit

Crazy to think that a building has been under construction for 136 years, never mind that it never had a permit.

 

Michael Bloomberg Will Donate $1.8 Billion To Johns Hopkins University

“No qualified high school student should ever be barred entrance to a college based on his or her family’s bank account.” This, so much this.

 

There was no turkey at the first Thanksgiving; they most likely ate lobster. Enjoy!

 

Hopefully, you have been following along with this series and finding some useful bits of information. We have covered traditional APM implementations in part one, part two, and part three. In this post, we will begin looking at how implementing APM in an agile environment is much different than a traditional environment. We will also look at how APM is potentially much easier and more precise when implementing in an agile environment.

 

Differences between Traditional and Agile

 

From what we have covered up to this point, it should be very clear what we are talking about when we refer to traditional APM implementations and what we have referenced as “after-the-fact implementation.” This is when APM is implemented after the environment has already been built, which includes the application being in production. Again, this scenario is more than likely what most are familiar with.

 

So, how does an agile environment differentiate itself from an APM perspective? When implementing APM in an agile environment, all the components related to APM are implemented iteratively throughout the agile process and lifecycle. What this means is that as we build out the environment for our application, we are at the same time implementing the APM constructs required for accurately monitoring application health. By doing so, we can effectively ensure that we have identified all the dependencies that can affect our application's performance. It also means we can implement the correct monitoring, alerting, etc., to identify any bottlenecks we may experience. This includes the application itself, so we can spot any potential issues introduced by the iterated versions of the application along the way. Another important thing we can establish along the way is a true baseline of what we should consider normal application performance.
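One of the simplest APM constructs to build in from the first sprint is latency measurement. Here is a minimal sketch (plain Python; the function and the workload are invented) of a decorator that records how long each call takes, which is enough to start establishing that baseline of normal performance.

```python
# A sketch of building a performance baseline as you build the application:
# a decorator that times each call and keeps simple statistics per function.
import time
from collections import defaultdict
from statistics import mean

_timings = defaultdict(list)

def instrumented(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            _timings[func.__name__].append(elapsed_ms)
    return wrapper

@instrumented
def lookup_order(order_id):           # invented example function
    time.sleep(0.01)                  # stand-in for real work
    return {"order_id": order_id}

for i in range(20):
    lookup_order(i)

for name, samples in _timings.items():
    print(f"{name}: avg {mean(samples):.1f} ms over {len(samples)} calls")
```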

 

Application Dependency Mapping

 

Adding to what we mentioned above about identifying all the dependencies that can affect our application's performance: while we are building out our application, we should be mapping out all of those dependencies. These dependencies include things we have mentioned previously, such as load balancers, hypervisors, servers, caching layers, databases, etc. Identifying these components should be much easier in an agile environment, because we are managing the application's lifecycle as we build. By effectively mapping out these dependencies as we go, we can implement the proper monitoring to identify issues early. Equally important, if at some point during our application's lifecycle we decide to implement something new or change something, we can easily adapt our monitoring at the same time. Now, you may be thinking that this could be accomplished in a traditional method as well. While that is absolutely true, the chances of forgetting something are much higher. This is not to say that an agile environment is not equally susceptible to forgetting something, but because we are continually iterating throughout the implementation, those chances are kept to a minimum.

 

Scenario

 

So, to put these thoughts into perspective, let's look at a quick scenario:

We are working in an agile environment, which means we are hopefully iterating on our application every 2-4 weeks in sprints. After our last sprint release, we decided that it would make sense to implement a message bus for our application to communicate over. We decided this because we identified an application performance issue when making certain calls between our application stacks, and we have the performance data to back that up. So, we have decided that during this next sprint, we will implement this message bus in hopes of resolving the performance issue that was identified. After doing so, we can observe through our APM solution that we have absolutely resolved that issue, but we have also uncovered additional issues, based on our application's dependency mappings, that are affecting another tier in our application stack. We are now able to continually iterate through our application's dependencies to increase performance throughout the lifecycle of our application.

 

Conclusion

 

As you can see, implementing APM in an agile environment can absolutely increase the accuracy of our application performance monitoring. Again, this does not mean that APM, in a traditional sense, is not effective. Rather, in an agile environment, we are easily able to adjust as we go throughout the lifecycle.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Artificial intelligence (AI) is coming. Contrary to the stuff of science fiction, however, AI has the potential to have a positive impact within the federal IT community. The adoption of AI will likely be the result of the adoption of hybrid and cloud IT computing.

 

AI is not new. While highly effective, AI has historically had a long adoption timeframe—similar to other excellent technologies waiting for the perfect use case within day-to-day IT environments. Many believe that’s about to change. Public sector investment in AI is expected to rise rapidly in the coming years. According to the 2018 SolarWinds North American Public Sector IT Trends report, more than a third of surveyed public sector IT pros predict that AI will be among some of the biggest technology priorities in three to five years.

 

Signs point to hybrid IT and cloud adoption as primary factors in the rise of AI adoption. In fact, the two can have a synergistic relationship, since AI can enhance the capabilities provided through a hybrid IT or cloud environment.

 

AI platforms

 

One of the great things about cloud is its ability to serve as a platform for federal IT pros to acquire and use technologies as a service, rather than buying them outright. Applications, storage, infrastructure—all of these are now available as a service.

 

AI is no different. Each major cloud provider offers its own machine learning as a service (MLaaS) platform, which will let third-party AI application developers build their smart applications on each of these cloud platforms. With the availability of AI platforms comes the opportunity to “let someone else” handle the intricacies of creating AI applications—which may lead to a wide variety of new AI-based applications.

 

There are two more advantages of cloud that present an environment ripe for AI: abundant computing capacity and access to vast amounts of data. Abundant capacity means applications have the room to use as much computing power as necessary to accomplish highly complex computing algorithms; access to vast amounts of data means applications have the information necessary to use those complex algorithms to deliver far more “intelligent” information. The network making its transition to Software-Defined Everything can allow AI to use additional resources when necessary and return that capacity when it’s finished with complex issues. Templates, policies, and dynamic scaling are designed to make this more than possible—it becomes simple.

 

The final advantage of this great convergence of technologies is AI’s role in managing this highly intelligent environment. Take the Internet of Things (IoT) for example. AI has the potential to allow for a dramatically enhanced ability to manage things that to date have been difficult to manage or even track. Taking that scenario even further, the intelligence and data analytics behind AI may also provide the ability to implement far more broad-reaching automation.

 

With automation comes greater efficiency and more opportunity for innovation. I’d call that a win-win.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 


With the popularity of Agile methodologies and the ubiquity of people claiming they were embracing NetOps/DevOps, I could swear we were supposed to have adopted a new silo-busting software-defined paradigm shift which would deliver the critical foundational framework we needed to become the company of tomorrow, today.

Warning: Bull-dung Ahead

 

Brace For Cynicism!

 

I recently discussed Automation Paralysis and the difficulties of climbing the Cliffs of Despair from small, uni-functional automation tools at the bottom (the Trough of Small Successes) to larger, integrated tools at the top (the Plateau of Near-Completion). The way I see it, in most cases, the cliffs are being climbed individually by each of the infrastructure sub-specialties (network, compute, storage, and security), and even though each group goes through similar pains, there's no shared experience here. Each group works through its own problems on their own, in their own way, using their own tools.

 

For the sake of argument, let's assume that all infrastructure teams have successfully scaled the Cliffs of Despair with only a few casualties along the way, and are making their individual base camps on the Plateau of Near Completion. What happens now? What does the company actually have? What the company has is four complex automation products which are most likely totally incompatible with one another.

 

Introducing the Silo Family!

 

IT Silos: Network, Compute, Storage, Security

 

If I may, I'd like to introduce to you all to the Silo Family. While they may not show up in genetic test results, I'll wager we all have relatives in these groups:

 

NetBeard

Netbeard Picture

 

 

 

 

 

 

 

NetBeard is proud of having automated a large chunk of what many said could not be automated: the network. Given the right inputs, NetBeard's tools can save many hours by pushing configs out to devices in record time. NetBeard's team was the first one in the world to ever have to solve these problems, and it was made all the more difficult by the fact that there were no existing tools available that could do what was needed.

 

Compute Monkey

Computer Monkey Picture

Looking more confident than the rest of the cohort, Compute Monkey can't understand what the fuss is all about. Compute Monkey's servers have been Puppeted and Cheffed and Whatever-Elsed for years, and it's an open secret that deploying compute now requires little more than a couple of mouse clicks.

 

StoreBot

Storebot Picture

StoreBot is pleasant enough, but while everybody can hear the noises coming out of StoreBot's mouth, few have the ability to interpret what it all means. If you've ever heard the teacher talking in the Peanuts TV cartoon series it's a bit like that: Whaa waawaa scuzzy whaaaa LUN wawabyte whaaaaw.

 

Security Fox

SecurityFox Picture

Nobody knows anything about Security Fox. Security Fox likes to keep things secret.

 

Family Matters

 

The problem is, each group works in a silo. They don't collaborate with automation, they don't believe that the other groups would really understand what they do (come on, admit it), and they keep their competitive edge to themselves. I don't believe that any of the groups really means to be insular, but, well, each team has knowledge, and to work together on automation would mean having to share knowledge and be patient while the other groups try to understand what, how, and why the group operates the way it does. And once somebody else understands that role, why should they be the ones to automate it? Isn't that automating another group out of a job? Ultimately, I am cynical about the chances of success based on most of the companies I've seen over the years.

 

However, if success is desired, I do have a few thoughts, and I'm sure that the THWACK community will have some too.

 

Bye Bye, Silos

 

Getting rid of silos does not mean expecting everybody to do everything. Indeed, expertise in each technology in use is required just as it was when the organization was siloed. However, merging all these skills into a single team does mean that it's possible to introduce the idea of shared fate, where the team as a whole is responsible – and hopefully rewarded – for achieving tighter integrations between the different technologies so that there can be a single workflow.

 

Create APIs Between Groups

 

If it's not possible to unite the teams, and especially where there is a legacy of automation dragged up to the Plateau, make that automation available to fellow teams via APIs, and the other teams should do the same in return. That way each team gets to feel accomplished and maintains their expertise, management team, and so on, but now automation in each group can use, and be used by, automation from other groups. For example, when deploying a server based on a request to the Compute group, wouldn't it be nice if the Compute group's automation obtained IPs, VLANs, trunks, etc., via an API provided by the Network group? Storage could be instantiated the same way. Everybody gets to do their own thing, but by publishing APIs, everybody gets smarter.
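Here is a minimal sketch of what such an inter-team API could look like (assuming Python with Flask and requests; the endpoint, fields, and address pool are all invented): the Network group publishes an IP allocation endpoint, and the Compute group's server-build automation calls it instead of emailing a spreadsheet.

```python
# A sketch of an API between silos. The Network team runs this service;
# the Compute team's build automation calls it. Endpoint name, fields, and
# the address pool are invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
_next_host = 10   # toy allocator state

@app.route("/v1/allocations", methods=["POST"])
def allocate():
    global _next_host
    req = request.get_json()
    _next_host += 1
    return jsonify({
        "hostname": req["hostname"],
        "vlan": 120,
        "ip": f"10.20.30.{_next_host}/24",
        "gateway": "10.20.30.1",
    })

if __name__ == "__main__":
    app.run(port=5000)

# The Compute group's side would then be as simple as:
#   import requests
#   alloc = requests.post("http://netapi.example.net:5000/v1/allocations",
#                         json={"hostname": "app-server-42"}).json()
#   print(alloc["ip"], alloc["vlan"])
```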

 

Go Hyperconverged

 

Hyperconvergence is not only Buzzword Approved™, but for some it's the perfect workaround for having to create all this automation in a bespoke fashion. Of course, with convenience comes caveat, and there are quite a few to consider, perhaps including:

  • Vendor lock-in (typically only vendor-approved hardware can be used)
  • Solution lock-in (no choice but to run the vendor's software)
  • Delivers a one-size-fits-most solution, which is good if you're that size
  • May not be able to customize to particular needs if not supported by the software

 

I'm not against hyper converged infrastructure (HCI) by any means, but it seems to me that it's always a compromise in one way or another.

 

Use Another Solution

 

Why write all this coordinated automation when somebody else can do it for you? Well, because somebody else might not do it quite the way you had in mind. I mean, why not spin up some OpenStack in the corporate DC? OpenStack has a component for everything, I hear, including compute, storage, network, vegan recipes, key management, 18th century French poetry, and orchestration. OpenStack can be incredibly powerful, but last I heard it's really not fun to install and maintain for oneself; it's much nicer to let somebody else run it and just subscribe to the service; sounds a bit like cloud doesn't it? On which note:

 

Try MISEP

 

Make It Somebody Else's Problem (MISEP). The big cloud providers have managed to de-silo their teams, or maybe they were never siloed in the first place. The point is, services like AWS are half way up the Asymptotic Dream of Full Automation; they pull together all those automation tools, make them work together, orchestrate them, then provide pointy-clicky access via a web browser. What's not to love? All the hard work is done, it's cheaper*, there will be no need to write scripts any more**, you can do anything you like***, and life will be wonderful****.

 

* Rarely true with any reasonable number of servers running

** Also very rarely true

*** I made this up

**** It won't

 

As ever, if you read between the lines, you might guess that as with HCI (another form of MISEP), such simplicity comes at a price, both literally and figuratively. With cloud services it's usually a many-sizes-fit-most model, but if what you want to do isn't supported, that's just tough luck and you need to find another way. While skills in the previous silos may be less necessary, a new silo appears instead: Cloud Cost Optimization. Make of that what you will.

 

Why The Long Face?

 

It may seem that this is an unreasonably negative view of automation – and some of it is a tiny bit tongue-in-cheek – but I have tried to highlight some of the very real challenges standing in the way of a beautifully cost-efficient, highly agile, high-quality automated network. Wait, that's reminding me of something, and allows me to make one last dig at the dream:

 

Pick Two: Cheap, Fast, Good

 

We can get there. At least, we can get much of the way there, but we have to break out of our silos and start sharing what we know. We also need to go into this with eyes wide open, an understanding of what the alternatives might be, and a reasonable expectation of what we're going to get out of it in the end.

Earlier this year I found myself in one of those dreaded career/professional ruts. My job has been extremely busy with new projects over which I had no control, talks of a big upcoming merger, and lots of annoying issues, both technical and non-technical, that have put a real drain on me. So, come summer, I found myself with little energy and even less inspiration while I was in the office. It’s unlike me, but truth be told, I’ve fallen victim to this dilemma a few times over my 25+ year IT career. However, with each passing birthday, I found it harder to bounce back and return with the same gusto. On many a car ride to and from the office, I found myself thinking about why I felt like I was stuck in quicksand, and in the end I would come to no reasonable conclusion.

 

In late August, my family and I took a much-needed vacation. My wife and I decided to treat our two kids to an old-fashioned family road trip. We drove from Baltimore to Atlanta, minimized use of electronic devices, and stopped to see all the interesting things on the right side of the road, repeating the same process on the drive home. (Why Atlanta you may ask? For starters, the Georgia Aquarium is amazing. Their main tank holds four whale sharks!) By the time we made it to Atlanta, we had found 34 of the 50 state license plates, as well as plates for D.C., Mexico, and Ontario, Canada. (We finished the trip with 41 plates total.)

 

We spent four days in Georgia seeing the sights, the parks, the museums, and some friends. The whole family had a great time. But I was still unable to shake the feeling that I was “stuck” in my job. Driving to Atlanta you spend a lot of time thinking, and I did just that. I asked the same old questions, “Is it me? Is it my boss? Is it my staff? Am I missing the big picture here?” No answer made sense and I found myself no closer to finding peace of mind. My brain was beginning to feel like oatmeal as I processed the same questions and scenarios. Then I analyzed other people around my age who have had similar career paths in IT to see how I measured up. There are many far ahead of me, but there are almost an equal number behind me in terms of success. What I realized is that I wasn’t as passionate about IT as most. “Have I officially become an old man? Am I starting to resist all this change in IT and the constant expectation of learning new technology? Is my subconscious trying to protect me for fear that I might not have the energy to keep up?” These are scary realizations, because to stand in place in IT and not accept change is career suicide.

 

Now I love what I do, and I love the position I hold. I have the freedom to be creative with my SolarWinds platform to tell my story. I’m not technical anymore, but I know the technology very well. And I can translate the technical into potential business outcomes. I meet with teams frequently and I start off by asking, “How can SolarWinds make your life easier?” I follow-up with “Did you know that SolarWinds can do…” Doing this and helping people always leaves me with a profound sense of satisfaction.

 

Driving back home through northeast Georgia, I saw a road sign for the town of Toccoa. “Honey! We need to make a detour for about two hours!” I told my wife. Right outside Toccoa is Currahee Mountain. This is the mountain the original U.S. Army Airborne paratroopers trained on, and cursed about, during WWII, and was made famous in the pilot episode of the acclaimed HBO miniseries “Band of Brothers.” Full disclosure, I’m a big WWII history buff, have watched the miniseries at least 20 times, and I like to run when I can. My wife dropped me off at the start of the Col. Robert Sink Memorial Trail and said goodbye to me for what could have been the last time. I threw on my running gear and up the mountain I ran. Immediately I thought to myself, “What was I thinking? This really is a mountain! I’m in no shape for this!” “Three miles up! Three miles down!” the soldiers would curse to themselves. In reality, it was closer to 2.4. (The paratroopers started inside the camp, which is closed to the public.) I ran some of it and walked the rest. I never stopped until I got to the top. (At one point my organs began to hurt. Seriously.) Currahee is Cherokee for “Stand Alone.” This mountain is away from others, so the view in every direction is gorgeous.

 

I spent about 30 minutes alone contemplating so many things. I thought about the thousands of young men who ran up this mountain almost daily in full battle gear as they prepared to go to war, many of whom never came home. I thought about who I was, thought about my family and friends, and I thought about my career. The answers I’d been looking for didn’t come to me, but my brain was fresh. And other than my pulsating spleen and sharp kidney pains, I felt invigorated. Eventually I ran, mostly walked, back down Currahee, and I felt that sense of accomplishment that I had been missing for so long. I wasn’t stuck in quicksand anymore.

 

For me, the lesson learned from this exercise in self-inflicted agony is that the next time I get stuck in a mental rut, I shouldn’t dwell on it… but run until it hurts, and knock off a #bucketlist item while I’m at it. I’m not prescribing this remedy for everyone. But to paraphrase Winston Churchill, “If you’re going through heck, keep going!” Stop dwelling and change things up. Push yourself in another direction, challenge yourself on a different level. Identify a short-term goal in your life and reach it… and then bask in the glory of a job well done. You will assuredly see your challenges from a whole new perspective.

 

As for me, it’s been almost two months since Currahee and I’m a changed man. I like to think I escaped my rut by climbing Currahee. The view from the peak was the change in perspective I needed. My answers are now in view, even if they are still far off. And my descent renewed my energy to pursue my goals. My attitude toward my work and my career is what it used to be.

 

If you find yourself in a rut and you can’t think your way out of it – run! Or maybe walk, climb, paddle, build, or swim: whatever lifts you out of your rut and inspires you again. As Col. Sink would yell, “Currahee!”

 

 

© 2018 SolarWinds Worldwide, LLC.  All rights reserved.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

I’m 98% confident if you ask three data scientists to define Artificial Intelligence (AI), you will get five different answers.

 

The field of AI research dates to the mid-1950s, and even earlier when you consider the work of Alan Turing in the 1940s. Therefore, the phrase “AI” has been kicked around for 60+ years, or roughly the amount of time since the last Cleveland Browns championship.

 

My preferred definition of AI is this one, from Elaine Rich in the early 1990s:

“The study of how to make computers do things which, at the moment, people do better.”

 

But there is also this quote from Alan Turing, in his effort to describe computer intelligence:

“A computer would deserve to be called intelligent if it could deceive a human into believing it was human.”

 

When I try to define AI, I combine the two thoughts:

“Anything written by a human that allows a machine to do human tasks.”

 

This, in turn, allows humans to find more tasks for machines to do on our behalf. Because we’re driven to be lazy.

 

Think about the decades spent building better programs and automating traditional human tasks. We built robots to build cars, vacuum our houses, and even flip burgers.

 

In the world of IT, alerting is one example of where automation has shined. We started building actions, or triggers, to fire in response to alert conditions. We kept building these triggers until we reached a point where human intervention was needed. And then we would spend time trying to figure out a way to remove the need for a person.

 

This means if you ever wrote a piece of code with IF-THEN-ELSE logic, you’ve written AI. Any computer program that follows rule-based algorithms is AI. If you ever built code that has replaced a human task, then yes, you built AI.
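To make that concrete, here's a minimal sketch of the kind of rule-based alert handling described above, written in Python. The metric names, thresholds, and the restart_service() helper are all made up for illustration, but by the broad definition we just used, even this counts as AI.

# A minimal rule-based "AI": alert handling with plain IF-THEN-ELSE logic.
# The metric names, thresholds, and remediation action are hypothetical.

def restart_service(name):
    # Stand-in for a real remediation step (systemctl call, API request, etc.)
    print(f"restarting {name}")

def handle_alert(metric, value):
    if metric == "cpu_percent" and value > 90:
        return "page the on-call engineer"
    elif metric == "service_state" and value == "stopped":
        restart_service("web-frontend")
        return "attempted automatic restart"
    else:
        return "no action needed"

print(handle_alert("cpu_percent", 95))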

 

But to many in the field of AI research, AI means more than just simple code logic. AI means things like image recognition, text analysis, or a fancy “Bacon/Not-Bacon” app on your phone. AI also means talking robots, speech translations, and predicting loan default rates.

 

AI means so many different things to different people because AI is a very broad field. AI contains both Machine Learning and Deep Learning, as shown in this diagram:

 

 

That’s why you can find one person who thinks of AI as image classification, but another person who thinks AI is as simple as a rules-based recommendation engine. So, let’s talk about those subsets of AI called Machine Learning and Deep Learning.

 

Machine Learning for Mortals

Machine Learning (ML) is a subset of AI. ML offers the ability for a program to apply statistical techniques to a dataset and arrive at a determination. We call this determination a prediction, and yes, this is where the field of predictive analytics resides.

 

The process is simple enough: you collect data, you clean data, you classify your data, you do some math, you build a model, and that model is used to make predictions upon similar sets of data. This is how Netflix knows what movie you want to watch next, or how Amazon knows what additional products you would want to add to your cart.
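As a toy sketch of that pipeline (the viewing data below is invented, and a real recommender would use far more rows and better-chosen features), the "do some math, build a model, make predictions" part can look like this with scikit-learn:

# A toy version of the ML workflow: data in, model built, predictions out.
# The features and labels are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [minutes watched this week, number of action titles watched]
features = [[300, 8], [45, 0], [200, 5], [30, 1], [400, 10], [60, 2]]
labels = [1, 0, 1, 0, 1, 0]  # 1 = likely to watch the new action release

model = LogisticRegression()
model.fit(features, labels)       # "do some math, build a model"

print(model.predict([[250, 6]]))  # prediction for a similar new viewer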

 

But ML requires a human to provide the input. It’s a human task to define the features used in building the model. Humans are the ones who collect and clean the data used to build the model. As you can imagine, humans would rather hand off the tasks that are better suited for machines, like determining whether an image is a chihuahua or a muffin.

 

Enter the field of Deep Learning.

 

Deep Learning Demystified

The first rule of Deep Learning (DL) is this: You don’t need a human to input a set of features. DL will identify features from large sets of data (think hundreds of thousands of images) and build a model without the need for any human intervention thankyouverymuch. Well, sure, some intervention is needed. After all, it’s a human that will need to collect the data, in the example above some pictures of chihuahuas, and tell the DL algorithm what each picture represents.

 

But that’s about all the human needs to do for DL. Through the use of convolutional neural networks, DL will take the data (an image, for example), break it down into layers, do some math, and iterate through the data over and over to arrive at a predictive model. Humans will adjust the iterations in an effort to tune the model and achieve a high rate of accuracy. But DL is doing all the heavy lifting.
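For a flavor of what "break it down into layers and iterate" looks like in code, here's a minimal convolutional network sketch using Keras. The chihuahua-or-muffin image set is assumed rather than provided, the image size and layer sizes are arbitrary, and tuning the epochs is the part the human still owns.

# A minimal convolutional neural network sketch (Keras).
# Assumes 64x64 RGB images labeled chihuahua (1) or muffin (0);
# collecting and labeling that data is left to the human.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # chihuahua vs. muffin
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=10)  # the iterations we tune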

 

DL is how we handle image classifications, handwriting recognition, and speech translations. Tasks that once were better suited to humans are now reduced to a bunch of filters and epochs.

 

Summary

Before I let you go, I want to mention one thing to you: beware companies that market their tools as being “predictive” when they aren’t using traditional ML methods. Sure, you can make a prediction based upon a set of rules; that’s how Deep Blue worked. But I prefer tools that use statistical techniques to arrive at a conclusion.

 

It’s not that these companies are knowingly lying, it’s just that they may not know the difference. After all, the definitions for AI are muddy at best, so it is easy to understand the confusion. Use this post as a guide to ask some probing questions.

 

As an IT pro, you should consider use cases for ML in your daily routine. The best example I can give is the use of linear regression for capacity planning. But ML would also help to analyze logs for better threat detection. One caveat though: if the model doesn’t include the right data, because a specific event has not been observed, then the model may not work as expected.
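To make the capacity-planning example concrete, here's a minimal sketch that fits a straight line to monthly disk usage and extrapolates to estimate when the volume fills up. The usage numbers and capacity are invented; the point is just the technique.

# Capacity planning with simple linear regression (least-squares fit).
# The monthly disk-usage figures and capacity below are invented.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6])
used_gb = np.array([120, 135, 149, 166, 180, 196])  # observed usage
capacity_gb = 500

slope, intercept = np.polyfit(months, used_gb, 1)   # usage = slope*month + intercept
months_to_full = (capacity_gb - intercept) / slope

print(f"~{slope:.1f} GB/month growth; volume full around month {months_to_full:.0f}")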

 

That’s when you realize that the machines are only as perfect as the humans that program them.

 

And this is why I’m not worried about Skynet.

When I first began this post, my thinking revolved around the problem of translating a declarative approach to network operation onto the procedural interfaces we actually have to work with. Further thought dismissed that assumption and led me to think about how declarative models evolve to meet organizational needs.

 

I Declare!

 

Many articles have been written about how declarative models define what we want, while procedural models focus on how we want to accomplish this. Overall network policy should be declarative, while low-level device management should be procedural, especially when dealing with older platforms, right? Well, it's almost right.

 

Network device configuration and management have, for the most part, been declarative for some time. We don't tell switches how spanning tree works, nor how to put VLAN tags on Ethernet frames. We plug in the necessary parameters to tune these functions to our specific needs, but the switch already knows how to do these things. Even old-fashioned device-level configurations are declarative operations. Admittedly, they're not high-level functions and need some tweaking and babysitting, but they're still declarative.

 

Let's explore the timeline of where we've been, where we are, and where we're potentially going.

 

Procedural Development

 

Back in the primordial days of computing (okay, perhaps not that far back), when I was first learning to code, everything was procedural. We joked about the level of detail required to accomplish the simplest of things. What we really needed was the DWIT (Do What I'm Thinking) instruction. Alas, this particular directive still hasn't made it into any current coding environments. Developers have accepted this for the most part, and the people who are making the decisions are usually happy to let them handle it rather than trying to figure out how it all works at a low level. The development team became the buffer between the desired state and the means by which it is achieved.

 

Device-Level Configuration

 

Network device management is a step up in the abstraction ladder from procedural coding, but is still seen as a detailed and arcane process to those who aren't experienced with it. Like the developers, the networking team became the buffer, but this time it was between a high-level desired state and the detailed configuration of same. The details became a little less daunting to look at and more input came from the people making the business decisions.

 

More and more, the networking team had to be aware of the greater business cases for what we were doing so that we could better align with the overall goals of the business. We implemented business policy in the language of the network, even if we weren't fully aware of it at the time.

 

Large-Scale Models

Now, we're slowly beginning to move to full-scale network automation and orchestration. We can potentially address the network as a single process, allowing our organization's policies to map directly to the ongoing operation of the network itself. It's still pretty messy underneath the abstraction layers, but there's a lot of potential for the various automation/orchestration tools that are being developed to clean that up.

 

The Whisper in the Wires

 

Our network architecture should line up with our business processes. We've always been aware of this, but delivering on it has been difficult, mostly because we had the right idea but on too small a scale. New tools and models are being developed and refined, making it possible to finally deliver on that idea, but it's up to us to embrace them and push them to their limits. The value of the network to our organizations depends on it. By extension, our value to our organizations depends on it.

 

I don't think we're ever going to reach a point where that old DWIT instruction is a reality, but is it too much to hope that we're getting closer?

Back from VMworld Barcelona for a couple weeks before heading out to AWS re:Invent. If you are heading to re:Invent, let me know. I'd love to talk data with you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

VMware, IBM Roll Out More Hybrid Cloud Features

VMware has strong ties with AWS and IBM. I’m waiting to hear of a similar partnership with Microsoft Azure. Not sure what is left for Azure to offer VMware that IBM and AWS cannot also offer.

 

‘Keep Talkin’ Larry’: Amazon Is Close to Tossing Oracle Software

I’m a little surprised they haven’t tossed it already. They certainly don’t need anything Oracle has to offer.

 

Alarm over talks to implant UK employees with microchips

Gives new meaning to “micromanagement.”

 

An AI Lie Detector Is Going to Start Questioning Travelers in the EU

Another industry facing the loss of jobs to robots. And because humans are programming these robots, you can expect them to develop biases and use profiling, no matter how much data they collect.

 

Cyber attacks are the biggest risk, companies say

If you aren’t familiar with the phrase “assume compromise,” learn it now. Stop fooling yourself that you will be able to avoid any attack, and start thinking about how you can detect and remediate.

 

Linear Regression in Real Life

Seriously good explanation of linear regression, a simple and effective method for predictive analytics. Yes, there’s math involved in this post. Don’t be afraid, you got this. And when you’re done, you can think about how to apply this method to your current role.

 

25 years ago today the first major web browser was released: Mosaic 1.0

This was the first browser I used while doing research work at Washington State University. Get off my lawn.

 

The view from my office last week, next to Placa Espanya:

 


AI-nxiety

Posted by Ryan Adzima Nov 14, 2018

So far I’ve written about my love and hatred of what AI can do for me (or cost me) as a network owner. This post is going to cover why I’m afraid to let a machine take the wheel and use all that artificial brain power to drive my network.

 

A big joke across any field affected by AI is that Skynet and the Terminators are coming. While I don’t share that exact fear, I do wonder what the dangers are of letting an unfeeling, politically unaware machine make important decisions for the operations of my network. I’m also not in the camp that thinks AI will replace me. It could change my role a bit maybe, but nothing more in the foreseeable future. So what about AI gives me panic attacks? One word - decisions.

 

As I covered in previous posts, the ability to find and correlate information at scale is a huge benefit for anyone running a network. When an AI can scour all my logs and watch live traffic flows to find issues and alert on them, it’s a massive gain. The next logical step would be to allow the AI to run preconfigured actions to remedy the situation. Now you’ve got an intelligently automated network. But what if we go one step further and let the AI start making decisions about the network and servers? What if we let the AI start optimizing the network? I’m not talking about a simple "if this, then that" automation script. I’m talking about letting the AI actually pore over the data, devices, and history to make its own decisions about how the network should be configured, for the greater good. This is where things get a bit hairy for me. In the past I’ve used a somewhat morbid example of autonomous vehicles to make the point, and I think it’s a pretty good analogy.

 

Imagine an autonomous vehicle with you as a human passenger driving down a suburban street, humming along in what has been deemed the safest and most efficient manner. Suddenly, from behind a blind intersection, a group of children pop out on the left while simultaneously on the right a group of business people out to lunch emerge. The AI quickly scans the environment and sees only three possible outcomes.

 

  • Option 1: Save the children by driving into the group of business people.
  • Option 2: Save the business people by driving into the children.
  • Option 3: Save them both by swerving into a nearby brick wall, ultimately resulting in the passenger’s doom.

 

How is this decision made? Should the developers hardcode a command to always save the passenger? What if the toll is higher next time and it’s a school bus, or even a high-profile politician about to pass laws feeding the hungry while simultaneously removing all taxes? It’s a bit of a conundrum, and like I said, it’s a bit morbid. But it does highlight some things we can’t yet teach AI: situational knowledge, context outside of its training, and politics. Yes, politics. Every office has them.

 

Take that scenario and translate it into our world. Your network is humming along beautifully when suddenly one of your executives attempts to connect an ancient wireless streaming audio device that will have a huge performance impact on all the workers around him. The logical thing to do in this situation is to simply deny the connection of the device. Clearly the ability of the other employees to do their jobs outweighs a CxO’s ability to stream some audio on an old device, and the greater good is obviously more important. Unless it’s not. This is what scares me about AI.

 

Even though my example story may have been a bit out there, I hope it helps show you why I am afraid of what infrastructure AI will likely be poised to do in the not too distant future (if not already).

In honor of Stanley Martin Lieber, z''l

Known to most of the world as "Stan Lee"

1922-2018

 

When we first moved into the Orthodox Jewish world, we were invited to a lot of people's houses for a lot of meals. The community is very tight-knit, and everyone wants to meet new neighbors as soon as they arrive, and so it was something that just happened. Being new – both to the community and to orthodox Judaism in general – I noticed things others might have glossed over. Finally, at the third family’s home, I couldn't contain my curiosity. I asked if all the families we had visited so far were related. No, came the reply, why would I think that? Because, I explained, everyone had the same picture of the same grandfatherly man up on the wall:

 

Image source: Rabbi Moshe Feinstein, from Wikipedia

 

Our hosts were now equal parts confused and amused. "That's Rabbi Moshe Feinstein," they explained. "He's not our grandfather. He's not related to anyone in the community, as far as we know."

 

"Then why on earth," I demanded, "is his picture on the walls of so many people's houses around here?"

 

The answer was simple, but it didn't make sense to me, at least at the time. People put up pictures of great Rabbis, I was told, because they represent who they aspire to become. By keeping their images visibly present in the home, they hoped to remind themselves of some aspect of their values, their ethics, their lives.

 

 

 

 

***********************

 

Several years later I was teaching a class of orthodox Jewish twenty-somethings about the world of IT. They were learning about everything from hardware to servers to networking to coding, but I also wanted to ensure they learned about the culture of IT. It started off as well as I'd hoped. When I got to sci-fi in general and comic books specifically, I held up a picture:

Image source: You'll Be Safe Here from Something Terrible, by Dean Trippe

 

"Can you identify anyone in this picture?" I asked.

 

Their responses were especially vehement. "Narishkeit" (foolishness) said one guy. "Bittel Torah" (sinful waste of time) pronounced another. And so on.

 

"Well I can name them all," I continued. “Every single one. And you know why? Because these aren't just characters in a story. These are my friends. And at a certain point in my life, they were my best friends. At the hardest times in my life, they were my only friends."

 

Now that they could tell I was serious, the dismissiveness was gone. "But not only that," I continued. "Each character in this picture represents a lesson. A value. A set of ethics. That big green dude? He taught me about what happens when we don't acknowledge our anger. That man with the bow tie? I learned how pure the joy of curiosity could be. And the big blue guy with the red cape? He showed me that it was OK to tone down aspects of myself in some situations, and to let them fly free in others."

 

Then I explained my confusion about the Rabbis on the wall, and how this was very much the same thing, especially for a lot of people working in tech today. And to call it narishkeit was as crude and insulting as it would be to say it was stupid to put up a picture of Rabbi Moshe Feinstein when you're not even related to him.

 

Then I explained where the picture came from. How author Dean Trippe came to write "Something Terrible" in the first place. At this point, my class might not have understood every nuance of what comic books were all about, but they knew it held a deeper significance than they thought.

 

Going back to the picture, I asked, "This picture has a name. Do you know what it's called?"

 

You'll Be Safe Here.

 

That, I explained, was what comic books meant to me – and to so many of us.

 

That’s the world that Mr. Lieber – or Stan Lee, as so many knew him – helped create. That’s the lifeline he forged out of ideas and dreams and pulp and ink. That lifeline meant everything to a lot of us.

 

Ashley McNamara may have put it best: "I repeated 1st grade because I spent that whole year locked in the restroom. The only thing I had were comics. They were an escape from my reality. It was the only thing I had to look forward to and if not for Stan Lee and others I wouldn’t have made it."

 

 

The truth is that "Stan Lee" saved more people than all of his costumed creations combined.

 

And for a lot of people, that's the story. Stan Lee, the man-myth, who helped create a comic empire and was personally responsible for the likes of Spiderman, Captain America, the X-Men, the Black Panther, and so on.

 

But for me there's just a little bit more. For a Jewish kid in the middle of a Midwest suburban landscape, Mr. Lieber had one more comic-worthy twist of fate. You see he, along with his cohort – Will Eisner, Joe Simon, Jack Kirby (Jacob Kurtzberg), Jerry Siegel, Joe Shuster, and Bob Kane (Kahn) – they didn't just SAY they were Jewish. They wove their Jewishness into the fabric of what they created. It obviously wasn't overt – none of the comics were called "Amazing Tales of Moses and his Staff of God!" Nor were Jewish themes subversively inserted. It just... was.

 

Comics told stories which were at once fantastical and familiar to me: a baby put in a basket (I mean rocket ship) and sent to sail across the river (I mean galaxy) to be raised by Pharaoh (I mean Ma and Pa Kent). Or a scrawny, bookish kid from Brooklyn who gets strong and the first thing he does is punch Hitler in the face.

 

And underlying it all was another Jewish concept: “tikkun olam”. Literally, this phrase means “fixing the world” and if I left it at that, you might understand some of its meaning. But on a deeper level, the concept of tikkun olam means to repair the brokenness of the world by finding and revealing sparks of the Divine which infuse everything. When you help another person – and because of your help they are able to rise above their challenges and become their best selves – you’ve performed tikkun olam. When you take a mundane object and use it for a purpose which creates more good in the world, you have revealed the holy purpose for that object being created in the first place, which is tikkun olam.

 

When you look at the weird, exotic, fantastical details of comic books – from hammers and shields and lassos and rings to teenagers who discover what comes with great power; and outcast mutants who save the world which rejects them; and aliens who hide behind mild-mannered facades; and Amazonians who turn away from beautiful islands to run toward danger – when you look at all of that, and you don’t see the idea of tikkun olam at play, well, you’re just not paying attention.

 

Stanley Lieber showed the world (and me) how to create something awesome, incredible, amazing, great, mighty, and fantastic but which could, for all its grandeur, still remain true to the core values that it started with. In fact, in one of his "Stan's Soapbox" responses, he addressed this:

 

“From time to time we receive letters from readers who wonder why there’s so much moralizing in our mags. They take great pains to point out that comics are supposed to be escapist reading, and nothing more. But somehow, I can’t see it that way. It seems to me that a story without a message, however subliminal, is like a man without a soul. In fact, even the most escapist literature of all – old time fairy tales and heroic legends – contained moral and philosophical points of view. At every college campus where I may speak there’s as much discussion of war and peace, civil rights, and the so-called youth rebellion as there is of our Marvel mags per se. None of us lives in a vacuum – none of us is untouched by the everyday events about us – events which shape our stories just as they shape our lives. Sure our tales can be called escapist – but just because something’s for fun, doesn’t mean we have to blanket our brains while we read it! Excelsior!”

 

Excelsior indeed.

To Stanley Martin Lieber, Zichrono Livracha.

(May his memory be for a blessing)

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Here’s an interesting article from SolarWinds associate David Trossell, where he dives into cloud security concerns at the UK’s National Health Service (NHS).

 

Moving to the cloud is not a be-all-end-all security solution for NHS organisations.

 

Several press reports claim that NHS Digital now recognizes public cloud services as a safe way of storing health and social care patient data. In January 2018, an NHS Digital press statement cited Rob Shaw, Deputy Chief Executive at NHS Digital.

 

It’s hoped that the standards created by the new national guidance document will enable NHS organizations to benefit from the flexibility and cost savings associated with the use of cloud facilities. However, Shaw says: "It is for individual organisations to decide if they wish to use cloud and data offshoring and there are a huge range of benefits in doing so. These include greater data security protection and reduced running costs, when implemented effectively.” 

 

With compliance to the EU’s General Data Protection Regulation (GDPR) in mind, which came into force in May 2018, the guidance offers greater clarity on how to use cloud technologies. With a specific focus on how confidential patient data can be safely managed, NHS Digital explains that the national guidance document “highlights the benefits for organisations choosing to use cloud facilities.”

 

These benefits can include “cost savings associated with not having to buy and maintain hardware and software, and comprehensive backup and fast recovery of systems.” Based on this, NHS Digital states it believes that these “features cut the risk of health information not being available due to local hardware failure.” However, at this juncture, it should be noted that the cloud is not a one-size-fits-all solution, and so each NHS Trust and body should examine the expressed benefits based on their own business, operational, and technical audits of the cloud.

 

ROI concerns

A report by Digital Health magazine suggests that everything is still not rosy with the public cloud. Owen Hughes reports that “Only 17% of NHS trusts expect financial return from public cloud adoption.” This figure emerged from a Freedom of Information (FOI) request that was sent to over 200 NHS trusts and foundation trusts by the Ireland office of SolarWinds, an IT management software provider. The purpose of this FOI request, which received a response from 160 trusts, was to assess NHS organisations’ plans for public cloud adoption.

 

“The gloomy outlook appears to stem from a variety of concerns surrounding the security and management of the cloud: 61% of trusts surveyed cited security and compliance as the biggest barrier to adoption, followed by budget worries (55%) and legacy tech and vendor lock-in, which each scored 53%,” writes Hughes.

 

Key challenges

The research also found that the key challenges the trusts faced in managing cloud services were determining suitable workloads (49%) and a perceived lack of control over performance (47%). Another concern, cited by 45% of respondents, was how to protect and secure the cloud.

 

To be continued…

 

Find the full article on ITProPortal.

 


Young girl holding a pen. Photo by Les Anderson on Unsplash

Recently, my friend Phoummala Schmitt, aka “ExchangeGoddess” and Microsoft Cloud Operations Advocate, wrote about her struggles with imposter syndrome (https://orangematter.solarwinds.com/beating-imposter-syndrome/). It's a good read that I highly recommend. But one element of it stuck with me, like an itch I couldn't quite reach.

 

I knew this itch wasn't that someone as obviously talented and accomplished as Phoummala would experience imposter syndrome. It's been well-documented that some of the most high-achieving folks struggle with this issue. It wasn't even the advice to "strike a pose" even though—because I work from home—if I did that too often my family might start taking pictures and trolling me on Twitter.

 

No, the thing that I found challenging was the advice to "fake it."

 

Now, to be clear, there's nothing particularly wrong with adopting a “fake it till you make it” attitude, if that works for you. The challenge is that for many folks, it reinforces exactly the feelings that imposter syndrome stirs up. The knowledge that I am purposely faking something can work against the ultimate goal of me feeling comfortable in my own skin and my own success.

 

Then I caught a quote from Neil Gaiman that went viral. The full post is here (http://neil-gaiman.tumblr.com/post/160603396711/hi-i-read-that-youve-dealt-with-with-impostor), but the part that really caught my eye was this sentence:

 

"Maybe there weren’t any grown-ups, only people [...] doing the best job we could, which is all we can really hope for."

 

Maybe there weren't any grown-ups.

 

This gave me the nugget of an idea. If nobody is actually an adult, then what are we? The obvious answer is that we're still kids wearing grown-up suits. We're all playing pretend.

 

Yes, I know, "playing pretend" is almost the same as "faking it"—except, not really.

 

When you play pretend you acknowledge the reality that Mrs. Finklestein is really a bear wearing your wig, the necklace you stole is out of Mom's jewelry box, and that there's no tea in the cup—but you simply opt to not focus on that part. You’re focusing on how Mrs. Finklestein just told you the most interesting bit of neighborhood gossip, and that this tea is just the right temperature and delicious. When you play pretend, a magical transformation occurs.

 

The movie Hook had a lot of drawbacks, but this scene captures the wonder of imagination pretty well.

 

Imagination can carry us to an important place. A place where we give ourselves permission to go with our craziest guesses, or invest fully in our weirdest ideas. To explore our wildest ramblings and see where it all leads. And more importantly, imagination allows us to run down rabbit holes to a dead end without regret. With imagination, it truly is the journey that matters.

 

I remember a teacher talking about one of her best techniques for helping students get "un-stuck." When a student would say "I don't know," she would respond, "Imagine you did know. What would you say if that was true?" Sometimes, imagining ourselves in a position of knowing is all it takes to knock a recalcitrant piece of knowledge loose.

 

As adults, we may feel that imagination is something we set aside long ago. That may be true, but it wasn't to our benefit.

 

As Robert Fulghum wrote:

 

"Ask a kindergarten class, ‘How many of you can draw?’ and all hands shoot up. Yes, of course we can draw—all of us. What can you draw? Anything! How about a dog eating a fire truck in a jungle? Sure! How big you want it?

 

How many of you can sing? All hands. Of course we sing! What can you sing? Anything! What if you don't know the words? No problem, we make them up. Let's sing! Now? Why not!

 

How many of you dance? Unanimous again. What kind of music do you like to dance to? Any kind! Let's dance! Now? Sure, why not?

 

Do you like to act in plays? Yes! Do you play musical instruments? Yes! Do you write poetry? Yes! Can you read and write and count? Yes! We're learning that stuff now.

 

Their answer is ‘Yes!’ Over and over again, ‘Yes!’ The children are confident in spirit, infinite in resources, and eager to learn. Everything is still possible.

 

Try those same questions on a college audience. A small percentage of the students will raise their hands when asked if they draw or dance or sing or paint or act or play an instrument. Not infrequently, those who do raise their hands will want to qualify their response with their limitations: ‘I only play piano, I only draw horses, I only dance to rock and roll, I only sing in the shower.’

 

When asked why the limitations, college students answer they do not have talent, are not majoring in the subject, or have not done any of these things since about third grade, or worse, that they are embarrassed for others to see them sing or dance or act. You can imagine the response to the same questions asked of an older audience. The answer: no, none of the above.

 

What went wrong between kindergarten and college?

 

What happened to ‘YES! Of course I can’?"

(excerpted from “Uh-Oh: Some Observations from Both Sides of the Refrigerator Door” by Robert Fulghum)

 

So, I want to fuse these ideas together. Ideas that:

  • We sometimes feel like imposters, about to be discovered for the frauds we feel we are
  • "Fake it till you make it" doesn't go far enough to help us avoid those feelings
  • Maybe none of us are actually grown-ups, but instead are still our childlike selves, all acting the part of adults
  • Imagination is one of our most powerful tools to get past our rigid self-image and gives us permission to playact
  • And that the childlike ability to say "YES, of course I can" is infinitely more valuable than we might have once thought

 

Maybe we need to take to heart what Gaiman said. There aren't any grown-ups. Every adult you know is a little kid wearing a big-person suit, muddling along and hoping nobody notices. But we need to take it to heart, accept it, and own it. Own the fact that we're little kids. Reclaim the brash, the bold, the brazen selves we used to be. When you’re experiencing an attack of self-doubt, I encourage you to imagine you’re 8 years old—your 8-year-old self—doing the same task. How would that kid go about it?

 

Sure, in the years since then we've all had a few scrapes and bumps.

 

But that doesn't mean we should stop imagining what it would be like to fly.

Are configuration templates still needed with an automation framework? Or does the automation policy make them a relic of the past?

 

Traditional Configuration Management: Templates

 

We've all been there. We always keep a backup copy of a device's configuration, not just for recovery, but to use as a baseline for configuring similar devices in the future. We usually strip out all of the device-specific information and create a generic template so that we're not wasting time removing the previous device's details. Sometimes we'll go so far as to embed variables so that we can use a script to create new configurations from the template and speed things up, but the principle remains the same: We have a configuration file for each device, based on the template of the day. The principle has worked for years, only complicated by that last bit. When we change the template, it's a fair bit of work to regenerate the initial configurations to comply, especially if changes have been made that aren't covered by the template. Which almost always happens because we're manually making changes to devices as new requirements surface.
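For example, the "embed variables and render with a script" step often looks something like this Jinja2 sketch. The template text, hostnames, and addresses are simplified stand-ins for a real baseline configuration.

# Rendering device configurations from a generic template (Jinja2).
# The template contents and device values are simplified stand-ins.
from jinja2 import Template

template = Template(
    "hostname {{ hostname }}\n"
    "interface Vlan{{ mgmt_vlan }}\n"
    " ip address {{ mgmt_ip }} {{ mgmt_mask }}\n"
)

devices = [
    {"hostname": "sw-branch-01", "mgmt_vlan": 10,
     "mgmt_ip": "10.1.10.11", "mgmt_mask": "255.255.255.0"},
    {"hostname": "sw-branch-02", "mgmt_vlan": 10,
     "mgmt_ip": "10.1.10.12", "mgmt_mask": "255.255.255.0"},
]

for device in devices:
    print(template.render(**device))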

 

Modern Configuration Management: Automation Frameworks

 

Automation (ideally) moves us away from manual changes to devices. There's no more making ad hoc changes to individual devices and hoping that someone documents it sufficiently to incorporate it into the templates. We incorporate new requirements into the automation policy and let the framework handle it. One of those requirements is usually going to be a periodic backup of device configurations so that a current copy is available to provision new devices, which amounts to using automation to create static configurations.

 

One Foot Forward and One Foot Behind

 

Templates and the custom configuration files built from them are almost always meant to serve as initial configurations. Once they're deployed and the device is running, their role is finished until the device is replaced or otherwise needs to be configured from scratch.

 

The automation framework, on the other hand, plays an active role in the operation of the network while each device is running. Until the devices are online and participating in the network, automation can't really touch them directly.

 

This has led to common practice where both approaches are in play. Templates are built for initial configuration and automation is used for ongoing application of policy.

 

Basic Templates via Automation Framework

 

Most organizations I've worked with keep their templates and initial device configurations managed via some sort of version control system. Some will even go so far as to have their device configurations populated to a TFTP server for new systems to be able to boot directly from the network. No matter how far we take it, this is an ideal application for automation, too.

 

We can use automation to apply policies to templates or configuration files in a version control system or TFTP repository just as easily (possibly even more so) as we can to a live device. This doesn't need to be complex. Creating initial configuration files using the automation framework so that new devices are reachable and identifiable is sufficient. Once the device is up and running, the policy is going to be applied immediately, so there's no need to have more than the basics in the initial configuration.
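Here's a minimal sketch of that idea, assuming a directory that sits under version control (and perhaps gets synced to a TFTP server): the framework writes only enough configuration for a new device to come up reachable and identifiable, and the full policy is applied once it's online. The paths, naming convention, and config contents are assumptions for illustration.

# Generate minimal bootstrap configs into a version-controlled directory.
# The directory layout, naming, and config contents are assumed for illustration.
from pathlib import Path

BOOTSTRAP = (
    "hostname {name}\n"
    "interface Vlan1\n"
    " ip address {mgmt_ip} 255.255.255.0\n"
    "username automation privilege 15 secret CHANGE-ME\n"
)

repo = Path("configs/bootstrap")  # e.g., checked into Git, synced to TFTP
repo.mkdir(parents=True, exist_ok=True)

inventory = {"sw-edge-01": "10.0.0.11", "sw-edge-02": "10.0.0.12"}

for name, mgmt_ip in inventory.items():
    (repo / f"{name}.cfg").write_text(BOOTSTRAP.format(name=name, mgmt_ip=mgmt_ip))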

 

The Whisper in the Wires

 

The more I work with automation frameworks, the more I'm convinced that all configuration management should be incorporated into the policy. Yes, some basic configurations may need to be pre-loaded to make the device accessible, but why maintain two separate mechanisms? The basic configurations can be managed just as easily as the functional devices can, so it's just an extra step. Is this something that those of you who are automating have already done, or are most of us still using one process for initial configuration and another for ongoing operation?

Where are you? Halfway through this 6-part series exploring a new reference model for IT infrastructure security!

 

As you learned in earlier posts, this model breaks the security infrastructure landscape into four domains that each contain six categories. While today’s domain may seem simple, it is an area that I constantly see folks getting wrong--both with my clients and in the news. So, let’s carefully review the components that make up a comprehensive identity and access security system:

 

DOMAIN: IDENTITY & ACCESS

Your castle walls are no use if the attacking horde has keys to the gate. In IT infrastructure, those keys are user credentials. Most of the recent high-profile breaches in the news were simple cases of compromised passwords. We can do better, and the tools in this domain can help.

 

The categories in the identity and access domain are: single sign-on (SSO – also called identity and access management, IAM), privileged account management (PAM), multi-factor authentication (MFA), cloud access security brokers (CASB), secure access (user VPN), and network access control (NAC).

 

CATEGORY: SSO (IAM)

The weakest link in almost every organization’s security posture is its users. One of the hardest things for users to do (apparently) is manage passwords for multiple devices, applications, and services. What if you could make it easier for them by letting them log in once, and get access to everything they need? You can! It’s called single sign-on (SSO) and a good solution comes with additional authentication, authorization, accounting, and auditing (AAAA) features that aren’t possible without such a system – that’s IAM.

 

CATEGORY: PAM

Not all users are created equal. A privileged user is one who has administrative or root access to critical systems. Privileged account management (PAM) solutions provide the tools you need to secure critical assets while allowing needed access and maintaining compliance. Current PAM solutions follow “least access required” guidelines and adhere to separation-of-responsibilities best practices.

 

CATEGORY: MFA

Even strong passwords can be stolen. Multi-factor authentication (MFA) is the answer. MFA solutions combine any of the following: something you know (the password), something you have (a token, smart phone, etc.), something you are (biometrics, enrolled device, etc.), and/or somewhere you are (geolocation) for a much higher level of security. Governing security controls, such as PCI-DSS, and industry best practices require MFA to be in place for user access.
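As a small illustration of the "something you have" factor, here's a sketch of time-based one-time passwords (TOTP) using the pyotp library. It's a toy, not a full MFA deployment: a real system would store the secret securely and enroll it in the user's authenticator app.

# "Something you have": time-based one-time passwords (TOTP) with pyotp.
# A real deployment would protect the secret and enroll it in an
# authenticator app; this toy generates and verifies a code locally.
import pyotp

secret = pyotp.random_base32()  # provisioned once per user/device
totp = pyotp.TOTP(secret)

code = totp.now()               # what the user's phone would display
print("accepted" if totp.verify(code) else "rejected")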

 

CATEGORY: CASB

According to Gartner: “Cloud access security brokers (CASBs) are on-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed. CASBs consolidate multiple types of security policy enforcement. Example security policies include authentication, single sign-on, authorization, credential mapping, device profiling, encryption, tokenization, logging, alerting, malware detection/prevention, and so on.” If you are using multiple SaaS/PaaS/IaaS offerings, you should probably consider a CASB.

 

CATEGORY: SECURE ACCESS (VPN)

Your employees expect to work from anywhere. You expect your corporate resources to remain secure. How do we do both? With secure access. Common components of a Secure Access solution include a VPN concentrator and a client (or web portal) for each user. Worth noting, the new category of software defined perimeter (SDP) services mentioned in part 2 often look and act a lot like an always-on VPN. In any case, the products in this category ensure that users can securely connect to the resources they need, even when they’re not in the office.

 

CATEGORY: NAC

Let’s say a criminal or a spy is able to get into your office. Can they join the Wi-Fi or plug into an open jack and get access to all of your applications and data? Less nefarious, what if a user computer hasn’t completed a successful security scan in over a week? Network access control (NAC) makes sure the bad guys can’t get onto the network and that the security posture of devices permitted on the network is maintained. Those users or devices that don’t adhere to NAC policies are either blocked or quarantined via rules an administrator configures. Secure access and NAC are converging, but it’s too early for us to collapse the categories just yet.

 

ONE MORE DOMAIN!

While we’ve made a lot of progress, our journey through the domains of IT infrastructure security isn’t over yet. In the next post, we’ll peer into the tools and technologies that provide us with visibility and control. Even that isn’t the end though, as we’ll wrap the series up with a final post covering the model as a whole, including how to apply it and where it may be lacking. I hope you’ll continue to travel along with me!

This is part three in this series. In part one and part two, we covered some of the basics. But in this post, we will dig into the benefits of application performance monitoring (APM), look at a few examples, and begin looking at what APM in an Agile environment means.

Benefits

With everything we have discussed thus far, the benefits of APM may or may not be apparent. Hopefully they are obvious, but in case they are not, let's cover them briefly.

 

Based on some of the comments on the previous posts, there seems to be a common theme: APM is not easy to accomplish. That helps explain why many either never start or give up partway through.

 

I would personally agree that it is not an easy feat to accomplish, but it is very much worth sticking with. There will be pain and suffering along the way, but in the end, your application's performance will be substantially more satisfying for everyone. Along the way you may even uncover some underlying infrastructure issues that had gone unnoticed but become more apparent as APM is implemented. So, as for the greatest benefit, I would say it's the fact that you were able to follow through on your APM implementation.

Examples

Let's now look at just a few examples of where APM would be beneficial in identifying a true performance degradation.

 

Users are reporting that when visiting your company’s website, there is a high degree of slowness or timeouts. I am sure that this scenario rings a bell with most. This is more than likely the only information that you have been provided as well. So where do we start looking? I bet most will begin looking at CPU, memory, and disk utilization. Why not? This seems logical, except that you do not see anything that appears to be an issue. But because we have implemented our APM solution, in this scenario we were able to identify that the issue was due to a bad SQL query in our database. Our APM solution was able to identify it, show us where the issue lies, and give us some recommendations on how to resolve the issue.

 

Now, let us assume that we were getting reports of slowness once again on our company’s website. But this time our application servers appear to be fine and our APM solution is not showing us anything regarding performance degradation. So, we respond with the typical answer, “Everything looks fine.”

 

A bit later, one of your network engineers reports that there is a high amount of traffic hitting the company’s load balancers. A DDoS attack is causing them to drop packets to anything behind them. And guess what? Your application’s web servers sit directly behind the affected load balancers, which would explain the reports you received earlier. In this case, we did not have APM configured to monitor anything other than our application servers, so we never saw anything out of the norm. This is a good example of why you should monitor not only your application servers, but also all of the external components that affect the performance your application delivers. Had we been doing so, we could at the very least have correlated the reports of slowness with the high volume of traffic hitting the load balancers. In addition, our APM was not configured to monitor the connection metrics on our application servers. If it had been, we would have noticed that those metrics were not reporting as normal.
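For instance, a check along these lines would have flagged the problem. This is only a sketch: the baseline samples, the current reading, and the 50% threshold are all invented for illustration.

# Flag when active connection counts drift far from a recent baseline.
# The sample values and the 50% threshold are invented for illustration.
from statistics import mean

baseline_connections = [480, 510, 495, 502, 490]  # recent "normal" samples
current_connections = 140                         # during the upstream DDoS

expected = mean(baseline_connections)
deviation = abs(current_connections - expected) / expected

if deviation > 0.5:
    print(f"Connection count {current_connections} deviates {deviation:.0%} "
          f"from baseline {expected:.0f} -- check upstream components")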

Conclusion of After The Fact Monitoring

If you recall, I mentioned in the first article that I would refer to the traditional APM methodology as “after-the-fact implementation.” This is probably the most typical scenario, and it is also what leads to the burden and pain of implementation. In the next post of this series, we will begin looking at implementing APM in an Agile environment.

This version of the Actuator comes to you from Barcelona, where I am attending VMworld. This is one of my favorite events of the year. And I’m not just saying that because in Barcelona I can buy a plate of meat and it’s called “salad.” OK, maybe I am.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The Cybersecurity Hiring Gap is Due to The Lack of Entry-level Positions

Yes, the hiring process is broken. This post helps break it down. You could replace cybersecurity with other tech roles, and see the issues exist everywhere.

 

Directions

Brilliant post from Leon where he helps break down a better way to conduct an interview. This is something near and dear to my heart, having written more than a few times about DBA jobs and interviews.

 

That sign telling you how fast you’re driving may be spying on you

It’s one thing to collect the data, it’s another to use the data. I think the collection is fine, but you need a warrant to search that database. And this is also a case where you can’t allow someone to be given SysAdmin access “just because.”

 

Who Is Agent Tesla?

Is it a monitoring agent? Is it malware? Why can’t it be both? Folks, if your “monitoring software” is asking for payment in Bitcoin, then you are asking for trouble when you install it.

 

Lyft speeds ahead with its autonomous initiatives

Because I haven’t been including enough autonomous car stories lately, I felt the need to share this one. And when I am at AWS re:Invent later this month, I hope to use one.

 

Inside Europe’s quest to build an unhackable quantum internet

I don’t know why, but I’m more bullish about quantum computing than I am Blockchain. The quantum internet sounds cool, but the reality is most data breaches happen when Adam in Accounting leaves a laptop on a bus.

 

Apple Reportedly Blocked Police iPhone Hacking Tool and Nobody Knows How

Score one for the good guys! Wait. When did Apple become the good guys again?

 

I love the salad bars here in Barcelona:

 

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Agencies are becoming far more proactive in their efforts to combat threats, as evidenced by the Department of Defense’s Comply-to-Connect and the Department of Homeland Security’s Continuous Diagnostics and Mitigation programs. To develop and maintain strong security hygiene that supports these and other efforts, agencies should consider implementing five actions that can help strengthen networks before the next attack.

 

Identify and dispel vulnerabilities

 

Better visibility and understanding of network devices are key to optimal cybersecurity. Agencies should maintain device whitelists or known asset inventories and compare the devices that are detected to those databases. Then, they can make decisions based on their whitelist.
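A minimal sketch of that comparison, assuming the detected devices come from a discovery scan and the whitelist from an asset inventory export (the MAC addresses below are placeholders):

# Compare detected devices against a known-asset whitelist.
# The MAC addresses are placeholders for real discovery and inventory data.
whitelist = {"00:11:22:33:44:55", "00:11:22:33:44:66", "00:11:22:33:44:77"}
detected = {"00:11:22:33:44:55", "00:11:22:33:44:66", "de:ad:be:ef:00:01"}

unknown = detected - whitelist  # on the network but not in inventory
missing = whitelist - detected  # in inventory but not seen on the network

print("Investigate or quarantine:", unknown)
print("Possibly offline or stale inventory:", missing)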

 

Identifying vulnerable assets and updating them will likely be more cost effective—and safer—than trying to maintain older systems.

 

Update and test security procedures

 

Many agencies engage in large-scale drills, but it’s equally important to test capabilities on a smaller scale and monitor performance under simulated attacks. Agencies must get into the habit of testing every time a new technology is added to the network, or each time a new patch is implemented. Likewise, teams should update and test their security plans and strategies frequently. In short: verify, then trust. An untested disaster recovery plan is a disaster waiting to happen.

 

Make education a priority

 

A significant number of IT professionals feel that agencies are not investing enough in employee training. Lack of training could pose risks if IT professionals are not appropriately knowledgeable on technologies and mitigation strategies that can help protect their organizations.

 

Agencies must also invest in ongoing user training, so their teams can be more effective. This includes solution training, but it may also encompass sessions that focus on the latest malware threats, hacker tactics, or the potential dangers posed by insiders.

 

Take a holistic view of everyone’s roles

 

It’s good that the government is focused on hiring highly-skilled cybersecurity professionals. Last year the General Services Administration held a first-ever event to recruit new cybersecurity talent, and we will likely see similar job fairs in the future.

 

However, security is everyone’s job. Managers must institute a culture of information sharing amongst team members; there’s no room for silos in cybersecurity. Everyone must be vigilant and on the lookout for potential warning signs, regardless of their job descriptions.

 

Implement the proper procedures for a cyber assault

 

Still, threats will inevitably occur, and while there are a variety of mechanisms and techniques that can be used in response, all involve having the correct tools working in concert. For instance, a single next-generation firewall is great, but ineffective in the event of data exfiltration over Domain Name System (DNS) traffic.

 

To help protect critical services, agencies must employ a suite of solutions that can accurately detect anomalies that originate both inside and outside the network. These should include standard network monitoring and firewall solutions. Agencies may also want to consider implementing automated patch management, user device tracking, and other strategies that can provide true defense-in-depth capabilities.

 

Find the full article on SIGNAL.

 


Despite all the talk of all our jobs being replaced by automation, my experience is that the majority of enterprises are still very much employing engineers to design, build, and operate the infrastructure. Why, then, are most of us stuck in a position where despite experimenting with infrastructure automation, we have only managed to build tools to take over small, mission-specific tasks, and we've not achieved the promised nirvana of Click Once To Deploy?

 

We Are Not Stupid

 

Before digging into some reasons we're in this situation, it's important to first address the elephant in the room, which is the idea that nobody is stupid enough to automate themselves out of a job. Uhh, sure we are. We're geeks, we're driven by an obsession with technology, and the vast majority of us suffer from a terrible case of Computer Disease. I believe that the technical satisfaction of managing to successfully automate an entire process is a far stronger short-term motivation than any fear of the potential long-term consequences of doing so. In the same way that hoarding information as a form of job security is a self-defeating action (as Greg Ferro correctly says, "if you can't be replaced, you can't be promoted"), avoiding automation because it takes a task away from the meatbags is an equally silly idea.

 

Time Is Money

 

Why do we want to automate? Well, automation is the path to money. Automation leads to time-saving; time-saving leads to agility; agility leads to money. Save time, you must! Every trivial task that can be accomplished by automation frees up time for more important things. Let's be honest, we all have a huge backlog of things we've been meaning to do, but don't have time to get to.

 

However, building automation takes time too. There can be many nuances to even simple tasks, and codifying those nuances and handling exceptions can be a significant effort; large-scale automation is exponentially more complex. Because of that, we start small and try to automate small steps within the larger task, because that's a manageable project. Soon enough, there will be a collection of small, automated tasks built up, each of which requires its own inputs and generates its own outputs, and--usually--none of which can talk to each other because each element was written independently. Even so, this is not a bad approach, because if the tasks chosen for automation occur frequently, the time saved by the automation can outweigh the time spent developing it.

 

This kind of automation still needs hand-holding and guidance from a human, so while the humans are now more productive, they haven't replaced themselves yet.

 

Resource Crunch

 

There's an oft-cited problem that infrastructure engineers don't understand programming and programmers don't understand infrastructure, and there's more than a grain of truth to this idea. Automating small tasks is something that many infrastructure engineers will be capable of, courtesy of some great module/package support in scripting languages like Python. Automating big tasks end-to-end is a different ball game, and typically requires a level of planning and structure in the code exceeding that which most infrastructure engineers have in their skills portfolio. That's to be expected: if coding was an engineer's primary skill, they'd more likely be a programmer, not an infrastructure engineer.
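As an example of the kind of small, mission-specific task most of us start with, here's a sketch that backs up device running configurations using the netmiko library. The hostnames and credentials are placeholders; a real version would pull them from an inventory and a secrets store rather than hardcoding them.

# Small-task automation: back up running configs with netmiko.
# Hostnames and credentials are placeholders for illustration only.
from netmiko import ConnectHandler

devices = [
    {"device_type": "cisco_ios", "host": "10.0.0.1",
     "username": "backup", "password": "CHANGE-ME"},
    {"device_type": "cisco_ios", "host": "10.0.0.2",
     "username": "backup", "password": "CHANGE-ME"},
]

for device in devices:
    conn = ConnectHandler(**device)
    config = conn.send_command("show running-config")
    with open(f"{device['host']}.cfg", "w") as backup_file:
        backup_file.write(config)
    conn.disconnect()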

 

Ultimately, scaling an automation project will almost always require dedicated and skilled programmers, who are not usually found in the existing team, and that means spending money on those programming resources, potentially for an extended period of time. While the project is running, it's likely that there will be little to no return on the investment. This is a classic demonstration of the maxim that you have to speculate to accumulate, but many companies are not in a position, or are simply unwilling, to invest that money up front.

 

The Cliffs Of Despair

 

With this in mind, in my opinion, one of the reasons companies get stuck with lots of small automation is that it's relatively easy to automate multiple small tasks, but taking the next step and automating a full end-to-end process is a step too far for many of them. It's simply too great a conceptual and/or financial leap from where things are today. Automating every task is so far off in the distance that nobody can even forecast it.

 

They say a picture is worth a thousand words, which probably means I should have just posted this chart and said "Discuss," but nonetheless, as a huge fan of analyst firms, I thought that I could really drive my point home by creating a top quality chart representing the ups and downs of infrastructure automation.

 

(CHART: The Cliffs Of Despair)

 

As is clearly illustrated here, after the Initial Learning Pains, we fall into the Trough Of Small Successes, where there's now enough knowledge to create many small automation tools. However, the Cliffs Of Despair loom ahead as it becomes necessary to integrate these tools together and orchestrate larger flows. Finally, after much effort, a mechanism emerges by which the automation flows can be integrated, and the project enters the Plateau of Near Completion, where the new mechanism is applied to the many smaller tools and good progress is made towards the end goal of full automation. However, just as the project manager announces that there are only a few tasks remaining before the project can be considered a wrap, development enters the Asymptotic Dream Of Full Automation: no matter how close the team gets to achieving full automation, there's always one more feature to include, one more edge case that hadn't arisen before, or one more device OS update that breaks the existing automation, ensuring that the programming team has a job for life and will never know the sweet satisfaction of a finished job.

 

Single Threaded Operation

 

There's one more problem to consider. Within the overall infrastructure, each resource area (e.g., compute, storage, network, security) is likely working its own way towards the Asymptotic Dream Of Full Automation, and at some point each will discover that full, end-to-end automation means orchestrating tasks between teams. And that's a whole new discussion, perhaps for a later post.

 

Change My Mind

 


The hype around blockchain technology is reaching a fever pitch these days. Visit any tech conference and you’ll find more than a handful of vendors offering blockchain in one form or another. This includes Microsoft, IBM, and AWS. Each of those companies offers a public blockchain as a service.

 

Blockchain is also the driving force behind cryptocurrencies, allowing Bitcoin owners to purchase drugs on the internet without the hassle of showing their identity. So, if that sounds like you, then yes, you should consider using blockchain. A private one, too.

 

Or, if you’re running a large logistics company with one or more supply chains made up of many different vendors, and need to identify, track, trace, or source the items in the supply chain, then blockchain may be the solution for you as well.

 

Not every company has such needs. In fact, there's a good chance you are being persuaded to use blockchain as a solution to a current logistics problem. It wouldn't be the first time someone has tried to sell you a piece of technology you don't need.

 

Before we can answer the question of whether you need a blockchain, let's take a step back and make certain we understand blockchain technology, what it solves, and the issues involved.

 

What is a blockchain?

The simplest explanation is that a blockchain serves as a ledger: a series of transactions, with cryptography used to verify each transaction in the chain. Think of it as a very long sequence of small files, each file based upon a hash value of the previous file, combined with new bits of data and the answer to a math problem.
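
Here's a minimal sketch of that idea in Python, purely illustrative and nothing like a production blockchain: each block carries some data, the hash of the previous block, and a nonce that answers a simple proof-of-work puzzle (the "math problem").

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(data: str, previous_hash: str, difficulty: int = 4) -> dict:
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        block = {"data": data, "previous_hash": previous_hash, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1

# Build a tiny chain: each block is tied to the hash of the one before it.
chain = [mine_block("genesis", previous_hash="0" * 64)]
for payload in ["shipment 42 received", "shipment 42 inspected"]:
    chain.append(mine_block(payload, previous_hash=block_hash(chain[-1])))

for block in chain:
    print(block_hash(block)[:16], block["data"])
```

Change the data in any earlier block and every hash after it stops matching, which is what makes the ledger verifiable. It's also a hint as to why updating the ledger takes so much longer than updating an ordinary database.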

 

Put another way, blockchain is a database—one that is never backed up, grows forever, and takes minutes or hours to update a record. Sounds amazing!

 

What does blockchain solve?

Proponents of blockchain believe it solves the issue of data validation and trust. For systems needing to verify transactions between two parties, you would consider blockchain. The problem blockchain is most often applied to is supply chain logistics, specifically food sourcing and traceability.

 

Examples include Walmart requiring food suppliers to use a blockchain provided by IBM starting in 2019. Another is Albert Heijn using blockchain technology, along with QR codes, to trace the origin of its orange juice. Don't get me started on the use of QR codes; we can save that for a future post.

 

The problem with blockchain

Blockchain is supposed to make your system more trustworthy, but it does the opposite. Blockchain pushes the burden of trust down to the individuals adding transactions to the blockchain. This is how all distributed systems work. The burden of trust goes from a central entity to all participants. And this is the inherent problem with blockchain. What’s worth mentioning here is how many cryptocurrencies rely on trusted third parties to handle payouts. So, they use blockchain to generate coins, but don’t use blockchain to handle payouts because of the issues involved around trust.

 

Here’s another example of an issue with blockchain: data entry. In 2006, Walmart launched a system to help track bananas and mangoes from field to store, only to abandon the system a few years later. The reason? Because it was difficult to get everyone to enter their data. Even when data is entered, blockchain will not do anything to validate that the data is correct. Blockchain will validate the transaction took place but does nothing to validate the actions of the entities involved. For example, a farmer could spray pesticides on oranges but still call it organic. It’s no different than how I refuse to put my correct cell phone number into any form on the internet which requires a number be given.

 

In other words, blockchain, like any other database, is only as good as the data entered. Each point in the ledger is a point of failure. Your orange, or your ground beef, may be locally sourced, but that doesn’t mean it’s safe. Blockchain may help determine the point of contamination faster, but it won’t stop it from happening.

 

Do you need a blockchain?

Maybe. All we need to do is ask ourselves a few questions.

 

Do you need a [new] database? If you need a new database, then you might need a blockchain. If an existing database or database technology would solve your issue, then no, you don’t.

 

Let’s assume you need a database. The next question: Do you have multiple entities needing to update the database? If no, then you don’t need a blockchain.

 

OK, let’s assume we need a new database and we have many entities needing to write to the database. Are all the entities involved known, and trust each other? If the answer is no, then you don’t need a blockchain. If the entities have a third party everyone can trust, then you also don’t need a blockchain. A blockchain should remove the use of a third party.

 

OK, let’s assume we know we need a database, with multiple entities updating it, all trusting each other. The final question: Do you need this database to be distributed in a peer-to-peer network? If the answer is no, then you don’t need a blockchain.

 

If, on the other hand, your answers run the other way at every step (a new database, multiple writers, no mutual trust, no trusted third party, and a genuine need for peer-to-peer distribution), then a private or public blockchain may be the right solution for your needs. The whole decision flow is sketched below.
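
For the flowchart-minded, here's the same decision flow condensed into a small Python sketch; the parameter names are mine, not any industry standard.

```python
def needs_blockchain(
    need_new_database: bool,
    multiple_writers: bool,
    writers_trust_each_other: bool,
    trusted_third_party_available: bool,
    needs_peer_to_peer_distribution: bool,
) -> bool:
    """Encode the questions above; every early return means: no blockchain."""
    if not need_new_database:
        return False  # an existing database technology will solve your issue
    if not multiple_writers:
        return False  # a single writer just needs a regular database
    if writers_trust_each_other:
        return False  # mutual trust: a shared database will do
    if trusted_third_party_available:
        return False  # let the trusted third party run the database
    return needs_peer_to_peer_distribution

# Example: known partners who already rely on a trusted central operator.
print(needs_blockchain(True, True, False, True, True))  # False -> no blockchain needed
```

Most organizations will hit one of those early returns.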

Summary

No, you don’t need a blockchain. Unless you do need one, but that’s not likely. And it won’t solve basic issues of data validation and trust between entities.

So far in this series we have reviewed a few popular and emerging models and frameworks. These tools are meant to help you make sense of where you are and how to get where you’re going when it comes to information security or cybersecurity. We’ve also started the process of defining a new, more practical, more technology-focused map of the cybersecurity landscape. At this point you are familiar with the concept of four critical domains, and six key technology categories within each. Today we’ll dive into the second domain: Endpoint and Application.

 

I must admit that not everyone agrees with me about lumping servers and applications in with laptops and mobile phones as a single security domain. The choice was a risk, but I believe it makes the most sense: so many of the tools and techniques are the same for both groups of devices, especially now, as we move our endpoints out onto networks we don't fully control (or, in some cases, don't control at all). Let's explore it together, and then let me know what you think!

 

Domain: Endpoint & Application

If we stick with the castle analogy from part 2, endpoints and applications are the people living inside the walls. Endpoints are the devices your people use to work: desktops, laptops, tablets, phones, etc. Applications are made up of the servers and software your employees, customers, and partners rely upon. These are the things that are affected if an attack penetrates your perimeter, and as such, they need their own defenses.

 

The categories in the endpoint and application domain are endpoint protection, detection, and response (EPP / EDR), patch and vulnerability management, encryption, secure application delivery, mobile device management (MDM), and cloud governance.

 

Category: EPP / EDR

The oldest forms of IT security are firewalls and host antivirus. Both have matured a lot in the past 30+ years. Endpoint protection (EPP) is the evolution of host-based anti-malware tools, combining many features into products with great success rates. Nothing is perfect, however, and advanced persistent threats (APTs) can still get into your devices and do damage over time. Endpoint detection and response (EDR) tools are the answer to APTs. We're combining these two concepts into a single category because you need both, and luckily for us, many manufacturers now combine them as features of their endpoint security solutions.

 

Category: Patch and Vulnerability Management

While catching and stopping malware and other attacks is great, what if you didn’t have to? Tracking potential vulnerabilities across your systems and automatically applying patches as needed should reduce the exploit capabilities of an attacker and help you sleep better at night. While you can address patch management without vulnerability management, I recommend that you take a comprehensive and automated approach, which is why they are both covered in this category.
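
As a trivial illustration of the "tracking" half, here's a sketch that lists packages with pending updates on a single host. It assumes a Debian/Ubuntu system (the apt command is the only moving part); a real patch and vulnerability management tool does this continuously, across every system, and correlates the results with vulnerability feeds.

```python
import subprocess

def pending_updates() -> list[str]:
    """Return package names that have an upgrade available (Debian/Ubuntu only)."""
    output = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Lines look like: "openssl/jammy-updates 3.0.2-0ubuntu1.15 amd64 [upgradable from: ...]"
    return [line.split("/")[0] for line in output.splitlines() if "upgradable" in line]

if __name__ == "__main__":
    packages = pending_updates()
    print(f"{len(packages)} packages need patching: {', '.join(packages) or 'none'}")
```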

 

Category: Encryption

When properly applied, encryption is the most effective way to protect your data from unwanted disclosure. Of course, encrypted data is only useful if you can decrypt it when needed, so be sure to have a plan (and the proper tools) for key management and recovery! Encryption/decryption utilities can protect data at rest (stored files), data in use (an open file), and data in motion (sending/receiving a file).
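
For data at rest, here's a minimal sketch using the third-party Python cryptography package and its Fernet recipe. The key handling is deliberately naive; deciding where that key lives, and how you get it back later, is precisely the "plan for decryption" mentioned above.

```python
# Requires a third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in real life, store this in a key management system
cipher = Fernet(key)

# Data at rest: encrypt before writing to disk.
ciphertext = cipher.encrypt(b"contents of quarterly-financials.xlsx")

# Without the key, the ciphertext is useless; that's the point, and also the risk.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"contents of quarterly-financials.xlsx"
```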

 

Category: Secure Application Delivery

Load balancers used to be all you needed to round-robin requests across your various application servers. Today, application delivery controllers (ADCs) are much more than that. You always want to put security first, so I recommend an ADC that includes a web application firewall (WAF) and other security features for secure application delivery.

 

Category: Mobile Device Management

EPP and EDR may be enough for devices that stay on-prem, under the protection of your perimeter security tools, but what about mobile devices? When people are bringing their own devices into your network, and taking your devices onto other networks, a more comprehensive security-focused solution is needed. These solutions fall under the umbrella of mobile device management (MDM). 

 

Category: Cloud (XaaS) Governance

Cloud governance is a fairly new realm and in many ways is still being defined. What's more, to an even higher degree than the other categories here, governance must always include people, processes, and technology. Since this reference model is focused on technology and practical tools, this category includes technologies that enable and enforce governance. As your organization becomes more and more dependent on more and more cloud platforms, you need visibility and policy control over that emerging multi-cloud environment. A solid cloud governance tool provides that.

What's Next?

We are now three parts into this six-part series. Are you starting to feel like you know where you are? How about where you need to be going? Don’t worry, we still have two more domains to cover, and then a final word on how to make this model practical for you and your organization. Keep an eye out for part 4, where we’ll dive into identity and access, an area that many of you are probably neglecting despite its extreme importance. Talk to you then!
