
Geek Speak


As somebody not lucky enough to have a nice clicky interface with which to manage and automate all my equipment, I have to develop my own tools to do so. One aspect of developing those tools that drives me up the wall is the variety of mechanisms I have to support to communicate with the various devices deployed in the network. Why can’t we have one, consistent way to manage the devices?




I can already hear voices saying “But we have SSH. SSH is available on almost all devices. Is that not the consistency you desire?” No, it isn’t. In terms of configuring network devices, SSH is just a(n encrypted) transport mechanism and provides no help whatsoever with configuring the devices. Once I’ve connected using SSH, I have to develop the appropriate customized code to screen-scrape the particular operating system I connect to. Anybody who has done this will testify that this is not particularly straightforward and, worse, the reward for doing so is to be able to issue commands and receive, in return, wads of unstructured data (i.e., command output) which can change between code versions, making parsing a nightmare.


So here’s my first requirement: structured data.


Structured Data


Typical command line output blurts out the requested information in such a way that the surrounding label text and the position of a piece of text impart the information necessary to infer what the text itself represents. Decoding data in this format usually means developing regular expressions to identify and pull apart the text so that the constituent data can be processed. Identification of data is implicit, based on contextual clues. Unstructured data is nice (or at least tolerable) for a human to look at and make sense of, but is frequently very difficult to interpret in code.
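To make the pain concrete, here is a minimal sketch of the regex approach; the command output and field layout below are hypothetical, but they are representative of the kind of screen-scraping code this implies:

```python
import re

# Hypothetical, unstructured CLI output of the kind screen-scraping returns.
raw_output = """\
GigabitEthernet0/1 is up, line protocol is up
  MTU 1500 bytes, BW 1000000 Kbit/sec
"""

# A regex built around label text and position; it breaks the moment the
# vendor reorders or rewords the output in a new software release.
match = re.search(
    r"(?P<name>\S+) is (?P<status>\w+), line protocol is (?P<protocol>\w+)",
    raw_output,
)
interface = match.groupdict()
print(interface)  # {'name': 'GigabitEthernet0/1', 'status': 'up', 'protocol': 'up'}
```

Every field we want requires another hand-tuned pattern, and none of it transfers to the next operating system.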


Structured data, on the other hand, follows a specific set of rules to present the data points such that they are explicitly and unambiguously identified and labeled for consumption by code. Structured data usually presents data in a way that mimics programmatic hierarchical data structures, which lends itself to easy integration into code.


Junos has historically favored XML for structured data, but — subjectively — I would argue that XML is ugly and can get complex very quickly, especially where multiple namespaces are being used. Personally I have a soft spot for JSON, and I know other people like YAML, but ultimately I’m at the point where I’d say “I don’t care, just pick one and stick with it,” so that I can focus my time and effort on handling the one format.
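To make this concrete, here is a small structured payload (a hypothetical example, not any particular device's schema); extracting a value becomes a dictionary lookup, with no regex guesswork:

```python
import json

# The same kind of interface state, but as structured JSON: every value
# is explicitly and unambiguously labeled.
payload = '''
{
  "interface": {
    "name": "GigabitEthernet0/1",
    "admin-status": "up",
    "oper-status": "up",
    "mtu": 1500
  }
}
'''

data = json.loads(payload)
print(data["interface"]["oper-status"])  # up
print(data["interface"]["mtu"])          # 1500
```

The hierarchy maps directly onto native data structures, which is exactly the "easy integration into code" described above.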


So here’s my second requirement: consistent encoding.


What Gets Encoded?


Once we have a way to encode the data in structured format, we then need to be more consistent about what’s being encoded. Again, this may point at a project like OpenConfig, but failing that, perhaps it would be worthwhile if everything could be described using YANG. By default, YANG maps to XML, but RFC7951 (JSON Encoding of Data Modeled with YANG) helpfully shows how my pet preference JSON can be used instead.


The point here is that if YANG is used, the mapping to JSON or XML is almost a side issue, so long as the data is modeled in YANG to start with; both XML and JSON fans can translate from YANG to their favorite encoding and—and this is the key part—so can the devices, so clients can request the encoding of their choice.
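As a rough sketch of what that buys us, here is one piece of modeled data rendered in both encodings; the module name, leaf names, and namespace below are invented for illustration, following the RFC7951 convention of qualifying JSON members as "module:container" and the XML convention of mapping the module to a namespace:

```python
import json
import xml.etree.ElementTree as ET

# RFC7951-style JSON: members are qualified with the (hypothetical) module name.
json_doc = json.dumps({"example-interfaces:interface": {"name": "eth0", "mtu": 1500}})

# The equivalent XML encoding: the module maps to an XML namespace.
root = ET.Element("interface", xmlns="urn:example:interfaces")
ET.SubElement(root, "name").text = "eth0"
ET.SubElement(root, "mtu").text = "1500"
xml_doc = ET.tostring(root, encoding="unicode")

print(json_doc)
print(xml_doc)
```

One YANG model, two mechanical renderings; the client asks for whichever it prefers.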


One Transport To Rule Them All


So now that I’ve determined that we need YANG models with support for both JSON and XML encoding, optionally following a common OpenConfig data model, let’s address how we communicate with the devices.


I don’t want to have to figure out what kind of device I’m connecting to, then based on that information, decide what connection transport I should be using (e.g., HTTP, HTTPS, SSH possibly on a non-standard TCP port). What I want to be able to do is to connect to the device in a standard way on a standard port. I don’t mind reconnecting to a non-standard port, but I want to be told about that port after I connect to the standard port.


That’s my third requirement: one transport for all.


OpenConfig takes this approach, by having a “well-known” URL on the device to report back information about the device and connection details in a standard format. I’d like to take it a step further, though.


REST API or Bust


Let’s use a REST API for all this communication. REST APIs are ubiquitous now; every programming language has the ability to send and receive requests over HTTP(S) and to decode the XML/JSON responses. It makes things easy!
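As a hedged sketch of what such a client looks like, the snippet below builds (but does not send) a RESTCONF-style GET using only the Python standard library; the device hostname is hypothetical, while the URL layout and the yang-data media type follow RFC8040:

```python
import urllib.request

# Build (but don't send) a RESTCONF GET for interface data.
base = "https://router.example.com/restconf"
path = "/data/ietf-interfaces:interfaces"

request = urllib.request.Request(
    base + path,
    headers={"Accept": "application/yang-data+json"},  # ask for JSON, not XML
)

print(request.full_url)
print(request.get_header("Accept"))
```

That is the whole client story: one URL scheme, one port, one header to pick the encoding.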


My last requirement: access via REST API.


Wait a moment, let me check again:

  • Structured data: YES
  • Consistent encoding: YES
  • One transport: YES


I believe I’ve accidentally just defined RESTCONF (RFC8040), whose introductory paragraph reads:


“[...] an HTTP-based protocol that provides a programmatic interface for accessing data defined in YANG, using the datastore concepts defined in the Network Configuration Protocol (NETCONF).”


I am not a huge fan of NETCONF/XML over SSH, but NETCONF/JSON over HTTP? Count me in!


My Thoughts


Automating the infrastructure is hard enough without battling against multiple protocols. How about we all just agree that RESTCONF is a good compromise and start supporting it across all devices?


For what it’s worth, there is some level of RESTCONF support in more recent software releases, including:

  • Cisco IOS XE
  • Cisco IOS XR
  • Cisco NX-OS
  • Juniper Junos OS
  • Arista EOS
  • Extreme XOS
  • DellEMC OS10
  • and more...


But here’s the problem: when did you last hear of anybody trying to automate with RESTCONF?


That’s what I’d like to see. What about you?

Home from Orlando and SQL Live, my last speaking event of the year. I now have less than two weeks to get all my holiday shopping done. Oh, and so do you.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Microsoft Edge goes Chromium (and MacOS)

If you still needed a sign that Microsoft is a different company, this should do it for you.


The Problem With Feedback

Not all feedback is good feedback. When opinions are treated as information without any quality control, you end up with a billion people hoping to create a viral video on Facebook.


Even self-driving leader Waymo is struggling to reach full autonomy

Yeah, but sometimes you need to ship the product, and not wait for perfection. I’m still hopeful for autonomous cars in my lifetime.


What the Marriott Breach Says About Security

Security, privacy, convenience. Pick two.


250-page document dump is another nail in Facebook’s coffin

If this was Microsoft, we’d be talking about breaking the company into 37 pieces. I don’t understand why Facebook is allowed to continue operations in light of these data privacy issues, and the evidence that they knew and did not care.


How Alaska fixed its earthquake-shattered roads in just days

This is how you engineer things. Plan for failure and plan your response to that failure. Application developers could learn a lot from Alaska highways.


How Being Busy All the Time is Hurting You – plus 3 ways to stop

I’ve fallen into this trap as well, being too busy to do anything. Take a step back and reflect on your tasks and your accomplishments. Being busy is not a trophy.


My favorite event of the year, SQL Live in Orlando, where my room has a view of Hogwarts:


So far in this series, we have covered Application Performance Monitoring. If you haven't already, read part one, part two, part three, and part four. In the last post, we stressed the importance of mapping out all dependencies for our application. In this post, we will dig further into it and get a better understanding of what ADM (Application Dependency Mapping) is all about.


What Is It All About?


Application Dependency Mapping is far from a new concept. If you are like me, you remember the days when you would pay a company a very large sum of money to gather data into a spreadsheet that then became your application mappings. The data was compiled based on samples that were gathered in various ways. Often, by the time you received them, things had already changed; therefore, they were already out of date. The good thing is that we could gain a somewhat better understanding of our applications by going through this process.


So, how have things evolved since those days? Application dependencies have become much simpler to obtain. Even better, we can visualize them in real time. The tools and methods of gathering the data have advanced over the years.


These tools can monitor data flows over a certain period we determine in advance. Then we can generate visualizations of our application flows. With these visualizations, we can understand groupings of services, etc. We are able to see which devices communicate in a bi-directional fashion. This allows us to understand what applications are being provided, and what is consuming them. We can also uncover any potential security concerns as we dig into the application flows. We may find that there are certain network ports exposed and/or being consumed that should be locked down.
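As a toy illustration of that aggregation step, the sketch below collapses captured flow records into a simple dependency map; the hostnames and ports are invented:

```python
from collections import defaultdict

# Hypothetical captured flow records: (source, destination, destination port).
flows = [
    ("web-01", "app-01", 8080),
    ("web-02", "app-01", 8080),
    ("app-01", "db-01", 5432),
    ("app-01", "db-01", 5432),  # repeated flows collapse into one edge
]

# Aggregate flows into a dependency map: who consumes which service.
dependencies = defaultdict(set)
for src, dst, port in flows:
    dependencies[(dst, port)].add(src)

for (service, port), consumers in sorted(dependencies.items()):
    print(f"{service}:{port} is consumed by {sorted(consumers)}")
```

Real tools do far more (time windows, protocol decoding, grouping), but the core output is this kind of provider-to-consumer graph.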


Ultimately, our goal is to understand how our applications are linked to one another, which provides our application dependency maps. As you can see, these mappings will assist us in our application performance monitoring. As our application dependencies are mapped out, we can get the bigger picture of our applications.


How Do We Get Started?


How do we get started with ADM? In most cases, agents are used on endpoints. For network gear, either taps or built-in agents are used to capture the network flows. Once those are in place, the data will be aggregated and sent to a centralized solution, which will analyze the data. Once that data is analyzed, we can visualize our dependencies and flows. This is also a great opportunity to bring together all teams who support any of the components discovered. By bringing everyone together, we can collectively understand the results and determine what is important and what is not. The goal of our application dependencies is to ensure that what is needed is available, and what is not is blocked or disabled. This will depend on the security posture of the organization you work within, but it is good practice to take this approach. We will go into more on security around this topic in the next post, so stay tuned for that.


Depending on the solution used for analyzing the data to provide our application dependency mappings, there might be some very interesting things we can do with the data. Some solutions will allow you to create ADM profiles which can be versioned for rollback functionality. A solution like this provides the ability to analyze the data flows, present the dependency mappings, and allow you to create a profile with the necessary communications in place while blocking the unnecessary ones. This is more security-focused, but good to point out here. The real value is that you can create an initial profile, and then later on, create a new updated profile and apply it. By having the ability to roll back, you can easily return to a known working profile on the off chance something breaks. Another very interesting thing you might do is run through different scenarios using the analyzed data and see how those scenarios may affect your application. Depending on the amount of historical data maintained, you would have much more granularity in your scenarios. By having this ability, you can ensure that a new profile will not cause issues once applied.
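The profile-and-rollback idea can be sketched as follows; this is a minimal illustration of the concept, not any particular vendor's API:

```python
# Versioned ADM profiles with rollback: each applied profile is kept in
# history so a known-good one can be restored if a new profile breaks
# something.
class ProfileStore:
    def __init__(self):
        self.history = []  # applied profiles, oldest first

    def apply(self, allowed_flows):
        self.history.append(set(allowed_flows))

    def current(self):
        return self.history[-1]

    def rollback(self):
        if len(self.history) > 1:
            self.history.pop()  # drop the newest, restoring the previous
        return self.current()

store = ProfileStore()
store.apply({("web", "app", 8080), ("app", "db", 5432)})
store.apply({("web", "app", 8080)})            # new profile blocks app -> db
print(("app", "db", 5432) in store.current())  # False
store.rollback()                               # something broke; roll back
print(("app", "db", 5432) in store.current())  # True
```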




We have only briefly touched on the benefits of using a solution to assist with our application dependencies. However, I hope the value is clear. There is much more to it, but this highlights the initial value that can be provided. I mentioned a few times that certain topics were security-focused, and we will touch on those more in the next post.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting blog that reviews the fundamental steps every federal IT pro should take to create a strong security foundation. I agree with the author that an overarching plan that encompasses multiple layers of security can serve as the most effective strategy.


The Five Fundamentals


1. Create an information security framework


A security framework is essentially your security blueprint. It encompasses a series of well-documented policies, guidelines, processes, and procedures about how best to implement and manage ongoing security within your agency.


There are several established security frameworks, but the U.S. government usually follows the guidelines set forth by the National Institute of Standards and Technology (NIST). Specifically, agencies use NIST SP 800-53 to comply with the Federal Information Processing Standard (FIPS) 200 requirements.


Use NIST guidelines to establish a security framework that assists with successfully detecting and responding to incidents in a quick and efficient manner.


2. Develop a consistent training program


Just as important, end users must understand the importance of practicing good cybersecurity hygiene—and the ramifications of poor security practices.


Regular, consistent training across the agency is key.


Train your team to understand how to recognize potential vulnerabilities quickly, and how to find the gems of important information within a sea of security-related alerts and alarms. Train end users on topics like creating strong passwords, identifying phishing emails and other social engineering attacks, and what information can and cannot leave the agency.


3. Outline policies and procedures


Creating the security framework is one thing; ensuring that everyone understands the policies and procedures associated with that framework is as important as the framework itself.


Sharing this information with all staff, security teams, and end users is often best done upon hiring. Outline policies and expectations clearly from the start to avoid any misunderstandings.


4. Monitor and maintain IT systems


Part of good security hygiene is making sure you’re up-to-date on all hardware and software updates and patches. New malware is introduced every day; ensuring all your systems are up to date should be your baseline.


Another important form of maintenance is having a strong backup system in place. If a breach occurs and data is compromised, a good backup system will help minimize data and productivity loss.


Finally, even if all end users are up to date on security training, there is always the possibility they will violate security policy. Federal IT security pros must be able to monitor end user activity to mitigate this risk and catch policy violations before they become breaches.


5. Stay current with government mandates and regulations


Some of the most common are the Federal Information Security Management Act (FISMA) of 2002, Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI DSS) for agencies that deal with any kind of credit card transaction, and NIST regulations.




As the basics fall into place, expect more layers to become necessary to shore up your federal cybersecurity strategy. Adding layers like perimeter defense, planning for device failure, and enhanced monitoring for insider threats can help build on a stable foundation, and result in a safer and more secure agency.


Find the full article on Government Technology Insider.



Where are you in your career arc right now? Trusting simple statistics, I can say that the majority of you are either just starting out, or somewhere in the middle. Relatively few of those reading this essay will be at, or near, the end (whether that means retirement is on the immediate horizon or you are looking at pivoting into something completely different).


So, for that majority of you who are not-done-yet, I want to ask you: what will you leave behind? How are you planning for your graceful exit? How are you ensuring that your colleagues (those who also aren't-done-yet) will continue to be successful without you to call on?


To be honest, this wasn't a question I’d considered very much, until I met a very special person in the SolarWinds booth at Cisco Live! last year.


He had clearly been around the data center a few times, about a decade and a half ahead of me, career-wise. We spent a few minutes amicably playing what I call "IT sonar"—where you get the depth and breadth of someone's experience by reminiscing on the tech you've seen come and go.


But then he turned to the demo station, because he had a few questions. The things he wanted to know were interestingly specific. They didn't center on the latest-and-greatest. He'd heard about our most recent features, and he and his team were using them. He was at Cisco Live!, and knew about THEIR most recent announcements, but wasn't particularly concerned about how WE could monitor THAT.


This was notable because, if I'm being honest, people visiting the SolarWinds booth usually fall into three categories.

  1. People who want to know about our stuff.
  2. People who want to know if OUR stuff can help with this OTHER stuff they just heard about and/or are buying.
  3. Fans who just want to say, “HI,” bask in the orange glow of #MonitoringGlory, pick up our latest buttons or stickers, and pose for a selfie.


Curiosity piqued, I asked him what was up. What was he REALLY trying to do? His answer came as a surprise, "I built this thing, but I'm going to retire one of these days," he said.


By "this thing" he meant all of it. The whole IT environment at his company. He took them from dumb terminals to PCs running Arcnet to Novell servers on Ethernet and all the way to today. He had a hand in all of it. He knew where the important bits were and where the cables were buried.


What got me most was the WAY he relayed this. He wasn't bragging. He wasn't justifying himself. He wasn't bringing up long-forgotten accomplishments as a way of proving he was still relevant. He was calm, confident, and clearly didn't need to prove anything. He told me he'd found his niche at the company long ago, and worked hard to gain and keep the trust and respect from both management and his peers. This gave him the freedom to make decisions in his lane, as well as reach out and help folks whose work fell outside that lane. He also described how he had worked to keep his skills sharp through the successive waves of IT trends, without falling into the bad habit of chasing the latest fad.


The problem, he told me, was that he realized there was no way to teach his coworkers—some of whom were young enough to be his grandchildren—everything that was in his head. And he realized it would be a waste of time to do so.


"There's just stuff," he said, "that isn't worth anyone's time to learn, or to carry around on the odd chance that it will be important a year or three from now. But even so, that stuff is still running. And it's going to break. And they'll need to know about it when it does."


I started to make a joke about documentation, and he told me that was just as bad as trying to teach it to somebody. Burying a piece of information, whether in a binder on a shelf or on a page in a labyrinthine SharePoint site, is a great way to feel good about knowledge that nobody is ever going to read.


He explained that his idea was to replace historical knowledge—what he called "tribal memory"—with tools that would keep track of the "what" (the devices, applications, and elements); handle the "when" by notifying the right people at the right time (meaning when something had gone wrong); and then point them in the direction of "how" by including links to walk-throughs, diagrams, or even just having very clear descriptions in the body of the alert message or ticket.
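His what/when/how idea might be sketched as an alert definition like the one below; the device name, address, and runbook URL are purely illustrative:

```python
# An alert that carries the "what" (device and element), the "when"
# (condition and who to notify), and the "how" (where the knowledge lives).
alert = {
    "what": {"device": "core-sw-01", "element": "power supply 2"},
    "when": {"condition": "status != up", "notify": ["netops-oncall@example.com"]},
    "how": {
        "runbook": "https://wiki.example.com/runbooks/core-sw-psu",
        "summary": "Spare PSUs are in the cage on B2; see diagram in runbook.",
    },
}

def render_ticket(alert):
    """Format the alert so the 'how' travels with the notification."""
    what = alert["what"]
    return (f"{what['device']}: {what['element']} alert. "
            f"Runbook: {alert['how']['runbook']}")

print(render_ticket(alert))
```

The point is that the knowledge arrives with the alert, instead of sitting in a binder nobody opens.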


His job was to understand the "which." Which of the tasks and technical areas under his purview were repetitive busy work that could be automated (mostly) away, and which were skills that he needed to ensure the team acquired.


I joked about how the cool kids today would call it “technical debt.” He took that gag and ran with it, explaining that his goal, like lots of folks contemplating retirement, was to pay off his entire technical mortgage and have a title-burning party.


With that frame of reference, we had an amazing conversation. I'd like to think I was able to help him out a little.


But when he walked away and I started scribbling the notes that would eventually turn into this essay, I could think of just one word to describe what it all represented: "Legacy." For IT practitioners, that has some very specific connotations—technology from a bygone era that’s still around, still requires support and maintenance, but is no longer a platform on which new solutions can be built.


But of course, there’s the more universal meaning to “legacy:” The things (whether physical or intellectual) that we leave behind after we’re gone. And I realized that our documentation, our code, our integrations, and our installations are no less a legacy than the money, photos, investments, homes, cars, antiques, artwork, or businesses left to others when we die.


And as I was scribbling my notes, I thought about making THAT the end of this essay—something like, "What will be left behind when you leave? Will you leave your inheritors saddled with your technical debt? Are you thinking about how that legacy reflects on you?"


But then it occurred to me that, as impressive as the tools and automation this guy was building were, they weren't the most important thing he was leaving his team. That wasn’t his legacy at all, not by a long shot.


I remembered the impression he left with me: calm, confident, not needing to prove anything... of having found his niche... of having gained and kept the trust and respect from both management and his peers. Recognizing how to make decisions in his lane, and using his secure position to help others.


Maya Angelou famously said,


“I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”


So NOW I will ask you, all of you "not-done-yet" readers as well as the "almost-there" ones, to take a moment before you close this essay and ask yourself, "What is MY legacy going to be? What am I doing, today, that will be left behind when I leave?"

We made it! This is the final post in this six-part series mapping the cybersecurity landscape through a new reference model for IT infrastructure security. Thank you for coming along on this journey with me. Now it’s time to take a look at where we’ve been, review the map itself, and discuss how to put it to work in your own environment.


We started the series by reviewing some of the most popular and useful models and frameworks currently available. While all of these can serve as maps to help us build a secure infrastructure, they leave us with a couple fundamental questions unanswered:

  • Which tools provide defense in depth, and which are just causing duplication?
  • How do I compare competing products and the protections they provide?


To help answer those questions, we needed a clear way to map where individual security tools fit into a comprehensive security infrastructure. That’s where the reference model comes in, and the following four posts zoomed in on each of the four domains of IT security:

  • Perimeter - Network Security, Email Security, Web Security, DDoS Protection, Data Loss Prevention, and Ecosystem Risk Management
  • Endpoint & Application - EPP / EDR, Patch & Vulnerability Management, Encryption, Secure Application Delivery, Mobile Device Management, and Cloud Governance
  • Identity & Access - SSO (IAM), Privileged Account Management, Multi-Factor Authentication, CASB, Secure Access (VPN), and Network Access Control
  • Visibility & Control - Automation & Orchestration, SIEM, UBA / UEBA, Device Management, Policy Management, and Threat Intelligence


Now we can zoom out and take a look at the full picture:

Reference Model for IT Infrastructure Security


Think of this like one of the maps you might find in a mall or other public area, telling you confidently: you are here.


This particular map aims to give you the ability to answer those two stubborn questions above. By knowing which domain and category within the InfoSec landscape you are dealing with, you can evaluate various tools in apples-to-apples comparisons. When the latest hot security company or product comes on the market, you can judge it against your existing infrastructure by placing it on this map.


How many network security devices, SSO services, or threat intelligence providers you need is unique to each organization. However, there is a big difference between intentionally adding depth to your security posture and unwittingly adding duplication. Use this model to ensure you only add the tools you really need, filling a gap or replacing a less adequate solution.


Speaking of gaps, that's another great way to use this map. There’s a third important question we can answer: Does your current security infrastructure provide the protection you need? Once you understand your organization’s risks and goals, you can use this model to ensure that all the right boxes are filled with a product or service that does the needed job.


Not every company needs a tool in each of these categories, of course, and some of you may need multiple protections in one or more of the categories. Also note that there are various ways to provide those protections. Each of these categories can be addressed by technical tools (hardware, software, and services), legal tools (e.g., contracts), organizational tools (policies and procedures), and human “tools” (like training and awareness), or a combination of two or more of these countermeasures. The key is understanding where real gaps exist, and what’s available to fill them.
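One way to put the map to work programmatically is to record each deployed tool's cell in the model and then scan for duplication and gaps; the sketch below uses invented tool names and only a slice of the model's categories:

```python
# A slice of the reference model: domain -> categories.
model = {
    "Perimeter": ["Network Security", "Email Security", "Web Security"],
    "Identity & Access": ["SSO (IAM)", "Multi-Factor Authentication"],
}

# Each deployed tool placed on its (domain, category) cell.
deployed = {
    "FirewallVendorA": ("Perimeter", "Network Security"),
    "FirewallVendorB": ("Perimeter", "Network Security"),  # depth, or duplication?
    "SSOService": ("Identity & Access", "SSO (IAM)"),
}

# Group tools by cell, then flag crowded cells and empty ones.
coverage = {}
for tool, cell in deployed.items():
    coverage.setdefault(cell, []).append(tool)

duplicates = {cell: tools for cell, tools in coverage.items() if len(tools) > 1}
gaps = [(d, c) for d, cats in model.items() for c in cats if (d, c) not in coverage]

print("Possible duplication:", duplicates)
print("Gaps to evaluate:", gaps)
```

A crowded cell is not automatically a problem (intentional depth is fine); the value is that the question becomes visible and can be asked deliberately.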


Finally, we must always remember that the map is not the terrain. While I have found this model to be extremely useful in many discussions with CIOs, CISOs, IT management, and security practitioners, it can’t tell the whole story. Think about the NIST Cybersecurity Framework functions of Identify, Protect, Detect, Respond, and Recover: this model sits mostly in the Protect and Detect realms. You still need talented staff or third parties to identify your most valuable assets, your compliance requirements, and your risks, goals, and vulnerabilities. Not to mention responding to attacks that do occur and recovering after an incident with policy updates, tool refreshes, or public relations.


Now it’s up to you – how will you use this new resource to better protect your organization?

Welcome to the week 1 wrap-up of the 2018 December Writing Challenge! If you missed the initial announcement, the structure of our third annual community event has changed this year. Instead of offering a new word each day on which everyone can reflect, we're taking a single idea and hearing everyone's unique view of it: “What I would tell my younger self.”


You can head over to the special forum we've set up just for this event or start with my summaries below and follow the links wherever your whimsy leads you.


I'm dividing my summary into two sections: the authors and the comments.




The Authors




Leon Adato, SolarWinds Head Geek and THWACK MVP

Day 1: Slow Down You Crazy Child

I had the honor of leading off this year. I wrote a few things. I would love it if you checked it out, and maybe even left a comment or two.



Joe Kim, EVP Engineering and Global CTO

Day 2: Navigating Ambiguity is Critical as a Technologist

Joe's message is bold, broad, and not just applicable to his past self but to his future self as well, and therefore advice we all can follow now.


“The future is hard to guess, so don’t.”


Along with that he offers two more pieces of solid advice:

  1. Focus on the “HOW.”
  2. Continue to add to your toolbox.



Charlcye Mitchell, Product Manager

Day 3: Show Up & Pay Attention, Inspiration is Everywhere

Charlcye leads the team responsible for the SolarWinds online demo (demo.solarwinds.com). This quote immediately jumped out at me, not only because it was incredibly motivating, but in one sentence it captures the spirit of that team and what they accomplish every day: "Inspiration is everywhere, but you’ll rarely see it if you aren’t looking for it."


But she didn't stop there. She went on to challenge her younger self with four more questions. (As some of you know, I'm a big fan of The Four Questions):

"Find an unanswered question that excites you; Fill your time with unfamiliar experiences and learn new skills; Teach other people; Discover more things to be grateful for."



Matthew Reingold, MVP

Day 4: Post-Recollection

Matt kept it short and sweet, and this line really caught me short: "Don’t let the bad stuff make you forget about the good stuff." We also got to hear a bit of the background and lessons learned that led to this being such an important piece of advice for him, personally.



Nick Zourdos, MVP

Day 5: Burning a Candle at Both Ends...With Napalm

Nick first acknowledged the obvious reality of offering advice to our past selves, which would effectively change the trajectory of our life. Not only that though, but he underscored how inseparable our past experiences are from our current selves, and how that isn't a truth that can be casually waved away: "The point I’m trying to make is that the past makes you… you, and that’s worth something."


With that point acknowledged, however, he nevertheless offered some heartfelt words that might have eased the path for him in his younger years:

"Make time for life. Friends, family, relationships, and your own mental health are so much more important than good grades in your college years."



Thomas Ianelli, MVP

Day 6: Let's Go Get Ice Cream and Have a Little Chat...

In a pattern that is familiar to any of us who work regularly with our MVP community, Thomas took Nick's idea even further, moving past the thought that changing our past selves does us a disservice, and digging into the idea (with citations and references) that we may not even remember our past selves clearly:

"What do you really know of this person anyway? They are as much a stranger to you, as you are to them. They are just the collection of stories you have recited for years, about significant moments."


This led him to give voice to something that I think we all, as IT practitioners and especially those who work in some type of teaching capacity, have run up against:

"It is difficult to remember what it was like not to know."


Finally, a footnote to his whole analysis is worth repeating here, because it is wonderfully geekworthy:

“*Any discussion on the merits or risks of time travel should include a warning that anything changed in the past can have unforeseen ripple effects dramatically altering the future, including your very own existence in the present."



Jez Marsh, MVP

Day 7: Always Remember

Like many of our lead writers so far, Jez struggled with the far-reaching implications of altering the timestream. Nevertheless, he found a message which was both specific to him and yet general enough that he felt it wouldn't cause too many ripples: "Est Sularus Oth Mithas,” a quote from the Dragonlance Chronicles meaning, "My honour is my life."


I found the meaning behind this message to be wonderfully insightful. "We are, at times, our own worst enemy. Receiving this bolt from the blue at that time of my life would help me defeat the lingering self-doubt and regain my mojo a little sooner."




The Comments



***** Day 1 *************



By definition, the people nearest you are the most important. They are the ones who chose to show up, to stay, to be in your company. Give that choice the respect it deserves.

Such a hard, yet important lesson. Being mindful and not taking your support system for granted is a huge sign of emotional maturity that we all should be striving for.



Dear Younger Me:

A world of good will come from treating everyone as you want to be treated.  A world of hurt follows if you don't.


Don't get tangled up in things that aren't enjoyable and interesting and beneficial to someone.  While you have your entire life ahead of you, it's too brief to waste on petty squabbles or major ones.  Spend no time worrying about things you can't change.  They're in the past.  Learn from them, modify your behavior so you don't repeat them, and move on.


Take a note from a song James Taylor recorded; consider making it your motto.

"The secret to life is enjoying the passage of time. Any fool can do it."



I'm an auto racing fan and there's more than one story and/or illustration of drivers learning that you slow down to go fast. Just what does that mean? If you drive just as fast as you can, you don't hit your lines, you don't brake at the best times, you accelerate too hard, etc. When you slow down - in other words, focus on doing things right - you hit the best lines because that is a focus, you brake at the best times because that is a focus, you accelerate best because that is a focus. So, slowing down actually makes your lap times better. The same too with IT work. I've been guilty of rushing through a project only to later see my mistakes and have to redo or repair what I've done. When we slow down and take things carefully and methodically, we are at our best.


***** Day 2 *************



Thanks for the heads-up Joe.

At this point in my life I was about to stop diversifying and wanted to narrow my development in this field (IT), but after reading your experience, I now believe that the diversification I am aspiring to achieve will further my development more than aiming for narrow career development.

Thanks for laying out there this game-changing perspective.



This is a tricky one for me. My previous department was dissolved and I was moved to a different team, after 11 years with my company. It's hard to forge your own path when you're held inside a box and not allowed any growth at all. It fosters complacency and apathy. I think some people might read "so don't (predict the future) and presume that nothing can be done...” I've been there. I'm fortunate to have leadership that wants to foster growth in many directions now.



This is a terrific read.  Joe outlines and explains two specific ways we can be better people going forward.  I appreciate that he is concise, makes his points and supports them.  I'll read this more than once.


***** Day 3 *************



So often people view everything from the angle of "what's in it for me." Once we begin to look at what we can bring to others our purposes will become more apparent.



I find point 3 most important, giving folks guidance on their own journey...



In my experience (YMMV) I always learn the most when I'm teaching others.  Sometimes you're literally only slightly more knowledgeable than the person you're teaching, but having to teach a concept requires you to wrestle with the knowledge, the right way to convey it, and to come up with multiple analogies or ways of communicating it. You don't really understand something until you can teach it, and teaching it always highlights deficiencies in my own learning or understanding, as well as helping to bring more clarity to my knowledge.


***** Day 4 *************



One of the long-lasting lessons I've taken from a leadership course I attended in the Marines:

At any given time you have up to 360 choices of direction for lateral movement. Pick one. People who sit in the same spot are the easiest to hit.

(paraphrased to clean up for public consumption)

The point being, when faced with any challenge, you have to make a decision, and preferably quickly, because ANY decision is action, and action begets action. Waiting for the perfect situation usually results in a lot of waiting, in my experience anyways.



Of all the words of mice and men, the saddest are 'It might have been.'



I agree, hesitation is natural and something I am very good at. I think it has helped more than harmed. It gives me the chance to think before I leap.


***** Day 5 *************



"Diamonds may be formed under pressure, but never forget they are not formed overnight."



How do I deal with burnout? I literally ran up a mountain! I am not suggesting that for everyone but it has made a huge impact on my approach to life and work since. It was also a #bucketlist and fed into a lifelong desire that I mentioned in my Day1 post.

I, too, went through intensely stressful periods of my life that have altered my personality permanently... for the worse. Unfortunately I was never able to return, and I do miss my old self.



Life is full of choices; whichever you choose can be viewed as the wrong one.  Live life for the moment, and do what makes you happy.  Live life without regrets.  Having worked 16-18 hour days to try and "help" the company, only to be overlooked and undervalued, you soon find out to stop.  Take stock.  And most importantly, do what makes you happy, as no-one else in life will.  If that means working loads and learning, great. But then don’t worry about the things you may miss by not experiencing them.  Alternatively, work hard, play hard, and use your time wisely.  There is no single simple answer, and there is no silver bullet.


***** Day 6 *************



Choices are made, reality hits hard when you grow up.  Time goes on and that broadens the vision to make you realize what you could have done better.



What does the younger us know that we don't? As you grow you change, but was it for the good? Are you still the same person? What a great concept to think about. I moved around a lot when I was young, being in a military family. As we moved I would meet new friends who would have an impact on me and change who I was. Who could I have been if I stayed in Hawaii, where I was born? Would I still be the man I am, in the tech field, doing what I am doing? This writing challenge gets me thinking. Please stop, I don't like this.



Great post tomiannelli. It's funny, you absolutely nailed something that has been semi-haunting me (in a good way) recently. I've been thinking a lot about memory bias and am I falling prey to it, and to what extent current events and circumstances are biasing my memories one way or another.  It's an interesting question to think about, and like you, I'd like to go back and ask my younger selves at various points in time what my thoughts and feelings were at the time they were happening.  Not sure if a diary would help or not - you don't always see what's salient at the time. Anyway, thanks for the thought-provoking post.


***** Day 7 *************


Let's do our best to make the future the best possible place for our next generation. :-)

Jez I think you're the first person to mention timing. I'm surprised none of us other ultra-analytical IT pros thought of that!

Very nice! If I could, I think I'd do the opposite and have my younger, more optimistic self remind my current self that life is really pretty good and that I should be more appreciative.

I’ve covered a lot in this series of posts around artificial intelligence (machine learning): from the beginning (Not Another AI Post…) to why I love it (My Affection for AI) to what I hate about it (My Animosity Toward Artificial Intelligence) to what scares me about it (AI-nxiety) and even why we need to decide what words to use when talking about it (Words Matter). One common theme is that a lot of the discussions, when it comes to artificial intelligence and machine learning, aren’t around the technology directly. In my experience, many of the conversations I’ve had about AI and ML center on the politics, marketing, high-level capabilities, and immediate problems that will be solved. When it comes to the actual technology and how to implement, feed, and care for it properly, things get complicated. I’m not a data scientist or a software developer, so there are a lot of concepts I can’t--or don’t care to--wrap my head around. Most people don’t care about the methods, just the results, and I’m no different.


I covered a few scenarios along the way that have some huge benefits with things like large-scale historical analysis to create event correlations of disparate system logs, as well as failure predictions with infrastructure systems, but those are things we can get now (with enough time and money). What does the future hold for artificial intelligence and machine learning within the enterprise network and systems space? It’s a massive field that’s growing and changing all the time with newfound capabilities, so predicting what is coming is an extremely tough thing to do. But I can tell you what I’d like to see (and maybe give some ideas to a vendor or two stealthily working on a new product).


First, as I covered in my post "AI-nxiety," I want to be able to inform a system of my office politics. Whether it’s as simple as supplying an organizational chart or as complicated as mapping MAC addresses and applications to varying roles, this is something that is definitely needed before I’ll begin to let HAL 9000 take the wheel. By creating a system that assigns a value based on importance to a user or application, issues can be routed better and responded to faster when needed, or left for tomorrow when not--automatically. That’s the whole point of these systems, limiting and avoiding human intervention. I could go on with this topic and possible ways to configure or even teach a system, but I'm staying high-level here.
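As a purely hypothetical sketch of what "informing the system of my office politics" could look like, here is a toy triage function. Every role, application, weight, and threshold below is invented for illustration; a real system would derive these from an organizational chart or role mappings:

```python
# Hypothetical sketch: weight an issue by who and what it affects, so an
# automated system can decide "respond now" vs. "leave it for tomorrow".
# All role/application weights and the threshold are invented examples.

ROLE_WEIGHT = {"executive": 3, "engineer": 2, "staff": 2, "guest": 1}
APP_WEIGHT = {"erp": 3, "voip": 2, "email": 1}

def triage(role: str, app: str, threshold: int = 4) -> str:
    """Return an action based on the combined importance of the
    affected user role and application. Unknown names default to 1."""
    score = ROLE_WEIGHT.get(role, 1) + APP_WEIGHT.get(app, 1)
    return "respond-now" if score >= threshold else "defer"

print(triage("executive", "erp"))   # high-value combination
print(triage("guest", "email"))     # low-value combination
```

The design point is simply that once importance is expressed as data, the "route it now or leave it for tomorrow" decision can be made without human intervention.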


Second, I want a system that sits 100% in my company’s data center. I deal with a lot of sensitive data, and giving a cloud provider unfettered access to all of it just doesn’t sit well with me (or the regulators that oversee my networks). There has been a lot of development in the custom silicon space as of late, and if these chips are even remotely affordable, I don’t see an issue with bringing this kind of workload in-house. The usual response is "Well, we won’t monitor that information," or "We only grab metadata." No dice. It’s important and needs to be monitored and under some regulatory bodies, if it touches the data, it’s in scope. This includes gathering the data.


Third (and last for now), I don’t want a laser-focused AI that only sees applications or servers or the WAN or the WLAN. I want something that sees it all and takes everything into account. As a wireless architect, I’m no stranger to the difference between actual issues and issues that present to clients deceptively. When a RADIUS or DHCP service fails, all the user knows is that the wireless is down. I need something that sees into the back-end, the front-end, and everything in between, constantly watching and correlating every single packet along the way. From the border firewalls to the cloud apps to the wireless clients, it’s all important and should be treated as such.


Maybe this stuff is out there. If so, those vendors need to step up their marketing. Maybe if I cobble a few solutions together I could get my wishlist. If so, those companies should look at partnering up and showing off what they can do together. If you’ve got any feedback to offer, leave a comment below. And if you’ve got anything to add to the wishlist, definitely leave a comment below.

Home from Las Vegas and AWS re:Invent for 60 hours, then I’m back on the road. In Orlando this week for SQL Live, where I have four sessions to deliver. I’ll also be working the SolarWinds booth. If you are attending SQL Live, let’s connect and talk data.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


AWS Says It’s Never Seen a Whole Data Center Go Down

AWS is a master of spin when it comes to their product statements. Here they are having fun drawing a fine line between “disaster” and “event.” The customers affected by AWS outages don’t care whether the whole building was down or not. It’s a disaster for them, period.


Amazon gets into the blockchain with Quantum Ledger Database & Managed Blockchain

Here we see AWS leveraging the use of the word “quantum.” It’s meant as “small,” and in no way do they want you to confuse it with “quantum computing.” And, as a bonus, it’s combined with Blockchain, for maximum SEO effect.


Blockchain study finds 0.00% success rate and vendors don't call back when asked for evidence

If you are using Blockchain successfully, please contact the Register and let them know.


Can blockchain co-exist with GDPR? It’s complicated

Ah, the joy of working with immutable transactions.


Half of all Phishing Sites Now Have the Padlock

The padlock just means traffic to the website server is encrypted. It offers no protection about who is at the other end. You could be having an encrypted discussion with Satan.


Marriott breach exposes more than just customer info

This story is still evolving, but I’d expect things to get worse as details emerge. This has potential to be a case study in cybersecurity worst practices.


Malware vector: become an admin on dormant, widely-used open source projects

Not just malware, but adware, too. Open source projects are susceptible to being hijacked, in legitimate ways, by outsiders who see an opportunity to use the project to further their own needs. It’s like a hostile corporate takeover, but without the excitement of a shareholder vote.




I really want Software Defined Networking (SDN), or something like it, to be the go-to approach for networking, but are we too tied to our idea of what SDN is for us to get there?


The Definition


Almost ten years ago, in 2009, Kate Greene coined the term Software-Defined Networking in an article describing the newly created OpenFlow specification that would be released later that year. The idea was revolutionary: decouple the forwarding plane from the control plane and move the latter to a centralized controller. The controller would then manage the forwarding plane of the individual devices in the network from a global perspective. This would allow the entire network to be managed via a single interface to the controller. For some time following this, SDN became synonymous with OpenFlow, but the philosophy has since outgrown that implementation.
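To make the decoupling concrete, here is a minimal sketch of the idea: a controller with a global view of the topology computes a path and emits per-switch match/action entries. The rule format is invented for illustration; real OpenFlow uses a binary wire protocol, not Python dictionaries:

```python
# Illustrative only: the controller (centralized control plane) decides,
# and each switch (forwarding plane) merely receives a match/action rule.

def compute_flow_rules(path, dst_ip):
    """Given an ordered list of (switch, out_port) hops toward dst_ip,
    produce the forwarding entry each switch along the path needs."""
    rules = {}
    for switch, out_port in path:
        rules[switch] = {
            "match": {"ipv4_dst": dst_ip},          # what traffic this covers
            "actions": [{"output": out_port}],      # where to send it
        }
    return rules

# A hypothetical three-hop path toward 10.0.0.5:
path = [("sw1", 3), ("sw2", 1), ("sw3", 7)]
rules = compute_flow_rules(path, "10.0.0.5")
print(rules["sw1"])
```

The point is that no switch runs its own routing logic here; the global view lives entirely in the controller.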


A Cloud Technology?


In an admittedly questionable Wikipedia page, SDN is defined as "an approach to cloud computing that facilitates network management and enables programmatically efficient network configuration in order to improve network performance and monitoring." This is an interesting perspective, considering that OpenFlow appears to have been developed with large service provider networks in mind. So where does it go from being a service provider technology to a cloud technology? Large service providers and cloud (particularly public cloud) providers have one thing in common: scale.


In previous articles, I've discussed network automation in the cloud as a requirement rather than a desired state. Arguably, large networks of any sort share this property. When working at scale, there really isn't any other way to do things effectively.


This, of course, doesn't mean that the approach isn't desirable outside of large-scale environments. Still, need drives progress and the market focuses on the need.


Silo Busting


Since I began my career in networking (too) many years ago, technologies have been placed in seemingly arbitrary categories, and vendors have tended to develop equipment with feature sets that follow these silos. Invariably, there's bleed from one category to another when new requirements surface. So why are we maintaining these categories in the first place? Networking is networking. If the solution for an enterprise business requirement is traditionally a data centre networking or service provider networking technology, use it.


For many years the IS-IS routing protocol was considered a service provider technology. Now, with its ability to handle IPv4 and IPv6 under a single routing architecture, it's seeing a resurgence in the enterprise.


MPLS VPNs have mostly been in the service provider category, but are increasingly seen in enterprise networks at organizations that need to support franchise network connectivity over the parent organization's network.


Shortest Path Bridging (SPB) was developed as a data centre networking technology, but is arguably an ideal replacement for Spanning Tree Protocol (STP) in general.


We need to think beyond the silos and look at networking as networking if we're going to escape the current state of micromanaging equipment. This means bringing SDN out of the cloud and service provider categories.


Delegation of Control


One of the key concerns about SDN that I've heard over the years is the problem of relying on a controller (or cluster of controllers) to make forwarding decisions. This approach is really good for standard routing and network functions that can be addressed globally. It falls down a bit when it comes to things like security policies at the edge, policy-based routing, and other exception-based items that are device-centric rather than network-centric.


Can we have an SDN architecture where the control plane is still distributed, but managed at the controller? Is it still SDN? The purists may argue, but in the same vein as the silos above, it doesn't really matter. We may need another term for it, but SDN can work for now, and here's why.


An Imperfect Dream


When I first considered writing this article, I was running under the working title of "When SDN Isn't" because I was frustrated with the number of solutions that purported to be SDN, but really weren't for various reasons. Some of them did not centralize the control plane under a controller. Others didn't provide open northbound APIs into their controller. Now I'm starting to think it's time to expand the practical definition a bit.


At its core, SDN works by allowing software to define requirements to the controller via a northbound API. The controller then programs the component devices or virtual devices via a southbound API. Taking the actual term Software Defined Networking literally, these are the key requirements.
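As a toy illustration of that layering, here is a hedged sketch in which every API name is a hypothetical stand-in, not any real controller's interface: software states an intent via a northbound call, and the controller turns it into device-level programming via southbound calls:

```python
# Minimal sketch of the SDN layering described above. The method names
# are invented stand-ins for a controller's northbound/southbound APIs.

class Controller:
    def __init__(self, devices):
        self.devices = devices          # device name -> list of installed rules

    def southbound_program(self, device, rule):
        """Stand-in for the southbound API: program one device."""
        self.devices[device].append(rule)

    def northbound_connect(self, src, dst):
        """Stand-in for the northbound API: express the intent
        'allow src to reach dst' and let the controller decide
        which devices need which rules."""
        for device in self.devices:
            self.southbound_program(device, f"permit {src} -> {dst}")

ctl = Controller({"edge1": [], "core1": []})
ctl.northbound_connect("10.1.1.10", "10.2.2.20")
print(ctl.devices["edge1"])
```

The caller never touches a device directly; whether the controller centralizes the whole control plane or merely steers distributed ones is hidden below this interface, which is exactly the flexibility argued for above.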


If the component devices are programmed at the flow level by a controller that has the entire control plane centralized, and it meets the needs of the organization, that's awesome. If those devices have their own control planes and their decision making is defined at a higher level by the controller, that's just great too, again, so long as it meets the needs of the organization.


The Whisper in the Wires


SDN, or a relaxed definition of it, has the potential to be the holy grail of networking in general, but we're still stuck thinking in networking silos: cloud, data centre, service provider, enterprise, small/medium business, etc. What we want is a central and programmable interface to the entire network and to stop micromanaging devices. How that is accomplished below the controller level should be immaterial.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


IT modernization projects help federal agencies deploy more advanced technologies to enhance efficiency and provide a greater depth of capability. These advancements often provide greater opportunity to leverage automation and allow for stronger IT controls to protect critical assets.


That said, technology upgrades can also create security challenges. In the 2017 SolarWinds Federal Cybersecurity Survey, federal respondents cited three increases in IT security challenges as a result of modernization.


  • More vulnerabilities in new technology stacks (cited by 53%)
  • Burden of supporting new technologies and legacy systems (cited by 51%)
  • Lack of training on new technologies (cited by 50%)


All in all, the survey revealed that 66% of respondents—a full two-thirds—think federal agencies’ efforts regarding network modernization have resulted in an increase in government IT security challenges.


Not modernizing is not an option; that’s understood. Security holes can be far greater in older technologies. So, what’s a federal IT pro to do?


Four steps toward getting the best of both worlds in government IT


Step 1: Enhance IT controls


According to the survey, those agencies that deem themselves as having excellent IT controls have seen a decrease in cybersecurity threats across the board. Conversely, those who say their agencies have poor IT controls have seen an increase in security incidents.


In fact, the same survey notes that 51% of agencies that rate themselves with excellent IT controls say IT modernization has enhanced their ability to manage risk.


Step 2: Ensure compliance


Over two-thirds (68%) of survey respondents said that implementing relevant standards is critical to achieving their cybersecurity targets. In fact, 62% agreed that agencies that merge and balance both risk management and federal IT compliance are more likely to avoid IT security issues.


Step 3: Take advantage of new technologies to enhance security


Remember, IT modernization projects often provide greater automation, stronger IT controls, smaller attack surfaces, and built-in security features. Federal IT pros can take advantage of these enhancements to improve the agency’s cybersecurity posture.


Respondents cited the following as “highly effective” in enhancing network and application security:


  • Identity and access management tools (56%)
  • Endpoint security software (48%)
  • Network admission control (NAC) solutions (46%)
  • Patch management (45%)
  • Configuration management (42%)


Step 4: Training


Historically, one of the greatest sources of security threats to any agency, civilian or military, is careless or untrained users. The threat is not getting any smaller. In the 2017 survey, 54% of respondents cited this group of users as the greatest threat to agency security.


The solution is training, which is particularly important as agencies implement IT modernization projects. The more the federal IT team understands new technologies, the better equipped they are to implement them successfully and take full advantage of the newer built-in security features.




Federal IT pros face many challenges that affect an agency’s cybersecurity posture, from untrained users to budget constraints to a multitude of competing priorities. Ideally, IT modernization should not be one of them. The goal is to implement IT modernization projects that improve risk management protections, rather than increasing security challenges. Developing strong IT controls is the first step in that journey.


Find the full article on Government Technology Insider.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

There Can Be Only One



I’ve heard repeatedly from people in this industry that what we need is a single interface to our infrastructure. For status monitoring, perhaps that’s best represented by the ubiquitous “Single Pane of Glass” (Drink!); for general network management, perhaps it’s a Manager of Managers (MoM); and for infrastructure configuration it’s, well, that’s where it gets tricky.


Configurator of Configurators




I was once hopeful of a future where I could configure and monitor any network device I wanted using a standardized operational interface, courtesy of the efforts of OpenConfig. If you’ve not heard of this before, you can check out an introduction to OpenConfig I wrote in early 2016. However, after a really promising start with lots of activity, OpenConfig appears to have gone dark; there are no updates on the site, the copyright notice still says 2016, the latest “News” was a 2015 year-end round-up, and there’s really little to learn about any progress that might have been made. OpenConfig is a mysterious and powerful project, and its mystery is only exceeded by its power.




I mention OpenConfig because one of the biggest battles that project faced was to reconcile the desire for consistency across multiple vendors’ equipment while still permitting each vendor to have their own proprietary features. Looking back, SNMP started off the right way too, by having a standard MIB where everybody would store, say, network information, but it became clear quite quickly that this standard didn’t support the vendors’ needs sufficiently. Instead, they started putting all the useful information in their own data structures within an Enterprise MIB specific to the vendor’s implementation. Consequently, with SNMP, the idea of commonality has almost been reduced to being able to query the hostname and some very basic information. If OpenConfig goes the same way, then it will have solved very little of the original problem.


Puppet faces a similar problem in that its whole architecture is based on the idea that you can tell any device to do something, and not need to worry about how it’s actually accomplished by the agent. This works well with the basic, common commands that apply to all infrastructure devices (say, configuring an access VLAN on a switchport), but the moment things get vendor-specific, it gets more difficult.
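To illustrate where the vendor-specific difficulty creeps in, here is a small sketch (not real Puppet, and the CLI fragments are representative examples rather than complete configurations) of rendering one common desired state into per-vendor syntax:

```python
# Sketch of the problem described above: a single common intent
# ("access VLAN 20 on this port") must be rendered into each vendor's
# own syntax. The CLI snippets are illustrative, not exhaustive.

def access_vlan_config(vendor: str, interface: str, vlan: int) -> list:
    """Render a generic access-VLAN intent as vendor CLI lines."""
    if vendor == "cisco_ios":
        return [f"interface {interface}",
                " switchport mode access",
                f" switchport access vlan {vlan}"]
    if vendor == "junos":
        return [f"set interfaces {interface} unit 0 family "
                f"ethernet-switching vlan members {vlan}"]
    # The moment a platform isn't covered, automation stops here.
    raise ValueError(f"no translation for vendor {vendor!r}")

print(access_vlan_config("cisco_ios", "GigabitEthernet0/1", 20))
```

The common case is easy; the explicit `ValueError` is the honest part of the sketch, because every uncovered platform or proprietary feature lands exactly there.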


The CoC


It should therefore be fairly obvious that writing a tool that can fully automate a typical heterogeneous infrastructure (that is, one containing a mix of device types from multiple vendors) is a potentially daunting task. Presenting a homogeneous-style front end to configure a heterogeneous infrastructure is tricky at best, and creating a single, bespoke tool to accomplish this would require skills in every platform and configuration type in use. The fact that there isn’t even a common transport and protocol that can be used to configure all the devices is a huge pain, and the subject of another post coming soon. But what choice do we have?


APIs Calling APIs


One of the solutions I proposed to the silo problem is for each team to provide documented APIs to their tools so that other teams can include those elements within their own workflows. Most likely, within a technical area, things may work best if a similar approach is used:

API Hierarchy


Arguably the Translation API could itself contain the device-specific code, but there’s no getting around the fact that each vendor’s equipment will require a different syntax, protocol, and transport. As such, that complexity should not be in the orchestration tools themselves but should be hidden behind a layer of abstraction (in this case, the translation API). In this example, the translation API changes the Cisco spanning-tree “portfast” into a more generic “edge” type:

API Translation At Work
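A minimal sketch of such a translation layer might look like the following. The mapping table and vendor strings are illustrative assumptions, not a complete or authoritative model of any vendor's configuration:

```python
# Sketch of the translation API described above: a generic "edge" port
# type is rendered into each vendor's spanning-tree syntax, so the
# orchestration layer never sees vendor-specific terms like "portfast".
# The table entries are illustrative examples only.

FROM_GENERIC = {
    ("cisco_ios", "edge"): "spanning-tree portfast",
    ("junos", "edge"):     "set protocols rstp interface <ifname> edge",
}

def render(vendor: str, stp_port_type: str) -> str:
    """Turn the generic spanning-tree port type into vendor syntax."""
    try:
        return FROM_GENERIC[(vendor, stp_port_type)]
    except KeyError:
        raise ValueError(
            f"no translation for {stp_port_type!r} on {vendor!r}")

print(render("cisco_ios", "edge"))
```

Keeping this table behind an API is the abstraction argument in miniature: orchestration code asks for "edge," and only the translation layer knows (or cares) that Cisco spells it "portfast."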


There’s also no way to avoid the exact problem faced by OpenConfig: some vendors, models, features, or licenses will offer capabilities not offered by others. OpenConfig aimed to make this even simpler, by pushing that Translation API right down to the device itself, creating a lingua franca for all requests to the device. However, until that glorious day arrives, and all devices have been upgraded to code supporting such awesomeness, there’s a stark fact that should be considered:


Most automation in heterogeneous environments will, by necessity, cater to the lowest common denominator.


Lowest of the Low


Let’s think about that for a moment. If we want automation to function across our varied inventory, then the fact that it ends up catering to the lowest common denominator means that it should be possible to deploy almost any equipment into the network, because the fancy proprietary features aren’t going to be used by automation. While that sounds dangerously like ad-copy for white box switching, the fact remains that if any port on the network can be configured the same way (using an API) then the reality of which hardware is deployed in the field is only a matter of whether or not a device-specific API can be created to act as middleware between the scripts and the device. That could almost open up a whole new way of thinking about our networks...


Abstract the Abstraction


Will we end up dumbing down our networks to allow this kind of homogeneous operation of heterogeneous networks? I don’t know, but it seems to me that as soon as there’s a feature disparity between two devices with a similar role in the network, we end up looking right back at the LCD.


I’m a fan of creating abstractions to abstractions, so that—as much as possible—the dirty details are hidden well out of sight. And while it would be lovely to think that all vendors will eventually deploy a single interface that our tools can talk to, until that point, we’re on the hook to provide those translations, and to build that common language for our configuration and monitoring needs.


What Could Go Wrong?

“Too many secrets.” – Martin Bishop


One of the pivotal moments in the movie Sneakers is when Martin Bishop realizes that they have a device that can break any encryption methodology in the world.


Now 26 years old, the movie was ahead of its time. You might even say the movie predicted quantum computing. Well, at the very least, the movie predicts what is about to unfold as a result of quantum computing.


Let me explain, starting with some background info on quantum computing.


Quantum computing basics

To understand quantum computing, we must first look at how traditional computers operate. No matter how powerful, standard computing operates on binary units called “bits.” A bit is either a 1 or a 0, on or off, true or false. We’ve been building computers based on that architecture for the past 80 or so years. Computers today use the same binary logic that Turing’s machines relied on to crack German codes in World War II.


That architecture has gotten us pretty far. (In fact, to the moon and back.) But it does have limits. Enter quantum computing, where a bit can be a 0, a 1, or a superposition of 0 and 1 at the same time. Quantum computing works with logic gates, as classical computers do, but quantum computers use quantum bits, or qubits. A gate acting on a single qubit is described by a matrix of four elements, indexed by the basis-state pairs {0,0}, {0,1}, {1,0}, and {1,1}. With two qubits, the matrix grows to 16 elements, and at three qubits, 64. For more details on qubits and gates, check out this post: Demystifying Quantum Gates—One Qubit At A Time.
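The element counts above (4, 16, 64) follow directly from the fact that a gate acting on n qubits is represented by a 2^n-by-2^n matrix; a quick sketch:

```python
# The matrix-size growth described above: an n-qubit state vector has
# 2**n entries, so an n-qubit gate is a 2**n x 2**n matrix.

def gate_matrix_elements(n_qubits: int) -> int:
    dim = 2 ** n_qubits          # length of the n-qubit state vector
    return dim * dim             # elements in an n-qubit gate matrix

print([gate_matrix_elements(n) for n in (1, 2, 3)])   # [4, 16, 64]
```

This exponential growth in the state space is precisely what classical computers cannot simulate efficiently, and what gives quantum computers their edge.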


This is how quantum computers outperform today’s high-speed supercomputers. This is what makes solutions to complex problems possible. Problems today’s computers can’t solve. Things like predicting weather patterns years in advance. Or comprehending the intricacies of the human genome.


Quantum computing brings these insights, out of reach today, into our grasp.


It sounds wonderful! What could go wrong?


Hold that thought.


Quantum Supremacy

Microsoft, Google, and IBM all have working quantum computers available. There is some discussion about capacity and accuracy, but they exist.


And they are getting bigger.


At some point in time, quantum computers will outperform classical computers at the same task. This is called “Quantum Supremacy.”


The following chart shows the progression in quantum computing over the past 20 years.


(SOURCE: Quantum Supremacy is Near, May 2018)


There is some debate about the number of qubits necessary to achieve Quantum Supremacy. But many researchers believe it will happen within the next eight years.


So, in a short period of time, quantum computers will start to unlock answers to many questions. Advances in medicine, science, and mathematics will be within our grasp. Many secrets of the Universe are on the verge of discovery.


And we are not ready for everything to be unlocked.


Quantum Readiness

Quantum Readiness describes whether current technology is prepared for the impact of quantum computing. One of the largest impacts, touching everyone on a daily basis, is encryption.


Our current encryption methods are effective because of the sheer time it would take to break the cryptography classically. But quantum computing collapses that time: algorithms such as Shor’s reduce attacks that would take a classical computer billions of years to something feasible, a far bigger jump than a single order of magnitude.


In other words, in less than ten years, everything you are encrypting today will be at risk.
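To make the risk concrete, here is a toy sketch (my illustration, not from the post) of why RSA-style keys are exposed. An RSA public key contains a modulus n = p * q, and its safety rests entirely on how long factoring n takes a classical machine; Shor’s algorithm on a sufficiently large quantum computer removes that barrier.

```python
# Toy illustration: RSA security = the classical cost of factoring.
def trial_factor(n):
    """Classical trial division: instant for toy moduli,
    hopeless for the 2048-bit moduli used in practice."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

# A classic textbook toy modulus (61 * 53) falls immediately...
print(trial_factor(3233))  # -> (53, 61)
# ...but each extra bit of key size doubles the classical search space.
# Shor's algorithm would factor even a 2048-bit modulus efficiently.
```

The point is the asymmetry: classical factoring cost grows exponentially with key size, while Shor’s algorithm grows only polynomially, so bigger keys alone won’t save today’s public-key cryptography.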




Databases. Emails. SSL. Backup files.


All of our data is about to be exposed.


Nothing will be safe from prying eyes.


Quantum Safe

To keep your data safe, you need to start using cryptography methods that are “Quantum-safe.”


There’s one slight problem—the methods don’t exist yet. Don’t worry, though, as we have "top men" working on the problem right now.


The Open Quantum Safe Project, for example, has some promising projects underway. And if you want to watch mathematicians go crazy reviewing research proposals during spring break, the PQCrypto conference is for you.


Let’s assume that these efforts will result in the development of quantum-safe cryptography. Here are the steps you should be taking now.


First, calculate the amount of time necessary to deploy new encryption methods throughout your enterprise. If it takes you a year to roll out such a change, then you had better get started at least a year ahead of Quantum Supremacy happening. Remember, there is no fixed date for when that will happen. Now is your opportunity to take inventory of all the things that require encryption, like databases, files, emails, etc.
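The timeline arithmetic in this first step is often summarized as Mosca’s inequality: if the years your data must stay secret plus the years a migration takes exceed the years until a cryptographically relevant quantum computer exists, you are already late. A sketch with illustrative numbers (these are examples, not predictions):

```python
# Back-of-the-envelope check of the migration timeline described above.
# Often called Mosca's inequality: worry if x + y > z.
def quantum_at_risk(secrecy_years, migration_years, years_to_quantum):
    """True if data encrypted today could still need secrecy after
    quantum computers can break today's cryptography."""
    return secrecy_years + migration_years > years_to_quantum

# Data retained 7 years, a 1-year rollout, Quantum Supremacy in ~8 years:
print(quantum_at_risk(7, 1, 8))  # 7 + 1 > 8 -> False: just barely OK
print(quantum_at_risk(7, 2, 8))  # a 2-year rollout -> True: at risk
```

Run this with your own retention and rollout numbers; the uncomfortable part is that the third input is a guess nobody controls.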


Second, review the requirements around your data retention policies. If you are required to retain data for seven years, then you will need to apply new encryption methods on all of that older data. This is also a good time to make certain that data older than your policy is deleted. Remember, you can’t leave your data lying around—it will be discovered and decrypted. It’s best to assume that your data will be compromised and treat it accordingly.


One thing worth mentioning: some data, such as email, may be stored on the servers it passes through as it traverses the internet. We will need to trust that those responsible for the mail servers apply new encryption methods. Security is a shared responsibility, after all. But it’s a reminder that some things will remain outside your control. And maybe reconsider the data you are making available and sharing in places like private chat messages.



Don’t wait until it’s too late. Data has value, no matter how old. Just look at the recent spike in phishing emails that show you an old password and try to extort money. Scams like that work precisely because even old data retains value.


Start thinking about how best to protect that data. Build yourself a readiness plan now so that when Quantum Supremacy arrives, you won’t be caught unprepared.


Otherwise…you will have no more secrets.

“I heard a bird sing in the dark of December.

A magical thing. And sweet to remember.

We are nearer to Spring than we were in September.

I heard a bird sing in the dark of December.”

- Oliver Herford


Here on the eve of the darkest month, when cultures across the world celebrate light in an attempt to brighten the short days and long nights, we want to bring some illumination to our THWACK® community too, in the form of the December Writing Challenge. In my announcement, I described how this challenge has been an uplifting event each year, and how many of us—both inside SolarWinds and in the THWACK community at large—look forward to it as a chance to reflect on the past and connect, both with each other and with our goals for the coming year.


I don't need to repeat the instructions (you can read them in the announcement, here), but I hope this post gives you a final reminder to keep an eye on the December Writing Challenge forum starting tomorrow and each day during December.


Rather than a word-a-day style writing prompt like previous years, this year's challenge has a single idea: "What I would tell my younger self." We're excited to read everyone's contributions, ideas, and discussions.


See you in the comments section tomorrow!

Today, in the fifth post of this six-part series, we’re going to cover the fourth and final domain of our reference model for IT infrastructure security. Not only is this the last domain in the model, it is one of the most exciting.


As IT professionals, we are all being asked to do more with less. This is why we need security tools that give us more visibility and control. But what do those tools look like? Let’s take a peek.


Domain: Visibility & Control

If we were securing a castle, it might be good enough to go to a high tower to see the battlefield, and we might be able to use horns or smoke signals to coordinate our defense. In a modern organization, we need to do a little better than that. Real-time visibility providing contextual awareness and granular control of all our security tools is required to defend against today’s threats.


The categories in the visibility and control domain are: automation and orchestration, security incident and event management (SIEM), user (and entity) behavior analytics (UBA/UEBA), device management, policy management, and threat intelligence.


Category: Automation and Orchestration

Automation and orchestration tools make it easier to operate a secure infrastructure. These tools should work across the vendors in your environment and simplify the job of your security practitioners by reducing tedious and error-prone manual tasks, shortening incident response times, and increasing operational efficiency and resiliency. This category is still emerging, which means that, even more than in the other categories, you have the option to build this functionality with open-source tools or, more recently, to buy a commercial platform.
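As a rough illustration of the orchestration idea (the vendor functions below are hypothetical stand-ins, not any real product’s API), a playbook codifies the same containment steps for every alert instead of an analyst clicking through each console by hand:

```python
# Hypothetical vendor actions, wrapped behind simple functions.
def block_ip_firewall(ip):  return f"firewall: blocked {ip}"
def isolate_host_edr(host): return f"edr: isolated {host}"
def open_ticket(summary):   return f"ticketing: opened '{summary}'"

def run_playbook(alert):
    """Run identical containment steps, in order, for every alert."""
    return [
        block_ip_firewall(alert["src_ip"]),
        isolate_host_edr(alert["host"]),
        open_ticket(f"contained {alert['host']}"),
    ]

actions = run_playbook({"src_ip": "203.0.113.9", "host": "laptop-42"})
print(actions)
```

The value is consistency and speed: the steps run the same way at 3 a.m. as at 3 p.m., across every vendor in the environment.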


Category: SIEM

Security information and event management (SIEM) products and services combine security information management (SIM) and security event management (SEM) to provide real-time analysis of security alerts generated by applications and network hardware. SIEM solutions collect and correlate a wide variety of information, including logs, alerts, and network data-flow characteristics, and present the data in human-readable formats that administrators use for a variety of reasons, such as application tuning or regulatory compliance. More and more, these tools are complemented with some form of automation platform to provide instructions to analysts for how to deal with alerts, or even act on them automatically!
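A minimal sketch of the correlation a SIEM performs (the log format and thresholds here are invented for illustration): events from different sources are combined so that a pattern emerges that no single log line shows on its own.

```python
from collections import Counter

# Normalized events as a SIEM might collect them from different sources.
events = [
    {"source": "sshd",     "ip": "203.0.113.9",  "type": "auth_fail"},
    {"source": "sshd",     "ip": "203.0.113.9",  "type": "auth_fail"},
    {"source": "firewall", "ip": "203.0.113.9",  "type": "port_scan"},
    {"source": "sshd",     "ip": "198.51.100.4", "type": "auth_fail"},
]

# Correlate: the same IP both scanning ports and failing logins is far
# more suspicious than either signal alone.
fails = Counter(e["ip"] for e in events if e["type"] == "auth_fail")
scans = {e["ip"] for e in events if e["type"] == "port_scan"}
alerts = [ip for ip, n in fails.items() if n >= 2 and ip in scans]
print(alerts)  # -> ['203.0.113.9']
```

A real SIEM does this across millions of events with tunable rules, but the cross-source join is the essential trick.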


Category: UBA / UEBA

User behavior analytics (UBA) solutions look at patterns in user behavior and then use algorithms or machine learning to detect anomalies to prevent insider threats like theft, fraud, or sabotage. User and entity behavior analytics (UEBA) tools expand that to look at the behavior of any entity with an IP address to more broadly encompass "malicious and abusive behavior that otherwise went unnoticed by existing security monitoring systems, such as SIEM and DLP."
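The core idea can be sketched in a few lines (hypothetical data, not a product implementation): baseline a user’s normal activity, then flag behavior that falls far outside it.

```python
import statistics

# Daily megabytes downloaded by one user over two weeks (made-up data).
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 13, 14, 12, 16, 15, 13]
today = 240  # sudden bulk download: possible exfiltration

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (today - mean) / stdev  # standard deviations from normal behavior
print(z > 3)  # -> True: far outside this user's baseline, raise an alert
```

Production UBA/UEBA tools use richer models (and machine learning) over many signals, but the pattern is the same: learn “normal,” then score deviations.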


Category: Device Management

Device management is all about managing your security devices. These tools are often vendor-specific, and most attempt to display data in a single pane of glass using a central management system (CMS). Recently, many vendors have recognized the need for a single interface and have enabled APIs to accommodate third-party reporting. Going forward, these tools may be replaced or controlled by other, vendor-agnostic automation tools in a more mature security infrastructure.


Category: Policy Management

Policy management tools make it easier to maintain homogeneous security policies across a large number of devices. These tools were initially vendor-specific, but vendor-neutral policy managers are becoming more common. They give you the ability to deploy a common policy across an organization, a group of devices, or a single device. Additionally, policy management tools often let you test and validate configurations before deploying them. Finally, they provide a mechanism to create configuration templates used for no-touch/zero-touch provisioning.
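The templating behind zero-touch provisioning can be sketched like this (the policy lines and the use of Python’s stdlib `string.Template` are illustrative choices, not any vendor’s actual format):

```python
from string import Template

# One policy template, rendered per device, so every device gets a
# consistent configuration with only its own details substituted in.
policy = Template(
    "hostname $hostname\n"
    "ntp server $ntp\n"
    "logging host $syslog\n"
)

devices = [
    {"hostname": "edge-fw-1", "ntp": "10.0.0.1", "syslog": "10.0.0.5"},
    {"hostname": "edge-fw-2", "ntp": "10.0.0.1", "syslog": "10.0.0.5"},
]

configs = [policy.substitute(d) for d in devices]
print(configs[0].splitlines()[0])  # -> hostname edge-fw-1
```

Because the shared settings live in one template, a policy change is made once and redeployed everywhere, which is exactly the consistency this category is about.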


Category: Threat Intelligence

Threat intelligence can take many forms, but the unifying purpose is to provide you, your security organization, and your other security tools with information on external threats. Threat intelligence gathers knowledge of malware, zero-days, advanced persistent threats (APTs), and other exploits so that you can block them before they affect your systems or data.


One More Thing

In the final post in this series we’ll look at the full model that has been described thus far and consider how you can put it to use to meet your individual security goals. Be sure to stick with me for the conclusion!
