
Application Dependency Mapping: Security and Ongoing ADM

 

Well, here we are. Finally, on the last post of this series. If you have not been following along, you can find the previous posts: part one, part two, part three, part four, and part five. And, if you have been following along, I hope you have found at least something useful along the way. And for those who have left comments, I really appreciate the feedback.

So, let’s get this moving and close this series out.

 

Implementing Security Based on ADM

 

In the previous post, we briefly touched on security, and I mentioned I would save that discussion for this post. That is exactly what we will cover now. You may wonder why I saved security for this post rather than the previous one. In most of my experience with ADM, the applications are the focus and security is generally treated separately, which is why I split the two topics across posts.

 

As mentioned, security is a critical component of ADM. The ideal scenario after mapping our application dependencies is to lock down communications to anything not required for our application to function. By doing so, we minimize the potential for vulnerabilities that may affect our application. If you recall, in the last post we also mentioned that if your ADM solution can create profiles and model the data collected, we can benefit from these when implementing our security profiles. With the ability to model a security profile based on our application mappings, we can ensure that our application continues to function. If we apply a profile that breaks our application, we can easily restore a previous profile that was functional. Having these capabilities centralized lets us manage our environment in a very holistic way. To elaborate on this, we mentioned in an earlier post that many ADM solutions require a host-level agent to be installed. Using these agents, host-level firewall capabilities can be managed by your ADM solution (for example, Windows Firewall for Windows hosts and iptables for Linux hosts). This lets us manage security at the host level rather than only at the edge.
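
To make that idea concrete, here's a rough sketch of the kind of translation an agent might perform: take the allowed flows from an ADM profile and turn them into host firewall rules. The profile format, addresses, and deny-by-default policy are all invented for illustration; any real ADM product would do this through its own agent and APIs.

```python
# Minimal sketch: turn an ADM profile's allowed flows into host firewall rules.
# The profile format and the deny-by-default policy are assumptions for
# illustration; a real ADM agent would manage Windows Firewall or iptables
# through its own mechanisms.

allowed_flows = [
    {"proto": "tcp", "port": 443, "source": "10.0.20.0/24"},   # web tier -> app tier
    {"proto": "tcp", "port": 5432, "source": "10.0.30.5/32"},  # app tier -> database
]

def iptables_rules(flows):
    """Build iptables commands that allow only the mapped flows inbound."""
    rules = [
        "iptables -P INPUT DROP",  # deny by default
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for f in flows:
        rules.append(
            f"iptables -A INPUT -p {f['proto']} -s {f['source']} "
            f"--dport {f['port']} -j ACCEPT"
        )
    return rules

if __name__ == "__main__":
    for rule in iptables_rules(allowed_flows):
        print(rule)  # an agent would apply these instead of printing them
```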

 

Additionally, some ADM solutions may provide reporting that shows high-severity security vulnerabilities and the patches required to maintain our security posture. This is a huge benefit to your security organization.

Not Just Once But Ongoing (ADM)

 

After you have successfully mapped out your application dependencies and implemented a security policy, if that is a requirement, ongoing ADM discoveries should be performed regularly. By doing so, we can ensure that if anything has changed, the proper measures can be taken. This also ensures that adopting a new toolset for our dependencies does not turn into the old method of tracking them in spreadsheets that become outdated relatively quickly.

 

Conclusion Of Series

 

Throughout this series, we have covered a lot of ground. The majority of what we covered was based around Application Performance Monitoring, but we also touched on Application Dependency Mapping along with security. I hope these topics have been useful, not only as something potentially new, but also as additional perspectives on what can be achieved with an efficient solution. I look forward to the comments on these posts inspiring additional conversations and perspectives based on others’ experiences.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting blog that discusses digital twins, an emerging technology with network impacts; I think it’s something federal IT folks should keep an eye on.

 

Digital twin technology is the ability to create a digital representation of a physical object, which—thanks to a series of sensors—can be used to monitor the object’s health, movements, location, and more. Take, for example, a military vehicle equipped with numerous sensors. Data collected by these sensors creates a digital representation, or digital twin. The digital twin then provides a real-time understanding of the vehicle’s status, such as location, engine performance, temperature, or even tire pressure.

 

Digital twins may sound like the stuff of science fiction. Yet, digital twin technology is one of the top 10 strategic technology trends for 2018, according to research firm Gartner, Inc. Gartner recently announced “48% of organizations that are implementing the Internet of Things (IoT) said they are already using, or plan to use, digital twins in 2018.” Gartner’s survey also indicated that “the number of participating organizations using digital twins will triple by 2022.”*

 

Data Analytics and Monitoring

 

Data analytics is the “brains” behind digital twin technology.

 

As with the military vehicle, collecting and analyzing the data fed by the vehicle’s sensors allows the federal IT team to understand a broad array of real-time information about the vehicle.

 

Collecting this information over time allows the team to gain a historical perspective. Historical analytics can then be used to uncover potential warning signs and predict failures before they occur. Data can also be used to diagnose a problem and even, in some cases, solve the issue remotely. With this knowledge, the digital twin can continuously model and adapt its prediction of future performance.
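
As a hedged illustration of what that historical analysis might look like in practice, here is a tiny Python sketch that flags sensor readings sitting well outside their recent baseline. The temperature values and the two-sigma threshold are made up; real digital twin analytics would be far richer.

```python
# Minimal sketch: use historical sensor readings to flag early warning signs.
# The readings and the 2-sigma threshold are invented for illustration only.
from statistics import mean, stdev

engine_temp_history = [92, 93, 91, 94, 92, 93, 95, 92, 93, 110]  # degrees C

def flag_anomalies(readings, window=8, sigmas=2.0):
    """Flag readings that sit well outside the recent historical baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(readings[i] - mu) > sigmas * sd:
            flagged.append((i, readings[i]))
    return flagged

print(flag_anomalies(engine_temp_history))  # -> [(9, 110)] with this sample data
```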

 

As you might imagine, having the right data analysis tools will determine the level of information you can derive through a digital twin. Luckily, many of the tools agencies already have on hand can be used to analyze this information.

 

Assess the tools you have in place. They need to be able to compare different data types, as well as visualize performance metrics on a correlated timeline to create data patterns. These two capabilities are the basic starting point for digital twin data analytics.

 

Monitoring

 

Monitoring tools for federal agencies can be used to optimize the object’s performance, and even predict future performance. Monitoring can also help optimize system performance, improve capacity planning, and indicate where connected devices can affect networks.

 

Be sure you have a range of monitoring tools, including network monitoring, server and application monitoring, storage monitoring, and more. Ideally, all these monitoring tools will work in conjunction with one another so the federal IT pro has a single view of the entire infrastructure.

 

Conclusion

 

Signs point to digital twin technology coming on fast and becoming a staple for many agencies within the next three to five years. This is great news, based on the advantages it can provide. Take the above advice on data analytics and monitoring, and you might just find yourself in a prime position to create an ideal infrastructure for digital twin technology.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

*Gartner Press Release, Gartner Survey Reveals Nearly Half of Organizations Implementing IoT Are Using or Plan to Use Digital Twin Initiatives in 2018, March 2018, https://www.gartner.com/en/newsroom/press-releases/2018-03-13-gartner-survey-reveals-nearly-half-of-organizations-implementing-iot-are-using-or-plan-to-use-digital-twin-initiatives-in-2018

 


All of the thoughts in week 1 were so deep, so thoughtful, so wonderfully personal and insightful that it's hard to imagine this week matching it. And yet, if you were following along each day, you know it did.

 

Once again I'm going to divide my summaries between our incredible lead authors and the insightful and honest comments that the THWACK community shared.

 

***********************

**** The Authors *****

***********************

 

Kevin Sparenberg, Technical Product Marketing Manager and THWACK MVP

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/07/day-8-cherish-the-small-things

Kevin begins with what is becoming a common theme among all us nerds, geeks, and sci-fi fans on THWACK—a statement about the dangers of time travel and altering past events:

 

"There are going to be things that you cannot avoid, pivotal moments in your life..."

 

But then he takes a turn, and this begins what was, for me, an emotional ride:

 

"...and for most of them, the pain of the event is outweighed by the experiences you gain beyond them."

 

The pain he's alluding to is laid bare in an essay on Kevin's personal blog: https://blog.kmsigma.com/2018/11/18/a-time-for-reflections-thanks/. Once you know the content of THAT post, the next words in his Challenge essay are a punch to the gut:

 

"In your future there is going to be pain—pain that defies logic to the deepness and sadness it creates—and you’ll think that it will break you. You’ll ask yourself questions that start with “What if I…?” You’ll berate yourself with statements beginning like “If I had just…” All I can say from this side of the fence is that those questions are good, healthy even, but don’t lose track of the good in life. You are stronger than you think. Just take the time to appreciate the small things in life between the big stuff."

 

To Kevin's credit, he doesn't lapse into non-stop foreshadowing. And some of his insights are truly inspiring (and once again, wonderfully personal).

 

"Watch people, I mean really watch people, and how they interact with each other. Stop thinking about how much of a baby your cousin Barbara is when she sings along with Cinderella. Just look at the joy that she has dancing around the room singing along with the mice."

 

But perhaps the most important piece of advice he gave his younger self comes at the very end:

"P.S. – Remember to comment your code. You don’t know that this means yet, but trust me, it’ll save you hours and hours of time later in life"

 

Josh Biggley, MVP

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/08/day-9-dont-be-afraid-to-fail

The rawness, the purity, the sincerity of the wisdom and advice that folks have been sharing, both in the lead articles and the comments below, continues to take my breath away. While I don't know what the posts tomorrow (or for the rest of the month) will hold, few so far match Josh's insight:

 

"Of all the advice that I've heard, of all the advice I could give, ‘Don't be afraid to fail’ is the single most important lesson we can learn in every part of our lives. Accepting failure is a profoundly humbling experience and it begins with acknowledging that we cannot know everything nor can we always make the right choices for any given situation. Deciding that failure is an option allows each of us to accept failure in others. Instead of viewing mistakes as limitations, we can begin to recognize them as an exercise in discovery."

 

But the advice to embrace failure—taken alone—can seem like a sentence to a life of disappointment and struggle, which is why Josh's final piece of advice is so necessary, containing both confidence and hope:

 

"To my younger self, in whichever multiverse you exist and have yet to take those first steps, don't be afraid to fail. Always be learning. Push yourself. You've got this."

 

Catherine O'Driscoll, Customer Marketing Manager

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/09/day-10-walk-before-you-run

Like so many of our lead writers, Catherine struggled with the idea of offering advice that was so specific that it would change the course of our lives and fundamentally alter who we are. But I thought her solution to this conundrum was wonderfully unique and inspirational:

 

"I decided to give advice that is relevant to what is to come but also still allows younger me the freedom to make those mistakes, take the unpaved path and live her life as only she can!"

 

And what was that advice? It was short and to the point, but also focused and relevant. The essence of it was, "What I wanted to share with you is that you might not always have someone there to catch you. So, in everything you do, don’t jump in head (or face) first. Take the time to learn the steps and walk before you run...or in our case crawl before you walk!"

 

Destiny Bertucci, Head Geek

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/10/day-11-yeah-youll-never-do-that

Destiny is the second of the Head Geeks to chime in, and also one of the few (so far) not to worry too much about "breaking the timeline" with her advice. I also found it fascinating that she focused on a single pivotal moment in life when everything changed.

 

"To myself, well heck, looking back I loved every trial. Every teary-eyed moment of rejection of ideas and every win that started to outweigh the losses. In the end I’d just tell me ‘Yea, you’ll never do that medical stuff’ and to follow my heart instead of the dream I thought I once had."

 

Richard Schroeder, MVP

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/11/day-12-what-i-would-tell-my-younger-self

Like I said, many of our authors fretted about the effects of telling our past selves about the future. Like Destiny, Richard took the road less traveled, and fully embraced this possibility, offering up advice that is at once incredibly specific to his situation:

 

"Don’t get into the front passenger seat of any vans without seat belts and you won’t lose your eyebrows (and you won’t get to enjoy having them sewn back on in the E.R.) after you fly face-first through a windshield in 1975"

 

...but also useful for all of us to keep in mind:

 

"Never buy a new automobile—the depreciation makes it a bad investment. Buy one that’s two years old, with mileage between 20,000 and 30,000. Buy less than you want, and only what you need, and be done with a car or toy loan in two years or less."

 

...whether that's our younger selves or our present-day incarnations.

 

Chelsia Johnson, Senior Marketing Communications Manager

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/12/day-13-jomo-the-joy-of-missing-out

From the first word, I was struck by the difference and originality of Chelsia's take on our theme; as I read further, her raw honesty and sincere assessment of the choices she had made and how she would go back and offer her younger self advice spoke to me in a way that few of the essays have.

 

At the heart of it was the idea of embracing "JOMO" (the Joy Of Missing Out), and how that would have helped her in the intervening years:

 

"I’ve learned that while I may miss an inside joke here and there, and I might not be tagged in every photo to hit social media, I am a much better friend (and human) when I am not over-extended and saying yes to every invitation. Because you can’t show up when it really matters if you’ve exhausted all your energy. You can’t provide the support we all need at some point when you’re sleep deprived and living latte to latte."

 

Matthew Quick, Sales Engineer

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/13/day-14-take-care-of-the-needs-of-others-before-ones-own

I loved how, in true geek fashion, Matt derives life lessons from pop culture sources—in this case, a single episode of the anime series Neon Genesis Evangelion. I also appreciated that Matt isn't trying to tell his younger self some new piece of information that he could never have known, but instead that he should take to heart something he already knew:

 

"...take care of others first. Everyone says Karma is a…well…negative thing, but it can also be good. Making sure that your friends and family are taken care of and that they have what they need should come before yourself."

 

***********************

*** The Comments ***

***********************

That's it for the lead essays, but the comments this week were no less insightful, deep, heartfelt, or meaningful. Here are just a few that caught my eye.

 

Day 8

Zack Mutchler  Dec 8, 2018 10:34 AM

Such a strong chunk of advice. Especially for those of us who can't seem to naturally make these connections, being mindful of how others relate can be eye opening and provide valuable lessons and insights. Thanks for being my friend and cheerleader, buddy.

 

Jan Pawlowski Dec 10, 2018 6:08 AM

Remember none of us are perfect, and in the words of Bill and Ted, "Be excellent to each other."

 

Joshua Smith Dec 10, 2018 7:54 AM

"Not everything will make sense right now..." - This... this is something I wish people would've told me at times. Even now, I can think of times that I wish some people would've told me this in my professional life. Always consider the possibility that there are intentions and plans that you just aren't going to know about until later.

 

Day 9

Tregg Hartley Dec 9, 2018 11:11 AM

We are encouraged to fail. We work in an integration environment. Failure is going to happen. Just document it so you don't repeat it.

 

Matt Riley Dec 10, 2018 9:34 AM

“Only those who dare to fail greatly can ever achieve greatly.” – RFK

 

Diana Simpson Dec 10, 2018 9:46 AM

"Failure is simply the opportunity to begin again, this time more intelligently." Henry Ford

"Success and failure are both part of life. Both are not permanent." Shah Rukh Khan

 

Day 10

Thomas Iannelli  Dec 10, 2018 5:42 AM

I might tell myself this, "Don't be so concerned about what your classmates think of you. The vast majority of them won't be in your life for long. Don't treat them lightly or badly, but don't let their opinions carry so much weight. Just be you."

 

Peter Wilson Dec 10, 2018 9:51 AM

When I was learning to SCUBA dive many many years ago, we were taught a simple mantra for when things would eventually go wrong underwater.

 

STOP

BREATHE

THINK

ACT

 

It works for pretty much every situation.

 

Louise Cannon Dec 11, 2018 4:31 AM

I'm not sure I could ever talk myself out of running before I could walk. Running is too much fun! And the face-plant is always worth it ;-)

 

Day 11

Jan Pawlowski Dec 11, 2018 7:54 AM

Overcoming challenges can be the greatest achievement in life. I got a degree in Business and Management, and never used it. I should've studied computer science or similar, but it was hard, and I wanted the easy life. Wasn't until 8 years later that I realized that I had the ability and inclination to do IT properly, so I got back on that horse, and set a path, small achievable goals along the way, so I wasn't daunted by the mountain I had to climb. Few years later, and I’m now designing and implementing networks, having worked my way up from service desk, with multiple qualifications in multiple fields. Who could say where I would've been had I done what I really should have? Probably not in IT.

 

Nick Zourdos  Dec 11, 2018 8:39 AM

I love hearing the stories of those who migrate to IT from other fields. In our IT department alone we have a former accountant, project manager, event coordinator, teacher, and even someone who was an Air Force pilot.

 

Diana Simpson Dec 11, 2018 10:05 AM

Great article Dez! Like kremerkm (I work with that yahoo), I also have my BA in Communications (Com Management with minors in psych and marketing). I spent a lot of time trying to figure that out (and if you ask me today what I want to do when I grow up, I still have no idea), but I have evolved from PC support to network design to network security and technical writing. Back in high school and college I told myself "nah, I'll never do that" about writing—but here I am.

 

Day 12

Ryan Wagner Dec 12, 2018 8:30 AM

The section on finances is especially good. I wish I had been more financially responsible when I was single and could have worked extra and set aside the money without having to sacrifice family time. There's another piece of advice: work extra, get your education, and save hard before you get married. If you can barely support yourself, you most certainly cannot support a family. Trust me, financial stability will make married life a lot easier. (I'm pretty sure the only reason I'm still married is because my wife is a saint.) Loans and credit cards are a trap to avoid at all costs. If you can't afford it right now, don't buy on credit, save for it and pay in full.

 

George Sutherland Dec 12, 2018 9:51 AM

Rick... your thoughts mirror mine in many ways.

 

The most important is relationships... my wife is my best friend... 41 years married.

 

Diana Simpson Dec 12, 2018 10:45 AM

There are quite a few things I wish I had done differently....maybe have kids earlier (than in my early 30s), definitely hire a wedding planner rather than doing it myself (try moving a wedding date 4x thanks to a was-to-be-sister-in-law complaining and then pulling the plug on their wedding plans while I was on my honeymoon) ...bought stock...

 

Your five questions/guidelines are great! I wish more people would follow them...

 

Day 13

Richard Schroeder  Dec 13, 2018 12:50 PM

I lived that too-full-schedule in the 1970s and '80s. My "Pocket Monthly Minder" 18-month calendar had multiple entries for nearly every evening and weekend. It began feeling tight. Confining. Eventually, I decluttered my schedule, my weekends opened up, and the stress at home reduced dramatically. Now I've found a good balance of family, work, fishing, and occasionally being in four bands at the same time. It sounds like a lot, but my time is my own now, instead of others'.

 

Phillip Collins Dec 13, 2018 1:24 PM

About 5 years ago I stopped working weekends to spend time with my family. It was the best decision I have ever made. My son and I are now closer than ever. Too much of the time prior to that was centered around work. Now my goal is to work when at work and live my life when away. It isn't easy at times, but it is best for me.

 

Ethan Beach Dec 14, 2018 1:30 AM

Sometimes it is difficult to say no. There have been many times I didn't have the funds or had an early morning and still didn't say no, trying to fit it in. In the end, I do not associate with those friends anymore and went through a lot trying to be friends with them. Missing out isn't always missing out in the end.

 

 

Day 14

Ryan Wagner Dec 14, 2018 8:25 AM

My Grandfather lived on a principle of "Never lend, only give." He had money and always gave it away. Not once in the 36 years that I knew him did I ever see him loan money. He gave away large sums to those who needed it, but he never asked for payment in return. His generosity and example is the reason that I practice the same philosophy. He died a man of honor at 96 years old and I will never forget his example. Be considerate of others. Never lend, only give.

 

Mike Ashton-Moore  Dec 14, 2018 8:38 AM

Yep, the Golden Rule. With so many religions containing this you'd think it would be more widespread, but sadly the current self-centered world view seems the political standard in many places nowadays. Channeling a recent class I attended.... "You have no choice about setting an example—only the example that you set."

 

Diana Simpson Dec 14, 2018 9:27 AM

As a mom, this is a daily thing...taking care of everything before me... but you do need to take a timeout for yourself once in a while....

As somebody not lucky enough to have a nice clicky interface with which to manage and automate all my equipment, I have to develop my own tools to do so. One aspect of developing those tools that drives me up the wall is the variety of mechanisms I have to support to communicate with the various devices deployed in the network. Why can’t we have one, consistent way to manage the devices?

 

SHHH About SSH

 

I can already hear voices saying “But we have SSH. SSH is available on almost all devices. Is that not the consistency you desire?” No, it isn’t. In terms of configuring network devices, SSH is just a(n encrypted) transport mechanism and provides no help whatsoever with configuring the devices. Once I’ve connected using SSH, I have to develop the appropriate customized code to screen-scrape the particular operating system I connect to. Anybody who has done this will testify that this is not particularly straightforward and, worse, the reward for doing so is to be able to issue commands and receive, in return, wads of unstructured data (i.e., command output) which can change between code versions, making parsing a nightmare.

 

So here’s my first requirement: structured data.

 

Structured Data

 

Typical command line output blurts out the requested information in such a way that the surrounding label text and the position of a piece of text impart the information necessary to infer what the text itself represents. Decoding data in this format usually means developing regular expressions to identify and pull apart the text so that the constituent data can be processed. Identification of data is implicit, based on contextual clues. Unstructured data is nice (or at least tolerable) for a human to look at and make sense of, but is frequently very difficult to interpret in code.
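
To show what that screen-scraping pain looks like, here's a small, hypothetical Python example that pulls interface state out of unstructured CLI output with a regular expression. The sample output and pattern are invented; the point is that you need one of these (and its ongoing maintenance) for every OS and often every software version.

```python
# Hypothetical screen-scrape: pull interface state out of unstructured CLI
# output with a regular expression. The sample output is invented; a real
# parser needs one of these per OS and per software version.
import re

cli_output = """\
GigabitEthernet0/1 is up, line protocol is up
  Description: uplink-to-core
  MTU 1500 bytes, BW 1000000 Kbit/sec
"""

pattern = re.compile(r"^(?P<name>\S+) is (?P<admin>\S+), line protocol is (?P<oper>\S+)")
match = pattern.search(cli_output)
if match:
    print(match.groupdict())
    # {'name': 'GigabitEthernet0/1', 'admin': 'up', 'oper': 'up'}
```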

 

Structured data, on the other hand, follows a specific set of rules to present the data points such that they are explicitly and unambiguously identified and labeled for consumption by code. Structured data usually presents data in a way that mimics programmatic hierarchical data structures, which lends itself to easy integration into code.
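
By contrast, here's the same information as a (made-up) structured JSON document: every value is explicitly labeled, the hierarchy maps straight onto native data structures, and no regular expressions are involved.

```python
# The same information as structured data: explicitly labeled, hierarchical,
# and consumed without regular expressions. The JSON document is invented
# purely for illustration.
import json

structured_output = """
{
  "interface": {
    "name": "GigabitEthernet0/1",
    "description": "uplink-to-core",
    "admin-status": "up",
    "oper-status": "up",
    "mtu": 1500
  }
}
"""

interface = json.loads(structured_output)["interface"]
print(interface["name"], interface["oper-status"], interface["mtu"])
# GigabitEthernet0/1 up 1500
```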

 

Junos has historically favored XML for structured data, but — subjectively — I would argue that XML is ugly and can get complex very quickly, especially where multiple namespaces are being used. Personally I have a soft spot for JSON, and I know other people like YAML, but ultimately I’m at the point where I’d say “I don’t care, just pick one and stick with it,” so that I can focus my time and effort on handling the one format.

 

So here’s my second requirement: consistent encoding.

 

What Gets Encoded?

 

Once we have a way to encode the data in a structured format, we then need to be more consistent about what's being encoded. Again, this may point to a project like OpenConfig, but failing that, it might at least be worth describing everything using YANG. By default, YANG maps to XML, but RFC 7951 (JSON Encoding of Data Modeled with YANG) helpfully shows how my pet preference, JSON, can be used instead.

 

The point here is that if YANG is used, the mapping to JSON or XML is almost a side issue, so long as the data is modeled in YANG to start with; both XML and JSON fans can translate from YANG to their favorite encoding and—and this is the key part—so can the devices, so clients can request the encoding of their choice.
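
Here's a toy illustration of that point, assuming a tiny invented interface model: the same YANG-modeled data rendered as XML and as RFC 7951-style JSON parses down to identical values, so the choice of encoding really is a side issue.

```python
# Sketch of the "encoding is a side issue" point: the same (invented)
# YANG-modeled data rendered as XML and as RFC 7951-style JSON, parsed
# into the same values. This is not a complete YANG module.
import json
import xml.etree.ElementTree as ET

xml_doc = """<interface>
  <name>GigabitEthernet0/1</name>
  <enabled>true</enabled>
</interface>"""

json_doc = """{
  "interface": {
    "name": "GigabitEthernet0/1",
    "enabled": true
  }
}"""

root = ET.fromstring(xml_doc)
from_xml = {"name": root.findtext("name"), "enabled": root.findtext("enabled") == "true"}
from_json = json.loads(json_doc)["interface"]

print(from_xml == from_json)  # True: same model, two encodings
```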

 

One Transport To Rule Them All

 

So now that I’ve determined that we need YANG models with support for both JSON and XML encoding, optionally following a common OpenConfig data model, let’s address how we communicate with the devices.

 

I don’t want to have to figure out what kind of device I’m connecting to, then based on that information, decide what connection transport I should be using (e.g., HTTP, HTTPS, SSH possibly on a non-standard TCP port). What I want to be able to do is to connect to the device in a standard way on a standard port. I don’t mind reconnecting to a non-standard port, but I want to be told about that port after I connect to the standard port.

 

That’s my third requirement: one transport for all.

 

OpenConfig takes this approach by having a "well-known" URL on the device to report back information about the device and connection details in a standard format. I'd like to take it a step further, though.

 

REST API or Bust

 

Let’s use a REST API for all this communication. REST APIs are ubiquitous now; every programming language has the ability to send and receive requests over HTTP(S) and to decode the XML/JSON responses. It makes things easy!

 

My last requirement: access via REST API.

 

Wait a moment, let me check again:

  • Structured data: YES
  • Consistent encoding: YES
  • One transport: YES
  • REST API: YES

 

I believe I’ve accidentally just defined RESTCONF (RFC8040), whose introductory paragraph reads:

 

“[...] an HTTP-based protocol that provides a programmatic interface for accessing data defined in YANG, using the datastore concepts defined in the Network Configuration Protocol (NETCONF).”

 

I am not a huge fan of NETCONF/XML over SSH, but NETCONF/JSON over HTTP? Count me in!
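
For the curious, here's roughly what that looks like from the client side: a sketch of a RESTCONF (RFC 8040) request using Python's requests library against the standard ietf-interfaces YANG model. The hostname and credentials are placeholders, and not every platform exposes this model or path, so treat it as an illustration rather than a recipe.

```python
# Sketch of a RESTCONF request (RFC 8040): plain HTTPS, a YANG-modeled path,
# and JSON encoding selected with the Accept header. Host and credentials
# are placeholders; support for the ietf-interfaces model varies by platform.
import requests

DEVICE = "router.example.net"  # placeholder
AUTH = ("admin", "password")   # placeholder

url = f"https://{DEVICE}/restconf/data/ietf-interfaces:interfaces"
headers = {"Accept": "application/yang-data+json"}

# verify=False only to keep the lab example short; don't skip TLS checks in production
response = requests.get(url, headers=headers, auth=AUTH, verify=False)
response.raise_for_status()

for intf in response.json()["ietf-interfaces:interfaces"]["interface"]:
    print(intf["name"], intf.get("description", ""))
```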

 

My Thoughts

 

Automating the infrastructure is hard enough without battling multiple protocols. How about we all just agree that RESTCONF is a good compromise and start supporting it across all devices?

 

For what it’s worth, there is some level of RESTCONF support in more recent software releases, including:

  • Cisco IOS XE
  • Cisco IOS XR
  • Cisco NX-OS
  • Juniper Junos OS
  • Arista EOS
  • ExtremeXOS
  • Dell EMC OS10
  • and more...

 

But here’s the problem: when did you last hear of anybody trying to automate with RESTCONF?

 

That’s what I’d like to see. What about you?

Home from Orlando and SQL Live, my last speaking event of the year. I now have less than two weeks to get all my holiday shopping done. Oh, and so do you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft Edge goes Chromium (and MacOS)

If you still needed a sign that Microsoft is a different company, this should do it for you.

 

The Problem With Feedback

Not all feedback is good feedback. When opinions are treated as information without any quality control, you end up with a billion people hoping to create a viral video on Facebook.

 

Even self-driving leader Waymo is struggling to reach full autonomy

Yeah, but sometimes you need to ship the product, and not wait for perfection. I’m still hopeful for autonomous cars in my lifetime.

 

What the Marriott Breach Says About Security

Security, privacy, convenience. Pick two.

 

250-page document dump is another nail in Facebook’s coffin

If this was Microsoft, we’d be talking about breaking the company into 37 pieces. I don’t understand why Facebook is allowed to continue operations in light of these data privacy issues, and the evidence that they knew and did not care.

 

How Alaska fixed its earthquake-shattered roads in just days

This is how you engineer things. Plan for failure and plan your response to that failure. Application developers could learn a lot from Alaska highways.

 

How Being Busy All the Time is Hurting You – plus 3 ways to stop

I’ve fallen into this trap as well, being too busy to do anything. Take a step back and reflect on your tasks and your accomplishments. Being busy is not a trophy.

 

My favorite event of the year, SQL Live in Orlando, where my room has a view of Hogwarts.

 

So far in this series, we have covered Application Performance Monitoring. If you haven't already, read part one, part two, part three, and part four. In the last post, we stressed the importance of mapping out all dependencies for our application. In this post, we will dig further into it and get a better understanding of what ADM (Application Dependency Mapping) is all about.

 

What Is It All About?

 

Application Dependency Mapping is far from a new concept. If you're like me, you remember the days when you would pay a company a very large sum of money to gather data into a spreadsheet that then became your application mappings. The data was compiled based on samples gathered in various ways. Often, by the time you received them, things had already changed; therefore, they were already out of date. The good thing is that we could gain a somewhat better understanding of our applications by going through this process.

 

So, how have things evolved since those days? Application dependencies have become much simpler to obtain. Even better, we can visualize them in real time. The tools and methods of gathering the data have advanced over the years.

 

These tools can monitor data flows over a period we determine in advance. Then we can generate visualizations of our application flows. With these visualizations, we can understand groupings of services and see which devices communicate in a bi-directional fashion. This allows us to understand what applications are being provided, and what is consuming them. We can also uncover potential security concerns as we dig into the application flows. We may find that certain network ports are exposed and/or being consumed and should be locked down.
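
To illustrate the mechanics, here's a minimal sketch of how flow records can be reduced to a dependency map. The flow records are invented, and a real ADM tool adds service naming, time windows, grouping, and visualization on top of this.

```python
# Minimal sketch: reduce flow records into an application dependency map.
# The flow records are invented; a real ADM tool would ingest agent or
# flow-export data and add service naming, grouping, and visualization.
from collections import defaultdict

flows = [
    {"src": "web-01", "dst": "app-01", "port": 8080, "proto": "tcp"},
    {"src": "web-02", "dst": "app-01", "port": 8080, "proto": "tcp"},
    {"src": "app-01", "dst": "db-01",  "port": 5432, "proto": "tcp"},
    {"src": "app-01", "dst": "db-01",  "port": 5432, "proto": "tcp"},
]

# consumer -> {(provider, port, proto): observed flow count}
dependencies = defaultdict(lambda: defaultdict(int))
for f in flows:
    dependencies[f["src"]][(f["dst"], f["port"], f["proto"])] += 1

for consumer, providers in dependencies.items():
    for (provider, port, proto), count in providers.items():
        print(f"{consumer} -> {provider} {proto}/{port} ({count} flows)")
```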

 

Ultimately, our goal is to understand how our applications are linked to one another, which provides our application dependency maps. As you can see, these mappings will assist us in our application performance monitoring. As our application dependencies are mapped out, we can get the bigger picture of our applications.

 

How Do We Get Started?

 

How do we get started with ADM? In most cases, agents are used on endpoints. For network gear, either taps or built-in agents are used to capture the network flows. Once those are in place, the data will be aggregated and sent to a centralized solution, which will analyze the data. Once that data is analyzed, we can visualize our dependencies and flows. This is also a great opportunity to bring together all teams who support any of the components discovered. By bringing everyone together, we can collectively understand the results and determine what is important and what is not. The goal of our application dependencies is to ensure that what is needed is available, and what is not is blocked or disabled. This will depend on the security posture of the organization you work within, but it is good practice to take this approach. We will go into more on security around this topic in the next post, so stay tuned for that.

 

Depending on the solution used to analyze the data and provide our application dependency mappings, there might be some very interesting things we can do with the data. Some solutions will allow you to create ADM profiles that can be versioned for rollback functionality. A solution like this provides the ability to analyze the data flows, present the dependency mappings, and let you create a profile that keeps the required communications in place and blocks the unnecessary ones. This is more security-focused, but worth pointing out here. The real value is that you can create an initial profile, and then later create a new, updated profile and apply it. By having the ability to roll back, you can easily return to a known working profile on the off chance something breaks. Another very interesting thing you might do is run through different scenarios using the analyzed data and see how those scenarios may affect your application. Depending on the amount of historical data maintained, you would have much more granularity in your scenarios. Having this ability helps ensure that a new profile will not cause issues once applied.
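
Purely to illustrate the versioned-profile idea (not how any particular product implements it), here's a small sketch of keeping every generated profile, applying the newest one, and rolling back to the last known-good version if something breaks.

```python
# Minimal sketch of the versioned-profile idea: keep every generated profile,
# apply the newest one, and roll back to the previous version if the
# application breaks. Entirely illustrative; real ADM products expose this
# through their own interfaces.

class ProfileStore:
    def __init__(self):
        self.versions = []  # list of (version_number, profile) tuples
        self.active = None

    def save(self, profile):
        version = len(self.versions) + 1
        self.versions.append((version, profile))
        return version

    def apply(self, version):
        self.active = version
        # ...push the profile's allow/block rules to agents here...

    def rollback(self):
        if self.active and self.active > 1:
            self.apply(self.active - 1)

store = ProfileStore()
store.apply(store.save({"allow": ["app-01 -> db-01 tcp/5432"]}))
store.apply(store.save({"allow": []}))  # oops, too restrictive
store.rollback()                        # back to the working profile
print("active profile version:", store.active)  # -> 1
```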

 

Conclusion

 

We have only touched briefly on the benefits of using a solution to assist with our application dependencies. However, I hope the value is clear. There is much more to it, but this highlights the initial value that can be provided. I mentioned a few times that certain topics were security-focused, and we will touch on those more in the next post.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting blog that reviews the fundamental steps every federal IT pro should take to create a strong security foundation. I agree with the author that an overarching plan that encompasses multiple layers of security can serve as the most effective strategy.

 

The Five Fundamentals

 

1. Create an information security framework

 

A security framework is essentially your security blueprint. It encompasses a series of well-documented policies, guidelines, processes, and procedures about how best to implement and manage ongoing security within your agency.

 

There are several established security frameworks, but the U.S. government usually follows the guidelines set forth by the National Institute of Standards and Technology (NIST). Specifically, agencies use NIST SP 800-53 to comply with Federal Information Processing Standard (FIPS) 200 requirements.

 

Use NIST guidelines to establish a security framework that assists with successfully detecting and responding to incidents in a quick and efficient manner.

 

2. Develop a consistent training program

 

Just as important, end users must understand the importance of practicing good cybersecurity hygiene—and the ramifications of poor security practices.

 

Regular, consistent training across the agency is key.

 

Train your team to understand how to recognize potential vulnerabilities quickly, and how to find the gems of important information within a sea of security-related alerts and alarms. Train end users on topics like creating strong passwords, identifying phishing emails and other social engineering attacks, and what information can and cannot leave the agency.

 

3. Outline policies and procedures

 

Creating the security framework is one thing; ensuring that everyone understands the policies and procedures associated with that framework is as important as the framework itself.

 

Sharing this information with all staff, security teams, and end users is often best done upon hiring. Outline policies and expectations clearly from the start to avoid any misunderstandings.

 

4. Monitor and maintain IT systems

 

Part of good security hygiene is making sure you’re up-to-date on all hardware and software updates and patches. New malware is introduced every day; ensuring all your systems are up to date should be your baseline.

 

Another important form of maintenance is having a strong backup system in place. If a breach occurs and data is compromised, a good backup system will keep data and productivity loss to a minimum.

 

Finally, even if all end users are up to date on security training, there is always the possibility they will violate security policy. Federal IT security pros must be able to monitor end user activity to mitigate this risk and catch policy violations before they become breaches.

 

5. Stay current with government mandates and regulations

 

Some of the most common mandates are the Federal Information Security Management Act (FISMA) of 2002, the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) for agencies that deal with any kind of credit card transaction, and NIST regulations.

 

Conclusion

 

As the basics fall into place, expect more layers to become necessary to shore up your federal cybersecurity strategy. Adding layers like perimeter defense, planning for device failure, and enhanced monitoring for insider threats can help build on a stable foundation, and result in a safer and more secure agency.

 

Find the full article on Government Technology Insider.

 


Where are you in your career arc right now? Trusting simple statistics, I can say that the majority of you are either just starting out, or somewhere in the middle. Relatively few of those reading this essay will be at, or near, the end (whether that means retirement is on the immediate horizon or you are looking at pivoting into something completely different).

 

So, for that majority of you who are not-done-yet, I want to ask you: what will you leave behind? How are you planning for your graceful exit? How are you ensuring that your colleagues (those who also aren't-done-yet) will continue to be successful without you to call on?

 

To be honest, this wasn't a question I’d considered very much, until I met a very special person in the SolarWinds booth at Cisco Live! last year.

 

He had clearly been around the data center a few times, about a decade and a half ahead of me, career-wise. We spent a few minutes amicably playing what I call "IT sonar"—where you get the depth and breadth of someone's experience by reminiscing on the tech you've seen come and go.

 

But then he turned to the demo station, because he had a few questions. The things he wanted to know were interestingly specific. They didn't center on the latest-and-greatest. He'd heard about our most recent features, and he and his team were using them. He was at Cisco Live!, and knew about THEIR most recent announcements, but wasn't particularly concerned about how WE could monitor THAT.

 

This was notable because, if I'm being honest, people visiting the SolarWinds booth usually fall into three categories.

  1. People who want to know about our stuff.
  2. People who want to know if OUR stuff can help with this OTHER stuff they just heard about and/or are buying.
  3. Fans who just want to say, “HI,” bask in the orange glow of #MonitoringGlory, pick up our latest buttons or stickers, and pose for a selfie.

 

Curiosity piqued, I asked him what was up. What was he REALLY trying to do? His answer came as a surprise, "I built this thing, but I'm going to retire one of these days," he said.

 

By "this thing" he meant all of it. The whole IT environment at his company. He took them from dumb terminals to PCs running Arcnet to Novell servers on Ethernet and all the way to today. He had a hand in all of it. He knew where the important bits were and where the cables were buried.

 

What got me most was the WAY he relayed this. He wasn't bragging. He wasn't justifying himself. He wasn't bringing up long-forgotten accomplishments as a way of proving he was still relevant. He was calm, confident, and clearly didn't need to prove anything. He told me he'd found his niche at the company long ago, and worked hard to gain and keep the trust and respect from both management and his peers. This gave him the freedom to make decisions in his lane, as well as reach out and help folks whose work fell outside that lane. He also described how he had worked to keep his skills sharp through the successive waves of IT trends, without falling into the bad habit of chasing the latest fad.

 

The problem, he told me, was that he realized there was no way to teach his coworkers—some of whom were young enough to be his grandchildren—everything that was in his head. And he realized it would be a waste of time to do so.

 

"There's just stuff," he said, "that isn't worth anyone's time to learn, or to carry around on the odd chance that it will be important a year or three from now. But even so, that stuff is still running. And it's going to break. And they'll need to know about it when it does."

 

I started to make a joke about documentation, and he told me that was just as bad as trying to teach it to somebody. Burying a piece of information, whether in a binder on a shelf or on a page in a labyrinthine SharePoint site, is a great way to feel good about knowledge that nobody is ever going to read.

 

He explained that his idea was to replace historical knowledge—what he called "tribal memory"—with tools that would keep track of the "what" (the devices, applications, and elements); handle the "when" by notifying the right people at the right time (meaning when something had gone wrong); and then point them in the direction of "how" by including links to walk-throughs, diagrams, or even just having very clear descriptions in the body of the alert message or ticket.

 

His job was to understand the "which." Which of the tasks and technical areas under his purview were repetitive busy work that could be automated (mostly) away, and which were skills that he needed to ensure the team acquired.

 

I joked about how the cool kids today would call it “technical debt.” He took that gag and ran with it, explaining that his goal, like lots of folks contemplating retirement, was to pay off his entire technical mortgage and have a title-burning party.

 

With that frame of reference, we had an amazing conversation. I'd like to think I was able to help him out a little.

 

But when he walked away and I started scribbling the notes that would eventually turn into this essay, I could think of just one word to describe what it all represented: "Legacy." For IT practitioners, that has some very specific connotations—technology from a bygone era that’s still around, still requires support and maintenance, but is no longer a platform on which new solutions can be built.

 

But of course, there’s the more universal meaning of “legacy”: the things (whether physical or intellectual) that we leave behind after we’re gone. And I realized that our documentation, our code, our integrations, and our installations are no less a legacy than the money, photos, investments, homes, cars, antiques, artwork, or businesses left to others when we die.

 

And as I was scribbling my notes, I thought about making THAT the end of this essay—something like, "What will be left behind when you leave? Will you leave your inheritors saddled with your technical debt? Are you thinking about how that legacy reflects on you?"

 

But then it occurred to me that, as impressive as the tools and automation this guy was building were, they weren't the most important thing he was leaving his team. That wasn’t his legacy at all, not by a long shot.

 

I remembered the impression he left with me: calm, confident, not needing to prove anything... of having found his niche... of having gained and kept the trust and respect from both management and his peers. Recognizing how to make decisions in his lane, and using his secure position to help others.

 

Maya Angelou famously said,

 

“I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

 

So NOW I will ask you, all of you "not-done-yet" readers as well as the "almost-there" ones, to take a moment before you close this essay and ask yourself, "What is MY legacy going to be? What am I doing, today, that will be left behind when I leave?"

We made it! This is the final post in this six-part series mapping the cybersecurity landscape through a new reference model for IT infrastructure security. Thank you for coming along on this journey with me. Now it’s time to take a look at where we’ve been, review the map itself, and discuss how to put it to work in your own environment.

 

We started the series by reviewing some of the most popular and useful models and frameworks currently available. While all of these can serve as maps to help us build a secure infrastructure, they leave a couple of fundamental questions unanswered:

  • Which tools provide defense in depth, and which are just causing duplication?
  • How do I compare competing products and the protections they provide?

 

To help answer those questions, we needed a clear way to map where individual security tools fit into a comprehensive security infrastructure. That’s where the reference model comes in, and the following four posts each zoomed in on one of the four domains of IT security:

  • Perimeter - Network Security, Email Security, Web Security, DDoS Protection, Data Loss Prevention, and Ecosystem Risk Management
  • Endpoint & Application - EPP / EDR, Patch & Vulnerability Management, Encryption, Secure Application Delivery, Mobile Device Management, and Cloud Governance
  • Identity & Access - SSO (IAM), Privileged Account Management, Multi-Factor Authentication, CASB, Secure Access (VPN), and Network Access Control
  • Visibility & Control - Automation & Orchestration, SIEM, UBA / UEBA, Device Management, Policy Management, and Threat Intelligence

 

Now we can zoom out and take a look at the full picture:

Reference Model for IT Infrastructure Security

 

Think of this like one of the maps you might find in a mall or other public area, telling you confidently: you are here.

 

This particular map aims to give you the ability to answer those two stubborn questions above. By knowing which domain and category within the InfoSec landscape you are dealing with, you can evaluate various tools in an apples-to-apples comparison. When the latest hot security company or product comes on the market, you can judge it against your existing infrastructure by placing it on this map.

 

How many network security devices, SSO services, or threat intelligence providers you need is unique to each organization. However, there is a big difference between intentionally adding depth to your security posture and unwittingly adding duplication. Use this model to ensure you only add the tools you really need, filling a gap or replacing a less adequate solution.

 

Speaking of gaps, that's another great way to use this map. There’s a third important question we can answer: Does your current security infrastructure provide the protection you need? Once you understand your organization’s risks and goals, you can use this model to ensure that all the right boxes are filled with a product or service that does the needed job.

 

Not every company needs a tool in each of these categories, of course, and some of you may need multiple protections in one or more of the categories. Also note that there are various ways to provide those protections. Each of these categories can be addressed by technical tools (hardware, software, and services), legal tools (e.g., contracts), organizational tools (policies and procedures), and human “tools” (like training and awareness), or a combination of two or more of these countermeasures. The key is understanding where real gaps exist, and what’s available to fill them.

 

Finally, we must always remember that the map is not the terrain. While I have found this model to be extremely useful in many discussions with CIOs, CISOs, IT management, and security practitioners, it can’t show the whole picture. Thinking about the NIST Cybersecurity Framework functions of Identify, Protect, Detect, Respond, and Recover, this model sits mostly in the Protect and Detect realms. You still need talented staff or third parties to identify your most valuable assets, your compliance requirements, and your risks, goals, and vulnerabilities. Not to mention responding to attacks that do occur and recovering after an incident with policy updates, tool refreshes, or public relations.

 

Now it’s up to you – how will you use this new resource to better protect your organization?

Welcome to the week 1 wrap-up of the 2018 December Writing Challenge! If you missed the initial announcement, the structure of our third annual community event has changed this year. Instead of offering a new word each day on which everyone can reflect, we're taking a single idea and hearing everyone's unique view of it: “What I would tell my younger self.”

 

You can head over to the special forum we've set up just for this event or start with my summaries below and follow the links wherever your whimsy leads you.

 

I'm dividing my summary into two sections: the authors and the comments.

 

 

******************

The Authors

******************

 

 

Leon Adato, SolarWinds Head Geek and THWACK MVP

Day 1: Slow Down You Crazy Child

I had the honor of leading off this year. I wrote a few things. I would love it if you checked it out, and maybe even left a comment or two.

 

 

Joe Kim, EVP Engineering and Global CTO

Day 2: Navigating Ambiguity is Critical as a Technologist

Joe's message is bold, broad, and not just applicable to his past self but to his future self as well, and therefore advice we all can follow now.

 

“The future is hard to guess, so don’t.”

 

Along with that he offers two more pieces of solid advice:

  1. Focus on the “HOW.”
  2. Continue to add to your toolbox.

 

 

Charlcye Mitchell, Product Manager

Day 3: Show Up & Pay Attention, Inspiration is Everywhere

Charlcye leads the team responsible for the SolarWinds online demo (demo.solarwinds.com). This quote immediately jumped out at me, not only because it was incredibly motivating, but in one sentence it captures the spirit of that team and what they accomplish every day: "Inspiration is everywhere, but you’ll rarely see it if you aren’t looking for it."

 

But she didn't stop there. She went on to challenge her younger self with four more questions. (As some of you know, I'm a big fan of The Four Questions):

"Find an unanswered question that excites you; Fill your time with unfamiliar experiences and learn new skills; Teach other people; Discover more things to be grateful for."

 

 

Matthew Reingold, MVP

Day 4: Post-Recollection

Matt kept it short and sweet, and this line really caught me short: "Don’t let the bad stuff make you forget about the good stuff." We also got to hear a bit of the background and lessons learned that led to this being such an important piece of advice for him, personally.

 

 

Nick Zourdos, MVP

Day 5: Burning a Candle at Both Ends...With Napalm

Nick first acknowledged the obvious reality of offering advice to our past selves, which would effectively change the trajectory of our life. Not only that though, but he underscored how inseparable our past experiences are from our current selves, and how that isn't a truth that can be casually waved away: "The point I’m trying to make is that the past makes you… you, and that’s worth something."

 

With that point acknowledged, however, he nevertheless offered some heartfelt words that might have eased the path for him in his younger years:

"Make time for life. Friends, family, relationships, and your own mental health are so much more important than good grades in your college years."

 

 

Thomas Iannelli, MVP

Day 6: Let's Go Get Ice Cream and Have a Little Chat...

In a pattern that is familiar to any of us who work regularly with our MVP community, Thomas took Nick's idea even further, moving past the thought that changing our past selves does us a disservice, and digging into the idea (with citations and references) that we may not even remember our past selves clearly:

"What do you really know of this person anyway? They are as much a stranger to you, as you are to them. They are just the collection of stories you have recited for years, about significant moments."

 

This led him to give voice to something that I think we all, as IT practitioners and especially those who work in some type of teaching capacity, have run up against:

"It is difficult to remember what it was like not to know."

 

Finally, a footnote to his whole analysis is worth repeating here, because it is wonderfully geekworthy:

“*Any discussion on the merits or risks of time travel should include a warning that anything changed in the past can have unforeseen ripple effects dramatically altering the future, including your very own existence in the present."

 

 

Jez Marsh, MVP

Day 7: Always Remember

Like many of our lead writers so far, Jez struggled with the far-reaching implications of altering the timestream. Nevertheless, he found a message which was both specific to him and yet general enough that he felt it wouldn't cause too many ripples: "Est Sularus Oth Mithas,” a quote from the Dragonlance Chronicles meaning, "My honour is my life."

 

I found the meaning behind this message to be wonderfully insightful. "We are, at times, our own worst enemy. Receiving this bolt from the blue at that time of my life would help me defeat the lingering self-doubt and regain my mojo a little sooner."

 

 

******************

The Comments

******************

 

***** Day 1 *************

 

zackm

By definition, the people nearest you are the most important. They are the ones who chose to show up, to stay, to be in your company. Give that choice the respect it deserves.

Such a hard, yet important lesson. Being mindful and not taking your support system for granted is a huge sign of emotional maturity that we all should be striving for.

 

rschroeder

Dear Younger Me:

A world of good will come from treating everyone as you want to be treated.  A world of hurt follows if you don't.

 

Don't get tangled up in things that aren't enjoyable and interesting and beneficial to someone.  While you have your entire life ahead of you, it's too brief to waste on petty squabbles or major ones.  Spend no time worrying about things you can't change.  They're in the past.  Learn from them, modify your behavior so you don't repeat them, and move on.

 

Take a note from a song James Taylor recorded; consider making it your motto.

"The secret to life is enjoying the passage of time. Any fool can do it."

 

tallyrich

I'm an auto racing fan, and there's more than one story or illustration of drivers learning that you slow down to go fast. Just what does that mean? If you drive just as fast as you can, you don't hit your lines, you don't brake at the best times, you accelerate too hard, and so on. When you slow down - in other words, focus on doing things right - you hit the best lines because that is a focus, you brake at the best times because that is a focus, and you accelerate best because that is a focus. So, slowing down actually makes your lap times better. The same is true of IT work. I've been guilty of rushing through a project only to later see my mistakes and have to redo or repair what I've done. When we slow down and take things carefully and methodically, we are at our best.

 

***** Day 2 *************

 

lokempa

Thanks for the heads-up Joe.

At this point in my life I was about to stop diversifying and wanted to narrow my development in this field (IT), but after reading your experience, I now believe that the diversification I am aspiring to achieve will further my development more than aiming for narrow career development.

Thanks for laying out there this game-changing perspective.

 

smttysmth02gt

This is a tricky one for me. My previous department was dissolved and I was moved to a different team after 11 years with my company. It's hard to forge your own path when you're held inside a box and not allowed any growth at all. It fosters complacency and apathy. I think some people might read "so don't (predict the future)" and presume that nothing can be done... I've been there. I'm fortunate to have leadership that wants to foster growth in many directions now.

 

rnoel

This is a terrific read.  Joe outlines and explains two specific ways we can be better people going forward.  I appreciate that he is concise, makes his points and supports them.  I'll read this more than once.

 

***** Day 3 *************

 

tallyrich

So often people view everything from the angle of "what's in it for me." Once we begin to look at what we can bring to others our purposes will become more apparent.

 

jscott9074

I find point 3 most important, giving folks guidance on their own journey...

 

df112

In my experience (YMMV), I always learn the most when I'm teaching others.  Sometimes you're literally only slightly more knowledgeable than the person you're teaching, but having to teach a concept requires you to wrestle with the knowledge, find the right way to convey it, and come up with multiple analogies or ways of communicating it. You don't really understand something until you can teach it, and teaching it always highlights deficiencies in my own learning or understanding, as well as helping to bring more clarity to my knowledge.

 

***** Day 4 *************

 

zackm

One of the long-lasting lessons I've taken from a leadership course I attended in the Marines:

At any given time you have up to 360 choices of direction for lateral movement. Pick one. People who sit in the same spot are the easiest to hit.

(paraphrased to clean up for public consumption)

The point being, when faced with any challenge, you have to make a decision, and preferably quickly, because ANY decision is action, and action begets action. Waiting for the perfect situation usually results in a lot of waiting, in my experience anyways.

 

rileymat

Of all the words of mice and men, the saddest are 'It might have been.'

 

pcollins07

I agree, hesitation is natural and something I am very good at. I think it has helped more than harmed. It gives me the chance to think before I leap.

 

***** Day 5 *************

 

rileymat

"Diamonds may be formed under pressure, but never forget they are not formed overnight."

 

tinmann0715

How do I deal with burnout? I literally ran up a mountain! I am not suggesting that for everyone, but it has made a huge impact on my approach to life and work since. It was also a #bucketlist item and fed into a lifelong desire that I mentioned in my Day 1 post.

I, too, went through intensely stressful periods of my life that have altered my personality permanently... for the worse. Unfortunately, I was never able to return, and I do miss my old self.

 

janobi

Life is full of choices, and whichever you choose can be viewed as the wrong one.  Live life for the moment, and do what makes you happy.  Live life without regrets.  Having worked 16-18 hour days to try and "help" the company, only to be overlooked and undervalued, you soon learn to stop.  Take stock.  And most importantly, do what makes you happy, as no-one else in life will do it for you.  If that means working loads and learning, great. But then don’t worry about the things you may miss by not experiencing them.  Alternatively, work hard, play hard, and use your time wisely.  There is no single, simple answer; there is no silver bullet.

 

***** Day 6 *************

 

vangt33

Choices are made, reality hits hard when you grow up.  Time goes on and that broadens the vision to make you realize what you could have done better.

 

EBeach

What does the younger us know that we don't? As you grow, you change; was it for the good? Are you still the same person? What a great concept to think about. I moved around a lot when I was young, being in a military family. As we moved, I would meet new friends who would have an impact on me and change who I was. Who could I have been if I stayed in Hawaii, where I was born? Would I still be the man I am, in the tech field, doing what I am doing? This writing challenge gets me thinking... please stop, I don't like this.

 

df112

Great post tomiannelli. It's funny, you absolutely nailed something that has been semi-haunting me (in a good way) recently. I've been thinking a lot about memory bias, whether I'm falling prey to it, and to what extent current events and circumstances are biasing my memories one way or another.  It's an interesting question to think about, and like you, I'd like to go back and ask my younger selves at various points in time what my thoughts and feelings were at the time they were happening.  Not sure if a diary would help or not - you don't always see what's salient at the time. Anyway, thanks for the thought-provoking post.

 

***** Day 7 *************

 

orionshark
Let's do our best to change the future to the best possible place to be for our next generation. :-)

nickzourdos
Jez I think you're the first person to mention timing. I'm surprised none of us other ultra-analytical IT pros thought of that!

tarabot
Very nice! If I could, I think I'd do the opposite and have my younger, more optimistic self remind my current self that life is really pretty good and that I should be more appreciative.

I’ve covered a lot in this series of posts around artificial intelligence (machine learning): from the beginning (Not Another AI Post…) to why I love it (My Affection for AI) to what I hate about it (My Animosity Toward Artificial Intelligence) to what scares me about it (AI-nxiety) and even why we need to decide what words to use when talking about it (Words Matter). One common theme is that a lot of the discussions, when it comes to artificial intelligence and machine learning, aren’t about the technology directly. In my experience, many of the conversations I’ve had about AI and ML center on the politics, the marketing, the high-level capabilities, and the immediate problems that will be solved. When it comes to the actual technology and how to implement, feed, and care for it properly, things get complicated. I’m not a data scientist or a software developer, so there are a lot of concepts I can’t--or don’t care to--wrap my head around. Most people don’t care about the methods, just the results, and I’m no different.

 

I covered a few scenarios along the way that offer huge benefits, like large-scale historical analysis to correlate events across disparate system logs, as well as failure prediction for infrastructure systems, but those are things we can get now (with enough time and money). What does the future hold for artificial intelligence and machine learning within the enterprise network and systems space? It’s a massive field that’s growing and changing all the time with newfound capabilities, so predicting what is coming is an extremely tough thing to do. But I can tell you what I’d like to see (and maybe give some ideas to a vendor or two stealthily working on a new product).

 

First, as I covered in my post "AI-nxiety," I want to be able to inform a system of my office politics. Whether it’s as simple as supplying an organizational chart or as complicated as mapping MAC addresses and applications to varying roles, this is something that is definitely needed before I’ll begin to let HAL 9000 take the wheel. By creating a system that assigns a value to each user or application based on its importance, issues can be routed better and responded to faster when needed, or left for tomorrow when not--automatically. That’s the whole point of these systems: limiting and avoiding human intervention. I could go on with this topic and possible ways to configure or even teach a system, but I'm staying high-level here.
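To make this a bit more concrete without leaving the high level entirely, here is a minimal Python sketch of the kind of importance scoring I'm describing. The role weights, identifiers, and paging threshold below are invented purely for illustration; a real system would be fed (or would learn) these values from the org chart and application inventory.

# Hypothetical sketch: score an alert by the importance of what it affects.
# Role weights and the asset-to-role mapping are illustrative assumptions only.
ROLE_WEIGHTS = {"executive": 90, "finance-app": 80, "engineering": 60, "guest": 10}

# Map identifiers (users, MAC addresses, application names) to roles.
ASSET_ROLES = {
    "aa:bb:cc:dd:ee:01": "executive",
    "payroll-web": "finance-app",
    "aa:bb:cc:dd:ee:42": "guest",
}

def alert_priority(affected_assets):
    # Take the highest weight among affected assets; unknown assets score low.
    return max((ROLE_WEIGHTS.get(ASSET_ROLES.get(a, ""), 10) for a in affected_assets), default=10)

def route(alert):
    # Page someone now for important assets; queue everything else for tomorrow.
    return "page-oncall" if alert_priority(alert["assets"]) >= 75 else "ticket-queue"

print(route({"assets": ["payroll-web"]}))        # page-oncall
print(route({"assets": ["aa:bb:cc:dd:ee:42"]}))  # ticket-queue

The point isn't the specific numbers; it's that once importance is expressed as data, the routing decision itself no longer needs a human in the loop.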

 

Second, I want a system that sits 100% in my company’s data center. I deal with a lot of sensitive data, and giving a cloud provider unfettered access to all of it just doesn’t sit well with me (or the regulators that oversee my networks). There has been a lot of development in the custom silicon space as of late, and if these chips are even remotely affordable, I don’t see an issue with bringing this kind of workload in-house. The usual response is "Well, we won’t monitor that information," or "We only grab metadata." No dice. It’s important data that needs to be monitored, and under some regulatory bodies, if a system touches the data, it’s in scope--and that includes simply gathering it.

 

Third (and last for now), I don’t want a laser-focused AI that only sees applications or servers or the WAN or the WLAN. I want something that sees it all and takes everything into account. As a wireless architect, I’m no stranger to the difference between actual issues and issues that present deceptively to clients. When a RADIUS or DHCP service fails, all the user knows is that the wireless is down. I need something that sees into the back-end, the front-end, and everything in between, constantly watching and correlating every single packet along the way. From the border firewalls to the cloud apps to the wireless clients, it’s all important and should be treated as such.

 

Maybe this stuff is already out there. If so, those vendors need to step up their marketing. Maybe if I cobbled a few solutions together I could get my wishlist; if so, those companies should look at partnering up and showing off what they can do together. If you’ve got any feedback to offer, or anything to add to the wishlist, definitely leave a comment below.

Home from Las Vegas and AWS re:Invent for 60 hours, then I’m back on the road. In Orlando this week for SQL Live, where I have four sessions to deliver. I’ll also be working the SolarWinds booth. If you are attending SQL Live, let’s connect and talk data.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

AWS Says It’s Never Seen a Whole Data Center Go Down

AWS is a master of spin when it comes to their product statements. Here they are having fun drawing a fine line between “disaster” and “event.” To the customers affected by outages at AWS, they don’t care if the building was down or not. It’s a disaster for them, period.

 

Amazon gets into the blockchain with Quantum Ledger Database & Managed Blockchain

Here we see AWS leveraging the use of the word “quantum.” It’s meant as “small,” and in no way do they want you to confuse it with “quantum computing.” And, as a bonus, it’s combined with Blockchain, for maximum SEO effect.

 

Blockchain study finds 0.00% success rate and vendors don't call back when asked for evidence

If you are using Blockchain successfully, please contact the Register and let them know.

 

Can blockchain co-exist with GDPR? It’s complicated

Ah, the joy of working with immutable transactions.

 

Half of all Phishing Sites Now Have the Padlock

The padlock just means traffic to the website server is encrypted. It offers no protection about who is at the other end. You could be having an encrypted discussion with Satan.

 

Marriott breach exposes more than just customer info

This story is still evolving, but I’d expect things to get worse as details emerge. This has potential to be a case study in cybersecurity worst practices.

 

Malware vector: become an admin on dormant, widely-used open source projects

Not just malware, but adware, too. Open source projects are susceptible to being hijacked, in legitimate ways, by outsiders that see an opportunity to use the project to further their own needs. It’s like a hostile corporate takeover, but without the excitement of a shareholder vote.

 

One of my favorite events of the year:

 

I really want Software Defined Networking (SDN), or something like it, to be the go-to approach for networking, but are we too tied to our idea of what SDN is for us to get there?

 

The Definition

 

Almost ten years ago, in 2009, Kate Greene coined the term Software-Defined Networking in an article describing the newly created OpenFlow specification, which would be released later that year. The idea was revolutionary: decouple the forwarding plane from the control plane and move the latter to a centralized controller. The controller would then manage the forwarding plane of the individual devices in the network from a global perspective, allowing the entire network to be managed via a single interface to the controller. For some time following this, SDN became synonymous with OpenFlow, but the philosophy has since exceeded that one implementation.
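To illustrate the decoupling in the simplest possible terms, here is a toy Python sketch of a controller that holds a global view of the topology, computes a path centrally, and hands each device only its forwarding entry. The device names, port numbers, and flow format are made up for the example and aren't tied to OpenFlow or any particular controller.

from collections import deque

# Toy global topology view held by the controller: device -> {neighbour: local port}
TOPOLOGY = {
    "sw1": {"sw2": 1, "sw3": 2},
    "sw2": {"sw1": 1, "sw3": 2},
    "sw3": {"sw1": 1, "sw2": 2},
}

def shortest_path(src, dst):
    # Breadth-first search over the controller's global view of the network.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in TOPOLOGY[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def program_flows(src, dst, match):
    # Turn one centrally computed path into per-device forwarding entries.
    path = shortest_path(src, dst)
    if not path:
        return {}
    return {hop: {"match": match, "out_port": TOPOLOGY[hop][nxt]}
            for hop, nxt in zip(path, path[1:])}

print(program_flows("sw1", "sw3", {"dst_ip": "10.0.0.7"}))

The individual devices never run a routing protocol here; the decision-making lives entirely in the controller, which is exactly the separation the OpenFlow model proposed.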

 

A Cloud Technology?

 

In an admittedly questionable Wikipedia page, SDN is defined as "an approach to cloud computing that facilitates network management and enables programmatically efficient network configuration in order to improve network performance and monitoring." This is an interesting perspective, considering that OpenFlow appears to have been developed with large service provider networks in mind. So where does it go from being a service provider technology to a cloud technology? Large service providers and cloud (particularly public cloud) providers have one thing in common: scale.

 

In previous articles, I've discussed network automation in the cloud as a requirement rather than a desired state. Arguably, large networks of any sort share this property. When working at scale, there really isn't any other way to do things effectively.

 

This, of course, doesn't mean that the approach isn't desirable outside of large-scale environments. Still, need drives progress and the market focuses on the need.

 

Silo Busting

 

Since I began my career in networking (too) many years ago, technologies have been placed in seemingly arbitrary categories, and vendors have tended to develop equipment with feature sets that follow these silos. Invariably, there's bleed from one category to another when new requirements surface. So why are we maintaining these categories in the first place? Networking is networking. If the solution for an enterprise business requirement is traditionally a data centre networking or service provider networking technology, use it.

 

For many years, the IS-IS routing protocol was considered a service provider technology. Now, with its ability to handle IPv4 and IPv6 under a single routing architecture, it's seeing a resurgence in the enterprise.

 

MPLS VPNs have mostly been in the service provider category, but are increasingly seen in enterprise networks, particularly in organizations that need to support franchise connectivity over the parent organization's network.

 

Shortest Path Bridging (SPB) was developed as a data centre networking technology, but is arguably an ideal replacement for Spanning Tree Protocol (STP) in general.

 

We need to think beyond the silos and look at networking as networking if we're going to escape the current state of micromanaging equipment. This means bringing SDN out of the cloud and service provider categories.

 

Delegation of Control

 

One of the key concerns about SDN that I've heard over the years is the problem of relying on a controller (or cluster of controllers) to make forwarding decisions. This approach is really good for standard routing and network functions that can be addressed globally. It falls down a bit when it comes to things like security policies at the edge, policy-based routing, and other exception-based items that are device-centric rather than network-centric.

 

Can we have an SDN architecture where the control plane is still distributed, but managed at the controller? Is it still SDN? The purists may argue, but in the same vein as the silos above, it doesn't really matter. We may need another term for it, but SDN can work for now, and here's why.

 

An Imperfect Dream

 

When I first considered writing this article, I was running under the working title of "When SDN Isn't" because I was frustrated with the number of solutions that purported to be SDN, but really weren't for various reasons. Some of them did not centralize the control plane under a controller. Others didn't provide open northbound APIs into their controller. Now I'm starting to think it's time to expand the practical definition a bit.

 

At its core, SDN works by allowing software to define requirements to the controller via a northbound API. The controller then programs the component devices or virtual devices via a southbound API. Taking the actual term Software Defined Networking literally, these are the key requirements.
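As a rough sketch of that split (with an intent format and driver call that are entirely my own invention, not any product's API), the northbound side accepts a declarative request and the southbound side handles the per-device programming:

# Stand-in for a vendor- or protocol-specific southbound call (NETCONF, REST, OpenFlow...).
def southbound_program(device, config):
    print(f"pushing to {device}: {config}")

class Controller:
    def __init__(self, devices):
        self.devices = devices

    # Northbound API: callers state *what* they want, not how each box is configured.
    def apply_intent(self, intent):
        if intent["type"] == "vlan":
            for device in self.devices:
                southbound_program(device, {"vlan_id": intent["id"], "name": intent["name"]})

Controller(["sw1", "sw2"]).apply_intent({"type": "vlan", "id": 42, "name": "guests"})

Whether the southbound side programs flows directly or drives devices that keep their own control planes is, as argued next, largely an implementation detail.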

 

If the component devices are programmed at the flow level by a controller that has the entire control plane centralized, and it meets the needs of the organization, that's awesome. If those devices have their own control planes and their decision making is defined at a higher level by the controller, that's just great too, again, so long as it meets the needs of the organization.

 

The Whisper in the Wires

 

SDN, or a relaxed definition of it, has the potential to be the holy grail of networking in general, but we're still stuck thinking in networking silos: cloud, data centre, service provider, enterprise, small/medium business, etc. What we want is a central and programmable interface to the entire network and to stop micromanaging devices. How that is accomplished below the controller level should be immaterial.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

IT modernization projects help federal agencies deploy more advanced technologies to enhance efficiency and provide a greater depth of capability. These advancements often provide greater opportunity to leverage automation and allow for stronger IT controls to protect critical assets.

 

That said, technology upgrades can also create security challenges. In the 2017 SolarWinds Federal Cybersecurity Survey, federal respondents cited three ways in which modernization has increased IT security challenges:

 

  • More vulnerabilities in new technology stacks (cited by 53%)
  • Burden of supporting new technologies and legacy systems (cited by 51%)
  • Lack of training on new technologies (cited by 50%)

 

All in all, the survey revealed that 66% of respondents—a full two-thirds—think federal agencies’ efforts regarding network modernization have resulted in an increase in government IT security challenges.

 

Not modernizing is not an option; that’s understood. Security holes can be far greater in older technologies. So, what’s a federal IT pro to do?

 

Four steps toward getting the best of both worlds in government IT

 

Step 1: Enhance IT controls

 

According to the survey, those agencies that deem themselves as having excellent IT controls have seen a decrease in cybersecurity threats across the board. Conversely, those who say their agencies have poor IT controls have seen an increase in security incidents.

 

In fact, the same survey notes that 51% of agencies that rate themselves with excellent IT controls say IT modernization has enhanced their ability to manage risk.

 

Step 2: Ensure compliance

 

Over two-thirds (68%) of survey respondents said that implementing relevant standards is critical to achieving their cybersecurity targets. In fact, 62% agreed that agencies that merge and balance both risk management and federal IT compliance are more likely to avoid IT security issues.

 

Step 3: Take advantage of new technologies to enhance security

 

Remember, IT modernization projects often provide greater automation, stronger IT controls, smaller attack surfaces, and built-in security features. Federal IT pros can take advantage of these enhancements to improve the agency’s cybersecurity posture.

 

Respondents cited the following as “highly effective” in enhancing network and application security:

 

  • Identity and access management tools (56%)
  • Endpoint security software (48%)
  • Network admission control (NAC) solutions (46%)
  • Patch management (45%)
  • Configuration management (42%)

 

Step 4: Training

 

Historically, one of the greatest sources of security threats to any agency, civilian or military, is careless or untrained users. The threat is not getting any smaller. In the 2017 survey, 54% of respondents cited this group of users as the greatest threat to agency security.

 

The solution is training, which is particularly important as agencies implement IT modernization projects. The more the federal IT team understands new technologies, the better equipped they are to implement them successfully and take full advantage of the newer built-in security features.

 

Conclusion

 

Federal IT pros face many challenges that affect an agency’s cybersecurity posture, from untrained users to budget constraints to a multitude of competing priorities. Ideally, IT modernization should not be one of them. The goal is to implement IT modernization projects that improve risk management protections, rather than increasing security challenges. Developing strong IT controls is the first step in that journey.

 

Find the full article on Government Technology Insider.

 


There Can Be Only One


 

I’ve heard repeatedly from people in this industry that what we need is a single interface to our infrastructure. For status monitoring, perhaps that’s best represented by the ubiquitous “Single Pane of Glass” (Drink!); for general network management, perhaps it’s a Manager of Managers (MoM); and for infrastructure configuration, well, that’s where it gets tricky.

 

Configurator of Configurators

 

OpenConfig

 

I was once hopeful of a future where I could configure and monitor any network device I wanted using a standardized operational interface, courtesy of the efforts of OpenConfig. If you’ve not heard of this before, you can check out an introduction to OpenConfig I wrote in early 2016. However, after a really promising start with lots of activity, OpenConfig appears to have gone dark; there are no updates on the site, the copyright notice still says 2016, the latest “News” was a 2015 year-end round-up, and there’s really little to learn about any progress that might have been made. OpenConfig is a mysterious and powerful project, and its mystery is only exceeded by its power.

 

SNMP Mk II?

 

I mention OpenConfig because one of the biggest battles that project faced was to reconcile the desire for consistency across multiple vendors’ equipment while still permitting each vendor to have its own proprietary features. Looking back, SNMP started off the right way too, by having a standard MIB where everybody would store, say, network information, but it became clear quite quickly that this standard didn’t support the vendors’ needs sufficiently. Instead, they started putting all the useful information in their own data structures within an Enterprise MIB specific to the vendor’s implementation. Consequently, with SNMP, the idea of commonality has almost been reduced to being able to query the hostname and some very basic information. If OpenConfig goes the same way, it will have solved very little of the original problem.
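The practical effect is easy to see if you imagine writing a cross-vendor poller. The standard MIB gives you the hostname everywhere, but for anything interesting you end up maintaining per-vendor lookup tables under each vendor's enterprise subtree. In the small Python sketch below, the only real OID is sysName; the "CPU" OIDs are placeholders invented to show the shape of the problem, not references to actual vendor MIBs.

# The one OID you can count on everywhere: SNMPv2-MIB::sysName.0
STANDARD_SYSNAME = "1.3.6.1.2.1.1.5.0"

# The same question ("what's the CPU utilisation?") answered under three
# different enterprise subtrees (1.3.6.1.4.1.<vendor>...). Values are illustrative.
CPU_UTIL_OIDS = {
    "vendor_a": "1.3.6.1.4.1.11111.1.2.3.0",
    "vendor_b": "1.3.6.1.4.1.22222.4.5.6.0",
    "vendor_c": "1.3.6.1.4.1.33333.7.8.9.0",
}

def cpu_oid_for(vendor):
    # Every cross-vendor poller ends up carrying a lookup table like this around.
    return CPU_UTIL_OIDS[vendor]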

 

Puppet faces a similar problem in that its whole architecture is based on the idea that you can tell any device to do something, and not need to worry about how it’s actually accomplished by the agent. This works well with the basic, common commands that apply to all infrastructure devices (say, configuring an access VLAN on a switchport), but the moment things get vendor-specific, it gets more difficult.

 

The CoC

 

It should therefore be fairly obvious that writing a tool that can fully automate a typical heterogeneous infrastructure (that is, one containing a mix of device types from multiple vendors) is potentially an incredibly steep challenge. Presenting a homogeneous-style front end to configure a heterogeneous infrastructure is tricky at best, and creating a single, bespoke tool to accomplish this would require skills in every platform and configuration type in use. The fact that there isn’t even a common transport and protocol that can be used to configure all the devices is a huge pain, and the subject of another post coming soon. But what choice do we have?

 

APIs Calling APIs

 

One of the solutions I proposed to the silo problem is for each team to provide documented APIs to their tools so that other teams can include those elements within their own workflows. Most likely, within a technical area, things may work best if a similar approach is used:

API Hierarchy

 

Arguably the Translation API could itself contain the device-specific code, but there’s no getting around the fact that each vendor’s equipment will require a different syntax, protocol, and transport. As such, that complexity should not be in the orchestration tools themselves but should be hidden behind a layer of abstraction (in this case, the translation API). In this example, the translation API changes the Cisco spanning-tree “portfast” into a more generic “edge” type:

API Translation At Work
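The image above shows the idea; as a purely hypothetical sketch of what such a translation layer might look like in Python (the platform names and configuration strings are illustrative, not taken from any real translation API):

# Orchestration speaks a generic vocabulary ("edge port"); the translation
# layer renders it into each platform's syntax. Templates are illustrative only.
GENERIC_TO_VENDOR = {
    "edge_port": {
        "cisco_ios": "interface {intf}\n spanning-tree portfast",
        "junos": "set protocols rstp interface {intf} edge",
    },
}

def translate(feature, platform, **params):
    # Render a generic feature request into platform-specific configuration.
    return GENERIC_TO_VENDOR[feature][platform].format(**params)

# The orchestrator only ever asks for an "edge_port"; the vendor detail stays here.
print(translate("edge_port", "cisco_ios", intf="GigabitEthernet0/1"))
print(translate("edge_port", "junos", intf="ge-0/0/1"))

Keeping those templates behind the API is exactly the abstraction argued for above: the orchestration layer never needs to learn what "portfast" means.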

 

There’s also no way to avoid the exact problem faced by OpenConfig: some vendors, models, features, or licenses will offer capabilities not offered by others. OpenConfig aimed to make this even simpler by pushing that translation API right down to the device itself, creating a lingua franca for all requests to the device. However, until that glorious day arrives, and all devices have been upgraded to code supporting such awesomeness, there’s a stark fact that should be considered:

 

Most automation in heterogeneous environments will, by necessity, cater to the lowest common denominator.

 

Lowest of the Low

 

Let’s think about that for a moment. If we want automation to function across our varied inventory, then catering to the lowest common denominator means it should be possible to deploy almost any equipment into the network, because the fancy proprietary features aren’t going to be used by automation anyway. While that sounds dangerously like ad copy for white box switching, the fact remains that if any port on the network can be configured the same way (using an API), then which hardware is deployed in the field comes down to whether a device-specific API can be created to act as middleware between the scripts and the device. That could almost open up a whole new way of thinking about our networks...

 

Abstract the Abstraction

 

Will we end up dumbing down our networks to allow this kind of homogeneous operation of heterogeneous networks? I don’t know, but it seems to me that as soon as there’s a feature disparity between two devices with a similar role in the network, we end up looking right back at the LCD.

 

I’m a fan of creating abstractions to abstractions, so that—as much as possible—the dirty details are hidden well out of sight. And while it would be lovely to think that all vendors will eventually deploy a single interface that our tools can talk to, until that point, we’re on the hook to provide those translations, and to build that common language for our configuration and monitoring needs.

 

What could possibly go wrong?
