
Geek Speak

8 Posts authored by: ciscovoicedude

In my previous blog, I discussed the somewhat unique expectations of high availability as they exist within a healthcare IT environment. It was no surprise to hear the budget approval challenges that my peers in the industry are facing regarding technology solutions. It also came as no surprise to hear that I’m not alone in working with businesses that demand extreme levels of availability of services. I intentionally asked some loaded questions, and made some loaded statements to inspire some creative dialogue, and I’m thrilled with the results!

 

In this post, I’m going to talk about another area in healthcare IT that I think is going to hit home for a lot of people in this industry: continuity of operations. Call it what you want: disaster recovery, backup and recovery, business continuity. It all revolves around the key concept of getting the business back up and running after something unexpected happens, and then sustaining it into the future. Hurricane Irma just ripped through Florida, and you can bet the folks supporting healthcare IT (and IT and business in general) in those areas are implementing certain courses of action right now. Let’s hope they’ve planned and are ready to execute.

 

If your experiences with continuity of operations planning are anything like mine, they evolved in a sequence. In my organization (healthcare on the insurance side of the house), the first thing we thought about was disaster recovery. We made plans to rebuild from the ashes in the event of a catastrophic business impact. We mainly focused on getting back up and running. We spent time looking at solutions like tape backup and offline file storage. We spent most of our time talking about factors such as recovery-point objective (to what point in time are you going to recover?) and recovery-time objective (how quickly can you recover back to this pre-determined state?). We wrote processes to rebuild business systems, and we drilled and practiced every couple of months to make sure we were prepared to execute the plan successfully. It worked. We learned a lot about our business systems in the process, and ultimately developed skills to bring them back online in a fairly short period of time. In the end, while this approach might work for some IT organizations, we came to realize pretty quickly that it wasn’t going to cut it long term as the business continued to scale. So, we decided to pivot.
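
To make those two terms concrete, here’s a quick back-of-the-napkin sketch in Python. The numbers are purely hypothetical illustrations, not figures from our actual plans:

```python
# Hypothetical numbers only: relating backup frequency to recovery-point
# objective (how much data you can lose) and rebuild steps to recovery-time
# objective (how fast you must be back up).

def worst_case_data_loss_hours(backup_interval_hours):
    """With backups every N hours, a failure just before the next backup
    loses up to N hours of work."""
    return backup_interval_hours

def meets_rto(step_durations_hours, rto_hours):
    """Sum the time of each rebuild/restore step and compare to the RTO."""
    return sum(step_durations_hours) <= rto_hours

# Nightly tape backups: up to a full day of lost data.
print(worst_case_data_loss_hours(24))      # 24

# Rebuild server (4h) + restore from tape (6h) + validate (2h) vs. an 8h RTO:
print(meets_rto([4, 6, 2], rto_hours=8))   # False
```

The drill-every-couple-of-months habit is essentially how you validate that the second number stays honest.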

 

We then started talking about the next evolution in our IT operational plan: business continuity. So, what’s the difference, you ask? Well, in short, everything. With business continuity planning, we’re not so much focused on how to get back to some point in time within a given window; instead, we’re focused on keeping the systems running at all costs, through any event. It’s going to cost a whole lot more to have a business continuity strategy, but it can be done. Rather than spending our time learning how to reinstall and reconfigure software applications, we spent our time analyzing single points of failure in our systems. Those included software applications, processes, and the infrastructure itself. As those single points of failure were identified, we started to design around them. We figured out how to travel a second path in the event the first path failed, to the extreme of even building a completely redundant secondary data center a few states away so that localized events would never impact both sites at once. We looked at leveraging telecommuting to put certain staff offsite, so that in the event a site became uninhabitable, we had people who could keep the business running. To that end, we largely stopped having to do our drills because we were no longer restoring systems. We just kept the business survivable.

 

While some of what we did in that situation was somewhat specific to our environment, many of these concepts can be applied to the greater IT community. I’d love to hear what disaster recovery or business continuity conversations are taking place within your organization. Are you rebuilding systems when they fail, or are you building the business to survive? (There is certainly a place for both, I think.)

 

What other approaches have you taken to address the topic of continuity of operations that I haven’t mentioned here? I can’t wait to see the commentary and dialogue in the forum!

In my last blog post, we talked about protecting data at rest and data in motion. Thanks for all the really good comments and feedback you left. I think they gave us some good food for thought, especially a few items I hadn’t talked about, including mobile device security and management. In this post, I want to take things in a slightly different direction and talk about health care policy and how it affects data availability.

 

After working on the insurance side of health care for a good part of a decade, it became very clear to me that business policy influence had created a mentality of, "Everything must be up 100% of the time," and in many ways, it was true. While supporting a nursing triage hotline, people often called in with potentially life and death situations. Obviously, the availability of the telephone system, which was network-based, was critical to operating our environment. Our contact center, also network-based, and the backend systems our triage nurses needed to access, were also critical. We couldn’t have an outage that prevented our callers from reaching the nurses they needed to speak to. Lives were literally in the balance.

 

So how does one go about ensuring that data availability is achieved and that services stay operational for a level of uptime beyond that of your typical business? The answer to that, my friends, is architecture. You can only achieve the levels of high availability required in a healthcare environment when you specifically design for it. And these kinds of designs usually come with a mighty big price tag. But before I get into that part of the conversation, let’s break this challenge down into three steps. How do we go about achieving this unprecedented level of uptime?

 

You design it to be redundant.

 

First, you gain a full understanding of your business requirements, which are most often non-technical in nature. Then you design a model, whether it be network or software application architecture, which removes any and all single points of failure. This ideally results in an architecture design that can lose one or more critical components while operations continue. Ideally, you do this without the end-user noticing. This might mean network infrastructure, telecommunications circuits, application servers... it really can be anything. If a component can fail, you need to understand the failure modes, and plan for how to mitigate them through a redundant design.
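
The payoff of removing single points of failure can be shown with simple availability arithmetic. This Python sketch uses illustrative numbers, not figures from any real design: components in series multiply their availabilities down, while redundant paths only fail together.

```python
# Hypothetical sketch: availability math behind redundant design.

def serial_availability(avails):
    """A chain of dependent components: all must be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel_availability(avails):
    """Redundant paths: service is down only if every path is down."""
    unavail = 1.0
    for x in avails:
        unavail *= (1.0 - x)
    return 1.0 - unavail

# Three 99.9% components in series are weaker than any one of them:
print(round(serial_availability([0.999, 0.999, 0.999]), 5))   # 0.997
# Two redundant 99.9% paths are dramatically stronger than one:
print(round(parallel_availability([0.999, 0.999]), 6))        # 0.999999
```

This is why the second data center a few states away, expensive as it is, moves the needle so far: it turns a serial dependency into a parallel one.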

 

You design it to be maintainable, and you take a proactive approach to maintenance.

 

No environment can operate forever without maintenance. You need to have a strategy in place for dealing with failed components or applications, and one that also allows you to take proactive measures to prevent future service disruption. This can mean end-of-lifecycle hardware replacement, application software patching, or any other standard maintenance task. Maintenance should be routine and have time allocated for it. Simply saying, "You can’t have a maintenance window" isn’t going to fly. So, forget that illusion right now.

 

You figure out how to monitor it so you can react before service impact occurs.

 

The final key to preparing an environment to be highly available is to monitor it. You must first know what "normal" looks like to determine what "abnormal" is. This applies as much to network performance as it does to software application performance. This is always a moving target, and it’s a lot of work. There are a lot of really good off-the-shelf software packages that can help with the basics (insert shameless, unsolicited plug for some of the cool SolarWinds tools here), or you can develop your own monitoring solutions. I’m not going to tell you what to monitor or how to do it, but I’m going to tell you that you need to figure out the answers to those questions and take the action appropriate for your environment.
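
To show what "knowing normal" can look like in its simplest form, here’s a hedged Python sketch: build a baseline from historical samples and flag readings that drift too far from it. The 3-sigma cutoff is my own assumption for illustration, not a standard.

```python
# Hypothetical sketch of baselining: flag readings that deviate from "normal."
from statistics import mean, stdev

def is_abnormal(history, reading, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the
    historical mean. The 3-sigma threshold is an illustrative assumption."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) > threshold * sigma

# Application response times (ms) observed during normal operation:
baseline = [40, 42, 41, 39, 43, 40, 41, 42]
print(is_abnormal(baseline, 41))    # False: well within normal
print(is_abnormal(baseline, 95))    # True: something has changed
```

Real monitoring platforms do far more sophisticated baselining than this, but the principle is the same: no baseline, no alert.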

 

Wrapping this discussion up, I know that achieving a truly highly-available IT environment sounds kind of like the Holy Grail, right? In many ways, it can be. I don’t know that you’ll ever achieve 100% of every one of these goals, but this is what you strive for, and how you need to approach it.

 

What do you think about the availability of IT services within your healthcare organization? What have some of your key challenges been? How are you addressing them? Do you have any tips, tricks, or battle scars to share that can help the rest of us? I'd love to hear your thoughts!

 

Until next time….

Hey everybody! It’s me again! In my last post, "Introducing Considerations for How Policy Impacts Healthcare IT," we started our journey discussing healthcare IT from the perspective of the business, as well as the IT support organization. We briefly touched on HIPAA regulations, EMR systems, and had a general conversation about where I wanted to take this series of posts. The feedback and participation from the community was AMAZING, and I hope we can continue that in this post. Let's start by digging a bit deeper into two key topics (and maybe a tangent or two): protecting data at rest and in motion.

 

Data at Rest

When I talk about data at rest, what exactly am I referring to? Well, quite frankly, it could be anything. We could be talking about a Microsoft Word document on the hard drive of your laptop that contains a healthcare pre-authorization for a patient. We could be talking about medical test results from a patient that reside in a SQL database in your data center. We could even be talking about the network passwords document on the USB thumb drive strapped to your key chain. (Cringe, right?!) Data at rest is just that: it’s data that’s sitting somewhere. So how do you protect data at rest? Let’s open that can of worms, shall we?

 

By now you’ve heard of disk encryption, and hopefully you’re using it everywhere. It’s probably obvious to you that you should be using disk encryption on your laptop, because what if you leave it in the back seat of your car over lunch and it gets stolen? You can’t have all that PHI getting out into the public, now can you? Of course not! But did you take a minute to think about the data stored on the servers in your data center? While it might not be as likely that somebody swipes a drive out of your RAID array, it CAN happen. Are you prepared for that? What about your SAN? Are those disks encrypted? You’d better find out.

 

Have you considered the USB ports on your desktop computers? How hard would it be for somebody to walk in with a nice 500 GB thumb drive, plug it into a workstation, grab major chunks of sensitive information in a very short period of time, and simply walk out the front door? Not very hard, if you’re not doing something to prevent it. There are a bunch of scenarios we haven’t talked about, but at least I've made you think about data at rest a little bit now.
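
A practical first step in protecting data at rest is simply knowing where sensitive data lives. As a hedged illustration (a toy sketch, nowhere near what a real DLP product does), here’s a few lines of Python that scan text for SSN-shaped strings:

```python
# Hypothetical sketch: discover SSN-like patterns in a blob of text.
# A real data-loss-prevention tool covers many more identifiers, file
# formats, and false-positive checks than this.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_possible_ssns(text):
    """Return SSN-shaped strings found in the text."""
    return SSN_PATTERN.findall(text)

doc = "Pre-auth approved for member 123-45-6789; fax copy filed."
print(find_possible_ssns(doc))   # ['123-45-6789']
```

Point something like this (or, better, a commercial discovery tool) at your file shares and you may be surprised where PHI is resting.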

 

Data in Motion

Not only do we need to protect our data at rest, we also need to protect it in motion. This means we need to talk about our networks, particularly the segments of those networks that cross public infrastructure. Yes, even "private lines" are subject to being tapped. Do you have VPN connectivity, either remote-access (dynamic) or static to remote sites and users? Are you using an encryption scheme that’s not susceptible to man-in-the-middle or other security attacks? What about remote access connections for contractors and employees? Can they just "touch the whole network" once their VPN connection comes up, or do you have processes and procedures in place to limit what resources they can connect to and how?

 

These are all things you need to think about in healthcare IT, and they’re all directly related to policy. (They are either implemented because of it, or they drive the creation of it.) I could go on for hours and talk about other associated risks for data at rest and data in motion, but I think we’ve skimmed the surface rather well for a start. What are you doing in your IT environments to address the issues I’ve mentioned today? Are there other data at rest or data in motion considerations you think I’ve omitted? I’d love to hear your thoughts in the comments!

 

Until next time!

My name is Josh Kittle and I’m currently a senior network engineer working for a large technology reseller. I primarily work with enterprise collaboration technologies, but my roots are in everything IT. For nearly a decade, I worked as a network architect in the IT department of one of the largest managed healthcare organizations in the United States. Therefore, healthcare security policy, the topic I’m going to introduce to you here today, is something I have quite a bit of experience with. More specifically, I’m going to talk about healthcare security concerns in IT, and how IT security is impacted by the requirements of healthcare, and conversely, how health care policy is impacted by IT initiatives. My ultimate goal is to turn this into a two-way dialogue. I want to hear your thoughts and feedback on this topic (especially if you work in healthcare IT) and see if together we can take this discussion further!

 

Over the next five posts, I’m going to talk about a number of different considerations for healthcare IT, both from the perspective of the IT organization and the business. In a way, the IT organization is serving an entirely different customer (the business) than the business is serving (in many cases, this is the consumer, but in other cases, it could be the providers). Much of the perspective I’m going to bring to this topic will be specific to the healthcare system within the United States, but I’d love to have a conversation in the forum below about how these topics play out in other geographical areas, for those of you living in other parts of the world. Let’s get started!

     

There are a number of things to consider as we prepare to discuss healthcare policy and IT (or IT policy and healthcare, for that matter, since we’re going to dip our toes into both perspectives). Let's start by talking about IT policy and healthcare. A lot of the same considerations that are important to us in traditional enterprise IT apply in healthcare IT, particularly around the topic of information security. When you really think about it, information security is as much a business policy as it is something we deal with in IT, and information security is a great place to start this discussion. Let me take a second to define what I mean by information security. Bottom line, information security is the concept of making sure that information is available to the people who need it while preventing access by those who shouldn’t have it. This means protecting both data-at-rest and data-in-motion. Topics such as disk encryption, virtual private networks, and preventing data from being exposed using offline methods all play a key role. We will talk about various aspects of many of these in future posts!

     

The availability of healthcare-related information as it pertains to the consumer is a much larger subject than it has ever been. We have regulations such as HIPAA that govern how and where we are able to share and make data available. We have electronic medical records (EMR) systems that allow providers to share patient information. We have consumer-facing, internet-enabled technologies that allow patients to interact with caregivers from the comfort of their mobile device (or really, from anywhere). It’s an exciting time to be involved in healthcare IT, and there is no shortage of problems to solve. In my next couple of posts, I’m going to talk about protecting both data-at-rest and data-in-motion, so I want you to think about how these problems affect you if you’re in a healthcare environment (and feel free to speculate and bounce ideas off the forum walls even if you’re not). I would love to hear the challenges you face in these areas and how you’re going about solving them!

 

As mentioned above, I hope to turn this series into a dialogue of sorts. Share your thoughts and ideas below -- especially if you work in healthcare IT -- so we can take this discussion further.

Over the past few weeks I’ve been blogging about network configuration management, and the first few posts were fairly straightforward, I have to admit. I didn’t have to spend a whole lot of time thinking about what to talk about, because it was, for the most part, a natural evolution: talk about basic concepts, demonstrate using those concepts, show how to apply concepts to the operations cycle. But as I sit down to write this, my final post for this series, I’m torn over which way to take it.

 

One natural direction to go would be ‘taking things further’ or ‘optimizing your network configuration management environment,’ but those topics are so obvious that I probably wouldn’t be doing this post justice if I stopped there. So here’s what I’m going to do: let’s talk about those things, but then let’s take it a step further and challenge ourselves to describe what’s possible – things that we aren’t necessarily doing today as part of a configuration management process, but things we wish we were. So without further ado, here goes...

 

Taking Things Further

 

It’s easy to plot a course for ‘taking things further’ based on where you currently are in your network configuration management strategy. For those of us not doing anything yet, we simply start collecting data; for those of us collecting data but not using it, we start thinking about how to apply it to our operational processes. We start thinking about automation; we start thinking about opportunities to leverage information that we now have, information that was previously out of reach. Whatever you’re doing now, trust me, it’s not enough. Do more, and it’ll quickly start to snowball; you won’t be able to help yourself.

 

Thinking Forward

 

As I look back and think about the ways that network configuration management techniques and solutions have changed the way I do things operationally, from ‘getting backups’ in the beginning to ‘auditing standard configuration processes’ years later, I find that there’s always something more that’s possible, something more to achieve.  Here are a few of my thoughts on where we need to go next as an industry (and granted, some of these things are already possible, but not yet widely adopted, so this is certainly a point-of-view based vision that may vary based on your individual perspectives).

 

Artificial Network Intelligence – Wouldn’t it be amazing if we could ‘teach’ the network to react to changing environments based on a strategy or approach, rather than a more specific ‘if you see this behavior, run this code’? Think about a network that can be self-diagnosing: reviewing configuration data collected as part of network configuration management processes, observing network availability changes as reported by a network monitoring solution, and not only alerting the operations staff that ‘there is a network outage’ but also that ‘these are the things that just changed, and this is what we think is causing the problem.’ I mean, WOW, that would be something. Kind of takes the phrase ‘self-healing network’ to the next level, doesn’t it?

 

Proactive Fault Prevention – How about a solution that monitors a network for the common ‘configuration events’ that most often create an outage, and alerts us proactively that ‘something bad may be brewing, you should look into this’ – all before a service degradation or outage ever happens?
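
The core of that idea can be sketched in a few lines. This is a hypothetical Python illustration of screening a pending change for command patterns that commonly precede outages; the watch list here is my own illustrative assumption, not a vetted standard:

```python
# Hypothetical sketch of proactive fault prevention: flag proposed config
# lines that match patterns known to cause trouble. The pattern list is
# illustrative only.

RISKY_PATTERNS = [
    "no router ",      # removing a routing process
    "shutdown",        # disabling an interface
    "no ip route ",    # pulling a static route
]

def risky_lines(change_lines):
    """Return the lines of a proposed change that match a risky pattern."""
    return [line for line in change_lines
            if any(p in line for p in RISKY_PATTERNS)]

change = ["interface Gi0/1", " shutdown", " description maintenance"]
print(risky_lines(change))   # [' shutdown']
```

A production version would need context (is that interface in service? is this a maintenance window?), which is exactly where the ‘intelligence’ part comes in.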

 

I’m sure there are half a dozen more types of things that are possible as an output of a strong network configuration management strategy… These are a few of my ideas…. So my challenge to you – give me a few of yours.   What do you think is possible that we haven’t attempted yet?  What kinds of outputs from a network configuration management solution could be leveraged to change ‘doing business as usual’ and revolutionize how we do things?  I’m curious to see the dialogue that we can have on this one.

 

It’s been a fun few weeks, and I hope these posts have been informative, have sparked an interest in network configuration management and what’s possible, and have helped to shape the way you do business. Let the comments begin!

 

Josh  - @ciscovoicedude

Happy Monday Everybody!

 

In my last two blog posts, we talked about network configuration management. I talked about my previous experience with various tools and techniques, and how my needs have changed over the years as my job and networks evolved. We then went through a kicking-the-tires exercise, talked about one scripting-based methodology for performing basic configuration archival, and hopefully gave you a glimpse of just a small sample of what’s possible with network configuration management techniques. In this post, I’d like to talk about some of the benefits of implementing a network configuration management solution, the types of information you can collect, and how we can use the information we’ve gathered.

 

Configuration Archival

 

First and foremost, one of the areas of focus we commonly explore with a network configuration management strategy is configuration archival. This can be as simple as daily or weekly configuration backups into a repository for ‘just in case the device dies’ recovery, or it can be far more complex, dealing with the ability to go back and review prior configuration revisions, whatever the reason. I can’t tell you how many times I wish I had a reference point for how something used to work. Even configurations I created myself as much as a decade or more ago can have value in the things I do today.

 

Bulk Change Deployment

 

Don’t think of configuration management as a one-way task. It’s not just about pulling information from devices; it can be a very valuable tool for pushing configuration to devices. When I left a previous company where I had been for 8+ years, it would have been negligent had the employer not insisted that all critical passwords be changed. The keys I held to that environment were pretty powerful. I hate to laugh about it, but we didn’t have the best configuration management tools in that environment, and somebody had to manually touch a few hundred devices and change passwords. With a little bit of scripting, or an off-the-shelf product, that task could have been greatly simplified. Deploying even the most trivial or the most advanced of changes using a configuration management solution can pay off in spades.

 

Maintaining Standards / Detecting Unauthorized Changes

 

Especially if you have a mid-size or larger network, it’s likely that you employ some sort of ‘configuration standards,’ or configuration consistencies that need to be maintained across a multitude of devices. Maybe this is policy-based, or maybe it’s purely technical in nature, but with proper configuration management tools, you can audit your devices and make sure that things are being done ‘the way we planned.’ This can go a long way in ensuring operational availability of your environment. You can also use this same logic to detect when policy violations may have occurred by detecting anomalous configurations.
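
Once you have an archive, detecting drift is largely a diff problem. Here’s a hedged sketch in Python (the config lines are invented examples) comparing the archived ‘known good’ config against what a device is running now:

```python
# Hypothetical sketch of change detection: diff the archived config
# against the device's current running config.
import difflib

def config_drift(archived, current):
    """Return unified-diff lines showing what changed since the archive."""
    return list(difflib.unified_diff(
        archived.splitlines(), current.splitlines(),
        fromfile="archived", tofile="current", lineterm=""))

archived = "hostname core1\nsnmp-server community S3cret RO\n"
current  = "hostname core1\nsnmp-server community public RO\n"
for line in config_drift(archived, current):
    print(line)
```

The output flags the changed SNMP community string for review, which is exactly the sort of anomalous configuration you want surfaced before it becomes a policy finding.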

 

Assist with Inventory Management and Asset Control

 

Finally, but certainly not the least important, a network configuration management strategy can greatly help audit device inventory and assist with asset control.  Being able to pull a list of all active devices on your network sure helps come maintenance renewal time.  Don’t ask me how many times I’ve had to have an engineer perform a physical inventory because we weren’t 100% sure of what was installed in a particular location.

 

I’ve touched on many of the network configuration management benefits and key pieces of information that I use in my operations – now tell me about yours: how you use the network configuration management tools you have implemented in your network, and how they benefit running your operation.

 

@ciscovoicedude

In my last article, I talked about using various network tools over the years to automate network configuration management. We had some great comments and feedback on that post, so I thought I'd take a moment to do a deep dive on one of the tools I've used: Perl. This is a great first step into configuration archival for somebody who may be scripting-inclined, but may not need full-blown change-comparison capability. This isn't meant to be a programming tutorial on how to write code; it's more of a jumping-off point for those of you who aren't using tools today and would like to kick the tires on what's possible. This is more an exercise in getting you to 'think network management,' and we're just going to use a little scripting wizardry to do exactly that. I'm going to assume you've got a working installation of Perl on your operating system of choice (for Windows users, ActivePerl is a great choice); Linux and OS X users should already have Perl installed with the operating system.

 

Let me start off by sharing some code. (Feel free to share, modify, or rant about this code. It's a compilation of pieces I've put together over the years.)

 

showrun.pl

#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet::Cisco;
use Net::Telnet::Cisco::IOS;

my $username = "ciscovoicedude";     # the username to use when logging in
my $password = "n0fax";              # the matching password
my $enablepw = "enablepassword";     # enable password, if your devices need one

my $InFile = "routers.csv";          # one device address per line
open my $infile, '<', $InFile or die "Can't open $InFile: $!";
my @hosts = <$infile>;
close $infile;

foreach my $host (@hosts) {          # go through the list one device at a time
    chomp $host;
    my $conn = Net::Telnet::Cisco::IOS->new(HOST => $host);   # connect to the host
    $conn->login(Name     => $username,                       # log into the device
                 Password => $password);
    # $conn->enable($enablepw);      # uncomment if you have to send an enable to the router. I don't.

    my @output = $conn->getConfig(); # put the running config in an array

    my $outfile = $host . "-config.txt";   # one file per device: "host-config.txt"
    open my $out, '>', $outfile or die "Can't write $outfile: $!";
    print "Writing $host to file\n"; # write a status message
    print $out @output;              # print the config to the file
    close $out;                      # close the file
    $conn->close;                    # close the connection to the device
}


 

 

In this short script, you'll see a basic 'running configuration' fetch from a device via telnet.   I'm using the Net::Telnet::Cisco::IOS module, so use your favorite perl package manager to install this module prior to trying to run the code.  In essence, the script will read an input file (I show it as being a CSV file, but really I'm only accessing a single column of data), from which it will determine the devices to connect to.  See the code below for an example of an input file.

 

example routers.csv

10.1.1.5
10.1.1.6
10.15.2.2
192.168.22.13


 

The script will iterate through each device in the CSV file, and issue a 'show run', and save the output to a text file named  "devicename-config.txt".  You could easily modify the script and input CSV to allow you to pass a different set of login credentials for each device, but I'll leave that as an exercise for the reader.
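
If you want a head start on that exercise, here’s what the input-parsing side could look like. This is a hedged sketch in Python rather than Perl, and the host,username,password column layout is my own assumption:

```python
# Hypothetical sketch: extend the one-column host list to carry
# per-device credentials. Column layout (host,username,password)
# is an illustrative assumption.
import csv, io

def load_devices(csv_text):
    """Parse rows of host,username,password into dictionaries."""
    reader = csv.reader(io.StringIO(csv_text))
    return [{"host": h, "username": u, "password": p}
            for h, u, p in reader]

inventory = "10.1.1.5,ciscovoicedude,n0fax\n10.1.1.6,backupadmin,t3mp"
for device in load_devices(inventory):
    print(device["host"], device["username"])
```

The connect-and-fetch loop stays the same; you just pull each device’s credentials out of its row instead of using one global set.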

 

As you can see, with very basic scripting capability, you can automate configuration collection from a great number of devices. You could even automate running this using a cron job or scheduled task on your operating system.

 

Taking things a step further, you could even modify this code to deploy configuration changes to devices, or do more advanced parsing and reporting. Heck, you could even store the configuration archive for all of your devices in a database, and that opens up all kinds of opportunities down the road.
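
To show how little code that database idea takes, here’s a hedged sketch in Python using SQLite (the schema and host names are invented for illustration):

```python
# Hypothetical sketch: archive each config pull into SQLite so you can
# query history later ("what did core1 look like last March?").
import sqlite3, time

def save_config(db, host, config_text):
    db.execute("""CREATE TABLE IF NOT EXISTS configs
                  (host TEXT, pulled_at REAL, config TEXT)""")
    db.execute("INSERT INTO configs VALUES (?, ?, ?)",
               (host, time.time(), config_text))
    db.commit()

def latest_config(db, host):
    row = db.execute("""SELECT config FROM configs WHERE host = ?
                        ORDER BY rowid DESC LIMIT 1""", (host,)).fetchone()
    return row[0] if row else None

db = sqlite3.connect(":memory:")   # use a file path in real life
save_config(db, "core1", "hostname core1\n")
save_config(db, "core1", "hostname core1-new\n")
print(latest_config(db, "core1"))  # hostname core1-new
```

With every pull timestamped, ‘what changed since X?’ becomes a query instead of an archaeology project.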

 

So I leave you with this - if you're one of those people who haven't yet dipped their toes into this, go ahead and take the plunge and let us know what you think!  For anybody who has written code to do even cooler things, I invite you to share some links with the rest of us so we can collaborate and learn together as a community.

 

Next time I'm going to talk about strategies with dealing with all of this data we have gathered, and useful things we can do with it.

 

@ciscovoicedude

First off, let me introduce myself: my name is Josh Kittle, @ciscovoicedude on Twitter, and I'll be one of the Thwack Ambassadors for the month of June. I'm honored to have been selected to participate in this wonderful community, and I look forward to the dialogue to come.


I first started managing infrastructure devices circa 2000, and I quickly learned that keeping track of how each and every device on my network was configured was going to be a very important task. Not just from a ‘just in case’ standpoint (although I have to admit that I initially approached this need from a ‘backup’ viewpoint), but also from a ‘what’s changed since X?’ or ‘how did we have this configured when it worked two years ago for this other customer?’ standpoint. And let me tell you, the need for configuration management has never lessened; it has only grown.


Early on, I used client apps such as the Kiwi CatTools product, which worked for a while when it was just me. But as my networks grew from 5-10 devices to hundreds of devices, and as the number of people interacting with the network infrastructure grew, so did the need for more capable configuration management.


As time passed, I went on to write many of my own tools using scripting languages such as Perl and PHP, combining this ‘go fetch’ capability (thanks to Perl modules like Net::Telnet::Cisco::IOS) with back-end database services running on MySQL.


So my question for you is this: what tools and/or processes have you developed to deal with network configuration management? Have they changed much over the years, or are you doing the same things now that you were doing in years past?


I’ll talk more about some of the specifics of how I’ve approached the use of these tools in future posts.

 

@ciscovoicedude
