
Geek Speak


Charting SolarWinds

Posted by Patrick Hubbard Mar 2, 2016

As many of you who’ve met me can attest, whether at tradeshows, user groups, or on SolarWinds Lab, I talk a lot. I talk about technology, flying, astronomy, and anything else geeky, but I don’t talk about myself much. OK, I do annoy millennials with stories about my kids, but that’s parental prerogative. And maybe that’s why I don’t talk much about SolarWinds. Of course I talk about our products, advanced SolarWinds techniques, and how our customers succeed, but not about SolarWinds, the business. With the release of the recent 2016 Gartner Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD), however, I’m a bit excited and want to humble brag just a little.


Not My First Rodeo


Once upon a time I was the product manager for Sun Java System Identity Manager, formerly Waveset Lighthouse. (Say that five times fast.) One of my jobs was engaging analysts with the cool geekiness of automated identity management and secure systems provisioning. It was hardcore integration and code, with plenty of jargon and lots to talk about. A highlight of my career remains the month we hit the top corner of the “Leaders” quadrant on the Gartner Magic Quadrant for Identity Management. The whole company was understandably proud and ordered a 10-foot Fathead to stick on the office entryway wall. And it’s from that perspective that I’m really pleased with Gartner’s analysis and where SolarWinds is positioned at the top of the “Challengers” NPMD quadrant. Perhaps you’re even surprised to see us included.


Very large corporations rely on analyst evaluations of technology because CIOs and other senior IT executives may have budgets that run into the billions, with complexity unimaginable to most of us. They simply don’t have time to dive in and learn the details of thousands of potential vendor solutions in hundreds of technology categories. For them, analyst research is extremely helpful, in many ways a trusted advisor. And what many are looking for is something that can transform their IT operations, even if it’s expensive or requires rip-and-replace, because the benefits at that scale can outweigh concerns about budget, company politics, or risk.


And just like an iPhone, shiny and new is just sexy. How many vendors do you know that start a conversation with long lists of features, or talk about the ZOMG most amazing release that’s just around the corner? But SolarWinds isn’t that kind of company. In fact, it’s intentionally not like any other company.


We don't create new widgets in a lab and then try to figure out how to sell them. For fifteen years you, our customers, have been telling us what your problems are, what a product should do, how it should work, and how much it should cost. Only then do we create technology you can use every day. We also proudly don’t offer professional services. If we say a product is easy to use but have to install it for you, it’s not. We also don’t offer hardware, and admittedly miss out on what would be a really cool dark-gray bezel with the orange SolarWinds particles logo in the center. (Did you know our mark is particles, not a flame? Solar wind = charged particles from the sun’s atmosphere. Geeks.)


Highest in Ability to Execute


So when I look at where SolarWinds appears in the Magic Quadrant, I see exactly what I would expect: SolarWinds positioned highest along the “Ability to Execute” axis among all vendors in the quadrant. Would we lean a bit more toward the Leaders quadrant if we teased a few more features and products in the “What Are We Working on” section of thwack? Perhaps, but that’s not our way. I’m in my 10th year at SolarWinds for one simple reason. During my time this company has grown 25x because it stays true to an IT pro philosophy of being helpful. We don’t do everything, but we do just about everything you ask for, always striving to do it well. SolarWinds isn’t about transforming IT. SolarWinds is about transforming the lives of IT professionals.


But enough about that, I’m late for a meeting.


Feedback: Are you surprised to see SolarWinds on a Gartner Magic Quadrant? Does this report matter to you and your company?  Let us know in the comments below!


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.



Posted by mrlesmithjr Mar 2, 2016

Over the past three posts we have covered what Infra-As-Code means, how to start, and what is required to begin the journey. Hopefully things have been making sense and you have found them useful. The main goal is simply to bring awareness to what is involved and to help start the discussions around this journey. In the last post I also mentioned that I had created a Vagrant lab for learning and testing out some of the tooling. If you have not checked it out, you can do so by heading over to here. This lab is great for mocking up test scenarios and learning the methodologies involved.


In this post we will take what we have covered in the previous posts and mock up an example to see how the workflow might look. For our mock example we will be making configuration changes to a Cisco ACI network environment, using Ansible to apply the configuration changes for our desired state.


The details below outline what our workflow looks like for this mock-up.

  • Developer – Writes Ansible playbooks and submits code to Gerrit
  • Gerrit – Git repository and code review (both master and dev branches)
  • Code reviewer – Either signs off on changes or pushes back
  • Jenkins – CI/CD – Monitors the master/dev branches on the Git repository (Gerrit)
  • Jenkins – Initiates the workflow when a change is detected on the master/dev branches


And below is what our mock-up example entails, starting from a new request.


Change request:

  • Create a new tenant for the example environment, which will consist of some web servers and DB servers. The web servers will need to communicate with the DB servers over tcp/1433 for MS SQL.
  • Bring all of the respective teams together to discuss the request in detail and identify each object that must be defined, configured, and made available for the request to be successful. (Below is what was gathered based on the open discussion.)
    • Tenant:
      • Name: Example1
      • Context name(s) (VRF):
        • Name: Example1-VRF
      • Bridge-Domains:
        • Name: Example1-BD
        • Subnet:
      • Application Network Profile:
        • Name: Example1-ANP
      • Filters:
        • Name: Example1-web-filter
          • Entries:
            • Name: Example1-web-filter-entry-80
              • Proto: tcp
              • Port: 80
            • Name: Example1-web-filter-entry-443
              • Proto: tcp
              • Port: 443
        • Name: Example1-db-filter
          • Entries:
            • Name: Example1-db-filter-entry-1433
              • Proto: tcp
              • Port: 1433
      • Contracts:
        • Name: Example1-web-contract
          • Filters:
            • Name: Example1-web-filter
          • Subjects:
            • Name: Example1-web-contract-subject
        • Name: Example1-db-contract
          • Filters:
            • Name: Example1-db-filter
          • Subjects:
            • Name: Example1-db-contract-subject

Open discussion:


Based on the open discussion, we have come up with the above details on what is required from a Cisco ACI configuration perspective in order to deliver the request as defined. We will use this information to begin creating our Ansible playbook to implement the new request.



We are now ready for the development phase of creating our Ansible playbook to deliver the environment from the request. Knowing that Gerrit is used for our version control/code repository, we need to ensure that we are continually committing our changes to a new dev branch on our Ansible-ACI Git repository as we develop our playbook.


**Note – Never make changes directly to the master branch. Always create/use a different branch to develop your changes and then merge those into master.


Now we need to pull down our Ansible-ACI Git repository to begin our development.

$mkdir -p ~/Git_Projects/Gerrit

$cd ~/Git_Projects/Gerrit

$git clone ssh://git@gerrit:29418/Ansible-ACI.git

$cd Ansible-ACI

$git checkout -b dev


We are now in our dev branch and can now begin our coding.


We now create our new Ansible playbook.

$vi playbook.yml


And as we create our playbook we can begin committing changes as we go. (Follow the steps below on every change you want to commit)

$git add playbook.yml

$git commit -sm "Added ACI Tenants, Contracts, etc."

$git push


In the example above we used -sm as part of our git commit. The -s flag adds a Signed-off-by line from the user making the change, and the -m flag supplies the commit message inline. You can also use just -s, and your editor will open for you to enter the message details.
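For reference, this is roughly what the resulting commit message looks like when you inspect it afterward; the name and email in the sign-off line are placeholders and will come from your git user configuration.

$git log -1 --format=%B
Added ACI Tenants, Contracts, etc.

Signed-off-by: Your Name <you@example.com>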


So we end up with the following playbook, which we can now proceed to test in our test environment.


---
- name: Manages Cisco ACI
  hosts: apic
  connection: local
  gather_facts: no

  vars:
    aci_application_network_profiles:
      - name: Example1-ANP
        description: Example1 App Network Profile
        tenant: Example1
        state: present

    aci_bridge_domains:
      - name: Example1-BD
        description: Example1 Bridge Domain
        tenant: Example1
        context: Example1-VRF
        state: present

    aci_contexts:
      - name: Example1-VRF
        description: Example1 Context
        tenant: Example1
        state: present

    aci_contract_subjects:
      - name: Example1-web-contract-subject
        description: Example1 Web Contract Subject
        tenant: Example1
        contract: Example1-web-contract
        filters: Example1-web-filter
        state: present
      - name: Example1-db-contract-subject
        description: Example1 DB Contract Subject
        tenant: Example1
        contract: Example1-db-contract
        filters: Example1-db-filter
        state: present

    aci_contracts:
      - name: Example1-web-contract
        description: Example1 Web Contract
        tenant: Example1
        state: present
      - name: Example1-db-contract  # required by Example1-db-contract-subject above
        description: Example1 DB Contract
        tenant: Example1
        state: present

    aci_filter_entries:
      - name: Example1-web-filter-entry-80
        description: Example1 Web Filter Entry http
        tenant: Example1
        filter: Example1-web-filter  # defined in aci_filters
        proto: tcp
        dest_to_port: 80
        state: present
      - name: Example1-web-filter-entry-443
        description: Example1 Web Filter Entry https
        tenant: Example1
        filter: Example1-web-filter  # defined in aci_filters
        proto: tcp
        dest_to_port: 443
        state: present
      - name: Example1-db-filter-entry-1433
        description: Example1 DB Filter MS-SQL
        tenant: Example1
        filter: Example1-db-filter
        proto: tcp
        dest_to_port: 1433
        state: present

    aci_filters:
      - name: Example1-web-filter
        description: Example1 Web Filter
        tenant: Example1
        state: present
      - name: Example1-db-filter
        description: Example1 DB Filter
        tenant: Example1
        state: present

    aci_tenants:
      - name: Example1
        description: Example1 Tenant
        state: present

  vars_prompt:  # Prompts for the information below upon execution
    - name: "aci_apic_host"
      prompt: "Enter ACI APIC host"
      private: no
      default: ""
    - name: "aci_username"
      prompt: "Enter ACI username"
      private: no
    - name: "aci_password"
      prompt: "Enter ACI password"
      private: yes

  tasks:
    # The ACI module names below (aci_tenant, aci_context, and so on) are assumed
    # from the community aci-ansible modules; substitute whatever ACI modules are
    # available in your environment.
    - name: manages aci tenant(s)
      aci_tenant:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-tenants
      with_items: "{{ aci_tenants }}"

    - name: manages aci context(s)
      aci_context:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contexts
      with_items: "{{ aci_contexts }}"

    - name: manages aci bridge domain(s)
      aci_bridge_domain:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        context: "{{ item.context }}"
        tenant: "{{ item.tenant }}"
        subnet: "{{ item.subnet|default(omit) }}"  # no subnet was supplied in the change request yet
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-bridge-domains
      with_items: "{{ aci_bridge_domains }}"

    - name: manages aci application network profile(s)
      aci_anp:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-application-network-profiles
      with_items: "{{ aci_application_network_profiles }}"

    - name: manages aci filter(s)
      aci_filter:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-filters
      with_items: "{{ aci_filters }}"

    - name: manages aci filter entries
      aci_filter_entry:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        filter: "{{ item.filter }}"
        proto: "{{ item.proto }}"
        dest_to_port: "{{ item.dest_to_port }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-filter-entries
      with_items: "{{ aci_filter_entries }}"

    - name: manages aci contract(s)
      aci_contract:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        scope: "{{ item.scope|default(omit) }}"
        prio: "{{ item.prio|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contracts
      with_items: "{{ aci_contracts }}"

    - name: manages aci contract subject(s)
      aci_contract_subject:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        contract: "{{ item.contract }}"
        filters: "{{ item.filters }}"
        apply_both_directions: "{{ item.apply_both_directions|default('True') }}"
        prio: "{{ item.prio|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contract-subjects
      with_items: "{{ aci_contract_subjects }}"


The assumption here is that we have already configured our Jenkins job to do the following as part of the workflow for our test environment:

  • Monitor the dev branch on ssh://git@gerrit:29418/Ansible-ACI.git for changes.
  • Trigger a backup of the existing Cisco ACI environment (read the KB on this here).
  • Execute the playbook.yml Ansible playbook against our Cisco ACI test gear and report back on the status via email as well as through the Jenkins job report, ensuring that our test APIC controller is specified as the host. (A sketch of this build step follows below.)
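As a rough sketch, that last "execute the playbook" step could be a Jenkins shell build step along the following lines. The inventory path and the idea of passing the APIC host and credentials via --extra-vars (so the vars_prompt questions are skipped in a non-interactive build) are assumptions for illustration, as are the environment variable names.

# Hypothetical Jenkins "Execute shell" build step for the test environment
cd "$WORKSPACE"   # Jenkins checks the dev branch of Ansible-ACI out here
ansible-playbook playbook.yml \
  -i inventories/test \
  --extra-vars "aci_apic_host=$TEST_APIC_HOST aci_username=$ACI_USER aci_password=$ACI_PASS"

Because extra vars take precedence, Ansible will not stop to prompt for the APIC host or credentials during the automated run.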


Now, assuming that all of our testing has been successful and we have validated that the appropriate Cisco ACI changes were implemented correctly, we are ready to push our new configuration changes up to the master branch for code review.



We are now ready to merge our dev branch into our master branch and commit for review. Remember that you should not be the one who also signs off on the code review, and the person who does should have knowledge of the change being implemented. We will assume that this is true for this mock-up.


So we can now merge the dev branch with our master branch.

$git checkout master

$git merge dev


Now we can push our code up for review.

$git review
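If the git-review helper isn't installed, the equivalent can be done with a plain push to Gerrit's review ref; this is a sketch that assumes the remote is named origin and the target branch is master:

$git push origin HEAD:refs/for/master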


Now our new code changes are staged on our Gerrit server, ready for someone to either sign off on the change and merge it into our master branch, or push the changes back for additional information. But before we proceed with the sign-off, we need to engage our peer-review phase, described in the next section.



We should now re-engage the original teams, discuss the testing phase results and the actual changes to be made, and ensure that absolutely nothing is missing from the implementation. This is also a good stage at which to include the person who will be signing off on the change. Doing so ensures that they are fully aware of the changes being implemented and have a better understanding on which to base the decision to proceed or not.


After a successful peer review, the person in charge of signing off on the code review should be ready to make the call. For this mock-up we will assume that all is a go: they sign off, and the changes get merged into our master branch. Those changes are now ready for Jenkins to pick up and implement in production.



So now that all of our configurations have been defined in an Ansible playbook, all testing phases have been successful, and our code review has been signed off on, we are ready to enter the implementation phase in our production environment.


Our production Jenkins workflow should look identical to our testing phase setup, so this should be an easy one to set up. The only difference should be that the APIC controller configured is the one for our production environment, so our workflow should look similar to the following.

  • Monitor the master branch on ssh://git@gerrit:29418/Ansible-ACI.git for changes.
  • Trigger a backup of the existing Cisco ACI environment (read the KB on this here).
  • Execute the playbook.yml Ansible playbook against our Cisco ACI production gear and report back on the status via email as well as through the Jenkins job report, ensuring that our production APIC controller is specified as the host.


And again assuming that our Jenkins workflow ran successfully we should be good and all changes should have been implemented successfully.


Final thoughts


I hope you found the above useful and informative regarding what a typical Infra-As-Code change might look like. There are additional methodologies that you may want to implement as part of your workflow, as we did with this mock-up, such as additional automation steps and/or end-user capabilities. We will cover some of those items in the next post, which covers the next steps in our journey.

Access control extends far beyond the simple static statements of a Cisco ACL or iptables. The access control we deal with today comes with fancy names like Advanced Malware Protection or “Next-Generation.” If you work with Cisco devices that are part of the FirePOWER defense system, you know what I’m talking about here. For example, the Cisco FirePOWER services module in the ASA can work with Cisco Advanced Malware Protection to send a file hash to a Cisco server in the cloud. From there, the Cisco server will respond with an indication that the file contains malware, or that it’s clean. If it contains malware, then of course the access control rule would deny the traffic. If it’s determined that the traffic is clean, it would allow the traffic.


In the situation discussed previously, the file itself is never sent over the wire; just a hash is sent. How is this at all helpful? Cisco gathers correlation data from customers around the globe. This data helps them build their database of known threats, so when you send them a hash, it’s likely that they’ve already seen it and have run the file in a sandbox. They use advanced tools like machine learning to determine if the file is malicious. Then they catalog the file with a hash value, so when you send a hash they compare it, and there you have it! This is very low overhead in terms of processing data. What about the cases where Cisco doesn’t have any data on the file hash we’ve sent? This is where things get interesting, in my opinion.
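To make the "only a hash crosses the wire" point concrete: a file hash is just a short, fixed-length fingerprint of the file's contents (AMP dispositions are keyed on SHA-256 hashes), and computing one locally looks something like this, with the filename as a placeholder:

$sha256sum suspicious-download.exe
<64-hex-character digest>  suspicious-download.exe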


In this case, the file needs to be sent to Cisco. Once Cisco receives the file, they run it in a sandbox. Using machine learning, among other methods, lets them determine whether the file is doing something malicious or not. At this point they catalog the information with a hash value so they don’t have to look at it again. This is all good, because we can usually get a quick response on whether something is good or bad, and our access control rules can do their job. But here’s where a few questions could be raised. Aside from not having a hash for a file I’m sending or receiving, what determines that the file needs to be forwarded to Cisco? Do they log the file or discard it after the sandbox run? I ask these questions because, in my mind, it’s realistic that all files could be sent to Cisco and cataloged, meaning authorities could potentially subpoena that data from Cisco to see anything I’ve sent or received. If this is the case, then our “Advanced Malware Detection” could also be “Advanced Privacy Deterioration.”


What are your thoughts?  Is it a bad idea to get the cloud involved in your access-control policies or do we just trust the direction vendors are taking us?


Report IT

Posted by kong.yang Feb 26, 2016

In this final post of the SOAR series, I conclude with reporting as the last high value-add skill needed for mastering the virtual environment. Reporting is all about supplying the intended audience with what they need to make an informed decision.


Reporting: The decision-maker’s ammo

Reporting molds data and logged events into a summary that highlights key facts to help the end-user make a quick, yet sound, decision. Reporting is neither glamorous nor adrenaline-pumping, like the experience you get while developing and honing the other skills, but it is the one skill that will help you get on the promotion fast track.


Reporting is better than slideware

IT reporting at its best is pure art backed by pure science and logic. It is storytelling with charts, figures, and infographics. The intended audience should be able to grasp key information quickly. In other words, keep it stupid simple. Those of you following this SOAR series and my 2016 IT resolutions know that I’ve been beating the “keep it stupid simple” theme pretty hard. This is because continuous decision-making across complex systems can lead to second-guessing by many IT chefs in the IT kitchen, and we don’t want that. Successful reporting takes the guesswork out of the equation by framing the problem and potential solution in a simple, easily consumable way.


The most important aspect of reporting is knowing your target audience and creating the report just for them. Next, define the decision that needs to be made. Make the report pivot on that focal point because a decision will be made based on your report. Finally, construct the reporting process in a way that will be consistent and repeatable.



Reporting is a necessary skill for IT professionals. It helps them provide decision-makers with evidence leveraged in the decision-making process. Reporting should be built upon the other DART and SOAR skills so that reports become a valuable asset instead of merely a check mark on someone’s to-do list. Reporting can empower IT pros to new career paths and to new IT frontiers.

Mrs. Y.

Can Security Be Agile?

Posted by Mrs. Y. Feb 25, 2016

Unless you’ve been living in a bomb shelter for the last five years, it’s impossible to avoid hearing about the struggle IT organizations have with supporting DevOps initiatives. Those of us on infrastructure or information security teams are already under pressure to implement and streamline processes to improve efficiency in the midst of budget cuts and downsizing. Now we’re being asked to accommodate the seemingly unrealistic schedules of Agile developers. It often feels as though we’re working with two-year-olds in the midst of an epic “continuous deployment” temper tantrum.


Those of us who have some tenure in IT remember the bad old days. Sleeping under cubicle desks while performing off-hours maintenance or trudging back to the office at 3:00 AM because someone made a change that took down the network. It was the Wild, Wild West and organizations started implementing change management procedures to keep the business happy.  But then a shift occurred. While we were busy trying to improve the way we manage change, many organizations became risk-averse. They have meetings about meetings and it seems as though their IT staff spends more time talking about work than actually doing any. The stage was set for a DevOps revolution.


Does Agile have value outside of a few eCommerce companies, or is it simply confirmation bias for technology chaos? Only time will tell, but organizations hungry for innovation drool when they hear Etsy pushes 50 deploys per day. You heard that right, per DAY. While DevOps organizations seem to be moving at warp speed, others stumble trying to implement that many changes in a month. What’s the secret? How do companies like Facebook, Google, and Amazon stay nimble while maintaining some semblance of order and security?


Don’t be fooled by snazzy lingo like “sprints” and “stories”; DevOps still relies on process. But the requirements of Agile development demand rapid change. The main question infrastructure teams ask is how to support this environment safely. And how do security teams manage risk and compliance without an army of analysts to keep up?


The only way to achieve DevOps speed while minimizing risk is through a two-pronged approach: automation and self-service. Standards are useful, but being able to instantly spin up a system, application, or container based on those standards will better meet the needs of your developers. When you manually build servers and install applications, then ask security to assess them, you’ve wasted precious time with process “gates.” Approach everything as code, something to be modularized then automated. Google and Facebook don’t perform frequent manual code reviews; they use frameworks with pre-approved libraries built on standards, which eliminates much of the need for them.


Moreover, implementation times are shorter when teams can fend for themselves. It’s critical to deploy tools that allow developers to be self-sufficient. Start to think like a programmer, because, as Marc Andreessen pointed out, “Software is eating the world.” Most hypervisors have excellent orchestration capabilities and every self-respecting public cloud has various APIs that will allow you to implement your own deployment scripts or integrate with existing help desk applications.  By spending time up-front to develop automated processes and creating self-service front-ends, you can actually maintain control over what your developers are able to do, while keeping them working.


Embracing DevOps doesn’t have to mean kicking security controls to the curb. But it will require security staff with skillsets closer to that of a developer, those who can think beyond a checkbox and provide solutions, not roadblocks.

Stop And Smell The Documentation

A recent issue with a network reminded me of the importance of documentation. I was helping a friend figure out why destinations in the core of the network were unable to ping other locations. We took the time to solve some routing neighbor issues but couldn't figure out why nothing in the core could get out to the Internet. We were both confused and working through any issues in the network. After a bit more troubleshooting with his team, it turned out to be a firewall issue. In the process of helping the network team, someone had added a rule to the firewall that blocked the core from getting out. A lot of brainpower was wasted because this engineer was trying to help.

We reinforce the idea that documentation is imperative. As-built documentation is delivered when a solution is put together. Operational docs are delivered when the solution is ready to be turned up. We have backup plans, disaster plans, upgrade procedures, migration guidelines, and even a plan to take equipment out of service when it reaches the end of life. But all of these plans, while important, are the result of an entire process. What isn't captured is the process itself. This becomes very important when you are troubleshooting a problem.

When I worked on a national helpdesk doing basic system support, we used the CAR method of documentation:

  • C - Cause: What do we think caused the problem? Often, this was one of the last things filled in because we didn't want to taint our troubleshooting method with wild guesses. Often I would put in "doesn't work" or "broken".
  • A - Actions: This was the bulk of the documentation. What did you do to try and fix the problem? I'll expand on this in a minute, but Actions was the most critical part of the documentation that almost never captured what was needed.
  • R - Resolution: Did you fix the problem? If not, why? How was the customer left? If you were part of a multiple call process, Resolution needed to reflect where you were in the process so the next support technician could pick up where you left off.

Cause and Resolution are easy and usually just one or two line entries. What broke and how did you fix it? Actions, on the other hand, was usually a spartan region of half sentences and jumbled thoughts. And this is where most of the troubleshooting problems occurred.

When you're trying to fix a problem, it's tempting to only write down the things that worked. Why record things that failed to fix the problem? If you try something and it doesn't affect the issue, just move on to the next attempt. However, in the real world, not recording the attempts to fix the problem is just as detrimental as the issue in the first place.

Writing In The Real World

Let's take the above example. We were concentrating on the network and all the issues there. No one thought to look at the firewall until we found out the issue was with outbound traffic. No one mentioned there was a new rule in the firewall that directly affected traffic flow. No one wrote down that they put the rule in the firewall just for this issue, which made it less apparent how long the rule had been there.

Best practice says to document your network. Common practice says to write down as much as you can think of. But common sense practice is that you should write down everything you've done during troubleshooting. If you swap a cable or change a port description, it should be documented. If you tweak a little setting in the corner of the network or you delete a file, it should be noted somewhere. Write down everything and decide what to keep later.

The reason for writing it all down is because troubleshooting is rarely a clean process. Fixing one problem will often uncover other issues. Without knowing the path we took to get from problem to solution, we can't know which of these issues were introduced by our own meddling and which issues were already there. Without an audit trail we could end up chasing our own tails for hours not knowing that the little setting we tweaked five minutes in caused the big issue three hours later.

It doesn't matter if you write down your steps on a tablet or jot them on the back of a receipt from lunch. What is crucial is that all that information makes it to a central location by the end of the process. You need great tools, like the ones from SolarWinds, to help you make sense of all these changes and correlate them into solutions to problems. And for those of us, like me, who often forget to write down every little step, SolarWinds makes log capture programs that can let the device report every command that was entered. It's another way to make sure someone is writing it all down.

How do you document? What are your favorite methods? Have you ever caused a problem in the process of fixing another one? Let me know in the comments!

Stay in the IT game long enough and you’ll see the cyclic nature that exists, as well as the patterns that form as tech trends go from hype to in-production. I remember working with virtualization technologies when you had to do things via the command line interface (CLI), such as configuring your ESX host servers to connect with your Fibre Channel SAN. It was tedious and prone to human error, so vendors simplified the experience with slick GUIs and one-touch clicks. Well, things have come full circle, as the CLI is all the rage to gain scale and run lean via automation and orchestration scripts. With that in mind, here are a few things that virtualization admins need to remember.


First, let’s start with pattern recognition: IT silos will always be targeted for destruction, even virtual ones. Just as virtualization broke down IT silos across networking, storage, applications, and systems, new technology constructs are doing the same to virtual silos. In this case, the virtualization silos are embodied by vendor locked-in solutions and technologies. Don’t be that IT guy who gets defensive about virtualization without seeking to understand the pros and cons of cloud services and container technologies.


Second, virtualization is an enabler of the application in terms of availability and data center mobility. Well guess what - containers, clouds, and microservices enable high availability and mobility beyond your on-premises data center at web-scale. So enable high availability and mobility in your career by learning and using these tech constructs.


Finally, let’s bring this conversation full circle – lean on your expertise and experience. If you know virtualization, you understand the underlying physical subsystems and their abstracted relationships to the application. Put that trust in yourself and embrace the new era of continuous service delivery and continuous service integration. It’s just an abstraction using a different model.


Automate IT

Posted by kong.yang Feb 19, 2016

Last week, I covered optimization as a skill to keep your IT environment in tip-top shape by constantly delivering the most optimal Quality-of-Service (QoS). This week, I’ll walk through automation as another high value-add skill for the virtual environment. Automation is one part best practice, one part policy, and one part execution.


Automation: the only way to scale IT’s most valuable resource – you the IT professional

Automation is a skill that requires detailed knowledge including comprehensive experience around a specific task such that the task can be fully encapsulated by a workflow script, template or blueprint. Automation, much like optimization, focuses on understanding the interactions of the IT ecosystem, the behavior of the application stack, and the inter-dependencies of systems in order to deliver economies of scale benefit and efficiency towards the overall business objectives. And it embraces the “do more with less” edict that IT professionals have to abide by.


Automate away

Automation is the culmination of a series of brain dumps covering the steps that an IT professional takes to complete a single task, one that the IT pro is expected to complete multiple times with regularity and consistency. The singularity of regularity is a common thread in deciding to automate an IT process. The chart below, entitled “Geeks and repetitive tasks,” provides good perspective on an IT professional’s decision to automate.


[Chart: “Geeks and repetitive tasks”]


Automate the IT way: scripts, templates and blueprints

IT automation is usually embodied by scripts, templates, and blueprints. These scripts, templates, and blueprints are built upon an IT professional’s practices and methods. Ideally, they are based upon best practices and tried-and-true IT methods. Unfortunately, automation cannot differentiate between good and bad practices. Therefore, automating bad IT practices will lead to unbelievable pain at scale across your data centers.


With this in mind, keep automation stupid simple. First, automate at a controlled scale and follow the mantra of doing no harm to your production data center environment. Next, monitor the automation process to make sure that every step executes as expected. Finally, analyze the results and make any necessary adjustments to the automation process.



Automation is the skill that allows an IT professional to scale beyond what they could do singularly. It is also a skill that builds upon the DART and Optimization skills. It seeks to maximize the IT professional’s time by freeing up more time to do other tasks. And next week, I’ll talk about Reporting, the last of the SOAR skills that virtualization admins need to SOAR their professional careers.

In the previous post we discussed ways to get started down the Infra-As-Code journey. However, one thing that others pointed out was that I missed the backup process for devices. So I want to address that at the beginning of this post to get that area covered. I very much appreciate those who brought it to my attention.


So how do you currently handle backups of network devices? Do you not back them up? Do you back them up to a TFTP/FTP server? In order to accomplish backup tasks for Infra-As-Code we need to do our backups a little differently. We need to ensure that our backups are written to the same filename at our destination on every backup occurrence. You may be thinking to yourself that this seems rather un-useful, correct? Actually it is very useful, and it is exactly what we want. The reason is that we want to store our backups in our version control system, and in order to benefit from this the backup filename needs to be the same every time it is committed. Doing so allows us to see a configuration diff between backups: if any configuration change occurs, our version control system will show the diff between the previous backup and the current backup, allowing us to track exact changes over time. This makes it easy to identify any change that may have caused an issue, or even an unauthorized change that was not implemented using the change methodologies that are hopefully in place. One last thing in regards to backups: we need to ensure that our backup strategy is followed, at the very least as part of the implementation phase from the previous post, so that our backups are running and validated prior to the actual implementation of a change. With the section on backups covered, let’s continue on to what this post was meant to be about, namely what is required for Infra-As-Code.
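As a minimal sketch of that idea, assume a scheduled job has already pulled the running configuration of a device (by whatever method you use today) and that a hypothetical network-backups repository is cloned locally; the important part is that the file path never changes between runs:

# Hypothetical scheduled backup job (paths and device name are placeholders)
cd ~/Git_Projects/network-backups
cp /tmp/core-sw01.cfg configs/core-sw01.cfg   # always overwrite the same file
git add configs/core-sw01.cfg
git commit -m "Scheduled backup of core-sw01"   # if nothing changed, git simply reports nothing to commit
git push
git log -p -2 -- configs/core-sw01.cfg          # shows exactly what changed between the last two backups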


What do I mean by what is required for Infra-As-Code? I mean: what tools need to be in place to get us started on this new journey? I will only be covering a very small subset of tools that will allow us to get started, as there are far too many to cover. The tools that I mention here are the ones that I use, so I may be a bit partial, but in no way are they to be considered the best.


Let’s start with a version control system first, because we need a repository to store all of our configurations, variables, and automation tooling. I am sure you have heard of GitHub as a git repository for users to share their code and/or contribute to other users’ repositories. We are not going to be using GitHub, as it is a public service and we want a repository on-site. You could use GitHub as a private repository if you so choose, but be very careful about what you are committing and ensure that the repository really is private. There are others, such as BitBucket, that allow the creation of free private repositories, whereas GitHub charges for private repositories. So what do I use for an on-site git repository for version control? I use GitLab-CE (GitLab Community Edition), which is a free and open-source git repository system with a very good WebUI and other nice features. GitLab also offers a paid enterprise version which adds additional functionality such as HA. Having a good version control system is absolutely crucial because it allows us to create git branches within our repos to designate which is Master (Golden), plus others such as staging, test, and dev. Having these different branches is what allows us to do different levels of automation testing as we work through our workflows of changes.


Let’s now touch on code review. Remember, our code-review system is what allows sign-off on code/configuration changes before they are applied to our network devices. Code review also enforces accountability for changes. The recommended method is that whoever signs off on code changes has a complete understanding of the underlying network devices as well as of what the configuration/code changes are. This ensures that everyone is in the know on what is actually changing and can take the proper measures prior to a production change that might bring down your entire network. And luckily, if that were to happen, you did have a proper backup of each configuration, right? So what do I use for code review? I use Gerrit, which is developed by Google. Gerrit is a very well-known and widely used code-review system throughout the developer ecosystem. One thing to note on Gerrit is that it can ALSO be used as a git repository in addition to code review. This works well for those who want a single system for their git repositories and code review. The integration is very tight, but the WebUI is not so pretty, to say the least. Whether to use a single system, or Gerrit only for code review and GitLab for git repositories, is a matter of choice. If you choose not to use a single system, it takes additional steps in your methodologies.


Now we will touch on automation tooling. There are MANY different automation tools to leverage, and each one is more a matter of personal preference and/or cost. Some of the different automation tools include Chef, Puppet, Salt, Ansible, and many others. My preferred automation tool is Ansible. Why Ansible? I spent a good bit of time with many of the others, and when I finally discovered Ansible I was sold on its relatively easy learning curve and power. Ansible relies on Python, which is a very rich and powerful programming language and is also easy to learn. Using Ansible you can program network devices via an API (if one exists), raw commands (via SSH), SNMPv3, and other methods. There are also related tools, such as NAPALM, that can be used with Ansible; I highly recommend checking out NAPALM. So definitely check out the different automation tools and find the one that works best for you.
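As a quick illustration of the raw-commands-over-SSH approach, an ad-hoc Ansible run against a group of network devices might look something like the following; the inventory path, group name, and username are placeholders, and the raw module simply sends the command over SSH without requiring Python on the device.

$ansible switches -i inventories/network -m raw -a "show version" -u admin -k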


And the final tool to discuss in this post covers CI/CD (Continuous Integration/Continuous Delivery) automation of our workflows and changes. There again, there are many different tools to choose from, including Go-CD, Rundeck, Travis-CI, Bamboo, Jenkins, and many, many more. My tool of choice is Jenkins for the most part, but I also leverage several of the others listed above for one-off type deployments. Jenkins is a widely adopted CI platform with a massive number of plugins developed for tying other tooling into your workflows, such as Ansible playbooks and ad-hoc Ansible tasks. Jenkins allows us to stitch together our complete workflow from top to bottom if desired. That means we could leverage Jenkins to kick off a backup, pull a specific branch from our git repository with configuration changes, run an Ansible playbook against a set of network devices, report back on the status, and continue through our workflow pipelines. Jenkins is the tool that takes many manual processes and automates those tasks for us. So think of Jenkins as the brains of our Infra-As-Code.


Now, I know this is a lot to learn and digest, with many new tools and methodologies, but hopefully I can help you get your head around these tools. To that end, I have created a Vagrant lab that you can spin up to begin learning each of them. I will be leveraging this lab going forward in the follow-up posts over the next few weeks, but I wanted to share it here now so you can start getting familiar with these tools too. So if you head over to here, you can begin your journey as well.


And with that, this post ends; up next we will discuss some examples of our new methodologies.

If you’re in management, you may not understand the effects of changes on your network.  However, if you’re the network engineer you know exactly the effects and ramifications that come with a change on your network.  The slightest change can literally cause an outage.


So what’s the big deal with software companies that want you to buy Network Configuration Change Management (NCCM) software?  Well I know personally that a few of you have been in this exact position and on both sides of this ball.  As a manager you want to have a seamless network and keep down costs.  As the network engineer you want to be able to have a smooth running network and a happy manager.


What is the happy medium here? When do too many software tools, too many diagrams on walls, and an over-abundance of saved test files tell you that software is required to actually manage all of this?


SolarWinds offers a Network Configuration-Change Management package. Does this mean it’s the best? No, as that is in the eye of the beholder and user. Does this mean that it is manageable and can save me time and my manager money? You’re darn right it can do both, very easily!


Yes, there are other software tools that do about the same thing, with little differences along the way. Just as I like thin pancakes and you may like fluffy thick pancakes, in the end they are still pancakes.


Now, to know what good NCCM software looks like regardless of the name across it, let’s go over the top six reasons to have such software.


  1. Making changes because you were told to…
     You want to know immediately if someone is in fact making changes, and have a way to revert those changes if needed. NCCM software allows you to do this, consistently backs up your devices in case such changes are incorrect, and provides a complete bare-bones backup if needed for a new device.
  2. Scheduled device changes
     Planning IOS upgrades, changes to ACLs, SNMP passwords, or many other items on your daily task list? Having a program that allows you to monitor and roll out these changes saves time and shows results quickly.
  3. A second pair of eyes
     It’s good to have an approval system in place so that scripting and changes receive a second look before deployment. This helps prevent outages and mistakes, and is definitely valuable when your network has service level agreements and high availability needs.
  4. Backups, backups, backups
     I cannot say this enough: if you do not have regular backups of your system that are easily retrievable, you do not have a fully reliable network. PERIOD. Backing up to your local machine is not acceptable… You know who you are.
  5. Automation of the tasks you might rather forget...
     Being able to detect issues within your configuration through compliance reporting, real-time change detection, scheduled IOS upgrades, inventory, and many more automated tasks allows you to focus on the integrity and availability of your network.
  6. Security
     If you have certain required security measures within your configurations, then you need compliance reporting. With NCCM software, you can schedule a report or run it manually and print out your ‘state of compliance’ within seconds, instead of checking device by device.


Well, there are a few valuable reasons to at least consider this type of software. If you have any other thoughts, feel free to drop me a line! Add to my list or take away; I’m a pretty open-minded individual.


If you’re looking for more information, this has a solid outlook on NCCM and businesses.

Not to be outdone in the race for federal network modernization, the United States Army last year issued the Army Network Campaign Plan (ANCP). Created by Lt. Gen. Robert Ferrell, the Army’s outgoing director, the ANCP “outlines current efforts that posture the Army for success in a cloud-based world” and provides “the vision and direction that set conditions for and lay a path to Network 2020 and beyond.”


These broad and bold statements encompass several things. First, there’s the Army’s desire to create a network that aligns with DISA’s Joint Information Environment (JIE) and the Defense Department’s modernization goals, which include better insight into what’s happening within its networks and tighter security postures. Second, there’s the pressing need to vastly improve the services the Army is able to deliver to personnel, including, as outlined in the ANCP, everything from “lighter, more mobile command posts to austere environments that will securely connect the network and access information.”


How unifying operations and security fits into the ANCP


The need for greater agility outlined in the ANCP dictates that operations and security teams become more integrated and unified. The responsibilities of one can have a great impact on the other. Working together, cybersecurity and operations teams can share common intelligence that can help them more quickly respond to threats and other network problems.


Similarly, the solutions that managers use to monitor the health and security of their networks should offer a combination of features that address the needs of this combined team. As such, many of today’s network monitoring tools not only report on the overall performance of the network, but also provide indications of potential security threats and remediation options.


Why letting go of the past is critical to the success of the ANCP

Combining operations and security teams is a new concept for many organizations, and it requires letting go of past methodologies. The same mindset that contributes to that effort should also be applied to the types of solutions the Army uses moving forward, because the ANCP will not be successful if there is a continued dependence on legacy IT solutions.


It used to be fine for defense agencies to throw their lot in with one software behemoth controlling large segments of their entire IT infrastructure, but those days of expensive, proprietary solutions are over. Army IT professionals are no longer beholden to the technologies that may have served them very well for the past few decades, because the commercial market for IT management tools now has lightweight, affordable, and easy-to-deploy solutions. The willingness to let go of the past is the evolution of federal IT, and is at the heart of all modernization efforts.


The fact that Ferrell and his team developed a plan as overarching as the ANCP indicates they are not among the IT leaders clinging to the past. In fact, the plan itself shows vision and a great desire to help the Army “be all it can be.” Now, the organization just needs to fully embrace new methodologies and technologies to reach that goal.


To read the extended article, head over to Defense Systems.

A week ago, Leon Adato shared a fine post titled What “Old” Network Engineers Need To Remember. I enjoyed reading his post and agreed with every point. So I thought I’d make my own list this week and share my thoughts on how “not” to be a bad network security engineer.

So let’s get right to it, shall we?

  1. Don’t assume bad motives. Too often we assume that users are doing something they shouldn’t be, and when we get the call that their computer has malware, or we find something funny in the logs, we treat them pretty badly. Sure, some people are jerks and try to get around the rules. But most people just want to get work done with as little friction as possible.
  2. Don’t assume that everyone knows the latest malware or ransomware delivery methods. I have a friend who works for an auto parts distributor. He deals with shipments all the time. One of the emails he received was a failed shipping notification. He opened it and boom! Cryptolocker. It encrypted everything on the shared drives he was connected to and left the business limping along for a few hours while they restored the previous night’s backups. He had no idea. Malware and ransomware aren’t in his job description.
  3. Educate your users in a way that isn’t demeaning to them. I know the old “Nick Burns” videos are humorous. But again, if you take the time to train your users and you’re not a jerk about it, they’re more apt to respond in a positive manner.
  4. Now for the technical stuff. If you’re using a ton of ACL statements to control traffic, please add remarks. By adding remarks to your ACL statements, those who come after you will think you’re a pretty nice guy. I’ve inherited ACLs with thousands of lines and no clue what any of the entries were for. Not cool!
  5. Use event logging and correlation to your benefit. Too many network security professionals try to get by without a solid logging and correlation strategy. So instead of having all the info, they tend to tread water trying to keep up with what’s going on in the network. There are a number of SIEM solutions today that offer event correlation and really good filtering on the logs. If you don’t have one, build a case for one and present it to upper management.

It’s true that we’re in a very tough spot sometimes. We manage systems that have a lot of power in terms of network connectivity. It’s good for us to be transparent to users, but at the same time we don’t want our users’ activity to be invisible to us. It’s quite a balance we have to strike, but it’s worth it when we can. Using some of the more advanced tools available today can help give us the visibility we need. Here’s a good example of how you can use SolarWinds LEM to create rules for real-time correlation and response. This is just one example of how we can use today’s technology to provide security services while remaining somewhat transparent to users. As far as the five points mentioned above, these are but a few of the points I’ve learned over the years that have proven useful. There are many more, of course. If you have one, perhaps you’d share it below. Iron sharpens iron, after all!


Optimize IT

Posted by kong.yang Feb 12, 2016

Maximize your resources’ utility

Last week, I covered security as a skill to take your IT career to the next level. This week, I’ll walk through optimization as another high value-add skill in the virtual environment. To truly appreciate the importance of optimization, you have to understand the blood, sweat, and tears that one endures when gaining optimization wisdom.


Optimization: More than meets the I → T

Optimization is a skill that requires a clear end-goal in mind. Optimization focuses on understanding the interactions of the IT ecosystem, the behavior of the application stack, and the inter-dependencies of systems inside and outside their sphere of influence in order to deliver success in business objectives.


If one were to look at optimization from a theoretical perspective, each optimization exercise would be a mathematical equation with multi-variables. Think multivariate calculus as an IT pro tries to find the maximum performance as other variables change with respect to one another.


I’m positive that Professor sqlrockstar could lead us through a database optimization course leveraging multivariate calculus to analyze deterministic SQL systems with N degrees of freedom. Meanwhile, I could leverage my ECE background and lead a discussion on applied control systems theory and its application to optimizing performance in your dynamic, virtual data centers. This raises the question: is calculus knowledge required to optimize your virtual environment? The answer is no. While it may help to formulate theories and visualize the overall concepts, IT is all about keeping IT stupid simple. That should be the IT ideal, after all.


Optimizing for everything is really optimizing for nothing

Optimization is a skill forged from focused tuning toward a desired end-goal. As such, the one trap that all IT pros should avoid is trying to do too much. Optimizing for everything and everyone more often than not ends in disappointment, with the optimization efforts making Quality-of-Service worse for everyone involved.


To avoid such pitfalls, have a simple optimization plan. Prioritize your most important deliverable as defined by all the stakeholders. Optimize with that deliverable as the focal point. Understand how each change you make behaves with respect to your optimization goal. If additional optimization tasks are appended to the original one, communicate the risks clearly and concisely. And understand that sometimes there is no convincing an IT executive looking to make their mark, even at the expense of their direct reports.



Optimization is a high-reward skill that builds upon the DART skills framework. It seeks to maximize the utility of IT resources while delivering best-in-class Quality-of-Service to end users. Next week, I’ll discuss the Automation skill, another of the SOAR skills that virtualization admins need to take flight in their careers.

The day starts like any other. You’re finally starting to feel like you’ve got a handle on the security posture of your enterprise, because you’ve been able to add a reasonable amount of visibility into the infrastructure:

  • You have a Netflow collector reporting anomalies.
  • Taps or network packet brokers are installed at key ingress/egress points.
  • Vulnerability management systems are implemented to scan networks or devices and report regularly.
  • Firewalls, intrusion detection systems (IDS) and other malware detection mechanisms send log and alert data to a security information and event management system (SIEM) or a managed security service provider (MSSP).
  • A centralized log correlation system is deployed to meet the needs of security and assist operations with troubleshooting.


Then the unthinkable happens.


Your boss waltzes in to tell you, “We’re moving to the cloud. Isn’t that exciting?!” While management is already counting the money they think they’re going to save by eliminating on-premise systems and reducing head-count, all you feel is frustration and dread. How are you going to do this all over again, and in the cloud, without the organization going broke or your team having a nervous breakdown?


Remain calm and don’t surrender to nerd rage. While challenging, there are ways to use some of your existing tools and leverage similar ones built into software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) offerings in order to meet your security needs.


For example, you can still use network monitoring tools such as NetFlow to track anomalies at key ingress/egress points between the enterprise and the IaaS or SaaS. To implement access restrictions, you could set up a dedicated virtual private network (VPN) tunnel between your physical network and the cloud environment, forcing your users through traditional, on-premise controls. Additionally, most cloud providers offer the ability to create access control lists (ACLs) and security zones, which provides another method of restricting access to resources. By leveraging VPN tunnels, ACLs, and logging ingress/egress firewall rules from your network to the cloud service, you can create an audit trail that will prove useful during and after a breach.
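As a rough illustration, here’s a hedged Python/boto3 sketch of two of those controls in AWS: an ingress rule limited to the CIDR that rides your VPN tunnel, plus VPC flow logs for the audit trail. The security group ID, VPC ID, CIDR, log group, and role ARN are placeholders.

```python
# Sketch: restrict ingress to the on-premise range behind the VPN tunnel
# and enable VPC flow logs so cross-boundary traffic leaves an audit trail.
# All IDs, CIDRs, and ARNs below are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Only allow HTTPS from the corporate range that rides the VPN tunnel.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)

# Log accepted and rejected flows for the VPC to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```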


Other useful access control methods are the addition of multi-factor authentication (MFA) or single sign-on (SSO) to your cloud service or infrastructure. We all know the problems with passwords, and they become even more frightening when you consider that your services and data live in a multi-tenant environment. Many cloud providers support free or paid MFA integration. Moreover, you’ll want to leverage the provider’s SSO capabilities to ensure that you’re controlling, auditing, and removing administrative access centrally. When choosing a cloud service, put these requirements in your RFP to ensure that a provider’s offerings in this realm align with your security model and compliance initiatives.


If you already have security products you’re comfortable with, you generally don’t have to reinvent the wheel. Because of multi-tenancy, you won’t be able to install physical taps and IDS appliances, but you have other options for applying more traditional controls to IaaS and SaaS.  Many companies offer versions of their products that work within cloud environments. Network firewalls, IDS, web application firewalls (WAF), logging systems, vulnerability scanners: a simple search in the Amazon Web Services (AWS) Marketplace should alleviate your fears.


Finally, many providers have an administrative console with APIs and external logging capabilities. AWS CloudTrail and CloudWatch are just two examples.  With some effort, these can be integrated with either an on-premise or outsourced SIEM.
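For instance, here’s a rough Python/boto3 sketch that pulls recent CloudTrail events and relays them to an on-premise SIEM over syslog. The SIEM hostname and the event filter are placeholders, and a production deployment would more likely use CloudTrail’s native S3 or CloudWatch Logs delivery instead of polling.

```python
# Sketch: pull recent CloudTrail console-login events and forward them
# to an on-premise SIEM via syslog. Hostname and filter are placeholders.
import json
import logging
import logging.handlers
from datetime import datetime, timedelta, timezone

import boto3

siem = logging.getLogger("cloudtrail-relay")
siem.setLevel(logging.INFO)
siem.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))

cloudtrail = boto3.client("cloudtrail")
window_start = datetime.now(timezone.utc) - timedelta(minutes=15)

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=window_start,
    MaxResults=50,
)

for event in resp["Events"]:
    # CloudTrailEvent holds the raw JSON record; forward it to the SIEM.
    siem.info(json.dumps({
        "event_name": event["EventName"],
        "event_time": event["EventTime"].isoformat(),
        "raw": json.loads(event["CloudTrailEvent"]),
    }))
```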


Migrating to the cloud can be intimidating, but it doesn’t have to be the end of security as you know it. Your team must be prepared to shift the way you apply controls. With some tweaks, some of them will still work, but others might need to be abandoned. The cloud seems to be here to stay, so security must adapt.

Technology moves fast in today's world. We go from zero to breakneck speed on a new concept before we can even catch a breath. New software enables new business models and those new models drive our understanding of people forward in ways we couldn't imagine before. I can catch a taxi with my phone, perform a DNA analysis from the comfort of my home, and collect all kinds of information about my world with a few clicks. Keeping up gets harder every day.

It's important to recognize new technology that has the potential to change the way IT professionals do their job. Years ago, virtualization changed the server landscape. The way that server administrators performed their duties was forever changed. Now instead of managing huge tracts of hardware, server admins had to focus on the fine details of managing software and resources. Making the software perform became the focus instead of worrying about the details of the server itself.

Today, we're poised to see the same transition in application software. We've spent years telling our IT departments how important it is to isolate workloads. Thanks to virtualization, we transitioned away from loading all of our applications onto huge, fast servers and instead moved that software into discrete virtual machines designed to run one or two applications at most. It was a huge shift.

Yet we still find ourselves worried about the implications of running multiple virtual operating systems on top of a server. We've isolated things by running a bunch of little copies of an operating system, each with its own kernel, in parallel. That solves many of our problems but creates a few new ones, like resource contention and a larger attack surface. What we need is a way to reduce overhead even further.

Containment Facility

That's where containers come into play. Containers are a software construct that runs on top of a Linux kernel. They allow us to create isolated instances inside a single operating system and run those instances in parallel. It's like a virtual OS instance, but instead of a whole copy of the operating system running in the container, it's just the application's environment itself. It's fast to deploy and easy to restart. If your application service halts or crashes, just destroy the container and restart it. No need to reprovision or rebuild anything. The container engine takes care of the heavy lifting and your service returns.
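Here's a minimal sketch of that destroy-and-restart pattern, assuming Docker as the container engine and the Python docker SDK installed; the image and container name are arbitrary choices for the example.

```python
# Minimal sketch of the "destroy and restart" pattern with Docker's
# Python SDK. No OS install or provisioning step is involved.
import docker

client = docker.from_env()

# Launch an isolated web service from an existing image.
web = client.containers.run("nginx:alpine", detach=True, name="web")

# If the service misbehaves, throw the container away and start fresh.
web.stop()
web.remove()
web = client.containers.run("nginx:alpine", detach=True, name="web")
print(web.status)
```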

Containers have been used in development organizations for some time now to meet the need to rapidly configure hundreds or thousands of instances that run a single command once or twice. It's a great way to generate huge amounts of test data and ensure code will run correctly in a variety of circumstances. Beyond development, there are even more uses for containers. Imagine a huge database application. Rather than building query functions into the app itself, the queries can run as containers that are spun up on demand and destroyed as soon as the data is returned. This would reduce the memory footprint of the database significantly and off-load some of the most CPU-intensive work to a short-lived construct.
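And here's a sketch of that ephemeral-worker idea, again assuming Docker and its Python SDK: spin up a throwaway container for one job and let it disappear as soon as the result comes back. The image and command stand in for whatever query or batch job you'd actually run.

```python
# Sketch of an ephemeral worker: run one job in a throwaway container
# that is destroyed as soon as it finishes. Image and command are
# placeholders for a real query or batch task.
import docker

client = docker.from_env()

result = client.containers.run(
    "python:3-alpine",
    ["python", "-c", "print(2 + 2)"],
    remove=True,   # container is destroyed as soon as the job finishes
)
print(result.decode().strip())   # the job's output is all that survives
```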

When application developers start utilizing containers even more, I imagine we will see even more security being built into software. If a process is forked into a container it can be isolated. Containers can be configured to self-destruct when a breach is detected, immediately ejecting the offending party. Data can be contained and communication lines secured to ensure that the larger body of sensitive material can be protected from interception. Applications can even be more responsive to outages and other unforeseen circumstances thanks to rapid reconfiguration of resources on the fly.

Containers are on the verge of impacting our IT world in ways we can't possibly begin to imagine. The state of containers today is where virtualization was a decade ago. In those 10 years, virtualization has supplanted regular server operations. Imagine where containers will be in even just five years?

You owe it to yourself to do some investigative work on containers. And don't forget to check out the Thwack forums where IT professionals just like you talk about their challenges and solutions to interesting problems. Maybe someone has the container solution to a problem you have waiting for you right now!
