

 

The Sailor round was full of tough match-ups & huge upsets!

Trekkies on thwack set their phasers to stunned when Han Solo beat out the beloved Captain Kirk.

This was easily one of the most contested match-ups this round.

 

Comments from Captain Kirk Supporters:

  • “This seems unfair. Han Solo is known more as a pilot and smuggler. Not as a captain. His crew consisted of a poorly maintained shag carpet. Yet Han Solo is winning, too bad really.” ddiemert
  • “This may be symptom of too many younger people that didn't watch the original Star Trek... Kirk is "the" man period.  Han Solo was neat character but Kirk saved everyone multiple times over.” ecklerwr1
  • “Captain Kirk: 7 movies. Han Solo: 4 movies. 7 > 4. 'Nuff said” ironman84
  • “Capt. Kirk is the original bada$$ space-age captain in TV and movies. Han was a supporting player in a larger story. Look at the big picture people!” tinmann0715

 

What was interesting in this match-up was that there were several heartfelt and logical defenses for Captain Kirk and virtually no defense for Han Solo, and yet Han still prevailed.

The odds of Han Solo winning this match up were approximately 3,720 to 1, but then again Han would retort with “Never tell me the odds”.

 

Round 2 Shutouts & nail-biters:

  • Turanga Leela vs Underpants- Captain Underpants got his pants kicked in this match up & Captain Turanga Leela won by the largest margin in this round!
  • America vs Marvel- Captain America secures an easy win over Ms Marvel. Once again, tinmann0715 said what we were all thinking: “One of the most popular comic book super heroes of all time against a lesser-known comic book super hero in the same genre? It's a bloodbath.”

 

  • Crunch vs Caveman- This was the only true nail-biter in this round & Cap’n Crunch was able to crunchatize his way to a narrow victory.

 

Which victories or losses shocked you this round? Tell us below!

 

To see who else is advancing to the next round, check out the updated bracket & start voting for the Gunner round!

We need your help & input as we get one step closer to crowning the ultimate captain!

 

Access the bracket and make your picks HERE>>

One of the thornier new ideas in networking is automation. As soon as someone starts talking about adding automation to a network system there are cries of "taking my job" or "automating me out of a career" from all corners of the data center. The truth is that automation isn't going to cost anyone a job but it might just make yours more interesting.

Big-Boned Finger Mistakes

We've all been there before. It's the end of the day. You've had a long few hours of trying to bring new switches online. You're ready to call it a day after this last device comes online. All you need to do is get this last one in place and it's time to go. You bring up the basic configuration and everything appears to be running just fine. As soon as the routing neighbors establish, you're set. It's only then that you realize you pasted in the wrong configuration! This is the config for the switch across the campus. As you frantically attempt to fix the broken configuration, the routing relationship comes online with the wrong info. Everyone starts dropping. Routes are being black-holed. Your cell phone starts ringing while you try to get back to normal. Users are furious that their end-of-day jobs are failing because the network crashed yet again.

Making mistakes is part of the learning process in networking. As we screw things up we learn how to prevent them from happening again. But what if the easiest mistakes to make were also the easiest to prevent in the first place? Network automation isn't just about configuring switches to provision automatically when they first boot, although that is a function of automation. It's also about taking repetitive, simple tasks that we do every day and making them run in a predictable way every single time.

I've spent a lot of my career doing boring data entry on network systems. MAC addresses, IP addresses, and port configurations are about as boring as it gets. Yet making a mistake on one of these could spell disaster for everything. One mistake in a port security access statement could limit traffic to a bad MAC address that locks everyone in the network out of reaching the firewall. Typing the wrong IP address into a routing statement could cause the whole routing table to fall apart, as in the scenario above.
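
To make that concrete, here's a minimal sketch of the kind of sanity check a script could run before any of that data ever touches a device. The CSV layout (hostname, mac, ip) and the default file name are assumptions for illustration, not a real tool:

    # Minimal sketch: catch malformed MAC/IP entries before they reach a device.
    # The CSV columns (hostname,mac,ip) and default file name are assumptions.
    import csv
    import ipaddress
    import re
    import sys

    MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

    def validate(row):
        errors = []
        if not MAC_RE.match(row["mac"]):
            errors.append("bad MAC: " + row["mac"])
        try:
            ipaddress.ip_address(row["ip"])
        except ValueError:
            errors.append("bad IP: " + row["ip"])
        return errors

    path = sys.argv[1] if len(sys.argv) > 1 else "planned_changes.csv"
    bad = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for err in validate(row):
                print(row["hostname"] + ": " + err)
                bad += 1
    sys.exit(1 if bad else 0)  # a non-zero exit stops the change in its tracks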

Making Our Lives Easier

Automation doesn't steal our jobs. It takes the boring parts of our jobs and makes them automatic. It means that we don't have to worry about a mistake taking our job instead. It also means that we could have even more time to learn new things. Take a moment and think about how much more you could accomplish at your desk if you knew you didn't have to do repetitive provisioning tasks. Imagine a world in which your server team can have VLANs automatically assigned to ports instead of generating a ticket and calling you an hour later wondering why it hasn't been done yet.

Automation means giving us control over things. It means having a second set of machine eyes looking at a task and deciding that it's being done correctly. It means knowing that the junior admins are following best practice guidelines every time they implement something new. It means having a complete audit report available when someone asks so you can say that it really wasn't the network this time.

Automation doesn't have to include a whole system like Puppet or Chef or Ansible to start with. It can be as simple as finding out what scripting languages your network gear supports. It can be as simple as finding out if your NMS supports scripts to do things like provision ports or clear disabled states. Automation really starts with making a list of all the tasks that you find yourself doing every day and seeing if there is a way to have something else do them for you. Once you know what you want to automate, finding a way to do it becomes a simple task.
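
As one example of that list-making, here's a minimal sketch of turning a repetitive port-provisioning task into a template render. The interface names, VLANs, and template text are hypothetical; the point is that the output comes out identical every single time:

    # Minimal sketch: render the same access-port stanza for every entry in a
    # (hypothetical) port list, so the repetitive typing happens zero times.
    PORT_TEMPLATE = (
        "interface {interface}\n"
        " description {description}\n"
        " switchport mode access\n"
        " switchport access vlan {vlan}\n"
        " spanning-tree portfast\n"
    )

    ports = [
        {"interface": "GigabitEthernet1/0/1", "description": "Printer-3F", "vlan": 20},
        {"interface": "GigabitEthernet1/0/2", "description": "AP-3F-East", "vlan": 30},
    ]

    config = "\n".join(PORT_TEMPLATE.format(**p) for p in ports)
    print(config)  # paste into a change window, or hand it to your NMS/API of choice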

This post will be the final one in the series about Infra-As-Code, and I will keep it short, as we have covered a good bit over the past few weeks. I will only do a quick overview of the methodologies and processes we have covered over this series, and I want to ensure that we finish up by reinforcing what we have learned.

Remember, we will be adopting new ways of delivering services to our customers by taking a more programmatic approach, which should also involve testing and development phases (something new to most, right?). At the initial phase of a new request, we should have an open discussion with everyone who should be involved for the duration of the new implementation. This open discussion should cover the specifics of a test plan and when a reasonable timeline should be reached. We should also be leveraging version control for our configurations and/or code, which we accomplish by using Git repos. To further benefit our version control, we should adopt a continuous integration/delivery solution such as Jenkins. By using a CI/CD solution, we are able to automate the testing of our configuration changes and receive the results of those tests. This not only saves us manual tasks that can be somewhat time consuming, but also ensures consistency in our testing. When we are ready for sign-off to move into production, we should leverage a code-review system and peer review. Code review adds an additional layer of checks against the configuration changes we intend to make, and peer review gives us a follow-up open discussion covering the results of our testing and development phases with those who were involved in our initial open discussion. Once we have final sign-off and all parties are in agreement on the deliverables, we can leverage the same CI/CD solution as we did in our test/dev phases to ensure consistency.
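
To make the CI/CD piece a little more concrete, here's a minimal sketch of the kind of automated check a Jenkins job (or any CI runner) could execute against staged configuration files before code review. The directory name and the two rules are assumptions; swap in whatever your own standards require:

    # Minimal sketch of a pre-merge check a CI job could run against staged
    # config files. The "configs" directory and the rules themselves are assumptions.
    import pathlib
    import re
    import sys

    RULES = [
        (re.compile(r"^enable password ", re.M), "plaintext enable password found"),
        (re.compile(r"^snmp-server community public", re.M), "default SNMP community"),
    ]

    failed = False
    for cfg in sorted(pathlib.Path("configs").glob("*.cfg")):
        text = cfg.read_text()
        for pattern, message in RULES:
            if pattern.search(text):
                print("FAIL " + cfg.name + ": " + message)
                failed = True

    sys.exit(1 if failed else 0)  # non-zero fails the pipeline stage before sign-off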

When a mission-critical application experiences an outage or severe performance degradation, the pressure on the agency and its information technology (IT) contractors to find and fix the problem quickly can be immense. Narrowing down the root cause of the problem wherever it exists within the application stack (appstack) and enabling the appropriate IT specialists to quickly address the underlying problem is essential.

 

This post outlines how taking a holistic view of the appstack and optimizing visibility into the entire IT environment—applications, storage, virtual machines, databases and more—is the key to maintaining healthy applications, and how the right monitoring tools can quickly identify and tackle the problems before they become serious performance and security threats.

 

Know the impact

 

Your IT infrastructure is made up of a complex web of servers, networks and databases, among other things; and troubleshooting can be tricky. But an end user with a problem only knows that he or she can’t accomplish his or her task.

 

Take a holistic view of monitoring

 

Using individual monitoring tools for appstack issues is inefficient. Wouldn’t it be more effective if you already had a narrowed-down area in which to look for a problem?

 

Holistic monitoring prevents the “where’s the problem?” issue by keeping an eye on the entire appstack, pulling information from each individual monitoring tool for a high-level view of your systems’ health. High-level monitoring tools can be checked quickly and efficiently without diving into individual tools, and they tie together data from in-depth tools to reach conclusions and identify problems across multiple areas.

 

With a tool that provides broad, high-level visibility of the status of all layers of the appstack, IT professionals can quickly get an interdisciplinary look at different aspects of the infrastructure and how configurations or performance of various components have recently changed.

 

Extend the view to security

 

Another advantage of holistic application monitoring is that it gives visibility into both performance and security. A good holistic monitoring tool talks to all your different firewalls, intrusion detection systems and security-focused monitoring tools. It collects log data and correlates and analyzes it to give you visibility into performance and security issues as they’re happening.

 

Prioritize monitoring for maximum security

 

1. Understand what you’re trying to secure. The starting point in every system prioritization is to choose an end goal. What aspect of your appstack is the most important to secure?

 

Come up with a prioritized list and find out how each priority area is being secured, the technologies being used and the existing monitoring.

 

2. Use best practices for monitoring policies. Tools need to be checked regularly. Monitoring tools are only as good as the attention paid to their results; otherwise, performance and security issues could be slipping past you.

 

Be sure to set up alerts in each monitoring tool as well as running a holistic monitoring tool. This ensures that your IT pros are immediately made aware of issues with individual components of your appstack in addition to the overall insights offered by the holistic tool.

 

3. Don’t sacrifice your deep-dive specialized tools. Keep in mind that holistic monitoring doesn’t eliminate the need for your existing, individual monitoring tools. It’s good to have a holistic tool with overall visibility, but you’ll also need the more in-depth tools for deeper dives when identifying problem areas.

 

Find the full article on Signal.


 

We started in uncharted waters with this bracket battle, and unfortunately it was not smooth sailing for everyone in the Swabbie round. Personally, I’m a little shocked that Captain Hook was among the fallen who will now be sentenced to swabbing the poop deck.

 

Round 1 shutouts & nail-biters:

 

  • Picard vs Nero- Shut out for Picard! tinmann0715 said what we were all thinking: “Seriously? Picard is on the Mt. Rushmore of Captains. It isn't even fair!”
  • Han Solo vs Nemo- Han Solo wins by a huge margin! ironman84 put it best: “In this battle, much like in real life, Han shoots first.”
  • Kirk vs Phasma- Kirk easily wins this round! According to mhinton, “Kirk is the UConn women of this bracket anyway. It is all merely a formality.”
  • America vs Planet- No contest for Captain America! Really, this round can be summed up in one word from network defender: “Murica”

 

  • Underpants vs Haddock- Captain Underpants narrowly secures a victory in this nearly even split match up.
  • Messi vs Captain & Tennille- Captain Lionel Messi wins this one in overtime, but barely.
  • Phillips vs Flint- Captain Phillips takes the victory and surprise upset in this round! ahh’s comment might help explain what happened in this match-up: “I want to go with bada$$ Flint on this, but have too much respect for Capt. Phillips (in spite of the Hollywood version).”

 

Were you surprised by any of the shutouts or nail-biters for this round?

 

To see who else is advancing to the next round, check out the updated bracket & start voting for the Sailor round! We need your help & input as we get one step closer to crowning the ultimate captain!

 

Access the bracket and make your picks HERE>>

One of the resolutions that any IT professional should make and stick to for 2016 is to learn a programming language. Before anyone starts shouting and telling me that they aren't going to be a programmer just yet, let me clarify my thoughts around this. Learning a programming language is not going to make you a programmer overnight.

Computers are a very procedural thing. For all of the amazing things that we can do around location services, deep analytics, and artificial intelligence, we are still working with a system that processes instructions. Those instructions must be clear and correct in order for a system to work correctly. Computers are not capable of context separation or intuition, no matter how smart they may appear to be.

Learning a programming language isn't about writing apps for mobile phones or working on the dirty underbelly of a program that controls your business. Instead, you're learning how computers process information and evaluate instructions. You're learning how they think. And that helps you figure out what they're thinking when things start going wrong. Troubleshooting a network issue or a server malfunction is actually much faster when you understand why something is broken.

Take a simple problem like a race condition. This can have a huge security impact in your IT environment. It sounds ominous and scary. But it's really just a situation where two operations run in parallel and the outcome depends on which one finishes first. Now, the winning instruction may contain malicious code, or it could just contain a conditional that causes something to be true all the time. The former is a huge security hole. The latter is a bad instruction that could just cause you a headache every once in a while.
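
If you want to see that for yourself, here's a minimal sketch of the benign-but-annoying variety: two Python threads updating a shared counter without a lock. The iteration count is arbitrary; the lost updates are the point:

    # Minimal sketch of a race condition: two threads do "read, add, write back"
    # on a shared counter with no lock, so updates from one thread get lost.
    import threading

    counter = 0

    def bump(times):
        global counter
        for _ in range(times):
            current = counter       # read
            counter = current + 1   # write back -- the other thread may have run in between

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # frequently less than 200000; a threading.Lock around the update fixes it

Wrap that read-modify-write in a lock and the number comes out right every time; that one-line fix is the whole lesson.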

By taking the time to sit down and think through how instructions are executed and processed, we gain insight into how to prevent things from becoming problems down the road. What about massive data entry? If you don't know how data is being read into your systems, you could end up with Little Bobby Tables running around:

Little Bobby Tables
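
If you haven't met Bobby, here's a minimal sketch of the difference between string-built SQL and a parameterized query, using an in-memory SQLite database. The table name and the input string are purely illustrative:

    # Minimal sketch of how "Little Bobby Tables" happens, using in-memory SQLite.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")

    name = "Robert'); DROP TABLE students;--"

    # Unsafe: the input is pasted straight into the SQL text. This is the pattern
    # that gets exploited; we only print it here rather than run it.
    unsafe_sql = "INSERT INTO students (name) VALUES ('" + name + "')"
    print("What the database would be asked to run:", unsafe_sql)

    # Safe: a parameterized query treats the input as data, never as SQL.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
    print(conn.execute("SELECT name FROM students").fetchall())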

The truth is that our personal and professional lives are governed now by programming languages at almost every level. Learning how to evaluate statements and conditions gives you a leg up on other professionals that don't know how to discern the secrets hiding under the surface in a modern computer system.

The other reason for learning a programming language is that you can understand how repetitive tasks can be automated and simplified. A staggering number of errors in computer systems are caused by errant data entry. By learning how software can do that hard work for you, you not only increase your productivity, but you also reduce the amount of incorrect data. You don't have to write multi-function parallel processing AI scripts. Something simple that takes input and produces output will get the job done with a minimum of effort.

How do you get started with programming? Well, if you've never touched a program in your life you would do well to check out the Hour of Code series that is being positioned for kids everywhere. It's a bit simplistic, but it teaches the basics of syntax and constructs without being tied to a specific language. Once you know the basics, you can move on to something more formal, like Codecademy, which features lessons on many languages in bite-sized chunks that you can go through at your leisure.

These two resources will give you a huge advantage in your daily IT life as well as your personal computer interactions. When you learn to think like a computer, you can learn to outthink them and solve problems before they happen. And that's a program that anyone can get down with.

Cisco Live Europe (CLEUR), hosted in Berlin this year, marked my second foray into the world of all things Cisco® as SolarWinds® Head Geek®. I’m here to testify that it did NOT disappoint!

 

The convention was small, but only in comparison to its American cousin, with 11,000 people in attendance in Berlin versus the 25,000 people who showed up in San Diego last June. Whatever may have been missing in quantity was more than made up for in intensity. The people who visited our booth, and who I spoke with in the hallways and in talks, were driven and focused. Everyone I spoke with had their own personal list of questions they wanted answered, technologies they wanted to view and review, and ideas they wanted to explore.

CLEUR delivered experiences and answers to those questions in spades. As always, the show was geared toward the IT professional, but the emphasis was squarely on the IT and much less on the professional. The spaces were liberally decorated with graffiti-style signage. Learning areas dedicated to DevOps and DevNet were given prime real estate and liberal floor space. Vendors were pulled into the collaborative mood (whether they planned to be or not) through an engaging side quest from Arduino (not exactly your typical Cisco Live partner).

 

This side quest involved scouring the vendor area for parts needed to build a specific Arduino project. It may have been a sound emitter or a controller for a robotic arm. Each vendor had a bucket of one type of part, either a connector cable, a Wi-Fi card, a servo controller, or the like. Attendees had to ask vendors to see which part they stocked. If you needed multiple cables, you were going to visit several vendors to complete your inventory. Once all the parts were collected, players returned to the Arduino booth to assemble everything and test the end result with a staff member. If your project worked, you were entered into the running to win an iPad®, BB-8, or other cool gift.

 

That Cisco would invite this kind of pan-vendor scavenger hunt underscored for me the shift from the traditional "come see each vendor sell their wares" mentality, to the far more real-world view that each vendor probably has a piece of the solution your company needs, but it's up to you, the internal IT pro, to determine which pieces you need right now and figure out how to make them all work together to achieve your desired result.

 

Bottom line, this was a brilliant metaphor for CLEUR as a whole.

 

So what kinds of questions did we get at the SolarWinds booth? Along with the usual interest in NPM and SAM (our core flagship products), there were LOTS of questions about how NCM could both simplify and improve the management of network infrastructure. There was an equal level of interest in IPAM and UDT. The oncoming freight train of IPv6 (not to mention the general trend of environments growing at geometric rates due to IoT and BYOD), was at the heart of the matter for many of our visitors. And NTA was once again on many people's wish lists, with a deep appreciation for the insight NetFlow offers.

 

The other notable feature of CLEUR was... the food! Coffee stations were liberally placed throughout the show floor, which kept attendees fueled up all day. Lunch was served from silver chafing dishes in cavernous halls by a staff that resembled an army, both in execution and quantity. And near the end of each day, the show floor became a veritable buffet with stations serving soft pretzels, sushi, ice cream, and assorted hors d'oeuvres, along with a fantastic selection of local beers.

 

Cisco Live is an experience in every sense of the word. The event has the potential to enhance not just your knowledge of the networking space, but also your appreciation for the best that IT has to offer. I can't wait to see how Cisco Live US in Las Vegas this June raises the bar even higher.

As federal technology environments become more complex, the processes and practices used to monitor those environments must evolve to stay ahead of -- and mitigate -- potential risks and challenges.

 

Network monitoring is one of the core IT management processes that demands focus and attention in order to be effective. In fact, there are five characteristics of advanced network monitoring that signal a forward-looking, sophisticated solution:

 

  1. Dependency-aware network monitoring
  2. Intelligent alerting systems
  3. Capacity forecasting
  4. Dynamic network mapping
  5. Application-aware network performance

 

It might be time to start thinking about evolving your monitoring solution to keep up.

 

1. Dependency-aware network monitoring

 

Network monitoring is a relatively basic function, sending status pings to devices on your agency’s network so you know they’re operational. Some solutions offer a little bit more with the ability to see connectivity -- which devices are connected to each other.

A sophisticated network monitoring system, however, provides all dependency information: what’s connected, network topology, device dependencies and routing protocols. This type of solution then takes that dependency information and builds a theoretical picture of the health of your agency’s network to help you effectively prioritize network alerts.

 

2. Intelligent alerting system

 

The key to implementing an advanced network monitoring solution is having an intelligent alerting system that triggers alerts based on dynamic baselines calculated from historical data. An alerting system that understands the dependencies among devices can significantly reduce the number of alerts being escalated.

Intelligent alerting will also allow an organization to “tune” alerts so that admins get only one ticket when there is a storm of similar events, or so that alerts are sent only after a condition has persisted for a significant period of time.
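
To make the dynamic-baseline idea concrete, here's a minimal sketch: alert only when the newest sample falls well outside what recent history says is normal. The sample values and the three-sigma rule are assumptions for illustration, not a recommendation:

    # Minimal sketch of a dynamic baseline: compare the latest sample to the
    # mean and spread of recent history instead of a fixed threshold.
    from statistics import mean, stdev

    def is_anomalous(history, latest, sigmas=3.0):
        baseline = mean(history)
        spread = stdev(history)
        return abs(latest - baseline) > sigmas * spread

    # Hypothetical interface-utilization samples (percent), one per polling cycle.
    history = [22, 25, 21, 24, 23, 26, 22, 24]
    print(is_anomalous(history, 27))   # False: inside the normal band, no ticket
    print(is_anomalous(history, 68))   # True: escalate this one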

 

3. Capacity forecasting

 

An agency-wide view of utilization for key metrics, including bandwidth, disk space, CPU and RAM, plays two very important roles in capacity forecasting:

 

1.    When you have a baseline, you can see how far above or below normal the network is functioning; you can see trends over time and can be prepared for changes on your network.

2.    Because procurement can be a lengthy process, having the ability to forecast capacity requirements months in advance allows you to have a solution in place when the capacity is needed (a simple trend-projection sketch follows below).
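
Here's the promised trend-projection sketch for that second point. The monthly utilization numbers, the straight-line model, and the 80 percent ceiling are all assumptions for illustration (statistics.linear_regression needs Python 3.10 or later):

    # Minimal sketch: fit a straight-line trend to monthly utilization and
    # project how long until it crosses a capacity ceiling. Data are made up.
    from statistics import linear_regression

    months = list(range(12))   # the last 12 months, 0 = a year ago
    util = [41, 43, 44, 47, 48, 50, 53, 55, 56, 59, 61, 63]   # percent of capacity

    slope, intercept = linear_regression(months, util)
    ceiling = 80.0
    current = slope * months[-1] + intercept
    headroom_months = (ceiling - current) / slope   # assumes utilization keeps growing
    print("about %.1f months of headroom at the current growth rate" % headroom_months)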

 

4. Dynamic network mapping

 

Dynamic network mapping allows you to take dependency information one step further and display it on a single screen, with interactive, dynamic maps that can display link utilization, device performance metrics, automated geolocation and wireless heat maps.

 

5. Application-aware network performance

 

Users often blame the application, but is it really the application? Application-aware network performance monitoring collects information on individual applications as well as network data and correlates the two to determine what is causing an issue. You’ll be able to see if it is the application itself causing the issue or if there is a problem on the network.

 

As I mentioned, federal technology environments are getting more complex; at the same time, budgets remain tight. Evolving your network monitoring solution will help with both of these challenges -- it will keep you ahead of the technology curve and help meet budget and forecasting challenges.

 

Find the full article on GCN.


 

O Captain! My Captain! Our Journey’s just begun.

Whether your ship sails the high seas or galaxies, votes will say who won.

Featuring captains Crunch, Kangaroo, Sparrow, Kirk, & everyone in between,

This is a bracket battle, the likes of which you’ve never seen.

Sound the battle cry, let the trash-talk fly—there’s no telling which way this will go.

Now all that’s left to say is Vote up me hearties, yo ho!

 

Starting today, 33 of the most popular captains will battle it out until only one remains and reigns supreme as the ultimate captain.

We’ve handpicked from a wide-range of captains from pop-culture & real life to make this one of the most diverse bracket battles yet. The starting categories are as follows:

 

  • Interstellar
  • High Seas
  • Island of Misfits
  • Animania

 

We picked the starting point and initial match-ups; however, just like in bracket battles past, it will be up to the community to decide who takes control of the ship & who will just be part of the crew.

 

Bracket battle rules:

Match up Analysis:

  • For each captain, we’ve provided reference links to wiki pages— to access these, just click on their NAME on the bracket
  • A breakdown of each match-up is available by clicking on the VOTE link
  • Anyone can view the bracket & match-ups, but in order to vote or comment you must have a thwack account & you must be logged in

 

Voting:

  • Again, you must be logged in to vote & trash-talk
  • You may vote ONCE for each match up
  • Once you vote on a match, click the link to return to the bracket & vote on the next match up in the series
  • Each vote earns you 50 thwack points! If you vote on every match up in the bracket battle you can earn up to 1550 points!

 

Campaigning:

  • Please feel free to campaign for your favorite captains & debate the match ups via comment section or on social media (also, feel free to post pics of bracket predictions on social media)
  • To join the conversation on social use hashtag #SWBracketBattle
  • There is a PDF printable version of the bracket available so you can track the progress of your favorite picks

 

Schedule:

  • Bracket Release is TODAY, March 21st
  • Voting for each round will begin at 10 AM CT
  • Voting for each round will close at 11:59 PM CT on the date listed on the bracket home page
  • Play-in battle opens TODAY, March 21st
  • Swabbie round OPENS March 23rd
  • Sailor round OPENS March 28th
  • Gunner round OPENS March 31st
  • Quartermaster round OPENS April 4th
  • First Mate round OPENS April 7th
  • Ultimate captain announced April 13th

 

If you have any other questions, please feel free to comment below & we’ll get back to you!

Which one of these captains would you follow to the end of the earth?

We’ll let the votes decide!

 

Access the bracket overview HERE>>


Un-Acceptable

Posted by Leon Adato Expert Mar 18, 2016

Larry Wall (creator of the Perl programming language) famously said,

 

“Most of you are familiar with the virtues of a programmer.

There are three, of course: laziness, impatience, and hubris.”

 

In one brilliantly succinct phrase, Mr. Wall took three traits commonly understood to be character flaws and re-framed them as virtues.

 

As I sat and thought about how acceptance is generally seen as a positive trait in life, I realized that in I.T. it could be just the opposite.

 

Accepting the status quo, that the system “is what it is”, that things are (or aren’t) changing (or staying the same) and there is nothing that we can do to affect that… all of these are anti-patterns which do us no good.

 

As I sat and pondered it in the wee hours of the morning, I heard the voice of Master Yoda whisper in my ear:

  • NOT accepting leads to curiosity
  • Curiosity leads to hacking
  • Hacking leads to discovery
  • Discovery leads to innovation
  • Innovation leads to growth

 

When we refuse to accept, we grow.

 

Bringing this back around to personal growth, I think there is a time and place for refusing to accept our perceived limitations, our place (whether that’s in the org chart, or in society at large), our past failures, and so on. When we refuse to permit those external forces to define or limit us, that is when we find the path toward personal growth.

This is a common scenario in most organizations: end-users complaining of slow email performance. Sometimes all users experience it; sometimes it’s just a few people (maybe in a specific location, or seemingly at random). It could also be a home user connecting via VPN complaining that email Web access is slow. IT teams face these kinds of email performance issues, which arise for a variety of reasons, fairly consistently. At times, slow email is blamed on a slow network, even though the problem could very well be an application issue. So, where is the problem and who should attend to it – network admin or sysadmin? It could be:

  • A network issue that has nothing to do with the email application.
  • The email server is experiencing a performance issue.
  • Some other IT infrastructure problem is affecting the email server.

 

Let us take  Microsoft® Exchange®, for example. Slow email performance does not always have to be associated with a root cause in the network, or the Exchange Server application. It could stem from any of the dependent IT components that support Exchange: the physical server, some OS processes, resource contention for the virtual machine running Exchange Server, or the storage array.

Even within the Exchange Server roles, it is difficult to know if it is a mailbox issue or a failed service in the CAS role. With so many possibilities out there, where do you start, and how can you fix the issue fast?

SolarWinds® AppStack™ dashboard and Quality of Experience (QoE) dashboard are helpful tools to quickly isolate the root cause of email performance issues.

  • AppStack is a combination of four SolarWinds products: SAM, WPM, VMan and SRM. An application dependency map presents the data in one view, which allows you to see how various infrastructure entities are connected to one another, from website transactions to applications, databases, physical and virtual servers and storage systems.
  • QoE dashboard is built into SolarWinds SAM. It gives you the ability to analyze network packets and measure network latency and application latency for specific application traffic types. This helps you isolate whether it is a network or an application issue.

 

The infographic below shows how IT teams can use the AppStack and QoE dashboards to determine the root cause behind slow emails and provide faster resolution.

 

Download the full infographic at the end of the blog.

[Infographic images: Email 1.PNG through Email 4.PNG]

 

To see detailed performance metrics on the AppStack dashboard, simply drill down to the specific application or server node, virtual machine, or storage array/LUN for further analysis and troubleshooting.

Note:

  • AppStack is NOT a separate product. It’s available as a built-in dashboard for SAM, WPM, VMan, and SRM. You can benefit from AppStack even when you are using these products individually. But when integrated, the collective insight across all IT layers will simplify IT troubleshooting significantly. Learn more about AppStack »
  • QoE is available as part of SAM and NPM. There is no additional license needed for this. It simply requires installing network and server packet analysis sensors (which are available with SAM/NPM) on your network to scan traffic for specific application types. Learn more about QoE dashboard »
Hopefully the series of posts over the past few weeks has been beneficial to others. I am hopeful that they have either confirmed your thoughts, confirmed your methodologies, or maybe even opened your eyes to the idea of heading down the Infrastructure as Code journey. With all of the content that has been covered, you may be wondering to yourself what the next steps are to continue this journey. That is exactly what we will cover in this post.

Up until this post we have focused on treating the network as code, versus the stagnant methodologies and processes that let networks continue to be treated as dark magic. In the previous posts we covered how we can begin leveraging newer methodologies and processes, as well as relearning our culture as an organization. If you follow those, and develop your own, you can begin extending these same principles into other areas of infrastructure. This could include application and server deployments along with their respective lifecycle management. Or maybe those specific areas have already been addressed and the network is the next phase of your journey. If that is the case, then you can continue to grow each specific area of infrastructure in order to build a complete overall strategy for each one.

You should use this journey as a way to begin breaking down the silos between teams and come together as one in order to deliver services in a much more holistic manner. If your focus is only network infrastructure, then you should continue to implement and practice the steps outlined in this series, as well as any additional practices that you adopt in each implementation phase going forward. Keeping all discussions and methodologies out in the open across teams will allow for the culture shift that can and should occur. This will only strengthen the relationships that teams and organizations have in the future.

I realize that DevOps (see… I finally said it) is on many organizations' minds, but do not fall victim to the mentality that "we HAVE a DevOps team." DevOps is not a team, it is not automation, and it is not Infrastructure as Code; at its core, it is culture. Culture, together with the areas we have touched on over this series, will only enhance the services being delivered as an organization. And if the only thing your journey achieves is a more stable, repeatable, and consistent delivery method that allows you to reach the desired state, you will still be in a much better place.

And with all of this, I will end this post for now; the final post in the series is up next, which will be an overview of what we have covered. I hope you have enjoyed this series, and keep the comments coming.

Forget about writing a letter to your congressman – now, people are using tools like the web, email, and social media to have their voices heard on the state, local, and federal levels. Forward-looking agencies and politicians are even embracing crowdsourcing as a way to solicit feedback and innovative ideas for improving the government.

 

Much of this is due to the ubiquity of mobile devices. People are used to being able to do just about everything with a smartphone or tablet, from collaborating with their colleagues wherever they may be, to ordering a pizza with a couple of quick swipes.

 

Citizens expect their interactions with the government to be just as satisfying and simple – but, unfortunately, recent data indicates that this has not been the case. According to a January 2015 report by the American Customer Satisfaction Index, citizen satisfaction with federal government services continued to decline in 2014. This, despite Cross-Agency Priority goals that state federal agencies are to “utilize technology to improve the customer experience.”

 

Open data initiatives can help solve these issues, but efforts to institute these initiatives are creating new and different challenges for agency IT pros.

 

  • First, they must design services that allow members of the electorate to easily access information and interact with their governments using any type of device.
  • Then, they must monitor these services to ensure they continue to provide users with optimal experiences.

 

Those who wish to avoid the wrath of the citizenry would do well to add automated end-user monitoring to their IT tool bag. End-user monitoring allows agency IT managers to continuously monitor the user experience without having to manually check to see if a website or portal is functioning properly. It can help ensure that applications and sites remain problem-free – and enhance a government’s relationship with its citizens.

 

There are three types of end-user monitoring solutions IT professionals can use. They work together to identify and prevent potential problems with user-facing applications and websites, though each goes about it a bit differently.

 

First, there is web performance monitoring, which can proactively identify slow or non-performing websites that could hamper the user experience. Automated web performance monitoring tools can also report on load-times of page elements so that administrators can adjust and fix slow-loading pages accordingly.

 

Synthetic end-user monitoring (SEUM) allows IT administrators to run simulated tests on different possible scenarios to anticipate the outcome of certain events. For example, in the days leading up to an election or critical vote on the hill, agency IT professionals may wish to test certain applications to ensure they can handle spikes in traffic. Depending on the results, managers can make adjustments accordingly to handle the influx.
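
As a minimal sketch of what a scripted synthetic check boils down to (the URL is hypothetical and the two-second "slow" threshold is an arbitrary assumption):

    # Minimal sketch of a synthetic check: fetch a page, time it, and flag the
    # result. The URL and the 2-second threshold are assumptions for illustration.
    import time
    import urllib.request

    URL = "https://www.example.gov/portal"   # hypothetical citizen-facing page
    SLOW_SECONDS = 2.0

    def check(url):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()   # pull the whole response so the timing is honest
        except OSError as exc:
            return "DOWN: " + str(exc)   # connection failures and HTTP errors land here
        elapsed = time.monotonic() - start
        if elapsed > SLOW_SECONDS:
            return "SLOW: %.2fs" % elapsed
        return "OK: %.2fs" % elapsed

    print(check(URL))   # schedule this, record the results, alert on DOWN/SLOW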

 

Finally, real-time end user monitoring effectively complements its synthetic partner. It is a passive monitoring process that—unlike SEUM which uses simulated data—gathers actual performance data as end-users are visiting and interacting with the web application in real time.

 

Today, governments are becoming increasingly like businesses. They’re trying to become more agile and responsive, and are committed to innovation. They’re also looking for ways to better service their customers. The potent combination of synthetic, real-time, and web performance monitoring can help them achieve all of these goals by greatly enhancing end-user satisfaction and overall citizen engagement.

 

Find the full article on GCN

Recently the Cisco Firepower Next-Generation Firewall was released and, according to Cisco, it’s the “first fully integrated, threat-focused next-gen firewall with unified management.”   Its capabilities include Application Visibility and Control (AVC), Firepower next-gen IPS (NGIPS), Cisco® Advanced Malware Protection (AMP), and URL Filtering.  That’s a lot to roll into a single OS, especially when you consider the stateful firewall capability.

 

In the past we’ve seen Cisco package the FirePOWER services on a module that sits in the ASA.  Using the MPF you can forward traffic to the module.  The module is managed by the FirePOWER Management Center or by a local FMC that’s part of ASDM.  It’s still separate from ASA policy.  With the new Cisco Firepower NGFW it’s all managed in one place.  This is a significant step in the right direction.

 

So the short answer is “Yes.”  Yes you can put all that capability into one box.  Cisco isn’t the first to do it.  In fact, Cisco’s pretty late to the game on this one.  Of course Cisco would likely contend that they have some special sauce baked into the Firepower NGFW.  The new 4100 series hardware provides a platform for Firepower NGFW, Cisco AMP, and the traditional ASA (although I can’t imagine the traditional ASA stays around much longer.)

 

Should We Care?

 

So now the question is, should we really care that Cisco has another firewall?  Absolutely.  The architecture of this device allows Cisco and third-party vendors to quickly add security services as the network evolves.

 

[Figure: Firepower NGFW architecture diagram]

 

Of particular interest to me is what a third-party vendor could run as a service on this platform.  Could monitoring be an added service?  A Correlation engine?  There’s a lot to this architecture that’s interesting to me.  The API access, OpenFlow, the orchestration layer.  I think with modern developments in orchestration combined with this new architecture and some third-party services we can do some interesting things.

 

What’s Your Take?

 

At this point I open it up to you.

  1. Do you feel this is a significant development in Cisco’s security portfolio?
  2. What would you like to see third-party developers working on for this platform?
  3. What did I miss?

In IT, we search the Web constantly. Everything we need is literally at our fingertips, just an Internet query away.

 

This is interesting because IT professionals tend to make a big deal of knowing things by heart. Can you calculate subnets off the top of your head? Do you know these OS commands (and all of their sub-commands)? Can you set up this or that without referring to the manual?

 

Back in high school, one of my best friends was on the path to what would become a very fulfilling career as a microbiologist. I vividly recall sitting in the hallway before school quizzing her on the periodic table of elements for her chemistry exams.

 

I reconnected with her several years after graduation. One of my first calls to her started off with me demanding to know the atomic weight of germanium. She knew immediately what I was asking, but responded with, “I couldn’t care less.”

 

I was surprised, and asked if it was an example of things you learn in school that turn out not to be important later on.

 

“Nope. I use that kind of thing every day,” she said.

“Then how come you don’t know it?”

“Because,” she replied. “I don’t have to know it. I simply have to know where to find it. What is actually important to know,” she explained, “is what to do with the information once I have it.”

 

I think that’s what differentiates experienced IT professionals from newbies. The newcomers focus on (and stress out about) specific factoids, the atomic elements that make up a particular technology. The veterans know that it’s not the specific commands or verbs that are important. What’s important are the larger patterns and use-cases. Those things can actually make or break you, professionally speaking.

 

Let’s take this a bit further and discuss the difference between knowing and understanding. Information Technology is one of the few places I can think of where people who call themselves professionals can be successful even while they don’t understand huge swaths of the technology they use.

 

I have met entire teams of server administrators who can’t explain the first thing about IP addresses, or networking in general. Similarly, I have met network engineers who don’t know and don’t care how operating systems communicate.

 

This is partially by design, and partially by convenience. DBAs don’t need to understand how packets are built up and broken down as they traverse switches and routers. In a handful of situations, they may be able to more effectively troubleshoot an issue if they did know, but most of the time it’s not important. The network is a big black box where their data goes (and, if you ask them, the network is the reason their data is delivered so slowly. But they’re wrong. IT’S NEVER THE NETWORK!)

However, there is a difference between not understanding and not caring to understand. One is due to a lack of opportunity but not curiosity. The other is a willing abdication of responsibility to know.

 

I think the second is extremely unhealthy.

 

IT pros need to be committed to lifelong (or at least career-long) learning and growth. No area of IT is too esoteric to want to know about. We may not have time right now, or we may not be able to utilize the knowledge immediately, but rest assured that understanding how and why something works the way it does is always better than the alternative.


SIEM’s Middle Way

Posted by Mrs. Y. Mar 10, 2016

I’ve recently been tasked with assessing the implementation of a log correlation product for an organization. I won’t mention the product name, but its consumption-based licensing model is infamous throughout IT shops far and wide. I don’t usually do this kind of work, but the organization wants to use it as a SIEM, including the configuration of some automated security alerting. Initially this seemed to be a reasonable request, but the project has me feeling like an IT Sisyphus trying to unravel the world’s largest rubber band ball.  It’s even causing me to reevaluate many of my previously held ideas about visibility.

 

One of the most surprising aspects to this endeavor is my discovery of an entire cottage industry of training and consulting dedicated to the deployment, maintenance and repair of this product across enterprises.  But if it’s just log correlation, why is it such a big lift? Well, now it’s called “machine generated big data” and you’re supposed to be looking for “operational intelligence.” That means more complexity and a bigger price tag.

 

I remember the good old days, when all a Unix engineer needed was a Syslog server and some regular expression Judo. A few “for” loops in the crontab and you had most of the basic alerting in place to keep your team informed. But lately log correlation and alerting seems like a hedonic treadmill. No matter how much data you have, no matter how many enhancements to alerting you make, you never seem to make any progress.
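
For the nostalgic (or the budget-constrained), here's a minimal sketch of that old-school approach: follow a log file and flag any line matching a couple of patterns. The file path and the patterns are assumptions:

    # Minimal sketch of syslog-plus-regex alerting: follow a log file like
    # "tail -f" and flag matching lines. Path and patterns are assumptions.
    import re
    import time

    PATTERNS = [
        re.compile(r"authentication failure", re.I),
        re.compile(r"LINK-3-UPDOWN"),
    ]

    def follow(path):
        with open(path) as f:
            f.seek(0, 2)              # jump to the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)   # nothing new yet; wait and try again
                    continue
                yield line

    for line in follow("/var/log/syslog"):
        if any(p.search(line) for p in PATTERNS):
            print("ALERT:", line.strip())   # or mail it, page someone, open a ticket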

 

Identifying feeds, filtering and normalizing the data, building and maintaining dashboards: maintaining a log correlation and event monitoring system has morphed into a full-time job, usually for a team of people, in many organizations.  But when business is demanding increased efficiency from IT departments, how can they afford to dedicate staff to a task that doesn’t seem to add demonstrable value?

 

It’s time to take a step back and rethink what we’re trying to achieve with the collection of log and event data. At one time, along with everyone else, I thought visibility meant that I needed everything. But with consumption licenses and all the work that goes into normalizing and extracting intelligence, that's probably unrealistic. Maybe it’s really about getting the right data, that which is truly relevant. Think of it like an emergency room. If I walk in complaining about chest pains, the nurse doesn’t ask me what my mother’s maiden name is or who received the Oscar for best actress last year. He or she is going to use a “fast and frugal” decision tree to ascertain my risk for a heart attack as quickly as possible.

 

So if we want useful “operational intelligence” we need to reset our expectations, focusing on simple, cost-effective systems that will rapidly produce actionable information about events. We need to find a “middle way” for log correlation and event monitoring, understanding that the perfect shouldn’t get in the way of the good. We may miss some things, but by implementing an easy-to-manage system that works, we won’t miss everything.

We've all been there. We look at the monitoring dashboard and see the collection of Christmas lights staring back at us. The hope is that they are all green and happy. But there is the occasional problem that generates a red light. We stop and look at it for a moment.

  • How long has this been red?
  • If someone already looked at it, why are there no notes?
  • Did this get fixed? Is it even able to get fixed?
  • Do we need to have a meeting to figure this out?

All these questions come up and make us try and figure out how to solve the problem. But what if the light itself is the problem?

Red Means Stop

How many monitoring systems get set up with the traditional red/yellow/green monitoring thresholds that never get modified? We spend a large amount of time looking at dashboards trying to glean information from the status indicators, but do we even know what those indicators represent? I would wager that a large portion of the average IT department doesn't know what it means to have the indicator light transition from green to yellow.

Threshold settings are very critical to the action plan. Is a CPU running at 90%, triggering a red warning? CPU spikes happen frequently. Does it need to run at that level for a specific period of time to create an alert? When does it fall back to yellow? And is that same 90% threshold the same for primary storage? What about tracking allocated storage? A thick provisioned LUN takes up the same amount of space on a SAN whether it has data stored there or not.
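
One way to express that "run at that level for a specific period of time" rule is to require several consecutive polls over the threshold before anything turns red. A minimal sketch, with the threshold, poll count, and sample values all assumed for illustration:

    # Minimal sketch: alert only when a metric stays above its threshold for
    # several consecutive polls, so a momentary CPU spike stays quiet.
    def sustained_breach(samples, threshold=90.0, required_polls=5):
        streak = 0
        for value in samples:
            streak = streak + 1 if value > threshold else 0
            if streak >= required_polls:
                return True
        return False

    spiky = [55, 96, 60, 58, 94, 57, 61, 59, 62, 60]    # brief spikes: stay green
    pegged = [88, 92, 93, 95, 96, 97, 95, 96, 94, 95]   # sustained: go red
    print(sustained_breach(spiky))    # False
    print(sustained_breach(pegged))   # True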

Defining the things that we want to monitor takes up a huge amount of time in an organization. Adding too many pieces to the system causes overhead and light fatigue. Trying to keep an eye on the status of 6,000 switch ports is overwhelming. Selecting the smartest collection of data points to monitor is critical to making the best of your monitoring system.

But at the same time, it is even more critical to know the range for the data that you're monitoring. If your alerts are triggering on bad thresholds, or are staying red or yellow for long periods of time, then the problem isn't with the device. The problem is that the data you're monitoring is bad or incorrect. When lights stay in a trouble state for longer than necessary, you create an aversion to fixing problems.

I tested this once by doing just that. I went into a monitoring system and manually triggered a red light on a connectivity map to see how long it would take for that light to get changed back, or for someone to notice that a fault had occurred. The results of my test were less than stellar. I finally had to point it out some weeks later when I was talking to the team about the importance of keeping a close eye on faults.

Green Means Go!

How can we fix light fatigue? The simple solution is to go through your monitoring system with a fine-toothed comb and make sure you know what thresholds are triggering status changes. Make notes about those levels as well. Don't just take the defaults for granted. Is this level a "best practice"? Whose practice? If it doesn't fit your organization, then change it.

Once you've figured out your thresholds, make sure you implement the changes and tell your teams that things have changed. Tell them that the alerts are real and meaningful now. Also make sure you keep tabs on things to ensure that triggered alerts are dealt with in a timely manner. Nothing should be left for long without a resolution. That prevents the kind of fatigue that makes problems slip under the radar.

Lastly, don't let your new system stay in place for long without reevaluation. Make sure to call regular meetings to reassess things when new hardware is installed or major changes happen. That way you can stay on top of the lights before the lights go out.

I was a sucker for Windows CE (or WinCE, as we mobile enthusiasts called it). I spent hours with my WinCE devices. I owned 3 of them (1 purchased on ebay long after the sun had set on that tragically short-lived OS). I sang their praises. I showed them off to friends.

 

Looking back, I can write those hours off as a complete loss.

 

Recently, Seth Godin made a comment on his blog (http://sethgodin.typepad.com/seths_blog/2015/11/the-end-of-the-future.html) which caught my attention:

 

"it turns out that this thing, the thing we have now, is worth working with, because it offers so many opportunities compared with merely waiting for the next thing."

 

This is a very important idea for IT pros to come to terms with. As Berkeley Breathed noted back in 1984: "Hackers, as a rule, do not handle obsolescence well."

 

To extend Seth's idea, learning "it" may be the wasted time that leads to an actual measurable skill.

 

Or to quote another famous artist (Randall Munroe of XKCD), that one weekend you spend messing with Perl may turn out to be far more useful than you imagine, years later:
[xkcd: 11th Grade]

 

However, that's not my point. We spend time learning skills today with the express intent (or at least hope) that they will end up being more useful (or at least measurably useful) down the road. What happens when they don't? And what happens when you have several skills, hard-won with hours of dedication, which turn out to be utterly, completely useless?

 

More to the point, how do we - those who dedicate our time learning complex skills in the hope it will help us succeed - how do we reconcile the fact that all that time we took actually pulled us AWAY from learning some other skill that might have turned out to be more useful?

 

Spend a few years in IT and a dozen or so false starts, and one may be inclined to look at every new trend and think "I'll wait and see if this pans out to anything. Let some other guy do all the heavy lifting."

 

Why bother learning Python? I spent all that time learning Perl, and now it’s good for not much. Ditto Turbo Pascal. And WordPerfect. And CorelDraw. And Token Ring. And dBase III, and FoxPro, and Paradox.

 

But I think that's a mistake. Not the idea of letting some trends pass by without us being on the bleeding edge. We all have to budget our time. The mistake is in the attitude that the reason we're letting it pass by is that it will ultimately either:


A) Fail, or
B) be replaced by version NEXT-dot-zero which will be so much better that we may as well wait for THAT to get here.

 

The truth is that learning "it" may take away from time better spent learning something else, but the simple act of getting into something, being excited about it, falling in love with it (warts and all) is what feeds the soul of many a lifelong IT enthusiast. It keeps us ready for the next generation (which will undoubtedly be better and of course we will LOVE it even more than we love this).

 

Perl and Pascal and WordPerfect and Token Ring and all the rest? They were all worth it. Not just because it taught me stuff that makes me better at using today’s stuff (which it does) but because it was worth it in its own right. It had value. It was worth the investment even though I can see now how temporary it was.

 

There’s a phrase in Hebrew: “Gam zeh ya’avor” – “This, too, shall pass”. It is a double-edged sword of a phrase which can give hope (this bad situation won’t last forever) and at the same time be a sobering reminder (this good thing won’t be around forever, either).

 

When you are wrestling with technology fatigue, it’s important to remember “this, too, shall pass.” Dive in and bask in it if you want. Or sit this one out, but do it not out of fear of obsolescence, but rather so that you are ready for the next one.

I walked the show for 21K steps a day (according to my Fitbit), and these were my top takeaways:

 

  1. The sheer number of vendors makes remembering much of the show difficult
  2.   Innovative security solutions teams are developing new capabilities
    • heterogeneous public cloud deployments
    • new Windows endpoint security capabilities that are manageable
    • more application specific security that matches risk
  3. You can build a better mousetrap
    • new one time password solutions
    • easier to manage encryption
  4. There were more policy presentations because of Apple than Snowden

 

Of course, we still have security capabilities that are early in the hype cycle, and Threat Intelligence is one of those.

We did a little write up for you on Threat Intelligence here:

 

http://resources.solarwinds.com/is-threat-intelligence-for-me/

 

What did you think of RSA? Anything you found compelling?

As many of you who’ve met me can attest- at tradeshows, user groups, or on SolarWinds Lab- I talk a lot.  I talk about technology, flying, astronomy and anything else geeky, but I don’t talk about myself much. Ok, I do annoy millennials with stories about my kids, but that’s parental prerogative.  And maybe that’s why I don’t talk much about SolarWinds.  Of course I talk about our products, advanced SolarWinds techniques and how our customers succeed, but not about SolarWinds, the business.  With the release of the recent 2016 Gartner Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD) however, I’m a bit excited and want to humble brag just a little.

 

Not My First Rodeo

 

Once upon a time I was the product manager for Sun Java System Identity Manager, formerly Waveset Lighthouse.  (Say that five times fast).  One of my jobs was engaging analysts with the cool geekiness of automated identity management and secure systems provisioning.  It was hardcore integration and code, with plenty of jargon and lots to talk about.  And a highlight in my career remains the month we hit the top corner of the “Leaders” quadrant on the Gartner Magic Quadrant for Identity Management.  The whole company was understandably proud and ordered a 10 foot Fathead to stick on the office entryway wall.  And it’s from that perspective that I’m really pleased with Gartner’s analysis and where SolarWinds is positioned at the top of the “Challengers” NPMD quadrant.  Perhaps you’re even surprised to see us included.

 

Very large corporations rely on analyst evaluations for technology because CIOs and other senior IT executives may have budgets that run into the billions, with complexity unimaginable to most of us.  They simply don’t have time to dive in and learn the details of thousands of potential vendor solutions in hundreds of technology categories.  For them analyst research is extremely helpful, in many ways a trusted advisor.  And what many are looking for is something that can transform their IT operations, even if it’s expensive or requires rip-and-replace, because the benefits at that scale can outweigh the concerns of budget, company politics or risk.

 

And just like an iPhone, shiny and new is just sexy.  How many vendors do you know that start a conversation with long lists of features, or talk about the ZOMG most amazing release that’s just around the corner? But SolarWinds isn’t that kind of a company.  In fact it’s intentionally not like any other company.

 

We don't create new widgets in a lab and then try to figure out how to sell them.  For fifteen years you, our customers, have been telling us what your problems are, what a product should do, how it should work and how much it should cost.  Only then do we create technology you can use every day.  We also proudly don’t offer professional services. If we say a product is easy to use but have to install it for you, it’s not.  We also don’t offer hardware and admittedly miss out on what would be a really cool, dark-gray bezel with the orange SolarWinds particles logo in the center.  (Did you know our mark is particles not a flame? Solar wind = charged particles from the sun’s atmosphere.  Geeks.)

 

Highest in Ability to Execute

 

So when I look at where SolarWinds appears in the Magic Quadrant, I see exactly what I would expect: SolarWinds positioned highest along the “Ability to Execute” axis among all vendors in the quadrant. Would we lean a bit more toward the Leaders quadrant if we teased a few more features and products in the “What Are We Working on” section of thwack?  Perhaps, but that’s not our way. I’m in my 10th year at SolarWinds for one simple reason.  During my time this company has grown 25x because it stays true to an IT Pro philosophy: be helpful.  We don’t do everything, but we do just about everything you ask for, always striving to do it well.  SolarWinds isn’t about transforming IT. SolarWinds is about transforming the lives of IT professionals.

 

But enough about that; I’m late for a meeting.

 

Feedback: Are you surprised to see SolarWinds on a Gartner Magic Quadrant? Does this report matter to you and your company?  Let us know in the comments below!

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Over the past three posts we have covered what Infra-as-Code means, how to start, and what is required to begin the journey. Hopefully things have been making sense and you have found them to be of some use. The main goal is simply to bring awareness to what is involved and to help start the discussions around this journey. In the last post I also mentioned that I had created a Vagrant lab for learning and testing out some of the tooling. If you have not checked it out, you can do so by heading over here. This lab is great for mocking up test scenarios and learning the methodologies involved.

 

In this post we will take what we have covered in the previous posts and mock up an example to see how the workflow might look. For our mock example we will be making configuration changes to a Cisco ACI network environment, using Ansible to push our desired state.

 

Below is what our workflow looks like for this mock-up.

  • Developer – Writes Ansible playbooks and submits code to Gerrit
  • Gerrit – Git repository and code review (both master and dev branches)
  • Code Reviewer – Either signs off on changes or pushes back
  • Jenkins – CI/CD – Monitors the master/dev branches on the Git repository (Gerrit)
  • Jenkins – Initiates the workflow when a change is detected on the master/dev branches

 

And below is what our mock-up example entails, starting from a new request.

 

Change request:

  • Create a new tenant for the example environment, which will consist of some web servers and DB servers. The web servers will need to communicate with the DB servers over tcp/1433 for MS SQL.
  • Bring all of the respective teams together to discuss the request in detail and identify each object that must be defined, configured, and made available for the request to succeed. (Below is what was gathered from that open discussion.)
    • Tenant:
      • Name: Example1
      • Context (VRF):
        • Name: Example1-VRF
      • Bridge Domain:
        • Name: Example1-BD
        • Subnet: 10.0.0.0/24
      • Application Network Profile:
        • Name: Example1-ANP
      • Filters:
        • Example1-web-filter
          • Entry: Example1-web-filter-entry-80 (proto tcp, port 80)
          • Entry: Example1-web-filter-entry-443 (proto tcp, port 443)
        • Example1-db-filter
          • Entry: Example1-db-filter-entry-1433 (proto tcp, port 1433)
      • Contracts:
        • Example1-web-contract
          • Filter: Example1-web-filter
          • Subject: Example1-web-contract-subject
        • Example1-db-contract
          • Filter: Example1-db-filter
          • Subject: Example1-db-contract-subject

Open discussion:

 

Based on the open discussion, we have come up with the above details on what is required from a Cisco ACI configuration perspective to deliver the request as defined. We will use this information to begin creating our Ansible playbook.

 

Development:

We are now ready for the development phase: creating the Ansible playbook that delivers the requested environment. Because Gerrit is our version control/code repository, we need to ensure that we continually commit our changes to a new dev branch on our Ansible-ACI Git repository as we develop the playbook.

 

Note – Never make changes directly to the master branch. Always create/use a different branch to develop your changes and then merge those into master.

 

Now we need to pull down our Ansible-ACI Git repository to begin our development.

$mkdir -p ~/Git_Projects/Gerrit

$cd ~/Git_Projects/Gerrit

$git clone git@gerrit:29418/Ansible-ACI.git

$cd Ansible-ACI

$git checkout -b dev

 

We are now in our dev branch and can begin coding.

 

We now create our new Ansible playbook.

$vi playbook.yml

 

And as we create our playbook we can begin committing changes as we go. (Follow the steps below for every change you want to commit.)

$git add playbook.yml

$git commit -sm "Added ACI Tenants, Contracts, etc."

$git push -u origin dev

 

In the example above we used -sm as part of our git commit. The -s adds a sign-off line from the user making the changes, and the -m designates the message we are adding as part of the commit. You can also use just -s, and your editor will open for you to enter the message details.
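
As a quick sanity check, running git log -1 after the commit should show the sign-off trailer that -s appended to the end of the commit message, along these lines (the name and email here are placeholders, not anything from our lab):

    Added ACI Tenants, Contracts, etc.

    Signed-off-by: Jane Admin <jane.admin@example.com>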

 

So we end up with the following playbook, which we can now proceed to test in our test environment.

---
- name: Manages Cisco ACI
  hosts: apic
  connection: local
  gather_facts: no

  vars:
    - aci_application_network_profiles:
        - name: Example1-ANP
          description: Example1 App Network Profile
          tenant: Example1
          state: present

    - aci_bridge_domains:
        - name: Example1-BD
          description: Example1 Bridge Domain
          tenant: Example1
          subnet: 10.0.0.0/24
          context: Example1-VRF
          state: present

    - aci_contexts:
        - name: Example1-VRF
          description: Example1 Context
          tenant: Example1
          state: present

    - aci_contract_subjects:
        - name: Example1-web-contract-subject
          description: Example1 Web Contract Subject
          tenant: Example1
          contract: Example1-web-contract
          filters: Example1-web-filter
          state: present
        - name: Example1-db-contract-subject
          description: Example1 DB Contract Subject
          tenant: Example1
          contract: Example1-db-contract
          filters: Example1-db-filter
          state: present

    - aci_contracts:
        - name: Example1-web-contract
          description: Example1 Web Contract
          tenant: Example1
          state: present
        - name: Example1-db-contract  # referenced by Example1-db-contract-subject above
          description: Example1 DB Contract
          tenant: Example1
          state: present

    - aci_filter_entries:
        - name: Example1-web-filter-entry-80
          description: Example1 Web Filter Entry http
          tenant: Example1
          filter: Example1-web-filter  # defined in aci_filters
          proto: tcp
          dest_to_port: 80
          state: present
        - name: Example1-web-filter-entry-443
          description: Example1 Web Filter Entry https
          tenant: Example1
          filter: Example1-web-filter  # defined in aci_filters
          proto: tcp
          dest_to_port: 443
          state: present
        - name: Example1-db-filter-entry-1433
          description: Example1 DB Filter MS-SQL
          tenant: Example1
          filter: Example1-db-filter
          proto: tcp
          dest_to_port: 1433
          state: present

    - aci_filters:
        - name: Example1-web-filter
          description: Example1 Web Filter
          tenant: Example1
          state: present
        - name: Example1-db-filter
          description: Example1 DB Filter
          tenant: Example1
          state: present

    - aci_tenants:
        - name: Example1
          description: Example1 Tenant
          state: present

  vars_prompt:  # Prompts for the info below upon execution
    - name: "aci_apic_host"
      prompt: "Enter ACI APIC host"
      private: no
      default: "127.0.0.1"
    - name: "aci_username"
      prompt: "Enter ACI username"
      private: no
    - name: "aci_password"
      prompt: "Enter ACI password"
      private: yes

  tasks:
    - name: manages aci tenant(s)
      aci_tenant:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-tenants
      with_items: aci_tenants

    - name: manages aci context(s)
      aci_context:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contexts
      with_items: aci_contexts

    - name: manages aci bridge domain(s)
      aci_bridge_domain:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        context: "{{ item.context }}"
        tenant: "{{ item.tenant }}"
        subnet: "{{ item.subnet }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-bridge-domains
      with_items: aci_bridge_domains

    - name: manages aci application network profile(s)
      aci_anp:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-application-network-profiles
      with_items: aci_application_network_profiles

    - name: manages aci filter(s)
      aci_filter:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-filters
      with_items: aci_filters

    - name: manages aci filter entries
      aci_filter_entry:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        filter: "{{ item.filter }}"
        proto: "{{ item.proto }}"
        dest_to_port: "{{ item.dest_to_port }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-filter-entries
      with_items: aci_filter_entries

    - name: manages aci contract(s)
      aci_contract:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        scope: "{{ item.scope|default(omit) }}"
        prio: "{{ item.prio|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contracts
      with_items: aci_contracts

    - name: manages aci contract subject(s)
      aci_contract_subject:
        name: "{{ item.name }}"
        descr: "{{ item.description|default(omit) }}"
        tenant: "{{ item.tenant }}"
        contract: "{{ item.contract }}"
        filters: "{{ item.filters }}"
        apply_both_directions: "{{ item.apply_both_directions|default('True') }}"
        prio: "{{ item.prio|default(omit) }}"
        state: "{{ item.state }}"
        host: "{{ aci_apic_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
      tags:
        - aci-contract-subjects
      with_items: aci_contract_subjects
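
One assumption the playbook makes is that an inventory entry exists for the apic group referenced by hosts: apic. A minimal sketch of such an inventory file might look like the following (the file name hosts and the hostname apic-test.lab.local are placeholders; because the play uses connection: local and passes the APIC host to each module, the inventory entry is really just a label for the group):

$cat hosts
[apic]
apic-test.lab.local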

 

 

Testing:

The assumption here is that we have already configured our Jenkins job to do the following as part of the workflow for our test environment:

  • Monitor the dev branch on git@gerrit:29418/Ansible-ACI.git for changes.
  • Trigger a backup of the existing Cisco ACI environment (Read KB on this here).
  • Execute the playbook.yml Ansible playbook against our Cisco ACI test gear (ensuring that our test APIC controller is specified as the host) and report back on the status via email as well as in the Jenkins job report; a sketch of what that execution step might look like follows this list.
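
As a rough sketch of that execution step (the inventory file, hostname, service account, and APIC_PASSWORD environment variable below are assumptions for illustration, not part of the original workflow), a simple shell build step in the Jenkins job could supply the vars_prompt values on the command line so the run is non-interactive:

$ansible-playbook -i hosts playbook.yml \
    --extra-vars "aci_apic_host=apic-test.lab.local" \
    --extra-vars "aci_username=svc-jenkins" \
    --extra-vars "aci_password=$APIC_PASSWORD"

Ansible skips a vars_prompt for any variable that is already supplied via --extra-vars, which is what makes an unattended Jenkins run possible, and sourcing the password from a Jenkins credential (exposed here as an environment variable) keeps it out of the job configuration.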

 

Assuming that all of our testing has been successful and we have validated that the appropriate Cisco ACI changes were implemented, we are ready to push our new configuration changes up to the master branch for code review.
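
Validation can be as simple as spot-checking the new objects through the APIC REST API. The sketch below is one way to do that and is not part of the original workflow (the hostname is hypothetical); the first request logs in and stores the session cookie, and the second reads back the Example1 tenant and its children:

$curl -sk -c cookie.txt https://apic-test.lab.local/api/aaaLogin.json \
    -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'
$curl -sk -b cookie.txt "https://apic-test.lab.local/api/mo/uni/tn-Example1.json?query-target=subtree"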

 

Code-Review:

We are now ready to merge our dev branch into our master branch and submit the result for review. Remember that you should not be the one who signs off on your own code review, and the person who does should be knowledgeable about the change being implemented. We will assume that is the case for this mock-up.

 

So we can now merge the dev branch into our master branch.

$git checkout master

$git merge dev

 

Now we can push our code up for review.

$git review

 

Our new code changes are now staged on our Gerrit server, ready for someone to either sign off on the change and merge it into our master branch or push it back for additional information. But before we proceed with the sign-off, we need to engage our peer-review phase, described in the next section.
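
For git review to know where to push, the repository normally carries a .gitreview file pointing at the Gerrit server. A minimal sketch for this repository, matching the clone URL used earlier (the defaultbranch value is an assumption based on our workflow), would look something like this:

$cat .gitreview
[gerrit]
host=gerrit
port=29418
project=Ansible-ACI.git
defaultbranch=master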

 

Peer-Review:

We should now re-engage the original teams to discuss the testing-phase results and the actual changes to be made, and to ensure that absolutely nothing is missing from the implementation. This is also a good stage at which to include the person who will be signing off on the change. Doing so ensures they are fully aware of the changes being implemented and have a better understanding on which to base their decision to proceed or not.

 

After a successful peer review, the person in charge of signing off on the code review should be ready to make the call. For this mock-up we will assume that all is a go: they sign off, and the changes get merged into our master branch. Those changes are now ready for Jenkins to pick up and implement in production.

 

Implementation:

Now that all of our configuration has been defined in an Ansible playbook, all testing phases have been successful, and our code review has been signed off, we are ready to enter the implementation phase in our production environment.

 

Our production Jenkins workflow should look identical to our testing-phase setup, so this should be an easy one to set up. The only difference should be the APIC controller, which points at our production environment, so the workflow should look similar to the following.

  • Monitor the master branch on git@gerrit:29418/Ansible-ACI.git for changes.
  • Trigger a backup of the existing Cisco ACI environment (Read KB on this here).
  • Execute the playbook.yml Ansible playbook against our Cisco ACI production gear (ensuring that our production APIC controller is specified as the host) and report back on the status via email as well as in the Jenkins job report.

 

And again, assuming that our Jenkins workflow ran cleanly, all changes should now be implemented in production.

 

Final thoughts

 

I hope you found the above useful as a picture of what a typical Infra-as-Code change might look like. There are additional methodologies you may want to implement in your workflow beyond what we did with this mock-up, such as further automation steps and/or end-user capabilities. We will cover some of those items in the next post, which looks at the next steps in our journey.

Access control extends far beyond the simple static statements of a Cisco ACL or iptables.  The access control we deal with today comes with fancy names like Advanced Malware Protection or “Next-Generation.”  If you work with Cisco devices that are part of the FirePOWER defense system, you know what I’m talking about here.  For example, the Cisco FirePOWER services module in the ASA can work with Cisco Advanced Malware Protection to send a file hash to a Cisco server in the cloud.  From there, the Cisco server responds with an indication that the file contains malware or that it’s clean.  If it contains malware, the access control rule denies the traffic; if it’s determined to be clean, the traffic is allowed.

 

In the situation just described, the file itself is never sent over the wire; only a hash is sent.  How is this at all helpful?  Cisco gathers correlation data from customers around the globe.  This data helps them build their database of known threats, so when you send them a hash, it’s likely that they’ve already seen it and have run the file in a sandbox.  They use advanced tools like machine learning to determine whether the file is malicious, then catalog the file by its hash value, so when you send a hash they simply compare it, and there you have it.  This is very low overhead in terms of processing data.  But what about the cases where Cisco doesn’t have any data on the file hash we’ve sent?  This is where things get interesting, in my opinion.
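
To make the “just a hash” part concrete: AMP keys its file dispositions on SHA-256 digests, so what travels to the cloud is essentially a fixed-length fingerprint like the one below (the filename is hypothetical), which gets compared against a database of known-good and known-bad hashes:

$sha256sum invoice_attachment.exe
<64-character hex digest>  invoice_attachment.exe

The digest is the same size no matter how large the file is, which is why this lookup adds so little overhead compared to shipping the file itself.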

 

In that case, the file needs to be sent to Cisco.  Once Cisco receives the file, they run it in a sandbox.  Machine learning, among other methods, lets them determine whether the file is doing something malicious.  At that point they catalog the result against a hash value so they don’t have to look at it again.  This is all good, because we can usually get a quick verdict on whether something is good or bad, and our access control rules can do their job.  But here’s where a few questions could be raised.  Aside from Cisco not having a hash for a file I’m sending or receiving, what determines that a file needs to be forwarded to Cisco?  Do they log the file or discard it after the sandbox run?  I ask these questions because, in my mind, it’s realistic that all files could be sent to Cisco and cataloged, meaning authorities could potentially subpoena that data from Cisco to see anything I’ve sent or received.  If this is the case, then our “Advanced Malware Protection” could also be “Advanced Privacy Deterioration.”

 

What are your thoughts?  Is it a bad idea to get the cloud involved in your access-control policies or do we just trust the direction vendors are taking us?
