

In previous posts in this “My Life” blog series, I’ve written quite a bit about the project management/on-task aspects of how I keep my focus and direction top of mind while I practice and place emphasis on my goals. But, what happens when something doesn’t work?

 

In some cases, the task we’re trying to accomplish is just too hard; in others, we’re simply not there yet. Practice, in this case, is the right answer. As my French horn teacher used to say, "Practice does not make perfect. Perfect practice makes perfect." The problem, particularly in my guitar playing, is that I’m flying a little blind because I have no teacher helping me to practice perfectly. But imagine a tricky chord sequence that has had me failing during practice. If I don’t burn through the changes as often as possible in my practice time, I’ll definitely fail when I’m on stage, attempting to play it in front of a live audience.

 

In an effort to avoid the embarrassment of that failure, I sandbox. At least that’s how I see it.

 

The same analogy can be transposed to my thoughts about implementing a new build of a server, an application that may or may not work in my VMware architecture, etc. We don’t want to see these things fail in production, so we test them out as developer-type machines in our sandbox. This is a truly practical approach. By making sure that all testing has taken place prior to launching a new application in production, we're helping to ensure that we'll experience as little downtime as possible.

 

I look at these exercises as an effort to enable a high degree of functionality.

 

The same can be said of training myself to sleep better, or gaining efficiency in my exercise regimen. If I were to undertake a new weightlifting program and my lifts were poorly executed, I could strain a muscle or muscle group. How would that help me? It wouldn’t. I work out without a trainer, so I rely on the resources that are available to me, such as YouTube, to learn and improve. And when I’ve got a set of motions down pat, the new exercise gets rolled into my routine. Again, I’ve sandboxed and learned the lessons I need to know by trial and error. This helps me avoid potentially hazardous roadblocks, and in the case of my guitar, not looking like a fool. Okay, let’s be clear... avoid looking more like a fool than usual.

 

I know that this doesn’t feel spontaneous. Actually, it isn’t. Again, as I relate it to musical performance, the correlation is profound. If I know the material through practice and have a degree of comfort with it, my improvisation can take place organically. I always know where I am, and where I want to be, in the midst of a performance, and thus my capacity to improvise can open up.

Previously, I’ve spoken about project management and micro-services as being analogous to life management. To me, these have seemed to be really functional and so very helpful to my thought process regarding the goals I have set for myself in life. I hope that in some small way they’ve helped you to envision the things you want to change in your own life, and how you may be able to approach those changes as well.

 

In this, the third installment of the “My Life as IT Code” series, I’ll take those ideas and try to explain how I visualize and organize my approach. From a project manager’s perspective, the flowchart has always been an important tool. The first step is to outline a precedence diagram, such as the one I created for a past project: a replication and disaster recovery project for a large, multi-site law firm that had leveraged VMware-based virtual machines, Citrix remote desktops, and storage-to-storage replication to maintain consistent uptime. I broke individual time streams into flowcharts, giving the individual stakeholders clear indications of their personal tasks, and organized them into milestones that related to the project as a whole. I delivered both Gantt charts and flowcharts to show how the projects could be visualized, revealing the time used per task, as well as the tasks that I discretely broke into their constituent parts.

 

This same technique is applicable to some of these life hacks. While it can be difficult to assign timeframes to weight loss, for example, the milestones themselves can be quite easy to demarcate. With some creative thinking, you can find viable metrics against which to mark progress, and effective tools for establishing reasonable milestones.

 

There are great tools to aid in exercise programs, which enforce daily or weekly targets, and these are great to leverage in your own plan. I have one on my phone called the seven-minute workout. I also note my daily steps using the fitness tracker application on both my phone and my smartwatch. These tools, along with a scale, can chart your progress along the path to your goals. Weight loss is never a simple downward slope, but rather a decline that tends toward plateaus followed by restarts. Still, as your progress moves forward, a graphical representation of your weight loss encourages more work along those lines. For me, the best way to track progress is with a spreadsheet, graphing on a simple x/y axis, which provides an effective visualization. I don’t suggest paying rigid attention to the scale, as those plateaus can take an emotional toll on how you see your progress.
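If you’d rather script that graph than keep it in a spreadsheet, here is a minimal Python sketch of the same idea. The file name and column layout (a hypothetical weights.csv with date and weight columns) are my own assumptions, not a prescription:

```python
# A minimal sketch: plot a weight log on a simple x/y axis.
# Assumes a hypothetical "weights.csv" with "date" (YYYY-MM-DD) and "weight" columns.
import csv
from datetime import datetime

import matplotlib.pyplot as plt

dates, weights = [], []
with open("weights.csv") as f:
    for row in csv.DictReader(f):
        dates.append(datetime.strptime(row["date"], "%Y-%m-%d"))
        weights.append(float(row["weight"]))

plt.plot(dates, weights, marker="o")
plt.xlabel("Date")
plt.ylabel("Weight (lbs)")
plt.title("Progress toward goal")
plt.tight_layout()
plt.show()
```

Plateaus show up as flat stretches on the line, which is exactly why I’d rather glance at the trend than obsess over any single weigh-in.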

 

I’ve never been a runner, but the ability to define distance plans, times for those distances, and the progress along those paths translates easily to a flowchart. It’s important to know what you’re doing. Rather than saying things like, “I’m going to lose 40 lbs in 2018,” a more effective strategy is to focus on specifics, such as, "I plan on walking 10,000 steps, or five miles, per day." It is easier and more impactful to adhere to smaller, more strategic commitments.

 

Meanwhile, creating and paying attention to the flowchart is a hugely effective way to keep a person on track. It helps them pay attention to their goals and gives them visibility into their progress. Once you have that, you can celebrate when you reach the milestones you have set for yourself.

 

As I’ve stated, the flowchart can be an amazing tool. And when you look at life goals with the eyes of a project manager, you give yourself defined timeframes and goals. The ability to visualize your goals, milestones, and intent can really assist in keeping them top of mind.

 

The journey of a thousand miles begins with a single step.

-Lao Tsu

 

In this, my second post in a five-part series, I’m approaching the concept of microservices as being analogous to my life. Hopefully, my experience will assist you in your life as an IT pro, and also serve as a life hack.

 

I’ve written about microservices before. The concept, as it relates to agility in code design and implementation in the data center, seems profound to me. The magic of this style of development and deployment is that it allows the code jockey to focus his efforts on the discrete area of development that needs to take place. It allows larger groups to divvy up their projects into chunks of work that make the most sense for those doing the coding, and it allows these changes to take place dynamically. The agility of the application as a whole becomes practically fluid. The ability to roll back changes, should that be necessary, retains the fluidity and dynamic nature necessary for the growth of a modern application in today’s data center (whether on-premises or in the cloud). Sometimes, those small incremental changes prove so impactful that they ensure the success of the larger effort.

 

I can draw the same correlation of small changes to large benefits in the realm of life hacks. For example, as I’ve mentioned, I play guitar, though not particularly well. I’ve been playing since I was twelve, and literally, for decades, my ability has remained the same. But recently, I learned a few techniques that have helped me improve significantly. Small changes and new levels of understanding have helped me quite a bit. In this case, learning blues shapes on the fretboard and altering my hand positioning have made a big difference.

 

The same can be said of weight control. I’m a chunky guy. I’ve struggled with my weight most of my life. Small changes in my approach to food and exercise have helped me recently in achieving my goals.

 

Sleep is another category in which I struggle. My patterns have been bad; my sleep has averaged about five hours per night. To be pithy, I’m tired of being tired. So, I’ve looked at techniques to help me succeed at sleeping. Removing screens from the room, blackout curtains, and room temperature have all been identified as key criteria for staying asleep once falling asleep, so I’ve addressed those things. Another key factor has been white noise. I’ve leveraged all of these tools and techniques to assist me, and I'm happy to say that recently these small changes have helped.

 

I like to view these small changes in the same light as microservices. Small incremental changes can make a significant difference in the life of the implementer.

Recently, I wrote a post about the concept of a pre-mortem, which was inspired by an amazing podcast I’d heard (listen to it here). I felt that this concept could be interpreted exceptionally well within a framework of project management. The idea of thinking through as many variables and hindrances as possible to the success of individual tasks, which could in turn delay the milestones necessary to complete the project as a whole, correlates quite literally to medical care. Addressing the goals linked to a person's, or a project's, wellness really resonated with me.

 

In my new series of five posts, this being the first, I will discuss how concepts of IT code correlate to the way we live life. I will begin with how I see project management as a potentially correlative element to life in general and how to successfully live this life. This is not to say that I am entirely successful, as definitions of success are subjective. But I do feel that each day I get closer to setting and achieving better goals.

 

First, we need to determine what our goals are, whether financial, physical, fitness-related, emotional, romantic, professional, social, or whatever else matters to you. For me, lately, a big one has become getting better at guitar. But ultimately, the goals themselves are not as important as achieving them.

 

So, how do I apply the tenets of project management to my own goals? First and foremost, the most critical step is keeping the goals in mind.

 

  • Define your goals and set timelines
  • Define the steps you need to take to achieve your goals
  • Determine the assets necessary to achieve these goals, including vendors, partners, friends, equipment, etc.
  • Identify potential barriers to achieving those goals, such as travel for work, illness, family emergencies, etc.
  • Define those barriers (as best you can)
  • List potential roadblocks, and establish corresponding contingencies that could mitigate them
  • Work it

 

What does the last point mean? It means engaging with individuals who are integral to setting and keeping commitments. These people will help keep an eye on the commitments you’ve made to yourself, the steps you’ve outlined, and the work necessary to make each discrete task, timeline, and milestone achievable.

 

If necessary, create a project diagram for each of your goals, including the above steps, and define the milestones with dates marked as differentiators. Align ancillary tasks with their subtasks, and define those as in-line sub-projects. Personally, I do this step for every IT project with which I am involved. Visualizing a project using a diagram helps me keep each timeline in check. I also take each of these tasks, build an overall timeline for each sub-project, and put it all together as a master diagram. By visualizing these, I can ensure that future projects aren't forgotten, while also giving myself a clear view of when I need to contact outside resources for their buy-in. Defining my goals before going about achieving them allows me to be specific. Also, seeing that I am accomplishing minor successes along the way (staying on top of the timeline I've set, for example) helps me stay motivated.
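If drawing the diagram by hand feels like a chore, the same idea can be sketched in a few lines of Python with the graphviz package (which also needs the Graphviz system binaries installed). The task names and dependencies below are invented purely for illustration:

```python
# A minimal sketch: generate a small left-to-right precedence diagram.
# Requires the "graphviz" Python package plus the Graphviz system binaries.
# Task names and dependencies are illustrative placeholders, not a real plan.
from graphviz import Digraph

dag = Digraph("goal_plan", format="png")
dag.attr(rankdir="LR")  # read left to right, like a project timeline

tasks = {
    "define_goal": "Define goal and timeline",
    "list_steps": "List steps and assets",
    "identify_risks": "Identify barriers and contingencies",
    "milestone_1": "Milestone: first checkpoint",
    "work_it": "Work it",
}
for name, label in tasks.items():
    dag.node(name, label)

# Precedence: each task depends on the one before it.
for upstream, downstream in [
    ("define_goal", "list_steps"),
    ("list_steps", "identify_risks"),
    ("identify_risks", "milestone_1"),
    ("milestone_1", "work_it"),
]:
    dag.edge(upstream, downstream)

dag.render("goal_plan", cleanup=True)  # writes goal_plan.png
```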

 

Below is a sample of a large-scale precedence diagram I created for a global DR project I completed years ago. As you start from the left side and continue to the right, you’ll see each step along the way, with sub-steps, milestones, go/no-go determinations, and final project success on the far right. Of course, the diagram has been shrunk to fit the page. In this case, visibility into each individual step is less important than the definition and precedence of each step: what does each step rely on before it can be achieved? These relationships have all been defined on the diagram.

 

 

The satisfaction I garner from a personal or professional task well accomplished cannot be overstated.


I’ve attempted to locate a manager or company willing to speak to the premise of corporate pushback against a hybrid mentality. I’ve had many conversations with customers who’ve had this struggle within their organizations, but few were willing to go on record.

 

As a result, I’m going to relate a couple of personal experiences, but I won’t be able to attach customer references to them.

 

Let’s start with an organization I’ve worked with a lot lately. They have a lot of unstructured data, and our goal was to arrive at an inexpensive “SMB 3.0+” storage format that would satisfy this need. We recommended a number of cloud providers, both hybrid and public, to help them out. The pushback came from their security team, who’d decided that compliance issues were a barrier to going hybrid. Obviously, most compliance issues have since been addressed by providers. In the case of this company, we, as a consultative organization, were able to make a good case for the storage of the data, the costs, and an object-based model for access from their disparate domains. As it turned out, this particular customer chose a solution that placed a compliant piece of storage on-premises to satisfy its needs, but as a result of the research we’d submitted, their security team agreed to be more open to these types of solutions in the future.

 

Another customer had a desire to launch a new MRP application and was evaluating hosting it in a hybrid mode. In this case, the customer had a particular issue with relying on the application being hosted entirely offsite. As a result, we architected a solution wherein the off-prem components were designed to augment the physical/virtual architecture built for them onsite. This was a robust solution that guaranteed uptime for the environment, with a highly available approach to application uptime and failover: just what the customer had requested. The pushback in this case wasn’t one of compliance, because the hosted portion of the application would lean on our highly available and compliant data center. They objected to the cost, which seemed to us a reversal of their original mandate. We’d provided a solution based on their requests, but they changed that request drastically. In the end, they chose to host the entire application suite in a hosted environment. Their choice to move toward a cloudy environment for the application was driven by an objection to the costs of their originally desired approach.

 

Most of these objections were real-world concerns, and the decisions were ones our customers had sought out. They’d faced issues they had not been entirely sure were solvable. In these cases, pushback came in the form of either security or cost concerns. I hope we delivered solutions that met their objections and helped the customers achieve their goals.

 

It’s clear that the pushback we received was due to known or unknown real-world issues facing their business. In the case of the first customer, we’d been engaged to solve their issues regardless of objections, and we found them the best on-premises storage solution for their needs. But in the latter case, by promoting a solution geared toward satisfying everything they’d requested, we were bypassed in favor of a lesser solution provided by the application vendor. You truly win some and lose some.

 

Have you experienced pushback in similar situations? I'd love to hear all about it.


 

I remember the largest outage of my career. Late in the evening on a Friday night, I received a call from my incident center saying that the entire development side of my VMware environment was down and that there seemed to be a potential for a rolling outage including, quite possibly, my production environment.

 

What followed was a weekend of finger-pointing and root cause analysis between my team, the virtual data center group, and the storage group. Our org had hired IBM as the first line of defense on these Sev-1 calls. IBM included EMC and VMware in the problem resolution process as issues went higher up the call chain, and still the finger-pointing continued. By 7 a.m. on Monday, we’d gotten the environment back up and running for our user community, and we’d been able to isolate the root cause and ensure that this issue would never come up again. Other issues, certainly, but this one was not to recur.

 

Have you experienced circumstances like this at work? I imagine that most of you have.

 

So, what do you do? What may seem obvious to one person may not be obvious to others. Of course, you can troubleshoot the way I do: Occam’s razor, or parsimony, is my course of action. Try to apply logic, and force yourself to choose the easiest and least painful solutions first. Once you’ve exhausted those, you move on to the less likely and less obvious.

 

Early in my career, I was asked what I’d do as my first troubleshooting maneuver for a Windows workstation having difficulty connecting to the network. My response was to save the work that was open on the machine locally, then reboot. If that didn’t solve the connectivity issue, I’d check the cabling on the desktop, then the cross-connect before even looking at driver issues.

 

Simple parsimony, aka economy in the use of means to an end, is often the ideal approach.

 

Today’s data centers have complex architectures. Often, they’ve grown up over long periods of time, with many hands in the architectural mix, and the logic behind why things were done the way they were has been lost. As a result, troubleshooting application or infrastructure issues can be just as complex.

 

Understanding recent changes, patching, etc., can be an excellent way to focus your efforts. For example, patching Windows servers has been known to break applications. A firewall rule implementation can certainly break the ways in which application stacks can interact. Again, these are important things to know when you approach troubleshooting issues.
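As a rough illustration of correlating changes with an incident, a short Python sketch could filter a change log for entries made shortly before the trouble started. The log structure and field names here are assumptions for the example, not any particular product’s schema:

```python
# A minimal sketch: list recorded changes that landed shortly before an incident.
# The change-log format (a list of dicts with "time" and "summary") is assumed.
from datetime import datetime, timedelta

change_log = [
    {"time": datetime(2018, 3, 2, 21, 15), "summary": "Windows patch rollup on app servers"},
    {"time": datetime(2018, 3, 2, 22, 40), "summary": "New firewall rule between web and DB tiers"},
    {"time": datetime(2018, 2, 28, 9, 0), "summary": "Storage array firmware update"},
]

incident_start = datetime(2018, 3, 2, 23, 5)
lookback = timedelta(hours=6)  # tune the window to your change cadence

suspects = [c for c in change_log
            if incident_start - lookback <= c["time"] <= incident_start]

for change in sorted(suspects, key=lambda c: c["time"], reverse=True):
    print(change["time"], "-", change["summary"])
```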

 

But what do you do if there is no guidance on these changes? There are a great number of monitoring applications out there that can track key changes in the environment and point the troubleshooter toward potential issues. I am an advocate for integrating change management software with help desk software, and I would add to that a feed into this operations element from a SIEM collection element. The issue is the number of these components already in place at an organization: would the company rather replace those tools in favor of an all-in-one solution, or try to cobble the pieces together? Of course, given the nature of enterprise architectural choices, it is hard to find a single overall component that incorporates all of the choices made throughout an organization's history.

 

Again, this is a caveat emptor situation. Do the research, find the solution that best addresses your issues, determine an appropriate course of action, and get as close as you can to an overall solution to the problem at hand.


 

We’ve all seen dashboards for given systems. A dashboard is essentially a quick view into a system, and we’re seeing them more and more often in monitoring. Your network monitoring software may present a dashboard of all switches and routers, down to the individual ports, or all the way up to every port on given WAN connections. For a large organization, this can be quite a cumbersome view to digest at a glance. Networking is a great example of fully fleshed-out drill-down views: should any red appear on the dashboard, a simple click into it, and then deeper and deeper, should help discover the source of the problem wherever it may be.

 

Other dashboards now being created are less dynamic, and the useful information they present about the environment can be harder to discern.

 

The most important thing about a dashboard is that the key information should be presented so clearly that the person glancing at it doesn’t need to know exactly how to fix whatever the issue is; the information simply needs to be understandable to whoever is viewing it. If a given system is presenting an error of some sort, the viewer should be able to grasp the relevant information that matters to them.

 

Should that dashboard be fluid or static? Fluidity is necessary for those doing the deep dive into the information at the time, but a static dashboard can be truly satisfactory if the viewer is assigning the resolution to someone else, in more of a managerial or administrative view.

 

I believe that dashboards of true significance can present either of these perspectives. Usability should be limited only by the viewer’s needs.

 

I’ve seen some truly spectacular dynamic dashboard presentations. A few that spring to mind are Splunk, the analytics engine for far more than just SIEM; Plexxi, a networking company with outstanding deep-dive capabilities and animations in their dashboard; and of course, some of the wonderfully intuitive dashboards from SolarWinds. This is not to say that these are the limits of what a dashboard can present, only a representation of many that are stellar.

 

The difficulty with any fluid dashboard is this: how hard is it for a manager of the environment to create the functional dashboard the viewer actually needs? If my goal were to fashion a dashboard intended to surface, for example, network or storage bottlenecks, I would want to see, at least initially, a green/yellow/red gauge indicating whether there were hot spots or areas of concern. If that were all I needed, then as management I’d assign someone to look into it. But if I were in administration, I’d want the dashboard to be more interactive, letting me dig down to see exactly where the issue existed and/or how to fix it.
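A trivial Python sketch of that roll-up logic might look like the following. The metric names and thresholds are placeholders of my own, not values from any particular monitoring product:

```python
# A minimal sketch: roll per-metric utilization up into a Green/Yellow/Red gauge,
# while keeping the per-metric detail available for whoever wants to drill down.
THRESHOLDS = {"green": 70.0, "yellow": 90.0}  # percent utilization; illustrative only

def status(utilization_pct: float) -> str:
    if utilization_pct < THRESHOLDS["green"]:
        return "GREEN"
    if utilization_pct < THRESHOLDS["yellow"]:
        return "YELLOW"
    return "RED"

metrics = {"core-switch uplink": 62.0, "SAN pool A": 93.5, "WAN circuit 2": 81.0}

# Management view: one roll-up color for the whole dashboard.
print("Overall:", status(max(metrics.values())))

# Administrator view: drill down to the individual hot spots.
for name, pct in sorted(metrics.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{status(pct):6} {name}: {pct:.1f}%")
```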

 

I’m a firm believer in the philosophy that a dashboard should provide useful information, but only what the viewer requires. Something with some fluidity is always preferable.

When focusing on traditional mode IT, what can a Legacy Pro expect?

 


 

This is a follow-up to the last posting I wrote that covered the quest for training, either from within or outside your organization. Today I'll discuss the other side of the coin because I have worked in spaces and with various individuals where training just wasn't important.

 

These individuals were excellent at the jobs they'd been hired to do, and they were highly satisfied with those jobs. They had no desire for more training or even advancement. I don't have an issue with that. Certainly, I’d rather interact with a fantastic storage admin or route/switch engineer with no desire for career mobility than the engineer who’d been in the role for two months and had their sights set on the next role already. I’d be likely to get solid advice and correct addressing of issues by the engineer who’d been doing that job for years.

 

But, and this is important, what would happen if the organization changed direction? The route/switch guy who knows Cisco inside and out may be left in the dust if he refuses to learn Arista (for example) when the infrastructure changes hands and the organization changes platforms. Some of these decisions are made with no regard for existing talent. Or what if the enterprise moved from expanding its on-premises VMware environment to a cloud-first mentality? Those who refuse to learn will be left by the wayside.

 

Just like a shark dying if it doesn't move forward, so will a legacy IT pro lose their status if they don’t move forward.

 

I’ve been in environments where people were siloed to the extent that they never needed to do anything outside their scope. Say, for example, a mainframe coder. And yet, for the life of the mainframe in that environment, they were going to stay valuable to the organization. These skills are not consistent with the growth in the industry. Who really is developing their mainframe skills today? But, that doesn’t mean that they intrinsically have no impetus to move forward. They actually do, and should. Because, while it’s hard to do away with a mainframe, it’s been known to happen.

 

Obviously, my advice is to grow your skills, by hook or by crook. To learn outside your standard scope is beneficial in all ways. Even if you don’t use the new tech that you’re learning, you may be able to benefit the older tech on which you currently work by leveraging some of your newly gained knowledge.

 

As usual, I’m an advocate for taking whatever training interests you. I’d go beyond that to say that there are many ways to leverage free training portals and programs to benefit you and your organization, beyond those that have been sanctioned specifically by the organization. Spread your wings, seek out ways to better yourself, and in this, as in life, I’d pass on the following advice: always try to do something beneficial every day. At least one thing that will keep you moving forward, and not let you die like a shark rendered stationary.


The IT Training Quandary


What do you do when your employer says no more training? What do you do when you know that your organization should move to the cloud or at least some discrete components? How do you stay current and not stagnate? Can you do this within the organization, or must you go outside to gain the skills you seek?

 

This is a huge quandary…

 

Or is it?

 

Not too long ago, I wrote about becoming stale in your skill sets, and how that becomes a career-limiting scenario. The “gotcha” in this situation is that often your employer isn't as focused on training as you are. The employer may believe in getting you trained up, but you may feel as if that training is less than marketable or forward thinking. Or, worse, the employer doesn’t feel that training is necessary. They may view you as being capable of doing the job you’ve been asked to do, and that the movement toward future technology is not mission critical. Or, there just might not be budget allotted for training.

 

These scenarios are confusing and difficult. How is one to deal with the disparity between what you want and what your employer wants?


The need for strategy, in this case, is truly critical. I don’t advocate misleading your employer, of course, but we are all looking out for ourselves and what we can do to leverage our careers. Some people are satisfied with what they’re doing and don’t long to sharpen their skills, while others are like sharks, not living unless they’re moving forward. I consider myself to be among the latter.

 

Research free training options. I know, for example, that Microsoft has much of its Azure training available online for no cost. Again, I don’t recommend boiling the ocean, but you can be strategic about what you select. Of course, knowing the course you wish to take might force you to actually pay for the training you seek.

 

Certainly, a sandbox or home lab environment, where you can build up and tear down test platforms, can provide self-training. Of course, getting certifications in that mode is somewhat difficult, as is gaining access to the right tools to accomplish your training in the ways the vendor recommends.

 

I advocate researching a product category that benefits the company in today’s environment but can also act as a catalyst for a move to the cloud. Should that be on the horizon, the most useful on-ramp is likely Backup as a Service or DR as a Service. Research into newer categories of backup, like Cohesity, Rubrik, or Actifio, where data management, location, and data awareness are critical, can help move the organization toward cloud approaches. If you can effectively sell the benefits of your vision, your star should rise in the eyes of management. Sometimes it may feel like you’re dragging the technology behind you, or pushing undesired tech toward your IT management, but fighting the good fight is well worth it. You can orchestrate a cost-free proof of concept on products like these to facilitate the research, and thus prove the benefit to the organization without significant outlay.

 

In this way, you can guide your organization toward the technologies that are most beneficial to them by solving today’s issues while plotting forward-thinking strategies. Some organizations are simply not conducive to this approach, which leads me to my next point.

 

Sometimes, the only way to better your skills, or improve your salary or stature, is outside your current organization. This is a very dynamic field, and movement from vendor to end-user to channel partner has proven a fluid stream. If you find that you’re just not getting satisfaction within your IT org, you really should consider whether moving on is the right approach. This drastic step should be taken with caution, as the appearance of hopping from gig to gig can potentially be viewed by an employer as a negative. However, there are times when the only way to move upward is to move onward.

I’ve long held the belief that for any task there are correct approaches and incorrect ones. When I was small, I remember being so impressed by the huge variety of parts my father had in his tool chest. Once, I watched him repair a television remote control, one that had shaped and tapered plastic buttons. The replacement from RCA/Zenith, I believe at the time, cost upwards of $150. He opened the broken device and determined that the problem was that the tongue on the existing button had broken. Rather than epoxy the old one back together, he carved and buffed an old Bakelite knob into the proper shape and attached it in place of the original. Ultimately, the final product looked and performed as if it were the original. This, to me, was the ultimate accomplishment. Almost as the Hippocratic Oath dictates: above all, do no harm. It was magic.

 

When all you have is a hammer, everything is a nail, right? But that sure is the wrong approach.

 

Today, my favorite activity outside of work is building and maintaining guitars. When I began doing this, I didn’t own some critical tools. For example, an entire series of needle files and crown files is appropriate for shaping and repairing the frets on the neck. They’re not a very expensive purchase, but any other tool would fail at the task. The correct Allen wrench is necessary for adjusting the torsion rod in the neck. And the ideal soldering iron is critical for properly wiring pickups, potentiometers, and the jack. Of course, when sanding, a variety of grades is also necessary, not to mention a selection of paints, brushes, stains, and lacquers.

 

The same can be said of DevOps. Programming languages are designed for specific purposes, and there have been many advances in the past few years in what a scripting task may require. Many might use Bash, batch, or PowerShell for their tasks. Others may choose PHP or Ruby, while still others choose Python as their scripting tool. Today, it is my belief that no one tool can accommodate every action necessary to perform these tasks. There are nuances to each language, but one thing is certain: many tasks require a collaborative conversation between these tools. To accomplish them, the ideal tooling will likely call functions back and forth across scripting languages. And while some bits of code are required here and there, this is currently the best way to approach the situation, given that many tools don't yet exist in packaged form. The DevOps engineer, then, needs to write and maintain these bits of code to help ensure that they are accurate each time they are called upon.
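As a hedged example of that collaborative conversation between languages, a Python wrapper might drive a shell script and act on its result. The script names and the host argument here are hypothetical:

```python
# A minimal sketch: Python as the decision-maker, Bash as the worker.
# The script names ("collect_inventory.sh", "remediate.sh") and host are hypothetical.
import subprocess
import sys

def run_step(cmd: list[str]) -> str:
    """Run one external step, stop loudly if it fails, and return its output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"Step failed ({' '.join(cmd)}): {result.stderr.strip()}")
    return result.stdout

inventory = run_step(["bash", "collect_inventory.sh", "app-server-01"])
if "ERROR" in inventory:
    run_step(["bash", "remediate.sh", "app-server-01"])
```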

 

As was correctly stated in comments on my previous post, I need to stress that these custom pieces of code must be tested before they're put to use, to help ensure that any other changes that may have taken place within the infrastructure are accounted for each time the scripts are set to task.
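To echo that point, even a tiny pytest sketch against the hypothetical wrapper above pays for itself; using harmless commands keeps the test from ever touching the real infrastructure:

```python
# A minimal sketch: exercise the run_step wrapper with harmless commands,
# so the test never touches production systems. The "runner" module is hypothetical.
import pytest
from runner import run_step

def test_successful_step_returns_output():
    assert "hello" in run_step(["echo", "hello"])

def test_failing_step_exits():
    with pytest.raises(SystemExit):
        run_step(["false"])  # a command that always exits non-zero
```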

 

I recommend that anyone who is in DevOps get comfortable with these and other languages, and learn which do the job best so that DevOps engineers become more adept at facing challenges.

 

At some point, there will be automation tools with slick GUIs that may address many or even all of these needs as they arise. But for the moment, I advise learning, utilizing, and customizing scripting tools. In the future, when those tools do become available, the question is: will they surpass the automation you’ve already created through your DevOps work? I cannot predict.

DevOps, the practice of creating code internally to streamline the administrative processes that fall under the sysadmin’s purview, is still emerging within IT departments across the globe. These tasks have traditionally revolved around the mechanical functions of the sysadmin’s role. However, another whole category of administration is now becoming far more vital in that role: the deployment of applications and their components within the environment.

 

Application development is undergoing a big change. The use of methods like microservices and containers, a relatively new paradigm about which I’ve spoken before, makes the deployment of these applications very different. Now, a SysAdmin must be more responsive to the needs of the business to get these bits of code and/or containers into production far more rapidly. As a result, the SysAdmin needs to have tools in place to respond as precisely, actively, and consistently as possible. Application code is now delivered so dynamically that it must be deployed, or rolled back, just as rapidly.
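A minimal sketch of that deploy-and-roll-back rhythm, using nothing fancier than the Docker CLI driven from Python, might look like this. The image name, tags, and container name are placeholders:

```python
# A minimal sketch: deploy a container image, and fall back to the previous tag on failure.
# Image name, tags, and container name are illustrative placeholders.
import subprocess

def docker(*args: str) -> None:
    subprocess.run(["docker", *args], check=True)

def deploy(image: str, tag: str, name: str = "app") -> None:
    docker("pull", f"{image}:{tag}")
    subprocess.run(["docker", "rm", "-f", name])  # best effort: ignore if nothing to remove
    docker("run", "-d", "--name", name, f"{image}:{tag}")

try:
    deploy("registry.example.com/web", "v2.4.1")
except subprocess.CalledProcessError:
    # The new build failed to pull or start; roll back to the last known-good tag.
    deploy("registry.example.com/web", "v2.4.0")
```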

 

When I worked at VMware, I was part of an SDDC group whose main goal was assisting the rollout of massive deployments (SAP, JD Edwards, etc.) into an anywhere/anytime model. This was DevOps in the extreme. Our expert code jockeys were tasked with writing custom code for each deployment. While this was vital to the goals of many organizations, today the tools exist to do these tasks in a more elegant manner.

 

So, what tools would an administrator require to push out or roll back these applications in a potentially seamless manner? There are tools that will roll out applications, say, from your VMware vCenter to whichever VMware infrastructure you have in your server farms, but there are also ways to leverage that same VMware infrastructure to deploy outbound to AWS, Azure, or hybrid but non-VMware infrastructures. A great example is the wonderful Platform9, which exists as a separate panel within vCenter and allows the admin to push out full deployments to wherever the management platform is deployed.

 

There are other tools, like Mesos, which help orchestrate Docker-style container deployments. This is where administrators’ hopes for Docker administration currently rest.

 

But the microservices piece of the puzzle has yet to be solved, so the sysadmin is currently under the gun for the same type of automation toolset. For today, and for the foreseeable future, DevOps holds the key. We not only need to deploy these parts and pieces to the appropriate places, but we also have to ensure that they’re tracked and that they can be pulled back should that be required. So, what key components are critical? Version tracking, lifecycle, permissions, and specific locations must all be maintained.
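Whatever tooling eventually wins, the bookkeeping itself is easy to sketch. Here is one way that tracking data might be shaped in Python; the field names and values are purely illustrative:

```python
# A minimal sketch: the records a deployment tool needs so a push can be
# tracked, audited, and pulled back later. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeploymentRecord:
    service: str
    version: str
    location: str                        # e.g., "on-prem-vcenter" or "aws-us-east-1"
    owner: str                           # who is permitted to change or retract it
    previous_version: str | None = None  # what to roll back to, if anything
    deployed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

history: list[DeploymentRecord] = []

history.append(DeploymentRecord(
    service="billing-api",
    version="1.8.0",
    location="aws-us-east-1",
    owner="platform-team",
    previous_version="1.7.3",
))
```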

I imagine that what we’ll be seeing in the near future are standardized toolsets that leverage orchestration elements for newer application paradigms. For the moment, we will need to rely on our own code to assist us in the management of the new ways in which applications are being built.

When a SysAdmin implements a DevOps approach to an environment, there are many things that need to be considered, not least of which is developing a strong line of communication between the developer and the SysAdmin. Occasionally, and more frequently as time goes on, I’m finding that these people are one and the same, as coders and scripters spread their talents into administrators’ roles. But these are discrete skills, and the value of the scripter augments the skills and value of the system administrator.

 

Process is a huge component of the SysAdmin's job. The ordering process, billing process, build process, delivery process. How many of those tasks could be automated? How easily could these functions be translated to other tasks that are conscripted, and could therefore be scripted?

 

The wonderful book, The Phoenix Project, is a great example of this. It outlines how an IT manager leverages the powers of organization and develops a process that helps facilitate the recovery of a financially strapped industry leader.

In the book, obvious communication issues initially bred the perception that IT was falling behind in its requisite tasks. Systems, trouble tickets, and other deliverables were not reaching the requestor in any sort of reasonable time frame. As the main character analyzed both the perceptions and the realities of these shortfalls, he began questioning various stakeholders in the company. People who were directly affected, and those outside his realm, were solicited for advice. He also evaluated processes within the various groups of the organization, found out what worked and why, then used his successes and failures to help him establish assets worth leveraging. In some cases, those assets were skills, individuals, and personalities that helped him streamline his functions within the organization. Gradually, he was able to skillfully handle the problems that arose, and fix problems that had been entrenched long before.

 

A number of key takeaways from the character's experiences illustrate what I was trying to outline: for example, the willingness to question yourself, your allies, and your adversaries in a given situation, with the goal of making things better. Sometimes being critical in the workplace, which includes scrutinizing your successes and failures, may be the best way to fix issues and achieve overall success.

 

What does your process look like for any given set of tasks? How logical is it? Can it be eased, smoothed, or sped up? Measuring the number of hours it takes to perform certain tasks, and then critically evaluating them, could potentially improve both metrics and processes. As I see it, the scriptability of some of these tasks allows us to shave off precious hours, improve customer perception, and regain credibility. Being willing to embrace this new paradigm offers countless benefits.

 

It all begins with communication. Don’t be afraid to allow yourself to be seen as not knowing the answer, but always be willing to seek one. We can learn from anyone, and by including others in our evaluation, we can also be perceived as humble, collaborative, and productive.

In part one, I outlined the differences between a traditional architecture and a development-oriented one. We’ve seen that these foundational differences call for specific design approaches that can significantly impact the goals of a team, and the sysadmin’s approach to supporting them. But what we haven’t addressed are some of the prerequisites and design considerations necessary to facilitate that.

 

Let me note that DevOps as a concept began with the operations team creating tools in newer scripting and programming languages, all centered around helping the facilities and sysadmin groups support the needs of the new IT. Essentially, the question was: “What kinds of applications do we in the infrastructure teams need to better support IT’s needs as a whole?” Many years ago, I had the great pleasure of working on a team with the great Nick Weaver ( @LynxBat ). Nick had an incredible facility for imagining tools that would do just this. For example, if you had a requirement to replicate an object from one data center to another, which could be a simple push/get routine, Nick would tell you that you were doing it wrong. He’d create a little application customized to accomplish the task, verify completion, and probably give you a piece of animation while it was taking place to make you feel like you were actually doing something. Meanwhile, it was Nick’s application, his scripting, that was doing all the heavy lifting.
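In the spirit of the little tools Nick would build, here is a minimal replicate-and-verify sketch in Python. The paths and the checksum choice are assumptions; a real tool would add retries, logging, and, yes, probably the animation:

```python
# A minimal sketch: copy an object from one data center path to another and
# verify the copy by checksum before declaring success. Paths are placeholders.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(src: Path, dst: Path) -> None:
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    if sha256(src) != sha256(dst):
        raise RuntimeError(f"Replication of {src} to {dst} failed verification")

replicate(Path("/mnt/dc1/exports/backup.tar"), Path("/mnt/dc2/exports/backup.tar"))
```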

 

I’ll never underestimate the role of the developer, nor appreciate it more than I did watching the elegance with which Nick was able to achieve these things. I’ve never pushed myself to develop code; that is probably a major shortfall in my infrastructure career. But boy howdy, do I appreciate the elegance of well-written code. As a result, I’ve always leaned on the abilities of peers with coding skills to create the applications I’ve needed to do my job better.

 

When a sysadmin performs all of their tasks manually, no matter how strong their attention to detail, mistakes often get made. But when the tools are tested and validated, running them should produce the same result across the board. If some new piece of the infrastructure must be included, then of course the code must know about it, but the concept still holds.

 

So, in the modern cloud-based, or even cloud-native, world, we need these tools to keep ourselves on top of all the tasks that truly need to be accomplished. The more we can automate, the more efficient we can be. This means everything from deploying new virtual machines, provisioning storage, and loading applications, to orchestrating the deployment of applications onto cloud-based infrastructures (whether hybrid, on-premises, or public cloud). In many cases, these orchestration frameworks simply didn’t exist. Being able to actively create the tools that ensure successful deployments has become mission critical.

 

To be sure, entirely new pieces of software are being created to solve these problems as they arise, and many of them are simple to use. But whether you buy something off the shelf or write it yourself, the goal is the same: help the poor sysadmin do the job they want to do as seamlessly and efficiently as possible.

In traditional IT, the SysAdmin’s role has been established as supporting the infrastructure in its current dynamic. Our measure of success is being able to consistently stand up architecture in support of new and growing applications, build test/dev environments, and ensure that these are delivered quickly, consistently, and with a level of reliability such that the applications work as designed. Most SysAdmins with whom I’ve interacted have performed their tasks in silos: network does their job, servers do theirs, and of course storage has its unique responsibility. In many cases, the silos worked at cross purposes, or at minimum with differing agendas. The differing rhythms of each group often made it impossible to deliver the infrastructure agility the customer required. Still, in many cases our systems did work as we’d hoped, and all the gears meshed properly to ensure our organization’s goals were accomplished.

 

In an agile, cloud native, devops world, none of these variables can be tolerated. We need to be able to provide the infrastructure to deliver the same agility to our developer community that they deliver to the applications.

 

How are these applications different from the more traditional monolithic applications for which we’ve been providing services for so long? The key here is the concept of containers and microservices. I’ve spoken of these before, but in short, a container environment involves packaging either the entire stack or discrete portions of the app stack in a way that is not necessarily reliant on the operating system or platform on which they sit. In this case, the x86 server is already in place, or can be delivered in generic modes on demand. The application, or portions of it, can be deployed as the developer creates it and, alternately, removed just as simply. Because there is so much less reliance on the virtually deployed physical infrastructure, the compute layer can be located pretty much anywhere: your prod or dev environment on-premises, your hybrid cloud provider, or even a public cloud infrastructure like Amazon. As long as security and segmentation have been configured properly, the functional compute layer and its location are actually somewhat irrelevant.

 

A container-based environment, which is not entirely distinct from a microservices-based one, delivers an application as an entity, but again, rather than relying on a physical or virtual platform and its unique properties, it can sit on any presented compute layer. These technologies, such as Kubernetes, Docker, Photon, Mesosphere, and the like, are maturing, with orchestration layers and delivery methods far friendlier to the administrator than they’ve been in the past.

 

In these cases, however, the application platform being delivered is much different from traditional large corporate apps. An old Lotus Notes implementation, for example, required many layers of infrastructure, and in many cases these types of applications simply don’t lend themselves to a modern architecture. They’re not “cloud native.” This is not to disparage how relevant Notes became to many organizations. But a modern architecture, with its application mobility and flexible data locales, simply does not suit the kind of infrastructure that a monolithic SAP or JD Edwards deployment required. Of course, these particular applications are solving the cloud issues in different ways, and they are still as vital to their organizations as they’ve been in the past.

 

In the following four blog postings, I’ll address the architectural, design, and implementation issues facing the SysAdmin within the new paradigm that cloud native brings to the table. I hope to answer the questions you may have, and I look forward to as much interaction, clarification, and challenging conversation as possible.

Over the past five postings, I’ve talked about some trends that we have seen happening and gaining traction within the cloud space. I’ve spoken of:

 

  • Virtualization – Established trends toward virtualization, particularly VMware, have been challenged by a variety of newcomers whose market share continues to grow. Most notable here is OpenStack. VMware has answered the threat of Azure, AWS, and pure OpenStack by embracing them with a series of APIs meant to integrate the on-prem virtual data center with those peers in the hybrid space.

 

  • Storage – In the case of traditional storage, the trend has been toward faster: faster Ethernet or Fibre Channel as the interconnect, and of course solid state becoming the norm in any reasonably high-IO environment. But the biggest sea change is object-based storage. Object really is a different approach, with replication, erasure coding, and redundancy built in.

 

  • Software-defined networking – SDN is eating quite drastically into the data center space these days. The complexities of routing tables and firewall rules are being addressed within the virtual data center by tools like ACI (Cisco) and NSX (VMware). While port reduction isn’t quite the play here, the ability to segment a network via these rules far surpasses any physical switch’s capacity. In addition, these rules can be rolled out effectively and accurately, with easy rollback. I find these two pieces truly compelling for maintaining and enhancing the elegance of the network, while reducing the complexity laid onto the physical switch environment.

 

  • Containers – In the new world of DevOps, containers, a way to disaggregate the application from the operating system, have proven yet another compelling way into the future. DevOps calls for the ability to update parts and pieces of an application, while containers allow the application to be scaled, updated, and deployed wherever and whenever you want.

 

  • Serverless and microservices – These also fall into the DevOps equation: small components, compiled and put together as building blocks, make up the entire application and keep it dynamic and modifiable. The “serverless” piece is somewhat of a misnomer (any workload must still reside on some compute layer), but these workloads are dynamic, movable, and less dependent on any particular hypervisor or location; they run wherever the underlying architecture actually resides.

 

So… what’s next in data center infrastructure? We’ve seen tools that allow the data center administrator to easily deploy workloads to whatever destination they choose, gateways that bridge the gap from traditional storage to object-based storage, orchestration tools that allow for the rapid, consistent, and highly managed deployment of containers in the enterprise/cloud space, and a truly cross-platform approach to serverless/microservice architectures that eases the adoption of a newer paradigm in the data center.

 

What we haven’t seen is a truly revolutionary unifier. For example, when VMware became the juggernaut it is, the virtualization platform became the tool that tied everything together. Regardless of your storage, compute (x86, in particular), and network infrastructure, with VMware as a platform you had one reliable and practically bulletproof tool with which to deploy new workloads, manage existing platforms, and scale up or down as required, all through the ease of a simple management interface. With all these new technologies, will we have that glue? Will we have the ability to build entire architectures and manage them easily? Will there be a level of fault tolerance, an equivalent to DRS or Storage DRS? As we seek the new brass ring and poise ourselves on the platforms of tomorrow, how will we approach these questions?

 

I’d love to hear your thoughts.
