
Geek Speak

31 Posts authored by: mbleib


I’ve attempted to locate a manager or company willing to speak on the record about corporate pushback against a hybrid mentality. I’ve had many conversations with customers who’ve had this struggle within their organizations, but few were willing to go on record.

 

As a result, I’m going to relate a couple of personal experiences, but I won’t be able to attach customer references to them.

 

Let’s start with an organization I’ve worked with a lot lately. They have a large amount of unstructured data, and our goal was to arrive at an inexpensive “SMB 3.0+” storage format to satisfy that need. We recommended a number of cloud providers, both hybrid and public, to help them out. The pushback came from their security team, who’d decided that compliance issues were a barrier to going hybrid, even though most such compliance issues have long since been addressed. As a consultative organization, we were able to make a good case for the storage of the data, the costs, and an object-based model for access from their disparate domains. As it turned out, this particular customer chose a solution that placed a compliant piece of storage on-premises, but as a result of the research we’d submitted for them, their security team agreed to be more open to these types of solutions in the future.

 

Another customer wanted to launch a new MRP application and was evaluating hosting it in a hybrid mode. This customer had a particular issue with relying on the application being hosted entirely offsite, so we architected a solution in which the off-prem components were designed to augment the physical/virtual architecture built for them onsite. It was a robust solution, with a highly available approach to application uptime and failover: just what the customer had requested. The pushback here wasn’t one of compliance, because the hosted portion of the application would lean on our highly available and compliant data center. They objected to the cost, which seemed to us a reversal of their original mandate. We’d provided a solution based on their requests, but they changed that request drastically: in the end, they chose to host the entire application suite in a hosted environment. Their move toward a cloudy environment for the application, in this case, was driven by an objection to the costs of their original desired approach.

 

Most of these objections were real-world, and the decisions were ones our customers had sought. They’d faced issues they had not been entirely sure were solvable. In these cases, pushback came in the form of either security or cost concerns. I hope we delivered solutions that met their objections and helped the customers achieve their goals.

 

It’s clear that the pushback we received was due to known or unknown real-world issues facing their business. In the case of the first customer, we’d been engaged to solve their issues regardless of objections, and we found them the best on-premises storage solution for their needs. But in the latter case, by promoting a solution geared toward satisfying all they’d requested, we were bypassed in favor of a lesser solution provided by the application vendor. You truly win some and lose some.

 

Have you experienced pushback in similar situations? I'd love to hear all about it.


 

I remember the largest outage of my career. Late on a Friday night, I received a call from my incident center: the entire development side of my VMware environment was down, and there seemed to be potential for a rolling outage that could include my production environment.

 

What followed was a weekend of finger pointing and root cause analysis between my team, the virtual data center group, and the storage group. Our org had hired IBM as the first line of defense on these Sev-1 calls. IBM included EMC and VMware in the problem resolution process as issues went higher up the call chain, and still the finger pointing continued. By 7 a.m. on Monday, we’d gotten the environment back up and running for our user community, and we’d been able to isolate the root cause and ensure that this particular issue would never recur. Others would, certainly, but not this one.

 

Have you experienced similar circumstances like this at work? I imagine that most of you have.

 

So, what do you do? What may seem obvious to one may not be obvious to others. Of course, you can troubleshoot the way I do: Occam’s Razor, or parsimony, is my course of action. Apply logic, and force yourself to try the simplest and least painful explanations first. Once you’ve exhausted those, move on to the less likely and less obvious.

 

Early in my career, I was asked what I’d do as my first troubleshooting maneuver for a Windows workstation having difficulty connecting to the network. My response was to save the work that was open on the machine locally, then reboot. If that didn’t solve the connectivity issue, I’d check the cabling on the desktop, then the cross-connect before even looking at driver issues.

 

Simple parsimony, aka economy in the use of means to an end, is often the ideal approach.
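That parsimony-first ordering can be sketched in a few lines. This is a toy illustration, not real tooling: the checks, their costs, and their results below are invented, standing in for the workstation example above where reboot and cabling come before driver work.

```python
# A minimal sketch of "parsimony-first" troubleshooting: run the cheapest,
# most likely checks before the expensive, unlikely ones. The checks and
# their relative costs here are illustrative, not from any real product.

def troubleshoot(checks):
    """Run (name, cost, passed) checks in order of increasing cost.

    Returns the name of the first failing check, or None if all pass.
    """
    for name, cost, passed in sorted(checks, key=lambda c: c[1]):
        if not passed:
            return name
    return None

# The workstation example from the text: reboot and cabling come before drivers.
checks = [
    ("reboot workstation",        1, True),
    ("check desktop cabling",     2, False),  # a loose patch cable found here
    ("check cross-connect",       3, True),
    ("reinstall network driver", 10, True),
]

print(troubleshoot(checks))  # → check desktop cabling
```

The point isn’t the code; it’s the discipline of sorting by cost before you start pulling on threads.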

 

Today’s data centers have complex architectures. Often they’ve grown up over long periods of time, with many hands in the architectural mix, and the logic behind why things were done the way they were has been lost. As a result, troubleshooting application or infrastructure issues can be just as complex.

 

Understanding recent changes, patching, etc., can be an excellent way to focus your efforts. For example, patching Windows servers has been known to break applications. A firewall rule implementation can certainly break the ways in which application stacks can interact. Again, these are important things to know when you approach troubleshooting issues.

 

But what do you do if there is no record of these changes? A great number of monitoring applications can track key changes in the environment and point the troubleshooter toward potential issues. I am an advocate for integrating change management software into help desk software, and would add to that a feed into operations from some SIEM collection element. The issue is the number of these components already in place at an organization: would the company rather replace those tools with an all-in-one solution, or try to cobble pieces together? Given the nature of enterprise architectural choices, it is hard to find a single overall component that incorporates all of the choices made throughout an organization’s history.
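The value of that change feed is simple to demonstrate: given an incident time, surface the changes that landed shortly before it. The record fields, change names, and window below are invented for illustration; a real change-management or SIEM feed would supply the data.

```python
# A hedged sketch of using change records to focus troubleshooting: list the
# changes (patches, firewall rules) applied shortly before an incident.
from datetime import datetime, timedelta

def recent_changes(changes, incident_time, window_hours=24):
    """Return changes applied within window_hours before the incident."""
    window = timedelta(hours=window_hours)
    return [c for c in changes
            if timedelta(0) <= incident_time - c["applied"] <= window]

# Invented example records; a change-management system would provide these.
changes = [
    {"item": "Windows server patch",      "applied": datetime(2017, 4, 20, 14, 0)},
    {"item": "Firewall rule: block 8443", "applied": datetime(2017, 4, 20, 22, 30)},
    {"item": "Switch firmware upgrade",   "applied": datetime(2017, 4, 10, 3, 0)},
]
incident = datetime(2017, 4, 21, 9, 15)

for c in recent_changes(changes, incident):
    print(c["item"])   # only the two changes inside the 24-hour window
```

Two candidate changes out of the whole history is a far better starting point than a blank page at 2 a.m. on a Sev-1 call.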

 

Again, this is a caveat emptor situation. Do the research, find the solution that best addresses your issues, determine an appropriate course of action, and get as close as you can to an overall solution to the problem at hand.


 

We’ve all seen dashboards for given systems. A dashboard is essentially a quick view into a given system, and we’re seeing them more and more often in monitoring. Your network monitoring software may present a dashboard of all switches and routers, drilling down to individual ports or rolling all ports up into given WAN connections. For a large organization, this can be quite a cumbersome view to digest at a glance. Networking is a great example of fully fleshed-out drill-down views: should any “Red” appear on the dashboard, a simple click into it, and then deeper and deeper, should uncover the source of the problem wherever it may be.

 

Other dashboards being created now are less dynamic, and the useful information within them can be harder to discern.

 

The most important thing to understand about a dashboard is that the key information should be presented so plainly that the person glancing at it doesn’t need to know how to fix the issue, only what the issue is. If a given system is presenting an error of some sort, the viewer should be able to grasp the information relevant to them at a glance.

 

Should that dashboard be fluid or static? Fluidity is necessary for those doing the deep dive into the information at the time, but a static dashboard can be truly satisfactory when the viewer is assigning the resolution to someone else: more of a managerial or administrative view.

 

I believe that those dashboards of true significance have the ability to present either of these perspectives. The usability should only be limited by the viewer’s needs.

 

I’ve seen some truly spectacular dynamic dashboard presentations. A few that spring to mind are Splunk, an analytics engine for far more than just SIEM; Plexxi, a networking company with outstanding deep-dive capabilities and animations in their dashboard; and of course, some of the wonderfully intuitive dashboards from SolarWinds. This is not to say that these are the limits of what a dashboard can present, only a representation of many that are stellar.

 

The difficulty with any fluid dashboard is this: how hard is it for a manager of the environment to create the functional dashboard the viewer needs? If my goal were to fashion a dashboard for spotting, for example, network or storage bottlenecks, I would want to see, at least initially, a Green/Yellow/Red gauge indicating whether there were hotspots or areas of concern. If I were in management, that alone would be enough to assign someone to look into it. If I were in administration, I’d want to interact with the dashboard and dig down to see exactly where the issue existed, and how to fix it.
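The Green/Yellow/Red gauge with a drill-down can be reduced to a very small sketch. Everything here is invented for illustration: the component names, the statuses, and the idea that the top-level gauge is simply the worst status among all components.

```python
# A minimal sketch of the rollup/drill-down split described above: each
# component reports a status, the top-level gauge shows the worst of them,
# and the drill-down filters to the components needing attention.

SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def rollup(components):
    """Top-level gauge color: the worst status among all components."""
    return max(components.values(), key=lambda s: SEVERITY[s])

def drill_down(components):
    """The administrator's interactive view: only the non-green components."""
    return {name: s for name, s in components.items() if s != "green"}

status = {
    "core-switch-01": "green",
    "san-array-02":   "red",     # the storage hotspot
    "wan-link-east":  "yellow",
}

print(rollup(status))      # → red
print(drill_down(status))  # → {'san-array-02': 'red', 'wan-link-east': 'yellow'}
```

The manager’s static view is `rollup`; the admin’s fluid view is `drill_down`. A dashboard of real significance offers both.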

 

I’m a firm believer in the philosophy that a dashboard should provide useful information, but only what the viewer requires. Something with some fluidity is always preferable.

When focusing on traditional mode IT, what can a Legacy Pro expect?

 


 

This is a follow-up to the last posting I wrote that covered the quest for training, either from within or outside your organization. Today I'll discuss the other side of the coin because I have worked in spaces and with various individuals where training just wasn't important.

 

These individuals were excellent at the jobs they'd been hired to do, and they were highly satisfied with those jobs. They had no desire for more training or even advancement. I don't have an issue with that. Certainly, I’d rather interact with a fantastic storage admin or route/switch engineer with no desire for career mobility than the engineer who’d been in the role for two months and had their sights set on the next role already. I’d be likely to get solid advice and correct addressing of issues by the engineer who’d been doing that job for years.

 

But, and this is important, what would happen if the organization changed direction? The route/switch guy who knows Cisco inside and out may be left in the dust if he refuses to learn Arista (for example) when the infrastructure changes hands and the organization changes platform. Some of these decisions are made with no regard for existing talent. Or what if, as an enterprise, they moved from expanding their on-premises VMware environment to a cloud-first mentality? Those who refuse to learn will be left by the wayside.

 

Just like a shark dying if it doesn't move forward, so will a legacy IT pro lose their status if they don’t move forward.

 

I’ve been in environments where people were siloed to the extent that they never needed to do anything outside their scope. Take, for example, a mainframe coder. For the life of the mainframe in that environment, they were going to stay valuable to the organization, but those skills are not consistent with the growth in the industry. Who is really developing their mainframe skills today? That doesn’t mean such people have no impetus to move forward. They do, and they should, because while it’s hard to do away with a mainframe, it’s been known to happen.

 

Obviously, my advice is to grow your skills, by hook or by crook. To learn outside your standard scope is beneficial in all ways. Even if you don’t use the new tech that you’re learning, you may be able to benefit the older tech on which you currently work by leveraging some of your newly gained knowledge.

 

As usual, I’m an advocate for taking whatever training interests you. I’d go beyond that to say that there are many ways to leverage free training portals, and programs to benefit you and your organization beyond those that have been sanctioned specifically by the organization. Spread your wings, seek out ways to better yourself, and in this, as in life, I’d pass on the following advice: Always try to do something beneficial every day. At least one thing that will place you on the moving forward path, and not let you die like a shark rendered stationary.

mbleib

The IT Training Quandary

Posted by mbleib Apr 27, 2017

What do you do when your employer says no more training? What do you do when you know that your organization should move to the cloud or at least some discrete components? How do you stay current and not stagnate? Can you do this within the organization, or must you go outside to gain the skills you seek?

 

This is a huge quandary…

 

Or is it?

 

Not too long ago, I wrote about becoming stale in your skill sets, and how that becomes a career-limiting scenario. The “gotcha” in this situation is that often your employer isn't as focused on training as you are. The employer may believe in getting you trained up, but you may feel as if that training is less than marketable or forward thinking. Or, worse, the employer doesn’t feel that training is necessary. They may view you as being capable of doing the job you’ve been asked to do, and that the movement toward future technology is not mission critical. Or, there just might not be budget allotted for training.

 

These scenarios are confusing and difficult. How is one to deal with the disparity between what you want and what your employer wants?


The need for strategy, in this case, is truly critical. I don’t advocate misleading your employer, of course, but we are all looking out for ourselves and what we can do to leverage our careers. Some people are satisfied with what they’re doing and don’t long to sharpen their skills, while others are like sharks, not living unless they’re moving forward. I consider myself to be among the latter.

 

Research free training options. I know, for example, that Microsoft has much of its Azure training available online for no cost. Again, I don’t recommend boiling the ocean, but you can choose strategically what you want to pursue. Of course, knowing the course you wish to take might force you to actually pay for the training you seek.

 

Certainly, a sandbox or home lab environment, where you can build up and tear down test platforms, provides self-training. Of course, getting certifications in that mode is somewhat difficult, as is gaining access to the right tools to accomplish your training in the ways the vendor recommends.

 

I advocate researching a product category that benefits the company in today’s environment but can also act as a catalyst for a move to the cloud. Should that be on the horizon, the most useful ramp is likely Backup as a Service or DR as a Service. Research into new categories of backup, like Cohesity, Rubrik, or Actifio, where data management, location, and data awareness are critical, can assist the organization’s movement toward cloudy approaches. If you can effectively sell the benefits of your vision, your star should rise in the eyes of management. Sometimes it may feel like you’re dragging the technology behind you, or pushing undesired tech toward your IT management, but fighting the good fight is well worth it. You can orchestrate a cost-free proof of concept on products like these to facilitate the research, and thus prove the benefit to the organization without significant outlay.

 

In this way, you can guide your organization toward the technologies that are most beneficial to them by solving today’s issues while plotting forward-thinking strategies. Some organizations are simply not conducive to this approach, which leads me to my next point.

 

Sometimes, the only way to better your skills, or improve your salary or stature, is outside your current organization. This is a very dynamic field, and movement from vendor to end-user to channel partner has proven a fluid stream. If you find that you’re just not getting satisfaction within your IT org, you really should consider whether moving on is the right approach. It is a drastic step and should be taken with caution, as hopping from gig to gig can be viewed by an employer as a negative. However, there are times when the only way to move upward is to move onward.

I’ve long held the belief that for any task there are correct approaches and incorrect ones. When I was small, I remember being so impressed by the huge variety of parts my father had in his tool chest. Once, I watched him repair a television remote control, one that had shaped and tapered plastic buttons. The replacement from RCA/Zenith, I believe at the time, cost upwards of $150. He opened the broken device, determined that the problem was that the tongue on the existing button had broken, and rather than epoxy the old one back together, he carved and buffed an old bakelite knob into the proper shape, attached it in place of the original one, and ultimately, the final product looked and performed as if it were the original. It didn’t even look different than it had. This, to me, was the ultimate accomplishment. Almost as the Hippocratic Oath dictates, above all, do no harm. It was magic.

 

When all you have is a hammer, everything is a nail, right? But that sure is the wrong approach.

 

Today, my favorite outside-work activity is building and maintaining guitars. When I began doing this, I didn’t own some critical tools. For example, an entire series of needle files and crown files is required for shaping and repairing the frets on a neck; while not a very expensive purchase, no other tools would do for the task at hand. The correct Allen wrench is necessary for adjusting the truss rod in the neck. The ideal soldering iron is critical for properly wiring pickups, potentiometers, and the jack. When sanding, a variety of grades is also necessary, not to mention a selection of paints, brushes, stains, and lacquers.

 

The same can be said of DevOps. Programming languages are designed for specific purposes, and the past few years have seen many advances in what scripting can accomplish. Many use Bash, batch, or PowerShell for their tasks. Others may choose PHP or Ruby on Rails, while still others choose Python as their scripting tool. It is my belief that today no one tool can accommodate every action necessary to perform these tasks. There are nuances to each language, but one thing is certain: many tasks require a collaborative conversation between these tools. To accomplish them, the ideal tools will likely call functions back and forth across scripting languages. And while some bits of glue code are required here and there, that is currently the best way to approach the situation, given that many tools don’t yet exist in packaged form. The DevOps engineer, then, needs to write and maintain these bits of code to help ensure they are accurate each time they are called upon.
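That cross-language conversation often amounts to one interpreter shelling out to another. Here is a minimal, hedged sketch: Python doing the orchestration and a shell one-liner on the other side of the boundary. The command and hostname are invented; any Bash, batch, or PowerShell script could sit behind the same wrapper.

```python
# A sketch of calling across scripting languages: a Python wrapper that runs
# a shell command and consumes its output. Nothing here is specific to any
# real environment; the echoed hostname is a stand-in for real shell work.
import subprocess

def shell(cmd):
    """Run a shell command and return its stdout, raising on failure."""
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

# Python orchestrates; the shell does what shells do best.
hostname = shell("echo devops-box-01")
print(hostname)  # → devops-box-01
```

In practice the wrapped command would be an existing Bash or PowerShell script, which is exactly the “functions called back and forth” pattern described above.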

 

As comments on my previous posting correctly stated, I need to stress that these custom pieces of code must be tested before use, to help ensure that any changes that have taken place within the infrastructure are accounted for each time the scripts are set to task.

 

I recommend that anyone in DevOps get comfortable with these and other languages, and learn which does the job best, so as to become more adept at facing challenges.

 

At some point, there will be automation tools with slick GUI interfaces that address many or even all of these needs. But for the moment, I advise learning, utilizing, and customizing scripting tools. When those packaged tools do become available, the question is: will they surpass the automation you’ve already created yourself? I cannot predict.

DevOps, the process of creating code internally in an effort to streamline the administrative processes within the framework of the sysadmin’s functions, is still emerging within IT departments across the globe. These tasks have traditionally revolved around the mechanical functions under the sysadmin’s purview. However, another whole category of administration is now becoming far more vital to the role: the deployment of applications and their components within the environment.

 

Application development is undergoing a big change. The use of methods like microservices and containers, a relatively new paradigm about which I’ve spoken before, makes the deployment of these applications very different. Now a sysadmin must be more responsive to the needs of the business, getting these bits of code and/or containers into production far more rapidly. As a result, the sysadmin needs tools in place to respond as precisely, actively, and consistently as possible. Application code is now delivered so dynamically that it must be deployed, or rolled back, just as rapidly.

 

When I worked at VMware, I was part of an SDDC group whose main goal was assisting the rollouts of massive deployments (SAP, JDEdwards, etc.) to an anywhere/anytime model. This was DevOps in the extreme. Our expert code jockeys were tasked with writing custom code for each deployment. While that was vital to the goals of many organizations, today the tools exist to do these tasks in a more elegant manner.

 

So, what tools would an administrator require to push out or pull back these applications in a seamless manner? There are tools that will roll out applications from, say, your VMware vCenter to whichever VMware infrastructure you have in your server farms, but there are also ways to leverage that same VMware infrastructure to deploy outbound to AWS, Azure, or hybrid, non-VMware infrastructures. A great example is the wonderful Platform9, which exists as a separate panel within vCenter and allows the admin to push out full deployments to wherever the management platform is deployed.

 

There are other tools, like Mesos, which help orchestrate Docker-style container deployments. This is the hope of administrators for Docker administration.

 

But the microservices piece of the puzzle has yet to be solved, so the sysadmin is currently under the gun for the same type of automation toolset. For today, and for the foreseeable future, DevOps holds the key. We need not only to deploy these parts and pieces to the appropriate places, but also to ensure that they’re pushed to the correct location, that they’re tracked, and that they can be pulled back should that be required. So, what key components are critical? Version tracking, lifecycle, permissions, and specific locations must all be maintained.
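Those four pieces of bookkeeping (version, lifecycle, permissions, location) can be sketched as a simple deployment record. This is a toy registry, invented for illustration; real tooling would persist these records and enforce the permissions rather than merely store them.

```python
# A sketch of tracking what DevOps must maintain for each deployed piece:
# version, lifecycle state, permissions (owners), and location. All names
# and fields here are illustrative, not from any real product.
from dataclasses import dataclass, field

@dataclass
class Deployment:
    name: str
    version: str
    location: str                               # e.g. "on-prem", "aws", "azure"
    owners: list = field(default_factory=list)  # who may push or pull it back
    state: str = "deployed"                     # lifecycle: deployed / rolled-back

registry = {}

def deploy(d):
    registry[(d.name, d.version)] = d

def roll_back(name, version):
    registry[(name, version)].state = "rolled-back"

deploy(Deployment("billing-svc", "1.4.2", "aws", owners=["appteam"]))
roll_back("billing-svc", "1.4.2")
print(registry[("billing-svc", "1.4.2")].state)  # → rolled-back
```

Keyed by name and version, the registry answers the questions above: what is out there, where, in what state, and who owns it.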

I imagine that what we’ll be seeing in the near future are standardized toolsets that leverage orchestration elements for newer application paradigms. For the moment, we will need to rely on our own code to assist us in the management of the new ways in which applications are being built.

When a SysAdmin implements a DevOps approach to an environment, there are many things to consider, not least of which is developing a strong line of communication between the developer and the SysAdmin. Occasionally, and more frequently as time goes on, I find that these people are one and the same, as coders and scripters spread their talents into administrators’ roles. But these are discrete skills, and the value of the scripter will augment the skills and value of the system administrator.

 

Process is a huge component of the SysAdmin's job: the ordering process, billing process, build process, delivery process. How many of those tasks could be automated? How easily could these functions be translated to other tasks that are conscripted, and could therefore be scripted?

 

The wonderful book The Phoenix Project is a great example of this. It outlines how an IT manager leverages the power of organization and develops a process that helps facilitate the recovery of a financially strapped industry leader.

In the book, obvious communications issues initially bred the perception that IT was falling behind in its requisite tasks. Systems, trouble tickets, and other tasks were not being delivered to the requestor in any sort of accurate time frame. As the main character analyzed both the perceptions and realities of these things falling short, he began questioning various stakeholders in the company. People who were directly affected, and those outside his realm, were solicited for advice. He also evaluated processes within the various groups of the organization, found out what worked and why, then used his successes and failures to help him to establish assets worth leveraging. In some cases, those assets were skills, individuals, and personalities that helped him streamline his functions within the organization. Gradually, he was able to skillfully handle the problems that arose, and fix problems that had been established long before.

 

A number of key takeaways from the character's experiences illustrate what I was trying to outline: for example, the willingness to question yourself, your allies, and your adversaries in a given situation, with the goal of making things better. Sometimes being critical in the workplace, which includes scrutinizing your successes and failures, may be the best way to fix issues and achieve overall success.

 

What does your process look like for any given set of tasks? How logical is it? Can it be eased, smoothed, or made faster? Measuring the hours certain tasks take, and then critically evaluating them, can improve both metrics and processes. As I see it, the scriptability of some of these tasks allows us to shave off precious hours, improve customer perception, and regain credibility. Being willing to embrace this new paradigm offers countless benefits.

 

It all begins with communication. Don’t be afraid to allow yourself to be seen as not knowing the answer, but always be willing to seek one. We can learn from anyone, and by including others in our evaluation, we can also be perceived as humble, collaborative, and productive.

In part one, I outlined the differences between a traditional architecture and a development-oriented one. We’ve seen that these foundational differences call for specific design approaches that can significantly impact the goals of a team, and the sysadmin’s approach to supporting them. What we haven’t addressed are the prerequisites and design considerations necessary to facilitate that.

 

Let me note that DevOps as a concept began with operations teams creating tools in newer scripting and programming languages, all centered on helping the facilities and sysadmin groups support the needs of the new IT. Essentially, the question was: “What kinds of applications do we in the infrastructure teams need to better support IT as a whole?” Many years ago, I had the great pleasure of working on a team with the great Nick Weaver (@LynxBat). Nick had an incredible facility for imagining tools that would do just this. For example, if you had a requirement to replicate an object from one data center to another, which could be a simple push/get routine, Nick would tell you that you were doing it wrong. He’d create a little application customized to accomplish the task, verify completion, and probably give you a piece of animation while it was taking place to make you feel like you were actually doing something. Meanwhile, it was Nick’s application, his scripting, doing all the heavy lifting.
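The replicate-and-verify pattern described above can be sketched in miniature. This is purely illustrative: two dicts stand in for the two data centers, and a checksum stands in for whatever completion verification a real tool would do over the wire.

```python
# A toy version of the replicate-then-verify tool: push an object from one
# "data center" to another, then confirm the copy matches before declaring
# success. The sites and object are stand-ins, not a real transfer.
import hashlib

def replicate(src, dst, key):
    """Copy src[key] to dst and verify the copy by checksum."""
    dst[key] = src[key]
    return (hashlib.sha256(src[key]).hexdigest()
            == hashlib.sha256(dst[key]).hexdigest())

dc_east = {"backup.img": b"important bytes"}
dc_west = {}

print(replicate(dc_east, dc_west, "backup.img"))  # → True
```

The verification step is the whole point: a bare push/get hopes the copy landed; Nick’s style of tool proves it did.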

 

I’ll never appreciate the role of the developer more than I did watching the elegance with which Nick achieved these things. I’ve never pushed myself to develop code, which is probably a major shortfall in my infrastructure career. But boy howdy, do I appreciate well-written code. As a result, I’ve always leaned on peers with coding skills to create the applications I’ve needed to do my job better.

 

When a sysadmin performs all of their tasks manually, no matter how strong their attention to detail, mistakes get made. But when the tools are tested and validated, running them should produce the same result across the board. If some new piece of infrastructure must be included, then of course the code must know about it, but the concept remains.

 

So in the modern cloud-based, or even cloud-native, world, we need these tools to keep ourselves on top of all the tasks that truly need to be accomplished. The more we can automate, the more efficient we can be. This means everything from deploying new virtual machines, provisioning storage, and loading applications, to orchestrating application deployments onto cloud-based infrastructures (hybrid, on-premises, or even public cloud). In many cases, these orchestration frameworks simply didn’t exist, so being able to create the tools that ensure successful deployments has become mission-critical.
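The reason tested tools beat careful hands is repeatability, and a common way to get it is to write each provisioning step in an "ensure" style that is safe to run any number of times. The inventory and resource names below are invented for illustration; a real tool would talk to vCenter, a storage array, or a cloud API instead of a dict.

```python
# A minimal sketch of idempotent, "ensure"-style automation: every run of
# the tool converges on the same state, so repeated runs change nothing.
# The inventory dict stands in for a real infrastructure API.

def ensure_vm(inventory, name, cpus):
    """Create or update the VM; report 'ok' if it already matches."""
    if inventory.get(name) == cpus:
        return "ok"
    inventory[name] = cpus
    return "created"

inventory = {}
print(ensure_vm(inventory, "web-01", 4))  # → created
print(ensure_vm(inventory, "web-01", 4))  # → ok  (second run changes nothing)
```

That second, do-nothing run is what manual work can never guarantee, and it is why validated tooling removes the mistakes that creep into hand-driven tasks.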

 

To be sure, entirely new pieces of software are being created that solve these problems as they arise, and many of them are simple to use. But whether you buy something off the shelf or write it yourself, the goal is the same: help the poor sysadmin do the job they want to do as seamlessly and efficiently as possible.

In traditional IT, the SysAdmin’s role has been to support the infrastructure in its current dynamic. Our measure of success is being able to consistently stand up architecture in support of new and growing applications, build test/dev environments, and deliver them quickly, consistently, and reliably enough that these applications work as designed. Most SysAdmins I’ve interacted with have performed their tasks in silos: network does its job, servers do theirs, and of course storage has its own unique responsibility. In many cases, the silos worked at cross purposes, or at minimum with differing agendas, and the rhythms of each group often made it impossible to deliver the agility of infrastructure the customer required. Yet in many cases our systems did work as we’d hoped, and all the gears meshed properly to ensure our organization’s goals were accomplished.

 

In an agile, cloud-native, DevOps world, none of this variability can be tolerated. We need to provide our developer community with infrastructure that has the same agility they deliver in their applications.

 

How are these applications different from the more traditional monolithic ones we've been serving for so long? The key is the concept of containers and microservices. I've spoken of these before, but in short, a container environment packages either the entire application stack or discrete portions of it in a way that isn't necessarily reliant on the operating system or platform beneath it. The x86 layer is already in place, or can be delivered generically on demand. The application, or portions of it, can be deployed as the developer creates it and, alternately, removed just as simply. Because there is so much less reliance on the virtually deployed physical infrastructure, the compute layer can be located pretty much anywhere: your prod or dev environment on premises, your hybrid cloud provider, or even a public cloud infrastructure like Amazon's. As long as security and segmentation have been configured properly, the location of the functional compute layer is largely irrelevant.

 

A container-based environment, which isn't exclusively different from a microservices-based one, delivers an application as an entity; but rather than relying on a physical or virtual platform and its unique properties, it can sit on any presented compute layer. These technologies, such as Kubernetes, Docker, Photon, Mesosphere, and the like, are maturing, with orchestration layers and delivery methods far friendlier to the administrator than they've been in the past.

 

In these cases, however, the application platform being delivered is much different from traditional large corporate apps. An old Lotus Notes implementation, for example, required many layers of infrastructure, and applications of that type simply don't lend themselves to a modern architecture. They're not "cloud native." This is not to disparage how relevant Notes became to many organizations. But the mobility of applications and data locales that a modern architecture provides simply doesn't fit the kind of infrastructure that a monolithic SAP or JD Edwards type of architecture required. Of course, these particular applications are solving the cloud issues in their own ways, and remain as vital to their organizations as they've ever been.

 

In the following four blog postings, I'll address the architectural, design, and implementation issues facing the sysadmin within the new paradigm that cloud native brings to the table. I hope to address the questions you may have, and hope for as much interaction, clarification, and challenging conversation as I can get.

Over the past five postings, I've talked about some trends we've seen happening and gaining traction within the cloud space. I've spoken of:

 

  • Virtualization – Established trends toward virtualization, particularly VMware, have been challenged by a variety of newcomers whose market share continues to grow; most notable here is OpenStack. VMware has answered the threat of Azure, AWS, and OpenStack by embracing them with a series of APIs meant to incorporate the on-prem virtual data center with those peers in the hybrid space.

 

  • Storage – In traditional storage, the trend has been toward faster: faster Ethernet or Fibre Channel as interconnect, and of course solid state becoming the norm in any reasonably high-IO environment. But the biggest sea change is object-based storage. Object really is a different approach, with replication, erasure coding, and redundancy built in.

 

  • Software-Defined Networking – SDN is eating quite drastically into the data center space these days. The complexities of routing tables and firewall rules are being addressed within the virtual data center by tools like ACI (Cisco) and NSX (VMware). While port reduction isn't quite the play here, the ability to segment a network via these rules far surpasses any physical switch's capacity. In addition, these rules can be rolled out effectively and accurately, with easy rollback. I find these two pieces truly compelling for maintaining and enhancing the elegance of the network while reducing the complexity laid onto the physical switch environment.

 

  • Containers – In the new world of DevOps, containers, a way to disaggregate the application from the operating system, have proven yet another compelling way into the future. DevOps calls for the ability to update parts and pieces of an application, while containers allow you to scale the application, update it, and deploy it wherever and whenever you want.

 

  • Serverless and MicroServices – These also fall into the DevOps equation: small components, put together as building blocks, make up the entire application and keep it dynamic and modifiable. The "serverless" piece is somewhat a misnomer (any workload must reside on some compute layer), but these workloads are dynamic and movable, relying less on the hypervisor, or on where the underlying architecture actually resides.
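The "building blocks" idea behind microservices can be sketched in a few lines: small, single-purpose functions composed into a pipeline, where any one block can be replaced without touching the others. This is a toy illustration, not any particular framework; the order fields and step names are invented:

```python
# Toy sketch: an "application" assembled from small, independently
# replaceable functions, in the spirit of microservices composition.

def validate(order):
    if order.get("qty", 0) <= 0:
        raise ValueError("quantity must be positive")
    return order

def price(order):
    order["total"] = order["qty"] * order["unit_price"]
    return order

def compose(*steps):
    def pipeline(payload):
        for step in steps:          # each step can be swapped out alone
            payload = step(payload)
        return payload
    return pipeline

checkout = compose(validate, price)
print(checkout({"qty": 3, "unit_price": 5.0}))  # total: 15.0
```

Updating the pricing logic means redeploying only `price`, not the whole application — which is the dynamism the building-block model is after.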

 

So… What's next in data center infrastructure? We've seen tools that let the data center administrator easily deploy workloads into any destination; gateways that bridge the gap from traditional storage to object-based; orchestration tools that allow rapid, consistent, and highly managed deployment of containers in the enterprise/cloud space; and a truly cross-platform approach to serverless/microservice architecture that eases the adoption of a newer paradigm in the data center.
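As a back-of-the-envelope illustration of why object stores lean on erasure coding rather than simple replication, compare the raw-capacity overhead of each. The 10+4 scheme below is illustrative only; real systems choose their own data/parity split:

```python
# Rough capacity math: 3-way replication vs. a 10+4 erasure-coded layout
# (10 data fragments + 4 parity fragments; numbers are illustrative).

def replication_overhead(copies):
    # N full copies -> N raw bytes stored per usable byte
    return float(copies)

def erasure_overhead(data_frags, parity_frags):
    # (data + parity) fragments stored per `data_frags` usable fragments
    return (data_frags + parity_frags) / data_frags

print(replication_overhead(3))   # 3.0x raw capacity
print(erasure_overhead(10, 4))   # 1.4x raw capacity
```

Both layouts survive multiple failures, but the erasure-coded one does it at roughly half the raw capacity — one reason object storage changes the economics.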

 

What we haven't seen is a truly revolutionary unifier. When VMware became the juggernaut it did, the virtualization platform became the tool that tied everything together. Regardless of your storage, compute (albeit x86 in particular), and network infrastructure, with VMware as a platform you had one reliable, practically bulletproof tool with which to deploy new workloads, manage existing platforms, and scale up or down as required, all through a simple management interface. With all these new technologies, will we have that glue? Will we have the ability to build entire architectures and manage them easily? Will there be a level of fault tolerance, an equivalent to DRS or Storage DRS? As we seek the new brass ring and poise ourselves on the platforms of tomorrow, how will we approach these questions?

 

I’d love to hear your thoughts.

For the last couple of years, the single hottest emerging trend in technology, the biggest buzzword, and a key criterion in designing both hardware and application bases has been the concept of containers.

 

At this point, we have approaches from Docker, Google, Kubernetes (k8s), Mesos, and notably Project Photon from VMware. While there are differentiations on all fronts, the concept is quite similar: the container, regardless of flavor, packages the complete application, or its component parts, in a migratable form. These containers run as workloads in the cloud, and allow you to take that package and run it practically anywhere.

 

This is in direct contrast to virtual machines. While VMs can in some ways accomplish the same tasks, they haven't got the portability to reside as-is on any platform. A VMware-based virtual machine can only reside on a VMware host; likewise, Hyper-V, KVM, and OpenStack-based VMs are limited to their native platforms. Processes for migrating these VMs to alternate platforms do exist, but the procedures are somewhat intensive. Ideally, you'd simply place your workload VMs in their target environment and keep them there.

 

That model is still necessary for many older types of application workloads. Many more modern environments, however, pursue a more granular and modular, microservices-style approach to application development. They allow these container-based functions to be packaged and repackaged, and their deployments to be relocated essentially at will.

 

In a truly cloud-based environment, functionality and orchestration become the issue. As adoption grows, managing many containers becomes clumsy, or even overwhelming. The tools from Kubernetes (originally a Google project, later donated to the Cloud Native Computing Foundation) make managing these "pods" (its basic scheduling units) far less of a difficulty. The tools are regularly expanded and their functionality grows as part of an open-source codebase: the community can access the primitives via tools like GitHub and add to, optimize, and enhance them, so these added functionalities are constantly being updated.
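Part of what makes pods manageable is that they're declared, not hand-built. A minimal pod manifest, shown here as the Python dict you'd serialize to YAML, is only a few lines; the name, image, and port below are placeholders, not anything from a real cluster:

```python
# Minimal Kubernetes Pod manifest built as a plain dict (placeholder
# names/image); in practice you'd dump this to YAML for `kubectl apply`.

def pod_manifest(name, image, port):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "containers": [
                {"name": name,
                 "image": image,
                 "ports": [{"containerPort": port}]}
            ]
        },
    }

manifest = pod_manifest("web", "example/web:1.0", 8080)
print(manifest["kind"])  # Pod
```

Because the whole workload is described declaratively, the orchestrator, not the admin, does the placing, restarting, and scaling.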

 

Open source is a crucial piece of the equation. If your organization is not pursuing the agile approach, the "crowdsourced" model of IT, then this concept is really not for you (though in my opinion that stance is closed-minded). But if you have begun delivering your code in parts and pieces, you owe it to yourself to pursue a container approach. Transitions present their own challenges, but the cool thing is that these new approaches can be adopted gradually, the learning curve can be tackled, there is no real outlay for the software, and from a business perspective the potential benefits on the journey to cloud, cloud native, and agile IT are very real.

 

Do your research. This isn't necessarily the correct approach for every IT organization, but it may be for yours. Promote the benefits, get yourself on https://github.com, and begin learning how your organization can change its methods to embrace this approach to IT management. You will not be sorry you did.

 

Some considerations must be addressed before deciding to move forward:

  • Storage – Does your storage environment support containers? In the storage world, object-based storage is truly important.
  • Application – Is your app functional in a microservices/container-based form? Many legacy applications are much too monolithic to be supportable; many new DevOps-style applications are far more suitable.

I'm sure there are far more considerations.

We have long been in a space where virtualization plays a huge role within the data center. Certainly the concept existed in the mainframe world, with LPARs, for quite a bit longer than it has on x86 commodity architecture, but it wasn't until VMware, under Diane Greene, Mendel Rosenblum, Scott Devine, and Edouard Bugnion, brought the concept to the Intel x86 world that the explosion of capacity in the data center made it mainstream.

 

I can remember doing a POC (proof of concept) back in 2004 for a beverage company in which we virtualized a file server to demonstrate the capabilities of vMotion, after the customer claimed it simply couldn't work. We scripted a vMotion of this file server to occur every 30 minutes. A month later, when we returned and the customer re-emphasized their trepidation about vMotion, we showed them the script logs displaying the roughly 1,500 vMotions that had taken place unnoticed over the previous month; that's when they realized the value of the product. So much has been accomplished in the 12 years since. Virtualization has become de facto. So mainstream, in fact, that today the question is rarely "Should we virtualize?" (virtualizing first is now standard operating procedure) but rather "Should we move on from VMware as a virtualization platform to, say, Hyper-V, Amazon, Azure, or possibly private/public OpenStack?"

 

I'm not going to enter into that religious debate. I can certainly see places where all of these are valid questions. As I've stated before, you should adequately evaluate all options before making a global decision about the platform on which you rest the bulk of your data center services. I will say, however, that these alternative choices have been making huge strides toward parity on many fronts of the virtualization paradigm. Some gaps do exist, and possibly always will. I won't enumerate those functional distinctions, other than to urge the customer to make educated choices. But if your decision to go one way or the other is based on money alone, you're likely not looking at the full picture. What costs less initially may come with unanticipated costs far beyond those that are immediately obvious. Caveat emptor, right?

 

Needless to say, virtualization is here. But what will happen, where will the new hot trends come from, and how is it changing? I have no crystal ball, nor are my tea leaves particularly legible or dependable for telling the future. What I can say, as I've said before, is that the decisions made today have implications for the future. Should you choose a platform that doesn't embrace the goals of the future, you may find yourself requiring a forklift upgrade not too far down the road.

 

It is clear that the API is the key to integration, with OpenStack and everything else. If you choose a closed platform, your lock-in will be substantial. If you don't evaluate pieces like object storage, APIs, container integration, and security roadmaps, you'll be making choices in a vacuum. I truly don't recommend it.

 

I cannot stress enough how existing staffing requirements, including training, enter into the budgetary decision-making process. Please understand, for example, that OpenStack should not be chosen on cost alone; training and support must be part of the decision.

Among the key trending technologies moving forward in the enterprise and data center space is the virtualization of the network layer. Seems a little ephemeral in concept, right? So I'll explain my experience with it, its benefits, and its limitations.

 

First, what is it?

NFV (Network Functions Virtualization) is intended to ease the requirements placed on physical switch layers. Essentially, the software for the switch environment sits on servers rather than on the switches themselves. Historically, when implementing a series of physical switches, an engineer had to use the language of the switch's operating system to create an environment in which traffic goes where it is supposed to, and not where it shouldn't. VLANs, routing tables, port groups, and the like are all part of these command sets. These operating systems have historically been command-line driven, arcane, and quite often difficult to reproduce. A big issue, without disparaging the skills of the network engineer, is that the potential for human error can be quite high. It's also quite time-consuming. But when it's right, it simply works.

 

Now take that concept, embed the task into software that sits on standardized servers, and roll it out to the entire environment in a far more rapid, standardized, and consistent manner. In addition to that efficiency and consistency, NFV can also reduce the company's reliance on physical switch ports, which lowers the cost of switch gear, of heating and cooling, and of data center space.

 

In addition to the ease of rolling out new sets of rules consistently across the entire environment, there comes a new degree of network security: microsegmentation. Microsegmentation divides the network into fine-grained security segments, down to the individual workload, so that policy can be enforced on the traffic between any two nodes rather than only at the perimeter. It is used mainly to enhance the efficiency and security of the network.

 

So microsegmentation, probably the most important function of NFV, doesn't directly save the company money, but it does allow far more controlled traffic-flow management. I happen to think this security goal, coupled with the ability to roll these rules out globally and identically with a few mouse clicks, makes for a very compelling product stream.
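In spirit, a microsegmentation policy is just an ordered set of allow rules with a default deny, applied identically to every workload. The toy evaluator below illustrates that model only; the tier names and rules are invented, and real products such as NSX and ACI have their own policy languages:

```python
# Toy default-deny policy evaluator in the spirit of microsegmentation.
# Tier names, ports, and rules are invented for illustration.

RULES = [
    {"src": "web", "dst": "app", "port": 8443, "action": "allow"},
    {"src": "app", "dst": "db",  "port": 5432, "action": "allow"},
]

def allowed(src, dst, port, rules=RULES):
    for r in rules:                      # first match wins
        if (r["src"], r["dst"], r["port"]) == (src, dst, port):
            return r["action"] == "allow"
    return False                         # default deny: no rule, no traffic

print(allowed("web", "app", 8443))  # True
print(allowed("web", "db", 5432))   # False -- web may not reach db directly
```

The point is the rollout model: change `RULES` once, push everywhere, and every segment enforces the same policy, with rollback being just as mechanical.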

 

One of the big barriers to entry in this category, at the moment, is the cost of the products, along with the differing approach each product stream takes. Cisco's ACI, for example, attempts to address similar security and consistency goals but has a very different modus operandi than NSX from VMware. Beyond those differentiations, one open issue is how the theoretical merging of both ACI and NSX within the same environment would work. As I understand it, the issues could be quite significant; a translation effort, or an API to bridge the gap, would be a very good idea.

 

Meanwhile, the ability to isolate traffic, consistently and across a huge environment, could prove quite valuable to enterprises, particularly where compliance, security, and scale are issues. I think of multi-tenant data centers, such as service providers, where the data housed in the data center must be controlled, the networks must be agile, and changes must take place in an instant; these are absolutely key use cases for this category of product. But I also think healthcare, higher education, government, and other markets will see big adoption of these technologies.

Today's posting steps out of my core skills a bit. I've always been an infrastructure guy. I build systems. I help my customers with storage, infrastructure, networking, and all sorts of solutions to solve problems in the data center, cloud, etc. I love this "puzzle" world. However, there's an entire category of IT that I've assisted for decades without ever really being part of it. Developers, as far as I'm concerned, are magicians. How do they start with an idea and, simply by writing code in some framework or language, make it real? I simply don't know. What I do know is that the applications they're building are undergoing systemic change too. The difference between where we've been and where we're going comes down to the need for speed, responsiveness, and agility in modifying the code.

 

In the traditional world of monolithic apps, these wizards added features through adjunct applications, or had to learn the code on which some off-the-shelf software was written in order to make any changes. Moving forward, the microservices model takes over: code fragments, each purpose-built either to add functionality or to streamline an existing function.

 

Offerings like Amazon's Lambda, Iron.io, and Microsoft Azure's equivalents (with Google soon to follow) have upped the ante. I feel the term "serverless" is an inaccuracy, as workloads will always need somewhere to run; there must be some form of compute element. But by abstracting that even further from the hardware (albeit virtual hardware), we rely less on where, and on what, these workloads run. Particularly when you're a developer, worrying about infrastructure is something you simply do not want to do.

 

Let's start with a conceptual definition. According to Wikipedia, serverless computing, "also known as Function as a Service (FaaS), is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour." Really, this is not as jargon-y as it sounds. You, the developer, rely on an amount of processor, memory, and storage, but you have no reliance on persistence or on a particular location. I feel it's designed to be a temporary sandbox for your code.
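In practice, that definition boils down to two things: a stateless handler function, and billing by resources consumed per request. The sketch below mimics an AWS Lambda-style handler signature; the event fields are invented, and the per-GB-second price is only a representative figure, not a quote:

```python
# Lambda-style stateless handler: no persistence, no knowledge of which
# machine it runs on. The event fields here are invented for illustration.

def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Billing by an abstract measure of resources per request, not per VM-hour.
# The price below is a representative per-GB-second figure, not a quote.
def request_cost(duration_ms, mem_gb, price_per_gb_s=0.0000166667):
    return (duration_ms / 1000) * mem_gb * price_per_gb_s

print(handler({"name": "dev"})["body"])  # Hello, dev!
print(request_cost(100, 0.128))          # a tiny fraction of a cent
```

The handler can run anywhere the provider chooses, which is exactly why the developer stops caring about the infrastructure underneath it.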

 

In many ways, it's an alternative to traditional IT infrastructure modes of requesting virtual machines from your team, waiting for them, and then, if you've made some code errors, waiting again for those machines to be re-imaged. Instead, you spin up a temporary amount of horsepower, load it with a copy of your code, run it, then destroy it.

 

In the DevOps world, this kind of ephemeral environment is very beneficial. Functionally, building code in a DevOps world means your application is made up of many small pieces of code rather than a single monolithic application; we've adopted the term "microservices" for these fragments. Agility and flexibility in these small pieces of code are critical; in fact, the whole concept of DevOps is about agility. In DevOps, as opposed to traditional code development, rollouts of code changes, updates, patching, and the like can happen with a level of immediacy, whereas full application rollouts require massive planning. Here, a rollout can take place almost instantaneously, as these pieces tend to be tiny; code updates can be applied or rolled back in minutes.
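Because each piece is tiny, a rollout or rollback amounts to little more than moving a pointer to which version is live. A minimal sketch, with hypothetical service and version names:

```python
# Minimal sketch of instant rollout/rollback for one microservice:
# deployments are recorded in order, and "live" is just the newest entry,
# so rolling back means dropping it to expose the previous version.

class Service:
    def __init__(self, name):
        self.name = name
        self.history = []            # ordered list of deployed versions

    def deploy(self, version):
        self.history.append(version)
        return version               # new version is live immediately

    def live(self):
        return self.history[-1] if self.history else None

    def rollback(self):
        if len(self.history) > 1:
            self.history.pop()       # previous version becomes live again
        return self.live()

svc = Service("cart")
svc.deploy("v1.0")
svc.deploy("v1.1")
print(svc.live())      # v1.1
print(svc.rollback())  # v1.0
```

Contrast that with a monolithic rollout, where the equivalent "pointer move" is a change window and a planning meeting.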

 

While it is completely certain that DevOps, and the agility it portends, will change the dynamic in information technology, particularly in companies that rely on home-grown code, what is truly uncertain is whether the concepts of serverless computing will ever grow beyond the test/dev world into production environments. Perhaps due to my own lack of coding skills, I find it difficult to envision deploying workloads in such a piecemeal approach; putting the pieces together inside a single workload feels far more approachable to me.
