In part one of this series, I tried to put the virtualization market in context—its history, how it’s changed the way we deploy infrastructure, and how it’s developed into the de facto standard for infrastructure deployment.

 

Over the last 20 years, virtualization has had a very positive impact in enabling us to deliver technology in a smarter, quicker way, bringing more flexibility to our infrastructure and giving us the ability to meet the demands placed upon it much more effectively.

 

But virtualization, or maybe more specifically, server virtualization, is now a very traditional technology designed to operate within our data centers. While it’s been hugely positive, it’s created its own set of problems within our IT infrastructures.

 

Server Sprawl

Perhaps the most common problem we witness is server sprawl. The original success of server virtualization came from allowing IT teams to reduce the waste caused by over-specified, underused, and expensive servers filling racks in their data centers.

 

This quickly evolved as we recognized how virtualization allowed us to deploy new servers much more quickly and efficiently. Individual servers for specific applications no longer needed new hardware to be bought, delivered, racked, and stacked. This made it an easy decision to build them for every application. However, this simplicity has created an issue much like the one we’d tried to solve in the first place. The ease of creating virtual machines meant that instead of dealing with tens of physical servers, we had hundreds of virtual ones.

 

Management

The impact of virtual server sprawl introduced a wide range of challenges, from the simple practicality of managing so many servers, to securing them, networking them, backing them up, building disaster recovery plans, and even understanding why some of those servers exist at all, a prerequisite for controlling VM sprawl. Having an infrastructure at such a scale also becomes cumbersome, reducing the benefits of flexibility and agility that made virtualization such a powerful and attractive technology shift in the first place.

 

Complexity

With size comes complexity. Multiple servers, large storage arrays, and complex networking design make implementation more difficult, as, of course, do the management and protection of more complex environments.

 

The complexity highlighted in the physical infrastructure environment is also mirrored in the application infrastructure it supports. Increasingly, enterprises struggle to fully understand the complexity of their application stack. Dependencies are unclear, and there’s a real concern that changes to one part of the infrastructure could have unknown impacts across the rest. Complex environments carry an increased risk of failures and security breaches, cost more to run, and are slower and more difficult to change, innovate with, and adapt to the demands placed upon them.

 

The Evolution of Virtualization

With all of this said, the innovation that gave us virtualization technology in the first place hasn’t stopped. The challenges growing virtual environments have created have been recognized. Significant advances in virtualization management now allow us better control across the entire virtual stack. Innovations like hyperconverged infrastructure (HCI) are making increasingly integrated and software-driven hardware, networking, and storage elements much more readily available to our enterprises.

 

In the remaining parts of this series, we’re going to look at the evolution of virtualization technologies and how continual innovation has taken virtualization far beyond the traditional world of server virtualization. It’s becoming more software-driven, with increased integration with the cloud, improved management, better delivery at scale, and the adoption of innovative deployment methods, all to ensure virtualization continues to deliver the flexibility, agility, innovation, and speed of delivery that have made it such a core component of today’s enterprise IT environment.

I’m at VMworld this week in San Francisco and enjoying the cooler weather. After three straight years in Las Vegas, it’s a nice change. The truth is, this event could be held anywhere, because the #vCommunity is filled with good people who are fun to be around.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Major breach found in biometrics system used by banks, UK police and defence firms

"As a precaution, please reset your fingerprints and face, thank you."

 

California law targets biohacking and DIY CRISPR kits

I’m not sure whether we need laws against this; after all, we don’t have laws against do-it-yourself dentistry. But we certainly could use some education about biohacking and its effects on your overall health.

 

Apple warns its credit card doesn't like leather or denim or other cards

Any other company would be laughed out of existence. With Apple, we just laugh, and then pay thousands of dollars for items we don't need.

 

VMware is bringing VMs and containers together, taking advantage of Heptio acquisition

One of the many announcements this week, as VMware is looking to help customers manage the sprawl created by containers and Kubernetes.

 

The surprisingly great idea in Bernie Sanders’s Green New Deal: electric school buses

This is a good idea, which is why it won't happen.

 

Hackers are actively trying to steal passwords from two widely used VPNs

Please, please, please patch your systems. Stop making excuses. You can provide security AND meet your business SLAs. Is it hard? Yeah. Impossible? Nope.

 

Company that was laughed offstage sues Black Hat

Well, now I'm laughing, too. You can't expect to get on stage in front of a deeply technical audience, use a bunch of made-up words and marketing-speak, and be taken seriously.

 

This is my fifth consecutive VMworld, and back in the same city as the first. Lots of memories for me and my journey with the #vCommunity.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s the second part of the article from my colleague Jim Hansen about two key assets the government can use to help with cybersecurity.

 

Technology: Providing the Necessary Visibility

 

Cybersecurity personnel can’t succeed without the proper tools, but many of them don’t have the technology necessary to protect their agencies. According to the Federal Cybersecurity Risk Determination Report and Action Plan, 38% of federal cyberincidents didn’t have an identified attack vector. Per the report, IT professionals simply don’t have a good grasp of where attacks are originating, who or what is causing them, or how to track them down.

 

Part of this is due to the heavily siloed nature of federal agencies. The DoD, for example, has many different arms working with their own unique networks. It can be nearly impossible for an Air Force administrator to see what’s going on with the Army’s network, even though an attack on one could affect the entire DoD infrastructure. Things become even more complicated when dealing with government contractors, some of whom have been behind several large security breaches, including the infamous Office of Personnel Management security breach in 2014.

 

Some of it is due to the increasing complexity of federal IT networks. Some networks are hosted in the public cloud, while others are on-premises. Still others are hybrid, with some critical applications housed on-site while others are kept in the cloud.

 

Regardless of the situation, agency administrators must have complete visibility into the entirety of the network for which they are responsible. Technology can provide this visibility, but not the garden-variety network monitoring solutions agencies used 10 years ago. The complexity of today’s IT infrastructures requires a form of “network monitoring on steroids”: tools that can effectively police any type of network—distributed, on-premises, cloud, or hybrid—and provide unfettered visibility, alerts, and forensic data to help administrators quickly trace an event back to its root cause.

 

Administrators must have a means of tracing activity across boundaries, so they can have just as much insight into what’s happening at their cloud provider as they do in their own data center. Further, they must be able to monitor their data as it passes between these boundaries to ensure information is protected both at rest and in-flight. This is especially critical for those operating hybrid cloud environments.

 

None of this should be considered a short-term fix. It can take the government a while to get things going—after all, many agencies are still trying to conform to the National Institute of Standards and Technology’s password guidelines. That’s OK, though; the fight for good cybersecurity will be ongoing, and it will be incumbent upon agencies to evolve their strategies and tactics to meet these threats over time. The battle begins with people, continues with technology, and will ultimately end with the government being able to more effectively protect its networks.

 

Find the full article on Fifth Domain.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

If you’re in tune with the various tech communities, you’ve probably noticed a big push for professional development in the technical ranks. I love the recognition that IT pros need more than just technical skills to succeed, but most of the outreach has been about improving one’s own stature in the tech hierarchy. There’s surprisingly little focus on those who are happy in their place in the world and just want to make the world a little better. What the heck am I talking about? It’s not just us as individuals who need to grow and improve; our IT organizations need to evolve as well. Perhaps we need an example scenario to help make my point…

 

“Once upon a time, I was part of an amazing team made up of talented people, with a fantastic manager. Our camaraderie was through the roof. We had all the right people on the bus. Ideas were plentiful. We also couldn’t get anything of consequence done in the organization...

 

“It didn’t make sense to me at the time, as this group of rock stars should have been able to get anything done. In the end, there were a host of contributing issues, but one of the biggest was our own making. The long and short of it is, we didn’t play nice with others. We were the stereotypical IT geeks, and our standoffish behavior isolated us within the org. It’s not a fun place to be, tends to be self-replicating, didn’t help move the org forward, and ultimately was detrimental to us as individuals.”

 

Even in 2019, it doesn’t seem to be an unfamiliar story for IT practitioners. Today I’d like to take a few minutes exploring some thoughts on how we can fight the norms and evolve as an IT department to become an even more integral part of the business.

 

How to Improve IT Departments – Get Out of IT

 

Quick! Tell me, how does your organization make money? Seriously, ask yourself this question. Unless you work for a non-profit, this is the ultimate goal for your business: to make money. If the question stumped you at all, I’d like to ask you one more. If you don’t know the ultimate goal for your business, how can you truly understand your place within the business and how best to work towards its success? You can’t. No matter the size of the machine, you have to understand what the pieces within it do and how they engage with each other to move the machine forward. IT is just one piece within your business, and you need to go learn about the other pieces, their needs, and friction points.

 

The only real answer is to broaden your horizons and get out of IT.

 

When I say, “Get out of IT,” I’m being literal. Leave. Get out. Go talk to people. Depending on where you are, the mechanics of getting out of IT are going to take different forms. In smaller organizations, the HR department may be your best resource for figuring out the key players to talk to, whereas larger organizations may have formal shadowing, apprenticeships, or even job rotation programs. If that’s all too involved for you, spending a little more time on the intranet reading up on other teams will still pay dividends in developing a better understanding of your stakeholders.

 

While You’re Out and About – How IT Pros Can Learn From Other Departments

 

Listen. Show empathy. Is it that simple? Yeah, it really is. The act of getting out of IT isn’t just about learning and gaining information; it’s also about building relationships. In fact, building inter-department relationships is the most important takeaway from this exercise.

 

One of the easiest and most effective ways to build relationships is to listen to the other party. I’m not talking about just waiting for a point to interject or to solve the problems in your head while they’re talking. Don’t practice selective or distracted listening, but be present, focus on the person, and try to hear what they’re saying. It’s not easy to do. After all, we live in a distracted society and many of us make our livelihoods by trying to solve problems as efficiently as possible. For me, I find active listening can help significantly with overcoming my inattentiveness.

 

Before moving on, I want to point out one specific word: empathy. I chose it specifically, in part, because of how Stephen Covey defines empathetic listening, “…it’s that you fully, deeply, understand that person, emotionally as well as intellectually.” This is not white belt level attentiveness we’re talking about; this is Buddha-like listening, and with practice and intentionality, you may find you’re able to reach this level of enlightenment. By doing so, you’ll inevitably forge bonds, and the relationship you build will be based on mutual understanding. With these bonds in place, you and your new cohorts will be more in sync and better able to row in the same direction.

 

The Importance of Asking “Why?”

 

Why are we here? Not quite as existential as that, but fundamental nonetheless: you should consider adding the word “Why?” to your workplace repertoire. “Why?” Well, let me tell you why. This simple three-letter word will help you peel back the layers of the onion. Asked in the right way, it can be a powerful means to demonstrate your empathy and leverage the newly strengthened relationships you’ve built. It’s a means to get deeper insight into the problem/pain/situation at hand. With deeper insights, you can create more effective solutions.

 

Now a word of caution for you, burgeoning Buddha. “Why?” can also backfire on you. It can be a challenging word. Used in the wrong context, setting, or situation, it can feel like a challenge to the person being questioned. If the other party gets defensive and puts their shields up, it’s possible to lose traction. Tread carefully with this powerful little word, and make sure your newly improved relationships are strong enough to survive it being taken the wrong way. Long story short, be intentional with your communications, and asking “Why?” can take you a long way toward becoming more effective in your organization.

 

Why Can’t We Be Friends? (With IT)

 

The idea for this post has been kicking around for a while, but the title only came to me recently when the Rage Against the Machine song “Know Your Enemy” unexpectedly started playing in my car. If you’re able to strip out the controversial elements, the song is a call to action, fighting against complacency and bucking the norms. At the end of this post, that’s my hope for you: by being cognizant of our place in the organization and actively working to build better relationships, you’ll walk away not raging against the machine, but rather humming “why can’t we be friends, why can’t we be friends…”

So much in IT today is focused on the enterprise. At times, smaller organizations get left out of big enterprise data center conversations. Enterprise tools are far too expensive and usually way more than what they need to operate. Why pay more for data center technologies beyond what your organization needs? Unfortunately for some SMBs, this happens, and the ROI on the equipment they purchase never really realizes its full potential. Traditional data center infrastructure hardware and software can be complicated for an SMB to operate alone, creating further costs for professional services for configuration and deployment. This was all very true until the advent of hyperconverged infrastructure (HCI). So to answer the question I posed above: yes, HCI is very beneficial and well suited for most SMBs. Here's why:

 

1. The SMB is small- to medium-sized – Large enterprise solutions don’t suit SMBs. Aside from the issue of over-provisioning and the sheer cost of running an enterprise data center solution, SMBs just don't need those solutions. If the need for growth arises, with HCI, an organization can easily scale out according to their workloads.

 

2. SMBs can’t afford complexity – A traditional data center infrastructure usually involves some separation of duties and silos. With so many moving parts and different management interfaces, it can become complex to manage it all without stepping on anyone’s toes. HCI offers an all-in-one solution—storage, compute, and memory all contained in a single chassis. HCI avoids the need for an SMB to employ a networking team, virtualization team, storage team, and more.

 

3. Time to market is speedy – SMBs don’t need to take months to procure, configure, and deploy a large-scale solution for their needs. Larger corporations might require a long procurement schedule; an SMB usually doesn’t. HCI helps them get to market quickly. It’s as close to a plug-and-play data center as you can get. In some cases, depending on the vendor chosen, time to market can be down to minutes.

 

4. Agility, flexibility, all of the above – SMBs need to be more agile and don’t want to carry all the overhead required to run a full data center. Power, space, and cooling can be expensive when it comes to large enterprise systems. Space itself can be a very expensive commodity. Depending on the SMB’s needs, their HCI system can be trimmed down to a single rack or even a half rack. HCI is also agile in nature due to the ability to scale on demand. If workloads spike overnight, simply add another block or node to your existing HCI deployment to bring you the performance your workloads require.

 

5. Don’t rely on the big players – Big-name vendors for storage and compute licensing can come at a significant cost. Some HCI vendors offer proprietary, built-in hypervisors included in the cost, which can be easier to manage than an enterprise license agreement. Management software is also built into many HCI vendors’ solutions.

 

HCI has given the SMB more choices when it comes to building out a data center. In the past, an SMB had to purchase licensing and hardware generally built for the large enterprise. Now they can purchase a less expensive solution with HCI. HCI offers agility, quick time to market, cost savings, and reduced complexity. These can all be pain points for an SMB, which can be solved by implementing an HCI solution. If you work for an SMB, have you found this to be true? Does HCI solve many of these problems?

This is the second of a series of five posts on the market space for hyperconverged infrastructure (HCI).

 

To be clear, as the previous blog post outlined, there are many options in this space. Evaluating your needs and understanding why you want a particular solution shouldn’t be done lightly. You should fully understand what you hope to accomplish and why you wish to go with one of these solutions, and let that understanding guide your decision-making process.

 

Here’s a current listing of industry players in hyperconverged infrastructure.

  • Stratoscale
  • Pivot3
  • DellEMC Vx series
  • NetApp
  • Huawei
  • VMware
  • Nutanix
  • HPE SimpliVity
  • HyperGrid
  • Hitachi Data Systems (Vantara)
  • Cisco
  • Datrium

 

There are more, but these are the biggest names today.

 

Each technology falls toward the top of the solution set defined by the Gartner HCI Magic Quadrant (MQ). The real question is: which is the right one for you?

 

Questions to Ask When Choosing Hyperconvergence Vendors

 

Organizations should ask lots of questions to determine what vendor(s) to pursue. Those questions shouldn’t be based on the placement in the Gartner MQ, but rather your organization’s direction, requirements, and what’s already in use.

 

You also shouldn’t ignore the knowledge base of your technical staff. For example, I wouldn’t want to put a KVM-only hypervisor requirement in the hands of a historically VMware-only staff without understanding the learning curve and potential for mistakes. Are you planning on using virtual machines or containers? Each brings its own considerations. What about a cloud element? While most architectures support cloud, you should ask which cloud platform and which applications you’ll be using.

 

One of the biggest variables to consider is, and always should be, backup, recovery, and DR. Do you have a plan in place? Will your existing environment support this vendor’s approach? Do you believe you’ve evaluated the full spectrum of how this will be done? The elements that set one platform apart include how the storage in the environment handles tasks like replication, deduplication, redundancy, fault tolerance, encryption, and compression. In my mind, how these are handled, and how they might integrate into your existing environment, must be considered.

 

I’d also be concerned about how the security regulations your organization faces are considered in the architecture of your choice. Will that affect the vendor you choose? It can, but it may not even be relevant.

 

I would also be concerned about the company’s track record. We assume Cisco, NetApp, or HPE will be around, as they’ve been there with support and solutions for decades. To be fair, longevity isn’t the only method for corporate evaluation, but it’s a very reasonable concern when it comes to supporting the environment, future technology breakthroughs, enhancements, and maybe the next purchase, should it be appropriate.

 

Now, my goal here isn’t to make recommendations, but to urge readers to narrow down a daunting list and then evaluate the features and functions most relevant to their organization. Should a true evaluation be undertaken, my recommendation would be to research the depth of your company’s needs and which of them can be resolved by placing a device, or a series of them, in your environment.

 

The decision can last years, change the direction of how your virtual servers exist in your environment, and shouldn’t be undertaken lightly. That said, hyperconverged infrastructure has been one of the biggest shifts in the market over the last few years.

Getting ready for VMworld next week in San Francisco. If you're attending, please stop by the booth and say hello. I have some speaking sessions as well as a session in the expo hall. Feel free to come over and talk data or bacon.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Supercomputer creates millions of virtual universes

Another example of where quantum computers will help advance research beyond what supercomputers of today can provide.

 

Amazon's facial recognition mistakenly labels 26 California lawmakers as criminals

This depends on what your definition of "mistake" is.

 

Younger Americans better at telling factual news statements from opinions

I'd like to see this survey repeated, but with a narrower focus on age groups. I believe grouping 18-49 as "young" is a bit of a stretch.

 

Attorney General Barr and Encryption

Good summary of the talking points in the debate about backdoors and encryption.

 

Loot boxes a matter of "life or death," says researcher

As a parent I have seen firsthand how loot boxes affect children and their habits.

 

Black Hat: GDPR privacy law exploited to reveal personal data

I wish I had attended this talk at Black Hat. Brilliant research into how data privacy laws are making us less safe than we may have thought.

 

He tried to prank the DMV. Then his vanity license plate backfired big time.

NULLs remain the worst mistake in computer science.

 

I have walked past, but never into, the Boston Public Library many times. Last week I took the time to go inside and was not disappointed.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen about the always interesting topic of cybersecurity. He says it comes down to people and visibility. Here’s part one of his article.

 

The Trump administration issued two significant reports in the last couple of months attesting to the state of the federal government’s cybersecurity posture. The Federal Cybersecurity Risk Determination Report and Action Plan noted 74% of agencies that participated in the Office of Management and Budget’s and Department of Homeland Security’s risk assessment process have either “at risk” or “high risk” cybersecurity programs. Meanwhile, the National Cyber Strategy of the United States of America addressed steps agencies should take to improve upon the assessment.

 

Together, the reports illustrate two fundamental factors instrumental in combating those who would perpetrate cybercrimes against the U.S. Those factors—people and the technology they use—comprise our government’s best defense.

 

People: The First Line of Defense

 

People develop the policies and processes driving cybersecurity initiatives throughout the government. Their knowledge—about the threat landscape, the cybersecurity tools available for government, and the security needs and workings of their own organizations—is essential to running a well-oiled security apparatus.

 

But finding those skilled individuals, and keeping them, is difficult. Since the government is committed to keeping taxpayers’ costs low, agencies can’t always afford to match the pay scales of private sector companies. This leaves agencies at a disadvantage when attempting to attract and retain skilled cybersecurity talent to help defend and protect national security interests.

 

Several education initiatives are underway to help with this cyberskills shortage. The National Cyber Strategy report lays out some solid ideas for workforce knowledge improvement, including leveraging merit-based immigration reforms to attract international talent, reskilling people from other industries, and more. Meanwhile, the Federal Cyber Reskilling Academy provides hands-on training to prepare non-IT professionals to work as cyberdefense analysts.

 

Hiring processes must also continue to evolve. Although there has been progress within the DoD, many agencies still adhere to an approach dictated by stringent criteria, including years of experience, college degrees, and other factors. This effectively puts workers into boxes—this person goes in a GS-7 pay grade box, and this other person in a GS-15.

 

While education and experience are both important, so are ideas, creativity, problem-solving, and a willingness to think outside the box. It’s a shame those attributes can’t be considered just as valuable, especially in a world where security professionals are continually being asked to think on their feet and combat an enemy who both shows no mercy and evolves quickly to bypass an organization’s defenses. The government needs people who can effectively identify and understand a security event, react quickly in the case of an event, respond to the event, anticipate the next potential attack, and formulate the right policies to prevent future incidents.

 

(to be continued next week)

 

Find the full article on Fifth Domain.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In the previous blog, we discussed how defining use cases mapped to important security and business-related objectives is the first step in building and maintaining a secure environment. We’ve all heard the phrase, “you can’t defend what you can’t see,” but you also “can’t defend what you don’t understand.”

 

Use cases will be built with the components of the ecosystem, so it’s critical to identify them early. Overlooking a key component could prove to be costly later. The blueprint for use case deployment can be drawn up based on three key areas.

 

1. Identify Role Players, Such as Endpoints, Infrastructure, and Users

A network is built using devices such as routers, switches, firewalls, and servers. Configuring and deploying them optimally requires a detailed understanding of the role each device will play in connecting users with applications and services securely and efficiently, as mandated by those users’ individual and group roles.

User and endpoint roles can then be mapped to enforcement techniques, authentication and access methods, and security audit requirements during the deployment phase.

 

For example:

  • Campus-based employees may access the network via wired company-owned devices and are authorized for network access by MAB (MAC Authentication Bypass)
  • Mobile employees require 802.1X authentication via wireless network access
  • Guest users with their own wireless devices use Web Authentication and are authorized to access a restricted set of resources
  • Branch office connectivity and other remote access users connect via an IPsec VPN with authentication via IKEv2 with RSA signatures or EAP
  • Network administrator groups require access to subsets of devices, authenticate per device, and are authorized for specific commands
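
To make these mappings concrete, here’s a minimal, hypothetical sketch (in Python) of how roles like the ones above might be captured as data and looked up during deployment planning. The role names, methods, and access levels are illustrative assumptions, not configuration for any particular product.

```python
# Hypothetical sketch: mapping user/endpoint roles to authentication
# methods and access levels for use-case planning. Names and values
# are illustrative only, not tied to any specific vendor or product.

ROLE_POLICIES = {
    "campus_employee": {
        "access": "wired",
        "authentication": "MAB",          # MAC Authentication Bypass
        "authorization": "full_internal",
    },
    "mobile_employee": {
        "access": "wireless",
        "authentication": "802.1X",
        "authorization": "full_internal",
    },
    "guest": {
        "access": "wireless",
        "authentication": "web_auth",
        "authorization": "restricted_guest_vlan",
    },
    "remote_user": {
        "access": "ipsec_vpn",
        "authentication": "IKEv2 (RSA signatures or EAP)",
        "authorization": "remote_access_profile",
    },
    "network_admin": {
        "access": "management_network",
        "authentication": "per_device",
        "authorization": "command_subset",
    },
}


def policy_for(role: str) -> dict:
    """Return the planned access/auth policy for a role, or raise if unknown."""
    try:
        return ROLE_POLICIES[role]
    except KeyError:
        raise ValueError(f"No policy defined for role: {role}")


if __name__ == "__main__":
    for role in ROLE_POLICIES:
        print(role, "->", policy_for(role))
```

Even a simple table like this makes it easier to verify, before deployment, that every role has an agreed authentication method and access level.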

 

2. Understand Data Flows and Paths

The role players in the ecosystems are connected by data flows. Where do these flows need to go within the network? Where are users coming from? Flows need to be defined. This includes flows between users and services, as well as administrative flows between network infrastructure (think routing protocols, AAA requirements, log consolidation, etc.). As a data flow traverses a known path, identify network transit points such as remote access, perimeter, and even on- or off-premises communications with cloud-based services.

 

Understanding how data moves through the ecosystem raises questions on how to secure it.

 

  • Should a user be granted the same level of access regardless of their point of access?
  • Should data segmentation be implemented, and if so, will physical and/or logical segmentation be used?
  • Should all services be located inside my firewall perimeter, on a DMZ, or cloud hosted?
  • What types and levels of authentication, integrity, and privacy will be required to secure data flows?

 

3. Identify Software and Hardware, SaaS, and Cloud

Effective configuration and deployment of network elements is dictated by required functions and permitted traffic flows, which in turn drive the choice of hardware and software. Device capabilities shouldn’t define a security policy, although they may enhance it. Choosing products that don’t meet security or business needs is a sure way to limit effectiveness.

 

Knowing what you need is critical. However, one major influence on security policy is business return on investment. When possible, consider migration strategies using existing infrastructure to support newer features and a more secure design. Relocating hardware to different areas of the network, or simply upgrading a device (for example, adding memory to accommodate new software versions), should always be considered. Deprecating on-premises hardware may be considered if a transition to cloud-based services is seen as a more efficient and cost-effective method of meeting security and business objectives.

 

When selecting new hardware, plan for future growth in terms of device capacity (bandwidth), performance (processor, memory), load balancing/redundancy capabilities, and flexibility (static form-factor versus expansion slots for additional modules). Set realistic and well-researched performance goals to ensure stability and predictability and choose the best way to implement them.

 

When selecting software, in addition to providing the required functionality, the following points should be considered.

  • Evaluate standards-based versus vendor proprietary features.
  • If certified products are required, is the vendor involved with certification efforts and committed to keeping certifications up to date?
  • Does the software provide for system hardening and performance optimization (such as control-plane policing and system tuning parameters) and system/feature failover options?
  • Understand performance trade-offs when enabling several features applied to the same traffic flows. Multiple devices may be required to provide all feature requirements.
  • Is the vendor committed to secure coding practices and responsive to addressing vulnerabilities?

 

After building an ecosystem blueprint, how can we be sure its rollout supports appropriate deployment, design, and security principles? The next blog will look at the role of best practices, industry guidelines, and compliance requirements.

Anyone who’s hired for a technical team can understand this scenario.

 

You’re hiring for “X” positions, so of course you receive “X” times infinity applications. Just because you’re looking for the next whizbang engineer, it doesn’t mean you get to neglect your day job. There are still meetings to attend and emails to write, never mind looking after the team and the thing you’re hiring for anyway! So, what do you do? Most people want to be as efficient as possible, so they jump right to the experience and skills section of the resume. That’s what you’re hiring for, right? Technical teams require technical skills. All the rest is fluff.

 

If you’ve done this, and I bet you have if you listen to the little voice in the back of your head, then you may be doing a disservice to your team, the candidate, and ultimately yourself. What the heck am I talking about? Yup, the “soft skills,” and today I’d like to talk primarily about communication skills.

 

I can hear the eyerolls from here, so let’s spend a few minutes talking about why they’re so important.

 

I’m a <insert technologist label here>. Why do I care about communications?

In 2019, continuous deployment pushes code around the clock, security event managers stop bad guys before we realize they’re there, and help desk solutions let tickets practically manage themselves. Yet even with these advances in technology and automation, people want relationships. If you think about it, when you really need to get anything complex done, do you ask your digital personal assistant, or do you work with another human? We’re out for human interaction. By building relationships, we can move IT further away from the reputation of “the Department of No” and toward frameworks and cultures that enable things like DevOps and becoming a true business partner. Our ability to communicate builds bridges and becomes the foundation for our relationships.

 

Still skeptical? Let’s walk through another scenario.

Assume you’re an architect in a large organization and you’ve got an idea to revolutionize your business. You manage to get time with the person who controls the purse strings and you launch right into what this widget is and what you need for resources. You wow them with your technical knowledge and assure them this widget is necessary for the organization to succeed. Is this a recipe for success? Probably not. You might even get bounced out of the office on your ear.

 

Let’s replay the same scenario a little differently. You get time on the decision-maker's calendar, but you do a little homework first. You ask your colleagues about the decision-maker and what type of leader they are. You dig into what their organizational goals are and how they might be measured against said goals. Armed with this information, you frame the conversation in terms of the benefits delivered both to the organization and to the purse holder. And you speak their language, which they’ll most likely appreciate and which will make your conversation go that much more smoothly. Due to your excellent communication skills, your project is approved, you have a new BFF, and you both go get tacos to celebrate your impending world domination.

 

Neither the world domination nor the tacos would be possible without the ability to convey benefits to the recipient in a language they understand. The only difference between world domination and coming across like a self-righteous nerd who cares more about their knobs than the organization is the ability to clearly and succinctly communicate with the business in a language they understand.

 

So now that we’ve talked a bit about why...

Let’s circle back to the original premise for a moment: you should be building communication skills into your teams. Obviously if you’re hiring a technical writer, communication is the skill, but chances are you’re looking for someone who has an attention to detail and can write some form of prose. The ability to craft a narrative will be vital if you’re looking for a technical marketing person. Anyone who’s in a help desk role needs to build rapport, so communicating with empathy and understanding becomes vital. If you’re hiring for an upper level staff position, the ability to distill highly technical concepts down to fundamentals and convey them in language that makes sense to the recipient is paramount. In my experience, this last example can be a bit of a rarity; if you find someone either within or outside your ranks who exudes it, you should think about how you can keep them on your hook.

 

How do you achieve this unicorn dream of hiring for communication skills? Classic geek answer: “it depends.” We can’t possibly diagnose all the permutations in my wee little blog post. Rather than try to give you a recipe, I think you’ll find that by shifting your approach slightly, to be more mindful of what you’d like to achieve via your communications, you’ll inherently be more successful.

 

One last point before I bid you adieu. Here, we’ve focused on why you need to hire for these skills. This isn’t to say for one second that you shouldn’t also build them within your existing organizations. This, however, requires looking at the topic from some different angles and a whole other set of techniques, so we’ll leave it for another day. Until then, I hope you found this communication helpful, and I’d love to turn it into a dialogue if you’re willing to participate in the comments below.

In the more than 20 years I’ve spent in the IT industry, I’ve seen many changes, but nothing has had a bigger impact than virtualization.

 

I remember sitting in a classroom back in the early 2000s with a server vendor introducing VMware. They shared how it enabled us to segment underused resources on Intel servers into something called a "virtual machine" and then run separate environments in parallel on the same piece of hardware—technology witchcraft.

 

This led to a seismic shift in the way we built our server infrastructure. At the time, our data centers and computer rooms were full of individual, often over-specified and underutilized servers, each running an operating system and application, all of which consumed space, power, and cooling, and cost a small fortune. Add in deployments that took months, and projects were often slow, innovation was reduced, and the response to business demands was drawn out.

 

Virtualization revolutionized this, allowing us to reduce server estates and lower the cost of data centers and infrastructure. This allowed us to deploy new applications and services more quickly, letting us better meet the needs of our enterprise.

 

Today, virtualization is the de facto standard for how we deploy server infrastructure. It’s hard to believe it was once an odd, cutting-edge concept. While it’s the standard deployment model, the reasons we virtualize have changed over the last 20 years. It’s no longer about resource consolidation—it’s more about simplicity, efficiency, and convenience.

 

But, virtualization (especially server-based) is also a mature technology designed for Intel servers, running Windows and Linux inside the sacred walls of our data center. While it has served us well in making our data centers more efficient and flexible, reducing cost and ecological impact, does it still have a part to play in a world rapidly moving away from these traditional ways of working?

 

Over the next few weeks, we’ll explore virtualization from where we are today, the problems virtualization has created, where it’s heading, and whether it remains relevant in our rapidly changing technology world.

 

In this series, we’ll discuss:

  • The Problems of Virtualization – Where we are today and what problems 20 years of virtualization have caused, such as management, control, and VM sprawl.

  • Looking Beyond Server Virtualization – When we use the word virtualization, our thoughts immediately turn to Intel servers, hypervisors, and virtual machines. But the future power of virtualization lies in changing the definition. If we think of it as abstracting the dependency of software from specific hardware, it opens a range of new opportunities.

  • Virtualization and the Drive to Infrastructure as Code – How the shift to a more software-defined world is going to cement the need for virtualization. Environments reliant on engineered systems are inflexible and slow to deploy. They’re going to become less useful and less prevalent. We need to be able to deploy our architecture in new ways and deliver it rapidly, consistently, and securely.

  • The Virtual Future – As we desire increasing agility, flexibility, and portability across our infrastructure and need more software-defined environments, automation, and integration with public cloud, virtualization (although maybe not in the traditional way we think of it) is going to play a core part in our futures. The more our infrastructure is software, the more ability we’ll have to deliver the future so many enterprises demand.

 

I hope you’ll join me in this series as I look at the changing face of virtualization and the part it’s going to play in our technology platforms today and in the future.

 

Had a wonderful time at Black Hat last week. Next up for me is VMworld in two weeks. If you're reading this and attending VMworld, stop by the booth and say hello.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Hospital checklists are meant to save lives — so why do they often fail?

A good article for those of us who rely on checklists, with advice on how to use them properly.

 

A Framework for Moderation

Brilliant article to help make sense of why content moderation is not as easy as we might think.

 

With warshipping, hackers ship their exploits directly to their target’s mail room

If you don't have the ability to detect rogue devices joining your network, you're at risk for this attack vector.

 

Uber, losing billions, freezes engineering hires

That's a lot of money disappearing. Makes me wonder where it's going, because it's not going to the drivers.

 

Study: Electric scooters aren’t as good for the environment as you think

Oh, maybe Uber is paying millions for research articles to be published. Just kidding. Uber offers scooters as well, as they remain dedicated to making things worse for everyone.

 

Robot, heal thyself: scientists develop self-repairing machines

What's the worst that can happen?

 

The World’s Largest and Most Notable Energy Sources

I enjoyed exploring this data set, and I think you might as well. For example, current energy consumption for bitcoin is about 60,000 megawatt hours. That's almost the same daily amount as the entire city of London.

 

We brought custom black hats to Black Hat, of course. We also brought photobombs by Dez, apparently.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner about machine learning and artificial intelligence. These technologies have recently come into vogue, and Mav does a great job of exploring them and their impacts on federal networking.

 

Machine learning (ML) and artificial intelligence (AI) could very well be the next major technological advancements to change the way federal IT pros work. These technologies can provide substantial benefits to any IT shop, particularly when it comes to security, network, and application performance.

 

Yet, while the federal government has been funding research into AI for years, most initiatives are still in the early stages. The reason for this is preparation. It’s critically important to prepare for AI and machine learning technologies—adopting them in a planned, purposeful manner—rather than simply applying them haphazardly to current challenges and hoping for the best.

 

Definition and Benefits of AI

 

From a high-level perspective, machine learning-based technologies create and enhance algorithms that identify patterns in large sets of data. Artificial intelligence is the ability for machines to continuously learn and apply cross-domain information to make decisions and act.

 

For example, AI technology allows computers to automatically recognize a threat to an agency’s infrastructure, automatically respond, and automatically thwart the attack without the assistance of the IT team.
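
As a rough illustration of the pattern-recognition piece (not the specific tooling the article describes), the hedged sketch below uses scikit-learn’s IsolationForest to flag unusual network flow records in a synthetic data set. The feature names, values, and thresholds are assumptions for demonstration only.

```python
# Illustrative sketch only: flag anomalous "flow" records with an
# unsupervised model. Synthetic data; a real deployment would use
# curated telemetry and far more careful feature engineering.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Assumed columns: bytes transferred, duration in seconds, distinct ports touched
normal = rng.normal(loc=[50_000, 30, 3], scale=[10_000, 10, 1], size=(500, 3))
suspicious = rng.normal(loc=[900_000, 5, 40], scale=[50_000, 2, 5], size=(5, 3))
flows = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(flows)

labels = model.predict(flows)  # 1 = looks normal, -1 = outlier
print("Flagged flow indices:", np.where(labels == -1)[0])
```

The point isn’t the algorithm itself; it’s that the model can only learn “normal” from the data it’s given, which is why the preparation steps below matter so much.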

 

But, remember, planning and preparation are absolutely necessary before agencies dive into AI implementation.

 

Preparing for AI

 

From a technology perspective, it’s important to be sure agency data is ready for the shift. Remember, machine learning technologies learn from existing data. Most agencies have data centers full of information—some clean, some not. To prepare, the absolute first step is to clean up the agency’s data store to ensure bad data doesn’t lead to bad automatic decisions.
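
As a minimal, hypothetical example of that kind of cleanup (the file name and column names are assumptions), a first pass might look something like the following pandas sketch.

```python
# Minimal data-cleanup sketch; "events.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("events.csv")

# Drop exact duplicate records.
df = df.drop_duplicates()

# Require the fields any downstream model or automation would depend on.
df = df.dropna(subset=["timestamp", "source_ip", "event_type"])

# Normalize timestamps; rows that can't be parsed become NaT and are dropped.
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df = df.dropna(subset=["timestamp"])

df.to_csv("events_clean.csv", index=False)
print(f"{len(df)} clean records written")
```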

 

Next, prepare the network by implementing network automation. Network automation will allow federal IT pros to provision a large number of network elements, for example, or automatically enhance government network performance. A good network automation package will provide insights—and automated response options—for fault, availability, performance, bandwidth, configuration, and IP address management.

 

Finally, strongly consider integrating any information that isn’t already integrated. For example, integrating application performance data with network automation software can automatically enhance performance. In fact, this integration can go one step further. Integrating historical data will allow the system to predict an impending spike in demand and automatically increase bandwidth levels or enable the necessary computing elements to accommodate the spike. While network automation is powerful and can start your organization down the path to prepare for AI-enabled solutions, there’s a big difference between network automation and AI.
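
To show the idea behind using historical data to anticipate demand (not any particular product’s feature), here’s a hedged sketch that fits a simple trend to recent bandwidth samples and calls a placeholder scale-up function when the forecast crosses a threshold. The numbers and the scale_up_bandwidth helper are assumptions.

```python
# Toy forecast sketch: extrapolate a linear trend from recent bandwidth
# samples and act if the prediction exceeds capacity. All values and the
# scale_up_bandwidth() helper are illustrative assumptions.
import numpy as np

CAPACITY_MBPS = 900.0

# Hypothetical hourly utilization samples (Mbps) for the last 12 hours.
history = np.array([420, 450, 480, 500, 540, 580, 610, 650, 700, 740, 790, 830], dtype=float)
hours = np.arange(len(history))

# Fit a straight line and predict three hours ahead.
slope, intercept = np.polyfit(hours, history, deg=1)
forecast = slope * (len(history) + 3) + intercept


def scale_up_bandwidth():
    # Placeholder for whatever automation hook the environment provides.
    print("Triggering bandwidth/compute scale-up ahead of predicted spike")


print(f"Forecast in 3 hours: {forecast:.0f} Mbps (capacity {CAPACITY_MBPS:.0f})")
if forecast > CAPACITY_MBPS:
    scale_up_bandwidth()
```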

 

Conclusion

 

This type of preparation provides two of the most essential elements of a successful AI implementation: efficiency and visibility. If an agency’s network isn’t already as efficient as it can be—and if the different elements of the infrastructure are not already linked—advanced technologies won’t be anywhere near as effective as they could be if the pieces were already in place, ready to take the agency to the next level.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Security is a key operational consideration for organizations today because a breach can lead to significant losses of revenue, reputation, and legal standing. An entity’s environment is an ecosystem composed of users, roles, networking equipment, systems, and applications coming together to facilitate productivity and profitability as securely as possible. An environment will never be 100% secured against all threats. The next best option is to be proactive in defending against known attacks and to provide real-time, adaptable monitoring capabilities to detect and alert on behaviors outside of what is considered normal in the environment.

 

This blog series will present suggestions and guidelines to help administrators build and maintain an environment that defends against and mitigates threats.

 

Security is no longer just an overlay to a network topology. Security methods provide protection for data, access, and infrastructure, and should be defined and deployed based on a carefully defined security policy. An effective security policy integrates well-known protection methods into a network in a way that meets both security standards and the business goals of the entity being secured. This is facilitated by defining use cases representing key business drivers, such as:

 

  • Improved efficiency through streamlined security processes reducing operational expenses in terms of time, money, and personnel
  • Increased productivity through well-defined and applied policies correctly balancing the level of access with perceived risk
  • Better agility allowing for efficiency with respect to the implementation of compliance and regulatory objectives, migration strategies, and risk mitigation techniques.

 

Identifying use cases is often the catalyst for a security policy review. Remember, each entity within an organization will have its own objectives. Even if things look typical on the surface, to sell the security policy, its benefits must be apparent to each stakeholder.

 

Here are some common use cases and relevant details a security policy should outline.

  • Performance and Availability
    • SLA requirements
    • Capacity and potential growth
    • Efficient use of bandwidth and device resources
    • Planning for redundant designs
  • Audit and Logging
    • Compliance or legal requirements
    • Compliance demonstration during audits
    • Granularity of monitoring and control
      • Per user
      • Command level
    • Detect suspicious behavior of log sources
    • React to expected host/log sources not reporting
    • Installation of agents on endpoints or collectors
    • Consolidation of log sources for a single view
  • Monitor/Troubleshoot
    • What is the cost of downtime?
    • Acquisition and placement of management tools
    • What key events need to be highlighted?
    • Application of analytics, rulesets, and alerts
    • Escalation chain to handle alerts and incident response
    • Automated controls versus user intervention
    • Issue reporting mechanisms and management protocols
    • Support costs: in-house, outsourced
  • Asset Provisioning
    • Centralized repository versus per-device
    • Need for multiple levels of control
    • Automation of distribution
    • Change management processes
    • Vulnerability assessment strategy
  • Acceptable Use Monitoring
    • Employee monitoring
    • Analyzing user behavior to detect potentially suspicious patterns
    • Analyzing network traffic to pinpoint trends indicating potential attacks
    • Identifying improper user account usage, such as shared accounts
    • Publishing policies for the use of the organization’s resources
    • Develop a baseline document to outline threshold limits, critical resources information, user roles, and policies, and apply this to a monitoring system, service, or playbook
    • Legally acceptable method of handling breaches
  • Threat Playbook
    • Identify the threats and attacks of concern (could be industry-specific):
      • Detecting data exfiltration by attackers
      • Detecting insider threats
      • Identifying compromised accounts
      • Detection of brute force attacks
      • Application defense checks
      • Malware checks and update process
      • Detection of anomalous ports, services, and unpatched hosts/network devices
      • Incident investigation process
    • Proactive threat hunting
    • Engaging legal entities and incident response personnel

 

In summary, a security policy builds the foundation for a secure network, but it must be valuable and enforceable to an organization and all stakeholders.

 

In the next blog in this series, we’ll look at how use cases can be mapped to the components in the environment.

We’ve all been there. It’s time to consider building a home lab, whether it’s for testing a scenario, preparing for a certification, or learning more about a software application. There are two home lab options to consider.

 

A physical home lab includes a server rack, servers, networking equipment, monitor, keyboard, KVM, and so on. Additionally, the rack requires a space tall and wide enough to house it, as well as sufficient power to run it.

 

A software-defined home lab (virtualized) eliminates most of the items needed in a physical lab environment. There’s no need for a rack, extra servers, or dedicated space, and the power consumption is significantly less. A software-defined home lab can run off a single NUC, a desktop, or a laptop computer. The hardware requirements (RAM, storage, processor) vary depending on your home lab.

 

This is where we’ll dig deeper into the question asked in the title of this post: what is virtualization?

 

Virtualization provides an alternative to a physical environment because it allows the end user to create a software-based (virtualized) model of a server, along with additional servers, applications, and networks, in a software-defined manner. Additionally, time savings are an important factor. VM templates are lifesavers: if you’re not satisfied with the virtual environment you’ve created, you can delete it and start over from the template, and you’ll be up and running in half the time it would take to rebuild a physical server the same way.

 

There are five commonly known virtualization use cases:

 

Server – allows multiple operating systems to run on one physical server

Storage – provides the ability to combine multiple physical storage options into a single logical storage environment

Network – provides the ability to create multiple software-defined networks (SDN) in a virtualized manner

Desktop – like server, but allows the end user to create and deploy multiple virtualized desktops onto a single desktop computer accessible from any device

Application – a prime use case scenario when you need to host an application for testing

 

The options for creating a virtualized environment have also expanded to include public cloud services, but there are factors to consider based on your needs. For short-term projects with a limited budget requiring a VM stood up in seconds, a public cloud service is ideal. This provides flexibility without the overhead, but keep in mind these services are easily adopted because of their simplicity, and there’s a tendency to neglect the costs associated with them over time. If a public cloud service is a long-term solution, it’s even more important to keep track of costs for the same reasons, perhaps with email alerts tied to each option you’ve selected and its corresponding cost.
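
For example, a lightweight, hypothetical cost check might look like the sketch below: it compares month-to-date spend per service against a budget and emails an alert when the threshold is crossed. The spend figures, budget, addresses, and SMTP host are all assumptions; a real setup would pull spend from the provider’s billing export or API.

```python
# Hypothetical monthly cost check for a cloud-hosted home lab.
# Spend data, budget, addresses, and SMTP host are illustrative only.
import smtplib
from email.message import EmailMessage

MONTHLY_BUDGET = 50.00  # USD, assumed

# Month-to-date spend per selected option (would come from a billing export).
spend = {
    "vm_small": 18.40,
    "object_storage": 6.25,
    "public_ip": 3.10,
    "snapshots": 27.80,
}

total = sum(spend.values())
if total > MONTHLY_BUDGET:
    body = "\n".join(f"{item}: ${cost:.2f}" for item, cost in spend.items())
    msg = EmailMessage()
    msg["Subject"] = f"Home lab spend ${total:.2f} exceeds ${MONTHLY_BUDGET:.2f} budget"
    msg["From"] = "lab-alerts@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(body)

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)
```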

Hyperconverged infrastructure is adopted more and more every year by companies both big and small. For small- to medium-sized businesses (SMBs) and remote or branch offices, HCI provides an all-in-one solution, eliminating the need for power, space, and cooling for multiple separate systems. For bigger companies, at the enterprise level and below, HCI provides a flexible means of scaling out as workload demand increases. There are many other benefits to adopting a hyperconverged approach to data center infrastructure. Here are what I consider the top five.

 

Flexible Scalability

The most attractive benefit of HCI is the ability to scale out your environment as your workload demand increases. Every application or data center has different workloads. Some demand a lot of resources while others don’t need as much. In the case of HCI, as workload demands increase, there’s no need to do a forklift upgrade. You simply add a new node or block to your current infrastructure and configure it as needed.

 

A Lower Total Cost of Ownership

Total cost of ownership (TCO) isn’t usually something the IT community pays much attention to. Generally, we work with what we’re given and have to make it work effectively. Decision makers, however, find TCO extremely important, and the metric plays a major role in the procurement process. TCO is good for both the decision makers and the engineers in the trenches. On one hand, the decision makers see a lower-cost solution overall; on the other, the engineers get to work with equipment and software that isn’t completely budget-constrained. Time to market is much quicker, fewer administrative staff are required, and there are cost savings in power, space, and cooling.

 

Less Administrative Overhead

When there’s less to manage, there’s less administrative overhead. Lowering administrative overhead doesn’t necessarily mean cutting staff; it means making your staff more effective with less busywork. A traditional data center infrastructure has many moving parts, which require many different teams, or silos, to manage. Consolidating those moving parts into one chassis cuts down on both the administrative overhead and the segregated teams.

 

Avoid Overprovisioning

HCI essentially eliminates the possibility of overprovisioning. Traditional infrastructures require a big upfront purchase based on an analyst’s workload projections three to five years out. Most of those projections end up being too generous, and the organization is left with expensive hardware that never gets near capacity. HCI allows you to buy only what you currently need, with maybe a 15% buffer, and then scale out flexibly as the workload increases. By eliminating long-range workload projections and large upfront hardware/software purchases, TCO decreases and flexibility increases.
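As a back-of-the-envelope illustration of that buy-what-you-need-plus-a-buffer sizing, here’s a short sketch; the workload footprint and per-node capacities are entirely hypothetical.

```python
# Back-of-the-envelope sketch of "buy what you need plus ~15%" node sizing.
# All workload and node figures are hypothetical.
import math

current_ram_gib = 1200        # hypothetical current workload footprint
current_vcpus = 320

buffer = 1.15                 # ~15% headroom instead of a 3-5 year projection

node_ram_gib = 512            # hypothetical capacity of one HCI node
node_vcpus = 128

nodes_for_ram = math.ceil(current_ram_gib * buffer / node_ram_gib)
nodes_for_cpu = math.ceil(current_vcpus * buffer / node_vcpus)

print(f"Buy {max(nodes_for_ram, nodes_for_cpu)} nodes now; add more as demand grows")
```

Whichever resource needs the most nodes sets the purchase, and the next increment is just another node.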

 

Takes the Complexity Out of New Deployments

Traditional infrastructure deployments can be a very complex process. As discussed earlier, workload projections lead to procurement of hardware and software, which then need to be racked and stacked, cabled, and configured. This deployment approach requires multiple people, server lifts, and rack space. HCI deployments make it possible for less experienced administrators to unbox, rack, and configure an entire deployment. Scaling follows the same model: unbox a new block, bolt it on, and configure it. There’s no need for the storage team to provision new storage or the networking team to add more cables. It’s an all-in-one solution and simple to deploy.

 

There are numerous benefits to adopting HCI over traditional infrastructure. I’ve only listed five, which barely scratch the surface. For the benefit of those of us who haven’t deployed HCI before, what are some benefits you’ve realized from your deployment that aren’t on this list?

Heading to Las Vegas this week for Black Hat. In preparation, I'm bringing a burner phone, wrapping it and my laptop in foil, and then burning them both when I head to the airport to leave.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Woman arrested after Capital One hack spills personal info on 106 million credit card applicants

Secure your S3 buckets, y'all. This is a known attack vector, highlighted here as a "configuration vulnerability."

 

What We Can Learn from the Capital One Hack

Good summary of details regarding the "configuration vulnerabilities" existing within the open source code deployed by Capital One.

 

GitHub sued for aiding hacking in Capital One breach

This seems to be a stretch, but it's interesting to note. I'm not certain how GitHub is supposed to recognize leaked data is being stored (it could be fake data), or how they should verify code is secure.

 

Computer Science Curriculums Must Emphasize Privacy Over Capability

I like the idea, but don't think it's enough. Because most of the folks working in IT aren't CS majors, maybe we should have all fields of study include basic privacy and security information, too.

 

Google’s File on You is 10 Times Bigger Than Facebook’s — Here’s How to View It

In case you were wondering about the data Google is tracking as you surf the web.

 

All the best engineering advice I stole from non-technical people

A bit long, but worth your time.

 

NASA has created food out of thin air and it could be the solution to global hunger

Seems promising, but you'll have my full attention when you create bacon from thin air.

 

Got tired of mowing grass between the newly planted shrubs, so we built a new border path. At this rate, we won't have any grass to mow by 2021.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner about the need for federal employees to develop soft skills like communication and adaptability. I agree: if folks want to get ahead in their careers today, they need strong skills like these.

 

Today’s federal IT pro has a broad range of responsibilities and corresponding talents, and the need for a multidisciplinary skillset is only increasing. As environments get more complex and teams grow, federal IT pros will need a broader range of skills—specifically, “soft skills” or “people skills.” These additional skills can help solidify job security and make the federal IT pro indispensable.

 

Soft Skill Requirements

 

Communication, collaboration, and adaptability are the cornerstones of a strong, productive team—hence, three of the most desirable “soft skills” the federal IT pro can develop and grow.

 

Communication

 

When a project is created—and must be planned, tested, and executed—it’s critically important to have the ability to communicate project goals, strategy, planning, timelines, testing, implementation, and ongoing maintenance to everyone on the team, regardless of technical specialty.

 

Each group within the team should understand the criticality of the project, as well as its nontechnical goals as they relate to the agency’s mission. Communication is the key to achieving this. Federal IT pros must be able to communicate not only within the team but with others across the organization as well.

 

Most agencies have some combination of technical folks and business folks. A technical staffer who can explain how a technical project will help drive agency mission or business goals will likely have a successful career. Additionally, a technical staffer who can also explain the financial impact—ideally, the long-term cost-savings impact of many of today’s leading-edge technology projects—is likely to have an even more successful career.

 

Collaboration

 

Different agency groups will need to work together to ensure project success. According to the 2019 SolarWinds® Federal Cybersecurity Survey Report, a majority of security issues are born of user error. In fact, 56% of respondents say careless or untrained insiders are a significant source of IT security threats in their agencies.

 

Based on those statistics, if an agency wants to enhance its security posture, collaborating with the rest of the agency will likely be a critical component of the project’s success.

 

The federal IT team can work with the agency’s internal communications team to implement an awareness or education program to ensure all agency personnel are informed—and are doing their part in the broader agency effort.

 

Adaptability

 

As every federal IT pro knows, change is constant. Whether the change is related to budget issues, administration changes, technology advancements, or a combination of all three, the ability to function in this type of changing environment is becoming increasingly important.

 

Critical thinking skills are a significant part of adaptability. As things change, federal IT pros must be able to shift thinking quickly and effectively. Critical thinking skills include problem identification, research, objective analysis, and the ability to draw conclusions and make decisions.

 

The final piece of adaptability is the willingness to change. It’s important to embrace change and thrive in this type of environment. A willingness—even eagerness—to learn new technologies, take on new challenges, and think differently will almost certainly ensure a long, successful career in the federal IT workforce.

 

Conclusion

 

Gone are the days of silo-based IT skills. Communication, collaboration, and adaptability will soon be job requirements within the federal IT workforce, even for the most technical staffers.

 

Find the full article on Government Technology Insider.

 


This is the first of a five-part series on HCI. I hope you enjoy it and that it prompts discussion. Please feel free to reach out to me, either here or via Twitter at @MBLeib.

 

What Is Hyperconverged Infrastructure?

Hyperconverged infrastructure is one of the hot new areas of technology in the data center space. Like most areas of technology, there are the marketing words and then there’s the definition. So, what’s the definition? There’s no industry-wide meaning, but in my opinion, it involves the management of an architecture built on hardware, software, storage, and a hypervisor. As this audience is familiar with hypervisors, I’m happy to skip the “what is a hypervisor” conversation, but note that some architectures support VMware and not KVM, Hyper-V, or other hypervisors, while others support all of them. Let’s be clear here, though: I believe the original concept of this category is built around the hypervisor, hence the term “hyperconverged.”

 

I don’t believe the term covers a combined compute/storage environment without the hypervisor. So, for example, in the backup space, Rubrik and Cohesity, with no disrespect, are converged but not hyperconverged. And, believe me, there are many advantages in the converged arena as well, but by my definition, this isn’t that. I lay no claim, by the way, to my definition being the authoritative one.

 

The history of this kind of architecture goes back to the launch of the EMC/Cisco product, the vBlock. The idea was a compute environment powered by VMware on Cisco UCS servers, a switched fabric powered by Cisco Nexus, and, of course, storage by EMC. The product was sized to requirements: the compute was built around supporting the VMware load, and the storage covered all your storage requirements. Seems easy, right? It wasn’t. These were first-generation builds and required a great deal of fine-tuning and technical support. Around the same time, NetApp introduced its answer with the FlexPod. Though these first-generation products were built quite robustly, they were tougher to manage than ever intended.

 

Soon came the launch of products from Nutanix and SimpliVity, designed around industry-standard x86 hardware and, initially, shared spinning-disk storage presented as a virtual SAN spread across the nodes. This became a far more viable build, typically sized around three- or four-node x86 clusters. Scalability was initially difficult, though: once you outgrew your compute or storage sizing, you had to buy another full cluster.

 

Alternative builds then arrived from brands like Datrium and NetApp, and from VMware with VxRail, among others, built around the idea of using storage nodes and compute nodes as separate components. This gave the customer far friendlier ways to grow the architecture. No longer were you constrained by a fixed storage-to-compute ratio: if you needed more storage, you added a storage node to the cluster, and if you needed compute, that was just as easy. I find these architectures compelling.

 

As you can see, there are many approaches to converged architecture, each designed to solve a different set of inherent issues. With so many options to draw from, your data center needs can likely be met by one of them.


I’d also like to stress, as has always been my opinion, that convergence isn’t appropriate for every scenario. Orchestration tools have become far more sophisticated, so “pools” of resources can be provisioned from the whole using a variety of methods, depending on the hardware being leveraged. Sizing, needs, scalability, and other variables can be weighed to achieve the same or similar goals. Traditional builds of servers, fabric, storage, and network are still viable options. A need can also be met by a newer take on converged architecture, as HPE has done with its fully managed Synergy platform.

 

Before endeavoring to implement an approach, be sure that your goals are being met by the solutions you pursue.

