Geek Speak


Introduction

Much of the dialog surrounding cloud these days seems centered on multi-cloud. Multi-cloud this, multi-cloud that—have we so soon forgotten about the approach that addresses the broadest set of needs for businesses beginning their transition from the traditional data center?

 

Friends, I’m talking about good old hybrid cloud, and the path to becoming hybrid doesn’t have to be complicated. In short, we select the cloud provider that best meets most of our needs and establish network connectivity between our existing data center(s) and the cloud; if we then make intelligent, well-reasoned decisions about what should live where (and why), we’ll end up in a good place.

 

However, as many of you may already know, ending up in “a good place” isn’t a foregone conclusion. It requires careful consideration of what our business (or customers’ businesses) needs to ensure the decisions we’re making are sound.

 

In this series, we’ll look at the primary phases involved in transitioning toward a hybrid cloud architecture. Along the way, we’ll examine some of the most important considerations needed to ensure the results of this transition are positive.

 

But first, we’ll spend some time in an often non-technical realm inhabited by our bosses and our bosses’ bosses. This is the realm of business strategy and requirements, where we determine whether the path we’re evaluating is the right one. Shirt starched? Tie straightened? All right, here we go.

 

The Importance of Requirements

There’s a lot of debate around which cloud platform is “best” for a certain situation. But if we’re pursuing a strategy involving use of cloud services, we’re likely after a series of benefits common to all public cloud providers.

 

These benefits include global resource availability, practically unlimited capacity, flexibility to scale both up and down as needs change, and consumption-based billing, among others. Realizing these benefits isn’t without potential drawbacks, though.

 

The use of cloud services means we now have another platform to master and additional complexity to consider when it comes to construction, operation, troubleshooting, and recovery processes. And there are many instances where a service is best kept completely on-premises, making a cloud-only approach unrealistic.

 

Is this additional complexity even worth it? Do these benefits make any difference to the business? Answering a few business-focused questions upfront may help clarify requirements and avoid trouble later. These questions might resemble the following:

 

  1. Does the business expect rapid expansion or significant fluctuation in service demand? Is accurately predicting and preparing for this demand ahead of time difficult?
  2. Are the large capital expenditures required every few years to keep a data center environment current causing financial problems for the business?
  3. Is a lack of worldwide presence negatively impacting the experience of your users and customers? Would it be difficult to distribute services yourself by building out additional self-managed data centers?
  4. Is day-to-day operation of the existing data center facility and hardware infrastructure becoming a time sink? Would the business like to lessen this burden?
  5. Does the business possess applications with sensitive data that must reside on self-maintained infrastructure? Would there be consequences if this requirement were violated?

 

In these examples, you can see we’re focusing less on technical features we’d “like to have” and more on the requirements of the business and the specific needs that should be addressed.

 

If the answers to some of these questions are “yes,” then a hybrid cloud approach could be worthwhile. On the other hand, if a clear business benefit cannot be determined, then it may be best to keep doing what you are doing.
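
As a toy illustration (not part of the original article), the questions above could even be captured in a few lines of Python. The field names and the way the answers are weighed here are assumptions made for the sake of the sketch, not a real decision framework:

```python
from dataclasses import dataclass


@dataclass
class Answers:
    demand_hard_to_predict: bool   # Q1: rapid expansion or fluctuating demand
    capex_refresh_painful: bool    # Q2: periodic data center refreshes strain finances
    need_global_presence: bool     # Q3: users suffer from a lack of worldwide presence
    facility_ops_time_sink: bool   # Q4: day-to-day facility operation is a burden
    data_must_stay_on_prem: bool   # Q5: sensitive data must stay on self-maintained gear


def evaluate(a: Answers) -> str:
    """Map the five yes/no answers to a rough strategy signal."""
    cloud_drivers = sum([a.demand_hard_to_predict, a.capex_refresh_painful,
                         a.need_global_presence, a.facility_ops_time_sink])
    if cloud_drivers == 0:
        return "No clear business benefit; keep the existing on-premises strategy."
    if a.data_must_stay_on_prem:
        return ("Hybrid cloud looks worthwhile: move elastic or global workloads out, "
                "keep sensitive data on self-maintained infrastructure.")
    return "A cloud-heavy or hybrid approach looks worthwhile; validate with stakeholders."


if __name__ == "__main__":
    print(evaluate(Answers(True, True, False, True, True)))
```

In practice, of course, the evaluation happens in conversations with stakeholders, but writing the logic down this way makes it obvious which answers push workloads toward the cloud and which keep them on-premises.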

 

In any case, having knowledge of business requirements will help you select the correct path, whether it involves use of cloud services or maintaining your existing on-premises strategy. And through asking your key business stakeholders questions like these, you show them you’re interested in the healthy operation of the business, not just your technical responsibilities.

 

Wrap-Up

There are many ways to go about consuming cloud services, and for many organizations, a well-executed hybrid strategy will provide the best return while minimizing complexity and operational overhead. But before embarking on the journey it’s always best to make sure decisions are justified by business requirements. Otherwise, you could end up contributing to a “percentage of cloud initiatives that fail” statistic—not a good place to be.

This week's Actuator comes to you from Austin, as I'm in town to host SolarWinds Lab live. We'll be talking about Database Performance Monitor (nee VividCortex). I hope you find time to watch and bring questions!

 

As always, here's a bunch of links I hope you find useful. Enjoy!

 

First clinical trial of gene editing to help target cancer

Being close to the biotech industry in and around Boston, I heard rumors of these treatments two years ago. I'm hopeful our doctors can get this done, and soon.

 

What Happened With DNC Tech

Twitter thread about the tech failure in Iowa last week.

 

Analysis of compensation, level, and experience details of 19K tech workers

Wonderful data analysis on salary information. Start at the bottom with the conclusions, then decide for yourself if you want to dive into the details above.

 

Things I Believe About Software Engineering

There are some deep thoughts in this brief post. Take time to reflect on them.

 

Smart Streetlights Are Experiencing Mission Creep

Nice reminder that surveillance is happening all around us, in ways you may never know.

 

11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)

A bit long, but worth the time. I've never been a fan of Tim or his book, but this post struck a chord.

 

Berlin artist uses 99 phones to trick Google into traffic jam alert

Is it wrong that I want to try this now?

 

I think I understand why they never tell me anything around here...

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner with ideas about how government agencies could monitor and automate their hyperconverged infrastructure to help achieve their modernization objectives.

 

It’s no surprise hyperconverged infrastructure (HCI) has been embraced by a growing number of government IT managers, since HCI merges storage, compute, and networking into a much smaller and more manageable footprint.

 

As with any new technology, however, HCI’s wide adoption can be slowed by skeptics, such as IT administrators concerned with interoperability and implementation costs. But HCI doesn’t mean starting from scratch. Indeed, migration is best achieved gradually, with agencies buying only what they need, when they need it, as part of a long-term IT modernization plan.

 

Let’s take a closer look at the benefits HCI provides to government agencies, then examine key considerations when it comes to implementing the technology—namely, the importance of automation and infrastructure monitoring.

 

Combining the Best of Physical and Virtual Worlds

 

Enthusiasts like that HCI gives them the performance, reliability, and availability of an on-premises data center along with the ability to scale IT in the cloud. This flexibility allows them to easily incorporate new technologies and architectures into the infrastructure. HCI also consolidates previously disparate compute, networking, and storage functions into a single, compact data center footprint.

 

Extracting Value Through Monitoring and Automation

 

Agencies are familiar with monitoring storage, network, and compute as separate entities; when these functions are combined in an HCI environment, monitoring is still required. Indeed, complete IT visibility becomes even more important as infrastructure converges.

 

Combining different services into one is a highly complex task fraught with risk. Things change rapidly, and errors can easily occur. Managers need clear insight into what’s going on with their systems.

After the initial deployment is complete, monitoring should continue unabated. It’s vital for IT managers to understand the impact apps, services, and integrated components have on each other and on the legacy infrastructure around them.

 

Additionally, all these processes should be fully automated. Autonomous workload acceleration is a core HCI benefit. Automation binds HCI components together, making them easier to manage and maintain—which in turn yields a more efficient data center. If agencies don’t spend time automating the monitoring of their HCI, they’ll run the risk of allocating resources or building out capacity they don’t need and may expose organizational data to additional security threats.
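
To make “automate the monitoring” a little more concrete, here’s a minimal Python sketch of the kind of scheduled capacity check an agency might run against an HCI cluster. The metrics endpoint, JSON field names, and thresholds are all assumptions for illustration; a real deployment would use the HCI vendor’s own API or an existing monitoring platform rather than a hand-rolled poller like this:

```python
"""Minimal sketch of an automated HCI capacity check (illustrative only).

Assumes a hypothetical REST endpoint that returns JSON such as:
  [{"node": "hci-01", "cpu_pct": 41.0, "storage_pct": 88.5}, ...]
"""
import json
import urllib.request

METRICS_URL = "https://hci-mgmt.example.gov/api/v1/node-metrics"  # hypothetical endpoint
CPU_HIGH, STORAGE_HIGH = 85.0, 80.0  # alert when utilization crosses these thresholds
CPU_IDLE = 15.0                      # flag possible over-allocation below this


def fetch_node_metrics(url=METRICS_URL):
    """Pull current utilization for every node in the cluster."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def evaluate(nodes):
    """Return human-readable findings; in practice these feed an alerting system."""
    findings = []
    for n in nodes:
        if n["cpu_pct"] >= CPU_HIGH or n["storage_pct"] >= STORAGE_HIGH:
            findings.append(f"{n['node']}: nearing capacity "
                            f"(cpu={n['cpu_pct']}%, storage={n['storage_pct']}%)")
        elif n["cpu_pct"] <= CPU_IDLE:
            findings.append(f"{n['node']}: mostly idle; candidate for consolidation "
                            f"or right-sizing before buying more nodes")
    return findings


if __name__ == "__main__":
    for finding in evaluate(fetch_node_metrics()):
        print(finding)
```

Run on a schedule, a check like this is the difference between catching unneeded capacity (or a saturated node) automatically and discovering it during a budget review or an outage.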

 

Investing in the Right Technical Skills

 

HCI requires a unique skill set. It’s important for agencies to invest in technical staff with practical HCI experience and the knowledge to effectively implement infrastructure monitoring and automation capabilities. These experts will be critical in helping agencies take advantage of the vast potential this technology has to offer.

 

Reaping the Rewards of HCI

 

Incorporating infrastructure monitoring and automation into HCI implementation plans will enable agencies to reap the full rewards: lower total cost of IT ownership thanks to simplified data center architecture, consistent and predictable performance, faster application delivery, improved agility, IT service levels accurately matched to capacity requirements, and more.

 

There’s a lot of return for simply applying the same level of care and attention to monitoring HCI as to traditional infrastructure.

 

Find the full article on Government Computer News.

 


This week's Actuator comes to you from New England where it has been 367 days since our team last appeared in a Super Bowl. I'm still not ready to talk about it, though.

 

As always, here's a bunch of links I hope you find interesting. Enjoy!

 

97% of airports showing signs of weak cybersecurity

I would have put the number closer to 99%.

 

Skimming heist that hit convenience chain may have compromised 30 million cards

Looks like airports aren't the only industry with security issues.

 

It’s 2020 and we still have a data privacy problem

SPOILER ALERT: We will always have a data privacy problem.

 

Don’t be fooled: Blockchains are not miracle security solutions

No, you don't need a blockchain.

 

Google’s tenth messaging service will “unify” Gmail, Drive, Hangouts Chat

Tenth time is the charm, right? I'm certain this one will be the killer messaging app they have been looking for. And there's no way once it gets popular they'll kill it, either.

 

A Vermont bill would bring emoji license plates to the US

Just like candy corn, here's something else no one wants.

 

For the game this year I made some pork belly bites in a garlic honey soy sauce.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen reviewing data from our cybersecurity survey, including details on how agencies are combatting threats.

 

According to the 2019 Federal Cybersecurity Survey released by IT management software company SolarWinds, careless and malicious insiders topped the list of security threats for federal agencies. Yet despite the increased threats, federal IT security pros believe they’re making progress managing risk.

 

Why the positive attitude despite the increasing challenge? While threats may be on the rise, strategies to combat these threats—such as government mandates, security tools, and best practices—are seeing vast improvements.

 

Greater Threat, Greater Solutions

 

According to the Cybersecurity Survey, 56% of respondents said the greatest source of security threats to federal agencies is careless and/or untrained agency insiders; 36% cited malicious insiders as the greatest source of security threats.

 

Most respondents cited numerous reasons why these types of threats have improved or remained under control, from policy and process improvements to better cyber hygiene and advancing security tools.

 

• Policy and process improvements: 58% of respondents cited “improved strategy and processes to apply security best practices” as the primary reason careless insider threats have improved.

• Basic security hygiene: 47% of respondents cited “end-user security awareness training” as the primary reason careless insider threats have improved.

• Advanced security tools: 42% of respondents cited “intrusion detection and prevention tools” as the primary reason careless insider threats have improved.

 

“NIST Framework for Improving Critical Infrastructure Cybersecurity” topped the list of the most critical regulations and mandates, with FISMA (the Federal Information Security Management Act) and DISA STIGs (Security Technical Implementation Guides) following close behind; 60%, 55%, and 52% of respondents, respectively, cited these as primary contributing factors in managing agency risk.

 

There’s also no question the tools and technologies to help reduce risk are advancing quickly; this was evidenced by the number of tools federal IT security pros rely on to ensure a stronger security posture within their agencies. The following are the tools cited, and the percentage of respondents saying these are their most important technologies in their proverbial tool chest:

 

• Intrusion detection and prevention tools: 42%

• Endpoint and mobile security: 34%

• Web application firewalls: 34%

• File and disk encryption: 34%

• Network traffic encryption: 34%

• Web security or web content filtering gateways: 33%

• Internal threat detection/intelligence: 30%

 

Training was deemed the most important factor in reducing agency risk, particularly when it comes to reducing risks associated with contractors or temporary workers:

 

• 53% cited “ongoing security training” as the most important factor

• 49% cited “training on security policies when onboarding” as the most important factor

• 44% cited “educate regular employees on the need to protect sensitive data” as the most important factor

 

Conclusion

 

Any federal IT security pro will tell you that although things are improving, there’s no single answer or solution. The most effective way to reduce risk is a combination of tactics, from implementing ever-improving technologies to meeting federal mandates to ensuring all staffers are trained in security best practices.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

