Geek Speak

March 2019

In my previous articles, I’ve looked at the challenges and decisions people face when undertaking digital transformation and transitioning to a hybrid cloud model of IT operation. Whether it’s using public cloud infrastructure or changing operations to leverage containers and microservices, we all know not everything can or even should move to the public cloud. There’s still a need to run applications in-house, yet we all still want the benefits of the public cloud in our own data centers. How do you go about addressing these private cloud needs?

 

Let’s take an example scenario. You’ve decided it's time to upgrade the company’s virtual machine estate and roll out the latest and greatest version of hypervisor software. You know that there are heaps of great new features in there that will make all the difference in your day-to-day operations, but that doesn’t mean anything to your board or to the finance director, whom you need to win over to get the green light to purchase. You need to present a proposal containing a set of measurables indicating when they are going to see a return on the money you’re asking them to release. In the immortal words of Dennis Hopper in Speed, “…what do you do? What do you do?”

 

First, you need a plan. The good news is, you can basically reuse the same one time and again. Change a few names here and a few metrics there, and you’ll have a winner straight out of the Bill Belichick of IT’s playbook.

The framework you should build a plan around has roughly nine sections you must address.

 

  • First, you need to outline the Scenario and the problems you’re facing.
  • This leads you to the Solution you’re proposing.
  • Then the Steps needed to get to this proposed solution.
  • Next, you would outline the Benefits of this solution.
  • And any Risks that might arise while transitioning and once up and running.
  • You would then summarize the Alternatives, including the fact that doing nothing will only exacerbate the issue.
  • After that, you want to profile the costs and compare them to the previous system for Cost Comparisons, detailing as much as possible on each and highlighting the TCO (Total Cost of Ownership). You may think you can finish here, but two important parts follow.
  • Highlight the KPIs (Key Performance Indicators).
  • Finally, the Timeline to implement the solution.

 

KPIs, or Key Performance Indicators, are a collection of statements related to the new piece of hardware, software, or even the whole system. You may say that we can reduce query time by five seconds during a normal day, or that we will reduce power consumption by 10 kWh per month. They have to be measurable and expressible as a value; you can’t say “it will be faster or better,” because you can’t quantify that. Your KPIs may also have a deadline or date associated with them, so you can definitively say whether there’s been a measured improvement or not.
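If it helps to make this concrete, here’s a minimal sketch (in Python, purely illustrative) of treating KPIs as data rather than statements: each one has a numeric target, a unit, and a deadline, so the “did we meet it?” question can only ever be answered from a measurement.

```python
# Illustrative only: KPIs as measurable targets with deadlines, where "met"
# can only be answered from a recorded number, never from "it feels faster."
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class KPI:
    name: str
    target: float                      # improvement we committed to deliver
    unit: str
    deadline: date
    measured: Optional[float] = None   # filled in once real data is collected

    def met(self) -> bool:
        return self.measured is not None and self.measured >= self.target

kpis = [
    KPI("Reduce average query time", target=5.0, unit="seconds", deadline=date(2019, 9, 30)),
    KPI("Reduce power consumption", target=10.0, unit="kWh/month", deadline=date(2019, 12, 31)),
]
```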

 

Sometimes it can be hard or near impossible to pull some of the details in the plan together, but remember, the finance department will have any previous purchases of hardware, software, professional services, or contractors in a ledger. Hopefully you know the number of man-hours per month it takes to maintain the environment, along with the application downtime, profit lost to downtime, and so on. Sometimes there will be points that you need to put down to the best of your current knowledge. Moving forward, you’ll start to track some of the figures the board wants, showing a willingness to see things from their perspective, as the nuances of new versions of hardware and software can be lost on them.

 

Once you have a good understanding of the problem(s) you’re facing, you need to look at the possible solutions, which may mean a combination of demonstrations, proofs of concept (PoCs), evaluations, or try-and-buys to gain insight into the technology available to solve the problem.

 

Next, it’s time to size up the solution. One of the hardest things to do when you adopt a new technology is to adequately size for the unexpected. Without properly understanding the requirements your solution needs to meet, how can you safely say the proposed new solution is going to fix the situation? I won’t go into how to size your solution as each vendor has a different method. The results are only as good as the figures you put in. You need to decide how many years you want to use the assets for, then figure out a rate of growth over that time. Don’t forget, updates and upgrades can affect the system during this timeframe, and you may need to take these into account.

 

Another known problem in the service provider space: you size a solution for 1,000 users and move 50 onto it to begin with, and sure enough, they get blazing speed and no contention. But as you approach 500 users, the original 50 start to notice that tasks are taking longer than when they first moved. They’ll complain that they’re not getting the 10x speed they had on day one, even though you spec’d the system for a 5x improvement and they’re still seeing a 6-7x improvement over the legacy system. You want to avoid this. This pioneer syndrome needs some form of quality of service to keep it from arising—a “training wheels protocol,” if you will.

 

Now that you’ve identified your possible white knight, it’s time to do some due diligence and verify that it will indeed work in your environment, that it’s currently supported, and so on before going off half-cocked and purchasing something because you were wooed by the sales pitch or one-time-only special end-of-quarter pricing.

 

I think too many people purchase new equipment and software and are in such a rush to use the shiny new toy that they forget some of the most important steps: benchmarking and baselining. I refer to benchmarking as understanding how a system performs when a known amount of load is put upon it. For example, when I have 50 virtual machines running, or 100 concurrent database queries, what happens when I add another 50 or 100? I monitor this increase and record the changes. Keep adding in known increments until you see the resources max out and adding any more has a degrading effect on the existing group. Baselining is taking measurements once a system goes live and seeing what a normal day’s operation does to it. For example, you may have 500 people log on at 9 a.m. with peak load around 10:30. It then tails off until 2 p.m. when people start to come back from lunch, and spikes again around 5 p.m. as they clear their desks, finally dropping to a nominal load by 7 p.m. Only by having statistics on hand showing how everything in the system performs during this typical day can we make accurate measurements. Setting tolerances will help you decide if there’s actually a problem, and if so, narrow the search down when a user opens a support ticket. This process of baselining and benchmarking will ultimately help you determine SLA (service level agreement) response times and define items that are outside the system’s control.
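As a rough illustration of that benchmarking loop, here’s a sketch; the load driver and metrics source are stand-ins for whatever workload generator and monitoring tool you actually use. Add load in known increments, record what you see, and stop once more load starts to degrade throughput.

```python
# A sketch of incremental benchmarking: the load driver and metrics source
# below are placeholders for your own workload generator and monitoring tool.
import time

def run_workload(concurrent: int) -> float:
    """Placeholder load driver; returns throughput for this level of load."""
    return concurrent / (1 + concurrent / 500)   # toy curve that flattens out

def sample_metrics() -> dict:
    """Placeholder for pulling CPU/memory/latency from your monitoring tool."""
    return {"cpu_pct": None, "mem_pct": None, "latency_ms": None}

def benchmark(start: int = 50, step: int = 50, max_steps: int = 20, settle: int = 0):
    results, previous = [], 0.0
    for i in range(max_steps):
        load = start + i * step
        throughput = run_workload(load)
        results.append({"load": load, "throughput": throughput, **sample_metrics()})
        if throughput < previous:        # adding more load now hurts the existing group
            break
        previous = throughput
        time.sleep(settle)               # give the system time to settle between steps
    return results

print(benchmark())
```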

 

The point is, you’ll need a standard of measurement and documentation; and like any good high school science experiment, you’re probably going to need a hypothesis, method, results, conclusion, and evaluation. What I’m trying to say is you need to understand what you are measuring and its effect on the environment. Yes, there are some variables everyone’s going to measure: CPU and memory utilization, network latency, and storage capacity growth. But your environment may require you to keep an eye on other variables, like web traffic or database queries, and knowing a good value from a bad one is critical to managing them.

 

The system management tools you’ll use should be tried and tested. If you can’t easily get the results you need from your current implementation, it may be time to look at something new. It may be something open-source, or it may be a paid solution with some professional services and training to maximize the new investment. As long as you’re monitoring and recording the statistics that matter in your environment, you should be in a great position to evaluate new hardware and software options.

 

You may have heard comments about the “cloud being a great place for speed and innovation,” which is truer now than ever, given how quickly providers begin to monitor, and bill you for, your usage. I believe that to be a proper private cloud, or the on-premises part of a hybrid cloud, you need to be able to monitor detailed usage growth and have the potential to show or charge departments for their IT usage. By monitoring hybrid cloud metrics, you can make a better-informed decision about moving applications to the cloud. As with any expenditure, you should also look at adding functionality that gives you cloud-like abilities on-premises. Maybe begin by implementing a strategy to show chargeback to different departments or line-of-business application owners. Start making the move from keeping the lights on to innovation, and have IT lead the drive to a competitive advantage in your industry.
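A chargeback model doesn’t have to start out sophisticated. Below is a deliberately simple sketch, with a made-up internal rate and invented usage figures, of rolling monthly usage up into a per-department bill; in practice the usage records would come out of your monitoring platform.

```python
# Toy chargeback: the rate and usage records are invented for illustration.
from collections import defaultdict

RATE_PER_VCPU_HOUR = 0.03   # assumed internal rate

usage_records = [           # (department, vCPU-hours consumed this month)
    ("Finance", 1200),
    ("Marketing", 800),
    ("Finance", 400),
]

def monthly_chargeback(records):
    totals = defaultdict(float)
    for department, vcpu_hours in records:
        totals[department] += vcpu_hours * RATE_PER_VCPU_HOUR
    return dict(totals)

print(monthly_chargeback(usage_records))   # {'Finance': 48.0, 'Marketing': 24.0}
```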

 

Moving from a data center to a private cloud isn’t as simple as changing the name on the door. It takes time and planning: implementing new processes to achieve your goals, providing some form of automation and elasticity back to the business, and building ways to monitor and report trends. Like any problem, breaking it up into bite-size chunks not only gives you a sense of achievement, but also more control over the project going forward.

 

Whether you are moving to or even from the cloud, the above process can be applied. You need to understand the current usage, baselines, costs, and predicted growth rates, as well as any SLAs that are in place, and then how all of that marries up with the transition to the new platform. It’s all well and good reaching for the New Kid on the Block when they come around telling you their product will solve everything up to and possibly including “world hunger,” but let’s be realistic: make sure you’ve done your homework and have a plan of attack. Predetermined KPIs and deliverables may feel like shackles on the project, but they keep you focused on the goal and on delivering results back to your board.

 

  Does investment equal business value? Spending money on the new shiny toy from Vendor X to replace aging infrastructure doesn’t always mean you’re going to improve things. It’s about what business challenges you’re trying to solve. Once you have set your sights on a challenge, it’s about determining what success looks like for the project, what KPIs and milestones you’re going to set and measure, what tools you’ll use, and how you can prove the value back to the board, so the next time you ask for money, it’s released a lot more easily and quickly.

In sunny Redmond this week for the annual Microsoft MVP Summit. This is my 10th Summit and I’m just as excited for this one as if it was my first.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Massive Admissions Scandal

Having worked in higher education for a handful of years, none of this information is shocking to me. The most shocking part is that any official investigation was done, and indictments handed down. I truly hope this marks a turning point for our college education system.

 

Philadelphia just banned cashless stores. Will other cities follow?

One of the key benefits touted by cryptocurrency advocates is “access for everyone.” Well, the City of Brotherly Love disagrees with that viewpoint. The poor have cash, but not phones. Good for Philly to take a stand and make certain stores remain available to everyone.

 

Facebook explains its worst outage as 3 million users head to Telegram

A bad server configuration? That’s the explanation? Hey, Facebook, if you need help tracking server configurations, let me know. I can help you be back up and running in less than 15 hours.

 

Your old router is an absolute goldmine for troublesome hackers

Included for those who need reminding: don’t use the default settings on the hardware you buy off the shelf (or the internet). I don’t understand why there isn’t a standard default setup that forces a user to choose a new password; it seems like an easy step to help end users stay safe.

 

Why CAPTCHAs have gotten so difficult

Because computers are getting better at pretending to be human. You would think that computers would also be better at recognizing a robot when they see one, and not bother me with trying to pick out all the images that contain a signpost or whatever silly thing CAPTCHAs are doing this week.

 

The Need for Post-Capitalism

Buried deep in this post is the comment about a future currency based upon social capital. You may recall that China is currently using such a system. Last week, I thought that was crazy. This week, I think that maybe China is cutting edge, and trying to help get humanity to a better place.

 

The World Wide Web at 30: We got the free and open internet we deserve

“We were promised instant access to the whole of humanity's knowledge. What we got was fake news in our Facebook feeds.” Yep, that about sums up 30 years of the web.

 

Ever feel as if you are the dumbest person in the room? That's me at MVP Summit for 4 days.

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s a helpful article on how to achieve your cloud-first objectives for cloud-based email. Email is a critical application for government, and there are several suggestions for improving reliability.

 

The U.K. government’s Cloud First policy mandates the need to prioritize cloud-based options in all purchasing decisions—and email providers are no exception to this rule. The rationale is clear: to deliver “better value for money.” Cloud-based email can help with this—offering huge operational benefits, especially considering the sheer number of users and the broad geographical footprint of the public sector. It can also be much simpler and cheaper to secure and manage than on-prem email servers.

 

However, while email services based in the cloud can offer a number of advantages, such services also pose some unique challenges. IT managers in the public sector must track their email applications carefully to help ensure cloud-based email platforms remain reliable, accessible, and responsive. In addition, it’s important to monitor continuously for threats and vulnerabilities.

 

Unfortunately, even the major cloud-based email providers have had performance problems. Microsoft, a preferred supplier with whom the U.K. government has secured a preferential pricing deal, has seen its Office 365 service suffer outages in Europe and in the United States as recently as late last year.

 

Fortunately, many agencies are already actively monitoring cloud environments. In response to a recent FOI request from SolarWinds, 68% of NHS and 76% of central government organisations reported having migrated some applications to the cloud and using monitoring tools to oversee them. Although monitoring in the cloud can be daunting, organisations can apply many of the best practices used on-prem to the cloud—and often even use the same tools—as part of a cloud email strategy that can help ensure a high level of performance and reliability.

 

Gain visibility into email performance

 

Many of the same hiccups that affect the performance of other applications can be equally disruptive to email services. Issues including network latency and bandwidth constraints, for example, can directly influence the speed at which email is sent and delivered.

 

Clear visibility into key performance metrics on the operations of cloud-based email platforms is a must for administrators. They need to be able to proactively monitor email usage throughout the organisation, including the number of users on the systems, users who are running over their respective email quotas, archived and inactive mailboxes, and more.

 

When working across both a cloud-based email platform and an on-prem server, in an ideal world, administrators should set up an environment that allows them to get a complete picture across both. Currently, however, many U.K. public sector entities are using four or more monitoring tools—as is the case for 48% of the NHS and 53% of central government, according to recent SolarWinds FOI research. This highlights a potential disconnect between different existing monitoring tools.

 

Monitor mail paths

 

When email performance falters, it can be difficult to tell whether the fault lies in the application or the network. This challenge is often exacerbated when the application resides in the cloud, which can limit an administrator’s view of issues that might be affecting the application.

 

By using application path monitoring, administrators can gain visibility into the performance of email applications, especially those that reside in a hosted environment. By monitoring the “hops,” or transfers between computers, that requests take to and from email servers, administrators can build a better picture of current service quality and identify any factors that may be inhibiting email performance. In a job where time is scarce, this visibility can help administrators troubleshoot problems without the additional hassle of determining if the application or network is the source of the problem.
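If you don’t have a dedicated path monitoring product in place, even a crude measurement beats none. The sketch below assumes a Unix-like host with the standard traceroute utility installed and uses an illustrative endpoint; it records nothing more than hop count and TCP connect latency, but tracked over time it will show you when the path or responsiveness to a hosted mail service changes.

```python
# A crude stand-in for application path monitoring: record hop count and TCP
# connect latency to a hosted mail endpoint. Assumes a Unix-like host with
# the `traceroute` utility available; the hostname is illustrative.
import socket
import subprocess
import time

MAIL_HOST = "outlook.office365.com"

def hop_count(host: str) -> int:
    out = subprocess.run(["traceroute", "-n", "-w", "2", host],
                         capture_output=True, text=True, timeout=120).stdout
    return len([line for line in out.splitlines()[1:] if line.strip()])

def connect_latency_ms(host: str, port: int = 443) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=5):
        return (time.monotonic() - start) * 1000

print(f"{MAIL_HOST}: {hop_count(MAIL_HOST)} hops, "
      f"{connect_latency_ms(MAIL_HOST):.0f} ms to connect")
```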

 

By applying existing standard network monitoring solutions and strategies to email platforms, administrators can gain better insight into the performance of cloud email servers. This will help keep communications online and running smoothly.

 

Find the full article on GovTech Leaders.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

It’s the most wonderful time of year. The time to come together as a community and – through pure opinion and preference – crown someone (or something) VICTORIOUS.

 

SolarWinds Bracket Battle 2019 is here. And, for its 7th anniversary, have we got something truly thought-provoking, debate-inducing, and oddly… meta for you.

 

The Premise:

It’s a well-known and well-understood fact that not all superpowers are created equal. Even amongst those blessed with supernatural or mutant abilities, someone’s going to end up with the fuzzy end of the lollipop.

 

And so we ask: If one were to end up in the shallow end of the superpowers pool, would it be better to just be normal? Or, is having ANY power – even something as random and seemingly worthless as the ability to turn into a bouncing ball (Bouncing Boy, DC Comics, 1961) or to turn music into light (Dazzler, Marvel, 1980) – better than nothing at all?

 

If the chance to don a super suit, hang at the mutant mansion, or chill with the League is worth enduring an endless stream of ridicule and continuous side-eye, then what IS the ULTIMATE USELESS SUPERPOWER?

 

Would you rather have the chance to parley with some tree rodents or parlez-vous binary? Superhuman strength for a millisecond or lactokinesis (which is basically milk mind-control)?

 

These are just some of the real head-scratchers we have in store for you in this year’s SolarWinds Bracket Battle.

 

Starting today, 33 of the most random and useless superpowers that we could imagine will battle it out until only one remains and reigns supreme as the BEST of the WORST.

 

We picked the starting point and initial match-ups; however, just like in Bracket Battles of the past, it’ll be up to the THWACK community to decide the winner.

 

Don’t Forget to Submit Your Bracket: If you correctly guess the final four bracket contestants, we’ll gift you a sweet 1,000 THWACK points. To do this, you’ll need to go to the personal bracket page and select your pick for each category. Points will be awarded after the final four contestants are revealed.

 

Bracket Battle Rules:

 

Match-Up Analysis:

  • For each useless superpower match-up, we’ve provided short descriptions of each power. Just hover over or click to vote to decipher the code.
  • Unlike in past years, there really isn’t an analysis of how each match-up would play out because – well, let’s just say we want your imaginations to run wild. (Quite frankly, these are all pretty useless to begin with, so it’s hard to really grasp how you'd make some of these abilities work to your advantage.)
  • Anyone can view the bracket and match-ups, but in order to vote or comment, you must have a THWACK® account.

 

Voting:

  • Again, you must be logged in to vote and trash talk
  • You may vote ONCE for each match-up
  • Once you vote on a match-up, click the link to return to the bracket and vote on the next match-up in the series

 

Campaigning:

  • Please feel free to campaign for your preferred form of uselessness, or debate the relative usefulness of any entry (also, feel free to post pictures of bracket predictions on social media)
  • To join the conversation on social media, use the hashtag #SWBracketBattle
  • There’s a printable PDF version of the bracket available, so you can track the progress of your favorite picks

 

Schedule:

  • Bracket release is TODAY, March 18
  • Voting for each round will begin at 10 a.m. CT
  • Voting for each round will close at 11:59 p.m. CT on the date listed on the Bracket Battle home page
  • Play-in battle opens TODAY, March 18
  • Round 1 OPENS March 20
  • Round 2 OPENS March 25
  • Round 3 OPENS March 28
  • Round 4 OPENS April 1
  • Round 5 OPENS April 4
  • The Most USEFUL Useless Superpower will be announced April 10

 

If you have any other questions, please feel free to comment below and we’ll be sure to get back to you!

 

What power is slightly better than nothing at all? We’ll let the votes decide!

 

Access the Bracket Battle overview HERE>>

Migration to the cloud is just like a house move. The amount of preparation done before the move determines how smoothly the move goes. Similarly, there are many technical and non-technical actions that can be taken to make a move to the cloud successful and calm.

 

Most actions (or decisions) will be cloud-agnostic and can be carried out at any time, but once a platform is chosen, it unlocks even more areas where preparations can be made in advance.

 

In no particular order of importance, let’s look at some of these preparations.

 

Suitability

Some workloads may not be suitable for migration to the cloud. For example, compliance, performance, or latency requirements might force some workloads to stay on-premises. Even if a company adopts a “cloud-first” strategy, such workloads could force the change of model to a hybrid cloud.

 

If not done initially, identification of such workloads should be carried out as soon as the decision on the platform is made so the design can cater for them right from the start.

 

Hybrid IT

Most cloud environments are a hybrid of private and public cloud platforms. Depending on the size of the organization, it’s common to arrange for multiple high-speed links from the on-premises environment to the chosen platform.

 

However, those links are quite cumbersome to set up, as many different parties are involved, such as the provider, carrier, networking, and cloud teams. Availability of ports and bandwidth can also be a challenge.

 

Suffice it to say that the lead time to commission such a link end-to-end typically ranges from a few weeks to a few months. For that reason, it’s recommended to prioritize identifying whether such link(s) will be required, and to which data centers, and to get the commissioning process started.

 

Migration Order

This is an interesting one as many answers exist and all of them are correct. It really depends on the organization and maturity level of the applications involved.

 

For an organization where identical but isolated development environments exist, it’s generally preferred to migrate those first. However, you may find exceptions in cases where deployment pipelines are implemented.

 

It’s important to keep stakeholders fully involved in this process, not only because they understand the application best and can foresee potential issues, but also so they’re aware of the migration schedule and what constitutes a reliable test before signing off.

 

Refactoring

Most organizations like to move their applications to the cloud first and improve them later. This is especially true if there’s a deadline and migration is imminent. It makes sense to divide the whole task into two clear and manageable phases, as long as the improvement part isn’t forgotten.

 

That said, the thinking process on how to refactor existing applications post-migration can start now. There are some universal concepts for public cloud infrastructure like autoscaling, decoupling, statelessness, etc., but there will be some specific to the chosen cloud platform.

 

Such thinking automatically forces the teams to consider potential issues that might occur and therefore provides enough time to mitigate them.

 

Processes

Operations and support teams are extremely important in the early days of migration, so they should be comfortable with all the processes and escalation paths if things don’t go as planned. However, it’s common to see organizations force rollouts as soon as initial testing is done (to meet deadlines) before those teams are ready.

 

This can only cause chaos and result in a less-than-ideal migration journey for everyone involved. A way to ensure readiness is to do dry runs with a few select low-impact test environments, driven by the operations and support team solving deliberately created issues. The core migration team should be contactable but not involved at all.

 

Migration should only take place once both teams are comfortable with all processes and the escalation paths are known to everyone.

 

Training

The importance of training cannot be emphasized enough, and it’s not just about technical training for the products involved. One often-forgotten exercise is to train staff outside the core application team, e.g., operations and support, on the applications being migrated.

 

There can be many post-migration factors that make training on the applications necessary, such as changes to application behavior, deployment mechanisms, security profiles, and data paths.

 

Training on the technologies involved can start as early as the platform decision. Application-specific training should occur as soon as the application is ready for migration, but before the dry runs. Both combined will stand the teams in good stead when migration day comes.

 

Conclusion

Preparation is key for a significant task like cloud migration. With a bit of thought, many things can be identified that are not dependent on platform choice or the migration and can therefore be taken care of well in advance.

 

How smoothly a cloud migration goes often depends on how many factors are in play at once, and reducing the number of outstanding tasks means less stress for everyone. It pays to be prepared. As Benjamin Franklin put it:

  “By failing to prepare, you are preparing to fail.” 

If you’re a returning reader to my series, thank you for reading this far. We have a couple more posts in store for you. If you’re a new visitor, you can find previous posts below:

Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT

 

In this post, I’ll be looking at how I help my customers assess and architect solutions across the options available throughout on-premises solutions and the major public cloud offerings. I’ll look at how best to use public cloud resources and how to fit those to use cases such as development/testing, and when to bring workloads back on-premises.

 

In most cases, modern applications that have been built cloud-native, such as functions or using as-a-service style offerings, will have a natural fit to the cloud that they’ve been developed for. However, a lot of the customers I work with and encounter aren’t that far along the journey. That’s the desired goal, but it takes time to refactor or replace existing applications and processes.

 

With that in mind, where do I start? The first and most important part is in understanding the landscape. What do the current applications look like? What technologies are in use (both hardware and software)? What do the data flows look like? What does the data lifecycle look like? What are the current development processes?

 

Building a service catalogue is an important step in making decisions about how you spend your money and time. There are various methods out there for achieving these assessments, like TIME analysis or The 6 Rs. Armed with as much information as possible, you’re empowered to make better decisions.

 

Next, I usually look at where the quick wins can be made—where the best bang for your buck changes can be implemented to show return to the business. This usually starts in development/test environments and potentially pre-production environments. Increasing velocity here can provide immediate results and value to the business. Another area to consider is backup/long-term data retention.

 

Development and Testing

 

For development and test environments, I look at the existing architecture: are these traditional VM-based environments? Can they be containerized easily? Where possible, is containerization a good step toward more cloud-native behavior and thinking?

 

In traditional VM environments, can automation be used to quickly build and destroy environments? If I’m building a new feature and I want to do integration testing, can I use mocks and other simulated components to reduce the amount of infrastructure needed? If so, then these short-lived environments are a great candidate for the public cloud. Where you can automate and have predictable lifecycles into the hours, days, and maybe even weeks, the efficiencies and savings of placing that workload in the cloud are evident.
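As a sketch of what “predictable lifecycles” can look like in practice, here’s the build-test-destroy pattern using boto3 against AWS EC2; the region, AMI ID, instance type, and tags are placeholders, and the same idea applies to any provider or on-premises automation tooling.

```python
# Hedged sketch: spin up a short-lived test environment, then tear it down.
# Region, AMI, instance type, and tags are placeholders for your own values.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def create_test_environment(ami_id: str, count: int = 1) -> list:
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.medium",
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "integration-test"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def destroy_test_environment(instance_ids: list) -> None:
    ec2.terminate_instances(InstanceIds=instance_ids)

# ids = create_test_environment("ami-0123456789abcdef0")
# ... run the integration tests against the environment ...
# destroy_test_environment(ids)
```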

 

When it comes to longer cycles like acceptance testing and pre-production, perhaps these require a longer lifetime or greater resource allocation. In these circumstances, traditional VM-based architectures and monolithic applications can become costly in the public cloud. My advice is to use the same automation techniques to deploy these to local resources with more reliable costs. However, the plan should always look forward and assess future developments where you can replace components into modern architectures over time and deploy across both on-premises and public cloud.

 

Data Retention

 

As I mentioned, the other area I often explore is data retention. Can long-term backups be sent to cold storage in the cloud? For infrequently accessed data, the benefits over tape management are often clear. Restore access may be slower, but how often are you performing those operations? How urgent is a restore from, say, six years ago? Many times, you can wait to get this information back.
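For the cold storage case, the mechanics can be as simple as the sketch below, using boto3 and an archival S3 storage class as the example; the bucket and key names are placeholders, and in practice you’d usually let lifecycle rules handle the tiering rather than doing it per upload.

```python
# Hedged sketch: push a long-term backup to an archival storage class.
# Bucket and key are placeholders; restores from this tier take hours, which
# is usually acceptable for "restore from six years ago" requests.
import boto3

s3 = boto3.client("s3")

def archive_backup(local_path: str, bucket: str, key: str) -> None:
    s3.upload_file(local_path, bucket, key,
                   ExtraArgs={"StorageClass": "GLACIER"})

# archive_backup("/backups/2013/db-full.bak",
#                "example-archive-bucket", "yearly/2013/db-full.bak")
```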

 

Continuing the theme of data, it’s important to understand what data you need where and how you want to use it. There are benefits to using cloud native services for things like business intelligence, artificial intelligence (AI), machine learning (ML), and other processing. However, you often don’t need the entire data set to get the information you need. Look at building systems and using services that allow you to get the right data to the right location, or bring the data to the compute, as it were. Once you have the results you need, the data that was processed to generate them can be removed, and the results themselves can live where you need them at that point.

 

Lastly, I think about scale and the future. What happens if your service or application grows beyond your expectations? Not many people will be the next Netflix or Dropbox, but it’s important to think about what would happen if that came about. While uncommon, there are situations where systems scale to a point that using public cloud services becomes uneconomical. Have you architected the solution in a way that allows you to remove yourself? Would major work be required to build back on-premises? In most cases, this is a difficult question to answer, as there are many moving parts, and the answer depends on levels of success and scale that may not have been predictable. I’ve encountered this type of situation over the years, usually not to the full extent of complete removal of cloud services. I commonly see it in data storage. Large amounts of active data can become costly quickly. In these situations, I look to solutions that let me leverage traditional storage arrays that can be near-cloud, usually systems placed in data centers that have direct access to cloud providers.

 

In my final post, I’ll be going deeper into some of the areas I’ve discussed here and will cover how I use DevOps/CICD tooling in hybrid IT environments.

 

Thank you for reading, and I appreciate any comments or feedback.

Saw Captain Marvel this past weekend. It's a good movie. You should see it, too. Make sure you stick around for the second end credit scene!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Quadriga's Wallets Are Empty, Putting Fate Of $137 Million In Doubt

Somehow, I don’t think “create Ponzi scheme using crypto and fake my own death” was the exact pitch to investors. Incidents like this are going to give cryptocurrencies a bad name.

 

Part 1: How To Be An Adult— Kegan’s Theory of Adult Development

One of the most important skills you can have in life is empathy. Take 10 minutes to read this and then think about the people around you daily, and their stage of development. If you start to recognize the people who are at Stage 2, for example, it may change how you interact with, and react to, them.

 

Volvo is limiting its cars to a top speed of 112 mph

Including this because (1) we got a new Volvo this week and (2) the safety features are amazing. There are many times it seems as if the car is driving itself. It takes a while to learn to let go and accept the car is now your pilot.

 

This bill could ban e-cigarette flavors nationwide

"To me, there is no legitimate reason to sell any product with names such as cotton candy or tutti fruitti, unless you are trying to market it to children.” Preach.

 

Microsoft is now managing HP, Dell PCs via its Managed Desktop Service

And we move one step closer to MMSP – Microsoft Managed Service Provider.

 

A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians

We are at the beginning of a boom with regards to machine learning. Unfortunately, most of the data we use comes with certain biases inherent in it. Data is the fuel for our systems and our decisions. Make sure you know what fuel you are using to power your algorithms.

 

Burnout Self-Test

We’ve all been there, or are there. Do yourself a favor and take 5 minutes for this quiz. Then, assess where you are, where you want to be, and the steps you need to take. Mental health is as important as anything else in your life.

 

The last real snowfall of the season, so I took the time to make a Fire Angel (it's like a Snow Angel, but in my Fire Circle).

 

The most difficult step in any organization’s journey to the cloud is the first one: where do you start? You’ve watched the industry slowly adopt cloud computing over the last decade, but when you look at your on-premises data center, you can’t conceptualize how to break out. You’ve built your own technology prison, and you’re the prisoner.

 

You might have a traditional three-tier application, and the thought of moving the whole stack to the cloud induces anxiety. You won’t control the hardware, and you won’t know who has access to the hardware. You won’t know where your data is, or who has access to it. It’s the unending un-knowing of cloud that makes so many of us retreat to the cold aisle, lean against the KVM, and clutch our tile pullers a little tighter.

 

Then you consider the notorious migration methods you’ve read about online.

 

Lift-and-shift cloud migrations are harrowing events that we should all experience at least once, and should never, ever experience more than once. Refactoring is often an exercise in futility, unless you have a crystal-clear understanding of what the resulting product will look like.

 

So how do you ease into cloud computing if lift-and-shift and refactoring aren’t practical for you?

 

You start considering a cloud-based solution for every new project that comes your way.

 

For example, I’ve recently been in discussions with an application team to improve the resiliency of their database solution. The usual solutions were kicked around: a same-site cluster for HA, a multi-site cluster for HA and DR, or an active-active same-site cluster for HA and FT. Of course, in each case, there’s excess hardware capacity that will sit idle until a failure event. The costs associated with these three solutions would inspire any savvy product manager to think, “there’s got to be a better way.”

 

And there is. It’s a cloud-native database service with an SLA for performance and availability, infinite elasticity, and bottomless storage. (Yes, I’m exaggerating a bit here, but look at the tech specs for Google CloudSQL or Amazon RDS; infinite is only a mild stretch.) You pay for the service based on consumption, which means all those idle cycles that would otherwise consume power and cooling are poof, gone. You’ll need to sort out the connectivity and determine the right way for your enterprise to connect with your cloud services, but that’s certainly easier than designing, implementing, and procuring the hardware and licenses for your on-prem HA solutions.

 

Your application team gets the service they want without investing in bare metal that would only serve to make your data center chillers work a little harder. And more importantly, you’ve taken your first step in the journey to the cloud.

 

A successful migration can spark interest in cloud as a solution for other components. The same application team, realizing now that their data is in the cloud and their app servers aren’t, might express an interest in deploying an instance group of VMs into the same cloud to be close to their data. They’ll want to learn about auto-scaling next. They’ll want to learn about cost savings by moving web servers to Debian. They’ll want to know more about how to set up firewall rules and cloud load balancers. They’ll develop an appetite for all that cloud has to offer.

 

And while you may not be able to indulge all their cloud fantasies, you’ll find that moving to the cloud is a much simpler and more enjoyable effort when you’re working in partnership with your application team.

 

  That’s the secret to embracing not just cloud, but any new technology: let your business problems lead you to a solution.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on BYOD, including data on DHS employees. I feel that, in some cases, a better balance needs to be achieved on this issue.

 

Agencies are still grappling with the types of devices using their networks and with how to make device security a non-issue. Today, the mobile device challenge has gotten even more complex.

 

Welcome to BYOD’s second act, which may be even bigger than the first.

 

The numbers from the Department of Homeland Security tell the tale. According to DHS, its employees are currently using 90,000 devices. Thirty-eight percent of those employees are using government-issued devices, while the rest rely on their personal iPhone or Android mobile devices.

 

Although policies and guidance attempt to ensure mobile device security, initiatives like the DHS Mobile Device Security project and the Committee on National Security Systems (CNSS) Policy No. 11 go only so far. Employees don’t necessarily want to carry highly encrypted or modified devices. Like everyone else, they are accustomed to their phones being easy to use, not a burden.

 

While programs like these are necessary, and must be encouraged and followed, agencies should consider augmenting their mobile device security efforts with a few additional strategies.

 

Let employees keep their devices. Employees will inevitably use their personal devices over government networks. The trick is to make those devices secure while letting employees continue to use them with minimal inconvenience.

 

Keep tabs on those devices. Agencies must balance the reality of personal device use with security measures that allow administrators to easily manage and secure those devices, preferably from a central location. Administrators should be able to remotely wipe and lock devices, set passwords on them, and implement mobile device tracking that uses GPS to find lost and stolen devices.

 

Go beyond the devices into the network itself. Automated threat monitoring solutions that employ constantly updated threat intelligence and continuously scan for potential anomalies are good places to start. Agency teams should consider complementing this tactic with user device tracking to quickly identify and locate unauthorized devices. Monitoring and capturing network logins and other events can also help detect questionable network activity and prevent unwanted intrusions.

 

Get a handle on bandwidth. Device management also involves managing the impact that devices can have on the network. Mobile devices used for bandwidth-hogging applications, such as video, can significantly slow down the network. Agency administrators should consider implementing network bandwidth analysis solutions that allow them to identify which applications and endpoints are consuming the most bandwidth. Through device tracking, they can also track excessive bandwidth usage back to a particular user and mobile device.
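The reporting side of that can be very simple once the flow data is available. The sketch below uses invented records to show the basic roll-up: aggregate bytes per device and surface the top talkers, which is the view that lets you chase excessive usage back to a specific device.

```python
# Toy "top talkers" roll-up; the flow records here are invented and stand in
# for whatever your bandwidth analysis tool exports.
from collections import Counter

flow_records = [            # (device address, bytes transferred)
    ("10.0.0.12", 1_200_000_000),
    ("10.0.0.45", 300_000_000),
    ("10.0.0.12", 800_000_000),
]

totals = Counter()
for device, nbytes in flow_records:
    totals[device] += nbytes

for device, nbytes in totals.most_common(5):
    print(f"{device}: {nbytes / 1e9:.1f} GB")
```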

 

Although most of the focus on BYOD has been on security, mobile device management really must be a two-pronged approach. Security is and always will be important, but the ability to ensure that networks continue to operate efficiently and effectively in the midst of a device onslaught is also critical.

 

It’s also something that many agencies are still grappling with, nearly 10 years after BYOD was first introduced. We’ve come a long way since then, as the programs initiated by the DHS and other agencies show. But we still have far to go.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In early January we traveled back in time to see how SolarWinds works with Cisco gear, in preparation for Cisco Live EMEA.
Now we’re talking about Microsoft and VMware, but we won’t travel back in time, and there’s no event happening. Why? New releases, of course.

 

But let’s take a few steps back to reiterate where we are, and what’s happening.

Looking at the Microsoft universe, what are you guys monitoring?

The most essential thing is probably the operating system. SolarWinds® supports Windows Server 2003-2016 out of the box with Server & Application Monitor (SAM) templates. There’s already a Server 2019 template in the community, and I’d guess it might turn into an “official” template at some point.

 

 

The templates can be assigned to nodes after a search, and they provide well-rounded monitoring with a few mouse clicks. Each template can be customized based on your needs, but essentially, they are ready to use, and the key performance indicators monitored make sure the machine is not acting up.

 

Next, there are applications.

A quick search for “Microsoft” here shows 71 templates built into the product, and 263 templates contributed by the community. Thanks, guys!

 

 

Some of the templates are quite old, while others have been updated recently. You might find yourself in a situation with a template available for “Solution X 2016,” but you just updated to “2019” or whatever—I suggest giving the 2016 template a try first, as new products usually don’t change too much about monitoring.

It’s all nice and shiny, but there’s more.

 

The next step up is a feature called AppInsight, and this brightens up the day.
There are a couple of Microsoft solutions that are so popular that almost every organization uses them, and managing these applications can be unnecessarily complicated.
At SolarWinds, we don’t like complicated things, so in 2013, AppInsight for SQL came onboard, followed by Exchange and IIS in later versions. AppInsight isn’t exactly magic, but not too far from it.

During the process of adding a Windows node, SAM will automatically check if AppInsight is going to be a match and will suggest adding it in one of the steps. Just a single mouse click and loads of KPIs are monitored without further intervention required. And the best thing is, each KPI comes with supporting information to explain what exactly is going on if it suddenly turns red. This is invaluable for someone like me, who thinks SQL was invented by the devil to bug humankind.
But we didn’t stop with the three AppInsights. There was a race here on THWACK for quite a while between different feature requests, but this one won by far, so SAM 6.8 comes with AppInsight for AD—great!

 

Let’s move on.
At some point, I think it was 2017, we saw the first templates for Office365. They have been updated recently, and you’ll find a quick overview here. As Microsoft does not provide access to logs—have a look here—the templates instead require the AzureAD module and use PowerShell for monitoring.

 

 

Together with NetPath, you get an excellent overview of Office365 as shown here in the online demo.

Likewise, in 2017, we added support for monitoring Azure.

 

 

A few articles explain the what and the how, and I suggest starting with this one. On a basic level, you attach the instance, throw an agent on the box, and monitor whatever is running on it. Again, NetPath is your friend even for this scenario.

 

One more thing regarding Microsoft: Hyper-V.
You can monitor the basics with SAM or Network Performance Monitor (NPM), and both provide information about used resources, what machines are running on a host, and how they relate to each other with Orion® Maps.
That’s nice, but we can do better, and here’s how:

 

Let me introduce you to Virtualization Manager (VMAN), which will deal with both Hyper-V and VMware.

VMAN goes much deeper into virtual environments and containers, and trust me—virtualization isn’t dead at all.

Besides checking everything between a VM and the datastore—even vSAN—VMAN comes with pretty cool features like capacity planning, which just now received multi-clustering as a feature, and my all-time favorite: VM sprawl.
It contains this guy:

 

I so enjoy clicking “power off VM” to see if someone complains.

 

The latest version finally added another longtime feature request, VMware events.
Some of you guys used workarounds to get these events into the Orion Platform in the past.
We use the API to retrieve events in near real-time from vCenter or standalone hosts, and it works automatically—you add the gear, and we’ll do the rest for you.

 

As I said earlier, we don’t like complicated things here at SolarWinds.

Monitoring tools are vital to any infrastructure. They analyze and give feedback on what’s going on in your data center. If there are anomalies in traffic, a network monitoring tool will catch them and alert the administrators. When disk space is getting full on a critical server, a server monitoring tool alerts the server administrators that they need to add space. Some tools are only network tools, or only systems tools, and these may not always provide all the analysis you need. There are additional monitoring tools that can cover everything happening within your environment.

 

In searching for a monitoring tool that fits the needs of your organization, it can be difficult to find one that’s the right size for your environment. Not all monitoring tools are one-size-fits-all. If you’re searching for a network monitoring tool, you don’t need to purchase one that covers server performance, storage metrics, and more. There are several things to consider when choosing a monitoring tool that fits your environment.

 

Run an Analysis on Your Environment

The first order of business when trying to determine which monitoring tool best fits your needs is to analyze your current environment. There are tools on the market today that help map out your network environment and gather key information such as operating systems, IP addresses, and more. Knowing which systems are in your data center, what types of technologies are present, and what application or applications they support will help you decide which tools are the best fit.

 

Define Your Requirements

There may be legal requirements defining what tools need to be present in your environment. Understanding these specific requirements will likely narrow down the list of potential tools that will work for you. If you’re running a Windows environment, there are many built-in tools that perform some of the tasks you need. Additionally, if your organization is already using these built-in tools, it may not be necessary to spend money on another tool to do the same thing.

 

Know Your Budget

Budgetary demands typically determine these decisions for most organizations. Analyzing your budget will help you understand which tools you can afford and will narrow the list down. Many tools do more than you need, so there’s no reason to spend more on one that’s outside your budget.

 

On-prem or Cloud?

When picking a monitoring tool, it’s important to research whether you want an on-premises tool or a cloud-based one. SaaS tools are very flexible and can store the information the tool gathers in the cloud. On the other hand, having an on-premises tool keeps everything in-house and provides a more secure option for data gathered. Choosing an on-prem tool gives you the ability to see your data 24/7/365 and have complete ownership of it. With a SaaS tool, it’s likely you could lose some visibility into how things are operating on a daily basis. Picking the right hosting option should be strictly based on your requirements and comfort with the accessibility of your data.

 

Just Pick One Already

This isn’t meant to be harsh, but spending too much time researching and looking for a tool that fits your needs may put you in a bad position. While you’re trying to choose between the best network monitoring tools, you could be missing out on what’s actually going on inside your systems. Analyze your environment, define your requirements, know your budget, pick a hosting model, and then make your selection. Ensuring the monitoring solution fits the needs of your environment will pay dividends in the end.

 

There’s a revolution underway in application deployment, and you may or may not be aware of it. We’re seeing a move by businesses to adopt technology the large public cloud providers have been using for many years, and that trend is only going to increase. In my previous post on digital transformation, I looked at examining the business processes you run and how to position a new digital strategy within your organization. In this article, we look at one of the more prevalent methods of modernizing and paying off some of that technical debt. The large cloud providers, and Google in particular, have been deploying applications using a technology called containerization for years, allowing them to run and scale isolated application elements and gain greater CPU efficiency.

 

What is Containerization?

While virtualization is hardware abstraction, containerization is operating system abstraction. Containerization has been growing in popularity, as this technology can get around some of the limitations of machine virtualization, like the sheer size of operating systems, the overhead associated with getting an operating system up and keeping it running, and, as previously mentioned, the lower CPU utilization. (Remember, it’s the application we really want to be running and interacting with; the operating system is just there to allow it to stand up.)

 

Benefits of Containerization

A key benefit of containerization is that you can run multiple applications within the user space of a Linux operating system, kept separate from the kernel. While each application requires its own dedicated operating system when it’s virtualized, a container holds only what’s required to run the application (an encapsulated runtime environment, if you will). Because of this encapsulation, the application doesn’t see processes or resources outside of itself. As isolation is done down at the kernel level, each application no longer needs its own bloated operating system. It also allows the application to be moved without any reliance on the underlying operating system and hardware, which in turn gives greater reliability and removes migration issues. And because the operating system is already up and running and there’s no hypervisor in the execution path, you can spin up a single container or thousands within seconds.
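To make the “seconds, not minutes” point tangible, here’s a small sketch using the Docker SDK for Python; it assumes a local Docker daemon and the public python:3 image, and simply starts a throwaway container, captures its output, and lets Docker remove it on exit.

```python
# Assumes a local Docker daemon and the public python:3 image.
import docker

client = docker.from_env()

# Start a disposable container, capture its output, and remove it on exit.
output = client.containers.run(
    "python:3",
    ["python", "-c", "print('hello from a disposable container')"],
    remove=True,
)
print(output.decode().strip())
```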

 

One of the earliest mainstream use cases of containers with which the wider audience may have interacted is Siri, Apple’s voice and digital assistant. Each time you record some audio, it’s sent to a container to be analyzed and a response generated. Then the application quits, and the container is removed. This helps explain why you can’t get a follow-up query to work with Siri. Another key benefit of containerization is its ability to help speed up development. Docker’s slogan “run any app anywhere” comes from the idea that if a developer builds an application on their laptop and it works there, it should run on a server elsewhere. This is the origin of the idea of improved reliability. In turn, it means the development environment no longer needs to be exactly like production, reducing costs and letting teams tackle and resolve the real issues they see with applications.

 

One of the major advantages of moving to the cloud is elasticity, and a great way to make use of this is to start using containers. By starting on-premises with legacy application stacks and then slowly converting, or refactoring, their constituent parts to containers, you can make the transition to a cloud provider with greater ease. After all, containerization is basically a way of abstracting away the differences in OS distributions and underlying infrastructure by encapsulating the application’s files, libraries, environment variables, and dependencies into one neat package. Containers also help with application management issues by breaking applications up into smaller parts that function independently. You can monitor and refine these components, which leads us nicely to microservices.

 

Microservices

Applications separated into microservices are easier to manage, as you can alter various smaller parts for improvement while not breaking the overall application. Or, individual instances can be brought online immediately when required to meet growing demand.

 

By using microservices, you become more agile as you move to independently deployable services and carve an application up into smaller pieces. It allows for independent development, testing, and deployment of a service on a more frequent schedule. This should allow you to start paying off some of that previously discussed technical debt.

 

Understanding the Market

There are several different types of container software, and this subdivision sometimes seems to be misunderstood when talking about the various products available. These are container engine software, container management software, container monitoring software, and container network software. The main bit of confusion in the IT market comes between container engine platforms and orchestration/management solutions. Docker, Kubernetes, Rancher, Amazon Elastic Container Service, Red Hat OpenShift, Mesosphere, and Docker Swarm are just some of the more high-profile names thriving in this marketplace. Two of the main players are Docker and Kubernetes.

 

Docker is a container platform designed to help build, ship, and run applications. At its heart, the Docker Engine is the core container runtime and the foundation for running containers. Docker Swarm is part of Docker’s enterprise suite and provides orchestration and management functionality similar to that of Kubernetes. If you’re running the Docker Engine, you can use either Docker Swarm or Kubernetes as your orchestrator.

 

Kubernetes is an open-source container management platform that grew out of a software development project at Google. It can deploy, manage, and scale containerized applications at planetary scale.
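
As a small illustration of that deploy-manage-scale loop, here’s a hedged sketch using the official Kubernetes Python client; the deployment name and namespace are hypothetical, and it assumes you already have a working kubeconfig for a cluster.

```python
# A hedged sketch using the official Kubernetes Python client ("pip install kubernetes").
# Assumes an existing kubeconfig; "webshop" and "default" are hypothetical names.
from kubernetes import client, config

config.load_kube_config()              # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Scale a (hypothetical) deployment out to five replicas to meet rising demand.
apps.patch_namespaced_deployment_scale(
    name="webshop",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Then confirm what's actually running in that namespace.
for pod in client.CoreV1Api().list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.phase)
```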

 

There’s still some debate as to whether virtualized or containerized applications are more or less secure than the other, and while there are reasons to argue for each, it’s worth noting the two technologies can be used in conjunction with each other. For instance, VMware’s Photon OS is a lightweight, container-optimized Linux operating system that runs on vSphere.

 

When it comes to dealing with containers, some design factors and ideals differ from those of running virtual machines:

  • Instances are disposable. If an instance stops working, just kill it and start another.
  • Log files are saved externally from the instance, so they can be analyzed later or collated into a central repository.
  • Applications need the ability to retry operations rather than crashing, which allows new microservice instances to be started if demand cannot be met.
  • Persistent data needs to be treated as special, so how it is accessed and stored needs consideration. Containers consist of two parts: an image file, which is like a snapshot of the required application, and a configuration file. Both are read-only, so you need to store data elsewhere or it will be deleted on clean-up.
  • By planning for redundancy and scalability, you’re planning how best to help the containerized service improve over time.
  • You must have a method to check that a container is both alive and ready, and if it’s not responding, to quickly kill that instance and start another.
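
To illustrate the "retry rather than crash" point above, here’s a minimal Python sketch of an operation wrapped in retries with exponential backoff; the URL, attempt count, and delay values are placeholders, and in practice you’d pair this with a liveness/readiness endpoint the orchestrator can probe.

```python
# A minimal retry-with-backoff sketch; the URL, attempt count, and delays are placeholders.
import time
import urllib.request

def fetch_with_retries(url, attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff instead of crashing."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
        except OSError:                      # dependency restarting, network blip, etc.
            if attempt == attempts:
                raise                        # give up; let the orchestrator replace us
            time.sleep(base_delay * 2 ** (attempt - 1))   # 0.5s, 1s, 2s, 4s, ...

# Usage against a hypothetical downstream microservice:
# data = fetch_with_retries("http://inventory-service/healthz")
```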

 

Automation and APIs

Application programming interfaces (APIs) are pieces of code exposed by an application that allow other code or applications to control its behavior. So, you have the CLI and GUI for human interaction with an application, and an API for machines. By allowing other machines to access APIs, you start to gain the ability to automate the application, and the infrastructure as a whole. There are tools available today to check that applications are in their desired state (i.e., working as intended with the correct settings enabled, etc.) and to modify them if necessary. This interaction requires access to the application’s API, both to make those changes and to interrogate it to see whether it is indeed in the required desired state.
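
As a rough sketch of what that desired-state interaction can look like, here’s some hedged Python against a purely hypothetical REST API; the endpoint, token, and setting names are invented for illustration and don’t correspond to any particular product.

```python
# A hedged sketch of desired-state checking over a REST API. The endpoint, token,
# and setting names are hypothetical and don't belong to any specific product.
import json
import urllib.request

API = "https://app.example.com/api/v1/settings"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def get_settings():
    req = urllib.request.Request(API, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ensure_desired_state(desired):
    """Interrogate the API, then patch only the settings that have drifted."""
    current = get_settings()
    drift = {k: v for k, v in desired.items() if current.get(k) != v}
    if drift:
        body = json.dumps(drift).encode()
        req = urllib.request.Request(API, data=body, headers=HEADERS, method="PATCH")
        urllib.request.urlopen(req)
    return drift

# e.g., ensure_desired_state({"maintenance_mode": False, "log_level": "INFO"})
```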

 

Containers and Automation

As mentioned previously, the ability to spin up vast numbers of containers with next to no delay to meet spikes in demand requires a level of monitoring and automation that removes the need for human interaction. Whether you’re Google doing several billion start-ups and stops a week, a website that finds cheap travel, or a billing application that needs to scale depending on the fiscal period, there’s a benefit to looking at the use of containers and automation to meet the range of demands.

 

As we move to a more hybrid style of deploying applications, being able to run them as containers and microservices gives you the flexibility to move each workload to the best possible location for it, which may or may not be a public cloud, without fear of breaking the service. Whether you start in one place and later move to another, this kind of migration will soon be viewed the same way as a version update in an environment: just another task to be undertaken during an application’s lifespan.

 

Containers offer a standardized way of deploying applications, and automation is a way to accomplish repetitive tasks and free up your workforce. As the two become more entwined in your environment, you should start to see a higher standard and faster deployment within your infrastructure. This then leads you on the journey to continuous integration and continuous delivery, but that’s a story for another day.

March is here, and I can only hope that means the worst of winter snow is behind us. I’m looking forward to getting the fire circle operational again. It’s amazing what a little bit of yard work can do to help alleviate stress. Burning things while sipping scotch helps, too.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Experiments, growth engineering, and exposing company secrets through your API: Part 1

If you’re among the 1% of people out there who know how to capture network traffic *AND* analyze what is happening, you have a lot of value to offer software companies.

 

California bill aims to strengthen data breach notification law

I do wish that we had some GDPR-type support at the federal level. It’s beyond time for our government to step in and help protect data privacy for all citizens.

 

Plain wrong: Millions of utility customers’ passwords stored in plain text

As I was just saying...

 

Rotten Tomatoes Bans User Comments Before Films’ Release

I’m a bit surprised this was even allowed, but it won’t stop the trolls. I’ve seen similar issues at conferences where you’re allowed to rate sessions and speakers despite never attending the session or watching it later.

 

America’s Cities Are Running on Software From the ’80s

Easily the least surprising headline of 2019.

 

China banned millions of people with poor social credit from transportation in 2018

Maybe instead of denying these citizens access to transportation, they should force them all to travel together. I’m certain Fox would purchase the broadcast rights to “Big Brother Airlines.”

 

Microsoft starts rolling out ability to turn photos of table data into Excel spreadsheets

While I love this, the part of me that cherishes data quality just died a little bit inside.

 

Microsoft Certified Bacon Engineer:

 

One is spoiled for choice when it comes to choosing a cloud provider. Cloud platforms have come a long way since their humble beginnings and now offer a myriad of services to suit most use cases customers might have. The question on every CTO's mind is, “Which cloud platform is the best for my business?”

 

So, what should you look for when choosing a cloud provider in 2019? Let’s look at some common factors to weigh when choosing the best cloud computing option.

 

Cost

 

This is where most cloud platform evaluations start, as the desire to save on costs is natural for any company. It also makes sense, as the cloud is consumption-based: the cheaper the cloud platform’s services are to start with, the lower the recurring costs will be.

 

If an audit of existing infrastructure has been carried out, estimating costs for an equivalent deployment on the different clouds in scope shouldn’t be too difficult. Network traffic charges and applications might not be as straightforward, so some guesstimates, based on realistic assumptions, might be necessary.
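
As a trivial illustration, a back-of-the-envelope estimate might look something like the Python below; every rate and quantity is a made-up placeholder to be replaced with your audit figures and the provider’s published pricing.

```python
# Back-of-the-envelope cloud cost guesstimate. Every number below is a made-up
# placeholder; substitute figures from your audit and the provider's price list.
vm_count       = 40      # instances found in the infrastructure audit
vm_hourly_rate = 0.10    # $/hour for a roughly equivalent instance size
storage_tb     = 25      # provisioned storage
storage_rate   = 23.0    # $/TB per month
egress_tb      = 3       # estimated outbound network traffic
egress_rate    = 90.0    # $/TB

monthly_estimate = (
    vm_count * vm_hourly_rate * 730        # ~730 hours in a month
    + storage_tb * storage_rate
    + egress_tb * egress_rate
)
print(f"Estimated monthly cost: ${monthly_estimate:,.2f}")
```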

 

However, bear in mind that even if it seems cheaper, those costs are estimates at this point and might change as a result of the eventual design. The same cloud might also become more expensive if the infrastructure profile changes in the long term, perhaps due to refactoring.

 

Existing Infrastructure

 

Cost should never be the only factor considered for this decision. There are many others, and existing infrastructure is among them.

 

Traditional infrastructure has grown organically for every company in the past. Platform and technology choices were driven by the needs of the company at the time. Some went the open-source way, while many had no choice but to have proprietary software.

When moving to the cloud, that history can influence the choice of platform. This is especially true for larger companies that might have enterprise licenses for software, translating into discounts on the platform and lowering costs.

 

Existing Expertise

 

Existing infrastructure also affects the expertise that exists within the teams that will be working with the chosen platform going forward. It’s important to take that into consideration, given that learning a new cloud platform takes a lot of time and effort.

Consider that the teams have worked with their existing environment for years to develop their expertise, but for the new platform, they’re expected to be up and running in a fraction of that. It helps if the platform chosen reuses at least some of their existing expertise.

 

Future Roadmap

 

What will application development and infrastructure look like in the future? Platforms aren’t changed frequently, so when considering a cloud platform, the ones that fit that vision should carry more weight than the ones that don’t.

 

Be careful here. Popular opinion might put one cloud platform in front of another for certain services, but is the company likely to use those services, and if so, would it use the features that differentiate that platform from the others?

 

Services Offered

 

Assuming the migration needs to happen soon, does the cloud platform cover (or provide equivalents for) all the services required today? Keeping the on-premises environment and going hybrid might be an option if some applications are not suitable for migration or too difficult to refactor, but it’s safer to look for a cloud provider that can provide the needed services from the start.

 

One significant consideration here is a database platform that may not have a natively licensed version on the cloud in question. One workaround is to migrate to another cloud-based database platform, but that’s difficult, especially within the timescales for migration, and comes with a certain amount of risk. Another is to host the database on dedicated instances, but that’s an expensive and inflexible workaround and is best avoided.

 

Multi-Cloud?

 

Some organizations have their sights set on a multi-cloud deployment, which, if successful, reduces the risk of choosing the wrong cloud. It might work, but only if there’s existing knowledge of those cloud platforms and compelling reasons to do so, e.g., some functionality that a platform excels in.

 

However, if there’s no existing knowledge and experience, this could be a risky strategy. Becoming comfortable with one cloud platform is difficult enough with all the innovation and options available. Adding another platform will stretch the teams too much, without much gain in capability.

 

A better way is to focus on one platform and learn it really well and in depth. Cloud concepts translate well between platforms, so there’s no reason another platform can’t be added to the overall infrastructure later.

 

In the meantime, applications should be built on platform-agnostic infrastructure with standard interfaces between services, which should allow for cloud mobility when more options become available.

 

Conclusion

 

Picking the best cloud provider is no easy task and a lot of thought goes into it. Comparative cost is never the only factor, and there are many other considerations that can influence the decision.

 

There are so many choices available that it’s hard to find a use case that can’t be catered for by the cloud platforms available today. While that makes the decision-making harder, it’s a nice problem to have.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on the internet of things (IoT) and security threats. We’ve all been expecting IoT devices to be problematic and it’s good to see recognition that better controls are needed for the federal government.

 

The Department of Defense is hearing the IoT alarm bells.

 

Did you hear about the heat maps used by GPS-enabled fitness tracking applications, which the U.S. Department of Defense (DOD) warned showed the location of military bases, or the infamous Mirai Botnet attack of 2016? The former led to the banning of personal devices from classified areas in the Pentagon, as well as a ban on all devices that use geolocating services for deployed personnel. While the latter may not have specifically targeted government networks, it still served as an effective wakeup call that connected devices have the potential to create a large-scale security crisis.

 

Indeed, the federal government is evidently starting to hear the alarm bells, considering the creation of the IoT Cybersecurity Act of 2017. The act emphasizes the need for better controls over the procurement of connected devices and assurances that those devices are vulnerability-free and easily patchable.

 

Physical and cultural silos

 

Technical, physical, and departmental silos could undermine the government’s IoT security efforts. The DOD comprises about 15,000 networks, many of which operate independently of each other. According to respondents cited in SolarWinds’ 2018 IT Trends Report, federal agencies are susceptible to inadequate organizational strategies and a lack of appropriate training on new technologies.

 

Breaking the silos

 

Bringing technology, people, and policy together to protect against potential IoT threats is a tricky business, particularly given the complexity of DOD networks. But it is not impossible, as long as defense agencies adhere to a few key points.

 

Focus on the people

 

First, it is imperative that federal defense agencies prioritize the development of human-driven security policies.

 

Malicious and careless insiders are real threats to government networks—perhaps as much as, if not more than, external bad actors. Policies regarding which devices are allowed on the network—and who is allowed to use them—should be established and clearly articulated to every employee.

 

Agencies must also try to ensure everyone understands how those devices can and cannot be used, and continually emphasize those policies. Implementing a form of user device tracking—mapping devices on the network directly back to their users and potentially detecting dangerous activity—can assist in this effort.

 

Gain a complete view of the entire network

 

DOD agencies should provide their IT teams with tools that give them a complete, holistic view of their entire networks. They must institute security information and event management (SIEM) to automatically track network and device logins across these networks and set up alerts for unauthorized devices.

 

Get everyone involved

 

It is incumbent upon everyone to be vigilant and involved in all aspects of security, and someone has to set this policy. That could be the chief information security officer or an authorizing official within the agency. People will still have their own unique roles and responsibilities, but just like travelers in the airport, all agency employees need to understand the threats and be on the lookout. If they see something, they need to say something.

 

Finally, remember that networks are evolutionary, not revolutionary. User education, from top management on down, must be as continuous and evolving as the actions taken by adversaries. People need to be regularly updated and taught about new policies, procedures, tools, and the steps they can take to be on the lookout for potential threats.

 

As the fitness tracking apps issue and the Mirai Botnet incident have shown, connected devices and applications have the potential to do some serious damage. While government legislation like the IoT Cybersecurity Act is a good and useful step forward, it’s ultimately up to agency information technology professionals to be the last line of defense against IoT security risks. The actions outlined here can help strengthen that line of defense and effectively protect DOD networks against external and internal threats.

 

Find the full article on SIGNAL.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Welcome back to this series of blogs on my real-world experiences of hybrid IT. If you’re just joining us, you can find previous posts here, here, and here. So far, I’ve covered a brief introduction to the series, public cloud costing and experiences, and how to build better on-premises data centres.

 


In this post, I’ll cover something a little different: location and regulatory restrictions driving hybrid IT adoption. I’m British, and as such, a lot of this comes from my view of the world in the U.K. and Europe. Not all of these issues will resonate with a global audience; however, they’re good food for thought. With adoption of the public cloud, there are many options available to deploy services within various regions across the world. For many, this won’t be much of a concern: you consume the services where you need to and where they need to be consumed by the end users. This isn’t a bad approach for most global businesses with global customer bases. In my corner of the world, we have a few options for U.K.-based deployments when it comes to hyperscale clouds. However, not all services are available in these regions, and newer services especially can take some time to roll out to them.

 

Now, I don’t want to get political in this post, but we can’t ignore the fact that Brexit has left everyone with questions over what happens next. Will the U.K. leaving the EU have an impact? The short answer is yes. The longer answer really depends on what sector you work in. Anyone who works with financial, energy, or government customers will undoubtedly see some challenges. Certain industries must comply with regulations and security standards that govern where services can be located. There have always been restrictions for some industries that mean you can’t locate data outside of the U.K.; however, there are other areas where being hosted in the larger EU area has been generally accepted. Data sovereignty needs to be considered when deploying solutions to public clouds. When there’s finally some idea of what’s happening with the U.K.’s relationship with the EU, and which laws and regulations will be replicated within the U.K., we in the IT industry will have to assess how that affects the services we’ve deployed.

 

For now, the U.K. is somewhat unique in this situation. However, the geopolitical landscape is always shifting: treaties change, safe harbour agreements come to an end, and trade embargoes or sanctions crop up over time. You need to be in a position where repatriation of services is a possibility should such circumstances come your way. Building a hybrid IT approach to your services and deployments can help with mobility of services—being able to move data between locations, be that on-premises or to another cloud. Stateless services and cloud-native services are generally easier to move around and have fewer moving parts that require significant reworking should you need to move to a new location. Microservices, by their nature, are smaller and easier to replace, so moving them between cloud providers or infrastructure should be a relatively trivial task. Traditional services, monolithic applications, databases, and data are not as simple a proposition. Moving large amounts of data can be costly; egress charges are commonplace and can be significant.

 

Whatever you are building or have built, I recommend having a good monitoring and IT inventory platform that helps you understand what you have in which locations. I also recommend using technologies that allow for simple and efficient movement of data. As mentioned in my previous post, there are several vendors now working in what has been called a “Data Fabric” space. These vendors offer solutions for moving data between clouds and back to on-premises data centres. Maintaining control of the data is a must if you are ever faced with the proposition of having to evacuate a country or cloud region due to geopolitical change.

 

Next time, I’ll look at how to choose the right location for your workload in a hybrid/multi-cloud world. Thanks for reading, and I welcome any comments or discussion.

At the start of this week, I began posting a series of how-to blogs over in the NPM product forum on building a custom report in Orion®. If you want to go back and catch up, you can find them here:

 

It all started when a customer reached out with an “unsolvable” problem. Just to be clear, they weren’t trying to play on my ego. They had followed all the other channels and really did think the problem had no solution. After describing the issue, they asked, “Do you know anyone on the development team who could make this happen?”

 

As a matter of fact, I did know someone who could make it happen: me.

 

That’s not because I'm a super-connected SolarWinds employee who knows the right people to bribe with baklava to get a tough job done. (FWIW, I am and I do, but that wasn’t needed here.)

 

Nor was it because, as I said at the beginning of the week, “I’m some magical developer unicorn who flies in on my hovercraft, dumps sparkle-laden code upon a problem, and all is solved.”

 

Really, I’m more like a DevOps ferret than a unicorn: a creature that scrabbles around, seeking out hidden corners and openings, and delving into them to see what secret treasures they hold. Often, all you come out with is an old wine cork or a dead mouse. But every once in a while, you find a valuable gem to tuck away into your stash of shiny things. And that leads me to the first big observation I recognized as part of this process:

 

Lesson #1: IT careers are more often built on a foundation of "found objects" (small tidbits of information or techniques we pick up along the way) which we string together in new and creative ways.

 

And in this case, my past ferreting through the dark corners of the Orion Platform had left me with just the right stockpile of tricks and tools to provide a solution.

 

I’m not going to dig into the details of how the new report was built, because that’s what the other four posts in this series are all about. But I *do* want to list out the techniques I used, to prove a point (there’s a small sketch of the SQL piece right after the list):

  • Know how to edit a SolarWinds report
  • Understand basic SQL queries (really just select and joins)
  • Have a sense of the Orion schema
  • Know some HTML fundamentals
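
For a flavor of the select-and-join skill in context, here’s a hedged sketch wrapped in Python with pyodbc; the DSN, table names, and columns are illustrative placeholders rather than the actual Orion schema.

```python
# A hedged sketch of a basic SELECT with a JOIN. The DSN, table names,
# and columns are illustrative placeholders, not the actual Orion schema.
import pyodbc   # assumes a SQL Server ODBC driver and DSN are already configured

QUERY = """
SELECT n.Caption        AS NodeName,
       a.ApplicationName,
       a.Status
FROM   Nodes         AS n              -- placeholder table
JOIN   Applications  AS a              -- placeholder table
       ON a.NodeID = n.NodeID
WHERE  a.Status <> 'Up';
"""

conn = pyodbc.connect("DSN=OrionDB;Trusted_Connection=yes")   # placeholder DSN
for row in conn.cursor().execute(QUERY):
    print(row.NodeName, row.ApplicationName, row.Status)
```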

 

Honestly, that was it. Just those four skills. Most of them are trivial. Half of them are skills that most IT practitioners may possess, regardless of their involvement with SolarWinds solutions.

 

Let’s face it, making a loaf of bread isn’t technically complicated. The ingredients aren’t esoteric or difficult to handle. The process of mixing and folding isn’t something that only trained hands can do. And yet it’s not easy to execute the first time unless you are comfortable with the associated parts. Each of the above techniques had some little nuance, some minor dependency, that would have made this solution difficult to suss out unless you’d been through it before.

 

Which takes me to the next observation:

 

Lesson #2: None of those techniques are complicated. The trick was knowing the right combination and putting them together.

 

I had the right mix of skills, and so I was able to pull them together. But this wasn’t a task my manager set for me. It’s not in my scope of work or role. This wasn’t part of a side-hustle that I do to pay for my kid’s braces or feed my $50-a-week comic book habit. So why would I bother with this level of effort?

 

OK, I'll admit I figured it might make a good story. But besides that?

 

I’d never really dug into Orion’s web-based reporting before. I knew it was there, and I’d played with it here and there, but had I really dug into the guts of it and built something useful? Nah, there was no burning need. This gave me a reason to explore and a goal to help me know when I was “done.” Better still, this goal wasn’t just a thought experiment; it was actually helping someone. And that leads me to my last observation:

 

Lesson #3: Doing for others usually helps you more.

 

I am now a more accomplished Orion engineer than I was when I started, and in the process I’ve (hopefully) been able to help others on THWACK® become more accomplished as well.

 

And there’s nothing complicated about knowing how that’s a good thing.
