Had a great time in Austin last week, but it's good to be home. Even if that means it's cold and I need to shovel. There's no place I'd rather be to watch the Patriots this Sunday night.

 

(You *knew* I had to mention them at some point, right?)

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

YouTube vows to recommend fewer conspiracy theory videos

That’s just what they want us to think.

 

Watch out for this new cryptocurrency ransomware stalking the web

If you have not yet heard of Anatova, consider yourself warned.

 

Drone sighting disrupts major US airport

I’m seeing more of these reports, and can’t help but wonder if they are connected or if they are just copycats.

 

Tim Cook Calls for ‘Data-Broker Clearinghouse’ in Push for Privacy Rules

With Apple revenue down, maybe this is the start of a new revenue stream – data security. Honestly, if the new fruit phone was marketed as “the most secure device,” they would get millions of users to upgrade to the latest version.

 

Amazon puts delivery robots on streets – with a human in tow

This seems rather inefficient. Perhaps it is a test for something less silly. But in its current form, it’s not very useful.

 

Now Your Groceries See You, Too

Not creepy at all, nope.

 

Japanese government plans to hack into citizens' IoT devices

Governments are meant to provide for and protect their citizens. The idea that Japan would serve as a Red Team to help protect citizens with insecure IoT devices is brilliant. Help the people, help yourself.

 

 

Ah, the joys of working for a network monitoring company that leads by example:

There is no getting away from it: the word “cloud” is a buzzword in the IT industry. Maybe it’s graduated into something more.

 

Why Cloud?

 

I remember being asked this exact question during a technical presentation I got drafted into at the very last minute with a well-known high street brand. During the infrastructure pitch, I had added a little flair at the end and sketched something resembling a cumulus cloud, which elicited the question, “Why cloud?” At this point, I was unsure whether I was being tested or whether the customer genuinely didn’t know, and while I had a fairly good understanding of what the cloud was, explaining it was something I’d never done before. Thankfully, a bolt of inspiration had me reciting the fact that network diagrams use a cloud to represent the infrastructure outside of an organisation: you know the hardware is there, but you can’t see it and have no control over it. That “cloud” probably drew from the idea of the fog of war. Everyone in the room smiled, and I felt very happy with my answer, as I had avoided saying “as a service” at any point, which, about nine years ago, was all you ever heard when the word “cloud” came up in an IT conversation.

 

Today it’s assumed that anyone in the IT industry comprehends the idea of cloud. We’ve defined something ethereal and tried to standardise it. But does one person’s idea of cloud equal another’s?

 

Yet my point made nearly ten years ago still stands – there is hardware and software outside of your control. You probably don’t even know what’s being used. You can’t see or touch it, but you rely on it for the success of your business.

 

The Truth about Cloud Costs

 

We have now reached a point where practically every IT product has the word “cloud” in its tagline or description, and if it doesn’t yet, its next release will. But how do you decide when it’s time to stick with on-premises or pivot and utilise cloud? We’ve all heard the horror stories of organisations spending their predicted yearly budget in one month to migrate to the cloud, of providers going bust and huge swaths of data being lost, or even of access and encryption keys being stolen or published. Yet people are moving to the cloud in droves, unperturbed by these revelations. It’s reminiscent of when virtualisation first entered the data centre. We had VM sprawl, but that plague was controlled, limited by the fact that you had a finite pool of resources. With cloud, the concept of limitless resource pools with near-instant access that you can consume with just a couple of lines of code means things can get out of hand very quickly if you don’t stay in control.

 

We have also heard the “Hotel California” stories—those who have tried to leave the cloud for one reason or another and have been unsuccessful. Yet for all these horror stories, there are huge numbers of successes we do hear about (like Uber or Airbnb) and many more we don’t. You only need to look at the earnings calls and reported profits of the big players in the industry to see that cloud as part of an IT stack is here to stay.

 

So, while moving from a hefty CAPEX number on the balance sheet to perceived smaller recurring OPEX numbers may look attractive to some organisations, these costs quite often snowball and, like death by a thousand cuts, you can find tonnes of cloud expense entries on your balance sheet. My personal bill can run to a couple of pages, so an enterprise organisation’s could run into the hundreds if it isn’t careful.

 

That’s because cloud service providers have succeeded in figuring out a cost model for everything. Some services incur an immediate charge upon use, while others have a threshold you have to cross before you are billed for their use.

 

You may have also heard the analogy, “The cloud is a great place to holiday, but not to live.” In terms of costs, I don’t think this is a great statement. It probably has its roots in those early adopters who felt they could lift and shift everything to the cloud without refactoring their workflows. But it isn’t true. The cloud is a great place for speed and scalability for workflows that need huge amounts of resources for a very small window of time. The ability to run a workflow for a short period and then scale it back – essentially renting it – is an advantage of the cloud that piques a lot of people’s interest. The fact that you can have someone provide you a service like email, so you don’t need to rely on several sites running multiple servers managed by a team of specialised individuals, is another characteristic that has people interested in the cloud.

 

Hybrid Cloud Benefits

 

So, what is a proper hybrid cloud model? Well, what may be right for one organisation is different for another, but the overarching idea is to have your workload reside on the right infrastructure at the right time during its lifespan. Careful planning and leveraging multiple levels of security during deployment and management are key. I like to think of it as having a 21st-century outlook on IT: understanding that certain applications or data require you to keep them close at hand or secured in a data centre you own, while others can live on someone else’s tin as long as you have access to them. Not getting tied up thinking that the current location, hardware, and software are permanent is a great stance for tackling today’s IT challenges. Knowing that hybrid IT isn’t something you can buy off the shelf, but something you have to craft and sculpt, is a trait you want in your IT department as these hybrid IT years progress.

 

In the U.K., we have several websites that, as a service, can switch you between energy, broadband, banking, mortgage, credit card, and other providers to one that is cheaper or has the services you require. I can see that in the near future we could very well have IT companies whose business model is to make sure you are leveraging the right cloud for your workload. Maybe I should go now and copyright that…

Here’s a blog that reviews strategies for managing networks that are growing in size and complexity.

 

Federal IT pros understand the challenge agencies face in trying to manage an ever-growing, increasingly complex network environment. While federal IT networks are becoming more distributed, the demand for security and availability is increasing. Yes, at some point, new technologies may make government network operations easier and more secure; until that point, however, growing pains seem to be getting worse.

 

While there may be no one-size-fits-all solution to this problem, there is certainly something today’s federal IT pros can do to help ease those growing pains: invest in a suite of enterprise management tools.

 

Taking a suite-based approach, you should be able to more effectively manage a hybrid cloud environment alongside a traditional IT installation, while simultaneously scaling as the network grows and gaining visibility across the environment—including during IT modernization initiatives.

 

Moving Past the Status Quo

 

Today, most federal IT pros use a series of disparate tools to handle a range of management tasks. As the computing environment expands, this disparate approach will most likely become increasingly less viable and considerably more frustrating.

 

Investing in a suite of IT tools for government agencies that all work together can help save time, enhance efficiency, and provide the ability to be more proactive instead of reactive.

 

Consider monitoring a hybrid-cloud environment, for example. Traditionally, the federal IT pro would have a single tool for tracking on-premises network paths. With the move to a cloud or hybrid environment, services have moved outside the network; yet, the federal IT pro is still responsible for service delivery. The smart move would be to invest in a tool that can provide network mapping of both on-prem and cloud environments.

 

Next, make sure the tool integrates with other tools to deliver a single integrated dashboard. This way, you can combine information about your agency’s cloud environment with on-premises information, which should give you a much more accurate understanding of performance.

 

In fact, make sure the dashboards provide accessibility and visibility into all critical tools, including your network performance monitor, traffic analyzer, configuration manager, virtualization manager, server and application monitor, server configuration, storage monitor, database performance analyzer—in other words, the works.

 

Finally, be sure to implement role-based access controls designed to prevent users with disparate responsibilities from affecting each other’s resources, while still leveraging a common platform.

 

Being able to see everything through a series of dashboards can be a huge step in the right direction. Enabling tools to connect and share information, so you can make more highly informed decisions, is the icing on the proverbial cake.

 

One final word of advice on choosing a platform—make sure it connects discovery and resource mapping, dashboards, centralized access control, alerting, reporting, consolidated metrics, and data.

 

If the platform you’re looking at doesn’t do all of these things, then keep looking. It will eventually prove to be well worth your time and money.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Public clouds provide enormous scale and elasticity. Combined with a consumption-based model and instant deployment methodologies, they provide a platform that is extremely agile and avoids locking up precious capital.

 

Surely it’s a no-brainer, then, and that’s why every company has plans to migrate workloads to the cloud. In fact, it is so obvious that one actually needs to look for reasons why it won’t be a good idea. That may seem counterintuitive, but it is one of the most important steps you can take before starting your cloud migration journey.

 

WHY?

Regardless of size, migration can be a huge drain on time and resources. One of the most cited reasons for the failure of a cloud migration project is that “the project ran out of steam.” Such projects typically lack enthusiasm, resulting in slow progress. Eventually, corners are cut to meet deadlines, and the result is a sub-standard migration and eventual failure.

 

Humans are wired to be more interested in doing something where there is a tangible benefit for them in some way. In addition, they are more likely to remain committed as long as they can see a goal that is clear and achievable.

 

HOW?

Migration is not a single-person job. Depending on the size of the company, teams from different business groups are involved in the process and have to work together to achieve that goal. To ensure success, it is critical to get the backing of all the stakeholders through an honest evaluation of the business need for migration to the cloud. The case must be backed by hard facts, not made just because it’s fashionable.

 

This evaluation goes hand-in-hand with the problems a company is looking to solve. The most effective pointers to them are the existing pain points. Are the costs of running a particular environment too high? Is the company becoming uncompetitive due to a lack of agility? It might even be a case of developing the capability to burst temporarily into the cloud when an occasional requirement comes up, instead of locking up capital by buying, provisioning, and maintaining your own equipment.

 

ANALYSIS

Armed with those pain points and the relevant data gathered, analysis can be done to determine if cloud migration is the only pragmatic solution. SWOT (strengths, weaknesses, opportunities, threats) is a great framework for such an evaluation, and is used by many organisations for strategic planning.

 

It’s important to have all the key stakeholders present when this exercise is done. This ensures that they are part of the discussion and can clearly see all arguments for and against the migration. Those stakeholders include leaders from the infrastructure and application groups as well as from the business side, as they have the best view of the financial impact of current issues and what it would be if action is not taken.

 

The focus of this analysis should be to identify what weaknesses and threats to the business exist due to the current state and if migration to the cloud will change them into strengths and opportunities. With prior research in hand, it should be possible to determine if the move to the cloud can solve those issues. More importantly, this analysis will highlight the financial and resource costs of the migration and if it would be worth that cost when compared against the problems it will fix.

 

CONCLUSION

Effort spent at this stage is extremely valuable and ensures the decision to migrate to the cloud is robust. Furthermore, the analysis clarifies the need for action to all stakeholders and brings them on board with the vision.

 

Once they see the goal and how it will solve their business problems, the result is a commitment from all teams to provide their share of the resources and to participate continually in the project until its successful completion.

Over the coming weeks, I will be writing a series of posts on my varied experiences with hybrid IT. These experiences come from countless projects over the last ten years of working in a consultancy role.

 

So, let’s start with a bit of background on me. I’m Jason Benedicic, an independent consultant based in Cambridge, U.K. I have been working in IT for the last twenty years, and of that, the last ten have been in professional services consultancy across the U.K. My background is traditionally in infrastructure, covering all areas of on-premises solutions such as virtualisation, storage, networking, and backup.

 

A large part of this time has been spent working on Converged Infrastructure systems such as FlexPod. As the industry has changed and evolved, I have moved into additional skillsets, focusing on automation and infrastructure as code, and more recently working in cloud-native technologies and DevOps/CI/CD.

 

What is Hybrid IT?

The term hybrid IT isn’t exactly new. It has been used in various ways since the early days of public cloud. However, the messaging often changes, and the importance or even relevance of this operating model is regularly put into question.

 

For me it boils down to a simple proposition. Hybrid IT is all about using the right tools in the right locations to deliver the best business outcome, making use of cloud-native technologies where they fit the business requirements and cost models, and maintaining legacy applications across your existing estate.

 

Not all applications can be modernised in the required timeframes. Not every application will be suited to cloud. Hybrid IT focuses on flexibility and creating a consistent delivery across all platforms.

 

What will I be covering?

This series of posts will build on the above concepts and highlight my experiences in several areas.

 

Public Cloud Experiences, Costs, and Reasons for using Hybrid IT

This post will focus on my experiences in deploying services into public clouds, ranging from lift-and-shift type migrations to deploying cloud-native services like functions. I will look at where you make cost decisions and how you can assess what the long-term impact can be to your business.

 

Building Better On-Premises Data Centres for Hybrid IT

Here I will look at the ways we can adopt cloud-like practices on-premises by modernising existing data centre solutions, refining processes, and using automation. I will look at the various technologies I have used in recent years, what the major vendors are doing in this space, and ways you can get the most from your on-premises infrastructure by providing a consistent consumption model for your business.

 

Location and Regulatory Restrictions Driving Hybrid IT

Constant change throughout the industry, governments, and regulatory bodies requires us to constantly evaluate how we deploy solutions and where we deploy them. I will look at recent changes such as GDPR, and at how uncertainties surrounding political events such as the U.K.’s exit from the European Union affect decisions. I will also cover how building a hybrid approach can ease the burden of these external factors on your business.

 

Choosing the Right Location for your Workload in a Hybrid and Multi-Cloud World

This post will look at how I help customers to assess and architect solutions across the wide landscape of options, including determining how best to use public cloud resources, looking at what use cases suit short-lived cloud services such as Development/Testing, and when to bring a solution back on-premises.

 

DevOps Tooling and CI/CD in a Hybrid and Multi-Cloud World

My final post will focus on my recent journey in making more use of DevOps tooling and Continuous Integration/Deployment pipelines. I will discuss the tools I have been using and experiences with deploying the multiple life-cycle stages of software into multiple locations.

 

I hope that you will find this series helpful and enjoy reading through my journey.

If there’s one thing I’ve learned in my decades as a data professional, it’s this:

 

Things aren’t what you think they are.

 

Data's ultimate purpose is to drive decisions. But our data isn’t as reliable or accurate as we want to believe. This leads to a most undesirable result:

 

Bad data means bad decisions.

 

As data professionals, part of our mission is to make data "good enough" for use by others. We spend time scrubbing and cleaning data to make it consumable by other teams.

 

But we didn’t go to school to become a data janitor.

 

And yet, here we are. We understand that all data is dirty, but some data is useful.

 

The origins of dirty data

The main reason why data is dirty and often unreliable is simple: human intervention.

 

If not for humans, then data would be clean, perfect, and clear. If you’ve ever played the game “Telephone,” then you understand how humans are awful at relaying even the simplest of data. But it gets worse when you realize that data today is collected from machines that are programmed by humans. There are assumptions being made by both the programmer and the person using the tool. These assumptions lead to data quality issues.

 

Here are but a few examples to consider:

 

Your standards change – Standards change all the time. For example, scientific standards have been updated over the years in an effort to improve precision. In November 2018, the standard definition of a kilogram changed. This means that any system using the old standard is producing wrong calculations. Tools are subject to rounding errors when the standards change. This leads to bad calculations and dirty data.

 

Your data collections fail – Our collection tools and methods can collect the wrong data, or no data. Or worse, they could have issues with unit conversion. You might see (ms) and assume milliseconds, but it could be microseconds. Just because a collection is automated doesn’t mean it can be trusted. Humans are still touching the data.

 

Your data sets are incomplete – You could have a rather large dataset and think, “Jackpot.” But large datasets are often incomplete. They have missing attributes or were put together by someone scraping websites. While the internet provides everyone the ability to access data at any time and for any need, it does not guarantee that the data is valid.

 

Time series collections lack context – Time series collections are all the rage these days. In our effort to be DevOps-y, we stream logs to achieve observability and perform analytics for insights. The problem is that this streaming data often lacks context for what is being measured. Often, the data being measured is changing. The simplest example is retail sales tied to seasons. You need context with your data. And SysAdmins know that measuring CPU by itself doesn’t have enough context—you need to collect additional metrics to tell the whole story.

 

All of the above can lead to the following:

 

Duplicate data – A single event is recorded and entered into your dataset twice.
Missing data – Fields that should contain values don’t.
Invalid data – Information not entered correctly or not maintained.
Bad data – Typos, transpositions, variations in spelling, or formatting (say hello to my little friend Unicode!)
Inappropriate data – Data entered in the wrong field.

 

By now you should understand that it is difficult, nay impossible, to determine if data is ever clean.

 

Methods to clean your dirty data

Here's a handful of techniques that you should consider when working with data. Remember, all data is dirty; you won't be able to make it perfect. Your focus should be making it "good enough" to pass along to the next person.

 

The first thing you should do when working with a dataset is to examine the data. Ask yourself, "Does this data make sense?"

 

Then, before you do anything else, make a copy or backup of your data before you begin to make the smallest change. I cannot stress this enough.

 

OK, so we've examined the data to see if it makes sense, and we have a copy. Here are a few data cleaning techniques.

 

Identify and remove duplicate data – Tools such as Excel and PowerBI make this easy. Of course, you'll need to know whether the data is duplicated or represents two independent observations. For relational databases, we often use primary keys to enforce this uniqueness of records. But such constraints aren't available for every system that is logging data.
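
If you're working in Python rather than Excel or PowerBI, a minimal sketch of this step might look like the following (pandas and the column names here are my own illustrative assumptions, not a tool the author prescribes):

```python
import pandas as pd

# Hypothetical event log; in practice, load this from your own source.
df = pd.DataFrame({
    "event_id": [101, 101, 102, 103],
    "server":   ["web01", "web01", "web02", "web01"],
    "message":  ["reboot", "reboot", "reboot", "disk full"],
})

# Inspect the suspected duplicates before removing anything.
print(df[df.duplicated(keep=False)])

# Drop exact duplicate rows -- safe only once you've confirmed these are
# double-logged events rather than two independent observations.
deduped = df.drop_duplicates()
```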

 

Remove data that doesn't fit – Data entered that doesn't help you answer the question you are asking.

 

Identify and fix issues with spelling, etc. – There are lots of ways to manipulate strings to help get your data formatted and looking pretty. For example, you could use the TRIM function to remove spaces from the text in a column, then sort the data and look for things like capitalization and spelling. There are also regional terms, like calling a sugary beverage "pop" or "soda."
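
In pandas, a rough equivalent of the TRIM-then-sort pass described above might look like this sketch (the values and the "pop"/"soda" mapping are illustrative assumptions):

```python
import pandas as pd

beverages = pd.Series([" Pop", "soda ", "SODA", "pop"])

# Trim whitespace and normalize case so " Pop" and "POP" compare equal.
cleaned = beverages.str.strip().str.lower()

# Fold regional terms onto a single canonical value.
cleaned = cleaned.replace({"pop": "soda"})

print(cleaned.value_counts())  # soda: 4
```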

 

Normalize data – Set a standard for the data. If the data is a number, make sure it is a number. Oftentimes you will see “three” instead of a 3, or a blank instead of a 0. If the data attribute is categorical, make sure the entries are valid for that category.
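
A hedged sketch of both checks, again assuming pandas (the column values and allowed categories are invented for illustration):

```python
import pandas as pd

counts = pd.Series(["3", "three", "", "7"])

# Coerce to numeric; "three" and blanks become NaN so you can find
# and fix them instead of silently mixing types.
numeric = pd.to_numeric(counts, errors="coerce")
print(numeric[numeric.isna()])  # the rows needing attention

# For categorical data, enforce the allowed set of values.
sizes = pd.Series(["small", "Small", "smol", "large"]).str.lower()
allowed = {"small", "medium", "large"}
print(sizes[~sizes.isin(allowed)])  # "smol" gets flagged for review
```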

 

Remove outliers – But only when it makes sense to do so! If the outlier was due to poor collection, then it could be safe to remove. Hammond’s Law states that “Ninety percent of the time, the next measurement will fall outside the 90% confidence interval.” Be mindful that outliers are innocent until proven guilty.
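
One common rule of thumb for flagging candidates, offered here as a sketch rather than the author's specific method, is the 1.5x interquartile range test:

```python
import pandas as pd

readings = pd.Series([10.1, 9.8, 10.3, 10.0, 97.2, 9.9])

# Flag points beyond 1.5x the interquartile range (IQR).
q1, q3 = readings.quantile([0.25, 0.75])
iqr = q3 - q1
inliers = readings.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

print(readings[~inliers])    # 97.2 -- investigate before deleting
trimmed = readings[inliers]  # remove only after ruling out a real signal
```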

 

Fix missing data – This gets... tricky. You have two options here. Either you remove the record, or you update the missing value. Yes, this is how we get faux null values. For categorical data, I suggest you set the data to the word “missing.” For numerical data, set the value to 0, or to the average of the field. I avoid using faux nulls for any data, unless it makes sense to note the absence of information collected. Your mileage may vary.
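
Following the suggestions above, a minimal pandas sketch (the frame and column names are illustrative assumptions):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "region": ["east", None, "west"],
    "sales":  [125.0, np.nan, 210.0],
})

# Categorical: make the absence explicit rather than leaving a faux null.
df["region"] = df["region"].fillna("missing")

# Numerical: zero or the column mean -- whichever suits the analysis.
df["sales"] = df["sales"].fillna(df["sales"].mean())
```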

 

Summary

We all fall into the same trap: we never have time to do it right in the first place, but somehow think there will be time to fix it later.

 

When it comes to data, it’s a never-ending process of curating, consuming, and cleaning. And no matter how much you scrub that data, it will never be clean… but it can be good enough.

 

Life’s dirty. So is your data. Get used to it.

You’ll find no shortage of blog posts and thought pieces about how cloud computing has forever changed the IT landscape. The topic is usually addressed in an “adapt or die” argument: cloud is coming for your on-prem applications, and without applications in your data center, there’s no need for IT operations. You’ll encounter terms like “dinosaur” and “curmudgeon” when bloggers refer to IT professionals who have decades of experience in the data center, but have not yet mastered the skills necessary to manage a hybrid or cloud-native environment. It’s a bit self-serving. While it’s partially true that cloud will drastically change how your Ops staff go about their work day, the notion that on-prem IT is dead is a bit much.

 

At least for now.

 

While you could opt to continue about your day managing on-prem like cloud is just a fad (remember when that was the overwhelming sentiment in the mid-2000s?), you’d be wise to use this time to determine if your IT operations staff is structured to facilitate a pivot to the cloud. The journey from IT Ops to No-Ops is a long one, but rest assured, it will happen. So how do you get your intrepid Ops teams ready?

 

OMG NO MOAR SILOS

Dividing operations teams into groups based on common technical skillsets and resources is deceptively appealing, but it breaks down when it comes to cloud. In many cases, these silos are the result of contracting preferences: many large organizations find it easy to contract for a specific skillset (e.g., let’s hire a company to handle and monitor our Windows servers). That contract is then managed by a single manager who simply refers to the team as “the Windows team.” Or maybe your mainframe days are not too distant a memory, and the big iron crowd referred pejoratively to the new IT staff as “the Windoze team,” and it stuck. A decade or two later, you find yourself with the familiar silos: Windows, storage, network.

 

This is not an unreasonable approach to managing legacy on-prem. But it does get complicated when you layer on abstraction technologies like virtualization. And it gets absolutely bonkers when you go hybrid IT. Don’t believe me? Take your most experienced storage engineer and put her in front of the Google Cloud Storage console for the first time. Suddenly, all that experience and knowledge of storage protocols and disk partitioning strategies becomes irrelevant. Successfully managing cloud storage is an exercise in working directly with your development teams, becoming fluent in their development and deployment practices, and ceding control of the storage infrastructure to a cloud service provider.

 

The same is true for staff who may find themselves provisioning Kubernetes (that’s k8s for the cool kids) pods after deploying on-prem VMs for a decade. The very nature of provisioning resources in any cloud is an application-centric endeavor; it’s time to let go of the VM as the ideal unit of abstraction. We don’t have the problems in the cloud that pushed us to x86 virtualization in the data center.

 

One option is to shake up your teams a bit, and look to Agile team-building practices to guide your transition. Look beyond the eye-rolling crowd here; not everyone likes Agile and nearly everyone has a bad story about a failed shift. Build teams that cover the whole spectrum of your IT needs, and give those teams time to gel. Forming, storming, and norming are essential to achieve that sought-after team stage of performing.

 

By decoupling your teams from discrete IT resources, you’re encouraging a culture change that’s essential to adopting cloud.

 

Training for the Cloud

IT Ops veterans acquire a wealth of knowledge about more than just how IT works in your organization: they know how to find points of failure in systems and how to design around those failures. They have intimate knowledge of your applications, and know which ones are more politically sensitive than others. And their diagnostic and troubleshooting skills are second to none. These are exactly the skillsets you want in a cloud engineer, as they are not specific to on-prem infrastructure. With the right training, your staff can easily adopt cloud services. For example, a systems engineer who knows that you never put all of your VMs in a single chassis would quickly realize that you should never put all of your cloud VMs in a single region. Encouraging your on-prem staff to become familiar with the lexicon of cloud will make your journey to the cloud that much simpler.

 

Make Mistakes in Cloud Labs

Most CSPs provide free or low-cost access to lab environments so your staff can get acquainted with the UX of the cloud management console. Or you may allocate some budget to each of your IT ops staff to use in the provisioning of various cloud microservices. Whatever your method, give your Ops staff a safe place to make mistakes. They surely will, and you want those mistakes to be isolated from your production environment. Your staff will gain valuable first-hand experience with these new tools.

 

Just the Beginning

It’s folly to believe you can describe how to prepare your Ops teams for the cloud with just a few ideas. The transition to cloud is a journey, as they say. That word is carefully selected to reflect the long and arduous nature of moving from all on-prem to hybrid IT. Don’t expect your staff to simply become site reliability engineers overnight. Give them time and the resources to adapt and grow.

In Austin this week for our annual Head Geek Summit, and to film some secret stuff. I'm thankful for the warm Austin weather this week, but can't understand why everyone is wearing coats.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Apple replaced 11 million iPhone batteries in its $29 program, report says

Apple has tricked consumers into phone hardware upgrades for a decade, and consumers are starting to understand they don’t need a new phone every year.

 

IRS is recalling 46,000 workers — nearly 60 percent of its workforce — from furlough to handle tax refunds without pay

Nothing says "refund" like having a disgruntled IRS employee review your tax return.

 

Japan’s robot hotel lays off half the robots after they created more work for humans

Ageism against robots is a thing, apparently. “Many of the robots that have been retired were in service for years, making them outdated.”

 

Mother of All Breaches Exposes 773 Million Emails, 21 Million Passwords

Chances are you have an email address in this breach. More importantly, you can check to see if your passwords have been involved in previous breaches. I recommend you take time, today, to review your passwords.

 

Mark Your Calendars: The End of Support for Windows 7 is Jan. 14, 2020

While I am certain your company will hold on well past the Win7 expiration date, I cannot stress enough the importance of upgrading to the latest versions of software available.

 

Microsoft no longer sees Cortana as an Alexa or Google Assistant competitor

I’m not sure anyone saw Cortana as a competitor, either. But it’s interesting to see Microsoft admit they missed their chance to compete, rather than waiting another five years as they did with Windows Phone.

 

Introducing AWS Backup, a centralized AWS backup service

I felt a great disturbance in the Force, as if millions of DBAs suddenly cried out in terror...

 

A day in the life of a Head Geek on the move - rental car, building access card, thank you note from an airline, and lunch:

Here’s an interesting blog about how storage needs to evolve to support modernization.

 

With the recent explosion in mobile device use, government agencies are facing a deluge of data. As a result, they need to rethink their data storage solutions and consider one that can alleviate capacity concerns and centralize storage management.

 

Migrating to Software-Defined Storage (SDS) can give agencies a more modern and reliable storage infrastructure that offers greater visibility and control over their networks. SDS allows administrators to monitor network performance across numerous storage devices, providing a centralized dashboard view of IT assets. It is also designed to deliver deeper insight into what types of data employees are storing on agency networks.

 

SDS abstracts storage resources to form resource pools, which can help agencies increase scalability and simplify administration of the network environment. It also makes storage infrastructures hardware-independent, which can lead to more efficiency, greater security, and increased agility.

 

SDS is hardware-agnostic, allowing data to be stored regardless of where it originates. In addition, SDS decouples data security from specific hardware and enables users to have multiple tiers of storage, managed through policy and automation. If it were necessary to make a security-relevant change to Server Message Block, from version 2 to 3, for example, admins could make the adjustment at the software level without having to restructure and migrate government data.

 

One of the biggest benefits of SDS can be enterprise infrastructure scalability. SDS can be configured based on an organization’s needs, improving flexibility. Switching to SDS can also reduce both provisioning time and human interaction with the system through the implementation of automated policy and program processes, empowering people to do more with the same resources.

 

Another advantage of SDS is that it reduces the number of management interfaces users must work with. While single-pane orchestration is not yet here, SDS can simplify storage administration significantly. SDS can also work in a multivendor environment.

 

SDS can require significant upfront investments of time and money to migrate data, particularly in an enterprise that operates at government scale. However, it’s not a matter of if migration to SDS will occur, but rather when.

 

To begin realizing the efficiencies of SDS, IT managers must do some upfront planning. For starters, they should evaluate the kind of storage they currently have and perform a gap analysis between their starting point and their goal. It’s important to determine the types of data on the network, its criticality, and the services accessing that data. Infrastructure services like virtual desktop environments or Exchange (as an example) have different performance requirements and may need different storage.

 

As government heads toward large, complex enterprises, legacy storage infrastructures must modernize to adequately handle agencies' data growth. Traditional architectures can be too costly to maintain and lack scalability, but fortunately, SDS can help.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

“Who is the wise one? He who learns from all men.”

  – Shimon Ben Zoma

 

Wisdom can be gained from anywhere and anyone—even the most unlikely places. For me, that includes nuggets of capital-T “truth” in popular culture, primarily superhero movies.

 

As I did before with the movies “Doctor Strange” and “Logan,” I found that “Spider-Man: Into the Spider-Verse” held some universal lessons for those of us working in technical disciplines.

 

Spoilers (duh)

As with any deep discussion of movie-based content, I’m obligated to point out that there will be many spoilers revealed in what follows. If you have not yet had a chance to enjoy this chapter of the Spider-Man mythology, it may be best to bookmark this for a later time.

 

People may do the same thing you do, but they got here differently.

The entire premise of the movie is that Spider-Folk from different dimensions (Peter Parker’s Spider-Man, Miles Morales’ Spider-Man, Gwen Stacy’s Spider-Woman, Peni Parker’s sp//dr, the other Peter Parker’s Spider-Man Noir, and Peter Porker, the Amazing Spider-Ham) are brought into a single dimension (the home of Miles Morales). While each of these individuals embodies, with slight variations, the abilities and ideals of the spider-themed superhero, it’s clear from the beginning that the path that led each person to their current state is very different. Peter Parker makes this clear when he tells Miles, “Remember, what makes you different is what makes you Spider-Man.”

 

The lesson for IT should be clear. Even on a team of equally-qualified SysAdmins, network engineers, DevOps practitioners, or monitoring specialists (especially monitoring specialists, in fact), our abilities may be similar, but the path we took to acquire them is unique and personal. We’re at our best when we recognize and value those differences in perspective and approach, even as we appreciate the way our colleagues can execute as a team with consistency.

 

Everyone’s the star of their own story.

Everyone’s the star of their own movie, not a sidekick in yours. This was clear from the start of “Into the Spider-Verse.” Each Spider-Person was used to being their dimension’s “one and only.” Even when they were in the same room, they maintained their individuality.

 

But one trope that the movie avoided was the “no, *I’m* the one-and-only Spider-<Person>. You must be an impostor!!” From the very start, the Spider-Folks understood they had to work together, leverage each other’s strengths, and support each other’s deficits.

 

We can take that lesson to heart. Even though we’re all starring in our own show, we can be co-heroes in the larger story—shining when our individual skills are called upon, supporting others when they need it, knowing that we aren’t diminished when we raise others up.

 

Perfect (whether that’s a situation, technique, or person) doesn’t exist. Or if it does, it only exists briefly.

The writers and the story itself confirm that the “real” Spider-Man, the one from the dimension Marvel has stated is “ours” (Earth 616), is the older, slightly downtrodden, thicker one. So, what about the first one? The Peter who is sandy-haired, young, and far more put-together?

 

That was an idealized Spider-Man. An inspirational, but unrealistic model, to which Miles (as well as the city) might aspire, but never achieve. Because life is messy. Because in any plan, little things go wrong along the way.

 

Likewise, our “perfect” design crumbles in the face of real production load, undocumented data center layouts, or the other things that make up “just another day in IT.”

 

Just because that “perfect” Spider-Man (or network design) didn’t last very long under the onslaught of implementation hiccoughs (or Kingpin’s fists) doesn’t mean we can’t draw strength and inspiration from them. The key—which Miles learned and we should learn as well—is to adapt to changing circumstances, plan for contingencies, and fall back on grit and determination to get through the hardest parts.

 

Even a bad mentor may have something to teach you.

Life has handed Peter a few raw deals, and he’s definitely the worse for wear because of it. Cynical and world-weary, he’s not the greatest teacher for Miles. Despite that, Miles is a ready student at a point in his life where he still has a sense of wonder, but also the street smarts to see the lessons in people’s actions over their words. Miles’ willingness to believe in Peter’s ability to show him the ropes as Spider-Man carries both the student and the mentor through.

 

In a long (and hopefully fulfilling) career in IT, we can learn from many different people. While some of these mentors will be gifted with the ability to see us clearly and say the right thing to point us in the right direction, far more will be well-meaning but flawed individuals who may be pressed for time, short on patience, and caught up in their own poor choices. Nevertheless, they can teach us something beyond serving as examples of what not to do. Being a student of life is one of the most valuable skills any IT professional can aspire to attain because it leads to more discoveries.

 

Mentoring will likely teach you more than it teaches your student.

Flipping things around, it’s not such a stretch to see ourselves in the role of the time-scarce and impatient mentor haunted by impostor syndrome. “Who am *I* to teach anything to anyone?”

 

Nobody is ever ready for responsibility when they first set out. It’s only by learning-as-we-go that we discover how much of a mentor and teacher we can be. One of the unexpected benefits is that we become better in the process. Better teachers, certainly, but also better professionals, team members, and even people.

 

Peter enters Miles’ dimension very nearly washed-up, ready to hang up his web shooters. While returning home gives him an immediate motivation, you still get a sense that he’ll go right back to the status quo once he gets back. In teaching Miles what it means to be a webslinger—both the ideal, as well as Peter’s more nuanced reality—Peter rediscovers and re-aligns his own inner compass. In the end, we see Peter take steps in his own life that were unthinkable before meeting his inter-dimensional protégé.

 

Nobody is a teacher, everybody is a student.

“Into the Spider-Verse” teaches that, by and large, everybody is a student. While you could also understand that to mean that everybody has something to teach, the lesson and focus here is on ourselves. When we are teaching, we can learn.

 

From the big things, like the aforementioned life lessons that Earth 616 Peter learned from Miles and Gwen, to the equally important life lessons Gwen learned about the value of being open to friendships, to the concept of a Rubik’s Cube that baffled black-and-white Spider-Man Noir, the characters learn from each other and the world they are thrown into, and are better for it.

 

Never assume you know everything about someone.

Miles’ Uncle Aaron is a pivotal character. We understand at the start of the movie that he’s something of a black sheep—he’s not on speaking terms with his brother (Miles’ father), his job takes him out of town unexpectedly, and he’s not able to settle down. But he’s also the “cool uncle” that Miles turns to for wisdom. The twist comes when we discover Aaron’s secret identity: the villain known as The Prowler, who is on Kingpin’s payroll and happens to have the new Spider-Man in his sights. It’s only during one of the climactic fights that Miles and Aaron recognize each other for who they are, in costume and out. In that brief second of recognition, Aaron decides to save, rather than kill, Spider-Man. The consequence for this is swift and, in a parallel to the “traditional” Spider-Man story we all know, we see the hero cradling his uncle’s dead body in his arms.

 

While “you can’t save everyone” is as much a part of the Spider-Man trope as colorful tights and swinging on webs (the Spider-Folk tell Miles as much), there’s a more important lesson for the audience, especially for those who work in IT.

 

Uncle Aaron had very complex and personal reasons for staying away from Miles’ family, for becoming (and remaining) Prowler, and for saving Miles. These reasons weren’t obvious to anyone around him, but that didn’t make them any less important or real.

 

We can’t assume to know everything about a person. We may see their actions, but we cannot always understand their motivations, their reasons, the things that drove them to this moment. My fellow Head Geek Thomas LaRock writes about this here, comparing people’s motivations to a “MacGuffin” used in storytelling.

 

Finding out the reasons and motivations of those around us may make it easier for us to accept their decisions and actions, but it’s not necessary. What is necessary is accepting that each member of our team has those reasons and motivations in the first place, even if we aren’t privy to them; that those reasons and motivations are valid (at least to them); and that we need to respect them. We don’t have to agree with them. But until we know what they are, we can’t dismiss them as pointless, useless, or non-existent.

 

The Adventure Continues

That’s certainly not all I have to say on the subject. Stay tuned for the next issue.

 

Until next time, true believers,

Excelsior!

 

1 “Spider-Man: Into the Spider-Verse,” 2018, Sony Pictures Entertainment

Still no sign of snow here in New England, but baby it's cold outside! Looks like snow will get here soon enough, just in time for ice dam season. Lucky me.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Top 10 IoT vulnerabilities

A good list that applies to IT security in general. Have a look and see how many of these you have overlooked.

 

The Feds Cracked El Chapo's Encrypted Comms Network by Flipping His System Admin

Nice reminder that the biggest threats are often from within.

 

Filled with malware, phishing and scams, does the web need a safety manual?

Yes.

 

IBM’s new quantum computer is a symbol, not a breakthrough

20 qubits is not a lot, but it’s a start. We are still on track for quantum supremacy within 8-10 years.

 

GitHub is Now Free and That’s Great

Microsoft continues to embrace open source, especially if it is open source code hosted on servers they own.

 

AWS gives open source the middle finger

AWS gives open source the middle finger, whereas Microsoft has been making efforts to embrace open source. What Bizarro World is this?

 

Costco Now Sells A 27-Pound Tub Of Macaroni And Cheese That Lasts 20 Years

I’m not saying that I want this 27-lb bucket of mac ‘n cheese, but I’m not saying I would send it back if someone had it shipped to my house.

 

Sometimes we all need a little "retail therapy":

 

 

SWUG began in 2016 as a largely volunteer effort, cobbled together out of spare time and budget by a core set of dedicated SolarWinds staff and THWACK fanatics. The effort hit its stride in 2017, standardizing the format, honing the style, and gathering data from attendees.

 

And then in 2018, all hell broke loose. SWUG went to more cities than ever before, presenting on a wider range of topics and inviting speakers from every corner of the SolarWinds organization, and even inviting some of our MVPs to take the podium and share their valuable knowledge and experience with the audience.

 

And it was that last part—the variety of speakers—that caused a very small but beloved change for me. As Head Geek, I had the best seat and the best job in the house: emcee. I got to introduce each of our speakers, frame their topics, and then stand back and watch in awe as each and every one of them brought the house down with their skills and knowledge.

 

The introductions themselves became something of a labor of love for me. This group of superstars needed more than a simple recitation of their name and title. They needed to have their praises sung and their accomplishments shouted from the rooftops so the SWUG attendees understood just what an incredible individual they had in front of them, and how deep the SolarWinds bench truly was.

 

However, in retrospect, I may have gone a bit overboard. But I'll let you be the judge. Because as we move into 2019, SWUG is once again evolving, and it might be time to set aside these introductions in favor of some new form (note: I say "might").

 

Nevertheless, I submit for your reading pleasure "A Year of SWUG Introductions", i.e. all the ways I introduced speakers at the 2018 SWUG events.

 

Consistency Is the Key

In many cases, I was remarkably consistent when we had regular speakers such as Chris O'Brien, Steven Hunt, and Kevin Sparenberg:

 

Chris had two main variations:

  • A man whose name is literally part of the source code for NPM, who is known as the father of NetPath, PM Chris O'Brien.
  • A man whose name is literally part of the source code for NPM, who had an Easter egg built in his honor, PM Chris O'Brien.

 

Similarly, Steven (aka "Phteven"):

  • My kayfabe arch nemesis Steven Hunt, Windows fan boy, and Principal Product Strategist (Systems).
  • My kayfabe nemesis and, conversely, my little Linux protégé, PM Steven Hunt.

 

And Kevin only had this one intro...

 

  • The only person here who's landed gentry as well as a SolarWinds PM, a former customer, and a THWACK MVP, his Lairdship Kevin Sparenberg.

 

...until the very last one, because it was such a special moment for him:

 

  • This year he's acquired more titles than some people change shoes. He's also the only person here who is both a member of landed gentry as well as a former customer, SolarWinds employee, and a THWACK MVP. Please help me congratulate him on his 10-year THWACKniversary and welcome our DM of community (or THWACKbassador), his Lairdship Kevin Sparenberg.

 

Variety Is the Spice of Life

For the UX team, I just kept doing variations on a theme:

  • Combine the observational skills of Sherlock Holmes with the empathic skills of a Betazoid ship's counselor, you pretty much end up with our manager of UX, Tulsi Patel.
  • Cross rainbows and sunshine with a Betazoid ship's counselor asking, "How does this wireframe make you feel?" and you pretty much have Kellie Mecham, User Experience Researcher.
  • Combine the observational skills of Sherlock Holmes with a Betazoid starship counselor asking, "How does this wireframe make you feel?" and you pretty much have Katie Cole, User Experience Researcher.

 

While at other times I was clearly at a loss

(admittedly, these all came from one of the first SWUGs where I barely did any introductions at all):

  • On Drums, SE extraordinaire Mario Gomez.
  • Director of Cinematography and Certification, Cal Smith.
  • Itinerant food critic and Fed SE, Andy Wong.
  • Chief roadie Kyle Lohren, video production manager.

 

For the guest MVP speakers, I tried to roll out the red carpet:

  • From Atmosera comes a person who's been an MVP as long as I have: Byron Anderson.
  • From Loop 1, we have a programming force of nature and an avid learner of all the things, THWACK MVP Steven Klassen.
  • When I was at Cardinal, Josh joined our team one month before I ended up getting the Head Geek job. He's had every right to punch me in the face, but I lucked out because he's not only Canadian, he's just an all-around amazing guy as well as a THWACK MVP, Josh Biggley.
  • He began his IT career with a walk-on role in Star Wars, but now he divides his time between monitoring and specializing as a Mini Cooper stunt driver. Please welcome THWACK MVP Richard Phillips.

 

The “Bodyguard to the Stars” shtick ended up being a go-to for newcomers

(Those I may not have known well enough to tease):

  • Bodyguard to the stars with top secret clearance, Federal and national Sales Engineer Sean Martinez.
  • Bodyguard to the stars and former stunt driver for Tom Cruise, Federal Sales Engineer Arthur Bradway.
  • Bodyguard to the stars, world-famous He-Man cosplayer, and Virtualization PM Chris Paap.

 

Saving the Best For Last

But for many folks, I let the originality flow:

  • A pretty pink unicorn with rainbow painted brass knuckles and top-secret clearance, Head Geek Destiny Bertucci.
  • Forget about knowing where the bodies are buried or who has the pictures. This person knows which NPM questions you got wrong – Nanette Neal, Program Manager for SCP.
  • Formerly a Calvin Klein model, before he gave up fitted pants for NetFlow packets - Product Manager Joe Reves.
  • Just like Locutus, it takes incredible willpower to escape the Borg collective known as the SolarWinds sales group, and yet Robert Blair did the impossible and is now our Customer Advocacy Manager.
  • Whenever you see Tom Cruise doing a mountain climbing scene, you're actually watching his stunt double, Product Manager Serena Chou (they're about the same height).
  • We sometimes find him sleeping in his car, not because he's fallen on hard times, but because he simply loves his Jeep that much. Please welcome Network Management Product Manager Jonathan Petkevich.
  • Clocking in at 6'5", he's officially the tallest person in our department and therefore the most important to us because he can reach the really good Scotch up on the tall shelves - Senior web properties manager Ben Garves.
  • Out at conventions he has fun playing the role of Patrick Hubbard's kayfabe arch-nemesis, but in the office, he's got veto power for every new feature or upgrade – Our VP of product strategy Mav Turner.
  • In D&D, one of the most interesting PCs is the multi-classed character. At SolarWinds we value our multi-class staff. She started out as a UX illusionist and is now part of our rogues’ gallery of product marketing managers – Katie Cole.
  • What happens when someone with a degree in mechanical engineering takes a right turn at San Antonio and ends up at a software company? You get a product marketing manager who can tech you under the table. Lourdes Valdez.

 

Last But Not Least

And finally, as I have done at every SWUG this year, I'd like to introduce and thank the people who make THWACK a reality every day:

  • And of course, Ms. THWACKniss Everdeen herself, the heavenly source of THWACK point blessings, Community Cat Wrangler Danielle Higgins.
  • And of course, the woman whose THWACK ID sends everyone into spontaneous giggles, who can repeat, from memory, every post ever banned from THWACK – Wascally Wendy Wabbot... I mean Abbot.

 

If you were able to join us for a SWUG this year, I hope this brought back some fond memories. And if you couldn't make it out to join us, I sincerely hope you'll have that chance in 2019. Read more from SWUG Head Master, Kevin Sparenberg, on what you can expect at these events this year.

 

Or just cut to the chase and join us for FREE at a SWUG in 2019:

Here’s an interesting blog about the Internet of Things and battlefield advances.

 

The internet of things (IoT) is advancing into the theater of war and becoming the Internet of Battlefield Things (IoBT).

 

Planning for the IoBT

 

The U.S. Army Research Laboratory is devising ways to turn inanimate and innocuous objects, including plants and stones, into connected information gathering points. This work complements initiatives undertaken by DARPA to provide war fighters and their commanders with critical information through the innovative use of smartphones, floating sensors, and more. It also recently began working with leading universities on these initiatives.

 

According to a report from the IEEE Computer Society, the IoBT will lead to “an unprecedented scale of information produced by the network sensors and computing units.” For IT teams that are already overtaxed and undermanned, here are some things to consider.

 

Monitoring the Monitors

 

Ensuring the security of IoBT networks will most likely be uncharted territory for network administrators. The military will not control nontraditional IoBT sensors or their pathways (it’s hard to control a rock, for example). Also, enemies could use similar tactics and their own unorthodox devices to breach U.S. defense networks.

 

Gaining greater visibility into the devices and connections using these networks will be more important than ever. Automated tools that scan and alert to suspicious devices will likely prove invaluable to ensuring that only devices deemed secure are gaining access to their IoBT networks. Watch lists should be established to account for rogue or unauthorized devices and sensors. The goal should be to create an intelligent and automated network of devices that can respond to potential threats or service interruptions with minimal input from an operator.

 

Ready for Change

 

The 2018 SolarWinds public sector IT Trends Report found that a large portion of survey respondents ranked inadequate organizational strategy and lack of user training as barriers to network optimization. What happens when something as complex as IoBT management is thrown into the mix? We should remain cognizant that the size and complexity of these networks change quickly, and that the devices on them are becoming more diverse.

 

Policies and procedures should be clearly articulated to define what constitutes a potential risk and how to report it. Military IT pros can be trained and reminded about vigilance, or as the old adage goes, “If you see something, say something.” Equally important, they should know exactly who to say it to.

 

IT teams should continuously evaluate and reevaluate their tools to ensure they are adequate to address their security concerns and network complexity. Like networks, threat vectors evolve and can change quickly. Regularly testing network tools and adjusting security protocols is important to a healthy, proactive, and robust security posture.

 

Winning the Battle

 

The IoBT may seem like something straight out of “Starship Troopers,” but it is very real and evolving rapidly. The IoT will likely only become more pervasive, and soon it will hit the battlefield in force, so administrators can benefit from getting ahead of the challenge now.

 

Find the full article on C4ISRNET.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

The conservation of quantum information is the principle that information can neither be created nor destroyed. Stephen Hawking used it to explain how a black hole does not consume photons like a giant cosmic eraser. It is clear to me that neither Stephen Hawking nor any quantum physicist has ever worked in IT.

 

Outside the realm of quantum mechanics, in the physical world of corporate offices, information is generated, curated, and consumed at an accelerated pace with each passing year. The similarity between the physical corporate world and the quantum mechanics realm is that this data is never destroyed.

 

We are now a nation, and a world, of data hoarders.

 

Thanks to popular practices such as DevOps, we are obsessed with telemetry and observability. System administrators are keen to collect as much diagnostic information as possible to help troubleshoot servers and applications when they fail. And the Internet of Things has a billion devices broadcasting data to be easily consumed by Azure and AWS.

 

All of this data hoarding is leading to an accelerating amount of ROT (Redundant, Outdated, Trivial information).

 

Stop the madness.

 

It’s time to shift our way of thinking about how we collect data. We need to become more data-centric and do less data-hoarding.

 

Becoming data-centric means that you define goals and problems to be solved BEFORE you collect or analyze data. Once these goals or problems are defined, you can begin the process of collecting the necessary data. You want to collect the right data to help you make informed decisions about what actions are necessary.

 

Here are three ways for you to get started on becoming more data-centric in your current role.

 

Start with the question you want answered. This doesn’t have to be a complicated question. Something as simple as, “How many times was this server rebooted?” is a fine question to ask. You could also ask, “How long does it take for a server to reboot?” These may seem like simple questions, but you may be surprised to find that your current data collections do not allow for an easy answer without a bit of data wrangling, as the sketch below illustrates.
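To make the “data wrangling” point concrete, here is a minimal sketch that answers both reboot questions by pairing shutdown and startup events. The log format is invented for illustration; real sources (syslog, Windows event logs) need their own parsing:

```python
from datetime import datetime

# Invented log format for illustration only.
RAW_LOG = """\
2018-12-01 02:14:00 srv01 SHUTDOWN initiated
2018-12-01 02:17:32 srv01 STARTUP complete
2018-12-15 03:02:10 srv01 SHUTDOWN initiated
2018-12-15 03:05:45 srv01 STARTUP complete
"""

def reboot_stats(raw):
    """Count reboots and measure how long each one took."""
    events = []
    for line in raw.strip().splitlines():
        date, time, _host, action, _ = line.split(maxsplit=4)
        events.append((datetime.fromisoformat(f"{date} {time}"), action))
    downs = [t for t, action in events if action == "SHUTDOWN"]
    ups = [t for t, action in events if action == "STARTUP"]
    durations = [(up - down).total_seconds() for down, up in zip(downs, ups)]
    return len(durations), durations

count, durations = reboot_stats(RAW_LOG)
print(f"{count} reboots; average {sum(durations) / len(durations):.0f}s each")
```

Even here, answering “how long?” required correlating two different event types, which is exactly the kind of wrangling you want to plan for up front.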

 

Have an end-goal statement in mind. Once you have your question(s) and you have settled on the correct data to be collected, you should think about the desired output. For example, perhaps you want to put the information into a simple slide deck. Or maybe build a real-time dashboard inside of Power BI. Knowing the end goal may influence how you collect your data.

 

Learn to ask good questions. Questions should help to uncover facts, not opinions. Don’t let your opinions affect how you collect or analyze your data. It is important to understand that every question is based upon assumptions, and it’s up to you to decide if those assumptions are safe; an assumption is safe if it is something that can be measured. For example, your gut may tell you that server reboots are a result of OS patches being applied too frequently. Instead of asking, “How frequently are patches applied?” a better question would be, “How many patches require a reboot?” and then compare that number to the overall number of server reboots.

Summary

When it comes to data, no one is perfect. These days, data is easy to come by, making it a cheap commodity; when data is cheap, attention becomes the scarce resource. By shifting to a data-centric mindset, you can reduce data hoarding and the amount of ROT in your enterprise. With just a little bit of effort, you can make things better for yourself and your company, and help set the example for everyone else.

Welcome back to the blog series that never ends, but does take time off for the holidays. I hope everyone had a wonderful time with family and friends these past two weeks. We are about 2.5% done with 2019 at this point. Don’t wait to get started on whatever goals you have set for the year.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

It's Time for a Data Bill of Rights

Yes! Data rights should protect my privacy by default, not force me into consenting to the opposite.

 

Marriott breach included 5 million unencrypted passport numbers

Passport numbers are considered PII and should have been protected in some manner. This is a large oversight by Marriott, and it makes me wonder what else they are missing.

 

Is blockchain living up to the hype?

No.

 

'Tracking every place you go': Weather Channel app accused of selling user data

Well, now we know the real business model behind reporting the weather 24/7.

 

How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.

Wait, you mean that random dude online bragging about success as a ‘full stack blockchain developer’ making $500k a year may not be telling the truth? Shocking.

 

Can't unlock an Android phone? No problem, just take a Skype call: App allows passcode bypass

My first phone was a Droid, and I recall this functionality. But I thought it was a feature at the time. Now I see it’s a bug. Funny how perspectives can change over time.

 

100+ Lessons Learned for Project Managers

Wonderful article, filled with great advice such as ‘Reviews, meetings, and reality have little in common.’

 

With 20 guests for Christmas dinner, this 20lb roast beast was just enough.

 

Here’s an interesting blog that looks into the importance of two-factor authentication for the public sector as digital crime increases.

 

“It won’t happen to me” can be naïve, and perhaps even irresponsible, in an era that sees digital crime grow each day.

 

Awareness Through Education

 

Google has done much to elevate online security awareness. Most account users will be familiar with its 2-Step Verification process, designed to make it much harder for hackers to gain access to files and information. Known generally as Two-Factor Authentication (2FA), this additional layer of security requires not just a username and password, but also something completely unique to that user, whether a piece of information or a physical token. It’s based on the concept that users gain access only by presenting both something they know (knowledge) and something they have (possession).
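For the curious, the “something they have” factor is often a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. Here is a minimal sketch using only the Python standard library; the shared secret is a made-up example value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    """Derive the current one-time code from a base32 shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)  # 30-second time window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's device compute the same code independently;
# access requires both the password (knowledge) and this code (possession).
print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential
```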

 

Leading by Example

 

In a public sector context, data sits at the heart of organizations, in an environment shaped by stringent data regulations and growing security threats. As such, a renewed emphasis has been placed on expanding the use of strong multifactor authentication that’s resistant to attack, particularly for systems accessed by the public. Two years ago, the U.S. government launched a Cybersecurity National Action Plan (CNAP), which included mandatory two-factor authentication for federal government websites and government contractors.

 

The Local 2FA Landscape

 

From a U.K. perspective, a growing number of government agencies are deploying encryption to help secure critical information properties. For example, the Code of Connection (CoCo) and Public Services Network (PSN) frameworks recommend that any remote or mobile device authenticate to the PSN via two-factor authentication. The uptake of two-factor authentication in public sector organizations is rising, with some vendors delivering authentication-as-a-service that can be used to authenticate cloud applications, infrastructure, and information.

 

Better Security = Peace of Mind

 

Two-factor authentication provides reassurance for both users and system administrators. Biometric authentication, such as a fingerprint, is becoming more common and can be used in diverse systems such as websites, enterprise applications, and secure thumb drives.

 

The Practical Way Forward

 

Organizations will need to ensure that their back-end solutions are designed and in place to support the technology and work properly for system users. Thought also needs to be given to education and awareness when introducing new authentication systems; the change could otherwise become overwhelming, particularly when considering that many public sector organizations may have only recently started to develop a digital transformation strategy. In the NHS space, for example, just 24% of trusts and Clinical Commissioning Groups (CCGs) have begun to develop strategies.

 

Processes such as cloud adoption and 2FA are all part of the same digital transformation journey, and having the appropriate government cybersecurity tools to manage each of these components can go a long way towards helping public sector organisations understand what is needed to best support them and their publics. Striving for more secure authentication systems that provide far greater confidence in the identity of both end users and system administrators is a prime example of this, and it is why 2FA matters.

 

Find the full article on Open Access Government.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

As we stand here, in the dawning moments of a new year, let’s all take a moment to acknowledge the acts of generosity, enthusiasm, and bravery of our community in sharing their personal stories, observations, and lessons. Through them, the members of THWACK have transformed the last 31 days into an exercise in reflection, contemplation, and growth. I couldn’t be more proud to be part of this group, and part of a company that fosters these types of conversations.

 

While I have the individual post summaries and a selection of comments below, I wanted to share some statistics with you to emphasize just how engaged everyone was in this dialogue. From December 1-31, the Writing Challenge generated:

  • 1 lead post each day from 31 different authors, including 14 THWACK MVPs
  • 1,257 comments
  • 22,083 views
  • ...from 1,931 people
  • ...spread across 19 countries

 

Some other informal statistics* worth noting:

 

  • 127 people mentioned “Back to the Future,” “Doctor Who,” and/or “The Butterfly Effect”
  • 4,846 expressed concerns about altering the past
  • And 1,332 also worried they wouldn’t be who they are today if they had encountered their younger selves

 

Based on the data, we can rest easy knowing that the THWACK community will not be the one to screw up the timeline, should technology advance sufficiently to permit traveling to the past.

 

However, as we travel into the future in the normal fashion, one second at a time, I’d like to wish you all, on behalf of the entire SolarWinds team, a very happy New Year, and hope you experience nothing but joy, prosperity, and peace in the coming year.

 

- Leon

 

*Remember kids, 52.7% of all statistics are made up on the spot.

 

***********************

**** The Authors *****

***********************

 

Danielle Higgins, Manager of the Community Team

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/28/day-29-what-i-would-tell-my-younger-self-perspective-from-a-millennial

We can’t always know what experiences led someone to become the person they are. But when we are privileged to discover the details, it cannot help but bring us closer. That’s exactly what Danielle did in her post, giving a frank and pointed description of her youth, and the messages she would tell that young woman. It’s emblematic of Danielle’s personality that those messages center around hope, reassurance, trust, believing, and focus.

 

Allison Rael, Marketing Communications Manager, Content Marketing

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/29/day-30-

Alli outs herself as a card-carrying member of the international order of worriers and offers some background on it. But she immediately pivots to a breathtaking observation that I think we all (and especially those of us who are also members of the worrier’s club) can take to heart:

 

“I’ve gradually come to realize that when you worry less and live more, amazing things start to happen.”

 

She lists out some of those amazing things—both from her past and her present—and then comes up with this gem:

 

“In most cases, my worries are just head trash holding me back.”

 

“Head trash.” I’m definitely going to use that one in the future to frame my less helpful thought patterns.

 

Jenne Barbour, Senior Director, Corporate Marketing

https://thwack.solarwinds.com/community/solarwinds-community/contests-missions/december-writing-challenge-2018/blog/2018/12/30/day-31-

Finishing up both the week and the challenge itself, Jenne begins by sharing her family’s Yuletide tradition (re-watching the Harry Potter series) and how the theme of the challenge this year naturally blends with the idea of Time-Turners in the Harry Potter mythology.

 

As so many have done, Jenne understands that, while our own past is something which cannot and should not be changed, offering reassurance to our younger selves so that we can face our challenges with a measure of comfort would be a blessing.

 

Her final words are the perfect way to wrap up the series, as well as my summaries:

 

“And as we have traveled through time to meet ourselves today, I like to think our past selves would be pretty impressed by how we’ve all turned out. By how we’ve met obstacles both big and small, celebrated wins, learned from losses, and how we cherish our families, friends, and the good things in life, however we see them. And as we head into a new year—into the very future itself—I hope we all choose to encourage ourselves to be strong, to believe in ourselves, and to remember that we are enough.”

 

***********************

*** The Comments ***

***********************

Day 29

Laura Desrosiers Dec 29, 2018 5:30 AM

I grew up being told I would be a failure, which I believed for a very long time, but when I went back to school 10 years after high school and found out I was able to achieve, I started to push myself for more. Everything you have stated in the list in the article is so true, and I just have to begin following your advice. I will print that off and hang it in my office as a reminder to myself that no one is perfect, you don’t know it all, and you can thrive at what you do.

 

Jan Pawlowski Dec 29, 2018 1:44 PM

I’d add to 8 by saying own your failures as well. Celebrate the wins, but own your failures. This will teach you humility, and people will respect you much more for it.

 

Olusegun Odejide Dec 29, 2018 8:09 PM

Very good article. I love the list, especially No 1. You don’t need to fix everything, you need to let go sometimes, sit back and enjoy the ride.

 

Day 30

Phillip Collins Dec 30, 2018 8:28 AM

Your letter speaks to me. I can see myself in it. How right your Grandpa was. All my life I have allowed my worries to dictate my actions, except one brief period. The last 3 years of college I was able to let worry go and enjoy my life. Many good things came of that time. I pledged a great fraternity, made several wonderful friends, met and married my beautiful wife. None of this would have happened if I didn’t let worry go and just live my life. For whatever reason, I was not able to continue this after graduating. I often look back on those 3 years and try to understand what I was able to do then that I can’t seem to do now. I wish they would come up with a pill to help you keep things in perspective. Why worry about what you cannot control? Do your best, learn and grow, enjoy the life you have been gifted.

 

Holger Mundt Dec 30, 2018 5:16 PM

Thanks for your encouraging words to worry less. As a native southern German, worrying is deeply rooted in my genes.

Always a good reminder to set aside those worrying thoughts.

 

Laura Desrosiers Dec 31, 2018 4:51 AM

I worry way too much about things. I will stay up all night wearing holes in the carpeting, pacing the floors. This is going to be my New Year’s resolution: don’t worry so much and live more.

 

Day 31

Mark Roberts  Dec 31, 2018 7:17 AM

A great post. For those who have read more than a dozen of the articles this month (go back and read them all if you haven’t, btw), it has been interesting to see the common thread of not taking this opportunity to tell their younger selves to do much, or anything, differently. Everyone can recount times of pain, loss, and missed opportunities, but those life experiences and challenges have brought them to the place, physically and emotionally, where they are happy and proud to be.

 

Jeremy Mayfield  Dec 31, 2018 7:56 AM

It is interesting to think about what could have been, but the truth is we will and can never know. We are who we are, where we are, and the how’s and why’s matter little. All we can do is strive to be better moving forward. The future is not written, but the past, as you referenced, is set in stone.

 

Jan Pawlowski Dec 31, 2018 8:22 AM

I think too often we concentrate on “What might’ve been,” rather than what is. We can all relate where we wish a certain situation had gone differently, or an outcome had been different. It’s all too easy to blame things on past discrepancies that have brought you to where you are today. In truth without those happenings, you wouldn’t be where you are, nor the person you are today. Every day is a school day, it’s your choice if you learn or not.

Here’s a recent article from Signal magazine discussing innovation in the federal marketplace that I think you’ll find interesting.

 

The need for next-generation networking solutions is intensifying, and for good reason. Modern software-defined networking (SDN) solutions offer better automation and remediation, and stronger response mechanisms in the event of a breach, than traditional network architectures.

 

But federal administrators should balance their desire for SDN solutions with the realities of government. While there are calls for ingenuity, agility, flexibility, simplicity, and better security, implementation of these new technologies must take place within constraints posed by methodical procurement practices, meticulous security documentation, sometimes archaic network policies, and more.

 

How do modern networking technologies fit into current federal IT regulations and processes? Let’s take a look at two popular technologies—SDN and so-called white box networking solutions—to find some answers.

 

SDN: Same Stuff, Different Process

 

Strip away the spin around SDN, and IT administrators are left with the same basic network management processes under a different architectural framework. However, that architecture allows administrators to manage their networks in very different and far more efficient ways. That said, this greater agility and responsiveness should not grant administrators carte blanche in network operations.

 

If a network is overloaded with traffic, administrators could simply spin up more virtual switches to address the issue, right? Not so fast. The federal government requires strict documentation and record-keeping every time a new technology is implemented or an existing one is changed. From managing IP addresses to the dynamic scaling of resources, administrators should carefully consider and account for changes to ensure what they are doing does not pose a security risk.
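As a purely illustrative sketch of that record-keeping discipline (the controller call below is a hypothetical stand-in, not any specific SDN API), an automation script might write an audit record before it touches the network:

```python
import getpass
import json
from datetime import datetime, timezone

AUDIT_LOG = "network_changes.jsonl"

def record_and_apply(change, apply_change):
    """Append an audit entry, then hand the change to the (stand-in) controller."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": getpass.getuser(),
        "change": change,
    }
    with open(AUDIT_LOG, "a") as log:  # record first...
        log.write(json.dumps(entry) + "\n")
    apply_change(change)               # ...then act

# Hypothetical usage: documenting a new virtual switch before creating it.
record_and_apply(
    {"action": "add_virtual_switch", "name": "vsw-12", "vlan": 210},
    lambda c: print(f"applying {c['action']}"),
)
```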

 

Beware White Boxes

 

White box networking solutions are commodity devices designed to run in any network, including those that are software-defined, ostensibly at a lower cost. Fortunately, agencies may not need to switch, because original equipment manufacturers will continue to step up their game to stay competitive.

 

Even if agencies decide to go the white box route, there are other potential issues that need to be considered, particularly in relation to federal regulations. Agencies need to know who manufactures the technology they use, where it comes from, and other critical considerations that the government requires.

 

Balance Considerations and Benefits

 

There is a lot more to consider before moving into network modernization. Solutions must be compatible across agencies, which can be challenging if every vendor offers a different flavor of SDN. Agencies need to make sure they have the right people in place for the job and are embracing a pattern of continuous employee education.

 

Despite these considerations, modern network solutions can provide great benefits to federal IT teams. Teams can save significant money in the long run because they will not have to invest in patching or maintaining outdated systems.

 

Most importantly, federal administrators can use modern solutions to help build a network foundation that is ready for future innovations. Those innovations may need to occur within the mold of existing government processes, but the groundwork will have been laid for more scalable and more secure networks.

 

Find the full article on Signal.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In general, our products are vendor-agnostic, and this is important to us.
Using Network Performance Monitor (NPM) as an example: if a vendor follows the SNMP RFCs, we can retrieve data, correlate KPIs, and forecast trends.
However, sometimes this is not enough, just like eating only half a portion of pasta carbonara.
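For the curious, that RFC-standard retrieval can be sketched with the third-party pysnmp library; the device address and community string below are placeholders:

```python
# Poll sysUpTime (a standard MIB-II OID) from a device over SNMPv2c.
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),                # SNMPv2c community
        UdpTransportTarget(("192.0.2.1", 161)),            # placeholder device
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),   # sysUpTime.0
    )
)

if error_indication or error_status:
    print(f"SNMP poll failed: {error_indication or error_status}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```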


As Cisco® is the most popular network vendor in our customer base, we focus on providing a little more information out of the box to make (work) life more comfortable, such as support for non-RFC OIDs.
We have also added support for CLI/API access to collect statistics that are not available via SNMP at all.


Let’s jump into our DeLorean and travel back five years or so in NPM’s history!

  • 10.4 – Hardware Health
  • 10.7 – Support for EIGRP and VRF
  • 11.5 – Wireless Heatmaps
  • 12.0 – Cisco SwitchStack®
  • 12.1 – Meraki
  • 12.2 – Network Insight for Cisco ASA
  • 12.3 – Network Insight for Cisco Nexus®
  • 12.4 – Support for ACI

 

Some of these features have been around for ages. I arrived just before the NPM 11.0 release, so for me, things like hardware health have been there “forever.”
The SwitchStack support was the first highlight for me, followed by the ASA integration in both NPM and Network Configuration Manager (NCM).

By the way, did you know that most of the features in the list are based on community requests?


On top of that, other Orion® Platform modules support VoIP, DNS, and DHCP solutions from Cisco, and you can attach those with a few clicks.
Finally, there is NetFlow. Over the years, we have added support for NBAR2 and WLC flows to our NetFlow Traffic Analyzer (NTA).


There are various statistics out there discussing Cisco’s market share and how it has changed over time, and I don’t want to get into an “I don’t like them at all” discussion. Trust me, I’ve had enough of those already. I prefer JunOS when it comes down to the CLI.
But then, I love both pasta carbonara and pasta all’Amatriciana, and there is nothing wrong with that.


Still, Cisco is basically everywhere. You guys keep on using it, so we keep adding new features to our network products to help you support your infrastructure.

So, the good news is that we’re attending Cisco Live! EMEA in Barcelona. You will find us in booth S20A starting on Monday, January 28, and the code word to remember is “T-shirt.”

 

Over the course of December, the THWACK community had the privilege to peek inside the personal thoughts and formative moments of many of our members. The ideas, stories, and emotions they shared with us were sometimes raw with honest sincerity, often amusing, and always relevant and engaging.

 

As monitoring aficionados, we are sensitive to patterns, seeking to discover the signal that may lie, undetected, beneath the "noise" of unrelated data. And sure enough, as the days progressed, certain themes surfaced again and again in both the lead articles and the comments. While I identified a few of them in yesterday's post, I'd like to focus on a particular one here.

 

Catherine O'Driscoll may have phrased it best on day 10:

"I found it quite difficult to pass on just one piece of advice when there is so much I wanted to tell my younger self; to prepare her for and to protect her from. But then I realized that if she doesn’t go through it, then we wouldn’t become the person we are today."

 

The idea that we cannot go back, cannot undo what we have already done, because it will fundamentally change who we are, came up time and time again. And here, on the first day of 2019, I'm going to challenge that idea, in the hope that it allows us to set a goal for ourselves in the coming year that could have far-reaching consequences.

 

Recently, I read an essay where the author laid out the following logic:

 

First, for any action, there are many downstream consequences—some expected, others not. Some of the results of an action are intentional, while others are not. And some of the outcomes of that action can be understood as empirically "good," and others not.

 

So how are we—the individual who performed that initial action—judged? Are the expected, intentional, and "good" outcomes ascribed to us, or the ones on the other side of the equation? Or are we credited with all outcomes and results? Or a mixture of both?

 

The answer, this author states, lies in our reason for taking the action in the first place.

 

If our reasons were to harm or hurt or otherwise "do bad," then those are the results that we, in a sense, get "credit" for. The fact that our action might ALSO have had helpful or positive results is less a credit to us, and more a credit to fate, Karma, nature, luck, Divine providence, etc. And, obviously, the reverse is also true.

 

But let's say that, at some point in the past, we acted wrongly with the intention to harm, and that action had a mixture of results both bad and (unintentionally) good. Sometime later (moments, days, or even years), we look back at that moment and feel true, sincere, honest regret. We reflect on that moment and learn something about ourselves that we understand must change.

 

And we change it.

 

We work on ourselves. Grow. Improve. Mature. That moment in the past becomes an object lesson for us, and impels us to become better than the person we once were.

 

NOW, standing in the present moment, how is that action judged? As it turns out, all the positive results—unintended though they may have been—can be ascribed to us and the negative ones (while not disappearing entirely) fade into the background. This is the critical idea behind reformative, versus punitive, consequences. Behind repentance. Behind forgiveness.

 

Looking back at that theme that came up again and again—that we cannot offer advice to our younger self because it would fundamentally change who we are today—I say that if we use those past moments as motivation to change who we are today, then we HAVE changed our past selves. We have reached back through the years and changed the past. Not by changing WHAT we did, but changing the MEANING of what we did.

 

And in the words of the author,

"Time then becomes an arena of change in which the future redeems the past and a new concept is born – the idea we call hope."

 

My hope is that over the course of December, you found more than just some interesting stories, or chuckle-worthy reading. I hope in either reading or writing the words that were shared, you found a catalyst for positive change that can lead you toward hope and happiness in your life in the coming year and beyond.

 

From everyone at SolarWinds and the THWACK community,

we wish you a very Happy New Year and the best to come in 2019.

 

P.S.: Use this link to catch up on any part of the 2018 December Writing Challenge you may have missed.
