
Geek Speak


Had a great time in Austin last week for Tech Field Day 18. Now I am getting ready for my next trip, to Darmstadt, Germany, and SQL Konferenz. If you are reading this and near Darmstadt the week of the 20th, stop by and say hello. I'll be there talking about data, security, and privacy.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Questions for a new technology

Nice, simple framework for you to consider the next time your company looks to adopt new technologies or applications.

 

All the Bad Things About Uber and Lyft In One Simple List

A decent list of reasons why the privatization of public transit is not a good thing except for the few dozen executives at Uber and Lyft making millions of dollars.

 

The broadband industry loves bull**** names

Honestly, I’m shocked they went with 10G when they could have gone with 100G, or X-100G, or THX-1138.

 

London's Met police confess: We made just one successful collar in latest facial recog trial

Well now, this brings up all kinds of questions. First off…how many arrests are they looking to make? I mean, if no one has broken a law, then they should expect zero arrests. It seems they are looking to increase their incarceration rates, considering they are fining people for hiding their faces.

 

Stop using Internet Explorer, warns Microsoft's own security chief

I’ve been saying the same thing for 20 years.

 

Cheap Internet of Things gadgets betray you even after you toss them in the trash

Everything is not awesome.

 

Project A119: Inside the United States’ secret plan to blow up the Moon

Here’s a story reminding us all how truth is often stranger than fiction.

 

The greatest Valentine's Day gift ever:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article that hits close to home. Having supported federal clients at SolarWinds for over four years, I have seen firsthand the increase in complexity our customers need to manage.

 

As government networks become more complex, maintaining the highest levels of network performance is getting harder. The growth of hybrid IT makes things even more challenging.

 

How do federal IT pros maintain a consistent high level of performance under these conditions?

 

The State of IT Today

 

The right strategy starts with a good network management tool. Traffic monitoring across all networks, including hybrid IT networks, can help ensure you see where users are going, the IP addresses they’re coming from, and if they’re trying to access unauthorized information or files.

 

You should also have the ability to analyze that traffic. Be sure you can see network performance across hybrid IT landscapes, and cloud XaaS (Anything-as-a-Service) solutions. For the best results, look for solutions that provide auto-generating, contextually aware maps of network traffic. This can provide far clearer visualization of network performance throughout the entire environment.

 

Inventory as It Relates to Capacity

 

Maintaining an inventory of all hardware, software, applications, and equipment connected to your network is a best practice. This not only gives you a snapshot of your IT asset inventory, but also allows you to quickly detect unauthorized users.
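To make this concrete, below is a minimal PowerShell sketch of such a snapshot. It assumes Windows hosts reachable over WinRM and a hypothetical servers.txt list of hostnames; a real inventory would also capture installed software and network gear, and would typically come from your monitoring platform or CMDB rather than an ad hoc script.

# Minimal inventory snapshot sketch. Assumes Windows hosts reachable over WinRM
# and a hypothetical servers.txt file with one hostname per line.
$servers = Get-Content -Path .\servers.txt

$inventory = foreach ($server in $servers) {
    $cs = Get-CimInstance -ClassName Win32_ComputerSystem  -ComputerName $server
    $os = Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $server

    [pscustomobject]@{
        Host         = $server
        Manufacturer = $cs.Manufacturer
        Model        = $cs.Model
        MemoryGB     = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
        OS           = $os.Caption
        OSVersion    = $os.Version
        Collected    = Get-Date
    }
}

# Export a dated snapshot so historic data is available for capacity forecasting.
$inventory | Export-Csv -Path .\inventory.csv -NoTypeInformation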

 

Inventory becomes a critical component of scalability, which will contribute to better network performance. The better a federal IT pro can understand a given inventory, particularly based on historic data, the greater the ability to forecast future network-infrastructure needs.

 

An Application Perspective

 

Understanding the applications powering the network is the next cornerstone of effectively optimizing network performance. This requires developing a holistic view of all applications, which in turn will result in a better understanding of how the performance of applications may affect the entire application stack.

 

Federal IT pros need a network tool that can quickly identify and rectify performance issues and bottlenecks without having to hunt through layers of applications to find the root of the problem.

 

The Right Strategy: Resiliency and Reliability

 

Effective network management also involves implementing the right strategy, one strongly focused on the information you collect. How you interpret that data, and how you ensure its validity, can be the difference between success and failure.

 

Resiliency and reliability are also critical performance metrics. Resiliency is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation. Reliability is a system’s ability to recover from infrastructure or service disruptions, automatically scale resources to meet demand, and mitigate service disruptions, including misconfigurations.

 

Resiliency and reliability underscore the value that federal IT pros can deliver for their agencies. They also serve as measures of how well a distributed application was integrated and delivered, and of its overall performance.

 

So, remember, to optimize performance, look to leverage tools that deliver full-stack observability into the logs, metrics, and tracing data that underpin reliability and resiliency metrics.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.


To Cloud or Not to Cloud?

Posted by rorz Feb 12, 2019

There’s a question I’m sure is keeping some IT managers and CTOs up at night. How do you know when it’s right to move a workload to the cloud? And when you finally make the decision to migrate, how do you choose the best location for your application?

 

As mentioned in my previous post, cloud is a buzzword that carries with it a lot of white noise that IT managers and departments need to cut through to see whether it’s a worthwhile venture for their organisation. New three-letter initialisms and hyperbolic marketing campaigns are hitting inboxes daily. Vendors are using studies and figures from advisory firms like Gartner, IDC, and Forrester to back up their elaborate stance on cloud. Yet there is no denying that more companies and applications than ever are transitioning to the cloud.

 

Before I go further, I just want to level-set a few things. Firstly, “cloud” is a service-oriented architecture. Depending on the amount of abstraction you want within the application stack, we have Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). As we move towards SaaS from IaaS, we sacrifice control to gain flexibility and scale.

 

More often than not, “cloud” is used in reference to the public cloud, a collection of services available to the public to consume, at will, via the internet. Private cloud, on the other hand, refers to everything from the data centre, the racks, the servers, and networking, through the various operating systems, to applications. The intent is to provide a service or services to a single company or organisation. It can be argued whether this private cloud infrastructure requires the ability to be controlled via a programmable interface to facilitate self-service and automation.

 

To decide where a piece of software or software stack runs, we need to look at the different aspects that can come into play between public and private cloud. One of the first things to do is consider the total cost of ownership for each deployment. While the public cloud gives you the ability to quickly deploy and rapidly scale with known operational pricing, you still need to monitor and assess whether you’re striking the right balance between agility and performance. With on-premises hardware and software, you can have greater control over costs because you can calculate your requirements and then add in the environmental, management, support, backup, disaster recovery, and decommission costs before spending a penny. The downside to this is that you have to “guesstimate” how much your application’s usage will increase over its lifespan. What will happen if your company takes on 1,000 new employees or acquires another business unit that needs to be integrated? If you go too big, you have overspent and underutilised, which nobody wants to do.

 

Yet a benefit of running it on-premises or in a private cloud is that you have far greater control over the versioning within your stack, as well as maintenance windows that work with your company’s fiscal calendar. In the public cloud or off-premises, you probably have very little control over these factors and may have to add layers of complexity to deal with things like maintenance windows.

 

This leads us to decisions that need to be made regarding disaster recovery and the ability to avoid disruption. With a private cloud, you’ll need to purchase two sets of infrastructure (identical if you want complete disaster recovery), which then poses the question—do you sweat that asset and spread your workload over the two sites, thereby increasing the management overhead for your organisation? This leads to a greater initial capital expenditure (although financing options are available) and then a longer delivery, setup, burn-in, benchmark, and tuning period before you can go live. On the other side, we can code site fault tolerance into the majority of public cloud services at deployment time, with several providing this as a default setting for services like databases, and others where it can be “retrofitted” as required.

 

Reading this, you probably feel that the public cloud is ahead in this race to acquire your next application deployment, but there are drawbacks. A large one that needs mentioning is, “Can the data live there?” Regulations like HIPAA, FERPA, SOX, and GDPR have rules to help protect customers, consumers, and patients alike, so decisions on which cloud to use and on technologies like VPNs, encryption, and more are detailed in their frameworks. There are also concerns that need to be addressed around how you connect to the application and manage security for this environment so you don’t end up on the front page of the Wall Street Journal or Financial Times.

 

It is very rare to find an organisation that is solely in the cloud. There are a greater number of companies who still operate everything in-house, but we are now seeing a trend towards the hybrid cloud model. Having the ability to run different applications on either the public or private cloud and the ability to change this during the lifespan of an application are becoming requirements for organisations trying to go through digital transformation.

 

While there are many cloud options available, it’s important not to get hung up on these, as an application may move during its lifecycle, just like a server or VM. It’s about what’s right for the business at that time and having the ability to react quickly to changes in your marketplace. That’s what will set you apart from the competition.

 

I have only touched on some of the points I see as being of greatest concern when discussing the various cloud options, but spending time investigating and understanding this area of computing and application delivery will stand both you and your company in good stead.

 

  A colleague recently mentioned to me that if you are not speaking to your customers about the benefits of cloud, you can be sure your rival is, so let’s make sure we frame the conversation correctly.

Great! You’ve made the decision to migrate to the cloud. What should be the first step towards that goal? The answer is: defining a cloud migration team that understands the vision, has skilled members to carry out required tasks, and is available to contribute as and when required.

 

The best compliment for an IT team is invisibility. It’s a sign of a well-oiled machine that provides everything that a business needs, anticipates problems, and rectifies them before they occur.

 

A cloud migration team is no different. It typically consists of members from various business units within the company (although external skilled consultants can also be brought in) who are aligned to very specific and clearly defined roles. If done correctly, even the most complex landscapes can be migrated smoothly and transparently.

 

Think of the whole process as a drama: there’s a plot and colourful characters that play specific roles. There are ups and downs during the whole process. Occasionally, people get emotional and tantrums are thrown. The main thing is that by executing their role well, each team member works towards a happy ending, and by playing their part faithfully, that’s exactly what they get.

 

Essential Roles

The main character of this drama is the cloud architect. Appointing someone to this role early is essential. Leading the mission, this person is proactive, defines the scope and strategy, and identifies the wider team that will contribute towards a successful migration.

 

A great source of contributors is the group of stakeholders from the business, platform, security, and operations functions, who by now are already sold on the idea. That would indeed be the case if management has gone through evaluating the business need to go to the cloud and was part of the approval process. Not only can they provide resources to the project, but they also have the unique view from their own functional groups’ perspective that ensures all important bases are covered.

 

Commonly seen as the villain, the project manager is an extremely important role that keeps the cloud architect and other players on the straight and narrow. This role is not just about making a schedule and keeping everyone on time, but also about helping to prevent scope creep and foreseeing resourcing issues.

 

It’s easy to forget the “understudy,” but we are all humans and can fall ill unexpectedly—sometimes for long periods. People can also switch jobs. That can have a major impact on progress, especially if that person held an important role. Once the process starts, it’s important to keep the momentum going. That is made possible by having multiple people shadowing key roles where possible.

 

Skills

Nobody wants a bad actor in a drama. It can annoy the audience and derail the entire performance. Team members not suitably skilled for the role they’re assigned are likely to underperform and drag the whole team to a standstill.

 

That said, everyone wants to be a part of something special, and often, they are prepared to put in the extra effort to learn new skills. Career development and a sense of achievement from being part of a success story are great motivators too.

 

The key is to identify the gaps early and send team members to appropriate training as soon as possible. The sooner this step is taken, the less pressured they feel when the project starts, and they can provide valuable input to important decisions early in the process.

 

Availability

Imagine what would happen if a character drops out from a performance every now and then. Worse, if more than one does it. Would that play be a success?

 

The same is true for a cloud migration project. While it can be a huge drain on a company’s resources, the commitment to provide the personnel necessary to carry out assigned tasks all the way to the end is critical before embarking on that journey. Not doing so creates huge dependency issues that are hard to resolve and throws the entire schedule out of shape.

 

The day job doesn’t stop with a project like this, but a portion of time should be committed and prioritised. It’s almost impossible to commit resources full-time to a project, but as long as availability issues are communicated clearly and well in advance, the team can work around it.

 

Conclusion

The success of a migration project is highly dependent on the people assigned to it. The work is interesting, but challenging at the same time. Delivery of a high-quality migration with minimal or no disruption requires a skilled team, clear roles and responsibilities, and the time commitment to think and plan properly.

 

Most importantly, we all know the magic that happens when all the characters are having fun. That production always becomes a knockout success.

When it comes to getting something done in IT, the question of "How?" can be overwhelming. There are many different solutions on the market to achieve our data center monitoring and management goals. The best way to achieve success is for project managers and engineers to work together to determine what tools best fit the needs of the project.

 

With most projects, budgets are a major factor in deciding what to use. This can make initial decisions relatively easy or increasingly difficult. In the market, you’ll find a spectrum from custom implementations of enterprise management software to smaller, more nimble solutions. There are pros and cons to each type of solution (look for a future post!), and depending on the project, some cons can be deal-breakers. Here are a couple of points to think about when deciding on a tool/solution to get your project across the finish line.

 

Budget, Anyone?

 

Budgets run the IT world. Large companies with healthy revenues have large budgets to match. Smaller organizations have budgets more in line with the IT services they need to operate. Each of these types of companies needs a solution that fits its needs without causing problems with its budget. There are enterprise management systems to fit a variety of budgets, for sure. Some are big, sprawling systems with complicated licensing and costs to match. Others consist of community-managed tools that cost less but also come with less support. And, of course, there are tools that fit in the middle of those two extremes.

 

Don’t think that having limitless budget means that you should just buy the most expensive tool out there. You need to find a solution that first and foremost fits your needs. Likewise, don’t feel like a small budget means that you can only go after free solutions or tools with limited support. Investigating all the options and knowing what you need are the keys to finding good software at reasonable costs.

 

Do I Have the Right People?

Having the right people on your IT staff also helps when choosing what type of management tool to use. Typically, IT pros love researching tools on their own and spend hours in community threads talking about tools. If you have a seasoned and dedicated staff, go with a more nimble tool. It usually costs less, and your staff will ensure it gets used properly.

 

Conversely, if your IT staff is lacking, or is filled with junior level admins, a nimble tool might not be the best solution. An enterprise solution often comes with more support and a technical account manager assigned directly to your account. Enterprise solutions often offer professional services to install the tool and configure it to meet the demands of your infrastructure. Some enterprise management software vendors offer on-site training to get your staff up to speed on the tool’s use.

 

Don’t forget that sometimes the best person for the job may not even be a person at all! Enterprise management systems often provide tools that can automate a large number of tasks, such as data collection or performance optimization. If your staff finds itself overworked or lacking in certain areas, you may be able to rely on your EMS platform to help you streamline the things you need to accomplish and fill in the gaps where necessary. Not everyone has the need for a huge IT team, but using your platforms as a force multiplier can give you an advantage.

 

There are many other points to discuss when deciding on an enterprise monitoring or management system versus a nimble tool. However, the points discussed above should be the most pertinent to your discussions. Do not make any decisions on a solution without taking the time to make some proper assessments first. Trust your staff to be honest about their capabilities, ensure your budgetary constraints are met, and choose a tool that will be the best fit for the project. In the end, what matters most is delivering a solution that meets your customer and/or company’s needs.

Spent the weekend making feijoada to have for the Big Game this past Sunday. If you found the game, commercials, and halftime show to be boring, I wanted you to know I had 7 different parts of a pig in a stew over rice. In a way, we are all winners.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

"5,000 troops to Colombia"? Nation responds to Bolton's note

Weekly reminder that when it comes to data security and privacy, our elected (and appointed) leaders provide the worst examples of "what not to do."

 

Microsoft accidentally deletes customer DBs

One of the perils of automation: when you make a mistake, it affects thousands of objects. Whoops.

 

Rethinking Informed Consent

"…the important questions aren't around how our data is used, but where our data travels." Yes, so much yes to this. It’s time we all understand how our data is being used.

 

Facebook pays teens to install VPN that spies on them

I don’t understand how Facebook is still in business. What does a company have to do to be shut down these days? At this point, Zuckerberg could club a seal on Pier 39 and no one would care.

 

An 'acoustic fingerprint' should keep Alexa from waking during Amazon's Super Bowl ad

I was wondering about this, and why my Alexa seemed to be ignoring the television. Now I’m wondering if someone could hack my device to have it ignore, or redirect, other commands.

 

Gym Class Is So Bad, Kids Are Skipping School to Avoid It

Interesting data here, trying to correlate increased gym time with student disciplinary actions. Setting that possible correlation aside, the current structure of PE classes is, indeed, horrible.

 

Data is expensive

"...more data isn’t always better." Wonderful post from Seth on the cost of data.

 

Raise your hand if you are the GOAT:

My previous post provided a short introduction to this series.

 

In this post, I’m going to be focusing on public cloud. I’ll cover my day-to-day experiences in using the major hyperscalers, costs, how I think about managing them, and my reasons for adopting hybrid IT as an operational model.

 

By now, almost everyone reading this should have had some experience with using a public cloud service. Personally, I’m using public cloud services daily. Being an independent consultant, I run my own company and I want to be able to focus on my customers, not on running my own IT.

 

When setting up my business, it made perfect sense for me to utilise public offerings such as Office 365 to get my communication and business tools up and running. When I want to work on new services or train myself, I augment my home lab with resources within the public clouds. For these use cases, this makes perfect sense: SaaS products are cheap, reliable, and easy to set up. Short-lived virtual machines or services for development/testing/training are also cost effective when used the right way.

 

This works for me, but I’m a team of one. Does this experience translate well into other cases? That’s an interesting question because it isn’t one-size-fits-all. I’ve worked with a wide range of customers over the years, and there are many different starting points for public cloud. The most important part of any cloud journey is understanding what tools to use in what locations. I’ve seen lift-and-shift style migrations to the cloud, use of cloud resources for discrete workloads like test/development, consumption of native services only, and every combination in between. Each of these has pros and cons, and there are areas of consideration that are sometimes overlooked in making these decisions.

 

I want to break down my experiences into the three areas where I’ve seen cost concerns arise, and how planning a hybrid IT approach can help mitigate these.

 

Virtual Machines

All public clouds offer many forms of virtual machines, ranging in cost, size, and capabilities. The provisioning model of the cloud makes these easy to consume and adopt, but this is a double-edged sword. There are several brilliant use cases for these machines. It could be that you have a short-term need for additional compute power to supplement your existing estate. It might be that you need to perform some testing and need the extra resources available to you. Other options include being able to utilise hardware that you wouldn’t traditionally own, such as GPU-enabled platforms.

 

When planned out correctly, these use cases make financial sense. It is a short-term need that can be fulfilled quickly and allows business agility. The cost vs. benefit is clear. On the flip side, leaving these services running long-term can start to spiral out of control. From my own test environment, I know that a running VM you forget about can run up bills very quickly; even though my environment and use cases are relatively small, bills into the hundreds of pounds (or dollars) per month for a couple of machines I had forgotten to shut down or destroy are common. Now multiply that to enterprise scale, and the bill can become very difficult to justify, let alone manage. Early adopters that took a lift-and-shift approach to cloud without a clear plan to refactor applications are now hitting these concerns. Initial savings from moving expenditure from CapEx to OpEx masked the long-term impact to the business.
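On that note, a scheduled check for machines left running is a cheap safety net. Below is a minimal sketch using the Az PowerShell module; it assumes an Azure subscription and a hypothetical "lifespan = temporary" tag on short-lived lab VMs, and the same idea translates to any provider's CLI or SDK.

# Minimal "did I leave anything on?" sketch using the Az PowerShell module.
# Assumes an Azure subscription and a hypothetical lifespan=temporary tag on lab VMs.
Connect-AzAccount | Out-Null

# List everything still running so nothing quietly burns budget.
$running = Get-AzVM -Status | Where-Object { $_.PowerState -eq 'VM running' }
$running | Select-Object Name, ResourceGroupName, PowerState | Format-Table

# Deallocate only the machines explicitly tagged as temporary.
foreach ($vm in $running) {
    if ($vm.Tags['lifespan'] -eq 'temporary') {
        Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
    }
}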

 

Storage

The cloud offers easy access to many different types of storage. The prices can vary depending on the requirements of that storage. Primary tier storage is generally the most expensive to consume, while archive tiers are the cheapest. Features of storage in the cloud are improving all the time and catching up to what we have come to expect from the enterprise storage systems we have used on-premises over the years, but often at additional cost.

 

Consuming primary tier storage for long-term usage quickly adds up. This goes hand in hand with the examples of monitoring VM usage above. We can’t always plan for growth within our businesses; what starts off as small usage can quickly grow to multiple TB or PB. Managing this growth long-term is important for keeping costs low; ensuring that only the required data is kept on the most expensive tiers is key. We’ve seen many public examples where this kind of growth has required a rethink. The most recent example that comes to mind is that of Dropbox. That might be an exception to the rule; however, it highlights the need to be able to support data migrations either between cloud services or back to on-premises systems.

 

Networking

Getting data to the cloud is now a relatively easy task and, in most instances, incurs little to no cost. However, moving data within or between clouds, and in cases of repatriation, back to your own systems, does incur cost. In my experience, these are often overlooked. Sprawl within the cloud, adopting new features in different regions, or running a multi-region implementation can increase both traffic between services and associated costs.

 

Starting with a good design and maintaining locality between services helps minimise this. Ensuring the data you need is as close to the application as possible, and that traffic is not distributed more widely than necessary, needs to be considered from the very start of a cloud journey.

 

Why a Hybrid IT Mindset?

With those examples in mind, why should we adopt a hybrid IT mindset? Having all the tools available to you within your toolbox allows for solutions to be designed that maximise the efficiencies and productivity of your business whilst avoiding growing costs. Keeping long-running services that are low on the refactor/replace priorities in on-premises infrastructure is often the most cost-effective method. Allowing the data generated by these services to take advantage of cloud-native technologies and systems that would be too costly to develop internally (such as artificial intelligence or machine learning) gives your business the best of both worlds. If you have short-term requirements for additional capacity or short-lived workloads like dev/test, giving your teams access to resources in the cloud can speed up productivity and even out spikes in demand throughout the year. The key is in assessing the lifetime of the workload and what the overall costs of the services consumed are.

 

Next time I’ll look at how we build better on-premises data centres to adopt cloud-like practices and consumption. Thank you for reading and I welcome any questions in the comments section.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting blog about the need to keep a continued focus on security. I can’t agree more about training and security checks as a useful method of keeping staff on their guard. I’d add that an incident response plan is a useful and often overlooked part of the plan—which is now required for the feds.

 

November 30, 2018 marked the thirtieth annual Computer Security Day. Originally launched in 1988, the day was one of the earliest reminders of the threats facing modern technology and data.

 

Now, thirty years on, the threats facing organizations’ data are more significant than ever—from ransomware to hacking—while the sensitivity and volume of data grows each year. According to a recent survey conducted by IDC and Zerto, 77% of businesses have experienced a malicious attack in the past 12 months, with 89% of these being successful—demonstrating just how prevalent these security threats are. As Shannon Simpson, cyber security and compliance director at Six Degrees put it: “Cyberattacks have crossed over into the mainstream, and guarding against security breaches requires constant vigilance throughout your entire business, not just the IT team.”

 

The case for training

 

As security professionals, we are acutely aware of the tricks scammers may use—such as emails with fake bank details or ones made to look like they were sent from another company employee. However, it’s important to remember that not all employees are exposed to this on a regular basis. This is why experts strongly support ongoing training and education programs for employees to help empower them to avoid evolving scams.

 

Moving away from a fixed perimeter approach

 

A key factor in the move away from fixed perimeter security is the adoption of the cloud and the rise in cloud-based applications. Steve Armstrong, Regional Director at Bitglass, stressed that despite such applications making businesses more flexible and efficient, “many of the most popular cloud applications provide little visibility or control over how sensitive data is handled once it is uploaded to the cloud.” One of the primary vulnerabilities that Armstrong highlighted was the problem of misconfiguration, such as in Amazon S3 buckets or MongoDB databases, pointing out that “given how readily available discovery tools are for attackers, ensuring corporate infrastructure is not open to the public internet should be considered essential for enterprise IT.” To do this, Armstrong recommends that organizations should “leverage security technologies such as those provided by the public cloud providers,” all of which “provide visibility and control over cloud services like AWS.”

 

In addition, automation technology can help reduce the risk to data, both at rest and in transit, said Neil Barton, CTO at WhereScape. This is because “by limiting or negating the need for manual input, businesses can better protect against security vulnerabilities.” Meanwhile, using automation to take care of the basics can help free up IT staff “to ensure the data infrastructure is delivering results with security top of mind.”

 

The importance of testing plans and learning from mistakes

 

Providing IT staff with more time could be critical to one of the most vital aspects of security preparedness—testing. Stephen Moore, Chief Security Strategist at Exabeam, commented that “organizations that handle sensitive data must implement constant security checks, as well as rapid incident response and triage when needed.” This was a sentiment also voiced by Paul Parker, Chief Technologist, Federal & National Government at SolarWinds. Speaking about the need for cybersecurity in the public sector, Parker noted that “most important is developing and routinely testing your emergency response plan. Much like the UK’s Fire and Rescue Services practice fire response and life-saving services, organizations should also practice their network breach response.” His core advice to organizations in the current security threat landscape? “Don’t learn how to extinguish a fire on the fly.”

 

Finally, a sentiment echoed by several experts was the inevitability of organizations facing a cyberattack at some point in time. Gijsbert Janssen van Doorn, Technology Evangelist at Zerto, concluded: “Yes, protection is important; however, in a culture where attacks and downtime are no longer a matter of ‘if,’ but ‘when,’ these precautions are not enough. Organizations also need to be prepared for what happens after a disruption, and will be judged not only on keeping people out and data safe, but also on how quickly they are back to functioning as normal—how resilient they are.” Meanwhile, Parker concluded that, following an attack, “public sector organizations can use the insights garnered from the incident to learn, perfect, and prepare for the next one”—a sentiment as true for all businesses as those in the public sector.

 

Thirty years after the first Computer Security Day, it’s clear IT and security professionals find themselves in a much more complicated landscape than their predecessors. However, there is much that can be done to keep data safe, and businesses online—from moving away from the fixed perimeter approach to cybersecurity to ensuring regular training and plan testing, and even making sure organizations can get back online when something does, inevitably, go wrong. The key, across all aspects of security, is preparation.

 

Find the full article on Information Security Buzz.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

In our last exciting installment I began delving into the IT-centric lessons we can glean from the latest addition to the Spider-Man franchise. (Just like I did in the past with the movies “Doctor Strange” and “Logan.”)

 

But like a good comic book series, one installment is never enough. Keep reading, true believers, to see what other jaw-dropping discoveries await your eyes!

 

Spoilers (duh)

As with any deep discussion of movie-based content, I’m obligated to point out that there will be many spoilers revealed in what follows. If you have not yet had a chance to enjoy this chapter of the Spider-Man mythology, it may be best to bookmark this for a later time.

 

Even inelegant solutions can be powerful

At a few key moments of the action, help comes from an unanticipated direction—the more “cartoonish” abilities of Spider-Ham. Whether it’s a giant mallet he produces from I-don’t-want-to-know-where, or an anvil falling directly on the head of a villain, these great saves are played for laughs, but still have a lesson for us in IT.

 

“Have you turned it off and on again?” is an inelegant solution. But it works. So does restarting IIS to get the website back up. As do a million other “kludges” that IT professionals employ every day, sometimes feeling guilty about it.

 

My advice (and I believe Peter Porker would back me up on this): don’t overthink it. If a solution works, it works. That doesn’t mean you shouldn’t also make time to resolve the underlying problem. But always be open to using every tool in your toolbox, even an oversized wooden mallet.

 

Simple tech used with determination—even by less skilled folks—can be very effective.

Closely related to the previous lesson is that often your commitment to solving a problem is more important than the techniques or tools you use to solve it.

 

In the movie, this is best exemplified by Aunt May. Horrified at the destruction being done to her home, she takes matters (and a Louisville Slugger) into her own hands, and makes Tombstone understand that wearing muddy shoes inside her house will simply not be tolerated.

 

The moral for us is twofold. First, that our success as IT practitioners is less about the sophistication of tools, and more about our persistence in solving the problem.

 

On the flip side of this, when we see one of our colleagues at work on a problem, even someone we consider less “powerful” than we are (although anyone who judges Aunt May like that is in for a rude and likely painful surprise), we need to focus less on their technique or tools and more on their goals, putting us into the healthier and more productive role of supporting, rather than judging.

 

Trying to re-create, or worse, “fix” the past is a fool’s errand.

The best villains are the ones who don’t see themselves as such, but instead have deep-seated motivations driving them to extreme lengths. In a different context, they might even be seen as a hero because of their determination to see a course of action through to the end. Such is the character of Wilson Fisk (aka Kingpin). We learn that in a single moment, Fisk lost the love and trust of his wife Vanessa. This triggered a rapid cascade of events, leading to the death of both Vanessa and their son Richard. Fisk cannot reconcile the pain of that loss, and therefore sets himself on the path that leads to the catalyzing event of the movie—opening a rift between dimensions, finding an instance where Vanessa and Richard did not die, and pulling those living versions to him to make his life whole again.

 

Each one of us carries memories of past moments where, looking back, we know we could have done better, or could have been better than we chose to be. In fact, in 2018 the THWACK community spent an entire month discussing what they would have told their younger selves, if they had the chance.

 

Working in IT, there are pivotal moments where we realize we’ve made an error—sometimes the microsecond after hitting the ENTER key (c.f. the ohnosecond). These are moments we might wish to erase or undo. However, even if the technology existed, very few of us would do so if it meant hurting someone.

 

The lesson we can take from the movie is how damaging it can be to dwell on those past mistakes, replaying them over and over and saying, “if only.” I’m not saying that regret will turn you into a criminal mastermind. But I am saying that living in a regretful past will lead to nothing good.

 

Being multi-lingual is normal. Don’t fight it. Don’t make a big deal of it.

Miles Morales is celebrated for being one of the most compelling and relatable characters in comics, due in no small part to his cultural heritage. He moves effortlessly between cultures, and one of the ways the movie shows this is when he flows from English to Spanish without hesitation (and without subtitles, which is part of my point below). Whether it’s the kids in his neighborhood, the teachers at his new school, or the villains crowding into Aunt May’s home in Queens, Miles is un-self-consciously fluent in the languages around him.

 

While I would love to make this lesson all about how I think all IT professionals should learn another language because it will help in ways they cannot possibly imagine, that’s not exactly my point. But if you want to change your life, learn to speak more than one language. Really.

 

My point is more about the way Miles’ multilingual nature is portrayed: it’s nothing special. Miles never acts as the interpreter to those around him. He never shouts, “Scorpion just said he’s going to knock you into next week!” He’s not there as a proxy for a non-comprehending audience. He’s there as a proxy for everyone else.

 

The lack of subtitles in the movie drives this home. Directors Bob Persichetti and Peter Ramsey made this choice purposefully, as if to say “This is a trivial aspect of this world. If it’s jarring to you, that’s you, not the story. Get used to it. This is how the world works.”

 

The lesson is that being multilingual is an IT thing too. Maybe not spoken languages, but modalities of computing. Cloud, hybrid IT, containers, software-defined networking, platforms-as-a-service—these are all part of the fabric of our work now. Even if we’ve put off learning to code the same way we put off learning French, the time is now for us to take another look, start to familiarize ourselves, and begin to build our fluency. The Miles Morales-es of our organizations are going to come in un-self-consciously fluent, and it behooves us as colleagues and potential mentors to be partners in that journey.

 

But That’s Not All

With a character history as rich as Spider-Man (not to mention a movie as awesome as “…Into the Spider-Verse”), it turns out I have a lot more to say on this subject. The adventure continues in the next issue.

 

Until next time, true believers,

Excelsior!

 

“Spider-Man: Into the Spider-Verse,” 2018, Sony Pictures Entertainment


Network Flowetry!

Posted by jreves Employee Feb 4, 2019

Hi! I’m Joe Reves, and I’m a Flow Nerd.

 

I’m the Product Manager for our NetFlow Traffic Analyzer, and I’ve been working at SolarWinds for a little over a year. I’m excited about flow analytics and the problems we can solve by examining and visualizing network flow information. I’m awfully enthusiastic about all types of flow technologies, particularly traffic sampling.

 

I spend a lot of time talking about flow—asking customers about their challenges, and how they use flow data and tools in their environment. I also talk with my colleagues about flow. Sometimes, until they get tired of hearing about it. Often, even after they get tired…

 

At a recent trade show, our own “Father of NetPath”—Chris O’Brien—decided he would have a little fun, and he started a rumor that I was compiling a book of flow poetry. Naturally, this prompted me to begin writing flow poetry:

 

Chris, mischievous

Craves the poetry of flow

Sampled flow, of course

 

joer

 

I promptly notified our team that flow poetry, or “Flowetry,” was ON. Shortly after midnight, our intrepid leader responded with this epic:

 

Woes of Flow

(A poem for Joe)

 

It uncovers source and destination

without hesitation.

Both port and address

to troubleshoot they will clearly assess.

Beware the bytes and packets

bundled in quintuplet jackets,

for they are accompanied by a wild hog

that will drown your network in a bog.

The hero boldly proclaims thrice,

sampling is not sacrifice!

He brings data to fight

but progress is slow in this plight.

 

Mav

 

This just goes to show I shouldn’t be tossing out literary challenges in email after midnight.

 

Sometime after that—yes, after midnight, but before daylight—our Product Marketing Manager finished her daily email backlog and offered this:

 

BAP emails abound

Find the bandwidth bandit now

Joy! Now I have alerts

 

Abigail

 

So far, Chris hasn’t coughed up any examples of Flowetry. We’re calling on him next.

 

In terms of rumors, I hear that our resident beat poet and Head Geek™ Leon Adato is shopping for a black turtleneck sweater and a beret. And some shades. I can’t wait to see this.

 

We’d like to invite you now to our first-ever Flowetry event!

 

Post your best network flow-themed poetry below—odes, ballads, sonnets, limericks, haiku… whatever your style, we’d like to hear from you. Anything goes, but keep those limericks clean.

 

 

Looking for more?  Complete the network-themed poem for a chance to win here: https://slrwnds.com/network-flowetry-blog

Every operations team has its fair share of monitoring solutions. While you may not have achieved the perfect state of a single pane of glass, you likely have settled on two or three solutions that cover all the hardware and software that supports your business. You even invested considerable time and effort to not just implement these solutions with out-of-the-box settings, but with tailored IT alerting thresholds and alarms that suit your environment’s specific needs. Look at you!

 

It’s easy to overlook the next step of your monitoring deployment: notifications. Usually, one of two things will happen:

 

  1. You get too many alerts, and subsequently turn off alerts
  2. You get too many alerts, and subsequently create an Outlook rule to trash them all

 

It’s the age-old signal-to-noise problem in IT. How do you fine-tune your notifications, so they alert you to events that deserve your attention while filtering out all of the notifications that are not actionable? Your first thought might be to turn off any performance-related alerts and just receive system or device down notifications. But if that’s all you’re looking to get out of notifications, you should just write a PowerShell script to run a Test-Connection against your server list and Send-MailMessage when a host is down. (That’s mostly sarcasm.)
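For contrast, here is roughly what that bare-bones script would look like; the server list, addresses, and SMTP relay below are placeholders. It tells you a host stopped answering pings and nothing more, which is exactly why it is no substitute for properly tuned notifications.

# The bare-bones "is it up?" check described above, for illustration only.
# Assumes a hypothetical servers.txt list and an internal SMTP relay.
$servers = Get-Content -Path .\servers.txt

foreach ($server in $servers) {
    if (-not (Test-Connection -ComputerName $server -Count 2 -Quiet)) {
        Send-MailMessage -To 'oncall@example.com' -From 'monitor@example.com' `
            -Subject "Host down: $server" `
            -Body "ICMP checks to $server failed at $(Get-Date)." `
            -SmtpServer 'smtp.example.com'
    }
}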

 

Instead of throwing the baby out with the bath water, here are some monitoring and alerting best practices for reducing notification overload.

 

Inventory Your Applications

First things first: no one cares about your servers like you do. You invest countless hours building, installing, patching, backing up, repairing, and generally supporting these virtual beasts. Even if you’re running an automated shop (which you’re not), you still train your attention on the infrastructure. But the business cares about the applications.

 

Application monitoring will always be more important to the business than infrastructure monitoring.

 

So, if you don’t have a list of your apps (which should include URLs in this modern SaaS era), get one together. Without a reliable and accurate inventory, you’ll never know if you’re monitoring all your devices. (On a related note: if you’ve got tips on how to collect and maintain an application inventory, share them in the comments below.)

 

Map Applications to Devices

Now it’s time to correlate infrastructure with applications. In other words, if server org1east-c goes offline, what applications are affected? What if the NAS doesn’t survive a firmware upgrade? When you can draw direct connections between your applications and the infrastructure, you can shift the focus of your monitoring (and eventually notification routing) to the right teams as quickly as possible.
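Even a simple lookup table makes that correlation actionable. Here is a minimal PowerShell sketch; the applications and team addresses are made up, and in practice this mapping would live in your monitoring tool or CMDB rather than in a script.

# Hypothetical application-to-infrastructure map keyed by device name.
$appMap = @{
    'org1east-c' = @(
        @{ App = 'Customer Portal'; Team = 'web-team@example.com' }
        @{ App = 'Payments API';    Team = 'payments@example.com' }
    )
    'nas-01'     = @(
        @{ App = 'File Shares';     Team = 'infra-team@example.com' }
    )
}

# "Server org1east-c just went offline -- which applications are affected?"
$appMap['org1east-c'] | ForEach-Object { '{0} (notify {1})' -f $_.App, $_.Team }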

 

The benefit of this exercise is to tune your alerts and notifications to reach the right teams right away.

 

Create Your Notifications

It’s useless if your monitoring solution detects issues and fails to notify cognizant IT staff. But because you’ve got a list of your apps, and you have mapped connections between your server infrastructure and applications, you’ll be able to set up your notification routing efficiently.

 

For example, if one of your load balanced web servers goes offline, you can have an alert sent to the server team to investigate the server. But don’t stop there. Also have an alert sent to the team that supports the website or app that relies on that web server. They may not need to take any corrective action, but they’ll certainly appreciate a heads-up that there may be infrastructure trouble brewing. And you’ll also avoid fielding calls from the web team asking, “What’s going on?”
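Building on the mapping sketch above, the fan-out might look something like this; every host name, address, and relay here is a placeholder, and a real monitoring platform would do this through its alert actions rather than raw SMTP.

# One infrastructure alert, routed to the server team plus the affected app teams.
$affectedHost = 'org1east-c'
$appTeams     = @('web-team@example.com', 'payments@example.com')   # looked up from the map above
$recipients   = @('server-team@example.com') + $appTeams

Send-MailMessage -To $recipients -From 'monitor@example.com' `
    -Subject "Infrastructure alert: $affectedHost offline" `
    -Body "$affectedHost stopped responding at $(Get-Date). App teams: heads-up only; no action may be required." `
    -SmtpServer 'smtp.example.com'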

 

Routing notifications isn’t the most exciting part of deploying a monitoring solution, but it’s likely the most difficult. Not because of the technology, but because of the deep dive required to really understand the connection between your applications and your infrastructure.

 

How do you handle notification policies?

Had a great time in Austin last week, but it's good to be home. Even if that means it's cold and I need to shovel. There's no place I'd rather be to watch the Patriots this Sunday night.

 

(You *knew* I had to mention them at some point, right?)

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

YouTube vows to recommend fewer conspiracy theory videos

That’s just what they want us to think.

 

Watch out for this new cryptocurrency ransomware stalking the web

If you have not yet heard of Anatova, consider yourself warned.

 

Drone sighting disrupts major US airport

I’m seeing more of these reports, and can’t help but wonder if they are connected or if they are just copycats.

 

Tim Cook Calls for ‘Data-Broker Clearinghouse’ in Push for Privacy Rules

With Apple revenue down, maybe this is the start of a new revenue stream – data security. Honestly, if the new fruit phone was marketed as “the most secure device,” they would get millions of users to upgrade to the latest version.

 

Amazon puts delivery robots on streets – with a human in tow

This seems rather inefficient. Perhaps it is a test for something not as silly. But in this current form, it’s not very useful.

 

Now Your Groceries See You, Too

Not creepy at all, nope.

 

Japanese government plans to hack into citizens' IoT devices

Governments are meant to help provide for and protect their citizens. The idea that Japan would serve as a Red Team to help protect citizens with insecure IoT devices is brilliant. Help the people, help yourself.

 

 

Ah, the joys of working for a network monitoring company that leads by example:

There is no getting away from it: the word “cloud” is a buzzword in the IT industry. Maybe it has even graduated into something more.

 

Why Cloud?

 

I remember being asked this exact question during a technical presentation I got drafted into at the very last minute with a well-known high street brand. During the infrastructure pitch, I had added a little flair at the end and sketched something resembling a cumulus cloud, which elicited the question, “Why cloud?” Now at this point, I was unsure whether I was being tested or whether the customer genuinely didn’t know, and while I had a fairly good understanding of what it was, explaining it was something I’d never done before. Thankfully, a bolt of inspiration had me reciting the fact that, “Network diagrams have a cloud to represent the infrastructure outside of an organisation. This is because you know the hardware is there, but you can’t see it and have no control over it. That ‘cloud’ probably drew from the idea of the fog of war.” Everyone in the room smiled, and I felt very happy with my answer, as I had avoided saying “as a service” at any point, which, at the time, about nine years ago, was all you ever heard when the word “cloud” came up in an IT conversation.

 

Today it’s assumed that if you’re talking to anyone in the IT industry, they comprehend the idea of cloud. We’ve defined something ethereal and tried to standardise it. But does one person’s idea of cloud equal that of another?

 

Yet my point made nearly ten years ago still stands – there is hardware and software outside of your control. You probably don’t even know what’s being used. You can’t see or touch it, but you rely on it for the success of your business.

 

The Truth about Cloud Costs

 

We have now reached a point where practically every IT product has the word “cloud” in its tagline or description, and if it doesn’t yet, its next release will. But how do you decide when it’s time to stick with on-premises or pivot and utilise the cloud? We’ve all heard the horror stories of organisations spending their predicted yearly budget in one month to migrate to the cloud, of providers going bust and huge swaths of data being lost, or even of access and encryption keys being stolen or published. Yet people are moving to the cloud in droves, unperturbed by these revelations. It’s reminiscent of when virtualisation first entered the data centre. We had VM sprawl, but that plague was controlled, limited by the fact that you had a finite pool of resources. With cloud, the concept of limitless resource pools with near-instant access that you can consume with just a couple of lines of code means things can get out of hand very quickly if you don’t stay in control.

 

We have also heard the “Hotel California” stories—those who have tried to leave the cloud for one reason or another and have been unsuccessful. Yet for all these horror stories, there are huge numbers of successes, both those we hear about (like Uber or Airbnb) and those we don’t. You only need to look at the earnings calls and reported profits of the big players in the industry to see that cloud as part of an IT stack is here to stay.

 

So, while moving from a hefty CAPEX number on the balance sheet to perceived smaller recurring OPEX numbers may look attractive to some organisations, these quite often snowball and, like death by a thousand cuts, you can find tonnes of entries on your balance sheet consisting of cloud expenses. My personal bill can run to a couple of pages, so an enterprise organisation could run into the hundreds if not careful.

 

That’s because cloud service providers have succeeded in figuring out a cost model for everything. Some services incur an immediate charge upon use, while others have a threshold you have to cross before being billed for their use.

 

You may have also heard the analogy, “The cloud is a great place to holiday, but not to live.” In terms of costs, I don’t think this is a great statement. It probably has its roots with those early adopters who felt they could lift and shift everything to the cloud without refactoring their workflows. But it isn’t true. The cloud is a great place for speed and scalability for workflows that need huge amounts of resources made available in a very small window of time. Workflows that are only used for a short period of time and can then be scaled back, or essentially rented, are an advantage of the cloud that piques a lot of people’s interest. The fact that you can get someone to provide you a service like email, for instance, so you don’t need to rely on several sites running multiple servers managed by a team of specialised individuals, is another attractive characteristic that has people interested in the cloud.

 

Hybrid Cloud Benefits

 

So, what is a proper hybrid cloud model? Well, what may be right for one organisation is different for another, but the overarching idea is to have your workload reside on the right infrastructure at the right time during its lifespan. Careful planning and leveraging multiple levels of security during your deployment and management is key. Having a 21st century outlook on IT is how I like to think of it: understanding that certain applications or data require you to keep them close at hand or secured in a data centre you own, while others can live on someone else’s tin as long as you have access to it. Not getting tied up thinking that a workload’s current location, hardware, and software are permanent is a great stance for tackling today’s IT challenges. Knowing that it’s not something you can buy off the shelf, but something you have to craft and sculpt, is a trait you want in your IT department as these hybrid IT years progress.

 

  In the U.K., we have several websites that, as a service, can switch you between energy, broadband, banking, mortgage, credit card, and other providers to one that is cheaper or offers the services you require. I can see that in the near future we could very well have IT companies whose business model is to make sure you are leveraging the right cloud for your workload. Maybe I should go now and copyright that…

Here’s a blog that reviews strategies for managing networks that are growing in size and complexity.

 

Federal IT pros understand the challenge agencies face in trying to manage an ever-growing, increasingly complex network environment. While federal IT networks are becoming more distributed, the demand for security and availability is increasing. Yes, at some point, new technologies may make government network operations  easier and more secure; until that point, however, growing pains seem to be getting worse.

 

While there may be no one-size-fits-all solution to this problem, there is certainly something today’s federal IT pros can do to help ease those growing pains: invest in a suite of enterprise management tools.

 

Taking a suite-based approach, you should be able to more effectively manage a hybrid cloud environment alongside a traditional IT installation, while simultaneously scaling as the network grows and gaining visibility across the environment—including during IT modernization initiatives.

 

Moving Past the Status Quo

 

Today, most federal IT pros use a series of disparate tools to handle a range of management tasks. As the computing environment expands, this disparate approach will most likely become increasingly less viable and considerably more frustrating.

 

Investing in a suite of IT tools for government agencies that all work together can help save time, enhance efficiency, and provide the ability to be more proactive instead of reactive.

 

Consider monitoring a hybrid-cloud environment, for example. Traditionally, the federal IT pro would have a single tool for tracking on-premises network paths. With the move to a cloud or hybrid environment, services have moved outside the network; yet, the federal IT pro is still responsible for service delivery. The smart move would be to invest in a tool that can provide network mapping of both on-prem and cloud environments.

 

Next, make sure that the tool integrates with other tools delivering a single integrated dashboard. This way, you can combine information about your agency’s cloud environment with on-premises information, which should provide you with a much more accurate understanding of performance.

 

In fact, make sure the dashboards provide accessibility and visibility into all critical tools, including your network performance monitor, traffic analyzer, configuration manager, virtualization manager, server and application monitor, server configuration, storage monitor, database performance analyzer—in other words, the works.

 

Finally, be sure to implement role-based access controls designed to prevent users with disparate responsibilities from affecting each other’s resources, while still leveraging a common platform.

 

Being able to see everything through a series of dashboards can be a huge step in the right direction. Enabling tools to connect and share information, so you can make more highly informed decisions, is the icing on the proverbial cake.

 

One final word of advice on choosing a platform—make sure it connects discovery and resource mapping, dashboards, centralized access control, alerting, reporting, consolidated metrics, and data.

 

If the platform you’re looking at doesn’t do all of these things, then keep looking. It will eventually prove to be well worth your time and money.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Public clouds provide enormous scale and elasticity. Combined with a consumption-based model and instant deployment methodologies, they provide a platform that is extremely agile and avoids locking up precious capital.

 

Surely it’s a no-brainer, and that’s why every company has plans to migrate workloads to the cloud. In fact, it is so obvious that one actually needs to look for reasons why it might not be a good idea. It may seem counterintuitive, but it is one of the most important steps you could take before starting on your cloud migration journey.

 

WHY?

Regardless of size, migration can be a huge drain on time and resources. One of the most cited reasons for the failure of a cloud migration project is: “The project ran out of steam.” Such projects typically lack enthusiasm, resulting in slow progress. Eventually, corners are cut to meet deadlines, and the result is a sub-standard migration and eventual failure.

 

Humans are wired to be more interested in doing something where there is a tangible benefit for them in some way. In addition, they are more likely to remain committed as long as they can see a goal that is clear and achievable.

 

HOW?

Migration is not a single-person job. Depending on the size of a company, teams from different business groups are involved in the process and have to work together to achieve that goal. To ensure success, it is critical to get the backing of all the stakeholders through an honest evaluation of the business need for migration to the cloud. It must be backed by hard facts and not just because it’s fashionable.

 

This evaluation goes hand-in-hand with the problems a company is looking to solve. The most effective pointers to them are the existing pain points. Are the costs of running a particular environment too high? Is the company becoming uncompetitive due to lack of agility? It might even be a case of developing the capability to temporarily burst into the cloud when an occasional requirement comes up, instead of locking up capital by buying, provisioning, and maintaining your own equipment.

 

ANALYSIS

Armed with those pain points and relevant data gathered, analysis can be done to determine if cloud migration is the only pragmatic solution. SWOT is a great framework for such an evaluation, and is used by many organisations for strategic planning.

 

It’s important to have all the key stakeholders present when this exercise is done. This ensures that they are part of the discussion and can clearly see all arguments for and against the migration. Those stakeholders include leaders from the infrastructure and application groups as well as from the business side, as they have the best view of the financial impact of current issues and what it would be if action is not taken.

 

The focus of this analysis should be to identify what weaknesses and threats to the business exist due to the current state and if migration to the cloud will change them into strengths and opportunities. With prior research in hand, it should be possible to determine if the move to the cloud can solve those issues. More importantly, this analysis will highlight the financial and resource costs of the migration and if it would be worth that cost when compared against the problems it will fix.

 

CONCLUSION

Effort spent at this stage is extremely valuable and ensures the decision to migrate to the cloud is robust. Furthermore, the analysis clarifies the need for action to all stakeholders and brings them on board with the vision.

 

Once they see the goal and how it will solve their business problems, the result is a commitment from all teams to provide their share of the resources and to participate continually in the project until its successful completion.
