
Geek Speak


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on the internet of things (IoT) and security threats. We’ve all been expecting IoT devices to be problematic and it’s good to see recognition that better controls are needed for the federal government.

 

The Department of Defense is hearing the IoT alarm bells.

 

Did you hear about the heat maps used by GPS-enabled fitness tracking applications, which the U.S. Department of Defense (DOD) warned showed the location of military bases, or the infamous Mirai Botnet attack of 2016? The former led to the banning of personal devices from classified areas in the Pentagon, as well as a ban on all devices that use geolocating services for deployed personnel. While the latter may not have specifically targeted government networks, it still served as an effective wake-up call that connected devices have the potential to create a large-scale security crisis.

 

Indeed, the federal government is evidently starting to hear the alarm bells, considering the creation of the IoT Cybersecurity Act of 2017. The act emphasizes the need for better controls over the procurement of connected devices and assurances that those devices are vulnerability-free and easily patchable.

 

Physical and cultural silos

 

Technical, physical, and departmental silos could undermine the government’s IoT security efforts. The DOD comprises about 15,000 networks, many of which operate independently of each other. According to respondents cited in SolarWinds’ 2018 IT Trends Report, federal agencies are susceptible to inadequate organizational strategies and lack of appropriate training on new technologies.

 

Breaking the silos

 

Bringing technology, people, and policy together to protect against potential IoT threats is a tricky business, particularly given the complexity of DOD networks. But it is not impossible, as long as defense agencies adhere to a few key points.

 

Focus on the people

 

First, it is imperative that federal defense agencies prioritize the development of human-driven security policies.

 

Malicious and careless insiders are real threats to government networks—perhaps just as much, if not more so, than external bad actors. Policies regarding which devices are allowed on the network—and who is allowed to use them—should be established and clearly articulated to every employee.

 

Agencies must also try to ensure everyone understands how those devices can and cannot be used, and continually emphasize those policies. Implementing a form of user device tracking—mapping devices on the network directly back to their users and potentially detecting dangerous activity—can assist in this effort.

 

Gain a complete view of the entire network

 

DOD agencies should provide their IT teams with tools that allow them to gain a complete, holistic view of their entire networks. They must institute security information and event management (SIEM) to automatically track network and device logins across these networks and set up alerts for unauthorized devices.
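To make that concrete, here is a minimal sketch, in Python, of the kind of rule a SIEM automates: compare the devices appearing in connection logs against an approved inventory and flag anything unknown. The file names, log format, and column names are hypothetical, and a real SIEM works against live log streams rather than flat files.

```python
# Minimal sketch of an "unauthorized device" rule a SIEM might automate.
# The file paths and log format below are hypothetical placeholders.
import csv

def load_approved_devices(path):
    """Read approved MAC addresses (one per line) into a set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_unauthorized(log_path, approved):
    """Yield (timestamp, mac, user) rows whose MAC is not on the approved list."""
    with open(log_path) as f:
        for row in csv.DictReader(f):  # expected columns: timestamp, mac, user
            if row["mac"].lower() not in approved:
                yield row["timestamp"], row["mac"], row["user"]

if __name__ == "__main__":
    approved = load_approved_devices("approved_macs.txt")
    for ts, mac, user in find_unauthorized("device_logins.csv", approved):
        print(f"ALERT: unapproved device {mac} used by {user} at {ts}")
```

The value isn’t in the code itself; it’s in having the approved inventory and the device-to-user mapping in the first place, which is exactly what the policies above are meant to establish.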

 

Get everyone involved

 

It is incumbent upon everyone to be vigilant and involved in all aspects of security, and someone has to set this policy. That could be the chief information security officer or an authorizing official within the agency. People will still have their own unique roles and responsibilities, but just like travelers in the airport, all agency employees need to understand the threats and be on the lookout. If they see something, they need to say something.

 

Finally, remember that networks are evolutionary, not revolutionary. User education, from top management on down, must be as continuous and evolving as the actions taken by adversaries. People need to be regularly updated and taught about new policies, procedures, tools, and the steps they can take to be on the lookout for potential threats.

 

As the fitness tracking apps issue and the Mirai Botnet incident have shown, connected devices and applications have the potential to do some serious damage. While government legislation like the IoT Cybersecurity Act is a good and useful step forward, it’s ultimately up to agency information technology professionals to be the last line of defense against IoT security risks. The actions outlined here can help strengthen that line of defense and effectively protect DOD networks against external and internal threats.

 

Find the full article on SIGNAL.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Welcome back to this series of blogs on my real-world experiences of Hybrid IT. If you’re just joining us, you can find previous posts here, here and here.

 

So far, I’ve covered a brief introduction to the series, public cloud costing and experiences, and how to build better on-premises data centres.

 

In this post, I’ll cover something a little different: location and regulatory restrictions driving hybrid IT adoption. I am British, and as such, a lot of this is going to come from my view of the world in the U.K. and Europe. Not all of these issues will resonate with a global audience; however, they are good food for thought. With adoption of the public cloud, there are many options available to deploy services within various regions across the world. For many, this won’t be much of a concern. You consume the services where you need to and where they need to be consumed by the end users. This isn’t a bad approach for most global businesses with global customer bases. In my corner of the world, we have a few options for U.K.-based deployments when it comes to hyperscale clouds. However, not all services are available in these regions, and, especially for newer services, they can take some time to roll out into these regions.

 

Now I don’t want to get political in this post, but we can’t ignore the fact that Brexit has left everyone with questions over what happens next. Will the U.K. leaving the EU have an impact? The short answer is yes. The longer answer really depends on what sector you work in. Anyone who works with financial, energy, or government customers will undoubtedly see some challenges. There are certain industries that comply with regulations and security standards that govern where services can be located. There have always been restrictions for some industries that mean you can’t locate data outside of the U.K. However, there are other areas where being hosted in the larger EU area has been generally accepted. Data sovereignty needs to be considered when deploying solutions to public clouds. When there is finally some idea of what’s happening with the U.K.’s relationship with the EU, and what laws and regulations will be replicated within the U.K., we in the IT industry will have to assess how that affects the services we have deployed.

 

For now, the U.K. is somewhat unique in this situation. However, the geopolitical landscape is always changing, and treaties often change, safe harbour agreements can come to an end, and trade embargos or sanctions crop up over time. You need to be in a position where repatriation of services is a possibility should such circumstances come your way. Building a hybrid IT approach to your services and deployments can help with mobility of services—being able to move data between services, be that on-premises or to another cloud location. Stateless services and cloud-native services are generally easier to move around and have fewer moving parts that require significant reworking should you need to move to a new location. Microservices, by their nature, are smaller and easier to replace. Moving between different cloud providers or infrastructure should be a relatively trivial task. Traditional services, monolithic applications, databases, and data are not as simple a proposition. Moving large amounts of data can be costly; egress charges are commonplace and can be significant.

 

Whatever you are building or have built, I recommend having a good monitoring and IT inventory platform that helps you understand what you have in which locations. I also recommend using technologies that allow for simple and efficient movement of data. As mentioned in my previous post, there are several vendors now working in what has been called a “Data Fabric” space. These vendors offer solutions for moving data between clouds and back to on-premises data centres. Maintaining control of the data is a must if you are ever faced with the proposition of having to evacuate a country or cloud region due to geopolitical change.

 

Next time, I’ll look at how to choose the right location for your workload in a hybrid/multi-cloud world. Thanks for reading, and I welcome any comments or discussion.

At the start of this week, I began posting a series of how-to blogs over in the NPM product forum on building a custom report in Orion®. If you want to go back and catch up, you can find them here:

 

It all started when a customer reached out with an “unsolvable” problem. Just to be clear, they weren’t trying to play on my ego. They had followed all the other channels and really did think the problem had no solution. After describing the issue, they asked, “Do you know anyone on the development team who could make this happen?”

 

As a matter of fact, I did know someone who could make it happen: me.

 

That’s not because I'm a super-connected SolarWinds employee who knows the right people to bribe with baklava to get a tough job done. (FWIW, I am and I do, but that wasn’t needed here.)

 

Nor was it because, as I said at the beginning of the week, “I’m some magical developer unicorn who flies in on my hovercraft, dumps sparkle-laden code upon a problem, and all is solved.”

 

Really, I’m more like a DevOps ferret than a unicorn: a creature that scrabbles around, seeking out hidden corners and openings, and delving into them to see what secret treasures they hold. Often, all I come out with is an old wine cork or a dead mouse. But every once in a while, I find a valuable gem, which I tuck away into my stash of shiny things. And that leads me to the first big observation I recognized as part of this process:

 

Lesson #1: IT careers are more often built on a foundation of “found objects”: small tidbits of information or techniques we pick up along the way, which we string together in new and creative ways.

 

And in this case, my past ferreting through the dark corners of the Orion Platform had left me with just the right stockpile of tricks and tools to provide a solution.

 

I’m not going to dig into the details of how the new report was built, because that’s what the other four posts in this series are all about. But I *do* want to list out the techniques I used, to prove a point:

  • Know how to edit a SolarWinds report
  • Understand basic SQL queries (really just select and joins)
  • Have a sense of the Orion schema
  • Know some HTML fundamentals

 

Honestly, that was it. Just those four skills. Most of them are trivial. Half of them are skills that most IT practitioners may possess, regardless of their involvement with SolarWinds solutions.
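To give a flavour of the level of skill involved, here’s a self-contained sketch of the “select plus join plus a little HTML” pattern. The table and column names are invented for the example; the actual Orion schema uses its own names, and the report editor does most of this work for you.

```python
# A self-contained illustration of the "select + join" level of SQL described
# above. The tables and columns here are invented for the example; the real
# Orion schema uses different names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (node_id INTEGER PRIMARY KEY, caption TEXT, status TEXT);
    CREATE TABLE interfaces (iface_id INTEGER PRIMARY KEY, node_id INTEGER,
                             name TEXT, utilization REAL);
    INSERT INTO nodes VALUES (1, 'core-sw-01', 'Up'), (2, 'edge-rtr-01', 'Down');
    INSERT INTO interfaces VALUES (10, 1, 'Gi0/1', 72.5), (11, 2, 'Gi0/0', 0.0);
""")

# The heart of the report is a SELECT with one JOIN.
rows = conn.execute("""
    SELECT n.caption, n.status, i.name, i.utilization
    FROM nodes n
    JOIN interfaces i ON i.node_id = n.node_id
    ORDER BY i.utilization DESC
""").fetchall()

# ...and a little HTML to present it, which is the other "skill" on the list.
cells = "".join(
    f"<tr><td>{c}</td><td>{s}</td><td>{name}</td><td>{util:.1f}%</td></tr>"
    for c, s, name, util in rows
)
print(f"<table><tr><th>Node</th><th>Status</th><th>Interface</th>"
      f"<th>Utilization</th></tr>{cells}</table>")
```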

 

Let’s face it, making a loaf of bread isn’t technically complicated. The ingredients aren’t esoteric or difficult to handle. The process of mixing and folding isn’t something that only trained hands can do. And yet it’s not easy to execute the first time unless you are comfortable with the associated parts. Each of the above techniques had some little nuance, some minor dependency, that would have made this solution difficult to suss out unless you’d been through it before.

 

Which takes me to the next observation:

 

Lesson #2: None of those techniques are complicated. The trick was knowing the right combination and putting them together.

 

I had the right mix of skills, and so I was able to pull them together. But this wasn’t a task my manager set for me. It’s not in my scope of work or role. This wasn’t part of a side-hustle that I do to pay for my kid’s braces or feed my $50-a-week comic book habit. So why would I bother with this level of effort?

 

OK, I'll admit I figured it might make a good story. But besides that?

 

I’d never really dug into Orion’s web-based reporting before. I knew it was there, I played with it here and there, but really dug into the guts of it and built something useful? Nah, there was no burning need. This gave me a reason to explore and a goal to help me know when I was “done.” Better still, this goal wouldn’t just be a thought experiment, it was actually helping someone. And that leads me to my last observation:

 

Lesson #3: Doing for others usually helps you more.

 

I am now a more accomplished Orion engineer than I was when I started, and in the process I’ve (hopefully) been able to help others on THWACK® become more accomplished as well.

 

And there’s nothing complicated about knowing how that’s a good thing.

This month, we’ve spent time discussing how cloud will affect traditional on-premises IT operations staff. Many of you provided great feedback on how your organizations view cloud computing, whether cloud is a strategic solution for you, and what you give up when you go with a cloud solution. Broadly, your responses fell into two categories: yeah, we’re doing cloud and it’s no big deal, and no, we’re not doing cloud and likely never will. Not much middle ground, which is indicative of the tribalism that’s developed around cloud computing in the last decade.

 

So instead of beating this horse to death once more, let’s consider what nascent technologies lie in wait for us in the next decade. You’re probably tired of hearing about these already, but we should recall that we collectively viewed the cloud as a fad in the mid-2000s.

 

I present to you, in no particular order, the technologies that promise to disrupt the data center in the next decade: Artificial Intelligence (AI) and Machine Learning (ML).

 

I know, you all just rolled your eyes. These technologies are the stuff of glossy magazines in the CIO’s waiting room. Tech influencers on social media peddle these solutions ad nauseam, and they’ve nearly lost all meaning in a practical sense. But let’s dig into what each has to offer and how we’ll end up supporting them.

 

AI

When you get into AI, you run into Tesler’s Theorem, which states, “AI is whatever hasn't been done yet.” This is a bit of snark, to be sure. But it’s spot-on in its definition of AI as a moving, unattainable goal. Because we associate AI with the future, we don’t consider any of our current solutions to be AI. For example, consider any of the capacity planning tools that exist for on-prem virtualization environments. These tools capitalize on the data your vCenter Servers have been collecting for many years and combine it with near-real-time information to predict future capacity availability. Analytics as a component of AI is already here; we just don’t consider it AI.
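As a toy illustration of that point, here’s the sort of trend-line arithmetic a capacity planning tool performs, sketched in a few lines of Python. The numbers are made up, and real tools use far richer models and far more telemetry than a straight line.

```python
# Toy illustration of the "analytics as AI" point: fit a linear trend to
# historical storage consumption and estimate when capacity runs out.
# The numbers are invented; real tools use far richer models and telemetry.

history_tb = [42.0, 44.1, 45.9, 48.2, 50.5, 52.3]   # monthly used capacity, TB
capacity_tb = 80.0

n = len(history_tb)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(history_tb) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history_tb)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# Months from "now" (the last sample) until the trend line crosses capacity.
months_until_full = (capacity_tb - intercept) / slope - (n - 1)
print(f"Growing ~{slope:.1f} TB/month; projected full in ~{months_until_full:.0f} months")
```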

 

One thing is certain about AI: it requires a ton of compute resources. We should expect that even modest AI workloads will end up in the cloud, where elastic compute resources can be scaled to meet these demands. Doing AI on-premises is already cost prohibitive for most organizations, and the more complex and full-featured it becomes, the more cost prohibitive it will be for all companies.

 

ML

You can barely separate ML from AI. Technically, ML is a discipline within the overall AI field of research. But in my experience, the big differentiator here is in the input. To make any ML solution accurate, you need data. Lots of it. No, more than that. More even. You need vast quantities of data to train your machine learning models. And that should immediately bring you to storage: just where is all this data going to live? And how will it be accessed? Once again, because so many cloud providers offer machine learning solutions (see Google AutoML, Amazon SageMaker, and Microsoft Machine Learning Studio), the natural location for this data is in the cloud.

 

See the commonality here? These two areas of technological innovation are now cloud-native endeavors. If your business is considering either AI or ML in its operations, you’ve already decided to, at a minimum, go to a hybrid cloud model. While we may debate whether cloud computing is appropriate for the workloads of today, there is no debate that cloud is the new home of innovation. Maybe instead of wondering how we will prepare our data centers for AI/ML, we should instead wonder how we'll prepare ourselves.

 

What say you, intrepid IT operations staff? Do you see either of these technologies having a measurable impact on your shop?

Back from Darmstadt and another wonderful SQL Konferenz. I have a few weeks at home before I’m away again, heading to Redmond for the Microsoft MVP Summit.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Nike just bricked its self-lacing shoes by accident

Setting aside the obvious jokes about how people are being forced to tie their own shoes, this tech does serve a need for people who are physically incapable of doing a task we take for granted. Here’s hoping Nike can make this work.

 

Use an 8-char Windows NTLM password? Don't. Every single one can be cracked in under 2.5hrs

Just reminding you that computers are getting faster, and soon quantum computing will crack stronger passwords, and faster.

 

85 percent of Chrome apps and extensions lack a privacy policy

For a company that once used the slogan “Don’t be evil,” Google has taken a long, long time to publicly show that it cares about data security and privacy for users. That’s probably a result of their not caring, as our data is their product.

 

Google Didn't Notify Users Its Nest Alarm System Has A Microphone

As I was just saying...

 

Jerry Westrom Threw Away a Napkin Last Month. It Was Used to Charge Him in a 1993 Murder.

I expect we are going to hear more stories similar to this one, as police around the country review their cold case files in an effort to seek justice.

 

Russian Hackers Go From Foothold to Full-On Breach in 19 Minutes

And it takes 90 days or more for most companies to be aware that the breach happened.

 

Emoji number plates launched in Queensland

I would totally pay $340 to get strips of bacon on my next plate.

 

The view of Darmstadium, including the ancient Roman wall they kept as part of the structure:

Many organizations grow each year in business scope and footprint. When I say footprint, it’s not merely the size of the organization, but the number of devices, hardware, and other data center-related items. New technologies creep up every year, and many of those technologies live on data center hardware, servers, networking equipment, and even mobile devices. Keeping track of the systems within your organization’s data center can be tricky. Simply knowing where the device is and if it’s powered on isn’t enough information to get an accurate assessment of the systems' performance and health.

 

Data center monitoring tools provide the administrator(s) with a clear view of what’s in the data center, the health of the systems, and their performance. There are many data center monitoring tools available depending on your needs, including network monitoring, server monitoring, and virtual environment monitoring, and it’s important to consider both the open-source and proprietary tools available.

 

Network Monitoring Tools for Data Centers

 

Networking can get complicated, even for the most seasoned network pros. Depending on the size of the network you operate and maintain, managing it without a dedicated monitoring tool can be overwhelming. Most large organizations will have multiple subnets, VLANs, and devices connected across the network fabric. Deploying a network monitoring tool will go a long way toward understanding which network is which and whether there are any issues with your networking configuration.

 

An effective networking tool for a data center is more than just a script that pings hosts or devices across the network. A good network tool monitors everything down to the packet. Areas in the network where throughput is crawling will be captured and reported within the GUI or through SMTP alerts. High error rates and slow response times will also be captured and reported. Network administrators can customize the views and reports that are fed to the GUI to their specifications. If networking is bad or broken, things will escalate quickly. The best network monitoring tools can help avoid this.
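For contrast, here’s a deliberately naive sketch of the gap between “ping it” and real monitoring: timing TCP connections to a couple of devices and flagging slow or failed checks. The hostnames, ports, and thresholds are placeholders; a proper network monitoring tool relies on SNMP, flow data, and packet-level capture rather than ad hoc socket checks like this.

```python
# A deliberately simple sketch: measure TCP connect latency to a few devices
# and flag slow or failed checks. Hosts, ports, and thresholds are placeholders.
import socket
import time

TARGETS = [("core-sw-01.example.net", 22), ("edge-rtr-01.example.net", 443)]
SLOW_MS = 50.0

def check(host, port, timeout=2.0):
    """Return connect latency in milliseconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

for host, port in TARGETS:
    latency = check(host, port)
    if latency is None:
        print(f"ALERT: {host}:{port} unreachable")
    elif latency > SLOW_MS:
        print(f"WARN: {host}:{port} slow ({latency:.1f} ms)")
    else:
        print(f"OK: {host}:{port} {latency:.1f} ms")
```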

 

Data Center Server Monitoring Tools

 

Much of the work that a server or virtual machine monitoring tool does can be also accomplished using a good network monitoring tool. However, there are nuances within server/VM monitoring tools that go above and beyond the work of a network monitoring tool. For example, there are tools designed specifically to monitor your virtual environment.

 

A virtual environment essentially contains the entire data center stack, from storage to networking to compute. This entire stack is more than just simple reachability and SNMP monitoring. It’s imperative to deploy a data center monitoring solution that understands things at a hypervisor level where transactions are brokered between the kernel and the guest OS. You need a tool that does more than tell you the lights are still green on your server. You need a tool that will alert you if your server light turns amber and why it’s amber, as well as how to turn it back to green.

 

Some tools offer automation in their systems monitoring. For instance, if one of your VMs is running high on CPU utilization, the tool can migrate it to a host or cluster with more available CPU. That kind of monitoring is helpful, especially when things go wrong in the middle of the night and you’re on call.
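The decision logic behind that kind of automation looks roughly like the sketch below. The host data and the migrate() stub are stand-ins for whatever your monitoring platform and hypervisor APIs actually expose; this illustrates the idea, not a working integration.

```python
# Sketch of the remediation logic described above: if a VM runs hot and another
# host has headroom, move it. The data structures and migrate() stub stand in
# for whatever your monitoring tool and hypervisor API actually provide.

CPU_ALERT_THRESHOLD = 90.0   # percent

hosts = {
    "esx-01": {"cpu_used": 85.0, "vms": {"app-vm-1": 92.0, "db-vm-1": 40.0}},
    "esx-02": {"cpu_used": 35.0, "vms": {"web-vm-2": 20.0}},
}

def migrate(vm, src, dst):
    # Placeholder: in reality this would call the hypervisor's migration API.
    print(f"Migrating {vm} from {src} to {dst}")

def remediate(hosts):
    for src, host in hosts.items():
        for vm, vm_cpu in host["vms"].items():
            if vm_cpu < CPU_ALERT_THRESHOLD:
                continue
            # Pick the least-loaded other host as the destination.
            candidates = [h for h in hosts if h != src]
            dst = min(candidates, key=lambda h: hosts[h]["cpu_used"])
            if hosts[dst]["cpu_used"] < host["cpu_used"]:
                migrate(vm, src, dst)

remediate(hosts)
```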

 

Application Monitoring Tools

 

Applications are the lifeblood of most organizations. Depending on the customer, some organizations may have to manage and monitor several different applications. Having a solid application performance monitoring (APM) tool in place is crucial to ensuring that your applications are running smoothly and the end users are happy.

 

APM tools allow administrators to see up-to-date information on application usage, performance metrics, and any potential issues that may arise. If an application begins to deliver a poor end-user experience, you want to know about it before the end users do, with as much lead time as possible. APM tools track client CPU utilization, bandwidth consumption, and many other performance metrics. If you’re managing multiple applications in your environment, don’t leave out an APM tool—it might save you when you need it most.
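A trivial sketch of the end-user-experience angle: time a request to an application endpoint and flag degradation before the phone starts ringing. The URL and threshold are placeholders, and real APM tools go far beyond this single measurement.

```python
# Tiny sketch of measuring end-user-facing response time for one endpoint.
# The URL and threshold are hypothetical; real APM also traces transactions,
# client resource usage, and much more.
import time
import urllib.request

URL = "https://app.example.net/health"   # hypothetical health endpoint
SLOW_SECONDS = 1.0

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        elapsed = time.monotonic() - start
        status = resp.status
except OSError as exc:
    print(f"ALERT: {URL} failed: {exc}")
else:
    level = "WARN" if elapsed > SLOW_SECONDS else "OK"
    print(f"{level}: {URL} returned {status} in {elapsed:.2f}s")
```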

 

Finding the Best Data Center Monitoring Tools for Your Needs

 

Ensure you have one or more of these types of tools in your data center; they will save you time and money in the long run. Having a clear view of all aspects of your data center and their performance and health helps build confidence in the reliability of your systems and applications.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting blog on skills needed by federal IT pros. My team and I run into hybrid IT more and more frequently, and it’s good for clients to gain these needed skills.

 

It seems the federal IT world has gone hybrid, keeping some applications on-premises and migrating others to the cloud. Some are now taking advantage of the economy of scale that the cloud provides while maintaining in-house control of other mission-critical applications.

 

Hybrid IT can compound an already formidable network management challenge facing federal administrators. First, keeping an eye on both on-premises and off-premises systems and applications can be a tall order for even the most seasoned IT professional. Adding to the complexity is the fact that some engineers and administrators are now required to manage their agency’s relationship with external cloud service providers, while also maintaining acceptable levels of service quality, and not compromising security.

 

How can IT professionals successfully adapt to this strange new world? How can agencies help their staff along the path toward success?

 

Evolve Existing Skills, and Acquire New Ones

 

Typically, IT professionals love learning about new technologies and seeing how they can apply them to help their organizations or solve complex problems. That hunger can serve them well in a hybrid IT environment.

 

Consider investing in learning about new solutions to help staff manage these environments. They can familiarize themselves with the terminology and the concepts, and then move into gaining a greater understanding of areas like software-defined constructs, containers, microservices, etc.

 

IT is no longer just about managing the network; it’s also about managing business relationships. IT professionals must serve as the foundation of the relationships being developed between their agencies and cloud providers.

 

Find a Single Point of Truth

 

The greatest challenge administrators face when managing a hybrid IT environment may be a lack of visibility. It can be difficult to track down the root of a problem. Is the issue in-house, or is it at the host location? If it’s the latter, who’s responsible for troubleshooting?

 

It’s important to adopt a management and monitoring mindset that provides visibility across the entire IT landscape. A “single point of truth” can help show where the fault lies, so it can be quickly addressed.

This approach helps bring clarity, transparency, and total visibility to an enormously complex IT infrastructure. In doing so, it may make that infrastructure far easier to manage. Another potential benefit is strengthening the relationship between the agency and its cloud partners, because everyone can get on the same page.

 

Adopting a hybrid IT architecture can be extraordinarily beneficial. Hybrid IT may enable agencies to scale IT resources up or down with ease, achieve greater agility and cost efficiencies, and choose from best-in-class cloud service providers to satisfy their unique needs.

 

Still, we cannot discount these added complexities and challenges that hybrid IT may bring. Ask the question, “Are my skills and knowledge up to par when it comes to visibility and understanding of hybrid IT networks?” There will most likely be additional hurdles that must be overcome. After all, that’s usually what happens anytime IT professionals venture into a strange new world. Make sure you’re properly equipped to address these challenges.

 

Find the full article on Open Access Government.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

In this blog series, we’ve been looking at various methods of utilisation with cloud infrastructure. While cloud adoption can be beneficial to almost any organisation, change for change’s sake can be costly and bad for business. Simply lifting and shifting applications from your on-premises server room or data centre to the public cloud won’t give you the desired impact or experience that the cloud has the potential to deliver.

 

Companies are trying to figure out a way to adapt to today’s digital world. How can they change to find new customers, increase customer satisfaction, and gain repeat business? How can they improve overall efficiency, speed up time to market or time to deliver, and thereby grow profits? A move to cloud is usually part of a bigger plan to undergo digital transformation.

 

There are four key areas to digital transformation: social, mobile, analytics, and cloud, or SMAC for short. To understand how these four areas intersect, imagine having a website that can be accessed anytime, anywhere via a mobile device and can suggest other items to purchase based on the user’s history and on people who bought the same item. This digital transformation will necessitate a large, distinctive change for many businesses—a metamorphosis, if you will.

 

I’m not suggesting you curl up in a cocoon for two weeks and come out a fancy butterfly. However, I am suggesting that you take a strong look at the business processes and applications you’re running now. How much time is spent on upkeep? How many people work with them? Is it a “house of cards” application stack? Think of the data centre where caution tape marked a five-foot perimeter around two important racks: no one was allowed near them, for fear that the billing application could halt.

 

Two questions to ask yourself:

 

  • Does this application add value to the company?
  • Does improving or speeding it up provide any worth back to the company?

 

A common misconception about digital transformation is that it’s simply the deployment of large amounts of “new” technology, like trying to purchase “DevOps.” DevOps is, in fact, a strategy, and if your company wants to attract and retain the brightest and best talent, it needs to be dedicated to digital progress.

 

Digital transformation is a complete strategy to provide a more digital experience to your customers. It MUST be led from the top down through an organisation if it is to succeed. A transformation led from the C-suite down not only instils a high level of confidence in your workforce about their leaders, but also shows that those leaders are digitally fluent. That means having the skills and competencies to select a technology, articulate the reasons for selecting it, and explain its benefit to the business, thereby allowing you to tap quickly into new streams of revenue.

 

The end goal is to remove logjams, waits, and unnecessary handling from current processes (streamlining, if you will), and thereby move to a methodology of continuously optimising business processes.

 

It’s worth noting that not every process problem can be solved with IT. There will also be a human element of interaction that needs to be optimised, especially if customer interaction is a key part of how your company does business. As you begin to implement your strategy, it’s important to note that during your current or previous deployment of the processes in question, you may have accrued some technical debt.

 

“Technical debt” describes the places where you took the quick-and-dirty solution to a problem rather than a slower, more structured approach. While this is OK in the short term, you will need to revisit the element and reimplement it once you have a better understanding of the problem and how to make the process align correctly with the business objective. Otherwise, technical debt can grow to the point where it hinders your business entirely.

 

There are several categories of technical debt that need addressing, and the overarching solution to each is the same: with a proper understanding of the problem, a better solution can be refactored into the process. This constant micro-refinement of code will hopefully steer you toward agile development.

 

Agile development is a software delivery model in which each iteration of released software maintains quality and reliability while adding to the functionality of the previous version. To achieve this, a feature isn’t released until it is satisfactorily implemented. The use of automated testing and continuous integration during implementation helps speed this process along, and we then see many smaller code change releases rather than fewer monolithic changes.

 

Companies that have a clear digital strategy and have started or completed the transformation of some business processes are said to be heading towards digital maturity. Digitally mature companies are more likely to take risks, as they understand the fact that to succeed sometimes you must fail, and they learn from those failures to rise to new levels of competitive advantage.

 

So, by looking to undergo some form of digital transformation strategy, you will probably deploy new technology. This in turn will be rooted in an agile deployment model that’s constantly evolving to make better use of best-of-breed technology. Before you know it, you’ll see the fruits of your labour grow into your own hybrid cloud model, with the ability to adapt rapidly to whatever changes come to your industry.

All successful cloud migration projects start with up-to-date knowledge of a company’s current infrastructure and application landscape. While it may seem obvious, many organisations don’t have a clear picture of what’s running in their environments. This picture helps the migration effort both before and after the migration. I’ll mention how in a bit, but first, what should we look for?

Audit

One obvious category is information about the workloads themselves. Almost all platforms come with reporting tools that tell you about the workloads running on them. That information, extracted from all platforms and kept in an easily manipulated format, such as a spreadsheet, forms the initial scope for migration.
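Something as simple as the sketch below can turn per-platform exports into a single inventory file that becomes the initial migration scope. The file names and columns are hypothetical; each platform’s report will need its own parsing.

```python
# Minimal sketch of merging per-platform workload exports into one inventory
# file. File names and columns are hypothetical placeholders.
import csv
import glob

FIELDS = ["platform", "name", "vcpus", "memory_gb", "disk_gb", "environment"]

with open("migration_inventory.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    # One export file per platform, e.g. vmware_export.csv, hyperv_export.csv
    for path in glob.glob("*_export.csv"):
        platform = path.replace("_export.csv", "")
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                writer.writerow({
                    "platform": platform,
                    "name": row.get("name", ""),
                    "vcpus": row.get("vcpus", ""),
                    "memory_gb": row.get("memory_gb", ""),
                    "disk_gb": row.get("disk_gb", ""),
                    "environment": row.get("environment", ""),
                })
```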

 

Networking design will undoubtedly change beyond recognition as part of cloud migration, but it’s crucial to have an in-depth understanding of the existing topology, the rationale behind that design, and the type and quantity of traffic that goes over the various links.

 

Knowing the complete breakdown of the costs of running the existing service is a must. Hardware and software costs for the platform and data centre costs are obvious ones, but the headcount required for operations is often forgotten. Make sure that they’re also considered.

 

What about who owns which service and who is the technical authority for it? These are often different people with different priorities. It’s not enough to just put any names there. It’s very important that the named individuals know about and agree to hold those roles for that service.

 

Pain points are excellent indicators of what are important issues to resolve. For example:

  • How long does it take for operations to deploy a machine?
  • Does the application scale under load? If so, what are the response times?
  • How long do the developers have to wait before they get their test platforms?
  • Are current processes automated or do they require manual intervention?

 

These measurements define your KPIs (key performance indicators) and will be used later to measure success. Those indicators must be well-defined, and their values should prove categorically if the state has improved or deteriorated. Once defined, values of those indicators should be carefully documented for the pre-migration state, with careful consideration that no correction is carried out before the measurements are taken. 
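One lightweight way to keep those measurements honest is to record each KPI’s baseline and post-migration value in a structured form and let the comparison speak for itself. The sketch below is illustrative only; the KPI names and numbers are invented.

```python
# Sketch of documenting KPI baselines before migration and the measured values
# afterwards. KPI names and numbers are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class KPI:
    name: str
    unit: str
    lower_is_better: bool
    baseline: float
    post_migration: Optional[float] = None

    def improved(self) -> Optional[bool]:
        if self.post_migration is None:
            return None  # not measured yet
        if self.lower_is_better:
            return self.post_migration < self.baseline
        return self.post_migration > self.baseline

kpis = [
    KPI("VM provisioning time", "hours", True, baseline=72.0, post_migration=1.5),
    KPI("App response time under load", "ms", True, baseline=850.0, post_migration=420.0),
    KPI("Test environment lead time", "days", True, baseline=10.0),  # not yet re-measured
]

for k in kpis:
    verdict = {True: "improved", False: "worse", None: "pending"}[k.improved()]
    after = "?" if k.post_migration is None else k.post_migration
    print(f"{k.name}: {k.baseline} {k.unit} -> {after} {k.unit} ({verdict})")
```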

Benefits Before Migration

Once the audit is done, the picture starts to become clearer. Let’s take the infrastructure spreadsheet for starters. Initially, it will be a crude list of workloads, but with a bit of thinking, the team can determine how many of them should be in-scope to move. They could be all production and development machines, but what about test machines?

 

It could also be that the hosting platform is creaking under the workload, increasing the urgency to migrate, to avoid capital investment into more hardware. This audit may identify workloads that are forgotten but can be removed to reclaim precious resources, thereby creating breathing space and buying more time to migrate.

 

Before the design phase, having all networking information in hand is extremely important, as networking costs are calculated very differently in the public cloud. Not only are they highly dependent on the design, but they can also influence the final design.

 

It’s also a good opportunity to look at workloads to see if there are any unusual machines in terms of resource allocation or licenses that may not fit nicely when moved to a public cloud platform. Some rationalisation in that area can help reduce pain when going ahead with the design.

 

If not already done, this exercise also gives a good estimation of potential infrastructure costs on the various cloud platforms and can also feed into business case conversations.

Benefits After Migration

Going through the audit process makes it easier to carry the habit through into the new platform, especially if it proved to be useful. Data gathered can also be extended to contain additional data points that might have been useful to have initially but were only discovered as part of the migration.

 

In the short term, costs are often seen to increase after migration. That is because in the initial stages of migration, both source and destination environments are running in parallel. In such situations, having an audit with an exact cost breakdown helps to explain that behaviour.

 

A key milestone in the migration project is to show the business that you as a team have achieved the goals that were set out in the cloud migration business case. That’s where the effort spent in defining those KPIs and documenting the values comes in handy. New data returned into those KPIs post-migration becomes the determining factor for project success or failure.

Conclusion

Having the current, complete, and accurate data in-hand is the first practical step in starting the migration journey. That should contain data from all the sources that will be affected by the migration. Most importantly, it should contain the KPIs that will determine success or failure and their measurements before and after migration.

 

It’s important because anyone can announce a migration to be a success, but as W. Edwards Deming said:

“Without data, you’re just another person with an opinion.”

This week’s Actuator comes to you live from Darmstadt, Germany - and from SQL Konferenz. If you're here, please stop me and say hello—I'd love to talk data and databases with you. And maybe some speck, too.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Google Maps’ AR navigation feature could solve the app’s biggest little problem

For someone who visits cities in other countries and has no idea about street names, I love this application of AR in Google Maps.

 

Sandusky, Ohio, Makes Election Day A Paid Holiday — By Swapping Out Columbus Day

This sounds great, but what would be even greater is allowing for easy access to early voting. But if America really wants to increase voter turnout, we should offer a basket of chicken wings (or bacon) to everyone that votes.

 

Paris seeks $14 million from Airbnb for illegal adverts

Just a quick reminder that Airbnb is to hotels as Uber is to taxis. Oh, and both companies make city living worse, not better.

 

Spotify will now suspend or terminate accounts it finds are using ad blockers

Because Spotify is planning on using that revenue to increase the pay for the artists. Yeah, I laughed out loud, too.

 

Facebook Glitch Lets You Search for Pictures of Your Female Friends

Weekly reminder that Facebook is an awful company that should be shuttered until it can be rebuilt by adults and not some kid in a dorm room at Harvard looking to hook up with girls on campus.

 

Why storing your Bitcoin private keys on Google Drive is a terrible idea

Then again, if you are foolish enough to be involved with Bitcoin to begin with, maybe you get what you deserve by storing your private keys with a cloud provider.

 

New York Wants Its Money Back From Amazon As States Turn On Corporate Tax Breaks

The best quote I saw about this story was from New York State Senator Michael Gianaris who said “Like a petulant child, Amazon insists on getting its way or takes its ball and leaves.” Good for NYC to reconsider handing billions of dollars to a company that does not need the money.

 

I come to Germany for the SQL, the speck, and the schweinshaxe:

 

If you’re just joining the series now, you can find the first two posts here and here.

 

Last time, I talked about the various decision points and considerations when using the public cloud. In this post, I’m going to look at how we build better on-premises data centres for hybrid IT. I’ll cover some of the technologies and vendors that I’ve used in this space over the last few years, what’s happening in this space, and some of the ways I help customers get the most from their on-premises infrastructure.

 

First of all, let’s start with some of my experience in this area. Over the years, I have spoken at conferences on this topic, I’ve recorded podcasts (you can find my most recent automation podcast here https://techstringy.wordpress.com/2018/11/27/automate-all-of-the-things-jason-benedicic-ep82/), and I’ve worked on automation projects across most of the U.K. In my last permanent role, I worked heavily on a project productising FlexPod solutions, including factory automation and off-the-shelf private cloud offerings.

 

Where Do You Begin?

Building better on-premises infrastructure doesn’t start where you would expect. It isn’t about features and nerd-knobs. These should be low on the priority list. Over the last few years, perhaps longer than that, since the mainstream adoption of smartphones and tablets, end users have had a much higher expectation of IT in the workplace. The simplicity of on-demand apps and services has set the bar high; turning up at work and having a Windows XP desktop and waiting three weeks for a server to be provisioned just doesn’t cut it.

 

I always start with the outcomes the business is trying to achieve. Are there specific goals that would improve time-to-market or employee efficiency? Once you understand those goals, start to look at the current processes (or lack thereof) and get an idea for what work is taking place that creates bottlenecks or where processes spread across multiple teams and delays are created with the transit of the tasks.

 

Once you’ve established these areas, you can start to map technologies to the desired outcome.

 

What Do I Use?

From a hardware perspective, I’m looking for solutions that support modularity and scalability. I want to be able to start at the size I need now and grow if I must. I don’t want to be burdened later with forklift replacement of systems because I’ve outgrown them.

 

Horizontal growth is important. Most, if not all, of the converged infrastructure and hyper-converged infrastructure platforms offer this now. These systems often allow some form of redistribution of capacity as well. Moving under-utilised resources out to other areas of the business can be beneficial, especially when dealing with a hybrid IT approach and potential cloud migrations.

 

I’m also looking for vendors that support or work with the public cloud, allowing me to burst into resources or move data where I need it when I need it there. Many vendors now have at least some form of “Data Fabric” approach and I think this is key. Giving me the tools to make the best use of resources, wherever they are, makes life easier and gives options.

 

When it comes to software, there are a lot of options for automation and orchestration. The choice will generally fall to what sort of services you want to provide and to what part of the business. If you’re providing an internal service within IT as a function, then you may not need self-service portals that would be better suited to end users. If you’re providing resources on-demand for developers, then you may want to provide API access for consumption.

 

Whatever tools you choose, make sure that they fit with the people and skills you already have. Building better data centres comes from understanding the processes and putting them into code. Having to learn too much all at once detracts from that effort.

 

When I started working on automating FlexPod deployments, the tool of choice was PowerShell. The vendors already had modules available to interact with the key components, and both myself and others working on the project had a background in using it. It may not be the choice for everyone, and it may seem basic, but the result was a working solution, and if need be, it could evolve in the future.

 

For private cloud deployments, I’ve worked heavily with the vRealize suite of products. This was a natural choice at the time due to the size of the VMware market and the types of customer environments. What worked well here was the extensible nature of the orchestrator behind the scenes, allowing integration into a whole range of areas like backup and disaster recovery, through to more modern offerings like Chef and Ansible. It was possible to create a single customer-facing portal with Day 2 workflows, providing automation across the entire infrastructure.

 

More recently, I’ve begun working with containers and orchestration platforms like Kubernetes. The technologies are different, but the goals are the same: getting the users the resources that they need as quickly as possible to accelerate business outcomes.

 

But Why?

You only have to look at the increasing popularity of Azure Stack or the announcement of AWS Outposts to see that the on-premises market is here to stay; what isn’t are the old ways of working. Budgets are shrinking, teams are expected to do more with less equipment and/or people, businesses are more competitive than ever, and if you aren’t being agile, a start-up company can easily come along and eat your dinner.

 

IT needs to be an enabler, not a cost centre. We in the industry all need to be doing our part to provide the best possible services to our customers, not necessarily external customers, but any consumers of the services that we provide.

 

If we choose the right building blocks and work on automation as well as defining great processes, then we can all provide a cloud-like consumption model. Along with this, choosing the right vendors to partner with will open a whole world of opportunity to build solutions for the future.

 

Next time I will be looking at how location and regulatory restrictions are driving hybrid IT. Thanks for reading!

This is part 3 in a series that began here and continued here, which found life-lessons for IT practitioners up on the big screen in the movie “Spider-Man: Into the Spider-Verse”. (I did something similar for the movies “Doctor Strange” and “Logan.”)

 

If you missed the first two issues, use those links to go back and check them out. Otherwise, read on, true believers!

 

Spoilers (duh)

As with any deep discussion of movie-based content, I’m obligated to point out that there will be many spoilers revealed in what follows. If you have not yet had a chance to enjoy this chapter of the Spider-Man mythology, it may be best to bookmark this for a later time.

 

Humility is its own reward.

It could be said that, if honesty about those around us is the source of empathy, then honesty about ourselves is the source of humility.

 

Along with empathy, humility is the other great value to which we can aspire. Not the false humility of someone fishing for more compliments, nor humility that originates from low self-esteem, but honestly understanding our own motivations, strengths, and weaknesses, and keeping them in perspective.

 

In IT, humility allows us to clearly see how our work stacks up against the challenges we face; how to best utilize the people, skills, perspectives, and resources at our disposal; whether our approach has a realistic chance of success or if we need to step back and consider a new path; and more. Humility moves ego out of the way and lets us see things for what they are.

 

Of course, Spider-Man (Peter, Miles, and the rest of the Spider-Folk) is innately humble. That’s part and parcel of the mythology. No, the place I found this lesson was how the movie was humble about itself.

 

From the recognition that certain aspects of the Spider-Man franchise were poorly conceived (Spidey-O’s cereal, “evil” Tobey Maguire), or poorly executed (the 1977 TV series), or both (the Spider-Man popsicle), this movie is intent on letting the audience know that it knows the complete Spider-Man pedigree, warts and all. But the humility goes deeper than that.

 

After the third origin montage of the movie, you get the feeling the writers were never taking themselves that seriously. You sense that they are now making a commentary on just how many Spider-Man origin movies there have been (and how unnecessary some of them were). Miles’ comment “how many of us are there?” is a direct reference to the insane number of reboots the franchise has undergone. And the title of the comic Miles’ dorm-mate is reading (“What If... There Was More Than One Spider-Man?”) shows the movie is aware of its own preposterous nature.

 

The overall effect ends up endearing the characters, the plot, and the narrative to us even more, in much the same way that “Spaceballs” and “Galaxy Quest” endeared themselves to their respective franchises. The humility becomes a love letter to the story and the people who have invested so much into it.

 

Understand how to relax.

Played mostly for laughs, Miles’ initial inability to “let go” of things using his spider ability is a wonderful metaphor, especially for those of us in problem-solving roles, who often find ourselves asked to do so in stressful situations (like when the order entry system is down and the boss’s boss’s boss is hovering over your shoulder).

 

Whether it’s meditation, exercise, raging out to metal, travel, perfectly rolled sushi, looking at art, getting lost in a book, enjoying a fine Scotch (or wine, or chocolate, or doughnut), or gaming non-stop, you need to know for the sake of your ongoing mental health what it takes for you to unwind. While many of us find most of our work in IT fulfilling, there will always be dark and stressful times. In those moments, we need to be able to honestly assess first that we are stressed, why, and finally, how to remove some of that stress so that we can continue to be effective.

 

As the movie illustrates, not being able to let go can get in the way of our ability to succeed (hanging from the lights in Doc Ock’s office), and even hurt those around us (Gwen’s hair).

 

When you get quiet and listen to your inner voice, that’s when you are the most powerful.

Since “Into the Spider-Verse” is largely an origin story about Miles’ transformation into his dimension’s one-and-only Spider-Man, much of the action focuses on him learning about his powers and how to use them. The difference between this and many other superhero origin stories is that Miles is surrounded by the other Spider-Folk, who are much more experienced. This comes to a head near the end of the movie, when the others decide that Miles’ inexperience is too much of a liability and leave him behind. After an entire movie of Miles running, jumping, and awkwardly swinging from moment to moment, idea to idea, and crisis to crisis, this is where, for the first time, Miles finally stops and just is for a moment. He takes a few precious seconds to center himself, to understand where he is, and where he wants to be. In that moment, he is finally able to get in touch with all his abilities and control them.

 

Much like knowing how to relax and let go, being able to “check in” with ourselves in this way is incredibly powerful. Over the length of our IT careers, we will find ourselves surrounded, as Miles did, by people who are doing the same work as us but are vastly more experienced and confident about it. If we’re lucky, some of those people will be patient with us as we learn the ropes. But even so, being patient with ourselves—being able to stop for a moment in the middle of the cyclone of ideas, tickets, questions, incidents, doubts, system failures, and fears—will serve us well.

 

Pushing outside of our comfort zones is good, but if it doesn’t fit, we need to recognize it before we hurt ourselves.

“Try harder than you think you can!” “Push yourself just a little further!” “Do more than you planned!”

 

It seems like the message to try and exceed our limits is everywhere, and is mostly a positive one. We should want to keep improving ourselves, and having a cheerleader (even an inspirational coffee mug) can be an effective way to reinforce that desire.

 

But there can come a point when our attempt to push through the discomfort in pursuit of growth becomes unhealthy. When we are no longer “lean and mean,” but “emaciated and ornery;” when we’ve trimmed the fat, stripped the muscle, and are now cutting into bone.

 

In the movie, this lesson becomes clear when we see the other Spider-Folk experience the slow but deadly effects of being in a dimension not their own. Their cells are slowly dying, and if they don’t get back home, they have no hope of survival.

 

In our dimension—where we’re more likely to be accosted by users claiming “the internet is down” than by plasma-gauntlet wielding stalkers—it would be nice if being dangerously outside of our comfort zone was as clear. Sometimes it is. Many of us have experienced the effects of long-term exhaustion, drained of motivation and unable to focus. The movie is teaching us that we need to first understand what is happening to us, and then work to find our way “home.”

 

As I described earlier, maybe that means centering ourselves and determining what we really need; or maybe doing something relaxing until we’ve recharged. But to not do so, to keep powering through in the vain hope that we’ll somehow find equilibrium, is as deadly to us (our career, if not our health) as being in dimension 1610 (Miles Morales’ home) when we belong in 616.

 

It’s never too late to try again

I’ve already commented on the state of dimension 616’s Peter—his emotional state at the start of the movie, the condition of his relationships, etc. And I’ve also commented on how, by the end of the movie, he’s beginning to take steps to repair his life. As moviegoers, we are invited to compare that choice to Wilson Fisk’s. His way of fixing his mistakes was to steal something that wasn’t his. We’re left to wonder, even if he had succeeded in spiriting a copy of Vanessa and Richard from another dimension, how would they survive? What would they think of him? So much about his choice leads only to more problems, more mistakes. It’s not that Peter’s path is easy. But if reconciling with Mary Jane is difficult (even if it’s ultimately unsuccessful), it’s still the only way to move ahead.

 

I am reminded of two business-critical failures, occurring a week apart, that I observed at a particular company. In both cases, a human error by a technician caused the failure.

 

In one case, the tech came forward immediately, owned up to what happened, and offered to help resolve it. Even after it was evident that the failure extended beyond their skillset, this person stuck around to watch, so they would learn and know more next time. The incident was resolved, and nothing more was ever said.

 

In the other case, the technician did everything they could to cover up the event, and their role in it. The truth came out fairly quickly (never forget that there are logs for everything) and the employee was literally walked out the door.

 

The lesson for IT pros should be clear. Even after a critical failure, we have opportunities to improve, fix, and ensure that next time the outcome is better. No technology failure spells “the end”—only our own attitude toward the failure can do that.

 

Final Lesson

In watching the Spider-Folk work together as a team, with all the similarities and differences in their abilities, attitudes, and personalities, I was reminded of an anonymous quote:

 

“In that which we share, let us see the common prayer of humanity.

In that which we differ, let us wonder at the freedom of humankind.”

 

If there is any lesson we can walk away with from this movie, it’s this: there is more about us that is the same than there is different; and both the similarities and the differences are the source of our strength as individuals and teams working in IT.

 

Until next time, true believers,

Excelsior!

 

“Spider-Man: Into the Spider-Verse,” 2018, Sony Pictures Entertainment

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s another interesting article with suggestions about how agencies might approach legacy application modernization and cloud adoption using MGT Act funds. Picking the “best” apps and systems will certainly increase success rates, and SolarWinds systems and applications management tools provide data and reports to support your decision making.

 

The Modernizing Government Technology (MGT) Act was passed in late 2017 to provide government agencies with IT-focused financial resources and technical expertise to replace legacy infrastructure. The goal of the MGT Act is to “allow agencies to invest in modern technology solutions to improve service delivery to the public, secure sensitive systems and data, and save taxpayer dollars.”

 

With extra IT funds available, where should agencies spend the money?

 

It seems logical that agencies should use MGT funds to begin their cloud migration if they haven’t done so already, or to speed up the process of moving systems and applications to the cloud.

 

Yet, as a federal IT pro, how do you know which systems to migrate and which should stay on-premises?

 

Best Choice Apps for Cloud Migration

 

There are four primary things to consider:

 

  1. Size
  2. Complexity
  3. Criticality to the mission
  4. Variable usage

 

Size

 

Look at the amount of data your application is accumulating and the amount of storage it’s taking up. Lots of data and storage space can mean spending lots of money on databases and storage systems.

 

Agencies can save money in two ways when moving that data into the cloud. First, think of the reduced maintenance costs. Second, with a cloud environment you’re only paying for what you use, which means you can add storage capacity on the fly if you need to or, just as important, remove storage you no longer need.
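
 

To make the “pay for what you use” math concrete, here’s a rough back-of-the-envelope sketch in Python. Every price and capacity figure below is a made-up assumption for illustration, not a quote from any vendor or SolarWinds guidance.

    # Hypothetical comparison of peak-provisioned on-premises storage versus
    # pay-as-you-go cloud storage. All numbers are illustrative assumptions.
    ON_PREM_TB_YEAR = 300.0   # assumed annual cost per provisioned TB (hardware + maintenance)
    CLOUD_TB_MONTH = 23.0     # assumed pay-as-you-go cost per TB-month

    peak_tb = 100             # capacity that must be provisioned on-premises for the busiest month
    monthly_usage_tb = [40, 42, 45, 50, 55, 60, 100, 70, 60, 55, 50, 45]

    on_prem_cost = peak_tb * ON_PREM_TB_YEAR
    cloud_cost = sum(tb * CLOUD_TB_MONTH for tb in monthly_usage_tb)
    print(f"Peak-provisioned on-prem: ${on_prem_cost:,.0f}/year")
    print(f"Pay-for-what-you-use:     ${cloud_cost:,.0f}/year")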

 

Complexity

 

Consider keeping your most complex applications on-premises until you’ve completed a few application migrations and understand the ins and outs of the process.

 

When considering complexity, think not only about the application itself, but also about its dependencies, connections, and associations. The more an application relies on or connects to other systems, the more complex it will be to migrate.

 

Criticality to the Mission

 

Save the most mission-critical applications for last or keep them on-premises. Better early candidates include development and staging environments, disaster recovery systems, and many human-resources applications. This way, if there is a glitch in your migration, the agency can continue operating without interruption.

 

Another point worth making: home-grown applications may be far more complex to migrate and may be best off staying on-premises.

 

Variable Usage

 

Perhaps the area where federal IT pros can get the most bang for their buck in migrating to the cloud is in “bursty” applications. Think about applications that get heavy use during very specific time periods, such as during the tax season.

 

The challenge with keeping these types of applications on-premises is the need to have resources at the ready for those heavy-use periods—resources that otherwise sit unused. This is precisely what makes them ideal for cloud migration.

 

Consider, too, the applications that require batch processing for the very same reasons.
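
 

Pulling the four criteria together, a team could turn its application inventory into a rough ranking of early migration candidates. The Python sketch below is one hypothetical way to do that; the weights, ratings, and application names are illustrative assumptions, not official scoring guidance.

    # Hypothetical sketch: rank candidate applications for cloud migration
    # against the four criteria above. Weights and sample inventory are made up.
    CRITERIA = {
        "size": 1.0,          # large data footprints benefit from pay-as-you-go storage
        "complexity": -1.5,   # heavy dependencies argue for staying on-premises (for now)
        "criticality": -2.0,  # mission-critical apps migrate last
        "burstiness": 2.0,    # variable usage is the strongest case for the cloud
    }

    # Each value is a 0-5 rating an IT team might assign during an inventory review.
    inventory = [
        {"app": "tax-season-portal", "size": 4, "complexity": 2, "criticality": 3, "burstiness": 5},
        {"app": "hr-onboarding",     "size": 2, "complexity": 1, "criticality": 1, "burstiness": 1},
        {"app": "core-mission-db",   "size": 5, "complexity": 5, "criticality": 5, "burstiness": 1},
    ]

    def migration_score(app):
        """Weighted sum: higher means a better early candidate for migration."""
        return sum(CRITERIA[c] * app[c] for c in CRITERIA)

    for app in sorted(inventory, key=migration_score, reverse=True):
        print(f"{app['app']:20s} score={migration_score(app):6.1f}")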

 

Conclusion

 

The MGT Act is good news for federal IT pros, giving them the opportunity to move their infrastructure forward. For many agencies, cloud and hybrid IT may be the ideal place to start.

 

Find the full article on Government Technology Insider.

 


Cost is a factor in most IT decisions. Whether the costs are hardware- or software-related, understanding how a tool’s cost will affect the bottom line is important, and it’s typically the engineer’s or administrator’s task to research tools and hardware that fit the organization’s needs both fiscally and technologically. Multiple options are available, from open-source tools to proprietary tools to off-the-shelf products for purchase.

 

Many organizations prefer to either build their own tools or purchase off-the-shelf solutions that have been tried and tested. However, open-source software has become increasingly popular and has been adopted by many organizations in both the public and private sectors. Open-source software is built, maintained, and updated by a community of individuals on the internet, and it can change on the fly. This raises the question: is open-source software suitable for the enterprise? There are pros and cons that can make that decision easier.

 

The Pros of Open-source Software

 

Open-source software is cost-effective. Most open-source software is free to use. Where third-party products such as plug-ins are involved, there may be a small cost, but open-source software is meant for anyone to download and use as they please, within the limits of its license. With budgets being tight for many organizations, open source could be the solution that helps stretch your IT dollars.

 

Constant improvement is a hallmark of open-source software. The idea behind open source is that the software can and will be improved as users find flaws and room for improvement. Open-source software is just that: open, and anyone can update or improve it. A user who finds a bug can fix it and publish the updated iteration of the software. Most large-scale enterprise software solutions, by contrast, are bound by major release schedules, so customers often wait for the next major release to get bug fixes and the latest and greatest features.

 

The Cons of Open-source Software

 

Open-source software might not stick around. There’s a possibility that the open-source software your organization has bet on simply goes away. If the community that updates the software and maintains the source code closes up shop, you’re the one now tasked with maintaining it and writing any changes your organization needs. That possibility makes open source a riskier choice for your organization.

 

Support isn’t always reliable. When there’s an issue with your software or tool, it’s nice to be able to turn to support for help resolving it. With open-source software, that isn’t always guaranteed, and when support does exist, it rarely comes with the kinds of SLAs you’d expect from a proprietary, enterprise-class software suite.

 

Security becomes a major issue. Anyone can be hacked. However, the risk is far lower with proprietary software. Because open-source software allows anyone to update the code, the risk of downloading malicious code is much higher. One source referred to using open-source software as “eating from a dirty fork”: when you reach into the drawer for a clean fork, you could be pulling out a dirty utensil. That analogy is right on the money.

 

The Verdict

 

Swim at your own risk. Much like at a pool with no lifeguard on duty, you use open-source software at your own risk. If you plan to download and install an open-source package, do your best to scan it, and be prepared to accept the risk of using it. There are pros and cons, and it’s important to weigh them against your goals when deciding whether or not to use open source.
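
 

One cheap first check in that diligence: many projects publish checksums for their release artifacts, and verifying a download against the published value before installing it catches corrupted or tampered files. Here’s a minimal Python sketch; the file name and expected digest are placeholders you would replace with the project’s actual values.

    # Minimal sketch: verify a downloaded open-source package against its
    # published SHA-256 checksum before installing. File name and digest are placeholders.
    import hashlib
    import sys

    PACKAGE = "some-open-source-tool-1.2.3.tar.gz"
    EXPECTED_SHA256 = "replace-with-the-projects-published-checksum"

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(PACKAGE) != EXPECTED_SHA256:
        sys.exit(f"Checksum mismatch for {PACKAGE}: refusing to install.")
    print(f"{PACKAGE} matches the published checksum.")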

Few messages strike fear in the hearts of IT operations staff like the dreaded scan results from the security team. These messages often land in your inbox early on Monday morning, so they’re already sitting there waiting for you. You’re not even into your second cup of coffee when you open the message:

 

     Good morning,

 

     Your servers have 14,000 vulnerabilities that must be resolved in 14 days. See the attached spreadsheet for a wall of text that’s entirely useless and requires a few hours to manipulate into actionable information.

 

      Love,

      Security

 

Sure, I’m exaggerating a bit. But only a bit.
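
 

Still, even a few lines of scripting can turn an export like that into something actionable. The sketch below assumes a generic CSV export with host, cve, and severity columns (your scanner’s format will differ) and simply surfaces the worst offenders.

    # Hypothetical sketch: summarize a generic vulnerability export by severity
    # and by host so the noisiest systems surface first. Column names are assumptions.
    import csv
    from collections import Counter

    by_severity, by_host = Counter(), Counter()
    with open("vulnerability_export.csv", newline="") as fh:
        for row in csv.DictReader(fh):   # expects columns: host, cve, severity
            by_severity[row["severity"]] += 1
            by_host[row["host"]] += 1

    print("Findings by severity:", dict(by_severity))
    print("Top 10 noisiest hosts:")
    for host, count in by_host.most_common(10):
        print(f"  {host}: {count}")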

 

The relationship between operations and security is fraught with tension and calamity. Often, communications from the security team indicate a lack of understanding of patching and change management processes. And just as often, communications from the operations team indicate a lack of urgency in addressing the vulnerabilities. Yet ironically, you’ll hear the same refrain from both teams: “We’re just trying to do our jobs.”

 

For operations, the job is managing server lifecycles, handling tickets, responding to alerts, ensuring backups run, and generally juggling flaming chainsaws. For security, the job is scanning networked hosts for known vulnerabilities and missing patches, compiling reports for the executives, and disseminating scan results to the cognizant technical teams for resolution.

 

The problem inherent in these teams’ goals is this: there’s no clear accountability for the security of the network and its systems. Without this clarity, you’ll end up with an IT shop that is more focused on placating the other teams than it is on security.

 

This is where SecOps can save the day.

 

What Is SecOps, Anyway?

At this point, you might be thinking, “SecOps… sounds like DevOps. And that is a meaningless buzzword that belongs in the garbage heap of forgotten IT trends.” I hear you. These terms are certainly overused, and that’s a shame. Both DevOps and SecOps can solve organizational problems that are the result of a generation of siloed org charts and team assignments. And just like DevOps brings together your application and infrastructure teams, SecOps brings together security and infrastructure.

 

In both cases, “bringing teams together” means the existing battle lines get erased. For example, you may have a security team that uses a patch-scanning tool to identify systems with missing patches. That tool likely has a built-in database that tracks vulnerabilities per host over time. It might even include a reporting function that can extract recent scan results for a subset of hosts and drop that information into a PDF or spreadsheet. These are all standard, reasonable capabilities of a security team’s internal tools.

 

On the other side of the aisle is your operations team, with its own patch management solution configured to act primarily as a patch and software deployment tool. The tool can determine missing patches for managed systems and act as a central repository for vendor patches. It likely has its own built-in database for tracking patch deployments, and it might include a reporting function. Again, these are all basic capabilities of the ops staff’s tools.

 

What’s the problem with this approach? Valuable data is sequestered from the staff who could put it to good use, so the organization as a whole can’t turn that data into knowledge. And most importantly, the two teams’ interactions end up focused on finding fault in each other’s data as a way to discredit the other team. I’ve seen this happen. You have, too. It’s not good.
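
 

To illustrate what sharing that data might look like in practice, here’s a hypothetical sketch that joins a scan export with a patch-status export so both teams see one picture per host. The file names and columns are assumptions about generic exports, not any specific product’s format.

    # Illustrative SecOps sketch: join the security team's scan export with the
    # ops team's patch-deployment export into one shared view per host.
    import csv
    from collections import defaultdict

    # scan_results.csv: host, cve, severity   |   patch_status.csv: host, patch_id, deployed
    open_vulns = defaultdict(list)
    with open("scan_results.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            open_vulns[row["host"]].append((row["cve"], row["severity"]))

    pending_patches = defaultdict(int)
    with open("patch_status.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            if row["deployed"].strip().lower() != "yes":
                pending_patches[row["host"]] += 1

    # Hosts with both open vulnerabilities and undeployed patches are the
    # natural place for the two teams to collaborate first.
    for host in sorted(set(open_vulns) | set(pending_patches)):
        print(f"{host}: {len(open_vulns[host])} open vulns, "
              f"{pending_patches[host]} pending patches")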

 

How Can SecOps Help?

A SecOps approach would open up these two datasets to both teams. Security gets a view into your ops team’s patching process, and operations gets firsthand access to scan results for the systems they maintain. And something remarkable happens: instead of the conflict that is manufactured when two teams are each working with non-shared information, you get transparency. The conflict starts to ease. The focus shifts from appeasement to collaboration. And the distrust between the teams starts to fade.

 

OK, that might be expecting a little too much from a simple change. Because these battle lines are ingrained in the minds of IT staff, you’ll want to be patient with your new SecOps practitioners.

 

The next step in bringing about a SecOps culture change is to align each team’s goals with the organization’s goals and vision. Before SecOps, your security team’s effectiveness may have been measured by the raw number of vulnerabilities it identified each month, quarter, or year, and your operations team by how many tickets it closed over the same period. But these measurements fail to put the focus where it belongs: on the shared responsibility to create and maintain a secure IT environment. They’re also trailing indicators, and they generally suggest a fundamental problem with patching, scanning, or both.

 

Like all trends in IT, SecOps is no panacea for IT’s woes. And it requires buy-in from technical staff, who are likely suspicious of yet another “great idea” from the execs in the ivory tower. But with the right coaching, and a focus on SecOps as an iterative journey rather than a point-in-time change, your organization will certainly benefit from bringing these teams together.
