
Geek Speak

22 Posts authored by: gregwstuart

For some, artificial intelligence (AI) can be a scary technology. There are so many articles on the web about how AI will end up replacing X% of IT jobs by Y year. There’s no reason to be afraid of AI or machine learning; if anything, most IT jobs will benefit from them. The tech world is always changing, and AI is becoming a big driver of that change. Lots of people interact with or use AI without even realizing it. Sometimes I marvel at the video games my kids are playing and think back to when I played Super Mario Bros. on the original Nintendo Entertainment System (which I still have!). I ask myself, “What kinds of video games will my kids be playing years from now?” The same applies to the tech world. What will AI and machine learning have changed in 10 years? What about 20?

 

Let’s look at how AI is changing the tech world and how it will continue to benefit us in decades to come.

 

Making IoT Great

The idea of the Internet of Things focuses on all the things in our world capable of connecting to the internet. Our cars, phones, homes, dishwashers, washing machines, watches, and yes, even our refrigerators. The internet-connected devices we use need to do what we want, when we want, and do it all effectively. AI and machine learning help IoT devices perform their services effectively and efficiently. Our devices collect and analyze so much data, and they rely more and more on AI and machine learning to sift through and analyze all the data to make our interactions better.

 

Customer Service

I don’t know a single human being who gets a warm and fuzzy feeling when thinking about customer service. No one wants to deal with customer service, especially when it comes to getting help over the phone. What’s our experience today? We call customer service and we likely get a robotic voice (IVR) routing us to the next most helpful robot. I find myself yelling “representative” over and over until I get a human… really, it works!

 

AI is improving our experience with IVR systems by making them easier to interact with and get help from. IVR systems also use AI to analyze input from callers to better route their calls to the right queue based on common trending issues. AI also helps ensure 24/7 customer service, which can be helpful with off-hours issues. You don’t have to feed AI-enhanced IVR systems junk food and caffeine to get through the night!

 

Getting Info to the User Faster

Have you noticed the recommendations you get on YouTube? What about the “because you watched…” suggestions on Netflix? AI and machine learning are changing the way we get information. Analytical engines pore over user data and match users with the info they most want, and quickly. On the slightly darker side of this technology, phones and smart speakers have become hot mics. If you’re talking about how hungry you are, your phone or speaker hears you, then sends you an email or pops up an ad on your navigation system for the nearest restaurant. Is that good? I’m not sold on it yet; it feels a little invasive. Like it or not, the way we get our data is changing because of AI.

 

Embrace It or Pump the Brakes?

For some (yeah, you know who I’m talking about, off-gridders), AI is an invasive, non-voluntary technology. I wouldn’t be surprised if you had to interact with AI at least once in your day; we can’t get around some of those moments. What about interacting with AI on purpose? Do you automate your home? Surf the web from your fridge door? Is AI really on track to help us function more easily? It’s still up for debate, and I’d love to hear your response in the comments.

Everyone take a deep breath and calm down. The likelihood of a robot taking over your job any time soon is very low. Yes, artificial intelligence (AI) is a growing trend, and machine learning has improved by leaps and bounds. However, the information technology career field is fairly safe, and if anything, AI/machine learning will only make things better for us in the future. That said, a few IT jobs have already experienced the impact of AI, and I want to cover those here. Take this with a grain of salt, since AI/machine learning technology is fairly young and a lot of the news out there is simply conjecture.

 

Help Desk/IT Support

Think about the last time you called a support desk. Did your call get answered by a human or a robot? OK, maybe not an actual robot (that would be awesome), but an interactive voice response (IVR) system. How annoying is that? How often do we just start yelling, “Representative... representative... REPRESENTATIVE!” It can take several routing steps through an IVR system before we reach a human who can help. This is all too often the situation when we call support or the help desk. Unfortunately (for help desk specialists), AI is only making IVR more efficient. AI enhances an IVR system’s ability to understand and process human interaction over the phone. With IVR systems configured for automatic speech recognition (ASR), AI essentially eliminates the need for keypad input because it can intelligently process the caller’s voice response.

 

Data Center Admins

This one hurts because I’ve done a lot of data center admin work and still do some today. The idea of machine learning or AI replacing this job hits close to home. The truth is, automation tools are already replacing common tasks data center admins used to carry out daily, and monitoring tools have used AI to improve the analytics pulled from system scans. Back when I started in IT, the general ratio was around one hundred systems to one administrator. With advances in monitoring, virtualization, and AI, it’s now closer to one thousand systems per administrator. While this is great for organizations looking to cut OPEX, it’s not great news for administrators.

 

Adapt or Die

Yeah, maybe that’s a little exaggerated, but it’s not a bad way to think. If you don’t see these technological advances as a hint to adapt, your career likely will die. AI and machine learning are hot topics right now, and there’s no better time to start learning their ins and outs and how you can work with AI instead of becoming obsolete. Understanding how to bridge the gap between humanity and technology can serve you well in the future. One way to adapt is by learning programming, thereby gaining a better understanding of automation, AI, and machine learning. Maintain your creativity by implementing new ideas and using AI and machine learning to your advantage.

 

In the end, I don’t believe AI or machine learning will eliminate the need for a human workforce in IT. Machines may beat us at raw storage and number-crunching, but the human brain can adapt, learn, and connect with other people in ways machines can’t. There might be an influx of jobs taken over by AI, but we’ll always need humans to program the software and design the machines.

Perhaps the title of this post should be “To Hyperconverge or Not to Hyperconverge,” since that’s the question at hand. Understanding whether HCI is a good idea requires a hard look at metrics and budget. If your data center has been running a traditional server, storage, and network architecture for a while, it should be easy to gather the metrics on power, space, and cooling. With these metrics in hand, you can compare the total cost of ownership (TCO) of running a traditional architecture versus an HCI architecture.
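As a rough illustration of that comparison, here’s a minimal sketch in Python. All the dollar figures and cost categories are made-up placeholders, not vendor data; substitute your own power, space, cooling, and staffing numbers.

```python
# Hypothetical 3-year TCO comparison. Every figure below is a placeholder --
# replace with metrics gathered from your own environment.

def three_year_tco(capex, annual_power, annual_space, annual_cooling,
                   annual_admin, years=3):
    """Up-front cost plus recurring power/space/cooling/admin costs."""
    return capex + years * (annual_power + annual_space +
                            annual_cooling + annual_admin)

traditional = three_year_tco(capex=250_000, annual_power=18_000,
                             annual_space=24_000, annual_cooling=12_000,
                             annual_admin=90_000)
hci = three_year_tco(capex=200_000, annual_power=9_000,
                     annual_space=8_000, annual_cooling=5_000,
                     annual_admin=60_000)

print(f"Traditional 3-year TCO: ${traditional:,}")
print(f"HCI 3-year TCO:         ${hci:,}")
print(f"Difference:             ${traditional - hci:,}")
```

The point isn’t the exact numbers; it’s costing both architectures over the same horizon before you decide.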

 

Start by getting an accurate comparison. While having a TCO baseline will help with the comparison, you need to consider a few other items before making a final decision.

 

Application Workload

Forecasting the application workload(s) running in your current environment is important when considering HCI over traditional infrastructure. A snapshot of today’s workloads alone won’t tell you where your infrastructure is headed. A good rule of thumb is to forecast three years out, which gives you a game plan for upgrading or scaling out your current configuration. If you’re only running a few workloads and they aren’t forecast to run out of capacity within three years, you probably don’t need to upgrade to HCI. Re-evaluate HCI in two years while again looking three years ahead.
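To make the three-year rule of thumb concrete, here’s a back-of-the-envelope capacity forecast. The growth rate, capacity, and usage figures are illustrative assumptions, not measurements from any real environment.

```python
# Project usage forward year by year at a compound growth rate and report
# the first year usage exceeds capacity (or None if it never does within
# the horizon). All inputs are illustrative.

def years_until_full(current_usage_tb, capacity_tb, annual_growth, horizon=3):
    usage = current_usage_tb
    for year in range(1, horizon + 1):
        usage *= 1 + annual_growth
        if usage > capacity_tb:
            return year
    return None

# 40 TB used today against 100 TB of capacity
print(years_until_full(40, 100, 0.25))  # 25% growth: fits for 3 years -> None
print(years_until_full(40, 100, 0.40))  # 40% growth: full in year 3
```

If the forecast says you stay within capacity for the full horizon, that supports holding off on HCI and re-evaluating later.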

 

Time to Scale

Having an accurate three-year workload forecast will help you understand when you need to scale out. If you need to scale out now, I suggest going all in on HCI. Why? Because scaling with HCI is a piece of cake. It doesn’t require a forklift upgrade, and you can scale on demand. With most HCI vendors, you scale out by simply adding a new block or node to your existing deployment, and you can add more than one at a time, which makes HCI very attractive. Scaling out a traditional infrastructure costs more and takes longer.

 

Staff Experience

You can’t afford to overlook this area in the decision process. Moving from traditional infrastructure to HCI can present a learning curve. In a traditional infrastructure, most technologies are separate and require a separate team to manage them. In some cases where security is a big requirement, there’s a separation of duties, in which different admins manage network, compute, and storage. Upgrading to an HCI deployment gives you the benefit of having all components in one, but the interfaces differ, and the way the technologies are managed differs. Some HCI vendors also ship a proprietary hypervisor, which means retraining for admins who know only the industry-standard one.

 

Make an Informed Decision

If you’re determined to transition to a new HCI deployment, make sure you consider all the previous items. In too many cases, decision makers attend vendor conferences, get excited about new technology, buy into it, and leave it to their IT staff to figure out. Ensure there’s open communication, and consider staff experience, TCO, and forecasted application workload. If you weigh those things, you can feel good about making an informed decision.

Hyperconverged infrastructure has become a widely adopted approach to data center architecture. With so many moving parts involved, it can be difficult to keep up with the speed at which everything evolves. Your organization may require you to be up to speed with the latest and greatest technologies, especially when it decides to adopt a brand-new hyperconverged infrastructure. The question then arises: how do you become a hyperconverged infrastructure expert? There are many paths, depending on the technology or vendor your organization chooses. While there aren’t many HCI-specific certifications yet, there are plenty of certification tracks you can pursue on the way to becoming an HCI expert.

 

Storage Certifications

There are many great options for storage certification, depending on which vendor your organization uses. If you’re already proficient with storage, you’re a step ahead. The storage vendor isn’t nearly as important as the storage technologies and concepts. Storage networking in particular is important, and training on its concepts will be helpful in your quest to become an HCI expert.

 

Networking Certifications

There aren’t many certifications more important than networking. I strongly believe everyone in IT should have at least an entry-level networking certification. Networking is the lifeblood of the data center, and storage networking lives there too; a certification or training in networking will help build your expert status.

 

Virtualization Certifications

Building virtual machines has become a daily occurrence, and if you’re in IT, it’s necessary to understand virtualization technology. Regardless of your virtualization vendor of choice, a solid foundational knowledge of virtualization will be key to becoming an HCI expert. Most HCI solutions use a specific vendor for the virtualization piece of the puzzle, but some HCI vendors build proprietary hypervisors into their products. Find a virtualization vendor with a good certification and training roadmap to learn the ins and outs of virtualization. When it comes to HCI, you’ll need it.

 

HCI Training

If you already have a good understanding of all the technologies listed above, you might be better suited to taking a training class or going after an HCI-specific certification. Most HCI vendors offer training on their platforms to bring you and your team up to speed and help build a foundational knowledge base. Classes are offered through various authorized training centers worldwide. Some vendors offer HCI certifications; while it’s currently a very small number, I believe this will change over time. Do a web search for HCI training and see what comes back. There are many options to choose from depending on your level of HCI experience thus far.

 

Hands-on Experience

I saved the best for last, as you can’t get better training than on-the-job training. Learning as you go is the best route to becoming an HCI expert. Granted, certifications help validate your experience, and training helps you dive deeper, but hands-on experience is second-to-none. Making mistakes, learning from your mistakes, and documenting everything you do is the fastest way to becoming an expert in any field in my opinion. Unfortunately, not everyone can learn on the job, as most organizations cannot afford to have a production system go down, or have admins making changes on the fly without a prior change board approval. In this case, find an opportunity to build a sandbox or use an existing one to build things and tear them down, break things, and fix things. Doing this will help you become the HCI expert your organization desperately needs.

Running a traditional data center infrastructure for years can put your company in a rut, especially when it’s time to pick a new solution. When electing to trade out your traditional infrastructure for a sleek new hyperconverged infrastructure (HCI), the paradigm shift can be difficult. So many questions arise during selection, and many HCI vendors are willing to answer them, which doesn’t necessarily make the choice easier. When deciding to switch to an HCI solution, it’s important to take stock of your current situation and assess why you’re searching for a new solution. Here are some things to think about when choosing one.

 

Do You Have Experienced Staff?

Having staff on hand to manage an HCI solution should be the main concern when choosing one. Traditional server infrastructures generally rely on several siloed teams to manage different technologies. When there are separate storage, networking, server, and security personnel, it’s important to decide whether an all-in-one HCI solution is a possibility. Is there enough time to get your team spun up on the latest HCI solution and all the nuances it brings? Take a good look at your staff and take stock of their skill sets and level of experience before diving headfirst into a brand-new HCI solution.

 

Support, Support, Support

Support is only considered expensive until it isn’t. When your new HCI solution isn’t working as planned or your team is having trouble configuring something, a support call can come in very handy. If the HCI solution you’re looking into doesn’t have the level of support to meet your needs, forget about it. It does no good to pay for support you can’t rely on when it all comes crashing down, which it can from time to time. Ensure the vendor’s support provides coverage for both hardware and software and offers support coverage hours suited to your needs. If you’re a government entity, does the vendor provide a U.S.-citizen-only support team? These are all important questions to ask of prospective vendors.

 

How Will the HCI Solution Be Used?

First things first, how will you be using the HCI solution? If your plan is to employ a new HCI solution to host your organization’s new VDI implementation, specific questions need to be addressed: what are the configuration maximums for CPU and memory, and how much flash storage can be configured? VDI is a very resource-intensive workload, and going into the deployment without the right amount of resources in the new HCI solution can put your organization in a bad spot. If the idea of HCI procurement is coming specifically from an SMB/ROBO situation, it’s extremely important to get the sizing right and ensure the process of scaling out is simple and fast.

 

Don't Get Pressured Into Choosing

Your decision needs to come when your organization is ready, not on a vendor’s schedule or under pressure to commit. Purchasing a new HCI solution is not a small decision, and it can come with some sticker shock, so it’s important to choose wisely and choose what’s right for your organization. Take stock of the items listed above before wading into the vendor calls, which will flood your phones once word gets out that you’re looking for a new HCI solution.

So much in IT today is focused on the enterprise. At times, smaller organizations get left out of big data center conversations. Enterprise tools are far too expensive for them and usually offer way more than they need to operate. Why pay for data center technologies beyond what your organization needs? Unfortunately for some SMBs, this happens, and the equipment they purchase never realizes its full ROI. Traditional data center infrastructure hardware and software can be complicated for an SMB to operate alone, creating further costs for professional services for configuration and deployment. This was all very true until the advent of hyperconverged infrastructure (HCI). So to answer the question I posed above: yes, HCI is very beneficial and well suited for most SMBs. Here's why:

 

1. The SMB is small- to medium-sized – Large enterprise solutions don’t suit SMBs. Aside from the issue of over-provisioning and the sheer cost of running an enterprise data center solution, SMBs just don't need those solutions. If the need for growth arises, with HCI, an organization can easily scale out according to their workloads.

 

2. SMBs can’t afford complexity – A traditional data center infrastructure usually involves some separation of duties and silos. With so many moving parts and different management interfaces, it can become complex to manage it all without stepping on anyone’s toes. HCI offers an all-in-one solution, with storage, compute, and networking contained in a single chassis, so an SMB doesn’t need to employ separate networking, virtualization, and storage teams.

 

3. Time to market is speedy – SMBs don’t need to take months to procure, configure, and deploy a large-scale solution. A large corporation might tolerate a long schedule; an SMB usually can’t, and HCI helps them get to market quickly. HCI is about as close to a plug-and-play data center as you can get, and in some cases, depending on the vendor chosen, time to deployment can be down to minutes.

 

4. Agility, flexibility, all of the above – SMBs need to be more agile and don’t want to carry all the overhead required to run a full data center. Power, space, and cooling can be expensive when it comes to large enterprise systems. Space itself can be a very expensive commodity. Depending on the SMB’s needs, their HCI system can be trimmed down to a single rack or even a half rack. HCI is also agile in nature due to the ability to scale on demand. If workloads spike overnight, simply add another block or node to your existing HCI deployment to bring you the performance your workloads require.

 

5. Don’t rely on the big players – Big-name storage and compute licensing can come at a significant cost. Some HCI vendors offer proprietary, built-in hypervisors included in the price that are easier to manage than an enterprise license agreement, and management software is also built into many HCI vendors’ solutions.

 

HCI has given the SMB more choices when it comes to building out a data center. In the past, an SMB had to purchase licensing and hardware generally built for the large enterprise. Now they can purchase a less expensive solution with HCI. HCI offers agility, quick time to market, cost savings, and reduced complexity. These can all be pain points for an SMB, which can be solved by implementing an HCI solution. If you work for an SMB, have you found this to be true? Does HCI solve many of these problems?

Hyperconverged infrastructure is adopted more and more every year by companies both big and small. For small- to medium-sized businesses (SMBs) and remote branch offices, HCI provides an all-in-one solution that eliminates the need for power, space, and cooling for multiple different systems. For larger companies up to the enterprise level, HCI provides a flexible means of scaling out as workload demand increases. There are many other benefits to adopting a hyperconverged approach to data center infrastructure. Here are what I consider the top five.

 

Flexible Scalability

The most attractive benefit of HCI is the ability to scale out your environment as your workload demand increases. Every application or data center has different workloads. Some demand a lot of resources while others don’t need as much. In the case of HCI, as workload demands increase, there’s no need to do a forklift upgrade. You simply add a new node or block to your current infrastructure and configure it as needed.

 

A Lower Total Cost of Ownership

Total cost of ownership (TCO) isn’t usually something the IT community cares much about. Generally, we work with what we’re given and have to make it work effectively. Decision makers, though, find TCO extremely important, and this metric plays a major role in the procurement process. A lower TCO is good for both the decision makers and the engineers in the trenches: the decision makers see a lower-cost solution overall, and the engineers get to work with equipment and software that isn’t completely budget-constrained. Time to market is much quicker, less administrative staff is required, and there are cost savings in power, space, and cooling.

 

Less Administrative Overhead

When there’s less to manage, there’s less administrative overhead. Lowering administrative overhead doesn’t necessarily mean cutting down your staff; it means making your staff more effective with less work. In a traditional data center infrastructure, there are many moving parts, which require many different teams or silos. Consolidating all the moving parts into one chassis cuts down on administrative overhead and segregated teams.

 

Avoid Overprovisioning

HCI essentially eliminates the possibility of overprovisioning. Traditional infrastructures require a big purchase upfront, based on an analyst’s projections for the workload three to five years out. Most of those projections end up being too generous, and the organization is left with expensive hardware that never gets near capacity. HCI allows you to buy only what you currently need, with maybe a 15% buffer, and then flexibly scale out as the workload increases. By eliminating the need for long-range workload projection and large upfront hardware/software purchases, TCO decreases and flexibility increases.
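The buy-what-you-need-plus-a-buffer idea boils down to a quick sizing calculation. In this sketch, the node capacity and workload units are invented for illustration; the 15% buffer comes from the rule of thumb above.

```python
# Right-size an initial HCI purchase: cover current demand plus a headroom
# buffer, rounded up to whole nodes. Capacities and workloads are examples.
import math

def nodes_needed(workload_units, node_capacity, buffer=0.15):
    """Whole nodes required after applying a headroom buffer."""
    return math.ceil(workload_units * (1 + buffer) / node_capacity)

print(nodes_needed(900, 250))   # 900 units + 15% = 1035 -> 5 nodes
print(nodes_needed(1200, 250))  # after growth, re-run and add nodes -> 6
```

When the workload grows, you re-run the calculation and bolt on the difference rather than guessing five years ahead.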

 

Takes the Complexity Out of New Deployments

Traditional infrastructure deployments can be a very complex process. As discussed earlier, workload projections lead to procurement of hardware and software, which then need to be racked and stacked, cabled, and configured. This deployment approach requires multiple people, server lifts, and rack space. HCI deployments make it easy for less experienced administrators to unbox, rack, and configure an entire deployment. Scaling follows the same model: unbox a new block, bolt it on, and configure it. There’s no need for the storage team to provision new storage or the networking team to add more cables. It’s an all-in-one solution and simple to deploy.

 

There are numerous benefits to adopting HCI over traditional infrastructure. I’ve only listed five, which barely scratch the surface. For the benefit of those of us who haven’t deployed HCI before, what are some benefits you’ve realized from your deployment that aren’t on this list?

Today’s data center is full of moving parts. If your data center is hosted on-premises, there’s a lot to do day-in and day-out to make sure everything is functioning as planned. If your data center is a SaaS data center hosted in the cloud, there are still things you need to do, but far fewer compared to an on-premises data center. Each data center carries different workloads, but there’s a set of common technologies that need to be monitored. When VM performance isn’t monitored, you can miss a CPU overload or max out memory. When the right enterprise monitoring tools are in place, it’s easier to manage the workloads and monitor their performance. The following is a list of tools I believe every data center should have.

 

Network Monitoring Tools

Networking is so important to the health of any data center. Both internal and external networking play a key role in day-to-day usage, and without networking, your infrastructure goes nowhere. Installing a network monitoring tool that tracks bandwidth usage, networking metrics, and more allows a more proactive or preventative approach to solving networking issues. Furthermore, an IP address management tool that stores all your available IP addresses and ranges and dynamically updates as addresses are handed out will go a long way toward staying organized.
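The IP management idea can be illustrated with Python’s standard `ipaddress` module: given a subnet and the addresses already handed out, a tracker can report what’s still free. The subnet and allocations below are made up for the example.

```python
# Toy IP-management tracker: enumerate free host addresses in a subnet.
# The subnet and allocated addresses are invented for illustration.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/28")
allocated = {ipaddress.ip_address("192.168.10.1"),
             ipaddress.ip_address("192.168.10.5")}

# hosts() yields usable addresses (network and broadcast excluded)
free = [ip for ip in subnet.hosts() if ip not in allocated]
print(f"{len(free)} free of {subnet.num_addresses - 2} usable addresses")
print("next available:", free[0])  # 192.168.10.2
```

A real IPAM tool adds persistence, dynamic updates, and conflict detection on top of exactly this kind of bookkeeping.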

 

Virtual Machine Health/Performance Monitoring Tools

Virtualization has taken over the data center landscape. I don’t know of a data center that doesn’t have some element of the software-defined data center in use. Industry-leading hypervisor vendors have created many tools and advanced options that go above and beyond server virtualization. With so many advanced features and software in place, it’s important to have a tool that monitors not just your VMs, but your entire virtualization stack. Software-defined networking (SDN) has become popular, and while it might fall under the networking section, most SDN configuration can be handled directly from the virtual administration console. Find a tool that provides more metrics than you think you need; you may scale out and need them later. A VM monitoring tool can catch issues like CPU contention, lack of VM memory, resource contention, and more.

 

Storage Monitoring Tools

You can’t have a data center without some form of storage, whether it be slow disks, fast disks, or a combination of both. Implementing a storage monitoring tool will help the administrator catch issues that slow business continuity such as a storage path dropping, a storage mapping being lost, loss of connectivity to a specific disk shelf, a bad disk, or a bad motherboard in a controller head unit. Data is king in today’s data center, so it’s imperative to have a storage monitoring tool in place to catch anomalies or issues that might hurt business continuity or compromise data integrity.
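One of the simplest checks a storage monitoring tool performs is watching volume capacity. Here’s a minimal sketch using Python’s standard library; the path and the 80% warning threshold are example choices, not recommendations.

```python
# Minimal capacity check: flag a volume once it crosses a usage threshold.
# The path and threshold are examples -- a real tool checks many volumes
# and also watches paths, mappings, and disk health.
import shutil

def check_volume(path, warn_pct=80):
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    status = "WARN" if used_pct >= warn_pct else "OK"
    return status, round(used_pct, 1)

status, pct = check_volume("/")
print(f"/ is {pct}% full -> {status}")
```

Commercial tools layer alert routing, trending, and path/controller health on top of checks like this one.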

Environment Monitoring Tools          

Last, but definitely not least, a data center environment monitoring tool can save you from losing hardware and data altogether by protecting against physical and environmental issues in the room. A good environment monitoring tool will alert you to excess moisture, an extreme drop in temperature, or a spike in temperature. These tools usually combine a video component for visual monitoring with sensors installed throughout the data center room to track environmental factors. Water can do serious damage in your data center, so great monitoring tools place sensors near the floor to detect moisture and pooling water.
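The alerting logic behind an environment monitor amounts to comparing sensor readings against safe ranges. Below is a toy sketch; the sensor names and limits are invented for illustration, not taken from any standard.

```python
# Toy environment-monitor alerting: flag readings outside safe ranges.
# Sensor names and limits are made up for the example.

SAFE_RANGES = {"temperature_c": (18, 27), "humidity_pct": (40, 60)}

def evaluate(readings):
    """Return a list of alert strings for out-of-range readings."""
    alerts = []
    for sensor, value in readings.items():
        low, high = SAFE_RANGES[sensor]
        if not low <= value <= high:
            alerts.append(f"{sensor}={value} outside {low}-{high}")
    return alerts

print(evaluate({"temperature_c": 31, "humidity_pct": 55}))
```

A production tool would add hysteresis and alert escalation so a reading hovering at a threshold doesn’t page you every few seconds.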

 

You can’t be too careful when it comes to protecting your enterprise data center. Monitoring tools like the ones listed above are a good place to start. Check your data center against my list and see if it matches up. Today, there are many tools that encompass all these areas in one package, making it convenient for the administrator to manage it all from a single screen.

Small- to medium-sized businesses (SMBs) tend to get overlooked when it comes to solutions that fit their infrastructure monitoring needs. Many of the tools that exist today cater to the enterprise, where there’s a much larger architecture with many moving parts. The enterprise data center requires solutions for monitoring systems an SMB might not have. The SMB or remote office/branch office (ROBO) is a much smaller operation, sometimes running a few servers and some smaller networking gear. There may be a storage solution on the back end, but it’s more likely that the data for their systems is stored on a local drive within one or all of their servers.

 

It seems unfair for the SMB to be ignored as developers typically work to create solutions that will fit better in an enterprise architecture than in a smaller architecture, but that’s the nature of the beast. There’s more money in enterprise solutions, with enterprise licensing agreements (ELAs) reaching into the millions of dollars for some clients. It would make sense that enterprise software is more readily available than SMB software, but that doesn’t mean there aren’t solutions out there for the SMB.

Find What’s Right for YOU

Don’t pick a solution with more than what you need for the infrastructure within your organization. If your organization consists of a single office with three physical servers, a modem/router, and a direct-attached storage (DAS) solution, you don’t need an enterprise solution to monitor those systems. There are many less expensive or open-source server monitoring tools out there with plenty of documentation to help you through installation and configuration. Enterprise solutions aren’t always the answer just because they’re “enterprise solutions.” Bigger isn’t always better. If a solution with a support agreement is more in line with your expectations, quite a few providers offer an SMB-class monitoring solution.

 

Don’t Overpay

Software salespeople are all about selling, selling, selling. Many times, a salesperson is sent on a client call with a solutions engineer (SE) who has more technical experience, so focus more attention on the SE and less on the salesperson. There’s no need to shell out a ton of money for an ELA that’s way more than what your SMB infrastructure needs. Quality is often assumed to follow cost, and that’s just plain false. When it comes to choosing a monitoring tool for your SMB, “quality over quantity” should be your mantra. If you don’t require 24/7/365 monitoring and platinum-level SLAs with two-minute response times, don’t buy them. Find a tool that fits your SMB budget, and don’t feel slighted because you didn’t buy the shinier, more expensive enterprise solution.

 

Pick a Vendor That Will Work for YOU

Software vendors, especially those that work to develop large enterprise monitoring solutions, don’t always have the best interests of the SMB in mind when building a tool. By focusing your search on vendors that scale to the SMB market, you’ll find that the sales process and the support will be tailored to the needs of your organization. With vendors building scalable tools, customization becomes a key selling point. Vendors can cater to the needs and requirements of the customer, not the market.

 

Peer Pressure is Real

Don’t take calls from software vendors that cater only to the needs of enterprise-scale monitoring solutions. Nothing against enterprise monitoring solutions—they’re needed for the enterprise. However, focusing on the chatter and the “latest and greatest” types of marketing will make your SMB feel even smaller. There’s no competition. Pick what works for your SMB. Don’t overpay. Find a vendor that will support you. By putting all these tips in place, you can find a monitoring tool for your SMB that won’t make you feel like you had to settle.

Monitoring tools are vital to any infrastructure. They analyze and give feedback on what’s going on in your data center. If there are anomalies in traffic, a network monitoring tool will catch them and alert the administrators. When disk space is getting full on a critical server, a server monitoring tool alerts the server administrators that they need to add space. Some tools are network-only or systems-only, and these may not always provide all the analysis you need. There are broader monitoring tools that can cover everything happening within your environment.
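The disk-space example above can be sketched in a few lines. This is a minimal illustration, not any particular product’s logic, and the 90% threshold is an assumption for the example:

```python
# Minimal sketch of a disk-space check like the one a server monitoring
# tool performs. The 90% threshold is an illustrative assumption; real
# tools make this configurable per volume.
import shutil

def disk_usage_pct(path="/"):
    """Return the percentage of the volume at `path` currently in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

def needs_alert(used_pct, threshold=90.0):
    """True when usage crosses the alerting threshold."""
    return used_pct >= threshold

if __name__ == "__main__":
    pct = disk_usage_pct("/")
    if needs_alert(pct):
        print(f"ALERT: disk {pct:.1f}% full, add space")
    else:
        print(f"OK: disk {pct:.1f}% full")
```

A real tool would run this on a polling interval and route the alert to email or a dashboard rather than printing it.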

 

In searching for a monitoring tool that fits the needs of your organization, it can be difficult to find one that’s the right size for your environment. Not all monitoring tools are one-size-fits-all. If you’re searching for a network monitoring tool, you don’t need to purchase one that covers server performance, storage metrics, and more. There are several things to consider when choosing a monitoring tool that fits your environment.

 

Run an Analysis on Your Environment

The first order of business when trying to determine which monitoring tool best fits your needs is to analyze your current environment. There are tools on the market today that help map out your network environment and gather key information such as operating systems, IP addresses, and more. Knowing which systems are in your data center, what types of technologies are present, and what application or applications they support will help you decide which tools are the best fit.
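As a rough illustration of what such discovery tools do under the hood, here is a minimal reachability probe that builds a simple inventory. The host names and ports are placeholders, and real discovery tools also pull OS and hardware details via protocols like SNMP or WMI:

```python
# Illustrative sketch of the inventory a discovery tool builds: probe a
# list of hosts/ports and record what answers. Hosts here are placeholders.
import socket

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def build_inventory(targets):
    """targets: list of (name, host, port). Return {name: 'up' or 'down'}."""
    return {name: ("up" if probe(host, port) else "down")
            for name, host, port in targets}

if __name__ == "__main__":
    targets = [("web01", "127.0.0.1", 80)]  # placeholder target
    print(build_inventory(targets))
```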

 

Define Your Requirements

There may be legal requirements defining what tools need to be present in your environment. Understanding these specific requirements will likely narrow down the list of potential tools that will work for you. If you’re running a Windows environment, there are many built-in tools that perform the tasks needed in an environment. Additionally, if your organization is using these built-in tools, it may not be necessary to spend money on another tool to do the same thing.

 

Know Your Budget

Budgetary demands typically determine these decisions for most organizations. Analyzing your budget will help you understand which tools you can afford and will narrow the list down. Many tools do more than needed for some, so it’s not necessary to spend more on a tool that might be outside your budget.

 

On-prem or Cloud?

When picking a monitoring tool, it’s important to research whether you want an on-premises tool or a cloud-based one. SaaS tools are very flexible and can store the information the tool gathers in the cloud. On the other hand, having an on-premises tool keeps everything in-house and provides a more secure option for data gathered. Choosing an on-prem tool gives you the ability to see your data 24/7/365 and have complete ownership of it. With a SaaS tool, it’s likely you could lose some visibility into how things are operating on a daily basis. Picking the right hosting option should be strictly based on your requirements and comfort with the accessibility of your data.

 

Just Pick One Already

This isn’t meant to be harsh, but spending too much time researching and looking for a tool that fits your needs may put you in a bad position. While you’re trying to choose between the best network monitoring tools, you could be missing out on what’s actually going on inside your systems. Analyze your environment, define your requirements, know your budget, pick a hosting model, and then make your selection. A monitoring solution that fits the needs of your environment will pay dividends in the end.

 

Many organizations grow each year in business scope and footprint. When I say footprint, it’s not merely the size of the organization, but the number of devices, hardware, and other data center-related items. New technologies creep up every year, and many of those technologies live on data center hardware, servers, networking equipment, and even mobile devices. Keeping track of the systems within your organization’s data center can be tricky. Simply knowing where the device is and if it’s powered on isn’t enough information to get an accurate assessment of the systems' performance and health.

 

Data center monitoring tools provide the administrator(s) with a clear view of what’s in the data center, the health of the systems, and their performance. There are many data center monitoring tools available depending on your needs, including network monitoring, server monitoring, and virtual environment monitoring, and it’s important to consider both the open-source and proprietary options available.

 

Network Monitoring Tools for Data Centers

 

Networking can get complicated, even for the most seasoned network pros. Depending on the size of the network you operate and maintain, managing it without a dedicated monitoring tool can be overwhelming. Most large organizations will have multiple subnets, VLANs, and devices connected across the network fabric. Deploying a networking tool will go a long way in understanding what network is what, and whether or not there are any issues with your networking configuration.

 

An effective networking tool for a data center is more than just a script that pings hosts or devices across the network. A good network tool monitors everything down to the packet. Areas in the network where throughput is crawling will be captured and reported within the GUI or through email alerts. High error rates and slow response times will also be captured and reported. Network administrators can customize the views and reports that are fed to the GUI to their specifications. If the network is bad or broken, things will escalate quickly. The best network monitoring tools can help avoid this.
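The kind of threshold alerting described above can be sketched as follows. The error-rate and response-time cutoffs are illustrative assumptions; real tools make them configurable per interface:

```python
# Hedged sketch of interface alerting logic: compare counters between two
# polls and flag high error rates or slow responses. Thresholds are
# illustrative assumptions, not standards.
def error_rate(errors_delta, packets_delta):
    """Errors per packet over a polling interval; 0.0 if no traffic."""
    return errors_delta / packets_delta if packets_delta else 0.0

def classify(err_rate, resp_ms, max_err=0.01, max_ms=200):
    """Return a list of alert strings for out-of-bounds metrics."""
    alerts = []
    if err_rate > max_err:
        alerts.append(f"high error rate: {err_rate:.2%}")
    if resp_ms > max_ms:
        alerts.append(f"slow response: {resp_ms} ms")
    return alerts
```

A monitoring loop would feed `classify` the deltas from each SNMP poll and push any returned alerts to the GUI or to email.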

 

Data Center Server Monitoring Tools

 

Much of the work that a server or virtual machine monitoring tool does can also be accomplished with a good network monitoring tool. However, server/VM monitoring tools have capabilities that go above and beyond what a network monitoring tool offers. For example, there are tools designed specifically to monitor your virtual environment.

 

A virtual environment essentially contains the entire data center stack, from storage to networking to compute. This entire stack is more than just simple reachability and SNMP monitoring. It’s imperative to deploy a data center monitoring solution that understands things at a hypervisor level where transactions are brokered between the kernel and the guest OS. You need a tool that does more than tell you the lights are still green on your server. You need a tool that will alert you if your server light turns amber and why it’s amber, as well as how to turn it back to green.

 

Some tools offer automation in their systems monitoring. For instance, if one of your servers is running high on CPU utilization, the tool would migrate that VM to a cluster with more available CPU for that VM. That kind of monitoring is helpful, especially when things go wrong in the middle of the night and you’re on call.
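A hedged sketch of that automation logic might look like this. The cluster data and the migration decision are hypothetical stand-ins for a real hypervisor API:

```python
# Sketch of the automated-remediation idea: when a VM's CPU is hot, pick
# the cluster with the most headroom and propose moving the VM there.
# Cluster names and the 90% threshold are placeholders for illustration.
def pick_target_cluster(clusters):
    """clusters: dict of name -> free CPU %. Return name with most headroom."""
    return max(clusters, key=clusters.get)

def remediate(vm_cpu_pct, current, clusters, threshold=90):
    """Return (action, target) for a VM at vm_cpu_pct on cluster `current`."""
    if vm_cpu_pct < threshold:
        return ("none", current)
    target = pick_target_cluster(clusters)
    if target == current:
        return ("none", current)  # nowhere better to go
    return ("migrate", target)
```

In a real tool, the `migrate` action would call the hypervisor’s live-migration API rather than just returning a tuple.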

 

Application Monitoring Tools

 

Applications are the lifeblood of most organizations. Depending on the customer, some organizations may have to manage and monitor several different applications. Having a solid application performance monitoring (APM) tool in place is crucial to ensuring that your applications are running smoothly and the end users are happy.

 

APM tools allow administrators to see up-to-date information on application usage, performance metrics, and any potential issues that may arise. If an application begins to deliver a poor end-user experience, you want to know about it as far ahead of the end user as possible. APM tools track client CPU utilization, bandwidth consumption, and many other performance metrics. If you’re managing multiple applications in your environment, don’t leave out an APM tool; it might save you when you need it most.
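As an illustration of how an APM-style check might flag a degrading end-user experience, here is a sketch that tracks response-time samples against a 95th-percentile target. The 300 ms SLO is an assumption for the example:

```python
# Illustrative APM-style check: flag the app when the 95th percentile of
# response times exceeds a target. The 300 ms SLO is an assumed value.
import math

def p95(samples):
    """Nearest-rank 95th percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

def experience_ok(samples, slo_ms=300):
    """True while the p95 response time is within the SLO."""
    return p95(samples) <= slo_ms
```

Percentiles are usually preferred over averages here because a few very slow requests, the ones end users actually complain about, disappear into a mean.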

 

Finding the Best Data Center Monitoring Tools for Your Needs

 

Ensure that you have one or all of these types of tools in your data center. It saves you time and money in the long run. Having a clear view of all aspects of your data center and their performance and health helps build confidence in the reliability of your systems and applications.

Cost plays a factor in most IT decisions. Whether the costs are hardware- or software-related, understanding how a tool’s cost will affect the bottom line is important. Typically, it’s the engineer’s or administrator’s task to research tools and/or hardware that fit the organization’s needs both fiscally and technologically. Organizations have multiple options, ranging from open-source tools to proprietary, off-the-shelf products. Many organizations prefer to either build their own tools or purchase off-the-shelf solutions that have been tried and tested. However, open-source software has become increasingly popular and has been adopted by many organizations in both the public and private sectors. Open-source software is built, maintained, and updated by a community of individuals on the internet, and it can change on the fly. This poses the question: is open-source software suitable for the enterprise? There are pros and cons that can make that decision easier.

 

The Pros of Open-source Software

 

Open-source software is cost-effective. Most open-source software is free to use. In cases where third-party products are involved, such as plug-ins, there may be a small cost incurred. However, open-source software is meant for anyone to download and do with as they please, to some extent based on licensing. With budgets being tight for many, open-source could be the solution to help stretch your IT dollars.

 

Constant improvement is a hallmark of open-source software. The idea is that the software can and will be improved as users find flaws and room for improvement. Open-source software is just that: open, and anyone can update it or improve it. A user who finds a bug can fix it and publish the updated iteration of the software. Most large-scale enterprise software solutions require major releases to fix bugs and are bound by release schedules to get the latest and greatest out to their customers.

 

The Cons of Open-source Software

 

Open-source software might not stick around. There’s a possibility that the open-source software your organization has bet on simply goes away. When the community behind updating the software and maintaining the source code closes up shop, you’re the one now tasked with maintaining it and writing any changes pertinent to your organization. The possibility of this happening makes open-source a riskier choice for your organization.

 

Support isn’t always reliable. When there is an issue with your software or tool, it’s nice to be able to turn to support for help resolving your issue. With open-source software, this isn’t always guaranteed, and if there is support, there aren’t usually the kind of SLAs in place that you would expect with a proprietary enterprise-class software suite.

 

Security becomes a major issue. Anyone can be hacked. However, the risk is far less when it comes to proprietary software. Due to the nature of open-source software allowing anyone to update the code, the risk of downloading malicious code is much higher. One source referred to using open-source software as “eating from a dirty fork.” When you reach in the drawer for a clean fork, you could be pulling out a dirty utensil. That analogy is right on the money.

 

The Verdict

 

Swim at your own risk. Much like the sign you see at a swimming pool when there is no lifeguard present, you have to swim at your own risk. If you are planning on downloading and installing an open-source software package, do your best to scan it and be prepared to accept the risk of using it. There are pros and cons, and it’s important to weigh them with your goals in mind to decide whether or not to use open-source.

When it comes to getting something done in IT, the question of "How?" can be overwhelming. There are many different solutions on the market to achieve our data center monitoring and management goals. The best way to achieve success is for project managers and engineers to work together to determine what tools best fit the needs of the project.

 

With most projects, budgets are a major factor in deciding what to use. This can make initial decisions relatively easy or increasingly difficult. In the market, you’ll find a spectrum from custom implementations of enterprise management software to smaller, more nimble solutions. There are pros and cons to each type of solution (look for a future post!), and depending on the project, some cons can be deal-breakers. Here are a couple of points to think about when deciding on a tool/solution to get your project across the finish line.

 

Budget, Anyone?

 

Budgets run the IT world. Large companies with healthy revenues have large budgets to match. Smaller organizations have budgets more in line with the IT services they need to operate. Each of these types of companies needs a solution that fits its needs without straining its budget. There are enterprise management systems to fit a variety of budgets, for sure. Some are big, sprawling systems with complicated licensing and costs to match. Others are community-managed tools with lower costs but also less support. And, of course, there are tools that fall between those two extremes.

 

Don’t think that having limitless budget means that you should just buy the most expensive tool out there. You need to find a solution that first and foremost fits your needs. Likewise, don’t feel like a small budget means that you can only go after free solutions or tools with limited support. Investigating all the options and knowing what you need are the keys to finding good software at reasonable costs.

 

Do I Have the Right People?

Having the right people on your IT staff also helps when choosing what type of management tool to use. IT pros typically love researching on their own and will spend hours in community threads discussing tools. If you have a seasoned and dedicated staff, go with a more nimble tool. It usually costs less, and your staff will ensure it gets used properly.

 

Conversely, if your IT staff is lacking, or is filled with junior level admins, a nimble tool might not be the best solution. An enterprise solution often comes with more support and a technical account manager assigned directly to your account. Enterprise solutions often offer professional services to install the tool and configure it to meet the demands of your infrastructure. Some enterprise management software vendors offer on-site training to get your staff up to speed on the tool’s use.

 

Don’t forget that sometimes the best person for the job may not be a person at all! Enterprise management systems often provide tools that can automate a large number of tasks, such as data collection or performance optimization. If your staff is overworked or lacking in certain areas, you may be able to rely on your EMS platform to streamline the things you need to accomplish and fill in the gaps where necessary. Not everyone needs a huge IT team, but using your platforms as a force multiplier can give you an advantage.

 

There are many other points to discuss when deciding on an enterprise monitoring or management system versus a nimble tool. However, the points discussed above should be the most pertinent to your discussions. Do not make any decisions on a solution without taking the time to make some proper assessments first. Trust your staff to be honest about their capabilities, ensure your budgetary constraints are met, and choose a tool that will be the best fit for the project. In the end, what matters most is delivering a solution that meets your customer and/or company’s needs.

I don’t think there’s anyone out there who truly loves systems monitoring. I may be wrong, but traditionally, it’s not the most exciting tool to work with. When you think of systems monitoring, what comes to mind? For me, as an admin, it’s a vision of reading through logs and staring at events and timestamps late into the night. There’s clearly a need for monitoring, though, and your CIO or IT manager definitely wants you to set up tooling that captures every metric and piece of performance data you can dig up. Then there’s the “root cause” issue: the decision makers want to know the root cause when their application crashed and went down for four hours. You get that answer from a good monitoring tool. Well, time to put on a happy face and implement one. Not just any tool will do, though. You want a tool that does more than show you a bunch of red and green lights. For it to be successful, there has to be something in it for you! Here are my top three things a good monitoring tool can do for you, the admin or engineer in the trenches day in and day out.

 

Find the Root Cause

 

Probably the single best thing a (good) systems monitoring tool can do is find the root cause of an issue that has become seriously distressing for your team. If you’ve been in IT long enough, the experience of an unexplained outage is all too familiar. After the outage is finally fixed and things are back online, the first thing the higher-ups want to know is “why?” or “what was the root cause?” I cringe whenever I hear this. It means I need to dig through system logs, application event logs, networking logs, and any other avenue I have to find the fabled root cause. Most great monitoring tools today have root cause analysis (RCA) built in. RCA can literally save you hours or days of poring over logs. In discussions about implementing a systems monitoring tool, make sure RCA is high on your list of requirements.

 

Establish a Performance Baseline

How are you supposed to know whether something is an actual event or just a false positive? How could you spot something that’s out of the norm for your environment? You can’t, unless you have a monitoring tool in place that learns what normal activity looks like and which events are simply anomalies. With some tools that offer high-frequency polling, you can pull baseline statistics for behavior down to the second. Any good monitoring tool will take a while to collect and analyze data before producing metrics that have meaning for your organization. Over time, the tool adapts its baseline and continually provides you with the most up-to-date, accurate metrics. False positives can eat up a lot of resources for nothing.
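The baseline idea can be sketched simply: keep a rolling window of samples and flag a new reading as anomalous when it falls far outside the learned mean. The window size and three-sigma cutoff below are common conventions, not requirements:

```python
# Minimal rolling-baseline sketch: learn "normal" from recent samples and
# flag readings that deviate too far. Window size and 3-sigma cutoff are
# common conventions chosen for illustration.
from collections import deque
import statistics

class Baseline:
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)

    def observe(self, value):
        """Feed a new sample into the rolling window."""
        self.samples.append(value)

    def is_anomaly(self, value, sigmas=3.0):
        """True when value deviates more than `sigmas` std devs from the mean."""
        if len(self.samples) < 2:
            return False  # not enough history to judge yet
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples)
        if stdev == 0:
            return value != mean
        return abs(value - mean) > sigmas * stdev
```

Note the warm-up behavior: until the window has some history, nothing is flagged, which mirrors the "takes a while to collect data" point above.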

 

Reports, Reports, Reports

When issues arise, or RCA needs to be done, you want the systems monitoring tool to be capable of producing reports. Reports can come in the form of an exportable .csv, .xls, or .pdf file. Some managers like a printout, a hard copy they can write on and mark up. With the ability to produce reports, you have a solid history of network or systems behavior that you can store in SharePoint or whatever file share you use. Most tools keep an archive or history of reports, but it’s always good to have the option of exporting for backup and recovery purposes. I’ve found that a sortable Excel file I can search through comes in very handy when I need to really dig in and find an issue that might be hiding in the metrics.
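CSV export of the kind described here is straightforward with the standard library. The column names and rows below are illustrative:

```python
# Sketch of a CSV report export using only the standard library.
# Field names and rows are illustrative placeholders.
import csv
import io

def report_to_csv(rows, fieldnames):
    """Render a list of dicts as CSV text, ready to write to a file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

if __name__ == "__main__":
    rows = [{"host": "web01", "cpu": 42}, {"host": "db01", "cpu": 87}]
    print(report_to_csv(rows, ["host", "cpu"]))
```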

 

Systems monitoring tools can do so much for your organization, and more importantly, you! Make sure that when you are looking for a systems monitoring tool, sift through all the bells and whistles and be sure that there are at least these three features built in… it might save your hide one day, trust me!

When it comes to IT, things go wrong from time to time. Servers crash, memory goes bad, power supplies die, files get corrupted, backups get corrupted... there are so many things that can fail. When they do, you work to troubleshoot the issue and bring everything back online as quickly as humanly possible. It feels good; you might even high-five or fist-bump your co-worker. For the admin, this is a win. For the higher-ups, however, this is where the finger-pointing begins. Have you ever had a manager ask, “So what was the root cause?” or say, “Let’s drill down and find the root cause”?

 

 

I have nightmares about writing after action reports (AARs) on what happened and what the root cause was. In my imagination, the root cause is a nasty monster that wreaks havoc in your data center, the kind of monster that lived under your bed when you were eight years old, only now it lives in your data center. This monster barely leaves a trace of evidence of what it did to bring your systems down or corrupt them. This is where a good systems monitoring tool steps in to save the day and help sniff out the root cause.

 

Three Things to Look for in a Good Root Cause Analysis Tool

A good root cause analysis (RCA) tool can accomplish three things for you, which can provide you with the best track on what the root cause most likely is and how to prevent it in the future. 

  1. A good RCA tool will… be both reactive and predictive. You don’t want a tool that simply points to logs or directories where there might be issues. First, you want a tool that can describe what happened in detail and point to the location of the issue; you can’t begin to track down a problem without understanding what happened and having a clear timeline of events. Second, the tool should learn patterns of activity within the data center so it can become predictive and warn you when it sees things going downhill.
  2. A good RCA tool will…build a baseline and continue to update that baseline as time goes by.  The idea here is for the RCA tool to really understand what looks “normal” to you, what is a normal set of activities and events that take place within your systems. When a consistent and accurate baseline is learned, the RCA tool can get much more accurate as to what a root cause might be when things happen outside of what’s normal. 
  3. A good RCA tool will…sort out what matters, and what doesn’t matter. The last thing you want is a false positive when it comes to root cause analysis. The best tools can accurately measure false positives against real events that can do serious damage to your systems. 
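The “clear timeline of events” from point 1 boils down to merging timestamped entries from several logs into one ordered view. Here is a minimal sketch with made-up event data:

```python
# Sketch of timeline reconstruction for RCA: merge timestamped entries
# from several already-sorted log streams into one ordered timeline.
# Event data in the example is made up for illustration.
import heapq

def merge_timelines(*streams):
    """Each stream is a sorted list of (timestamp, source, message).
    Return a single list merged in timestamp order."""
    return list(heapq.merge(*streams, key=lambda event: event[0]))

if __name__ == "__main__":
    net = [(1, "net", "link flap"), (5, "net", "link up")]
    app = [(3, "app", "db timeout")]
    for ts, source, msg in merge_timelines(net, app):
        print(f"t={ts} [{source}] {msg}")
```

Seeing the network flap land just before the application’s database timeout is exactly the kind of cause-before-effect ordering an RCA tool surfaces automatically.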

 

Use More Than One Method if Necessary

Letting your RCA tool become a crutch to your team can be problematic. There will be times that an issue is so severe and confusing that it’s sometimes necessary to reach out for help. The best monitoring tools do a good job of bundling log files for export should you need to bring in a vendor support technician. Use the info gathered from logs, plus the RCA tool output and vendor support for those times when critical systems are down hard, and your business is losing money every minute that it’s down.
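Bundling logs for vendor support can also be sketched with the standard library. The file paths here are placeholders:

```python
# Hedged sketch of the log-bundling step: gather log files into a single
# gzip'd tar archive to hand to vendor support. Paths are placeholders.
import os
import tarfile

def bundle_logs(paths, out_path="support-bundle.tar.gz"):
    """Pack each existing file in `paths` into a gzip'd tar; return out_path."""
    with tarfile.open(out_path, "w:gz") as tar:
        for p in paths:
            if os.path.isfile(p):  # skip rotated-away or missing files
                tar.add(p, arcname=os.path.basename(p))
    return out_path
```

Real monitoring suites typically add system metadata (versions, configs) alongside the raw logs, but the core of the feature is this archive step.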
