
Geek Speak


While Hollywood is full of silly depictions of machine learning and artificial intelligence, the reality delivers significant benefits. Administrators today oversee many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks are monotonous and tedious, and most of them recur daily. In these cases, machine learning eases the burden on administrators and helps them make better use of their time. Lately, however, more people seem to worry that machine learning will replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for a human to perform certain tasks, I don’t believe we’ll see humans replaced by machines (sorry, Terminator fans). Instead, I’ll highlight why I believe machine learning matters now and will continue to matter for generations to come.

 

Machine Learning Improves Administrators’ Lives

Some tasks administrators are responsible for can be very tedious and take a long time to complete. With machine learning, automation runs those daily tedious tasks on a schedule and becomes more efficient as system behavior is learned and optimized on the fly. A great example comes in the form of spam mail or calls. Big-name telecom companies are now using machine learning to filter out the spam callers flooding cell phones everywhere. Call blocker apps can now screen calls for you based on spam call lists analyzed by machine learning and then block potential spam. In other examples, machine learning can analyze system behavior against a performance baseline and then alert the team to any anomalies or needed changes. Machine learning is here to help the administrator, not give them anxiety about being replaced.
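To make the baseline idea concrete, here’s a minimal sketch of that kind of alerting logic in Python. The window size, threshold, and sample CPU numbers are illustrative assumptions, not a reference to any particular product.

    # Flag metric samples that stray too far from a learned baseline.
    from statistics import mean, stdev

    def find_anomalies(samples, window=30, threshold=3.0):
        """Return (index, value) pairs whose z-score against the trailing window exceeds the threshold."""
        anomalies = []
        for i in range(window, len(samples)):
            baseline = samples[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
                anomalies.append((i, samples[i]))
        return anomalies

    # Example: steady CPU utilization with one sudden spike.
    cpu = [40 + (i % 5) for i in range(60)] + [95]
    print(find_anomalies(cpu))  # -> [(60, 95)]

A real tool would learn seasonality and watch many metrics at once, but the alert-on-deviation principle is the same.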

 

Machine Learning Makes Technology Better


There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, and security hardening. There’s something for every type of workload. When machine learning is applied to these software technologies, it enhances the application and increases the ease of use. And machine learning does exactly what the name implies: it’s always learning. If an application workload suddenly increases, machine learning captures the change and uses an algorithm to determine how to react. When there’s a storage bottleneck, machine learning analyzes the traffic to determine what’s causing the backup and then works out a possible solution for administrators to implement.

 

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends in the past 10 to 15 years have leaned toward reducing complexity. Virtualization technology has reduced the need for a large footprint in the data center and reduced the complexity of systems management. Hyperconverged infrastructure (HCI) has gone a step further and consolidated an entire rack’s worth of technology into one box. Machine learning takes it a step further by enabling automation and fast analysis of large data sets to produce actionable tasks. Tasks that once required a ton of administrative overhead are now reduced to automated, scheduled jobs monitored by the administrator. Help desk analysts benefit from machine learning’s ability to recognize trending data to better triage certain incident tickets and reduce complexity in troubleshooting those incidents.
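As a sketch of what ML-assisted triage might look like, the toy example below trains a classifier on past tickets and suggests a queue for a new one. The ticket texts and queue names are invented for illustration, a real deployment would train on historical ticket data, and it assumes scikit-learn is installed.

    # Learn from past tickets, then suggest a queue for a new one.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    past_tickets = [
        ("cannot connect to vpn from home office", "network"),
        ("switch port flapping in rack 12", "network"),
        ("disk space alert on file server", "storage"),
        ("lun not visible after reboot", "storage"),
        ("outlook crashes when opening attachments", "desktop"),
        ("laptop will not power on", "desktop"),
    ]
    texts, queues = zip(*past_tickets)

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(texts, queues)

    print(model.predict(["vpn keeps dropping every few minutes"]))  # likely ['network']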

 

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.

There have been so many changes in data center technology in the past 10 years, it’s hard to keep up at times. We’ve gone from a traditional server/storage/networking stack with individual components, to a hyperconverged infrastructure (HCI) where it’s all in one box. The data center is more software-defined today than it ever has been with networking, storage, and compute being abstracted from the hardware. On top of all the change, we’re now seeing the rise of artificial intelligence (AI) and machine learning. There are so many advantages to using AI and machine learning in the data center. Let’s look at ways this technology is transforming the data center.

 

Storage Optimization

Storage is a major component of the data center. Having efficient storage is of the utmost importance. So many things can go wrong with storage, especially in the case of large storage arrays. Racks full of disk shelves with hundreds of disks, of both the fast and slow variety, fill data centers. What happens when a disk fails? The administrator gets an alert and has to order a new disk, pull the old one out, and replace it with the new disk when it arrives. AI uses analytics to predict workload needs and possible storage issues by collecting large amounts of raw data and finding trends in the usage. AI also helps with budgetary concerns. By analyzing disk performance and capacity, AI can help administrators see how the current configuration performs and order more storage if it sees a trend in growth.
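Trend-based capacity forecasting can be as simple as fitting a line to historical usage. The sketch below does exactly that with NumPy; the twelve monthly samples and the 100 TB ceiling are assumed figures for illustration.

    # Fit a linear trend to monthly capacity usage and estimate when the array fills.
    import numpy as np

    used_tb = np.array([52, 54, 55, 58, 60, 61, 64, 66, 69, 71, 73, 76])  # monthly samples
    months = np.arange(len(used_tb))
    capacity_tb = 100

    slope, intercept = np.polyfit(months, used_tb, 1)  # growth in TB per month
    months_to_full = (capacity_tb - used_tb[-1]) / slope
    print(f"growing ~{slope:.1f} TB/month; full in ~{months_to_full:.0f} months")

Real storage analytics add seasonality, per-tier trends, and failure prediction from drive telemetry, but this growth-trend math is the core of the budgeting argument.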

 

Fast Workload Learning

Capacity planning is an important part of building and maintaining a data center. Fortunately, with technology like HCI being used today, scaling out as the workload demands is a simpler process than it used to be with traditional infrastructure. AI and machine learning capture workload data from applications and use it to analyze the impact of future use. Having a technology aid in predicting the demand of your workloads can be beneficial in avoiding downtime or loss of service for the application user. This is especially important in the process of building a new data center or stack for a new application. The analytics AI provides help administrators see the full needs of the data center, from cooling to power and space.

 

Less Administrative Overhead

The new word I love to pair with artificial intelligence and machine learning is “autonomy.” AI works on its own to analyze large amounts of data, find trends, and create performance baselines in data centers. For example, facilities functions such as power and cooling can use AI to analyze power loads and environmental variables and adjust cooling accordingly. This happens autonomously (love that word!), adjusting on the fly to keep performance at a high level. In a traditional setting, you’d need several different tools and administrators or NOC staff to handle the analysis and monitoring.
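Stripped to its essentials, that kind of adjustment is a control loop. Here’s a minimal sketch of a proportional controller for inlet temperature; the setpoint, gain, and sample readings are illustrative assumptions.

    # Nudge cooling output toward a target inlet temperature.
    TARGET_C = 24.0   # desired inlet temperature
    GAIN = 5.0        # percent of cooling change per degree of error

    def adjust_cooling(inlet_temp_c, cooling_pct):
        """Return a new cooling output (0-100%) based on the temperature error."""
        error = inlet_temp_c - TARGET_C
        return max(0.0, min(100.0, cooling_pct + GAIN * error))

    print(adjust_cooling(26.5, 40.0))  # too warm -> 52.5
    print(adjust_cooling(23.0, 40.0))  # too cool -> 35.0

Production systems learn the gain and setpoints from data rather than hard-coding them, which is where the machine learning comes in.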

 

Embrace AI and Put it to Work

The days of AI and machine learning being scary and unknown are past. Take stock of your current data center technology and decide whether AI and/or machine learning would be of value to your project. Another common concern is AI replacing lots of jobs soon; while there’s some truth to it, it isn’t something to fear. It’s time to embrace the benefits of AI and use it to enhance the current jobs in the data center instead of fearing it and missing out on the improvements it can bring.

Social media has become a mainstream part of our lives. Day in and day out, most of us are using social media to do micro-blogging, interact with family, share photos, capture moments, and have fun. Over the years, social media has changed how we interact with others, how we use our language, and how we see the world. Since social media is so prevalent today, it’s interesting to see how artificial intelligence (AI) is changing it. When was the last time you used social media for fun? What about for business? School? There are so many applications for social media, and AI is changing the way we use it and how we digest the tons of data out there.

 

Social Media Marketing

I don’t think the marketing world has ever been more excited about an advertising vehicle than it is about social media marketing. Did you use social media today? Chances are this article triggered you to pick up your phone and check at least one of your feeds. When you scroll through your feeds, how many ads do you get bombarded with? The way I see it, there are two methods of social media marketing: overt and covert. Some ads are overt and obviously placed in your feed to get your attention. AI has allowed those ads to be placed in user feeds based on user data and browsing habits. AI crunches the data and pulls in ads relevant to the current user. Covert ads are a little sneakier, hence the name. Covert social media marketing is slipped into your feeds via paid influencers or YouTube/Instagram/Twitter mega users with large followings. Again, AI analyzes the data and posts on the internet to bring you the most relevant images and user posts.

 

Virtual Assistants

Siri, Alexa, Cortana, Bixby… whatever other names are out there. You know what I’m talking about: the virtual assistant living in your phone, car, or smart speaker, always listening and willing to pull up whatever info you need. There’s no need to tweet while driving or search for the highest-rated restaurant on Yelp while biking—let Siri do it for you. When you want to use Twitter, ask Alexa to tweet and then compose it all with your voice. Social media applications tied into virtual assistants make interacting with your followers much easier. AI is what allows these virtual assistants to tweet, type, and text via dictation easily and accurately.

 

Facial Recognition

Facebook is heavily invested in AI, as evidenced by their facial recognition technology tagging users in a picture automatically via software driven by AI. You can also see this technology in place at Instagram and other photo-driven social media offerings. Using facial recognition makes it easier for any user who wants to tag family or friends with Facebook accounts. Is this important? I don’t think so, but it’s easy to see how AI is shaping the way we interact with social media.

 

Catching the Latest Trends

AI can bring the latest trends to your social media feed daily, even hourly if you want it to. Twitter is a prime example of how AI is used to crunch large amounts of data and track trends in topics across the world. AI has the ability to analyze traffic across the web and present the end user with breaking news, or topics suddenly generating a large spike in internet traffic. In some cases, this can help social media users get the latest news, especially as it pertains to personal safety and things to avoid. In other cases, it simply leads us to more social media usage, as witnessed by the meteoric trending when a bunch of celebrities recently started using FaceApp to see how their older selves might look.

 

What About the Future?

It seems like what we have today is never enough. Every time a new iPhone comes out, you can read hundreds of articles online the next day speculating on the next iPhone design and features. Social media seems to be along the same lines, especially since we use it daily. I believe AI will shape the future of our social media usage by better aligning our recommendations and advertisements. Ads will become much better targeted to a specific user based on AI analysis and machine learning. Apps like LinkedIn, Pinterest, and others will be much more usable thanks to developers using AI to deliver social media content to the user based on their data and usage patterns.

For some, artificial intelligence (AI) can be a scary technology. There are so many articles on the web about how AI will end up replacing X% of IT jobs by Y year. There’s no reason to be afraid of AI or machine learning. If anything, most IT jobs will benefit from AI/machine learning. The tech world is always changing, and AI is becoming a big driver of change. Lots of people interact with or use AI without even realizing it. Sometimes I marvel at the video games my kids are playing and think back to when I played Super Mario Brothers on the original Nintendo Entertainment System (which I still have!). I ask myself, “What kind of video games will my kids be playing?” The same applies to the tech world now. What kinds of things will AI and machine learning have changed in 10 years? What about 20 years?

 

Let’s look at how AI is changing the tech world and how it will continue to benefit us in decades to come.

 

Making IoT Great

The idea of the Internet of Things focuses on all the things in our world capable of connecting to the internet. Our cars, phones, homes, dishwashers, washing machines, watches, and yes, even our refrigerators. The internet-connected devices we use need to do what we want, when we want, and do it all effectively. AI and machine learning help IoT devices perform their services effectively and efficiently. Our devices collect and analyze so much data, and they rely more and more on AI and machine learning to sift through and analyze all the data to make our interactions better.

 

Customer Service

I don’t know a single human being who gets a warm and fuzzy feeling when thinking about customer service. No one wants to deal with customer service, especially when it comes to getting help over the phone. What’s our experience today? We call customer service and we likely get a robotic voice (IVR) routing us to the next most helpful robot. I find myself yelling “representative” over and over until I get a human… really, it works!

 

AI is improving our experience with IVR systems by making them easier to interact with and get help from. IVR systems also use AI to analyze input from callers to better route their calls to the right queue based on common trending issues. AI also helps ensure 24/7 customer service, which can be helpful with off-hours issues. You don’t have to feed AI-enhanced IVR systems junk food and caffeine to get through the night!
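As a rough sketch of AI-assisted routing, the example below maps a transcribed caller utterance to a support queue. The keyword lists and queue names are invented for illustration; production IVR systems use trained intent models rather than hand-written keywords.

    # Map a transcribed caller utterance to a support queue.
    ROUTES = {
        "billing": {"bill", "invoice", "charge", "refund"},
        "outage":  {"down", "outage", "offline"},
        "account": {"password", "login", "locked", "username"},
    }

    def route_call(transcript):
        words = set(transcript.lower().split())
        for queue, keywords in ROUTES.items():
            if words & keywords:
                return queue
        return "general"  # fall back to a human-staffed queue

    print(route_call("My internet has been down since this morning"))  # -> outage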

 

Getting Info to the User Faster

Have you noticed the recommendations you get on YouTube? What about the “because you watched…” on Netflix? AI and machine learning are changing the way we get info. Analytical engines pore over user data and match users with the info they most want, and quickly. On the slightly darker side of this technology, phones and smart speakers have become hot mics. If you’re talking about how hungry you are, your phone or speaker hears you, then sends you an email or pops up an ad on your navigation system for the nearest restaurant. Is that good? I’m not sold on it yet—it feels a little invasive. Like it or not, the way we get our data is changing because of AI.

 

Embrace It or Pump the Brakes?

For some, yeah you know who I’m talking about (off-gridders), AI is an invasive and non-voluntary technology. I wouldn’t be surprised if you had to interact with AI at least once in your day. We can’t get around some of those moments. What about interacting with AI on purpose? Do you automate your home? Surf the web from your fridge door? Is AI really on track to make life easier for us humans? It’s still up for debate, and I’d love to hear your response in the comments.

Everyone take a deep breath and calm down. The likelihood of a robot taking over your job any time soon is very low. Yes, artificial intelligence (AI) is a growing trend, and machine learning has improved by leaps and bounds. However, the information technology career field is fairly safe, and if anything, AI/machine learning will only make things better for us in the future. Still, a few IT jobs have already felt the impact of AI, and I want to cover those here. Take this with a grain of salt, since AI/machine learning technology is fairly young and a lot of the news out there is simply conjecture.

 

Help Desk/IT Support

Think about the last time you called a support desk. Did your call get answered by a human or a robot? OK, maybe not an actual robot (that would be awesome), but an interactive voice response (IVR) system. How annoying is that? How often do we just start yelling, “Representative... representative... REPRESENTATIVE!” It can take several rounds of IVR menus before we reach a human who can help us out. This is all too often the situation when we call support or the help desk. Unfortunately (for help desk specialists), AI is only making IVR more efficient. AI enhances the capability of the IVR system to better understand and process human interaction over the phone. With IVR systems configured for automatic speech recognition (ASR), AI essentially eliminates the need for input via the keypad, as it can more intelligently process the caller’s spoken responses.

 

Data Center Admins

This one hurts because I’ve done a lot of data center admin work and still do some today. The idea of machine learning or AI replacing this job hits close to home. The truth is automation tools are already replacing common tasks data center admins used to carry out daily. Monitoring tools have used AI to improve data analytics pulled from system scans. Back when I started in IT, the general ratio was around one hundred systems to one administrator. With advances in monitoring, virtualization and AI, it’s now closer to one thousand systems to every administrator. While this is great for organizations looking to cut down on OPEX, it’s not great news for administrators.

 

Adapt or Die

Yeah, maybe that’s a little exaggerated, but it’s not a bad way to think. If you don’t see the technological advances as a hint to adapt, your career likely will die. AI and machine learning are hot topics right now, and there’s no better time to start learning the ins and outs of it and how you can adapt to work with AI instead of becoming obsolete. Understanding how to bridge the gap between humanity and technology can serve you well in the future. One way you can adapt is by learning programming, thereby gaining a better understanding of automation, AI, and machine learning. Maintain your creativity by implementing new ideas and using AI and machine learning to your advantage.

 

In the end, I don’t believe AI or machine learning will eliminate the need for a human workforce in IT. The human brain remains far more adept at judgment, context, and creative problem-solving than any robot or machine. It can adapt, learn, and connect with other humans in ways machines can’t. There might be an influx of jobs being taken over by AI, but we’ll always need humans to program the software and design the machines.

Perhaps the title of this post should be “To Hyperconverge or Not to Hyperconverge,” since that’s the question at hand. Understanding whether HCI is a good idea requires a hard look at metrics and budget. If your data center has been running a traditional server, storage, and network architecture for a while, it should be easy to gather the metrics on power, space, and cooling. With these metrics, you can compare the total cost of ownership (TCO) of running a traditional architecture versus an HCI architecture.

 

Start by getting an accurate comparison. While having a TCO baseline will help with the comparison, you need to consider a few other items before making a final decision.
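A back-of-the-envelope version of that comparison might look like the sketch below. Every dollar figure is a made-up placeholder; the point is the shape of the math, so plug in your own metered power, cooling, support, and space numbers.

    # Compare three-year TCO of two architectures from basic operating metrics.
    def three_year_tco(hardware, annual_power, annual_cooling, annual_support,
                       rack_units, per_ru_annual_cost):
        annual_space = rack_units * per_ru_annual_cost
        return hardware + 3 * (annual_power + annual_cooling + annual_support + annual_space)

    traditional = three_year_tco(250_000, 18_000, 12_000, 30_000, rack_units=42, per_ru_annual_cost=400)
    hci = three_year_tco(300_000, 8_000, 5_000, 25_000, rack_units=8, per_ru_annual_cost=400)
    print(f"traditional: ${traditional:,}  hci: ${hci:,}")  # traditional: $480,400  hci: $423,600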

 

Application Workload

Forecasting the application workload(s) running in your current environment is important when considering HCI over traditional infrastructure. Current application workloads alone aren’t a good indicator of what your infrastructure will need down the road. A good rule of thumb is to forecast three years out, which gives you a game plan for upgrading or scaling out your current configuration. If you’re only running a few workloads and they aren’t forecast to be out of space within three years, you probably don’t need to upgrade to HCI. Re-evaluate HCI again in two years while looking three years ahead.
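The three-year rule of thumb reduces to simple compound-growth arithmetic. In the sketch below, the current usage, capacity, and growth rate are assumed figures; swap in your own.

    # Will the workload outgrow installed capacity within three years?
    def outgrows_in(used_tb, capacity_tb, annual_growth, years=3):
        projected = used_tb * (1 + annual_growth) ** years
        return projected > capacity_tb, projected

    needs_upgrade, projected = outgrows_in(used_tb=40, capacity_tb=80, annual_growth=0.25)
    print(f"projected: {projected:.1f} TB -> upgrade needed: {needs_upgrade}")
    # projected: 78.1 TB -> upgrade needed: False, so re-evaluate in two years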

 

Time to Scale

Having an accurate three-year workload forecast will help you understand when you need to scale out. If you need to scale out now, I suggest going all in on HCI. Why all in? Because scaling with HCI is a piece of cake. It doesn’t require a forklift upgrade, and you can scale on demand. With most HCI vendors, you scale out simply by adding a new block or node to your existing HCI deployment. You can add more than one block or node at a time, making the choice to go HCI very attractive. Scaling out a traditional infrastructure costs more and takes far longer.

 

Staff Experience

You can’t afford to overlook this area in the decision process. Moving from traditional infrastructure to HCI can present a learning curve for some. In traditional infrastructure, most technologies are separate and require a separate team to manage them. In some cases where security is a big requirement, there’s a separation of duties in which different admins manage network, compute, and storage. Upgrading to an HCI deployment gives you the benefit of having all components in one. Any move to a new technology involves a learning curve, and this couldn’t be truer with HCI. The interfaces differ, and the way the technologies are managed differs. Some HCI vendors offer a proprietary hypervisor, which will require retraining for staff used to the old industry-standard hypervisor.

 

Make an Informed Decision

If you’re determined to transition to a new HCI deployment, make sure you consider all the previous items. Too often, decision makers attend vendor conferences, get excited about new technology, buy in, and leave it to their IT staff to figure out. Ensure there’s open communication, and consider staff experience, TCO, and forecasted application workload. If you consider those things, you can feel good about making an informed decision.

Hyperconverged infrastructure has become a widely adopted approach to data center architecture. With so many moving parts involved, it can be difficult to keep up with the speed at which everything evolves. Your organization may require you to be up to speed with the latest and greatest technologies, especially when it decides to adopt a brand-new hyperconverged infrastructure. The question then arises: how do you become a hyperconverged infrastructure expert? There are many ways, depending on the technology or vendor your organization chooses. While there aren’t many HCI-specific certifications yet, there are many certification tracks you can pursue to build your HCI expertise.

 

Storage Certifications

There are many great storage options for certification depending on which vendor your organization uses for storage. If you’re already proficient with storage, you’re a step ahead. The storage vendor isn’t nearly as important as the storage technologies and concepts. Storage networking is important and getting trained on its concepts will be helpful in your quest to become an HCI expert.

 

Networking Certifications

There aren’t many certifications more important than networking. I strongly believe everyone in IT should have at least an entry-level networking certification. Networking is the lifeblood of the data center. Storage networking also exists in the data center, and gaining a certification or training in networking will help build your expert status.

 

Virtualization Certifications

Building virtual machines has become a daily occurrence, and if you’re in IT, it’s become necessary to understand virtualization technology. Regardless of the virtualization vendor of choice, having a solid foundational knowledge of virtualization will be key in becoming an HCI expert. Most HCI solutions use a specific vendor for the virtualization piece of the puzzle, but some HCI vendors have proprietary hypervisors built in to their products. Find a virtualization vendor with a good certification and training roadmap to learn the ins and outs of virtualization. When it comes to HCI, you’ll need it.

 

HCI Training

If you already have a good understanding of all the technologies listed above, you might be better suited to taking a training class or going after an HCI-specific certification. Most HCI vendors offer training on their platforms to bring you and your team up to speed and help build a foundational knowledge base. Classes are offered through various authorized training centers worldwide. Some vendors offer HCI certifications—while there are currently very few, I believe this will change over time. Do a web search for HCI training and see what comes back. There are many options to choose from depending on your level of HCI experience thus far.

 

Hands-on Experience

I saved the best for last, as you can’t get better training than on-the-job training. Learning as you go is the best route to becoming an HCI expert. Granted, certifications help validate your experience, and training helps you dive deeper, but hands-on experience is second-to-none. Making mistakes, learning from your mistakes, and documenting everything you do is the fastest way to becoming an expert in any field in my opinion. Unfortunately, not everyone can learn on the job, as most organizations cannot afford to have a production system go down, or have admins making changes on the fly without a prior change board approval. In this case, find an opportunity to build a sandbox or use an existing one to build things and tear them down, break things, and fix things. Doing this will help you become the HCI expert your organization desperately needs.

Running a traditional data center infrastructure for years can put your company in a rut, especially when it’s time to pick a new solution. When electing to trade out your traditional infrastructure for a sleek new hyperconverged infrastructure (HCI), the paradigm shift can be difficult. So many questions arise during selection, and while many HCI vendors are willing to answer them, that doesn’t necessarily make the choice easier. When deciding to switch to an HCI solution, it’s important to take stock of your current situation and assess why you’re searching for a new solution. Here are some things to think about when choosing an HCI solution.

 

Do You Have Experienced Staff?

Having staff on hand to manage an HCI solution should be the main concern when choosing a solution. Traditional server infrastructures generally rely on several siloed teams to manage different technologies. When there are separate storage, networking, server, and security personnel, it’s important to decide whether an all-in-one HCI solution is a possibility. Is there enough time to get your team spun up on the latest HCI solution and all the nuances it brings? Take a good look at your staff and take stock of their skill sets and level of experience before diving headfirst into a brand-new HCI solution.

 

Support, Support, Support

Support is only considered expensive until it isn’t. When your new HCI solution isn’t working as planned or your team is having trouble configuring something, a support call can come in very handy. If the HCI solution you’re looking into doesn’t have the level of support to meet your needs, forget about it. It does no good to pay for support you can’t rely on when it all comes crashing down, which it can from time to time. Ensure the vendor’s support provides coverage for both hardware and software and offers support coverage hours suited to your needs. If you’re a government entity, does the vendor provide a U.S.-citizen-only support team? These are all important questions to ask of prospective vendors.

 

How Will the HCI Solution Be Used?

First things first: how will you be using the HCI solution? If your plan is to employ a new HCI solution to host your organization’s new VDI implementation, specific questions need to be addressed. What are the configuration maximums for CPU and memory, and how much flash storage can be configured? VDI is a very resource-intensive workload, and going into the deployment without the right amount of resources in the new HCI solution can put your organization in a bad spot. If the idea of HCI procurement is coming specifically from an SMB/ROBO situation, it’s extremely important to get the sizing right and ensure the process of scaling out is simple and fast.
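A rough sizing sketch for a VDI workload might look like the following. The per-desktop figures, overcommit ratio, and node specs are illustrative assumptions; check them against your vendor’s configuration maximums.

    # Estimate the HCI nodes needed for a VDI deployment.
    import math

    desktops = 500
    vcpu_per_vm = 2
    ram_gb_per_vm = 8
    cpu_overcommit = 4      # vCPUs per physical core
    node_cores = 64
    node_ram_gb = 768

    nodes_for_cpu = math.ceil(desktops * vcpu_per_vm / (node_cores * cpu_overcommit))
    nodes_for_ram = math.ceil(desktops * ram_gb_per_vm / node_ram_gb)
    nodes = max(nodes_for_cpu, nodes_for_ram) + 1  # one extra node for failover
    print(f"nodes needed: {nodes}")  # -> 7 (RAM is the constraint here)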

 

Don't Get Pressured Into Choosing

Your decision needs to come when your organization is ready, not on a vendor’s schedule or under pressure to commit. Purchasing a new HCI solution is not a small decision, and it can come with some sticker shock, so it’s important to choose wisely and choose what’s right for your organization. Take stock of the items I listed above before fielding all the vendor calls, which will flood your phones once word gets out that you’re looking for a new HCI solution.

So much in IT today is focused on the enterprise. At times, smaller organizations get left out of big enterprise data center conversations. Enterprise tools are far too expensive for them and usually offer way more than they need to operate. Why pay for data center technologies beyond what your organization needs? Unfortunately for some SMBs, this happens, and the ROI on the equipment they purchase never reaches its full potential. Traditional data center infrastructure hardware and software can be complicated for an SMB to operate alone, creating further costs for professional services for configuration and deployment. This was all very true until the advent of hyperconverged infrastructure (HCI). So, is HCI a good fit for the SMB? Yes: it’s very beneficial and well suited for most SMBs. Here’s why:

 

1. The SMB is small- to medium-sized – Large enterprise solutions don’t suit SMBs. Aside from the issue of over-provisioning and the sheer cost of running an enterprise data center solution, SMBs just don't need those solutions. If the need for growth arises, with HCI, an organization can easily scale out according to their workloads.

 

2. SMBs can’t afford complexity – A traditional data center infrastructure usually involves some separation of duties and silos. With so many moving parts and different management interfaces, it can become complex to manage it all without stepping on anyone’s toes. HCI offers an all-in-one solution—storage, compute, and networking all contained in a single chassis. HCI avoids the need for an SMB to employ a networking team, virtualization team, storage team, and more.

 

3. Time to market is speedy – SMBs don’t need to take months to procure, configure, and deploy a large-scale solution. Larger corporations might require a long procurement schedule; the SMB usually doesn’t. HCI helps them get to market quickly. HCI is as close to a plug-and-play data center as you can get. In some cases, depending on the vendor chosen, time to market can be down to minutes.

 

4. Agility, flexibility, all of the above – SMBs need to be more agile and don’t want to carry all the overhead required to run a full data center. Power, space, and cooling can be expensive when it comes to large enterprise systems. Space itself can be a very expensive commodity. Depending on the SMB’s needs, their HCI system can be trimmed down to a single rack or even a half rack. HCI is also agile in nature due to the ability to scale on demand. If workloads spike overnight, simply add another block or node to your existing HCI deployment to bring you the performance your workloads require.

 

5. Don’t rely on the big players – Licensing from big-name storage and compute vendors can come at a significant cost. Some HCI vendors offer a proprietary, built-in hypervisor that’s included in the cost and easier to manage than an enterprise license agreement. Management software is also built in to many HCI vendors’ solutions.

 

HCI has given the SMB more choices when it comes to building out a data center. In the past, an SMB had to purchase licensing and hardware generally built for the large enterprise. Now they can purchase a less expensive solution with HCI. HCI offers agility, quick time to market, cost savings, and reduced complexity. These can all be pain points for an SMB, which can be solved by implementing an HCI solution. If you work for an SMB, have you found this to be true? Does HCI solve many of these problems?

Hyperconverged infrastructure is adopted more and more every year by companies both big and small. For small- to medium-sized businesses (SMBs) and remote branch offices, HCI provides an all-in-one solution that eliminates the need to power, house, and cool multiple different systems. For bigger companies at the enterprise level, HCI offers a flexible means of scaling out as workload demand increases. There are many other benefits to adopting a hyperconverged approach to data center infrastructure. Here are what I consider the top five.

 

Flexible Scalability

The most attractive benefit of HCI is the ability to scale out your environment as your workload demand increases. Every application or data center has different workloads. Some demand a lot of resources while others don’t need as much. In the case of HCI, as workload demands increase, there’s no need to do a forklift upgrade. You simply add a new node or block to your current infrastructure and configure it as needed.

 

A Lower Total Cost of Ownership

Total cost of ownership (TCO) isn’t usually something the IT community really cares about. Generally, we get what we’re given, and we have to make it work effectively. Decision makers, however, find TCO extremely important, and this metric plays a major role in the procurement process. A lower TCO is good for both the decision makers and the engineers in the trenches: the decision makers see a lower-cost solution overall, and the engineers get to work with equipment and software that isn’t completely budget-constrained. Time to market is much quicker, fewer administrative staff are required, and there are cost savings when it comes to power, space, and cooling.

 

Less Administrative Overhead

When there’s less to manage, there’s less administrative overhead. Lowering administrative overhead doesn’t necessarily mean cutting down your staff; it means making your staff more effective with less work. In a traditional data center infrastructure, there are many moving parts, which require many different teams or silos. Consolidating all the moving parts into one chassis cuts down on administrative overhead and segregated teams.

 

Avoid Overprovisioning

HCI essentially eliminates the possibility of overprovisioning. Traditional infrastructures require a big purchase upfront, based on an analyst’s projections for the workload three to five years out. Most of those workload projections end up being too generous, and the organization is left with expensive hardware that never gets near capacity. HCI allows you to buy only what you currently need, with maybe a 15% buffer, and then flexibly scale out as the workload increases. By eliminating the need for long-range workload projection and large upfront hardware/software purchases, TCO decreases and flexibility increases.

 

Takes the Complexity Out of New Deployments

Traditional infrastructure deployments can be a very complex process. As discussed earlier, the workload projections move forward to procurement of hardware and software, which then need to be racked and stacked, cabled, and configured. This deployment approach requires multiple people, server lifts, and rack space. HCI deployments make it easier for less experienced administrators to unbox, rack, and configure an entire deployment. The same model applies when scaling: unbox and bolt on a new block, then configure it. There’s no need for the storage team to provision new storage or the networking team to add more cables. It’s an all-in-one solution and simple to deploy.

 

There are numerous benefits to adopting HCI over traditional infrastructure. I’ve only listed five, which barely scratch the surface. For the benefit of those of us who haven’t deployed HCI before, what are some benefits you’ve realized from your deployment that aren’t on this list?

Today’s data center is full of moving parts. If your data center is hosted on-premises, there’s a lot to do day in and day out to make sure everything is functioning as planned. If your data center is hosted in the cloud as a service, there are still things you need to do, but far fewer compared to an on-premises data center. Each data center carries different workloads, but there’s a set of common technologies that need to be monitored. When VM performance isn’t monitored, you can miss an overloaded CPU or maxed-out memory. When the right enterprise monitoring tools are in place, it’s easier to manage the workloads and monitor their performance. The following is a list of tools I believe every data center should have.

 

Network Monitoring Tools

Networking is vital to the health of any data center. Both internal and external networking play a key role in the day-to-day usage of the data center. Without networking, your infrastructure goes nowhere. Installing a network monitoring tool that tracks bandwidth usage, networking metrics, and more allows a more proactive or preventative approach to solving networking issues. Furthermore, an IP address management tool that stores all the available IP addresses and ranges, and dynamically updates as addresses get handed out, will go a long way toward staying organized.
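Even the simplest proactive check is just a reachability sweep run on a schedule. The sketch below probes a handful of assumed addresses on an assumed port; real tools go much further, tracking bandwidth, errors, and response times, but the alert-on-change idea is the same.

    # Tiny reachability sweep across part of a subnet.
    import socket

    def is_reachable(host, port=22, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for last_octet in range(1, 6):
        host = f"192.168.1.{last_octet}"
        print(f"{host}: {'up' if is_reachable(host) else 'DOWN'}")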

 

Virtual Machine Health/Performance Monitoring Tools

Virtualization has taken over the data center landscape. I don’t know of a data center that doesn’t have some element of the software-defined data center in use. Industry-leading hypervisor vendors have created so many tools and advanced options for virtualization that go above and beyond server virtualization. With so many advanced features and software in place, it’s important to have a tool that monitors not just your VMs, but your entire virtualization stack. Software-defined networking (SDN) has become popular, and while that might end up falling under the networking section, most of the SDN configurations can be handled directly from the virtual administration console. Find a tool that will provide more metrics than you think you need; it may turn out that you scale out and then require them at some point. A VM monitoring tool can catch issues like CPU contention, lack of VM memory, resource contention, and more.

 

Storage Monitoring Tools

You can’t have a data center without some form of storage, whether it be slow disks, fast disks, or a combination of both. Implementing a storage monitoring tool will help the administrator catch issues that threaten business continuity, such as a dropped storage path, a lost storage mapping, loss of connectivity to a specific disk shelf, a bad disk, or a bad motherboard in a controller head unit. Data is king in today’s data center, so it’s imperative to have a storage monitoring tool in place to catch anomalies or issues that might hurt business continuity or compromise data integrity.

Environment Monitoring Tools          

Last, but definitely not least, a data center environment monitoring tool can keep you from losing hardware and data altogether. This type of tool protects a data center against physical and environmental threats. A good environment monitoring tool will alert you to the presence of excess moisture in the room, or to an extreme drop or spike in temperature. These tools usually pair video for visual monitoring with sensors installed in the data center room to track environmental factors. Water can do serious damage in your data center, so great monitoring tools include sensors near the floor to detect moisture and water build-up.

 

You can’t be too careful when it comes to protecting your enterprise data center. Monitoring tools like the ones listed above are a good place to start. Check your data center against my list and see if it matches up. Today, there are many tools that encompass all these areas in one package, making it convenient for the administrator to manage it all from a single screen.

Small-to-medium-sized businesses (SMBs) tend to get overlooked when it comes to solutions that fit their needs for infrastructure monitoring. There are many tools that exist today that cater to the enterprise, where there’s a much larger architecture with many moving parts. The enterprise data center requires solutions for monitoring all the systems within it that an SMB might not have. The SMB or remote office/branch office (ROBO) is a much smaller operation, sometimes using a few servers and some smaller networking gear. There may be a storage solution in the back-end, but it’s more likely that the data for their systems is stored on a local drive within one or all of their servers.

 

It seems unfair for the SMB to be ignored as developers typically work to create solutions that will fit better in an enterprise architecture than in a smaller architecture, but that’s the nature of the beast. There’s more money in enterprise solutions, with enterprise licensing agreements (ELAs) reaching into the millions of dollars for some clients. It would make sense that enterprise software is more readily available than SMB software, but that doesn’t mean there aren’t solutions out there for the SMB.

Find What’s Right for YOU

Don’t pick a solution with more than what you need for the infrastructure within your organization. If your organization consists of a single office with three physical servers, a modem/router, and a direct-attached storage (DAS) solution, you don’t need an enterprise solution to monitor your systems. There are many less expensive or open-source server monitoring tools out there with plenty of documentation to help you get through installation and configuration. Enterprise solutions aren’t always the answer just because they’re “enterprise solutions.” Bigger isn’t always better. If a solution with a support agreement is more in line with your expectations, quite a few providers offer an SMB-class monitoring solution.

 

Don’t Overpay

Software salespeople are all about selling, selling, selling. Many times, salespeople are sent along on a client call with a solutions engineer (SE) who has more technical experience than the salesperson. Focus more attention on the SE and less on the salesperson. There’s no need to shell out a ton of money for an ELA that’s way more than what you need for your SMB infrastructure. Quality is often associated with cost, and that’s just plain false. When it comes to choosing a monitoring tool for your SMB, “quality over quantity” should be your mantra. If you don’t require 24/7/365 monitoring and platinum-level SLAs with two-minute response times, don’t buy them. Find a tool that fits your SMB budget, and don’t feel slighted because you didn’t buy the shinier, more expensive enterprise solution.

 

Pick a Vendor That Will Work for YOU

Software vendors, especially those that work to develop large enterprise monitoring solutions, don’t always have the best interests of the SMB in mind when building a tool. By focusing your search on vendors that scale to the SMB market, you’ll find that the sales process and the support will be tailored to the needs of your organization. With vendors building scalable tools, customization becomes a key selling point. Vendors can cater to the needs and requirements of the customer, not the market.

 

Peer Pressure is Real

Don’t take calls from software vendors that cater only to enterprise-scale monitoring needs. Nothing against enterprise monitoring solutions—they’re needed for the enterprise. However, focusing on the chatter and the “latest and greatest” marketing will only make your SMB feel smaller. It isn’t a competition. Pick what works for your SMB. Don’t overpay. Find a vendor that will support you. By putting all these tips in place, you can find a monitoring tool for your SMB that won’t make you feel like you had to settle.

Monitoring tools are vital to any infrastructure. They analyze and give feedback on what’s going on in your data center. If there are anomalies in traffic, a network monitoring tool will catch them and alert the administrators. When disk space is getting full on a critical server, a server monitoring tool alerts the server administrators that they need to add space. Some tools are only network tools or only systems tools, and these may not always provide all the analysis you need. There are broader monitoring tools that can cover everything happening within your environment.

 

In searching for a monitoring tool that fits the needs of your organization, it can be difficult to find one that’s the right size for your environment. Not all monitoring tools are one-size-fits-all. If you’re searching for a network monitoring tool, you don’t need to purchase one that covers server performance, storage metrics, and more. There are several things to consider when choosing a monitoring tool that fits your environment.

 

Run an Analysis on Your Environment

The first order of business when trying to determine which monitoring tool best fits your needs is to analyze your current environment. There are tools on the market today that help map out your network environment and gather key information such as operating systems, IP addresses, and more. Knowing which systems are in your data center, what types of technologies are present, and what application or applications they support will help you decide which tools are the best fit.

 

Define Your Requirements

There may be legal requirements defining what tools need to be present in your environment. Understanding these specific requirements will likely narrow down the list of potential tools that will work for you. If you’re running a Windows environment, there are many built-in tools that may already perform the tasks you need. If your organization is using these built-in tools, it may not be necessary to spend money on another tool to do the same thing.

 

Know Your Budget

Budgetary demands typically drive these decisions for most organizations. Analyzing your budget will help you understand which tools you can afford and will narrow the list down further. Many tools do more than you need, so there’s no reason to spend extra on capabilities outside your requirements and budget.

 

On-prem or Cloud?

When picking a monitoring tool, it’s important to research whether you want an on-premises tool or a cloud-based one. SaaS tools are very flexible and can store the information the tool gathers in the cloud. On the other hand, having an on-premises tool keeps everything in-house and provides a more secure option for data gathered. Choosing an on-prem tool gives you the ability to see your data 24/7/365 and have complete ownership of it. With a SaaS tool, it’s likely you could lose some visibility into how things are operating on a daily basis. Picking the right hosting option should be strictly based on your requirements and comfort with the accessibility of your data.

 

Just Pick One Already

This isn’t meant to be harsh, but spending too much time researching and looking for a tool that fits your needs may put you in a bad position. While you’re trying to choose between the best network monitoring tools, you could be missing out on what’s actually going on inside your systems. Analyze your environment, define your requirements, know your budget, pick a hosting model, and then make your selection. Ensuring the monitoring solution fits the needs of your environment will pay dividends in the end.

 

Many organizations grow each year in business scope and footprint. When I say footprint, it’s not merely the size of the organization, but the number of devices, hardware, and other data center-related items. New technologies creep up every year, and many of those technologies live on data center hardware, servers, networking equipment, and even mobile devices. Keeping track of the systems within your organization’s data center can be tricky. Simply knowing where the device is and if it’s powered on isn’t enough information to get an accurate assessment of the systems' performance and health.

 

Data center monitoring tools provide the administrator(s) with a clear view of what’s in the data center, the health of the systems, and their performance. There are many data center monitoring tools available depending on your needs, including network monitoring, server monitoring, and virtual environment monitoring, and it’s important to consider both the open-source and proprietary tools available.

 

Network Monitoring Tools for Data Centers

 

Networking can get complicated, even for the most seasoned network pros. Depending on the size of the network you operate and maintain, managing it without a dedicated monitoring tool can be overwhelming. Most large organizations will have multiple subnets, VLANs, and devices connected across the network fabric. Deploying a networking tool will go a long way toward understanding which network is which, and whether there are any issues with your networking configuration.

 

An effective networking tool for a data center is more than just a script that pings hosts or devices across the network. A good network tool monitors everything down to the packet. Areas in the network where throughput is crawling will be captured and reported within the GUI or through email (SMTP) alerts. High error rates and slow response times will also be captured and reported. Network administrators can customize the views and reports fed to the GUI to their specifications. If networking is bad or broken, things escalate quickly. The best network monitoring tools can help you avoid this.

 

Data Center Server Monitoring Tools

 

Much of the work that a server or virtual machine monitoring tool does can also be accomplished using a good network monitoring tool. However, there are nuances within server/VM monitoring tools that go above and beyond the work of a network monitoring tool. For example, there are tools designed specifically to monitor your virtual environment.

 

A virtual environment essentially contains the entire data center stack, from storage to networking to compute. Monitoring this entire stack takes more than simple reachability checks and SNMP polling. It’s imperative to deploy a data center monitoring solution that understands things at the hypervisor level, where transactions are brokered between the kernel and the guest OS. You need a tool that does more than tell you the lights are still green on your server. You need a tool that will alert you if your server light turns amber, tell you why it’s amber, and explain how to turn it back to green.

 

Some tools offer automation in their systems monitoring. For instance, if one of your servers is running high on CPU utilization, the tool can migrate that VM to a cluster with more available CPU. That kind of monitoring is helpful, especially when things go wrong in the middle of the night and you’re on call.
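The shape of that auto-remediation logic is simple, as the sketch below shows with stand-in objects in place of a real hypervisor API (for vSphere, for example, the actual calls would come from a library like pyVmomi). The threshold, VM, and cluster objects here are hypothetical placeholders.

    # Suggest a migration for any VM running hot on CPU.
    from dataclasses import dataclass

    CPU_THRESHOLD = 90.0  # percent

    @dataclass
    class VM:
        name: str
        cpu_percent: float

    @dataclass
    class Cluster:
        name: str
        free_cpu_percent: float

    def rebalance(vms, clusters):
        for vm in vms:
            if vm.cpu_percent > CPU_THRESHOLD:
                target = max(clusters, key=lambda c: c.free_cpu_percent)
                print(f"migrate {vm.name} -> {target.name}")  # a real tool calls the hypervisor API here

    rebalance([VM("db01", 97.0), VM("web01", 35.0)],
              [Cluster("cl-a", 10.0), Cluster("cl-b", 55.0)])  # -> migrate db01 -> cl-b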

 

Application Monitoring Tools

 

Applications are the lifeblood of most organizations. Depending on their customers, organizations may have to manage and monitor several different applications. Having a solid application performance monitoring (APM) tool in place is crucial to ensuring your applications run smoothly and your end users stay happy.

 

APM tools allow administrators to see up-to-date information on application usage, performance metrics, and any potential issues that may arise. If an application begins to deliver a poor end-user experience, you want to know about it before the end user does, with as much lead time as possible. APM tools track everything from client CPU utilization to bandwidth consumption and many other performance metrics. If you’re managing multiple applications in your environment, don’t leave out an APM tool—it might save you when you need it most.

 

Finding the Best Data Center Monitoring Tools for Your Needs

 

Ensure that you have one or all of these types of tools in your data center. It saves you time and money in the long run. Having a clear view of all aspects of your data center and their performance and health helps build confidence in the reliability of your systems and applications.

Cost plays a factor in most IT decisions. Whether the costs are hardware- or software-related, understanding how a tool’s cost will affect the bottom line is important. Typically, it’s the engineer’s or administrator’s task to research tools and/or hardware that fit the organization’s needs both fiscally and technologically. Multiple options are available to organizations, from open-source tools to proprietary, off-the-shelf tools for purchase. Many organizations prefer to either build their own tools or purchase off-the-shelf solutions that have been tried and tested. However, open-source software has become increasingly popular and has been adopted by many organizations in both the public and private sectors. Open-source software is built, maintained, and updated by a community of individuals on the internet, and it can change on the fly. This raises the question: is open-source software suitable for the enterprise? There are pros and cons that can make that decision easier.

 

The Pros of Open-source Software

 

Open-source software is cost-effective. Most open-source software is free to use. In cases where third-party products are involved, such as plug-ins, there may be a small cost incurred. However, open-source software is meant for anyone to download and do with as they please, within the limits of its licensing. With budgets tight for many, open-source could be the solution that helps stretch your IT dollars.

 

Constant improvement is a hallmark of open-source software. The idea is that the software can and will be improved as users find flaws and room for improvement. Open-source software is just that: open, and anyone can update it or improve its usage. A user who finds a bug can fix it and post the updated iteration of the software. Most large-scale enterprise software solutions, by contrast, are bound by major release schedules to fix bugs and get the latest and greatest out to their customers.

 

The Cons of Open-source Software

 

Open-source software might not stick around. There’s a possibility that the open-source software your organization has bet on simply goes away. When the community that updates the software and writes changes to the source code closes up shop, you’re the one left maintaining it and writing any changes pertinent to your organization. The possibility of this happening makes open-source a vulnerable choice for your organization.

 

Support isn’t always reliable. When there’s an issue with your software or tool, it’s nice to be able to turn to support for help. With open-source software, that isn’t always guaranteed, and if there is support, it rarely comes with the kind of SLAs you’d expect with a proprietary enterprise-class software suite.

 

Security becomes a major issue. Anyone can be hacked. However, the risk is far lower when it comes to proprietary software. Because open-source software allows anyone to update the code, the risk of downloading malicious code is much higher. One source referred to using open-source software as “eating from a dirty fork.” When you reach in the drawer for a clean fork, you could be pulling out a dirty utensil. That analogy is right on the money.

 

The Verdict

 

Swim at your own risk. Much like the sign you see at a swimming pool when no lifeguard is present, you swim at your own risk. If you’re planning on downloading and installing an open-source software package, do your best to scan it and be prepared to accept the risk of using it. There are pros and cons, and it’s important to weigh them against your goals before deciding whether or not to use open-source.

