Geek Speak Blogs


The number one question people have been asking as they move to remote work is not “How do I set up my Wi-Fi?” or “How do I make my kitchen table more comfortable to sit at?” No, the first thing people are saying is, “I feel so disconnected.” Or, “I feel like I’m not good at this.” Or simply, “I don’t like this.” Put plainly, people need to know how they’re going to establish a feeling of community and connection.

Read more...


The right tools and team can make scaling up to handle disasters fast and efficient. Here are some of my experiences ramping up a health care system's network to support environmental changes for doctors, nurses, and other professionals during a pandemic.

Read more...


So many things at SolarWinds arise out of a desire to share our experiences and offer lessons we’ve learned. In this case, our goal is to help you wrap your head around the new reality of you (and your users) working remotely when it wasn’t part of your work habits before.

Read more...


This week's Actuator comes to you from my home office, where I’ll be located for the foreseeable future. We’ve successfully completed the first week of a three-week school closure. Last night we watched the documentary “Contagion,” so I fully expect this situation to last longer than just three weeks.

Read more...


Here’s an interesting article by my colleague Brandon Shopp, who offers tips for monitoring and backing up cloud-based applications like Office 365.

Read more...


As part of a series of blog posts helping organisations focus on the important stuff during this coronavirus crisis, this post discusses why focusing your monitoring on the important metrics is essential. Having visibility of your normal operational baselines helps you understand what, how, where, and why these metrics change, and whether your IT infrastructure and the services it delivers are coping.

With countries on lockdown and businesses telling staff to work from home, IT must be capable of supporting a dramatic shift in how services are consumed by users. I hope this article helps you gain better understanding and knowledge of maintaining an operational business and organisation during this dramatic time.

Read more...


I hope this edition of the Actuator finds you and yours in good health and spirits today. The world has gone slightly cray-cray these past few weeks. It seems like yesterday I found myself elbow deep in fried pork products wondering if I would be able to fly home before the borders were closed. (Spoiler alert: I did).

Read more...


Here’s an interesting article by my colleague Mav Turner about the increasing use of automation, and the benefits and changing skills needed to be successful.

Read more...


Here’s an interesting article by my colleague Brandon Shopp with specific suggestions on improving access rights management and how to improve security without creating too much friction.

Read more...


After two weeks away at events it is good to be back home, safe and healthy. I hope the same is true for anyone reading this post. Here's hoping the second half of the year sees a return to normalcy, whatever that may mean for you.

Read more...


Here’s an interesting article by my colleague Brandon Shopp reviewing containers, their monitoring challenges, and suggestions on tools to manage them effectively.

 

Read more...


My Journey in IT

<Memory fade in…> It’s 2010 and I’m at a furniture store doing office administration. I love the people I work with and I find satisfaction in my job daily by solving problems and helping people. I have no idea how much my life is going to change in one year. I’m 23, have a newborn, and have no idea what I want to do with my life. In late 2010, this company—where I had ambitions of rising through the ranks and had, in fact, just scheduled an interview for a position I hoped to be promoted into—announced bankruptcy.

What the heck? The store closed its doors for the last time in January of 2011, and I had no idea what I was going to do. I hadn’t quite figured out life yet or planned for the future.

One month on unemployment later, I get a call. Brace yourselves for some nepotism, y’all! My brother-in-law worked in sales at a tech startup and wanted me to talk to an engineer about doing some work with SolarWinds software. He was confident in my abilities and encouraged me to give it a chance. I had numerous concerns, but what did I have to lose? That engineer took a chance on me, and I took a chance on both a startup and a new-to-me industry. I ended up working there for the next nine years. The leap into tech (with no previous experience other than I really love video games and computers, mind you) was both terrifying and rewarding. By the second or third week, I was teaching others how to do admin tasks in NPM. How had I come to learn such things so quickly? You can probably guess—THWACK. This community saved me! I asked (and eventually began answering) loads of questions here and learned along the way. Within about six months, I was named a THWACK MVP. Let me tell you, it’s still one of my favorite and proudest accomplishments.

Flash forward a few years and the “startup” has grown exponentially. By now, you may have guessed the startup I refer to is Loop1 Systems, Inc. I was leading a team of engineers, each with years of experience in various sections of IT, in optimizing and assisting with SolarWinds environments. I’ve spent years working in hundreds of environments, large and small, in many disparate industries, helping boots-on-the-ground IT pros optimize their monitoring. Every time I heard someone say they were able to get quality, actionable data which saved the day—or simply saved them 10 minutes—I felt excited. If I was involved in the process, I felt satisfaction and pride in whatever small way I contributed to success; if I wasn’t involved, I liked to give kudos to those who made it possible and then sought to understand the entirety of the situation to carry the knowledge forward.

Over those nine years, my desire to learn and grow was never satiated, and even now I continue to expand my repertoire by immersing myself in tech every day.

Which sort of explains why I pursued becoming a Head Geek, but not completely.

News Flash: I Didn’t Know I Wanted to Be a Head Geek.

Why not? The answer’s silly. Even though I had personally met and spoken to many of the Head Geeks during their tenures, it didn’t seem like a “real” job. To me, the Head Geek role is almost mythical and certainly legendary. A job where you get paid to visit with people all day, share your experienced opinions, and hang out on social media? It seemed unreal and out of reach.

I started talking to current and former Head Geeks and a whole host of other people I’m proud to call my support system (family, friends, coworkers, etc.) about the opportunity. Only about half of those people are even in IT in any capacity, so it took a LOT of explaining on my part to solicit advice. I received advice and encouragement in many forms from all the wonderful people I know. My support system reminded me of all I’ve accomplished thus far and helped bring me back around to the notion that I can do anything I put my mind to. The interview and application process, by nature, requires you to rehash all your accomplishments, and reminds you to think about your failures as well and what you learned from them. During this process, I came to realize Head Geeks aren’t just storytellers, bloggers, and personalities but people who get to use their experiences and knowledge to effect change in IT. Whether it be simply vocalizing a different perspective, sharing opinions and ideas with the world, or opening a dialogue with the various communities in IT, Head Geeks have a platform to tell the stories of the IT pro. That is powerful in a way that speaks to me as an individual: people sharing experiences, both real and theoretical, in a meaningful way that makes IT make sense to anyone. I want to do that.

I’m still blown away that I get to claim this role now. Me! Someone who still feels like she doesn’t know enough!

I’m beyond excited to transition from my role as a SolarWinds fangirl/THWACK MVP to a real-life Head Geek. I endeavor to continue to learn and grow, to share knowledge, and to bring all my experience working in hundreds of different environments to the fore. I get to continue being excited when monitoring saves the day and the bacon!

My Advice

Never stop learning. Tech is constantly evolving and revolutionizing the ideas of what we think tech is. Drive yourself forward and look for help along the way. This community is a testament to what we can do together if we’re kind and helpful to each other. Occasionally, look back at what you’ve achieved thus far. Remember your successes, but also your failures and how you grew from them. The path behind you can remind you what you’re capable of and give you the confidence to pursue something meaningful to you. Go for it!

That’s been my journey through IT so far.  I would love to hear your experiences and stories as well.  Share them in the comments below so we can all celebrate where we are today and what it took to get here.

Catch you around the virtual water cooler! 😊


Here’s an interesting article by my colleague Mav Turner, who offers three steps to improve cloud security, from creating policies to improving visibility and automation.

Read more...


This edition of the Actuator comes to you from my kitchen, where I'm enjoying some time at home before I hit the road. I'll be at RSA next week, then Darmstadt, Germany the following week. And then I head to Seattle for the Microsoft MVP Summit. This is all my way of saying future editions of the Actuator may be delayed. I'll do my best, but I hope you understand.

As always, here's a bunch of links I hope you will find useful. Enjoy!

It doesn’t matter if China hacked Equifax

No, it doesn't, because the evidence suggests China was but one of many entities that helped themselves to the data Equifax was negligent in guarding.

Data centers generate the same amount of carbon emissions as global airlines

Machine learning and bitcoin mining are large consumers of power in any data center. This is why Microsoft has announced they'll look to be carbon neutral as soon as possible.

Delta hopes to be the first carbon neutral airline

On the heels of Microsoft's announcement, seeing this from Delta gives me hope many other companies will take action, not just issue press releases.

Apple’s Mac computers now outpace Windows in malware and virus

Nothing is secure. Stay safe out there.

Over 500 Chrome Extensions Secretly Uploaded Private Data

Everything is terrible.

Judge temporarily halts work on JEDI contract until court can hear AWS protest

This is going to get ugly to watch. You stay right there, I'll go grab the popcorn.

How to Add “Move to” or “Copy to” to Windows 10’s Context Menu

I didn't know I needed this until now, and now I'm left wondering how I've lived so long without this in my life.

Our new Sunday morning ritual is walking through Forest Park. Each week we seem to find something new to enjoy.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Brandon Shopp about DoD’s not-so-secret weapon against cyberthreats. DISA has created technical guidelines that evolve to help keep ahead of threats, and this blog helps demystify DISA STIGs.

The Defense Information Systems Agency (DISA) has a set of security regulations to provide a baseline standard for Department of Defense (DoD) networks, systems, and applications. DISA enforces hundreds of pages of detailed rules IT pros must follow to properly secure or “harden” the government computer infrastructure and systems.

If you’re responsible for a DoD network, these STIGs (Security Technical Implementation Guides) help guide your network management, configuration, and monitoring strategies across access control, operating systems, applications, network devices, and even physical security. DISA releases new STIGs at least once every quarter. This aggressive release schedule is designed to catch as many recently patched vulnerabilities as possible and ensure a secure baseline for the component in operation.

How can a federal IT pro get compliant when so many requirements must be met on a regular basis? The answer is automation.

First, let’s revisit STIG basics. The DoD developed STIGs, or hardening guidelines, for the most common components comprising agency systems. As of this writing, there are nearly 600 STIGs, each of which may comprise hundreds of security checks specific to the component being hardened.

A second challenge, in addition to the cost of meeting STIG requirements, is the sheer number of requirements to be met. Agency systems may be made up of many components, each requiring STIG compliance. Remember, there are nearly 600 different STIGs, some unique to a component, some targeting specific release versions of the component.

Wouldn’t it be great if automation could step in and solve the cost challenge while saving time by building repeatable processes? That’s precisely what automation does.

  • Automated tools for Windows servers let you test STIG compliance on a single instance, test all changes until approved, then push out those changes to other Windows servers via Group Policy Object (GPO) automation. Automated tools for Linux permit a similar outcome: test all changes due to STIG compliance and then push all approved changes as a tested, secure baseline out to other servers
  • Automated network monitoring tools digest system logs in real time, create alerts based on predefined rules, and help meet STIG requirements for Continuous Monitoring (CM) security controls while providing the defense team with actionable response guidance
  • Automated device configuration tools can continuously monitor device configurations for setting changes across geographically dispersed networks, enforcing compliance with security policies, and making configuration backups useful in system restoration efforts after an outage
  • Automation also addresses readability. STIGs are released in XML format—not the most human-readable form for delivering data. Some newer automated STIG compliance tools generate easy-to-read compliance reports useful for both security management and technical support teams
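Since STIGs ship as XCCDF XML, even a few lines of scripting can turn a benchmark into something readable. Here's a minimal sketch in Python (the file name is hypothetical, and the XCCDF 1.1 namespace shown is the one commonly used in DISA benchmarks, so verify it against the file you download) that lists each rule's ID, severity, and title:

import xml.etree.ElementTree as ET

# DISA STIGs are distributed as XCCDF XML documents.
# Verify the namespace against the benchmark file you actually download.
NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

# Hypothetical file name; substitute a real STIG pulled from DISA.
tree = ET.parse("U_Windows_Server_STIG_xccdf.xml")

for group in tree.getroot().findall(".//x:Group", NS):
    rule = group.find("x:Rule", NS)
    if rule is None:
        continue
    title = rule.findtext("x:title", default="", namespaces=NS)
    # Severity maps to CAT I/II/III in DISA terminology
    print(f"{group.get('id')}: [{rule.get('severity', 'unknown')}] {title}")

This is only a readability aid, not a compliance scanner, but it illustrates why machine-readable STIGs lend themselves to the automated tooling described above.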

If you’re a federal IT pro within a DoD agency, you have an increasing number of requirements to satisfy. Let automation take some of the heavy lifting when it comes to compliance, so you and your team can focus on more pressing tasks.

Find the full article on Government Technology Insider.



Some folks find a profession early in their life and that’s what they do until they’re old and gray and ready to retire. Other folks find themselves switching careers mid-way. Me? I’m on the third career in my working life. I know a thing or two about learning something new. Why do I bring this up? Because if you’re in the IT industry, chances are you’ll spend a big chunk of your professional life learning something new. There’s also a good chance you’ll have to sit for an exam or two to prove you’ve learned something new. And for some, that prospect overwhelms. If that’s you, keep reading! In this two-part series, I’m going to share my thoughts, tips, and tools for picking up a new skill.

Be the Captain of Your Own Ship

Sometimes you get told you need to have XYZ certification to qualify for the next pay raise. Sometimes the idea comes from your own self-motivation. Either way, the first step to successful completion is for you to make the commitment to the journey. Even if the idea isn’t your own, your personal commitment to the journey will be critical to its success. We’ve all seen what happens when there isn’t personal commitment. Someone gets assigned something they have no interest in or desire for. Whatever the task, it usually gets done half-heartedly and the results are terrible. You don’t want to be terrible. You want that cert. Make the commitment. It doesn’t have to be flashy or public, but it does have to be authentic to you.

Make a New Plan, Stan...

Once you’ve made the decision to go after your goal, it’s time to make your plan. After all, no captain sets sail without first plotting a course. For certification-chasers, there is usually a blueprint out there with what the certification exam will cover. That’s a good place to start.

Charting your course should include things like:

  • A concrete, measurable goal.

  • A realistic timeline.

  • The steps to get from today to success[i].

Think about what hazards might impede your progress. After all, you don’t want to plot your course right through a reef. Things like:

  • How much time can you realistically devote to studying?

  • What stressors might affect your ability to stay on track?

  • Will your own starting point knowledge-wise make the journey longer or shorter?

Make Like a Penguin

If you’re a Madagascar[ii] fan, you know the penguin credo is “Never swim alone.” It’s great advice for penguins and for IT knowledge-seekers. Making your journey alone is like filling your bag with rocks before you start. It just makes life harder.

There are a ton of great online and real-life IT communities out there. Find one that works for you and get engaged. If your journey is at all like mine, at first you might just be asking questions. I know I asked a zillion questions in the beginning. These days I end up answering more questions than I ask, but I find answering others’ questions helps me solidify the strength of my knowledge. Another option is a formal study group. Study groups help by providing structure, feedback on your progress, and motivation.

Lastly, don’t forget about your friends and family. They might not know the subject matter, but they can make your road smoother by freeing up your time or giving you valuable moral support. Make sure they swim, too. This article has been a little high-level design for a successful certification journey. Stay tuned for the next installment, where we’ll go low-level with some tips for getting to success one day at a time. Until then, remember to keep doing it... just for fun!


[i] https://www.forbes.com/sites/biancamillercole/2019/02/07/how-to-create-and-reach-your-goals-in-4-ste...

[ii] https://www.imdb.com/title/tt0484439/characters/nm0569891


As a database administrator (aka DBA, or Default Blame Acceptor) throughout my career, I’ve worked with a myriad of developers, system administrators, and business users who have all had the same question—why is my query (or application) slow? Many organizations lack a full-time DBA, which makes the question even harder to answer. The answer is sometimes simple, sometimes complicated, but it always starts with the same bit of analysis, whether the relational database management system (RDBMS) you’re using is DB2, MySQL, Microsoft SQL Server, Oracle, or PostgreSQL.

It’s All About the Execution Plan

A database engine balances CPU, memory, and storage resources to try to provide the best overall performance for all queries. As part of this, the engine will try to limit the number of times it executes expensive processes by caching various objects in RAM—one use is saving blocks with the data needed to return results to a query. Another common use of caching is for execution plans or explain plans (different engines call these different things), which are probably the most important factor in your query’s performance.

When you submit a query to a database engine, a couple of things happen—the query is first parsed, to ensure its syntax is valid, the objects (tables, views, functions) you’re querying exist, and you have permission to access them. This process is very fast and happens in a matter of microseconds. Next, the database engine will look to see if that query has been recently executed and if the cache of execution plans has a plan for that query. If there’s not an existing plan, the engine will have to generate a new plan. This process is very expensive from a CPU perspective, which is why the database engine will attempt to cache plans.

Execution or explain plans are simply the map and order of operations required to gather the data to answer your query. The engine uses statistics or metadata about the data in your table to build its best guess at the optimal way to gather your data. Depending on your database engine, other factors such as the number of CPUs, the amount of available memory, various server settings, and even the speed of your storage may impact the operations included in your plan (DBAs frequently refer to this as the shape of the plan).

How Do I Get This Plan and How Do I Read It?

Depending on your RDBMS, there are different approaches to gathering the plan. Typically, you can get the engine to give you a pre-plan, which tells you the operations the engine will perform to retrieve your data. This is helpful when you need to identify large operations, like table scans, which would benefit from an index. For example—if I had the following table called Employees:

EmployeeID | LastName | State
-----------|----------|------
02         | Dantoni  | PA
09         | Brees    | LA

If I wanted to query by LastName, e.g.,

SELECT State
FROM Employees
WHERE LastName = 'Dantoni'

I would want to add an index to the LastName column. Some database engines will even flag a missing index warning, to let you know an index on that column would help the query go faster.
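If you'd like to see this behavior without standing up a full database server, here's a small sketch using Python's built-in SQLite module. SQLite isn't one of the engines named above, but its EXPLAIN QUERY PLAN output illustrates the same scan-versus-seek difference an index makes:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmployeeID INTEGER, LastName TEXT, State TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?, ?)",
                 [(2, "Dantoni", "PA"), (9, "Brees", "LA")])

query = "SELECT State FROM Employees WHERE LastName = 'Dantoni'"

# Before indexing: the plan is a full table scan
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # detail column reads something like 'SCAN Employees'

conn.execute("CREATE INDEX IX_Employees_LastName ON Employees (LastName)")

# After indexing: the engine can seek directly to the matching rows
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # ...'SEARCH Employees USING INDEX IX_Employees_LastName (LastName=?)'

The exact wording of the plan varies by SQLite version, but the before-and-after difference is the point.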

There is also the notion of a post plan, which includes the actual row counts and execution times of the query. This can be useful if your statistics are very out of date, and the engine is making poor assumptions about the number of rows your query will return.

Performance tuning database systems is a combination of dark arts and science, and it can require a deep level of experience. However, knowing execution plans exist and how to capture them gives you a much better understanding of the work your database engine is doing and can give you a path to fixing problems.


Introduction

Much of the dialog surrounding cloud these days seems centered on multi-cloud. Multi-cloud this, multi-cloud that—have we so soon forgotten about the approach that addresses the broadest set of needs for businesses beginning their transition from the traditional data center?

Friends, I’m talking about good-old hybrid cloud, and the path to becoming hybrid doesn’t have to be complicated. In short, we select the cloud provider who best meets most of our needs, establish network connectivity between our existing data center(s) and the cloud, and if we make intelligent, well-reasoned decisions about what should live where (and why), we’ll end up in a good place.

However, as many of you may already know, ending up in “a good place” isn’t a foregone conclusion. It requires careful consideration of what our business (or customers’ businesses) needs to ensure the decisions we’re making are sound.

In this series, we’ll look at the primary phases involved in transitioning toward a hybrid cloud architecture. Along the way, we’ll examine some of the most important considerations needed to ensure the results of this transition are positive.

But first, we’ll spend some time in an often-non-technical realm inhabited by our bosses and our bosses’ bosses. This is the realm of business strategy and requirements, where we determine if the path we’re evaluating is the right one. Shirt starched? Tie straightened? All right, here we go.

The Importance of Requirements

There’s a lot of debate around which cloud platform is “best” for a certain situation. But if we’re pursuing a strategy involving use of cloud services, we’re likely after a series of benefits common to all public cloud providers.

These benefits include global resource availability, practically unlimited capacity, flexibility to scale both up and down as needs change, and consumption-based billing, among others. Realizing these benefits isn’t without potential drawbacks, though.

The use of cloud services means we now have another platform to master and additional complexity to consider when it comes to construction, operation, troubleshooting, and recovery processes. And there are many instances where a service is best kept completely on-premises, making a cloud-only approach unrealistic.

Is this additional complexity even worth it? Do these benefits make any difference to the business? Answering a few business-focused questions upfront may help clarify requirements and avoid trouble later. These questions might resemble the following:

  1. Does the business expect rapid expansion or significant fluctuation in service demand? Is accurately predicting and preparing for this demand ahead of time difficult?

  2. Are the large capital expenditures every few years required to keep a data center environment current causing financial problems for the business?

  3. Is a lack of a worldwide presence negatively impacting the experience of your users and customers? Would it be difficult to distribute services yourself by building out additional self-managed data centers?

  4. Is day-to-day operation of the existing data center facility and hardware infrastructure becoming a time-sink? Would the business like to lessen this burden?

  5. Does the business possess applications with sensitive data that must reside on self-maintained infrastructure? Would there be consequences if this were violated?

In these examples, you can see we’re focusing less on technical features we’d “like to have” and instead thinking about the requirements of the business and specific needs that should be addressed.

If the answers to some of these questions are “yes,” then a hybrid cloud approach could be worthwhile. On the other hand, if a clear business benefit cannot be determined, then it may be best to keep doing what you are doing.

In any case, having knowledge of business requirements will help you select the correct path, whether it involves use of cloud services or maintaining your existing on-premises strategy. And through asking your key business stakeholders questions like these, you show them you’re interested in the healthy operation of the business, not just your technical responsibilities.

Wrap-Up

There are many ways to go about consuming cloud services, and for many organizations, a well-executed hybrid strategy will provide the best return while minimizing complexity and operational overhead. But before embarking on the journey it’s always best to make sure decisions are justified by business requirements. Otherwise, you could end up contributing to a “percentage of cloud initiatives that fail” statistic—not a good place to be.


This week's Actuator comes to you from Austin, as I'm in town to host SolarWinds Lab live. We'll be talking about Database Performance Monitor (nee VividCortex). I hope you find time to watch and bring questions!

As always, here's a bunch of links I hope you find useful. Enjoy!

First clinical trial of gene editing to help target cancer

Being close to the biotech industry in and around Boston, I heard rumors of these treatments two years ago. I'm hopeful our doctors can get this done, and soon.

What Happened With DNC Tech

Twitter thread about the tech failure in Iowa last week.

Analysis of compensation, level, and experience details of 19K tech workers

Wonderful data analysis on salary information. Start at the bottom with the conclusions, then decide for yourself if you want to dive into the details above.

Things I Believe About Software Engineering

There are some deep thoughts in this brief post. Take time to reflect on them.

Smart Streetlights Are Experiencing Mission Creep

Nice reminder that surveillance is happening all around us, in ways you may never know.

11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)

A bit long, but worth the time. I've never been a fan of Tim or his book, but this post struck a chord.

Berlin artist uses 99 phones to trick Google into traffic jam alert

Is it wrong that I want to try this now?

I think I understand why they never tell me anything around here...


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with ideas about how the government could monitor and automate their hyperconverged infrastructure to help achieve their modernization objectives.

It’s no surprise hyperconverged infrastructure (HCI) has been embraced by a growing number of government IT managers, since HCI merges storage, compute, and networking into a much smaller and more manageable footprint.

As with any new technology, however, HCI’s wide adoption can be slowed by skeptics, such as IT administrators concerned with interoperability and implementation costs. However, HCI doesn’t mean starting from scratch. Indeed, migration is best achieved gradually, with agencies buying only what they need, when they need it, as part of a long-term IT modernization plan.

Let’s take a closer look at the benefits HCI provides to government agencies, then examine key considerations when it comes to implementing the technology—namely, the importance of automation and infrastructure monitoring.

Combining the Best of Physical and Virtual Worlds

Enthusiasts like how HCI gives them the performance, reliability, and availability of an on-premises data center along with the ability to scale IT in the cloud. This flexibility allows them to easily incorporate new technologies and architectures into the infrastructure. HCI also consolidates previously disparate compute, networking, and storage functions into a single, compact data center.

Extracting Value Through Monitoring and Automation

Agencies are familiar with monitoring storage, network, and compute as separate entities; when these functions are combined with HCI, network monitoring is still required. Indeed, having complete IT visibility becomes more important as infrastructure converges.

Combining different services into one is a highly complex task fraught with risk. Things change rapidly, and errors can easily occur. Managers need clear insight into what’s going on with their systems.

After the initial deployment is complete, monitoring should continue unabated. It’s vital for IT managers to understand the impact apps, services, and integrated components have on each other and the legacy infrastructure around them.

Additionally, all these processes should be fully automated. Autonomous workload acceleration is a core HCI benefit. Automation binds HCI components together, making them easier to manage and maintain—which in turn yields a more efficient data center. If agencies don’t spend time automating the monitoring of their HCI, they’ll run the risk of allocating resources or building out capacity they don’t need and may expose organizational data to additional security threats.

Investing in the Right Technical Skills

HCI requires a unique skillset. It’s important for agencies to invest in technical staff with practical HCI experience and the knowledge to effectively implement infrastructure monitoring and automation capabilities. These experts will be critical in helping agencies take advantage of the vast potential this technology has to offer.

Reaping the Rewards of HCI

Incorporating infrastructure monitoring and automation into HCI implementation plans will enable agencies to reap the full rewards: lower total cost of IT ownership thanks to simplified data center architecture, consistent and predictable performance, faster application delivery, improved agility, IT service levels accurately matched to capacity requirements, and more.

There’s a lot of return for simply applying the same level of care and attention to monitoring HCI as traditional infrastructure.

Find the full article on Government Computer News.



This week's Actuator comes to you from New England where it has been 367 days since our team last appeared in a Super Bowl. I'm still not ready to talk about it, though.

As always, here's a bunch of links I hope you find interesting. Enjoy!

97% of airports showing signs of weak cybersecurity

I would have put the number closer to 99%.

Skimming heist that hit convenience chain may have compromised 30 million cards

Looks like airports aren't the only industry with security issues.

It’s 2020 and we still have a data privacy problem

SPOILER ALERT: We will always have a data privacy problem.

Don’t be fooled: Blockchains are not miracle security solutions

No, you don't need a blockchain.

Google’s tenth messaging service will “unify” Gmail, Drive, Hangouts Chat

Tenth time is the charm, right? I'm certain this one will be the killer messaging app they have been looking for. And there's no way once it gets popular they'll kill it, either.

A Vermont bill would bring emoji license plates to the US

Just like candy corn, here's something else no one wants.

For the game this year I made some pork belly bites in a garlic honey soy sauce.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen reviewing data from our cybersecurity survey, including details on how agencies are combatting threats.

According to a 2019 Federal Cybersecurity Survey released last year by IT management software company SolarWinds, careless and malicious insiders topped the list of security threats for federal agencies. Yet, despite the increased threats, federal IT security pros believe they’re making progress managing risk.

Why the positive attitude despite the increasing challenge? While threats may be on the rise, strategies to combat these threats—such as government mandates, security tools, and best practices—are seeing vast improvements.

Greater Threat, Greater Solutions

According to the Cybersecurity Survey, 56% of respondents said the greatest source of security threats to federal agencies is careless and/or untrained agency insiders; 36% cited malicious insiders as the greatest source of security threats.

Most respondents cited numerous reasons why the handling of these types of threats has improved or stayed under control, from policy and process improvements to better cyberhygiene and advancing security tools.

• Policy and process improvements: 58% of respondents cited “improved strategy and processes to apply security best practices” as the primary reason careless insider threats have improved.

• Basic security hygiene: 47% of respondents cited “end-user security awareness training” as the primary reason careless insider threats have improved.

• Advanced security tools: 42% of respondents cited “intrusion detection and prevention tools” as the primary reason careless insider threats have improved.

“NIST Framework for Improving Critical Infrastructure Cybersecurity” topped the list of the most critical regulations and mandates, with FISMA (Federal Information Security Management Act) and DISA STIGs (Security Technical Implementation Guides) following close behind, at 60%, 55%, and 52% of respondents, respectively, citing these as the primary contributing factor in managing agency risks.

There’s also no question the tools and technologies to help reduce risk are advancing quickly; this was evidenced by the number of tools federal IT security pros rely on to ensure a stronger security posture within their agencies. The following are the tools cited, and the percentage of respondents saying these are their most important technologies in their proverbial tool chest:

• Intrusion detection and prevention tools: 42%
• Endpoint and mobile security: 34%
• Web application firewalls: 34%
• File and disk encryption: 34%
• Network traffic encryption: 34%
• Web security or web content filtering gateways: 33%
• Internal threat detection/intelligence: 30%

Training was deemed the most important factor in reducing agency risk, particularly when it comes to reducing risks associated with contractors or temporary workers:

• 53% cited “ongoing security training” as the most important factor
• 49% cited “training on security policies when onboarding” as the most important factor
• 44% cited “educate regular employees on the need to protect sensitive data” as the most important factor

Conclusion

Any federal IT security pro will tell you although things are improving, there’s no one answer or one solution. The most effective way to reduce risk is a combination of tactics, from implementing ever-improving technologies to meeting federal mandates to ensuring all staffers are trained in security best practices.

Find the full article on our partner DLT’s blog Technically Speaking.



This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app known to be insecure and owned by the equally insecure Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources available on the machine to do damage. For example, VB macros embedded in spreadsheets. I don't blame Microsoft for saying they won't provide security service for this, but it would be nice if they could hint at finding ways to identify and halt malicious activity.

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

Play Dungeons & Deadlines

You might want to set aside some time for this one.

Walking through Forest Park this past Sunday, after a rainstorm the day before, the temperature was so perfect you could catch the steam coming off the trees.


Back in October, 2019, I shared my love of both Raspberry Pi (https://www.raspberrypi.org/) devices and the Pi-Hole (https://pi-hole.net/) software, and showed how—with a little know-how about Application Programming Interfaces (APIs) and scripting (in this case, I used it as an excuse to make my friend @kmsigma happy and expand my knowledge of PowerShell)—you could fashion a reasonable API-centric monitoring template in Server & Application Monitor (SAM). For those who are curious, you can find part 1 here: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2) and part 2 here: Don’t Shut Your Pi-Hole, Monitor It! (part 2 of 2)

It was a good tutorial, as far as things went, but it missed one major point: even as I wrote the post, I knew @Serena and her daring department of developers were hard at work building an API poller into SAM 2019.4. As my tutorial went to post, this new functionality was waiting in the wings, about to be introduced to the world.

Leaving the API poller out of my tutorial was a necessary deceit at the time, but not anymore. In this post I’ll use all the same goals and software as my previous adventure with APIs, but with the new functionality.

A Little Review

I’m not going to spend time here discussing what a Raspberry Pi or Pi-Hole solution is (you can find that in part 1 of the original series: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2)). But I want to take a moment to refamiliarize you with what we’re trying to accomplish.

Once you have your Raspberry Pi and Pi-Hole up and running, you get to the API by going to http://<your pi-hole IP or name>/admin/api.php. When you do, the data you get back looks something like this:

{"domains_being_blocked":115897,"dns_queries_today":284514,"ads_blocked_today":17865,"ads_percentage_today":6.279129,"unique_domains":14761,"queries_forwarded":216109,"queries_cached":50540,"clients_ever_seen":38,"unique_clients":22,"dns_queries_all_types":284514,"reply_NODATA":20262,"reply_NXDOMAIN":19114,"reply_CNAME":16364,"reply_IP":87029,"privacy_level":0,"status":"enabled","gravity_last_updated":{"file_exists":true,"absolute":1567323672,"relative":{"days":"3","hours":"09","minutes":"53"}}}

If you look at it with a browser capable of formatting JSON data, it looks a little prettier:

[Screenshot: the same API response pretty-printed in a browser]

That’s the data we want to collect using the new Orion API monitoring function.
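Before building the poller, it's worth sanity-checking the endpoint by hand. Here's a quick sketch using only the Python standard library (substitute your own Pi-Hole's address) that pulls a few of the values we're about to monitor:

import json
import urllib.request

# Substitute your Pi-Hole's IP or name, just like in the browser example above
API_URL = "http://192.168.1.10/admin/api.php"

with urllib.request.urlopen(API_URL, timeout=5) as response:
    stats = json.load(response)

# A few of the values we'll track with the API poller
for key in ("domains_being_blocked", "dns_queries_today",
            "ads_blocked_today", "ads_percentage_today"):
    print(f"{key}: {stats.get(key)}")

If this prints sensible numbers, the poller configuration below should go smoothly.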

The API Poller – A Step-by-Step Guide

To start off, make sure you’re monitoring the Raspberry Pi in question, so there’s a place to display this data. What’s different from the SAM component version is that you can monitor it using the ARM agent, SNMP, or even as a ping-only node.

Next, on the Node Details page for the Pi, look in the “Management” block and you should see an option for “API Poller.” Click that, then click “Create,” and you’re on your way.

[Screenshot: the API Poller option in the Management block, with the Create button]

You want to give this poller a name, or else you won’t be able to include these statistics in PerfStack (Performance Analyzer) later. You can also give it a description and (if required) the authentication credentials for the API.

[Screenshot: naming the poller and setting the optional description and credentials]

On the next screen, put in the Pi-Hole API URL. As I said before, that’s http://<your pi-hole IP or Name>/admin/api.php. Then click “Send Request” to pull a sample of the available metrics.

[Screenshot: entering the API URL and clicking Send Request]

The “Response” area below will populate with items. For the ones you want to monitor, click the little computer screen icon to the right.

[Screenshot: the Response area, with a monitor icon next to each value]

If you want to monitor the value without warning or critical thresholds, click “Save.” Otherwise change the settings as you desire.

[Screenshot: warning and critical threshold settings for a monitored value]

As you do, you’ll see the “Values to Monitor” list on the right column populate. Of course, you can go back and edit or remove those items later. Because nobody’s perfect.

[Screenshot: the Values to Monitor list]

Once you’re done, click “Save” at the bottom of the screen. Scroll down on the Node Details page and you’ll notice a new “API Pollers” Section is now populated.

[Screenshot: the API Pollers section on the Node Details page]

I’m serious, it’s this easy. I’m not saying coding API monitors with PowerShell wasn’t a wonderful learning experience, and I’m sure down the road I’ll use the techniques I learned.

But when you have several APIs, with a bunch of values each, this process is significantly easier to set up and maintain.

Kudos once again to @kmsigma for the PowerShell support, and to @Serena and her team for all their hard work and support making our lives as monitoring engineers better every day.

Try it out yourself and let us know your experiences in the comments below!


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Jim Hansen about using patching, credential management, and continuous monitoring to improve security of IoT devices.

Security concerns over the Internet of Things (IoT) are growing, and federal and state lawmakers are taking action. First, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017, which sought to “establish minimum security requirements for federal procurements of connected devices.” More recently, legislators in the state of California introduced Senate Bill No. 327, which stipulated manufacturers of IoT devices include “a reasonable security feature” within their products.

While these laws are good starting points, they don’t go far enough in addressing IoT security concerns.

IoT Devices: A Hacker’s Best Friend?

Connected devices all have the potential to reach the internet and local networks and, for the most part, were designed for convenience and speed—not security. And since they’re connected to the network, they offer a backdoor through which other systems can be easily compromised.

As such, IoT devices offer tantalizing targets for hackers. A single exploit from one connected device can lead to a larger, more damaging breach. Remember the Target hack from a few years ago? Malicious attackers gained a foothold into the retail giant’s infrastructure by stealing credentials from a heating and air conditioning company whose units were connected to Target’s network. It’s easy to imagine something as insidious—and even more damaging to national security—taking place within the Department of Defense or other agencies, which have been early adopters of connected devices.

Steps for Securing IoT Devices

When security managers initiate IoT security measures, they’re not only protecting their devices, they’re safeguarding everything connected to those devices. Therefore, it’s important to go beyond the government’s baseline security recommendations and embrace more robust measures. Here are some proactive steps government IT managers can take to lock down their devices and networks.

  • Make patching and updating a part of the daily routine. IoT devices should be subject to a regular cadence of patches and updates to help ensure the protection of those devices against new and evolving vulnerabilities. This is essential to the long-term security of connected devices.

The Internet of Things Cybersecurity Improvement Act of 2017 specifically requires vendors to make their IoT devices patchable, but it’s easy for managers to go out and download what appears to be a legitimate update—only to find it’s full of malware. It’s important to be vigilant and verify security packages before applying them to their devices. After updates are applied, managers should take precautions to ensure those updates are genuine.

  • Apply basic credential management to interaction with IoT devices. Managers must think differently when it comes to IoT device user authentication and credential management. They should ask, “How does someone interact with this device?” “What do we have to do to ensure only the right people, with the right authorization, are able to access the device?” “What measures do we need to take to verify this access and understand what users are doing once they begin using the device?”

Being able to monitor user sessions is key. IoT devices may not have the same capabilities as modern information systems, such as the ability to maintain or view log trails or delete a log after someone stops using the device. Managers may need to proactively ensure their IoT devices have these capabilities.

  • Employ continuous threat monitoring to protect against attacks. There are several common threat vectors hackers can use to tap into IoT devices. SQL injection and cross-site scripting are favorite weapons malicious actors use to target web-based applications and could be used to compromise connected devices.

Managers should employ IoT device threat monitoring to help protect against these and other types of intrusions. Continuous threat monitoring can be used to alert, report, and automatically address any potentially harmful anomalies. It can monitor traffic passing to and from a device to detect whether the device is communicating with a known bad entity. A device in communication with a command and control system outside of the agency’s infrastructure is a certain red flag that the device—and the network it’s connected to—may have been compromised.
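Commercial monitoring tools do this correlation at scale and in real time, but the core check is simple to picture. Here's a toy sketch in Python, assuming a hypothetical blocklist file of known-bad IPs and a flow log exported as one "source,destination" pair per line:

# Hypothetical inputs: blocklist.txt (one known-bad IP per line) and
# flows.csv (source_ip,destination_ip per line) exported from a collector.
with open("blocklist.txt") as f:
    bad_ips = {line.strip() for line in f if line.strip()}

with open("flows.csv") as f:
    for line in f:
        parts = line.strip().split(",")
        if len(parts) < 2:
            continue
        src, dst = parts[0], parts[1]
        if dst in bad_ips:
            # A real tool would alert and report; printing stands in for that
            print(f"ALERT: device {src} contacted known bad entity {dst}")

The value of a production tool lies in the quality of its threat intelligence feed and its alerting workflow, not in the lookup itself.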

The IoT is here to stay, and it’s important for federal IT managers to proactively tackle the security challenges it poses. Bills passed by federal and state legislators are a start, but they’re not enough to protect government networks against devices that weren’t designed with security top-of-mind. IoT security is something agencies need to take into their own hands. Managers must understand the risks and put processes, strategies, and tools in place to proactively mitigate threats caused by the IoT.

Find the full article on Fifth Domain.



Submitted for your approval: a story of cloud horrors. One of performance issues impacting production, where monthly cloud billing began spiraling out of control.

The following story is true. The names have been changed to protect the innocent.

During my consulting career, I’ve encountered companies at many different stages of their cloud journey. What was particularly fun about walking into this shop is they were already about 75% up into public cloud. The remaining 25% was working towards being migrated off their aging hardware. They seemed to be ahead of the game, so why were my services needed?

Let’s set up some info about the company, which I’ll call “ABC Co.” ABC Co. provides medical staff and medical management to many hospitals and clinics, with approximately 1,000 employees and contractors spread across many states. Being in both medical staffing and recordkeeping, ABC Co. was subject to many compliance regulations such as HIPAA, PCI, etc. Their on-premises data center was on older hardware nearing end of life, and given the size of their IT staff, they decided to move out of the data center business.

The data center architect at ABC Co. did his homework. He spent many hours learning about public cloud, crunching numbers, and comparing virtual machine configurations to cloud-based compute sizing. Additionally, due to compliance requirements, ABC Co. needed to use dedicated hosts in the public cloud. After factoring in all the sizing, storage capacity, and necessary networking, the architect arrived at an expected monthly spend number: $50,000. He took this number to the board of directors with a migration plan and outlined the benefits of going to the cloud versus refreshing their current physical infrastructure. The board was convinced and gave the green light to move into the public cloud.

Everything was moving along perfectly early in the project. The underlying cloud architecture of networking, identity and access management, and security were deployed. A few workloads were moved up into the cloud to great success. ABC Co. continued their migration, putting applications and remote desktop servers in the cloud, along with basic workloads such as email servers and databases. But something wasn’t right.

End users started to complain of performance issues on the RDP servers. Application processing had slowed to a crawl. Employees’ ability to perform their tasks was being impeded. The architect and cloud administrators added more remote desktop servers into the environment and increased their size. Sizing on the application servers, which were just Microsoft Windows Servers in the public cloud, was also increased. This alleviated the problems, albeit temporarily. As more and more users logged in to the public cloud-based services, performance and availability took a hit.

And then the bill showed up.

At first, it crept up slowly toward the anticipated $50,000 per month. Unfortunately, as a side effect of the ever-increasing resources, the bill rose to more than triple the original estimate presented to the board of directors. At the peak of the “crisis,” the bill surpassed $150,000 per month. This put the C-suite on edge. What was going on with the cloud migration project? How was the bill so high when they had been promised a third of what was being spent? It was time for the ABC Co. team to call for an assist.

This is where I entered the scene. I’ll start this next section of the story by stating this outright: I didn’t solve all their problems. I wasn’t a savior on a white horse galloping in to save the day. I did, however, help ABC Co. start to reduce their bill and get cloud spend under control.

One of the steps they implemented before I arrived was to use scripted shutdown of servers during non-work hours. This cut off some of the wasteful spend on machines not being used. We also looked at the actual usage on all servers in the cloud. After running some scans, we found many servers not in use for 30 days or more being left on and piling onto the bill. These servers were promptly shut down, archived, then deleted after a set time. Applications experiencing performance issues were analyzed, and it was determined they could be converted to a cloud-native architecture. And those pesky ever-growing remote desktop boxes? Smaller, more cost-effective servers were placed behind a load balancer to automatically boot additional servers should the user count demand it. These were just a few of the steps toward reducing the cloud bill. Many things occurred after I left, but it was a start to send them on the right path.
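To illustrate the scripted-shutdown idea, here's a sketch of what such an off-hours job might look like, assuming AWS with the boto3 library and a hypothetical "Schedule" tag applied by the admins (ABC Co.'s actual cloud provider and tooling aren't specified):

import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged for office-hours-only operation
# (the Schedule tag is a hypothetical convention, not an AWS built-in)
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop, not terminate, so the servers can come back up in the morning
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instance(s): {instance_ids}")

Scheduled via cron or a serverless timer, a script like this trims spend on machines nobody is using overnight.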

  So, what can be learned from this story? While credit should be given for the legwork done to develop the strategy, on-premises virtual machines and public cloud-based instances aren’t apples to apples. Workloads behave differently in the cloud. The way resources are consumed has costs behind it; you can’t just add RAM and CPU to a problem server like you can in your data center (nor is it often the correct solution). Many variables go into a cloud migration. If your company is looking at moving to the cloud, be sure to ask the deep questions during the initial planning phase—it may just save hundreds of thousands of dollars.


Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the ...

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some responsibility for the content it allows. It's similar to how I feel developers should be held accountable for deploying insecure software or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.

Read more
2 21 648
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it's been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of these vulnerabilities were discovered within the past year alone, a sure sign of rising risk levels.

The report cited several areas for improvement, including continuous monitoring and detection processes and security training. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

1. Identify Existing Threats and Vulnerabilities

Identifying and addressing vulnerabilities will only become more difficult as devices and cloud-based applications proliferate on defense networks. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still being used on DoD networks.

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.
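That first step doesn't have to be fancy. As a purely illustrative sketch (not any particular product's method), the snippet below sweeps a small subnet with a TCP probe and reports live hosts missing from a known-inventory list; the subnet, port, and inventory are all made up:

    import ipaddress
    import socket

    KNOWN_INVENTORY = {"10.0.0.1", "10.0.0.10", "10.0.0.15"}  # hypothetical CMDB export
    SUBNET = ipaddress.ip_network("10.0.0.0/28")
    PROBE_PORT = 22  # a crude liveness check; real discovery tools do far better

    for host in SUBNET.hosts():
        address = str(host)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.3)
            # connect_ex returns 0 when the port answers.
            if sock.connect_ex((address, PROBE_PORT)) == 0 and address not in KNOWN_INVENTORY:
                print(f"UNKNOWN DEVICE responding at {address}")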

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.
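As a sketch of that configuration-scanning idea, assuming nightly config dumps land in a directory, the script below hashes each file against a stored baseline and flags drift for someone to reconcile against approved change tickets. The paths and file layout are hypothetical:

    import hashlib
    import json
    from pathlib import Path

    BASELINE_FILE = Path("config_baselines.json")  # hypothetical baseline store
    CONFIG_DIR = Path("device_configs")            # hypothetical nightly dumps

    def sha256(path: Path) -> str:
        """Return the SHA-256 digest of a config file."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    baselines = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}

    for config in sorted(CONFIG_DIR.glob("*.cfg")):
        digest = sha256(config)
        known = baselines.get(config.name)
        if known is None:
            print(f"NEW DEVICE: {config.name}, recording first baseline")
            baselines[config.name] = digest
        elif digest != known:
            # Unexplained drift should map to an approved change ticket.
            print(f"DRIFT: {config.name} no longer matches its approved baseline")

    BASELINE_FILE.write_text(json.dumps(baselines, indent=2))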

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It's no longer only about keeping an eye on in-house applications; it's equally important to be able to spot potential vulnerabilities in the cloud.

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.
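A continuous-monitoring loop can start small. Here's a minimal sketch of a poller that treats on-premises and cloud-hosted endpoints identically; the URLs and thresholds are invented, and a production setup would feed a real monitoring platform rather than print:

    import time
    import requests  # third-party HTTP library

    # Hypothetical mix of in-house and cloud-hosted service endpoints.
    ENDPOINTS = [
        "https://intranet.example.mil/health",
        "https://app.example-cloud.com/health",
    ]
    LATENCY_BUDGET = 2.0  # seconds; flag anything slower

    while True:
        for url in ENDPOINTS:
            try:
                started = time.monotonic()
                response = requests.get(url, timeout=5)
                elapsed = time.monotonic() - started
                if response.status_code != 200:
                    print(f"ALERT {url}: HTTP {response.status_code}")
                elif elapsed > LATENCY_BUDGET:
                    print(f"WARN {url}: slow response ({elapsed:.2f}s)")
            except requests.RequestException as error:
                print(f"ALERT {url}: unreachable ({error})")
        time.sleep(60)  # poll every minute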

3. Establish Ongoing User Training and Education Programs

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

Training shouldn't be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risks to government networks. As such, all employees should be trained on the agency's policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
2 13 891
Level 11

While Hollywood is full of silly depictions of machine learning and artificial intelligence, the reality delivers significant benefits. Administrators today oversee so many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks are monotonous and tedious, and they're generally required daily. In these cases, machine learning eases the burden on administrators and helps them be more productive with their time. Lately, however, more people seem to think too much machine learning may replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for a human to handle some tasks, I don't believe we'll see humans replaced by machines (sorry, Terminator fans). Instead, I'll highlight why I believe machine learning matters now and will continue to matter for generations to come.

Machine Learning Improves Administrators' Lives

Some tasks administrators are responsible for are tedious and take a long time to complete. With machine learning, those daily chores can run on a schedule and grow more efficient as system behavior is learned and optimized on the fly. A great example comes in the form of spam mail and calls. Big-name telecom companies now use machine learning to filter out the spam callers flooding cell phones everywhere; call-blocker apps can screen calls against spam-call lists built by machine learning and block the likely offenders. In another example, machine learning can analyze system behavior against a performance baseline and alert the team to any anomalies or the need to make changes. Machine learning is here to help administrators, not give them anxiety about being replaced.
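As a toy illustration of that baseline idea (far simpler than what shipping products do), the snippet below learns a mean and standard deviation from a baseline window of CPU readings and flags live samples more than three standard deviations out; the numbers are made up:

    import statistics

    # Hypothetical hourly CPU readings: a learned baseline, then live samples.
    baseline = [22.0, 25.5, 24.1, 23.8, 26.0, 24.9, 25.2, 23.4]
    live = [24.7, 25.1, 61.3, 24.2]

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    for reading in live:
        z = (reading - mean) / stdev
        if abs(z) > 3:
            # A real system would page the on-call team instead of printing.
            print(f"ANOMALY: {reading}% CPU is {z:.1f} standard deviations out")
        else:
            print(f"ok: {reading}% CPU")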

Machine Learning Makes Technology Better


There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, and security hardening; there's something for every type of workload. When machine learning is applied to these technologies, it enhances the application and increases its ease of use. Machine learning does exactly what the name implies: it's always learning. If an application workload suddenly increases, machine learning captures the change and uses an algorithm to determine how to react. When there's a storage bottleneck, machine learning analyzes the traffic to determine what's causing the congestion and then works out a possible solution for administrators to implement.

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends over the past 10 to 15 years have leaned toward reducing complexity. Virtualization reduced the physical footprint in the data center and simplified systems management. Hyperconverged infrastructure (HCI) went a step further and consolidated an entire rack's worth of technology into one box. Machine learning takes it another step by enabling automation and fast analysis of large data sets to produce actionable tasks. Work that once required a ton of administrative overhead is reduced to an automated, scheduled task the administrator simply monitors. Help desk analysts benefit, too: machine learning can recognize trends in ticket data to better triage certain incidents and reduce the complexity of troubleshooting them.
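For that ticket-triage case, one plausible shape is a small supervised text classifier. Here's a sketch using scikit-learn; the tickets and queue labels are invented for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical historical tickets and the queues that resolved them.
    tickets = [
        "cannot connect to vpn from home office",
        "outlook keeps asking for my password",
        "shared drive is out of space",
        "vpn client crashes on login",
        "mailbox full, cannot send email",
        "need more storage on the file server",
    ]
    queues = ["network", "email", "storage", "network", "email", "storage"]

    # TF-IDF features plus a linear classifier: simple, fast, and auditable.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(tickets, queues)

    print(model.predict(["vpn disconnects every ten minutes"]))  # likely 'network'

Trained on real historical tickets, even a model this simple can route a large share of incoming incidents to the right queue automatically.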

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.

Read more
1 15 626