Geek Speak Blogs


orafik
Level 11

Here’s an interesting article by my colleague Mav Turner, who offers three steps to improve cloud security, from creating policies to improving visibility and automation.

sqlrockstar
Level 17

This edition of the Actuator comes to you from my kitchen, where I'm enjoying some time at home before I hit the road. I'll be at RSA next week, then Darmstadt, Germany the following week. And then I head to Seattle for the Microsoft MVP Summit. This is all my way of saying future editions of the Actuator may be delayed. I'll do my best, but I hope you understand.

As always, here's a bunch of links I hope you will find useful. Enjoy!

It doesn’t matter if China hacked Equifax

No, it doesn't, because the evidence suggests China was but one of many entities that helped themselves to the data Equifax was negligent in guarding.

Data centers generate the same amount of carbon emissions as global airlines

Machine learning and bitcoin mining are large consumers of power in any data center. This is why Microsoft has announced they'll look to become carbon neutral as soon as possible.

Delta hopes to be the first carbon neutral airline

On the heels of Microsoft's announcement, seeing this from Delta gives me hope many other companies will take action, and not just issue press releases.

Apple’s Mac computers now outpace Windows in malware and virus

Nothing is secure. Stay safe out there.

Over 500 Chrome Extensions Secretly Uploaded Private Data

Everything is terrible.

Judge temporarily halts work on JEDI contract until court can hear AWS protest

This is going to get ugly to watch. You stay right there, I'll go grab the popcorn.

How to Add “Move to” or “Copy to” to Windows 10’s Context Menu

I didn't know I needed this until now, and now I'm left wondering how I've lived so long without this in my life.

Our new Sunday morning ritual is walking through Forest Park. Each week we seem to find something new to enjoy.


orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Brandon Shopp about DoD’s not-so-secret weapon against cyberthreats. DISA has created technical guidelines that evolve to help keep ahead of threats, and this blog helps demystify DISA STIGs.

The Defense Information Systems Agency (DISA) has a set of security regulations to provide a baseline standard for Department of Defense (DoD) networks, systems, and applications. DISA enforces hundreds of pages of detailed rules IT pros must follow to properly secure or “harden” the government computer infrastructure and systems.

If you’re responsible for a DoD network, these STIGs (Security Technical Implementation Guides) help guide your network management, configuration, and monitoring strategies across access control, operating systems, applications, network devices, and even physical security. DISA releases new STIGs at least once every quarter. This aggressive release schedule is designed to catch as many recently patched vulnerabilities as possible and ensure a secure baseline for the component in operation.

How can a federal IT pro get compliant when so many requirements must be met on a regular basis? The answer is automation.

First, let’s revisit STIG basics. The DoD developed STIGs, or hardening guidelines, for the most common components comprising agency systems. As of this writing, there are nearly 600 STIGs, each of which may comprise hundreds of security checks specific to the component being hardened.

Beyond the cost of meeting STIG requirements, a second challenge is the sheer number of requirements to be met. Agency systems may be made up of many components, each requiring STIG compliance. Remember, there are nearly 600 different versions of STIGs, some unique to a component, some targeting specific release versions of the component.

Wouldn’t it be great if automation could step in and solve the cost challenge while saving time by building repeatable processes? That’s precisely what automation does.

  • Automated tools for Windows servers let you test STIG compliance on a single instance, test all changes until approved, then push out those changes to other Windows servers via Group Policy Object (GPO) automation. Automated tools for Linux permit a similar outcome: test all changes due to STIG compliance and then push all approved changes as a tested, secure baseline out to other servers.
  • Automated network monitoring tools digest system logs in real time, create alerts based on predefined rules, and help meet STIG requirements for Continuous Monitoring (CM) security controls while providing the defense team with actionable response guidance.
  • Automated device configuration tools can continuously monitor device configurations for setting changes across geographically dispersed networks, enforcing compliance with security policies, and making configuration backups useful in system restoration efforts after an outage.
  • Automation also addresses readability. STIGs are released in XML format—not the most human-readable form for delivering data. Some newer automated STIG compliance tools generate easy-to-read compliance reports useful for both security management and technical support teams (the sketch below illustrates the point).
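To make the readability point concrete, here’s a minimal Python sketch (my own illustration, not part of the original article) that pulls rule IDs, severities, and titles out of a downloaded STIG benchmark file in XCCDF XML and prints a readable summary. The file name is a placeholder, and real compliance tools go much further by actually evaluating each check:

```python
# Minimal sketch: turn a DISA STIG benchmark (XCCDF XML) into a readable summary.
# The file name below is a placeholder; real tools also evaluate each rule.
import xml.etree.ElementTree as ET

def summarize_stig(path):
    root = ET.parse(path).getroot()
    # XCCDF elements are namespaced; reuse whatever namespace the root element declares.
    ns = root.tag.split("}")[0] + "}" if root.tag.startswith("{") else ""
    for rule in root.iter(f"{ns}Rule"):
        title = rule.find(f"{ns}title")
        yield rule.get("id"), rule.get("severity", "unknown"), (title.text if title is not None else "")

if __name__ == "__main__":
    for rule_id, severity, title in summarize_stig("U_Example_Server_STIG.xml"):
        print(f"[{severity:>8}] {rule_id}: {title}")
```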

If you’re a federal IT pro within a DoD agency, you have an increasing number of requirements to satisfy. Let automation take some of the heavy lifting when it comes to compliance, so you and your team can focus on more pressing tasks.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.


Some folks find a profession early in their life and that’s what they do until they’re old and gray and ready to retire. Other folks find themselves switching careers mid-way. Me? I’m on the third career in my working life. I know a thing or two about learning something new. Why do I bring this up? Because if you’re in the IT industry, chances are you’ll spend a big chunk of your professional life learning something new. There’s also a good chance you’ll have to sit for an exam or two to prove you’ve learned something new. And for some, that prospect overwhelms. If that’s you, keep reading! In this two-part series, I’m going to share my thoughts, tips, and tools for picking up a new skill.

Be the Captain of Your Own Ship

Sometimes you get told you need to have XYZ certification to qualify for the next pay raise. Sometimes the idea comes from your own self-motivation. Either way, the first step to successful completion is for you to make the commitment to the journey. Even if the idea isn’t your own, your personal commitment to the journey will be critical to its success. We’ve all seen what happens when there isn’t personal commitment. Someone gets assigned something they have no interest in or desire for. Whatever the task, it usually gets done half-heartedly and the results are terrible. You don’t want to be terrible. You want that cert. Make the commitment. It doesn’t have to be flashy or public, but it does have to be authentic to you.

Make a New Plan, Stan...

Once you’ve made the decision to go after your goal, it’s time to make your plan. After all, no captain sets sail without first plotting a course. For certification-chasers, there is usually a blueprint out there with what the certification exam will cover. That’s a good place to start.

Charting your course should include things like:

  • A concrete, measurable goal.

  • A realistic timeline.

  • The steps to get from today to success[i].

Think about what hazards might impede your progress. After all, you don’t want to plot your course right through a reef. Things like:

  • How much time can you realistically devote to studying?

  • What stressors might affect your ability to stay on track?

  • Will your own starting point knowledge-wise make the journey longer or shorter?

Make Like a Penguin

If you’re a Madagascar[ii] fan, you know the penguin credo is “Never swim alone.” It’s great advice for penguins and for IT knowledge-seekers. Making your journey alone is like filling your bag with rocks before you start. It just makes life harder.

There are a ton of great online and real-life IT communities out there. Find one that works for you and get engaged. If your journey is at all like mine, at first you might just be asking questions. I know I asked a zillion questions in the beginning. These days I end up answering more questions than I ask, but I find answering others’ questions helps me solidify the strength of my knowledge. Another option is a formal study group. Study groups help by providing structure, feedback on your progress, and motivation.

Lastly, don’t forget about your friends and family. They might not know the subject matter, but they can make your road smoother by freeing up your time or giving you valuable moral support. Make sure they swim, too.

This article has been a high-level design for a successful certification journey. Stay tuned for the next installment, where we’ll go low-level with some tips for getting to success one day at a time. Until then, remember to keep doing it... just for fun!


[i] https://www.forbes.com/sites/biancamillercole/2019/02/07/how-to-create-and-reach-your-goals-in-4-ste...

[ii] https://www.imdb.com/title/tt0484439/characters/nm0569891

jdanton
Level 10

As a database administrator (aka DBA, or Default Blame Acceptor) throughout my career, I’ve worked with a myriad of developers, system administrators, and business users who have all had the same question—why is my query (or application) slow? Many organizations lack a full-time DBA, which makes the question even harder to answer. The answer is sometimes simple, sometimes complicated, but it always starts with the same bit of analysis, whether the relational database management system (RDBMS) you’re using is DB2, MySQL, Microsoft SQL Server, Oracle, or PostgreSQL.

It’s All About the Execution Plan

A database engine balances CPU, memory, and storage resources to try to provide the best overall performance for all queries. As part of this, the engine will try to limit the number of times it executes expensive processes by caching various objects in RAM—one use is saving blocks with the data needed to return results to a query. Another common use of caching is for execution plans or explain plans (different engines call these different things), which are probably the most important factor in your query’s performance.

When you submit a query to a database engine, a couple of things happen—the query is first parsed, to ensure its syntax is valid, the objects (tables, views, functions) you’re querying exist, and you have permission to access them. This process is very fast and happens in a matter of microseconds. Next, the database engine will look to see if that query has been recently executed and if the cache of execution plans has a plan for that query. If there’s not an existing plan, the engine will have to generate a new plan. This process is very expensive from a CPU perspective, which is why the database engine will attempt to cache plans.

Execution or explain plans are simply the map and order of operations required to gather the data to answer your query. The engine uses statistics or metadata about the data in your table to build its best guess at the optimal way to gather your data. Depending on your database engine, other factors such as the number of CPUs, the amount of available memory, various server settings, and even the speed of your storage may impact the operations included in your plan (DBAs frequently refer to this as the shape of the plan).

How Do I Get This Plan and How Do I Read It?

Depending on your RDBMS, there are different approaches to gathering the plan. Typically, you can get the engine to give you a pre-plan, which tells you the operations the engine will perform to retrieve your data. This is helpful when you need to identify large operations, like table scans, which would benefit from an index. For example—if I had the following table called Employees:

EmployeeID   LastName   State
02           Dantoni    PA
09           Brees      LA

If I wanted to query by LastName, e.g.,

SELECT STATE
FROM Employees
WHERE LastName = 'Dantoni'

I would want to add an index to the LastName column. Some database engines will even flag a missing index warning, to let you know an index on that column would help the query go faster.
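If you’d like to see that behavior for yourself without standing up a full database server, here’s a small sketch using Python’s built-in sqlite3 module. SQLite’s EXPLAIN QUERY PLAN output looks different from SQL Server’s or Oracle’s plans, but the shift from a table scan to an index search is the same idea; the data is the toy Employees table above:

```python
# Sketch: compare the pre-plan for the query above before and after adding an index.
# Uses SQLite for convenience; other engines expose plans via EXPLAIN, SET SHOWPLAN, etc.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmployeeID INTEGER, LastName TEXT, State TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?, ?)",
                 [(2, "Dantoni", "PA"), (9, "Brees", "LA")])

query = "SELECT State FROM Employees WHERE LastName = 'Dantoni'"

def show_plan(label):
    print(label)
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print("  ", row)  # expect a SCAN before the index, a SEARCH ... USING INDEX after

show_plan("Before index:")
conn.execute("CREATE INDEX IX_Employees_LastName ON Employees (LastName)")
show_plan("After index:")
```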

There is also the notion of a post plan, which includes the actual row counts and execution times of the query. This can be useful if your statistics are very out of date, and the engine is making poor assumptions about the number of rows your query will return.

Performance tuning database systems is a combination of dark arts and science, and it can require a deep level of experience. However, knowing execution plans exist and how to capture them allows you to have a much better understanding of the work your database engine is doing, and can give you a path to fix it.


Introduction

Much of the dialog surrounding cloud these days seems centered on multi-cloud. Multi-cloud this, multi-cloud that—have we so soon forgotten about the approach that addresses the broadest set of needs for businesses beginning their transition from the traditional data center?

Friends, I’m talking about good-old hybrid cloud, and the path to becoming hybrid doesn’t have to be complicated. In short, we select the cloud provider who best meets most of our needs, establish network connectivity between our existing data center(s) and the cloud, and if we make intelligent, well-reasoned decisions about what should live where (and why), we’ll end up in a good place.

However, as many of you may already know, ending up in “a good place” isn’t a foregone conclusion. It requires careful consideration of what our business (or customers’ businesses) needs to ensure the decisions we’re making are sound.

In this series, we’ll look at the primary phases involved in transitioning toward a hybrid cloud architecture. Along the way, we’ll examine some of the most important considerations needed to ensure the results of this transition are positive.

But first, we’ll spend some time in an often-non-technical realm inhabited by our bosses and our bosses’ bosses. This is the realm of business strategy and requirements, where we determine if the path we’re evaluating is the right one. Shirt starched? Tie straightened? All right, here we go.

The Importance of Requirements

There’s a lot of debate around which cloud platform is “best” for a certain situation. But if we’re pursuing a strategy involving use of cloud services, we’re likely after a series of benefits common to all public cloud providers.

These benefits include global resource availability, practically unlimited capacity, flexibility to scale both up and down as needs change, and consumption-based billing, among others. Realizing these benefits isn’t without potential drawbacks, though.

The use of cloud services means we now have another platform to master and additional complexity to consider when it comes to construction, operation, troubleshooting, and recovery processes. And there are many instances where a service is best kept completely on-premises, making a cloud-only approach unrealistic.

Is this additional complexity even worth it? Do these benefits make any difference to the business? Answering a few business-focused questions upfront may help clarify requirements and avoid trouble later. These questions might resemble the following:

  1. Does the business expect rapid expansion or significant fluctuation in service demand? Is accurately predicting and preparing for this demand ahead of time difficult?
  2. Are the large capital expenditures every few years required to keep a data center environment current causing financial problems for the business?
  3. Is a lack of a worldwide presence negatively impacting the experience of your users and customers? Would it be difficult to distribute services yourself by building out additional self-managed data centers?
  4. Is day-to-day operation of the existing data center facility and hardware infrastructure becoming a time-sink? Would the business like to lessen this burden?
  5. Does the business possess applications with sensitive data that must reside on self-maintained infrastructure? Would there be consequences if this were violated?

In these examples, you can see we’re focusing less on technical features we’d “like to have” and instead thinking about the requirements of the business and specific needs that should be addressed.

If the answers to some of these questions are “yes,” then a hybrid cloud approach could be worthwhile. On the other hand, if a clear business benefit cannot be determined, then it may be best to keep doing what you are doing.

In any case, having knowledge of business requirements will help you select the correct path, whether it involves use of cloud services or maintaining your existing on-premises strategy. And through asking your key business stakeholders questions like these, you show them you’re interested in the healthy operation of the business, not just your technical responsibilities.

Wrap-Up

There are many ways to go about consuming cloud services, and for many organizations, a well-executed hybrid strategy will provide the best return while minimizing complexity and operational overhead. But before embarking on the journey it’s always best to make sure decisions are justified by business requirements. Otherwise, you could end up contributing to a “percentage of cloud initiatives that fail” statistic—not a good place to be.

sqlrockstar
Level 17

This week's Actuator comes to you from Austin, as I'm in town to host SolarWinds Lab live. We'll be talking about Database Performance Monitor (nee VividCortex). I hope you find time to watch and bring questions!

As always, here's a bunch of links I hope you find useful. Enjoy!

First clinical trial of gene editing to help target cancer

Being close to the biotech industry in and around Boston, I heard rumors of these treatments two years ago. I'm hopeful our doctors can get this done, and soon.

What Happened With DNC Tech

Twitter thread about the tech failure in Iowa last week.

Analysis of compensation, level, and experience details of 19K tech workers

Wonderful data analysis on salary information. Start at the bottom with the conclusions, then decide for yourself if you want to dive into the details above.

Things I Believe About Software Engineering

There are some deep thoughts in this brief post. Take time to reflect on them.

Smart Streetlights Are Experiencing Mission Creep

Nice reminder that surveillance is happening all around us, in ways you may never know.

11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)

A bit long, but worth the time. I've never been a fan of Tim or his book, but this post struck a chord.

Berlin artist uses 99 phones to trick Google into traffic jam alert

Is it wrong that I want to try this now?

I think I understand why they never tell me anything around here...

orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with ideas about how the government could monitor and automate their hyperconverged infrastructure to help achieve their modernization objectives.

It’s no surprise hyperconverged infrastructure (HCI) has been embraced by a growing number of government IT managers, since HCI merges storage, compute, and networking into a much smaller and more manageable footprint.

As with any new technology, however, HCI’s wide adoption can be slowed by skeptics, such as IT administrators concerned with interoperability and implementation costs. However, HCI doesn’t mean starting from scratch. Indeed, migration is best achieved gradually, with agencies buying only what they need, when they need it, as part of a long-term IT modernization plan.

Let’s take a closer look at the benefits HCI provides to government agencies, then examine key considerations when it comes to implementing the technology—namely, the importance of automation and infrastructure monitoring.

Combining the Best of Physical and Virtual Worlds

Enthusiasts like that HCI gives them the performance, reliability, and availability of an on-premises data center along with the ability to scale IT in the cloud. This flexibility allows them to easily incorporate new technologies and architectures into the infrastructure. HCI also consolidates previously disparate compute, networking, and storage functions into a single, compact data center.

Extracting Value Through Monitoring and Automation

Agencies are familiar with monitoring storage, network, and compute as separate entities; when these functions are combined with HCI, network monitoring is still required. Indeed, having complete IT visibility becomes more important as infrastructure converges.

Combining different services into one is a highly complex task fraught with risk. Things change rapidly, and errors can easily occur. Managers need clear insight into what’s going on with their systems.

After the initial deployment is complete, monitoring should continue unabated. It’s vital for IT managers to understand the impact apps, services, and integrated components have on each other and the legacy infrastructure around them.

Additionally, all these processes should be fully automated. Autonomous workload acceleration is a core HCI benefit. Automation binds HCI components together, making them easier to manage and maintain—which in turn yields a more efficient data center. If agencies don’t spend time automating the monitoring of their HCI, they’ll run the risk of allocating resources or building out capacity they don’t need and may expose organizational data to additional security threats.

Investing in the Right Technical Skills

HCI requires a unique skillset. It’s important for agencies to invest in technical staff with practical HCI experience and the knowledge to effectively implement infrastructure monitoring and automation capabilities. These experts will be critical in helping agencies take advantage of the vast potential this technology has to offer.

Reaping the Rewards of HCI

Incorporating infrastructure monitoring and automation into HCI implementation plans will enable agencies to reap the full rewards: lower total cost of IT ownership thanks to simplified data center architecture, consistent and predictable performance, faster application delivery, improved agility, IT service levels accurately matched to capacity requirements, and more.

There’s a lot of return for simply applying the same level of care and attention to monitoring HCI as to traditional infrastructure.

Find the full article on Government Computer News.


sqlrockstar
Level 17

This week's Actuator comes to you from New England where it has been 367 days since our team last appeared in a Super Bowl. I'm still not ready to talk about it, though.

As always, here's a bunch of links I hope you find interesting. Enjoy!

97% of airports showing signs of weak cybersecurity

I would have put the number closer to 99%.

Skimming heist that hit convenience chain may have compromised 30 million cards

Looks like airports aren't the only industry with security issues.

It’s 2020 and we still have a data privacy problem

SPOILER ALERT: We will always have a data privacy problem.

Don’t be fooled: Blockchains are not miracle security solutions

No, you don't need a blockchain.

Google’s tenth messaging service will “unify” Gmail, Drive, Hangouts Chat

Tenth time is the charm, right? I'm certain this one will be the killer messaging app they have been looking for. And there's no way once it gets popular they'll kill it, either.

A Vermont bill would bring emoji license plates to the US

Just like candy corn, here's something else no one wants.

For the game this year I made some pork belly bites in a garlic honey soy sauce.


orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen reviewing data from our cybersecurity survey, including details on how agencies are combatting threats.

According to a 2019 Federal Cybersecurity Survey released last year by IT management software company SolarWinds, careless and malicious insiders topped the list of security threats for federal agencies. Yet, despite the increased threats, federal IT security pros believe they’re making progress managing risk.

Why the positive attitude despite the increasing challenge? While threats may be on the rise, strategies to combat these threats—such as government mandates, security tools, and best practices—are seeing vast improvements.

Greater Threat, Greater Solutions

According to the Cybersecurity Survey, 56% of respondents said the greatest source of security threats to federal agencies is careless and/or untrained agency insiders; 36% cited malicious insiders as the greatest source of security threats.

Most respondents cited numerous reasons why these types of threats have improved or remained under control, from policy and process improvements to better cyber hygiene and advancing security tools.

• Policy and process improvements: 58% of respondents cited “improved strategy and processes to apply security best practices” as the primary reason careless insider threats have improved.

• Basic security hygiene: 47% of respondents cited “end-user security awareness training” as the primary reason careless insider threats have improved.

• Advanced security tools: 42% of respondents cited “intrusion detection and prevention tools” as the primary reason careless insider threats have improved.

“NIST Framework for Improving Critical Infrastructure Cybersecurity” topped the list of the most critical regulations and mandates, with FISMA (Federal Information Security Management Act) and DISA STIGs (Security Technical Implementation Guides) following close behind; 60%, 55%, and 52% of respondents, respectively, cited these as the primary contributing factors in managing agency risks.

There’s also no question the tools and technologies to help reduce risk are advancing quickly; this was evidenced by the number of tools federal IT security pros rely on to ensure a stronger security posture within their agencies. The following are the tools cited, and the percentage of respondents saying these are their most important technologies in their proverbial tool chest:

• Intrusion detection and prevention tools: 42%

• Endpoint and mobile security: 34%

• Web application firewalls: 34%

• File and disk encryption: 34%

• Network traffic encryption: 34%

• Web security or web content filtering gateways: 33%

• Internal threat detection/intelligence: 30%

Training was deemed the most important factor in reducing agency risk, particularly when it comes to reducing risks associated with contractors or temporary workers:

• 53% cited “ongoing security training” as the most important factor

• 49% cited “training on security policies when onboarding” as the most important factor

• 44% cited “educate regular employees on the need to protect sensitive data” as the most important factor

Conclusion

Any federal IT security pro will tell you that although things are improving, there’s no single answer or solution. The most effective way to reduce risk is a combination of tactics, from implementing ever-improving technologies to meeting federal mandates to ensuring all staffers are trained in security best practices.

Find the full article on our partner DLT’s blog Technically Speaking.


sqlrockstar
Level 17

This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app known to be unsecured and owned by the unsecured Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources available on the machine to do damage. For example, VB macros embedded in spreadsheets. I don't blame Microsoft for saying they won't provide security service for this, but it would be nice if they could hint at finding ways to identify and halt malicious activity.

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

Play Dungeons & Deadlines

You might want to set aside some time for this one.

Walking through Forest Park this past Sunday, after a rainstorm the day before, the temperature was perfect for catching the steam coming off the trees.


adatole
Level 17

Back in October 2019, I shared my love of both Raspberry Pi (https://www.raspberrypi.org/) devices and the Pi-Hole (https://pi-hole.net/) software, and showed how, with a little know-how about Application Programming Interfaces (APIs) and scripting (in this case, I used it as an excuse to make my friend @kmsigma happy and expand my knowledge of PowerShell), you could fashion a reasonable API-centric monitoring template in Server & Application Monitor (SAM). For those who are curious, you can find part 1 here: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2) and part 2 here: Don’t Shut Your Pi-Hole, Monitor It! (part 2 of 2)

It was a good tutorial, as far as things went, but it missed one major point: even as I wrote the post, I knew @Serena and her daring department of developers were hard at work building an API poller into SAM 2019.4. As my tutorial went to post, this new functionality was waiting in the wings, about to be introduced to the world.

Leaving the API poller out of my tutorial was a necessary deceit at the time, but not anymore. In this post I’ll use all the same goals and software as my previous adventure with APIs, but with the new functionality.

A Little Review

I’m not going to spend time here discussing what a Raspberry Pi or Pi-Hole solution is (you can find that in part 1 of the original series: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2)). But I want to take a moment to refamiliarize you with what we’re trying to accomplish.

Once you have your Raspberry Pi and Pi-Hole up and running, you get to the API by going to http://<your pi-hole IP or name>/admin/api.php. When you do, the data you get back looks something like this:

{"domains_being_blocked":115897,"dns_queries_today":284514,"ads_blocked_today":17865,"ads_percentage_today":6.279129,"unique_domains":14761,"queries_forwarded":216109,"queries_cached":50540,"clients_ever_seen":38,"unique_clients":22,"dns_queries_all_types":284514,"reply_NODATA":20262,"reply_NXDOMAIN":19114,"reply_CNAME":16364,"reply_IP":87029,"privacy_level":0,"status":"enabled","gravity_last_updated":{"file_exists":true,"absolute":1567323672,"relative":{"days":"3","hours":"09","minutes":"53"}}}

If you look at it with a browser capable of formatting JSON data, it looks a little prettier:

[Screenshot: the same API response formatted as JSON in a browser]

That’s the data we want to collect using the new Orion API monitoring function.
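(If you’d like to sanity-check the endpoint from a script before building the poller, a few lines of Python will do it; the host name below is a placeholder, and the original two-part series did this sort of thing in PowerShell.)

```python
# Quick sanity check of the Pi-Hole API before setting up the SAM API poller.
# Replace the placeholder host below with your Pi-Hole's IP or name.
import json
import urllib.request

url = "http://pi.hole/admin/api.php"  # placeholder host
with urllib.request.urlopen(url, timeout=5) as resp:
    stats = json.load(resp)

# Print a few of the values we'll ask the API poller to monitor.
for key in ("dns_queries_today", "ads_blocked_today", "ads_percentage_today"):
    print(key, "=", stats.get(key))
```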

The API Poller – A Step-by-Step Guide

To start off, make sure you’re actually monitoring the Raspberry Pi in question, so there’s a place to display this data. What’s different from the SAM Component version is you can monitor it using the ARM agent, or SNMP, or even as a ping-only node.

Next, on the Node Details page for the Pi, look in the “Management” block and you should see an option for “API Poller.” Click that, then click “Create,” and you’re on your way.

[Screenshot: the API Poller option in the Management block, with the Create button]

You want to give this poller a name, or else you won’t be able to include these statistics in PerfStack (Performance Analyzer) later. You can also give it a description and (if required) the authentication credentials for the API.

[Screenshot: naming the API poller and setting an optional description and credentials]

On the next screen, put in the Pi-Hole API URL. As I said before, that’s http://<your pi-hole IP or Name>/admin/api.php. Then click “Send Request” to pull a sample of the available metrics.

[Screenshot: entering the Pi-Hole API URL and clicking Send Request]

The “Response” area below will populate with items. For the ones you want to monitor, click the little computer screen icon to the right.

[Screenshot: the Response area, with the monitor icon next to each value]

If you want to monitor the value without warning or critical thresholds, click “Save.” Otherwise change the settings as you desire.

[Screenshot: optional warning and critical threshold settings for a monitored value]

As you do, you’ll see the “Values to Monitor” list on the right column populate. Of course, you can go back and edit or remove those items later. Because nobody’s perfect.

[Screenshot: the Values to Monitor list in the right column]

Once you’re done, click “Save” at the bottom of the screen. Scroll down on the Node Details page and you’ll notice a new “API Pollers” Section is now populated.

[Screenshot: the new API Pollers section on the Node Details page]

I’m serious, it’s this easy. I’m not saying coding API monitors with PowerShell wasn’t a wonderful learning experience, and I’m sure down the road I’ll use the techniques I learned.

But when you have several APIs, with a bunch of values each, this process is significantly easier to set up and maintain.

Kudos once again to @kmsigma for the PowerShell support; and @serena and her team for all their hard work and support making our lives as monitoring engineers better every day.

Try it out yourself and let us know your experiences in the comments below!

orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Jim Hansen about using patching, credential management, and continuous monitoring to improve security of IoT devices.

Security concerns over the Internet of Things (IoT) are growing, and federal and state lawmakers are taking action. First, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017, which sought to “establish minimum security requirements for federal procurements of connected devices.” More recently, legislators in the state of California introduced Senate Bill No. 327, which stipulated manufacturers of IoT devices include “a reasonable security feature” within their products.

While these laws are good starting points, they don’t go far enough in addressing IoT security concerns.

IoT Devices: A Hacker’s Best Friend?

Connected devices all have the potential to connect to the internet and local networks and, for the most part, were designed for convenience and speed—not security. And since they’re connected to the network, they offer a backdoor through which other solutions can be easily compromised.

As such, IoT devices offer tantalizing targets for hackers. A single exploit from one connected device can lead to a larger, more damaging breach. Remember the Target hack from a few years ago? Malicious attackers gained a foothold into the retail giant’s infrastructure by stealing credentials from a heating and air conditioning company whose units were connected to Target’s network. It’s easy to imagine something as insidious—and even more damaging to national security—taking place within the Department of Defense, an early adopter of connected devices, or other agencies.

Steps for Securing IoT Devices

When security managers initiate IoT security measures, they’re not only protecting their devices, they’re safeguarding everything connected to those devices. Therefore, it’s important to go beyond the government’s baseline security recommendations and embrace more robust measures. Here are some proactive steps government IT managers can take to lock down their devices and networks.

  • Make patching and updating a part of the daily routine. IoT devices should be subject to a regular cadence of patches and updates to help ensure the protection of those devices against new and evolving vulnerabilities. This is essential to the long-term security of connected devices.

The Internet of Things Cybersecurity Improvement Act of 2017 specifically requires vendors to make their IoT devices patchable, but it’s easy for managers to go out and download what appears to be a legitimate update—only to find it’s full of malware. It’s important to be vigilant and verify security packages before applying them to their devices. After updates are applied, managers should take precautions to ensure those updates are genuine.
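As a simple illustration of that “verify before you apply” step, the sketch below compares a downloaded update file against a vendor-published SHA-256 checksum; the file name and checksum are placeholders, and signed packages are an even stronger check where vendors offer them:

```python
# Sketch: verify a downloaded IoT update against a published SHA-256 checksum
# before applying it. The file name and checksum below are placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"  # from the vendor
actual = sha256_of("camera-firmware-2.4.1.bin")

if actual.lower() != published.lower():
    raise SystemExit("Checksum mismatch -- do NOT apply this update.")
print("Checksum verified; the update appears genuine.")
```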

  • Apply basic credential management to interaction with IoT devices. Managers must think differently when it comes to IoT device user authentication and credential management. They should ask, “How does someone interact with this device?” “What do we have to do to ensure only the right people, with the right authorization, are able to access the device?” “What measures do we need to take to verify this access and understand what users are doing once they begin using the device?”

Being able to monitor user sessions is key. IoT devices may not have the same capabilities as modern information systems, such as the ability to maintain or view log trails or delete a log after someone stops using the device. Managers may need to proactively ensure their IoT devices have these capabilities.

  • Employ continuous threat monitoring to protect against attacks. There are several common threat vectors hackers can use to tap into IoT devices. SQL injection and cross-site scripting are favorite weapons malicious actors use to target web-based applications and could be used to compromise connected devices.

Managers should employ IoT device threat monitoring to help protect against these and other types of intrusions. Continuous threat monitoring can be used to alert, report, and automatically address any potentially harmful anomalies. It can monitor traffic passing to and from a device to detect whether the device is communicating with a known bad entity. A device in communication with a command and control system outside of the agency’s infrastructure is a certain red flag that the device—and the network it’s connected to—may have been compromised.
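To make that last point concrete, here’s a toy sketch that flags devices seen talking to known-bad destinations. The flow records and blocklist are invented placeholders; a real continuous-monitoring tool would consume NetFlow or syslog streams and curated threat-intelligence feeds instead:

```python
# Toy sketch: flag IoT devices seen talking to known-bad destinations.
# The flow records and blocklist here are made-up placeholders; real tools
# consume NetFlow/syslog continuously and pull curated threat-intel feeds.
known_bad = {"203.0.113.45", "198.51.100.7"}  # hypothetical C2 addresses

flow_records = [                              # (device, destination) pairs
    ("iot-cam-01", "93.184.216.34"),
    ("iot-thermostat-02", "203.0.113.45"),
]

for device, dest in flow_records:
    if dest in known_bad:
        print(f"ALERT: {device} contacted known-bad host {dest} -- investigate.")
```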

The IoT is here to stay, and it’s important for federal IT managers to proactively tackle the security challenges it poses. Bills passed by federal and state legislators are a start, but they’re not enough to protect government networks against devices that weren’t designed with security top-of-mind. IoT security is something agencies need to take into their own hands. Managers must understand the risks and put processes, strategies, and tools in place to proactively mitigate threats caused by the IoT.

Find the full article on Fifth Domain.


explorevm
Level 9

Submitted for your approval: a story of cloud horrors.

One of performance issues impacting production.

Where monthly cloud billing began spiraling out of control.

The following story is true. The names have been changed to protect the innocent.

During my consulting career, I’ve encountered companies at many different stages of their cloud journey. What was particularly fun about walking into this shop is they were already about 75% up into public cloud. The remaining 25% was working towards being migrated off their aging hardware. They seemed to be ahead of the game, so why were my services needed?

Let’s set up some info about the company, which I’ll call “ABC Co.” ABC Co. provides medical staff and medical management to many hospitals and clinics, with approximately 1,000 employees and contractors spread across many states. Being in both medical staffing and recordkeeping, ABC Co. was subject to many compliance regulations such as HIPAA, PCI, etc. Their on-premises data center was on older hardware nearing end of life, and given the size of their IT staff, they decided to move out of the data center business.

The data center architect at ABC Co. did his homework. He spent many hours learning about public cloud, crunching numbers, and comparing virtual machine configurations to cloud-based compute sizing. Additionally, due to compliance requirements, ABC Co. needed to use dedicated hosts in the public cloud. After factoring in all the sizing, storage capacity, and necessary networking, the architect arrived at an expected monthly spend number: $50,000. He took this number to the board of directors with a migration plan and outlined the benefits of going to the cloud versus refreshing their current physical infrastructure. The board was convinced and gave the green light to move into the public cloud.

Everything was moving along perfectly early in the project. The underlying cloud architecture of networking, identity and access management, and security was deployed. A few workloads were moved up into the cloud to great success. ABC Co. continued their migration, putting applications and remote desktop servers in the cloud, along with basic workloads such as email servers and databases. But something wasn’t right.

End users started to complain of performance issues on the RDP servers. Application processing had slowed to a crawl. Employees’ ability to perform their tasks was being impeded. The architect and cloud administrators added more remote desktop servers into the environment and increased their size. Sizing on the application servers, which were just Microsoft Windows Servers in the public cloud, was also increased. This alleviated the problems, albeit temporarily. As more and more users logged in to the public cloud-based services, performance and availability took a hit.

And then the bill showed up.

At first, it crept up slowly toward the anticipated $50,000 per month. Unfortunately, as a side effect of the ever-increasing resources, the bill rose to more than triple the original estimate presented to the board of directors. At the peak of the “crisis,” the bill surpassed $150,000 per month. This put the C-suite on edge. What was going on with the cloud migration project? How could the bill be so high when they had been promised a third of what was now being spent? It was time for the ABC Co. team to call for an assist.

This is where I entered the scene. I’ll start this next section of the story by stating this outright: I didn’t solve all their problems. I wasn’t a savior on a white horse galloping in to save the day. I did, however, help ABC Co. start to reduce their bill and get cloud spend under control.

One of the steps they implemented before I arrived was a scripted shutdown of servers during non-work hours. This cut off some of the wasteful spend on machines not being used. We also looked at the actual usage on all servers in the cloud. After running some scans, we found many servers not in use for 30 days or more being left on and piling onto the bill. These servers were promptly shut down, archived, then deleted after a set time. Applications experiencing performance issues were analyzed, and it was determined they could be converted to a cloud native architecture. And those pesky ever-growing remote desktop boxes? Smaller, more cost-effective servers were placed behind a load balancer to automatically boot additional servers should the user count demand it. These are just a few of the steps taken to reduce the cloud bill. Many things occurred after I had left, but this was a start to send them on the right path.
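A scripted off-hours shutdown like the one ABC Co. used can be surprisingly small. The sketch below assumes AWS and the boto3 SDK, with a hypothetical “Schedule” tag; you would normally run something like this on a timer (cron, or a scheduled Lambda function):

```python
# Sketch: stop instances tagged for after-hours shutdown (AWS and boto3 assumed).
# The tag key/value and region are hypothetical; schedule this via cron or Lambda.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopping:", ", ".join(instance_ids))
else:
    print("Nothing to stop.")
```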

  So, what can be learned from this story? While credit should be given for the legwork done to develop the strategy, on-premises virtual machines and public cloud-based instances aren’t apples to apples. Workloads behave differently in the cloud. The way resources are consumed has costs behind it; you can’t just add RAM and CPU to a problem server like you can in your data center (nor is it often the correct solution). Many variables go into a cloud migration. If your company is looking at moving to the cloud, be sure to ask the deep questions during the initial planning phase—it may just save hundreds of thousands of dollars.

sqlrockstar
Level 17

Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the ...

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable; it raises questions about basic privacy versus a surveillance state.

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content allowed. Similar to how I feel developers should be held accountable for deploying software that's not secure, or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.


orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of these vulnerabilities were discovered only within the past year—a sure sign of rising risk levels.

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

1. Identify Existing Threats and Vulnerabilities

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.
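As a toy illustration of automated configuration-change scanning, the sketch below compares current hashes of a few watched configuration files against a saved baseline. The paths and baseline file are placeholders, and real tools also record who changed what, and when:

```python
# Toy sketch: detect configuration drift by comparing file hashes to a saved baseline.
# The watched paths and baseline file are placeholders; real tools track who/what/when too.
import hashlib, json, os, sys

WATCHED = ["/etc/ssh/sshd_config", "/etc/sudoers"]  # hypothetical watch list
BASELINE = "config_baseline.json"

def current_hashes():
    hashes = {}
    for path in WATCHED:
        if os.path.exists(path):
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

if not os.path.exists(BASELINE):  # first run: record the baseline
    with open(BASELINE, "w") as f:
        json.dump(current_hashes(), f, indent=2)
    sys.exit("Baseline recorded.")

with open(BASELINE) as f:
    baseline = json.load(f)

for path, digest in current_hashes().items():
    if baseline.get(path) != digest:
        print(f"DRIFT: {path} has changed since the baseline was recorded.")
```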

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally important to be able to spot potential vulnerabilities in the cloud.

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.

3. Establish Ongoing User Training and Education Programs

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risks to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

Find the full article on Government Computer News.


gregwstuart
Level 11

While there are many silly depictions of machine learning and artificial intelligence throughout Hollywood, the reality delivers significant benefits. Administrators today oversee so many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks can be monotonous and tedious. Also, those tasks are generally required daily. In these cases, machine learning helps ease the burden on the administrator and helps make them more productive with their time. Lately, however, more people seem to think too much machine learning may replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for some tasks to be manned by a human, I don’t believe we’ll see humans replaced by machines (sorry, Terminator fans). Instead, I’ll highlight why I believe machine learning matters now and will continue to matter for generations to come.

Machine Learning Improves Administrator’s Lives

Some tasks administrators are responsible for can be very tedious and take a long time to complete. With machine learning, automation can run those tedious daily tasks on a schedule and more efficiently, as system behavior is learned and optimized on the fly. A great example comes in the form of spam mail or calls. Big-name telecom companies are now using machine learning to filter out the spam callers flooding cell phones everywhere. Call blocker apps can now screen calls for you based on spam call lists analyzed by machine learning and then block potential spam. In other examples, machine learning can analyze system behavior against a performance baseline and then alert the team of any anomalies and/or the need to make changes. Machine learning is here to help the administrator, not give them anxiety about being replaced.
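A toy version of “compare behavior to a baseline and alert on anomalies” might look like the sketch below, which flags response-time samples that sit far outside the baseline’s mean. The numbers are invented, and production tools use far more robust models than a simple z-score:

```python
# Toy sketch: flag metric samples that deviate sharply from a learned baseline.
# The numbers are invented; real systems use more robust models than a z-score.
from statistics import mean, stdev

baseline = [102, 98, 105, 99, 101, 97, 103, 100, 104, 96]  # e.g., ms response times
new_samples = [101, 99, 187, 102]

mu, sigma = mean(baseline), stdev(baseline)

for value in new_samples:
    z = (value - mu) / sigma
    if abs(z) > 3:
        print(f"Anomaly: {value} ms (z-score {z:.1f}) -- alert the team.")
```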

Machine Learning Makes Technology Better


There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, and security hardening. There’s something for every type of workload. When machine learning is applied to these software technologies, it enhances the application and increases the ease of use. Machine learning is doing just that: always learning. If an application workload suddenly increases, machine learning captures it and then uses an algorithm to determine how to react in those situations. When there’s a storage bottleneck, machine learning analyzes the traffic to determine what’s causing the backlog and then works out a possible solution to the problem for administrators to implement.

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends in the past 10 to 15 years have leaned towards reducing complexity. Virtualization technology has reduced the need for a large footprint in the data center and reduced the complexity of systems management. Hyperconverged infrastructure (HCI) has gone a step further and consolidated an entire rack’s worth of technology into one box. Machine learning takes it a step further by enabling automation and fast analysis of large data sets to produce actionable tasks. Tasks requiring a ton of administrative overhead are now reduced to an automated and scheduled task monitored by the administrator. Help desk analysts benefit from machine learning’s ability to recognize trending data to better triage certain incident tickets and reduce complexity in troubleshooting those incidents.

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.

ian0x0r
Level 10

The marketing machines of today often paint new technologies as the best thing since sliced bread. Sometimes, though, the new products are just a rehash of an existing technology. In this blog post, I’ll look at some of these.

As some of you may know, my tech background is heavily focused around virtualization and the associated hardware and software products. With this in mind, this post will have a slant towards those types of products.

One of the recent technology trends I have seen cropping up is something called dHCI, or disaggregated hyperconverged infrastructure. I mean, what is that? If you break it down to its core components, it’s nothing more than separate switching, compute, and storage. Why is this so familiar? Oh yeah—it’s called converged infrastructure. There’s nothing HCI about it. HCI is the convergence of storage and compute on to a single chassis. To me, it’s like going to a hipster café and asking for a hyperconverged sandwich. You expect a ready-to-eat, turnkey sandwich but instead, you receive a disassembled sandwich you have to construct yourself, and then somehow it’s better than the thing it was trying to be in the first place: a sandwich. No thanks. If you dig a little deeper, the secret sauce to dHCI is the lifecycle management software overlaying the converged infrastructure but hey, not everyone wants secret sauce with their sandwich.

If you take this a step further and label these types of components as cloud computing, nothing has really changed. One could argue true cloud computing is the ability to self-provision workloads, but rarely does a product labeled as cloud computing deliver those results, especially private clouds.

An interesting term I came across as a future technology trend is distributed cloud.¹ This sounds an awful lot like hybrid cloud to me. Distributed cloud is when public cloud service offerings are moved into private data centers on dedicated hardware to give a public cloud-like experience locally. One could argue this already happens the other way around with a hybrid cloud. Technologies like VMware on AWS (or any public cloud for that matter) make this happen today.

What about containers? Containers have held the media’s attention for the last few years now as a new way to package and deliver a standardized application portable across environments. The concept of containers isn’t new, though. Docker arguably brought containers to the masses, but if you look at this excellent article by Ell Marquez on the history of containers, we can see its roots go all the way back to the mainframe era of the late 70s and 80s.

The terminology used by data protection companies to describe their products also grinds my gears. Selling technology on being immutable. Immutable meaning it cannot be changed once it has been committed to media. Err, WORM media anyone? This technology has existed for years on tape and hard drives. Don’t try and sell it as a new thing.

While this may seem a bit ranty, if you’re in the industry, you can probably guess which companies I’m referring to with my remarks. What I’m hoping to highlight, though, is that not everything is new and shiny; some of it is just wrapped up in hype or clever marketing.

I’d love to hear your thoughts on this, if you think I’m right or wrong, and if you can think of any examples of old tech, new name.

¹Source: CRN https://www.crn.com/slide-shows/virtualization/gartner-s-top-10-technology-trends-for-2020-that-will...

Read more
2 24 673
rorz
Level 10

2019 was a busy year for DevOps, as measured by the events held on the topic. Whether it was DevOps days around the globe, DockerCon, DevOps Enterprise Summits, KubeCon, or CloudNativeCon, events are springing up to support this growing community. With a huge number of events already scheduled for 2020, people clearly plan to keep improving their skills with this technology. This is great—it’ll allow DevOps leaders to close capability gaps, and it should be a must for those on a DevOps journey in 2020.

Hopefully, we’ll see more organizations adopt the key stages of DevOps evolution (foundation building, normalization, standardization, expansion, automated infrastructure delivery, and self-service) by following this model. Understanding where you are on the journey helps you plan what needs to be satisfied at each level before trying to move on to an area of greater complexity. By looking at the levels of integration and the growing toolchain, you can see where you are and plan accordingly. I look forward to seeing and reading about the trials organizations face, and how they overcome them, as they look to further their DevOps movement in 2020.

You’ll probably hear terms like NoOps and DevSecOps gain more traction from certain analysts over the coming year. I believe the name DevOps is fine for what you’re trying to achieve. If you follow correct procedures, then security and operations already make up a large subset of your workflows, so you shouldn’t need to call them out as separate terms. If you’re not pushing changes to live systems, then you aren’t really doing any operations, and therefore not truly testing your code. So how can you go back and improve or iterate on it? As for security, while it’s hard to implement correctly and just as difficult to get teams working collaboratively, there’s a greater need than ever to get it right. Organizations that have matured and evolved through the stages above are far more likely to place emphasis on the integration of security than those just starting out. Improved security posture will be a key talking point as we progress through 2020 and into the next decade.

Kubernetes will gain even more ground in 2020 as more people look for a robust method of container orchestration to scale, monitor, and run any application, with many big-name software vendors investing in what they see as the “next battleground” for variants of the open-source application management tool.

Organizations will start to invest in more use of artificial intelligence, whether it be for automation, remediation, or improved testing. You can’t deny artificial intelligence and machine learning are hot right now and will seep into this aspect of technology in 2020. The best place to try this is with a cloud provider, which saves you the need to invest in hardware and can get you up and running in minutes.

Microservices and container infrastructure will be another area of growth within the coming 12 months. Container registries are beneficial here, allowing companies to apply policies, such as security and access control, to how they manage containers. JFrog Container Registry is probably going to lead the charge in 2020, but don’t think they’ll have it easy, as AWS, Google, Azure, and other software vendors have products fighting for this space.

These are just a few areas I expect to generate conversation and column inches as we move into 2020 and beyond, but they tell me this is the area to develop your skills in if you want to be in demand as we move into the second decade of this century.

Read more
1 9 393
sqlrockstar
Level 17

In Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying renting scooters to customers who then abandon them across cities *is not* a profitable business model? That's crazy!

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

It's helpful for a restaurant to publish their menu outside for everyone to see.

IEBYE5932.JPG

Read more
2 36 581
orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas about modernizing security along with agency infrastructure to reduce cyberthreats.

As agencies across the federal government modernize their networks to include and accommodate the newest technologies such as cloud and the Internet of Things (IoT), federal IT professionals are faced with modernizing security tactics to keep up.

There’s no proverbial silver bullet, no single thing capable of protecting an agency’s network. The best defense is implementing a range of tactics working in concert to provide the most powerful security solution.

Let’s take a closer look.

Access Control

Something nearly all of us take for granted is access. The federal IT pro can help dramatically improve the agency’s security posture by reining in access.

There can be any number of reasons for federal IT pros to set overly lenient permissions—from a lack of configuration skills to a limited amount of time. The latter is often the more likely culprit, as access control applies to many aspects of the environment. From devices to file folders and databases, it’s difficult and time-consuming to set and manage access rights.

Luckily, an increasing number of tools are available to help automate the process. Some of these tools can go so far as to automatically define permission parameters, create groups and ranges based on these parameters, and automatically apply the correct permissions to any number of devices, files, or applications.

Once permissions have been set successfully, be sure to implement multifactor authentication to ensure access controls are as effective as possible.
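If you’re curious what TOTP-based multifactor authentication looks like under the hood, here’s a minimal sketch using the open-source pyotp library (my pick for illustration, not a product recommendation); in production, MFA belongs in a vetted identity platform rather than hand-rolled code:

# Minimal TOTP sketch using the open-source pyotp library (pip install pyotp).
# Illustrative only; the account name and issuer below are made up.
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI the user can
# scan into an authenticator app as a QR code.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="jane.doe@agency.gov",
                                          issuer_name="Example Agency VPN")
print("Provisioning URI:", uri)

# Login: after the password checks out, also require the current six-digit code.
totp = pyotp.TOTP(secret)
user_supplied_code = totp.now()  # stand-in for whatever the user types in
print("MFA passed" if totp.verify(user_supplied_code) else "MFA failed")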

Diverse Protection

The best protection for a complex network is multi-faceted security. Specifically, to ensure the strongest defense, invest in both cloud-based and on-premises security.

For top-notch cloud-based security, consider the security offerings of the cloud provider with as much importance as other benefits. Too many decision-makers overlook security in favor of more bells and whistles.

Along similar lines of implementing diverse, multi-faceted security, consider network segmentation. If an attack happens, the federal IT pro should be able to shut down a portion of the network to contain the attack while the rest of the network remains unaffected.

Testing

Once the federal IT pro has put everything in place, the final phase—testing—will quickly become the most important aspect of security.

Testing should include technology testing (penetration testing, for example), process testing (is multi-factor authentication working?), and people testing (testing the weakest link).

People testing may well be the most important part of this phase. Increasingly, security incidents caused by human error are becoming one of the federal government’s greatest threats. In fact, according to a recent Cybersecurity Survey, careless and malicious insiders topped the list of security threats for federal agencies.

Conclusion

There are tactics federal IT pros can employ to provide a more secure environment, from enhancing access control to implementing a broader array of security defenses to instituting a testing policy.

While each of these is important individually, putting them together goes a long way toward strengthening any agency’s security infrastructure.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
1 5 232
wollmannbruno
Level 10

Part 1 of this series introduced IT Service Management (ITSM) and a few of the adaptable frameworks available to fit the needs of an organization. This post focuses on the benefits of using a couple of the principles from the Lean methodology to craft ITSM.

I’d like to acknowledge and thank Trish Livingstone and Julie Johnson for sharing their expertise in this subject area. I leaned on them heavily.

The Lean Methodology

The Lean methodology is a philosophy and mindset focused on driving maximum value to the customer while minimizing waste. These goals are accomplished through continuous improvement and respect for people. More on this in a bit.

Because the Lean methodology originated in the Toyota Production System, it’s most commonly associated with applications in manufacturing. However, over the years, Lean has brought tremendous benefits to other industries as well, including knowledge work. Probably the most recognizable form of Lean in knowledge work is the Agile software development method.

Continuous Improvement

“If the needs of the end user are the North Star, are all the services aligned with achieving their needs?” — Julie Johnson

Continuous improvement should be, not surprisingly, continuous. Anyone involved in a process should be empowered to make improvements to it at any time. However, some situations warrant bringing a team of people together to effect more radical change than can be achieved by a single person.

3P—Production, Preparation, Process

One such situation is right at the beginning of developing a new product or process.

This video and this video demonstrate 3P being used to design a clinic and a school, respectively. In both cases, the design team includes stakeholders of all types to gather the proper information to drive maximum value to the customer. The 3P for the clinic consists of patients, community members, caregivers, and support staff. The same process for the school includes students, parents, teachers, and community members.

While both examples are from tangible, brick-and-mortar situations, the 3P is also helpful in knowledge work. One of the most challenging things to gather when initiating a new project is proper and complete requirements. Without sufficient requirements, the creative process of architecting and designing IT systems and services often ends up with disappointing outcomes and significant rework. This 3P mapped out the information flows that fed directly into the design of new safety alert system software for healthcare.

The goal of the 3P is to get the new initiative started on the right path by making thoughtful, informed decisions using information gathered from all involved in the process. A 3P should begin with the team members receiving a short training session on how to participate in the 3P. Armed with this knowledge and facilitated by a Lean professional keeping things on track, the 3P will produce results more attuned to the customer’s needs.

Value Stream Mapping (VSM)

“If you bring visibility to the end-to-end process, you create understanding around why change is needed, which builds buy-in.” — Trish Livingstone

Another situation warranting a team of people coming together to effect more radical change is when trying to improve an existing process or workflow. Value Stream Mapping (VSM) is especially useful when the process contains multiple handoffs or islands of work.

For knowledge work, VSM is the technique of analyzing the flow of information through a process delivering a service to a customer. The process and flow are visually mapped out, and every step is marked as either adding value or not adding value.

There are many good people in IT, and many want to do good work. As most IT departments operate in silos, it’s natural to think if you produce good quality work, on time, and then hand it off to the next island in the process, the customer will see the most value. The assumption here is each island in the process is also producing good quality work on time. This style of work is known as resource efficiency. The alternative, referred to as flow efficiency, focuses on the entire process to drive maximum customer value. This video, although an example from healthcare and not knowledge work, explains why flow efficiency can be superior to resource efficiency.

I was presented with a case study where a government registrar took an average of 53 days to fulfill a request for a new birth certificate. VSM revealed many inefficiencies because of tasks adding no value. The process after the VSM fulfilled requests in 3 days without adding new resources. The registrar’s call volume fell by 60% as customers no longer needed to phone for updates.

It’s easy to see how Value Stream Mapping could help optimize many processes in IT, including change management, support ticket flow, and maintenance schedules, to name a few.

Respect for People

Respect for people is one of the core tenets of Lean and a guiding principle for Lean to be successful in an organization.

Respect for the customer eliminates waste. Waste is defined as anything the customer wouldn’t be willing to pay for. In the case of government service, waste is anything they wouldn’t want their tax dollars to pay for.

Language matters when respecting the customer. The phrase “difficult user” is replaced with “a customer who has concerns.” As demonstrated in the 3P videos above, rather than relying on assumptions or merely doing things “the way we’ve always done them,” customers are actively engaged to meet their needs better.

Lean leadership starts at the top. Respect for employees empowers them to make decisions allowing them to do their best work. Leadership evolves to be less hands-on and takes on a feeling of mentorship.

Respect for coworkers keeps everyone’s focus on improving the processes delivering value to the customer. Documenting these processes teaches new skills, so everyone can participate and move work through the system faster.

The 4 Wins of Lean

Using some or all of the Lean methodology to customize IT service management can be a win, win, win, win situation. Potential benefits could include:

1. Better value for the ultimate customer (employees)

    • Reduced costs by eliminating waste
    • Faster service or product delivery
    • Better overall service

2. Better for the people working in the process

    • Empowered to make changes
    • Respected by their leaders
    • Reduced burden

3. Better financially for the organization

    • Reduced waste
    • Increased efficiency
    • Reduced cost

4. Better for morale

    • Work has meaning
    • No wasted effort
    • Work contributes to the bottom line

Read more
1 23 557
gregwstuart
Level 11

There have been so many changes in data center technology in the past 10 years, it’s hard to keep up at times. We’ve gone from a traditional server/storage/networking stack with individual components, to a hyperconverged infrastructure (HCI) where it’s all in one box. The data center is more software-defined today than it ever has been with networking, storage, and compute being abstracted from the hardware. On top of all the change, we’re now seeing the rise of artificial intelligence (AI) and machine learning. There are so many advantages to using AI and machine learning in the data center. Let’s look at ways this technology is transforming the data center.

Storage Optimization

Storage is a major component of the data center. Having efficient storage is of the utmost importance. So many things can go wrong with storage, especially in the case of large storage arrays. Racks full of disk shelves with hundreds of disks, of both the fast and slow variety, fill data centers. What happens when a disk fails? The administrator gets an alert and has to order a new disk, pull the old one out, and replace it with the new disk when it arrives. AI uses analytics to predict workload needs and possible storage issues by collecting large amounts of raw data and finding trends in the usage. AI also helps with budgetary concerns. By analyzing disk performance and capacity, AI can help administrators see how the current configuration performs and order more storage if it sees a trend in growth.
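As a back-of-the-envelope illustration (not a feature of any particular tool), the growth-trend idea can be as simple as fitting a line to recent usage and projecting when the array fills up:

# Rough trend-based capacity forecast: fit a straight line to recent daily
# usage and estimate how many days remain before the array is full.
def days_until_full(daily_used_tb, capacity_tb):
    n = len(daily_used_tb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_tb) / n
    numerator = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_tb))
    denominator = sum((x - x_mean) ** 2 for x in xs)
    slope = numerator / denominator  # TB of growth per day
    if slope <= 0:
        return None  # usage is flat or shrinking; no projected fill date
    return (capacity_tb - daily_used_tb[-1]) / slope

# Made-up example: 30 days of usage creeping from 70 TB toward a 100 TB array.
usage = [70 + 0.4 * day for day in range(30)]
remaining = days_until_full(usage, capacity_tb=100)
print(f"At the current growth rate, the array fills in roughly {remaining:.0f} days")

Real AI-driven tools go much further than a straight line, but the forecast-then-act pattern is the same.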

Fast Workload Learning

Capacity planning is an important part of building and maintaining a data center. Fortunately, with technology like HCI being used today, scaling out as the workload demands is a simpler process than it used to be with traditional infrastructure. AI and machine learning capture workload data from applications and use it to analyze the impact of future use. Having technology aid in predicting the demand of your workloads can be beneficial in avoiding downtime or loss of service for the application user. This is especially important in the process of building a new data center or stack for a new application. The analytics AI provides help administrators see the full needs of the data center, from cooling to power and space.

Less Administrative Overhead

The new word I love to pair with artificial intelligence and machine learning is “autonomy.” AI works on its own to analyze large amounts of data, find trends, and create performance baselines in data centers. In some cases, data center activities such as power and cooling can use AI to analyze power loads and environmental variables, then adjust autonomously (love that word!) on the fly to keep performance at a high level. In a traditional setting, you’d need several different tools and administrators or NOC staff to handle the analysis and monitoring.

Embrace AI and Put it to Work

The days of AI and machine learning being scary, unknown things are past. Take stock of your current data center technology and decide whether AI and/or machine learning would be of value to your project. Another common concern is AI replacing lots of jobs soon, and while there’s some truth to it, it isn’t something to fear. It’s time to embrace the benefits of AI and use it to enhance the current jobs in the data center instead of fearing it and missing out on the improvements it can bring.

Read more
2 9 383
rorz
Level 10

There are many configuration management, deployment, and orchestration tools available, ranging from open-source tools to automation engines. Ansible is one such software stack available to cover all the bases, and seems to be gaining more traction by the day. In this post, we’ll look at how this simple but powerful tool can change your software deployments by bringing consistency and reliability to your environment.

Ansible gives you the ability to provision, control, configure, and deploy applications to multiple servers from a single machine. Ansible allows for successful repetition of tasks, can scale from one to 10,000 or more endpoints, and uses YAML to apply configuration changes, which is easy to read and understand. It’s lightweight, uses SSH, PowerShell, and APIs for access, and as mentioned above, is an open-source project. It’s also agentless, differentiating it from some similar competing tools in this market. Ansible is designed with your whole infrastructure in mind rather than individual servers. If you need dashboard monitoring, then Ansible Tower is for you.

Once Ansible is installed on a master server, you create an inventory of machines or nodes for it to perform tasks on. You can then start to push configuration changes to nodes. An Ansible playbook is a configuration file containing the collection of tasks you want executed on a remote server. Playbooks can take you from simple management and configuration of remote machines all the way to a multifaceted deployment. Here are five tips to start getting the most out of what the tool can deliver.

  1. Passwordless (key-based) SSH authentication is the way to go. Probably one you should undertake from day one, and not just for Ansible. It uses a public key shared between hosts based on the SSH v2 standard; most OSs create 2048-bit keys by default, which can be increased to 4096-bit where needed. No longer do you have to type in long complex passwords for every login session—this more reliable and easier-to-maintain method makes your environment both more secure and easier for Ansible to work with.
  2. Use check mode to dry run most modules. If you’re not sure how a new playbook or update will perform, dry runs are for you. With configuration management and Ansible’s notion of desired state, you can use check mode to preview what changes would be applied to the system in question. Simply add the --check flag to the ansible-playbook command for a glance at what will happen (see the sketch after this list).
  3. Use Ansible roles. This is where you break a playbook out into multiple files. A role’s file structure groups files, tasks, and variables, which moves you toward modular code that can be adapted and upgraded independently and allows configuration steps to be reused, making changes and improvements to your Ansible configurations easier.
  4. Ansible Galaxy is where you should start any new project. It gives you access to roles, playbooks, and modules from the community and vendors—why reinvent the wheel? Galaxy is a free site for searching, rating, downloading, and even reviewing community-developed Ansible roles. This is a great way to get a helping hand with your automation projects.
  5. Use a third-party vault software. Ansible Vault is functional, but a single shared secret makes it hard to audit or control who has access to all the nodes in your environment. Look for something with a centrally managed repository of secrets you can audit and lock down in a security breach scenario. I suggest HashiCorp Vault as it can meet all these demands and more, but others are available.
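To make tip 2 concrete, here’s a small sketch of a wrapper that refuses to apply a playbook until a dry run passes. The playbook and inventory file names are placeholders for your own; the --check and --diff flags are standard ansible-playbook options:

# Dry-run-then-apply wrapper around the ansible-playbook CLI (tip 2).
# "site.yml" and "inventory.ini" are placeholder names for your own files.
import subprocess
import sys

PLAYBOOK = "site.yml"
INVENTORY = "inventory.ini"

def run_playbook(extra_args=()):
    cmd = ["ansible-playbook", "-i", INVENTORY, PLAYBOOK, *extra_args]
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # --check previews changes without applying them; --diff shows file edits.
    if run_playbook(["--check", "--diff"]) != 0:
        sys.exit("Dry run failed or reported errors; not applying changes.")
    sys.exit(run_playbook())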

Hopefully you now have a desire to either start using Ansible and reduce time wasted on rinse and repeat configuration tasks, or you’ve picked up a few tips to take your skills to the next level and continue your DevOps journey.

Read more
3 14 828
sqlrockstar
Level 17

Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog as well as some Python and data science learning.

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.

IMG_3763.JPG

Read more
1 31 549
orafik
Level 11

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen about the state of security and insider threats for the federal government and what’s working to improve conditions. We’ve been doing these cyber surveys for years and I always find the results interesting.

Federal IT professionals feel threats posed by careless or malicious insiders and foreign governments are at an all-time high, yet network administrators and security managers feel like they’re in a better position to manage these threats.

Those are two of the key takeaways from a recent SolarWinds federal cybersecurity survey, which asked 200 federal government IT decision makers and influencers their impressions regarding the current security landscape.

The findings showed enterprising hackers are becoming increasingly focused on agencies’ primary assets: their people. On the bright side, agencies feel more confident to handle risk thanks to better security controls and government-mandated frameworks.

People Are the Biggest Targets

IT security threats posed by careless or untrained insiders and nation states have risen substantially over the past five years. Sixty-six percent of survey respondents said things have improved or are under control when it comes to malicious threats, but when asked about careless or accidental insiders, the number decreased to 58%.

Indeed, hackers have seen the value in targeting agencies’ employees. People can be careless and make mistakes—it’s human nature. Hackers are getting better at exploiting these vulnerabilities through simple tactics like phishing attacks and stealing or guessing passwords. The most vulnerable are those with access to the most sensitive data.

There are several strategies agencies should consider to even the playing field.

Firstly, ongoing training must be a top priority. All staff members should be hyper-aware of the realities their agencies are facing, including the potential for a breach and what they can do to stop it. Simply creating unique, hard-to-guess passwords or reporting suspicious emails might be enough to save the organization from a perilous data breach. Agency security policies must be updated and shared with the entire organization at least once a month, if not more. Emails can help relay this information, but live meetings are much better at conveying urgency and importance.

Employing a policy of zero trust is also important. Agency workers aren’t bad people, but everyone makes mistakes. Data access must be limited to those who need it and security controls, such as access rights management, should be deployed to monitor and manage access.

Finally, agencies must implement automated monitoring solutions to help security managers understand what’s happening on their network at all times. They can detect when a person begins trying to access data they normally wouldn’t attempt to retrieve or don’t have authorization to view. Or perhaps when someone in China is using the login credentials of an agency employee based in Virginia. Threat monitoring and log and event management tools can flag these incidents, making them essential for every security manager’s toolbox.
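The Virginia-versus-China example boils down to a simple rule. Here’s a drastically simplified sketch of it; real monitoring tools correlate many more signals (time of day, device, impossible travel, and so on), and the users and events below are invented for illustration:

# Flag logins whose source country doesn't match the user's home location.
# All data here is made up for illustration.
HOME_LOCATION = {"asmith": "US", "bjones": "US"}

login_events = [
    {"user": "asmith", "country": "US", "time": "2020-02-10T09:02:11Z"},
    {"user": "bjones", "country": "CN", "time": "2020-02-10T03:44:57Z"},
]

def flag_suspicious(events, home_location):
    for event in events:
        expected = home_location.get(event["user"])
        if expected and event["country"] != expected:
            yield event

for event in flag_suspicious(login_events, HOME_LOCATION):
    print(f"ALERT: {event['user']} logged in from {event['country']} at {event['time']}")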

Frameworks and Best Practices Being Embraced, and Working

Most survey respondents believe they’re making progress managing risk, thanks in part to government mandates. This is a sharp change from the previous year’s cybersecurity report, when more than half of the respondents indicated regulations and mandates posed a challenge. Clearly, agencies are starting to get used to—and benefit from—programs like the Risk Management Framework (RMF) and Cybersecurity Framework.

These frameworks help make security a fundamental component of government IT and provide a roadmap on how to do it right. With frameworks like the RMF, developing better security hygiene isn’t a matter of “should we do this?” but a matter of “here’s how we need to do this.” The frameworks and guidelines bring order to chaos by giving agencies the basic direction and necessities they need to protect themselves and, by extension, the country.

A New Cold War

It’s encouraging to see recent survey respondents appearing to be emboldened by their cybersecurity efforts. Armed with better tools, guidelines, and knowledge, they’re in a prime position to defend their agencies against those who would seek to infiltrate and do harm.

But it’s also clear this battle is only just beginning. As hackers get smarter and new technologies become available, it’s incumbent upon agency IT professionals to not rest on their laurels. We’re entering what some might consider a cyber cold war, with each side stocking up to one-up the other. To win this arms race, federal security managers must continue to be innovative, proactive, and smarter than their adversaries.

Find the full article on Federal News Network.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 11 486
scott.driver
Level 12

“Security? We don’t need no stinking security!”

I’ve actually heard a CTO utter words to this effect. If you subscribe to a similar mindset, here are five ways you too can stink at information security.

  • Train once and never test

Policy says you and your users need to be trained once a year, so once a year is good enough. Oh, and make sure you never test the users either—it’ll only confuse them.

  • Use the same password

It just makes life so much easier. Oh, and a good place to store your single password is in your email, or on Post-It notes stuck to your monitor.

  • Patching breaks things, so don’t patch

Troubleshooting outages is a pain. If you don’t patch and you don’t look at the device in the corner, then it won’t break.

  • The firewall will protect everything on the inside

We have the firewall! The bad guys stay out, so on the inside, we can let everyone get to everything.

  • Just say no and lock EVERYTHING down

If we say no to everything, and we restrict everything, then nothing bad will happen.

OK, now it’s out of my system—the above is obviously sarcasm.

But some of you will work in places that subscribe to one or more of the above. I’ve been there. But what can YOU do? Well, it’s 2020, and information security is everyone’s responsibility. One thing I commonly emphasize with our staff is no cybersecurity tool can ever be 100% effective. To even think about approaching 100% efficacy, everyone has to play a role as the human firewall. As IT professionals, our jobs aren’t just to put the nuts and bolts in place to keep the org safe. It’s also our job to educate our staff about the impact information security has on them.

So, let’s flip the above “tips” on their head and talk about what you can do to positively affect the cyber mindsets in your organization.

Train and Test Your Users Often

Use different training methods. Our head of marketing likes to use the phrase “six to eight to resonate.” You’re trying to keep the security mindset at the front of your staff’s consciousness. In addition to frequent CBT trainings, use security incidents as a learning mechanism. One of our most effective awareness campaigns was when we gamified a phishing campaign. The winner got something small like a pair of movie tickets. This voluntary “training” activity got a significant portion of our staff to actively respond. Don’t minimize the positive effect incentives can have on your users.

Lastly, speaking of incentives, make sure you run actual simulated phishing exercises. It’s a safe way to train your users. It’s also an easy way to test the effectiveness of your InfoSec training program and let users know how important data security is to the business.

Practice Good Password Hygiene

Security pros generally agree you should use unique, complex passwords or passphrases for every service you consume. This way, when (not if) an account you’re using is compromised, the account is only busted for a single service, rather than everywhere. If you reuse passwords across sites, you may be susceptible to credential-stuffing campaigns.

Once you get beyond a handful of sites, it’s impossible to expect your users to remember all their passwords. So, what do you do? The easiest and most effective thing to do is introduce a password management solution. Many solutions out there run as a SaaS offering. The best solutions will dramatically impact security, while simplifying operations for your users. It’s a win-win!

One final quick point before moving on: make sure someone in your org is signed up for notifications from haveibeenpwned.com. At the time of this writing, there are over 9 BILLION accounts on HIBP. This valuable free service can be an early warning sign if users in your org have been scooped up in data breaches. Additionally, SolarWinds Identity Monitor can notify you if your monitored domains or email addresses have been exposed in a data leak.
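Beyond breach notifications, HIBP also provides a free Pwned Passwords range API you can fold into your own tooling. Here’s a minimal sketch using Python and the requests package; thanks to the API’s k-anonymity design, only the first five characters of the password’s SHA-1 hash ever leave your machine:

# Check a candidate password against the free HIBP Pwned Passwords range API.
# Only the first five characters of the SHA-1 hash are sent (k-anonymity).
# Requires the third-party "requests" package.
import hashlib
import requests

def pwned_count(password):
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

hits = pwned_count("P@ssw0rd")
print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")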

Patch Early and Often

I’m guessing I’m not alone in having worked at places afraid of applying security patches. Let’s just say if you’ve been around IT operations for a while, chances are you have battle scars from patching. Times change, and in my opinion, vendors have gotten much better at QAing their patches. Legacy issues aside, I’ll give you three reasons to patch frequently: Petya, NotPetya, and WannaCry. These three instances of ransomware caused some of the largest computer disruptions in recent memory. They were also completely preventable, as Microsoft released a patch plugging the EternalBlue vulnerability months before attacks were seen in the wild. From a business standpoint, patching makes good fiscal sense. The operational cost related to a virus can be extreme—just ask Maersk, the company projected to lose $300 million from NotPetya. This doesn’t even account for the reputational risk a company can suffer from a data breach, which in many cases can be just as detrimental to the long-term vibrancy of a business.

Firewall Everywhere

If you’re breached, you want to limit the bad actors’ ability to pivot their attack from a web server to a system with financials. A traditional DMZ demonstrates this technique, but it may not be enough, which has led to the rise of micro-segmentation over the last few years. The added benefit of a micro-segmentation approach is that while you’re limiting the attack surface, you can also handle events programmatically, like having the firewall automatically isolate a VM when a piece of malware has been observed on it.

Work With the Business to Understand the “Right” Level of Security

If you’ve read my other blog posts, you know I believe IT organizations should partner with business units. But more than a couple of us have seen InfoSec folks who just want to lock everything down to the point where running the business can be difficult. When this sort of combative approach is taken, distrust between the units can be sown, and shadow IT is one of the possible results.

Instead, work with the BUs to understand their needs and craft your InfoSec posture based on that. After all, an R&D team or a Dev org needs different levels of security than credit card processing, which must follow regulatory requirements. This for me was one of the most resonant messages to come out of The Phoenix Project: if you craft the security solution to fit the requirements, the business can better meet their needs, Security can still have an appropriate level of rigor, and better relationships should ensue. Win, win, win.

Security is a balancing act. We all have a role to play in cybersecurity. If you can apply these five simple information security hygiene tips, then you’re on the path towards having a secure organization, and I think we can all agree, that’s something to be thankful for.

Read more
6 42 1,896
explorevm
Level 9

Two roads diverged in a yellow wood,

And sorry I could not travel both

-Robert Frost

At this point in our “Battle of the Clouds” journey, we’ve seen what the landscape of the various clouds looks like, cut through some of the fog around cloud, and glimpsed what failing to plan can do to your cloud migration. So, where does that leave us? Now it’s time to lay the groundwork for the data center’s future. Beginning this planning and assessment phase can seem daunting, so in this post, we’ll lay out some basic guidelines and questions to help build your roadmap.

First off, let’s start with what’s already in the business’s data center.

The Current State of Applications and Infrastructure

When looking forward, you must always look at where you’ve been. By understanding previous decisions, you can gain an understanding of the business’s thinking, see where mistakes may have been made, and work to correct them in the newest iteration of the data center. Inventory everything in the data center, both hardware and software. You’d be surprised what may play a critical role in preventing a migration to new hardware or to a cloud. Look at the applications in use not only by the IT department, but also the business, as their implementation will be key to a successful migration.

  • How much time is left on support for the current hardware platforms?
    • This helps determine how much time is available before the plan has to be executed
  • What vendors are currently in use in the data center?
    • Look at storage, virtualization, compute, networking, and security
    • Many existing partners already have components to ease the migration to public cloud
  • What applications are in use by IT?
    • Automation tools, monitoring tools, config management tools, ticketing systems, etc.
  • What applications are in use by the business?
    • Databases, customer relationship management (CRM), business intelligence (BI), enterprise resource planning (ERP), call center software, and so on

What Are the Future-State Goals of the Business?

As much as most of us want to hide in our data centers and play with the nerd knobs all day, we’re still part of a business. Realistically, our end goal is to deliver consistent and reliable operations to keep the business successful. Without a successful business, it doesn’t matter how cool the latest technology you installed is, its capabilities, or how many IOPS it can process—you won’t have a job. Planning out the future of the data center has to line up with the future of the company. It’s a harsh reality we live in. But it doesn’t mean you’re stuck in your decision making. Use this opportunity to make the best choices on platforms and services based on the collective vision of the company.

  • Does the company run on a CapEx or OpEx operating model, or a mixture?
    • This helps guide decisions around applications and services
  • What regulations and compliances need to be considered?
    • Regulations such as HIPAA
  • Is the company attempting to “get out of the data center business?”
    • Why does the C-suite think this, and should it be the case?
  • Is there heavy demand for changes in the operations of IT and its interaction with end users?
    • This could lead to more self-service interactions for end users and more automation by admins
  • How fast does the company need to react and evolve to changes in the environment?
    • DevOps and CI/CD can come into effect
    • Will applications need to be spun up and down quickly?
  • Of the applications inventoried in the current state planning, how many could be moved to a SaaS product?
    • Whether moving to a public cloud or simply staying put, the ability to reduce the total application footprint can affect costs and sizing.
    • This can also call back to the OpEx or CapEx question from earlier

Using What You’ve Collected

All the information is collected and it’s time to start building the blueprint, right? Well, not quite. One final step in the planning journey should be a cloud readiness assessment. Many value-added resellers, managed services providers, and public cloud providers can help the business with this step. This step collects deep technical data about the data center and applications, maps all dependencies, and provides an outline of what it’d look like to move them to a public cloud. This information is crucial, as it lays out what can easily be moved as well as which applications would need to be refactored or completely rebuilt. The data can be applied to a business impact analysis as well, which will give guidance on what these changes can do to the business’s ability to execute.

This seems like a lot of work. A lot of planning and effort goes into deciding to go to the public cloud or to stay put. To stick to “what works.” Honestly, many companies look at the work and decide to stay on-premises. Some choose to forgo the planning and have their cloud projects fail. I can’t tell you what to do in your business’s setting—you have to make the choices based on your needs. All I can do is offer up advice and hope it helps.

https://www.poetryfoundation.org/poems/44272/the-road-not-taken

Read more
3 9 477
gregwstuart
Level 11

Social media has become a mainstream part of our lives. Day in and day out, most of us use social media to micro-blog, interact with family, share photos, capture moments, and have fun. Over the years, social media has changed how we interact with others, how we use our language, and how we see the world. Since social media is so prevalent today, it’s interesting to see how artificial intelligence (AI) is changing it. When was the last time you used social media for fun? What about for business? School? There are so many applications for social media, and AI is changing the way we use it and how we digest the tons of data out there.

Social Media Marketing

I don’t think the marketing world has ever been more excited about an advertising vehicle than it is about social media marketing. Did you use social media today? Chances are this article triggered you to pick up your phone and check at least one of your feeds. When you scroll through your feeds, how many ads do you get bombarded with? The way I see it, there are two methods of social media marketing: overt and covert. Some ads are overt and obviously placed in your feed to get your attention. AI allows those ads to be placed in user feeds based on their user data and browsing habits. AI crunches the data and pulls in ads relevant to the current user. Covert ads are a little sneakier, hence the name. Covert social media marketing is slipped into your feeds via paid influencers or YouTube/Instagram/Twitter mega users with large followings. Again, AI analyzes the data and posts on the internet to bring you the most relevant images and user posts.

Virtual Assistants

Siri, Alexa, Cortana, Bixby… whatever other names are out there. You know what I’m talking about: the virtual assistant living in your phone, car, or smart speaker, always listening and willing to pull up whatever info you need. There’s no need to tweet while driving or search for the highest-rated restaurant on Yelp! while biking—let Siri do it for you. When you want to use Twitter, ask Alexa to tweet and then compose it all with your voice. Social media applications tied into virtual assistants make interacting with your followers much easier. AI allows these virtual assistants to tweet, type, and text via dictation easily and accurately.

Facial Recognition

Facebook is heavily invested in AI, as evidenced by its facial recognition technology, which automatically tags users in a picture via AI-driven software. You can also see this technology in place at Instagram and other photo-driven social media offerings. Using facial recognition makes it easier for any user who wants to tag family or friends with Facebook accounts. Is this important? I don’t think so, but it’s easy to see how AI is shaping the way we interact with social media.

Catching the Latest Trends

AI can bring the latest trends to your social media feed daily, even hourly if you want it to. Twitter is a prime example of how AI is used to crunch large amounts of data and track trends in topics across the world. AI has the ability to analyze traffic across the web and present the end user with breaking news, or topics suddenly generating a large spike in internet traffic. In some cases, this can help social media users get the latest news, especially as it pertains to personal safety and things to avoid. In other cases, it simply leads us to more social media usage, as witnessed by the recent meteoric trending when a bunch of celebrities started using FaceApp to see how their older selves might look.

What About the Future?

It seems like what we have today is never enough. Every time a new iPhone comes out, you can read hundreds of articles online the next day speculating on the next iPhone design and features. Social media seems to be along the same lines, especially since we use it daily. I believe AI will shape the future of our social media usage by better aligning our recommendations and advertisements. Ads will become much better targeted to a specific user based on AI analysis and machine learning. Apps like LinkedIn, Pinterest, and others will be much more usable thanks to developers using AI to deliver social media content to the user based on their data and usage patterns.

Read more
0 11 547