
Geek Speak


This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

 

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

 

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app with well-documented security problems, owned by the equally problematic Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

 

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources already available on the machine to do damage—VB macros embedded in spreadsheets, for example. I don't blame Microsoft for saying they won't provide a security fix for this, but it would be nice if they could hint at ways to identify and halt this kind of malicious activity.

 

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

 

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

 

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

 

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

 

Play Dungeons & Deadlines

You might want to set aside some time for this one.

 

Walking through Forest Park this past Sunday, after a rainstorm the day before, with the temperature just right to catch the steam coming off the trees.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by Jim Hansen about using patching, credential management, and continuous monitoring to improve security of IoT devices.

 

Security concerns over the Internet of Things (IoT) are growing, and federal and state lawmakers are taking action. First, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017, which sought to “establish minimum security requirements for federal procurements of connected devices.” More recently, legislators in the state of California introduced Senate Bill No. 327, which stipulated manufacturers of IoT devices include “a reasonable security feature” within their products.

 

While these laws are good starting points, they don’t go far enough in addressing IoT security concerns.

 

IoT Devices: A Hacker’s Best Friend?

 

IoT devices can all reach the internet and local networks and, for the most part, were designed for convenience and speed—not security. And because they sit on the network, they offer a backdoor through which other systems can be easily compromised.

 

As such, IoT devices offer tantalizing targets for hackers. A single exploit of one connected device can lead to a larger, more damaging breach. Remember the Target hack from a few years ago? Malicious attackers gained a foothold in the retail giant's infrastructure by stealing credentials from a heating and air conditioning company whose units were connected to Target's network. It's easy to imagine something as insidious—and even more damaging to national security—taking place within the Department of Defense or other agencies that have been early adopters of connected devices.

 

Steps for Securing IoT Devices

 

When security managers initiate IoT security measures, they’re not only protecting their devices, they’re safeguarding everything connected to those devices. Therefore, it’s important to go beyond the government’s baseline security recommendations and embrace more robust measures. Here are some proactive steps government IT managers can take to lock down their devices and networks.

 

  • Make patching and updating a part of the daily routine. IoT devices should be subject to a regular cadence of patches and updates to help ensure the protection of those devices against new and evolving vulnerabilities. This is essential to the long-term security of connected devices.

 

The Internet of Things Cybersecurity Improvement Act of 2017 specifically requires vendors to make their IoT devices patchable, but it's easy for managers to go out and download what appears to be a legitimate update—only to find it's full of malware. It's important to be vigilant and verify security packages before applying them to devices, and to take precautions afterward to confirm the updates that were applied are genuine.
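One low-tech way to do that verification, for vendors who publish checksums alongside their firmware, is to compare the download against the published hash before it ever touches a device. Here's a minimal Python sketch of the idea; the file name and hash value are placeholders, not real vendor artifacts.

```python
# Sketch: verify a downloaded firmware/update package against the vendor's
# published SHA-256 checksum before applying it. The file name and hash
# below are placeholders for illustration.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # value from the vendor's site (placeholder)
actual = sha256_of("iot-camera-fw-2.4.1.bin")  # hypothetical download

if actual == published:
    print("Checksum matches; package appears genuine.")
else:
    print("Checksum mismatch; do NOT apply this update.")
```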

 

  • Apply basic credential management to interaction with IoT devices. Managers must think differently when it comes to IoT device user authentication and credential management. They should ask, “How does someone interact with this device?” “What do we have to do to ensure only the right people, with the right authorization, are able to access the device?” “What measures do we need to take to verify this access and understand what users are doing once they begin using the device?”

 

Being able to monitor user sessions is key. IoT devices may not have the same capabilities as modern information systems, such as the ability to maintain or view log trails or delete a log after someone stops using the device. Managers may need to proactively ensure their IoT devices have these capabilities.

 

  • Employ continuous threat monitoring to protect against attacks. There are several common threat vectors hackers can use to tap into IoT devices. SQL injection and cross-site scripting are favorite weapons malicious actors use to target web-based applications and could be used to compromise connected devices.

 

Managers should employ IoT device threat monitoring to help protect against these and other types of intrusions. Continuous threat monitoring can be used to alert, report, and automatically address any potentially harmful anomalies. It can monitor traffic passing to and from a device to detect whether the device is communicating with a known bad entity. A device in communication with a command and control system outside of the agency’s infrastructure is a certain red flag that the device—and the network it’s connected to—may have been compromised.
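The "known bad entity" check at the heart of that last point can be illustrated in a few lines. This is a minimal Python sketch, not any particular product's implementation, and it assumes hypothetical inputs: a plain-text threat feed of bad IP addresses and flow records reduced to (device, destination) pairs.

```python
# Minimal sketch: flag IoT devices talking to known-bad destinations.
# Assumes hypothetical inputs: bad_ips.txt (one IP per line) and flow
# records shaped like (device_name, destination_ip).

def load_bad_ips(path):
    """Read a threat feed of known-bad IP addresses, one per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def find_suspect_devices(flows, bad_ips):
    """Return devices whose traffic touched a known-bad destination."""
    suspects = {}
    for device, dest_ip in flows:
        if dest_ip in bad_ips:
            suspects.setdefault(device, set()).add(dest_ip)
    return suspects

if __name__ == "__main__":
    bad_ips = load_bad_ips("bad_ips.txt")
    flows = [
        ("thermostat-01", "203.0.113.7"),   # example records only
        ("camera-12", "198.51.100.23"),
    ]
    for device, hits in find_suspect_devices(flows, bad_ips).items():
        print(f"ALERT: {device} contacted known-bad address(es): {', '.join(sorted(hits))}")
```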

 

The IoT is here to stay, and it’s important for federal IT managers to proactively tackle the security challenges it poses. Bills passed by federal and state legislators are a start, but they’re not enough to protect government networks against devices that weren’t designed with security top-of-mind. IoT security is something agencies need to take into their own hands. Managers must understand the risks and put processes, strategies, and tools in place to proactively mitigate threats caused by the IoT.

 

Find the full article on Fifth Domain.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Back in October 2019, I shared my love of both Raspberry Pi (https://www.raspberrypi.org/) devices and the Pi-Hole (https://pi-hole.net/) software, and showed how—with a little know-how about Application Programming Interfaces (APIs) and scripting (in this case, I used it as an excuse to make my friend @kmsigma happy and expand my knowledge of PowerShell)—you could fashion a reasonable API-centric monitoring template in Server & Application Monitor (SAM). For those who are curious, you can find part 1 here: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2) and part 2 here: Don’t Shut Your Pi-Hole, Monitor It! (part 2 of 2).

 

It was a good tutorial, as far as things went, but it missed one major point: even as I wrote the post, I knew @Serena and her daring department of developers were hard at work building an API poller into SAM 2019.4. As my tutorial went to post, this new functionality was waiting in the wings, about to be introduced to the world.

 

Leaving the API poller out of my tutorial was a necessary deceit at the time, but not anymore. In this post I’ll use all the same goals and software as my previous adventure with APIs, but with the new functionality.

 

A Little Review

I’m not going to spend time here discussing what a Raspberry Pi or Pi-Hole solution is (you can find that in part 1 of the original series: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2)). But I want to take a moment to refamiliarize you with what we’re trying to accomplish.

 

Once you have your Raspberry Pi and Pi-Hole up and running, you get to the API by going to http://<your pi-hole IP or name>/admin/api.php. When you do, the data you get back looks something like this:

 

{"domains_being_blocked":115897,"dns_queries_today":284514,"ads_blocked_today":17865,"ads_percentage_today":6.279129,"unique_domains":14761,"queries_forwarded":216109,"queries_cached":50540,"clients_ever_seen":38,"unique_clients":22,"dns_queries_all_types":284514,"reply_NODATA":20262,"reply_NXDOMAIN":19114,"reply_CNAME":16364,"reply_IP":87029,"privacy_level":0,"status":"enabled","gravity_last_updated":{"file_exists":true,"absolute":1567323672,"relative":{"days":"3","hours":"09","minutes":"53"}}}

 

If you look at it with a browser capable of formatting JSON data, it looks a little prettier:

That’s the data we want to collect using the new Orion API monitoring function.
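If you want to sanity-check the endpoint before building anything in Orion, a few lines of script will do it. The original series used PowerShell; here's the equivalent quick check as a Python sketch, with the placeholder host swapped for your own Pi-Hole address.

```python
# Quick sanity check of the Pi-Hole API before wiring up the Orion API poller.
# Replace the placeholder host with your own Pi-Hole IP or name.
import requests

PIHOLE_URL = "http://<your pi-hole IP or name>/admin/api.php"

response = requests.get(PIHOLE_URL, timeout=5)
response.raise_for_status()
stats = response.json()

# Print a few of the values we'll monitor in SAM.
for key in ("domains_being_blocked", "dns_queries_today",
            "ads_blocked_today", "ads_percentage_today"):
    print(f"{key}: {stats[key]}")
```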

 

The API Poller – A Step-by-Step Guide

To start off, make sure you’re already monitoring the Raspberry Pi in question, so there’s a place to display this data. What’s different from the SAM component version is that you can monitor it using the ARM agent, SNMP, or even as a ping-only node.

 

Next, on the Node Details page for the Pi, look in the “Management” block and you should see an option for “API Poller.” Click that, then click “Create,” and you’re on your way.

You want to give this poller a name, or else you won’t be able to include these statistics in PerfStack (Performance Analyzer) later. You can also give it a description and (if required) the authentication credentials for the API.

 

On the next screen, put in the Pi-Hole API URL. As I said before, that’s http://<your pi-hole IP or Name>/admin/api.php. Then click “Send Request” to pull a sample of the available metrics.

The “Response” area below will populate with items. For the ones you want to monitor, click the little computer screen icon to the right.

If you want to monitor the value without warning or critical thresholds, click “Save.” Otherwise change the settings as you desire.

As you do, you’ll see the “Values to Monitor” list on the right column populate. Of course, you can go back and edit or remove those items later. Because nobody’s perfect.

Once you’re done, click “Save” at the bottom of the screen. Scroll down on the Node Details page and you’ll notice a new “API Pollers” Section is now populated.

I’m serious, it’s this easy. I’m not saying coding API monitors with PowerShell wasn’t a wonderful learning experience, and I’m sure down the road I’ll use the techniques I learned.

 

But when you have several APIs, with a bunch of values each, this process is significantly easier to set up and maintain.

 

Kudos once again to @kmsigma for the PowerShell support; and @serena and her team for all their hard work and support making our lives as monitoring engineers better every day.

 

Try it out yourself and let us know your experiences in the comments below!

Submitted for your approval: a story of cloud horrors.

A story of performance issues impacting production.

A story where monthly cloud billing spiraled out of control.

 

The following story is true. The names have been changed to protect the innocent.

 

During my consulting career, I’ve encountered companies at many different stages of their cloud journey. What was particularly fun about walking into this shop is they were already about 75% up into public cloud. The remaining 25% was working towards being migrated off their aging hardware. They seemed to be ahead of the game, so why were my services needed?

 

Let’s set up some info about the company, which I’ll call “ABC Co.” ABC Co. provides medical staff and medical management to many hospitals and clinics, with approximately 1,000 employees and contractors spread across many states. Being in both medical staffing and recordkeeping, ABC Co. was subject to many compliance regulations such as HIPAA, PCI, etc. Their on-premises data center was on older hardware nearing end of life, and given the size of their IT staff, they decided to move out of the data center business.

 

The data center architect at ABC Co. did his homework. He spent many hours learning about public cloud, crunching numbers, and comparing virtual machine configurations to cloud-based compute sizing. Additionally, due to compliance requirements, ABC Co. needed to use dedicated hosts in the public cloud. After factoring in all the sizing, storage capacity, and necessary networking, the architect arrived at an expected monthly spend number: $50,000. He took this number to the board of directors with a migration plan and outlined the benefits of going to the cloud versus refreshing their current physical infrastructure. The board was convinced and gave the green light to move into the public cloud.

 

Everything was moving along perfectly early in the project. The underlying cloud architecture of networking, identity and access management, and security was deployed. A few workloads were moved up into the cloud to great success. ABC Co. continued their migration, putting applications and remote desktop servers in the cloud, along with basic workloads such as email servers and databases. But something wasn’t right.

 

End users started to complain of performance issues on the RDP servers. Application processing had slowed to a crawl. Employees’ ability to perform their tasks was being impeded. The architect and cloud administrators added more remote desktop servers to the environment and increased their size. Sizing on the application servers, which were just Microsoft Windows Servers running in the public cloud, was also increased. This alleviated the problems, albeit temporarily. As more and more users logged in to the public cloud-based services, performance and availability took a hit.

 

And then the bill showed up.

 

At first, the bill crept up slowly toward the anticipated $50,000 per month. Unfortunately, as a side effect of the ever-increasing resources, it then rose to more than triple the original estimate presented to the board of directors. At the peak of the “crisis,” the bill surpassed $150,000 per month. This put the C-suite on edge. What was going on with the cloud migration project? How was the bill so high when they had been promised a third of what was now being spent? It was time for the ABC Co. team to call for an assist.

 

This is where I entered the scene. I’ll start this next section of the story by stating this outright: I didn’t solve all their problems. I wasn’t a savior on a white horse galloping in to save the day. I did, however, help ABC Co. start to reduce their bill and get cloud spend under control.

 

One of the steps they implemented before I arrived was a scripted shutdown of servers during non-work hours. This cut off some of the wasteful spend on machines not being used. We also looked at the actual usage of all servers in the cloud. After running some scans, we found many servers that hadn’t been used in 30 days or more but had been left on, piling onto the bill. These servers were promptly shut down, archived, then deleted after a set time. Applications experiencing performance issues were analyzed, and it was determined they could be converted to a cloud-native architecture. And those pesky, ever-growing remote desktop boxes? Smaller, more cost-effective servers were placed behind a load balancer to automatically boot additional servers should the user count demand it. These are just a few of the steps toward reducing the cloud bill. Many things occurred after I left, but it was a start to send them down the right path.
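To make the scripted-shutdown idea concrete, here's a rough Python sketch using boto3. It assumes AWS (the story doesn't name the provider) and a hypothetical "Schedule" tag marking instances that are safe to stop outside working hours; you'd run it from a scheduler at the end of the day, with a matching start script in the morning.

```python
# Sketch: stop running instances tagged for office-hours-only operation.
# Assumes AWS and a hypothetical "Schedule" tag with the value "office-hours".
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

# Collect the IDs of every running instance that carries the tag.
instance_ids = [
    inst["InstanceId"]
    for page in pages
    for reservation in page["Reservations"]
    for inst in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instances: {', '.join(instance_ids)}")
else:
    print("Nothing to stop.")
```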

 

So, what can be learned from this story? While credit should be given for the legwork done to develop the strategy, on-premises virtual machines and public cloud-based instances aren’t an apples-to-apples comparison. Workloads behave differently in the cloud. The way resources are consumed has costs behind it; you can’t just add RAM and CPU to a problem server like you can in your own data center (nor is that often the correct solution). Many variables go into a cloud migration. If your company is looking at moving to the cloud, be sure to ask the deep questions during the initial planning phase—it may just save hundreds of thousands of dollars.

Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

 

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

 

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

 

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the FBI

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

 

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

 

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content allowed. Similar to how I feel developers should be held accountable for deploying software that's not secure, or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

 

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

 

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

 

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

 

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

 

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of those vulnerabilities were discovered only within the past year—a sure sign of rising risk levels.

 

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

 

1. Identify Existing Threats and Vulnerabilities

 

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

 

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.

 

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.

 

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

 

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally important to be able to spot potential vulnerabilities in the cloud.

 

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.

 

3. Establish Ongoing User Training and Education Programs

 

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

 

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risks to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

 

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

While there are many silly depictions of machine learning and artificial intelligence throughout Hollywood, the reality delivers significant benefits. Administrators today oversee many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks are monotonous and tedious, and most of them need to be done daily. In these cases, machine learning eases the burden on administrators and helps them be more productive with their time. Lately, however, more people seem to think too much machine learning may replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for some tasks to be handled by a human, I don’t believe we’ll see humans replaced by machines (sorry, Terminator fans). Instead, I’ll highlight why I believe machine learning matters now and will continue to matter for generations to come.

 

Machine Learning Improves Administrators’ Lives

Some tasks administrators are responsible for can be tedious and take a long time to complete. With machine learning, automation runs those daily tedious tasks on a schedule and more efficiently, as system behavior is learned and optimized on the fly. A great example comes in the form of spam mail or calls. Big-name telecom companies are now using machine learning to filter out the spam callers flooding cell phones everywhere. Call blocker apps can screen calls for you based on spam call lists analyzed by machine learning and then block potential spam. In other examples, machine learning can analyze system behavior against a performance baseline and then alert the team to any anomalies and/or the need to make changes. Machine learning is here to help the administrator, not give them anxiety about being replaced.
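Strip away the marketing and the "learn a baseline, alert on deviations" idea is simple enough to show in a few lines. Here's a toy Python sketch (not how any particular product implements it) using invented CPU samples:

```python
# Toy sketch: flag metric samples that deviate too far from a learned baseline.
# The sample data is invented for illustration.
from statistics import mean, stdev

def build_baseline(history):
    """Learn a simple baseline (mean and spread) from known-good samples."""
    return mean(history), stdev(history)

def is_anomaly(value, baseline, spread, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations from baseline."""
    return spread > 0 and abs(value - baseline) / spread > threshold

# e.g., CPU utilization (%) collected every 5 minutes during a normal day
history = [22, 25, 24, 23, 26, 25, 24, 23, 24, 25]
baseline, spread = build_baseline(history)

for value in (24, 26, 97):          # new samples arriving from the system
    if is_anomaly(value, baseline, spread):
        print(f"ALERT: {value}% CPU deviates from baseline ~{baseline:.0f}%")
```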

 

Machine Learning Makes Technology Better


There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, or security hardening. There’s something for every type of workload. When machine learning is applied to these software technologies, it enhances the application and increases ease of use. Machine learning does just what the name says: it’s always learning. If an application workload suddenly increases, machine learning captures it and then uses an algorithm to determine how to react in those situations. When there’s a storage bottleneck, machine learning analyzes the traffic to determine what’s causing the slowdown and then works out a possible solution for administrators to implement.

 

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends in the past 10 to 15 years have leaned towards reducing complexity. Virtualization technology has reduced the need for a large footprint in the data center and reduced the complexity of systems management. Hyperconverged infrastructure (HCI) has gone a step further and consolidated an entire rack’s worth of technology into one box. Machine learning takes it a step further by enabling automation and fast analysis of large data sets to produce actionable tasks. Tasks requiring a ton of administrative overhead are now reduced to an automated and scheduled task monitored by the administrator. Help desk analysts benefit from machine learning’s ability to recognize trending data to better triage certain incident tickets and reduce complexity in troubleshooting those incidents.

 

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.


Old Tech, New Name?

Posted by ian0x0r Jan 20, 2020

The marketing machines of today often paint new technologies as the best thing since sliced bread. Sometimes, though, the new products are just a rehash of an existing technology. In this blog post, I’ll look at some of these.

 

As some of you may know, my tech background is heavily focused around virtualization and the associated hardware and software products. With this in mind, this post will have a slant towards those types of products.

 

One of the recent technology trends I’ve seen cropping up is something called dHCI, or disaggregated hyperconverged infrastructure. I mean, what is that? If you break it down to its core components, it’s nothing more than separate switching, compute, and storage. Why is this so familiar? Oh yeah—it’s called converged infrastructure. There’s nothing HCI about it. HCI is the convergence of storage and compute into a single chassis. To me, it’s like going to a hipster café and asking for a hyperconverged sandwich. You expect a ready-to-eat, turnkey sandwich but instead, you receive a disassembled sandwich you have to construct yourself, and somehow it’s better than the thing it was trying to be in the first place: a sandwich. No thanks. If you dig a little deeper, the secret sauce to dHCI is the lifecycle management software overlaying the converged infrastructure, but hey, not everyone wants secret sauce with their sandwich.

 

If you take this a step further and label these types of components as cloud computing, nothing has really changed. One could argue true cloud computing is the ability to self-provision workloads, but rarely does a product labeled as cloud computing deliver those results, especially private clouds.

 

An interesting term I came across as a future technology trend is distributed cloud.¹ This sounds an awful lot like hybrid cloud to me. Distributed cloud is when public cloud service offerings are moved into private data centers on dedicated hardware to give a public cloud-like experience locally. One could argue this already happens the other way around with a hybrid cloud. Technologies like VMware on AWS (or any public cloud for that matter) make this happen today.

 

What about containers? Containers have held the media’s attention for the last few years now as a new way to package and deliver a standardized application portable across environments. The concept of containers isn’t new, though. Docker arguably brought containers to the masses but if you look at this excellent article by Ell Marquez on the history of containers, we can see its roots go all the way back to the mainframe era of the late 70s and 80s.

 

The terminology used by data protection companies to describe their products also grinds my gears—selling technology on the basis of being immutable, meaning it can’t be changed once it’s been committed to media. Err, WORM media, anyone? This technology has existed for years on tape and hard drives. Don’t try to sell it as a new thing.

 

While this may seem a bit ranty, if you’re in the industry, you can probably guess which companies I’m referring to with my remarks. What I’m hoping to highlight, though, is that not everything is as new and shiny as it seems; some of it is existing technology wrapped up in hype or clever marketing.

 

I’d love to hear your thoughts on this, if you think I’m right or wrong, and if you can think of any examples of old tech, new name.

 

¹Source: CRN https://www.crn.com/slide-shows/virtualization/gartner-s-top-10-technology-trends-for-2020-that-will-shape-the-future/9

2019 was a busy year for DevOps, as measured by the events held on the topic. Whether it be DevOps Days around the globe, DockerCon, DevOps Enterprise Summits, KubeCon, or CloudNativeCon, events are springing up to support this growing community. With a huge number of events already scheduled for 2020, people clearly plan on improving their skills with this technology. This is great—it’ll allow DevOps leaders to close capability gaps, and it should be a must for those on a DevOps journey in 2020.

 

Hopefully, we’ll see more organizations adopt the key stages of DevOps evolution (foundation building, normalization, standardization, expansion, automated infrastructure delivery, and self-service) by following this model. Understanding where you are on the journey helps you plan what needs to be satisfied at each level before trying to move on to an area of greater complexity. By looking at the levels of integration and the growing tool chain, we can see where you are and plan accordingly. I look forward to seeing and reading about the trials and how they were overcome by organizations looking to further their DevOps movement in 2020.

 

You’ll probably hear terms like NoOps and DevSecOps gain more traction over the coming year from certain analysts. I believe the name DevOps is currently fine for what you’re trying to achieve. If you follow correct procedures, then security and operations already make up a large subset of your workflows, so you shouldn’t need to call them out as separate terms. If you’re not pushing changes to live systems, then you aren’t really doing any operations, and therefore not truly testing your code—so how can you go back and improve or iterate on it? As for security, while it’s hard to implement correctly and just as difficult to get teams working collaboratively, there’s a growing need to adopt it properly. Organizations that have matured and evolved through the stages above are far more likely to place emphasis on the integration of security than those just starting out. Improved security posture will be a key talking point as we progress through 2020 and into the next decade.

 

Kubernetes will gain even more ground in 2020 as more people look to a way to provide a robust method of container orchestration to scale, monitor, and run any application, with many big-name software vendors investing in what they see as the “next battleground” for variants on the open-source application management tool.

 

Organizations will start to invest more in artificial intelligence, whether it be for automation, remediation, or improved testing. You can’t deny artificial intelligence and machine learning are hot right now, and they will seep into this aspect of technology in 2020. The best place to try this is with a cloud provider, saving you the need to invest in hardware; the provider can get you up and running in minutes.

 

Microservices and container infrastructure will be another area of growth in the coming 12 months. Container registries are beneficial to organizations: they allow companies to apply policies—security, access control, and more—to how they manage containers. JFrog Container Registry is probably going to lead the charge in 2020, but don’t think they’ll have it easy, as AWS, Google, Azure, and other software vendors have products fighting for this space.

 

These are just a few areas I see becoming topics of conversation and column inches as we move into 2020 and beyond, but it tells me this is the area to develop your skills in if you want to be in demand as we move into the second decade of this century.

In Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

 

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

 

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

 

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying renting scooters that your customers then abandon across cities *is not* a profitable business model? That's crazy!

 

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

 

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

 

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

 

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

 

It's helpful for a restaurant to publish their menu outside for everyone to see.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas about modernizing security along with agency infrastructure to reduce cyberthreats.

 

As agencies across the federal government modernize their networks to include and accommodate the newest technologies such as cloud and the Internet of Things (IoT), federal IT professionals are faced with modernizing security tactics to keep up.

 

There’s no proverbial silver bullet, no single thing capable of protecting an agency’s network. The best defense is implementing a range of tactics working in concert to provide the most powerful security solution.

 

Let’s take a closer look.

 

Access Control

 

Something nearly all of us take for granted is access. The federal IT pro can help dramatically improve the agency’s security posture by reining in access.

 

There can be any number of reasons for federal IT pros to set overly lenient permissions—from a lack of configuration skills to a limited amount of time. The latter is often the more likely culprit as access control applies to many aspects of the environment. From devices to file folders and databases, it’s difficult and time-consuming to manage setting access rights.

 

Luckily, an increasing number of tools are available to help automate the process. Some of these tools can go so far as to automatically define permission parameters, create groups and ranges based on these parameters, and automatically apply the correct permissions to any number of devices, files, or applications.

 

Once permissions have been set successfully, be sure to implement multifactor authentication to ensure access controls are as effective as possible.

 

Diverse Protection

 

The best protection for a complex network is multi-faceted security. Specifically, to ensure the strongest defense, invest in both cloud-based and on-premises security.

 

For top-notch cloud-based security, weigh the security offerings of the cloud provider as heavily as its other benefits. Too many decision-makers overlook security in favor of more bells and whistles.

 

Along similar lines of implementing diverse, multi-faceted security, consider network segmentation. If an attack happens, the federal IT pro should be able to shut down a portion of the network to contain the attack while the rest of the network remains unaffected.

 

Testing

 

Once the federal IT pro has put everything in place, the final phase—testing—will quickly become the most important aspect of security.

 

Testing should include technology testing (penetration testing, for example), process testing (is multi-factor authentication working?), and people testing (testing the weakest link).

 

People testing may well be the most important part of this phase. Increasingly, security incidents caused by human error are becoming one of the federal government’s greatest threats. In fact, according to a recent Cybersecurity Survey, careless and malicious insiders topped the list of security threats for federal agencies.

 

Conclusion

 

There are tactics federal IT pros can employ to provide a more secure environment, from enhancing access control to implementing a broader array of security defenses to instituting a testing policy.

 

While each of these is important individually, putting them together goes a long way toward strengthening any agency’s security infrastructure.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Part 1 of this series introduced IT Service Management (ITSM) and a few of the adaptable frameworks available to fit the needs of an organization. This post focuses on the benefits of using a couple of the principles from the Lean methodology to craft ITSM.

 

I’d like to acknowledge and thank Trish Livingstone and Julie Johnson for sharing their expertise in this subject area. I leaned on them heavily.

 

The Lean Methodology

The Lean methodology is a philosophy and mindset focused on driving maximum value to the customer while minimizing waste. These goals are accomplished through continuous improvement and respect for people. More on this in a bit.

 

Because the Lean methodology originated in the Toyota Production System, it’s most commonly associated with applications in manufacturing. However, over the years, Lean has brought tremendous benefits to other industries as well, including knowledge work. Probably the most recognizable form of Lean in knowledge work is the Agile software development method.

 

Continuous Improvement

“If the needs of the end user are the North Star, are all the services aligned with achieving their needs?” — Julie Johnson

 

Continuous improvement should be, not surprisingly, continuous. Anyone involved in a process should be empowered to make improvements to it at any time. However, some situations warrant bringing a team of people together to affect more radical change than can be achieved by a single person.

 

3P—Production, Preparation, Process

One such situation is right at the beginning of developing a new product or process.

 

This video and this video demonstrate 3P being used to design a clinic and a school, respectively. In both cases, the design team includes stakeholders of all types to gather the proper information to drive maximum value to the customer. The 3P for the clinic consists of patients, community members, caregivers, and support staff. The same process for the school includes students, parents, teachers, and community members.

 

While both examples are from tangible, brick-and-mortar situations, the 3P is also helpful in knowledge work. One of the most challenging things to gather when initiating a new project is proper and complete requirements. Without sufficient requirements, the creative process of architecting and designing IT systems and services often ends up with disappointing outcomes and significant rework. This 3P mapped out the information flows that fed directly into the design of new safety alert system software for healthcare.

 

The goal of the 3P is to get the new initiative started on the right path by making thoughtful, informed decisions using information gathered from all involved in the process. A 3P should begin with the team members receiving a short training session on how to participate in the 3P. Armed with this knowledge and facilitated by a Lean professional keeping things on track, the 3P will produce results more attuned to the customer’s needs.

 

Value Stream Mapping (VSM)

“If you bring visibility to the end-to-end process, you create understanding around why change is needed, which builds buy-in.” — Trish Livingstone

 

Another situation warranting a team of people coming together to affect more radical change is when trying to improve an existing process or workflow. Value Stream Mapping (VSM) is especially useful when the process contains multiple handoffs or islands of work.

 

For knowledge work, VSM is the technique of analyzing the flow of information through a process delivering a service to a customer. The process and flow are visually mapped out, and every step is marked as either adding value or not adding value.

 

There are many good people in IT, and many want to do good work. As most IT departments operate in silos, it’s natural to think if you produce good quality work, on time, and then hand it off to the next island in the process, the customer will see the most value. The assumption here is each island in the process is also producing good quality work on time. This style of work is known as resource efficiency. The alternative, referred to as flow efficiency, focuses on the entire process to drive maximum customer value. This video, although an example from healthcare and not knowledge work, explains why flow efficiency can be superior to resource efficiency.

 

I was presented with a case study in which a government registrar took an average of 53 days to fulfill a request for a new birth certificate. VSM revealed many inefficiencies caused by tasks adding no value. After the VSM, the process fulfilled requests in 3 days without adding new resources. The registrar’s call volume fell by 60% because customers no longer needed to phone for updates.

 

It’s easy to see how Value Stream Mapping could help optimize many processes in IT, including change management, support ticket flow, and maintenance schedules, to name a few.

 

Respect for People

Respect for people is one of the core tenets of Lean and a guiding principle for Lean to be successful in an organization.

 

Respect for the customer eliminates waste. Waste is defined as anything the customer wouldn’t be willing to pay for. In the case of government service, waste is anything they wouldn’t want their tax dollars to pay for.

 

Language matters when respecting the customer. The phrase “difficult user” is replaced with “a customer who has concerns.” As demonstrated in the 3P videos above, rather than relying on assumptions or merely doing things “the way we’ve always done them,” customers are actively engaged to meet their needs better.

 

Lean leadership starts at the top. Respect for employees empowers them to make decisions allowing them to do their best work. Leadership evolves to be less hands-on and takes on a feeling of mentorship.

 

Respect for coworkers keeps everyone’s focus on improving the processes delivering value to the customer. Documenting these processes teaches new skills, so everyone can participate and move work through the system faster.

 

The 4 Wins of Lean

Using some or all of the Lean methodology to customize IT service management can be a win, win, win, win situation. Potential benefits could include:

 

1. Better value for the ultimate customer (employees)

    • Reduced costs by eliminating waste
    • Faster service or product delivery
    • Better overall service

2. Better for the people working in the process

    • Empowered to make changes
    • Respected by their leaders
    • Reduced burden

3. Better financially for the organization

    • Reduced waste
    • Increased efficiency
    • Reduced cost

4. Better for morale

    • Work has meaning
    • No wasted effort
    • Work contributes to the bottom line

There have been so many changes in data center technology in the past 10 years, it’s hard to keep up at times. We’ve gone from a traditional server/storage/networking stack with individual components, to a hyperconverged infrastructure (HCI) where it’s all in one box. The data center is more software-defined today than it ever has been with networking, storage, and compute being abstracted from the hardware. On top of all the change, we’re now seeing the rise of artificial intelligence (AI) and machine learning. There are so many advantages to using AI and machine learning in the data center. Let’s look at ways this technology is transforming the data center.

 

Storage Optimization

Storage is a major component of the data center. Having efficient storage is of the utmost importance. So many things can go wrong with storage, especially in the case of large storage arrays. Racks full of disk shelves with hundreds of disks, of both the fast and slow variety, fill data centers. What happens when a disk fails? The administrator gets an alert and has to order a new disk, pull the old one out, and replace it with the new disk when it arrives. AI uses analytics to predict workload needs and possible storage issues by collecting large amounts of raw data and finding trends in the usage. AI also helps with budgetary concerns. By analyzing disk performance and capacity, AI can help administrators see how the current configuration performs and order more storage if it sees a trend in growth.
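The growth-trend piece of that is, at its simplest, just curve fitting. As a toy illustration (not how any vendor's AI actually works), here's a Python sketch that fits a straight line to recent capacity samples and estimates the remaining headroom; the numbers are invented:

```python
# Toy sketch: project when a storage array fills up, from daily usage samples.
# The capacity figures are invented for illustration.

def fit_trend(days, used_tb):
    """Least-squares slope (TB/day) and intercept for the usage history."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(used_tb) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(days, used_tb))
        / sum((x - mean_x) ** 2 for x in days)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

capacity_tb = 500.0
days = list(range(1, 11))                      # last 10 daily samples
used_tb = [301, 305, 308, 313, 316, 320, 325, 329, 332, 337]

slope, _ = fit_trend(days, used_tb)
days_until_full = (capacity_tb - used_tb[-1]) / slope

print(f"Growing ~{slope:.1f} TB/day; roughly {days_until_full:.0f} days of headroom left.")
```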

 

Fast Workload Learning

Capacity planning is an important part of building and maintaining a data center. Fortunately, with technology like HCI in use today, scaling out as workloads demand is a simpler process than it was with traditional infrastructure. AI and machine learning capture workload data from applications and use it to analyze the impact of future use. Having technology to aid in predicting the demands of your workloads can help you avoid downtime or loss of service for application users. This is especially important when building a new data center or a new stack for a new application. The analytics AI provides help you see the full needs of the data center, from cooling to power and space.

 

Less Administrative Overhead

The new word I love to pair with artificial intelligence and machine learning is “autonomy.” AI works on its own to analyze large amounts of data and find trends and create performance baselines in data centers. In some cases, certain types of data center activities such as power and cooling can use AI to analyze power loads and environmental variables to adjust cooling. This is done autonomously (love that word!) and used to adjust tasks on the fly and keep performance at a high level. In a traditional setting, you’d need several different tools and administrators or NOC staff to handle the analysis and monitoring.

 

Embrace AI and Put it to Work

The days of AI and machine learning being a scary and unknown thing are past. Take stock of your current data center technology and decide whether AI and/or machine learning would be of value to your project. Another common concern is AI replacing lots of jobs soon, and while it’s partially true, it isn’t something to fear. It’s time to embrace the benefits of AI and use it to enhance the current jobs in the data center instead of fearing it and missing out on the improvements it can bring.

There are many configuration management, deployment, and orchestration tools available, ranging from open-source tools to automation engines. Ansible is one such software stack available to cover all the bases, and seems to be gaining more traction by the day. In this post, we’ll look at how this simple but powerful tool can change your software deployments by bringing consistency and reliability to your environment.

 

Ansible gives you the ability to provision, control, configure, and deploy applications to multiple servers from a single machine. It allows for successful repetition of tasks, can scale from one to 10,000 or more endpoints, and uses YAML—which is easy to read and understand—to apply configuration changes. It’s lightweight, uses SSH, PowerShell, and APIs for access, and, as mentioned above, is an open-source project. It’s also agentless, differentiating it from some similar competing tools in this marketplace. Ansible is designed with your whole infrastructure in mind rather than individual servers. If you need dashboard monitoring, then Ansible Tower is for you.

 

Once installed on a master server, you create an inventory of machines or nodes for it to perform tasks on. You can then start to push configuration changes to those nodes. An Ansible playbook is a configuration file containing a collection of tasks you want executed on a remote server. Playbooks can get as involved as you like, from simple management and configuration of remote machines all the way to multifaceted deployments—and these five tips will help you start getting the most out of what the tool can deliver.

 

  1. Passwordless keys (for SSH) are the way to go. This is probably something you should undertake from day one, and not just for Ansible. It uses a shared public key between hosts based on the SSH v2 standard, with most default OSs creating 2048-bit keys (which can be increased to 4096-bit in certain situations). No longer do you have to type in long, complex passwords for every login session—this more reliable and easier-to-maintain method makes your environment both more secure and easier for Ansible to work with.
  2. Use check mode to dry run most modules. If you’re not sure how a new playbook or update will perform, dry runs are for you. With configuration management and Ansible’s ability to converge on a desired state, you can use dry run mode to preview the changes that will be applied to the system in question. Simply add the --check flag to the ansible-playbook command for a glance at what will happen.
  3. Use Ansible roles. This is where you break a playbook out into multiple files. The file structure consists of a grouping of files, tasks, and variables, which moves you toward modularization of your code and thus independent adaptation and upgrade, and allows for reuse of configuration steps, making changes and improvements to your Ansible configurations easier.
  4. Ansible Galaxy is where you should start any new project. Access to roles, playbooks, and modules from community and vendors—why reinvent the wheel? Galaxy is a free site for searching, rating, downloading, and even reviewing community-developed Ansible roles. This is a great way to get a helping hand with your automation projects.
  5. Use a third-party vault software. Ansible Vault is functional, but a single shared secret makes it hard to audit or control who has access to all the nodes in your environment. Look for something with a centrally managed repository of secrets you can audit and lock down in a security breach scenario. I suggest HashiCorp Vault as it can meet all these demands and more, but others are available.

 

Hopefully you now have a desire to either start using Ansible and reduce time wasted on rinse and repeat configuration tasks, or you’ve picked up a few tips to take your skills to the next level and continue your DevOps journey.

Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog and on some Python and data science learning.

 

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

 

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

 

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

 

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

 

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

 

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

 

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

 

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

 

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.

 
