
Geek Speak


We put in all this work to create a fully customized toolchain tailored to your situation. Great! I bet you're happy you can change small bits and pieces of your toolchain when circumstances change, and the Periodic Table of DevOps tools has become a staple when looking for alternatives. You're probably still fighting the big vendors that want to offer you a one-size-fits-all solution.

 

But there's a high chance you're drowning in tool number 9,001. The tool sprawl, like VM sprawl after we started using virtualization 10-15 years ago, is real. In this post, we'll look at managing the tools managing your infrastructure.

 

A toolchain is supposed to make it easier for you to manage changes, automate workflows, and monitor your infrastructure, but isn't it just shifting the problem from one system to another? How do we prevent spending disproportionate amounts of time managing the toolchain, keeping us from more important value-add work?

 

The key here isn’t just to look at your infrastructure and workflows with a pair of LEAN glasses, but also to look at the toolchain with the same perspective. Properly integrating tools and selecting the right foundation makes all the difference.

 

Integration

One key aspect of managing tool sprawl is properly integrating all the tools. With each handover between tools, there's a chance of manual work, errors, and friction. But what do we mean by properly integrating? Simply put: no manual work between steps, and as little custom code or scripting as possible.

 

In other words, tools that integrate natively with each other take less custom work. Native integrations come ready out of the box and are an integral, vendor-supported part of the product, which means you don't have to do (much) work to connect one tool to another. Selecting tools that natively integrate is a great way to reduce the negative side effects of tool sprawl.

 

Fit for Purpose to Prevent DIY Solutions

Some automation solutions are very flexible and customizable, like Microsoft PowerShell. It's widely adopted, very flexible, highly customizable, and one of the most powerful tools in an admin's tool chest, but getting started with it can be daunting. This flexibility leads to some complexity, and often there's no single way of accomplishing things. This means you need to put in the work to make PowerShell do what you want, instead of a vendor providing workflow automation for you. Sometimes, it's worthwhile to use a fit-for-purpose automation tool (like Jenkins, SonarQube, or Terraform) that has automated the workflows for you instead of a great task automation scripting language. Developing and maintaining your own scripts for task automation and configuration management can easily take up a large chunk of your workweek, and some of that work has little to do with the automation effort you set out to accomplish in the first place. Outsourcing that responsibility to the company (or open-source project) that created the high-level automation logic (Terraform by HashiCorp is a great example of this) makes sense, saving your time for job-specific work adding actual value to your organization.

 

If you set out to use a generic scripting language, choose one that fits your technical stack (like PowerShell if you're running a Microsoft stack) and technical pillar (such as Terraform for infrastructure-as-code automation).

Crisis Averted. Here's Another.

So, your sprawl crisis has been averted. Awesome; now we have the right balance of tools doing what they're supposed to do. But there's a new potential hazard around the corner: lock-in.

 

In the next and final post in this series, we'll have a look at the different forms of lock-in: vendor lock-in, competition lock-in, and commercial viability. 

 

Home from San Diego and Cisco Live! for a few days, just enough time to spend Father's Day with friends and family. Now I'm back on the road, heading to Lingen, Germany, for Data Grillen. That's right, an event dedicated to data, bratwurst, and beer. I may never come home.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

This Austrian company is betting you’ll take a flying drone taxi. Really.

I *might* be willing to take one, but I’d like to know more about how they plan to control air traffic flow over city streets.

 

How Self-Driving Cars Could Disrupt the Airline Industry

Looks like everyone is going after airline passengers. If airlines don’t up their level of customer service, autonomous cars and flying taxis could put a dent in airline revenue.

 

Salesforce to Buy Tableau for $15.3 Billion in Analytics Push

First, Google bought Looker; then Salesforce bought Tableau. That leaves AWS looking for a partner as everyone tries to keep pace with Microsoft and Power BI.

 

Genius Claims Google Stole Lyrics Embedded With Secret Morse Code

After decades of users stealing content from websites found with a Google search, Google decides to start stealing content for themselves.

 

Your Smartphone Is Spying (If You’re a Spanish Soccer Fan)

Something tells me this is likely not the only app that is spying on users. And while La Liga was doing so to catch pirated broadcasts, this is a situation where two wrongs don’t make a right.

 

Domino’s will start robot pizza deliveries in Houston this year

How much do you tip a robot?

 

An Amazing Job Interview

I love this approach, and I want you to consider using it as well, no matter which side of the interview table you're sitting on.

 

Sorry to all the other dads out there, but it's game over, and I've won:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp about choosing a database tool to help improve application performance and support advanced troubleshooting. Given the importance of databases to application performance, I’m surprised this topic doesn’t get more attention.

 

The database is at the heart of every application; when applications are having performance issues, there’s a good chance the database is somehow involved. The greater the depth of insight a federal IT pro has into its performance, the more opportunity to enhance application performance.

 

There’s a dramatic difference between simply collecting data and correlating and analyzing that data to get actionable results. For example, some federal IT teams collect highly granular information that’s difficult to analyze; others collect such a vast amount of information that correlation and analysis become too time-consuming.

 

The key is to unlock a depth of information in the right context, so you can enhance database performance and, in turn, optimize application performance.

 

The following are examples of “non-negotiables” when collecting and analyzing database performance information.

 

Insight Across the Entire Environment

 

One of the most important factors in ensuring you’re collecting all necessary data is to choose a toolset providing visibility across all environments, from on-premises, to virtualized, to the cloud, and any combination thereof. No federal IT pro can completely understand or optimize database performance with only a subset of information.

 

Tuning and Indexing Data

 

Enhancing database performance often requires significant manual effort. That’s why it’s critical to find a tool that can tell you where to focus the federal IT team’s tuning and indexing efforts to optimize performance and simultaneously reduce manual processes.

 

Let’s take database query speed as an example. Slow SQL queries can easily result in slow application performance. A quality database performance analyzer presents federal IT pros with a single-pane-of-glass view of detailed query profile data across databases. This guides the team toward critical tuning data while reducing the time spent correlating information across systems. What about indexing? A good tool will identify and validate index opportunities by monitoring workload usage patterns, and then make recommendations regarding where a new index can help optimize inefficiencies.
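To make the prioritization concrete, here's a minimal, self-contained sketch (the query statistics are invented, and a real tool would pull them from the database engine itself) of ranking queries by the total time they consume rather than by average duration alone:

```python
# Hypothetical query statistics: (query text, executions, average duration in ms).
# A real performance tool would gather these from the engine's own statistics views.
query_stats = [
    ("SELECT * FROM loans WHERE status = ?", 120_000, 35.0),
    ("SELECT name FROM customers WHERE id = ?", 900_000, 1.2),
    ("UPDATE audit_log SET reviewed = 1 WHERE id = ?", 15_000, 220.0),
]

# Rank by total time consumed (executions x average duration), not average alone,
# so a "fast" query that runs constantly still surfaces as a tuning candidate.
ranked = sorted(query_stats, key=lambda row: row[1] * row[2], reverse=True)

for text, execs, avg_ms in ranked:
    total_s = execs * avg_ms / 1000
    print(f"{total_s:>10.1f} s total | {execs:>8} runs | {avg_ms:>7.1f} ms avg | {text}")
```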

 

Historical Baselining

 

Federal IT pros must be able to compare performance levels (for example, expected performance against abnormal performance) by establishing historical baselines of database performance that show how the system performed at the same time on the same day last week, the week before, and so on.
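As a rough illustration of the baselining idea (with invented numbers, not any particular product's method), the comparison can be as simple as checking the current reading against the same hour on the same weekday in prior weeks:

```python
import statistics

# Hypothetical query latency (ms) observed at 9 a.m. on the last four Mondays.
baseline_samples = [42.0, 45.5, 39.8, 44.1]
current_reading = 61.3

mean = statistics.mean(baseline_samples)
stdev = statistics.stdev(baseline_samples)

# Flag anything more than three standard deviations above the historical norm.
threshold = mean + 3 * stdev
if current_reading > threshold:
    print(f"Anomaly: {current_reading} ms vs. baseline {mean:.1f} ms (threshold {threshold:.1f} ms)")
else:
    print(f"Within normal range: {current_reading} ms vs. baseline {mean:.1f} ms")
```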

 

This is the key to database anomaly detection. With this information, it’s easier to identify a slight variation—the smallest anomaly—before it becomes a larger problem.

 

Remember, every department, group, or function within your agency relies on a database in some way or another, particularly to drive application performance. Having a complete database performance tool enables agencies to stop finger-pointing and pivot from being reactive to being proactive; from having high-level information to having deeper, more efficient insight.

 

This level of insight can help optimize database performance and make users happier across the board.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

  The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

This is a continuation of one of my previous blog posts, Which Infrastructure Monitoring Tools Are Right for You? On-Premises vs. SaaS, where I talked about the considerations and benefits of running your monitoring platform in either location. I touched on the two typical licensing models geared either towards a CapEx or an OpEx purchase. CapEx is something you buy and own, or a perpetual license. OpEx is usually a monthly spend that can vary month-to-month, like a rental license. Think of it like buying Microsoft Office off-the-shelf (who does that anymore?) where you own the product, or you buy Office 365 and pay month-by-month for the services that you consume.

 

Let’s Break This Down

As mentioned, there are the two purchasing options. But within those, there are different ways monitoring products are licensed, which could ultimately sway your decision on which product to choose. If your purchasing decisions are ruled by the powers that be, then cost is likely one of the top considerations when choosing a monitoring product.

 

So what license models are there?

 

Per Device Licensing

This is one of the most common licensing models for a monitoring solution. What you need to look at closely, though, is the definition of a device. I have seen tools classify a network port on a switch as an individual device, and I have seen other tools classify the entire switch as a device. You can win big with this model if something like a VMware vCenter server or an IPMI management tool is classed as one device. You can effectively monitor all the devices managed by the management tool for the cost of a single device license.

 

Amount of Data Ingested

This model seems to apply more to SaaS-based security information and event management (SIEM) solutions, but may apply to other products. Essentially, you’re billed per GB of data ingested into the solution. The headline cost for 1GB of data may look great, but once you start pumping hundreds of GBs of data into the solution daily, the costs add up very quickly. If you can, evaluate log churn in terms of data generated before looking at a solution licensed like this.
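Some back-of-the-envelope math makes the point; the rate below is invented purely for illustration:

```python
# Hypothetical pricing: the headline per-GB rate looks cheap in isolation.
price_per_gb = 1.50      # dollars per GB ingested (illustrative, not a real quote)
daily_ingest_gb = 250    # logs, flows, and events per day across the estate
days_per_month = 30

monthly_cost = price_per_gb * daily_ingest_gb * days_per_month
print(f"Estimated monthly ingest cost: ${monthly_cost:,.2f}")  # $11,250.00
```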

 

Pay Your Money, Monitor Whatever You Like

Predominantly used for on-premises monitoring deployments, this model has you pay a flat fee to monitor as much or as little as you like. The initial buy-in price may be high, but compared to the other licensing models, it may work out cheaper in the long run. I typically find established players in the monitoring solution game have both on-premises and SaaS versions of their products. There will be a point where paying the upfront cost for the on-premises perpetual license becomes cheaper than paying for the monthly rental SaaS equivalent. It's all about doing your homework on that one.
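That homework is largely a break-even calculation. Here's a minimal sketch with made-up figures; substitute real quotes from the vendors you're evaluating:

```python
# Illustrative figures only; plug in real quotes before making any decision.
perpetual_upfront = 40_000        # one-off on-premises license
perpetual_annual_support = 8_000  # yearly maintenance and support
saas_monthly = 2_500              # subscription for a comparable SaaS tier

for year in range(1, 6):
    on_prem_total = perpetual_upfront + perpetual_annual_support * year
    saas_total = saas_monthly * 12 * year
    cheaper = "on-premises" if on_prem_total < saas_total else "SaaS"
    print(f"Year {year}: on-prem ${on_prem_total:,} vs. SaaS ${saas_total:,} -> {cheaper} cheaper")
```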

 

Conclusion

I may be preaching to the choir here, but I hope there are some useful snippets of info you can take away and apply to your next monitoring solution evaluation. As with any project, planning is key. The more info you have available to help with that planning, the more successful you will be.

Information Security artwork, modified from Pete Linforth at Pixabay.

In post #3 of this information security series, let's cover one of the essential components in an organization's defense strategy: their approach to patching systems.

Everywhere an Attack

When was the last time you did NOT see a ransomware attack or security breach story in the news? And when was weak patching not cited as one of the reasons the attack was possible? If only the organization had applied those security patches from a couple of years ago…

 

The ransomware attack on Baltimore City is one of the most recent in the news. For the past month, RobbinHood ransomware, reportedly spread with the help of the NSA-developed EternalBlue exploit, has crippled many of the city's online processes like email, paying water bills, and paying parking tickets. More than four weeks later, city IT workers are still working diligently to restore operations and services.

 

How could Baltimore have protected itself? Could it have applied those 2017 WannaCry exploit patches from Microsoft? Sure, but it's never just one series of patches that weren’t applied.

 

Important Questions

How often do you patch? How do you find out about vulnerabilities? What processes do you have in place for testing patches?

 

How and when an organization patches systems tells you a lot about how much they value security and their commitment to designing systems and processes for those routine maintenance tasks critical to an organization's overall security posture. Are they proactive or reactive? Patching (or lack thereof) can shine a light on systematic problems.

 

If an organization has taken the time to implement systems and processes for security patch management and applying security updates, the higher-visibility areas of their information security posture (like identity management, auditing, and change management) are likely also in order. If two-year-old patches weren't applied to public-facing web servers, you can only guess what other information security best practices have been neglected.

 

If you've ever worked in Ops, you know that a reprieve from patching systems and devices will probably never come. Applying patches and security updates is a lot like doing laundry. It's hard work and no matter how much you try, you’ll never catch up or finish. When you let it slide for a while, the catch-up process is more time-consuming and arduous than if you had stayed vigilant in the first place. However, unlike the risk of having to wear dirty clothes, the threat of not patching your systems is a serious security problem with real-world consequences.

 

Accountability-Driven Compliance

We all do a better job when we’re accountable to someone, whether that's an outside organization or even ourselves. If your mother-in-law continuously checked in to make sure you never fell behind on your laundry, you would be far more dutiful about keeping up with washing your clothes. If laundry is a constant challenge for you (like me), you would probably design a system to make sure you could keep up with laundry duties.

 

In the federal space, continuous monitoring is a tenet of government security initiatives like the Federal Risk and Authorization Management Program (FedRAMP) and the Risk Management Framework (RMF). These initiatives centralize accountability and consequences while driving continuous patching in organizations. FedRAMP and RMF assess an organization's risk. Because unpatched systems are inherently risky, failure to comply can result in revoking an organization's Authority to Operate (ATO) or shutting down its network.

 

How can you tell if systems are vulnerable? Most likely, you run vulnerability scans that leverage Common Vulnerabilities and Exposures (CVE) entries. This CVE list, an "international cybersecurity community effort," drives most security vulnerability assessment tools in on-premises and off-premises products like Nessus and Amazon Inspector. In addition, software vendors leverage the CVE list to develop security patches for their products. Vulnerability assessment and continuous scanning end up driving an organization's continuous patching stance.
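Conceptually, those scanners compare what's installed against what's known to be vulnerable. A toy, self-contained sketch of that comparison (the inventory and CVE mappings below are invented):

```python
# Invented inventory and advisory data; real scanners pull these from CVE/NVD feeds
# and from the hosts themselves.
installed = {"openssl": "1.0.2k", "tomcat": "7.0.79", "nginx": "1.16.1"}

advisories = {
    "CVE-EXAMPLE-0001": {"package": "openssl", "vulnerable_versions": {"1.0.2k", "1.0.2l"}},
    "CVE-EXAMPLE-0002": {"package": "tomcat", "vulnerable_versions": {"7.0.79"}},
}

findings = [
    (cve, adv["package"], installed[adv["package"]])
    for cve, adv in advisories.items()
    if installed.get(adv["package"]) in adv["vulnerable_versions"]
]

for cve, package, version in findings:
    print(f"{cve}: {package} {version} is vulnerable -- patch or upgrade")
```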

 

Vendor Questions

FedRAMP also assesses security and accredits secure cloud offerings and services. This initiative allows federal organizations to tap into the "as a Service" world and let outside organizations and third-party vendors assume the security, operations, and maintenance of applications, of which patching and applying updates are an important component.

When selecting vendors or "as a service" providers, you could assess part of their commitment to security by inquiring about software component versions. How often do they issue security updates? Can you apply minor software updates out-of-cycle without affecting support?

 

If a software vendor's latest release ships a two-year-old version of Tomcat Web Server, or a version close to end-of-support with no planned upgrade, it may be wise to question their commitment to creating secure software.

 

Conclusion

The odds are that some entity will assess your organization's risk, whether it's a group of hackers looking for an easy target, your organization's security officer, or an insurance company deciding whether to issue a cyber-liability insurance policy. Here’s one key metric that will interest these entities: how many unpatched systems and vulnerabilities are lying around on your network and the story your organization's patching tells.

Did you come here hoping to read a summary of the past 5+ years of my research on self-driving, autonomous vehicles, Tesla, and TNC businesses?

Well, you’re in luck… that’s my next post.

 

This post helps make that post far more comprehensible. So here we lay the foundation, with fun, humor, and excitement. (Mild disclaimer for those who may have heard me speak on this topic before or listened to this story shared in talks and presentations. I hope you have fun all the same.)

 

I trust you’re all relatively smart and capable humans with a broad and deep knowledge of the world, so for the following image, I want you to take a step back from your machine. In fact, I want you to look at it from as far away as you possibly can, without putting a wall between you and the image.

 

THIS IS A HOT DOG

 

OK, have you looked at the image? Do you know what it is? If you're not sure, and perhaps have a child or aging grandparents, I encourage you to ask them.

 

Did you get hot dog? OK, cool. We’re all on the same page here.

 

Now as a mild disclaimer to my prior disclaimer—I’m familiar with the television series Silicon Valley. I don’t watch it, but I know they had a similar use-case in that environment. This is not that story, but just as when I present on computer vision being used for active facial recognition by the Chinese government to profile the Muslim minority in China and people say, "Oh, you mean like Black Mirror..." I mean “like that,” but I mean it's real and not just "TV magic."

 

Last year, in April (April 28, 2018), this innocuous meat enclosure was put through its paces on our top four favorite cloud and AI/ML platforms, giving us all deep insight into how machines work, how they think, and what kinds of things we like to eat for lunch on a random day—the 4th day of July, perhaps. These are our findings.

 

I think we can comfortably say Google is the leader in machine learning and artificial intelligence. You can disagree with that, but you'd likely be wrong. Consider one of the leading machine learning platforms, TensorFlow (TF), an open-source project from Google. TF was recently released as a 2.0 Alpha, so you might think, "Yeah, immature product is more like it." But when you peel back that onion a bit and realize Google has been using it (and its internal predecessors) for years to power search and other products, you might feel it more appropriate to call it version 25.0, but I digress.

 

What does Google think a hotdog is?

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

As you can see, if we ask Google, "Hey, what does that look like to you?" it seems pretty spot on. It doesn't come right out and directly say, "that’s a hot dog," but we can't exactly argue with any of its findings. (I'll let you guys argue over eating hot dogs for breakfast.) So, we're in good hands. Google is invited to MY 4th of July party!
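For the curious, asking a vision API that question looks roughly like the sketch below. It assumes the google-cloud-vision Python client and a service account already set up, so treat it as an approximation rather than a verified recipe:

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a valid service account key.
client = vision.ImageAnnotatorClient()

with open("hot_dog.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each label comes back with a confidence score between 0 and 1.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```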

 

But seriously, who cares what Google thinks? Who even uses Google anyway? According to the 2018 cloud figures, Microsoft is actually the leader in cloud revenue, so I'm not likely to even use Google. Instead, I'm a Microsoft-y through and through. I bleed Azure! So, forget Google. I only care about what my platform of choice believes in, because that's what I'll be using.

 

What Microsoft KNOWS a hotdog is!

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

Let’s peel back the chopped white onion and dig into the tags a bit. Microsoft has us pegged pretty heavily with its confidences, and I couldn't agree more. With 0.95 confidence, Microsoft can unequivocally and definitively say, without a doubt, I am certain:

 

This is a carrot.

 

We're done here. We don't need to say any more on this matter. And to think we Americans are only weeks away from celebrating the 4th of July. So, throw some carrots on the BBQ and get ready, because I know how I'll be celebrating!

 

Perhaps that's too much hyperbole. It’s also 0.94 confident that it's "hot" (which brings me back to my Paris Hilton days, and I'm sure those are memories you've all worked so hard to block out). But... 0.66 gives us relative certainty that if it wasn't just a carrot, it's a carrot stick. I mean, obviously it's a nicely cut, shaved, and cleaned-off carrot stick. Throw it on the BBQ!

 

OK, but just as we couldn't figure out what color “the dress” was, Microsoft knows what’s up, at 0.58 confidence. It's orange. Hey, it got something right. I mean, I figure if 0.58 were leveraged as a percentage instead of (1, 0, -1) it would be some kind of derivative of 0.42 and it would think it’s purple. (These are the jokes, people.)

 

But, if I leave you with nothing more than maybe, just MAYBE, if it's not a carrot and definitely NOT a carrot stick... it's probably bread, coming in at 0.41 confidence. (Which is cute and rings very true of The Neural Net Dreams of Sheep. I'm sure the same is true of bread, too. Even the machines know we want carbs.)

 

But who really cares about Microsoft, anyway? I mean, I'm an AWS junkie, so I use Amazon as MY platform of choice. Amazon was the platform of choice of many police departments to leverage its facial recognition technology to profile criminals, acts, and actions to do better policing and protect us better. (There are so many links on this kind of thing being used; picking just one was difficult. Feel free to read up on how police are trying to use this, and how some states and cities are trying to ban it.)

 

Obviously, Amazon is the only choice. But before we dig into that, I want to ask you a question. Do you like smoothies? I do. My favorite drink is the Açai Super Anti-Oxidant from Jamba Juice. So, my fellow THWACKers, I have a business opportunity for you here today.

 

Between you, me, and Amazon, I think we can build a business to put smoothie stores out of business, and we can do that through the power of robots, artificial intelligence, and machine learning powered by Amazon. Who's with me? Are you as excited as I am?

 

Amazon prefers you to DRINK your Hotdogs

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

The first thing I'll order will be a sweet caramel dessert smoothie. It will be an amazing confectionery that will put your local smoothie shop out of business by the end of the week.

 

At this point, you might be asking yourself, "OK, this has been hilarious, but wasn't the topic something about businesses? Did you bury the lede?”

 

So, I'll often ask this question, usually of engineers: do businesses want machine learning? And they'll often say yes. But the truth is really, WE want machine learning, but businesses want machine knowledge.

 

It may not seem so important in the context of something silly like a hot dog, but when applied at scale and in diverse situations, things can get very grave. The default dystopian (or “realistic”) future I lean towards is plasma pumps in a hospital. Imagine a fully machine-controlled, AI-leveraged device like a plasma pump. It picks out your blood type based on your chart, or perhaps it pricks your finger, figures out what blood it should give you, and starts to administer it. Not an unrealistic future, to be honest. Now, what if it were accurate 99.99999% of the time? Pretty awesome, right? A business can hardly accept even the liability that 99.99999% accuracy leaves on the table. Drop that down to the 98% confidence Google had in our hot dog, or the more realistic 90-95%, and that's literal deaths on our hands.

 

Yes, businesses don't really want machine learning. They say they do because it's popular and buzzword-y, but when lives are potentially on the line, tiny mistakes come with a cost. And those mistakes add up, which can affect the confidence the market can tolerate. But hey, how much IS a life really worth? If you can give up your life to make a device, system, or database run by machine learning/AI better for the next person, versus, oh I don't know, maybe training your models better and a little longer—is it worth it?

 

There will come a point or many points where there are acceptable tolerances, and we'll see those tolerances and accept them for what they are at a certain point because, "It doesn't affect me. That's someone else's problem." Frankly, the accuracy of machine learning has skyrocketed in the past few years alone. That compounded with better, faster, smarter, and smaller TPUs (tensor processing units) means we truly are in a next-generation era in the kinds of things we can do. 

 

Google Edge TPU on a US Penny

Image: Google unveils tiny new AI chips for on-device machine learning - The Verge

 

Yes, indeed, the future will be here before we know it. But mistakes can come with dire costs, and business mistakes will cost in so many more ways because we "trust" businesses to do the better or the right thing.

 

"Ooh, ooh! You said four cloud providers. Where's IBM Watson?"

 

Yes, you're right. I wasn't going to forget IBM Watson. After all, they're the businessman's business cloud. No one ever got fired for buying IBM, blah blah blah, so on and so forth.

 

I always include this for good measure. No, not because it’s better than Google, Microsoft, or Amazon, but because of how funny it is.

 

IBM is like a real business cloud AI ML!

Image: https://twitter.com/cloud_opinion/status/989762287819935744

 

I’ll give IBM one thing—they're confident in the color of a hot dog, and really, what more can we ask for?

 

Hopefully, this was helpful, informative, and funny (I was really angling for funny). But seriously, you should now have a better foundation for understanding the constantly “learning” nature of the technology we work with. Consider how large a knowledge base these machines have, how much they can learn in such a short time, and yet how they still struggle to determine what a particular object is. Then consider that if you have young children, you can send them into the kitchen and say, "Bring me a hot dog," and you're far more likely to get a hot dog back than you are to get a carrot stick (or a sweet caramel dessert). The point being, the machines that learn and the "artificial" intelligence we expect them to work with and operate from have the approximate intelligence of a 2- or 3-year-old.

 

They will get better, especially as we throw more resources at them (models, power, TPUs, CPU, and so forth) but they're not there yet. If you liked this, you'll equally enjoy the next article (or be terrified when we get to the actual subject matter).

 

Are you excited? I know I am. Let us know in the comments below what you thought. It's a nice day here in Portland, so I'm going to go enjoy a nice hot dog smoothie. 

This week's Actuator comes to you from sunny San Diego and Cisco Live! If you are here, and reading this, then you still have time to stop by Booth #1621 and say hello. I enjoy talking data with anyone, but especially with our customers.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Baltimore ransomware attack will cost the city over $18 million

I wonder how much it would have cost for the city to keep their systems up to date, and able to be patched in a timely manner.

 

Many iOS Developers Don’t Use Encryption: Report

So, Apple tries to force the use of encryption, but allows developers to easily override it. This is why we can't have secure things.

 

EU will force electric cars to emit a noise below 20 km/h on July 1

I would pay good money for my car to sound like George Jetson's.

 

Employees are almost as dangerous to business security as hackers and cybercriminals

Always a difficult conversation, when you want to enforce security standards and your colleagues take offense, as if you don't trust them. And yet, they are often the weakest link. Security is necessary to keep good people from doing dumb things.

 

Uber’s Path of Destruction

A bit long, but worth every minute. Uber, along with other tech companies, is horrible for our economy. And it will remain so until we elect legislators who understand not only technology, but the business finance of technology companies.

 

Microsoft and Oracle link up their clouds

The biggest shock in this article was discovering that Oracle has a cloud.

 

AWS launches Textract, machine learning for text and data extraction

Just a quick PSA here, but the code and model driving this service likely have inherent biases, in a similar manner to the issues with facial recognition tech. "Trust, but verify" should be the disclaimer pasted at the top of the page for this and every other AI-as-a-Service out there.

 

 

This is my first Cisco Live! and I think there's a chance I'm the only DBA here:

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Mav Turner. He gives a good overview of edge computing and offers suggestions for overcoming its challenges. I like the idea of using technology like this to reduce latency.

 

Edge computing has become a critical enabler for successful cloud computing as well as the continued growth of the Internet of Things (IoT). Edge computing does come with its own unique challenges, but the advantages far outweigh the challenges.

 

What Is Edge Computing?

To understand why edge computing is so important, let’s first look at cloud computing for government agencies, which is fast becoming a staple of most agency environments.

 

One of the primary advantages of cloud computing is the ability to remove a vast amount of data and processing out of the agency—off-premises—so the federal IT pro no longer has to spend time, effort, and money maintaining it.

 

Logistically, this means agency data is traveling hundreds or thousands of miles back and forth between the end-user device and the cloud. Clearly, latency can become an issue. Enter edge computing, which essentially places micro data centers at the “edge” of the agency network. These micro data centers, or edge devices, serve as a series of distributed way stations between the agency network and the cloud.

 

With edge devices in place, computing and analytics power is still close to end-user devices—eliminating the latency issue—and the data that doesn’t have to be processed immediately is routed to a cloud data center at a later time. Latency problem solved.

 

Overcoming Edge Computing Challenges

The distributed nature of edge computing may seem to bring more complexity, more machines, greater management needs, and a larger attack surface.

 

Luckily, as computing technology has advanced, so has monitoring and visualization technology to help the federal IT pro realize the benefits of edge computing without additional management or monitoring pains.

 

Strategizing

Start by creating a strategy; this will help drive a successful implementation.

 

Be sure to include compliance and security details in the strategy, as well as configuration information. Create thorough documentation to standardize your hardware and software requirements.

 

Be sure patch management is part of the security strategy. This is an absolute requirement for ensuring a secure edge environment, as is an advanced security information and event management (SIEM) tool that will ensure compliance while mitigating potential threats.

 

Monitoring, Visualizing, and Troubleshooting

Equally important to managing edge systems is monitoring, visualization, and troubleshooting.

 

Monitoring all endpoints on the network will be a critical piece of successful edge computing management. Choose a tool that not only monitors remote systems, but provides automated discovery and mapping, so the federal IT pro has a complete understanding of all edge devices.
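As a trivial illustration of keeping eyes on remote endpoints (the hostnames are placeholders, and a real monitoring tool does far more than a TCP connect check):

```python
import socket

# Placeholder edge devices; substitute your own inventory or discovery output.
edge_devices = [("edge-houston-01", 443), ("edge-denver-02", 443), ("edge-atlanta-03", 443)]

for host, port in edge_devices:
    try:
        # A short TCP connect is a crude liveness check, not a substitute for real monitoring.
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} UNREACHABLE ({err})")
```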

 

Additionally, be sure to invest in tools that provide full infrastructure visualization, so the federal IT pro can get a complete picture of the network at all times. Add in full network troubleshooting to be sure the team can monitor its entire infrastructure as well.

 

Creating a sound edge computing implementation strategy and using the right tools to monitor and manage the network will ease the pains and let the benefits of edge computing be fully realized.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

If, like me, you believe we’re living in a hybrid IT world when it comes to workload placement, then it would make sense to also consider a hybrid approach when deploying IT infrastructure monitoring solutions. What I mean by this is deciding where to physically run a monitoring solution, be that on-premises or in the public cloud.

 

I work for a value-added reseller, predominantly involved with deploying infrastructure for virtualization, end-user computing, disaster recovery, and Office 365 solutions. During my time I’ve run a support team offering managed services for a range of different-sized customers, from SMB to SME, and managed several different monitoring solutions. They range from SaaS products that bolster our managed service offering to monitoring applications installed locally at customers’ sites. Each has its place, and there are reasons for choosing one platform over another.

 

Licensing Models

Or what I really mean, how much does this thing cost?

 

Typically for an on-premises solution, there will be a one-off upfront cost and then a yearly maintenance plan. This is called perpetual licensing. The advantage here is that you own the product with the initial purchase, and then maintaining the product support and subscription entitles you to future upgrades and support. Great for a CapEx investment.

 

SaaS offerings tend to be a monthly cost. If you stop paying, you no longer have access to the solution. This is called subscription licensing. Subscription licensing is more flexible in that you can scale up or scale down the licenses required on a month-by-month basis. I will cover more on this in a future blog post.

 

Data Gravity

This is physically where the bulk of your application data resides, be that on-premises or in the public cloud. Some applications need to be close to their data so they can run quickly and effectively, and a monitoring solution should take this into consideration. It could be bad to have a SaaS-based monitoring solution if you have time-sensitive data running on-premises that needs near-instantaneous alerts. Consider time sensitivity and criticality when deciding what a monitoring solution should deliver for you. If being alerted about something up to 15 minutes after an event has occurred is acceptable, then a SaaS solution for on-premises workloads could be a good fit.

 

Infrastructure to Run a Monitoring Solution

Some on-premises monitoring solutions can be resource-intensive, which can prove problematic if the existing environment is not currently equipped to run it. This could lead to further CapEx expenditure for new hardware just to run the monitoring solution, which in turn is something else that will need to be monitored. SaaS, in this instance, could be a good fit.

 

Also, consider this: do you want to own and maintain another platform, or do you want a SaaS provider to worry about it? If we compare email solutions like Exchange and Exchange Online, Exchange is in many instances faster for user access and you have total control over how to run the solution. Exchange Online, however, is usually good enough and removes the hassle of managing an Exchange server. Ask yourself: do you need good enough or do you need some nerd knobs to turn?

 

Conclusion

A lot of this article may seem obvious, but the key to choosing a monitoring solution is to start out with a set of criteria that fits your needs and then evaluate the marketplace. Personally, I start with a list of X, Y, and Z requirements and put a weighting on each item; whichever product scores the highest ends up being the best choice (a minimal sketch of that scoring follows the list below).

 

To give you an idea of those requirements, I typically start out with things like:

  • Is the solution multi-tenanted?
  • How much does it cost?
  • How is it licensed?
  • What else can it integrate with (like a SIEM solution, etc.)?
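Here's the minimal scoring sketch mentioned above; the criteria, weights, and scores are invented for illustration:

```python
# Weights reflect how much each criterion matters to you (invented values).
weights = {"multi_tenant": 3, "cost": 5, "licensing_fit": 4, "integrations": 2}

# Score each candidate 1-5 against every criterion (also invented).
candidates = {
    "Product A": {"multi_tenant": 5, "cost": 2, "licensing_fit": 4, "integrations": 3},
    "Product B": {"multi_tenant": 3, "cost": 4, "licensing_fit": 3, "integrations": 5},
}

totals = {
    name: sum(weights[c] * score for c, score in scores.items())
    for name, scores in candidates.items()
}

# Highest weighted total wins the evaluation.
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {total}")
```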

 

  A good end product is always the result of meticulous preparation and planning.

The bigger the project, the more risk there is. Any decent-sized project needs a professional project manager to run it and help mitigate that risk. When it comes to big projects, great project managers are worth their weight in gold. Over the years, I’ve worked with some horrible project managers and some fantastic project managers.

 

So, what makes a good project manager? A successful project manager will do several things to help keep the project on schedule, including:

 

  • Check in with team members
  • Collate regular feedback from the team members
  • Help team members escalate issues to the right people to resolve them
  • Coordinate meetings with non-project team members
  • Provide positive feedback to help drive the project forward
  • Serve as a liaison between IT and the business
  • Work with the project team on release night to see the project through to completion

 

One of the best experiences I had with a project manager was back when SQL Server 2005 was released in November 2005. We pitched to management to upgrade the loan origination system at the company from SQL Server 2000 to SQL Server 2005. The project manager was fantastic, bringing in development and business resources as needed to test the system prior to the upgrade.

 

The part that sticks out in my mind the most is the night of the upgrade. For other projects with different project managers, the managers would leave by 6 p.m. no matter what. For this release, the project manager came in later in the day with the rest of the folks on the project, and stayed until 4 a.m. while we did the release. When the team needed to order some dinner, our project manager made sure everyone got dinner (and that vegetarians got vegetarian food) and that anything we needed was taken care of. Even though this release went until 4 or 5 a.m. (I don’t remember exactly when we finished), it went shockingly smoothly compared to others I’ve been a part of.

 

On the flip side, I’ve had plenty of project releases that were the exact opposite experience. The project manager would collect status updates every week from team members, and the weekly project status meetings were basically just a regurgitation of those updates back to the rest of the team. When it came to release night, the project manager in many cases was nowhere to be found, and sometimes unavailable via email or cell phone. This made questions or problems with the release difficult, if not impossible, to resolve.

 

When projects don’t have a good project manager guiding them to completion, it’s easy for the project to fail. On numerous occasions, I’ve seen IT team members attempt to step into the role of project manager. This means someone is filling this role on a part-time basis, which means the project management isn’t getting the full attention it deserves. Having someone not focused on the details means tasks are going to fall through the cracks and things that need to be done aren’t done (or they aren’t done in a timely fashion). All of this can lead to the project failing, sometimes in a rather spectacular or expensive way; and a failed project, for any reason, is never a good thing.

Hybrid IT merges private on-premises data centers with a public cloud vendor. These public cloud vendors have global reach and near-limitless resources, and they drive everything through APIs. Tapping into this new resource is creating new technologies and new career paths. Every company today can run a business analytics system. This was never feasible in the past because these systems traditionally took months to deploy and required specialized analysts to maintain and run. Public cloud providers now offer business analytics systems to anyone as a service, and people can get started as easily as swiping a credit card.

 

We’re moving away from highly trained and specialized analysts. Instead, we require people to have a more holistic view of these systems. Businesses aren’t looking for specialists in one area; they want people who can adapt and merge their foundation with cross-siloed skillsets. People are gravitating more and more towards skills in development. Marketing managers, for example, are learning SQL to analyze data to see if a campaign is working or not. People are seeking to learn skills that allow them to interact with the cloud through programming languages, because while the public cloud may act as a traditional data center, it's still a highly programmable data center.

 

The IT systems operator must follow this same trend and continue to add more software development and scripting abilities to their toolbelt. The new role of DevOps engineer has emerged because of hybrid IT. This role was originally created because of the siloed interaction between software developers, who are primarily concerned about code, and the systems operation team, who must support everything once it hits production. The DevOps role was created to help break the silos and create more communication between the two teams, but as time has passed, the two groups have merged together because they need to rely on each other. Developers showed operations how to run code effectively and efficiently, and SysOps folks showed developers how to monitor, support, and back up the systems on which their code runs.

 

A lot of new skills are emerging, but I want to focus on the three I feel are becoming the heavy favorites: infrastructure as code, CI/CD, and monitoring.

 

Infrastructure as Code (IaC)

According to this blog from Azure DevOps, "Infrastructure as Code is the management of infrastructure (networks, virtual machines, storage) in a descriptive model," or a way to use code to deploy and manage everything in your environment. A lot of data centers share similar hardware from the same set of manufacturers, yet each is unique. That's because people configure their environments manually. If a change is required on a system, they make the change and complete the task. Nothing is documented, and nobody knows what the change was or how it was implemented. When the issue happens again, someone else repeats the same task or does it differently. As this goes on, our environments morph and drift with no way to reset them, creating unique snowflakes.

 

A team that practices good IaC will always make their change in code, and for good reason. Once a task is captured in code, that code is a living document and can be shared and run by others for consistent deployment. The code can be checked into a CI/CD pipeline to test before it hits production and to version it for tracking purposes. As the list of tasks controlled by code gets larger, we can free up people's time by putting the code in a scheduler and start letting our computers manage the environment. This opens the doors for automation and allows people to work on meaningful projects.
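To make "change in code" concrete, here's a small Python sketch using boto3 (the bucket name is a placeholder, and a declarative tool like Terraform would normally do this job). The point is that the desired state lives in a file anyone can read, review, and safely re-run:

```python
import boto3
from botocore.exceptions import ClientError

# Desired state, captured in code instead of someone's memory.
BUCKET_NAME = "example-team-artifacts"  # placeholder name
REGION = "us-east-1"

# Requires AWS credentials to be configured in the environment.
s3 = boto3.client("s3", region_name=REGION)

def ensure_bucket(name: str) -> None:
    """Create the bucket only if it doesn't already exist (idempotent, safe to re-run)."""
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name} already exists -- nothing to do")
    except ClientError:
        s3.create_bucket(Bucket=name)
        print(f"{name} created")

if __name__ == "__main__":
    ensure_bucket(BUCKET_NAME)
```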

 

Continuous Integration/Continuous Deployment (CI/CD)

When you're working in a team of DevOps engineers, you and your team are going to be writing a lot more code. You may be writing IaC code for deploying an S3 bucket, while your team member might be working on deploying an Amazon RDS instance, both writing in an isolated environment. The team needs a way to centrally store, manage, and test the code. Continuous integration helps merge those two different pieces into one unit, but the frequency of merging is very high. Continually merging and testing code helps shorten feedback loops and shorten the time it takes to find bugs in the code. Continuous delivery is the approach of taking what's in the CI code repo and continuously delivering and testing it in different environments. The end goal is to deliver a bug-free code base to our end customer, whether that customer is an internal employee or someone external paying for our services. The more frequently we can deliver our code to different environments like test or QA, the better our product will become.
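In practice, "continuously testing" often starts with something as small as a unit test the pipeline runs on every merge. A sketch, assuming pytest-style tests and an invented bucket-naming convention:

```python
# A tiny pytest-style check a CI stage could run on every merge (naming rules invented).
import re

def is_valid_bucket_name(name: str) -> bool:
    """Enforce the team's (hypothetical) convention: lowercase, digits, dashes, 3-63 chars."""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name))

def test_bucket_names_follow_convention():
    proposed = ["example-team-artifacts", "Example_Bad_Name"]
    results = [is_valid_bucket_name(n) for n in proposed]
    assert results == [True, False]

if __name__ == "__main__":
    test_bucket_names_follow_convention()
    print("checks passed")
```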

 

Monitoring

We can never seem to get away from monitoring, but this makes sense given how important it is. Monitoring provides the insight we need into our environments. As we start to deploy more of our infrastructure and applications through code and deliver those resources quickly and through automation, we need a monitoring solution to alert us when there’s an issue. The resources we provision with IaC (or, let's be honest, manually) will be consumed, and our monitoring solution helps tell us how long those resources will last and whether we need to provide additional resources to prevent loss of service. Monitoring gives us historic trends of our environment, so we can compare how things are doing today to how they were doing yesterday. Did that new global load balancer help our traffic or hurt it? In today's security landscape, we need eyes everywhere, and a monitoring solution can help notify us if there's a brute-force attack or if a single user is trying to log in from another part of the country.
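A toy version of that brute-force check (the events are fabricated; real tooling correlates far more signals across far more systems):

```python
from collections import Counter

# Fabricated authentication events: (username, success_flag).
events = [
    ("alice", False), ("alice", False), ("alice", False),
    ("alice", False), ("alice", False), ("alice", False),
    ("bob", True), ("carol", False), ("carol", True),
]

FAILED_LOGIN_THRESHOLD = 5  # within the monitoring window

failures = Counter(user for user, ok in events if not ok)
for user, count in failures.items():
    if count >= FAILED_LOGIN_THRESHOLD:
        print(f"ALERT: {user} has {count} failed logins -- possible brute-force attempt")
```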

 

Hybrid IT is a fast-growing market bringing lots of change to businesses as well as creating new careers and new skillsets. From hybrid IT, new careers are emerging because of the need to merge different skillsets from traditional IT, development, and business units. This combination is stretching people’s knowledge. The most successful people are merging what they know about traditional infrastructure with the new offerings from the public cloud.

THWACKcamp is Back! (video)

 

Here at SolarWinds, convention season is just beginning to heat up. Whether you’re lucky enough to travel to these shows or are just following our exploits online, you’ll see us across the globe—from London (Infosecurity Europe) to New York (SWUG), Vegas (Black Hat), San Diego (Cisco Live!), San Francisco (VMworld, Oracle OpenWorld), and Singapore (RSA)—demoing and discussing the best monitoring features, whether they’re brand-new or just new-to-you.

 

But there’s one conference that, for us, is circled in red marker on our calendar: THWACKcamp 2019, which is so very happening October 16 – 17, 2019.

 

Running the Numbers

Now in its 8th year, THWACKcamp has grown in both quality and quantity each year. Last year, we saw more than 2,300 people attend, consuming 22 hours of content accompanied by real-time discussions in live chat. On top of that, people kept coming back for more, and viewed the recordings of those same sessions over 16,000 more times after THWACKcamp 2018 ended.

 

This coming year promises to be our most ambitious one yet—and not just because we expect more attendees, more content, or more amazing giveaway prizes.

 

Brand New Formula, Same Great Taste!

First, I want to talk about the things that AREN’T changing.

 

THWACKcamp 2019 is still 100% free and 100% online. That means you don’t have to beg your boss for budget, risk one of TSA’s “very personal” pat-down procedures, fight flocks of other IT pros thronging to the next session, or deal with less-than-optimal hotel options.

 

The event is still going to be two full days packed with content and live segments. A legion of SolarWinds folks ranging from Head Geeks to engineers to product managers will be on chat to field questions, offer insights, and take conversations offline if needed.

 

And of course, we’re still going to have some awesome prizes to give away throughout the event. (Look, we know you come for the information, but we also know the prizes add a whole ‘nother level of fun to it, and we’re not about to give that up either. We have as much fun brainstorming what cool swag to give away as you do winning it.) That said, this thread on THWACK.com will let you offer your ideas on what you’d like to see us give away:  https://thwack.solarwinds.com/message/419618#419618

 

SO... what about the “new and improved” part of THWACKcamp?

The thing you’ll notice most is every session is going to take the time it needs, rather than conforming to a standard 30- to 40-minute window. This allows us to intersperse deep-dive topics with quick 10-minute how-tos, and even a few funny “commercials” to make sure you’re paying attention.

 

The second thing you’ll notice is that we’ve broken out of the studio. We love our set and still have a bunch of the sessions there, but you’ll also see us in discussions in lounge areas, outside, and maybe even on-location at events. IT professionals are rarely “at rest” and THWACKcamp reflects that this year too.

 

Both of those elements allowed us to make one other big improvement: a single track of sessions each day. We’ll be able to cover more topics and ensure that everyone is in the right “room” at the right time to hear all the THWACKcamp-y goodness.

 

And the last thing you’ll notice is how THWACKcamp will be even more interactive than ever. During the sessions, we’re adding an interactive question-and-answer system called Sli.do. If you’ve attended a SWUG (and if you haven’t, you really should! http://THWACK.com/swug), you know exactly how this works. We’ll use it to get your feedback in real-time, find out how many people prefer one feature over another, and you’ll be able to post your own questions for our staff to answer, where it won’t get lost in the mad banter of the live chat window. Speaking of chat, it’ll still be there, in a THWACKcamp “watercooler” section, where you can talk about your experiences with SolarWinds modules, ask for tips on configuring ACLs, or debate the supremacy of the MCU vs. DCU.

 

Take My Money!

By this point, I hope you’re shouting at your screen, “BUT LEON, HOW DO I SIGN UP???” If this is you, maybe have the barista bring you a decaf on the next round. And while you’re waiting, head over to the THWACKcamp 2019 registration page. You’ll be able to sign yourself up for the event and see the full schedule of sessions.

 

But you’ll also gain valuable insight and information between now and October 16. We’ll be sharing exclusive blog posts, videos, and even behind-the-scenes images to give you insight into how an event of this magnitude goes together, and prepare you to get the most out of the THWACKcamp experience.

 

Also exclusive to folks who register will be “Ask Me (Almost) Anything” sessions. After you complete your registration, you’ll get access to that same Sli.do system I mentioned earlier. Once again, using Sli.do, you will have an opportunity to submit questions or upvote questions from other folks. We’ll host five live on-camera sessions between now and October 16 to answer those questions for you. But remember, you only get access to that after you register.

 

So, what are you waiting for? Go register now: THWACKcamp 2019! And don’t be selfish, either. Share that link with coworkers, colleagues, and friends in IT who may be thinking “No way am I going to make it to a conference this year.” Sure you are. Here at SolarWinds we’ve got a solution for you, just like we do for so many of your IT challenges.

This week's Actuator comes to you from June, where the weather has turned for the better after what seems like an endless amount of rain. We were able to work in the yard and it reminded me that one of the best ways to reduce stress is some physical exercise. So wherever you are, get moving, even if it is a walk around the block.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The Vehicle of the Future Has Two Wheels, Handlebars, and Is a Bike

Regular readers of the Actuator know I have a fondness for autonomous vehicles. This article made me rethink that; our future may be best served by bicycles, at least in more urban areas.

 

SpaceX Starlink satellites dazzle but pose big questions for astronomers

Is there any group of people Elon Musk hasn’t upset at this point?

 

How much does it cost to get an employee to steal workplace data? About $300

And for the low price of $1,200, you can get them to steal all the data. This is why security is hard: because humans are involved.

 

Real estate title insurance company exposed 885,000,000 customers' records, going back 16 years: bank statements, drivers' licenses, SSNs, and tax records

Setting aside the nature of the data, much of which I believe is public record, I want you to understand how the breach happened: because of lousy code. Until we hold individuals, not just companies, responsible for avoiding common security practices, we will continue to suffer data breaches.

 

Bad metadata means billions in unpaid royalties from streaming music services

Each paragraph I read made me a little sadder than the one before.

 

Artificial Intelligence Isn’t Just About Cutting Costs. It’s Also About Growth.

Let the machines do the tasks they're suited for, freeing up humans to do the tasks we're suited for. Yes, automation and AI are about growth, and about efficiency (i.e., cutting costs).

 

What 10,000 Steps Will Really Get You

I never knew why Fitbit chose 10,000 steps as a default goal, but this may explain why. I’m a huge advocate for finding ways to get extra steps into your day. I hope this article gets you thinking about how to get more steps for yourself each day.

 

Pictured here is six yards of gravel. Not pictured is another 12 yards of topsoil. All moved by hand. I need a nap.

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article written by my colleague Jim Hansen. It seems that our BYO challenges are not over, and Jim offers some great steps agencies can take to help with these issues.

 

In 2017, the Department of Defense (DoD) released a policy memo stating that DoD personnel—as well as contractors and visitors to DoD facilities—may no longer carry mobile devices in areas specifically designated for “processing, handling, or discussion of classified information.”

 

For federal IT pros, managing and securing “allowable” personal and government devices is already a challenge. Factor in the additional restrictions and the real possibility that not everyone will follow the rules, and mobile device management and security can seem even more overwhelming.

 

Luckily, there are steps federal IT pros can take to help get a better handle on managing this seemingly unmanageable Bring Your Own Everything (BYOx) environment, starting with policy creation and implementation, and including software choices and strategic network segmentation.

 

Agency BYOx Challenges

 

Some agencies allow personnel to use their own devices; some do not. For those that do, the main challenges tend to be access issues: Which devices are allowed to access the government network? Which are not?

 

For agencies that don’t, there’s the added challenge of preventing unauthorized use by devices that “sneak through” security checkpoints.

 

Implementing some of the best practices below to support your government cybersecurity solutions can help strengthen protection against BYOx threats.

 

Three-Step BYOx Security Plan

 

Step One: Train and Test

 

Most agencies have mobile device management policies, but not every agency requires personnel to take training and pass a policy-based exam. Training can be far more effective if agency personnel are tested on how they would respond in certain scenarios.

 

Effective training emphasizes the importance of policies and the consequences of breaking them. What actions will personnel face if they don't comply or blatantly break the rules? In the testing phase, be sure to include scenarios that solidify personnel's understanding of what to do when the right answer isn't obvious.

 

Step Two: Access Control

 

Identity-based access management ensures only authorized personnel can access the agency network, and only from authorized devices. Add another level of security by choosing a solution that requires two-factor authentication.
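For illustration, here's a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that most two-factor authenticator apps implement. The secret value and the prompt are hypothetical, and a real deployment would rely on a vetted authentication product rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) for the current interval."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)                 # counter as an 8-byte big-endian integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical check: compare the code the user typed against the server-side value.
user_code = input("Enter your 6-digit code: ")
print("Accepted" if hmac.compare_digest(user_code, totp("JBSWY3DPEHPK3PXP")) else "Rejected")
```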

 

Additionally, be sure to create, maintain, and carefully monitor access-control lists to help ensure users have access only to the networks and resources they need to do their jobs. When establishing these lists, include as much information as possible about the users and the resources—systems and applications—they are allowed to access. A detailed list can help you discover and thwart fraudulent access from an unauthorized device.
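As a rough sketch of what "as much information as possible" can look like in practice, the structure below pairs each user with the devices and resources they're cleared for. The names and fields are hypothetical; a real agency would keep this in its directory or NAC solution rather than in a script.

```python
# Hypothetical access-control list: each entry records who may connect,
# from which registered devices, and to which systems and applications.
acl = {
    "jsmith": {
        "devices": {"GFE-LAPTOP-0042"},                  # government-furnished equipment only
        "resources": {"hr-portal", "email"},
    },
    "mdoe": {
        "devices": {"GFE-LAPTOP-0107", "GFE-TABLET-0009"},
        "resources": {"email", "logistics-db"},
    },
}

def is_authorized(user: str, device: str, resource: str) -> bool:
    """Allow access only when the user, the device, and the target resource all match the ACL."""
    entry = acl.get(user)
    return bool(entry) and device in entry["devices"] and resource in entry["resources"]

# A request from an unregistered personal phone is denied and can be logged for review.
print(is_authorized("jsmith", "PERSONAL-PHONE", "hr-portal"))  # False
print(is_authorized("mdoe", "GFE-TABLET-0009", "email"))       # True
```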

 

Step Three: Implement the Right Tools

 

Mobile phones are far and away today’s biggest BYOx issue for federal IT pros. As a result, access control (step two) is of critical importance. That said, ensuring the following basic security-focused tasks are being implemented is a critical piece of the larger security picture:

 

• Patch management – Patch management is a simple and effective security measure. Choose a product that provides automated patch management to make things even easier and to keep your personnel's devices patched, up to date, and free of known vulnerabilities and misconfigurations.

 

• Threat detection – Users often have no idea their devices have been infected, so it’s up to the federal IT pro to be sure a threat detection system is in place to help ensure that compromised devices don’t gain access to agency networks.

 

• Device management – If a user tries to attach an unauthorized device to the network, the quicker the federal IT pro can detect it and shut down access, the quicker a potential breach is mitigated (see the sketch after this list).

 

• Access rights management – Provisioning and deprovisioning personnel, and knowing and managing their access to critical systems and applications across the agency, is necessary to help ensure the right people get the right access to resources.
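To illustrate the device management point above, here's a minimal sketch that compares the MAC addresses seen on the network against a registered-device list and flags anything unknown. The addresses and the alert handling are hypothetical; a real environment would lean on its NAC or device-tracking tooling for this.

```python
# Hypothetical registry of approved device MAC addresses (normally fed from an MDM/NAC system).
registered_devices = {
    "00:1a:2b:3c:4d:5e": "GFE-LAPTOP-0042",
    "00:1a:2b:3c:4d:5f": "GFE-TABLET-0009",
}

def flag_unauthorized(observed_macs: list[str]) -> list[str]:
    """Return the MAC addresses seen on the network that are not in the registry."""
    return [mac for mac in observed_macs if mac.lower() not in registered_devices]

# MACs observed in, say, switch port or DHCP logs; the unknown one should trigger an alert
# and, ideally, an automated port shutdown or quarantine VLAN move.
seen = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]
for mac in flag_unauthorized(seen):
    print(f"ALERT: unregistered device {mac} detected; investigate and block access")
```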

 

Conclusion

 

Sticking to the basics and implementing a logical series of IT- and end user-based solutions can help reduce the risks that mobile technologies introduce.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 



 

In this second post in the information security in a hybrid IT world series, let's look at how even the best-designed security controls and measures are no match for the human element.

 

“Most people don’t come to work to do a bad job” is a sentiment with which most people will agree. So, how and why do well-meaning people sometimes end up injecting risk into an organization’s hardened security posture?

 

Maybe your first answer would be falling victim to social engineering tricks like phishing. However, there's a more significant risk: unintentional negligence in the form of circumventing existing security guidelines or not applying established best practices. If you've ever had to troubleshoot blocked traffic or a user who can't access a file share, you know one quick fix is to disable the firewall or give the user access to everything. It's easy to tell yourself you'll revisit the issue later and re-enable that firewall or tighten down those share permissions. Later probably never comes, and you've inadvertently loosened some of your security controls.

 

It's easy to blame the administrator who made what appears to be a short-sighted decision. However, human nature prompts us to take these shortcuts. In our days on the savannah, survival depended on taking shortcuts to conserve physical and mental energy for the harsh times on the horizon. Especially on short-staffed or overwhelmed teams, you save energy by taking shortcuts that let you move on to the next fire. And however many security issues may exist on-premises, "62% of IT decision makers in large enterprises said that their on-premises security is stronger than cloud security" (Dimensional Research, 2018). The stakes are even higher when data and workloads move to the cloud, where an exposure can have a far wider reach.

 

In 2017, one of the largest U.S. defense contractors was caught storing unencrypted application credentials and sensitive data related to a military project in a public, unprotected AWS S3 bucket. The number of organizations caught storing sensitive data in unprotected, public S3 buckets continues to grow. Dealing with the complexity of securing data in the cloud calls for another class of tool to improve your security posture and help combat the human element in SaaS and cloud offerings: Cloud Access Security Brokers (CASBs).
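Before turning to CASBs, here's a small illustration of how that kind of S3 exposure can be caught early. This sketch uses the AWS boto3 SDK to list an account's buckets and report any that don't have all four public access block settings enabled; it assumes boto3 is installed and credentials are configured, and it's a starting point rather than a full cloud security posture check.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        # Only consider a bucket locked down if all four block settings are enabled.
        if not all(config.values()):
            print(f"{name}: public access block is incomplete -> {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```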

 

Gartner defines CASBs as “on-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed.” By leveraging machine learning, CASBs can aggregate and analyze user traffic and actions across a myriad of cloud-based applications to provide visibility, threat protection, data security, and compliance in the cloud. Also, CASBs can handle authentication/authorization with SSO and credential mapping, as well as masking sensitive data with tokenization.
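To make the tokenization idea concrete, here's a minimal sketch that swaps a sensitive value for a random token and keeps the mapping in a separate vault. Real CASBs do this transparently and at scale, so treat the function names and the in-memory "vault" as illustrative assumptions only.

```python
import secrets

# Hypothetical token vault; a real CASB keeps this mapping in hardened, access-controlled storage.
vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with an opaque token and remember the mapping."""
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Resolve a token back to the original value (only allowed inside the trusted boundary)."""
    return vault[token]

record = {"name": "Jane Doe", "ssn": "123-45-6789"}
masked = {**record, "ssn": tokenize(record["ssn"])}
print(masked)                       # the cloud application only ever sees the token
print(detokenize(masked["ssn"]))    # the original value stays behind the broker
```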

 

Nifty security solutions aside, the best security tools, on-premises or off, are far more effective when the people in your organization get behind the mission of what you're trying to accomplish.

 

Continuing user education and training is excellent, but culture matters, too. Environments in which people feel they have a role in information security strengthen an organization's security posture. What do you think are some of the best ways to change an organization's culture when it comes to security?
