
Geek Speak


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

 

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, the report found 266 cybersecurity vulnerabilities still existed. Most of those vulnerabilities were discovered only within the past year—a sure sign of rising risk levels.

 

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

 

1. Identify Existing Threats and Vulnerabilities

 

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

 

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.

 

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and strengthen the department’s security posture.
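As a minimal illustration of the configuration-scanning idea (not a description of any particular SolarWinds or DoD tool), a scheduled job can hash known configuration files against an approved baseline and flag drift. The file paths and baseline store below are assumptions made purely for the sketch.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical paths and baseline file, purely for illustration.
CONFIG_PATHS = [Path("/etc/ssh/sshd_config"), Path("/etc/sudoers")]
BASELINE_FILE = Path("config_baseline.json")  # {"<path>": "<sha256>", ...}

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a configuration file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_drift() -> list:
    """Compare current hashes against the approved baseline; list changed files."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return [str(p) for p in CONFIG_PATHS if baseline.get(str(p)) != fingerprint(p)]

if __name__ == "__main__":
    changed = check_drift()
    if changed:
        print("Unapproved configuration changes detected:", ", ".join(changed))
```

A real deployment would feed these findings into the same alerting pipeline used for vulnerability scans, so unapproved changes are reviewed rather than silently accumulating.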

 

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

 

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally as important to be able to spot potential vulnerabilities in the cloud.

 

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.

 

3. Establish Ongoing User Training and Education Programs

 

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

 

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risk to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

 

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

While there are many silly depictions of machine learning and artificial intelligence throughout Hollywood, the reality delivers significant benefits. Administrators today oversee many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks are monotonous and tedious, and most of them are required daily. In these cases, machine learning eases the burden on administrators and helps them make better use of their time. Lately, however, more people seem to think too much machine learning may replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for some tasks to be handled by a human, I don’t believe we’ll see humans replaced by machines (sorry, Terminator fans). Instead, I’ll highlight why I believe machine learning matters now and will continue to matter for generations to come.

 

Machine Learning Improves Administrators’ Lives

Some tasks administrators are responsible for are tedious and take a long time to complete. With machine learning, those daily tasks can run on a schedule and more efficiently, because system behavior is learned and optimized on the fly. A great example comes in the form of spam mail or calls. Big-name telecom companies are now using machine learning to filter out the spam callers flooding cell phones everywhere. Call-blocker apps can screen calls against spam lists analyzed by machine learning and block potential spam. In other examples, machine learning can analyze system behavior against a performance baseline and then alert the team to any anomalies and/or the need to make changes. Machine learning is here to help the administrator, not give them anxiety about being replaced.
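To make the baseline idea concrete, here’s a toy sketch (plain Python, no ML library) that flags metric samples falling several standard deviations outside a trailing window. The window size, threshold, and CPU numbers are arbitrary assumptions.

```python
from statistics import mean, stdev

def find_anomalies(samples, window=30, threshold=3.0):
    """Flag samples that deviate from the trailing baseline by more than
    `threshold` standard deviations (a crude stand-in for a learned baseline)."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append((i, samples[i]))
    return anomalies

# Example: steady CPU utilization with one obvious spike at the end.
cpu = [20 + (i % 5) for i in range(60)] + [95]
print(find_anomalies(cpu))  # [(60, 95)]
```

Production tools learn seasonality and multiple metrics at once, but the principle is the same: establish what “normal” looks like, then alert on significant deviation.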

 

Machine Learning Makes Technology Better


There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, or security hardening. There’s something for every type of workload. When machine learning is applied to these software technologies, it enhances the application and increases the ease of use. Machine learning is doing just that: always learning. If an application workload suddenly increases, machine learning captures it and then uses an algorithm to determine how to react. When there’s a storage bottleneck, machine learning analyzes the traffic to determine what’s causing the congestion and then works out a possible solution for administrators to implement.

 

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends in the past 10 to 15 years have leaned towards reducing complexity. Virtualization technology has reduced the need for a large footprint in the data center and reduced the complexity of systems management. Hyperconverged infrastructure (HCI) has gone a step further and consolidated an entire rack’s worth of technology into one box. Machine learning goes yet another step by enabling automation and fast analysis of large data sets to produce actionable tasks. Tasks requiring a ton of administrative overhead are reduced to automated, scheduled jobs monitored by the administrator. Help desk analysts benefit from machine learning’s ability to recognize trending data to better triage certain incident tickets and reduce complexity in troubleshooting those incidents.

 

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.


Old Tech, New Name?

Posted by ian0x0r Jan 20, 2020

The marketing machines of today often paint new technologies as the best thing since sliced bread. Sometimes, though, the new products are just a rehash of an existing technology. In this blog post, I’ll look at some of these.

 

As some of you may know, my tech background is heavily focused around virtualization and the associated hardware and software products. With this in mind, this post will have a slant towards those types of products.

 

One of the recent technology trends I’ve seen cropping up is something called dHCI, or disaggregated hyperconverged infrastructure. I mean, what is that? If you break it down to its core components, it’s nothing more than separate switching, compute, and storage. Why is this so familiar? Oh yeah—it’s called converged infrastructure. There’s nothing HCI about it. HCI is the convergence of storage and compute onto a single chassis. To me, it’s like going to a hipster café and asking for a hyperconverged sandwich. You expect a ready-to-eat, turnkey sandwich but instead, you receive a disassembled sandwich you have to construct yourself, and somehow it’s better than the thing it was trying to be in the first place: a sandwich. No thanks. If you dig a little deeper, the secret sauce to dHCI is the lifecycle management software overlaying the converged infrastructure but hey, not everyone wants secret sauce with their sandwich.

 

If you take this a step further and label these types of components as cloud computing, nothing has really changed. One could argue true cloud computing is the ability to self-provision workloads, but rarely does a product labeled as cloud computing deliver those results, especially private clouds.

 

An interesting term I came across as a future technology trend is distributed cloud.¹ This sounds an awful lot like hybrid cloud to me. Distributed cloud is when public cloud service offerings are moved into private data centers on dedicated hardware to give a public cloud-like experience locally. One could argue this already happens the other way around with a hybrid cloud. Technologies like VMware on AWS (or any public cloud for that matter) make this happen today.

 

What about containers? Containers have held the media’s attention for the last few years now as a new way to package and deliver a standardized application portable across environments. The concept of containers isn’t new, though. Docker arguably brought containers to the masses but if you look at this excellent article by Ell Marquez on the history of containers, we can see its roots go all the way back to the mainframe era of the late 70s and 80s.

 

The terminology used by data protection companies to describe their products also grinds my gears: selling technology on the promise of being immutable, meaning it can’t be changed once it’s been committed to media. Err, WORM media, anyone? This technology has existed for years on tape and hard drives. Don’t try to sell it as a new thing.

 

While this may seem a bit ranty, if you’re in the industry, you can probably guess which companies I’m referring to with my remarks. What I’m hoping to highlight, though, is that not everything is new and shiny; some of it is just existing technology wrapped up in hype or clever marketing.

 

I’d love to hear your thoughts on this, if you think I’m right or wrong, and if you can think of any examples of old tech, new name.

 

¹Source: CRN https://www.crn.com/slide-shows/virtualization/gartner-s-top-10-technology-trends-for-2020-that-will-shape-the-future/9

2019 was a busy year for DevOps, as measured by the events held on the topic. Whether it be DevOps Days around the globe, DockerCon, DevOps Enterprise Summits, KubeCon, or CloudNativeCon, events are springing up to support this growing community. With a huge number of events already scheduled for 2020, people clearly plan on improving their skills with this technology. This is great—it’ll allow DevOps leaders to close capability gaps, and it should be a must for those on a DevOps journey in 2020.

 

Hopefully, we’ll see more organizations adopt the key stages of DevOps evolution (foundation building, normalization, standardization, expansion, automated infrastructure delivery, and self-service) by following this model. Understanding where you are on the journey helps you plan what needs to be satisfied at each level before trying to move on to an area of greater complexity. By looking at the levels of integration and the growing toolchain, we can see where you are and plan accordingly. I look forward to reading about the trials organizations face, and how they overcome them, as they further their DevOps movement in 2020.

 

You’ll probably hear terms like NoOps and DevSecOps gain more traction over the coming year from certain analysts. I believe the name DevOps is currently fine for what you’re trying to achieve. If you follow correct procedures, then security and operations already make up a large subset of your workflows, so you shouldn’t need to call them out as separate terms. If you’re not pushing changes to live systems, then you aren’t really doing any operations, and therefore not truly testing your code. So how can you go back and improve or iterate on it? As for security, while it’s hard to implement correctly and just as hard to work on collaboratively, there’s a growing need to get it right. Organizations that have matured and evolved through the stages above are far more likely to place emphasis on the integration of security than those just starting out. Improved security posture will be a key talking point as we progress through 2020 and into the next decade.

 

Kubernetes will gain even more ground in 2020 as more people look for a robust way to orchestrate containers and to scale, monitor, and run any application, with many big-name software vendors investing in what they see as the “next battleground” for variants on the open-source application management tool.

 

Organizations will start to invest in more use of artificial intelligence, whether it be for automation, remediation, or improved testing. You can’t deny artificial intelligence and machine learning are hot right now, and they’ll seep into this aspect of technology in 2020. The best place to try this is with a cloud provider, saving you the need to invest in hardware; the provider can get you up and running in minutes.

 

Microservices and container infrastructure will be another area of growth within the coming 12 months. Container registries are beneficial to organizations: they allow companies to apply policies, whether for security, access control, or more, to how they manage containers. JFrog Container Registry will probably lead the charge in 2020, but don’t think they’ll have it easy, as AWS, Google, Azure, and other software vendors have products fighting for this space.

 

These are just a few of the areas I see becoming topics of conversation and column inches as we move into 2020 and beyond, and it tells me this is the place to develop your skills if you want to be in demand as we move into the second decade of this century.

I’m in Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

 

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

 

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

 

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying renting scooters that your customers then abandon across cities *is not* a profitable business model? That's crazy!

 

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

 

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

 

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

 

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

 

It's helpful for a restaurant to publish their menu outside for everyone to see.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas about modernizing security along with agency infrastructure to reduce cyberthreats.

 

As agencies across the federal government modernize their networks to include and accommodate the newest technologies such as cloud and the Internet of Things (IoT), federal IT professionals are faced with modernizing security tactics to keep up.

 

There’s no proverbial silver bullet, no single thing capable of protecting an agency’s network. The best defense is implementing a range of tactics working in concert to provide the most powerful security solution.

 

Let’s take a closer look.

 

Access Control

 

Something nearly all of us take for granted is access. The federal IT pro can help dramatically improve the agency’s security posture by reining in access.

 

There can be any number of reasons for federal IT pros to set overly lenient permissions—from a lack of configuration skills to a limited amount of time. The latter is often the more likely culprit, as access control applies to many aspects of the environment. From devices to file folders and databases, managing access rights is difficult and time-consuming.

 

Luckily, an increasing number of tools are available to help automate the process. Some of these tools can go so far as to automatically define permission parameters, create groups and ranges based on these parameters, and automatically apply the correct permissions to any number of devices, files, or applications.

 

Once permissions have been set successfully, be sure to implement multifactor authentication to ensure access controls are as effective as possible.

 

Diverse Protection

 

The best protection against a complex network is multi-faceted security. Specifically, to ensure the strongest defense, invest in both cloud-based and on-premises security.

 

For top-notch cloud-based security, consider the security offerings of the cloud provider with as much importance as its other benefits. Too many decision-makers overlook security in favor of more bells and whistles.

 

Along similar lines of implementing diverse, multi-faceted security, consider network segmentation. If an attack happens, the federal IT pro should be able to shut down a portion of the network to contain the attack while the rest of the network remains unaffected.

 

Testing

 

Once the federal IT pro has put everything in place, the final phase—testing—will quickly become the most important aspect of security.

 

Testing should include technology testing (penetration testing, for example), process testing (is multi-factor authentication working?), and people testing (testing the weakest link).

 

People testing may well be the most important part of this phase. Security incidents caused by human error are increasingly among the federal government’s greatest threats. In fact, according to a recent Cybersecurity Survey, careless and malicious insiders topped the list of security threats for federal agencies.

 

Conclusion

 

There are tactics federal IT pros can employ to provide a more secure environment, from enhancing access control to implementing a broader array of security defenses to instituting a testing policy.

 

While each of these is important individually, putting them together goes a long way toward strengthening any agency’s security infrastructure.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Part 1 of this series introduced IT Service Management (ITSM) and a few of the adaptable frameworks available to fit the needs of an organization. This post focuses on the benefits of using a couple of the principles from the Lean methodology to craft ITSM.

 

I’d like to acknowledge and thank Trish Livingstone and Julie Johnson for sharing their expertise in this subject area. I leaned on them heavily.

 

The Lean Methodology

The Lean methodology is a philosophy and mindset focused on driving maximum value to the customer while minimizing waste. These goals are accomplished through continuous improvement and respect for people. More on this in a bit.

 

Because the Lean methodology originated in the Toyota Production System, it’s most commonly associated with applications in manufacturing. However, over the years, Lean has brought tremendous benefits to other industries as well, including knowledge work. Probably the most recognizable form of Lean in knowledge work is the Agile software development method.

 

Continuous Improvement

“If the needs of the end user are the North Star, are all the services aligned with achieving their needs?” — Julie Johnson

 

Continuous improvement should be, not surprisingly, continuous. Anyone involved in a process should be empowered to make improvements to it at any time. However, some situations warrant bringing a team of people together to affect more radical change than can be achieved by a single person.

 

3P—Production, Preparation, Process

One such situation is right at the beginning of developing a new product or process.

 

This video and this video demonstrate 3P being used to design a clinic and a school, respectively. In both cases, the design team includes stakeholders of all types to gather the proper information to drive maximum value to the customer. The 3P for the clinic consists of patients, community members, caregivers, and support staff. The same process for the school includes students, parents, teachers, and community members.

 

While both examples are from tangible, brick-and-mortar situations, the 3P is also helpful in knowledge work. One of the most challenging things to gather when initiating a new project is proper and complete requirements. Without sufficient requirements, the creative process of architecting and designing IT systems and services often ends up with disappointing outcomes and significant rework. This 3P mapped out the information flows that fed directly into the design of new safety alert system software for healthcare.

 

The goal of the 3P is to get the new initiative started on the right path by making thoughtful, informed decisions using information gathered from all involved in the process. A 3P should begin with the team members receiving a short training session on how to participate in the 3P. Armed with this knowledge and facilitated by a Lean professional keeping things on track, the 3P will produce results more attuned to the customer’s needs.

 

Value Stream Mapping (VSM)

“If you bring visibility to the end-to-end process, you create understanding around why change is needed, which builds buy-in.” — Trish Livingstone

 

Another situation warranting a team of people coming together to affect more radical change is when trying to improve an existing process or workflow. Value Stream Mapping (VSM) is especially useful when the process contains multiple handoffs or islands of work.

 

For knowledge work, VSM is the technique of analyzing the flow of information through a process delivering a service to a customer. The process and flow are visually mapped out, and every step is marked as either adding value or not adding value.

 

There are many good people in IT, and many want to do good work. As most IT departments operate in silos, it’s natural to think if you produce good quality work, on time, and then hand it off to the next island in the process, the customer will see the most value. The assumption here is each island in the process is also producing good quality work on time. This style of work is known as resource efficiency. The alternative, referred to as flow efficiency, focuses on the entire process to drive maximum customer value. This video, although an example from healthcare and not knowledge work, explains why flow efficiency can be superior to resource efficiency.

 

I was presented with a case study where a government registrar took an average of 53 days to fulfill a request for a new birth certificate. VSM revealed many inefficiencies caused by tasks adding no value. After the VSM, the process fulfilled requests in 3 days without adding new resources. The registrar’s call volume fell by 60% because customers no longer needed to phone for updates.

 

It’s easy to see how Value Stream Mapping could help optimize many processes in IT, including change management, support ticket flow, and maintenance schedules, to name a few.

 

Respect for People

Respect for people is one of the core tenets of Lean and a guiding principle for Lean to be successful in an organization.

 

Respect for the customer eliminates waste. Waste is defined as anything the customer wouldn’t be willing to pay for. In the case of government services, waste is anything citizens wouldn’t want their tax dollars to pay for.

 

Language matters when respecting the customer. The phrase “difficult user” is replaced with “a customer who has concerns.” As demonstrated in the 3P videos above, rather than relying on assumptions or merely doing things “the way we’ve always done them,” customers are actively engaged to meet their needs better.

 

Lean leadership starts at the top. Respect for employees empowers them to make decisions allowing them to do their best work. Leadership evolves to be less hands-on and takes on a feeling of mentorship.

 

Respect for coworkers keeps everyone’s focus on improving the processes delivering value to the customer. Documenting these processes teaches new skills, so everyone can participate and move work through the system faster.

 

The 4 Wins of Lean

Using some or all of the Lean methodology to customize IT service management can be a win, win, win, win situation. Potential benefits could include:

 

1. Better value for the ultimate customer (employees)

    • Reduced costs by eliminating waste
    • Faster service or product delivery
    • Better overall service

2. Better for the people working in the process

    • Empowered to make changes
    • Respected by their leaders
    • Reduced burden

3. Better financially for the organization

    • Reduced waste
    • Increased efficiency
    • Reduced cost

4. Better for morale

    • Work has meaning

    • No wasted effort

    • Work contributes to the bottom line

There have been so many changes in data center technology in the past 10 years, it’s hard to keep up at times. We’ve gone from a traditional server/storage/networking stack with individual components, to a hyperconverged infrastructure (HCI) where it’s all in one box. The data center is more software-defined today than it ever has been with networking, storage, and compute being abstracted from the hardware. On top of all the change, we’re now seeing the rise of artificial intelligence (AI) and machine learning. There are so many advantages to using AI and machine learning in the data center. Let’s look at ways this technology is transforming the data center.

 

Storage Optimization

Storage is a major component of the data center. Having efficient storage is of the utmost importance. So many things can go wrong with storage, especially in the case of large storage arrays. Racks full of disk shelves with hundreds of disks, of both the fast and slow variety, fill data centers. What happens when a disk fails? The administrator gets an alert and has to order a new disk, pull the old one out, and replace it with the new disk when it arrives. AI uses analytics to predict workload needs and possible storage issues by collecting large amounts of raw data and finding trends in the usage. AI also helps with budgetary concerns. By analyzing disk performance and capacity, AI can help administrators see how the current configuration performs and order more storage if it sees a trend in growth.
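As a rough sketch of the growth-trend idea (a plain least-squares fit rather than any vendor’s actual analytics), the snippet below estimates how many days remain before a volume fills up. The capacity and usage numbers are invented for illustration.

```python
def days_until_full(used_gb, capacity_gb):
    """Fit a straight line to daily usage samples and estimate how many days
    remain until the volume reaches capacity (a crude trend projection)."""
    n = len(used_gb)
    x_mean = (n - 1) / 2
    y_mean = sum(used_gb) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(used_gb)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    if slope <= 0:
        return None  # usage is flat or shrinking; no projected fill date
    return (capacity_gb - used_gb[-1]) / slope

# Example: roughly 2 GB/day growth on a 500 GB volume.
samples = [300 + 2 * day for day in range(30)]
print(round(days_until_full(samples, 500)))  # about 71 days
```

Real AI-driven storage analytics consider far more signals (latency, wear, failure telemetry), but even this simple projection shows how trend data can drive purchasing decisions before capacity becomes an emergency.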

 

Fast Workload Learning

Capacity planning is an important part of building and maintaining a data center. Fortunately, with technology like HCI being used today, scaling out as the workload demands is a simpler process than it used to be with traditional infrastructure. AI and machine learning capture workload data from applications and use it to analyze the impact of future use. Having a technology aid in predicting the demand of your workloads can be beneficial in avoiding downtime or loss of service for the application user. This is especially important in the process of building a new data center or stack for a new application. The analytics AI provides helps to see the entire needs of the data center, from cooling to power and space.

 

Less Administrative Overhead

The new word I love to pair with artificial intelligence and machine learning is “autonomy.” AI works on its own to analyze large amounts of data and find trends and create performance baselines in data centers. In some cases, certain types of data center activities such as power and cooling can use AI to analyze power loads and environmental variables to adjust cooling. This is done autonomously (love that word!) and used to adjust tasks on the fly and keep performance at a high level. In a traditional setting, you’d need several different tools and administrators or NOC staff to handle the analysis and monitoring.

 

Embrace AI and Put it to Work

The days of AI and machine learning being a scary and unknown thing are past. Take stock of your current data center technology and decide whether AI and/or machine learning would be of value to your project. Another common concern is AI replacing lots of jobs soon, and while it’s partially true, it isn’t something to fear. It’s time to embrace the benefits of AI and use it to enhance the current jobs in the data center instead of fearing it and missing out on the improvements it can bring.

There are many configuration management, deployment, and orchestration tools available, ranging from open-source tools to automation engines. Ansible is one such software stack available to cover all the bases, and seems to be gaining more traction by the day. In this post, we’ll look at how this simple but powerful tool can change your software deployments by bringing consistency and reliability to your environment.

 

Ansible gives you the ability to provision, control, configure, and deploy applications to multiple servers from a single machine. It allows for successful repetition of tasks, can scale from one to 10,000 or more endpoints, and uses YAML, which is easy to read and understand, to apply configuration changes. It’s lightweight, uses SSH, PowerShell, and APIs for access, and, as mentioned above, is an open-source project. It’s also agentless, differentiating it from some similar competitive tools in this marketplace. Ansible is designed with your whole infrastructure in mind rather than individual servers. If you need dashboard monitoring, then Ansible Tower is for you.

 

Once installed on a master server, you create an inventory of machines or nodes for it to perform tasks on. You can then start to push configuration changes to nodes. An Ansible playbook is a collection of tasks you want executed on a remote server, written in a configuration file. Playbooks can range from simple management and configuration of remote machines all the way to multifaceted deployments. Here are five tips to start getting the most out of what the tool can deliver.

 

  1. Passwordless keys (for SSH) are the way to go. This is probably something you should undertake from day one, and not just for Ansible. It uses public key authentication between hosts based on the SSH v2 standard; most default OSs create 2048-bit keys, which can be increased up to 4096-bit in certain situations. No longer do you have to type in long, complex passwords for every login session—this more reliable and easier-to-maintain method makes your environment both more secure and easier for Ansible to work with.
  2. Use check mode to dry run most modules. If you’re not sure how a new playbook or update will perform, dry runs are for you. With configuration management and Ansible’s ability to report the desired state against your end goal, you can use a dry run to preview what changes are going to be applied to the system in question. Simply add the --check flag to the ansible-playbook command for a glance at what will happen (see the sketch just after this list).
  3. Use Ansible roles. This is where you break a playbook out into multiple files. The role structure is a grouping of files, tasks, and variables, which modularizes your code so pieces can be adapted and upgraded independently, and it allows for reuse of configuration steps, making changes and improvements to your Ansible configurations easier.
  4. Ansible Galaxy is where you should start any new project. It gives you access to roles, playbooks, and modules from the community and from vendors—why reinvent the wheel? Galaxy is a free site for searching, rating, downloading, and even reviewing community-developed Ansible roles. This is a great way to get a helping hand with your automation projects.
  5. Use a third-party vault software. Ansible Vault is functional, but a single shared secret makes it hard to audit or control who has access to all the nodes in your environment. Look for something with a centrally managed repository of secrets you can audit and lock down in a security breach scenario. I suggest HashiCorp Vault as it can meet all these demands and more, but others are available.
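Here’s a minimal sketch of tip 2 in practice: a small Python wrapper that runs a playbook in check mode so nothing is changed on the target nodes. The playbook and inventory names are placeholders for whatever your project uses.

```python
import subprocess
import sys

def dry_run(playbook="site.yml", inventory="inventory.ini"):
    """Run an Ansible playbook in check mode so no changes are applied.
    The playbook and inventory names are placeholders."""
    cmd = ["ansible-playbook", playbook, "-i", inventory, "--check", "--diff"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(dry_run(*sys.argv[1:]))
```

The --diff flag pairs nicely with --check, showing exactly what each template or line-in-file change would have done without actually applying it.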

 

Hopefully you now have a desire to either start using Ansible and reduce time wasted on rinse and repeat configuration tasks, or you’ve picked up a few tips to take your skills to the next level and continue your DevOps journey.

Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog as well as some Python and data science learning.

 

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

 

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

 

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

 

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

 

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

 

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

 

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

 

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

 

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen about the state of security and insider threats for the federal government and what’s working to improve conditions. We’ve been doing these cyber surveys for years and I always find the results interesting.

 

Federal IT professionals feel threats posed by careless or malicious insiders and foreign governments are at an all-time high, yet network administrators and security managers feel like they’re in a better position to manage these threats.

 

Those are two of the key takeaways from a recent SolarWinds federal cybersecurity survey, which asked 200 federal government IT decision makers and influencers their impressions regarding the current security landscape.

 

The findings showed enterprising hackers are becoming increasingly focused on agencies’ primary assets: their people. On the bright side, agencies feel more confident to handle risk thanks to better security controls and government-mandated frameworks.

 

People Are the Biggest Targets

 

IT security threats posed by careless or untrained insiders and nation states have risen substantially over the past five years. Sixty-six percent of survey respondents said things have improved or are under control when it comes to malicious threats, but when asked about careless or accidental insiders, the number decreased to 58%.

 

Indeed, hackers have seen the value in targeting agencies’ employees. People can be careless and make mistakes—it’s human nature. Hackers are getting better at exploiting these vulnerabilities through simple tactics like phishing attacks and stealing or guessing passwords. The most vulnerable are those with access to the most sensitive data.

 

There are several strategies agencies should consider to even the playing field.

 

Firstly, ongoing training must be a top priority. All staff members should be hyper-aware of the realities their agencies are facing, including the potential for a breach and what they can do to stop it. Something as simple as creating unique, hard-to-guess passwords or reporting suspicious emails might be enough to save the organization from a perilous data breach. Agency security policies must be updated and shared with the entire organization at least once a month, if not more. Emails can help relay this information, but live meetings are much better at conveying urgency and importance.

 

Employing a policy of zero trust is also important. Agency workers aren’t bad people, but everyone makes mistakes. Data access must be limited to those who need it and security controls, such as access rights management, should be deployed to monitor and manage access.

 

Finally, agencies must implement automated monitoring solutions to help security managers understand what’s happening on their network at all times. They can detect when a person begins trying to access data they normally wouldn’t attempt to retrieve or don’t have authorization to view. Or perhaps when someone in China is using the login credentials of an agency employee based in Virginia. Threat monitoring and log and event management tools can flag these incidents, making them essential for every security manager’s toolbox.
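As a toy illustration of the kind of rule such monitoring applies (not how any particular SIEM implements it), assume each login event carries a username and source country, and each employee has an expected location on file. The names and data below are invented.

```python
# Expected work locations and sample login events are illustrative only.
expected_location = {"jsmith": "US", "alee": "US"}

login_events = [
    {"user": "jsmith", "country": "US", "time": "2020-01-10T09:02Z"},
    {"user": "jsmith", "country": "CN", "time": "2020-01-10T09:07Z"},
]

def flag_suspicious(events):
    """Return events whose source country doesn't match the user's expected location."""
    return [e for e in events
            if e["country"] != expected_location.get(e["user"], e["country"])]

for event in flag_suspicious(login_events):
    print(f"ALERT: {event['user']} logged in from {event['country']} at {event['time']}")
```

Commercial tools layer on correlation, time-of-day analysis, and data-access baselines, but the underlying pattern is the same: compare observed behavior against what’s expected for that identity and raise an alert on mismatch.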

 

Frameworks and Best Practices Being Embraced, and Working

 

Most survey respondents believe they’re making progress managing risk, thanks in part to government mandates. This is a sharp change from the previous year’s cybersecurity report, when more than half of the respondents indicated regulations and mandates posed a challenge. Clearly, agencies are starting to get used to—and benefit from—programs like the Risk Management Framework (RMF) and Cybersecurity Framework.

 

These frameworks help make security a fundamental component of government IT and provide a roadmap on how to do it right. With frameworks like the RMF, developing better security hygiene isn’t a matter of “should we do this?” but a matter of “here’s how we need to do this.” The frameworks and guidelines bring order to chaos by giving agencies the basic direction and necessities they need to protect themselves and, by extension, the country.

 

A New Cold War

 

It’s encouraging to see recent survey respondents appearing to be emboldened by their cybersecurity efforts. Armed with better tools, guidelines, and knowledge, they’re in a prime position to defend their agencies against those who would seek to infiltrate and do harm.

 

But it’s also clear this battle is only just beginning. As hackers get smarter and new technologies become available, it’s incumbent upon agency IT professionals to not rest on their laurels. We’re entering what some might consider a cyber cold war, with each side stocking up to one-up the other. To win this arms race, federal security managers must continue to be innovative, proactive, and smarter than their adversaries.

 

Find the full article on Federal News Network.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

“Security? We don’t need no stinking security!”

I’ve actually heard a CTO utter words to this effect. If you subscribe to a similar mindset, here are five ways you too can stink at information security.

 

  • Train once and never test

Policy says you and your users need to be trained once a year, so once a year is good enough. Oh, and make sure you never test the users either—it’ll only confuse them.

  • Use the same password

It just makes life so much easier. Oh, and a good place to store your single password is in your email, or on Post-It notes stuck to your monitor.

  • Patching breaks things, so don’t patch

Troubleshooting outages is a pain. If you don’t patch and you don’t look at the device in the corner, then it won’t break.

  • The firewall will protect everything on the inside

We have the firewall! The bad guys stay out, so on the inside, we can let everyone get to everything.

  • Just say no and lock EVERYTHING down

If we say no to everything, and we restrict everything, then nothing bad will happen.

 

OK, now it’s out of my system—the above is obviously sarcasm.

 

But some of you will work in places that subscribe to one or more of the above. I’ve been there. But what can YOU do? Well, it’s 2020, and information security is everyone’s responsibility. One thing I commonly emphasize with our staff is that no cybersecurity tool can ever be 100% effective. To even approach 100% efficacy, everyone has to play a role as the human firewall. As IT professionals, our jobs aren’t just to put the nuts and bolts in place to keep the org safe. It’s also our job to educate our staff about the impact information security has on them.

 

So, let’s flip the above “tips” on their head and talk about what you can do to positively affect the cyber mindsets in your organization.

 

Train and Test Your Users Often

Use different training methods. Our head of marketing likes to use the phrase “six to eight to resonate.” You’re trying to keep the security mindset at the front of your staff’s consciousness. In addition to frequent CBT trainings, use security incidents as a learning mechanism. One of our most effective awareness campaigns was when we gamified a phishing campaign. The winner got something small like a pair of movie tickets. This voluntary “training” activity got a significant portion of our staff to actively respond. Don’t minimize the positive effect incentives can have on your users.

 

Lastly, speaking of incentives, make sure you run actual simulated phishing exercises. It’s a safe way to train your users. It’s also an easy way to test the effectiveness of your InfoSec training program and let users know how important data security is to the business.

 

Practice Good Password Hygiene

Security pros generally agree you should use unique, complex passwords or passphrases for every service you consume. This way, when (not if) an account you’re using is compromised, the account is only busted for a single service, rather than everywhere. If you reuse passwords across sites, you may be susceptible to credential stuffing campaigns.

 

Once you get beyond a handful of sites, it’s impossible to expect your users to remember all their passwords. So, what do you do? The easiest and most effective thing to do is introduce a password management solution. Many solutions out there run as a SaaS offering. The best solutions will dramatically impact security, while simplifying operations for your users. It’s a win-win!

 

One final quick point before moving on: make sure someone in your org is signed up for notifications from haveibeenpwned.com. At the time of this writing, there are over 9 BILLION accounts on HIBP. This valuable free service can be an early warning sign if users in your org have been scooped up in data breaches. Additionally, SolarWinds Identity Monitor can notify you if your monitored domains or email addresses have been exposed in a data leak.
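For the password-hygiene angle, here’s a small sketch that checks a password against the free Pwned Passwords range API using k-anonymity, so only the first five characters of the SHA-1 hash ever leave your machine. It assumes the third-party requests package is installed.

```python
import hashlib
import requests  # third-party package: pip install requests

def pwned_count(password):
    """Return how many times a password appears in the Pwned Passwords corpus.
    Only the first five hex characters of the SHA-1 hash are sent to the API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("P@ssw0rd"))  # a depressingly large number
```

A check like this can be wired into password-change workflows so users are nudged away from credentials already circulating in breach dumps.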

 

Patch Early and Often

I’m guessing I’m not alone in having worked at places afraid of applying security patches. Let’s just say if you’ve been around IT operations for a while, chances are you have battle scars from patching. Times change, and in my opinion, vendors have gotten much better at QAing their patches. Legacy issues aside, I’ll give you three reasons to patch frequently: Petya, NotPetya, and WannaCry. These three instances of ransomware caused some of the largest computer disruptions in recent memory. They were also completely preventable, as Microsoft released a patch plugging the EternalBlue vulnerability months before attacks were seen in the wild. From a business standpoint, patching makes good fiscal sense. The operational cost related to a virus can be extreme—just ask Maersk, the company projected to lose $300 million from NotPetya. This doesn’t even account for the reputational risk a company can suffer from a data breach, which in many cases can be just as detrimental to the long-term vibrancy of a business.

 

Firewall Everywhere

If you’re breached, you want to limit the bad actors’ ability to pivot their attack from a web server to a system with financials. This is the thinking behind the traditional DMZ approach. However, a traditional DMZ may not be enough, which has led to the rise of micro-segmentation over the last few years. The added benefit of a micro-segmentation approach is that as you limit the attack surface, you can also handle events programmatically, like having the firewall automatically isolate a VM when a piece of malware has been observed on it.
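That event-driven isolation might look something like the sketch below. The firewall object and its move_to_quarantine_group call are entirely hypothetical stand-ins for whatever SDK or API your micro-segmentation platform actually exposes.

```python
# The firewall object and its move_to_quarantine_group() call are hypothetical
# stand-ins for whatever SDK or API your micro-segmentation platform provides.

QUARANTINE_GROUP = "quarantine"  # assumed name of an isolated security group

def handle_malware_event(event, firewall):
    """On a malware detection event, move the affected VM into an isolated
    segment so it can only reach remediation services."""
    vm_id = event["vm_id"]
    firewall.move_to_quarantine_group(vm_id, QUARANTINE_GROUP)
    print(f"VM {vm_id} isolated after detection of {event['signature']}")

class FakeFirewall:
    """Stand-in so the sketch runs without a real firewall SDK."""
    def move_to_quarantine_group(self, vm_id, group):
        print(f"(pretend API call) moving {vm_id} into group '{group}'")

handle_malware_event({"vm_id": "web-42", "signature": "EICAR-Test"}, FakeFirewall())
```

The point isn’t the specific API; it’s that segmentation policies expressed in software can react to detections in seconds, long before a human gets paged.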

 

Work With the Business to Understand the “Right” Level of Security

If you’ve read my other blog posts, you know I believe IT organizations should partner with business units. But more than a couple of us have seen InfoSec folks who just want to lock everything down to the point where running the business can be difficult. When this sort of combative approach is taken, distrust between the units can be sown, and shadow IT is one of the possible results.

 

Instead, work with the BUs to understand their needs and craft your InfoSec posture based on that. After all, an R&D team or a Dev org needs different levels of security than credit card processing, which must follow regulatory requirements. This for me was one of the most resonant messages to come out of The Phoenix Project: if you craft the security solution to fit the requirements, the business can better meet their needs, Security can still have an appropriate level of rigor, and better relationships should ensue. Win, win, win.

 

Security is a balancing act. We all have a role to play in cybersecurity. If you can apply these five simple information security hygiene tips, then you’re on the path towards having a secure organization, and I think we can all agree, that’s something to be thankful for.

Two roads diverged in a yellow wood,

And sorry I could not travel both

-Robert Frost

 

At this point in our “Battle of the Clouds” journey, we’ve seen what the landscape of the various clouds looks like, cut through some of the fog around cloud, and glimpsed what failing to plan can do to your cloud migration. So, where does that leave us? Now it’s time to lay the groundwork for the data center’s future. Beginning this planning and assessment phase can seem daunting, so in this post, we’ll lay out some basic guidelines and questions to help build your roadmap.

 

First off, let’s start with what’s already in the business’s data center.

 

The Current State of Applications and Infrastructure

 

When looking forward, you must always look at where you’ve been. By understanding previous decisions, you can gain an understanding of the business’s thinking, see where mistakes may have been made, and work to correct them in the newest iteration of the data center. Inventory everything in the data center, both hardware and software. You’d be surprised what may play a critical role in preventing a migration to new hardware or a cloud. Look at the applications in use not only by the IT department, but also by the business, as their implementation will be key to a successful migration.

 

  • How much time is left on support for the current hardware platforms?
    • This helps determine how much time is available before the execution of the plan has to be done
  • What vendors are currently in use in the data center?
    • Look at storage, virtualization, compute, networking, and security
    • Many existing partners already have components to ease the migration to public cloud
  • What applications are in use by IT?
    • Automation tools, monitoring tools, config management tools, ticketing systems, etc.
  • What applications are in use by the business?
    • Databases, customer relationship management (CRM), business intelligence (BI), enterprise resource planning (ERP), call center software, and so on

 

What Are the Future-State Goals of the Business?

 

As much as most of us want to hide in our data centers and play with the nerd knobs all day, we’re still part of a business. Realistically, our end goal is to deliver consistent and reliable operations to keep the business successful. Without a successful business, it doesn’t matter how cool the latest technology you installed is, what its capabilities are, or how many IOPS it can process—you won’t have a job. Planning out the future of the data center has to line up with the future of the company. It’s a harsh reality we live in. But it doesn’t mean you’re stuck in your decision making. Use this opportunity to make the best choices on platforms and services based on the collective vision of the company.

 

  • Does the company run on a CapEx or OpEx operating model, or a mixture?
    • This helps guide decisions around applications and services
  • What regulations and compliances need to be considered?
    • Regulations such as HIPAA
  • Is the company attempting to “get out of the data center business?”
    • Why does the C-suite think this, and should it be the case?
  • Is there heavy demand for changes in the operations of IT and its interaction with end users?
    • This could lead to more self-service interactions for end users and more automation by admins
  • How fast does the company need to react and evolve to changes in the environment?
    • DevOps and CI/CD can come into effect
    • Will applications need to be spun up and down quickly?
  • Of the applications inventoried in the current state planning, how many could be moved to a SaaS product?
    • Whether moving to a public cloud or simply staying put, the ability to reduce the total application footprint can affect costs and sizing.
    • This can also call back to the OpEx or CapEx question from earlier

Using What You’ve Collected

 

All the information is collected and it’s time to start building the blueprint, right? Well, not quite. One final step in the planning journey should be a cloud readiness assessment. Many value-added resellers, managed services providers, and public cloud providers can help the business in this step. This step collects deep technical data about the data center and applications, maps all dependencies, and provides an outline of what it’d look like to move them to a public cloud. This information is crucial, as it lays out what can easily be moved and which applications would need to be refactored or completely rebuilt. The data can also feed a business impact analysis, which will give guidance on what these changes could do to the business’s ability to execute.

 

This seems like a lot of work. A lot of planning and effort goes into deciding to go to the public cloud or to stay put. To stick to “what works.” Honestly, many companies look at the work and decide to stay on-premises. Some choose to forgo the planning and have their cloud projects fail. I can’t tell you what to do in your business’s setting—you have to make the choices based on your needs. All I can do is offer up advice and hope it helps.

 

https://www.poetryfoundation.org/poems/44272/the-road-not-taken

Social media has become a mainstream part of our lives. Day in and day out, most of us are using social media to micro-blog, interact with family, share photos, capture moments, and have fun. Over the years, social media has changed how we interact with others, how we use our language, and how we see the world. Since social media is so prevalent today, it’s interesting to see how artificial intelligence (AI) is changing it. When was the last time you used social media for fun? What about for business? School? There are so many applications for social media, and AI is changing the way we use it and how we digest the tons of data out there.

 

Social Media Marketing

I don’t think the marketing world has been more excited about an advertising vehicle than they are with social media marketing. Did you use social media today? Chances are this article triggered you to pick up your phone and check at least one of your feeds. When you scroll through your feeds, how many ads do you get bombarded with? The way I see it, there are two methods of social media marketing: overt and covert. Some ads are overt and obviously placed in your feed to get your attention. AI has allowed those ads to be placed in user feeds based on their user data and browsing habits. AI crunches the data and pulls in ads relevant to the current user. Covert ads are a little sneakier, hence the name. Covert social media marketing is slipped into your feeds via paid influencers or YouTube/Instagram/Twitter mega users with large followings. Again, AI analyzes the data and posts on the internet to bring you the most relevant images and user posts.

 

Virtual Assistants

Siri, Alexa, Cortana, Bixby… whatever other names are out there. You know what I’m talking about: the virtual assistant living in your phone, car, or smart speaker, always listening and willing to pull up whatever info you need. There’s no need to tweet while driving or search for the highest-rated restaurant on Yelp while biking—let Siri do it for you. When you want to use Twitter, ask Alexa to tweet and then compose it all with your voice. Social media applications tied into virtual assistants make interacting with your followers much easier. AI allows these virtual assistants to tweet, type, and text via dictation easily and accurately.

 

Facial Recognition

Facebook is heavily invested in AI, as evidenced by its facial recognition technology, which automatically tags users in a picture. You can also see this technology in place at Instagram and other photo-driven social media offerings. Using facial recognition makes it easier for any user who wants to tag family or friends with Facebook accounts. Is this important? I don’t think so, but it’s easy to see how AI is shaping the way we interact with social media.

 

Catching the Latest Trends

AI can bring the latest trends to your social media feed daily, even hourly if you want it to. Twitter is a prime example of how AI is used to crunch large amounts of data and track trends in topics across the world. AI has the ability to analyze traffic across the web and present the end user with breaking news, or topics suddenly generating a large spike in internet traffic. In some cases, this can help social media users get the latest news, especially as it pertains to personal safety and things to avoid. In other cases, it simply leads us to more social media usage, as witnessed by the meteoric trend when a bunch of celebrities started using FaceApp to see how their older selves might look.
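A toy version of that spike detection: compare a topic’s latest mention count against its trailing average and call it trending when the ratio is large. The counts and threshold below are invented for illustration.

```python
# Invented hourly mention counts per topic; a spike stands out against the trailing average.
history = {
    "faceapp": [120, 130, 125, 118, 900],
    "weather": [400, 420, 410, 405, 415],
}

def trending(counts, ratio=3.0):
    """A topic is 'trending' when its latest count exceeds `ratio` times
    the average of the earlier samples."""
    *earlier, latest = counts
    return latest > ratio * (sum(earlier) / len(earlier))

print([topic for topic, counts in history.items() if trending(counts)])  # ['faceapp']
```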

 

What About the Future?

It seems like what we have today is never enough. Every time a new iPhone comes out, you can read hundreds of articles online the next day speculating on the next iPhone design and features. Social media seems to be along the same lines, especially since we use it daily. I believe AI will shape the future of our social media usage by better aligning our recommendations and advertisements. Ads will become much better targeted to a specific user based on AI analysis and machine learning. Apps like LinkedIn, Pinterest, and others will be much more usable thanks to developers using AI to deliver social media content to the user based on their data and usage patterns.

December Writing Challenge Week 5 Recap

 

And with these last four words, the 2019 writing challenge comes to a close. I know I’ve said it a couple of times, but even so, I cannot express enough my gratitude and appreciation for everyone who took part in the challenge this year—from the folks who run the THWACK® community, to our design team, to the lead writers, to the editing team, and, of course, to everyone who took time out of their busy day (and nights, in some cases) to thoughtfully comment and contribute.

 

Because we work in IT, and therefore have an ongoing love affair with data, here are some numbers for you:

 

The 2019 December Writing Challenge

  • 31 days, 31 words
  • 29 authors
    • 13 MVPs
  • 14,000 views
  • 960 comments

 

It’s been an incredible way to mentally pivot from the previous year, and set ourselves up for success, health, and joy in 2020.

 

Thank you again.

  - Leon

 

Day 28. Software Defined Network (SDN)

THWACK MVP Mike Ashton-Moore returned to the ELI5 roots by crafting his explanation using the xkcd Simple Writer online tool. Despite limiting himself to the first 1,000 (or “ten-hundred,” to use the xkcd term) words, Mike created a simple and compelling explanation.

 

George Sutherland Dec 29, 2019 8:49 PM

SDN is the network version of the post office. You put your package in the system and let the post office figure out the best and fastest mode of delivery.

 

Ravi Khanchandani  Dec 30, 2019 12:58 AM

Life comes full circle with SDN: from centralized routing to distributed routing and back to centralized routing.

The SDN controller is like the Traffic Police HQ that sends out instructions to the crossings, or edge devices, to control the traffic: how much traffic gets diverted to which paths, what kind of traffic goes which path, and who gets priority over the others. Ambulances are accorded the highest priority, trucks get diverted to the wider paths, car pools & public transport get dedicated lanes, and other cars get a best-effort path.

 

Juan Bourn Dec 30, 2019 10:37 AM

I gotta admit, I had to read this twice. Not being very familiar with SDN prior to this, I didn’t understand the special boxes and bypassing them lol. I couldn’t make the relationship tangible. But after a second read through, it made sense. Good job on making it easy to understand. You can’t do anything about your audience, so no knock for my inability to understand the first time around!
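If the analogies above leave you wanting something more concrete, here’s a bare-bones sketch of the “Traffic Police HQ” idea in code: a central controller decides the policy and pushes forwarding rules to the switches, which just follow whatever rules they were last given. The switch names, traffic classes, and paths are all hypothetical.

# Toy SDN model: the controller computes policy centrally and installs flow
# rules on each switch; switches only forward according to their flow tables.

from typing import Dict

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table: Dict[str, str] = {}  # traffic class -> next hop

    def install_rule(self, traffic_class: str, next_hop: str) -> None:
        self.flow_table[traffic_class] = next_hop

    def forward(self, traffic_class: str) -> str:
        return self.flow_table.get(traffic_class, "drop")

class Controller:
    """Central brain: computes policy once, pushes it to every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, policy: Dict[str, str]) -> None:
        for switch in self.switches:
            for traffic_class, next_hop in policy.items():
                switch.install_rule(traffic_class, next_hop)

edge = Switch("edge-1")
core = Switch("core-1")
controller = Controller([edge, core])
controller.push_policy({"voip": "low-latency-path", "backup": "bulk-path"})
print(edge.forward("voip"))    # -> low-latency-path
print(core.forward("backup"))  # -> bulk-path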

 

Day 29. Anomaly detection

As product marketing manager for our security portfolio, Kathleen Walker is extremely well versed in the idea of anomaly detection. But her explanation today puts it in terms even non-InfoSec folks can understand.

 

Vinay BY  Dec 30, 2019 5:06 AM

As the standard definition states -> “anomaly detection is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data”

 

Something unusual from the data pattern you see on a regular basis; this also helps you dig down further to understand what exactly happened and why. Anomaly detection can be performed in several areas, basically before aggregating the data into your system or application.

 


Thomas Iannelli  Dec 30, 2019 5:27 AM

We don’t have kids living with us, but we do the same thing for our dog, Alba, and she for us. We watch her to make sure she eats, drinks, and performs her biological functions. When one of those things is off, we either change the diet or take her to the vet. She watches us. She even got used to my wife, who works at home, going into her office at certain times, taking a break at certain times to put her feet up, or watching TV during lunch. So much so that Alba will put herself in the rooms of the house before my wife. She does it just so casually. But when my wife doesn’t show up, she frantically goes through the house looking for her. Why isn’t she where she is supposed to be? Alba does the same thing when I go to work. It is fine that I am leaving during the week. She will not fuss and sometimes will greet me at the door. But if I get my keys on the weekend or in the evening, she is all over me wanting to go for the ride. There is a trip happening out of the ordinary. When we have house guests, as we did over the holiday, she gets very excited when they arrive, and even the next morning will try to go to the guest bedroom and check to make sure they are still here. But after a day or two it is just the new normal. Nothing to get too excited about. The anomaly has become the norm.

 

I guess the trick is to detect the anomaly and assess quickly if it is an outlier, if it is going to be the new normal, or if it is a bad thing that needs to be corrected.

 

Paul Guido  Dec 30, 2019 9:32 AM (in response to Charles Hunt)

cahunt As soon as I saw the subject, I thought of this song. “One of these things is not like the other” is one of my primary trouble-shooting methods to this very day.

 

Once I used the phrase that “The systems were nominal” and people did not understand the way I used “nominal.” I was using it in the same way that NASA uses it in space systems that are running within specifications.

 

In my brain, an anomaly is outside the tolerance of nominal.
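To put Paul’s framing into code: define “nominal” from a baseline of normal readings, then flag anything that falls outside that tolerance. Here’s a minimal sketch; the response-time numbers and the three-sigma threshold are just illustrative choices.

# Minimal anomaly check: nominal is the baseline mean plus or minus k standard
# deviations; anything outside that band is flagged. Numbers are made up.

from statistics import mean, stdev
from typing import List, Tuple

def nominal_band(baseline: List[float], k: float = 3.0) -> Tuple[float, float]:
    """Nominal = baseline mean plus or minus k standard deviations."""
    avg, spread = mean(baseline), stdev(baseline)
    return avg - k * spread, avg + k * spread

def anomalies(baseline: List[float], new_readings: List[float]) -> List[float]:
    low, high = nominal_band(baseline)
    return [value for value in new_readings if value < low or value > high]

baseline_ms = [102, 98, 105, 99, 101, 103, 97, 100]
today_ms = [101, 99, 250, 102]
print(anomalies(baseline_ms, today_ms))  # -> [250]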

 

Day 30. AIOps

Melanie Achard returns for one more simple explanation, this time of a term heavily obscured by buzzwords, vendor-speak, and confusion.

 

Vinay BY  Dec 30, 2019 10:21 AM

AIOps, or artificial intelligence for IT operations, includes the below attributes in one way or another; to me, they are all interlinked:

  • Proactive monitoring
  • Data pattern detection, anomaly detection, self-understanding, and machine learning, which improve the entire automation flow
  • Events, logs, and ticket dumps
  • Bots and automation
  • Reduced human effort, cost, and time, and improved service availability

 

Thomas Iannelli  Dec 30, 2019 5:41 AM

Then the computer is watching and, based on either machine learning or anthropogenic algorithms, processes the data for anomaly detection and then takes some action, either in the form of an automated response to remediate the situation or an alert telling a human that something needs their attention. Am I understanding correctly?

 

Jake Muszynski Dec 30, 2019 12:13 PM

Computers don't lose focus. My biggest issue with people reviewing the hordes of data that various monitors create is that they get distracted; they only focus on the latest thing. AI helps by looking at all the things, then surfacing what might need attention. In a busy place, it can really make a difference.
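Tying the thread together, the loop Thomas describes can be sketched in a few lines: watch a metric, and when it leaves nominal either run a known remediation or page a human. Everything here (the service names, threshold, and remediation action) is hypothetical.

# Stripped-down AIOps loop: if a metric crosses its threshold, apply a known
# automated fix when one exists, otherwise escalate to a person. All names and
# values are invented for illustration.

from typing import Callable, Dict

def restart_service(name: str) -> None:
    print(f"[auto] restarting {name}")

def page_on_call(name: str, value: float) -> None:
    print(f"[alert] {name} needs a human: metric at {value}")

# Known problems get an automated fix; everything else goes to a person.
REMEDIATIONS: Dict[str, Callable[[str], None]] = {
    "web-frontend": restart_service,
}

def handle_anomaly(service: str, metric_value: float, threshold: float = 95.0) -> None:
    if metric_value <= threshold:
        return  # nominal, nothing to do
    action = REMEDIATIONS.get(service)
    if action:
        action(service)
    else:
        page_on_call(service, metric_value)

handle_anomaly("web-frontend", 99.2)   # automated restart
handle_anomaly("billing-db", 97.5)     # pages a human
handle_anomaly("web-frontend", 42.0)   # nominal, ignored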

 

 

Day 31. Ransomware

On our final day of the challenge, THWACK MVP Jeremy Mayfield spins a story bringing into sharp clarity both the meaning and the risk of the word of the day.

 

Faz f Dec 31, 2019 5:57 AM

This is when your older sibling or friend has your toy and will not give it back unless you do something for them. It's always good to keep your toys safe.

 

Michael Perkins Dec 31, 2019 11:36 AM

Ransomware lets a crook lock you out of your own stuff, then make you pay whatever the crook wants for the key. This is why you keep copies of your stuff. It takes away the crook's leverage and lets you go "Phbbbbbbbbbbt!" in the crook's face.

 

Brian Jarchow Dec 31, 2019 12:38 PM

Ransomware reminds me of the elementary school bully who is kind enough to make sure you won't get beaten up if you give him all of your lunch money.
