Geek Speak

Labyrinth or Layered Approach

Ladies and gentlemen, we’ve reached the fifth and final post of this information security in hybrid IT series. I hope you’ve found as much value in these posts as I have in your thoughtful comments. Thanks for following along.

 

Let’s take a quick look back at the previous posts.

 

Post #1: When "Trust but Verify" Isn’t Enough: Life in a Zero Trust World

Post #2: The Weakest (Security) Link Might Be You

Post #3: The Story Patching Tells

Post #4: Logs, Logs, and More Logs

 

Throughout the series, we’ve covered topics vital to an organization’s overall security posture, including zero trust, people, patching, and logs. Each is a pivotal part of the people, process, and technology model that underpins an organization’s Defense in Depth security strategy.

 

What Is Defense in Depth?

Originally a military term, Defense in Depth, also known as layered security, operates from the premise that if one layer of your defense is compromised, another layer is still in place to thwart would-be attackers. These preventative controls typically fall into one of the following categories.

 

  • Technical controls use hardware or software to protect assets. Micro-segmentation, multi-factor authentication, and data loss prevention are examples of technical controls.
  • Administrative controls relate to security policies and procedures. Examples include least-privilege policies and user education.
  • Physical controls are those you can physically touch. Security badges, security guards, fences, and doors are all examples of physical controls.

 

Why Defense in Depth?

If you’ve ever looked into setting up an investment strategy, you’ve heard the phrase “Diversify, diversify, diversify.” In other words, because you can’t predict whether any single fund will flop, it’s best to spread your money across a broad mix of short-term and long-term investments to minimize the risk of losing everything on one fund.

 

Similarly, because you can’t know what vulnerabilities an attacker will try to exploit to gain access to your data or network, it’s best to implement a layered and diverse range of security controls to help minimize the risk.

 

Here’s a simple example of layered security controls. If an attacker bypasses physical security and gains access to your facility, 802.1X or a similar port-based network access control stops them from simply plugging in a laptop and gaining access to the network.

 

Off-Premises Concerns

Because of the shared responsibility for security in a hybrid cloud environment, the cloud adds complexity to the process of designing and implementing a Defense in Depth strategy. While you can’t control how a cloud provider handles the physical security of the facility or facilities hosting your applications, you still have a responsibility to exercise due diligence in the vendor selection process. In addition, SLAs can be designed to act as a deterrent to vendor neglect and malfeasance. Ultimately, however, the liability for data loss or compromise rests with the system owner.

 

Conclusion

When an organization’s culture treats security as more than a compliance requirement, they have an opportunity to build more robust and diverse security controls to protect their assets. Too often, though, organizations fail to recognize security as more than the Information Security team's problem. It's everyone's problem, and thoroughly implementing a Defense in Depth strategy takes an entire organization.

Logs, Logs, and More Logs

Binary noise for logs. Photo modified from Pixabay.

Four score and one post ago, we talked about Baltimore’s beleaguered IT department, which is in the throes of a ransomware-related recovery.

 

Complicating the recovery mission is the fact that the city’s IT team didn’t initially know when the systems were compromised. They knew when the systems went offline, but not whether the systems were infected earlier. The IT team can’t go back and check a compromised system’s logs because the ransomware rendered the infected computers inaccessible.

 

Anyone who has worked in IT operations knows logs can contain a wealth of valuable information. Logs are usually the first place you go to troubleshoot or detect problems. Logs are where you can find clues of security events. Commonly, though, you can end up having to sift through a lot of data to find the information you need.

 

In any ransomware or security attack, a centralized logging server, such as a syslog server, is an invaluable resource for tracing back and correlating events across a plethora of servers and network devices. Aggregation and correlation are jobs for a SIEM.
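
As a minimal sketch of the idea, here’s how an application could ship its logs to a central collector using nothing but Python’s standard library. The hostname is a placeholder, and a production setup would typically add reliable, encrypted transport rather than plain UDP on port 514.

    import logging
    import logging.handlers

    # Forward this host's application logs to a central syslog server so
    # the events survive even if the local machine is later compromised.
    # "syslog.example.com" is a placeholder for your collector's address.
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("user 'jdoe' logged in from 10.0.0.42")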

 

All About Those SIEMs

 

SIEM is usually mentioned by its acronym rather than its expanded form: Security Information and Event Management. SIEMs serve an essential role in security threat detection and commonly make up a valuable part of an organization’s Defense in Depth strategy.

 

SIEM tools also help satisfy many regulatory auditing requirements, such as PCI DSS, HIPAA, and NIST security checks.

 

In a video recording of a session at RSA 2018, a presenter asked the audience who was happy with their current SIEM. When no hands went up, the presenter quipped that maybe the happy people were in the next room.

 

If I were in that room, I wouldn’t raise my hand either. On a previous contract, our SIEM tool consumed terabytes upon terabytes of space. When it came time to pull information, the application was slow and unresponsive. Checking the logs ourselves was a more efficient use of time. So why did we keep it? Our SIEM was a compliance checkbox.

 

Extending SIEMs With UEBA and SOAR

 

SIEMs are much more than compliance checkboxes. User and Entity Behavior Analytics (UEBA) and Security Orchestration, Automation, and Response (SOAR), when bundled with SIEMs, offer additional capabilities that extend security event management.

 

UEBA tools establish a baseline of normal behavior for both users and entities and flag deviations from it. By using advanced policies and machine learning, UEBAs improve visibility across an organization and help protect against insider threats. However, like SIEMs, UEBAs may require fine-tuning to weed out false positives.
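
To make the idea concrete, here’s a deliberately tiny sketch of behavioral baselining: flag a login whose hour of day deviates sharply from a user’s history. Real UEBA products model far richer signals (geolocation, devices, data volumes), so treat this as an illustration of the principle only.

    from statistics import mean, stdev

    # Toy behavioral baseline: flag a login whose hour deviates sharply
    # from this user's historical pattern (a simple z-score test).
    def is_anomalous(history_hours, new_hour, threshold=3.0):
        mu, sigma = mean(history_hours), stdev(history_hours)
        if sigma == 0:
            return new_hour != mu
        return abs(new_hour - mu) / sigma > threshold

    # A user who normally logs in between 8 and 10 a.m.
    baseline = [8, 9, 9, 10, 8, 9, 9, 8, 10, 9]
    print(is_anomalous(baseline, 9))   # False: within normal hours
    print(is_anomalous(baseline, 3))   # True: a 3 a.m. login stands out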

 

SOARs, on the other hand, are designed to automate responses to low-level security events and threats. SOARs can provide functionality similar to an Intrusion Detection System (IDS) or Intrusion Prevention System (IPS), without the manual intervention.
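
A SOAR playbook can be pictured as a lookup from well-understood, low-severity event types to automated actions, with everything else escalated to a human. The sketch below is a bare-bones illustration; block_ip is a hypothetical stand-in for whatever firewall or EDR API your environment actually exposes.

    # Bare-bones sketch of a SOAR-style playbook: auto-respond to known,
    # low-severity events and escalate everything else to an analyst.
    # block_ip() is a hypothetical stand-in for a real firewall API.
    def block_ip(address):
        print(f"[action] blocking {address} at the perimeter firewall")

    def notify_analyst(event):
        print(f"[escalate] paging the on-call analyst for: {event}")

    PLAYBOOKS = {
        "brute_force_login": lambda e: block_ip(e["source_ip"]),
        "port_scan": lambda e: block_ip(e["source_ip"]),
    }

    def handle_event(event):
        action = PLAYBOOKS.get(event["type"])
        if action and event["severity"] == "low":
            action(event)          # known and low-risk: respond automatically
        else:
            notify_analyst(event)  # unknown or serious: a human decides

    handle_event({"type": "port_scan", "severity": "low", "source_ip": "203.0.113.9"})
    handle_event({"type": "data_theft", "severity": "high", "source_ip": "10.0.0.7"})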

 

Conclusion

 

At the end of the day, SIEMs, SOARs, UEBAs, and other security tools can be challenging to configure and manage, so it makes sense for organizations to begin outsourcing part of this responsibility. You could also argue that applications reliant on machine learning belong in cloud environments, where you can build large data lakes for additional analytics.

 

In traditional SIEMs, feeding in more information probably won’t result in a better experience. Without dedicated security analysts to fine-tune the data collected, many organizations struggle with unwieldy SIEMs. While it’s easy to blame the tool, in many cases, the people and processes contribute to the implementation woes. 

The Story Patching Tells

Information Security artwork, modified from Pete Linforth at Pixabay.

In post #3 of this information security series, let's cover one of the essential components of an organization's defense strategy: its approach to patching systems.

Everywhere an Attack

When was the last time you DIDN'T see a ransomware attack or security breach story in the news? And when was weak patching not cited as one of the factors that made the attack possible? If only the organization had applied those security patches from a couple of years ago…

 

The City of Baltimore is the victim of one of the most recent ransomware attacks in the news. For the past month, RobbinHood ransomware, reportedly propagated with the NSA-developed EternalBlue exploit, has crippled many of the city's online services, like email, paying water bills, and paying parking tickets. More than four weeks later, city IT workers are still working diligently to restore operations and services.

 

How could Baltimore have protected itself? Could it have applied the 2017 patches Microsoft released against the exploit behind WannaCry? Sure, but it's never just one series of missed patches.

 

Important Questions

How often do you patch? How do you find out about vulnerabilities? What processes do you have in place for testing patches?

 

How and when an organization patches systems tells you a lot about how much it values security and its commitment to designing systems and processes for the routine maintenance tasks critical to its overall security posture. Is it proactive or reactive? Patching (or the lack thereof) can shine a light on systemic problems.

 

If an organization has taken the time to implement systems and processes for security patch management and applying security updates, the higher-visibility areas of their information security posture (like identity management, auditing, and change management) are likely also in order. If two-year-old patches weren't applied to public-facing web servers, you can only guess what other information security best practices have been neglected.

 

If you've ever worked in Ops, you know that a reprieve from patching systems and devices will probably never come. Applying patches and security updates is a lot like doing laundry. It's hard work and no matter how much you try, you’ll never catch up or finish. When you let it slide for a while, the catch-up process is more time-consuming and arduous than if you had stayed vigilant in the first place. However, unlike the risk of having to wear dirty clothes, the threat of not patching your systems is a serious security problem with real-world consequences.

 

Accountability-Driven Compliance

We all do a better job when we’re accountable to someone, whether it’s an outside organization or even ourselves. If your mother-in-law continuously checked in to make sure you never fell behind on your laundry, you’d be far more dutiful about keeping up with washing your clothes. If laundry is a constant challenge for you (like me), you’d probably design a system to make sure you could keep up with laundry duties.

 

In the federal space, continuous monitoring is a tenet of government security initiatives like the Federal Risk and Authorization Management Program (FedRAMP) and the Risk Management Framework (RMF). These initiatives centralize accountability and consequences while driving continuous patching in organizations. FedRAMP and RMF assess an organization's risk. Because unpatched systems are inherently risky, failure to comply can result in revoking an organization's Authority to Operate (ATO) or shutting down its network.

 

How can you tell if systems are vulnerable? Most likely, you run vulnerability scans that leverage Common Vulnerabilities and Exposures (CVE) entries. The CVE list, an "international cybersecurity community effort," drives most security vulnerability assessment tools in on-premises and off-premises products like Nessus and Amazon Inspector. In addition, software vendors leverage the CVE list to develop security patches for their products. Vulnerability assessment and continuous scanning end up driving an organization's continuous patching stance.
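
As a small illustration of how CVE data can feed your own tooling, here's a hedged sketch that queries NIST's National Vulnerability Database for entries matching a keyword. It assumes the NVD's public REST endpoint (version 2.0 shown here) and the requests library; heavy users should register for an API key and respect the service's rate limits.

    import requests

    # Query NIST's National Vulnerability Database (CVE API v2.0) for
    # recent entries matching a keyword and print short summaries.
    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword, limit=5):
        resp = requests.get(NVD_URL,
                            params={"keywordSearch": keyword,
                                    "resultsPerPage": limit},
                            timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("vulnerabilities", []):
            cve = item["cve"]
            summary = next((d["value"] for d in cve["descriptions"]
                            if d["lang"] == "en"), "")
            print(f'{cve["id"]}: {summary[:120]}')

    recent_cves("apache tomcat")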

 

Vendor Questions

FedRAMP also assesses security and accredits secure cloud offerings and services. This initiative allows federal organizations to tap into the "as a Service" world and let outside organizations and third-party vendors assume the security, operations, and maintenance of applications, of which patching and applying updates are important components.

When selecting vendors or "as a service" providers, you could assess part of their commitment to security by inquiring about software component versions. How often do they issue security updates? Can you apply minor software updates out-of-cycle without affecting support?

 

If a software vendor's latest release shipped with a two-year-old version of the Tomcat web server, or with a version close to end-of-support and no planned upgrades, it may be wise to question their commitment to creating secure software.
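
One quick, if crude, way to check what versions your own deployments advertise is to look at the Server header they return; the sketch below uses the requests library and a placeholder URL. Well-hardened servers often suppress or falsify this header, so treat the result as a hint for follow-up questions, not an inventory.

    import requests

    # Crude check: ask a server you own what software it advertises.
    # A suppressed or generic banner is common on hardened systems.
    def server_banner(url):
        resp = requests.head(url, timeout=10, allow_redirects=True)
        return resp.headers.get("Server", "not disclosed")

    # Placeholder URL for an internal application you're auditing.
    print(server_banner("https://intranet.example.com/"))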

 

Conclusion

The odds are that some entity will assess your organization's risk, whether it's a group of hackers looking for an easy target, your organization's security officer, or an insurance company deciding whether to issue a cyber-liability policy. One key metric will interest all of them: how many unpatched systems and vulnerabilities are lying around on your network. In other words, the story your organization's patching tells.

The Weakest (Security) Link Might Be You

Security chain link.

 

In the second post in this information security in a hybrid IT world series, let’s cover how even the best-designed security controls and measures are no match for the human element.

 

“Most people don’t come to work to do a bad job” is a sentiment most of us would agree with. So how and why do well-meaning people sometimes end up injecting risk into an organization’s hardened security posture?

 

Maybe your first answer would be falling victim to social engineering tricks like phishing. However, there’s a more significant risk: unintentional negligence in the form of circumventing existing security guidelines or not applying established best practices. If you’ve ever had to troubleshoot blocked traffic or a user who can’t access a file share, you know one quick fix is to disable the firewall or give the user access to everything. It’s easy to tell yourself you’ll revisit the issue later and re-enable that firewall or tighten down those share permissions. Later will probably never come, and you’ve inadvertently loosened your security controls.

 

It’s easy to blame the administrator who made what appears to be a short-sighted decision. However, human nature prompts us to take these shortcuts. In our days on the savannah, survival depended on taking shortcuts to conserve physical and mental energy for the harsh times on the horizon. Especially on short-staffed or overwhelmed teams, you save energy in the form of shortcuts that let you move on to the next fire. For all the security issues that may exist on-premises, “62% of IT decision makers in large enterprises said that their on-premises security is stronger than cloud security” (Dimensional Research, 2018). The stakes are even higher when data and workloads move to the cloud, where exploits of your data can have further reach.

 

In 2017, one of the largest U.S. defense contractors was caught storing unencrypted application credentials and sensitive data related to a military project in a public, unprotected AWS S3 bucket. The number of organizations caught storing sensitive data in unprotected, public S3 buckets continues to grow. Dealing with the complexity of securing data in the cloud calls for another tool to improve your security posture and combat the human element in SaaS and cloud offerings: the Cloud Access Security Broker (CASB).
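
Catching this particular mistake can be automated. Here’s a minimal sketch, using the AWS boto3 SDK, that flags buckets in your own account with no public access block configured; it assumes AWS credentials are already set up, and it covers only one small slice of what a CASB or cloud posture tool checks at scale.

    import boto3
    from botocore.exceptions import ClientError

    # Flag S3 buckets in this account that lack a public access block.
    # Assumes credentials via environment, ~/.aws/credentials, or a role.
    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)
            settings = config["PublicAccessBlockConfiguration"]
            if not all(settings.values()):
                print(f"{name}: public access only partially blocked")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: NO public access block -- review immediately")
            else:
                raise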

 

Gartner defines CASBs as “on-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed.” By leveraging machine learning, CASBs can aggregate and analyze user traffic and actions across a myriad of cloud-based applications to provide visibility, threat protection, data security, and compliance in the cloud. CASBs can also handle authentication and authorization with SSO and credential mapping, as well as mask sensitive data with tokenization.

 

Nifty security solutions aside, the best security tools for on-premises and off-premises are infinitely more effective when the people in your organization get behind the whole mission of what you are trying to accomplish.

 

Continuing user education and training is excellent. However, culture matters. Environments in which people feel they have a role in information security strengthen an organization’s security posture. What do you think are some of the best ways to change an organization’s culture when it comes to security?

When "Trust but Verify" Isn’t Enough: Life in a Zero Trust World

Welcome to the first in a five-part series focusing on information security in a hybrid IT world. Because I’ve spent the vast majority of my IT career as a contractor for the U.S. Department of Defense, I view information security through the lens that protecting national security and keeping lives safe is the priority. The effort and manageability challenges of the security measures are secondary concerns.

 

 

Photograph of the word "trust" written in the sand with a red x on top.

Modified from image by Lisa Caroselli from Pixabay.

 

About Zero Trust

In this first post, we’ll explore the Zero Trust model. Odds are you’ve heard the term “Zero Trust” multiple times in the nine years since Forrester Research’s John Kindervag created the model. In more recent years, Google and Gartner followed suit with their own Zero Trust-inspired models: BeyondCorp and LeanTrust, respectively.

 

“Allow, allow, allow,” Windows Guy must authorize each request. “It’s a security feature of Windows Vista,” he explains to Justin Long, the much cooler Mac Guy. In this TV commercial, Windows Guy trusts nothing, and each request requires authentication (from himself) and authorization.

 

The Zero Trust model works a lot like this. By default, nothing is trusted or privileged. Internal requests don’t get preference over external requests. Additionally, several other methods help enforce the model: least-privilege access, strict access controls, and intelligent analytics for greater insight and logging are all the Zero Trust model in action.

 

If you think Zero Trust sounds like “Defense-in-Depth,” you are correct. Defense-in-Depth will be covered in a later blog post. As you know, the best security controls are always layered.

 

Why Isn’t Trust but Verify Enough?

Traditional perimeter firewalls, the gold standard for “trust but verify,” leave a significant vulnerability in the form of internal, trusted traffic. Perimeter firewalls focus on keeping the network free of untrusted (and unauthorized) external traffic, usually referred to as “North-South” or “Client-Server” traffic. Another kind of traffic exists, though: “East-West” or “Application-to-Application” traffic, which probably won’t hit a perimeter firewall because it never leaves the data center.

 

Most importantly, perimeter firewalls don’t apply to hybrid cloud, a term for the space where private and public networks coalesce, or to public cloud traffic. Additionally, while the cloud simplifies some things, like building scalable, resilient applications, it adds complexity in other areas, like networking, troubleshooting, and securing one of your greatest assets: data. The cloud also introduces new traffic patterns and infrastructure you share with others but don’t control. Hybrid cloud blurs the trusted and untrusted lines even further. Applying the Zero Trust model lets you begin to mitigate some of the risks from untrusted public traffic.

 

Who Uses Zero Trust?

In any layered approach to security, most organizations are probably already applying some Zero Trust principles, like multi-factor authentication, least privilege, and strict ACLs, even if they haven’t reached the stage of requiring authentication and authorization for all requests from processes, users, devices, applications, and network traffic.

 

Also, the CIO Council, “the principal interagency forum to improve [U.S. Government] agency practices for the management of information technology,” has a Zero Trust pilot slated to begin in summer 2019. The National Institute of Standards and Technology, Department of Justice, Defense Information Systems Agency, GSA, OMB, and several other agencies make up this government IT security council.

 

How Can You Apply Principles From the Zero Trust Model?

 

  • Whitelists. A list of whom to trust. It can apply specifically to processes, users, devices, applications, or network traffic granted access. Anything not on the list is denied. The opposite is a blacklist, where you need to know the specific threats to deny, and everything else gets through. (A minimal sketch of the default-deny idea follows this list.)

  • Least privilege. The principle in which you assign the minimum rights to the minimum number of accounts to accomplish the task. Other parts include separation of user and privileged accounts with the ability to audit actions.

  • Security automation for monitoring and detection. Intrusion prevention systems that stop suspect traffic or processes without manual intervention.

  • Identity management. Harden the authentication process with a one-time password or implement multi-factor authentication (requires proof from at least two of the following categories: something you know, something you have, and something you are).

  • Micro-segmentation. Network security access control that allows you to protect groups of applications and workloads and minimize any damage in case of a breach or compromise. Micro-segmentation also can apply security to East-West traffic.

  • Software-defined perimeter. Micro-segmentation designed for a cloud world, in which assets or endpoints are obscured in a “black cloud” unless you “need to know (or see)” the assets or group of assets.
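
Here’s the promised sketch of the whitelist’s default-deny stance, using only Python’s standard library. The subnets are placeholders; the point is that anything not explicitly listed is rejected, with no need to enumerate known threats.

    import ipaddress

    # Default-deny sketch: only sources on the whitelist get in;
    # everything else is rejected without a list of known threats.
    WHITELIST = [
        ipaddress.ip_network("10.10.20.0/24"),   # management subnet
        ipaddress.ip_network("192.168.5.7/32"),  # jump host
    ]

    def is_allowed(source_ip):
        addr = ipaddress.ip_address(source_ip)
        return any(addr in network for network in WHITELIST)

    print(is_allowed("10.10.20.33"))   # True: on the management subnet
    print(is_allowed("203.0.113.50"))  # False: not listed, denied by default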

 

Conclusion

Implementing any security measure takes work and effort to keep the bad guys out while letting the good guys in and, most importantly, keeping valuable data safe.

 

However, security breaches and ransomware attacks increase every year. As more devices come online, perimeters dissolve, and the amount of sensitive data stored online grows, the pool of malicious actors and would-be hackers increases.

 

It’s a scary world, one in which you should consider applying “Zero Trust.”
