I know, I'm a day late and quite possibly 37 cents short for my coffee this morning, so let's jump in, shall we?

 

Let's start with the Equifax breach. This came up in the Shields Down Conversation Number Two, so, I thought I would invite some of my friends from our security products to join me to discuss the breach from a few different angles.

 

My take will be from a business strategy (or lack thereof) standpoint. Roughly 143 million people had their personal data exposed because Equifax did not properly execute a simple patching plan. Seriously?

 

Is this blog series live and viewable? I am not the only person who implements patching, monitoring, and log and event management in my environments. This is common knowledge. What I don't get is the why. Why, for the love of everything holy, do businesses not follow these basic practices?

 

CIxOs and other CxOs do not implement these practices themselves. However, it is their duty (to their company and their core values) to put the right people in place who will ensure that security measures are being carried out.

 

Think about that for a moment, and then know that a patch for the vulnerability Equifax failed to remediate was released in March. The breach happened, as we all know, in mid-May. Where was the validation? Where was the plan? Where was the ticketing system tracking the maintenance that should've been completed on their systems? There are so many questions, especially since this happened in an enterprise organization, not some small shop somewhere.

 

Now, let's take this another step further. Equifax dropped another juicy nugget of information: another breach, back in March. Don't worry, though. It was an entirely different attack. The incredible part, however, is that some of the upper-level folks were able to sell their stock. That makes my heart happy, you know, knowing they had the time to sell their stock before they released information about being breached. Hats off to them for that, right?

 

Then, another company decided they needed to market and sell credit monitoring (for a reduced fee, and one that just so happens to use EQUIFAX SERVICES) to the individuals who were now at a high(er) risk of identity theft and credit fraud. I'm still blown away by this.

 

Okay. Deep breath. Whooooo.

 

I was recently told that when you run third-party software, patching options are limited, and that some organizations' SLAs for application uptime don't allow patching on certain servers. I hear you! I know full well that patching servers can sometimes cause software to stop working or result in downtime. However, this is exactly where you have to stand up a lab and test your patches. You should be validating your patches regardless, to make sure you are not causing issues in your environment in the first place.

 

I usually implement patching on test servers on a Friday, and then I verify the status of my applications on those servers. I also run through my security checks to validate that no new holes have opened up and no settings have been reverted before I implement the patches in production within two weeks.
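
To give you an idea of what that Friday validation can look like, here is a minimal sketch of a post-patch smoke test. The host names and ports are hypothetical placeholders and your own checklist will be longer, but the idea is the same: prove the applications still answer before those patches ever touch production.

```python
# Minimal post-patch smoke test (hypothetical hosts/ports -- adjust for your environment).
# After patching a test server, confirm the application endpoints still respond
# before promoting the same patches to production.
import socket
import sys

# Hypothetical list of (server, port) pairs to verify after patching.
ENDPOINTS = [
    ("testapp01.example.local", 443),   # web front end
    ("testdb01.example.local", 1433),   # SQL Server instance
]

def endpoint_is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def main() -> int:
    failures = []
    for host, port in ENDPOINTS:
        status = "UP" if endpoint_is_up(host, port) else "DOWN"
        print(f"{host}:{port} -> {status}")
        if status == "DOWN":
            failures.append((host, port))
    # A non-zero exit code lets a scheduler or ticketing hook flag the failed validation.
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```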

 

Now let's bring this back to the strategy at hand. When you are an enterprise corporation with large amounts of personal data belonging to your trusting customers (who are the very reason you are as large as you are), you better DARN WELL have a security plan that is overseen by more than one individual! Come on! This is not a small shop or even a business that could argue, "Who would want our customer data?" We're talking about Equifax, a company that holds data about plenty of consumers who happen to have great credit. Equifax is figuratively a lavish buffet for hackers.

 

The C-level of this company should have kept a close eye on the security measures being taken by the organization, including patching, SQL monitoring, and log, event, and traffic monitoring. They should have known there were unpatched servers. The only argument I think they could have made is the common refrain, "We cannot afford downtime for patching." But still.

 

Your CxO or CIxO has to be your IT champion! They have to go nose to nose with their peers to make sure their properly and thoroughly designed security plans get implemented 100%. They hire the people to carry out such plans, and it is their responsibility to ensure that it gets done and isn't blocked at any level.

 

Enough venting, for the moment. Now I'd like to bring in some of my friends for their take on this Equifax nightmare that is STILL unfolding! Welcome joshberman, just one of my awesome friends here at SolarWinds, who always offers up great security ideas and thoughts.

 

Dez summed up things nicely in her comments above, but let's go back to the origins of this breach and explore the timeline of events to illustrate a few points.

 

  • March 6th: The exploited vulnerability, CVE-2017-5638, became public
  • March 7th: Security analysts began seeing attacks propagate that were designed to exploit this flaw
  • Mid-May: Equifax later traced the date of compromise back to this window of time
  • July 29th: Equifax discovered that a breach had occurred

 

Had a proper patch management strategy been put in place and backed by the right patch management software to enable the patching of third-party applications, it is likely that Equifax would not have succumbed to such a devastating attack. This applies even if testing had been factored into the timelines, just as Dez recommends. "Patch early, patch often" certainly applies in this scenario, given the speed with which hackers leverage newly discovered vulnerabilities as a means to their end. Once all is said and done, if there is one takeaway here, it is that patching, as a baseline IT security practice, is and will forever be a must. Beyond this obvious chink in Equifax's armor, there is a multitude of other means by which they could have thwarted this attack, or at least minimized its impact.

 

That's fantastic information, Josh. I appreciate your thoughts. 

 

I also asked mandevil (Robert) for his thoughts on the topic. He was on vacation, but he returned early to knock out some pertinent thoughts for me! Much appreciated, Robert!

 

Thanks, Dez. "We've had a breach and data has been obtained by entities outside of this company."

Imagine being the one responsible for maintaining a good security posture, and the sinking feeling you'd have when those words were spoken. If this is you, or even if you are only tangentially involved in security, I hope this portion of the post helps you understand the importance of securing data at rest as it pertains to databases.

 

Securing data in your database

 

The only place data can't be encrypted is when it is in cache (memory). While data is at rest (on disk) or in flight (on the wire), it can and should be encrypted if it is deemed sensitive. This section will focus on encrypting data at rest. There are a couple of different ways to encrypt data at rest when it is contained within a database. Major database vendors like Microsoft (SQL Server) and Oracle provide a method of encryption called Transparent Data Encryption (TDE). Depending on the vendor, this allows you to encrypt the data in the files at the database, tablespace, or column level. Encryption is implemented using certificates, keys, and strong algorithms and ciphers.

 

Links for more detail on vendor TDE description and implementation:

 

SQL Server TDE

Oracle TDE
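
If you're on SQL Server, a quick way to verify what TDE is (and isn't) covering is to query the encryption DMV. Here's a small sketch of that check, assuming pyodbc and an ODBC driver are available; the server name is a placeholder, and an encryption_state of 3 means the database is fully encrypted.

```python
# A quick sanity check (not a full TDE rollout): list which SQL Server databases
# actually have TDE enabled. Assumes pyodbc and an ODBC driver are installed;
# the connection string below is a placeholder for your own server.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.example.local;"     # hypothetical server name
    "Trusted_Connection=yes;"
)

# encryption_state of 3 means the database is encrypted; databases with no
# encryption key at all simply won't appear in sys.dm_database_encryption_keys.
QUERY = """
SELECT d.name, ISNULL(k.encryption_state, 0) AS encryption_state
FROM sys.databases AS d
LEFT JOIN sys.dm_database_encryption_keys AS k
       ON d.database_id = k.database_id
ORDER BY d.name;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, state in conn.cursor().execute(QUERY):
        label = "encrypted" if state == 3 else "NOT encrypted"
        print(f"{name}: {label} (state={state})")
```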

 

Data encryption can also be implemented using an appliance. This is a good option when you want to encrypt data but your database vendor doesn't offer a solution, or when licensing costs change with the use of their encryption features. You may also have data outside of a database that you'd want to encrypt, which would make this option more attractive (think of log files that may contain sensitive data). I won't go into detail about the different offerings out there, but I have researched several of these appliances and many appear to be well secured (strong algorithms and ciphers). Your storage array vendor(s) may also have solutions available.

 

What does this mean and how does it help?

 

Specifically, in the case of Equifax, storage-level hacks do not appear to have been employed, but there are many breaches in which storage was the target. Securing your data at rest on the storage tier can prevent storage-level hacks from obtaining any useful data. Keep in mind that even large database vendors have vulnerabilities that can be exploited by capturing data in cache, and encrypting data at the storage level will not help mitigate that.

 

What you should know

 

Does implementing TDE impact performance? There is overhead associated with encrypting data at rest because the data needs to be decrypted when it is read from disk into cache. That takes additional CPU cycles and a bit more time. However, unless you are CPU-constrained, the impact should not be noticeable to end-users. It should also be noted that index usage is not affected by TDE. The bottom line: if the data is sensitive enough that the statement at the top of this section gets you thinking along the lines of a resume-generating event, the negligible overhead of implementing encryption should not be a deterrent to its use. That said, don't encrypt more than is needed, and understand any compliance policies that govern your business (PCI, HIPAA, SOX, etc.).
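
If you'd rather put a number on that overhead in your own environment than take my word for it, a rough sketch like the one below can help. The database and table names are hypothetical placeholders, and on a test box you'd want to clear the buffer cache between runs so the comparison isn't hidden by data already sitting in memory.

```python
# Rough way to gauge TDE read overhead: time the same query against an encrypted
# and an unencrypted copy of a database. Names below are hypothetical placeholders.
# For a fair read comparison on a TEST box, clear the buffer cache between runs
# (e.g., CHECKPOINT; DBCC DROPCLEANBUFFERS;) so reads actually hit disk.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.example.local;"   # hypothetical server name
    "Trusted_Connection=yes;"
)
QUERY = "SELECT COUNT(*) FROM dbo.Orders;"   # hypothetical table
DATABASES = ["Sales_TDE", "Sales_Plain"]     # hypothetical encrypted / plain copies

def average_query_seconds(database: str, runs: int = 5) -> float:
    """Average elapsed seconds for QUERY against the given database."""
    with pyodbc.connect(CONN_STR + f"DATABASE={database};") as conn:
        cur = conn.cursor()
        total = 0.0
        for _ in range(runs):
            start = time.perf_counter()
            cur.execute(QUERY).fetchall()
            total += time.perf_counter() - start
    return total / runs

for db in DATABASES:
    print(f"{db}: {average_query_seconds(db):.3f}s average over 5 runs")
```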

 

Now to wrap this all up.

 

When we think of breaches, especially those involving highly sensitive data or data that falls under the scope of regulatory compliance, SIEM solutions certainly come to mind. This software performs a series of critical functions to support defense-in-depth strategies. In the case of Equifax, their most notable value would have been in minimizing the time to detection of either the compromise or the breach itself. On one hand, SIEM tools support the monitoring and alerting of anomalies on the network that could indicate a compromise. On the other, they can signal the exfiltration of data (the actual event of the breach) by monitoring traffic on endpoints and bringing to the foreground spikes in outbound traffic that, depending on the details, might otherwise go unnoticed. I'm not prepared to assume that Equifax was lacking such a solution, but given this timeline of events and their lag in response, it certainly raises the question.
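
To illustrate the kind of signal Josh is describing, here is a toy sketch that flags outbound traffic spikes against an endpoint's own recent baseline. The numbers are made up, and a real SIEM correlates far more context across thousands of endpoints, but the underlying idea really is this simple.

```python
# Toy illustration of an exfiltration signal: flag hours where an endpoint's
# outbound traffic jumps well above its own recent baseline. Sample data is made up.
from statistics import mean, stdev

# Hypothetical outbound MB per hour for one endpoint over the past day.
outbound_mb = [42, 38, 51, 45, 40, 39, 47, 44, 41, 43, 46, 40,
               38, 42, 45, 39, 44, 41, 40, 43, 47, 612, 598, 41]

baseline = outbound_mb[:20]                         # treat earlier samples as "normal"
threshold = mean(baseline) + 3 * stdev(baseline)    # simple 3-sigma cutoff

for hour, mb in enumerate(outbound_mb):
    if mb > threshold:
        print(f"Hour {hour:02d}: {mb} MB outbound exceeds baseline threshold "
              f"({threshold:.1f} MB) -- investigate possible exfiltration")
```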

 

As always, thank you all for reading and keep up these excellent conversations.