5 More Ways I Can Steal Your Data: Ask the Security Guard to Help Me Carry it Out


In my recent post 5 More Ways I Can Steal Your Data - Work for You & Stop Working for You, I started telling the story of a security guard who helped a just-fired contractor take servers with copies of production data out of the building:

Soon after he was rehired, the police called to say they had raided his home and found servers and other computer equipment with company asset control tags on them. They reviewed surveillance video that showed a security guard holding the door for the man as he carried equipment out in the early hours of the morning. The servers contained unencrypted personal data, including customer and payment information. Why? These were development servers where backups of production data were used as test data.

Apparently, the contractor was surprised to be hired back by a company that had caught him stealing.  Since he knew about the physical security weaknesses, he decided to focus not on taking equipment, but on the much more valuable customer and payment data.

How the Heck Was He Able to Do This?

You might think he was able to get away with this by having insider help, right?  He did, sort of.  But it didn't come from the security guard.  It came from poor management practices, not enough resources, and more. I'm going to refer to the thief here as "Our Friend".

Not Enough Resources

Our Friend had insider information about how lax physical security was at this location.  There was only ever one security person working at a time.  When she took breaks, or had to deal with a security issue elsewhere, no one else was there to cover the entrance.  Staff could enter with badges, and anyone could exit.  Badging systems were old and nearly featureless.  Printers and other resources available to the security group were old and nearly non-functioning.  Security staff weren't required or tested to be security-minded.

In this case, it was easy to figure out the weaknesses in this system.

Poor Security Practices

In the case of Our Friend, he was rehired by a different group who had no access to a "do not hire" list, because he was a contractor, not an employee.  He was surprised at being rehired (as were others).  The culture of this IT group was very much "mind your own business" and "don't make waves".  I find that a toxic management culture plays a key role in security approaches.  When security issues were raised, the response was more often than not "we don't have time to worry about that" or "focus on your own job".

Poor Physical Security

Piggybacking or tailgating (following a person with access through a door without scanning a badge) is common in many facilities, and rules against it are rarely enforced.  Sometimes employees would actually hold the door open for complete strangers.  This seems like being nice, but it's not. Another contractor, who had recently been let go, was let in several times during off hours to wander the hallways looking for his former work laptop.  He wanted to remove traces of improper files and photos.  He accomplished this by tailgating his way into the building.  This happened just weeks before Our Friend carried out his acts.

When Our Friend was rehired, there was a printout of his old badge photo hanging on the wall at the security area.  It was a low-resolution photo printed on a cheap inkjet printer running low on ink.  The guard working that day couldn't even tell that this guy had a "no entry" warning.  The badge printing software had no checks for "no new badge".

After being rehired, Our Friend was caught stealing again, this time networking equipment, and was let go.  Security was notified, and another poorly printed photo was put up in the security area. Then Our Friend came back in the early morning hours on a weekend, said he had forgotten his badge, and was issued a new one.  Nothing in the system raised an alert.
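
A do-not-admit check at badge issue time is a small piece of logic, not a big project.  Below is a minimal sketch in Python of what such a check could look like; the names (BadgeSystem, issue_temporary_badge, the contractor ID) are my own illustrations, not any real badging product's API.

    # Hypothetical sketch: refuse a temporary badge for anyone on a
    # do-not-admit list and raise an alert instead of issuing it.
    class BadgeSystem:
        def __init__(self, do_not_admit):
            self.do_not_admit = set(do_not_admit)  # IDs barred from the building

        def issue_temporary_badge(self, person_id, reason):
            if person_id in self.do_not_admit:
                self.alert_security(person_id, reason)
                return False  # no badge, and someone gets notified
            print(f"Temporary badge issued to {person_id} ({reason})")
            return True

        def alert_security(self, person_id, reason):
            # A real system would page a supervisor; printing stands in for that here
            print(f"ALERT: {person_id} is on the do-not-admit list; request: {reason}")

    console = BadgeSystem(do_not_admit=["CONTRACTOR-4412"])
    console.issue_temporary_badge("CONTRACTOR-4412", "forgot badge")  # refused, alert raised

A check like this, plus a usable photo, would have turned "he said he forgot his badge" into a phone call to someone who knew better.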

He spent some time gathering computers that were installed in development and QA labs, then some running in other unsecured areas.  He got a cart, and the security guard held the door open while he took them out to his car.  How do we know this?  There were video tapes. How do we know this? The security guard sold the tapes to a local news station. News stations love when there is video.

Data Ignorance

As I mentioned in the previous post, the company didn't even know the items were missing. It took several calls from the local police to get a response.  And even then the company denied anything was missing.  Because they didn't know.  Many of us knew that these computers would have production data on them, because this organization used production data in their development and test processes.

But the company itself had no data inventory system. They had no way of knowing just what data was on those computers.  It was also common to find that these systems had virtually no security, or that the entire QA environment shared a single login written on the whiteboard in the QA labs.  No one knew just what data was copied where.  Anyone could deploy production data anywhere they could find room for it. Requests for production data were granted to pretty much anyone in IT or the rest of the company.  Requests could be made verbally.  There were no records of any request or of the provision of data.  Employees were given no indication that any set of data held sensitive or otherwise protected data.

The lack of inventory let the company spokesperson say something like "These were just test devices; we have no indication that any customer data was involved in this theft".
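
A data inventory does not have to be elaborate.  Here is a minimal sketch in Python of the kind of record such an inventory could hold; the field names and example values are my own assumptions, not any particular tool's schema.

    # Hypothetical sketch of a minimal data inventory record: which dataset
    # lives where, how sensitive it is, and who approved the copy.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DataInventoryEntry:
        dataset: str            # e.g. "customer_payments"
        classification: str     # e.g. "PII+PCI", "internal", "public"
        environment: str        # e.g. "production", "QA lab", "dev server"
        system: str             # asset tag or hostname holding the copy
        approved_by: str        # who authorized this copy (never "verbal")
        approved_on: date
        encrypted_at_rest: bool

    inventory = [
        DataInventoryEntry(
            dataset="customer_payments",
            classification="PII+PCI",
            environment="QA lab",
            system="ASSET-0042",
            approved_by="data.governance@example.com",
            approved_on=date(2017, 11, 1),
            encrypted_at_rest=False,
        ),
    ]

    # The question the company could not answer: what sensitive data is
    # sitting outside production, unencrypted?
    at_risk = [e for e in inventory
               if e.environment != "production"
               and e.classification != "public"
               and not e.encrypted_at_rest]
    print(at_risk)

Even a spreadsheet with these columns, kept current, would have let the company answer the police's questions instead of guessing.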

Fixing It

I could go on with a list of tips on how to fix these issues. But the main fix, the one no one wants to embrace, is to stop using production data for dev and test.  I have more writing coming on this topic; it will be my agenda for 2018.  If this company had embraced this option, the theft would have been of just equipment and some test data with no value.

The main fix that no one wants to embrace is to stop using production data for dev and test.

If we as IT professionals started following the practice of having real test data, many of the breaches we know of would not have been breaches of real data.  Yes, we need to fix physical security issues.  But let's keep production data in production.  Unless we are testing a production migration, there's no need to use production data for any reason.  In fact, many data protection compliance schemes forbid it.
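
What does real test data look like in practice?  Here is a minimal sketch, using only the Python standard library, of fabricating customer records from scratch rather than masking production rows; the field names and value ranges are illustrative assumptions, not a prescription.

    # Hypothetical sketch: generate synthetic customer records with no link to
    # any real person, instead of copying or obscuring production data.
    import random
    import string
    import uuid

    FIRST_NAMES = ["Ada", "Grace", "Alan", "Edsger", "Radia", "Katherine"]
    LAST_NAMES = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Perlman", "Johnson"]

    def fake_customer():
        """Build one synthetic customer record; nothing here comes from production."""
        return {
            "customer_id": str(uuid.uuid4()),
            "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
            "email": "".join(random.choices(string.ascii_lowercase, k=8)) + "@example.com",
            # Deliberately invalid card number so it can never pass for real PCI data
            "card_number": "0000-0000-0000-" + "".join(random.choices(string.digits, k=4)),
            "balance": round(random.uniform(0, 5000.0), 2),
        }

    test_customers = [fake_customer() for _ in range(1000)]

If one of those stolen dev servers had held only records like these, the theft would have been an equipment loss, not a data breach.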

Have you developed real test data, not based on just trying to obscure production data, for all your dev/test needs?

  • And I think that practices like this occur in our government all of the time. The human side of security is fraught with peril. It's sticky, messy, nobody's favorite to deal with. My auditing background has a field day with scenarios like this. My blood pressure gets dialed up when I make recommendations and they are not acted upon.

  • Physical security really is the most basic level of security; if you can't get that right, then you probably aren't going to get the rest of it right either.

  • Inventorying assets annually (at least!) and then randomly performing spot checks to verify the inventory numbers are accurate is essential.

    Doing the same for problem employees would be an interesting task--one filled with risks of lawsuits.

    A "data inventory" is a term I'd not heard before, although we use the concept every day with electronic medical health records and employee records and finance and payroll, etc.  Each have their own checks and balances and alarms every time they are accessed, and the accesses are reviewed continually.

    It sure is tough people can't be trusted . . .

  • Personally I have not had the chance to develop test data to use for Dev. Normally Dev environments have been set up for the devices that certain departments need to test, keeping them isolated from other Dev instances they could interact with; given proper test data, we would have had the opportunity to properly benchmark setups and configurations. Too often I see these isolated Dev instances that let you work on what you need, but they provide no real data to use to scope your Dev toward the proper Production setup.

    The one instance where this was done was a massive EMR rollout; network refreshes had been done beforehand to make sure the proper backbone was in place to power the data. Still, no actual Dev work was done on any of our devices at the time, since our network was scoped to 'handle' the new system. Being able to compare actual Go Live behavior against what was experienced in Dev would have been massively useful with some of the issues that came up (i.e., there were a few choke points that had to be 'adjusted' after the rollout).
