
Geek Speak

4 Posts authored by: buraglio

Incident responders: Build or buy?

There is far more to security management than technology. In fact, one could argue that the human element is more important in a field where intuition is just as valuable as knowledge of the tech. In the world of security management, I have not seen a more hotly debated non-technical issue than the figurative “build or buy” decision when it comes to incident responder employees. The polarized camps are the obvious ones:

  • Hire for experience. In this model the desirable candidate is a mid-career or senior-level, experienced incident responder. The pros and cons are debatable:
    • More expensive
    • Potentially set in their ways
    • Can hit the ground running
    • Low management overhead
  • Hire for ability. In this model, a highly motivated but less experienced engineer is hired and molded into as close to what the enterprise requires as they can get. Using this methodology the caveats and benefits are a bit different, as it is a longer-term strategy:
    • Less expensive
    • “Blank slate”
    • Requires more training and attention
    • Initially less efficient
    • More unknowns due to lack of experience
    • Can potentially become exactly what is required
    • May leave after a few years

In my stints managing and being involved with hiring, I have found that it is a difficult task to find a qualified, senior-level engineer or incident responder who has the personality traits conducive to melding seamlessly into an existing environment. That is not to say it isn’t possible, but soft skills are a lost art in technology, and especially so in development and security. In my travels, sitting on hiring committees and writing job descriptions, I have found that the middle ground is the key. Mid-career, still-hungry incident responders with a budding amount of intuition have been the blue chips in the job searches and hires I have been involved with. They tend to have the fundamentals and a formed gut instinct that makes them incredibly valuable and, at the same time, very open to mentorship. Now, the downside is that 40% of the time they’re going to move on just when they’re really useful, but the 60% that stick around a lot longer? They are often the framework that thinks outside the box and keeps the team fresh.

What seems like a lifetime ago, I worked for a few enterprises doing various things like firewall configuration, email system optimization, and hardening of NetWare, NT4, AIX, and HP-UX servers. There were three good-sized employers: a bank and two huge insurance companies, both of which had financial components. While working at each and every one of them, I was subject to their security policies (one of which I helped to craft, but that is a different path altogether), none of which really addressed data retention. When I left those employers, they archived my home directories, remaining mailboxes, and whatever other artifacts I left behind. None of this was really an issue for me, as I never brought any personal or sensitive data in, and everything I generated on site was theirs by the nature of what it was. What did not occur to me then, though, was that this was essentially a digital trail of breadcrumbs that could exist indefinitely. What else was left behind, and was it also archived? Mind you, this was in the 1990s and network monitoring was fairly clunky, especially at scale, so the likely answer to this question is "nothing," but I assert that the answer to that question has changed significantly in this day and age.

Liability is a hard pill for businesses to swallow. Covering your bases is key, and that is where data retention is a double-edged sword. Thinking like I am playing a lawyer on TV: keeping data on hand is useful for forensic analysis of potentially catastrophic data breaches, but it can also be a liability in that it can prove culpability for employee misbehavior on corporate time, with corporate resources, and on the company's behalf. Is it worth it?

Since that time oh so long ago, I have found that the benefit has far outweighed the risk in retaining the information, especially traffic data such as proxy logs, firewall logs, and network flows. The real issues I have, as noted in previous posts, are the correlation of said data and, more often than not, the archival of what can amount to massive amounts of disk space.
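As a rough illustration of the correlation side of that problem, the sketch below cross-references retained firewall deny logs with flow records by source IP inside a time window. The file names, CSV field layouts, and the five-minute window are assumptions made purely for the example, not a reference to any particular product.

```python
# Minimal sketch: correlate retained firewall deny events with flow records by
# source IP and a time window. File names, field layouts, and the five-minute
# window are illustrative assumptions.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def load_denies(path="firewall_denies.csv"):
    """Expects rows of: timestamp,src_ip,dst_ip,dst_port (ISO 8601 timestamps)."""
    denies = defaultdict(list)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            denies[row["src_ip"]].append(datetime.fromisoformat(row["timestamp"]))
    return denies

def correlate(flow_path="flows.csv", deny_path="firewall_denies.csv"):
    """Yield flow records whose source IP was denied within WINDOW of the flow."""
    denies = load_denies(deny_path)
    with open(flow_path, newline="") as fh:
        for flow in csv.DictReader(fh):
            ts = datetime.fromisoformat(flow["timestamp"])
            for deny_ts in denies.get(flow["src_ip"], ()):
                if abs(ts - deny_ts) <= WINDOW:
                    yield flow
                    break

if __name__ == "__main__":
    for hit in correlate():
        print(hit["timestamp"], hit["src_ip"], "->", hit.get("dst_ip", "?"))
```

Nothing fancy, but it is the kind of after-the-fact cross-referencing that is only possible if the data was kept in the first place.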

If I can offer one nugget of advice, learned through years of having to decide what goes, what stays, and for how long, it is this: buy the disks. Procure the tape systems; do what you need to do to keep as much of the data as you can get away with, because once it is gone, it is highly unlikely that you can ever get it back.

Of all of the security techniques, few garner more polarized views than interception and decryption of trusted protocols. There are many reasons to do it and a great deal of legitimate concern about compromising the integrity of a trusted protocol like SSL. SSL is the most common protocol to intercept, unwrap, and inspect, and accomplishing this has become easier and requires far less operational overhead than it did even five years ago. Weighing those concerns against the information that can be ascertained by cracking it open and looking at its content is often a struggle for enterprise security engineers because of the privacy that is implied. In previous lives I have personally struggled to reconcile this, but ultimately decided that the ethical concerns involved in what I consider a violation of implied security outweighed the benefit of SSL intercept. With other options being few, blocking protocols that obfuscate their content seems to be the next logical option; however, with the prolific increase in SSL-enabled sites over the last 18 months, even this option seems unrealistic and, frankly, clunky. Exfiltration of data, anything from personally identifiable information to trade secrets and intellectual property, is becoming a more and more common "currency," and is much more desirable and lucrative to transport out of businesses and other entities. These are hard problems to solve.
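For readers wondering what inspection actually looks like once the traffic is decrypted, here is a minimal sketch using mitmproxy's Python addon API as a stand-in for an enterprise intercept platform. The tool choice, the 1 MB threshold, and the crude large-upload heuristic are all assumptions for illustration, not a recommendation to intercept.

```python
# Minimal sketch of post-decryption inspection, assuming a mitmproxy-based
# intercept point. Run with: mitmdump -s flag_large_uploads.py
# The threshold and the exfiltration heuristic are illustrative assumptions.
from mitmproxy import http

UPLOAD_THRESHOLD = 1_000_000  # bytes; arbitrary value for the sketch

def request(flow: http.HTTPFlow) -> None:
    """Flag unusually large outbound request bodies, a crude exfiltration check
    that is only possible because the TLS session has been intercepted."""
    body = flow.request.raw_content or b""
    if len(body) > UPLOAD_THRESHOLD:
        print(f"[possible exfil] {flow.request.method} "
              f"{flow.request.pretty_host} ({len(body)} bytes)")
```

That visibility is exactly what is being traded away when you decide, as I did, that the privacy implications are not worth it.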

Are there options out there that make better sense? Are large and medium-sized enterprises doing SSL intercept? How is the data being analyzed and stored?

Given the current state of networking and security, with the prevalence of DDoS attacks such as the NTP monlist attack, SNMP and DNS amplification, as well as very directed techniques like doxing and, most importantly to many enterprises, exfiltration of sensitive data, network and security professionals are forced to look at creative and often innovative means of ascertaining information about their networks and traffic patterns. This can sometimes mean finding and collecting data from many sources and correlating it, or, in extreme cases, obtaining access to otherwise secure protocols.
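To make the flow-data angle concrete, here is a small sketch of one cheap-but-creative check: flagging likely amplification responses in exported flow records. The CSV field names, the port list, and the bytes-per-packet threshold are assumptions for the example, not tuned detection values.

```python
# Minimal sketch: flag possible amplification traffic in exported flow records.
# The CSV layout, port list, and threshold are illustrative assumptions.
import csv

AMP_SOURCE_PORTS = {53, 123, 161}        # DNS, NTP, SNMP
BYTES_PER_PACKET_THRESHOLD = 400         # large average payloads are suspect

def suspicious_flows(path="flows.csv"):
    """Yield UDP flows sourced from classic amplification ports with large
    average packet sizes, a common signature of reflection/amplification."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["proto"] != "udp":
                continue
            if int(row["src_port"]) not in AMP_SOURCE_PORTS:
                continue
            packets = max(int(row["packets"]), 1)
            if int(row["bytes"]) / packets >= BYTES_PER_PACKET_THRESHOLD:
                yield row

if __name__ == "__main__":
    for flow in suspicious_flows():
        packets = max(int(flow["packets"]), 1)
        print(flow["src_ip"], flow["src_port"], "->", flow["dst_ip"],
              f'{int(flow["bytes"]) // packets} B/pkt')
```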

Knowing your network and computational environment is absolutely critical to the classification and detection of anomalies and potential security infractions. In today’s hostile environments, which have often had to grow organically over time, and given the importance and often considerable expense of obtaining, analyzing, and storing this information, what creative ways are being utilized to accomplish these tasks? How is the correlation being done? Are enterprises and other large networks utilizing techniques like full packet capture at their borders? Are you performing SSL intercept and decryption? How is correlation and cross-referencing of security and log data accomplished in your environment? Is it tied into any performance or outage sources?
