
Geek Speak


In my soon-to-be-released eBook, 10 Ways I Can Steal Your Data, I cover the not-so-talked-about ways that people can access your enterprise data. It covers things like you're just GIVING me your data, ways you might not realize you are giving me your data, and how to keep those things from happening.

 

The 10 Ways eBook was prepared to complement my upcoming panel during next week's ThwackCamp on the data management lifecycle. You've registered for ThwackCamp, right? In this panel, a group of fun and sometimes irreverent IT professionals, including Thomas LaRock sqlrockstar, Stephen Foskett sfoskett and me, talk with Head Geek Kong Yang kong.yang about things we want to see in the discipline of monitoring and systems administration. We also did a fun video about stealing data. I knew I couldn't trust that Kong guy!

 

In this blog series, I want to talk a bit more about other ways I can steal your data. In fact, there are so many ways this can happen that I could do a semi-monthly blog series from now until the end of the world. Heck, with so many data breaches happening, the end of the world might just be sooner than we think.

 

More Data, More Breaches

We all know that data protection is getting more, and wider, attention. But why is that? Yes, there are more breaches, but I also think legislation, especially the regulations coming out of Europe, such as the General Data Protection Regulation (GDPR), means we are getting more reports. In the past, organizations would keep quiet about failures in their infrastructure and processes because they didn't want us to know how poorly they treated our data. In fact, during the "software is eating the world" phase of IT, when software developers were made kings of the world, most data had almost no protection and was haphazardly secured. We valued performance over privacy and security. We favored developer productivity over data protection. We loved our software more than we loved our data.

 

But this is all changing due to an increased focus on the way the enterprise values data.

 

I have some favorite mantras for data protection:

 

  • Data lasts longer than code, so treat it right
  • Data privacy is not security, but security is required to protect data privacy
  • Data protection must begin at requirements time
  • Data protection cannot be an after-production add-on
  • Secure your data and secure your job
  • Customer data is valuable to the customers, so if you value it, your customers will value your company
  • Data yearns to be free, but not to the entire world
  • Security features are used to protect data, but they have to be designed appropriately
  • Performance desires should never trump security requirements

 

 

And my favorite one:

 

  • ROI also stands for Risk of Incarceration: Keeping your boss out of jail is part of your job description

 

 

So keep an eye out for the announcement of the eBook release and return here in two weeks when I'll share even more ways I can steal your data.

It is true to the point of cliché that the cloud came and changed everything, or at the very least is in the process of changing everything, even as we stand here looking at it. Workloads are moving to the cloud. Hybrid IT is a reality in almost every business. Terms and concepts like microservices, containers, and orchestration pepper almost every conversation in every IT department in every company.

 

Like many IT professionals, I wanted to increase my exposure without completely changing my career or having to carve out huge swaths of time to learn everything from the ground up. Luckily, there is a community ready-made to help folks of every stripe and background: DevOps. The DevOps mindset runs slightly counter to the traditional IT worldview. Along with a dedication to continuous delivery, automation, and process, a central DevOps tenet is, "It's about the people." This alone makes the community extremely open and welcoming to newcomers, especially newcomers like me who have an ops background and a healthy dose of curiosity.

 

So, for the last couple of years, I've been on a bit of a journey. I wanted to see if there was a place in the DevOps community for an operations-minded monitoring warhorse like me. While I wasn't worried about being accepted into the DevOps community, I was worried about whether I would find a place where my interests and skills fit.

 

What concerned me the most was the use of words that sounded familiar but were presented in unfamiliar ways, chief among them the word "monitoring" itself. Every time I found a talk purporting to focus on monitoring, it was mostly about business analytics, developer-centric logging, and application tracing. I was shown slides that presented such self-evident truths as:

 

"The easy and predictable issues are mostly solved for at scale, so all the interesting problems need high cardinality solutions."

 

Where, I wondered, were the hardcore systems monitoring experts? Where were the folks talking about leveraging what they learned in enterprise-class environments as they transitioned to the cloud?

 

I began to build an understanding that monitoring in DevOps was more than just a change of scale (like going from physical to virtual machines) or location (like going from on-premises to colo). But at the same time, it was less than an utterly different discipline of monitoring that had no bearing on what I've been doing for 20-odd years. What that meant was that I couldn't ignore DevOps' definition of monitoring, but neither was I free to write it off as a variation of something I already knew.

 

Charity Majors (@mipsytipsy) has, for me at least, done the best job of painting a picture of what DevOps hopes to address with monitoring:

(excerpted from https://opensource.com/article/17/7/state-systems-administration)

"...And then on the client side: take mobile, for heaven's sake. The combinatorial explosion of (device types * firmwares * operating systems * apps) is a quantum leap in complexity on its own. Mix that in with distributed cache strategy, eventual consistency, datastores that split their brain between client and server, IoT, and the outsourcing of critical components to third-party vendors (which are effectively black boxes), and you start to see why we are all distributed systems engineers in the near and present future. Consider the prevailing trends in infrastructure: containers, schedulers, orchestrators. Microservices. Distributed data stores, polyglot persistence. Infrastructure is becoming ever more ephemeral and composable, loosely coupled over lossy networks. Components are shrinking in size while multiplying in count, by orders of magnitude in both directions... Compared to the old monoliths that we could manage using monitoring and automation, the new systems require new assumptions:

  • Distributed systems are never "up." They exist in a constant state of partially degraded service. Accept failure, design for resiliency, protect and shrink the critical path.
  • You can't hold the entire system in your head or reason about it; you will live or die by the thoroughness of your instrumentation and observability tooling.
  • You need robust service registration and discovery, load balancing, and backpressure between every combination of components..."

While Charity's post goes into greater detail about the challenge and some possible solutions, this excerpt should give you a good sense of the world she's addressing. With this type of insight, I began to form a better understanding of DevOps monitoring. But as my familiarity grew, so too did my desire to make the process easier for other "old monolith" (to use Charity's term) monitoring experts.

 

And this, more than anything else, was the driving force behind my desire to assemble some of the brightest minds in DevOps circles and discuss what it means, "When DevOps Says Monitor" for THWACKcamp this year. It is not too late to register for that session (https://thwack.solarwinds.com/community/thwackcamp-2017/when-devops-says-monitor). Better still, I hope you will join me for the full two-day conference and see what else might shake your assumptions, rattle your sense of normalcy, and set your feet on the road to the next stage of your IT journey.

 

PS: If you CAN'T join for the actual convention, not to worry! All the sessions will be available online to watch at a more convenient time.

I’ve always loved the story about the way Henry Ford built his automotive empire. During the Industrial Revolution, it became increasingly important to automate the construction of products to gain a competitive advantage in the marketplace. Ford understood that building cars faster and more efficiently would be hugely advantageous, so he developed an assembly line as well as a selling method (you could buy a Model T in any color, as long as it was black). If you want to know more about how Ford changed the automotive industry (and much more), there is plenty of information on the interwebs.

 

In the next couple of posts, I will dive a little deeper into the reasons why keeping your databases healthy in the digital revolution is so darn important. So please, let’s dive into the first puzzle of this important part of the database infrastructure we call storage.

 

As I already said, I really love the story of Ford and the way he changed the world forever. We, however, live in a revolutionary time that is changing the world even faster. This revolution seems -- and "seems" is the right word, if you ask me -- to focus on software instead of hardware. Given that the Digital Revolution is still relatively young, we must be like Henry and think like pioneers in this new space.

 

In the database realm, it seems to be very hard to know what causes the performance, or lack thereof, and where we should look to solve the problems at hand. In a lot of cases, it is almost automatic to blame it all on the storage, as the title implies. But knowledge is power, as my friend SpongeBob has known for so long.

Spongebob on knowledge

Storage is an important part of the database world, and with constantly changing and evolving hardware technology, we can squeeze more and more performance out of our databases. That being said, there is always a bottleneck. Of course, it could be that storage is the bottleneck we’re looking for when our databases aren’t performing the way they should. But in the end, we need to know what the bottleneck is and how we can fix it. More important is the ability to analyze and monitor the environment so that we can predict database performance and adjust it as needed before problems occur.

 

Henry Ford was looking for ways to fine-tune the way a car was built, and ultimately developed an assembly line for that purpose. His invention cut the amount of time it took to build a car from 12 hours to a surprising two-and-a-half hours. In the database world, speed is important, but blaming storage and focusing on solving only part of the database puzzle is short-sighted. Knowing your infrastructure and being able to tweak and solve problems before they start messing with your performance is where it all starts. Do you think otherwise? Please let me know if I forgot something, or got it all wrong. I'd love to start the discussion and see you in the next post.

Are you excited about THWACKcamp yet? It's next week, and I am flying down to Austin to be there for the live broadcast. Go here to register. Do it now!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

It’s official: Data science proves Mondays are the worst

I like having data to back up what we already knew about Mondays.

 

The Seven Deadly Sins of AI Predictions

A bit long, but worth the read. Many of these sins apply to predictions in general, as well as Sci-Fi writing.

 

Who should die when a driverless car crashes? Q&A ponders the future

I've thought and talked a lot about autonomous cars, but I’ve never once pondered what the car should do when faced with a decision about who should die, and who should live.

 

Traceroute Lies! A Typical Misinterpretation Of Output

Because I learned something new here and you might too: MPLS is as dumb as a stump.

 

Replacing Social Security Numbers Is Harder Than You Think

And they make for lousy primary keys, too.

 

Russia reportedly stole NSA secrets with help of Kaspersky—what we know now

I included this story even though it hasn't yet been confirmed because I wanted to remind everyone that there is at least one government agency that listens to the American people: the NSA.

 

History of the browser user-agent string

And you will read this, and there will be much rejoicing.

 

Shopping this past weekend, I found these socks, and now I realize that SolarWinds needs to up our sock game:

The THWACKcamp 2017 session, "Protecting the Business: Creating a Security Maturity Model with SIEM," is a must-see for anyone who’s curious about how event-based security managers actually work. SolarWinds Product Manager Jamie Hynds will join me to present a hands-on, end-to-end how-to on configuring and using SolarWinds Log & Event Manager. The session will include configuring file integrity monitoring, understanding the effects of normalization, and creating event correlation rules. We'll also do a live demonstration of USB Defender’s insertion and copy-activity detection and USB blocking; Active Directory® user, group, and group-policy configuration for account monitoring; lock-outs for suspicious activity; and detection of security log tampering.

 

Even if you’re not using LEM or a SIEM tool, this will be a valuable lesson on Active Directory threat considerations that will reveal real-world examples of attack techniques.

 

THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

If you're in technology, chances are you don't go too many days without hearing something about DevOps. Maybe your organization is thinking about transitioning from a traditional operations model to a DevOps model. In this series of blog posts, I plan to delve into DevOps to help you determine if it's right for your organization and illustrate how to start down the path of migrating to a DevOps model. Because DevOps means different things to different people, let's start by defining exactly what it is.

 

My big, semi-official definition of DevOps is that it is a way to approach software engineering with the goal of closely integrating software development and software operations. This encompasses many things, including automation, shorter release times for software builds, and continuous monitoring and improvement of software releases. That’s a lot, and it’s understandable that a good number of people may be confused about what DevOps is.

 

With that in mind, I reached out to people working in technology to get their thoughts on what exactly DevOps means to them. These individuals work in varying technology disciplines from data center engineers, Windows SysAdmins, and Agile developers to unified communications, wireless, and network engineers. About thirty people responded to my simple query of, "When someone says 'DevOps,' what do you think it means?" While there was a fair amount of overlap in some of the answers, here's a summary of where the spread fell out:
  • It's supposed to be the best combination of both dev and ops working in really tight coordination
  • Automation of basic tasks so that operations can focus on other things
  • It's glorified scripting; in other words, developers trying to understand network and storage concepts
  • It's the ability to get development or developers and operations to coexist in a single area with the same workflow and workloads
  • It's something for developers, but as far as what they actually do, I have no clue

 

A lot of these responses hit on some of the big definitions we discussed earlier, but none of them really sums up the many facets of DevOps. That’s okay. With the breadth of what DevOps encompasses, it makes sense for organizations to pick and choose the things that make the most sense for what they want to accomplish. The key part, regardless of what you are doing with DevOps, is the tight integration between development and software operations. Whether you’re looking to automate infrastructure configurations or support custom-written, inventory-tracking software, the message is the same. Make sure that your development and operations teams are closely integrated so that the tool does what it needs to do and can be quickly updated or fixed when something new is encountered.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Abruptly moving from legacy systems to the cloud is akin to building a new house without a foundation. Sure, it might have the greatest and most efficient new appliances and cool fixtures, but it’s not really going to work unless the fundamentals that support the entire structure are in place.

 

Administrators can avoid this pitfall by building modern networks designed for both the cloud of today and the needs of tomorrow. These networks must be software-defined, intelligent, open, and able to accommodate both legacy technologies and more advanced solutions throughout the cloud migration period. Simultaneously, their administrators must have complete visibility into network operations and applications, wherever they may be hosted.

 

Let’s look at some building practices that administrators can use to effectively create a solid, modern and cloud-ready network foundation.

 

Create a blueprint to monitor network bandwidth on the cloud

 

Many network challenges will likely come from increased traffic derived from an onslaught of devices on the cloud. The result is that both traditional and non-traditional devices are enabling network traffic that will inevitably impact bandwidth. Backhaul issues can also occur, particularly with traditional network architectures that aren’t equipped to handle the load that more devices and applications can put on the network.

 

It’s becoming increasingly important for administrators to be able to closely monitor and analyze network traffic patterns. They must have a means to track bandwidth usage down to individual users, applications, and devices so they can more easily pinpoint the root cause of slowdowns before, during, and after deploying a cloud migration strategy.
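
As a rough illustration of that kind of per-user and per-application accounting, here is a minimal Python sketch that aggregates hypothetical flow records into a top-talkers list. The field names and sample data are placeholders, not any particular product's schema.

# Minimal sketch: aggregate hypothetical flow records to find top bandwidth consumers.
# Field names (user, app, device, bytes) are illustrative, not a real export format.
from collections import defaultdict

def top_talkers(flow_records, key="user", n=10):
    """Sum bytes per key (user, app, or device) and return the top-n consumers."""
    totals = defaultdict(int)
    for record in flow_records:
        totals[record[key]] += record["bytes"]
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:n]

# Example usage with made-up records:
flows = [
    {"user": "alice", "app": "backup", "device": "srv01", "bytes": 750_000_000},
    {"user": "bob",   "app": "video",  "device": "lt042", "bytes": 1_200_000_000},
    {"user": "alice", "app": "email",  "device": "lt017", "bytes": 40_000_000},
]
for who, total in top_talkers(flows, key="user"):
    print(f"{who}: {total / 1e9:.2f} GB")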

 

Construct automated cloud security protocols

 

Agencies moving from a traditional network infrastructure to the cloud will want to make sure their security protocols evolve as well. Network software should automatically detect and report on potentially malicious activity, use of rogue or unauthorized devices, and other factors that can prove increasingly hazardous as agencies commence their cloud migration strategy efforts.

 

Automation will become vitally important because there are simply too many moving parts to a modern, cloud-ready network for managers to easily and manually control. In addition to the aforementioned monitoring practices, regular software updates should be automatically downloaded to ensure that the latest versions of network tools are installed. And administrators should consider instituting self-healing protocols that allow the network to automatically correct itself in case of a slowdown or breach.
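
To make the rogue-device piece a little more concrete, here is a minimal Python sketch that compares devices discovered on the network against an approved inventory. The MAC addresses and inventory format are illustrative assumptions, not output from any specific discovery tool.

# Minimal sketch: flag devices seen on the network that aren't in an approved inventory.
approved_inventory = {
    "00:1a:2b:3c:4d:5e": "core-switch-01",
    "00:1a:2b:3c:4d:5f": "app-server-07",
}

def find_rogue_devices(discovered_macs, inventory):
    """Return MAC addresses observed on the network but not present in the inventory."""
    return sorted(set(discovered_macs) - set(inventory))

discovered = ["00:1a:2b:3c:4d:5e", "aa:bb:cc:dd:ee:ff"]
for mac in find_rogue_devices(discovered, approved_inventory):
    print(f"ALERT: unapproved device on network: {mac}")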

 

Create an open-concept cloud environment

 

Lack of visibility can be a huge network management challenge when migrating to the cloud. Agency IT personnel must be able to maintain a holistic view of everything that’s happening on the network, wherever that activity may be taking place. Those taking a hybrid cloud approach will require network monitoring that allows them to see into the dark hallways that exist between on-premises and cloud infrastructures. They must also be able to continuously monitor the performance of those applications, regardless of where they exist.

 

Much as well-built real estate increases in value over time, creating a cloud-ready, modernized network will offer significant benefits, both now and in the future. Agencies will be able to enjoy better security and greater flexibility through networks that can grow along with demand, and they’ll have a much easier time managing the move to the cloud with an appropriate network infrastructure. In short, they’ll have a solid foundation upon which to build their cloud migration strategies.

 

Find the full article on Government Computer News.

Change is coming fast and furious, and in the eye of the storm are applications. I’ve seen entire IT departments go into scramble drills to see if they possessed the necessary personnel and talent to bridge the gap as their organizations embraced digital transformation. And by digital transformation, I mean the ubiquitous app or service that can be consumed from anywhere on any device at any given time. For organizations, it’s all about making money all the time from every possible engagement. Removing barriers to consumption and making the organization more money is what digital transformation is all about.

 

There are new roles and responsibilities to match these new tech paradigms. Businesses have to balance their technical debt and deficit and retire any part of their organization that cannot close the gap. Ultimately, IT leaders have to decide on the architecture that they’ll go with, and identify whether to buy or build the corresponding talent. The buy-or-build talent question becomes THE obstacle to success.

 

There is a need for IT talent that can navigate the change that is coming. Because of the increased velocity, volume, and variety that apps bring, IT leaders are going into binge-buying mode. In the rush to accomplish their goals, they don't take the time to seek out latent talent that is likely already in their organizations. Have IT pros become merely disposable resources? Are they another form of tech debt?

 

Buy or build? This is the billion-dollar question because personnel talent is the primary driver of innovation. That talent turns technology and applications into industry disruptors. Where does your organization stand on this issue? Are they building or buying talent? Let me know in the comments below.

Some reports from your IT monitoring system help you do your job—like auditing, troubleshooting, managing resources, and planning. There are also reports that you create to let your boss know how the IT environment is performing, such as enterprise-level SLAs, availability/downtime, and future-proofing. But what exactly does senior management really need to know, and why?

 

In the "Executive BabelFish: The Reports Your CIO Wants to See" session, you’ll hear from SolarWinds CIO Rani Johnson, Director of IT, SANDOW, Mindy Marks, Kinnser Software CTO Joel Dolisy, and me, SolarWinds Head Geek Patrick Hubbard. We will present relevant information and how it is used.

 

THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

I'm back from Microsoft Ignite, which was my third event in the past five weeks. I get to stay home for a couple of weeks before heading to the premier event of the year, THWACKcamp 2017! If you haven't registered for THWACKcamp yet, go here and do it now.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

With new Microsoft breakthroughs, general purpose quantum computing moves closer to reality

By far the biggest announcement last week at Microsoft Ignite was (for me, at least) their research into quantum computing. I’m looking forward to using up to 40 qubits of compute on Azure in a future billing cycle. I can’t wait to both pay my bill and not pay my bill each month, too!

 

Do You Really Need That SQL to Be Dynamic?

No. #SavedYouAClick

 

Microsoft CEO Satya Nadella: We will regret sacrificing privacy for national security

Truer words will never be spoken.

 

Malware Analysis—Mouse Hovering Can Cause Infection

I suppose the easy answer here is to not allow for URLs to be included in hover text. But short links are likely a bigger concern than hover text, so maybe focus on disabling those first. If you can’t see the full URL, don’t click.

 

Breach at Sonic Drive-In May Have Impacted Millions of Credit, Debit Cards

Attacking a fast food chain? Hackers are crossing a line here.

 

Kalashnikov unveils electric “flying car”

In case you were wondering what to get me for Christmas this year (besides bacon, of course).

 

Mario was originally punching his companion Yoshi in Super Mario World

Mario. What a jerk.

 

Behold! The Glory that is an American State Fair:

Managing federal IT networks has always been a monumental task. They have traditionally been massive, monolithic systems that require significant resources to maintain.

 

One would think that this situation would have improved with the advent of virtualization, but the opposite has proved to be true. In many agencies, the rise of the virtual machine has led to massive VM sprawl, which wastes resources and storage capacity because of a lack of oversight and control over VM resource provisioning. Left unattended, VM sprawl can wreak havoc, from degraded network and application performance to network downtime.

 

Oversized VMs that were provisioned with more resources than necessary can waste storage and compute resources, and so can the overallocation of RAM and idle VMs.

 

There are two ways to successfully combat VM sprawl. First, administrators should put processes and policies in place to prevent it from happening. Even then, however, VM sprawl may occur, which makes it imperative that administrators also establish a second line of defense that keeps it in check during day-to-day operations.

 

Let’s take a closer look at strategies that can be implemented:

 

Process

 

The best way to get an early handle on VM sprawl is to define specific policies and processes. This first step involves a combination of five different approaches, all designed to stop VM sprawl before it has a chance to spread.

 

  1. Establish role-based access control policies that clearly articulate who has the authority to create new VMs.
  2. Allocate resources based on actual utilization.
  3. Challenge oversized VM requests.
  4. Create standard VM categories to help filter out abnormal or oversized VM requests.
  5. Implement policies regarding snapshot lifetimes.

 

Operations

 

Unfortunately, VM sprawl can occur even if these initial defenses are put in place. Therefore, it’s incumbent upon IT teams to be able to maintain a second layer of defense that addresses sprawl during operations.

 

Consider a scenario in which a project is cancelled or delayed. Or, think about what happens in an environment where storage is incorrectly provisioned.

 

During operations, it’s important to use an automated approach to virtualization management that employs predictive analysis and reclamation capabilities. Using these solutions, federal IT managers can tap into data on past usage trends to optimize their current and future virtual environments. Through predictive analysis, administrators can apply what they’ve learned from historical analysis to address issues before they occur. They can also continually monitor and evaluate their virtual environments and get alerts when issues arise so problems can be remediated quickly and efficiently.
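
To make that concrete, here is a minimal Python sketch that flags idle and oversized VMs from historical usage data so they can be reclaimed or right-sized. The thresholds, field names, and sample records are illustrative assumptions, not output from any specific virtualization manager.

# Minimal sketch: classify VMs as idle or oversized based on hypothetical usage history.
IDLE_CPU_PCT = 3        # average CPU below this suggests an idle VM
OVERSIZED_CPU_PCT = 20  # peak CPU below this on a large allocation suggests over-provisioning

def classify_vms(vms):
    findings = []
    for vm in vms:
        if vm["avg_cpu_pct"] < IDLE_CPU_PCT:
            findings.append((vm["name"], "idle - candidate for power-off or reclamation"))
        elif vm["peak_cpu_pct"] < OVERSIZED_CPU_PCT and vm["vcpus"] > 2:
            findings.append((vm["name"], "oversized - candidate for right-sizing"))
    return findings

vms = [
    {"name": "dev-web-12", "vcpus": 8,  "avg_cpu_pct": 1.5,  "peak_cpu_pct": 4.0},
    {"name": "prod-db-03", "vcpus": 16, "avg_cpu_pct": 45.0, "peak_cpu_pct": 92.0},
]
for name, reason in classify_vms(vms):
    print(f"{name}: {reason}")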

 

While each of these strategies by themselves can be effective in controlling VM sprawl, together they create a complete and sound foundation that can greatly improve and simplify virtualization management. They allow administrators to build powerful, yet contained, virtualized networks.

 

Find the full article on Government Computer News.

Recently, I wrote a post about the concept of a pre-mortem, which was inspired by an amazing podcast I’d heard (listen to it here). I felt that this concept could be interpreted exceptionally well within a framework of project management. The idea of thinking through as many of the variables and hindrances to the success of individual tasks as possible, each of which would in turn delay the milestones necessary to complete the project as a whole, correlates quite naturally to medical care. Addressing the goals linked to a person's or a project's wellness really resonated with me.

 

In my new series of five posts, this being the first, I will discuss how concepts from IT correlate to the way we live life. I will begin with how I see project management as a potentially correlative element to life in general and how to successfully live this life. This is not to say that I am entirely successful, as definitions of success are subjective. But I do feel that each day I get closer to setting and achieving better goals.

 

First, we need to determine what our goals are, whether they're financial, physical, fitness-related, emotional, romantic, professional, social, or whatever else matters to you. For me, lately, a big one has become getting better at guitar. But ultimately, the goals themselves are not as important as achieving them.

 

So, how do I apply the tenets of project management to my own goals? First and foremost, the most critical step is keeping the goals in mind.

 

  • Define your goals and set timelines
  • Define the steps you need to take to achieve your goals
  • Determine the assets necessary to achieve these goals, including vendors, partners, friends, equipment, etc.
  • Define potential barriers to achieving those goals, such as travel for work, illness, family emergencies, etc.
  • Plan around those barriers (as best you can)
  • Establish a list of potential roadblocks, and establish correlating contingencies that could mitigate those roadblocks
  • Work it

 

What does the last point mean? It means engaging with individuals who are integral to setting and keeping commitments. These people will help keep an eye on the commitments you’ve made to yourself and the steps you’ve outlined for yourself, and will help you do the work necessary to make each discrete task, timeline, and milestone achievable.

 

If necessary, create a project diagram of each of your goals, including the above steps, and define these milestones with dates marked as differentiators. Align ancillary tasks with their subtasks, and define those as in-line sub-projects. Personally, I do this step for every IT project with which I am involved. Visualizing a project using a diagram helps me keep each timeline in check. I also take each of these tasks and build an overall timeline of each sub-project, and put it together as a master diagram. By visualizing these, I can ensure that future projects aren't forgotten, while also giving myself a clear view of when I need to contact outside resources for their buy-in. Defining my goals prior to going about achieving them allows me to be specific. Also, seeing that I am accomplishing minor successes along the way (staying on top of the timeline I've set, for example) helps me stay motivated.

 

Below is a sample of a large-scale precedence diagram I created for a global DR project I completed years ago. As you start from the left side and continue to the right, you’ll see each step along the way, with sub-steps, milestones, go/no-go determinations, and final project success on the far right. Of course, this diagram has been minimized to fit the page. In this case, visibility into each individual step is not as important as the definition and precedence of each step: what does each step rely on before it can be achieved? Those dependencies have all been defined on the diagram.

 

 

The satisfaction I garner from a personal or professional task well accomplished cannot be minimized.

I'm back from Microsoft Ignite and I've got to tell you that even though this was my first time at the event, I felt like it was one for the record books. My record books, if nothing else.

 

My feelings about Microsoft are... complicated. I've used Windows for almost 30 years, since version 2.10, which came for free on 12 5.25" floppies when you bought a copy of Excel 2.0. From that point on, while also using and supporting systems that ran DOS, Novell, MacOS, OS/2, SunOS, HP-UX, and Linux, Microsoft's graphical operating system was a consistent part of my life. For a while, I was non-committal about it. Windows had its strengths and weaknesses. But as time went on, and both the company and the technology changed (or didn't), I became apathetic, and then contrarian. I've run Linux on my primary systems for over a decade now, mostly out of spite.

 

That said, I'm also an IT pro. I know which side of the floppy the write-protect tab is flipped to. Disliking Windows technically didn't stop me from getting my MCSE or supporting it at work.

 

The Keynote

The preceding huge build-up is intended to imply that, previous to this year's MS:Ignite keynote, I wasn't ready to lower my guard, open my heart, and learn to trust (if not love) again. I had been so conflicted. But then I heard the keynote.

 

I appreciate, respect, and even rejoice in much of what was shared in Satya Nadella's address. His zen-like demeanor and measured approach proved, in every visible way, that he owns his role as Microsoft's chief visionary. (An amusing sidenote: On my way home after a week at Ignite, I heard Satya on NPR. Once again, he was calm, unassuming, passionate, focused, and clear.)

 

Mr. Nadella's vision for Microsoft did not include a standard list of bulleted goals. Instead, he framed a far more comprehensive and interesting vision of the kind of world the company wants to build. This was made clear in many ways, from the simultaneous, real-time translation of his address into more than a dozen languages (using AI, of course), to the fact that, besides himself, just three panelists and one non-speaking demonstration tech were men. Women who occupied important technical roles at Microsoft filled the rest of the seats onstage. Each delivered presentations full of clarity, passion, and enthusiasm.

 

But the inspirational approach didn't stop there. One of the more captivating parts of the keynote came when Nadella spoke about the company's work on designing and building a quantum computer. This struck me as the kind of corporate goal worthy of our best institutions. It sounded, at least to my ears, like Microsoft had an attitude of, "We MIGHT not make it to this goal, but we're going to try our damnedest. Even if we don't go all the way, we're going to learn a ton just by trying to get there!"

 

The Floor

Out on the exhibitor floor, some of that same aspirational thinking was in evidence.

 

I spent more time than I ought to have in the RedHat booth. (I needed a little me time in a Linux-based safe space, okay?) The folks staffing the booth were happy to indulge me. They offered insights into their contributions to past cross-platform efforts, such as getting Visual Studio ported to Linux. This provided context when I got to zero in on the details of running SQL Server on Linux. While sqlrockstar and I have a lot to say about that, the short story is that it's solid. You aren't going to see 30% performance gains (like you do, in fact, by running Visual Studio on Linux), but there's a modest increase that makes your choice to cuddle up to Tux more than just a novelty.

 

I swung by the MSDN booth on a lark (I maintain my own subscription) and was absolutely blown away when the staffer took the time to check out my account and show me some ways to slice $300 off my renewal price next year. That's a level of dedication to customer satisfaction that I had not expected from M$ in... well, forever, honestly.

 

I also swung by the booth that focused on SharePoint. As some people can attest (I'm looking at you, @jbiggley), I hate that software with the fiery passion of 1,000 suns, and I challenged the techs there to change my mind. The person I spoke to got off to a pretty poor start by creating a document and then immediately losing it. That's pretty consistent with my experience of SharePoint, to be honest. He recovered gracefully and indicated a few features and design changes which MIGHT make the software easier to stomach, in case I'm ever forced (er, able) to work on that platform again.

 

In the booth, SolarWinds dominated in the way I've come to expect over the past four years of convention-going. There was a mad rush for our signature backpacks (the next time you can find us at a show, you NEED to sign up for one of them, and you need to hit the show floor early). The THWACK.com socks made a lot of people happy, of course. Folks from other booths even came over to see if they could score a pair.

 

What was gratifying to me was the way people came with specific, focused questions. There was less, "Welp, I let you scan my badge, so show me what you got" attitude than I've experienced at other shows. Folks came with a desire to address a specific challenge, to find out about a specific module, or even to replace their current solution.

 

The Buzz

But what had everyone talking? What was the "it" tech that formed the frame of reference for convention goers as they stalked the vendor booths? Microsoft gave everyone three things to think about:

  1. PowerShell. This is, of course, not a new tech. But now more than ever (and I realize one could have said this a year ago, or even two, but it keeps becoming ever truer), a modicum of PowerShell skill is a requirement if you expect your life as an IT pro to include Microsoft tech.
  2. Teams. This is the new name for the function that is currently occupied by Lync/Skype/Skype for Business. While there is a lot of development runway left before this software takes flight, you could already see (in the keynote demos and on the show floor) that it, along with Bing, will be central to Microsoft's information strategy. Yes, Bing. And no, I'm not making any further comment on that. I feel good enough about Microsoft that I won't openly mock them, but I haven't lost my damn mind.
  3. Project Honolulu. This is another one of those things that will probably require a completely separate blog post. But everyone who showed up at the booth (especially after the session where they unveiled the technical preview release) wanted to know where SolarWinds stood in relation to it.

 

The SWUG Life

Finally, there was the SolarWinds User Group (SWUG) session Tuesday night. In one of the most well-attended SWUGs ever, Steven Hunt, Kevin Sparenberg, and I had the privilege of presenting to a group of users whose enthusiasm and curiosity were undiminished, despite having been at the convention all day. Steven kicked things off with a deep dive into PerfStack, making a compelling case that we monitoring experts need to strongly consider our role as data scientists, considering the vast pile of information we sit on top of. Then I had a chance to show off NetPath a bit, taking Chris O'Brien's NetPath Chalk Talk and extending it to show off some of the hidden tricks and new insights we've gained after watching users interact with it for over a year. And Kevin brought it home with his Better Together talk, which honestly gets better every time I watch him deliver it.

 

The Summary

If you were at Ignite this year, I'd love to hear whether you think my observations are off-base or spot-on. And if you weren't there, I'd highly recommend adding it to your travel plans (it will be in Orlando again next year), especially if you tend to walk on the wild systems side in your IT journey. Share your thoughts in the comments below.

IT operations are evolving to include business concerns and services that move beyond the technology itself. With change being a constant, how does an IT professional remain relevant?

 

In the "Soft Skills Beyond the Tech," session, I will be joined by fellow Head Geek™ Thomas LaRock, the Exchange Goddess, Phoummala Schmitt, and Tech Field Day organizer-in-chief and owner of Foskett Services, Stephen Foskett, to discuss the top three soft skills that IT professionals need to have to not only survive but thrive in their careers.

 

THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

I know, I'm a day late and quite possibly 37 cents short for my coffee this morning, so let's jump in, shall we?

 

Let's start with the Equifax breach. This came up in the Shields Down Conversation Number Two, so I thought I would invite some of my friends from our security products to join me to discuss the breach from a few different angles.

 

My take will be from a business strategy (or lack thereof) standpoint. Roughly 143 million people had their personal data exposed because Equifax did not properly execute a simple patching plan. Seriously?

 

Is this blog series live and viewable? I am not the only person who implements patching, monitoring, log and event management in my environments. This is common knowledge. What I don't get is the why. Why, for the love of everything holy, do businesses not follow these basic practices?

 

CIxO or CXOs do not implement these practices. However, it is their duty (to their company and their core values) to put the right people in place who will ensure that security measures are being carried out.

 

Think about that for a moment, and then know that a patch for the vulnerability Equifax failed to remediate was produced in March. The breach happened, as we all know, in mid-May. Where is the validation? Where was the plan? Where is the ticketing system tracking the maintenance that should've been completed on their systems? There are so many questions, especially since this happened in an enterprise organization, not some small shop somewhere.

 

Now, let's take this another step further. Equifax dropped another juicy nugget of information about another breach in March. Don't worry, though. It was an entirely different attack. However, the incredible part is that some of the upper-level folks were able to sell their stock. That makes my heart happy, you know, to know that they had the time to sell their stock before they released information on being breached. Hats off to them for that, right?

 

Then, another company decided they needed to market and sell credit monitoring (for a reduced fee, that just so happens to use EQUIFAX SERVICES) to the individuals who were now at a high(er) risk of identity theft and credit fraud. I'm still blown away by this.

 

Okay. Deep breath. Whooooo.

 

I was recently informed that when you have third-party software, patching is limited, and that an organization's SLAs for application uptime don't allow patching on some of their servers. I hear you! I am a big believer that patching some servers can cause software to stop working or result in downtime. However, this is where you have to implement a lab and test patching. You should check your patching regardless, to make sure you are not causing issues with your environment in the first place.

 

I usually implement patching on test servers on a Friday, and then I verify the status of my applications on those servers.

I also go through my security checks to validate that no new holes or regressions have appeared before I implement in production within two weeks.
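
For anyone who wants to script part of that verification step, here is a minimal Python sketch that runs a couple of post-patch health checks against a test server before the patches are promoted to production. The hostnames, URLs, and ports are placeholders, not a reference to any real environment.

# Minimal sketch: verify application health on a freshly patched test server.
import socket
import urllib.request

def http_ok(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

checks = [
    ("app health page", lambda: http_ok("http://testserver01/health")),
    ("database listener", lambda: port_open("testserver01", 1433)),
]
for name, check in checks:
    print(f"{name}: {'OK' if check() else 'FAILED - hold the production rollout'}")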

 

Now let's bring this back to the strategy at hand. When you are an enterprise corporation with large amounts of personal data belonging to your trusting customers (who are the very reason you are as large as you are), you better DARN WELL have a security plan that is overseen by more than one individual! Come on! This is not a small shop or even a business that could argue, "Who would want our customer data?" We're talking about Equifax, a company that holds data about plenty of consumers who happen to have great credit. Equifax is figuratively a lavish buffet for hackers.

 

The C-level of this company should have kept a close eye on the security measures being taken by the organization, including patching, SQL monitoring, and log, event, and traffic monitoring. They should have known there were unpatched servers. The only thing I think they could have argued was the common refrain, "We cannot afford downtime for patching." But still.

 

Your CxO or CIxO has to be your IT champion! They have to go nose to nose with their peers to make sure their properly and thoroughly designed security plans get implemented 100%. They hire the people to carry out such plans, and it is their responsibility to ensure that it gets done and isn't blocked at any level.

 

Enough venting, for the moment. Now I'd like to bring in some of my friends for their take on this Equifax nightmare that is STILL unfolding! Welcome joshberman, just one of my awesome friends here at SolarWinds, who always offers up great security ideas and thoughts.

 

Dez summed up things nicely in her comments above, but let's go back to the origins of this breach and explore the timeline of events to illustrate a few points.

 

  • March 6: The exploited vulnerability, CVE-2017-5638, became public
  • March 7: Security analysts began seeing attacks designed to exploit this flaw propagate in the wild
  • Mid-May: Equifax later traced the date of compromise back to this window of time
  • July 29: Equifax discovered that a breach had occurred

 

Had a proper patch management strategy been set in place and backed by the right patch management software to enable the patching of third-party applications, it is likely that Equifax would not have succumbed to such a devastating attack. This applies even if testing had been factored into the timelines, just as Dez recommends. "Patch early, patch often" certainly applies in this scenario, given the speed with which hackers leverage newly discovered vulnerabilities as a means to their end. Once all is said and done, if there is one takeaway here, it is that patching, as a baseline IT security practice, is and will forever be a must. Beyond the obvious chink in Equifax's armor, there is a multitude of other means by which they could have thwarted this attack, or at least minimized its impact.
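
As a rough sketch of what "patch early, patch often" can look like in practice, the following Python snippet compares installed software versions against known-fixed versions so unpatched hosts surface quickly. The inventory data and fix versions are illustrative; always check the vendor advisory for the authoritative fixed releases.

# Minimal sketch: flag hosts running software older than the version that fixes a known CVE.
def parse_version(v):
    return tuple(int(part) for part in v.split("."))

# product -> version that contains the fix; anything older is flagged. Illustrative only.
FIXES = {"apache-struts": "2.3.32"}

def unpatched_hosts(inventory):
    findings = []
    for host, product, version in inventory:
        fixed_in = FIXES.get(product)
        if fixed_in and parse_version(version) < parse_version(fixed_in):
            findings.append((host, product, version, fixed_in))
    return findings

inventory = [
    ("web01", "apache-struts", "2.3.30"),
    ("web02", "apache-struts", "2.3.32"),
]
for host, product, version, fixed_in in unpatched_hosts(inventory):
    print(f"{host}: {product} {version} is older than fixed version {fixed_in} - patch now")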

 

That's fantastic information, Josh. I appreciate your thoughts. 

 

I also asked mandevil (Robert) for his thoughts on the topic. He was on vacation, but he returned early to knock out some pertinent thoughts for me! Much appreciated, Robert!

 

Thanks, Dez. "We've had a breach and data has been obtained by entities outside of this company."

Imagine being the one responsible for maintaining a good security posture, and the sinking feeling you had when these words were spoken. If this is you, or even if you are tangentially involved in security, I hope this portion of this post helps you understand the importance of securing data at rest as it pertains to databases.

 

Securing data in your database

 

The only place data can't be encrypted is when it is in cache (memory). While data is at rest (on disk) or in flight (on the wire), it can and should be encrypted if it is deemed sensitive. This section will focus on encrypting data at rest. There are a couple of different ways to encrypt data at rest when it is contained within a database. Many major database vendors, like Microsoft (SQL Server) and Oracle, provide a method of encryption called Transparent Data Encryption (TDE). This allows you to encrypt the data in the files at the database, tablespace, or column level, depending on the vendor. Encryption is implemented using certificates, keys, and strong algorithms and ciphers.

 

Links for more detail on vendor TDE description and implementation:

 

SQL Server TDE

Oracle TDE
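
For a sense of what enabling TDE involves on SQL Server, here is a minimal sketch that issues the usual T-SQL steps from Python via pyodbc. The connection string, database name, certificate name, and passwords are placeholders; back up the certificate and keys before doing this anywhere that matters, and treat the vendor links above as the authoritative reference.

# Minimal sketch: enable SQL Server TDE on a hypothetical database named SensitiveDB.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver01;"
    "DATABASE=master;UID=sa;PWD=ChangeMe!",
    autocommit=True,
)
cur = conn.cursor()

steps = [
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1'",
    "CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate'",
    "USE SensitiveDB",
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TdeCert",
    "ALTER DATABASE SensitiveDB SET ENCRYPTION ON",
]
for statement in steps:
    cur.execute(statement)

# Check encryption progress (encryption_state 3 means encrypted).
cur.execute("SELECT db_name(database_id), encryption_state FROM sys.dm_database_encryption_keys")
for name, state in cur.fetchall():
    print(name, state)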

 

Data encryption can also be implemented using an appliance. This is a solution to consider if you want to encrypt data but your database vendor doesn't offer one, or if licensing structures change with the use of their encryption. You may also have data outside of a database that you'd want to encrypt, which would make this option more attractive (think of log files that may contain sensitive data). I won't go into details about the different offerings out there, but I have researched several of these appliances, and many appear to be highly secure (strong algorithms and ciphers). Your storage array vendor(s) may also have solutions available.

 

What does this mean and how does it help?

 

Specifically, in the case of Equifax, storage-level hacks do not appear to have been employed, but there are many occurrences where storage was the target. By securing your data at rest on the storage tier, you can prevent storage-level hacks from obtaining any useful data. Keep in mind that even large database vendors have vulnerabilities that can be exploited by capturing data in cache. Encrypting data at the storage level will not help mitigate this.

 

What you should know

 

Does implementing TDE impact performance? There is overhead associated with encrypting data at rest because the data needs to be decrypted when read from disk into cache. That will take additional CPU cycles and a bit more time. However, unless you are CPU-constrained, the impact should not be noticeable to end-users. It should be noted that index usage is not affected by TDE. The bottom line is that if the data is sensitive enough that the statement at the top of this section gets you thinking along the lines of a resume-generating event, the negligible overhead of implementing encryption should not be a deterrent to its use. However, don't encrypt more than is needed. Understand any compliance policies that govern your business (PCI, HIPAA, SOX, etc.).

 

Now to wrap this all up.

 

When we think of breaches, especially those involving highly sensitive data or data that falls under the scope of regulatory compliance, SIEM solutions certainly come to mind. This software performs a series of critical functions to support defense-in-depth strategies. In the case of Equifax, their most notable value would have been in minimizing the time to detect either the compromise or the breach itself. On one hand, SIEM solutions support the monitoring and alerting of anomalies on the network that could indicate a compromise. On the other, they can signal the exfiltration of data – the actual event of the breach – by monitoring traffic on endpoints and bringing to the foreground spikes in outbound traffic, which, depending on the details, may otherwise go unnoticed. I'm not prepared to assume that Equifax was lacking such a solution, but given this timeline of events and their lag in response, it raises the question.
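
As a toy example of that outbound-traffic signal, the following Python sketch flags samples that spike well above a rolling baseline for a single host. Real SIEM correlation is far richer; the window size, threshold, and sample data here are illustrative assumptions.

# Minimal sketch: flag unusual spikes in outbound bytes against a rolling baseline.
from statistics import mean, stdev

def outbound_spikes(samples, window=24, sigma=3.0):
    """samples: ordered outbound byte counts (e.g., hourly) for one host.
    Returns indexes where a sample exceeds baseline mean + sigma * stddev."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        threshold = mean(baseline) + sigma * stdev(baseline)
        if samples[i] > threshold:
            alerts.append(i)
    return alerts

hourly_outbound = [120, 130, 110, 125] * 6 + [5000]  # 24 normal hours, then a big spike
print(outbound_spikes(hourly_outbound))  # -> [24]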

 

As always, thank you all for reading and keep up these excellent conversations.
