
Geek Speak


I'm back from Microsoft Ignite, which was my third event in the past five weeks. I get to stay home for a couple of weeks before heading to the premier event of the year, THWACKcamp 2017! If you haven't registered for THWACKcamp yet, go here and do it now.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

With new Microsoft breakthroughs, general purpose quantum computing moves closer to reality

By far the biggest announcement last week at Microsoft Ignite was (for me, at least) their research into quantum computing. I’m looking forward to using up to 40 qubits of compute on Azure in a future billing cycle. I can’t wait to both pay my bill and not pay my bill each month, too!

 

Do You Really Need That SQL to Be Dynamic?

No. #SavedYouAClick

 

Microsoft CEO Satya Nadella: We will regret sacrificing privacy for national security

Truer words were never spoken.

 

Malware Analysis—Mouse Hovering Can Cause Infection

I suppose the easy answer here is to not allow for URLs to be included in hover text. But short links are likely a bigger concern than hover text, so maybe focus on disabling those first. If you can’t see the full URL, don’t click.

 

Breach at Sonic Drive-In May Have Impacted Millions of Credit, Debit Cards

Attacking a fast food chain? Hackers are crossing a line here.

 

Kalashnikov unveils electric “flying car”

In case you were wondering what to get me for Christmas this year (besides bacon, of course).

 

Mario was originally punching his companion Yoshi in Super Mario World

Mario. What a jerk.

 

Behold! The Glory that is an American State Fair:

Managing federal IT networks has always been a monumental task. They have traditionally been massive, monolithic systems that require significant resources to maintain.

 

One would think that this situation would have improved with the advent of virtualization, but the opposite has proved to be true. In many agencies, the rise of the virtual machine has led to massive VM sprawl, which wastes resources and storage capacity because of a lack of oversight and control over VM resource provisioning. Left unattended, VM sprawl can wreak havoc, from degraded network and application performance to network downtime.

 

Oversized VMs that were provisioned with more resources than necessary can waste storage and compute resources, and so can the overallocation of RAM and idle VMs.

 

There are two ways to successfully combat VM sprawl. First, administrators should put processes and policies in place to prevent it from happening. Even then, however, VM sprawl may occur, which makes it imperative that administrators also establish a second line of defense that keeps it in check during day-to-day operations.

 

Let’s take a closer look at strategies that can be implemented:

 

Process

 

The best way to get an early handle on VM sprawl is to define specific policies and processes. This first step involves a combination of five different approaches, all designed to stop VM sprawl before it has a chance to spread.

 

  1. Establish role-based access control policies that clearly articulate who has the authority to create new VMs.
  2. Allocate resources based on actual utilization.
  3. Challenge oversized VM requests.
  4. Create standard VM categories to help filter out abnormal or oversized VM requests.
  5. Implement policies regarding snapshot lifetimes.
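As a rough illustration of the last policy, a snapshot-lifetime rule can be enforced with a simple age check. The snapshot records and the seven-day limit below are hypothetical; in practice the inventory would come from your hypervisor's API:

```python
from datetime import datetime, timedelta

# Hypothetical snapshot inventory; in a real environment this would be
# exported from the hypervisor (e.g., via PowerCLI or pyVmomi).
snapshots = [
    {"vm": "web-01", "name": "pre-upgrade", "created": datetime(2017, 9, 1)},
    {"vm": "db-02",  "name": "quick-test",  "created": datetime(2017, 9, 27)},
]

MAX_SNAPSHOT_AGE = timedelta(days=7)  # assumed policy limit

def expired_snapshots(snapshots, now):
    """Return snapshots older than the policy allows."""
    return [s for s in snapshots if now - s["created"] > MAX_SNAPSHOT_AGE]

now = datetime(2017, 9, 29)
for s in expired_snapshots(snapshots, now):
    print(f'{s["vm"]}: snapshot "{s["name"]}" exceeds the 7-day policy')
```

A report like this, run on a schedule, turns the written policy into something administrators can act on day to day.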

 

Operations

 

Unfortunately, VM sprawl can occur even if these initial defenses are put in place. Therefore, it’s incumbent upon IT teams to be able to maintain a second layer of defense that addresses sprawl during operations.

 

Consider a scenario in which a project is cancelled or delayed. Or, think about what happens in an environment where storage is incorrectly provisioned.

 

During operations, it’s important to use an automated approach to virtualization management that employs predictive analysis and reclamation capabilities. Using these solutions, federal IT managers can tap into data on past usage trends to optimize their current and future virtual environments. Through predictive analysis, administrators can apply what they’ve learned from historical analysis to address issues before they occur. They can also continually monitor and evaluate their virtual environments and get alerts when issues arise so problems can be remediated quickly and efficiently.
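A minimal sketch of the reclamation idea described above: flag VMs whose historical CPU usage stays below a threshold as candidates for review. The usage data and the 5% threshold are assumptions for illustration only:

```python
# Hypothetical per-VM CPU utilization samples (percent), e.g. hourly
# averages pulled from a monitoring tool's history.
usage_history = {
    "app-01":   [42.0, 55.3, 61.2, 48.9],
    "idle-07":  [1.2, 0.8, 2.1, 1.5],
    "batch-03": [3.0, 88.0, 91.5, 4.2],
}

IDLE_THRESHOLD = 5.0  # percent; assumed policy value

def reclamation_candidates(history, threshold=IDLE_THRESHOLD):
    """Return VMs whose average AND peak CPU stay under the threshold."""
    candidates = []
    for vm, samples in history.items():
        avg = sum(samples) / len(samples)
        if avg < threshold and max(samples) < threshold:
            candidates.append(vm)
    return candidates

print(reclamation_candidates(usage_history))
```

Note that `batch-03` is spared: its average is skewed low, but its peaks show real work, which is why checking both average and peak matters before reclaiming anything.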

 

While each of these strategies can be effective on its own in controlling VM sprawl, together they create a complete and sound foundation that can greatly improve and simplify virtualization management. They allow administrators to build powerful, yet contained, virtualized networks.

 

Find the full article on Government Computer News.

Recently, I wrote a post about the concept of a pre-mortem, which was inspired by an amazing podcast I’d heard (listen to it here). I felt that this concept translates exceptionally well to a project management framework. The idea of anticipating as many variables and hindrances as possible, each of which could delay individual tasks and, in turn, the milestones necessary to complete the project as a whole, correlates quite literally to medical care. Addressing the goals linked to a person's or project's wellness really resonated with me.

 

In my new series of five posts, this being the first, I will discuss how concepts of IT code correlate to the way we live life. I will begin with how I see project management as a potentially correlative element to life in general and how to successfully live this life. This is not to say that I am entirely successful, as definitions of success are subjective. But I do feel that each day I get closer to setting and achieving better goals.

 

First, we need to determine what our goals are, whether financial, physical, fitness, emotional, romantic, professional, social, or whatever else matters to you. For me, lately, a big one has become getting better at guitar. But ultimately, the goals themselves are not as important as achieving them.

 

So, how do I apply the tenets of project management to my own goals? First and foremost, the most critical step is keeping the goals in mind.

 

  • Define your goals and set timelines
  • Define the steps you need to take to achieve your goals
  • Determine the assets necessary to achieve these goals, including vendors, partners, friends, equipment, etc.
  • Define potential barriers to achieving those goals, such as travel for work, illness, family emergencies, etc., as best you can
  • Establish a list of potential roadblocks, along with correlating contingencies that could mitigate them
  • Work it

 

What does the last point mean? It means engaging with individuals who are integral to setting and keeping commitments. These people will help keep an eye on the commitments you’ve made to yourself and the steps you’ve outlined, and will help you do the work necessary to make each discrete task, timeline, and milestone achievable.

 

If necessary, create a project diagram of each of your goals, including the above steps, and define these milestones with dates marked as differentiators. Align ancillary tasks with their subtasks, and define those as in-line sub-projects. Personally, I do this step for every IT project with which I am involved. Visualizing a project using a diagram helps me keep each timeline in check. I also take each of these tasks and build an overall timeline of each sub-project, and put it together as a master diagram. By visualizing these, I can ensure that future projects aren't forgotten, while also giving me a clear view of when I need to contact outside resources for their buy-in. Defining my goals prior to going about achieving them allows me to be specific. And seeing that I am accomplishing minor successes along the way (staying on top of the timeline I've set, for example) helps me stay motivated.

 

Below is a sample of a large-scale precedence diagram I created for a global DR project I completed years ago. As you start from the left side and continue to the right, you’ll see each step along the way, with sub-steps, milestones, go/no-go determinations, and final project success on the far right. Of course, this diagram has been minimized to fit the page. In this case, visibility into each step is not as important as the definition and precedence of each step: what does each step rely on? Those dependencies have all been defined on the diagram.

 

 

The satisfaction I garner from a personal or professional task well accomplished cannot be minimized.

I'm back from Microsoft Ignite and I've got to tell you that even though this was my first time at the event, I felt like it was one for the record books. My record books, if nothing else.

 

My feelings about Microsoft are... complicated. I've used Windows for almost 30 years, since version 2.10, which came for free on 12 5.25" floppies when you bought a copy of Excel 2.0. From that point on, while also using and supporting systems that ran DOS, Novell, MacOS, OS/2, SunOS, HP-UX, and Linux, Microsoft's graphical operating system was a consistent part of my life. For a while, I was non-committal about it. Windows had its strengths and weaknesses. But as time went on, and both the company and the technology changed (or didn't), I became apathetic, and then contrarian. I've run Linux on my primary systems for over a decade now, mostly out of spite.

 

That said, I'm also an IT pro. I know which side of the floppy the write-protect tab is flipped to. Disliking Windows technically didn't stop me from getting my MCSE or supporting it at work.

 

The Keynote

The preceding huge build-up is intended to imply that, previous to this year's MS:Ignite keynote, I wasn't ready to lower my guard, open my heart, and learn to trust (if not love) again. I had been so conflicted. But then I heard the keynote.

 

I appreciate, respect, and even rejoice in much of what was shared in Satya Nadella's address. His zen-like demeanor and measured approach proved, in every visible way, that he owns his role as Microsoft's chief visionary. (An amusing sidenote: On my way home after a week at Ignite, I heard Satya on NPR. Once again, he was calm, unassuming, passionate, focused, and clear.)

 

Mr. Nadella's vision for Microsoft did not include a standard list of bulleted goals. Instead, he framed a far more comprehensive and interesting vision of the kind of world the company wants to build. This was made clear in many ways, from the simultaneous, real-time translation of his address into more than a dozen languages (using AI, of course), to the fact that, besides himself, just three panelists and one non-speaking demonstration tech were men. Women who occupied important technical roles at Microsoft filled the rest of the seats onstage. Each delivered presentations full of clarity, passion, and enthusiasm.

 

But the inspirational approach didn't stop there. One of the more captivating parts of the keynote came when Nadella spoke about the company's work on designing and building a quantum computer. This struck me as the kind of corporate goal worthy of our best institutions. It sounded, at least to my ears, like Microsoft had an attitude of, "We MIGHT not make it to this goal, but we're going to try our damnedest. Even if we don't go all the way, we're going to learn a ton just by trying to get there!"

 

The Floor

Out on the exhibitor floor, some of that same aspirational thinking was in evidence.

 

I spent more time than I ought to have in the RedHat booth. (I needed a little me time in a Linux-based safe space, okay?) The folks staffing the booth were happy to indulge me. They offered insights into their contributions to past cross-platform efforts, such as getting Visual Studio ported to Linux. This provided context when I got to zero in on the details of running SQL Server on Linux. While sqlrockstar and I have a lot to say about that, the short story is that it's solid. You aren't going to see 30% performance gains (like you do, in fact, by running Visual Studio on Linux), but there's a modest increase that makes your choice to cuddle up to Tux more than just a novelty.

 

I swung by the MSDN booth on a lark (I maintain my own subscription) and was absolutely blown away when the staffer took the time to check out my account and show me some ways to slice $300 off my renewal price next year. That's a level of dedication to customer satisfaction that I have not expected from M$ in... well, forever, honestly.

 

I also swung by the booth that focused on SharePoint. As some people can attest (I'm looking at you, @jbiggley), I hate that software with the fiery passion of 1,000 suns, and I challenged the techs there to change my mind. The person I spoke to got off to a pretty poor start by creating a document and then immediately losing it. That's pretty consistent with my experience of SharePoint, to be honest. He recovered gracefully and indicated a few features and design changes which MIGHT make the software easier to stomach, in case I'm ever forced to work on that platform again.

 

In the booth, SolarWinds dominated in the way I've come to expect over the past four years of convention-going. There was a mad rush for our signature backpacks (the next time you can find us at a show, you NEED to sign up for one of them, and you need to hit the show floor early). The THWACK.com socks made a lot of people happy, of course. Folks from other booths even came over to see if they could score a pair.

 

What was gratifying to me was the way people came with specific, focused questions. There was less, "Welp, I let you scan my badge, so show me what you got" attitude than I've experienced at other shows. Folks came with a desire to address a specific challenge, to find out about a specific module, or even to replace their current solution.

 

The Buzz

But what had everyone talking? What was the "it" tech that formed the frame of reference for convention goers as they stalked the vendor booths? Microsoft gave everyone three things to think about:

  1. PowerShell. This is, of course, not a new tech. But now more than ever (and I realize one could have said this a year ago, or even two, but it keeps becoming ever truer), a modicum of PowerShell skill is a requirement if you expect your life as an IT pro to include Microsoft tech.
  2. Teams. This is the new name for the function that is currently occupied by Lync/Skype/Skype for Business. While there is a lot of development runway left before this software takes flight, you could already see (in the keynote demos and on the show floor) that it, along with Bing, will be central to Microsoft's information strategy. Yes, Bing. And no, I'm not making any further comment on that. I feel good enough about Microsoft that I won't openly mock them, but I haven't lost my damn mind.
  3. Project Honolulu. This is another one of those things that will probably require a completely separate blog post. But everyone who showed up at the booth (especially after the session where they unveiled the technical preview release) wanted to know where SolarWinds stood in relation to it.

 

The SWUG Life

Finally, there was the SolarWinds User Group (SWUG) session Tuesday night. In one of the most well-attended SWUGs ever, Steven Hunt, Kevin Sparenberg, and I had the privilege of presenting to a group of users whose enthusiasm and curiosity was undiminished, despite having been at the convention all day. Steven kicked things off with a deep dive into PerfStack, making a compelling case that we monitoring experts need to strongly consider our role as data scientists, considering the vast pile of information we sit on top of. Then I had a chance to show off NetPath a bit, taking Chris O'Brien's NetPath Chalk Talk and extending it to show off some of the hidden tricks and new insights we've gained after watching users interact with it for over a year. And Kevin brought it home with his Better Together talk, which honestly gets better every time I watch him deliver it.

 

The Summary

If you were at Ignite this year, I'd love to hear whether you think my observations are off-base or spot-on. And if you weren't there, I'd highly recommend adding it to your travel plans (it will be in Orlando again next year), especially if you tend to walk on the wild systems side in your IT journey. Share your thoughts in the comments below.

IT operations are evolving to include business and service concerns that move beyond the technology itself. With change being a constant, how does an IT professional remain relevant?

 

In the "Soft Skills Beyond the Tech" session, I will be joined by fellow Head Geek™ Thomas LaRock, the Exchange Goddess, Phoummala Schmitt, and Tech Field Day organizer-in-chief and owner of Foskett Services, Stephen Foskett, to discuss the top three soft skills that IT professionals need to have to not only survive but thrive in their careers.

 

THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you, so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

I know, I'm a day late and quite possibly 37 cents short for my coffee this morning, so let's jump in, shall we?

 

Let's start with the Equifax breach. This came up in the Shields Down Conversation Number Two, so, I thought I would invite some of my friends from our security products to join me to discuss the breach from a few different angles.

 

My take will be from a business strategy (or lack thereof) standpoint. Roughly 143 million people had their personal data exposed because Equifax did not properly execute a simple patching plan. Seriously?

 

Is this blog series live and viewable? I am not the only person who implements patching, monitoring, log and event management in my environments. This is common knowledge. What I don't get is the why. Why, for the love of everything holy, do businesses not follow these basic practices?

 

CIxO or CXOs do not implement these practices. However, it is their duty (to their company and their core values) to put the right people in place who will ensure that security measures are being carried out.

 

Think about that for a moment, and then know that a patch for the vulnerability Equifax failed to remediate was released in March. This breach happened, as we all know, in mid-May. Where is the validation? Where was the plan? Where is the ticketing system tracking the maintenance that should've been completed on their systems? There are so many questions, especially since this happened in an enterprise organization, not some small shop somewhere.

 

Now, let's take this another step further. Equifax dropped another juicy nugget of information about another breach in March. Don't worry, though. It was an entirely different attack. However, the incredible part is that some of the upper-level folks were able to sell their stock. That makes my heart happy, you know, to know that they had the time to sell their stock before they released information on being breached. Hats off to them for that, right?

 

Then, another company decided they needed to market and sell credit monitoring (for a reduced fee, that just so happens to use EQUIFAX SERVICES) to the individuals who were now at a high(er) risk of identity theft and credit fraud. I'm still blown away by this.

 

Okay. Deep breath. Whooooo.

 

I was recently informed that when you have third-party software, patching is limited, and that some organizations' SLAs for application uptime don't allow patching on some of their servers. I hear you! I am a big believer that patching some servers can cause software to stop working or result in downtime. However, this is where you have to implement a lab and test patching. You should check your patches regardless, to make sure you are not causing issues with your environment in the first place.

 

I will implement patching on test servers usually on a Friday, and then I will verify the status of my applications on the server.

I will also go through my security checks to validate that no new holes have opened and no reverts have occurred before I implement in production within two weeks.
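That test-then-promote cadence can be expressed as a simple gate: a patch reaches production only after its application and security checks pass, and within the two-week window. The record structure and patch name here are a hypothetical sketch:

```python
from datetime import date, timedelta

PROMOTION_WINDOW = timedelta(days=14)  # assumed test-to-production window

def ready_for_production(patch, today):
    """A patch is promotable only if both checks passed and the
    two-week window since the test deployment has not lapsed."""
    in_window = today - patch["tested_on"] <= PROMOTION_WINDOW
    return patch["app_checks_ok"] and patch["security_checks_ok"] and in_window

patch = {
    "id": "KB-hypothetical-001",
    "tested_on": date(2017, 9, 15),   # deployed to the test server on a Friday
    "app_checks_ok": True,
    "security_checks_ok": True,
}

print(ready_for_production(patch, today=date(2017, 9, 25)))   # within window
print(ready_for_production(patch, today=date(2017, 10, 5)))   # window lapsed
```

The point of the gate is that neither condition alone is enough: passing checks with a lapsed window means re-testing, and an open window with failing checks means fixing first.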

 

Now let's bring this back to the strategy at hand. When you are an enterprise corporation with large amounts of personal data belonging to your trusting customers (who are the very reason you are as large as you are), you better DARN WELL have a security plan that is overseen by more than one individual! Come on! This is not a small shop or even a business that could argue, "Who would want our customer data?" We're talking about Equifax, a company that holds data about plenty of consumers who happen to have great credit. Equifax is figuratively a lavish buffet for hackers.

 

The C-level of this company should have kept a close eye on the security measures being taken by the organization, including patching, SQL monitoring, and log, event, and traffic monitoring. They should have known there were unpatched servers. The only thing I think they could have argued was the common refrain, "We cannot afford downtime for patching." But still.

 

Your CxO or CIxO has to be your IT champion! They have to go nose to nose with their peers to make sure their properly and thoroughly designed security plans get implemented 100%. They hire the people to carry out such plans, and it is their responsibility to ensure that it gets done and isn't blocked at any level.

 

Enough venting, for the moment. Now I'd like to bring in some of my friends for their take on this Equifax nightmare that is STILL unfolding! Welcome joshberman, just one of my awesome friends here at SolarWinds, who always offers up great security ideas and thoughts.

 

Dez summed up things nicely in her comments above, but let's go back to the origins of this breach and explore the timeline of events to illustrate a few points.

 

  • March 6th: The exploited vulnerability, CVE-2017-5638, became public
  • March 7th: Security analysts began seeing attacks designed to exploit this flaw propagate
  • Mid-May: Equifax traced the date of compromise back to this window of time
  • July 29th: Equifax discovered a breach had occurred

 

Had a proper patch management strategy been set in place and backed by the right patch management software to enable the patching of third-party applications, it is likely that Equifax would not have succumbed to such a devastating attack. This applies even if testing had been factored into the timelines, just as Dez recommends. "Patch early, patch often" certainly applies in this scenario, given the speed with which hackers leverage newly discovered vulnerabilities as a means to their end. Once all is said and done, if there is one takeaway here, it is that patching, as a baseline IT security practice, is and will forever be a must. Beyond the obvious chink in Equifax's armor, there is a multitude of other means by which they could have thwarted this attack, or at least minimized its impact.
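The timeline above can be turned into a simple exposure calculation: the gap between the vulnerability's disclosure and the estimated compromise, and between compromise and discovery. Dates follow the bullets above; "mid-May" is approximated as May 15 for illustration:

```python
from datetime import date

disclosure = date(2017, 3, 6)    # CVE-2017-5638 made public
compromise = date(2017, 5, 15)   # "mid-May" approximated for illustration
discovery  = date(2017, 7, 29)   # breach discovered

patch_window  = (compromise - disclosure).days
detection_lag = (discovery - compromise).days

print(f"Days available to patch before compromise: {patch_window}")  # 70
print(f"Days from compromise to discovery: {detection_lag}")         # 75
```

Roughly ten weeks to apply an available patch, and another ten-plus weeks before detection: numbers like these are exactly what a patch management and monitoring practice exists to shrink.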

 

That's fantastic information, Josh. I appreciate your thoughts. 

 

I also asked mandevil (Robert) for his thoughts on the topic. He was on vacation, but he returned early to knock out some pertinent thoughts for me! Much appreciated, Robert!

 

Thanks, Dez. "We've had a breach and data has been obtained by entities outside of this company."

Imagine being the one responsible for maintaining a good security posture, and the sinking feeling you had when these words were spoken. If this is you, or even if you are tangentially involved in security, I hope this portion of this post helps you understand the importance of securing data at rest as it pertains to databases.

 

Securing data in your database

 

The only place data can't be encrypted is when it is in cache (memory). While data is at rest (on disk) or in flight (on the wire), it can and should be encrypted if it is deemed sensitive. This section will focus on encrypting data at rest. There are a couple different ways to encrypt data at rest when it is contained within a database. Many major database vendors like Microsoft (SQL Server) and Oracle provide a method of encrypting called Transparent Data Encryption (TDE). This allows you to encrypt the data in the files at the database, table space, or column level depending on the vendor. Encryption is implemented using certificates, keys, and strong algorithms and ciphers.

 

Links for more detail on vendor TDE description and implementation:

 

SQL Server TDE

Oracle TDE

 

Data encryption can also be implemented using an appliance. This can be a solution when you want to encrypt data but the database vendor doesn't offer one, or when licensing structures change with the use of their encryption. You may also have data outside of a database that you'd want to encrypt, which would make this option more attractive (think of log files that may contain sensitive data). I won't go into details about the different offerings out there, but I have researched several of these appliances and many appear to be well secured (strong algorithms and ciphers). Your storage array vendor(s) may also have solutions available.

 

What does this mean and how does it help?

 

Specifically, in the case of Equifax, storage level hacks do not appear to have been employed, but there are many occurrences where storage was the target. By securing your data at rest on your storage tier, it can prevent any storage level hacks from obtaining any useful data. Keep in mind that even large database vendors have vulnerabilities that can be exploited by capturing data in cache. Encrypting data at the storage level will not help mitigate this.

 

What you should know

 

Does implementing TDE impact performance? There is overhead associated with encrypting data at rest because the data needs to be decrypted when read from disk into cache. That will take additional CPU cycles and a bit more time. However, unless you are CPU-constrained, the impact should not be noticeable to end-users. It should be noted that index usage is not affected by TDE. The bottom line is this: if the data is sensitive enough that the statement at the top of this section gets you thinking along the lines of a resume-generating event, the negligible overhead of implementing encryption should not be a deterrent to its use. However, don't encrypt more than is needed. Understand any compliance policies that govern your business (PCI, HIPAA, SOX, etc.).

 

Now to wrap this all up.

 

When we think of breaches, especially those involving highly sensitive data or data that falls under the scope of regulatory compliance, SIEM solutions certainly come to mind. This software performs a series of critical functions to support defense-in-depth strategies. In the case of Equifax, the most notable contribution of such a solution would have been minimizing the time to detect either the compromise or the breach itself. On one hand, SIEM solutions support the monitoring and alerting of anomalies on the network that could indicate a compromise. On the other, they can signal the exfiltration of data, the actual event of the breach, by monitoring traffic on endpoints and bringing to the foreground spikes in outbound traffic, which, depending on the details, may otherwise go unnoticed. I'm not prepared to assume that Equifax lacked such a solution, but given this timeline of events and their lag in response, it raises the question.

 

As always, thank you all for reading and keep up these excellent conversations.

In my last post regarding IT and healthcare policy, we talked about the somewhat unique expectation of "extreme availability" within the environments we support. I shared some of my past experiences and learned a lot from the community interaction in the comments. Thanks for participating! That kind of interaction is what I strive for, and it's really what makes these forums what they are. I’ve got one more topic I’d like to discuss in this series of blog posts, and I’m curious what you all have to say about it.

 

Just like in traditional SMB and enterprise IT, healthcare IT is concerned about managing mobile devices. In a traditional SMB or enterprise environment, most of the time we’re talking about company-issued laptops, cell phones, tablets, and the like. Sure, they’re carrying potentially sensitive data, and we need to be able to manage and protect those assets, but that’s pretty much where it stops. I’ll talk more about those considerations later in this post. In healthcare IT, our mobile devices are an entirely different beast. Not only do we have to worry about the types of devices mentioned above (and even more so, because even if they don’t carry protected healthcare information about patients, they are able to access systems that contain it), we also have mobile devices such as laptops and computers on rolling carts that move about the facility. We also have network-connected patient-care equipment (think MRI machines, etc.), all of which are potential risks that must be managed.

 

It all starts with strategy

Every implementation varies, so your specific goals may differ here, but traditional targets for mobile device management include: controlling what software or applications are installed on mobile devices; controlling security policies on those devices (think screensavers, automatic-locking policies, etc.); controlling and requiring data encryption; monitoring location, both to help ensure that devices are where they’re supposed to be and to flag when devices that aren’t supposed to leave the premises can no longer be reached; and performing remote device wipes. These days, there are a lot of commercial, off-the-shelf products that can help with mobile device management, but it all starts with strategy. Before you can start solving all of the problems I’ve listed above, you’ve got to first identify your individual goals for your overall mobile device management strategy. Are you only concerned with enterprise-owned assets, or do you care about BYOD equipment as well? What type of encryption rules are you going to mandate for your assets, and do they even support it? What about systems provided by and supported by third-party vendors? Are you going to require their compliance with your mobile device management strategy? Will you refuse to connect their solutions to your network if they aren’t willing or able to comply? As an IT resource, do you even have the authority to make that determination? The list goes on. Defining the mobile device management strategy may be the most difficult part of the entire operation.
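As a toy illustration of turning goals like these into something checkable, here is a sketch that flags devices violating an assumed baseline (encryption on, auto-lock enabled, recently seen on the network). The field names and thresholds are invented for the example:

```python
from datetime import datetime, timedelta

# Assumed baseline policy: every managed device must satisfy these.
BASELINE = {"encrypted": True, "auto_lock": True}
MAX_UNSEEN = timedelta(days=2)  # assumed "device gone missing" threshold

def compliance_issues(device, now):
    """Return a list of baseline violations for one device record."""
    issues = [k for k, required in BASELINE.items() if device[k] != required]
    if now - device["last_seen"] > MAX_UNSEEN:
        issues.append("unreachable")
    return issues

device = {
    "name": "cart-ws-12",          # e.g., a computer on a rolling cart
    "encrypted": True,
    "auto_lock": False,
    "last_seen": datetime(2017, 9, 20),
}

print(compliance_issues(device, now=datetime(2017, 9, 28)))
```

Real MDM suites do this (and much more) out of the box, but even a toy check like this forces you to write your policy down precisely, which is the strategy work the paragraph above describes.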

 

Once you’ve defined your strategy and the goals that are important to you, you’re going to review the types of equipment you need to support. Are you going to be Apple-only, PC-only, or are you going to support capabilities in a cross-platform environment? Is your mobile device management strategy able to deliver feature parity of everything it provides in this cross-platform world, or are you going to discover that some of your goals are only achievable on two of the three platforms you want to support? In traditional IT, mobile device management is much less challenging than in healthcare IT, mainly because IT usually has the final say in what equipment will and will not be connected to the environment. That's not always the case in healthcare IT.

 

This post hasn't been about answering questions; it's been about asking them. What I was really aiming for was to get you thinking about everything that goes into mobile device management from a healthcare IT standpoint. How does policy influence it? How do the IT organization's controls impact equipment decisions? What other MDM challenges do you experience now in healthcare IT, and what new challenges do you see coming in the future? What solutions have you found to address these challenges, and what have their shortcomings been? Do you feel like you've been able to achieve your goals? I’d love to hear your thoughts in the comments! Until next time!

You’ve read up on the history of hacking, its motivations, and benefits for you as an IT professional. You’ve watched videos and read technical books on hacking tools and even spent a few hard-earned dollars on some nifty hacking gadgets to learn from on your own personal hack-lab playground. Now what? What can you do with this newfound knowledge?

 

Well, you can get a certification or two.

 

Wait, what? A certification for hacking?

 

Sort of. There are certifications that recognize your knowledge and understanding of hacking vectors, tools, techniques, and methodologies. More importantly, these certifications validate your skill at being able to prevent and mitigate those same vectors, tools, techniques, and methodologies.

 

These are valuable certifications for anyone wishing to move their career into a security-focused area of IT. As hackers and malware evolve and become more sophisticated, the demand for well-trained, knowledgeable, and certified information security professionals has risen sharply, and organizations around the world are investing heavily in protecting themselves.

 

Certified Ethical Hacker

 

The International Council of E-Commerce Consultants, or EC-Council, developed the popular Certified Ethical Hacker (CEH) designation after the 9/11 attack on the World Trade Center. There was growing concern that a similar attack could be carried out on electronic systems, with widespread impact to commerce and financial systems.

 

Other EC-Council certifications include the Certified Network Defender and the Computer Hacking Forensic Investigator, among others. These certifications vary in the study and experience required.

From the CEH information page, the purpose of this certification is to:

 

  • Establish and govern the minimum standards for credentialing professional information security specialists in ethical hacking measures
  • Inform the public that credentialed individuals meet or exceed the minimum standards
  • Reinforce ethical hacking as a unique and self-regulating profession

 

While the term "ethical hacking" may be open to some interpretation, it’s clear from that last bullet that the EC-Council would agree that IT professionals can and should participate in some form of hacking as a learning tool. Ethical hacking, or hacking that won’t land you in prison, is something anyone can do at home to further learn about cybersecurity and risks to their own environments.

 

“To beat a hacker, you need to think like a hacker.”

 

Certified Information Systems Security Professional

 

The International Information Systems Security Certification Consortium, or (ISC)², was formed in 1989 and offers training and certification in a number of information security topics. Its cornerstone certification is the Certified Information Systems Security Professional, or CISSP. This certification is a bit more daunting to achieve: it requires direct, full-time work experience in two or more of the information security domains outlined in the Common Body of Knowledge (CBK), a multiple-choice exam, and an endorsement from another (ISC)² certification holder.

 

The CBK is described as a “common framework of information security terms and principles”, and is constantly evolving as new knowledge is added to it through developments in different attack vectors and defense protocols.

The CISSP has three different areas of specialty:

 

  • CISSP-ISSAP – Information Systems Security Architecture Professional
  • CISSP-ISSEP – Information Systems Security Engineering Professional
  • CISSP-ISSMP – Information Systems Security Management Professional

 

Each of these is valid for three years and is maintained by earning Continuing Professional Education credits, or CPEs. CPEs can be earned by attending training or online seminars, along with other educational opportunities.

 

The CISSP is one of the most sought-after security certifications, and IT professionals surveyed year after year report that it carries a fairly significant salary advantage as well.

 

Cisco Cyber Ops and Security

 

Cisco is easily one of the most recognized and well-known vendors offering certifications in various networking topics. Its Security track, consisting of the Cisco Certified Network Associate, Cisco Certified Network Professional, and Cisco Certified Internetwork Expert, is a graduated program that covers a wide variety of practical topics for anyone responsible for hardening and protecting an infrastructure from cyber threats.

 

To begin with the Associate level certification, you must first demonstrate a fundamental understanding of networks and basic routing and switching by completing the Cisco Certified Entry Networking Technician (CCENT) or the CCNA-Routing & Switching. After this, completion of one more exam will net you the CCNA-Security designation.

 

The Professional level certification then requires four additional exams, focusing on secure access or network access control; edge solutions, such as firewalls, mobility, and VPN; and malware, web, and email security solutions.

 

Once you've achieved your CCNP-Security designation, for those brave enough to continue, there’s the CCIE-Security, which requires only one exam.

 

And a lab. A very difficult, 8-hour lab.

 

Those with the CCIE-Security designation have demonstrated knowledge and practical experience with a wide range of topics, solutions, and applications, and would also be recognized as top experts in their field.

 

Cisco recently announced another certification track, the CCNA Cyber Ops. This focuses less on enterprise security and is more aimed at those who might work in a Security Operations Center or SOC. In this track, the focus is more on analysis, real-time threat defense, and event correlation.

 

For Fun or For Profit

 

Hacking can be fun, and it can be something you, as an IT professional, do as a hobby or just to keep your skills sharp. Alternatively, you can develop those skills into marketable expertise that employers will be keen to leverage. Whether you choose to hack for fun or to further your employment opportunities, it’s an area of expertise that is constantly evolving and requires you to learn and adapt to a threat landscape that changes by the minute.

 

Gone are the days when you could include "some security" as part of your jack-of-all-trades portfolio. Information security has become a full-time job, sometimes requiring an entire team of security staff to protect and defend IT environments. Private enterprises and vendors alike are investing millions, if not billions, of dollars in protecting themselves from malware, hacking, and cybercrime, and that doesn’t seem to be a trend that will slow down anytime soon.

Having a great week at Microsoft Ignite. I love the energy of the crowd, especially on top of all the major announcements. If you are at Ignite, stop by the booth and say hello.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Blockchain Technology is Hot Right Now, But Tread Carefully

The author talks about how blockchain will help reduce cases of fraud, then mentions how hackers stole $65M in bitcoin recently. Tread carefully, indeed.

 

Actuaries are bringing Netflix-like predictive modeling to health care

I had not thought about the use of predictive analytics and machine learning in the field of healthcare, but now that I do it seems to be a logical use case. Unless I get denied coverage because of my bacon habit. If that happens, then this is a horrible idea.

 

Uber stripped of London license due to lack of corporate responsibility

I wish this headline had been written three years ago.

 

What the Commission found out about copyright infringement but ‘forgot’ to tell us

Someone needs to send this to James Hetfield and remind him that maybe the reason Metallica had poor sales was that their music wasn’t good, not that Napster was to blame. No, I’m not bitter.

 

Equifax Says It Had A Security Breach Earlier In The Year

File Equifax under “how not to handle data security breaches.”

 

CCleaner Was Hacked: What You Need to Know

Just a reminder that everything is awful.

 

Water-Powered Trike Does 0-257 Km/H Under 4 Sec

This looks crazy dangerous. Yes, of course, I want to take a ride.

 

Adding another lanyard to my collection:

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Within a healthcare environment, IT failure is not an option.

 

This is a critical issue across the two largest federal entities: the DoD and the Department of Veterans Affairs (VA). In these agencies’ healthcare environments, there is no margin for error; the network must always be up and running, and doctors and nurses must be able to access patient data 24/7.

 

Adding to the challenge are the size of healthcare networks and the vast amount of data they store. Every DoD and VA facility must be part of the greater agency network. Then consider each patient, each visit, and each record that must be stored and tracked across the environment. On top of that are the day-to-day operations of each facility, which may itself encompass a large enterprise.

 

What’s the best way to monitor and keep this type of extremely large health IT network running smoothly and consistently? The answer can be found in the following four best practices.

 

Step 1: Gain Visibility

 

Tracking network performance is one of the best ways to understand and mitigate problems that might affect your environment. Agencies should invest in a set of tools that not only provide network and system monitoring, but also a view that spans across the environment.

 

Having this capability provides the visibility necessary to troubleshoot network problems or outages, resolve configuration issues, and support end-users and systems from a central location.
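At its simplest, that central view starts with polling every device from one place. This toy sketch shows the idea — the hostnames are invented placeholders, and a real monitoring suite does vastly more than a TCP reachability check:

```python
# Toy centralized up/down poll: try a TCP connection to each monitored
# host:port and record the result for one polling cycle.
import socket

def is_reachable(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll(targets):
    """Return {(host, port): 'up' or 'down'} for one polling cycle."""
    return {(h, p): ("up" if is_reachable(h, p) else "down")
            for h, p in targets}

# Hypothetical targets; substitute your own inventory.
status = poll([("emr-app.example", 443), ("pacs-db.example", 1433)])
```

The value is less in the check itself than in keeping the results in one place, so a single console can answer "what is down right now?" across the whole environment.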

 

Step 2: Secure All Devices

 

From a security perspective, risks are high as healthcare staff and patients alike connect to a facility’s Wi-Fi. And countless endpoints—including medical devices—must be managed, monitored, and controlled.

 

It is critical to put extra emphasis on protecting the network from whatever connects to it by carefully monitoring and treating every single device or “thing” as a potential threat or point of vulnerability.

 

Step 3: Enhance Firewall Management

 

With networks as large as most healthcare environments, firewalls can become an issue. With so many firewalls in place, the network administrator can easily accumulate an ever-growing list of conflicting and redundant rules and objects, which can cause mayhem in firewall management.

 

Federal IT pros should regularly run automated scripts and, to help save time and effort, leverage a network management tool to help identify conflicting rules, remove redundancies, and generally streamline the access control list structure.
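As an illustration of the kind of check such a script performs, this simplified sketch flags rules that are fully shadowed by an earlier rule. The rule model and addresses are invented; real firewall analysis also weighs protocols, interfaces, and object groups:

```python
# Toy shadowed-rule detector. A rule is (action, src CIDR, dst CIDR, port);
# the first matching rule wins, so a later rule fully covered by an earlier
# one can never fire -- redundant (or conflicting, if the actions differ).
from ipaddress import ip_network

RULES = [
    ("permit", "10.0.0.0/8",  "172.16.0.0/12", "any"),
    ("permit", "10.1.0.0/16", "172.16.5.0/24", 443),   # shadowed by rule 0
    ("deny",   "0.0.0.0/0",   "0.0.0.0/0",     "any"),
]

def covers(a, b):
    """True if rule a matches every packet that rule b matches."""
    return (ip_network(b[1]).subnet_of(ip_network(a[1]))
            and ip_network(b[2]).subnet_of(ip_network(a[2]))
            and (a[3] == "any" or a[3] == b[3]))

def shadowed(rules):
    """Indices of rules fully covered by some earlier rule."""
    return [j for j in range(len(rules))
            if any(covers(rules[i], rules[j]) for i in range(j))]

print(shadowed(RULES))   # [1]
```

Even this crude pairwise scan shows why automation matters: checking every rule against every earlier rule by hand is hopeless once the list grows into the hundreds.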

 

Step 4: Implement an Automation Tool

 

Time can be the federal IT pro’s greatest challenge, especially when working in healthcare environments that push the limits of 24/7 demands.

 

This final step is the most critical for pulling the previous three together. Adding automation is the difference between monitoring problems and fixing them manually, and implementing a complete solution. It can also be the difference between receiving a call at 3:00 a.m. to fix a problem, and being able to sleep soundly and having the system fix itself.
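The difference between monitoring a problem and a complete solution can be sketched as a check-then-remediate loop. Everything below — the service names and the remediation action — is an invented placeholder for whatever a real automation tool actually drives:

```python
# Toy self-healing loop: check each service and attempt a scripted
# remediation on failure, reporting what was fixed automatically.
def check(name, status):
    """True if the named service is healthy."""
    return status.get(name) == "running"

def remediate(name, status):
    """Stand-in for a real action, e.g., restarting a service."""
    status[name] = "running"

def heal(services, status):
    """Remediate failed services; return the list that needed fixing."""
    fixed = []
    for name in services:
        if not check(name, status):
            remediate(name, status)
            fixed.append(name)
    return fixed

state = {"hl7-interface": "stopped", "patient-portal": "running"}
print(heal(["hl7-interface", "patient-portal"], state))   # ['hl7-interface']
```

The loop runs at 3:00 a.m. so you don't have to; the morning report of what `heal` fixed replaces the page that used to wake you up.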

Find the full article on Federal Technology Insider.

The SolarWinds crew, including sqlrockstar, chrispaap, and me, just returned stateside after a successful jaunt across the Atlantic to VMworld Europe in Barcelona, Spain. Thank you to all of the attendees who joined us at Tom’s speaking sessions and at our booth. Thank you to Barcelona for your hospitality!

 

Below are a few pictures of the SolarWinds team as we walked the walk and talked the talk of monitoring with discipline.

 

The SolarWinds family at VMworld Europe 2017 in Barcelona

The SolarWinds family team dinner

 

 

Our journey doesn’t stop with the end of the VMworld two-continent tour. We are about to ignite a full course of monitoring with discipline in Orlando. At Microsoft Ignite, visit us in Booth #1913 for the most 1337 swag as well as fantastic demos on monitoring hybrid IT with discipline.

Let us know in the comments if you'll be joining us in Orlando for Microsoft Ignite.

Back from Barcelona and VMworld Europe. In the past three weeks, I've logged 14k miles on 10 flights, delivered three sessions, and had two sessions listed in the Top Ten at each event. And in a week, I get to do it all over again at Microsoft Ignite. If you are heading to Ignite, stop by the booth and say hello. We've got plenty of stickers and buttons, and maybe a few pairs of socks, too.

 

As always, here is a bunch of links I hope you will find interesting!

 

Atlanta Tests Self-Driving Vehicle In Heart Of The City

It's been a while since I've posted about autonomous vehicles, so I decided to fix that.

 

Self-driving trucks enter the fast lane using deep learning

And then I decided to double down by sharing info about a self-driving truck. I love living in the future! Next we'll have self-tuning databases!

 

Oracle preps autonomous database at OpenWorld, aims to cut labor, admin time

And there it is, the other shoe dropping. Microsoft, AWS, and now Oracle are all on the AI train and offering self-tuning systems. If you are an operational DBA, it's time to think about a pivot.

 

Azure Confidential Computing will keep data secret, even from Microsoft

Microsoft continues to make progress in the area of data security, because they know that data is the most critical asset any company (or person) owns.

 

Understanding the prevalence of web traffic interception

And this is why the Microsoft announcement matters more than any AWS product announcement. Faster storage and apps don't mean a thing if your data is breached.

 

Google Parent Alphabet To Consider $1 Billion Investment In Lyft

Goodbye Uber, I won't miss you, your questionable business practices, or your toxic work culture.

 

Identity Theft, Credit Reports, and You

Some advice for everyone in the wake of the Equifax breach: be nice until it's time to not be nice.

 

It's been a fun two weeks, but you are looking at two very tired friki cabezas (that's Spanish-flavored for "Head Geeks") atop the Arenas de Barcelona:

Next week I'm flying down to Orlando, Florida to spend a week filling my brain with all things MS: Ignite. I'm especially excited about this trip because it is one of the first times as a Head Geek that I'm traveling to a server- and application-centric show.

 

To be sure, we get our fair share of questions on SAM, DPA, SRM, WPM, VMAN, and the rest of the systems-side of the house at shows like Cisco Live, but I'm expecting (and looking forward to) a whole different class of questions from a convention that boasts 15,000+ folks who care deeply about operating systems, desktops, servers, cloud, and applications.

 

There are a few other things that will make next week special:

          • There. Will. Be. SOCKS. It was a hit at Cisco Live. It was even more of a hit at VMworld. And now the THWACK socks are ready to take MS: Ignite (and your feet) by storm. We can't wait to see the reactions to our toe-hugging goodness.
          • SWUGLife on the beach: For the second year in a row, Ignite will play host to the most incredible group of users ever assembled: the illustrious, inimitable SolarWinds User Group (or SWUG for short).
          • Geek Boys, Assemble!: For the first time ever, Patrick, Tom, Kong, and I will all be at the same show at the same time. It's obviously a crime that Destiny couldn't join in the fun, but somehow I think she'll find a way to be with us in spirit. And of course, we can just consider this a prelude to the all-out Geeksplosion next month at THWACKcamp.
          • THERE. WILL. BE. EBOOKS.: For several weeks, I've been busy crafting the second installment in the SolarWinds Dummies series: Systems Monitoring for Dummies. While you can download it now from this link, we'll also have handouts at the booth to let all of the Ignite attendees know about it, marking the book's first live appearance on the show floor.


But that's more or less the view from the show floor. There are things that I'm eager to experience beyond the booth border (#1913, for those who will be in the neighborhood).

 

Tom will be giving two different talks, both of which I personally want to hear: "Upgrading to SQL Server 2016" is going to be packed full of information, one of those sessions where you'll want either a keyboard or a recorder to catch all the details. But "When bad things happen to good applications" promises to be classic SQLRockStar in action. For that one, I plan to bring a lighter for the encore number at the end.

 

I'm also very eager to get out and see what Microsoft is all about these days. Sure, I use it on the desktop, I read the news, and I'm friends with an ever-growing number of folks who work in Redmond. But shows like these are where you get to see the aspirational side of a company and its technology. Ignite is where I will get to see who Microsoft WANTS to be, at least in the coming year.

 

That aspirational quality will be on display nowhere more than in Satya Nadella's keynote on Monday. Look for me to be live-tweeting that event, at the very least.

 

Stay tuned for my follow-up post in two weeks, which I expect will be full of unexpected discoveries, a few food-related pictures, and hopefully a few shots of the SolarWinds friends we met up with while we were there.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

My colleague Patrick Hubbard made some interesting predictions about 2017 for government IT professionals, and how DevOps culture could change the landscape. I’d like to share them with you now, as we approach Q4, to see how his predictions have played out so far.

 

If there is one thing government organizations are used to, it’s change. Budgets, technology, and policies are constantly changing, and there’s no surprise that the IT professional’s role is constantly evolving.

 

Not only do government IT professionals have to deal with the usual difficulties of trying to keep up with new technology, such as cloud, containers, microservices, and the Internet of Things (IoT), they also need to deal with budget cuts, restrictive policies, and a lack of resources. It is now more important than ever to scrap the traditional siloed IT roles, such as network, storage, and systems administrators.

 

A general, holistic approach to government IT

Having generalists is particularly important within government IT, where resources and budgets may be stretched. The ability to have a holistic understanding of the IT infrastructure and make quick and informed decisions is crucial over the next year and beyond.

 

2017 is likely to bring new machine-based technologies and the continued adoption of DevOps, which encourages collaboration between siloed IT departments. Government IT professionals need to expand their viewpoints to focus on tools and methodologies they may not be immediately familiar with to prepare for and manage next-generation data centers.

 

Leave the automation to machines

As predicted, new machine-based technologies are going to become better and more sophisticated over time. Before technologies such as bots and artificial intelligence are leveraged, new management and monitoring processes will need to be introduced in government organizations.

 

DevOps culture is coming

DevOps describes the culture and collaboration of development and operations teams around software delivery. The transition to DevOps is certainly not without its challenges, however. By embracing these principles, government organizations can be well on their way to reaping the benefits of an integrated DevOps mentality.

 

DevOps is a positive organizational movement that will help government organizations empower IT departments to innovate. It also has the potential to improve agility, deliver innovation faster, provide higher-quality software, better align work and value, and improve the ability to respond to problems or changes.

 

The role of the government IT professional is constantly evolving. In the good old days, IT pros did little more than assist when email stopped working; now, because the business relies on technology for everyday tasks, they have much more power to shape the wider business strategy. By staying relevant and maintaining general knowledge across the entire IT infrastructure, embracing a collaborative DevOps culture, and staying open-minded about the integration of machines, government IT professionals will find themselves prepared for the changes coming their way.
 

Find the full article on Adjacent Open Access.

I have, on the occasional gray and challenging day, suggested that the state of IT contains a measure of plight. That it’s beset on all sides by the hounds of increasing complexity, reduced budgets, impossible business demands, and unhappy users. Fortunately for us all, however, being an IT professional is, as it has always been, pretty freaking awesome. Would I really make a major career shift away from casting spells on hardware, writing a little code to automate drudgery, and generally making the bezels blink? Nope. I’m not saying I’d do it for free, but after all these years I still have moments where it’s hard to believe I get paid to do what I love. And again this year, we get to celebrate IT Pro Day.

 

There’s a certain special camaraderie among IT professionals that runs deeper than merely sharing a foxhole in the server room. It’s celebrating the power-on of a new data center. It's the subdued antithesis of the sales dude-style slap-on-the-back compliment when we finally identify a bedeviling, intermittent root cause. It's mostly cultural. Our ilk actually cares about users, and for many it’s what brought them to technology in the first place. We solve problems they didn’t know they had in ways they wouldn’t understand, avoiding the adulation usually reserved for such heroics. Aww, shucks. We're just here to do what we can. #humblebrag

 

This year, share the fun and recognition of IT Pro Day with a friend. Take an extra-long lunch, and remember you’re the one that keeps the bezel lights on. We don’t just keep our businesses working, we affect the human experiences and the feelings of our users. With a little time and the right questions, we truly make the world a better place.

 

So have a little fun, send some eCards, and maybe appear in next year’s video here: http://www.itproday.org

 

Happy IT Pro Day!
