
IT operations are evolving to include business and service concerns that move beyond the technology itself. With change being a constant, how does an IT professional remain relevant?


In the "Soft Skills Beyond the Tech," session, I will be joined by fellow Head Geek™ Thomas LaRock, the Exchange Goddess, Phoummala Schmitt, and Tech Field Day organizer-in-chief and owner of Foskett Services, Stephen Foskett, to discuss the top three soft skills that IT professionals need to have to not only survive but thrive in their careers.


THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

I know, I'm a day late and quite possibly 37 cents short for my coffee this morning, so let's jump in, shall we?


Let's start with the Equifax breach. This came up in the Shields Down Conversation Number Two, so I thought I would invite some of my friends from our security product teams to join me to discuss the breach from a few different angles.


My take will be from a business strategy (or lack thereof) standpoint. Roughly 143 million people had their personal data exposed because Equifax did not properly execute a simple patching plan. Seriously?


Is this blog series live and viewable? I am not the only person who implements patching, monitoring, log and event management in my environments. This is common knowledge. What I don't get is the why. Why, for the love of everything holy, do businesses not follow these basic practices?


CIxOs or CxOs do not implement these practices themselves. However, it is their duty (to their company and their core values) to put the right people in place who will ensure that security measures are being carried out.


Think about that for a moment, and then consider that a patch for the vulnerability was released in March, and Equifax failed to remediate it. The breach happened, as we all know, in mid-May. Where is the validation? Where was the plan? Where is the ticketing system tracking the maintenance that should've been completed on their systems? There are so many questions, especially since this happened in an enterprise organization, not some small shop somewhere.


Now, let's take this another step further. Equifax dropped another juicy nugget of information: there was another breach back in March. Don't worry, though. It was an entirely different attack. However, the incredible part is that some of the upper-level folks were able to sell their stock. That makes my heart happy, you know, knowing that they had the time to sell their stock before they released information on being breached. Hats off to them for that, right?


Then, another company decided they needed to market and sell credit monitoring (for a reduced fee, and one that just so happens to use EQUIFAX SERVICES) to the individuals who were now at a high(er) risk of identity theft and credit fraud. I'm still blown away by this.


Okay. Deep breath. Whooooo.


I was recently informed that when you have third-party software, patching is limited, and that some organizations' SLAs for application uptime don't allow patching on certain servers. I hear you! I fully recognize that patching some servers can cause software to stop working or result in downtime. However, this is where you have to implement a lab and test your patches. You should test your patching regardless, to make sure you are not causing issues in your environment in the first place.


I usually implement patching on test servers on a Friday, and then I verify the status of my applications on those servers.

I will also go through my security checks to validate that no new holes have opened and nothing has reverted before I implement in production within two weeks.
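To make that concrete, here is a rough sketch of the kind of post-patch check you could script against your test servers. It assumes each application exposes an HTTP health endpoint you can poll; the host names and URLs are placeholders, so swap in whatever your environment actually exposes.

```python
# Minimal post-patch verification sketch.
# Assumptions: each application on a test server exposes an HTTP health
# endpoint; the hosts and paths below are placeholders for your environment.
import urllib.request
import urllib.error

TEST_SERVERS = {
    "test-app01": ["http://test-app01:8080/health", "http://test-app01:8081/status"],
    "test-db01":  ["http://test-db01:8080/health"],
}

def check_endpoint(url, timeout=10):
    """Return True if the endpoint answers with HTTP 200 after patching."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def verify_servers(servers):
    """Print a pass/fail line per endpoint and return True only if all pass."""
    all_ok = True
    for host, urls in servers.items():
        for url in urls:
            ok = check_endpoint(url)
            print(f"{host}: {url} -> {'OK' if ok else 'FAILED'}")
            all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    if verify_servers(TEST_SERVERS):
        print("All application checks passed; safe to schedule production patching.")
    else:
        print("At least one check failed; hold the production rollout.")
```

If every check passes on the test boxes, you have real evidence, not just hope, to carry into your production change window.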


Now let's bring this back to the strategy at hand. When you are an enterprise corporation with large amounts of personal data belonging to your trusting customers (who are the very reason you are as large as you are), you better DARN WELL have a security plan that is overseen by more than one individual! Come on! This is not a small shop or even a business that could argue, "Who would want our customer data?" We're talking about Equifax, a company that holds data about plenty of consumers who happen to have great credit. Equifax is figuratively a lavish buffet for hackers.


The C-level of this company should have kept a close eye on the security measures being taken by the organization, including patching, SQL monitoring, and log, event, and traffic monitoring. They should have known there were unpatched servers. The only thing I think they could have argued was the common refrain, "We cannot afford downtime for patching." But still.


Your CxO or CIxO has to be your IT champion! They have to go nose to nose with their peers to make sure their properly and thoroughly designed security plans get implemented 100%. They hire the people to carry out such plans, and it is their responsibility to ensure that it gets done and isn't blocked at any level.


Enough venting, for the moment. Now I'd like to bring in some of my friends for their take on this Equifax nightmare that is STILL unfolding! Welcome joshberman, just one of my awesome friends here at SolarWinds, who always offers up great security ideas and thoughts.


Dez summed up things nicely in her comments above, but let's go back to the origins of this breach and explore the timeline of events to illustrate a few points.


  • March 6th: The exploited vulnerability, CVE-2017-5638, became public
  • March 7th: Security analysts began seeing attacks designed to exploit this flaw
  • Mid-May: The window of time to which Equifax later traced the compromise
  • July 29th: The date Equifax discovered a breach had occurred


Had a proper patch management strategy been put in place and backed by the right patch management software to enable the patching of third-party applications, it is likely that Equifax would not have succumbed to such a devastating attack. This applies even if testing had been factored into the timelines, just as Dez recommends. "Patch early, patch often" certainly applies in this scenario, given how quickly hackers leverage newly discovered vulnerabilities as a means to their end. Once all is said and done, if there is one takeaway here, it is that patching, as a baseline IT security practice, is and forever will be a must. Beyond the obvious chink in Equifax's armor, there is a multitude of other means by which they could have thwarted this attack, or at least minimized its impact.


That's fantastic information, Josh. I appreciate your thoughts. 


I also asked mandevil (Robert) for his thoughts on the topic. He was on vacation, but he returned early to knock out some pertinent thoughts for me! Much appreciated, Robert!


Thanks, Dez. "We've had a breach and data has been obtained by entities outside of this company."

Imagine being the one responsible for maintaining a good security posture, and the sinking feeling you had when these words were spoken. If this is you, or even if you are tangentially involved in security, I hope this portion of this post helps you understand the importance of securing data at rest as it pertains to databases.


Securing data in your database


The only place data can't be encrypted is when it is in cache (memory). While data is at rest (on disk) or in flight (on the wire), it can and should be encrypted if it is deemed sensitive. This section will focus on encrypting data at rest. There are a couple different ways to encrypt data at rest when it is contained within a database. Many major database vendors like Microsoft (SQL Server) and Oracle provide a method of encrypting called Transparent Data Encryption (TDE). This allows you to encrypt the data in the files at the database, table space, or column level depending on the vendor. Encryption is implemented using certificates, keys, and strong algorithms and ciphers.
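To show what the SQL Server flavor of that looks like in practice, here is a minimal sketch of the TDE setup steps driven from Python. Treat it as an illustration only: it assumes a lab instance reachable through pyodbc, and the database name ("SalesDB"), certificate name, and password are placeholders, not recommendations.

```python
# Minimal sketch of enabling SQL Server TDE from Python via pyodbc.
# Assumptions: pyodbc is installed, the connection string points at a LAB
# instance, and "SalesDB", "TdeCert", and the password are placeholders.
import pyodbc

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=lab-sql01;Trusted_Connection=yes"

TDE_STEPS = [
    "USE master",
    # 1. Master key in the master database protects the certificate.
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Use-A-Strong-Passphrase-Here!'",
    # 2. Certificate that will protect the database encryption key.
    "CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate for SalesDB'",
    "USE SalesDB",
    # 3. Database encryption key inside the target database.
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TdeCert",
    # 4. Turn encryption on; existing pages are encrypted in the background.
    "ALTER DATABASE SalesDB SET ENCRYPTION ON",
]

def enable_tde():
    # autocommit=True because these DDL statements manage their own transactions.
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cursor = conn.cursor()
        for statement in TDE_STEPS:
            cursor.execute(statement)
    print("TDE steps executed; back up the certificate and private key immediately.")

if __name__ == "__main__":
    enable_tde()
```

Whatever tooling you use, back up the certificate and its private key somewhere safe; without them, an encrypted database cannot be restored.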


Links for more detail on vendor TDE description and implementation:


SQL Server TDE

Oracle TDE


Data encryption can also be implemented using an appliance. This can be a solution if you want to encrypt data but your database vendor doesn't offer one, or if licensing structures change with the use of their encryption. You may also have data outside of a database that you'd want to encrypt, which would make this option more attractive (think of log files that may contain sensitive data). I won't go into details about the different offerings out there, but I have researched several of these appliances and many appear to be highly secure (strong algorithms and ciphers). Your storage array vendor(s) may also have solutions available.


What does this mean and how does it help?


Specifically, in the case of Equifax, storage-level hacks do not appear to have been employed, but there have been many occurrences where storage was the target. Securing your data at rest on the storage tier prevents storage-level hacks from obtaining any useful data. Keep in mind that even large database vendors have vulnerabilities that can be exploited by capturing data in cache. Encrypting data at the storage level will not help mitigate this.


What you should know


Does implementing TDE impact performance? There is overhead associated with encrypting data at rest because the data needs to be decrypted when read from disk into cache. That takes additional CPU cycles and a bit more time. However, unless you are CPU-constrained, the impact should not be noticeable to end-users. It should be noted that index usage is not affected by TDE. The bottom line: if the data is sensitive enough that the statement at the top of this section gets you thinking along the lines of a resume-generating event, the negligible overhead of implementing encryption should not be a deterrent to its use. However, don't encrypt more than is needed. Understand any compliance policies that govern your business (PCI, HIPAA, SOX, etc.).


Now to wrap this all up.


When we think of breaches, especially those involving highly sensitive data or data that falls under the scope of regulatory compliance, SIEM solutions certainly come to mind. This software performs a series of critical functions to support defense-in-depth strategies. In the case of Equifax, their most notable value would have been minimizing the time to detect either the compromise or the breach itself. On one hand, they support the monitoring and alerting of anomalies on the network that could indicate a compromise. On the other, they can signal the exfiltration of data – the actual event of the breach – by monitoring traffic on endpoints and bringing to the foreground spikes in outbound traffic, which, depending on the details, may otherwise go unnoticed. I'm not prepared to assume that Equifax was lacking such a solution, but given this timeline of events and their lag in response, it certainly raises the question.
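To illustrate the idea (not any particular SIEM product), here is a toy sketch of an egress-spike check: establish a baseline of outbound traffic per interval and flag anything far above it. The sample numbers are fabricated; a real deployment would pull this telemetry from flow data or endpoint agents.

```python
# Toy illustration of the kind of egress-spike check a SIEM automates:
# learn a baseline of outbound bytes per interval, then flag later intervals
# that sit far above it. Sample data below is made up for illustration.
from statistics import mean, stdev

def flag_exfil_spikes(outbound_bytes, baseline_window=8, threshold_sigmas=3.0):
    """Return indexes (after the baseline window) whose traffic exceeds
    baseline mean + N standard deviations."""
    base = outbound_bytes[:baseline_window]
    cutoff = mean(base) + threshold_sigmas * stdev(base)
    return [i for i in range(baseline_window, len(outbound_bytes))
            if outbound_bytes[i] > cutoff]

if __name__ == "__main__":
    # Hourly outbound megabytes from one endpoint (fabricated sample data).
    hourly_mb = [120, 135, 118, 140, 125, 130, 122, 138, 127, 133, 910, 128]
    for hour in flag_exfil_spikes(hourly_mb):
        print(f"Hour {hour}: {hourly_mb[hour]} MB outbound, well above baseline")
```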


As always, thank you all for reading and keep up these excellent conversations.

In my last post regarding IT and healthcare policy, we talked about the somewhat unique expectation of "extreme availability" within the environments we support. I shared some of my past experiences and learned a lot from the community interaction in the comments. Thanks for participating! That kind of interaction is what I strive for, and it's really what makes these forums what they are. I’ve got one more topic I’d like to discuss in this series of blog posts, and I’m curious what you all have to say about it.


Just like in traditional SMB and enterprise IT, healthcare IT is concerned about managing mobile devices. In a traditional SMB or enterprise environment, most of the time we’re talking about company-issued laptops, cell phones, tablets, and the like. Sure, they’re carrying potentially sensitive data, and we need to be able to manage and protect those assets, but that’s pretty much where it stops. I’ll talk more about those considerations later in this post. In healthcare IT, our mobile devices are an entirely different beast. Not only do we have to worry about the types of devices mentioned above (and even more so, because even if they don’t carry protected healthcare information about patients, they are able to access systems that contain it), we also have mobile devices such as laptops and computers on rolling carts that move about the facility. We also have network-connected patient-care equipment (think MRI machines, etc.), all of which are potential risks that must be managed.


It all starts with strategy

Every implementation varies, so your specific goals may differ here, but traditional targets for mobile device management include: controlling what software or applications are installed on mobile devices; controlling security policies on those devices (think screensavers, automatic-locking policies, etc.); controlling and requiring data encryption; location monitoring, to help ensure that devices are where they’re supposed to be, or to flag when devices that aren’t supposed to leave the premises can no longer be reached; remote device wipes; and so on. These days, there are a lot of commercial, off-the-shelf products that can help with mobile device management, but it all starts with strategy. Before you can start solving all of the problems I’ve listed above, you’ve got to first identify your individual goals for your overall mobile device management strategy. Are you only concerned with enterprise-owned assets, or do you care about BYOD equipment as well? What type of encryption rules are you going to mandate for your assets, and do they even support it? What about systems provided by and supported by third-party vendors? Are you going to require their compliance with your mobile device management strategy? Will you refuse to connect their solutions to your network if they aren’t willing or able to comply? As an IT resource, do you even have the authority to make that determination? The list goes on. Defining the mobile device management strategy may be the most difficult part of the entire operation.
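As a purely hypothetical illustration of how those goals turn into something enforceable, here is a small sketch of a device compliance check. The policy fields and the device record are invented for this example; in practice, your MDM platform's inventory and APIs would supply this data.

```python
# Hypothetical sketch of turning MDM strategy goals into a compliance check.
# The policy fields and device record are invented for illustration; real MDM
# platforms expose this information through their own inventories and APIs.
REQUIRED_POLICY = {
    "disk_encrypted": True,      # require data-at-rest encryption
    "auto_lock_minutes": 5,      # screen must lock within 5 minutes
    "os_min_version": (10, 3),   # minimum supported OS release
    "allow_byod": False,         # enterprise-owned assets only
}

def is_compliant(device):
    """Return (compliant, reasons) for one device inventory record."""
    reasons = []
    if not device.get("disk_encrypted", False):
        reasons.append("disk not encrypted")
    if device.get("auto_lock_minutes", 999) > REQUIRED_POLICY["auto_lock_minutes"]:
        reasons.append("auto-lock interval too long")
    if tuple(device.get("os_version", (0, 0))) < REQUIRED_POLICY["os_min_version"]:
        reasons.append("OS below minimum version")
    if device.get("byod", False) and not REQUIRED_POLICY["allow_byod"]:
        reasons.append("BYOD device not permitted")
    return (not reasons, reasons)

if __name__ == "__main__":
    cart_laptop = {"name": "rolling-cart-07", "disk_encrypted": True,
                   "auto_lock_minutes": 10, "os_version": (10, 4), "byod": False}
    ok, why = is_compliant(cart_laptop)
    print(cart_laptop["name"], "compliant" if ok else "non-compliant: " + ", ".join(why))
```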


Once you’ve defined your strategy and the goals that are important to you, you’re going to review the types of equipment you need to support. Are you going to be Apple-only, PC-only, or are you going to support capabilities in a cross-platform environment? Is your mobile device management strategy able to deliver feature parity of everything it provides in this cross-platform world, or are you going to discover that some of your goals are only achievable on two of the three platforms you want to support? In traditional IT, mobile device management is much less challenging than in healthcare IT, mainly because IT usually has the final say in what equipment will and will not be connected to the environment. That's not always the case in healthcare IT.


This post hasn't been about answering questions, it's been about asking them. What I was really aiming for was to get you thinking about everything that goes into mobile device management from a healthcare IT standpoint. How does policy influence it? How do the IT organization's controls impact equipment decisions? What other MDM challenges do you experience now in healthcare IT, and what new challenges do you see coming in the future? What solutions have you found that address these challenges, and what have their shortcomings been? Do you feel like you've been able to achieve your goals? I’d love to hear your thoughts in the comments! Until next time!

You’ve read up on the history of hacking, its motivations, and benefits for you as an IT professional. You’ve watched videos and read technical books on hacking tools and even spent a few hard-earned dollars on some nifty hacking gadgets to learn from on your own personal hack-lab playground. Now what? What can you do with this newfound knowledge?


Well, you can get a certification or two.


Wait, what? A certification for hacking?


Sort of. There are certifications that recognize your knowledge and understanding of hacking vectors, tools, techniques, and methodologies. More importantly, these certifications validate your skill at being able to prevent and mitigate those same vectors, tools, techniques, and methodologies.


These are valuable certifications for anyone wishing to move their career into a security-focused area of IT. As hackers and malware evolve and become more sophisticated, the demand for well-trained, knowledgeable, and certified information security professionals has risen sharply, and organizations around the world are investing heavily in protecting themselves.


Certified Ethical Hacker


The International Council of E-Commerce Consultants, or EC-Council, developed the popular Certified Ethical Hacker (CEH) designation after the 9/11 attack on the World Trade Center. There was growing concern that a similar attack could be carried out on electronic systems, with widespread impact to commerce and financial systems.


Other EC-Council certifications include the Certified Network Defender, and Certified Hacking Forensic Investigator, among others. These certifications vary in terms of study and experience required.

From the CEH information page, the purpose of this certification is to:


  • Establish and govern the minimum standards for credentialing professional information security specialists in ethical hacking measures
  • Inform the public that credentialed individuals meet or exceed the minimum standards
  • Reinforce ethical hacking as a unique and self-regulating profession


While the term "ethical hacking" may be open to some interpretation, it’s clear from that last bullet that the EC-Council would agree that IT professionals can and should participate in some form of hacking as a learning tool. Ethical hacking, or hacking that won’t land you in prison, is something anyone can do at home to further learn about cybersecurity and risks to their own environments.


“To beat a hacker, you need to think like a hacker."


Certified Information Systems Security Professional


The International Information Systems Security Certification Consortium, or (ISC)2, was formed in 1989 and offers training and certification in a number of information security topics. Their cornerstone certification is the Certified Information Systems Security Professional, or CISSP. This certification is a bit more daunting to achieve. It requires direct, full-time work experience in two or more of the information security domains outlined in the Common Body of Knowledge (CBK), along with a multiple-choice exam and an endorsement from another (ISC)2 certification holder.


The CBK is described as a “common framework of information security terms and principles”, and is constantly evolving as new knowledge is added to it through developments in different attack vectors and defense protocols.

The CISSP has three different areas of specialty:


  • CISSP-ISSAP – Information Systems Security Architecture Professional
  • CISSP-ISSEP – Information Systems Security Engineering Professional
  • CISSP-ISSMP – Information Systems Security Management Professional


Each of these is valid for three years and is maintained by earning Continuing Professional Education credits, or CPEs. CPEs can be earned by attending training or online seminars, along with other educational opportunities.


The CISSP is one of the most sought-after security certifications, and IT professionals surveyed year after year report it as carrying a fairly significant salary advantage as well.


Cisco Cyber Ops and Security


Cisco is easily one of the most recognized and well-known vendors to offer certifications in various networking topics. Their Security track, consisting of the Cisco Certified Network Associate, Cisco Certified Network Professional, and Cisco Certified Internetwork Expert, is a tiered program that covers a wide variety of practical topics for someone who is responsible for hardening and protecting their infrastructure from cyber threats.


To begin with the Associate level certification, you must first demonstrate a fundamental understanding of networks and basic routing and switching by completing the Cisco Certified Entry Networking Technician (CCENT) or the CCNA-Routing & Switching. After this, completion of one more exam will net you the CCNA-Security designation.


The Professional level certification then requires four additional exams, focusing on secure access or network access control; edge solutions, such as firewalls, mobility, and VPN; and malware, web, and email security solutions.


Once you've achieved your CCNP-Security designation, for those brave enough to continue, there’s the CCIE-Security, which only requires one exam.


And a lab. A very difficult, 8-hour lab.


Those with the CCIE-Security designation have demonstrated knowledge and practical experience with a wide range of topics, solutions, and applications, and would also be recognized as top experts in their field.


Cisco recently announced another certification track, the CCNA Cyber Ops. This focuses less on enterprise security and is more aimed at those who might work in a Security Operations Center or SOC. In this track, the focus is more on analysis, real-time threat defense, and event correlation.


For Fun or For Profit


Hacking can be fun, and it can be something you as the IT professional do as a hobby or just to keep your skills sharp. Alternatively, you can also develop those skills into marketable professional skills that employers would be keen to leverage. Whether you choose to hack for fun or to further your employment opportunities, it’s an area of expertise that is constantly evolving and requires that you be able to learn and adapt to the threat landscape because it changes by the minute.


Gone are the days when you could include "some security" as part of your jack-of-all-trades portfolio; information security has become a full-time job, sometimes requiring an entire team of security staff to protect and defend IT environments. Private enterprise and vendors alike are investing millions, if not billions, of dollars into protecting themselves from malware, hacking, and cybercrime, and that doesn’t seem to be a trend that will slow down anytime soon.

Having a great week at Microsoft Ignite. I love the energy of the crowd, especially on top of all the major announcements. If you are at Ignite, stop by the booth and say hello.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Blockchain Technology is Hot Right Now, But Tread Carefully

The author talks about how blockchain will help reduce cases of fraud, then mentions how hackers stole $65M in bitcoin recently. Tread carefully, indeed.


Actuaries are bringing Netflix-like predictive modeling to health care

I had not thought about the use of predictive analytics and machine learning in the field of healthcare, but now that I do it seems to be a logical use case. Unless I get denied coverage because of my bacon habit. If that happens, then this is a horrible idea.


Uber stripped of London license due to lack of corporate responsibility

I wish this headline was written three years ago.


What the Commission found out about copyright infringement but ‘forgot’ to tell us

Someone needs to send this to James Hetfield and remind him that maybe the reason Metallica had poor sales was that their music wasn’t good, not that Napster was to blame. No, I’m not bitter.


Equifax Says It Had A Security Breach Earlier In The Year

File Equifax under “how not to handle data security breaches.”


CCleaner Was Hacked: What You Need to Know

Just a reminder that everything is awful.


Water-Powered Trike Does 0-257 Km/H Under 4 Sec

This looks crazy dangerous. Yes, of course, I want to take a ride.


Adding another lanyard to my collection:

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Within a healthcare environment, IT failure is not an option.


This is a critical issue across the two largest federal entities: the DoD and the Department of Veterans Affairs (VA). In these agencies’ healthcare environments, there is no margin for error; the network must always be up and running, and doctors and nurses must be able to access patient data 24/7.


Adding to the challenge is the size of healthcare networks and the vast amount of data they store. Every DoD and VA facility must be part of the greater agency network. Then consider each patient, each visit, and each record that must be stored and tracked across the environment. On top of that are the day-to-day operations of each facility, each of which may itself encompass a large enterprise.


What’s the best way to monitor and keep this type of extremely large health IT network running smoothly and consistently? The answer can be found in the following four best practices.


Step 1: Gain Visibility


Tracking network performance is one of the best ways to understand and mitigate problems that might affect your environment. Agencies should invest in a set of tools that not only provide network and system monitoring, but also a view that spans across the environment.


Having this capability provides the visibility necessary to troubleshoot network problems or outages, resolve configuration issues, and support end-users and systems from a central location.


Step 2: Secure All Devices


From a security perspective, risks are high as healthcare staff and patients alike connect to a facility’s Wi-Fi. And countless endpoints—including medical devices—must be managed, monitored, and controlled.


It is critical to put extra emphasis on protecting the network from whatever connects to it by carefully monitoring and treating every single device or “thing” as a potential threat or point of vulnerability.


Step 3: Enhance Firewall Management


With networks as large as most healthcare environments, firewalls can become an issue. With so many firewalls in place, the network administrator can easily accumulate an ever-growing list of conflicting and redundant rules and objects, which can cause mayhem in firewall management.


Federal IT pros should regularly run automated scripts and, to help save time and effort, leverage a network management tool to help identify conflicting rules, remove redundancies, and generally streamline the access control list structure.
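As a rough illustration of what such a script might look for, here is a small sketch that flags duplicate, conflicting, and shadowed entries in a simplified rule list. The rule format is deliberately stripped down to a few fields; your firewall or network management tool remains the authoritative place to do this analysis.

```python
# Rough sketch of a cleanup script that flags redundant or conflicting
# firewall rules. Rules are simplified to (source, destination, port, action)
# tuples; a real ACL has far more fields.
def analyze_rules(rules):
    """Flag exact duplicates, same-match rules with opposite actions, and
    rules shadowed by an earlier blanket ('any'/'any'/'any') rule."""
    findings = []
    seen = {}
    for index, (src, dst, port, action) in enumerate(rules):
        key = (src, dst, port)
        if key in seen:
            earlier_index, earlier_action = seen[key]
            kind = "duplicates" if earlier_action == action else "conflicts with"
            findings.append(f"rule {index} {kind} rule {earlier_index}")
        else:
            seen[key] = (index, action)
        # An earlier blanket rule makes later, narrower rules unreachable.
        for earlier_index, (esrc, edst, eport, _) in enumerate(rules[:index]):
            if (esrc, edst, eport) == ("any", "any", "any") and key != ("any", "any", "any"):
                findings.append(f"rule {index} is shadowed by blanket rule {earlier_index}")
                break
    return findings

if __name__ == "__main__":
    sample_acl = [
        ("10.1.0.0/16", "10.2.5.10", 443, "permit"),
        ("10.1.0.0/16", "10.2.5.10", 443, "permit"),   # duplicate of rule 0
        ("10.1.0.0/16", "10.2.5.10", 443, "deny"),     # conflicts with rule 0
        ("any", "any", "any", "deny"),
        ("10.3.0.0/16", "10.2.5.11", 80, "permit"),    # shadowed by rule 3
    ]
    for finding in analyze_rules(sample_acl):
        print(finding)
```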


Step 4: Implement an Automation Tool


Time can be the federal IT pro’s greatest challenge, especially working with healthcare environments that push the limits of 24/7 demands.


This final step is the most critical for pulling the previous three together. Adding automation is the difference between monitoring problems and fixing them manually, and implementing a complete solution. It can also be the difference between receiving a call at 3:00 a.m. to fix a problem, and being able to sleep soundly and having the system fix itself.

Find the full article on Federal Technology Insider.

The SolarWinds crew including sqlrockstar, chrispaap, and I just returned stateside after a successful jaunt across the Atlantic at VMworld Europe in Barcelona, Spain. Thank you to all of the attendees who joined us at Tom’s speaking sessions and at our booth. Thank you to Barcelona for your hospitality!


Below are a few pictures of the SolarWinds team as we walked the walk and talked the talk of monitoring with discipline.


The SolarWinds Family at VMworld Europe 2017 in Barcelona
The SolarWinds Family Team Dinner



Our journey doesn’t stop with the end of the VMworld two-continent tour. We are about to ignite a full course of monitoring with discipline in Orlando. At Microsoft Ignite, visit us in Booth #1913 for the most 1337 swag as well as fantastic demos on monitoring hybrid IT with discipline.

Let us know in the comments if you'll be joining us in Orlando for Microsoft Ignite.

Back from Barcelona and VMworld Europe. In the past three weeks, I've logged 14k miles on 10 flights, delivered three sessions, and had two sessions listed in the Top Ten for each event. And in a week, I get to do it all over again at Microsoft Ignite. If you are heading to Ignite, stop by the booth and say hello. We've got plenty of stickers and buttons, and maybe a few pairs of socks, too.


As always, here is a bunch of links I hope you will find interesting!


Atlanta Tests Self-Driving Vehicle In Heart Of The City

It's been a while since I've posted about autonomous vehicles, so I decided to fix that.


Self-driving trucks enter the fast lane using deep learning

And then I decided to double down by sharing info about a self-driving truck. I love living in the future! Next we'll have self-tuning databases!


Oracle preps autonomous database at OpenWorld, aims to cut labor, admin time

And there it is, the other shoe dropping. Microsoft, AWS, and now Oracle are all on the AI train and offering self-tuning systems. If you are an operational DBA, it's time to think about a pivot.


Azure Confidential Computing will keep data secret, even from Microsoft

Microsoft continues to make progress in the area of data security, because they know that data is the most critical asset any company (or person) owns.


Understanding the prevalence of web traffic interception

And this is why the Microsoft announcement matters more than any AWS product announcement. Faster storage and apps don't mean a thing if your data is breached.


Google Parent Alphabet To Consider $1 Billion Investment In Lyft

Goodbye Uber, I won't miss you, your questionable business practices, or your toxic work culture.


Identity Theft, Credit Reports, and You

Some advice for everyone in the wake of the Equifax breach: be nice until it's time to not be nice.


It's been a fun two weeks, but you are looking at two very tired friki cabezas atop the Arenas de Barcelona:

Next week I'm flying down to Orlando, Florida to spend a week filling my brain with all things MS: Ignite. I'm especially excited about this trip because it is one of the first times as a Head Geek that I'm traveling to a server- and application-centric show.


To be sure, we get our fair share of questions on SAM, DPA, SRM, WPM, VMAN, and the rest of the systems-side of the house at shows like Cisco Live, but I'm expecting (and looking forward to) a whole different class of questions from a convention that boasts 15,000+ folks who care deeply about operating systems, desktops, servers, cloud, and applications.


There are a few other things that will make next week special:

          • There. Will. Be. SOCKS. It was a hit at Cisco Live. It was even more of a hit at VMworld. And now the THWACK socks are ready to take MS: Ignite (and your feet) by storm. We can't wait to see the reactions to our toe-hugging goodness.
          • SWUGLife on the beach: For the second year in a row, Ignite will play host to the most incredible group of users ever assembled: the illustrious, inimitable SolarWinds User Group (or SWUG for short).
          • Geek Boys, Assemble!:  For the first time ever, Patrick, Tom, Kong, and I will all be at the same show at the same time. It's obviously a crime that Destiny couldn't join in the fun, but somehow I think she'll find a way to be with us in spirit. And of course, we can just consider this a prelude to the all-out Geeksplosion next month at THWACKcamp.
          • THERE. WILL. BE. EBOOKS.: For several weeks, I've been busy crafting the second installment in the SolarWinds Dummies series: Systems Monitoring for Dummies. While you can download it now from this link, we'll also have handouts at the booth to let all of the Ignite attendees know about it, marking the book's first live appearance on the show floor.

But that's more or less the view from the show floor. There are things that I'm eager to experience beyond the booth border (#1913, for those who will be in the neighborhood).


Tom will be giving two different talks, both of which I personally want to hear about: "Upgrading to SQL Server 2016" is going to be packed full of information and one of those sessions where you'll either want a keyboard or a recorder to get all the details. But "When bad things happen to good applications" promises to be classic SQLRockStar in action. For that one, I plan to bring a lighter for the encore number at the end.


I am also very eager to get out and see what Microsoft is all about these days. Sure, I use it on the desktop, I read the news, and I'm friends with an ever-growing number of folks who work in Redmond. But shows like these are where you get to see the aspirational side of a company and its technology. Ignite is where I will get to see who Microsoft WANTS to be, at least in the coming year.


That aspirational quality will be on display nowhere as much as the keynote talk by Satya Nadella on Monday. Look for me to be live-tweeting at that event, at the very least.


Stay tuned for my follow-up blog in two weeks, which I expect will be full of unexpected discoveries, a few food-related pictures, and hopefully a few shots of the SolarWinds friends we met up with while we were there.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


My colleague Patrick Hubbard made some interesting predictions about 2017 for government IT professionals, and how DevOps culture could change the landscape. I’d like to share them with you now, as we approach Q4, to see how his predictions have played out so far.


If there is one thing government organizations are used to, it’s change. Budgets, technology, and policies are constantly changing, and there’s no surprise that the IT professional’s role is constantly evolving.


Not only do government IT professionals have to deal with the usual difficulties of trying to keep up with new technology, such as cloud, containers, microservices, and the Internet of Things (IoT), they also need to deal with budget cuts, restrictive policies, and a lack of resources. It is now more important than ever to scrap the traditional siloed IT roles, such as network, storage, and systems administrators.


A general, holistic approach to government IT

Having generalists is particularly important within government IT, where resources and budgets may be stretched. The ability to have a holistic understanding of the IT infrastructure and make quick and informed decisions is crucial over the next year and beyond.


2017 is likely to bring new machine-based technologies and the continued adoption of DevOps, which encourages collaboration between siloed IT departments. Government IT professionals need to expand their viewpoints to focus on tools and methodologies they may not be immediately familiar with to prepare for and manage next-generation data centers.


Leave the automation to machines

As predicted, new machine-based technologies are going to become better and more sophisticated over time. Before technology, such as bots and artificial intelligence, is leveraged, new management and monitoring processes will need to be introduced to government organizations.


DevOps culture is coming

DevOps describes the culture and collaboration of the development and operations teams that is geared toward software development. The transition to DevOps is certainly not without its challenges, however. By leveraging these principles, government organizations can be well on their way to reaping the benefits of an integrated DevOps mentality.


DevOps is a positive organizational movement that will help government organizations empower IT departments to innovate. It also has the potential to improve agility, deliver innovation faster, provide higher-quality software, better align work and value, and improve the ability to respond to problems or changes.


The role of the government IT professional is constantly evolving. Since the good old days, when IT pros did little more than assist when emails stopped working, they now have much more power to shape the wider business strategy due to the reliance on technology for everyday tasks. By staying relevant and maintaining a general knowledge across the entire IT infrastructure, embracing a collaborative DevOps culture, and being open-minded to the integration of machines, government IT professionals will find themselves prepared for the changes that are coming their way.

Find the full article on Adjacent Open Access.

I have, on the occasional gray and challenging day, suggested that the state of IT contains a measure of plight. That it’s beset on all sides with the hounds of increasing complexity, reduced budgets, impossible business demands, and unhappy users. Fortunately for us all, however, being an IT professional is, as it has always been, pretty freaking awesome. Would I really make a major career shift away from casting spells on hardware, writing a little code to automate drudgery, and generally making the bezels blink? Nope. I’m not saying I’d do it for free, but after all these years I still have moments where it’s hard to believe I get paid to do what I love. And again this year, we get to celebrate IT Pro Day.


There’s a certain special comradery among IT professionals that runs deeper than merely sharing a foxhole in the server room. It’s celebrating the power-on of a new data center. It's the subdued antithesis of the sales dude-style slap-on-the-back compliment when we finally identify a bedeviling, intermittent root cause. It's mostly cultural. Our ilk actually cares about users, and for many it’s what brought them to technology in the first place. We solve problems they didn’t know they had in ways they wouldn’t understand, avoiding the adulation usually reserved for such heroics. Aww, shucks. We're just here to do what we can. #humblebrag


This year, share the fun and recognition of IT Pro Day with a friend. Take an extra-long lunch, and remember you’re the one that keeps the bezel lights on. We don’t just keep our businesses working; we affect the human experiences and the feelings of our users. With a little time and the right questions, we truly make the world a better place.


So have a little fun, send some eCards, and maybe appear in next year’s video here:


Happy IT Pro Day!

Data is the new gold to be mined, analyzed, controlled, and wielded to create disruption. The value of data-driven decisions is guiding the next generation of services. Data can be utilized to frame any story, but you should avoid being framed by your data.


Join me, fellow Head Geek Kong Yang, and industry experts Stephen Foskett and Karen Lopez for "Optimizing the Data Lifecycle," a discussion of the challenges presented by the data-driven era. In this panel discussion, we will also share best practices to optimize the consumption of that data.


THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

IT organizations are embracing hybrid IT because these services and technologies are critical to enabling the full potential of an application’s disruptive innovation. Although change is coming fast, the CIO’s mission remains the same: keep the app healthy and running smoothly. It’s time for application performance management to extend its strategy and practice to handle the modern application’s needs.


In the "Extend Your Modern APM Strategy" session, I will be joined by a panel of SolarWinds product experts, including Jerry Schwartz, director of product marketing, Robert Mandeville, product marketing manager, product managers Steven Hunt, Chris Paap, and Chris O'Brien, and Dan Kuebrich, director of engineering. We will explore the five elements of a modern approach to APM, including product demonstrations. The session will cover concepts from WPM to response-time analysis. After attending this session, you will have a better understanding of what an APM approach entails and the technologies that are available to support each of the five fundamental aspects of APM.


After attending this session, you will have a better understanding of what a comprehensive APM strategy entails and what technologies are available to support each of the five elements.


THWACKcamp is the premier virtual IT learning event connecting skilled IT professionals with industry experts and SolarWinds technical staff. Every year, thousands of attendees interact with each other on topics like network and systems monitoring. This year, THWACKcamp further expands its reach to consider emerging IT challenges like automation, hybrid IT, cloud-native APM, DevOps, security, and more. For 2017, we’re also including MSP operators for the first time.

THWACKcamp is 100% free and travel-free, and we'll be online with tips and tricks on how to use your SolarWinds products better, as well as best practices and expert recommendations on how to make IT more effective regardless of whose products you use. THWACKcamp comes to you so it’s easy for everyone on your team to attend. With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!



Thank You, IT Pros

Posted by sqlrockstar, Sep 14, 2017

They work in mystery, toiling away at all hours. Nobody ever sees them working, but many are happy with the results. And if anyone tries to reproduce their work, they end up disappointed. No, I’m not talking about the Keebler® Elves, although I suppose there are some similarities between these two groups of workers. Both are overworked, underpaid, and no one understands how they do their job so well.


I am talking about IT professionals, the unsung heroes of modern corporate enterprises around the globe. Except they are no longer unsung because back in 2015, SolarWinds created IT Pro Day! Created by IT professionals for IT professionals, IT Pro Day happens on the third Tuesday of September each year. IT Pro Day serves as a great reminder about all the work that goes on behind the scenes.


Here’s some data for you to think about this IT Pro Day:


  • IT pros spend 65% of their time managing IT and IT-related services
  • Nearly half of that time (47%) is dedicated to resolving technology issues for senior executives/chief officers


Let that sink in for a minute. Most of our time is spent catering to executives. You would think that the executives would appreciate all this effort, right? Maybe not:


  • 61% of IT pros are concerned about job security, with almost half (42%) suggesting the key reason is that company leadership does not understand the importance of IT


Okay, so maybe the executives appreciate the effort, but IT pros don’t believe that the executives understand the importance of IT. Which only seems odd when you find out everyone else does understand the importance:


  • 63% of end-users agree that IT has a greater impact on their daily work lives than the executives in the corner office


I’ve always thought most executives started out as regular employees. I guess I was wrong, because if that were true, then the above numbers would be different. And so would this one:


  • 91% of IT pros surveyed work overtime hours, and of those, 57% do so with no compensation for working overtime


Lots of overtime, worked for people who don’t understand the importance of a quality IT staff. Overworked. Underpaid. And no one can explain what it is they do for work. But we are dedicated to making things better for others:


  • 25% of IT professionals agree that half of the time, end-users who try to solve their own IT problems ultimately make things worse


Okay, so making things better for others also makes things better for us. But IT pros aren’t just looking out for the people (or themselves). They’re also looking out for the business:


  • 89% of IT professionals most fear a security breach


Somehow, all this data makes sense to me. I understand each data point because I have lived each data point. I am an IT pro, and damn proud to say that to anyone who cares to listen. Oh, who am I kidding? I’ll say it even to people who don’t care to listen.


IT pros don’t do this for money. We aren’t interested in that. (But it’s nice, don’t get me wrong, and here’s hoping someone in the corner office on the fourth floor sends me bacon for Christmas this year.) We truly love what we do for work:


  • 94% of IT pros say they love what they do


Here’s to you, IT pro. Enjoy your day. Walk with your head held high. Smile a few seconds longer when an executive asks you to fix their kid’s iPad®.


You’ve earned this day, and many more.


Thank you.



Starting out in IT, there were many things I wish I had known about, and one of them is the value of soft skills. Organizations want people with the proper drive who are willing to learn, but the ability to communicate, support, empathize, and help other people in the business will go a long way toward your success within any enterprise.


Finding a Job


Over the years I have spent in the field, I have been on both the interviewer's and the interviewee's side of the table. I have found that it always starts with how you relate to others and whether or not you can have a real conversation with the person you are talking to. I have met people during the interviewing process who were proud to be the guy/gal from Office Space with the red stapler: hiding out without any social skills. I have never once seen them hired into the organizations I have worked in. So, what are the key skills that a person must have to succeed in IT? Let’s break it down here.


  • Communication – The ability to have a conversation with a person will go a long way in your IT career. In most IT roles, staff interact with the business daily. Everything from simply having a conversation, to listening, to assisting by articulating clearly is necessary. I read somewhere that you should be able to explain complex technology in a form so simple that even a child can understand it. That is not always an easy task, but I compare it to when I go to the doctor. They have a lot of complex terms, like we do, but at the end of the day, they need to remove those from the conversation and explain what they are doing so that a non-medical professional can understand. That is the same level of communication required to be successful in your IT career.


  • Negotiation – The art of negotiation is important to anyone in life as a whole, but here is how it applies to your IT career. As you are looking at third-party products to support your organization, are you going to pay retail price? No way! Negotiation is necessary. How about when you are talking to a future employer about salary? Do you ever take the first offer? No way! Lastly, we even get to negotiate with our users/management/team in IT. They may ask for the most complex and crazy technology to do their jobs. You may be inclined to say no, but this is not how it works. Figure out what it takes, price it out, and odds are they won’t do it. This is the art of negotiation.


  • Empathy – Always empathize with the frustrated user. They are likely consuming the technology you implement. While the issue may not even be your fault, it is important to share that you understand they are having a hard day. More importantly, you will do what you can to resolve their issue as quickly as possible.


Soft skills go further than even the key ones that I have highlighted, but my hope is that this did get you thinking. IT is no longer full of people that don’t communicate well with others. That is a stereotype that needs to go away.


Long-term success


The only way to be successful in IT is to communicate well and play nice with others.  Use those soft skills that you have.  Any other approach, no matter how well you know your job, will find you looking for a new one sooner rather than later.

In my previous blog, I discussed the somewhat unique expectations of high availability as they exist within a healthcare IT environment. It was no surprise to hear the budget approval challenges that my peers in the industry are facing regarding technology solutions. It also came as no surprise to hear that I’m not alone in working with businesses that demand extreme levels of availability of services. I intentionally asked some loaded questions, and made some loaded statements to inspire some creative dialogue, and I’m thrilled with the results!


In this post, I’m going to talk about another area in healthcare IT that I think is going to hit home for a lot of people involved in this industry: continuity of operations. Call it what you want. Disaster recovery, backup and recovery, business continuity, it all revolves around the key concept of getting the business back up and running after something unexpected happens, and then sustaining it into the future. Hurricane Irma just ripped through Florida, and you can bet the folks supporting healthcare IT (and IT and business, in general) in those areas are implementing certain courses of action right now. Let’s hope they’ve planned and are ready to execute.


If your experiences with continuity of operations planning are anything like mine, they evolved in a sequence. In my organization (healthcare on the insurance side of the house), the first thing we thought about was disaster recovery. We made plans to rebuild from the ashes in the event of a catastrophic business impact. We mainly focused on getting back up and running. We spent time looking at solutions like tape backup and offline file storage. We spent most of our time talking about factors such as recovery-point objective (to what point in time are you going to recover) and recovery-time objective (how quickly can you recover back to this pre-determined state). We wrote processes to rebuild business systems, and we drilled and practiced every couple of months to make sure we were prepared to execute the plan successfully. It worked. We learned a lot about our business systems in the process, and ultimately developed skills to bring them back online in a fairly short period of time. In the end, while this approach might work for some IT organizations, we came to realize pretty quickly that it wasn't going to cut it long term as the business continued to scale. So, we decided to pivot.


Then we started talking about the next evolution in our IT operational plan: business continuity. So, what’s the difference, you ask? Well, in short, everything. With business continuity planning, we’re not so much focused on how to get back to some point in time within a given window; instead, we’re focused on keeping the systems running at all costs, through any event. It’s going to cost a whole lot more to have a business continuity strategy, but it can be done. Rather than spending our time learning how to reinstall and reconfigure software applications, we spent our time analyzing single points of failure in our systems. Those included software applications, processes, and the infrastructure itself. As those single points of failure were identified, we started to design around them. We figured out how to travel a second path in the event the first path failed, to the extreme of even building a completely redundant secondary data center a few states away so that localized events would never impact both sites at once. We looked at leveraging telecommuting to put certain staff offsite, so that in the event a site became uninhabitable, we had people who could keep the business running. To that end, we largely stopped having to do our drills because we were no longer restoring systems. We just kept the business survivable.


While some of what we did in that situation was somewhat specific to our environment, many of these concepts can be applied to the greater IT community. I’d love to hear what disaster recovery or business continuity conversations are taking place within your organization. Are you rebuilding systems when they fail, or are you building the business to survive (there is certainly a place for both, I think)?


What other approaches have you taken to address the topic of continuity of operations that I haven’t mentioned here? I can’t wait to see the commentary and dialogue in the forum!

Anyone who is having issues with performance or considering expanding their deployment has had to wrestle with the question of how, exactly, to get the performance they need. This session will focus on maximizing performance, whether tuning equipment to optimize capabilities, tuning polling intervals to capture the data you need, or adding additional pollers for load balancing and better network visibility.


In the "Orion at Scale: Best Practices for the Big League" session, Kevin Sparenberg, product manager, SolarWinds, and Head Geek Patrick Hubbard will teach you best practices for scaling your monitoring environment and ways to confidently plan monitoring expansion. They will focus on maximizing the performance of your expanded deployment, and more!


THWACKcamp 2017 is a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers.


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

Training is a topic I hold near and dear to my heart. Here are some of my thoughts about how a company will succeed or fail based on the training (and thereby the competence) of its technical staff.



My team members decide what they need to learn to better support our needs, then set aside a couple of hours each week, during work hours, to do training. This is informal, undirected time that benefits the company a lot!



Companies miss out when they don't allocate formal time and funds to help ensure that their employees have professional training. It doesn't matter whether those needs involve learning internal safety procedures, corporate IT security policies, basic or advanced switching/routing/firewalling, setting up V-Motion or VoIP or Storage LUNs, or just learning to smile while talking to customers on the phone.


Companies that don't budget time and money to train their staff risk not having the right staff to

  • Answer questions quickly
  • Produce great designs
  • Provide excellent implementations
  • Troubleshoot problems efficiently and effectively


It may surprise or dismay you, but training is more effective when it's done off site. Being at a training facility in person--not remotely or via eLearning--gets you more bang for your training dollars. It may look more expensive and inconvenient than participating in recorded or online/remote training sessions, but that perception is deceiving.


Relying solely on distance learning has unique costs and drawbacks:

  • Technical problems
    • Trouble hearing audio
    • Trouble sharing screens
    • Losing training time while waiting for the instructor to troubleshoot other attendees' technical problems
  • Missing out on the pre-class, lunchtime, and post-class conversations and meetings. I've learned a lot from sharing information with students during these "off class" times. I've made some personal connections that have helped me get a lot more out of the training, long after the sessions are over. Those opportunities are lost when a class is attended online.
  • Remote eLearning sessions conducted onsite are ineffective due to work interruptions. Work doesn't stop when you're attending training at your desk. The help desk calls our desk phones when we're needed, and our cell phones when we're not at our desks. People stop by for help without notice (we call these "drive-bys"), expecting us to interrupt our online training to deal with their issues. Hours or days of training are lost this way.
  • Remote or recorded training sessions are often dry and time-consuming. We don't need to sit through introductions and explanations of training settings, yet that's what some companies include in their online training offerings. These sessions end up becoming cut-rate solutions for people or companies who can't afford to do training the right way. Actual hands-on, real-time, face-to-face experiences deliver a much richer training experience, and they are critical to getting the most out of every training dollar. Plus, getting out of the office encourages active participation during training and results in a refreshed employee coming back to work. Training is no vacation (especially when taking a regimen of 12- to 14-hour classes for four or five days straight), but a change of environment is a welcome pick-me-up.


Relying on people to seek their own training using their own time and money is often a mistake

You can end up with people who either can't serve your company's needs or are burned out and frustrated. They'll look for a company that properly supports them with in-house training, and you'll potentially lose whatever you budgeted to train them, as well as the time invested in their learning curve as a new employee.


To avoid this, establish a corporate policy that protects your investment.

  1. If a person leaves within twelve months of receiving training at the company's expense, they must reimburse the company for travel costs, meals, hotel, and tuition.
  2. If a person leaves between twelve months and twenty-four months after receiving training at the company's expense, they must only reimburse the company the cost of the tuition, not the travel, hotel, or meals.
  3. Once a person has been with the company for some arbitrary longer length of time (7-8 years or so), they don't have to reimburse any training costs when they leave, no matter how soon after training they take off. Your human resources team should be able to provide statistics about the likelihood of a person staying with the company after X years. Use their figures, or you can omit this option.




If you don't fund enough training for your people, you won't have the right tools for the job when you need them, and your company will not prosper as well as it should. Those underappreciated employees will either inadvertently slow down your progress, or they'll take their services to a company that appreciates them. They'll see their value confirmed when the new company reinvests in them by sending them to great training programs.


How much does training cost?


The real question is, "How much does it cost to have untrained people on staff?"

If your people can't do the job because they haven't been trained, they'll make mistakes and provide poor recommendations. You won't be able to trust them.  You'll have to contract out for advanced services that bring in a hired gun to solve one issue one time. Once the expert leaves, you still have needs that your staff can't fill. Worse, you don't have impartial internal experts to advise you about the directions and solutions you should implement.


You can find many different vendor-certified training solutions at varying price points, but we can talk about some general budget items for a week of off-site training.

  • Tuition:  ~$3,500 - $6,000  (or more!) for a one-week class at the trainer's facility
  • Travel:
    • Flight ~$750 (depending on source and destination)
    • Car rental ~$300 (again, depends on distance, corporate discounts, etc.)
    • Hotel ~$150 per night (roughly)
    • Meals ~$125 per day (this is pretty high, but we're just looking at ballpark figures here)


Add up those figures and one week of training for one person can easily run $6,000 to $8,500.


Consider discounts and special offers. You may be able to reduce your company's training costs to almost zero, especially if your employees live in the city where the training is held.

  • Cisco Learning Credits can cover the full cost of Cisco training if you have a good arrangement with your Cisco reseller and you choose a training company that accepts Learning Credits. If you don't have Cisco hardware, approach the vendor or your VAR for free or discounted training.
  • Some training centers offer big discounts or two-for-one training (or better) opportunities. It never hurts to ask for incentives and discounts to use their services.
  • Some training companies cover all hotel costs when training at their sites!
  • Some training programs include breakfast and lunch as part of the overall cost, leaving you to expense only dinners.
  • Car rental may not be required if you select a hotel adjacent to the training facility. Walk between them, rely on the hotel's airport shuttle, or use a taxi.


Do not rely solely on Cisco Learning Credits (CLCs)


A CLC is typically worth about $100, so if a class costs $3,500, you need 35 Learning Credits for an employee to have "free" training. Of course, those Learning Credits are NOT free. Your company either buys them (at a discount) or earns them as an incentive for its business. Perhaps you can sign an agreement with Cisco or your VAR that guarantees you'll spend X dollars on new hardware or services annually, and in return receive some percentage of X to use as Learning Credits. I've worked with two VARs who do this, and it's much appreciated.


CLCs are never enough to cover all of our training needs. For one thing, they're only good for Cisco training; if you have F5 gear, CLCs are of no value for that training. Many training companies offer 2-for-1 discounts, or buy-one-get-a-second-at-50%-off, or better. And you can make those dollars go further if you follow a good "Train the Trainer" program: you select a person with strong communication and comprehension skills to receive the training, and when they return, they train their peers. They're fresh, they have contacts from their class who can be queried for answers to questions, and they may save you the cost of sending everyone to training.


Relying solely on CLCs means you've either got to spend a lot of capital dollars up front (to build up a bank of CLCs to use in the next twelve months), or you need more budget to cover the training gap. Allocate sufficient funds to ensure your people have the exposure, training, and knowledge to correctly guide your company to a better IT future. I can't emphasize this enough!


Discover your training needs. I have found that each analyst typically needs two weeks of off-site training annually, perhaps more for the first few years, until everyone is up to speed.


Why so much training?  Training is necessary for your team to:

  • Keep up with versions, bug fixes, better ways of doing things, security vulnerabilities and their solutions.
  • Do the highly technical and specialized things that make your network, servers, and applications run the best they can.
  • Maintain their skill sets and ensure they're aware of the right options and security solutions to apply to your organization.
  • Ensure they can properly design, implement, and support every new technology that your company adopts.
  • Be trusted to provide the right advice to decision-makers.



You COULD hire outside contractors to be your occasional technical staff... but then you'd be left with unthinking, non-advancing worker drones on your staff, who'll drag you down or leave you in the lurch when they find employers who will believe in and invest in them.


Harsh? You bet! But when you understand the risks of having untrained people on staff, you see all the benefits that result from training.


If you have staff who sacrifice their personal expenses and family time (evenings, weekends, and holidays) to train themselves for the company's benefit, cherish them--they're unusual, and they won't stay with you long. They're on the fast path to leave you behind. Give them raises and promotions to encourage them to stay, and reimburse their training expenses. If you don't, they'll leave for the competition, who'll jump another step ahead of you.


Succeed by reinvesting in your staff. Show them they're appreciated by sending them to training, and they will help your company succeed.

Greetings from Barcelona! I’m here for VMworld and you can find me at the SolarWinds booth, at my session on Wednesday, or in the VM Village hanging out with other vExperts. If you are at the event, please stop by and say hello. I’d love to talk data with you.


As always, here are some links from the intertubz that I hope will hold your interest. Enjoy!


Equifax Says Cyberattack May Have Affected 143 Million Customers

There are only about 126 million households in the United States. So, yeah. You are affected by this.


Are you an Equifax breach victim? You could give up right to sue to find out 

As if suffering a breach isn’t bad enough, Equifax is tricking people into waiving their right to sue. Like I said, you are affected. Don’t bother checking. Equifax needs to notify you in a timely manner.


Three Equifax executives sold $2 million worth of shares days after cyberattack

If it's true that they sold knowing about the breach, then my thought is this: Equifax can’t go out of business fast enough for me.


Surprising nobody, lawyers line up to sue the crap out of Equifax

Oh, good. That should solve everything because lawyers can go back in time to prevent the theft of our identities, right?


Windows 10 security: Microsoft offers free trial of latest Defender ATP features

Security should be free for everyone. Here's hoping Microsoft does the right thing and tries to protect everyone, always, for no additional cost. Too bad they didn’t help Equifax.


Hackers Gain Direct Access To US Power Grid Controls

If the Equifax stories didn’t rattle you enough, here’s one about hackers controlling your electricity.


A Simple Design Flaw Makes It Astoundingly Easy To Hack Siri And Alexa



To Understand Rising Inequality, Consider the Janitors at Two Top Companies, Then and Now

Long, but worth the read. It’s a fascinating comparison between the American workforce 35 years ago and today.


The view from my hotel room at VMworld, overlooking Plaza Espanya at sunset:


SaaS and the SysAdmin

Posted by scuff Sep 12, 2017

In the SMB market, SaaS vendors are quick to promote that you can turn off your on-premises servers and ditch your IT guy/gal (I kid you not). In the enterprise, it’s unlikely that all of your workloads will move to SaaS, so the IT pros may still be safe. But let’s pick on one technology for a moment as an example: Microsoft Exchange. Assuming you ditch your Exchange boxes for Exchange Online, what’s an Exchange administrator to do? How does their role change in a SaaS world?


What stays the same?
Administration: There’s still a need for general administration of Exchange Online, especially Identity & Access Management. People will still join, leave, change their names and move teams. Departments will still want distribution groups and shared mailboxes. The mechanics to do this are different and tasks will likely be done by someone who’s administering the other Office 365 services at a tenancy level, but that’s not too different to Enterprises that have a separate “data security” team anyway for IAM.


Hello, PowerShell: Speaking of changes in how you achieve things, proficiency in PowerShell is the best new skill to have, though PowerShell is not limited to Exchange Online/Office 365. If you’re already using PowerShell to administer on-premises Exchange servers, you’re more than halfway there.


Compliance: It’s rare to find an organization that leaves all the settings at their defaults. Exchange Online may still need tweaking to ensure it locks down things and applies rules that you’re using in-house to achieve and maintain policy or regulatory compliance. That can be as simple as the blocked/allowed domains or more complex like Exchange transport rules and Data Loss Prevention settings.


Integration: We’ve been using SMTP to handle information flow and system alerts for a very, very long time now. It’s possible that you’ll need to replicate these connections from and to other systems with your Exchange Online instance. There’s a gotcha in there for aging multi-function fax machines that don’t support TLS (don’t laugh), but this connectivity doesn’t just go away because you’ve moved to the cloud.


End-user support: Sorry, the cloud won’t make all the support calls go away. Brace yourself for calls that Outlook isn’t doing what it’s supposed to, and it’s only impacting one user. Then again, that’s usually an Outlook problem and not an Exchange problem. A quick “do you see the same problem in Outlook Web Access?” is your first troubleshooting step.


What changes?
Bye bye, eseutil: Sorry not sorrry, the Exchange database is no longer your problem. I will miss using eseutil to check and repair it.


No more upgrades: Patches, service packs, and major version upgrades are gone when Microsoft manages the application. The same goes for the underlying server operating system.


Monitoring: We’re still interested in knowing the service is down before the users have to tell us, but we’re no longer able to directly monitor the running Microsoft Exchange services. Instead, we monitor the Office 365 status information and message center.


Server provisioning and consolidation: Shutting down a big project and making people redundant? Expanding the business with a merger or acquisition? No more building servers or decommissioning them – just add more licenses or close accounts.


Your new role
The more things change, the more they stay the same. Though technology changes how we do our jobs, the things that need to be done don’t change. Yes, in this case Microsoft has the responsibility for, and power over, some parts that you would have taken care of on your own server. But I’m not seeing that the shift is enough to cut your hours in half just yet.


Join the conversation – let me know how adopting a SaaS solution has changed what you do in your role or how you do it.

In most office environments, power strips or surge protectors are normal, everyday devices that most of our computers, printers, copiers, etc. are plugged into. They’re fairly innocuous and probably something we take for granted, right? Just a normal piece of equipment in our office. What if that power strip was actually a hacker’s tool, quietly facilitating the exfiltration of private data from your organization?


Check out the Power Pwn – a fully functional 8-outlet, 120V power strip that also contains everything you would need to penetrate a network, including dual Ethernet ports, a high-gain wireless antenna, Bluetooth, and optional 3G/LTE. Once this device is carefully placed in your environment, a hacker can remotely access and control it, and begin to explore and attack anything it can see on your network.


Maybe your network team has things locked down fairly tight, and plugging this thing into an Ethernet port for a photocopier isn’t going to get access to anything important. Then an employee decides they need more power outlets at their desk and quietly moves this shiny new surge protector off the copier and to their desk. I mean, that copier only needs one power outlet, so why waste eight perfectly good outlets there? Now they happily “protect” their desktop computer with the relocated device. Let’s say this employee is a member of your Finance team, or Human Resources…and their desktop Ethernet port has a lot more access to sensitive information on your network…


This is one example of the toys (er, tools) available to anyone interested in doing a little hacking. More often than not, they are sold as ‘Penetration Testing’ devices for use by security professionals who might be hired by private companies to do a vulnerability assessment or penetration test on their networks.


These are also tools that you, the IT pro, can use to do a little hacking of your own, allowing you to learn more about the potential threats to your environment and further protect it with that knowledge.


A Pineapple, a Ducky, and a Turtle walk into a bar…


As technology has advanced over the last 50 years in step with Moore’s Law, processors and the devices that use them have scaled down considerably as well. This has allowed the emergence of tiny microcomputers that are as powerful as, or more powerful than, their full-sized counterparts from 3-5 years past.


The Power Pwn is just one example of a pre-fabricated, plug-and-play hacking device, with a tiny embedded computer, capable of running a fully functional operating system and tool package that allows for penetration and possible attack of an unsuspecting network.


Check out the store at Hak5Shop for some of these other great tools.


For those interested in lurking about the airwaves, there is the Wifi Pineapple. This nefarious little device allows you to scan and analyze wireless networks. With it you can create your own ad-hoc network, or mimic your local coffee shop’s wireless network and intercept and analyze the traffic crossing it from other patrons while they check their bank balances sipping on a latte.


I hope this goes without saying but I’ll say it anyway - DO NOT DO THIS. This is about hacking without getting arrested.


It would be perfectly okay to use a Wifi Pineapple at home, and intercept your teenager’s Snapchat conversations perhaps…


The USB Rubber Ducky looks like a harmless USB key, but plug it into the USB port of your Windows, OS X, Android, or Linux device and it will fool any of those operating systems into believing it’s just a keyboard (getting around any pesky security policy blocking USB drives by acting as an HID – Human Interface Device), and then drop a malicious payload, open a reverse shell, or log keystrokes.


Right, but people don’t put strange USB keys into their devices, right? Well, it turns out about half of them still do. A presentation from Black Hat 2016 discussed an experiment in which almost 300 USB keys were randomly dropped around the campus of the University of Illinois, and 48% of them reported back to the researchers, indicating they had been plugged in and were able to establish connectivity to the researchers’ command and control server. There was no malicious payload here, obviously, but it shows that what we as IT pros may see as common sense isn’t all that common. People see a free 32GB USB key sitting on a park bench and think it’s perfectly okay to plug it in and check it out.


Pick up a few Duckys and set up a quick test at your office, with permission of course, and see if Dave from HR likes free USB keys. I bet he does.


Another cool tool from this site is the Lan Turtle. This little guy looks like a USB Ethernet adapter – perfect for the latest lightweight notebooks that don’t have Ethernet, right? Well, now you’ve provided an attacker with remote access, network scanning, and man-in-the-middle capabilities.


Finally, if you haven’t already bought one, get yourself a Raspberry Pi. These microcomputers are the perfect platform for doing some playing/hacking in your home lab or at work, especially coupled with one of the OS or software packages I will talk about next.


Sharks and Dragons


I’ll caveat this segment by suggesting that you get comfortable with Linux, of any flavor. I don’t mean you need to grow a ridiculous beard and lose the ability to walk outside in daylight, but at least be able to navigate the filesystem, install applications, do some basic configuration (networking, users, permissions), and edit text. I don’t want to open the Nano vs. Vi can of worms here, but let’s just say I opened Vi once, and I’m still stuck in it, so use Nano if you’re a ‘Nix rookie like me.


Also if you know how to get out of Vi, please let me know.


The reason here is that many of the popular pentest/hacking software packages are Linux-based. Many of the tools are open source, and community-driven, and so they are written to run in a command line on an open source platform like Linux.


There are some that have Windows/OSX variants or some sort of GUI, but if you want to get your hands on all the bells and whistles, the shell is your friend.


Having said all of that, I’ll start with a tool that actually doesn’t need Linux: the packet capture tool, Wireshark. Wireshark does one thing and does it really well: it captures network traffic at the packet level, wired or wireless, and allows you to actually see the traffic crossing your network in extreme detail. It’s a cornerstone tool for network administrators doing troubleshooting, and it’s a powerful tool for security professionals who want to take a deep, granular view of the information crossing their networks.
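

If you want to script a quick capture rather than clicking through the GUI, here is a minimal sketch using Python's scapy library (an assumption on my part -- scapy is a separate tool, not part of Wireshark, and you'll need to install it and run the script with capture privileges):

    # A minimal scripted capture, assuming the scapy library is installed and
    # the script runs with privileges that allow sniffing on the interface.
    from scapy.all import sniff

    def show(pkt):
        # Print a one-line summary of each captured packet.
        print(pkt.summary())

    # Grab 20 packets of web traffic, then stop; adjust the BPF filter and
    # count to suit whatever you are investigating.
    sniff(filter="tcp port 80 or tcp port 443", count=20, prn=show)

Writing the capture out to a .pcap file and opening it in Wireshark afterward gives you the best of both worlds: scripted collection plus the GUI's analysis features.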


Wireshark 101 by Laura Chappell, the preeminent expert on Wireshark, is recommended reading if you want to build a solid foundation in packet capture and analysis.


Next up, Kali Linux. I warned you about the Linux, right? Often referred to simply as “Kali,” this is a Debian-based Linux distribution that packages over 600 penetration testing and hacking tools. It’s the Swiss Army knife for security professionals, and for hackers wearing hats of any color. While the underlying platform is still Linux, it does have a great GUI that provides access to the tools within. Not to mention the really cool dragon logo that has made its way into popular culture, making appearances in Mr. Robot.


Mr. Robot is required viewing if you’re interested in hacking, by the way.


Kali also has a fantastic resource available for learning how to properly use it – Kali Linux Revealed should also be added to your reading list if you want to take a deeper look at using Kali for your own purposes.


Less of a hacking tool and more of a security analysis product is Nessus. Nessus is primarily a vulnerability scanner, allowing you to discover and assess significant security flaws in your environment. This isn’t a penetration test, mind you, but an assessment of the software and operating systems within your network. It will identify devices that are exposed or vulnerable to malware, unpatched operating systems, and common exploits. It is free for individual use, and it’s another software product I highly recommend testing within your own environment.


Homework Assignment


All of the tools outlined here are simply that: tools. They can be used to learn and assess, or they can be used maliciously and illegally. We want to learn and develop skills, rather than end up with lengthy prison terms because we packet-captured a bunch of credit card numbers at our local Starbucks.


So, please don’t do that.


If you are interested in hacking as an IT professional, I’d highly encourage you to try to get your hands on the software I’ve outlined here at the very minimum. It’s all free and doesn’t require a lot of resources to run. If you want to take things a bit further, get your hands on some of the hardware tools as well. The combined creative potential of the hardware and software here is limitless.


Mr. Robot was already mentioned as required viewing, but there’s more! If you haven’t already seen these multiple times, you budding hackers have a homework assignment – to watch the following movies:


Wargames (How about a nice game of chess?)

Hackers (Hack the planet!)

Swordfish (NSFW)

Sneakers (Setec Astronomy)


Please comment below and let me know of any other tools, hardware or software you'd recommend to a greenhorn hacker. What movies, books, or TV should be required viewing/reading?

The other day, I was talking with my dad and told him IT Pro Day was coming up, and that I needed to write something about it. "Why is it IT PRO Day?" he asked, "Why not just ‘IT People Day’ or ‘IT Enthusiasts Day’? Why leave out all those aspiring amateurs?"


My dad was trolling me using my own arguments from a debate we frequently had when I was a kid. You see, my dad has been a musician his whole life. He attended the Music & Art high school in NYC, then Juilliard and Columbia, and then had a career that included stints with the New York Philharmonic, the NBC Symphony of the Air, and 46 years with the Cleveland Orchestra. Suffice it to say, my dad knew what it meant to be "a professional."


As a kid, I insisted that the only thing separating pros from amateurs was a paycheck (and the fact that he got to wear a tuxedo to work), and that this simplistic distinction wasn't fair. Of course, what was simplistic was my reasoning. Eventually I understood what made a musician a "pro," and it had nothing to do with their bank account.


So that was the nature of his baiting when I brought up IT Pro Day. And it got me thinking: what IS it that makes an IT practitioner a professional? Here's what I've learned from dear old dad:


First, having grown up among musicians, I can PROMISE you that being a professional has nothing to do with how much you do (or don't) earn at “the craft,” how obsessively you focus on it, or how you dress (or are asked to dress) for work.


Do you take your skills seriously? Dad would say, "If you skip one day of practice, you notice. Two days and the conductor notices. Three days and the audience notices. Pros never let the conductor notice." In an IT context, do you make it your business to stay informed, up to date, know what the upcoming trends are, and get your hands on the new tech (if you can)? It even extends to keeping tabs on your environment, knowing where the project stands, and being on top of the status of your tickets.


"If you're not 30 minutes early, you're an hour late," Dad would say as he headed out at 6 p.m. for an 8 p.m. concert. "I can't play faster and catch up if I'm 10 minutes late, you know!"


Besides the uncertainty of traffic, instruments needed to be tuned, music sorted, warm ups run. While not every job requires that level of physical punctuality, it's the mental piece that's relevant to us. Are you "present" when you need to be? Do you do what it takes to make sure you CAN be present when it is time to play your part, whether that's in a meeting, during a change control, or when a ticket comes into your queue?


When you first learn an instrument, a lot of time is spent learning scales. For those who never made it past the beginner lessons, I have some shocking (and possibly upsetting) news: even the pros practice scales. In fact, I'll say *especially* the pros practice scales. I asked Dad about it. He said that you need to work on something until you don't have to think about it anymore. That way, it will be there when you need it. As IT pros, we each have certain techniques, command sequences, key combinations, and more that just become a part of us and roll off our fingers. We feel like we could do data center rollouts in our sleep. We run product upgrades "by the numbers." The point is that we've taken the time to get certain things into our bones, so that we don't have to think about them anymore. That's what professionals do.


This IT Pro Day, I'm offering my thanks and respect to the true IT professionals. The ones who work every day to stay at the top of their game. Who prepare in advance so they can be present when they're needed. Who grind out the hours getting skills, concepts, and processes into their bones so it's second nature when they need them. Doesn't that sound like the kind of IT pros you know? The kind you look up to?


The truth is, it probably sounds a lot like you.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


Every federal IT pro should be doing standard database auditing, which includes:


  • Taking a weekly inventory of who accessed the database, as well as other performance and capacity data
  • Ensuring they receive daily, weekly, and monthly alerts through a database-monitoring tool
  • Keeping daily logs of logins and permissions on all objects
  • Maintaining access to the National Vulnerability Database (NVD), which changes daily
  • Performing regular patching, particularly server patching against new vulnerabilities


These are just the basics. To optimize database auditing with the goal of improving IT security, there are additional core steps that federal IT pros can take. The following six steps are the perfect place to start.


Step 1: Assess Inventory


Tracking data access can help you better understand the implications of how, when, where, and by whom that data is being accessed. Keeping an inventory of your PII is the perfect example. Maintaining this inventory in conjunction with your audits can help you better understand who is accessing the PII.


Step 2: Monitor Vulnerabilities


Documented vulnerabilities are being updated every day within the NIST NVD. It is critical that you monitor these on a near-constant basis. We suggest a tool that monitors the known-vulnerabilities database and alerts your agency, so action can be immediate and risks are mitigated in near real-time.
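

If you're curious what such a tool is doing behind the scenes, here is a rough sketch that polls the public NVD REST API for recently modified CVEs matching a keyword. The endpoint and parameter names reflect my reading of the NVD 2.0 API and should be verified against current NVD documentation; a real monitor would also add an API key, paging, and alerting:

    # Sketch: list CVEs modified in the last 24 hours that match a keyword.
    import datetime
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword, hours=24):
        end = datetime.datetime.utcnow()
        start = end - datetime.timedelta(hours=hours)
        params = {
            "keywordSearch": keyword,
            "lastModStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
            "lastModEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
            "resultsPerPage": 50,
        }
        resp = requests.get(NVD_URL, params=params, timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("vulnerabilities", []):
            cve = item["cve"]
            print(cve["id"], "-", cve["descriptions"][0]["value"][:80])

    if __name__ == "__main__":
        recent_cves("oracle database")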


Step 3: Create Reports


Make sure you have a tool in place that takes your logs and provides analysis. This should, ideally, be part of your database monitoring software. Your reports should tell you in an easy-to-digest format who’s using what data, from where, at what time of day, the amount of data used, etc.
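

To make that concrete, here is a minimal sketch of the kind of roll-up a reporting tool performs, assuming an access log exported to CSV with hypothetical column names (user, object, source_host, timestamp, bytes_read) and Python's pandas library:

    # Sketch: summarize raw access-log rows into a who/what/where/when report.
    # Column names are hypothetical; map them to whatever your audit log emits.
    import pandas as pd

    log = pd.read_csv("db_access_log.csv", parse_dates=["timestamp"])
    log["hour"] = log["timestamp"].dt.hour

    report = (
        log.groupby(["user", "object", "source_host", "hour"])
           .agg(accesses=("timestamp", "count"), data_read=("bytes_read", "sum"))
           .reset_index()
           .sort_values("data_read", ascending=False)
    )
    print(report.head(20).to_string(index=False))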


Step 4: Monitor Active Directory®


The question here is who is accessing this information, and whether they should be. It's critical to understand more than just who is touching your data; you must have a clear picture of which accounts are accessing which data, and when they are accessing it.


Step 5: Create a Baseline


If you have a baseline of data access on a normal day, or at a particular time on any normal day, you’ll know immediately if something is outside of that normal activity. Based on this baseline, you’ll immediately be able to research the anomaly and mitigate risk to the database and associated data.
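

As a toy illustration of what "outside of normal" can look like, the sketch below compares today's per-user access counts against a two-week baseline and flags anything more than three standard deviations away. The data, user names, and threshold are hypothetical starting points, not recommendations:

    # Sketch: flag access counts that deviate sharply from a historical baseline.
    import statistics

    # Accesses per day over the last two weeks (hypothetical data).
    history = {
        "alice": [110, 95, 102, 98, 120, 105, 99, 101, 97, 108, 103, 100, 96, 112],
        "bob":   [12, 15, 9, 14, 11, 10, 13, 12, 16, 11, 14, 10, 12, 13],
    }
    today = {"alice": 104, "bob": 420}  # today's counts (hypothetical)

    for user, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0
        observed = today.get(user, 0)
        if abs(observed - mean) > 3 * stdev:
            print(f"ANOMALY: {user} made {observed} accesses (baseline ~{mean:.0f})")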


Step 6: Create One View


Perhaps the most critical step in improving security through database auditing is understanding its role within the larger IT environment. It is worth the investment to find a tool that allows federal IT pros to see database audit information within the context of the greater infrastructure. Application and server monitoring should work in conjunction with database monitoring.


There is one final step: monitor the monitor. There should never be a single point of failure when performing database audits. Make sure you’ve got secondary checks and balances in place so no single tool or person has all the information, access, or control.

Find the full article on Federal Technology Insider.

The backup technology landscape is almost as complex as the environments it needs to protect. Do you go with point solutions to solve specific problems or a broader solution to consolidate backups into one platform? In this session, we discuss the latest backup approaches for each of the different types of IT environments you may need to protect.


In my session, "Understanding Backup Technologies and What They Can Do for You," I will be joined by Keith Young, Sr. Sales Engineer here at SolarWinds, to discuss the ins and outs of backup technologies and their impact on your IT environment.


THWACKcamp 2017 is a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers.


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

A big hearty THANK YOU to everyone who joined us at our SolarWinds booth, breakout and theater sessions, and Monitoring Morning! We were excited to converse with you in person about the challenges that practitioners face.

SolarWinds views from VMworld (photos): the SolarWinds family at VMworld 2017, Monitoring Morning at VMworld, the SolarWinds booth at VMworld 2017, and Future:Net.


There were plenty of announcements at VMworld. The two that stood out for me were:

  1. VMware Cloud on AWS was announced as a normalization of moving from a VMware vSphere environment to an AWS cloud environment. It runs single-tenant on bare-metal AWS infrastructure, which allows you to bring your Windows Server licenses to VMware Cloud on AWS. Each software-defined data center can consist of 4 to 16 instances, each with 36 cores, 512GB of memory, and 15.2TB of NVMe storage. Initial availability is quite limited: there is only one region, and clusters run in a single AWS Availability Zone. The use cases for this service are data center extension, test and development environments, and app migration. I’ll withhold final judgment on whether this VMware Cloud derivative will sink or swim.
  2. AppDefense was another announcement at VMworld 2017. It is billed as an application-level security solution that uses machine learning to build workload profiles. It gathers behavioral baselines for these workloads and allows the user to implement controls and procedures to restrict any anomalous or deviated behavior.


Finally, I was invited to Future:Net, a conference within a conference. It was really cool to talk shop about the latest academic research, as well as what problems the next generation of startups are trying to solve.

Future:Net Keynote

P.S. Let me know if you will be at VMworld Europe 2017 in Barcelona. If so, definitely stop by to talk to me, chrispaap, and sqlrockstar.

Cloud fixes everything. Well, no it doesn’t. But cloud technology is finally coming out of the trough of disillusionment and entering the plateau of productivity. That means as people take cloud technologies more seriously and look at practical hybrid-cloud solutions for their businesses, engineers of all stripes are going to need to expand their skills outside their beloved silos. 


Rather than focusing only on storage, networking, or application development, there is great value in IT professionals who design and build cloud solutions knowing a little bit about the entire cloud stack.


The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”


We call it a “cloud stack” because of all the components built one on top of another. This includes elements such as networking, storage, virtualization, compute, load balancing, application development tools, and, more specific to operations, things like user account management, logging, and authentication services. These are all built right into the IaaS cloud.


But when looking at the overall cloud stack, IaaS is the foundation for Platform as a Service, such as development tools, web servers, and database servers, which in turn serves as the platform for Software as a Service, such as email and virtual desktops.


So when an IT professional is looking at a cloud solution for their organization, regardless of their background and specific area of expertise, there’s a clear need to understand a little bit about networking, a little bit about storage, a little bit about virtualization, and even a little bit about application development. Sure, there’s still a need for experts in each one of those areas, but when looking at an overall cloud (or, more realistically, hybrid-cloud) initiative, a technical engineer or architect must understand all those components to some extent to design, spec, build, and maintain the environment.


I really believe this has always been the case, though, at least for good engineers. The really good IT pros have always had some level of understanding of these other areas. Personally, as a network engineer, I’ve had to spin up VMs, provision storage, and work with virtualization platforms to one extent or another from the very beginning of my career, and I don’t consider myself that great of an engineer.


When I put in a new data center switching and firewalling solution, I’m sitting down with someone from the storage team, the Linux team, the Windows team, the virtualization team, and maybe even the security team. Often I need to be able to speak to all of those areas because, when it comes down to it, our individual sections of the infrastructure all work together in one environment.


Cloud is no different

All those components still exist in a cloud solution, so when IT pros look at an overall design, there’s discussion about network connectivity, bandwidth and latency, storage capacity, what sort of virtualization platform to run, and what sort of UI will deliver the actual application to the end user. The only difference now is that the cloud stack is one orchestrated organism rather than many disparate silos to address individually.


For example, how will a particular application that lives in AWS perform over the latency of a company’s new SD-WAN solution?
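

A first-order answer can come from a simple probe like the sketch below, which times repeated requests to the application from inside the corporate network. It assumes the application exposes an HTTPS endpoint you're allowed to hit (the URL is a placeholder) and that Python's requests library is available; real validation would also look at throughput, jitter, and behavior under load:

    # Sketch: measure round-trip response time to a cloud-hosted application
    # across the WAN path. The endpoint below is hypothetical.
    import statistics
    import time

    import requests

    URL = "https://app.example.com/health"

    samples = []
    for _ in range(10):
        start = time.perf_counter()
        requests.get(URL, timeout=5)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds

    print(f"median {statistics.median(samples):.1f} ms, "
          f"worst {max(samples):.1f} ms over {len(samples)} requests")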


And in my experience, I see a hybrid-cloud approach more than anything else, which requires very careful consideration of the networking between the organization and the cloud provider, and of how applications can be delivered in a hybrid environment.


I love this, though, because I love technology, building things, and making things work. So the idea that I have to stretch myself outside of my cozy networking comfort zone is an exciting challenge I’m looking forward to.


Cloud doesn’t fix everything, but organizations are certainly taking advantage of the benefits of moving some of their applications and services to today’s popular cloud provider platforms. This means that IT pros need a breadth of knowledge to provide the depth of technical skill a cloud design requires and today’s organizations demand.

I am fascinated by the fact that in over twenty years, the networking industry still deploys firewalls in most of our networks exactly the way it did back in the day. And for many networks, that's it. The reality is that the attack vectors today are different from what they were twenty years ago, and we now need something more than just edge security.


The Ferro Doctrine

Listeners to the Packet Pushers podcast may have heard the inimitable Greg Ferro expound on the idea that firewalls are worthless at this point because the cost of purchasing, supporting, and maintaining them exceeds the cost to the business of any data breach that might occur without them. To some extent, Greg has a point. After a breach has been cleaned up and the costs of investigation, fines, and compensatory actions have been taken into account, the numbers in many cases do seem to be quite close. With that in mind, if you're willing to bet on not being breached for a period of time, it might actually be a money-saving strategy to just wait and hope. There's a little more to this than meets the eye, however.


Certificate of Participation

It's all very well to argue that a firewall would not have prevented a breach (or delayed it any longer than it already took for a company to be breached), but I'd hate to be the person making that argument to my shareholders, or (in the U.S.) the Securities and Exchange Commission or the Department of Health and Human Services, to pick a couple of random examples. At least if you have a firewall, you get to claim, "Well, at least we tried." As a parallel, imagine that two friends have their bicycles stolen from the local railway station where they had left them. One friend used a chain and padlock to secure their bicycle, but the other just left their bicycle there because thieves can cut through a chain easily anyway. Which friend would you feel more sympathy for? The chain and padlock at least raised the barrier to entry to thieves carrying bolt cutters.


The Nature Of Attacks

Greg's assertion that firewalls are not needed does have a subtle truth to it -- if it's coupled with the idea that some kind of port-based filtering at the edge is still necessary. But perhaps it doesn't need to be stateful and, typically, expensive. What if edge security were implemented on the existing routers using (by definition, stateless) access control lists instead? The obvious initial reaction might be to think, "Ah, but we must have session state!" Why? When's the last TCP sequence prediction attack you heard of? Maybe it's been a long time because we have stateful firewalls, but maybe it's also because the attack surface has changed.
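

To make the stateless-versus-stateful distinction concrete, here is a toy sketch (purely illustrative, and in no way a real packet filter): the stateless ACL judges each inbound packet on its own header fields, while the stateful check also consults a table of flows it has already seen initiated from the inside:

    # Toy illustration only -- not a real firewall or packet filter.
    ACL = [
        # (protocol, destination port, action) for inbound traffic
        ("tcp", 443, "permit"),
        ("tcp", 80, "permit"),
    ]

    def stateless_check(proto, dst_port):
        """Stateless ACL: permit only what a rule explicitly allows."""
        for rule_proto, rule_port, action in ACL:
            if proto == rule_proto and dst_port == rule_port:
                return action
        return "deny"

    connections = set()  # stateful table of flows initiated from inside

    def note_outbound(proto, inside_host, inside_port, outside_host, outside_port):
        """Record an outbound flow so its return traffic can be recognized."""
        connections.add((proto, outside_host, outside_port, inside_host, inside_port))

    def stateful_check(proto, src_host, src_port, dst_host, dst_port):
        """Stateful check: permit return traffic of known flows, else fall back to the ACL."""
        if (proto, src_host, src_port, dst_host, dst_port) in connections:
            return "permit"
        return stateless_check(proto, dst_port)

    note_outbound("tcp", "10.0.0.5", 51000, "93.184.216.34", 443)
    print(stateful_check("tcp", "93.184.216.34", 443, "10.0.0.5", 51000))  # permit
    print(stateful_check("tcp", "203.0.113.9", 6000, "10.0.0.5", 3389))    # deny
    print(stateless_check("tcp", 443))                                     # permit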


Once upon a time, firewalls protected devices from attacks on open ports, but I would posit that the majority of attacks today are focused on applications accessed via a legitimate port (e.g. tcp/80 or tcp/443), and thus a firewall does little more than increment a few byte and sequence counters as an application-layer attack is taking place. A quick glance at the OWASP 2017 Top 10 List release candidate shows the wide range of ways in which applications are being assaulted. (I should note that this release candidate, RC1, was rejected, but it's a good example of what's at stake even if some specifics change when it's finally approved.)


If an attack takes place using a port which the firewall will permit, how is the firewall protecting the business assets? Some web application security might help here too, of course.


Edge Firewalls Only Protect The Edge

Another change, which has become especially prevalent in the last five years, is the idea of using distributed security (usually firewalls!) to move the enforcement point down toward the servers. Once upon a time, it was sometimes necessary to do this simply because centralized firewalls did not scale well enough to cope with the traffic they were expected to handle. The obvious solution is to have more firewalls and place them closer to the assets they are being asked to protect.


Host-based firewalls are perhaps the ultimate in distributed firewalls, and whether implemented within the host or at the host edge (e.g., within a vSwitch or its equivalent in a hypervisor), flows within a data center environment can now be controlled, preventing the spread of attacks between hosts. VMware's NSX is probably the most commonly seen implementation of a microsegmentation solution, but whether using NSX or another solution, the key to managing so many firewalls is to have a front end where policy is defined, then let the system figure out where to deploy which rules. It's all very well spinning up a Juniper cSRX (an SRX firewall implemented as a container) on every virtualization host, for example, but somebody has to configure the firewalls, and that's a task that, if performed manually, would rapidly spiral out of control.
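

As a rough illustration of that "define policy once, let the system place the rules" idea, the sketch below expresses policy against workload tags and compiles it into per-host rules. All of the names are hypothetical, and real products such as NSX obviously do far more than this:

    # Toy sketch of policy-driven microsegmentation: rules are written once
    # against workload tags, then expanded into concrete rules per host.
    workloads = {
        "web-01": {"host": "esx-01", "tags": {"web"}},
        "web-02": {"host": "esx-02", "tags": {"web"}},
        "db-01": {"host": "esx-02", "tags": {"db"}},
    }

    policy = [
        # (source tag, destination tag, protocol, port, action)
        ("web", "db", "tcp", 5432, "permit"),
        ("any", "db", "any", None, "deny"),
    ]

    def compile_rules():
        """Expand tag-based policy into concrete rules, grouped by host."""
        per_host = {}
        for src_tag, dst_tag, proto, port, action in policy:
            for dst_name, dst in workloads.items():
                if dst_tag not in dst["tags"]:
                    continue
                sources = [name for name, w in workloads.items()
                           if src_tag == "any" or src_tag in w["tags"]]
                per_host.setdefault(dst["host"], []).append(
                    {"protect": dst_name, "from": sources, "proto": proto,
                     "port": port, "action": action})
        return per_host

    for host, rules in compile_rules().items():
        print(host)
        for rule in rules:
            print("  ", rule)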


Containers bring another level of security angst too since they can communicate with each other within a host. This has led to the creation of nanosegmentation security, which controls traffic within a host, at the container level.


Distributed firewalls are incredibly scalable because every new virtualization host can have a new firewall, which means that security capacity expands at the same rate as compute capacity. Sure, licensing costs likely grow at the same rate as well, but it's the principle that's important.


Extending the distributed firewall idea to end-user devices isn't a bad idea either. Imagine how the spread of a worm like WannaCry could have been limited if the user host firewalls had been configured to block SMB while the worm was rampant within a network.


Trusted Platforms

In God we trust; all others must pay cash. For all the efforts we make to secure our networks and applications, we are usually also making the assumption that the hardware on which our network and computer runs is secure in the first place. After the many releases of NSA data, I think many have come to question whether this is actually the case. To that end, trusted platforms have become available, where components and software are monitored all the way from the original manufacturer through to assembly, and the hardware/firmware is designed to identify and warn about any kind of tampering that may have been attempted. There's a catch here, which is that the customer always has to decide to trust someone, but I get the feeling that many people would believe a third-party company's claims of non-interference over a government's. If this is important to you, there are trusted compute platforms available, and now even some trusted network platforms with a similar chain of custody-type procedures in place to help ensure legitimacy.


There's Always Another Tool

The good news is that security continues to be such a hot topic that there is no shortage of options when it comes to adding tools to your network (and there are many I have chosen not to mention here for the sake of brevity). There's no perfect security architecture, and whatever tools are currently running, there's usually another that could be added to fill a hole in the security stance. Many tools, at least the inline ones, add latency to the packet flows; it's unavoidable. In an environment where transaction speed is critical (e.g., high-speed trading), what's the trade-off between security and latency?


Does this mean that we should give up on in-depth security and go back to ACLs? I don't think so. However, a security posture isn't something that can be created once then never updated. It has to be a dynamic strategy that is updated based on new technologies, new threats, and budgetary concerns. Maybe at some point, ACLs will become the right answer in a given situation. It's also not usually possible to protect against every known threat, so every decision is going to be a balance between cost, staffing, risk, and exposure. Security will always be a best effort given the known constraints.


We've come so far since the early firewall days, and it looks like things will continue changing, refreshing, and improving going forward as well. Today's security is not your mama's security architecture, indeed.

There is something to be said for ignorance being bliss, and then there are times when it is good to know a little bit more about what you are getting into. My IT journey started over 20 years ago. All I knew going in was that I liked IT, based on my studies, and that the university I attended had 100% placement of its graduates in IT positions. That’s not a whole lot of detail to start from, but I was all in.


At the time I certainly didn’t have the foresight to understand how big this IT thing would become.


Done with college, done with learning

So I was done with college, and I was done taking tests forever, right? Wrong! I would be forever learning.

IT becomes part of you. It becomes natural to want to read a book, search the web for new insights, or start working with some of the latest new technologies.


Always learning

The best part of working in IT is the always-learning, always-growing nature of the industry. Even more exciting is that people who never spent a day studying IT, but are willing to learn, can easily move into this space. I have worked with history majors, music majors, sociology majors, and more. You name it. When you think about it, this is really cool!


As long as you have the drive to learn, keep learning, and get your hands dirty in technology, working in IT really is an opportunity for many.


Just getting started in IT?

Today, there are countless varieties of IT jobs. Organizations around the world are looking for smart, driven individuals. Be willing to research the answers to questions, and spend time on certifications. Certifications are important for everyone, but especially when you are getting started in your IT career. They show drive, and they prompt you to learn enterprise technologies that will benefit you both personally and professionally.


This approach will also provide a good foundation for your entire IT career. IT is full of opportunity, so also be sure to keep an open mind about what you can do. You will be sure to go places with a position-driven approach.


Best of luck!

THWACK members, I'm 100% loving the comments in this series! You all are giving me a much-needed boost in security thoughts and ideas. Thank you so much!


Conversation Number One led me to realize that I need to jot down the resources I use as my "go-to's." These are links to several places that help me stay cyber-aware, if you will. I would love for all of you to share your resources as well, so we can create a thread of wholesome greatness! tomiannelli, your comment from Conversation One, which provided a link for more information (18 U.S. Code § 1030 - Fraud and related activity in connection with computers), was really thoughtful. I truly appreciate the sharing of knowledge.


Now, let's dive in, shall we?


Security Conferences



Conferences - O'Reilly Media


SANS Events


Knowledge Links


Department of Homeland Security

I spend hours on this site trying to see which direction the government is leaning. I also like going there to view their education suggestions and to see which cybersecurity fields they are hiring in.


National Vulnerability Database

Checklists, data feeds, vulnerability metrics, and more resource links provided within. This is a bookmarked staple.


SANS Institute InfoSec

This is a white paper that I find myself reflecting on a lot, especially when I'm building new security plans with companies that have never really had one in place. The concepts and case studies within help to ground me for some reason.


Cisco Umbrella

Okay, if you click on this one it will want you to fill out information before you download any of their books. I'm a huge Cisco user and when it comes to security and concepts, well, I'm just like my best friend, Kate Asaff, when Apple has a release. Let's just say that I'm interested in the new capabilities and features.


There is SO much more, but these are the top picks that I consistently go back to. Now, DEF CON is not on any of my lists above, but that's merely because I assume it goes without saying.


The challenge now (drum roll, please), is to prompt EVERYONE reading this to share your favorite security sites. On your mark, get set, GO!

I'm home from VMworld Las Vegas for a few days before I turn around and do it all over again in Barcelona. Big events like VMworld remind me that I'm getting older, as my feet and back start to betray me. But the energy I get from engaging with customers keeps me going. It's great to talk through the problems they are facing and how our tools and products are helping. If you are in Barcelona next week, please stop by and say hello!


As always, here's some stuff I found on the internet you might find interesting. Enjoy!


465k patients told to visit doctor to patch critical pacemaker vulnerability

I'm now wondering what other medical devices might have software. Things like knees, hips, etc. This might get worse before it gets better.


WSU professor says IRS is breaking privacy laws by mining social media

If you are on Facebook bragging about getting away with tax fraud, I think you get what you deserve here.


Router flaws put AT&T customers at hacking risk

I'd wager we see these routers exploited for another hack before the end of the month.


Introducing KSQL: Open Source Streaming SQL for Apache Kafka

Am I the only one excited by the introduction of KSQL?


Microsoft just made it easier for programmers to use archrival Amazon's cloud

In a world where nothing makes sense, Microsoft has now helped foster a revenue stream for AWS.


Why is My Database Application so Slow?

HINT: It's not the database.


Massive black hole discovered near heart of the Milky Way

And it's full of socks that go missing from the clothes dryer.


This is the best part of VMworld in Las Vegas: the "Instant Bacon" appetizer at Strip Steak inside Mandalay Bay.

All too often, we data professionals are our own worst enemies when it comes to handling data and data management. Many data professionals and sysadmins fail to recognize that the danger in our own habits increases the risk that the business will fall short of its goals. The danger may not be as destructive as an all-out data breach, but we are often to blame for enabling our business end-users to lust after BIG DATA, resulting in data hoarding that leads to redundant, outdated, and trivial information (ROT).


So, while the world’s collective media shines a light on the never-ending list of security breaches, we suggest that there are common, and bigger, threats that data professionals need to guard against. Not all data professionals are guilty of every one of these sins; rather, the collection of individuals that comprise modern enterprise IT shops is culpable. In my session, "The Seven Deadly Sins of Data Management," fellow Head Geek Destiny Bertucci will join me to walk through examples for each data mismanagement sin.


THWACKcamp 2017 is a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers.


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO


As the roles and responsibilities of government IT professionals continue to evolve, these professionals will want to focus on investing in new skills in several key areas to better their chances for success.


Much of this evolution is being driven by agencies’ increasing predilection for hybrid IT environments. By hosting at least some of their IT infrastructure on the cloud while keeping a number of sensitive applications in-house, they satisfy the need to balance greater efficiencies with lockdown security.


SolarWinds released an IT trends report that tackles the issue of hybrid IT and its impact on the roles and responsibilities of network administrators. While not exclusively focused on the government arena, the findings are reflective of what we’re seeing among federal IT professionals.


Let’s take a closer look at some of the more noteworthy outside-the-box skills that the IT administrators surveyed for the report feel are worthwhile to pursue.


Vendor management


Today, being a federal IT manager means managing the cloud vendors, including identifying potential vendor partners and managing SLAs, costs, and the entire vendor/agency relationship.


It’s important for IT managers to receive training on how to effectively work with vendors and manage partner relationships. They also need greater insight into the overall goals of their agencies. Gaining this insight requires working closely with agency leadership so that IT managers can deliver an effective hybrid IT strategy that aligns with their agency’s objectives.




DevOps


Many agencies are adopting DevOps strategies to enhance the speed at which they can deliver solutions. However, old habits die hard, and many IT administrators may have a difficult time letting go of government’s traditionally siloed approach to IT management.


The walls between developers, engineers, and IT operations managers are quickly crumbling, though, and it’s important for all of these groups to lay the groundwork for working together. It starts with internal training that outlines individual roles and responsibilities while establishing an agency-endorsed approach to how these formerly disparate teams will work together. Network monitoring solutions can then provide a single, shared point of visibility into the performance of the entire hybrid IT environment.


Hybrid monitoring/management tools and metrics


The toughest part is gaining visibility into everything in a hybrid environment. How do you know whether applications hosted offsite are secure and operating appropriately? How do you maintain application performance as apps are migrated from on-premises to the cloud? There are solutions available that address these questions, but administrators must be trained on them to effectively and continuously monitor every aspect of their hybrid IT deployments.
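To make the visibility question a bit more concrete, here is a minimal sketch of the kind of synthetic check a hybrid monitoring tool automates for you: probe an offsite application endpoint, time the response, and confirm the TLS certificate isn't about to expire. The endpoint and thresholds below are hypothetical placeholders, not a recommendation for any particular product.

```python
import socket
import ssl
import time
from urllib.request import urlopen

ENDPOINT = "https://app.example.gov/health"   # hypothetical offsite application endpoint
HOST, PORT = "app.example.gov", 443
SLOW_THRESHOLD_SECONDS = 2.0                  # arbitrary "too slow" line for this sketch

# 1. Is the application answering, and how quickly?
start = time.monotonic()
with urlopen(ENDPOINT, timeout=10) as response:
    status = response.status
elapsed = time.monotonic() - start
flag = " (SLOW)" if elapsed > SLOW_THRESHOLD_SECONDS else ""
print(f"HTTP {status} in {elapsed:.2f}s{flag}")

# 2. Is the TLS certificate still valid, and for how long?
context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
days_left = int((ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) // 86400)
print(f"TLS certificate expires in {days_left} days")
```

Run something like this from outside the agency network on a schedule and you have a crude external view of an app you no longer host yourself; the commercial tools simply do this at scale, with history and alerting.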


The bottom line: learning is a key element to agencies’ success. Federal IT professionals need to take the time to learn new skill sets, and their agencies need to prioritize the institution of training programs that help these professionals attain them. This will lead to greater innovation—and more opportunities—for both agencies and their IT teams.


Find the full article on GovLoop.

In my last blog post, we talked about protecting data at rest and data in motion. Thanks for all the really good comments and feedback you left. I think they gave us some good food for thought, especially a few items I hadn’t talked about, including mobile device security and management. In this post, I want to take things in a slightly different direction and talk about health care policy and how it affects data availability.


After working on the insurance side of health care for the better part of a decade, it became very clear to me that business policy had created a mentality of "everything must be up 100% of the time," and in many ways, it was true. While we were supporting a nursing triage hotline, people often called in with potentially life-and-death situations. Obviously, the availability of the telephone system, which was network-based, was critical to operating our environment. Our contact center, also network-based, and the backend systems our triage nurses needed to access were just as critical. We couldn’t have an outage that prevented our callers from reaching the nurses they needed to speak to. Lives were literally in the balance.


So how does one go about ensuring that data availability is achieved and that services remain operational for levels of uptime well beyond those of a typical business? The answer to that, my friends, is architecture. You can only achieve the levels of high availability required in a healthcare environment when you specifically design for them. And these kinds of designs usually come with a mighty big price tag. But before I get into that part of the conversation, let’s break this challenge down into three steps. How do we go about achieving this level of uptime?


You design it to be redundant.


First, you gain a full understanding of your business requirements, which are most often non-technical in nature. Then you design a model, whether it be a network or a software application architecture, that removes any and all single points of failure. Ideally, this results in a design that can lose one or more critical components while operations continue, without the end-user noticing. This might mean network infrastructure, telecommunications circuits, application servers... it really can be anything. If a component can fail, you need to understand its failure modes and plan for how to mitigate them through a redundant design.
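One way to sanity-check a redundant design on paper is to run the availability math before you buy anything. The sketch below uses the standard series and parallel availability formulas; the component availability figures are invented for illustration, not measured values.

```python
def series(*availabilities):
    """A chain of dependencies: every component must be up (A = A1 * A2 * ...)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities):
    """A redundant group: only one member must be up (A = 1 - (1-A1)(1-A2)...)."""
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)
    return 1.0 - all_down

# Invented numbers: each circuit is 99.5% available, each application server 99.0%.
single_path = series(0.995, 0.99)                 # one circuit feeding one server
redundant_path = series(parallel(0.995, 0.995),   # dual circuits
                        parallel(0.99, 0.99))     # dual application servers

for label, availability in (("single path", single_path), ("redundant path", redundant_path)):
    minutes_down = (1.0 - availability) * 365 * 24 * 60
    print(f"{label}: {availability:.5f} availability, roughly {minutes_down:,.0f} minutes of downtime per year")
```

Even with made-up numbers, the exercise shows why redundancy carries that big price tag: going from thousands of minutes of expected downtime per year to dozens means buying (and maintaining) everything twice.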


You design it to be maintainable, and you take a proactive approach to maintenance.


No environment can operate forever without maintenance. You need a strategy in place for dealing with failed components or applications, one that also allows you to take proactive measures to prevent future service disruptions. This can mean end-of-life hardware replacement, application software patching, or any other standard maintenance task. Maintenance should be routine and have time allocated for it. Simply saying, "You can’t have a maintenance window" isn’t going to fly. So, forget that illusion right now.


You figure out how to monitor it so you can react before service impact occurs.


The final key to preparing an environment to be highly available is to monitor it. You must first know what "normal" looks like before you can determine what "abnormal" is. This applies as much to network performance as it does to software application performance. It’s always a moving target, and it’s a lot of work. There are a lot of really good off-the-shelf software packages that can help with the basics (insert shameless, unsolicited plug for some of the cool SolarWinds tools here), or you can develop your own monitoring solutions. I’m not going to tell you what to monitor or how to do it, but I am going to tell you that you need to figure out the answers to those questions and take the action appropriate for your environment.
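As a toy illustration of "knowing what normal looks like," here's a sketch that keeps a rolling baseline of a metric, response time in this case, and flags samples that drift well outside it. Real monitoring platforms handle seasonality, multiple metrics, and alert routing for you; the window size, threshold, and sample data here are arbitrary assumptions.

```python
import statistics
from collections import deque

class Baseline:
    """Rolling mean/standard deviation over the last `window` samples; flags outliers."""

    def __init__(self, window=120, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Record a sample and return True if it looks abnormal versus recent history."""
        abnormal = False
        if len(self.samples) >= 30:  # wait for some history before judging anything
            mean = statistics.mean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            abnormal = abs(value - mean) > self.threshold * stdev
        self.samples.append(value)
        return abnormal

# Hypothetical response-time samples in milliseconds; the last one is the anomaly.
baseline = Baseline()
for i, response_ms in enumerate([52, 48, 55, 50, 47] * 10 + [310]):
    if baseline.observe(response_ms):
        print(f"sample {i}: {response_ms} ms is abnormal -- time to investigate before users notice")
```

The point isn't the statistics; it's that you can't flag the 310 ms outlier until you've captured enough history to know that ~50 ms is normal for this service.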


Wrapping this discussion up, I know that achieving a truly highly-available IT environment sounds kind of like the Holy Grail, right? In many ways, it can be. I don’t know that you’ll ever achieve 100% of every one of these goals, but this is what you strive for, and how you need to approach it.


What do you think about the availability of IT services within your healthcare organization? What have some of your key challenges been? How are you addressing them? Do you have any tips, tricks, or battle scars to share that can help the rest of us? I'd love to hear your thoughts!


Until next time….

"The network is down!" screams an unhappy user via VOIP. Ugh! How are we able to stay on top of applications, databases, networks, and services as network engineers? Metrics are something we can all understand. So, why not combine these into one view for easier troubleshooting and helping to assess situations quickly and accurately?


Join me and Senior Product Managers Steven Hunt and Chris O’Brien for our THWACKcamp 2017 session, "Monitoring Like a SysAdmin When You're a Network Engineer," to learn how you can apply system monitors to cover your business-critical applications and be proactive about keeping network issues to a minimum. We will also cover how to verify the performance of systems and applications after network upgrades or new features have been applied, and discuss how to break down silos and engage with your systems teams to better monitor your network. You'll learn how to share dashboards that let you prove out your network before, during, and after a rollout.


We are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.


In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.


With over 16 hours of training, educational content, and collaboration, you won’t want to miss this!


Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!
