
Geek Speak


Thank You, IT Pros

Posted by sqlrockstar Employee Sep 14, 2017

They work in mystery, toiling away at all hours. Nobody ever sees them working, but many are happy with the results. And if anyone tries to reproduce their work, they end up disappointed. No, I’m not talking about the Keebler® Elves, although I suppose there are some similarities between these two groups of workers. Both are overworked, underpaid, and no one understands how they do their job so well.

 

I am talking about IT professionals, the unsung heroes of modern corporate enterprises around the globe. Except they are no longer unsung because back in 2015, SolarWinds created IT Pro Day! Created by IT professionals for IT professionals, IT Pro Day happens on the third Tuesday of September each year. IT Pro Day serves as a great reminder about all the work that goes on behind the scenes.

 

Here’s some data for you to think about this IT Pro Day:

 

  • IT pros spend 65% of their time managing IT and IT-related services
  • Nearly half of that time (47%) is dedicated to resolving technology issues for senior executives/chief officers

 

Let that sink in for a minute. Most of our time is spent catering to executives. You would think that the executives would appreciate all this effort, right? Maybe not:

 

  • 61% of IT pros are concerned about job security, with almost half (42%) suggesting the key reason is that company leadership does not understand the importance of IT

 

Okay, so maybe the executives appreciate the effort, but IT pros don’t believe that the executives understand the importance of IT. Which only seems odd when you find out everyone else does understand the importance:

 

  • 63% of end-users agree that IT has a greater impact on their daily work lives than the executives in the corner office

 

I’ve always thought most executives started out as regular employees. I guess I was wrong, because if that were true, then the above numbers would be different. And so would this one:

 

  • 91% of IT pros surveyed work overtime hours, and of those, 57% do so with no compensation for working overtime

 

Lots of overtime, for people who don’t understand the importance of quality IT staff. Overworked. Underpaid. And no one can explain what it is they do for work. But we are dedicated to making things better for others:

 

  • 25% of IT professionals agree that half of the time, end-users who try to solve their own IT problems ultimately make things worse

 

Okay, so making things better for others also makes things better for us. But IT pros aren’t just looking out for the people (or themselves). They’re also looking out for the business:

 

  • 89% of IT professionals most fear a security breach

 

Somehow, all this data makes sense to me. I understand each data point because I have lived each data point. I am an IT pro, and damn proud to say that to anyone who cares to listen. Oh, who am I kidding? I’ll say it even to people who don’t care to listen.

 

IT pros don’t do this for money. We aren’t interested in that. (But it’s nice, don’t get me wrong, and here’s hoping someone in the corner office on the fourth floor sends me bacon for Christmas this year.) We truly love what we do for work:

 

  • 94% of IT pros say they love what they do

 

Here’s to you, IT pro. Enjoy your day. Walk with your head held high. Smile a few seconds longer when an executive asks you to fix their kid’s iPad®.

 

You’ve earned this day, and many more.

 

Thank you.

 

 

There are many things I wish I had known when I was starting out in IT, and one of them is the value of soft skills. Organizations want people with the drive and willingness to learn, but the ability to communicate, support, empathize with, and help other people in the business will go a long way toward your success in any enterprise.

 

Finding a Job

 

Over my years in the field, I have sat on both sides of the interview table. I have found that it always starts with how you relate to others and whether you can have a real conversation with the person in front of you. During interviews, I have met people who were proud to be the guy or gal from Office Space with the red stapler: hiding out, with no social skills. I have never once seen one of them hired into an organization I have worked in. So, what are the key skills a person must have to succeed in IT? Let’s break it down here.

 

  • Communication – The ability to hold a conversation will go a long way in your IT career. In most IT roles, staff interact with the business daily, so you need to be able to listen first and then assist by articulating clearly. I read somewhere that you should be able to explain complex technology so simply that even a child can understand it. That is not always an easy task, and I compare it to visiting the doctor. Doctors have plenty of complex terminology, just as we do, but at the end of the day they need to strip it out of the conversation and explain what they are doing in terms a non-medical professional can understand. That is the level of communication required to be successful in your IT career.

 

  • Negotiation – The art of negotiation matters everywhere in life, but here is how it applies to your IT career. When you are evaluating third-party products to support your organization, are you going to pay retail price? No way! Negotiation is necessary. How about when you are talking to a future employer about salary? Do you ever take the first offer? No way! We even get to negotiate with our users, management, and teams in IT. They may ask for the most complex and outlandish technology to do their jobs. You may be inclined to say no, but that is not how it works. Figure out what it takes, price it out, and odds are they won’t pursue it. That is the art of negotiation.

 

  • Empathy – Always empathize with the frustrated user. They are likely consuming the technology you implement. Even when the issue isn’t your fault, it is important to show that you understand they are having a hard day and, more importantly, that you will do what you can to resolve their issue as quickly as possible.

 

Soft skills go further than the key ones I have highlighted here, but my hope is that this got you thinking. IT is no longer full of people who can’t communicate with others. That is a stereotype that needs to go away.

 

Long-term success

 

The only way to be successful in IT is to communicate well and play nice with others.  Use those soft skills that you have.  Any other approach, no matter how well you know your job, will find you looking for a new one sooner rather than later.

In my previous blog, I discussed the somewhat unique expectations of high availability as they exist within a healthcare IT environment. It was no surprise to hear the budget approval challenges that my peers in the industry are facing regarding technology solutions. It also came as no surprise to hear that I’m not alone in working with businesses that demand extreme levels of availability of services. I intentionally asked some loaded questions, and made some loaded statements to inspire some creative dialogue, and I’m thrilled with the results!

 

In this post, I’m going to talk about another area in healthcare IT that I think is going to hit home for a lot of people involved in this industry: continuity of operations. Call it what you want. Disaster recovery, backup and recovery, business continuity, it all revolves around the key concept of getting the business back up and running after something unexpected happens, and then sustaining it into the future. Hurricane Irma just ripped through Florida, and you can bet the folks supporting healthcare IT (and IT and business, in general) in those areas are implementing certain courses of action right now. Let’s hope they’ve planned and are ready to execute.

 

If your experiences with continuity of operations planning are anything like mine, they evolved in a sequence. In my organization (healthcare, on the insurance side of the house), the first thing we thought about was disaster recovery. We made plans to rebuild from the ashes in the event of a catastrophic business impact, focusing mainly on getting back up and running. We spent time looking at solutions like tape backup and offline file storage. We spent most of our time talking about factors such as recovery-point objective (to what point in time are you going to recover) and recovery-time objective (how quickly can you recover back to that pre-determined state). We wrote processes to rebuild business systems, and we drilled and practiced every couple of months to make sure we were prepared to execute the plan successfully. It worked. We learned a lot about our business systems in the process, and ultimately developed the skills to bring them back online in a fairly short period of time. In the end, while this approach might work for some IT organizations, we realized pretty quickly that it wasn’t going to cut it long term as the business continued to scale. So, we decided to pivot.
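Those two recovery metrics can be made concrete with a small sketch. This is an illustrative back-of-the-envelope check, not part of any real DR plan; the function names and sample numbers are mine:

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """RPO check: if backups run every `backup_interval`, the worst-case
    data loss is one full interval (an outage just before the next backup)."""
    return backup_interval

def meets_objectives(backup_interval: timedelta, restore_time: timedelta,
                     rpo: timedelta, rto: timedelta) -> bool:
    """True if worst-case loss fits the RPO and the rebuild fits the RTO."""
    return worst_case_data_loss(backup_interval) <= rpo and restore_time <= rto

# Nightly tape backups with a 6-hour rebuild, against a 24h RPO / 8h RTO:
ok = meets_objectives(backup_interval=timedelta(hours=24),
                      restore_time=timedelta(hours=6),
                      rpo=timedelta(hours=24),
                      rto=timedelta(hours=8))
print(ok)  # -> True
```

The useful takeaway is that the backup schedule bounds your RPO and the practiced rebuild time bounds your RTO, which is exactly why we drilled so often.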

 

Next, we started talking about the next evolution of our IT operational plan: business continuity. So, what’s the difference, you ask? Well, in short, everything. With business continuity planning, we’re not so much focused on getting back to some point in time within a given window; instead, we’re focused on keeping the systems running at all costs, through any event. A business continuity strategy costs a whole lot more, but it can be done. Rather than spending our time learning how to reinstall and reconfigure software applications, we spent it analyzing single points of failure in our systems: software applications, processes, and the infrastructure itself. As those single points of failure were identified, we started to design around them. We figured out how to travel a second path if the first path failed, to the extreme of building a completely redundant secondary data center a few states away so that a localized event could never impact both sites at once. We looked at leveraging telecommuting to put certain staff offsite so that, if a site became uninhabitable, we had people who could keep the business running. To that end, we largely stopped having to run our drills because we were no longer restoring systems. We just kept the business survivable.

 

While some of what we did was specific to our environment, many of these concepts can be applied across the greater IT community. I’d love to hear what disaster recovery or business continuity conversations are taking place within your organization. Are you rebuilding systems when they fail, or are you building the business to survive? (There is certainly a place for both, I think.)

 

What other approaches have you taken to address the topic of continuity of operations that I haven’t mentioned here? I can’t wait to see the commentary and dialogue in the forum!

Anyone who is having issues with performance or considering expanding their deployment has had to wrestle with the question of how, exactly, to get the performance they need. This session will focus on maximizing performance, whether tuning equipment to optimize capabilities, tuning polling intervals to capture the data you need, or adding additional pollers for load balancing and better network visibility.

 

In the "Orion at Scale: Best Practices for the Big League" session, Kevin Sparenberg, product manager, SolarWinds, and Head Geek Patrick Hubbard will teach you best practices for scaling your monitoring environment and ways to confidently plan monitoring expansion. They will focus on maximizing the performance of your expanded deployment, and more!

 

THWACKcamp 2017 is a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from across the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels covering best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite, and more. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into how managed service providers assure reliable service delivery to their subscribers.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

Training is a topic I hold near and dear to my heart. Here are some of my thoughts about how a company will succeed or fail based on the training (and thereby the competence) of their technical staff.

 

 

My team members decide what they need to learn to better support our needs, then set aside a couple of hours each week, during work hours, to do training. This is informal, undirected time that benefits the company a lot!

 

 

Companies miss out when they don’t allocate formal time and funds to help ensure that their employees have professional training. It doesn’t matter whether those needs involve learning internal safety procedures, corporate IT security policies, basic or advanced switching/routing/firewalling, setting up vMotion, VoIP, or storage LUNs, or just learning to smile while talking to customers on the phone.

 

Companies that don't budget time and money to train their staff risk not having the right staff to

  • Answer questions quickly
  • Produce great designs
  • Provide excellent implementations
  • Troubleshoot problems efficiently and effectively

 

It may surprise or dismay you, but training is more effective when it's done off site. Being at a training facility in person--not remotely or via eLearning--gets you more bang for your training dollars. It may look more expensive and inconvenient than participating in recorded or online/remote training sessions, but that perception is deceiving.

 

Relying solely on distance learning has unique costs and drawbacks:

  • Technical problems
    • Hearing audio
    • Sharing screens
    • Losing training time while waiting for the instructor to troubleshoot others' technical problems
  • Missing out on the pre-class, lunchtime, and post-class conversations and meetings. I've learned a lot from sharing information with students during these "off class" times. I've made some personal connections that have helped me get a lot more out of the training, long after the sessions are over. Those opportunities are lost when a class is attended online.
  • Remote eLearning sessions conducted onsite are ineffective due to work interruptions. Work doesn’t stop when you are attending training sessions at your desk. The help desk calls our desk phones when we are needed, and our cell phones when we’re not at our desks. People stop by for help without notice (we call these "drive-bys"), expecting us to interrupt our online training session to deal with their issues whenever they stop by our cubes. Hours or days of training are lost this way.
  • Remote or recorded training sessions are often dry and time-consuming. We don’t need to sit through introductions and explanations of training settings, yet that’s what some companies include in their online training offerings. These sessions end up being cut-rate solutions for people or companies who can’t afford to do training the right way. Actual hands-on, real-time, face-to-face experiences are far richer, and they are critical to getting the most out of every training dollar. Plus, getting out of the office encourages active participation during training, and results in a refreshed employee coming back to work. Training is no vacation (especially a regimen of 12- to 14-hour classes for four or five days straight), but a change of environment is a welcome pick-me-up.

 

Relying on people to seek their own training using their own time and money is often a mistake

You can end up with people who either can’t serve your company’s needs or are burned out and frustrated. They’ll look for a company that properly supports them with in-house training, and you’ll potentially lose whatever you budgeted to train them, as well as the time invested in their learning curve as new employees.

 

To avoid this, establish a corporate policy that protects your investment.

  1. If a person leaves within twelve months of receiving training at the company's expense, they must reimburse the company for travel costs, meals, hotel, and tuition.
  2. If a person leaves between twelve months and twenty-four months after receiving training at the company's expense, they must only reimburse the company the cost of the tuition, not the travel, hotel, or meals.
  3. Once a person has been with the company for some arbitrary longer length of time (7-8 years or so), they don't have to reimburse any training costs when they leave, no matter how soon after training they take off. Your human resources team should be able to provide statistics about the likelihood of a person staying with the company after X years. Use their figures, or you can omit this option.

 

 

 

If you don't fund enough training for your people, you won't have the needed tools for the job when you need them. Your company will not prosper as well as it should. Those underappreciated employees will either inadvertently slow down your progress, or they'll take their services to a company that appreciates them. They'll see their value when the new company reinvests in those employees by sending them to great training programs.

 

How much does training cost?

 

The real question is, "How much does it cost to have untrained people on staff?"

If your people can't do the job because they haven't been trained, they'll make mistakes and provide poor recommendations. You won't be able to trust them.  You'll have to contract out for advanced services that bring in a hired gun to solve one issue one time. Once the expert leaves, you still have needs that your staff can't fill. Worse, you don't have impartial internal experts to advise you about the directions and solutions you should implement.

 

You can find many different vendor-certified training solutions at varying price points, but we can talk about some general budget items for a week of off-site training.

  • Tuition:  ~$3,500 - $6,000  (or more!) for a one-week class at the trainer's facility
  • Travel:
    • Flight ~$750 (depending on source and destination)
    • Car rental ~$300 (again, depends on distance, corporate discounts, etc.)
    • Hotel ~$150 per night (roughly)
    • Meals ~$125 per day (this is pretty high, but we're just looking at ballpark figures here)

 

You could easily spend $7,500 or more for one week of training for one person.
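Here’s the tally behind that figure, using the ballpark numbers above and assuming a five-night, five-day trip (swap in your own quotes):

```python
# Back-of-the-envelope total for one person, one week of off-site training.
tuition = 3500          # low end; can run to $6,000 or more
flight = 750
car_rental = 300
hotel = 150 * 5         # ~$150 per night, 5 nights
meals = 125 * 5         # ~$125 per day, 5 days

total = tuition + flight + car_rental + hotel + meals
print(total)  # -> 5925 at the low-end tuition; ~8425 at the $6,000 high end
```

Tuition dominates the bill, which is why the discounts and credits discussed below matter so much.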

 

Consider discounts and special offers.  You may be able to reduce your company's training costs to almost zero, especially if your employees live in the same city that is hosting the training.

  • Cisco Learning Credits can pay for all of your Cisco training if you have a good arrangement with your Cisco reseller and you choose a training company that accepts Learning Credits. If you don’t have Cisco hardware, approach your vendor or VAR for free or discounted training.
  • Some training centers offer big discounts or two-for-one training (or better) opportunities. It never hurts to ask for incentives and discounts to use their services.
  • Some training companies cover all hotel costs when training at their sites!
  • Some training programs include breakfast and lunch as part of the overall cost, leaving you to expense only dinners.
  • Car rental may not be required if you select a hotel adjacent to the training facility. Walk between them, rely on the hotel's airport shuttle, or use a taxi.

 

Do not rely solely on Cisco Learning Credits (CLCs)

 

A CLC is typically worth about $100, so if a class costs $3,500, you need 35 Learning Credits for an employee to have "free" training. Of course, those Learning Credits are NOT free. Your company either buys them (at a discount) or earns them as an incentive for its business. Perhaps you can sign an agreement with Cisco or your VAR that guarantees you’ll spend X dollars on new hardware or services annually, and in return receive some percentage of X to use as Learning Credits. I’ve worked with two VARs who do this, and it’s much appreciated.
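The credit math is simple enough to sketch; the ~$100-per-credit value is the approximation used above, not an official rate:

```python
import math

CLC_VALUE = 100  # approximate dollar value of one Cisco Learning Credit

def credits_needed(class_cost: float) -> int:
    """Round up: you can't redeem a fraction of a Learning Credit."""
    return math.ceil(class_cost / CLC_VALUE)

print(credits_needed(3500))  # -> 35 credits for a $3,500 class
```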

 

CLCs are never enough to cover all of our training needs. For one thing, they’re only good for Cisco training; if you run F5 gear, CLCs are of no value for that training. Many training companies offer 2-for-1 discounts, buy-one-get-a-second-at-50%-off, or better. And you can stretch those dollars further with a good "Train the Trainer" program: select a person with great communication and comprehension skills to receive the training, and when they return to the company, they train their peers. They’re fresh, they have contacts from their class who can be queried for answers to questions, and they may save you the cost of sending additional people to training.

 

Relying solely on CLCs means you’ve either got to spend a lot of capital dollars up front (to build up a bank of CLCs to use over the next twelve months), or you need more budget to cover the training gap. Allocate sufficient funds to ensure your people have the exposure, training, and knowledge to correctly guide your company to a better IT future. I can’t emphasize this enough!

 

Discover your training needs. I have found that each analyst typically needs two weeks of off-site training annually, perhaps more for the first few years, until everyone is up to speed.

 

Why so much training?  Training is necessary for your team to:

  • Keep up with new versions, bug fixes, better ways of doing things, and security vulnerabilities and their solutions.
  • Do the highly technical and specialized things that make your network, servers, and applications run their best.
  • Maintain their skill sets and stay aware of the right options and security solutions for your organization.
  • Properly design, implement, and support every new technology your company adopts.
  • Be trusted to provide the right advice to decision makers.

 

 

You COULD hire outside contractors to be your occasional technical staff... but then you’d be left with unthinking, non-advancing worker drones on staff, who’ll drag you down or leave you in the lurch when they find employers who will believe and invest in them.

 

Harsh? You bet! But when you understand the risks of having untrained people on staff, you see all the benefits that result from training.

 

If you have staff who sacrifice their personal expenses and family time (evenings, weekends, and holidays) to train themselves for the company's benefit, cherish them--they're unusual, and won't stay with you long.  They're on the fast path to leave you behind.  Give them raises and promotions to encourage them to stay, and compensate their training expenses. If you don't, they'll leave for the competition, who'll jump another step ahead of you.

 

Succeed by reinvesting in your staff, showing them they're appreciated by sending them to training, and they will help your company succeed.

Greetings from Barcelona! I’m here for VMworld and you can find me at the SolarWinds booth, at my session on Wednesday, or in the VM Village hanging out with other vExperts. If you are at the event, please stop by and say hello. I’d love to talk data with you.

 

As always, here are some links from the intertubz that I hope will hold your interest. Enjoy!

 

Equifax Says Cyberattack May Have Affected 143 Million Customers

There are only about 126 million households in the United States. So, yeah. You are affected by this.

 

Are you an Equifax breach victim? You could give up right to sue to find out 

As if suffering a breach isn’t bad enough, Equifax is tricking people into waiving their right to sue. Like I said, you are affected. Don’t bother checking. Equifax needs to notify you in a timely manner.

 

Three Equifax executives sold $2 million worth of shares days after cyberattack

If true, that they sold knowing about the breach, then my thought is this: Equifax can’t go out of business fast enough for me.

 

Surprising nobody, lawyers line up to sue the crap out of Equifax

Oh, good. That should solve everything because lawyers can go back in time to prevent the theft of our identities, right?

 

Windows 10 security: Microsoft offers free trial of latest Defender ATP features

Security should be free for everyone. Here's hoping Microsoft does the right thing and tries to protect everyone, always, for no additional cost. Too bad they didn’t help Equifax.

 

Hackers Gain Direct Access To US Power Grid Controls

If the Equifax stories didn’t rattle you enough, here’s one about hackers controlling your electricity.

 

A Simple Design Flaw Makes It Astoundingly Easy To Hack Siri And Alexa

EVERYTHING IS AWFUL

 

To Understand Rising Inequality, Consider the Janitors at Two Top Companies, Then and Now

Long, but worth the read. It’s a fascinating comparison between the American workforce 35 years ago and today.

 

The view from my hotel room at VMworld, overlooking Plaza Espanya at sunset:


SaaS and the SysAdmin

Posted by scuff Sep 12, 2017

In the SMB market, SaaS vendors are quick to promote that you can turn off your on-premises servers and ditch your IT guy/gal (I kid you not). In the enterprise, it’s unlikely that all of your workloads will move to SaaS, so IT pros may still be safe. But let’s pick on one technology for a moment as an example: Microsoft Exchange. Assuming you ditch your Exchange boxes for Exchange Online, what’s an Exchange administrator to do? How does their role change in a SaaS world?

 

What stays the same?
Administration: There’s still a need for general administration of Exchange Online, especially identity and access management. People will still join, leave, change their names, and move teams. Departments will still want distribution groups and shared mailboxes. The mechanics are different, and tasks will likely be done by someone administering the other Office 365 services at the tenancy level, but that’s not too different from enterprises that already have a separate "data security" team for IAM.

 

Hello, PowerShell: Speaking of changes in how you achieve things, proficiency in PowerShell is the best new skill to have, though PowerShell is not limited to Exchange Online/Office 365. If you’re already using PowerShell to administer on-premises Exchange servers, you’re more than halfway there.

 

Compliance: It’s rare to find an organization that leaves all the settings at their defaults. Exchange Online may still need tweaking to lock things down and apply the rules you’re using in-house to achieve and maintain policy or regulatory compliance. That can be as simple as blocked/allowed domains, or more complex, like Exchange transport rules and Data Loss Prevention settings.

 

Integration: We’ve been using SMTP to handle information flow and system alerts for a very, very long time now. It’s possible that you’ll need to replicate these connections to and from other systems with your Exchange Online instance. There’s a gotcha in there for aging multi-function fax machines that don’t support TLS (don’t laugh), but this connectivity doesn’t just go away because you’ve moved to the cloud.

 

End user support: Sorry, the Cloud won’t make all the support calls go away. Brace yourselves for calls that Outlook isn’t doing what it’s supposed to, and it’s only impacting one user. Then again, maybe that’s an Outlook problem and not an Exchange server problem (usually). A quick “do you see the same problem in Outlook Web Access” is your first troubleshooting step.

 

What changes?
Bye bye, eseutil: Sorry, not sorry: the Exchange database is no longer your problem. I will miss using eseutil to check and repair it.

 

No more upgrades: Patches, service packs, and major version upgrades are gone when the Microsoft team is managing the application. Ditto for the underlying server operating system.

 

Monitoring: We’re still interested in knowing if the service is down before the users have to tell us, but we’re no longer able to directly monitor the running Microsoft Exchange services. Instead, we’re monitoring the Office 365 status information and message center.
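That shift in monitoring can be sketched in a few lines: instead of watching Windows services, you poll the provider’s health feed and alert on anything not reporting as operational. The payload shape and status strings below are invented for illustration; the real Office 365 service communications API returns its own schema.

```python
# Hypothetical status payload, shaped loosely like a provider health feed.
status_feed = [
    {"service": "Exchange Online", "status": "serviceDegradation"},
    {"service": "SharePoint Online", "status": "serviceOperational"},
]

def degraded_services(feed):
    """Return the names of any services not reporting as operational."""
    return [entry["service"] for entry in feed
            if entry["status"] != "serviceOperational"]

alerts = degraded_services(status_feed)
if alerts:
    # In practice, feed this into your existing alerting pipeline.
    print("ALERT: " + ", ".join(alerts))
```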

 

Server provisioning and consolidation: Shutting down a big project and making people redundant? Expanding the business with a merger or acquisition? No more building servers or decommissioning them – just add more licenses or close accounts.

 

Your new role
The more things change, the more they stay the same. Though technology changes how we do our jobs, the things that need to be done don’t change. Yes, in this case Microsoft has the responsibility and the power for some parts that you would have taken care of with your own server. But I’m not seeing that the shift is enough to cut your hours in half just yet.

 

Join the conversation – let me know how adopting a SaaS solution has changed what you do in your role or how you do it.

In most office environments, power strips or surge protectors are a normal, everyday device that most of our computers, printers, copiers, etc. are plugged into. They’re fairly innocuous and probably something we take for granted, right? Just a normal piece of equipment in our office. What if that power strip was actually a hacker’s tool, and was quietly facilitating the exfiltration of private data from your organization?

 

Check out the Power Pwn – a fully functional 8-outlet, 120V power strip, that also contains anything you would need to penetrate a network, including dual Ethernet ports, a high-gain wireless antenna, Bluetooth, and optional 3G/LTE. Once this device is carefully placed in your environment, a hacker can remotely access and control it, and begin to explore and attack anything it can see on your network.

 

Maybe your network team has things locked down fairly tight, and plugging this thing into an Ethernet port for a photocopier isn’t going to get access to anything important. Then an employee decides they need more power outlets at their desk and quietly moves this shiny new surge protector off the copier and to their desk. I mean, that copier only needs one power outlet; why waste eight perfectly good outlets there? Now they happily "protect" their desktop computer with the relocated device. Let’s say this employee is a member of your finance team, or human resources… and their desktop Ethernet port has a lot more access to sensitive information on your network…

 

This is one example of the "toys" (er, tools) available to anyone interested in doing a little hacking. More often than not, they are sold as penetration testing devices for use by security professionals hired by private companies to do a vulnerability assessment or penetration test on their networks.

 

These are also tools that you, the IT pro, can use to do a little hacking of your own, allowing you to learn more about the potential threats to your environment and to further protect it with that knowledge.

 

A Pineapple, a Ducky, and a Turtle walk into a bar…

 

As technology has advanced over the last 50 years according to Moore’s Law, processors, and the devices that use them, have scaled down considerably as well. This has allowed the emergence of tiny microcomputers that are as powerful as, or more powerful than, their full-sized counterparts from three to five years ago.

 

The Power Pwn is just one example of a pre-fabricated, plug-and-play hacking device, with a tiny embedded computer, capable of running a fully functional operating system and tool package that allows for penetration and possible attack of an unsuspecting network.

 

Check out the store at Hak5Shop for some of these other great tools.

 

For those interested in lurking about the airwaves, there is the WiFi Pineapple. This nefarious little device allows you to scan and analyze wireless networks. With it you can create your own ad-hoc network, or mimic your local coffee shop’s wireless network and intercept and analyze the traffic crossing it from other patrons while they check their bank balances sipping on a latte.
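On the defensive side, spotting a would-be Pineapple often starts with noticing the same SSID being broadcast by more than one radio. Here is a minimal Python sketch, assuming you already have a list of (SSID, BSSID) pairs from a wireless scan. Keep in mind that legitimate multi-AP networks also reuse SSIDs, so treat a hit as a lead, not proof:

```python
from collections import defaultdict

def find_evil_twins(scan_results):
    """Group scanned access points by SSID and flag any SSID broadcast
    by more than one BSSID (a possible evil twin)."""
    by_ssid = defaultdict(set)
    for ssid, bssid in scan_results:
        by_ssid[ssid].add(bssid)
    return {ssid: bssids for ssid, bssids in by_ssid.items() if len(bssids) > 1}

scan = [
    ("CoffeeShopWiFi", "aa:bb:cc:11:22:33"),  # the legitimate AP
    ("CoffeeShopWiFi", "de:ad:be:ef:00:01"),  # a suspicious duplicate
    ("HomeNet",        "aa:bb:cc:44:55:66"),
]
print(find_evil_twins(scan))  # flags CoffeeShopWiFi
```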

 

I hope this goes without saying, but I’ll say it anyway: DO NOT DO THIS. This is about hacking without getting arrested.

 

It would be perfectly okay to use a Wifi Pineapple at home, and intercept your teenager’s Snapchat conversations perhaps…

 

The USB Rubber Ducky looks like a harmless USB key, but plug it into the USB port of your Windows, macOS, Android, or Linux device and it will fool any of those operating systems into believing it’s just a keyboard (getting around any pesky security policy blocking USB drives by acting as an HID, or Human Interface Device), and then drop a malicious payload, open a reverse shell, or log keystrokes.

 

Right, but people don’t put strange USB keys into their devices, right? Well, it turns out about half of them still do. A presentation from Black Hat 2016 discussed an experiment in which almost 300 USB keys were dropped at random around the campus of the University of Illinois, and 48% of them reported back to the researchers, indicating they had been plugged in and were able to establish connectivity to the researchers’ command and control server. There was no malicious payload here, obviously, but it shows that what we as IT pros may see as common sense isn’t all that common. People see a free 32GB USB key sitting on a park bench and think it’s perfectly okay to plug it in and check it out.

 

Pick up a few Duckys and set up a quick test at your office, with permission of course, and see if Dave from HR likes free USB keys. I bet he does.

 

Another cool tool from this site is the LAN Turtle. This little guy looks like a USB Ethernet adapter – perfect for the latest lightweight notebooks that don’t have Ethernet, right? Well, now you’ve provided an attacker with remote access, network scanning, and man-in-the-middle capabilities.

 

Finally, if you haven’t already bought one, get yourself a Raspberry Pi. These microcomputers are the perfect platform for doing some playing/hacking in your home lab or at work, especially coupled with one of the OS or software packages I will talk about next.

 

Sharks and Dragons

 

I’ll caveat this segment by suggesting that you get comfortable with Linux, of any flavor. I don’t mean you need to grow a ridiculous beard and lose the ability to walk outside in daylight, but at least be able to navigate the filesystem, install applications, do some basic configuration (networking, users, permissions), and edit text. I don’t want to open the Nano vs. Vi can of worms here, but let’s just say I opened Vi once, and I’m still stuck in it, so use Nano if you’re a ‘Nix rookie like me.

 

Also if you know how to get out of Vi, please let me know.

 

The reason here is that many of the popular pentest/hacking software packages are Linux-based. Many of the tools are open source, and community-driven, and so they are written to run in a command line on an open source platform like Linux.

 

There are some that have Windows/OSX variants or some sort of GUI, but if you want to get your hands on all the bells and whistles, the shell is your friend.

 

Having said all of that, I’ll start with a tool that actually doesn’t need Linux, and that is the packet capture tool – Wireshark. Wireshark does one thing and it does it really well: it captures network traffic at the packet level, wired or wireless, and allows you to actually see the traffic crossing your network in extreme detail. It’s a cornerstone tool for network administrators for troubleshooting, and it’s a powerful tool for security professionals who want to take a deep, granular view of the information crossing their networks.
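To get a feel for what “the packet level” means, it helps to decode a header by hand once. This is a rough illustration, not how Wireshark itself is implemented: a few lines of Python unpacking the fixed 20-byte IPv4 header into the same fields Wireshark shows in its packet detail pane. The sample packet below is hand-built for the demo:

```python
import struct

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte IPv4 header (options ignored)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: 10.0.0.1 -> 192.168.1.10, TCP, TTL 64
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 168, 1, 10]))
print(parse_ipv4_header(sample))
```

Wireshark does this (and vastly more, across hundreds of protocols) for every frame it captures, which is exactly why it is so useful for troubleshooting.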

 

Wireshark 101 by Laura Chappell, the preeminent expert on Wireshark, is recommended reading if you want to build a solid foundation in packet capture and analysis.

 

Next up, Kali Linux. I warned you about the Linux, right? Often referred to as simply “Kali,” this is a Debian-based Linux distribution that is actually a package of over 600 penetration testing and hacking tools. It’s the Swiss Army Knife for security professionals, and for hackers wearing hats of any color. While the underlying platform is still Linux, it does have a great GUI that provides access to the tools within. Not to mention the really cool dragon logo, which has made its way into popular culture with appearances in Mr. Robot.

 

Mr. Robot is required viewing if you’re interested in hacking, by the way.

 

Kali also has a fantastic resource available for learning how to properly use it – Kali Linux Revealed should also be added to your reading list if you want to take a deeper look at using Kali for your own purposes.

 

Less of a hacking tool, and more of a security analysis product, is Nessus. Nessus is primarily a vulnerability scanner, allowing you to discover and assess any significant security flaws in your environment. This isn’t a penetration test, mind you, but an assessment of software and operating systems within your network. It will identify devices that are exposed or vulnerable to malware, unpatched operating systems, and common exploits. It is free to use for individuals, and another software product I highly recommend testing within your own environment.
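At its core, much of what a vulnerability scanner reports comes down to comparing installed software versions against advisory data. Here is a toy sketch of that idea; the advisory entries and version numbers are made up for illustration and are not real advisories:

```python
# Hypothetical advisory data: package -> first fixed version (illustrative only)
ADVISORIES = {
    "openssl": (1, 0, 2),
    "bash":    (4, 3, 30),
}

def vulnerable(inventory):
    """Return packages installed at a version older than the first fix."""
    findings = []
    for name, version in inventory.items():
        fixed = ADVISORIES.get(name)
        if fixed and version < fixed:  # tuple comparison: (1, 0, 1) < (1, 0, 2)
            findings.append(name)
    return findings

print(vulnerable({"openssl": (1, 0, 1), "bash": (4, 4, 0), "nginx": (1, 13, 0)}))
# openssl is flagged; bash is at or past the fix; nginx has no advisory here
```

A real scanner like Nessus adds authenticated checks, service fingerprinting, and a continuously updated plugin feed on top of this basic comparison.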

 

Homework Assignment

 

All of the tools outlined here are simply that, tools. They can be used to learn and assess, or they can be used maliciously and illegally. For us, we want to learn and develop skills, rather than end up with lengthy prison terms because we packet-captured a bunch of credit card numbers at our local Starbucks.

 

So, please don’t do that.

 

If you are interested in hacking, as an IT professional, I’d highly encourage you to try and get your hands on the software I’ve outlined here at the very minimum. It’s all free, and doesn’t require a lot of resources to run. If you want to take things a bit further, get your hands on some of the hardware tools as well. The combined creative potential between the hardware and software here is limitless.

 

Mr. Robot was already mentioned as required viewing, but there’s more! If you haven’t already seen these multiple times, you budding hackers have a homework assignment – to watch the following movies:

 

WarGames (How about a nice game of chess?)

Hackers (Hack the planet!)

Swordfish (NSFW)

Sneakers (Setec Astronomy)

 

Please comment below and let me know of any other tools, hardware or software you'd recommend to a greenhorn hacker. What movies, books, or TV should be required viewing/reading?

The other day, I was talking with my dad and told him IT Pro Day was coming up, and that I needed to write something about it. "Why is it IT PRO Day?" he asked, "Why not just ‘IT People Day’ or ‘IT Enthusiasts Day’? Why leave out all those aspiring amateurs?"

 

My dad was trolling me using my own arguments from a debate we frequently had when I was a kid. You see, my dad has been a musician his whole life. He attended Music & Arts high school in NYC, then Juilliard and Columbia, and then had a career that included stints with the New York Philharmonic, NBC Symphony of the Air, and 46 years with the Cleveland Orchestra. Suffice it to say, my dad knew what it meant to be "a professional."

 

As a kid, I insisted that the only thing separating pros from amateurs was a paycheck (and the fact that he got to wear a tuxedo to work), and that this simplistic distinction wasn't fair. Of course, what was simplistic was my reasoning. Eventually I understood what made a musician a "pro," and it had nothing to do with their bank account.

 

So that was the nature of his baiting when I brought up IT Pro Day. And it got me thinking: what IS it that makes an IT practitioner a professional? Here's what I've learned from dear old dad:

 

First, having grown up among musicians, I can PROMISE you that being a professional has nothing to do with how much you do (or don't) earn at “the craft,” how obsessively you focus on it, or how you dress (or are asked to dress) for work.

 

Do you take your skills seriously? Dad would say, "If you skip one day of practice, you notice. Two days and the conductor notices. Three days and the audience notices. Pros never let the conductor notice." In an IT context, do you make it your business to stay informed, up to date, know what the upcoming trends are, and get your hands on the new tech (if you can)? It even extends to keeping tabs on your environment, knowing where the project stands, and being on top of the status of your tickets.

 

"If you're not 30 minutes early, you're an hour late," Dad would say as he headed out at 6 p.m. for an 8 p.m. concert. "I can't play faster and catch up if I'm 10 minutes late, you know!"

 

Besides the uncertainty of traffic, instruments needed to be tuned, music sorted, warm-ups run. While not every job requires that level of physical punctuality, it's the mental piece that's relevant to us. Are you "present" when you need to be? Do you do what it takes to make sure you CAN be present when it is time to play your part, whether that's in a meeting, during a change control, or when a ticket comes into your queue?

 

When you first learn an instrument, a lot of time is spent learning scales. For those who never made it past the beginner lessons, I have some shocking (and possibly upsetting) news: even the pros practice scales. In fact, I'll say *especially* the pros practice scales. I asked dad about it. He said that you need to work on something until you don't have to think about it any more. That way, it will be there when you need it. As IT pros, we each have certain techniques, command sequences, key combinations, and more that just become a part of us and roll off our fingers. We feel like we could do data center rollouts in our sleep. We run product upgrades "by the numbers." The point is that we've taken the time to get certain things into our bones, so that we don't have to think about them any more. That's what professionals do.

 

This IT Pro Day, I'm offering my thanks and respect to the true IT professionals. The ones who work every day to stay at the top of their game. Who prepare in advance so they can be present when they're needed. Who grind out the hours getting skills, concepts, and processes into their bones so it's second nature when they need them. Doesn't that sound like the kind of IT pros you know? The kind you look up to?

 

The truth is, it probably sounds a lot like you.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Every federal IT pro should be doing standard database auditing, which includes:

 

  • Taking a weekly inventory of who accessed the database, as well as other performance and capacity data
  • Ensuring they receive daily, weekly, and monthly alerts through a database-monitoring tool
  • Keeping daily logs of logins and permissions on all objects
  • Maintaining access to the National Vulnerability Database (NVD), which changes daily
  • Performing regular patching, particularly server patching against new vulnerabilities
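Several of these basics are scriptable. As one hedged example, the daily log of logins can be summarized in a few lines; this sketch assumes a simplified "timestamp user action" log format, where real audit logs will of course differ:

```python
from collections import Counter

def summarize_logins(log_lines):
    """Count successful logins per user from 'timestamp user action' lines."""
    logins = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3 and parts[2] == "LOGIN":
            logins[parts[1]] += 1
    return logins

log = [
    "2017-09-14T08:01 alice LOGIN",
    "2017-09-14T08:02 bob LOGIN",
    "2017-09-14T08:30 alice LOGIN",
    "2017-09-14T09:00 alice QUERY",
]
print(summarize_logins(log))  # alice: 2, bob: 1
```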

 

These are just the basics. To optimize database auditing with the goal of improving IT security, there are additional core steps that federal IT pros can take. The following six steps are the perfect place to start.

 

Step 1: Assess Inventory

 

Tracking data access can help you better understand the implications of how, when, where, and by whom that data is being accessed. Keeping an inventory of your personally identifiable information (PII) is the perfect example. Maintaining this inventory in conjunction with your audits can help you better understand who is accessing the PII.
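One way to picture this step: cross-reference the audit trail against the PII inventory. A minimal sketch, with made-up table names standing in for your real inventory:

```python
# Hypothetical PII inventory: the tables that hold sensitive data
PII_TABLES = {"employees", "payroll"}

def pii_access(audit_rows):
    """Return the (user, table) audit entries that touched PII tables."""
    return [(user, table) for user, table in audit_rows if table in PII_TABLES]

audit = [("alice", "payroll"), ("bob", "inventory"), ("carol", "employees")]
print(pii_access(audit))  # alice and carol touched PII
```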

 

Step 2: Monitor Vulnerabilities

 

Documented vulnerabilities are being updated every day within the NIST NVD. It is critical that you monitor these on a near-constant basis. We suggest a tool that monitors the known-vulnerabilities database and alerts your agency, so action can be immediate and risks are mitigated in near real-time.
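The alerting logic itself can be very small once a feed has been parsed. A sketch, assuming vulnerability records have already been reduced to dictionaries with an ID and a CVSS base score; the entries below are invented for illustration:

```python
def high_risk(cves, threshold=7.0):
    """Filter a parsed vulnerability feed down to the entries worth an
    immediate alert (CVSS base score at or above the threshold)."""
    return [c["id"] for c in cves if c.get("cvss", 0.0) >= threshold]

# Hypothetical entries shaped like a simplified NVD feed record
feed = [
    {"id": "CVE-2017-0001", "cvss": 9.8},
    {"id": "CVE-2017-0002", "cvss": 4.3},
    {"id": "CVE-2017-0003", "cvss": 7.5},
]
print(high_risk(feed))  # the 9.8 and 7.5 entries
```

In practice, a monitoring tool would also match each entry against your software inventory before alerting, so the agency only sees vulnerabilities it is actually exposed to.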

 

Step 3: Create Reports

 

Make sure you have a tool in place that takes your logs and provides analysis. This should, ideally, be part of your database monitoring software. Your reports should tell you in an easy-to-digest format who’s using what data, from where, at what time of day, the amount of data used, etc.

 

Step 4: Monitor Active Directory®

 

Monitoring Active Directory tells you who is accessing information, which matters most when the person shouldn’t be accessing that data. That’s why it is critical to understand more than just who is accessing your data; you must have a clear understanding of who is accessing which data, and when they are accessing it.

 

Step 5: Create a Baseline

 

If you have a baseline of data access on a normal day, or at a particular time on any normal day, you’ll know immediately if something is outside of that normal activity. Based on this baseline, you’ll immediately be able to research the anomaly and mitigate risk to the database and associated data.
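A baseline doesn’t have to be fancy to be useful. One simple approach, sketched here with made-up numbers, is to flag any day whose access count falls more than a few standard deviations outside the historical mean:

```python
from statistics import mean, stdev

def is_anomalous(history, today, n_sigma=3):
    """Flag today's access count if it lies more than n_sigma standard
    deviations away from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > n_sigma * sigma

baseline = [980, 1010, 995, 1005, 990, 1000, 1020]  # normal daily queries
print(is_anomalous(baseline, 1003))  # within the normal band
print(is_anomalous(baseline, 5000))  # worth investigating
```

Real monitoring tools refine this with per-hour and per-user baselines, but the principle of comparing current activity to "normal" is the same.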

 

Step 6: Create One View

 

Perhaps the most critical step to improving security through database auditing is understanding its role within the larger IT environment. It is worth the investment to find a tool that allows federal IT pros to see database audit information within the context of the greater infrastructure. Application and server monitoring should work in conjunction with database monitoring.

 

There is one final step: monitor the monitor. There should never be a single point of failure when performing database audits. Make sure you’ve got secondary checks and balances in place so no single tool or person has all the information, access, or control.

Find the full article on Federal Technology Insider.


The backup technology landscape is almost as complex as the environments it needs to protect. Do you go with point solutions to solve specific problems or a broader solution to consolidate backups into one platform? In this session, we discuss the latest backup approaches for each of the different types of IT environments you may need to protect.

 

In my session, "Understanding Backup Technologies and What They Can Do for You," I will be joined by Keith Young, Sr. Sales Engineer here at SolarWinds, to discuss the ins and outs of backup technologies and their impact on your IT environment.

 

THWACKcamp 2017 is a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

A big hearty THANK YOU to everyone who joined us at our SolarWinds booth, breakout and theater sessions, and Monitoring Morning! We were excited to converse with you in person about the challenges that practitioners face.

SolarWinds views from VMworld: the SolarWinds family at VMworld 2017, Monitoring Morning, the SolarWinds booth, and Future:Net.

 

There were plenty of announcements at VMworld. The two that stood out for me were:

  1. VMware Cloud on AWS was announced as a normalization of moving from a VMware vSphere environment to an AWS cloud environment. It runs as a single tenant on bare-metal AWS infrastructure, which allows you to bring your Windows Server licenses to VMware Cloud on AWS. Each software-defined data center can consist of 4 to 16 instances, each with 36 cores, 512GB of memory, and 15.2TB of NVMe storage. Initial availability is quite limited: there is only one region, and clusters run in a single AWS Availability Zone. The use cases for this service are data center extension, test and development environments, and app migration. I’ll withhold final judgment on whether this VMware Cloud derivative will sink or swim.
  2. AppDefense was another announcement at VMworld 2017. It is billed as an application-level security solution that uses machine learning to build workload profiles. It gathers behavioral baselines for these workloads and allows the user to implement controls and procedures to restrict any anomalous or deviated behavior.

 

Finally, I was invited to Future:Net, a conference within a conference. It was really cool to talk shop about the latest academic research, as well as what problems the next generation of startups are trying to solve.

Future:Net Keynote

P.S. Let me know if you will be at VMworld Europe 2017 in Barcelona. If so, definitely stop by to talk to me, chrispaap, and sqlrockstar.

Cloud fixes everything. Well, no it doesn’t. But cloud technology is finally coming out of the trough of disillusionment and entering the plateau of productivity. That means as people take cloud technologies more seriously and look at practical hybrid-cloud solutions for their businesses, engineers of all stripes are going to need to expand their skills outside their beloved silos. 

 

Rather than focusing only on storage or networking or application development, there is great value in IT professionals designing and building cloud solutions knowing a little bit about the entire cloud stack.

 

The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

 

We call it a “cloud stack” because of all the components built one on top of another. This includes elements such as networking, storage, virtualization, compute, load balancing, and application development tools, and, more specific to operations, things like user account management, logging, and authentication services. These are all built right into the IaaS cloud.

 

But when looking at the overall picture, the overall cloud stack, IaaS exists as the foundation for Platform as a Service (PaaS) offerings such as development tools, web servers, and database servers, which in turn serve as a platform for Software as a Service (SaaS) offerings such as email and virtual desktops.

 

So when an IT professional is looking at a cloud solution for their organization, regardless of their background and specific area of expertise, there’s a clear need to understand a little bit about networking, a little bit about storage, a little bit about virtualization, even a little bit about application development. Sure, there’s still a need for experts in each one of those areas, but when looking at an overall cloud (or more realistically hybrid-cloud) initiative, a technical engineer or architect must understand all those components to some extent to design, spec, build, and maintain the environment.

 

I really believe this has always been the case, though, at least for good engineers. The really good IT pros have always had some level of understanding of these other areas. Personally, as a network engineer, I’ve had to spin up VMs, provision storage, and work with virtualization platforms to one extent or another from the very beginning of my career, and I don’t consider myself that great of an engineer.

 

When I put in a new data center switching and firewalling solution, I’m sitting down with someone from the storage team, Linux team, Windows team, virtualization team, and maybe even the security team. Often I need to be able to speak to all of those areas because, when it comes down to it, our individual sections of the infrastructure all work together in one environment.

 

Cloud is no different

All those components still exist in a cloud solution, so when IT pros look at an overall design, there’s discussion about network connectivity, bandwidth and latency, storage capacity, what sort of virtualization platform to run, and what sort of UI to use to deliver the actual application to the end-user. The only difference now is that the cloud stack is one orchestrated organism rather than many more disparate silos to address individually.

 

For example, how will a particular application that lives in AWS perform over the latency of a company’s new SD-WAN solution?
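Latency questions like that one reward a back-of-the-envelope calculation. Every sequential request/response in a chatty application pays one full round-trip time, so wall-clock time grows linearly with latency. The numbers below are purely illustrative:

```python
def transaction_time_ms(round_trips, rtt_ms, server_ms=0):
    """Each sequential request/response pays one full RTT, plus any
    server-side processing time."""
    return round_trips * rtt_ms + server_ms

# The same 50-round-trip transaction on a LAN vs. over a WAN link
print(transaction_time_ms(50, 0.5))  # on the LAN
print(transaction_time_ms(50, 40))   # across the WAN
```

A transaction that feels instant in the data center can take seconds once the application lives in AWS, which is exactly why network latency belongs in the design conversation, not just the troubleshooting one.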

 

And in my experience, I see a hybrid-cloud approach more than anything else, which requires very careful consideration of the networking between the organization and the cloud provider, and of how applications can be delivered in a hybrid environment.

 

I love this, though, because I love technology, building things, and making things work. So the idea that I have to stretch myself outside of my cozy networking comfort zone is an exciting challenge I’m looking forward to.

 

Cloud doesn’t fix everything, but organizations are certainly taking advantage of the benefits of moving some of their applications and services to today’s popular cloud provider platforms. This means that IT pros need a breadth of knowledge to provide the depth of technical skill a cloud design requires and today’s organizations demand.

I am fascinated by the fact that in over twenty years, the networking industry still deploys firewalls in most of our networks exactly the way it did back in the day. And for many networks, that's it. The reality is that the attack vectors today are different from what they were twenty years ago, and we now need something more than just edge security.

 

The Ferro Doctrine

Listeners to the Packet Pushers podcast may have heard the inimitable Greg Ferro expound on the concept that firewalls are worthless at this point because the cost of purchasing, supporting, and maintaining them exceeds the cost to the business of any data breach that may occur as a result. To some extent, Greg has a point. After the breach has been cleared up and the costs of investigation, fines, and compensatory actions have been taken into account, the numbers in many cases do seem to be quite close. With that in mind, if you're willing to bet on not being breached for a period of time, it might actually be a money-saving strategy to just wait and hope. There's a little more to this than meets the eye, however.
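Greg's argument is ultimately an expected-value calculation, and it's easy to sketch. All of the figures below are made up for illustration; plug in your own estimates of breach probability, breach cost, and firewall total cost of ownership:

```python
def expected_cost(annual_breach_prob, breach_cost, years):
    """Expected breach losses over a period, assuming independent years."""
    p_no_breach = (1 - annual_breach_prob) ** years
    return (1 - p_no_breach) * breach_cost

# Made-up figures: $60k/year firewall TCO vs. a 5% annual breach chance
firewall_tco = 5 * 60_000
risk = expected_cost(annual_breach_prob=0.05, breach_cost=1_500_000, years=5)
print(f"firewall: ${firewall_tco:,.0f}  expected breach loss: ${risk:,.0f}")
```

With these particular (invented) inputs the two numbers land close together, which is the heart of the doctrine; the catch, as the next section argues, is that cost isn't the only variable that matters.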

 

Certificate of Participation

It's all very well to argue that a firewall would not have prevented a breach (or delayed it any longer than it already took for a company to be breached), but I'd hate to be the person trying to make that argument to my shareholders, or (in the U.S.) the Securities Exchange Commission or the Department of Health and Human Services, to pick a couple of random examples. At least if you have a firewall, you get to claim "Well, at least we tried." As a parallel, imagine that two friends have their bicycles stolen from the local railway station where they had left them. One friend used a chain and padlock to secure their bicycle, but the other just left their bicycle there because the thieves can cut through the chain easily anyway. Which friend would you feel more sympathy for? The chain and padlock at least raised the barrier of entry to only include thieves with bolt cutters.

 

The Nature Of Attacks

Greg's assertion that firewalls are not needed does have a subtle truth to it -- if it's coupled with the idea that some kind of port-based filtering at the edge is still necessary. But perhaps it doesn't need to be stateful and, typically, expensive. What if edge security was implemented on the existing routers using (by definition, stateless) access control lists instead? The obvious initial reaction might be to think, "Ah, but we must have session state!" Why? When's the last TCP sequence prediction attack you heard of? Maybe it's a long time ago, because we have stateful firewalls, but maybe it's also because the attack surface has changed.
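For the curious, a stateless ACL really is just a first-match walk down a list of entries, with an implicit deny at the end and no session table in sight. A toy evaluator, matching only on protocol and destination port where a real router ACL would also match source and destination addresses:

```python
# One ACL entry per tuple: (action, protocol, dst_port); first match wins.
ACL = [
    ("permit", "tcp", 80),
    ("permit", "tcp", 443),
    ("deny",   "tcp", 23),
]

def evaluate(protocol, dst_port):
    """Stateless, per-packet decision: every packet is judged on its own."""
    for action, proto, port in ACL:
        if proto == protocol and port == dst_port:
            return action
    return "deny"  # implicit deny at the end, as on most routers

print(evaluate("tcp", 443))  # permit
print(evaluate("tcp", 22))   # deny (no matching entry)
```

Note what is missing: there is no record of which sessions are established, which is precisely the state a stateful firewall maintains and this approach gives up.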

 

Once upon a time, firewalls protected devices from attacks on open ports, but I would posit that the majority of attacks today are focused on applications accessed via a legitimate port (e.g. tcp/80 or tcp/443), and thus a firewall does little more than increment a few byte and sequence counters as an application-layer attack is taking place. A quick glance at the OWASP 2017 Top 10 List release candidate shows the wide range of ways in which applications are being assaulted. (I should note that this release candidate, RC1, was rejected, but it's a good example of what's at stake even if some specifics change when it's finally approved.)

 

If an attack takes place using a port which the firewall will permit, how is the firewall protecting the business assets? Some web application security might help here too, of course.

 

Edge Firewalls Only Protect The Edge

Another change which has become especially prevalent in the last five years is the idea of using distributed security (usually firewalls!) to move the enforcement point down toward the servers. Once upon a time, it was sometimes necessary to do this because centralized firewalls simply did not scale well enough to cope with the traffic they were expected to handle. The obvious solution is to have more firewalls and place them closer to the assets they are being asked to protect.

 

Host-based firewalls are perhaps the ultimate in distributed firewalls, and whether implemented within the host or at the host edge (e.g. within a vSwitch or equivalent within a hypervisor), flows within a data center environment can now be controlled, preventing the spread of attacks between hosts. VMware's NSX is probably the most commonly seen implementation of a microsegmentation solution, but whether using NSX or another solution, the key to managing so many firewalls is to have a front end where policy is defined, then let the system figure out where to deploy which rules. It's all very well spinning up a Juniper cSRX (an SRX firewall implemented as a container), for example, on every virtualization host, but somebody has to configure the firewalls, and that's a task that, if performed manually, would rapidly spiral out of control.
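The "define policy once, let the system place the rules" idea can be illustrated in a few lines. This is a conceptual sketch, not how NSX or cSRX works internally: tier-level policy is expanded into the per-host rules each distributed firewall would actually enforce. The tiers, hosts, and ports are invented:

```python
# Central policy: which application tiers may talk, and on which port.
POLICY = [("web", "app", 8080), ("app", "db", 5432)]

# Inventory: which hosts belong to which tier.
TIERS = {"web": ["web1", "web2"], "app": ["app1"], "db": ["db1"]}

def compile_rules(policy, tiers):
    """Expand tier-level policy into per-host (src, dst, port) rules."""
    rules = []
    for src_tier, dst_tier, port in policy:
        for src in tiers[src_tier]:
            for dst in tiers[dst_tier]:
                rules.append((src, dst, port))
    return rules

print(compile_rules(POLICY, TIERS))
# 2 web hosts -> 1 app host, plus 1 app host -> 1 db host: 3 rules
```

The payoff is that adding a host to a tier regenerates its rules automatically, which is exactly the kind of bookkeeping that spirals out of control when done by hand.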

 

Containers bring another level of security angst too since they can communicate with each other within a host. This has led to the creation of nanosegmentation security, which controls traffic within a host, at the container level.

 

Distributed firewalls are incredibly scalable because every new virtualization host can have a new firewall, which means that security capacity expands at the same rate as the compute capacity. Sure, licensing costs likely grow at the same rate as well, but it's the principle that's important.

 

Extending the distributed firewall idea to end-user devices isn't a bad idea either. Imagine how the spread of a worm like WannaCry could have been limited if user host firewalls had been configured to block SMB while the worm was rampant within a network.

 

Trusted Platforms

In God we trust; all others must pay cash. For all the efforts we make to secure our networks and applications, we are usually also making the assumption that the hardware on which our network and computer runs is secure in the first place. After the many releases of NSA data, I think many have come to question whether this is actually the case. To that end, trusted platforms have become available, where components and software are monitored all the way from the original manufacturer through to assembly, and the hardware/firmware is designed to identify and warn about any kind of tampering that may have been attempted. There's a catch here, which is that the customer always has to decide to trust someone, but I get the feeling that many people would believe a third-party company's claims of non-interference over a government's. If this is important to you, there are trusted compute platforms available, and now even some trusted network platforms with a similar chain of custody-type procedures in place to help ensure legitimacy.

 

There's Always Another Tool

The good news is that security continues to be such a hot topic that there is no shortage of options when it comes to adding tools to your network (and there are many I have chosen not to mention here for the sake of brevity). There's no perfect security architecture, and whatever tools are currently running, there's usually another that could be added to fill a hole in the security stance. Many tools, at least the inline ones, add latency to the packet flows; it's unavoidable. In an environment where transaction speed is critical (e.g. high-speed trading), what's the trade off between security and latency?

 

Does this mean that we should give up on in-depth security and go back to ACLs? I don't think so. However, a security posture isn't something that can be created once then never updated. It has to be a dynamic strategy that is updated based on new technologies, new threats, and budgetary concerns. Maybe at some point, ACLs will become the right answer in a given situation. It's also not usually possible to protect against every known threat, so every decision is going to be a balance between cost, staffing, risk, and exposure. Security will always be a best effort given the known constraints.

 

We've come so far since the early firewall days, and it looks like things will continue changing, refreshing, and improving going forward as well. Today's security is not your mama's security architecture, indeed.

There is something to be said about ignorance being bliss, and then there are times when it is good to know a little bit more about what you are getting into. My IT journey started over 20 years ago. All I knew going in was that I liked IT, based on my studies, and that the university I attended had 100% placement of its graduates in IT positions. That’s not a whole lot of detail to start from, but I was all in.

 

At the time I certainly didn’t have the foresight to understand how big this IT thing would become.

 

Done with college, done with learning

So I was done with college, and I was done taking tests forever, right? Wrong! I would be forever learning.

IT becomes part of you. It becomes natural to want to read a book, search the web for new insights, or start working with some of the latest new technologies.

 

Always learning

The best part of working in IT is the always-learning, always-growing nature of the industry. Even more exciting is that people who never spent a day studying IT, but are willing to learn, can easily move into this space. I have worked with history majors, music majors, sociology majors, and more. You name it. When you think about it, this is really cool!

 

As long as you have the drive to learn, keep learning, and get your hands dirty in technology, working in IT really is an opportunity for many.

 

Just getting started in IT?

Today, there are countless varieties of IT jobs. Organizations around the world are looking for smart, driven individuals. Be willing to research the answers to questions, and spend time on certifications. Certifications are important to everyone, but especially when you are getting started in your IT career. They show drive, and they also prompt you to learn enterprise technologies that will benefit you both personally and professionally.

 

This approach will also provide a good foundation for your entire IT career. IT is full of opportunity, so also be sure to keep an open mind about what you can do. You will be sure to go places with a position-driven approach.

 

Best of luck!
