
Geek Speak


Viva Las Vegas! This version of the Actuator comes to you from VMworld Las Vegas. If you are reading this at VMworld, there is still time to stop by the booth and say hello!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Going Multi-Cloud with AWS and GCP: Lessons Learned at Scale

Two key takeaways from this for me. First, this is the only company I’ve seen that has publicly stated they are using GCP. Second, the issues with AWS DR architecture (and old equipment in the Eastern U.S. region) make me smile when I think about how far ahead Microsoft Azure is when it comes to providing proper redundancy for customers.

 

Microsoft Tests Artificial Intelligence on its Drone in Nevada

I guess now that we have mastered self-driving cars, it’s time we build planes that can fly and think for themselves. I’m certain this will benefit humanity and not hasten the development of Skynet in any way.

 

MIT researchers use drone fleets to track warehouse inventory

That’s better. Start with something smaller than a plane that can track down items on a shelf…or humans. Skynet gets closer with each passing day.

 

CIA uses a secret tool to spy on NSA, FBI and other intel partners

This isn’t news. Countries spy on each other, even allied countries. And agencies inside those countries spy on each other, too. I don’t have data to verify this, but I suspect spying dates to the first time humans got together in tribes.

 

Leak of >1,700 valid passwords could make the IoT mess much worse

Making a bad situation worse. No idea at what point we will finally understand that our current security models are horribly broken, but I suspect it will take some Black Mirror scenarios.

 

Meet the man using data science to predict who dies next on Game of Thrones

Good to see people taking advances in computing and using them in ways to benefit all of mankind. Also? Tyene is as good as dead and we didn’t need a computer to tell us that.

 

The life-saving browser shortcut everyone should know

Okay, that’s cool. I can’t believe I hadn't heard about this before, either.

 

I'm not sure if I have the minimum pieces of flair yet.

As companies race to the cloud and adopt DevOps culture as part of that process, it's becoming more apparent that the word "monitoring" has a significantly different meaning within the walls of a data center than it does in the DevOps huddle area. But what, if anything, is actually different? Or is it all just jargon and an attitude of not-invented-here (NIH)?


In my panel discussion, 'When DevOps Says "Monitor,"' I will be joined by Nathen Harvey, VP of Community Development at Chef, Michael Cote, Director of Technical Marketing at Pivotal, and Clinton Wolfe, cloud architect and DevOps practice lead (and current "hero for hire" seeking his next adventure). In our conversation, we'll break down expectations, and yes, even bad (monitoring) habits in the DevOps world in a way that will make a traditional monitoring engineer feel right at home.

 

Because it was so successful last year, we are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.

 

In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks, such as myself, and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!

scuff

When the Cloud Wins

Posted by scuff Aug 29, 2017

Now that we’ve torn apart the cloud dream, it’s time to give it some credit. Let’s look at some scenarios where the cloud makes sense.

 

Email workloads: Faced with the costs of replacing an aging Microsoft Small Business Server (hardware, software, implementation, and ongoing support), the cloud numbers can stack up. This market has been an easy win for Microsoft, who sweetened the deal with other Office 365 products in your license. They’ve lost some of these to Google’s GSuite instead, but for the purpose of this discussion, it’s all cloud. Enterprises commonly pick this workload first in their cloud endeavours. It’s not rocket science. Hybrid directory integration is good, and they’ve saved the overhead of a number of Exchange servers in the process. Your mileage may vary.

 

Consolidating hardware/reducing support costs: Take the previous paragraph on email and apply that concept to any other workload you might stick in the cloud. If you can get rid of some hardware and the associated infrastructure support costs, the cloud numbers could stack up, especially if your financial controller wants to rein in asset spending but has a flexible operations budget. Those two separate financial buckets affect a company very differently.

 

Goodbye, experts: Why hire a DBA to manage the performance of an on-prem application when a SaaS app can do the trick? With the cloud, you’re paying a monthly fee for the app to just work. If you want to add extra security features, tick a box and license them, instead of running an RFP process to find and select a solution and a vendor. Worried about DevOps? Make it the SaaS provider's problem and enjoy the benefits of the latest release just by logging in. Need high availability and a 24x7 NOC? Use the cloud, as they’ll be up all night keeping it running for the rest of their customers, anyway.

 

Testing, testing, 1,2,3: Want to run a proof of concept or play with a new solution? Run it up in the cloud, no hardware required, and stop paying for it when you are done. For software developers, PaaS means they aren’t held back while an IT pro builds a server and they don’t have the tedium of patching it, etc.

 

Short term or seasonal demand: Releasing a new movie? You’ll have high demand for the trailer, over opening weekend, and in the short term. Two years later, though? Not so much. Village Roadshow in Australia just announced a big move to Microsoft Azure. If you have a short term project or a seasonal demand peak like the holidays, don’t underestimate the elasticity of cloud resources.

 

Modern capabilities: Satya wasn’t kidding when he said Microsoft was cloud-first (now AI-first). New features are released constantly, and in the cloud first, because the vendor controls all of the moving parts and doesn’t have to take into account backward compatibility or version mismatches across the customer base. There are also a ton of SaaS vendors with capabilities you’d have difficulty finding in installable, on-premises software.

 

SaaS integration: Thanks to APIs, cloud solutions are really good at being connected, as data isn’t being locked away on inaccessible internal networks. Yes, the data storage thing is both a blessing and a curse. SaaS integration has led to some great workflow and productivity tricks, whether the apps talk to each other directly or they’re doing a trigger-and-action dance with Zapier, IFTTT, or Microsoft Flow.
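
To make that trigger-and-action idea concrete, here's a minimal Python sketch that posts an event to a webhook, the style of endpoint that services like Zapier, IFTTT, or Microsoft Flow use to kick off a workflow. The URL and payload fields are purely illustrative assumptions, not a real integration.

```python
# Minimal sketch: push an event to a (hypothetical) automation webhook.
# The URL and payload fields are placeholders, not a real Zapier/IFTTT endpoint.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.example.com/workflows/new-customer"  # hypothetical

def send_event(event: dict) -> int:
    data = json.dumps(event).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 means the workflow was triggered

if __name__ == "__main__":
    status = send_event({"event": "customer.created", "name": "Acme Corp"})
    print(f"Webhook responded with HTTP {status}")
```

In practice the SaaS apps usually fire these webhooks for you; the point is simply that a JSON POST is all the glue most of these integrations need.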

 

So, there’s a bright shiny cloud picture for you. Now, don’t tear apart these ideas too much in the comments. I want to hear where you think the cloud is a win, and where you’ve been glad you’ve moved something to the cloud.

If you’ve developed a desire to learn more about hacking or to try a little hacking yourself, there are a lot of resources at your disposal to get started. Keep in mind, our premise here is how to hack without breaking any laws, which is important to remember. That being said, there’s nothing illegal about talking about it, is there? On the contrary, talking about hacking ensures that the larger community of IT and security professionals are sharing ideas and techniques for prevention, detection, and mitigation. This makes everyone’s overall security posture a better one.

 

Books, podcasts, blogs, vlogs, and even full-blown conferences are all dedicated to sharing this knowledge and can be resources for you to get started. Let’s look at a few, as examples.

 

Books

 

Right off the bat, I’m going to recommend a couple of books written by Kevin Mitnick, who is arguably the world’s most infamous hacker. “Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker," should be mandatory reading, in my opinion, for anyone interested in learning a lot more about the early days of hacking. Kevin has a number of other books as well and now works on the “right side” of the fence as a security consultant.

 

Go to Amazon and search for the word “hacking” and be prepared to sift through just over 13,000 results. Here, we can see once again how many different meanings there are for this concept. A lot of these results aren’t related to what we’re looking for at all, but a review of the more popular titles and customer ratings will easily separate the relevant titles from the chaff.

 

Books such as “The Hacker Playbook 2,” “How to Hack like a God,” and “The Hacking Bible,” will give you a solid understanding of tools, commands, and techniques you can try yourself to build a foundation of knowledge on how to hack and how you can be hacked.

 

Blogs and Vlogs

 

Hak5 is probably the top example of a hacker how-to site and video series that comes to mind. Hak5 started back in 2005 with a video series on simple hacks and tricks for technology enthusiasts to try at home. Over the years, it has evolved into a full-fledged penetration testing and information security training series, including a store where one can purchase some fun hacker tools.

 

Hackaday is another popular site, one that focuses on hacking as an alternative method of accomplishing a task, and less on computers or security.

 

Brian Krebs operates his news site, Krebs on Security, which is an excellent resource for keeping up on zero-day exploits and current news from the hacking/malware world. Krebs was a victim of a hacker and has dedicated himself to learning as much as he can about the exploits and hacks used. Now he educates people on security and how to mitigate threats.

 

There are many others and the list could go on, but search around and find some sites that look interesting, watch some videos, and try some of the techniques presented. Be cautious when searching for terms like “hacking” and “exploit,” however!

 

Conferences

 

Believe it or not, there are conferences you can attend that are dedicated to informing and educating the hacking community. The two prominent ones are Black Hat and DEF CON. Now, the name Black Hat for a convention might make you second-guess whether you want to attend or not. In fact, Black Hat is more of an IT pro event filled with training and security briefings intended to prevent malicious attacks. DEF CON, on the other hand, is more of a carnival for hackers of all types.

 

Both were founded by Jeff Moss, known as “Dark Tangent,” but they seem to be aimed at different audiences. Granted, you’ll likely find people who attend both, and both events will have their share of white hats, black hats, and gray hats. DEF CON even hosts a contest called “Spot the Fed,” referencing the attendance of several members of federal law enforcement and cybersecurity teams.

 

Now, if you’re a first-time attendee at either of these events, it’s good to develop some safe practices with any electronic devices you might wish to bring along, for very obvious reasons. The entire place is filled with hackers, who might decide to have some fun with you. Turn off your Wi-Fi, turn off your Bluetooth, turn off AirDrop, or maybe even leave all of your electronics at home.

 

Past recorded sessions from both conferences are available on YouTube, and I’d invite you to check them out if you aren’t able to attend in person.

 

Information overload

 

It may seem like a lot, but regardless of the sources you choose, there is no shortage of information or media out there for budding InfoSec professionals. Find a place to start and jump in, whether it’s reading up on the history of hacking or deciphering current zero-day exploits that you might be facing at your place of employment.

 

You may find a particular track that you want to focus on and can then begin to narrow your research to that one area. Specialization can be of some benefit, but a well-rounded security posture is always best.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Agencies rely heavily on their applications, so I thought I’d share a blog written earlier this year by my SolarWinds colleague, Patrick Hubbard.

 

IT administration is changing, or expanding, to be precise. We can no longer think of just “our part” of IT, “our part” of the network, or “our part” of the help desk. For agencies to efficiently and effectively manage IT, administrators must have visibility across the entire application stack, or “AppStack.” Only with visibility to all infrastructure elements that are supporting an application deployed in production can they understand how those elements relate to each other.

 

This, however, is easier said than done. There’s a good chance there are things in your AppStack that you’re not even aware of—things that, for better or worse, you must uncover and track to gain critical cross-stack visibility. Meeting the challenge will likely require broadening traditional definitions of an AppStack to include things we traditionally thought of as out of scope.

 

What kinds of things? I’m glad you asked.

 

Not too long ago, application management was simple. Well, less complex at least. Applications were more limited in extent and sat on a mainframe, micros, or PCs with local storage. Today, the complexity of applications—with shared storage and server virtualization—begs a restatement of the meaning of an application to include the entire AppStack. Today, the AppStack includes everything from the user’s experience to the spindles and everything in between. Yes, everything in between.

 

As many agencies are moving to a hybrid-cloud environment, let’s take the example of a hybrid-cloud hosted virtual classroom system. Are HTTP services for the presentation and data tiers included? Yes. As are storage and virtualization, as well as core, distribution, WAN, and firewall. What about VPN and firewall to the Virtual Private Cloud (VPC)? Yes. The cloud has become an important component of many application stacks.

 

There are even things you might have assumed aren’t components of a traditional application—therefore are not part of the AppStack—that are, in fact, critical to uptime. Some examples are security certificate expiration for the web portal, the integration of durable subscription endpoint services, and the Exchange™ servers that transmitted emails and relayed questions. Yes, these are part of the AppStack. And, yes, you should be monitoring all of these elements.
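
Certificate expiration is a good example of how monitorable these "extra" AppStack elements really are. As a hedged illustration (not how any particular monitoring product does it), here's a small Python sketch, standard library only, that reports how many days remain on a web portal's TLS certificate. The hostname is a placeholder.

```python
# Minimal sketch: check how many days remain on a web portal's TLS certificate.
# The hostname is a placeholder; point it at your own portal.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is the certificate's expiration timestamp
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    remaining = days_until_cert_expiry("portal.example.gov")
    print(f"Certificate expires in {remaining} days")
    if remaining < 30:
        print("WARNING: renew the certificate soon")
```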

 

You will achieve the greatest success in application monitoring and management by expanding the definition of AppStack to include all elements of your environment. Encourage all the Federal IT pros on your team to broaden their perspective. Use your network monitoring systems to discover and document every element of the connectivity chain, and identify and document the links between each. You will likely discover that there are more than a few previously unmonitored services in your AppStack that you need to start keeping an eye on. And in the long run, proactive observation has a tendency to decrease unplanned outages and even make your weekends a bit better.

 

Find the full article on Federal Technology Insider.

Security concerns are getting lots of media coverage these days, given the massive breaches of data that are becoming more common all the time. Businesses want to have a security plan, but sometimes don't have the resources to create or implement one. Protect your infrastructure with the simple features that a SIEM application provides. Simple, step-by-step implementation allows you to lock in a solid security plan today.

 

In my THWACKcamp 2017 session, "Protecting the Business: Creating a Security Maturity Model with SIEM," Jamie Hynds, SolarWinds Product Manager, and I will present a hands-on, end-to-end how-to on configuring and using Log & Event Manager, including configuring file integrity monitoring, understanding the effects of normalization, and creating event correlation rules.
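
File integrity monitoring itself is conceptually simple: hash the files you care about and alert when a hash changes. The sketch below is not how Log & Event Manager implements it, just a minimal Python illustration of the idea, with the watched paths as placeholder examples.

```python
# Conceptual sketch of file integrity monitoring: hash watched files and
# report any that changed since the last baseline. Paths are placeholders.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]  # example paths
BASELINE = Path("fim_baseline.json")

def hash_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check() -> None:
    current = {str(p): hash_file(p) for p in WATCHED if p.exists()}
    if BASELINE.exists():
        previous = json.loads(BASELINE.read_text())
        for name, digest in current.items():
            # Only alert if the file was in the baseline and its hash differs
            if previous.get(name) not in (None, digest):
                print(f"ALERT: {name} has changed since the last baseline")
    BASELINE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    check()
```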

 

In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks -- such as Leon and me -- and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

We are bringing our expanded-session, two-day, two-track format from THWACKcamp 2016 to THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!

For those of you living in Texas and in Harvey's path please check in. We're worried about you.

Infrastructure as Code is about the programmatic management of IT infrastructure, be it physical or virtual servers, network devices, or the variety of supporting appliances racked and stacked in our network closets and data centers.

 

Historically, the problem has been managing an increasing number of servers after the advent of virtualization. Spinning up new virtual servers became a matter of minutes, creating issues like server sprawl and configuration drift. And today, the same problem is being recognized in networking as well.

 

Languages such as Python, Ruby, and Go, along with configuration management platforms such as Puppet and Chef, are used to treat servers as pools of resources defined in configuration files. In networking, engineers are taking advantage of traditional programming languages, but Salt and Ansible have emerged as the dominant frameworks for managing large numbers of network devices.
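
The core idea is that the desired state of the infrastructure lives in data files rather than in someone's head or in a device's running config. Here's a deliberately tiny Python sketch of that pattern; the inventory and template are illustrative stand-ins for what Puppet, Chef, Salt, or Ansible do at much larger scale.

```python
# Minimal sketch of "infrastructure as data": an inventory describes the
# desired state, and code renders per-device configuration from it.
# The inventory contents and template are illustrative only.
import json

INVENTORY = json.loads("""
{
  "web": [
    {"hostname": "web01", "ip": "10.0.10.11", "ntp": "10.0.0.5"},
    {"hostname": "web02", "ip": "10.0.10.12", "ntp": "10.0.0.5"}
  ]
}
""")

TEMPLATE = """hostname {hostname}
interface eth0
  address {ip}/24
ntp server {ntp}
"""

def render_configs(role: str) -> dict:
    """Return a mapping of hostname -> rendered configuration text."""
    return {
        device["hostname"]: TEMPLATE.format(**device)
        for device in INVENTORY.get(role, [])
    }

if __name__ == "__main__":
    for host, config in render_configs("web").items():
        print(f"--- {host} ---\n{config}")
```

Everything interesting, such as adding a server or changing the NTP source, happens by editing the data, not by logging in to boxes one at a time.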

 

What problems does treating infrastructure as code solve?

 

First, provided there’s enough storage capacity, we’re able to spin up new servers so quickly and easily that many organizations struggle with server sprawl. In other words, when there’s a new application to deploy, we spin up a new virtual machine. Soon, server admins, even in medium-sized organizations, found themselves dealing with an unmanageably large number of VMs.

 

Second, though using proprietary or open tools to automate server deployment and management is great for provisioning (configuration management), it doesn’t provide a clear method for continuous delivery including validation, automated testing, or version control. Configurations in the environment can drift from the standard with little ability to track, repair, or roll back.
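
Drift detection is one place where even a little code helps. As a hedged sketch (the file locations are assumptions for illustration), compare each device's collected running configuration against the version-controlled approved copy and report any differences:

```python
# Minimal sketch of configuration drift detection: diff the running config
# against the approved copy kept under version control. Paths are assumptions.
import difflib
from pathlib import Path

GOLDEN_DIR = Path("configs/approved")    # what the config should be
RUNNING_DIR = Path("configs/collected")  # what was pulled from the device

def report_drift() -> None:
    for golden in sorted(GOLDEN_DIR.glob("*.cfg")):
        running = RUNNING_DIR / golden.name
        if not running.exists():
            print(f"{golden.name}: no collected config to compare")
            continue
        diff = list(difflib.unified_diff(
            golden.read_text().splitlines(),
            running.read_text().splitlines(),
            fromfile=f"approved/{golden.name}",
            tofile=f"collected/{golden.name}",
            lineterm="",
        ))
        if diff:
            print(f"{golden.name}: DRIFT detected")
            print("\n".join(diff))
        else:
            print(f"{golden.name}: in sync")

if __name__ == "__main__":
    report_drift()
```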

 

Treating the infrastructure as code means adopting the same software development tools and practices that developers use to make provisioning infrastructure services more efficient, to manage large numbers of devices in pools of resources, and to provide a methodology to test and validate configuration.

 

What tools can we use to manage infrastructure the same way developers manage code?

 

The answer to this question is, well, it depends. The tools we can use depend on what sort of devices we have in our infrastructure and the knowledge of SysAdmins, network admins, and application developers. The good news is that Chef, Puppet, Salt, and Ansible are all relatively easy enough to learn that infrastructure engineers of all stripes can quickly cross the divide into the configuration management part of DevOps.

 

Having a working knowledge of Python or Ruby wouldn’t hurt, either.

 

Configuration Management

 

Chef and Puppet are open source configuration management platforms designed to make managing large numbers of servers very easy. They both enable a SysAdmin to pull information and push configuration files to a variety of platforms, making both Chef and Puppet very popular.

 

Since both are open source, an infrastructure engineer just starting out might find the maze of plugins, and the debate within the community about which is better, a little bit confusing. But the reality is that it’s not too difficult to go from the ground floor to managing an infrastructure.

 

Both Puppet and Chef are agent-based, using a master server with agents installed directly on nodes. Both are used very effectively to manage a large variety of platforms, including Windows and VMware. Both are written in Ruby, and both scale very well.

 

The differences between the two are generally based on elements that I don’t feel are that relevant. Some believe that Chef lends itself more to developers, whereas Puppet lends itself more to SysAdmins. The thing is, though, both have a very large customer base and effectively solve the same problems. In a typical organization of reasonable size, you’ll likely have a mix of platforms to manage, including VMware, Windows, and Linux. Both Chef and Puppet are excellent components of a DevOps culture that's managing these platforms.

 

Ansible and Salt are both agentless configuration management tools built on Python. And though they offer some support for Windows, they are geared more toward managing Linux and Unix-based systems, including network devices.

 

Continuous Delivery

 

Continuous Delivery is about keeping configurations consistent across the entire IT environment, reducing the potential “blast radius” with appropriate testing, and reducing time to deploy new configurations. This is very important because using Chef or Puppet alone stops at automation and doesn’t apply the DevOps practices that provide all the benefits of Infrastructure as Code.

 

Remember that Infrastructure as Code is as much a cultural shift as it is the adoption of technology.

The most common tools are Travis CI and Jenkins. Travis CI is a hosted solution that runs directly off GitHub, while Jenkins runs off a local server. Both integrate with GitHub repositories. Some people like having total control with a local Jenkins server, while others prefer the ease of use of a hosted solution like Travis.

 

To me, it’s not all that important which one a team uses. A SysAdmin adopting an Infrastructure as Code model will benefit either way. Integrating one or the other into a team’s workflow will provide tremendously better continuous delivery than simple peer review and ad hoc lab environments.
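
What the CI server actually runs can be as modest as a validation script. Here's a hedged example of the kind of check a Jenkins or Travis CI job might execute against every proposed configuration change; the required lines are placeholders for whatever your own standard mandates.

```python
# Minimal sketch of a pre-merge validation step a CI job could run:
# every config file must contain the lines our (illustrative) standard requires.
import sys
from pathlib import Path

REQUIRED_LINES = [            # placeholders for your own policy
    "ntp server 10.0.0.5",
    "transport input ssh",
]

def validate(config_dir: str = "configs/approved") -> int:
    failures = 0
    for cfg in sorted(Path(config_dir).glob("*.cfg")):
        text = cfg.read_text()
        for required in REQUIRED_LINES:
            if required not in text:
                print(f"FAIL {cfg.name}: missing '{required}'")
                failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if validate() else 0)  # a non-zero exit code fails the CI build
```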

 

Version Control

 

Version control is one component that, in my experience, infrastructure engineers instantly see the value in. In fact, in the IT departments in which I’ve worked, everyone seems to have some sort of cobbled together version control system (VCS) to keep track of changes to the many configuration files we have.

 

 

Infrastructure as Code formalizes this with both software and consistent practice. Rather than storing configurations in a dozen locations, likely each team member’s local computer, a version control system centralizes everything.

 

That’s the easy part, though. We can do that with a department share. What a good version control system does, however, is provide a constant flow of changes back to the source code, revision history, branch management, and even change traceability.

 

Git is probably the most common VCS, typically paired with hosting services like GitHub and Bitbucket, but just like the continuous delivery solutions I mentioned, it’s more about just doing something. Using any of these VCSs even minimally is light years ahead of a network share and file names with “FINAL” or “CURRENT” at the end.
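
Even before a team agrees on a full workflow, a few lines of code can get configuration snapshots out of scattered folders and into a repository. A minimal sketch, assuming Git is installed and the configs live in a local directory:

```python
# Minimal sketch: commit the latest configuration snapshots to a local Git
# repository so every change has a revision history. Directory is an example.
import subprocess
from datetime import datetime
from pathlib import Path

REPO = Path("configs")  # directory holding collected device configs

def git(*args: str) -> str:
    return subprocess.run(
        ["git", "-C", str(REPO), *args],
        check=True, capture_output=True, text=True,
    ).stdout

def snapshot() -> None:
    REPO.mkdir(parents=True, exist_ok=True)
    if not (REPO / ".git").exists():
        git("init")
    git("add", "--all")
    status = git("status", "--porcelain")
    if status.strip():
        git("commit", "-m", f"Config snapshot {datetime.now():%Y-%m-%d %H:%M}")
        print("Committed:\n" + status)
    else:
        print("No configuration changes since the last snapshot")

if __name__ == "__main__":
    snapshot()
```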

 

Culture

 

When it comes down to it, though, Infrastructure as Code is just as much about culture as it is about technology. It’s a paradigm shift – a shift in practice and methodology – in addition to the adoption of programming languages, tools, and management platforms.

 

An IT department will see absolutely zero benefits from standing up a Jenkins server if it isn’t integrated into the workflow and actually used. Getting buy-in from the team is extremely important. This is no easy task, because now you’re dealing with people instead of bits and bytes, and people are much more complex than even our most sophisticated systems.

 

One way to get there is to start with only version control with GitHub or some other VCS. Since this is completely non-intrusive and has zero impact on production, buy-in from the team and from leadership is much easier.

 

Another idea I’ve seen work in practice is to start with only one part of the infrastructure. This could mean starting with all the Linux machines or all the Windows machines and managing them with Chef. In this way, a SysAdmin can manage a group of like machines and see tangible benefits of Infrastructure as Code very quickly without having to get buy-in across all teams.

 

As the benefits become more apparent, either to colleagues working in the trenches or leadership looking for some proof of concept, the culture can be changed from the bottom up or top down.

 

Making something old into something new

 

Remember that Infrastructure as Code is a cultural shift as much as it is an adoption of technology. Developers have been using these same tools and practices for years to develop, maintain, and deploy code, so this is a time-tested paradigm that SysAdmins and network admins can adopt very quickly.

 

In time, Infrastructure as Code can help us become proactive managers of our infrastructure, making it work for us rather than making us work for the infrastructure.

It's the classic horror story twist from 40 (or more) years ago, back when homes had a single telephone line and the idea of calling your own home was something reserved only for phreakers and horror films. While this might not be so scary now, I can assure you that back in the day that scene was the cutting edge in horror.

 

Fast forward 40+ years, and today, the idea of calling someone from inside your house isn't scary. In fact, it's quite common. Never mind phone calls; I will send my daughter an instant message to come downstairs for dinner. The constant flow of communication today is not scary; it's expected. Which brings us to the topic at hand: the Internet of Things (IoT). The idea behind IoT is simple enough: anything and anyone can have an IP address attached to them at any moment, tracking data about their movement (and other things). And IoT has given rise to the concept of “Little Data," the idea that we can gather data about ourselves (like your FitBit®) on a daily basis, data that we then analyze to make adjustments to our daily routines. Here's just a partial list of the current IoT devices in and around your house that have such capabilities:

 

• Smartphone

• Cable modem

• Wireless router

• Wireless printer

• Tablet

• Laptop

• PC

• Television

• Xbox®/PlayStation®/etc.

• Home alarm system

• Automobile

• Security camera

• Light bulb

• Refrigerator

• Toaster

• Microwave

• Stove/Oven

• Dishwasher

• Washer/Dryer

• Windows

 

Yes, that's right, windows. Your house will know when it's raining and close the windows for you, even while you're away. I suppose it’s also plausible to think that a thief could know you are away and hack your system, open the windows, and gain access to the stuff inside your house. Or maybe it is just kids who want to play a prank and leave your windows open in the rain.

 

Folks, this isn't good news. And the above list doesn't mention the data that is being collected when you are using your IoT devices. For example, the data that Google®/Apple®/Microsoft®/Facebook®/Yahoo!® are tracking as you navigate the internet.

 

And yet I *still* don't see people concerned about where IoT security is currently headed regarding our privacy. With the number of data breaches continuing to rise, it would seem to me that the makers of these devices know less about security and privacy than we'd like to believe.

 

So why don't people care? I can only think of two reasons. One is that they haven't been victims of a data breach in any way. The other? No one has scared them into thinking twice about IoT.

 

So, that's why I made this, for you, as a reminder:

 

 

The next time you are shopping for an appliance and you are told about all the great "smart" features that are available, I want you to think about the above image, your privacy, and your security. And I want you to ask some smart questions, such as:

 

• Have I actually read and understood the user terms?

• Do I understand where my data is going and how it will be shared?

• Is the data portable or downloadable?

• Will the appliance be fully functional if not connected to the internet?

• Can I get my data from the device without it needing to be connected to the internet?

• Can I change the passwords on the device?

• Is my home network secure?

 

Look, I'm a fan of IoT. I really am. I enjoy data. I love my FitBit. I'm going to get the Nest® at some point, too.

 

But I also recognize that the IoT solutions could be something that makes our lives worse, not better. We need to start asking questions like those above so that manufacturers understand that privacy and security should be at the top of the feature list, and not an afterthought.

PowerShell has been around for a long time now, consistently proving its value in terms of automation. PowerShell (aka Shell) was first introduced to me personally when it was still code-named Monad in the early 2000s. Even then, it was clear that Microsoft had a long-term plan when they began sharing bits of code within the Exchange platform. Every time you did something in the GUI, they would share with you the code to complete the task in Shell. They were making sure the learning process was already beginning.

 

PowerShell has come a long way since the early days of Monad, and just about any product, Microsoft or not, has PowerShell commands to help you complete your job.

 

Modern Day

Today, PowerShell is seemingly everywhere. Its ability to automate tasks that the traditional toolset cannot is impactful and necessary. Today, all administrators need to be able to do some level of PowerShell from simple to complex.

 

So, why has this become a staple in our workday?

 

Organizations today are streamlining IT, always striving to simplify the workday. Some may argue that organizations are trying to do more with fewer employees. This almost implies they are trying to overwork their teams, but I prefer to take a more positive spin. To me, automation is about allowing a process to flow and happen in a way that simplifies my workday. This doesn’t mean that I am kicked back with nothing to do when automation is complete. It means that I get more time to work on other things, learn more, and grow professionally.

 

Today, automation presents endless possibilities. For context, let's take a look at some things that we automate today that, in the past, we handled manually.

 

  • Operating System deployment – In the early days of IT, we were feeding floppy disks or bootable CDs into computers to manually deploy an operating system. Once that was complete, we still had to install all of the necessary applications. Repeat this for every single person who needed a PC, and you had potentially thousands of PCs going through this very manual process. When the application “Ghost” was released, we were ecstatic. We finally had a way to copy an image and deploy it to another PC, which significantly reduced our deployment time. Today, really the only accepted approach is to automate workstation deployment via third-party tools and/or PowerShell. Enterprise IT staff cannot imagine spending a whole day setting up a laptop or PC for a user anymore. Now you are done in less than an hour!
  • Reporting – Anyone who knows me knows that I have done a lot of work with Citrix, and over the years I have found that there are just some things traditional reporting tools don't offer. While third-party offerings in this space are much improved today, that doesn’t mean I no longer need a custom report now and then. PowerShell to the rescue! Custom reporting gives me the insights into my environment that I need to help ensure that it’s healthy and running successfully for the enterprise.
  • Microsoft Exchange – As previously mentioned, Microsoft Exchange was one of the first applications from Microsoft to use PowerShell. When working with Exchange, you open Exchange Management Shell to get all of the Exchange-related PowerShell commands pre-loaded. From daily tasks to automating mailbox moves and more, Shell has proven its value over and over if you are working with Exchange.

 

This list really only scratches the surface of PowerShell's automation possibilities. If you are doing work on Windows, as with most applications today, PowerShell skills are necessary for your technical success.

 

Taking it to the Next Level

 

The automation movement has already started. The power of automation has the potential to really change the landscape of the work we do. I anticipate that the need for automation will continue, and over the next few years the more we can automate, the better. That being said, there is a risk to automating everything with custom code without proper documentation and team sharing.  As employees leave organizations, so does the knowledge that went into the code. Not everyone that writes code does it well. Even if the script works, this doesn’t mean that it is a quality script.  Application upgrades typically involve rewriting the automation scripts that go with it.

 

As you can see, the time savings can go right out the window with that next big upgrade. Or can they?

 

If you plan the upgrade timeframe to include the rewrite of your automation scripts, then you are still likely better off than you were without it due to all of the time savings you realized in between.

My final thoughts on this, despite all of the pros and cons to automation, would be to automate what you can as often as possible!

I have wanted to start an ongoing conversation about security on Geek Speak for a long time. And now I have! Consider this the beginning of a security conversation that I encourage everyone to join. This bi-monthly blog will cover security in a way that combines the discussions we hear going on around us with the ones we have with colleagues and friends. I’d love for you to share your thoughts, ask questions, and ENGAGE! Your input will make this series that much richer and more interesting.

 

You can bring up any topic or share any ideas that you would like for me to talk about. Please join me in creating some entertaining reading with a security vibe. Let’s start…NOW!

 

Let me dive into something that I feel is going to impact hacking behaviors. Microsoft is attempting to find clever, more intense ways to go after hackers. This may not sound surprising, but think about this: They are filing legal suits over trademarks. What? That’s right. They are suing known hacker groups for trademarks. Although you can’t drag hackers to court, you can observe and disrupt their end game.

 

Okay, so they went after the group that was allegedly involved with the United States voting process. So far, Microsoft has taken over at least 70 different Fancy Bear, or FB, domains!

 

Why does this matter? Why should we care? Because FB literally became the man in the middle, legally speaking. By using Microsoft’s products and services, they opened themselves up to be taken over by... that’s right: Microsoft!

 

Since 2016, Microsoft has mapped out and observed FB’s server networks, which means they can indirectly cause their own mayhem. Okay, so they aren’t doing THAT, but they are observing and disrupting foreign intelligence operations. Cheeky, Microsoft. Cheeky!

 

Now, for me, I’m more interested in when they decide they can flip it over into their hands to eavesdrop and scan out networks. The United States’ Computer Fraud and Abuse Act gives Microsoft quite a blanket to keep warm under. But we can go into that later, as it is currently in use at Def Con...

 

Now, I started the conversation. It’s your turn to keep it going. Share your thoughts about Microsoft, security, hackers, etc. below.

Finishing up prep for VMworld next week. Two sessions, booth duty, and various interviews and briefings, along with networking. Should be a busy week! If you are attending, stop by the booth and say hello. I always enjoy talking data with anyone.

 

As always, here's some stuff I found on the internet I hope you might find interesting. Enjoy!

 

8 Lessons from 20 Years of Hype Cycles

Brilliant summary of Gartner's 20 years of Hype Cycles. Long, but worth the time if you have an interest in the tech industry (and are experienced enough to remember that Desktop for Linux was a thing).

 

Reverse Engineering IoT Devices

Long, but worth the read because geek.

 

London council 'failed to test' parking ticket app, exposed personal info

Despite having the necessary resources to build the app correctly, they didn't. Because security is hard, y'all.

 

DARPA tunes machine learning to radio signals

Maybe the reason we haven't made First Contact is because we haven't let the machines do the talking for us, yet.

 

Screen Savers Haven’t Been Useful For Decades. Why Are They Still Here?

Then again, maybe we aren't quite yet ready to connect with other civilizations.

 

Chill: Robots Won’t Take All Our Jobs

Jobs shift, they don't disappear. My children will have jobs, and job titles, that don't exist, yet.

 

Worried you’re being automated? Think again…

This gives us all hope, I think.

 

 

There was an eclipse this past Monday. Maybe you heard about it. This was the view from my office.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Though cybercriminals are usually incentivized by financial gain, the reality is that a cyber-attack can create far more damage than just hitting an organization fiscally. This is especially the case when it comes to healthcare organizations. Health data is far more valuable to a cybercriminal, going for roughly 10 or 20 times more than a generic credit card number. Therefore, we can expect to see a surge in healthcare breaches. However, the impact of this won’t just cripple a facility financially. It’s possible a cybercriminal could take over a hospital, manipulate important hospital data, or even compromise medical devices.

 

It’s already started

This sort of breach is already happening. At the start of 2016, three UK hospitals in Lincolnshire managed by the North Lincolnshire and Goole NHS Foundation Trust were infected by a computer virus. The breach was so severe it resulted in hundreds of planned operations and outpatient appointments being cancelled.

 

The event, which officials were forced to deem as a “major incident,” also made it difficult to access test results and identify blood for transfusions, and some hospitals struggled to process blood tests. This is one of the first examples of a healthcare cyber security breach directly impacting patients in the UK, but it won’t be the last.

 

Follow in the footsteps of enterprises

Breaches like these have put a great deal of pressure on healthcare IT professionals. Though there has been a shift in mentality in the enterprise, with security becoming a priority, the same can’t be said for the healthcare sector. This needs to change. The situation is worsened by the budget cuts most healthcare organizations face, which make security a hard thing to prioritize.

 

It doesn’t need to break to be fixed

Many healthcare IT professionals assume that management will only focus on security once a significant breach occurs, but it’s time healthcare organizations learned from enterprises that have seen breaches occur and acted. In the meantime, there is work that requires little investment that IT professionals can do to protect the network.

 

Educate and enforce

Employees are often the weakest link when it comes to security in the workplace. An awareness campaign should encompass both education and enforcement. By approaching an education initiative in this way, employees will have a better understanding of potential threats that could come from having an unauthorized device connected to the network.

 

For example, healthcare workers need to be shown how a cybercriminal could infiltrate the network through hacking someone’s phone. This would also start a dialogue between healthcare employees, helping them to prioritize security and thus giving the IT department a better chance of protecting the organization from a breach.

 

It’s naturally assumed that a healthcare IT professional should be able to effectively protect his or her organization from an attack. However, even the most experienced security professional would struggle to do so without the right tools in place. To protect healthcare organizations from disastrous attacks requires funding, investment, and cooperation from employees.

 

Find the full article on Adjacent Open Access.

I'm not aware of an antivirus product for network operating systems, but in many ways, our routers and switches are just as vulnerable as a desktop computer. So, why don't we all protect them in the same way as our compute assets? In this post, I'll look at some basic tenets of securing the network infrastructure that underpins the entire business.

 

Authentication, authorization, and accounting (AAA)

Network devices intentionally leave themselves open to user access, so controlling who can get past the login prompt (authentication) is a key part of securing devices. Once logged in, it's important to control what a user can do (authorization). Ideally, what the user does should also be logged (accounting).

 

Local accounts are bad, mkay?

Local accounts (those created on the device itself) should be limited solely to backup credentials that allow access when the regular authentication service is unavailable. The password should be complex and changed regularly. In highly secure networks, access to the password should be restricted (kind of a "break glass for password" concept). Local accounts don't automatically disable themselves when an employee leaves, and far too often, I've seen accounts still active on devices for users who left the company years ago, with some of those accessible from the internet. Don't do it.

 

Use a centralized authentication service

If local accounts are bad, then the alternative is to use an authentication service like RADIUS or TACACS. Ideally, those services should, in turn, defer authentication to the company's existing authentication service, which in most cases, is Microsoft Active Directory (AD) or a similar LDAP service. This not only makes it easier to manage who has access in one place, but by using things like AD groups, it's possible to determine not just who is allowed to authenticate successfully, but what access rights they will have once logged in. The final, perhaps obvious, benefit is that it's only necessary to grant a user access in one place (AD), and they are implicitly granted access to all network devices.

 

The term process

A term (termination) process defines the list of steps to be taken when an employee leaves the company. While many of the steps relate to HR and payroll, the network team should also have a well-defined term process to help ensure that after a network employee leaves, things such as local fall back admin passwords are changed, or perhaps SNMP read/write strings are changed. The term process should also include disabling the employee's Active Directory account, which will also lock them out of all network devices because we're using an authentication service that authenticates against AD. It's magic! This is a particularly important process to have when an employee is terminated by the company, or may for any other reason be disgruntled.

 

Principle of least privilege

One of the basic security tenets is the principle of least privilege, which in basic terms says: don't give people access to things unless they actually need it; default to giving no access at all. The same applies to network device logins, where users should be mapped to the privileged group that allows them to meet their (job) goals, while not granting permissions to do anything for which they are not authorized. For example, an NOC team might need read-only access to all devices to run show commands, but they likely should not be making configuration changes. If that's the case, one should ensure that the NOC AD group is mapped to have only read-only privileges.

 

Command authorization

Command authorization is a long-standing security feature of Cisco's TACACS+, and while sometimes painful to configure, it can allow granular control of issued commands. It's often possible to configure command filtering within the network OS configuration, often by defining privilege levels or user classes at which a command can be issued, and using RADIUS or TACACS to map the user to that group or user class at login. One company I worked for created a "staging" account on Juniper devices, which allowed the user to enter configuration mode and enter commands, and allowed the user to run commit check to validate the configuration's validity, but did not allow an actual commit to make the changes active on the device. This provided a safe environment in which to validate proposed changes without ever having the risk of the user forgetting to add check to their commit statement. Juniper users: tell me I'm not the only one who ever did that, right?

 

Command accounting

This one is simple: log everything that happens on a device. More than once in the past, we have found the root cause of an outage by checking the command logs on a device and confirming that, contrary to the claimed innocence of the engineer concerned, they actually did log in and make a change (without change control either, naturally). In the wild, I see command accounting configured on network devices far less often than I would have expected, but it's an important part of a secure network infrastructure.

 

Network time protocol (NTP)

It's great to have logs, but if the timestamps aren't accurate, it's very difficult to align events from different devices to analyze a problem. Every device should be using NTP to ensure that they have an accurate clock to use. Additionally, I advise choosing one time zone for all devices—servers included—and sticking to it. Configuring each device with its local time zone sounds like a good idea until, again, you're trying to put those logs together, and suddenly it's a huge pain. Typically, I lean towards UTC (Coordinated Universal Time, despite the letters being in the wrong order), mainly because it does not implement summer time (daylight savings time), so it's consistent all year round.

 

Encrypt all the things

Don't allow telnet to the device if you can use SSH instead. Don't run an HTTP server on the device if you can run HTTPS instead. Basically, if it's possible to avoid using an unencrypted protocol, that's the right choice. Don't just enable the encrypted protocol; go back and disable the unencrypted one. If you can run SSHv2 instead of SSHv1, you know what to do.

 

Password all the protocols

Not all protocols implement passwords perfectly, with some treating them more like SNMP strings. Nonetheless, consider using passwords (preferably using something like MD5) on any network protocols that support it, e.g., OSPF, BGP, EIGRP, NTP, VRRP, HSRP.

 

Change defaults

If I catch you with SNMP strings of public and private, I'm going to send you straight to the principal's office for a stern talking to. Seriously, this is so common and so stupid. It's worth scanning servers as well for this; quite often, if SNMP is running on a server, it's running the defaults.

 

Control access sources

Use the network operating system's features to control who can connect to the devices in the first place. This may take the form of a simple access list (e.g., a vty access-class in Cisco speak) or could fall within a wider Control Plane Policing (CoPP) policy, where the control for any protocol can be implemented. Access Control Lists (ACLs) aren't in themselves secure, but they're another hurdle for any bad actor wishing to illicitly connect to the devices. If there are bastion management devices (aka jump boxes), perhaps make only those devices able to connect. Restrict from where SNMP commands can be issued. This all applies doubly for any internet-facing devices, where such protections are crucial. Don't allow management connections to a network device on an interface with a public IP. Basically, protect yourself at the IP layer as well as with passwords and AAA.

 

Ideally, all devices would be managed using their dedicated management ports, accessed through a separate management network. However, not everybody has the funding to build an out-of-band management network, and many are reliant on in-band access.

 

Define security standards and audit yer stuff

It's really worth creating a standard security policy (with reference configurations) for the network devices, and then periodically auditing the devices against it. If a device goes out of compliance, is that a mistake, or did somebody intentionally weaken the device's security posture? Either way, just because a configuration was implemented once, it would be risky to assume it has remained in place ever since, so a regular check is worthwhile.
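
That periodic check is also easy to start small. As an illustrative sketch (the patterns will vary by platform and by your own policy), a script can sweep saved device configurations for the kinds of weaknesses covered above, such as default SNMP community strings and telnet left enabled:

```python
# Minimal sketch of a recurring security audit over saved device configs:
# flag default SNMP community strings and unencrypted management access.
# The patterns are illustrative; adjust them to your platform and policy.
import re
from pathlib import Path

CHECKS = {
    "default SNMP community": re.compile(r"snmp-server community (public|private)\b"),
    "telnet management access": re.compile(r"transport input .*\btelnet\b"),
    "plaintext HTTP server": re.compile(r"^ip http server\s*$", re.MULTILINE),
}

def audit(config_dir: str = "configs/collected") -> None:
    for cfg in sorted(Path(config_dir).glob("*.cfg")):
        text = cfg.read_text()
        findings = [name for name, pattern in CHECKS.items() if pattern.search(text)]
        if findings:
            print(f"{cfg.name}: " + ", ".join(findings))
        else:
            print(f"{cfg.name}: compliant with these checks")

if __name__ == "__main__":
    audit()
```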

 

Remember why

Why are we doing all of this? The business runs over the network. If the network is impacted by a bad actor, the business can be impacted in turn. These steps are one part of a layered security plan; by protecting the underlying infrastructure, we help maintain the availability of the applications. Remember the security CIA triad of Confidentiality, Integrity, and Availability? The steps I have outlined above, along with many more I can think of, help maintain network availability and ensure that the network is not compromised. This means that we have a higher level of trust that the data we entrust to the network transport is not being siphoned off or altered in transit.

 

What steps do you take to keep your infrastructure secure?

Previously, I discussed the origins of the word “hacking” and the motivations around it from early phone phreakers, red-boxers, and technology enthusiasts.

 

Today, most hackers can be boiled down to Black Hats and White Hats. The hat analogy comes from old Western movies, where the good guys wore white and the bad guys wore black. Both groups have different reasons for hacking.

 

Spy vs. Spy

The White Hat/Black Hat analogy always makes me think of the old Spy vs. Spy comic in Mad Magazine. These two characters—one dressed all in white, the other all in black—were rivals who constantly tried to outsmart, steal from, or kill each other. The irony was that there was no real distinction between good or evil. In any given comic, the White Spy might be trying to kill the Black Spy or vice versa, and it was impossible to tell who was supposed to be the good guy or the bad guy.

 

Black Hat hackers are in it to make money, pure and simple. There are billions of dollars lost every year to information breaches, malware, cryptoware, and data ransoming. Often tied to various organized crime syndicates (think Russian Mafia and Yakuza), these are obviously the “bad guys” and the folks that we, as IT professionals, are trying to protect ourselves and our organizations from.

 

The White Hats are the “good guys," and if we practice and partake in our own hacking, we would (hopefully) consider ourselves part of this group. Often made up of cybersecurity and other information security professionals, the goal of the White Hat is to understand, plan for, predict, and prevent the attacks from the Black Hat community.

 

Not Always Black or White

There does remain another group of people whose hacking motivations are not necessarily determined by profit or protection, but instead, are largely political. These would be the Gray Hats, or the hackers who blur the distinction between black and white, and whose designation as “good or bad” is subjective and often depends on your own point of view. As I mentioned, the motivation for these groups is often political, and their technical resources are frequently used to spread a specific political message, often at the expense of a group with an opposing view. They hack websites and social media accounts, and replace their victims’ political messaging with their own.

 

Groups like Anonymous, the Guy Fawkes mask-wearing activists who are heavily involved in world politics and who justify their actions as vigilantism, would fall into this category. Whether you think what they do is good or not depends on your own personal belief structure, and which side of the black/white spectrum they land on is up to you. It’s important to consider such groups when trying to understand motivation and purpose, if you decide to embark on your own hacking journey.

 

What’s in It for Us?

Because hacking has multiple meanings, which approach do we take as IT pros when we sit down for a little private hacking session? For us, it should be about learning, solving problems, and dissecting how a given technology works. Let’s face it: most of us are in this industry because we enjoy taking things apart, learning how they work, and then putting them back together. Whether that’s breaking down a piece of hardware like a PC or printer, or de-compiling some software into its fundamental bits of code, we like to understand what makes things tick, and we’re good at it. Plus, someone actually pays us to do this!

 

Hacking as part of our own professional development can be extremely worthwhile because it helps us gain a deep understanding of a given piece of technology. Whether it is for troubleshooting purposes, or for a deep dive into a specific protocol while working toward a certification, hacking is one more tool you can use to become better at what you do.

 

Techniques you use in your everyday work may already be considered “hacks." Some tools you may have at your disposal may potentially be the same tools that hackers use in their daily “work." Have you ever fired up Wireshark to do some packet capturing? Used a utility from a well-known tool compilation to change a lost Windows password? Scanned a host on your network for open ports using NMAP? All of these are common tools that can be used by the IT professional to accomplish a task, or a malicious hacker trying to compromise your environment.
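
To take the port-scanning example, the underlying mechanics are simple enough to show in a few lines of standard-library Python. This is a learning sketch, not a replacement for NMAP, and, as always, point it only at hosts you own or are explicitly authorized to test.

```python
# Learning sketch of a TCP port scan: try to connect to a handful of common
# ports on a host you are authorized to test. Not a replacement for NMAP.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def scan(host: str, timeout: float = 0.5) -> None:
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open)
            state = "open" if sock.connect_ex((host, port)) == 0 else "closed/filtered"
            print(f"{host}:{port:<5} ({name}) {state}")

if __name__ == "__main__":
    scan("127.0.0.1")  # only scan hosts you own or have permission to test
```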

 

As this series continues, we will look at a number of different tools—both software and hardware—that have this kind of utility, and how you can use them in a way that will improve your understanding of the technology you support, as well as develop a respect for the full spectrum of hacking that may impact your business or organization.

 

There are some fun toys out there, but make sure to handle them with care.

 

As always, "with great power comes great responsibility." Please check your local, state, county, provincial, and/or federal regulations regarding any of the methods, techniques, or equipment outlined in these articles before attempting to use any of them, and always use your own private, isolated test/lab environment.
