
Geek Speak

3 Posts authored by: KMSigma

Without a doubt, we're at a tipping point when it comes to security and the Internet of Things (IoT). Recently, security flaws have been exposed in consumer products, including children's toys, baby monitors, cars, and pacemakers. In late October 2016, Dyn®, an internet infrastructure vendor, suffered a massive DDoS attack launched from a botnet of malware-infected IoT devices, such as webcams and printers, that flooded Dyn's DNS infrastructure with traffic.

No, IoT security concerns are not new. In fact, any device that's connected to a network represents an opportunity for malicious activity. But what is new is the exponential rate at which consumer-grade IoT devices are being connected to corporate networks - for the most part, without IT's knowledge. This trend is both astounding and alarming: if your end users are empowered to bring in and deploy devices at their convenience, your IT department is left with an unprecedented security blind spot. How can you defend against something you don't know is a vulnerability?

BYOD 2.0


Right now, most of you are more than likely experiencing a flashback to the early days of Bring Your Own Device (BYOD) - when new devices were popping up on the network faster than IT could regulate them. For all intents and purposes, IoT can and should be considered BYOD 2.0. The frequency with which IoT devices are being connected to secured corporate networks is accelerating dramatically, spurred on in large part by businesses' growing interest in the data and insights collected from IoT devices, combined with vendors' efforts to significantly simplify the deployment process.

Whatever the reason, the proliferation of unprotected, largely unknown, and unmonitored devices on the network poses several problems for the IT professionals tasked with managing networks and ensuring organizational security.

The Challenges


First, there are cracks in the technology foundation upon which these little IoT devices are built. The devices themselves are inexpensive, and their engineering is focused on a lightweight consumer experience rather than an enterprise use case that demands legitimate security. As a result, these devices introduce new vulnerabilities that can be leveraged against your organization, whether as an entirely new attack vector or as an expansion of an existing one.

Similarly, many consumer-grade devices aren't built to auto-update, so the security patch process is lacking, creating yet another hole in your organization's security posture. In some cases, properly configured enterprise networks can identify unapproved devices being connected to the network (such as an employee attaching a home Wi-Fi router), shut down the port, and eradicate the potential security vulnerability. However, this type of network access control (NAC) usually requires a specialized security team to manage and is often seen only in large network environments. For the average network administrator, this means it is critically important to have a fundamental understanding of, and visibility into, what's on your network - and what it's talking to - at all times.
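
To make the "shut down the port" step concrete, here's a minimal sketch in Python using the pysnmp library. The switch address, community string, and interface index are all placeholders, and it assumes your switches permit SNMP v2c write access - treat it as an illustration of the idea, not a production NAC replacement.

    # Hypothetical sketch: administratively shut down a switch port after
    # an unapproved device has been traced to it. Assumes SNMP v2c write
    # access and that you already know the port's ifIndex.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, Integer, setCmd,
    )

    SWITCH = '192.0.2.10'    # placeholder: switch management address
    COMMUNITY = 'private'    # placeholder: read-write community string
    PORT_IFINDEX = 17        # placeholder: ifIndex of the offending port

    def shut_port(switch, community, if_index):
        """Set IF-MIB::ifAdminStatus to down(2) for the given interface."""
        error_indication, error_status, _, _ = next(setCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((switch, 161)),
            ContextData(),
            ObjectType(ObjectIdentity('IF-MIB', 'ifAdminStatus', if_index),
                       Integer(2)),  # 1 = up, 2 = administratively down
        ))
        if error_indication or error_status:
            raise RuntimeError('SNMP set failed: %s' % (error_indication or error_status))

    shut_port(SWITCH, COMMUNITY, PORT_IFINDEX)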

It's also worth noting that just because your organization owns a device and considers it secure doesn't mean the external data repository is secure. Fundamentally, IoT boils down to a device inside your private network communicating some type of information out to a cloud-based service. When you don't recognize a connected device on your network and you're unsure where it's transmitting data, that's a problem.

Creating a Strategy and Staying Ahead


Gartner® estimates that there will be 21 billion endpoints in use by 2020. This is an anxiety-inducing number, and it may seem like the industry is moving too quickly for organizations to slow down and implement an effective IoT strategy.


Still, it's imperative that your organization does so, and sooner rather than later. Here are several best practices you can use to create an initial response to rampant IoT connections on your corporate network:

  • Create a vetting and management policy: Security oversight starts with policy. Developing a policy that lays out guidelines for IoT device integration and connection to your network will help streamline your management and oversight process, not only today but also in the future. Consider questions like, "Does my organization want to permit these types of devices on the corporate network?" If so, "What's the vetting process, and what management processes do they need to be compatible with?" "Are there any known vulnerabilities associated with the device, and how are these vulnerabilities best remediated or mitigated?" The answers to these questions will form the foundation of all future security controls and processes.
    If you choose to allow devices to be added in the future, this policy will ideally also include guidelines around various network segments that should/should not be used to connect devices that may invite a security breach. For example, any devices that request connection to segments that include highly secured data or support highly critical business processes should be in accordance with the governance policy for each segment, or not allowed to connect. This security policy should include next steps that go beyond simply "unplugging" and that are written down and available for all IT employees to access. Security is and will always be about implementing and verifying policies.
  • Find your visibility baseline: Using a set of comprehensive network management and monitoring tools, work across the IT department to itemize everything currently connected to your wireless network and determine whether each device belongs or is potentially a threat. IT professionals should also look to leverage tools that provide a view into who and what is connected to your network, and when and where they are connected. These tools also offer administrators an overview of which ports are in use and which are not, allowing you to keep unused ports closed against potential security threats and avoid covertly added devices.
    As part of this exercise, you should look to create a supplemental set of whitelists - lists of approved machines for your network - that will help your team more easily and quickly identify when something out of the ordinary has been added, as well as surface any existing unknown devices your team may need to vet and disconnect immediately. (A rough sketch of this kind of whitelist check appears after this list.)
  • Establish a "Who's Responsible?" list: It sounds like a no-brainer, but this is a critical element of an IoT management strategy. Having a go-to list of who specifically is responsible for any one device in the event there is a data breach will help speed time to resolution and reduce the risk of a substantial loss. Each owner should also be responsible for understanding their device's reported vulnerabilities and ensuring subsequent security patches are made on a regular basis.
  • Maintain awareness: The best way to stay ahead of the IoT explosion is to stay current on vulnerability reports and vendor updates. If you're a network administrator, you should be monitoring for vulnerabilities and implementing patches at least once a week. If you're a security administrator, you should be doing this multiple times a day. Your organization should also consider integrating regular audits to ensure all policy-mandated security controls and processes are operational as specified and directed. At the same time, your IT department should look to host some type of security seminar for end-users to review what is allowed to connect to your corporate network and, more importantly, what's not, in order to help ensure the safety of personal and enterprise data.
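
As a rough illustration of the whitelist idea above, the sketch below compares the MAC addresses observed on your network (however you collect them - ARP tables, your monitoring tool's export, etc.) against an approved list and flags the strangers. The file names and their contents are invented for the example.

    # Hypothetical sketch: flag observed devices that aren't on the
    # approved whitelist. Both files are placeholders containing one
    # MAC address per line.

    def load_macs(path):
        """Read MAC addresses from a file, normalizing case and separators."""
        with open(path) as f:
            return {line.strip().lower().replace('-', ':')
                    for line in f if line.strip()}

    approved = load_macs('approved_macs.txt')  # vetted, known-good devices
    seen = load_macs('seen_macs.txt')          # e.g., dumped from ARP tables

    unknown = seen - approved
    if unknown:
        print('Unapproved devices found - vet or disconnect:')
        for mac in sorted(unknown):
            print(' ', mac)
    else:
        print('All observed devices are on the whitelist.')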


Final Thoughts


IoT is here to stay. If you're not already, you will soon be required to manage more and more network-connected devices, resulting in security issues and a monumental challenge in storing, managing, and analyzing mountains of data. The risk to your business will likely only increase the longer you operate without a defined management strategy in place. Remember, with most IoT vendors more concerned about speed to market than security, the burden falls to you as the IT professional to ensure that both your organization's and your end-users' data is protected. Leveraging the best practices identified above can help your organization get the most out of IoT without worrying (too much) about the potential risks.


This is a cross-post of IoT and Health Check on Sys-Con.

I may be dating myself, but does anyone else remember when MTV® played music videos? The first one they ever played was The Buggles' "Video Killed the Radio Star." The synth-pop feel of the song seemed so out of place with the lyrics, which outlined the demise of the age of the radio personality. Thinking back, this was the first time I can remember thinking about one new technology completely supplanting another. The corollary to this concept is that radio stars are now antiquated and unneeded.

Fast forward a few decades and I'm entrenched in IT. I'm happily doing my job and I hear about a new technology: virtualization. At first, I discounted it as a fad (as I'm sure many of us old-school technologists did). Then it matured, stabilized, and gained a foothold.


One technology again supplanted another, and virtualization killed the physical server star. Did this really kill off physical servers entirely? Of course not. No more so than video killed radio. It just added a level of abstraction. Application owners no longer needed to worry about the physical hardware, just the operating system and their applications. Two things happened:

1. Application owners had less to worry about
2. A need for people with virtualization experience developed

From that point on, every new person who entered IT understood virtualization as a part of the IT stack. It was a technology that became accepted, and direct knowledge of physical servers was relegated to secondary or specialized knowledge. Having knowledge about firmware and drivers was suddenly so "retro."

Virtualization matured and continued to flourish, and with it, new vendors and capabilities entered the market, but dark clouds were on the horizon. Or perhaps they weren't dark-just "clouds" on the horizon. As in private clouds, hybrid clouds, public clouds, fill-in-the-blank clouds. The first vendor I remember really pushing the cloud was Amazon® with their Amazon Web Services™ (AWS®).

Thinking back, this seemed like history repeating itself. After all, according to many, Amazon nearly destroyed all brick-and-mortar bookstores. It looked like they were trying to do the same to on-premises virtualization. Besides, why worry about the hardware and storage yourself when you can pay someone else to worry about it, right?

This seems reminiscent of what happened with virtualization. You didn't worry about the physical server anymore-it became someone else's problem. You just cared about your virtual machine.

So, did cloud kill the virtualization star, which previously killed the server star? Of course not. For the foreseeable future, cloud will not supplant the virtualization specialist, no more so than virtualization supplanted the server specialist. It's now just a different specialization within the IT landscape.


What does this mean for us in IT? Most importantly, keep abreast of emerging technologies. Look to where you can extend your knowledge and become more valuable, but don't "forget" your roots.


You never know-one day you may be asked to update server firmware.


This is a cross-post from a post with the same name on the VMBlog.

As technology professionals, we live in an interruption-driven world; responding to incidents is part of the job. All of our other job duties go out the window when a new issue hits the desk. Having the right information and understanding the part it plays in the organization is key to handling these incidents with speed and accuracy. This is why it's critical to be able to compare apples to apples during the all-important troubleshooting process.

What is our job as IT professionals?

Simply put, our job is to deliver services to end-users. It doesn't matter if those end-users are employees, customers, local, remote, or some combination of these. This may encompass things as simple as making sure a network link is running without errors, a server is online and responding, a website is handling requests, or a database is processing transactions. Of course, for most of us, it's not a single thing, it's a combination of them. And considering that 95 percent of organizations report having migrated critical applications and IT infrastructure to the cloud over the past year, according to the SolarWinds IT Trends Report 2017, visibility into our infrastructure is getting increasingly murky.
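
As a trivial illustration of two of those checks, here's a sketch that verifies a server answers on a TCP port and that a website returns a healthy HTTP response. The hostnames are placeholders; real monitoring platforms do far more, but the underlying questions are this simple.

    # Hypothetical sketch: two of the simplest service checks - is a server
    # reachable on a port, and is a website answering requests?
    import socket
    import urllib.request

    def tcp_check(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def http_check(url, timeout=3):
        """Return True if the URL answers with a non-error HTTP status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except OSError:
            return False

    print('database server reachable:', tcp_check('db.example.com', 1433))  # placeholder host/port
    print('website responding:', http_check('https://www.example.com'))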


So, why does this matter? Isn't it the responsibility of each application owner to make sure their portion of the environment is healthy? Yes and no. Ultimately, everyone is responsible for making sure that the services necessary for organizational success are met. Getting mean time to resolution (MTTR) down requires cooperation, not hostility. Blaming any one individual or team will invariably lead to a room full of people pointing fingers. This is counterproductive and must be avoided. There is a better way: prevention via comprehensive IT monitoring.


Solution silos

Monitoring solutions come in all shapes and sizes. Furthermore, they come with all manner of targets. We can use solutions specific to vendors or specific to infrastructure layers. A storage administrator may use one solution, a virtualization and server administrator another, and the team handling website performance a third. And, of course, none of these tools may be applicable to the database administrators.

At best, monitoring infrastructure with disparate systems can be confusing; at worst, it can be downright dangerous. Consider the simple example of a network monitoring solution seeing traffic moving to a server at 50 megs per second while the server monitoring solution sees incoming traffic at 400 megs per second. Which one is right? Maybe both of them - if the first means 50 MBps (megabytes) and the second 400 Mbps (megabits), they describe the same rate. This is just the start of the confusion. What happens if your virtualization monitoring tool reports in Kb/sec and your storage solution reports in MB/sec? And when talking about kilos, does a tool mean 1,000 or 1,024?
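
To see how quickly normalization pays off, here's a sketch that converts readings from different tools into a single unit (megabits per second). It assumes SI prefixes, where a kilo is 1,000 - swap in 1,024 if one of your tools means binary units.

    # Hypothetical sketch: normalize per-second throughput readings from
    # different tools into megabits per second. Assumes kilo = 1,000.
    BITS_PER_UNIT = {
        'Kb': 1_000,        # kilobits
        'KB': 8_000,        # kilobytes
        'Mb': 1_000_000,    # megabits
        'MB': 8_000_000,    # megabytes
    }

    def to_mbps(value, unit):
        """Convert a reading in the given unit/sec to megabits/sec."""
        return value * BITS_PER_UNIT[unit] / 1_000_000

    # The 'conflicting' readings from the example above agree after all:
    print(to_mbps(50, 'MB'))   # 400.0 - the network tool's 50 MBps
    print(to_mbps(400, 'Mb'))  # 400.0 - the server tool's 400 Mbps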


You can see how the complexity of analyzing disparate metrics can very quickly grow out of hand. In the age of hybrid IT, this gets even more complex since cloud monitoring is inherently different than monitoring on-premises resources. You shouldn't have to massage the monitoring data you receive when troubleshooting a problem. That only serves to lengthen MTTR.


Data normalization

In the past, I've worked in environments with multiple monitoring solutions in place. During multi-team troubleshooting sessions, we've had to handle the above calculations on the fly. Was it successful? Yes, we were able to get the issue remedied. Was it as quick as it should have been? No, because we were moving data into spreadsheets, trying to align timestamps, and calculating differences in scale (MB, Mb, KB, Kb, etc.). This is what I mean by data normalization: making sure everyone is on the same page with regard to time and scale.
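
Here's a rough sketch of the sort of normalization we were doing by hand in those spreadsheets - snapping samples from two tools onto shared one-minute buckets so their readings line up for comparison. The sample data and values are invented.

    # Hypothetical sketch: align samples from two monitoring tools onto
    # shared one-minute buckets so readings can be compared side by side.
    from datetime import datetime, timezone

    def minute_bucket(iso_timestamp):
        """Truncate an ISO-8601 timestamp to the top of its minute, in UTC."""
        ts = datetime.fromisoformat(iso_timestamp).astimezone(timezone.utc)
        return ts.replace(second=0, microsecond=0)

    # Invented samples: (timestamp, throughput already normalized to Mb/s)
    network_tool = [('2017-06-01T10:15:42+00:00', 400.0)]
    server_tool = [('2017-06-01T10:15:07+00:00', 398.5)]

    net = {minute_bucket(t): v for t, v in network_tool}
    srv = {minute_bucket(t): v for t, v in server_tool}

    for bucket in sorted(net.keys() & srv.keys()):
        print(bucket, '| network:', net[bucket], 'Mb/s | server:', srv[bucket], 'Mb/s')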


Single pane of glass

Having everything you need in one place with the timestamps lined up and everything reporting with the same scale — a single pane of glass through which you see your entire environment — is critical to effective troubleshooting. Remember, our job is to provide services to our end-users and resolve issues as quickly as possible. If we spend the first half of our troubleshooting time trying to line up data, are we really addressing the problem?


About the Post

This is a cross-post from my personal blog at blog.kmsigma.com.
