
Geek Speak


I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed as emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey and the problems solved, insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this will help other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


Blog Series

One Company's Journey Out of Darkness, Part I: What Tools Do We Have?

One Company's Journey Out of Darkness, Part II: What Tools Should We Have?

One Company's Journey Out of Darkness, Part III: Justification of the Tools

One Company's Journey Out of Darkness, Part IV: Who Should Use the Tools?

One Company's Journey Out of Darkness, Part V: Seeing the Light

One Company's Journey Out of Darkness, Part VI: Looking Forward


IT organizations that have followed this segregated path of each team purchasing the tools they need tend to have some areas with sufficient monitoring as well as areas in which no visibility exists. Predictably, these gaps in visibility tend to reside between areas of responsibility, or the "gray space" within an organization. Common examples of gray space include the interaction between applications, clients and the transport between the two, the network and mobile devices, guest devices/users and their traffic patterns, and help desk and network issues.

 

In a collaborative environment, the team is able to review the entirety of the tool set and discuss where gaps may exist. It is important that the right players have a seat at the table for these discussions - this will range from the traditional network, application, security, and help desk teams to some of the newer teams, like the mobile device team. Spend some time exploring pain points within the existing workflows, as these may stem from a lack of knowledge that could be supplemented by one of the tools. There may be tools that aren't shared, and that is quite alright; taking a phased approach to implementing tool sets on a wider basis will help ensure that these groups are getting tools that impact their ability to do their job.

 

With my customer we found the following to work:


Network Management

Consolidate network and wireless management tools to create "single pane of glass"

Troubleshooting tools helped the help desk resolve issues faster and gave them direct access to information that would otherwise be difficult to walk end users through gathering.

Increase awareness of Netman and ensure contractors know how to use it


Point Solutions

Expand access to IPAM solution to include help desk and contractors as it helps with network address planning and troubleshooting

Increase awareness of available scripts and create internal portal so that others know where to find them and how to use them


Expand NAC Integration Through APIs

Integrate NAC via its APIs so that it shares data with the IPAM and firewall tools, improving network visibility for guests and improving reporting (a rough sketch of this kind of API glue follows after this list)

Integrate NAC with log aggregation tool so that it has more device data

Expand log aggregation tool access to all senior IT staff
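
To make the NAC-to-IPAM idea above a little more concrete, here is a minimal Python sketch of that kind of API glue. Everything in it is a placeholder: the endpoint URLs, field names, and token are hypothetical, not any specific NAC or IPAM vendor's documented API, and the firewall half of the integration is not shown.

```python
# Hypothetical sketch: pull guest sessions from a NAC REST API and push the
# IP-to-user mappings into an IPAM system. All URLs, paths, and field names
# below are placeholders, not any specific vendor's API.
import requests

NAC_URL = "https://nac.example.local/api/v1/guest-sessions"   # placeholder
IPAM_URL = "https://ipam.example.local/api/v1/addresses"      # placeholder
HEADERS = {"Authorization": "Bearer <token>"}                 # placeholder token

def sync_guest_sessions():
    sessions = requests.get(NAC_URL, headers=HEADERS, timeout=10).json()
    for s in sessions:
        record = {
            "address": s["ip"],            # assumed field names
            "mac": s["mac"],
            "description": f"guest:{s.get('user', 'unknown')}",
        }
        # Post the record so IPAM reflects who is on each guest address.
        requests.post(IPAM_URL, json=record, headers=HEADERS, timeout=10)

if __name__ == "__main__":
    sync_guest_sessions()
```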


Operations

Improve ticketing system notifications to include provisions for outage windows

Create documentation repository on cloud storage so that all IT members can reach it


Issues to Address

Visibility into data center infrastructure is lacking

Legacy cloud-managed switches are still floating around and need to be dealt with. These have a great management platform in their own right, but they aren't integrated properly

Mobile device visibility and management are still lacking at this point

Server visibility tools have not been shared with anyone outside of the server team at this point, as we are still evaluating them

Application performance management

 

 

The development of organizational tools should be an iterative process, and each step should bring the company closer to its goals. The total value of a well-integrated management system is greater than the sum of its parts, as it can eliminate some of the holes in the processes. While many positive changes have been made, there are still many more to work through. This company has opted for a pace that enables them to make slow, steady progress on these tools while maintaining day-to-day operations and planning for many future tools. Brand new tools will likely be integrated by VARs/system integrators to ensure full deployment while minimizing impact on the IT staff.

By Corey Adler (ironman84), professional software developer

 

Greetings thwack®! You may know me as the guy who always pesters Leon during SolarWinds Lab™ (lab.solarwinds.com). What you don’t know is that Leon has been pestering me for quite a while to share some dev knowledge with all of you outside of Lab. I doubt a week has gone by without him asking me to contribute. I have finally acquiesced, and here is the result. For those of you networking types who like to dabble in the coding arts, here are a few tips to help you have a long and prosperous journey.

 

You are our new best friend…

Yes, it’s true! We’re happy that you’ve joined us. If you ever need any help, whatsoever, feel free to pester us. The more DBAs, network gurus, etc. that understand what developers go through, the better we’ll all be at communicating what’s needed to get our projects out the door in time. Imagine it in reverse: If a developer started learning and genuinely trying to understand what it is that you do, wouldn’t you encourage it? I have certainly come across a good number of IT people who’ve lamented how ignorant developers (and  their managers) are.  One job, in particular, had me interacting with the IT guys regularly, and they grew to like me because I wasn’t demanding ridiculous things for my project, I always listened, and I showed them that I wanted to learn the reasons behind what they were doing.

 

…and our worst nightmare

The reason for this should be pretty obvious. How many times have you seen a co-worker with less experience and technical knowledge than you successfully convince management to do something you know will end badly?  Or the person who doesn’t even have A+ certification who tries to solve a network issue instead of a CCIE? In both cases, I put good odds on something getting screwed up. Often, people who have just learned something think they know exactly how to use that new information. Trust me on this one: If you don’t have the know-how, don’t get involved. If you think that you know exactly what we need without talking to us first, you probably don’t.

 

For these first two points, the best approach, in summation, is for both sides to do something insane: COMMUNICATE!

 

Don’t try to reinvent the wheel

Imagine that someone wants to learn how cars work. They’re trying to figure out a good project that will help them dive right into the learning process. They go back and forth about it, and finally decide to create something that will change a stick-shift transmission into an automatic one. Is this person going to learn a lot about how cars work? Certainly! Is this going to be something practical that they will do on a regular basis? Probably not.

 

The same thing happens with programming. Chances are, someone has already created a tool that will help you do the thing you want to do, if not do it for you. Want to use CSS to have some cool styling on your Web page? Use Bootstrap to do it! Want to do some DOM manipulation in JavaScript®? Use jQuery®! Or how about setting up binding in your forms? Use Knockout! Does this mean that there’s no benefit to doing it the hard way? Absolutely not. It’s just unlikely that you’ll do it the hard way in practice. If you want to learn how to code properly, you should try and emulate those of us who do it for a living.

 

I hope this has been useful for you. Let me know if you want any more tips, tricks, or just general help. Remember the first point: I’d love to help you out! Until next time, I wish you good coding.

I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed as emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey and the problems solved, insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this will help other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.

 

Blog Series

One Company's Journey Out of Darkness, Part I: What Tools Do We Have?

One Company's Journey Out of Darkness, Part II: What Tools Should We Have?

One Company's Journey Out of Darkness, Part III: Justification of the Tools

One Company's Journey Out of Darkness, Part IV: Who Should Use the Tools?

One Company's Journey Out of Darkness, Part V: Seeing the Light

One Company's Journey Out of Darkness, Part VI: Looking Forward

 

Lean IT teams often do whatever they can to get by and my customer was no exception. One of the biggest challenges they had in approaching their network management strategy was to understand what they currently had. We had to work through the "day in the life" of a number of individuals to identify the core tools used, but were constantly surprised by new tools that would appear or were used so infrequently that the team would forget about them until a specific use case arose.

 

Open Source Tools

The team found open source tools to be of tremendous use, especially Netman and MRTG. These provided much needed visibility, and the price was right given the lack of investment in monitoring tools at the time of deployment. The relatively complex nature of deploying these tools did limit adoption, and we found that they often lagged behind from a configuration standpoint. New equipment would be deployed without necessarily being integrated into the tool; likewise, old equipment was not always removed from the tools when replaced. Lack of policy and discipline in a busy IT shop had limited the effectiveness of these tools. This was further compounded by only a small subset of the team having access. Additionally, as an outside resource, I had no idea what "normal" was when looking at the tool (e.g., is that router down or has it been removed?).


Vendor Specific Tools

These are the tools most people are familiar with: products like Cisco's Prime Infrastructure, Aruba's AirWave, VMware's vSphere with Operations Management (VSOM), etc. Each of these tools had been deployed widely and tended to be used by those whose job responsibility primarily covered the area managed by the tool; however, others who could benefit from the tool rarely used it, if at all. These tools tend to be fairly expensive and offer many features that are typically not leveraged very well. Additionally, most of the tools have robust AAA capabilities that would enable them to be shared with help desk teams, etc., but these features had been overlooked by the team, despite having been properly configured for their own purposes.


Third Party Tools

Some investment had been made in third-party tools, typically around a specific need. A good example of this would be Kiwi CatTools for ease of device backups. While this functionality existed in other tools, the company wanted a single location for all device configuration files. The customer found that numerous tools existed, but it took the entire team to enumerate them, and in a couple of cases multiple instances of the same tool had been purchased for use by different teams.


Scripting

Certain members of the IT team who were comfortable with writing and using scripts would develop their own tool sets; however, these would often not be shared with the rest of the IT team until some specific project jogged the author's memory, who would then offer up a script that had been written. In all cases these were very specific and had never been fully socialized, so the team decided to develop an internal website to reference these tools and their use cases.


Taking a Step Back

Working with each of the administrators and their areas of responsibility, it was easy to understand how they'd gotten to this point, where substantial investment had been made in a myriad of tools without a strategy in place. Each of the teams had acquired or deployed tools to make their lives easier and tended to go with whatever was vendor aligned or free. Taking a step back together from it all and looking at the system in its entirety provided a much different perspective - is this really how we'd design our management infrastructure if we built it from the ground up? Clearly not, so what next? Looking at the tools currently deployed, it was obvious that substantial duplicate functionality existed, as well as a number of gaps, especially as it pertained to any one specific team's visibility.

 

 

Enumerating the existing tools, processes, and use cases highlighted how much organizations actually do spend on tools while complaining that they don't have the visibility needed. Open lines of communication between teams, the development of an official or virtual "tools team", and careful consideration of products purchased are key to running the IT team properly. Highly custom scripts, and those who can write them, can be of great value to an organization; however, this value is wasted if the team at large isn't aware of these scripts and how to best utilize them.

2015 is the year of the road trip for me. On the downside I know the insides of DFW, ORD, SFO, LHR, LGA and PHX airports way more than I’d like to. But on the plus side is the reason I still like getting out on the road after many years in technology: getting to speak with other die-hard IT professionals.

 

I’m headed to Microsoft Convergence in a few weeks, and if you’re attending, come by and say hello. This show is a little different than other shows we typically attend. Usually, I’m talking about implementation and strategy at the level where we geeks apply technology: in the datacenter. We talk about the deep details of protocols, compatibility, and applied security. We even get to talk about disaster recovery, availability, governance, and budget on occasion, but for the most part we’re actually planning how to make IT happen for real.

 

But Convergence is focused on Microsoft Solutions, the full-stack, highly customizable and heavily integrated big-ironware welded by some of the largest IT departments on earth.  It’s not just setting up SolarWinds AppStack monitoring with SAM, VMan and SRM.  This is all that, plus Dynamics CRM, plus Azure plus IBM Watson and more.

 

Attendees at Convergence are more likely to be technology owners than administrators, and it’s always a pleasure to have conversations with them. As admins, we know the increasing rate of technology change and complexity is a serious problem because we’re the ones who actually have to wire it together. But we are “allowed” to develop new skills because it’s necessary to manage new gear on the loading dock. Management, on the other hand, is not always so fortunate.

 

Although many IT directors and VPs are sharp and a decent number once slogged it out on the helpdesk like us, today the rate of change is causing some real headaches when it comes to decision-making. They aren’t generally encouraged to block out regular time to develop new technology skills.  Instead, they’re working long hours running the business.

 

And it’s precisely their sleeping inner geek, eager to have the occasional technical conversation, that makes shows like Convergence so much fun. Most are happy in their roles, but if they could sneak away for a week in an SDN or OpenCompute lab and dust off their config and programming skills, they would. It’s the reason being a part of SolarWinds is so easy: vendor-agnostic geeks, managing unruly piles of IT infrastructure for the quiet glory of doing the impossible. Though sometimes, that window office looks pretty comfortable.

 

See you on November 30th in Barcelona!

We are becoming an IP-connected world. Home energy, city lights, cars, television, coffee machines, IP-enabled mobile devices, home security cameras, watches, manufacturing process automation, Star Trek-like hospital monitoring beds, you name it. If it’s been built in the last five years and has any kind of management or monitoring need, the device probably connects to an IP network.

 

Most of these systems should be non-routable, internally controlled networks to reduce the risk of tampering or accidental or intentional data loss. But we know that even if these networks are designed to be closed, some business need, convenience, or a clever hacker could open them up to external access. Consider the Target breach via an HVAC vendor[1], or the remote hack of a Jeep Cherokee[2] via an open port on the Sprint cellular IP network. (Sprint points out that it was merely providing the connectivity and transport for the attack, and that its network did not contain the end device vulnerability[3]). 

 

Closed networks bring challenges of their own, though. First, maintaining a strict policy of no remote connections can create a false sense of assurance. Second, such networks can drift from their original configuration, or become out of date with respect to patches and updates. Third, closed networks are more costly to maintain. The cost of an onsite visit to resolve a configuration issue or a patch gone wrong is certainly higher than remote remediation. Imagine the havoc that would ensue if the new LED road lighting system being deployed by the city of Los Angeles[4] were hacked. Still, the benefits of a connected LED lighting system, including reduced energy use, better management, and real-time communication, are likely a higher priority than the risk of hackers taking over nighttime lighting.

 

It’s worth reviewing the Jeep situation because it illustrates the challenges of adding systems to IP networks. First, as we add remote access to previously disconnected complex systems, the design of command and control vs. the data path needs to be carefully considered. Jeep designers believed their systems were disconnected, but researchers were able to find a connection. Once the connection was found, further engineering enabled the researchers to use the entertainment system with its necessary network connectivity to piggyback commands into the control system, radio, windshield wipers, steering, and brakes. Computers have made cars safer by giving them the ability to sense obstacles, feather the brakes, and warn the human driver when maintenance is needed. But those same computers become dangerous if accessed by unauthorized users.

 

As devices become increasingly interconnected, system functions and controls may be accidentally accessed. We can mitigate this risk by understanding our network baseline protocols and carefully monitoring new types of devices that appear on the network.
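
One lightweight way to act on that advice is to compare the devices currently seen on the network against a known baseline inventory. The following Python sketch is only an illustration; the simple one-MAC-per-line file format and the source of the "observed" list (ARP tables, a discovery tool export, etc.) are assumptions, not a prescription.

```python
# Minimal sketch: flag devices on the network that are not in the baseline
# inventory. Assumes both inputs are simple text files of MAC addresses,
# one per line (e.g. exported from a discovery tool or switch ARP/MAC tables).
def load_macs(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_unknown_devices(baseline_file: str, observed_file: str) -> set[str]:
    baseline = load_macs(baseline_file)
    observed = load_macs(observed_file)
    return observed - baseline  # anything seen that was never inventoried

if __name__ == "__main__":
    unknown = find_unknown_devices("baseline_macs.txt", "observed_macs.txt")
    for mac in sorted(unknown):
        print(f"New/unknown device on the network: {mac}")
```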

 

In the words of Arthur Conan Doyle, “Never trust to general impressions, my boy, but concentrate yourself upon details.”

 


 


[1] http://krebsonsecurity.com/2014/02/target-hackers-broke-in-via-hvac-company/

[2] http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

[3] http://www.fiercewireless.com/story/sprint-says-its-network-not-fault-hacking-demonstration-chrysler-vehicles/2015-07-28

[4] http://www.wired.com/2015/09/design-issue-future-of-cities/

Security tools: sometimes it seems that we never have enough to keep up with the task of protecting the enterprise. Or, at least it seems that way when walking the exhibit floor at most technology conferences. There’s a veritable smorgasbord of tools available, and you could easily spend your entire day looking for the perfect solution for every problem.

 

But, the truth is, IT teams at most organizations simply don’t have the budget or resources to implement dedicated security tools to meet every need and technical requirement. They’re too busy struggling with Cloud migrations, SaaS deployments, network upgrades, and essentially “keeping the lights on.”

 

Have you ever actually counted all the security tools your organization already owns? In addition to the licensing and support costs, every new product requires something most IT environments are in short supply of these days—time.

 

Optimism fades quickly when you’re confronted by the amount of time and effort required to implement and maintain a security tool in most organizations. As a result, these products end up either barely functional or as shelfware, leaving you to wonder if it’s possible to own too many tools.

 

There has to be a better way.

 

Maybe it’s time to stop the buying spree and consider whether you really need to implement another security tool. The fear, uncertainty, and doubt (FUD) that drives the need to increase the budget for improving IT security works for only so long. At some point, the enterprise will demand tangible results for the money spent.

 

Try a little experiment. Pretend that you don’t have any budget for security tools.  You might discover that your organization already owns plenty of products with functionality that can be used for security purposes.

 

What about open source? It isn’t just for academic environments. Plenty of large, for-profit organizations such as Google® and Apple® rely on open source software to build and support their own products. Open source can be reliable and even complement the commercial software in your existing portfolio. Additionally, many vendors originally started as open source and continue to maintain free community edition versions.

 

A recent trend in security is anomaly detection. If you can’t afford a dedicated tool, why not leverage existing monitoring systems for this purpose? Many of these tools track performance to create baselines and alert on unacceptable thresholds. While an alarm could be caused by hardware or software failures, alerts can also be the sign of an attack.
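
To illustrate the baseline-and-threshold idea in the simplest possible terms, the sketch below flags samples that stray too far from a rolling mean. Real monitoring products do this with far more sophistication, so treat it as a conceptual example only; the window size and three-sigma threshold are arbitrary assumptions.

```python
# Conceptual sketch: flag metric samples that deviate from a rolling baseline
# by more than N standard deviations. Input is just a list of numeric samples
# (e.g. requests per minute or failed logins per minute).
from statistics import mean, stdev

def find_anomalies(samples: list[float], window: int = 30, n_sigma: float = 3.0):
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > n_sigma * sigma:
            anomalies.append((i, samples[i]))  # (sample index, value)
    return anomalies

# Example: a sudden spike in failed logins stands out against the baseline.
if __name__ == "__main__":
    data = [10.0, 12, 11, 9, 10, 13, 11, 10, 12, 11] * 4 + [95.0]
    print(find_anomalies(data, window=10))  # flags the final spike
```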

 

For incident response, data from a monitoring system can be correlated with information from security tools to help determine the scope of a breach. Many monitoring products even provide canned reports for compliance initiatives such as PCI DSS and HIPAA.

 

Your enterprise wireless management system (WMS) is a great example of a multifunctional monitoring system. It’s loaded with features such as threshold monitoring, rogue detection, alerting, pre-built security reports, and even some basic firewall features. It’s enough to make your auditors weep for joy.

 

Netflow is more than just a network tool. The data from your collector is another helpful resource for identifying anomalous traffic, which could be a sign of a breached system and data exfiltration by an attacker. In general, network analysis tools can be very useful during security investigations as they can reveal much about malware behavior and attack scope.
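
To make that concrete, here is a small Python sketch that totals outbound bytes per internal host from exported flow records and surfaces the heaviest talkers. The CSV column names and the internal address range are assumptions; a real collector would expose this data through its own export or API.

```python
# Rough sketch: total outbound bytes per internal source address from a CSV
# export of flow records, then list the top talkers. Column names (src_addr,
# dst_addr, bytes) are assumptions about the export format.
import csv
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")  # example internal range

def top_talkers(flow_csv: str, top_n: int = 10):
    totals = defaultdict(int)
    with open(flow_csv, newline="") as f:
        for row in csv.DictReader(f):
            src, dst = ip_address(row["src_addr"]), ip_address(row["dst_addr"])
            if src in INTERNAL and dst not in INTERNAL:  # outbound flows only
                totals[row["src_addr"]] += int(row["bytes"])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for host, total in top_talkers("flows.csv"):
        print(f"{host}: {total} bytes sent outbound")
```

A host that suddenly jumps to the top of this list, especially outside business hours, is worth a closer look for possible exfiltration.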

 

How about your configuration management systems? In addition to limiting access by centralizing changes, these tools can also automate patching and provide audit trails. Some asset management systems can even be used for file integrity monitoring (FIM) or application whitelisting. There are also open source host intrusion detection systems (HIDS), such as OSSEC, which provide FIM functionality as well.

 

Think you can’t block traffic without a firewall? Think again. Every managed layer-3 device has the ability to implement access control lists (ACLs). While you won’t get some of the advanced or NextGen features vendors love to brag about, in many cases an ACL will meet your needs, and without the performance impact your network might experience when turning on all the features of a firewall.

 

If you own load-balancers, a.k.a. application delivery controllers (ADC), you also have some excellent built-in security controls. These devices do more than add high-availability to critical applications. Load-balancers provide application and network denial of service (DoS) protection through mechanisms such as SYN cookies, protocol checks, and connection throttling.

 

With the right add-ons, most Web browsers can be turned into tools for application analysis, testing, and reconnaissance. Most of these extensions are free; all you need to do is spend the time to find the ones that work for you. But in a pinch, the Google Hacking technique popularized by Johnny Long is still a viable option for determining your organization’s weak spots. You can even use no-cost online malware analysis and sandbox sites such as Wepawet and Virustotal for crude incident response.

 

DNS sinkholes, used for blocking access to malicious domains, have matured into the more easily manageable BIND Response Policy Zones. By adding an automatically updated reputation feed, your DNS server becomes a practical security control that can block access or redirect traffic to an internal remediation site.
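
For a sense of what the automation can look like, the sketch below turns a plain-text blocklist into BIND RPZ records that redirect blocked domains to an internal remediation host. The file path and remediation hostname are placeholders, and a production setup would also manage the zone's SOA serial and reload BIND after each update.

```python
# Minimal sketch: turn a plain-text blocklist (one domain per line) into BIND
# Response Policy Zone (RPZ) records that redirect lookups to an internal
# remediation host. Paths and the remediation hostname are placeholders.
REMEDIATION_HOST = "remediation.corp.example."  # where blocked users land

def build_rpz_records(blocklist_path: str) -> str:
    lines = []
    with open(blocklist_path) as f:
        for domain in (d.strip().rstrip(".") for d in f):
            if not domain or domain.startswith("#"):
                continue
            # Rewrite the domain and any subdomain to the remediation host.
            lines.append(f"{domain} CNAME {REMEDIATION_HOST}")
            lines.append(f"*.{domain} CNAME {REMEDIATION_HOST}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_rpz_records("malicious_domains.txt"))
```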

 

White hat, black hat, grey hat; they all use the same tools for security testing. And you can use them, too. Whether you choose Kali®, Security Onion®, or Pentoo Linux®, you’ll find enough security tools inside these open source security distros to keep you busy assessing your own organization. Many of the tools even have commercial support contracts available.

 

Threat intelligence services can be expensive. However, there are inexpensive or even free versions from information sharing and analysis centers (ISAC) and organizations such as Team Cymru and Shadowserver™.

 

Social media is also a handy tool for monitoring the latest threats and vulnerabilities. Most security researchers and hacktivists maintain Twitter accounts and love to post information about breaches and zero-days, providing even faster updates than Reddit®.

 

Good security is about managing risk, not tools. Resisting the siren song of the latest product sales pitch doesn’t make you a cheapskate; it makes you a discerning buyer who understands that there is no quick fix to building a more secure enterprise. Most often, it’s not about having the best tool, but having the one that does the job. Moreover, the more tools you have, the more you have to manage, which can increase your liability, cost, and organizational risks.

 

For more information, you can view a webinar on this topic here.

Happy Columbus Day!

 

We all want ready access to email and other critical apps from every device, on any network, all the time.


We want to use company equipment and home equipment interchangeably because we work from different locations throughout the day. As if all this wasn’t hard enough for your IT security team, just watch them start to lose their minds when you throw in some social media platforms. Their mantra is: you can’t have all of this and still be secure. But is that really the case?  In fact, with a few restrictions, a little software, and some common sense, most positions in many organizations should be able to achieve this level of flexibility and still remain relatively secure.

 

Let’s start with devices. Who doesn’t use a mobile platform, phone, or iPad® to conduct at least some business during the day? Many of us use these devices to check email, run IT alert apps, or business tools, like expense management or HR apps. In fact, according to Tech Pro Research, 74% of businesses are planning to use, or are already using, Bring Your Own Device (BYOD).[1]

 

Most businesses use mobile devices, especially if you count business-purchased mobile phones. Fortunately, Enterprise Mobile Management (EMM) makes it easy to secure corporate data and applications. Features in EMM include the ability to encrypt corporate data, manage applications that reside on the phone, force VPN connections, force a pin, and separate personal data from corporate data. Additionally, mobile devices are commonly used as a secondary factor for authentication and authorization.[2] It is much more convenient to use your mobile device as a soft token than carry around a key fob-based token. However, in some environments, personal devices are not considered secure enough and key fobs are required.

 

 

 

Mobile device risks

 

Mobile device risk comes from two primary threat vectors. The biggest risk is loss. If a device does not have a PIN or strong password, all of its data can be accessed. Even if your phone is authenticated, some good forensics packages can still extract data from it. If critical data is stored on the device, add-on encryption is essential. The second risk is malware. Malware enters a phone from two primary vectors: mobile advertising and compromised open source libraries. Because advertising on mobile devices is less controlled, malicious actors can insert malware through this application programming interface (API). Open source libraries have also been known to be compromised, as we saw with Xcode just this month.[3] EMM can help with both of these risks by limiting apps in the enterprise container and enforcing PIN and password rules.

 

Using a home personal computer for work is less common than using mobile devices, primarily because fewer people work on personal computers these days. Some companies are moving toward using tablets for work, and others use virtual desktops, which allow employees to use their own computers. Even companies that require employees to use laptops or desktops purchased and issued by their IT departments rely on Cloud-based applications to get work done. With Cloud-based apps, it is difficult to preclude access to personal devices.

 

The issues that accompany PC use are slightly more complicated than issues associated with mobile devices. The most successful remote desktop implementations are those that really only use the PC for its keyboard, video, audio, and mouse functions. If you want to allow local data storage, you need a policy around encryption (for sensitive corporate data) and a way to ensure that the home computer is as secure as a corporate device.

 

We are now adding social media to the equation. The issues to consider with social media include company reputation, policy restrictions, malware, and ownership. Organizations want to protect their reputation, so they write social media policies that provide guidelines on use, posting, and reporting. However, you may not know that the National Labor Relations Board has some strict guidelines on what an organization can and cannot have in its policy. There are First Amendment issues with the right to associate and discuss work issues that can conflict with certain social media policies. Check out NLRB guidelines to learn more.

 

Next, make sure your policy includes clear guidelines on who owns the account. If employees are allowed to post from their personal accounts, provide a disclaimer they can use to clearly show they are stating their own opinion. Require all work-related communications to be issued from organization-owned and -managed accounts.

 

Finally, there is malware to consider. Malware that arises on social media is the same type of malware you might see on many websites. The difference is that malware spreads quickly if it gets onto a popular topic or image on social media. This is why it is so important to ensure that nothing containing malware gets posted. Actively scan posts to make sure they don’t carry malicious images or attachments, and ensure that your browsers are up to date with the latest patches. Lastly, avoid risky programs, such as Flash, if at all possible.

 

In the words of Mr. Universe, “You can’t stop the signal.”[4]


BYOD and social media are here for the duration. If we evaluate our risks, and plan our controls, we can connect with confidence and assurance.



[1] http://www.zdnet.com/article/research-74-percent-using-or-adopting-byod/

[2] http://searchsecurity.techtarget.com/definition/multifactor-authentication-MFA

[3] https://www.washingtonpost.com/news/the-switch/wp/2015/09/21/apples-app-store-was-infected-with-malware-from-china/

 

[4] http://firefly.wikia.com/wiki/Mr._Universe

Remember grade school fire drills? Teachers demonstrated how to line up; they tested the door for heat; explained how dangerous smoke is; and a few times a year the obnoxiously loud bell rang and we’d all walk (not run) to the nearest exit. I’ll bet that fire safety ritual is forever etched in your mind, but do you know who to call in your organization if you suspect an information security issue? 

 

The challenge for organizations when it comes to information security awareness is that most programs are a combination of once-a-year lectures, or worse, online training (complete with PowerPoint® slides) that makes online defensive driving classes seem alluring. While this type of training may meet compliance or policy guidelines, retention for non-security professionals is minimal. In fact, the low effectiveness has prompted noted security researchers, such as Dave Aitel, to assert that security awareness is a waste of money.[1]

 

So what should an organization do about security awareness? Many in the security community are talking about establishing a Culture of Security, instead of imposing the “mandatory” annual training programs. Infusing security awareness as part of your organization’s culture requires commitments that are not always as easy to obtain as you might expect.

 

Security awareness must come from the top 

Your C suite must support all your security policies and be regarded as fully compliant. Too often, as security professionals, we write policies that the C suite ignores—something as simple as wearing a badge and requiring visitors to wear badges. Failure to adhere is noticeable and diminishes organizational respect for the security policies.

 

Measure and report on awareness campaigns

Often, security professionals run awareness campaigns and track who attends the classes, but do you track and report on:

  • Number of tailgaters spotted?
  • Laptops left unattended and not locked?
  • Phishing attempts spotted (up or down)?

Getting executives to report these stats in the company newsletter or all-hands meetings helps keep security top of mind.

 

 

Creativity elevates awareness and retention

As we said before, security awareness through traditional online and in-class training is useful, but the information doesn’t stick with us. Do something different.

  • Launch a security ambassador program.
  • Give out an award for best security risk identified.
  • Have a donuts (or breakfast taco) and security question station as employees arrive at work.

 

If you are responsible for IT security and your resources are limited, the following are some simple security awareness ideas.

 

See it, Say it

Set up an email alias for employees to report security risks—phishing, doors propped open, loose USB devices or laptops. You do need to respond. But at least you’ll have the information, and, over time, this is where you look for your deputies or security ambassadors.

 

Gamification

Yes, you can “gamify” security awareness. Try hosting quarterly or monthly contests. This really works[2]. Here are some game ideas:

 

  1. Pass the balloon. Attach a balloon to an unsecured desk (laptop open; confidential information, car keys, purse left out …). After correcting the infraction, the balloon recipient has to find someone else to pass the balloon to. 
  2. Candy for phishing. Put up a candy jar for a week. Anyone who reports a phish gets to dip into the jar. (Added challenge: you cannot eat the candy if you want to win). At the end of the week, the person with the most candy wins a gift card, or, perhaps more appropriately, a toothbrush.

 

Some of these ideas may seem frivolous or juvenile, but IT security is anything but that. Your objective is to establish a security-awareness mindset among everyone in the company. With more sentries on the lookout, you lower your risks of a security breach.


 


[1] http://www.csoonline.com/article/2131941/security-awareness/why-you-shouldn-t-train-employees-for-security-awareness.html

[2] http://www.computerworld.com/article/2489977/security0/boost-your-security-training-with-gamification-really.html

In my last posting, I wrote about my history with VDI and some of the complications I dealt with as these projects moved forward. In these cases, the problems we encountered were amplified exponentially when scaling the systems up. The difficulties we encountered, as I alluded to previously, had mostly to do with storage.

Why is that? In many cases, particularly when implementing non-persistent desktops (those that would be destroyed upon logoff and regenerated upon fresh logins), we would see a great deal of load placed on the storage environment. Often, when many of these were being launched at once, we’d encounter what became known as a boot storm. To be fair, most of the storage IO capacity at the time was gained by placing more discs into the disc group, or LUN. Mathematically, for example, a single 15,000 RPM disc produces at most about 120 IOPS, so if you aggregate 8 discs into one LUN, you receive a maximum throughput from the disc side of 960 IOPS. Compared to even the slowest of solid state discs today, that’s a mere pittance. I’ve seen SSDs operate at as many as 80,000 IOPS, or over 550 MB/s. These discs were cost-prohibitive for the majority of the population, but pricing has dropped to the point where today even the most casual end user can purchase them for use in standard workstations.

Understand, please, that just throwing SSD at an issue like this isn’t necessarily a panacea for the issues we’ve discussed, but you can go a long way toward resolving things like boot storms with ample read cache. You can go a long way toward resolving other issues by having ample write cache.

Storage environments, even monolithic storage platforms, are essentially servers attached to discs. Mitigating many of these issues also requires adequate connectivity within the storage environment from server to disc. Ample RAM and processing power in those servers, or simply more of them (heads, controllers, and nodes are other names for these devices), provide additional improvements for the IO issues faced. However, since I focus here on solutions rather than products, one must establish the best method of solving these issues for a given environment. Should you care to discuss discrete differentiations between architectures, please feel free to ask. Note: I will not recommend particular vendors’ products in this forum.

There have also been many developments in server-side caching that help with these kinds of boot storms. These typically involve placing either PCIe-based solid state devices or true solid state discs in the VDI host servers, onto which the VDI guest images are loaded and from which they are deployed. This alleviates the load on the storage environment itself.

The key in this from a mitigation perspective is not just hardware, but more often than not, the management software that allows a manager to allocate these resources most appropriately.

Remember, when architecting this kind of environment, the rule of the day is “Assess or Guess.” Unless you assess what kind of IO will be required, you can only guess at what you will need. Optimizing the VMs is key. Think about this: a good-sized environment with 10,000 desktops running, for example, Windows 7 at 50 IOPS per desktop, as opposed to an optimized environment in which those same desktops are tuned down to 30 IOPS, shows a difference of 200,000 IOPS at running state. Meanwhile, one must architect the storage to handle peak utilization. It’s really difficult to achieve these kinds of numbers on spinning discs alone.
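
To spell out the arithmetic, here is a back-of-the-envelope sketch in Python that simply reproduces the numbers above; the per-desktop and per-spindle figures are the rough rules of thumb used in this post, not measurements or a sizing tool.

```python
# Back-of-the-envelope VDI IOPS math from the example above.
DESKTOPS = 10_000
UNOPTIMIZED_IOPS = 50      # per desktop, e.g. an untuned Windows 7 image
OPTIMIZED_IOPS = 30        # per desktop after image optimization
IOPS_PER_SPINDLE = 120     # roughly what a single 15K RPM disc delivers

unoptimized_total = DESKTOPS * UNOPTIMIZED_IOPS       # 500,000 IOPS
optimized_total = DESKTOPS * OPTIMIZED_IOPS           # 300,000 IOPS
savings = unoptimized_total - optimized_total         # 200,000 IOPS saved

spindles_needed = optimized_total / IOPS_PER_SPINDLE  # 2,500 spinning discs
print(f"Running-state savings from optimization: {savings:,} IOPS")
print(f"Spindles needed even after optimization: {spindles_needed:,.0f}")
```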

I can go deeper into these issues if I find the audience is receptive.

Do you want your applications to perform at their best 24 x 7? Would you like to boost your organization’s productivity? Of course, everyone wants this, and here’s how: continuous monitoring of the application stack. Application stack?!! What’s that? Keep reading this article and I’ll walk you through the application stack.

 

To understand the application stack you should first understand application deployment in a typical organization.

[Figure: components of a typical application deployment (application stack)]

Any one of these components can be the reason for an application’s performance issues. Therefore, continuous monitoring of these components is a must. Just as important is a graphic illustration of all the components in the application deployment. The illustration should show the status of all the components, giving you a bird’s-eye view of every component’s working status. But a view of the statuses alone is not enough; you also have to ensure that you have enough information when there is an issue with one of the components.

 

Let’s go through an example, such as an issue with one of your applications, like MS SQL.

You can start with the server. As the administrator, you will know which server SQL is running on, so you can directly check the performance of the server, including memory, disc, etc. But if it is running in a virtualized environment (which is most often the case these days), you can drill down to see the host, virtual cluster, virtual datacenter, and the data store the VM belongs to. If there is an issue with any of these, you should drill down further to locate the exact issue. Minute details of the node, Hyper-V® host, ESX® host, Vserver, etc., should be checked. If everything is perfect, you can move on to the next item in the application stack.

 

You can check the volume the VM is using, the LUN where the data store of the concerned VM is located, storage pool, and storage arrays.

 

If the issue is with a Web-based application, the troubleshooting should start by tracking the application’s Web performance. You should be able to track the transactions, which indicates whether it’s an issue with the application itself. You can also backtrack to see if it’s a network issue.

 

Basically, if you are able to monitor your system, VM, storage, Web performance, and network, the complete application troubleshooting process will fall into place. When you have information about all these components together in one console, your life as an application engineer becomes a lot easier. Moreover, you will be able to proactively troubleshoot any upcoming issues, helping you achieve consistency in your application performance.

 

The combination of a couple of tools will help you achieve all of the above. Switching from one application to another is difficult. However, SolarWinds® has its Application Stack Management Bundle, which gives you full visibility of all components associated with an application. Want to know more about the Application Stack Management Bundle? Visit SolarWinds at booth HH18 at IP Expo, London.

DameWare Remote Support allows IT admins to connect to remote computers wherever they are situated—whether they are inside the network or outside, and whether they are attended or unattended. To underscore and make clear the value DameWare brings to your IT team, we made this graphic explaining all the popular remote connection options available in DameWare Remote Support.

 

The latest release, DameWare Remote Support v12.0, has introduced a new connection type – #4 in the graphic below – the unattended, over-the-Internet remote session. This adds to DameWare’s many remote connection capabilities, allowing IT pros to provide on-time IT support and administration to remote computers whether or not an end user is present.

 

[Infographic: DameWare Remote Support remote connection types]

 

Deployment Modes for Remote Connection Types:

  • Use the built-in DameWare Mini Remote Control (DMRC) application available in DameWare Remote Support (DRS) to initiate remote connection.
  • Use the DameWare Remote Support application to leverage the built-in administration tools to troubleshoot Windows® systems and Active Directory®.

 

Remote Connection Types and Deployment Options:

  1. Attended Inside the LAN – DRS Standalone (Use DMRC in DRS)
  2. Unattended Inside the LAN – DRS Standalone (Use DMRC in DRS)
  3. Attended Outside the LAN – DRS Centralized (Use DMRC in DRS + Central Server + Internet Proxy)
  4. Unattended Outside the LAN (New in v12.0) – DRS Centralized (Use DMRC in DRS + Central Server + Internet Proxy)
  5. Remote Connection from Mobile Device – DRS Centralized (Use DMRC in DRS + Central Server + Mobile Gateway)

 

Download the PDF of the Infographic below.

What makes an application perform optimally? I would say it is when there is collaborative performance from the server or the VM running the application, the network on which the application is used, and the storage. In this post, I provide information regarding storage and ways to configure it to avoid application performance bottlenecks.

 

LUN contention

LUN contention is the most common issue that storage admins try to avoid. Here are a few common mistakes that usually lead to LUN contention:

  • Deploying a new application on the same volume that handles busy systems, such as email, ERP, etc.
  • Using the same drives for applications with high IOPS.
  • Configuring many VMs on the same datastore.
  • Not matching storage drivers with processors.

Application issues can be traced to LUN contention only if the concerned database is being monitored. Drilling down to the appropriate LUN helps you make sure that the application runs fine.

 

Capacity planning

Poor application performance can often be tied to increased demand for services, and many times the constraint is storage and its IOPS. Storage is costly, and no organization wants to waste it on applications.

Capacity planning involves knowing your application, how much space it needs, and the kind of storage it requires. Capacity planning helps in predictive analytics, which allows users to choose the amount of storage their application requires. Capacity planning should be done before the application is moved into production. Doing so not only helps to right-size the application’s storage environments, but eventually helps lower the number of performance issues an application might experience during rollout.

 

Make sure it’s not the storage

SysAdmins often blame storage for application performance issues. It is always recommended to monitor your storage, which helps eliminate the blame game. Monitoring your storage helps you see whether it’s actually storage that’s causing performance issues, rather than the server or the network itself. Continuously monitoring your database can also help you avoid LUN contention. You will also be able to monitor the performance of your capacity planning, and be alerted when it’s not performing optimally.

Storage is the lowest common denominator of application monitoring. Application stack monitoring allows you to troubleshoot issues from the application itself. Consider the following troubleshooting checklist, and ask yourself:

  • Is it the application itself?
  • Is it the server on which the application is hosted?
  • Is it the VM?
  • Is it the storage?

I will walk you through the different layers and how they help troubleshoot application issues in my next blog. Also, to find out more about App stack monitoring, please visit us at booth HH18 at this year’s IP Expo in London.

When:           August 2005

Where:          Any random organization experimenting with e-commerce

 

Employee:         I can’t access the CRM! We might lose a deal today if I can’t send that quote.

Admin:               Okay, check the WAN, check the LAN, and the server crm1.primary.

Junior:                All fine.

Admin:               Great, Restart the application service. That should solve it.

Junior:                Yes, Boss! The app service was down. I restarted it and now CRM is back up.

 

When:             August 2015

Where:            Any random organization that depends on e-commerce

 

Sales Rep:           Hey! The CRM is down! I can’t see my data. Where are my leads! I’m losing deals!

Sales Director:     Quick! Raise a ticket, call the help desk, email them, and cc me!

Help desk:           The CRM is down again! Let me assign it to the application team.

App team:            Hmm. I’ll reassign the ticket to the server guys and see what they say.

SysAdmin:           Should we check the physical server? Or the VM instance? Maybe the database is down.

DB Admin:           Array, disc, and LUN are okay. There are no issues with queries. I think we might be fine.

Systems team:     Alright, time to blame the network!

Net Admin:           No! It’s not the network. It’s never the network. And it never will be the network!

Systems team:     Okay, where do we start? Server? VM? OS? Apache®? App?

 

See the difference?

 

App deployment today

Today’s networks have changed a lot. There are no established points of failure like there were when the networks were flat. Today’s enterprise networks are bigger, faster, and more complex than ever before. While current network capabilities provide more services to more users more efficiently, this also has led to an increase in the time it takes to resolve an issue, much less pinpoint the cause of failure.

 

For example, let’s say a user complains about failed transactions. Where would you begin troubleshooting? Keep in mind the fact that you’ll need to check the Web transaction failures, make sure the server is not misbehaving, and that the database is good. Don’t forget the hypervisors, VMs, OS, and the network. Also consider the fact that there’s switching between multiple monitoring tools, windows, and tabs, trying to correlate the information, finding what is dependent on what, collaborating with various teams, and more. All of this increases the mean time to repair (MTTR), which means increased service downtime and lost revenue for the enterprise.

 

Troubleshoot the stack

Applications are not standalone entities installed on a Windows® server anymore. Application deployment relies on a system of components that must perform in unison for the application to run optimally. A typical app deployment in most organizations looks like this:

[Figure: a typical application stack]

When an application fails to function, any of these components could be blamed for the failure. When a hypervisor fails, you must troubleshoot multiple VMs and the multiple apps they host that may have also failed. Where would troubleshooting begin under these circumstances?

 

Ideally, the process would start with finding out which entity in the application stack failed or is in a critical state. Next, you determine the dependencies of that entity with other components in the application stack. For example, let’s say a Web-based application is slow. A savvy admin would begin troubleshooting by tracking Web performance, move to the database, and on to the hosting environment, which includes the VM, hypervisor, and the rest of the physical infrastructure.

 

To greatly reduce MTTR, it is suggested that you begin troubleshooting your application stack.  This will help move your organization closer to the magic three nines for availability. To make stack-based troubleshooting easier, admins can adopt monitoring tools that support correlation and mapping of dependencies, also known as the AppStack model of troubleshooting.

 

Learn more at Microsoft Convergence 2015.

If you would like to learn more, or see the AppStack demo, SolarWinds will be at booth 116 at Microsoft Convergence in Barcelona.


Security is Everyone’s Job

 

 

“Never was anything great achieved without danger.” -Niccolo Machiavelli

 

As we begin National Cybersecurity Awareness Month, it's a great time to reflect on how we can all protect ourselves at work and at home. Let’s look at some current risks and see what changes we can adopt to mitigate them.

 

Email - we need it, love it, live it, but it’s risky.

 

Phishing is still the number one risk for most of us. Whether it’s an automatic preview in our work email system, or a browser injection on Web mail, SPAM and phishing are both a security risk and an irksome annoyance.

 

Unfortunately, we are not winning the battle against email cybercriminals and overzealous marketers, despite almost ubiquitous deployment of spam filters. Here we are in 2015, and spam still represents >10% of our inboxes.[1]  The statistics on phishing are even worse. From 2014 to 2015, the number of phishing sites increased from about 25,000 to 33,500, according to Google[2].

 

Furthermore, malicious email is becoming more sophisticated by embedding macros in ordinary looking attachments. In our busy lives, it’s easy to accidentally click on an attachment or link with malicious content.

 

The following are some email checks to keep top of mind:

 

Stay in familiar territory

 

Make sure the from: and reply-to: addresses match, or are from a company you know. Email phishers will try to fool you with an email that looks like it comes from someone you know, when it doesn’t.
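
For the technically inclined, the same check can be scripted. The sketch below uses Python's standard email parser to flag messages whose Reply-To domain differs from the From domain; it is a simplified illustration, since legitimate mail (newsletters, ticketing systems) can also set a different Reply-To.

```python
# Simplified sketch: flag an email whose Reply-To domain does not match the
# From domain, a common trait of phishing messages. Legitimate mail can also
# do this, so treat a mismatch as a prompt for caution, not proof of fraud.
from email import message_from_string
from email.utils import parseaddr

def domain_of(header_value: str) -> str:
    _, addr = parseaddr(header_value or "")
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def reply_to_mismatch(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    from_dom = domain_of(msg.get("From"))
    reply_dom = domain_of(msg.get("Reply-To"))
    return bool(reply_dom) and reply_dom != from_dom

if __name__ == "__main__":
    sample = ("From: IT Support <help@example.com>\r\n"
              "Reply-To: help@examp1e.net\r\n"
              "Subject: Reset your password\r\n\r\nClick here.")
    print(reply_to_mismatch(sample))  # True: replies go somewhere else
```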

 

 

Watch out for typosquatters

 

These are email domains that are just slightly different from the real company name. These are commonly used in Business Email Compromise campaigns, where fraudsters trick businesses and consumers into sending money to a bank outside the US, often China or Russia. This money is very difficult to recover because we don’t have the right legal relationships, and international banking laws don’t provide the same protection as US laws.

These transactions pose a big business risk. We’ve lost 1.2 billion dollars in recent years. Even worse, this type of fraud is on the rise, up by 270% according to the FBI results released just last month.[3]
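
A small script can also help spot lookalike domains. The sketch below compares a sender's domain against a list of domains you trust and flags near misses using a similarity ratio; the trusted list and the threshold are assumptions you would tune for your own partners and vendors.

```python
# Sketch: flag sender domains that are suspiciously close to, but not exactly,
# a domain you trust (e.g. "examp1e.com" vs "example.com"). The trusted list
# is a placeholder you would populate with your own partners and vendors.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "examplebank.com"}  # placeholder list

def looks_like_typosquat(sender_domain: str, threshold: float = 0.85) -> bool:
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: fine
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

if __name__ == "__main__":
    print(looks_like_typosquat("examp1e.com"))    # True: one character swapped
    print(looks_like_typosquat("example.com"))    # False: exact trusted match
    print(looks_like_typosquat("unrelated.org"))  # False: nothing like trusted
```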

 

 

Personal email accounts are not safe from fraudsters

 

Personal email account breaches are difficult to detect because the fraudulent request comes from a real account. Hackers use the compromised account to steal money from relatives and friends. Particularly vulnerable are older parents and grandparents.

 

Don’t be a victim. Here are some safe computing practices that can help you avert email fraud:

 

Keep private information private

 

Never share your password. If someone genuinely needs access to your account (should never happen at work), change your password, then change it back when they are done.

 

Add variety to your login credentials

 

If you use a free email account, use a unique password for this account—not the one you use for social media, websites, and especially banking. Change your password frequently—at least once a quarter. It doesn’t need to be complicated: use your current password and add a special character for each quarter (see the example below), or create your own scheme that you can remember. Also, make your change date memorable, like the beginning of the quarter, or when you pay your mortgage.

 

  • 1st quarter “!”
  • 2nd quarter “?”
  • 3rd quarter “&”
  • 4th quarter “%”

 

This makes it more difficult for password crackers to guess your password, and if there is a password leak at another site, you haven’t handed over the keys to your email house as well.

 

Keep your system patched

 

Many of the security vulnerabilities exploited by hackers to compromise accounts are old and have been fixed by the vendors. If you are in a corporation, talk to IT about automatic updates. If you can’t patch because you are running an older application, ask IT about creating a VM (virtual machine) for you to run that old application. This helps you keep your system patched and up to date. At home, make sure your operating system, pdf reader (adobe.com), and browsers are set for automatic updates. Patching these three things will protect you from the majority of risks.

 

Educate your friends and relatives

 

Warn your less tech-savvy acquaintances of the dangers of cyber fraud. Remind them that no true friend would ever ask for money in an email. If they do get such a request, advise them to make a phone call to the person. Also, give them the numbers of the fraud department at their bank so they have someone to call if they need advice.

 

Make sure your security software is current

 

Make sure everyone in your house has up-to-date anti-malware software. Put it on an auto-renewing charge if needed.

 

You may hear a lot of talk about next generation endpoint protection. And yes, anti-malware software is not perfect, but you still brush and floss your teeth. If you can’t afford an anti-malware software package, at least run the free Microsoft® Security Essentials (available for Windows Vista and Windows 7; from Windows 8 onward, it is built in as Windows Defender). For Mac users, Sophos offers a free antivirus solution.

 

 

As Albert Einstein said, “A ship is always safe at the shore - but that is NOT what it is built for.” If we want to fully utilize the Internet, a little caution and paranoia can reduce the risks.

 


[1] http://www.radicati.com/wp/wp-content/uploads/2011/05/Email-Statistics-Report-2011-2015-Executive-Summary.pdf

 

[2] https://www.yahoo.com/tech/googles-security-news-malwares-down-and-youre-120208482874.html

[3] https://www.ic3.gov/media/2015/150827-1.aspx

It’s getting cloudy. And I’m loving it! AWS re:Invent 2015 is back again and bigger than ever. SolarWinds will be there to talk and demo full-stack cloud monitoring so stop by Booth #643 for stellar swag and awesomely geeky conversations. Our Librato, Pingdom, and Papertrail subject matter experts will be on-site to answer questions about monitoring performance and events in the cloud. This includes application development, DevOps, infrastructure, and end-user experience aka the full stack.

                         

SolarWinds is also presenting a Lightning Talk in the AWS Partner Theater Booth #645. joeld, SolarWinds CTO, and Nik Wekwerth, Librato Tech Evangelist, will discuss buying or building monitoring software on Wednesday, October 7th at 12:40PM PT and again on Thursday, October 8th at 12:40PM PT. Their talk is entitled Let’s not re:Invent the wheel: When to build vs. buy monitoring software. I will be live tweeting and wearing my rage against the virtual machine t-shirt (photo courtesy of @sthulin) so you can join in on the discussion by following @Solarwinds and hashtags: #SolarWinds #reInvent.


 

[Photo: “rage against the virtual machine” t-shirt]

The top 3 things that I’m looking forward to at re:Invent 2015 are:

1.)        Security Ops best practices

2.)        Microservices and containers integration best practices

3.)        Big Data with machine learning algorithms

 

They highlight the additional business values enumerated below that IT organizations must realize to remain relevant:

1.)        Compliance and governance

2.)        Agility, availability, and customizability

3.)        Automation and orchestration for scalability

 

Further, these cloud services need to be integrated with what IT is already responsible for; thus, Hybrid IT monitoring and management will be key to the success of IT organizations as IT pros use their four core skills to bridge utility for their business units. Finally, for more context around cloud monitoring, please check out "Why Cloud Monitoring Matters in the App Age" by Dave Josephsen, Developer Evangelist for SolarWinds.

 

After re:Invent, I will share my experience at the event, highlight key takeaways, and present my thoughts on what it means for fellow IT pros.


[UPDATED: To include Booth #645 for the Lightning Talk Presentation and the added Thursday talk as well as the reference to Why Cloud Monitoring Matters]
