
I’m excited to announce a new SolarWinds product today, Firewall Security Manager (FSM).  To say that firewalling is a critical part of any modern network is an understatement, but managing the rules that determine whether your network is secure and whether your applications actually perform as expected can be daunting.  This is especially true if you are the second or third person to do the job, since there’s often little documentation on what rules exist and why.

 

FSM is all about addressing the firewall change management concerns you’ve told us matter most:

  • The ability to test rule changes offline before they’re pushed to production
  • The ability to report on whether rule changes create security problems
  • The ability to analyze all the rules on an existing firewall and determine whether they are used, are effective, or are just taking up space

Even better for all of you Network Configuration Manager (NCM) users out there: FSM already integrates with NCM, so you can easily import device details into FSM.

 

For a more detailed description of how FSM can add value to your network management tools, check out Francois Caron’s product blog post and the Firewall Management page on SolarWinds.com.

The BlackBerry certainly started the “smart mobile device” revolution, and some still use them, but most would agree that it was Steve Jobs and Apple that really changed the world. With the advent of the iPhone, iPad, and then the fast-following Android OS devices, employees and, more importantly, their executives enthusiastically brought the “new hotness” in mobility into the workplace, regardless of whether their IT organizations were ready or not. We industry folks like to call this Bring Your Own Device (BYOD) for lack of a better term, but I think this implies a formal “hey, come watch the game at my house and BYOB” invitation that most IT organizations don’t remember sending out. A lot of IT folks are now left lying awake at night wondering whether all these new devices comply with their security policies and what happens when the first mobile-to-PC virus hits.

 

BYOD CAUSES MOBILE DEVICE MANAGEMENT (MDM) FERVOR?

I recently attended an MDM webinar, and it seems most of the attendees were still trying to figure out what to do. A paltry 18% of those polled claimed to have rolled out an MDM program. So, let’s say you’re in the majority and still in the process of developing a program. Where do you start? Should you begin gathering requirements and evaluating MDM solutions immediately? In the immortal words of Lee Corso, not so fast my friends…

 

 

ROGUE AND UNINVITED DEVICES ON YOUR NETWORK

Here’s the thing: rogue or uninvited devices on the network aren’t a new problem for IT organizations. Anyone had a hub take out a switch on their network before? Or problems with rogue access points or servers? So the issue is more fundamental than just mobile devices. It’s a general problem that requires starting with a comprehensive assessment of your network. And as the owner of network ops (and thus the network ports), you have a lot more solutions at your disposal than you might think.

 

RECOMMENDATIONS


1. Start by assessing what devices are on your network and where they’re connecting. Consider leveraging user device tracking and/or switch port monitoring software to track which devices and users are connecting to which switch ports (both wired and wireless) over time, and to alert you to switch port capacity issues. Think of it as a switch port mapper on steroids. (A rough sketch of the underlying idea follows.)
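
To make the idea concrete (this is not how UDT itself works, just the underlying concept), here’s a minimal Python sketch that shells out to the net-snmp snmpwalk tool to read a switch’s BRIDGE-MIB forwarding table and list which MAC addresses are seen on which ports. The switch address, community string, and “known device” list are assumptions for illustration only.

```python
# Minimal sketch: list MAC addresses per switch port via the BRIDGE-MIB,
# using the net-snmp "snmpwalk" CLI. Switch address, community string,
# and the known-MAC list below are placeholders for illustration only.
import subprocess

SWITCH = "192.0.2.10"          # hypothetical switch management IP
COMMUNITY = "public"           # hypothetical read-only community string
FDB_PORT_OID = "1.3.6.1.2.1.17.4.3.1.2"   # dot1dTpFdbPort: MAC -> bridge port

KNOWN_MACS = {"00:11:22:33:44:55"}         # devices you expect to see

def walk(oid):
    """Run snmpwalk and yield (oid, value) pairs as raw strings."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", SWITCH, oid],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        left, _, right = line.partition("=")
        yield left.strip(), right.strip()

def mac_from_fdb_oid(oid):
    """The last six sub-identifiers of a dot1dTpFdb OID encode the MAC."""
    parts = oid.split(".")[-6:]
    return ":".join(f"{int(p):02x}" for p in parts)

for oid, value in walk(FDB_PORT_OID):
    mac = mac_from_fdb_oid(oid)
    port = value.split()[-1]               # e.g. "INTEGER: 12" -> "12"
    tag = "" if mac in KNOWN_MACS else "  <-- unknown device"
    print(f"port {port}: {mac}{tag}")
```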

 

 

 


 

2. Understand the performance impact that rogue or uninvited devices are having on your network. Use your network management system and enable NetFlow analysis to understand where you have network utilization issues and who and what is consuming your bandwidth. Mobile devices may be consuming more than you think, or it may turn out there are bigger fish to fry with bandwidth hogs running on “approved” devices. (A minimal top-talkers sketch follows.)
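
As a rough illustration of the “who is eating my bandwidth” question (NetFlow analysis tools do this for you at scale), here’s a minimal Python sketch that aggregates already-decoded flow records into top talkers by bytes. The flow-record shape and sample values are assumptions; a real deployment would sit behind a NetFlow/IPFIX collector.

```python
# Minimal sketch: rank "top talkers" from already-decoded flow records.
# The flow-record shape below is an assumption; a real deployment would
# sit behind a NetFlow/IPFIX collector that produces records like these.
from collections import defaultdict

flows = [  # (src_ip, dst_ip, bytes) -- hypothetical sample data
    ("10.0.0.15", "172.16.1.2", 1_200_000),
    ("10.0.0.23", "8.8.8.8",       45_000),
    ("10.0.0.15", "172.16.1.9",   900_000),
]

bytes_by_src = defaultdict(int)
for src, dst, nbytes in flows:
    bytes_by_src[src] += nbytes

# Print the ten largest senders, biggest first.
for src, total in sorted(bytes_by_src.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{src:<15} {total / 1_000_000:8.1f} MB")
```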



3. In parallel, start gathering requirements from the various departments you support. Determine how they’re using their mobile devices today and how they’d like to use them in the future and reconcile this with your own security policies.

 

Once you understand the impact of BYOD on your network and have determined the right balance between user desire and your own corporate policies, I think the next steps will be much clearer. I'll discuss what comes after assessment more in a subsequent post, but for now...good luck!

Welcome back. In this post, I'll continue with the next 5 events to monitor on the trail of stolen passwords...

You can see the first 5 in my post from the 10th, here: Top 10 Events to Monitor for Stolen Passwords, Part 1

 

The last 5 events fall into the following categories.

 

Unusual User Activity

Identify some key areas of segmentation you can take advantage of. Examples: sales people are in a common AD group and expected only to log on locally or via RDP to laptops in the laptops OU or group; developers are segmented to a certain part of the network and expected only to log on from those IPs or a corporate VPN.

  1. Successful or failed logon activity outside the norm by normal users. Eat the elephant one bite at a time: identify the common patterns you can, then flag logon or failure activity that falls outside of them. A salesperson attempting to log on to a developer workstation, even if they succeed? A developer not joined to the domain attempting to log on to domain resources? Any of these patterns could be mistakes, but they are commonly at least policy violations, if not security issues. Start conservatively and confidently.
  2. Interactive logons (local or remote) to multiple machines simultaneously. You'll have to set up some thresholds here that make sense, like 3 different machines within 30 minutes of each other. If you're running a computer lab this won't work, but most businesses aren't using shared machines, and if they are, a situation like this would indicate either a technical issue or a security issue. (A minimal detection sketch for both checks follows this list.)
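
As a rough sketch of both checks above (not a substitute for a log management or SIEM product), the Python below scans already-parsed logon events for (a) users logging on to machines outside their expected segment and (b) interactive logons to several machines within a short window. The event shape, group mappings, and thresholds are assumptions for illustration.

```python
# Minimal sketch: flag (a) logons outside a user's expected machine group
# and (b) interactive logons to many machines in a short window.
# Event shape, group mappings, and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

EXPECTED_GROUP = {"alice": "sales", "bob": "dev"}            # user -> segment
MACHINE_GROUP = {"SALES-LT-01": "sales", "DEV-WS-07": "dev"} # host -> segment
WINDOW = timedelta(minutes=30)
MAX_MACHINES = 3

events = [  # parsed logon events (e.g. from event ID 4624/4625 exports)
    {"user": "alice", "host": "DEV-WS-07",
     "time": datetime(2012, 8, 13, 9, 5), "interactive": True},
]

# (a) segment mismatch: user logging on outside their expected group
for e in events:
    if EXPECTED_GROUP.get(e["user"]) != MACHINE_GROUP.get(e["host"]):
        print(f"segment mismatch: {e['user']} on {e['host']} at {e['time']}")

# (b) too many interactive logons within the window
by_user = defaultdict(list)
for e in events:
    if e["interactive"]:
        by_user[e["user"]].append((e["time"], e["host"]))

for user, logons in by_user.items():
    logons.sort()
    for t, _ in logons:
        hosts = {h for ts, h in logons if t <= ts <= t + WINDOW}
        if len(hosts) >= MAX_MACHINES:
            print(f"{user}: {len(hosts)} machines within {WINDOW}: {sorted(hosts)}")
            break
```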

 

 

Privilege Use Gone Wild

There are mixed messages about abuse of privileged access to resources causing breaches – there are horror stories like the guy from San Francisco who made it impossible to access network devices after getting fired, and there are breach reports that show insider threats are still a significant issue – but it's common sense to know what your most trusted users are doing. After all, those most privileged users are also going to have the most access and be the highest-value targets. Think beyond just IT administrators here and identify other high-visibility targets, too, like your CEO (probably has access to lots of data), financial officers (them too!), development operations staff (can make changes, and may have access to databases and resources where passwords or critical data exist), and anyone with named access to visible resources.

  1. IT Administrators doing "maintenance" outside of normal business hours. Yes, they could just be doing their jobs during an outage, but this shouldn't be commonplace (if it is, you should implement others on this list first). If you've got multiple departments across the world, create different "normal" situations for each of them and look for the abnormal.
  2. Failed access to IT administrator accounts and computers. Set up thresholds that make sense here; you probably don't want to know every time I've forgotten my coffee, but you do want to know when it's probably not me, and whether it continues after my account locks (if you've got lockout policies). Consider limiting this to interactive logons (local or remote) so you don't get a bunch of emails when someone changes their password and their mobile device starts failing to log in.
  3. Successful or failed access to network device administration. In most environments this rule should end with a hard period: control where you access network devices from and watch for attempts outside of that norm. Logons to the router from a sales laptop? Nope. (See the sketch after this list.)
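
Here's a minimal sketch of two of these checks, assuming you already have parsed logon events from your logs: flag admin activity outside business hours, and network-device logons originating outside a management subnet. The hours, subnet, account list, and event shape are illustrative assumptions, not a product feature.

```python
# Minimal sketch: flag admin logons outside business hours and network-device
# logons from outside a management subnet. Hours, subnets, accounts, and the
# event shape are illustrative assumptions.
from datetime import datetime
from ipaddress import ip_address, ip_network

BUSINESS_HOURS = range(7, 19)                 # 07:00-18:59 local, per region
MGMT_NET = ip_network("10.99.0.0/24")         # where device admin should come from
ADMINS = {"itadmin1", "itadmin2"}

events = [  # parsed logon events (source varies: AD, syslog, TACACS+ logs)
    {"user": "itadmin1", "target": "core-router-1", "src_ip": "10.20.5.44",
     "time": datetime(2012, 8, 13, 2, 30), "is_network_device": True},
]

for e in events:
    if e["user"] in ADMINS and e["time"].hour not in BUSINESS_HOURS:
        print(f"off-hours admin activity: {e['user']} -> {e['target']} at {e['time']}")
    if e["is_network_device"] and ip_address(e["src_ip"]) not in MGMT_NET:
        print(f"device admin from outside mgmt net: {e['user']} from {e['src_ip']} -> {e['target']}")
```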

 

Monitoring for these events will help your organization remain secure and not become the next victim of a security breach.

Chances are you've heard about the breaches of LinkedIn, eHarmony, and last.fm, among others. In these breaches, password hashes were leaked and almost immediately cracked, demonstrating how unfortunately basic many passwords are despite our attempts to make people smarter about passwords with password policies and constant user education. The reality is that passwords are an inconvenience to most people -- a necessary evil -- and a cost of doing business. People frequently reuse passwords or use a pattern with simple words or character addition/replacement, making it possible to follow them around the internet and gain access to multiple accounts. You might not care if the passwords that are exposed are for online dating, music, or other social sites, but what if they were yours? Or worse, what if it was a bank or credit card company that was breached?

 

Following these breaches, a lot of stories came out about watching accounts for stolen passwords. But many of these articles lacked concrete advice: what do you actually LOOK for when you get to all that log data?

 

There are 10 events in particular to monitor for stolen passwords and general abuse on a network.

 

Critical System Access

The first step is to make a list of all systems that have external access from the internet, then of critical systems with internal (but theoretically limited) access from user networks (servers, network devices, management systems).

  1. Straight up failed logon attempts to any critical system or server. Consider excluding single failed logons from trusted IT administrators, who are infallible and probably have long passwords they might not type without enough coffee.
  2. Successful and failed logon attempts directly to local administrative accounts (administrator, root, dedicated domain admin), especially on critical systems or servers, but possibly extended into workstations.
  3. Multiple failed logins to any account. Some systems will have a lockout policy that kicks in, but if it trips and the failures continue, you could be looking at brute force attempts.
  4. All interactive remote logon activity to internet-facing systems. If someone logs on via RDP (or SSH, or whatever it might be) to that one system you keep around so the guy who contracts for that one service can help you out in a pinch, you should know immediately that it's happening.
  5. Attempts by non-privileged, non-IT accounts (or IP addresses) to log on interactively to any critical system or server, failed or successful. This just shouldn't be happening; it's either misuse of the account, or the account has been compromised and is being used to gain access to other resources on the network. (A minimal monitoring sketch follows this list.)
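
As a minimal sketch of a couple of these checks (your log management tool will do this with far more context), the Python below counts failed logons per account and host to catch brute-force patterns, and flags any interactive logon to an internet-facing host. The threshold, host list, and event shape are assumptions for illustration.

```python
# Minimal sketch: count failed logons per (account, host) and flag anything
# past a threshold, plus any interactive logon to an internet-facing host.
# Threshold, host list, and event shape are illustrative assumptions.
from collections import Counter

INTERNET_FACING = {"dmz-rdp-01"}
FAIL_THRESHOLD = 5

events = [  # parsed logon events from your critical systems
    {"user": "administrator", "host": "dmz-rdp-01", "failed": True, "interactive": True},
]

failures = Counter()
for e in events:
    if e["failed"]:
        failures[(e["user"], e["host"])] += 1
    if e["interactive"] and not e["failed"] and e["host"] in INTERNET_FACING:
        print(f"interactive logon to internet-facing host: {e['user']} -> {e['host']}")

for (user, host), count in failures.items():
    if count >= FAIL_THRESHOLD:
        print(f"possible brute force: {count} failed logons for {user} on {host}")
```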

 

Come back on August 17 to see the follow-up post listing events 6-10 you should monitor on the trail of stolen passwords...

So VMware picked up a product called Log Insight (from a company called Pattern Insight) today, another in their recent run of acquisitions.  What makes this acquisition interesting is the tacit acknowledgement that management can’t be done using data only extracted from the vSphere API.  Historically VMware and others have focused their data center management approach on the VM, host, network, and storage data available through the API, but anyone who’s managed an IT environment knows that’s like running a race with one arm tied behind your back.

 

SolarWinds has long believed that managing a data center environment is about collecting and correlating data from many sources: from the storage arrays, from the hosts, from network devices, and from the application components themselves.  Sometimes you gather this data via logs, other times via direct APIs, and where you have common bottlenecks you build management applications around the problem.  For example, if you have a real cloud data center environment, you know that from time to time you may run into I/O bottlenecks.  These may be at the host, at the array controller, or at the disk, and when troubleshooting storage I/O issues it’s beneficial to have management software that can map the path from the VM all the way to the spindle.


On the log management side the problems tend to be more about finding the needle in the haystack, and the reality is that’s hard.  Logs tend to be machine friendly but not user friendly, yet when you find a problem you want to make sure it either doesn’t happen again, or that you’re notified and can take corrective action before it spirals out of control.  In dynamic data center environments, waiting for a log management system to collect data, write it to a database, then run its alert and rules engine, and only then send an alert without taking any automated action feels like waiting an eternity.  Real-time log analysis and response is the right approach to this problem: event log correlation happens in memory, and the log management system can execute automated actions and then notify you of the problem and the actions taken.
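
To make the “correlate in memory, act immediately” point concrete, here’s a minimal Python sketch that tails a log file, counts matching events in a sliding window, and fires an action the moment a rule trips, without waiting for a database write or a batch rules run. The log path, pattern, threshold, and action are assumptions for illustration.

```python
# Minimal sketch: tail a log, correlate matching lines in memory within a
# sliding time window, and trigger an automated action the moment a rule
# trips. Path, pattern, threshold, and the action taken are illustrative.
import re
import time
from collections import deque

LOG_PATH = "/var/log/auth.log"             # hypothetical log source
PATTERN = re.compile(r"Failed password")   # hypothetical rule
WINDOW_SECONDS = 60
THRESHOLD = 10

def take_corrective_action():
    # Placeholder for a real automated action (disable account, push an ACL, ...)
    print("rule tripped: taking corrective action and notifying on-call")

hits = deque()
with open(LOG_PATH) as f:
    f.seek(0, 2)                           # start at end of file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.5)
            continue
        now = time.time()
        if PATTERN.search(line):
            hits.append(now)
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        if len(hits) >= THRESHOLD:
            take_corrective_action()
            hits.clear()
```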


So it’s great to see VMware get in the game – operations is definitely about more than the data in the vSphere sandbox, and I know I didn’t need to tell all of you that.

Guest Blog by David Marshall, Sr. Marketing Manager and Social Media Strategist at Virtual Bridges

Virtual Bridges and SolarWinds have jointly put together a four-part blog series intended to provide guidance in implementing and managing a VDI environment. These will be posted alternately on the SolarWinds Whiteboard blog at http://thwack.solarwinds.com/community/solarwinds-community/whiteboard/blog and the Virtual Bridges blog at http://www.vbridges.com/company/news/blog/. This is the second part in the series; the other parts are:

We hope this is useful to you and look forward to your feedback.

##

 

We’ve all heard the horror stories of VDI projects that are stalled and left in the IT graveyard. But with the right planning, you can enjoy the benefits of VDI without the headaches.


Below are 10 tips to ensure your VDI project is primed for success, and ready from the onset through project completion.

  1. End user analysis is essential. Evaluate not just how many users will be on the system, but more importantly, how many desktops each user needs and what type of activities they will be doing. For example, an admin or knowledge worker typically only needs a single desktop, but developers often need multiple desktops. Likewise, some users will be doing low bandwidth office style work, whereas others will engage in multimedia viewing. Consider how many different locations exist and how spread out they are as well. Together, these considerations will allow you to properly size your infrastructure, understand user experience implications, and establish reasonable SLAs.
  2. “Where” matters as much as “Who”. When you look at access clients and protocols, make sure to consider where the users will be located, how often they will be remote and accessing their VDI session across a WAN, and how much multimedia will be used in the environment.
  3. Take a holistic approach with your storage considerations. Start with a comprehensive understanding of your users (including types of users) and move through the entire implementation, carefully considering the design for both scaling and cost.  Evaluate your storage needs independent of vendor, and then consider using monitoring software such as SolarWinds to keep the storage vendor honest. Keep in mind that you will more than likely be working with different types of users in your environment: task, knowledge, and/or power users.  Ensure use cases are set up accordingly.
  4. Cluster to improve reliability. Clusters of host/server machines can improve both scaling and reliability. From a VDI scalability standpoint, the cluster allows you to more effectively load balance virtual desktops for better performance. From a reliability standpoint, clustering eliminates the concern of having a single point of failure, while also leaving spare capacity in case of failure.
  5. Guest OS can replace a desktop. VDI done right eases the migration burden that OS upgrades place on both end users and IT staff. In fact, this has been so successful that many enterprises are turning to VDI to streamline the time-sensitive upgrade from Windows XP to Windows 7 (and, for some, the planning stages of the anticipated move to Windows 8). In this case, the guest OS is the actual desktop OS that users interact with in their VDI session, essentially replacing their desktop.
  6. Applications drive user productivity. An assessment of the application landscape is critical to the success of any VDI implementation. Applications drive user productivity and typically include a mix of commercial and in-house apps. Most of the typical commercial apps will become part of the Gold Master image, a centrally managed, common set of operating system and base application configurations shared among a group of users, while the specialized, in-house apps may be packaged and associated with particular users’ desktops (e.g. a factory floor worker).  If custom in-house applications are widely deployed, they can live in the Gold Master as well. Applications typically included with the Gold Master are: MS Office, Adobe Reader, Flash, and browsers such as Chrome or Firefox.
  7. The most up-to-date USB devices work best. It’s important to review the peripherals that will be used in the VDI sessions.  Most up-to-date USB devices work well in the VDI environment as the USB is redirected from the physical client to the VDI session. Typical devices utilized in VDI environments might include printers, scanners and thumb drives. Special case or niche devices may need special consideration. Keep in mind, USB devices work best over a LAN connection.
  8. Client devices offer flexibility and mobility. Thin clients are typical replacements for the legacy PCs in use and offer a number of advantages including excellent power efficiencies (often enough to offset their costs over a 3-5 year payback period), superior security and much longer life.  Other physical clients may include thick clients, iPads and other tablets used in BYOD scenarios.  The same virtual desktop can be accessed from multiple clients and allows the end-user to be mobile without having to be tethered to a specific client device.  
  9. Make sure it scales. Infrastructure choices, including servers, storage, and networking, should be scalable up to your maximum theoretical deployment.  While you may start with a small, initial roll-out, you should plan with the maximum intended size in mind. At the same time, ensure you can migrate with minimal downtime if you do outgrow your current capacity.
  10. Keep cost in perspective. Storage and other VDI costs should be thought of as marginal cost per user, not overall cost.  For example, if you are planning a 6,000-user deployment and looking at a $230,000 NAS setup to support it, the number that really matters is roughly $38/user, not $230k. (A quick worked example follows this list.)
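
Here’s the quick worked example of that marginal-cost framing, using the same figures as the tip above (they’re example numbers, not a quote):

```python
# Marginal cost per user for the hypothetical deployment described above.
nas_cost_usd = 230_000
users = 6_000
print(f"${nas_cost_usd / users:,.2f} per user")   # -> $38.33 per user
```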

 

Do you have any tips that you’d offer to those considering VDI? Drop us a line and let us know. In the meantime, we hope you’ve enjoyed this blog series with Virtual Bridges. Keep an eye out for part 4, Key Considerations for Storage Optimization for VDI, coming from SolarWinds on Thursday.

Check out this recent infographic commissioned by Skype, and you’ll see why you shouldn’t leave the patching of your workstations to your end users.  According to the survey results, 40% do not always update their software when prompted to do so.  The reason?  Half of users don’t see a benefit to doing so, nor do they understand the impact of the upgrade.  Nearly one-third of respondents think patching takes too long.

 

This can be very problematic, especially as we are seeing an increase in BYOD; according to a joint survey by SolarWinds and Network World, 27% of IT pros are not at all confident in their level of visibility into personal devices accessing the corporate network.

 

This is also very concerning because applications residing on workstations are often the ones with critical vulnerabilities.  Just take a look at third-party updates and you can see that many of the critical patches in recent months are for applications like Chrome, Firefox, and Flash.  In this other article from Ars Technica, Peter Bright reveals that 37% of Firefox users are running older versions of Firefox that are not being updated with critical patches for known vulnerabilities.

 

How can you control whether applications on endpoints are protected from the latest known vulnerabilities?

  • Implement a desktop patching policy that determines which applications should be patched and when.  For example, your policy should outline the timing for patching all online workstations, and when offline workstations are updated.
  • Enable the Network Access Protection (NAP) feature of Windows Server 2008.  This feature was designed exactly for the scenario of a guest device wanting to connect to the network.  With NAP, before the device is allowed to connect, it must meet certain requirements, e.g. current AV signature files, all security patches applied, etc.
  • Ensure these policies can be adopted in a timely manner with automated patch management software.  Automated patching software will help you very quickly inventory computers that are at risk, provide pre-built/pre-tested patches, deploy patches to the right computers at the right time (within maintenance schedules), automate system reboots, and so forth. (A rough inventory sketch follows this list.)
  • Educate end users on the logic behind how frequently updates are made, examining the trade-offs between system downtime and risk of vulnerability.
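
As a rough illustration of the “inventory computers that are at risk” step (patch management products do this for you, with real inventory data), here’s a minimal Python sketch that compares reported application versions against a minimum-acceptable-version list. The inventory data and version floors are assumptions for illustration.

```python
# Minimal sketch: flag machines running application versions below a
# minimum-acceptable version. Inventory data and version floors are
# illustrative assumptions, not output from any particular product.
MIN_VERSION = {"flash": (11, 3), "firefox": (14, 0), "chrome": (21, 0)}

inventory = {  # hypothetical machine -> {app: installed version}
    "WS-0042": {"flash": (10, 3), "firefox": (14, 0)},
    "WS-0107": {"chrome": (21, 0), "flash": (11, 3)},
}

for machine, apps in inventory.items():
    for app, installed in apps.items():
        if app in MIN_VERSION and installed < MIN_VERSION[app]:
            print(f"{machine}: {app} {'.'.join(map(str, installed))} "
                  f"is below required {'.'.join(map(str, MIN_VERSION[app]))}")
```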

 

Security incidents can be very costly and damaging to your business.  Responsibly protect your company’s reputation and assets with a sound patching strategy for all end users accessing the corporate network.

 

If you have not done so already, download a free 30-day trial of SolarWinds Patch Manager and automate the patching process for your endpoints.

A long time ago in a galaxy far, far away… well, you know the rest of that story. But SolarWinds, another name from the cosmos, has some interesting history behind it, and I thought today, with our Q3 network management news release update, I would take a look back and a look at what’s next.  For those of you who are new to SolarWinds, first a few stats about our networking business that you may not know:

  • Over 75,000 customers around the globe use SolarWinds network management products
  • Millions of downloads of our community-based free tools
  • All of our tools have evolved based on what you, our customers, have asked for

 

Our first set of tools was bundled together into what is today known as Engineer’s Toolset, a collection of over 50 tools every network engineer and aspiring IT pro needs.  It holds a special place for many of us here because it represents the start of the journey to deliver an unexpected level of simplicity for network engineers everywhere.  It was also one of the drivers for creating the original SolarWinds Network Performance Monitor (NPM) product – our flagship network monitoring product.  Over the years we’ve heard a lot of great stories about how NPM has helped make your lives easier, and we’re always excited to hear more, so if you have a great story please let me know (sanjay.castelino@solarwinds.com).

 

But most folks don’t know that in addition to Engineer’s Toolset and NPM we actually have a whole bunch of other useful tools.  I can’t cover all of them here, but the key ones are listed on our website here.  In addition, there are some products you may use every day that you may not realize came from SolarWinds – Kiwi Syslog and DNSstuff.com are 2 of the best known.

This quarter we’re continuing the evolution of our network management tools with 4 new product releases that I wanted to talk about.

 

First, our new VoIP and Network Quality Manager product is going to be shipping soon.  The product combines IP SLA operation data, which provides visibility into WAN performance, with call detail record metrics from Cisco Call Manager to give you a quick way to troubleshoot VoIP and WAN performance.  This is something many of you have been asking for, and we’re excited to be shipping the product.

For those of you doing something with BYOD, or just rocking back and forth wishing it would go away, we’re shipping an updated version of our User Device Tracker product (UDT) that adds support for controller based wireless access points.  Now you’ll be able to pinpoint exactly where wired and wireless devices are on your network and who’s logged into them both in real-time and historically.

Third in line is a new version of NetFlow Traffic Analyzer (NTA) that will deliver some exciting new charting tools, making it much easier to drill into your traffic data.  In addition, for those of you looking to monitor traffic in your virtualized data centers, NTA adds support for flows from VMware vSwitches.

 

And last but not least is a new version of Network Configuration Manager (NCM).  Network configuration management is often one of those things you think you can live without, but in reality a large percentage of network problems are still caused by unintended configuration changes, and if you’re an NPM customer, NCM data can be automatically included in alerts, giving you a quick view of config issues.  This new release adds greater scalability to NCM through the use of our scalability engines, as well as support for new vendors in the Japanese market (Alaxala and Apresia).

 

So it’s a busy few months ahead of us, and we hope you enjoy what’s coming. Before I sign off, I’ll add my thanks to all of our customers and community members for being loyal supporters and vocal critics; we wouldn’t have the leadership position we do without your support. Thank you, and as always, questions and comments are welcome.
