
We know that IP address management is more and more important because networks have become much more flexible and dynamic, and people are used to bringing their own devices into corporate networks. If you've seen our What we are working on after 3.1 post, you won't be surprised that the new IPAM 4.0 beta contains support for BIND DNS, active detection of IP address conflicts, and better integration with User Device Tracker.

 

Active IP Address conflict detection

An IP address conflict is a typical admin nightmare. It occurs when two or more devices on your network are configured with the same IP address.

IPConflict1.png

Conflict.png

This issue can arise on devices running any operating system that connect to a local area network (LAN), wired or wireless.

 

What problems may IP Address conflict cause?

Primarily network connectivity issues. Impacted machines lose internet access or general network connectivity until the conflict is resolved. It can affect a laptop, a VoIP phone, or an application server.

 

What causes IP Address conflicts?

There are three typical scenarios:

 

  • Bring Your Own Device phenomenon

    This typically happens when you bring your laptop or tablet from home to work and it still has the "home" IP address assigned, which can cause collisions within the corporate network. It's also typical after business trips, when you were assigned a static IP in the hotel and then come back to work. It may also occur in virtualized environments, for example when you spin up a VM clone in the same subnet and the virtual machine has a statically assigned IP.


  • DHCP servers & their configuration

    Two DHCP servers manage the same IP subnet/segment with overlapping IP address ranges, and the DHCP servers don't check whether an IP address is already in use on the network. A conflict happens when one machine already has an address assigned by the first DHCP server and another device is given the same address by the secondary DHCP server. This can be a typical problem in "load balancing" DHCP configurations.

 

  • Human mistakes during IP address assignments

    When admins do not use any IP address management tool, it is all too easy to assign an already used IP address to a new device on the network.

 

How to manually solve an IP address conflict?

First, you need to know who caused the conflict, which means finding the MAC addresses that are in conflict. If you have the option, unplug the device that should keep the IP address. Then use a third machine within the subnet to ping the IP address in conflict. Use "ping -a x.x.x.x" in order to get two important values: first, the DNS name of the machine causing the conflict; second, the TTL value, which may help you identify its operating system. For example, Windows typically has a TTL of 128, while Linux may have a TTL of 64. You may find the whole list here.
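If you want to automate the TTL-based guess, a minimal sketch might look like the following. The TTL-to-OS table below covers only a few common defaults and is an illustrative assumption, not a complete list; also remember that each router hop decrements the TTL by one, so the observed value is rounded up to the nearest common default.

```python
# Guess a likely OS family from an observed ping TTL.
# Common default TTLs: 64 (Linux/Unix), 128 (Windows), 255 (many
# network devices). These mappings are illustrative, not exhaustive.
COMMON_DEFAULT_TTLS = {64: "Linux/Unix", 128: "Windows", 255: "Network device"}

def guess_os(observed_ttl: int) -> str:
    # Round up to the nearest common default TTL to account for hops.
    for default in sorted(COMMON_DEFAULT_TTLS):
        if observed_ttl <= default:
            return COMMON_DEFAULT_TTLS[default]
    return "Unknown"

print(guess_os(126))  # likely a Windows host two hops away
print(guess_os(64))   # likely a Linux host on the local subnet
```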

ping1.png

It may happen that no device name is provided, or that the ICMP protocol is blocked by a firewall. In this case, you may use the "arp -a" command and list the MAC address assigned to the IP address:

arpa.png

The MAC address is useful information because it lets you identify the vendor of the device. MAC addresses are unique, and the first three octets of each MAC address are reserved for vendor identification. You may find the MAC vendor prefix list here.
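A small sketch of such an OUI lookup is below; the vendor table is a tiny illustrative subset, not the real IEEE registry, and a production tool would load the full prefix list.

```python
# Extract the OUI (first three octets) from a MAC address and look it
# up in a vendor table. This table is a tiny illustrative subset of the
# IEEE OUI registry, for demonstration only.
OUI_VENDORS = {
    "00:50:56": "VMware",
    "00:15:5D": "Microsoft (Hyper-V)",
}

def mac_oui(mac: str) -> str:
    # Normalize separators and case, then keep the first three octets.
    clean = mac.replace("-", ":").upper()
    return ":".join(clean.split(":")[:3])

def vendor_for(mac: str) -> str:
    return OUI_VENDORS.get(mac_oui(mac), "Unknown vendor")

print(vendor_for("00-50-56-9a-12-34"))  # VMware
print(vendor_for("11:22:33:44:55:66"))  # Unknown vendor
```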

 

With the MAC address information, you may go to the switch and block the related port, or block that MAC on your wireless router/AP, and let the original device keep its IP address.


How to solve an IP address conflict with IPAM 4.0 Beta?

 

As I stated above, IPAM 4.0 can now actively detect IP address conflicts. We primarily focused on alerting and on MAC address information, which is key for conflict troubleshooting. IPAM actively scans the network, and if it detects a duplicate static IP assignment or duplicate IP provisioning from a DHCP server, it triggers an alert with conflict information:

IPConflict.png

Once you see an IP address in conflict, simply click the IP or MAC address info in the alert message and it will take you to the IP address detail page, where you can see the MAC address assignment history. Another IPAM 4.0 improvement is better integration with the UDT product, so you can directly see the device and port where the conflicting machines are connected.

 

You may also use the IPAM Message Center to get the full history of IP address conflicts:

messageCenterAlert.png

UDTSubviews.png

As you can see, you no longer need to run multiple CLI commands or use a third machine to ping to find whose IP addresses are in collision. More than that, you can see connectivity details, including port and user information, on one screen. You can then use NPM, for example, to remotely shut down an interface and disconnect the device from the network, or simply connect to the switch and block that port via the CLI. Also, because IPAM uses the alerting engine, you should get the IP address conflict information before the impacted person creates an IT ticket (which takes some time while they are disconnected from the network).

 

BIND DNS Monitoring & Management

 

BIND DNS is one of the most widely used DNS solutions. IPAM 4.0 now adds support for monitoring and managing BIND DNS services on Linux. You can now manage your Microsoft DNS and BIND DNS through one web console. IPAM supports all important DNS tasks, such as automatic monitoring and management of DNS zones, two-way synchronization of DNS records, and DNS server availability monitoring.

If you want to add your BIND DNS server to IPAM, use the short wizard that will lead you through the process. Once added, IPAM will synchronize and import the actual BIND DNS configuration, and then you can monitor or manage zones and DNS records:

addBind.png Bind DNS.png

BIND Zones.png

addNewDNS.png

IPAM 4.0 Beta is ready to be tested. All users under active IPAM maintenance may get the beta build and try it for free in a non-production environment. If you would like to try it, simply fill out this IPAM Beta Agreement.

As always, we also have a Thwack IPAM Beta forum, and it would be great to get your feedback there.

 

Thanks!

We have officially reached Release Candidate (RC) status for NTM 1.0.10. RC is the last step before general availability and is a chance for existing customers to get the newest functionality for NTM before it is available to everyone else.

NTM 1.0.10 includes some customer requests as well as several bug fixes. Here is a list of the features and fixes included in 1.0.10:

 

  • Fixed: NTM was unresponsive during discovery. (cases 225201, 218320)
  • Fixed: NTM returned an internal error when a seed device was entered for scanning. (case 215087)
  • SNMP Credentials test now checks for v1 and v2.
  • NTM now exports to Microsoft Visio 2013
  • Displays & searches all IP Addresses of nodes.
  • NTM now displays 10Gb Link Speed connections.
  • Improvements in reporting, order of tabular report columns adjustable & now showing enumerated values in the Switch port Report.
  • It is now possible to cancel a WMI credentials test.
  • Evaluation Map can be exported to Microsoft Visio with restrictions.

 

RC builds are made available to existing customers prior to the formal release. These are used to get customer feedback in production environments and are fully supported.

Below are some screenshots showing the new features of Network Topology Mapper. If you really want to see more, download the latest version from your SolarWinds Customer Portal.

 

Connection Display Options – Showing 1-10 Gbps & >10 Gbps link Speed lines

 

A new 10Gbps connection line is now available on NTM maps.

new1.png

 

Display & search alternative IP addresses of nodes.

 

Nodes with multiple IP addresses can now be displayed on the map by hovering over the node and right-clicking for node details.

You can also search multiple IP addresses using the Search bar. (Make sure IP Address is ticked under Node Display Options.)

Capture9999.JPG

Capture00000.JPG

 

Improvements in Reporting.

 

The order of the columns in the reports can now be adjusted. In the STP report, the Spanning Tree State column now shows the enumerated STP state, and two new columns have been added to the report: Designated Root and Designated Bridge.

CaptureSTP.JPG

...and avoid mistakes when updating your configurations.


When performing an address change on your servers (e.g. due to reorganization of your data centers, virtualization, consolidation...), you will have to reflect these changes in your firewall configurations. Depending on how large and numerous your firewall configurations are, locating the firewall(s) to change and finding the places in their configurations that need updating can be tedious and error prone.

 

This blog post reflects a fairly frequent real-life use case and explains how FSM (Firewall Security Manager) can help you make these changes quickly and safely, by double-checking them before they are put in production.

The concrete example will be to detect where to make changes so that the IPs of two servers, 192.168.253.34-35, can be changed to the new addresses 192.168.253.160-161, while keeping the protection of your firewalls in place.

 

Finding the firewalls impacted by the change

The Query/Search Rules function searches the configurations of your firewalls, even within subnet masks. Note that a simple text search in your configs will usually not be effective because a) it would not find the address you are looking for within a subnet definition, and b) searching on the beginning of the address, e.g. 192.168.253, could return hundreds of hits in a real-life config.

FSM Rule Query.PNG

The blue window shows how to define the query.

Note that in order to avoid matching all the rules that have ANY in the searched parameters, check the box at the bottom of the window.

Then select all firewalls in your inventory (left portion of the main window) and run the query.

We recommend saving the query before the run (notice the Saved Queries tab in the bottom left corner).

 

FSM finds the impacted firewalls, and the objects to modify in their configurations

FSM Rule Query finds impacted firewalls and objects.PNG

The results will show up in a new tab on the right and will contain one sub-tab per firewall vendor whose configs contain the searched address. In this example we will focus on the Cisco Firewalls tab, but in reality you would have to check all vendor tabs.

In this example, the Cisco firewall HV-FW-01 contains a destination network object called Servers_in_Public_DMZ (used in a rule that appears at config line #570) that refers to the two addresses we are trying to change: .34 and .35.

The statement to change is 192.168.253.32    255.255.255.248, which "contains" .34 and .35.
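You can verify that containment yourself with a quick script; this sketch uses Python's ipaddress module on the addresses from the example:

```python
# Confirm that the network statement found by the query really
# "contains" the two server addresses being changed.
import ipaddress

# The statement flagged by the FSM query: a /29 written with its netmask.
old_object = ipaddress.ip_network("192.168.253.32/255.255.255.248")

for addr in ("192.168.253.34", "192.168.253.35"):
    print(addr, "in", old_object, "->", ipaddress.ip_address(addr) in old_object)
# The /29 spans 192.168.253.32 - 192.168.253.39, so both addresses fall inside it.
```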

 

Preparing the change by using the Change Modeling session

A change modeling session is a very convenient feature that creates a safe environment (basically a copy of your production configurations) that you can use for all your tests and changes without affecting your production configs.

Select the impacted firewall(s) that you identified in the steps above, click Analyze Change / New Change Modeling Session, and name your session (e.g. Server IP Change session).

FSM Change modeling session.PNG

Notice the new tab on the right (this is your testing environment). It contains a copy of the config file (config.txt), the routing tables, and the tools available to test your changes (icons on the right).

 

Performing the changes

Double-click config.txt to open a text editor that you will use to modify the configuration.

Search (Ctrl+F) for the impacted network object, "Servers_in_Public_DMZ", and modify this section:

 

object-group network Servers_in_Public_DMZ

network-object host 192.168.253.15

network-object 192.168.253.32    255.255.255.248

network-object 192.168.253.64    255.255.255.248

 

as follows:

object-group network Servers_in_Public_DMZ

network-object host 192.168.253.15

network-object 192.168.253.32    255.255.255.254

network-object 192.168.253.36    255.255.255.252

network-object 192.168.253.160   255.255.255.254

network-object 192.168.253.64    255.255.255.248


Then save your new config.

At this point you have modified the copy of the config that sits in the change modeling session, but not the main version of the config within FSM.
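Before running the comparison tools, you can sanity-check the subnet arithmetic of the edit yourself. This sketch (using Python's ipaddress module) confirms that the new statements cover exactly the old /29 minus .34/.35, plus the two new addresses:

```python
# Verify the edit: the new network-object statements should cover the
# old /29 minus the two retired addresses, plus the two new ones.
import ipaddress

old = set(ipaddress.ip_network("192.168.253.32/29"))
removed = {ipaddress.ip_address("192.168.253.34"),
           ipaddress.ip_address("192.168.253.35")}
added = {ipaddress.ip_address("192.168.253.160"),
         ipaddress.ip_address("192.168.253.161")}

# The replacement statements from the edited object-group.
new = set()
for net in ("192.168.253.32/31", "192.168.253.36/30", "192.168.253.160/31"):
    new |= set(ipaddress.ip_network(net))

print(new == (old - removed) | added)  # True: the edit is exact
```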

 

Verifying that the changes are valid

You have several tools available, launched via the icons on the right, to test the impact of the change you have made.

In essence, all these tools highlight the differences between the main and the copied/modified versions of the config.

 

We will focus here on the Compare Rule/Object tool.

The result is an Excel report that is generated within the Change modeling session:

FSM Change modeling session details.PNG

Open the report and look at the different tabs:

  • Rule Compare reminds you of the change that has been performed at the rule level (on line 570, an ACL statement is impacted because the destination Servers_in_Public_DMZ has changed)

FSM Change modeling session details - Rulle diff.PNG

  • Network Object Compare highlights the changes that happened to the modified Network Object: Servers_In_Public_DMZ in this example. Red denotes removed lines and green is used for added ones.

FSM Change modeling session details - Object diff.PNG

  • Traffic Flow Compare confirms that you have removed and added the expected traffic, so that traffic to the servers, after the IP address change, flows as intended

FSM Change modeling session details - Flow diff.PNG

You can find more about this report here.


Putting these changes in production

Now that you are comfortable with these changes, click on the Generate Change Scripts tool on the right side of the Change Modeling Session window.

FSM Change modeling session details - Script.PNG

Here is what the generated script looks like, in this example:

 

!-- Object <Servers_in_Public_DMZ> Change summary

!-- Modified object

!-- Deleted member 192.168.253.32/29

!-- Added member 192.168.253.32/31

!-- Added member 192.168.253.36/30

!-- Added member 192.168.253.160/31

object-group network Servers_in_Public_DMZ

network-object 192.168.253.32 255.255.255.254

network-object 192.168.253.36 255.255.255.252

network-object 192.168.253.160 255.255.255.254

no network-object 192.168.253.32 255.255.255.248


!-- ACL Rule Change summary

!-- Modified rule

!-- Modified destination Servers_in_Public_DMZ

!-- No change command needed as this is an indirect change caused by modifications to object/object groups used in the ACE.

!-- access-list outbound extended permit tcp object-group DM_INLINE_NETWORK_14 object-group Servers_in_Public_DMZ object-group DM_INLINE_TCP_3

 

After you review the proposed changes, you are ready to put them into production by executing these commands in the firewall console.

If the device were managed by NCM (not the case in this example), you could execute the script via NCM (right-click menu, Execute Script), so you wouldn't even have to connect to the device console (NCM does it for you).

FSM Change modeling session details - Script exec.PNG

 

You can then use this button to update the configuration in FSM, from whatever source it was taken from - NCM or the device itself - and make sure that your changes were taken into account.

FSM Update configs.PNG

 

I hope that was helpful, and I would really welcome reading about your FSM use cases - they don't have to be this detailed :-) .

[thanks to Lan Li, FSM dev, for his contribution]

As anyone who's been regularly following the thwack blogosphere might tell you, SolarWinds Server & Application Monitor (SAM) has an impressive track record of packing a ton of really great features into each and every release; and SAM 6.0 is no exception. In fact, some might argue that we're upping the ante with this release to a whole new level. And to any naysayers amongst you I say, you ain't seen nothing yet.

 

As in previous releases, SAM's elite army of robot zombie developers has been diligently and tirelessly working on a lengthy list of new features and improvements for this SAM 6.0 release, some of which are now ready for beta testing. If you would like to participate in the SAM 6.0 beta and begin playing with some of the new features, the process is very simple: sign up here. The only requirement is that you must be an existing Server & Application Monitor product owner under active maintenance.

 

Real-Time Windows Event Log Viewer

 

Regardless of whether you're new to systems management, or you're a war torn veteran with the battle scars to prove it, most likely the first place you turn whenever trouble starts brewing in your Windows environment is the Windows Event Log. Whether it's application exceptions, user lockouts, failed backups, services stopping, or anything else that happens to go bump in the night, your first clue as to what went wrong, and when, can almost always be found in your tried and true Windows Event Log.

 

However, until now, your only recourse for poring through the mountain of events in the Windows Event Log has been either to Remote Desktop (RDP) into your server and open the Windows Event Log Viewer locally, or to launch the Windows Event Log Viewer on your workstation and connect to the server's Event Log remotely. Both of these options introduce needless additional steps and waste precious time when you're trying to quickly troubleshoot the root cause of an issue. And neither of these options allows you to proactively monitor for these kinds of events in the future, should they recur.

 

In SAM 6.0 we wanted to bring to the Windows Event Log the same power that we provide with the Real-Time Process Explorer and Windows Service Control Manager. To that end, the Real-Time Windows Event Log Viewer was born. Found in the Management resource on the Node Details view, the Real-Time Windows Event Log Viewer allows you to browse through all the events in your Event Log, as well as filter for specific events in a particular log. You can filter based on event severity, such as Error, Warning, Information, etc. You can also filter on Event Source to quickly isolate the specific event(s) you're looking for.

Real-Time Windows Event Log Forwarder.png

Once zeroed in on the event(s) you're interested in, you can click on the event itself to see the full details of the event, including the full message details. Want to be alerted if this event occurs again in the future? No problem. Simply click Start Monitoring and you will be walked through a wizard for creating a Windows Event Log Monitor and assigning it to a new or existing Application Template. It's just that easy.

 

Asset Inventory

Ok, I know what you're going to say. Inventory collection and reporting is neither cool nor exciting. I get it. However, regardless of your environment size, keeping an up-to-date inventory is a necessary evil normally required by the accounting department or upper management for asset depreciation tracking, insurance purposes, and/or lifecycle management and renewal. It's normally an arduous annual or quarterly task that involves a tremendous amount of manual labor to compile, and has questionable value beyond satisfying those higher up the ladder.

 

In SAM 6.0 we strove to make inventory data valuable not only to upper management, but also to those responsible for managing and maintaining the infrastructure. Expanding beyond the typical fixed-asset information required by the business, SAM 6.0 allows you to answer key IT questions that help drive your business decisions. Whether you're constantly running low on memory in your server and need to know if you have any free slots for additional RAM, or need to report on installed software for license compliance or security purposes, SAM has you covered. Has your vendor notified you of a major issue identified in a specific driver or firmware version? With SAM 6.0 you can easily see which machines are affected.

Inventory List Resource.png

 

You can enable Asset Inventory collection for a node by clicking List Resources from the Management resource of the Node Details view. It will also appear when adding the node through the Add Node Wizard. To enable collection, simply check the box next to Asset Inventory as depicted in the image on the left. Click Submit to save your changes, and inventory data for that node will be collected.

Asset Inventory collection can be enabled for both physical and virtual assets alike, and functions independently of Hardware Health monitoring, or any other status monitoring for that matter. That means you don't need to monitor your volumes or interfaces for them to be included as part of an inventory report.

 

Asset Inventory collection can be enabled for any Windows host managed in SAM, including Windows servers and workstations. Stay tuned for more information on inventory collection for Linux/Unix and VMware hosts.


Once Asset Inventory has been enabled for the node, you will find a new Asset Inventory sub-view that appears on the Node Details view for that node. Clicking on the Asset Inventory sub-view tab takes you to a dedicated asset inventory view which includes all relevant inventory data collected for that node.

When you access the Asset Inventory sub-view you may at first be overwhelmed by the sheer volume of information being collected. Fortunately all this information is logically categorized and grouped together into independent resources that can be moved or copied to the Node Details view, or any other node based sub-view if you desire.

 

Anyone whose polling engine capacity is teetering dangerously on the edge might be worried about what kind of load all this newly collected inventory information is going to place on their poller. Fortunately, inventory data doesn't need to be collected with the same regularity as status information. As such, Asset Inventory collection occurs only once a day by default, but can be configured to occur weekly, bi-weekly, or even monthly depending on your needs.

 

Concerned about licensing? Don't be. Asset Inventory Collection is not licensed separately, and does not count against your SAM component monitor licenses. It's included as part of the Node License. 

Graphics and Audio.png

Asset Inventory Subview.png

 

Below are just a few examples of some of the new Asset inventory resources contained in this view. If you'd like to check them out for yourself in your own environment I encourage you to sign-up here to download the SAM 6.0 beta. We'd love your feedback!

New Asset Inventory Resources

 

Memory

Memory Modules.png
Processors
Processors.png
Software Inventory
Software Inventory.png
Drivers
Drivers.png
Logical Volumes
Logical Volumes.png
Network Interfaces
Network interfaces.png
Removable Media
Removeable Media.png

Recently, we've heard from SolarWinds Network Topology Mapper (NTM) users with large networks that performance during the discovery process can be a bit sluggish. How can we improve it? To answer that question, one must understand how the NTM discovery engine works and what goes on behind the scenes before the network diagram is displayed.

 

NTM includes a sophisticated multi-level discovery engine that uses a combination of protocols and goes through three distinct phases in order to discover every node on a network and produce a highly accurate map.

 

1. The Node Detection Phase: Uses a combination of the ICMP, SNMP and WMI protocols to detect devices.

2. The Polling Phase: Uses a poller to collect connection data via CDP, LLDP, bridge tables and STP to detect the role of each node.

3. The Calculation Phase: Uses a complex algorithm to calculate connectivity between nodes.
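As a rough illustration of what the calculation phase does (the device names and neighbor tables below are hypothetical, not NTM's actual data model), connectivity can be derived by merging per-device neighbor reports into a deduplicated link list:

```python
# Toy sketch of the calculation phase: merge neighbor reports
# (as CDP/LLDP polling might return them) into a set of unique links.
# Device names and data are made up for illustration.
neighbor_reports = {
    "core-sw1": ["access-sw1", "access-sw2"],
    "access-sw1": ["core-sw1", "server-a"],
    "access-sw2": ["core-sw1"],
}

def build_links(reports):
    links = set()
    for device, neighbors in reports.items():
        for neighbor in neighbors:
            # Store each link once, regardless of which side reported it.
            links.add(tuple(sorted((device, neighbor))))
    return sorted(links)

for a, b in build_links(neighbor_reports):
    print(f"{a} <-> {b}")
```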

 

Completion of these three phases of the discovery process can be slowed down by a number of factors, primarily these:

 

  • The size of the network being scanned
  • The choice of node detection protocols used for the scan
  • The number of hops specified for the scan
  • Where the scan is being performed from

 

With all of that in mind, we have compiled the following recommendations to help you improve the performance of NTM as it discovers devices on your network.

1. Think small – The tendency of most NTM users is to try to discover their entire network on the first scan. The drawback here is that if you have a network with 1200+ nodes, you are in for a long wait, and at the end of it the map might look very cluttered. The smaller the range you define to scan, the faster you'll receive the results and the easier it will be to interpret them. We recommend discovering Class C networks in at least two separate scans.
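As a quick illustration of the "think small" advice, Python's ipaddress module shows how a Class C (/24) splits cleanly into two ranges for two separate scans:

```python
# Split a Class C network (/24) into two /25 ranges, one per scan.
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")  # example network
scans = list(network.subnets(prefixlen_diff=1))
for scan in scans:
    print(scan)
# prints 192.168.1.0/25 and 192.168.1.128/25
```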

 

2. Credentials priority – The most commonly used credentials should be at the top of the list, whether SNMP, WMI or VMware. This simple step helps speed up discovery. Also note that the more credentials there are, the longer discovery will take, so don't include any unnecessary credentials.

add this 1 2 AM.png

3. Zero Hop Discovery – As you work your way through the discovery wizard, you will notice that you are prompted for the number of hops to use during the scan. We recommend using 0 hops unless you are not finding all of your expected devices. The more hops the scan goes through, the longer it takes, and 0 usually finds them all.

add this 1 AM.png

4. Scanning Location – Scanning with NTM from within the same management VLAN as your network can significantly improve the speed of the discovery process. This reduces polling timeouts compared to scanning over VPN, dial-up, or routing through another network.

 

5. Evening/Weekend Scanning – If you manage a very large network, or you see your network growing in the coming months, it is best to run the discovery process during evening hours or on weekends and have your map ready for Monday morning. The key to this is making sure you have entered the proper credentials into the discovery wizard. Make sure to test your credentials before beginning the discovery process and leaving the building.

 

And that pretty much sums it up. I hope it makes the discovery and mapping of your network a little bit easier.

We have officially reached Release Candidate (RC) status for User Device Tracker 3.0. RC is the last step before general availability and is a chance for existing customers to get the newest functionality for user device tracking and capacity planning before it is available to everyone else.

 

Here is the content of this RC version:

  • Whitelisting – Set up a white list of devices based on MAC address, IP address, or hostname.
  • Trap notifications - Get connectivity information in "real time"; receive an alert when a device not on the whitelist connects to the network.
  • Watch List - Add users to the Watch List.
  • Domain Controller Wizard - Facilitate collection of user login information by configuring appropriate logging level on Windows® servers.
  • Virtual Routing and Forwarding (VRF) - Polls devices for VRF data.
  • Alerts - Get an alert when an endpoint port changes.
  • Reports - See a report on Wireless Endpoints.
  • Groups - Add UDT ports to groups.
  • Port Shutdown - Remotely shutdown a compromised device port.
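To illustrate the whitelisting idea from the list above, here is a rough sketch of a check that flags unknown devices. This is not UDT's actual matching logic; the matching rules and data are made-up assumptions for illustration.

```python
# Hedged sketch: decide whether a connecting device matches a whitelist
# defined by MAC address, IP address, or hostname pattern.
import fnmatch

# Hypothetical whitelist entries, for illustration only.
WHITELIST = {
    "macs": {"00:50:56:AA:BB:CC"},
    "ips": {"10.0.0.5"},
    "hostname_patterns": ["corp-*"],
}

def is_whitelisted(mac: str, ip: str, hostname: str) -> bool:
    if mac.upper() in WHITELIST["macs"] or ip in WHITELIST["ips"]:
        return True
    # Hostnames match case-insensitively against glob-style patterns.
    return any(fnmatch.fnmatch(hostname.lower(), pattern)
               for pattern in WHITELIST["hostname_patterns"])

# An unknown device would trigger a "rogue device" alert.
print(is_whitelisted("de:ad:be:ef:00:01", "10.0.0.99", "visitor-laptop"))  # False
print(is_whitelisted("00:50:56:aa:bb:cc", "10.0.0.99", "visitor-laptop"))  # True
```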


More details and screenshots can be found in the UDT 3.0 beta blog post.

 

RC builds are made available to existing customers prior to the formal release. These are used to get customer feedback in production environments and are fully supported. If you have any questions, I encourage you to leverage the UDT RC forum on thwack.

 

You will find the latest version on your customer portal in the Release Candidate section. Please note that you will need to activate this RC build using a temporary RC key that you can also find on your customer portal (Licensing and Maintenance – License Management). This temporary key will be replaced with a regular license key after official release of UDT v3.0.



We have just released UDT v3.0 RC2. You can find it on your customer portal. (Note: The downloaded package may say RC1 but the content is actually RC2.)

Enhancements:

  • A confirmation dialog pops up when a user tries to shut down a port.
  • Only users with node management rights can shut down ports.
  • A few bug fixes.

Over the coming weeks I will be posting a series of blog posts on common misconceptions, questions, issues, etc. that I have run into over the years that we have been offering the Failover Engine. The most common question I get asked is: "What exactly is the difference between high availability and disaster recovery?"

 

I will provide a more in depth explanation below, but the best and quickest way to remember this is:

  • High Availability = LAN
  • Disaster Recovery = WAN


Some groundwork before I jump into the more in-depth explanation: the Failover Engine works in an active-passive setup, meaning only one server has the SolarWinds services started and running. The Failover Engine is not an active-active solution, in which both servers would have the SolarWinds services started and running. With that in mind, for this post I will refer to the servers as follows:

  • Primary or active server = SolarWinds services are active and running
  • Secondary or passive server = SolarWinds services are not running

 

High Availability

As illustrated below in the first image, the high availability (HA) role is normally deployed in a LAN, where communications are configured with the public IP address shared by both the active/primary and passive/secondary servers. The active/primary server makes the public IP visible and available to the network, while the passive/secondary server uses a packet filter installed by the Failover Engine to hide itself and prevent network access, since two machines with the same IP cannot be on the network at the same time.


In the event of a failure on the active/primary server, the packet filter is removed from the passive/secondary server, which then assumes the active/primary role; at the same time, the packet filter is added to the server that was originally active/primary, making it the passive/secondary server. Since both servers share the public IP address, no DNS update is required.

 

SWBlogLAN.png

Disaster Recovery

When deployed in a disaster recovery role, the active/primary and passive/secondary servers operate over a wide area network (WAN) in different subnets. As a result, they are configured with different public IP addresses. In the event of a failover, the Failover Engine automatically updates DNS with the IP address of the passive/secondary server, so end users simply continue to access the SolarWinds server using the same DNS name they always use.

SWBlogWAN-1.png

Any questions or comments, please ask in the comment section.

SolarWinds recently acquired a set of products that provide a self-hosted solution for securely transferring files both within and outside the corporate firewall.  These products provide a secure alternative to cloud-based solutions like Dropbox.  "But Dropbox is so convenient and easy to use," you say.  Read on.

 

Dropbox has had its fair share of issues over the past couple of years, shining a big, ugly spotlight on security vulnerabilities with respect to sensitive customer data.  First of all, the exposure to potential security risks and service disruption from Dropbox is enormous.  According to a recent survey of 1,300 business users, one in five is using Dropbox to transfer corporate files, effectively circumventing any safeguards their IT departments have put in place with respect to file transfers.  In August of last year, usernames and passwords of Dropbox accounts were compromised, resulting in a spamming campaign against a number of Dropbox users.  Unfortunately for Dropbox, this wasn't the first time something like this had happened.  Another breach, in June of 2011, was the result of a breakdown in the service's authentication software, exposing accounts without requiring proper authentication for a period of time.  If the security issues aren't scary enough, the service was also completely unavailable for a period of time in January of this year.

 

These breaches raise a fundamental question to be answered when assessing a cloud-based versus a self-hosted solution for securely transferring files: is the cloud secure enough for the needs of my business?  The cloud certainly provides a valuable level of convenience and simplicity that's just fine for most individual consumers, but it's evident that this convenience has a cost in terms of security.  Businesses, both large and small, often have stricter security requirements around file transfers, and around the users participating in those transfers, than a cloud-based solution can provide.  When it comes to sensitive and confidential files, convenience is nice, but security is a must-have.

 

 

There is a Better Way

 

FTP Voyager is a free FTP client that supports a number of different protocols for secure file transfer.  Serv-U MFT Server is a managed file transfer server that provides a secure alternative to the cloud-based solutions for transferring files inside and outside the enterprise.  Let's take a look at some of the security based features and protocols that these products provide.

 

In addition to FTP, FTP Voyager supports both the FTPS and SFTP protocols.  This includes strong authentication with both X.509 client certificates and public key authentication.  FTP Voyager uses cryptography that has been FIPS 140-2 validated by NIST, and Voyager has been granted the Certificate of Networthiness by the US Army.

FTP_Voyager.png

 

Like FTP Voyager, Serv-U MFT Server supports the FTPS and SFTP protocols.  It also supports secure file transfers through a web browser or from a mobile device (iPad, iPhone, Android, Kindle Fire) via HTTPS.  Serv-U MFT Server also provides a number of different user management options, including the ability to authenticate against Active Directory.

Serv-U_Administration.png

 

Serv-U also provides a number of encryption options for transferring files.  Individual ciphers and MACs can be enabled or disabled based on your specific security requirements.  Serv-U also provides the ability to run in FIPS-140-2 mode.

Serv-U Encryption Settings.png

 

A separate module called the Serv-U Gateway provides reverse proxy capabilities, preventing data from ever being at rest in your DMZ and preventing connections from being opened directly from the DMZ to your internal network.  Using Serv-U MFT Server in conjunction with the Serv-U Gateway provides an architecture that is PCI DSS 2.0 compliant and satisfies other high-security requirements.  See the reference architecture below for an example.

serv-U+gateway_architecture.png

 

You don't have to be a conspiracy theorist or even a security expert to have legitimate concerns about your data in the cloud.  Sometimes the nature of the data being transferred warrants consideration of a level of security that cloud-based solutions simply can't provide.  While Dropbox has made managed file transfer more accessible, it can introduce unnecessary risks to your organization.  FTP Voyager and Serv-U MFT Server provide secure alternatives to cloud-based solutions, giving you the best of both worlds.  For more information on Serv-U you can check out some of our videos here.  You can also find a number of security-focused knowledge base articles here.
