
When it comes to storage management, there is a high probability of developing performance issues over time. Bottlenecks and hotspots can appear in many places in your storage environment, including the storage arrays, the controllers, and the disk drives. If the server environment is virtualized, storage performance challenges multiply: over-commitment of resources, abstraction, and VM sprawl make it difficult to diagnose IOPS spikes and contention, given the dynamic nature of virtualization.

 

There are various components in the storage landscape that affect the performance of the storage system. These include RAID levels, the number and type of disk drives in a RAID set or volume group and their performance capabilities, as well as host server front-end ports and back-end device ports.

 

Problems are aplenty: How Can Storage and Virtual Admins Address Them?

Here are five simple tips to help address storage I/O performance issues: straightforward administrative actions that can head off the most common problems.

 

#1 Consider Changing RAID Type

RAID has two clear benefits: better performance and higher availability. In other words, it goes faster and tolerates drive failures.

 

  • Performance is increased because the server has more "spindles" to read from or write to when data is accessed from a drive.
  • Availability is increased because the RAID controller can recreate lost data from parity information.

 

You can change the RAID type of your logical disks based on the data they hold and on your budget. RAID-10 delivers high performance everywhere, but it is costlier. If budget is a constraint, you can use RAID-5 for database data volumes, with RAID-1 or RAID-10 on database log volumes.
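
To make the trade-off concrete, here is a quick back-of-the-envelope sketch in Python of how the classic RAID write penalties eat into usable IOPS; the per-disk IOPS number is just an assumed example, not a measurement:

# Rough usable-IOPS estimate for a RAID set using the classic write penalties:
# RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6.
WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_iops(disks, iops_per_disk, read_pct, raid):
    """Estimate the front-end IOPS a RAID set can sustain for a given read/write mix."""
    raw = disks * iops_per_disk
    write_pct = 1.0 - read_pct
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

# Example: 8 x 15k drives (~180 IOPS each, assumed), 70/30 read/write mix.
for level in ("RAID10", "RAID5"):
    print(level, round(usable_iops(8, 180, 0.70, level)), "IOPS")

For this example mix, the same eight disks that sustain roughly 1,100 front-end IOPS at RAID-10 drop to roughly 760 at RAID-5, which is why write-heavy log volumes are usually kept off parity RAID.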

 

#2 Replace Old Disks with SSDs

There are many enterprises considering replacing existing hard disks with solid-state drives (SSDs) or using SSDs as a large cache for the array. An SSD is essentially one big piece of flash, which makes data access almost instantaneous. With no moving parts, SSDs deliver reduced access times, lower operating temperatures, and higher I/O speeds.

 

  • SSDs help to expand built-in cache, thus speeding up all I/O requests to the array
  • SSDs support storage tiering by letting the array dynamically move data between different disk types and RAID levels, improving performance at the pool (RAID group) level.

 

#3 Reallocate Disk I/O Load

When different workloads share the same disk, they can interfere with each other, forcing the disk head to shuttle back and forth between locations as each workload makes requests. Especially in a busy multi-disk system, reads and writes can become very time-consuming.

 

Data reallocation separates interfering workloads by moving one or several of the workloads to other disks, where they will cause less interference and avoid hotspots. Reallocation also helps with defragmenting data by arranging the stored data sequentially so that when performing a read operation, data retrieval is faster.

 

#4 Upgrade to Larger Cache

The disk cache holds recently read data, and increasing the size of the cache improves read/write performance and reduces I/O bottlenecks.

 

#5 Plan Well and Add More Physical Disks to Arrays

This is the inevitable scenario in any storage environment: no matter how much capacity you add, data growth eventually exhausts even the most generously sized LUNs. Keep the following best practices in mind when adding more disks to arrays.

 

  • Know how many disks an array can hold, and how many disks are actually installed at the moment
  • Consider IOPS as well as raw disk capacity
  • Understand storage content, classify data based on criticality and usage, and choose the storage expansion option accordingly
  • Instead of adding more disks, you can also choose to upsize existing disks to larger volumes of storage

 

Ensure you have storage capacity planning and predictive forecasting in place that shows how your storage space is used and how it is evolving, so you add disks only where they are really required.
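
Even a simple linear projection of recent growth is a useful starting point for that forecast; a minimal Python sketch with made-up sample figures:

def months_until_full(capacity_tb, used_tb, growth_tb_per_month):
    """Naive linear forecast of when a storage pool runs out of space."""
    if growth_tb_per_month <= 0:
        return float("inf")
    return (capacity_tb - used_tb) / growth_tb_per_month

# Example: a 100 TB pool with 62 TB used, growing about 2.5 TB per month (sample numbers).
print(round(months_until_full(100, 62, 2.5), 1), "months of headroom left")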

 

While these are all effective tips for improving storage I/O performance, the bigger task is to monitor the storage environment and identify bottlenecks as they appear so you can avert performance issues. You need top-down visibility from VMs to hosts to network elements to storage.

 

SolarWinds Storage Manager monitors storage performance & isolates hotspots in your multi-vendor SAN fabric. Storage Manager allows you to drill down to view total IOPS at the LUN or RAID Group level to see if there is LUN contention or if physical storage is the bottleneck.

OK, no, not really. No, we haven't changed the format here on thwack: though you can get some good conversation in our forums, no one is going to suggest the use of candles in your network closet. And this graphic just seems strangely appropriate, right now:


You mean to say it's not "Data Night"?

 

All absolutely horrible joking aside, while doing some documentation maintenance the other day, I was reminded of a pretty cool little network management tool that I think we tend to take for granted around here: SolarWinds SNMPWalk (.zip). So, for those of you who don't already know, the question is, "What does it do?" Well, in short, by generating a map of the management information base (MIB) object IDs (OIDs) that correspond to specific types of information about any given device you want to monitor with SNMP, it helps you figure out what you can know about your network equipment. If that doesn't make any sense, let's review a bit.

 

What is a MIB?

A Management Information Base (MIB) is the formal description of a set of objects that can be managed using SNMP. MIB-I refers to the initial MIB definition, and MIB-II refers to the current definition. Each MIB object stores a value such as sysUpTime, bandwidth utilization, or sysContact. During polling, SolarWinds NPM sends an SNMP GET request to each device to poll the specified MIB objects. Received responses are then recorded in the SolarWinds database for use in NPM, including within Orion Web Console resources.

All of which probably leads you to another question:

 

What is SNMP?

For most network monitoring and management tasks, NPM uses the Simple Network Management Protocol (SNMP). SNMP-enabled network devices, including routers, switches, and PCs, host SNMP agents that maintain a virtual database of system status and performance information that is tied to specific Object Identifiers (OIDs). This virtual database is referred to as a Management Information Base (MIB), and NPM uses MIB OIDs as references to retrieve specific data about a selected SNMP-enabled managed device. Access to MIB data may be secured either with SNMP community strings, as provided with SNMPv1 and SNMPv2c, or with optional SNMP credentials, as provided with SNMPv3.
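
For example, a single SNMP GET of sysUpTime can be issued from a short script. This sketch assumes the third-party pysnmp library (its classic hlapi interface) and uses a placeholder device address and community string:

# Sketch only: requires the pysnmp package; the IP address and community string are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),        # SNMPv2c community string
           UdpTransportTarget(('192.168.1.1', 161)),  # the device to poll
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)))
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))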

Notes:

  • To properly monitor devices on your network, you must enable SNMP on all devices that are capable of SNMP communications. The steps to enable SNMP differ by device, so you may need to consult the documentation provided by your device vendor.
  • If SNMPv2c is enabled on a device you want NPM to monitor, by default, NPM will attempt to use SNMPv2c to poll the device for performance information. If you only want NPM to poll using SNMPv1, you must disable SNMPv2c on the device to be polled.
  • Most network devices can support several different types of MIBs. While most devices support the standard MIB-II MIBs, they may also support any of a number of additional MIBs that you may want to monitor. Using a fully customizable Orion Universal Device Poller, you can gather information from virtually any MIB on any network device to which you have access.

 

So, Let's Go For a Walk

Which brings us back to the SolarWinds SNMPWalk (.zip). Download and run it. Point it at a device on your network, and it will generate a list of all available OIDs for monitoring with SNMP. If NPM doesn't already recognize your device and monitor its OIDs automatically, you can create your own Universal Device Poller (UnDP) to give you the data you so desperately crave. For more information about working with a UnDP, see "Monitoring MIBs with Universal Device Pollers" in the SolarWinds Orion NPM Administrator Guide. With SNMP, the MIB, NPM, and UnDPs you can monitor just about anything, hardware-wise. Yes, it's all pretty sweet, but, no matter the price, it's still probably not a good gift for your pretty sweetie...OK, I'm leaving now; no more jokes, I promise...

Welcome to the SolarWinds blog series “Diving Deeper with NetFlow – Tips and Tricks”. This is the second part of a six-part series where you can learn new tips by understanding more about NetFlow and some use cases for effective network monitoring.


 

In the previous blog, we discussed NetFlow and its uses in troubleshooting network issues and maintaining an organization's network uptime. In this blog, we will shed some light on network anomaly detection and how NetFlow helps network administrators analyze and monitor network traffic precisely.

 

 

 

Many of the biggest threats to enterprise networks today involve a network breach. Enterprises of all sizes face issues related to malware attacks, Distributed Denial of Service (DDoS), and new applications, all of which can be hard to detect. Network administrators can use NetFlow and other flow technologies to monitor for abnormal network traffic patterns that may be a sign of these threats.


 

What can cause a Network Anomaly?

 

Two common ways a network anomaly can be introduced are telecommuting and Bring Your Own Device (BYOD). Both increase the risk that a device infected through an external source introduces malware directly into your network. Additionally, your network could be hosting a bot that was introduced through one of these sources.


Network Anomaly Detection

 

In an enterprise network, administrators normally try to secure the network with an Intrusion Detection/Prevention System (IDS/IPS), which collects data and relies on signatures to identify threats, while routers and firewalls work from user-defined access control rules. If zero-day malware enters your network, routers, firewalls, and even your IDS/IPS can find it very hard to detect the anomaly. A bot hosted on your network won’t be detected by firewalls or IDS/IPS because they track only inbound traffic. A more expensive alternative is a non-signature-based IDS/IPS system.

 

 

Spotting an anomaly in your network can be difficult, but there are telltale symptoms: a sudden drop in network traffic, traffic behaving off-baseline, unusual peaks, traffic abnormally focused on certain parts of the network or certain ports, and new applications hogging most of the bandwidth or generating abnormal traffic patterns. Some peculiar cases are high SMTP traffic, short bursts of packets, one host talking to many others on the same ports, traffic on unknown ports, and too many TCP SYN flags.

 

 

 

By collecting flow data and analyzing traffic patterns and unexpected traffic behavior, network administrators can detect anomalous traffic. Investigating and isolating excessive bandwidth utilization and unexpected application traffic helps pinpoint and prevent network anomalies, and drilling into specific time periods in the NetFlow records shows what happened during an outage. With a NetFlow analyzer, you can understand more about these abnormalities and maintain an effective network.
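
As a toy illustration of that kind of analysis, the sketch below scans a list of already-exported flow records (plain dictionaries with hypothetical field names) and flags hosts opening an unusually high number of SYN-only TCP flows, one of the symptoms listed above:

from collections import Counter

def flag_syn_scanners(flows, threshold=100):
    """Flag source IPs with an abnormal count of SYN-only TCP flows.

    `flows` is assumed to be pre-exported flow records, for example:
    {"src": "10.0.0.5", "dst": "10.0.0.9", "proto": "tcp", "tcp_flags": "S", "bytes": 60}
    """
    syn_counts = Counter(f["src"] for f in flows
                         if f.get("proto") == "tcp" and f.get("tcp_flags") == "S")
    return {src: count for src, count in syn_counts.items() if count >= threshold}

# Any host returned here deserves a closer look in your NetFlow analyzer.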

 

 

 

To learn more about NetFlow, check out our NetFlow V9 Datagram Knowledge Series.

 

 

 

Watch the entire ‘Diving Deeper with NetFlow – Tips and Tricks’ webcast here and become an expert in understanding and implementing NetFlow in your enterprise networks.

A popular commercial FTP client has just been added as a new SolarWinds Free Tool!   FTP Voyager was first introduced in the mid-1990s and has been downloaded more than five million times since.   It was recently relaunched as a native 64-bit application with a fresh interface (and many other improvements) and is now available to the world for free.

 

SolarWinds recommends FTP Voyager for use with SolarWinds TFTP Server or our new Serv-U FTP Server, but we'd love it if you downloaded it and used it with any FTP server, SFTP server or FTPS server!

 

New FTP Client Tutorial Showcases New Interface

 

There are many FTP Voyager tutorials for our popular FTP client, but we wanted to say "thank you" to FatCat Servers for recutting a classic "how do you transfer files" video with the updated interface.

 


"Your FTP Voyager Video Here"

 

Have you published or seen a good FTP Voyager tutorial recently?  If so, please tell us about it in the comments below.

On a seemingly peaceful day at work, you suddenly find your phones going berserk - EMAIL’S NOT WORKING!! Hell breaks loose, everybody’s calling in, Vice president of Sales is on the line, and the rest of it is a familiar story…..

 

With some effort, you figure out that there is an IP conflict.

 

IP conflicts cause the affected machines to experience erratic or no network connectivity. Imagine the business impact if one of your critical servers goes down due to an IP conflict. Now, how did this happen?

 

Some common causes of IP conflicts in a network include:

 

1. Bring Your Own Device (BYOD) Phenomenon

 

Enterprises are now widely adopting BYOD. The problem begins when a user walks in with his device carrying a static IP that is already assigned to a critical server in the network. The result is an IP conflict that brings down your server.

 

 

2. Misconfigured DHCP servers

 

Due to a misconfiguration, multiple load-balanced DHCP servers may assign clients IPs from overlapping ranges in the same subnet. If two systems on the network get the same IP address, the result is an IP conflict.

 

 

3. Human errors during manual IP address management

 

This happens where IP address assignment is managed manually using spreadsheets: you forget to record a new IP allocation and later assign the same IP to another device, leading to an IP conflict.

 

 

How to troubleshoot and resolve IP conflicts?

 

Traditional solutions involve changing the IP of the critical server in conflict or using ping, ARP lookup and spreadsheets to find the conflicting device. The situation worsens if the device is ‘rogue’ and its MAC address information is not available in your database.
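
For what it's worth, that manual approach can be scripted in a few lines: ping the suspect address, then search the local ARP cache with the system arp command. The output format varies by operating system, so the parsing here is only illustrative:

import os
import subprocess

def arp_entries_for(ip):
    """Return ARP-cache lines that mention the given IP (crude, OS-dependent parsing)."""
    count_flag = "-n" if os.name == "nt" else "-c"                      # ping count flag differs by OS
    subprocess.run(["ping", count_flag, "1", ip], capture_output=True)  # warm the ARP cache
    arp_output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    return [line.strip() for line in arp_output.splitlines() if ip in line]

# Seeing different MAC addresses for the same IP across runs suggests a conflict.
print(arp_entries_for("192.168.1.50"))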

 

 

New SolarWinds IP Address Manager 4.0 helps eliminate the predicaments of IP management with active IP conflict detection and alerting. Further, by integrating IPAM with SolarWinds User Device Tracker (UDT), you can use the endpoint mapping feature to easily track down the conflicting device.

 

Hence, move away from stone-age tools like spreadsheets and other primitive solutions. Evolve with IPAM 4.0 and curb the IP conflict issue at its root before it can affect productivity!

 

Download the new IPAM 4.0. Now, with Active IP conflict detection, User Device Tracker integration, and BIND DNS support!

0026_IPAM_4-0_IP-Address-Conflict-Detection_Lg_EN.jpg

An important part of monitoring is reducing the noise of unnecessary alerts.

 

When monitoring the connection ports on your network devices, you want to know which endpoints are connected through which ports, but those routine connections are usually best presented in an event log. An alert, in contrast, is most useful when a rogue endpoint, either unsanctioned or explicitly prohibited, connects to a device port.

 

Identifying rogue endpoints on the network depends on first defining which devices are explicitly allowed. For that you need a white list.

 

With a white list set up, and assuming your monitoring system supports this feature, you then need a way to generate alerts when a rogue endpoint connects. Have your network devices send a trap whenever an endpoint connects to a port; your monitoring application can then compare the trap information against its white list and send an alert only when the endpoint (identified by MAC address or hostname) in the trap data does not appear on the list.
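
The comparison itself is trivial once the trap has been parsed; a minimal sketch, assuming you already have the endpoint MAC extracted from the trap payload and a white list loaded from wherever you maintain it:

WHITE_LIST = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}   # sanctioned endpoint MACs (examples)

def handle_port_trap(switch, port, endpoint_mac, alert):
    """Log every connection, but alert only when the endpoint is not on the white list."""
    mac = endpoint_mac.lower()
    if mac not in WHITE_LIST:
        alert(f"Rogue endpoint {mac} connected to {switch} port {port}")
    # Otherwise just record the event; known devices generate no alert noise.

handle_port_trap("core-sw-01", "Gi1/0/12", "AA:BB:CC:DD:EE:FF", print)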

 

As a result, your device port monitoring will alert you in real time when a rogue device connects to your network, and those alerts will stand out from normal events.

 

SolarWinds User Device Tracker (UDT) supports white listing, rogue endpoint detection and alerting. Consider UDT's feature set a supplement that effectively takes node monitoring visibility down to the device port level.

Time and again, we hear that hackers tend to attack known vulnerabilities, especially when you haven’t applied patches or when your software is nearing its end of life (EOL). It’s not always your fault.

 

The longer you fall behind on security updates from your software vendors, the more vulnerable you become. A timely patch corrects security and functionality problems in software. Patch management comes with several complicating challenges, which are covered extensively by the National Institute of Standards and Technology in its updated Guide to Enterprise Patch Management.

 

 

In short, the challenges arise out of mechanisms used for applying patches, schemes used for managing hosts and so on. Here are some useful best practices for patch management:

 

  1. Phased approach: Deploy pre-tested patches in phases rather than applying updates across your entire network at once. You may also need to handle manually any non-standard and legacy systems that are not supported by the tools you use to deploy patches.
  2. Standard security techniques for patch deployment: When you are deploying patches, there are potential issues such as patches being altered or credentials being misused. It’s difficult to manage everything manually, so the best way to go is to automate the patch management process, which means choosing the right patch management software.
  3. Balance security with feasibility requirements: Patches can at times break other applications, so it’s important to prioritize patch deployment and strike a balance between security and usability/availability requirements. For example, downloading large patches to remote and mobile devices over low network bandwidth is not feasible, and you need to ensure that your patch management tool works well in such environments.

 

But is it only about staying updated? The answer is a big NO. Your organization might have software from hundreds of vendors, so bulk deployment can be an uphill task. Gartner estimates that IT managers spend up to two hours every day managing patches. For instance, last week Microsoft® rolled out its security patch with updates for Internet Explorer™, Office® and Windows®. This latest installment addressed 33 bugs in a range of Redmond software. So there is a chance that you may miss a critical vulnerability notification.

 

 

 

Alright, so you need effective patch management software to survive an insecure IT environment, but that’s not the end of the story. Patching with a patch manager shouldn’t be a purely reactive process: schedule scans on a regular basis to analyze your IT environment and deploy all critical patches. To sum up, staying up to date on current patches is certainly a step toward endpoint stability and security.

Stay patched, stay secure!

Those of you who are security practitioners know the necessity of incident awareness across the various dimensions of the network. Threats are ready to strike at any time, and having informative, meaningful data at hand helps you counter attacks and remediate risks.

 

Logs are the means to any actionable result. Any piece of critical activity on your network will trigger log messages: they may be syslog messages or SNMP traps, system logs, server logs, etc. From these silos of data across so many disparate devices and systems on the network, how can you gain visibility into specific threat events and pinpoint their cause?

 

The heart of security information and event management (SIEM) is event correlation. This allows you to get coherent information in real time whenever there is peculiar or suspicious activity on the enterprise network.
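
As a simplified illustration of what a correlation rule does (a toy sketch, not how LEM's engine is actually implemented), consider counting failed logons per user within a short window and raising a single alert when a threshold is crossed:

from collections import defaultdict

WINDOW_SECONDS = 300
THRESHOLD = 5
failed_logons = defaultdict(list)          # user -> timestamps of recent failed logons

def correlate(event):
    """Return an alert string when a user accumulates too many failed logons in the window.

    `event` is assumed to be a normalized record like
    {"type": "failed_logon", "user": "jsmith", "time": 1369000000}.
    """
    if event["type"] != "failed_logon":
        return None
    recent = [t for t in failed_logons[event["user"]] if event["time"] - t <= WINDOW_SECONDS]
    recent.append(event["time"])
    failed_logons[event["user"]] = recent
    if len(recent) >= THRESHOLD:
        return f"Possible brute force: {event['user']} failed {len(recent)} logons in 5 minutes"
    return None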

 

How Does Event Correlation Work?

 

SolarWinds has made this intricate activity extremely simple with a correlation technology so powerful that you don’t have to do anything: the correlation engine will monitor, detect, alert, react, and report when it encounters anomalous system or user activity on the network. SolarWinds Log & Event Manager (LEM) is a full-function SIEM solution that offers an intelligent correlation engine to understand operational, security, and policy-driven events.

 

  • Log Collection: LEM captures real-time event streams from network devices and utilizes agent technology to capture host-based events in real time. Here is a list of data sources from which LEM can receive log data for correlation and analysis.
  • Normalization: This is a key step before events are correlated. LEM parses the raw log data from agent nodes (workstations, servers, VMs, OS, etc.) and maps events from disparate sources to a consistent framework. This helps structure the data into identified categories and fields.
  • In-Memory Correlation: LEM correlates event logs in-memory thus avoiding performance bottlenecks associated with database insertion and query speeds.
  • Multiple-Event Correlation: LEM has comprehensive support for multiple-device, multiple-event correlation, including the unique ability to set independent thresholds of activity per event, or group of events.
  • Non-Linear Correlation: After mapping events in-memory, LEM applies a completely non-linear, multi-vector correlation algorithm. This reduces the number of correlation rules and eliminates the need to build distinct rules for all possible combinations of events.
  • Field-Level Comparison: LEM combines field-level data with user-defined groups and variables, making it possible to build rules that minimize false positives and focus your attention where and when it’s needed.
  • Environmental Awareness: LEM’s correlation rules factor in details about the organization, such as critical assets, applications, time of day or day of week, etc. to bring focus on the environmental parameters associated with the events and maximize the value of the data that’s being captured and analyzed.

LEM Real-time Event Log Correlation.png

 

So, What’s The Result of Event Correlation?

 

You have meaningful and actionable data that provides advanced incident awareness and threat visibility on your entire IT environment.

 

Using the correlated event data, you can:

  • Set up alerts to trigger when a specific security condition is encountered
  • Program active responses to counter threats, troubleshoot issues and react to policy violations
  • Perform event forensics and root cause analysis to identify suspicious behavior patterns and anomalies
  • Generate compliance reports for network and security audits

And more…

 

Correlation Rule Builder

 

SolarWinds LEM offers a simple-to-use correlation rule builder that allows you to build correlation rules using interactive drag-and-drop interface. Plus, there are nearly 700 correlation rules available out of the box for immediate use.

  LEM Correlation Rule Builder.png

 

SolarWinds Log & Event Manager makes event correlation simple yet powerful, offering you a central SIEM solution to process and manage log data.

With the long weekend approaching I thought I'd share a fun look at what I've learned working in and around IT.

The-Longer-You-Work-In-IT-Jonathan-Lampe.png

IT Tools To Shorten Your Day


We can't fix all your workplace challenges, but we can help you wrap up and leave sooner.  Some of our favorite tools here include:


Like It? Please Share...


If you've ever felt the same way, feel free to share this with a colleague.  I'd also love to read your thoughts, experiences or links to similar cartoons in the comments section below. 

If you are having issues with Kiwi Syslog Daemon not receiving and displaying messages, you can use a free packet capture program such as Wireshark.

 

This program provides the ability to capture packets as they are sent to your Network Interface Card (NIC). By filtering for and analyzing this traffic, you will be able to determine if your network devices are actually sending the expected information to your system.

To set up Wireshark:

  1. Download and install the program from the Wireshark website.
  2. Use the Capture menu to open the Capture Options form.
  3. Select your NIC and define a capture filter that will look for all packets sent to UDP port 514 (the default syslog port).
  4. Press the Start button, and you should see packets being captured, as in the image below.
  5. Stop the capture and view the data. It should show packets with the protocol being Syslog.

 

If you mirror a port on your Ethernet switch, Wireshark will show you everything! You can then use Kiwi SyslogGen (freeware) to replay syslog messages from a Wireshark capture file.
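
If you simply want to confirm that UDP port 514 traffic can reach the Kiwi host at all, a few lines of Python can hand-roll a test message (the address is a placeholder, and <14> just means facility 1, severity 6):

import socket

KIWI_HOST = "192.168.1.10"   # replace with your Kiwi Syslog Server address
message = "<14>Test message from the syslog troubleshooting script"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message.encode("ascii"), (KIWI_HOST, 514))   # 514 = default syslog port
sock.close()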

For more information about how Kiwi products can work for you see: Remote Network Configuration Products | Kiwi Enterprises

To help protect and safely resolve high-risk situations on the network, organizations need SWAT force power—the kind found in the new SolarWinds® User Device Tracker (UDT).

 

The latest version of User Device Tracker incorporates powerful new features to help you take an even more proactive stance in controlling who and what is allowed on your network.

 

UDT can be tactically deployed to help you manage the onslaught of mobile devices while ensuring rogue devices stay out, with key features like:


 


Proactive & Actionable Controls

There are several strategic ways to proactively and actively secure access to your network with SolarWinds® User Device Tracker. First, leverage a whitelist to define a safe set of devices based on MAC addresses, IP addresses, or hostnames. Any device not on the whitelist will be seen as a rogue device and an alert will be triggered.

 

Second, add suspicious or unauthorized users to a Watch List so you can be notified every time they enter the network.

 

Finally, on detection of rogue device activity, instantly shut down the compromised network device port. This can be remotely executed while sitting at your desk.


Automatic Alerts

Being alerted at the right time is critical to take action on offending devices or unwanted network connections. With User Device Tracker, you can enable trap notifications to get connectivity information in "real time" and receive alerts whenever a device not on the whitelist connects. Additionally, you can set up alerts to notify of an endpoint port change.


Real-time Data Analysis & Reporting

UDT provides two easy methods to collect user device information:

 

  1. Use the Domain Controller Wizard to collect user login information. For this, configure appropriate logging levels on the Windows® servers beforehand.
  2. Poll devices for Virtual Routing and Forwarding (VRF) data.

 

Additionally, you can leverage customizable, built-in report generation options to extract user and device information. For example, report on all wireless endpoints connected to the network.

 


Conclusion

To sum up, ensure the following ‘special weapons’ are part of your network management arsenal:

 

  • Proactive Measures & Actionable Controls – Whitelisting, Watch List, Remote Port Shutdown
  • Automatic Alerting – Trap Notifications, Endpoint Port Change Notification
  • Real-time Data Analysis & Reporting – Domain Controller Wizard, Polling for VRF data, Wireless Endpoint Report


Jump into action with the SWAT power of UDT to combat rogue endpoint devices with ease and deliver active network access security.

 

 

Download a free trial today and seize control of your network!

Today, let’s turn our attention to a front-end technology, cascading style sheets (CSS), and how it can affect website performance. The objective is to leverage a web performance monitoring tool to continually monitor the end-user experience for each Web transaction.

 

Web designers work extensively to offer a great user experience. A lot of thought goes into giving websites a clean look that is easy on the eye, smooth navigation for seamless transitions between pages, and the overall usability that keeps traffic coming back. When you visit a website looking for answers, it’s important that the page elements (e.g., fonts, images, and the page layout) stand out and are inviting enough to pull you in.

 

However, in spite of having all the right credentials, sometimes when a website loads, it only displays hyperlinks and text. This happens when the style sheet fails to load, so images and text elements do not get displayed as intended. Another potential problem with CSS is that it can sometimes take a long time to load on sites with graphic-heavy content.

 

css 1.PNG

What’s the Root Cause?

Inappropriate CSS reset: Without a proper CSS reset, browsers render pages differently, so the layout and format might look different on different computer screens.

Color names cause mix-up: Using color names incorrectly causes a Web page to display text in wrong or different colors.

Long CSS code: Lengthy CSS code will only mean that your website is going to take a longer time to load.

Failure of page elements: Images, graphics, and page layout can sometimes fail to load or take unacceptable amounts of time to load completely.

Failure of text and image style elements: Failure to set correct parameters for text and image elements in Web pages can cause them to display in smaller fonts and shrunken images.

Location-specific issues: CSS files and their elements can look different depending on physical or geographic location.

 

CSS problems are quickly noticed by end users, so they need to be fixed immediately to reduce abandonment. It is therefore critical to monitor websites and Web transactions so you can stay aware of and ahead of issues like these.

 

CSS Monitoring Tips:

 

Record the Web transaction:  This will establish a baseline for how your applications should perform. When pages (steps) perform slowly, you will get an alert that there is a problem with the specific page/step of the transaction.

Monitor load speeds: Keep an eye on page load speeds and page element load times, paying special attention to images, JavaScript, CSS files, and the .aspx framework. (A quick manual check appears as a sketch after these tips.)

css 2.jpg

o Drill down to take a detailed look at the page elements. Here you will see whether CSS is the culprit or if the problem is related to another issue.

css 3.jpg

Monitor Web page behavior and validate content from multiple locations: Since you can’t be everywhere at once, you can play back Web transactions from multiple locations and see if the page is actually loading content as intended.

o When recording the transaction, you can monitor for the image match – i.e., whether the image that is played back actually matches the image that you recorded.  This will validate whether the page is loading content as intended.

o After you record the transaction, you can deploy it to locations where you have players installed. You can always leverage the Amazon EC2 cloud to deploy players where you do not have a physical operation.

 

css 4.jpg
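
For a quick spot check outside of a full monitoring tool, the Python standard library alone can time how long each stylesheet referenced by a page takes to download. A rough sketch; the URL is a placeholder and the regex only catches simple <link> tags:

import re
import time
import urllib.request
from urllib.parse import urljoin

PAGE = "http://www.example.com/"        # placeholder URL

html = urllib.request.urlopen(PAGE, timeout=10).read().decode("utf-8", "ignore")
# Very naive: grab href values from <link ... rel="stylesheet" ... href="..."> tags.
css_links = re.findall(r'<link[^>]+rel=["\']stylesheet["\'][^>]+href=["\']([^"\']+)["\']', html)

for href in css_links:
    url = urljoin(PAGE, href)
    start = time.time()
    body = urllib.request.urlopen(url, timeout=10).read()
    print(f"{url}: {len(body)} bytes in {time.time() - start:.2f}s")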

To monitor CSS issues automatically, test drive a free trial of SolarWinds Web Performance Monitor today in your own environment.

 

Stay tuned to learn more about how web performance monitoring tools can keep an eye on other front-end application issues. In the meantime, keep on styling!

While we continually look for ways to simplify firewall configuration and change management tasks, a simple erroneous rule can lead to very big risks in the network. However careful we may be, redundant or shadowed rules always seem to find their way in.

 

There’s certainly a pressing need for smart and easy ways to find and fill gaps in security rules, as well as spend less time troubleshooting errors. Let us look at 3 common Juniper firewall management challenges and some tips and best practices to address them. Read on…

 

 

1. Cleaning up a Cluttered Rulebase

An important point of concern is that new rules are continually added to the firewall, but little effort goes into removing them once they become redundant. When the task of identifying and removing unused, redundant, or shadowed rules is ignored, you end up with a cluttered rulebase that can lead to security gaps and performance issues. Keep in mind the following:

  • Juniper firewall rules are defined on a source and destination zone pair. Each network interface belongs to a zone, a security area with its own access policies, and a particular zone may have multiple interfaces associated with it.
  • There can be multiple objects per rule for source, destination, and service. As a result, the rulebase contains fewer rules, but each one is highly aggregated.

 

Therefore, to increase performance and efficiency, it’s crucial that unnecessary firewall rules are regularly removed. Additional policy optimization can be achieved with structural redundancy clean-up. This helps identify and remove erroneous entries in configurations that are completely useless to the functioning of the firewall.
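
As a simplified illustration of what "shadowed" means, the sketch below walks an ordered rule list (represented as dictionaries with hypothetical fields) and flags any rule whose traffic is already fully matched by an earlier rule:

def covers(earlier, later):
    """True if `earlier` would match every packet that `later` matches (exact/any fields only)."""
    def field_covers(a, b):
        return a == "any" or a == b
    return all(field_covers(earlier[k], later[k])
               for k in ("src_zone", "dst_zone", "source", "destination", "service"))

def shadowed_rules(rules):
    """Return indexes of rules that can never be hit because an earlier rule covers them."""
    return [i for i, rule in enumerate(rules)
            if any(covers(prev, rule) for prev in rules[:i])]

rules = [
    {"src_zone": "trust", "dst_zone": "untrust", "source": "any", "destination": "any", "service": "any"},
    {"src_zone": "trust", "dst_zone": "untrust", "source": "10.1.1.0/24", "destination": "any", "service": "http"},
]
print(shadowed_rules(rules))   # -> [1]: the broad first rule shadows the second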

 

2. Analyzing Firewall Logs

Juniper firewall logs include references back to the rules that were triggered. Analyzing these configurations and logs therefore helps isolate redundant, covered, and unused rules on these devices.

 

Firewall usage analysis is based on log records collected through the syslog interface. There are two simple ways to do this:

  • Have log data stored in a file/directory, and then use it later for analysis
  • Schedule log collection for a specific period of time, and then analyze this data.

 

Regularly schedule rule usage analysis for continuous rule-base optimization.

 

 

3. Handling the Command Line Interface (CLI)

CLI commands can be quite complex, and not everybody is adept at using them. In Juniper firewalls, the configurations are CLI-centric and are retrieved directly from the devices using SSH/Telnet connections. Unfortunately, this can be complicated, time-consuming, and error prone.

 

What's really needed is a simple-to-use interface which provides a consolidated view that is easily comprehensible and offers at-hand information to quickly identify discrepancies.

 

 

In Summary:

  1. Clean up your Juniper firewall rulebase regularly
  2. Analyze firewall logs for effective rule management
  3. Handle CLI commands from an intuitive management console

 

SolarWinds Firewall Security Manager is an easy-to-use firewall management solution that helps you better manage your Juniper and multi-vendor firewall devices from a single, intuitive interface for improved network security and administrative ease. Download a free trial today.

We are pleased to announce that SolarWinds Network Performance Monitor (NPM) version 10.5 is now available for download.

 

As today’s dynamic networks grow in size and complexity, the number of active routing topology states grows exponentially.  With the new network route monitoring feature, SolarWinds NPM takes fault and performance monitoring to the next level by providing real-time network route information alongside device status and performance statistics.  With support for major routing protocols including RIP, OSPF, and BGP, IT pros can now view routing tables, changes in default routes, and flapping routes in an intuitive web-based console.

 

Additionally, many advanced network services including multimedia distribution, finance, education, and desktop imaging rely on IP multicast to reduce network bandwidth usage. SolarWinds NPM’s new IP multicast monitoring feature enables IT pros to monitor routers, switches and end-points that receive and forward multicast packets by automatically detecting and importing existing multicast groups and applications.

 

Other new updates to SolarWinds NPM include advanced interface filtering by hardware type, name, VLAN, and more when importing new nodes and interfaces, as well as interface auditing.

 

You can learn more about NPM version 10.5 and download a free fully functional 30-day trial so you can see how it works in your network.

Welcome to the SolarWinds blog series “Diving Deeper with NetFlow – Tips and Tricks”. This is the first part of a six-part series where you can learn new tips by understanding more about NetFlow and find some everyday use cases for effective network monitoring.

 

Network problems seem to be a never-ending condition for administrators who are charged with both maintaining network performance and delivering advanced network services to their organizations. Constrained IT budgets and increasing pressure to ensure constant uptime have pushed network engineers to manage existing resources carefully and control costs. Engineers can troubleshoot network problems and solve bandwidth issues by taking advantage of the flow technologies already built into their routers and switches. By collecting and analyzing the flow data in your network with NetFlow, monitoring your network traffic becomes much easier and provides greater visibility.


What is NetFlow?

 

NetFlow is a network protocol developed by Cisco Systems for collecting IP traffic information; it eventually became a widely accepted standard for traffic monitoring and is supported on most platforms. NetFlow answers the questions of who (users), what (applications), and how network bandwidth is being used. By understanding NetFlow more deeply, you can uncover insights and everyday uses you may not have thought about.


Effectively troubleshoot network issues with NetFlow

 

NetFlow data contains information about the network traffic, which helps network administrators to attend to issues related to application slowness and network performance degradation. Using NetFlow you can:

  • Identify the hosts involved in a network conversation from the source and destination IP addresses, and its path in the network from the Input and Output interface information.
  • Identify which applications and protocols are consuming your network bandwidth by analyzing the Source and Destination Ports and Protocols.
  • Analyze historical data to see when an incident occurred and its contribution to the total network traffic through the packet and octet count.
  • Ensure the right priorities to the right applications using ToS (Type of Service) analysis.

 

Flow data helps you keep track of interface details and statistics on top talkers and users, which can help determine the origin of an issue when a problem is reported. With Type of Service (ToS) in NetFlow records, you can understand traffic patterns per Class of Service (CoS) in your network, verify that the Quality of Service (QoS) levels you planned are actually achieved, and optimize network bandwidth for your specific requirements. Additionally, NetFlow data helps you analyze usage patterns over a particular time period and find out who or what uses most of the network bandwidth. NetFlow makes it possible to quickly troubleshoot application and performance-related problems in your network.
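
A trivial sketch of the "top talkers" idea, again working over already-exported flow records with hypothetical field names:

from collections import Counter

def top_talkers(flows, n=10):
    """Sum bytes per source IP and return the heaviest senders."""
    by_src = Counter()
    for flow in flows:
        by_src[flow["src"]] += flow["bytes"]
    return by_src.most_common(n)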


Maintaining Network Uptime with NetFlow

 

Network uptime is critical to an organization’s revenue, and an understanding of traffic behavior helps you maintain it. Excessive use of network bandwidth by users and applications can be controlled by identifying the top talkers from real-time and historical data. Manually collecting and analyzing the flow data is a humongous task; by using a NetFlow analyzer, you can capture NetFlow data from different points in your network and convert it into easy-to-interpret information that helps you better manage your enterprise network.

To learn more about NetFlow, check out our NetFlow V9 Datagram Knowledge Series.

 

The ‘Diving Deeper with NetFlow – Tips and Tricks’ webcast is scheduled on 23rd May 2013. Register here and become an expert in understanding and implementing NetFlow in your enterprise networks.

Storage Manager uses Tomcat as its web server. The session timeout parameter can be found in the web.xml file.

 

For Windows, the path is:

 

<installed drive>\Program Files\SolarWinds\Storage Manager Server\webapps\ROOT\WEB-INF

 

Linux:

 

/Storage_Manager_Server/webapps/ROOT/WEB-INF

 

Within the WEB-INF subdirectory there will be a file called web.xml. Open this file with a text editor and do a search for <session-timeout>.

 

The default is 30 minutes. If we want to change the timeout to 1 hour, we simply change the value to 60.

 

Before:

 

<session-timeout>30</session-timeout>

 

After:

 

<session-timeout>60</session-timeout>

 

 

If you wish to set the timeout to infinity, change the value to -1.
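
If you just want to check the current value, perhaps across several Storage Manager servers, the Python standard library can read it for you. A sketch, using the Windows path quoted above and assuming the default C: install drive:

import xml.etree.ElementTree as ET

WEB_XML = r"C:\Program Files\SolarWinds\Storage Manager Server\webapps\ROOT\WEB-INF\web.xml"

tree = ET.parse(WEB_XML)
# web.xml typically declares a default namespace, so match on the tag suffix.
for elem in tree.iter():
    if elem.tag.endswith("session-timeout"):
        print("Current session timeout (minutes):", elem.text.strip())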

 

 

Once the changes have been made, save the file and restart the Storage Manager Web Service.

 

 

To restart the service on Windows, run services.msc, locate the SolarWinds Storage Manager Web Service, and select Restart.

 

 

1.jpg

 

 

To restart the web service on Linux, open an SSH session to the Storage Manager server, log in with an account that has the proper permissions (such as root), and type the following command:

 

 

/etc/init.d/storage_manager_server restart webserver

 

2.jpg

 

Note that upgrading or performing an uninstall and reinstall of Storage Manager will set the timeout value back to the default of 30 minutes.

Targeted espionage, in simple terms, is the practice of illegally spying on and investigating competitors, mostly to gain a business advantage. The target may be financial information, a trade secret such as a proprietary product specification, and so on. You may think that your organization is not a high-value target, but that’s not necessarily true.

 

There is always a hunt for sensitive and personal information like credit card and social security numbers, patient records, and so on. In most cases, a highly targeted attack precedes an APT; it may use a maliciously crafted document or executable emailed to a specific individual or group. An advanced persistent threat (APT) refers to an entity or group with both the capability and the intent to persistently target a specific organization or network.

 

A recent Verizon data breach survey revealed the victims by industry:

     • 37% - Financial Organizations
     • 24% - Retail and Restaurants
     • 20% - Manufacturing, Transportation and Utilities
     • 20% - Information and Professional services

 

So how does the attack typically happen?
The parties behind attacks like this are known as ‘threat actors’, and they can be classified into three categories:
     • External - The ones outside the victim organization
     • Internal - These threat actors are the ones within the victim organization
     • Partners - Partners can be any third party that shares a business relationship with the organization

 

Most attacks come with the intent to get at financial data, and sometimes other business information. They can take the form of data theft attempts, SQL injection, spyware, phishing, hacking, and other kinds of malware.

 

For instance, databases are increasingly becoming targets for hackers, which has made information security one of the most important drivers of security investments. You need visibility into security and compliance, and protection for your data. To ensure this, collect and consolidate log data across the IT environment and correlate events from multiple devices in real time.

 

A recent report showed that a decade-long espionage operation used the popular TeamViewer remote-access program and proprietary malware to target political and industrial figures in Hungary.

 

So it’s high time you get proactive and shield yourself against possible threats. Continuously monitor the activity on your web servers, firewalls, and endpoints. By deploying a log file analyzer tool, you can identify anomalies and deviations from policy definitions, baseline your IT environment, and shield it from vulnerabilities.

Don't Be a Sitting Duck!

 

Script kiddies test the defenses of FTP servers and SFTP servers (using SSH) every minute of every day.  IT administrators have gotten used to these probes, and smart ones have already enabled IP lockouts on their perimeter servers.  (This setting is on the "Server Settings" pane in Serv-U FTP Server.)

 

sitting_duck.jpg

However, there are a number of "well known" usernames that should never be used on FTP servers and SFTP servers because they are just too easy to guess. (A quick audit sketch follows the list below.)

 

10. administrator -  Very popular in Windows environments.  Don't use it on your FTP server.

9. oracle - Companies that like to write big checks to Larry often cut corners elsewhere to make the payments.  Don't follow the herd using "oracle" on systems that connect to the enterprise database.

8. mysql - Don't use the names of other databases or back-end infrastructure either. (Also avoid "sa", "sqlserver" , "nas", "postgres", etc..)

7. user - Popular test account, often set up with too many permissions, and often rolls over from the evaluation environment to production.

6. guest - "Sure, c'mon in.  You can use the bathroom, the phone and my checkbook."

5. apache - It's also common to see people name accounts after the web application they support with their FTP or SFTP services. (Also avoid "iis", "serv-u", "nginx", "www", etc.)

4. info - I'm honestly stumped on why "info" is popular (if you know, tell me in the comments), but it is.

3. test - "It's just a test account.  I promise I'll delete it - soon."

2. admin - Tempting to use in web applications (including Serv-U) because it's so short. Pick usernames like "[your initials]admin" instead to avoid script kiddies.

1. root - By far, the most popular attack target.  If you're building a honeypot, include root.  If not, don't.
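
If you want to audit an existing account list against names like these, a tiny sketch will do it; the account list is whatever your server exports, shown here as a plain Python list:

RISKY = {"administrator", "oracle", "mysql", "sa", "sqlserver", "nas", "postgres",
         "user", "guest", "apache", "iis", "serv-u", "nginx", "www", "info", "test",
         "admin", "root"}

def risky_accounts(usernames):
    """Return any account names that match the well-known, frequently attacked list."""
    return sorted(u for u in usernames if u.lower() in RISKY)

print(risky_accounts(["jsmith", "admin", "backup", "ROOT"]))   # -> ['ROOT', 'admin']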

 

Other Usernames to Avoid

 

Did I miss some of the usernames you expected to see?  If so, tell me about them in the comments section below.

Recently, the Geeks at the SolarWinds Lab landed themselves in a bit of a pickle. Which one is faster – installing a virtual appliance with Hyper-V® or VMware®?

Our Head Geeks, Lawrence Garvin and Patrick Hubbard, were certain that there was only one way to get to the bottom of the situation – A Virtual Application installation bout!

 

Our head geeks also explain why noisy pagers or flooded mailboxes are not the only problems for IT. The biggest challenge is to have all your alerts in one place for analysis, adding notes, and performing initial triage before assigning each alert to the right person in the right department.

 

Pick your favorite early and root for your favorite geek or hypervisor! (Remember, this is not a competition, it is only an exhibition - please, no wagering.)

 


Did your favorite Geek win? Well, there are no losers in this battle. All IT specialists receive a gold medal with our free SolarWinds Alert Central - the perfect tool to consolidate and manage all your alerts. Just follow along with Lawrence and Patrick to install it on your virtual machine and fire it up to:

 

  • Consolidate your alerts from all your IT monitoring software
  • Set up automatic escalation workflows that work for each team
  • Schedule on-call rotations using the intuitive calendar interface
  • View status/priority of alerts with easily distinguishable icons


Put an end to those ignored critical alerts that get lost in somebody’s inbox and that only your boss seems to know about.

 

Alert Central is not just one of our many free tools.  It’s a free product capable of handling tens of thousands of alerts for hundreds of users.


Download Alert Central for Hyper-V or VMware and get your weekends back.

On March 5th of this year, Network Topology Mapper v1.0 was made available to the public.  Since then it has quickly become one of the more popular products that SolarWinds offers.  It’s a great tool for MSPs and IT Consultants that travel from one client location to the next because with only one license of NTM, an unlimited number of networks can be scanned and mapped.  It’s also a nice complement to SolarWinds Network Performance Monitor (NPM) because maps created in NTM can be exported to the Network Atlas format and then imported for use in NPM.

 

NTM_1-0_NETWORK_MAPPING_REGULATORY_COMPLIANCE_Base_EN.png

 

On 5/13/13, the first service release of NTM was made available.  This update includes some great new features.  Along with a few bug fixes, this service release includes:

 

  • Nodes with multiple IP addresses are now supported in tooltips, details windows, and search queries
  • Spanning tree now reports states in English instead of stored values
  • Link speeds of up to 10Gb are now identified
  • Maps can now be exported to Visio 2013 vsdx format

 

For those of you who have already purchased NTM, visit the customer portal and download the latest release for these updates.  If you haven’t tried NTM yet, now is the perfect time!  For those who would like to try NTM, we’ve unlocked a few of the features in the trial to make the experience better. Download NTM v1.0.1 today and see how easy it is to create an accurate and detailed map of your network.

We keep hearing about Denial of Service (DoS) attacks, owing in large part to our dependency on the Web. A typical DoS situation is a website going offline. You may also have faced situations where a sudden increase in traffic causes a site to load very slowly; sometimes the traffic is heavy enough to shut the site down completely, a classic case of Distributed Denial of Service (DDoS).

 

In short, DoS and DDoS attacks are some of the most inventive hacking practices on the rise, bringing down business-critical services and inhibiting user Web access and business continuity.

 

So, the question is: what exactly are DoS and DDoS? More importantly, how do we guard our IT assets against them?

 

 

Denial of Service

A DoS attack is an attempt to prevent legitimate users from accessing information or services. It usually targets your system and its network connections, or the networks of critical sites that you often use. The most common type is flooding a network. For example, when you type the URL of a website, you send a request to access the page. The site’s web server can process only a certain number of requests at a time; when it is flooded beyond that limit, it cannot process your request, which is precisely a “denial of service”.
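
To make "flooding" concrete, the toy sketch below counts requests per client IP in a web-server access log and flags heavy senders; it assumes the client IP is the first whitespace-separated field on each line, as in the common and combined log formats:

from collections import Counter

def heavy_hitters(access_log_path, threshold=1000):
    """Count requests per client IP in a web-server access log and flag heavy senders."""
    counts = Counter()
    with open(access_log_path) as log:
        for line in log:
            counts[line.split(" ", 1)[0]] += 1   # client IP assumed to be the first field
    return [(ip, count) for ip, count in counts.most_common() if count >= threshold]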

 

For most hackers, Web servers are the ideal choice for launching attacks because they have more computing and network capacity than a home PC. Something very similar happened with the Mt. Gox servers recently. To crash a web server, a DoS attack targets resources such as the following:

  • Network bandwidth
  • Server memory
  • Application exception handling mechanism
  • CPU usage
  • Hard disk space
  • Database space
  • Database connection pool

 

To a large extent, organizations tend to rely on firewalls to defend their networks against DoS attacks. Although firewalls are a key component of an organization's security solution, they are not individually capable enough to thwart a targeted DDoS attack.

 

 

Distributed Denial of Service

In a DDoS attack, the hacker exploits security vulnerabilities to take control of your system and use it to attack other systems on the network, for example by sending out spam or flooding a website with traffic. In simple terms, the attack is distributed: the attacker uses multiple computers to launch the DoS attack.

Symptoms like slow network performance, a sudden spike in spam, and an inability to access certain websites suggest that your network may be under attack. It’s best to be proactive and shield yourself against possible threats: continuously monitor the activity on your web servers, firewalls, and endpoints. Security information and event management (SIEM) software is an ideal choice here; it monitors the logs collected from the various entities in your IT environment and analyzes and correlates events in real time for advanced incident awareness.

 

 

If you want to safeguard your IT against DoS and DDoS threats, you need to ensure that your SIEM tool uses active responses to react to critical security events and shut down threats immediately. Some key built-in responses you will likely need are:

  • Send incident alerts, emails, pop-up messages, or SNMP traps
  • Add or remove users from groups
  • Block an IP address
  • Kill processes by ID or name

Microsoft SharePoint is a web application commonly used for document and file management, collaboration, search, business intelligence, social networking and other functions.  With its widespread use, internal and external customers are dependent upon SharePoint’s availability to get things done.  Below are the top causes of a slow-responding SharePoint application, and how you can proactively identify these problems and fix them before end users even know there is an issue.

 

1) Network devices and bandwidth:  The most obvious reason for network latency is often bandwidth capacity.  However, latency issues can still exist even in a high-bandwidth network, particularly if the devices involved in the interconnections (switches, routers, firewalls, etc.) are introducing the latency. At their core, these devices are all ‘store and forward’, and when the store and forward takes longer than optimal, latency is introduced. Locational issues are caused by distance (the round trip takes a while) or by a location’s network infrastructure, where the internal WAN may be slow.
2) Volume of requests/application usage: Each and every click is recorded as a transaction.  If the volume of transactions exceeds the available resources, it causes application latency.  An increase in the number of concurrent users can also cause responsiveness to suffer. High memory usage, low disk cache memory, or storage I/O contention may cause latency in loading components required by SharePoint.
3) Load time for integrated components: SharePoint allows you to add widgets and integrated components that rely on technologies like Java and SQL. A problem or delay in loading these components can introduce latency.
4) Database issues: SharePoint relies heavily on the database infrastructure. I/O problems could indicate a problem with the disk, and latency can also be caused by slow queries.

 

To proactively detect these issues, here is some guidance on what to monitor in your SharePoint environment.
Monitor the network.  This includes utilization of each interface as well as the network latency and packet loss for each node. 
Monitor web transaction response times from multiple locations.  With a good website monitoring tool, you can determine if a slow page is locational or if the problem is native to the application. 
Monitor page load times for the entire transaction.  Monitoring all the pages/steps in a transaction is necessary to pinpoint where the user experience breaks down.

sharepoint pages.PNG

 

When looking at an individual page, it’s good to have a waterfall chart to see which element is consuming the most time and determine whether the issue is related to JavaScript, a DNS lookup, etc.

sharepoint waterfall.PNG

Monitor database performance & query times.  Because a database issue can be the cause of a SharePoint performance problem, it is important that your server management tool can monitor key performance metrics of your database.  Key metrics include lock wait time, fragmentation, and deadlocks, among others.  You also want to monitor how long SQL queries take to run, which indicates whether a query needs to be rewritten to improve performance.

SQL monitoring.PNG

 

Monitor underlying server resources for CPU, Memory and Disk constraints. CPU utilization issues can indicate underperforming hardware, or perhaps that a virtual machine has insufficient resource allocation.  It is also very important to keep close tabs on disk I/O and disk latency to understand how storage performance is impacting your application.  This is a major issue with heavy, data-intensive applications like SharePoint.
Monitor specific SharePoint performance metrics such as:
-SharePoint request wait time.  As the number of wait events increase, page-rendering performance will deteriorate.  If wait time is consistently trending up, you should consider adding additional web servers to support your application.
-SharePoint requests rejected.  If there are any requests rejected (showing a 503 HTTP status code), there are insufficient server resources, and you should consider implementing additional web servers.
-SharePoint Worker process Restarts.  Any worker process restarts can indicate a problem such as a memory leak, access violations, or misconfigured process settings.  Investigate process restarts to prevent issues.

-Requests per second.  This provides an indication of the current throughput of the application.  If this metric moves out of its normal range, you will need to add additional resources to cope with the increased load.
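If you want to spot-check these metrics by hand, the closest standard Windows counters live under the ASP.NET counter objects. The sketch below (a rough illustration, not a SAM feature) samples them once with typeperf on a SharePoint web front end; exact counter names can vary by .NET and SharePoint version, so treat the list as an assumption to adjust.

import subprocess

# Standard ASP.NET counters that roughly correspond to the metrics above.
# Exact counter names can vary by .NET/SharePoint version; adjust as needed.
COUNTERS = [
    r"\ASP.NET\Request Wait Time",
    r"\ASP.NET\Requests Queued",
    r"\ASP.NET\Requests Rejected",
    r"\ASP.NET\Worker Process Restarts",
    r"\ASP.NET Applications(__Total__)\Requests/Sec",
]

# typeperf ships with Windows; "-sc 1" takes a single sample of each counter.
output = subprocess.check_output(["typeperf", "-sc", "1"] + COUNTERS, text=True)
print(output)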

 

The SolarWinds Web App Monitoring Pack provides the ability to monitor web application page load times and provides out-of-the-box monitoring for SharePoint 2013, 2010 and 2007. Try it free for 30 days.

Quick—why do YOU transfer files from point A to point B?  Our experience shows that many of you are trying to:

  • Back up your data
  • Match up two different folders, or
  • Transfer a really big file without having to wait around


Did you know you can do all of those things and more with SolarWinds free FTP Voyager client?   This article shows you how, with five quick "file transfer recipes."  (Remember to download FTP Voyager before trying any of this on your own.)


How to Back Up Files From Your Desktop or Laptop


1. Obtain an account on either a remote FTP, SFTP or FTPS server such as Serv-U MFT Server
2. Open up the FTP Voyager Scheduler
3. Use the Backup Wizard to select local & remote folders
4. Schedule & run the backup


FTPVoyager_Backup.png


How to Synchronize Two Folders

 

1. Open FTP Voyager & open a connection to your FTP, SFTP or FTPS server

2. On your left pane, click into the local folder you want to synchronize

3. On your right pane, click into the remote folder you want to synchronize
4. Click the "Compare Folders" button between the two panes
5. Transfer files back and forth between the folders until all the files that are listed appear in green


FTPVoyager_SyncByHand.png


How to Shut Down After a Big (Unattended) Transfer


1. Open FTP Voyager & open a connection to your FTP, SFTP or FTPS server

2. Perform a test to make sure transfers work by downloading one or two small files

3. Go to the "Transfer Queue"

4. Change "On Queue Completion" to "Shut Down Computer"

5. Start the large file transfer & leave

6. Your computer will now automatically shut down when the big file transfer finishes


How to Synchronize Entire Folder Trees


1. Open FTP Voyager & open a connection to your FTP, SFTP or FTPS server

2. On your left pane, click into the local folder you want to synchronize

3. On your right pane, click into the remote folder you want to synchronize

4. Go to the "Tools" ribbon &  click on the "Synchronize" button

5. Confirm the parameters of the synchronization by looking at the Sync preview & making any necessary adjustments

6. When everything is set, click the "Synchronize" button

7. FTP Voyager will automatically make all necessary transfers & deletions to synchronize your folders & all their subfolders


FTPVoyager_SyncUtility.png

How to Get Email Alerts When Files Arrive

 

1. Open FTP Voyager & then open your Site Profiles

2. Select the server that you want to watch & select "Copy to Scheduler" from the menu
3. Open FTP Voyager Scheduler
4. Create a new transfer task
5. Add a "Download" action that points to your expected file
6. Reopen your "Download" action & go to the "Events" tab
7. Add a "Send Email" event action to send you an email when the "File Downloaded" event occurs


Try It Yourself

 

To try out any of these recipes yourself, download FTP Voyager Free FTP Client today.

Josef Cilia of APS Bank in Malta kindly shared his experience using SolarWinds Storage Manager (STM) with us.   We got some great insight into how STM is helping Josef keep his systems up and running and be more proactive in managing his systems with visibility all the way from servers and fabric to his arrays.  Josef’s feedback is provided below.


JC: Since I’ve installed Storage Manager, I can say that I have better peace of mind regarding my storage environment. Storage Manager not only helps me tackle problematic issues on my storage but also provides me with forecasts/projections for my storage (it tells you when it will reach 80%, 90% and 100%), immediate response by email whenever there are problems, plus I can also monitor the status of servers such as the HBAs, memory, CPUs, disks and network.


Basically I’m monitoring 3 enterprise storage arrays, namely an HP EVA4400, an HP EVA4100 and an IBM DS3524 (85TB of RAW data in all), together with 6 Brocade switches and 21 servers (up until now!).

 

SolarWinds: Since you implemented STM, what insight has it provided to you?


JC: Among the many helpful things that I have gained from installing this product is that I can drill down on every single LUN on my storage arrays and view detailed information such as total IOs/sec, read/write IOs, MB written/sec. and read/write latencies. The same data is also available for disks and disk groups. All data is displayed in graphical format.


Another simple but important thing is that I can check the redundancy on my controllers. I’ve never had such a visible picture where I can view the load on my storage controllers. It is helpful when it comes to viewing the top 10 LUNs by total IOPs, top 10 LUNs by total latency, top 10 LUNs by reads, top 10 LUNs by writes. These can be viewed at a glance. On my storage, I set rules to alert me whenever there are read access delays, write access delays, disk queue lengths, and when threshold usage is greater than 95%.


With regards to my fabric, now that I installed Storage Manager I can monitor the switch as a whole or individually port by port. This data is shown in a single screen where I can view the status of the ports (online/offline) or all the zoning on that particular switch. This tool also monitors the board temperature, fans status, power supplies status. I also set rules on these switches in such a way that if a port goes offline it alerts me immediately by sending an email. I have also set another threshold to alert me when there are errors/sec. generated on ports.


I’m also monitoring most of my servers connected to my fabric. Although this tool gives detailed information on the performance on the CPUs, disks, memory and network (which is more helpful to the server administrator), I manage the files stored on my storage meaning I can view which files are large and are using my storage, which files are orphaned and can be removed from the storage, duplicate files and what type of files are stored on my storage such as MP3s, images, DB files and many others. I run scheduled reports to extract such information from my storage and present these reports to my management on a monthly basis. I set some notifications on servers to alert me such as whenever there is high CPU or memory usage.  Another useful thing about Storage Manager is that servers do not require any restart after installing or uninstalling STM agent on them.


I use STM reporting for monthly progress reports which I present to the management. These reports are very easy to extract and they can be extracted in no time where in the past I had to spend about 3 hours to issue those reports. Another useful feature about this product is the way notifications are configured. Nowadays, I’m not wasting my time fire-fighting problems but now I’m able to be more proactive.


SolarWinds: How many issues have you found proactively that would have otherwise ended in a service performance issue or outage?

 

JC: Two within a month.

 

SolarWinds: Would you recommend SolarWinds Storage Manager and if so, why?


JC: Definitely yes. STM is helping me to forecast my storage, be more proactive before problems arise, have better visibility of all my SAN, have a visibility of the performance of my storage in a single screenshot, and much more.

Network administrators are tasked with deploying advanced network services, maintaining network performance, and reducing costs with fewer resources.  One of the biggest factors impacting your network performance is network traffic and bandwidth usage. By understanding how to take advantage of the flow technology that is built into routers and switches, IT professionals can monitor, troubleshoot and solve bandwidth related problems. Join us in this free webcast where our NetFlow experts will help you understand some cool tips and tricks to make the most of the NetFlow data from your networking devices.


The Agenda

In this webcast, we’ll dive deeper into NetFlow, discuss day-to-day networking challenges, and highlight common use cases that will help you better leverage flow technology and its applications to troubleshoot many networking problems. This webcast covers some key topics, including:

  • Introduction to NetFlow and other flow technologies
  • Configuring your network to collect flow data
  • Some everyday use cases for effective network monitoring
    • Troubleshooting network issues
    • Anomaly detection
    • Tracking cloud performance
    • Monitoring the impact of BYOD traffic
    • Monitoring Quality of Service (QoS) and Type of Service (ToS)
    • Capacity planning
  • How SolarWinds Network Performance Monitor and NetFlow Traffic Analyzer can help


When is the webcast?

Register for this free webcast and learn new tips and tricks to become an expert at understanding and implementing NetFlow in your enterprise networks.


APAC

Tuesday, 21 May 2013

11:00 AM SGT, 01:00 PM AEST

REGISTER NOW

 

North America/Latin America/EMEA

Thursday, 23 May 2013

10:30 am CT

REGISTER NOW

This article presents a use case related to monitoring users and endpoints connected to the device ports that pass through VOIP phones on a network.

 

Many IT teams manage bandwidth consumption for a network that includes a VOIP telephone system. VOIP-enabled network switches often allow each VOIP phone to connect another device--typically a desktop or laptop computer--to the network. Both the phone and the device connected through it request and receive DHCP leases, and both are considered as being connected directly to the switch.

 

Should an endpoint connected through a VOIP phone become compromised (through a Trojan virus, for example), the network switch directly connected to the compromised device becomes vulnerable. Though a firewall is often set up with rules that block the egress of packets a Trojan-infected endpoint attempts to send back to its homing station from within the network, the risk and possible repercussions of sensitive information getting out remain. Hacking and securing networks are an endless contest of innovating tactics.

 

Monitoring and the Importance of Response Time

 

Managing bandwidth on your VOIP-enabled network probably already involves watching and shaping traffic in response to call quality indications and alerts. In fact, SolarWinds Network Performance Monitor, NetFlow Traffic Analyzer, and VOIP & Network Quality Manager interoperate to give you a very granular view of bandwidth allocation and consumption to support VOIP quality of service.

 

However, while you may be able to see that specific endpoints are hogging bandwidth through a particular application, you will not necessarily be able to see who, internally, is contributing to a spike. For that you also need to correlate MAC and IP addresses, and user activity, with the traffic problem. SolarWinds User Device Tracker provides resources for tracking user logins with the MAC and IP of the device being used. Additionally, if monitoring tools reveal a breach in security, an IT team member can use UDT to remotely shut down any network device port that might be compromised.
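For illustration only, shutting down a switch port ultimately comes down to an SNMP write against the interface's ifAdminStatus object, roughly as in the following sketch using the pysnmp library. This is not how UDT performs the action internally; the switch address, write community, and ifIndex are placeholder assumptions.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, Integer, setCmd)

SWITCH = "192.0.2.10"      # assumption: management IP of the access switch
COMMUNITY = "private"      # assumption: SNMP write community
IF_INDEX = 17              # assumption: ifIndex of the compromised port

# ifAdminStatus (1.3.6.1.2.1.2.2.1.7): 1 = up, 2 = down
oid = ObjectIdentity("1.3.6.1.2.1.2.2.1.7.{}".format(IF_INDEX))

error_indication, error_status, error_index, var_binds = next(
    setCmd(SnmpEngine(),
           CommunityData(COMMUNITY),
           UdpTransportTarget((SWITCH, 161)),
           ContextData(),
           ObjectType(oid, Integer(2))))   # 2 = administratively down

if error_indication or error_status:
    print("SNMP set failed:", error_indication or error_status.prettyPrint())
else:
    print("Port ifIndex", IF_INDEX, "shut down on", SWITCH)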

According to the Merriam-Webster online dictionary, cybersecurity is a noun that means the “measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack.” Forbes.com offers a list of cybersecurity threats, which include:

 

  • Scams through social networks such as Facebook and LinkedIn, in which perpetrators may try to friend or follow someone to get information
  • Advanced Persistent Threats (APTs) breaching embedded networks with sophisticated, difficult-to-detect attacks to gain information
  • Bring Your Own Device (BYOD) trend, which can leave networks open for attacks when BYODers don’t follow proper security regimens


What You Can Do to Protect Your Networks

 

Consider the Observe, Orient, Decide, and Act (OODA) loop as one of your tools to combat cybersecurity threats. Military strategist USAF Colonel John Boyd developed this methodology as part of the combat operations process, but it can also be easily applied to cybersecurity. According to Boyd, decision-making happens as part of a recurring cycle of observe-orient-decide-act. An individual or an organization that processes this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can get inside the opponent's decision cycle and gain the advantage.

 

In the case of cybersecurity, the ability to observe and react to threats more rapidly than the attacker significantly enhances your network security. Observation is cybersecurity’s foundation. Monitoring and collecting network performance and event log data can provide important information on what is happening on the network – especially if anything unusual is happening.

 

Orientation shapes the way you observe networks, systems, and applications. For example, you can correlate network traffic with device and application log data to identify the sources, destinations, and generators of network intrusions.

 

After observation and orientation, you can formulate a hypothesis and decide on the correct course of action, based on the data you have gathered and the overall risk management profile of your organization.

 

Finally, you can act on the cybersecurity threat. Action may consist of taking automated actions to respond to the threat or it may include actions you perform, such as ensuring implementation of the latest software patches and updates.

 

Whatever you do to combat cybersecurity threats, make sure you gather feedback and document the response process, so you can ensure improvement, speed, and process repeatability. Having the right tools and information is also key to success in battling cyber threats. To find out more about tools and strategies for defending your network, see the SolarWinds whitepaper, Cybersecurity - A Practical Approach to Actionable Intelligence.

Every now and then we hear about botnet attacks, one of the most significant network security threats that organizations face today. What are they, and how are they a security threat?

A botnet comprises a bunch of computers that are under the control of a single “botmaster” machine, commonly known as a command and control server. In most cases, it all begins when a user downloads a bot program unsuspectingly. For example, this could happen when a user accidentally clicks an infected email attachment.

 

Once the bot gets installed, it contacts a public server controlled by the botmaster, using various communication protocols including IRC, HTTP, ICMP, DNS, SMTP, and SSL. It is also very difficult to detect botnets, given their highly dynamic nature and ability to evade common security measures.

botnet_blog_thwack.jpg

 

The Impact:
A botnet attack can take different forms.
     • It may launch a denial of service (DoS) attack on servers, bringing down sites; most attackers deploy UDP, ICMP, and TCP SYN floods, and some even use application-layer attacks.
     • It may infect several systems with spyware, steal data, or send out huge volumes of spam.
     • Some botnets include embedded programs that identify vulnerable servers which can be repurposed to host phishing sites. These mostly imitate banking sites in order to steal passwords and other personal customer data.

 

How to shield yourself:
It is indeed a tricky task to locate the botmaster. Proxy connections, as well as the control plane, are often changed to make it nearly impossible to track down the botmaster. Traditional packet filtering and port-based techniques are no longer effective. One core area that you need to focus on is your DNS logs.  Botnets often bank on DNS hosting services to point a subdomain to IRC servers taken over by the botmaster.  Typical botnet code tends to have hard-coded references to a DNS server, and you can spot these by deploying a DNS logfile analyzer tool. By identifying these services, you can revise your policy definitions, baseline your IT environment for vulnerabilities, and shield it.
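As a rough sketch of that DNS log review, the following Python snippet counts queried domains in a plain-text DNS log and flags names whose parent domain appears on a watch list. The log file name, the regular expression, and the watch-list entries are all assumptions you would adapt to your own DNS server's log format and your own threat intelligence.

import re
from collections import Counter

# Assumptions: a plain-text DNS query log where each line contains the
# queried name, and a short list of dynamic-DNS / suspicious parent domains.
LOGFILE = "dns_queries.log"
SUSPICIOUS_PARENTS = {"no-ip.example", "dyndns.example", "freedns.example"}

domain_re = re.compile(r"query:\s+(\S+)")   # adjust to your DNS server's log format

queries = Counter()
with open(LOGFILE) as log:
    for line in log:
        match = domain_re.search(line)
        if match:
            queries[match.group(1).rstrip(".").lower()] += 1

for domain, count in queries.most_common():
    parent = ".".join(domain.split(".")[-2:])
    if parent in SUSPICIOUS_PARENTS:
        print("Possible C&C beaconing: {} queried {} times".format(domain, count))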

 

Also, the event log analyzer needs to correlate activities across your environment and use active responses to respond to critical events, shutting down threats immediately. Thus you can stay proactive and handle threats in real time, rather than being reactive.

Build your own Widget

Posted by Bronx May 13, 2013

For those of you who have read my posts, you'll know that I love to tinker and share my "tinkerings" with you. To date, we have the following toys:

Today's lesson will be simple: "Build a Generic Widget." To do this, you will need to have VB.net installed. If you have not installed it already (shame on you), then follow lessons 1 and 2, found here. Once you have that set up, come back here and we will build a simple widget...in this case, a real-time stock ticker.

ticker.png

This is simple to create because I cheated (cheating is allowed if it's for yourself). I could have (and have done so before) had the ticker read and parse RSS feeds to produce the same result, but why? My cheat is simple. The above ticker comprises two elements: a web browser and a picture box. That's it! Here's how I did it:

 

First, I added a web browser from the toolbox. Next, I resized it and offset it beyond the edge of the form (essentially cropping everything except the stock price on that web page). I've highlighted the web browser going beyond the bounds of the form.

pic2.png

Below the web browser control is a Picture box control. At this point, I set the URL property of the web browser to "https://www.google.com/finance?client=ig&q=NYSE:SWI" which is SolarWinds real-time stock price. I then set the picture box control to a chart of SolarWinds' stock price. So the code looks like this (Note the code for both subs is the same. We're putting the same code in the Form_Load event so we don't have to wait ten seconds to retrieve the data as dictated by the timer):

 

Public Class Form1

    ' Holds the URL of the chart image shown in the picture box.
    Dim market$

    Private Sub Form1_Load(sender As Object, e As System.EventArgs) Handles Me.Load
        On Error Resume Next ' Ignore download errors so the form still loads.
        ' Chart image of SolarWinds' (SWI) stock price.
        market$ = "http://chart.finance.yahoo.com/t?s=SWI&lang=en-US&region=US&width=300&height=180"
        ' Download the chart and display it in the picture box named "dow".
        dow.Image = New System.Drawing.Bitmap(New IO.MemoryStream(New System.Net.WebClient().DownloadData(market$)))
        dow.SizeMode = PictureBoxSizeMode.StretchImage
    End Sub

    Private Sub timDow_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles timDow.Tick
        On Error Resume Next ' Ignore transient network errors between refreshes.
        ' Same code as Form1_Load: refresh the chart each time the timer fires (every 10 seconds).
        market$ = "http://chart.finance.yahoo.com/t?s=SWI&lang=en-US&region=US&width=300&height=180"
        dow.Image = New System.Drawing.Bitmap(New IO.MemoryStream(New System.Net.WebClient().DownloadData(market$)))
        dow.SizeMode = PictureBoxSizeMode.StretchImage
    End Sub

End Class

 

Steps to create this:

  1. Add a web browser to the Form
  2. Set the web browser's ScriptErrorsSuppressed property to True and its ScrollBarsEnabled property to False.
  3. Set the web browser URL property to "https://www.google.com/finance?client=ig&q=NYSE:SWI"
  4. Add a picture box to the form positioned below the web browser and name it "dow"
  5. Add a timer to the form and call it "timDow"
  6. Set the timer to be Enabled
  7. Set the timer's interval property to 10000 (10 seconds)
  8. Tighten the form to make it small
  9. Set the form property, Topmost to True.
  10. Set the Text property of the form to what you want the caption to read.
  11. Paste the code above in your project and you're done! Just run and/or build it.

 

You will have to play around with the positioning and sizing to get everything just right. Now imagine what you can build if you change the web addresses to something else, or if you added more picture boxes. You can do anything!

Microsoft® SharePoint has become a critical deployment in organizations, serving as a collaboration and document sharing tool. SharePoint has become so essential in enterprises that it’s also widely being deployed in virtual environments. Virtualized or not, we expect the service to be up and running at all times. This is perfectly fine as long as there are no faults and issues causing application downtime or slowdowns.

 

So, what is there in a SharePoint architecture that could go wrong, especially when it’s virtualized, and what do you need to monitor?

 

The Application Infrastructure

  • The SharePoint application itself running on your physical server or a virtual machine (VM)
  • The operating system where SharePoint is installed
  • The server that’s running SharePoint, and its underlying hardware components
  • Other system processes and services running that may bog down the performance of SharePoint (e.g., SQL, active directory, etc.)

 

The Virtualized Infrastructure

  • Performance of the VM that runs the SharePoint application
  • Resource contention issues between VMs – CPU, memory and storage utilization
  • Datastore IOPS, to see if storage read/write operations between the VM and the datastore are going well, and if other VMs sharing the datastore are slowing down the IOPS
  • Performance of the host and the host cluster

 

Each of these is a critical entity to monitor and can contribute to application and service failure, and certainly more IT management headaches. To monitor this complex framework, you need the depth and visibility into the performance metrics from the virtualized application to the virtual infrastructure.

 

SolarWinds Virtualized Application Performance Pack (VAPP) combines the power of performance monitoring across your application and virtual environment, and gives you flexibility to view critical performance statistics on a single pane of glass. 

  • Part of VAPP, SolarWinds Server & Application Monitor (SAM) will gather SharePoint performance metrics, and alert information in one place. You can drill down to gain visibility into additional detailed performance data. You can also customize user-friendly dashboards to meet specific SharePoint monitoring needs.

 

 

SAM has out-of-the-box component monitors that allow you to monitor queued SharePoint requests, request wait time, cache API, worker process and the SharePoint service metrics such as search, text indexing, trace output, sending notifications and performing scheduled tasks.

 

  • Alongside your application health information, you can import VM performance metrics into a SAM dashboard from SolarWinds Virtualization Manager (part of VAPP) to provide detailed and granular VM, host, cluster and datastore statistics that may impact the performance of the virtualized SharePoint application.

 

For example, you can see:

    • Storage I/O on the VM and/or datastore for the application nodes
    • Storage capacity and usage for the applications on the VMs
    • CPU usage (% Used, MHz, CPU Ready) on the VM
    • Memory (% Used, Ballooning, Swapping) on the VM

And more…

 

VM Widget on SAM.PNG

 

Some Tips for Creating a Custom Widget & Importing It into SAM

This Virtualization Manager Dashboard Integration video walks you through the process of embedding a Virtualization Manager dashboard widget in the SAM interface.

 

 

A similar process can be used to embed a Virtualization Manager widget into any other location that can display HTML.  A summary of the steps is also provided below.

 

Export from Virtualization Manager:

  • Create a widget on Virtualization Manager dashboard with VM/host/cluster/datastore statistics and charts
  • Copy the widget’s HTML code to be imported into SAM

 

Import into SAM:

SolarWinds SAM gives you the ability to add custom HTML resources to its dashboard on any page/view. 

  • Select the node/application/component details page, where you want to place the widget
  • Customize the page, add a Custom HTML resource wherever you desire on the page
  • Paste the HTML code from Virtualization Manager into this custom HTML section to populate the widget and its performance data in real time

Memory Utilization.PNG

 

Download Virtualized Application Performance Pack today, and monitor your virtualized SharePoint application and its virtual infrastructure. Stay ahead of performance bottlenecks!

Any large enterprise that uses a Windows environment uses Microsoft Active Directory (AD). With just a single sign-on, users can access their computers, group accounts, email, VPN network, shared drives, etc.

 

Now, there are issues when it comes to monitoring and managing your AD, linked to users, IT admins, hardware, and applications. Some of the most common AD issues you may encounter include:

Domain Controller: When a drive that contains the New Technology Directory Service (NTDS) files runs out of disk space, the domain controller stops working. This ultimately results in user authentication and access failure. In turn, this leads to applications failing when they are queried against AD.

Log-on: The computer may not authenticate users and services if there is a log-on failure. This restrains the domain controller from registering Domain Name System (DNS) records. It is important to maintain a secure channel between computers and the domain controllers.

Replication: User files and folders can get locked if they are not synchronized with the file servers that use file replication services. If shared folders do not replicate properly, group policy objects and other security policy objects may not be applied to the client systems.

User Account: Users can get locked out of their accounts if the Primary Domain Controller (PDC) emulator is unavailable or if several domain controllers experience a replication failure.

 

7 Metrics for AD Monitoring

To proactively detect performance issues, here are 7 key metrics you want to consider monitoring within your Active Directory domain.

1. Directory Services: Monitoring directory services is critical to ensure addresses, email, and phone contacts are always in sync.

2. Domain Controllers: Monitoring domain controllers will let you know whether CPU usage has reached its threshold, whether a user account is locked out, or whether there is a log-on issue. Set thresholds and monitor the drive that contains the NTDS files; monitoring this prevents the drive from running out of disk space and keeps the domain controller functioning.

3. Service Outages: All new alerts in each domain controller have to be monitored on an on-going basis to avoid any type of service outage. This could be within DNS servers and clients, servers and workstations, distributed file systems, intersite messaging, etc.

4. Lightweight Directory Access Protocol (LDAP) Client Sessions: Monitoring the NTDS object counter will indicate the number of clients connected in LDAP sessions. It also provides statistics on other performance measures, such as the speed and response times of particular sessions.

5. Mission Critical Processes: Monitor critical processes to check whether the system/server is able to handle all processing requests.

6. Replication: Monitoring replication shows if there is a failure on a replication link or if there is an issue with the network leading to slow replication rates between sites.

7. Reporting: Generate reports to gain visibility into critical processes and to consistently track the services and alerts that go down over a period of time. Reporting may also include authentication failures for failed log-ins, the number of logged-in users for a given period, etc.

 

SAM_5-0_DIRECTORY SERVER & LDAP MONITORING_Base_EN.png

 

Server management software helps you keep a close eye on directories and services in your AD. Working continuously and proactively, server management software will alert you to warnings or critical malfunctions inside AD, its servers, services, directories, and applications. Be sure your AD domain is well covered at all times, with the help of powerful server and application monitoring software.


Play

Posted by einsigestern May 9, 2013

My wife is a former clinical psychotherapist in private practice. Although I've been interested in the individual response to events and how those responses affect our worldviews, I'd never given the subject the deep thinking it deserves. When we were dating, I asked my future wife to dinner and a movie. When she found out the movie was "Terminator," she demurred. I asked why. She said that the emotions you experience while watching a movie about (pick a subject) are the same emotions you would experience in reality, in your everyday living. The same neurotransmitters circulate throughout your body.


Think of the shower scene in "Psycho." Alfred Hitchcock was once asked why he didn't portray the frightening scenes in his movies with more graphic images. He replied (paraphrased), "The images the audience creates in their heads are far more frightening than those I could produce on the screen." Hitchcock knew that our imagination, our response to situations, real or artistic representations of reality, are powerful. Orson Welles used this technique in his noir productions. He employed novel camera angles, unexpected lighting, and dramatic presentation to move us emotionally...and we bought tickets to the experience of it.

More to the point, our reaction to any given stimulus is our decision. We own it. We have complete control over it.

Enter Dan Gilbert and his explanation of "Why We Make Bad Decisions." Watch the TED Talk video later if you wish. The condensed version of his concept is this: favorable events measurably do not affect us as positively as we imagine they will, and negative events do not affect us as negatively as we imagine they might. People who win big in the lottery, and people who suffer the tragic loss of a loved one, experience life much differently than their minds predict.

Shawn Achor has developed a similar concept with an emphasis on happiness. As John Cleese (Monty Python) once said, "If you want your people to be creative, you must allow them to play." Google, with its 20% time policy, has incorporated this concept to great effect. Atlassian, with their off-the-chart sense of humor, has Ozified it.

Achor actually codified the concept. His TED Talk, "The Happy Secret to Better Work," is worth viewing, right now. I dare you to watch it without smiling.

Right from the first well-publicized international security incident on ARPANET in 1986, there has been a rapid evolution in the requirements of network security. In a previous post, we discussed what threats are and how they can take a toll on your organization’s IT security. Now, let’s look at external threats specifically. The majority of threats tend to be external, comprising all possible external sources that try to gain unauthorized access to your organization's networks using the Internet or any other networks.

 

The most common attack is the Denial of Service (DoS) attack. Consider a scenario where your network is flooded with large volumes of access requests; it may be left unable to respond to the legitimate ones. DoS attacks often use a technique called buffer overflow, by which web servers are overloaded, causing a denial of service.

 

It may further result in:

  • Slow network performance
  • Non-availability of a particular website
  • Inability to access any website


In the current scenario, external threats have morphed from network level threats like intrusions and DoS attacks into much more sophisticated content-based threats. Let us look at some of these:

  • Malware: Code or software that is specifically designed to damage, disrupt, or inflict some other illegitimate action on data, hosts, or networks. Viruses, worms, Trojans, and spyware fall into this category. They may come in attractive packages that appear to be from legitimate websites but end up stealing sensitive information.
  • Hacking: It’s all about exploiting the vulnerabilities in your network. Application-specific hacks, in particular, are becoming more threatening than ever. They use advanced SQL injection, which forces a database to yield otherwise secure information by causing it to confuse classified data, such as passwords or blueprints, with information intended for public consumption, such as product details or contacts (a brief illustration follows this list). You may want to keep a hawk’s eye on your application event logs.
  • Spam: All unwanted online communications belong to this category. There are two main types of spam:

          Usenet spam is mostly targeted at people who read newsgroups, where readers normally don’t give away their personal and contact information. It can also disrupt system administrators’ ability to manage the topics or content their newsgroups accept.

          Email spam is targeted mostly at individual users with email messages in large numbers.

  • Phishing Attempts: These are fraudulent attempts to breach your systems and access data. BFSI is the most targeted sector, especially banking customers, who tend to get emails apparently from their bank requesting passwords or other log-on data. With sophisticated phishing techniques, users can also be directed to deceptively real but counterfeit banking websites and tricked into sharing confidential information.
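To illustrate the SQL injection item above, here is a tiny, self-contained Python example (using sqlite3 purely for brevity) showing how concatenating user input into a query leaks every row, while a parameterized query treats the same input as a harmless literal. The table and payload are made up for the demonstration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1'), ('bob', 'x2')")

user_input = "nobody' OR '1'='1"   # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL statement,
# so the OR '1'='1' clause returns every row in the table.
vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
print("concatenated query returns:", conn.execute(vulnerable).fetchall())

# Safer: a parameterized query treats the input as a literal value only.
safe = "SELECT * FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())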


With ever-increasing sources and targets of external threats, you need to have more sophisticated levels of intelligence to identify, analyze, and defend your IT infrastructure. It is advisable to group your network security audit into three layers:

  • Level 1: Scan the IT systems for suspicious activities by using intrusion prevention technology.
  • Level 2: Integrate your IT security defense with compliance management.
  • Level 3: Be equipped to execute real-time active responses to mitigate security threats as they are encountered.

 

Watch out for more coming your way.

Cisco® firewalls are prevalent in today’s enterprise networks. There’s a good chance you have one, or likely, multiple Cisco security devices on your network. But, do you have enough resources to ensure these devices are being managed effectively? How do you know there are no undetected loopholes or that your firewalls are not putting your network at risk?


Managing large numbers of devices, multi-vendor device complexities, growing firewall rulebases, change management issues, as well as internal and external compliance requirements, all add to the challenges faced by today’s security admins.


In this blog, we’ll look at some key pain points for Cisco firewall management.


1.      Complex NAT and ACL Rules

Traditionally, Cisco firewall management has meant learning a myriad of NAT syntactic variations like Static NAT, Static NAT with Port Translation, One-to-Many Static NAT, Dynamic NAT, Dynamic PAT, Identity NAT, NAT in Transparent Mode, NAT in Routed Mode, Twice NAT, etc.—you get the idea.

 


Access-lists (ACLs) also come in different forms, including standard and extended, as well as named and numbered. Standard access-lists are defined to permit or deny based on the source IP address of the packet. Extended access-lists define both source and destination IP addresses. Extended access-lists can also be defined to permit or deny packets based on TCP, UDP, or ICMP protocol types and the packet’s destination port number.


It’s also important to note that on Cisco PIX, ASA (pre 8.3), and FWSM, ACL rules are defined to the mapped (translated) IP addresses. Starting with Cisco ASA 8.3, the rules are defined on actual IP addresses (untranslated).


2.      Intricate Data Analysis

To isolate unused rules, rule usage data must be analyzed. In Cisco devices, usage analysis is based on access-list hit counts. Therefore, no log records from syslog are collected for this purpose. To isolate unused rules/objects, you must first identify those objects with least/no hits and then remove or edit as required.
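As a rough sketch of that hit-count analysis, the following Python snippet walks a saved copy of "show access-list" output from a Cisco ASA and lists the entries whose hit count is zero. The file name is an assumption, and the regular expression may need adjusting for your platform and software version; a firewall management tool automates this kind of cleanup analysis for you.

import re

# Assumption: output of "show access-list" from a Cisco ASA saved to a file.
# The ASA appends "(hitcnt=N)" to each access-list entry (ACE).
ACL_DUMP = "show_access_list.txt"

ace_re = re.compile(r"^(access-list\s+\S+.*?)\s+\(hitcnt=(\d+)\)")

unused = []
with open(ACL_DUMP) as dump:
    for line in dump:
        match = ace_re.match(line.strip())
        if match and int(match.group(2)) == 0:
            unused.append(match.group(1))

print("{} rules with zero hits (candidates for review/removal):".format(len(unused)))
for rule in unused:
    print("  " + rule)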


3.      CLI-based Configuration Management

Cisco device configurations are generally CLI-centric and viewed via SSH/Telnet. This can make for difficult and complicated management, especially when dealing with a large number of complex rulesets. Human errors/typos are quite common when dealing with the command line interface, which can lead to security holes being inadvertently opened or a service being rendered unreachable. Plus, the limited view provided by the CLI does not deliver a complete picture of how rule/objects are related. And, even though the configuration file can be downloaded as a text file for further investigation, management and troubleshooting remain difficult.


The Solution

Managing Cisco firewalls can be a daunting task, but it doesn’t have to be. The right firewall management tool can make all the difference.

 

 

Security admins need a tool in which they can automatically analyze firewall configurations and quickly identify security gaps, as well as view ACL and NAT information in an easy-to-understand manner. But it doesn’t stop there. They also need a way to streamline change management, optimize performance, and ensure compliance is maintained.


The right tool should enable security admins to:

 

  • Ensure regular rule/object cleanup tasks that optimize performance of the firewall.
  • Generate analysis reports for a selected firewall and identify rules/objects that should be removed or revised
  • Create new configuration files or clean-up scripts that can be edited and applied as necessary.
  • Troubleshoot issues by using packet tracing methods to trace the path of a specific packet that is currently being blocked or dropped.
  • Avoid errors in rulebases by testing and evaluating the effect of rule changes before applying them to the production environment.
  • Utilize a user interface (UI) that displays configuration files in an intuitive way for convenient analysis and troubleshooting.

 


SolarWinds Firewall Security Manager (FSM) can help simplify firewall management and make the security admin's job easier with its powerful automation and out-of-the-box support for Cisco and other leading firewall vendors and devices.

Whether you've been using FileZilla® as your main FTP client for a month or five years, there comes a point when you realize it can only do so much—and it's looking a bit long in the tooth. Yes, FileZilla supports multiple protocols and, yes, it's free, but a few other options have come along since FileZilla first appeared on the scene in 2001. Here are three desktop alternatives that feature modernized interfaces, free transfer schedulers, or free synchronization tools.

Ripe_Bananas_FTP_Client.png

 

Free FTP Client Alternative for Windows®

 

SolarWinds' own FTP Voyager® handles the same protocols as FileZilla: FTP, FTPS and SFTP. However, FTP Voyager also includes a free transfer schedule service and synchronization utilities. As a native 64-bit app using the same type of ribbon interface now seen in every Office® application, FTP Voyager feels particularly at home on Windows 7 desktops and laptops.

FTPVoyager_240.png

 

To get your own free copy, download FTP Voyager now.

 

Is FTP Voyager Really a Free FTP Client? 

 

Although FTP Voyager was previously sold for about $50/copy, we decided to re-release it as a free tool in 2012. So, yes: it's completely free, and it's the full version.

 

Free FTP Client Alternative for Mac®

 

If you transfer files from Mac desktops, you may want to consider an OS X client called Cyberduck. If you don't mind the "get a donation key" banner, you'll have a desktop client that supports FTP, SFTP and FTPS, just like FileZilla. The only thing you may have to get used to is the single-pane, drag-and-drop interface, but those of you with only one mouse button are probably already familiar with that. 

 

cyberduck_240.png

 

Free FTP Client Alternative for Firefox®

 

If you run Firefox as your primary Web browser, you can download and install a plug-in called "FireFTP" that provides the same traditional side-by-side transfer windows as FileZilla. Like FileZilla, FireFTP supports FTP, SFTP and FTPS connections. However, since it's entirely browser-based, it does not help with scheduled or command line-driven transfers. 

 

fireftp_240.png


SAM and SQL

Posted by Bronx May 8, 2013

In this post, aLTeReGo points out what we're working on for the next release of SAM. One of the major improvements he mentions is SQL database monitoring. Once this release is out in the wild, I'm sure all of you DBAs and sysadmins will be very happy; however, today this new feature is not quite finished cooking. In the meantime, I'll share a tip, courtesy of fellow thwackian, rhrland2021.

 

Say you're using SAM to monitor your SQL server(s), and, like many shops, you have multiple instances of sqlservr.exe. You want to monitor them individually, so you can see which instances/DBs might be chewing CPU at inopportune times. In this example, I have four instances I want to monitor - call them DB1, DB2, DB3, and DB4. By using command-line filtering when setting up the application monitor, I can instruct SAM to monitor specific instances. When viewing Task Manager on the SQL server, we see that each of the separate instance executables uses a switch to specify the instance - something like -sDB1 is found at the end of that invoked instance. When I enter that switch into the command-line filter of my SAM template, I get a nice, accurate list of the specific instances and their respective memory and CPU use. Easy! Bigger shops in crazy I/O environments can use this as a first stop when examining basic SQL health.
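If you want to eyeball the same thing outside of SAM, a quick way to see which -s switch each running sqlservr.exe was started with is to pull process command lines with wmic, as in the short Python sketch below. The parsing is illustrative only; adjust it if your instances are started with different arguments.

import re
import subprocess

# List every running sqlservr.exe with its command line; the -s switch at the
# end of each command line names the instance (e.g. -sDB1), which is the same
# string you can use in SAM's command-line filter.
output = subprocess.check_output(
    ["wmic", "process", "where", "name='sqlservr.exe'", "get", "CommandLine"],
    text=True)

for line in output.splitlines():
    line = line.strip()
    if not line or line.lower() == "commandline":
        continue
    instance = re.search(r"-s\s*(\S+)", line)
    print("instance:", instance.group(1) if instance else "(default)", "|", line[:80])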

 

Here's a sample snip that shows me how these procs are doing:

sql2.png

Greetings All!

 

I'm in the Big Apple, New York City, this week ... doing a presentation Wednesday morning at the Cybit Expo. Cybit is a "computer forensics" show, which focuses on Cyber Security and IT Security. My presentation is titled "Sharing Without Sacrifice: Managing Systems and Mitigating Risk in Today's Virtual Military" and I'll be talking about how to effectively monitor and manage virtual systems within the realm of military deployments, both mobile as well as stateside.

 

In addition to the presentation at CyBit, I'm scheduled to do some interviews with some media outlets, and we'll be talking about Cybit, my presentation (and whatever else those media outlets want to explore) ;-)

 

If you're in the New York City area, let me know at @LawrenceGarvin (http://twitter.lawrencegarvin.com) or LinkedIn (http://linkedin.lawrencegarvin.com) -- or just post a reply to this blog right here on Thwack.

I'll be monitoring all channels and I'd love to have a chance to meet you in person.

 

If you're not in NYC this week, I'll also be in the SolarWinds booth at TechEd North America (New Orleans) the first week of June or Cisco Live! (Orlando) the last week of June.

 

And don't forget your mom's on Sunday!!!!! :-)


One of the most basic yet overlooked functions of help desk software is the ability to receive the problem from the affected user, register it, and relay the issue to help desk staff. Sounds simple, doesn’t it? More often than not, it is this ‘communication’ requirement of help desk software that creates the disconnect between logging the initial problem request and tracking the resulting ticket to closure.

 

So, let’s take a look at the key aspects of communication that matter most for successful help desks.

 

Top 5 Requirements

 

1. Email to Ticket Conversion

A user’s ability to create a problem ticket just by sending an email saves him/her from having to log on to the help desk console, select the request category, type the problem statement, and then submit it. Help desk software must support critical email protocols such as IMAP, POP and Exchange to allow conversion of incoming emails into service desk tickets.
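As a bare-bones illustration of email-to-ticket conversion (not how Web Help Desk implements it), the sketch below uses Python's standard imaplib to turn unread messages in a mailbox into minimal ticket records. The host name and credentials are placeholders; a real help desk product also handles Exchange, attachments, threading, and spam filtering.

import email
import imaplib

# Placeholders: point these at your help desk mailbox.
IMAP_HOST = "mail.example.com"
USERNAME = "helpdesk@example.com"
PASSWORD = "changeme"

def fetch_new_tickets():
    """Turn each unread message in the inbox into a minimal ticket dict."""
    mailbox = imaplib.IMAP4_SSL(IMAP_HOST)
    mailbox.login(USERNAME, PASSWORD)
    mailbox.select("INBOX")

    tickets = []
    status, data = mailbox.search(None, "UNSEEN")
    for msg_id in data[0].split():
        status, msg_data = mailbox.fetch(msg_id, "(RFC822)")
        message = email.message_from_bytes(msg_data[0][1])
        tickets.append({
            "requester": message["From"],
            "subject": message["Subject"],
            "received": message["Date"],
        })
    mailbox.logout()
    return tickets

if __name__ == "__main__":
    for ticket in fetch_new_tickets():
        print("New ticket:", ticket["subject"], "from", ticket["requester"])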

 

2. Two-Way Email Correspondence

Any form of communication is effective and complete only if it’s bidirectional. Two-way email communication lets help desk professionals keep the end-user up-to-date on ticket developments at all times. Whenever a technician adds notes, changes status, or reassigns a ticket to different help desk personnel, all of these changes should be automatically communicated to the end-user via bidirectional email correspondence.

 

3. SMS Alerts

Generally, help desk tickets and associated SLAs are defined by ranked priorities. At times, tickets that are labeled as lower priority can get pushed down to the bottom of the stack and, in a busy help desk environment, be unintentionally overlooked. This creates two problems: (1) an understandably irate end-user, and (2) a big pile of unresolved tickets. To prevent this pile-up, unassigned or incomplete tickets should be brought to the help desk’s attention via simple SMS alerting within the help desk software.

 

4. Email Ticket Management

Having an email ticketing system doesn’t do much good if it can’t keep spam and blocked senders at bay, or can’t be configured to automatically authenticate end-users and related content. These assurances can only be achieved if the help desk software allows you to manage and configure filters that allow ‘accepted’ domains while filtering emails based on sender, subject, and body content.

e-mail.png

5. SolarWinds Web Help Desk

In an ideal help desk scenario, identifying the problem, reporting the issue, and getting it resolved would seem like straightforward requirements. However, you already know how difficult this process can become at times when rigid ticketing software is in use or if IT personnel are unfamiliar with more complicated help desk platforms.

 

SolarWinds Web Help Desk provides highly customizable, yet simplified email ticketing and management features. Web Help Desk not only facilitates the key communication requirements mentioned above, but also goes further by addressing escalations and providing customer satisfaction surveys so you’re always ahead of the game. At the end of the day, efficiency matters the most!

 

So, take a test drive today to see how Web Help Desk software makes managing email tickets a practical, efficient, and streamlined process.

All of us own bank accounts and/or use credit cards, don’t we? So chances are you’ve heard the term “PCI compliance” quite often. Do you know what it means? We keep hearing the term every now and then as more data breaches happen, especially because payment processors are hacked so often.

 

If you are a banking firm or a merchant in particular, you need to be compliant with the Payment Card Industry Data Security Standard (PCI DSS), a set of comprehensive requirements developed by the major card brands to facilitate the adoption of consistent data security measures.

 

The PCI DSS contains 12 requirements grouped into six areas: build and maintain a secure network, protect cardholder data, maintain a vulnerability management program, implement strong access control measures, regularly monitor and test networks, and maintain an information security policy.

Sounds quite complex, right? Not at all. What if you had someone explain it in simple words? What if someone could clarify your doubts? And what if it was ABSOLUTELY FREE?

 

Yes, SolarWinds and Loop1 are doing it for you!! In this webinar, you will get a full understanding of what PCI DSS is, how it works, and how effortless it is to secure your transactions!!

 

WHEN: Wednesday May 8th 11am CST.

Register Now >> https://www1.gotomeeting.com/register/771665041

Using the Serv-U Management Console on an iPad


What is Serv-U? Serv-U File Transfer Server provides a secure managed file transfer solution that gives you the ability to access files on the go through secure mobile access.

You can deploy Serv-U on Windows or Linux and you can also access the Management Console on an iPad.

 

The following Management Console functions are available on the iPad:

  • Resetting passwords and unlocking users.
  • Monitoring current activity via statistics and logs.
  • Viewing user activity and clearing sessions.
  • Granting additional access to users, groups or entire domains.
  • Configuring user, group, folder, protocol and server settings.

The only Management Console functions not available on the iPad are those that require specific files to be uploaded. These functions include:

  • Importing public SSH keys for SFTP authentication. (Creating or selecting existing SSH keys is supported on the iPad.)
  • Uploading logos for web client branding.

How to Connect to Serv-U with an iPad

  1. Make sure your iPad is running iOS v5 or greater. (how to check)
  2. Make sure you are running Serv-U MFT Server - our Serv-U FTP Server package does not support remote administration. (how to check)
  3. Connect to any HTTP or HTTPS listener on Serv-U with the Safari web browser on your iPad. (By default, these listeners are bound to all of Serv-U's IP addresses on TCP ports 80 and 443, respectively.)
  4. Sign on as an administrator to see the main screen of the Serv-U Management Console.

 

For more information about Serv-U and its other great features, see RhinoSoft (Rhino Software, Inc.) - Home of Serv-U and FTP Voyager

Dealing with poorly performing applications running on virtual machines is not a new issue in our jobs. We know it would be a problem if a virtualized application were to fail, to be unavailable during a critical business process, or to dip below key performance thresholds.

 

The complexity of this whole scenario is caused by the lack of visibility and data on the health of the virtualized environment, including the applications running on the VMs, operating systems, datastores, as well as the VMs themselves.

 

Your Microsoft® Active Directory™ application running on a VMware® ESXi host could experience poor performance because the application itself had an issue, or the VM running the application had an issue. The question is: How do you get this insight into what went wrong?

  VAPP Image.png

 

SolarWinds introduces the all new Virtualized Application Performance Pack (VAPP) that gives you a world of visibility into all the metrics and performance data to alert and report on faults, issues and failures. You can now monitor the entire virtualized application stack and the other elements of the virtualization platform.

 

SolarWinds VAPP is the combined solution offering of two robust monitoring and management tools – Server & Application Monitor and Virtualization Manager. Double the tools, double the power.

 

 

Double Power of Virtualized Application Performance Pack:

  • Dedicated and graphical virtualization performance dashboards to drill down into your VMs to analyze performance
  • Map & correlate VM performance problems to datastores, hosts & clusters
  • Identify the source of CPU, memory & storage contention between VMs
  • Monitor performance & user experience for virtually any application – Microsoft® Exchange, Active Directory®, IIS, any ODBC database & more
  • Monitor server hardware faults & operating systems across platforms including Windows®, Unix and Linux
  • Free up more resources for your critical application by identifying & controlling sprawl & unused VMs
  • View application, server & deep virtualization data all from a centralized user interface

 

There’s no more system admin vs. virtualization admin hassle, pointing fingers at each other over a downed service. Gain the power and means to identify the source of application and/or VM problems by yourself. Save precious time and effort during troubleshooting and problem resolution!

 

Download Virtualized Application Performance Pack today and gain deep visibility into all your virtualization and application troubles.

You might not know it, but SolarWinds has a product to manage your IT infrastructure directly from your mobile device, called Mobile Admin. Mobile Admin comes in two parts: a free mobile client and a purchasable server. You can download the mobile client on iOS, Android, and BlackBerry devices. The Mobile Admin server installs on a Windows box, and integrates your IT management tools - such as Active Directory, SolarWinds Orion, or CA Service Desk - into a single UI that you can access from your mobile client.

 

Going back to the mobile client, it has three very nice, free features.

MA_free_stuff.png

 

SSH

You can open a secure shell connection using the mobile admin client. If you're connecting to a computer on the corporate network, don't forget to open a VPN connection first.

ssh.PNG

 

Telnet

Oh, telnet, you IT workhorse, you. Someday you might get to retire to greener fields.

telnet.PNG

 

RDP

Who really wants to open an RDP session on your phone? But if you have to do it, the Mobile Admin client provides the option. Obviously, this option works best on a tablet.

rdp.PNG

As a side note, you can't connect to a 2012 or Win8 box through RDP yet.

 

Pro-tip: There are more "keys" on those toolbars - just scroll to the right or left.

 

There you have it - three reasons to use the Mobile Admin client on your mobile device.

Dell recently announced end of support for Quest Patch Manager (formerly ScriptLogic Patch Authority Ultimate) to be effective May 31, 2014.  It’s not surprising as Dell’s acquisition of Quest brought duplicative capabilities for patch management.  Dell provides patch management capabilities (based off software developed by Lumension) as part of its Dell KACE product, which provides a large suite of capabilities from patch management to configuration management to desktop virtualization, service desk and a lot more.

 

What is surprising about the end of support announcement is there is no mention of migration strategy.  And from what I hear from our sales team, it looks like Quest customers are on their own when it comes to finding a supported patch management solution to replace Quest Patch Manager.  This is also not surprising because going from a point product to a large solution (KACE) might be cost prohibitive from an upgrade and maintenance pricing perspective.

 

If you are a Quest Patch Manager/Patch Authority Ultimate customer, I invite you to take a look at SolarWinds Patch Manager.  SolarWinds Patch Manager integrates with both WSUS & SCCM.  What customers really like about this offering is its ability to patch at discrete times to support large, geographically distributed environments, to automate the patch approval process, its quick delivery of 3rd-party update packages, and the product's function-to-value ratio (yeah, it’s a great bang for your buck). 

 

See what our customers have to say about SolarWinds Patch Manager or try it free for 30 days in your own environment.

Everyone is talking about the hacking incident that happened last week to LivingSocial®, the daily deals site. And why wouldn’t they? The hackers gained access to customer data on LivingSocial's servers, including emails and encrypted passwords. Although the company believes the passwords are encrypted and would be difficult to decode, more than 50 million of its users have been asked to reset their passwords.


Now, does encryption save you?

Encryption is all about transforming information using an algorithm to make it unreadable to anyone except those possessing special knowledge. So, if a third party possesses the knowledge to decrypt it, your information is no longer safe!
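
To make that point concrete, here is a minimal Python sketch using the third-party cryptography package (purely illustrative, not anything LivingSocial used): whoever holds the key can reverse the encryption, legitimate service and attacker alike.

# Illustrative only: encryption is reversible for anyone who holds the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the "special knowledge"
cipher = Fernet(key)

token = cipher.encrypt(b"customer email: user@example.com")
print(token)                         # unreadable without the key

# Anyone possessing the key -- a legitimate service or an attacker -- can reverse it.
print(cipher.decrypt(token))         # b'customer email: user@example.com'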


Here’s a very informative video by one of our good friends, Javvad Malik, who explains password encryption on a humorous note.


It is worth appreciating the engineers at LivingSocial for adding a cryptographic salt, since it forces password-cracking programs to guess the plaintext for each individual hash rather than attack tens of millions of hashes at once. But if they really wanted to keep the information secure, choosing the SHA1 algorithm ahead of bcrypt, scrypt, or PBKDF2 wasn’t a great move.
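
For illustration only (this is not LivingSocial’s code), the sketch below contrasts a salted SHA1 hash with a bcrypt hash: SHA1 is designed to be fast, so each cracking guess stays cheap even with a salt, while bcrypt’s configurable work factor makes every guess deliberately slow.

# Illustrative sketch: salted SHA1 vs. bcrypt (requires the "bcrypt" package).
import hashlib
import os
import bcrypt

password = b"correct horse battery staple"

# Salted SHA1: the salt defeats precomputed (rainbow table) attacks,
# but each guess is still a single cheap hash operation.
salt = os.urandom(16)
sha1_hash = hashlib.sha1(salt + password).hexdigest()

# bcrypt: the work factor (rounds) makes every guess slow,
# which is what actually limits offline cracking.
bcrypt_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(sha1_hash)
print(bcrypt_hash)
print(bcrypt.checkpw(password, bcrypt_hash))  # True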

 

The entire approach has been reactive, when they could have been proactive, watching with eyes wide open. This is where your endpoint security needs to lead from the front.
It’s not just about protecting the servers and devices within your network; it’s also about your end users.

 

This is the time when you turn to Security Information and Event Management (SIEM). SIEM combines two different areas: SIM and SEM. SIM (Security Information Management) gathers security logs and creates reports from them, while SEM (Security Event Management) uses event correlation and alerting to help with the analysis of security events.


To stay ahead of the curve, you can use SIEM security software, which acts as a central collection point for device data, automatically aggregating and then normalizing that data into a consistent format. From there, anomalies and security threats can be identified quickly and easily, which helps you respond to suspicious events.


In most cases, enterprises use correlation with security-specific devices such as IDS/IPS devices, firewalls, and domain controllers to take a proactive approach to network security. Going a step further, an event log analyzer understands the relationship between different activities by correlating multiple events in real time, which makes troubleshooting security issues far more effective.
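
As a rough illustration of what that normalization and correlation look like under the hood, here is a small sketch; the log format, field names, and three-failures-in-sixty-seconds threshold are simplified assumptions, not any vendor’s schema.

# Illustrative sketch: normalize raw log lines into common records, then flag
# repeated authentication failures from the same source IP in a short window.
import re
from collections import defaultdict
from datetime import datetime, timedelta

RAW_LOGS = [
    "2013-05-01T10:00:01 sshd[99]: Failed password for root from 10.0.0.5",
    "2013-05-01T10:00:03 sshd[99]: Failed password for root from 10.0.0.5",
    "2013-05-01T10:00:04 sshd[99]: Failed password for root from 10.0.0.5",
]

def normalize(line):
    m = re.match(r"(\S+) \S+ Failed password for (\S+) from (\S+)", line)
    if not m:
        return None
    return {
        "time": datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S"),
        "event": "auth_failure",
        "user": m.group(2),
        "source_ip": m.group(3),
    }

# Correlate: three or more failures from one IP inside 60 seconds -> alert.
events = [e for e in (normalize(line) for line in RAW_LOGS) if e]
by_ip = defaultdict(list)
for e in events:
    by_ip[e["source_ip"]].append(e["time"])

for ip, times in by_ip.items():
    times.sort()
    if len(times) >= 3 and times[-1] - times[0] <= timedelta(seconds=60):
        print("ALERT: possible brute-force attempt from " + ip)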

 

Now, the take-away from the LivingSocial incident, and the immediate fix, is that users should not only change the password for their LivingSocial account but also make sure they are not using the same password on other sites. They should also understand that this is not optional.

 

Stay secure!!!

Log data is a record of all the transactions and information that goes through your networks. Companies generate enormous amounts of log data every day.

 

SolarWinds Log & Event Manager (LEM) collects, stores, and normalizes log data from a variety of sources and displays that data in an easy-to-use desktop or web console for monitoring, searching, and active response. Data is also available for scheduled and ad hoc reporting from both the LEM Console and the standalone LEM Reports console.

 

Mistake number 1 - Not monitoring your collected logs until you have a major incident

You’ve installed your new LEM software. Your job is done, right? Nope, sorry, but someone has to monitor the collected logs to learn whether any events occurred and to proactively spot when a similar event may happen again. Use LEM Reports to view or schedule fixed reports for compliance purposes to:

    • Produce compliance reports
    • View reports based on specific regulatory compliance initiatives
    • Provide auditors with proof that you are auditing log and event data
    • Schedule formatted reports for LEM Reports to run and export automatically

 

Also, your organization may have to look at logs for auditing purposes. HIPAA regulations require medical organizations to establish an audit process. Ensuring data security is vital in business, especially in any business that stores and transmits cardholder data. Any company with access to cardholder data must ensure that it is in compliance with the standards set by the Payment Card Industry Data Security Standard (PCI-DSS). If a company is found to be non-compliant, it may face large fines and even have its credit card processing abilities restricted.

 

I’ll discuss other mistakes commonly made when new to LEM in future blog posts.

nfsm.png


Wake up Neo…

 

The Matrix has you…


While Neo (aka Thomas Anderson) may indeed be the Wachowskis’ best fictional character ever created, SolarWinds Firewall Security Manager is no work of fiction. It is a great piece of software that makes the challenging job of managing firewalls much less like staring at a screen filled entirely with 0’s and 1’s (or rather .255’s and .192’s) and more like… something beautiful!


Now it gets weird(er)…let’s assume for this “Exercise” that Neo wrote Firewall Security Manager one night on the Nebuchadnezzar shortly after having taken the red pill and survived a rather shocking awakening…


Enter the Construct: Weapons, Clothes, Ammo, Training programs. While Neo was brushing up on his Drunken Boxing style, he chose to use some synthetic packets to determine the network exposures on the layer 3 devices running in the Nebuchadnezzar, greatly enhancing his ability to retain everything he learned during his 10-hour Kung Fu infusion.


Enter the Sparring Program: Hitting Morpheus became much easier after Neo recalled that FSM easily integrates with NCM, thus providing a transparent view into Morpheus’s seemingly superior network configuration…


Enter the Jump Program: letting go of fear, doubt, and disbelief… “Whoa” seems an appropriate response. Now let’s assume Neo had already run a Security Audit report on the firewall that was gating the Jump Program; clearly Neo would have learned about the incorrectly mapped NATed addresses that, interestingly enough, cause everybody to fall the first time…


Search and Destroy: Inside the “real” world the Sentinels do one thing… search and destroy. Tank desperately needed a clear understanding of how the Sentinels’ traffic was controlled by security rules. Tank should have known that FSM assumes a “Perimeter” firewall when creating Zone Definitions Preferences… thank goodness all they had to do was power down and hold their breath…


Inside the Matrix, they are the gate keepers: The Agents are everywhere; it’s likely that when Cypher chose to open a direct line to the location of “The Oracle”, Agent Smith had run a Traffic Flow Compare on the open line in order to see if any traffic had been added or deleted as the result of a rule change.


I can only show you the door, you have to walk through it:  The Oracle is clearly well informed about her Security Audit Reports; she can safely determine whether dangerous services are allowed from the DMZ to the internal zone. Breaking a vase apparently doesn’t qualify.  Mmmm, those cookies do smell good.


There is no Spoon: Clearly the child has been cheating and has recently run a Cleanup and Optimization Report, identifying unused and structurally redundant rules, thus allowing him to trick Neo into thinking that taking a different perspective was actually going to do a darned thing.


They’re in the walls: Unfortunately, even the Agents knew way in advance that the last rule in your firewall configuration should deny all unmatched traffic. Apoc and Switch could have benefited from that information.


You are a plague, and we are the cure: Even Agent Smith wants to be free; perhaps he overlooked the fact that all PCI controls that address firewall policies can be evaluated by FSM.


Dodge This: Well said, Trinity. FSM supports firewalls and Layer 3 network devices, which comes in SUPER handy when you need to learn how to fly a chopper in .25 seconds… Perhaps FSM is “The One.” I dare say that with FSM in place, you too can begin to believe… that managing your firewall can be done efficiently, effectively, and with unexpected simplicity.


Thanks for the read folks, be sure to check out the FSM Demo to see it in action!

Managing and monitoring a single IT operation can be a pretty complex job, and we make lots of great tools to help simplify that process. But what do you do when you’re a Managed Service Provider (MSP) with multiple independent IT operations to oversee?

 

There are a couple of different deployment scenarios for MSPs to take advantage of our Server and Application Monitor (SAM) product, as well as its sibling products, including the following:

· IP Address Manager (IPAM)

· Network Configuration Manager (NCM)

· Network Performance Monitor (NPM)

· Netflow Traffic Analyzer (NTA)

· User Device Tracker (UDT)

· VoIP and Network Quality Manager (VNQM)

· Web Performance Monitor (WPM)

Centralized Deployment

The Centralized Deployment model is based on a single, centrally located SolarWinds server running SAM (or one or more of the other products listed above) and configured to monitor/manage nodes remotely.

SolarWinds Centralized Deployment.jpg

 

There are two ways in which remote nodes can be monitored from a central server.

 

The central server can connect directly to the remote nodes and a number of protocols and ports may be needed to support this. Two commonly used protocols are Windows Management Instrumentation (WMI) and Simple Network Management Protocol (SNMP). An SNMP connection works well in this fashion for network devices, but WMI connections for servers are a bit more complicated because of the dependencies on Remote Procedure Calls (RPC). If the remote nodes are within the same enterprise, then RPC connectivity may not be an issue, but RPC is rarely capable of traversing a firewall connection. If you are managing a remote network via an always-on Site-to-Site VPN connection, then RPC/WMI may be possible across the VPN. Alternatively, enabling SNMP on the servers can provide a methodology for monitoring.
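
As a quick, product-agnostic sanity check before pointing a central poller at a remote node, you can test whether the TCP port that WMI depends on is even reachable. The sketch below uses a placeholder host name; note that SNMP’s UDP port 161 can’t be verified with a plain TCP connect.

# Illustrative connectivity check (not part of any SolarWinds product).
# TCP 135 is the RPC endpoint mapper that WMI depends on.
import socket

def tcp_port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

remote_server = "server01.branch.example.com"  # hypothetical remote node

if tcp_port_open(remote_server, 135):
    print("RPC endpoint mapper reachable - WMI polling may work")
else:
    print("TCP 135 blocked - consider SNMP or a local poller at that site")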

 

A second option is to deploy pollers to the remote sites. Poller remotability is useful when there is a large number of nodes to monitor on a remote network. It offloads the WMI or SNMP traffic from the site-to-site connection by performing those tasks locally and then relaying the information back to the central SolarWinds server. That information is sent directly to the instance of SQL Server supporting the main SolarWinds server, so there are firewall and port considerations with this method as well. As a result, this methodology is not well-suited to the MSP scenario, but it may work well within a single multi-site enterprise. There are also latency issues with the database communication to be aware of.

Decentralized Deployment

The Decentralized Deployment model provides some advantages over the Centralized Deployment model by eliminating the challenges involved in supporting RPC/WMI or SQL traffic over a site-to-site network.

SolarWinds Decentralized Deployment.jpg

 

In this model, independent SolarWinds servers running SAM (or other products) are installed at the remote sites, and the Enterprise Operations Console (EOC) is used to provide a centralized, aggregated view of the individual remote sites. In addition, the EOC can be customized on a per-operator basis, so operators who are responsible for only some sites can have their view limited to just those sites. The EOC can also be filtered by the particular SolarWinds products in use. For example, Network Administrators may wish to focus on the content provided by NPM, NCM, IPAM, and NTA, while Systems Administrators can focus on the information provided by SAM.

Hybrid Approach

Finally, if you’ve implemented multiple SolarWinds products in this family, you can also choose a hybrid approach. You can implement one or more products with a centralized model and others with a distributed model, and regardless of the model used, the EOC can connect to all of them.

 

The diversity of deployment options makes monitoring and managing independent customer sites a breeze for MSPs who implement SolarWinds products. For more information about the ways SolarWinds can help MSPs manage customer operations, please visit the website for SolarWinds Managed Service Provider Software.

Is your auditor getting under your skin?  What if you told her that she could have all the logs from ALL your routers, ALL your servers and many other devices for the past few years - would that keep her out of your hair for a while?


Kiwi_Syslog_Makes_Auditors_Smile.gif


Fortunately, SolarWinds offers a product, Kiwi Syslog Server, that allows you to hold on to logs as long as you want, and not a day longer, with individual devices or networks logging to their own set of managed log files.

Log Retention Best Practices

1) Plan to Keep All Your Logs for Several Years

Every industry is regulated differently, and businesses are often subject to different tax, liability and privacy regulations in different locations.   Some common recommended retention periods include: 


In most cases it is wise to plan to retain your logs for several years, with "seven years" serving as a safe common denominator. 

2) Draft and Approve a Retention Policy

A written, mandatory policy for document retention and destruction is standard operating procedure for publicly traded companies operating under Sarbanes-Oxley (SOX), but it is a good idea for other companies as well.  A written policy, approved by legal counsel and senior management, gives the IT department the requirements and authority to shape document retention, including logs.

There are many sample retention policies available online, such as this document retention policy template provided by the University of Wisconsin.

3) Automate Log Archival and Retention

To avoid manual mistakes and interruptions, you should automate every possible aspect of your log archival and retention process.

  • Collection: use Syslog or SNMP traps to collect logs from every possible source
  • Archival: set up your "to disk" logging rules to write a separate log file for each device and start a new file for each device each day
  • Retention: set up file compression rules to reduce the space used by logs after a few days, then use file deletion rules to automatically delete logs more than a certain number of years old (a scripted sketch of these two rules follows this list)
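
For readers who prefer to see the archival and retention logic spelled out, here is a minimal Python sketch of the same idea, independent of any particular syslog product. The folder paths and the seven-day/seven-year windows are illustrative assumptions; adjust them to your own retention policy.

# Illustrative sketch of daily archival and long-term retention rules.
import gzip
import os
import shutil
import time

LOG_DIR = "/var/log/syslog-files"        # hypothetical "to disk" log folder
ARCHIVE_DIR = "/var/log/syslog-archive"  # hypothetical archive folder
COMPRESS_AFTER = 7 * 24 * 3600           # compress logs older than ~7 days
DELETE_AFTER = 7 * 365 * 24 * 3600       # delete archives older than ~7 years

now = time.time()
os.makedirs(ARCHIVE_DIR, exist_ok=True)

# Archival: gzip-compress old logs and move them into the archive folder.
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    if name.endswith(".log") and now - os.path.getmtime(path) > COMPRESS_AFTER:
        with open(path, "rb") as src, gzip.open(
            os.path.join(ARCHIVE_DIR, name + ".gz"), "wb"
        ) as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)

# Retention: delete archived logs older than the retention window.
for name in os.listdir(ARCHIVE_DIR):
    path = os.path.join(ARCHIVE_DIR, name)
    if now - os.path.getmtime(path) > DELETE_AFTER:
        os.remove(path)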


How to Automate Log Retention with Kiwi Syslog Server

 

  1. Download and install Kiwi Syslog Server.
  2. Configure your routers, computers, applications and other sources to log to the syslog server.
  3. Split each source into its own file.  For each source:
    • Create a new Kiwi Syslog Server rule.
    • Add a "IP address" filter to the rule that matches the source's IP address.
    • Add a "Log to File" action to the rule to log to a specific file.
    • Use a file name that contains "%DateISO", such as "router_192-168-1-1_%DateISO.log", to get a different file for each day.
  4. Create a Kiwi Syslog Server Schedule that runs every day and moves old files into a compressed archive.
    • Create a new "Archive" schedule and set the frequency to "Day."
    • Point the source to your log folder.  Keep a file mask of "*.*" to select all log files. 
      • Set a file age of about seven days.  (Only keep what you need for current analysis.)
    • Point the destination to a separate log archive folder. 
      • Make sure the "Move files...", not the "Copy files..." option is selected.
    • On the "Archive Options" tab, check the "Zip files after..." option.
      • You may also want to increase the compression level.
    • On the "Archive Notifications" tab, you may want to set up an archive report. 
  5. Create a second Kiwi Syslog Server Schedule that runs every day and cleans out the archive folder.
    • Create a new "Clean-Up" schedule and set the frequency to "Day."
    • Point the source to your log ARCHIVE folder.  Keep a file mask of "*.*" to select all log files. 
      • Set a file age of about seven years.  (Keep as many years as your retention policy requires.)
    • On the "Clean-up Notification" tab, you may want to set up an archive report. 


To try this procedure in your own deployment,  download a free, full-featured trial today.
