
Geek Speak


We heard you wanted more deep-dive, technical training on your SolarWinds products, and we listened! We're pleased to announce the brand-spanking-new Customer Training Program for current (in-maintenance) customers. This is a totally free program that we are delivering as part of your maintenance (who else does that?!). Even though our products are very easy to use, we want to ensure every customer gets the most out of their products. Initially launching with four NPM classes, we're planning to grow this program substantially in 2014 to offer more topics on more products very soon.


All classes consist of both lab and lecture, so the lessons are very applicable and transferable to what you're doing on a day-to-day basis. Classes are hosted by a professional trainer, and class sizes are limited to ensure a quality learning experience.


To sign up for a class, you must be current on maintenance for at least one product - but it doesn't have to be the product you're taking the class on. So, (for example) feel free to sign up for an NPM class if you are a Toolset customer interested in learning more about NPM.


Where to Sign Up


You can sign up in the Customer Portal.



Current Classes


Currently, we have four NPM classes offered at various times on various dates. If the class you want is full, feel free to write us at CustomerVoice@solarwinds.com and we'll let you know as soon as we add new classes to the schedule.

SolarWinds NPM 201: Advanced Monitoring – Universal Device Poller, SNMP traps, Syslog Viewer

NPM 201 digs into some of the more advanced monitoring mechanisms available. We’ll get away from the “out-of-the-box” object configs and default monitoring settings to create a customized monitoring environment. If you have a good understanding of MIBs, OIDs, and SNMP (or would like to), this is probably the class for you.


SolarWinds NPM 202: Performance Tuning – Tuning, Remote Pollers, License Management

NPM 202 focuses on maximizing performance. This means tuning your equipment to optimize its capabilities, tuning your polling intervals to capture the data you need without bogging down the database with less critical data, and adding additional pollers for load balancing and better network visibility.  This class is great if your NPM could use a tuneup, or if you are considering expanding your deployment with additional licenses, polling engines, or web servers.


SolarWinds NPM 101: Getting Started – Maps, Users, Custom Views

NPM 101 will take a user from the initial install through customization and daily use. We cover the Orion core platform (getting used to NPM’s web interface), network discovery and adding devices, creating maps, adding users, and creating custom views.


SolarWinds NPM 102: Digging In – Advanced Alerts, Reporting, and More

NPM 102 dives into advanced alerts and reporting. We cover creating and managing custom alerts, alert suppression, device dependencies, and custom properties. We create and automate reports, and also show how to integrate those reports into custom views for easy, real-time access.

Comments by Training Participants

“This class was definitely worth my time. It provided me with lots of information and tactics to better manage my network. I look forward to the evolution of this training program because my job is always changing and I want to stay up-to-date with how SolarWinds NPM can help me with my network.”

Corinne Johnson


“The training program has definitely been worth my time. It has provided me with in-depth product information and tactics to help me monitor my network. I am looking forward to taking more classes from SolarWinds and exploring other products. My job is continually evolving and this new training program that SolarWinds has put together is helping me to maintain a competitive edge.”

Will Luther



“Like many others who took the class, SolarWinds NPM was an inherited product for me, so it was great to have this training course offered. The product is large and has a lot of great tools that my team and I were not using. We didn’t even know that SolarWinds NPM allowed you to map the network – we are definitely putting that to good use now thanks to this training program.”

Diana Teoh


We're ramping up this program, so watch this blog and the training page in the portal for more classes, on more products, at more times all throughout 2014 and beyond. And as always, let us know your requests at CustomerVoice@solarwinds.com.


Don't Forget About Customer-Only Trials!


And... don't forget about the benefits of downloading customer trials from the customer portal. You have access to every SolarWinds product with a streamlined evaluation experience including:


  • No need to fill out a registration form.
  • The download will not trigger emails about other products or offers.
  • Unless you reach out to us, we will only contact you at the beginning and midway through your trial.
  • If you have questions or need assistance with your evaluation contact customersales@solarwinds.com.

WiFi with 3D Vision

Posted by Meryl Wilk Jan 15, 2014

First came WiFi, that essential technology that keeps us online – provided we have one PC hooked up to the cable modem and router, and another PC with a wireless networking card.


Then came WiVi, which uses WiFi technology to “see” through walls to detect motion. According to www.popsci.com, the US Navy discovered radar when they noticed that a plane flying past a radio tower reflected radio waves. Much more recently, Massachusetts Institute of Technology (MIT) scientists applied this same idea to create devices that can monitor human (or possibly other) movement by tracking WiFi signal frequency changes in buildings or behind walls.


And now, we have WiTrack. MIT scientists have taken the WiVi idea a step further. The MIT article, WiTrack: Through-Wall 3D Tracking Using Body Radio Reflections, describes WiTrack as “…a device that tracks the 3D motion of a user from the radio signals reflected off her body…WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. It transmits wireless signals whose power is 100 times smaller than Wi-Fi and 1000 times smaller than cellphone transmissions.”


Applications for WiTrack


Applications for WiTrack are really varied, and include:


  • Security and law enforcement, from detecting intruders to avoiding or minimizing potentially violent situations, such as in battle or at a crime scene.
  • Rescue operations, for detecting motion inside hard-to-get-to places, such as collapsed buildings or avalanche sites.
  • Gaming, in which you can freely move about your home to participate in the fun. Imagine running down the hall and up the stairs as part of the gaming experience…
  • Monitoring any three-dimensional being who might need to be checked in on - from your new puppy, to your kids, to your great grandmother. MIT points out that a WiTrack monitoring system can do what current camera-based monitoring systems do without using cameras to invade anyone’s privacy.


Find Out More


For even more details on WiTrack, check out the video, WiTrack: 3D Motion Tracking Through Walls Using Wireless Signals. And for all the details on how WiTrack works, see the MIT paper, 3D Tracking via Body Radio Reflections.

CERT recently found a new type of DDoS botnet that infects both Windows® and Linux® platforms. This highly sophisticated, cross-platform malware uses infected computers to launch DNS amplification attacks.



A DNS Amplification Attack is a Distributed Denial of Service (DDOS) tactic that belongs to the class of reflection attacks in which an attacker delivers traffic to the victim of their attack by reflecting it off of a third party so that the origin of the attack is concealed from the victim. Additionally, it combines reflection with amplification: that is, the byte count of traffic received by the victim is substantially greater than the byte count of traffic sent by the attacker, in practice amplifying or multiplying the sending power of the attacker.[1]
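The amplification part is just a ratio: bytes arriving at the victim divided by bytes the attacker had to send. A quick illustrative sketch (the byte counts below are hypothetical examples, not measurements from this botnet):

```python
# Rough illustration of DNS amplification: a small spoofed query
# elicits a much larger response, which is reflected at the victim.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of traffic delivered to the victim vs. traffic sent by the attacker."""
    return response_bytes / request_bytes

# Illustrative sizes: a ~64-byte query can trigger a multi-kilobyte
# response when the returned record set is large (numbers are hypothetical).
query_size = 64
response_size = 3072

factor = amplification_factor(query_size, response_size)
print(f"Each attacker byte becomes ~{factor:.0f} bytes at the victim")
```

This is why reflection attacks conceal the attacker: the victim only ever sees traffic from the third-party DNS servers.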



On Linux systems, this botnet takes advantage of systems that allow remote SSH access from the Internet and have accounts with weak passwords. The attacker uses dictionary-based password guessing to infiltrate SSH-protected systems. While executing an attack, the malware reports back to the command-and-control server with the running task, the CPU speed, system load, and network connection speed.
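Because the Linux variant gets in through dictionary attacks on SSH, one simple countermeasure is watching your auth logs for repeated failures from a single source. A minimal sketch; the log lines, pattern, and threshold are illustrative, not taken from any specific tool:

```python
import re
from collections import Counter

# Hypothetical auth-log excerpt; real entries live in /var/log/auth.log or similar.
log_lines = [
    "Jan 20 10:01:12 host sshd[101]: Failed password for root from 203.0.113.7 port 4022 ssh2",
    "Jan 20 10:01:14 host sshd[102]: Failed password for admin from 203.0.113.7 port 4023 ssh2",
    "Jan 20 10:01:16 host sshd[103]: Failed password for test from 203.0.113.7 port 4024 ssh2",
    "Jan 20 10:05:00 host sshd[104]: Accepted password for alice from 198.51.100.9 port 50110 ssh2",
]

FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(lines, threshold=3):
    """Return source IPs with `threshold` or more failed logins."""
    counts = Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
    return {ip for ip, n in counts.items() if n >= threshold}

print(suspicious_sources(log_lines))  # {'203.0.113.7'}
```

A real deployment would of course also enforce key-based authentication and rate-limit SSH, but the point stands: the brute-force phase is very visible in logs.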



The Windows variant of the botnet installs a new service on target systems in order to gain persistence. First, the C:\Program Files\DbProtectSupport\svchost.exe file is installed and run. This file registers a new Windows service, DPProtectSupport, which starts automatically at system startup. Then, a DNS query is sent to the server, requesting the IP address of the .com domain. This domain is the C&C server, and the bot connects to it using a high TCP port, different from the one used in the Linux version. The Windows version of the malware also sends OS information to the C&C server in text format.


This botnet was discovered in December 2013, and after many tests, antivirus software was able to detect it more reliably on Windows than on Linux – putting Linux systems at higher risk of security compromise.


It's best to always gain real-time, actionable intelligence from your system and network logs so that you can detect any suspicious and unwarranted activity – which might be an indicator of a security breach!

With the number of security breaches continuing to increase every year, it is critical to take a closer look at the few things you can do from an IT security standpoint to minimize the risks. One of the key steps toward this is complying with industry-specific regulations like SOX and HIPAA/HITECH and having third-party organizations conduct audits of key systems and controls.


Why do audits matter?

Compliance with data security standards can bring major benefits to businesses of all sizes, while failure to comply can have serious and long-term negative consequences. Compliance involves identifying and prioritizing strategic objectives and managing the business across people, processes, information, and technology to realize those objectives. It also impacts day-to-day operations, which in turn affects troubleshooting and system availability.


Complying with IT regulations such as PCI DSS, GLBA, SOX, NERC CIP, and HIPAA requires businesses to protect, track, and control access to and usage of sensitive information. Let us have a look at some of the top reasons to audit:



You may be working with clientele spread across industries, and these audit reports really matter to them. For example, financial services organizations tend to request these reports at the beginning of every year, whereas healthcare groups need their audit reports later in the year for their own auditing purposes. These reports have a direct impact on their productivity, sales, and reputation.



Let us consider HIPAA compliance for example. The core of HIPAA compliance is to ensure protection of patient and employee data, while giving access to the right persons at the right times to do their day-to-day tasks.  Failure to comply with new regulations carries serious consequences for healthcare providers, including criminal sanctions, civil sanctions, financial fines and even possible prison sentences. The guidelines on violations include up to $1.5 million in penalties for breaches.



You need to have visibility over security & compliance, and protection of your data. To ensure this, you need to collect and consolidate log data across the IT environment and correlate events from multiple devices and respond to them in real-time. Conducting audits in a way sets up a benchmark to implement best practices and also ensures that your organization is in line with the latest technology trends.


As an interesting statistic, the number of targeted attacks is expected to increase in 2014, a forecast based on the continuously growing number of DDoS attacks over the last couple of years. Hackers might move away from high-volume advanced malware because the chances of it being detected are high. Still, lower-volume targeted attacks are expected to increase, especially those aimed at accessing financial information and stealing identities or business data.


With all these set to happen, it is advisable that you ensure more visibility on the devices on your network as a part of your information security measure. Compliance and compliance audit will definitely come in handy as you head further into 2014.


Stay secure my friends!!

How to Migrate Kiwi CatTools to Another Computer Along with Activities & Devices

1. From Start > All Programs > SolarWinds CatTools, click CatTools.

2. Now, go to File > Database > Export and export the devices and activities.

3. Save the exported files, which are in '.kbd' format.

4. Save the 'Variations' folder from: <directory>\CatTools\

5. If you are using CatTools 3.9 or higher, deactivate the current license using License Manager, which can be downloaded from here.

6. Install CatTools on the new system and license it.

7. Copy the following from the old system to the new system:

  • the exported '.kbd' files, and
  • the 'Variations' folder

8. Open the Activities file and ensure that all paths are valid. For example, if CatTools was previously installed to c:\program files\ and is now installed to c:\program files (x86)\, you will need to reflect this within the INI file.

9. Open the CatTools Manager > File > Import > import the two '.kbd' files.

10. Copy the 'Variations' folder to the new CatTools installation directory.

11. Restart the CatTools service.
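Step 8's path fix-up can be scripted if the exported files reference the old install path in many places. The sketch below is generic text substitution; the file name and both install roots are placeholders you would adjust for your own system:

```python
from pathlib import Path

# Placeholder paths; substitute the actual old and new install roots.
OLD_ROOT = r"c:\program files\CatTools"
NEW_ROOT = r"c:\program files (x86)\CatTools"

def fix_paths(ini_path: str, old_root: str, new_root: str) -> int:
    """Rewrite old install-path references in a file; returns how many were replaced."""
    p = Path(ini_path)
    text = p.read_text()
    count = text.count(old_root)
    if count:
        p.write_text(text.replace(old_root, new_root))
    return count

# Example: fix_paths("activities.ini", OLD_ROOT, NEW_ROOT)
```

As with any bulk edit, keep a copy of the original export so you can re-import if a path goes wrong.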

For more information about CatTools visit: Configuration Management and Network Automation | Kiwi CatTools


Wafer-thin flash drives

Posted by LokiR Jan 13, 2014

There's a new design concept out there for flash drives the thickness of a sticky note. The company, called dataSTICKIES, uses a relatively new material called graphene and a proprietary wireless data transfer protocol to achieve this wafer-thin thickness.



Now, graphene is my favorite new material; I've been waiting close to 10 years for someone to come out with a viable commercial application, and this is a pretty cool proto-product. Graphene is a form of crystalline carbon (essentially atom-thick graphite) that is super strong and an excellent conductor. Graphene research spans fields from medicine to energy to quantum science.



The dataSTICKIES company is using graphene to store data. Because graphene is an atom thick, the hard drive becomes a flat sheet. Instead of using USB to transfer the data, the company developed an optical data transfer surface to take advantage of the super thin material. This also makes transferring data easier since you no longer have to deal with the USB superposition effect (i.e., it takes at least three tries to connect the USB cable) or moving computers around to get to the USB ports.



Another cool thing with dataSTICKIES is that it looks like you can increase the data capacity by stacking stickies. I'm not sure how that's supposed to work, though, since each sticky is also supposed to function as a discrete drive.



These would be pretty awesome anywhere, but especially for people on restricted networks. Need to install some more pollers or every SolarWinds product you bought? Just slap a sticky on the computer.

Today is the last day of that annual ritual celebration of all things technological we know simply as CES. Thanks to CES, we can all be supremely disappointed in the otherwise simply amazing capabilities of the gadgets we all got just last month. You may even be reading this post on a device that was a star of a past CES. Ain't tech grand?


So, What Was at CES 2014?


As your humble docs writer for SolarWinds NPM, among other things, attending CES is not remotely related to my listed job requirements. As a SolarWinds geek, though, I do have a keen personal interest in the latest whiz-bangery showing up out in Vegas.


And a lot of whiz-bangery there is: 4K HDTV, 3D printers, 2-way hybrid "laptabs", and 1TB wireless hard drives. It's all stuff we should expect to see on our networks or in our homes soon. Thankfully, Network World has the rundown for those of you who, like me, weren't able to make it. I'm not sure I need a Bluetooth-connected toothbrush, but the personal hydrogen reactor and the robot drones look like a lot of fun. Of course, wearable tech was the thing this year, so I expect to be ordering my very own Dick Tracy watch in the very near future.


For those of you who were able to make it, what have you seen that the rest of us network-oriented geeks would find fascinating?

Network admins constantly face challenges when implementing security procedures and bandwidth optimization processes in their networks. Using a Virtual Local Area Network (VLAN) is one smart solution for effectively managing workstations, security, and bandwidth allocation. Although VLANs can be very useful, they can also present a lot of issues in huge enterprise networks. In this blog, we'll discuss some of the common challenges admins face when implementing VLANs and best practices for managing them. Before we dive into that, though, let's take a look at the basics of VLANs and how they work.


What Is a VLAN?

A VLAN is a logical group of workstations, servers, and network devices within a Local Area Network (LAN), grouped together regardless of where they physically connect. It allows communication among users who share a single broadcast or multicast domain, as if they were in a single LAN environment.


Why Do We Need VLANs?

The purpose of implementing a VLAN is to utilize security features and to improve the performance of a network. Assume you have two different departments: finance and sales. You want to separate them into VLAN groups for reasons such as tighter security (limited visibility to financial data), better bandwidth allocation (for VoIP calls in sales), and load balancing. In this case, VLAN would allow you to optimize network usage and map workstations based on department and user accounts.


Typical Challenges in VLAN and How to Manage them!

Although there are many benefits to implementing VLANs, there are also certain disadvantages. In a logically connected network, a high-risk virus on one system can infect other users in the same network. If you want users to communicate between two VLANs, you might need additional routers to control the workload. Controlling latency can also be more difficult in a VLAN than in a traditional LAN. Network administrators and managers can run into problems even after implementing a VLAN properly and efficiently. In a traditional LAN, it's easy to find out whether a network device is performing or not. However, understanding what's causing your network to run slowly in VLANs with virtual trunks or paths is a more difficult process. For instance, assume you want to configure a VLAN in your network. You can choose to separate users based on departments and enable security, but if you're creating networks within your physical switches, you also have to think about routing, DHCP, DNS, etc.


Network administrators effectively manage VLANs by taking a step back and understanding whether the number of VLANs is appropriate for the number of endpoints in the network. It’s also important to understand what data needs to be protected from other traffic using a firewall. In addition, VLANs can become more efficient when combined with server virtualization. In a virtualized data center environment, the VLAN brings the physical servers together and creates a route. By allowing virtual machines to move across physical servers in the same VLAN, administrators can keep tabs on the virtual machines and manage them more efficiently.


Managing a VLAN becomes much easier for network administrators when network traffic, user access, and data transfers are isolated and routed separately. It’s also highly recommended to ensure primary network devices work properly before troubleshooting VLANs.

Since I revisited the topic of AES encryption and NSA surveillance, the Washington Post has published information sourced through Edward Snowden that the NSA is spending $79.7 million to pursue a quantum computer capable of running Shor's algorithm. If and when the NSA succeeds, all the currently unreadable AES-encrypted data the agency routinely captures en masse from internet backbones and stores in its Bluffdale, Utah computing center would become readable.

To give some sense of the agency's ambition we need to talk about Schrödinger's cat.


Quantum Smearing


In Erwin Schrödinger's famous thought experiment a cat sits inside a Faraday Cage--a steel box from which electromagnetic energy cannot escape. Also in the box and inaccessible to the cat is a machine that contains: 1) some material whose probability of releasing radiation in an hour is exactly 50%; 2) a Geiger counter aimed at the material and rigged to release a hammer upon detecting any release of radiation; 3) a flask of poison positioned under the hammer. If the radioactive material releases radiation, the hammer smashes the flask, killing the cat.


In this box, however, as a quantum system, it is always equally probable that radiation is released and not released. According to the Copenhagen interpretation of quantum systems, with its idea of superposition, the cat in the box exists as a smear of its possible states, simultaneously both alive and dead; an idea Schrödinger, along with Einstein, ridiculed for being absurdly at odds with everyday life. Nobody has ever seen the material smear that is a Schrödinger cat*.


Qubits, or Herding Schrödinger Cats


David Wineland and team received their Nobel Prize in Physics in part for creating a very small Schrödinger cat. "They created 'cat states' consisting of single trapped ions entangled with coherent states of motion and observed their decoherence," explains the Nobel Prize organization in making its 2012 award.


Wineland developed a process to trap a mercury ion, cause it to oscillate within the trap, and then use a laser to adjust its spin so that the ion's ground state aligns with one side of its oscillation and its excited state aligns with the other side. On each side of the oscillation the ion measures 10 nanometers; the ion's two resting points in the oscillation are separated by 80 nanometers. And in effect, the mercury ion is guided into a "cat state" of superposition.


In this state the ion has both a ground and excited charge and so meets the physical requirement for serving as a quantum computing "qubit"; using the difference in spin, superposition allows the ion to be both 0 and 1 depending on where in its oscillation it is "read".


A quantum computer would be capable of breaking AES because a qubit is an exponential, not a linear, quantity. For example, in linear binary computing, using electrical-current transistors, 3 bits of data give you exactly one binary number at a time: 101, 001, 110, etc. In quantum computing, 3 qubits represent a quantity that is 2 to the 3rd, all 8 values at once.


So, to extrapolate how qubits scale, Wineland (43:00) offers this example: while 300 bits in linear computer memory may store a line of text, 300 qubits can store 2 to the 300th objects, holding a set that is much larger than all the elementary particles in the known universe. And qubit memory gates would allow a parallel processing quantum computer to operate on all of its 2 to the nth inputs simultaneously, making trivial the once-untouchable factoring problems upon which all currently known encryption schemes are based.
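Wineland's scaling example is easy to sanity-check with plain integer arithmetic (Python here is just a calculator):

```python
# 3 classical bits hold exactly one of 2**3 = 8 values at any moment;
# 3 qubits in superposition span all 8 at once.
print(2 ** 3)  # 8

# 300 qubits span 2**300 states, far more than the ~10**80 elementary
# particles commonly estimated for the observable universe.
print(2 ** 300 > 10 ** 80)  # True
```

The exponent, not the constant, is what makes factoring-based encryption schemes suddenly tractable.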


Big Black Boxes


The NSA project is underway in Faraday cages the size of rooms. Could that work proceed without the direct involvement of Wineland's award-winning NIST team? Even presuming that involvement, the technical challenges of going from an isolated and oscillating mercury ion to a fully developed quantum computing platform would seem to imply years not months of work.


This time next year we may know the answer to how long the project will take. In the meantime, we continue assuming that AES-encrypted data remains secure and that the SNMPv3-enabled tools for monitoring systems with secure data do not introduce breaches in the systems themselves.


* Schrödinger implicitly formalized his own feline paradox with a differential equation that calculates the state and behavior of matter within quantum systems as a wave function (Ψ) that brings together mutually exclusive possibilities.

By default, Storage Manager places the database and its install files on the same drive. Over time, the database will expand. It is important to verify there is sufficient disk space before performing an upgrade of Storage Manager. We must have twice as much free space as the largest table in the database. During the upgrade, Storage Manager builds temporary database tables from the actual database, and it is because of this that we must verify sufficient disk space on the drive.


MariaDB creates a number of different data files in the mariadb directory. The file types include:

  • .frm – format (schema) file
  • .MYD – data file
  • .MYI – index file


It is the *.MYD (data) file that we must check. Within the database directory, sort the files from largest to smallest, keeping track of the largest .MYD file in that directory.
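That check can be scripted. The sketch below finds the largest .MYD file in a data directory and compares it against free space on that drive, per the two-times rule above; the directory path is whatever your install uses:

```python
import shutil
from pathlib import Path

def upgrade_space_ok(data_dir: str) -> bool:
    """True if free space on the drive is at least twice the largest .MYD file."""
    myd_files = list(Path(data_dir).glob("*.MYD"))
    if not myd_files:
        return True  # no data files, nothing to stage during the upgrade
    largest = max(f.stat().st_size for f in myd_files)
    free = shutil.disk_usage(data_dir).free
    return free >= 2 * largest

# Example (Windows default location):
# upgrade_space_ok(r"C:\Program Files\SolarWinds\Storage Manager Server\mariadb\data\storage")
```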


The database can be found at the following location:


  • Windows - <installed drive>\Program Files\SolarWinds\Storage Manager Server\mariadb\data\storage  directory


  • Linux - <installed path>/Storage_Manager_Server/mariadb/data/storage directory




  • This is the default location of where the temporary database tables are created.


  • Storage Manager versions 5.6 and newer use MariaDB; previous versions use MySQL. For versions prior to 5.6, substitute MySQL for MariaDB throughout these instructions.


If you have insufficient disk space, you must point the temporary database tables to a drive that has sufficient space before upgrading. For more information on how to relocate the temporary database tables, please see the Storage Manager Administrator Guide.

A couple of years back, Gartner® released the results of a survey titled Debunking the Myth of the Single-Vendor Network [1]. The results showed that single vendor network costs and complexity would increase, while multi-vendor networks would continue to provide greater efficiency. Truth be told, most network admins nowadays manage multi-vendor network environments by deploying the most suitable and affordable devices in their networks.


Generally, when deploying devices in the network, network managers try to balance two important factors: the Total Cost of Ownership (TCO) and the operational efficiency of the devices. At a macro view, this trend may pave the way to build next generation enterprise networks, but this can also pose some operational risks. Deloitte (a professional services firm) also released the results of a survey they titled Multivendor Network Architectures, TCO and Operational Risk [2] that examines the operational, financial, and risk factors associated with the single-vendor and multi-vendor approaches in different types of enterprise networks. They claim that multi-vendor networks also have unique problems that need to be addressed.


Challenges Faced in a Multi-Vendor Network

From network administrators’ point of view, there are a few challenges they face while managing a multi-vendor network:


Performance Management – When admins manage a multi-vendor network, they have to collect data on different parameters like device status, memory status, hardware information, etc. It's a challenge to retrieve customized performance statistics because each vendor has its own unique OIDs. Without a network management system (NMS) tool that supports multi-vendor devices, it's nearly impossible to monitor all the devices by collecting and correlating network data.
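At its core, multi-vendor polling means keeping a lookup table of vendor-specific OIDs that the poller consults before each query. A toy sketch; the vendor OIDs below are examples for illustration, so verify them against your devices' MIBs before relying on them:

```python
# Standard MIB-II objects (like sysDescr) are vendor-neutral, but performance
# metrics such as CPU load live under each vendor's private enterprise subtree.
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"  # same on every SNMP device

# Example vendor-specific CPU OIDs (illustrative; check your MIBs).
CPU_OID_BY_VENDOR = {
    "cisco":   "1.3.6.1.4.1.9.9.109.1.1.1.1.7",
    "juniper": "1.3.6.1.4.1.2636.3.1.13.1.8",
}

def cpu_oid(vendor: str) -> str:
    """Pick the CPU OID for a device's vendor, or fail loudly if unknown."""
    try:
        return CPU_OID_BY_VENDOR[vendor.lower()]
    except KeyError:
        raise ValueError(f"no CPU OID known for vendor {vendor!r}")

print(cpu_oid("Cisco"))
```

A real NMS maintains exactly this kind of table for thousands of device models, which is why the choice of tool matters so much in a mixed environment.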


EOL/EOS Monitoring – In enterprise network management, managing EOL and EOS information is a huge task. Admins have to maintain a spreadsheet that tracks all the device details—from product number to part number and end-of-life dates. If you combine that with multi-vendor devices in an enterprise network, the result can create a huge burden of manual tasks for admins. To make this process easier, there are tools available that automate EOL/EOS monitoring and resolve basic issues for network admins.


Hardware Issues and Repair – Having to face hardware failure in multi-vendor networks usually means having to face prolonged downtime. Each hardware issue may need its own unique support and repair specialists, and managing repair for a large number of devices can become a huge headache for network admins.


Applying Configurations to Different Devices – Deploying and managing new configurations for different devices requires a lot of manual effort and time. And, if there's an issue with a configuration, admins have to manually Telnet/SSH into the device to make the change or fix the issue. Admins can use network configuration management tools to resolve these problems.


Expertise to handle different devices – Network admins need to know how to manage different devices in multi-vendor networks. Expertise with each vendor's command-line interface helps when operating devices and retrieving information.


To successfully manage a multi-vendor network environment, network admins can adopt a two-pronged strategy.

  1. One, admins have to figure out the best combination of Layer 2 and Layer 3 network devices. It can be as simple as ‘How can I increase the router boot time?’ to ‘How will this device support our business critical applications?’
  2. Two, find the right tool to manage your network, irrespective of the devices you deploy. There are only a few options in the market where organizations can deploy end-to-end single-vendor network devices, but the trend predominantly seems to favor multi-vendor environments, which administrators feel are more cost-effective.


Reduce Complexities!

While implementing solutions based on multi-vendor networks, ensure device configurations are set properly. Interoperability is essential. Admins can test deployment configurations before actual implementation; since each vendor may have a different interpretation of network standards, it's advisable to simulate first and deploy in the network later. If administrators are going with a multi-vendor network, they have to take certain precautions to achieve high stability. Using an NMS tool that supports different vendors will help in managing your network. Complexities like command-line interface (CLI) syntax can be replaced by tools that provide a simple user interface for day-to-day activities.


Diverse networking environments require more centralization and providing continuous network availability should be the top priority. Ensure smooth network operations by monitoring all the key network parameters. For instance, network issues can be solved by looking at information as simple as ‘node status’ to something very important like, ‘memory usage’. When administrators use NMS tools, they automatically retrieve key information from devices. This makes an admin’s job much easier. If you have an SNMP-enabled device, your NMS can automatically poll relevant information from the device and display it in a readable format.


Single Central Console to Monitor All Devices

It doesn’t matter if you’re managing a small or large network, achieving efficiency and reducing cost of maintenance should be an administrator’s goal. Heterogeneous network infrastructure can pose challenges when dynamic operations are performed across systems, but using an NMS tool that supports multiple vendors can definitely be helpful for engineers during their network creation and expansion process.



Providing access from nearly anywhere, Wireless Local Area Networks (WLANs) deliver a great deal of flexibility to business networks and their applications. It’s important to note that WLANs are also susceptible to vulnerabilities, misuse, and attacks from unauthorized devices known as rogue wireless devices. To safeguard company data and ensure smooth operations, it’s crucial to take steps to prevent, detect, and block unwarranted activity associated with these rogue wireless devices.

What are Rogue Wireless Access Points?

As more wireless devices are introduced into a network, more wireless access points and transmissions appear within the network's proximity. When this happens, new, previously unknown access points (APs), sometimes from a neighbor’s network, can show up in yours. These are rogue wireless access points. Often they are introduced unintentionally by employees; other times the source is a malicious actor who intentionally installs and hides an AP in order to gather proprietary information.

It’s tough to differentiate between genuine and rogue devices. But, no matter what the intent, all unauthorized wireless devices operating within the vicinity of the company’s network should be considered wireless rogue devices that could be opening up unknown access points.


Types of Rogues

Neighbor Access Points: Normally workstations automatically associate themselves with access points based on criteria like strong signals, Extended Service Set Identifier (ESSID), and data rates. As a result, there are chances that trusted workstations accidentally associate themselves with an AP located close to, but outside the company network. Neighboring APs may not pose an immediate threat, but they do leave your company information exposed.

Ad Hoc Associations: Peer-to-peer wireless connections involve workstations directly connecting to other workstations in the same network. This facilitates file sharing or sending documents to a wireless printer. Peer-to-peer traffic generally bypasses network-enforced security measures like encryption and intrusion detection, making it even more difficult to detect or track this kind of data theft.

Unauthorized Access Points: Basic models of access points are easily available in the market. The existence of an unauthorized and unsecured AP installed intentionally or otherwise becomes an easy backdoor entry point into the company network. These unauthorized APs can be used to steal bandwidth, send objectionable content, retrieve confidential data, attack company assets, or even worse, attack others through your network.

Malicious Workstations: Malicious workstations eavesdrop or passively capture traffic in order to find passwords, log in information, email addresses, server information, and other company data. These workstations pose very serious risks and can connect to other workstations and APs. They redirect traffic using forged ARP and ICMP messages and are capable of launching Denial of Service (DoS) attacks.

Malicious Access Points: Attackers can place an AP inside or near company networks to steal confidential information or modify messages in transit. These attacks are also known as man-in-the-middle attacks. A malicious AP uses the same ESSID as an authorized AP. Workstations receiving a stronger signal from the malicious AP associate with it instead of the authorized AP. The malicious AP then modifies the data exchanged between the workstation and the authorized AP. This poses a great business risk because it allows sensitive data to be modified and circulated.

The rogue wireless device problem is one of the primary security threats in wireless networking. Rogue devices can disclose sensitive company information that, if leaked, could be damaging to the organization. The first step in assessing and mitigating the business risk from wireless rogue devices is to detect them. Are you equipped to identify and detect rogue activity in your network?
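To make the detection step concrete, here is a minimal sketch (not from the original post) that compares access points seen in a wireless scan against an authorized list keyed by BSSID, the AP's MAC address. The BSSIDs, ESSIDs, and scan data are made up for illustration:

```python
# Minimal rogue-AP classification sketch: anything broadcasting near the
# company network that is not on the authorized list is treated as a rogue.

# Hypothetical authorized inventory: BSSID -> ESSID.
AUTHORIZED = {
    "00:1a:2b:3c:4d:5e": "CorpWiFi",
}

def classify(scanned_aps):
    """Flag APs with unknown BSSIDs; note ones impersonating our ESSID."""
    rogues = []
    for bssid, essid in scanned_aps:
        if bssid not in AUTHORIZED:
            # An unknown AP broadcasting our ESSID is the classic malicious
            # "evil twin" used for man-in-the-middle attacks.
            kind = "possible evil twin" if essid in AUTHORIZED.values() else "unknown AP"
            rogues.append((bssid, essid, kind))
    return rogues

scan = [
    ("00:1a:2b:3c:4d:5e", "CorpWiFi"),    # authorized corporate AP
    ("66:77:88:99:aa:bb", "CorpWiFi"),    # same ESSID, unknown BSSID
    ("de:ad:be:ef:00:01", "CoffeeShop"),  # neighbor AP
]
print(classify(scan))
```

Real wireless management tools apply the same idea continuously, with signal strength and location data added to separate harmless neighbors from genuine threats.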

Storage systems, like any other network or server hardware, are prone to bottlenecks and performance issues, and it’s the storage administrator’s job to keep this in check and triage issues. It’s a misconception that all storage bottlenecks arise from the storage disks. Other key components of the storage infrastructure, such as the storage controller, FC switches, and front-end ports, can go off course and, in turn, impact storage performance.

In this blog, we’ll look at some important factors causing performance bottlenecks in disk array controllers (aka RAID controllers).


What is a Disk Array Controller?

A disk array controller is a device which manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, and thus is sometimes referred to as RAID controller. It also often provides additional disk cache.[1]

The disk array controller is made up of three important parts which play a key role in the controller’s functioning and also give us indicators of storage I/O bottlenecks. These are:

  • CPU that processes the data sent to the controller
  • I/O port that includes:
    • Back-end interface to establish communication with the storage disks
    • Front-end interface to communicate with a computer's host adapter
  • Software executed by the controller's processor, which also consumes processor resources


These components can degrade the performance of the storage subsystem when left unchecked. Third-party storage management tools help provide this visibility, but as a storage administrator you should know which metrics to look at to understand what could go wrong with the disk array controller.


Common Causes of Disk Array Controller Bottlenecks

#1 Controller Capacity Overload: A disk array controller may be asked to support more resources than it can practically handle. Features such as thin provisioning, automated tiering, and snapshots put the controller through capacity overload, which can impact storage I/O operations. Operations such as deduplication and compression add still more load on the controller.


#2 Server Virtualization & Random I/O Workloads: Thanks to server virtualization, the disk array controller sees many more workloads than the single application load per host it handled in the past. With each connected host supporting multiple workloads and sending a steady stream of random I/O, it becomes much harder for the storage controller to locate the data each virtual machine is requesting.


Key Metrics to Monitor Disk Array Controller Bottlenecks

#1 CPU Utilization: Monitor the CPU utilization of the disk array controller closely. Capture CPU utilization data during peak load times and analyze what is causing the additional load and whether the storage controller can cope with the processing requirements.


#2 I/O Utilization: It’s also important to monitor the I/O utilization of the controller in two respects:

  • From the host to the controller
  • From the controller to the storage array


Together, these metrics let you spot when the disk array controller’s CPU utilization is excessive or when one of the I/O paths is nearing its bandwidth limit. You can then judge whether the storage controller can meet the CPU and I/O bandwidth demand with its available resources.
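As a toy illustration of turning these metrics into alerts, the sketch below applies simple thresholds to controller CPU utilization and the two I/O paths. The threshold values and sample figures are illustrative, not vendor recommendations:

```python
# Threshold check over the three utilization metrics discussed above.
# All limits and sample readings here are invented for illustration.

CPU_LIMIT = 0.85   # flag sustained controller CPU above 85%
IO_LIMIT = 0.80    # flag either I/O path above 80% of its bandwidth

def controller_alerts(cpu_util, host_to_ctrl_util, ctrl_to_array_util):
    """Return a list of alert strings for any metric over its threshold."""
    alerts = []
    if cpu_util > CPU_LIMIT:
        alerts.append("controller CPU saturated")
    if host_to_ctrl_util > IO_LIMIT:
        alerts.append("host-to-controller I/O near capacity")
    if ctrl_to_array_util > IO_LIMIT:
        alerts.append("controller-to-array I/O near capacity")
    return alerts

# Example: CPU is fine, but the back-end path is overloaded at peak load.
print(controller_alerts(0.60, 0.45, 0.92))
```

In practice, a monitoring tool evaluates checks like this against peak-load samples rather than single readings, so that brief spikes don't trigger false alarms.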


As George Crump, President of Storage Switzerland, recommends on TechTarget, you can address storage controller bottlenecks by:

  • Increasing processing power of the controller CPU
  • Using more advanced storage software
  • Making the processor more efficient by implementing task-specific CPUs. This allows you to move portions of code to silicon or a field-programmable gate array (FPGA) enabling those sections of code to execute faster and the system to then deliver those functions without impacting overall performance.
  • Leveraging the hypervisor within server and/or desktop virtualization infrastructures to perform more of the data services tasks such as thin provisioning, snapshots, cloning and even tiering.
  • Using scale-out storage, which adds servers (often called nodes) to the storage system, where each node brings additional capacity, I/O, and processing power.

As we move into the New Year, it is time to look at some threats we need to guard against. In this blog post, let’s look at how ransomware is likely to become more sophisticated in 2014. Here are a few trends observed this year that may well continue into 2014, along with some new and interesting challenges.


What on earth is Ransomware?

It is a type of malware designed to make your system or a file unusable until you pay a ransom to the hacker. It typically appears as an official warning from a law enforcement agency such as the Federal Bureau of Investigation (FBI) that accuses you of a cyber-crime and demands an electronic money transfer before you can regain control of your files. Another kind of ransomware encrypts the user’s files with a password and offers the password upon payment of a ransom. In both cases, it is the end-user’s system that is essentially held hostage.


Cryptolocker malware and how it works

The CryptoLocker malware is seen as an extension of the ransomware trend and is far more sophisticated in its ability to encrypt files and successfully demand ransom. Its presence is hidden from the victim until it contacts a Command and Control (C2) server and encrypts the files on the connected drives. The malware continues to run on infected systems and persists across reboots. When executed, it creates a copy of itself in either %AppData% or %LocalAppData%. CryptoLocker then deletes the original executable and creates an autorun registry key, which ensures the malware is executed even if the system is restarted in “safe” mode.


Protecting yourself from Ransomware

It is important to be aware of this kind of malware, and here are a few steps that can help you protect your organization from ransomware:

  • Ensure that all the software on your systems is up-to-date.
  • Do not click on links or attachments from untrusted sources.
  • Regularly back up your important files.


Additionally, regulatory mandates and corporate policies need to be enforced stringently. A security attack of any kind can have a direct impact on your organization’s integrity and reputation, which is why a comprehensive security solution must be put in place. It is best to opt for a SIEM solution with real-time analysis and cross-event correlation, as it helps you to:

  • Reduce the time taken to identify attacks, thereby reducing their impact
  • Reduce the time spent on forensic investigation and root cause analysis
  • Respond to threats in real-time
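As a toy example of the kind of cross-event correlation a SIEM performs, the sketch below raises an alert when one source IP generates several failed logins inside a short time window. The event format, window, and threshold are invented for illustration:

```python
# Toy cross-event correlation: alert when one source IP produces
# THRESHOLD failed logins within WINDOW_SECONDS.
from collections import defaultdict

WINDOW_SECONDS = 60
THRESHOLD = 3

def correlate(events):
    """events: list of (timestamp_seconds, source_ip, outcome) tuples."""
    failures = defaultdict(list)
    alerts = []
    for ts, ip, outcome in sorted(events):
        if outcome != "fail":
            continue
        # Keep only failures still inside the sliding window for this IP.
        failures[ip] = [t for t in failures[ip] if ts - t < WINDOW_SECONDS]
        failures[ip].append(ts)
        if len(failures[ip]) >= THRESHOLD:
            alerts.append((ip, ts))
    return alerts

events = [
    (10, "10.0.0.5", "fail"),
    (20, "10.0.0.5", "fail"),
    (25, "10.0.0.9", "ok"),
    (30, "10.0.0.5", "fail"),   # third failure inside 60 s -> alert
]
print(correlate(events))
```

A real SIEM correlates many event types across many sources this way, which is what shrinks the time to identify an attack from hours of log reading to near real-time.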


Shield your network and systems better this year, and have a good one!

My teenage daughter thinks my technology job in security is boring most of the time (especially when I talk about it in front of her), but when she heard about the SnapChat breach, I quickly received a call asking for advice. The user names and phone numbers of many users were breached and exposed. So should my daughter and her friends be worried? Whenever there is a breach or a new vulnerability is found, there can be a lot of hysteria. It’s scary to know that your information was stolen, but there are varying degrees of damage that breaches can do. I performed a quick risk assessment for her and thought, given the large number of SnapChat users, I would share the results. The outcome? She personally did not need to be very worried, although some might. Here’s why:


  • Right now, there is no indication that passwords were exposed.  I recommended she change her password anyway just to play it safe. Since I use SnapChat as well to communicate with her, I did the same.
  • Her user name, combined with her phone number, doesn’t provide much identifying information about her at all.  It could lead to annoying spam texts and calls – but since she is using a respectable user name that is not her full name, the two pieces together do not clearly identify her and should not cause embarrassment.  She is almost 20, so we are not very concerned about her receiving content from those she doesn't know - because she can always block those people.  Younger kids and their parents should take some precautions.
  • New incoming photos can’t be accessed with just a user name and phone number, so new photos coming in are safe as long as passwords weren’t breached (and we changed our passwords to play it safe).
  • Old photos, while remnants remain on her device, are not accessible even if her account was breached, because the SnapChat application does not maintain them for user access.
  • Her name with her phone number is already public information because it is listed on her blog with her resume

So who should be worried and when?

  • Parents of younger children (my daughter is almost 20) should be concerned because their kids can be added by people who don't know them, and their numbers have been exposed to spammers, which may in turn expose them to inappropriate content and messages. Downloading the new version of the app released today and opting out of "Find Friends" should definitely be done for younger kids. Also, make sure younger kids come to you immediately if they see inappropriate content in spam messages on their phone.
  • If you have an inappropriate user name, this information combined with your phone number could cause embarrassment.
  • If it turns out that passwords were in fact breached, then someone could gain access to new incoming snapchats. There are no reports of passwords being breached, but I recommend changing passwords now just to play it safe.
  • If the user name contains identifying information and you want to keep your number private (for example – famous people) – then it could cause an issue.  In that case, getting a new phone number from your mobile provider is the option to correct it.

Reasonable security decisions, both in business and in our personal security online, are about weighing the value of the information against the difficulty for an attacker to obtain it. Those whose phone numbers were exposed might see some annoying spam on their mobile phones, but unless more data than phone numbers and user names was stolen, or you fit the “worry” criteria, that should be the extent of the damage from this breach.
