I recently discussed the NSA's project to develop a quantum computer, in part for the purpose of cracking AES-encrypted data captured from internet backbones over the past 10 years and now stored in enormous warehouses.


Yet, while true quantum computing is probably still years away, quantum key distribution systems already exist. And because these systems do not depend on the practical inability of a computer to factor very large numbers, relying instead on the relationship between entangled particles to encipher the exchange of data, making quantum key distribution technology more widely available would be a crypto-activist strategy that even a (still hypothetical) quantum computer would plausibly be unable to defeat.


If you think of packet-switched data security as a chain, the encryption algorithm provides the strongest link. Managing encryption keys is the weakest link, and stealing keys is what hackers (NSA included) most often succeed in doing. In some cases, as with the RSA corporation, the creator of the technology that generates keys has taken money to make key theft easy.




As I've discussed, quantum computing fundamentally depends on the engineering feat of manipulating particles into the state known as superposition. Quantum key distribution also uses superpositioned particles, but it additionally requires at least one pair of such particles that are entangled. With a pair of entangled particles, observing some aspect of one particle's state exactly predicts the state of the other particle. Prediction in this case amounts to an instantaneous communication between particles.


Since this phenomenon seems to violate the theory of special relativity, which precludes particles influencing each other at any speed faster than light, Einstein derided the entanglement hypothesis, describing it as "spooky action at a distance." And yet 80 years of experimental physics have overwhelmingly confirmed entanglement as a reproducible physical reality.


Quantum Key Distribution and the Flow of Money


Entangled particles secure qubit-based key exchange thanks to the fact that you can neither copy a quantum state (the no-cloning theorem) nor measure all aspects of entangled particles without corrupting the quantum system, in effect collapsing particles in superposition into particles with a single definite set of values. As a result, parties using such a quantum system to secure their information exchange can detect the fact and extent of intrusion by a third party.


In 2004, a group of researchers based at the University of Vienna produced a set of entangled photons and used them as a key to encipher a transfer of funds from Bank Austria Creditanstalt:

At the transmitter station in the Bank Austria Creditanstalt branch office, a laser produces the two entangled photon pairs in a crystal. One of the two photons is sent via the glass fiber data channel to the City Hall, the other one remains at the bank. Both the receiver in the City Hall and the transmitter in the bank then measure the properties of their particles.


The measuring results are then converted into a string of 0s and 1s – the cryptographic key. The sequence of the numbers 0 and 1 is, due to the laws of quantum physics, completely random. Identical strings of random numbers, used as the key for encoding the information, are produced both in the bank and the City Hall. The information is encoded using the so-called “one time pad” procedures. Here, the key is as long as the message itself. The message is linked with the key bit by bit and then transferred via the glass fibre data channel.


Eavesdropping can be detected already during the production of the key – before the transfer of the encoded message has even started. Any intervention into the transfer of the photons changes the sequence of the number strings at the measuring stations. In case of eavesdropping, both partners receive an unequal sequence. By comparing part of the key, any eavesdropping effort can be discerned. Though the eavesdropper is able to prevent the transfer of the message, he is unable to gain any information contained in the message.
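The one-time pad step described in the excerpt is easy to sketch: each message byte is combined with the corresponding key byte via XOR, and applying the same key again recovers the message. Here is a minimal illustration in Python, with ordinary random bytes standing in for the quantum-derived key:

```python
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the corresponding key byte."""
    assert len(key) >= len(data), "the key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"transfer 3000 EUR"
key = secrets.token_bytes(len(message))  # stand-in for the measured photon string

ciphertext = one_time_pad(message, key)
recovered = one_time_pad(ciphertext, key)  # XOR with the same key inverts the cipher
assert recovered == message
```

Because the key is as long as the message, truly random, and never reused, the ciphertext alone carries no information about the message; the security question reduces entirely to distributing the key, which is exactly what the quantum channel addresses.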

Currently, there are physical and financial limits on the availability of quantum key distribution: the network cannot extend beyond 120 miles, preventing adoption on the open internet; and every member of the network must have a pair of entangled photons generated for each encrypted session of data exchange, incurring a significant entry cost in equipment. This is a good example of William Gibson's observation that, technologically speaking, the future is already here; it's just unevenly distributed.


Encrypting Data, Breaking Codes


Innovations in encrypting data and in breaking that encryption have leap-frogged each other throughout the history of cryptography. And because control of each innovation affects everyone who creates and exchanges data over publicly accessible channels, keeping up with the latest innovations is in every individual's and organization's vested interest.


The lesson seems to be that the security of any software product comes down to how carefully the keys to the system are guarded. As with any kind of business or social interaction, establishing trust often comes down to reputation; this is all the more true when it comes to data security and choosing to purchase technology instead of building it yourself. The fact that a software product for polling network devices carefully adheres to the standard in implementing SNMPv3, for example, is an important indication that its security features operate as expected.



Welcome to the latest installment of our IT Blogger Spotlight series here on Geek Speak. It’s been a while, but we’re back at it and have a good one to get us going again: Dwayne Lessner, who runs IT Blood Pressure. Check the blog out! You won’t be disappointed. You can also find Dwayne on Twitter, where he goes by @dlink7.


SWI: So Dwayne, how did you get started blogging?


DL: Well, that’s probably a two-part answer, and I admit the first part is probably a little selfish. The reality is blogging is a great way to keep track of things I’m working on or thinking through. Do a little write-up, and it gives you a point of reference you can always go back to. The second part of the answer is really about giving back, actually. I find a lot of answers to my technical and not so technical questions on other blogs and online communities, and I figure maybe something I write up will do the same for someone else. I also love learning, and the research that goes into blogging is a great way to learn new things. It definitely wasn’t for money, I know that much for sure!


SWI: Tell me a little about IT Blood Pressure and what you most enjoy writing about.


DL: So, I actually have roots in the healthcare industry, and that's where the name came from. Most of the content revolves around VDI in some way, but there's the odd server virtualization- and storage-related post thrown in there. A lot of the content also ties into my day job at Nutanix.


SWI: What are your most popular posts and why?


DL: The answer to that question is kind of funny, actually. One of my most popular posts continues to be “Why Did My v-Session Disconnect.” There must be quite a few folks out there dealing with issues with the PC-over-IP protocol. I’ve also found that write-ups I do on new products are always pretty popular, too.


SWI: When you’re not working or blogging, what are some of your hobbies?


DL: I really like playing rugby. I’ve been playing since high school, though I realized pretty quickly in my first men’s game that we really didn’t know what we were doing back then. I also have two little girls who keep me super busy.


SWI: What do you do for work?


DL: I’m currently a technical marketing engineer for Nutanix. I’ve been with the company for just over a year and love what I do.


SWI: How did you get into IT?


DL: To be honest, it really all started with my mom. She worked a back-breaking job night and day tossing 50 pound bags of flour. I knew that was not what I wanted to be doing when I turned 40 years old. And you know, I of course thought I knew a lot about IT when I first got into it back in college. I thought, “Oh, yeah, I know how to work the Internet, IT is for me.” Obviously, a rude awakening was in store.


SWI: What are some of your favorite tools?


DL: Anything that puts a focus on ease of use. You know, there are tools that might have every feature under the sun, but the chances of an IT pro having the time to twiddle around with all the features of a product are pretty slim. So, core features with great ease of use are what’s most important to me.


SWI: What trends are you seeing in the IT industry?


DL: The commoditization of everything, really. The whole idea of paying a premium price for servers, storage, network gear, you name it, really doesn’t exist like it used to. I also think the IT people who will be really successful are those with a service provider mentality. The next generation of IT, both the IT pros themselves and the hardware and software vendors, really has to be about providing service and improving ease of use.

The Storage Manager Collector service is responsible for collecting data from proxy agents and transferring the collected information to the database. When customers add additional devices to Storage Manager, this can cause additional connection requests to the database. It is possible that the collector will not be able to store information in the database because there are too many connections. I will discuss how to work around this issue.


Note: Storage Manager versions 5.6 and newer use MariaDB; previous versions use MySQL. For versions prior to 5.6, substitute MySQL for MariaDB in the following instructions.


First we need to check the collector log file to see what messages are being generated. The collector log file can be found in the following location:


  • Windows - %Program Files%\SolarWinds\Storage Manager Server\webapps\ROOT\logs


  • Linux - /opt/Storage_Manager_Server/webapps/ROOT/logs


The file is named mod.adm.collect.Collector.log.


When viewing this file, look for the following error:


[Collect list digestor] Non-recoverable database error [08004]:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"


The error tells us there are too many connections trying to hit the database at once.


Fixing this issue requires modifying the database configuration file, my.cnf. This file can be found in the following location:


  • Windows - %Program Files%\SolarWinds\Storage Manager Server\mariadb


  • Linux - /opt/Storage_Manager_Server/mariadb


Open the my.cnf file with a text editor and search for the entry max_connections=. The default value is 400. Change it to 600, save the file, and restart the Storage Manager services, including the database service.
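After the edit, the relevant line in my.cnf should look like this (shown in context under the [mysqld] section; the other settings in your file will differ):

```
[mysqld]
# Maximum simultaneous client connections (default is 400)
max_connections=600
```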


Note: Do not exceed 600 when changing this value.

Virtual admins often have a pressing need to move virtual machines from one host to another without incurring application downtime. This scenario frequently arises because a host runs out of resources due to contention among the virtual machines running on it. vMotion® from VMware® allows for seamless virtual machine migration without any interruptions. Similar to vMotion, with the latest generation of Hyper-V®, Microsoft® now offers Live Migration, which provides the capability to move virtual machines seamlessly between hosts without disruption or application downtime.


Live Migration offers virtual admins plenty of flexibility and several added advantages:

  • Enables proactive maintenance. Live Migration lets you move virtual machines off a host and perform maintenance on that host at the same time. This drastically reduces migration time, giving you more room to plan and simplifying Hyper-V mobility.
  • Improves hardware utilization. When you have many virtual machines that are only occasionally used, you can move them to a standalone server or a minimally provisioned cluster, and the source host can be powered down.
  • Performance and flexibility when working in a dynamic environment.
  • Rich in features. With the latest Hyper-V, Microsoft offers new APIs to migrate virtual machines across cluster nodes along with functionalities that enable migration of virtual machines in and out of failover clusters without downtime.


Besides Live Migration, Hyper-V offers other virtual machine migration mechanisms. These are also used by virtual admins to improve virtual machine availability and performance.

  • Live Storage Migration: With Live Storage Migration, IT admins can move a virtual machine's files and data to another storage location without any service interruption or downtime. For example, if several virtual machines on a host depend on the same volume and all require high disk I/O, storage performance will be affected, and every virtual machine dependent on that volume will have performance issues. Live Storage Migration is useful here for moving a virtual machine's storage to another location with more capacity.
  • Shared Nothing Live Migration: In Shared Nothing Live Migration, IT admins can move a running or a live virtual machine from one host to another. For example, admins can move the virtual machine between hosts even if the virtual machine’s file system resides on a storage device that is not shared by both hosts. This kind of migration is very useful for admins in small and medium organizations who want to perform periodic maintenance on a host during a planned outage.
  • Live Migration Queuing: A host can queue up several Live Migrations so virtual machines are lined up one after another to move to the destination host. This is useful when performing several Live Migrations.
  • Live Virtual Machine and Storage Migration (VSM): This is a process for migrating a virtual machine and its storage together from one host to another. VSM is supported between two stand-alone hosts or Hyper-V host clusters. With VSM, IT admins can transfer virtual machines to either a cluster shared volume (CSV) or an SMB 3.0 file share on the destination host cluster.


Whichever migration method you choose to deploy, the ultimate goal is to ensure the end-user and the business aren’t impacted by virtual machine performance issues. One other thing to keep in mind when migrating virtual machines is to look out for performance bottlenecks across your virtual infrastructure and ensure application performance is nowhere near the set threshold limits.

Though HIPAA and the HITECH Act have been in effect for years now, there is still a lot of confusion around them, especially when it comes to data protection. One of the key areas of discussion has been “data at rest.” For example, consider a situation at your hospital where a doctor loses his thumb drive. What happens to the information stored on it? The HITECH Act states that all data at rest must be encrypted, which ensures that no one can steal patient information from any data at rest.


Unauthorized Thumb Drive Violations at APDerm and DHHS

Recently at Adult & Pediatric Dermatology, P.C., of Concord, Massachusetts (APDerm), an unencrypted thumb drive containing the electronic protected health information (ePHI) of approximately 2,200 patients was stolen, resulting in a major data breach. Forensic analysis found that APDerm had not conducted a thorough analysis of the potential risks and vulnerabilities to the confidentiality of the ePHI, which ideally should have been done as part of effective security management. Secondly, they also weren’t fully compliant with HIPAA’s Breach Notification Rule, which required them to have written policies and procedures and a workforce trained on those policies. This resulted in a settlement payment of $150,000 and a commitment to corrective measures.


A similar incident happened last year as well, when the Alaska Department of Health and Human Services (DHHS) had to pay the U.S. Department of Health and Human Services (HHS) $1.7 million to settle potential violations of the HIPAA Security Rule.


Proactive Risk Assessment

Let us explore the APDerm case a little further. In addition to the $150,000 resolution amount, the settlement included a corrective action plan that required APDerm to develop a risk analysis and risk management plan to address and mitigate any security risks and vulnerabilities, and to provide an implementation report to OCR.


Let us consider another dimension of this story. Setting aside the need for data encryption, why allow unauthorized USB access at all? With a USB defender mechanism in place, the whole situation would never have happened. Isn’t it important to set up USB access restrictions? It becomes especially critical when the devices used are mass storage devices.


Tips to stay Secure

One needs to remember that the loss of sensitive information is not limited to emails and patient information; it also means the loss of:

  • Intellectual property and copyrights
  • Customer trust and reputation
  • Compliance standing, and more


There are a few things that can be done to secure your data:

  • Take care in handling sensitive documents and make sure you destroy them when they are no longer needed.
  • Monitor connections of USB devices, including mass storage devices, phones, and cameras, on your workstation ports.
  • Monitor the log activity of all your enterprise workstations and USB endpoints. You can create a group of authorized users who can access USB devices, and then set up rules in your SIEM in a way that detaches unauthorized USB devices the moment they are plugged in.


A typical USB defender mechanism does the following:

  1. Checks whether the attached device belongs to the defined group of authorized users
  2. If the user is unauthorized, executes an automated response by detaching the device
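As a rough illustration (not the implementation of any particular product), those two checks can be sketched in a few lines of Python; the device IDs, the whitelist, and the detach action here are all hypothetical placeholders:

```python
# Hypothetical whitelist of authorized device IDs (illustrative values only)
AUTHORIZED_DEVICES = {
    "USB\\VID_0781&PID_5581\\SN4C5310",
    "USB\\VID_0951&PID_1666\\SN001A22",
}

def detach(device_id: str) -> None:
    # Placeholder: a real defender would trigger an OS-level eject/disable here
    print(f"Detached unauthorized device: {device_id}")

def handle_usb_attach(device_id: str) -> str:
    # 1. Check whether the device belongs to the defined authorized group
    if device_id in AUTHORIZED_DEVICES:
        return "allowed"
    # 2. Otherwise execute the automated response: detach the device
    detach(device_id)
    return "detached"
```

In practice the whitelist would come from your SIEM's asset inventory, and the detach step would be whatever automated response your platform supports.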


Stay alert, stay secure!!

Remote support software is becoming more robust and popular, meaning fewer companies are able to offer it as a free tool. For example, LogMeIn recently announced that it will start requiring customers to purchase an account subscription for its remote support desktop tool. Now, in terms of a cost comparison, the decision between SaaS (Software as a Service) and self-hosted just got easier.

Given the convenience and time-saving aspects of remote support functionality, more IT pros are starting to recognize it as a necessary asset in their IT arsenal. DameWare Remote Support (DRS) is a feature-rich remote support tool that includes remote control for Windows®, Mac OS® X, and Linux®, remote access from iOS® and Android® devices, in-session chat, and file transfer—all for a one-time fee of $349/user. Because it’s a self-hosted solution, you have more control over setup, operation, and maintenance. This means a lot when it comes to network security.

When you add cost to the equation, it’s easy to see how a self-hosted system is far more cost-effective over the long run than a SaaS offering.

With LogMeIn, users will need to pay $99/year for remote access to two computers. For small business owners, they recommend LogMeIn Pro for Small Businesses, which supports up to 10 computers at $449/year. Support for each computer therefore costs about $45/year. If one IT pro can support 10 computers, the cost for DRS is about $35 per computer for the first year. This results in roughly a 20% savings over the LogMeIn price during the first year.

The annual contract renewal for DRS is $99/year, which includes upgrades for all new releases and access to technical support. Comparing DRS to LogMeIn Pro, the cost of supporting the same 10 computers drops to about $10 per computer per year, resulting in an annual savings of roughly 75% after the first year.
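The per-computer arithmetic above is easy to verify. The following sketch uses the prices quoted in this post and assumes the same 10-computer, one-IT-pro scenario (the percentages in the text are rounded conservatively):

```python
computers = 10

logmein_per_pc = 449 / computers       # LogMeIn Pro: $44.90 per computer per year
drs_year1_per_pc = 349 / computers     # DRS license: $34.90 per computer, first year
drs_renewal_per_pc = 99 / computers    # DRS renewal: $9.90 per computer per year

year1_savings = 1 - drs_year1_per_pc / logmein_per_pc      # ~22%
renewal_savings = 1 - drs_renewal_per_pc / logmein_per_pc  # ~78%

print(f"First-year savings: {year1_savings:.0%}")
print(f"Savings in later years: {renewal_savings:.0%}")
```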

LogMeIn’s new pricing policy serves as a wake-up call to the evolving nature of software and online services. When cost is a factor in technology decisions, the self-hosted DameWare Remote Support is the more cost-effective option.

Phone Security and Privacy

Posted by Bronx Jan 28, 2014

Who cares about privacy?

The classic argument, "If you're not doing anything wrong then you have nothing to worry about," has a tiny, yet important flaw. The word, wrong. Who defines what is wrong and what is not? For instance, if I were to text a crude joke to one of my friends in my contact list (as I often do) the recipient and I would most likely laugh and think the joke was...at least acceptable (if not actually funny). If someone (say an NSA official) intercepts and reads this text, then that's where the laughter will most certainly cease. It's at this point where things can get a little...wonky. Imagine, a simple joke intercepted, taken out of context, and then investigated by the Feds because you mentioned a government official by name! The possible, although unlikely, nightmarish scenarios are limitless. Your phone has everything dear to you including text messages, emails, bank information, locations, friends, pictures, videos, and on and on and on. Who cares about privacy? Well, in a word, me. And I'm sure you do as well. Really, how would you react to someone standing over your shoulder reading what you're writing; or worse, you come back to your desk and see a co-worker going through your phone's photo gallery? This is happening to everyone on the planet all the time without the "benefit" of actually seeing and identifying the spy. We know this because of, "He who should not be named." (No, not Voldemort. Think neve.)


The government apparently has no care whatsoever for your privacy, but thankfully, the private sector does. Yes, the time has come to go on the offense and stop playing defense with something that is yours...your privacy. (This mantra, IMHO, is something that should be touted in all aspects of the human experience. Anyway, I digress.) Enter Blackphone!



Blackphone is a private start-up company doing one thing: building an Android-based phone that is 100% secure.


From their site:

"Blackphone is the world's first smartphone which prioritizes the user's privacy and control, without any hooks to carriers or vendors. It comes preinstalled with all the tools you need to move throughout the world, conduct business, and stay in touch, while shielding you from prying eyes. It's the trustworthy precaution any connected worker should take, whether you're talking to your family or exchanging notes on your latest merger & acquisition."


I'll buy one as soon as I can, I promise you.


In the meantime...

I like my privacy. I have written multiple articles on privacy that help keep your life, well, yours. Take a gander here:

There are plenty of apps (some are subscription based) that you can install for securing your phone:

Imagine a world with no bathroom or bedroom doors, with transparent walls, and with everyone knowing every thought in your head! Yikes! Remember, you're doing nothing wrong...until someone more powerful than you says you are. This is when the fun begins.

If you have any good security/privacy tips, please share them with the world in the comments section below.

P.S. You might wanna uninstall Google Maps and Angry Birds. Why? Check this out.

P.P.S. For the record, I do not have a (real) Facebook or Twitter account. I have various real and disposable email accounts, none of which reveal my true identity. At home, I have VPNs galore with A/V and anti-spyware software o'plenty. Not to mention everything I mentioned in the E-privacy articles. Even my co-workers call me Bronx. (My real name is Joe...or is it?)

Technology is taking us sky high, enabling a world of interconnected things available to everyone, from everywhere, all the time. The trend of Bring Your Own Device (BYOD) has enabled employees to use their own devices to access company information and applications. In addition, this trend has led to more mobility and an easier way of doing business on the go. Another popular topic, the Internet of Things (IoT), takes the concept of devices connecting to external networks to a newer and broader spectrum. The idea of IoT is to make human contact with the virtual world more realistic, tangible, and meaningful by allowing any device or machine (with IP-enabled sensors) to interact with us through the Internet.


While these advancements in science and technology sound absolutely fascinating, to the network administrator they can also bring visions of the herculean challenges such environments would create. There was a time when an employee was provided with only a single IP-enabled device (a PC). Now, employees tend to carry their own smart phone, tablet, wearable tech, or other intelligent media that connects to the Internet. Not only does this lead to increased IT security risks, but also to more complexity in provisioning, tracking, and managing IP addresses.


IPv4 addresses are hard enough to manage manually, and as their public availability runs out, the longer, alphanumeric IPv6 addresses look even more challenging to migrate to and manage.


THE SIGN OF THE FOUR (Subheading inspired by the Sherlock Holmes novel of the same name.)

    1. Gartner® studies indicate that there will be nearly 26 billion devices on the Internet of Things by 2020.
    2. According to ABI Research®, more than 30 billion devices will be wirelessly connected to the Internet of Things (Internet of Everything) by 2020.
    3. Cisco® predicts that there will be 25 billion devices connected to the Internet by 2015 and 50 billion by 2020.
    4. IDC® expects the installed base of the Internet of Things will be approximately 212 billion "things" globally by the end of 2020.


In simple terms, your employees will bring in more devices that connect to the corporate network. In addition, your network itself will grow, requiring more devices to support a growing number of clients and faster Internet access.


These are alarming trends for network administrators. Looking ahead, there’s going to be more dynamic IP provisioning, more subnet masking, more DHCP server configuration, more split scopes and shared networks, and more management of subnet utilization and DHCP scope leasing. Moreover, the new management challenges of IPv6 will almost certainly rule out manual IP address tracking with spreadsheets and in-house tools. In short, a greater number of IP addresses assigned to clients means increased network administration complexity, more IP conflicts, and more troubleshooting nightmares.



You may ask, “Why automate IP address management?” Especially when manual IP tracking and monitoring costs you hardly anything using Microsoft® Excel®. But as we've discussed above, the proliferation of BYOD, IoT, and other new networking technologies such as SDN (Software Defined Networking), VXLAN (Virtual eXtensible Local Area Network), and 802.11ac Wi-Fi will bring new challenges to IP addressing. IP assignment and tracking will become far more complex, and the time taken to map IPs to devices for troubleshooting will become much longer.


By replacing manual methods with IP address automation, you'll be able to:

    1. Reduce manual effort and improve operational efficiency
    2. Facilitate dynamic IP addressing and synchronized DHCP/DNS management
    3. Automate IP address scanning and utilization monitoring
    4. Receive real-time notifications when subnets and scopes near full capacity
    5. Identify and troubleshoot IP conflicts easily
    6. Schedule IP address and subnet reports
    7. Easily track the status of IP assignment and DHCP scope leases
    8. Unify all IP address management options into a simplified interface rather than cumbersome spreadsheets, or any in-house or open source tools
    9. Simplify IPv6 migration and dual-stack IP implementation
    10. More easily conduct subnet allocation and masking, supernetting, and other IP addressing tasks
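As a small illustration of the scanning and capacity-alerting items above, subnet math of this kind is straightforward to automate. This Python sketch uses the standard ipaddress module; the subnet, lease count, and 90% alert threshold are made-up examples:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/26")  # hypothetical scope
assigned = 57                                     # hypothetical leases in use

usable = subnet.num_addresses - 2                 # exclude network and broadcast
utilization = assigned / usable

print(f"{subnet}: {assigned}/{usable} addresses in use ({utilization:.0%})")
if utilization >= 0.90:                           # example alert threshold
    print("ALERT: subnet is nearing full capacity")
```

A real IPAM tool would feed the assigned count from DHCP scope data and scan results rather than a hard-coded number, but the capacity logic is the same.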


The Internet of Things and BYOD are here to stay, paving the way for more sophisticated and advanced network connectivity. This means IT teams have to revisit their IT solutions to ensure they are part of the technology evolution, benefitting from it and keeping it under control!

In today’s business world, email is considered one of many mission-critical applications, if not the most important one. At the end of the day, email downtime affects business operations; sales teams, for instance, may have a tough time reaching customers. Microsoft Exchange is a great tool for setting up email infrastructure and accessing email. However, the biggest challenge for an Exchange admin is keeping a fully functioning system running, especially during peak business hours. Below is a list of essential tips an Exchange admin can leverage to optimize and improve Exchange server performance.


1. Storage Allocation and Performance


Sometimes when Exchange admins don’t correctly configure or partition the disks that store Exchange data, it leads to email failure. When storage volumes run out of available disk space, Exchange can’t store additional emails. As a best practice, even before assigning email space to users, Exchange admins should monitor available disk space to head off the issue in advance. Planning your disk storage for Exchange server is critical because high disk latency will lead to performance issues. For optimum storage performance, it’s essential for admins to consider:

  • Deploying high performance disks and spindles
  • Selecting the right RAID configuration
  • Improving performance by aligning your disks
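On the disk-space monitoring point above, even a simple scheduled script can warn before a volume fills up. This sketch uses Python's standard shutil module; the volume path and the 15% free-space threshold are illustrative (on a Windows Exchange server you would point it at the drive holding the databases):

```python
import shutil

volume = "."  # replace with the volume that stores your Exchange databases

usage = shutil.disk_usage(volume)
free_fraction = usage.free / usage.total

print(f"{volume}: {usage.free // 2**30} GiB free of {usage.total // 2**30} GiB "
      f"({free_fraction:.0%})")
if free_fraction < 0.15:  # example alert threshold
    print("WARNING: volume is running low on disk space")
```

Run on a schedule, a check like this gives the admin lead time to expand storage or rebalance mailboxes before Exchange stops accepting mail.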


2. Mailbox Quota

Often, Exchange admins get trouble calls from users who can no longer send or receive emails. This happens when users have gone beyond their allocated mailbox quota. Instead of removing mailbox restrictions, Exchange admins can encourage users to move attachments, set up an archive, or adjust their archiving policy to reduce mailbox size. It’s important to monitor mailbox sizes to ensure each user group or individual employee remains within their original allocation. Larger organizations with many users will have to be smart about configuring mailbox sizes, since they can get more trouble tickets due to oversized mailboxes. Admins should proactively configure alerts to warn users when they’re about to reach their threshold.

3. Mailbox Database Capacity

Admins have to understand mailbox database capacity in order to get more visibility into:

  • Mailbox database size: You can determine how many mailboxes can be deployed in a single database. For example, development teams may have to share emails with heavy attachments with customers for feedback; such user groups or departments with large mailboxes can be moved to another database to balance load and capacity.
  • Storage usage: In order to estimate database capacity, you should map the number of user mailboxes per disk or array.
  • Transaction log files: Looking at transaction logs for user mailboxes will give you more information about message size, attachment size, and amount of data sent and received, etc.
  • Database growth and size: The database size will give you a rough estimate on the number of mailboxes you can deploy. This will depend on the availability factor (if you have a database copy then you have something to fall back on during failover) and storage configuration.

4. Indexing Public Folders

Even though indexing is useful, it uses a lot of resources, causing performance issues on the Exchange server. Indexing public folders in Exchange consumes a lot of CPU and disk I/O. As your mailbox content changes, for example as you receive more and more email, the index grows larger and larger. If you have a mailbox that is about 5GB in size, its index also takes up a fairly large amount of space. This happens because each email or message gets indexed separately for each public folder. Admins should recommend indexing only for specific teams or departments; otherwise there will be various resource bottlenecks.

5. Managing Unused Mailboxes

Businesses typically hire temp staff, interns, and contractors for a limited period. Admins have a tendency to leave those dormant mailboxes untouched for a long time. Removing these will free up storage and database capacity. Admins can reassign these to users who are short of capacity or keep them handy for future hires.

6. Bonus Tip: Automated Alerting

Admins can set up mechanisms to alert end users that their quota is nearly reached. Additionally, they can provide information about their mailbox, like the number and size of attachments, and information on how to reduce mailbox size. This automation can save the admin several hours and leaves the responsibility of size reduction on the user.

Exchange server can be a complex application to deal with especially when you have very little time to diagnose and troubleshoot issues. In order to be proactive, you can optimize the performance well in advance for your environment. Looking at other areas, such as controlling spam, creating proper backups, and cleaning up Exchange server will ensure consistent performance.


Big Data: Big Deal?

Posted by einsigestern Jan 24, 2014

In my June 2013 post "Byte Off More Than You Can Chew", we looked at the inconceivable quantity of data collected by the NSA. Maybe this collection is only metadata, but recent revelations, depending on who you believe, may indicate otherwise. This time we wrangle with Big Data.

"Big data refers to our burgeoning ability to crunch vast collections of information, analyze it instantly, and draw sometimes profoundly surprising conclusions from it. This emerging science can translate myriad phenomena—from the price of airline tickets to the text of millions of books—into searchable form, and uses our increasing computing power to unearth epiphanies that we never could have seen before." (1) We can now draw logical conclusions regarding relationships that heretofore we would never have considered. These capabilities are not free; they cost us more than money. Given enough relevant data, predictive models can discern the probabilities of events yet to unfold, and they can do so with frightening accuracy. "It also poses fresh threats, from the inevitable end of privacy as we know it to the prospect of being penalized for things we haven’t even done yet, based on big data’s ability to predict our future behavior." (1) Cue "Minority Report" montage.

"Leaders in every sector will have to grapple with the implications of big data, not just a few data-oriented managers. The increasing volume and detail of information captured by enterprises, the rise of multimedia, social media, and the Internet of Things will fuel exponential growth in data for the foreseeable future." (3)

A yardstick for the popularity of a given topic is the "...For Dummies" book. "Big Data for Dummies" has got it covered. "Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools." (2)

Then there is the privacy concern. Who has access to big data? Who collects it, and who analyzes it? These questions, again depending on who you believe, have yet to be answered with veracity. One can assume beneficial goals as easily as nefarious pursuits. "Much of what constitutes Big Data is information about us. Through our online activities, we leave an easy-to-follow trail of digital footprints that reveal who we are, what we buy, where we go, and much more." (1)

Theresa Payton, former White House CIO, offers some food-for-thought on the issue of privacy: "Digital devices have made our busy lives a little easier...we get just-in-time coupons, directions, and connection with loved ones.... Yet, these devices...send and collect data about us whenever we use them, but that data is not always safeguarded the way we assume it should be to protect our privacy. Privacy is complex and personal. Many of us do not know the full extent to which data is collected, stored, aggregated, and used. As recent revelations indicate, we are subject to a level of data collection and surveillance never before imaginable. While some of these methods may, in fact, protect us and provide us with information and services we deem to be helpful and desired, others can turn out to be insidious and over-arching." (4)


(1) "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier (Mar 5, 2013)

(2) "Big Data For Dummies" by Judith Hurwitz, Alan Nugent, Fern Halper, and Marcia Kaufman

(3) "Big data: The next frontier for innovation, competition, and productivity" by James Manyika, Michael Chui, Brad Brown, Jacques Bughin, Richard Dobbs, Charles Roxburgh, Angela Hung Byers - McKinsey Global Institute

(4) "Privacy in the Age of Big Data: Recognizing Threats, Defending Your Rights, and Protecting Your Family" by Theresa M. Payton and Ted Claypoole


Password 123456

Posted by LokiR Jan 23, 2014

For those of you who can ban certain passwords (such as "password" or "abc123"), you may be interested in SplashData's list of the worst passwords of 2013.



Here are the top ten jewels for your banning pleasure:

  1. 123456
  2. password
  3. 12345678
  4. qwerty
  5. abc123
  6. 123456789
  7. 111111
  8. 1234567
  9. iloveyou
  10. adobe123



For those of you who don't have this power, you can use this list to help educate your users on what constitutes a bad password and maybe guide them into using stronger passwords. Most security firms recommend passphrases, like the estimable "correct horse battery staple". Even suggestions like "use your favorite sports team and favorite player number" are better than the tried and broken "company/product and 123".




Attribute-based Access Control (ABAC) is an advanced variant of role-based access control (RBAC). ABAC is a logical access control model that governs access to objects by evaluating rules against the specific attributes of the entity requesting access. There are typically 4 elements that ABAC uses to dynamically evaluate whether the requested access should be granted.

  1. Attributes of the subject
  2. Attributes of the object
  3. Environment conditions
  4. A formal relationship or access control rule which defines the permissible operations for subject-object attribute and environment condition combinations


If this seems complicated, it helps to understand ABAC by comparing it with RBAC. While RBAC considers only the role of a particular access requester (based on the organizational AD hierarchy), ABAC goes a step further and evaluates additional attributes, such as location, department, or any customized property that could define access privileges. For example, you could grant access to protected information only to an employee who is an R&D developer, works on a specific project and module, is assigned to a specific client, and requests access during a specific period.
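The R&D example can be sketched as a small rule function evaluating subject, object, and environment attributes together (the attribute names here are invented for illustration, not a standard schema):

```python
# Hedged ABAC-style sketch: one access rule combining subject, object,
# and environment attributes. ISO date strings compare correctly as text.

def rd_rule(subject, obj, env):
    """Allow R&D developers on the matching project, within the engagement window."""
    return (subject["role"] == "developer"
            and subject["department"] == "R&D"
            and subject["project"] == obj["project"]
            and env["date"] <= obj["engagement_ends"])

subject = {"role": "developer", "department": "R&D", "project": "acme-module"}
obj = {"project": "acme-module", "engagement_ends": "2014-06-30"}
env = {"date": "2014-02-01"}
print(rd_rule(subject, obj, env))  # True
```

Note how a single rule already combines role, project, and time; an RBAC check would stop at the role.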


ABAC's additional filter levels and granular data checks make it harder to infiltrate, since an intruder has to pass every attribute check before being allowed access. This is not exactly multi-level authentication, but a more robust single level of authentication that requires specific qualifying criteria based on multiple properties or metadata. It should be noted that building an ABAC rule is not as simple as an RBAC one, because you need to identify all the criteria for granting access; but the biggest advantage is that you can tailor access provisioning so it is restricted to only the people and systems you deem authorized.


ABAC is the more flexible security model, allowing more evaluation criteria and less manual intervention. Though the attribute data required to build ABAC access control rules is wide-ranging and granular in detail, ABAC offers more dynamic and intelligent access controls for safeguarding confidential information and IT assets.


According to NIST, ABAC systems are capable of enforcing both Discretionary Access Control (DAC) and Mandatory Access Control (MAC) models. Moreover, ABAC systems can enable Risk-Adaptable Access Control (RAdAC) solutions, with risk values expressed as variable attributes. From the network perspective, network access control becomes stronger with ABAC as IT teams can use various attributes/AD properties/metadata to provide device and user access into the network.


If you are interested in learning how to build a network whitelist and alert when a network device or user outside the whitelist connects to your network, explore User Device Tracker.

System administrators face many day-to-day challenges, particularly when troubleshooting and resolving user issues. The last thing on their minds is probably looking at performance data and making meaningful sense of it. While this may be a daunting task, data analysis is key to solving the issues surfaced by monitoring your servers and applications. This is especially true when mission-critical applications run on those servers. SysAdmins must therefore be proactive to ensure their servers suffer no downtime.


Data analytics will only be as good as your application monitoring solution. It’s key to have an effective monitoring solution so your analysis and predictions are accurate. Major considerations to keep in mind when you’re shopping for an application monitoring solution include:


Application Health and Availability

You can analyze how many applications are up or down for a given week or month, and drill down to see critical application status and component issues. Data analytics will show you the performance of business applications and whether there are deviations in performance. In addition, you can analyze patterns, see how each application responds to user requests, and compare against your baseline. With analytics, you can also look at application storage and predict storage growth for critical applications.

Real-time Alerts

When your applications reach their thresholds, you should instantly receive an alert. This helps you evaluate your threshold settings and analyze the number of alerts you receive per week for a given application.

Server Performance

Look at server performance and analyze a range of metrics, such as: response time, CPU load for any given time, memory utilization, node details, number of applications in a server, real-time processes and services, storage volumes for a given server, etc.

Hardware Health

To effectively look at the server environment and performance, you need an integrated console that shows you a range of hardware components. You can analyze the performance of each component and its metrics to ensure hardware failures don’t negatively impact application performance. To do this, you should analyze the performance of hard drives, arrays, array controllers, power supplies, fans, CPU temperature, CPU fan speed, memory, voltage regulator status, etc.

Database Optimization

To optimize SQL performance, you can look at various critical metrics and analyze the performance of each. You can also identify expensive queries and see how long each takes to run; badly written queries will affect database performance. Similarly, look at other components in SQL Server, such as index fragmentation, so data searches occur faster. Storage utilization, transactions, average response time, etc. also have to be analyzed proactively. This keeps you aware of the performance and availability of the database, helps you avoid performance bottlenecks, gives you the scalability to monitor more databases and instances, and keeps your server hardware healthy.

Operating System

Depending on your environment, you may be using different operating systems, and it’s essential to analyze the performance of each. You should drill down into your server operating system and analyze CPU utilization, physical memory, virtual memory, and disk performance. This will help show what’s putting a strain on the server, operating system, and applications.

Benefits of Analytics


Analytics makes it easier to answer what-if questions and determine where problems may occur. You can also establish meaningful relationships among your IT components, application performance metrics, customer experience monitoring, and end-user monitoring. To achieve optimum application performance, today’s SysAdmins in both small and large enterprises are expected to analyze these patterns and produce substantive information that helps guide further IT investments and determine ROI.


To conclude, SysAdmins can certainly take a load off their shoulders by utilizing the proper monitoring solution. In turn, analysis and predictions will be more accurate. That being said, the aforementioned considerations are important to keep in mind when searching for an application monitoring solution.

The Cisco Catalyst 3850 is a fixed, stackable GE (Gigabit Ethernet) access layer switch that converges wired and wireless within a single platform. The switch is based on Cisco’s programmable UADP (Unified Access Data Plane) ASIC, which supports this convergence and allows for deployment of SDN and Cisco ONE (Cisco’s version of SDN).

The Catalyst 3850 switch can stack and route, supports PoE, has higher throughput and larger TCAMs, can act as your Wireless LAN Controller (supporting up to 50 APs and 2,000 clients), and, importantly, supports Flexible NetFlow export. And why is NetFlow important? NetFlow has over the years become the de facto standard for bandwidth monitoring and traffic analytics due to its ability to report on the ‘Who, What, When and Where’ of your network traffic.

Flexible NetFlow configuration for Cisco Catalyst 3850 Switch:

The Cisco 3850 needs either an IP Base or IP Services Base license to support Flexible NetFlow (FNF) export.

Flexible NetFlow configuration involves creating a Flow Monitor, Flow Exporter and a Flow Record. Flow Monitor is the NetFlow cache whose components include the Flow Exporter and Flow Record. The Flow Exporter carries information for the export – such as the destination IP Address for the flows, the UDP port for export, interface through which NetFlow packets are exported, cache timeout for active and inactive flows, etc. The Flow Record carries the actual information about the network traffic which is then used by your NetFlow analyzer tool to generate bandwidth and traffic reports. Some of the fields in a Flow Record are source and destination IP Address, source and destination port, transport protocol, source and destination L3 interface, ToS, DSCP, bytes, packets, etc.

So, here is a sample configuration for enabling Flexible NetFlow on a Cisco Catalyst 3850 and exporting it to your flow analyzer such as SolarWinds NTA.

Flow Record:

We start by creating the flow record. From 'global configuration' mode, apply the following commands.


flow record NetFlow-to-Orion           \\ You can use a custom name for your flow-record

match ipv4 source address                               

match ipv4 destination address

match ipv4 protocol

match transport source-port

match transport destination-port

match ipv4 tos

match interface input

collect interface output

collect counter bytes long        \\ Though "long" is an optional command, readers have stated that NetFlow reporting works only when "long" is used

collect counter packets long

Flow Exporter:

And next for the flow exporter, again from the 'global config' mode.


flow exporter NetFlow-to-Orion       \\ You can use a custom name for your flow-exporter

destination                     \\ Use the IP Address of your flow analyzer server

source GigabitEthernet1/0/1            \\ Opt for an interface that has a route to the flow analyzer server

transport udp 2055                             \\ The UDP port to reach the server. SolarWinds NTA listens on 2055


Flow Monitor:

Now to associate the flow record and exporter to the flow monitor.


flow monitor NetFlow-to-Orion          \\ Again, you can use a custom name

record NetFlow-to-Orion                  \\ Use the same name as your flow record

exporter NetFlow-to-Orion               \\ Use the same name as your flow exporter

cache timeout active 60                  \\ Interval at which active conversations are exported - in seconds

cache timeout inactive 15                \\ Interval at which inactive conversations are exported - in seconds


Enabling on an Interface:

And finally associate the flow monitor to all the interfaces you would monitor with your flow analyzer. Go to the ‘interface config’ mode for each interface and apply the command:


ip flow monitor NetFlow-to-Orion input          \\ Or use the name of your custom flow monitor


The above command attaches the flow monitor to the selected interface, after which the ingress traffic that passes across the interface is captured and sent to your flow analyzer for reporting.

For a trouble-free setup, ensure that your firewalls or ACLs are not blocking the NetFlow packets exported on UDP 2055, and that you have a route from the interface you selected under the flow exporter to the flow analyzer server. Then you are all set. Happy Monitoring!
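As a quick sanity check on that export path, here is a rough Python sketch (not part of any SolarWinds tooling) that listens on UDP 2055 and reports the first datagram's source; run it only while your flow analyzer service is stopped, since both cannot bind the same port:

```python
# Sketch: listen on the NetFlow export port and report the first
# datagram's source address. A None result after the timeout suggests
# a routing, ACL, or firewall problem between switch and collector.

import socket

def wait_for_flow_packet(port=2055, timeout=30):
    """Return (source_ip, first_bytes) of the first datagram, or None on timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.bind(("", port))
    try:
        data, (src, _) = s.recvfrom(4096)
        return src, data[:4]
    except socket.timeout:
        return None
    finally:
        s.close()
```

If it returns the switch's IP address, the export path is fine and any remaining trouble is on the analyzer side.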




30 Day Full Feature Trial | Live Product Demo | Product Overview Video | Geeks on Twitter

Last week, the Washington DC Circuit Court of Appeals ruled (pdf) that the Federal Communications Commission (FCC) no longer has the authority to require that cable and internet service providers (ISP) treat all internet traffic equally. You have probably heard of this issue already; it's typically referred to as the issue of Net Neutrality. This decision, for now, nullifies net neutrality, so what does that mean for you?


(Most) Internet Traffic Was Created Equal

This ruling has the greatest effect on the most visible type of internet traffic: the World Wide Web (WWW). Since the birth of the WWW, we've really only needed to pay for basic access. Once you had your account with an ISP, whether with Prodigy, Compuserve, AOL, Comcast, Time Warner, or any of the multitude of other ISPs, you were able to access all the information out there. If a content provider wanted to charge for content--as a bookstore or publisher might charge for a magazine--you could pay your access fee to the content provider and their content would be yours, or at least accessible for a contracted period of time. Though it might charge tiered prices based on the bandwidth consumed, your ISP did not make any distinction in the price it charged for access to that information on the basis of its actual content.


As of last week, now they can.


All Internet Traffic Must No Longer Be Treated Equally

With last week's ruling, your ISP is now allowed to charge you rates that may differ on the basis of what content is actually being served, on top of any costs the content publisher may dictate. For example, if Time Warner is your ISP, they are now free to charge you more to access MSNBC than they would charge you to access CNN, which is a Time-Warner property. On the other hand, Time Warner could now also waive or reduce the costs to get behind a CNN paywall--if such existed--provided you were already paying Time Warner for internet service. Furthermore, this ruling allows an ISP to freely throttle provided bandwidth on the basis of the content consuming the bandwidth.


What Does This Mean For You

Of course, you've probably been using SolarWinds Network Performance Monitor, NetFlow Traffic Analyzer, and VoIP & Network Quality Manager to monitor and manage the traffic on your own internal networks (if not, you should be; check out the demo!) in ways similar to what the FCC is now authorizing for ISPs. Most users understand why that is reasonable at work: my boss would prefer that I spend more of my time writing docs than geeking out on Pinterest. But it's significantly different when we're talking about cruising the web from the comfort of our own homes.


An interesting aside is that this whole issue is largely only a concern in the more developed parts of the world, where internet access has become as commonplace and taken for granted as clean water. In areas of the world where hotspots are few and far between, the internet has always been experienced in significantly different ways. What's your experience?

Last time I explained how quantum computing relies on the phenomenon of 'superposition'. And though this year the NSA is spending $79.7 million on quantum computing research and development projects, the recent award-winning achievements in particle physics tell us that a quantum computing platform would most likely take years if not decades to engineer.


At stake in the effort, besides a new era of computing with mind-boggling power and scale, would be a breakthrough in code-breaking, enabling access to AES encrypted data already being warehoused in Bluffdale, Utah. Since the value of that data decreases based on the time it takes to break the cipher protecting it, a quantum computing platform that takes decades to complete would be of decreasing value with regards to data warehoused now. In short, assuming you use an AES cipher to protect the privacy of your data now, how much would you care if the NSA gained access to 2014 data sometime between 2034 and 2064?


Generating Encryption Keys


The National Institute of Standards and Technology (NIST) publishes a series of Federal Information Processing Standards (FIPS) documents related to information security. FIPS PUB 140-2 lays out criteria for accrediting cryptographic modules. If you adhere to FIPS 197 in implementing AES within a computer application, for example, then NIST's Cryptographic Module Validation Program (CMVP), using criteria in FIPS 140-2, validates your application as FIPS-compliant.


If encryption software does not generate random keys and protect those keys from interception, then the software only guarantees that its ciphered data is secure from those who do not know how to exploit its key management flaws. You can imagine the trouble with CMVP's integrity were they to certify a non-secure key generation module--which, yes, they seem to have done with RSA Corporation's BSAFE cryptographic system. Since 2004 BSAFE has been generating keys that are accessible to the NSA via an engineered backdoor.


Worse than CMVP's implied incompetence in validating BSAFE is its possible collusion with the NSA in getting BSAFE's Dual_EC_DRBG key generation backdoor into circulation as part of a trusted cipher system. And in any case, the verifiability of any cryptographic system is a sorely open issue. If we can't trust NIST, then who can we trust to verify the cryptography we use but do not create ourselves?


The Cost of RSA's Profitability


Security experts have been aware of the flaws in BSAFE's key generation since 2007, two years after the BSAFE specification was published. Only with a recent Snowden-sourced story did we learn that the NSA paid RSA $10 million to make the rigged Dual_EC_DRBG component the default random number generator for BSAFE.


Among other things, we have another confirmation that verifiably AES-based key generation and data encryption are the only truly secure cryptography options in our contemporary context. Trusting the source of your software for monitoring network devices is more important than you may have thought.

What we are going to learn today: Kiwi needn’t always be a bird or a fruit. For IT pros, it can also be a Syslog Server.


If you don’t know this already, Kiwi® Syslog Server is log management software for the Windows® platform that collects, consolidates, displays, stores, alerts on, and forwards syslog and SNMP trap messages from network devices, such as routers, switches, Linux® and Unix® hosts, and other syslog and trap-enabled devices. Let’s look at 5 MOST USEFUL LOG MANAGEMENT OPERATIONS you can perform with Kiwi Syslog Server.


#1 Monitor Syslog Messages & SNMP Traps from Network Devices & Servers

Kiwi Syslog Server listens to syslog messages and SNMP traps from routers, switches, firewalls, servers and other syslog and trap-enabled devices. Kiwi Syslog Server collects these messages from various sources and displays them on a centralized Web console for easy and secure access. You can also

  • Filter messages by host name, host IP address, priority, message text keyword, or time of day
  • Generate graphs of syslog statistics over specific time periods


#2 Automate Alerts for Incoming Syslog Messages

Kiwi Syslog Server provides intelligent alert functionality to notify you when a syslog message meets predefined criteria (based on time, type of syslog message, syslog source, etc.). By default, Kiwi uses the following syslog priority levels, which help you immediately understand a syslog message for any follow-up action.


  • Level 0: System is unusable
  • Level 1: Action must be taken immediately
  • Level 2: Critical conditions
  • Level 3: Error conditions
  • Level 4: Warning conditions
  • Level 5: Normal but significant condition
  • Level 6: Informational messages
  • Level 7: Debug-level messages


Based on the type and priority of the syslog message received, you can schedule an email notification, play a sound to alert you, run an external program, or forward the alert as a syslog message to another server or database.
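For instance, a follow-up action policy keyed to these priority levels might be sketched like this in Python (the level names follow the standard syslog severities; the email-versus-log split is an invented example policy, not Kiwi's default):

```python
# Sketch: route follow-up actions by syslog severity (0 = most severe).
# Names match the standard syslog severity levels listed above.

SEVERITY = ["Emergency", "Alert", "Critical", "Error",
            "Warning", "Notice", "Informational", "Debug"]

def action_for(level):
    """Email on levels 0-3, log-only otherwise (an illustrative policy)."""
    name = SEVERITY[level]
    return ("email", name) if level <= 3 else ("log", name)

print(action_for(1))  # ('email', 'Alert')
print(action_for(6))  # ('log', 'Informational')
```

In Kiwi itself you would express the same split as alert rules filtered on priority.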


#3 Schedule Syslog Archive & Clean-Up Actions

Kiwi Syslog Server has an integrated scheduler that allows you to schedule and run automated archival and clean-up tasks.


  1. Scheduled Archival: Kiwi Syslog Server allows you to schedule archive options defining the source, destination, archive frequency and notification options. This task allows you to copy or move logs from one location to another, compress the files into individual or single archives, encrypt those archives, create multi-part archives, create file or archive hashes, run external programs, and much more.
  2. Scheduled Clean-Up: The clean-up task removes/deletes files from the source location that match a specified criteria. This task can be scheduled to occur over any interval or at any date and time desired, or at application/service start-up.


You can also easily customize and implement your organizational log retention policy to meet international compliance standards such as SOX, PCI-DSS, FISMA & more.


#4 Forward & Archive Windows Event Logs

Kiwi Syslog Server offers the free Log Forwarder for Windows, which allows you to forward all event logs from your Windows servers and workstations to Kiwi Syslog Server and archive them on a schedule to one or more disks in the form of log files.


#5 Securely Transport Syslog Messages Across Any Network (LAN or WAN)

With the help of the free, optional, Kiwi Secure Tunnel, you can receive, compress and securely transport syslog messages from distributed network devices and servers to your instance of Kiwi Syslog Server. Kiwi Secure Tunnel is made up of a client and a server. The Tunnel Client gathers messages from one or more devices on a network and forwards the messages across a secure link to the Tunnel Server. The Server then forwards the messages on to your Kiwi Syslog Server instance.


As you can see, Kiwi Syslog Server can help you simplify most of your log management tasks for syslog messages. This is just a summary of some of the major and common operations that you can accomplish with Kiwi Syslog Server. To explore more features, do visit www.kiwisyslog.com.



(Yes, this title was inspired from Superman opening credits. If you want to watch the clip: https://www.youtube.com/watch?v=OjS6B4KuPY0)

Sometimes when installing an application on a server, users will run into the issue where the desired port is already blocked by another application. Rather than installing third-party software to determine which application is blocking the port, we can use tools that come standard with Windows. Here we will walk through one way of determining which application is using which port.


There are two tools we will need:

  • Command prompt
  • Windows Task manager


For example let’s assume that a syslog application was installed on a server only to find out that port 514 is already in use. To determine what application is blocking the port we must perform the following steps:


      1. First we notice that port 514 is already in use:




      2. Bring up a command prompt and run the command netstat -a -o -p udp | find "514"



    • -a displays all connections and listening ports
    • -o displays the process id (very important)
    • -p displays the desired protocol (here we are interested in UDP)
    • We can also specify the specific port of interest. Here we are interested in port 514



     3. Note that the Process ID (PID) associated with the blocked port is 6724. Now we need to find out what application owns PID 6724. To do this, bring up Windows Task Manager and, under “View” > “Select Columns…”, select the option for “PID (Process Identifier).”




     Next Click “Ok” and you will now see the PIDs displayed in Windows Task Manager.


     4. When we search for the application associated with PID 6724, we can see that SyslogService.exe*32 is using port 514.



     Our options are either to use the syslog server currently running or to kill PID 6724, which will make port 514 available.
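Steps 2 and 3 can also be scripted. Here is a hedged Python sketch that parses netstat output for the owning PID; the sample line mimics typical Windows netstat output rather than being captured from a live system:

```python
# Sketch: find the PID bound to a UDP port by parsing the output of
# `netstat -a -o -p udp`. On Windows, the PID is the last column.

def pid_on_udp_port(netstat_output, port):
    """Return the owning PID for the given UDP port, or None if not found."""
    for line in netstat_output.splitlines():
        parts = line.split()
        if (len(parts) >= 3 and parts[0] == "UDP"
                and parts[1].endswith(":%d" % port)):
            return int(parts[-1])
    return None

sample = "  UDP    0.0.0.0:514            *:*                    6724"
print(pid_on_udp_port(sample, 514))  # 6724
```

Feeding this the real output of `netstat -a -o -p udp` (e.g. via subprocess) would take you straight to the PID without the Task Manager detour.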


What's New in LEM 5.7

Posted by DanaeA Jan 17, 2014

SolarWinds Log & Event Manager (LEM) v5.7 provides the following usability and performance enhancements:

  • nDepth Scheduled Searches
    • Schedule nDepth searches to run automatically once or on a recurring basis
    • Scheduled Searches can also be shared between users
    • Email search results as a CSV attachment, or generate an event notifying you of search completion
  • Agent Node License Recycling - Each time a VM desktop is created, an agent connects to LEM and a license is used. This continues to happen as desktops are created and destroyed, eventually causing all licenses to be used. License recycling allows you to collect and reuse licenses from nodes that have not sent an event to the LEM manager within a specified amount of time
    • Define a schedule to automatically recover unused agent licenses
    • Specify the virtual desktop and workstation devices where licenses can be recovered
  • Scalability Enhancements
    • Improved rules engine and appliance-side processing
  • FIPS Self-Certification
  • Additional Improvements
    • Agents now use Java 7
    • Create User-Defined Groups more easily with the new CSV import
    • Deploy LEM to Hyper-V® on Windows 2012 R2
    • New connectors for NetApp®, IBM®, Brocade, and more


For more information on using LEM, please visit the fount of information on all that is LEM: Log & Event Manager

Part of your role as a system administrator is to oversee the network infrastructure that supports your company’s critical business applications. Therefore, you likely devote most of your time keeping the network up and running and performing optimally. Nevertheless, there are still occasions where you experience unexpected network outages. That’s the reality of network management. So, what does it take to stay ahead of these unforeseen breakdowns? Here are some suggestions that will simplify your administration efforts and help you be better prepared for a ‘bad day’.

Maintain a Current Device Inventory List: Keep an updated device inventory list with details of your network components such as ports, interfaces in use, hardware details, servers, virtual machines, network storage, and so on. It’s important that you regularly monitor these pieces as they directly impact network performance. Having an up-to-date asset database helps you track all of your IT equipment for device replacements, end-of-life information, device configuration changes, and the status of devices in use and not in use.

Configure SNMP and Flow Technologies: SNMP (Simple Network Management Protocol) fetches performance metrics from your network devices. There are different versions of SNMP available and you can configure an appropriate version based on your data requirements and the significance of the device. To enable SNMP for a Cisco® router or switch, you can telnet to the device, go to the configuration mode, and add a read-only or read-write community string. SNMP community strings are like passwords and enable monitoring on network devices. In addition, you can enable SNMP traps to receive unsolicited trap notifications or requests on the status of a network device.
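For reference, enabling a read-only community string and traps looks roughly like the following Cisco IOS fragment; the community string and trap receiver address are placeholders, so verify the exact commands against your device's IOS version.

```
! Illustrative Cisco IOS commands; "MyR0String" and the receiver
! address 192.0.2.10 are placeholders -- adapt to your environment.
configure terminal
 snmp-server community MyR0String RO
 snmp-server enable traps
 snmp-server host 192.0.2.10 version 2c MyR0String
end
```

Treat the community string like a password: avoid defaults such as "public", and prefer SNMPv3 where devices support it.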

Similarly, enabling flow technologies on routers and switches helps furnish data that can be used to analyze traffic and bandwidth usage. For this, you need to configure the flow-based packet analysis on the devices that need monitoring.

Perform Network Performance Baselining: Performance baselines are a standard set of metrics that define the normal working conditions of the network’s infrastructure. Baselining is a critical aspect of network performance monitoring. You accomplish this by running network baseline tests and determining the standard threshold values for networking hardware. Baselining helps determine and set alerting thresholds for situations where the network is experiencing performance slowdowns. It also aids in determining requirements for hardware upgrades and purchases.
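As a sketch of the baselining arithmetic, here is a hedged Python example: the samples and the mean-plus-three-standard-deviations convention are illustrative choices, not a prescription.

```python
import statistics

# Hypothetical 5-minute CPU utilization samples (percent) collected
# during a normal-operations baselining window.
samples = [22, 25, 19, 31, 27, 24, 29, 21, 26, 23]

mean = statistics.mean(samples)
stdev = statistics.pstdev(samples)  # population std dev of the window

# One common convention: alert when usage exceeds mean + 3 std devs.
threshold = mean + 3 * stdev

print(round(threshold, 1))  # 35.2
```

The same calculation applies per interface, per metric; the resulting thresholds feed directly into the alerting configuration discussed below.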

Every organization should establish network monitoring policies according to the organization’s compliance level. Additionally, clearly define the scope of activities to match these standards.

Identify and Define Alerts and an Escalation Matrix: Depending on the thresholds you set, your network monitoring system will trigger alerts on various network issues and errors. It is important to clearly identify and define the point of contact or person designated to receive the alert. In the case of escalations, you need to decide how the alert will be routed based on its severity. Failure to attend to an alert on time is equivalent to not having any alerts configured at all. Delivering timely alerts to the right person significantly reduces network downtime and serious damage to business operations.
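One way to picture an escalation matrix is as an ordered contact chain per severity; this Python sketch (contact names and severity levels are hypothetical) escalates each time an alert goes unacknowledged.

```python
# Hypothetical escalation matrix: severity level -> ordered contact chain.
ESCALATION = {
    "critical": ["oncall-engineer", "network-manager", "it-director"],
    "warning":  ["oncall-engineer", "network-manager"],
    "info":     ["noc-mailbox"],
}

def route_alert(severity, attempts_unacknowledged=0):
    """Return who should receive the alert, escalating one step up the
    chain each time it goes unacknowledged."""
    chain = ESCALATION.get(severity, ["noc-mailbox"])
    # Cap at the last contact in the chain.
    idx = min(attempts_unacknowledged, len(chain) - 1)
    return chain[idx]

print(route_alert("critical"))                             # oncall-engineer
print(route_alert("critical", attempts_unacknowledged=5))  # it-director
```

Most monitoring systems let you express exactly this kind of routing natively; the point is to decide the chain before the outage, not during it.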

Finally, understand that your network will not remain the same. Be sure to plan for network expansions and technology advancements that will be necessary to accommodate monitoring.

See this whitepaper to learn more about streamlining enterprise network monitoring.

I just had a brief discussion with a dev co-worker and we discussed this very topic. We also provided some examples showing that no matter what policies are in place, security is only as good as the people who are responsible for enforcing it. At some point, you just have to trust your people. That said, let's move on to example numero uno.


Example #1

My co-worker used to work for the Department of Defense as a contractor (no, not him). Passwords were given to him in a vault and he was made to memorize them (as opposed to simply writing them on paper) all the while being watched by a government official whose job it was to ensure that no written record of the passwords existed. On the surface, my friend complied. He remembered the passwords alright...just long enough to write them down though (when no one was looking, of course).


The same employee at the same job was also watched by a government official as he worked, to make sure data was not "misused." Believe it or not, even government officials are human. At some point they too take breaks, go to lunch, become friendly, and even gain your trust. Simply put, the opportunity will arise to compromise security because people are human.


Example #2

For years I worked at the SolarWinds headquarters in Austin, TX. Part of my daily routine was to download a podcast via Bittorrent over the wifi connection straight to my phone. This past August, I moved to the Salt Lake City office and quickly realized they plugged the torrent hole in the firewall here. How would I get my show onto my phone? Oh, the perils of security! Puh-lease. All I did was RDP into my laptop, download my show there, then put it in my Dropbox. Presto! Five minutes later I was enjoying the show.


The Moral

Like I said earlier, "...security is only as good as the people who are responsible for enforcing it. At some point, you just have to trust your people." If you don't trust those around you, you may have bigger issues that need addressing. That's just my 2¢.


Do you have an example?

If you do, tell me about it in the comment section below. Now if you'll excuse me, I need to find some black tape to put over my webcam.

We heard you wanted more deep-dive, technical training on your SolarWinds products, and we listened! We're pleased to announce the brand-spanking-new Customer Training Program for current, in-maintenance customers. This is a totally free program that we are delivering as part of your maintenance (who else does that?!). Even though our products are very easy to use, we want to ensure every customer gets the most out of their products. Initially launching with four NPM classes, we're planning to grow this program substantially in 2014 to offer more topics on more products very soon.


All classes consist of both lab and lecture - so the lessons are very applicable and transferable to what you're doing on a day to day basis. Classes are hosted by a professional trainer and class sizes are limited to ensure a quality learning experience.


To sign up for a class, you must be current on maintenance for at least one product - but it doesn't have to be the product you're taking the class on. So, (for example) feel free to sign up for an NPM class if you are a Toolset customer interested in learning more about NPM.


Where to Sign Up


You can sign up in the Customer Portal.




Current Classes


Currently, we have four NPM classes offered at various times on various dates. If the class you want is full, feel free to write us at CustomerVoice@solarwinds.com and we'll let you know as soon as we add new classes to the schedule.


SolarWinds NPM 201: Advanced Monitoring – Universal Device Poller, SNMP traps, Syslog Viewer

NPM 201 digs into some of the more advanced monitoring mechanisms available. We’ll get away from the “out-of-the-box” object configs and default monitoring settings to create a customized monitoring environment. If you have a good understanding of MIBs, OIDs, and SNMP (or would like to), this is probably the class for you.


SolarWinds NPM 202: Performance Tuning – Tuning, Remote Pollers, License Management

NPM 202 focuses on maximizing performance. This means tuning your equipment to optimize its capabilities, tuning your polling intervals to capture the data you need without bogging down the database with less critical data, and adding additional pollers for load balancing and better network visibility.  This class is great if your NPM could use a tuneup, or if you are considering expanding your deployment with additional licenses, polling engines, or web servers.


SolarWinds NPM 101: Getting Started – Maps, Users, Custom Views

NPM 101 will take a user from the initial install through customization and daily use. We cover the Orion core platform (getting used to NPM’s web interface), network discovery and adding devices, creating maps, adding users, and creating custom views.


SolarWinds NPM 102: Digging In – Advanced Alerts, Reporting, and More

NPM 102 dives into advanced alerts and reporting. We cover creating and managing custom alerts, alert suppression, device dependencies, and custom properties. We create and automate reports, and also show how to integrate those reports into custom views for easy, real-time access.


Comments by Training Participants

“This class was definitely worth my time. It provided me with lots of information and tactics to better manage my network. I look forward to the evolution of this training program because my job is always changing and I want to stay up-to-date with how SolarWinds NPM can help me with my network.”

Corinne Johnson



“The training program has definitely been worth my time. It has provided me with in-depth product information and tactics to help me monitor my network. I am looking forward to taking more classes from SolarWinds and exploring other products. My job is continually evolving and this new training program that SolarWinds has put together is helping me to maintain a competitive edge.”

Will Luther




We're ramping up this program, so watch this blog and the training page in the portal for more classes, on more products, at more times all throughout 2014 and beyond. And as always, let us know your requests at CustomerVoice@solarwinds.com.


Don't Forget About Customer-Only Trials!


And... don't forget about the benefits of downloading customer trials from the customer portal. You have access to every SolarWinds product with a streamlined evaluation experience including:


  • No need to fill out a registration form.
  • The download will not trigger emails about other products or offers.
  • Unless you reach out to us, we will only contact you at the beginning and midway through your trial.
  • If you have questions or need assistance with your evaluation contact customersales@solarwinds.com.

WiFi with 3D Vision

Posted by Meryl Wilk Jan 15, 2014

First came WiFi, that essential technology that keeps us online – provided we have one PC hooked up to the cable modem and router, and another PC with a wireless networking card.


Then came WiVi, which uses WiFi technology to “see” through walls to detect motion. According to www.popsci.com, the US Navy discovered radar when they noticed  that a plane going past a radio tower reflected radio waves. Much more recently, Massachusetts Institute of Technology (MIT) scientists applied this same idea to create devices that can monitor human (or possibly other) movement by tracking the WiFi signal frequency changes in buildings or behind walls.


And now, we have WiTrack. MIT scientists have taken the WiVi idea a step further. The MIT article, WiTrack: Through-Wall 3D Tracking Using Body Radio Reflections, describes WiTrack as “…a device that tracks the 3D motion of a user from the radio signals reflected off her body…WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. It transmits wireless signals whose power is 100 times smaller than Wi-Fi and 1000 times smaller than cellphone transmissions.”


Applications for WiTrack


Applications for WiTrack are really varied, and include:


  • Security and law enforcement, from detecting intruders to avoiding or minimizing potentially violent situations, such as in battle or at a crime scene.
  • Rescue operations, for detecting motion inside hard-to-get-to places, such as collapsed buildings or avalanche sites.
  • Gaming, in which you can freely move about your home to participate in the fun. Imagine running down the hall and up the stairs as part of the gaming experience…
  • Monitoring any three-dimensional being who might need to be checked in on, from your new puppy, to your kids, to your great-grandmother. MIT points out that a WiTrack monitoring system can do what current camera-based monitoring systems do without using cameras to invade anyone’s privacy.


Find Out More


For even more details on WiTrack, check out the video, WiTrack: 3D Motion Tracking Through Walls Using Wireless Signals. And for all the details on how WiTrack works, see the MIT paper, 3D Tracking via Body Radio Reflections.

CERT recently found a new type of DDoS botnet that infects both Windows® and Linux® platforms. This highly sophisticated cross-platform malware uses the computers it compromises to mount DNS amplification attacks.



A DNS Amplification Attack is a Distributed Denial of Service (DDOS) tactic that belongs to the class of reflection attacks in which an attacker delivers traffic to the victim of their attack by reflecting it off of a third party so that the origin of the attack is concealed from the victim. Additionally, it combines reflection with amplification: that is, the byte count of traffic received by the victim is substantially greater than the byte count of traffic sent by the attacker, in practice amplifying or multiplying the sending power of the attacker.[1]
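The amplification arithmetic can be sketched in a few lines; the byte counts below are illustrative figures, not measurements.

```python
# Sketch of why DNS amplification multiplies an attacker's sending power.
# Byte counts are illustrative assumptions, not measurements.
query_bytes = 60       # small spoofed UDP DNS query (assumed)
response_bytes = 3000  # large response, e.g. ANY query with DNSSEC records (assumed)

amplification_factor = response_bytes / query_bytes

# With reflection, the traffic the victim receives scales by that factor.
attacker_mbps = 10
victim_mbps = attacker_mbps * amplification_factor

print(amplification_factor)  # 50.0
print(victim_mbps)           # 500.0
```

Because the query's source address is spoofed to the victim's, the victim sees only traffic from legitimate DNS servers, which is what conceals the attacker.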



On Linux systems, this botnet takes advantage of systems that allow remote SSH access from the Internet and have accounts with weak passwords. The attacker uses dictionary-based password guessing to infiltrate systems protected by SSH. While executing an attack, the malware reports back to the command and control server about the running task, the CPU speed, system load, and network connection speed.
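On the defensive side, a dictionary attack leaves a telltale trail of failed logins. Here is a minimal Python sketch of spotting it in sshd-style log lines; the log excerpt and threshold are made up.

```python
from collections import Counter

# Hypothetical sshd log excerpt; a dictionary attack shows up as many
# failed password attempts from one source address.
LOG = """\
Failed password for root from 203.0.113.7 port 52311 ssh2
Failed password for admin from 203.0.113.7 port 52312 ssh2
Failed password for root from 203.0.113.7 port 52313 ssh2
Accepted password for alice from 198.51.100.2 port 40022 ssh2
Failed password for root from 203.0.113.7 port 52314 ssh2
"""

def suspicious_sources(log_text, threshold=3):
    """Return source IPs with at least `threshold` failed password attempts."""
    failures = Counter(
        line.split(" from ")[1].split()[0]
        for line in log_text.splitlines()
        if line.startswith("Failed password")
    )
    return [ip for ip, n in failures.items() if n >= threshold]

print(suspicious_sources(LOG))  # ['203.0.113.7']
```

This is exactly the kind of correlation a log management tool automates; tools like fail2ban take the next step and block the offending address.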



The Windows variant of the botnet installs a new service on target systems in order to gain persistence. First, the C:\Program Files\DbProtectSupport\svchost.exe file is installed and run. This file registers a new Windows service – DPProtectSupport, which starts automatically at system startup. Then, a DNS query is sent to the server, requesting the IP address of the .com domain. This domain is the C&C server, and the bot connects to it using a high TCP port, different from the one used in the Linux version. In the Windows version of the malware, OS information is sent to the C&C server in text format.


This botnet was discovered in December 2013, and after many tests, anti-virus software was able to detect it more reliably on Windows than on Linux – putting Linux systems at higher risk of compromise.


It's best to always gain real-time, actionable intelligence from your system and network logs so that you can detect any suspicious and unwarranted activity – which might be an indicator of a security breach!

With the continuous increase in the number of security breaches every year, it is critical for you to take a closer look at a few things you can do from an IT security standpoint to minimize the risks. One of the key steps is complying with industry-specific regulations like SOX and HIPAA/HITECH and having third-party organizations conduct audits for key systems and controls.


Why do audits matter?

Compliance with data security standards can bring major benefits to businesses of all sizes, while failure to comply can have serious and long-term negative consequences. Compliance involves identifying and prioritizing strategic objectives and managing the business across people, processes, information, and technology to realize those objectives. It also impacts day-to-day operations, which in turn affect troubleshooting and system availability.


Being in line with IT compliance regulations such as PCI DSS, GLBA, SOX, NERC CIP, and HIPAA requires businesses to protect, track, and control access to and usage of sensitive information. Let us look at some of the top reasons to audit:



You may be working with clientele spread across industries, and these audit reports really matter to them. For example, financial services organizations tend to request these reports at the beginning of every year, whereas healthcare groups need their audit reports later in the year for their own auditing purposes. These reports have a direct impact on their productivity, sales, and reputation.



Let us consider HIPAA compliance for example. The core focus of HIPAA compliance is to protect the confidentiality, integrity, and availability of electronic protected health information or “ePHI.”  Failure to comply with HIPAA’s regulations carries serious consequences for any business that interacts with ePHI, including criminal sanctions, civil sanctions, fines and even possible prison sentences. The guidelines on violations include up to $1.5 million in penalties for breaches.



You need to have visibility into security and compliance, and protection of your data. To ensure this, you need to collect and consolidate log data across the IT environment, correlate events from multiple devices, and respond to them in real time. Conducting audits also sets a benchmark for implementing best practices and ensures that your organization is in line with the latest technology trends.


It is expected that the number of targeted attacks will increase in 2014; this forecast is based on the continuously growing number of DDoS attacks over the last couple of years. Hackers might move away from high-volume advanced malware because the chances of it being detected are high. Still, lower-volume targeted attacks are expected to increase, especially with the intent of accessing financial information and stealing identities or business data.


With all this set to happen, it is advisable to ensure more visibility into the devices on your network as part of your information security measures. Compliance and compliance audits will definitely come in handy as you head further into 2014.


Stay secure my friends!!



How to Migrate Kiwi CatTools to Another Computer Along with Activities & Devices

1. From Start > All Programs > SolarWinds CatTools, click CatTools.

2. Now, go to File > Database > Export and export the devices and activities using the options highlighted in the screen shown below:

3. Save the exported files, which are in '.kbd' format.

4. Save the 'Variations' folder from: <directory>\CatTools\

5. If you are using CatTools 3.9 or higher, deactivate the current license using License Manager, which can be downloaded from here.

6. Install CatTools on the new system and license it.

7. Copy the following from the old system to the new system:

  • exported '.kbd' files and
  • 'Variations' folder

8. Open the Activities file and ensure that all paths are valid. For example, if CatTools was previously installed to c:\program files\ and is now installed to c:\program files (x86)\, you will need to reflect this within the INI file.

9. Open the CatTools Manager > File > Import > import the two '.kbd' files.

10. Copy the 'Variations' folder to the new CatTools installation directory.

11. Restart the CatTools service.
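The path fix in step 8 can be scripted. Here is a minimal Python sketch, assuming a plain-text INI-style file; the old and new paths are illustrative, so check your actual exported files for the paths they contain.

```python
# Illustrative install paths -- substitute the real old and new locations.
OLD = r"c:\program files\CatTools"
NEW = r"c:\program files (x86)\CatTools"

def fix_paths(text, old=OLD, new=NEW):
    """Rewrite every occurrence of the old install path in the file text."""
    return text.replace(old, new)

sample = r"ScriptPath=c:\program files\CatTools\Scripts"
print(fix_paths(sample))  # ScriptPath=c:\program files (x86)\CatTools\Scripts
```

Applied over each exported file before import (reading with open(), writing the result back), this saves hand-editing when the install directory changes.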

For more information about CatTools visit: Configuration Management and Network Automation | Kiwi CatTools


Wafer-thin flash drives

Posted by LokiR Jan 13, 2014

There's a new design concept out there for flash drives the thickness of a sticky note. The company, called dataSTICKIES, uses a relatively new material called graphene and a proprietary wireless data transfer protocol to achieve this wafer-thin thickness.



Now, graphene is my favorite new material; I've been waiting close to 10 years for someone to come out with a viable commercial application, and this is a pretty cool proto-product. Graphene is a form of crystalline carbon (essentially atom-thick graphite) that is super strong and an excellent conductor. Graphene research spans fields from medicine to energy to quantum science.



The dataSTICKIES company is using graphene to store data. Because graphene is an atom thick, the hard drive becomes a flat sheet. Instead of using USB to transfer the data, the company developed an optical data transfer surface to take advantage of the super thin material. This also makes transferring data easier since you no longer have to deal with the USB superposition effect (i.e., it takes at least three tries to connect the USB cable) or moving computers around to get to the USB ports.



Another cool thing about dataSTICKIES is that it looks like you can increase data capacity by stacking stickies. I'm not sure how that's supposed to work, though, since stacked stickies are also supposed to function as discrete drives.



These would be pretty awesome anywhere, but especially for people on restricted networks. Need to install some more pollers or every SolarWinds product you bought? Just slap a sticky on the computer.

Today is the last day of that annual ritual celebration of all things technological we know simply as CES. Thanks to CES, we can all be supremely disappointed in the otherwise simply amazing capabilities of the gadgets we all got just last month. You may even be reading this post on a device that was a star of a past CES. Ain't tech grand?


So, What Was at CES 2014?


For your humble docs writer for SolarWinds NPM, attending CES is not remotely related to my listed job requirements. As a SolarWinds geek, though, I do have a keen personal interest in the latest whiz-bangery showing up out in Vegas.


And a lot of whiz-bangery there is: 4K HDTV, 3D printers, 2-way hybrid "laptabs", and 1TB wireless hard drives. It's all stuff we should expect to see on our networks or in our homes soon. Thankfully, Network World has the rundown for those of you who, like me, weren't able to make it. I'm not sure I need a Bluetooth-connected toothbrush, but the personal hydrogen reactor and the robot drones look like a lot of fun. Of course, wearable tech was the thing this year, so I expect to be ordering my very own Dick Tracy watch in the very near future.


For those of you who were able to make it, what have you seen that the rest of us network-oriented geeks would find fascinating?

Network admins constantly face challenges when implementing security procedures and bandwidth optimization processes in their network. Using a Virtual Local Area Network (VLAN) is one smart solution to effectively managing workstations, security, and bandwidth allocation. Although VLANs can be very useful, they can also present a lot of issues when managing them in huge enterprise networks. In this blog we’ll discuss some of the common challenges admins face when implementing VLANs and best practices for managing them. Before we dive into that though, let’s take a look at the basics of VLANs and how they work.


What is a VLAN?

A VLAN is a logical group of workstations, servers, and network devices that behave as if they were on the same Local Area Network (LAN), regardless of their physical location. It allows communication among users who share a single broadcast or multicast domain as if they were in a single LAN environment.


Why Do We Need VLANs?

The purpose of implementing a VLAN is to utilize security features and to improve the performance of a network. Assume you have two different departments: finance and sales. You want to separate them into VLAN groups for reasons such as tighter security (limited visibility to financial data), better bandwidth allocation (for VoIP calls in sales), and load balancing. In this case, VLANs allow you to optimize network usage and map workstations based on department and user accounts.
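As an illustration, separating those two departments on a Cisco switch looks roughly like the following IOS fragment; the VLAN IDs and interface names are placeholders for your environment.

```
! Illustrative Cisco IOS configuration separating finance and sales.
! VLAN IDs and interface names are placeholders.
vlan 10
 name FINANCE
vlan 20
 name SALES
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
```

Each access port then belongs to exactly one broadcast domain; traffic between the two VLANs must pass through a router or layer-3 switch, which is where access-control policy is enforced.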


Typical VLAN Challenges and How to Manage Them

Although there are many benefits to implementing VLANs, there are also certain disadvantages. In a logically connected network, a high-risk virus in one system can infect other users in the same network. If you want users to communicate between two VLANs, you might need additional routers to control the workload. Latency can also be harder to control and diagnose in a VLAN than in a traditional LAN. Network administrators and managers can run into problems even after implementing a VLAN properly and efficiently. In a traditional LAN, it’s easy to find out whether a network device is performing or not. However, understanding what’s causing your network to run slowly in VLANs with virtual trunks or paths is a more difficult process. For instance, assume you want to configure a VLAN in your network. You can choose to separate users based on departments and enable security, but if you’re creating networks within your physical switches you also have to think about routing, DHCP, DNS, etc.


Network administrators effectively manage VLANs by taking a step back and understanding whether the number of VLANs is appropriate for the number of endpoints in the network. It’s also important to understand what data needs to be protected from other traffic using a firewall. In addition, VLANs can become more efficient when combined with server virtualization. In a virtualized data center environment, the VLAN brings the physical servers together and creates a route. By allowing virtual machines to move across physical servers in the same VLAN, administrators can keep tabs on the virtual machines and manage them more efficiently.


Managing a VLAN becomes much easier for network administrators when network traffic, user access, and data transfers are isolated and routed separately. It’s also highly recommended to ensure primary network devices work properly before troubleshooting VLANs.

Since I revisited the topic of AES encryption and NSA surveillance, the Washington Post has published information, sourced through Edward Snowden, that the NSA is spending $79.7 million to pursue a quantum computer capable of running Shor's algorithm. If and when the NSA succeeds, all the currently unreadable AES-encrypted data the agency routinely captures en masse from internet backbones and stores in its Bluffdale, Utah computing center would become readable.

To give some sense of the agency's ambition, we need to talk about Schrödinger's cat.


Quantum Smearing


In Erwin Schrödinger's famous thought experiment a cat sits inside a Faraday Cage--a steel box from which electromagnetic energy cannot escape. Also in the box and inaccessible to the cat is a machine that contains: 1) some material whose probability of releasing radiation in an hour is exactly 50%; 2) a Geiger counter aimed at the material and rigged to release a hammer upon detecting any release of radiation; 3) a flask of poison positioned under the hammer. If the radioactive material releases radiation, the hammer smashes the flask, killing the cat.


In this box, however, as a quantum system, it is always equally probable that radiation has and has not been released. According to the Copenhagen interpretation of quantum systems, with its idea of superposition, the cat in the box exists as a smear of its possible states, simultaneously both alive and dead; an idea Schrödinger, along with Einstein, ridiculed as absurdly at odds with everyday life. Nobody has ever seen the material smear that is a Schrödinger cat*.


Qubits, or Herding Schrödinger Cats


David Wineland and team received their Nobel Prize in Physics in part for creating a very small Schrödinger cat. "They created 'cat states' consisting of single trapped ions entangled with coherent states of motion and observed their decoherence," explains the Nobel Prize organization in making its 2012 award.


Wineland developed a process to trap a mercury ion, cause it to oscillate within the trap, and then use a laser to adjust its spin so that the ion's ground state aligns with one side of its oscillation and its excited state aligns with the other side. On each side of the oscillation the ion measures 10 nanometers; the ion's two resting points in the oscillation are separated by 80 nanometers. And in effect, the mercury ion is guided into a "cat state" of superposition.


In this state the ion occupies both its ground and excited states and so meets the physical requirement for serving as a quantum computing "qubit"; using the difference in spin, superposition allows the ion to represent both 0 and 1 depending on where in its oscillation it is "read".


A quantum computer would be capable of breaking AES because a qubit is an exponential, not a linear, quantity. For example, in linear binary computing, using electrical-current transistors, 3 bits of data give you exactly one 3-digit binary number--101, 001, 110, etc.; but in quantum computing, 3 qubits can simultaneously represent all 2 to the 3rd such values.


So, to extrapolate how qubits scale, Wineland (43:00) offers this example: while 300 bits in linear computer memory may store a line of text, 300 qubits can store 2 to the 300th objects, holding a set that is much larger than the number of elementary particles in the known universe. And qubit memory gates would allow a parallel-processing quantum computer to operate on all of its 2 to the nth inputs simultaneously, making trivial the once-untouchable factoring problems upon which widely used public-key encryption schemes are based.
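The scaling claim is easy to verify with integer arithmetic (the ~10 to the 80th particle count is the rough, commonly cited estimate for the observable universe):

```python
# 300 classical bits hold one 300-bit value; 300 qubits span a state
# space of 2**300 amplitudes.
states = 2 ** 300
particles_in_universe = 10 ** 80  # rough, commonly cited estimate

print(states > particles_in_universe)  # True
print(len(str(states)))               # 91 -- 2**300 has 91 decimal digits
```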


Big Black Boxes


The NSA project is underway in Faraday cages the size of rooms. Could that work proceed without the direct involvement of Wineland's award-winning NIST team? Even presuming that involvement, the technical challenges of going from an isolated and oscillating mercury ion to a fully developed quantum computing platform would seem to imply years not months of work.


This time next year we may know the answer to how long the project will take. In the meantime, we continue assuming that AES-encrypted data remains secure and that the SNMPv3-enabled tools for monitoring systems with secure data do not introduce breaches in the systems themselves.


* Schrödinger implicitly formalized his own feline paradox with a differential equation that calculates the state and behavior of matter within quantum systems as a wave function (Ψ) that brings together mutually exclusive possibilities.

By default, Storage Manager places the database and its install files on the same drive. Over time the database will expand, so it is important to verify there is sufficient disk space before performing an upgrade of Storage Manager. You must have at least twice as much free space as the size of the largest table in the database. During the upgrade, Storage Manager builds temporary database tables from the actual database, and it is because of this that you must verify sufficient disk space on the drive.


MariaDB creates a number of different data files in the mariadb directory. The file types include:

  • .frm – format (schema) file
  • .MYD – data file
  • .MYI – index file


It is the *.MYD (data) files that we must check. Within the database directory, sort the files from largest to smallest, keeping track of the largest .MYD file in that directory.
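The 2x free-space check can be sketched with a short Python script using only the standard library; the demo directory and file sizes are made up.

```python
import os
import shutil
import tempfile

def largest_myd_bytes(data_dir):
    """Size in bytes of the largest .MYD (data) file under data_dir."""
    sizes = [
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, names in os.walk(data_dir)
        for name in names
        if name.lower().endswith(".myd")
    ]
    return max(sizes, default=0)

def enough_space_for_upgrade(data_dir):
    """Apply the 2x rule: free space on the drive must be at least
    twice the size of the largest .MYD table file."""
    needed = 2 * largest_myd_bytes(data_dir)
    return shutil.disk_usage(data_dir).free >= needed

# Demo against a throwaway directory with dummy table files.
with tempfile.TemporaryDirectory() as d:
    for name, size in [("small.MYD", 10), ("big.MYD", 1000), ("idx.MYI", 5000)]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(b"\0" * size)
    print(largest_myd_bytes(d))  # 1000 -- .MYI index files are ignored
```

Point largest_myd_bytes at the storage directory listed below for your platform to run the check against a real installation.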


The database can be found at the following location:


  • Windows - <installed drive>\Program Files\SolarWinds\Storage Manager Server\mariadb\data\storage  directory


  • Linux - <installed path>/Storage_Manager_Server/mariadb/data/storage directory




  • This is the default location of where the temporary database tables are created.


  • Storage Manager versions 5.6 and newer use MariaDB. For previous versions, MySQL is used. For versions prior to 5.6, substitute MySQL for MariaDB.


If you have insufficient disk space, you must point the temporary database tables to a drive that has sufficient space before upgrading. For more information on how to relocate the temporary database tables, please see the Storage Manager Administrator Guide.

A couple of years back, Gartner® released the results of a survey titled Debunking the Myth of the Single-Vendor Network [1]. The results showed that single vendor network costs and complexity would increase, while multi-vendor networks would continue to provide greater efficiency. Truth be told, most network admins nowadays manage multi-vendor network environments by deploying the most suitable and affordable devices in their networks.


Generally, when deploying devices in the network, network managers try to balance two important factors: the Total Cost of Ownership (TCO) and the operational efficiency of the devices. At a macro view, this trend may pave the way to build next generation enterprise networks, but this can also pose some operational risks. Deloitte (a professional services firm) also released the results of a survey they titled Multivendor Network Architectures, TCO and Operational Risk [2] that examines the operational, financial, and risk factors associated with the single-vendor and multi-vendor approaches in different types of enterprise networks. They claim that multi-vendor networks also have unique problems that need to be addressed.


Challenges Faced in a Multi-Vendor Network

From the network administrator’s point of view, there are a few challenges to face while managing a multi-vendor network:


Performance Management – When admins manage a multi-vendor network, they have to collect data on different parameters like device status, memory status, hardware information, etc. Retrieving customized performance statistics is a challenge because each vendor uses its own OIDs. Without a network management system (NMS) tool that supports multi-vendor devices, it’s nearly impossible to monitor all the devices by collecting and monitoring network data.


EOL/EOS Monitoring – In enterprise network management, managing EOL and EOS information is a huge task. Admins have to maintain a spreadsheet that tracks all the device details—from product number to part number and end-of-life dates. If you combine that with multi-vendor devices in an enterprise network, the result can create a huge burden of manual tasks for admins. To make this process easier, there are tools available that automate EOL/EOS monitoring and resolve basic issues for network admins.
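Under the hood, the core of such an EOL/EOS check is just a date comparison against the inventory. A minimal sketch; the device records and vendor names below are hypothetical, not real product data:

```python
from datetime import date

# Hypothetical inventory records -- real data would come from your
# tracking spreadsheet or an NMS export.
inventory = [
    {"device": "core-sw-01", "vendor": "VendorA", "end_of_life": date(2015, 6, 30)},
    {"device": "edge-rtr-02", "vendor": "VendorB", "end_of_life": date(2013, 1, 15)},
]

def devices_past_eol(records, today):
    """Return the names of devices whose end-of-life date has already passed."""
    return [r["device"] for r in records if r["end_of_life"] < today]
```

Run daily against the full inventory, this turns the spreadsheet chore into an automatic report of devices that need replacement planning.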


Hardware Issues and Repair – Having to face hardware failure in multi-vendor networks usually means having to face prolonged downtime. Each hardware issue may need its own unique support and repair specialists, and managing repair for a large number of devices can become a huge headache for network admins.


Applying Configurations to Different Devices – Deploying and managing new configurations for different devices requires a lot of manual effort and time. And, if there’s an issue with the configuration, admins have to manually Telnet/SSH the device to make the change or fix the issue. Admins can use network configuration management tools to resolve these problems.


Expertise to handle different devices – Network admins need to know how to manage different devices in multi-vendor networks. Expertise with each vendor’s command-line interface helps them operate the devices and retrieve information.


To successfully manage a multi-vendor network environment, network admins can adopt a two-pronged strategy.

  1. One, admins have to figure out the best combination of Layer 2 and Layer 3 network devices. The questions can range from the simple ‘How can I improve the router boot time?’ to ‘How will this device support our business-critical applications?’
  2. Two, find the right tool to manage your network, irrespective of the devices you deploy. There are only a few options in the market for organizations to deploy end-to-end single-vendor network devices, but the trend predominantly seems to favor multi-vendor environments, which administrators feel are more cost-effective.


Reduce Complexities!

While implementing solutions based on multi-vendor networks, ensure device configurations are set properly. Interoperability is essential. Admins can test deployment configurations before actual implementation; since each vendor may have different interpretations of network standards, it’s advisable to simulate first and deploy to the network later. If administrators are going after a multi-vendor network, they have to take certain precautions to achieve high stability. Using an NMS tool that supports different vendors will help in managing your network. Complexities like command-line interface (CLI) syntax can be replaced by tools that provide a simple user interface to manage day-to-day activities.


Diverse networking environments require more centralization, and providing continuous network availability should be the top priority. Ensure smooth network operations by monitoring all the key network parameters. For instance, network issues can be diagnosed by looking at information as simple as node status or as important as memory usage. When administrators use NMS tools, key information is retrieved from devices automatically, which makes an admin’s job much easier. If you have an SNMP-enabled device, your NMS can automatically poll relevant information from the device and display it in a readable format.
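To illustrate the vendor-OID point from earlier: a multi-vendor NMS typically keeps a per-vendor OID map and resolves the right OID before polling. In the sketch below, the sysUpTime OID is the real standard MIB-2 value, but the enterprise OIDs and vendor names are placeholders, not actual vendor assignments:

```python
# Standard MIB-2 OIDs work on any SNMP-enabled device.
STANDARD_OIDS = {"sysUpTime": "1.3.6.1.2.1.1.3.0"}

# Placeholder enterprise OIDs -- real values differ per vendor MIB.
VENDOR_OIDS = {
    "vendor_a": {"cpu_util": "1.3.6.1.4.1.9999.1.1"},
    "vendor_b": {"cpu_util": "1.3.6.1.4.1.8888.2.7"},
}

def oid_for(vendor, metric):
    """Resolve the OID to poll for a metric, preferring standard MIB-2 entries
    and falling back to the vendor-specific map."""
    if metric in STANDARD_OIDS:
        return STANDARD_OIDS[metric]
    return VENDOR_OIDS.get(vendor, {}).get(metric)
```

This is the kind of lookup a multi-vendor NMS performs before issuing the actual SNMP GET; a vendor missing from the map is exactly the "unsupported device" gap the post warns about.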


Single Central Console to Monitor All Devices

Whether you’re managing a small or large network, achieving efficiency and reducing the cost of maintenance should be an administrator’s goal. Heterogeneous network infrastructure can pose challenges when dynamic operations are performed across systems, but using an NMS tool that supports multiple vendors can definitely be helpful for engineers during their network creation and expansion process.


[1] Courtesy Gartner® survey, “Debunking the Myth of the Single-Vendor Network,” republished by Dell®. [2] Courtesy Deloitte® and Cisco® survey, “Multi-vendor Network Architectures, TCO and Operational Risk.”

Providing access from nearly anywhere, Wireless Local Area Networks (WLANs) deliver a great deal of flexibility to business networks and their applications. It’s important to note that WLANs are also susceptible to vulnerabilities, misuse, and attacks from unauthorized devices known as rogue wireless devices. To safeguard company data and ensure smooth operations, it’s crucial to take steps to prevent, detect, and block unwarranted activity associated with these rogue wireless devices.

What are Rogue Wireless Access Points?

As more wireless devices are introduced into a network, more wireless access points and transmissions are created within the network's proximity. When this happens, new, previously unknown access points (APs), sometimes from a neighbor’s network, can be introduced into your network. These are rogue wireless access points, and they are often introduced unintentionally by employees. On the other hand, the source can be a malicious actor who intentionally installs and hides the AP in order to gather proprietary information.

It’s tough to differentiate between genuine and rogue devices. But, no matter what the intent, all unauthorized wireless devices operating within the vicinity of the company’s network should be considered wireless rogue devices that could be opening up unknown access points.


Types of Rogues

Neighbor Access Points: Normally, workstations automatically associate themselves with access points based on criteria like signal strength, Extended Service Set Identifier (ESSID), and data rates. As a result, there are chances that trusted workstations accidentally associate themselves with an AP located close to, but outside, the company network. Neighboring APs may not pose an immediate threat, but they do leave your company information exposed.

Ad Hoc Associations: Peer-to-peer wireless connections involve workstations directly connecting to other workstations in the same network. This facilitates file sharing or sending documents to a wireless printer. Peer-to-peer traffic generally bypasses network-enforced security measures like encryption and intrusion detection, making it even more difficult to detect or track this kind of data theft.

Unauthorized Access Points: Basic models of access points are easily available in the market. The existence of an unauthorized and unsecured AP installed intentionally or otherwise becomes an easy backdoor entry point into the company network. These unauthorized APs can be used to steal bandwidth, send objectionable content, retrieve confidential data, attack company assets, or even worse, attack others through your network.

Malicious Workstations: Malicious workstations eavesdrop or passively capture traffic in order to find passwords, log in information, email addresses, server information, and other company data. These workstations pose very serious risks and can connect to other workstations and APs. They redirect traffic using forged ARP and ICMP messages and are capable of launching Denial of Service (DoS) attacks.

Malicious Access Points: Attackers can place an AP inside or near company networks to steal confidential information or modify messages in transit. These attacks are also known as man-in-the-middle attacks. A malicious AP uses the same ESSID as an authorized AP. Workstations receiving a stronger signal from the malicious AP associate with it instead of the authorized AP. The malicious AP then modifies the data exchanged between the workstation and the authorized AP. This poses a great business risk because it allows sensitive data to be modified and circulated.

The rogue wireless device problem is one of the primary security threats in wireless networking. It’s capable of disclosing sensitive company information that if leaked, could be damaging to the organization. The first step to assess and mitigate business risks from wireless rogue devices is to detect them. Are you equipped to identify and detect rogue activity in your network?
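At its simplest, the detection step reduces to comparing the access points your wireless sensors observe against an authorized list. A minimal sketch (the BSSIDs below are made up for illustration):

```python
def find_rogue_aps(observed, authorized):
    """Return observed BSSIDs that are not in the authorized set,
    sorted for stable reporting."""
    return sorted(set(observed) - set(authorized))

# Hypothetical data: the authorized list comes from your AP inventory,
# the observed list from a wireless scan.
authorized_bssids = {"00:11:22:33:44:55", "00:11:22:33:44:56"}
observed_bssids = ["00:11:22:33:44:55", "de:ad:be:ef:00:01"]
```

Real rogue detection layers more signals on top (ESSID spoofing checks, signal strength, wired-side correlation), but every approach starts from this observed-versus-authorized difference.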

Storage systems, like any other network or server hardware, are likely to brew up bottlenecks and performance issues, and it’s the storage administrator’s job to keep this in check and triage issues. It’s a misconception that all storage bottlenecks arise from the storage disks. There are other key components of the storage infrastructure, such as the storage controller, FC switches, and front-end ports, that could go off course and, in turn, impact storage performance.

In this blog, we’ll look at some important factors that cause performance bottlenecks in disk array controllers (aka RAID controllers).


What is a Disk Array Controller?

A disk array controller is a device which manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, and thus is sometimes referred to as RAID controller. It also often provides additional disk cache.[1]

The disk array controller is made up of three important parts which play a key role in the controller’s functioning and also give us indicators of storage I/O bottlenecks. These are:

  • CPU that processes the data sent to the controller
  • I/O port that includes:
    • Back-end interface to establish communication with the storage disks
    • Front-end interface to communicate with a computer's host adapter
  • Software executed by the controller's processor, which also consumes processor resources


These components can degrade the performance of the storage subsystem when left unchecked. There are third-party storage management tools that provide this visibility, but as a storage administrator you should know which metrics to look at to understand what could go wrong with the disk array controller.


Common Causes of Disk Array Controller Bottlenecks

#1 Controller Capacity Overload: It is possible that the disk array controller is made to support more resources than it can practically handle. Especially in scenarios involving thin provisioning, automated tiering, snapshots, etc., the controller is put through capacity overload, and this may impact storage I/O operations. Operations such as deduplication and compression add yet more load on the controller.


#2 Server Virtualization & Random I/O Workloads: Thanks to server virtualization, the disk array controller now handles many more workloads than the single application load per host it saw in the past. Because each connecting host supports multiple workloads, each sending a steady stream of random I/O, it’s more difficult for the storage controller to find the data each virtual machine is requesting.


Key Metrics to Monitor Disk Array Controller Bottlenecks

#1 CPU Utilization: You need to monitor the CPU utilization of the disk array controller with great depth and visibility. Try to get CPU utilization data during peak load times and analyze what is causing the additional load and whether the storage controller is able to cope with the processing requirements.


#2 I/O Utilization: It’s also important to monitor I/O utilization metrics of the controller in two respects:

  • From the host to the controller
  • From the controller to the storage array


Together, these metrics let you figure out when the disk array controller has excessive CPU utilization or when one of the I/O paths is oversubscribed. You can then determine whether the storage controller can meet the CPU capacity and I/O bandwidth demand with the available resources.
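A monitoring tool's bottleneck check over these metrics boils down to comparing sampled utilization against thresholds. The thresholds, field names, and samples below are illustrative assumptions, not recommended values:

```python
def controller_alerts(samples, cpu_threshold=85.0, io_threshold=90.0):
    """Flag samples where controller CPU or either I/O path looks overloaded.

    Each sample is a dict with: time, cpu_pct, host_io_pct (host to
    controller), and array_io_pct (controller to storage array),
    all expressed as a percentage of rated capacity.
    """
    alerts = []
    for s in samples:
        if s["cpu_pct"] > cpu_threshold:
            alerts.append(("cpu", s["time"]))
        if s["host_io_pct"] > io_threshold or s["array_io_pct"] > io_threshold:
            alerts.append(("io", s["time"]))
    return alerts

# Illustrative peak-hour samples for one controller.
samples = [
    {"time": "09:00", "cpu_pct": 40.0, "host_io_pct": 50.0, "array_io_pct": 45.0},
    {"time": "09:05", "cpu_pct": 92.0, "host_io_pct": 95.0, "array_io_pct": 60.0},
]
```

Tracking both I/O directions separately is the point: an alert on the host side but not the array side (or vice versa) tells you which link in the path is the constraint.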


As George Crump, President of Storage Switzerland, recommends on TechTarget, you can address storage controller bottlenecks by:

  • Increasing processing power of the controller CPU
  • Using more advanced storage software
  • Making the processor more efficient by implementing task-specific CPUs. This allows you to move portions of code to silicon or a field-programmable gate array (FPGA) enabling those sections of code to execute faster and the system to then deliver those functions without impacting overall performance.
  • Leveraging the hypervisor within server and/or desktop virtualization infrastructures to perform more of the data services tasks such as thin provisioning, snapshots, cloning and even tiering.
  • Using scale-out storage which is to add servers (often called nodes) to the storage system where each node includes additional capacity, I/O and processing power.

As we move into the new year, it’s time to look at some threats we need to guard against. In this blog post, let’s look at how ransomware is likely to become more sophisticated in 2014. Here are a few trends observed this year that may well continue into 2014, along with some new and interesting challenges.


What on earth is Ransomware?

It is a type of malware designed to make your system or a file unusable until you pay a ransom to the hacker. It typically appears as an official warning from a law enforcement agency such as the Federal Bureau of Investigation (FBI) that accuses you of a cyber-crime and demands an electronic money transfer before you regain control of your files. There’s another kind of ransomware that encrypts the user’s files with a password and offers them the password upon payment of a ransom. In both cases, it is the end-user’s system that is essentially held hostage.


Cryptolocker malware and how it works

The CryptoLocker malware is seen as an extension of the ransomware trend and is far more sophisticated, with its ability to encrypt files and successfully demand ransom. Its presence is hidden from the victim until it contacts a Command and Control (C2) server and encrypts the files on the connected drives. While this happens, the malware continues to run on the infected system and ensures that it persists across reboots. When executed, the malware creates a copy of itself in either %AppData% or %LocalAppData%. CryptoLocker then deletes the original executable and creates an autorun registry key, which ensures that the malware is executed even if the system is restarted in “safe” mode.
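The persistence behavior described above suggests a simple indicator to hunt for: autorun entries whose target executable lives under a user's AppData directory. A hedged sketch that inspects entries you've already exported from the Run registry keys (the entry data below is invented, and this is only a heuristic, since legitimate software also installs to AppData):

```python
def suspicious_autoruns(entries):
    """Flag autorun entries whose target executable lives under AppData.

    entries: list of (value_name, command) tuples exported from the
    HKCU/HKLM Run registry keys. Heuristic only -- treat hits as leads
    for investigation, not proof of infection.
    """
    flagged = []
    for name, command in entries:
        path = command.strip('"').lower()
        if "\\appdata\\" in path or "\\application data\\" in path:
            flagged.append(name)
    return flagged
```

A tool like this, run across endpoints, surfaces the randomly named executables that CryptoLocker-style malware drops for persistence, which an analyst can then vet.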


Protecting yourself from Ransomware

It is important to be aware of this kind of malware, and here are a few steps that can help you protect your organization from ransomware:

  • Ensure that all the software on your systems is up to date.
  • Do not click on links or attachments from untrusted sources.
  • Regularly back up your important files.


Additionally, regulatory mandates and corporate policies need to be enforced stringently. The fact is that a security attack of any kind can have a direct impact on your organization’s integrity and reputation, which is why a comprehensive security solution must be put in place. It is best to opt for a SIEM solution with real-time analysis and cross-event correlation, as it will help you to:

  • Reduce the time taken to identify attacks, thereby reducing their impact
  • Reduce the time spent on forensic investigation and root cause analysis
  • Respond to threats in real-time


Shield your network and systems better this year, have a good one!!

My teenage daughter thinks my technology job in security is boring most of the time (especially when I talk about it in front of her) - but when she heard about the SnapChat breach, I quickly received a call asking for advice. The user names and phone numbers of many users were breached and exposed. So should my daughter and her friends be worried? Whenever there is a breach or a new vulnerability is found, there can be a lot of hysteria. It's scary to know that your information was stolen - but there are varying degrees of damage that can be done by breaches. I performed a quick risk assessment for her and thought, given the large number of SnapChat users, I would share the results. The outcome? She personally did not need to be very worried - although some might need to be. Here's why:


  • Right now, there is no indication that passwords were exposed.  I recommended she change her password anyway just to play it safe. Since I use SnapChat as well to communicate with her, I did the same.
  • Her user name, combined with her phone number, doesn’t provide much identifying information about her at all.  It could lead to annoying spam texts and calls – but since she is using a respectable user name that is not her full name, the two pieces together do not clearly identify her and should not cause embarrassment.  She is almost 20, so we are not very concerned about her receiving content from those she doesn't know - because she can always block those people.  Younger kids and their parents should take some precautions.
  • New incoming photos can't be accessed with just a user name and phone number - so new photos coming in are safe as long as passwords weren't breached (and we changed our passwords to play it safe).
  • Old photos, while remnants remain on her device, are not accessible even if her account was breached, because the SnapChat application does not maintain them for user access.
  • Her name with her phone number is already public information because it is listed on her blog with her resume.

So who should be worried and when?

  • Parents of younger children (my daughter is almost 20) should be concerned, because their kids can be added by people who don't know them, and their numbers have been exposed to spammers, which may in turn expose them to inappropriate content and messages. For younger kids, downloading the new version of the app released today and opting out of "Find Friends" should definitely be done. Also, make sure younger kids come to you immediately if they see inappropriate content in spam messages on their phone.
  • If you have an inappropriate user name, this information combined with your phone number could cause embarrassment.
  • If it turns out that passwords were in fact breached - then someone could gain access to new incoming snapchats. There are no reports of passwords being breached, but I recommend changing passwords now just to play it safe.
  • If the user name contains identifying information and you want to keep your number private (for example – famous people) – then it could cause an issue.  In that case, getting a new phone number from your mobile provider is the option to correct it.

Reasonable security decisions - both in business and in our personal security online - are about assessing the value of the information combined with the difficulty for an attacker to gain the information. Those whose phone numbers were exposed might have some annoyances with spam to their mobile phones - but unless more data than phone numbers and user names were stolen, or you fit the "worry" criteria, that should be the extent of the damage from this breach.

The Situation

Over the Christmas holiday I found myself with some extra time and decided to get my backups on my external hard drives in order. About halfway through, Windows decided not to recognize my hard drives anymore. Look familiar?

Clicking the message did nothing to assist me.


The Wrong Solution

My first instinct was to plug the drives into my other laptop to verify the hardware was still in working order and of course, they worked just fine. At this point I've verified it's not the hardware causing my burgeoning headache. What next? Drivers! Of course, that's it! Perhaps the drivers somehow became corrupt. Piece of cake! Just grab them off the manufacturer's site and I should be back in business.


I installed the drivers and pointed Windows to the proper folder and guess what I found? Nothing. Huh? That's odd. I installed the drivers again and this time I watched the folder and files being created from the MSI installation program, only the folder and files were never created, despite the fact that the installer said I was good to go. (I know what you're thinking, did you check the li'l box that says, "Show hidden files and folders" as well as the other li'l box that says, "Show protected operating system files"?  C'mon, that's a rookie mistake for lesser men.)


I've had MSI problems in the past so I verified that another MSI installer worked on the same rig. It did, so MSI installers in general are not the problem. I concluded that the devs over at this particular company were complete idiots by providing the end user with a program that installs sailboat fuel.


Hour three of my adventure

Trawled Google and learned what I could. Nothing new here.


Hour four of my adventure

I got the drivers from a third-party program that searches for and installs needed drivers. Surely that would work! The drivers downloaded and updated, and sure enough, nothing changed! I still could not get my drives to work.


Hour five of my adventure

All the articles and forums online were telling me things I already knew and tried, including tweaking every possible setting in the Control Panel and System Manager. Nothing. One blog even had me tweak some registry settings. Still nothing! Reboot after reboot and I had no external drives. My USB wireless mouse however, never failed during this ordeal. It had to be the drivers!


I was getting desperate. The next step, gulp...Microsoft.com. I found myself on their support page, staring at the Fix It tool they provide for this exact problem.

This had to work! I mean, they BUILT the damned thing, and the Fix It tool they prepared was designed specifically for this problem. Need I tell you what happened? You guessed it, nothing. The tool found errors it could not fix - it didn't tell me what they were, though. It just told me my hardware was corrupt. (Another wrong assumption.)


Hour six of my adventure

I was stupefied. Six hours of reading, reboots, downloads, tools, settings and registry edits, and nothing. What to do? Then my eye caught an obscure article (not unlike this one). The only reason I read it was because the instructions given were different from the hundreds of others that I had read which all gave the same five solutions that didn't work. I had nothing to lose and at some point, even the ridiculous seems viable.


Before the big reveal, let's review:

  • With 30+ years of programming and computer experience, I was stumped
  • Microsoft could not fix it
  • Manufacturer could not fix it
  • Updated drivers did not fix it
  • Registry and other settings adjustments did not fix it
  • Reboots did not fix it
  • All but one solution in all of Google failed


The Ridiculous solution

Turn the computer off, physically remove the laptop battery and power cord from the computer. Let sit for five minutes. Resume as normal. YES, THAT REALLY WORKED!!

The Moral of the story?

If you can find one, please add it to the comments below.

We all know how important network topology mapping is for monitoring a large network. It gives us an entire layout of network devices and a pictorial representation of how they’re physically connected. Not only does it keep network admins updated on the status of devices, it also helps them ascertain the impact an issue has on the organization’s network.


If you have more than a handful of devices to manage, it’s advisable to create a network topology map to visualize your entire network. When you’re trying to scale up and add more nodes, the map will help you manage all your devices. In enterprise-level networks, network topology mapping is considered an absolute necessity, since it helps locate and troubleshoot issues by identifying on the map which nodes aren’t working. Having a network topology map adds a lot of benefits to network monitoring:


#1 Find network bottlenecks easily – If you have an advanced network map that provides real-time status of network devices, it becomes easier for admins to locate new issues or network bottlenecks. And if the map is integrated with monitoring tools, you can find and analyze the root cause of the problem more quickly, rather than waiting for an email alert to reach your inbox.


#2 Centralize resources and manage your devices – Network topology provides admins with a tangible view of the network. It’s easier for network admins to understand by visualizing the relationship of all the devices connected within the network. They can centralize monitoring and dedicate fewer resources in managing distributed nodes.


#3 Assess impact of network change immediately – Routinely, network admins have to add new devices or change the existing structure of the network based on user requirements, and these changes can impact normal operations. But when admins use network topology maps, they can quickly assess impending operational risks by looking at the relationship of those devices. It helps to be proactive when there’s a change to deploy.


#4 Robust network monitoring system – A network topology map coupled with your network monitoring tool creates a strong base for administrators managing a large network. To collect data on all available network devices and display that as a pictorial representation is the foremost advantage of having a network topology map. You can easily see the status of the network path and connected devices.


Also, maps allow you to drill down to the device level.



For instance, real-time data on devices helps network admins check whether a node is up or down. Managing remote sites can be easy, but if the link to a remote site is down, you can no longer collect data from it. This is reflected in the map, which in turn alerts the administrator to initiate the necessary steps to resolve the issue. Advanced network monitoring tools can automatically map your network and create customized maps based on your needs, making it much easier to find issues and bottlenecks.
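The up/down decision in that example can be sketched as a last-seen check, which is roughly how polling tools decide to mark a node down on the map. The timeout and timestamps below are made-up illustration values:

```python
def node_status(last_seen, now, max_silence=120):
    """Classify nodes as 'up' or 'down' by seconds since their last poll reply.

    last_seen: dict mapping node name -> timestamp (seconds) of last reply.
    A node silent longer than max_silence seconds is considered down.
    """
    return {
        node: ("up" if now - ts <= max_silence else "down")
        for node, ts in last_seen.items()
    }
```

Feeding this status dict into the map rendering is what turns a silent remote-site link into the red icon that prompts the admin to act.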

For those of you in IT administration and support - help desk technicians or any IT admin providing IT support to end-users - one of the things you’ve looked for throughout your career is an easier means of providing support. Yes, a help desk makes the process easier and more organized. But what about the real act of rendering remote support? Certainly RDP, VNC, et al. make remote controlling of end-user desktops simpler, but it can be even easier if your remote control tool includes some useful screen sharing functionality. So, what are these useful features?



Chat with Your End-users during Remote Session

When you are providing remote IT support, you may have faced this many times: you have to communicate something to your end-user, he needs to say something back to you, and you end up looking up his extension in AD and then calling him. Even if you ping him via your organizational instant messenger, you have to keep switching between the chat window and the remote control window. The last thing you want to do, since it slows the remote support process down even further, is communicate via email.


Remote control tools that provide built-in chat functionality to communicate with your remote users allow you to speed up the troubleshooting or support process, saving you time and making two-way communication more effective.


Take Screenshots of Remote Desktop during Screen Sharing Session

It’s not uncommon to face a specific issue on your remote user’s system and want to take a screenshot of the issue as it happens - either for further research, for documentation purposes, or to recreate a specific scenario. Yes, you can use a snipping tool on your own machine, but it gets more complicated when you have multiple remote user windows open. Look for remote control tools that offer built-in functionality to take screenshots of the remote desktop at any time during the remote session.


Transfer Files during Remote Desktop Session

Transferring files during an actual remote desktop sharing session is a real boon to IT admins. There are times when we have to push batch files or updates to the remote system. Email or folder sharing over the LAN is definitely an option, but remote control tools offering built-in file transfer functionality, where you can share files with your remote users via a simple drag-and-drop, will certainly make your remote support process easier and more efficient.


Remote Control Computers from Mobile

It’s hard to find IT admins who are sedentary workers throughout the day. They keep moving around between end-user workstations, the NOC, the data center, etc. Wherever they are, if there’s an IT problem to troubleshoot on a remote computer, they should be able to do it. Remote control tools that allow establishing remote connections from a mobile interface are most useful here. IT admins can connect to end-user machines and triage issues on the fly, 24x7.


Look for these simple, yet powerful, features in your remote control tool and ensure you are able to fulfil your IT support objectives with simplicity, speed, and higher operational efficiency. Remote IT support can also be fun!
