
Geek Speak


Network availability is crucial for business continuity. With more than 80% of network outages caused by configuration errors, network engineers face the constant challenge of configuring network devices while keeping the network secure and operational at all times. Below are some of the key challenges network administrators face when configuring devices manually.


  • Configuration changes can be time-consuming and error-prone, and could result in hours of downtime for the company
  • As more devices are added to the network, it becomes difficult to track configuration changes, and troubleshooting becomes a daunting task in the event of an outage
  • Even the smallest configuration errors can pose a big threat to companies, as they expose the network to hackers and malicious attacks


As a result, more network administrators are turning towards automated network device configuration management tools to better manage their configuration changes. Here are some key benefits that automated configuration management solutions offer:


#1 Minimized Downtime


Administrators can continuously track configuration changes and immediately identify the erroneous change that caused an outage. By restoring the last known-good configuration, devices can be back up and running in no time, reducing downtime for the company.
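As a sketch of how "what changed?" can be answered automatically, the snippet below diffs a last known-good configuration against the running one using Python's standard difflib (the configuration strings are hypothetical examples, not output from any particular tool):

```python
import difflib

def config_diff(known_good: str, running: str) -> list[str]:
    """Return unified-diff lines showing what changed since the last good config."""
    return list(difflib.unified_diff(
        known_good.splitlines(), running.splitlines(),
        fromfile="known-good", tofile="running", lineterm=""))

good = "interface Gi0/1\n ip address 10.0.0.1 255.255.255.0\n no shutdown"
bad  = "interface Gi0/1\n ip address 10.0.0.1 255.255.255.0\n shutdown"
for line in config_diff(good, bad):
    print(line)
```

Lines prefixed with `-` show what was removed from the known-good config, and `+` shows what replaced it, which points straight at the erroneous change to roll back.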


#2 Reduced Human Errors

With the ability to automate backups, network admins need not worry about whether each device is being backed up. Automating bulk changes reduces human error and saves time: changes take just a few clicks, with no need to remember scripts each time a change is made.


#3 High Operational Efficiency

The time to complete tasks that require multiple changes, such as patching or backups, can be significantly reduced. Network admins can continuously monitor changes to devices and receive real-time change notifications, increasing the efficiency of network operations and troubleshooting.


#4 Enhanced Security

Network admins need to be fully informed of attempts by unauthorized sources to gain access to the network. With the ability to monitor and track changes to devices in real time, it is easy to trace the actions of end users, ensuring accountability in case of any security breach.



While manual configuration of devices could leave the network team wondering which error caused the downtime, automated network configuration management solutions enable administrators to take complete control of managing configuration changes, ensuring security and improving efficiency.


Learn More:



Voice Over I.V.

Posted by LokiR Oct 7, 2013

No, the title is not a typo. October's wacky, weird, so-new-it-hurts technology is a subcutaneous cellphone that runs off of your blood.


The phone looks rather like a prototype of the awesome phones from the Total Recall remake. Essentially, the phone is a small, thin, silicon touchscreen that is inserted under the skin and only "lights up" when you make or receive a call. The other cool thing about the subcutaneous cellphone is that it would use a blood battery: a tiny, biological battery that converts the glucose in your blood into electricity.


How can this technology affect you? Not only is this unavailable (and there are no plans to make it available), but who wants to implant cellphone technology that will be out of date in a year or so? On the other hand, the battery is going to be very useful in the health care field.


There may come a time in the near future when you'll have to track medical implants and determine battery life or functionality. It would be awesome if you could monitor these things remotely like you can your server health.


Actually, the blood battery can act as a human health monitor since it directly interfaces with your blood. When this is implemented, there is probably going to be a wireless or Bluetooth element to it so that health care providers can monitor you for blood disorders. There will also be an app for that.


And in that eventuality when cellphone technology has plateaued, you will be able to get your very own subcutaneous cellphone. I hope by that time we might have some holograms or something to make it even more awesome. 

Well, as you might have heard, the final version of the PCI DSS 3.0 requirements will be published in November 2013 and takes effect in January 2014. Alright, it's time to get a glimpse of the proposed changes in the newer version.


Current PCI DSS Standard → Proposed PCI DSS Update for 3.0 (on top of existing standards)


  • Install and maintain a firewall configuration to protect cardholder data.
    3.0 update: Have a current diagram that shows cardholder data flows.
    Why: To clarify that documented cardholder data flows are an important component of network diagrams.

  • Do not use vendor-supplied defaults for system passwords and other security parameters.
    3.0 update: Maintain an inventory of system components in scope for PCI DSS.
    Why: To support effective scoping practices.

  • Use and regularly update antivirus software.
    3.0 update: Evaluate evolving malware threats for systems not commonly affected by malware.
    Why: To promote ongoing awareness and due diligence to protect systems from malware.

  • Develop and maintain secure systems and applications.
    3.0 update: Update the list of common vulnerabilities in alignment with OWASP, NIST, SANS, etc., for inclusion in secure coding practices.
    Why: To keep current with emerging threats.

  • Assign a unique ID to each person with computer access.
    3.0 update: Security considerations for authentication mechanisms such as physical security tokens, smart cards, and certificates.
    Why: To address feedback that requirements for securing authentication methods other than passwords need to be included.

  • Restrict physical access to cardholder data.
    3.0 update: Protect POS terminals and devices from tampering or substitution.
    Why: To address the need for physical security of payment terminals.

  • Regularly test security systems and processes.
    3.0 update: Implement a methodology for penetration testing, and perform penetration tests to verify that the segmentation methods are operational and effective.
    Why: To address requests for more detail on penetration tests, and for more stringent scoping verification.

  • Maintain a policy that addresses information security.
    3.0 update: Maintain information about which PCI DSS requirements are managed by service providers and which are managed by the entity; service providers must acknowledge responsibility for maintaining applicable PCI DSS requirements.
    Why: To address feedback from the Third Party Security Assurance SIG.







What do these changes mean to you?


  • Policy guidance and operational procedures will be provided with each requirement
  • You will have to maintain an inventory of all systems within your PCI scope
  • Redundant sub-requirements will be eliminated
  • The changes bring clarity to the testing procedures for each requirement
  • The requirements around penetration testing and validation of network segments have been strengthened
  • You get more flexibility around risk mitigation methods concerning password strength and complexity requirements


Is your IT infrastructure ready?

Well, looking at all these additions to the current PCI requirements, you may feel that it's a big change, but on closer inspection the change is more structural. So, the question you need to ask yourself is: how well equipped are you to embrace PCI 3.0?

Some key questions would be:

  • Have you constructed policies and procedures to limit the storage and retention time of PCI data?
  • Do you have constant assessment and reporting systems across employees of different levels?
  • Do you have a SIEM tool that will correlate events and alert you in real time on any security breach?


PCI 3.0 will be effective from January 1, 2014, and will become a mandate from July 2015!


Are you all set?




As an IT environment grows, you will continue to add more hardware, software, and applications for end users. Your IT assets will keep increasing, and you will have to make continuous adjustments to the existing setup to expand and scale your infrastructure. As IT pros, not only do you have to proactively monitor hardware and application health, you also have to keep an up-to-date inventory of all your IT assets. Even though it's a time-consuming and tedious manual process, most organizational policies require it.

Not maintaining an inventory of your IT assets can result in:

  • Lack of visibility into your current hardware and software
  • Losing track of warranty information for critical hardware components
  • Missing important software, operating system, and firmware updates
  • Lack of visibility into the overall hardware lifecycle and maintenance timelines


Asset Inventory in SAM 6.0

SolarWinds® Server & Application Monitor (SAM) has always been a comprehensive server hardware and application monitoring software. With the new SAM 6.0, you can now proactively monitor and manage your IT assets. Adding IT Asset Inventory Management to SAM means you can now automagically maintain a detailed inventory of your hardware and software.

With the Asset Inventory dashboard, you can view detailed information about your IT assets. Major areas include:

  • Server Warranty: Track server warranties that have expired and that are going to expire. SAM periodically checks the status of each server warranty against the vendors it supports (HP, IBM, and Dell) on their online warranty validation servers.


  • Hardware Inventory: Get reports on your hardware such as hard drives, memory, volumes, removable media, graphics and audio, USB controllers, and other computer peripherals. SAM gives you hardware information such as manufacturer name, publisher, version, and serial numbers.


  • Software Inventory: SAM checks all the software installed on your servers and identifies the publisher, version, and install date. This lets you clearly see all the software products that are regularly used, as well as those that are rarely used.


  • Operating System Updates: SAM monitors the operating system updates and populates the name of the operating system, the type of update applied (whether it is a system or a security update), date the update was installed, and the person who installed it.


To simplify the process, SAM collects inventory data only once a day. You also have the option to configure the data collection interval to weekly, bi-weekly, or even monthly, depending on your preference. Inventory data does not have to be collected with the same frequency as availability and performance information, and this lower frequency minimizes the impact on your polling engine.


Takeaway for IT Pros

  • Visualize IT asset inventory for both physical and virtual assets
  • Automatically track your assets and view software update cycles
  • Facilitate improved inventory lifecycle management
  • Respond to key IT questions that help drive your business decisions
  • Monitor and manage your IT assets in one place

Explore Asset Inventory Management and other features in SAM 6.0.

Network administrators find tracking and maintaining device End-of-Life (EoL) and End-of-Sale (EoS) data difficult because:


  • It is hard to collect, verify, and manage multi-vendor device information
  • Regularly tracking and watching for vendor announcements takes significant effort
  • It is impractical to check on a daily basis for devices reaching EoL
  • Maintaining an up-to-date and detailed inventory of all devices is tedious
  • Devices in the network are replaced frequently


Manually keeping a running network up to date demands a great deal of time and resources. But, again, why is device EoL important?


As a network administrator, you wouldn't want to find out one day that your core router is down because of a faulty part, and then discover that the device is out of vendor support! Spares become difficult to obtain, and replacing the device itself may mean more downtime.


To avoid situations like this, and to plan and prepare in advance for device replacements or support renewals, it is beneficial to have a device inventory system in place. Such a solution can help:


  • Maintain a centralized inventory with all device details
  • Easily plug in EoL/EoS data and prompt when devices are nearing expiry
  • Provide up to date information to plan and budget for replacements or renewals in advance
  • Ensure that your devices are running on current firmware and IOS
  • Ensure that the devices are covered under hardware warranty and technical support contracts


If you are struggling with manual device inventory management or if you are paying someone else to do the job, it is time to invest in a tool or solution that can help you take care of your inventory requirements efficiently.


IP address management, as all of us networking professionals know, is an integral part of the enterprise network management system. With the explosion of IP addresses—thanks, but no thanks to the BYOD trend—and the encroaching need to migrate to IPv6, network admins are looking for an effective solution to monitor, track and manage IP addresses with the ability to:


  • Control the entire IP infrastructure from a centralized web console
  • Scan DHCP servers for IP address changes
  • Prevent subnets and DHCP scopes from filling up with preventative alert notifications
  • Create, schedule, and share reports showing IP address space percent utilization
  • Create IPv6 subnets and plan for IPv6 migrations


As I’m sure you’re already aware, trying to implement all of this using spreadsheets is impossible, but there are IPAM and DDI solutions available in the market that can help. In choosing the right solution for your specific needs, a decision will need to be made as to whether you should go with a software-based IPAM solution or a hardware-based appliance. Let’s dive into a comparison and find out which would suit your needs better.




#1 Price

As much as we all wish price didn’t have to be a factor in our decisions, the hard truth is that it always will be, especially for today’s IT departments who want to be seen as adding business value and not just as a cost center. Hardware-based IPAM appliances are significantly more expensive than their software counterparts. Closing a deal is a lengthy process likely requiring weeks for approvals, signoffs, product demonstrations and finally, installations. On the other hand, software-based IPAM solutions are far less expensive and easier to implement, making them ideally suited to businesses of all types and sizes. Why purchase a cost-prohibitive, proprietary hardware appliance when you can get a far more flexible and agile solution that will add scale to your existing infrastructure at a fraction of the cost?


#2 Installation & Setup

Hardware IPAM solutions are difficult to setup and install – in other words, time-consuming and likely headache-fraught!

    • First, you must wait for the device to be shipped to your location.
    • Upon delivery, it must be racked, stacked, and cabled properly.
    • Next, you need to ensure it's communicating with the rest of your network (if it's not, go back to the previous step).
    • Finally, you'll need to configure the management UI from a separate computer.
    • For each additional device (which will likely be needed to scale), rinse and repeat the steps above.

With IPAM software, you can download and install the product immediately and be up and running in as little as an hour!


#3 Maintenance & Overhead

Anything hardware-oriented poses significant maintenance overhead. You need to ensure the IPAM appliance receives regular attention for maintenance and overhaul. You will need to factor in:

    1. Manual effort required for device inspection, maintenance and repair
    2. Time consumption in tending to hardware servicing needs
    3. Productivity and operational time loss during the hours of maintenance
    4. Cost involved in replacing any damaged or faulty hardware parts


The very nature of hardware makes it vulnerable to failure – one damaged part, no matter how small, could shut down the entire device. With a software-based solution, you don’t have to worry about a proprietary physical device being impacted or the associated time, effort and financial overhead required for maintenance and repair. Software can be installed on any existing server and can be easily scaled.


#4 Upgrade

IPAM software can be easily upgraded at any time; it's as simple as downloading and installing the new version. Hardware upgrades, in contrast, are often a painful task involving firmware and driver updates, which, if done incorrectly, could render the device inoperable. In some instances, an upgrade isn't even an option. Instead, the entire appliance may need to be swapped out for the latest and greatest hardware, costing you even more money!


#5 Technical Support

Some 3rd party IP address management software providers offer full first year maintenance and free on-call technical support 24x7. And, subsequent annual renewals are just a fraction of the cost. Plus, there’s no need for technical personnel to visit the site for support and repairs. Hardware, on the other hand, does require first-hand physical check-ups and repairs if there are technical problems. You lose time logging the issue with the vendor, waiting for the vendor technician to visit the site, then waiting some more for the issue to be diagnosed and resolved. And, during this whole time, the appliance is down and IPAM capabilities are lost.


#6 Scalability

Software-based IPAM solutions scale much easier with additional pollers that can be installed onto any web server, unlike hardware-based solutions that often require additional (and costly) appliances to scale up.


#7 Integration with other Network Management Systems

Many software-based IPAM solutions can easily integrate with other network monitoring systems to help achieve comprehensive network management.


Hardware appliances are not as flexible or easily integrated with existing network monitoring solutions, resulting in the need to have different monitoring consoles and added administration.

SolarWinds IP Address Manager (IPAM) is a scalable, flexible and nimble IPAM software solution that offers simple yet powerful IP address management, DHCP and DNS management from a centralized web console. IPAM leverages an intuitive point-and-click web interface that allows you to easily manage, monitor, alert, and report on your entire IP infrastructure, identify and resolve IP conflicts, as well as efficiently plan for your future IP address needs, including IPv6 migrations.

It's October!


Yes, you are right: it is National Cyber Security Awareness Month (NCSAM).



This year, we are celebrating the 10th anniversary of NCSAM, which is sponsored by the Department of Homeland Security and the National Cyber Security Alliance.

With the continuous advancement in technology, those trying to access your personal information are also growing smarter. The core intention of NCSAM is to educate online consumers and businesses about cyber security issues and the best practices to avoid them.



NCSAM at SolarWinds – What does it mean to you?

It’s that time of the year, when you look back at your security and safety precautions, understand the consequences of your actions and behaviors online, yet enjoy the benefits of the Internet. To help you do this, we have compiled our best resources that can be readily used to measure your current level of security.


Have you visited our all-new NCSAM page yet?

For the month of October, we will be discussing cyber security issues every week as follows:




Week 1 (October 1-6): General Security

Week 2 (October 7-13): Being Mobile: Online Safety & Security

Week 3 (October 14-20): Cyber Education

Week 4 (October 21-27):

Week 5 (October 28-31): Cyber security



Keep an eye on this space, there’s more coming!!


Before we start to understand how this works, you need to know that SolarWinds Alert Central is FREE software: totally, completely, free for life! 


Once you have installed Alert Central and linked it with your Orion (NPM, SAM, IPAM, NCM or other Orion-based products from SolarWinds), all your Orion alerts will be sent to Alert Central and any updates made to the alert in Alert Central will also be updated on Orion. It’s just a simple 3-step process to establish this integration:


#1 Configure Alert Central’s Email Settings

  • You need a new email account for managing alerts with Alert Central
  • Alert Central can receive email from POP or IMAP, with or without SSL, or with a direct connection to Microsoft® Exchange Web Services
  • Alert Central uses standard SMTP with or without SSL and authentication to send outbound email
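These email settings boil down to standard protocols. As an illustration of the outbound side (addresses, server names, and credentials below are hypothetical examples, not Alert Central internals), an alert notification is just an ordinary SMTP message:

```python
import smtplib
from email.message import EmailMessage

def build_alert_email(subject: str, body: str, sender: str, recipient: str) -> EmailMessage:
    """Build a standard email message of the kind an alert system sends outbound."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

msg = build_alert_email("Node down: core-sw-01", "CPU poll failed at 10:02",
                        "alerts@example.com", "noc@example.com")

# To actually send it over SMTP with SSL (hypothetical server and credentials):
# with smtplib.SMTP_SSL("mail.example.com", 465) as s:
#     s.login("alerts@example.com", "password")
#     s.send_message(msg)
```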


#2 Configure Orion to Generate Alerts for Different Alert Conditions

  • Use the Orion Basic Alert Manager or Advanced Alert Manager to define alert conditions and determine when they are triggered
  • By default, Orion is set up to send alerts to Alert Central if you are running Orion Core v2012.2.0, or versions NPM 10.4 / SAM 5.5 or above.


#3 Configure Routing Rules in Alert Central

  • Add your Orion source credentials to Alert Central
  • Configure the routing rules for Alert Central to route alerts to recipients based on custom conditions.
  • The most common scenarios for configuring alerts are:
    • Routing all alerts to a single group
    • Routing alerts from different locations or network segments to different groups
    • Routing alerts with different keywords to different groups




  • You can also add different Orion properties in Alert Central to route specific alerts and test them on the Alert Central console to ensure they work well
  • Best Practice: Set up a default alert group to receive all alerts which do not meet any of your custom routing conditions
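Keyword-based routing with a default fallback group, as described above, can be pictured with a small sketch (group names and the rule format are invented for illustration; Alert Central's actual rule engine is configured through its UI):

```python
def route_alert(alert: dict, rules: list[tuple[str, str]], default_group: str) -> str:
    """Return the group for the first rule whose keyword appears in the alert
    message; fall back to the default group when nothing matches."""
    message = alert.get("message", "").lower()
    for keyword, group in rules:
        if keyword.lower() in message:
            return group
    return default_group

rules = [("database", "dba-team"), ("router", "network-team")]
print(route_alert({"message": "Router CPU high on edge-rtr-02"}, rules, "noc"))  # → network-team
print(route_alert({"message": "Disk full on fileserver"}, rules, "noc"))         # → noc
```

The default group at the end is exactly the best practice above: nothing silently falls through unrouted.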


Once you have configured your alert routing conditions you can set up escalation policies and on-call calendaring for each group to ensure alerts go to the right user.


Watch this short video to learn how to configure Orion alerts with Alert Central.



About SolarWinds Alert Central

SolarWinds Alert Central is centralized IT alert management software that provides amazingly streamlined alert management for your entire IT organization. It consolidates and manages IT alerts, alert escalation, and on-call scheduling to help ensure all your alerts get to the right people, in the right groups, at the right time. And Alert Central integrates with just about every IT product that sends alerts, using a simple setup wizard to walk you through the process.


  • Centralize alerts from all your IT systems
  • Automatically route alerts to the right group
  • Filter alerts from noisy monitoring systems
  • Automatically escalate unanswered alerts
  • Easily create and use on-call calendars


And the best part is that SolarWinds Alert Central is completely FREE! Download Alert Central - for VMware or Hyper-V



To catch the first flush of network issues before they turn into a nasty outage, you need to watch a few important network monitoring parameters. With so many factors, objects, and interfaces, the burning question is: what should you monitor?


Knowing the main causes of network downtime helps, but a lot depends on the design of your network, the devices, the services running, and so on. In general, though, which critical parameters need steady monitoring?


Be the first to know, before it affects your users! Here are a few important monitoring parameters:


Availability and Performance Monitoring: Monitoring and analyzing device and interface availability and performance indicators helps ensure that your network is running at its best. Some of the factors that influence good network availability are:


  • Packet loss & latency
  • Errors & discards
  • CPU, memory load & utilization


Detailed monitoring and analysis of this data for your network elements and timely alerting on poor network conditions like slow network traffic, packet loss, or impaired devices helps safeguard your network from unwarranted downtime.


Errors & Discards: Errors and discards are two different things: errors designate packets that were received but couldn't be processed because there was a problem with the packet, while discards are packets that were received without errors but dropped before being passed on to a higher-layer protocol.


A large number of errors and discards in the interface statistics is a clear indication that something has gone wrong. Investigating the root cause will help identify the issue so it can be quickly resolved.
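If you compute error or discard rates yourself from raw interface counters (e.g. ifInErrors against total packets), remember that SNMP Counter32 values wrap at 2^32. A minimal sketch of the arithmetic, with hypothetical sample values:

```python
def counter_delta(prev: int, curr: int, bits: int = 32) -> int:
    """Delta between two SNMP counter samples, allowing for one wrap of a Counter32."""
    if curr >= prev:
        return curr - prev
    return (1 << bits) - prev + curr  # counter wrapped between the two polls

def error_rate(prev_err: int, curr_err: int, prev_pkts: int, curr_pkts: int) -> float:
    """Fraction of packets in error over the polling interval."""
    pkts = counter_delta(prev_pkts, curr_pkts)
    errs = counter_delta(prev_err, curr_err)
    return errs / pkts if pkts else 0.0

print(error_rate(0, 5, 0, 1000))  # → 0.005, i.e. 0.5% of packets errored
```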


Device Configuration and Change: Non-compliant configuration changes are a major cause of network downtime. Not knowing what changes are made, and when, is even more dangerous for your network. Create a system that monitors and approves device configuration changes before they are applied to production. Setting up alerts to notify you whenever a change is made helps keep configuration changes from becoming the cause of network downtime.


Syslog and Trap Messages: Syslog and traps serve separate functions. Syslog messages come in for authentication attempts, configuration changes, hits on ACLs, and so on. Traps are event-based and arrive when a device-specific event has occurred, such as an interface with too many errors or high CPU usage.


The advantage here is that, instead of waiting for your network management system (NMS) to poll for device information, you can be alerted on unusual events based on these syslog and trap messages.


Network Device Health Monitoring: Monitor the state of your key device components, including temperature, fan speed, and power supply, so that you don't find yourself in a network outage caused by a faulty power supply or an overheated router. Set predefined thresholds to be alerted every time these values are crossed.


So, start monitoring the critical factors that impact the availability and performance of your network and minimize network downtime!


To Learn more: Unified Network Availability, Fault and Performance Monitoring

Alright Security folks, WEBCAST TIME!!


What is this all about?

This webcast will discuss and showcase how various organizations are dealing with IT security. You will learn whether these organizations are implementing tools and techniques to deal with their security data analytics problems, whether they are automating pattern recognition, and more.



When: Thursday, October 03 at 1:00 PM EDT



Why shouldn't you miss this?

During June and July, together with SANS, we conducted a survey of 600+ security professionals to explore how organizations are handling their security data for better analysis and detection. We even found some shocking facts; for example, almost one-third of the security pros out there still act on hunches when it comes to detecting security threats.



To stay updated, all you have to do is just attend this webcast.



Registration link: https://www.sans.org/webcasts/analyst-webcast-results-analytics-intelligence-survey-ii-96807







Along with SANS analyst Dave Shackleford, you will also hear from Nicole Pauls of the SolarWinds crew.



Nicole Pauls

Nicole Pauls is a Director of Product Management for Security Information and Event Management (SIEM) at SolarWinds, an IT management software provider based in Austin, Texas. Nicole has worked in all aspects of IT from help desk support, to network, security, and systems administration, to complete IT responsibility over the span of 10 years. She became a product manager to help bring accessible IT management software to the masses.

Because Microsoft® SQL Server® is such a widely used database, slowdowns in its environment can lead to issues for multiple applications. More often than not, the root cause of such slowdowns is a memory bottleneck. There are many issues that affect SQL Server performance and scalability. Let's look at a few of them.


  • Paging: Memory bottlenecks can lead to excessive paging, which can impact SQL Server performance.
  • Virtual memory: When your SQL Server consumes a lot of virtual memory, information constantly moves back and forth between RAM and disk. This puts the physical disks under tremendous pressure.
  • Memory usage: No matter how much memory is added to the system, it appears as though SQL Server is using all of it. This can happen when SQL Server caches the entire database in memory.
  • Buffer statistics: When other applications consume lots of memory and your SQL Servers don't have enough, there can be issues related to page reads, buffer cache, etc.
  • Other: Memory bottlenecks can also occur when databases lack good indexes, or when applications constantly process user requests.
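As one concrete example of a memory counter worth watching, SQL Server exposes its buffer cache hit ratio as two raw performance counters, "Buffer cache hit ratio" and "Buffer cache hit ratio base"; the usable percentage is their quotient. A sketch of the arithmetic (the sample counter values are invented):

```python
def buffer_cache_hit_ratio(hit_ratio_counter: int, hit_ratio_base: int) -> float:
    """Percentage of page requests served from the buffer cache rather than disk,
    computed from the raw counter and its base counter."""
    if hit_ratio_base == 0:
        return 0.0
    return 100.0 * hit_ratio_counter / hit_ratio_base

print(round(buffer_cache_hit_ratio(9950, 10000), 1))  # → 99.5
```

A ratio that sags well below the high nineties on an OLTP workload is a common hint of memory pressure, though the threshold that matters depends on your environment.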


Monitor SQL Server Memory

You must continuously monitor your SQL Server in order to improve its overall performance. During this process, it’s vital to check the statistics (optimally, through alerts) of various performance counters related to SQL Server memory. This is especially true when you’re constantly adding more databases to your SQL Server. Along with memory resources you’ll also have to monitor CPU load, storage performance, physical and virtual memory usage, query responsiveness, etc., which also cause performance issues in the database.


Improve Your SQL Server Performance

If you’re really looking to improve your SQL Server performance, it’s imperative to understand your existing environment, and which performance counters you really need.


A server monitoring tool should provide out-of-the-box user experience monitoring for your SQL Server database. It will allow you to simulate end-user transactions and proactively measure the performance and availability of your SQL Server. The tool will also:

  • Ensure the availability and performance of your SQL Server.
  • Give you visibility into statistics, the health of your SQL Server, and then set performance thresholds.
  • Build custom reports to show SQL Server availability and performance history.
  • Get real-time remote monitoring of any WMI performance counters to troubleshoot application issues.


SolarWinds Server & Application Monitor (SAM) comprehensively monitors SQL Server and the other Microsoft applications running alongside it. Try the fully functional, free 30-day trial.

If you are considering HP Network Node Manager i (NNMi) for your network monitoring requirement, or if you already are using NNMi in your network and looking for a change, here are 5 concrete reasons why you should consider SolarWinds Network Performance Monitor (NPM).


#1 NPM is an easy-to-use network management solution

    • Intuitive Web interface
    • Customizable and interactive dashboards and charts
    • No training cost or product management overheads


#2 NPM is affordable enterprise-class software

    • Transparent pricing
    • Flexible licensing
    • No hidden costs


#3 NPM can be installed and deployed typically in under an hour

    • Do-it-yourself deployment
    • No professional services, or product delivery time
    • Download, install and start monitoring in under an hour


#4 Built by network engineers for network engineers

    • Purpose-built based on the needs of the IT community
    • Customer and community-driven product enhancement


#5 Buy only what you need

    • Unified network fault, availability, and performance monitoring in a single product
    • No add-ons required for network monitoring functionality
    • Modular architecture allows you to add other network management solutions and integrate them with NPM


More information on the product comparison is available in this SlideShare


That’s not all!


For network performance monitoring combined with bandwidth monitoring and network traffic analysis, you can save up to 75%[1] over HP NNMi with SolarWinds Bandwidth Analyzer Pack (Network Performance Monitor + NetFlow Traffic Analyzer).


[1] Estimated cost savings for 500 nodes using SolarWinds Network Performance Monitor SL2000 and SolarWinds NetFlow Traffic Analyzer for Network Performance Monitor SL2000 vs. HP Network Node Manager Advanced + iSPI Performance For Metrics + iSPI for Traffic + iSPI for Multicast. Based on available August 2013 data.


Learn More About SolarWinds NPM

Troubleshooting polling issues can involve many steps, and there are a few quick things we can try before placing a support call. For example, say we have a piece of networking equipment, such as a router or switch, that is being monitored in Orion, and suddenly its interfaces go unknown (gray). This tells us that SNMP information is not being recorded for the interfaces; more than likely, we will not see any CPU or memory utilization being reported either. To troubleshoot this issue, we can do the following.

  1. Verify that the network device is configured correctly for sending SNMP traffic to the Network Management Software. A quick example would be verifying that the community string is correct.
  2. Run a packet capture on the server hosting the Network Management Software. We can poll the device from the Orion website by selecting “Poll Now” from Node Details. This sends SNMP queries (UDP port 161) to the destination network device. While you are polling the device, run a sniffer trace using a program such as Wireshark (formerly Ethereal). If there is a network issue, you will see SNMP frames leaving your Network Management Server while no response is returned from the network device.
  3. If you do see frames returning from the network device, this rules out network connectivity issues. Next, verify there is no firewall blocking the SNMP frames on the server hosting the Network Management Software. If you are running a sniffer such as Wireshark, the incoming frames are being read at the network interface card (NIC). A local server firewall, such as Windows Firewall, can still block SNMP frames from reaching the Network Management Software even though we see them hitting the NIC. Note also that if a firewall is blocking SNMP traffic on the local server, this will prevent any SNMP-related information from being reported in your Network Management Software.

In summary, verifying end-to-end connectivity between the Network Management Server and the device in question is key. These few quick checks can verify, and often fix, polling issues before a support call becomes necessary, saving the end user some time.
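The decision logic of the steps above can be sketched in Python. The three observation flags are invented names for this illustration; in practice you would gather them from a sniffer trace such as Wireshark:

```python
def diagnose_polling(request_sent, reply_seen_on_nic, reply_reaches_nms):
    """Map packet-capture observations to the most likely cause of a polling failure.

    request_sent:       SNMP query frames (UDP 161) leave the Network Management Server
    reply_seen_on_nic:  response frames from the device arrive at the server's NIC
    reply_reaches_nms:  the Network Management Software records the response
    """
    if not request_sent:
        return "NMS is not sending polls; check the node's polling configuration"
    if not reply_seen_on_nic:
        return "network or device issue; verify SNMP config and community string"
    if not reply_reaches_nms:
        return "local firewall (e.g. Windows Firewall) is likely blocking UDP 161"
    return "polling path is healthy"

print(diagnose_polling(True, True, False))
# local firewall (e.g. Windows Firewall) is likely blocking UDP 161
```

Each branch corresponds to one of the numbered troubleshooting steps: device/network checks first, then the local server firewall.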

WSUS Inventory collects server and status information from the WSUS server, via the WSUS API, and populates that data into the Patch Manager database. An inventory task must be executed in order to use reporting.


Create the WSUS Inventory Task

There are several ways to create a WSUS Inventory task, but the simplest is to right-click the WSUS node in the console and select "WSUS Inventory" from the menu.


The first screen presented shows the WSUS Inventory Options. These options handle certain advanced or complex inventory needs, but in 99% of instances they can be left at the defaults. In a subsequent post I'll discuss these four options in greater detail. Click Save.


On the next screen you have the standard Patch Manager dialog for scheduling the task. Schedule the inventory to occur as needed. Typically the WSUS Inventory task is performed daily, but there are scenarios in which you may wish to perform the inventory more, or less, frequently. Be careful not to schedule the WSUS Inventory concurrently with other major operations, such as backups or WSUS synchronization events.


WSUS Extended Inventory (Asset Inventory)

In addition to the WSUS server, update, and computer status data, it is also possible to collect asset inventory data via the WSUS Inventory task. If the WSUS server is configured to collect this asset inventory data, it will be automatically collected by the WSUS Inventory task. To enable the WSUS server to collect asset inventory data from clients, right-click on the WSUS node in the console, select "Configuration Options" from the menu, and enable the first option "Collect Extended Inventory Information".


Using System Reports

With the inventory task completed, you now have access to several dozen pre-defined report templates in the two report categories named "Windows Server Update Service" and "Windows Server Update Service Analytics". The data that is obtained from the WSUS server is re-schematized within the Patch Manager database, optimized for reporting, and presented as a collection of datasources that are used to build reports. To run a report, right click the desired report, and select "Run Report" from the context menu.


Category: Windows Server Update Services

In the "Windows Server Update Services" report category there are 24 inter-dependent datasources available. Ten of them provide 327 fields of basic update and computer information, along with WSUS server data. The fourteen datasources named "Update Services Computer..." provide access to 111 fields of asset inventory data collected by the WSUS Extended Inventory.


Category: Windows Server Update Services Analytics

In the "Windows Server Update Services Analytics" report category there are nine self-contained, independent, datasources. The "Computer Update Status" datasource is the basic collection, and the other eight are based on modifications of this datasource, either by adding additional fields, or filtering the data.


In subsequent articles we'll look in more detail at how to customize existing reports and how to build new reports, including a more in-depth look at datasources and the WSUS Inventory Options.


If you're not currently using Patch Manager in your WSUS environment, the rich reporting capabilities are a great reason to implement Patch Manager.

It’s been 35 years since the very first solid-state drive (SSD) was launched, under the name “solid-state disk”. These drives were called “solid-state” because they contained no moving parts, only memory chips. The storage medium was neither magnetic nor optical; it consisted of solid-state semiconductors such as battery-backed RAM, RRAM, PRAM, or other electrically erasable, RAM-like non-volatile memory chips.


In terms of benefits, SSDs stored and retrieved data faster than a traditional hard drive could, but this came at a steep cost. It has been a constant quest in the industry, over the years, to make SSD technology cheaper, smaller, and faster, with higher storage capacity. A post on StorageSearch.com shows the development and transformation of the SSD from its first availability until now.


Why Should Storage and Datacenter Admins Care?

More than the end user on a PC or notebook, it’s the storage admins who spend time managing and troubleshooting drives, detecting storage hotspots and other performance issues. It’s imperative that storage and datacenter admins understand SSD technology so they can better apply and leverage it in managing the datacenter.


Application #1 – Boosting Cache to Improve Array I/O

A cache is temporary storage placed in front of the primary storage device to make storage I/O operations faster and transparent. When an application or process accesses data stored in the cache, it can be read and written much more quickly than from the primary storage device, which can be far slower. All modern arrays have a built-in cache, but SSD can be leveraged to “expand” this cache, thus speeding up all I/O requests to the array. Although this approach has no way to distinguish between critical and non-critical I/O, it has the advantage of improving performance for all applications using the array.
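To make the idea concrete, here is a toy Python sketch of a read cache sitting in front of a slow backing store. Real array caches are far more sophisticated, but the principle of serving repeat reads from fast media and evicting stale data is the same:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache standing in for an SSD tier in front of slow disks."""
    def __init__(self, backing_store, capacity=2):
        self.backing = backing_store   # dict standing in for the HDD array
        self.capacity = capacity       # number of blocks the "SSD" can hold
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)      # mark block as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]             # slow path: read from the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

store = {n: f"data-{n}" for n in range(5)}
c = ReadCache(store)
for blk in [0, 1, 0, 2, 0]:
    c.read(blk)
print(c.hits, c.misses)   # 2 3
```

Note how the cache cannot tell critical from non-critical blocks; it simply accelerates whatever is accessed most recently, which matches the array-wide behavior described above.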


Application #2 – Tiering – Improving Performance at the Pool Level

SSDs help storage arrays with tiering: dynamically moving data between different disks and RAID levels to meet different space, performance, and cost requirements. Tiering enables a storage pool (RAID group) to span drives of different speeds (SSD, FC, SATA), using analysis to put frequently accessed data on SSD, less frequently accessed data on FC, and the least frequently accessed data on SATA. The array constantly analyzes usage and adjusts how much data is on each tier. SSD is typically the top tier in an automated storage tiering approach and is used for applications that demand high I/O performance. Tiering is now available in most arrays because of SSD.
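The placement decision can be sketched as a simple frequency-based classifier. The thresholds and workload names below are invented for illustration; real arrays analyze usage continuously and migrate data automatically:

```python
def assign_tiers(access_counts, hot_threshold=100, warm_threshold=10):
    """Place each block of data on a tier based on how often it is accessed.

    Hypothetical rule: hot data goes to SSD, warm data to FC, cold data to SATA.
    """
    tiers = {"SSD": [], "FC": [], "SATA": []}
    for block, count in access_counts.items():
        if count >= hot_threshold:
            tiers["SSD"].append(block)       # frequently accessed: fastest tier
        elif count >= warm_threshold:
            tiers["FC"].append(block)        # moderately accessed: middle tier
        else:
            tiers["SATA"].append(block)      # rarely accessed: cheapest tier
    return tiers

usage = {"orders_db": 500, "mail_archive": 42, "old_backups": 3}
print(assign_tiers(usage))
# {'SSD': ['orders_db'], 'FC': ['mail_archive'], 'SATA': ['old_backups']}
```

Rerunning the classification as access counts change is what "constantly analyzing usage and adjusting each tier" amounts to in practice.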


Application #3 – Fast Storage for High I/O Applications

Since arrays generally treat SSD just like traditional HDD, if you have a specific high-I/O performance need, you can use SSD to create a RAID group or storage pool. From an OS and application perspective, the operating system sees the RAID group as just one large disk, while the I/O operations are spread out over multiple SSDs, enhancing the overall speed of the read/write process. Without moving parts, SSDs contribute to reduced access time, lower operating temperature, and enhanced I/O speed. Keep in mind that SSDs cannot hold huge amounts of data, so data for caching should be chosen selectively, based on which data will require faster access: performance requirements, frequency of use, and level of protection.
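The reason a RAID group of SSDs looks like "one large disk" while still parallelizing I/O can be shown with a minimal round-robin striping sketch (RAID 0 style; parity and mirroring are omitted for simplicity):

```python
def stripe_layout(n_blocks, n_drives):
    """Round-robin striping: logical block i lands on drive i % n_drives.

    The OS addresses blocks 0..n_blocks-1 as one big disk; the array
    spreads them across drives so reads/writes proceed in parallel.
    """
    layout = {drive: [] for drive in range(n_drives)}
    for block in range(n_blocks):
        layout[block % n_drives].append(block)
    return layout

print(stripe_layout(8, 4))
# {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

Eight logical blocks end up two per drive, so a sequential read of the "one large disk" actually drives four SSDs at once.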


SSD Benefits in Virtualization

In a virtualized environment, there is a fundamental problem of high latency when the host swaps to traditional disks rather than memory: data retrieval from memory takes only nanoseconds, whereas fetching from a hard drive takes milliseconds. When SSDs are used for swapping to host cache, the performance impact of VMkernel swapping is reduced considerably. When the hypervisor needs to swap memory pages to disk, it will swap to the .vswp files on the SSD drive. Using SSDs to host ESX swap files can eliminate network latency and help optimize VM performance.



The application of solid-state drives has become significant in achieving high storage I/O performance. Proper usage of SSDs in your storage environment can ensure your datacenter meets the demands of today’s challenging environments. If you are interested in learning more about the advantages of SSD over hard disk drives (HDD), take a look at this comparative post from StorageReview.com.
