
If you've ever wondered what not to do when implementing file auditing on your Windows systems, the answer is simple: Don't audit everything on every file.

 

What Files Should I Audit?

We recommend auditing only sensitive or confidential files or folders, such as those that contain enterprise financial information or customer data. But even this warrants some fine-tuning. For example, for some files or folders, you might not care if somebody reads or opens them, but you do care if someone modifies or deletes them. Similarly, in many cases it may seem frivolous to keep track of every time someone reads the attributes of a file. In any case, the bottom-line recommendation is: tailor your file auditing strategy to the needs of your company and the requirements of your regulators; just don't set auditing for the C: drive to "Full Control."

 

What Do I Do With File Auditing Information?

File auditing information is extremely helpful in keeping track of who has done what with what files. One way to view this information is to watch the Security log on each system - your file servers, for example - in Windows Event Viewer. Monitor the logs proactively to watch for patterns that might indicate someone is up to no good, or use the logs as a forensic tool after the fact.
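If you'd rather pull these events with a script than eyeball Event Viewer, a few lines will do it. Below is a minimal sketch using Python's pywin32 package; event ID 4663 ("an attempt was made to access an object"), the classic file-access audit event on Windows Vista and later, and the "localhost" server name are assumptions to adapt to your environment.

```python
# Minimal sketch: scan the local Security log for file-access audit events.
# Requires pywin32 (pip install pywin32) and an account allowed to read
# the Security log (typically an administrator).
import win32evtlog

log = win32evtlog.OpenEventLog("localhost", "Security")
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

while True:
    events = win32evtlog.ReadEventLog(log, flags, 0)
    if not events:
        break
    for ev in events:
        if ev.EventID & 0xFFFF == 4663:  # mask off severity/facility bits
            print(ev.TimeGenerated, ev.StringInserts)

win32evtlog.CloseEventLog(log)
```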

 

Another way to manage this information is to use Event Log monitoring software, such as the SolarWinds free tool of the same name. This way, you can see logs from several systems at once, and even compare them to correlate similar events.

 

The best option is to use comprehensive log management or SIEM software. Something like SolarWinds Log and Event Manager (LEM) not only consolidates this information from numerous systems (including Linux, Mac, and others), it also allows you to build filters and rules to show you what you want to see when you want to see it. Furthermore, LEM can take immediate action when something fishy happens, like logging the offending user off the system or even disabling the account.

 

For additional information about how to implement file auditing on your Windows systems, check out our knowledge base article, "How to enable file auditing in Windows."

A rogue access point (AP) is a wireless access point that has gained access to a secure enterprise network without explicit authorization from the network administration team. These unauthorized rogue access points open wireless backdoors into wired networks. There could be numerous unauthorized APs in and around the airspace of your corporate firewall: Wi-Fi devices from employees who bring personal devices into the corporate WLAN, and APs from neighboring businesses that are reachable from your network because of proximity. These may not look malicious, but they are unsecured and may turn into security threats later on. And then there are the actual rogue APs, which pose genuine security threats by intruding into your corporate network.

In order to better understand the intent of these APs, let's classify them as:
  • Unauthorized APs – those introduced by employees within the organization, but with no detrimental intent
  • Insecure APs – those that bypass network security owing to airspace proximity
  • Malicious APs – actual rogue APs that pose a security threat. Some examples include:
    • Skyjacking attacks: vulnerabilities within device access points can be used by remote attackers to convert an authorized AP into a rogue one by taking full control of it.
    • Planting a malicious rogue AP within the office space disguised as a trusted AP.
    • Rogue APs can also spoof MAC addresses used by legitimate APs or try to mimic your own WLAN's SSID.


While all of these malicious and non-malicious access points need to be monitored, it is the responsibility of the network administrator to ensure the malicious ones are contained and eliminated.
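To make the classification concrete, here is a toy Python sketch of the kind of check a monitoring tool performs: compare the BSSIDs (AP MAC addresses) seen in the air against an inventory of authorized APs. The MAC addresses and scan data below are placeholders; real products pull this data from wireless controllers, for example via SNMP.

```python
# Toy rogue-AP check: flag any AP whose BSSID is not in our inventory.
# AUTHORIZED and the sample scan are placeholder data.
AUTHORIZED = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def classify(seen_aps: dict[str, str]) -> None:
    for bssid, ssid in sorted(seen_aps.items()):
        status = "authorized" if bssid.lower() in AUTHORIZED else "possible rogue"
        print(f"{bssid}  {ssid:<20}  {status}")

classify({
    "00:1a:2b:3c:4d:5e": "corp-wifi",
    "66:77:88:99:aa:bb": "corp-wifi",   # mimics our SSID -> suspicious
})
```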

How Can SolarWinds Help You Monitor Rogue APs?

SolarWinds Network Performance Monitor (NPM) is an effective network management software package with an integrated poller that can help identify rogue APs in a multi-vendor network environment by scanning wireless controllers and devices. SolarWinds NPM supports monitoring both thin and thick (or autonomous) access points and their associated clients. You can also use the out-of-the-box reports on rogue access points over varying time frames.

[Screenshot: NPM Wireless Summary view]

SolarWinds User Device Tracker is a comprehensive network device monitoring tool that can be used to drill deeper into a rogue access point and get details of all the endpoints connected to it, when the rogue AP was connected, how long it was active, and which user was using it.

 

[Screenshot: UDT Access Point Details]

Now that you’ve detected the rogue access point and analyzed its activity in your WLAN, you can take appropriate measures to contain or eliminate it from your enterprise network once and for all.



The Basics of JMX

Posted by Bronx Aug 31, 2012

JMX stands for Java Management Extensions, a framework that allows remote clients to connect to a Java Virtual Machine (JVM).

 

Using JMX, you can manage and monitor applications running in a JVM environment. In Java, management of applications in a virtual machine is done through Managed Beans, or MBeans.

 

MBeans are the soul of JMX: they are the controllable endpoints of an application, through which remote clients can both watch application activity and control it. An MBean represents a resource running in the JVM, such as an application, and can be used to collect statistics on performance, resource usage, problems, and so on.
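JMX itself is exposed over a Java-native transport (RMI), so non-Java clients usually go through a bridge. Purely as an illustration, the sketch below assumes a Jolokia agent (jolokia.org), which exposes JMX over HTTP/JSON, is attached to the JVM on its default port 8778; the hostname is a placeholder.

```python
# Read the HeapMemoryUsage attribute of the java.lang:type=Memory MBean
# through a Jolokia agent assumed to be attached to the target JVM.
import requests

resp = requests.get(
    "http://appserver:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
)
heap = resp.json()["value"]
print(f"heap used: {heap['used'] / 2**20:.1f} MiB of {heap['max'] / 2**20:.1f} MiB")
```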

 

A common tool used to monitor a JVM is JConsole, a free graphical monitoring tool that can be found here: http://sourceforge.net/projects/jconsole/

 

JConsole is not necessary to use the JMX component monitor within SolarWinds SAM. The information provided here concerning JConsole is an introduction to using Java as a means of monitoring. Detailed information can be found by navigating to the following link: JMX in SAM.

Virtual desktops have long held something of a mystique for CIOs as they attempt to drive down the overall costs of acquiring and maintaining what has become commodity technology -- the desktop computing environment.  However, CIOs quickly discovered that the costs to implement a VDI solution done right were much higher than initially anticipated, for a number of reasons:

 

  • Microsoft continues to impose a VDI tax for Windows, but that's not the focus of this article since it's not really all that flexible.
  • The cost to obtain enough storage to support both VDI-based boot storms and user capacity was far beyond what most CIOs were expecting.  The sheer number of spindles needed to support such environments created a pricing situation that skewed the results in a negative way.
  • Terminals themselves were expensive, often approaching the cost of a traditional PC.
  • The end user experience was mediocre at best and multimedia was a non-starter.

 

Over the past few years, the last point has been well addressed with the introduction of and subsequent improvements to both Teradici's PCoIP protocol and Microsoft's RemoteFX.  When used properly, these protocols can provide a user experience that can rival that of traditional PCs.

 

In the past year or two, we've started to see the third point addressed, with less expensive terminals hitting the market.

 

The second point is one that is being addressed now, as we see a slew of new hardware vendors -- all-in-one vendors such as Nutanix, Pivot3, and Simplivity among them -- hit the market, and through the introduction of storage products that hit a sweet spot in terms of price and performance.  At VMworld 2012, there were many, many storage vendors relating VDI success stories with customers using their storage equipment.  As you may know, VDI requires storage that has both scale and performance in order to counter boot storms and to ensure that end users are provided with an experience that doesn't hold them back.  The sheer number of players and the countless VDI stories from these vendors lead me to believe that VDI is a growing market.

 

Of course, it doesn't hurt that BYOD (or what VMware calls SYOM - Spend Your Own Money) is a growing force in IT.  When BYOD and VDI come together, IT departments can simply service BYOD users by provisioning them a virtual desktop that can run on just about any device out there.  So, we're hearing stories from vendors regarding VDI success and BYOD is pushing VDI to the mainstream.  It sounds like the beginnings of something pretty significant!

 

What do you think?  Will VDI ever become as mainstream as server virtualization has?

 


Continuing with the earlier blogs on Simplified Windows Administration and Improved End-user Support, let's take a stab at simplifying and automating some Active Directory (AD) and system information management tasks. A huge amount of time and effort goes into working with AD, managing AD properties, and exporting AD and system information, when you have the option to keep it simple and under control.


Let's discuss some industry-wide best practices and tips for executing these activities quickly, securely, and effectively for improved Active Directory and system information management.

 

  1. Active Directory Management Tips
    • Design your AD environment to have business-critical objects (such as user accounts used to run system services) separated into Organizational Units (OUs) so that not everybody has access to make changes
    • Use the Delegation Wizard in the Active Directory Users and Computers (ADUC) management console to assign permissions to IT staff for AD management based on the tasks they execute
    • Adding, removing, and updating AD users and managing multiple domains and OUs can be a daunting task to perform individually on every single system. Using remote support software to execute these tasks on all the required Windows PCs can significantly reduce manual labor.
  2. Leverage Group Policy for Configuration Management and Security. Group Policy allows you to make changes across multiple servers. IT admins can create Group Policy Objects (GPOs) using Active Directory and apply them to the various OUs.
  3. Exporting System Information:
    • Use Windows System Resource Manager (from the native Windows administrative tools) to export configuration information from your Windows servers.
    • When dealing with mass export of information from PCs, whether WMI properties, AD objects, or system information, it's best to use an exporter tool, better still one that supports remote tech support, for exporting into files of different formats (see the scripted sketch after this list).
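As one example of scripted AD export, here is a minimal sketch using the ldap3 Python package; the server name, account, OU, and attribute list are all placeholder assumptions to adapt, and this is an illustration rather than any particular vendor tool.

```python
# Minimal AD-to-CSV export sketch using ldap3 (pip install ldap3).
# Server, credentials, and base DN below are placeholders.
import csv
from ldap3 import Server, Connection, ALL, NTLM

server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc-ldap", password="...",
                  authentication=NTLM, auto_bind=True)

# Find all user objects in one OU and pull two attributes.
conn.search("ou=Staff,dc=example,dc=com",
            "(&(objectClass=user)(objectCategory=person))",
            attributes=["sAMAccountName", "mail"])

with open("ad_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sAMAccountName", "mail"])
    for entry in conn.entries:
        writer.writerow([entry.sAMAccountName.value, entry.mail.value])
```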

 

Another valuable nugget for IT-savvy sysadmins: learn some quick PowerShell scripting commands to automate basic admin tasks. This can be a life-saver for repetitive tasks, saving you plenty of time. You can read this post from TechRepublic that teaches 10 simple PowerShell commands to get started.


Ensuring quick and effective handling of system administration tasks will help you reclaim more time to concentrate on resolving the network, application, and infrastructure issues that are impacting your business. A successful IT admin is one who balances the various issues and challenges posed and finds the right way to handle them all quickly and effectively. Preparation, balance, and automation can lead you there!

SolarWinds has just released our newest free tool, Call Detail Record Tracker, to make it easier for you to quickly view call detail records and see the relevant MOS score.  With Call Detail Record Tracker you can:

 

  • Retrieve call detail records from Cisco CallManager 7.x and 8.x
  • Load up to 48 hours of CDR data
  • Search, filter, and sort call detail records

 

To get started with Call Detail Record Tracker, you simply point to the FTP server where you store your CDRs, enter your credentials, and let the tool know how far back you want to retrieve CDRs (up to 48 hours):

 

[Screenshot: Call Detail Record Tracker setup]

 

Once your CDRs have been retrieved, you can search, filter, and sort based on call origin, call destination, call status, termination cause, call quality, and call time:

 

 

[Screenshot: Call Detail Record Tracker search, filter, and sort results]

 

Learn more about SolarWinds free Call Detail Record Tracker, watch this overview video, or download your free copy.

 

For more advanced VoIP monitoring and troubleshooting, take a look at SolarWinds VoIP & Network Quality Manager (VNQM).  See this blog to learn more about VNQM.

Locating users and devices in a wireless network just got easier with today's release of version 2.5 of User Device Tracker.  Now you can track users and devices within a wireless network with support for wireless thin access points.  This new product capability can enhance network security for any business, educational, governmental, or healthcare environment that has a wireless infrastructure.

 

UDT's summary page now includes two new resources:  Top 10 SSIDs by Current # of Endpoints, and Top 10 Access Points by Current # of Endpoints.

 

[Screenshot: UDT summary page with the two new resources]

You can drill down for additional details about the SSID:

 

[Screenshot: SSID details]

And the Access Point:

[Screenshot: Access Point details]

 

And again drill further down for details on the Endpoint Connections:

 

[Screenshot: Endpoint Connection details]

 

Learn more about User Device Tracker or download a free fully functional 30-day trial.

Gartner analyst Richard Jones led a really insightful presentation today at VMworld 2012 about storage in virtual environments. He focused a lot on what he called "Server Hosted Virtual Desktops," or SHVD (a.k.a. virtual desktop infrastructure (VDI), hosted virtual desktop (HVD), etc.), which, he rightly pointed out, have some significant impacts on storage architecture. In fact, the write loads on virtual desktop deployments can be significantly greater than those of even the most intensive transactional databases. So, obviously, storage management, including subsets like SAN management, is a major concern in these environments.

A few major points worth noting:

 

  • Gartner predicts that, through 2016, storage costs will be a bottleneck in virtualization and VDI in one-third of organizations.
  • This issue is one reason for the emergence of a group of niche virtualization resource players like Nutanix, Pivot3, Simplivity, and many more that build integrated server and storage appliances, some of which are tuned for specific applications like VDI.
  • Storage planning for SHVD cannot be taken lightly, and storage should be about 40%-60% of the budget for an SHVD project. In other words, don’t just guess – do some analysis!
  • Tying in with the first bullet above, CapEx is a huge barrier to virtual desktop deployments. Most organizations do not realize a straightforward positive ROI if they are simply comparing virtual desktops to their physical deployments. Most VDI deployments range from cost parity to 40% more expensive than their physical counterparts. However, VDI brings advantages in security, compliance, and other areas that often make it worth the additional cost.
  • VDI deployments are the embodiment of the “Storage I/O Blender.” It’s important to understand the implications of all of the different points of complexity.
  • Most VDI deployments on existing storage solutions fail. Gartner recommends new storage solutions to support VDI deployments in the long term…unless you already have a “big honking storage array.”

 

We'll go into each of these topics and many more over the next few weeks, but suffice it to say that a GREAT place, and possibly the MOST important place, to start in assessing your organization's readiness for VDI is your storage. Looking at your environment through the lens of a good SAN management tool or storage monitoring solution early in the consideration process will give you some good insight into the load on your storage arrays and how you'll need to architect VDI storage for your organization, and make you a storage performance monitoring superhero!

VMware View has been a major topic at VMworld 2012, and one of the key issues that even VMware acknowledges is that infrastructure costs are still a barrier to broader adoption of virtual desktop infrastructure (VDI).  In particular, the unpredictability of storage IOPS and the differences between peak activity and steady state make sizing and purchasing storage for VDI very difficult.  Essentially, sizing for peak storage loads resulting from boot storms or upgrade, antivirus, or other change operations can make the storage bill too expensive to make VDI practical for many companies.  On the other hand, undersizing storage capacity can quickly result in a poor user experience and rapid dissatisfaction, especially in the adoption phase.  Caught between these two problems, VMware has put some additional focus on bringing down the infrastructure cost barrier with two features – View Storage Accelerator and View Composer Array Integration (VCAI).  These enhancements address some of the key storage problems customers see in their VDI environments.

 

View Storage Accelerator is a capability focused on accelerating the read process for VDI. Essentially, it puts an in-memory cache of between 400 MB and 2 GB of RAM on each host in front of the disk-based storage.  The cache holds the primary storage bits that the virtual desktops are reading and takes that load off of the disks in the infrastructure.  Depending on the operations or scenarios examined, the capability reduced peak read IOPS by up to 80% and average IOPS by 45%.  Different scenarios resulted in smaller reductions, but they were still substantial. That change can make a substantial reduction in the required storage capacity and capital cost.


The second capability they have added is called View Composer Array Integration (VCAI). Essentially this is a different strategy that offloads much of the workload from the virtual system to the storage array where the cloning and snapshot technology can be very efficient. View can then access those newly created images to rapidly deliver services to the end users.  VCAI is part of View Composer and integrates with NAS storage partners using the vStorage APIs for Array Integration (VAAI).  EMC and NetApp are the only partners that currently provide this capability but they expect more to participate in the future.


With these enhancements, VMware is addressing probably the biggest infrastructure hurdle people encounter when they move to VDI, but they recognize that other challenges remain. When you remove storage IOPS as your primary bottleneck in these operations, that often exposes the next bottleneck, frequently CPU.  Additionally, while read IOPS storms are probably the most common, this won't help with operations that are write intensive.  While more is needed to reduce infrastructure costs, these enhancements should help VDI infrastructure budget requests make it past the laugh test.

Virtualization is all about abstraction.  With VMware, we've abstracted complete servers away from the hardware.  For storage, we're creating automated tiering pools that enable data to flow up and down tiers as necessary in order to meet predefined business objectives.  In the data center, we're looking for anything and everything that can help automate as much as possible; at VMworld 2012, there is no shortage of vendors on the trade show floor with products intended to meet just this need.  Also on the floor are multiple vendors selling building block pieces of hardware intended to replace the mass of hardware that's entered the data center over the past decade.  So, rather than buy a new server or a new SAN, CIOs will simply buy a new "infrastructure building block" as needs dictate.

 

In short, we're looking at an era of massive simplification of the technology environment.  These environments have grown complex and the skill sets necessary to maintain the services are vast and expensive.  Businesses can no longer afford to simply keep throwing IT people at what is not a core business function at the expense of the lines of business.  Instead, IT departments will need to find ways to become more efficient and lower "keeping the lights on" costs.  The companies that we see in the trade show booths are a testament to the perceived need for these kinds of services.

 

Does this mean that IT doesn't matter anymore?  Not at all!  Rather, I see this push for simplification as a reinforcement of the incredible value that IT departments can bring to their organizations.  CIOs that are able to master the simplification cycle can redirect energies from "cost center" activities to ones that become business multipliers with a direct impact on revenue.  I don't see these simplification plays as a risk to IT; in fact, I see simplification as a necessary step that every CIO should undertake for the benefit of their organization.  Driving inefficiency out of the enterprise is a battle worth fighting.

 

There are two possible futures for IT organizations:

  • Cost centers reporting to the CFO and suffering from budget cuts every quarter as the CFO looks for ways to reduce expenses.
  • Business multipliers with the CIO being a trusted partner in critical business decisions.

 

The age of simplification will eventually yield from IT what we've wanted all along--great support and the ability to enable the business--and there are plenty of partners out there that are ready, willing and able to assist.

 


The Monday general session at VMworld 2012 covered a lot of ground but didn't really provide any major new technology announcements.  It started off with some of the traditional background statistics showing VMware's and virtualization's growth from 2008 to 2012, including an increase in virtualized workloads from 25% to 60%, and the observation that the key question regarding the cloud has changed from "What is it?" to "How do you implement it?"  With that introduction, Paul Maritz went on to lay out his vision for the future, starting with the general and pretty widely used paradigm that IT must provide services "Wherever, Whenever, and in Context." In order to make this transformation, VMware sees a number of transformations at the various layers in IT:

  • Infrastructure Layer: a transformation from server based to the cloud
  • Application Layer: a transformation from existing applications to new, cloud friendly applications and big data
  • Access Layer: a transformation from the PC to mobile devices.

While none of that is really new news, they then got into the heart of what they are aiming for with a "Software-Defined Datacenter," where all the infrastructure is virtualized and delivered as a service and automation is done by software.  While the concept isn't new, it did lay out where VMware will be putting their emphasis in the near future, trying to expand out from just the compute components to virtualized network and storage capabilities as well.  As part of this effort they announced the vCloud Suite, which appears to be primarily a regrouping of their existing capabilities, including management (vCenter Ops and vFabric) and virtualization and cloud with vSphere and vCloud Director, while adding new capabilities for software-defined networking and security plus software-defined storage and availability, all available through a set of APIs.

After announcing the vSphere 5.1 release, along with some shots at Microsoft Hyper-V around performance and reliability, they got to what was probably the only really new news of the session: VMware will drop their current vRAM-based pricing scheme and move to a per-CPU price with no resource limits.  This was based on a survey of 13,000 of their customers and is clearly an effort to correct what has been broadly acknowledged as a tactical mistake on their part.

The remainder of the session went into more details about how they will implement the vCloud Suite capabilities, including a couple of relatively standard command-line and screen-shot demos.  Tuesday's general session is "Delivering on the Promise of the Software Defined Datacenter"; we'll see if any exciting news comes out of that.

What is with all these different protocols? Who needs them anyway? If you're monitoring server and application performance with a server performance monitoring tool, you do. Different protocols do different things and consume varying amounts of bandwidth and resources. The better you understand the differences, the more apt you are to make the right choice when setting up your application monitoring solution. So when it comes to effective server monitoring, let's define the big four:

 

Remote Procedure Call (RPC)

RPC allows for communication between computers on a network. The big advantage of using RPC over other protocols is that you can execute programs on another computer within your network without remote interaction.

Pros: Very flexible.

Cons: Bandwidth and resource hog. RPC can fail due to unforeseen network issues.
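Microsoft's RPC stack isn't something you script directly in a few lines, but the underlying idea, calling a procedure on another machine as if it were local, is easy to demonstrate with XML-RPC from Python's standard library. A toy sketch under that assumption, not an example of Windows RPC itself:

```python
# Server side (run on the remote machine): expose a procedure over the network.
from xmlrpc.server import SimpleXMLRPCServer
import shutil

def disk_free_gib(path):
    """Return free space on the given path, in GiB."""
    return shutil.disk_usage(path).free / 2**30

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(disk_free_gib)
# server.serve_forever()  # uncomment to actually serve

# Client side (run anywhere): the remote call looks like a local function call.
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://server01:8000/")  # placeholder host
# print(proxy.disk_free_gib("C:\\"))
```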

 

Windows Management Instrumentation (WMI)

WMI is Microsoft's implementation of Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). It allows for scripting languages to manage Windows-based machines remotely as well as locally.

Pros: Very flexible when using Windows-based machines.

Cons: Bandwidth and resource hog, but not as bad as RPC. WMI requires Windows login information. WMI can only be used with Windows-based machines.
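For a feel of what WMI scripting looks like, the Python wmi package (a thin wrapper over pywin32's COM support) can query the same classes a monitoring tool uses. A minimal sketch; the host name and credentials are placeholders, and it must run on a Windows machine.

```python
# Query CPU load and free memory on a remote Windows machine over WMI.
# Requires: pip install wmi (which pulls in pywin32); Windows only.
import wmi

c = wmi.WMI(computer="fileserver01", user="EXAMPLE\\admin", password="...")

for cpu in c.Win32_Processor():
    print(cpu.Name, cpu.LoadPercentage, "% load")

for os_info in c.Win32_OperatingSystem():
    # FreePhysicalMemory is reported as a string, in kilobytes.
    print("free memory:", int(os_info.FreePhysicalMemory), "KB")
```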

 

Internet Control Message Protocol (ICMP)

ICMP is not a transport protocol, and it is not generally used to exchange data between systems; diagnostic tools like Ping and Traceroute are the notable exceptions. ICMP is used primarily to send messages to other networked computers indicating, for example, that a service is not available. ICMP can also be used to relay query messages.

Pros: Minimal bandwidth and resource usage.

Cons: Very limited in scope and flexibility.
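A quick availability check is as simple as an ICMP echo. The sketch below shells out to the system ping command, since raw ICMP sockets need administrator rights; the target address is a placeholder.

```python
# Minimal, portable up/down check via the system's ping command (ICMP echo).
import platform
import subprocess

def is_up(host: str) -> bool:
    # Windows ping uses -n for count; Linux/macOS use -c.
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

print(is_up("192.0.2.1"))  # placeholder address
```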

 

Simple Network Management Protocol (SNMP)

SNMP is a vendor-, hardware-, and software-independent protocol for networked devices. Devices that typically support SNMP include routers, switches, servers, and so on. SNMP is used mostly to monitor network devices for conditions that warrant attention. SNMP exposes data in the form of variables which provide vital information on system configuration. These variables can then be queried by Application Monitors.

Pros: Very flexible. Can pull information from Windows and non-Windows machines. SNMP v3 adds authentication and encryption, making it the most secure version. Uses very little bandwidth.

Cons: SNMP collects less information than WMI.
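For a feel of what an SNMP poll looks like in code, here is a minimal GET using the pysnmp package, reading a device's sysName (OID 1.3.6.1.2.1.1.5.0) over SNMP v2c; the address and community string are placeholders.

```python
# Minimal SNMP GET sketch (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # v2c community string
    UdpTransportTarget(("192.0.2.1", 161)),    # placeholder device address
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.5.0")),  # sysName.0
))

if errorIndication:
    print(errorIndication)
else:
    for name, value in varBinds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```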

 

It's important to know the differences between these protocols when you monitor servers. The next question is, "Is there a product out there that is flexible enough to allow me to choose between all of these different protocols?"

After sitting in a session today at VMworld 2012, I was intrigued by how some companies over-complicate things...then I realized that their entire business model revolves around complexity. IBM revealed parts of the methodology it uses when consulting with companies that are managing the transition to cloud computing. Ironically enough, the major component I could see that differentiated the "clouds" they described from simpler virtual infrastructures was the inclusion of a self-provisioning portal. Even for that, they gave it several names worthy of acronyms, like Service Management Automation, and touted it as an ITIL-based process in the cloud.

 

Don't get me wrong, I have nothing against process, but it seems that we put barriers in front of the cloud that don't necessarily need to be there for most organizations. Most companies today already have the building blocks of a properly functioning private cloud in place. It just takes some modifications to existing management processes to go the last mile. However, it is worth examining what the cloud service providers are touting to see where the market may be going. IBM called out five different types of clouds:

  • Private Clouds - private data center, internally managed
  • Managed Private Clouds - private data center, externally managed
  • Hosted Private Clouds - external data center on dedicated hardware, externally managed
  • Shared Clouds - external data center on shared hardware, externally managed
  • Public Clouds - external data center, utility-based service

 

This is pretty similar to the message we see coming from other cloud leaders, like Rackspace (Amazon less so, as they're really only focused on the public cloud). This leaves many asking if the cloud will replace the IT department. I think the answer is that the cloud will not replace the IT department entirely, but it will fundamentally change its role...and this change is probably for the better. IT's role should shift toward more planning and managing tasks and away from building and running the hardware and infrastructure required. Greater levels of automation shouldn't take IT jobs away, but should allow IT professionals to focus their time on more value-added tasks. So, the cloud is not something to be afraid of, but something to embrace as the next wave in computing. A simple first step is to start with some simple cloud monitoring software to gain insight into the performance and utilization of your virtual environment and start optimizing resources.

 

As we've all figured out, the cloud can be whatever you want it to be (or whatever some marketer wants to define it as), but it doesn't have to be a dark, looming cloud. It could just be a welcome rain cloud that causes some growth in the way that IT operates and allows us to be more productive with the same resources!

Tonight, I spent a number of hours at the SolarWinds VMworld 2012 party and had a blast!  I was able to catch up with a number of people I knew and met a number of new people.  With many, I had great conversations that ran the spectrum from a debate over vSphere vs. Hyper-V to helping free children from poverty in many areas of the world.

 

When you look at the cross section of the virtualization community, it's often difficult to remember that we're all there for very different reasons. For example:

 

  • David Marshall and I are at VMworld to greet old friends, make new connections, and identify new content that we can share with the larger virtualization community while we learn about what's new in the environment.
  • Vendors large and small, old and new, pay significant dollars for space to generate new leads, build their company and, hopefully, have their businesses soar to new heights.
  • Attendees responsible for technology in their organizations come to find solutions to vexing business problems and to attend sessions where they can learn to do their jobs even better.
  • Presenters come to the show to freely share their knowledge with other attendees.

 

But, through conversations that I had this evening at the SolarWinds party, some specific conversations made it clear that it's impossible to really nail down a single motive as to why people attend shows like VMworld.  Here are some of the conversations I had this evening:

 

  • A discussion that started out with frustration over the inability to automatically decouple EMC's VFcache from Oracle-based virtual machines, making automated vMotions impossible when VFcache is used.  The attendee indicated that he almost found out too late about this serious limitation and, had he not caught it before a final purchase, he could have been fired because the company depends heavily on vMotion.  While there is a workaround, he was insistent that it was no solution at all.
  • A discussion with another attendee over the merits of Hyper-V as compared to vSphere.   While I've always been a fan of vSphere, I certainly see challenges ahead as Hyper-V 2012/3 gains momentum and mind share.
  • Yet another Hyper-V/vSphere discussion in which the attendee indicated that the hypervisor soon won't matter since Intel will simply build abstraction capability directly into the hardware, leaving current vendors needing only to manage workloads rather than having to provide a complete operational environment.
  • Another discussion with an attendee that just virtualized his environment within the past couple of years.  He explained to me the comprehensive justification process he went through in order to be allowed to go down the virtual path and also explained how excited he was the first time he was able to deploy a new workload in just an hour and how thrilled his boss was at the prospect.

 

But the conversation that had the most impact was the one that I had with people from a group named Compassion.  While I am not a religious person and Compassion is very much a Christian organization, their goal and outreach attempts are incredible.  The people at the show from Compassion support an IT infrastructure responsible for ensuring that 1.4 million children in poverty from around the world are clothed, fed and educated.  I learned about how the group operates, learned that no less than 80% of donations to the group are used in their support activities and that they have a very good success rate.

 

We all have different reasons for being at VMworld; whether it's to make money or to help support activities that feed children, the opportunities to do things better are vast.

VMworld “day 0” was really just a precursor to what we’ll be seeing for the rest of the week.  Today, there were no major sessions, but attendees were treated to the early opening of the exhibit floor where vendors galore have gathered to hawk their wares.

As the biggest VMworld yet, it’s not a surprise that the show floor is a lively place with an impressive cross section of the virtualization market.  In walking around the floor and getting an idea for who has set up shop, there are some clear trends that are really big this year:

 

  • Solid state storage.  It seems like every other booth is a vendor selling solid state storage hardware of some kind.  Vendors include Tegile, Pure Storage, Tintri, EMC, Dell, Nimbus Data, Nimble, and many, many others.  It’s absolutely clear that solid state is the future of storage, and we’re seeing a number of players lining up to capitalize on this revolution.
  • Management and monitoring tools.  With tight budgets and a need to demonstrate good ROI and low TCO, organizations are forced to maximize as fully as possible their investments in IT infrastructure. In looking at the show floor, there are a number of vendors selling solutions intended to address storage performance management issues, VMware performance monitoring and other needs.
  • VDI. Although virtual desktops have been discussed, dissected and debated for a number of years, there are vendors galore on the trade show floor that are hoping to capitalize on the promises of desktop virtualization.

 

It’s crystal clear that virtualization remains a top priority for many organizations and that the vendor space also has high hopes, as evidenced by the sheer number of companies on the show floor.

More to come during the week!

In my first VMworld blog I asked the question “Will it be all about the Cloud again this year?”  From the first day of sessions and my take on the general session topic this morning, I think we have an initial answer – while cloud hasn’t gone away, “software-defined” seems to be the focus from VMware.  VMware is already all about software-defined computing with all their virtual computing capabilities, but it seems the Nicira acquisition has them thinking about “software-defined” everything.  Nicira moves them in the direction of abstracting networking capabilities from the hardware to the software layer.  They are also talking about “software-defined storage” as well. Basically, VMware’s goal is to control all of the major infrastructure components of a data center.

 

This also represents a shift back from the “cloud will rule the datacenter” message that VMware has given in the past.  In reality this makes sense.  Cloud is a great concept, but the implementation of real on-demand clouds is difficult.  It is not just about being able to automatically deploy another VM.  This shift could be seen as a tacit acknowledgement that there is still a lot of work to do to be able to effectively deploy all the services and capabilities required to run a data center with an on-demand cloud approach.  This is especially true when there are still hardware components that require configuration and management.  So the focus on software-defined infrastructure capabilities makes sense in a journey toward the more advanced automation that real cloud implementations require.  Cloud automation clearly exists today, but it either requires some very heavy lifting to fully automate the provisioning and configuration activities, or you have to compromise on the scope and flexibility of your cloud (e.g., use predefined VLAN IP addresses, preconfigure and assign storage, etc.).  By first abstracting the operational tasks for each of the infrastructure components to a software layer, you take the first step toward really facilitating the automated deployment of those resources. While this is a more incremental step toward full deployment and monitoring of the cloud, it is also an acknowledgement that the path may be longer than originally advertised.


The second major point to all this is that VMware’s clear goal is to own all of the underlying infrastructure components of a datacenter.  That is both thrilling and terrifying at the same time. On the thrilling side, VMware has enough execution muscle power that people in the industry have to take them seriously.  That is likely to drive many of the current players who make their living in the spaces VMware is targeting (e.g., networking, firewalls, storage, etc.) to accelerate activities that provide an alternative to VMware controlled abstraction of those infrastructure components.  At the same time, the idea of one vendor that abstracts and controls compute, networking and storage is a scary concept.  If you didn’t like the level of control Microsoft had with its operating system, just imagine what that would be like.

There's been lots of talk in the virtualization market about using flash-based storage or SSD to enhance performance. The major premise is that the bottleneck in many environments is storage I/O rather than storage capacity. This isn't a new problem - storage administrators and DBAs have been dealing with it for years. I distinctly remember designing EMC CLARiiON storage arrays earlier in my career that were loaded with 36GB 15K RPM fibre channel disks simply to maximize IOPS. It's only been in the last few years that solid state storage has become a viable option. The big question is how to implement it.

 

Today at VMworld, I got to hear one point of view promoting the use of flash as a cache. The concept is to install flash storage at the server layer in order to avoid a lot of the storage network congestion. In this scenario, the flash storage would behave a lot like DRAM, with persistent storage across reboots and performance similar to that of memory, but at a fraction of the cost. Fusion-IO, the primary purveyor of this type of technology (through myriad OEMs), argues that deploying flash at the host level lets you buy back-end storage for capacity rather than performance. I think this is probably over-simplifying, as we know that there are economic advantages to tiered storage, and I don't believe for a minute that this technology will allow us to run everything on the cheapest tier of disk without a performance risk.

 

At the end of the day, I think that host-level flash storage should be viewed as another option in deploying tier 0 storage that might, in some cases, not exclude the deployment of SSDs within the same environment. So, what's the right technology for a specific environment? As usual, the answer is, "It depends." The most logical path is to assess your storage environment using a good storage monitoring tool to understand exactly where bottlenecks exist and treat the issue appropriately for each application. Rifles are more efficient in getting the job done than shotguns, in most cases; the problem in most virtual environments is that we don't know what we're shooting at! Performing a thorough analysis of the storage environment from the VM to the spindle to identify the problem sounds like a daunting task, but with the right storage performance management tool in place it can be pretty simple. SolarWinds Storage Manager is a great, easy-to-deploy solution that you can download and deploy in less than an hour - a small time investment that could save you lots of time, effort, and money. Did I mention that Storage Manager has a free 30-day trial?

As my supply of koozies has dwindled over the long hot Texas summer due to natural attrition (dogs making up for a neoprene dietary deficiency, unrecoverable loss while boating, etc.), my mind naturally turns toward VMworld 2012.  What better place to restock the supply and maybe even increase our USB memory stick inventory (which also suffers from gradual attrition)? But as I sipped on a Coke that was rapidly approaching room temperature and scanned the VMworld schedule, a few other questions came to mind that I’m looking forward to addressing at the conference in San Francisco.

  • Will it be all about the Cloud again this year?
    • Much of the noise VMware has made over the last year has been around expanding their cloud capabilities – in particular some of the more complex requirements of private clouds.  Will that focus continue and what does that mean for customers that haven’t yet bought into all the Cloud hype and are more interested in improvements at the infrastructure layer for their base virtualization implementations?
  • How will VMware’s effort to add “software defined” network and storage to their arsenal of virtualization capabilities play out?
    • Given the acquisition of Nicira, how far will they get with the technology, and what will be the approach and timetable for integrating it with the rest of their portfolio?
    • How related will “software defined” storage be and what will it mean for parent EMC?
  • Will VMWare View be able to improve the ROI of Virtual Desktop Infrastructure enough to accelerate adoption?
    • In my discussions with customers, a surprising number are either seriously thinking about VDI or doing initial testing and implementation in a small portion of their desktop environment.  But none that I have talked to are planning full-scale replacement of all their desktops.  Largely this is due to the difficulty of really demonstrating a clear ROI and to lingering technical concerns over such a critical shift.
  • What will be the big new news in the general sessions?
    • We all know that this is the place for the company to get maximum bang from their announcement buck as much of the IT community is watching – like an Apple show, VMware is expected to make some kind of splash every year.  I’m looking forward to seeing if they have anything big up their sleeve.
  • Does vCenter Operations Manager get even bigger and more complex, or does VMware do any work to make adoption simpler?
    • At SolarWinds we make our living out of making IT solutions, including VMware monitoring, that are simpler and easier to use; it will be interesting to see if vCenter Ops tries to follow suit.

I’ll be posting regular blog updates from here at VMworld; stay tuned for the answers to these and other burning questions.


You have a network, and it's your job to keep it running. How do you know it's doing what it should? You could just wait to hear from your users; surely your CIO will let you know when his "internet is broken", but this approach may not have a positive influence on your continued employment. No, you need some network monitoring software. As soon as you are monitoring all the servers, routers, and end user computers on your network, you should consider setting up some SNMP traps on your more crucial, core devices so you know exactly what is happening, more or less as it happens.

 

First, what is SNMP?

 

SNMP (Simple Network Management Protocol) is the industry-standard protocol used for communicating with network devices. SNMP-enabled network devices, including routers, switches, and PCs, host SNMP agents that maintain a virtual database of system status and performance information tied to specific Object Identifiers (OIDs). This virtual database is referred to as a Management Information Base (MIB), and SNMP-based network monitoring uses MIB OIDs as references to retrieve specific data about a selected SNMP-enabled managed device. "Polling" is the general term we use for this process of data retrieval.

 

So, what is an SNMP trap?

 

If you are trying to monitor a large number of devices, where each device may have a large number of its own connected objects, it can become impractical to poll every object of potential interest on every device. SNMP traps and polling work in opposite directions: each SNMP-enabled device you are monitoring can notify you directly if it is having a problem, without solicitation from your network monitoring server.

It's the difference between asking someone, "How are you?" and having that same someone tell you exactly what's bugging them. That's a big difference.
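To see the direction reversal in code, here is a minimal trap receiver sketch using the pysnmp package: the device sends, you just listen. The community string and bind address are placeholders, and port 162 normally requires administrative rights to bind.

```python
# Minimal SNMP v1/v2c trap receiver sketch (pip install pysnmp).
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmpEngine = engine.SnmpEngine()

# Listen for traps on UDP 162 (the standard trap port).
config.addTransport(snmpEngine, udp.domainName,
                    udp.UdpTransport().openServerMode(("0.0.0.0", 162)))
config.addV1System(snmpEngine, "my-area", "public")  # community string

def onTrap(snmpEngine, stateReference, contextEngineId, contextName,
           varBinds, cbCtx):
    # Print the variable bindings carried by each incoming trap.
    for name, value in varBinds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")

ntfrcv.NotificationReceiver(snmpEngine, onTrap)
snmpEngine.transportDispatcher.jobStarted(1)  # run until interrupted
snmpEngine.transportDispatcher.runDispatcher()
```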

 

How do I configure an SNMP trap?

 

Configuration specifics vary from vendor to vendor and from device to device. Generally, however, when you are configuring devices to send SNMP traps, confirm that traps are sent to the IP address assigned to the NPM server. To ensure proper configuration, refer to the documentation supplied by the vendor of your devices.

For more information about SNMP traps, see the chapter, "SNMP Traps" in the SolarWinds technical reference, "Introduction to SNMP". For more information about the SolarWinds Trap Viewer, the utility SolarWinds provides with SolarWinds NPM, see the chapter, "Monitoring SNMP Traps", in the SolarWinds Orion Network Performance Monitor Administrator Guide.

 


VMworld 2012 kicked off today in San Francisco, and it promises to bring lots of surprises. The first I've heard about is a rewind in VMware's pricing model to be announced tomorrow, including a change in pricing for their solution for VMware monitoring, vCenter Operations Manager. From the sounds of it, this will be the big mea culpa on VMware's licensing structure change that most of us have been expecting for months now. There is also a strong chance that they'll announce the end of the vRAM tax. More to come on that soon.

 

Tomorrow, I'm attending sessions on the use of flash as a cache, managing the transition to cloud computing, and a couple of sessions on different use cases. So, what do you want to hear about from VMworld? Leave a comment below, and I'll see what I can do. Oh, and if you're at VMworld, make sure you come by booth 1701 to visit SolarWinds in the Acronis booth and for a chance to win some great prizes!

Most of you are probably familiar with the 9 policies under Security Settings\Audit Policy in Windows. But how many of you know about the 53 additional policies Microsoft added in Windows Vista and Windows Server 2008? The great thing about these new policies is that they provide a lot more granularity over what the previous 9 afforded. The downside? They can also create a lot of noise if you're keeping a close eye on your logs.

 

What do I see if I'm monitoring newer Windows operating systems?

The 53 additional security policies start generating events in the Windows Event Log as soon as you deploy a Windows Vista or higher OS. This causes an immediate influx of audit data, which persists unless you tune the auditing appropriately. And this is all in addition to any existing policies in Security Settings\Audit Policy that may be enabled by Group Policy.

 

To see exactly what policies are available and enabled by default, run auditpol.exe /get /category:* from a Command Prompt on a Windows Vista or higher system. You'll notice several policies are enabled by default, as opposed to the 2 of the 9 pre-configured on older systems.
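If you want to capture that output programmatically, for example to diff audit policy across servers, a short script works. This sketch simply wraps the same auditpol.exe command (run it from an elevated prompt) and filters out subcategories set to No Auditing.

```python
# Dump the effective audit policy and show only subcategories that are
# actually auditing something. Run from an elevated prompt on Vista+.
import subprocess

out = subprocess.run(["auditpol.exe", "/get", "/category:*"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    if line.strip() and "No Auditing" not in line:
        print(line.rstrip())
```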

 

How do I change what gets logged?

If you are running Windows Vista or Windows Server 2008, you have to use the auditpol.exe CLI tool to configure these settings. However, in Windows 7 and Windows Server 2008 R2, Microsoft added these settings to Security Settings\Advanced Audit Policy Configuration. From here, you can set the policies in each of the 10 new categories to No Auditing or any combination of Success and/or Failure.

 

But wait! There's a catch.

Microsoft discourages users from using the policies in both Security Settings\Audit Policy and Security Settings\Advanced Audit Policy Configuration simultaneously. For this reason, whenever you configure one of the advanced audit policies, Windows forces advanced auditing to override basic auditing for that computer. That means, if you had basic auditing configured through Group Policy and you changed a single setting in Security Settings\Advanced Audit Policy Configuration, basic policy is effectively disabled for that computer.

 

What we recommend.

Given the potential volume generated by the advanced audit policies, we recommend you tune those settings, especially if you're using a log management tool like Log & Event Manager. The important takeaway: If you change anything in Security Settings\Advanced Audit Policy Configuration, tune it fully so it matches your basic auditing settings for your pre-Windows Vista clients. Otherwise, your pre-Windows Vista clients will continue auditing events as before, but your clients running Windows Vista or later will only audit what you tell them to in Security Settings\Advanced Audit Policy Configuration.

 

For additional information about the log management solutions from SolarWinds, including a free Windows Event Log Consolidator, visit our Log & Security Information Management page.

 

For information about how to tune advanced auditing in your environment, see Microsoft's Advanced Security Auditing FAQ.

Good question! (I know that’s a good question because it was one of my first when I took on the task of writing the SAM Administrator Guide.) As we all know, both applications and templates are home to various component monitors. That said, it can be argued that applications and templates are basically the same thing. That argument would be correct, for the most part. Relax! Server management is easy with SAM.

 

So the next logical question is, “If they’re basically the same thing, why the two names?” The correct answer is, “Well, they’re not exactly the same thing.” (Thanks for playing.)

 

Now that I have you thoroughly confused, let me explain. The official explanation can be found here, but that’s just too darn…boring (and I wrote it). I find the following definition much easier to remember: An application is a template that is currently assigned to a node. A template is an application that is unassigned. In other words, once you assign a template, it becomes an application. Voila! Confusion be gone!

Understanding Compliance


If data in your network relates to employee or customer medical records, or to your company’s finances—either in terms of revenue or reporting—then most likely you must comply with federal law (HIPAA, Sarbanes-Oxley). Similarly, for all federal agencies and organizations, the National Institute of Standards and Technology (NIST) dictates minimum standards for handling data in the government’s IT systems. Compliance requirements are strictest of all for handling data in US defense-related organizations.


The penalties for non-compliance are severe: federal prosecution (for corporate officers), demotion or discharge (for civil servants).


From an IT perspective, complying with such requirements involves implementing practices for maintaining the integrity and security of data, which often includes creating a repository of network device configurations. While only legal and technical experts with specific knowledge of your business or agency can determine how and to what extent your IT systems must comply with federal laws and regulations, the practices and tools themselves for managing compliance have predictable features.


Compliance Management Features

 

Most compliance management systems for IT are policy-based. Each policy is built from specific rules and then applied to specific network devices. Running a report that is itself built from specific policies allows an IT manager to audit devices across the network, quickly discovering which devices are running compliant configurations and flagging configuration statements that need to be remediated on devices that are currently out of compliance.
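At its core, such a policy engine is straightforward: each rule is a pattern that a device configuration must (or must not) contain. A minimal illustrative sketch in Python, with made-up rules standing in for a real policy pack:

```python
# Toy policy-based config audit: each policy is a set of rules (regex
# patterns) a device configuration must, or must not, match.
import re

POLICY = {
    "require": [r"^service password-encryption", r"^logging \S+"],
    "forbid":  [r"^snmp-server community public"],
}

def audit(config_text: str) -> list[str]:
    """Return a list of violations for one device configuration."""
    violations = []
    for pattern in POLICY["require"]:
        if not re.search(pattern, config_text, re.MULTILINE):
            violations.append(f"missing required line: {pattern}")
    for pattern in POLICY["forbid"]:
        if re.search(pattern, config_text, re.MULTILINE):
            violations.append(f"forbidden line present: {pattern}")
    return violations

print(audit("snmp-server community public RO\nlogging 192.0.2.10\n"))
```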


Compliance Management Products

 

The most useful tools come with packaged reports covering the laws and regulations that commonly impact IT systems. For example, this video showcases a compliance management system that is ready to audit compliance for SOX, HIPAA, DISA STIG, and CISP:

 

http://www.youtube.com/watch?v=Z0jVibm6NB8

Referencing an earlier blog post, where I spoke about how *not* to overcomplicate Windows, I promised you a Part 2. Here are some additional tips and tricks that can help you save time and simplify Windows administration.

 

  • Knowing and managing your Active Directory and Exchange servers well. AD management can be a time-consuming task of adding, editing, deleting, and searching for AD properties and attributes. Some automation effort and Active Directory tools can help you take the pain out of AD management.
  • Defining proper administrative privileges for end-users. Many common issues arise when a user mistakenly or deliberately changes a setting. Effectively defining access permissions helps ensure these issues don’t happen at all.
  • Remotely resolving issues by establishing a secure and effective remote connection to the end-users’ Windows desktops.
  • Using web-based Windows password reset tools/services whereby domain users can log in to a self-service portal and reset their system passwords. Research says password recovery/reset tasks consume about 40% of an IT admin’s day; this time can be reclaimed for other critical tasks.
  • The network and infrastructure are always going to grow. Estimating and planning for expansion ahead of time will help you build and deploy more scalable and flexible hardware and software to support the growth of your IT infrastructure.

 

I am interested in your feedback, views and comments and also any interesting time-saving IT administration stories that have made you an IT rock star. Feel free to comment and share!

 

And on a parting note - if you are looking for a tool that can help you with remote support, check out DameWare Mini Remote Control (MRC). MRC makes it simple and secure to establish remote control access to your Windows systems. Get the “standing-over-the-shoulder” experience with MRC and truly "Do I.T. Remotely" - and maybe stop walking to every end-user's desk for system administration tasks!

With the growth of virtualization, mobile devices, and the cloud, management of your IP address space assignments has become increasingly challenging. Using static spreadsheets to manage your space is no longer a viable method. Not just at the enterprise level, but for mid-size and smaller shops, space assignments and statuses change too rapidly to keep track of manually. Add to that any unforeseen growth through acquisitions, staff changes, and/or a location move, and you quickly feel the drawbacks of relying on static spreadsheets.

Reasons to Replace:

  • Access control/Change management: Who will be allowed to edit the spreadsheet?
  • DHCP addressing: Extremely difficult to manage dynamic data via a spreadsheet
  • Historical tracking of changes: Manually updating columns can lead to typos, errors, etc…
  • No automation

Although simple to use, spreadsheets are not scalable and they can be difficult to access remotely. If by misfortune, fate, or destiny, you are one of the administrators assigned to the role of manually updating spreadsheets, there is a solution. You can automate the processes mentioned above using a simple software solution.

Your spreadsheet replacement tool should automate the discovery of any new devices added to your environment, so that someone (you!) no longer needs to manually update spreadsheets every time new addresses are allocated.

Look for a tool that will notify you of impending doom, such as IP address conflicts, with alerts or scheduled email reports. Poring over spreadsheets to determine which IP addresses are available is very 1990s. It is, after all, the 21st century.

 

Can your spreadsheets set up alerts for scope utilization and critical subnet usage?

Perhaps you would also benefit from customizable reports that provide an audit trail showing who made what changes, and when.

Solutions:

Have a look at IPAM and test out the Spreadsheet Import Wizard. The IPAM Import Wizard provides an intuitive GUI that allows you to select which columns you want to import. The wizard also respects user delegation roles when you have multiple administrators responsible for particular subnets. It imports IP addresses and/or subnet structures, including folders/groups, automatically, while giving you the option to organize them manually via drag and drop. And don't worry, MSPs: duplicate subnets are supported.

Are you planning for IPv6 migration yet? It's never too early to start. IPv6 addresses are difficult to manage using spreadsheets. Just imagine the 128-bit number expressed as eight hex blocks of four characters each, and the possibility of errors as you type them into a spreadsheet.
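To illustrate how easily a hand-typed IPv6 entry goes wrong, here's a short sketch using Python's standard-library ipaddress module to validate spreadsheet-style cells; the sample addresses are made up:

```python
import ipaddress

cells = [
    "2001:0db8:85a3:0000:0000:8a2e:0370:7334",  # full form
    "2001:db8:85a3::8a2e:370:7334",             # the same address, compressed
    "2001:db8:85g3::1",                         # invalid: 'g' is not a hex digit
]

for cell in cells:
    try:
        addr = ipaddress.IPv6Address(cell)
        print(f"{cell} -> OK, canonical form {addr.compressed}")
    except ipaddress.AddressValueError as err:
        print(f"{cell} -> INVALID: {err}")
```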

 

How does IPAM work? IPAM uses ping sweeps, SNMP scans, and neighbor scans (which scan ARP tables) to retrieve up-to-the-minute statuses and data. You can see all events and space utilization summaries for your network on the summary web page, and drill down to the details page to see IP address, MAC assignment, and hostname assignment histories.
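For a rough feel of what the ping sweep part does, here's a deliberately naive, serial Python sketch; it assumes a Linux-style ping command, the subnet is hypothetical, and a real IPAM product combines this with SNMP and ARP neighbor scans:

```python
import ipaddress
import subprocess

def ping_sweep(subnet):
    """Ping every host address in a subnet once and return the responders."""
    alive = []
    for host in ipaddress.ip_network(subnet).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(host)],  # Linux flags: 1 probe, 1s timeout
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result.returncode == 0:
            alive.append(str(host))
    return alive

print(ping_sweep("192.168.1.0/29"))  # hypothetical subnet
```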

Also, IPAM can manage DHCP scopes and IP reservations directly from the IPAM web console, reducing the time spent in RDP connections to the DHCP server console.

For further information see the IPAM Administrator's Guide.

Too small a shop, or no budget? Try the free IPAM tool.

 


Personal experience tells us that outages impacting production networks are most often due to mistakes in making device configuration changes: we either make mistakes in editing a configuration file or in targeting the device(s) for the change.

 

The threat of configuration-related outages is not only constant but grows with your network and IT team. The impact of outages tends to grow as well, depending on how much your business relies on your network to connect with customers. If your network supports customer-facing web interfaces, for example, an outage can mean calculable revenue loss for your company.

 

When configuration management mistakes are made, and outages occur, reducing downtime depends on being able to quickly locate the mistake by auditing recent changes to the network. Two tools are basic to establishing and maintaining a reliable audit trail: a configuration change approval system, and a configuration change confirmation system.


Configuration Change Approval

 

Approval systems are often role-based: any member of an IT team can complete device configuration work and schedule changes, but a manager must review and enable changes for them to be executed as scheduled. A software workflow usually sends the request for approval at the time a change is scheduled; when the change is approved, the software takes action based on the schedule, usually coordinated with network maintenance windows.


Configuration Change Confirmation

 

By triggering a download of a changed device configuration and comparing the current configuration to a previous one, a change confirmation system provides the team with a checkpoint in the form of an email alert. Verifying the accuracy of configuration changes can be as easy as reviewing the side-by-side comparison included in each alert.
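The comparison step itself needs nothing exotic. Here's a minimal Python sketch using the standard library's difflib to produce the kind of diff that could land in an alert email body; the device name and config snippets are hypothetical:

```python
import difflib

def config_diff(previous, current, device):
    """Build a unified diff between two config snapshots for an alert email."""
    return "\n".join(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile=f"{device} (archived)", tofile=f"{device} (current)",
        lineterm=""))

old = "hostname core-rtr-1\nntp server 10.0.0.1\n"
new = "hostname core-rtr-1\nntp server 10.0.0.2\n"
print(config_diff(old, new, "core-rtr-1") or "no changes")
```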

Make Auditing Easy

 

These two systems provide a history of what was done, when, and to which devices, so reversing a configuration mistake becomes relatively trivial. Without them, before you can even begin to resolve the current outage, you face an initial crisis of assembling and verifying the timeline of config changes.

 

To see a good example of a configuration change approval system, see this guided video tour starting at time marker 6:10:

http://www.solarwinds.com/resources/videos/network-configuration-manager-guided-tour.html


You can navigate this same software on your own in demo mode:

  http://configuration.demo.solarwinds.com/Orion/Login.aspx?ReturnUrl=%2fdefault.aspx

SolarWinds Server & Application Monitor and ManageEngine Applications Manager are both value-oriented products designed to be easy to use right out of the box. But when you delve into the details, how do they look? We recently looked under the hood and came up with a few differentiators that you might want to think about before making a purchase decision. How do they stack up? There’s a full comparison here.

 

The high points follow:

 

  • SolarWinds SAM provides Expert Templates that deliver expert recommendations for the optimal thresholds in your environment, along with guidance on what to do when those thresholds are exceeded and how to resolve the issue when it occurs.

    ManageEngine, on the other hand, has no support of this kind. When it comes to optimal thresholds and determining which counters matter for your environment, you are on your own.

  • Real-Time Performance Monitoring is a cool feature within SolarWinds SAM that enables information polling as often as every 5 seconds. This feature allows you to instantly look at the processes running on a machine to determine what might be causing a problem like a sudden server spike. You don’t even need to log in--physically or remotely--to the machine; you simply launch RTPE in context from the affected node.

    ManageEngine doesn’t have this feature or anything like it.

  • Do you want to pay forever, or own your software outright? SolarWinds SAM is a perpetual license, but ManageEngine is pay as you go. And once you get the facts about the additional plug-ins and add-ons you need to purchase with ManageEngine for monitoring your servers and apps, the price goes up even more.

  • Monitoring the hardware underlying your OS and apps is key. ManageEngine provides only basic hardware statistics, which carries the risk of potential hardware problems going undetected.

    SolarWinds SAM, on the other hand, provides comprehensive hardware monitoring, so you’ll have the whole picture of server and app performance, including the underlying infrastructure.

  • With SolarWinds, you are in safe hands. IT Management is the SolarWinds focus. ManageEngine is part of Zoho, a privately-held company with multiple software product lines and competing priorities.

 

Get more details here.

 

Learn more about Server & Application Monitor here.

 


In a business world dominated by stringent network compliance, auditing policies, and regulations, network engineers are under additional pressure to ensure federal and internal corporate policies are met. With enterprise networks growing hand-in-hand with business expansion, it’s not easy to ensure all configuration and device setting changes that happen in the nooks and crannies of your IT infrastructure get tracked and reported. Network operations teams are submerged in huge volumes of device configuration data, analyzing and comparing what has changed, when, and by whom. This becomes all the more difficult in a multi-vendor environment.

 

In addition, there could be a potentially malicious insider trying to tamper with network device configuration settings. Both compliance and network security are at stake when it comes to Network Configuration and Change Management (NCCM).

 

By automating the NCCM process, the time now spent manually chasing down when and where configurations changed can instead go toward solving business-critical IT issues.

 

Top 5 Reasons to Automate NCCM

  1. Automating NCCM allows you to schedule periodic network configuration scans, giving you a holistic view of all changes in configuration and device settings. Benefit: manual effort eliminated, time saved, and the NCCM process optimized.
  2. You can automate real-time alerting on unauthorized and non-compliant config changes, so you know where and when the changes happened, and by whom. Benefit: simplified network troubleshooting and security.
  3. Automation gives you the power to perform deeper analysis of configuration data. You can archive configuration and backup history to deep-dive into config changes and policy violations. Benefit: improved network forensics, with historical config change data available any time for analysis.
  4. Execution of bulk configuration changes is made easy with NCCM automation. Especially in a heterogeneous environment, thousands of device configurations can be scheduled for bulk, uniform change. Benefit: individual device configuration change effort eliminated and the process standardized.
  5. Automating NCCM also incorporates the automation of compliance reporting. Benefit: federal and internal corporate policies complied with efficiently.

 

 

You want to avoid feeling this level of pain - Amazon cloud outage was triggered by configuration error - so let’s review some easy ways to automate your NCCM process:

  1. Write your own scripts and macros to simplify and automate some change management processes within the LAN (a minimal sketch follows this list)
  2. Leverage open source tools for further scripts and automation
  3. Choose the right third-party NCCM automation software that will scale to support your growing multi-vendor network infrastructure
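As a taste of option 1, here's what a bare-bones homegrown backup script might look like. It's a sketch only: it uses the third-party paramiko SSH library, the host, credentials, and command are hypothetical, and many network devices require an interactive shell rather than a one-shot exec_command:

```python
import datetime
import paramiko  # third-party SSH library: pip install paramiko

def backup_config(host, username, password):
    """Fetch a device's running config over SSH and save a timestamped snapshot."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        _, stdout, _ = client.exec_command("show running-config")
        config = stdout.read().decode()
    finally:
        client.close()
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    with open(f"{host}-{stamp}.cfg", "w") as f:
        f.write(config)

backup_config("10.0.0.1", "admin", "secret")  # hypothetical device and credentials
```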

 

 

Feel free to comment on this post and share with the community your experiences or similar stories of configuration mismanagement causing network issues.

The two questions we've been asked most frequently lately about patch management are:

 

• When will SolarWinds Patch Manager support System Center Configuration Manager 2012?
• How long does it take for the Patch Manager team to turn around a package after the ISV has announced patch availability?

 

Patch Manager will support SCCM 2012 in September. SolarWinds Patch Manager extends the power of ConfigMgr to help you keep your desktops, laptops, and servers patched and secure with the latest patches for both Microsoft and 3rd party applications. With Patch Manager, you’ll save hours upon hours of time on your WSUS patch management and eliminate patch management headaches by deploying patches for 3rd party applications right along with your Microsoft patches – Microsoft System Center Updates Publisher (SCUP) is not required. Check out this new video to get a sneak peek at our upcoming release, or check out the Sneak Peek Webcast replay for the gory details.

 

We have also recently published a handy table in PatchZone that documents how long it takes the Patch Manager team to make 3rd party packages available. As you can see, our packaging team is AWESOME – in most cases it takes only about a day or two for a package to be uploaded to the Patch Manager catalog.

 

If you have not done so already, check out SolarWinds Patch Manager and get a free 30-day trial.

Escalated alerts won't give you the massage you so desperately need, but they can relieve a bit of your daily network management stress. Here's how your life might be right now, and then how your life could be if you enhance a typical network performance monitoring approach with escalated alerting.

 

Network Management Without Alert Escalation

 

You are a member of a small IT group at WidgetCo, and you're the point of contact for network management. One of your core routers has gone down hard, and your entire sales group is without network access. You don't know this yet because

  1. You're not in front of your web console.
  2. You've been in a meeting for the last half hour.
  3. You left your phone in the car after lunch.
  4. You can't check your email, and your VP of Sales is on his way to the IT corral; his hair is on fire.

 

Your career at WidgetCo may be shorter than you once thought.

 

An escalated alert won't remind you when you left your phone in the car, but it will make sure your other IT guys can cover for you when you can't get the job done yourself.

 

The escalated alert feature of SolarWinds Orion enables you to customize a series of alerts to trigger successive actions as a defined alert condition persists. Let's see how an escalated alert might have helped your situation.

 

Network Management with Alert Escalation

 

You are a member of a small IT group at WidgetCo, and you're the point of contact for network management. One of your core routers has gone down hard, and your entire sales group is without network access. You don't know this right now (see above), but you need not worry, because an escalated alert has your back:

  1. Immediately, as soon as NPM recognizes the down router, you get both an email and a page. An entry is also recorded in the Orion events log.
  2. If the alert is not acknowledged in the Orion Web Console within 2 minutes--and it won't--a second alert is fired, generating another email and another page, both of which are sent to you and to everyone on your staff. An entry is also recorded in the Orion events log.
  3. If the second alert is not acknowledged within 2 minutes, NPM fires a third alert that sends both a third email and a third page to you, to your staff, and to your boss, the director of IT. (Remember, you're a small IT group.) An entry is also recorded in the Orion events log.
  4. Your boss can intercept the VP of Sales or even give him a heads-up before he hears about it from his guys.
  5. You can buy lunch for your boss tomorrow, and any other day thereafter that he requires it.

 

Escalated alerts can ensure that everyone on the WidgetCo IT staff is notified of any significant network alert conditions within 5 minutes, without burdening your IT manager with excessive alert notifications when you are in front of your web console.
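Under the hood, this tiered pattern reduces to simple time-since-alert logic. Here's a hypothetical Python sketch of the idea; the tier timings and recipients mirror the WidgetCo story above, not any actual Orion configuration:

```python
import time

# Hypothetical tiers: (seconds the alert may go unacknowledged, who gets notified)
TIERS = [
    (0,   ["you"]),
    (120, ["you", "staff"]),
    (240, ["you", "staff", "director"]),
]

def recipients_for(alert_started, acknowledged, now=None):
    """Return who should be notified at the highest tier the alert has reached."""
    if acknowledged:
        return []
    elapsed = (now or time.time()) - alert_started
    notify = []
    for threshold, who in TIERS:
        if elapsed >= threshold:
            notify = who
    return notify

# Unacknowledged for 5 minutes -> everyone, including the director of IT.
print(recipients_for(time.time() - 300, acknowledged=False))
```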

 

For more information about using Advanced Alerts in general, see the SolarWinds Technical Reference, "Understanding Orion Advanced Alerts". For more technical details about configuring alert escalation, see the "Escalated Advanced Alerts" section of the SolarWinds NPM Administrator Guide.

Numerous prognosticators predicted network meltdowns as a result of the increase in streaming video from the Olympic Games. The Los Angeles Times reported that Internet traffic had spiked about 20% because of live Olympics streaming, and that the CTO of the City of Los Angeles had emailed thousands of City Hall employees asking them to stop watching the games online at work. Another study stated that Games enthusiasts followed the proceedings on two or more personal devices.

 

Tackling the bandwidth crunch


The Olympics may have officially come to an end, and hopefully your network did not melt down, but that does not lessen the need for effective network traffic monitoring. Companies should prepare themselves to ensure bandwidth consumption is actively managed. Some key focus areas should be:

 

  • Monitor network bandwidth & traffic patterns down to the interface level
  • Identify your bandwidth hogs: which users, applications, and protocols are consuming the most bandwidth
  • Understand what protocols and IP addresses are consuming bandwidth. 

 

If your primary concern is to measure and control traffic to avoid congestion and poor performance across the variety of devices on the network, it is important to have a solution that addresses all of the above best practices. You may be interested in checking out a netflow analyzer tool, which helps monitor network traffic and gives you visibility into the performance of any QoS policies you may have established on your network.

 

 

Let us know, did streaming video from the Olympics have any impact on your network?

 

 

The Winter Games will be here sooner than you think.  Will your network be ready?

As IT administrators, we know the pain of addressing issues from the weird to the wildest kind. Many of the tasks we do each day focus on support, installation, password recovery, configuration adjustments, and the like, while other key IT management and troubleshooting concerns, such as remediating network issues, providing workarounds for errors and system anomalies, and resolving problems that affect end-users and the business, compete for the same hours.


A simple solution could be finding the right way to channel the IT administrator’s time toward business-critical IT problems while having other support and administrative tasks taken care of smartly and quickly. Juggling everything at once will only result in none of it being addressed properly.

The best bet is to find the right means of execution to deal with system administration tasks. Here are some key pointers for IT admins to take note of to optimize their time and efforts for effective Windows server administration.


5 Key Tips to Simplified Windows Server Administration

 

For starters, Windows Event Viewer can be a simple and useful tool to monitor all that is happening on your Windows PC. You can obtain lots of information about hardware and software problems, and monitor Windows security events. Getting a good grip on managing these event logs will further help IT admins.

    1. Restarting Windows services can be cumbersome when a crashed service affects multiple users. You can use the Services utility offered in your Windows server to quickly Start, Stop, Pause, Resume, or Restart your Windows services (see the sketch after this list).
    2. Ever been faced with restarting a crashed remote Windows system? You can use the Remote Shutdown Dialog tool available in your Windows OS to restart or shut down a remote computer. Type “shutdown -i” in the Run command to open the Remote Shutdown Dialog.
    3. When connecting to an end-user computer with multiple monitors, you can use the Remote Desktop Connection client in Spanning Mode. Type “mstsc /span” in the Run command to open the RDC client in Spanning Mode.
    4. Use Windows Performance Monitor to examine how programs running on Windows impact computer performance, both in real time and by analyzing log data. Performance Monitor uses
        • Performance Counters that measure system state/activity
        • Event Trace Data that are components of the OS or individual apps reporting actions or events
        • Configuration Information from Windows registry
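To take tip 1 a step further, a service restart can also be scripted rather than clicked through. Here's a minimal Python sketch that shells out to the built-in Windows net command; run it from an elevated prompt, and note that the Spooler service is just an example:

```python
import subprocess

def restart_service(name):
    """Restart a Windows service by shelling out to the built-in 'net' command."""
    subprocess.run(["net", "stop", name], check=False)  # check=False: fine if already stopped
    subprocess.run(["net", "start", name], check=True)  # raise if the start fails

restart_service("Spooler")  # example: restart the print spooler
```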


Also leverage free open source tools and third-party administration tools that can help you quickly and effectively troubleshoot issues, especially when you have to do it remotely.

 

More to come in the next blog where we’ll talk about Rendering Effective End-User Support…


Let us hear what other tips and best practices you have towards simplified Windows server administration. Feel free to share your experiences with the community!


Managing the BYOD Chaos


The evolution in mobile technology has changed the way we work. Gone are the days when work was done only on personal computers; even laptops have slowly given way to trendy mobile phones and tablets. With new and more sophisticated devices hitting the market daily, employees are not just using them to communicate but largely to conduct work as well. BYOD (bring your own device) is a trend that is becoming the norm. Surprised? Well, don’t be. A recent research report from Forrester shows organizations in Europe and North America are taking this trend quite seriously, with 64% of respondents identifying more mobility support for employees as a top priority. Companies that allow employees to use their personal devices on the corporate network report increased employee productivity, which in turn boosts job satisfaction and retention, along with a drastic reduction in hardware costs. Employees are satisfied because they can work from devices familiar to them. Organizations are smiling at the money they save by shifting the hardware responsibility to the user.

 

But all this consumerization of IT is also causing headaches for enterprises. Gartner reports that BYOD is a real concern when it comes to security, and top enterprises are a worried lot. Key worries include:

 

 

  • Potential for loss of confidential information via personal devices
  • Legal issues and regulatory compliance risks
  • Introduction of malware threats
  • Management burden associated with supporting diverse device types
  • Ensuring user authentication, security, and encryption
  • Policy formulation and enforcement
  • Monitoring and management of Wi-Fi access points
  • Network bandwidth monitoring


If you’re an IT admin, you ought to prepare yourself to tackle the BYOD trend. With the number of unrecognized devices on the network multiplying daily, it’s quite important to have complete control over them so you can proactively address any security risk arising from suspicious rogue devices. Sanjay Castelino, VP, Market Leader Network Management Business at SolarWinds, recently published an article, Managing the BYOD Chaos with Network and Security Information Monitoring and Management, in which he laid out a number of areas an IT pro can focus on to manage the BYOD chaos.


 

 

If you create and enforce the right policy for your organization, monitor usage and access, and implement intelligent and advanced security solutions, BYOD can substantially benefit businesses and employees alike in developing a better, more productive work environment. Try the interactive online demo of SolarWinds NetFlow Traffic Analyzer, a comprehensive network traffic monitor that will help you monitor traffic and bandwidth utilization across your enterprise network.

 

 


You've gotten your orders from upper management, "We need to buy a log management system – NOW!" So what is log management? And why does upper management think we need it at this instant? Hopefully, this blog answers the what and why of log management for you.


What is log management?

Log management involves the collection and management of log data gathered from your network devices, servers, applications, and systems. The log data needs to be collected, stored, analyzed, and monitored to be useful, and this process is much easier with log and event management software, which allows you to manage and monitor the alerts and events that occur within your company.


Log management is typically used for security, system and network operations, and regulatory compliance.
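Even before you buy anything, it's easy to get a taste of the "analyze" step. Here's a small Python sketch that counts failed SSH logins per source IP in a syslog file; the log path and message format are typical of Linux systems but vary by platform:

```python
import re
from collections import Counter

# Matches lines like: "Failed password for root from 203.0.113.9 port 4242 ssh2"
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open("/var/log/auth.log") as log:  # path varies by distribution
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common(5):
    print(f"{ip}: {count} failed logins")
```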


Why should you care about log management?

Log management can help you look like your company’s geekiest IT geek, and it will save your company plenty by giving you the info you need to resolve network security issues more quickly. This is because log management can track and log everything that happens on your network. The resulting log provides you a record of who’s been doing what on your network, which:

  • Offers security protection for your company
  • Logs urgent problems that need to be addressed
  • Creates reports for management
  • Speeds issue resolution, because you can now track everything that’s happened on your network
  • Enables you to track and block IP addresses
  • Allows you to shut down or reboot a workstation
  • Lets you see who is logged in at certain times
  • Shows you where all the network bandwidth is going

For more information on log management and what it can do for you, see SolarWinds Log & Event Manager.

Having the ability to troubleshoot, triage, and keep an eye on things from your phone or tablet frees you from your desk and lets you work on your own terms. Whether you’re in the office or out of it, going mobile is a great boost for the whole IT team's productivity. In this short (but fabulous) webcast, you will learn about Mobile Admin, a super affordable productivity tool from SolarWinds.

 

 

• Find out how Mobile Admin can save you time and improve on-the-job flexibility.
• Learn how to manage over 40 key IT technologies including SolarWinds Orion, Exchange, Active Directory, Windows Server, and more from your mobile device.
• Get details about how simple it is to deploy Mobile Admin across the company for all end-user devices: iPhone, Android, iPad, Blackberry.
• Learn about Mobile Admin’s sophisticated security features.
• Ask questions about your specific requirements!

 

Register today! Even if you can't attend, register and we'll send you the recording to watch at your convenience.

Are you still crawling around your wiring closet trying to trace a rogue network device or user to a switch port?

 

Regardless of whether you're tracking users and devices in a wiring closet that looks like this…

Good Wiring Closet.png

or like this....

 

bad wiring closet.png

 

the process can be tedious and maddening without the right tools. Almost all network admins and systems engineers face three crucial questions when attempting to track devices and endpoints that may be potential security threats.

  • How do I track a device just by knowing its MAC address, IP address, or hostname, without manually searching through connecting wires and cables?
  • How do I find the user that uses the device?
  • How do I find the historical data on when and where a device was connected and which user was using it?

 

Let’s explore how you can answer these three questions with SolarWinds User Device Tracker (UDT).

 

When you first log in to UDT, you will be presented with UDT’s Device Tracker Summary, giving you an at-a-glance view of your monitored nodes and their status. Within this view, you have the ability to search for a user or device quickly and easily (see highlighted box).

 

UDT Summary With Search Box.png

 

Searching for a device is as simple as entering the endpoint hostname, endpoint IP address, or endpoint MAC address. Once the device is located, you can see the node port, node name, connection duration, and connection type, as well as detailed information about the device.

 

UDT End Point Details.png

 

You can drill down into the node port to see the history of devices that have been connected.

 

UDT Port Details.png

 

To search for a user, you simply enter their username in the search box and you will be presented with User Detail.

 

UDT User Details.png

Additionally, with UDT you can create a device watch list so that you are notified immediately when a watched user or device connects to the network.

 

UDT Device Watch List.png

 

UDT Add A Device.png

 

On top of tracking users and devices, UDT maps, monitors, and reports on your switch ports for utilization. A report on switch capacity and port utilization can be just the load-balancing insight you need to implement optimal use of ports and switches. Leveraging UDT’s switch port monitoring functionality, we can:

  • Find available network ports
  • View individual ports per switch, reclaiming unused ports
  • Discover switches operating near full capacity
  • Display switch capacities by ports used, CPU load, and memory used, to justify the purchase of new equipment

 

UDT Ethernet Ports Over Time.png UDT CPU Load.png UDT Radial Gauges.png

 

With UDT, we can drill down into any added node to find all available ports and the status of the port – whether it’s up, down, used or unused. The port details shown by UDT include port name, number, VLAN and duplex along with a complete history of devices that have been attached to the port.

 

SolarWinds UDT’s real-time network discovery feature initiates automated network scans, producing comprehensive network switch/port lists and saving time by eliminating manual database entries.

 

UDT Port Discovery.png

 

You can test drive our live demo or try a free, fully functional 30-day trial of SolarWinds User Device Tracker, an affordable yet tremendously powerful product that will answer all your rogue-device tracking needs.

 

Track them, trace them and take them down! - Let us know what you think.

I recently had the opportunity to talk with Tyler Williams in Charlotte, North Carolina about how he uses Mobile Admin in his organization. Tyler works in Operations. His organization "brings nationwide knowledge to improve local healthcare. It does this by collecting and analyzing clinical and financial data from its member hospitals, organizing committees of members to make decisions and set direction for the alliance, sponsoring seminars and conferences, and sharing best practices." In other words, it focuses on hospital efficiency. In his role as Senior Messaging Engineer, Tyler supports the in-house IT department’s messaging environment, specifically applications like AD, Exchange, and Lync.

 

How do you use Mobile Admin?

I typically use Mobile Admin from my BlackBerry, and occasionally I have used the web client. We are very security-focused, so we only allow access via BlackBerry or through Good, which is MDM security software.


 

What Mobile Admin services do you rely on most?

I use the Exchange, AD, and Remote Desktop services quite a bit.

 

You said in another conversation that you loved the product. What makes you say that? How has Mobile Admin come in handy in your line of work?

It’s funny, because people tease me about being on my BlackBerry all day at work. I don’t take my laptop to meetings, but no matter where I am, I can work. You can’t do everything you can from your desk, but you can get most things done you need to, right from the mobile device.

 

Can you tell us about some instances where Mobile Admin has come in handy?

I can think of several instances where things needed to be taken care of quickly, and that’s where Mobile Admin really comes in handy. I’ve used it from a restaurant while on vacation, from home so I didn’t have to start up the laptop, and even from the light rail while on the way to a hockey game. In that case, we were working on an acquisition and I got an email after hours from a phone engineer who had a vendor on site. There was a script that needed to run, and it had a hardcoded password, and once it ran it was going to make the system have the wrong password. It was a bit of a mess. But because of Mobile Admin, I was able to change the password instantly from the train. It was a quick and easy solution.

 

I can think of other situations, where sometimes we’ll have someone leave the company and I’ll need to disable their AD access almost instantly. Mobile Admin makes that easy to do from wherever I am.

 

What do you like most about Mobile Admin?

It’s convenient. I don’t need to use it every day, but when I need it, I need it. There’s a tool in there to get most things done that you need to get done. I’m never afraid I’m not going to be able to do something, so it gives me a lot of freedom.

 

To try Mobile Admin for yourself, or to learn more about the integrations, tools, and capabilities in the product, check it out at RoveIT.com or download a free, fully functional trial.

How many times have you received a call from an unhappy user about poor call quality? If you’ve deployed any kind of VoIP system, the answer is probably more times than you’d like. Unfortunately, up until this point, there has not been a good way to troubleshoot poor VoIP performance without using invasive network probes or protocol analyzers. SolarWinds does not like to leave hard IT problems unsolved.

Today SolarWinds released the latest product within our network management family, VoIP and Network Quality Manager 4.0 (VNQM). VNQM is an evolution of our IP SLA Manager product that adds VoIP monitoring and troubleshooting to the existing WAN performance monitoring. If you have been looking for an affordable, easy-to-use product that will help you resolve VoIP call quality and connection problems, look no further.

VNQM allows IT pros to search and filter call detail records (CDRs) and then view the pertinent call details, including:

  • call origination, destination, region, or call manager
  • call time
  • call status
  • jitter
  • latency
  • packet loss
  • MOS, and more.

 

CDR Search.png

 

The CDR can then be correlated with the IP SLA operation that corresponds to the call path in order to troubleshoot and pinpoint the cause of poor quality in the network.

 

Call Detail View.png

 

VNQM Highlights:

  • Monitor and Troubleshoot VoIP Call performance by correlating individual call performance with corresponding WAN performance
  • Search, filter, and display call detail records (CDRs) to aid in troubleshooting
  • Monitor site-to-site WAN performance using Cisco IP SLA technology
  • Download, install, and deploy in less than an hour

 

Below are a number of resources regarding SolarWinds’ new VoIP and Network Quality Manager:

 

If you are an existing IP SLA Manager customer under active maintenance, you can enjoy all of the new features of VNQM by upgrading your license within your customer portal.

As more and more end users work and connect remotely to office systems, IT admins face a constant challenge of ensuring easy and secure remote connectivity. More importantly, security issues with remote connectivity have been the topmost concern for the majority of IT admins and have plagued businesses for some time now.

 

  • According to the Verizon 2012 Data Breach Investigations Report, remote connectivity services occupied the top slot among the different channels used by hackers, constituting 88 percent of security threats
  • Misuse of sensitive information, password theft and installation of malicious programs are some of the risks involved when unauthorized end users gain access to networks

 

So, what are the vulnerabilities that make remote connectivity prone to frequent security attacks? Improper identity validation and weak authentication processes for accessing enterprise computers and servers account for most of these attacks. It is easy for hackers to break authentication based on static, reusable passwords, making it impossible to validate identities. Though passwords can be made stronger, there is still a high risk that a password could be intercepted by hacking software. Let’s dive into some quick tips for securely establishing and managing remote connectivity:

 

  • Risk assessment – IT admins need to evaluate what level of access they want to provide to remote users, and should be able to verify each user and device connecting to the network
  • Improved authentication methods – Multifactor authentication, combining factors such as a password, a smart card, and biometrics, is far harder to break than single-factor authentication
  • Educate employees – Communicating your security policies to employees at all levels, and making them understand the risks and benefits associated with secure remote connectivity, is highly essential
  • Encrypt remote communication data – Keeping remote communication data encrypted through security mechanisms such as SSH and cryptographic keys will help protect critical data from being intercepted as it travels between remote and local systems

 

If security hasn’t been your top priority yet, now is the time to take a step forward so you can avoid becoming the next victim of a security attack, especially when remotely connecting to systems.

 

Faced with these remote security challenges, IT admins are always on the lookout for remote control software with enhanced levels of security. Learn how DameWare Mini Remote Control can seamlessly secure your remote connectivity and be the affordable remote control software you are looking for.

Want to win a new scooter or an Alienware M14x gaming laptop?


Well, today is your lucky day!

 

From now through August 30th, if you'll help us build some hype around VMworld, you're eligible to win one of these great prizes. Here are the details:


How to win the laptop

1) Follow @SolarWinds on Twitter

 

2) Tweet something about VMworld with the '@SolarWinds' handle AND the '#VMworld' hashtag. Note that you DO NOT have to be at VMworld to win this prize! Anyone can play, and you can tweet about anything...but here are a few ideas:

  • What are you most excited about at VMworld 2012?
  • What do you want to learn at VMworld 2012?
  • If you use SolarWinds Virtualization Manager or Storage Manager, tell your friends about it!

Just make sure that it's something that your friends will want to retweet ...you'll see why in a second.

 

3) Get your Twitter followers to RETWEET your post.


4) We'll rank the top 10 retweeted users. Then we'll take those top 10 and draw for the winner of the laptop.


See full laptop contest terms and conditions here.


 

 



How to win the scooter

1) Follow @SolarWinds on Twitter

 

2) Visit SolarWinds and Acronis in Booth #1701 at VMworld, and have your badge scanned.

 

3) Tweet a picture of yourself with the scooter with the '@SolarWinds' handle AND the '#VMworld' and '#acronis' hashtags (let your friends know where to come and find us!)

 

4) We'll draw from the list of all of the "Tweeters" for the scooter, and we'll arrange for you to pick it up at the nearest dealership to your home!

 

See full scooter contest terms and conditions here.

 

STAY TUNED FOR MORE UPDATES ON WHERE YOU CAN FIND SOLARWINDS AT VMWORLD 2012!

Simple Network Management Protocol (SNMP) has been around since it was first defined in RFC 1067 in late 1988. Since that time, it has gone through two major revisions. Before we get to v3, let’s take a look at the other two versions and what they accomplished.

 

SNMP v1 defined a structured communication for managing devices from a central manager. An SNMP agent was installed on managed devices. This agent receives information from the managed device and relays it to the manager. Version 1 was limited in functionality (GET, GETNEXT, SET, and TRAP) and could only ask for one object at a time. If the manager needed thirty objects from the agent, it had to ask thirty times, resembling a conversation with a three-year-old.

 

SNMP v2 added the ability to make a bulk request (GETBULK). Here, the SNMP manager sends a GETBULK request for several objects, and the SNMP agent answers back with as much information per packet as it can. Although this was a large improvement in efficiency, v1 and v2 had almost no built-in security. Both used the concept of a community string as a weak security mechanism. The community string is set in the manager software and is passed over the wire in plain text. When SNMP had very little capability, this was not problematic. But as vendors began adding more SNMP SET commands to device agents, it became an issue: simply by sniffing the community string and sending SET commands, a hacker could take down a device or even a network!

 

SNMP v3 was created to address the community string security weakness by defining several security measures, including the following:

 

  • Data encryption – no more plain text.
  • User-based Security Model (USM) – users have as-needed read or read/write access to specific managed devices
  • View-based access control – users are further restricted to administrator created views.
  • Timing mechanisms to prevent recording and replaying of SNMP commands.


So, what can SNMP v3 do for you? SNMP v3 really closes the door on SNMP security concerns by:

  • Preventing hackers from accessing SNMP commands, using encryption and timing restrictions.
  • Allowing you to assign SNMP capabilities according to user needs, on an as-needed basis (see the sketch below).
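To see what an authenticated, encrypted SNMPv3 query looks like in code, here's a sketch using the third-party pysnmp library; the user name, passphrases, and agent address are all hypothetical:

```python
# pip install pysnmp
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    UsmUserData("monitor",                        # hypothetical v3 user
                authKey="auth-passphrase",        # SHA authentication
                privKey="priv-passphrase",        # AES-128 privacy (encryption)
                authProtocol=usmHMACSHAAuthProtocol,
                privProtocol=usmAesCfb128Protocol),
    UdpTransportTarget(("192.0.2.1", 161)),       # hypothetical agent address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(varBind)
```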

For more details on SNMP v3 implementation see

 

http://www.solarwinds.com/documentation/Orion/docs/Implementing_SNMPv3r1.pdf

In just a couple of weeks, thousands of people will be swarming San Francisco for VMworld! It promises to be a great time with lots of excellent networking and content. VMblog.com and SolarWinds are teaming up to make it even better!! We're hosting a vMixer on Monday night, and we'd love for you to join us! VMblog founder David Marshall, vSphere-land’s Eric Siebert, TechRepublic & VirtualizationAdmin’s Scott D. Lowe, and many other vExperts and top virtualization bloggers will be in attendance. This is your opportunity to network with some of the biggest thought leaders in the virtualization world. Beer, wine, and hors d’oeuvres will be provided free of charge.

 

DATE: MONDAY, 8/26/2012

TIME: 5:30 PM – 10 PM

PLACE: The Palace Hotel’s French Parlor, Market Street & Montgomery Street (just 3 blocks from Moscone)

 

Space is limited, so please click here to R.S.V.P. for the vMixer.

 

The first 150 attendees will receive a free SolarWinds Laptop Sleeve! We'll also be giving away some other great prizes at VMworld, so make sure to follow @SolarWinds and @RobbieSWI on Twitter and watch the SolarWinds Geek Speak blog for more details coming soon!!

Trust me, it will. First, let's define synthetic traffic. Synthetic website traffic is traffic generated by a computer program to simulate real users and their experiences from various locations. Enough said.

Now, why would you want synthetic traffic in the first place? Well, you could use the inflated number of hits to your website to brag to your friends, but I suspect you haven't gotten this far in life by pursuing such vapid endeavors. A more realistic scenario would be the following:

Let's say you're the Admin for a chain of nationwide banks. Most of your customers are under 60, have little fear of technology, and actually use the online banking feature you've painstakingly implemented. Your main fear would be that Nick Newbie comes along and discovers something is broken when he tries to perform a certain transaction, prompting the dreaded phone call from the bemused user, and possibly your boss. Brace yourself for a talking to and a little overtime.

With synthetic traffic and a web performance monitoring tool, you can find and fix web performance issues before they impact your users. You can have fake users all over the world performing a host of transactions. If something goes awry, you'll know when, where, and why, instantly. No angry user on the phone, the boss remains aloof, and you are stress-free because you get out of the office on time. Just fix the problem at your leisure - that's what you're good at anyway, all thanks to your web application monitoring.
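At its simplest, a synthetic check is just a scripted request with a stopwatch attached. Here's a toy Python sketch using the third-party requests library; the URL and threshold are placeholders, and real synthetic monitoring drives full multi-step transactions from many locations:

```python
import requests  # third-party HTTP library: pip install requests

def synthetic_check(url, max_seconds=2.0):
    """Fetch a page the way a synthetic user would and report pass/fail."""
    try:
        resp = requests.get(url, timeout=10)
        elapsed = resp.elapsed.total_seconds()
        ok = resp.status_code == 200 and elapsed <= max_seconds
        return ok, resp.status_code, elapsed
    except requests.RequestException as err:
        return False, str(err), None

print(synthetic_check("https://www.example.com/login"))  # placeholder URL
```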

The bottom line is that synthetic traffic simulates real activity by real users. Failures can include people making mistakes, network bottlenecks, connectivity failures, and so on. Know about the failures before your users and bosses do with efficient website monitoring. Save yourself the headache and aggravation. Don't be this guy:

20120813_132157.jpg

I was recently changing my keys to a new key ring, and included in this move was a 4GB USB flash drive. This reminded me just how ubiquitous these flash drives have become. After all, how many times have you been given a USB drive at a trade show, by a friend, or through some other channel you didn't think twice about? To the network engineer who manages network security, or to an administrator, the common USB drive presents significant threats, both from what it can bring into the network and from what it can take out.

 

According to Computer World, one in four malware attacks is carried out through a USB device. One such method is to manipulate Autorun so that malware launches every time a USB device is inserted into a system. The Stuxnet worm took advantage of other vulnerabilities and infected machines once the user browsed files on the USB drive.

 

According to Cisco, over twenty million unprotected USB drives are lost per year, exposing trade secrets and proprietary information. Couple this accidental data loss with the malicious removal of data on USB and the losses can become staggering.

 

So, short of gluing USB ports shut, how can you go about protecting your network and data from the comings and goings of USB flash drives? One way is to monitor your event logs for unauthorized insertion or removal of flash drives.

 

SolarWinds Log & Event Manager (LEM) includes built-in USB Defender technology that provides real-time notification when USB drives are detected.  This notification can be further correlated with network logs to identify potential malicious attacks coming from USB drives.  With LEM’s USB Defender technology, you can take automated actions such as disabling user accounts, quarantining workstations, and automatically or manually ejecting USB devices.  Additionally, LEM provides built-in reporting to audit USB usage over time.

 

SolarWinds Log & Event Manager (LEM) delivers powerful Security Information and Event Management (SIEM) capabilities in a highly affordable, easy-to-deploy virtual appliance. It combines real-time log analysis, event correlation, and a groundbreaking approach to IT search to deliver the visibility, security, and control you need to overcome everyday IT challenges. Starting at $4,495, LEM offers a free fully functional 30-day trial so you can see just how powerful and easy-to-use it is.

If you ever have to support desktops and laptops in your company, chances are that you’ve used some sort of remote control software to avoid walking – or driving, or flying! – to your end-user’s desk.  Remote control software has been around for some time, and there are lots of different flavors.


Some remote control software products require end-user permission to initiate a session, while others allow unattended access without permission.  Some provide the ability to connect over the Internet; others don’t.  Some are best suited for helping your grandparents “get on the Email,” while others provide feature-rich support for help desk administrators.  The best remote control software for you is the one that fits the job that you need to do on a regular basis.

 

Regardless of which is the best, there is little question about which is the most common free remote control software.  Microsoft’s RDP, or Remote Desktop Protocol, is one of the original remote control solutions, ships with Windows, and has been around since XP was first released.  RDP does what it claims to do – and is free; however, it does have some limitations.  RDP does not have the ability to screen share – two users viewing the same screen or desktop simultaneously.

 

If you are responsible for supporting desktops and laptops, you need the ability to screen share during a trouble-shooting session with a remote user.  More often than not, you need to interact with the user sitting in front of the remote system – maybe to allow them to show you what problem they are having, or to find a particular file or application.  During an RDP session, the screen of the remote user is blacked out while the administrator is logged in – effectively preventing the experience of “standing over the shoulder” of the remote user.

 

 

There are a number of remote control software products that provide screen sharing.  One solution particularly popular with systems administrators is DameWare Mini Remote Control (or MRC).  When a remote control session is initiated on a remote system, the remote desktop is visible to both the MRC administrator as well as the remote user.  The end user can see what the administrator is doing on the system, and vice-versa.  MRC provides a number of other features that help enhance the interactive experience, like the ability to chat with the remote user through a native chat client, the ability to take screenshots, as well as the ability to transfer files.

 

MRC also provides a granular level of control over how the administrator and end user will interact with each other before, during and after a remote session has ended.  It can be configured to require the end user’s permission before initiating a session, or it can be configured to allow unattended access on remote systems.  Connections can also be restricted by IP address or Active Directory group.  Again, none of this functionality is available with RDP.


If you’ve used RDP or other free remote control software before and felt that it didn’t quite meet your needs, consider trying DameWare Mini Remote Control. It’s not free, but it’s cheap - and you can try it for free for 14 days. You can also check out a more detailed comparison of RDP vs. DameWare MRC.

Even if your company isn’t an ecommerce-based operation, your organization is bound to be chock full of web apps. These are probably a mix of:

  • third-party SaaS applications like Salesforce and Netsuite
  • behind-the-firewall applications like SharePoint, a bug tracking system, project tracking systems, etc.
  • externally facing sites like your corporate website

 

When web application performance suffers on any of these applications, it costs you hassle, troubleshooting time, and either directly or indirectly costs your company money in lost revenue or productivity.  You’re probably already monitoring your apps from an application and infrastructure perspective, but is the ROI there for synthetic end user monitoring?

 

Let’s consider the 3rd party sites first. If they go down or suffer a performance degradation, people can’t work. Productivity in your office slows or stops, and in the case of the two applications mentioned above, you might not be able to make, track, and process sales. That’s really bad news. While there’s not much you can actually do when these sites have issues, proactive monitoring allows you to track SLA information, giving you powerful data when negotiating next year’s contract. This knowledge could save you the cost of your synthetic monitoring product in one fell swoop. Plus, knowing how these sites perform can help you choose the vendor with the most uptime, minimizing lost productivity by making the right vendor choice.

 

Now, on to monitoring apps that you actually control. As we discussed above, you are probably already monitoring application and infrastructure health with a product like SolarWinds Server & Application Monitor. Do you really need to add synthetic end user monitoring on top of that? Is the ROI there? The answer is, it depends. It depends on whether these things are important in your organization:

  1. Users can access the apps from around the globe – it’s possible for an application to be available from one location and not others due to the distributed nature of web applications
  2. Apps perform optimally at all times of the day
  3. Slowdowns and errors are recognized and eliminated
  4. Employees can work efficiently, resulting in increased job satisfaction
  5. End-users can conduct full transactions using your applications with no hiccups, so work gets done and ecommerce sales go through without a hitch

 

Ok, I might be laying it on a bit thick, but you get the point. If you want to ensure everything is working the way it should, the only way to know is to monitor your web apps from the end-user’s perspective. Not doing so leaves you with a huge blind spot. When you monitor your web applications from multiple end points, you can get out in front of application errors. Remember how you used to find out about application problems when the users called you? Now you can employ an army of synthetic users, at locations around the globe, to “call” you when they see anything amiss.

 

How to Calculate the ROI

Calculating the ROI of a synthetic web app monitoring solution isn’t hard. For internal applications, you can estimate the cost of application downtime with this easy formula (a worked example follows the variable definitions): Hourly Downtime Cost = (P + R) x A x C

 

Where:

 

P = number of people affected

R = revenue contributed by the application

A = average percentage they are affected 

C = average employee cost (salaries or wages + benefits)
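Here's the formula worked through in a few lines of Python, with made-up numbers; swap in your own values for P, R, A, and C:

```python
# Hourly Downtime Cost = (P + R) x A x C, with hypothetical inputs
P = 40      # number of people affected
R = 500.0   # revenue contributed by the application per hour (USD)
A = 0.75    # average percentage they are affected
C = 55.0    # average employee cost per hour (salary/wages + benefits, USD)

hourly_downtime_cost = (P + R) * A * C
print(f"Estimated hourly downtime cost: ${hourly_downtime_cost:,.2f}")  # $22,275.00
```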

 

Next, every time you have an application slowdown or outage, run it through the calculation above. Be sure that you’re including users accessing the application from all probable locations: not just home office users accessing home office applications, but remote users and satellite offices as well. How many times have you received a phone call from one of your users that an application was down, but you had received no alert from your conventional monitoring tools? Your monitoring solution tells you the services are started and everything is fine, but users are receiving errors when trying to log in or submit form data. The critical services and processes of your mission-critical application may be running, but that doesn’t mean the application is working. A synthetic monitoring solution acts like an army of end users, continuously monitoring your website and web applications and reporting problems immediately, before your real end users do. That is the real value of web application monitoring software.

 

Estimating the cost for an ecommerce site is also quite simple – just look at your typical revenue for a period similar to the outage period. What should you have earned? But you didn’t. Maybe the customers will come back and buy from you at another time. Maybe they won’t. Maybe they went to a competitor’s site and now they like them better.

This gives us a nice transition into how application availability impacts the outward image of your organization. If your organization has a customer-facing website that assists in sales, awareness, and brand-building, you can estimate how much that contributes as a percentage of sales and cost it out over the period of downtime. What did you lose by not being there when customers came looking? How much traffic should you have received in the period of downtime? There are several studies that go into this in more depth, but you can make a back-of-the-envelope calculation that should be pretty eye opening.

 

When you add it all up, it should be pretty clear that synthetic web application monitoring can quickly and easily pay for itself in both hard dollars (increased revenue) and soft (improved employee satisfaction and productivity). I think these real numbers will change end user monitoring from a “nice to have” to a “must have.” If you’re even partially convinced, why not run some ROI calculations, then download SolarWinds Web Performance Monitor and see how the ROI looks during the trial period. Then get in touch and let’s discuss the results.

 

“The only way to make a rational, business case-based decision on the appropriate level of investment in availability solutions is to first understand the financial impact and exposure that downtime has on the organization’s bottom line.”

--InformationManagement.com



Since SolarWinds acquired EminentWare at the beginning of this year, I've been learning a lot about Microsoft Windows Server Update Services (WSUS), the Windows Update Agent, and software patching in general. Most of us know that Microsoft works hard to help us keep our systems up to date to ensure optimal security and performance, but what I didn't know 6 months ago was that it's just as important to keep track of updates from other vendors.

 

Just like Microsoft, vendors such as Adobe, Apple, Google, and Oracle (makers of Java) publish regular security bulletins to inform sysadmins of their latest patches and of the risks posed by un-patched vulnerabilities. In a recent blog post, jkuvlesk listed the URLs for several of these vendors' security bulletins, along with some useful tips about keeping their applications up to date. To take an example from one of these bulletins, here's what Adobe has to say about their latest Flash Player update:

 

These updates address vulnerabilities that could cause a crash and potentially allow an attacker to take control of the affected system.

 

To me, statements like this make patching 3rd party applications a pretty big priority -- especially since I know how easy it is to ignore (or even disable) those balloons from Adobe informing users that their software is out of date. So, what can you do to help keep these applications patched?

 

Many administrators use WSUS and the Windows Update Agent to identify, approve, and deploy patches to Microsoft products on Windows systems without any end-user intervention. The good news is, you can do the same for 3rd party applications. By preparing 3rd party update packages manually and modifying Group Policy settings in your environment, you can instruct workstations to check in with WSUS for your latest custom packages. But that still leaves you holding the bag when it comes to figuring out what to patch and when -- not to mention how to create the packages. Enter Patch Manager.
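
For illustration, here is a minimal Python sketch of the client side of that Group Policy change; it writes the documented registry values that the WSUS Group Policy settings map to. The server URL is a hypothetical placeholder, and in practice you would push these settings through a GPO rather than scripting each workstation.

```python
# Point the Windows Update Agent at a WSUS server by writing the
# registry values that the corresponding Group Policy settings map to.
# Run as Administrator on the workstation; the server URL is a placeholder.
import winreg

WSUS_URL = "http://wsus.example.local:8530"  # hypothetical WSUS server
BASE = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

# WUServer / WUStatusServer tell the agent where to fetch updates
# and where to report status.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, BASE) as key:
    winreg.SetValueEx(key, "WUServer", 0, winreg.REG_SZ, WSUS_URL)
    winreg.SetValueEx(key, "WUStatusServer", 0, winreg.REG_SZ, WSUS_URL)

# UseWUServer=1 switches the agent from Microsoft Update to your WSUS server.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, BASE + r"\AU") as key:
    winreg.SetValueEx(key, "UseWUServer", 0, winreg.REG_DWORD, 1)
```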

 

Patch Manager, a one-stop solution for your patch management worries, does a lot of the legwork for you when it comes to patching 3rd party applications. It even helps you create your own custom packages when necessary. By using a 3rd party patch management solution like Patch Manager, you can keep all of your applications up to date and avoid the headaches that come with unpatched vulnerabilities.

 

For more patch management tips, WSUS guidance, and additional information about the 3rd party applications SolarWinds supports, check out PatchZone, the thwack! space dedicated to keeping your systems patched up and running smoothly.

As IT admins and network engineers, we all know the answer to why we need to migrate to IPv6, but let's do a quick recap. The primary sources of IPv4 addresses are nearly exhausted. Of the five regional Internet registries (RIRs) in the world, APNIC, the Asia-Pacific registry, officially declared in 2011 that it had run out of IPv4 addresses. RIPE, the European RIR, is expected to run out in 2013. Even the large blocks of pre-allocated IPv4 addresses will run out eventually.

 

It's not just the IPv4 depletion issue that should drive the move. There are also some significant advantages to migrating to IPv6:

  • IPv6's 128-bit addresses (versus 32 bits in IPv4) provide a virtually unlimited address space, enough for every device to have a unique IP address
  • Better network management and routing efficiency thanks to the larger subnet space
  • Improved encryption and authentication options
  • Improved QoS support
  • Extended support for mobile devices

 

The next big question is: "How do I go about migrating to IPv6?"

 

Planning is the key to a hassle-free IPv6 migration. Plan ahead and identify your infrastructure needs. Check whether your network hardware and software inventory is IPv6-compatible and supports your applications over IPv6.

Also, create and test migration scenarios before the actual implementation.
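
As one small example of such a test, this Python sketch (the hostname is hypothetical) checks whether a given host publishes an AAAA record and accepts an IPv6 TCP connection, which is a quick way to smoke-test a service in a dual-stack scenario.

```python
# Quick check that a host is reachable over IPv6 -- useful when
# testing migration scenarios. The hostname is a placeholder.
import socket

def ipv6_reachable(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if the host has an AAAA record we can connect to."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record published
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next address, if any
    return False

print(ipv6_reachable("intranet.example.local"))  # hypothetical host
```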


Migration

  • Execute the migration by studying the existing IPv4 hierarchy/architecture and translating the addresses to IPv6 (see the sketch after this list).
  • Migrate your routing configurations by identifying and changing the configurations wherever required.
  • Migrate your security policies – such as those on routing, load balancing, health checks, etc. – so that network security remains intact during and after the migration.
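
As a sketch of that translation step, here is how you might map existing IPv4 subnets onto a per-subnet IPv6 plan using Python's ipaddress module. The 2001:db8::/48 block is the IPv6 documentation prefix, and the IPv4 subnets are placeholders for your own hierarchy.

```python
# Sketch of an IPv4-to-IPv6 addressing plan: carve one /64 per existing
# IPv4 subnet out of a (hypothetical) 2001:db8::/48 allocation.
import ipaddress

ipv4_subnets = [                               # placeholders for your hierarchy
    ipaddress.ip_network("10.1.0.0/24"),       # headquarters
    ipaddress.ip_network("10.2.0.0/24"),       # branch office
    ipaddress.ip_network("10.3.0.0/24"),       # data center
]

site_block = ipaddress.ip_network("2001:db8::/48")  # documentation prefix
v6_subnets = site_block.subnets(new_prefix=64)      # iterator of /64s

plan = {str(v4): str(next(v6_subnets)) for v4 in ipv4_subnets}
for v4, v6 in plan.items():
    print(f"{v4:18} -> {v6}")
```

Each IPv4 subnet gets its own /64, which still leaves more than 65,000 /64s in the /48 free for growth.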


After the migration, analyze your network to check for performance issues and additional infrastructure needs.

Throughout the planning and migration process, remember to assess risks, work out mitigation measures, and minimize cost overhead.

You can follow any of the popular migration approaches – dual stack, tunneling, or translation – but be sure to carry out a well-planned, resilient migration to IPv6 so that users, applications, the network, IT infrastructure, and business services aren't impacted later on.

 

You can learn more and see for yourself how SolarWinds IP Address Manager can take the headache out of your IPv6 migration by test-driving our live demo. Or, if you're ready to take the next step, download a free, fully functional 30-day trial!

I don't know if there is any way to measure this, but I'll bet this has been the most-asked question since mobile phones became ubiquitous. It's a frustrating situation: you can hear the other person, and you can see how many reception bars your phone has, but you have no way of knowing what's going on at the other end of the call. Does the other person have good reception? Did their battery just die? In other words, what is the bidirectional quality of the mobile phone connection?

 

This problem is not restricted to mobile phone connections. Today's data networks are composed of hundreds of individual connections supporting scores of applications. Typically, the device testing the availability, quality, and performance of the network sits in just one location, as shown in the following diagram:

[Diagram: quality tester at Site 1, with WAN links A-E connecting Sites 1 through 4]

 

The quality tester at Site 1 has the ability to test the WAN connections A, B, and C that are directly connected to Site 1. The tester can test the combined WAN quality of links A and E, as well as the combined WAN quality of links C and D. Here is the problem: can this tester determine whether users at Site 3 accessing servers at Site 4 are experiencing the "Can you hear me now?" problem? Can those users open TCP connections to Site 4 servers? Can they reach web servers at Site 4? Our tester at Site 1 has no way of testing this. Enter the proxy test agent! The general term for this technology is proxy performance management; the Cisco-specific implementation is called IP SLA.


By configuring an IP SLA agent on the Site 3 router, we can run WAN quality tests directly between Sites 3 and 4. The Site 3 router then sends the test results to the quality tester at Site 1, and it can also be configured to run tests over link D to determine the quality of that link.

 

Cisco's IP SLA

IP SLA capability is built into Cisco IOS; all you need to do is configure it. So, what can you test using IP SLA?

  • VoIP quality (jitter, delay, MOS)
  • TCP port open
  • UDP port open
  • DNS requests
  • DHCP requests
  • FTP file transfer capability and performance
  • ICMP echo
  • ICMP path echo
  • HTTP access (includes DNS time and TCP port-open time)
  • UDP echo
  • Path jitter
  • VoIP jitter
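
To give a flavor of the configuration, here is a hedged Python sketch that uses the Netmiko library (pip install netmiko) to push a UDP jitter probe from the Site 3 router toward a Site 4 target. The hostname, credentials, and target address are placeholders, and the commands assume an IOS version that uses the modern ip sla syntax (older images used rtr).

```python
# Configure a UDP jitter probe from the Site 3 router toward a Site 4
# responder using Netmiko. Host, credentials, and target are placeholders.
from netmiko import ConnectHandler

site3_router = {
    "device_type": "cisco_ios",
    "host": "site3-rtr.example.local",
    "username": "admin",
    "password": "secret",
}

ip_sla_config = [
    "ip sla 10",
    "udp-jitter 10.4.0.5 16384 codec g711ulaw",  # Site 4 responder IP/port
    "frequency 60",                              # run the test every 60 seconds
    "exit",
    "ip sla schedule 10 life forever start-time now",
]

with ConnectHandler(**site3_router) as conn:
    output = conn.send_config_set(ip_sla_config)  # enter config mode, apply
    print(output)
```

Note that UDP jitter tests also require ip sla responder to be configured on the far-end device; the SolarWinds tools mentioned below handle details like this for you.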

 

IP SLA can be thought of as a distributed "Can you hear me now?" process for the network. If you haven't tried IP SLA on your network, the easiest way to implement it and see what it can do for you is to download either the SolarWinds IP SLA Free Tool or IP SLA Manager and give it a try. These tools automate the router configuration process and provide graphical results of the tests.

 

 

