
 

I was on a call today and got some great insight from one of our customers! He told me he is a great fan of thwack and visits it often.  We talked about the things he looks for when he comes to thwack, and he offered some suggestions.  He said it would be great to have more How-To videos, e.g. for NetFlow traffic management. Here at SolarWinds, our awesome video team is working on creating more of these. But, in the meantime, you can find a variety of How-To videos in different places.

 

Central video tutorial location

The central location for video tutorials is the SolarWinds Resource Center, which has a total of 305 videos. Here’s how to find the video you are looking for: from the left navigation pane, select the product, e.g. ‘NetFlow Traffic Analyzer’, the resource type ‘Videos’, and the sub type ‘How-to videos’. These are all very useful for network managers.

tanu blog1.jpg

Other places to find videos

Here are other places where you can find the same videos; they are just different ways of getting to them. ‘All roads lead to Rome.’

  • You can find a variety of How-to videos on our ‘Library and Support’ page.  Here you can see 15 videos on the bottom right of the page titled NPM training series.  And that’s just a fraction of what exists out there.  Keep drilling down into the product-level pages under the Products section, and you’ll find even more.
  • If you go to a product page, for example the NetFlow Traffic Monitor page, go to Resources and click Videos, then View All. It takes you to the Resource Center again.
  • We have some of our top videos on our YouTube channel.

 

Resources in whitepaper format

We also have whitepapers here, including some of our most downloaded titles.

 

We love your feedback here at SolarWinds. So tell me, what type of content would help you in your daily job?  Also, if you have made any How-to slides, videos, or tutorials to help your colleagues, please share them with the community on thwack so that we can all contribute to each other’s learning.

 

Happy learning, everyone! http://www.solarwinds.com/solutions/network-traffic-monitor.aspx

What's in a name? That which we call a rose
By any other name would smell as sweet

   - Juliet

What's in a name? If we call it a private cloud
Or a virtualized environment, it's still pretty sweet!

   - Julie in IT


So, that private cloud you just set up might not actually be a cloud, after all. At least that's what Network World is now reporting: apparently, Forrester Research is telling us that what most of us might have been considering to be a private cloud is not really much more than blowing smoke:

The line between virtualization and a private cloud can be a fuzzy one, and according to a new report by Forrester Research, up to 70% of what IT administrators claim are private clouds are not. "It's a huge problem," says Forrester cloud expert James Staten. "It's cloud-washing." [source]

The rub is in how a "private cloud" is distinguished from a run-of-the-mill "virtualized environment". The National Institute of Standards and Technology (NIST)—the group that gives the internet the time 2.6 billion times a day—tells us how to tell the difference:

Most cloud experts have settled on a generally-agreed upon definition of cloud computing - be it public or private - as having the five characteristics outlined by the National Institutes for Standards in Technology. These include:

  • On-demand, self-service for users
  • Broad network access
  • Shared resource pool
  • Ability to elastically scale resources
  • Having measured service

Without those five bullets, it's not technically a cloud. [source]

One expert quoted in the article gives the confusion a name, calling it "cloud-washing". The article provides a few more juicy bits, too, including this zinger, suggesting that IT Joes and Juliets are engaged in some fear-based obfuscation on the point:

So where does all this cloud-washing come from? Staten says fundamentally IT administrators are scared of the cloud. Virtualization experts within the enterprise used to be the top-dogs; when resources were needed, they provisioned the capacity. Cloud is seen as threatening that model by giving users self-service and dynamically scalable resources. What's left for the virtualization expert to do? [source]

We here at SolarWinds know what all you virtualization experts out there do: you try to give your users the best network experience you can. Of course, a superior network management platform that includes network performance, device configuration, IP control, and network traffic management capabilities is an indispensable tool in your real, live, IT shop.

 

What do you think? Do you have a private cloud or do you have a virtual environment? Does the semantic distinction matter? Let us know in the comments.

Also "SaaS" Can Become "PaaS" If You're Not Careful

 

The PCI Security Standards Council finally released its PCI DSS Cloud Computing Guidelines this month, and the Guidelines are not kind to Platform as a Service (PaaS) solutions, or to Software as a Service (SaaS) solutions that behave like PaaS.  In the document, the Council stuck to the usual definitions of IaaS (Infrastructure as a Service), PaaS, and SaaS, but it opened its dreaded "in scope" umbrella widest over PaaS.

 

The following chart, adapted from the Guidelines, uses three colors to indicate whether it is the client's responsibility, the cloud service provider's responsibility, or both parties' responsibility to prove compliance to each of the twelve PCI DSS requirements.

PCI-DSS-Cloud-Scope-IaaS-PaaS-SaaS-Lampe.png

Shared responsibility for PCI DSS compliance (i.e., "Both") extends across 11 of the 12 possible requirements for PaaS, 9 of 12 for IaaS and 4 of 12 for SaaS.

 

PaaS solutions are particularly thorny from a security auditor's perspective because both the CSP and client contribute code, scripts or workflows that govern the movement and processing of data.   For example, a PaaS solution could have a base SaaS application that handles contact information plus a PaaS layer (e.g., Web services) that allows clients to integrate into their backend systems.

 

IaaS solutions have several shared areas of responsibility, but the lines of delineation between client and cloud service provider are clear from a security auditor's perspective.  For example, requirement #1 (firewall) could be broken up into a "Do you have a secure firewall?" question posed to the CSP, and a "Do you have a secure set of firewall rules?" question posed to the client.

 

SaaS solutions have the fewest shared areas of responsibility, but almost any degree of integration, such as centralized authentication or automated data transfer, threatens to convert a SaaS solution into a PaaS solution in the eyes of a security auditor.  In fact, the Guidelines include special discussion of "Hybrid Clouds" and other common deployment models that blur the lines between SaaS and PaaS.

 

Addressing PCI DSS Concerns with SolarWinds Technology

 

SolarWinds® software, including Log & Event Manager, Firewall Security Manager, DameWare® Remote Support, and Serv-U® Managed File Transfer (MFT), is frequently deployed on top of IaaS to provide PCI DSS compliant solutions.  SolarWinds software is also often used to power, monitor or manage industry- and workflow-specific SaaS solutions from leading vendors and on-premises installations around the world. Additional information about how SolarWinds helps organizations of all sizes achieve PCI DSS compliance can be found below.

 

The Oscars, Grammys, Golden Globes, BAFTAs, and the WindowsNetworking.coms – That’s right….it’s awards season!!  Sadly, the music, television, and motion picture associations have snubbed DameWare again, but all is not lost this season.  On February 20, 2013 DameWare Remote Support (DRS) received another Readers’ Choice award from WindowsNetworking.com!  This is the 4th year in a row that DameWare was chosen by readers as the best remote control software on the market and the 5th time since 2008.

 

In honor of this award, I’ve prepared a little speech, so here goes…

 

Well, another year has passed and DameWare Remote Support has gotten even better.  I have a list here of people that deserve some praise for their hard work and ingenuity.

 

I’d like to thank the product management and development teams at SolarWinds for working on our customers’ behalf.  They designed and built out the new features that have made DRS the best remote support software on the market again.  Here are some of the great features that became available in DRS in 2012:

 

- Remote control for Mac OS X and Linux operating systems

- Support for Intel vPro AMT

- Agent backwards compatibility through version 7

- Support for Windows 8

 

That list augments an already impressive feature-set that includes:

 

- 3 methods of desktop remote control from one console including MRC, RDP, and VNC

- Active Directory management tools that let you manage multiple AD domains

- Remote administration tools that let you troubleshoot Windows computers remotely without a full remote control session

- A tool that lets you export Active Directory users and other objects

 

(music begins playing)


Please….just a few more people to thank.  I’d also like to thank our customer support team for providing outstanding service to our customers.

 

(music begins playing again)


Please!!  I’m not finished!  I’d also like to thank our sales and marketing teams for getting the word out about this great product and making the sales process quick and painless for our customers.

 

(music begins playing again)

 

Please!!!  Just one more!!  And most of all, I’d like to thank our customers.  You made this award possible by voting DameWare Remote Support the best remote control software on the market!  We promise to keep improving DRS and delivering the best products possible to you.

 

(music begins playing again and I’m dragged off the stage)


Thanks, everyone!  See you next year!!!


MARS Need(ed) Women!

Posted by katebrew Feb 26, 2013

Well, I guess technically  MARS doesn't need anybody anymore, since Cisco is in the slow process of killing it.

 

Cisco Security Monitoring, Analysis and Response System (MARS) is a SIEM product, and by many accounts, well-liked.  As early as 2008, however, rumors of trouble in Cisco-MARS-land began to surface.  The actual announcement of End of Sale / End of Life (EOS/EOL) came from Cisco on December 3, 2010.  Now, the last date of support is not until June 30, 2016, but most people who were using MARS are actively looking for a replacement.

 

Surprise! SolarWinds has a product to replace MARS. Cisco MARS aficionados – check out a full comparison with Log & Event Manager (LEM).

 

Or, if you prefer movies to reading, here's the slide version of the information on MARS and LEM.

 

As for the famous movie, it was filmed in Texas during a two-week period in 1966 and released in 1967, with a title similar to the title of this blog.  Would you believe it was made for only $20K?  The movie, not the blog.

One of the lesser-known features of Patch Manager is its ability to supplement the reporting capabilities of Configuration Manager. What makes Patch Manager the choice of interest is not what it does, but how it does it: predefined report templates and an easy-to-use report builder. This functionality, though, is not available right out of the box; it requires some additional configuration inside Configuration Manager. In this article we’ll show you how to turn on client reporting to the Configuration Manager Software Update Point (SUP) so that you can get update compliance data using the Patch Manager reporting system.

 

In a WSUS standalone environment the Windows Update Agent automatically reports state information to the WSUS server. However, in the Configuration Manager environment, this automatic reporting is suppressed, and the only state information reported comes from the Configuration Manager Agent to the Configuration Manager Management Point server.

To enable the clients in a Configuration Manager environment to report state information to the SUP, you’ll need to modify the configuration of the SUP component in the Configuration Manager console.

 

Enabling WSUS Reporting Events in Configuration Manager 2012

CM2012 SUP Component Configuration.png

In the Configuration Manager 2012 console:

  1. Select the Administration workspace.
  2. Select the Site Configuration node.
  3. Select the Site from the list of sites in the details pane.
  4. Open the Configure Site Components menu.
  5. Select Software Update Point to launch the SUP component properties dialog.

CM2012 SUP Reporting Events Option.png

     6. Select the Sync Settings tab, and in the WSUS reporting events section at the bottom of the dialog, select the option Create all WSUS reporting events.

Enabling WSUS Reporting Events in Configuration Manager 2007

CM2007 SUP Reporting Events Option.png

In the Configuration Manager 2007 console:

  1. Navigate through the Site Database -> Site Management -> Site -> Site Settings tree.
  2. Select the Component Configuration node.
  3. Select the Software Update Point Component entry from the list of components in the details pane.
  4. Right click and launch the Properties dialog.
  5. Select the Sync Settings tab, and in the WSUS reporting events section at the bottom of the dialog, select the option Create all WSUS reporting events.

Client Behavior

The clients will upload their state information to the SUP database during their next scheduled Software Updates scan. How long this will take depends on the frequency you have configured for Software Updates scans. At most it should take no more than a full day. Alternatively, you can use the Client Management tools in Patch Manager to force your clients to perform a Software Update scan immediately, or at a scheduled time.

Configuring a WSUS Inventory Task for Configuration Manager Environments

PM Launch WSUS Inventory.png

While you’re waiting for the clients to upload their state information to the SUP, you can configure Patch Manager to perform a WSUS Inventory when those uploads have completed.


Drill into the Update Services -> SUP node of the Patch Manager console, right-click, and select WSUS Inventory to launch the inventory configuration dialog. Use the default options, and configure the task to run at a time and frequency appropriate to your needs. Typically, this would be a daily task run during non-working hours.

 

Using WSUS Reporting

Once the WSUS Inventory task has completed, you can use the reports and datasources in the Patch Manager MMC console to access the client-reported state information. The best place to start is the Computer Update Status report, which is a general report for all clients and all updates showing the installation state for each update on each client. Updates identified as "NotApplicable" are automatically suppressed from this report, so the report focuses only on the installable updates and whether or not they are installed.

Finding WSUS Reports in Patch Manager console.png

Make special note that in a Configuration Manager environment, there are no update approvals, so you might wish to remove the Approval State column from your reports since it has no meaning.

 

For more information about creating the WSUS Inventory task and using the Patch Manager WSUS reports, please review the Patch Manager documentation.

In today’s customer service-oriented world, smooth-running IT help desks are more necessity than choice. Yet many organizations blindly adopt help desk technologies without first identifying their business requirements and determining whether the sought-after solution meets those requirements. Because of this, many help desk implementations fail to deliver the expected results—with management of the ticketing process emerging as the primary pitfall.

 

1. Difficulty of Handling Growing Ticket Volume


When the sheer number of tickets increases substantially, most help desk solutions fail to scale accordingly. Coupled with challenges like end-to-end activity tracking and workflow management, the growing ticket volume becomes exponentially more cumbersome to manage.

Without appropriate support from your help desk software, it becomes extremely difficult to manage customer requests, help desk actions, and ticket resolutions. The growing volume of tickets you face can easily inhibit your ability to track:

 

  • Dependencies between tickets
  • Prioritization and assignment of tickets based on staff availability and ticket severity
  • Service activity on a ticket from creation to closure
  • Response time and effectiveness of ticket handling
  • Communication between the end-user and the technician

 

2. Complexity of Managing the Ticketing Process


Help desk software is certainly the preferred instrument for managing customer support, but many help desk technologies are not simple to work with, so many users continue to email help desk staff directly. Help desk staff should spend less time managing the ticketing software and more time troubleshooting and resolving problems.


Customization and automation of the ticketing process can help them do exactly that, by offering:

  • Flexible, dynamic business rules for automated routing and ticket updates
  • Dynamic assignment of tickets to a specific tech or group of techs based on skill set, location, department, availability, work load, etc.
  • Automatic conversion of service request emails into trouble tickets (including file attachments)
  • Customizable ticket submission forms to make it easier for the end-user

 


3. Lack of Proper Reporting & Metrics


In addition to scalability and complexity issues, many IT help desk solutions also lack the necessary reporting capabilities that provide key indicators and metrics for measuring help desk performance.

 

Proper reporting gives you visibility into:

  • Fulfillment of help desk requests
  • Timeliness of service resolution
  • Performance of help desk technicians
  • End-user feedback to analyze bottlenecks and reasons for customer dissatisfaction


Your management team needs built-in comprehensive reporting to be able to quickly identify where end-users need additional assistance and easily determine the primary causes of recurring tickets. Help desk reporting is an essential tool for making improvements, particularly when the metrics analyzed offer a clear plan of action and input for quick decision-making.

 


Help Desk Software Doesn’t Have to be Complicated


Successful help desk implementation is about understanding how your end-users are using the IT help desk so you can simplify and expedite ticket resolution for them. Help desk implementation can be easy as long as you have done the research to choose the right solution for your organization and know how to customize it to meet your specific business requirements.


SolarWinds Web Help Desk offers affordable yet powerful IT help desk ticketing software with simplicity and automation to help you quickly resolve problems, escalate issues, and isolate root causes. With built-in reporting tools to customize, schedule, export, and email reports, Web Help Desk gives you complete control of your ticketing process. Try the fully-functional 30-day free trial today!

My favorite customer quote of the week: 

"DameWare is RDP on steroids."

RDP_on_Steroids.png


A conversation with a new remote support user yielded this gem as the three-person IT team leader described how he could do more faster with DameWare than he ever could with Windows RDP alone. Other quotes I found myself jotting down: 

  • "It's like RDP but you can work with another user at the same time on the same screen, and it includes a little chat client so you don't have to share notepad or something lame like that." 
  • "You don't need to build DameWare in your standard images to use it because it installs its local client when you connect."
  • "DameWare cleans up nicely too.  It automatically uninstalls the client it installed when you close up the session."


Your Thoughts?

 

How would you answer this?  "DameWare is..." 

Let us know in the comments below!

Prime numbers have fascinated me since I was in the ninth grade. I didn't care about the big ones, like the 17-million-digit one mentioned here. I was interested in finding a pattern to them. It was in the ninth grade that I read an article in Science Digest about scientists who used a Univac computer to search for a pattern to prime numbers. What they did on this behemoth of a computer was simple, yet elegant. Play along at home.

  1. Grab pen and paper.
  2. In the center of the page, write the number 1.
  3. Continue writing numbers in order in a spiral fashion. You should have something that looks like this:

    5   4   3   12
    6   1   2   11
    7   8   9   10

  4. Continue the number spiral to be as high as you want.
  5. When your spiral is complete, circle only the prime numbers: 2, 3, 5, 7, 11, 13, 17, and so on.

 

You'll begin to see a pattern of diagonal lines develop. (Random numbers would not produce anything remotely close to the illustration below.) The illustration below shows the center of the spiral, 1, in blue. All the white dots represent prime numbers plotted in a spiral, just like we did above in Step 3.  So, the scientists were partially successful in finding a pattern, if that's even possible. Something's going on here.
primespiral.png

In 1983, I decided to take up the same challenge of looking for a pattern to primes using my zippy Commodore 64. I created my own version of their program and had the same luck: no definite pattern. I tried once again about 15 years ago using my more modern Windows 95 machine. The results? A little different. I altered the program to accept any number between 1 and 1,000,000 as the starting point/center of the spiral. (If you want, you can download my little experiment here. Just unzip it and drag the .exe to the desktop.) Having the number 41 as the center/starting point created a distinct diagonal line. This was no accident. If there is a pattern to primes, I'll find it.
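If you'd like to play along on something newer than a Commodore 64, here's a minimal Python sketch of the same experiment. It isn't the program from the download link above, just a rough reconstruction of the steps described earlier; the grid size and the starting number (try 41) are yours to tweak.

    def is_prime(n):
        """Trial division -- plenty fast for a small spiral."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def ulam_spiral(size=41, start=1):
        """Spiral the integers out from the center of a size x size grid,
        marking primes with '*' and everything else with '.'."""
        grid = [['.'] * size for _ in range(size)]
        x = y = size // 2                          # start in the center of the page
        moves = [(1, 0), (0, -1), (-1, 0), (0, 1)]  # right, up, left, down
        n, run, d = start, 1, 0
        while True:
            for _ in range(2):                     # each run length is used twice
                dx, dy = moves[d % 4]
                for _ in range(run):
                    if not (0 <= x < size and 0 <= y < size):
                        return grid
                    grid[y][x] = '*' if is_prime(n) else '.'
                    n, x, y = n + 1, x + dx, y + dy
                d += 1
            run += 1

    for row in ulam_spiral(size=41, start=41):     # 41 as the center, per the experiment above
        print(''.join(row))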

 

Prime Numbers and Encryption

"A computer cannot generate random numbers." Thank you Mr. London, my ninth grade programming teacher. And he's right. A computer cannot do anything randomly. An algorithm (a set of rules) is used to produce pseudo-random numbers by the computer. For instance, if I were to create and run a program that generates 10 random numbers, close, then re-run the same program, the second set of "random" numbers would be the same as the first. This is because the algorithm that generates the random numbers never changes. This is why lotteries use ping pong balls rather than computers to pick the random numbers. (I could go on about statistics and nothing truly being random, but that's best kept for my personal musings.)

 

To date, there has never been a pattern found to explain the way prime numbers are generated.


Enter RSA encryption. From Wikipedia, "A user of RSA creates and then publishes the product of two large prime numbers, along with an auxiliary value, as their public key. The prime factors must be kept secret. Anyone can use the public key to encrypt a message, but with currently published methods, if the public key is large enough, only someone with knowledge of the prime factors can feasibly decode the message."
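Here's a toy, back-of-the-napkin version of that scheme in Python, using comically small primes. Real keys use primes hundreds of digits long, plus padding and other safeguards this sketch ignores, and the modular-inverse form of pow() needs Python 3.8 or later.

    from math import gcd

    p, q = 61, 53                      # the two primes -- these stay secret
    n = p * q                          # 3233, published as part of the public key
    phi = (p - 1) * (q - 1)            # 3120, only computable if you know p and q

    e = 17                             # public exponent, coprime to phi
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)                # private exponent (requires Python 3.8+)

    message = 65                       # any number smaller than n
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    assert recovered == message
    print(n, ciphertext, recovered)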

 

Security of Encryption

With RSA, your public key can be published for anyone to encrypt a message that only you can decipher. Why? Because only you have the private key, and it is incredibly difficult for others to figure it out from the public key. Doing so requires factoring the product of two large primes, a glacially tedious process that becomes computationally infeasible once the primes are large enough.

 

By now you may have guessed that the size of the prime numbers used dictates the strength of the encryption. For example, a message encrypted with 5-digit prime numbers (40-bit encryption) yields over 1 trillion possible results. Using 16-digit numbers (128-bit encryption) generates an astronomically large number of possible combinations. What does this mean in the real world? Look at it this way: cracking 128-bit encryption by brute force could take about 11,000 quadrillion years, give or take a few millennia. This is why 128-bit encryption is the standard used to protect sensitive data.
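If you want to sanity-check those numbers, the arithmetic is easy enough to run yourself. The years figure depends entirely on how many guesses per second you assume, so treat it as illustrative:

    print(2 ** 40)     # 1,099,511,627,776 possible keys -- "over 1 trillion"
    print(2 ** 128)    # roughly 3.4e38 possible keys

    # Even at a generous trillion guesses per second, exhausting a 128-bit
    # key space takes on the order of 1e19 years -- the same ballpark as the
    # 11,000 quadrillion years quoted above.
    guesses_per_second = 10 ** 12
    seconds_per_year = 60 * 60 * 24 * 365
    print(2 ** 128 / (guesses_per_second * seconds_per_year))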


SolarWinds products and Encryption

Naturally, our software takes advantage of 128-bit encryption. Here, you can read about how our software utilizes various encryption methods, including 128-bit.

Virtual environments can be convenient and cost effective, but diagnosing and troubleshooting problems or performance issues can be challenging. In virtualized environments, servers and storage locations can become obscured as the locations are frequently moved. Storage Manager provides end-to-end mapping so you can gain visibility into where your storage and servers are located and track their performance over time. From any virtualization console in Storage Manager, you can click through to see the mappings from all formatted drives through the switch and into the LUNs on the array. Storage Manager also provides a single mapping report across the entire enterprise.

 

Advantages of End-to-End Mapping

  • Gain visibility into where your storage is located.
  • Help diagnose performance issues.
  • Keep compliant by tracking the locations of critical or security sensitive storage.

 

End-to-End Mapping Abstract

Below is an abstract mapping from the virtual machine (VM) to the LUN.

E2EMapping_abstract.png

 

End-to-End Mapping in Storage Manager

 

Below is an image of a logical mapping view of a VMware ESX. I accessed this logical mapping view by going from one of my virtualization consoles to the Storage tab and clicking Logical Mapping. From this view, I can map data stores to LUNs, track the number of VMs, and drill down to each LUN.

 

LogicalMapping.png

From this view, I can see the datastore, drill down to the VMs, or jump to the array and LUN and drill down further to the RAID group so I can check the LUN performance.

 

In addition to the logical mapping view, Storage Manager offers a physical mapping view where I can glean information about the health and status of the switch ports associated with my VM to LUN mapping.

 

See the GeekSpeak blog post, Oh Storage Where Art Thou?, for additional guidance on using Storage Manager to diagnose and resolve storage issues in your virtual environment.

Tips and tricks you can use.

Posted by Bronx Feb 22, 2013

Want to get fancy with your SAM alerts? Do more than just get notified of an issue: take action. The following example uses PowerShell to terminate all instances of notepad.exe on an alert status of Down:

  1. In the Trigger Action of the Advanced Alert, add a new action type of "Execute an external program."
  2. In the program to execute, type in "PowerShell" and the rest of the command line arguments or the path to your PowerShell script.
  3. Below is an example that terminates all running instances of notepad.exe.
    Note: You will need to enable impersonation through your PowerShell script or run the Advanced Alert Manager under a user account with elevated privileges.

powershell+advanced+alert+manager.png

 

Wrapping VBScript around an executable file.

 

The following example demonstrates how to write a simple VBScript to open notepad.exe - all from the comfort of your desktop:

1. Open Notepad and paste the following code into a new document:

    Set WshShell = WScript.CreateObject("WScript.Shell")
    Dim exeName
    Dim statusCode
    exeName = "%windir%\notepad"
    statusCode = WshShell.Run (exeName, 1, true)
    MsgBox("End of Program")

2. Save the file as Example.vbs (manually change the extension to .vbs)

3. Double-click Example.vbs to run the program, which launches Notepad.exe

Note: Change the exeName value to the path and executable you want to run, for example, C:\Program Files\Mozilla Firefox\firefox.exe

Learning a few simple tips and tricks like this just may make your SysAdmin day a li'l bit brighter.

 

Monitoring your network performance remains a significant concern for organizations as IT departments grapple with increasing network problems. Further, you also need to keep tabs on the performance of the hardware, as it has a direct impact on the performance of your network.

 

Your hardware is the backbone of your network performance. How many times have we heard network engineers talk about outages due to a faulty power supply or an overheated router? This very issue reiterates the importance of proactively knowing the health of your network hardware before it brings your network down.


The biggest complexity in effectively monitoring hardware is the diversity of equipment manufacturers.  Monitoring these hardware components is quite important, as it helps network admins pre-empt causes of hardware malfunction when troubleshooting network breakdowns.

 

OK, so what do you monitor?


To effectively monitor the health of your network, you need to be able to gain visibility into the performance of various components within your network hardware. Some key metrics that you would want to constantly monitor include:

    • Hard drive status
    • Array controller status
    • Power supply status
    • Chassis intrusion status
    • Chassis temperature and fan speed
    • CPU temperature and fan speed
    • Memory module status
    • Voltage regulator status

 

Why monitor these hardware metrics?

 

Let us consider a couple of use cases:


CPU temperature: If your CPU is overloaded with tasks, its temperature might shoot up, sometimes resulting in device malfunction. If you don’t monitor your CPU temperature, you not only run the risk of shortening the life of your device but also of degrading network performance.


CPU fan speed:  In addition to keeping tabs on CPU temperature, checking CPU fan speed is important, as the two go hand in hand. If your CPU fan is manually controlled, you can use a fan speed monitor to balance cooling, fan speed, and noise.


Similarly, you need to monitor many other key hardware elements on your network from time to time. Make sure there is no delay in identifying hardware malfunctions; a delay may damage your hardware and, in turn, interrupt the performance of your network.

 

Hardware Health Monitoring.png

 

What kind of monitoring software should you choose?

 

You should look for a single integrated console from which you can monitor the diverse hardware components, all from an interactive dashboard. Choose network monitoring software that delivers in a multi-vendor environment and provides:

  • Agentless performance and availability monitoring
  • Automatic network discovery
  • Intelligent network alerting
  • Reporting for hundreds of devices and vital hardware statistics
  • An easily customizable dashboard delivering scalability for even the largest environments with devices from various vendors.

server.jpg

Monitoring the performance of business-critical apps is a necessity; there is no denying that.  It’s equally important to keep a hawk’s-eye view on the performance of the hardware these critical apps run on, as the failure of any of the server’s internal components can negatively impact application availability. Servers can develop critical problems without prior warning, eventually resulting in service downtime.  So, you’re not really monitoring your server if you can’t see into the hardware.

 

Your server hardware is the backbone of your critical business services. To avoid any server downtime, it’s really essential to have a system that monitors the health and performance of your critical server hardware components for optimal performance.

 

But, it’s complicated

The biggest complexity in effectively managing hardware is the diverse nature of the IT infrastructure, which typically comprises equipment from multiple manufacturers. Each manufacturer provides its own solution for monitoring and managing its hardware, such as:

• Dell provides Dell OpenManage for its servers

• IBM System X provides the IBM Systems Director for managing its hardware

• HP provides HP Insight Manager for managing its servers

 

If you have multiple hardware vendors, then you have multiple monitoring consoles.  Add to that additional consoles or scripts for monitoring operating system performance, virtual machine performance, and application performance.  Monitoring all these components is quite important so that sysadmins can narrow down issues in the case of a service failure. Any delay in identifying hardware malfunctions can cause damage to your hardware, resulting in a halt to all the applications running on that specific server.

 

To effectively monitor your server environment, sysadmins require a single integrated console from where they can monitor the diverse hardware components, all from an interactive dashboard.

A server monitoring tool should keep tabs on key hardware metrics such as:

• Hard drive status

• Array status

• Array controller status

• Power supply status

• Fan status

• Chassis intrusion status

• Chassis temperature and/or status

• Chassis fan speed and/or status

• CPU temperature and/or status

• CPU fan speed and/or status

• Memory module status

• Voltage regulator status

 

 

Hardware health doctor

With so many parameters to monitor, it’s quite important to look for an easy-to-use, cost-effective server monitoring solution for your multi-vendor environment: one that provides expert guidance on what to monitor and why, along with customizable dashboards and reports showing trends, capacity, and performance, all in no time.

 

Want complete server monitoring software that makes sure your servers, applications, and hardware are performing optimally? Check out SolarWinds Server & Application Monitor – it’s the only game in town that provides multi-vendor hardware monitoring that won’t cost more than the hardware you’re monitoring.

My previous articles on the IT future of face recognition technology leverage the idea of a surveillance culture, which George Orwell framed in his famous novel 1984: the ‘telescreen’ watches while being watched. (You can find connections to big data storage, real-time computing, law enforcement mining, and encryption in those posts.)

 

On Super Bowl Sunday in 1984, Apple ran an ad to let us know that its Macintosh personal computer would do away with the oppressive monologue of television. The Mac would revolutionize communications by giving individuals new information ordering powers.

 

29 years later—if current buzz proves correct, and as Google readies the first version of Glass—Apple is on the verge of releasing its first wearable computing device, iWatch.

 

In the next few articles in the series I’ll explore the counter-surveillance aspects of wearable computers and the impact they are likely to make on the current IT trend known as BYOD.

 

Monitoring Wearable Devices

Big buzz wearable computing products may not come in 2013 but they are coming soon. And when they do, walking into a building that fails to seamlessly grant a wearable device’s request for an IP address will quickly come to seem like walking into a building that makes your watch stop telling you the correct time. The power of new user expectations will drive an IT infrastructure expansion; and so wearable and ubiquitous computing are likely to evolve together.

 

As networks scale up, the ability to triage problems depends more urgently on knowing which device is causing an issue. And proactively correlating devices with their assigned IP addresses becomes key when devices cause trouble from the edge of the network.

 

A reasonable BYOD policy would be to record the MAC address of devices brought to work. As a means of policy enforcement, you could then use a product like SolarWinds IPAM to white-list known MAC addresses and set up an alert on any unknown MAC address to which your DHCP server assigns an IP address. If that policy seems too onerous, you could instead use SolarWinds User Device Tracker to match users on the network with the device they're using to gain access.

 

Both tools would help eliminate blind spots that delay resolving network problems when they occur.

Want a chance to win a $50 Amazon® gift card?!


SolarWinds is running a short survey to understand the help desk challenges faced by IT technicians, help desk support staff, and those of you who support and manage your organization’s help desk processes.


It’s just a short survey well worth a few minutes of your time. Let us know about your day-to-day help desk challenges and what you look for in your ideal help desk solution.


Just complete this survey to enter the drawing* to win a $50 Amazon® gift card!

 

Take the Survey

 

*Terms & Conditions: You can read the T&Cs for the survey here.

End of Life

The End of Life (EOL) date is when a product, hardware or software, has reached obsolescence and is no longer supported by the company with any maintenance or updates. End of Sale (EOS) is when a product is no longer sold by the organization, but may continue to be supported.

 

Why do you need EOL/EOS dates?

    • Stay up-to-date on device support information from vendor
    • Plan well in advance for network expansion and growth
    • Be better informed and equipped for device replacement
    • Track network device inventory for effective network and configuration management

 

Finding EOL and EOS using Vendor Sites

Browsing through a vendor site to find EOL information can be quite a headache for the already overloaded and time-strapped sysadmin. Even though vendors like Cisco and Juniper have pages for announcing EOL information, it can be a painful process finding the right information because:

    • EOL information pages on vendor sites can be cluttered and time-consuming to browse through
    • Search capabilities within vendor sites are usually NOT refined enough to find EOL information
    • Individual vendor sites mean there is no uniformity in data presentation across vendors


What is SolarWinds EOL Lookup?

SolarWinds has launched a free web-based EOL service, aptly named EOL Lookup. The EOL data is based on vendor EOL information, but presented in a refined and easy-to-access way. And it doesn't stop there. The SolarWinds EOL Lookup service delivers more than just EOL dates; it provides:

    • Latest software download links for EOL products
    • Troubleshooting documents
    • Data sheets and more

 


EOL_Lookup_Search.png

SolarWinds EOL Lookup is made simple using predictive search

EOL_Lookup_ResultsPresentation.png
Intuitive data presentation in SolarWinds EOL Lookup service

 

You can get information on future EOL updates by subscribing for free. For now, SolarWinds EOL Lookup provides EOL information for Juniper and Cisco, but many more vendors will be added in the near future. So just sit back and relax; you no longer need to keep track of all EOL/EOS dates--everything is now available in a single repository!

 

 

Check out our EOL Lookup presentation on SlideShare.

It’s been 35 years since the very first solid-state drive (SSD) was launched, under the name “solid-state disk”. These drives were called “solid-state” because they contained no moving parts, only memory chips. The storage medium was not magnetic or optical but solid-state semiconductor memory, such as battery-backed RAM, RRAM, PRAM, or other electrically erasable, RAM-like non-volatile memory chips.

 

In terms of benefits, SSDs worked faster than a traditional hard drive could in data storage and retrieval – but this came at a steep cost. It’s been a constant quest in the industry, over the years, to make the technology of SSD cheaper, smaller, and faster in operation, with higher storage capacity.

A post in StorageSearch.com shows the development and transformation of SSD, over the years, since its first usage and availability until now.


Why Should Storage and Datacenter Admins Care?


More than the user sitting at the PC or notebook, it’s the storage admins who spend time managing and troubleshooting the drives, detecting storage hotspots and other performance issues. And it’s imperative that storage and datacenter admins understand SSD technology so they can better apply and leverage it in managing the datacenter.


Application #1 – Boosting Cache to Improve Array I/O


A cache is temporary storage placed in front of the primary storage device to make storage I/O operations faster and transparent. When an application or process accesses data stored in the cache, it can be read and written much more quickly than from the slower primary storage device.

 

All modern arrays have a built-in cache, but SSD can be leveraged to “expand” this cache, thus speeding up all I/O requests to the array.  Although this approach has no way to distinguish between critical and non-critical I/O, it has the advantage of improving performance for all applications using the array.


Application #2 – Tiering – Improving Performance at the Pool Level


SSDs help storage arrays with storage tiering, which dynamically moves data between different disks and RAID levels to meet different space, performance, and cost requirements. Tiering enables a storage pool (RAID group) to span drives of different speeds (SSD, FC, SATA) and then uses analysis to put frequently accessed data on the SSD, less frequently accessed data on FC, and the least frequently accessed data on SATA.  The array constantly analyzes usage and adjusts how much data is on each tier. SSD arrays are used in applications that demand increased performance with high I/O, and SSD is often the top tier in an automated storage tiering approach. Tiering is now available in most arrays because of SSD.


Application #3 – Fast Storage for High I/O Applications


Since arrays generally treat SSDs just like traditional HDDs, if you have a specific high-I/O performance need, you can use SSDs to create a RAID group or storage pool.  From an OS and application perspective, the operating system sees the RAID group as just one large disk, while the I/O operations are spread out over multiple SSDs, enhancing the overall speed of the read/write process.

 

Without moving parts, SSDs contribute to reduced access time, lower operating temperature, and enhanced I/O speed. Keep in mind that SSDs cannot hold huge amounts of data, so data for caching should be chosen selectively based on which data needs faster access – considering performance requirements, frequency of use, and level of protection.


What’s Best in Virtualization?


In a virtualized environment, there is the fundamental problem of high latency when the host swaps to traditional disks instead of memory. It takes only nanoseconds to retrieve data from memory, whereas it takes milliseconds to fetch it from a hard drive.

When SSDs are used for swapping to host cache, the performance impact of VM kernel swapping is reduced considerably. When the hypervisor needs to swap memory pages to disk, it swaps to the .vswp files on the SSD drive. Using SSDs to host ESX swap files can eliminate network latency and help optimize VM performance.


Conclusion


The application of Solid-state Drives has become significant in achieving high storage I/O performance. Proper usage of SSDs in your storage environment, along with the right set of SAN management and performance monitoring tools and techniques, can ensure your data center meets the demands of today’s challenging environments.

 

If you are interested in learning more about the advantages of SSDs over hard disk drives (HDDs), take a look at this comparative post from StorageReview.com.

Last Friday I got to see Tom Ervin, a Cyber Squad Computer Scientist with the FBI in San Antonio, hack into computers during a demo at the local InfraGard meeting.  It was pretty cool - at one point Tom asked for a volunteer / victim, who was seated before a PC near the front. On the main display, Tom acted as the "hacker."  First the hacker sent the victim an email that looked like it was from a family relative, Uncle Bud.  It would be fairly easy for the hacker to figure out that the victim has an Uncle Bud, given social media methods.  So, the victim gets this friendly note that looks like it's from Uncle Bud, inviting him to click on a flash Christmas card.  The victim, being a nice guy and not wanting to insult Uncle Bud, clicks on the link.  The hacker, using the flash Trojanizer utility, is then emailed lots of info about the victim's computer, including IP address and port number, as a result of the victim clicking on that link.


The hacker then uses SubSeven, a Remote Administration Tool (RAT), to connect with the victim's PC and see all kinds of info on that PC and take control.  Subsequently, the hacker opens a Keylogger app and is able to see the victim's keystrokes in real time.  That means credentials.  Awfully dangerous if the victim is opening an online banking application!

 

In this demo, the hacker activated the webcam and could even watch the victim.  Creepy.


Now, the hacker tools Tom was using are common ones, and up-to-date endpoint security, such as AV, would have stopped these particular hacking tools from working.  The tools he was using were "old news" that can be defeated.  They're still usable by real hackers, because there are always people who don't keep their endpoint security up-to-date.  In addition, there are always newer, more sophisticated tools.  Tom, being with the FBI and all, did not want to publicize the newer, nastier hacking tools, which is nice.  But they are out there...

infragard2.gif


Tom's presentation was given at the InfraGard Austin meeting.  InfraGard is a collaboration between the FBI and private industry members who are involved in protecting critical infrastructure.  Critical infrastructure includes things like water supplies, communications systems and information technology. Important things that would appreciably hurt our lifestyles, if hacked.


Each InfraGard chapter is linked with an FBI Field Office and provided access to experts from the agency to help mitigate threats to US critical infrastructure.  The Austin chapter is linked with the FBI office in San Antonio.  InfraGard members are vetted at the time of application, and then have the capability to contribute to the security and protection of our infrastructure and key resources.


At this InfraGard meeting, aside from the two demos, Tom also discussed the trends the FBI is seeing.  Social engineering is not new, but it is growing with social media and associated scams.  He also discussed spam scams, including an example of how the stock market was influenced by such a scam.  He also discussed how he investigates suspected malicious code in his role at the FBI, including the tools he uses.  Another interesting point was around anti-detection and anti-debugging tools and techniques, which attempt to make malware "hard to find."  Tom mentioned that in his role, it's important to be able to attribute malware to its source, so such countermeasures make attribution increasingly difficult.


If all this talk of malware is making you concerned about the security of your own IT security infrastructure, please check out this whitepaper, IT Security Management Checklist - 9 Key Recommendations to Keep your Network Safe.


On a lighter note, you might also check out this short video from MAD Security about the dangers of USB devices.  No cats were harmed in the making of the film.


Linux, SAM, and You

Posted by Bronx Feb 19, 2013

"Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released October 5,1991 by Linus Torvalds." Thank you Wikipedia for that lucid definition. (When the OS first came out, I pronounced it "lahy-nuks," to pay homage to its creator. Needless to say, I was once again in the minority.)

 

Linux and You

Over the years, Linux has grown on me for three reasons:

  1. It's free
  2. It's open source
  3. It's still free and open source

If you've read some of my previous articles, you're no doubt aware that I am a proponent of tinkering, and an opponent of restrictions. Both Microsoft and Apple usually have great operating systems, and that's fine. They are entitled to make all the money they want for building a popular product. However, when it comes to the user having total control over an OS, Linux wins, hands down. (Let the marketing and sales people figure out how to make the money.)

 

Linux in Action

If you have an Android based phone or tablet (like I do, of course) then you have experience using Linux. Again, I prefer more flexibility and fewer restrictions than those provided by Apple and Microsoft. As you may have noticed in the past few years, the Android phone and tablet market has exploded. Why? IMHO, Android based products are cheaper and more flexible than their Apple counterpart. And when it comes to Windows trying to compete...let's just say...(actually, I can't say that.)

 

So, with Linux on the rise, what does this mean for you, the IT professional? According to a recent article, found at PCWorld and Dice, it means more money. A lot more money! Check out that PCWorld article to see just how much more money you could be making if you were a Linux expert. (I may have to pick up some light reading for myself this weekend.)

 

Linux and SAM

If you currently have Linux boxes in your world, then you, my friend, are a smart puppy. You get to enjoy tinkering to no end. And the more you tinker, the more you'll learn, which in turn will lead to more money for you; the undervalued, under-appreciated, and unsung hero of your company. Of course, we here at SolarWinds actually do appreciate your hard work, and that's why we created SAM templates to monitor Linux boxes! Take a look at some of these templates below:

 

SAM Templates for Linux:

201731-linux_penguin_180_original.jpg

Apparently, security is not your job after all.

 

In a recent security survey we asked more than a hundred IT administrators and other professionals about their roles and attitudes toward computer and network security.  Not surprisingly, almost all IT pros (86%) said that they are responsible for securing IT.  However, only about one in fifteen (7%) said that security was their full time job!

 

Q) What's the main conclusion? 
A) Most of the people companies depend on to enforce IT security policies have conflicting priorities and resources, not the least of which is the length of the average IT day.

 

Q) What else can we conclude?
A) When offered the choice between "secure and hard" and "secure and easy," IT pros will pick "whatever checks the box and gets me out the door by dinnertime" almost every time.

 

Fortunately, SolarWinds has a number of easy-to-use products that secure networks, secure computers, and avoid the hassles and meetings that keep everyone working too late.  Three of them are:

 

Are You In The 79%?

Are you one of the 79% of IT pros responsible for computer security as part of your overall job (but not your full time job)?  Are you part of a small shop where security is no one's full time job?  We'd love to hear from you in the comments below!

How do your company’s business needs impact your network? Do you know when your network is busiest? When do you have the least amount of traffic? Have you been considering moving some or all of your network into the cloud? Do you have questions about how to save money and make the most of your network bandwidth?


Network Management Planning helps you answer all these questions and more. The process involves:

 

  • Knowing your business and what it is trying to accomplish
  • Translating your business requirements into network requirements
  • Collecting existing network topology and usage data
  • Analyzing the data
  • Developing a plan
  • Verifying the plan
  • Implementing the plan

 

Are you ready to learn more? SolarWinds also offers you the training you need, as well as network professional certification. For starters, take a look at the video, Learn: Network Management Planning.

For specific information on becoming a SolarWinds Certified Professional (SCP), see How to Get Certified. The SolarWinds Certified Professional program is an accredited certification designed to validate your skills in network management by testing you in key knowledge areas. Becoming an SCP enables you to demonstrate your skill, get your geek credit, and promote yourself. Get started by reviewing the process and evaluating your readiness. And it doesn’t even require you to leave your office!

Kiwi Syslog Server can help you manage the large volume of messages you are getting from your devices.  Simply create filters and actions that will weed out insignificant events and then act upon important ones.

A good example is to send a text message when a site-to-site tunnel is dropped. Keep in mind that you can use any event you would like to isolate a problem, and then trigger any necessary actions. The action can be as simple as sending an email or as involved as running a script that performs any number of complex responsive actions.

2-14-2013 2-09-49 PM.png

What if you want to isolate an event telling you that one of your firewalls has issued a message that the IPSec tunnel was terminated because the connection was invalid? To be alerted about this type of message, you would need to set up a Priority filter. Each incoming message contains a Priority value, which is made up of a Facility and a Level; you specify which priorities will cause the filter result to be true. All Facility codes are defined in RFC 3164 if you need a refresher. Next, set the priority field as Facility and define the importance Level.

2-14-2013 2-07-18 PM.png

 

When you set up a network device or group of devices to send syslogs to Kiwi, you define which "facility" to use. For this example we'll use Facility = "Local5" for the firewall. The importance Level ranges from "Debug" to "Emerg" (Emergency). Select a level as appropriate, probably warning or greater, to avoid getting Notice logs.
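If you're curious what that Priority value actually looks like on the wire: per RFC 3164, the PRI number at the start of every syslog message packs both fields together as facility x 8 + severity. Here's a quick Python sketch of the encoding (nothing Kiwi-specific, just the standard math):

    SEVERITIES = ["emerg", "alert", "crit", "err",
                  "warning", "notice", "info", "debug"]     # 0 (worst) .. 7
    LOCAL_FACILITIES = {16: "local0", 17: "local1", 18: "local2", 19: "local3",
                        20: "local4", 21: "local5", 22: "local6", 23: "local7"}

    def decode_pri(pri):
        """Split an RFC 3164 PRI value into (facility, severity) names."""
        facility, severity = divmod(pri, 8)
        return LOCAL_FACILITIES.get(facility, str(facility)), SEVERITIES[severity]

    print(decode_pri(172))   # a firewall on local5 logging at warning: 21*8 + 4

    def matches_filter(pri):
        """Local5, warning or more severe -- the filter described above."""
        facility, severity = divmod(pri, 8)
        return facility == 21 and severity <= 4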

Next you would define an action. In this case a simple email works. Enter a message as needed or select from any number of variables to populate a customized one.

After you have entered the dialog fields, verify that your e-mail servers are set up in Kiwi so the email does not bounce.

2-14-2013 2-07-50 PM.png

2-14-2013 2-11-19 PM.png

 

Above is just an example of one simple filter you can use to isolate events of interest using Kiwi Syslog.


More information can be found here: Syslog Server for Windows | Product Overview | Kiwi Syslog Server.

The title of this article sounds ridiculous without explanation, so here's the explanation. There's an old saying that the best swordsman in all of France would rather fight the second best than the worst, because he knows what to expect from the second best. The point being? Unpredictability and improvisation are assets.

 

Fundamentals first

You cannot become a great swordsman by simply reading about sword fighting techniques. (You will die). You cannot be a great musician if your fingers cannot move fast enough to play the notes. You cannot be a great writer if you cannot spell and structure a sentence. You cannot be a great SysAdmin if you cannot see and understand your network. You cannot be a great painter if you cannot draw.

 

Take that last sentence about being able to paint and draw. I had a friend who claimed she was an artist because she slopped paint on canvas. You know the person I'm talking about. The one who paints like a six-year-old and calls it "art" because other modern artists, like Picasso, painted in a similar fashion. There is a slight difference between my friend and Picasso. Picasso learned all of the fundamentals of painting before he decided to get "experimental" with his work. My friend did not. She just skipped over all of the basics and went straight for the kooky, calling it art. Below are two of Picasso's works. You can see why he was allowed his creative license. I'm no art critic, but clearly Picasso learned to walk before he was able to run (regardless of his questionable destinations).

 

 

Chess

Chess is a wonderful game for utilizing unpredictability and improvisation within a structured setting, after you learn the fundamentals (the moves of the pieces). As with most things in life, we are taught how to do things in an organized and predictable framework. Chess itself has multiple structured gambits and defenses, which is why I never learned them. I taught myself how to ultimately win by being unpredictable and thinking on the fly. Adapting to any situation became my strength.

 

Networking Fundamentals

You cannot solve your networking problems if you do not understand the basics of how your network functions. My first suggestion is to read the following, free of charge:


Networking Improvisation

Any book on improvisation would be, at the very least, ironic. To do well at the networking improv game, you should:

  • Know as much as you can about your network, including both the hardware and software.
  • Have as many tools at your disposal as possible. A swordsman will be safer with more than one weapon. A musician will play better mastering more than one instrument. A painter will paint better with more than one color and brush. A SysAdmin will solve more problems with more tools.
    • SolarWinds has a plethora of tools to help you see every possible aspect of your network, and some are free!


The Lesson

Learn the basics and master them. Collect all the tools you can and then master them. Information, for the most part, is free on the Internet. You can adapt to and overcome any problem that presents itself if you have both the knowledge and the tools. (That's my motto).

DanaeA

Logs 101: Normalization

Posted by DanaeA Feb 15, 2013

In the first blog of this series, What are Logs?, we learned what logs are and why they are useful to your organization. Now we are going to learn what happens when you have different types of log data, and how your System Administrators can avoid going crazy combing through the thousands of logs that are produced.

 

System Administrators monitor logs from hundreds of different devices, all written in proprietary formats. Unfortunately, proprietary log "language" is often not user friendly and is frequently unreadable in its native format. Log data is written in the device's own language, which is different from the "language" of another device. Comparing the logs against each other is like comparing a paragraph written in Russian against one written in Japanese.

 

The following is a sample of a log file from an antivirus program:

Log sample.png

There is some recognizable information, but can you imagine trying to decipher thousands of log entries a day? Luckily, there are software programs (like our very own SolarWinds Log & Event Manager) that can convert that data into useful information that can be searched for important alerts or events that may have occurred.

 

What is Normalization?

 

The LEM system is based on software modules called Agents, which collect and normalize log data in real time before it is processed by the virtual appliance, and on non-Agent devices, which send their log data directly to the Manager for both normalization and processing.

By definition, to normalize is to make (text or language) regular and consistent. LEM gathers logs from devices and translates (or normalizes) those logs into the same language, so they can be directly compared against each other.
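To make the idea concrete, here is a very rough sketch of what normalization means in code. The raw log line, the field names, and the parsing rule are all made up for illustration; LEM's actual normalization rules and schema are far richer than this:

    import re

    # One proprietary firewall format and one imaginary antivirus format,
    # both translated into a single normalized "language" (a common schema).
    ASA_PATTERN = re.compile(
        r"%ASA-\d-(?P<id>\d+): Deny (?P<proto>\w+) src \S+:(?P<src>\S+) dst \S+:(?P<dst>\S+)")

    def normalize_firewall(line):
        m = ASA_PATTERN.search(line)
        if not m:
            return None
        return {"vendor": "firewall", "event": "traffic_denied",
                "source": m.group("src").split("/")[0],
                "destination": m.group("dst").split("/")[0]}

    def normalize_antivirus(fields):
        # Imagine the antivirus log arrives as comma-separated fields.
        return {"vendor": "antivirus", "event": "malware_detected",
                "source": fields[1], "destination": None}

    sample = "%ASA-4-106023: Deny tcp src outside:203.0.113.5/4711 dst inside:10.0.0.7/80"
    print(normalize_firewall(sample))

Once both records share the same fields, they can be searched and correlated side by side, which is the whole point of normalization.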

 

When an Agent cannot be installed on a device, that device can be set to send its log data to the LEM Manager for normalization and processing. Examples of devices that cannot host Agent software include firewalls, routers, and other networking devices. LEM accepts normalized data and raw data from a variety of devices. Non-agent devices send their log data in raw form to the LEM manager. Once normalized, log data is processed by the LEM Manager, which provides a secure management clearinghouse for normalized data. The Manager’s policy engine correlates data based on user defined rules and local alert filters, and initiates the associated actions when applicable. These actions can include notifying users both locally in the Console and by email, blocking an IP address, shutting down or rebooting a workstation, and passing the alerts on to the LEM database for future analysis and reporting within the Reports application.

 

SolarWinds Log & Event Manager collects, stores, and normalizes log data from a variety of sources and displays that data in an easy to use desktop or web console for monitoring, searching, and active response.

LEM logs.png

The term virtualization has become synonymous with technology that is easy, cheap, and effective, all amazingly packaged in one. But because many of the normal limitations found in deploying physical servers have been removed, it can get out of control and begin causing its own problems.


VM creation is quick and easy, and because it does not come with the extra hardware baggage, system administrators are more than happy to create new virtual machines whenever needed. But once the job is done, these VMs tend to be ignored and, because they are residing on your physical infrastructure somewhere, they continue to consume essential storage and processing resources, leading to VM sprawl.

 


Stray VMs, a Bane to Admins


Stray VMs cannot be avoided entirely, and the problem gets more complicated as additional VMs are added: admins shift their attention to managing these individual systems and away from the overall health of the environment, allowing the resources consumed by VM sprawl to grow.

Active VMs tend to be attended to, but the old, unused ones still consume physical and financial resources, and even when powered off, these stray VMs may carry software licensing and support costs that can prove to be an organizational burden. What's more, if unused VMs are over-allocated memory, storage, and processor, valuable resources that could be channeled elsewhere go to waste. In short, virtualization administrators should keep an eye out for the problems VM sprawl can cause, listed below along with a rough detection sketch.

Idle but alive

  • Zombie VMs may seem idle but many may still consume CPU and storage resources, burdening hypervisor performance and wasting expensive storage capacity.

Resource over allocation

  • Because of the ease of creating VMs, admins sometimes end up creating VMs and over-allocating resources. Right-sizing these VMs closer to their actual requirements is a cost-effective way to free up memory and storage.

Storage resource crunch

  • Unused VMs occupy storage resources that could be used elsewhere in a virtual environment. In addition, VM snapshots also consume huge amounts of storage and multiply the impact of VM sprawl.

Software Licenses

  • Along with consuming critical resources, VM sprawl can also tie up software licenses, which may lead to license violations or, at a minimum, make it very difficult to get an accurate license count.

Capacity Planning

  • Due to the increased number of stray VMs, administrators may plan future virtual expansion based on the wrong assumptions.
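As a back-of-the-envelope illustration of hunting for sprawl, the sketch below walks an exported VM inventory and flags machines that have been powered off for a long time or that are allocated far more memory than they have ever used. The field names, thresholds, and sample record are assumptions for illustration only; a real report would come from your virtualization management tooling:

    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=60)   # assumed threshold for a "stray" VM
    OVERSIZED_RATIO = 4                # allocated memory >= 4x observed peak

    def flag_sprawl(inventory, now=None):
        """inventory: list of dicts exported from your virtualization manager."""
        now = now or datetime.utcnow()
        findings = []
        for vm in inventory:
            if not vm["powered_on"] and now - vm["last_powered_on"] > STALE_AFTER:
                findings.append((vm["name"], "powered off for more than 60 days"))
            elif vm["mem_allocated_mb"] >= OVERSIZED_RATIO * max(vm["mem_peak_mb"], 1):
                findings.append((vm["name"], "memory allocation looks oversized"))
        return findings

    sample = [{"name": "test-web-01", "powered_on": False,
               "last_powered_on": datetime(2012, 10, 1),
               "mem_allocated_mb": 8192, "mem_peak_mb": 512}]
    for name, reason in flag_sprawl(sample):
        print(name, "-", reason)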


VM security tips

VM sprawl can also represent a significant security risk, as VMs that are offline may escape routine security processes. The following best practices are quick tips for managing the security issues related to VM sprawl:

  • Have a process to ensure all virtual machines, including those offline, are fully patched and secured on an ongoing basis, or at least before they are brought back into the IT environment
  • Appreciate the architectural differences of a virtual environment, including VM sprawl issues, and adapt security policies accordingly
  • Apply intrusion detection and antivirus software to all physical and virtual layers
  • Avoid VM sprawl; enforce policies to ensure VM creation is closely monitored and machines are decommissioned after use
  • Use secure templates for the creation of new VMs

 

VM Sprawl Control Super Hero


 

Controlling this situation with the right set of virtualization management tools is essential. A handy, affordable, easy-to-use control mechanism lets over-worked admins put the brakes on virtual machine sprawl quickly and easily, and gives them one less thing to worry about.

You've got your Java update package all downloaded, and you're about ready to send it out to all your happy little boxes. But, wait! You feel some (justified) suspicion toward all things Java and open the package. There are three JRE install packages in there.

  • x86
  • x64
  • x86 for x64 systems

 

(Justified) suspicion has just upgraded to low grade paranoia. Thank you, Java.

 

Why does Java have three installers for two architectures?

 

The x86 and x64 packages are self-explanatory, but what is this x86 for x64 systems?

 

Java seems to have some intermittent problems with installing or updating the 32-bit version on a 64-bit system. For the Java 7u10 update package, it looks like Java is uninstalling earlier versions of JRE7, and then doing some gyrations to install the 32-bit version.

 

To uninstall JRE7, Java closes all the open handlers and then runs an uninstaller MSI for 32-bit Java. Then it runs some interesting batch files. The first one renames %systemroot%\system32\config\systemprofile to %systemroot%\syswow64\config\systemprofile. Java then applies the package and runs another batch file to revert the systemprofile path.

 

Long story short, Java doesn't seem to natively support updating or installing 32-bit Java on 64-bit systems (or that support is broken on 7u10). That last package is their method of providing that support in a vaguely seamless way.
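If you want to know which of those packages a given 64-bit Windows box actually needs, a quick inventory check like the hypothetical sketch below can help before you push the update. It simply looks for java.exe under both Program Files directories; the paths and decision logic are assumptions for illustration, not anything the Java installer itself does:

    import os

    # On 64-bit Windows, 64-bit Java normally lives under "Program Files"
    # and 32-bit Java under "Program Files (x86)".
    CANDIDATES = {
        "x64 JRE": r"C:\Program Files\Java",
        "x86 JRE": r"C:\Program Files (x86)\Java",
    }

    def installed_runtimes():
        found = {}
        for label, root in CANDIDATES.items():
            if not os.path.isdir(root):
                continue
            for entry in os.listdir(root):
                exe = os.path.join(root, entry, "bin", "java.exe")
                if os.path.isfile(exe):
                    found.setdefault(label, []).append(entry)
        return found

    if __name__ == "__main__":
        for label, versions in installed_runtimes().items():
            print(label, "->", ", ".join(sorted(versions)))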

In 2012, we saw some of the worst online security snafus and attacks, with a rise in cyber espionage against personal and enterprise data in the form of DDoS attacks, cloud outages, and hacker attacks. According to the latest predictions by IT security companies, PC users will remain the biggest target for malicious attacks, and mobile devices, including tablets, will also be targeted. On the enterprise front, be it cloud, virtual, or physical, securing the enterprise network is going to be harder than before. Network advancements and environment changes will make it complex and hard to monitor everything that's hooked to the network.

 

The following summarize predictions made by security organizations about cybercrime in 2013:

  • BYOD will add complexity. According to McAfee, more users will bring their own devices to work, and mobile devices will become common in the workplace. All these new devices will make the IT environment complex and difficult to monitor.
  • Data breaches like the Dropbox hack will continue to occur, both in the cloud and in the physical data center.
  • Malware will evolve in sophistication of usage and deployment. McAfee indicates it will become harder to monitor malware and spam - for example, the "It's you on photo" spam that affected Twitter users, leading them to visit a ".ru" spam site.
  • The cost of securing the network will increase steeply. IT organizations will spend budget to upgrade network software and equipment, because sophisticated tooling will be required to combat these new threats. Organizations will need to deploy additional administrators to effectively monitor attributes across network, system, and virtual environments. Software for SIEM, VM/cloud monitoring, and end-user activity monitoring will be required as a proactive threat mitigation measure.
  • Personal and enterprise data will become a target for hackers avenging a communal, political, or personal vendetta.
  • With the increased use of mobile and cloud technology, new IT standards will be mandated, like the new PCI DSS compliance standard for mobile payment processing. It is also likely that existing compliance rules will be made stricter.
  • According to Trend Micro's 2013 forecast, the Windows 8 product line-up will be targeted by most hackers.

 

The secret mantra behind any secure IT enterprise is being proactive rather than reactive. So if the worst is about to strike, let us be prepared for it:

  • Use simple but effective methods, and use SIEM software to monitor all IT resources, including servers, user devices, and network devices.
  • Proper assignment of resources, both virtual and physical, will enhance performance and reduce the pain of monitoring unsolicited devices.
  • Patch software vulnerabilities regularly with an automated patch management tool.
  • Provide guidelines and educate end users about network safety, malware, data theft, and spam using internal campaigns and programs.

 

Threats will always continue to exist. Though we cannot predict every threat, we can increase awareness among users and adopt more prudent security measures to protect users and data.

Tom Endean recently published a review of Log & Event Manager (LEM).

Sys Admins - Tom Endean.PNG

 

It is a comprehensive look at LEM, from installation to utilization.  The review includes details on the real time analysis and nDepth search features of LEM.  It has a couple of fun examples of LEM in action with Active Response:

 

  • Showing LEM catching a user who is playing an unauthorized game at work.  LEM terminates the game and scolds the user in an alert window on the offending machine.
  • Showing LEM notifying Tom Solarwinds via email that a user has been added to the Domain Admins group.

 

Check it out, it is well-written and entertaining, with a dash of British Humor.  It's also a great intro to LEM.

 

Virtualization is one of the fastest growing “new technologies” in the I.T. world, but it’s not cheap to implement, mostly because of storage costs. CPU and Memory capacities of today’s systems are more than sufficient to handle a host with several running virtual machines, but very few systems have adequate internal storage services to support the number of virtual machines that can run in the available memory space.

 

External storage is inevitable

One way to address the storage challenge is with external storage. Large virtualization implementations typically implement Fibre Channel or iSCSI SANs, but these are fairly expensive implementations, and quite often out of the reach of many organizations. File sharing solutions can be very cost-effective for more modest virtualization implementations, but they’ve not always been functional solutions for more than experimentation.

 

Another factor that encourages the use of external storage is the growing prevalence of both Microsoft Hyper-V and VMWare ESX/ESXi implementations in the same data center. Building out individual host servers with internal storage capacities gets very expensive. The differential in pricing between servers with large numbers of internal bays and a minimal number of internal bays can be a factor of ten. Shared storage can be a great cost-effective strategy for addressing the needs of hybrid environments.

 

Back in the really early days of virtualization, I recall experimenting with putting virtual disks on a file server. That was a functional solution for a single virtual machine, but it quickly hit the wall with additional machines due to limited network bandwidth. At that time, Gigabit Ethernet was not within easy reach of the masses, and only a limited number of virtual disks could be supported on a 100 Mb/sec Ethernet connection. The only thing available then that could really provide the services needed was a Fibre Channel SAN. Today, available Ethernet bandwidth exceeds the capacity of Fibre Channel.
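The math behind that wall is simple enough to sketch. Assuming, purely for illustration, that an average virtual disk needs roughly 5 MB/s of sustained throughput and that about 70% of a link is usable after protocol overhead, a quick calculation shows why 100 Mb/s Ethernet topped out at a VM or two while faster links do not:

    # Rough capacity estimate: how many average virtual disks fit on a link?
    # The 5 MB/s per-VM figure and 70% usable-bandwidth factor are assumptions.
    PER_VM_MBPS = 5 * 8   # 5 MB/s expressed in megabits per second

    for label, link_mbps in [("100 Mb/s Ethernet", 100),
                             ("1 GbE", 1000),
                             ("10 GbE", 10000)]:
        usable = link_mbps * 0.7
        print(f"{label}: roughly {int(usable // PER_VM_MBPS)} virtual disks")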

 

Previous limitations of file sharing

Until recently, though, the file sharing technologies available for using external storage were the primary bottleneck. If you were working with Microsoft Hyper-V, the capabilities of SMB in Windows Server just didn’t measure up. In fact, some people believe that SMB v2 (introduced in Windows Server 2008) was actually a step backward in performance, and the SMB v2.1 patches done for Server 2008 R2 merely brought us back to the performance levels of Server 2003.

 

For VMware ESX/ESXi the opportunities weren't much better. The ESX/ESXi external file sharing model uses Network File System (NFS) v3. NFS has been the file sharing system in Unix and Linux for decades, but it can be complicated to configure and tune for performance, and NFS v3 itself is nearly two decades old. It also has limitations with respect to security, performance, statefulness, and cluster support - all things that would be particularly useful when hosting virtual disks.

 

File sharing improvements

In April 2003, the IETF introduced NFS v4 (RFC 3530), which addressed security and performance and introduced a stateful protocol model. In January 2010, the NFS v4.1 (RFC 5661) enhancements added capabilities for clustered server deployments and scalable parallel access to files distributed among multiple servers, making NFS v4.1 a very robust file sharing protocol for use with virtualization. Concurrent with the NFS developments, Microsoft was hard at work making improvements to the SMB protocol, and has introduced those with SMB v3. Some of these improvements compare to the scale of improvements seen in NFS from v3 to v4.1.

 

NFS v4 and SMB v3 together

Combining the capabilities of SMB v3 and NFS v4.1 into a single file server brings an exceptionally powerful external storage solution to virtualization environments that are budget conscious, and Microsoft has made both of these protocols available in Windows Server 2012. Now, a single file server can provide storage services to both Hyper-V environments and ESX/ESXi environments, using each hypervisor’s ‘native’ file sharing protocols.

 

Today, Hyper-V v3 can use either SMB v3 or NFS v4.1, and soon (we hope) ESX/ESXi will be enhanced to also take advantage of these significant improvements in NFS v4.1. If you’re looking to expand your virtualization infrastructure, shared file storage might be something to include in your environment.

 

For those that are using file sharing to support your virtualization environments, what are your experiences?

 

VirtBlogCTA-Monitoring.png

In the previous article I discussed Facebook’s trove of tagged user photos as a powerful source of information for advertising and law enforcement purposes.

 

Here I want to discuss an irony about efforts to access data and keep it off limits.

 

Minority Report: Addendum

In Philip K. Dick’s original story “Minority Report” face recognition technology seems quaint as a surveillance tool compared to how the society puts to use the interconnected gifts of three mutant children. More or less imprisoned together in a sensory deprivation tank, the three coexist as a hive mind that produces televisual images as precognitive evidence for imminent crimes, feeding a law enforcement pilot program that uses the precognitive images to arrest and convict perpetrators who have not yet actually done anything.

 

A related irony is that the system’s virtuoso detective, John Anderton, becomes one of his own cases. His life comes to depend on his success in revealing a fundamental flaw in the system before other detectives can find and convict him of a precognitively recorded murder.

 

Utah Data Center

The NSA is currently building its largest surveillance-oriented datacenter in part to break into AES-encrypted data they already hold through ongoing capture of all internet backbone data. Yet the AES encryption standard they intend to break is also one (AES 256) they themselves use to guard top secret information. So if their brute-forcing efforts succeed, government agencies would be able to digitally safeguard classified data only so long as an unauthorized someone does not gain access to the new code-breaking system.

 

Monitoring Report

For now, if AES 256 is good enough for the United States National Security Agency, it's also good enough for protecting any network resources. As IT professionals, whenever security warrants, we should use network management tools that support SNMPv3, which can use AES to encrypt its traffic. For example, SolarWinds Network Configuration Manager can use SNMPv3 to download existing configs or to update them as needed.
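For a sense of what an SNMPv3 query with SHA authentication and AES privacy looks like under the hood, here is a minimal sketch using the open-source pysnmp library. The host, user name, and passphrases are placeholders; NCM handles all of this for you through its own configuration screens:

    from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd,
                              usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

    # SNMPv3 authPriv: SHA for authentication, AES-128 for privacy.
    error, status, index, var_binds = next(getCmd(
        SnmpEngine(),
        UsmUserData("monitorUser", "authPassphrase", "privPassphrase",
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(("192.0.2.10", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

    if error:
        print(error)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")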

We Have a Winner!!!

Well, folks....it looks like DameWare can be used to remotely control a Surface Pro tablet.  We'd like to thank our thwack friend jimmyyen for doing the QA work for us and proving that DW is once again up to the task.  Check out jimmyyen's video below to see his handiwork.

 

 


The Surface Pro is Here, Now How Do You Support It?

As most of you already know, Microsoft released its Surface Pro tablet this weekend.  What sets this tablet apart from the likes of Apple’s® iPad® and even Microsoft’s Surface RT® is that it is squarely aimed at enterprise users.  Its internals are really no different than those of a laptop or ultrabook.  It runs on a 3rd generation Intel® Core i5® Processor, has 4GB of RAM, and up to 128GB of storage on an internal SSD.  It also includes the Pro version of Windows 8® which means it can be joined to an Active Directory® domain and can have Group Policies® enforced on it.  Adoption rates for this new tablet are expected to be robust, so it is time for IT pros to start thinking about how they are going to support it.

 

surface_pro_pcworld.jpg

                                                                    Image by PCWorld


So how will you support these new devices? 

Well, if they're exactly what we’ve been told they are, DameWare® will be up to the task.  The DameWare MRC agent should be able to be installed on a Surface Pro just like any other Intel-based Windows computer.  This means that you should be able to remotely control a Surface Pro the same way you would the other Windows computers on your network.  It also means that you ought to be able to perform remote administration tasks on one like viewing event logs and restarting services from the DameWare Remote Support console.
 
So you're probably thinking….Should be? Ought to? Does DameWare work with Surface Pro tablets or not?? Well, to be honest we haven't tried, because they just became available 2 days ago. The release of the Surface Pro got us thinking about how active our friends are on thwack, and we decided to let you help us find out if these new tablets can be supported by DameWare. Now rest assured that we haven't lost all of our software developers or QA analysts. We just thought it might be fun to engage our thwack friends in a little QA contest.

 
The QA Contest: Show Your DameWare Surface Pro Support Savvy

Here’s how it works:  All you have to do is be the 1st one by February 25th, 2013 to post a video on thwack of a Surface Pro tablet being remotely controlled with DameWare Remote Support or Mini Remote Control and then tweet it with #DameWare.  It’s that simple!  The winner will receive a $100 Amazon gift card and some bragging rights on thwack.


And Now for Some Fine Print

Your video has to clearly show that it is a Surface Pro tablet being remotely controlled and it has to be with the Mini Remote Control Viewer.  I recommend showing an MRC chat session to prove you’re using the MRC Viewer.  The best way to get your video on thwack is to upload it to YouTube and post the link to the DameWare DRS or DameWare MRC forums.  You can also submit directly to those forums and attach your video submission.  Please make sure to check the “DameWare Surface Pro Tablet Contest” category before posting.  Once you've posted your video, tweet your video with #DameWare.

 

Sadly, this contest is for US residents only.  My apologies to our good thwack friends from Canada, Europe, Asia, and Latin America.  Please see the attached Terms & Conditions document to get all of the fine print.


Go Get ‘em!

So what are you waiting for?  Let’s see those videos!

Organizations with large data centers often make storage management difficult for themselves by using more than one kind of storage. This kind of multi-vendor storage environment makes it complex for administrators to manage storage devices.

In a heterogeneous storage environment administrators struggle with:

  • Enterprise Capacity Planning: Administrators need to analyze raw capacity, storage pool usage, RAID capacity, free LUNs, and so on in order to forecast and allocate storage resources. This ensures all LUNs are allocated and thereby reduces waste.
  • Mapping VMs: Administrators need to map their VMs to the underlying LUNs associated with each VM's storage. This helps identify unused VMs and their associated storage resources.
  • Storage Bottlenecks: Storage read/write speed determines the usability of certain applications, and a critical glitch in any storage component may affect the entire storage network's performance. It is important to maintain an optimum read/write speed and determine whether any bottlenecks exist.
  • Updating Storage Components: For devices on the network, administrators need to know asset information about the storage components, such as serial numbers, firmware versions, etc., for compliance and audit reporting.

 

IT administrators need to monitor and report on each of these aspects of their storage environment across all vendors. By having all of the storage information in one place, they can correlate errors and identify the root causes of performance issues related to any storage component. Managing an array of storage devices and monitoring performance issues across all vendors can be quite a task for storage administrators.

Best Practices involved in multi-vendor storage management

CPU Load will affect Storage Performance – The CPU load of storage servers affects storage performance, so ensure CPU load does not exceed the optimum level. Setting an SNMP trap to trigger an alert will let you know when the CPU reaches its threshold.

Disk Load will affect Latency – This can be tracked by aggregating all disk read/write speeds in a server. Latency shows up as slow read/write speeds, from which you can pinpoint the volume that is overloaded. If a volume is overloaded, you can identify additional capacity to allocate more disk space or migrate volumes.

Effective Storage Capacity Planning – Plan your storage needs ahead of the requirement, keep at least ¼ of total storage free, and don't let storage reach its maximum level. Always have a storage buffer available that allows you to move or back up VMs and files when needed. Identify all stale VMs and unused LUNs for proper utilization of storage.
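As a quick illustration of that one-quarter-free rule of thumb, the sketch below checks each volume in a made-up capacity report and calls out any that have dipped below a 25% free-space buffer. The volume names and numbers are assumptions for illustration:

    FREE_BUFFER = 0.25   # keep at least a quarter of total capacity free

    volumes = [   # hypothetical capacity report
        {"name": "datastore01", "total_gb": 2048, "used_gb": 1400},
        {"name": "datastore02", "total_gb": 1024, "used_gb": 900},
    ]

    for vol in volumes:
        free_ratio = 1 - vol["used_gb"] / vol["total_gb"]
        if free_ratio < FREE_BUFFER:
            print(f"{vol['name']}: only {free_ratio:.0%} free - plan a migration or expansion")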

Defragmenting Volumes – Defragmenting may sound like a simple process, but it can slow down servers. Plan your maintenance schedules ahead of time so they do not affect productivity. A good practice is to schedule storage maintenance once a month to remove clutter and fragmentation on storage devices.

Storage Tiering – This is the process of putting data where it belongs: non-critical data like snapshots and stale VMs goes to low-speed storage devices, while heavily used data goes to high-speed disks. Prioritizing storage this way reduces clutter and optimizes storage performance.

Reports and Notification – Keep a record of all asset information. This helps you update firmware and find the warranty and EOL status of components.

In a diverse storage environment, multi-vendor monitoring and reporting is essential to keep track of all storage-related performance issues. Storage is one of the major layers in the IT network, and any performance drag may affect productivity, end-user usability, or application availability, so be proactive and keep your storage spindles spinning.

 

Monitoring servers, applications, networks and services is crucial. However, in today's datacenter, it's more complex than ever, with physical servers, virtual servers, cloud-based servers and legacy servers all running alongside one another.

 

The argument over agent-based versus agentless monitoring has been going on for quite a long time. Initially, the power, reliability, functionality, and all-around robustness of agent-based monitoring overwhelmed the perceived advantages of agentless monitoring: lower cost and easier implementation and maintenance.

However, all this is changing with the need for organizations to be agile and the evident downside of agent-based monitoring systems: the complexity involved with agents.

 

Agent-Based Server Monitoring Hassles

  • Red Tape: The agent software runs on the remote machine and therefore affects its operation. In many environments, especially governments and larger corporations, you simply can't go installing software on critical machines without going through an evaluation and approval process.
  • Time to Maintain Agents:  Agents are very hard to maintain. As the monitoring solution is updated, the agents will need to be updated from time to time.
  • Scalability/Footprint:  Deploying, managing or administering, and monitoring connectivity with large numbers of clients and servers can become untenable. The problem is even more complicated when considering network infrastructure devices for which the number of possible connection paths is vast.

 

With all the hassles of agent-based monitoring, there are a few benefits which include deployment flexibility (eliminating NAT/Firewall/Proxy issues) as well as obtaining data such as event logs that are not obtained with agentless solutions.

 

Agentless Server Monitoring
Agentless monitoring is deployed in one of two ways: using a remote API exposed by the platform or service being monitored, or directly analyzing network packets flowing between service components.

 

SNMP is typically used to monitor servers and network devices in an agentless manner. In the case of Windows servers, WMI (Windows Management Instrumentation) is typically used, which provides a better set of metrics than can be obtained through SNMP monitoring alone. For many Windows-based servers and applications, agentless monitoring via the WMI gateway provides strong monitoring capabilities.
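As an example of what agentless WMI polling looks like in practice, here is a small sketch using the third-party Python wmi module (which rides on pywin32 and runs from a Windows machine). The server name and credentials are placeholders; commercial monitoring products do the equivalent under the hood:

    import wmi   # pip install wmi (requires pywin32, Windows only)

    # Agentless query: nothing is installed on the target server.
    conn = wmi.WMI(computer="server01.example.com",
                   user=r"EXAMPLE\monitor", password="********")

    for os_info in conn.Win32_OperatingSystem():
        free_mb = int(os_info.FreePhysicalMemory) // 1024
        print(os_info.Caption, "- free physical memory:", free_mb, "MB")

    for cpu in conn.Win32_Processor():
        print(cpu.Name, "- load:", cpu.LoadPercentage, "%")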

 

Agentless monitoring has certain distinctive advantages over monitoring with agents. Some are highlighted below:

• No Clients to deploy or maintain
• Lightweight, no application to install or run on the client. Typically consumes no additional resources
• WMI & VMware Agentless Monitoring is stronger than SNMP alone
• Typically lower initial cost for software

 

With all the various available options, it’s quite important to understand the business impacts in your particular environment for picking one server monitoring technology over another.

 

Related blog post: Customer spotlight: Agentless Enterprise Monitoring at Cardinal Health

 

susan.cohen

DNA Data Storage

Posted by susan.cohen Feb 11, 2013

Gleaning solutions from nature

Some years ago in an Object Oriented Database class, the professor proclaimed: look to nature for elegant solutions to complex IT problems. I remember my fellow classmates' suspended breaths of disbelief, but today the full meaning of his message comes clearly into focus in the article, Storing Digital Data on DNA.

DNA: the ultimate molecule

DNA is considered the ultimate molecule because it holds the recipe of life for all living things. DNA is nature's way of storing extraordinary and detailed amounts of data in a very compact space. DNA is dense, compact, can last thousands of years, and it doesn't require electricity or cooling to remain viable (1). So as universities, governments, and industries look for ways to deal with the growing mass of data flooding our information highways, it's no wonder DNA is making a splash on the data storage scene.

Storing digital data on DNA

Using DNA for digital storage can be encapsulated in these six steps:

  1. Files are represented as long strings of zeros and ones (binary files).
  2. A computer program converts the binary files into the letters A, C, G, T, which represent the four components of DNA (a naive sketch of this conversion follows the list).
  3. A machine takes the transformed data in its A, C, G, T format and uses it to make DNA. The result looks like a speck of dust.
  4. The DNA is processed in a sequencing machine that reads the DNA fragments back as the letters A, C, G, T.
  5. A computer program reassembles the DNA fragments and converts them back into binary files.
  6. The binary files are ready to play back in their original format with 99.99% accuracy.
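Here is that naive sketch of step 2: it simply maps each pair of bits to one of the four bases and back again. The published research encodings are more elaborate (they avoid long runs of the same base, for instance), but the basic idea is the same:

    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def to_dna(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def from_dna(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    strand = to_dna(b"Hi")
    print(strand)                     # CAGACGGC
    assert from_dna(strand) == b"Hi"  # round-trips back to the original bytes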

DNADataStorageModel.png

In the Meantime

DNA is indeed an attractive storage solution but writing information onto DNA is expensive. Costs are coming down rapidly, but it could be 10 or more years before this technology is practical. In the meantime, we must rely on our current storage solutions including hard disks.

But hard disks are expensive and power hungry, and in today's high-demand environments such as cloud computing, optimizing storage can be challenging. Storage Manager and Virtualization Manager, used together, are powerful tools for monitoring and managing virtualized environments and optimizing your storage solutions.

 

1 - Gautam Naik, Wall Street Journal reporter

Humanoid Robots - Why?

Posted by Bronx Feb 11, 2013

I just finished reading an article concerning a humanoid robot that will be "born" nine months from now. This begs the question, "Why do we need robots to look and act like people?"

 

Artificial Brains

About a dozen years ago I found myself alone at my computer, looking for someone to talk to via instant messaging. It came as a bit of a surprise that none of my friends were online. Rather than ponder why I was on my computer and alone on a Saturday night, I went the other way and "created" a friend. I spent the next few weeks coding an artificially intelligent "friend" who actually thought about his questions and answers based on my questions and answers. His personality was strikingly similar to my own, which made me chuckle, and others cringe. I called him SAM. (No relation to SolarWinds' SAM (Server & Application Monitor)).

 

Artificial People

It's only a matter of time before robots act indistinguishably from people. But why? I think Dr. Ian Malcolm (Jurassic Park) put it best when he said, "We spent so much time trying to figure out whether or not we could that we didn't stop and ask whether or not we should." Thank you Dr. Malcolm (an artificial person in his own right). Now I, Bronx (an artificial name for a real person - vis-à-vis, me), will ask that question, among others.

 

Moral Questions

Clearly there are some philosophical questions that need asking and answering. Even I'm guilty of being lazy in my thinking. Look at the Artificial Brains section above. I referred to a piece of software as His. Perhaps I should have thought more about why I had no friends at that moment rather than create one. Beyond that, imagine human-like robots were available today and ask yourself the following:

  • Should we build robots identical to humans? Initially it sounds like a good idea, but to what end? C-3P0 was human-esque and he was little more than a stumbling translator. R2-D2 seemed more efficient with his wheels and jet pack. The robots in the movie AI were essentially people, emotions, intelligence, and all. An emotional connection with a machine? Curious.
  • Do we need/want an emotional connection with a machine?  Probably not, but it is bound to happen. I never cried over a broken toaster and a humanoid robot should be no different. Making something that looks and acts human will probably create emotional havoc at some point.
  • What physical benefit will an artificial person bring? Robots are a great help when designed for a specific purpose, but why a human-like robot? You may be thinking that a humanoid can help the elderly and sick. That may be true, but why does it need to look like a person? My robot vacuum helps me and looks nothing like me. (I'm sure the robot is thankful for that.)
  • Who will be the master? In the movie, The Terminator, a race of robots outsmarted their human creators. The same is true in the movie, The Matrix, except that their world was software based. Do we want to risk this? It's only a matter of time before the machines become smarter than the people. To keep up, we'll probably end up merging with the machines at some point. "Honey, I love your sense of humor and the way your bionic eyes sparkle in the animated moonlight" Sigh.
  • Will this humanoid be a substitute for something/someone lacking in your life? Perhaps it's a better idea to deal directly with any and all emotional issues rather than put "a band-aid over a bullet wound."
  • Will having a fake person around the house allow you to express yourself in ways otherwise socially unacceptable where real people are concerned? If the answer is yes, you probably have issues you have not properly dealt with. You need to learn to be more socially accommodating, at the very least. If the answer is no, you don't need one. There are plenty of real people out there.

 

Real World AI

Fortunately, we are still in control of the machines. SolarWinds provides a type of AI when monitoring your network environment. (Let the software run around and figure out the problem, not you!). Some of our best are:

In the previous article I discussed how the distributed computing technology called Hadoop marks a big difference between Amazon's AWS and Facebook's respective types of cloud computing.

 

In this article I want to connect Facebook’s success at scaling big data processing for the social web with the earlier discussion of face recognition technology.

 

Your Face Here

For a face recognition system to work it must have access to a database of identified images against which to process new captures for possible matches. So, for example, that scene in Minority Report where Tom Cruise’s character receives personalized ads as he walks through a commercial space implies a database that includes at least one image that is already associated with the name ‘John Anderton’. And we can reasonably surmise that the database, like a utility power grid, provides pervasive access to authorized software—advertising, law enforcement—that might want to match images streaming in from camera sensors in the movie’s ubiquitously-computing future society.

 

In our time, US Senator Al Franken reminds us in testimony from a Judiciary Committee hearing in 2012 that “Facebook may have created the world's largest privately held database of face prints without the explicit knowledge of its users." And that lack of explicit user consent is why EU countries (Norway, Germany) have forced Facebook to disable their “photo tag suggest” feature, which uses face recognition analysis to link faces with names of other users in the Facebook system. With the feature turned-on, as it is by default, and when the face recognition software finds matches, the relevant users are instantly tagged in photos their Facebook friends upload; all it takes is a click for the uploading user to accept the suggested tags, kicking off a notification to all those tagged.

 

Facebook stipulates in its latest user agreement that it can use photos in its system for advertising purposes. We should assume that “advertising purposes” includes granting access to third parties that very much would like to advertise to its potential customers based on being able to review and analyze images of them in one or more scenes from everyday life.

 

If this flow remained entirely benign, all you might notice of this process is that ads you see online are indeed more personally appealing and relevant to your interests.

 

Warranted Access?

Access to the world’s largest database of already-tagged images is a law enforcement dream. Increasingly, Federal officers make that dream come true with a warrant. In a twist, rather than seeking to see images from the Facebook database, some courts have used the popularity of Facebook as a form of public shaming, setting up an account and posting DMV pictures of people with outstanding warrants in their jurisdiction.

 

Here technology, law, law enforcement, commerce, and politics begin to crash into each other. If the photos are nicely tagged and—thanks to Hadoop—easily findable, then posting a photo to Facebook means releasing it not only to anyone in your circle of “friends” but to anyone who has, can pay for, or can get warranted access.

 

If you encrypt an image and upload it to Facebook then the system can just copy it when your friend decrypts it for viewing. Since Facebook has a vested interest in having access to user images for those “advertising purposes,” we should assume that the system copies any uploaded images when “friends” decrypt them. And if so, Facebook image data is always in the clear when it becomes part of the system.

 

Encryption for Monitoring Tools

Fortunately, as we see fit, we can encrypt image data within our own networks and trust they will remain that way except at the intended endpoints. And we can ensure that our monitoring systems respect our data encryption policies. For example, SolarWinds Network Performance Monitor can poll for MIB updates on network devices through SNMPv3, keeping data on the state of your network confidential as it passes through its packet-switched route.

It wasn't that long ago that SSL 2.0 and then SSL 3.0 imperfections sent the security world scrambling to the safety of TLS, SSL's direct successor.  Then came BEAST, which used a combination of JavaScript and network sniffers to decrypt authentication cookies over TLS 1.0 streams.  And now we have the Lucky 13 attack that convinces TLS 1.0, TLS 1.1 and TLS 1.2 to all reveal information about the original message using a man-in-the-middle timing technique.


Fortunately, the scope of the Lucky 13 attack appears to be limited to TLS cipher suites that include CBC-mode encryption.  Unfortunately, those suites are very common and usually on by default.


However, if you own a Serv-U FTP Server or Serv-U MFT Server, you have the controls you need to enable or disable affected cipher suites built into the Serv-U Management Console.

Serv-U_Encryption_Settings_Navigation.pngServ-U_Encryption_Settings_CBC_Ciphers.png

In this case, just look for the SSL ciphers that include "CBC" and uncheck them.
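If you want to double-check what a server actually negotiates after you adjust those settings, a quick client-side probe like the sketch below will tell you. The host and port are placeholders, and keep in mind that some CBC suites (AES256-SHA, for example) do not carry "CBC" in their names:

    import socket
    import ssl

    HOST, PORT = "ftp.example.com", 990   # hypothetical implicit FTPS endpoint

    # Default client context; we only want to see what the server negotiates.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE   # lab check only; keep verification on in production

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            name, version, bits = tls.cipher()
            print(f"Negotiated {name} over {version} ({bits}-bit)")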

FIPS 140-2 SSL Caveat


If you check Serv-U's "Enable FIPS 140-2 mode" checkbox, the "Advanced SSL Options" panel disappears.  Behind the scenes, Serv-U disables all ciphers except SHA ciphers using AES (AES256-SHA and AES128-SHA) and Triple DES (DES-CBC3-SHA).  Note that the Triple DES cipher uses CBC. In other words, if you want to retain fine control over your data-in-motion ciphers, you will need to leave the "Enable FIPS 140-2 mode" box unchecked.

The Ultimate Network Management Dashboard -- A Review

You've made it! Finally. The last installment in a series of posts providing steps to create a custom network management dashboard using SolarWinds NPM. Before we get into the topic of this post--customizing web console resources--let's review what we've already done:

  • Part I presented the Network Management Dashboard concept and provided a link to the document from which this has all arisen: "How to create the Ultimate Network Management Dashboard".
  • Part II provided information about restricting user access to your network management dashboard.
  • Part III discussed the necessity of organizing your network so it makes sense when you run your network discovery.
  • Part IV provided some guidance in creating network maps on which you can display all your awesome network management data.

This article, Part V, offers some details around the final step in the "How to create the Ultimate Network Management Dashboard" procedure:

Step 5
Customize Top 10 Lists. This is one of the most sought-after requirements in an NMS: consolidating all of the important monitoring parameters into a single view. With NPM, network administrators can customize, prioritize, and organize all of their views based on their unique monitoring requirements.

 

You've Got an Amazing View, Here...Let's Make it Better

Unfortunately, SolarWinds NPM can't give you a server closet with a balcony overlooking the Pacific, nor can we offer you much to discuss at your next book club meeting, but we can give you a pretty sweet view of your network performance data, practically right out of the box. If you don't already have a server running NPM, check out our demo server to see what you can get, almost immediately. Click Network > Network Top 10, and you should see something like this:

NetworkManagementView.png

 

"What [Resources] Are You Looking At?"

If you're logged-in as an admin on your own SolarWinds server, you should see a Customize View link at the top right of the Network Top 10 view. Clicking Customize View gives you the opportunity to select the resources you want to see on the current view. Select only those resources that help you do your job, and leave the rest. For the complete treatment, see "Customizing Views" and "Using and Configuring Subviews" in the SolarWinds NPM Administrator Guide.

 

Being Resourceful

If you scroll through the view to look at its resources, you'll see that most resources will have Edit buttons in the top right corner, next to the Help buttons shown above. Resources with Edit buttons may be customized. With some resources, you are limited to editing titles, whereas others, including Top XX Resources, allow you to actually indicate the number of objects listed in the customized resource. For more detailed information related to any resource you are customizing, click Help in the top right of the resource for resource-specific direction.

 

There You Have It: The Ultimate Network Management Dashboard

With the information you've collected through this series, you've got the ability to customize your SolarWinds NPM dashboard as you see fit. Fully tricked out or bare-bones, SolarWinds gives you the information you need to get your job done. For more information join thwack.com, where you can show off your custom ride and get pro tips from other SolarWinds users.

 

Until next time...

 

Now that we’re past all that holiday madness and safely into the new year, it’s a great time to sit down, take a breath, and take stock of where your IT infrastructure is and where you’d like it to be.

 

I recently got into a conversation with our Head Geeks at SolarWinds about monitoring best practices, asking specifically, how do you know when you have enough coverage to be proactive and avoid issues? Their sage advice: "It depends."  In a nutshell: "Don't focus on interfaces, nodes, and volumes. Think about the criticality of the services you are delivering, and decide how much - and what to monitor - based on that." We all have growing networks, and networks that are growing in complexity and density via IT convergence. All of this deserves a bit of a think once in a while, to make sure everything is staying in line.

 

Here are a few other things to ask yourself while you’re having that think:

 

  • Sure, I'm monitoring what I need to with NPM, but am I getting the full benefit of IPSLA technology to ensure remote campuses also have reliable access to applications hosted on the internet or at headquarters?  Productive users are happy users.

 

  • Am I keeping support costs in check by ensuring all desktops are patched? Am I truly deploying those patches automatically?  Your IT staff is too valuable to assess and patch machines one by one.

 

  • Are you getting maximum value for your WAN connections by using QoS policy shaping to ensure non-business traffic isn’t choking out the critical traffic? Don’t let YouTube steal all the bandwidth from video conferencing!

 

  • As the number of virtual devices on your network increases and more services are concentrated into a smaller number of devices and links – are you able to adequately monitor those converged elements?  As you know, IT convergence increases the criticality of every piece of network gear, making single failures have larger impacts on your network. Every one counts in large amounts.

 

  • The added complexity of virtualization drives additional monitoring requirements – do you monitor your virtual infrastructure as well as you monitor your physical one? If not, is that costing you something? Could it in the future? Hmmm.

 

  • Do you continue to add additional subnets to your network? Are you moving forward with an IPv6 transition? Have you had an acquisition that caused overlapping subnets? How are you managing IP addresses? If it's by spreadsheet, would you like to move to something more modern? It's 2013, for gosh sakes.

 

  • You're monitoring your business critical applications. Would you like to provide that same level of availability throughout your organization? Perhaps baby steps are in order, or the squeaky wheels get more monitoring first. But in general, better monitoring, stability, and uptime is a win-win for everyone.


So, some food for thought.  Let us know how you're doing and if we can help.

 

And don't forget to keep your maintenance current. We are constantly delivering updates that offer new features to enhance the depth and functionality of your products. Stay on maintenance and get all your product updates for free.  Here's to healthy, happy admins in 2013!



I just finished reading an article entitled, No, we don't really need another smartphone OS. The author argues that the three big platforms, iOS, Android, and Windows Phone, are basically enough, stating that, "The mobile landscape consolidated for a number of reasons, one being that not enough customers supported each OS to keep its development well-funded." That may be true, but...


Competition

The very reason we have the phone technology we have today is competition. Competition is the belief that you can build a better mousetrap in the hopes of making money, thus improving the world. As we all know, one size does not fit all. As I detailed in an earlier article, choices and options are the driving factors behind my purchases (and I'm sure I'm not alone). In fact, if the Android OS did not exist, I would not be able to do half of the things I currently use it for on a daily basis. The iPhone and Windows Phone specifically restrict me from operations I can currently do with my Android. (Thank you, Google.)

 

The Browser War

Let's convert the phone OS war to the browser war. Now, if you're as geeky as I am, you probably do not use IE. You're most likely using Chrome or Firefox. Why? Because for you, they're simply better, regardless of your individual reasons. Both Chrome and Firefox have options that IE does not. Should we limit the number of browsers to three? Hardly.

 

Obviously, a lack of money and desire will kill the small players in competitive arenas, in most cases. And that's fine. But sooner or later, one of the little guys will have a big voice, and that's a fact. Mozilla Firefox began as an open source project, and just look at Bill Gates and Steve Jobs. They were nobodies at some point.

 

In fact, I thought I was innovative by creating my own Windows based browser, Bluto. It has features that are important to me, and hopefully a few others. Had it taken off, I could have been the next big thing. As it turns out, Bluto falls in that slim column to the right entitled, Other. (Oh well.) Look at the chart below and notice how the browser market share has gradually shifted over time:

browser-share-2011-04.png

As with any free market, the best will rise to the top and the poorest will sink to the bottom. Based on this chart, IE is the "best," but losing ground. Firefox has remained stable while Chrome continues to gain traction. The point is simple: Competition drives innovation. It is true that after the dust settles, only a few browsers, and phone operating systems, will survive. But they will have gotten better because someone else pushed back. If there were no competition, there would be no reason to improve and we would be stuck with ye old StarTac phone and the AOL browser. Yikes!

 

Even if the outsider doesn't make it big, his ideas might. For example, my browser may have a feature that the marketing people over at Microsoft may like and they may incorporate it into a future release. Hence, a poor performing product may begin to grow again and the world is that much better off.

 

The Network Monitoring War

SolarWinds is no slouch when it comes to competition. As a matter of fact, we were Forbes' number one small business in America for 2012! We examine our competition to see what they're doing wrong, and what they're doing right. We also engage our customers on a personal level, as I expressed in this article, so we can improve products like SAM and NPM, among others.

 

The Lesson I Learned

"Those who fear competition are those who cannot compete." - Bronx

In the first article of this series I discussed Facebook as a bellwether for an eventual convergence of face recognition technology and the social web as depicted in the film Minority Report. In this second part I want to elaborate on one big data implication of that convergence.

 

Big Clouds, Little Clouds

Within their overall footprint (180,000 non-virtual servers in multiple datacenters) Facebook reportedly has multiple clusters of over 6000 non-virtual servers interconnected with a single file system.

 

So what? You have heard that Amazon’s AWS may have over 450,000 servers in seven datacenters. Bear with me while I explain the important difference.

 

Yes, you can load up many datacenters with thousands of servers and leverage that impressive cloud-computing power to run a job, for example, that cracks previously uncrackable encryption schemes. Such a job concurrently uses servers in parallel to accomplish work on a very large iterative task that would run much longer than your life if done with serial processing on a single machine. The speed of completing this kind of task is directly related to the number of servers you throw at it. Concurrent iterations within the overall task do not share any data. And cloud computing in the AWS model excels in meeting ‘shared nothing’ data management challenges.

 

That AWS cloud computing efficiency does not pertain to the real-time data processing required to manage the user experience for Facebook’s more than one billion users*. Instead, Facebook’s operations engineering organization needs to leverage cloud resources in a way that handles interrelated events for all concurrently active Facebook users. The trick is to give work to thousands of machines in parallel while respecting the cascading effects of data I/O; each event ripples across as many accounts as a given user has “friends”. Facebook’s production workflows have complex ‘shared something' data management challenges.

 

Usually sharing data means the contending resources at some point must wait in a line; and the shape of the line is often called a—say it with me—bottleneck.

 

Meet Hadoop

Facebook meets their formidable shared data challenges with an open source technology called Hadoop.

 

Hadoop provides a common file system across all federated servers. Master servers perform the MapReduce actions of dividing up a task across servers in the federations and then reducing the provisional answers that come back into an appropriate overall answer. As a result, 7000 servers function like a massive and coherent computing entity; and the millions of user Newsfeeds and Timelines happily interweave text and image data across Facebook’s social web.
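The classic illustration of the MapReduce idea is a word count. The sketch below runs the map, shuffle, and reduce phases serially in plain Python; Hadoop's value is running the same phases across thousands of machines against a shared file system, which this toy example obviously does not do:

    from collections import defaultdict
    from itertools import chain

    documents = ["the quick brown fox", "the lazy dog", "the fox"]

    # Map phase: each "mapper" turns its chunk of input into (key, value) pairs.
    def map_phase(doc):
        return [(word, 1) for word in doc.split()]

    # Shuffle: group values by key, as the framework would between phases.
    grouped = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(doc) for doc in documents):
        grouped[key].append(value)

    # Reduce phase: each "reducer" folds the values for one key into an answer.
    counts = {word: sum(values) for word, values in grouped.items()}
    print(counts)   # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, ...}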

 

Facebook's great success in terms of operations engineering is to scale the findability and changeability of big data so that users experience real-time social interaction. While Facebook applications pay more attention to patterns in relevant data sets, AWS applications pay attention to everything in the data set. It's a difference between finding a needle by looking at every piece of hay in the stack and finding the needle by looking at only the right hay.

 

*AWS could of course be leveraged for purposes described here; and in fact support for the Simple Storage Service (S3) has already been developed within the ongoing Hadoop open source project. The advantage that Facebook has is in already having figured out how to scale up the largest Hadoop-based cloud-computing implementations.

 

More about Monitoring

In the next article of this series I will revisit the topics of image data use and data encryption in order to connect some dots related to the data-processing acumen of Hadoop. Here I want to emphasize a previous point: any growing storage solution, and especially one with real-time processing requirements, should spread risk at procurement time as a matter of business case. It’s better to do business with multiple vendors of more or less equally reliable products than to depend on a single vendor that might pose future pricing risks and, in any case, probably will not survive long-term in its present form. All the more important, then, is a scalable storage monitoring system that accommodates both the addition and removal of different products made by different vendors.

LokiR

Hello, WiGig

Posted by LokiR Feb 7, 2013

Have you heard of 802.11ad, or more commonly, WiGig? It is a recently approved IEEE standard that provides wireless transfer speeds of up to 7 Gbps over short distances. If you're looking to update your infrastructure, you might want to consider WiGig-enabled devices. WiGig is designed to replace the spaghetti of cables between devices and their peripherals with short-range wireless links.

 

The 802.11ad standard has been under development by the Wireless Gigabit Alliance since 2009 and has existed as a specification since 2010. Panasonic and Wilocity are already gearing up to deliver WiGig connectivity this year.

 

How does WiGig work?

 

WiGig operates in the 60 GHz band. This frequency offers far more channel bandwidth, but the signal is easily attenuated and has a range of only about 40 feet. Because WiGig operates on a separate band from other standards, it frees the airwaves on those bands from device-to-device traffic, such as the traffic between wireless mice or keyboards and a computer. Multiply this by, say, an office, and suddenly you have significantly more bandwidth available for data connections.

 

If a WiGig device cannot communicate with another WiGig device, it can seamlessly (theoretically) switch to legacy technologies in the 2.4 GHz and 5 GHz bands, so you don't have to worry if your entire infrastructure is not up to snuff.

 

If you do happen to deploy WiGig, you can use SolarWinds NPM to see how much wireless congestion WiGig saves you.

As much as I enjoy waxing existential over what it means to be truly free, this discussion is limited to what it means for a LUN to be truly free in Storage Manager. Storage Manager can monitor storage environments containing devices from different vendors, and these devices can define free LUNs differently. To understand what it means for a LUN to be truly free, we must understand the definition of a free LUN and the definition of a Replication Target LUN.

 

Definition of a Free LUN

Generally, free LUNs have been created but not yet assigned to a host. These LUNs are potential candidates for reclaiming space on a storage array. Because free LUNs are not assigned to a host, they do not have an HBA mapping.

 

Definition of a Replication Target LUN

Replication Target LUNs are used to replicate a source LUN, so the information on a Replication Target LUN is redundant. These LUNs are associated with their source LUN and are not presented to a host. Unlike the space on a free LUN, the space on a Replication Target LUN is in use and cannot be reclaimed by the storage array. Replication Target LUNs do not have HBA mappings because they are not associated with a host.

 

Example: LSI and HP EVA

On LSI and HP EVA arrays, a free LUN is any LUN without an HBA mapping. So for these arrays, Storage Manager lists Replication Target LUNs as "Free LUNs".

 

Navigate to the Free LUNs report from the left-hand navigation pane > SAN Groups > YourArrayName > Storage tab. An example is illustrated below.

 

But as we saw in the definition of a Replication Target LUN, these LUNs are not truly free. However, if you employ a meaningful, adequately descriptive naming convention, the replication LUNs can be identified within the Storage Manager reports. For example, use names such as <LUNName>_replication for your Replication Target LUNs.
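As a rough illustration of why the naming convention matters, here is a small, hypothetical Python sketch that separates replication targets from genuinely free LUNs in a list like one a Free LUNs report might export. The LUN names and the _replication suffix are assumptions for the example, not output from Storage Manager.

REPLICATION_SUFFIX = "_replication"  # assumed naming convention from the example above

# Hypothetical list of LUNs reported as "free" (i.e., no HBA mapping).
reported_free_luns = ["finance_db_01", "finance_db_01_replication", "scratch_02"]

replication_targets = [lun for lun in reported_free_luns if lun.endswith(REPLICATION_SUFFIX)]
truly_free = [lun for lun in reported_free_luns if not lun.endswith(REPLICATION_SUFFIX)]

print("Replication targets (space cannot be reclaimed):", replication_targets)
print("Truly free LUNs (reclaim candidates):", truly_free)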

Last week I met blogger Bob Plankers (@plankers), author of lonesysadmin.net, and we had a great conversation about how the sysadmin role has changed over the last 5 years and what sysadmins and help desk professionals can learn from one another.

 

JK:  How did you get into blogging?

BP:  I got into blogging in 2005.  At that time there were not a lot of bloggers in the sysadmin space.  I found that in searching the Internet for answers to questions, once in a while I knew something of interest that was not available on the Internet. My blog was the way to put it on the Internet. I’ve been in the field for around 15 years at all kinds of companies, from consulting to the private sector to working in the help desk to working as a sysadmin at a large university in the Midwest.

 

JK: Since you have worked on both sides of problem determination – the help desk and as a sysadmin – what advice do you have for these teams to work better together?

BP:  A lot of sysadmins would benefit from a rotation in the help desk.  Seeing what help desk folks have to deal with, the problems they face from users, and actually talking to users who have to use the things that you are building.  A lot of times IT departments don’t follow up with the users or the help desk to find out what the pain points are.  Then you end up with people that are angry, because the application may be doing unintended things, but you never hear about them because the normal interaction between the service desk and the sysadmin is in a break-fix mode.  No one thinks to send things to a sysadmin like, hey, this app logs me out every 10 minutes and it’s a big hassle.  This is an annoying problem that a sysadmin might think is just a security feature, but it is really impacting the user experience of the application and could be fixed.

 

I am pretty sensitive to that – I understand being caught as a help desk person in not quite knowing what to tell an end user who is complaining, and not getting any love from the sysadmin team because they don’t think it’s a problem since it’s not a technical issue.

 

Sysadmins can address technical as well as non-technical issues by listening to some of those complaints.  Addressing an issue might be as easy as explaining to the help desk why something is the way it is.  This helps a lot because they can explain it to the user, and at least the user would understand why the application is the way it is.

 

Beyond that, you can give the help desk access to the server monitoring tools.  It’s like sysadmins are the high priests of the IT organization and they want to hold all the information.  If you’re all on the same team, everyone should have the same information.  Give the help desk access to the information, and train the help desk team to not make any major decisions unless sysadmins are consulted first.  If the help desk staff can see what you (sysadmin) are seeing, it makes a big difference.  If they have the information of an outage that is occurring, when the end user calls the help desk, the help desk admin can speak intelligently that there is a problem and it is being worked on.

 

For example, I report that cable is not working in my area, and the support guy tells me that “nothing is reported in your area.”  So I go on Twitter and complain, and a guy deep inside the cable company sees it and fixes it. That’s neat, but probably not the way it should work. If the help desk appears smarter to the end user the end user will actually call the help desk when something is wrong, rather than throw up their hands when a service is slow or not responding.

 

This is especially important as virtual infrastructures are more pervasive – being able to work through technical and non-technical issues.

 

JK:  What’s an example of a non-technical VM issue?

BP: VM seems slow – that is what I normally get.  I follow up with – how can you tell it is slow?  Well, I logged in and it seems kinda pokey.  Well, can you run applications or services?  Yes, but it just seems slow.  How should your application be performing, are you meeting your SLAs?  Well, yes, but the VM seems pokey.

 

With virtualization, it’s a different game now; your VM may be slower than the physical box you used to own, but your application still works fine.  This is not a technical issue but an issue related to educating users why a VM may be slower than what they were used to, and why the business thinks it’s fine that way.

 

JK:  How has the sysadmin job changed over the last 5 years?

BP:  Over the last 5 years, the idea of working in silos has gone away, and with it some of the problems.  Historically the network guy would find out about things last – to many sysadmins he’s like a plumber – the network should work like your pipes, and we don’t need to talk to him on a regular basis, just when the plumbing plugs up. That was a bad idea then, and a real bad idea now.

 

The classic idea that everyone is separate is dying.  Now instead of a storage guy, a network guy, or a server guy, I have to be a generalist and have the right tools in place to see across these environments.  Virtualization also muddies the waters across these classic domains.  The rules aren’t so cut and dried. Best practices say stuff like you shouldn’t have more than 20 VMs on a data store, but what they really mean is that you can’t have the wrong 20 VMs on a data store.  Figuring this out is not so easy, and you need the right tools to do this that will save you time.  There are a whole bunch of tools out there, but by the time you are done implementing them you have spent more time than you will ever save by using the tool.  Having a good tool that will show you whether the VM is slow, the storage issues, and the network information is very valuable.

 

One thing I learned working in the help desk is linear troubleshooting – changing one variable or chasing one metric at a time.  A lot of sysadmins don’t know anything about storage or networking.  They never had to worry about how a network worked before virtualization.  With virtualization, now they have to configure a network, never thought about a VLAN, what is a LUN, a datastore?  How does fiber channel SAN or iSCSI work?  Take complicated storage and stack that on top of a complex network and your head really hurts.  It’s like working one of those math exams where the answers for part B and C depend on part A. You get part A wrong and you’ve screwed everything up. You’ve got to have your network solid for your storage to work, and so on.  Overall, I’d say that if you’re a sysadmin today, you’re better off being a mile wide and an inch deep in all areas.

 

JK:  What technology trends are you planning to blog about in 2013?

BP:  Right now I am working on a column for TechTarget around the topic that IT does not practice what it preaches when it comes to consolidation.  What I mean is that with server consolidation, cloud, virtual desktops – IT is pushing stuff back into the datacenter.  However, within the datacenter we are now seeing distributed building blocks like direct attached storage which is clustered, rather than these big storage arrays.  Within the datacenter we are becoming decentralized.  IT is doing it to save gobs and gobs of money.  It’s an interesting trend.

 

Another trend I find interesting is now that costs have been driven down for the infrastructure through virtualization and cloud, folks can spend more time on intangibles.  How much time do I spend performing a particular task?  Should I spend the time automating that task or is my time best spent elsewhere?

 

 

 

Other IT blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe, blog.scottlowe.org

Ivan Pepelnjak, ipSpace

Matt Simmons, Standalone SysAdmin

David Marshall, VMBlog

Jennifer

Your Cisco devices don’t automatically log it when someone gets into your network and changes your router and switch configurations. But with a few simple commands, you can configure your Cisco routers and switches to monitor and log configuration changes as they occur. You can set this up on the routers and switches themselves, as well as on your network appliances, such as a LEM appliance. Enabling configuration change monitoring and logging on your Cisco routers and switches lets you know when unauthorized configuration changes occur on your network.

 

By configuring your Cisco devices to monitor and log changes, you’re telling the devices to log every command that changes the router’s or switch’s configuration. (Show commands, for example, are not logged, because they don’t change the configuration.)

 

Perform the following steps to enable configuration change monitoring on Cisco devices (a scripted sketch for pushing the same commands to multiple devices follows the list):

 

  1. Access the Cisco device via ssh.
  2. Configure the Cisco device for syslogging to your appliance. See the SolarWinds knowledgebase article Configuring Cisco IOS Routers and Switches to Syslog to Your LEM Appliance for instructions on setting up the device to log to your LEM appliance.
  3. Configure the Cisco device to monitor configuration changes using the following commands:
    • enable
      Enters Privileged EXEC mode on the router. Some routers put you in Privileged EXEC mode by default. You can tell you are there if there is a # after the router name. For example: routername# instead of routername>


    • configure terminal
      Enters Global Configuration Mode. You must enter this mode to make any changes to a router or a switch.


    • archive
      Enters the archive configuration sub-mode.


    • log config
      Goes into the logging configuration sub-mode. This is where you specify the logging options for the running configuration.


    • logging enable
      Enables logging for the running configuration.


    • logging size
      Specifies how many configuration log entries to keep on the local system. For example, the command logging size 200 keeps 200 entries on the Cisco device itself, in addition to sending them wherever you tell it to.


    • hidekeys
      Enables more secure logging by making sure passwords are not recorded in clear text.

 

    • notify syslog
      Sends configuration change notifications to syslog.

 

    • end
      Returns you to Privileged EXEC mode.
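If you have more than a handful of devices, you probably won’t want to paste these commands in by hand. Below is a minimal sketch, assuming the third-party Netmiko Python library and placeholder addresses and credentials, of pushing the same archive logging configuration to a list of routers. It is only one way to script the steps above, not a SolarWinds tool.

from netmiko import ConnectHandler

# Placeholder inventory -- replace with your own devices and credentials.
devices = [
    {"device_type": "cisco_ios", "host": "10.0.0.1",
     "username": "admin", "password": "changeme", "secret": "changeme"},
]

# The archive logging commands from the steps above.
config_commands = [
    "archive",
    "log config",
    "logging enable",
    "logging size 200",
    "hidekeys",
    "notify syslog",
]

for device in devices:
    conn = ConnectHandler(**device)
    conn.enable()  # enter Privileged EXEC mode
    print(conn.send_config_set(config_commands))  # handles configure terminal / end for you
    conn.disconnect()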

Marketing. It's a maze we all have to navigate at some point during our daily lives. We are constantly bombarded with eye-catching buzzwords meant to get us excited about mystical new technology. Even "buzzword" is a buzzword. Technology is moving fast, but strip out the fancy marketing terms and you'll realize that beneath many buzzwords lies technology already familiar to us.

 

Separating Fact from Hype

Below is a list of "creative" marketing terms with their actual definitions. See if you agree with me:

  • Retina Display - Just a clear picture.
  • Technicolor - More saturated colors.
  • HD Radio - HD literally has no meaning in this context. The radio signal may be broadcast digitally, but HD does not mean "High Definition," as the marketing people would like you to believe.
  • The Cloud - For all intents and purposes, it's just the web, the internet, and so on. No different from what you're used to. Just a "loftier" name to something that has become stale.
  • Big Data - A large amount of data. Woo-hoo.
  • Solid State - Simply put, devices that are built from solid materials are considered solid-state. Um, isn't that most things? Look, I'm wearing a solid state hat! So what.
  • Flash Memory - Flash is easier to say than, "I brought the report on my EEPROM chip with a thin oxide layer separating a floating gate and control gate utilizing Fowler-Nordheim electron tunneling." It's just memory with a, quite literally, flashy name.
  • Plasma TV - Refers less often to blood products than to a kind of television screen technology that uses a matrix of gas plasma cells. Cool word though.
  • Surround Sound - Multiple speakers that surround the listener. Nothing surprising here.
  • Data Migration - Moving Information from here to there. Whoop-dee-do.
  • Ultrabook - Laptop
  • NFC smartphone - NFC is the new high-tech acronym for Near Field Communication which uses radio waves. In other words, walkie-talkie.
  • OLED TV - Organic Light-Emitting Diodes. Supposedly a better picture is generated, but I see no evidence of that.
  • Out-of-the-box - Does what it's supposed to do when purchased.
  • Virtual Classroom - Learning on a computer.
  • Prosumer - This is a marketing term for high-end products bought by professional consumers, and means very close to nothing. What exactly is a "professional consumer"?
  • Web 2.0 - And I have an amplifier that goes to 11. Just stop it.

One of the great benefits of using CatTools is its ability to roll out changes to multiple devices at once. A typical example of this would be to change the passwords of all of your network devices on a regular basis.


There are several ways in which to make multiple device changes like this:

To change the Enable, Enable Secret, VTY, or Console password, you can create an Activity of type Device.Password Change and run it against the devices you wish to change. The benefit of using this type of activity is that it updates the CatTools database with the new password as it completes each device.


To change the Username Password, you can create an Activity of type Device.CLI.Modify Config.
Running this type of Activity against a device automatically puts you into Config Mode, where free-text commands can be issued to make the necessary changes. When using this method you need to use CatTools Meta Commands.


These are commands that instruct CatTools to perform an internal action. These commands will update the database with the new passwords that you are assigning to the devices.

For example, in the commands text box on the Options tab of a Device.CLI.Modify Config activity you might type the following:

username joe password fred
%ctDB:Device:AAAPassword:fred


This would update the username and password within the CatTools Device table, which holds the properties for each device.


Above is just one of the many things you can automate with CatTools. CatTools is an application that provides automated device configuration management for routers, switches, and firewalls. Currently supported devices include Cisco, 3Com, Dell, Enterasys, Extreme, Foundry, HP, Juniper, and Nortel devices, among others.

To learn more about how you can utilize Meta Data, Meta Variables, and Meta Commands visit the Kiwi CatTools website.

In this first article of the series, I discuss face recognition technology in advertising as one current step on the path toward the era of ubiquitous computing.

I will then revisit a discussion of data security and encryption.

 

Faces of Big Data Tomorrow

"John Anderton, you could use a Guinness right about now!" a friendly voice hails the hero of Steven Spielberg's Minority Report. Many viewers laugh at the ad's cheerful tone in addressing a man who warily walks through a luxury shopping area in fear for his life.

 

Other ads in the same space identify Anderton as a Lexus owner who is on a “road less traveled,” an “American Express Member since 2037,” and ask him if he is stressed out and might need to “get away" and "forget [his] troubles”.

 

With cross-cuts the movie depicts a face recognition system whose sensors, cameras, and software apparently recognize John Anderton's face, identify his expression, track his gaze to specific ads, and deliver personalized messages.

 

Ironically--or perhaps of course?--the system’s real-time access to personal data through what one presumes is a ubiquitous computing grid inspires in Anderton a creeping paranoia instead of the soothing companionship the ads seem to offer.

 

Faces of Big Data Today

Contemporary site-specific advertising with face recognition senses the gender and age of a viewer, serves an ad, provides touch interaction, and records what the viewer does on the touch pad and how long he or she looks at the ad. This kind of system creates data that is essentially anonymous.

 

One can expect this implementation of face recognition to become common in airports, commercial spaces, casinos, really anywhere ads go except perhaps roadside billboards. And it’s only a matter of time before webcam sensors become advanced enough to obtain face-related information on the viewer’s perusal of elements in web pages. Linked with account information, this kind of face recognition would be quite personal though it would also be elective; adjust settings on the webcam or disable it, as you like.

 

Facebook is the site where you would expect commercialized face recognition innovations eventually to show up. And indeed they have: Facebook Camera, for example, lets you manage the photos on your iPhone, helping you tag, group, and push them into your Timeline. Facebook's Instagram, like Twitter, lets you send mobile photos into specific areas of a shared social web space according to filters you specify. And through its acquisition of Face.com, Facebook takes a step further in using face recognition software to correlate tags with faces in photos not just on your own Timeline but on anyone's. Though users can disable the relevant feature—called 'photo tag suggest'—it's active by default.

 

With these mobile photo technologies and its Timeline application, and after training its users with the intentional interactivity of the ‘Like’ button, the company seems to be working on the big data piece of personal ad serving in a way that even Amazon cannot match.

 

Just considered in terms of data quantity, even before acquiring Instagram and Face.com and launching the Timeline and Facebook Camera features, Facebook was adding over 500 TB of compressed data daily. Though official numbers aren’t available, with the acquisitions and feature launches we can safely assume that the company’s daily storage number is now much higher. As the virtual space of unlimited expansion, Timeline holds all user content and prompts each uploading user to add various kinds of metadata for an event.

 

Any data the user enters into the Facebook Timeline has a shadow effect in the form of enabling applications running in the cloud to correlate the personal information with demographic categories (gender, income bracket, educational level, profession, etc.) that would make marketers squeal in their Guinness.

 

Monitoring Big Data Arrays

Facebook currently has thousands of servers handling the flow of their user created data. It would be site management suicide for them to use a single vendor or storage technology to store a data set that will easily scale into the zettabytes over the next 5-10 years. Pragmatically, even a company much smaller than Facebook would not use a single source for its storage array components; multiplying the number of vendors as part of a procurement strategy means having some insurance against vendor failures. Simultaneously, this pragmatic outlook also means finding a storage monitoring solution that unifies status views across different vendor systems.

 

This is the third post in a five-part series uncovering the mysteries of logs. In the first two posts (What Are Logs? and Logs: So Many Different Types), we discussed the various uses for logs and the types and formats they come in. In this post, we'll complicate things even further by looking at how each of the different types of logs supports its own unique methods for getting at its valuable data.

 

How to Collect Logs

Now that we're familiar with the four general types of logs, let's take a peek at how to get at them. In all but one case, each type of log supports both a "pull" and a "push" option for log retrieval. Think of the "pull" option as you going somewhere to get the logs, and the "push" option as the system, device, or application sending the logs to you.

 

Windows Event Logs

  • Pull - The traditional and most common method for retrieving Windows Event Logs is through the Event Viewer snap-in for the Microsoft Management Console (MMC) on the local system. On the surface, it seems like this would require you to remote into each system for which you want to view logs, but you can also connect the Event Viewer snap-in on your local machine to remote systems. I discussed this option in a bit more detail in another post: Get all of your Windows management tools in a single pane of glass.
  • Push - Starting with Windows Server 2008, Windows also provides a method by which you can make a system "publish" its logs, allowing you to "subscribe" to them. If you're managing older systems, the "pull" method is really your only option without bringing a third-party log aggregator into the mix. One example is the free Event Log Consolidator from SolarWinds.

Text Logs

  • Pull - The "pull" method for text logs is pretty rudimentary: Open the file. Sometimes this isn't as simple as it sounds, though. For applications, this could be as easy as opening a file. For network devices, it probably involves logging into a command-line interface and running a series of commands. In either case, it's important to revisit a point made in Part 1: Just because a system uses the "text" logging type, that doesn't mean you can read it. In many cases, it takes years of practice to be fluent in the language of your logs.
  • Push - The most common "push" method for text logs is syslog. This method requires the logging device to translate (or create) its logs into the syslog format and then transmit the logs via the syslog protocol to a server the administrator specifies. From a log management perspective, if your devices and systems provide a syslog option for their logs, sending your logs to a dedicated syslog server is one of the most effective ways you can streamline your process. (A minimal scripted sketch of the push idea follows this list.)
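As a small illustration of the push idea, Python's standard library can hand a message to a syslog server in a few lines. This is only a sketch with a placeholder server address; on real gear you would configure syslog on the device itself.

import logging
import logging.handlers

# Point the handler at your syslog server (placeholder address, standard UDP port 514).
handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))

logger = logging.getLogger("demo-app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# The handler formats this record and "pushes" it to the central syslog server.
logger.info("Application started; pushing this event to syslog")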

SNMP Logs

  • Pull - When SNMP is enabled on a device, it stores its data locally, waiting for something to poll it. This retrieval method is a little less common from a logging perspective, but it's the best-known implementation of SNMP as a protocol. (A minimal polling sketch follows this list.)
  • Push - This method involves one of the most confusing terms in network management history (IMNSHO). An SNMP trap is a message that an SNMP-enabled device sends out when a certain threshold is met. For this to work, you have to define the thresholds and destination on the device. Historically, network devices and some antivirus products are the main users of the SNMP trap method for logging and state/status notifications.
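For the pull side, here is a minimal polling sketch using the third-party pysnmp library (its older, synchronous hlapi style); the target address, community string, and polled OID are placeholders, and newer pysnmp releases may expose a slightly different API.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll sysDescr from a device at a placeholder address using the "public" community string.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),  # SNMP v2c
    UdpTransportTarget(("192.0.2.1", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(name, "=", value)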

 

Database Logs

This is the one category that really only supports one retrieval method: pull. When a system stores its logs in a database, they're there until you ask for them. In other words, you have to query the database. You might do this through a pretty web interface or a plain old SQL management tool, but in either case a database query is involved.
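To illustrate what a pull looks like in practice, here is a tiny Python sketch querying a hypothetical SQLite log table; the file name, table, and columns are invented for the example, and a real product would use its own schema and database engine.

import sqlite3

# Hypothetical log database and schema -- a real system defines its own.
conn = sqlite3.connect("app_logs.db")
cursor = conn.execute(
    "SELECT logged_at, severity, message FROM event_log "
    "WHERE severity = ? AND logged_at >= datetime('now', '-1 day') "
    "ORDER BY logged_at DESC",
    ("ERROR",),
)
for logged_at, severity, message in cursor.fetchall():
    print(logged_at, severity, message)
conn.close()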

 

All That Trouble Just for Logs?

After learning about the various types, formats, and retrieval methods for log data, you might be wondering why you'd ever want to go through so much trouble to get at your logs. Well, remember from Part 1 that logs can be very useful. Luckily for the troubleshooting use-case, Windows logs are relatively easy to get at and most network devices support syslog. But if you want to take a more comprehensive approach - whether for compliance reporting or proactive analysis - you'll probably want to call in the help of some experts. While your organization might not be able to afford (or justify) a dedicated security specialist to collect and analyze your logs, it might be worth taking a look at a dedicated log management system.

 

In the next installment, we'll look at yet another variable in this intriguing equation: Reading the Logs.

DanaeA

Overflowing Logs?

Posted by DanaeA Feb 1, 2013

Are your Windows Security Logs overflowing? Have you noticed more noise in your logs than normal? Changing your Audit Security Settings can cause a flood of data that you may not realize is now invading your logs.

 

Having to muddle through extraneous information is cumbersome and time consuming, especially since auditors require that logs be reviewed on a daily basis for any suspicious events or alerts. You want to streamline the data to be analyzed, and changing your Audit Security Settings may help filter out the noise. For more information on Windows Auditing and the extra noise it creates, check out our Not All Windows Auditing is Created Equal blog article.

 

For information regarding log management, see SolarWinds Log & Event Manager.

When he’s not blogging, Michael McNamara is a technical consultant for a large health care provider in the Philadelphia area and a husband and father to three girls. He describes himself as a “jack of all trades” in his current job, with primary responsibility for data and voice infrastructure while also supporting Windows Active Directory, Microsoft Exchange, Microsoft Windows, VMware, FC SAN, and more.

 

URL: http://blog.michaelfmcnamara.com/

Twitter: @mfMcNamara

 

MT: How did you start blogging?


MM: I started blogging in 2007 in an effort to build my digital persona when my job was threatened by a potential sale of the department I worked in. It was my initial goal to be the #1 ranking for “Michael McNamara". The idea being that when I interviewed and people Google’d “Michael McNamara” they’d quickly see that I was legit and the real deal. The job scare blew over but I had found a new hobby in blogging.

 

MT: As your blog has grown, who are your readers and what are they looking for from your blog?


MM: At the start I kept my posts geared towards Avaya (formerly Nortel) equipment, specifically Ethernet switching and IP telephony, since there was a real lack of information about Avaya equipment available on the net. There was little useful documentation, no community, and only pre-sales spec sheets that weren’t much use to the engineers and system administrators actually working with the equipment. As I solved my pain points and shared them online I found that I wasn’t alone. There were people all over the world struggling with the same problems, and the blog kind of caught fire.

 

While I actually work with and support numerous technologies I originally limited my blogging to topics which were absent from the net. I felt there was no benefit in me writing about topics that were already covered in-depth elsewhere on the net. A few years ago I realized that people valued my feedback and experiences so I started expanding my topics beyond just Avaya networking.

 

MT: What new trends are you seeing in your interaction with your readers?


MM: About two years ago I started a discussion forum to provide a place for guests, even strangers, to ask their own questions. The comment threads on my blog posts were becoming inundated with off-topic comments and questions, and I didn’t want to turn people away just because their question wasn’t related to the topic of the blog post, so I decided to start a discussion forum. On the discussion forum users could ask whatever question they wanted, and myself and a few other subject matter experts would try to answer them as best we could. With more and more projects being “value engineered” there are a lot of system administrators and engineers trying to deploy and configure the equipment themselves, hence there are a lot of questions and advice being sought.

 

MT: Any new technology trends you are hearing from your community?


MM: There are a number of hot buzzwords in the industry today, including OpenFlow, SDN, BYOD, etc. One of the areas I’m working with in healthcare is access to clinical information. While some of that potentially involves BYOD, there are also technologies available from Citrix and VMware that allow physicians access to traditionally fat applications from smartphones and tablets. With more and more emphasis on ‘value engineering’ and change management I’m also looking at automation through scripting.


MT: Do you have a favorite SolarWinds product?

 

MM: I would have to say the Engineer's Toolset is definitely my personal favorite, although there are so many very neat and helpful tools offered by SolarWinds. There are certainly other tools and solutions, but SolarWinds offers a very clean GUI that can quickly get you the answers you need when troubleshooting a problem or issue.

 

MT: What is your day job and how do you balance that with your blog activities?


MM: I work in Information Technology as a technical consultant for a large healthcare company. My day job gets my full attention from 8 to 5, so all my blogging activities, including research, testing, and actually writing articles, have to be done after hours or on weekends. Based on the exposure from my blog, I occasionally receive consulting solicitations, which place even greater demands on my free time. I view blogging as a rewarding hobby which connects me with thousands of IT professionals around the world.

 

MT: You have any interesting hobbies you’d like to share?


MM: I enjoy playing ice hockey in the winter and watching the grass grow in the summer. I coach my twin girls' soccer team in the spring and fall seasons – a rewarding job, watching all the girls mature through the years in both their skill and enjoyment of the game.

 

 

Other IT blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe, blog.scottlowe.org

Ivan Pepelnjak, ipSpace

Matt Simmons, Standalone SysAdmin

David Marshall, VMBlog

Self-tracking is a branch of Big Data reaching far beyond the data engineer. The article, Our Data, Ourselves, states that 27% of internet users use some type of online system to monitor health indicators such as weight, exercise, and diet, as well as symptoms of health and disease. Self-tracking can help:

  • Track what foods and activities trigger symptoms such as headaches or stomach aches.
  • Learn what our most productive time of day is.
  • Reduce harmful habits while increasing healthful ones.

 

I'm sure Socrates wasn't thinking big data when he proclaimed, "Know thyself," but that's exactly what users on the Quantified Self website are thinking. This online community is a collaboration center for self-trackers to share their tools and experiences. It sponsors conferences, show-and-tell videos, and a comprehensive list of self-tracking tools. It's here that I took my first steps into the wilderness of self-tracking with the iPhone app Daytum. I'm tracking my daily activity (or lack thereof) and my calorie intake. Quantified Self encourages a broad spectrum of self-inquiry, ranging from how coffee relates to productivity, to how foods affect moods or bowel movements, to what physical activity leads to the best sleep.

 

The most popular self-tracking tools are found in the fitness and medical fields. Websites and smartphone apps for tracking health and fitness are exploding. Over 9 million runners use the RunKeeper app. A more versatile tool, Fitbit, wirelessly logs metrics such as sleep patterns, activity levels, and the number of steps taken. Of the fitness devices I perused, I chose the stylish UP Band by Jawbone to measure the steps I take, the calories I burn, the quantity and quality of sleep I get, and my calorie intake. The device connects to my iPhone, and I can compare my results with other users and even compete with my friends. It's my 24/7 fitness coach.

 

Smartphones are becoming windows into our behavior and can help us see what symptoms and lifestyle choices are key to optimizing health and well-being. Passive sensing of our daily lives is a powerful tool for medical and psychological diagnostics. Smartphones contain microphones, GPS locators, and accelerometers. With data collected over years, we can glean insights into the state of our cognitive functions and diagnose declines in those functions before symptoms are obvious. Smartphones can use passive sensing to help us know if we are sick before we are aware of any symptoms. This can come in handy on many levels, including disease control. If we know we are coming down with influenza, we can stay home and reduce the public's exposure to our contagion.

 

We can go deep into ourselves with Proteus Digital Health's ingestible event marker (IEM). It's about the size of a grain of sand and consists of an integrated circuit and a battery encased in a vitamin-like coating that dissolves in the stomach acid and activates the sensor. It sends out a high-frequency electric current that transmits through the conductive tissue of the body. The signal can be picked up by a monitor patch that is worn on the skin or implanted just under the skin. Data can be read on a range of platforms from a smartphone to a doctor's computer. The device can measure, track, and record a wide range of information that allows a level of monitoring found only in intensive care units.


 

The safety and security of our personal data are necessary for widespread use of self-tracking. People need to know their data is secure and that they get to choose who sees it and how it is used. We need our own personal "data vaults" where we define the rules and conditions for sharing our data. Safely and easily tracking our health-related information can help us live healthier lives.


And just as self-monitoring can do wonders for our personal health and wellness, network monitoring can help boost our network performance and reliability.


