
In this exciting age of virtualization, cloud, SDN, and other hardware abstractions, we often take for granted one of the most important technological advances of the 19th century: electricity. While some of us can just spin up another EC2 instance from the ether, most of us still worry about physical capacity, even if it’s going into our own internal virtual infrastructure. So at the end of the day, in order to add capacity, some poor soul somewhere is going to have to heft 60 pounds of metal into a rack and plug it in.

In its most basic form, this moment of truth is a non-event. The server comes with a couple cables, the PDUs are already mounted in the rack by the last person, just plug in each end and move on. Easy enough. But what if you had to order more of those cables? What the heck is the difference between a C14 and a C19? What if you had to order a new PDU and UPS to build out a new closet or rack? What if you’re standing in the server room right now talking to that kindly gentleman wearing overalls trying to figure out just what sort of socket needs to go on that wall? These are the sort of low-tech questions we’re going to answer here, because the day will come when you’ll be asked to “just” plug something in.

 

Let’s start with the common power cable. Excepting proprietary oddities from a few manufacturers and regional regulations, you will mostly encounter the same four types of cable:

 

IEC C13-C14:

IEC_C13-C14_(2).png

IEC C14-C20:

 

IEC_C14-C20.png

 

IEC C19-C20:

IEC_C19-C20.png

 

NEMA 5-15P-C13 (mostly North America):

 

NEMA_5-15P-C13.png

So now that we can identify these cables, where do we plug them in? The most common AC output voltages from UPSs are 120v and 208v. For the remainder of this post I will refer to 208v, as it is most prevalent in North America, but it could be anywhere from 208v-240v internationally. Most modern datacenter gear will accommodate voltages from 120v-240v AC; however, it is always advisable to check the power supply of the gear and verify there isn’t an old-school voltage selection switch before plugging it in. Unless you like the sweet smell of fried PCB in the morning.

 

This also applies when you’re spec’ing out UPS and PDU gear for a new rack build-out. The first order of business is to make sure all of the gear going into the rack will support the output voltage of the UPS (and, for that matter, that the PDU does as well). 208v is more efficient than 120v, so if you have a choice in the matter, go with 208v. Secondarily, choose a PDU (or UPS) with a sufficient number of the desired type of sockets. The choice of output socket is primarily driven by your voltage selection and device amperage requirements, so if you’re going with 120v in North America, NEMA 5-15 is most common. Higher-draw (read: bigger) devices will likely call for C19 sockets, as they can handle ~60% more juice than C13. C13s also have a proclivity to not fit as snugly and securely as one might hope, so it may be a worthy investment to find PDUs / cables with locking latches (APC has some decent ones) to prevent accidental downtime.
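To put rough numbers on that, here is a quick back-of-the-envelope comparison of per-socket capacity, assuming the common IEC 60320 nameplate ratings of 10A for C13/C14 and 16A for C19/C20; your PDU's own rating and local derating rules still apply, so treat this as an illustration only.

    # Back-of-the-envelope per-socket capacity, assuming the common IEC 60320
    # nameplate ratings (C13/C14: 10 A, C19/C20: 16 A). Check your PDU's own
    # rating and local derating rules before trusting these numbers.
    SOCKET_AMPS = {"C13": 10, "C19": 16}

    def socket_va(socket: str, volts: float) -> float:
        """Apparent power (VA) a single socket can carry at the given voltage."""
        return SOCKET_AMPS[socket] * volts

    for volts in (120, 208):
        c13 = socket_va("C13", volts)
        c19 = socket_va("C19", volts)
        print(f"{volts}V: C13 ~{c13:.0f} VA, C19 ~{c19:.0f} VA "
              f"({(c19 / c13 - 1) * 100:.0f}% more)")

At either voltage the C19 comes out roughly 60% ahead, which is where that "more juice" figure comes from.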

 

On the other end of that UPS (or PDU) you will have a whole new set of decision points to determine what you need. Apologies in advance for the focus on North American plug types here. High-res pics of other standard (IEC) types would be very welcome in the comments section below. The common input receptacles you will see (or ask your electrician for) in North America will be:

 

L5-20 (120v/20A)

L5-20.png

 

 

L6-30 (208v/30A)

 

L6-30.png

 

 

 

Less common, and usually seen with big 120V UPSs, will be an L5-20. Ever wonder what that horizontal slot was for in commercial building sockets? Now you know. If you’re going 208v, you may also be asked by your electrician if you need a “three-phase” circuit. The short answer is that three-phase circuits can handle a heavier load, and that you need a UPS that supports it. The long answer may be found here: http://en.wikipedia.org/wiki/Three-phase_electric_power
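As a ballpark illustration of "heavier load," here is the textbook comparison of a single-phase versus a three-phase circuit at the same voltage and breaker size. This ignores the derating rules your electrician will apply, so it is a sketch, not a sizing guide.

    import math

    # Rough capacity comparison of a single-phase vs. a three-phase circuit at
    # the same line-to-line voltage and breaker size. Illustrative only; real
    # circuits are subject to derating rules (e.g. continuous-load limits), so
    # size the real thing with your electrician.
    def single_phase_kva(volts: float, amps: float) -> float:
        return volts * amps / 1000.0

    def three_phase_kva(volts_line_to_line: float, amps: float) -> float:
        return math.sqrt(3) * volts_line_to_line * amps / 1000.0

    print(f"208V/30A single-phase: ~{single_phase_kva(208, 30):.1f} kVA")
    print(f"208V/30A three-phase:  ~{three_phase_kva(208, 30):.1f} kVA")

That works out to roughly 6.2 kVA versus 10.8 kVA for the same 30A breaker.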

Speaking of UPSs, how does one select the appropriate model? Firstly, let’s talk about one of the dark mysteries of the electrical world: kVA and how it differs from kW. We won’t go too far into the weeds on this, but kVA is “apparent” power and kW is “real” (actual) power. Represented mathematically:

 

kW = kVA * pf (power factor)

 

Power factor is essentially the ratio of real power to apparent power for the load. Nearly all modern high-efficiency server power supplies should have a power factor close to 1.0, so kVA tracks very closely to kW. This is what you will use to size the rated output load of the UPS. As far as runtime goes, pardon the obviousness, but the more batteries, the longer the UPS will run. This varies by vendor and model, but will usually be stated in the product specs under min/max/average load. Make sure the UPS has the appropriate input connector for the wall plug you chose, and the matching connectors for the PDUs, and you’re in business. The next step is to wait for a few hundred pounds of gear filled with lead-acid batteries to arrive on a pallet and hoist it into the rack.
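As a quick sanity check, here is a minimal sketch of that math. The load figure and the 0.8 headroom factor are made-up examples for illustration, not recommendations.

    # A minimal sketch of the kVA/kW relationship from the post. The load
    # figures and the 0.8 headroom factor below are made-up examples.
    def kw_from_kva(kva: float, power_factor: float) -> float:
        """kW = kVA * power factor ("real" power from "apparent" power)."""
        return kva * power_factor

    def min_ups_kva(total_load_watts: float, power_factor: float = 0.95,
                    headroom: float = 0.8) -> float:
        """Smallest UPS kVA rating that keeps the load under `headroom` of capacity."""
        load_kva = (total_load_watts / 1000.0) / power_factor
        return load_kva / headroom

    print(kw_from_kva(5.0, 0.95))          # a 5 kVA load at pf 0.95 is ~4.75 kW
    print(f"{min_ups_kva(3200):.1f} kVA")  # ~3.2 kW of gear -> roughly a 4.2 kVA UPS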

 

Until the day comes that Tesla’s dream of wireless power distribution becomes a reality, there will always be a need for someone to push a plug into a matching receptacle of appropriate voltage and sufficient amperage. If that someone is you, I hope you found this post useful.

In the comments below, we would love to hear how you currently manage your power / cooling / environmentals. Do you have them in NPM or are you using something else? How do you model your racks- Visio? Dare to dream of using something better?

We also have a quick 4-minute survey that gives an opportunity for even more feedback on rack diagramming and power / cooling management. We'd love to hear your thoughts: Rack Diagramming Survey

The time has come for yet another Log & Event Manager (LEM) Release Candidate! The RC is already available on the Customer Portal for all LEM customers under maintenance. As a Release Candidate you can deploy it in production and work with our awesome support team if you need any assistance. Here's what you'll see in the RC...

 

Automate Searching and Augment Reporting with Scheduled nDepth Searches

 

Reporting is useful when you want static content with graphs, charts, and pages of detail, but it's hard to slice and dice the data, and it can be tough to get your report criteria just right. Our search interface, lovingly called nDepth, lets you do more flexible searching, using components like User-Defined Groups and Directory Service Groups, and piggyback on existing filter criteria to get a jump-start. With this release, you'll be able to take any Saved Search in nDepth (in either the normalized data store or the original log message store) and generate an event from it, have the results sent to you in email, or both.

 

nDepthSavedSearch2-MenuSmall.png
nDepthSavedSearch3-Schedule.png

 

Let's say I've got a saved search (or am using a default saved search!) for Logon Failure activity for the last week. With reports, I can schedule, filter, and export to different formats, but I might also want to create my own charts or pass the data off to another team for investigation, which are harder to do with reports. nDepth has a new option in the gear menu on the left side, "Schedule," which opens a dialog that lets me schedule any saved search on whatever repeating interval I like. By specifying an "End Date," I can also decide how long I want the scheduled search to run, in case there's a short-term issue that doesn't need to run indefinitely. If I choose to email the results, up to 10MB (millions and millions of records) will be included as an attached zipped CSV file with all of the original data, similar to a manual export from the Console, except with MANY more results.
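If you do want to roll your own charts from that attachment, here is a generic (not LEM-specific) Python sketch for loading a zipped CSV export into memory; the file name is hypothetical.

    import csv
    import io
    import zipfile

    # A generic sketch for pulling a scheduled search's zipped CSV attachment
    # into your own tooling. The file name below is hypothetical.
    def load_zipped_csv(path: str) -> list[dict]:
        rows = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if not name.lower().endswith(".csv"):
                    continue
                with zf.open(name) as fh:
                    reader = csv.DictReader(io.TextIOWrapper(fh, encoding="utf-8"))
                    rows.extend(reader)
        return rows

    records = load_zipped_csv("logon-failures-last-week.zip")
    print(f"{len(records)} records")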

 

Support Flexible Workstation Environments by Recycling Agent Licenses Automatically


VDI and other flexible temporary workstation initiatives are becoming much more commonplace, but even temporary workstations need to be monitored the same as their semi-permanent counterparts. With LEM Workstation Edition, we've made licensing affordable for workstations, and with this release we've made it possible to automatically recycle licenses from nodes that haven't sent any data in a while.

 

You'll find the license recycling feature (off by default!) in Manage>Appliances>License, toward the bottom. With this feature you can do the following (a rough sketch of the logic appears after the list):

  • Specify the age of last event before the license is eligible to be recycled (e.g. must have been offline for more than an hour, in case someone is rebooting or temporarily shut down): default 1 hour
  • Specify the schedule frequency to recycle licenses (e.g. every day at 5am, check for old licenses to recycle): default every day at 4am, and
  • Specify the matching parameters for what systems to recycle so that unexpected systems don't get deleted (e.g. only nodes with hostnames or IP addresses that match your VDI network): default all nodes
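Here is that selection logic as a rough Python sketch, using hypothetical node records and a hypothetical "vdi-" hostname convention. This is purely illustrative and is not LEM's actual implementation.

    from datetime import datetime, timedelta

    # Illustrative recycling decision: a node is eligible if it has been quiet
    # longer than the idle threshold AND matches the configured scope.
    MIN_IDLE = timedelta(hours=1)   # "age of last event" before eligible (default 1 hour)
    VDI_PREFIX = "vdi-"             # matching parameter: only recycle VDI-style nodes

    def licenses_to_recycle(nodes: list[dict], now: datetime) -> list[str]:
        eligible = []
        for node in nodes:
            idle_long_enough = now - node["last_event"] > MIN_IDLE
            matches_scope = node["hostname"].startswith(VDI_PREFIX)
            if idle_long_enough and matches_scope:
                eligible.append(node["hostname"])
        return eligible

    nodes = [
        {"hostname": "vdi-042", "last_event": datetime(2014, 1, 6, 1, 0)},
        {"hostname": "fileserver01", "last_event": datetime(2014, 1, 1, 0, 0)},
    ]
    # Imagine this running on the schedule (e.g. every day at 4am):
    print(licenses_to_recycle(nodes, datetime(2014, 1, 6, 4, 0)))  # ['vdi-042']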

 

AgentLicenseRecycle-small.png
AgentLicenseRecycle2-UDG-small.png

...But Wait, There's More!

 

Import User-Defined Groups from CSV Files

A commonly requested feature, which we've implemented in this RC, is the ability to import CSV files to automatically populate groups, rather than having to edit data elements by hand. From Build>Groups, go to Gear>Import (top right), change the file type to "All File Types," and choose your CSV file. The format of the file is basically what you see in Build>Groups:

UDG, UDG Name, UDG Description

Element Name, Element Data, Element Description

Element 2 Name, Element 2 Data, Element 2 Description

...
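If you're generating that file programmatically, a small script along these lines will produce the layout shown above; the group and element values here are hypothetical examples.

    import csv

    # Writes a group file in the layout shown above: one header row naming the
    # group, followed by one row per element. Values below are hypothetical.
    def write_udg_csv(path: str, name: str, description: str,
                      elements: list[tuple[str, str, str]]) -> None:
        with open(path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["UDG", name, description])
            for element_name, element_data, element_description in elements:
                writer.writerow([element_name, element_data, element_description])

    write_udg_csv("vdi-hosts.csv", "VDI Hosts", "Short-lived VDI workstations", [
        ("vdi-001", "10.10.20.1", "Pooled desktop"),
        ("vdi-002", "10.10.20.2", "Pooled desktop"),
    ])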


Performance and Platform Improvements

We're investing time in improving things under the hood, too. With this release, we've done some heavy lifting in the correlation engine, updated our agent and appliance Java Runtime Environments, updated Tomcat, and made a lot of other somewhat invisible changes. For those of you who want to prevent an agent update from automatically being pushed out after upgrading, make sure to go to Manage>Nodes or Manage>Appliances and turn off Automatic Updates for specific nodes or globally.

 

We've also improved smaller areas: the performance of nDepth CSV export from the Console (be sure to check out scheduled searches if you still need to export more than 250,000 records), more info in our troubleshooting logs to help our support team help you faster, and a ton of other things.


New Connectors and Device Support

We'll provide a more complete list with the release notes, but the most notable addition is that we've included out of the box support for NetApp File Auditing. Most new connectors are released regularly with the connector download, but for NetApp auditing you'll need to upgrade your appliance and agent to the new release first.


Questions, Issues, Comments - Send 'em Our Way


Feel free to use the Log & Event Manager Release Candidate Thwack forum to report issues or share any questions and comments you have about this release. Our product management, development, and QA teams are keeping an eye out for any possible issues.


If you have a question about whether a case you've filed was resolved in this release, or whether a certain feature request was implemented, feel free to ping back on this post or in the RC forum and let me know - I'll be sure to look into it.


Happy Logging!

So here we go again! Time to kick off the next release of SolarWinds Server & Application Monitor with the first official beta, and grab a quick first glimpse into several of the cool new features we've got lined up for SAM 6.1.

 

Not to be confused with SAM 6.0.1, the Service Release containing many important bug fixes for SAM 6.0 and available for download now through the Customer Portal, SAM 6.1 is the next major "feature" installment in the series. If you'd like to participate in the SAM 6.1 beta you can do so by signing up here. Feedback is crucial during the beta phase of development because there's still plenty of time to make important tweaks and adjustments that can make all the difference in the final release. If you've never participated in a SolarWinds beta before, now is a great time to start. Not only do you get to play with all the great new features first, but it's also an excellent way to help shape the future of the product.

 

 

Windows Scheduled Tasks

 

At one time or another, every systems administrator has had to rely (albeit sometimes begrudgingly) on Windows' native Task Scheduler to automate some routine process. Be it automating backups, disk defragmentation, antivirus scans, etc., the Windows Task Scheduler has undoubtedly played an important role in ensuring your infrastructure is properly maintained. However, even as the Windows Task Scheduler has improved over the years, real-time visibility into the success or failure of those tasks across the enterprise has remained, for the most part, an enigma.

 

In SolarWinds' never-ending pursuit to provide greater levels of visibility into the critical componentry that makes up your IT infrastructure, we sought to close this visibility gap by introducing the Windows Scheduled Task Monitor as part of the SAM 6.1 beta release.

 

Elegant in its simplicity, SAM's Windows Scheduled Task Monitor for the first time provides you with at-a-glance access to the state and status of the scheduled tasks configured on your Windows hosts. In addition to simply seeing what tasks have been configured on the host, their current state, and their last run result, SAM 6.1 includes a new pre-configured, out-of-the-box alert that will notify you of any task execution failures that occur. You will also find new web-based reports that allow you to view all scheduled tasks configured across all servers in your environment, as well as a dedicated Task Failure Report you can view or have emailed to you on a regular basis.

Windows Scheduled Tasks.png

 

When monitored, you will find the Windows Scheduled Task resource pictured above on the Node Details view of the monitored server. This is because Windows Scheduled Tasks are not applications in the conventional sense. As such, they are treated somewhat specially in SAM and given a prominent resource of their own amongst other host-specific information on the Node Details view.
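For reference, the fields this resource surfaces (task name, status, last run result) are the same ones Windows exposes itself. The snippet below is just a way to eyeball that raw data on a single box with schtasks; it is Windows-only, assumes English column names, and is not how SAM collects the information.

    import csv
    import subprocess

    # Dump verbose task info as CSV and flag tasks whose last run result was
    # non-zero. Purely illustrative; assumes schtasks.exe is on the PATH and
    # emits English column headers.
    output = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout

    for row in csv.DictReader(output.splitlines()):
        last_result = row.get("Last Result", "")
        if last_result not in ("", "0", "Last Result"):  # skip repeated header rows
            print(f"{row.get('TaskName', '?')}: status={row.get('Status', '?')}, "
                  f"last result={last_result}")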

WIndows Scheduled Task Add Node.png

Several options are available to enable SAM's new Windows Scheduled Task monitor. When adding a new node, or listing resources on an existing WMI-managed node, you will be provided an option to select Windows Scheduled Tasks, the same as you would for volumes or interfaces.

 

If enabling this feature one node at a time isn't your speed, you also have the option of leveraging the Network Sonar Discovery Wizard. The Network Sonar Discovery Wizard allows you to quickly and easily enable the Windows Scheduled Task monitor en masse across all Windows hosts in your environment, or surgically enable this feature only on a select group of nodes.

 

Both one-time discovery and scheduled recurring discovery options are available to enable the Windows Scheduled Task monitor. If using the scheduled discovery option, you will have granular control over which hosts the Windows Scheduled Task Monitor is enabled on, as seen in the screenshots below. Hint: If an image is too small, click on it to zoom in and see the full-size version.

 

The new Windows Scheduled Task Monitor in SAM 6.1 supports monitoring tasks configured on Windows 2003, 2003R2, 2008, 2008R2, 2012, and 2012R2.

Windows Scheduled Tasks - Scheduled Discovery.png
WIndows Scheduled Task Network Sonar Discovery.png

JSON

 

Web service APIs exchanging formats such as JSON are the glue that binds modern applications together, usually across different servers, allowing for the exchange of information between them. As end users become reliant upon applications built on these web services, it becomes increasingly important to monitor those applications to ensure they're functioning as expected. The simplest and most obvious method for monitoring those applications is to query the back-end server directly, using the same web service API method that the front-end web application would use. From the server's response we can determine the web service's availability (up/down) and latency (response time), as well as validate the content returned as a result of that query.

JSON.png

From within the HTTP/HTTPS Component Monitor settings, you will find three new options (Host Request, Content Type, and Request Body) that allow for the monitoring of RESTful web service APIs, such as those returning JSON and XML. Three new methods (PUT, POST, and DELETE) are also provided, in addition to the existing GET method that had historically been the default and only method available for the HTTP/HTTPS User Experience Monitors prior to SAM 6.1.
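For a sense of what such a check involves, here is a stand-alone Python sketch that queries a JSON endpoint, times the response, and validates a field in the returned content. The URL and expected field are hypothetical, and this is an illustration of the general idea rather than SAM's implementation.

    import json
    import time
    import urllib.request

    # Call a JSON endpoint, measure latency, and validate the returned content.
    # The URL and expected key/value below are hypothetical examples.
    def check_json_endpoint(url: str, expected_key: str, expected_value) -> dict:
        request = urllib.request.Request(url, headers={"Accept": "application/json"})
        start = time.monotonic()
        with urllib.request.urlopen(request, timeout=10) as response:
            body = json.loads(response.read().decode("utf-8"))
            latency_ms = (time.monotonic() - start) * 1000
            return {
                "up": response.status == 200,
                "latency_ms": round(latency_ms, 1),
                "content_ok": body.get(expected_key) == expected_value,
            }

    print(check_json_endpoint("http://api.example.com/health", "status", "ok"))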

 

Sustained Thresholds

 

Last, but certainly not least, 6.1 includes additional improvements to how thresholds are handled in SAM. While tremendous strides were made in how thresholds are calculated in the SAM 6.0 release with the introduction of the Threshold Baseline Calculator, that feature served to provide meaningful context to already-collected data. In other words, it answered the proverbial question "What's normal for my environment?" and then suggested recommended warning and critical thresholds based on that information. However, as anyone who's been monitoring IT infrastructure for a while will tell you, just because a threshold was crossed once doesn't mean it's a significant issue that requires immediate attention. After all, who enjoys being woken from their slumber at 3am by a nuisance alarm telling them that % Processor Time on one of the servers spiked momentarily? If the alert requires no action on your part, then more than likely it wasn't worth waking up for. Alert notifications should be about providing actionable information that requires some level of user intervention to resolve.

While some metrics, such as the amount of free space remaining in your SQL database, might only get worse over time, thus requiring immediate attention when they dip below a reasonable limit, other metrics can vary wildly from one poll to the next. This is where sampling can play an important role in reducing, or even eliminating, the nuisance alerts that flood your inbox on a regular basis.

 

In SAM 6.1 you will find new options for defining sample criteria for both the warning and critical thresholds associated with each monitored metric of an application. By default, both warning and critical thresholds are evaluated after a single successful poll, which is the exact same behavior as all versions of SAM prior to 6.1. In addition to the single-poll evaluation, you will find options for requiring multiple consecutive polls, as well as a method for defining the number of samples within a configured sample size that must exceed the threshold before the condition is met and the status of that component monitor is changed.

 

Sustained Thresholds.png

 

Sustained conditions in SAM 6.1 can be defined independently for both warning and critical thresholds to provide maximum flexibility. Both "X Consecutive Polls" and "X out of Y Polls" use a sliding window approach to evaluating thresholds. After each poll, the conditions defined for the threshold are evaluated against the bounds of the sample size. Put simply, that means that after each poll a new sample is added to the evaluation, while the oldest sample is removed. Below are two examples. The first demonstrates the "X consecutive polls" method. The left column shows the numerical value collected from the poll (the sample), and the right column shows the status of the component as defined by the sustained condition. The "Sample Size" in this example is 3, meaning that three consecutive polls/samples must exceed the threshold of 80 before the status changes to "Warning".

 

Warning = Greater Than 80 for 3 Consecutive Polls

Polled Value    Status
65              UP/Green
77              UP/Green
88              UP/Green
85              UP/Green
89              Warning
83              Warning
46              UP/Green
81              UP/Green
22              UP/Green

Warning = Greater Than 80 for 3 out of 5 Polls

Polled Value    Status
65              UP/Green
82              UP/Green
34              UP/Green
95              UP/Green
88              Warning
90              Warning
35              Warning
25              Warning
15              UP/Green

 

The second example demonstrates the "X out of Y polls" method. While the "Sample Size" for evaluation in this example is 5 polls, any three of those 5 polled samples must exceed 80 before the status of this component changes to "Warning". Using the same sliding window approach as the first example, with each successive poll a new sample is collected, while the oldest sample is dropped from evaluation.
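For the curious, here is a small Python sketch of a sliding-window evaluation along these lines, run against the sample data from the two tables above. It is an illustration of the concept, not SAM's actual code.

    from collections import deque

    def sustained_statuses(samples, threshold, sample_size, required):
        """Yield 'Warning' once `required` of the last `sample_size` polls exceed
        `threshold`, else 'UP'. required == sample_size gives "X consecutive
        polls"; required < sample_size gives "X out of Y polls"."""
        window = deque(maxlen=sample_size)   # sliding window of recent samples
        for value in samples:
            window.append(value)             # newest sample in, oldest drops out
            breaches = sum(1 for v in window if v > threshold)
            full = len(window) == sample_size
            yield "Warning" if full and breaches >= required else "UP"

    # First table: greater than 80 for 3 consecutive polls
    print(list(sustained_statuses([65, 77, 88, 85, 89, 83, 46, 81, 22], 80,
                                  sample_size=3, required=3)))
    # ['UP', 'UP', 'UP', 'UP', 'Warning', 'Warning', 'UP', 'UP', 'UP']

    # Second table: greater than 80 for 3 out of 5 polls
    print(list(sustained_statuses([65, 82, 34, 95, 88, 90, 35, 25, 15], 80,
                                  sample_size=5, required=3)))
    # ['UP', 'UP', 'UP', 'UP', 'Warning', 'Warning', 'Warning', 'Warning', 'UP']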

 

Somewhat similar functionality has existed within the Advanced Alert Manager for some time now, aiding in reducing the number of nuisance alarms, but each individual component monitor with unique threshold criteria has required its own separate alert definition. Not only is this a tedious and time-consuming process to initially set up and configure, it also adds the overhead of managing and maintaining what can be an unruly number of alert definitions.

 

Sign-up Now

 

We'd love to get your feedback on these new features. So tell us what you think in the comments section below, or better yet, sign-up here to download the latest SAM 6.1 beta and try them out for yourself!

 

Please note that you must currently own a copy of SolarWinds Server & Application Monitor that is under active maintenance to participate in the SAM 6.1 beta.
