Posts authored by: Michael Halpin

Christoph Pfister addressing the audience during the keynote


Hey folks!

The February 2018 EMEA Partner Summit finished a couple of weeks ago, and it was so good we wanted to take a little time to savour it before writing this post. This was our sixth running of the event, and it has definitely been our biggest and best to date. We were delighted to see a mix of partners who had previously attended other sessions as well as a number of new faces, with partners attending from all across Europe, the Middle East, and Africa.


Kevin Bury speaking during the executive panel


The event actually consisted of two tracks: one for commercial staff and another for technical staff at our partners. For our commercial colleagues, there were two days of assorted content, including keynotes from cpfister (our EVP for product), kevin.bury (our SVP for customer experience), and Ludo Neveu (sales GVP). All of our sessions received great feedback, but two stood out in particular: a customer success session (featuring THWACK® MVP robertcbrowning), and the commercial wrap-up session, where our sales team was able to share part of our internal workflows.


Our technical partners also had a chance to attend the keynotes, but the focus was on technical training, with the technical content spread over four days and exams running on the final day. Given the variety of experience and specialties of those attending, two different training sessions ran in parallel, focused primarily on providing training for the NPM and SAM SCP certifications, as well as a new beta exam for NTA and a more advanced Orion® Architecture beta exam.




Of course, it’s not all just work. There’s some downtime in the evenings, and it’s great to be able to share some Irish hospitality with our visitors. The main social event took place on Tuesday night, where we enjoyed some good barbecue within walking distance of our event hotel. But as an added bonus, once we finished the meal (and it took a while; there was some GOOD eating!), we moved on to some post-meal entertainment, including bowling, pool, and, if the stories are true, a very competitive table-tennis game between some keynote speakers.





For those of us within SolarWinds, our partner events are some of the biggest highlights of the year. It’s an excellent opportunity for our sales and marketing staff to have some face-to-face time with many partners with whom they may have worked remotely over the years. And while one of the overall goals of the week is partner enablement, it’s also an excellent opportunity for partners to provide feedback directly to SolarWinds. Work is already under way for future Summits, taking into account feedback from our partners, but also looking to incorporate some new ideas as well.




And as a bonus, here’s a video highlighting our great week.


As the countdown to the EMEA Partner Summit (formerly Partner Bootcamp) is well underway (we begin on February 5), we want to share a few points on what to expect at next month’s event. We are delighted that the event has grown over the last couple of years, not just in the number of attendees, but also in the breadth and depth of content. We have actually had so much interest that we now run separate technical and sales tracks.


Here is a small taster of what to expect:




We are delighted to have not one, but three keynote speakers for this EMEA Partner Summit, including:


  • Christoph Pfister (EVP Product)
  • Kevin Bury (SVP Customer Success & Business Operations)
  • Ludovic Neveu (GVP Sales)





The technical track will focus on training for SolarWinds Certified Professional® (SCP) exams, including:

  • Network Performance Monitor (NPM)
  • Server & Application Monitor (SAM)
  • NetFlow Traffic Analyzer (NTA) (beta)


As you can tell, our technical training team didn’t just stop at a single exam for NPM. They have worked tirelessly to extend the scope of products, with SAM in the process of being added to the portfolio. And as a bonus for those attending the Summit, there will be access to a new exam for NTA.


However, that’s not all. We are also delighted to announce a brand-new SCP track for Orion® Architecture, aimed at our consultants and partners. This SCP certification will focus specifically on large-scale Orion deployments, covering topics such as:

  • Centralized vs. distributed deployments
  • Deploying high availability
  • Scaling topics for Orion modules
  • Integration with other solutions



As promised, we have a great line-up of evening activities. Highlights include a private dining experience at an award-winning barbecue restaurant. We’re bringing the flavours of our Austin HQ to Ireland, with the former head chef at Jamie Oliver’s Barbecoa Smokehouse.

The Mardyke
Images © 2017 Mardyke Entertainment Complex. All rights reserved.

Delegates who are staying until Friday are invited to join us on a totally unique quest through one of Cork’s oldest prisons, which is now a top visitor attraction.

Cork City Gaol
Images © 2017 Cork City Gaol. All rights reserved.


If you are interested in attending and haven’t registered yet, please contact your Channel Account Manager for more details. And if you have already registered, we look forward to seeing you there.

The eagle-eyed among us may have noticed that the “What We’re Working On” post for Network Performance Monitor (NPM) included this little nugget:

  • ARM Linux® Agent - Linux Agent for ARM-based devices such as Raspberry Pi®


Well, we’re delighted to announce that with the release of NPM 12.2, you can now monitor ARM-based devices with NPM.


Orion Agent on a Raspberry Pi


What exactly are ARM-based devices?


ARM (Advanced RISC Machine) processors are a family of RISC-based processors, which are designed and licensed for manufacturing by ARM Holdings.


I won’t get too bogged down in the differences between RISC (Reduced Instruction Set Computer) vs. CISC (Complex Instruction Set Computer) processors. At a very basic level, one processor type has a smaller and simpler set of instructions than the other. (This becomes quite a philosophical debate, which is why I won’t get into it, but a lot of RISC implementations are now more complex than a lot of CISC processors!)


While both types of computer architecture are common (RISC ISAs include MIPS, PowerPC®, and SPARC®, while CISC covers the x86 and 8080 implementations, as well as the Motorola® 6800 & 68000), what I really want to focus on is practical implementations of ARM devices.


The Internet of Things and Raspberry Pi


ARM processors are now almost ubiquitous not only in consumer electronics (vendors including Apple®, Google®, Samsung®, and Sony® have all utilized ARM processors in their various smartphones, tablets, and gaming consoles), but also in anything from sensors and cash machines to coffee pots and children’s toys.


But from an IT professional’s perspective (or someone whose background includes computer engineering and cloud computing, as well as working with various industrial technologies in manufacturing), the main implementation of interest to me is Raspberry Pi.


If you have made it this far, I’ll assume you at least know of the single-board computer that is available for only a few euros/dollars/your currency of choice. You may also know that it was originally designed for (and is still excellent at) teaching kids about computers, as well as various hobbyist-type uses (such as converting an old radio into a Spotify®-playing boom box, remote-controlling the office coffee pot, or even just a tweeting implementation for Pi Day).


However, I see many other use cases for a Raspberry Pi, which I like to categorize into three areas:


Network Services

  • Something like a lightweight Postfix mail server
  • Apache® web servers
  • DNS/DHCP servers
  • Or even a Docker® host
Displays

  • Some customers use Raspberry Pi as a cheap way to have an always-on NOC display of your SolarWinds® Orion® dashboard
  • Next on my personal list is an “On Air” sign
  • By combining with a device like an Arduino, you can easily control various servos, allowing not only displays, but also physical actions to be taken



However, since I’ve worked in manufacturing, I see some of the largest gains in retrofitting a Raspberry Pi to industrial equipment, to cheaply and easily create an Internet of Things (IoT) deployment of various sensors. Again, by combining with Arduinos, your networked Raspberry Pi can connect to various sensors, including:

  • Temperature
  • Humidity
  • PIR Motion
  • Accelerometers and other motion detectors


(Just to name a few.)


Of course, if you do have Raspberry Pi devices dotted around your network, you will want to monitor them, which is where the new ARM agent comes in.


How do I deploy the ARM agent?


The good news is that the same approaches for deploying the standard Linux agent will also work for the ARM agent. 
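Before deploying, it can be worth confirming from an SSH session that your Pi really is running an ARM build of Linux. These are standard Linux commands, and the exact output will vary by model:

```shell
# Confirm the CPU architecture before deploying the ARM agent.
# A Raspberry Pi 2/3 running Raspbian typically reports "armv7l".
uname -m

# Confirm which Debian/Raspbian release you are on.
grep PRETTY_NAME /etc/os-release
```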

We have a number of videos that cover different ways to deploy the agent, but personally, I like to just use “Download Agent Software,” and then, for the Linux agent, “Manually Install by Downloading Files.”


Manage Agents option in Orion


Download Agent Software

You can select the “Raspbian Jessie 8.x” distro (in my lab I’m running Raspbian 9 (Stretch), and it works fine), then generate and copy the script, before simply pasting it into your SSH session.


Download Agent Software

And that’s it! Within a few minutes, your Raspberry Pi should be registered as an Orion node.


What if I have Server & Application Monitor (SAM) deployed? Can I monitor applications on my Raspberry Pi?


The short answer is that the SAM team is working hard on the next release, and that will include the ARM agent. However, there’s good news for customers who own SAM, along with other products running the latest version of the SolarWinds Orion Core (2017.3): you can actually utilize the new ARM agent.


So, for example, if you have SAM 6.4 running on a server that also includes the latest version of NPM (12.2), then the underlying core version will also have been updated. This means that you can now assign SAM templates to the ARM agents, as opposed to monitoring those devices with just SNMP. (You could also substitute NPM 12.2 with other products that include agents running on the 2017.3 core version, such as Virtualization Manager (VMAN) 8.0 or VoIP & Network Quality Manager (VNQM) 4.4.1.)
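If you want to double-check which product versions are actually installed before assigning templates, you can query the SolarWinds Information Service, for example with the SwisPowerShell module. Note that the hostname and credentials below are placeholders, and the `Orion.InstalledModule` entity name is my assumption, so verify the exact entity and property names in SWQL Studio for your version:

```powershell
Import-Module SwisPowerShell

# Connect to the Orion server (hostname and credentials are placeholders)
$swis = Connect-Swis -Hostname 'orion.example.local' -UserName 'admin' -Password ''

# List installed products and versions to confirm the 2017.3 core is present
Get-SwisData $swis 'SELECT Name, Version FROM Orion.InstalledModule'
```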


What can I monitor from SAM?


The same component monitor types that run on the standard Linux agent will also run on the ARM agent:

  • Process Monitor
  • Linux/UNIX® Script Monitor
  • Nagios® Script Monitor
  • JMX Monitor
  • SNMP Monitor
  • ODBC User Experience Monitor
  • Oracle User Experience Monitor
  • TCP Port Monitor
  • Tomcat Server Monitor


So, templates like the Linux memory, CPU, and disk templates that ship with SAM will work out of the box. You can even deploy Docker on your Raspberry Pi, spin up a Dockerized version of Netflix Chaos Monkey, and monitor the whole lot from SAM!


And that’s it. With a few simple steps, you can be up and running with the new ARM agent, bringing the same monitoring power to your ARM devices as your other enterprise platforms.


We would love to hear about your Raspberry Pi use cases for the Orion agent. Please share in the comments below!

One of the many reasons that modules based on the SolarWinds® Orion® Platform like Server & Application Monitor (SAM), Network Performance Monitor (NPM), and Network Configuration Manager (NCM) are so popular is because customers are able to see value straight out of the box. However, the power of the Orion Platform can easily be taken to the next level by customizing aspects such as reports, alerts, and views. And the cherry on top is that you can also create custom templates for what you manage and monitor! In these posts, we will look at how, rather than re-inventing the wheel, you can import customizations created by other THWACK® members to quickly and easily expand your SolarWinds deployment with these community best practices.




Overall, the approach is fairly straightforward: on THWACK, our customers (and SolarWinds staff) can share content they have created so that others can re-use it. If you need to poll custom metrics or use existing polled data, it's always worth checking THWACK to see if there is a pre-existing poller that meets your needs rather than building one from scratch.


Later, we will look at how this content can be created, but once it's on THWACK, it's generally straightforward to load the customizations either directly in Orion itself, or indirectly by downloading them from THWACK and then importing them into Orion. The latter approach can be very useful if your Orion server is not connected to the internet.



Pollers within NPM



First, we will look at how to use pollers for NPM. Essentially, there are two types: pollers created via the Device Studio, and Universal Device Pollers (UnDPs). Device Studio pollers cover technologies such as vendor name, CPU, and memory, should devices not support standard polling OIDs, whereas UnDPs tend to be used for more "metric" types.


With Device Studio pollers, you can:

  • Poll devices that do not support any of the OIDs polled for by SolarWinds pollers
  • Poll devices that return incorrect data when polled by SolarWinds pollers
  • Override polled values to display custom static values


Device Studio pollers can be both created and imported in the Orion console itself, and these are fairly typical use cases. (More information on the use cases is covered in the online documentation.)

The easiest way to find what's already built is to open the Device Studio ("Settings" -> "Manage Pollers") in Orion, and then select "Thwack Community Pollers."

From here you can simply select the poller you wish to use, and then click "Assign" to assign it to a node. On the next screen, you can scan the nodes for matching OIDs, and then enable the poller as needed.


Import from Thwack Directly into Orion



The indirect approach has a few more steps: you navigate to THWACK from an internet-connected machine, download the template you need, and then import it into the Orion Platform.


Firstly, navigate to THWACK (if you are reading this, you are already there!) and browse to "Product Forums" -> "Network Performance Monitor." Then select "Content Exchange."


Thwack Forum


From here, navigate to the "Device Pollers" category and ensure the "Documents" filter is applied. Then it's just a matter of selecting the poller you require and clicking "Download."


Download Content from Thwack


Once you download the file, you can then import it into the Orion Platform in the Device Studio, except instead of selecting "THWACK Community Pollers," use the "Local Poller Library."


UnDPs work slightly differently, in that they are still managed via a Win32 application on the Orion server itself, but you can still utilize the import function to benefit from UnDP templates on THWACK. The approach is virtually the same as for the Device Studio pollers, except instead of selecting the "Device Pollers" category, you select the "Universal Device Pollers" category. As before, just download the template, copy it to the Orion server, and then import it.

Universal Device Poller (UnDP)



Stay Tuned


In the next posts, we will look at application templates within SAM and device templates within NCM.

Greetings fellow THWACK® comrades!

In a slight deviation from my usual PowerShell® posts, I wanted to share a sneak peek of some of the work that is going on behind the scenes at the Cork office for next week’s SolarWinds® Partner Bootcamp EMEA.

If you’re not familiar with our SolarWinds Partner Bootcamp, it’s a multi-day event in Cork, where SolarWinds technical and sales staff host a series of product workshops with our channel partners from across Europe, the Middle East, and Africa. During these face-to-face sessions, we’re able to cover advanced topics, provide technical training, and receive feedback directly from our partners in the field.

Expect another post in the near future, but in the meantime, here’s a little teaser of some of the welcome kit items our attendees might expect!



As I'm sure you’re now well aware, the AWS® outage earlier this month was caused by simple human error, where an unfortunate soul ran a command without fully understanding its consequences. As sqlrockstar pointed out in his post on the topic, Lessons Learned From the AWS Outage, PowerShell's creator, Jeffrey Snover, actually built functionality into PowerShell itself to help mitigate such errors. So, I thought it would be a good opportunity to look at that functionality in a more detailed, practical manner. In particular, I want to look at using -WhatIf and -Confirm within functions.


To examine the topic further, I've added a real-world example—in this case, the "Remove-OrionNode" function from the PowerOrion module.

$swis = Connect-Swis -UserName admin -password ''
Remove-OrionNode -SwisConnection $swis -IPAddress '' -WhatIf

 What if: Performing the operation "Removing Object" on target "swis://"


If I called the above lines, but without the -WhatIf parameter on the final line, the code would simply execute, and the node referenced by "NodeID=1" would be deleted. But that particular object wasn't actually targeted within the function call; rather, the IP address was. Is that the node the user expected, or was a different node expected? By simply adding the -WhatIf parameter, the user can have a very good understanding of what will happen before a major change is effected.


Similarly, if we want to confirm such a change, we can utilise the -Confirm parameter, providing the user a safeguard to confirm or reject such changes.


Confirm Remove Node.png


How to Implement


Fortunately, PowerShell provides an excellent framework to allow such expanded functionality. By adding the "CmdletBinding" attribute to your advanced functions, your PowerShell functions can behave like compiled cmdlets. Without any other arguments, this is enough to allow passing -Verbose and -Debug to your functions. In particular, the "SupportsShouldProcess" argument enables the -WhatIf and -Confirm functionality we are looking to achieve.


For more information on the topic you can run the following command:


get-help about_Functions_CmdletBindingAttribute



However, for a more practical inspection, we will analyze the function below. To enable the functionality, we simply place the attribute just after the function name is defined:


function Remove-OrionNode
{
    [CmdletBinding(SupportsShouldProcess=$true)]



We then wrap the actual code that makes the edits in an "if" statement, which calls "$PSCmdlet.ShouldProcess("$uri","Removing Object")". That is the logic that determines which object is in scope, and it also provides the text output seen in the previous screenshot.
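Putting those pieces together, a minimal skeleton of such a function might look like this. This is a simplified sketch rather than the actual PowerOrion source, with a placeholder parameter and body:

```powershell
function Remove-OrionNode
{
    # SupportsShouldProcess adds the -WhatIf and -Confirm common parameters
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
        [Parameter(Mandatory=$true)]
        [string]$Uri
    )

    # ShouldProcess returns $false under -WhatIf (after printing the preview),
    # so the destructive code only runs when the change is approved
    if ($PSCmdlet.ShouldProcess("$Uri", "Removing Object"))
    {
        # ...the actual deletion logic would go here...
        Write-Verbose "Removing $Uri"
    }
}
```

Calling `Remove-OrionNode -Uri 'swis://...' -WhatIf` then produces the "What if: Performing the operation..." message without deleting anything.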



By leveraging PowerShell's native safeguard functions, you can now easily add a level of robustness, helping to prevent unintended consequences when running your scripts. I'd be interested to hear if you were already doing something similar in your scripts. If you have any interesting input on the topic, please share in the comments below.

In this week’s post, we will look at the steps needed to create a PowerShell® module containing the functions written in previous posts. While functions are the keystones of code reuse, encompassing them within modules is the most flexible and easiest way to share code between users and machines.


At its simplest, a PowerShell module can be made by creating a .PSM1 file, which has the same name as your module, and placing your functions within it. That file is then itself placed in a folder, again named after your module.


Function Write-Hello{
     Write-Host "Hello World"
}
For example, if I want to create a module called “MyModule” to share with others, I would simply create a folder called “MyModule,” and within that folder a text file called "MyModule.psm1". Next, we simply copy our function(s) into the file.


While modules can be loaded from anywhere, the best practice is to place them in $home\Documents\WindowsPowerShell\Modules. (If you want to use a location outside the module path, simply use the “Import-Module” cmdlet with the full path.)
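For example, to load a module from a folder outside those listed in $env:PSModulePath, you can point Import-Module at the file directly (the path here is just an illustration):

```powershell
# Import a module by explicit path rather than by name
Import-Module -Name 'C:\Dev\MyModule\MyModule.psm1' -Verbose
```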


You can check the full list of module folders by running:


Get-Item env:PSModulePath | Select-Object -ExpandProperty Value


If we do a quick “Get-Module -ListAvailable,” we can now see our module is listed.


get-module -listavailable.png


Since the module is now loaded into the session, any cmdlets within it can be called simply by using the relevant command name. We can verify our example by listing all commands that begin with the "Write" verb.


get-command -verb write


get-command -verb write.jpg


One point worth noting from the previous examples is that the screenshots show the module version listed as “0.0.” If we want to add metadata to our modules, such as versions, license info, authors, etc., we can achieve this by adding a manifest file (a .psd1 file, which contains a hashtable with all the values). Of course, as with most things PowerShell, there's a cmdlet to help!


get-help New-ModuleManifest


Here is the example I used to build my manifest:


$guid = [guid]::NewGuid()
New-ModuleManifest -path .\MyModule\MyModule.psd1 -Guid $guid -Author 'Michael Halpin' -Description "Demo Module" -ModuleVersion 0.1

These are the basic steps that I used when creating the PowerOrion module. If you have done any work in creating your own modules, feel free to share those in the comments below.

Okay, so “road warrior” may be a stretch. However, as a sales engineer here in SolarWinds EMEA, I still rack up a decent amount of international travel every year. Leon has already written a post, “A CONVENTION GO-ER’S GUIDE, PART 1,” where he covered what to bring to a conference, but I thought a post covering more international travel might be of interest.


  1. Spend money on proper luggage. Traveling can be hard enough, without having to deal with annoyances like bags with squeaky wheels and broken handles. If you travel a lot, it is worth investing in proper equipment. I have numerous bags that I use for different occasions, all of which work and work well. This means that not only can I get to my destination with minimum fuss, but my contents are also more likely to get there intact.
  2. Bring carry-on whenever possible. This is advice that’s commonly known by business travelers, but it’s worth reiterating. Carry-on not only means that you are less likely to lose luggage; it is also faster when you arrive. (According to George Clooney’s character in the 2009 film “Up in the Air,” it takes 35 minutes on average to claim your luggage, but I am not sure what scientific study was behind that particular claim.) One of the biggest wins for me, however, is that I have less to carry on the other side. A big bonus if I need to catch the London Underground, for example. With this in mind, my go-to bag for short business trips is a Samsonite® hard-sided carry-on roller. It is not the cheapest, but it allows me to bring both my laptop and several days of clothes, in a form factor that keeps the weight off my back, while being small enough to work as carry-on.
  3. Use a sticker as a name tag. Over the years, I have found that traditional luggage tags are prone to wear and tear, and hence can get lost. However, a name sticker at least allows it to be identified easily.   Suitcase.jpg
  4. Be fault tolerant. If I need a hard copy of something when I travel (like my driver’s license or passport), I’ll always keep a scanned copy in Dropbox®, just in case something goes amiss. Similarly, if I have soft copies of things that I might need (such as my itinerary), I will print those just in case my phone dies. This goes all the way to ensuring I have things like a small first-aid kit (more in the next point), a wash bag, and a polo shirt with me in my carry-on (even if I have checked luggage).
  5. Have a mini first-aid kit. A lot of travel tends not to lead to the healthiest lifestyle, and as a result, I’ve succumbed to various ailments over the years. However, I have found that having some standard, over-the-counter medicines to treat things like stomach aches, allergies, toothaches, etc., can prove invaluable when you’re on the road and away from a chemist (or pharmacist, depending on your locale).
  6. Your phone can do a lot. Of course, you can use your phone for the usual texting/calling/social media duties when abroad (especially if you have a good roaming package). I also have mine loaded with useful travel apps like Tripit®, Uber®, Google® Maps, and Dropbox (see point 4). For entertainment, I ensure my phone is synced with offline content from Netflix®, Spotify®, and Audible®. In addition, for some less conventional use of the camera, I always take pictures of where my car is parked in the airport, as well as taking photos of receipts.
  7. Pack an extension lead. A bit of an unusual one, but I discovered Powercubes online and haven’t looked back. One of the annoying things with many hotels is the disconnect between where you want to have your devices and where the actual power sockets are located. As an added bonus, it also means I only need to bring a single international adapter.
  8. Modules! I would not say I have OCD, but when traveling, I want to know exactly where everything is. This helps not only reduce the time to get through airport security, but it allows me to reuse things for different workloads. As an example, when I’m at home, my Maxpedition® Organiser goes everywhere with me. (It holds items such as a battery pack, USB cable, earphones, pens, a notebook, a Leatherman® Skeletool® CX, a 4-in-1 screwdriver, a Spork, and a Led Lenser® flashlight.) Combine that (minus anything that may cause issues with airport security, such as the Leatherman, screwdriver, and light) with my smartphone, and it contains everything I need to pass hours on an airplane. It even has a key clip, so you don't lose your car keys! (cc sqlrockstar) Even my notebook is “modular.” Simply by adding a pen holder, I can now carry two pens whenever I need the notebook. Other modules include a document holder (passport, tickets, etc.), packing pods for clothes, and a Grid-It® for other electronic accessories.
  9. Keep the bag packed. Usually, as soon as I get back from one trip, I start repacking for the next one. This means I am much more likely to replenish any consumables (such as toiletries or medicines) that I may have gone through. However, it also means that if I’m ever called on at short notice, then I’m good to go!
  10. Never turn your nose at a power socket. This is a fairly common-sense tip, but even if I know my laptop battery should have more than enough juice to last for a particular trip, if I get a chance to plug in and keep the battery charged, I’ll take it. Why? Well, let’s just say I have been on the wrong end of cancelled flights, in countries where charging wasn’t possible!




And that concludes my top 10 tips for business travel. I would love to hear yours in the comments below!

Here’s a nice tip that can help you eke out a bit more performance from your Orion® server. The following approach is used in some larger environments to resolve out-of-memory issues experienced by the Orion Module Engine. However, it’s also safe in environments where there is spare RAM that the service might use.


How to apply the change

Previously, the change involved editing a configuration file directly. However, the latest versions of Orion Core have made this process much simpler. Navigate to one of the two URLs below (the Global option will usually be the one to use, unless there is a specific reason not to). This will apply the change not only on the main Orion server, but also on all the additional polling engines.


The advanced configuration settings editor pages are hidden and can be accessed via the links below:

  • {OrionWebsite}/Orion/Admin/AdvancedConfiguration/Global.aspx - leads to the “Global” tab
  • {OrionWebsite}/Orion/Admin/AdvancedConfiguration/ServerSpecific.aspx - leads to the “Server-Specific” tab
  • Find ForcePluginsInSeparateProcess under SolarWinds.BusinessLayerHost.BusinessLayerHostSettings
  • Tick the ForcePluginsInSeparateProcess checkbox


Business Layer Host Settings.jpg


You will then be prompted to restart the Orion Module Engine. This is safe to do, and should not affect the user experience in the console.


What exactly does the change do?

The Orion Module Engine loads numerous different plugins when it runs (examples: Orion, NPM, Interfaces, UnDP, VIM, Wireless, SAM, UDT, NCM, etc.). Since 32-bit executables start to run out of memory close to 1GB of RAM, a single plugin running in the service can act as a bottleneck for the others. The change mentioned above simply allows each individual plugin to run in its own process. Not only does this simplify troubleshooting in some support issues, but it also has the bonus of improving performance on perfectly healthy machines.


And how do you know it works?

  • Open Task Manager -> View -> Select Columns -> add “Command Line”

You can then see how much CPU and memory each instance of the business layer uses!
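You can also pull the same information from PowerShell; the process name filter below is my assumption of how the business-layer host processes are named on your server, so adjust it if yours differ:

```powershell
# List each business-layer host process with its command line,
# which shows which plugin each process is hosting
Get-CimInstance Win32_Process -Filter "Name LIKE 'SolarWinds.BusinessLayerHost%'" |
    Select-Object ProcessId, Name, CommandLine
```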


Disclaimer: While this change can be considered safe and extremely low-risk to deploy, as always, every change to a production system should be first thoroughly tested, and steps should be taken to allow roll-back if things go awry.

As a follow-up to my last post, this week I'll be looking at different options for making the PowerShell® code for my Busylight more manageable and easier to reuse. To start with, I'm going to focus on simply setting the Busylight to be either green, red, or blue. We'll initially look at just using a basic script, and then we'll move on to making the code more portable via functions.


The simplest way to do this is to just copy all my code (the massive five lines) into a .ps1 file, and change the colour values as needed. So, I could end up with three virtually identical files (SetBusyLightToGreen.ps1, SetBusyLightToBlue.ps1, and SetBusyLightToRed.ps1), where the only thing changed is something similar to:

$Color.GreenRgbValue= 255

Running is easy; it's not any more complicated than calling .\SetBusyLightToGreen.ps1. However, that's not ideal, because any future change would then require me to replicate it across x number of files. For example, if I wanted to add in some error handling, it's now three updates. And if I want to create a file for the colour purple—well, that’s another file to look at.


A better approach is to move all my code into a single file, and break the units of work into functions. When the script runs, anything not inside a function will run immediately, while code within the functions will only run when the function is called. Creating a function is simple: just use the FUNCTION keyword, followed by the function name, and the script code itself enclosed within a pair of braces. So, in this example, the first three lines are called as soon as the script is run, and the functions only when explicitly called. Not only can we call these functions within the script, but once we run the script in a console session, those functions will exist in memory, so they can be called whenever we want. We can also make use of tab completion. Each function should ideally do just one thing, and do that one thing well. It should also be able to work with objects both as inputs and outputs on the pipeline. These are more advanced topics, and we’ll go into further detail in future posts, but they’re worth bearing in mind even at this early stage.
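The structure described above can be sketched as follows. This is a hypothetical example: the real Busylight SDK calls are replaced with placeholder Write-Host lines, and the function names are mine:

```powershell
# These lines are in the script body, so they run as soon as the script is invoked
Write-Host "Busylight helper script loaded"

# These functions only run when called by name
function Set-BusyLightToGreen {
    Write-Host "Setting light to green"   # real SDK calls would go here
}

function Set-BusyLightToRed {
    Write-Host "Setting light to red"     # real SDK calls would go here
}

# Explicit call from within the script
Set-BusyLightToGreen
```

After running the script in a console session, both functions remain available at the prompt, complete with tab completion.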

Naming Conventions

Before we go further, it’s worth spending a few minutes discussing naming conventions. When naming PowerShell functions, follow the same conventions as cmdlets, even though technically a cmdlet is something that is compiled. Essentially, each function should follow a Verb-Noun format, with the verb in particular needing to come from the list of approved verbs. This is why a name like Set-BusylightToGreen makes it easier for you to understand what your code does, while also acting as helpful documentation for the next person who has to read it. You should also get into the habit of always using full command names in scripts. If you’re working from the CLI, however, you can set aliases for your most common commands. So, instead of typing “Get-ChildItem,” you can use ls, dir, or whatever is more intuitive. I tend to use Notepad++ quite often, so I have an alias set in my profile: instead of calling “C:\Program Files\Notepad++\Notepad++.exe,” I can now just call np from the console.
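A minimal sketch of that profile setup (the Notepad++ path comes from the paragraph above; adding it to your $PROFILE script makes the alias available in every session):

```powershell
# In your PowerShell profile (run `notepad $PROFILE` to edit it)

# List the approved verbs if you're unsure what to name a function
Get-Verb | Sort-Object Verb

# Alias for launching Notepad++ from the console
Set-Alias -Name np -Value "C:\Program Files\Notepad++\Notepad++.exe"

# Now `np .\MyScript.ps1` opens the file in Notepad++
```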

Why All the Fuss About Functions?

You’re probably asking, “Why bother with all of these extra steps?” Well, the end goal here is to allow us to easily reuse code. While a simple script may do the exact job today, if we have a similar but different requirement further down the line, a rewrite would be needed. Having our code in functions also gives us the flexibility to determine how we call it. Not only can we reuse code in other scripts simply by copying and pasting, but we can also place it in our profile script so that it's always available from the console. Ultimately, however, we will see that placing our functions in modules gives us the most flexible deployment approach.

Wrap-Up

In this post, we looked at the basics of building a function. In the next, we’ll look at the steps involved in converting these functions into a module. We’ll continue to build on this in future posts, discussing error handling and comment-based help, as well as ways to use Pester to build tests around our code.

Exit-PSSession "Blog Session"

This is the first in a series of posts, entitled “The PowerShellToolbox,” where I'll be talking about everyone's favourite shell/scripting language. I'm assuming a lot of you will at least have heard of it, and there will also be some fanatics out there, so I thought it might be of interest if I covered some aspects that come up in my role as a sales engineer. Specifically, I learned many of these lessons when I wrote the PowerShell module based on the SDK.


These will include topics such as:

  • Reusing scripts by making them into cmdlets and modules
  • How to properly test your scripts using Pester
  • Creating a PowerShell jumpbox
  • A look at Desired State Configuration


... and many more. I'm writing this series based on the assumption that you, the reader, have a certain level of experience with PowerShell. For those wanting to learn the basics, I couldn't recommend enough the excellent training videos Jeffrey Snover and Jason Helmick created over at Microsoft® Virtual Academy. To begin with, we'll look at a project I've been thinking of for a while, and over the coming weeks iterate on the solution, beginning today with a high-level planning phase. We’ll then move on to some basic cmdlets, before adding those to a module. Along the way, we'll also look at unit testing for the scripts, error handling, and source control. While the code here is specific to my use case, the overall goal is to show you how to take an idea, convert it to PowerShell, and then build it into a robust tool in your arsenal.


The Scenario

I work from home, and while I'm usually the only one in the house, there are times when it's useful to have some sort of indicator as to whether the door to my office is closed because I'm on a call (and "do not disturb") or whether it's okay to pop in. While trying to find a solution, I stumbled across Scott Hanselman's blog on using a Kuando Busylight. Essentially, it's a USB light that indicates availability based on your Lync® presence, but it can also work with other technologies. There are Alpha and Omega models, but given there wasn't much of a price difference on Amazon®, I went for the Omega. And by pairing it with a 5m USB extension cable, I should be able to locate it outside my office.


However, some of the technologies I use aren't supported, so I want a way to manually control the colour. While there is a sample app for this, I want to be able to script these changes. I already use PowerShell scripts to configure certain workflows on my system (for example, when I'm running a WebEx®, I'll connect to another network and make sure certain apps aren't running), so I want to be able to control the device from PowerShell as well.



Busylight alongside Jabra headset for perspective


PowerShell for Busylight

Fortunately, there is also a Busylight SDK available (I'm using Vers.), and while it ships with C++, C#, and VB code samples, a really strong feature of PowerShell is that it’s easy to expand its reach by using objects designed for other technologies. I'm not going to do a huge amount of coding today; just for a quick proof of concept, I want to do something basic, like setting the colour to blue, from PowerShell. Once I download and install the SDK, the next step is to look at the included help file documentation, where I note there are a few classes that might be of interest: BusylightColor, BusylightDevice, PulseSequence, and SDK.

Busylight SDK Help


At this stage, don't panic if you don't know what constructors, methods, and properties are; I'll explain as we go along. In my case, I'm fortunate that I have a basic understanding of some programming concepts (I've worked with C, C++, and Java, as well as various VB and FoxPro applications I developed in previous roles), but the key thing I noted is that the "SDK" class is described as “This class handles all connected Busylight Devices,” so it seems like a good place to start.

1. I'll load the DLL so that PowerShell knows what code I want to use (the "Add-Type" line).

2. Next, I'll create a new object, which, from the documentation, is in a namespace called "Busylight" (tab completion will help!), and I assign this to a $BusyLightDevice variable. (Just in case you're interested, at this stage the "constructor" that you see is called implicitly, so you don't have to worry about it.)

3. Finally, I'll check the variable to make sure it's connected, and sure enough, we're now seeing device info.

PowerShell to Connect to Busylight
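The three steps above can be sketched as follows (the DLL path is an assumption; adjust it to wherever the SDK installed on your machine):

```powershell
# 1. Load the SDK assembly so PowerShell knows about its types
Add-Type -Path "C:\Program Files\Plenom\Busylight SDK\BusylightSDK.dll"

# 2. Create the SDK object; its constructor runs implicitly here
$BusyLightDevice = New-Object Busylight.SDK

# 3. Inspect the variable to confirm a device was found
$BusyLightDevice
```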


And to set the colour, again, it's only three lines of code.

PowerShell to set the colour


Blue Busy Light

This time, however, we are creating a colour object that the Busylight can interpret, with just blue set to the maximum and the rest set to 0. This BusylightColor object is then used as a parameter by the "method" called "Light" in the SDK class. (To get technical: in a class, a method "does" something, like setting or reading a value, while the value of something, such as the BlueRgbValue in the BusylightColor class, is called a "property.")
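A sketch of those three lines, assuming $BusyLightDevice already exists from the connection step (property names follow the GreenRgbValue pattern shown earlier in the series):

```powershell
# Build a colour object with only the blue channel at maximum
$Color = New-Object Busylight.BusylightColor
$Color.BlueRgbValue = 255

# Pass the colour object to the SDK's Light method
$BusyLightDevice.Light($Color)
```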


By setting different red, green, and blue combinations as RGB values, we can now set different colours. So if I now add in red and check the settings:
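As a sketch of that mix (the exact channel values are my assumption; maxing both red and blue gives purple):

```powershell
# Red plus blue mixes to purple on the device
$Color = New-Object Busylight.BusylightColor
$Color.RedRgbValue  = 255
$Color.BlueRgbValue = 255
$BusyLightDevice.Light($Color)
```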



And as we can now see, we have a purple!

Purple Busy Light


Wrap Up

In the next post, I'll iterate on the above code by writing some functions that will allow us to reuse the code in multiple places. But in the meantime, I'd love to hear if you've already begun working on a toolbox with these types of scripts. If you have any feedback on things you'd like to see in future posts, please let us know!


Exit-PSSession "Blog Post"
