
Geek Speak


To paraphrase a lyric from Hamilton: deciding to code is easy; choosing a language is harder. Many programming languages are good candidates for any would-be programmer, but selecting the one that will be most beneficial to your particular needs is a challenging decision. In this post, I will give some background on programming languages in general, then examine a few of the most popular options and identify where each one might be the most appropriate choice.


Programming Types and Terminology


Before digging into any specific languages, I'm going to explain some of the properties of programming languages in general, because these will contribute to your decision as well.


Interpreted vs Compiled



An interpreted language is one where the interpreter reads the script and generates machine-level instructions on the fly. When an interpreted program is run, it's actually the language interpreter that is running, with the script as its input; the interpreter's output is the hardware-specific machine code. Interpreted languages are typically quick to edit and debug, but they are also slower to run, because the translation to machine code has to happen in real time. Distributing a program written in an interpreted language effectively means distributing the source code.





A compiled language is one where the script is processed by the language compiler and turned into an executable file containing machine-specific code. It is this output file that runs when the program is executed. It isn't necessary to have the language installed on the target machine to execute the compiled binary, so this is the way most commercial software is created and distributed. Compiled code runs quickly, because the hard work of generating the machine code has already been done; all the target machine needs to do is execute it.




Strongly Typed vs Weakly Typed

What is Type?


In programming languages, type is the concept that each piece of data is of a particular kind. For example, 17 is an integer. "John" is a string. 2017-05-07 10:11:17.112 UTC is a time. The reason languages keep track of type is to determine how to react when operations are performed on the data.


As an example, I have created a simple program where I assign a value of some sort to a variable (a place to store a value), imaginatively called x. My program looks something like this:

x = 6
print x + x

I tested my script and changed the value of x to see how each of five languages would process the answer. It should be noted that putting a value in quotes (") implies that the value is a string, i.e. a sequence of characters. "John" is a string, but there's no reason "678" can't be a string too. The values of x are listed at the top, and the table shows the result of adding x to x:




Weakly Typed Languages

Why does this happen? Perl and Bash are weakly (or loosely) typed; that is, while they understand what a string is and what an integer is, they're pretty flexible about how those are used. In this case, Perl and Bash made a best-effort guess at whether to treat the values as numbers or strings; although the value 6 was defined in quotes (and quotes mean a string), the determination was that in the context of a plus sign, the program must be trying to add numbers together. Python and Ruby, on the other hand, respected "6" as a string and decided that the intent was to concatenate the strings, hence the answer of 66.
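That Python behavior is easy to reproduce yourself. A minimal sketch (Python 3, where print is a function):

```python
# With x as a string, + concatenates; Python never silently converts it.
x = "6"
print(x + x)  # prints 66 (the string "66"), not 12

# With x as an integer, + adds.
x = 6
print(x + x)  # prints 12
```

Same operator, two completely different results, all driven by the type of x.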


The flexibility of the weak typing offered by a language like Perl is both a blessing and a curse. It's great because the programmer doesn't have to think about what data type each variable represents, and can use them anywhere and let the language determine the right type to use based on context. It's awful because the programmer doesn't have to think about what data type each variable represents, and can use them anywhere. I speak from bitter experience when I say that the ability to (mis)use variables in this way will, eventually, lead to the collapse of civilization. Or worse, unexpected and hard-to-track-down behavior in the code.


That Bash error? Bash for a moment pretends to have strong typing and dislikes being asked to add variables whose value begins with a number but is not a proper integer. It's too little, too late if you ask me.


Strongly Typed Languages

In contrast, Python and Ruby are strongly typed languages (as are C and Go). In these languages, adding two numbers means supplying two integers (or floating-point numbers, aka floats), and concatenating strings requires two or more strings. Any attempt to mix and match the types will generate an error. For example, in Python:

>>> a = 6
>>> b = "6"
>>> print a + b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

Strongly typed languages have the advantage that accidentally adding the wrong variable to an equation, for example, will not be permitted if the type is incorrect. In theory, this reduces errors and encourages a more explicit programming style. It also ensures that the programmer knows that the value of an int(eger) will never have decimal places. On the other hand, sometimes it's a real pain to have to convert a variable from one type to another to use its value in a different context.
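As a sketch of that "extra work" in Python 3, the conversion between types has to be spelled out explicitly:

```python
a = 6      # an integer
b = "6"    # a string

# a + b would raise TypeError, so the programmer must state the intent:
print(a + int(b))   # 12  -- convert b, then add numbers
print(str(a) + b)   # 66  -- convert a, then concatenate strings
```

The upside is that nothing is ever guessed; the downside is that you type int() and str() a lot.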

PowerShell appears to want to pretend to be strongly typed, but a short test reveals some scary behavior. I've included a brief demonstration at the end in the section titled Addendum: PowerShell Bonus Content.


Dynamically vs Statically Typed


There's one more twist to the above definitions. Independently of how strongly typed a language is, it may allow a variable to change its type at any time. For example, it is just fine in Perl to initialize a variable with an integer, then give it a new value which is a string:

$a = 1;
$a = "hello";

Dynamic typing is typically a property of interpreted languages, presumably because they have more flexibility to change memory allocations at runtime. Compiled languages, on the other hand, tend to be statically typed; if a variable is defined as a string, it cannot change later on.
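Python illustrates that the two properties are independent: it is strongly typed but also dynamically typed, so rebinding a variable to a value of a different type is perfectly legal:

```python
a = 1          # a is bound to an int
a = "hello"    # now a is bound to a str; Python doesn't object
print(type(a).__name__)  # str
```

The variable itself carries no fixed type; only the value it currently refers to does.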


Modules / Includes / Packages / Imports / Libraries


Almost every language has some system whereby its functionality can be expanded by installing and referencing code written by somebody else. For example, Perl does not have SSH support built in, but there is a Net::SSH module which can be installed and used. Modules are the easiest way to avoid reinventing the wheel and allow us to ride on the back of somebody else's hard work. Python and Go have packages, and Ruby has modules, which are commonly distributed in a format called a "gem." These expansion systems are critical to writing good code; it's not a failure to use them, it's common sense.
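As a trivial illustration in Python, the standard library's json package means nobody has to write their own JSON parser:

```python
# Riding on somebody else's hard work: parse JSON with the stdlib package
import json

config = json.loads('{"hostname": "switch1", "port": 22}')
print(config["hostname"])  # switch1
```

Third-party packages (installed via a package manager such as pip) work the same way, just with an installation step first.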


Choosing a Language


With some understanding of type, modules and interpreted/compiled languages, now it's time to figure out how to choose the best language. First, here's a quick summary of the most common scripting languages:


Language     C / I         Type              S / D      Expansion
Perl         Interpreted   Weak              Dynamic    Modules
Python       Interpreted   Strong            Dynamic    Packages
Ruby         Interpreted   Strong            Dynamic    Modules (gems)
PowerShell   Interpreted   It's complicated  Dynamic    Modules
Go           Compiled      Strong            Static     Packages


I've chosen not to include Bash, mainly because I consider it to be more of a wrapper than a fully fledged scripting language suitable for infrastructure tasks. Okay, okay. Put your sandals down. I know how amazing Bash is. You do, too.





Ten years ago I would have said that Perl (version 5.x, definitely not v6) was the obvious option. Perl is flexible, powerful, has roughly eleventy-billion modules written for it, and there are many training guides available. Perl's regular expression handling is exemplary and it's amazingly simple and fast to use. Perl has been my go-to language since I first started using it around twenty-five years ago, and when I need to code in a hurry, it's the language I use because I'm so familiar with it. With that said, for scripting involving IP communications, I find that Perl can be finicky, inconsistent and slow. Additionally, vendor support for Perl (e.g. providing a module for interfacing with their equipment) has declined significantly in the last 5-10 years, which also makes Perl less desirable. Don't get me wrong; I doubt I will stop writing Perl scripts in the foreseeable future, but I'm not sure that I could, in all honesty, recommend it for somebody looking to control their infrastructure with code.




It probably won't be a surprise to learn that for network automation, Python is probably the best choice of language. I'm not entirely clear why people love Python so much, and why even the people who love Python seem stuck on v2.7 and are avoiding the move to v3.0. Still, Python has established itself as the de facto standard for networking automation. Many vendors provide Python packages, and there is a strong and active community developing and enhancing packages. Personally, I have had problems adjusting to the use of whitespace (indent) to indicate code block hierarchy, and it makes my eyes twitch that a block of code doesn't end with a closing brace of some kind, but I know I'm in the minority here. Python has a rich library of packages to choose from, but just like Perl, it's important to choose carefully and find a modern, actively supported package. If you think that semicolons at the end of lines and braces surrounding code make things look horribly complicated, then you will love Python. A new Python user really should learn version 3, but note that v3 code is not backward compatible with v2.x, and it may be important to check the availability of relevant vendor packages in a Python3-compatible form.





Oh Ruby, how beautiful you are. I look at Ruby as being like Python, but cleaner. Ruby is three or four years younger than Python, and borrows parts of its syntax from languages like Perl, C, Java, Python, and Smalltalk. At first, I think Ruby can seem a little confusing compared to Python, but there's no question that it's a terrifically powerful language. Coupled with Rails (Ruby on Rails) on a web server, Ruby can be used to quickly create database-driven web applications, for example. I think there's almost a kind of snobbery surrounding Ruby, where those who prefer Ruby look down on Python almost like it's something used by amateurs, whereas Ruby is for professionals. I suspect there are many who would disagree with that, but that's the perception I've detected. However, for network automation, Ruby has not got the same momentum as Python and is less well supported by vendors. Consequently, while I think Ruby is a great language, I would not recommend it at the moment as a network automation tool. For a wide range of other purposes though, Ruby would be a good language to learn.




PowerShell – that Microsoft thing – used to be just for Windows, but now it has been ported to Linux and MacOS as well. PowerShell has garnered strong support from many Windows system administrators since its release in 2006 because of the ease with which it can interact with Windows systems. PowerShell excels at automation and configuration management of Windows installations. As a Mac user, my exposure to PowerShell has been limited, and I have not heard about it being much use for network automation purposes. However, if compute is your thing, PowerShell might just be the perfect language to learn, not least because it has shipped natively since Windows Server 2008. Interestingly, Microsoft is trying to offer network switch configuration within PowerShell, and released its Open Management Infrastructure (OMI) specification in 2012, encouraging vendors to implement this standard interface, which PowerShell could then use. As a Windows administrator, I think PowerShell would be an obvious choice.





Go is definitely the baby of the group here; with its 1.0 release in 2012, it's the only one of these languages to come of age in this decade! Go is an open source language developed at Google, and is still evolving fairly quickly, with new functionality added in each release. This is good because things that are perceived as missing are frequently added in the next release. It's bad because not all code will be forward compatible (i.e. will run under the next version). Because Go is so new, the number of packages available for use is much more limited than for Ruby, Perl, or Python. This is an obvious potential downside because it may mean doing more work for one's self.


Where Go wins, for me, is on speed and portability. Because Go is a compiled language, the machine running the program doesn't need to have Go installed; it just needs the compiled binary. This makes distributing software incredibly simple, and also makes Go pretty much immune to anything else the user might do on their platform with their interpreter (e.g. upgrade modules, upgrade the language version, etc). More to the point, it's trivial to get Go to cross-compile for other platforms; I happen to write my code on a Mac, but I can (and do) compile tools into binaries for Mac, Linux, and Windows and share them with my colleagues. For speed, a compiled language should always beat an interpreted language, and Go delivers that in spades. In particular, I have found that Go's HTTP(S) library is incredibly fast. I've written tools relying on REST API transactions in both Go and Perl, and Go versions blow Perl out of the water. If you can handle a strongly, statically typed language (it means some extra work at times) and need to distribute code, I would strongly recommend Go. The vendor support is almost non-existent, however, so be prepared to do some work on your own.




There is a lot to consider when choosing a language to learn, and I feel that this post may only scrape the surface of all the potential issues to take into account. Unfortunately, sometimes the issues may not be obvious until a program is mostly completed. Nonetheless, my personal recommendations can be summarized thus:


  • For Windows automation: PowerShell
  • For general automation: Python (easier), or Go (harder, but fast!)


If you're a coder/scripter, what would you recommend to others based on your own experience? In the next post in this series, I'll look at ways to learn a language, both in terms of approach and some specific resources.


Addendum: PowerShell Bonus Content


In the earlier table where I showed the results from adding x + x, PowerShell behaves perfectly. However, when I started to add int and string variable types, it was not so good:

PS /> $a = 6
PS /> $b = "6"
PS /> $y = $a + $b
PS /> $y
12

In this example, PowerShell interpreted the string "6" as an integer and added it to the integer 6. What if I do it the other way around and try adding an integer to a string?

PS /> $a = "6"
PS /> $b = 6
PS /> $y = $a + $b
PS /> $y
66

This time, PowerShell treated both variables as strings; whatever type the first variable is, that's what gets applied to the other. In my opinion that is a disaster waiting to happen. I am inexperienced with PowerShell, so perhaps somebody here can explain to me why this might be desirable behavior because I'm just not getting it.

By Joe Kim, SolarWinds Chief Technology Officer


Government IT workers get a little squeamish on occasion. Understandable, right? I mean, federal IT managers must stay on top of the latest tech, which is constantly evolving. Legacy technologies are being replaced with shiny new cloud, virtualization, and networking software. Network complexity continues to grow, and budget and security concerns are always prevalent.


Underlying all of that may even be a sense of uncertainty regarding job security, some of which may stem from automation software. Don’t fret. Automation is your friend, and it can be used effectively to eliminate wasted time and unnecessary headaches.


Creating Their Legacy, Driving Innovation


Those should be comforting words for today’s federal IT professionals, who tend to have their fingers in a lot of pies. Beyond simply managing the network, growing network complexity and initiatives like DevOps have given administrators far more responsibility than ever before.


Today’s IT professionals can’t afford to be burdened with manual interventions that require hours – sometimes days – to fix. Furthermore, many like the idea of having time to do things that will help advance their agencies’ technology agendas, and create their own legacy.


Alert! Let’s Automate Responses!


Who wants to have to manually react to every single alert that comes through? Who has the time?


There’s a better way of dealing with alerts, one that won’t take hours away from your day.


Let’s take a look at a simple example. When a server alert is created because a disk is full, an administrator would typically deal with that task manually, perhaps by dumping the temp directory. What if they wrote a script for this task instead? That would eliminate the need for manual intervention.
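A minimal sketch of such a script in Python, assuming a hypothetical temp directory path and age threshold (adjust both for your environment):

```python
# Hypothetical auto-remediation sketch: free disk space by removing stale
# files from a temp directory. The path and age threshold are assumptions.
import os
import time

def clean_temp(path="/tmp/app", max_age_hours=24):
    """Delete files older than max_age_hours; return the names removed."""
    cutoff = time.time() - max_age_hours * 3600
    removed = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isfile(full) and os.path.getmtime(full) < cutoff:
            os.remove(full)
            removed.append(name)
    return removed
```

Hooked into a monitoring tool’s alert actions, a script like this turns a page-the-admin event into a log entry.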


Here’s another one. For whatever reason, an application stops working. Again, manually dealing with this challenge can be a painstaking, time-consuming process. Automation allows managers to write a script that enables the application to automatically restart.


Administrators can also evaluate their alerts to determine if an automated response is scriptable. This could create far fewer headaches.


Perhaps even more importantly, automated responses could free up IT time to develop and deploy new and innovative applications, for instance, or find better ways to deliver those applications to users.


Tools for the Job


Speaking of tools, there are certain types that should be considered. Change management and tracking, compliance auditing, and configuration backups should be on everyone’s automated wish list.


These tools save time and resources and greatly reduce errors that are sometimes created by manual tasks. These errors can lead to network downtime or potential security breaches. Meanwhile, they help to free up time for projects that can help your agency become more agile and innovative.


There are ways IT professionals can manage the hand they’re dealt more effectively and efficiently. They can use automation to make their lives easier and their agencies more nimble and secure. In turn, they can work smarter, not harder.


Find the full article on GovLoop.

In my last post, WHEN BEING AN EXPERT ISN’T GOOD ENOUGH: MASTER OF ALL TRADES, JACK OF NONE, you all shared some great insight on how you were able to find ways to be successful as individual SMEs and contributors, and how you could navigate the landscape of an organization.


This week, I’d like to talk about silo organizations and how we’ve found ways to work better together. (You can share your stories, as well!)



This is the first thing I imagine when I hear that an organization is silo-ed off:



The boundaries are clearly defined, the foundation is well set, and it's very aged and well established. That doesn't mean any of it is particularly good or bad, but it has certainly stood the test of time. Navigating that landscape requires more than striking a delicate balance of ego and seniority.


Once upon a time, we had a very delicate situation we were trying to tackle. It may sound simple and straightforward, but as you'll see, things were far from easy. We were faced with deploying a syslog server. Things literally do NOT get any easier than that! When I first found out about this (security) initiative, I was told that it had been a "work in progress" for over two years, and that no syslog servers had been deployed yet. Wait. Two years? Syslog server. None deployed?! This can’t be that difficult, can it? Welcome to the silo-ed organization, right?


On its surface, it sounds so simple, yet as we started to peel back the onion:


Security needed syslog servers deployed.

The storage team would need to provision the capacity for these servers.

The virtualization team would need to deploy the servers.

The networking team would need to provide IP addresses, and the appropriate VLANs, and advertise the VLANs as appropriate if they did not exist.

The virtualization team would then need to configure those VLANs in their networking stack for use.

Once all that was accomplished, the networking and security teams would need to work together to configure devices to send syslog data to these servers.


All that is straightforward, and easy to do when everyone works together! The disconnected, non-communicating silos prevented that from happening for years because everyone felt everyone else was responsible for every action and it’s a lot easier to not do things than to work together!


Strangely, what probably helped drive this success the most was less the clear separation of silo-by-silo boundaries and more the responsibility taken in managing this as a single project. When things are done within a silo, they’re often done in a bubble, beginning and ending without notifying anyone outside of that bubble. That makes a certain sense: when driving a car, we’re all on the same road together, and our actions may influence each other’s (lane changes, signal changes, and the like), but what music I’m listening to in my car has no influence on any other car.


So, while we all have our own interdependencies that exist within our silos, when we’re working together ACROSS silos on a shared objective, we can be successful together as long as we recognize the big picture.   Whether we recognize that individually, or we do collectively with some dictated charter, we can still be successful. When I started this piece, I was more focused on the effects and influence we can make as individuals within our silos, and the interaction and interoperability with others within silos. But I came to realize that when we each individually manage our responsibilities within a “project,” we become better together. That said, I'm not implying that formal project management is required for any or all multi-silo interactions. It really comes down to accepting responsibility as individuals, and working together on something larger than ourselves and our organization, not just seeing our actions as a transaction with no effect on the bigger whole.


Then again, I could be crazy and this story may not resonate with any of you.   


Share your input on what you’ve found helps you work better together, whether it be inter-silo, intra-silo, farming silos, you name it!

From the dawn of computers and networks, log management has been an integral part of managing and monitoring IT processes. Today, managing log messages is instrumental to troubleshooting network issues and detecting security breaches. Log management involves collecting, analyzing, transmitting, storing, archiving, and disposing of log data created in your IT environment. With thousands of log messages created every minute, centralized log management and rapid analysis of problematic logs are some of the critical requirements faced by IT Pros.


Kiwi Syslog® Server is an easy-to-install, simple log management software that helps you capture and analyze logs so you know what’s happening inside your routers, switches, and other network devices. Here are the five most useful log management operations performed by Kiwi Syslog Server:

1. Syslog monitoring

Kiwi Syslog Server collects syslog messages and SNMP traps from servers, firewalls, routers, switches, and syslog-enabled devices, and displays these messages in a centralized web console for secure analysis. It also offers various filtering options to sort and view messages based on priority, host name, IP address, time, etc. You can also generate graphs on syslog messages for auditing purposes when needed.
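For context, a syslog message is just a short line of text with a numeric priority prefix, conventionally sent over UDP port 514. A minimal Python sketch of emitting one toward a collector such as Kiwi Syslog Server (the collector address here is a made-up example):

```python
# Minimal sketch: send a syslog message over UDP. The collector address is
# an assumption; 514/UDP is the conventional syslog port.
import socket

def format_syslog(message, facility=16, severity=6):
    # PRI = facility * 8 + severity; 16 is local0, 6 is "informational"
    return "<{}>{}".format(facility * 8 + severity, message)

def send_syslog(message, host="192.0.2.10", port=514):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(format_syslog(message).encode(), (host, port))
```

The severity number in that prefix is what drives the priority-based filtering and alerting described here.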

2. Syslog alerting

Kiwi Syslog Server’s intelligent alerting sends real-time alerts and notifications when syslog messages meet specified criteria based on time, source, type, and so on. Kiwi Syslog Server has some default priority levels, such as emergency, alert, critical, error, warning, etc. to help you understand the severity of any given situation. Based on the alert conditions, you can set up automatic actions, including the following:

  • Trigger an email notification
  • Receive alerts via sound notification
  • Run an external program or script
  • Log to file, Windows® event log, or database
  • Forward specific messages to another host


3. Log retention and archival processes

Using the scheduler built into Kiwi Syslog Server, you can automate log archival and clean-up processes.


You can accomplish the following using the log archival options in Kiwi Syslog Server:

  • Automate log archival operations by defining the source, destination, archival frequency, and notification options
  • Copy or move logs from one location to another
  • Compress files into individual or single archives
  • Encrypt archives and create multi-part archives
  • Create file or archive hashes, run external programs, and much more


With the scheduled clean-up option, you can schedule the removal/deletion of log files from a source location if it matches specific criteria. You can schedule the clean-up process at specified application/start-up times, or any time or date you wish.

4. Log forwarding

You can use Kiwi Syslog Server to forward log messages to servers, databases, other syslog servers, and SIEM systems. The Log Forwarder for Windows tool included in the Kiwi Syslog Server download package allows you to forward event logs from Windows servers and workstations as syslog messages to Kiwi Syslog Server.


5. Transport syslog messages

Kiwi Secure Tunnel, which is included in the Kiwi Syslog Server download, helps you securely transport syslog messages from any network devices and servers to your syslog server. It collects log messages from all configured network devices and systems, and transports them to the syslog server across a secure link in any network (LAN or WAN).


Read more…


Kiwi Syslog Server

As you can see, Kiwi Syslog Server helps you step up your log management operations while saving you invaluable time. Try a free trial of Kiwi Syslog Server now »

I have lots of conversations with colleagues and acquaintances in my professional community about career paths. The question that inevitably comes up is whether they should continue down their certification path with specific vendors like VMware, Microsoft, Cisco, and Oracle, or should they pursue new learning paths, like AWS and Docker/Kubernetes?


Unfortunately, there is no answer that fits each individual because we each possess different experiences, areas of expertise, and professional connections. But that doesn't mean you can't fortify your career.  Below are tips that I have curated from my professional connections and personal experiences.

  1. Never stop learning. Read more, write more, think more, practice more, and repeat.
  2. Be a salesperson. Sell yourself! Sell the present and the potential you. If you can’t sell yourself, you’ll never discover opportunities that just might lead to your dream job.
  3. Follow the money. Job listings will show you where companies are investing their resources. If you want to really future-proof your job, job listings will let you know what technical skills are in demand and what the going rate is for those skills.


As organizations embrace digital transformation, there are three questions that every organization will ask IT professionals:

  1. Is it a skill problem? (Aptitude)
  2. Is it a hill problem? (Altitude)
  3. Is it a will problem? (Attitude)

How you respond to these questions will determine your future within that organization and in the industry.


So what do you think of my curated tips? What would you add or subtract from the list? Also, how about those three organizational questions? Are you being asked those very questions by your organization? Let me know in the comment section.


A final note: I will be at Interop ITX in the next few weeks to discuss this among all the tech-specific conversations. If you will be attending, drop me a line in the comment section and let’s meet up.

So, I’m sure you're all aware of the Google phishing scam. It conveniently presents a few key items that I would like to discuss.


What we know, as in what Google will tell us, is that the attack did not actually access any information. Rather, it merely gathered contacts and re-sent the phishing email for fake Google docs. Clearly, we need to discuss the key identifiers of how to protect yourself from similar attacks. The phishing emails were sent from an address that was supposedly legitimate. Now if that doesn't look fishy, I don’t know what does. Regardless, people obviously opened it.


Another critical element is that the link to the fake Google doc led to nothing more than a long chain of craziness, instead of a normal Google doc location. However, like most phishing, the message appears to be from someone you know. So how can we protect ourselves?


Google installed several fixes within an hour. This shows great security practice on their side. We have to know that there is no one-size-fits-all for security, period. New breaches are happening every second, and we don’t always know the location, intent, or result of these attacks. What we can do is be mindful that we are no longer free-range users, and we have a personal responsibility to be aware of attacks, both at home and at work.


So, I'd like to help you learn the basics of looking for and recognizing phishing emails. First, and always, begin with being suspicious. Here are some ideas to help strengthen your Spidey senses:


  • Report phishing emails to your IT team or personal email account providers. If they don’t know, they can't fix the issue. They may eventually find out, but think of this as your friendly Internet Watch program.
  • Avoid attacks. NEVER give personal information unless you know why you are being asked for it, and are 100% able to verify the email address. Make sure the email address actually matches the sender.
  • Hover over links and verify if they are going to the correct location.
  • Update your browser security settings. Google released a fix for this and pushed it out within hours.
  • Patch your devices -- including MOBILE! Android had an updated phishing release from Google within hours.
  • Stop thinking of patches for your phone as a feature request.


We can be our own cyber security eye in the sky! All it takes for us to be hacked, breached, or attacked is motivation and time, so we must be diligent and not let down our guards. Being vigilant is critical, as is proactively protecting ourselves at home and at work by following a few simple practices.


And another thing: let's stop broadcasting our SSIDs at home like a bat signal. There are little things we can do everywhere. Go big and implement MAC address filtering so you can control who gets onto your Wi-Fi. (Take it from someone who has four teenage daughters.)





The Actuator - May 3rd

Posted by sqlrockstar Employee May 3, 2017

Had a great time at the Salt Lake City SWUG last week and wanted to say THANK YOU to all who attended. I'm already looking forward to my next SWUG. But before that, I need to get my stuff together for Techorama in Antwerp in three weeks.


I've got a few long links to share today, so set aside some time for each. The last two are fun ones though, and I hope you enjoy them as much as I did.


Northrop Grumman can make a stealth bomber – but can't protect its workers' W-2 tax forms

At first glance, you might think this is an example of a company not protecting their data, but that's not the case. Northrop outsourced their tax portal to Equifax, meaning there is a lot more to this story than just blaming Northrop.


How Online Shopping Makes Suckers of Us All

I'm a fan of data mining, and I've been shopping online for decades. I never once thought to myself that the online markets could be manipulating the pricing. Makes total sense now. But I still got a great price on those ten pounds of nutmeg.


The Myth of a Superhuman AI

For anyone worried about the machines rising up to kill us all, this article gives hope.


Who is Publishing NSA and CIA Secrets, and Why?

With all the recent hacks and leaks, I've been wondering what the bigger picture would look like. I never thought it would be the result of bragging.


Here’s Why Juicero’s Press is So Expensive

At first, I thought this story was funny; now it's just sad to think about how much money has been wasted on something that nobody ever asked for, or wants. I can't imagine spending $400 for a juice machine, plus a $35/week subscription on top of that. That's over $2,000 in juice money in the first year. If you know someone spending that much money on juice, you should smack them in the mouth and then teach them how to make their own juice for a fraction of the cost.


Unicorn Startup Simulator

Okay, I was a bit harsh on the juice folks in the last link. So, here's a link to help remind us all how difficult it is for a company in the Valley to succeed.


Hilarious Sayings That Don’t Make Sense Translated

Because I enjoy languages, and infographics, and apparently the name Juicero is Russian for "to hang noodles on the ears."


At the SWUG last week in Salt Lake City, the museum had this relic outside of our room:


Federal IT professionals spend much time and money implementing sophisticated threat management software to thwart potential attackers, but they often forget that one of the simplest methods hackers use to access sensitive information is through social media. The world’s best cybersecurity tools won’t help if IT managers inadvertently share what appear to be innocuous status updates but in reality reveal details about their professions that can serve as tasty intel for disruptors.


On LinkedIn®, for example, attackers can view profiles of network and system administrators and learn what systems targets are working on. This approach is obviously much easier and more efficient than running a blind scan trying to fingerprint a target.


However, federal IT professionals can actually use social media networks to block attackers. By sharing information amongst their peers and colleagues, managers can effectively tag hackers by using some of their own tactics against them.


Most attackers are part of an underground community, sharing tools, tactics, and information faster than any one company or entity can keep up.


Federal IT professionals can use threat feeds for information gathering and defense. Threat feeds are heralded for quickly sharing attack information to enable enhanced threat response. They can consist of simple IP addresses or network blocks associated with malicious activity, or include more complex behavioral analysis and analytics.


While threat feeds will not guarantee security, they are a step in the right direction. They allow administrators to programmatically share information about threats and create mutual defenses much stronger than any one entity could do on its own. There’s also the matter of sharing data across internal teams so that all agency personnel are better enabled to recognize threats, though this is often overlooked.
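As a minimal illustration of the simplest kind of feed, here's a sketch that consumes a plaintext list of addresses and CIDR blocks (one entry per line, `#` for comments, which is a common but not universal feed convention) using nothing beyond the standard library:

```python
import ipaddress

def parse_feed(feed_text):
    """Parse a plaintext threat feed: one IP address or CIDR block
    per line, with '#' marking comment lines."""
    networks = []
    for line in feed_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # strict=False tolerates host addresses written with a mask
        networks.append(ipaddress.ip_network(line, strict=False))
    return networks

def is_flagged(address, networks):
    """Return True if the address falls inside any flagged block."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in networks)

# Sample feed using documentation ranges, not real threat data.
feed = """
# example feed
203.0.113.0/24
198.51.100.7
"""

blocklist = parse_feed(feed)
print(is_flagged("203.0.113.42", blocklist))  # True
print(is_flagged("192.0.2.1", blocklist))     # False
```

A real deployment would pull the feed over HTTPS on a schedule and push the resulting blocks into firewall or router ACLs, but the core matching logic is no more complicated than this.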


An easy way to share information internally is to have unified tools or dashboards that display data about the state of agency networks and systems. Often, performance data can be used to pinpoint security incidents. The best way to start is to make action reports from incident responses more inclusive. The more the entire team understands and appreciates how threats are discovered, the more vigilant the team can be overall in anomaly detection and in raising red flags when warranted.


Federal IT professionals could use the Internet Storm Center as a resource on active attacks; it publishes information about the top malicious ports being used by attackers and the IP addresses of the attackers themselves. It’s a valuable destination.


The bottom line is that while all federal IT professionals must be diligent and guarded about what they share on social media, that doesn’t mean they should delete their accounts. Used correctly, social media and online information sharing can effectively help them unite forces and gain valuable insight to fight a common and determined enemy.


Find the full article on GovLoop.

The only constant truth in our industry is that technology is always changing. At times, it’s difficult to keep up with everything new being introduced while staying on top of your day-to-day duties. That challenge grows even harder if these new innovations diverge from the direction the company you work for is heading. Ignoring such change is a bad idea; failing to keep up with where the market is heading is a recipe for stagnation and eventual irrelevance. So how do you keep up with these things when your employer doesn’t sponsor or encourage your education?


1) The first step is to come to the realization that you're going to need to spend some time outside of work learning new things. This can be difficult for a lot of reasons, especially if you have a family or other outside obligations. Your career is a series of priorities, though, and while it may not (and probably should not) be the highest thing you prioritize, it has to at least be on the list. Nobody is going to do the work for you, and if you don’t have the support of your organization, you’re going to have to carve out the time on your own.


2) Watch/listen/read/consume, a lot. Find people who are writing about the things you want to learn and read their blogs or books. Don’t just read their blogs, though. Add them to a program that harvests their RSS feeds so you are notified when they write new things. Find podcasts that address these new technologies and listen to them on your commute to/from work. Search YouTube to find people who are creating content around the things you want to learn. I have found the technology community to be very forthcoming with information about the things that they are working on. I’ve learned so much just from consuming the content that they create. These are very bright people sharing the things they are passionate about for free. The only thing it costs is your time. Some caution needs to be taken here though, as not everyone who creates content on the internet is right. Use the other resources to ask questions and validate the concepts learned from online sources.


3) Find others like you. The other thing that I have found about technology practitioners is that, contrary to the stereotype of awkward nerds, many love to be social and exist within an online community. There are people just like you hanging out on Twitter, in Slack groups, in forums, and other social places on the web. Engage with them and participate in the conversations. Part of the problem of new technology is that you don’t know what you don’t know. Something as simple as hearing an acronym/initialism that you haven’t heard before could lead you down a path of discovery and learning. Ask questions and see what comes back. Share your frustrations and see if others have found ways around them. The online community of technology practitioners is thriving. Don't miss the opportunity to join in and learn something from them.


4) Read vendor documentation. I know this one sounds dry, but it is often a good source of guidance on how a new technology is being implemented. It will often include the fundamental concepts you need to know in order to implement whatever it is you are learning about. Take terms that you don’t understand and search for them. Pay attention to the caveats in the way a vendor implements a technology; they will tell you a lot about its practical limitations. You do have to read between the lines a bit, and filter out the vendor-specific material (unless you are looking to learn about a specific vendor), but this content is often free and incredibly comprehensive.


5) Pay for training. If all of the above doesn’t round out what you need to learn, you’re just going to have to invest in yourself and pay for some training. This can be daunting as week-long onsite courses can cost thousands of dollars. I wouldn’t recommend that route unless you absolutely need to. Take advantage of online computer-based training (CBT) from sites like CBT Nuggets, Pluralsight, and ITProTV. These sites typically have reasonable monthly or yearly subscription fees so you can consume as much content as your heart desires.


6) Practice, practice, practice. This is true for any learning type, but especially true when you’re going it alone. If at all possible, build a lab of what you’re trying to learn.  Utilize demo licenses and emulated equipment if you have to. Build virtual machines with free hypervisors like KVM so you can get hands-on experience with what you’re trying to learn. A lab is the only place where you are going to know for sure if you know your stuff or not. Build it, break it, fix it, and then do it all again. Try it from a different angle and test your assumptions. You can read all the content in the world, but if you can’t apply it, it isn’t going to help you much.


Final Thoughts


Independent learning can be time-consuming and, at times, costly. It helps to realize that any investment of time or money is an investment in yourself and the skills you can bring to your next position or employer. If done right, you’ll earn it back many times over through the salary increases you’ll see by bringing new and valuable skills to the table. However, nobody is going to do it for you, so get out there and start finding the places where you can take those next steps.


Is Automation the New Normal?

Posted by scuff Apr 28, 2017

As an IT pro, automation fascinates me -- as a concept. While I spend much of my day on support tasks that don't seem to be able to be automated, I'm also surrounded by business owners and online entrepreneurs crying out that automation is essential to business success. The world seems to be in lust with automation.

Before I delve into what automation actually is (stay tuned for a future post), I'd like to know if the automation bell is ringing just as loudly in your world. Do we even have to debate if automation is optional in today's IT environment? Or is the concept just a bunch of hype from a growing number of automation vendors?


My theory is that the adoption of automation is relative to the size of your environment and the size of your IT team. In the enterprise space, you wouldn’t think twice about scripting desktop software deployments or using a deployment tool. It just doesn’t make sense to touch every device manually. In smaller organizations, you might argue that it takes longer to automate a process than it would to do it manually on five or 10 servers. Getting over that cost of adoption is key. You have to be happy knowing it WILL take you longer to research and automate things the first time, but the payoff comes in cost savings every time you need to repeat that process in the future.


Automation is also good for removing the "unique human" from the picture. Lessen your reliance on that one person who knows how to do the thing! Program the thing to be done by an automation tool so that others on the IT team know how to use it, too.


I think we would also see a difference in the rate of automation based on what you’re actually automating. Is it a priority to automate new server builds, or do you first tackle automatic service restarts (an example of a support issue that can heal itself before it needs our intervention)?
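To make that kind of self-healing restart concrete, here is a minimal Python sketch. It assumes a systemd host, and the `check` and `restart` hooks are injectable stand-ins so the same logic isn't tied to one service manager:

```python
import subprocess

def ensure_running(service, check=None, restart=None):
    """Restart a service if it isn't active.

    check/restart default to systemctl calls, but can be swapped
    out for other service managers (or for test stubs).
    """
    if check is None:
        # `systemctl is-active --quiet` exits 0 when the unit is active
        check = lambda s: subprocess.run(
            ["systemctl", "is-active", "--quiet", s]).returncode == 0
    if restart is None:
        restart = lambda s: subprocess.run(["systemctl", "restart", s])

    if check(service):
        return "already running"
    restart(service)
    return "restarted"
```

Schedule something like this from cron or a monitoring tool's alert action and the routine 2 a.m. "service down" page starts to disappear; the point is that the loop is trivial once you decide it's worth writing.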


Does the attitude of your IT department (and your organization) have an impact on how you view automation? We’ve had automation of sorts for a long time, from Kix32 scripts to Group Policy settings. Has your organization stopped there, or have you fully embraced modern automation recipes and infrastructure as code? Are PowerShell and Bash a regular, helpful part of your day, a necessary evil, or still as foreign as speaking Hungarian (unless, of course, you speak Hungarian)?


I’ve asked a lot of questions in this article and I’m really keen to hear your thoughts. I mix with some amazing people who have automation finely tuned, and other IT pros who are still wondering why they’d invest the time. As DevOps meets traditional IT ops, does automation provide the common ground of configuring code to benefit IT pros?

How should somebody new to coding get started learning a language and maybe even automating something? Where should they begin?


It's probably obvious that there are a huge number of factors that go into choosing a particular language to learn. I'll look at that particular issue in the next post, but before worrying about which arcane programming language to choose, maybe it's best to take a look at what programming really is. In this post, we'll consider whether it's going to be something that comes naturally to you, or require a very conscious effort.


If you're wondering what I mean by understanding programming rather than understanding a language, allow me to share an analogy. When I'm looking for, say, a network engineer with Juniper Junos skills, I'm aware that the number of engineers with Cisco skills outnumbers those with Juniper skills perhaps at a ratio of 10:1 based on the résumés that I see. So rather than looking for who can program in Cisco IOS and who can program in Junos OS, I look for engineers who have an underlying understanding of the protocols I need. The logic here is that I can teach an engineer (or they can teach themselves) how to apply their knowledge using a new configuration language, but it's a lot more effort to go back and teach them about the protocols being used. In other words, if an engineer understands, say, the theory of OSPF operation, applying it to Junos OS rather than IOS is simply a case of finding the specific commands that implement the design the engineer already understands. More importantly, learning the way a protocol is configured on a particular vendor's operating system is far less important than understanding what those commands are doing to the protocol.


Logical Building Blocks


Micro Problem: Multiply 5 x 4


Here's a relatively simple example of turning a problem into a logical sequence. Back in the days before complex instruction sets, many computer CPUs did not have a multiply function built in, and offered only addition and subtraction as native instructions. How can 5x4 be calculated using only addition or subtraction? The good news for anybody who has done common core math (a reference for the readers in the USA) is that it may be obvious that 5x4 is equivalent to 5+5+5+5. So how should that be implemented in code? Here's one way to do it:

answer = 0  // create a place to store the eventual answer
answer = answer + 5  // add 5
answer = answer + 5  // add 5 (2nd time)
answer = answer + 5  // add 5 (3rd time)
answer = answer + 5  // add 5  (4th time)

At the end of this process, answer should contain a value of 20, which is correct. However, this approach isn't very scalable. What if next time I need to know the answer to 5 x 25? I really don't want to have to write the add 5 line twenty-five times! More importantly, if the numbers being multiplied might be determined while the program is running, having a hard-coded set of additions is no use to us at all. Instead, maybe it's possible to make this process a little more generic by repeating the add command however many times we need to in some kind of loop. Thankfully there are ways to achieve this. Without worrying about exactly how the loop itself is coded, the logic of the loop might look something like this:

answer = 0
number_to_add = 5
number_of_times = 4
do the following commands [number_of_times] times:
  answer = answer + [number_to_add]

Hopefully that makes sense written as pseudocode. We define the number that we are multiplying (number_to_add), and how many times we need to add it to the answer (number_of_times), and the loop will execute the addition the correct number of times, giving us the same answer as before. Now, however, to multiply different pairs of numbers, the addition loop never needs to change. It's only necessary to change number_to_add and number_of_times.


This is a pretty low-level example that doesn't achieve much, but understanding the logic of the steps taken is something that can then be implemented across multiple languages.




I will add (before somebody else comments!) that there are other ways to achieve the same thing in any given language. The point I'm making is that by understanding a logical flow, it's possible to implement what is quite clearly the same logical sequence of steps in multiple different languages in order to get the result we wanted.
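As one concrete instance, the pseudocode above translates almost line for line into Python (any other language with a loop construct would work equally well):

```python
def multiply(number_to_add, number_of_times):
    """Multiply by repeated addition, mirroring the pseudocode:
    add number_to_add to the answer, number_of_times times."""
    answer = 0
    for _ in range(number_of_times):
        answer = answer + number_to_add
    return answer

print(multiply(5, 4))   # 20
print(multiply(5, 25))  # 125
```

Notice that nothing inside the loop changes when the operands change; only the two inputs do, which is exactly the scalability win described above.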


Macro Problem: Automate VLAN Creation


Having looked at some low level logic, let's look at an example of a higher level construct to demonstrate that the ability to follow (and determine) a logical sequence of steps applies all the way up to higher levels as well.


In this example, I want to define a VLAN ID and a VLAN Name, and have my script create that VLAN on a list of devices. At the very highest level, my theoretical sequence might look like this:

Login to switch
Create VLAN

After some more thought, I realize that I need to do those steps on each device in turn, so I need to create some kind of loop:

do the following for each switch in the list (s1, s2 ... sN):
  Login to switch
  Create VLAN

It occurs to me that before I begin creating the VLANs, I ought to confirm that it doesn't exist on any of the target devices already, and if it does, I should stop before creating it. Now my program logic begins to look like this:

do the following for each switch in the list (s1, s2 ... sN):
  Login to switch
  Check if VLAN exists
  IF the chosen VLAN exists on this device, THEN stop!

do the following for each switch in the list (s1, s2 ... sN):
  Login to switch
  Create VLAN

The construct used to determine whether to stop or not is referred to as an if/then/else clause. In this case, IF the VLAN exists, THEN stop (ELSE, implicitly, keep running).
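The two passes above can be sketched in Python. To be clear, the device functions here (`login_to_switch`, `vlan_exists`, `create_vlan`) are hypothetical placeholders standing in for whatever actually talks to the hardware (Netmiko, NAPALM, a vendor API, or similar):

```python
switches = ["s1", "s2", "s3"]
vlan_id, vlan_name = 100, "engineering"
created = []

def login_to_switch(switch):
    # Placeholder: would open a session to the device.
    return {"switch": switch}

def vlan_exists(session, vlan_id):
    # Placeholder: would query the device's VLAN table.
    return False

def create_vlan(session, vlan_id, vlan_name):
    # Placeholder: would push the VLAN configuration.
    created.append((session["switch"], vlan_id, vlan_name))

# Pass 1: stop before touching anything if the VLAN already exists.
for switch in switches:
    session = login_to_switch(switch)
    if vlan_exists(session, vlan_id):
        raise SystemExit(f"VLAN {vlan_id} already exists on {switch}")

# Pass 2: the VLAN is clear everywhere, so create it on each switch.
for switch in switches:
    session = login_to_switch(switch)
    create_vlan(session, vlan_id, vlan_name)
```

The structure (check everything first, then change everything) is the part that carries over regardless of which library fills in the placeholders.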

Each step in the sequence above can then be broken down into smaller parts and analyzed in a similar way. For example:

Login to router
IF login failed THEN:
| log an error
| stop
ELSE:
| log success

Boolean (true/false) logic is the basis for all these sequences, and multiple conditions can be tested simultaneously and even nested within other clauses. For example, I might expand the login process to cope with a RADIUS failure:

Login to router
IF login failed THEN:
| IF error message was "RADIUS server unavailable" THEN:
| | attempt login using local credentials
| IF login still failed THEN:
| | log an error
| | stop
log success
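Here's a loose Python sketch of that nested logic. The `authenticate()` callable is a hypothetical stand-in for the real login mechanism, returning a (success, error_message) pair:

```python
def login_to_router(authenticate):
    """Attempt a login, falling back to local credentials when the
    RADIUS server is unavailable. authenticate(use_local=...) is a
    hypothetical callable returning (success, error_message)."""
    success, error = authenticate(use_local=False)
    if not success:
        if error == "RADIUS server unavailable":
            # RADIUS is down: retry with local credentials.
            success, error = authenticate(use_local=True)
        if not success:
            print(f"login error: {error}")
            return False
    print("login success")
    return True
```

The inner IF is nested inside the outer failure branch, exactly mirroring the indentation bars in the pseudocode.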


So What?


The so what here is that if following the kind of nested logic above seems easy, then in all likelihood so will coding. Much of the effort in coding is figuring out how to break a problem down into the right sequence of steps, the right loops, and so forth. In other words, the correct flow of events. To be clear, choosing a language is important too, but without having a grasp of the underlying sequence of steps necessary to achieve a goal, expertise in a programming language isn't going to be very useful.


My advice to a newcomer taking the first steps down the Road to Code (cheese-y, I know), is that it's good to look at the list of tasks that would ideally be automated and see if they can be broken down into logical steps, and then break those steps down even further until there's a solid plan for the approach. Think about what needs to happen in what order. If information is being used in a particular step, where did that information come from? Start thinking about the problems with a methodical, programming mindset.


In this series of posts, it's obviously not going to be possible to teach anybody how to code, but instead I'll be looking at how to select a linguistic weapon of choice, how to learn to code, ways to get started and build on success, and finally to offer some advice on when coding is the right solution.


The IT Training Quandary

Posted by mbleib Apr 27, 2017

What do you do when your employer says no more training? What do you do when you know that your organization should move to the cloud or at least some discrete components? How do you stay current and not stagnate? Can you do this within the organization, or must you go outside to gain the skills you seek?


This is a huge quandary…


Or is it?


Not too long ago, I wrote about becoming stale in your skill sets, and how that becomes a career-limiting scenario. The “gotcha” in this situation is that often your employer isn't as focused on training as you are. The employer may believe in getting you trained up, but you may feel as if that training is less than marketable or forward thinking. Or, worse, the employer doesn’t feel that training is necessary. They may view you as being capable of doing the job you’ve been asked to do, and that the movement toward future technology is not mission critical. Or, there just might not be budget allotted for training.


These scenarios are confusing and difficult. How is one to deal with the disparity between what you want and what your employer wants?

The need for strategy, in this case, is truly critical. I don’t advocate misleading your employer, of course, but we are all looking out for ourselves and what we can do to leverage our careers. Some people are satisfied with what they’re doing and don’t long to sharpen their skills, while others are like sharks, not living unless they’re moving forward. I consider myself to be among the latter.


Research free training options. I know, for example, that Microsoft has much of its Azure training available online for no cost. Again, I don’t recommend boiling the ocean, but you can choose what you want to select strategically. Of course, knowing the course you wish to take might force you to actually pay for the training you seek.


Certainly, a sandbox or home lab environment, where you can build up and tear down test platforms, provides self-training. Of course, earning certifications in that mode is somewhat difficult, as is gaining access to the right tools to accomplish your training in the ways the vendor recommends.


I advocate researching a product category that would benefit the company in today’s environment but can also act as a catalyst for a move to the cloud. Should that be on the horizon, the most useful on-ramp is likely Backup as a Service or Disaster Recovery as a Service. Research into new categories of backup, like Cohesity, Rubrik, or Actifio, where data management, location, and data awareness are critical, can move the organization toward cloud approaches. If you can effectively sell the benefits of your vision, your star should rise in the eyes of management. Sometimes it may feel like you’re dragging the technology behind you, or pushing undesired tech toward your IT management, but fighting the good fight is well worth it. You can orchestrate a cost-free proof of concept on products like these to facilitate the research, and thus prove the benefit to the organization without significant outlay.


In this way, you can guide your organization toward the technologies that are most beneficial to them by solving today’s issues while plotting forward-thinking strategies. Some organizations are simply not conducive to this approach, which leads me to my next point.


Sometimes, the only way to better your skills or improve your salary and stature is outside your current organization. This is a very dynamic field, and movement from vendor to end-user to channel partner has proven a fluid stream. If you find that you’re just not getting satisfaction within your IT org, you should consider whether moving on is the right approach. This drastic step should be taken with caution, as the appearance of hopping from gig to gig can be viewed by an employer as a negative. However, there are times when the only way to move upward is to move onward.


The Actuator - April 26th

Posted by sqlrockstar Employee Apr 26, 2017

Heading to Salt Lake City this week for the SWUG meeting on Thursday. If you are in the area I hope you have the chance to stop by and say hello. I'll be there to talk data and databases and hand out some goodies. I haven't been to Salt Lake City in a few years, and I'm looking forward to being there again even if there is a chance of snow in the forecast.


As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!


Steve Ballmer Serves Up a Fascinating Data Trove

As someone who loves data, I find this data project to be the best thing since Charles Nelson Minard. Head over and get started on finding answers to questions you didn't know you wanted to ask.


The New York Times to Replace Data Centers with Google Cloud, AWS

Eventually, we will hit a point where *not* having your data hosted by a cloud provider will make headlines.


Do You Want to be Judged on Intentions or Results?

Short but thought-provoking post. I learned a long time ago that no one cares about effort; they only care about results. But I didn't stop to think about how I want to be judged, or how I could control the conversation.


Cybersecurity Startup Exposed Hospital Network Data in Demos

Whoops. I'm starting to understand why they didn't earn that contract.


Microsoft is Bringing AI and More To SQL Server 2017

In case you missed it, last week Microsoft announced the new features coming in SQL Server 2017. It would appear that Microsoft sees the future of data computing to include features that go beyond just traditional storage.


Windows and Office align feature release schedules to benefit customers

Microsoft also announced fewer updates to their products. But what they are really announcing is the transition from traditional client software to subscription-based software for their core products, such as Office and Windows.


Uber tried to fool Apple and got caught

If you were looking for another reason to dislike how Uber operates as a company, this is the link for you.


Took the family for a drive to the Berkshires last Friday and realized that my debugging skills are needed everywhere I go:


Hybrid IT continues to grow as more agencies embrace the cloud, so I wanted to share this blog written last year by our former Chief Information Officer, Joel Dolisy, which I think is still very valid and worth a read.



Most federal IT professionals acknowledge that the cloud is and will be a driving component behind their agencies’ long-term success, but no one expects to move all of their IT infrastructure to the cloud.


Because of regulations and security concerns, many administrators feel it’s best to keep some level of control over their data and applications. They like the efficiencies that the cloud brings, but they aren’t convinced that it’s suitable for everything.


Hybrid IT environments offer benefits, but they can also introduce greater complexity and management challenges. Teams from different disciplines must come together to manage various aspects of in-house and cloud-based solutions. Managers must develop special skillsets that go well beyond traditional IT, and new tools must be deployed to closely monitor this complex environment.


Here are a few strategies managers can implement to close the gap between the old and the new:


1. Use tools to gain greater visibility

Administrators should deploy tools that supply single access points to metrics, alerts, and other data collected from applications and workloads, allowing IT staff to remediate, troubleshoot, and optimize applications, regardless of where they may reside.


2. Use a micro-service architecture and automation

Hybrid IT models will require agencies to become more lean, agile, and cost effective. Traditional barriers to consumption must be overcome and administrators should gain a better understanding of APIs, distributed systems, and overall IT architectures.


Administrators must also prepare to automatically scale, move, and remediate services.


3. Make monitoring a core discipline

Maintaining a holistic view of your entire infrastructure allows IT staff to react quickly to potential issues, enabling a more proactive strategy.


4. Remember that application migration is just the first step

Migration is important, but the management following the initial move might be even more critical. Managers must have a core understanding of an application’s key events and performance metrics and be prepared to remediate and troubleshoot issues.


5. Get used to working with distributed architectures

Managers must become accustomed to working with various providers handling remediation as a result of outages or other issues. The result is less control, but greater agility, scalability, and flexibility.


6. Develop key technical skills and knowledge

Agency IT professionals need to learn service-oriented architectures, automation, vendor management, application migration, distributed architectures, API and hybrid IT monitoring, and more.


7. Adopt DevOps to deliver better service

DevOps breaks down barriers between teams, allowing them to pool resources to solve problems and deliver updates and changes faster and more efficiently. This makes IT services more agile and scalable.


8. Brush up on business skills

Administrators will need to hone their business-savvy sides. They must know how to negotiate contracts, become better project managers, and establish the technical expertise necessary to understand and manage various cloud services.


Managing hybrid IT environments takes managers outside their comfort zones. They must commit to learning and honing new skills, and use the monitoring and analytics tools at their disposal. It’s a great deal to ask, but it’s the best path forward for those who want to create a strong bridge between the old and the new.


Find the full article on Government Computer News.

The other day, as is often the case when an engineer is deep in a troubleshooting task that requires a restart if interrupted, I got a request for advice. “Hey, if you have a second I wanted to ask a question about standardizing DevOps tools. Should my friend use Chef, Puppet, or something else to get DevOps going?” He couldn't understand task-switch loss, so I did my best impression of the wisest help desk gurus on THWACK. I took a breath, found a smile, and answered the question.


“Standardizing” the tools of DevOps is anathema to the goal of DevOps; it's a bad habit carried over from old-school, under-resourced IT. With waterfall-based, help desk interrupt-driven, top-down IT, there’s too often a belief that if only the organization would adopt a Magic Tool, all would be well. DevOps, and more correctly the Agile principles that beget DevOps as an outcome, is bigger than any tool, vendor, or technology.


For an organization to successfully make DevOps work, especially to achieve its promise of breaking logjams blocking the digital transformation enterprise so desperately wants, standardizing Agile principles and methods should be the real goal. For example, if the Ops team adopts Scrum and comes to value ScrumMasters who resist corrupting urges to grow scope after sprints begin, it doesn’t matter if the team chooses Ansible, Chef, Puppet, or AWS or Azure services for automation.


If critical teams standardize on methods that result in predictable, quality outcomes, they can each choose the tools that work best for them, or change them as needed to take advantage of new features. Tools selection or replacement becomes just one more element in the product backlog, to be balanced against business goals, like everything else. It substantially reduces the tendency to paralyze the team waiting on The Penultimate Tool of the Ages, before it can even get started.


Where standardizing DevOps methods over tools really pays off is in assured quality. If a team standardizes on a principle of Minimal Acceptable Monitoring, it will ensure throughout Dev, testing, deployment, and Ops that the right tools are used to quantify performance and user experience. This crucial measure of service quality then informs sprint goals, increasing quality and even efficiency over time. Even better, by adopting effective (versus impossibly idealized) Continuous Deployment, IT can help ensure DevOps in which often-overlooked goals like security awareness are a part of every change rather than an occasional review project.


If your aspiring pre-DevOps organization makes only one decision on a standard, make it this: everyone learns the 10 key principles of Agile. Note: I’m not saying anything about Scrum, Chef, stand-up meetings, or the Kanban board that runs my house chores. I’m not talking about actual adoption, timelines, project division, team realignment, or even two-pizza teams. The point is not to be prescriptive in the early stages. Resist the IT urge to go immediately to implementation, resolution, and ticket closure. Let.. this.. soak.. in. Dream about how you’d build IT if you could start over without any existing technology or processes. How would you solve the macro requirement: How do I delight humans who use technology? And a final pro tip: standardization on Agile principles is best done over several sessions as a team, offsite, with adult beverages to go with the pizza.
