
Home from Techorama in Belgium and back in the saddle for a short week before I head to Austin on Monday morning. I do enjoy visiting Europe, and Belgium in particular. Life just seems to move at a slower pace there.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The big asks of British Airways

Last year I wrote about a similar outage with Delta, so here's some equal time for a similar failure with BA. Who knew that managing IT infrastructure could be so hard?

 

Facebook Building Own Fiber Network to Link Data Centers

I'm kinda shocked they don't already have this in place. But more shocking is the chart that shows internal traffic growth, mostly a result of Facebook having to replicate more pictures and videos of cats.

 

Who Are the Shadow Brokers?

Interesting thought exercise from Bruce Schneier about this group and what might be coming next.

 

Web Developer Security Checklist

Every systems admin needs a checklist similar to this one.

 

All the things we have to do that we don't really need to do: The social cost of junk science

A nice and quick reminder about the hidden costs of junk science. Or, the hidden costs of good science.

 

The Calculus of Service Availability

So the next time someone tells you they need 99.9% uptime for a system, you can explain to them what that really means.
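
For a rough sense of what the nines actually buy you, here's a quick back-of-the-envelope calculation of my own (not from the linked article):

```python
# Downtime budgets implied by common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60

for target in (99.0, 99.9, 99.99, 99.999):
    allowed_minutes = MINUTES_PER_YEAR * (1 - target / 100)
    print(f"{target}% uptime allows about {allowed_minutes:,.1f} minutes of downtime per year")
```

At 99.9%, that works out to roughly 8.8 hours of downtime a year, which always seems to surprise the person who asked for it.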

 

How Your Data is Stored, or, The Laws of the Imaginary Greeks

This is a bit long, so set aside some time. But you'll learn all about the problems (and solutions) in distributed computing.

 

One thing I love about Belgium is how they make shopping for the essentials easy:

[photo]

By Joe Kim, SolarWinds Chief Technology Officer

 

Federal IT professionals must consider the sheer volume and variety of devices connected to their networks, from fitness wearables to laptops, tablets, and smartphones. The Internet of Things (IoT) and the cloud also significantly impact bandwidth and present security concerns, spurred by incidents such as the Office of Personnel Management breach of 2014.

 

Despite this chaotic and ever-changing IT environment, network and data center consolidation is well underway for the Defense Department, layering additional concerns on top of an already complex backdrop. Since 2011, the DoD has closed more than 500 data centers. That's well below the goal the agency initially set forth, so it issued a directive last year to step up the pace; subsequently, the Data Center Optimization Initiative was introduced to further speed efforts.

 

To be successful, federal IT professionals need a system that accounts for all of the data that soon will stream through their networks. They also need to get a handle on all the devices employees use and will use to access and share that data, all while ensuring network security.

 

Meeting the Challenges of Tomorrow Today

 

Network monitoring has become absolutely essential, but some solutions are simply not capable of dealing with the reality of today’s networks.

 

Increasingly, federal IT managers house some applications on-premises while others use hosted solutions, creating a hybrid IT environment that can be difficult to manage. Administrators will continue to go this route as they attempt to fulfill the DoD's ultimate goal: greater efficiency. Hybrid IT creates monitoring challenges, as it makes it difficult for administrators to “see” everything that is going on with the applications.

 

Going Beyond the Basics

 

This complexity will require network administrators to go beyond initial monitoring strategies and begin implementing processes that provide visibility into the entire network infrastructure, whether it’s on-premises or hosted. Hop-by-hop analysis lets administrators effectively map critical pathways and gain invaluable insight into the devices and applications using the network. It provides a complete view of all network activity, which will become increasingly important as consolidation accelerates.

 

At the very least, every IT organization should employ monitoring best practices to proactively plan for consolidation and ensuing growth, including:

 

  1. Adding dedicated monitoring experts who can provide holistic views of agencies’ current infrastructure and calculate future needs.
  2. Helping to ensure that teams understand the nuances of monitoring hardware, networks, applications, virtualization, and configurations and that they have access to a comprehensive suite of monitoring tools.
  3. Equipping teams with tools that address scalability needs. This will be exceptionally important as consolidation begins to truly take flight and data needs rapidly expand.

 

Looking Reality in the Eye

 

DoD network consolidation is a slow yet major undertaking, and a necessity for ensuring efficiency. It comes with extreme challenges, particularly a much greater degree of network complexity. Effectively wrangling this complexity requires network administrators to go beyond simple monitoring and embrace a more comprehensive monitoring strategy that will better prepare them for the future.

 

Find the full article on Signal.


Two weeks ago, I had the privilege of attending and speaking at ByNet Expo in Tel Aviv, Israel. As I mentioned in my preview article, I had hoped to use this event to talk about cloud, hybrid IT, and SolarWinds' approach to these trends, to meet with customers in the region, and to enjoy the food, culture, and weather.

 

I'm happy to report that the trip was a resounding success on all three fronts.

 

First, a bit of background:

 

Founded in 1975, ByNet (http://www.bynet.co.il/en/) is the region's largest systems integrator, offering professional services and solutions for networking, software, cloud, and more.

 

I was invited by SolarWinds' leading partner in Israel, ProLogic (http://prologic.co.il/), who, honestly, are a great bunch of folks: they not only know their stuff when it comes to SolarWinds, but they're also amazing hosts and fantastic people to just hang out with.

 

Now you might be wondering what kind of show ByNet (sometimes pronounced "bee-naht" by the locals) Expo is. Is it a local user-group-style gathering? A tech meet-up? A local business owners' luncheon?

 

To answer that, let me first run some of the numbers:

  • Overall attendees: 4,500
  • Visitors to the SolarWinds/ProLogic booth: ~1,000
  • Visitors to my talk: ~150 (SRO for the space I was in)

 

The booth was staffed by Gilad, Lior, and Yosef, who make up part of the ProLogic team. On the SolarWinds side, I was joined by Adriane Burke out of our Cork office. That was enough to attract some very interesting visitors, including the Israeli Ministry of Foreign Affairs, Orbotec, Soreq, the Israeli Prime Minister's Office, Hebrew University, McAfee, and three different branches of the IDF.

 

We also got to chat with some of our existing customers in the region, like Motorola, 3M, the Bank of Israel, and Bank Hapoalim.

 

Sadly missing from our visitor list, despite my repeated invitations on Twitter, was Gal Gadot.

 

But words will only take you so far. Here are some pictures to help give you a sense of how this show measures up:

 

[photos from the event]

 

 

But those are just some raw facts and figures, along with a few flashy photos. What was the show really like? What did I learn and see and do?

 

First, I continue to be struck by the way language and culture inform and enrich my interactions with customers and those curious about technology. Whether I'm in the booth at a non-U.S. show such as CiscoLive Europe or ByNet Expo, or when I'm meeting with IT pros from other parts of the globe, the use of language, the expectations of where one should pause when describing a concept or asking for clarification, the graciousness with which we excuse a particular word use or phrasing - these are all the hallmarks of both an amazing and ultimately informative exchange. And also of individuals who value the power of language.

 

And every time I have the privilege to experience this, I am simply blown away by its power. I wonder how much we lose, here in the States, by our generally monolingual mindset.

 

Second, whatever language they speak, SolarWinds users are the same across the globe. Which is to say they are inquisitive, informed, and inspiring in the way they push the boundaries of the solution. So many conversations I had were peppered with questions like, "Why can't you...?" and "When will you be able to...?"

 

I love the fact that our community pushes us to do more, be better, and reach higher.

 

With that said, I landed on Friday morning after a 14-hour flight, dropped my bags at the hotel and - what else - set off to do a quick bit of pre-Shabbat shopping. After that, with just an hour or two before I - and most of the country - went offline, I unpacked and got settled in.

 

Twenty-four hours later, after a Shabbat spent walking a sizable chunk of the city, I headed out for a late-night snack. Shawarma, of course.

 

Sunday morning I was joined by my co-worker from Cork, Adrian Burke. ProLogic's Gilad Baron spent the day showing us Jerusalem's Old City, introducing us to some of the best food the city has to offer, and generally keeping us out of trouble.

 

And just like that, the weekend was over and it was time to get to work. On Monday we visited a few key customers to hear their tales of #MonitoringGlory and answer questions. Tuesday was the ByNet Expo show, where the crowd and the venue rivaled anything Adrian and I have seen in our travels.

 

On my last day, Wednesday, I got to sit down in the ProLogic offices with a dozen implementation specialists to talk some SolarWinds nitty-gritty: topics like product roadmaps, use cases, and trends they are seeing out in the field.

 

After a bit of last-minute shopping and eating that night, I packed and readied myself to return home Thursday morning.

 

Random Musings

  • On Friday afternoon, about an hour before sundown, there is a siren that sounds across the country, telling everyone that Shabbat is approaching. Of course, nobody is OBLIGATED to stop working, but it is striking to me how powerful a country-wide signal to rest can be. This is a cultural value that we do not see in America.
  • It is difficult to take a 67-year-old Israeli taxi driver seriously when he screams into his radio at people who obviously do not understand him. Though challenging, I managed to hide my giggles.
  • Traveling east is hard. Going west, on the other hand, is easy.
  • You never "catch up" on sleep.
  • Learning another language makes you much more sensitive to the importance of pauses in helping other people understand you.
  • Everything in Jerusalem is uphill. Both ways.
  • On a related note: there are very few fat people in Jerusalem.
  • Except for tourists.
  • Orthodox men clearly have their sweat glands removed. Either that or they install personal air conditioners inside their coats. That's right. I said coats. In May. When it's 95 degrees in the sun.

 


If you’ve read any of my articles you’ll know I’m old school. Automation in my days was batch files, Kix32 scripts, then group policy and Microsoft Systems Management Server (before SMS was a popular messaging protocol). I’m fortunate to mingle with some very smart Enterprise tech people occasionally, and they are talking new automation languages. Infrastructure as Code. Chef. Puppet. Ansible. Wow. I’m going to pause for a minute in envy.

 

To start with, there’s the debate of “which one do you choose?” I’m going to leave that to anyone who has more knowledge of these products than me. Are you using one of those, or an alternative product, to handle server and infrastructure configuration management?

 

Do we all need to be versioning our infrastructure or has this DevOps thing gone a little too far?

 

Or is there a tipping point – probably an organizational size – at which this makes way more sense than how we used to manage infrastructure? Does your organization slide under that, making you wonder what all the fuss is about and why you’d bother?

 

Meanwhile, back in my SMB world, PowerShell is nudging in more and more as “the way you do things.” In fact, many Office 365 administration tasks can’t be performed in the web GUI and require a PowerShell command. That also means knowing how to install the required PowerShell components and issue the correct commands to connect to your Office 365 or Azure Active Directory tenant. Even if I ignored the operational efficiencies of going command line again (hello, Lotus Domino Server), I would still be dragged into the world of PowerShell when it’s the only way to do something.

 

If this is all new to you, or if you live and breathe this stuff, my next question is: how do you start? Whether you’re resigned to needing this stuff on your CV now or whether you are genuinely excited about learning something new (or you might be somewhere in the middle), what are your go-to resources?

 

Product websites are always a good place to start. Many are offering videos and webinars as an alternative to drowning in text with screenshots.

Are you searching through Github for infrastructure as code samples and PowerShell scripts, or are you learning from peers on Reddit?

Maybe you’ve gone with a different learning resource altogether, like the courses at Pluralsight?

 

If automation is the new normal (or the old normal), how do we pick up new automation skills? Let me know what’s worked for you.

 

Disclaimer: I’m a Pluralsight author of one course that has nothing to do with the topics I’ve just written about. And there are no affiliate links here either.

Network performance monitoring feels a bit like a moving target sometimes.  Just as we normalize processes and procedures for our monitoring platforms, some new technology comes around that turns things upside down again. The most recent change that seems to be forcing us to re-evaluate our monitoring platforms is cloud computing and dynamic workloads. Many years ago, a service lived on a single server, or multiple if it was really big. It may or may not have had redundant systems, but ultimately you could count on any traffic to/from that box to be related to that particular service.

 

That got turned on its head with the widespread adoption of virtualization. We started hosting many logical applications and services on one physical box. Network performance to and from that one server was no longer tied to a specific application, but generally speaking, these workloads remained in place unless something dramatic happened, so we had time to troubleshoot and remediate issues when they arose.

 

In comes the cloud computing model, DevOps, and the idea of an ephemeral workload. Rather than have one logical server (physical or virtual), large enough to handle peak workloads when they come up and highly underutilized otherwise, we are moving toward containerized applications that are horizontally scaled. This complicates things when we start looking at how to effectively monitor these environments.

 

So What Does This Mean For Network Performance Monitoring?

 

The old way of doing things simply will not work any longer. Assuming that a logical service can be directly associated with a piece of infrastructure is no longer possible. We’re going to have to create some new methods, as well as enhance some old ones, to extract the visibility we need out of the infrastructure.

 

What Might That Look Like?

 

Application Performance Monitoring

This is something that we do today, and SolarWinds has an excellent suite of tools to make it happen. What needs to change is our perspective on the data that these tools are giving us. In our legacy environments, we could poll an application every few minutes because not a lot changed between polling intervals. In the new model of system infrastructure, we have to assume that the application is scaled horizontally behind load balancers and that each poll only touched one of many deployed instances. Application polling and synthetic transactions will need to happen far more frequently to give us a broader picture of performance across all instances of that application.
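
As a minimal sketch of what "poll far more frequently" might look like, here's a synthetic transaction probe of my own; the endpoint, interval, and threshold are hypothetical, and this is an illustration rather than any vendor's implementation:

```python
# Minimal synthetic transaction probe (illustrative only).
# The endpoint, interval, and threshold are hypothetical.
import time
import urllib.request

ENDPOINT = "https://app.example.com/health"   # hypothetical health-check URL
INTERVAL_SECONDS = 10                         # far more frequent than a legacy 5-minute poll
SLOW_THRESHOLD_SECONDS = 2.0

def probe(url):
    """Return (success, response_time_seconds) for a single synthetic request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    return ok, time.monotonic() - start

while True:
    ok, elapsed = probe(ENDPOINT)
    if not ok or elapsed > SLOW_THRESHOLD_SECONDS:
        # Each probe may land on a different instance behind the load balancer,
        # so a single bad result only hints at which instance is unhealthy.
        print(f"ALERT: ok={ok} response_time={elapsed:.2f}s")
    time.sleep(INTERVAL_SECONDS)
```

The point isn't the code itself; it's that each probe only sees one instance, so the sampling rate has to rise if you want a meaningful picture of the whole pool.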

 

Telemetry

Rather than relying on polling to tell us about new configurations/instances/deployments on the network, we need the infrastructure to tell our monitoring systems about changes directly. Push rather than pull works much better when changes happen often and may be transient. We see a simple version of this in syslog today, but we need far better automated intelligence to help us correlate events across systems and analyze the data coming into the monitoring platform. This data will then need to be associated with our traditional polling infrastructure to understand the impact of a piece of infrastructure going down or misbehaving. This will likely also include heuristic analysis to determine baseline operations and variations from that baseline. Manually reading logs every morning isn’t going to cut it as we move forward.
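
Here's a toy illustration of the baseline-and-deviation idea on a stream of pushed metric values; the window size and threshold are arbitrary, and this is not any particular vendor's implementation:

```python
# Toy baseline/deviation check for a stream of pushed metric samples.
from collections import deque
from statistics import mean, stdev

WINDOW = 60          # number of recent samples that define "normal"
SIGMA_THRESHOLD = 3  # flag values more than 3 standard deviations from the baseline

history = deque(maxlen=WINDOW)

def observe(value):
    """Ingest one pushed sample and flag it if it deviates from the rolling baseline."""
    if len(history) >= 2:
        baseline, spread = mean(history), stdev(history)
        if spread > 0 and abs(value - baseline) > SIGMA_THRESHOLD * spread:
            print(f"Anomaly: {value:.1f} vs baseline {baseline:.1f} (+/- {spread:.1f})")
    history.append(value)

# Example: steady traffic with one spike at the end.
for sample in [100, 102, 98, 101, 99, 100, 103, 97, 100, 500]:
    observe(sample)
```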

 

Traditional Monitoring

This doesn’t go away just because we’ve complicated things with a new form of application deployment. We still will need to keep monitoring our infrastructure for up/down, throughput, errors/discards, CPU, etc.

 

Final Thoughts

Information Technology is an ever-changing field, so it makes sense that we’re going to have to adjust our methods over time. Some of these changes will be in how we implement the tools we have today, and some of them are going to require our vendors to give us better visibility into the infrastructure we’re deploying. Either way, these types of challenges are what make this work so much fun.

Today, I want to bring your attention to a great series of webcasts that are available here: Security Kung Fu Webcast Series

 

I will stress the importance of each one of these over the next few weeks as I review and reflect on what I learned from these webcasts.

 

That's right. I'm reviewing the webcasts as a critic in this series because I deeply believe in security, and I want to make sure you guys are aware of the content provided in each one. Please follow me on this security adventure and dive into the importance of the information they covered. Also, I'll be mixing them up, so the reviews won't be presented in order.

 

Takeaways

 

1. There is a difference between being secure and being compliant.

  • I can comply with regulations, but does that cover everything within my infrastructure?
  • I can secure my environment, but does that mean I am meeting my overall compliance needs?

 

These are questions that I like to ask whenever I'm involved with any security plan. They help make sure that my environment stays fluid and is assessed from both sides of the argument.

 

2. Too many rules to follow! I just want to do my job!

  • News flash: Security is a business issue. It's NOT just for IT!
  • This webcast talks about the rules and compliance needs for different types of businesses. However, all levels of users need to focus on security. This means engaging with and training them at every opportunity.

 

The biggest issue that I see is the lack of solid security planning that is integral to an organization's overarching business strategy. This webcast offers insight on ways to use tools to help you complete security plans faster and strengthen your proactive and reactive security needs.

 

Summary

 

The Security vs Compliance webcast will help guide you toward implementing a solid security plan. I joined this webcast and offered some of my opinions on being secure vs compliant, so please feel free to let me know if you have more to add!

 

Remember, "Security is a very fluid dance. The music may change, but you have to keep dancing."

 

If there is something specific you guys want me to bring up, please let me know! I love talking security and how to use what you have to support any security plan. Leave me a security comment and I'll see if I can get this ramped up and answer in a future Geek Speak blog!


 

We’ve all seen dashboards for given systems. A dashboard is essentially a quick view into a given system, and we are seeing them more and more often in monitoring. Your network monitoring software may present a dashboard of all switches and routers, down to individual ports, or all the way up to every port on a given WAN connection. For a large organization, this can be quite a cumbersome view to digest at a glance. Networking is a great example of fully fleshed-out click-down views: should any “Red” appear on that dashboard, a simple click into it, and then deeper and deeper, should help you discover the source of the problem wherever it may be.

 

Other dashboards are now being created where the information presented for a given environment is not so dynamic, and the genuinely useful parts are harder to discern.

 

The most important thing about a dashboard is that the key information should be presented so clearly that the person glancing at it does not need to know exactly how to fix whatever the issue is; the information simply needs to be understandable to whoever is viewing it. If a given system is presenting an error of some sort, the viewer should be able to grasp the relevant information that matters to them.

 

Should that dashboard be fluid or static? Fluidity is necessary for those doing a deep dive into the information at the time, but a static dashboard can be perfectly satisfactory if the viewer is assigning the resolution to someone else; that is more of a managerial or administrative view.

 

I believe that truly significant dashboards can present either of these perspectives. Usability should be limited only by the viewer’s needs.

 

I’ve seen some truly spectacular dynamic dashboard presentations. A few that spring to mind are Splunk, an analytics engine for far more than just SIEM; Plexxi, a networking company whose dashboard offers outstanding deep-dive capabilities and animations; and, of course, some of the wonderfully intuitive dashboards from SolarWinds. This is not to say that these are the limits of what a dashboard can present, but only a representation of many that are stellar.

 

The difficulty with any fluid dashboard is this: how hard is it for a manager of the environment to create the functional dashboard the viewer needs? If my goal were to fashion a dashboard for spotting, say, network or storage bottlenecks, I would want to see, at least initially, a green/yellow/red gauge indicating whether there were “HotSpots” or areas of concern. If that were all I needed, then as a manager I’d assign someone to look into it; but if I were an administrator, I’d want to interact with that dashboard and dig down to see exactly where the issue existed and how to fix it.

 

I’m a firm believer in the philosophy that a dashboard should provide useful information, but only what the viewer requires. Something with some fluidity is always preferable.


 

Hey, everybody!  Welcome to this week’s quandary of Root Cause, Correlation Analysis, and having to collaborate across cross-functional teams where you have all the hands but none of the fingers!

 

If that sounds confusing to you, it’s because frankly, it is! I’d like to share a tale of woe and heartbreak driven by frustration in functional and equally dysfunctional IT team dynamics!

 

The story is set in a fairly cross-functional organization. You're probably familiar with the type: while there are clearly defined teams with responsibilities, there are also hard lines in the sand of who does what, where, when, how, and why. Honestly, this story rings so true that I’ve seen it blur with other ones. If that isn’t excitement, I don’t know what is!

 

As the story goes, our team had deployed a series of tools enabling a cross-stack data correlation engine, allowing us to identify and truly correlate events as they happen and to make troubleshooting better and easier. The problem was the burden of responsibility: this team had ALL the responsibility for identifying problems, but none of the authority to actually resolve them, let alone the authorization to work on them! What makes this particularly fun is that we were chartered with, and burdened by, being held accountable for the issues until they were resolved. If that sounds like some kind of decision made in the government sector… I wouldn’t tell you you’re wrong!

 

This is where technical skills, while essential, were not good enough. And frankly, all of the project management skills in the world wouldn’t matter here, because a “problem” is not a “project” per se. No, we had to get everyone on board, every stakeholder at the table, where egos were strong and stubborn. Just as we discussed recently in Better Together - Working Together in Silo Organizations and When Being an Expert Isn’t Good Enough: Master of All Trades, Jack of None, merely knowing the answer or the cause of the problem wasn’t good enough here. All parties would reject the issue as being theirs, even in light of evidence proving otherwise, and would instead resort to finger-pointing. Fortunately, we started to navigate these waters by educating teams on the tools we were using and how they would provide insight into their systems, by granting access to those tools so we weren’t just the messenger they were trying to shoot but a helpful informant, and by offering our guidance as IT professionals to help them work through the errors or problems so they could resolve them more effectively.

 

It sounds simple and fairly straightforward, but it took months or longer to reach a sense of team parity, and the process would start over whenever new members joined a team or new problems surfaced.

 

It’s been an interesting element of systems operations: intelligence and knowledge don’t mean much of anything unless you have all parties engaged, and even then there is no guarantee that people will agree, let alone do anything about it.

 

Have you faced a similar issue, where you identify a problem that isn’t yours to fix and then struggle to get it resolved? Or perhaps you’ve been held accountable for something that isn’t your responsibility and endured the woes of trying to get other parties to own it?

 

Or share any other story of problem correlation and root cause analysis, and how you were able to resolve it better or faster than we did!

You've decided it's time to learn how to code, so the next step is to find some resources and start programming your first masterpiece. Hopefully, you've decided that my advice on which language to choose was useful, and you're going to start with either Python, Go or PowerShell. There are a number of ways to learn, and a number of approaches to take. In this post, I'll share my thoughts on different ways to achieve success, and I'll link to some learning resources that I feel are pretty good.

 

How I Began Coding

 

When I was a young lad, my first foray into programming was using Sinclair BASIC on a Sinclair ZX81 (which in the United States was sold as the Timex Sinclair 1000). BASIC was the only language available on that particular powerhouse of computing excellence, so my options were limited. I continued by using BBC BASIC on the Acorn BBC Micro Model B, where I learned to use functions and procedures to avoid repetition of code. On the PC I got interested in what could be accomplished by scripting in MS-DOS. On Macintosh, I rediscovered a little bit of C (via MPW). When I was finally introduced to NetBSD, things got interesting.

 

I wanted to automate activities that manipulated text files, and UNIX is just an amazing platform for that. I learned to edit text in vi (aka vim, these days) because it was one tool that I could pretty much guarantee was installed on every installation I got my hands on. I began writing shell scripts which looped around calling various instantiations of text processing utilities like grep, sed, awk, sort, uniq, fmt and more, just to get the results I wanted. I found that often, awk was the only tool with the power to extract and process the data I needed, so I ended up writing more and more little awk scripts to fill in. To be honest, some of the pipelines I was creating for my poor old text files were tricky at best. Finally, somebody with more experience than me looked at it and said, Have you considered doing this in Perl instead?

 

Challenge accepted! At that point, my mission became to create the same functionality in Perl as I had created from my shell scripts. Once I did so, I never looked back. Those and other scripts that I wrote at the time are still running. Periodically, I may go back and refactor some code, or extract it into a module so I can use the same code in multiple related scripts, but I have fully converted to using a proper scripting language, leaving shell scripts to history.

 

How I Learned Perl

 

With my extensive experience with BASIC and my shallow knowledge of C, I was not prepared to take on Perl. I knew what strings and arrays were, but what was a hash? I'd heard of references but didn't really understand them. In the end—and try not to laugh because this was in the very early days of the internet—I bought a book (Learn Perl in 21 Days), and started reading. As I learned something, I'd try it in a script, I'd play with it, and I'd keep using it until I found a problem it didn't solve. Then back to the book, and I'd continue. I used the book more as a reference than as a true training guide (I don't think I read much beyond about Day 10 in a single stretch; after that it was on an as-needed basis).

 

The point is, I did not learn Perl by working through a series of 100 exercises on a website. Nor did I learn Perl by reading through the 21 Days book, and then the ubiquitous Camel book. I can't learn by reading theory and then applying it. And in any case, I didn't necessarily want to learn Perl as such; what I really wanted was to solve my text processing problems at that time. And then as new problems arose, I would use Perl to solve those, and if I found something I didn't know how to do, I'd go back to the books as a reference to find out what the language could do for me. As a result, I did not always do things the most efficient way, and I look back at my early code and think, Oh, yuck. If I did that now I'd take a completely different approach. But that's okay, because learning means getting better over time and — this is the real kicker — my scripts worked. This might matter more if I were writing code to be used in a high-performance environment where every millisecond counts, but for my purposes, "It works" was more than enough for me to feel that I had met my goals.

 

In my research, I stumbled across a great video which put all of that more succinctly than I did:

 

Link: How to Learn to Code - YouTube

 

In the video (spoiler alert!), CheersKevin states that you don't want to learn a language; you want to solve problems, and that's exactly it. My attitude is that I need to learn enough about a language to be dangerous, and over time I will hone that skill so that I'm dangerous in the right direction, but my focus has always been on producing an end product that satisfies me in some way. To that end, I simply cannot sit through 30 progressive exercises teaching me to program a poker game simulator bit by bit. I don't want to play poker; I don't have any motivation to engage with the problem.

 

A Few Basics

 

Having said that you don't want to learn a language, it is nonetheless important to understand the ways in which data can be stored and some basic code structure. Here are a few things I believe it's important to understand as you start programming, regardless of which language you choose to learn:

 

  • Scalar variable: a way to store a single value, e.g. a string (letters/numbers/symbols), a number, a pointer to a memory location, and so on.
  • Array / list / collection: a way to store an (ordered) list of values, e.g. a list of colors ("red", "blue", "green") or (1, 1, 2, 3, 5, 8).
  • Hash / dictionary / lookup table / associative array: a way to store data by associating a unique key with a value, e.g. the key might be "red", and the value might be the HTML hex value for that color, "#ff0000". Many key/value pairs can be stored in the same object, e.g. colors=("red"=>"#ff0000", "blue"=>"#0000ff", "green"=>"#00ff00").
  • Zero-based numbering: the number (or index) of the first element in a list (array) is zero; the second element is 1, and so on. Each element in a list is typically accessed by putting the index (the position in the list) in square brackets after the name. In our previously defined array colors=("red", "blue", "green"), the elements are colors[0]="red", colors[1]="blue", and colors[2]="green".
  • Function / procedure / subroutine: a way to group a set of commands together so that the whole block can be called with a single command. This avoids repetition within the code.
  • Objects, properties, and methods: an object can have properties (information about, or characteristics of, the object) and methods (properties which execute a function when called). The properties and methods are usually accessed using dot notation. For example, I might have an object mycircle which has a property called radius; this would be accessed as mycircle.radius. I could then have a method called area which calculates the area of the circle (πr²) based on the current value of mycircle.radius; the result would be accessed as mycircle.area(), where parentheses are conventionally used to indicate that this is a method rather than a property.
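
To make those definitions concrete, here's how the same structures look in Python (one of the languages suggested at the start of this post). The mycircle example mirrors the object described above:

```python
import math

# Scalar variable: a single value.
greeting = "hello"

# List (array): an ordered collection, indexed from zero.
colors = ["red", "blue", "green"]
print(colors[0])          # "red" -- zero-based numbering

# Dictionary (hash / associative array): unique keys mapped to values.
hex_codes = {"red": "#ff0000", "blue": "#0000ff", "green": "#00ff00"}
print(hex_codes["red"])   # "#ff0000"

# Function: a reusable block of commands.
def double(n):
    return n * 2

# Object with a property and a method, like the mycircle example above.
class Circle:
    def __init__(self, radius):
        self.radius = radius          # property

    def area(self):                   # method
        return math.pi * self.radius ** 2

mycircle = Circle(2.0)
print(mycircle.radius)    # property access
print(mycircle.area())    # method call (note the parentheses)
```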

 

All three languages here (and indeed most other modern languages) use data types and structures like the above to store and access information. It's therefore important to have just a basic understanding before diving in too far. This is in some ways the same logic as gaining an understanding of IP before trying to configure a router; each router may have a different configuration syntax for routing protocols and IP addresses, but they're all fundamentally configuring IP ... so it's important to understand IP!

 

Some Training Resources

 

This section is really the impossible part, because we all learn things in different ways, at different speeds, and have different tolerances. However, I will share some resources that either I have personally found useful or that others have recommended as being among the best:

 

Python

 

 

The last course is a great example of learning in order to accomplish a goal, although perhaps only useful to network engineers as the title suggests. Kirk is the author of the NetMiko Python Library and uses it in his course to allow new programmers to jump straight into connecting to network devices, extracting information and executing commands.

 

Go

 

Go is not, as I think I indicated previously, a good language for a total beginner. However, if you have some experience of programming, these resources will get you going fairly quickly:

 

 

As a relatively new, and still changing, language, Go does not have a wealth of training resources available. However, there is a strong community supporting it, and the online documentation is a good resource even though it's more a statement of fact than a learning experience.

 

PowerShell

 

 

Parting Thoughts

 

Satisfaction with learning resources is so subjective, it's hard to be sure if I'm offering a helpful list or not, but I've tried to recommend courses which have a reputation for being good for complete beginners. Whether these resources appeal may depend on your learning style and your tolerance for repetition. Additionally, if you have previous programming experience you may find that they move too slowly or are too low level; that's okay because there are other resources out there aimed at people with more experience. There are many resources I haven't mentioned which you may think are amazing, and if so I would encourage you to share those in the comments because if it worked for you, it will almost certainly work for somebody else where other resources will fail.

 

Coincidentally, a few days ago I was listening to Scott Lowe's Full Stack Journey podcast (now part of the Packet Pushers network). As he interviewed Brent Salisbury in Episode 4, Brent talked about those lucky people who can simply read a book about a technology (or in this case a programming language) and understand it, whereas his own learning style requires a lot of hands-on work, and the repetition is what drills home his learning. Those two categories of people are going to succeed in quite different ways.

 

Since it's fresh in my mind, I'd also like to recommend listening to Episode 8 with Ivan Pepelnjak. As I listened, I realized that Ivan had stolen many of the things I wanted to say, and said them to Scott late in 2016. In the spirit that everything old is new again, I'll leave you with some of the axioms from RFC1925 (The Twelve Networking Truths), one of Ivan's favorites, which seem oddly relevant to this post, and to the art of programming too:

 

     (6a)  (corollary). It is always possible to add another
           level of indirection.
     (8)   It is more complicated than you think.
     (9)   For all resources, whatever it is, you need more.
     (10)  One size never fits all.
     (11)  Every old idea will be proposed again with a different
           name and a different presentation, regardless of whether
           it works.
     (11a) (corollary). See rule 6a.

I made it to Antwerp and Techorama this week. I am delivering two sessions and looking forward to my first ever presentation on a movie theater screen. I am now wishing I had embedded movie clips into my slides, just for fun.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

CS4G Netsim

An interesting network simulator game, useful for folks like me who still need to get up to speed on all things networking.

 

SQL Server Command Line Tools for MacOS Released

If you were ever wondering about signs that Hell had frozen over, SQL Server on Linux is as good as any other. So would be the release of SQL Server tools for MacOS.

 

Extending the Airplane Laptop Ban

This makes no sense to me, and I am hopeful it never becomes real. I cannot imagine being asked to travel to speak and not have my laptop in my possession at all times.

 

North Korea's Unit 180, the cyber warfare cell that worries the West

Some interesting views into how North Korea may be able to not only hack, but get away with hacking by using the networks inside other countries.

 

Keylogger Found in Audio Driver of HP Laptops

This is why we can't have nice, secure things.

 

U.S. top court tightens patent suit rules in blow to 'patent trolls'

While I like that this was done, something tells me that this will only be a temporary shift. The real winners here, as always, are the lawyers.

 

No photo editing required. Cars in Belgium are this tiny:

[photo]

By Joe Kim, SolarWinds Chief Technology Officer

 

The Data Center Optimization Initiative was introduced last year, superseding the Federal Data Center Consolidation Initiative (FDCCI).

 

While there have been some major wins, including billions of dollars saved and thousands of data centers shuttered, those wins do not change the fact that there are still major cybersecurity concerns surrounding the consolidation effort. According to a SolarWinds and Market Connections cybersecurity survey from last year, these concerns mainly stem from incomplete transitions during consolidation and modernization projects, overly complex enterprise management tools, and a lack of familiarity with new systems.

 

The fact that these concerns are still top of mind several years into the FDCCI is not surprising, considering the rapid evolution of the threat landscape.

 

Let’s take a look at four strategies federal network administrators can adopt to help circumvent this challenge and make their data consolidation efforts a little more secure.

 

1. Create a clearly defined organizational structure

 

Ultimately, everyone in an agency has a hand in data center operations—not just IT administrators, but also developers, managers and executives. Each responsible party should be assigned unique responsibilities and remain in contact with each other. That way, if a breach or outage occurs, the team will be able to work together to address the issue.

 

2. Follow up with lightweight and flexible procedures

 

One of the goals behind the federal government’s modernization effort is to become more agile and flexible, but this should not be confined to hardware and software. Once the organizational structure is defined and it’s time to put processes and procedures in place, agencies should help ensure that they are highly flexible and can adapt to changing conditions.

 

3. Encrypt and segment data at rest and in flight

 

All data, whether at rest or in flight, must be encrypted, especially as agencies continue their data center transitions. There are simply too many risks involved in the transition process itself -- too many places where data is vulnerable and too many opportunities for increasingly savvy hackers or insiders to access information left in the open.
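
As a small illustration of the encrypt-at-rest point, here is a sketch using Python's third-party cryptography package; the file name is a placeholder, and key management (the hard part) is deliberately out of scope:

```python
# Minimal sketch of encrypting a file at rest with symmetric encryption.
# Requires the third-party "cryptography" package; key storage/rotation is out of scope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keep this in a proper key management system
cipher = Fernet(key)

with open("records.csv", "rb") as f:         # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, only a process holding the key can recover the plaintext.
plaintext = cipher.decrypt(ciphertext)
```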

 

Data segmentation is also critical, as it can limit the attack damage to a subset of data. Segmenting can reduce the potential for cascading -- and often catastrophic -- network failures.

 

4. Automate security and gain complete control

 

Administrators must implement solutions that can monitor applications and network activity and deliver patches and updates as necessary. These goals can be achieved with modern performance monitoring software that gives data center managers a complete view of the health of every aspect of their data centers, including compute, storage, network, and applications.

 

Administrators willing to lay the security groundwork now will find their road toward data center consolidation easier to travel. Their efforts will also provide a solid foundation for managing what promises to be a tricky post-consolidation world, where the amount of data continues to grow even as the number of data centers has shrunk.

 

 

Find the full article on Government Computer News.

In the contemporary American cinema classic “Back to the Future,” Marty McFly takes a DeLorean-turned-time-machine into the future, into the past, and subsequently into the future again. If only your network and system infrastructure had a similar means of interdimensional travel to reveal the catalyst for events and incidents. Unfortunately, there is no flux capacitor for your network. You cannot get your firewall up to 88 mph, lock horns with a one-billion-volt bolt of lightning, and go back in time to determine the underlying cause of historical incidents on your network. Instead, we stay vigilant, watching and monitoring our networks for issues and trends over time. However, in most environments, monitoring and observing all devices for ANY event is a daunting task.

 

Luckily for us, most devices log events and have the ability to forward their log files to a centralized syslog server for collection, aggregation, review, and action. These log entries can range from configuration change notifications and port flapping on network devices, to services stopping on a system, or an intrusion. These log messages are paramount to your historical monitoring and, in some cases, to compliance with legal and/or regulatory standards and audits. However, a log can only give you the information you need if you read it. This presents a challenge when many devices, such as firewalls, can produce millions of log messages per minute, many of which you might not need to read at all. With Kiwi Syslog® Server, you no longer have to hunt through log files on each individual device. Instead, they are all at your fingertips, allowing you to collect, filter, parse, and alert on log messages based on your criteria.
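
To show the underlying idea (not how Kiwi Syslog Server itself is built), here is a bare-bones sketch of a centralized UDP syslog listener that archives everything and flags messages matching simple criteria; the port, file name, and keywords are placeholders:

```python
# Bare-bones central syslog listener: collect UDP syslog messages in one file
# and flag the ones matching simple keyword criteria. Illustrative only.
import socket

LISTEN_ADDR = ("0.0.0.0", 514)                      # standard syslog UDP port (binding usually needs elevated privileges)
ALERT_KEYWORDS = ("config", "failed", "intrusion")  # placeholder criteria

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)

with open("central.log", "a") as archive:
    while True:
        data, (host, _port) = sock.recvfrom(8192)
        message = data.decode("utf-8", errors="replace")
        archive.write(f"{host} {message}\n")          # aggregate everything in one place
        if any(keyword in message.lower() for keyword in ALERT_KEYWORDS):
            print(f"ALERT from {host}: {message}")    # stand-in for email/run-program actions
```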

 

Ever vigilant, Kiwi Syslog Server becomes your eyes and ears, watching and listening for unusual log entries so you don’t have to. It is like a DeLorean for your network.

 

Kiwi Syslog Key Benefits

  • Deploy quickly. Accepts Syslog, SNMP, and Event Log data from your existing deployment.
  • Monitor real-time logs. Display logs locally or anywhere through the secure web access module.
  • React to messages. Send email, run programs, or forward data when selected messages arrive.
  • Troubleshoot problems. Centralize logs from systems and network devices to quickly pinpoint issues.
  • Comply with regulations. Implement log retention requirements of SOX, FISMA, PCI-DSS, and more.

 

Kiwi Syslog Key Features

  • NO LIMIT on maximum number of sources
  • Built and tested to handle MILLIONS of messages an hour
  • Run as a service (or foreground application) on most Windows operating systems
  • Collect log data from Syslog messages (both UDP and TCP), SNMP traps, and Windows® Event Logs (through the included Windows Event Forwarder)
  • Display real-time logs in multiple windows in a local viewing console, or from anywhere through secure web access
  • Split written logs by device, IP, hostname, date, or other message or time variables
  • Manage log archives with scheduled compress, encrypt, rename, move, and delete rules
  • Forward logs to other syslog servers, SNMP servers, or databases
  • Send email alerts, run programs, play sounds, and perform other actions when messages arrive
  • Act as a syslog proxy (forwarding messages with original IP information)
  • Ship syslog information securely across insecure networks with included Kiwi Secure Tunnel
  • View trend analysis graphs and send email with traffic statistics

 

Of course, it would be much more fun and adventurous to traverse the space-time continuum. Who wouldn’t want to leap into another dimension to get a glimpse of what’s to come, or a heads-up on things before they happen? However, for those of us without a time machine, there’s always Kiwi Syslog Server. Download it today and start your journey to better understand your network.

I was fortunate enough to be in the audience for my friend, Dave McCrory's presentation at Interop during the Future of Data Summit. Dave is currently the CTO of Basho, and he famously coined the term "data gravity" in 2010. Data gravity, or as friends have come to call it, McCrory's Law, simply states that data is attracted to data. Data now has such critical mass that processing is moving to it versus data moving to processing.

 

Furthermore, Dave introduced this notion of data agglomeration, where data will migrate to and stick with services that provide the best advantages. Examples of this concept include car dealerships and furniture stores being in the same vicinity, just as major cities of the world tend to be close to large bodies of water. In terms of cloud services, this is the reason why companies that incorporate weather readings are leveraging IBM Watson. IBM bought The Weather Company and all their IoT sensors, which has produced and continues to produce massive amounts of data.

 

I can't do enough justice to the quality of Dave's content and its context in our current hybrid IT world. His presentation was definitely worth the price of admission to Interop. Do you think data has gravity? Do you think data agglomeration will lead to multi-cloud service providers within an organization that is seeking competitive advantages? Share your thoughts in the comment section below.


 

I spend a lot of my time with data. Too much time, really. An almost unhealthy amount of time.

 

I read about data. I write about data. I tweet about data.

 

I'm all about the data, no trouble.

 

And because I spend so much time immersed in data I see a lot of mistakes out there. A lot of issues that are avoidable. Data breaches caused by a lack of adequate data security. Disasters made worse by not having backups. Data centers offline but nobody being alerted because the alerting is not configured correctly.

 

Avoiding these issues is simple: Hire someone with a working knowledge of data, databases, and data administration. Of course, good help is hard to find. In the absence of finding good help, you need some guidance about what you can be doing better.

 

So that’s what I want to do for you today. Here are seven things that you can do to love your data, starting today. Not all the items listed here can be done in a day. Some will take longer than others. Review the list, build a plan, and get started on making your data the best it can be.

 

Automation – As a data professional you need to spend time thinking of ways to automate yourself out of a job. I would even suggest that you must be thinking about ways to “cloud-proof” your job. GUIs, and people, don’t scale. Code does.

 

Security – Data is the most critical asset your company owns. Without data, your company would not exist. All that hardware your company owns? Yeah, that’s there to move bits of data back and forth. When you consider the value and criticality of your data, you will understand that it is necessary to deploy data security and privacy tools.

 

Maintenance – In an unscientific study I did last year, I found that the bulk of database performance problems was related to the lack of proper maintenance. In some cases, there is no maintenance being done at all, or it is the wrong maintenance (such as VM snapshots instead of database dumps).

 

Alerting – Many folks lump alerting together with monitoring, but they are different. You use monitoring to decide what you want to alert upon. Alerts require action; everything else is just information you can collect and use later.

 

Monitoring – Here’s the most important thing you need to know about monitoring: If you create inbox rules for your monitoring system, then you’ve already lost. Monitor and measure what you need, but only send emails when action is needed (see above).

 

Analytics – While we are talking about alerting and monitoring, here’s your reminder to learn some basic data analysis and use it against the monitoring data. Learn to spot outliers. Stop guessing about what happened and start learning about what will happen.
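
As one example of moving from "what happened" to "what will happen," here's a tiny sketch of my own that fits a trend line to daily disk-usage samples and projects when the volume fills up; all the numbers are made up for illustration:

```python
# Fit a straight line to daily disk-usage samples and project when the volume fills.
from statistics import mean

days = list(range(14))                             # two weeks of daily samples
used_gb = [500 + 12 * d + (d % 3) for d in days]   # fabricated ~12 GB/day growth
capacity_gb = 1000

# Ordinary least-squares slope and intercept.
x_bar, y_bar = mean(days), mean(used_gb)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(days, used_gb)) / sum(
    (x - x_bar) ** 2 for x in days
)
intercept = y_bar - slope * x_bar

fill_day = (capacity_gb - intercept) / slope       # day index when usage hits capacity
days_remaining = fill_day - days[-1]
print(f"Growing ~{slope:.1f} GB/day; roughly {days_remaining:.0f} days until full")
```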

 

Backups – It’s 2017. Do you know where your database backups are? But it’s not just backups. Last month the cloud went down. Be prepared. Have a proper business continuity plan in place and test the process twice a year.

 

Successful data professionals use these seven items to exercise proper control over their enterprise. Further, many of the issues I read about fall into one of these seven buckets.

 

The difference between being prepared and being unprepared comes down to your willingness to make sure you have each of these seven items covered.

The big news since last week is the WannaCry attack. I've got a lot I want to say on the subject of data security and will put together my thoughts in a different post. But for now, I just want to remind everyone that security is a shared responsibility. With each attack, there seems to be more finger-pointing and fewer solutions being offered.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Everything you need to know about the WannaCry / Wcry / WannaCrypt ransomware

Nice summary from Troy Hunt, helping to make sense of what happened last week. And while it is easy to say "just patch everything", the reality is that some systems aren't able to be patched. The truth is, the current software business model is broken.

 

Why “Just Patch It!” Isn’t as Easy as You Think

Having worked in an industry that doesn't like to touch systems that are working, I can relate to how some systems might be patched infrequently, if ever.

 

Don’t Blame Microsoft For WannaCrypt Vulnerability Exploitation

There are some people out there that believe Microsoft should be providing security updates for every OS they have ever built since 1983. What scares me most is that these same people are allowed to vote and drive cars.

 

Logs and Metrics

What's the difference between logs and metrics? Seems like an easy question to answer. But for some, there is little difference between the two.

 

Understanding the Kubernetes ecosystem

A quick Q&A about Kubernetes that reinforces the concept that there is no silver bullet when it comes to technology.

 

Microsoft debuts Azure Cosmos DB, a superset of its DocumentDB service

Microsoft is taking the first steps toward creating a truly global database for any type of data. Relational, NoSQL, NewSQL, it's all just data, and Microsoft wants to make it easy to store your data with them.

 

Cybercrime on the high seas: the new threat facing billionaire superyacht owners

The struggle is real.

 

I've found the company responsible for every butt-dial phone call ever placed:

[photo]

There is a dramatic shift underway in federal IT on a scale that is rarely seen more than once every decade or so. Computing environments are evolving from traditional on-premises-only to hybrid strategies that migrate some infrastructure to the cloud, while keeping some critical systems onsite.

 

According to SolarWinds’ IT Trends Report 2016: The Hybrid IT Evolution, which includes a survey of government IT professionals, hybrid IT will continue to be the norm for the foreseeable future.

 

What does this mean for federal IT pros? Should we anticipate a re-invention of sorts?

 

At a conceptual level, it means understanding a new normal and how best to operate within this hybrid environment. On a practical level, it means learning how to align current skill sets with new requirements and, just as important, integrating new expertise, such as hybrid IT monitoring and management, data analytics, automation, and cloud application migration with existing skill sets.

 

Understanding the New Normal

 

A hybrid IT environment—the new normal—can be complex. Unlike legacy environments that were relatively homogeneous, hybrid IT environments are inherently heterogeneous.

 

In a hybrid IT environment, different systems and applications exist in different locations. For example, according to the report, 70 percent of respondents say they have migrated applications to the cloud. In addition, 55 percent say they have migrated storage and 36 percent have migrated databases to the cloud.

 

Here’s where the challenge lies for today’s federal IT pros. What is the best way to manage hybrid IT?

 

The key to successful management is maximum visibility. Having a single point of truth across platforms—on-premises and cloud—is essential. Specifically, consider implementing a centralized dashboard to remediate, troubleshoot, and optimize all of your environments.

 

Skill Sets Required for the New IT Prototype

 

Of course, understanding the hybrid IT environment—and having a conceptual management strategy—is only half the equation. Managing a hybrid IT infrastructure requires new skills in addition to those needed to manage on-premises infrastructures.

 

Your team will need to learn service-oriented architectures, automation, vendor management, application migration, distributed architectures, API and hybrid IT monitoring, and more. In fact, the team’s skill set will be the driving force behind the success of your implementation.

 

What kinds of skills will bring the most value to a hybrid environment? According to the report, these are the required skills:

 

 

Also, hybrid IT often involves working with multiple service providers in different geographic locations. IT pros will need to become accustomed to working with various providers handling different tasks. In fact, it may become necessary to know how to negotiate contracts, understand budget management and service level agreements, manage workflows and deadlines, and dissect contract terms and conditions.

 

Conclusion

 

Hybrid IT environments may take federal IT pros outside of their comfort zones. The key to success is commitment: commit to embracing the new environment and commit to learning and honing new skills. Understanding the environment and bringing in the right skill sets and tools will be the key to maximizing the benefits of this new computing reality.

 

Find the full article on our partner DLT’s website.

The latest attack seemingly took the world by surprise. However, most of the affected users were using unpatched and unlicensed versions of Windows. How do we take a stand against ransomware and avoid being sidelined by these attacks? Here are a few things that I do and am happy to share in an effort to help strengthen your resistance against these attacks.

****

Update: Assuming is never a good idea! Of course, data backups are critical in ransomware attacks. But it's not enough to have backups. You must also validate, through testing, that they are usable and that the restore process works.

 

****

  1. File Integrity Monitoring
    1. Monitor your files for things like changing file extensions, file moves, and authorization changes (see the sketch just after this list). Log & Event Manager (LEM) is vital here to help protect your business's information.
  2. Group Policies for Windows
    1. Use CryptoLocker prevention kits that stop ransomware from installing in its most common locations.
    2. Make sure the Users group does not have full access to folders. I see this a lot, where a user group has full access to numerous folders.
    3. Make sure that users do not have rights to the registry!
  3. Static Block List
    1. Block known Tor IP ranges, for example: 146.185.220.0/23
  4. Limit network share access
    1. If attackers are able to penetrate your network and reach a server, you do not want to freely give the ransomware full access to network shares. You also do not want a general user to have access to network shares that hold mission-critical data. Think about this. Make sure you are applying policies and not giving users access to things they shouldn't have. Granting such access gives attackers the same level of access.
  5. Update patching on servers
    1. If you are not patching your servers, you are leaving well-known vulnerabilities wide open. Stop being low-hanging fruit and start being the insect spray that keeps these attacks to a minimum. Patch Manager will help you schedule and push patches out so you are not worrying about staying up to date.
    2. A lab environment is key to making sure your third-party software can safely receive a patch. Software can't anticipate what future patches will bring, so testing patches in a lab first lets you stay current without worrying about breaking an application in the process.
  6. Spam
    1. For the love of everything great, update your spam filters. This is key to keeping malware away from people who are not aware of these attacks and who end up taking the blame. Blocking these emails of destruction keeps your teams safer, and the ones you catch can even be used for user education.
  7. Test your plan
    1. Test out a fake ransomware email with your business. See who reacts and within which departments. This will help you train people within their areas not to react to these types of emails.
    2. You may be surprised at how many people will click and simply give away their passwords. This is an opportunity for you to shine as an IT organization by using this information to help get funds and user training for the business.
  8. Web filter
    1. Control the sites that users can access. Use egress or outbound traffic filtering to block connections to malicious hosts.
  9. Protect your servers and yourselves
    1. Have a companywide anti-virus/malware program that is updated and verified. Patch Manager will help you determine who is up to date and who is not!
  10. Web settings
    1. Verify that your web settings do not allow for forced downloads.
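
Item 1 is easy to prototype before you buy anything. Here's a minimal PowerShell sketch of a file-change watcher, assuming a single share to monitor; the UNC path and the extension list are placeholders of mine, and a real deployment would use LEM or a similar product rather than an ad hoc script:

# Minimal file-change watcher (sketch; path and extensions are illustrative only)
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = "\\fileserver\Shared"        # hypothetical share
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Renamed -Action {
    $path = $Event.SourceEventArgs.FullPath
    # Extensions commonly appended by ransomware families (illustrative list)
    if ($path -match '\.(locky|zepto|crypt|encrypted)$') {
        # Swap this for your real alerting: email, event log, or a SIEM/LEM forwarder
        Write-Warning "Possible ransomware rename detected: $path"
    }
} | Out-Null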

 

 

There are lots of ways to protect ourselves at work and at home. The main reason I focus on the home in my user education is that we can prevent these attacks at work -- to a point. However, when users go home, they are an open door. So including user education on ways to protect home environments is as much the IT team's responsibility as it is the users' own. Once home, the very ransomware you blocked at work could walk right in and take over their machine.

 

We can try to protect ourselves with tools like LEM, which can alert you when users come online and when their files have changed or are being changed. However, NOT clicking the "click bait" email is what will ultimately make end-users stronger links in the chain.

 

I hope this prompts you to raise questions about your security policies and begin having conversations about putting a fluid, active security plan in place. You never know what today or tomorrow will bring in Bitcoin ransom demands...

As I type this, I find myself somewhat unexpectedly winging my way back to Israel. For those who recall, I was here last December for DevOpsDays Tel Aviv (described here and here) with the specific goals to:

 

  • Continue my conversations about the intersection between DevOps and traditional monitoring
  • Increase my knowledge of cloud technologies and the causes behind the push to cloud
  • Eat my body weight in kosher shawarma

 

Very recently, an opportunity to speak at Bynet Expo fell into my lap, giving us a chance to articulate SolarWinds' approach to cloud and hybrid IT monitoring and management. The shawarma was calling me back, so I had to go.

 

This is where I'll be for the next week, soaking up some bright Israeli sunshine, meeting with over 3,000 IT pros in the booth, and offering my thoughts on how Sympathetic Innovation is the key to managing hybrid IT.

 

I can't wait to share the experience with everyone when I get back.

 

Oh, and there may be some food pictures. And even a video or two.

Malware is an issue that has been around since shortly after the start of computing, and it isn't something that is going to go away anytime soon. Over the years, the motivations, sophistication, and appearance have changed, but the core tenets remain the same. The most recent iteration of malware is ransomware: software that takes control of the files on your computer, encrypts them with a key known only to the attacker, and then demands money (a ransom) to unlock the files and return the system to normal.

 

Why is malware so successful? It's all about trust. Users need to be trusted to some degree so they can complete the work they need to do. Unfortunately, the more we entrust to the end-user, the more ability a bad piece of software has to inflict damage on the local system and every system it's attached to. Limiting how much of your systems, files, and network can be modified by the end-user helps mitigate this risk, but it has the side effect of inhibiting productivity and the ability to complete assigned work. It's often a catch-22 for businesses to determine how much security is enough, and malicious actors have been taking advantage of this balancing act to carry out their attacks. Now that these attacks have been systematically monetized, we're unlikely to see them diminish anytime soon.

 

So what can you do to move the balance back to your favor?

 

There are some well-established best practices that you should consider implementing in your systems if you haven't done so already. These practices are not foolproof, but if implemented well should mitigate all but the most determined of attackers and limit the scope of impact for those that do get through.

 

End-user Training: This has been recommended for ages and hasn't been the most effective tool in mitigating computer security risks. That being said, it still needs to be done. The safest way to mitigate the threat of malware is to avoid it altogether. Regularly training users to identify risky computing situations and how to avoid them is critical in minimizing risk to your systems.

 

Implement Thorough Filtering: This refers to both centralized and distributed filtering tools that automatically identify threats and stop users from making a mistake before any damage is done. Examples of centralized filtering are web proxies, email spam/malware filtering, DNS filters, intrusion detection systems, and firewalls. Examples of local filtering include regularly updated anti-virus and anti-malware software. These filtering systems are only as good as the signatures they carry, though, so regular definition updates are critical. Unfortunately, signatures can only be developed for known threats, so this too is not foolproof, but it's a good tool to help ensure older, known variants aren't making it through to end-users to be clicked on and run.

 

The Principle of Least Privilege: This is exactly what it sounds like. It is easy to say and hard to implement and is the balance between security and usability. If a user has administrative access to anything, they should never be logged in for day-to-day activities with that account and should be using the higher privileged account only when necessary. Users should only be granted write access to files and shares that they need write access to. Malware can't do anything with files it can only read. Implementing software that either whitelists only specific applications, or blacklists applications from being run from non-standard locations (temporary internet files, downloads folder, etc…) can go a long way in mitigating the threats that signature-based tools miss.
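
As a quick way to spot where least privilege has drifted, a short PowerShell check of NTFS permissions can flag shares where broad groups hold write access. This is a sketch only; the path and the group names in the pattern are assumptions for illustration:

# Flag overly broad write access on a share (sketch; path and group names are placeholders)
$path = "D:\Shares\Finance"
(Get-Acl -Path $path).Access |
    Where-Object {
        $_.IdentityReference -match 'Everyone|Domain Users' -and
        $_.FileSystemRights  -match 'Write|Modify|FullControl'
    } |
    Select-Object IdentityReference, FileSystemRights, AccessControlType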

 

Patch Your Systems: This is another very basic concept, but something that is often neglected. Many pieces of malware make use of vulnerabilities that are already patched by the vendor. Yes, patches sometimes break things. Yes, distributing patches on a large network can be cumbersome and time consuming. You simply don't have an option, though. It needs to be done.
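
If you want a quick, low-tech check of where you stand, something like the following PowerShell sketch can highlight servers whose most recent hotfix is more than a month old. The server list is a placeholder, and Get-HotFix only sees Windows updates, so treat this as a smoke test rather than a patch audit:

# Smoke test: which servers haven't installed a hotfix in the last 30 days?
$servers = @("SRV01", "SRV02")   # placeholder list -- pull from AD or your CMDB in practice
foreach ($server in $servers) {
    $latest = Get-HotFix -ComputerName $server |
        Sort-Object InstalledOn -Descending |
        Select-Object -First 1
    if (-not $latest -or $latest.InstalledOn -lt (Get-Date).AddDays(-30)) {
        Write-Warning "$server last patched: $($latest.InstalledOn)"
    }
}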

 

Have Backups: If you do get infected with ransomware, and it is successful in encrypting local or networked files, backups are going to come to the rescue. You are doing backups regularly, right? You are testing restores of those backups, right? It sounds simple, but so many find out that their backup system isn't working when they need it the most. Don't make that mistake.

 

Store Backups Offline: Backups that are stored online are at the same risk as the files they are backing up. Backups need to be stored on a removable media and then that media needs to be removed from the network and stored off-site. The more advanced ransomware variants look specifically to infect backup locations, as a functioning backup guarantees the attackers don't get paid. Don't let your last recourse become useless because you weren't diligent enough to move them off-line and off-site.

 

Final Thoughts

 

For those of you who have been in this industry for any time (yes, I'm talking to you graybeards of the bunch), you'll recognize the above list of action items as a simple set of good practices for a secure environment.  However, I would be willing to bet you've worked in environments (yes, plural) that haven't followed one or more of these recommendations due to a lack of discipline or a lack of proper risk assessment skills. Regardless, these tried and true strategies still work because the problem hasn't changed. It still comes down to the blast radius of a malware attack being directly correlated with the amount of privilege you grant the end-users in the organizations you manage. Help your management understand this tradeoff and the tools you have in your arsenal to manage it, and you can find the sweet spot between usability and security.

Last week we discussed whether automation is expected in today’s IT environment. Many of you agreed that it’s definitely not a new thing, with SysAdmins scripting things years ago (hello, Kix32!). We could even argue that it was much easier to script things in the old days when DOS and Windows 3.1 relied on .ini files for configuration, and there was no pesky registry.

 

Like other technology buzzwords, automation may prompt a different mental picture for you than it does for me. To online entrepreneurs, automation is about lead magnets feeding into sales funnels, auto responders and content delivery systems – that’s the magic behind the “make money while you sleep” crowd (or so they say). To a manufacturer, automation might mean robotics. To a customer services manager, automation can be customer support self-service or chatbots.

 

So let’s discuss what SysAdmins are actually automating.


Builds – Gone are the days of inserting 6 x 3.5” disks to install an operating system, or being able to copy it from one machine to another (totally showing my age here). After that, you were lucky if someone had taken screenshots or written down what settings to select as you stepped through the setup screens of NT4.0. We’ve always wanted our server and desktop builds to be consistent, but the need to document got in the way for some of us. Microsoft Small Business Server gave us wizards, and image cloning technology and Sysprep became normal for desktops. Ignoring cloud services for now, how are your current server and desktop builds automated? Have you gone fully "Infrastructure as Code" with Puppet, Chef, or Ansible?
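
For the Windows-centric shops, PowerShell Desired State Configuration scratches the same infrastructure-as-code itch as those tools. A minimal sketch, with the node name and feature purely as placeholders:

# Declarative build baseline with PowerShell DSC (node name and feature are examples)
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "SRV01" {
        WindowsFeature IIS {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}
WebServerBaseline -OutputPath C:\DSC   # compiles a MOF you can push or publish to a pull server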

 

Standard desktop settings – After a while in the small business space, your memory of locked-down Standard Operating Environments fades, but the reasons for them remain strong. In a Microsoft world, it seems Group Policy settings still rule here. Am I right?

 

Server reboots and service restarts – While I’d love to see us progress further with autonomous computing (computer, heal thyself), we’re not quite there yet. In reality, you can have a few workhorse servers that just need a scheduled reboot every now and again. My other favorite remediation step is automatically starting services that shouldn’t have stopped. What automation do you have in place to prevent things from dying or for trying to revive them?
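
For the service-restart case, even a scheduled one-liner goes a long way. A rough sketch, assuming you only want automatic services that have stopped (the exclusion pattern is illustrative):

# Restart automatic services that aren't running (sketch; exclusions are illustrative)
Get-CimInstance Win32_Service -Filter "StartMode='Auto' AND State<>'Running'" |
    Where-Object { $_.Name -notmatch 'TrustedInstaller|sppsvc|gupdate' } |
    ForEach-Object { Start-Service -Name $_.Name -ErrorAction Continue }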

 

User provisioning – When used for the occasional staff member, the GUI isn’t that bad a place to create new accounts, as long as you remember to add the required group memberships, mailboxes etc. Even then, it’s probably quicker to update their details on a spreadsheet and run a PowerShell command. Has PowerShell become our new favorite thing for user adds, changes, and deletes?
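
For what it's worth, here's a minimal sketch of that spreadsheet-plus-PowerShell approach, assuming a CSV with First, Last, SamAccountName, and OU columns and the RSAT ActiveDirectory module installed; the file name and baseline group are hypothetical:

# Bulk user provisioning from a CSV (sketch; column names and group are assumptions)
Import-Module ActiveDirectory
Import-Csv -Path .\new-hires.csv | ForEach-Object {
    $params = @{
        Name            = "$($_.First) $($_.Last)"
        GivenName       = $_.First
        Surname         = $_.Last
        SamAccountName  = $_.SamAccountName
        Path            = $_.OU
        Enabled         = $true
        AccountPassword = (Read-Host -AsSecureString "Password for $($_.SamAccountName)")
    }
    New-ADUser @params
    Add-ADGroupMember -Identity "All Staff" -Members $_.SamAccountName   # hypothetical baseline group
}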

 

This is a very brief overview, so what have I missed? Do you live on GitHub, searching for new scripts?

 

Jump into the comments and share what SysAdmin tasks you’ve automated and how. Maybe together we can automate us out of some work!

 

-SCuffy

When focusing on traditional mode IT, what can a Legacy Pro expect?

 

great-white-shark-1.jpg

 

This is a follow-up to the last posting I wrote that covered the quest for training, either from within or outside your organization. Today I'll discuss the other side of the coin because I have worked in spaces and with various individuals where training just wasn't important.

 

These individuals were excellent at the jobs they'd been hired to do, and they were highly satisfied with those jobs. They had no desire for more training or even advancement. I don't have an issue with that. Certainly, I’d rather interact with a fantastic storage admin or route/switch engineer with no desire for career mobility than the engineer who’d been in the role for two months and had their sights set on the next role already. I’d be likely to get solid advice and correct addressing of issues by the engineer who’d been doing that job for years.

 

But, and this is important, what would happen if the organization changed direction? The route/switch guy who knows Cisco inside and out may be left in the dust if he refuses to learn Arista (for example) when the infrastructure changes hands and the organization changes platforms. Some of these decisions are made with no regard for existing talent. Or what if, as an enterprise, they moved from expanding their on-premises VMware environment to a cloud-first mentality? Those who refuse to learn will be left by the wayside.

 

Just as a shark dies if it stops moving forward, a legacy IT pro will lose their standing if they don't keep moving forward.

 

I’ve been in environments where people were siloed to the extent that they never needed to do anything outside their scope. Take, for example, a mainframe coder. For the life of the mainframe in that environment, they were going to stay valuable to the organization. But those skills are not where the industry is growing. Who is really developing their mainframe skills today? That doesn’t mean such people have no impetus to move forward; they actually do, and should. Because, while it’s hard to do away with a mainframe, it’s been known to happen.

 

Obviously, my advice is to grow your skills, by hook or by crook. To learn outside your standard scope is beneficial in all ways. Even if you don’t use the new tech that you’re learning, you may be able to benefit the older tech on which you currently work by leveraging some of your newly gained knowledge.

 

As usual, I’m an advocate for taking whatever training interests you. I’d go beyond that to say that there are many ways to leverage free training portals and programs to benefit you and your organization, beyond those that have been sanctioned specifically by the organization. Spread your wings, seek out ways to better yourself, and, in this as in life, I’d pass on the following advice: try to do something beneficial every day. At least one thing that keeps you on the moving-forward path, rather than letting you die like a shark rendered stationary.

This week's links have a security slant because humans remain fallible. The Google Docs phishing email was but one of a handful of headlines reminding me that humans are far too trusting when it comes to basic security. I suppose it is human nature to want to help others; that's why people pick up hitchhikers, I guess. Data security and privacy is one area where I hope the machines rise up to save us from ourselves.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

For the love of God, stop clicking on shady emails already

Seriously. This is why we can't have nice things.

 

235 apps attempt to secretly track users with ultrasonic audio

If an app asks for access to your microphone or camera, and you can't imagine why it needs that access, think twice about installing it on your phone.

 

Antivirus evolved – Microsoft Malware Protection Center Blog

Rise up, machines! One of the biggest benefits of Azure Machine Learning is how Microsoft is using machine learning technology to help detect threats. This is one of the (many) reasons I see Azure overtaking AWS: Microsoft is focused on data security for its customers.

 

The alarming state of secure coding neglect

Honestly, I'm not that alarmed, because I know that security is an afterthought. If security was important, then we wouldn't be clicking on weird links to random Google Docs folders.

 

Intel's AMT Flaw: Worse Than Feared

See what I mean? Seven years of flawed chips installed on hardware devices around the globe. At this point, we should just assume that everything we do or will do, is going to be exposed and lost.

 

Reckon you've seen some stupid security things? Here, hold my beer...

I could go on with more bad security examples, but Troy Hunt provides a nice summary of how security is hard for most humans.

 

How to prevent blood clots as airlines squeeze you into tighter spaces

Turns out United was doing that guy a favor by letting some blood flow from his face so as to avoid any blood clots while flying.

 

Go home, Google Voice, you're drunk. Or maybe machines are further away from helping us than I had hoped. In related news, my heat is just fine:

IMG_6161.JPG

To paraphrase a lyric from Hamilton: deciding to code is easy; choosing a language is harder. There are many programming languages that are good candidates for any would-be programmer, but selecting the one that will be most beneficial to each individual's needs is a challenging decision. In this post, I will give some background on programming languages in general, then examine a few of the most popular options and try to identify where each one might be the most appropriate choice.

 

Programming Types and Terminology

 

Before digging into any specific languages, I'm going to explain some of the properties of programming languages in general, because these will contribute to your decision as well.

 

Interpreted vs Compiled

Interpreted

 

An interpreted language is one where the language reads the script and generates machine-level instructions on the fly. When an interpreted program is run, it's actually the language interpreter that is running with the script as an input. Its output is the hardware-specific bytecode (i.e. machine code). The advantages of interpreted languages are that they are typically quick to edit and debug, but they are also slower to run because the conversion to bytecode has to happen in real-time. Distributing a program written in an interpreted language effectively means distributing the source code.

 

sw_interpreter.png

Compiled

 

A compiled language is one where the script is processed by the language compiler and turned into an executable file containing the machine-specific bytecode. It is this output file that is run when the script is executed. It isn't necessary to have the language installed on the target machine to execute bytecode, so this is the way most commercial software is created and distributed. Compiled code runs quickly because the hard work of determining the bytecode has already been done, and all the target machine needs to do is execute it.

 

sw_compiler.png

 

Strongly Typed vs Weakly Typed

What is Type?

 

In programming languages, type is the concept that each piece of data is of a particular kind. For example, 17 is an integer. "John" is a string. 2017-05-07 10:11:17.112 UTC is a time. The reason languages like to keep track of type is to determine how to react when operations are performed on the data.

 

As an example, I have created a simple program where I assign a value of some sort to a variable (a place to store a value), imaginatively called x. My program looks something like this:


x = 6
print x + x

I tested my script, changing the value of x to see how each of five languages would process the answer. Note that putting a value in quotes (") implies that the value is a string, i.e. a sequence of characters: "John" is a string, but there's no reason "678" can't be a string too. The values of x are listed across the top, and the table shows the result of adding x to x:

 

x value:     6     "6"    "six"     "6six"       "six6"
Perl         12    12     0         12           0
Bash         12    12     0         *error*      0
Python       12    66     sixsix    6six6six     six6six6
Ruby         12    66     sixsix    6six6six     six6six6
PowerShell   12    66     sixsix    6six6six     six6six6

 

Weakly Typed Languages

Why does this happen? Perl and Bash are weakly (or loosely) typed; that is, while they understand what a string is and what an integer is, they're pretty flexible about how those are used. In this case, Perl and Bash made a best-effort guess at whether to treat the values as numbers or strings; although "6" was defined in quotes (and quotes mean a string), the determination was that in the context of a plus sign, the program must be trying to add numbers together. Python and Ruby, on the other hand, respected "6" as a string and decided that the intent was to concatenate the strings, hence the answer of 66.

 

The flexibility of the weak typing offered by a language like Perl is both a blessing and a curse. It's great because the programmer doesn't have to think about what data type each variable represents, and can use them anywhere and let the language determine the right type to use based on context. It's awful because the programmer doesn't have to think about what data type each variable represents, and can use them anywhere. I speak from bitter experience when I say that the ability to (mis)use variables in this way will, eventually, lead to the collapse of civilization. Or worse, unexpected and hard-to-track-down behavior in the code.

 

That Bash error? Bash for a moment pretends to have strong typing and dislikes being asked to add variables whose value begins with a number but is not a proper integer. It's too little, too late if you ask me.

 

Strongly Typed Languages

In contrast, Python and Ruby are strongly typed languages (as are C and Go). In these languages, adding two numbers means adding two integers (or floating point numbers, aka floats), and concatenating strings requires two or more strings. Any attempt to mix and match the types will generate an error. For example, in Python:


>>> a = 6
>>> b = "6"
>>> print a + b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

Strongly typed languages have the advantage that accidentally adding the wrong variable to an equation, for example, will not be permitted if the type is incorrect. In theory, this reduces errors and encourages a more explicit programming style. It also ensures that the programmer is clear that the value of an int(eger) will never have decimal places. On the other hand, sometimes it's a real pain to have to convert variables from one type to another to use their values in a different context.

PowerShell appears to want to pretend to be strongly typed, but a short test reveals some scary behavior. I've included a brief demonstration at the end in the section titled Addendum: PowerShell Bonus Content.

 

Dynamic / Statically Typed

 

There's one more twist to the above definitions. While functionally the language may be strongly typed, for example, it's possible to allow a variable to change its type at any time. For example, it is just fine in Perl to initialize a variable with an integer, then give it a new value which is a string:


$a = 1;
$a = "hello";

Dynamic typing is typically a property of interpreted languages, presumably because they have more flexibility to change memory allocations at runtime. Compiled languages, on the other hand, tend to be statically typed; if a variable is defined as a string, it cannot change later on.

 

Modules / Includes / Packages / Imports / Libraries

 

Almost every language has some system whereby the functionality can be expanded by installing and referencing some code written by somebody else. For example, Perl does not have SSH support built in, but there is a Net::SSH module which can be installed and used. Modules are the easiest way to avoid reinventing the wheel and allow us to ride the back of somebody else's hard work. Python has packages, Ruby has modules which are commonly distributed in a format called a "gem," and Go has packages. These expansion systems are critical to writing good code; it's not a failure to use them, it's common sense.

 

Choosing a Language

 

With some understanding of type, modules and interpreted/compiled languages, now it's time to figure out how to choose the best language. First, here's a quick summary of the most common scripting languages:

 

Language     Compiled / Interpreted   Type               Static / Dynamic   Expansion
Perl         Interpreted              Weak               Dynamic            Modules
Python       Interpreted              Strong             Dynamic            Packages
Ruby         Interpreted              Strong             Dynamic            Modules
PowerShell   Interpreted              It's complicated   Dynamic            Modules
Go           Compiled                 Strong             Static             Packages

 

I've chosen not to include Bash, mainly because I consider it to be more of a wrapper than a fully fledged scripting language suitable for infrastructure tasks. Okay, okay. Put your sandals down. I know how amazing Bash is. You do, too.

 

Perl

sw_perl_logo.png

 

Ten years ago I would have said that Perl (version 5.x, definitely not v6) was the obvious option. Perl is flexible, powerful, has roughly eleventy-billion modules written for it, and there are many training guides available. Perl's regular expression handling is exemplary and it's amazingly simple and fast to use. Perl has been my go-to language since I first started using it around twenty-five years ago, and when I need to code in a hurry, it's the language I use because I'm so familiar with it. With that said, for scripting involving IP communications, I find that Perl can be finicky, inconsistent and slow. Additionally, vendor support for Perl (e.g. providing a module for interfacing with their equipment) has declined significantly in the last 5-10 years, which also makes Perl less desirable. Don't get me wrong; I doubt I will stop writing Perl scripts in the foreseeable future, but I'm not sure that I could, in all honesty, recommend it for somebody looking to control their infrastructure with code.

 

Python

sw_python_logo.png

It probably won't be a surprise to learn that for network automation, Python is likely the best choice of language. I'm not entirely clear why people love Python so much, or why even the people who love it seem stuck on v2.7 and are avoiding the move to v3. Still, Python has established itself as the de facto standard for network automation. Many vendors provide Python packages, and there is a strong and active community developing and enhancing packages. Personally, I have had trouble adjusting to the use of whitespace (indentation) to indicate code block hierarchy, and it makes my eyes twitch that a block of code doesn't end with a closing brace of some kind, but I know I'm in the minority here. Python has a rich library of packages to choose from, but just like Perl, it's important to choose carefully and find a modern, actively supported package. If you think that semicolons at the end of lines and braces surrounding code make things look horribly complicated, then you will love Python. A new Python user really should learn version 3, but note that v3 code is not backward compatible with v2.x, so it may be important to check the availability of relevant vendor packages in a Python 3-compatible form.

 

Ruby

sw_ruby_logo.png

 

Oh Ruby, how beautiful you are. I look at Ruby as being like Python, but cleaner. Ruby is three or four years younger than Python, and borrows parts of its syntax from languages like Perl, C, Java, Python, and Smalltalk. At first, I think Ruby can seem a little confusing compared to Python, but there's no question that it's a terrifically powerful language. Coupled with Rails (Ruby on Rails) on a web server, Ruby can be used to quickly create database-driven web applications, for example. I think there's almost a kind of snobbery surrounding Ruby, where those who prefer Ruby look down on Python almost like it's something used by amateurs, whereas Ruby is for professionals. I suspect there are many who would disagree with that, but that's the perception I've detected. However, for network automation, Ruby has not got the same momentum as Python and is less well supported by vendors. Consequently, while I think Ruby is a great language, I would not recommend it at the moment as a network automation tool. For a wide range of other purposes though, Ruby would be a good language to learn.

 

PowerShell

sw_powershell_logo.png

PowerShell – that Microsoft thing – used to be just for Windows, but it has now been ported to Linux and macOS as well. PowerShell has garnered strong support from many Windows system administrators since its release in 2006 because of the ease with which it can interact with Windows systems. PowerShell excels at automation and configuration management of Windows installations. As a Mac user, my exposure to PowerShell has been limited, and I have not heard about it being of much use for network automation purposes. However, if compute is your thing, PowerShell might just be the perfect language to learn, not least because it's native in Windows Server 2008 onwards. Interestingly, Microsoft is trying to offer network switch configuration within PowerShell, and released its Open Management Infrastructure (OMI) specification in 2012, encouraging vendors to use this standard interface to which PowerShell could then connect. For a Windows administrator, PowerShell would be an obvious choice.

 

Go

sw_golang_logo.png

 

Go is definitely the baby of the group here, and with its 1.0 release in 2012, it's the only one of these languages created in this decade! Go is an open source language developed by Google, and it is still evolving fairly quickly, with new functionality added in each release. That's good, because things that are perceived as missing are frequently added in the next release. It's bad, because not all code will be forward compatible (i.e. will run under the next version). Because Go is so new, the number of packages available is much more limited than for Ruby, Perl, or Python. This is an obvious potential downside, because it may mean doing more work yourself.

 

Where Go wins, for me, is on speed and portability. Because Go is a compiled language, the machine running the program doesn't need to have Go installed; it just needs the compiled binary. This makes distributing software incredibly simple, and also makes Go pretty much immune to anything else the user might do on their platform with their interpreter (e.g. upgrade modules, upgrade the language version, etc). More to the point, it's trivial to get Go to cross-compile for other platforms; I happen to write my code on a Mac, but I can (and do) compile tools into binaries for Mac, Linux, and Windows and share them with my colleagues. For speed, a compiled language should always beat an interpreted language, and Go delivers that in spades. In particular, I have found that Go's HTTP(S) library is incredibly fast. I've written tools relying on REST API transactions in both Go and Perl, and Go versions blow Perl out of the water. If you can handle a strongly, statically typed language (it means some extra work at times) and need to distribute code, I would strongly recommend Go. The vendor support is almost non-existent, however, so be prepared to do some work on your own.

 

Conclusions

 

There is a lot to consider when choosing a language to learn, and I feel that this post may only scrape the surface of all the potential issues to take into account. Unfortunately, sometimes the issues may not be obvious until a program is mostly completed. Nonetheless, my personal recommendations can be summarized thus:

 

  • For Windows automation: PowerShell
  • For general automation: Python (easier), or Go (harder, but fast!)

 

If you're a coder/scripter, what would you recommend to others based on your own experience? In the next post in this series, I'll look at ways to learn a language, both in terms of approach and some specific resources.

 

Addendum: PowerShell Bonus Content

 

In the earlier table where I showed the results from adding x + x, PowerShell behaves perfectly. However, when I started to add int and string variable types, it was not so good:


PS /> $a = 6
PS /> $b = "6"
PS /> $y = $a + $b
PS /> $y
12

In this example, PowerShell interpreted the string "6" as an integer and added it to 6. What if I do it the other way around and try adding an integer to a string?


PS /> $a = "6"
PS /> $b = 6
PS /> $y = $a + $b
PS /> $y
66

This time, PowerShell treated both variables as strings; whatever type the first variable is, that's what gets applied to the other. In my opinion that is a disaster waiting to happen. I am inexperienced with PowerShell, so perhaps somebody here can explain to me why this might be desirable behavior because I'm just not getting it.
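
One way to take the guesswork out is to be explicit about type. A quick sketch of that approach, using casts and a type-constrained variable (the values are the same ones used above):

PS /> $a = "6"
PS /> $b = 6
PS /> [int]$a + $b       # force numeric addition regardless of operand order
12
PS /> [string]$b + $a    # force string concatenation
66
PS /> [int]$c = "6"      # type-constraining the variable also works; $c is now the integer 6
PS /> $c = "six"         # this now fails with a conversion error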

By Joe Kim, SolarWinds Chief Technology Officer

 

Government IT workers get a little squeamish on occasion. Understandable, right? I mean, federal IT managers must stay on top of the latest tech, which is constantly evolving. Legacy technologies are being replaced with shiny new cloud, virtualization, and networking software. Network complexity continues to grow, and budget and security concerns are always prevalent.

 

Underlying all of that may even be a sense of uncertainty regarding job security, some of which may stem from automation software. Don’t fret. Automation is your friend, and it can be used effectively to eliminate wasted time and unnecessary headaches.

 

Creating Their Legacy, Driving Innovation

 

Those should be comforting words for today’s federal IT professionals, who tend to have their fingers in a lot of pies. Beyond simply managing the network, growing network complexity and initiatives like DevOps have given administrators far more responsibility than ever before.

 

Today’s IT professionals can’t afford to be burdened with manual interventions that require hours – sometimes days – to fix. Furthermore, many like the idea of having time to do things that will help advance their agencies’ technology agendas, and create their own legacy.

 

Alert! Let’s Automate Responses!

 

Who wants to have to manually react to every single alert that comes through? Who has the time?

 

There’s a better way of dealing with alerts, one that won’t take hours away from your day.

 

Let’s take a look at a simple example. When a server alert is created because a disk is full, an administrator would typically deal with that task manually, perhaps by dumping the temp directory. What if they wrote a script for this task instead? That would eliminate the need for manual intervention.
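
For example, a hypothetical cleanup script along these lines could be attached to the low-disk alert; the threshold, drive letter, and age cutoff are all assumptions to tune for your environment:

# Sketch: purge old temp files when free space on C: drops below 10 percent
$drive   = Get-PSDrive -Name C
$freePct = $drive.Free / ($drive.Used + $drive.Free) * 100
if ($freePct -lt 10) {
    Get-ChildItem -Path $env:TEMP -Recurse -File |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
        Remove-Item -Force -ErrorAction SilentlyContinue   # locked files are skipped, not fatal
}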

 

Here’s another one. For whatever reason, an application stops working. Again, manually dealing with this challenge can be a painstaking, time-consuming process. Automation allows managers to write a script that enables the application to automatically restart.

 

Administrators can also evaluate their alerts to determine if an automated response is scriptable. This could create far fewer headaches.

 

Perhaps even more importantly, automated responses could free up IT time to develop and deploy new and innovative applications, for instance, or find better ways to deliver those applications to users.

 

Tools for the Job

 

Speaking of tools, there are certain types that should be considered. Change management and tracking, compliance auditing, and configuration backups should be on everyone’s automated wish list.

 

These tools save time and resources and greatly reduce errors that are sometimes created by manual tasks. These errors can lead to network downtime or potential security breaches. Meanwhile, they help to free up time for projects that can help your agency become more agile and innovative.

 

There are ways IT professionals can manage the hand they’re dealt more effectively and efficiently. They can use automation to make their lives easier and their agencies more nimble and secure. In turn, they can work smarter, not harder.

 

Find the full article on GovLoop.

In my last post, WHEN BEING AN EXPERT ISN’T GOOD ENOUGH: MASTER OF ALL TRADES, JACK OF NONE, you all shared some great insight on how you found ways to be successful as individual SMEs and contributors, and how you navigate the landscape of an organization.

 

This week, I’d like to talk about silo organizations and how we’ve found ways to work better together. (You can share your stories, as well!)

 

 

This is the first thing I imagine when I hear that an organization is silo-ed off:

oldsilos.jpg

 

The boundaries are clearly defined, the foundation is well set, it’s very aged and well established. It doesn’t mean any of it is particularly good or bad, but it certainly shows the test of time. Navigating in that landscape requires more than tackling a delicate balance of ego and seniority.

 

Once upon a time, we had a very delicate situation we were trying to tackle. It may sound simple and straightforward, but as you'll see, things were far from easy. We were faced with deploying a syslog server. Things literally do NOT get any easier than that! When I first found out about this (security) initiative, I was told that it had been a "work in progress" for over two years, and that no syslog servers had been deployed yet. Wait. Two years? Syslog servers. None deployed?! This can’t be that difficult, can it? Welcome to the siloed organization, right?

 

On its surface, it sounds so simple, yet as we started to peel back the onion:

 

Security needed syslog servers deployed.

The storage team would need to provision the capacity for these servers.

The virtualization team would need to deploy the servers.

The networking team would need to provide IP addresses, and the appropriate VLANs, and advertise the VLANs as appropriate if they did not exist.

The virtualization team would then need to configure those VLANs in their networking stack for use.

Once all that was accomplished, the networking and security teams would need to work together to configure devices to send syslog data to these servers.

 

All of that is straightforward and easy to do when everyone works together! The disconnected, non-communicating silos prevented it from happening for years, because everyone felt someone else was responsible for each action, and it's a lot easier not to do things than to work together!

 

Strangely, what probably helped drive this success the most was less the clear separation of silo-by-silo boundaries and more the responsibility that came from managing it as a single project. When things are done within a silo, they’re often done in a bubble, beginning and ending without anyone outside that bubble being notified. It makes sense: when driving, we’re all on the same road together and our actions may influence each other’s (lane changes, signal changes, and the like), but the music I’m listening to in my car has no influence on any other car.

 

So, while we all have our own interdependencies within our silos, when we work together ACROSS silos on a shared objective, we can be successful as long as we recognize the big picture. Whether we recognize that individually, or collectively with some dictated charter, we can still be successful. When I started this piece, I was more focused on the effect and influence we can have as individuals within our silos, and on the interaction and interoperability with others across silos. But I came to realize that when we each individually manage our responsibilities within a “project,” we become better together. That said, I'm not implying that formal project management is required for any or all multi-silo interactions. It really comes down to accepting responsibility as individuals and working together on something larger than ourselves and our organization, not just seeing our actions as transactions with no effect on the bigger whole.

 

Then again, I could be crazy and this story may not resonate with any of you.   

 

Share your input on what you’ve found helps you work better together, whether it be inter-silo, intra-silo, farming silos, you name it!

From the dawn of computers and networks, log management has been an integral part of managing and monitoring IT processes. Today, managing log messages is instrumental to troubleshooting network issues and detecting security breaches. Log management involves collecting, analyzing, transmitting, storing, archiving, and disposing of log data created in your IT environment. With thousands of log messages created every minute, centralized log management and rapid analysis of problematic logs are some of the critical requirements faced by IT Pros.

 

Kiwi Syslog® Server is easy-to-install, simple log management software that helps you capture and analyze logs so you know what’s happening inside your routers, switches, and other network devices. Here are the five most useful log management operations performed by Kiwi Syslog Server:

1. Syslog monitoring

Kiwi Syslog Server collects syslog messages and SNMP traps from servers, firewalls, routers, switches, and syslog-enabled devices, and displays these messages in a centralized web console for secure analysis. It also offers various filtering options to sort and view messages based on priority, host name, IP address, time, etc. You can also generate graphs on syslog messages for auditing purposes when needed.
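
If you want to confirm that collection is working before pointing real devices at it, a throwaway test message is enough. Here's a rough PowerShell sketch; the server address is a placeholder, and the message follows the old RFC 3164-style format:

# Send one test syslog message over UDP/514 (sketch; replace $server with your syslog host)
$server  = "192.168.1.50"
$pri     = 134   # facility local0 (16) * 8 + severity informational (6)
$message = "<$pri>$(Get-Date -Format 'MMM dd HH:mm:ss') $env:COMPUTERNAME kiwitest: hello from PowerShell"

$udp   = New-Object System.Net.Sockets.UdpClient
$bytes = [System.Text.Encoding]::ASCII.GetBytes($message)
$udp.Send($bytes, $bytes.Length, $server, 514) | Out-Null
$udp.Close()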

2. Syslog alerting

Kiwi Syslog Server’s intelligent alerting sends real-time alerts and notifications when syslog messages meet specified criteria based on time, source, type, and so on. Kiwi Syslog Server includes default priority levels, such as emergency, alert, critical, error, and warning, to help you understand the severity of any given situation. Based on the alert conditions, you can set up automatic actions, including the following:

  • Trigger an email notification
  • Receive alerts via sound notification
  • Run an external program or script
  • Log to file, Windows® event log, or database
  • Forward specific messages to another host

 

3. Log retention and archival processes

Using the scheduler built into Kiwi Syslog Server, you can automate log archival and cleanup processes.

 

You can accomplish the following using the log archival options in Kiwi Syslog Server:

  • Automate log archival operations by defining the source, destination, archival frequency, and notification options
  • Copy or move logs from one location to another
  • Compress files into individual or single archives
  • Encrypt archives and create multi-part archives
  • Create file or archive hashes, run external programs, and much more

 

With the scheduled clean-up option, you can schedule the removal/deletion of log files from a source location if it matches specific criteria. You can schedule the clean-up process at specified application/start-up times, or any time or date you wish.

4. Log forwarding

You can use Kiwi Syslog Server to forward log messages to servers, databases, other syslog servers, and SIEM systems. The Log Forwarder for Windows tool included in the Kiwi Syslog Server download package allows you to forward event logs from Windows servers and workstations as syslog messages to Kiwi Syslog Server.

 

5. Transport syslog messages

Kiwi Secure Tunnel, which is included in the Kiwi Syslog Server download, helps you securely transport syslog messages from any network devices and servers to your syslog server. It collects log messages from all configured network devices and systems, and transports them to the syslog server across a secure link in any network (LAN or WAN).

 

Read more…

 

Kiwi Syslog Server

As you can see, Kiwi Syslog Server helps you step up your log management operations while saving you invaluable time. Try a free trial of Kiwi Syslog Server now »

I have lots of conversations with colleagues and acquaintances in my professional community about career paths. The question that inevitably comes up is whether they should continue down their certification path with specific vendors like VMware, Microsoft, Cisco, and Oracle, or should they pursue new learning paths, like AWS and Docker/Kubernetes?

 

Unfortunately, there is no answer that fits each individual because we each possess different experiences, areas of expertise, and professional connections. But that doesn't mean you can't fortify your career.  Below are tips that I have curated from my professional connections and personal experiences.

  1. Never stop learning. Read more, write more, think more, practice more, and repeat.
  2. Be a salesperson. Sell yourself! Sell the present and the potential you. If you can’t sell yourself, you’ll never discover opportunities that just might lead to your dream job.
  3. Follow the money. Job listings at sites like dice.com will show you where companies are investing their resources. If you want to really future-proof your job, job listings will let you know what technical skills are in demand and what the going rate is for those skills.

 

As organizations embrace digital transformation, there are three questions that every organization will ask of its IT professionals:

  1. Is it a skill problem? (Aptitude)
  2. Is it a hill problem? (Altitude)
  3. Is it a will problem? (Attitude)

How you respond to these questions will determine your future within that organization and in the industry.

 

So what do you think of my curated tips? What would you add or subtract from the list? Also, how about those three organizational questions? Are you being asked those very questions by your organization? Let me know in the comment section.

 

A final note: I will be at Interop ITX in the next few weeks to discuss this among all the tech-specific conversations. If you will be attending, drop me a line in the comment section and let’s meet up.

So, I’m sure you're all aware of the Google phishing scam. It, conveniently, presents a few key items that I would like to discuss.

 

What we know, as in what Google will tell us, is that the expedition did not access account data. Rather, it merely gathered contacts and re-sent the phishing email for the fake Google Docs link. Clearly, we need to discuss the key identifiers that can help protect you from similar attacks. The phishing emails were sent from (supposedly) hhhhhhhhhhhhhhhh@mailinator.com. Now, if that doesn't look fishy, I don’t know what does. Regardless, people obviously opened it.

 

Another critical element is that the link in the fake Google Docs email led to nothing more than a long chain of craziness, instead of a normal Google Docs location. However, like most phishing, the message appeared to come from someone you know. So how can we protect ourselves?

 

Google installed several fixes within an hour. This shows great business practices for security on their side. We have to know that there is no one-size-fits-all for security, period. New breaches are happening every second, and we don’t always know the location, intent, or result of these attacks. What we can do is be mindful that we are no longer free-range users, and we have a personal responsibility to be aware of attacks, both at home and at work.

 

So, I'd like to help you learn the basics of looking for and recognizing phishing emails. First, and always, begin with being suspicious. Here are some ideas to help strengthen your Spidey senses:

 

  • Report phishing emails to your IT team or personal email account providers. If they don’t know, they can't fix the issue. They may eventually find out, but think of this as your friendly Internet Watch program.
  • Avoid attacks. NEVER give personal information unless you know why you are being asked for it and are 100% able to verify the email address. Make sure the email address actually matches the sender.
  • Hover over links and verify if they are going to the correct location.
  • Update your browser security settings. Google released a fix for this and pushed it out within hours.
  • Patch your devices -- including MOBILE! Android had an updated phishing release from Google within hours.
  • Stop thinking of patches for your phone as a feature request.

 

We can be our own cyber security eye in the sky! All it takes for an attacker is motivation and time to hack, breach, or attack us, so we must be diligent and not let down our guard. Being vigilant is critical, as is proactively protecting ourselves at home and at work by following a few simple practices.

 

And another thing: let's stop broadcasting our SSIDs at home like a bat signal. There are little things we can do everywhere. Go big and implement MAC address filtering so you can control, and see, who is trying to access your Wi-Fi. (Take it from someone who has four teenage daughters.)

 

 

~Dez~

Had a great time at the Salt Lake City SWUG last week and wanted to say THANK YOU to all who attended. I'm already looking forward to my next SWUG. But before that, I need to get my stuff together for Techorama in Antwerp in three weeks.

 

I've got a few long links to share today, so set aside some time for each. The last two are fun ones though, and I hope you enjoy them as much as I did.

 

Northrop Grumman can make a stealth bomber – but can't protect its workers' W-2 tax forms

At first glance, you might think this is an example of a company not protecting their data, but that's not the case. Northrop outsourced their tax portal to Equifax, meaning there is a lot more to this story than just blaming Northrop.

 

How Online Shopping Makes Suckers of Us All

I'm a fan of data mining, and I've been shopping online for decades. I never once thought to myself that the online markets could be manipulating the pricing. Makes total sense now. But I still got a great price on those ten pounds of nutmeg.

 

The Myth of a Superhuman AI

For anyone worried about the machines rising up to kill us all, this article gives hope.

 

Who is Publishing NSA and CIA Secrets, and Why?

With all the recent hacks and leaks, I've been wondering what the bigger picture would look like. I never thought it would be the result of bragging.

 

Here’s Why Juicero’s Press is So Expensive

At first, I thought this story was funny, now it's just sad to think about how much money has been wasted on something that nobody ever asked for, or wants. I can't imagine spending $400 for a juice machine, and a $35/week subscription on top of that. That's  over $2k in juice money in the first year. If you know of someone spending that much money on juice you should smack them in the mouth and then teach them how to make their own juice for a fraction of the cost.

 

Unicorn Startup Simulator

Okay, I was a bit harsh on the juice folks in the last link. So, here's a link to help remind us all how difficult it is for a company in the Valley to succeed.

 

Hilarious Sayings That Don’t Make Sense Translated

Because I enjoy languages, and infographics, and apparently the name Juicero is Russian for "to hang noodles on the ears."

 

At the SWUG last week in Salt Lake City the museum had this relic outside of our room:

IMG_6675.JPG

Federal IT professionals spend much time and money implementing sophisticated threat management software to thwart potential attackers, but they often forget that one of the simplest methods hackers use to access sensitive information is through social media. The world’s best cybersecurity tools won’t help if IT managers inadvertently share information that may appear to be innocuous status updates, but in reality, could reveal details about their professions that could serve as tasty intel for disruptors.

 

On LinkedIn®, for example, attackers can view profiles of network and system administrators and learn what systems targets are working on. This approach is obviously much easier and more efficient than running a blind scan trying to fingerprint a target.

 

However, federal IT professionals can actually use social media networks to block attackers. By sharing information amongst their peers and colleagues, managers can effectively tag hackers by using some of their own tactics against them.

 

Most attackers are part of an underground community, sharing tools, tactics, and information faster than any one company or entity can keep up.

 

Federal IT professionals can use threat feeds for information gathering and defense. Threat feeds are heralded for quickly sharing attack information to enable enhanced threat response. They can consist of simple IP addresses or network blocks associated with malicious activity, or they can include more complex behavioral analysis and analytics.
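
As a simple illustration of the programmatic side, a feed of plain IP addresses can be pulled and turned into a block rule with a few lines of PowerShell. The feed URL here is a placeholder, and a real deployment would validate, deduplicate, and age out entries rather than blocking blindly:

# Sketch: pull a plain-text threat feed (one IPv4 or CIDR per line) and block it outbound
$feedUrl = "https://example.com/threat-feed.txt"   # placeholder -- use your subscribed feed
$entries = (Invoke-WebRequest -Uri $feedUrl -UseBasicParsing).Content -split "`n" |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_ -match '^\d{1,3}(\.\d{1,3}){3}(/\d{1,2})?$' }

if ($entries) {
    New-NetFirewallRule -DisplayName "Threat feed block" -Direction Outbound -RemoteAddress $entries -Action Block | Out-Null
}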

 

While threat feeds will not guarantee security, they are a step in the right direction. They allow administrators to programmatically share information about threats and create mutual defenses much stronger than any one entity could do on its own. There’s also the matter of sharing data across internal teams so that all agency personnel are better enabled to recognize threats, though this is often overlooked.

 

An easy way to share information internally is to have unified tools or dashboards that display data about the state of agency networks and systems. Often, performance data can be used to pinpoint security incidents. The best way to start is to make action reports from incident responses more inclusive. The more the entire team understands and appreciates how threats are discovered, the more vigilant the team can be overall in anomaly detection and in raising red flags when warranted.

 

Federal IT professionals can use the Internet Storm Center as a resource on active attacks; it publishes information about the top malicious ports being used by attackers, as well as attackers' IP addresses. It’s a valuable destination.

 

The bottom line is that while all federal IT professionals must be diligent and guarded about what they share on social media, that doesn’t mean they should delete their accounts. Used correctly, social media and online information sharing can help them unite forces and gain valuable insight to fight a common and determined enemy.

 

Find the full article on GovLoop.
