
The only constant truth in our industry is that technology is always changing. At times, it’s difficult to keep up with everything new being introduced while staying on top of your day-to-day duties. That challenge grows even harder if these new innovations diverge from the direction the company you work for is heading. Ignoring such change is a bad idea. Failing to keep up with where the market is heading is a recipe for stagnation and eventual irrelevance. So how do you keep up with these things when your employer doesn’t sponsor or encourage your education?

 

1) The first step is to come to the realization that you're going to need to spend some time outside of work learning new things. This can be difficult for a lot of reasons, especially if you have a family or other outside obligations. Your career is a series of priorities, though, and while it may not (and arguably should not) be the highest thing you prioritize, it has to at least be on the list. Nobody is going to do the work for you, and if you don’t have the support of your organization, you’re going to have to carve out the time on your own.

 

2) Watch/listen/read/consume, a lot. Find people who are writing about the things you want to learn and read their blogs or books. Don’t just read their blogs, though. Add them to a program that harvests their RSS feeds so you are notified whenever they write something new. Find podcasts that address these new technologies and listen to them on your commute to and from work. Search YouTube for people who are creating content around the things you want to learn. I have found the technology community to be very forthcoming with information about the things they are working on. I’ve learned so much just from consuming the content they create. These are very bright people sharing the things they are passionate about for free. The only thing it costs is your time. Some caution is needed here, though, as not everyone who creates content on the internet is right. Use the other resources described here to ask questions and validate the concepts you learn from online sources.
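If you want a concrete starting point for the RSS idea, here is a minimal Python sketch using the feedparser library; the feed URLs are hypothetical placeholders for the blogs you actually follow:

import feedparser  # pip install feedparser

# Hypothetical feed URLs -- substitute the blogs you follow
FEEDS = [
    "https://example.com/author-one/rss",
    "https://example.com/author-two/rss",
]

for url in FEEDS:
    feed = feedparser.parse(url)
    # Show the newest few posts from each feed
    for entry in feed.entries[:3]:
        print(f"{feed.feed.get('title', url)}: {entry.title}")
        print(f"  {entry.link}")

Run it on a schedule and you have a bare-bones notifier; a dedicated RSS reader does the same thing with less effort.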

 

3) Find others like you. The other thing that I have found about technology practitioners is that, contrary to the stereotype of awkward nerds, many love to be social and exist within an online community. There are people just like you hanging out on Twitter, in Slack groups, in forums, and other social places on the web. Engage with them and participate in the conversations. Part of the problem of new technology is that you don’t know what you don’t know. Something as simple as hearing an acronym/initialism that you haven’t heard before could lead you down a path of discovery and learning. Ask questions and see what comes back. Share your frustrations and see if others have found ways around them. The online community of technology practitioners is thriving. Don't miss the opportunity to join in and learn something from them.

 

4) Read vendor documentation. I know this one sounds dry, but it is often a good source of guidance on how a new technology is being implemented. Often it will include the fundamental concepts you need to know in order to implement whatever it is you are learning about. Take terms you don’t understand and search for them. Pay attention to the caveats in the way a vendor implements a technology; they will tell you a lot about its limitations. You do have to read between the lines a bit, and filter out the vendor-specific stuff (unless you are looking to learn about a specific vendor), but this content is often free and incredibly comprehensive.

 

5) Pay for training. If all of the above doesn’t round out what you need to learn, you’re just going to have to invest in yourself and pay for some training. This can be daunting as week-long onsite courses can cost thousands of dollars. I wouldn’t recommend that route unless you absolutely need to. Take advantage of online computer-based training (CBT) from sites like CBT Nuggets, Pluralsight, and ITProTV. These sites typically have reasonable monthly or yearly subscription fees so you can consume as much content as your heart desires.

 

6) Practice, practice, practice. This is true for any kind of learning, but especially when you’re going it alone. If at all possible, build a lab of what you’re trying to learn. Use demo licenses and emulated equipment if you have to. Build virtual machines with free hypervisors like KVM so you can get hands-on experience with what you’re trying to learn. A lab is the only place where you will know for sure whether you know your stuff. Build it, break it, fix it, and then do it all again. Try it from a different angle and test your assumptions. You can read all the content in the world, but if you can’t apply it, it isn’t going to help you much.
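As one rough illustration of spinning up a disposable lab machine under KVM, here is a minimal Python sketch that shells out to virt-install; the VM name, sizing, and ISO path are all hypothetical, so adjust them for your own lab:

import subprocess

# Hypothetical lab VM -- adjust name, sizing, and ISO path for your environment
subprocess.run(
    [
        "virt-install",
        "--name", "lab-vm01",
        "--memory", "2048",               # RAM in MiB
        "--vcpus", "2",
        "--disk", "size=20",              # disk size in GiB
        "--cdrom", "/isos/ubuntu-server.iso",
        "--os-variant", "ubuntu20.04",
    ],
    check=True,                           # raise if virt-install fails
)

Because the VM is disposable, you can break it, delete it, and rebuild it as many times as the lesson requires.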

 

Final Thoughts

 

Independent learning can be time consuming and, at times, costly. It helps to realize that any investment of time or money is an investment in yourself and the skills you can bring to your next position or employer.  If done right, you’ll earn it back many times over by the salary increases you’ll see by bringing new and valuable skills to the table.  However, nobody is going to do it for you, so get out there and start finding the places where you can take those next steps.

As an IT pro, automation fascinates me -- as a concept. While I spend much of my day on support tasks that don't seem automatable, I'm also surrounded by business owners and online entrepreneurs crying out that automation is essential to business success. The world seems to be in lust with automation.

Before I delve into what automation actually is (stay tuned for a future post), I'd like to know if the automation bell is ringing just as loudly in your world. Do we even have to debate if automation is optional in today's IT environment? Or is the concept just a bunch of hype from a growing number of automation vendors?

 

My theory is that the adoption of automation is relative to the size of your environment and the size of your IT team. In the enterprise space, you wouldn’t think twice about scripting desktop software deployments or using a deployment tool. It just doesn’t make sense to touch every device manually. In smaller organizations, you might argue that it takes longer to automate a process than it would to do the task manually on five or ten servers. Getting over that cost of adoption is key. You have to be happy knowing it WILL take you longer to research and automate things the first time, but the payoff comes in cost savings every time you need to repeat that process in the future.

 

Automation is also good for removing the "unique human" from the picture. Lessen your reliance on that one person who knows how to do the thing! Program the thing to be done by an automation tool so that others on the IT team know how to use it, too.

 

I think we would also see a difference in the rate of automation based on what you’re actually automating. Is it a priority to automate new server builds, or do you first tackle automatic service restarts (an example of a support issue that can be made to heal itself before it needs our intervention)?
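As a rough sketch of that self-healing idea, here is a minimal Python watchdog for a Linux host running systemd; the service name is hypothetical, and a production version would add logging, retry limits, and alerting:

import subprocess
import time

SERVICE = "myapp"  # hypothetical service name


def is_active(service: str) -> bool:
    # systemctl exits with code 0 when the unit is active
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0


while True:
    if not is_active(SERVICE):
        print(f"{SERVICE} is down -- restarting")
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
    time.sleep(60)  # check once a minute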

 

Does the attitude of your IT department (and your organization) have an impact on how you view automation? We’ve had automation of sorts for a long time, from Kix32 scripts to Group Policy settings. Has your organization stopped there, or have you fully embraced modern automation recipes and infrastructure as code? Are PowerShell and Bash a regular, helpful part of your day, a necessary evil, or still as foreign as speaking Hungarian (unless, of course, you speak Hungarian)?

 

I’ve asked a lot of questions in this article and I’m really keen to hear your thoughts. I mix with some amazing people who have automation finely tuned, and other IT pros who are still wondering why they’d invest the time. As DevOps meets traditional IT ops, does automation provide the common ground of configuring code to benefit IT pros?

How should somebody new to coding get started learning a language and maybe even automating something? Where should they begin?

 

It's probably obvious that there are a huge number of factors that go into choosing a particular language to learn. I'll look at that particular issue in the next post, but before worrying about which arcane programming language to choose, maybe it's best to take a look at what programming really is. In this post, we'll consider whether it's going to be something that comes naturally to you, or require a very conscious effort.

 

If you're wondering what I mean by understanding programming rather than understanding a language, allow me to share an analogy. When I'm looking for, say, a network engineer with Juniper Junos skills, I'm aware that engineers with Cisco skills outnumber those with Juniper skills by perhaps 10:1, based on the résumés that I see. So rather than looking for who can program in Cisco IOS and who can program in Junos OS, I look for engineers who have an underlying understanding of the protocols I need. The logic here is that I can teach an engineer (or they can teach themselves) how to apply their knowledge using a new configuration language, but it's a lot more effort to go back and teach them about the protocols being used. In other words, if an engineer understands, say, the theory of OSPF operation, applying it to Junos OS rather than IOS is simply a case of finding the specific commands that implement the design the engineer already understands. More importantly, learning the way a protocol is configured on a particular vendor's operating system is far less important than understanding what those commands are doing to the protocol.

 

Logical Building Blocks

 

Micro Problem: Multiply 5 x 4

 

Here's a relatively simple example of turning a problem into a logical sequence. Back in the days before complex instruction sets, many computer CPUs did not have a multiply function built in, and offered only addition and subtraction as native instructions. How can 5x4 be calculated using only addition or subtraction? The good news for anybody who has done common core math (a reference for the readers in the USA) is that it may be obvious that 5x4 is equivalent to 5+5+5+5. So how should that be implemented in code? Here's one way to do it:


answer = 0  // create a place to store the eventual answer
answer = answer + 5  // add 5
answer = answer + 5  // add 5 (2nd time)
answer = answer + 5  // add 5 (3rd time)
answer = answer + 5  // add 5  (4th time)

At the end of this process, answer should contain a value of 20, which is correct. However, this approach isn't very scalable. What if next time I need to know the answer to 5 x 25? I really don't want to have to write the add 5 line twenty-five times! More importantly, if the numbers being multiplied might be determined while the program is running, having a hard-coded set of additions is no use to us at all. Instead, maybe it's possible to make this process a little more generic by repeating the add command however many times we need to in some kind of loop. Thankfully there are ways to achieve this. Without worrying about exactly how the loop itself is coded, the logic of the loop might look something like this:


answer = 0
number_to_add = 5
number_of_times = 4
do the following commands [number_of_times] times:
  answer = answer + [number_to_add]
done

Hopefully that makes sense written as pseudocode. We define the number that we are multiplying (number_to_add), and how many times we need to add it to the answer (number_of_times), and the loop will execute the addition the correct number of times, giving us the same answer as before. Now, however, to multiply different pairs of numbers, the addition loop never needs to change. It's only necessary to change number_to_add and number_of_times.

 

This is a pretty low-level example that doesn't achieve much, but once the logic of the steps is understood, it can be implemented across multiple languages:

 

[image: the same loop implemented in several programming languages]
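As one concrete rendering of that pseudocode (the original image showed several languages side by side), here is the same loop in Python:

answer = 0
number_to_add = 5
number_of_times = 4

# Repeat the addition number_of_times times
for _ in range(number_of_times):
    answer = answer + number_to_add

print(answer)  # prints 20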

 

I will add (before somebody else comments!) that there are other ways to achieve the same thing in each of these languages. The point I'm making is that by understanding a logical flow, it's possible to implement what is quite clearly the same logical sequence of steps in multiple different languages in order to get the result we wanted.

 

Macro Problem: Automate VLAN Creation

 

Having looked at some low-level logic, let's look at an example of a higher-level construct to demonstrate that the ability to follow (and determine) a logical sequence of steps applies at higher levels as well.

 

In this example, I want to define a VLAN ID and a VLAN Name, and have my script create that VLAN on a list of devices. At the very highest level, my theoretical sequence might look like this:


Login to switch
Create VLAN
Logout

After some more thought, I realize that I need to do those steps on each device in turn, so I need to create some kind of loop:


do the following for each switch in the list (s1, s2, ... sN):
  Login to switch
  Create VLAN
  Logout
done

It occurs to me that before I begin creating the VLAN, I ought to confirm that it doesn't exist on any of the target devices already, and if it does, I should stop before creating it. Now my program logic begins to look like this:


do the following for each switch in the list (s1, s2, ... sN):
  Login to switch
  Check if VLAN exists
  Logout
  IF the chosen VLAN exists on this device, THEN stop!
  ENDIF
done

do the following for each switch in the list (s1, s2, ... sN):
  Login to switch
  Create VLAN
  Logout
done

The construct used to determine whether to stop or not is referred to as an if/then/else clause. In this case, IF the VLAN exists, THEN stop (ELSE, implicitly, keep on running).

Each step in the sequence above can then be broken down into smaller parts and analyzed in a similar way. For example:


Login to switch
IF login failed THEN:
| log an error
| stop
ELSE:
| log success
ENDIF

Boolean (true/false) logic is the basis of all these sequences, and multiple conditions can be tested simultaneously and even nested within other clauses. For example, I might expand the login process to cope with RADIUS failure:


Login to switch
IF login failed THEN:
| IF error message was "RADIUS server unavailable" THEN:
| | attempt login using local credentials
| ELSE:
| | log an error
| | stop
| ENDIF
ELSE:
| log success
ENDIF
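To show how logic like this might translate into a real script, here is a minimal sketch using the netmiko library; the device type, addresses, credentials, and VLAN details are all hypothetical, and a real script would also need to handle the failed-login branch shown above:

from netmiko import ConnectHandler  # pip install netmiko

VLAN_ID = 100
VLAN_NAME = "USERS"
SWITCHES = ["10.0.0.1", "10.0.0.2"]  # hypothetical management addresses


def vlan_exists(conn, vlan_id: int) -> bool:
    # The first column of "show vlan brief" output is the VLAN ID
    output = conn.send_command("show vlan brief")
    return any(line.split()[:1] == [str(vlan_id)] for line in output.splitlines())


# Pass 1: stop if the VLAN already exists on any target device
for host in SWITCHES:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="secret")
    exists = vlan_exists(conn, VLAN_ID)
    conn.disconnect()
    if exists:
        raise SystemExit(f"VLAN {VLAN_ID} already exists on {host} -- stopping")

# Pass 2: create the VLAN on every device
for host in SWITCHES:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="secret")
    conn.send_config_set([f"vlan {VLAN_ID}", f"name {VLAN_NAME}"])
    conn.disconnect()

Notice that the two loops mirror the two passes in the pseudocode: check everywhere first, then create everywhere.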

 

So What?

 

The so what here is that if following the kind of nested logic above seems easy, then in all likelihood coding will, too. Much of the effort in coding is figuring out how to break a problem down into the right sequence of steps, the right loops, and so forth; in other words, the correct flow of events. To be clear, choosing a language is important too, but without a grasp of the underlying sequence of steps necessary to achieve a goal, expertise in a programming language isn't going to be very useful.

 

My advice to a newcomer taking the first steps down the Road to Code (cheesy, I know) is to look at the list of tasks that would ideally be automated and see if they can be broken down into logical steps, then break those steps down even further until there's a solid plan for the approach. Think about what needs to happen in what order. If information is being used in a particular step, where did that information come from? Start thinking about problems with a methodical, programming mindset.

 

In this series of posts, it's obviously not going to be possible to teach anybody how to code. Instead, I'll be looking at how to select a linguistic weapon of choice, how to learn to code, and ways to get started and build on success, and finally I'll offer some advice on when coding is the right solution.


The IT Training Quandary

Posted by mbleib Apr 27, 2017

What do you do when your employer says no more training? What do you do when you know that your organization should move to the cloud or at least some discrete components? How do you stay current and not stagnate? Can you do this within the organization, or must you go outside to gain the skills you seek?

 

This is a huge quandary…

 

Or is it?

 

Not too long ago, I wrote about becoming stale in your skill sets, and how that becomes a career-limiting scenario. The “gotcha” in this situation is that often your employer isn't as focused on training as you are. The employer may believe in getting you trained up, but you may feel that the training on offer is less than marketable or forward-thinking. Or, worse, the employer doesn’t feel that training is necessary. They may view you as capable of doing the job you’ve been asked to do, and see the movement toward future technology as not mission critical. Or, there just might not be budget allotted for training.

 

These scenarios are confusing and difficult. How do you deal with the disparity between what you want and what your employer wants?


The need for strategy, in this case, is truly critical. I don’t advocate misleading your employer, of course, but we are all looking out for ourselves and what we can do to advance our careers. Some people are satisfied with what they’re doing and don’t long to sharpen their skills, while others are like sharks, not living unless they’re moving forward. I consider myself to be among the latter.

 

Research free training options. I know, for example, that Microsoft has much of its Azure training available online for no cost. Again, I don’t recommend boiling the ocean, but you can choose strategically what you want to learn. Of course, the specific course you wish to take might force you to actually pay for the training you seek.

 

Certainly, a sandbox or home lab environment, where you can build up and tear down test platforms, provides self-training. Of course, getting certifications that way is somewhat difficult, as is gaining access to the right tools to accomplish your training in the ways the vendor recommends.

 

I advocate researching a product category that would benefit the company in today’s environment, but can also act as a catalyst for the movement to the cloud. Should that be on the horizon, the most useful on-ramp is likely Backup as a Service or DR as a Service. So research into new categories of backup, like Cohesity, Rubrik, or Actifio, where data management, location, and data awareness are critical, can assist the movement of the organization toward cloudy approaches. If you can effectively sell the benefits of your vision, your star should rise in the eyes of management. Sometimes it may feel like you’re dragging the technology behind you, or pushing undesired tech toward your IT management, but fighting the good fight is well worth it. You can orchestrate a cost-free proof of concept on products like these to facilitate the research, and thus prove the benefit to the organization without significant outlay.

 

In this way, you can guide your organization toward the technologies that are most beneficial to them by solving today’s issues while plotting forward-thinking strategies. Some organizations are simply not conducive to this approach, which leads me to my next point.

 

Sometimes, the only way to better your skills, or improve your salary or stature, is outside your current organization. This is a very dynamic field, and movement from vendor to end-user to channel partner has proven fluid. If you find that you’re just not getting satisfaction within your IT org, you really should consider whether moving on is the right approach. This drastic step should be taken with caution, as the appearance of hopping from gig to gig can be viewed by an employer as a negative. However, there are times when the only way to move upward is to move onward.

Heading to Salt Lake City this week for the SWUG meeting on Thursday. If you are in the area I hope you have the chance to stop by and say hello. I'll be there to talk data and databases and hand out some goodies. I haven't been to Salt Lake City in a few years, and I'm looking forward to being there again even if there is a chance of snow in the forecast.

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Steve Ballmer Serves Up a Fascinating Data Trove

As someone who loves data, I find this data project to be the best thing since Charles Joseph Minard. Head over to https://usafacts.org/ and get started on finding answers to questions you didn't know you wanted to ask.

 

The New York Times to Replace Data Centers with Google Cloud, AWS

Eventually, we will hit a point where *not* having your data hosted by a cloud provider will make headlines.

 

Do You Want to be Judged on Intentions or Results?

Short but thought-provoking post. I learned a long time ago that no one cares about effort; they only care about results. But I didn't stop to think about how I want to be judged, or how I could control the conversation.

 

Cybersecurity Startup Exposed Hospital Network Data in Demos

Whoops. I'm starting to understand why they didn't earn that contract.

 

Microsoft is Bringing AI and More To SQL Server 2017

In case you missed it, last week Microsoft announced the new features coming in SQL Server 2017. It would appear that Microsoft sees the future of data computing as including features that go beyond just traditional storage.

 

Windows and Office align feature release schedules to benefit customers

Microsoft also announced fewer updates to its products. But what it is really announcing is the transition from traditional client software to subscription-based software for core products such as Office and Windows.

 

Uber tried to fool Apple and got caught

If you were looking for another reason to dislike how Uber operates as a company, this is the link for you.

 

Took the family for a drive to the Berkshires last Friday and realized that my debugging skills are needed everywhere I go:

[photo]

Hybrid IT continues to grow as more agencies embrace the cloud, so I wanted to share this blog written last year by our former Chief Information Officer, Joel Dolisy, which I think is still very valid and worth a read.

 

 

Most federal IT professionals acknowledge that the cloud is and will be a driving component behind their agencies’ long-term successes. However, no one expects to move all of their IT infrastructure to the cloud.

 

Because of regulations and security concerns, many administrators feel it’s best to keep some level of control over their data and applications. They like the efficiencies that the cloud brings, but they aren’t convinced that it’s suitable for everything.

 

Hybrid IT environments offer benefits, but they can also introduce greater complexity and management challenges. Teams from different disciplines must come together to manage various aspects of in-house and cloud-based solutions. Managers must develop special skillsets that go well beyond traditional IT, and new tools must be deployed to closely monitor this complex environment.

 

Here are a few strategies managers can implement to close the gap between the old and the new:

 

1. Use tools to gain greater visibility

Administrators should deploy tools that supply single access points to metrics, alerts, and other data collected from applications and workloads, allowing IT staff to remediate, troubleshoot, and optimize applications, regardless of where they may reside.

 

2. Use a micro-service architecture and automation

Hybrid IT models will require agencies to become leaner, more agile, and more cost-effective. Traditional barriers to consumption must be overcome, and administrators should gain a better understanding of APIs, distributed systems, and overall IT architectures.

 

Administrators must also prepare to automatically scale, move, and remediate services.

 

3. Make monitoring a core discipline

Maintaining a holistic view of your entire infrastructure allows IT staff to react quickly to potential issues, enabling a more proactive strategy.

 

4. Remember that application migration is just the first step

Migration is important, but the management following the initial move might be even more critical. Managers must have a core understanding of an application’s key events and performance metrics and be prepared to remediate and troubleshoot issues.

 

5. Get used to working with distributed architectures

Managers must become accustomed to working with various providers handling remediation as a result of outages or other issues. The result is less control, but greater agility, scalability, and flexibility.

 

6. Develop key technical skills and knowledge

Agency IT professionals need to learn service-oriented architectures, automation, vendor management, application migration, distributed architectures, API and hybrid IT monitoring, and more.

 

7. Adopt DevOps to deliver better service

DevOps breaks down barriers between teams, allowing them to pool resources to solve problems and deliver updates and changes faster and more efficiently. This makes IT services more agile and scalable.

 

8. Brush up on business skills

Administrators will need to hone their business-savvy sides. They must know how to negotiate contracts, become better project managers, and establish the technical expertise necessary to understand and manage various cloud services.

 

Managing hybrid IT environments takes managers outside their comfort zones. They must commit to learning and honing new skills, and use the monitoring and analytics tools at their disposal. It’s a great deal to ask, but it’s the best path forward for those who want to create a strong bridge between the old and the new.

 

Find the full article on Government Computer News.

The other day, as is often the case when an engineer is deep in a troubleshooting task that requires a restart if interrupted, I got a request for advice. “Hey, if you have a second I wanted to ask a question about standardizing DevOps tools. Should my friend use Chef, Puppet, or something else to get DevOps going?” He couldn't have known the cost of that task switch, so I did my best impression of the wisest help desk gurus on THWACK. I took a breath, found a smile, and answered the question.

 

“Standardizing” the tools of DevOps is anathema to the goal of DevOps; it's a bad habit carried over from old-school, under-resourced IT. With waterfall-based, help desk interrupt-driven, top-down IT, there’s too often a belief that if only the organization would adopt a Magic Tool, all would be well. DevOps, and more correctly the Agile principles that beget DevOps as an outcome, is bigger than any tool, vendor, or technology.

 

For an organization to successfully make DevOps work, especially to achieve its promise of breaking the logjams blocking the digital transformation the enterprise so desperately wants, standardizing Agile principles and methods should be the real goal. For example, if the Ops team adopts Scrum and comes to value ScrumMasters who resist corrupting urges to grow scope after sprints begin, it doesn’t matter whether the team chooses Ansible, Chef, Puppet, or AWS or Azure services for automation.

 

If critical teams standardize on methods that result in predictable, quality outcomes, they can each choose the tools that work best for them, or change them as needed to take advantage of new features. Tools selection or replacement becomes just one more element in the product backlog, to be balanced against business goals, like everything else. It substantially reduces the tendency to paralyze the team waiting on The Penultimate Tool of the Ages, before it can even get started.

 

Where standardizing DevOps methods over tools really pays off is in assured quality. If a team standardizes on a principle of Minimal Acceptable Monitoring, it will ensure throughout dev, testing, deployment, and ops that the right tools are used to quantify performance and user experience. This crucial measure of service quality then informs sprint goals, increasing quality and even efficiency over time. Even better, by adopting effective (versus impossibly idealized) Continuous Deployment, IT can help ensure a DevOps practice in which often-overlooked goals like security awareness are part of every change rather than an occasional review project.

 

If your aspiring pre-DevOps organization makes only one decision on a standard, make it this: everyone learns the key principles of Agile. Note: I’m not saying anything about Scrum, Chef, stand-up meetings, or the Kanban board that runs my house chores. I’m not talking about actual adoption, timelines, project division, team realignment, or even two-pizza teams. The point is not to be prescriptive in the early stages. Resist the IT urge to go immediately to implementation, resolution, and ticket closure.  Let.. this.. soak.. in.  Dream about how you’d build IT if you could start over without any existing technology or processes. How would you solve the macro requirement: how do I delight the humans who use technology? And a final pro tip: standardization on Agile principles is best done over several sessions as a team, offsite, with adult beverages to go with the pizza.

Has this situation happened to you? You've dedicated your professional career -- and, let's be honest, your life -- to a subject, only to find “that's not good enough.” Maybe it comes from having too many irons in the fire, or it could be that there are just too many fires to be chasing.

 

Ericsson (1990) says that it takes 10,000 hours of deliberate practice (20 hours a week for 50 weeks a year, over ten years = 10,000) to become an expert in almost anything.

 

I’m sure you’ve heard that Ericsson figure before, but in any normal field, the expectation is that you will gain that expertise over the course of ten years. How many of you can attest to spending 20 hours a day, for days or even weeks in a row, as you tackle whatever catastrophe the business demands, often driven by a lack of planning on their part? (Apparently, a lack of planning IS our emergency when it comes to keeping that paycheck coming in!)

 

I got my start way back in Security and Development (the latter of which I won’t admit to if you ask me to code anything :)). As time progressed, the basic underpinnings of security began delving into other spaces. The message became, “If you want to do ANYTHING in security, you need networking skills or you won’t get very far.” To understand the systems you’re working on, you have to have a firm grasp of the underlying operating systems and kernels. But if you’re doing that, you had better understand the applications, too. Oh, and in the late 1990s, VMware came out, which made performing most of this significantly easier and more scalable. Meanwhile, understanding what people do and how they do it only made sense if you understood systems operations. And nearly every task along the way wasn’t a casual few hours here or there, especially if your goal was to immerse yourself in something to truly understand it. Doing so would quickly become a way of life, and before long you'd find yourself striving for and achieving expertise in far too many areas, updating your skill sets along the way.

 

As my career moved on, I found far more overlap between specializations and subject matter expertise than clearly delineated silos. Where this would come to a head as a strong positive was when I worked with organizations as an SME in storage, virtualization, networking, and security, finding that the larger the organization, the more these groups would refuse to talk to each other. More specifically, if there was a problem, the normal workflow or blame assignment would look something like this picture. Feel free to provide your own version of events that you experience.

 

 

Given this all-too-typical approach to support by finger-pointing, having expertise in multiple domains becomes a strong asset, since security people will only talk to other security people. Okay, not always, but also, yes, very much always. And if you understand what they’re saying and where they’re coming from, pointing out, “Hey, do you have a firewall here?” means a lot more coming from someone who understands policy than from one of the other silos, for which they seemingly have nothing but disdain. Often, a simple network question posed by one network person to another could move mountains, because each party respects the ability or premise of the other. Storage and virtualization folks typically take the brunt of the damage because, as the easiest point of blame due to storage and hardware pool consolidation, they regularly have to prove that problems aren’t their fault. Finally, the application guys simply won’t talk to us half the time, let alone mention that they made countless changes, while demanding to know what WE did wrong to make their application suddenly stop working the way it should. (Spoiler alert: It was an application problem.)

 

Have you found yourself pursuing one or more domains of subject matter expertise, either to just get your job done, or to navigate the shark-infested waters of office politics? Share your stories!

Raise your hand if you have witnessed firsthand rogue or shadow IT. This is when biz, dev, or marketing goes directly to cloud service providers for infrastructure services instead of going through your IT organization. Let's call this Rogue Wars.

 

Recently, I was talking to a friend in the industry about just such a situation. They were frustrated with non-IT teams, especially marketing and web operations, procuring services from other people’s servers. These rogue operators were accessing public cloud service providers to obtain infrastructure services for their mobile and web app development teams. My friend's biggest complaint was that his team was still responsible for supporting all aspects of ops, including performance optimization, troubleshooting, and remediation, even though they had zero purview over, or access to, the rogue IT services.

 

They were challenged by the cloud’s promise of simplified self-service. The fact that it's readily available, agile, and scalable was killing them softly with complexities that their IT processes were ill-prepared for. For example, the non-IT teams did not follow proper protocol to retire the self-service virtual machines (VMs) and infrastructure resources that form the application stack. That meant they were paying for resources that no longer did work for the organization. Tickets were also being opened for slow application performance, but the IT teams had zero visibility into the public cloud resources. For this reason, they could only let the developers know that the issue was not within the purview of internal IT. Unfortunately, they were still handed the responsibility of resolving the performance issue.

 

This is how the easy button of cloud services is making IT organizations feel the complex burn. Please share your stories of rogue/shadow IT in the comments below. How did you overcome it, or are you still cleaning up the mess?

I missed a milestone announcement! The Actuator celebrated its one-year anniversary last week! I created this series as a fun way to keep in touch with everyone here on THWACK on a weekly basis. I never thought I would manage to keep it going, every week, for a full year. Thanks to everyone for their support. Here's to another year of mindless links!

 

In unrelated news, I'm putting the finishing touches on my slides for the Salt Lake City SWUG next week. I hope you get the chance to stop by and say hello.

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Printed titanium parts expected to save millions in Boeing Dreamliner costs

Not sure how I feel about flying in an airplane that was made on a 3D printer. I'm also wondering what this means for the future of manufacturing.

 

A New, More Rigorous Study Confirms: The More You Use Facebook, the Worse You Feel

Pretty sure Zuckerberg is going to unfriend Harvard after reading this article.

 

Microsoft Says Users Are Protected From Alleged NSA Malware

This seems to be a common trend lately: hackers release information, and companies respond by saying that everything is okay if you are up to date with patches. I can't help but feel we are pawns in a much larger game.

 

Architecting Microsoft SQL Server on VMware

For all the people who have ever told me "virtualizing database servers is hard," I present to you the updated guidelines from VMware. SPOILER ALERT: It's not that hard to virtualize your database workloads.

 

Nintendo Discontinues the NES Classic Edition

Because it was too popular, apparently.

 

Early Macintosh Emulation Comes to the Archive

Since we are walking down memory lane with the NES, here's a look at what the early Macintosh was like.

 

Emoji for fun and profit

This video with the CTO of Slack runs 30 minutes, but for anyone who has ever wondered about the history of emoji, it's worth a view. I can't be the only one, right?

 

As the kids scoured my yard for eggs yesterday afternoon, I found myself thinking this exact thought:

[image]

By Joe Kim, SolarWinds Chief Technology Officer

 

It can be truly astounding to think about the scale of today’s largest government networks, which are growing larger and more complex every day.

 

As a public sector IT pro, it may seem like an impossible challenge to manage this growing behemoth. Ever-increasing numbers of network devices, servers, and applications give you less leeway for downtime, hiccups, or problems of any sort.

 

There is a range of strategies that government IT pros can employ to support network growth and scalability while helping to ensure that all architectural and infrastructural requirements are met, and system failover scenarios are accounted for.

 

As the IT environment expands, it becomes more important for monitoring and management systems to scale to keep up with growth. Most monitoring systems are built with the following elements, each with its own requirements and challenges to scale:

 

  • A server that hosts the monitoring product and polls for status and performance
  • A database where the polled information is stored for historical data access and reporting
  • A web console for software management, data visualization, and reporting

 

Within this environment, three primary variables will affect a system’s scalability:

 

  1. Infrastructure size: The number of monitored elements (where an element is defined as a single, identifiable node, interface, or volume), or the number of servers and applications that can be monitored.
  2. Polling frequency: The interval at which the monitoring system polls for information. For example, statistics collected every few seconds instead of every minute will make the system work harder, and requirements will increase.
  3. The number of simultaneous users accessing the monitoring system.
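To get a feel for how the first two variables compound, here is a rough back-of-the-envelope calculation in Python; the element count and polling interval are hypothetical:

elements = 5000          # monitored nodes, interfaces, and volumes
interval_seconds = 60    # how often each element is polled

polls_per_day = elements * (24 * 60 * 60 // interval_seconds)
print(polls_per_day)     # 7,200,000 polls per day at a 60-second interval

Drop the interval to six seconds and those same 5,000 elements generate ten times the load, which is why polling frequency deserves as much scrutiny as element count.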

 

Those are the basic factors that determine a monitoring system's scalability. Now, let’s move on to ways to manage that environment.

 

One option is a command center, which is particularly well suited to agencies with multiple regions or sites where the number of nodes to be monitored in each region would warrant both localized data collection and storage. It works well for regional teams that are responsible for their own environments and require autonomy over their monitoring platform. While the systems are segregated between regions, all data can still be accessed from the centrally located console.

 

Additional scalability tips

 

There are several additional strategies that will help manage an agency’s growing infrastructure:

 

Add polling engines: Distributing the polling load for the monitoring system among multiple servers will provide scalability for large networks.

 

Add web servers: Additional web servers can help support increasing numbers of concurrent monitoring sessions, helping to ensure that more users have uninterrupted web access to network monitoring software.

 

Add a failover server: To help ensure the monitoring system is always available, install a failover mechanism that will switch monitoring system operation to a secondary server if the primary server should fail.

 

Agency networks will certainly get large. It's the nature of an increasingly technically driven government. While it may seem overwhelming, implementing these few tactics will help IT managers embrace the growth and ultimately realize its value.

 

Find the full article on Government Computer News.

[photo: the Head Geeks]

 

Recently, a rare moment of alignment occurred here at SolarWinds. All the Head Geeks came home to roost, brainstorm, debate, break bread together, and simply bask in the warm glow of friendship and camaraderie.

 

But the event, which only happens two or three times a year due to our speaking and convention appearances, caused some confusion among the rest of the staff: they didn't know what to call it. Were we a gaggle of Geeks? A herd? A NERD herd?

 

Other collections have interesting names. There's a murder of crows, a conspiracy of ravens, an ostentation of peacocks, and an exaltation of larks. There's a troop of baboons and a shrewdness of apes. A parade of elephants and a bloat of hippopotami.

 

Even among humans, we have some interesting group names: a blush of boys, a hastiness of cooks, or a superfluity of nuns.

 

So, I thought I would put it out to the ever-creative THWACKizens:

 

What do YOU call a collection, gathering, grouping of Geeks? Are we:

  • A convention?
  • An argument or quibble?
  • An array or hash?
  • Or maybe a chaos or grok.

 

Share YOUR ideas in the comments below!

Back from Telford and SQLBits and looking forward to some time at home before I head off to the SWUG meeting in Salt Lake City in two weeks. If you can attend I do hope you stop by and say hello. I'd love the opportunity to talk data and databases!

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Why Uber Won't Fire Its CEO

If people really want the CEO gone, they need to find a way to get the stock price low enough. Until people start losing money, the bad behavior will continue.

 

Google adds fact-check findings to search and news results

A step in the right direction, but it won't overcome the real issue: there is a dire lack of trusted fact-checkers in the world. As such, we have no idea what information can be trusted.

 

Hackers set off Dallas' 156 emergency sirens over a dozen times

I suspect we will see more of these stories over the next 18 months, as state and local government agencies aren't at the top of the "tech-savvy" list, mostly due to inadequate funding.

 

Microsoft Snatches Up Deis To Boost Azure Kubernetes Tech

Microsoft grabs Deis right after Docker announces plans to offer Enterprise containers. These items may be related.

 

SQL Server Privacy Statement

SQL Server became the first Microsoft product to publish detailed information about what data is collected for usage feedback and how. This is a huge step forward, and it sets the bar high not only for other product teams but for other database platforms in the industry.

 

Can My Internet Provider Really Sell My Data? How Can I Protect Myself?

Yes, they always have been able to do this. Of course, there are ways for you to minimize the risk here, but at the end of the day, you have to place your trust in someone, whether that be your ISP or a company that provides a VPN.

 

It's that time of year: Easter egg season! Here's one my daughter made a few years back. I'm not sure it looks anything like me:

 

[photo: a decorated Easter egg]

Thirty-three entered but only one could win.

If you made our winner mad, you may lose a limb.

When you’re in a galaxy far far away, it’s dangerous to fly solo—

be sure to travel with a Wookiee who knows how to use a crossbow.

And when the Death Star is about to go kablooey,

the one you want by your side is the one we affectionately call, Chewie.

 

Chewbacca won this bracket battle with one hairy arm tied behind his back.

Round after round he won in a landslide, and ultimately earned the title of The Greatest Sidekick of All Time!

 

We interviewed Chewbacca after we crowned him as the winner. He excitedly exclaimed, “Uuuuuuuuuur Ahhhhhrrrrrr Uhrrrr Ahhhhhhrrrrrrr Aaaaarhg!”

I think that really says it all, folks.

 

[image: the final bracket]

 

 

What are your final thoughts on this year’s bracket battle?

 

Do you have any bracket theme ideas for next year?

 

Tell us below!

By Joe Kim, SolarWinds Chief Technology Officer

 

At our recent user group meeting in Washington, D.C., I had conversations with some of our Army customers, which reminded me of a blog written last year by our former Chief Information Officer, Joel Dolisy. It is exciting that our software can support their mission.

 

No one needs reliable connectivity more than the nation’s armed forces, especially during the heat of battle. But reliable connectivity often can be hampered by a hidden enemy: latency and bandwidth concerns.

 

The military heavily relies on Voice over Internet Protocol (VoIP) for calls, web conferencing, high-definition video sharing, and other bandwidth-heavy applications. While this might sound more like the communication tool for a business boardroom, it is equally applicable within the military, and compromised systems come with potentially life-altering consequences.

 

Use of these highly necessary communication tools can dramatically strain even the most robust communications systems. As such, Defense Department IT managers must make the most of them while helping to ensure that colleagues remain in constant contact with minimal lag, stuttering, or disconnection.

 

How can IT administrators and those in the field make sure their networks supply crucial connectivity to meet the needs of soldiers and commanders? How can they guarantee reliable data and communications anytime, anywhere?

 

We can look to the U.S. Army, which successfully deployed software and solutions that meet today’s essential need to remain connected. The Army’s recruiting message is “Army Strong.” Attaining that strength requires that troops be battle-ready the world over. Today, that also means maintaining uninterrupted communications and deploying technology that can monitor, analyze, and reduce—if not eliminate—network outages.

 

The Army developed the Warfighter Information Network-Tactical (WIN-T) to provide secure and reliable communications of all forms—voice, video, and data—from any location, at any time. The network is built on software that helps the Army manage networks in a number of ways while keeping communications seamless and smooth.

 

Network bandwidth analysis solutions help to ensure optimal network performance for even the most bandwidth-intense applications. Performance monitors identify potential performance issues that managers can rectify quickly before the problems interfere with communications or cause outages. Configuration management lets Army IT personnel easily manage multiple configurations for different routers and switches to help to ensure communication remains unimpeded. Traffic analysis maintains optimal network and bandwidth usage. The Army has also deployed solutions to monitor the quality of VoIP calls and WIN-T wide area network (WAN) performance, and troubleshoot issues through access to detailed call records.

 

A Communications Blueprint

 

In short, the organization is working hard to ensure that warfighters and others remain securely in touch at all times. After all, the heat of battle is when the need for communications is greatest, and helping to ensure that troops stay connected, from the front lines to the command post, is an absolute must.

 

Through WIN-T and supporting technologies, the Army laid out a communications blueprint that other defense organizations can—and should—emulate. It has created a solution that enables soldiers to maintain constant voice, video, and data communications, even when in remote and challenging spots and while on the move. It’s an impressive feat very much essential to today’s always-on warfighter.

 

Find the full article on Signal.

[photo: the author's son and dog]

On April 4, Seth Godin -- the writer I aspire to be like -- wrote "The Invisible Fence" (Seth's Blog: The invisible fence). In his usual eloquent yet terse style, he said:

"There are very few fences that can stop a determined person (or dog, for that matter).

Most of the time, the fence is merely a visual reminder that we're rewarded for complying.

If you care enough, ignore the fence. It's mostly in your head."

 

It caught my eye because once upon a time I looked into getting an Invisible Fence for my dog, pictured above. Also pictured above is my son, and to say the two were thick as thieves is an understatement. Aside from when he was at school, they went everywhere together. The boy thought the dog was his responsibility, at least that's what we'd told him. But our dog knew better. The boy was her human, a responsibility she took very seriously.

 

Which is why the Invisible Fence rep stood in my driveway, looked over at the dog and her human, and told me not to bother. "Dogs like that," he informed me, "guard their flock no matter what. If she hears him and decides she needs to be there, a 10-foot brick wall won't stop her, let alone a shock collar, no matter how high you turn it up. What it will do, though, is make her think twice about coming back."

 

Years later, with the dog laid to rest and her human almost grown, that comment has stuck with me.

 

How often, I'm left wondering, do we build fences?

Fences around our work.

Fences around our teams.

Fences around our interactions.

Fences around our relationships.

Fences around our heart.

 

Fences, which, as Seth writes, are mostly in our head.

 

And, like the salesman told me that day, fences that do nothing to keep others locked inside artificial boundaries, but do an amazing job of keeping them from coming back once they are free.

SolarWinds recently released the 2017 IT Trends Report: Portrait of a Hybrid IT Organization, which highlights the current trends in IT from the perspective of IT professionals. The full details of the report, as well as recommendations for hybrid IT success, can be found at it-trends.solarwinds.com.

 

The findings are based on a survey fielded in December 2016. It yielded responses from 205 IT practitioners, managers, and directors in the U.S. and Canada from public- and private-sector small, mid-size, and enterprise companies that leverage cloud-based services for at least some of their IT infrastructure. The results of the survey illustrate what a modern hybrid IT organization looks like, and show the cost benefits of the cloud as well as the struggle to balance shifting job and skill dynamics.

 

The following are some key takeaways from the 2017 IT Trends Report:

  1. Moving more applications, storage, and databases into the cloud.
  2. Experiencing the cost efficiencies of the cloud.
  3. Building and expanding cloud roles and skill sets for IT professionals.
  4. Increasing complexity and lacking visibility across the entire hybrid IT infrastructure.

Cloud and hybrid IT are a reality for many organizations today. They have created a new era of work that is more global, interconnected, and flexible than ever. At the same time, the benefits of hybrid IT introduce greater complexity and technology abstraction. IT professionals are tasked with devising new and creative methods to monitor and manage these services, as well as prepare their organizations and themselves for continued technology advancements.

 

Are these consistent with your organizational directives and environment? Share your thoughts in the comment section below.


 

For the first time in bracket battle 2017, I don’t think there were any surprises this round. The final four included two of the most popular bracket contestants we’ve ever seen—Groot & Chewbacca. They have continued to dominate each round and made getting into the finals look easy. Several of you predicted this final match-up on DAY 1 of the bracket battle! Team Watson & Team Pinky, you put up a good fight.

 

To see the final score for these match-ups, check out the polls below:

 

[polls: tomaddox, smoked_angus]

 

For the last time this year, it’s time to check out the updated bracket and vote for the greatest sidekick of all time! This one is for all the marbles!

You will have until April 9th @ 11:59 PM CDT to submit your votes & campaign for your favorite sidekick.

 

Access the bracket and make your picks HERE>>

Greetings from Telford, UK! I'm here at SQLBits delivering a full-day training session with datachick and enjoying the sunshine. If you are near Telford, stop by SQLBits on Saturday and you can listen to Karen and me debate database design. Yes, it is as exciting as it sounds!

 

Also, here's a reminder that the calendar says April, which means the year is about 25% over. This is about the time I like to ask people how they are doing on those goals they set for themselves at the start of the year. If you feel a bit behind, it’s not too late to get started!

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Old Microsoft IIS Servers Vulnerable to Zero-Day Exploit

Look, folks, if you are running 14-year-old, unpatched, internet-facing servers, then maybe you deserve what happens.

 

This is how Samsung plans to prevent future phones from catching fire

There's no such thing as bad press, right?

 

Facial recognition database used by FBI is out of control, House committee hears

Setting aside the privacy issues/concerns, I have one question about this report: If commercial software is “five years ahead” of what the FBI is using, why doesn’t the FBI use the commercial software?

 

Why I Always Tug on the ATM

Yep. I find myself doing this as well, every time I'm at an ATM, a gas station, or a supermarket.

 

Researchers steal data from CPU cache shared by two VMs 

Concerning, yes. But I still think the Cloud is far more secure than the data center on the 8th floor.

 

Seven Big Reasons to Move Backup to the Cloud

In case the previous article left you needing a reminder that the Cloud is quite useful, and awesome, for a variety of scenarios.

 

From my first visit to Telford with Lego daughter three years ago:

[photo]

By Joe Kim, SolarWinds Chief Technology Officer

 

We are in the process of wrapping up our next federal cybersecurity survey and we are eager to see the results. I fully expect foreign government threats to be near the top of the list, and I thought this would be a good time to remind folks of some security fundamentals, presented by my colleague Mav Turner, SolarWinds Senior Director, Product Management.

 

 

When we think of cyberattacks, we generally picture a lone wolf hacker or Anonymous-type organization. But foreign governments are also formidable threats. Scan recent headlines and you’ll see articles explaining that cyber hacks on Sony Pictures Entertainment and the Democratic National Committee—among many others—have been attributed to North Korea and Russia.

 

Last year’s SolarWinds federal cybersecurity study revealed foreign governments pose some of the most serious risks for cyberattacks. Results indicate an uptick in reported government-backed threats over the past few years, with reports increasing from 34 percent in 2014 to 48 percent last year.

 

As publicity surrounding breaches grows, the public's demand to attribute breaches to a specific government or nation-state and the expectation of an explanation grows as well. This "pressure cooker" climate complicates and sometimes politicizes decision-making for agencies.

 

While there is no magic bullet, concentrating on three fundamentals—process, people, and tools—can create a good foundation for a well-designed security posture. Here’s how agencies can make them work together.

 

Develop a Sound Security Process

 

Agencies must develop proactive, well-formulated plans that outline exact steps that must be taken in case of an intrusion, taking into account which employees have access to what information, and the solutions the agency will employ to monitor networks. A step-by-step management approach will help ensure that no data is left unguarded.

 

Invest in People and Education

 

All personnel—not just IT—should be informed about the varying types of existing threats. They should also know that their organizations could be targeted at any time. IT personnel who react to frontline security breaches must have an especially deep understanding of the tools used to manage and thwart threats.

 

The need to invest in people is underscored by the release of the federal cybersecurity work force strategy, an action plan from the White House’s Office of Management and Budget to find, develop, and expand the nation's cybersecurity talent in the public and private sectors.

 

Deploy the Proper Tools

 

Patch management and network automation software add layers of security, and use standardized device configuration and deployment automation to reduce configuration errors. The best-in-class network security tools also use change monitoring, alerts, configuration backups, and rollbacks to improve network reliability. 

 

Just as foreign governments use teams of people to attack, domestic agencies find strength in their numbers. Social media, networking groups, and threat feeds provide great tools for sharing information about the latest threats, and educating peers on ways to fortify networks. IT personnel should use them to stay ahead of potential attackers.

 

Organizations should band together. The most strategic defense against cyber breaches will come when federal, state, and local agencies—including law enforcement and other security personnel—across the United States share resources and work together to fight foreign intrusion into U.S. cyberspace.

 

Find the full article on Signal.


 

March madness brings April Awesomeness. It’s April, which means we’re getting close to the exciting and dramatic finish to Bracket Battle 2017! Round 3 had me hitting the refresh button and kept me guessing on who would win each matchup.

 

Let’s see who from the elite eight will move on to the final four:

 

  • Round 3: Groot vs Hermione Granger Watching these two battle for the lead was like watching a quidditch match. Hermione used every spell in the book, but it was Groot who won this match in the end. bigmclargehuge gave some insight into the struggle between Groot and Hermione: “This is a tough call for me. Hermione was a great companion in the story, but Groot was an integral team player. I voted against Groot vs. Ford Prefect, but I gotta go with Groot here. He just seems like the better sidekick!”
  • Round 3: Dr. Watson vs Samwise Samwise fought the good fight, but now he’s headed back to the Shire to eat some po-ta-toes. stevenastem commented, “I like Watson's approach to Holmes better than Sam's approach to Frodo. Plus I think you can argue that Sam is actually the hero, with Frodo his simpering sidekick. Sure, Frodo is 'in charge' but I think that was a mistake.”
  • Round 3: Pinky vs Garth Algar Garth’s Cinderella story finally came to an end this round. He was the underdog in the play-in round and managed to party all the way to the elite eight. Excellent. Pinky is definitely the underdog (er, mouse) taking on Chewbacca, the favorite to win, in the next round.
  • Round 3: Chewbacca vs Agent K Chewbacca continues his streak with another landslide win! ecklerwr1 is already predicting Chewbacca will win the whole thing: “Chewbacca is going ALL THE WAY!!!!!”

 

 

Were you surprised by any of the round 3 outcomes? Comment below!

 

To quote our friend Garth, it’s time to “live in the now” and move on to the next round of bracket battle. Check out the updated bracket & start voting for the ‘Distinguished’ round now! We need your help to determine who will move on to the finals in the ultimate sidekick bracket battle!

 

Access the bracket and make your picks HERE>>
