
SolarWinds has committed $75,000 to support The American Red Cross of Central & South Texas Region, All Hands Volunteers, and Feeding Texas in their disaster relief efforts assisting communities affected by Hurricane Harvey. In addition, SolarWinds will match U.S. employee donations to these organizations over the next 30 days on a $2 for every $1 basis.

Kevin B. Thompson Statement on Hurricane Harvey Disaster Relief

SolarWinds has also committed a minimum of 1,000 employee volunteer hours over the next six months to organizations that will continue recovery and rehabilitation efforts, including The American Red Cross, All Hands Volunteers, the Information Technology Disaster Resource Center, and any of the National Voluntary Organizations Active in Disaster (NVOAD) partner organizations. Employees will be able to contribute up to one week of their time without impact to their paid time off.

 

All of us at SolarWinds encourage THWACK members to join us in contributing to these organizations – whether you are able to give funds, or volunteer your time, your gifts are critically important to assist communities impacted by the disastrous flooding caused by Harvey.

 

To donate money to Hurricane Harvey relief, please visit:

To assist with volunteer efforts, please visit:

 

If you have any questions about the SolarWinds #GeeksThatGive response to Hurricane Harvey, please reach out to pr@solarwinds.com or jennebarbour.

Viva Las Vegas! This version of the Actuator comes to you from VMworld Las Vegas. If you are reading this at VMworld, there is still time to stop by the booth and say hello!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Going Multi-Cloud with AWS and GCP: Lessons Learned at Scale

Two key takeaways from this for me. First, this is the only company I’ve seen that has publicly stated they are using GCP. Second, the issues with AWS DR architecture (and old equipment in the Eastern U.S. region) make me smile when I think about how far ahead Microsoft Azure is when it comes to providing proper redundancy for customers.

 

Microsoft Tests Artificial Intelligence on its Drone in Nevada

I guess now that we have mastered self-driving cars, it’s time we build planes that can fly and think for themselves. I’m certain this will benefit humanity and not hasten the development of Skynet in any way.

 

MIT researchers use drone fleets to track warehouse inventory

That’s better. Start with something smaller than a plane that can track down items on a shelf…or humans. Skynet gets closer with each passing day.

 

CIA uses a secret tool to spy on NSA, FBI and other intel partners

This isn’t news. Countries spy on each other, even allied countries. And agencies inside those countries spy on each other, too. I don’t have data to verify this, but I suspect spying dates to the first time humans got together in tribes.

 

Leak of >1,700 valid passwords could make the IoT mess much worse

Making a bad situation worse. No idea at what point we will finally understand that our current security models are horribly broken, but I suspect it will take some Black Mirror scenarios.

 

Meet the man using data science to predict who dies next on Game of Thrones

Good to see people taking advances in computing and using them in ways to benefit all of mankind. Also? Tyene is as good as dead and we didn’t need a computer to tell us that.

 

The life-saving browser shortcut everyone should know

Okay, that’s cool. I can’t believe I hadn't heard about this before, either.

 

I'm not sure if I have the minimum pieces of flair yet.

As companies race to the cloud and adopt DevOps culture as part of that process, it's becoming more apparent that the word "monitoring" has a significantly different meaning within the walls of a data center than it does in the DevOps huddle area. But what, if anything, is actually different? Or is it all just jargon and an attitude of not invented here (NIH)?


In my panel discussion, 'When DevOps Says "Monitor,"' I will be joined by Nathen Harvey, VP of Community Development at Chef, Michael Cote, Director of Technical Marketing at Pivotal, and Clinton Wolfe, cloud architect and DevOps practice lead (and current "hero for hire" seeking his next adventure). In our conversation, we'll break down expectations, and yes, even bad (monitoring) habits in the DevOps world in a way that will make a traditional monitoring engineer feel right at home.

 

Because it was so successful last year, we are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.

 

In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks, such as myself, and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!


When the Cloud Wins

Posted by scuff Aug 29, 2017

Now that we’ve torn apart the cloud dream, it’s time to give it some credit. Let’s look at some scenarios where the cloud makes sense.

 

Email workloads: Faced with the costs of replacing an aging Microsoft Small Business Server (hardware, software, implementation, and ongoing support), the cloud numbers can stack up. This market has been an easy win for Microsoft, who sweetened the deal with other Office 365 products in your license. They’ve lost some of these to Google’s GSuite instead, but for the purpose of this discussion, it’s all cloud. Enterprises commonly pick this workload first in their cloud endeavours. It’s not rocket science. Hybrid directory integration is good, and they’ve saved the overhead of a number of Exchange servers in the process. Your mileage may vary.

 

Consolidating hardware/reducing support costs: Take the previous paragraph on email and apply that concept to any other workload you might stick in the cloud. If you can get rid of some hardware and the associated infrastructure support costs, the cloud numbers could stack up, especially if your financial controller wants to rein in asset spending but has a flexible operations budget. Those two separate financial buckets affect a company very differently.

 

Goodbye, experts: Why hire a DBA to manage the performance of an on-prem application when a SaaS app can do the trick? With the cloud, you’re paying a monthly fee for the app to just work. If you want to add extra security features, tick a box and license them, instead of running an RFP process to find and select a solution and a vendor. Worried about DevOps? Make it the SaaS provider's problem and enjoy the benefits of the latest product release just by logging in. Need high availability and a 24x7 NOC? Use the cloud, as they’ll be up all night keeping it running for the rest of their customers, anyway.

 

Testing, testing, 1,2,3: Want to run a proof of concept or play with a new solution? Run it up in the cloud, no hardware required, and stop paying for it when you are done. For software developers, PaaS means they aren’t held back while an IT pro builds a server and they don’t have the tedium of patching it, etc.

 

Short term or seasonal demand: Releasing a new movie? You’ll have high demand for the trailer, over opening weekend, and in the short term. Two years later, though? Not so much. Village Roadshow in Australia just announced a big move to Microsoft Azure. If you have a short term project or a seasonal demand peak like the holidays, don’t underestimate the elasticity of cloud resources.

 

Modern capabilities: Satya wasn’t kidding when he said Microsoft was cloud-first (now AI-first). New features are released constantly in the cloud first, because the vendor controls all of the moving parts and doesn’t have to take into account backward compatibility or version mismatches across the customer base. There are also a ton of SaaS vendors with capabilities you’d have difficulty finding in installable, on-premises software.

 

SaaS integration: Thanks to APIs, cloud solutions are really good at being connected, as data isn’t locked away on inaccessible internal networks. Yes, the data storage thing is both a blessing and a curse. SaaS integration has led to some great workflow and productivity tricks, whether the apps talk to each other directly or they’re doing a trigger-and-action dance with Zapier, IFTTT, or Microsoft Flow.
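
To make that trigger-and-action dance concrete, here's a minimal Python sketch of the pattern: one app pushes a small JSON payload to an incoming webhook, and the automation platform takes it from there. The URL, field names, and ticket example below are placeholders, not a real integration, and the third-party requests package (pip install requests) is assumed.

```python
import requests

# Hypothetical incoming-webhook URL (placeholder, not a real endpoint)
ZAP_HOOK_URL = "https://hooks.zapier.com/hooks/catch/123456/abcdef/"

def notify_new_ticket(ticket_id: str, summary: str) -> None:
    """Push a small JSON payload to the webhook; the automation platform
    can then fan it out to email, chat, a spreadsheet, or another SaaS app."""
    payload = {"ticket_id": ticket_id, "summary": summary}
    response = requests.post(ZAP_HOOK_URL, json=payload, timeout=10)
    response.raise_for_status()  # fail loudly if the hook rejects the call

if __name__ == "__main__":
    notify_new_ticket("INC-1001", "Disk space warning on file server")
```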

 

So, there’s a bright shiny cloud picture for you. Now, don’t tear apart these ideas too much in the comments. I want to hear where you think the cloud is a win, and where you’ve been glad you’ve moved something to the cloud.

If you’ve developed a desire to learn more about hacking or to try a little hacking yourself, there are a lot of resources at your disposal to get started. Keep in mind, our premise here is how to hack without breaking any laws, which is important to remember. That being said, there’s nothing illegal about talking about it, is there? On the contrary, talking about hacking ensures that the larger community of IT and security professionals are sharing ideas and techniques for prevention, detection, and mitigation. This makes everyone’s overall security posture a better one.

 

Books, podcasts, blogs, vlogs, and even full-blown conferences are all dedicated to sharing this knowledge and can be resources for you to get started. Let’s look at a few, as examples.

 

Books

 

Right off the bat, I’m going to recommend a couple of books written by Kevin Mitnick, who is arguably the world’s most infamous hacker. “Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker," should be mandatory reading, in my opinion, for anyone interested in learning a lot more about the early days of hacking. Kevin has a number of other books as well and now works on the “right side” of the fence as a security consultant.

 

Go to Amazon and search for the word “hacking” and be prepared to sift through just over 13,000 results. Here, we can see once again how many different meanings there are for this concept. A lot of these results aren’t related to what we’re looking for at all, but a review of the more popular titles and customer ratings will easily identify the relevant titles versus the chaff.

 

Books such as “The Hacker Playbook 2,” “How to Hack like a God,” and “The Hacking Bible,” will give you a solid understanding of tools, commands, and techniques you can try yourself to build a foundation of knowledge on how to hack and how you can be hacked.

 

Blogs and Vlogs

 

Hak5 is probably the top example of a hacker how-to site and video series that comes to mind. Hak5 started back in 2005 with a video series on simple hacks and tricks for technology enthusiasts to try at home. Over the years, it has evolved into a full-fledged penetration testing and information security training series, including a store where one can purchase some fun hacker tools.

 

Hackaday is another popular site that focuses on hacking as an alternative method to accomplishing a task, and less about computers or security.

 

Brian Krebs operates his news site, Krebs on Security, which is an excellent resource for keeping up on zero-day exploits and current news from the hacking/malware world. Krebs was a victim of a hacker and has dedicated himself to learning as much as he can about the exploits and hacks used. Now he educates people on security and how to mitigate threats.

 

There are many others and the list could go on, but search around and find some sites that look interesting, watch some videos, and try some of the techniques presented. Be cautious when searching for terms like “hacking” and “exploit,” however!

 

Conferences

 

Believe it or not, there are conferences you can attend that are dedicated to informing and educating the hacking community. The two prominent ones are Black Hat and DEF CON. Now, the name Black Hat for a convention might make you second guess whether you want to attend or not. In fact, Black Hat is more of the IT pro event filled with training and security briefings intended to prevent malicious attacks. DEF CON, on the other hand, is more of a carnival for hackers of all types.

 

Both were founded by Jeff Moss, known as “Dark Tangent” but seem to be aimed at different audiences. Granted, you’ll likely find people that attend both, and both events will have their share of white hats, black hats, and gray hats. DEF CON even hosts a contest called “Spot the Fed” referencing the attendance of several members of Federal law enforcement and cyber-security teams.

 

Now if you’re a first-time attendee to either of these events, it’s good to develop some safe practices with any electronic devices you might wish to bring along, for very obvious reasons. The entire place is filled with hackers, who might decide to have some fun with you. Turn off your wifi, turn off your Bluetooth, turn off Air Drop, or maybe even leave all of your electronics at home.

 

Past recorded sessions from both conferences are available on YouTube, and I’d invite you to check them out if you aren’t able to attend in person.

 

Information overload

 

It may seem like a lot, but regardless of the sources you choose, there is no shortage of information or media out there for budding InfoSec professionals. Find a place to start and jump in, whether it’s reading up on the history of hacking or deciphering current zero-day exploits that you might be facing at your place of employment.

 

You may find a particular track that you want to focus on and can then begin to narrow your research to that one area. Specialization can be of some benefit, but a well-rounded security posture is always best.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Agencies rely heavily on their applications, so I thought I’d share a blog written earlier this year by my SolarWinds colleague, Patrick Hubbard.

 

IT administration is changing, or expanding, to be precise. We can no longer think of just “our part” of IT, “our part” of the network, or “our part” of the help desk. For agencies to efficiently and effectively manage IT, administrators must have visibility across the entire application stack, or “AppStack.” Only with visibility to all infrastructure elements that are supporting an application deployed in production can they understand how those elements relate to each other.

 

This, however, is easier said than done. There’s a good chance there are things in your AppStack that you’re not even aware of—things that, for better or worse, you must uncover and track to gain critical cross-stack visibility. Meeting the challenge will likely require broadening traditional definitions of an AppStack to include things we traditionally thought of as out of scope.

 

What kinds of things? I’m glad you asked.

 

Not too long ago, application management was simple. Well, less complex at least. Applications were more limited in extent and sat on a mainframe, micros, or PCs with local storage. Today, the complexity of applications—with shared storage and server virtualization—begs a restatement of the meaning of an application to include the entire AppStack. Today, the AppStack includes everything from the user’s experience to the spindles and everything in between. Yes, everything in between.

 

As many agencies are moving to a hybrid-cloud environment, let’s take the example of a hybrid-cloud hosted virtual classroom system. Are HTTP services for the presentation and data tiers included? Yes, as are storage and virtualization, along with core, distribution, WAN, and firewall. What about VPN and firewall to the Virtual Private Cloud (VPC)? Yes. The cloud has become an important component of many application stacks.

 

There are even things you might have assumed aren’t components of a traditional application—therefore are not part of the AppStack—that are, in fact, critical to uptime. Some examples are security certificate expiration for the web portal, the integration of durable subscription endpoint services, and the Exchange™ servers that transmitted emails and relayed questions. Yes, these are part of the AppStack. And, yes, you should be monitoring all of these elements.
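
Certificate expiration is one of those easy-to-forget AppStack elements that is also easy to watch programmatically. Here's a minimal Python sketch that checks how many days remain on a web portal's TLS certificate; the hostname is a placeholder, and in practice a monitoring platform would run a check like this for you on a schedule.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Return how many days remain before the TLS certificate presented by
    host:port expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # "portal.example.gov" is a stand-in for your own web portal's hostname
    print(days_until_cert_expiry("portal.example.gov"))
```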

 

You will achieve the greatest success in application monitoring and management by expanding the definition of AppStack to include all elements of your environment. Encourage all the Federal IT pros on your team to broaden their perspective. Use your network monitoring systems to discover and document every element of the connectivity chain, and identify and document the links between them. You will likely discover more than a few previously unmonitored services in your AppStack that you need to start keeping an eye on. And in the long run, proactive observation tends to decrease unplanned outages and can even make your weekends a bit better.

 

Find the full article on Federal Technology Insider.

Security concerns are getting lots of media coverage these days, given the massive breaches of data that are becoming more common all the time. Businesses want to have a security plan, but sometimes don't have the resources to create or implement one. Protect your infrastructure with the simple features that a SIEM application provides. Simple, step-by-step implementation allows you to lock in a solid security plan today.

 

In my THWACKcamp 2017 session, "Protecting the Business: Creating a Security Maturity Model with SIEM," Jamie Hynds, SolarWinds Product Manager, and I will present a hands-on, end-to-end look at how to configure and use Log & Event Manager, including configuring file integrity monitoring, understanding the effects of normalization, and creating event correlation rules.

 

In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks -- such as Leon and me -- and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

We are bringing our expanded-session, two-day, two-track format from THWACKcamp 2016 to THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!

For those of you living in Texas and in Harvey's path please check in. We're worried about you.

Infrastructure as Code is about the programmatic management of IT infrastructure, be it physical or virtual servers, network devices, or the variety of supporting appliances racked and stacked in our network closets and data centers.

 

Historically, the problem has been managing an increasing number of servers after the advent of virtualization. Spinning up new virtual servers became a matter of minutes creating issues like server sprawl and configuration drift. And today, the same problem is being recognized in networking as well.

 

Languages such as Python, Ruby, and Go, along with configuration management platforms such as Puppet and Chef are used to treat servers as pools of resources called on in configuration files. In networking, engineers are taking advantage of traditional programming languages, but Salt and Ansible have emerged as dominant frameworks to manage large numbers of network devices.
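
As a small taste of what that looks like in practice, here's a hedged Python sketch using Netmiko, a library commonly used for this kind of network device automation. The device details are placeholders, and it assumes Netmiko is installed (pip install netmiko) and that you're pointing it at a lab device you're authorized to touch.

```python
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",      # documentation/test address, not a real router
    "username": "netops",
    "password": "changeme",
}

conn = ConnectHandler(**device)
version = conn.send_command("show version")
running = conn.send_command("show running-config")
conn.disconnect()

print(version.splitlines()[0])
# "running" could now be diffed against a golden config or committed to Git.
```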

 

What problems does treating infrastructure as code solve?

 

First, provided there’s enough storage capacity, we’re able to spin up new servers so quickly and easily that many organizations struggle with server sprawl. In other words, when there’s a new application to deploy, we spin up a new virtual machine. Soon, server admins, even in medium-sized organizations, find themselves dealing with an unmanageably large number of VMs.

 

Second, though using proprietary or open tools to automate server deployment and management is great for provisioning (configuration management), it doesn’t provide a clear method for continuous delivery, including validation, automated testing, or version control. Configurations in the environment can drift from the standard with little ability to track, repair, or roll back.
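
Drift detection doesn't have to be fancy to be useful. The sketch below, with assumed file paths, simply diffs a saved "golden" configuration against the most recent running configuration and prints anything that has wandered; a real pipeline would feed that diff into review, repair, or rollback.

```python
import difflib
from pathlib import Path

def config_drift(golden_path: str, running_path: str) -> list[str]:
    """Return a unified diff; an empty list means no drift."""
    golden = Path(golden_path).read_text().splitlines()
    running = Path(running_path).read_text().splitlines()
    return list(difflib.unified_diff(golden, running,
                                     fromfile="golden", tofile="running",
                                     lineterm=""))

# File locations are assumptions for the example
drift = config_drift("configs/web01.golden.conf", "configs/web01.running.conf")
if drift:
    print("\n".join(drift))   # candidate for repair or rollback
else:
    print("No drift detected")
```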

 

Treating the infrastructure as code means adopting the same software development tools and practices that developers use to make provisioning infrastructure services more efficient, to manage large numbers of devices in pools of resources, and to provide a methodology to test and validate configuration.

 

What tools can we use to manage infrastructure the same way developers manage code?

 

The answer to this question is, well, it depends. The tools we can use depend on what sort of devices we have in our infrastructure and the knowledge of our SysAdmins, network admins, and application developers. The good news is that Chef, Puppet, Salt, and Ansible are all easy enough to learn that infrastructure engineers of all stripes can quickly cross the divide into the configuration management part of DevOps.

 

Having a working knowledge of Python or Ruby wouldn’t hurt, either.

 

Configuration Management

 

Chef and Puppet are open source configuration management platforms designed to make managing large numbers of servers very easy. They both enable a SysAdmin to pull information and push configuration files to a variety of platforms, making both Chef and Puppet very popular.

 

Since both are open source, an infrastructure engineer just starting out might find the maze of plugins, and the debate within the community about which is better, a little bit confusing. But the reality is that it’s not too difficult to go from the ground floor to managing an infrastructure.

 

Both Puppet and Chef are agent-based, using a master server with agents installed directly on nodes. Both are used very effectively to manage a large variety of platforms, including Windows and VMware. Both are written in Ruby, and both scale very well.

 

The differences between the two are generally based on elements that I don’t feel are that relevant. Some believe that Chef lends itself more to developers, whereas Puppet lends itself more to SysAdmins. The thing is, though, both have a very large customer base and effectively solve the same problems. In a typical organization of reasonable size, you’ll likely have a mix of platforms to manage, including VMware, Windows, and Linux. Both Chef and Puppet are excellent components to a DevOps culture that's managing these platforms.

 

Ansible and Salt are both agentless tools written in Python. And though they offer some support for Windows, Ansible and Salt are geared more toward managing Linux and Unix-based systems, including network devices.
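
Whatever platform you pick, the underlying model is the same: declare the desired state and only make a change when reality differs. Here's a toy Python illustration of that idempotent pattern for a single file; the path and contents are made up, and the real tools apply the same idea to packages, services, users, and device configs.

```python
from pathlib import Path

DESIRED_MOTD = "Authorized access only. Activity is logged.\n"

def ensure_file(path: str, desired: str) -> bool:
    """Converge a file toward its declared contents.
    Return True if a change was made, False if already compliant."""
    target = Path(path)
    current = target.read_text() if target.exists() else None
    if current == desired:
        return False            # already in the desired state: do nothing
    target.write_text(desired)  # converge toward the declared state
    return True

changed = ensure_file("/tmp/motd.demo", DESIRED_MOTD)
print("changed" if changed else "ok (no change needed)")
```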

 

Continuous Delivery

 

Continuous Delivery is about keeping configurations consistent across the entire IT environment, reducing the potential “blast radius” with appropriate testing, and reducing time to deploy new configurations. This is very important because using Chef or Puppet alone stops at automation and doesn’t apply the DevOps practices that provide all the benefits of Infrastructure as Code.

 

Remember that Infrastructure as Code is as much a cultural shift as it is the adoption of technology.

The most common tools are Travis CI and Jenkins. Travis CI is a hosted solution that runs directly off GitHub, while Jenkins runs on a local server. Both integrate with GitHub repositories. Some people like having total control with a local Jenkins server, while others prefer the ease of use of a hosted solution like Travis.
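
What actually runs inside those CI jobs can start very small. Below is a hedged example of an automated check, written as a pytest-style test, that a Jenkins or Travis CI build could run on every commit: it confirms that device variable files are valid JSON and carry the keys later stages depend on. The directory name and required keys are assumptions for the example.

```python
# test_inventory.py -- run with pytest as part of the CI job
import json
from pathlib import Path

REQUIRED_KEYS = {"hostname", "mgmt_ip", "ntp_servers"}

def test_device_vars_are_valid():
    for var_file in Path("inventory").glob("*.json"):
        data = json.loads(var_file.read_text())      # bad JSON fails the build
        missing = REQUIRED_KEYS - data.keys()
        assert not missing, f"{var_file.name} is missing keys: {missing}"
```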

 

To me, it’s not all that important which one a team uses. A SysAdmin adopting an Infrastructure as Code model will benefit either way. Integrating one or the other into a team’s workflow will provide tremendously better continuous delivery than simple peer review and ad hoc lab environments.

 

Version Control

 

Version control is one component that, in my experience, infrastructure engineers instantly see the value in. In fact, in the IT departments in which I’ve worked, everyone seems to have some sort of cobbled together version control system (VCS) to keep track of changes to the many configuration files we have.

 

 

Infrastructure as Code formalizes this with both software and consistent practice. Rather than storing configurations in a dozen locations, likely each team member’s local computer, a version control system centralizes everything.

 

That’s the easy part, though. We can do that with a department share. What a good version control system does, however, is provide a constant flow of changes back to the source code, revision history, branch management, and even change traceability.

 

Git is probably the most common VCS, typically paired with a hosting service like GitHub or Bitbucket, but just like the continuous delivery solutions I mentioned, it’s more about just doing something. Using any of these even minimally is light years ahead of a network share and file names with “FINAL” or “CURRENT” at the end.
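
Getting started can be as simple as snapshotting configs into a repository on a schedule. The sketch below shells out to the git command line, which is assumed to be installed, and commits only when something actually changed; the repository name and commit message format are invented for the example.

```python
import subprocess
from datetime import date

REPO = "config-backups"   # assumed to be an already-cloned Git repository

def commit_snapshot() -> None:
    subprocess.run(["git", "-C", REPO, "add", "--all"], check=True)
    status = subprocess.run(["git", "-C", REPO, "status", "--porcelain"],
                            capture_output=True, text=True, check=True)
    if status.stdout.strip():   # only commit when something actually changed
        message = f"Config snapshot {date.today().isoformat()}"
        subprocess.run(["git", "-C", REPO, "commit", "-m", message], check=True)

commit_snapshot()
```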

 

Culture

 

When it comes down to it, though, Infrastructure as Code is just as much about culture as it is about technology. It’s a paradigm shift – a shift in practice and methodology – in addition to the adoption of programming languages, tools, and management platforms.

 

An IT department will see absolutely zero benefit from standing up a Jenkins server if it isn’t integrated into the workflow and actually used. Getting buy-in from the team is extremely important. This is no easy task, because now you’re dealing with people instead of bits and bytes, and people are much more complex than even our most sophisticated systems.

 

One way to get there is to start with only version control with GitHub or some other VCS. Since this is completely non-intrusive and has zero impact on production, buy-in from the team and from leadership is much easier.

 

Another idea I’ve seen work in practice is to start with only one part of the infrastructure. This could mean starting with all the Linux machines or all the Windows machines and managing them with Chef. In this way, a SysAdmin can manage a group of like machines and see tangible benefits of Infrastructure as Code very quickly without having to get buy-in across all teams.

 

As the benefits become more apparent, either to colleagues working in the trenches or leadership looking for some proof of concept, the culture can be changed from the bottom up or top down.

 

Making something old into something new

 

Remember that Infrastructure as Code is a cultural shift as much as it is an adoption of technology. Developers have been using these same tools and practices for years to develop, maintain, and deploy code, so this is a time-tested paradigm that SysAdmins and network admins can adopt very quickly.

 

In time, Infrastructure as Code can help us become proactive managers of our infrastructure, making it work for us rather than making us work for the infrastructure.

It's the classic horror story twist from 40 (or more) years ago, back when homes had a single telephone line and the idea of calling your own home was something reserved only for phreakers and horror films. While this might not be so scary now, I can assure you that back in the day that scene was the cutting edge in horror.

 

Fast forward 40+ years, and today, the idea of calling someone from inside your house isn't scary. In fact, it's quite common. Never mind phone calls; I will send my daughter an instant message to come downstairs for dinner. The constant flow of communication today is not scary, it's expected. Which brings us to the topic at hand: the Internet of Things (IoT). The idea behind IoT is simple enough: anything, and anyone can have an IP address attached to them at any moment, tracking data about their movement (and other things). And IoT has given rise to the concept of “Little Data," the idea that we can gather data about ourselves (like your FitBit®) on a daily basis, data that we then analyze to make adjustments to our daily routines. Here's just a partial list of the current IoT devices in and around your house that have such capabilities:

 

• Smartphone

• Cable modem

• Wireless router

• Wireless printer

• Tablet

• Laptop

• PC

• Television

• Xbox®/PlayStation®/etc.

• Home alarm system

• Automobile

• Security camera

• Light bulb

• Refrigerator

• Toaster

• Microwave

• Stove/Oven

• Dishwasher

• Washer/Dryer

• Windows

 

Yes, that's right, windows. Your house will know when it's raining and close the windows for you, even while you're away. I suppose it’s also plausible to think that a thief could know you are away and hack your system, open the windows, and gain access to the stuff inside your house. Or maybe it is just kids who want to play a prank and leave your windows open in the rain.

 

Folks, this isn't good news. And the above list doesn't mention the data that is being collected when you are using your IoT devices. For example, the data that Google®/Apple®/Microsoft®/Facebook®/Yahoo!® are tracking as you navigate the internet.

 

And yet I *still* don't see people concerned about where IoT security is currently headed regarding our privacy. With the number of data breaches continuing to rise it would seem to me that the makers of these devices know less about security and privacy than we'd like to believe.

 

So why don't people care? I can only think of two reasons. One is that they haven't been victims of a data breach in any way. The other? No one has scared them into thinking twice about IoT.

 

So, that's why I made this, for you, as a reminder:

 

 

The next time you are shopping for an appliance and you are told about all the great "smart" features that are available, I want you to think about the above image, your privacy, and your security. And I want you to ask some smart questions, such as:

 

• Have I actually read and understood the user terms?

• Do I understand where my data is going and how it will be shared?

• Is the data portable or downloadable?

• Will the appliance be fully functional if not connected to the internet?

• Can I get my data from the device without it needing to be connected to the internet?

• Can I change the passwords on the device?

• Is my home network secure?

 

Look, I'm a fan of IoT. I really am. I enjoy data. I love my FitBit. I'm going to get the Nest® at some point, too.

 

But I also recognize that the IoT solutions could be something that makes our lives worse, not better. We need to start asking questions like those above so that manufacturers understand that privacy and security should be at the top of the feature list, and not an afterthought.

PowerShell has been around for a long time now, consistently proving its value in terms of automation. PowerShell (aka Shell) was first introduced to me personally when it was still code-named Monad in the early 2000s. Even then, it was clear that Microsoft had a long-term plan when they began sharing bits of code within the Exchange platform. Every time you did something in the GUI, they would share with you the code to complete the task in Shell. They were making sure the learning process was already beginning.

 

PowerShell has come a long way since the early days of Monad, and just about any product, Microsoft or not, has PowerShell commands to help you complete your job.

 

Modern Day

Today, PowerShell is seemingly everywhere. Its ability to automate tasks that the traditional toolset cannot is impactful and necessary. Today, all administrators need to be able to do some level of PowerShell from simple to complex.

 

So, why has this become a staple in our workday?

 

Organizations today are streamlining IT, always striving to simplify the workday. Some may argue that organizations are trying to do more with fewer employees. This almost implies they are trying to overwork their teams, but I prefer to take a more positive spin. To me, automation is about allowing a process to flow and happen in a way that simplifies my work day. This doesn’t mean that I am kicked back with nothing to do when automation is complete. It means that I get more time to work on other things, learn more, and grow professionally.

 

Today, automation presents endless possibilities. For context, let's take a look at some things that we automate today that, in the past, we handled manually.

 

  • Operating System deployment – In the early days of IT, we were feeding floppy disks or bootable CDs into computers to manually deploy an operating system. Once that was complete, we still had to install all of the necessary applications. Repeat this for every single person who needed a PC, and you had potentially thousands of PCs going through this very manual process. When the application “Ghost” was released, we were ecstatic: we finally had a way to copy an image and deploy it to another PC, which significantly reduced our deployment time. Today, really the only accepted approach is to automate workstation deployment via third-party tools and/or PowerShell. Enterprise IT staff cannot imagine spending a whole day setting up a laptop or PC for a user anymore. Now you are done in less than an hour!
  • Reporting – Anyone who knows me knows that I have done a lot of work with Citrix, and over the years I have found that there are just some things traditional reporting tools don't offer. While third-party offerings in this space are much improved today, that doesn’t mean I never need a custom report. PowerShell to the rescue! Custom reporting gives me the insights into my environment that I need to help ensure that it’s healthy and running successfully for the enterprise.
  • Microsoft Exchange – As previously mentioned, Microsoft Exchange was one of the first applications from Microsoft to use PowerShell. When working with Exchange, you open Exchange Management Shell to get all of the Exchange-related PowerShell commands pre-loaded. From daily tasks to automating mailbox moves and more, Shell has proven its value over and over if you are working with Exchange.

 

This list really only scratches the surface of PowerShell's automation possibilities. If you are doing work on Windows, as with most applications today, PowerShell skills are necessary for your technical success.

 

Taking it to the Next Level

 

The automation movement has already started. The power of automation has the potential to really change the landscape of the work we do. I anticipate that the need for automation will continue, and over the next few years the more we can automate, the better. That being said, there is a risk to automating everything with custom code without proper documentation and team sharing.  As employees leave organizations, so does the knowledge that went into the code. Not everyone that writes code does it well. Even if the script works, this doesn’t mean that it is a quality script.  Application upgrades typically involve rewriting the automation scripts that go with it.

 

As you can see, the time savings can go right out the window with that next big upgrade. Or do they?

 

If you plan the upgrade timeframe to include the rewrite of your automation scripts, then you are still likely better off than you were without it due to all of the time savings you realized in between.

My final thoughts on this, despite all of the pros and cons to automation, would be to automate what you can as often as possible!

I have wanted to start an ongoing conversation about security on Geek Speak for a long time. And now I have! Consider this the beginning of a security conversation that I encourage everyone to join. This bi-monthly blog will cover security in a way that combines the discussions we hear going on around us with the ones we have with colleagues and friends. I’d love for you to share your thoughts, ask questions, and ENGAGE! Your input will make this series that much richer and more interesting.

 

You can bring up any topic or share any ideas that you would like for me to talk about. Please join me in creating some entertaining reading with a security vibe. Let’s start…NOW!

 

Let me dive into something that I feel is going to impact hacking behaviors. Microsoft is attempting to find clever, more intense ways to go after hackers. This may not sound surprising, but think about this: they are filing lawsuits over trademarks. What? That’s right. They are suing known hacker groups for trademark infringement. Although you can’t drag hackers to court, you can observe and disrupt their end game.

 

Okay, so they went after the group that was allegedly involved with the United States voting process. So far, Microsoft has taken over at least 70 different Fancy Bear, or FB, domains!

 

Why does this matter? Why should we care? Because FB literally became the man in the middle, legally speaking. By using Microsoft’s products and services, they opened themselves up to be taken over by... that’s right: Microsoft!

 

Since 2016, Microsoft has mapped out and observed FB’s server networks, which means they can indirectly cause their own mayhem. Okay, so they aren’t doing THAT, but they are observing and disrupting foreign intelligence operations. Cheeky, Microsoft. Cheeky!

 

Now, for me, I’m more interested in when they decide they can flip it over into their hands to eavesdrop and scan out networks. The United States’ Computer Fraud and Abuse Act gives Microsoft quite a blanket to keep warm under. But we can go into that later, as it is currently in use at Def Con...

 

Now, I started the conversation. It’s your turn to keep it going. Share your thoughts about Microsoft, security, hackers, etc. below.

Finishing up prep for VMworld next week. Two sessions, booth duty, and various interviews and briefings, along with networking. Should be a busy week! If you are attending, stop by the booth and say hello. I always enjoy talking data with anyone.

 

As always, here's some stuff I found on the internet I hope you might find interesting. Enjoy!

 

8 Lessons from 20 Years of Hype Cycles

Brilliant summary of Gartner's 20 years of Hype Cycles. Long, but worth the time if you have an interest in the tech industry (and are experienced enough to remember that Desktop for Linux was a thing).

 

Reverse Engineering IoT Devices

Long, but worth the read because geek.

 

London council 'failed to test' parking ticket app, exposed personal info

Despite having the necessary resources to build the app correctly, they didn't. Because security is hard, y'all.

 

DARPA tunes machine learning to radio signals

Maybe the reason we haven't made First Contact is because we haven't let the machines do the talking for us, yet.

 

Screen Savers Haven’t Been Useful For Decades. Why Are They Still Here?

Then again, maybe we aren't quite yet ready to connect with other civilizations.

 

Chill: Robots Won’t Take All Our Jobs

Jobs shift, they don't disappear. My children will have jobs, and job titles, that don't exist, yet.

 

Worried you’re being automated? Think again…

This gives us all hope, I think.

 

 

There was an eclipse this past Monday. Maybe you heard about it. This was the view from my office.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Though cybercriminals are usually incentivized by financial gain, the reality is that a cyber-attack can create far more damage than just hitting an organization fiscally. This is especially the case when it comes to healthcare organizations. Health data is far more valuable to a cybercriminal, going for roughly 10 or 20 times more than a generic credit card number. Therefore, we can expect to see a surge in healthcare breaches. However, the impact of this won’t just cripple a facility financially. It’s possible a cybercriminal could take over a hospital, manipulate important hospital data, or even compromise medical devices.

 

It’s already started

These sorts of breaches are already happening. At the start of 2016, three UK hospitals in Lincolnshire managed by the North Lincolnshire and Goole NHS Foundation Trust were infected by a computer virus. The breach was so severe that it resulted in hundreds of planned operations and outpatient appointments being cancelled.

 

The event, which officials were forced to deem as a “major incident,” also made it difficult to access test results and identify blood for transfusions, and some hospitals struggled to process blood tests. This is one of the first examples of a healthcare cyber security breach directly impacting patients in the UK, but it won’t be the last.

 

Follow in the footsteps of enterprises

Breaches like these have put a great deal of pressure on healthcare IT professionals. Though there has been a shift in mentality in the enterprise, with security becoming a priority, the same can’t be said for the healthcare sector. This needs to change. The situation is made worse by the budget cuts most healthcare organizations face, which make security a hard thing to prioritize.

 

It doesn’t need to break to be fixed

Many healthcare IT professionals assume that management will only focus on security once a significant breach occurs, but it’s time healthcare organizations learned from enterprises that have seen breaches occur and acted. In the meantime, there is work that requires little investment that IT professionals can do to protect the network.

 

Educate and enforce

Employees are often the weakest link when it comes to security in the workplace. An awareness campaign should encompass both education and enforcement. Approaching an education initiative in this way gives employees a better understanding of potential threats that could come from having an unauthorized device connected to the network.

 

For example, healthcare workers need to be shown how a cybercriminal could infiltrate the network through hacking someone’s phone. This would also start a dialogue between healthcare employees, helping them to prioritize security and thus giving the IT department a better chance of protecting the organization from a breach.

 

It’s naturally assumed that a healthcare IT professional should be able to effectively protect his or her organization from an attack. However, even the most experienced security professional would struggle to do so without the right tools in place. To protect healthcare organizations from disastrous attacks requires funding, investment, and cooperation from employees.

 

Find the full article on Adjacent Open Access.

I'm not aware of an antivirus product for network operating systems, but in many ways, our routers and switches are just as vulnerable as a desktop computer. So, why don't we all protect them in the same way as our compute assets? In this post, I'll look at some basic tenets of securing the network infrastructure that underpins the entire business.

 

Authentication, authorization, and accounting (AAA)

Network devices intentionally leave themselves open to user access, so controlling who can get past the login prompt (authentication) is a key part of securing devices. Once logged in, it's important to control what a user can do (authorization). Ideally, what the user does should also be logged (accounting).

 

Local accounts are bad, mkay?

Local accounts (those created on the device itself) should be limited solely to backup credentials that allow access when the regular authentication service is unavailable. The password should be complex and changed regularly. In highly secure networks, access to the password should be restricted (kind of a "break glass for password" concept). Local accounts don't automatically disable themselves when an employee leaves, and far too often, I've seen accounts still active on devices for users who left the company years ago, with some of those accessible from the internet. Don't do it.

 

Use a centralized authentication service

If local accounts are bad, then the alternative is to use an authentication service like RADIUS or TACACS. Ideally, those services should, in turn, defer authentication to the company's existing authentication service, which in most cases, is Microsoft Active Directory (AD) or a similar LDAP service. This not only makes it easier to manage who has access in one place, but by using things like AD groups, it's possible to determine not just who is allowed to authenticate successfully, but what access rights they will have once logged in. The final, perhaps obvious, benefit is that it's only necessary to grant a user access in one place (AD), and they are implicitly granted access to all network devices.

 

The term process

A term (termination) process defines the list of steps to be taken when an employee leaves the company. While many of the steps relate to HR and payroll, the network team should also have a well-defined term process to help ensure that after a network employee leaves, things such as local fall back admin passwords are changed, or perhaps SNMP read/write strings are changed. The term process should also include disabling the employee's Active Directory account, which will also lock them out of all network devices because we're using an authentication service that authenticates against AD. It's magic! This is a particularly important process to have when an employee is terminated by the company, or may for any other reason be disgruntled.

 

Principle of least privilege

One of the basic security tenets is the principle of least privilege, which, in basic terms, says: don't give people access to things unless they actually need it; default to giving no access at all. The same applies to network device logins, where users should be mapped to the privileged group that allows them to meet their (job) goals, while not granting permissions to do anything for which they are not authorized. For example, an NOC team might need read-only access to all devices to run show commands, but they likely should not be making configuration changes. If that's the case, one should ensure that the NOC AD group is mapped to have only read-only privileges.

 

Command authorization

Command authorization is a long-standing security feature of Cisco's TACACS+, and while sometimes painful to configure, it can allow granular control of issued commands. It's often possible to configure command filtering within the network OS configuration, often by defining privilege levels or user classes at which a command can be issued, and using RADIUS or TACACS to map the user to that group or user class at login. One company I worked for created a "staging" account on Juniper devices, which allowed the user to enter configuration mode and enter commands, and allowed the user to run commit check to validate the configuration's validity, but did not allow an actual commit to make the changes active on the device. This provided a safe environment in which to validate proposed changes without ever having the risk of the user forgetting to add check to their commit statement. Juniper users: tell me I'm not the only one who ever did that, right?

 

Command accounting

This one is simple: log everything that happens on a device. More than once in the past, we have found the root cause of an outage by checking the command logs on a device and confirming that, contrary to the claimed innocence of the engineer concerned, they actually did log in and make a change (without change control either, naturally). In the wild, I see command accounting configured on network devices far less often than I would have expected, but it's an important part of a secure network infrastructure.

 

Network time protocol (NTP)

It's great to have logs, but if the timestamps aren't accurate, it's very difficult to align events from different devices to analyze a problem. Every device should be using NTP to ensure that they have an accurate clock to use. Additionally, I advise choosing one time zone for all devices—servers included—and sticking to it. Configuring each device with its local time zone sounds like a good idea until, again, you're trying to put those logs together, and suddenly it's a huge pain. Typically, I lean towards UTC (Coordinated Universal Time, despite the letters being in the wrong order), mainly because it does not implement summer time (daylight savings time), so it's consistent all year round.
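
If you want a quick way to spot-check how far a host has drifted from its time source, something like the following Python sketch works; it uses the third-party ntplib package (pip install ntplib), and the server name and one-second threshold are just illustrative choices.

```python
import ntplib

def clock_offset_seconds(server: str = "pool.ntp.org") -> float:
    """Ask an NTP server how far this host's clock is from the reference."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3, timeout=5)
    return response.offset   # seconds this host differs from the NTP server

offset = clock_offset_seconds()
print(f"Local clock is off by {offset:.3f} seconds")
if abs(offset) > 1.0:
    print("Warning: logs from this host may not line up with other devices")
```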

 

Encrypt all the things

Don't allow telnet to the device if you can use SSH instead. Don't run an HTTP server on the device if you can run HTTPS instead. Basically, if it's possible to avoid using an unencrypted protocol, that's the right choice. Don't just enable the encrypted protocol; go back and disable the unencrypted one. If you can run SSHv2 instead of SSHv1, you know what to do.
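
A simple way to find devices that still answer on unencrypted management ports is to test the ports directly. This Python sketch tries TCP connections to a handful of well-known management ports and flags the unencrypted ones; the target address is a placeholder from the documentation range, and you should only point it at gear you're authorized to test.

```python
import socket

MGMT_PORTS = {22: "ssh", 23: "telnet (unencrypted!)",
              80: "http (unencrypted!)", 443: "https"}

def open_mgmt_ports(host: str, timeout: float = 2.0) -> dict[int, str]:
    found = {}
    for port, label in MGMT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                found[port] = label
    return found

print(open_mgmt_ports("192.0.2.10"))   # placeholder device address
```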

 

Password all the protocols

Not all protocols implement passwords perfectly, with some treating them more like SNMP strings. Nonetheless, consider using passwords (preferably using something like MD5) on any network protocols that support it, e.g., OSPF, BGP, EIGRP, NTP, VRRP, HSRP.

 

Change defaults

If I catch you with SNMP strings of public and private, I'm going to send you straight to the principal's office for a stern talking to. Seriously, this is so common and so stupid. It's worth scanning servers as well for this; quite often, if SNMP is running on a server, it's running the defaults.
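
Auditing for default community strings is easy to script. The sketch below wraps the net-snmp command-line tools (assumed to be installed) and reports any host that answers to public or private; the addresses are placeholders, and as always, only probe devices you own or are authorized to test.

```python
import subprocess

SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"   # sysDescr.0

def responds_to_community(host: str, community: str) -> bool:
    """Return True if the host answers an SNMP GET with this community string."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-t", "1", "-r", "0",
         host, SYS_DESCR_OID],
        capture_output=True, text=True)
    return result.returncode == 0

for host in ["192.0.2.10", "192.0.2.20"]:          # placeholder addresses
    for community in ("public", "private"):
        if responds_to_community(host, community):
            print(f"{host} answers to default community '{community}' -- fix it!")
```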

 

Control access sources

Use the network operating system's features to control who can connect to your devices in the first place. This may take the form of a simple access list (e.g., a vty access-class in Cisco speak) or could fall within a wider Control Plane Policing (CoPP) policy, where the control for any protocol can be implemented. Access Control Lists (ACLs) aren't in themselves secure, but it's another step to overcome for any bad actor wishing to illicitly connect to the devices. If there are bastion management devices (aka jump boxes), perhaps make only those devices able to connect. Restrict from where SNMP commands can be issued. This all applies doubly for any internet-facing devices, where such protections are crucial. Don't allow management connections to a network device on an interface with a public IP. Basically, protect yourself at the IP layer as well as with passwords and AAA.

 

Ideally, all devices would be managed using their dedicated management ports, accessed through a separate management network. However, not everybody has the funding to build an out-of-band management network, and many are reliant on in-band access.

 

Define security standards and audit yer stuff

It's really worth creating a standard security policy (with reference configurations) for the network devices, and then periodically auditing the devices against it. If a device goes out of compliance, is that a mistake, or did somebody intentionally weaken the device's security posture? Either way, just because a configuration was implemented once, it would be risky to assume it has remained in place ever since, so a regular check is worthwhile.
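
An audit script doesn't need to be sophisticated to catch the obvious stuff. Here's a minimal Python sketch that checks saved configs against a short list of required and forbidden lines; the policy lines and directory layout are examples, not a complete standard, but the pattern extends naturally.

```python
from pathlib import Path

# Example policy lines only -- substitute your own reference configuration
REQUIRED = ["service password-encryption", "ntp server 192.0.2.123"]
FORBIDDEN = ["snmp-server community public", "ip http server"]

def audit(config_dir: str = "configs") -> dict[str, list[str]]:
    findings = {}
    for cfg in Path(config_dir).glob("*.conf"):
        text = cfg.read_text()
        issues = [f"missing: {line}" for line in REQUIRED if line not in text]
        issues += [f"present: {line}" for line in FORBIDDEN if line in text]
        if issues:
            findings[cfg.name] = issues
    return findings

for device, issues in audit().items():
    print(device, "->", "; ".join(issues))
```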

 

Remember why

Why are we doing all of this? The business runs over the network. If the network is impacted by a bad actor, the business can be impacted in turn. These steps are one part of a layered security plan; by protecting the underlying infrastructure, we help maintain the availability of the applications. Remember the security CIA triad (Confidentiality, Integrity, and Availability)? The steps I have outlined above, and many more that I can think of, help maintain network availability and ensure that the network is not compromised. This means that we have a higher level of trust that the data we entrust to the network transport is not being siphoned off or altered in transit.

 

What steps do you take to keep your infrastructure secure?

Previously, I discussed the origins of the word “hacking” and the motivations around it from early phone phreakers, red-boxers, and technology enthusiasts.

 

Today, most hackers can be boiled down to Black Hats and White Hats. The hat analogy comes from old Western movies, where the good guys wore white and the bad guys wore black. Both groups have different reasons for hacking.

 

Spy vs. Spy

The White Hat/Black Hat analogy always makes me think of the old Spy vs. Spy comic in Mad Magazine. These two characters—one dressed all in white, the other all in black—were rivals who constantly tried to outsmart, steal from, or kill each other. The irony was that there was no real distinction between good or evil. In any given comic, the White Spy might be trying to kill the Black Spy or vice versa, and it was impossible to tell who was supposed to be the good guy or the bad guy.

 

Black Hat hackers are in it to make money, pure and simple. There are billions of dollars lost every year to information breaches, malware, cryptoware, and data ransoming. Often tied to various organized crime syndicates (think Russian Mafia and Yakuza), these are obviously the “bad guys” and the folks that we, as IT professionals, are trying to protect ourselves and our organizations from.

 

The White Hats are the “good guys," and if we practice and partake in our own hacking, we would (hopefully) consider ourselves part of this group. Often made up of cybersecurity and other information security professionals, the goal of the White Hat is to understand, plan for, predict, and prevent the attacks from the Black Hat community.

 

Not Always Black or White

There does remain another group of people whose hacking motivations are not necessarily determined by profit or protection, but instead, are largely political. These would be the Gray Hats, or the hackers who blur the distinction between black and white, and whose designation as “good or bad” is subjective and often depends on your own point of view. As I mentioned, the motivation for these groups is often political, and their technical resources are frequently used to spread a specific political message, often at the expense of a group with an opposing view. They hack websites and social media accounts, and replace their victims’ political messaging with their own.

 

Groups like Anonymous would fall into this category, the Guy Fawkes mask-wearing activists who are heavily involved in world politics, and who justify their actions as vigilantism. Whether you think what they do is good or not depends on your own personal belief structure, and which side of the black/white spectrum they land on is up to you. It’s important to consider such groups when trying to understand motivation and purpose, if you decide to embark on your own hacking journey.

 

What’s in It for Us?

Because hacking has multiple meanings, which approach do we take as IT pros when we sit down for a little private hacking session? For us, it should be about learning, solving problems, and dissecting how a given technology works. Let’s face it: most of us are in this industry because we enjoy taking things apart, learning how they work, and then putting them back together. Whether that’s breaking down a piece of hardware like a PC or printer, or de-compiling some software into its fundamental bits of code, we like to understand what makes things tick, and we’re good at it. Plus, someone actually pays us to do this!

 

Hacking as part of our own professional development can be extremely worthwhile because it helps us gain a deep understanding of a given piece of technology. Whether it is for troubleshooting purposes, or for a deep dive into a specific protocol while working toward a certification, hacking is one more tool you can use to become better at what you do.

 

Techniques you use in your everyday work may already be considered “hacks." Some tools you may have at your disposal may potentially be the same tools that hackers use in their daily “work." Have you ever fired up Wireshark to do some packet capturing? Used a utility from a well-known tool compilation to change a lost Windows password? Scanned a host on your network for open ports using NMAP? All of these are common tools that can be used by the IT professional to accomplish a task, or a malicious hacker trying to compromise your environment.
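
If you've never looked under the hood of a packet capture, here's a tiny, hedged example of the kind of thing Wireshark automates, written with the third-party Scapy library (pip install scapy). It needs root/administrator rights and, as with everything in this series, should only be run on a network you own or are authorized to monitor.

```python
from scapy.all import sniff

def show(packet):
    # One-line summary per packet, e.g. "Ether / IP / TCP 10.0.0.5:https > 10.0.0.9:51514 A"
    print(packet.summary())

# Grab ten packets from the default interface and stop.
sniff(count=10, prn=show)
```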

 

As this series continues, we will look at a number of different tools, both software and hardware, that have this kind of dual utility, and how you can use them in a way that improves your understanding of the technology you support, while also developing a respect for the full spectrum of hacking that may impact your business or organization.

 

There are some fun toys out there, but make sure to handle them with care.

 

As always, "with great power comes great responsibility." Please check your local, state, county, provincial, and/or federal regulations regarding any of the methods, techniques, or equipment outlined in these articles before attempting to use any of them, and always use your own private, isolated test/lab environment.

Hey everybody! It’s me again! In my last post, "Introducing Considerations for How Policy Impacts Healthcare IT," we started our journey discussing healthcare IT from the perspective of the business, as well as the IT support organization. We briefly touched on HIPAA regulations, EMR systems, and had a general conversation about where I wanted to take this series of posts. The feedback and participation from the community was AMAZING, and I hope we can continue that in this post. Let's start by digging a bit deeper into two key topics (and maybe a tangent or two): protecting data at rest and in motion.

 

Data at Rest

When I talk about data at rest, what exactly am I referring to? Well, quite frankly, it could be anything. We could be talking about a Microsoft Word document on the hard drive of your laptop that contains a healthcare pre-authorization for a patient. We could be talking about medical test results from a patient that reside in a SQL database in your data center. We could even be talking about the network passwords document on the USB thumb drive strapped to your key chain. (Cringe, right?!) Data at rest is just that: it’s data that’s sitting somewhere. So how do you protect data at rest? Let us open that can of worms, shall we?

 

By now you’ve heard of disk encryption, and hopefully you’re using it everywhere. It’s probably obvious to you that you should be using disk encryption on your laptop, because what if you leave it in the back seat of your car over lunch and it gets stolen? You can’t have all that PHI getting out into the public, now can you? Of course not! But did you take a minute to think about the data stored on the servers in your data center? While it might not be as likely that somebody swipes a drive out of your RAID array, it CAN happen. Are you prepared for that? What about your SAN? Are those disks encrypted? You’d better find out.
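
 

As a quick illustration rather than a compliance tool, here is a hedged sketch that shells out to the built-in Windows manage-bde utility to summarize BitLocker status per volume. It assumes a Windows host with BitLocker tooling present and an elevated prompt.

```python
import subprocess

# Assumes a Windows host with the built-in manage-bde tool (BitLocker).
# Run from an elevated prompt; we only surface the interesting lines.
result = subprocess.run(
    ["manage-bde", "-status"],
    capture_output=True,
    text=True,
    check=False,
)

for line in result.stdout.splitlines():
    if "Volume" in line or "Protection Status" in line:
        print(line.strip())
```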

 

Have you considered the USB ports on your desktop computers? How hard would it be for somebody to walk in with a nice 500 GB thumb drive, plug it into a workstation, grab major chunks of sensitive information in a very short period of time, and simply walk out the front door? Not very hard if you’re not doing something to prevent it. There are a bunch of scenarios we haven’t talked about, but at least I've made you think about data at rest a little bit now.

 

Data in Motion

Not only do we need to protect our data at rest, we also need to protect it in motion. This means we need to talk about our networks, particularly the segments of those networks that cross public infrastructure. Yes, even "private lines" are subject to being tapped. Do you have VPN connectivity, either remote-access (dynamic) or static to remote sites and users? Are you using an encryption scheme that’s not susceptible to man-in-the-middle or other security attacks? What about remote access connections for contractors and employees? Can they just "touch the whole network" once their VPN connection comes up, or do you have processes and procedures in place to limit what resources they can connect to and how?
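
 

Here is a small sketch of one data-in-motion spot check, using Python's standard ssl module to report the TLS version and cipher an endpoint negotiates. The hostname is a placeholder, and this is nowhere near a full audit, but if an internet-facing service answers with TLS 1.0, you have a policy conversation ahead of you.

```python
import socket
import ssl

# Placeholder endpoint; substitute a host you are authorized to test.
HOST, PORT = "portal.example-hospital.org", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(f"Negotiated protocol: {tls.version()}")   # e.g. 'TLSv1.3'
        print(f"Cipher suite:        {tls.cipher()[0]}")
```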

 

These are all things you need to think about in healthcare IT, and they’re all directly related to policy. (They are either implemented because of it, or they drive the creation of it.) I could go on for hours and talk about other associated risks for data at rest and data in motion, but I think we’ve skimmed the surface rather well for a start. What are you doing in your IT environments to address the issues I’ve mentioned today? Are there other data at rest or data in motion considerations you think I’ve omitted? I’d love to hear your thoughts in the comments!

 

Until next time!

The SolarWinds “Virtualization Monitoring with Discipline” VMworld tour is about to start and we are bringing solutions and swag.

 

VMworld US

At VMworld® US in Las Vegas, the SolarWinds family is bringing a new shirt, new stickers and buttons, new socks, and a new morning event. And that’s not all we’re bringing to VMworld.

 

  • Join us on Tuesday morning for the inaugural Monitoring Morning as KMSigma and I talk about monitoring at scale and troubleshooting, respectively.

  • Next, don’t forget to attend sqlrockstar's two speaking sessions. He'll speak about monster database VMs, and join a panel session on best practices when virtualizing data. Also, be sure to check out chrispaap's talk on mastering the virtual universe using foundational skills, such as monitoring with discipline.

 

    • Solutions Exchange
      Monday, August 28     2:50 – 3:10 p.m.
      Chris Paap
      Monitoring With Discipline to Master your Virtualized Universe

    • Tuesday, August 29     11:30 a.m. – 12:30 p.m.
      Thomas LaRock
      Performance Tuning and Monitoring for Virtualized Database Servers

    • Wednesday, August 30     4:00 – 5:00 p.m.
      Thomas LaRock
      SQL Server® on vSphere®: A Panel with Some of the World’s Most Renowned Experts

  • Lastly, visit us at booth number 224 to talk to our SMEs, get your questions answered, and pick up your swag.

VMworld Europe

Another first is that SolarWinds will be on the Solutions Expo floor at VMworld Europe in Barcelona. In the lead-up to the event, we’ll be hosting a pre-VMworld Europe webcast to talk shop about Virtualization Manager and how it empowers troubleshooting in the highly virtualized world of hybrid IT.

  • sqlrockstar will again be speaking in the following session:
    • Wednesday, September 13        12:30 – 1:30 p.m.
      Thomas LaRock
      Performance Tuning and Monitoring for Virtualized Database Servers

  • chrispaap and I, along with our SolarWinds EMEA SMEs, will be in the booth to answer your questions, talk shop about monitoring with discipline, and hand out swag.

 

I’ll update this section with details as they become available.

 

Let me know in the comment section if you will be in attendance at VMworld US or VMworld Europe. If you can’t make it to one of these events, let me know how we at SolarWinds can better address your virtualization pain points.

The cloud is no longer a new thing. Now, we’re rapidly moving to an “AI-first” world. Even Satya Nadella updated the Microsoft corporate vision recently to say “Our strategic vision is to compete and grow by building best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with AI.” Bye bye cloud first, mobile first.

 

In reality, some organizations still haven't taken the plunge into cloud solutions, even if they want to. Maybe they’ve had to consolidate systems or remove legacy dependencies first. The cloud is still new to them. So, what advice would you give to someone looking at cloud for the first time? Have we learned some lessons along the way? Has cloud matured from its initial hype, or have we just moved on to new cloud-related hype subjects (see AI)? What are we now being told (and sold) that we are wary of until it has had some time to mature?

 

Turn off your servers
Even in the SMB market, cloud hasn’t resulted in a mass graveyard of on-premises servers. Before advising the smallest of organizations on a move to the cloud, I want to know what data they generate, how much there is, how big it is, and what they do with it. That knowledge, coupled with their internet connection capability, determines if there is a case for leaving some shared data or archive data out of the cloud. That’s before we’ve looked at legacy applications, especially where aging specialist hardware is concerned (think manufacturing or medical). I’m not saying it’s impossible to go full cloud, but the dream and the reality are a little different. Do your due diligence wisely, despite what your friendly cloud salesperson says.

 

Fire your engineers
Millions of IT pros have not been made redundant because their organizations have gone to the cloud. They’ve had to learn some new skills, for sure. But even virtual servers and Infrastructure as a Service (IaaS) require sizing, monitoring, and managing. The cloud vendor is not going to tell you that your instance is over-specced and you should bump it down to a cheaper plan. Having said that, I know organizations that have slowed down their hiring because of the process efficiencies they now have in place with cloud and/or automation. We don’t seem to need as much technical head count per end-user to keep the lights on.

 

Virtual desktops
Another early cloud promise was that we could all run cheap, low-specced desktops with a virtual desktop in the cloud doing all the processing. Yes, it sounded like terminal services to me too, or even back to dumb terminal + mainframe days. Again, this is a solution that has its place (we’re seeing it in veterinary surgeries with specialist applications and Intel Compute Sticks). But it doesn’t feel like this cloud benefit has been widely adopted.

 

Chatbots are your help desk
It could be early days for this one. Again, we haven’t fired all of the Level 1 support roles and replaced them with machines. While they aren’t strictly a cloud-move thing (other than chatbots living in the cloud), there is still a significant amount of hype around chatbots being our customer service and ITSM saviors. Will this one fizzle out, or do we just need to give the bots some more time to improve (knowing, ironically, that this happens best when we use them and feed them more data)?

 

Build your own cloud
After being in technical preview for a year, Microsoft has released the Azure Stack platform to its hardware partners for certification. Azure Stack gives you access to provision and manage infrastructure resources like you’d do in Azure, but those resources are in your own data center. There’s also a pay-as-you-go subscription billing option. The technical aspects and use cases seem pretty cool, but this is a very new thing. Have you played with the Azure Stack technical preview? Do you have plans to try it or implement it?

 

So, tell me the truth
One thing that has become a cloud truth is automation, whether that’s PowerShell scripts, IFTTT, or Chef recipes. While much of that automation is available on-premises, too (depending on how old your systems are), many Software-as-a-Service (SaaS) solutions are picked over on-premises for their interoperability. If you can pull yourself away from GUI habits and embrace the console (or hand your processes off to a GUI like Microsoft Flow), those skills are a worthwhile investment to get you to cloud nirvana.

 

I’ve stayed vendor-agnostic on purpose, but maybe you have some vendor-specific stories to share? What cloud visions just didn’t materialize? What’s too “bleeding edge” now to trust yet?

Back from Austin and THWACKcamp filming, and now gearing up for VMworld. I've got one session and a panel discussion. If you are attending VMworld, let me know; I'd love to connect with you while in Vegas. If you time it right, you may catch me on my way to a bacon snack.

 

As always, here's a bunch of links I think you might enjoy.

 

Don't Take Security Advice from SEO Experts or Psychics

There's a LOT of bad advice on the internet, folks. Take the time to do the extra research, especially when it comes to an expert offering opinions for free.

 

10 Things I’ve Learned About Customer Development

"What features your customers ask for is never as interesting as why they want them." Truth.

 

Researchers encode malware in DNA, compromise DNA sequencing software

This is why we can't have nice things.

 

An Algorithm Trained on Emoji Knows When You’re Being Sarcastic on Twitter

Like we even need such a thing.

 

Password guru regrets past advice

It's not just you, we all regret this advice.

 

The InfoSec Community is Wrong About AI Being Hype

There's more than one tech community that is underestimating the impact that AI and machine learning will have on our industry in the next 5 to 8 years.

 

Researchers Find a Malicious Way to Meddle with Autonomous Tech

Then again, if we can keep fooling systems with tricks like this, maybe it will be a bit longer before the machines take over.

 

I think I'm going to play this game every time I visit Austin.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

The technology that government end-users rely on is moving beyond the bounds of on-premises infrastructures, yet employees still hold IT departments accountable for performance.

 

According to a recent SolarWinds “IT is Everywhere” survey of government IT professionals, 84% say the expectation to support end-users’ personal devices connecting to agency networks is greater than it was 10 years ago. The survey also found that 70% of IT pros estimate that end-users use non-IT-sanctioned, cloud-based applications at least occasionally.

 

Here are more insights from federal IT pros:

 

  • 63% claim end-users expect work-related applications used remotely to perform at the same level (or better) than they do in the office
  • 79% say they provide support to remote workers at least occasionally
  • 53% say end-users expect the same time-to-resolution for issues with both cloud-based applications and local applications managed directly by IT
  • 40% say end-users expect the same time-to-resolution for issues with both personal and company-owned devices and technology
  • 68% claim to provide at least occasional support for personal devices

 

All of this amounts to a tall order for government IT professionals. However, there are some strategies to help ensure that users are happy and productive while agency systems remain secure.

 

Closely monitor end-user devices

 

User device tracking can provide a good security blanket for those concerned about unsanctioned devices. IT professionals can create watch lists of acceptable devices and be alerted when rogue devices access their networks. They can then trace those devices back to their users. This tracking can significantly mitigate concerns surrounding bring-your-own-device security.
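
 

The watch-list idea can be prototyped in a few lines. This sketch uses made-up MAC addresses; in practice, the observed list would come from DHCP leases, ARP tables, or an export from your device tracking tool.

```python
# Hypothetical data; in practice these would come from DHCP leases,
# ARP tables, or an export from your device-tracking tool.
approved_devices = {
    "aa:bb:cc:00:11:22",  # issued laptop
    "aa:bb:cc:00:11:23",  # issued laptop
    "de:ad:be:ef:00:01",  # network printer
}

seen_devices = {
    "aa:bb:cc:00:11:22",
    "de:ad:be:ef:00:01",
    "12:34:56:78:9a:bc",  # not on the approved list
}

rogue = seen_devices - approved_devices
for mac in sorted(rogue):
    print(f"ALERT: unapproved device on the network: {mac}")
```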

 

Gain a complete view of all applications

 

Having a holistic view of all applications results in a better understanding of how the performance of one application may impact the entire application stack. Administrators will also be able to quickly identify and rectify performance issues and bottlenecks.

 

Beyond that, administrators must also account for all of the applications that users may be accessing via their personal devices, such as social media apps, messaging tools, and others. Network performance monitoring and network traffic analysis can help IT managers detect the causes behind quality-of-service issues and trace them back to specific applications, devices, and users.

 

Look out for bandwidth hogs

 

IT managers should make sure their toolkits include network performance and bandwidth monitoring solutions that allow them to assess traffic patterns and usage. If a slowdown or abnormality occurs, administrators can take a look at the data and trace any potential issues back to individual users or applications. They can then take action to rectify the issue.

 

Fair or not, IT pros are officially the go-to people whenever a problem arises. While IT managers may not be able to do everything their end-users expect, they can certainly lay the groundwork for tackling most challenges and creating a secure, reliable, and productive environment.

 

Find the full article on Government Computer News.

During times of rapid technological change, it is better to be a generalist than a specialist. You need to know a little bit about a lot of different things. This is especially true for any database administrator who is responsible for managing instances in the cloud. These DBAs need to add to their skills the ability to quickly troubleshoot network issues that may be affecting query performance.

 

In my THWACKcamp 2017 session, "Performance Tuning the Accidental Cloud DBA," fellow Head Geek Leon Adato and I will discuss the skills that are necessary for DBAs to have and practice over the next three to five years.

 

We are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics.

 

In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks -- such as Leon and me -- and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch my session!

Best practices, I feel, mean different things to different people. For me, best practices are a few things. They are a list of vendor recommendations for product implementation, they come from my own real-world experiences, and they are informed by what I see my peers doing in the IT community. I have learned to accept that not all best practices come from vendors, and that the best practices list I have compiled is essentially a set of implementation guidelines aimed at ensuring the highest quality of deployment.

 

So how does this apply to virtualization and the best practices you follow? Let’s chat!

 

Getting ready to virtualize your servers or workstations?

 

According to Gartner, enterprise adoption of server virtualization has nearly doubled in the past couple of years. That doesn’t even include workstation virtualization, which is also becoming more relevant to the enterprise as product options mature. So, if your organization isn’t virtualizing an operating system today, it’s highly probable that it will in the future. Understanding how to prepare for this type of business transformation according to the latest best practices/guidelines will be key to your deployment success.

 

Preparing according to best practices/guidelines

 

As mentioned, it’s important to have a solid foundation of best practices/guidelines for your virtualization implementation. Diving right in, here are some guidelines that will get you started with a successful virtualization deployment:

 

  • Infrastructure sizing – Vendors will provide you with great guidance on where to begin sizing your virtual environment, but at the end of the day, all environments are different. Take time to run a proof of concept and test within your environment, and build out your resource calculations. Also, be sure to involve the business users to help ensure that you are providing the ultimate performance experience before you finalize your architectural design. And when sizing, don’t use averages. You will come up short and performance will suffer (see the short sizing sketch after this list).

 

  • Know your software – A key part of the performance you will get from your virtualized environment depends on the applications you are running. It’s important to baseline test to obtain a solid list of applications in your environment. Then take this a step further to understand the amount of resources used by your applications. Even the smallest software upgrade can impact performance; for example, Microsoft Office 2016 consumes up to 20% more resources than previous versions (2007/2010). That’s a big deal if it wasn’t considered in advance because it could severely impact the user performance experience.

 

  • Image Management – One of the best things about virtualization is that it can greatly reduce your work effort when it comes to patch management and operating system maintenance. The value of this can only be seen when you deploy as few operating systems as possible. So, when you are deciding on use cases, keep this in mind.

 

  • Use application whitelisting instead of anti-virus – Anti-virus solutions have proven to impact the performance of virtualization environments. If you must run something at the operating system level, I would strongly suggest using application whitelisting instead. Having an enforced, approved list of applications can provide a more secure platform without taking a performance hit.

 

  • Protect your data – You just spent all this time deploying virtualization, so make sure that your virtualization databases are backed up. Heck, your entire environment should be backed up. Taking this even one step further, be sure to include high availability and even disaster recovery in your design. In my experience, if an environment isn’t ready for the worst, you can end up in a pretty bad situation that could include an entire rebuild. If you cannot afford the business downtime in a worst-case scenario, then button things up to be sure that your plan includes proper data protection.

 

  • The right infrastructure – Vendors are pretty good about creating guidelines about the type of infrastructure their virtualization platforms will run on, but I strongly suggest that you take a look at both hyper-converged infrastructure and the use of GPUs. If you expect the performance of your virtual systems (especially with virtual workstations) to be the same as what your users experience today, these infrastructure options should at least be part of your conversation. They'll likely end up being a part of your deployment design.

 

  • Automate everything you can – Automation can be a very powerful way to help ensure that you are using your time efficiently and wisely. When it comes to automation, keep the following in mind: if you are going to build your own automation, remember that a certain amount of time is being spent to write and maintain that work. In some cases, if there is a third-party tool that can help with automation, it may be worth considering. Third-party automation tools typically come with an upgrade path that you won’t get when you homegrow your code. And when the person who wrote the code leaves, there goes that support, too. There isn’t one single answer here. Just remember that automation is important, so you should be thinking about this if you aren’t already.
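
 

To put numbers behind the warning about averages in the infrastructure sizing bullet above, here is a small sketch with invented CPU samples showing how far an average can sit below the 95th percentile you actually need to plan for.

```python
import statistics

# Invented per-interval CPU utilization samples (%) for one workload.
cpu_samples = [18, 22, 25, 24, 30, 35, 85, 90, 28, 26, 31, 88, 27, 29, 92, 33]

average = statistics.mean(cpu_samples)
p95 = statistics.quantiles(cpu_samples, n=100)[94]  # 95th percentile

print(f"Average utilization: {average:.1f}%")
print(f"95th percentile:     {p95:.1f}%")
# Sizing to the average would starve the bursty periods;
# sizing to a high percentile (plus headroom) is the safer plan.
```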

 

For virtualization success, be sure to fully research your environment up front. This research will help you easily determine if any/all of the above best practices/guidelines will create success for your virtualization deployment. Cheers!

Working in IT is such a breeze. The industry never changes, and the infrastructure we work with doesn’t impact anyone very much. It’s really a pleasure cruise more than anything else.

 

And if you believed that, you’ve probably never worked in IT.

 

The reality for many IT professionals is the opposite. Our industry is in a constant state of change, and from the Level One helpdesk person to the senior architect, everything we do impacts others in significant ways. Corporate IT, for many, is a maze of web conferences, buzzwords, and nontechnical leadership making technical decisions.

 

Not losing my mind in the midst of all this sometimes feels just as much a part of my professional development plan as learning about the next new gadget. Over the last 10 years, I’ve developed some principles to help me navigate this dynamic, challenging, fulfilling, but also frustrating and unnerving world.

 

Own my own education

 

The first principle is to own my own education. One thing I've had to settle with deep down inside is that I’ll never really “arrive” in technology. There is no one degree or certification that I can earn that covers all technology, let alone all future technology. That means that I’ve had to adopt a personal routine of professional development apart from my employer to even attempt to keep pace with the changes in the industry.

 

Now that I have three children and work in a senior position, it’s much harder to maintain consistent motivation to keep pushing. Nevertheless, what I’ve found extremely helpful is having a routine that makes a never-ending professional development plan more doable. For me, that means setting aside time every morning before work or at lunch to read, work in a lab, or watch training videos. This way, my professional development doesn’t cut into family life much or compete with the many other obligations I have in life. This requires getting to bed at a reasonable time and getting my lazy rear end out of bed early, but when I get in the routine, it becomes easier to maintain.

 

I don’t rely on my colleagues, my employer, or my friends to hand me a professional development plan or motivate me to carry it out. This is a deeply personal thing, but I’ve found that adopting this one philosophy has changed my entire career and provided a sense of stability.

 

Choosing what to focus on is a similar matter. There’s a balance between what you need to learn for your day job and the need to continually strengthen foundational knowledge. For example, my day job may require that I get very familiar with configuring a specific vendor’s platform. This is perfectly fine since it’s directly related to my ability to do my job well, but I’ve learned to make sure to do this without sacrificing my personal goal to develop foundational networking knowledge.

 

Community

 

That leads into my second principle: community. Engaging in the networking community on Twitter, through blog posts, and in Slack channels gives me an outlet to vent, bounce ideas around, and find serious inspiration from those who’ve gone before me. It also helps me figure out what I should be working on in my professional development plan.

 

I’d be remiss if I didn’t mention that too much engagement in social media can be detrimental because of how much time it can consume, but when done properly and within limits, reading my favorite networking blogs, interacting with nerds on Twitter, and doing my own writing have been instrumental in refining my training plan.

 

Taking a break

 

A third principle is taking a break. A friend of mine calls it taking a sanity day.

 

I can get burned out quickly. Normally, it’s not entirely because of my day job, either. Working in IT means I’m stressed about an upcoming cutover, getting some notes together for this afternoon’s web conference, ignoring at least a few tickets in the queue, worried I’ll fail the next cert exam and waste $400, and concerned that I’m not progressing in my professional development the way I'd hoped.

 

For years I just kept pushing in all these areas until I’d snap and storm out of the office or find myself losing my temper with my family. I’ve learned that taking a few days off from all of it has helped me tremendously.

 

For me, that’s meant going to work but being okay with delegating and asking for some help on projects. It’s also meant backing off social media a bit and either pausing the professional development routine for a few days or working on something unrelated.

 

Recently I’ve mixed learning Python into my routine, and I’ve found that a few days of that is an amazing mental break when I can’t bear to dial into another conference bridge or look at another CLI. And sometimes I need to shut it all down and just do some work in the yard. 

 

This isn’t giving up. This isn’t packing it in and waiting to retire. This is taking some time to decrease the noise, to think, to re-evaluate, and to recuperate.

 

I admit that I sometimes feel like I need permission to do this. Permission from who? I’m not sure, but it’s sometimes difficult for me to detach. After I do, though, I can go back to the world of five chat windows, back-to-back meetings, and all of the corporate IT nonsense with a new energy and a better attitude. 

 

These are principles I’ve developed for myself based on my own experiences, so I’d love to learn how others work in IT without losing their minds, as well. IT is constantly changing, and from the entry level folks to the senior staff, everything we do impacts others in significant ways. How do you navigate the maze of web conferences, buzzwords, and late night cutovers?    

My name is Josh Kittle and I’m currently a senior network engineer working for a large technology reseller. I primarily work with enterprise collaboration technologies, but my roots are in everything IT. For nearly a decade, I worked as a network architect in the IT department of one of the largest managed healthcare organizations in the United States. Therefore, healthcare security policy, the topic I’m going to introduce to you here today, is something I have quite a bit of experience with. More specifically, I’m going to talk about healthcare security concerns in IT, and how IT security is impacted by the requirements of healthcare, and conversely, how health care policy is impacted by IT initiatives. My ultimate goal is to turn this into a two-way dialogue. I want to hear your thoughts and feedback on this topic (especially if you work in healthcare IT) and see if together we can take this discussion further!

 

Over the next five posts, I’m going to talk about a number of different considerations for healthcare IT, both from the perspective of the IT organization and the business. In a way, the IT organization is serving an entirely different customer (the business) than the business is serving (in many cases, this is the consumer, but in other cases, it could be the providers). Much of the perspective I’m going to bring to this topic will be specific to the healthcare system within the United States, but I’d love to have a conversation in the forum below about how these topics play out in other geographical areas, for those of you living in other parts of the world. Let’s get started!

     

There are a number of things to consider as we prepare to discuss healthcare policy and IT, or IT policy and health care for that matter, since we’re going to dip our toes into both perspectives. Let's start by talking about IT policy and health care. A lot of the same considerations that are important to us in traditional enterprise IT apply in healthcare IT, particularly around the topic of information security. When you really think about it, information security is as much a business policy as it is something we deal with in IT, and information security is a great place to start this discussion. Let me take a second to define what I mean by information security. Bottom line, information security is the concept of making sure that information is available to the people who need it while preventing access to those who shouldn’t have it. This means protecting both data-at-rest as well as data-in-motion. Topics such as disk encryption, virtual private networks, as well as preventing data from being exposed using offline methods all play a key role. We will talk about various aspects of many of these in future posts!

     

The availability of healthcare-related information as it pertains to the consumer is a much larger subject than it has ever been. We have regulations such as HIPAA that govern how and where we are able to share and make data available. We have electronic medical records (EMR) systems that allow providers to share patient information. We have consumer-facing, internet-enabled technologies that allow patients to interact with caregivers from the comfort of their mobile device (or really, from anywhere). It’s an exciting time to be involved in healthcare IT, and there is no shortage of problems to solve. In my next couple of posts, I’m going to talk about protecting both data-at-rest and data-in-motion, so I want you to think about how these problems affect you if you’re in a healthcare environment (and feel free to speculate and bounce ideas off the forum walls even if you’re not). I would love to hear the challenges you face in these areas and how you’re going about solving them!

 

As mentioned above, I hope to turn this series into a dialogue of sorts. Share your thoughts and ideas below -- especially if you work in healthcare IT -- so we can take this discussion further.

In Austin this week for some THWACKcamp filming. There are a lot of reasons to enjoy Austin, even in August, but being able to sit and talk with my fellow Head Geeks is by far the best reason.

 

Here's a bunch of links from the intertubz you might find interesting. Enjoy!

 

App sizes are out of control

I've been frustrated about this situation ever since I accidentally started an update to an app while in another country and wasn't connected to Wi-Fi.

 

UK Writes GDPR into Law with New Data Protection Bill

Here's hoping that this is the start of people understanding that data is the most valuable asset they own.

 

Half of US Consumers Willing to Trade Data for Discounts

Then again, maybe not.

 

Will Blockchain End Poverty?

No.

 

How a fish tank helped hack a casino

I know that IoT is a security nightmare and all, but hackers may want to think twice about the people they steal from.

 

“E-mail prankster” phishes White House officials; hilarity ensues

This shows that the folks in the White House are just as gullible as everyone else.

 

What’s in the path of the 2017 eclipse?

Interactive map showing you the path for the upcoming eclipse.

 

Event Season is starting and I may need to find a new place for all my conference badges.

Whatever business one might choose to examine, the network is the glue that holds everything together. Whether the network is the product (e.g. for a service provider) or simply an enabler for business operations, it is extremely important for the network to be both fast and reliable.

 

IP telephony and video conferencing have become commonplace, taking communications that previously required dedicated hardware and phone lines and moving them to the network. I have also seen many companies mothball their dedicated Storage Area Networks (SANs) and move closer to Network Attached Storage, using iSCSI and NFS for data mounts. I also see applications utilizing cloud-based storage provided by services like Amazon's S3, which also depend on the network to move the data around. Put simply, the network is critical to modern companies.

 

Despite the importance of the network, many companies seem to have only a very basic understanding of their own network performance even though the ability to move data quickly around the network is key to success. It's important to set up monitoring to identify when performance is deviating from the norm, but in this post, I will share a few other thoughts to consider when looking at why network performance might not be what people expect it to be.

 

MTU

MTU (Maximum Transmission Unit) determines the largest chunk of data that can be sent in a single frame over an ethernet interface. It's important because every frame that's put on the wire contains overhead; that is, data that is not the actual payload. A typical ethernet interface defaults to an MTU of 1500 bytes, which works out to a frame of roughly 1518 bytes on the wire, so let's look at how that compares to an interface configured with a 9000-byte MTU instead.

 

What's in a frame?

A typical TCP segment carried in an ethernet frame has overhead like this:

 

  • Ethernet header (14 bytes)
  • IPv4 header (20 bytes)
  • TCP header (usually 20 bytes, up to 60 if TCP options are in play)
  • Ethernet Frame Check Sum (4 bytes)

 

That's a total of 58 bytes. The rest of the frame can be data itself, so that leaves 1460 bytes for data. The overhead for each frame represents just under 4% of the transmitted data.

 

A jumbo frame with a 9000-byte MTU can carry 8960 bytes of data with roughly 0.6% overhead. Less overhead means that the data is sent more efficiently, and transfer speeds can be higher. Enabling jumbo frames (frames carrying more than 1500 bytes of payload) and raising the MTU to 9000 where the hardware supports it can make a huge difference, especially for systems moving a lot of data around the network, such as Network Attached Storage.
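
 

Here is a small sketch of that arithmetic, so you can plug in your own MTU; the 18 bytes of Ethernet framing and 40 bytes of IP/TCP headers reflect the simple no-options case described above.

```python
ETH_OVERHEAD = 14 + 4      # Ethernet header + frame check sequence
IP_TCP_HEADERS = 20 + 20   # IPv4 + TCP, no options

def frame_efficiency(mtu: int) -> tuple[int, float]:
    """Return (TCP payload bytes, payload as a fraction of the on-wire frame)."""
    payload = mtu - IP_TCP_HEADERS
    wire_frame = mtu + ETH_OVERHEAD
    return payload, payload / wire_frame

for mtu in (1500, 9000):
    payload, efficiency = frame_efficiency(mtu)
    print(f"MTU {mtu}: {payload} payload bytes, "
          f"{(1 - efficiency) * 100:.2f}% overhead")
```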

 

What's the catch?

Not all equipment supports a high MTU because it's hardware dependent, although most modern switches I've seen can handle 9000-byte frames reasonably well. Within a data center environment, large MTU transfers can often be achieved successfully, with positive benefits to applications as a result.

 

However, Wide Area Networks (WANs) and the internet are almost always limited to 1500 bytes, and that's a problem because those 9000-byte frames won't fit into 1500 bytes. In theory, a router can break large packets up into appropriately sized smaller chunks (fragments) and send them over links with reduced MTU, but many firewalls are configured to block fragments, and many routers refuse to fragment because of the need for the receiver to hold on to all the fragments until they arrive, reassemble the packet, then route it toward its destination.

 

The solution to this is PMTUD (Path MTU Discovery). When a packet doesn't fit on a link without being fragmented, the router can send a message back to the sender saying, in effect, "It doesn't fit; the MTU is..." Great! Unfortunately, many firewalls have not been configured to allow those ICMP messages back in, for a variety of technical or security reasons, but with the ultimate result of breaking PMTUD. One way around this is to use one ethernet interface on a server for traffic internal to a data center (like storage) using a large MTU, and another interface with a smaller MTU for all other traffic. Messy, but it can help if PMTUD is broken.

 

Other encapsulations

The ethernet frame encapsulations don't end there. Don't forget there might be an additional 4 bytes required for 802.1Q VLAN tagging over trunk links, VXLAN encapsulation (50 bytes), and maybe even GRE or MPLS encapsulations (4 bytes each). I've found that despite the slight increase in the ratio of overhead to data, 1460 bytes is a reasonably safe MTU for most environments, but it's very dependent on exactly how the network is set up.

 

Latency

I had a complaint one time that while file transfers between servers within the New York data center were nice and fast, when the user transferred the same file to the Florida data center (basically going from near the top to the bottom of the Eastern coast of the United States) transfer rates were very disappointing, and they said the network must be broken. Of course, maybe it was, but the bigger problem without a doubt was the time it took for an IP packet to get from New York to Florida, versus the time it takes for an IP packet to move within a data center.

 

AT&T publishes a handy chart showing their current U.S. network latencies between pairs of cities. The New York to Orlando route currently shows a 33ms latency, which is about what we were seeing on our internal network as well. Within a data center, I can move data in a millisecond or less, which is 33 times faster. What many people forget is that when using TCP, it doesn't matter how much bandwidth is available between two sites. A combination of end-to-end latency and congestion window (CWND) size will determine the maximum throughput for a single TCP session.
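
 

The window-over-latency relationship is easy to put rough numbers to. A back-of-the-envelope sketch: the ceiling for a single TCP stream is approximately the window size divided by the round-trip time, regardless of how big the pipe is. The 64 KB window below is just an illustrative value.

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Rough ceiling for one TCP stream: window size / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A common 64 KB window, at in-DC vs NY-to-Orlando latencies.
for rtt in (1, 33):
    print(f"RTT {rtt:>2} ms: ~{max_tcp_throughput_mbps(64 * 1024, rtt):,.0f} Mbps")
# ~524 Mbps at 1 ms, but only ~16 Mbps at 33 ms, no matter the link speed.
```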

 

TCP session example

If it's necessary to transfer 100,000 files from NY to Orlando, which is faster:

 

  1. Transfer the files one by one?
  2. Transfer ten files in parallel?

 

It might seem that the outcome would be the same because a server with a 1G connection can only transfer 1Gbps, so whether you have one stream at 1Gbps or ten streams at 100Mbps, it's the same result. But actually, it isn't, because the latency between the two sites will effectively limit the maximum bandwidth of each file transfer's TCP session. Therefore, to maximize throughput, it's necessary to utilize multiple parallel TCP streams (an approach taken very successfully for FTP/SFTP transfers by the open source FileZilla tool). It's also the way that tools like those from Aspera can move data faster than a regular Windows file copy.
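
 

Here is a minimal sketch of the parallel-stream idea using only the standard library; the URLs are placeholders for objects you are actually entitled to fetch. Each worker gets its own TCP connection, so the aggregate transfer is not capped by one stream's latency-bound ceiling.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder URLs; substitute files you are actually entitled to fetch.
urls = [f"https://files.example.com/archive/part{i:03d}.bin" for i in range(10)]

def fetch(url: str) -> int:
    """Download one object over its own TCP connection; return bytes read."""
    with urlopen(url, timeout=30) as response:
        return len(response.read())

# Ten parallel streams instead of one; each stream is individually
# latency-limited, but together they fill far more of the pipe.
with ThreadPoolExecutor(max_workers=10) as pool:
    sizes = list(pool.map(fetch, urls))

print(f"Fetched {len(sizes)} objects, {sum(sizes):,} bytes total")
```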

 

The same logic also applies to web browsers, which typically will open five or six parallel connections to a single site if there are sufficient resource requests to justify it. Of course, each TCP session requires a certain amount of overhead for connection setup: usually a three-way handshake, and if the session is encrypted, a certificate or similar exchange to deal with as well. Another optimization that is available here is pipelining.

 

Pipelining

Pipelining, in the broad sense of connection reuse, uses a single TCP connection to issue multiple requests back to back. In HTTP, this is accomplished with persistent connections: the Connection: keep-alive header in HTTP/1.0, and the default behavior in HTTP/1.1. This asks the destination server to keep the TCP connection open after completing the HTTP request in case the client has another request to make. Being able to do this allows the transfer of multiple resources with only a single TCP connection overhead (or, as many TCP connection overheads as there are parallel connections). Given that a typical web page may make many tens of calls to the same site (50+ is not unusual), this efficiency stacks up quite quickly. There's another benefit too, and that's the avoidance of TCP slow start.
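
 

To see connection reuse from the client side, here is a hedged sketch using the third-party requests library (an assumption, as is the placeholder URL). A Session keeps the underlying connection open between requests, so dozens of small fetches pay the TCP and TLS handshake cost once instead of dozens of times.

```python
import requests  # third-party; pip install requests

BASE = "https://www.example.com"   # placeholder site

# One Session = one connection pool; HTTP keep-alive is used by default,
# so sequential requests reuse the same TCP (and TLS) connection.
with requests.Session() as session:
    for path in ("/", "/styles.css", "/app.js", "/logo.png"):
        response = session.get(BASE + path, timeout=10)
        print(f"{path}: {response.status_code}, {len(response.content)} bytes")
```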

 

TCP slow start

TCP is a reliable protocol. If a datagram (packet) is lost in transit, TCP can detect the loss and resend the data. To protect itself against unknown network conditions, however, TCP starts off each connection being fairly cautious about how much data it can send to the remote destination before getting confirmation back that each sent datagram was received successfully. With each successful loss-free confirmation, the sender exponentially increases the amount of data it is willing to send without a response, increasing the value of its congestion window (CWND). Packet loss causes CWND to shrink again, as does an idle connection during which TCP can't tell if network conditions changed, so to be safe it starts from a smaller number again. The problem is, as latency between endpoints increases, it takes progressively longer for TCP to get to its maximum CWND value, and thus longer to achieve maximum throughput. Pipelining can allow a connection to reach maximum CWND and keep it there while pushing multiple requests, which is another speed benefit.
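
 

A rough sketch of why higher latency makes slow start hurt more: if the congestion window roughly doubles every round trip, the ramp-up time scales with RTT. The initial window of 10 segments and the 500-segment target below are illustrative numbers, not a model of any particular TCP stack.

```python
import math

def time_to_ramp(rtt_ms: float, initial_segments: int = 10,
                 target_segments: int = 500) -> float:
    """Milliseconds of doubling-per-RTT growth to reach the target window."""
    rounds = math.ceil(math.log2(target_segments / initial_segments))
    return rounds * rtt_ms

for rtt in (1, 33, 80):
    print(f"RTT {rtt:>2} ms: ~{time_to_ramp(rtt):.0f} ms before the "
          f"window is fully open")
```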

 

Compression

I won't dwell on compression other than to say that it should be obvious that transferring compressed data is faster than transferring uncompressed data. For proof, ask any web browser or any streaming video provider.

 

Application vs network performance

Much of the TCP tuning and optimization that can take place is a server OS/application layer concern, but I mention it because even on the world's fastest network, an inefficiently designed application will still run inefficiently. If there is a load balancer front-ending an application, it may be able to do a lot to improve performance for a client by enabling compression or Connection: keep-alive, for example, even when an application does not.

 

Network monitoring

In the network itself, for the most part, things just work. And truthfully, there's not much one can do to make it work faster. However, the network devices should be monitored for packet loss (output drops, queue drops, and similar). One of the bigger causes of this is microbursting.

 

Microbursting

Modern servers are often connected using 10Gbps ethernet, which is wonderful except they are often over-eager to send out frames. Data is prepared and buffered by the server, then BLUURRRGGGGHH it is spewed at the maximum rate into the network. Even if this burst of traffic is relatively short, at 10Gbps it can fill a port's frame buffer and overflow it before you know what's happened, and suddenly the latter datagrams in the communication are being dropped because there's no more space to receive them. Anytime the switch can't move the frame from input to output port at least as fast as it's coming in on a given port, the input buffer comes into play and puts it at risk of getting overfilled. These are called microbursts because a lot of data is sent over a very short period. Short enough, in fact, for it to be highly unlikely that it will ever be identifiable in the interface throughput statistics that we all like to monitor. Remember, an interface running at 100% for half the time and 0% for the rest will likely show up as running at 50% capacity in a monitoring tool. What's the solution? MOAR BUFFERZ?! No.

 

Buffer bloat

I don't have space to go into detail here, so let me point you to a site that explains buffer bloat, and why it's a problem. The short story is that adding more buffers in the path can actually make things worse because it actively works against the algorithms within TCP that are designed to handle packet loss and congestion issues.

 

Monitor capacity

It sounds obvious, but a link that is fully utilized will lead to slower network speeds, whether through higher delays via queuing, or packet loss leading to connection slowdowns. We all monitor interface utilization, right? I thought so.

 

The perfect network

There is no perfect network, let's be honest. However, having an understanding not only of how the network itself (especially latency) can impact throughput, but also of the way the network is used by the protocols running over it, might help with the next complaint that comes along. Optimizing and maintaining network performance is rarely a simple task, but given the network's key role in the business as a whole, the more we understand, the more we can deliver.

 

While not a comprehensive guide to all aspects of performance, I hope that this post might have raised something new, confirmed what you already know, or just provided something interesting to look into a bit more. I'd love to hear your own tales of bad network performance reports, application design stupidity, crazy user/application owner expectations (usually involving packets needing to exceed the speed of light) and hear how you investigated and hopefully fixed them!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

With hybrid IT on the rise, I wanted to share a blog written earlier this year by my SolarWinds colleague Bob Andersen.

 

Hybrid IT – migrating some infrastructure to the cloud while continuing to maintain a significant number of applications and services onsite – is a shift in the technology landscape that is currently spreading across the federal government. Are federal IT professionals ready for the shift to this new type of environment?

 

Government IT pros must arm themselves with a new set of skills, products, and resources to succeed in the hybrid IT era. To help with this transition, we have put together a list of four tips that will help folks not only survive, but thrive within this new environment.

 

#1: Work across department silos.

 

Working across department silos will help speed up technology updates and changes, software deployments, and time-to-resolution for problems. What is the best way to establish these cross-departmental relationships? A good place to start is by implementing the principles of a DevOps approach, where the development and operations teams work together to achieve greater agility and organizational efficiency. DevOps, for example, sets the stage for quick updates and changes to infrastructure, which makes IT services – on-premises or within the cloud – more agile and scalable.

 

#2: Optimize visibility with a single version of the truth.

 

In a hybrid environment, the federal IT pro must manage both on-premises and cloud resources. This can present a challenge. The solution? Invest in a management and monitoring toolset that presents a single version of the truth across platforms. There will be metrics, alerts, and other collected data coming in from a broad range of applications, regardless of their location. Having a single view of all this information will enable a more efficient approach to remediation, troubleshooting, and optimization.

 

#3: Apply monitoring. Period.

 

Monitoring has always been the foundation of a successful IT department. In a hybrid IT environment, monitoring is absolutely critical. A hybrid environment is highly complex. Agencies must establish monitoring as a core IT function; only then will they realize the benefit of a more proactive IT management strategy, while also streamlining infrastructure performance, cost, and security.

 

#4: Improve cloud-service knowledge and skills.

 

As more IT services become available through the cloud – and, in turn, through cloud providers – it becomes increasingly important for the federal IT pro to fully understand available cloud services. It’s also important to understand how traditional and cloud environments intersect. Examples include service-oriented architectures, automation, vendor management, application migration, distributed architectures, application programming interfaces, hybrid IT monitoring and management tools, as well as metrics. Knowledge across boundaries will be the key to success in a hybrid IT environment.

 

Working through a technology shift is never easy, especially for the folks implementing, managing, and maintaining all the changes. That said, by following the above tips, agencies will be able to realize the benefits of a hybrid cloud environment, while the IT team thrives within the new environment.

 

Find the full article on Government Computer News.

As you know, we’re gearing up for THWACKcamp 2017, which promises to be our best yet. If you haven’t registered, you’ll want to get on that PRONTO! In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

We are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics. In my “If It’s Not in the Ticket, It Didn’t Happen” session, I'll be joined by SolarWinds Product Managers Kevin Sparenberg and Bryan Zimmerman to discuss best practices for streamlining the help desk request process.

 

Because I haven't worked in a help desk setting, and have likely been thought of as a “problem child” by a help desk or two, I’m sure I’ll appreciate the perspectives Kevin and Bryan will share in this session. I look forward to understanding the similarities and differences involved in supporting internal and external stakeholders, in addition to acting as an MSP in this same capacity. Tapping the wisdom they've accumulated from their individual experiences working on and leading help desk teams, Kevin and Bryan will offer help desk technicians advice on everything from the appropriate levels of information that are needed on the front end of the support process, to which tools can be used throughout the resolution workflow to help accelerate ticket resolution.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

Logs are insights into events, incidents, and errors recorded over time on monitored systems, with the operative word being monitored. That’s because logging may need to be enabled on systems that are still running defaults, or in an environment you’ve inherited that was never configured for logging. For the most part, logs are retained to maintain compliance and governance standards. Beyond this, logs play a vital role in troubleshooting.

 

For VMware® ESXi and Microsoft® Hyper-V® nodes, logs provide essential troubleshooting insight across the node’s stack, and can be combined with alerts to trigger automated responses to events or incidents. The logging process focuses on which logs to aggregate, how to tail and search those logs, and what analysis needs to look like, along with the appropriate reactions to that analysis. And most importantly, logging needs to be easy.
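
 

As a toy illustration of the tail-and-search piece (not a replacement for a real log management service), here is a sketch that follows a growing log file and prints lines matching a pattern. The path and the pattern are placeholders for whatever your hypervisor or syslog target actually writes.

```python
import re
import time

# Placeholders: point these at a real log file and the pattern you care about.
LOG_PATH = "/var/log/syslog"
PATTERN = re.compile(r"error|warning|vmotion", re.IGNORECASE)

def follow(path: str):
    """Yield new lines appended to the file, roughly like 'tail -f'."""
    with open(path, "r", errors="replace") as handle:
        handle.seek(0, 2)            # start at the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

for line in follow(LOG_PATH):
    if PATTERN.search(line):
        print(line.rstrip())
```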

 

Configuring system logs for VMware and Microsoft is a straightforward process. For VMware, one can use the esxcli command or host profiles. For Microsoft, look in the Event Viewer under Application and Services Logs -> Microsoft -> Windows and specifically, the Hyper-V-VMMS (Hyper-V Virtual Machine Management service) event logs. The challenge is efficiently and effectively handling the logging process as the number of nodes and VMs in your virtual environment increases in scale. The economies of scale can introduce multi-level logging complexities, thereby creating troubleshooting nightmares instead of troubleshooting silver bullets. You can certainly follow the Papertrail if you want the easy log management button at any scale.

 

The question becomes, would your organization be comfortable with, and actually approve of, cloud-hosted log management, even with encrypted logging, where the storage is Amazon® S3 buckets? Let me know in the comment section below.

For most people these days, the word “hacking” conjures images of nefarious intruders attempting to gain illegal access to financial institutions, corporations, and private citizens’ computers for theft and profit. Exploitation of unsecured computer systems, cloud services, and networks makes headlines daily, with large breaches of private consumer information becoming a regular event. Various studies predict the impact of global cybercrime, with one estimate from Cybersecurity Ventures predicting damages to exceed $6 trillion by 2021. The impact of this is felt all over the world, with organizations rallying to protect their data and spending over $80 billion on cybersecurity in 2016.

 

There does remain some differentiation in the hacking world between “good” and “evil” and a variety of moral postures in between, each of these terms being subjective and dependent on the point of view of the person using them, of course. There are the “good guys” (white hat hackers), the “bad guys” (black hat hackers), and the gray hats in between – labels borrowed from the traditional indicators of good and bad worn by cowboys in Western movies.

 

Tracing its Origins

 

Hacking in its infancy wasn’t about exploitation or theft. It also didn’t have anything to do with computers, necessarily. It was a term used to describe a method of solving a problem or fixing something using unorthodox or unusual methods. MacGyver, from the 1985 television show of the same name, was a hacker. He used whatever he had available to him at the moment, and his Swiss Army knife, to “hack” his way out of a jam.

The modern sense of the word hack has its origins in the minutes of the M.I.T. Tech Model Railroad Club, dating back to 1955.

 

              “Mr. Eccles requests that anyone working or hacking on the electrical system turn off the power to avoid fuse blowing.”

 

There are some positive uses of the word in modern society. The website Lifehacker is one example, showing people how to solve everyday problems in unconventional, totally legal ways.

 

Captain Crunch

 

Early hacking took shape with tech-savvy individuals like John Draper, aka Captain Crunch, attempting to learn more about programmable systems, specifically phone networks. Coined “phreaking” at the time, this hobby saw these guys hack the public switched telephone network, often just for fun, to learn as much as they could about it, or even for free phone calls. John Draper’s infamous nickname Captain Crunch was derived from the fact that a toy whistle found in Cap’n Crunch cereal emitted a 2600 Hz tone that was used by phone carriers to cause a telephone switch to end a call, which left an open carrier line. This line could then be used to make free phone calls.

 

There were many such exploits on older telephone systems. In the mid-80’s I used to carry a safety pin with me at all times. Why? To make free phone calls. I didn’t understand the mechanism of how this worked at the time, but I knew that if I connected the pin end to the center hole of a pay-phone mouthpiece and touched the other end to any exposed metal surface on the phone, often the handset cradle, I would hear a crackle or clicking noise, followed by a dial tone, and I could then dial any number on the phone without putting any money in it.

 

Later I would learn that this worked because older phone systems used ground-start signaling, which required the phone line to be grounded to receive dial tone. Normally this grounding was accomplished by a coin inserted into the phone, which controlled a switch that grounded the line; my safety pin method did the same thing.

 

I’m assuming of course, that the statute of limitations has run out on these types of phone hacks…

 

Hacking Motivation

 

Phone phreakers like Captain Crunch, and later even his friend Steve Wozniak (yes, the Woz), would develop these techniques further to hack the phone system, more often than not for relatively harmless purposes. Draper cites a number of pranks they pulled through their phone hacking, including:

 

  • Calling the Pope to confess over the phone
  • Obtaining the CIA crisis hotline to the White House to let them know they were out of toilet paper
  • Prank-calling Richard Nixon after learning that “Olympus” was the code name used when someone wanted to speak with him on the phone

 

Draper would eventually get caught and serve jail time for his phone escapades, but what he did wasn’t for profit or malice. He did it to learn how phone systems worked. Nothing more.

 

Kevin Mitnick, arguably the world’s most infamous hacker, speaks in his books and talks about the same thing. His adventures in hacking computer systems were mostly “because he could,” not because he thought there would be any big payoff. He found it a challenge and wanted to see how far he could get into some of these early networks and systems.

 

Hacking for the IT Professional

 

For the modern IT professional, hacking continues to hold a few different meanings. The first is the thing you must protect your network and your information from – malicious hacking. The next might be your approach to solving problems in non-traditional ways – hacking together a fix or solution to an IT problem. The last might be exposing yourself to the methods and techniques used by the black hat community in order to better understand and protect yourself from them – arguably, white hat hacking.

 

IT staff, especially those with responsibility for security, can and should learn, practice, and develop some hacking skills to understand where their main vulnerabilities lie. But how do we do this without getting arrested?

 

Over the next several posts, I'm going to discuss different options that you have, as the everyday IT pro, to learn and develop some practical, real-world hacking skills, safely and legally.

 

That said, I will offer a disclaimer here and in subsequent posts: Please check your local, state, county, provincial, and/or federal regulations regarding any of the methods, techniques, or equipment outlined in these articles before attempting to use any of them. And always use your own private, isolated test/lab environment.

 

Remember how much trouble Matthew Broderick got himself into in WarGames? And all he wanted to do was play some chess.

If cloud is so great, why is hybrid a thing?

 

Microsoft, Amazon, and Google are all telling us we don’t need our own data centers anymore. Heck, they’re even telling application developers that they don’t need servers anymore. The vendors will provide the underlying server infrastructure in a way that’s so seamless to the application that the app can be moved around with ease. Bye-bye, SysAdmins?

 

STOP. Time for a reality check.

 

It’s time to let loose on why the cloud is a bad, bad thing. I won’t even try to argue with you. I’ve worked in tech in a bank and for police and defense, so I know there are valid reasons. Here’s your chance to vent, I mean explain, why you’re not throwing your eggs into a cloudy basket.

 

Cost 
It’s more than just a question of OPEX versus CAPEX. Sometimes it’s cash flow. Sometimes it’s return on investment. Maybe the numbers still don’t add up to put 20TB+ of storage in the cloud versus some on-premises storage arrays?
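
As a purely illustrative sketch of that 20TB question, the Python below compares a rough monthly cloud object-storage bill against an amortized on-premises array. Every figure in it (the per-GB price, egress volume, array cost, amortization period) is an assumption made up for the example, not a quoted rate; plug in your own numbers.

# Rough, illustrative cost comparison for ~20 TB of storage.
# All prices below are assumptions for the sake of the example;
# substitute your own vendor quotes and internal cost figures.

TB = 1024  # GB per TB
capacity_gb = 20 * TB

# Assumed cloud object-storage pricing (per GB per month), plus egress.
cloud_price_per_gb_month = 0.023   # assumption, not a quoted rate
monthly_egress_gb = 500            # assumption
egress_price_per_gb = 0.09         # assumption

# Assumed on-premises array: purchase amortized over 4 years,
# plus a rough monthly figure for power, cooling, and support.
array_purchase = 30000             # assumption
amortization_months = 48
array_opex_per_month = 250         # assumption

cloud_monthly = (capacity_gb * cloud_price_per_gb_month
                 + monthly_egress_gb * egress_price_per_gb)
onprem_monthly = array_purchase / amortization_months + array_opex_per_month

print(f"Cloud (est.):   ${cloud_monthly:,.0f}/month")
print(f"On-prem (est.): ${onprem_monthly:,.0f}/month")

Even a crude model like this tends to surface the line items (egress, support contracts, refresh cycles) that end up deciding the argument.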

 

Compliance
Banks, police, and defense aren’t the only ones bound by strict regulations. Do you have sensitive data or industry red tape that states that the buck stops with you and data can’t go outside of your walls (or your full control)?

 

Concealment
Because data security doesn’t start with a C. Vendor X is a third party, so maybe we don’t trust them with our data. Do we believe that we can secure our own systems best because we have a vested interest here? After all, it is our business on the line, our customer data, our reputation. Or maybe there is a chance that Vendor X is snooping around in our data for their own gain?

 

Control
Even if you did assume that Vendor X is better at security and data breach detection than you are (surely not!), they can still do stupid things like failing to manage servers or not keeping reliable backups. If it does all go horribly wrong, maybe your bosses legally need to have someone in-house to shout at (or fire) and can’t wave that off with a cloud services agreement? Do you have a locked-down environment controlled by group policy that you can’t replicate in the cloud unless you ran a full Windows server there, anyway?

 

Connectivity
Now I know enterprise people who laugh at this one, because surely everybody has great, fast, redundant internet. This is not always the case, especially in smaller organizations. A lack of network bandwidth or a reliable connection is the first roadblock to going anywhere near the cloud. Add to that the complexity of VPNs for specific applications in some industries, and even Google, Amazon, or Microsoft might not be up to the task. Or maybe you need to add other firewall services, which makes the whole endeavor cost prohibitive.

 

Another aspect to connectivity is integration with other systems and data sources. Maybe you've got some seriously complex systems that are a connected piece of a bigger puzzle and cloud can't support or replace that?

 

Concern about vendor lock-in
I don’t envy someone who has to make a choice about which cloud to use. Do you spread your risk? Can you easily move a server instance from one cloud to another? And we haven’t talked about SaaS solutions and data export/imports.

 

See what I did there?

 

So, go on. Tell me what’s holding you back from turning off every single on-premises server. I promise I’ll read the comments. I’m expecting some good ones!

This week's Actuator comes to you in advance of my heading to Austin for some THWACKcamp filming next week. This filming may or may not involve a bee costume. I'm not promising anything.

 

Microsoft Launches Windows Bug Bounty Program Because Late Is Better Than Never

Interesting that this took as long as it did, but it is wonderful to see Microsoft continue to make all the right moves.

 

Passwords Evolved: Authentication Guidance for the Modern Era

Great summary from Troy Hunt about the need for passwords to evolve.

 

A Wisconsin company will let employees use microchip implants to buy snacks and open doors

Or, forget passwords and just go to chip implants. I'm kinda shocked this isn't in use at a handful of hedge funds I know.

 

First Human Embryos Edited in U.S.

Time to upgrade the species, I guess.

 

Indoor robots gaining momentum - and notoriety | The Robot Report

Including this link mostly because I never knew "The Robot Report" existed, and now I can't look away.

 

The Worst Internet In America

I thought for sure this was a report about my neighborhood.

 

Why automation is different this time

Set aside 10 minutes to watch this video. I want you to understand why your job roles are going to change sooner than you might think.

 

I'm not sure this is legit, but the store did have a Mr. Fusion for sale:

By Theresa Miller and Phoummala Schmitt

 

Hybrid IT has moved from buzzword status to reality, and more organizations are realizing that they are in a hybrid world. To understand the potential impact, you should be thinking about the following: What is hybrid IT? Why do you care? And what does it mean for the future of IT?

 

Hybrid IT and the Organization

 

The introduction of the cloud has made organizations wonder what hybrid IT means to them and their business applications. Hybrid IT is any combination of on-premises and cloud in just about any capacity, whether the cloud provides Infrastructure as a Service (IaaS), Software as a Service (SaaS), Platform as a Service (PaaS), or any other cloud option you may choose. The moment you choose a cloud to provide something in your enterprise, you are now in a mode of hybrid IT.

 

With hybrid IT comes the same level of responsibility as on-premises. Moving an application to the cloud doesn’t mean that the cloud provider is responsible for backups, monitoring, software updates, or security unless that is part of your agreement with that cloud provider. Make sure you know your agreement and responsibilities from the beginning.

 

Hybrid IT can provide cost savings, and while some may argue otherwise, it comes down to a budget shift from capital to operational spending. The true value is that you remove the capital overhead of maintaining your own servers, heating, cooling, and sometimes even software updates.

 

Is there value to a hybrid configuration within a Microsoft Exchange deployment?

 

Looking back, it seems Microsoft was one of the great innovators when it came to email in the cloud. It wasn’t exactly successful in the beginning, but today this option has grown into a very stable product, making it the option of choice for many organizations. So how does email factor into hybrid? When migrating to Exchange Online, is a hybrid configuration worth it? I would argue it is, due to the ability to fail back, the ability to keep some email workloads on-site, and the ability to create a migration experience similar to an on-premises deployment. Together, these options create a more seamless migration experience overall.

 

How, you ask? Here are some of the technical functionalities that are vital to that seamless experience, followed by a quick sketch of how you might verify one of them.

 

  • Mail routing between sites – When correctly configured, internal and external routing appear seamless to the end-user
  • Mail routing in shared namespace – This is important to your configuration if the internal and external SMTP domain remains the same
  • Unified Global Address List – Contributing to the seamless user experience, the user sees all of their coworkers in one address list, regardless of whether they are on-premises or in the cloud
  • Free/Busy is shared between on-premises and cloud - This also contributes to the seamless user experience by featuring a visible calendar showing availability, no matter where the mailbox exists
  • A single Outlook web app URL for both on-premises and cloud – If your organization uses this functionality, your configuration can be set up with the same URL, regardless of mailbox location
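
The hybrid configuration itself is handled by Microsoft’s Hybrid Configuration Wizard and the Exchange management tools, but as a small illustration of the shared-namespace point above, here is a minimal Python sketch that checks the public DNS records a shared SMTP namespace depends on. It assumes the dnspython package is installed, and the domain shown is a placeholder, not a real configuration.

# Minimal DNS sanity check for a hybrid shared SMTP namespace.
# Assumes the dnspython package; the domain below is a placeholder.
import dns.resolver

domain = "example.com"  # hypothetical domain; substitute your own SMTP namespace

def lookup(name, record_type):
    """Return record strings, or an empty list if the name/type doesn't resolve."""
    try:
        return [r.to_text() for r in dns.resolver.resolve(name, record_type)]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

# MX records show where inbound mail for the shared namespace is delivered.
print("MX records:", lookup(domain, "MX"))

# Autodiscover is what Outlook clients use to locate a mailbox, on-premises or in the cloud.
print("Autodiscover:", lookup("autodiscover." + domain, "CNAME"))

If either lookup comes back empty mid-migration, the seamless experience described above tends to be the first thing users notice breaking.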

 

How about hybrid IT for VDI?

 

VDI has been showing significant growth in organizations and is becoming more interesting to companies, with an estimated growth rate of 29% over the next couple of years. So what about hybrid? Well, to date, the strongest VDI options are still on-premises products. That said, some cloud options are getting stronger and can definitely be considered.

 

Many of these options do not have strong plans for hybrid, but are rock solid if you are looking for one or the other: on-premises or cloud, but not both. So, what are the gaps for hybrid? To date, many of these options have proprietary components that only work with certain cloud providers. Connector options between on-premises and cloud are still in the early stages, and there needs to be more consideration around applications that are on-premises that need to work in the cloud.

 

Hybrid IT - Ready or not

 

So, if you are moving even a single application to the cloud, you are embarking on the hybrid IT journey. When moving to Microsoft Exchange Online, be sure to use hybrid for your deployment. Last but not least, if you are ready for VDI, choose either on-premises or cloud only to get started. Also, be prepared for some bumps in the road if your applications are on-premises and you choose to put your VDI in the cloud, because this option is very new and every application has different needs and requirements.

 

If you would like to learn more about hybrid IT for VDI and Exchange, check out our recent webcast, "Hybrid IT: Transforming Your IT Organization." And let us know what you think!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

In some ways, data has become just as much a colleague to federal IT managers as the person sitting next to them. Sure, data can’t pick up a burrito for you at lunchtime, but it’s still extraordinarily important to agency operations. Data keeps things going so that everyone in the agency can do their jobs – just like your fellow IT professionals.

 

Unfortunately, as my colleague Thomas LaRock wrote last year, too many people still treat data as a generic commodity instead of a critical component of application performance. But applications are at the heart of just about everything government employees do, and those applications are powered by databases. If there’s a problem with an application, it’s likely due to an underlying performance issue with the database it runs on.

 

As data and applications continue to become more intertwined, it’s time to get serious about employing strategies to ensure optimal database performance. Here are five tips to get you started:

 

1. Integrate DBAs into the IT mix

 

It may seem incredible, but to this day many agencies are still arranged in silos, with different teams shouldering separate responsibilities. As such, despite their importance to network and application performance, many DBAs still operate in their own bubbles, separate from network administrators and IT managers. But IT and DBA teams should work together to help ensure better application performance and availability for everyone’s sake.

 

2. Establish performance baselines before you start monitoring

 

Your strategy starts with monitoring for performance issues that may be causing problems with an agency’s applications. Before you even begin this, you’ll need to set up baselines to measure against. These baselines will allow you to track the code, resource, or configuration change that may be causing the anomalies and fix the issues before they become larger problems.
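
To make the idea concrete, here is a minimal Python sketch of what a baseline can look like: derive a mean and standard deviation from response times captured during a known-good period, then flag new samples that drift outside that band. The numbers and the three-sigma threshold are illustrative assumptions, not a prescription for any particular monitoring tool.

import statistics

# Response times (ms) captured during a known-good period; values are illustrative.
baseline_samples = [120, 135, 128, 140, 122, 131, 126, 138, 129, 133]

baseline_mean = statistics.mean(baseline_samples)
baseline_stdev = statistics.stdev(baseline_samples)

def is_anomalous(response_ms, sigmas=3.0):
    """Flag a sample that falls outside the chosen band around the baseline."""
    return abs(response_ms - baseline_mean) > sigmas * baseline_stdev

for sample in [130, 145, 210]:  # new observations, also illustrative
    status = "ANOMALY" if is_anomalous(sample) else "ok"
    print(f"{sample} ms -> {status} (baseline {baseline_mean:.0f} ms +/- {baseline_stdev:.0f} ms)")

Whatever tool collects the metrics, the baseline is what turns a raw number into a signal worth acting on.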

 

3. Start monitoring — but take it to the next level

 

Take things a step further by digging deeper into your data. Use real-time data collection and real-time monitoring in tandem with traditional network monitoring solutions to improve overall database performance, and maintain network and data availability. Incorporate tools with wait-time analysis capabilities to help identify how an application request is being processed, and which resources that application may be waiting on. This can help pinpoint the root cause of performance issues so you can see if they’re associated with your databases.
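
Wait-time analysis tooling does the heavy lifting here, but the underlying idea is easy to sketch: sample what each request is waiting on, group by resource, and rank the totals so the biggest contributor to response time surfaces first. The sample data below is entirely fabricated for illustration.

from collections import defaultdict

# Fabricated wait samples: (query, wait type, wait time in ms).
wait_samples = [
    ("report_query", "disk I/O",  420),
    ("report_query", "disk I/O",  390),
    ("login_lookup", "lock wait", 150),
    ("report_query", "CPU",        80),
    ("login_lookup", "lock wait", 210),
]

totals = defaultdict(int)
for query, wait_type, ms in wait_samples:
    totals[(query, wait_type)] += ms

# Largest accumulated wait first: this is where tuning effort pays off soonest.
for (query, wait_type), ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{query:<14}{wait_type:<11}{ms:>6} ms")

A commercial wait-time analysis tool adds the instrumentation and history, but the ranking it presents is conceptually this simple.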

 

4. Then, go even further — into your application stack and beyond

 

Applications depend on one another; when one slows down or fails, it could adversely affect the entire stack. Therefore, you’ll want to use monitoring solutions that provide visibility across your entire application stack, not just sections or individual applications. This includes software, middleware, and, especially, databases. This type of monitoring can help you zero in on potential issues wherever they may reside in your organization, and make it much easier to address them to minimize downtime and keep things rolling.

 

5. Don’t stop — be proactive and continuously monitor

 

Proactive and continuous monitoring is the best approach, and it must involve both software and teamwork. Start by deploying solutions that can automatically monitor applications and databases 24/7. Make sure that everyone is on the same page and appreciates end-user expectations with regard to page load and response times. Know that the work the team does impacts the entire agency, and can directly influence – positively or negatively – their colleagues’ efforts toward achieving their agencies’ goals.

 

Databases and applications will continue to play a part in these efforts, and you’ll be working alongside them for as long as you’re in federal IT. They might not be able to chat with you over a cup of coffee, but they’ll always be there for you – until they’re not.

 

Don’t let it get to that point. Do whatever it takes to keep your databases and applications working just as hard as you.

 

Find the full article on GovLoop

If budgets allowed, I think all of us in IT might spend a month every year attending events. And while Bruno Mars was apparently a great way to wrap up Cisco Live 2017, the real draw for IT professionals is, and always will be, education. Technology is evolving more rapidly than ever. Cloud, DevOps, and even diversifying, specialized on-premises gear like flash storage create more and more demand for learning just to keep up. Of course, the sad reality is that travel and education budgets don’t support a month of fun conference travel every year. This conundrum led to the genesis of THWACKcamp.

 

For 2017, SolarWinds will host its sixth THWACKcamp, a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than just best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers. A holdover from 2016 is the senior executive keynote, with a look into the future of SolarWinds to see where its products are headed. THWACKcamp 2017 will be better than ever, so be sure you’re there October 18-19.

 

 

Registration is now open! Just click on the link below. If you like, you can also download calendar reminders to make sure you don’t miss a single opportunity to interact live with SolarWinds staff, technology experts, and, of course, the amazing members of the THWACK community. It wouldn’t be a THWACK event without lots of cool giveaways and live geek fun, so you’ll want to be there for that, too.

 

Thousands of IT professionals of all kinds attend THWACKcamp every year, whether they’re SolarWinds customers or not. Our goal, as always, is to provide expert advice on technology, skills, and career development to keep you ahead of the curve and loving your work.

 

See you there! Register now >>
