Tomorrow is Thanksgiving here in the USA. I’ve avoided doing a turkey-themed post this week, but included two links because reasons. I hope everyone has the opportunity to share a meal with family and friends. It’s healthy to get together and remember all the things for which we are thankful.
As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!
I’ve spent countless hours trying to find the perfect tool for the job. In fact, I’ve spent more hours searching at times than I have doing the work. You’ve probably done that before. I hear a lot of people are the same way. I look at it as if I’m searching for that needle in a haystack. When I find it, I’m over the moon. But what about when it comes to our end users? Do we trust them enough to locate that needle in the haystack when it comes to software that will enable them to perform their work? I'd venture to guess that in larger organizations the resounding answer is no! But why?
Let's use the media consumption application Elmedia Player as a working example. Now it's true that our end users probably won't need Elmedia Player to get their job done, but the method behind what happened here is what I'm interested in discussing.
On October 19, 2017, ESET Research reported that Elmedia Player had been briefly infected with the Proton malware strain. In fact, Eltima, the developer of Elmedia Player, later confirmed on their blog that the compromised software had been distributed directly from their own servers. In this case, the malware was delivered via the supply chain.
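One partial defense against a tampered installer is to verify its checksum against the value the vendor publishes on their download page. A minimal sketch in Python, where the published digest shown is a placeholder (it happens to be the SHA-256 of an empty file), not a real vendor value:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder digest -- in practice, copy this value from the vendor's
# download page (served over HTTPS), never from the download itself.
PUBLISHED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def installer_is_intact(path: str) -> bool:
    """True if the file on disk matches the published digest."""
    return sha256_of(path) == PUBLISHED_SHA256
```

Note that this only helps if the attacker compromised the download but not the page publishing the hash; in the Elmedia case, where the vendor's own servers were serving the payload, code-signing checks and endpoint monitoring are the stronger controls.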
Let's now put this in the perspective of our end users. Imagine that an end user shows up at work to find that their workstation doesn't have some set of software that they use at home. They feel comfortable using this software and decide to find it on the interwebs and download it themselves. They grab said software package and install it. Off to work! Another great day.
However, in this instance, they have obtained a package that's infected with malware. It's now on your network. Your day has just gone downhill fast. You'll likely spend the rest of the day restoring a machine or two, trying to figure out how far the malware has spread, and second-guessing every control you've put in place. In fact, you may not even realize that there's malware on the network initially, and it could be days or weeks before the impact is realized.
Taking it to the Enterprise
Let's move this discussion to something more fitting for the enterprise. We've already discussed online file storage services in this series, but let's revisit that a bit.
Couple the delivery of malware packaged into an installer file with the strong encryption performed by these online services, and you can pretty much throw your visibility out the window. So here's the scenario. A user wants to share some files with a colleague. They grab a free Dropbox account and share some work stuff, some music, and some apps that they like, perhaps Elmedia Player with an infected installer. You can see where this is going.
My point to all of this is that we have to provide the tools and prohibit the user from finding their own, or things start to go off the rails. There's no way for us to provide security for our organization if our users are running amok on their own. They may not mean any harm, but it happens. In fact, this article written by Symantec discusses the very same idea. So instead of finding a needle in a haystack, we end up falling on a sword in a pile of straw.
How do you feel about these services being used by end users without IT governance? How do you handle these situations?
By Joe Kim, SolarWinds EVP, Engineering and Global CTO
As agencies look ahead to the new year, I wanted to share an insightful blog about the evolution of network technology written by my SolarWinds colleague, Leon Adato.
We are entering a new world where mobility, the Internet of Things (IoT), and software-defined networking (SDN) have dramatically changed the purview of federal IT pros. The velocity of network changes is expected to pick up speed in the next five years as new networking technologies are adopted and mature. The impact this will have on network administrators will be significant.
To help prepare for this new world, we have put together the following list of positive changes we expect federal administrators—and the tools they use to do their jobs every day—will experience between now and 2020.
Streamlined Network Troubleshooting
Today, network administrators spend a lot of time troubleshooting. Moving forward, this process will become far more streamlined by using data that already exists to free up administrators' time.
The vast majority of applications in place today hold a lot of data that could be used to help in troubleshooting network issues. Unfortunately, they don’t quickly and easily bubble that data up to administrators. Soon, systems will be able to arm the administrator with enough information to fix problems much more quickly. The future will see administrators steeped in automated intelligence, where an emailed alert not only describes a present problem, but also includes details of similar past incidents and recent configuration changes that could be related.
Greater Ability to Resolve Potential Problems Before They Arise
With the development of advanced network management capabilities comes the ability to increase automation by tapping into historical knowledge to predict problems before they happen.
Every agency would like the ability to have its systems effectively take care of themselves with a greater degree of automation. There is an emerging need for network technology that moves beyond simply alerting administrators to problems, and toward alerting, fixing, and escalating a notification when conditions are ripe for an issue, based on historical context.
Greater Awareness of Virtualization by Network Management Tools
Virtualization is increasingly common, yet many network management and monitoring tools have not evolved. All the tools that make up that portfolio of network management necessities need to understand what network virtualization is and the specifics it brings—in particular, how to operate, gather, and relay information within a more hybrid and software-defined world.
Increased Connectivity Across Devices
Finally, there’s IoT, which will, without a doubt, bring dramatic complexity to the evolution of network structures over the next three to five years.
The concept of IoT is really about connecting and networking unconventional things and turning them into data collection points. Think everything from sensors in military materiel shipments to connected cars. A lot of these “things” are being tested to see where we might consider the boundaries of the network edge to be, and where that data processing needs to take place.
The best approach is somewhere between decentralized and centralized—which is where network management will be heading. Network management tools and federal network managers will need to build internal capability—staff and technology—to maintain visibility, awareness, and control over continually evolving network technology.
You’ve made the leap and have implemented a DevOps strategy in your organization. If everything is going just great, this post may not be for you. But if you haven’t implemented DevOps, or you have and things just don’t seem to be progressing as quickly as you had hoped, let’s discuss some of the reasons why your DevOps might be slow.
Before we get too far along into why your DevOps initiative might be slow, let’s first ask: Is it really slow? If DevOps is new to your company, there may be some period of adjustment as the teams get used to communicating with each other. Additionally, it’s possible that your expectations of how fast things should be might be out of alignment with how they really are. It’s easy to think that by transitioning to DevOps, everything will be all unicorns and rainbows and instantly churning out code to make your lives better. However, depending on where you start to focus, there could be some time before you start to see benefits.
What We Have Here is a Failure to Communicate
If you’ve been following the previous three posts in this blog series, it should come as no surprise that one of the key factors that could slow down your DevOps projects is communication. Do you have a process set up to interface the developers and operations personnel so that everyone agrees on the best way to communicate back and forth? Are your developers getting the information they need from operations in a timely manner? Can operations communicate feature requests, issues, and operations specific information to developers efficiently? If the answers to any of these questions are no, then you’ve started down the path of identifying your issue.
Keep in mind that just because you have a process in place to establish communication channels between the developers and operations personnel, you may still encounter issues. Just because a process has been established doesn’t mean it is the right process, or that people are following it. When evaluating, make sure that you don’t assume that the processes are appropriate for your company.
I’m Not Buying That
Sometimes, employees simply won't buy into DevOps. Maybe they think that they can get things done faster without user input and as a result, they ditch all processes that were in place to help facilitate developer-operator communications. As mentioned in DevOps Pitfalls, culture is a huge contributing factor to the success or failure of any DevOps initiative. The process becomes behavior which, in turn, becomes culture. If the process is being ignored, your organization needs to come up with a way of dealing with employees who choose not to follow it. This gets into a whole HR policy discussion which is way outside the scope of this blog.
I Used to Be a Developer, Now I’m a Developer Times Two!
Before you started doing DevOps, it’s likely you already had developers, and they already had a job writing code for some projects. Whether it’s because you are increasing automation or building software for a software-defined data center, the projects that you are considering as the lead into DevOps are not the only projects that your developers are working on. When you make the choice to implement DevOps processes, carefully review your developers' current workloads. Based on your findings, you may need to hire more developers to help ensure that the project rolls out smoothly and in a reasonable time frame.
It doesn’t matter if you are developing a new software tool or deploying a new phone system, there is a tendency for a lot of people to want to get everything rolled into one big release. By doing so, users get to see the full glory of your project and you can sit back and enjoy being completely done with something.
The issue with this approach is that this could take a lot more time than users are expecting. It would be better to have some clear communication up front to identify the features that are time-critical for users to have, and build and release those first with a schedule to release the remaining features. By using this approach, developers and operators get an early win to address the critical issue. This is then followed up by additional wins as new functionality gets rolled into the software in future, short-timeline releases.
As you can see, reasons for a slow DevOps process are varied but can be largely attributed to the communications that are in place between developers and operators. What other issues have you seen that have slowed down a DevOps process?
In the next and final post in this series, we'll wrap up some of what has been discussed in the series, and also address some of the comments and questions that have cropped up along the way. Finally, I’ll leave you with some DevOps resources to give you more information than I can possibly provide in five blog posts!
In previous posts in this “My Life” blog series, I’ve written quite a bit about the project management/on-task aspects of how I keep my focus and direction top of mind while I practice and place emphasis on my goals. But, what happens when something doesn’t work?
In some cases, the task we’re trying to accomplish may be just too hard. In some cases, we’re just not there yet. Practice, in this case, is the right answer. As my French horn teacher used to say, "Practice does not make perfect. Perfect practice makes perfect." The problem, particularly in my guitar playing, is that I’m flying a little blind because I have no teacher helping me to practice perfectly. But, imagine a tricky chord sequence that has had me failing during practice. If I don’t burn through the changes as often as possible in my practice time, I’ll definitely fail when I’m on stage, attempting to play it in front of a live audience.
In an effort to avoid the embarrassment of that failure, I sandbox. At least that’s how I see it.
The same analogy can be transposed to my thoughts about implementing a new build of a server, an application that may or may not work in my VMware architecture, etc. We don’t want to see these things fail in production, so we test them out as developer-type machines in our sandbox. This is a truly practical approach. By making sure that all testing has taken place prior to launching a new application in production, we're helping to ensure that we'll experience as little downtime as possible.
I look at these exercises as an effort to enable a high degree of functionality.
The same can be said as it reflects on training myself to sleep better, or to gain efficiency in my exercise regime. If I were to undertake a new weightlifting program, and my lifts are poorly executed, I could strain a muscle or muscle group. How would that help me? It wouldn’t. I work out without a trainer, so I rely on the resources that are available to me, such as YouTube, to learn and improve. And when I’ve got a set of motions down pat, the new exercise gets rolled into my routine. Again, I’ve sandboxed and learned what lessons I need to know by trial and error. This helps me avoid potentially hazardous roadblocks, and in the case of my guitar, not looking like a fool. Okay, let’s be clear... avoid looking more like a fool than usual.
I know that this doesn’t feel spontaneous. Actually, it isn’t. Again, as I relate it to musical performance, the correlation is profound. If I know the material through practice and have a degree of comfort with it, my improvisation can take place organically. I always know where I am, and where I want to be in the midst of a performance, and thus, my capacity to improvise can open up.
Every tech decision comes down to cost, benefit, and risk. NAT is no different. I’m not a network admin, but I find this discussion fascinating. It is also eerily similar to the myriad of debates I have with my data folk.
Munich is often hailed as a leader in how a city can operate without Microsoft software, but it has had enough of the dumpster fire they created by trying to switch to Linux. Somewhere, adatole is weeping.
By Joe Kim, SolarWinds EVP, Engineering and Global CTO
Most federal agencies have to contend with a constant lack of IT resources—staff and budget alike. In fact, many of these agencies have been performing IT tasks manually for years, heightening the pain of that already painful burden. Agencies, take heart. There is a way to ease that pain. In fact, there is a single solution that solves both of these issues: automation.
Today’s federal IT pros simply can’t afford to be burdened with the extra time manual interventions take. Automation can help eliminate wasted time, ease the burden of overtasked IT staff, and allow the IT team to focus on more mission-critical objectives.
Take alerts, for example. Alerts will always be a critical part of IT. But, there is a way to handle them that doesn’t take hours out of a federal IT professional’s day.
In a manual scenario, when a server alert is created because a disk is full, the response would be to, say, dump the temp directory. That takes time and effort. What if, instead, the administrator wrote a script for this task to take place automatically? That would save time and effort, and would take care of the fix more quickly than a manual approach.
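The disk-full fix described above can be sketched as a short script. This is a hedged illustration, not a production runbook: the scratch directory path and the 90% threshold are assumptions you would replace with values from your own environment and alerting policy.

```python
import os
import shutil

# Assumed values for illustration -- substitute your own scratch
# directory and the usage threshold your alerts fire at.
TEMP_DIR = "/var/tmp/app-scratch"   # hypothetical scratch directory
THRESHOLD = 0.90                    # act when the disk is 90% full

def disk_usage_fraction(path: str) -> float:
    """Fraction of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def purge_temp_dir(path: str) -> int:
    """Delete everything inside `path`; return the number of entries removed."""
    removed = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            shutil.rmtree(entry.path, ignore_errors=True)
        else:
            os.unlink(entry.path)
        removed += 1
    return removed

def handle_disk_alert(path: str = TEMP_DIR) -> bool:
    """Respond to a 'disk full' alert by clearing the scratch directory."""
    if disk_usage_fraction(path) >= THRESHOLD:
        purge_temp_dir(path)
        return True
    return False
```

Wired into your monitoring tool's alert-action hook (or a cron job), a script like this turns a recurring manual chore into a few milliseconds of automated cleanup.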
Here’s another scenario: Let’s say an application stops working. The manual approach to getting that application back up and running would take an inordinate amount of time. With automation, the administrator can write a script enabling the application to restart automatically.
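The restart scenario can be sketched the same way. A minimal example, assuming a Linux host where the service is managed by systemd; the service name "myapp" is a placeholder, and your restart mechanism may differ:

```python
import subprocess

# "myapp" is a hypothetical service name -- substitute the actual
# service and restart mechanism used in your environment.
SERVICE_NAME = "myapp"

def is_running(name: str) -> bool:
    """True if a process with exactly this name appears in the process table."""
    result = subprocess.run(["pgrep", "-x", name], capture_output=True)
    return result.returncode == 0

def ensure_running(name: str = SERVICE_NAME) -> bool:
    """Restart the service if it is down; return True if a restart was issued."""
    if not is_running(name):
        subprocess.run(["systemctl", "restart", name], check=True)
        return True
    return False
```

Run from a scheduler or triggered by the application-down alert itself, this closes the loop: the alert fires, the script restarts the service, and the administrator only gets involved if the restart fails.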
Of course, not all alerts can be solved with an automated response. That said, there are many that can, and that translates to time saved.
Going beyond simply automating alerts, think about the possibility of a self-healing data center, where scripts and actions are performed automatically by monitoring software issues as they happen.
There are tools available today that can absolutely provide this level of automation. Consider tools dedicated to change management and tracking, compliance auditing, and configuration backups. These types of tools will not only save administrators vast amounts of time and resources, but will also greatly reduce errors that are too often introduced through manual problem solving. These errors can lead to network downtime or even potential security breaches.
As a federal IT pro, your bottom line is the mission. The time you save through automation can help sharpen that focus. Enhancing that automation will allow additional time to focus on developing and deploying new and innovative applications, for instance, or ways to deliver those applications to users more effectively, so they can have the tools they need to do their jobs more efficiently.
When you automate, you make your life easier and your agency more agile, innovative, nimble, and secure.
Last year, we kicked off a new THWACK tradition: the December Word-a-Day Writing Challenge, which fostered a new kind of interaction in the THWACK community: By sharing personal essays, images, and thoughts, we started personal conversations and created human connections. In just one month, last year’s challenge generated nearly 20,000 views and over 1,500 comments. You can see the amazing writing and thoughtful responses here: Word-A-Day Challenge 2016
Much of this can be attributed to the amazing, engaging, thriving THWACK community itself. Whether the topic is the best starship captain, what IT lessons we can learn from blockbuster movies, or the best way to deal with nodes that have multiple IP addresses, we THWACKsters love to chat, debate, and most of all, help. But I also believe that some of last year's success can be attributed to the time of year. With December in sight, and some of us already working on budgets and project plans for the coming year, many of us find our thoughts taking an introspective turn. How did the last 12 months stack up to my expectations? What does the coming year hold? How can I best prepare to meet challenges head-on? By providing a simple prompt of a single, relatively innocuous word, the Word-a-Day Challenge gave many of us a much-needed blank canvas on which to paint our hopes, concerns, dreams, and experiences.
Which takes me to this year's challenge. Once again, each day will feature a single word. Once again, one brave volunteer will serve as the "lead" writer for the day, with his or her thoughts on the word of the day featured at the top of the post. Once again, you--the THWACK community--are invited to share your thoughts on the word of the day in the comments area below the lead post. And once again, we will be rewarding your contribution with sweet, sweet THWACK points.
What is different this year is that the word list has a decidedly more tech angle to it. Also, our lead writers represent a wider range of voices than last year, with contributors coming from SolarWinds product, marketing, and sales teams. The lead writers also include contributions from our MVP community, which gives you a chance to hear from some of our most experienced customer voices.
For those who are more fact-oriented, here are the challenge details:
One word will be posted per day, at midnight, US CST (GMT -6)
The community has until the following midnight, US CST, to post a meaningful comment on that day's word
Comments will earn you 150 THWACK points
One comment per THWACK ID per day will be awarded points
Points will appear throughout the challenge, BUT NOT INSTANTLY. Chill.
A "Meaningful" comment doesn't necessarily mean "long-winded." We're simply looking for something more than a "Me, too!" or "Nice job!" sort of response
Words WILL post on the weekends, BUT...
...For posts on Saturday and Sunday, the community will have until midnight CST on Monday (meaning the end of Monday, start of Tuesday) to share comments—a concession to those folks who really, REALLY don't work on the weekend (who ARE you people?!?)
Only comments posted in the comments area below the word for that day will be awarded THWACK points (which means words will not count if posted on Geek Speak, product forums, your own blog, or on the Psychic Friends Network.)
Once again, the Word-a-Day 2017 challenge area can be found here: Word-A-Day Challenge 2017. While nothing will appear in this new forum until December 1, I encourage everyone to follow the page now to receive notifications about new posts as they appear.
If you would like to get to know our 28 contributors before the challenge starts, you can find their THWACK pages here:
And finally, I present the Word-a-Day list for 2017! I hope that posting them here and now will give you a chance to gather your thoughts and prepare your ideas prior to the Challenge. That way you can fully participate in the conversations that will undoubtedly arise each day.