Geek Speak

You’ve made the leap and have implemented a DevOps strategy in your organization. If everything is going just great, this post may not be for you. But if you haven’t implemented DevOps, or you have and things just don’t seem to be progressing as quickly as you had hoped, let’s discuss some of the reasons why your DevOps might be slow.

 

But First…

Before we get too far along into why your DevOps initiative might be slow, let’s first ask: Is it really slow?  If DevOps is new to your company, there may be some period of adjustment as the teams get used to communicating with each other. Additionally, it’s possible that your expectations of how fast things should be might be out of alignment with how they really are. It’s easy to think that by transitioning to DevOps, everything will be all unicorns and rainbows and instantly churning out code to make your lives better. However, depending on where you start to focus, there could be some time before you start to see benefits.

 

What We Have Here is a Failure to Communicate

If you’ve been following the previous three posts in this blog series, it should come as no surprise that one of the key factors that can slow down your DevOps projects is communication. Do you have a process in place for developers and operations personnel to interface, so that everyone agrees on the best way to communicate back and forth? Are your developers getting the information they need from operations in a timely manner? Can operations communicate feature requests, issues, and operations-specific information to developers efficiently? If the answer to any of these questions is no, then you’ve started down the path of identifying your issue.

 

Keep in mind that just because you have a process in place to establish communication channels between the developers and operations personnel, you may still encounter issues. Just because a process has been established doesn’t mean it is the right process, or that people are following it. When evaluating, make sure that you don’t assume that the processes are appropriate for your company.

 

I’m Not Buying That

Sometimes, employees simply won't buy into DevOps. Maybe they think that they can get things done faster without user input and as a result, they ditch all processes that were in place to help facilitate developer-operator communications. As mentioned in DevOps Pitfalls, culture is a huge contributing factor to the success or failure of any DevOps initiative. The process becomes behavior which, in turn, becomes culture. If the process is being ignored, your organization needs to come up with a way of dealing with employees who choose not to follow it. This gets into a whole HR policy discussion which is way outside the scope of this blog.

 

I Used to Be a Developer, Now I’m a Developer Times Two!

Before you started doing DevOps, it’s likely you already had developers, and they already had a job writing code for some projects. Whether it’s because you are increasing automation or building software for a software-defined data center, the projects that you are considering as the lead-in to DevOps are not the only projects that your developers are working on. When you make the choice to implement DevOps processes, carefully review your developers' current workloads. Based on your findings, you may need to hire more developers to help ensure that the project rolls out smoothly and in a reasonable time frame.

 

Size Matters

Whether you are developing a new software tool or deploying a new phone system, there is a tendency for a lot of people to want everything rolled into one big release. By doing so, users get to see the full glory of your project, and you can sit back and enjoy being completely done with something.

 

The issue with this approach is that it could take a lot more time than users are expecting. It would be better to have some clear communication up front to identify the features that are time-critical for users, and to build and release those first with a schedule for releasing the remaining features. By using this approach, developers and operators get an early win by addressing the critical issues. This is then followed up by additional wins as new functionality gets rolled into the software in future, short-timeline releases.

 

Wrap Up

As you can see, reasons for a slow DevOps process are varied but can be largely attributed to the communications that are in place between developers and operators. What other issues have you seen that have slowed down a DevOps process?

 

In the next and final post in this series, we'll wrap up some of what has been discussed in the series, and also address some of the comments and questions that have cropped up along the way. Finally, I’ll leave you with some DevOps resources to give you more information than I can possibly provide in five blog posts!

In previous posts in this “My Life” blog series, I’ve written quite a bit about the project management/on-task aspects of how I keep my focus and direction top of mind while I practice and place emphasis on my goals. But, what happens when something doesn’t work?

 

In some cases, the task we’re trying to accomplish may be just too hard. In some cases, we’re just not there yet. Practice, in this case, is the right answer. As my French horn teacher used to say, "Practice does not make perfect. Perfect practice makes perfect." The problem, particularly in my guitar playing, is that I’m flying a little blind because I have no teacher helping me to practice perfectly. But imagine a tricky chord sequence that has had me failing during practice. If I don’t burn through the changes as often as possible in my practice time, I’ll definitely fail when I’m on stage, attempting to play it in front of a live audience.

 

In an effort to avoid the embarrassment of that failure, I sandbox. At least that’s how I see it.

 

The same analogy can be transposed to my thoughts about implementing a new build of a server, an application that may or may not work in my VMware architecture, etc. We don’t want to see these things fail in production, so we test them out as developer-type machines in our sandbox. This is a truly practical approach. By making sure that all testing has taken place prior to launching a new application in production, we're helping to ensure that we'll experience as little downtime as possible.

 

I look at these exercises as an effort to enable a high degree of functionality.

 

The same can be said as it reflects on training myself to sleep better, or to gain efficiency in my exercise regime. If I were to undertake a new weightlifting program, and my lifts are poorly executed, I could strain a muscle or muscle group. How would that help me? It wouldn’t. I work out without a trainer, so I rely on the resources that are available to me, such as YouTube, to learn and improve. And when I’ve got a set of motions down pat, the new exercise gets rolled into my routine. Again, I’ve sandboxed and learned what lessons I need to know by trial and error. This helps me avoid potentially hazardous roadblocks, and in the case of my guitar, not looking like a fool. Okay, let’s be clear... avoid looking more like a fool than usual.

 

I know that this doesn’t feel spontaneous. Actually, it isn’t. Again, as I relate it to musical performance, the correlation is profound. If I know the material through practice and have a degree of comfort with it, my improvisation can take place organically. I always know where I am, and where I want to be, in the midst of a performance, and thus my capacity to improvise can open up.

I'm in Orlando this week attending SQL Server Live, part of the Live360 events. If you are attending, stop by the SolarWinds booth and say hello. I’m happy to talk data and databases with you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Cray Supercomputers in Microsoft Azure

Remember when Microsoft announced at Ignite that they were going to offer quantum computing as a service in Azure? You need big computers for that. Looks like they found some.

 

An Opinion On Offense Against NAT

Every tech decision comes down to cost, benefit, and risk. NAT is no different. I’m not a network admin, but I find this discussion fascinating. It is also eerily similar to the myriad of debates I have with my data folk.

 

Munich IT chief slams city's decision to dump Linux for Windows

Munich is often hailed as a leader in how a city can operate without Microsoft software, but it has had enough of the dumpster fire created by trying to switch to Linux. Somewhere, adatole is weeping.

 

YouTube to target disturbing videos masquerading as kids' shows

I’m trying to understand why this hasn’t been a focus for YouTube already. I’m guessing the answer is money.

 

Culture is the Behavior You Reward and Punish

Before you read this article, take five minutes to answer the question, "What makes people successful at your company?" Then, read this article.

 

Self-Operating Shuttle Bus Crashes After Las Vegas Launch

Classic PEBSWADS (Problem Exists Between Steering Wheel and Driver’s Seat). No robots were harmed as a result of this crash.

 

Someone Mapped Out Every Quantum Leap Scott Bakula Has Ever Done

Oh boy.

 

Sitting on top of the world, 1 billion users, and about to fall off a cliff. The world of tech changes quickly. Here's the cover of Forbes from 10 years ago:

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Most federal agencies have to contend with a constant lack of IT resources, staff and budget alike. Many of these agencies have been performing IT tasks manually for years, adding to an already painful burden. Agencies, take heart: there is a way to ease that pain. In fact, there is a single solution that solves both of these issues: automation.

 

Today’s federal IT pros simply can’t afford to be burdened with the extra time manual interventions take. Automation can help eliminate wasted time, ease the burden of overtasked IT staff, and allow the IT team to focus on more mission-critical objectives.

 

Automating Alerts

 

Take alerts, for example. Alerts will always be a critical part of IT. But, there is a way to handle them that doesn’t take hours out of a federal IT professional’s day.

 

In a manual scenario, when a server alert is created because a disk is full, the response would be to, say, dump the temp directory. That takes time and effort. What if, instead, the administrator wrote a script for this task to take place automatically? That would save time and effort, and would take care of the fix more quickly than a manual approach.
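As a rough illustration of the disk-cleanup scenario above, here is a minimal sketch of what such a script might look like. It assumes a Linux-style host, a temp directory at /tmp, and a 90% usage threshold; none of those details come from the original scenario, so treat them as placeholders for your environment.

```python
#!/usr/bin/env python3
"""Minimal sketch of an automated disk-space remediation.

Assumptions (not from the original post): a Linux-style host, a temp
directory at /tmp, and a 90% usage threshold. Adjust for your environment.
"""
import os
import shutil
import time

TEMP_DIR = "/tmp"            # directory to prune when space runs low (assumed)
THRESHOLD = 0.90             # act when the filesystem is 90% full (assumed)
MAX_AGE_SECONDS = 24 * 3600  # only remove temp files older than a day


def disk_usage_fraction(path: str) -> float:
    """Return used/total for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def prune_old_temp_files(temp_dir: str, max_age: int) -> int:
    """Delete files in temp_dir older than max_age seconds; return count removed."""
    now = time.time()
    removed = 0
    for name in os.listdir(temp_dir):
        full = os.path.join(temp_dir, name)
        try:
            if os.path.isfile(full) and now - os.path.getmtime(full) > max_age:
                os.remove(full)
                removed += 1
        except OSError:
            # Another process may have removed or locked the file; skip it.
            continue
    return removed


if __name__ == "__main__":
    if disk_usage_fraction(TEMP_DIR) >= THRESHOLD:
        count = prune_old_temp_files(TEMP_DIR, MAX_AGE_SECONDS)
        print(f"Disk above threshold: removed {count} old temp files")
    else:
        print("Disk usage below threshold; nothing to do")
```

Wired into an alerting tool's action hook or a scheduled job, a script like this turns a routine, manual cleanup into something that happens before anyone even has to look at the alert.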

 

Here’s another scenario: Let’s say an application stops working. The manual approach to getting that application back up and running would take an inordinate amount of time. With automation, the administrator can write a script enabling the application to restart automatically.
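In the same spirit, here is a minimal sketch of an automatic restart, assuming the application runs as a systemd-managed service. The service name "myapp" is a placeholder, not something from the article.

```python
#!/usr/bin/env python3
"""Sketch of an automatic application restart.

Assumption (not from the original post): the application runs as a
systemd service named "myapp". The service name is a placeholder.
"""
import subprocess

SERVICE = "myapp"  # hypothetical service name


def is_active(service: str) -> bool:
    """Return True if systemd reports the service as active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0


def restart(service: str) -> None:
    """Ask systemd to restart the service."""
    subprocess.run(["systemctl", "restart", service], check=True)


if __name__ == "__main__":
    if not is_active(SERVICE):
        print(f"{SERVICE} is down; restarting")
        restart(SERVICE)
    else:
        print(f"{SERVICE} is running")
```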

 

Of course, not all alerts can be solved with an automated response. That said, there are many that can, and that translates to time saved.

 

Beyond Alerts

 

Going beyond simply automating alerts, think about the possibility of a self-healing data center, where scripts and actions are triggered automatically by monitoring software as issues happen.

 

There are tools available today that can absolutely provide this level of automation. Consider tools dedicated to change management and tracking, compliance auditing, and configuration backups. These types of tools will not only save administrators vast amounts of time and resources, but will also greatly reduce errors that are too often introduced through manual problem solving. These errors can lead to network downtime or even potential security breaches.

 

As a federal IT pro, your bottom line is the mission. The time you save through automation can help sharpen that focus. Enhancing that automation will allow additional time to focus on developing and deploying new and innovative applications, for instance, or ways to deliver those applications to users more effectively, so they can have the tools they need to do their jobs more efficiently.

 

When you automate, you make your life easier and your agency more agile, innovative, nimble, and secure.

 

Find the full article on our partner DLT’s blog TechnicallySpeaking.

Last year, we kicked off a new THWACK tradition: the December Word-a-Day Writing Challenge, which fostered a new kind of interaction in the THWACK community. By sharing personal essays, images, and thoughts, we started personal conversations and created human connections. In just one month, last year’s challenge generated nearly 20,000 views and over 1,500 comments. You can see the amazing writing and thoughtful responses here: Word-A-Day Challenge 2016

 

Much of this can be attributed to the amazing, engaging, thriving THWACK community itself. Whether the topic is the best starship captain, what IT lessons we can learn from blockbuster movies, or the best way to deal with nodes that have multiple IP addresses, we THWACKsters love to chat, debate, and most of all, help. But I also believe that some of last year's success can be attributed to the time of year. With December in sight, and some of us already working on budgets and project plans for the coming year, many of us find our thoughts taking an introspective turn. How did the last 12 months stack up to my expectations? What does the coming year hold? How can I best prepare to meet challenges head-on? By providing a simple prompt of a single, relatively innocuous word, the Word-a-Day Challenge gave many of us a much-needed blank canvas on which to paint our hopes, concerns, dreams, and experiences.

 

Which takes me to this year's challenge. Once again, each day will feature a single word. Once again, one brave volunteer will serve as the "lead" writer for the day, with his or her thoughts on the word of the day featured at the top of the post. Once again, you--the THWACK community--are invited to share your thoughts on the word of the day in the comments area below the lead post. And once again, we will be rewarding your contribution with sweet, sweet THWACK points.

 

What is different this year is that the word list has a decidedly more tech angle to it. Also, our lead writers represent a wider range of voices than last year, with contributors coming from SolarWinds product, marketing, and sales teams. The lead writers also include contributions from our MVP community, which gives you a chance to hear from some of our most experienced customer voices.

 

For those who are more fact-oriented, here are the challenge details:

  • The words will appear in the Word-a-Day challenge area, located here: Word-A-Day Challenge 2017
  • The challenge runs from December 1 to December 31
  • One word will be posted per day, at midnight, US CST (GMT -6)
  • The community has until the following midnight, US CST, to post a meaningful comment on that day's word
    • Comments will earn you 150 THWACK points
    • One comment per THWACK ID per day will be awarded points
    • Points will appear throughout the challenge, BUT NOT INSTANTLY. Chill.
    • A "Meaningful" comment doesn't necessarily mean "long-winded." We're simply looking for something more than a "Me, too!" or "Nice job!" sort of response
    • Words WILL post on the weekends, BUT...

    • ...For posts on Saturday and Sunday, the community will have until midnight CST on Monday (meaning the end of Monday, start of Tuesday) to share comments about those posts, for those folks who really REALLY don't work on the weekend (who ARE you people?!?)

  • Only comments posted in the comments area below the word for that day will be awarded THWACK points (which means comments will not count if posted on Geek Speak, product forums, your own blog, or on the Psychic Friends Network)

 

Once again, the Word-a-Day 2017 challenge area can be found here: Word-A-Day Challenge 2017. While nothing will appear in this new forum until December 1, I encourage everyone to follow the page now to receive notifications about new posts as they appear.

 

If you would like to get to know our 28 contributors before the challenge starts, you can find their THWACK pages here:

 

And finally, I present the Word-a-Day list for 2017! I hope that posting them here and now will give you a chance to gather your thoughts and prepare your ideas prior to the Challenge. That way you can fully participate in the conversations that will undoubtedly arise each day.

 

  • December 01 - Identity
  • December 02 - Access
  • December 03 - Insecure
  • December 04 - Imposter
  • December 05 - Code
  • December 06 - FUD (Fear, Uncertainty, and Doubt)
  • December 07 - Pattern
  • December 08 - Virtual
  • December 09 - Binary
  • December 10 - Footprint
  • December 11 - Loop
  • December 12 - Obfuscate
  • December 13 - Bootstrap
  • December 14 - Cookie
  • December 15 - Argument
  • December 16 - Backbone
  • December 17 - Character
  • December 18 - Fragment
  • December 19 - Gateway
  • December 20 - Inheritance
  • December 21 - Noise
  • December 22 - Object
  • December 23 - Parity
  • December 24 - Peripheral
  • December 25 - Platform
  • December 26 - Utility
  • December 27 - Initial
  • December 28 - Recovery
  • December 29 - Segment
  • December 30 - Density
  • December 31 - Postscript

In the last two posts, I talked about databases and why network and storage are so darn important for their well-being. These are two things that a lot of companies have struggled with, and in some ways seem to have mastered over the years in their own data centers. But now it seems that mastery may become obsolete as things like the cloud quickly become the new normal.

 

Let’s leave the automotive comparison for now and concentrate on how cloud strategy is affecting the way we use and support our databases.

 

There are a lot of ways to describe the cloud, but I think the best way for me to describe it is captured in the picture below:

As Master Yoda explains, cloud is not something magical. You cannot solve all of your problems by simply moving all your resources there. Moving your database to the cloud does not mean that your storage and network challenges will magically vanish. It is literally just another computer that, in many cases, you will share with other companies that have moved their resources to the cloud as well.

 

As we all know, knowledge is power. The cloud doesn’t change this. If we want to migrate database workloads to the cloud, we need to measure and monitor. We need to know the impact of moving things to the cloud. As with the Force, the cloud is always there ready to be used. It is the how and when that will impact the outcome the most. There are multiple ways to leverage the cloud as a resource to gain an advantage, but that doesn’t mean that moving to the cloud is the best answer for you. Offense can be the best defense, but defense can also be the best offense. Make sure to know thy enemy, so you won’t be surprised.

 

May the cloud be with you!

Home for a week before heading to Orlando and SQL Live. This is my third event in five weeks, and it will be four events in seven weeks once I get to AWS re:Invent. That's a bit more travel than usual, but being at events is the best way to communicate with customers. I love talking data and collecting feedback. Discussing common issues and ways to solve them is one of my favorite things. Yeah, I'm weird.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft says 40 percent of all VMs in Azure now are running Linux

Want to keep away the database administrators that should never be touching your servers? Run Linux. Without a GUI, they stay away. Problem solved.

 

Millions of Professional Drivers Will Be Replaced by Self-Driving Vehicles

An interesting factor that will help drive the development of autonomous vehicles. Oftentimes we hear about robots taking our jobs, but in this case, robots are going to take the jobs we don’t want anyway.

 

As Amazon’s Alexa Turns Three, It’s Evolving Faster Than Ever

I broke down and purchased an Echo Dot, after having used one while on vacation recently. Ask Alexa to “open a box of cats.” You’re welcome.

 

Three years in a row – Microsoft is a leader in the ODBMS Magic Quadrant

Just in case you didn’t know who made the best data platform on the planet.

 

Microsoft presenter downloads Chrome after Edge fails

Don’t judge. We've all been there.

 

There’s No Fire Alarm for Artificial General Intelligence

A bit long, but well worth the time. As I was reading this I was thinking about the fire alarm for those of us discussing things like autonomous databases and the future of systems and database administration.

 

Security and privacy, startups, and the Internet of Things: some thoughts

Also a bit long, but worth the time. Some interesting insights into the why and how we might have gotten ourselves into the data security and privacy mess we are in today. SPOILER ALERT: It’s money.

 

One good thing about visiting Seattle is that I get to stop by the grave of Microsoft Bob:


In my eBook, 10 Ways We Can Steal Your Data, I reveal ways that people can steal or destroy the data in your systems. In this blog post, I'm focusing on un-monitored and poorly monitored systems.

 

Third-party Vendors

 

The most notorious case of this type is the 2013 Target data theft incident in which 40 million credit and debit cards were stolen from Target's systems. This data breach is a case study on the role of monitoring and alerting. It led to fines and costs in the hundreds of millions of dollars for the retailer. Target had security systems in place, but the company wasn't monitoring the security of their third-party supplier. And, among other issues, Target did not respond to their monitoring reports.

 

The third-party vendor, an HVAC services provider, had a public-facing portal for logging in to monitor their systems. Access to this system was breached via an email phishing attack. This information, together with a detailed security case study and architecture published by another Target vendor, gave the attackers the information they needed to successfully install malware on Target Point-of-Sale (POS) servers and systems.

 

Target listed their vendors on their website. This list provided a funnel for attackers to find and exploit vendor systems. The attackers found the right vulnerability to exploit with one of the vendors, then leveraged the details from the other vendor to do their work.

 

Misconfigured, Unprotected, and Unsecured Resources

 

The attackers used vulnerabilities (backdoors, default credentials, and misconfigured domain controllers) to work their way through the systems. These are easy things to scan for and monitor. So much so that "script kiddies" can do this without even knowing how their scripts work. Why didn't IT know about these misconfigurations? Why were default credentials left in enterprise data center applications? Why was information about ports and other configurations published publicly? Any one of these issues alone might not have led to the same outcome, but as I'll cover below, together they formed the perfect storm of mismanaged resources that made the data breach possible.

People

 

When all this was happening, Target's offsite monitoring team was alerted that unexpected activities were happening on a large scale. They notified Target, but there was no response.

 

Some of the reasons given were that there were too many false positives, so security staff had grown slow to respond to all reports. Alert tuning would have helped this issue. Other issues included having too few and undertrained security staff.

 

Pulling it All Together

 

There were monitoring controls in place at Target, as well as security staff, third-party monitoring services, and up-to-date compliance auditing. But the system as a whole failed due to not having an integrated, system-wide approach to security and threat management.

 

 

How can we mitigate these types of events?

 

  • Don't use many, separate monitoring and alerting systems
  • Follow data flows through the whole system, not just one system at a time
  • Tune alerts so that humans respond
  • Test responders to see if the alerts are working
  • Read the SANS case study on this breach
  • Don't let DevOps performance get in the way of threat management
  • Monitor for misconfigured resources
  • Monitor for unpatched resources
  • Monitor for rogue software installs
  • Monitor for default credentials
  • Monitor for open ports (a minimal check is sketched after this list)
  • Educate staff on over-sharing about systems
  • Monitor the press for reports about technical resources
  • Perform regular pen testing
  • Treat security as a daily operational practice for everyone, not just an annual review
  • Think like a hacker
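
As a rough illustration of the "monitor for open ports" item above, here is a minimal sketch of a check you could run on a schedule. The hosts and ports below are placeholders (documentation-range addresses), not details from the Target case; real monitoring would pull its inventory and expected-port baseline from your tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch of an open-port check, illustrating the
"monitor for open ports" item in the list above.

The host and port lists are placeholders; a real deployment would
compare results against an approved baseline from monitoring tooling.
"""
import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]   # example/documentation addresses
PORTS_TO_CHECK = [22, 80, 443, 3389]   # ports worth keeping an eye on


def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host in HOSTS:
        open_ports = [p for p in PORTS_TO_CHECK if port_is_open(host, p)]
        if open_ports:
            print(f"{host}: open ports {open_ports} -- compare against baseline")
        else:
            print(f"{host}: none of the checked ports are open")
```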

 

I could just keep adding to this list.  Do you have items to add? List them below and I'll update.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

There is currently a global shortage of cybersecurity experts: not just a few thousand, or even tens of thousands, but a shortage of some one million experts.

 

This isn’t good news for agencies, particularly with the rising complexity of hybrid IT infrastructures of both in-house managed and cloud-based services. Traditional network monitoring tools and strategies fail to provide complete visibility into the entire network, making it difficult to pinpoint the root causes of problems, let alone anticipate those problems before they occur. This can open up security holes to outside attackers and insider threats.

 

Federal cybersecurity teams need complete views of their networks and applications, regardless of whether they are on-site or hosted. IT managers must also be able to easily and quickly troubleshoot, identify, and fix issues wherever they reside. Even better, they should be equipped with systems that can predict when a problem may occur based on historical data.

 

To help ensure the security of their networks, managers should explore options that offer three key benefits.

 

Better visibility. IT managers must manage and track multiple application stacks across their different environments. Therefore, they should consider solutions that track and monitor both on-premises and off-premises network activity.

 

These solutions must provide a single-pane-of-glass view into all network activities, and allow for review of data correlations across application stacks. Seeing different data types side by side can help you identify anomalies and track cybersecurity problems directly to the source. Timelines can be laid on top of this information to correlate the timing of an event to a specific slowdown or outage. This information can be used collectively to quickly remediate issues that could impact network security.

 

Better proactivity. Predictive analytics allows managers to create networks that effectively learn from past incidents and behaviors. Network monitoring tools can automatically scan for anomalies that have caused disruptions in the past. When something is detected, managers can receive notifications and directions on how to mitigate the problem before it happens.

 

In essence, IT managers go from reacting to network issues to proactively preventing them. This is a handy strategy that helps keep networks secure and running without demanding a lot of resources.
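
To make the idea concrete, here is a toy sketch of the baseline-and-anomaly approach described above. It is not any vendor's actual algorithm; it simply builds a mean and standard deviation from historical samples and flags new readings that drift too far from that baseline. The numbers are invented for illustration.

```python
#!/usr/bin/env python3
"""Toy sketch of a baseline-and-anomaly check.

Builds a mean/standard-deviation baseline from historical samples and
flags new readings that fall too far outside it. All numbers are made up.
"""
from statistics import mean, stdev

# Historical response times (ms) for an application, e.g. the last day.
historical = [110, 115, 108, 120, 112, 118, 109, 114, 116, 111]

# Newly collected samples to evaluate against the baseline.
incoming = [113, 119, 240, 117]

baseline_mean = mean(historical)
baseline_stdev = stdev(historical)
THRESHOLD = 3  # flag anything more than 3 standard deviations from the mean

for sample in incoming:
    deviation = abs(sample - baseline_mean) / baseline_stdev
    if deviation > THRESHOLD:
        print(f"ALERT: {sample} ms is {deviation:.1f} sigma from baseline")
    else:
        print(f"OK: {sample} ms is within the expected range")
```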

 

Better collaboration. One of the benefits of having a smaller staff is that the network management team can be more nimble, as long as they have the right collaboration tools in place. Individuals must be able to easily share data, charts, and metrics with the rest of the team. This sets up a baseline, helps prevent confusion, and helps bring the team together to tackle problems in a more efficient manner.

 

Collaboration becomes even more critical when working with hybrid IT environments. Everyone needs to be able to work off the same canvas to address potential security problems.

 

Better security and hybrid IT environments can coexist, but agencies need to make sure that the managers they have on staff are equipped with tools that bring these two vital concerns together in a cohesive, efficient, and effective manner.

 

Find the full article on GovLoop.

I've rarely seen an employee of a company purposefully put their organization at risk while doing their job. If that happens, the employee is generally not happy, which likely means they’re not really doing their job. However, I have seen employees apply non-approved solutions to daily work issues. Why? Several reasons, probably, but I don’t think any are intentionally used to put their company at risk. How do I know this?

 

My early days as an instructor

When I started out as a Cisco instructor, I worked for a now-defunct learning partner that used Exchange for email. The server was spotty, and you could only check email on the go by using their Microsoft VPN. I hated it because it didn’t fit any of my workflows and created unnecessary friction. In response to this, I registered a domain that looked similar to the company’s domain and set up Google apps, now called G-Suite, for the domain. That way I could forward my work emails to an address that I set up. No one noticed for several months.  I would reply to them from my G-Suite address and they just went with it. Eventually, most people were sending emails directly to my “side” email.

 

After becoming the CTO, I migrated the company off our rusty Exchange server and over to G-Suite, but I couldn’t help but think that I would have reamed someone if they had done what I did. In hindsight, it was not the smartest thing to do. But I wasn’t trying to cause any issues or leak confidential data; I was just trying to get my job done. Management needs to come to terms with the fact that if a system makes an employee's work life difficult, they will find another way. And it may not be the way you want.

 

Plugging the holes

Recently, I saw a commercial for FlexTAPE. It was amazing. In one part, you see a swimming pool with a huge hole in the side with water gushing out of it. A guy slaps a piece of FlexTAPE over the hole from the inside of the pool, and the water stops flowing. It reminded me of some IT organizations that metaphorically attempt to fix holes by applying FlexTAPE to them. But, by that point, so much water has escaped that the business has already been negatively impacted. Instead, companies should be looking for the slow leaks that can be repaired early on.

 

Going back to my first example, once people learned how I was handling my email, they started asking me to set up email addresses for them so they could do the same. First one colleague, then another. Eventually, several instructors had an “alternate” email address that they were using regularly. The size of that particular hole grew quite large.

 

At some point, management realized that they couldn’t backpedal on the issue and were forced to update certain protocols. I often wonder how much confidential information could have been leaked once I was no longer the only one using the new email domain. Fortunately, those who were using it didn’t have access to confidential information, but lots of content could have been exfiltrated. That would have been bad, but in my particular organization, I don’t know if anyone would have known.

 

Coming full circle

Today I own my own business and deal with several external clients. When I have employees, I try to be flexible because I understand the problem with friction. I also understand that friction may not be the only reason one turns to a non-approved solution to get their work done. For core business operations, organizations would do well to clearly define approved software packages. Should an employee use services like Dropbox, iCloud, Google Drive, or Box.com? If they do, are there controls in place? How does the solution impact their role? Do employees have a way to express their frustrations without fear of reprimand? Having an open line of communication with an employee can help them feel like their role is important. It also helps management really understand the issues they face. If you neglect that, employees will choose their own solutions to get work done, and potentially create security issues. And we don’t want that now, do we?


The Just Us League

Posted by kong.yang Employee Nov 3, 2017

The original Justice League consisted of seven superheroes: Superman, Aquaman, Flash, Green Lantern, Martian Manhunter, Batman, and Wonder Woman. In a parallel universe, there is the IT Justice League, a union of IT superheroes that aims to protect IT organizations and their users from the perils of application downtime, high latency, and IT disasters. As IT pros, we like to think we possess superpowers of our own. Sometimes those powers are used for good; sometimes for evil. Most of the time, we use them to keep our jobs.

 

To have a successful, long career in IT, you can’t operate in a silo. You simply can't accomplish organizational, global goals by walking alone. Plus, life's too short to walk the journey all alone. In other words, it's best to share your IT pains and gains. Moreover, most of us are neither blessed with innate talents like Superman and Wonder Woman, nor are we blessed with endless resources and capital like Bruce Wayne/Batman or Aquaman. Heck, even with the greatest willpower, we can't create something out of nothing like Green Lantern, or morph objects based on our desires like Martian Manhunter. In spite of this, we still stand and deliver when called upon, such as when trouble befalls our applications. Teamwork, collaboration, and community make integrating and delivering application services a much easier and more gratifying experience.

 

This is where the challenge of the Just Us League appears. The Just Us League is one where everyone knows your name and you just fit in because it’s always been that way. But what if you’re not a part of the original Just Us League? How do you join? What are the rules for joining and participating in the Just Us League? What practical tips do you use to open the Just Us League to include new-to-you people? How do you incorporate the trust-but-verify modus operandi to make sure you are nurturing vibrant and growing teams, collaboration, and community? Let me know in the comment section below.

While the previous two posts in this DevOps series have been open-ended and applicable to people on both the development and operations side, this post is focused on operations personnel. After all, if you're a developer and asking if you need to learn to code, then you might not be in the right job. I've recently had the chance to speak to several people in operations roles that are tackling what their organization is calling DevOps. Inevitably, the discussion turns to coding and whether or not operations staff need to pick up development skills. It depends, and in this post we'll look at the reasons both for and against.

 

Why You Shouldn't Learn Development as an Operator

It may be tempting to learn how to code so that you can be the sole source for your DevOps efforts. After all, who is closer to daily operations in your organization than you? And if you can cut out the developer, that means you'll get to your DevOps nirvana sooner, right? Not usually. Here's why:

 

It's About Turning Code Around Quickly

You might be able to pick up the skills that you need to get your program off the ground, but how long is it going to take? In addition to the learning curve to get everything programmed the way it needs to be, operations has its own goals for supporting and operating the environment, and often the demands of one will not match up with the demands of the other. If you are fortunate enough to have a development team, they can really speed up this process. In addition to already having a solid basis on the coding piece, they can continue developing for the environment when operations has a fire that requires attention. By having the teams work together, you can ensure that the developers know what you need while not taking significant time away from your day-to-day job.

 

It's About Good Development Practices

When it boils down to it, DevOps is a development practice. It's a way for development and operations to have continual communication during the development process to ensure that the final product is what operations is hoping for. In addition to the communication, on the back end there is a whole host of development practices that get involved: proper code documentation, testing, and validation, to name just a few. If you're picking up coding for your DevOps effort, it's highly likely that some of these practices may be forgotten in an effort to get yourself up to speed on the coding basics.

 

Value in Collaboration

Aside from the points mentioned in the previous two sections, there is value to having someone from outside your operations organization participate in the development process. You may know exactly what you want, but a healthy back and forth can help the development team come up with suggestions for handling an issue that you may not have previously considered. The development team may have a different, and sometimes better, perspective on how to approach and resolve a problem.

 

Why You Should Learn Development as an Operator

Whether you choose to learn how to code or not is largely going to depend on what the goal is. In this section, we'll go over some valid reasons for wanting to pick up this skillset.

 

There Are No Developers

In earlier posts in this series, several people have commented that their boss wants them to implement DevOps in their organization. The one problem? They have no developers. You could easily say, "That's not DevOps," and be right in this situation.  The fact of the matter is that the supervisor has said that they are doing DevOps, so in this case, you will need to pick up coding skills, even though you are still technically on the operations side.

 

Speak the Language

By learning how to code, you can pick up some non-coding-specific knowledge that can help out in your DevOps efforts. Understanding what it takes to write an application can give you more realistic expectations regarding timelines when you speak with the developers in your organization. Additionally, by understanding basic development concepts, you will be able to understand what things are possible (or more difficult) with code in your application. Finally, having a grasp of some of the basic terminology around coding may help you better communicate what you want from your applications to the developers who are writing them. Being able to understand basic coding concepts can expedite your DevOps processes by simplifying communications and setting achievable goals for the project.

 

No One Told Me There Would Be Learning

As the saying goes, if you're not moving forward, you're falling behind. One of the best reasons to learn to code as an operator is simple: expanding your skillset. By learning to code, your organization will benefit from you being able to speak the language with developers. You also increase your skillset and value, which could help in your career development.

 

Wrap-Up

Whether you decide to pick up coding or not is ultimately up to you. However, you may notice one key point from all of the reasoning above. Unless there are no developers in your organization, your goal for learning to code should not be to do the developers' job for them. Instead, if you choose to go that route, look at it as an opportunity to improve communications between teams, or to improve yourself.

I’m at PASS Summit this week in Seattle. If you are reading this and also at PASS Summit, stop by the SolarWinds booth and say hello.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft’s Sonar lets you check your website for performance and security issues

Microsoft continues to take steps to help users understand the importance of data security and privacy.

 

Amazon wants the keys to your home

Meanwhile, Amazon expects users to not understand the importance of security.

 

Microsoft finally kills off the Kinect, but the tech will live on in other devices

Proving that people are likely to get off the couch and move around just long enough to understand how much they prefer lying on the couch.

 

InfoSec Needs to Embrace New Tech Instead of Ridiculing It

A brilliant piece that tackles an issue that has frustrated me for a long time. I’m not an early-adopter of tech, but I don’t dismiss new things as quickly as they arise, and I don’t mock others for wanting to try new things.

 

Windows 10 tip: Turn on the new anti-ransomware features in the Fall Creators Update

I took the time to apply the update before taking my trip and was happy to come across this piece of information regarding an anti-ransomware feature in Windows 10. I enabled this and I think you should, too.

 

Does Apple Think We’re Clowns?

Yes. Next question.

 

Why Amazon and Microsoft Shouldn't Lose Sleep Over Oracle's New Cloud Database

Because (1) Oracle has only announced features, we still haven’t seen them; (2) Amazon and Microsoft actually have products with features and customers using them; and (3) they are a fraction of the cost that Oracle will charge. And yet, somehow, Larry will make money anyway.

 

How many minutes are in "several?" Asking for a friend.

 

Previously, I’ve spoken about project management and micro-services as being analogous to life management. To me, these have seemed to be really functional and so very helpful to my thought process regarding the goals I have set for myself in life. I hope that in some small way they’ve helped you to envision the things you want to change in your own life, and how you may be able to approach those changes as well.

 

In this, the third installment of the “My Life as IT Code” series, I’ll take those ideas and try to explain how I visualize and organize my approach. From a project manager’s perspective, the flowchart has always been an important tool. The first step is to outline a precedence diagram, such as the one I created for a past project. It was a replication and disaster recovery project for a large, multi-site law firm that had leveraged VMware-based virtual machines, a Citrix remote desktop, and storage-to-storage replication to maintain consistent uptime. I broke individual time streams into flowcharts, giving the individual stakeholders clear indications of their personal tasks. I organized them into milestones that related to the project as a whole. I delivered both as Gantt charts and flowcharts to show how the projects could be visualized, revealing time used per task, as well as the tasks that I broke discretely into their constituent parts.

 

This same technique is applicable to some of these life hacks. While it can be difficult to assign timeframes to weight loss, for example, the milestones themselves can be quite easy to demarcate. With some creative thinking, you can find viable metrics against which to mark progress, and effective tools for establishing reasonable milestones.

 

There are great tools to aid in exercise programs, which enforce daily or weekly targets, and these are well worth leveraging in one’s own plan. I have one on my phone called the seven-minute workout. I also note my daily steps, using the fitness tracker application on both my phone and my smartwatch. The data from these tools, along with the use of a scale, can show graphical progress along the path to your goals. Weight loss is never a simple downward slope, but rather a decline that tends toward plateaus followed by restarts. However, as your progress moves forward, a graphical representation of your weight loss encourages more work along those lines. For me, the best way to track progress is by using a spreadsheet, graphing on a simple x/y axis, which provides an effective visualization of progress. I do not suggest paying rigid attention to the scale, as these plateaus can be emotionally discouraging and distort how one sees one’s progress.

 

I’ve never been a runner, but the ability to define distance plans, times for those distances, and delineations of progress along those paths is easily translatable to a flowchart. It’s important to know what you’re doing. Rather than saying things like, “I’m going to lose 40 lbs in 2018,” a more effective strategy is to focus on specifics, such as, "I plan on walking 10,000 steps per day, or five miles per day." It is easier and more impactful to adhere to smaller, more strategic commitments.

 

Meanwhile, creating and paying attention to the flowchart is a hugely effective way to keep a person on track. It helps them pay attention to their goals, and gives them visibility into their progress. Once you have that, you can celebrate when you reach the milestones you have set for yourself.

 

As I’ve stated, the flowchart can be an amazing tool. And when you look at life goals with the eyes of a project manager, you give yourself defined timeframes and goals. The ability to visualize your goals, milestones, and intent can really assist in keeping them top of mind.

 

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Can you afford for your team to lose eight hours a day? According to the 2016 State of Data Center Architecture and Monitoring and Management report by ActualTech Media in partnership with my company, SolarWinds, that’s precisely what is happening to today’s IT administrators when they try to identify the root cause of virtualization and virtual machine (VM) performance problems. This is valuable time that could otherwise be spent developing applications and innovative solutions to help warfighters and agency employees achieve mission-critical objectives.

 

Virtualization has quickly become a foundational element of federal IT, and while it offers many benefits, it’s also a major contributor to increasing complexity. Adding hypervisors increases a network’s intricacies and makes it more difficult to manually discover the cause of a fault. There’s more to sift through and more opportunities for error.

 

Finding that error can be time-consuming if there are not automated virtualization management tools in place to help administrators track down the source. Automated solutions can provide actionable intelligence that can help federal IT administrators address virtual machine performance issues more quickly and proactively. They can save time and productivity by identifying virtualization issues in minutes, helping to ensure that networks remain operational.

 

Ironically, the key to saving time and improving productivity now and in the future involves travelling back in time through predictive analysis. This is the ability to identify and correlate current performance issues based on known issues that may have occurred in the past. Through predictive analysis, IT managers can access and analyze historical data and usage trends to respond to active issues.

 

Further, analysis of past usage trends and patterns helps IT administrators reclaim and allocate resources accordingly to respond to the demands their networks may be currently experiencing. They’ll be able to identify zombie, idle, or stale virtual machines that may be unnecessarily consuming valuable resources, and eradicate under- or over-allocated and orphaned files that may be causing application performance issues.

 

Predictive analytics can effectively be used to prevent future issues resulting from virtualization sprawl. By analyzing historical data and trends, administrators will be able to optimize their IT environments more effectively to better handle future workloads. They can run “what if” modeling scenarios using historical data to predict CPU, memory, network, and storage needs.
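
As a toy illustration of the "what if" projection described above, the sketch below fits a simple linear trend to invented CPU utilization history and asks when it would cross 80%. Real virtualization management tooling uses far richer models and real telemetry; this is only to make the idea concrete.

```python
#!/usr/bin/env python3
"""Toy "what if" capacity projection.

Fits a least-squares linear trend to historical CPU utilization and
extrapolates it forward. The data points are invented for illustration.
"""
# Monthly average CPU utilization (%) for a host, oldest to newest.
history = [42.0, 45.5, 47.0, 50.5, 53.0, 55.5]

n = len(history)
xs = list(range(n))

# Least-squares slope and intercept, computed by hand to stay dependency-free.
x_mean = sum(xs) / n
y_mean = sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# "What if" question: when does the trend cross 80% utilization?
for month_ahead in range(1, 25):
    projected = intercept + slope * (n - 1 + month_ahead)
    if projected >= 80.0:
        print(f"Projected to reach 80% CPU in about {month_ahead} months "
              f"({projected:.1f}%)")
        break
else:
    print("Trend does not reach 80% within two years")
```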

 

Predictive analytics is not just a “nice to have,” it’s something that is becoming increasingly in-demand among IT professionals. In fact, 86 percent of respondents to the State of Data Center Architecture and Monitoring and Management report identified predictive analytics as a “critical need.”

 

We often speak of virtualization management as the future of IT, but that’s only partially true. True virtualization management involves a combination of the past, present, and future. This combination gives federal IT managers the ability to better control their increasingly complex networks of virtual machines both today and tomorrow, by getting a glimpse into how those networks have performed in the past.

 

Find the full article on our partner DLT’s blog TechnicallySpeaking.
