
Geek Speak


As predicted, each matchup in the elite 8 had me on the edge of my seat! The vote was nearly evenly split in each battle, making this one of the closest races to the final 4 we’ve ever seen.

 

Here’s a look at who’s moving on from this round:

 

  • Cryptids round 3: Thunderbird vs Loch Ness Monster - Easily one of the most surprising outcomes of this round, Nessie shocks everyone and swims away with this one! Thunderbird had a lot of support in the comment section because of its electric abilities, but ebradford had a solid argument for why Nessie should win: “...Of course, this contest isn't really fair since a Thunderbird is fictional, and Nessie is real.”
  • Half & Halfs round 3: Griffin vs Minotaur - This was stacking up to be a close race, but in the end it was Griffin’s fly skillz that tipped the scale in its favor. rschroeder explains: “…Traditionally, the Minotaur is always defeated in stories.  Not so the Griffin, which attains a nobility and seems to be a higher, more enlightened entity than a Minotaur. Will the body of a lion, with its powerful legs and long, sharp claws, combined with the strong feet and talons and beak of an eagle, be weapons superior to the bovine horns and human arms and legs of the Minotaur? I think so. This one should go to the Griffin.”
  • Gruesomes round 3: Medusa vs Kraken - Without question the biggest rivalry of the whole bracket battle, the Kraken had a lot to prove this round given its history with Medusa. The comment section showed a lot of support for Medusa, as she had easily won in the Clash of the Titans—not once, but twice! For the first time in this bracket battle, the underdog came away with the W!!!
  • Fairy tales round 3: Dragon vs Phoenix - This battle was on fire! Though the Phoenix possesses the ability to rise from the ashes every time the Dragon unleashes another attack, it wasn’t enough to secure the win and move on to the next round. I’m sure a lot of brackets were busted over this one!

 

Were you surprised by any of the winners this round? Comment below!

 

It’s time to check out the updated bracket & start voting in the ‘Mythical’ round! We need your help & input as we get one step closer to crowning the ultimate legend!

 

Access the bracket and make your picks HERE>>

 

I can’t wait to see who the community picks to face off in the final round!

Success. It marks the subtle difference between being productive and being busy. WordStream and MobileMonkey founder Larry Kim eloquently wrote about the 11 differences between busy people and productive people in a recent Inc. article. It is a great read that offers an interesting take on productivity. For instance, one of the eleven differences Larry calls out is that productive people have a mission for their lives, while busy people simply look like they have a mission. The key is to correctly identify your purpose and the corresponding work that will fulfill your life's mission. There is no template for your mission, because only you can define those core principles. Otherwise, it's someone else's mission, and you have less understanding of what "good" should look like, so you will be less efficient and effective in your work.

 

So how do the productive versus busy insights play out in IT environments? Let's take the example of automation, one of the DART-SOAR skills. Many pundits believe that automation's objective is to save time--to do more stuff. This is what it means to be busy. In actuality, automation's true aim is not to save time, but rather to improve consistency of delivery, reliability of delivered services, and a normalized experience at scale. This exemplifies what it means to translate automation into productivity and deliver meaningful value.

 

Translating your skills, experience, and expertise into business value is how you make your career as a professional. Without business value, you won't have value.

 

Let me know what you think below in the comment section.

Logic and objective thinking are hallmarks of any engineering field. IT design and troubleshooting are no exceptions. Computers and networks are systems of logic, so we, as humans, have to think in such terms to effectively design and manage these systems. The problem is that the human brain isn’t exactly the most efficient logical processing engine out there. Our logic is often skewed by what are called cognitive biases. These biases take many potential forms, but ultimately they skew our interpretation of information in one way or another. This leaves us believing we are approaching a problem logically when, in fact, we are operating on a distorted sense of reality.

 

What am I talking about? Below are some common examples of cognitive biases that I see all the time as a consultant in enterprise environments. This is by no means a comprehensive list. If you want to dig in further, Wikipedia has a great landing page with brief descriptions and links to more comprehensive entries on each.

 

Anchoring: Anchoring is when we value the information we learn first as the most important, with subsequent learned information having less weight or value. This is common in troubleshooting, where we often see a subset of symptoms before understanding the whole problem. Unless you can evaluate the value of your initial information against subsequent evidence, you’re likely to spin your wheels when trying to figure out why something is not working as intended.

 

Backfire effect: The backfire effect is what happens when someone further invests in an original idea or hypothesis, even when new evidence disproves the initial belief. Some might call this pride, but ultimately no one wants to be wrong, even when being wrong is justifiable because all the evidence wasn’t available when the original opinion was formed. I’ve seen this clearly demonstrated in organizations that have a blame-first culture. Nobody wants to be left holding the bag, so there is more incentive to be right than to solve the problem.

 

Outcome bias: This bias is our predisposition to judge a decision based on its outcome, rather than on how logical the decision was at the time it was made. I see this regularly from insecure managers who are looking for reasons why things went wrong. It plays a big part in blame culture. It can also lead to decision paralysis when we are judged by outcomes we can’t control, rather than by a methodical way of working through an unknown root cause.

 

Confirmation bias: With confirmation bias, we search for, and ultimately give more weight to, evidence that supports our original hypothesis or belief about the way things should be. This is incredibly common in all areas of life, including IT decision making. It reflects more on our emotional need to be right than any intentional negative trait.

 

Reactive devaluation: This bias is when someone devalues or dismisses an opinion not on its merit, but on the fact that it came from an adversary or someone they don’t like. I’m sure you’ve seen this one, too. It’s hard to admit when someone you don’t respect is right, but by not doing so, you may be dismissing relevant information in your decision-making process.

 

Triviality/Bike shedding: This occurs when extraordinary attention is applied to an insignificant detail to avoid having to deal with the larger, more complex, or more challenging issue. By deeply engaging in a triviality, we feel like we provide real value to the conversation. The reality is that we expend cycles of energy on things that ultimately don’t need that level of detail applied.

 

Normalcy bias: This is a refusal to plan for or acknowledge the possibility of outcomes that haven’t happened before. This is common when thinking about DR/BC, because we often can’t imagine or process things that have never occurred before. Our brains immediately work to fill in gaps based on our past experiences, leaving us blind to potential outcomes.

 

I point out the above examples just to demonstrate some of the many cognitive biases that exist in our collective way of processing information. I’m confident that you’ve seen many of them demonstrated yourself, but ultimately, they continue to persist because of the most challenging bias of them all:

 

Bias blind spot: This is our tendency to see others as more biased than ourselves, and to fail to identify cognitive biases in our own actions and decision making. It’s the main reason many of these biases persist even after we learn about them. Biases are often easy to identify when others demonstrate them, but we rarely notice when our own thinking is being distorted by a bias like those above. The only way to identify our own biases is through an honest and self-reflective post-mortem of our decision making, looking specifically for areas where bias skewed our view of reality.

 

Final Thoughts

 

Even in a world dominated by objectivity and logical thinking, cognitive biases can be found everywhere. It’s just one of the oddities of the human condition. And bias affects everyone, regardless of intent. If you’ve read the list above and have identified a bias that you’ve fallen for, there’s nothing to be ashamed of. The best minds in the world have the same flaws. The only way to overcome these biases is to inform yourself of them, identify which ones you typically fall prey to, and actively work against those biases when trying to approach a subject objectively. It’s not the mere presence of bias that is a problem. Rather, it’s the lack of awareness of bias that leads people to incorrect decision making.

The sweet 16 was bitter for some. It looks like we aren’t going to see any Cinderella stories in this year’s bracket battle.  The last of the little guys literally had to face off with a Dragon, and all I can say is don’t play with fire if you don’t want to get burned #amirite.

 

It’s going to be an uphill battle for our elite 8, no easy matchups here!

 

Let’s take a look at who came out on top in round 2:

 

Nail-biters:

  • Cryptids round 2: Thunderbird vs Yeti - Despite all the cheering coming from the comment section, this was not an easy win for the Thunderbird. tinmann0715 thinks Thunderbird has what it takes to.go.all.the.way! “My prediction is ringing true for the underdog Thunderbird. It is becoming my "Dark Horse" favorite for the rest of the tourney.”
  • Cryptids round 2: Chupacabra vs Loch Ness Monster - Nessie is still in it after a tough match with Chupacabra. It looks like it came down to a battle of “which one is scarier.” rschroeder said: “Nessie vs. Chupacabra.  What's scarier--a fish-eating dinosaur or a man/lizard that interacts harmlessly with goats?  OK, a man/lizard probably will generate more nightmares. Which would defeat the other in battle?  I'm pretty certain Nessie could out-chomp and crush Chupie.  Loch Ness for the win.”
  • Half & Halfs round 2: Griffin vs Pegasus - This one was almost too close to call! Judging by your comments, no one was certain who to root for since these two were so evenly matched.
  • Half & Halfs round 2: Minotaur vs Centaur - Again, another half & halfs match that could have gone either way. asheppard970 said it better than I could, “I think this is coming down to a "brains vs. brawn" battle, and brawn appears to be winning.”
  • Gruesomes round 2: Medusa vs Werewolf - The #GRLPWR was strong with this one. Medusa and her stony stare advance to the next round!

 

Shutouts:

  • Gruesomes round 2: Vampire vs Kraken - Based on the results of this match, the Vampire needs to hire a new PR agent that doesn’t suck. cdow2011: “The kraken and Perseus...still a better love story than Twilight.”
  • Fairy tales round 2: Leprechaun vs Dragon - A true David and Goliath match, but our pint-sized friend from the Emerald Isle was no match for the brute strength and power of the Dragon.  caleyjay7: “Dragon: Teeth, Claws, tail-whip, potentially fire breathing... Leprechaun: general tomfoolery and lucky charms...I know who my money's on.”
  • Fairy tales round 2: Banshee vs Phoenix - There was some serious debate about how outside influence from pop culture affected the results of this match. The Phoenix manages to fly away with this one in the face of conflict.

 

Were you surprised by any of the shutouts or nail-biters for this round? Comment below!

 

It’s time to check out the updated bracket & start voting in the ‘Fantastical’ round! We need your help & input as we get one step closer to crowning the ultimate legend!

 

Access the bracket and make your picks HERE>>

It's the last week of March, which means the year is about 25% complete. Time to check in on your New Year's goals and see how you have progressed. There is still time to follow through on those promises you made to yourself.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

12 Things Everyone Should Understand About Tech

Understanding these twelve things is crucial if we want to work together to make tech better for everyone.

 

Expedia's Orbitz Suspects 880,000 Payment Cards Stolen

“Orbitz says the breached system is not part of its current website.” In other words, they weren’t hacked through the website; they let their data get stolen because they lacked proper internal security measures. But here’s the #hardtruth: they are not unique. Many companies fail in this area; they just don’t know it yet.

 

AVA: The Art and Science of Image Discovery at Netflix

Ever wonder how Netflix decides what images to use? Meet AVA, the brains behind the machine.

 

Ex-Googler Wants to Upend Pigs and Hotels With the Blockchain

Finally, a practical use case for Blockchain! I can’t tell you how many times I’ve had issues getting quality bacon delivered to my hotel room.

 

Silicon Valley Has Failed to Protect Our Data. Here’s How to Fix It

I love this idea except for one detail: I don’t want the government to have any part in this effort. They move too slowly, and are often at the behest of lobbyists. Seems like something Bill Gates could put a billion dollars behind and create something more useful than anything Congress would do.

 

Facebook denies it collects call and SMS data from phones without permission

I want to believe Facebook here, but, well, they haven’t exactly demonstrated that they can be trusted with our data. It’s quite possible that such data was collected, but not in an official capacity. So they can deny they are doing it, which is not the same as saying it never happened.

 

Ford vending machine begins dispensing cars in China

I love this idea, but I’d love it more if it were full of Jeeps.

 

Nothing makes a meeting more fun than showing up wearing a Luchador mask:

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

As more agencies adopt the cloud, here's an applicable article from my colleague Joe Kim, in which he suggests achieving a balance between security and efficiency.

 

Federal agencies are using the cloud more than ever before, but they’re also not ready to abandon the safety and security of their on-premises infrastructures. That’s the message sent by survey respondents who participated in SolarWinds’ 2017 IT Trends Report.

 

The annual federal IT pulse check—which is based on feedback from public sector IT practitioners, managers, and directors—indicated a marked increase in cloud adoption over the past year. Ninety-six percent of survey respondents stated that they have migrated critical applications and IT infrastructure to the cloud over the past year. The migration was driven by the potential of increased return on investment, cost efficiency, availability and reliability. Fifty-eight percent of survey respondents believe they have received most, if not all, of the benefits they expected from their cloud migrations.

 

But no one ever said this cloud thing was going to be easy or clear cut. A substantial number of respondents—29 percent—stated they have actually brought applications and infrastructure back on-premises after having initially moved them to the cloud. Their reasons included concerns over security and compliance (45 percent of respondents), poor performance (14 percent), and technical challenges with their migrations (14 percent).

 

As a result, hybrid IT infrastructures are thriving. Many agencies continue to be uncomfortable with migrating all of their infrastructure or applications to the cloud. The facets of their environments that are security-sensitive, for example, are for the most part remaining on-premises.

 

There’s no indication that these agencies will be embracing an all-cloud IT infrastructure anytime soon. According to the survey, a large percentage of organizations (37 percent) report hosting one to nine percent of their infrastructures entirely in the cloud, while just one percent of respondents said all of their infrastructures are hosted in the cloud.

 

Some other interesting points of note:

 

  • 40 percent of respondents said their organizations spend 70 percent or more of their annual IT budgets on on-premises (traditional) applications and infrastructure
  • 62 percent indicated that the existence of the cloud and hybrid IT have had at least somewhat of an impact on their careers, while 11 percent said they have had a tremendous impact
  • 65 percent said their organizations use up to three cloud provider environments

 

All of these findings point to some clear recommendations. Managers must implement pervasive monitoring strategies that provide complete visibility into their entire network and all applications, both on-premises and off. Security, compliance, and performance should be just as important as cost efficiencies when considering cloud migration. IT professionals must continue to hone their cloud skills and be open and agile in adopting best-of-breed cloud and hybrid IT elements. And agencies should elect to work with trusted cloud vendors that are willing to provide federal IT professionals with control and visibility over their hosted workloads.

 

This is just a snapshot of where things stand in 2017. The full report contains a more complete picture of a federal IT world that continues to move to the cloud, but isn’t quite ready to fully commit.

 

Find the full article on Nextgov.

Anyone else already #BracketBusted?

If you came to root for the underdogs in this year’s bracket battle, you are going to be sorely disappointed.

Across the board the titans of the bracket stomped out the little guys.

 

Play-In Round: Cerberus vs. Anansi

Dog v spider, sounds like the title of a viral YouTube video, no? In the end, the arachnid didn’t stand a chance against the hound of Hades.

 

Let’s take a look at how our legends fared in round 1, shall we?

 

Shutouts:

 

Nail-biters:

  • Cryptids Round 1: Yeti vs. Bigfoot - This one really could have gone either way, as these two opponents were the most evenly matched pair of round 1. The abominable snowman leaves our forested friend with frostbite and rolls into round 2.
  • Half & Halfs Round 1: Hippogriff vs. Pegasus - This half & halfs match-up was nearly split 50/50! It came down to a photo finish in favor of Pegasus!
  • Half & Halfs Round 1: Manticore vs. Minotaur - rschroeder's commentary explains where it all went downhill for the Manticore: “Manticore is the bigger threat.  But the Minotaur has more terror and less horror, given it's half man / half male bovine, lives in a maze that you'd never find your way out of before it got you, and the depictions I've seen have all imagined the Minotaur's Labyrinth to be all in the dark.  There's nothing quite like knowing there's a scary thing in the dark hunting you to increase your terror . . .”
  • Fairy Tales Round 1: Troll vs. Banshee - I really can’t argue with zennifer’s statement on this one: “Yeah.. you need to be afraid of anything with a SHE in its name!” Though this was a close call, the Banshee earned her victory shrieks this round.

 

Were you surprised by any of the shutouts or nail-biters for this round? Comment below!

 

It’s time to check out the updated bracket & start voting in the ‘Unbelievable’ round! We need your help & input as we get one step closer to crowning the ultimate legend!

 

Access the bracket and make your picks HERE>>

One of the biggest complaints you'll likely hear after moving to Office 365 will be about its speed, or lack thereof. Now that data isn't sitting on your LAN, there is lots of room for latency to hit your connection. There’s no doubt that your users will alert you to the problem in short order. So what can you do about it?

 

IT'S ALL ABOUT SPEED …

 

If speed is the primary concern, one of the first things you should do is get a baseline. If someone is complaining that performing a task is slow, how long is it taking? Minutes? Seconds? When it comes to making improvements, you need a way to ensure that changes are having a positive impact. In the case of Skype for Business, Microsoft actually has a tool to help assess your network.
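To make "get a baseline" concrete, here's a minimal Python sketch (the URL is just a placeholder, not an official test endpoint) that times repeated requests to a service and summarizes the results, so you have numbers to compare before and after a change:

```python
# Minimal baseline sketch: time repeated HTTPS requests to a service endpoint
# and record summary numbers you can compare against after a change.
# The URL is a placeholder -- point it at whatever your users say is slow.
import statistics
import time
import urllib.request

URL = "https://outlook.office365.com"  # placeholder endpoint
SAMPLES = 20

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=10).read(1024)
    except Exception as exc:
        print(f"request failed: {exc}")
        continue
    timings.append(time.perf_counter() - start)
    time.sleep(1)  # space the samples out a little

if timings:
    print(f"samples: {len(timings)}")
    print(f"median : {statistics.median(timings):.3f}s")
    print(f"slowest: {max(timings):.3f}s")
```

Even a crude measurement like this, run from a couple of different sites, gives you something objective to hold a complaint of "it's slow" against.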

 

Along with speed, you'll want to be able to figure out where the problem lies. Now that large amounts of your data are in the cloud, you'll have a lot more WAN traffic. Be sure to check your perimeter devices. With the increased volumes, these could easily be your bottlenecks. If the congestion lies past your perimeter, you can take a look at Azure ExpressRoute. Using this, you can create a private connection to Azure data centers, for a price.

 

…EXCEPT WHEN IT’S NOT ABOUT SPEED

 

Although speed will likely be one of the first and loudest complaints, you'll also want to monitor availability. Microsoft offers service dashboards when you log into the portal, but you should also consider third-party monitoring solutions. Some of these solutions can regularly check SMTP to make sure it is accepting mail, or routinely make sure that your DNS is properly configured.
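As a rough sketch of what one of those routine checks might look like, the Python below confirms that MX records for a domain still resolve and that the preferred mail host answers an EHLO. It assumes the third-party dnspython package is installed, and example.com stands in for your own mail domain:

```python
# Minimal availability-check sketch: confirm MX records still resolve for a
# domain and that the preferred mail host answers on port 25.
# Assumes the third-party "dnspython" package (pip install dnspython);
# example.com is a stand-in for your own mail domain.
import smtplib

import dns.resolver

DOMAIN = "example.com"

# DNS check: the MX records we expect should still be published.
mx_records = sorted(dns.resolver.resolve(DOMAIN, "MX"), key=lambda r: r.preference)
print("MX records:", [str(r.exchange) for r in mx_records])

# SMTP check: the preferred mail host should greet us and accept EHLO.
mail_host = str(mx_records[0].exchange).rstrip(".")
with smtplib.SMTP(mail_host, 25, timeout=15) as smtp:
    code, _banner = smtp.ehlo()
    print(f"{mail_host} answered EHLO with code {code}")
```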

 

Routine checks like these can help keep the environment healthy. The benefit of going with a service for these sorts of checks is that they can alert you fairly quickly. Also, you won't need to remember to actually do it yourself. Be sure to know what your SLA terms are as well—depending on what sort of downtime you are seeing, you may qualify for credits.

 

DON’T FORGET ABOUT SECURITY

 

Office 365 is a ripe target for hackers, plain and simple. Phishing attempts are the perfect attack vector because users might be used to logging in with their credentials on a regular basis. The point I’m getting at is that you’ll want to make sure you consider security when putting together a monitoring plan.

 

Office 365 has a Security and Compliance Center, which is a great place to start securing your environment. You can define known IP ranges, or audit user mailbox logins and the IP addresses they came from. Once again, there are plenty of third-party services that can yield additional reporting that isn't available "out of the box" (or should that be "out of the cloud"?).
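For illustration, here's a minimal Python sketch of that kind of login audit: scan an exported sign-in log for source IPs that fall outside your known ranges. The file name, column names, and IP ranges are placeholders, not actual Security and Compliance Center output formats:

```python
# Minimal login-audit sketch: scan an exported sign-in log (CSV with user and
# source-IP columns) for logins from outside known IP ranges.
# The file name, column names, and ranges are placeholders.
import csv
import ipaddress

KNOWN_RANGES = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def is_known(ip: str) -> bool:
    address = ipaddress.ip_address(ip)
    return any(address in network for network in KNOWN_RANGES)

with open("signin_log.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        if not is_known(row["ClientIP"]):
            print(f"unexpected login: {row['UserId']} from {row['ClientIP']}")
```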

 

WHO SHOULD KNOW WHAT?

 

In smaller environments, a lot of folks wear multiple hats. Reporting tools can quickly get folks the information they need. In larger environments, there are usually multiple teams involved. Similar to a point made in an earlier post, knowing who should be aware of problems is key. This also applies to users. If your monitoring tells you that a large portion of your users' mailboxes are offline, what's your plan to alert them?

 

Being able to monitor your environment's health is one thing, but taking action is another. This doesn't just apply to Office 365. Hopefully, these past few posts and the fantastic comments from the community have helped with planning out a smooth migration. But don’t forget to also plan for disaster.

It’s March Madness time here in the USA. I love this time of year. Not just because I have a former life as a player and coach, but because here in the northeast it is that time when winter finally gives way to spring.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Project Nimble: Region Evacuation Reimagined

This one provides insight into how Netflix was able to cut their failover time from 50 minutes down to 8. Next time someone tells me that they have too much data to failover quickly, I’m just going to point them to this post.

 

Microsoft’s Adding new Data Centers in Europe and the Middle East

Slowly adding more capacity and offering local data storage in regions with strict data residency laws. Microsoft is buying entire data centers from companies and converting them to Azure in the biggest "lift and shift" imaginable.

 

The Reason Software Remains Insecure

Short and direct to the point. This is why we can’t have nice things.

 

Facebook confirms Cambridge Analytica stole its data; it’s a plot, claims former director

This story is going to get ugly, and fast. It’s also a nice reminder that the most valuable asset anyone can have is not money, it’s data. Facebook owns a lot of data, offered up for free from users. The same users that annoy you with Farmville invites.

 

To find suspects, police quietly turn to Google

You know who else has a lot of data? Google, that’s who. Your phone will track you even when you think it is not, and the police can get data from Google to find out who was near the scene of a crime. This isn’t Minority Report stuff yet, but it is getting us closer.

 

Your Data is Being Manipulated

And since I’m clearly on a data-centric theme this week, let me share with you this article and the following quote: “In short, I think we need to reconsider what security looks like in a data-driven world.” Yes, yes we do.

 

Google plans to boost Amazon competitors in search shopping ads

Competition is good, right? The best part about this is the fact that Google is being open about what they are doing, which is different than what they have done in the past.

 

My high school made it to their first-ever state championship basketball game this past Saturday. There was no question I would attend wearing my old varsity jacket:

So, I wanted to at least touch base with everyone on the “scandal” of the week. Is it fake news? A new way to gouge stock prices? A new kind of ransom scheme? Corporate espionage?

 

I waited until at least some of the dust had settled to write this post. I wanted to be able to make accurate judgment calls and present a level-headed offering of thoughts and ideas. Here they are:

 

  1. Yes, there are security flaws (over a dozen) within these processors.
  2. No, at this time they are not mission critical, because exploiting them requires physical access AND administrator/root credentials.
  3. The lab that disclosed these security flaws had stock positions associated with their findings.
  4. They only gave AMD 24 hours to resolve the issue before they sent the findings out.

 

People are still discussing the processor story, so consider this an up-to-date discussion. Let it also be a friendly reminder that we have to check the general “sky is falling” mentality, especially in security. Key takeaway? Focus on best practices.

 

 

We should strive to do our due diligence on the risk, determine appropriate measures to respond, and demonstrate the balance between risk and business as usual.

 

Since I believe you can benefit from them, here are my top three security practices:

 

Infrastructure monitoring

Determining baselines brings incredible value to any organization, department, and technology as a whole. The importance and power of baselines sometimes get overlooked, and that saddens me. It is all too common for folks to wait until after they experience an incident to set up monitoring. That is simply a reaction, not a proactive approach.
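To show what a baseline buys you, here's a minimal Python sketch with made-up numbers: it derives a baseline from historical samples and flags new readings that land well outside it.

```python
# Minimal baseline sketch: derive a baseline (mean and standard deviation) from
# historical samples and flag new readings that fall well outside it.
# The numbers are made up; in practice they would come from whatever platform
# you already use to collect metrics.
import statistics

historical = [41, 39, 44, 40, 43, 42, 38, 45, 41, 40]  # e.g., last week's CPU %
current = [42, 44, 71, 40, 68]                          # today's readings

baseline_mean = statistics.mean(historical)
baseline_stdev = statistics.stdev(historical)
threshold = baseline_mean + 3 * baseline_stdev

for reading in current:
    status = "OUTSIDE baseline" if reading > threshold else "within baseline"
    print(f"{reading:>5.1f}  {status} (threshold {threshold:.1f})")
```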

 

Once you begin monitoring, you can start comparing solutions to risk. This is how you can test solutions to risks and vulnerabilities before you go full on “PLAID” mode (Spaceballs reference. #sorrynotsorry), only to find that you have created a larger issue than the risk itself. Comparative reporting is an excellent way to prove that you have done your due diligence in understanding the impact of the threat and the solution as a whole.

 

Threat management policies

You should determine a policy that addresses ways to deal with threats, vulnerabilities, and concerns immediately and openly.  It should live where everyone can access it, and be clearly outlined so everyone knows what is happening even before you have the solution. This helps to stop or at least slow down management fire alarms, universally expressed as, “What are we going to do NOW?”

 

The policy should include a timeline of events that everyone can understand. For example, let everyone know that there will be an email update outlining next steps within 48 hours of the incident.  In other words, you are telling everyone, “Hey, I’m working on the issue and I’ll make sure I update you. In the meantime, I’m doing my due diligence to make sure the outcome is beneficial for our company.”

 

Asset Management

You can't quickly assess your infrastructure if you are not aware of everything you manage, period.

 

There is power in knowing what you are managing across many realms, but my first go-to is asset reports. I need to know quickly what could—and, more importantly, what could not—be associated with any new threats, concerns, or vulnerabilities.

 

The types of tools that allow me to monitor and update my assets give me much needed insight into where my focus should be, which is why I go there first. Doing so ensures that I won’t be distracted or overwhelmed by data points that aren’t relevant.

 

Finally, tracking and understanding any type of threat should be proactive and fully vetted. We should want to understand the issues before we blindly implement Band-Aids that can potentially hinder our business goals.

 

Using information to better the security within our organizations also brings us into the fabric of the business, assisting efforts to keep business costs low.

    

I hope you join this conversation, because there are several touch points here. I’m very curious to hear your thoughts, comments, and opinions. For example, did you believe, when the flaws were released, that they were a form of ransom? Do you see other opportunities to manhandle a company’s earnings by highlighting exploits for others’ gain?  Or maybe you just sit back, watch the news with a scotch in your hand, and laugh.

 

Let's talk this over, shall we?

 

~Dez~

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

With the influx of natural disasters, hacks, and increasingly common ransomware, being able to recover from a disaster is quickly moving up the priority list for IT departments across the globe. In addition to this heightened awareness, our data centers are moving from very static deployments to ever-changing environments. Each day we see more and more applications getting deployed, either on-premises or in the cloud, and each day we, as IT professionals, must do our due diligence to ensure that when disaster strikes we can recover these applications. Without the proper procedures in place to consistently update our DR plans, no matter how well-crafted or detailed those plans are, confidence in completing successful failovers decreases. So what now?

 

We’ve already discussed the first step in our DR process: creating our plan. We’ve also touched on the second step, which is to make it a living document to accommodate for data center change. But there is one more step we need to put in place for a successful failover, and that's testing. It boosts the confidence in the IT department and the organization as a whole.

 

Testing our DR plan - We learn by doing!

 

When thinking of DR plan testing, I always like to compare it to a child. I know, a weird analogy, but if we think about how children learn and get better, it begins to make sense. Children learn by doing; they learn to talk by talking, learn to play sports by playing, etc. The point is that by “walking the walk,” we tend to improve ourselves. The same applies to our DR plans. We can have as many details and processes laid out on paper as we want, but if we can't restore when we need to, we've failed. Essentially, our DR plans are set up for success by also walking the walk, aka testing.

 

Start small, get bigger!

 

I’m not recommending going and pulling the plug on your data center tomorrow to see if your plan works. That would certainly be a career-changing move. Instead, you should start small. Take a couple of key services as defined in your DR plan and begin to draft a plan for testing a failover of the components and servers contained within them. Just as when creating our DR plan, details and coordination are the key to success when creating our testing plan.  Know exactly what you are testing for. Don’t simply acknowledge that the servers have booted as a success. Instead, go deeper. Can you log into the application? Can you use the application? Can a member of the department that owns the application sign off stating that it is indeed functioning normally? By knowing exactly what the end goal is, you can sign off on a successful test, or, on the flip side, take the failures that occurred, learn from them, update the plan to reflect any changes, and be prepared for the next testing cycle.
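As a rough illustration of going deeper than "the servers have booted," here's a minimal Python smoke-test sketch. The URLs and expected strings are placeholders for whatever your own DR plan defines as "functioning normally":

```python
# Minimal post-failover smoke-test sketch: go beyond "the server booted" and
# confirm the application itself answers with content you expect.
# URLs and expected text are placeholders from a hypothetical DR plan.
import urllib.request

CHECKS = [
    # (service name, URL at the DR site, text we expect in the response)
    ("intranet login", "https://dr.example.internal/login", "Sign in"),
    ("order API health", "https://dr-api.example.internal/health", "ok"),
]

for name, url, expected in CHECKS:
    try:
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        result = "PASS" if expected in body else "FAIL (unexpected content)"
    except Exception as exc:
        result = f"FAIL ({exc})"
    print(f"{name:<20} {result}")
```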

 

Once you have a couple of services defined, go ahead and begin to integrate more, and ensure that recurring time has been set aside and defined within the DR plan to carry out these tests. A full-scale DR test is not something that can be performed on a regular basis, but we can carry out smaller tests on a monthly or quarterly basis. Without a consistent schedule and attention to detail, we can almost guarantee that items like configuration drift will soon creep up and cause our DR testing to fail, or worse, our DR execution to fail.

 

I’ve mentioned before that not keeping our DR plans up to date is perhaps one of the biggest flaws in the whole DR process. However, not applying a consistent testing plan trumps this. Disaster recovery, in my opinion, cannot be classified as a project. It cannot have an end date and a closing. We must always ensure, when deploying new services and changing existing applications, that we revisit the DR plan, updating both the process of recovering and the process for testing said recovery. Testing our DR plan is a key component in ensuring that all the work we have done in creating our plan will pay off when the plan is most needed. Let’s face it: a failed recovery will put a blemish on the entire DR planning process and all the work that has gone into it. Test, and test often, to make sure this doesn't happen to you.

 

I’d love to hear from all of you regarding how you go about testing, or whether you test at all. Are there any specific starting points for tests that you recommend? Do you start small and then expand? Do you utilize any specific pieces of software, resources, or tools to help test your recovery? If you do test, how often? And finally, let’s hear those horror/success stories of any incidents gone bad (or extremely well) as they relate directly to your DR testing procedures. Thanks for reading!

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

We all know that security concerns go hand in hand with IoT. Here's an interesting article from my colleague Joe Kim, in which he suggests ways to overcome the challenges.

 

Agencies should not wait on IoT security

 

The U.S. Defense Department is investing heavily to leverage the benefits provided by the burgeoning Internet of Things (IoT) environment.

 

With federal IoT spending already hitting nearly $9 billion in fiscal year 2015, according to research firm Govini, it’s a fair bet that IoT spending will continue to increase, particularly considering the department’s focus on arming warfighters with innovative and powerful technologies.

 

Security risks exist that must not be overlooked. An increase in connected devices leads to a larger and more vulnerable attack surface offering a greater number of entry points for bad actors to exploit.

 

While the BYOD wave might have been good prep for a connected future, the IoT ecosystem will make managing smartphones and tablets seem like child’s play. To quote my colleague Patrick Hubbard, “IoT is a slowly rising tide that will eventually make IoT accommodation strategies pretty quaint.” That’s because we are talking about many proprietary operating systems that will need to be managed individually.

 

DHS has acknowledged the problems that the IoT presents and the opportunity to address security challenges. Furthermore, the DoD is making significant strides to fortify the government’s IoT deployments. In addition to DoD’s overall significant investment in wireless devices, sensors and cloud storage, the NIST has issued an IoT model designed to provide researchers with a better understanding of the ecosystem and its security challenges.

 

The government IoT market remains very much in its nascent stage. While agencies might understand its promise and potential, the true security ramifications must still be examined. One thing’s for certain: Agency IT administrators must fortify their networks now.

 

A good first step toward meeting the security challenges is user device tracking, which lets administrators closely monitor devices and block rogue or unauthorized devices that could compromise security. With this strategy, administrators can track endpoint devices by media access control (MAC) and internet protocol (IP) addresses, and trace them to individual users.
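To illustrate the idea, here's a minimal Python sketch that compares discovered MAC/IP pairs against an approved-device list and flags anything unknown. The addresses are invented, and a real deployment would pull this data from switch or ARP tables rather than a hard-coded list:

```python
# Minimal device-tracking sketch: compare discovered MAC/IP pairs against an
# approved-device list and report anything unknown. All addresses are invented;
# real data would come from switch or ARP tables.
approved_macs = {
    "00:1a:2b:3c:4d:5e": "jsmith-laptop",
    "00:1a:2b:3c:4d:5f": "conference-room-display",
}

discovered = [
    ("00:1a:2b:3c:4d:5e", "10.1.20.15"),
    ("de:ad:be:ef:00:01", "10.1.20.99"),  # not on the approved list
]

for mac, ip in discovered:
    owner = approved_macs.get(mac.lower())
    if owner:
        print(f"{ip:<15} {mac}  approved ({owner})")
    else:
        print(f"{ip:<15} {mac}  ROGUE -- investigate or block")
```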

 

In addition to tracking the devices themselves, administrators also must identify effective ways to upgrade the firmware on approved devices, which can be an enormous challenge. In government, many firmware updates are still executed through a manual process.

 

Simultaneously, networks eventually must be able to self-heal and remediate security issues within minutes instead of days, significantly reducing the damage hackers can cause. NSA, DHS, and Defense Advanced Research Projects Agency have been working on initiatives, some of which are well underway.

 

While the challenges of updates and remediation are being addressed, administrators must devise an effective safety net to catch unwanted intrusions. That’s where log and event management come into play. These systems can automatically scan for suspicious activity and actively respond to potential threats by blocking internet protocol addresses, disabling users, and barring devices from accessing an agency’s network. Log and event management provide other benefits, including insider threat detection and real-time event remediation.

 

Regardless of its various security challenges, the IoT has great promise for the Defense Department. The various connections, from warfighters’ uniforms to tanks and major weapons systems, will provide invaluable data for more effective modern warfare.

 

Find the full article on SIGNAL.

They lurk in the shadows, they creep in the dark

You may hear them shriek, howl, grunt, or bark

Fact or fiction, it’s hard to be sure

If these creatures are caught on camera, they’re only a blur

Their stories have been told for hundreds of years

Each one a lesson that forces you to confront your fears

Now it’s your turn to vote and decide forevermore

Who should be crowned the most legendary of all folklore?

 

Starting today, 33 of the most mythical creatures will battle it out until only one remains and reigns supreme as the ultimate legend.

 

The starting categories are as follows:

 

  • Cryptids
  • Half & Halfs
  • The Gruesomes
  • Fairy Tales

 

We picked the starting point and initial match-ups; however, just like in bracket battles past, it will be up to the community to decide who they think is the most legendary contestant.

 

*NEW* Submit your bracket: To up the ante this year, we’re giving you a chance to earn 1,000 bonus THWACK points if you correctly guess the final four bracket contestants. To do this, you’ll need to go to the personal bracket page and select your pick for each category. Points will be awarded after the final four are revealed.

 

 

Bracket battle rules:

 

Match-up analysis:

  • For each urban legend, we’ve provided reference links to wiki pages—to access these, just click on their name on the bracket
  • A breakdown of each match-up is available by clicking on the VOTE link
  • Anyone can view the bracket and match-ups, but in order to vote or comment, you must have a THWACK® account

 

Voting:

  • Again, you must be logged in to vote and trash talk
  • You may vote ONCE for each match-up
  • Once you vote on a match, click the link to return to the bracket and vote on the next match-up in the series
  • Each vote earns you 50 THWACK points. If you vote on every match-up in the bracket battle, you can earn up to 1,550 points

 

Campaigning:

  • Please feel free to campaign for your favorite legends and debate the match-ups via the comment section (also, feel free to post pictures of bracket predictions on social media)
  • To join the conversation on social media, use hashtag #SWBracketBattle
  • There is a PDF printable version of the bracket available, so you can track the progress of your favorite picks

 

Schedule:

  • Bracket release is TODAY, March 19
  • Voting for each round will begin at 10 a.m. CDT
  • Voting for each round will close at 11:59 p.m. CDT on the date listed on the bracket home page
  • Play-in battle opens TODAY, March 19
  • Round 1 OPENS March 21
  • Round 2 OPENS March 26
  • Round 3 OPENS March 29
  • Round 4 OPENS April 2
  • Round 5 OPENS April 5
  • Ultimate legend announced April 11

 

If you have any other questions, please feel free to comment below and we’ll get back to you!

 

Who (or what) will be crowned the ultimate legend?

We’ll let the votes decide!

 

Access the bracket overview HERE>>

In system design, every technical decision can be seen as a series of trade-offs. If I choose to implement Technology A, it will provide a positive outcome in one way but introduce new challenges that I wouldn’t have if I had chosen Technology B. There are very few decisions in systems design that don’t come down to trade-offs like this, which is the fundamental reason we have multiple technology solutions that solve similar problem sets. One of the most common trade-offs we see is in how tightly, or loosely, technologies and systems are coupled together.  While coupling is often a determining factor in design decisions, many businesses don’t directly consider its impact in their decision-making process. In this article, I want to step through this concept, defining what coupling is and why it matters when thinking about system design.

 

We should start with a definition. Generically, coupling is a term we use to indicate how interdependent the individual components of a system are. A tightly coupled system will be highly interdependent, while a loosely coupled system will have components that run independently of each other. Let’s look at some of the characteristics of each.

 

Tightly coupled systems can be identified by the following characteristics:

 

  • Connections between components in the system are strong
  • Parts of the system are directly dependent on one another
  • A change in one area directly impacts other areas of the system
  • Efficiency is high across the entire system
  • Brittleness increases as complexity or components are added to the system

 

Loosely coupled systems can be identified by the following characteristics:

 

  • Connections between components in the system are weak
  • Parts within the system run independently of other parts within the system
  • A change in one area has little or no impact on other areas of the system
  • Sub-optimal levels of efficiency are common
  • Resiliency increases as components are added

 

So which is better?

 

Like all proper technology questions, the answer is “It depends!”  The reality is that technologies and architectures sit somewhere on the spectrum between completely loose and completely tight, with both having advantages and disadvantages.

 

When speaking of systems, efficiency is almost always something we’re concerned about so tight coupling seems like a logical direction to look. We want systems that act in a completely coordinated fashion, delivering value to the business with as little wasted effort or resources as possible. It’s a noble goal. However, we often have to solve for resiliency as well, which logically points to loosely coupled systems. Tightly coupled systems become brittle because every part is dependent on the other parts to function. If one part breaks, the rest are incapable of doing what they were intended to do. This is bad for resiliency.

 

This is better understood with an example, so let’s use DNS as a simple one.

 

Generally speaking, using DNS instead of directly referencing IP addresses gives efficiency and flexibility to your systems. It allows you to redirect traffic to different hosts at will by modifying a central DNS record, rather than having to change an IP address reference in multiple locations. It is also a great central repository of information on how to reach many devices on your network. We often recommend that applications use DNS lookups, rather than direct IP address references, because of the additional value this provides. The downside is that the name reference introduces an artificial dependency. Many of your applications could work perfectly well without DNS, but by introducing it you have tightened the coupling between the DNS system and your application. An application that could previously run independently now depends on name resolution, and your application fails if DNS fails.

 

In this scenario, you have a decision to make: does the value and efficiency of adding DNS lookups to your application outweigh the drawback of now needing both systems up and running for your application to work? This is a very simple example, but as we begin layering technology on top of technology, the coupling and dependencies can become both very strong and very hard to actually identify. I’m sure many of you have been in the situation where the failure of one seemingly unrelated system has impacted another system on your network. This is due to hidden coupling, interaction surfaces, and the law of unintended consequences.
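To make the trade-off tangible, here's a minimal Python sketch of one way to loosen that coupling: cache the last successful lookup and fall back to the cached address when resolution fails, so a DNS outage degrades the application instead of breaking it outright. The hostname and cache path are placeholders:

```python
# Minimal sketch of loosening the DNS coupling described above: cache the last
# address a name resolved to and fall back to it when resolution fails, so a
# DNS outage degrades the application instead of breaking it outright.
# The hostname and cache path are placeholders.
import json
import socket
from pathlib import Path

CACHE = Path("/var/tmp/dns_cache.json")

def resolve_with_fallback(hostname: str) -> str:
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    try:
        address = socket.gethostbyname(hostname)  # normal, DNS-coupled path
        cache[hostname] = address
        CACHE.write_text(json.dumps(cache))       # remember it for next time
        return address
    except OSError:
        if hostname in cache:                     # DNS is down: degrade gracefully
            return cache[hostname]
        raise                                     # no cached answer, nothing we can do

# "example.com" stands in for a dependency your application actually talks to.
print(resolve_with_fallback("example.com"))
```

The cache doesn't remove the dependency, but it turns a hard failure into a softer one, which is exactly the kind of coupling decision this article is about.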

 

To answer the question “Which is better?” again: there is no right answer. We need both. There are times when highly coordinated action is required, and there are times when high levels of resilience are required. Most commonly, we need both at once. When designing and deploying systems, coupling needs to be considered so you can mitigate the downsides of each approach while taking advantage of the positives it provides.

Most enterprises rely on infrastructure and applications in the cloud. Whether it’s SaaS services like Office 365, IaaS in AWS, PaaS in Azure, or analytics services in Google Cloud, organizations now rely on systems that do not reside on their infrastructure. Unfortunately, connectivity requirements are often overlooked when the decision is made to migrate services to the cloud. Cloud service providers downplay connectivity challenges, and organizations new to cloud computing don’t know the right questions to ask. 

 

SaaS: It’s just the Internet

When organizations begin to discuss cloud infrastructure, an early assumption is that all connectivity will simply happen via the internet. While many SaaS services are accessible from anywhere via the internet, large organizations need to consider how new traffic patterns will affect their current infrastructure. For example, Office 365 recommends you plan for 10 TCP port connections per device. You can support, at most, 6,000 devices behind a single IP address. If you have a large network and a small PAT pool for client egress, PAT exhaustion will quickly become a problem.
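For a back-of-the-envelope feel for that limit, here's a tiny Python calculation using the planning figures above: roughly 64,000 usable source ports behind one public IP and about 10 concurrent connections per device. Treat the numbers as planning assumptions, not hard limits:

```python
# Back-of-the-envelope PAT sizing: ~64,000 usable source ports per public IP,
# divided by ~10 concurrent TCP connections per device. Planning assumptions,
# not hard limits.
usable_ports_per_public_ip = 64_000
connections_per_device = 10
devices_on_network = 25_000  # hypothetical client count

devices_per_ip = usable_ports_per_public_ip // connections_per_device
public_ips_needed = -(-devices_on_network // devices_per_ip)  # ceiling division

print(f"~{devices_per_ip} devices per public IP")
print(f"at least {public_ips_needed} public IPs for {devices_on_network} devices")
```

The 6,000-devices-per-IP figure quoted above is the more conservative number to plan with, since not every source port is actually available in practice.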

 

Internet-based SaaS applications make hub-and-spoke networks with centralized internet access less efficient. Many WAN solutions use local internet connections to build encrypted tunnels to other sites. You can dramatically reduce network traffic by offloading SaaS applications to a local internet connection instead of backhauling traffic to a centralized data center. However, be mindful of the impact on your security footprint as you decentralize internet access across your organization.

 

But What About the Data Center?

Invariably, as teams begin to build IaaS and PaaS infrastructure in the cloud, they need access to resources and data that live in an on-premises data center. Most organizations begin with IPSec tunnels to connect disparate resources. Care must be taken when building IPSec tunnels to understand cloud requirements. Many cloud teams assume dynamic routing with BGP over VPN tunnels. In my experience, most network engineers assume static routing over IPSec tunnels. Be sure to have conversations about requirements up front.

 

When building VPNs to the cloud, throughput can be an issue. Most VPN connections are built on underlying infrastructure with throughput limitations. If you need higher throughput than cloud VPN infrastructure will support, you will need to consider a direct connection to the cloud.

 

Plug Me In to the Cloud, Please

There are several options to connect directly to the cloud. If you have an existing MPLS provider, most offer services to provide direct connectivity into cloud services. There are technical limitations to these services, however. Pay special attention to your routing and segmentation requirements. MPLS connectivity will likely not be as simple as your provider describes in the sales meeting.

 

If you do not want to leverage MPLS service to connect to the cloud, you can provision a point-to-point circuit from your premises to a cloud service provider. Cloud services publish ample documentation for direct connections.

 

Another option is to lease space from a colocation provider that can peer with multiple cloud service providers (CSPs). You provide the circuits and hardware that reside in the co-lo, and the co-lo provides peering services to one or more cloud providers. Be aware that each CSP charges a direct connect fee on top of your circuit costs. There may also be data ingress and egress fees.

 

You Want to Route What on my Network?

Cloud service providers operate their networks with technologies similar to service providers. Many SaaS services are routable only with public IP addresses. For example, if you want to connect to SalesForce, Office365, or Azure Platform Services, you will need to route their public IP addresses on your internal network to force traffic across direct connect circuits. Network engineers who have always routed internet-facing traffic with a default route injected into their IGP will have to rethink their routing design to get full use of direct connectivity into the cloud.

 

I Thought the Cloud was Simple

The prevailing cloud messaging tells us that the cloud makes infrastructure simpler. There is some truth in this view from a developer’s perspective. However, for the network engineer, the cloud brings new connectivity challenges and forces us to think differently about how to engineer traffic through our networks. As you look to integrate cloud services into your on-premises data center, read up on the documentation from your cloud service provider and brush up on BGP. These tools will position you to address whatever challenges the cloud throws your way.
