
You've probably heard about the importance of business continuity and disaster recovery. Today, more businesses have business continuity plans than ever before. Yet even with so many businesses looking to secure their future, there are still a few aspects of business continuity that today's businesses need to understand. After all, there is more to it than just data backup. Disaster recovery is something that needs to be planned, practiced, and updated regularly, and it's important to have a management system that helps you predict, monitor, and execute your business continuity plan.

Over the last few years, the way business continuity is perceived within a business has changed. In a previous position, the company I worked for had multiple physical data centers, with hardware at each site and full failover from one site to the next. But today's ever-changing, always-on data requirements bring new complications to the business continuity plan. Today, infrastructure and applications can be hosted across multiple platforms, from on-premises to the public cloud. With these disparate environments and multiple management tools, it is key to know what is going on within your business.

So, let’s look at how some of the software packages in the market can help your business monitor your infrastructure and provide critical insights into your data.

Availability Monitoring

Experience has taught me that a lot of outages can be tracked to network issues. In the majority of cases, these outages could have been avoided. Availability monitoring software provides a way to help you identify and proactively troubleshoot network issues early. I have often blamed the network team for issues with my data center, putting pressure on them to work out what's wrong, when the issue ended up being related to disk access or performance. With an availability monitoring solution, you can provide a quick response to your teams, helping you troubleshoot the issue before the business or end user is affected. Availability monitoring tools can work in a standalone fashion to provide a quick response for small organizations or for companies looking to provide information about a specific project. A larger organization may want to integrate availability monitoring into a more comprehensive platform.

Interface Monitoring

Sometimes you have to delve deeper into the environment: no real issues show on the network as a whole, but you can begin to see actual issues with individual interfaces. Take a service provider, for instance. They have large, distributed, shared networks with multiple VLANs and dedicated ports for each customer. Not to mention the different types of interfaces: 1Gb, 10Gb, Ethernet, all the way through to high-speed fiber. Now, I'm no expert when it comes to networks, so I would need help to start to decipher the issues. Using SNMP to collect the interface stats within the environment, and ICMP reports to collect data (such as packet loss, round-trip times, etc.), helps the network administrator quickly identify application performance issues in the network.
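To make this concrete, here is a minimal sketch of what polling standard IF-MIB interface counters over SNMP might look like in Python, using the pysnmp library. The device address, community string, and interface index are placeholders, not a reference to any particular product or network.

```python
# Minimal sketch: poll IF-MIB counters for one interface over SNMPv2c.
# Assumes the pysnmp package; the device address, community string, and
# interface index below are hypothetical placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

DEVICE = "192.0.2.1"                       # placeholder device address
IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"    # ifInOctets, interface index 1
IF_IN_ERRORS = "1.3.6.1.2.1.2.2.1.14.1"    # ifInErrors, interface index 1

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),   # SNMPv2c community string
        UdpTransportTarget((DEVICE, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(IF_IN_OCTETS)),
        ObjectType(ObjectIdentity(IF_IN_ERRORS)),
    )
)

if error_indication or error_status:
    print(f"SNMP problem: {error_indication or error_status.prettyPrint()}")
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```

Polling these counters on a fixed interval and diffing successive samples gives per-interface throughput and error rates; pairing that with ICMP round-trip time and loss measurements covers the data points described above.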

Virtualization Manager 

This is the tool that I find really cool. When I was an IT Manager, every day was a challenge, especially when we started introducing larger applications. But this was eight years ago, long before I knew about the category of virtualization management. Back then, if I had a piece of software that could proactively recommend what I needed to do with my VMs, I would have slept a lot easier. I will spend more time going over the benefits of virtualization management software in the future because it is a large and very detailed category, but today I want to highlight the features that I think can help your business continuity plan, including Predictive Recommendations and Active Virtualization Alerts.

Predictive Recommendations proactively monitors and calculates active and historical data to help you prevent and fix performance issues. You can review each recommendation and choose to act now or schedule it for a later time and date. This gives you choice and control over your environment. You can also use VMAN to help prevent future issues by implementing resource settings and plans that can be actioned if any performance thresholds are breached. Now, what's key for me is not only the value that's provided by saving time and resources, but also the uptime that can be achieved by making sure the VMs are in the right place.

In my next article, we will look at providing insight into your infrastructure to meet the compliance challenges we are seeing today.

This week's edition of The Actuator comes to you direct from Las Vegas and AWS re:Invent. This is my first time at this event, and I am fortunate to be here as an attendee. For the past 15 years or so, I have been working events, not attending them. This week I have been hanging around the data folks, trying to soak up some knowledge on RDS, Aurora, and Redshift, and I also attended a workshop on building Alexa Skills. So, yeah, I'm in data geek heaven here.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Amazon launches new cloud storage service for U.S. spy agencies

“Amazon says it is the only commercial cloud provider to offer regions to serve government workloads across the full range of data classifications, including unclassified, sensitive, secret and top secret.” Well, except for Azure, which has offered this for more than a year already. But, hey, with AWS re:Invent happening, why not embellish the truth a bit, right?

 

Google admits it tracked user location data even when the setting was turned off

Remember when Google wasn’t the evilest company on Earth? Neither do I. Now we can only imagine what horrible things they are doing to user data stored in Google Cloud.

 

Decision making under stress

I read this and remembered every triage call I participated in through the years. This is why we have things like checklists and automated scripts to remove the human element and lower stress in group situations where we feel we have no control.

 

Uber Concealed Breach of 57 Million Accounts for a Year

Remember when Uber wasn’t as evil as Google? Yeah, that was three minutes ago. When I woke up today I didn’t need yet another reason to dislike Uber, but here we are.

 

Run the First Edition of Unix (1972) with Docker

“I bet people from the 1970s were really good spellers because not being able to move your cursor is pretty brutal.”

 

Microsoft is Bringing VMware to Azure, VMware Is Not A Happy Camper

One thing I learned a long time ago is that “unsupported” really means “you pay extra for support.” Of course, VMware will tell you that this is unsupported. They have to say that because of their agreement with AWS. But at the end of the day, VMware wants to survive, and the best way to do that is to allow their software to run on any cloud.

 

Gambling regulators to investigate 'loot boxes' in video games

I was thinking of getting Battlefront 2 as a gift this year, but I’ve decided that I’d be better off buying actual lottery tickets instead.

 

If you have never been to re:Invent, here's a sample of what you should expect for the week: Standing in lines.

Over the next few weeks, I will be releasing a series of articles covering the value of data analytics and insight, focusing on five of the key business drivers that I have seen within the industry. With the IT landscape constantly changing, let’s look at what is driving businesses further down the path toward data analytics. More importantly, we will also look at the drive to understand what is happening within current infrastructures and how this knowledge can deliver value.

 

Business Continuity

 

In an ideal world, your organization would run effortlessly, providing both your business and your customers with data and resources at all times. In reality, no business, no matter how successful, is without its challenges, and it often has to mitigate and eventually overcome them to make sure it can achieve its outcomes. One of the ways that organizations can prepare for disruptive events is through Business Continuity Management (BCM). For some businesses, this means deploying BCM software and creating business procedures to continue operating when the unexpected occurs.

 

When I am talking to businesses about Business Continuity, it is important to highlight at an early stage what the business needs to keep running and what it is that they want to monitor. Deploying the right BCM software will help businesses identify, manage, and prevent issues before they occur. This has the added benefit of possibly reducing the need to activate disaster recovery or business continuity plans.

 

Compliance

 

Compliance has become a huge topic of conversation over the past year. The issue was prompted mainly by the introduction of the General Data Protection Regulation (GDPR), which goes into effect on the 25th of May 2018. I have spent the last year working with customers to help them understand the impact of GDPR and the importance of making sure their businesses stay compliant. Now, I am no expert when it comes to its legalities, but what I have found is that becoming GDPR-compliant is not the major challenge. Instead, I have discovered that businesses are more concerned about staying compliant. It is important to have a monitoring tool that can help you maintain compliance, from security and access control through to patch management and device tracking. These aren’t all tied to GDPR, but each one, working in collaboration, will help your business stay compliant.

 

Responsiveness

 

This challenge has been brought to my attention in many different ways. For me, responsiveness can mean anything having to do with networks, data access, hyperscaling, and even cloudbursting as the business requires. With ever-changing user requirements, whether for applications or the business as a whole, monitoring is crucial. Businesses want tools that can give them proactive information that will help them make the kind of decisions that guide them toward becoming competitive within the market.

 

Collaboration

 

As mentioned above, it is becoming more important to manage large infrastructures from one central management platform. As technologies move forward, there is no longer a one-stop shop for all your business requirements. Infrastructures and applications are brought in from a plethora of vendors to supply the business's needs. It is important for businesses to stay agile to best meet the trends of the market. Therefore, they must use the best possible tools to help them achieve and keep that competitive edge.

 

Efficiency

 

It's common practice for businesses to try to run their infrastructures efficiently. As we all know, though, this isn’t as easy as it sounds. With multiple disparate environments, each with its own operating system and management tools, it’s very hard to keep track of it all. Increasingly, I find myself talking to customers about centralized management and efficiency. I believe the SolarWinds Orion Platform helps businesses manage those very specific challenges.

 

Over the next four weeks, I am going to delve into each of these subjects individually in a lot more detail. I will also reveal how SolarWinds can help businesses deliver value using insightful data analytics.

Today you can find any number of online articles about the impending loss of Net Neutrality in the United States and around the world. I think most web surfers don't understand the potential impact.

 

 

There are many examples of corporations and governments violating the principles of Net Neutrality.  Here are just a few:

  • Comcast secretly injected forged packets to slow certain users' traffic down. When discovered, they didn't stop until the FCC forced them to.
  • In a different instance, the FCC fined a small ISP $15,000 for restricting their customers' access to a rival ISP's services.
  • AT&T was caught limiting their customers' access to a specific public site unless the customers paid AT&T more for the access.
  • More recently, Verizon secretly restricted its customers' ability to stream from Netflix and YouTube, until they were caught and forced to change.
  • Outside the U.S., in some countries, the general population can't access any information not officially approved by the government. They can't email anyone they'd like, or search for information about politics, medicine, religion, etc.

 

That's what it's already like today when Net Neutrality is not followed. For background, see Net neutrality - Wikipedia.

 

The obvious fallout from losing Net Neutrality is separating people from more money and seeing it sent to carriers and big corporations. But although money is the up-front reason for doing away with it, it's not the worst one.

 

Suppose Net Neutrality goes away in the United States due to an act of Congress, and it becomes legal for carriers and ISPs and corporations to slow traffic down or shut it down entirely based on:

  • Who you are
  • What you want to learn
  • Your past browsing history
  • Your income
  • Your race
  • Your religion
  • Your political views
  • How much extra you're willing to pay

 

Losing Net Neutrality sounds a lot like trading away the freedoms and rights that come from living in the United States and moving to Russia or China or Iran or Syria. We could be giving up a LOT of freedoms and speeds that we've always taken for granted.

 

Are there any justifiable reasons for slowing or stopping your traffic? Maybe.

  • It costs carriers and ISPs more and more as people increase bandwidth demand by streaming audio and video, or moving increasingly larger files for work or pleasure. Should you be denied access, or slowed down, or forced to pay more to have the speed and access you already have today?
  • If you use a lot of bandwidth streaming entertainment or playing games during business hours, some online businesses may not be able to serve their customers as well. Is that a fair reason for you to be denied speed or access to what you want?

 

It occurred to me that throttling bandwidth with QoS is similar to a clogged freeway (aka "oversubscribed") that the DoT "fixes" by dedicating a lane to folks who pay more to bypass the slow traffic. When does slowing someone's traffic begin a movement toward losing Net Neutrality?

  • How about if the DoT intentionally slowed everyone EXCEPT you down, and you must drive 55 mph where you'd been driving 70, and everyone else must drive 40 mph?
  • Or if they said, "You may go 70 mph because you're wealthy. People who earn less than you won't be allowed to go that fast."
  • Or "Your religion or color or gender all are reasons why you may not access these sites with good speeds, or perhaps to be able to access them at all. Further, all of your surfing will be slowed until you comply with some new policy we will indicate at a later time, perhaps a loyalty oath."

 

Put on your best network administrator hat and imagine how monitoring will play a role in this. You might be asked to prove that your ISP/carrier/remote online service is flowing as fast as your company pays it to be. And you might be asked to identify IP addresses, users, protocols, and destinations and throttle their throughput per the demands of some higher-up in your organization.

 

Maybe you already do this with QoS, prioritizing traffic because your internet pipe or WAN pipes aren't big enough for the demand.  Is that parallel to not being able to access what you want privately, or at home, or at work?

 

How will you react when Net Neutrality is gone and you learn that you:

  • Cannot access the sites you used to enjoy?
  • Cannot stream A/V content as fast as you used to?
  • Cannot research what you want?
  • Are denied equal access to information based on your politics/income/race/religion/age or gender?
  • Will be forced to pay additional fees each time you use certain protocols or sites?

 

Think about how important Net Neutrality has been to you in the past, and how we've taken it for granted.

 

Then imagine it being used as a tool to separate you from more money, and to keep you in the dark about your health or your government's activities, or even from learning about coming weather conditions.

 

What good things could come from losing Net Neutrality?

  • Users might abandon the internet and get physically active and become fit again?
  • Think of the zillions of bits conserved!
  • Discovering ignorance is bliss?  (If this makes you smile, I recommend you read George Orwell's book 1984).

 

How will losing Net Neutrality affect you and your business and your network monitoring demands and discoveries?

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Through its significant investment in networked systems and smart devices, the DOD has created an enormously effective—yet highly vulnerable—approach to national network security threats. The department has begun investing more in the Internet of Things (IoT), which has gone a long way toward making ships, planes, tanks, and other weapon systems far more lethal and effective. Unfortunately, the IoT's pervasive connectivity also has increased the vulnerability of defense networks and the potential for cyberattacks.

 

That attack surface only continues to grow and evolve, with new cyberthreats against the government coming in a regular cadence. DOD must adapt to this rapidly changing threat landscape by embracing a two-phase plan to make network security more agile and automated.

 

Phase One: Speeding Up Tech Procurement

 

The government first must accelerate its technology procurement process. Agencies must quickly deploy easily customizable and highly adaptable tools that effectively address changing network security threat vectors. These tools must be simple to install and maintain, with frequent updates to ensure that networks remain well fortified against the latest viruses or hacker strategies.

 

There is hope. In recent years, the government has made it easier for agencies to buy software through a handful of measures, such as the General Services Administration Schedule and the Department of Defense Enterprise Software Initiative. All have been carefully vetted to work within government regulations and certifications.

 

Phase Two: Automating Network Security

 

Automated network security solutions to alert agency administrators to possible threats are also important. The government should implement these types of solutions to monitor activity from the myriad devices using Defense Department networks. Administrators can be alerted to potential security breaches and software vulnerabilities to provide real-time threat response capabilities.

 

The SolarWinds® Log & Event Manager (LEM) lets administrators gain real-time intelligence about the activity happening on their networks, alerting them to suspicious behavior. Administrators can trace questionable activity back to its source and set up automated responses—including blocking IPs, disabling users, and more— to prevent potentially hazardous and malicious intrusions.

 

The number of connected devices operating on government networks makes a comprehensive User Device Tracker (UDT) a necessary counterpart to LEM. UDTs have gained a significant amount of traction over the past couple of years, particularly since the workforce began using personal mobile devices over government networks.

 

Today, federal administrators must deploy solutions that automatically detect who and what are using the network at all times. Solutions should easily locate the devices through various means, so administrators quickly can prevent major breaches that have become all too common.

 

Prevention is more about implementing network security measures quickly and automatically than it is about who has the better firewall. For the Defense Department, which has become so dependent on connected devices and the information they provide, there’s simply no time for that type of old-school thinking. Federal administrators must act now and invest in automated, agile, and efficient solutions to keep their networks safe from cyberattacks.

 

Find the full article on Signal.

Something that has never happened before is happening right now: We’re recording SolarWinds Lab #60 live, on location at AWS re:Invent 2017 in Las Vegas. Head Geek patrick.hubbard and Director of Product Management for SolarWinds Cloud Michael Yang will review all of the latest from SolarWinds Cloud, including AppOptics.

 

Lab 60 from Las Vegas

 

Join us Wednesday, December 13, 2017, at 1 p.m. CST for a Lab you won’t forget.

 

SolarWinds Lab #60 - SolarWinds Cloud, GO! Learn Modern Monitoring for Apps, Cloud & DevOps

Someday, we may look back on IT as a subset of social science as much as a technological discipline. Because it sits at the intersection of business and technology, visibility and information are at a premium within organizations.

 

In another social science, economics, there is a theory that given perfect information, rational humans will behave predictably. Putting aside the assumption about rationality (and that's a major bone of contention), we can use that same principle to say that when people seem to behave unpredictably, it's a failure of information.

 

In any organization at scale, information disparities have the potential to cause confusion, or worse, conflict. One of the most typical examples of this in the enterprise is between storage administrators and database administrators (DBAs). Very often, these two roles butt heads. But why? After all, at the end of the day, both are trying to serve the same organization with a clearly defined mission, as discussed in the eBook The Unifying Force of Data, co-authored by Keith Townsend and me.

 

So, it could be said that both roles have the same overall goal, but have different priorities within it. These priorities inform how each determines their success within the organization. It's a lack of knowledge of their counterparts' priorities that often causes this seemingly inherent conflict.

 

So, what are these priorities? For a storage admin, much of the focus rests on cost. As data continues to eat the data center, the amount, and subsequent cost, of storage is the fastest-rising expense in a data center. This tends to make them the master of "no" when it comes to storage requests.

 

This focus on cost informs how storage admins interact with other members of an organization. When it comes to DBAs, this can create a vicious cycle of assumptions. If a DBA requests an additional allocation of storage at a given performance tier, the storage admin is naturally skeptical about whether it's "really" required. The storage admin will look at what the DBA actually used out of their last allocation, perhaps even digging into the IOPS requested by an application as part of their calculation.

 

In the back of the storage admin's mind, there may be an assumption that the DBA is actually asking for more than they need at this moment. This might be because of the lag time in provisioning additional storage. Regardless, the storage admin is trained to be skeptical of DBA provisioning requests.

 

This is where a lack of information can really hamper organizations. The storage admin thinks the DBA will always seek to overprovision either capacity or speed. This causes additional delays as the admin tries to determine the "actual" requirements. Meanwhile, the DBA assumes the storage admin will be difficult to work with, and often overprovisions as a way to hedge against additional negative interactions.

 

This cycle isn't because the storage admin is a pessimistic sadist looking to make everyone's lives miserable. The point of storage in an organization is for it to be used to support the mission. The storage admin must provide applications with the storage they need. But they feel the squeeze on cost to make this as lean as possible.

 

Now, it's impossible to expect either storage admins or DBAs to have perfect information on each other's roles and priorities. That would result in duplicated effort, cognitive overload, and a waste of human capital. Instead, a monitoring scheme might bridge the gap, allowing the two to troubleshoot their issues by correlating behavior up and down the application stack.

 

This kind of monitoring requires the collaboration of both a storage admin and DBA. It may require a bit of planning, and probably won't be perfect from the start. But it might be the only way to solve the underlying information imbalance causing the conflict between the two roles.

In this, my fifth and final post about life hacks, I’ll talk about the communication process, clarifications across all key personnel, and one big approach to how some of these are accomplished: the Stand-up/Walking the Wall approach.

 

What is it? Well, the concept of walking the wall is a somewhat structured “Stand-up” meeting approach. The goal here is to facilitate a smooth communications process with management and a way to give visibility to your current projects.

 

Imagine you’re part of a team, have a series of tasks in flight, and are hoping to gain clarity toward the full spectrum of where each team and team member is in the process. When these meetings are organized, scrum fashion, they’re set up with a goal of very quickly pushing through to clarity, with as little time wasted as possible. As a result, we stand and begin each meeting with an agenda. The agenda is often so repeated that it’s almost unnecessary, though we often have a whiteboard with an outline clearly laid out.

 

Essentially, each team talks about their individual projects in flight, with each team member discussing their current responsibilities, the obstacles to achieving those tasks, and the progress therein. With this cadence in mind, all the responsible individuals can be queried by any of the other team members. Often, we find interrelations between team members on discrete tasks, dependencies, and prerequisite milestones that must stay on schedule to achieve completion.

 

My first exposure to this type of meeting came when I was responsible for the project management and implementation of a large VDI project at a prestigious hedge fund in Connecticut. At this firm, which was famous for its approach to project management, there was no end of conflict in these meetings. The idea here was to challenge each statement, and through the conflict and drilling, try to uncover the holes in the plan. Have you considered...? Did you think about…? How would the scope of the project be affected if…? I found this to be a highly unpleasant approach to building commitments toward a solid project plan. The thing is, the approach was entirely effective. Through these arguments, which were often quite aggressive, if the person deemed responsible was proven to have lost control over their scheduled tasks, they’d “lose their spot.” It was a shameful experience for those who did, but by fear and intimidation, they were able to achieve greatness as an organization. Believe me, this firm was incredibly successful. They understood that only 10% of those hired would survive their first year. However, if they did survive, great things could happen.

 

I’m going to link here to the principles of the founder. I think that this document is incredibly powerful, and really defines how they’ve been able to achieve their goals.

 

We have long known that meetings take way too much time out of our days. I can recall having meetings to schedule the next meeting, which was frustratingly tedious and ineffectual. So this “Walking the Wall” methodology, to me, is very effective. And from a PM perspective, it gives clarity to the entire team regarding successes, shortfalls, and potential hindrances to meeting scheduled timeframes.

 

I often have these meetings with myself in my head when I think about the goals I’m hoping to accomplish in my life. I try to think about the things that are blocking my progress, the things that I rely on for success. As this entire series of postings has solidified, it’s become clear to me that I, while not preaching to others about how they should maintain control over their lives, use a very specific project manager’s mentality as my approach to the various tasks I hope to accomplish in my life, my career, my health, my music, and my relationships. This is just how I do things. It keeps things clear and pragmatic for me.

[Image: Datachick LEGO at a SolarWinds desk with a water cooler]

In my recent post, 5 More Ways I Can Steal Your Data - Work for You & Stop Working for You, I started telling the story of a security guard who helped a just-fired contractor take servers with copies of production data out of the building:

 

Soon after he was rehired, the police called to say they had raided his home and found servers and other computer equipment with company asset control tags on them. They reviewed surveillance video that showed a security guard holding the door for the man as he carried equipment out in the early hours of the morning. The servers contained unencrypted personal data, including customer and payment information. Why? These were development servers where backups of production data were used as test data.

Apparently, the contractor was surprised to be hired back by a company that had caught him stealing, so he decided that, since he knew about the physical security weaknesses, he would focus not on taking equipment, but on the much more valuable customer and payment data.

 

How the Heck Was He Able to Do This?

 

You might think he was able to get away with this by having insider help, right?  He did, sort of.  But it didn't come from the security guard.  It came from poor management practices, not enough resources, and more. I'm going to refer to the thief here as "Our Friend".

 

Not Enough Resources

 

Our Friend had insider information about how lax physical security was at this location.  There was only ever one security person working at a time.  When she took breaks, or had to deal with a security issue elsewhere, no one else was there to cover the entrance.  Staff could enter with badges and anyone could exit.  Badging systems were old and nearly featureless.  Printers and other resources available to the security group were old and nearly non-functioning.  Staff in security weren't required or tested to be security-minded.

 

In this case, it was easy to figure out the weaknesses in this system.

 

Poor Security Practices

 

In the case of Our Friend, he was rehired by a different group who had no access to a "do not hire" list because he was a contractor, not an employee.  He was surprised at being rehired (as were others).  The culture of this IT group was very much "mind your own business" and "don't make waves".  I find that a toxic management culture plays a key role in security approaches.  When security issues were raised, the response was more often than not "we don't have time to worry about that" or "focus on your own job".

 

Poor Physical Security

 

Piggybacking or tailgating (following a person with access through a door without scanning a badge) is a common practice that goes unchallenged in many facilities.  Sometimes employees would actually hold the door open for complete strangers.  This seems like being nice, but it's not. Another contractor, who had recently been let go, was let in several times during off hours to wander the hallways looking for his former work laptop.  He wanted to remove traces of improper files and photos.  He accomplished this by tailgating his way into the building.  This happened just weeks before Our Friend carried out his acts.

 

When Our Friend was rehired, there was a printout of his old badge photo hanging on the wall at the security area.  It was a low-resolution photo printed on a cheap inkjet printer running low on ink.  The guard working that day couldn't even tell that this guy had a "no entry" warning.  The badge printing software had no checks for "no new badge".

 

After being rehired, Our Friend was caught stealing networking equipment again and was let go.  Security was notified and another poorly printed photo was put up in the security area. Then Our Friend came back in the early morning hours on the weekend, said he had forgotten his badge, and was issued a new one.  Nothing in the system triggered an alert.

 

He spent some time gathering computers that were installed in development and QA labs, then some running in other unsecured areas.  He got a cart, and the security guard held the door open while he took them out to his car.  How do we know this?  There were video tapes. And how do we know that? The security guard sold the tapes to a local news station. News stations love when there is video.

 

Data Ignorance

 

As I mentioned in the previous post, the company didn't even know the items were missing. It took several calls from the local police to get a response.  And even then the company denied anything was missing.  Because they didn't know.  Many of us knew that these computers would have production data on them because this organization used production data in their development and test processes.

 

But the company itself had no data inventory system. They had no way of knowing just what data was on those computers.  It was also common to find that these systems had virtually no security, or that they had a single login for the QA environment that was written on the whiteboard in the QA labs. No one knew just what data was copied where.  Anyone could deploy production data anywhere they could find space. Requests for production data were pretty much granted to anyone in IT or the rest of the company.  Requests could be made verbally.  There were no records of any request or of the provision of data.  Employees were given no indication that any set of data held sensitive or otherwise protected data.

 

The lack of inventory let the company spokesperson say something like "These were just test devices; we have no indication that any customer data was involved in this theft".

 

Fixing It

 

I could go on with a list of tips on how to fix these issues. But the main fix, which no one wants to embrace, is to stop using production data for dev and test.  I have more writing on this topic coming, and this will be my agenda for 2018.  If this company had embraced this option, the theft would have been just of equipment and some test data with no value.

 

The main fix that no one wants to embrace is to stop using production data for dev and test.

 

If we as IT professionals started following the practice of having real test data, many of the breaches we know of would not have been breaches of real data.  Yes, we need to fix physical security issues.  But let's keep production data in production.  Unless we are testing a production migration, there's no need to use production data for any reason.  In fact, many data protection compliance schemes forbid it.
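As one illustration of what building real test data (rather than masking production copies) might look like, here is a minimal sketch using the Python Faker library to generate synthetic customer records. The schema, record count, and file name are placeholders, not a recommendation for any specific system.

```python
# Minimal sketch: generate synthetic customer records for dev/test use
# instead of copying production data. Assumes the "faker" package is
# installed; the schema and record count are purely illustrative.
import csv
from faker import Faker

fake = Faker()

with open("test_customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "card_number"])
    writer.writeheader()
    for _ in range(1000):
        writer.writerow({
            "name": fake.name(),
            "email": fake.email(),
            # Faker produces well-formed but fake card numbers, so a stolen
            # copy of this file has no value.
            "card_number": fake.credit_card_number(),
        })
```

Data generated this way keeps the shape and volume needed for realistic testing while ensuring that a stolen dev or QA server contains nothing worth selling.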

Have you developed real test data, not based on just trying to obscure production data, for all your dev/test needs?

Tomorrow is Thanksgiving here in the USA. I’ve avoided doing a turkey-themed post this week, but included two links because reasons. I hope everyone has the opportunity to share a meal with family and friends. It’s healthy to get together and remember all the things for which we are thankful.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Amazon is Becoming the New Microsoft

And not in a good way.

 

Switching Jobs: When people move to different jobs, here's where they go.

If you’ve ever wondered where you are headed in your current job, have a look at where your peers are going next.

 

Should I Leave My Laptop Plugged In All The Time?

The answer to a question we’ve all asked.

 

The Power of Anti-Goals

I love this line of thinking. Want to make your day, week, month, and life happier? Anti-goals may be the answer you’ve been looking for.

 

Schneier: It's Time to Regulate IoT to Improve Cyber-Security

I’d love to believe that this presentation will serve as a wake-up call for the industry, but I know it won’t. You make more money building cool apps and devices than you do securing data.

 

Thanksgiving Hack: Cook Your Turkey Sous Vide

OK, I want to try this, but I need to deep fry a turducken first.

 

How to survive Thanksgiving when politics loom large

Some decent advice in here for any meal that involves extended family at any time of the year. Or, for any gathering of professionals (conferences, etc.).

 

Then again, maybe deep frying a turducken isn't such a good idea after all:

 

I’ve spent countless hours trying to find the perfect tool for the job. In fact, I’ve spent more hours searching at times than I have doing the work. You’ve probably done that before. I hear a lot of people are the same way. I look at it as if I’m searching for that needle in a haystack. When I find it, I’m over the moon. But what about when it comes to our end users? Do we trust them enough to locate that needle in the haystack when it comes to software that will enable them to perform their work? I'd venture to guess that in larger organizations the resounding answer is no! But why?

 

Elemedia Player

Let's use the media consumption application Elmedia Player as a working example. Now, it's true that our end users probably won't need Elmedia Player to get their job done, but the method behind what happened here is what I'm interested in discussing.

 

On October 19, 2017, ESET Research reported that Elmedia Player had been briefly infected with the Proton malware strain. In fact, the developers of Elmedia Player, Eltima, later reported on their blog that they had been directly distributing the compromised software from their own servers. In this case, the malware was delivered via the supply chain.

 

Let's now put this in the perspective of our end users. Imagine that an end user shows up at work to find that their workstation doesn't have some set of software that they use at home. They feel comfortable using this software and decide to find it on the interwebs and download it themselves. They grab said software package and install it. Off to work! Another great day.

 

However, in this instance, they have obtained a package that's infected with malware. It's now on your network. Your day has just gone downhill fast. You'll likely spend the rest of the day restoring a machine or two, trying to figure out how far the malware has spread, and second-guessing every control you've put in place. In fact, you may not even realize that there's malware on the network initially, and it could be days or weeks before the impact is realized.

 

Taking it to the Enterprise

Let's move this discussion to something more fitting for the enterprise. We've already discussed online file storage services in this series, but let's revisit that a bit.

 

Imagine that we couple the delivery of malware packaged into an installer file with the strong encryption that's performed by these online services, and you can pretty much throw your visibility out the window. So here's the scenario. A user wants to share some files with a colleague. They grab a free Dropbox account and share some work stuff, some music, and some apps that they like, perhaps Elmedia Player with an infected installer. You can see where this is going.

 

My point to all of this is that we have to provide the tools and prohibit users from finding their own, or things start to go off the rails. There's no way for us to provide security for our organization if our users are running amok on their own. They may not mean it, but it happens. In fact, this article written by Symantec discusses the very same idea. So instead of finding a needle in a haystack, we end up falling on a sword in a pile of straw.

 

How do you feel about these services being used by end users without IT governance? How do you handle these situations?

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

As agencies look ahead to the new year, I wanted to share an insightful blog about the evolution of network technology written by my SolarWinds colleague, Leon Adato.

 

We are entering a new world where mobility, the Internet of Things (IoT), and software-defined networking (SDN) have dramatically changed the purview of federal IT pros. The velocity of network changes is expected to pick up speed in the next five years as new networking technologies are adopted and mature. The impact this will have on network administrators will be significant.

 

To help prepare for this new world, we have put together the following list of positive changes we expect federal administrators—and the tools they use to do their jobs every day—will experience between now and 2020.

 

Streamlined Network Troubleshooting

 

Today, network administrators spend a lot of time troubleshooting. Moving forward, this process will become far more streamlined by using data that already exists to free up administrators’ time.

 

The vast majority of applications in place today hold a lot of data that could be used to help in troubleshooting network issues. Unfortunately, they don’t quickly and easily bubble that data up to administrators. Soon, systems will be able to arm the administrator with enough information to fix problems much more quickly. The future will see administrators steeped in automated intelligence, where an emailed alert not only describes a present problem, but also includes details of similar past incidents and recent configuration changes that could be related.

 

Greater Ability to Resolve Potential Problems Before They Arise

 

With the development of advanced network management capabilities comes the ability to increase automation by tapping into historical knowledge to predict problems before they happen.

 

Every agency would like the ability to have its systems effectively take care of themselves with a greater degree of automation. There is an emerging need for network technology that moves beyond simply alerting administrators to problems, toward alerting, fixing, and escalating a notification when conditions are ripe for an issue, based on historical context.

 

Greater Awareness of Virtualization by Network Management Tools

 

Virtualization is increasingly common, yet many network management and monitoring tools have not evolved. All the tools that make up that portfolio of network management necessities need to be aware of what network virtualization is and the specific requirements it brings—in particular, how to operate, gather, and relay information within a more hybrid and software-defined world.

 

Increased Connectivity Across Devices

 

Finally, there’s IoT, which will, without a doubt, bring dramatic complexity to the evolution of network structures over the next three to five years.

 

The concept of IoT is really about connecting and networking unconventional things and turning them into data collection points. Think everything from sensors in military materiel shipments to connected cars. A lot of these “things” are being tested to see where we might consider the boundaries of the network edge to be, and where that data processing needs to take place.

 

The best approach is somewhere between decentralized and centralized—which is where network management will be heading. Network management tools and federal network managers will need to build internal capability, both staff and technology, to maintain visibility, awareness, and control over a growing evolution of network technology.

 

Find the full article on Federal Technology Insider.

You’ve made the leap and have implemented a DevOps strategy in your organization. If everything is going just great, this post may not be for you. But if you haven’t implemented DevOps, or you have and things just don’t seem to be progressing as quickly as you had hoped, let’s discuss some of the reasons why your DevOps might be slow.

 

But First…

Before we get too far along into why your DevOps initiative might be slow, let’s first ask: Is it really slow?  If DevOps is new to your company, there may be some period of adjustment as the teams get used to communicating with each other. Additionally, it’s possible that your expectations of how fast things should be might be out of alignment with how they really are. It’s easy to think that by transitioning to DevOps, everything will be all unicorns and rainbows and instantly churning out code to make your lives better. However, depending on where you start to focus, there could be some time before you start to see benefits.

 

What We Have Here is a Failure to Communicate

If you’ve been following the previous three posts in this blog series, it should come as no surprise that one of the key factors that could slow down your DevOps projects is communication. Do you have a process set up to connect developers and operations personnel so that everyone agrees on the best way to communicate back and forth? Are your developers getting the information they need from operations in a timely manner? Can operations communicate feature requests, issues, and operations-specific information to developers efficiently? If the answer to any of these questions is no, then you’ve started down the path of identifying your issue.

 

Keep in mind that just because you have a process in place to establish communication channels between the developers and operations personnel, you may still encounter issues. Just because a process has been established doesn’t mean it is the right process, or that people are following it. When evaluating, make sure that you don’t assume that the processes are appropriate for your company.

 

I’m Not Buying That

Sometimes, employees simply won't buy into DevOps. Maybe they think that they can get things done faster without user input and as a result, they ditch all processes that were in place to help facilitate developer-operator communications. As mentioned in DevOps Pitfalls, culture is a huge contributing factor to the success or failure of any DevOps initiative. The process becomes behavior which, in turn, becomes culture. If the process is being ignored, your organization needs to come up with a way of dealing with employees who choose not to follow it. This gets into a whole HR policy discussion which is way outside the scope of this blog.

 

I Used to Be a Developer, Now I’m a Developer Times Two!

Before you started doing DevOps, it’s likely you already had developers, and they already had a job writing code for some projects. Whether it’s because you are increasing automation or building software for a software-defined data center, the projects that you are considering as the lead into DevOps are not the only projects that your developers are working on. When you make the choice to implement DevOps processes, carefully review your developers' current workloads. Based on your findings, you may need to hire more developers to help ensure that the project rolls out smoothly and in a reasonable time frame.

 

Size Matters

It doesn’t matter if you are developing a new software tool or deploying a new phone system, there is a tendency for a lot of people to want to get everything rolled into one big release. By doing so, users get to see the full glory of your project and you can sit back and enjoy being completely done with something.

 

The issue with this approach is that this could take a lot more time than users are expecting. It would be better to have some clear communication up front to identify the features that are time-critical for users to have, and build and release those first with a schedule to release the remaining features. By using this approach, developers and operators get an early win to address the critical issue. This is then followed up by additional wins as new functionality gets rolled into the software in future, short-timeline releases.

 

Wrap Up

As you can see, reasons for a slow DevOps process are varied but can be largely attributed to the communications that are in place between developers and operators. What other issues have you seen that have slowed down a DevOps process?

 

In the next and final post in this series, we'll wrap up some of what has been discussed in the series, and also address some of the comments and questions that have cropped up along the way. Finally, I’ll leave you with some DevOps resources to give you more information than I can possibly provide in five blog posts!

In previous posts in this “My Life” blog series, I’ve written quite a bit about the project management/on-task aspects of how I keep my focus and direction top of mind while I practice and place emphasis on my goals. But, what happens when something doesn’t work?

 

In some cases, the task we’re trying to accomplish may be just too hard. In some cases, we’re just not there yet. Practice, in this case, is the right answer. As my French horn teacher used to say, "Practice does not make perfect. Perfect practice makes perfect." The problem, particularly in my guitar playing, is that I’m flying a little blind because I have no teacher helping me to practice perfectly. But imagine a tricky chord sequence that has had me failing during practice. If I don’t burn through the changes as often as possible in my practice time, I’ll definitely fail when I’m on stage, attempting to play it in front of a live audience.

 

In an effort to avoid the embarrassment of that failure, I sandbox. At least that’s how I see it.

 

The same analogy can be transposed to my thoughts about implementing a new build of a server, an application that may or may not work in my VMware architecture, etc. We don’t want to see these things fail in production, so we test them out as developer-type machines in our sandbox. This is a truly practical approach. By making sure that all testing has taken place prior to launching a new application in production, we're helping to ensure that we'll experience as little downtime as possible.

 

I look at these exercises as an effort to enable a high degree of functionality.

 

The same can be said as it reflects on training myself to sleep better, or to gain efficiency in my exercise regime. If I were to undertake a new weightlifting program, and my lifts are poorly executed, I could strain a muscle or muscle group. How would that help me? It wouldn’t. I work out without a trainer, so I rely on the resources that are available to me, such as YouTube, to learn and improve. And when I’ve got a set of motions down pat, the new exercise gets rolled into my routine. Again, I’ve sandboxed and learned what lessons I need to know by trial and error. This helps me avoid potentially hazardous roadblocks, and in the case of my guitar, not looking like a fool. Okay, let’s be clear... avoid looking more like a fool than usual.

 

I know that this doesn’t feel spontaneous. Actually, it isn’t. Again, as I relate it to musical performance, the correlation is profound. If I know the material through practice and have a degree of comfort with it, my improvisation can take place organically. I always know where I am, and where I want to be, in the midst of a performance, and thus my capacity to improvise can open up.

I'm in Orlando this week attending SQL Server Live, part of the Live360 events. If you are attending, stop by the SolarWinds booth and say hello. I’m happy to talk data and databases with you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Cray Supercomputers in Microsoft Azure

Remember when Microsoft announced at Ignite that they were going to offer quantum computing as a service in Azure? You need big computers for that. Looks like they found some.

 

An Opinion On Offense Against NAT

Every tech decision comes down to cost, benefit, and risk. NAT is no different. I’m not a network admin, but I find this discussion fascinating. It is also eerily similar to the myriad of debates I have with my data folk.

 

Munich IT chief slams city's decision to dump Linux for Windows

Munich is often hailed as a leader in how a city can operate without Microsoft software, but it has had enough of the dumpster fire they created by trying to switch to Linux. Somewhere, adatole is weeping.

 

YouTube to target disturbing videos masquerading as kids' shows

I’m trying to understand why this hasn’t been a focus for YouTube already. I’m guessing the answer is money.

 

Culture is the Behavior You Reward and Punish

Before you read this article, take five minutes to answer the question, “What makes people successful at your company?" Then, read this article.

 

Self-Operating Shuttle Bus Crashes After Las Vegas Launch

Classic PEBSWADS (Problem Exists Between Steering Wheel and Driver’s Seat). No robots were harmed as a result of this crash.

 

Someone Mapped Out Every Quantum Leap Scott Bakula Has Ever Done

Oh boy.

 

Sitting on top of the world, 1 billion users, and about to fall off a cliff. The world of tech changes quickly. Here's the cover of Forbes from 10 years ago:

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Most federal agencies have to contend with a constant lack of IT resources—staff and budget alike. In fact, many of these agencies have been performing IT tasks manually for years, heightening the pain of that already painful burden. Agencies, take heart. There is a way to ease that pain. In fact, there is a single solution that solves both of these issues: automation.

 

Today’s federal IT pros simply can’t afford to be burdened with the extra time manual interventions take. Automation can help eliminate wasted time, ease the burden of overtasked IT staff, and allow the IT team to focus on more mission-critical objectives.

 

Automating Alerts

 

Take alerts, for example. Alerts will always be a critical part of IT. But, there is a way to handle them that doesn’t take hours out of a federal IT professional’s day.

 

In a manual scenario, when a server alert is created because a disk is full, the response would be to, say, dump the temp directory. That takes time and effort. What if, instead, the administrator wrote a script for this task to take place automatically? That would save time and effort, and would take care of the fix more quickly than a manual approach.
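As a rough, hypothetical sketch of the kind of script described here, the Python below checks free space on a volume and purges a temp directory when it falls under a threshold. The mount point, temp path, and threshold are placeholders, and in practice the script would be triggered by the monitoring tool's alert action rather than run by hand.

```python
# Minimal sketch of an automated alert response: if free disk space drops
# below a threshold, purge an application temp directory. The mount point,
# temp path, and threshold below are hypothetical placeholders.
import shutil
from pathlib import Path

MOUNT_POINT = "/"                    # volume being monitored (placeholder)
TEMP_DIR = Path("/var/tmp/myapp")    # hypothetical temp directory to purge
MIN_FREE_RATIO = 0.10                # act when less than 10% free

def free_ratio(mount: str) -> float:
    usage = shutil.disk_usage(mount)
    return usage.free / usage.total

def purge_temp(temp_dir: Path) -> int:
    removed = 0
    for entry in temp_dir.glob("*"):
        if entry.is_file():
            entry.unlink()
            removed += 1
        elif entry.is_dir():
            shutil.rmtree(entry)
            removed += 1
    return removed

if __name__ == "__main__":
    if free_ratio(MOUNT_POINT) < MIN_FREE_RATIO:
        count = purge_temp(TEMP_DIR)
        print(f"Disk low on {MOUNT_POINT}; removed {count} temp entries")
    else:
        print("Disk space OK; no action taken")
```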

 

Here’s another scenario: Let’s say an application stops working. The manual approach to getting that application back up and running would take an inordinate amount of time. With automation, the administrator can write a script enabling the application to restart automatically.
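Along the same lines, here is a minimal sketch of an automated restart: a health check fails, so the script restarts the service. The health URL and systemd unit name are hypothetical, and the service manager call would need to match whatever platform the application actually runs on.

```python
# Minimal sketch: restart a service when an application health check fails.
# The URL and service name are hypothetical; a real setup would hook this
# into the monitoring tool's alert action instead of a standalone script.
import subprocess
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint
SERVICE = "myapp.service"                     # hypothetical systemd unit

def app_is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    if not app_is_healthy(HEALTH_URL):
        # systemctl is assumed here; swap in the platform's service manager.
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
        print(f"{SERVICE} restarted after failed health check")
```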

 

Of course, not all alerts can be solved with an automated response. That said, there are many that can, and that translates to time saved.

 

Beyond Alerts

 

Going beyond simply automating alerts, think about the possibility of a self-healing data center, where scripts and actions are performed automatically by monitoring software as issues happen.

 

There are tools available today that can absolutely provide this level of automation. Consider tools dedicated to change management and tracking, compliance auditing, and configuration backups. These types of tools will not only save administrators vast amounts of time and resources, but will also greatly reduce errors that are too often introduced through manual problem solving. These errors can lead to network downtime or even potential security breaches.

 

As a federal IT pro, your bottom line is the mission. The time you save through automation can help sharpen that focus. Enhancing that automation will allow additional time to focus on developing and deploying new and innovative applications, for instance, or ways to deliver those applications to users more effectively, so they can have the tools they need to do their jobs more efficiently.

 

When you automate, you make your life easier and your agency more agile, innovative, nimble, and secure.

 

Find the full article on our partner DLT’s blog TechnicallySpeaking.

Last year, we kicked off a new THWACK tradition: the December Word-a-Day Writing Challenge, which fostered a new kind of interaction in the THWACK community: By sharing personal essays, images, and thoughts, we started personal conversations and created human connections. In just one month, last year’s challenge generated nearly 20,000 views and over 1,500 comments. You can see the amazing writing and thoughtful responses here: Word-A-Day Challenge 2016

 

Much of this can be attributed to the amazing, engaging, thriving THWACK community itself. Whether the topic is the best starship captain, what IT lessons we can learn from blockbuster movies, or the best way to deal with nodes that have multiple IP addresses, we THWACKsters love to chat, debate, and most of all, help. But I also believe that some of last year's success can be attributed to the time of year. With December in sight, and some of us already working on budgets and project plans for the coming year, many of us find our thoughts taking an introspective turn. How did the last 12 months stack up to my expectations? What does the coming year hold? How can I best prepare to meet challenges head-on? By providing a simple prompt of a single, relatively innocuous word, the Word-a-Day Challenge gave many of us a much-needed blank canvas on which to paint our hopes, concerns, dreams, and experiences.

 

Which takes me to this year's challenge. Once again, each day will feature a single word. Once again, one brave volunteer will serve as the "lead" writer for the day, with his or her thoughts on the word of the day featured at the top of the post. Once again, you--the THWACK community--are invited to share your thoughts on the word of the day in the comments area below the lead post. And once again, we will be rewarding your contribution with sweet, sweet THWACK points.

 

What is different this year is that the word list has a decidedly more tech angle to it. Also, our lead writers represent a wider range of voices than last year, with contributors from the SolarWinds product, marketing, and sales teams, as well as from our MVP community, which gives you a chance to hear from some of our most experienced customer voices.

 

For those who are more fact-oriented, here are the challenge details:

  • The words will appear in the Word-a-Day challenge area, located here: Word-A-Day Challenge 2017
  • The challenge runs from December 1 to December 31
  • One word will be posted per day, at midnight, US CST (GMT -6)
  • The community has until the following midnight, US CST, to post a meaningful comment on that day's word
    • Comments will earn you 150 THWACK points
    • One comment per THWACK ID per day will be awarded points
    • Points will appear throughout the challenge, BUT NOT INSTANTLY. Chill.
    • A "Meaningful" comment doesn't necessarily mean "long-winded." We're simply looking for something more than a "Me, too!" or "Nice job!" sort of response
    • Words WILL post on the weekends, BUT...

    • ...For posts on Saturday and Sunday, the community will have until midnight CST on Monday (meaning the end of Monday, start of Tuesday) to share comments about those posts, for those folks who really REALLY don't work on the weekend (who ARE you people?!?)

  • Only comments posted in the comments area below the word for that day will be awarded THWACK points (which means comments will not count if posted on Geek Speak, product forums, your own blog, or on the Psychic Friends Network.)

 

Once again, the Word-a-Day 2017 challenge area can be found here: Word-A-Day Challenge 2017. While nothing will appear in this new forum until December 1, I encourage everyone to follow the page now to receive notifications about new posts as they appear.

 

If you would like to get to know our 28 contributors before the challenge starts, you can find their THWACK pages here:

 

And finally, I present the Word-a-Day list for 2017! I hope that posting them here and now will give you a chance to gather your thoughts and prepare your ideas prior to the Challenge. That way you can fully participate in the conversations that will undoubtedly arise each day.

 

  • December 01 - Identity
  • December 02 - Access
  • December 03 - Insecure
  • December 04 - Imposter
  • December 05 - Code
  • December 06 - FUD (Fear, Uncertainty, and Doubt)
  • December 07 - Pattern
  • December 08 - Virtual
  • December 09 - Binary
  • December 10 - Footprint
  • December 11 - Loop
  • December 12 - Obfuscate
  • December 13 - Bootstrap
  • December 14 - Cookie
  • December 15 - Argument
  • December 16 - Backbone
  • December 17 - Character
  • December 18 - Fragment
  • December 19 - Gateway
  • December 20 - Inheritance
  • December 21 - Noise
  • December 22 - Object
  • December 23 - Parity
  • December 24 - Peripheral
  • December 25 - Platform
  • December 26 - Utility
  • December 27 - Initial
  • December 28 - Recovery
  • December 29 - Segment
  • December 30 - Density
  • December 31 - Postscript

In the last two posts, I talked about databases and why network and storage are so darn important for their well-being. Network and storage are two things a lot of companies have struggled with, and in some ways mastered, over the years in their own data centers. But now it can seem like that hard-won mastery will become obsolete as things like the cloud quickly become the new normal.

 

Let's leave the automotive comparison for now and concentrate on how cloud strategy is affecting the way we use and support our databases.

 

There are a lot of ways to describe the cloud, but I think the best way for me to describe it is captured in the picture below:

As Master Yoda explains, cloud is not something magical. You cannot solve all of your problems by simply moving all your resources there. It will not mean that once your database is in the cloud that your storage and network challenges will magically vanish. It is literally just another computer that, in many cases, you will even share with other companies that have moved their resources to the cloud as well.

 

As we all know, knowledge is power. The cloud doesn’t change this. If we want to migrate database workloads to the cloud, we need to measure and monitor. We need to know the impact of moving things to the cloud. As with the Force, the cloud is always there ready to be used. It is the how and when that will impact the outcome the most. There are multiple ways to leverage the cloud as a resource to gain an advantage, but that doesn’t mean that moving to the cloud is the best answer for you. Offense can be the best defense, but defense can also be the best offense. Make sure to know thy enemy, so you won’t be surprised.
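One small, concrete way to start measuring: capture a latency baseline against both your current database host and the cloud endpoint you're evaluating. The endpoints below are placeholders; the point is comparing like-for-like round-trip times before you commit to a migration.

#!/usr/bin/env python3
"""Compare TCP round-trip latency to an on-premises and a cloud database endpoint."""
import socket
import statistics
import time

# Placeholder endpoints -- substitute your own hosts and ports.
ENDPOINTS = {
    "on-prem": ("db01.corp.example.com", 1433),
    "cloud": ("example-db.cloudprovider.example.net", 1433),
}


def connect_latency_ms(host: str, port: int, samples: int = 10) -> float:
    """Median time, in milliseconds, to open a TCP connection to host:port."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)


if __name__ == "__main__":
    for label, (host, port) in ENDPOINTS.items():
        print(f"{label}: {connect_latency_ms(host, port):.1f} ms median connect time")

Connection setup is only one dimension, and you'd want query-level numbers too, but even a simple baseline like this turns the "it's just another computer" point into something measurable.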

 

May the cloud be with you!

Home for a week before heading to Orlando and SQL Live. This is my third event in five weeks, and it will be four events in seven weeks once I get to AWS re:Invent. That's a bit more travel than usual, but being at events is the best way to communicate with customers. I love talking data and collecting feedback. Discussing common issues and ways to solve them is one of my favorite things. Yeah, I'm weird.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft says 40 percent of all VMs in Azure now are running Linux

Want to keep away the database administrators that should never be touching your servers? Run Linux. Without a GUI, they stay away. Problem solved.

 

Millions of Professional Drivers Will Be Replaced by Self-Driving Vehicles

An interesting factor that will help drive the development of autonomous vehicles. Oftentimes we hear about robots taking our jobs, but in this case, robots are going to take the jobs we don't want anyway.

 

As Amazon’s Alexa Turns Three, It’s Evolving Faster Than Ever

I broke down and purchased an Echo Dot, after having used one while on vacation recently. Ask Alexa to "open a box of cats." You're welcome.

 

Three years in a row – Microsoft is a leader in the ODBMS Magic Quadrant

Just in case you didn’t know who made the best data platform on the planet.

 

Microsoft presenter downloads Chrome after Edge fails

Don’t judge. We've all been there.

 

There’s No Fire Alarm for Artificial General Intelligence

A bit long, but well worth the time. As I was reading this I was thinking about the fire alarm for those of us discussing things like autonomous databases and the future of systems and database administration.

 

Security and privacy, startups, and the Internet of Things: some thoughts

Also a bit long, but worth the time. Some interesting insights into why and how we got ourselves into the data security and privacy mess we are in today. SPOILER ALERT: It's money.

 

One good thing about visiting Seattle is that I get to stop by the grave of Microsoft Bob.

[Image: tiles spelling out DATA THEFT]

In my eBook, 10 Ways We Can Steal Your Data, I reveal ways that people can steal or destroy the data in your systems. In this blog post, I'm focusing on un-monitored and poorly monitored systems.

 

Third-party Vendors

 

The most notorious case of this type is the 2013 Target data theft incident in which 40 million credit and debit cards were stolen from Target's systems. This data breach is a case study on the role of monitoring and alerting. It led to fines and costs in the hundreds of millions of dollars for the retailer. Target had security systems in place, but the company wasn't monitoring the security of their third-party supplier. And, among other issues, Target did not respond to their monitoring reports.

 

The third-party vendor, an HVAC services provider, had a public-facing portal for logging in to monitor their systems. Access to this system was breached via an email phishing attack. This information, together with a detailed security case study and architecture published by another Target vendor, gave the attackers the information they needed to successfully install malware on Target Point-of-Sale (POS) servers and systems.

 

Target listed their vendors on their website. This list provided a funnel for attackers to find and exploit vendor systems. The attackers found the right vulnerability to exploit with one of the vendors, then leveraged the details from the other vendor to do their work.

 

Misconfigured, Unprotected, and Unsecured Resources

 

The attackers used vulnerabilities (backdoors, default credentials, and misconfigured domain controllers) to work their way through the systems. These are easy things to scan for and monitor. So much so that "script kiddies" can do this without even knowing how their scripts work. Why didn't IT know about these misconfigurations? Why were default credentials left in enterprise data center applications? Why was information about ports and other configurations published publicly? Any one of these issues on its own might not have led to the same outcome, but as I'll cover below, together they formed the perfect storm of mismanaged resources that made the data breach possible.
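To make "easy to scan for" concrete, here is a minimal sketch that checks hosts you own for ports that shouldn't be listening. The host and port lists are placeholders, and in practice the findings would feed into your monitoring and alerting pipeline rather than a print statement.

#!/usr/bin/env python3
"""Illustrative audit: flag unexpected open ports on hosts you are authorized to scan."""
import socket

# Placeholder inventory -- replace with hosts you own and are authorized to audit.
HOSTS = ["10.0.0.10", "10.0.0.11"]
# Ports that commonly should not be exposed on POS or application servers.
SUSPECT_PORTS = [21, 23, 135, 445, 3389]


def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host in HOSTS:
        open_ports = [p for p in SUSPECT_PORTS if port_is_open(host, p)]
        if open_ports:
            print(f"{host}: unexpected open ports {open_ports}")
        else:
            print(f"{host}: clean")

Default credentials and misconfigured domain controllers call for different checks (configuration audits rather than port probes), which is exactly the kind of work the change management and compliance tools mentioned above automate.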

People

 

When all this was happening, Target's offsite monitoring team was alerted that unexpected activities were happening on a large scale. They notified Target, but there was no response.

 

Among the reasons given: there were too many false positives, so security staff had grown slow to respond to reports. Alert tuning would have helped with this. Other issues included having too few, and undertrained, security staff.

 

Pulling it All Together

 

There were monitoring controls in place at Target, as well as security staff, third-party monitoring services, and up-to-date compliance auditing. But the system as a whole failed due to not having an integrated, system-wide approach to security and threat management.

 

 

How can we mitigate these types of events?

 

  • Don't run many separate, disconnected monitoring and alerting systems
  • Follow data flows through the whole system, not just one system at a time
  • Tune alerts so that humans respond
  • Test responders to see if the alerts are working
  • Read the SANS case study on this breach
  • Don't let DevOps performance get in the way of threat management
  • Monitor for misconfigured resources
  • Monitor for unpatched resources
  • Monitor for rogue software installs
  • Monitor for default credentials
  • Monitor for open ports
  • Educate staff on over-sharing about systems
  • Monitor the press for reports about technical resources
  • Perform regular pen testing
  • Treat security as a daily operational practice for everyone, not just an annual review
  • Think like a hacker

 

I could just keep adding to this list.  Do you have items to add? List them below and I'll update.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

There is currently a global shortage of cybersecurity experts - not a shortage of a few thousand, or even tens of thousands, but of some one million experts.

 

This isn't good news for agencies, particularly with the rising complexity of hybrid IT infrastructures that combine in-house managed and cloud-based services. Traditional network monitoring tools and strategies fail to provide complete visibility into the entire network, making it difficult to pinpoint the root causes of problems, let alone anticipate those problems before they occur. This can open up security holes to outside attackers and insider threats.

 

Federal cybersecurity teams need complete views of their networks and applications, regardless of whether they are on-site or hosted. IT managers must also be able to easily and quickly troubleshoot, identify, and fix issues wherever they reside. Even better, they should be equipped with systems that can predict when a problem may occur based on historical data.

 

To help ensure the security of their networks, managers should explore options that offer three key benefits.

 

Better visibility. IT managers must manage and track multiple application stacks across their different environments. Therefore, they should consider solutions that track and monitor both on-premises and off-premises network activity.

 

These solutions must provide a single-pane-of-glass view into all network activities, and allow for review of data correlations across application stacks. Seeing different data types side by side can help you identify anomalies and track cybersecurity problems directly to the source. Timelines can be laid on top of this information to correlate the timing of an event to a specific slowdown or outage. This information can be used collectively to quickly remediate issues that could impact network security.

 

Better proactivity. Predictive analytics allows managers to create networks that effectively learn from past incidents and behaviors. Network monitoring tools can automatically scan for anomalies that have caused disruptions in the past. When something is detected, managers can receive notifications and directions on how to mitigate the problem before it happens.
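As a rough illustration of the idea (and not any particular product's implementation), a baseline-plus-deviation check over historical samples is enough to flag "this doesn't look like last week":

#!/usr/bin/env python3
"""Illustrative anomaly check: compare the latest metric sample against its recent baseline."""
import statistics


def is_anomalous(history, latest, threshold=3.0):
    """Flag *latest* if it sits more than *threshold* standard deviations from the mean of *history*."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold


if __name__ == "__main__":
    # Hypothetical interface utilization samples (percent) from the last hour.
    baseline = [22.1, 24.8, 23.5, 21.9, 25.2, 24.0, 22.7, 23.9]
    current = 61.3
    if is_anomalous(baseline, current):
        print(f"Utilization {current}% deviates sharply from the recent baseline -- raise an alert")
    else:
        print("Within normal range")

Production tools obviously go far beyond a single z-score, but the principle is the same: learn what normal looks like, then notify a human before abnormal becomes an outage.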

 

In essence, IT managers go from reacting to network issues to proactively preventing them. This is a handy strategy that helps keep networks secure and running without demanding a lot of resources.

 

Better collaboration. One of the benefits of having a smaller staff is that the network management team can be more nimble, as long as they have the right collaboration tools in place. Individuals must be able to easily share data, charts, and metrics with the rest of the team. This sets up a baseline, helps prevent confusion, and helps bring the team together to tackle problems in a more efficient manner.

 

Collaboration becomes even more critical when working with hybrid IT environments. Everyone needs to be able to work off the same canvas to address potential security problems.

 

Better security and hybrid IT environments can coexist, but agencies need to make sure the managers they have on staff are equipped with tools that bring these two vital concerns together in a cohesive, efficient, and effective manner.

 

Find the full article on GovLoop.

I've rarely seen an employee of a company purposefully put their organization at risk while doing their job. If that happens, the employee is generally not happy, which likely means they’re not really doing their job. However, I have seen employees apply non-approved solutions to daily work issues. Why? Several reasons, probably, but I don’t think any are intentionally used to put their company at risk. How do I know this?

 

My early days as an instructor

When I started out as a Cisco instructor, I worked for a now-defunct learning partner that used Exchange for email. The server was spotty, and you could only check email on the go by using their Microsoft VPN. I hated it because it didn’t fit any of my workflows and created unnecessary friction. In response to this, I registered a domain that looked similar to the company’s domain and set up Google apps, now called G-Suite, for the domain. That way I could forward my work emails to an address that I set up. No one noticed for several months.  I would reply to them from my G-Suite address and they just went with it. Eventually, most people were sending emails directly to my “side” email.

 

After becoming the CTO, I migrated the company off our rusty Exchange server and over to G-Suite, but I couldn't help but think that I would have reamed someone if they had done what I did. In hindsight, it was not the smartest thing to do. But I wasn't trying to cause any issues or leak confidential data; I was just trying to get my job done. Management needs to come to terms with the fact that if something makes an employee's work life difficult, the employee will find another way. And it may not be the way you want.

 

Plugging the holes

Recently, I saw a commercial for FlexTAPE. It was amazing. In one part, you see a swimming pool with a huge hole in the side with water gushing out of it. A guy slaps a piece of FlexTAPE over the hole from the inside of the pool, and the water stops flowing. It reminded me of some IT organizations that metaphorically attempt to fix holes by applying FlexTAPE to them. But, by that point, so much water has escaped that the business has already been negatively impacted. Instead, companies should be looking for the slow leaks that can be repaired early on.

 

Going back to my first example, once people learned how I was handling my email, they started asking me to set up email addresses for them so they could do the same. First one colleague, then another. Eventually, several instructors had an “alternate” email address that they were using regularly. The size of that particular hole grew quite large.

 

At some point, management realized that they couldn't pedal backward on the issue and were forced to update certain protocols. I often wonder how much confidential information could have been leaked once I was no longer the only one using the new email domain. Fortunately, those who were using it didn't have access to confidential information, but lots of content could have been exfiltrated. That would have been bad, but in my particular organization, I don't know if anyone would have known.

 

Coming full circle

Today I own my own business and deal with several external clients. When I have employees, I try to be flexible because I understand the problem with friction. I also understand that friction may not be the only reason one turns to a non-approved solution to get their work done. For core business operations, organizations would do well to clearly define approved software packages. Should an employee use services like Dropbox, iCloud, Google Drive, or Box.com? If they do, are there controls in place? How does the solution impact their role? Do employees have a way to express their frustrations without fear of reprimand? Having an open line of communication with an employee can help them feel like their role is important. It also helps management really understand the issues they face. If you neglect that, employees will choose their own solutions to get work done, and potentially create security issues. And we don’t want that now, do we?

The Just Us League

The original Justice League consisted of seven superheroes: Superman, Aquaman, Flash, Green Lantern, Martian Manhunter, Batman, and Wonder Woman. In a parallel universe, there is the IT Justice League, a union of IT superheroes that aims to protect IT organizations and their users from the perils of application downtime, high latency, and IT disasters. As IT pros, we like to think we possess superpowers of our own. Sometimes those powers are used for good; sometimes for evil. Most of the time, we use them to keep our jobs.

 

To have a successful, long career in IT, you can’t operate in a silo. You simply can't accomplish organizational, global goals by walking alone. Plus, life's too short to walk the journey all alone. In other words, it's best to share your IT pains and gains. Moreover, most of us are neither blessed with innate talents like Superman and Wonder Woman nor are we blessed with endless resources and capital like Bruce Wayne/Batman or Aquaman. Heck, even with the greatest willpower, we can't create something out of anything like Green Lantern or morph objects based on our desires like Martian Manhunter. In spite of this, we still stand and deliver when called upon, such as when trouble befalls our applications. Teamwork, collaboration, and community make integrating and delivering application services a much easier and more gratifying experience.

 

This is where the challenge of the Just Us League appears. The Just Us League is one where everyone knows your name and you just fit in because it's always been that way. But what if you're not a part of the original Just Us League? How do you join? What are the rules for joining and participating in the Just Us League? What practical tips do you use to open the Just Us League to include new-to-you people? How do you incorporate the trust-but-verify modus operandi to make sure you are nurturing vibrant and growing teams, collaboration, and community? Let me know in the comment section below.

While the previous two posts in this DevOps series have been open-ended and applicable to people on both the development and operations side, this post is focused on operations personnel. After all, if you're a developer and asking if you need to learn to code, then you might not be in the right job. I've recently had the chance to speak to several people in operations roles that are tackling what their organization is calling DevOps. Inevitably, the discussion turns to coding and whether or not operations staff need to pick up development skills. It depends, and in the sections below, I'll cover both sides of that argument.

 

Why You Shouldn't Learn Development as an Operator

It may be tempting to learn how to code so that you can be the sole source for your DevOps efforts. After all, who is closer to daily operations in your organization than you? And if you can cut out the developer entirely, you'll get to your DevOps nirvana sooner, right? Not usually. Here's why:

 

It's About Turning Code Around Quickly

You might be able to pick up the skills you need to get your program off the ground, but how long is it going to take? In addition to the learning curve of getting everything programmed the way it needs to be, operations has its own goals for supporting and operating the environment, and often the demands of one will not match up with the demands of the other. If you are fortunate enough to have a development team, they can really speed up this process. In addition to already having a solid foundation in coding, they can continue developing for the environment when operations has a fire that requires attention. By having the teams work together, you can ensure that the developers know what you need, without taking significant time away from your day-to-day job.

 

It's About Good Development Practices

When it boils down to it, DevOps is a development practice. It's a way for development and operations to communicate continually throughout the development process to ensure that the final product is what operations is hoping for. Behind the communications, there is also a whole host of development practices involved: proper code documentation, testing, and validation, to name just a few (see the small example below). If you're picking up coding for your DevOps effort, it's highly likely that some of these practices will be forgotten in the rush to get yourself up to speed on the coding basics.
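If "proper documentation, testing, and validation" sounds abstract, a tiny example shows the habit: every function carries a docstring, and a test pins down its expected behavior. The function here is just a stand-in.

"""A stand-in module showing the documentation-plus-test habit."""


def rotation_interval(days_since_change: int, max_age_days: int = 90) -> int:
    """Return how many days remain before a credential must be rotated.

    A negative result means the credential is already overdue.
    """
    return max_age_days - days_since_change


def test_rotation_interval():
    """Validation that travels with the code, so changes get caught before release."""
    assert rotation_interval(0) == 90
    assert rotation_interval(90) == 0
    assert rotation_interval(100) < 0


if __name__ == "__main__":
    test_rotation_interval()
    print("All checks passed")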

 

Value in Collaboration

Aside from the points mentioned in the previous two sections, there is value to having someone from outside your operations organization participate in the development process. You may know exactly what you want, but a healthy back and forth can help the development team come up with suggestions for handling an issue that you may not have previously considered. The development team may have a different, and sometimes better, perspective on how to approach and resolve a problem.

 

Why You Should Learn Development as an Operator

Whether you choose to learn how to code or not is largely going to depend on what the goal is. In this section, we'll go over some valid reasons for wanting to pick up this skillset.

 

There Are No Developers

In earlier posts in this series, several people have commented that their boss wants them to implement DevOps in their organization. The one problem? They have no developers. You could easily say, "That's not DevOps," and be right in this situation.  The fact of the matter is that the supervisor has said that they are doing DevOps, so in this case, you will need to pick up coding skills, even though you are still technically on the operations side.

 

Speak the Language

By learning how to code, you can pick up some non-coding-specific knowledge that can help out in your DevOps efforts. Understanding what it takes to write an application can give you more realistic expectations regarding timelines when you speak with the developers in your organization. Additionally, by understanding basic development concepts, you will be able to understand what things are possible (or more difficult) with code in your application. Finally, having a grasp of some of the basic terminology around coding may help you better communicate what you want from your applications to the developers who are writing them. Being able to understand basic coding concepts can expedite your DevOps processes by simplifying communications and setting achievable goals for the project.

 

No One Told Me There Would Be Learning

As the saying goes, if you're not moving forward, you're falling behind. One of the best reasons to learn to code as an operator is simple: expanding your skillset. By learning to code, your organization will benefit from you being able to speak the language with developers. You also increase your skillset and value, which could help in your career development.

 

Wrap-Up

Whether you decide to pick up coding or not is ultimately up to you. However, you may notice one key point from all of the reasoning above. Unless there are no developers in your organization, your goal for learning to code should not be to do the developers' job for them. Instead, if you choose to go that route, look at it as an opportunity to improve communications between teams, or to improve yourself.

I’m at PASS Summit this week in Seattle. If you are reading this and also at PASS Summit, stop by the SolarWinds booth and say hello.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft’s Sonar lets you check your website for performance and security issues

Microsoft continues to take steps to help users understand the importance of data security and privacy.

 

Amazon wants the keys to your home

Meanwhile, Amazon expects users to not understand the importance of security.

 

Microsoft finally kills off the Kinect, but the tech will live on in other devices

Proving that people are likely to get off the couch and move around just long enough to realize how much they prefer lying on the couch.

 

InfoSec Needs to Embrace New Tech Instead of Ridiculing It

A brilliant piece that tackles an issue that has frustrated me for a long time. I’m not an early-adopter of tech, but I don’t dismiss new things as quickly as they arise, and I don’t mock others for wanting to try new things.

 

Windows 10 tip: Turn on the new anti-ransomware features in the Fall Creators Update

I took the time to apply the update before taking my trip and was happy to come across this piece of information regarding an anti-ransomware feature in Windows 10. I enabled this and I think you should, too.

 

Does Apple Think We’re Clowns?

Yes. Next question.

 

Why Amazon and Microsoft Shouldn't Lose Sleep Over Oracle's New Cloud Database

Because (1) Oracle has only announced features, we still haven’t seen them; (2) Amazon and Microsoft actually have products with features and customers using them; and (3) they are a fraction of the cost that Oracle will charge. And yet, somehow, Larry will make money anyway.

 

How many minutes are in "several?" Asking for a friend.

 
