
Geek Speak


I'm heading to VMworld in Barcelona next week, so if you are there let me know as I would love to talk data and databases with you. I'm co-presenting with David Klee and we are going to be talking about virtualizing your database server. I have not been to Barcelona in 10 years, I'm looking forward to seeing the city again, even briefly.

 

Here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

Cloud by the Megawatt: Inside IBM’s Cloud Data Center Strategy

If you are like me you will read this article and think to yourself "Wait, IBM has a Cloud?"

 

VMware, AWS Join Forces in Battle for Enterprise Cloud Market

This partnership marks an important shift in the market and should cause some concern for Microsoft. That being said, I also see this partnership as a last-ditch effort to keep VMware relevant before being absorbed completely by Dell.

 

Here are the 61 passwords that powered the Mirai IoT botnet

Proving once again that the cobbler's children have no shoes, we have an army of devices built by people that should know better, but don't put into practice the basics of security.

 

Twitter, Microsoft, Google and others say they haven’t scanned messages like Yahoo

I feel I have heard this story before, and I think I know how it ends.

 

Are microservices for you? You might be asking the wrong question

"Change for the sake of change is rarely a sensible use of time." If only they taught this at all the charm schools known as the MBA.

 

Latency numbers every programmer should know

Not a bad start at a complete list, and I would argue that more than just programmers should know these numbers. I've had to explain the speed of light to managers before.

 

7 Times Technology Almost Destroyed The World

Here's hoping the robots can act a bit more like humans when it counts the most.

 

Autumn has arrived here in New England, and that means apple picking is in full swing:

[Image: apple picking in New England]

One of the questions that comes up during the great debate on the security of the Internet of Things (IoT) is the responsibility of device manufacturers to support those devices. When we buy a refrigerator or a toaster, we expect that device to last through the warranty date and well beyond. Assuming it is a well-made unit, it may last for a long time. But what about devices that only live as long as someone else wants them to?

Time's Up!

Remember Revolv? The smart hub for your home that was purchased by Nest? They stopped selling the device in October 2014 after being purchased, but a year and a half later they killed the service entirely. The Internet was noticeably upset and cried out that Google and Nest had done a huge disservice to their customers by killing the product. The outcry was so fierce that Nest ended up offering refunds for devices.

The drive to create new devices for IoT consumption is huge. Since most of them require some kind of app or service to operate correctly, it also stands to reason that these devices are reliant on the app to work properly. In the case of Revolv, once the app was shut down the device was no longer able to coordinate services and essentially became a huge paperweight. A few companies have chosen to create a software load that allows devices to function in isolated mode, but those are few and far between.

The biggest security concern here is what happens when devices that are abandoned but still functioning are left to their own ends. A fair number of the devices used in the recent IoT botnet attacks were abandonware cameras that were running their last software update. Those devices aren't going to have security holes patched or get new features. The fact that they work at all owes more to them being IP-based devices than anything else.

Killing In The Name Of IoT

However, if those manufacturers had installed a kill switch instead of allowing the cameras to keep working, it would have prevented some of the chaos from the attack. Yes, the buyers of those cameras would have been irritated that the functionality was lost. But it could have made a massive security issue easier to deal with.

Should manufacturers be responsible for installing a software cut-out that allows a device to be remotely disabled when the support period expires? That's a thorny legal question. It opens the manufacturer up to lawsuits and class action filings about creating products with known defects. But it also raises the question of whether or not these same manufacturers should have a greater duty to the safety of the Internet.

And this isn't taking into account the huge issues with industrial IoT devices. Could you imagine what might happen if an insulin pump or an electrical smart meter was compromised and used as an attack vector? The damage could be catastrophic. Worse yet, even with a kill switch or cut-out to prevent transmission, neutering those devices renders them non-functional and potentially hazardous. Medical devices that stop working cause harm and possibly death. Electrical meters that go offline create hazards for people living in homes.

Decisions, Decisions

There's no easy answer to all these problems. Someone is going to be mad no matter what we decide. Either these devices live on in their last known configuration and can be exploited, or they get neutered when they shut down. The third option, having manufacturers support devices forever, isn't feasible either. So we have to make some choices here. We have to stand up for what we think is right and make it happen.

Make sure your IoT policy spells out what happens to out-of-support devices. Make sure your users know that you are going to implement a traffic kill switch if your policy spells it out. Knowledge is key here. Users will understand your reasons if communicated ahead of time. And using tools from SolarWinds to track those devices and keep tabs on them helps you figure out when it's time to implement those policies. Better to have it sorted out now than to have to deal with a problem when it happens.


Image courtesy of Spirit-Fire on Flickr

 

I think I'd like to mirror a session title from the recent ThwackCamp and subtitle this particular post "Don't Hate Your Monitoring." We all face an enormous challenge in monitoring our systems and infrastructure, and in part that's caused by an underlying conflict:

 

[Images: "monitor all the things" meme and "Color Overload" photo]

Image Courtesy D Sharon Pruitt

 

This is a serious problem for everybody. We want to monitor everything we possibly can. We NEED to monitor everything we can, because heaven help us if we miss something important because we don't have the data available. At the same time, we cannot possibly cope with the volume of information coming into our monitoring systems; it's overwhelming, and trying to manually sift through it all to find the alerts or data that actually matter to the business is a losing battle. And then we wonder why people are stressed, and why we have a love/hate relationship with our monitoring systems!

 

How can the chaos be minimized? Well, some manual labor is required up front, and after that it will be an iterative process that's never truly complete.

 

Decide what actually needs to be monitored

It's tempting to monitor every port on every device, but do you really need to monitor every access switch port? Even if you want to maintain logs for those ports for other reasons, you'll want to filter alerts for those ports so that they don't show up in your day-to-day monitoring. If somebody complains about bad performance, then digging into the monitoring and alerting is a good next step (maybe the port is fully utilized, or spewing errors constantly), but that's not business critical, unless, perhaps, that's your CEO's switch port.

 

Focus on which alerts you generate in the first place

  • Use Custom Properties to apply custom labels to related systems so that alerts can be generated in an intelligent way.
  • Before diving into the Alert Suppression tab to keep things quiet, look carefully at Trigger Conditions and try to add intelligent queries in order to minimize the generation of alerts in the first place. The trigger conditions allow for some quite complex nested logic which can really help make sure that only the most critical alerts hit the top of your list.
  • Use trigger conditions to suppress downstream alerts (e.g., if a site router is down, don't trigger alerts from devices behind that router that are now inaccessible). A minimal sketch of that kind of status check follows this list.
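For readers who want to see what that status check looks like outside the Orion web console, here is a minimal Python sketch using the SolarWinds Orion SDK (the orionsdk package). The server name, credentials, the "Site" custom property, and the assumption that an edge router's caption contains "edge" are all placeholders for your own environment; in practice, the real work normally lives in the alert's trigger condition rather than in external code.

```python
# Minimal sketch: decide whether a node alert is worth raising by checking
# the status of its site's edge router first. Server, credentials, the
# "Site" custom property, and the "edge" naming convention are assumptions.
from orionsdk import SwisClient

swis = SwisClient("orion.example.local", "monitor_user", "monitor_pass")

def edge_router_is_down(site_name):
    """Return True if the site's edge router reports any status other than Up."""
    rows = swis.query(
        "SELECT Status FROM Orion.Nodes "
        "WHERE CustomProperties.Site = @site AND Caption LIKE '%edge%'",
        site=site_name,
    )
    # In Orion, node Status 1 means Up; treat anything else as unreachable.
    return any(r["Status"] != 1 for r in rows["results"])

def should_alert(node):
    """Suppress alerts for nodes sitting behind a downed edge router."""
    return not edge_router_is_down(node["Site"])
```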

 

Suppress Alerts!

I know I just said not to dive into Alert Suppression, but it's still useful as the cherry on top of the cream that is carefully managed triggers.

  • It's better in general to create appropriate rules governing when an alert is triggered than to suppress it afterwards. Alert suppression is in some ways rather a blunt tool; if the condition is true, all alerts are suppressed.
  • One way to achieve downstream alert suppression is to add a suppression condition to devices on a given site that queries for the status of that site's edge router; if the router status is not "Up", the condition becomes true, and it should suppress the triggered alerts from that end device. This could also be achieved using Trigger Conditions, but it's cleaner in my opinion to do it in the Alert suppression tab. Note that I said "not Up" for the node status rather than "Down"; that means that the condition will evaluate to true for any status except Up, rather than explicitly requiring it to be only "Down". The more you know, etc...

 

Other features that may be helpful

  • Use dependencies! Orion is smart enough to know the implicit dependencies of, say, CPU and Memory on the Host in which they are found, but site or application-level dependencies are just a little bit trickier for Orion to guess. The Dependencies feature allows you to create relationships between groups of devices so that if the 'parent' group is down, alerts from the 'child' group can be automatically suppressed. This is another way to achieve downstream alert suppression at a fairly granular level.
  • Time-based monitoring may help for sites where the cleaner unplugs the server every night (or the system has a scheduled reboot), for example.
  • Where appropriate, consider using the "Condition must exist for more than <x> minutes" option within Trigger Conditions to avoid getting an alert for every little blip in a system. This theoretically slows down your notification time, but can help clear out transient problems before they disturb you.
  • Think carefully about where each alert type should be sent. Which ones are pager-worthy, for example, versus ones that should just be sent to a file for historical record keeping?

 

Performance and Capacity Monitoring

  • Baselining. As I discussed in a previous post, if you don't know what the infrastructure is doing when things are working correctly, it's even harder to figure out what's wrong when there's a problem. This might apply to element utilization, network routing issues, and more. This information doesn't have to be in your face all the time, but having it to hand is very valuable. A minimal sketch of the idea follows.
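To make the idea concrete, here is a minimal, tool-agnostic Python sketch of a baseline check; the sample values and the two-standard-deviation threshold are illustrative assumptions, not a description of any SolarWinds feature.

```python
# Minimal baselining sketch: learn "normal" from historical samples, then
# flag new readings that deviate noticeably. Thresholds are illustrative.
from statistics import mean, stdev

def build_baseline(samples):
    """Return (mean, stdev) for a list of historical utilization readings."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, sigmas=2.0):
    """Flag a reading more than `sigmas` standard deviations from the mean."""
    avg, sd = baseline
    return abs(value - avg) > sigmas * sd

history = [22.5, 24.1, 23.8, 25.0, 22.9, 24.4]   # e.g., % link utilization
baseline = build_baseline(history)
print(is_anomalous(61.0, baseline))               # True -- worth a look
print(is_anomalous(24.7, baseline))               # False -- business as usual
```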

 

BUT!

 

Everything so far talks about how to handle alerting when events occur. This is "reactive" monitoring, and it's what most of us end up doing. However, to achieve true inner peace we need to look beyond the triggers and prevent the event from happening in the first place. Obviously there's not much that can be done about power outages or hardware failures, but in other areas we can help ourselves by being proactive.

 

Proactive monitoring basically means preempting avoidable alerts. SolarWinds software offers a number of features to forecast and plan for capacity issues before they become alerts. For example, Virtualization Manager can warn of impending doom for VMs and their hosts; Storage Resource Monitor tracks capacity trends for storage devices; Network Performance Monitor can forecast exhaustion dates on the network; User Device Tracker can monitor switch port utilization. Basically, we need to use the forecasting/trending tools provided to look for any measurement that looks like it's going to hit a threshold, check with the business to determine any additional growth expected, then make plans to mitigate the issue before it becomes one.
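Under the hood, that kind of forecast is essentially trend extrapolation. The sketch below is a deliberately simple linear fit with made-up numbers; the products mentioned above handle seasonality and confidence ranges far better, but the principle is the same.

```python
# Minimal capacity-forecast sketch: fit a straight line to historical usage
# and estimate when it crosses a threshold. Numbers are illustrative only.
from datetime import date, timedelta

def days_until_threshold(samples, threshold):
    """samples: list of (day_index, used_pct). Returns days until threshold,
    or None if usage is flat or shrinking."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x, _ in samples)
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope - samples[-1][0]

usage = [(0, 61.0), (30, 64.5), (60, 68.2), (90, 71.8)]  # % of a volume used
days = days_until_threshold(usage, threshold=90.0)
if days is not None:
    print("Projected to hit 90% around", date.today() + timedelta(days=days))
```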

 

Hating Our Monitoring

 

We don't have to hate our monitoring. Sadly, the tools tend to do exactly what we tell them to, and we sometimes expect a little too much from them in terms of having the intelligence to know which alerts are important, and which are not. However, we have the technology at our fingertips, and we can make our infrastructure monitoring dance, if not to our tune (because sometimes we need something that just isn't possible at the moment), then at least to the same musical genre. With careful tuning, alerting can largely be mastered and minimized. With proactive monitoring and forecasting, we can avoid some of those alerts in the first place. After all -- and without wishing to sound too cheesy -- the best alert is the one that never triggers.

For the unprepared, managing your agency’s modern IT infrastructure with all its complexity can be a little scary. Evolving mandates, the constant threat of a cyber-attack, and a connected workforce that demands access to information when they want it, where they want it, place more pressure on the government’s IT professionals than ever. And at the heart of it all is still the network.

 

At SolarWinds we know today’s government IT pro is a Bear Grylls-style survival expert. And in true Man vs. Wild fashion, the modern IT pro needs a network survival guide to be prepared for everything.

 

Assess the Network

 

Every explorer needs a map. IT pros are no different, and the map you need is of your network. Understanding your network's capabilities, needs, and resources is the first step of network survival.

 

Ask yourself the following questions:

 

  • How many sites need to communicate?
  • Are they located on the intranet, or externally and accessed via a datacenter?
  • Is the bulk of my traffic internal, or is it all bound for the Internet? How about any key partners and contractors?
  • Which are the key interfaces to monitor?
  • Where should deep packet inspection (DPI) agents go?
  • What is the scope and scale of what needs to be monitored?

 

Acknowledge that Wireless is the Way

 

What’s needed are tools like wireless heat maps to manage over-subscribed access points and user device tracking tools that allow agencies to track rogue devices and enforce their BYOD policies. The problem is that many of these tools have traditionally been cost-prohibitive, but newer options you might not be aware of open the door to implementing these technologies.

 

Prepare for the Internet of Things

 

The government can sometimes be slower to adopt new technology, but agencies are increasingly experimenting with the Internet of Things. How do you overcome the challenges that come with it? True application firewalls can untangle the sneakiest device conversations, get IP address management under control, and get gear ready for IPv6. They can also classify and segment your device traffic; implement effective quality of service to ensure that critical business traffic has headroom; and of course, monitor flow.

 

Understand that Scalability is Inevitable

 

It is important to leverage capacity forecasting tools, configuration management, and web-based reporting to be able to predict and document scalability and growth needs so you can justify your budget requests and stay ahead of infrastructure demands.

 

Just admit it already—it’s All About the Application

 

Everything we do is because of and for the end-users. The whole point of having a network is to achieve your mission and support your stakeholders. Seek a holistic view of the entire infrastructure, including the impact of the network on application issues, and don’t silo network management anymore.

 

A Man is Only as Good as His Tools

 

Having sophisticated network monitoring and management tools is an important part of arming IT professionals for survival, but let’s not overlook the need for certifications and training, so the tools can be used to effectively manage the network.

 

Revisit, Review, Revise

 

What’s needed to keep your network running at its peak will change, so your plans need to adapt to keep up. Constantly reexamine your network to be sure that you’re addressing changes as they arise. Successful network management is a cyclical process, not a one-way journey.

 

Find the full article on Federal Technology Insider.

A Neverending IT Journey around Optimizing, Automating, and Reporting on Your Virtual Data Center

Introduction

 

The journey of one begins with a single virtual machine (VM) on a host. The solitary instance in a virtual universe with the vastness of the data center as a mere dream in the background. By itself, the VM is just a one-to-one representation of its physical instantiation. But virtualized, it has evolved, becoming software defined and abstracted. It’s able to draw upon a larger pool of resources should its host be added to a cluster. With that transformation, it becomes more available, more scalable, and more adaptable for the application that it is supporting.

 

The software abstraction enabled by virtualization provides the ability to quickly scale across many axes without scaling the overall physical footprint. The skills required to do this efficiently and effectively are encompassed by optimization, automation, and reporting. The last skill is key because IT professionals cannot save their virtual data center if no one listens to and seeks to understand them. Moreover, the former two skills are complementary. And as always, actions speak louder than words.

 


 

In the following weeks, I will cover practical examples of optimization, automation, and reporting in the virtual data center. Next week will cover optimization in the virtual data center. The week after will follow with automation. And the final week will discuss reporting. In this case, order does matter. Automation without optimization consideration will lead to work being done that serves no business-justified purpose. Optimization and automation without reporting will lead to insufficient credit for the work done right, as well as misinforming decision makers of the proper course of action to take.

 

I hope you’ll join me for this journey into the virtual IT universe.

Last week was Microsoft Ignite in Atlanta. I had the privilege of giving two presentations, one of which was titled "Performance Tuning Essentials for the Cloud DBA." I was thinking of sharing the slides, but the slides are just there to enhance the story I was telling. So I've decided instead to share the narrative here in this post today, including the relevant images. As always, you're welcome.

 

I started with two images from the RightScale 2016 State of the Cloud Report:

[Slides: RightScale 2016 State of the Cloud Report survey results]

 

The results of that survey help to show that hybrid IT is real, it's here, and it is growing. Using that information, combined with the rapid advances we see in the technology field with each passing year, I pointed out how we won't recognize IT departments in five years.

 

For a DBA today, and also the DBA in five years, it shouldn't matter where the data resides. The data can be either down the hall or in the cloud. That's the hybrid part, noted already. But how does one become a DBA? Many of us start out as accidental DBAs, or accidental whatevers, and in five years there will be accidental cloud DBAs. And those accidental cloud DBAs will need help. Overwhelmed at first, the cloud DBA will soon learn to focus on their core mission:

 

[Image: the cloud DBA's core mission]

 

Once the cloud DBA learns to focus on his or her core mission (recovery), they can start learning how to do performance troubleshooting (because bacon ain't free). I believe that when it comes to troubleshooting, it is best to think in buckets. For example, if you are troubleshooting a virtualized database server workload, the first question you should be asking yourself is, "Is the issue inside the database engine or is it external, possibly within the virtual environment?" In time, the cloud DBA learns to think about all kinds of buckets: virtual layers, memory, CPU, disk, network, locking, and blocking. Existing DBAs already have these skills. But as we transition to being cloud DBAs, we must acknowledge that there is a gap in our knowledge and experience.

 

That gap is the network.

 

Most DBAs have little to no knowledge of how networks work, or how network traffic is utilized. A database engine, such as SQL Server, has little knowledge of any network activity. There is no DMV to expose such details, and a DBA would need to collect O/S level details on all the machines involved. That's not something a DBA currently does; we take networks for granted. To a DBA, networks are like the plumbing in your house. It's there, and it works, and sometimes it gets clogged.

 

But the cloud demands that you understand networks. Once you go cloud, you become dependent upon networks working perfectly, all the time. One little disruption, because someone didn't call 1-800-DIG-SAFE before carving out some earth in front of your office building, and you are in trouble. And it's more than just the outage that may happen from time to time. No. You need to know about your network as a cloud DBA for the following reasons: RPO, RTO, SLA, and MTTI. I've talked before about RPO and RTO here, and I think anyone reading this would know what SLA means. MTTI might be unfamiliar, though. I borrowed that from adatole. It stands for Mean Time To Innocence, and it is something you want to keep as short as possible, no matter where your data resides.

 

You may have your RPO and RTO well-defined right now, but do you know if you can actually meet those metrics? Turns out the internet is a complicated place:

 

[Slide: the complexity of the internet between you and your data]

 

Given all that complexity, it is possible that data recovery may take a bit longer than expected. When you are a cloud DBA, the network is a HUGE part of your recovery process. The network becomes THE bottleneck that you must focus on first and foremost in any situation. In fact, when you go cloud, the network becomes the first bucket you need to consider. The cloud DBA needs to be able to determine, in five minutes or less, whether the network is the issue before spending any time trying to tune a query. And that means the cloud DBA is going to have to understand what is clogging that pipe:

 

[Slide: a breakdown of the traffic clogging the pipe]

 

Because when your phone rings, and the users are yelling at you that the system is slow, you will want to know that the bulk of the traffic in that pipe is Pokemon Go, and not the data traffic you were expecting.
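Part of that five-minutes-or-less triage can be as blunt as timing a TCP connection to the database endpoint before touching a single query plan. Here is a minimal Python sketch; the host, port, and 200 ms threshold are assumptions for illustration, not recommendations.

```python
# Minimal "is it the network?" sketch: time a TCP connect to the database
# endpoint a few times before blaming a query. Host, port, and threshold
# are placeholders -- tune them for your own environment.
import socket
import time

def connect_times_ms(host, port, attempts=5, timeout=3.0):
    results = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results.append((time.perf_counter() - start) * 1000)
        except OSError:
            results.append(None)   # treat failures as missing samples
    return results

samples = connect_times_ms("db.example.cloud", 1433)
good = [s for s in samples if s is not None]
if not good or max(good) > 200:    # arbitrary threshold for illustration
    print("Look at the network first:", samples)
else:
    print("Network looks sane, go tune that query:", samples)
```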

 

Here's a quick list of tips and tricks to follow as a cloud DBA.

 

  1. Use the Azure Express! Azure ExpressRoute is a dedicated link to Azure, and you can get it from Microsoft or a managed service provider that partners with Microsoft. It's a great way to reduce the complex web known as the internet, and give you better throughput. Yes, it costs extra, but only because it is worth the price.
  2. Consider Alt-RPO, Alt-RTO. For those times when your preferred RPO and RTO needs won't work, you will want an alternative. For example, you have an RPO of 15 minutes, and an RTO of five minutes. But the network is down, so you have an Alt-RPO of an hour and an Alt-RTO of 30 minutes, and you are storing backups locally instead of in the cloud. The business would rather be back online, even to the last hour, as opposed to waiting for the original RPO/RTO to be met. A minimal sketch of this kind of check follows the list.
  3. Use the right tools. DBAs have no idea about networks because they don't have any tools to get them the details they need. That's where a company like SolarWinds comes in to be the plumber and help you unclog those pipes.
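As promised in tip 2, here is a minimal Python sketch of an RPO check: given the timestamp of the most recent backup, is it within the preferred RPO, the Alt-RPO, or neither? The 15-minute and one-hour values simply mirror the example above.

```python
# Minimal RPO check sketch: is the newest backup recent enough to meet the
# preferred RPO, the alternate RPO, or neither? Values mirror the example.
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)        # preferred recovery point objective
ALT_RPO = timedelta(hours=1)       # fallback when the cloud path is down

def rpo_status(last_backup_utc, now=None):
    age = (now or datetime.utcnow()) - last_backup_utc
    if age <= RPO:
        return "within RPO"
    if age <= ALT_RPO:
        return "within Alt-RPO only -- expect some data loss"
    return "outside both objectives -- escalate now"

# Example: the last cloud backup landed 40 minutes ago.
print(rpo_status(datetime.utcnow() - timedelta(minutes=40)))
```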

 

Thanks to everyone that attended the session last week, and especially to those that followed me back to the booth to talk data and databases.


In previous posts, I've written about making the best of your accidental DBA situation.  Today I'm going to give you my advice on the things you should focus on if you want to move from accidental DBA to full data professional and DBA.

 

As you read through this list, I know you'll be thinking, "But my company won't support this, that's why I'm an accidental DBA." You are 100% correct. Most companies that use accidental DBAs don't understand the difference between a developer and a DBA, so many of these items will require you to take your own initiative. But I know, since you are reading this, that you are already a great candidate to be that DBA.

 

Training

 

Your path to becoming a DBA has many forks, but I'm a huge fan of formal training. This can be virtual or in-person. By virtual I mean a formal distance-learning experience, with presentations, instructor Q&A, hands-on labs, exams and assignments. I don't mean watching videos of presentations. Those offerings are covered later.

 

Formal training gives you greater confidence and evidence that you learned a skill, not that you only understand it. Both are important, but when it comes to that middle-of-the-night call alerting you that databases are down, you want to know that you have personal experience in bringing systems back online.

 

Conferences

Conferences are a great way to learn, and not just from invited speakers. Speaking with fellow attendees, via the hallway conferences that happen in conjunction with the formal event,  gives you the opportunity to network with people who boast a range of skill levels. Sharing resource tips with these folks is worth the price of admission.

 

User Groups and Meetups

I run the Toronto Data Platform and SQL Server Usergroup and Meetup, so I'm a bit biased on this point. However, these opportunities to network and learn from local speakers are often free to attend. Such a great value! Plus, there is usually free pizza. Just saying. You will never regret having met other data professionals in your local area when you are looking for your next project.

 

Online Resources

Online videos are a great way to supplement your formal training. I like Pluralsight because it's subscription-based, not per video. They offer a free trial, and the annual subscription is affordable, given the breadth of content offered.

 

Webinars given by experts in the field are also a great way to get real-world experts to show and tell you about topics you'll need to know. Some experts host their own, but many provide content via software vendor webinars, like these from SolarWinds.

 

Blogs

Blogs are a great way to read tips, tricks, and how-tos. It's especially important to validate the tips you read about. My recommendation is that you validate any rules of thumb or recommendations you find by going directly to the source: vendor documentation and guidelines, other experts, and asking for verification from people you trust. This is especially true if the post you are reading is more than three months old.

 

But another great way to become a professional DBA is to write content yourself. As you learn something and get hands-on experience using it, write a quick blog post. Nothing makes you understand a topic better than trying to explain it to someone else.

 

Tools

I've learned a great deal more about databases by using tools that are designed to work with them. This can be because the tools offer guidance on configuration, do validations and/or give you error messages when you are about to do something stupid.  If you want to be a professional DBA, you should be giving Database Performance Analyzer a test drive.  Then when you see how much better it is at monitoring and alerting, you can get training on it and be better at databasing than an accidental DBA with just native database tools.

 

Labs

The most important thing you can do to enhance your DBA career is to get hands-on with the actual technologies you will need to support. I highly recommend you host your labs via the cloud. You can get a free trial for most. I recommend Microsoft Azure cloud VMs because you likely already have free credits if you have an MSDN subscription. There's also a generous 30-day trial available.


I recommend you set up VMs with various technologies and versions of databases, then turn them off.  With most cloud providers, such as Azure, a VM that is turned off has no charge except for storage, which is very inexpensive.  Then when you want to work with that version of software, you turn on your VM, wait a few minutes, start working, then turn it off when you need to move on to another activity.
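If you would rather script that on/off cycle than click through the portal, here is a minimal sketch using the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-compute packages, and the subscription ID, resource group, and VM names are placeholders.

```python
# Minimal lab on/off sketch with the Azure SDK for Python. Deallocating
# (not just stopping) a VM is what avoids compute charges; storage still bills.
# The subscription ID, resource group, and VM name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def lab_off(group, vm):
    """Deallocate a lab VM so it stops accruing compute charges."""
    compute.virtual_machines.begin_deallocate(group, vm).wait()

def lab_on(group, vm):
    """Start a lab VM back up when it's time to practice."""
    compute.virtual_machines.begin_start(group, vm).wait()

lab_off("lab-rg", "sql2016-lab")   # done for the day
lab_on("lab-rg", "sql2016-lab")    # time to practice restores again
```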

 

The other great thing about working with Azure is that you aren't limited to Microsoft technologies.  There are VMs and services available for other relational database offerings, plus NoSQL solutions. And, of course, you can run these on both Windows and Linux.  It's a new Microsoft world.

 

The next best thing about having these VMs ready at your fingertips is that you can use them to test new automation you have developed, test new features you are hoping to deploy, and develop scripts for your production environments.

 

Think Like a DBA, Be a DBA

The last step is to realize that a DBA must approach issues differently than a developer, data architect, or project manager would. A DBA's job is to keep the database up and running, with correct and timely data. That goal requires different thinking and different methods. If you don't alter your problem-management thinking, you will likely come to the wrong cost, benefit, and risk decisions. So think like a DBA, be a DBA, and you'll get fewer middle-of-the-night calls.

Thanks to everyone that stopped by the booth at Microsoft Ignite last week; it was great talking data and databases. I'm working on a summary recap of the event, so look for that as a separate post in Geek Speak later this week.

 

In the meantime, here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

Will IoT become too much for IT?

The IoT is made up of billions of unpatched and unmonitored devices; what could possibly go wrong?

 

Largest ever DDoS attack: Hacker makes Mirai IoT botnet source code freely available on HackForums

This. This is what could go wrong.

 

Clinton Vows To Retaliate Against Foreign Hackers

I don't care who you vote for, there is no way you can tell me you think either candidate has any idea what to do about the Cyber.

 

Marissa Mayer Forced To Admit That She Let Your Mom’s Email Account Get Hacked

For example, here is one of the largest tech companies making horrible decisions after 500 MILLION accounts were hacked. I have little faith in anyone when it comes to data security except for Troy Hunt. Is it too late to elect him Data Security Czar?

 

California OKs Self-Driving Vehicles Without Human Backup

Because it seemed weird to not include yet another link about self-driving cars. 

 

BlackBerry To Stop Making Smartphones

The biggest shock I had while reading this was learning that BlackBerry was still making smartphones. 

 

Fake island Hy-Brasil printed for 500 years

I'm going with the theory that this island was put there in order to catch people making copies of the original work, but this article is a nice reminder why crowd-sourced pieces of work (hello Wikipedia) are often filled with inaccurate data.

 

At Microsoft Ignite last week patrick.hubbard found this documentation gem:

 

[Image: the documentation gem in question]

I’ve come to a crossroads. Regular SolarWinds Lab viewers and new THWACKcamp attendees might have noticed my fondness for all things programmable. I can’t help smiling when I talk about it; I came to enterprise IT after a decade as a developer. But if you run across my spoor outside of SolarWinds, you may notice a thinly-veiled, mild but growing despair. On the flight back from Microsoft® Ignite last week, I reluctantly accepted reality: IT, as we know it, will die.

 

Origin of a Bummer

 

On one hand, this should be good news because a lot of what we deal with in IT is, quite frankly, horrible and completely unnecessary. I’m not referring to managers who schedule weekly all-hands that last an hour because that’s the default meeting period in Outlook®. Also not included are 3:00am crisis alerts that prompt you to stumble to the car with a baseball hat because the issue is severe enough to take out the VPN, too. Sometimes it’s okay to be heroic, especially if that means you get to knock off early at 2:00pm.

 

The perennial horror of IT is boredom. Tedium. Repetitive, mindless, soul-crushing tasks that we desperately want to remediate, delegate, or automate, but can’t because there’s no time, management won’t let us, or we don’t have the right tools.

 

All of this might be okay, except for two things: accelerating complexity and the move to the cloud. The skinny jeans-clad new kids can’t imagine any other way, and many traditional large enterprise IT shops also hit the cloud hookah and discovered real benefits. Both groups recognized dev as a critical component, and their confidence comes from knowing that they can and will create whatever their IT requires to adapt and realize the benefits of new technology.

 

No, the reason this is a bummer – if only for five or so years – is that it’s going to hit the people I have the greatest affinity for the hardest: small to medium business IT, isolated independent department IT in large organizations, and superstar admins with deep knowledge in highly specialized IT technology. In short, those of you who’ve worn all the hats I have at one point or another over the last two decades.

 

I totally understand the reasonable urge to stand in front of a gleaming Exchange rack and tell all the SaaS kids to get off your lawn. But that’s a short-term solution that won’t help your career. In fact, if you’re nearing or over 50, this is an especially critical time to be thinking about the next five years. I watched some outstanding rack-and-stack app infrastructure admins gray out and fade away because they resisted the virtualization revolution. Back then, I had a few years to get on board, gain the skills to tame VMs, and accelerate my career.

 

This time, however, I’m actively looking ahead, transitioning my education and certification, and working in production at least a little every week with cloud and PaaS technology. I’m also talking to management about significant team restructuring to embrace new techniques.

 

Renewed Mission

 

Somewhere over Louisiana I accepted the macro solution that we’ll each inevitably face, but also my personal role in it. We must tear down IT as we know it, and rebuild something better suited to a data center-less reality. We’ll abandon procedural ticket-driven change processes, learn Sprints, Teaming, Agile, and, if we’re lucky, get management on board with a little Kanban, perhaps at a stand-up meeting.

 

And if any or all of that sounds like New Age, ridiculous mumbo jumbo, that’s perfectly okay. That is a natural and understandable reaction of pragmatic professionals who need to get tish done. My role is to help my peers discover, demystify, and develop these new skills. Further, it’s to help management stop thinking of you as rigidly siloed and ultimately replaceable when new technology forces late-adopting organizations into abrupt shifts and spasms of panicked change.

 

But more than that, if these principles are even partially adopted to enable DevOps-driven IT, life is better. The grass really is greener. I’ve seen it, lived it, and, most surprising to this skeptical, logical, secret introvert, I’ve come to believe it. My job now is to combine my fervor for the tools we’ll use with a career of hard-won IT lessons and do everything I can to help. Don’t worry. This. Is. Gonna. Be. Awesome.

Simplifying network management is a challenging task for any organization, especially those that have chosen a best-of-breed route and have a mix of vendors. I ask my customers to strive for these things when looking to improve their network management and gain some efficiency.

 

  1. Strive for a Single Source of Truth—As an administrator, there should be a single place where you manage information about a specific set of users or devices (e.g., Active Directory as the only user database). Everything else on the network should reference that source for its specific information. Maintaining multiple domains or a mix of LDAP and RADIUS users makes authentication complicated, and arguably makes your organization less secure, because maintaining these multiple sources is burdensome. Invest in doing one right and exclusively.
  2. Standardization—A tremendous amount of time savings can be found by eliminating one-off configurations, sites, situations, etc. An often overlooked part of this time savings is in consulting and contractor costs: the easier it is for an internal team to quickly identify a location, IDF, device, etc., the easier it will be for your hired guns as well. A system should be in place for IP address schemes, VLAN numbering, naming conventions, low voltage cabling, switch port usage, redundancy, etc.
  3. Configuration Management—Creating a plan for standardization is one thing; ensuring it gets executed is tougher. There are numerous tools that allow for template-based or script-based configuration (a small template sketch follows this list). If your organization is going to take the time to standardize the network, it is critical that it gets followed through on the configuration side. DevOps environments may turn to products like Chef, Puppet, or Ansible to help with this sort of management.
  4. Auditing and Accountability—Being proactive about policing these efforts is important, and to do that, some sort of accountability needs to be in place. This should happen in change control meetings to ensure changes are well thought out and meet the design standards. Safeguards should be in place to ensure the right people are making the changes, and that those changes can be tracked back to a specific person (no shared “admin” or “root” accounts!), so that all of the hard work put in to this point is actually maintained. New hires should be trained and indoctrinated in the system to ensure that they follow the process.
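As a small illustration of the template-based configuration mentioned in item 3, here is a minimal Python sketch using Jinja2. The config fragment and the naming convention are invented for the example; the point is simply that every switch gets rendered from the same standard template.

```python
# Minimal template-based configuration sketch: one template, many devices.
# The naming convention and config fragment are invented for illustration.
from jinja2 import Template

SWITCH_TEMPLATE = Template("""\
hostname {{ site }}-{{ idf }}-sw{{ unit }}
vlan {{ user_vlan }}
 name {{ site }}-users
ntp server {{ ntp_server }}
""")

def render_switch(site, idf, unit, user_vlan, ntp_server="10.0.0.10"):
    """Render a standardized switch config from the shared template."""
    return SWITCH_TEMPLATE.render(site=site, idf=idf, unit=unit,
                                  user_vlan=user_vlan, ntp_server=ntp_server)

print(render_switch("bos", "idf2", 1, user_vlan=120))
```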

 

Following these steps will simplify the network, increase visibility, speed troubleshooting, and even help security. What steps have you taken in your environment to simplify network management?  We’d love to hear it!

With data breaches and insider threats increasing, a vulnerable network can be an ideal entry point that puts sensitive data at risk. As a result, federal IT professionals, like yourself, need to worry not only about keeping people out, but about keeping those who are already in from doing damage.

 

But while you can’t completely isolate your network, you can certainly make sure that all access points are secure. To do so, you’ll need to concentrate on three things: devices, traffic, and planning.

 

Checkpoint 1: Monitor embedded and mobile devices

 

Although you may not know everything about what your HVAC or other systems with embedded devices are doing, you still need to do what you can to manage them. This means frequent monitoring and patching, which can be accomplished through network performance monitors and patch management systems. The former can give you detailed insight into fault, performance, security and overall network availability, while the latter can provide automated patching and vulnerability management.

 

According to a recent study by Lookout, mobile devices continue to be extremely prevalent in federal agencies, but an alarming number of them are unsanctioned devices that are being used in ways that could put information at risk. A staggering eighty percent of respondents in a SolarWinds survey believe that mobile devices pose some sort of threat to their agency’s security. But, you can gain control of the situation with user device tracking software, which can identify the devices that are accessing your network, alert you to rogue devices, and track them back to individual users.

 

Checkpoint 2: Keep an eye on network traffic

 

Network traffic analysis and bandwidth monitoring solutions can help you gain the visibility you may currently lack. You can closely monitor bandwidth and traffic patterns to identify any anomalies that can be addressed before they become threats. Bandwidth can be traced back to individual users so you can see who and what might be slowing down your network, and you can receive automated alerts to let you know of any red flags that might arise.

 

Checkpoint 3: Have a response plan in place

 

While it’s a downer to say you should always assume the worst, it’s sadly true. There’s a bright side, though! If you assume a breach is going to happen, you’re more likely to be well prepared when it does. If one has happened, you’ll be more likely to find it.

 

This will require developing a process for responding to attacks and identifying breaches. Begin by asking yourself, “given my current state, how quickly would I be able to identify and respond to an attack?” Follow that up with, “what tools do I have in place that will help me prevent and manage a breach?”

 

If you’re uncomfortable with the answers, it’s time to begin thinking through a solid, strategic approach to network security – and start deploying tools that will keep your data from walking out the front door.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.

When it comes to the technical aspects of PCI DSS, HIPAA, SOX, and other regulatory frameworks, the goals are often the same: to protect the privacy and security of sensitive data. But the motivators for businesses to comply with these regulatory schemes vary greatly.

Penalties for Noncompliance

 

| Regulatory Compliance Framework | Industry Scope | Year Established | Governing Body | Penalties |
| --- | --- | --- | --- | --- |
| PCI DSS (Payment Card Industry Data Security Standards) | Applies to any organization that accepts credit cards for payment | 2004 | Payment Card Industry Security Standards Council (PCI SSC)[1] | Fines up to $200,000/violation; censure from credit card transactions |
| HIPAA (Health Insurance Portability and Accountability Act)[2] | Applies to healthcare-related businesses deemed either covered entities or business associates by law | 1996 | The Department of Health and Human Services (HHS) Office for Civil Rights (OCR) | Up to $50,000 per record; maximum of $1.5M/year |
| SOX (Sarbanes-Oxley Act) | Applies to any publicly traded company | 2002 | The Securities and Exchange Commission (SEC) | Fines up to $5M; up to 20 years in prison |
| NCUA (National Credit Union Administration) | Applies to credit unions | 1934 (r. 2013) | NCUA is the federal agency assigned to enforce a broad range of consumer regulations that apply to federally chartered credit unions and, to a lesser degree, federally insured state-chartered credit unions.[3] | Dissolution of your credit union; civil money penalties |
| GLBA (Gramm-Leach-Bliley Act) | Applies to financial institutions that offer products or services to individuals, like loans, financial or investment advice, or insurance | 1999 | Federal Trade Commission (FTC) | $100,000 per violation; up to 5 years in prison |
| FISMA (Federal Information Security Management Act) | Applies to the federal government and companies with government contracts | 2002 | Office of Management and Budget (OMB), a child agency of the Executive Office of the President of the United States | Loss of federal funding; censure from future contracts |

 

 

This list represents only a fraction of the regulatory compliance frameworks that govern the use of information technology and the processes involved in maintaining the confidentiality, integrity, and availability of sensitive data of all types.

 

Yes, there are monetary fines for noncompliance or unlawful uses or disclosures of sensitive information – the chart above provides an overview of that – and for most, that alone offers plenty of incentive to comply. But beyond this, businesses should be aware of the many other consequences that can result from non-compliance or any other form of negligence that results in a breach.

 

Indirect Consequences of Noncompliance

 

Noncompliance, whether validated by audits or discovered as the result of a breach, can be devastating for a business. When a breach occurs, its impact often extends well beyond the fines and penalties levied by enforcement agencies. It can include the cost of detecting the root cause of a breach, remediating it, and notifying those affected. Further, the cost balloons when you factor in legal expenditures, business-related expenses, and the loss of revenue caused by a damaged brand reputation.

 

As if IT pros did not have enough to worry about these days, yes, unfortunately compliance too falls into their laps. But depending on the industries they serve and the types of data their business interacts with, what compliance actually entails can be quite different.

 

Regulatory Compliance and the Intersection with IT

 

Without a doubt, there are many aspects of data security standards and compliance regulations that overshadow everything from IT decision-making and purchasing, to configurations, and the policies and procedures a company must create and enforce to uphold this important task.

 

Organizations looking to comply with a particular regulatory framework must understand that no one solution, and no one vendor, can help prepare them for all aspects of compliance. It is important that IT professionals understand the objectives of every compliance framework they are subject to, and plan accordingly. 

 


[1] The PCI SSC was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. Participating organizations include merchants, payment card-issuing banks, processors, developers, and other vendors.

[2] The Health Information Technology for Economic and Clinical Health (HITECH) Act, which was enacted as part of the American Recovery and Reinvestment Act of 2009, prompted the adoption of Health Information Technology. This act is recognized as giving “teeth” to HIPAA as it established stricter requirements by establishing the Privacy, Security, and Breach Notification Rules, as well as stiffer penalties for violations. The HIPAA Omnibus Rule, which went into effect in 2013, further strengthened the OCR’s ability to enforce compliance, and clearly defined the responsibility of compliance for all parties that interact with electronic protected health information (ePHI).

[3] It is important to note that in the financial world, guidance from the Federal Financial Institutions Examination Council (FFIEC) to a bank is mandatory because the guidance specifies the standards that the examiner will use to evaluate the bank. Credit unions technically fall under a different regulator than banks; however, the National Credit Union Administration closely follows the FFIEC guidance.

 


For a year that started out so very slowly in terms of “The Big Exit,” 2016 has turned out to be quite compelling.

 

There have been very few infrastructure IPOs this year, but there have been some interesting acquisitions, and some truly significant changes in corporate structures. I will highlight a few, and maybe even posit an opinion or two.

 

The hyper-converged market is growing steadily, with new players appearing on a practically daily basis. Nutanix, who stated early on that their exit would be an initial public offering, has pulled back on their timeframe a couple of times recently. They are consistently viewed as the big dog in Hyper-Converged Infrastructure. With strong numbers, a loyal fanbase, and a team of salespeople, engineers, and SEs who've at times rabidly promoted their offerings, they come by their stature in this space quite rightly. These statements are not, from me, about the quality, comparison, or reflections on any of the competitors in this growing space. It does seem that the only thing really holding this company back from their desired exit is a marketplace shying away from IPOs; had they wanted to become an acquisition target, their brand could quite possibly have become part of some other organization's product line.

 

The massive companies in the space are not immune to these kinds of changes either. For example, after Dell decided to go private, and made that a reality, they set their sights on acquiring the largest storage vendor in the world. After what I'm sure have been long and arduous conversations, much negotiation, and quite a bit of oversight from the financial world, they've now made the acquisition of EMC a reality. We will likely not see for some time which companies stay in the newly created DellEMC, and which get sold off in order to make up some of the costs of the acquisition. The new company will be the largest technology company in the world and comprise many different services offerings, storage offerings, converged architectures, and more. It's a truly stunning acquisition, which will theoretically alter the landscape of the industry in profound ways. The future remains uncertain regarding Compellent, EqualLogic, Isilon, XtremIO, VNX, and VMAX, to name a few storage-only brands; Dell Professional Services, Virtustream, and even VMware could potentially be spun off. Although, I do suspect that VMware will remain a company doing business as it always has, a free spirit remaining a part of the architectural ecosystem, and beholden to none. VCE itself could change drastically, as we really don't know how this will affect the Cisco UCS/Nexus line.

 

I recently wrote a post on the huge and earth-shattering changes taking place at HPE as well. Under the guidance of Meg Whitman, HP has now split itself into two distinct companies. The consumer division (HP Inc.) comprises desktops, laptops, and printing, as well as other well-known HP brands, while servers, enterprise storage, Aruba networking, and the like become part of the other side (Hewlett Packard Enterprise). The transition, first launched at HP Discover 2015, has gone quite smoothly. Channel relationships have, if anything, grown stronger. From this man’s perspective, I am impressed. Now comes the recent announcement that the enterprise software business will be sold off to Micro Focus; the brand that used to market a version of COBOL, a global presence, will now be the owner of these major software releases. For my money, operations should be just fine. I don’t really see how it’ll change things. Certainly some of our contacts will change, but how smoothly this newest change will go remains to be seen.

 

Pure Storage, the last big IPO to transpire in the enterprise infrastructure space, has gone, for the most part, very well. These folks seem to have a great vision of where they want to head. They’ve successfully built a second storage platform essentially on the sly (FlashBlade), and meanwhile their numbers have been, on the whole, quite good. I’m very interested to see where things go with Pure. I do feel that they’ll handle their future with aplomb, continue to grow their market share, and create new products with an eye toward gaps and customer requirements. Their professional services group, as well as their marketing, have been standout performers within the industry. I also find it interesting that they have been turning the world orange and converting customers away from legacy storage brands to a new platform approach, while gracefully, aggressively, and competently taking on the needs of their customers even in use cases that hadn’t necessarily been core to their approach.

 

Finally, I’ll mention another key acquisition. NetApp, one of those stalwart legacy storage companies and at one time a great alternative to other monolithic storage vendors, had gotten stale. By their own admission, their reliance on an older architecture truly needed a shot in the arm. They achieved this by purchasing SolidFire. SolidFire, a very tightly focused storage player in the all-flash array market, was accomplishing some truly high-end numbers for a startup, going into data centers and service providers and replacing far larger players, meanwhile solving problems that in many cases had existed for years, or creating solutions for brand-new cloud-related issues. A truly impressive feat for such a lean startup. They’ve proven to be just the key that fits the lock. I’m very interested to see how smoothly this acquisition goes moving forward. I wonder how well that startup mentality will fuse with the attitudes of a much slower-moving culture, as NetApp had become. Will attrition force the rockstar development team to slow down, focusing on integration, or will they be allowed to continue along the path they’d cut over the previous few years, run as a separate entity amongst the rest of NetApp? I am curious as to whether some of the great SolidFire people will leave once the dust settles, or if it’s to their benefit to grow within this larger entity. The truth will prove itself.

 

The current crop of candidates looking to go public all seem to revolve around software-related cloud business models; companies like Twilio, Blue Apron, and Casper Mattresses appear to be the kind of contenders poised to make the leap. They seem to focus on software as their model. From a real IT perspective, I’ve heard Basho (a new platform for distributed databases), Code42 (the creators of CrashPlan), Dropbox (the ubiquitous cloud file share/storage location), and xMatters (a leader in the IoT landscape) mentioned as potential candidates for public offering.

 

As to mergers and acquisitions, there does seem to be a better appetite for companies like Acacia (in optical networking), Pandora (the streaming media company), Zynga (video games like FarmVille), and a couple of semiconductor firms: Macom and SunEdison.

 

Updates after writing and before posting: On Tuesday, September 20, Nutanix filed (again) for an IPO, setting the initial public offering at $209 million based on a corporate valuation of approximately $1.8 billion. Filing paperwork here. And the VMware vRealize Management Suite, as yet another piece of fallout from the DellEMC deal, has been sold off to Skyview Capital. I’m fairly confident that we’ll be seeing more and more changes in this shifting landscape in the very near future.

 

We are living in a time of change in tech business. What or who is next for companies like those I’ve highlighted? Who will come up with the next ground breaking tech? And, who’s next to set the world on fire with their huge business news?

There's been a long-standing "discussion" in the world of storage regarding snapshots and backups. Some people say that snapshots can replace backups, while others say that just can't be true. I side with the latter, but the latest industry developments are making me reconsider that stance.

 

What's a Backup?

 

A backup isn't just a copy of data. A backup has to be recoverable and reliable, and most snapshots just don't meet that criteria.

 

What does "recoverable" mean? Backups have to be indexed and searchable by common criteria like date, file name, location, file type, and so on. Ideally, you could also search by less-common criteria like owner, content, or department. But at the very least there should be a file-level index, and most snapshot tools don't even have this. It's hard to expect a block snapshot to include a file index, but most NAS systems don't have one either! That's just not a backup.

 

Then we have to think about reliability. The whole point of a backup is to protect your data. Snapshots can protect against deletion and corruption, but they don't do much if the datacenter catches on fire or a bug corrupts your storage array. And many snapshot systems don't "snap" frequently enough or keep enough copies long enough to protect against corruption very long. This is why storage nerds like me say "your backup should be on a different codebase and your archive in a different zip code."

 

Then there's the question of management. Most backup systems have "friendly" interfaces to schedule regular backup passes, set retention options, and execute restores. Many years ago, NetApp showed just how friendly a snapshot restore can be, but options for what to backup and when remain pretty scarce. Although backup software isn't known for having the friendliest interface, you usually have lots more options.

 

Snap-Based Backups

 

But array snapshots can be an important part of a backup environment, and many companies are headed in that direction.

 

Most of today's best backup products use snapshots as a data source, giving a consistent data set from which to read. And most of these products sport wide-reaching snapshot support, from storage array vendors to logical volume managers. This is one source of irritation when people claim that snapshots have nothing to do with backups - of course they do!

 

Some snapshot systems also work in concert with data replication solutions, moving data off-site automatically. I've enjoyed the speed boost of ZFS Send/Receive, for example, and have come to rely on it as part of my data protection strategy. This alleviates my "different zip code" concern, but I would prefer a "different codebase" as well. That's one thing I liked at this week's NetApp Insight show: A glimpse of Amazon S3 as a replication target.
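For readers who have not used it, ZFS send/receive is driven entirely from the command line, and it is easy to wrap in a scheduled job. The sketch below shows one way that might look in Python; the pool, dataset, and target host names are placeholders, and incremental sends and error handling are left out.

```python
# Minimal off-site replication sketch around ZFS send/receive. Dataset and
# target names are placeholders; incremental sends and retries are omitted.
import subprocess
from datetime import datetime

def snapshot_and_replicate(dataset, target_host, target_dataset):
    """Take a timestamped snapshot and stream it to a remote pool over SSH."""
    snap = "{}@{}".format(dataset, datetime.utcnow().strftime("%Y%m%d-%H%M%S"))
    subprocess.run(["zfs", "snapshot", snap], check=True)
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", target_host, "zfs", "receive", "-F", target_dataset],
        stdin=send.stdout, check=True,
    )
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed for {}".format(snap))

snapshot_and_replicate("tank/home", "backup.example.net", "vault/home")
```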

 

Then there are the snapshot-integrated "copy data management" products from Catalogic, Actifio, and (soon) NetApp. These index and manage the data, not just the snapshot. And they can do some very cool things besides backup, including test and development support.

 

Stephen's Stance

 

Snapshots aren't backups, but they can be a critical part of the backup environment. And, increasingly, companies are leveraging snapshot technology to make better backups.

 

I am Stephen Foskett and I love storage. You can find more writing like this at blog.fosketts.net, connect with me as @SFoskett on Twitter, and check out my Tech Field Day events.
